
A robust and efficient image de-fencing approach using conditional generative adversarial networks

Original Paper · Signal, Image and Video Processing

Abstract

Image de-fencing, a common need in recreational photography, aims to remove the fence texture present in an image and reconstruct an aesthetically pleasing, fence-free version of it. In this paper, we present an automated and effective technique for fence removal and image reconstruction using conditional generative adversarial networks (cGANs), which have been applied successfully to several other computer vision problems involving image generation and rendering. Our approach is a two-stage architecture with two cGANs in succession: the first generates a fence mask from the input fenced image, and the second generates the final de-fenced image from that input together with the fence mask produced by the first. The two networks are trained independently with suitable loss functions and, during deployment, are stacked end-to-end to generate the de-fenced image from an unseen test image. Extensive qualitative and quantitative evaluations on challenging data sets demonstrate the effectiveness of our approach over state-of-the-art de-fencing techniques. The data sets used in the experiments have also been made available for further comparison.
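To make the deployment-time stacking described above concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: `TwoStageDefencer`, the stand-in convolutional generators, and all tensor shapes are our own illustrative assumptions for how a stage-1 mask generator and a stage-2 de-fencing generator could be composed at test time.

```python
# Minimal sketch (assumed, not the paper's code) of stacking the two cGAN
# generators at deployment: stage 1 predicts a fence mask, stage 2 is
# conditioned on the input image plus that mask.
import torch
import torch.nn as nn

class TwoStageDefencer(nn.Module):
    def __init__(self, mask_gen: nn.Module, defence_gen: nn.Module):
        super().__init__()
        self.mask_gen = mask_gen        # fenced RGB image -> 1-channel fence mask
        self.defence_gen = defence_gen  # RGB image + mask -> de-fenced RGB image

    @torch.no_grad()
    def forward(self, fenced: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_gen(fenced))      # fence mask in [0, 1]
        conditioned = torch.cat([fenced, mask], dim=1)   # stack along channel axis
        return self.defence_gen(conditioned)

# Toy stand-ins so the sketch runs end to end; the real generators would be
# trained encoder-decoder cGAN generators loaded from the two training stages.
mask_gen = nn.Conv2d(3, 1, kernel_size=3, padding=1)
defence_gen = nn.Conv2d(4, 3, kernel_size=3, padding=1)
model = TwoStageDefencer(mask_gen, defence_gen).eval()

defenced = model(torch.rand(1, 3, 256, 256))  # -> (1, 3, 256, 256)
```

Because the two generators are trained independently, this wrapper only composes already-trained weights; no joint fine-tuning is implied.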


Notes

  1. Appropriate values for \(C_1\) and \(C_2\) for computing the SSIM loss can be found at https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/dssim.py.
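For reference, the sketch below illustrates where \(C_1\) and \(C_2\) enter the SSIM computation, using the commonly used constants \(C_1 = (0.01L)^2\) and \(C_2 = (0.03L)^2\) for dynamic range \(L\). It computes SSIM over global image statistics for brevity, whereas the linked keras-contrib DSSIM loss uses local windows, so treat it as an illustration of the formula rather than a drop-in replacement.

```python
# Simplified SSIM over global statistics; C1 and C2 stabilize the ratio
# when means or variances are near zero.
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def dssim_loss(x: np.ndarray, y: np.ndarray) -> float:
    # Structural dissimilarity, the form typically minimized as a loss.
    return (1.0 - ssim(x, y)) / 2.0
```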


Acknowledgements

We acknowledge NVIDIA for supporting our research with a TITAN Xp graphics processing unit. Our sincere gratitude goes to Dr. V. Maurya for helping us implement his team's work, which we used in the comparative study. We also thank everyone who contributed to the mean opinion scores (MOS) reported in Table 2.

Author information


Corresponding author

Correspondence to Pratik Chattopadhyay.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Gupta, D., Jain, S., Tripathi, U. et al. A robust and efficient image de-fencing approach using conditional generative adversarial networks. SIViP 15, 297–305 (2021). https://doi.org/10.1007/s11760-020-01749-6

