
Method for Enhancing High-Resolution Image Inpainting with Two-Stage Approach

Programming and Computer Software

Abstract

In recent years, the field of image inpainting has developed rapidly: learning-based approaches show impressive results in the task of filling missing parts of an image. However, most deep methods are strongly tied to the resolution of the images on which they were trained, and even a slight increase in resolution leads to serious artifacts and unsatisfactory filling quality. These methods are therefore unsuitable for interactive image processing. In this article, we propose a method that solves the problem of inpainting images of arbitrary size. We also describe a way to better restore texture fragments in the filled area: we propose to use information from neighboring pixels by shifting the original image in four directions. Moreover, this approach can work with existing inpainting models, making them almost resolution independent without the need for retraining.
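The abstract names only the core operation behind the texture-restoration idea: shifting the original image by a small amount in four directions so that each pixel in the filled area can draw on information from its neighbors. The sketch below is a minimal illustration of that shifting step in Python/NumPy, not the authors' implementation; the function name, the default one-pixel offset, and the edge-replication border handling are assumptions made for the example.

```python
import numpy as np

def four_direction_shifts(image: np.ndarray, offset: int = 1) -> dict:
    """Return four copies of `image`, each shifted by `offset` pixels
    up, down, left, and right. Border pixels are replicated so every
    view keeps the original H x W x C shape.

    The function name, default offset, and edge-replication policy
    are illustrative assumptions, not details taken from the paper.
    """
    h, w = image.shape[:2]
    # Pad by `offset` on every side so a shifted crop never leaves the array.
    padded = np.pad(image, ((offset, offset), (offset, offset), (0, 0)),
                    mode="edge")
    return {
        # "up" means each pixel sees the value `offset` rows below it, etc.
        "up":    padded[2 * offset:2 * offset + h, offset:offset + w],
        "down":  padded[0:h,                       offset:offset + w],
        "left":  padded[offset:offset + h,         2 * offset:2 * offset + w],
        "right": padded[offset:offset + h,         0:w],
    }

# Example: four shifted views of a dummy 256x256 RGB image.
shifts = four_direction_shifts(np.zeros((256, 256, 3), dtype=np.float32))
```

In the paper's setting, such shifted views would presumably be processed alongside the original image by the existing (frozen) inpainting model and combined inside the masked region; how that combination is performed is part of the proposed method and is not reproduced here.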






ACKNOWLEDGMENTS

This work was partially supported by the Russian Foundation for Basic Research under Grant 19-01-00785 a. Model training for this work employed the IBM Polus computing cluster of the Faculty of Computational Mathematics and Cybernetics at Moscow State University.

Author information


Corresponding authors

Correspondence to A. Moskalenko, M. Erofeev or D. Vatolin.


About this article


Cite this article

Moskalenko, A., Erofeev, M. & Vatolin, D. Method for Enhancing High-Resolution Image Inpainting with Two-Stage Approach. Program Comput Soft 47, 201–206 (2021). https://doi.org/10.1134/S0361768821030075
