Painting completion with generative translation models

Abstract

Image completion has long been an important research area in image processing, and with the rapid development of deep learning models in recent years, further progress has been made in image restoration. In this paper, we focus on realistic and painted portrait data, study semantic inpainting techniques based on regional completion, and propose an improved generative translation model. A context generation network and an image discriminator network together generate a patch image that keeps the filled hole consistent with its surrounding area. The completed region is then processed by a style translation network according to the scene structure of the image, ensuring consistency between the generated area and the whole image, so that the repaired part better matches the style, texture, and structure of the artwork. Experiments show that our method achieves good results in the completion of realistic and painted portraits, and it also provides a reference for the restoration and identification of artworks.
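As a concrete illustration of the two-stage pipeline described above, the sketch below pairs a context generator with a patch discriminator for the completion stage, then hands the composited result to a style translation network. This is a minimal PyTorch sketch under assumed architectures: the layer shapes, the hinge adversarial loss, the weight `lam`, and the `train_step` and `style_net` names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the two-stage completion pipeline (assumed shapes/losses).
import torch
import torch.nn as nn

class ContextGenerator(nn.Module):
    """Fills the masked hole conditioned on the surrounding context."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 4, stride=2, padding=1),   # RGB + mask channel
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, mask):
        # Zero out the hole, append the mask so the network knows where it is.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Scores local patches as real or completed."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1),  # patch-level logits
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, style_net, opt_g, opt_d, image, mask, lam=10.0):
    """One hypothetical training step: adversarial + masked reconstruction."""
    fake = gen(image, mask)
    composite = image * (1 - mask) + fake * mask  # keep known pixels intact

    # Discriminator: hinge loss on real images vs. completed composites.
    opt_d.zero_grad()
    loss_d = (torch.relu(1 - disc(image)).mean()
              + torch.relu(1 + disc(composite.detach())).mean())
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator and reconstruct inside the hole.
    opt_g.zero_grad()
    loss_g = -disc(composite).mean() + lam * ((fake - image).abs() * mask).mean()
    loss_g.backward()
    opt_g.step()

    # Second stage: restyle the composite to match the painting's texture.
    return style_net(composite)
```

In practice the style translation stage would be trained separately on paintings of the target style (for example with a cycle-consistent objective) and applied to the completed composite, so the filled region inherits the brushwork and texture of the surrounding artwork.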



Acknowledgements

This work was supported by the open funding project of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (Grant No. BUAA-VR-16KF-18), and by the Construction of Scientific and Technological Innovation and Service Capability - Basic Scientific Research Funding Project (Grant No. PXM2018_014213_000033), Beijing Technology and Business University.

Author information

Corresponding author

Correspondence to Ruijun Liu.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Liu, R., Yang, R., Li, S. et al. Painting completion with generative translation models. Multimed Tools Appl 79, 14375–14388 (2020). https://doi.org/10.1007/s11042-018-6761-3

