Abstract
Image completion has long been an important research area in image processing, and with the development of deep learning models in recent years, further progress has been made in image inpainting. In this paper, we focus on realistic and painted portrait data, study semantic inpainting techniques based on regional completion, and propose an improved generative translation model. Through a context generation network and an image discriminator network, a patch image is generated that keeps the filled hole consistent with the surrounding area. The completed region is then processed by a style translation network according to the scene structure of the image, ensuring consistency between the generated area and the whole image so that the repaired part better matches the style, texture, and structure of the artwork. Experiments show that our method achieves good results in completing realistic and painted portraits, and it also provides a reference for the restoration and identification of artworks.
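The pipeline described above composites a generator's output into the missing region while leaving the known pixels untouched, before the style translation stage harmonizes the result. A minimal sketch of that compositing step, with a binary hole mask, is shown below; all names and the toy data are hypothetical illustrations, not the paper's actual code or trained networks.

```python
import numpy as np

def composite_completion(image, generated, mask):
    """Blend a generator's output into the masked hole.

    mask == 1 marks known pixels that are kept from the input;
    mask == 0 marks the hole, which is filled from the generator.
    """
    return mask * image + (1.0 - mask) * generated

# Toy 4x4 grayscale example with a 2x2 hole in the centre.
image = np.ones((4, 4))            # known image content
mask = np.ones((4, 4))
mask[1:3, 1:3] = 0.0               # the missing region to inpaint
generated = np.full((4, 4), 0.5)   # stand-in for the generator's output

completed = composite_completion(image, generated, mask)
```

In the full model the `generated` array would come from the context generation network, and the discriminator and style translation network would then enforce consistency of the composite with the surrounding artwork.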
Acknowledgements
This work is supported by the open funding project of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (Grant No. BUAA-VR-16KF-18), and by the Construction of Scientific and Technological Innovation and Service Capability - Basic Scientific Research Funding Project (Grant No. PXM2018_014213_000033), Beijing Technology and Business University.
Cite this article
Liu, R., Yang, R., Li, S. et al. Painting completion with generative translation models. Multimed Tools Appl 79, 14375–14388 (2020). https://doi.org/10.1007/s11042-018-6761-3