
C3N: content-constrained convolutional network for mural image completion

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Ancient murals suffer from severe deterioration and often exhibit missing or distorted local regions. Such damage severely impairs visual appreciation and hampers the digital conservation of cultural heritage. Moreover, large mural datasets are unavailable because ancient murals are scarce. In this paper, we propose a novel content-constrained convolutional network (C3N) for mural image completion. The method employs frequency transformation to enable effective multi-scale feature fusion for image inpainting, taking both the spatial and frequency domains into account. Our network uses adaptive space-varying activation functions to correct feature maps across scales, together with dual-domain partial convolutions whose masks restrict computation to valid points; each mask is then updated for the next layer. This iterative process continues until the mask is completely filled, yielding the repaired image. The proposed method is validated on benchmark datasets against baseline methods. The experimental results demonstrate that it produces repaired mural images with fewer artifacts and generally outperforms state-of-the-art methods both quantitatively and qualitatively. The code and pretrained models are available at https://github.com/zhangyongqin/C3N.
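The paper's dual-domain formulation operates in both the spatial and frequency domains; the full implementation is in the linked repository. As a minimal sketch of the underlying spatial mechanism only, assuming a PyTorch setting, the following illustrates partial convolution with mask updating (the mechanism introduced by Liu et al., ECCV 2018, on which the abstract's masked convolution builds). The class and parameter names here are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Illustrative partial convolution: convolve only over valid
    (unmasked) pixels, renormalize by the local valid-pixel count,
    and update the mask for the next layer."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer(
            "weight_mask", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: 1 for valid pixels, 0 for holes; shape (N, 1, H, W).
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.weight_mask,
                                   stride=self.stride, padding=self.padding)
        # Zero out hole pixels so they contribute nothing to the sum.
        out = self.conv(x * mask)
        # Renormalize by the fraction of valid pixels in each window.
        window_size = self.weight_mask.numel()
        scale = window_size / valid_count.clamp(min=1.0)
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale + bias
        # A location becomes valid once any pixel in its window was valid.
        new_mask = (valid_count > 0).float()
        return out * new_mask, new_mask
```

Stacking such layers progressively shrinks the hole: each layer marks an output location valid once any input pixel in its receptive field is valid, so repeated application fills the mask and yields a complete image, as the abstract's iterative mask-update process describes.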


Data availability

The datasets used during the current study are from the public Places dataset (http://places2.csail.mit.edu/index.html) or available from the authors on reasonable request.

Notes

  1. https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting.


Acknowledgements

This work was supported by the Social Science Foundation of Shaanxi Province (Grant No. 2019H010), the National Social Science Foundation of China (Grant No. 20BKG031), the New Star of Youth Science and Technology of Shaanxi Province (Grant No. 2020KJXX-007), the Natural Science Basic Research Program of Shaanxi (Program No. 2019JM-103), the Open Research Fund of CAS Key Laboratory of Spectral Imaging Technology (Grant No. LSIT201920W), the National Natural Science Foundation of China (Grant No. 62173270), the Xi'an Key Laboratory of Intelligent Perception and Cultural Inheritance (Grant No. 2019219614SYS011CG033), the Key Research and Development Program of Shaanxi (Program No. 2021ZDLGY15-06), and the Program for Changjiang Scholars and Innovative Research Team in University (Grant No. IRT_17R87).

Author information

Corresponding author

Correspondence to Yongqin Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Peng, X., Zhao, H., Wang, X. et al. C3N: content-constrained convolutional network for mural image completion. Neural Comput & Applic 35, 1959–1970 (2023). https://doi.org/10.1007/s00521-022-07806-0

