Investigating low-delay deep learning-based cultural image reconstruction

  • Special Issue Paper
  • Published in: Journal of Real-Time Image Processing

Abstract

Numerous cultural assets hold great historical and moral value, but degradation diminishes this value as their attractiveness is lost. The solution most heritage organizations and museums currently choose is to rely on the knowledge of art and history experts, in addition to curators, to recover and restore damaged assets. This process is labor-intensive, expensive, and often results in little more than an assumption about the damaged or missing region. In this work, we tackle the problem of completing missing regions in artwork through advanced deep learning and image reconstruction (inpainting) techniques. Our analysis of different image completion and reconstruction approaches shows that these methods suffer from limitations such as lengthy processing times and poor generalization when trained on multiple visual contexts. Most existing learning-based image completion and reconstruction techniques are trained on large datasets with the objective of recovering the original data distribution of the training samples. However, this distribution becomes more complex when the training data is diverse, making training difficult and reconstruction inefficient. In this paper, we present a clustering-based low-delay image completion and reconstruction approach that combines supervised and unsupervised learning to address these issues. We compare our technique to the current state of the art on a real-world dataset of artwork collected from various cultural institutions. Our approach is evaluated using statistical methods and a surveyed audience to interpret our results both objectively and subjectively.
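The abstract's core idea, partitioning a diverse training set into visually coherent clusters and routing each damaged image to a cluster-specific reconstruction model, can be illustrated with a minimal sketch. Everything below is a hypothetical simplification, not the authors' implementation: the k-means routine, the raw-pixel features, and the mean-fill "expert" are stand-ins for the paper's deep inpainting networks, chosen only to show how clustering keeps each expert's data distribution simple.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Pick the point farthest from all chosen centroids (avoids degenerate starts).
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[int(np.argmax(d))])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

class ClusterMeanInpainter:
    """Toy per-cluster 'expert': fills masked pixels with the cluster mean image.
    Stands in for a cluster-specific deep inpainting model."""
    def fit(self, images):
        self.mean_image = images.mean(axis=0)
        return self
    def inpaint(self, image, mask):
        out = image.copy()
        out[mask] = self.mean_image[mask]
        return out

# Toy dataset: two visual "styles" of 8x8 grayscale tiles (dark vs. bright).
rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.2, size=(50, 8, 8))
bright = rng.uniform(0.8, 1.0, size=(50, 8, 8))
images = np.concatenate([dark, bright])

# Unsupervised stage: cluster naive raw-pixel features, train one expert per cluster.
features = images.reshape(len(images), -1)
centroids, labels = kmeans(features, k=2)
experts = {j: ClusterMeanInpainter().fit(images[labels == j]) for j in range(2)}

# Inference: damage a bright tile, then route it to the nearest cluster's expert.
damaged = bright[0].copy()
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
damaged[mask] = 0.0
j = int(np.argmin(((damaged.reshape(-1) - centroids) ** 2).sum(axis=1)))
restored = experts[j].inpaint(damaged, mask)
print(restored[mask].mean())  # hole filled with bright-cluster statistics (~0.9)
```

Because each expert only ever sees one cluster, the distribution it must model is far narrower than that of the full collection, which is the intuition behind the reported gains in training stability and reconstruction latency.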



Acknowledgements

This publication was made possible by NPRP grant 9-181-1-036 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors (www.ceproqha.qa). The authors would also like to thank the Museum of Islamic Art (MIA), the MIA Multimedia team, Mr. Marc Pelletreau, the art curators, and the management staff of the Museum of Islamic Art, Doha, Qatar, for their help with data acquisition.

Author information

Corresponding author

Correspondence to Abdelhak Belhi.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

User Evaluation Forms

The evaluation form can be accessed via the following link: https://forms.gle/gQATiv4HeJhRWKLj9

The following pages show part of the evaluation forms:



About this article


Cite this article

Belhi, A., Al-Ali, A.K., Bouras, A. et al. Investigating low-delay deep learning-based cultural image reconstruction. J Real-Time Image Proc 17, 1911–1926 (2020). https://doi.org/10.1007/s11554-020-00975-y
