Abstract
In this paper, we introduce MuralRescue, a method for blind mural restoration that progressively restores and enhances the quality of Dunhuang mural images by integrating damaged-area segmentation, inpainting, and super-resolution. For damage segmentation, we adopt a SAM-Adapter to adapt the Segment Anything Model (SAM) and improve its performance on mural damage. Specifically, an adapter module consisting of two MLP layers is used to fine-tune SAM, increasing the accuracy with which mural cracks are segmented. Extensive experiments confirm the adapter's effectiveness at detecting small targets and fine-grained cracks. Furthermore, by feeding the detected crack regions into the restoration stage, we substantially improve performance on blind, reference-free image restoration.
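The abstract describes the adapter only at a high level, so the following is a minimal sketch of one plausible way to attach a two-layer MLP adapter to a frozen SAM image encoder. The bottleneck ratio, GELU activation, residual connection, and the `image_encoder.blocks` attribute are assumptions for illustration; the paper only states that the adapter contains two MLP layers and is used to fine-tune SAM.

```python
import torch
import torch.nn as nn


class MLPAdapter(nn.Module):
    """Two-layer MLP adapter (bottleneck). Hidden ratio, GELU, and the
    residual connection are assumptions, not details from the paper."""

    def __init__(self, dim: int, hidden_ratio: float = 0.25):
        super().__init__()
        hidden_dim = int(dim * hidden_ratio)
        self.down = nn.Linear(dim, hidden_dim)  # first MLP layer (down-projection)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden_dim, dim)    # second MLP layer (up-projection)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen SAM features intact while the
        # adapter learns a task-specific correction for mural cracks.
        return x + self.up(self.act(self.down(x)))


def attach_adapters(image_encoder: nn.Module, dim: int) -> nn.Module:
    """Freeze the SAM image encoder and append a trainable adapter to each
    transformer block. Assumes the encoder exposes its blocks as
    `image_encoder.blocks`, as in the public SAM ViT implementation."""
    for p in image_encoder.parameters():
        p.requires_grad = False  # only the adapters are fine-tuned

    for i, block in enumerate(image_encoder.blocks):
        adapter = MLPAdapter(dim)
        original_forward = block.forward

        def forward_with_adapter(x, _orig=original_forward, _adapter=adapter):
            return _adapter(_orig(x))  # adapt the block output

        block.forward = forward_with_adapter
        image_encoder.add_module(f"adapter_{i}", adapter)  # register trainable params

    return image_encoder
```

In this sketch only the adapter parameters remain trainable, which reflects the abstract's description of fine-tuning SAM through a lightweight adapter rather than updating the full encoder.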
Acknowledgments
This study was funded by the National Natural Science Foundation of China (grant numbers 52274160 and 51874300) and by the Jiangsu Province "Jiangsu Distinguished Professor" project (grant number 140923070).
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this paper.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Xu, Z. et al. (2024). MuralRescue: Advancing Blind Mural Restoration via SAM-Adapter Enhanced Damage Segmentation and Integrated Restoration Techniques. In: Huang, DS., Zhang, C., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14868. Springer, Singapore. https://doi.org/10.1007/978-981-97-5600-1_40
DOI: https://doi.org/10.1007/978-981-97-5600-1_40
Published:
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5599-8
Online ISBN: 978-981-97-5600-1
eBook Packages: Computer Science, Computer Science (R0)