Abstract
Dunhuang murals are world-famous treasures of culture and art and are of great research value. However, this treasure of human art has become extremely fragile owing to environmental factors, natural disasters, and human destruction. The conservation and restoration of the Dunhuang murals has therefore become an urgent task, in which producing line drawings of the murals occupies most of the effort, and subsequent restoration work is guided by those drawings. Producing a line drawing of a Dunhuang mural can be regarded as an image edge detection task, one of the most fundamental problems underlying higher-level computer vision tasks; its aim is to accurately extract object contours and meaningful in-object edges from images. Although many CNN-based edge detection methods have made significant progress on natural images, they still produce thick edges and spurious non-edge responses when generating line drawings of Dunhuang murals. In this paper, we propose a novel edge detection method for generating line drawings of Dunhuang murals that addresses these problems. We first propose a novel loss function for edge detection, which effectively suppresses background noise pixels located near edges and enables the network to produce sharp edges. In addition, to take full advantage of hierarchical features at different scales, we introduce DCM and SAM on top of the bottom-up/top-down structure. Finally, experiments on the BIPED dataset and the Dunhuang murals dataset show that the proposed method generates richer and sharper edge maps.
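The abstract does not give the exact form of the proposed loss. For context, CNN edge detectors commonly start from a class-balanced binary cross-entropy, which reweights the rare edge pixels against the abundant background pixels; the sketch below is a minimal NumPy illustration of that baseline idea (the function name `balanced_bce` and its interface are our own, not the paper's):

```python
import numpy as np

def balanced_bce(pred, target, eps=1e-6):
    """Class-balanced binary cross-entropy, a common baseline loss for
    CNN edge detection where edge pixels are far rarer than background.

    pred:   predicted edge probabilities in (0, 1), any shape
    target: binary ground-truth edge map, same shape as pred
    """
    pred = np.clip(pred, eps, 1.0 - eps)     # avoid log(0)
    n_pos = target.sum()                      # number of edge pixels
    n_neg = target.size - n_pos               # number of background pixels
    beta = n_neg / (n_pos + n_neg)            # up-weight the rare edge class
    loss = -(beta * target * np.log(pred)
             + (1.0 - beta) * (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()
```

A plain (unweighted) cross-entropy lets the background class dominate the gradient, which is one reason detectors trained with it tend to produce thick, blurry edges; the balancing factor `beta` counteracts this, and the loss proposed in this paper additionally targets the background pixels immediately adjacent to true edges.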
Data Availability
Data openly available in a public repository.
Acknowledgements
This work was supported in part by the Gansu Provincial Department of Education University Teachers Innovation Fund Project (No. 2023B-056), the Fundamental Research Funds for the Central Universities (No. 31920230137, 31920220130, 31920230030, 31920220037), the Introduction of Talent Research Project of Northwest Minzu University (No. xbmuyjrc201904), the Gansu Provincial First-class Discipline Program of Northwest Minzu University (No. 11080305), the Leading Talent of the National Ethnic Affairs Commission (NEAC), the Young Talent of NEAC, and the Innovative Research Team of NEAC (2018) 98.
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
About this article
Cite this article
Wang, J., Li, J., Liu, W. et al. Dunhuang Mural Line Drawing Based on Multi-scale Feature Fusion and Sharp Edge Learning. Neural Process Lett 55, 10201–10214 (2023). https://doi.org/10.1007/s11063-023-11323-z