
Dunhuang Mural Line Drawing Based on Multi-scale Feature Fusion and Sharp Edge Learning

  • Published in: Neural Processing Letters

Abstract

Dunhuang murals are treasures of world culture and art, world-famous and of great research value. However, this treasure of human art has become extremely fragile due to environmental factors, major natural disasters, and human destruction. The conservation and restoration of Dunhuang murals has therefore become an urgent task, in which producing line drawings accounts for most of the time, and subsequent restoration work is guided by these drawings. Generating line drawings of the Dunhuang murals can be regarded as an image edge detection task, one of the most fundamental problems underlying advanced computer vision tasks. Its aim is to accurately extract object contours and meaningful in-object edges from images. Although many CNN-based edge detection methods have made significant progress on natural images, they still produce thick edges and spurious non-edge responses when generating line drawings of Dunhuang murals. In this paper, we propose a novel edge detection method for generating line drawings of Dunhuang murals that addresses these problems. We first propose a novel loss function for edge detection that effectively suppresses background noise pixels located near edges and enables the network to produce sharp edges. In addition, to take full advantage of the hierarchical features at different scales, we introduce a DCM and a SAM on top of the bottom-up/top-down structure. Finally, experiments conducted on the BIPED dataset and the Dunhuang murals dataset show that the proposed method generates richer and sharper edge maps.
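The abstract does not give the exact form of the proposed loss, but its stated behaviour, class-balanced edge supervision that additionally suppresses background pixels lying near annotated edges, can be sketched as a weighted binary cross-entropy. Everything below is an illustrative assumption, not the paper's actual formulation: the `dilate` helper, the `near_w` down-weighting factor, and the HED-style class-balancing weights are all hypothetical.

```python
import numpy as np

def dilate(mask, r=2):
    # Hypothetical helper: square dilation of a boolean mask via shifts.
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def sharp_edge_loss(pred, gt, near_w=0.1, eps=1e-6):
    """Sketch of a class-balanced BCE that down-weights background
    pixels adjacent to edges (the 'confusing' near-edge pixels the
    abstract says the loss suppresses)."""
    pos = gt > 0.5
    neg = ~pos
    # HED-style balancing: weight the rare edge class by the
    # background fraction, and vice versa.
    beta = neg.sum() / gt.size
    alpha = pos.sum() / gt.size
    # Background pixels within r of an edge get an extra small weight,
    # discouraging thick, blurry responses around true edges.
    near = dilate(pos) & neg
    w = np.where(pos, beta, alpha)
    w = np.where(near, near_w * w, w)
    bce = -(gt * np.log(pred + eps) + (1.0 - gt) * np.log(1.0 - pred + eps))
    return float((w * bce).mean())
```

Under this sketch, a prediction that matches the ground truth closely scores a much lower loss than a uniformly uncertain one, while mistakes made in the near-edge band are penalised less than mistakes far from any edge.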


Data Availability

Data openly available in a public repository.


Acknowledgements

This work was supported in part by the Gansu Provincial Department of Education University Teachers Innovation Fund Project (No. 2023B-056), the Fundamental Research Funds for the Central Universities (Nos. 31920230137, 31920220130, 31920230030, 31920220037), the Introduction of Talent Research Project of Northwest Minzu University (No. xbmuyjrc201904), the Gansu Provincial First-class Discipline Program of Northwest Minzu University (No. 11080305), the Leading Talent of the National Ethnic Affairs Commission (NEAC), the Young Talent of NEAC, and the Innovative Research Team of NEAC (2018) 98.

Author information

Corresponding author

Correspondence to Shiqiang Du.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, J., Li, J., Liu, W. et al. Dunhuang Mural Line Drawing Based on Multi-scale Feature Fusion and Sharp Edge Learning. Neural Process Lett 55, 10201–10214 (2023). https://doi.org/10.1007/s11063-023-11323-z
