An additive feature fusion attention based on YOLO network for aircraft skin damage detection

Published in: The Journal of Supercomputing

Abstract

Aircraft skin damage detection is crucial for ensuring flight safety. This article presents an enhanced object detection algorithm tailored to scenarios with low image complexity. First, exploiting the low image complexity inherent in aircraft skin damage data, an additive feature fusion attention mechanism is proposed to improve the feature fusion in the YOLOv7 neck and reduce the model's computational cost and parameter count. Second, the Inner-CIoU loss is extended into a dynamic Inner-CIoU loss that replaces the original CIoU loss in YOLOv7. Then, to validate the proposed method, an Aircraft Skin Damage Dataset is constructed, covering five damage types with image backgrounds consisting solely of aircraft skin. Finally, experimental results demonstrate that the proposed additive feature fusion attention mechanism, designed for low-image-complexity scenarios such as the Aircraft Skin Damage Dataset, significantly reduces the number of model parameters without compromising accuracy. Compared with YOLOv7, the proposed method improves detection accuracy by 1.6% and reduces the number of parameters by 19.4%; the enhanced model is also compared with mainstream object detection models to illustrate the superiority of the improved algorithm. The Pascal VOC2007 Database and the Aluminum Profile Surface Detection Database serve as control groups to further validate the proposed additive feature fusion attention mechanism.
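As a rough illustration of why additive fusion can shrink a detection neck, consider the parameter count of the convolution that follows the fusion point: concatenating two C-channel feature maps doubles the input channels of the next convolution, whereas an attention-weighted sum keeps C channels. The accounting below is a hypothetical sketch (a 3×3 fusion convolution plus two 1×1 convolutions producing per-branch attention weights), not the paper's exact layer configuration:

```python
def conv_params(c_in, c_out, k=3):
    """Weight count of a k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def concat_fusion_params(c, k=3):
    # Concatenating two C-channel maps yields 2C channels, so the
    # following fusion convolution maps 2C -> C.
    return conv_params(2 * c, c, k)

def additive_fusion_params(c, k=3):
    # An attention-weighted sum keeps C channels (C -> C fusion conv),
    # plus two 1x1 convolutions that compute the per-branch weights.
    return conv_params(c, c, k) + 2 * conv_params(c, c, 1)

if __name__ == "__main__":
    c = 256  # example channel width
    print(concat_fusion_params(c))    # 1179648
    print(additive_fusion_params(c))  # 720896
```

Under these (illustrative) assumptions, additive fusion at a 256-channel fusion point needs roughly 39% fewer weights than concatenation, which is consistent in spirit with the reported 19.4% whole-model reduction.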
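The dynamic Inner-CIoU loss builds on Inner-IoU, which evaluates overlap on auxiliary boxes that share each box's center but have width and height scaled by a ratio (shrunk for ratio < 1, enlarged for ratio > 1). A minimal pure-Python sketch of that auxiliary-box IoU follows; the dynamic ratio schedule and the CIoU distance/aspect-ratio penalty terms are omitted, so this is an illustration of the core idea rather than the paper's full loss:

```python
def inner_iou(box1, box2, ratio=0.75):
    """IoU of auxiliary 'inner' boxes: same centers as the inputs,
    width/height scaled by `ratio`. Boxes are (x1, y1, x2, y2)."""
    def scaled(b):
        cx, cy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
        w, h = (b[2] - b[0]) * ratio, (b[3] - b[1]) * ratio
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    a, b = scaled(box1), scaled(box2)
    # Intersection of the two auxiliary boxes (clamped at zero).
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

For example, `inner_iou((0, 0, 10, 10), (0, 0, 10, 10))` is 1.0 regardless of the ratio, while shrinking the auxiliary boxes (ratio < 1) penalizes partial overlaps more sharply than plain IoU, which is the lever the dynamic variant adjusts during training.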


Figures 1–12 (available in the full article)


Data availability

No datasets were generated or analyzed during the current study.


Funding

This work was supported by the National Natural Science Foundation of China (52375557, 62173331, U2333205), the Science and Technology Program of Tianjin (24JCYBJC00160, 23JCYBJC00060), the Foundation of the Tianjin Educational Committee (2023KJ222), the Basic Science-Research Funds of National University (3122023PY06, 3122023044, 3122024052), and the Civil Aviation University of China Research Innovation Project for Postgraduate Students (2022YJS051).

Author information


Contributions

Jun Wu was involved in conceptualization, resources, writing—review & editing, investigation, supervision, project administration, and funding acquisition. Yajing Zhang was involved in software, formal analysis, data curation, and writing—original draft. Tengfei Shan was involved in conceptualization, methodology, software, validation, formal analysis, investigation, writing—original draft, data curation, and visualization. Zhiwei Xing was involved in resources and funding acquisition. Jiusheng Chen was involved in resources and funding acquisition. Runxia Guo was involved in resources and funding acquisition.

Corresponding author

Correspondence to Jun Wu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wu, J., Zhang, Y., Shan, T. et al. An additive feature fusion attention based on YOLO network for aircraft skin damage detection. J Supercomput 81, 627 (2025). https://doi.org/10.1007/s11227-025-07148-3
