
LE–MSFE–DDNet: a defect detection network based on low-light enhancement and multi-scale feature extraction


Abstract

Surface defect detection for industrial products has become a promising research area. Most existing CNN-based defect detection methods perform well under ideal experimental conditions; in practice, however, detection accuracy is easily degraded by varying lighting conditions and inconsistent defect scales, so general deep learning methods struggle with defect detection in complex scenes. This paper proposes a defect detection network based on low-light enhancement and multi-scale feature extraction (LE–MSFE–DDNet). The network contains two blocks: a low-light enhancement block and an SE-FP block. The low-light enhancement block applies a deep network to improve the light adaptation of the decomposed low-light feature map, weakening the influence of inconsistent illumination. The SE-FP block combines inter-channel dependencies with multi-scale feature extraction, allowing defects of different scales to be located accurately. In addition, a Fine Cans Defect dataset of fine-can surfaces is collected in this work to verify the feasibility of the proposed network. Compared with state-of-the-art object detection networks, the proposed method achieves 94.3% average accuracy on the Fine Cans Defect dataset, and the experimental results show that it outperforms state-of-the-art methods for surface defect detection.
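The SE-FP block described above pairs inter-channel dependencies with multi-scale feature extraction. The abstract does not specify its exact architecture; as an illustration, a minimal NumPy sketch of the squeeze-and-excitation channel recalibration that such a block builds on (all weights, shapes, and the reduction ratio here are illustrative assumptions, not the authors' implementation) might look like:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel attention in the style of a squeeze-and-excitation block.

    feature_map: (C, H, W) array; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights with reduction ratio r (hypothetical values below).
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feature_map.mean(axis=(1, 2))              # (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating per channel.
    s = np.maximum(w1 @ z, 0.0)                    # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # (C,) values in (0, 1)
    # Recalibrate: scale each channel map by its learned importance.
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))   # toy feature map, C=16
w1 = rng.standard_normal((4, 16)) * 0.1   # reduction ratio r=4
w2 = rng.standard_normal((16, 4)) * 0.1
y = squeeze_excite(x, w1, w2)
```

In a full detection network, the recalibrated feature map would then feed a feature-pyramid-style multi-scale head; here the sigmoid gate simply reweights each channel by its pooled response.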


Data availability

The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

This work was supported by the R&D projects in key areas of Guangdong Province (2018B010109007), the National Natural Science Foundation of Guangdong Joint Funds (U1801263, U1701262, U2001201), the Natural Science Foundation of Guangdong Province (2020A1515010890), and the science and technology plan projects of Guangdong Province (2017B090901019, 2016B010127005). This work was also supported by the Guangdong Provincial Key Laboratory of Cyber-Physical System (2020B1212060069).

Author information


Contributions

TW contributed to the conception of the study; WH performed the experiments; WH and YW contributed significantly to the analysis and manuscript preparation; WH performed the data analyses and wrote the manuscript; and WH, TW, YW, ZC, and GH helped perform the analysis through constructive discussions.

Corresponding author

Correspondence to Tao Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Hu, W., Wang, T., Wang, Y. et al. LE–MSFE–DDNet: a defect detection network based on low-light enhancement and multi-scale feature extraction. Vis Comput 38, 3731–3745 (2022). https://doi.org/10.1007/s00371-021-02210-6
