DOI: 10.1145/3577117.3577140
Research article

An Improved Defect Detection Algorithm for Industrial Products via Lightweight Convolutional Neural Network

Published: 25 February 2023

ABSTRACT

To address the heavy memory and computational demands of existing deep-learning-based visual inspection algorithms, this paper improves the structure of the convolutional neural network and proposes a lightweight defect detection algorithm for industrial products based on network pruning. The proposed algorithm uses residual connections to divide VGG-16 into residual modules, introduces a sparsity constraint with a penalty factor and an attenuation (decay) constraint on the weight matrices to measure the importance of each residual module, and prunes the modules of low importance, thereby greatly reducing the number of parameters to be learned in the deep residual network. Experiments show that the method retains the accuracy, precision, recall, and F1 score of the original network while greatly increasing training speed, meeting the real-time requirements of product appearance defect detection.
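The paper itself is not reproduced on this page, so as a rough illustration only, the PyTorch-style sketch below shows one common way to realize the idea the abstract describes: attach a learnable scalar gate to each residual module, train with an L1 sparsity penalty (the "penalty factor") and weight decay (the "attenuation constraint"), then prune modules whose gates shrink toward zero. The block structure, gate parameter, coefficient values, and pruning threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): importance-based pruning of
# residual modules via per-module gates, an L1 sparsity penalty, and weight decay.
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """A VGG-style conv block wrapped in a residual connection and a scalar gate."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.gate = nn.Parameter(torch.ones(1))  # per-module importance score

    def forward(self, x):
        return torch.relu(x + self.gate * self.body(x))

def sparsity_penalty(model, lam=1e-4):
    """L1 penalty on the module gates; gates driven near zero mark prunable modules."""
    return lam * sum(block.gate.abs().sum()
                     for block in model.modules()
                     if isinstance(block, GatedResidualBlock))

# Assumed training loop outline (weight_decay supplies the weight-matrix
# attenuation constraint; lambda and the 0.05 threshold are placeholder values):
#   model = nn.Sequential(*[GatedResidualBlock(64) for _ in range(8)])
#   optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)
#   loss = criterion(model(images), labels) + sparsity_penalty(model)
# After training, drop modules with |gate| < 0.05 and fine-tune the smaller network.
```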


Published in

ICAIP '22: Proceedings of the 6th International Conference on Advances in Image Processing
November 2022, 202 pages
ISBN: 9781450397155
DOI: 10.1145/3577117
Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 25 February 2023

