Design of deep learning accelerated algorithm for online recognition of industrial products defects

  • S.I. : Emergence in Human-like Intelligence towards Cyber-Physical Systems
Neural Computing and Applications

Abstract

Taking LED chip defects as the research object, this paper addresses a key requirement of LED chip defect recognition: training a deep model on a large data set demands an efficient and scalable parallel algorithm, yet parallel implementation across multiple machines is relatively difficult. The paper proposes a recognition method based on a convolutional neural network and improves the mini-batch stochastic gradient descent algorithm that is widely used in industry. The method overcomes a shortcoming of existing defect recognition algorithms, which require manually extracted features and heuristic design. It improves on traditional methods in three ways: (1) a "copy of model parameters" critical resource is added to reduce the time a GPU waits when requesting model parameters while the parameters are being updated; (2) a mini-batch distribution mechanism is designed, in which the training set consists of mini-batches randomly distributed to the GPU side and selected via the critical-resource variable p; (3) a fixed amount of memory for storing gradients is allocated on both the GPU side and the parameter-server side, and gradient propagation is scheduled by a "gradient distribution" thread. The experimental data show that the network's recognition rate for short-shot defects reaches 99.4%. In addition, experiments comparing the proposed method with a BP neural network show that its recognition rate is significantly better, so the method has good application prospects.
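Improvements (1)–(3) together describe an asynchronous mini-batch SGD scheme built around a parameter server. The following Python sketch illustrates the idea only; it is not the paper's implementation. All names (`ParameterServer`, `snapshot`, `push_gradient`, the toy gradient) are illustrative assumptions, GPUs are simulated with threads, and a `queue.Queue` stands in for the buffered gradient storage serviced by the "gradient distribution" thread.

```python
import threading
import queue
import random

class ParameterServer:
    """Illustrative sketch of the asynchronous update scheme described above."""

    def __init__(self, dim, lr=0.1):
        self.params = [0.0] * dim            # master copy of model parameters
        self.lock = threading.Lock()         # guards the critical resource
        self.lr = lr
        self.gradient_queue = queue.Queue()  # buffered gradients awaiting application

    def snapshot(self):
        # Workers read a *copy* of the parameters (improvement 1), so they
        # do not block while the master copy is being updated.
        with self.lock:
            return list(self.params)

    def push_gradient(self, grad):
        # Workers hand gradients to the "gradient distribution" thread
        # instead of updating the master copy themselves (improvement 3).
        self.gradient_queue.put(grad)

    def apply_gradients(self, n_expected):
        # Body of the "gradient distribution" thread: drain the queue and
        # apply each gradient to the master copy under the lock.
        for _ in range(n_expected):
            grad = self.gradient_queue.get()
            with self.lock:
                for i, g in enumerate(grad):
                    self.params[i] -= self.lr * g

def worker(ps, batches):
    # Each simulated GPU trains on its randomly assigned mini-batches
    # (improvement 2), using a possibly stale parameter snapshot.
    for batch in batches:
        w = ps.snapshot()
        # Toy gradient: pull the single parameter toward the batch mean.
        grad = [w[0] - sum(batch) / len(batch)]
        ps.push_gradient(grad)

random.seed(0)
data = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9], [0.9, 1.1]]
random.shuffle(data)  # random mini-batch distribution across workers

ps = ParameterServer(dim=1)
workers = [threading.Thread(target=worker, args=(ps, data[i::2])) for i in range(2)]
applier = threading.Thread(target=ps.apply_gradients, args=(len(data),))
applier.start()
for t in workers:
    t.start()
for t in workers:
    t.join()
applier.join()
# The parameter has been pulled from 0.0 toward the batch means (~1.0).
print(round(ps.params[0], 3))
```

Because the workers read snapshots rather than locking the master copy for the whole step, updates are asynchronous: a gradient may be computed from slightly stale parameters, which is the trade-off that removes the GPU waiting time the abstract targets.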



Author information

Corresponding author

Correspondence to Bin Li.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.


About this article


Cite this article

Shu, Y., Huang, Y. & Li, B. Design of deep learning accelerated algorithm for online recognition of industrial products defects. Neural Comput & Applic 31, 4527–4540 (2019). https://doi.org/10.1007/s00521-018-3511-4

