A Novel Pruning Method Based on Correlation Applied in Full-Connection Layer Neurons

  • Conference paper
  • In: Artificial Intelligence and Security (ICAIS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13339)

Abstract

Deep neural networks (DNNs) have achieved great success in various fields, but their deployment on many small devices is limited by huge model structures and low computing speed. Experiments have shown that reasonable pruning methods can effectively solve the problems of large models and slow computation. Pruning techniques can be divided into structured pruning and unstructured pruning. Compared with unstructured pruning, whose applications are limited, structured pruning can greatly compress the model and increase the computation speed under any framework, and therefore has stronger applicability. Current structured pruning, however, suffers from a rapid drop in model accuracy once a large number of neurons are deleted. To address this problem, we propose a new pruning method based on neuron similarity: the weights of neurons are sorted, and, following the principle that network parameters change with training, an integration (ensemble) method is introduced to reconstruct the neuron ranking; the more correlated neurons are then deleted by comparing the current ranking with the cumulative ranking difference of the integration system. In experiments on an MLP model, the method shows its superiority over other pruning methods: it compresses the model by a factor of 10 while reducing accuracy by less than 1%.
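The abstract describes the method only at a high level (weight sorting, an ensemble-based ranking, deletion of highly correlated neurons). As a rough, hypothetical sketch of the general idea of similarity-based neuron pruning in a fully connected layer, the following Python/NumPy code repeatedly removes one neuron of the most correlated pair and folds its outgoing weights into the survivor. The function name, the merge step, and the keep_ratio parameter are illustrative assumptions, not the authors' algorithm; in particular, the paper's criterion based on cumulative ranking differences over training is not reproduced here.

    import numpy as np

    def prune_correlated_neurons(W_in, W_out, keep_ratio=0.5):
        """Illustrative similarity-based pruning of one fully connected layer.

        W_in  : (n_neurons, n_inputs)  incoming weight rows, one per neuron
        W_out : (n_next, n_neurons)    outgoing weights feeding the next layer
        """
        n_target = max(1, int(W_in.shape[0] * keep_ratio))
        W_in, W_out = W_in.astype(float).copy(), W_out.astype(float).copy()
        while W_in.shape[0] > n_target:
            # Pairwise correlation between the weight vectors of all neurons.
            corr = np.corrcoef(W_in)
            corr = np.nan_to_num(corr, nan=-np.inf)  # guard constant rows
            np.fill_diagonal(corr, -np.inf)          # ignore self-correlation
            # Pick the most correlated (most redundant) pair of neurons.
            i, j = np.unravel_index(np.argmax(corr), corr.shape)
            # Fold neuron j's outgoing weights into neuron i, then delete j,
            # so the next layer approximately keeps its pre-pruning input.
            W_out[:, i] += W_out[:, j]
            W_in = np.delete(W_in, j, axis=0)
            W_out = np.delete(W_out, j, axis=1)
        return W_in, W_out

    # Example: prune a 100-neuron hidden layer to 10 neurons (10x compression).
    rng = np.random.default_rng(0)
    W_hidden = rng.normal(size=(100, 784))   # e.g. an MNIST-sized MLP layer
    W_output = rng.normal(size=(10, 100))
    W_h, W_o = prune_correlated_neurons(W_hidden, W_output, keep_ratio=0.1)
    print(W_h.shape, W_o.shape)              # (10, 784) (10, 10)

Merging the removed neuron's outgoing weights into its most similar survivor, rather than simply zeroing them, follows the spirit of data-free pruning (Srinivas and Babu); in the paper's method, the single-shot correlation test above would be replaced by the ranking-difference criterion accumulated across training.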



Acknowledgement

This work is supported by the National Natural Science Foundation of China (Grant No. 61966011), the Hainan University Education and Teaching Reform Research Project (Grant No. HDJWJG01), the Key Research and Development Program of Hainan Province (Grant No. ZDYF2020033), the Young Talents' Science and Technology Innovation Project of the Hainan Association for Science and Technology (Grant No. QCXM202007), and the Hainan Provincial Natural Science Foundation of China (Grant Nos. 621RC612 and 2019RC107).

Author information


Corresponding author

Correspondence to Xiaozhang Liu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dong, S., Liu, X., Li, X., Xie, G., Tang, X. (2022). A Novel Pruning Method Based on Correlation Applied in Full-Connection Layer Neurons. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13339. Springer, Cham. https://doi.org/10.1007/978-3-031-06788-4_18

  • DOI: https://doi.org/10.1007/978-3-031-06788-4_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06787-7

  • Online ISBN: 978-3-031-06788-4

  • eBook Packages: Computer Science, Computer Science (R0)
