Abstract
Deep neural networks (DNNs) have achieved great success in many fields, but their deployment on small devices is limited by their large model size and slow computation. Experiments have shown that these problems can be effectively alleviated by reasonable pruning. Pruning techniques fall into two categories: structured and unstructured. Compared with unstructured pruning, whose applicability is limited, structured pruning can greatly compress a model and accelerate computation under any framework, and therefore has broader applicability. A remaining problem in current structured pruning is that model accuracy drops too quickly once a large number of neurons have been deleted. To address this problem, we propose a new pruning method based on neuron similarity: the neurons are ranked by their weights; following the principle that network parameters change as training proceeds, an ensemble method is introduced to reconstruct the neuron ranking; and the more correlated neurons are deleted by comparing the current ranking with the cumulative ranking difference of the ensemble system. In experiments on an MLP model, the proposed method outperforms other pruning methods: it compresses the model by a factor of 10 while reducing accuracy by less than 1%.
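To make the procedure described above concrete, the following minimal sketch (Python with NumPy) illustrates one possible reading of the ranking-and-ensemble step. The function names, the L1-norm neuron score, and the choice to remove the neurons whose rank diverges least from the cumulative ranking are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of ranking-based neuron pruning; names and the exact
# selection rule are assumptions, not the paper's implementation.
import numpy as np

def neuron_scores(weight_matrix):
    """Score each neuron of a fully connected layer by the L1 norm of its
    incoming weights (one row of the weight matrix per neuron)."""
    return np.abs(weight_matrix).sum(axis=1)

def rank_neurons(scores):
    """Return each neuron's rank (0 = smallest score)."""
    ranks = np.empty(len(scores), dtype=int)
    ranks[np.argsort(scores)] = np.arange(len(scores))
    return ranks

def select_neurons_to_prune(rank_history, prune_ratio=0.5):
    """Compare the latest ranking with the cumulative (averaged) ranking
    collected over training checkpoints, and mark for removal the neurons
    whose rank changes the least -- taken here as the "more correlated",
    hence more redundant, neurons (an assumed interpretation)."""
    rank_history = np.asarray(rank_history)
    current = rank_history[-1]
    cumulative = rank_history.mean(axis=0)
    divergence = np.abs(current - cumulative)
    n_prune = int(len(current) * prune_ratio)
    return np.argsort(divergence)[:n_prune]  # indices of neurons to delete

# Toy usage: rankings recorded at three training checkpoints
# for a layer with 8 neurons and 16 inputs.
rng = np.random.default_rng(0)
rank_history = [rank_neurons(neuron_scores(rng.normal(size=(8, 16))))
                for _ in range(3)]
print(select_neurons_to_prune(rank_history, prune_ratio=0.25))
```

In a full pipeline, this selection would be followed by physically removing the chosen rows (and the corresponding columns of the next layer) and fine-tuning, which is what makes structured pruning usable under any framework.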
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Grant No. 61966011), Hainan University Education and Teaching Reform Research Project (Grant No. HDJWJG01), Key Research and Development Program of Hainan Province (Grant No. ZDYF2020033), Young Talents' Science and Technology Innovation Project of Hainan Association for Science and Technology (Grant No. QCXM202007), Hainan Provincial Natural Science Foundation of China (Grant No. 621RC612), and Hainan Provincial Natural Science Foundation of China (Grant No. 2019RC107).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Dong, S., Liu, X., Li, X., Xie, G., Tang, X. (2022). A Novel Pruning Method Based on Correlation Applied in Full-Connection Layer Neurons. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13339. Springer, Cham. https://doi.org/10.1007/978-3-031-06788-4_18
DOI: https://doi.org/10.1007/978-3-031-06788-4_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06787-7
Online ISBN: 978-3-031-06788-4