Abstract
Over-parameterized neural networks achieve strong performance, but training such networks is computationally expensive. Pruning at initialization (PaI), which avoids training a full network, has therefore attracted intense interest. At high compression ratios, however, layer collapse severely degrades the performance of PaI. Existing methods introduce operations such as iterative pruning to alleviate layer collapse, but these operations incur additional computation and memory costs. In this paper, we focus on alleviating layer collapse without increasing cost. To this end, we propose an efficient strategy called parameter threshold compensation, which enforces a lower limit on the number of parameters retained in each layer and uses parameter transfer to compensate layers that fall below it. To promote a more balanced transfer of parameters, we further propose a parameter preservation strategy that uses the average number of preserved parameters to more strongly constrain the layers that give up parameters. We conduct extensive experiments with five pruning methods on the CIFAR-10 and CIFAR-100 datasets using the VGG16 and ResNet18 architectures, verifying the effectiveness of our strategy. Furthermore, we compare the improved performance with two state-of-the-art (SOTA) methods. The comparison shows that our strategy achieves similar performance, challenging the design of increasingly complex pruning strategies.
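The abstract describes the mechanism only at a high level. As a rough illustration (not the authors' algorithm), the sketch below rebalances a hypothetical per-layer keep budget: the function name `compensate_and_preserve`, the `min_ratio` floor, and the proportional transfer rule are assumptions introduced here for clarity.

```python
import numpy as np

def compensate_and_preserve(keep_counts, layer_sizes, min_ratio=0.01):
    """Hypothetical sketch: rebalance a per-layer keep budget so no layer collapses.

    keep_counts[i] : parameters a PaI criterion would keep in layer i
    layer_sizes[i] : total parameters in layer i
    min_ratio      : assumed lower bound on the fraction kept per layer
    """
    keep = np.asarray(keep_counts, dtype=np.float64)
    sizes = np.asarray(layer_sizes, dtype=np.float64)

    # Threshold compensation: enforce a lower limit on each layer's kept
    # parameters and record how many parameters the collapsing layers need.
    floor = np.ceil(min_ratio * sizes)
    deficit = np.maximum(floor - keep, 0.0)
    keep = np.maximum(keep, floor)

    # Preservation: draw the transferred parameters only from layers that keep
    # more than the average, in proportion to their surplus, so no single
    # donor layer is drained.
    avg = keep.mean()
    surplus = np.where(keep > avg, keep - avg, 0.0)
    need = deficit.sum()
    if need > 0 and surplus.sum() > 0:
        take = np.minimum(surplus * need / surplus.sum(), surplus)
        keep -= take

    return np.round(keep).astype(np.int64)  # total budget stays roughly constant

# Example: the last layer would collapse (0 kept); the strategy restores a floor
# for it and charges the cost to the layers above the average keep count.
print(compensate_and_preserve([5000, 3000, 120, 0], [60000, 40000, 12000, 8000]))
```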
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China (No. 61831005).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Hao, X. et al. (2024). PTCP: Alleviate Layer Collapse in Pruning at Initialization via Parameter Threshold Compensation and Preservation. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1965. Springer, Singapore. https://doi.org/10.1007/978-981-99-8145-8_4
DOI: https://doi.org/10.1007/978-981-99-8145-8_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8144-1
Online ISBN: 978-981-99-8145-8
eBook Packages: Computer Science, Computer Science (R0)