Abstract
For practical deep neural network design on mobile devices, it is essential to consider the constraints imposed by computational resources and inference latency in various applications. Among deep network acceleration approaches, pruning is a widely adopted practice that balances computational resource consumption against accuracy: unimportant connections can be removed either channel-wise or at the level of individual weights with minimal impact on model accuracy. Coarse-grained channel pruning instantly yields a significant latency reduction, while fine-grained weight pruning is more flexible in retaining accuracy. In this paper, we present a unified framework for Joint Channel pruning and Weight pruning, named JCW, which finds a better division of pruning effort between channels and weights. To fully optimize the trade-off between latency and accuracy, we further develop a tailored multi-objective evolutionary algorithm within the JCW framework, which enables a single search round to obtain accurate candidate architectures for various deployment requirements. Extensive experiments demonstrate that JCW achieves a better trade-off between latency and accuracy than previous state-of-the-art pruning methods on the ImageNet classification dataset.
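To make the joint search space concrete, below is a minimal, hypothetical sketch of how a joint channel/weight pruning candidate might be encoded and mutated in an evolutionary search. This is not the authors' implementation; the names (`Gene`, `random_gene`, `mutate`) and the choice sets are assumptions for illustration only.

```python
# Illustrative sketch only (not the authors' code): a hypothetical encoding
# of a joint channel/weight pruning candidate for an evolutionary search.
import random
from dataclasses import dataclass
from typing import List

CHANNEL_CHOICES = [0.25, 0.5, 0.75, 1.0]   # fraction of channels kept per layer
SPARSITY_CHOICES = [0.25, 0.5, 0.75, 1.0]  # fraction of non-zero weights kept
                                           # within the remaining channels

@dataclass
class Gene:
    """One candidate architecture: per-layer channel and weight settings."""
    channels: List[float]
    sparsity: List[float]

def random_gene(num_layers: int) -> Gene:
    """Sample a random point in the joint channel/weight search space."""
    return Gene(
        channels=[random.choice(CHANNEL_CHOICES) for _ in range(num_layers)],
        sparsity=[random.choice(SPARSITY_CHOICES) for _ in range(num_layers)],
    )

def mutate(gene: Gene, prob: float = 0.1) -> Gene:
    """Resample each layer's setting independently with probability `prob`."""
    return Gene(
        channels=[random.choice(CHANNEL_CHOICES) if random.random() < prob else c
                  for c in gene.channels],
        sparsity=[random.choice(SPARSITY_CHOICES) if random.random() < prob else s
                  for s in gene.sparsity],
    )

parent = random_gene(num_layers=20)
child = mutate(parent)
```

In such an encoding, each candidate fixes, per layer, how many channels survive (coarse-grained) and how many weights within the surviving channels remain non-zero (fine-grained), which is precisely the two-granularity trade-off the abstract describes.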
Notes
1. We use weight sparsity to denote the ratio of non-zero parameters within the remaining channels throughout the paper.
2. In MOO, the Pareto frontier is the set of solutions for which no objective can be further improved without degrading another objective; a code sketch illustrating this definition follows these notes.
3. Please refer to the Appendix for more results on this observation.
4. A more detailed derivation is given in the Appendix.
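As a companion to Note 2, here is a minimal sketch of extracting a Pareto frontier from candidates scored by (latency, error) pairs, assuming both objectives are minimized; `pareto_frontier` is an illustrative name, not the authors' API.

```python
# A minimal sketch of the Pareto-frontier definition in Note 2, assuming each
# candidate is scored by a (latency, error) pair and both are minimized.
from typing import List, Tuple

def pareto_frontier(candidates: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the candidates not dominated by any other candidate.

    A point dominates another if it is no worse in both objectives and
    strictly better in at least one.
    """
    frontier = []
    for i, (lat_i, err_i) in enumerate(candidates):
        dominated = any(
            lat_j <= lat_i and err_j <= err_i and (lat_j < lat_i or err_j < err_i)
            for j, (lat_j, err_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            frontier.append((lat_i, err_i))
    return frontier

# (12 ms, 30% error) and (20 ms, 25% error) are both Pareto-optimal;
# (22 ms, 31% error) is dominated by both and is filtered out.
print(pareto_frontier([(12, 0.30), (20, 0.25), (22, 0.31)]))
# -> [(12, 0.3), (20, 0.25)]
```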
Acknowledgements
This work was supported in part by the National Key Research and Development Program of China under Grant 2021ZD0201504, and in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant XDA27040300.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhao, T. et al. (2022). Multi-granularity Pruning for Model Acceleration on Mobile Devices. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13671. Springer, Cham. https://doi.org/10.1007/978-3-031-20083-0_29
DOI: https://doi.org/10.1007/978-3-031-20083-0_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20082-3
Online ISBN: 978-3-031-20083-0
eBook Packages: Computer Science, Computer Science (R0)