Channel pruning method driven by similarity of feature extraction capability

Soft Computing

Abstract

Channel pruning is a method for compressing convolutional neural networks that can significantly reduce the number of model parameters and the amount of computation. Current methods that focus on a model's internal parameters and feature-map information rely on hand-crafted a priori criteria or characterize filters through partial feature maps; they lack the ability to analyse and discriminate the feature extraction capability of channels and overlook the underlying reasons why channels are similar. This study developed a pruning method based on similar structural features of channels, called SSF. The method focuses on analysing the ability of channels to extract similar features and on exploring the characteristics of channels that produce similar feature maps. First, adaptive threshold coding was introduced to numerically transform channel characteristics into structural features, such that channels with similar coding results generate highly similar feature maps. Second, spatial distances were calculated on the structural feature matrix to obtain the similarity between channels; moreover, to keep a rich set of channel classes in the pruned network, channels were partitioned into classes on the basis of similarity and some channels within each class were removed at random. Third, because the overall similarity differs across layers, an appropriate pruning ratio was determined for each layer on the basis of the channel dispersion reflected by the similarity. Finally, extensive experiments were conducted on image classification tasks, and the results demonstrate the superiority of SSF over many existing techniques. On ILSVRC-2012, SSF reduced the floating-point operations (FLOPs) of ResNet-50 by 57.70% while reducing the Top-1 accuracy by only 1.01%.
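
To make the pipeline in the abstract concrete, the following is a minimal NumPy sketch of the four steps as described: adaptive threshold coding of filter weights into structural features, pairwise spatial distance as channel similarity, random removal within similarity classes, and a dispersion-driven per-layer pruning ratio. The specific choices below (the per-filter mean as the threshold, Euclidean distance, the dispersion normalization, and the redundancy score) are illustrative assumptions, not the authors' implementation; the official code is in the GitHub repository listed under Data availability.

import numpy as np

def structural_features(filters):
    # Adaptive threshold coding (illustrative): binarize each filter's weights
    # against that filter's own mean, giving one structural-feature vector per channel.
    flat = filters.reshape(filters.shape[0], -1)            # (channels, k*k*c_in)
    thresholds = flat.mean(axis=1, keepdims=True)           # assumed adaptive threshold
    return (flat >= thresholds).astype(np.float32)

def pairwise_distance(features):
    # Spatial (Euclidean) distance between every pair of structural-feature vectors;
    # a small distance means the two channels tend to produce similar feature maps.
    diff = features[:, None, :] - features[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def layer_pruning_ratio(dist, feature_dim, base_ratio=0.6):
    # Dispersion-aware layer ratio (assumed form): layers whose channels sit close
    # together (low normalized mean distance) tolerate heavier pruning.
    iu = np.triu_indices(dist.shape[0], k=1)
    dispersion = dist[iu].mean() / np.sqrt(feature_dim)     # in [0, 1] for binary codes
    return float(np.clip(base_ratio * (1.0 - dispersion), 0.0, 0.9))

def channels_to_prune(dist, ratio, rng):
    # Similarity-class removal (assumed form): score each channel by how many
    # near-duplicates it has, then drop the most redundant ones with random tie-breaking.
    n = dist.shape[0]
    n_prune = int(round(ratio * n))
    median_dist = np.median(dist[np.triu_indices(n, k=1)])
    redundancy = (dist < median_dist).sum(axis=1) - 1       # exclude the channel itself
    order = np.argsort(-(redundancy + rng.uniform(0.0, 0.5, size=n)))
    return sorted(order[:n_prune].tolist())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(64, 16, 3, 3))               # toy conv layer: 64 filters
    feat = structural_features(weights)
    dist = pairwise_distance(feat)
    ratio = layer_pruning_ratio(dist, feat.shape[1])
    pruned = channels_to_prune(dist, ratio, rng)
    print(f"pruning ratio {ratio:.2f}: removing {len(pruned)} of {weights.shape[0]} channels")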


Data availability

The datasets that support the findings of this study are available in Krizhevsky (2009) and Russakovsky et al. (2015). Our code is available at https://github.com/sunchuanmeng/SSF_Pruning. The experimental results have also been uploaded to Google Drive; the link is provided in the README file on GitHub.

References

  • Abbaszadeh Shahri A, Shan C, Larsson S (2022) A novel approach to uncertainty quantification in groundwater table modeling by automated predictive deep learning. Nat Resour Res 31(3):1351–1373

  • Aketi SA, Roy S, Raghunathan A et al (2020) Gradual channel pruning while training using feature relevance scores for convolutional neural networks. IEEE Access 8:171924–171932

  • Bai Y, Wang H, Tao Z et al (2022) Dual lottery ticket hypothesis. In: 10th international conference on learning representations, ICLR 2022

  • Chang J, Lu Y, Xue P et al (2022) Global balanced iterative pruning for efficient convolutional neural networks. Neural Comput Appl 34(23):21119–21138

  • Chen Z, Xu TB, Du C et al (2020) Dynamical channel pruning by conditional accuracy change for deep neural networks. IEEE Trans Neural Netw Learn Syst 32(2):799–813

  • Chen Y, Wen X, Zhang Y et al (2022) FPC: filter pruning via the contribution of output feature map for deep convolutional neural networks acceleration. Knowl-Based Syst 238:107876

  • Dong X, Yang Y (2019) Network pruning via transformable architecture search. Adv Neural Inf Process Syst 32

  • Dong X, Yan P, Wang M et al (2024) An optimization method for pruning rates of each layer in CNN based on the GA-SMSM. Memetic Comput 16(1):45–54

  • Frankle J, Carbin M (2019) The lottery ticket hypothesis: finding sparse, trainable neural networks. In: 7th international conference on learning representations, ICLR 2019

  • Gal Y, Ghahramani Z (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: International conference on machine learning. PMLR, pp 1050–1059

  • Guo Z, Zhang L, Zhang D (2010) A completed modeling of local binary pattern operator for texture classification. IEEE Trans Image Process 19(6):1657–1663

  • Guo Y, Yao A, Chen Y (2016) Dynamic network surgery for efficient DNNs. Adv Neural Inf Process Syst 29

  • Guo C, Pleiss G, Sun Y et al (2017) On calibration of modern neural networks. In: International conference on machine learning. PMLR, pp 1321–1330

  • Han S, Pool J, Tran J et al (2015) Learning both weights and connections for efficient neural network. Adv Neural Inf Process Syst 28

  • He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778

  • He Y, Zhang X, Sun J (2017) Channel pruning for accelerating very deep neural networks. In: Proceedings of the IEEE international conference on computer vision, pp 1389–1397

  • He Y, Lin J, Liu Z et al (2018) AMC: automl for model compression and acceleration on mobile devices. In: Proceedings of the European conference on computer vision (ECCV), pp 784–800

  • He Y, Liu P, Wang Z et al (2019) Filter pruning via geometric median for deep convolutional neural networks acceleration. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4340–4349

  • Jiang P, Xue Y, Neri F (2023) Convolutional neural network pruning based on multi-objective feature map selection for image classification. Appl Soft Comput 139:110229

  • Kim M, Hospedales T (2023) BayesDLL: Bayesian deep learning library. arXiv preprint arXiv:2309.12928

  • Krizhevsky A (2009) Learning multiple layers of features from tiny images. Technical report

  • Lee N, Ajanthan T, Torr PH (2019) SNIP: single-shot network pruning based on connection sensitivity. In: 7th International conference on learning representations, ICLR 2019

  • Lian Y, Peng P, Xu W (2021) Filter pruning via separation of sparsity search and model training. Neurocomputing 462:185–194

  • Lin S, Ji R, Yan C et al (2019) Towards optimal structured CNN pruning via generative adversarial learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 2790–2799

  • Lin M, Ji R, Wang Y et al (2020) HRank: filter pruning using high-rank feature map. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1529–1538

  • Lin M, Ji R, Li S et al (2021a) Network pruning using adaptive exemplar filters. IEEE Trans Neural Netw Learn Syst 33(12):7357–7366

  • Lin M, Ji R, Zhang Y et al (2021b) Channel pruning via automatic structure search. In: Proceedings of the twenty-ninth international joint conference on artificial intelligence, pp 673–679

  • Liu Z, Li J, Shen Z et al (2017) Learning efficient convolutional networks through network slimming. In: Proceedings of the IEEE international conference on computer vision, pp 2736–2744

  • Liu Z, Mu H, Zhang X et al (2019) MetaPruning: meta learning for automatic neural network channel pruning. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 3296–3305

  • Liu N, Ma X, Xu Z et al (2020) AutoCompress: an automatic DNN structured pruning framework for ultra-high compression rates. In: Proceedings of the AAAI conference on artificial intelligence, pp 4876–4883

  • Liu J, Zhuang B, Zhuang Z et al (2021) Discrimination-aware network pruning for deep model compression. IEEE Trans Pattern Anal Mach Intell 44(8):4035–4051

  • Liu G, Zhang K, Lv M (2023a) SOKS: automatic searching of the optimal kernel shapes for stripe-wise network pruning. IEEE Trans Neural Netw Learn Syst 34(12):9912–9924

  • Liu Y, Wu D, Zhou W et al (2023b) EACP: an effective automatic channel pruning for neural networks. Neurocomputing 526:131–142

  • Louati H, Louati A, Bechikh S et al (2024) Joint filter and channel pruning of convolutional neural networks as a bi-level optimization problem. Memetic Comput 1–20

  • Luo JH, Wu J, Lin W (2017) ThiNet: a filter level pruning method for deep neural network compression. In: Proceedings of the IEEE international conference on computer vision, pp 5058–5066

  • Molchanov P, Tyree S, Karras T et al (2017) Pruning convolutional neural networks for resource efficient inference. In: 5th international conference on learning representations, ICLR 2017—conference track proceedings

  • Mousa-Pasandi M, Hajabdollahi M, Karimi N et al (2020) Convolutional neural network pruning using filter attenuation. In: 2020 IEEE international conference on image processing (ICIP), pp 2905–2909

  • Pachón CG, Ballesteros DM, Renza D (2022) SeNPIS: sequential network pruning by class-wise importance score. Appl Soft Comput 129:109558

  • Paszke A, Gross S, Chintala S et al (2017) Automatic differentiation in PyTorch. In: NIPS Autodiff Workshop

  • Raffel C, Shazeer N, Roberts A et al (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21(1):5485–5551

  • Russakovsky O, Deng J, Su H et al (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis 115:211–252

  • Sanh V, Wolf T, Rush A (2020) Movement pruning: adaptive sparsity by fine-tuning. Adv Neural Inf Process Syst 33:20378–20389

  • Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations

  • Tang Y, Wang Y, Xu Y et al (2020) SCOP: scientific control for reliable neural network pruning. Adv Neural Inf Process Syst 33:10936–10947

  • Wang H, Fu Y (2022) Trainability preserving neural pruning. In: International conference on learning representations

  • Wang Z, Bovik AC, Sheikh HR et al (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  • Wang H, Qin C, Zhang Y et al (2021a) Neural pruning via growing regularization. In: ICLR 2021—9th international conference on learning representations

  • Wang J, Jiang T, Cui Z et al (2021b) Filter pruning with a feature map entropy importance criterion for convolution neural networks compressing. Neurocomputing 461:41–54

  • Wang W, Chen M, Zhao S et al (2021c) Accelerate CNNs from three dimensions: a comprehensive pruning framework. In: Proceedings of machine learning research, pp 10717–10726

  • Wang W, Yu Z, Fu C et al (2021d) COP: customized correlation-based filter level pruning method for deep CNN compression. Neurocomputing 464:533–545

  • Wang Z, Li C, Wang X (2021e) Convolutional neural network pruning with structural redundancy reduction. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 14913–14922

  • Wang Z, Xie X, Shi G (2021f) RFPruning: a retraining-free pruning method for accelerating convolutional neural networks. Appl Soft Comput 113:107860

  • Wang Y, Guo S, Guo J et al (2024) Towards performance-maximizing neural network pruning via global channel attention. Neural Netw 171:104–113

  • Yang C, Liu H (2022) Channel pruning based on convolutional neural network sensitivity. Neurocomputing 507:97–106

  • Yao K, Cao F, Leung Y et al (2021) Deep neural network compression through interpretability-based filter pruning. Pattern Recogn 119:108056

  • Yu R, Li A, Chen CF et al (2018) NISP: pruning networks using neuron importance score propagation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9194–9203

  • Yuan T, Li Z, Liu B et al (2024) ARPruning: an automatic channel pruning based on attention map ranking. Neural Netw 106220

  • Zhang Y, Lin M, Lin CW et al (2022) Carrying out CNN channel pruning in a white box. IEEE Trans Neural Netw Learn Syst

  • Zhuang Z, Tan M, Zhuang B et al (2018) Discrimination-aware channel pruning for deep neural networks. Adv Neural Inf Process Syst 31

  • Zhuang T, Zhang Z, Huang Y et al (2020) Neuron-level structured pruning using polarization regularizer. Adv Neural Inf Process Syst 33:9865–9877

  • Zou BJ, Guo YD, He Q et al (2018) 3D filtering by block matching and convolutional neural network for image denoising. J Comput Sci Technol 33:838–848

Funding

This work was supported by the National Key Research and Development Program of China (2022YFC2905700, 2022YFB3205800) and the Fundamental Research Programs of Shanxi Province (202203021212129, 202203021221106). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Contributions

Sun Chuanmeng: Conceptualization, Funding Acquisition, Methodology, Visualization, Writing-Review. Chen Jiaxin: Conceptualization, Methodology, Software, Writing-Original Draft. Li Yong: Conceptualization, Methodology, Funding Acquisition. Wang Yu: Funding Acquisition, Data Curation. Ma Tiehua: Supervision, Investigation.

Corresponding author

Correspondence to Chuanmeng Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical approval

This article does not contain any research on animals by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sun, C., Chen, J., Li, Y. et al. Channel pruning method driven by similarity of feature extraction capability. Soft Comput 29, 1207–1226 (2025). https://doi.org/10.1007/s00500-025-10470-w
