
Dynamic Network Pruning Based on Local Channel-Wise Relevance

  • Conference paper
Cognitive Systems and Information Processing (ICCSIP 2021)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1515)


Abstract

In recent years, deep convolutional neural networks (CNNs) have significantly boosted a wide range of applications, but the high computational complexity of these models hinders their deployment on devices with limited computational resources. Dynamic channel pruning has therefore been proposed and widely used to compress CNN-based models. In this paper, we propose a novel plug-and-play dynamic network pruning module. With only a slight extra computational burden, it achieves performance comparable to the original model. Specifically, our proposed module measures the importance of each convolutional channel, so the CNN can be pruned with only a small decrease in accuracy. The module reduces computational cost through global pooling and a channel-wise 1-dimensional convolution that exploits the locality of channels. Comprehensive experimental results demonstrate the effectiveness of our module, which achieves a better trade-off between performance and required computational resources than competing methods. Concretely, our dynamic pruning module reduces the FLOPs of VGG16 by 51.1% with only 0.18% top-1 accuracy degradation on CIFAR-10.
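The channel-scoring mechanism described in the abstract (global average pooling followed by a channel-wise 1-dimensional convolution over neighbouring channels) can be illustrated with a short PyTorch sketch. This is a minimal sketch under our own assumptions: the module name LocalChannelRelevanceGate, the kernel size, and the hard top-k gating are hypothetical stand-ins for illustration, not the authors' exact design.

    import torch
    import torch.nn as nn

    class LocalChannelRelevanceGate(nn.Module):
        """Hypothetical sketch of a plug-and-play dynamic channel gate.

        Channel relevance is estimated by global average pooling followed
        by a 1-D convolution across the channel dimension, so each
        channel's score depends only on its neighbouring channels
        (locality). The least relevant channels are zeroed per input,
        which lets downstream computation for them be skipped.
        """

        def __init__(self, kernel_size: int = 3, keep_ratio: float = 0.5):
            super().__init__()
            # Single-filter 1-D conv over the channel dimension;
            # padding preserves the channel count C.
            self.score = nn.Conv1d(1, 1, kernel_size,
                                   padding=kernel_size // 2, bias=False)
            self.keep_ratio = keep_ratio

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            # Global average pooling: (B, C, H, W) -> (B, C)
            pooled = x.mean(dim=(2, 3))
            # Channel-wise 1-D conv over the length-C channel sequence:
            # (B, 1, C) -> (B, 1, C) -> (B, C)
            scores = self.score(pooled.unsqueeze(1)).squeeze(1)
            # Keep the k highest-scoring channels per sample, zero the rest.
            k = max(1, int(c * self.keep_ratio))
            threshold = scores.topk(k, dim=1).values[:, -1:]
            mask = (scores >= threshold).to(x.dtype)
            return x * mask.view(b, c, 1, 1)

Dropped in after a convolutional layer, such a gate keeps roughly a keep_ratio fraction of channels per input, so the computation for masked channels can be skipped at inference. A full implementation would also need a differentiable relaxation of the hard mask (or a sparsity penalty) during training, which this sketch omits.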



Author information

Correspondence to Wenxi Liu.


Copyright information

© 2022 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lin, L., Liu, W., Yu, Y. (2022). Dynamic Network Pruning Based on Local Channel-Wise Relevance. In: Sun, F., Hu, D., Wermter, S., Yang, L., Liu, H., Fang, B. (eds) Cognitive Systems and Information Processing. ICCSIP 2021. Communications in Computer and Information Science, vol 1515. Springer, Singapore. https://doi.org/10.1007/978-981-16-9247-5_4

  • DOI: https://doi.org/10.1007/978-981-16-9247-5_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-9246-8

  • Online ISBN: 978-981-16-9247-5

  • eBook Packages: Computer Science, Computer Science (R0)
