Research article · DOI: 10.1145/3508546.3508568 · ACAI Conference Proceedings

A Weight-Based Channel Pruning Algorithm for Depth-Wise Separable Convolution Unit

Published: 25 February 2022

Abstract

Deep learning has become a focus of both academia and industry and has found wide application in many fields, especially computer vision. However, the computational and storage costs of CNN models are now a serious obstacle for researchers: well-known architectures such as AlexNet, VGG, and ResNet keep growing wider and deeper, which makes them hard to deploy on mobile and embedded devices. Lightweight models are therefore in demand. This paper adapts the channel pruning method to the structure of the depth-wise separable convolution unit and its variant. By using a unified prune-ratio setting, an additional sparsity constraint, and specific dataset preprocessing, our experiments achieve higher accuracy on the CIFAR-10 dataset than related work in some cases.
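As an illustrative sketch only (not the paper's exact algorithm), weight-based channel pruning typically scores each output channel by a norm of its weights and keeps the top fraction determined by the prune ratio. The snippet below applies this idea to the pointwise (1×1) layer of a depth-wise separable unit; the L1-norm criterion, tensor shapes, and function names here are assumptions for illustration:

```python
import numpy as np

def channel_importance(weights):
    # weights: (out_channels, in_channels, kH, kW)
    # Importance of each output channel = L1 norm of its filter weights.
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def select_channels(weights, prune_ratio):
    # Keep the (1 - prune_ratio) fraction of output channels
    # with the largest L1 norms; return their indices in ascending order.
    scores = channel_importance(weights)
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - prune_ratio))))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return keep

# Prune the pointwise (1x1) convolution of a depth-wise separable unit:
# 8 output channels, 16 input channels, at a unified prune ratio of 0.5.
rng = np.random.default_rng(0)
pw = rng.normal(size=(8, 16, 1, 1))
kept = select_channels(pw, prune_ratio=0.5)
pruned = pw[kept]  # 4 surviving channels, shape (4, 16, 1, 1)
```

In a full pipeline, the same kept-channel indices would also be used to slice the input dimension of the following layer so the network stays consistent.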

Supplementary Material

Poster (Poster-A060.pdf)


Cited By

View all
  • (2024) Weed Recognition at Soybean Seedling Stage Based on YOLOV8nGP + NExG Algorithm. Agronomy 14, 4 (2024), 657. DOI: 10.3390/agronomy14040657. Online publication date: 24 Mar 2024.
  • (2022) Pruning-based Neural Network Reduction for Faster Profiling Side-Channel Attacks. In 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 1–4. DOI: 10.1109/ICECS202256217.2022.9971033. Online publication date: 24 Oct 2022.



Published In

cover image ACM Other conferences
ACAI '21: Proceedings of the 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence
December 2021
699 pages
ISBN:9781450385053
DOI:10.1145/3508546

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. channel pruning
  2. convolutional neural networks
  3. depth-wise separable convolution
  4. model compression and acceleration

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ACAI'21

Acceptance Rates

Overall Acceptance Rate 173 of 395 submissions, 44%


