Convolutional Neural Network Compression Method Based on Multi-Factor Channel Pruning | IEEE Conference Publication | IEEE Xplore

Abstract:

Deep neural networks have been widely applied across many domains, but their large numbers of parameters and high computational demands limit their practical deployment. To address this issue, this paper introduces a convolutional neural network compression method based on multi-factor channel pruning. By integrating the scaling and shifting factors of batch normalization layers, a multi-factor channel salience metric is proposed to measure channel importance; removing the redundant channels identified by this metric yields a compressed model. On the CIFAR-10 dataset, we pruned 93.06% of the parameters and 91.92% of the computations of the VGG13BN network, with only a 2.81% drop in accuracy. On the CIFAR-100 dataset, we pruned 72.84% of the parameters and 72.03% of the computations of the VGG13BN network, with an accuracy improvement of 4.11%.
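The abstract describes scoring each channel with a salience metric built from the batch-normalization scaling and shifting factors, then pruning the lowest-scoring channels. The exact formula is not given in the abstract; the minimal sketch below assumes a simple |gamma| + |beta| combination and a fixed global prune ratio purely for illustration — the function names and the scoring rule are assumptions, not the paper's definitions.

```python
def channel_salience(gamma, beta):
    """Per-channel salience from BN scaling (gamma) and shifting (beta) factors.

    Assumed combination for illustration only: |gamma| + |beta|.
    The paper's actual multi-factor metric may differ.
    """
    return [abs(g) + abs(b) for g, b in zip(gamma, beta)]

def select_channels(gamma, beta, prune_ratio):
    """Return sorted indices of the channels kept after pruning.

    Channels are ranked by salience; the lowest-ranked fraction
    (prune_ratio) is treated as redundant and removed.
    """
    scores = channel_salience(gamma, beta)
    n_keep = max(1, round(len(scores) * (1.0 - prune_ratio)))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n_keep])

# Example: one BN layer with 8 channels, pruning 50% of them.
gamma = [0.9, 0.01, 0.4, 0.02, 0.7, 0.03, 0.5, 0.05]
beta  = [0.1, 0.00, 0.2, 0.01, 0.1, 0.02, 0.3, 0.00]
kept = select_channels(gamma, beta, 0.5)  # indices of surviving channels
```

In a real pipeline the kept indices would be used to slice the convolution weights, BN parameters, and the next layer's input channels before fine-tuning the compressed model.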
Date of Conference: 16-18 December 2023
Date Added to IEEE Xplore: 09 February 2024
Conference Location: Changsha, China
