
Discrete cosine transform for filter pruning

Published in Applied Intelligence

Abstract

Neural network filter pruning has proven effective for deploying models with fewer resources and more efficient inference. However, the pruning processes in existing methods are complex and inefficient. This paper uses the Discrete Cosine Transform (DCT) to map feature maps into the frequency domain and proposes a simple, effective filter-importance criterion for filter pruning, called DCTPruning. The important information in an image is usually concentrated in the low-frequency components. The discrete cosine transform converts an image from the spatial domain to the frequency domain, where the high-frequency components can be discarded or lossily compressed without losing the important information. This study finds that the same phenomenon holds for the feature maps of neural networks. Accordingly, it proposes a DCT-based pruning method: the DCT of each feature map is used to compute the importance of the corresponding filter, and filters are pruned according to their importance. The proposed method achieves good results not only on the classification task but also on the salient object detection task. For classification, compared with the existing state of the art, the method yields significant reductions in FLOPs and parameters while maintaining accuracy. For example, for VGG-16 on CIFAR-10, parameters are reduced by 94.2% and FLOPs by 84.1%, while accuracy drops by only 1.43% (92.53% vs. 93.96%); for ResNet-50 on ImageNet, DCTPruning reduces FLOPs by 74.1% and parameters by 70.8%, with a Top-1 accuracy drop of only 3.84% (72.31% vs. 76.15%) and a Top-5 drop of only 2.10% (90.77% vs. 92.87%).
For salient object detection, this study likewise prunes the network effectively, achieving a large reduction in model size while keeping similar performance.
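The frequency-domain importance idea above can be sketched in a few lines. This is a minimal illustration, assuming the importance score is the low-frequency DCT energy of each filter's output feature map; the function names, the `low_frac` corner size, and the exact scoring rule are illustrative assumptions, not the paper's verbatim criterion.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)  # DC row scaling for orthonormality
    return m

def dct2(x):
    """2-D DCT-II of a single feature map (H x W)."""
    return dct_matrix(x.shape[0]) @ x @ dct_matrix(x.shape[1]).T

def filter_importance(feature_maps, low_frac=0.25):
    """Score each filter by low-frequency DCT energy of its feature map.

    feature_maps: (C, H, W) array, one map per filter. Importance is
    taken here as the energy in the top-left (low-frequency)
    low_frac x low_frac corner of the 2-D DCT -- an assumed criterion.
    """
    C, H, W = feature_maps.shape
    h, w = max(1, int(H * low_frac)), max(1, int(W * low_frac))
    return np.array([np.sum(dct2(fm)[:h, :w] ** 2) for fm in feature_maps])

def prune_mask(scores, prune_ratio=0.5):
    """Boolean keep-mask: mark the lowest-scoring fraction for pruning."""
    k = int(len(scores) * prune_ratio)
    mask = np.ones(len(scores), dtype=bool)
    mask[np.argsort(scores)[:k]] = False  # False = prune this filter
    return mask
```

Given activations `maps` of shape `(C, H, W)` from one layer, `prune_mask(filter_importance(maps), 0.5)` would flag the half of the filters whose feature maps carry the least low-frequency energy, matching the intuition that such maps store little "important" information.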





Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61772352; the National Key Research and Development Project under Grant Nos. 2020YFB1711800 and 2020YFB1707900; the Science and Technology Project of Sichuan Province under Grant Nos. 2019YFG0400, 2021YFG0152, 2020YFG0479, 2020YFG0322, and 2020GFW035; and the R&D Project of Chengdu City under Grant No. 2019-YF05-01790-GX.

Author information

Correspondence to Bing Guo.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, Y., Zhou, R., Guo, B. et al. Discrete cosine transform for filter pruning. Appl Intell 53, 3398–3414 (2023). https://doi.org/10.1007/s10489-022-03604-2
