
An efficient pruning and fine-tuning method for deep spiking neural network


Abstract

Spiking Neural Networks (SNNs) offer low hardware and power consumption thanks to their inherently sparse, spike-based computation, which makes them promising for deployment on resource-limited embedded devices. Nevertheless, the lack of efficient compression algorithms for SNNs makes deploying large-scale SNNs on such devices challenging. This work proposes an efficient SNN pruning and fine-tuning method that exploits the statistical characteristics of spike signals and weight sparsity, considering both the network compression rate and performance. Channel balance factors, which combine the scaling factors of batch normalization layers with the firing rates of spiking neurons to evaluate the importance of each channel, determine which channels to prune in deep SNNs. A sparsity factor is then applied during fine-tuning to prevent overfitting and reduce spiking activity, recovering the performance lost to pruning. With the proposed method, the number of parameters and FLOPs of the pruned 20-layer ResNet-SNNs are reduced by more than 60% and 80%, respectively, and the firing rate is reduced by between 6.21% and 54.14% as the compression ratio varies from 10% to 70%. Meanwhile, the pruned ResNet-SNNs achieve competitive accuracies of 93.25%, 95.48%, and 73.60% on the CIFAR-10, DVS-Gesture, and DVS-CIFAR10 datasets, respectively, exceeding the uncompressed baselines of 92.14%, 94.44%, and 72.60%. These results show that deep SNNs have significant potential for compression.
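To make the two components of the method concrete, below is a minimal PyTorch-style sketch of (i) a channel balance factor that combines batch-normalization scaling factors with per-channel firing rates, and (ii) an L1-style sparsity term for fine-tuning. All names (channel_balance_factors, alpha, sparsity_penalty) and the exact weighting are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
import torch
import torch.nn as nn


def channel_balance_factors(bn: nn.BatchNorm2d, firing_rate: torch.Tensor,
                            alpha: float = 0.5) -> torch.Tensor:
    """Score each channel by balancing the BN scaling factor |gamma| against
    the channel's average spiking-neuron firing rate.

    firing_rate: per-channel mean spike rate, shape (num_channels,), collected
    by running calibration data through the SNN.
    """
    gamma = bn.weight.detach().abs()
    # Normalize both signals so neither term dominates the balance factor.
    gamma = gamma / (gamma.max() + 1e-12)
    rate = firing_rate / (firing_rate.max() + 1e-12)
    return alpha * gamma + (1.0 - alpha) * rate


def channels_to_prune(scores: torch.Tensor,
                      compression_ratio: float) -> torch.Tensor:
    """Indices of the lowest-scoring channels for a given compression ratio
    (the fraction of channels to remove)."""
    k = int(scores.numel() * compression_ratio)
    return torch.argsort(scores)[:k]


def sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BN scaling factors, added to the fine-tuning loss as a
    stand-in for the paper's sparsity factor (the exact form is an assumption);
    it discourages overfitting and pushes channel activity toward sparsity."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules()
                     if isinstance(m, nn.BatchNorm2d))
```

In use, the penalty would be added to the task loss during fine-tuning (loss = criterion(output, target) + sparsity_penalty(model)), and pruning would remove, layer by layer, the channels returned by channels_to_prune.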


Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.


Acknowledgements

This work was supported in part by STI 2030-Major Projects 2022ZD0209700 and in part by the Open Foundation of State Key Laboratory of Electronic Thin Films and Integrated Devices (KFJJ202206).

Author information


Correspondence to G. C. Qiao or S. G. Hu.

Ethics declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Meng, L.W., Qiao, G.C., Zhang, X.Y. et al. An efficient pruning and fine-tuning method for deep spiking neural network. Appl Intell 53, 28910–28923 (2023). https://doi.org/10.1007/s10489-023-05056-8

