
Lossless Method of Constraining Membrane Potential in Deep Spiking Neural Networks

Conference paper — Artificial Intelligence Applications and Innovations (AIAI 2023)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 676)

Abstract

Biologically inspired Spiking Neural Networks (SNNs) offer a promising path toward energy-efficient artificial intelligence systems. However, hardware deployment of deep SNNs has stagnated, because the wide dynamic range of the spiking neuron's membrane potential poses a significant challenge to hardware efficiency. To address this issue, this work proposes a guideline and a novel hardware-friendly method for constraining the membrane potential, reducing the associated hardware overhead while fully preserving inference accuracy. Experiments demonstrate that the proposed method is effective and achieves a substantial reduction in memory usage for a 20-layer ResNet model. This work paves the way toward efficient hardware implementation of even deeper SNNs.
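To make the underlying idea concrete: the paper's specific constraint scheme is not detailed here, but the general notion of bounding a spiking neuron's membrane potential can be sketched as an integrate-and-fire update followed by a clamp to a fixed range, so that the potential fits a narrow hardware register. The bounds `v_min`/`v_max` and the soft-reset behavior below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def if_neuron_step(v, x, v_th=1.0, v_min=-1.0, v_max=2.0):
    """One time step of an integrate-and-fire neuron with a clamped
    membrane potential. The clamp bounds (v_min, v_max) are illustrative
    placeholders; a hardware implementation would pick them to match a
    fixed bit-width representation."""
    v = v + x                          # integrate the input current
    spike = v >= v_th                  # fire where the threshold is crossed
    v = np.where(spike, v - v_th, v)   # soft reset: subtract the threshold
    v = np.clip(v, v_min, v_max)       # constrain the potential's range
    return v, spike.astype(np.float32)

# Drive four neurons with the same input for three time steps.
v = np.zeros(4)
x = np.array([0.6, 0.2, -0.3, 1.5])
for _ in range(3):
    v, s = if_neuron_step(v, x)
```

Because the clamp is applied after the reset, every stored potential stays within `[v_min, v_max]`, which is what allows a narrow fixed-point register; whether accuracy survives depends on choosing bounds that the network's activations rarely exceed, which is the lossless guarantee the paper is concerned with.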



Author information

Correspondence to Yijie Miao or Makoto Ikeda.


Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper


Cite this paper

Miao, Y., Ikeda, M. (2023). Lossless Method of Constraining Membrane Potential in Deep Spiking Neural Networks. In: Maglogiannis, I., Iliadis, L., MacIntyre, J., Dominguez, M. (eds) Artificial Intelligence Applications and Innovations. AIAI 2023. IFIP Advances in Information and Communication Technology, vol 676. Springer, Cham. https://doi.org/10.1007/978-3-031-34107-6_42


  • DOI: https://doi.org/10.1007/978-3-031-34107-6_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34106-9

  • Online ISBN: 978-3-031-34107-6

  • eBook Packages: Computer Science, Computer Science (R0)
