Abstract
Biologically inspired Spiking Neural Networks (SNNs) offer a promising path toward energy-efficient artificial intelligence. However, hardware deployment of deep SNNs has stagnated, because the wide range of spiking-neuron membrane potentials poses a significant challenge to hardware efficiency. To address this issue, this work proposes a guideline and a novel hardware-friendly method for constraining the membrane potential, reducing the associated hardware overhead while fully preserving inference accuracy. Experiments demonstrate that the proposed method is effective and substantially reduces memory usage for a 20-layer ResNet model. This work paves the way toward efficient hardware implementation of even deeper SNNs.
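The abstract does not specify the paper's exact constraint mechanism, but the underlying idea — bounding the membrane potential so it fits a narrower fixed-point representation — can be sketched generically. The following is an illustrative integrate-and-fire update with soft reset and a clamp; the bounds `v_min` and `v_max`, the threshold, and the reset rule are all assumptions for demonstration, not the authors' method.

```python
import numpy as np

def if_neuron_step(v, input_current, v_th=1.0, v_min=-1.0, v_max=2.0):
    """One time step of an integrate-and-fire neuron layer with a clamped
    membrane potential. The clamp bounds (v_min, v_max) are illustrative:
    a bounded range lets hardware store v in fewer bits."""
    v = v + input_current               # integrate synaptic input
    spikes = v >= v_th                  # fire where threshold is crossed
    v = np.where(spikes, v - v_th, v)   # soft reset: subtract the threshold
    v = np.clip(v, v_min, v_max)        # constrain the potential's range
    return v, spikes
```

For example, a strongly inhibited neuron whose potential would otherwise drift far below zero is held at `v_min`, so the memory word width needed per neuron is set by the clamp bounds rather than by worst-case input statistics.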
© 2023 IFIP International Federation for Information Processing
Cite this paper
Miao, Y., Ikeda, M. (2023). Lossless Method of Constraining Membrane Potential in Deep Spiking Neural Networks. In: Maglogiannis, I., Iliadis, L., MacIntyre, J., Dominguez, M. (eds) Artificial Intelligence Applications and Innovations. AIAI 2023. IFIP Advances in Information and Communication Technology, vol 676. Springer, Cham. https://doi.org/10.1007/978-3-031-34107-6_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-34106-9
Online ISBN: 978-3-031-34107-6