
Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion

Research Article · Machine Intelligence Research

Abstract

The artificial neural network-spiking neural network (ANN-SNN) conversion, as an efficient algorithm for training deep SNNs, improves on the performance of shallow SNNs and extends SNNs to a wider range of tasks. However, existing conversion methods still face the problem of large conversion error at low conversion time steps. In this paper, a heuristic symmetric-threshold rectified linear unit (stReLU) activation function for ANNs is proposed, based on the intrinsically different responses of the integrate-and-fire (IF) neurons in SNNs and the activation functions in ANNs. The negative threshold in stReLU guarantees the conversion of negative activations, and the symmetric thresholds enable positive and negative errors between activation values and spike firing rates to offset each other, thus reducing the conversion error from ANNs to SNNs. The lossless conversion from ANNs with stReLU to SNNs is demonstrated by theoretical formulation. By contrasting stReLU with asymmetric-threshold LeakyReLU and threshold ReLU, the effectiveness of the symmetric thresholds is further explored. The results show that ANNs with stReLU decrease the conversion error and achieve nearly lossless conversion on the MNIST, Fashion-MNIST, and CIFAR10 datasets, with a 6× to 250× speedup over other methods. Moreover, a comparison of energy consumption between ANNs and SNNs indicates that this conversion algorithm can also significantly reduce energy consumption.
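The paper's formal definition of stReLU is not included in this preview, but the abstract's description suggests an activation that clips values to a symmetric range around zero. The PyTorch sketch below illustrates that reading; the class name STReLU, the single per-layer parameter theta, and the plain clamping form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class STReLU(nn.Module):
    """Illustrative symmetric-threshold activation (an assumption based
    on the abstract, not the paper's exact stReLU): pre-activations are
    clamped to the symmetric range [-theta, theta]."""

    def __init__(self, theta: float = 1.0):
        super().__init__()
        # Symmetric threshold; per the abstract it is set heuristically
        # and corresponds to the IF neuron's firing threshold after
        # conversion.
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unlike ReLU, inputs down to -theta pass through, so negative
        # activations survive conversion (e.g., as inhibitory spikes).
        return torch.clamp(x, min=-self.theta, max=self.theta)

# Minimal usage: a drop-in replacement for ReLU in the ANN that will
# later be converted to an SNN of integrate-and-fire neurons.
act = STReLU(theta=1.0)
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(act(x))  # tensor([-1.0000, -0.5000,  0.0000,  0.5000,  1.0000])
```

Intuitively, under rate coding with T time steps a firing rate is quantized to multiples of theta/T, so each unit carries a small signed quantization error; with symmetric thresholds these errors can take either sign and partially cancel across a layer, which is consistent with the abstract's claim of reduced conversion error at low time steps.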



Acknowledgements

This work was supported by the National Key Research and Development Program of China (No. 2020AAA0105900), National Natural Science Foundation of China (No. 62236007), and Zhejiang Lab, China (No. 2021KC0AC01).

Author information

Corresponding author

Correspondence to Jiangrong Shen.

Additional information

Jianing Han received the B. Eng. degree in Internet of Things engineering from Taiyuan University of Technology, China in 2021. Currently, she is a master's student in computer technology at the College of Computer Science and Technology, Zhejiang University, China.

Her research interests include learning algorithms in deep spiking neural networks and neural coding.

Ziming Wang received the B. Sc. degree in computer science from Sichuan University, China in 2020. Currently, he is a Ph.D. candidate in computer science and technology at the Department of Computer Science, Zhejiang University, China.

His research interests include neuromorphic computing, machine learning, and model compression.

Jiangrong Shen received the Ph.D. degree in computer science and technology from the College of Computer Science and Technology, Zhejiang University, China in 2022. She is currently a postdoctoral fellow at Zhejiang University, China. She was an honorary visiting scholar at the University of Leicester, UK in 2019.

Her research interests include neuromorphic computing, cyborg intelligence, and neural computation.

Huajin Tang received the Ph.D. degree in computing intelligence from the National University of Singapore, Singapore in 2005. He is currently a professor with Zhejiang University, China.

He received the 2016 IEEE Outstanding Transactions on Neural Networks and Learning Systems (TNNLS) Paper Award. He has served as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, the IEEE Transactions on Cognitive and Developmental Systems, and Frontiers in Neuromorphic Engineering. His research work on brain GPS was reported by MIT Technology Review in 2015.

His research interests include neuromorphic computing, neuromorphic hardware and cognitive systems, and robotic cognition.


About this article


Cite this article

Han, J., Wang, Z., Shen, J. et al. Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion. Mach. Intell. Res. 20, 435–446 (2023). https://doi.org/10.1007/s11633-022-1388-2

