
Deep Spiking Neural Network: Energy Efficiency Through Time Based Coding

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12355)

Abstract

Spiking Neural Networks (SNNs) are promising for enabling low-power event-driven data analytics. The best-performing SNNs for image recognition tasks are obtained by converting a trained deep learning Analog Neural Network (ANN) composed of Rectified Linear Unit (ReLU) activations to an SNN consisting of Integrate-and-Fire (IF) neurons with “proper” firing thresholds. However, this conversion comes at the cost of accuracy loss and higher inference latency due to the lack of a notion of time. In this work, we propose an ANN-to-SNN conversion methodology that uses a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is represented using two spikes, and the timing between the two spiking instants is proportional to the pixel intensity. The real-valued ReLU activations of the ANN are encoded using the spike times of the TSC neurons in the converted TSC-SNN. At most two memory accesses and two addition operations are performed for each synapse during the entire inference, which significantly improves SNN energy efficiency. We demonstrate the proposed TSC-SNN for VGG-16, ResNet-20 and ResNet-34 SNNs on datasets including CIFAR-10 (93.63% top-1), CIFAR-100 (70.97% top-1) and ImageNet (73.46% top-1 accuracy). It surpasses the best inference accuracy of the converted rate-encoded SNN with 7–14.5× lower inference latency, and 30–60× fewer addition operations and memory accesses per inference across datasets.
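Since the abstract only summarizes the coding scheme, the following minimal Python sketch illustrates the two ideas behind it: (i) the TSC input encoding, in which each pixel emits two opposite-polarity spikes separated by a delay proportional to its intensity, and (ii) why each synapse costs at most two additions per inference, since the weight is added to a current accumulator only when a spike arrives rather than at every timestep. The function names, the time window T, the linear intensity-to-delay mapping, and the placement of the first spike at t = 0 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def tsc_encode(pixels, T=64):
    """Temporal-Switch-Coding input sketch (assumed details).

    Each pixel emits two spikes of opposite polarity; the gap between
    them is proportional to the normalized intensity. We assume the
    first spike fires at t = 0 and intensities lie in [0, 1].
    Returns a (T, *pixels.shape) array of {-1, 0, +1} spike events.
    """
    x = np.clip(np.asarray(pixels, dtype=np.float64), 0.0, 1.0)
    spikes = np.zeros((T,) + x.shape, dtype=np.int8)
    delay = np.rint(x * (T - 1)).astype(np.intp)  # inter-spike gap grows with intensity
    spikes[0] += 1                                # first ("switch-on") spike at t = 0
    # Second ("switch-off") spike; for zero intensity the two spikes land
    # on the same timestep and cancel, encoding "no input".
    np.add.at(spikes, (delay,) + tuple(np.indices(x.shape)), -1)
    return spikes


def accumulate_synapse(spike_train, w):
    """Why a synapse costs at most two additions per inference.

    The weight w is added to an input-current accumulator only when a
    spike arrives (one addition per polarity switch, so at most two per
    synapse); the per-timestep integration of that accumulator is a
    neuron-side operation shared across all of the neuron's synapses.
    """
    current = 0.0
    integrated = 0.0
    for s in spike_train:      # at most two nonzero entries
        if s != 0:
            current += s * w   # the only additions that touch the weight
        integrated += current  # membrane-style integration over time
    return integrated


# Example under the assumptions above: one pixel of intensity 0.5 and T = 64
# yields spikes at t = 0 and t = 32, so a synapse of weight 0.25 integrates
# 32 * 0.25 = 8.0, i.e. a weight-times-intensity product computed in time.
train = tsc_encode(np.array(0.5), T=64)
print(accumulate_synapse(train, w=0.25))  # 8.0
```

In this sketch the multiplication of weight by intensity is realized by how long the accumulated current persists between the two spikes, which is why only two weight accesses and two additions per synapse are needed for the whole inference.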



Acknowledgment

This work was supported in part by the Center for Brain-Inspired Computing (C-BRIC), a MARCO and DARPA sponsored StarNet center, by the Semiconductor Research Corporation, the National Science Foundation, Sandia National Laboratories, the Vannevar Bush Faculty Fellowship, and by the US Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001.

Author information


Correspondence to Bing Han.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Han, B., Roy, K. (2020). Deep Spiking Neural Network: Energy Efficiency Through Time Based Coding. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12355. Springer, Cham. https://doi.org/10.1007/978-3-030-58607-2_23


  • DOI: https://doi.org/10.1007/978-3-030-58607-2_23


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58606-5

  • Online ISBN: 978-3-030-58607-2

  • eBook Packages: Computer Science, Computer Science (R0)
