Lotus: a memory organization for loose and tight coupling neurons in neuromorphic architecture

  • Regular Paper
  • Published in CCF Transactions on High Performance Computing

Abstract

Owing to its bionic features, neuromorphic computing has achieved higher energy efficiency than deep learning in many fields in recent years. As in the biological brain, the memory that stores synapses and weights occupies a large area in a neuromorphic processor, and prior neuromorphic processors struggle with the resulting memory-organization area. In this work, based on the characteristics of the brain and of spiking neural networks (SNNs), we propose a set-associative memory organization for loosely coupled structures and a compressed SRAM memory organization with a synapse adjacency matrix for tightly coupled structures, together forming an area-efficient memory organization for generalized neuromorphic architectures. A ping-pong memory is also proposed to expand the number of logical neurons. Experiments show that our methods use 23.4–75.8% less chip area and consume 21.2–75.7% less power than the CAM implementation in related work, while incurring only minor processor performance overhead.
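To make the loose/tight split concrete, the sketch below models the two storage schemes described in the abstract in Python. It is a minimal, illustrative sketch, not the Lotus implementation: the class names, the hashing of pre-synaptic neuron ids to sets, and the bit-matrix-plus-packed-weights layout are assumptions chosen only to contrast a set-associative synapse table for loosely coupled neurons with a compressed adjacency-matrix store for tightly coupled neurons.

# Illustrative sketch only; all names and layouts are hypothetical.
from typing import List, Tuple


class SetAssociativeSynapseTable:
    """Loose coupling: sparse fan-out stored like a set-associative cache.

    A pre-synaptic neuron id indexes a set; the ways of that set hold
    (pre, post, weight) entries, so a sparse connection does not pay for a
    full crossbar row. The indexing scheme here is an assumed choice.
    """

    def __init__(self, num_sets: int, ways: int):
        self.num_sets = num_sets
        self.ways = ways
        self.sets: List[List[Tuple[int, int, float]]] = [[] for _ in range(num_sets)]

    def add_synapse(self, pre: int, post: int, weight: float) -> bool:
        s = self.sets[pre % self.num_sets]
        if len(s) >= self.ways:      # set is full: this connection cannot be mapped
            return False
        s.append((pre, post, weight))
        return True

    def fanout(self, pre: int) -> List[Tuple[int, float]]:
        """(post, weight) pairs driven by a spike from `pre`."""
        return [(post, w) for (p, post, w) in self.sets[pre % self.num_sets] if p == pre]


class CompressedAdjacencySynapses:
    """Tight coupling: dense local connectivity as a bit adjacency matrix plus
    a packed weight list per row (weights stored only where a bit is set)."""

    def __init__(self, num_pre: int, num_post: int):
        self.adjacency = [[0] * num_post for _ in range(num_pre)]
        self.weights: List[List[float]] = [[] for _ in range(num_pre)]

    def add_synapse(self, pre: int, post: int, weight: float) -> None:
        # Assumes synapses in a row are added in increasing post order,
        # so the k-th set bit lines up with the k-th packed weight.
        self.adjacency[pre][post] = 1
        self.weights[pre].append(weight)

    def fanout(self, pre: int) -> List[Tuple[int, float]]:
        out, k = [], 0
        for post, bit in enumerate(self.adjacency[pre]):
            if bit:
                out.append((post, self.weights[pre][k]))
                k += 1
        return out


if __name__ == "__main__":
    loose = SetAssociativeSynapseTable(num_sets=4, ways=2)
    loose.add_synapse(pre=5, post=9, weight=0.3)
    tight = CompressedAdjacencySynapses(num_pre=4, num_post=4)
    tight.add_synapse(pre=0, post=2, weight=0.7)
    print(loose.fanout(5), tight.fanout(0))   # [(9, 0.3)] [(2, 0.7)]

The contrast captures the area trade-off: the set-associative table pays only for synapses that exist but can reject a connection when a set overflows, while the adjacency-matrix store pays one bit per possible connection and keeps weights packed, which suits the dense local connectivity of tightly coupled neurons.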




Acknowledgements

On behalf of all authors, the corresponding author states that there is no conflict of interest. This work is funded by the National Key Research and Development Programs of China [Grant numbers 2018YFB2202603 and 2020AAA0104602].

Author information

Correspondence to Lei Wang.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Z., Wang, L., Wang, Y. et al. Lotus: a memory organization for loose and tight coupling neurons in neuromorphic architecture. CCF Trans. HPC 4, 448–460 (2022). https://doi.org/10.1007/s42514-022-00113-z

