
Multi-grained system integration for hybrid-paradigm brain-inspired computing

  • Research Paper
  • Published in: Science China Information Sciences

Abstract

Hybrid neuromorphic computing, which supports both the prevailing artificial neural networks and neuroscience-inspired models and algorithms, offers substantial flexibility for cross-paradigm model integration. It is one of the most promising technologies for accelerating the development of intelligence and, ultimately, for contributing to artificial general intelligence. Recently, an increasing number of hybrid neuromorphic computing chips have been reported, but such research focuses on chip design without demonstrating systems for large-scale workloads. To this end, we construct a multi-grained system based on many Tianjic chips, presenting a large-scale system for hybrid-paradigm brain-inspired computing. With different numbers of chips and different connection topologies, we develop a Tianjic card and a Tianjic board as the infrastructure for building embedded systems and cloud servers, respectively. Extensive measurements of communication latency, computational latency, and power consumption demonstrate the strong potential of Tianjic systems for exploring brain-inspired computing toward artificial general intelligence.
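To make the multi-grained hierarchy concrete, the sketch below models the chip/card/board composition described in the abstract. It is a minimal illustration, not code from the paper: the class names, the per-chip core count, and the latency coefficients are all assumptions made for the example.

```python
# Hypothetical sketch of the multi-grained Tianjic hierarchy: chips grouped onto a
# card (embedded systems) and cards onto a board (cloud servers). All names,
# parameters, and latency numbers are illustrative assumptions, not measured values.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TianjicChip:
    chip_id: int
    cores: int = 156          # assumed functional-core count per chip


@dataclass
class TianjicCard:
    """A small group of chips, e.g., the building block of an embedded system."""
    chips: List[TianjicChip] = field(default_factory=list)


@dataclass
class TianjicBoard:
    """Several cards connected together for a cloud-server deployment."""
    cards: List[TianjicCard] = field(default_factory=list)

    def total_cores(self) -> int:
        return sum(chip.cores for card in self.cards for chip in card.chips)


def estimated_hop_latency(intra_card_hops: int, inter_card_hops: int,
                          intra_us: float = 1.0, inter_us: float = 10.0) -> float:
    """Toy additive latency model: on-card hops are assumed cheaper than
    card-to-card hops; the coefficients are placeholders."""
    return intra_card_hops * intra_us + inter_card_hops * inter_us


if __name__ == "__main__":
    card_a = TianjicCard([TianjicChip(i) for i in range(4)])
    card_b = TianjicCard([TianjicChip(i) for i in range(4, 8)])
    board = TianjicBoard([card_a, card_b])
    print("total cores:", board.total_cores())
    print("est. latency (us):", estimated_hop_latency(intra_card_hops=3, inter_card_hops=1))
```

The point of the sketch is only the composition pattern: a board aggregates cards, a card aggregates chips, and cross-card communication is expected to dominate latency, which is why the paper measures communication and computational latency separately at each grain.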



Acknowledgements

This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 62088102, 61836004), the National Key R&D Program of China (Grant Nos. 2018YFE0200200, 2021ZD0200300), the CETC Haikang Group-Brain Inspired Computing Joint Research Center, and the IDG/McGovern Institute for Brain Research at Tsinghua University.

Author information


Corresponding author

Correspondence to Luping Shi.


About this article

Cite this article

Pei, J., Deng, L., Ma, C. et al. Multi-grained system integration for hybrid-paradigm brain-inspired computing. Sci. China Inf. Sci. 66, 142403 (2023). https://doi.org/10.1007/s11432-021-3510-6
