
A Software-Hardware Co-exploration Framework for Optimizing Communication in Neuromorphic Processor

  • Conference paper
Advanced Computer Architecture (ACA 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1256)

Abstract

Spiking neural networks (SNNs) have been widely used to solve complex tasks such as pattern recognition and image classification. Neuromorphic processors that use SNNs to perform computation have proven to be powerful and energy-efficient. These processors generally use a Network-on-Chip (NoC) as the interconnect between neuromorphic cores. However, the connections between neurons in an SNN are very dense: when a neuron fires, it generates a large number of data packets, which causes congestion and dramatically increases packet transmission latency in the NoC.

In this paper, we propose a software-hardware co-exploration framework to alleviate this problem. The framework consists of three parts: software simulation, packet extraction and mapping, and hardware evaluation. At the software level, we explore the impact of packet loss on the classification accuracy of different applications. At the hardware level, we explore the impact of packet loss on transmission latency and power consumption in the NoC. Experimental results show that when the neuromorphic processor runs the MNIST handwritten digit recognition application, communication delay is reduced by 11%, power consumption is reduced by 5.3%, and classification accuracy reaches 80.75% (2% higher than the original accuracy). When running the FSDD speech recognition application, communication delay is reduced by 22%, power consumption is reduced by 2.2%, and classification accuracy reaches 78.5% (1% higher than the original accuracy).
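
To make the flow concrete, below is a minimal sketch of such a co-exploration loop at toy scale: a software-level stage that drops each spike packet with a candidate probability and reports a stand-in classification accuracy, a packet trace extracted from the surviving spikes, and a crude analytical hardware stage that maps traffic volume to latency and power. Every name and number in it (run_snn_with_drop, evaluate_noc, the fan-out of 16 packets per firing, the cost coefficients) is a hypothetical illustration of the idea, not the authors' actual toolchain.

# Hypothetical sketch (Python) of the co-exploration loop. The SNN and NoC
# models here are deliberately simplistic stand-ins.
import random
from dataclasses import dataclass

@dataclass
class Result:
    drop_prob: float      # fraction of spike packets discarded at injection
    accuracy: float       # stand-in classification accuracy
    avg_latency: float    # stand-in average packet latency (arbitrary units)
    power: float          # stand-in NoC power (arbitrary units)

def run_snn_with_drop(num_samples, drop_prob, rng):
    """Software level: emulate inference where each spike packet may be
    dropped with probability drop_prob; return accuracy and the packet trace."""
    correct, trace = 0, []
    for sample in range(num_samples):
        # Toy assumption: each sample makes one neuron fire, and that firing
        # fans out as 16 packets to 16 destination cores.
        packets = [(sample, dst) for dst in range(16)]
        survivors = [p for p in packets if rng.random() >= drop_prob]
        trace.extend(survivors)
        # Toy decision rule: classification succeeds if enough packets survive.
        if len(survivors) >= 12:
            correct += 1
    return correct / num_samples, trace

def evaluate_noc(trace):
    """Hardware level: a crude analytical stand-in for a NoC simulator;
    latency and power grow with the number of injected packets."""
    load = len(trace)
    avg_latency = 20.0 + 0.001 * load   # arbitrary base latency plus congestion term
    power = 1.0 + 0.0005 * load         # arbitrary static plus dynamic component
    return avg_latency, power

def explore(drop_probs, num_samples=200, seed=0):
    """Sweep candidate packet-loss rates and collect accuracy/latency/power."""
    rng = random.Random(seed)
    results = []
    for p in drop_probs:
        acc, trace = run_snn_with_drop(num_samples, p, rng)  # software simulation
        latency, power = evaluate_noc(trace)                 # packet extraction -> hardware evaluation
        results.append(Result(p, acc, latency, power))
    return results

if __name__ == "__main__":
    for r in explore([0.0, 0.05, 0.1, 0.2]):
        print(f"drop={r.drop_prob:.2f}  acc={r.accuracy:.2f}  "
              f"latency={r.avg_latency:.2f}  power={r.power:.3f}")

In the framework described above, the same loop is carried out with a real SNN simulation at the software level and a NoC latency/power evaluation of the extracted, mapped packet trace at the hardware level.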

Author information

Corresponding author

Correspondence to Lei Wang.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, S., Wang, L., Kang, Z., Qu, L., Li, S., Su, J. (2020). A Software-Hardware Co-exploration Framework for Optimizing Communication in Neuromorphic Processor. In: Dong, D., Gong, X., Li, C., Li, D., Wu, J. (eds) Advanced Computer Architecture. ACA 2020. Communications in Computer and Information Science, vol 1256. Springer, Singapore. https://doi.org/10.1007/978-981-15-8135-9_7

  • DOI: https://doi.org/10.1007/978-981-15-8135-9_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-8134-2

  • Online ISBN: 978-981-15-8135-9

  • eBook Packages: Computer Science, Computer Science (R0)
