
Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 956))


Abstract

A Spiking Neural Network (SNN) is a biologically inspired brain model capable of solving complex problems in pattern recognition and character classification. Each neuron fires based on the discrete spikes collected from its predecessors, which change the ionic level, or membrane potential, of the neuron. Once the potential reaches a threshold, the neuron transmits a signal to its successors. Transmitting data among neurons is a data-dependency problem, because each neuron depends on receiving signals from its surrounding neurons at the current time step and transmitting them to the next neurons. Due to this high level of dependency, scaling up a spiking neural network (by increasing the number of neurons and hidden layers) is a real challenge. In hardware, the number of neurons is limited by the device capacity and the number of internal wires available to connect neurons. In this paper, we examine the main factors that significantly impact the scalability of spiking neural networks in hardware by implementing an SNN model in the hardware description language SystemVerilog. Evaluating the design on an Alveo U55 high-performance compute FPGA card, we found that the highest number of neurons we can map onto the hardware is 128 per hidden layer and the highest number of synapses is 384,000 per hidden layer.
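The integrate-and-fire behavior the abstract describes — accumulating weighted input spikes into a membrane potential and emitting a spike when a threshold is crossed — can be sketched in a few lines. The paper's actual design is in SystemVerilog; the Python below is only an illustrative software model, and the function name, leak factor, and parameter values are assumptions, not taken from the paper.

```python
# Illustrative software model of the threshold-and-fire behavior described
# in the abstract. Names and constants here are assumptions for illustration;
# the paper's hardware design is written in SystemVerilog.

def simulate_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Integrate weighted input spikes over time; emit 1 when the membrane
    potential crosses the threshold, then reset it (leaky integrate-and-fire)."""
    potential = 0.0
    outputs = []
    for spikes in spike_trains:  # one tuple of 0/1 inputs per time step
        potential = leak * potential + sum(w * s for w, s in zip(weights, spikes))
        if potential >= threshold:
            outputs.append(1)   # fire a spike to successor neurons
            potential = 0.0     # reset after firing
        else:
            outputs.append(0)
    return outputs

# Two presynaptic neurons; the second time step pushes the potential past 1.0.
print(simulate_neuron([(1, 0), (1, 1), (0, 0)], weights=[0.6, 0.6]))  # → [0, 1, 0]
```

The data dependency discussed in the abstract is visible here: each step's potential depends on the spikes received at that step and on the previous potential, which is what makes the computation hard to parallelize across neurons in hardware.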



Author information


Corresponding author

Correspondence to Rasha Karakchi.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Karakchi, R., Frierson, J. (2024). Towards a Scalable Spiking Neural Network. In: Daimi, K., Al Sadoon, A. (eds) Proceedings of the Second International Conference on Advances in Computing Research (ACR’24). ACR 2024. Lecture Notes in Networks and Systems, vol 956. Springer, Cham. https://doi.org/10.1007/978-3-031-56950-0_44
