Abstract
A Spiking Neural Network (SNN) is a brain-inspired model capable of solving complex problems such as pattern recognition and character classification. Each neuron collects discrete spikes from its predecessors, which changes its ionic level, or membrane potential; once that potential reaches a threshold, the neuron fires and transmits a signal to its successors. Transmitting data among neurons is a data-dependency problem: each neuron must receive signals from its surrounding neurons at the current time step before it can forward its own output to the next layer. Because of this high level of dependency, scaling up a spiking neural network (by increasing the number of neurons and hidden layers) is a real challenge. In hardware, the number of neurons is limited by the device capacity and by the number of internal wires available to connect neurons. In this paper, we examine the main factors that significantly impact the scalability of spiking neural networks in hardware by implementing an SNN model in the hardware description language SystemVerilog. Evaluating the design on an Alveo U55 high-performance compute FPGA card, we found that the highest number of neurons we can map onto the hardware is 128 per hidden layer, and the highest number of synapses is 384,000 per hidden layer.
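To illustrate the neuron behaviour described above, the following is a minimal SystemVerilog sketch of a single integrate-and-fire neuron: incoming spikes are weighted and accumulated into a membrane potential, and the neuron emits an output spike and resets once a threshold is crossed. The module name, parameters (N_SYN, WIDTH, THRESHOLD, LEAK), and the weighted-sum scheme are illustrative assumptions for this sketch, not the design evaluated in the paper.

// lif_neuron.sv -- minimal sketch of one integrate-and-fire neuron.
// Module name, parameters, and weighted-sum scheme are illustrative
// assumptions, not the authors' evaluated design.
module lif_neuron #(
  parameter int N_SYN     = 16,     // number of input synapses
  parameter int WIDTH     = 16,     // membrane-potential bit width
  parameter int THRESHOLD = 1000,   // firing threshold
  parameter int LEAK      = 1       // leak subtracted every cycle
) (
  input  logic                    clk,
  input  logic                    rst_n,
  input  logic [N_SYN-1:0]        spikes_in,        // one bit per predecessor
  input  logic signed [WIDTH-1:0] weights [N_SYN],  // synaptic weights
  output logic                    spike_out         // spike to successors
);

  logic signed [WIDTH-1:0] v_mem;   // membrane potential
  logic signed [WIDTH-1:0] v_next;

  // Sum the weights of all synapses that spiked in this cycle.
  function automatic logic signed [WIDTH-1:0] weighted_sum
      (input logic [N_SYN-1:0] s, input logic signed [WIDTH-1:0] w [N_SYN]);
    logic signed [WIDTH-1:0] acc = '0;
    for (int i = 0; i < N_SYN; i++)
      if (s[i]) acc += w[i];
    return acc;
  endfunction

  // Integrate incoming spikes and apply a constant leak.
  assign v_next = v_mem + weighted_sum(spikes_in, weights) - LEAK;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      v_mem     <= '0;
      spike_out <= 1'b0;
    end else if (v_next >= THRESHOLD) begin
      spike_out <= 1'b1;     // fire once the threshold is reached
      v_mem     <= '0;       // reset the membrane potential
    end else begin
      spike_out <= 1'b0;
      v_mem     <= (v_next < 0) ? '0 : v_next;  // clamp at zero
    end
  end

endmodule

In a fully connected hidden layer, the synapse count grows as (neurons per layer) x (inputs per neuron), so the per-neuron fan-in N_SYN and the number of such modules instantiated per layer together determine the wiring and resource pressure the abstract refers to.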
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Karakchi, R., Frierson, J. (2024). Towards a Scalable Spiking Neural Network. In: Daimi, K., Al Sadoon, A. (eds) Proceedings of the Second International Conference on Advances in Computing Research (ACR’24). ACR 2024. Lecture Notes in Networks and Systems, vol 956. Springer, Cham. https://doi.org/10.1007/978-3-031-56950-0_44
DOI: https://doi.org/10.1007/978-3-031-56950-0_44
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-56949-4
Online ISBN: 978-3-031-56950-0
eBook Packages: Intelligent Technologies and Robotics (R0)