SNN vs. CNN Implementations on FPGAs: An Empirical Evaluation

  • Conference paper
  • In: Applied Reconfigurable Computing. Architectures, Tools, and Applications (ARC 2024)

Abstract

Convolutional Neural Networks (CNNs) are widely employed to solve various problems, e.g., image classification. Due to their compute- and data-intensive nature, CNN accelerators have been developed as ASICs or on FPGAs. The increasing complexity of applications has caused resource costs and energy requirements of these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging alternative to CNN implementations, promising higher resource and energy efficiency. The main research question addressed in this paper is whether SNN accelerators truly meet these expectations of reduced energy demands compared to their CNN equivalents when implemented on modern FPGAs. For this purpose, we analyze multiple SNN hardware accelerators for FPGAs regarding performance and energy efficiency. We also present a novel encoding scheme of spike event queues and a novel memory organization technique to improve SNN energy efficiency further. Both techniques have been integrated into a state-of-the-art SNN architecture and evaluated for MNIST, SVHN, and CIFAR-10 data sets and corresponding network architectures on two differently sized modern FPGA platforms. A result of our empirical analysis is that for complex benchmarks such as SVHN and CIFAR-10, SNNs do live up to their expectations.
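The efficiency argument summarized in the abstract rests on event-driven execution: an SNN accelerator only performs work for neurons that actually receive a spike, whereas a CNN computes every multiply-accumulate regardless of activity. The following is a minimal, purely illustrative sketch of this idea (the function, queue format, and parameters are hypothetical and do not reproduce the paper's accelerator or its proposed queue encoding):

```python
from collections import deque

def run_layer(spike_queue, weights, threshold=1.0, leak=0.9):
    """Event-driven update of one fully connected spiking layer.

    spike_queue holds (timestep, source_neuron) events; weights[src][dst]
    is the synaptic weight. Only neurons reached by an event are touched,
    which is where the expected energy savings over dense CNN layers come from.
    Returns the (timestep, neuron) output spikes emitted by the layer.
    """
    n_out = len(weights[0])
    potentials = [0.0] * n_out
    out_spikes = []
    last_t = 0
    while spike_queue:
        t, src = spike_queue.popleft()
        if t != last_t:
            # apply exponential leak for the elapsed timesteps (simplified)
            potentials = [v * leak ** (t - last_t) for v in potentials]
            last_t = t
        for dst in range(n_out):
            potentials[dst] += weights[src][dst]
            if potentials[dst] >= threshold:
                out_spikes.append((t, dst))
                potentials[dst] = 0.0  # reset membrane potential after firing
    return out_spikes

# Two input events over two timesteps; both output neurons cross
# threshold at t=1 after the second event arrives.
events = deque([(0, 0), (1, 1)])
print(run_layer(events, [[0.6, 0.2], [0.5, 0.9]]))  # → [(1, 0), (1, 1)]
```

If no events arrive, no potentials are updated at all; this data-dependent sparsity is the property the paper's empirical comparison against CNN accelerators hinges on.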


Notes

  1. https://github.com/SpinalHDL/SpinalHDL
  2. https://keras.io


Acknowledgments

The paper has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 450987171.

Author information

Corresponding author

Correspondence to Patrick Plagwitz.


Ethics declarations

Disclosure of Interests

The authors declare no competing interests relevant to the content of this article.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Plagwitz, P., Hannig, F., Teich, J., Keszocze, O. (2024). SNN vs. CNN Implementations on FPGAs: An Empirical Evaluation. In: Skliarova, I., Brox Jiménez, P., Véstias, M., Diniz, P.C. (eds) Applied Reconfigurable Computing. Architectures, Tools, and Applications. ARC 2024. Lecture Notes in Computer Science, vol 14553. Springer, Cham. https://doi.org/10.1007/978-3-031-55673-9_1

  • DOI: https://doi.org/10.1007/978-3-031-55673-9_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-55672-2

  • Online ISBN: 978-3-031-55673-9

  • eBook Packages: Computer Science; Computer Science (R0)
