Research Article
DOI: 10.1145/3517343.3517352

Online learning in SNNs with e-prop and Neuromorphic Hardware

Published: 03 May 2022

ABSTRACT

Online learning in neural networks has the potential to transform AI research. By enabling new information to be assimilated into existing systems, platforms can adapt to unseen data and personalise performance to an individual. A common approach to providing AI to a user is to send queries to a remote cloud service, which processes the information and sends back a response. Neuromorphic hardware offers an alternative solution by providing a dedicated computing platform on which neural networks can be run locally and efficiently. This work explores the potential of the SpiNNaker neuromorphic hardware to run the eligibility propagation (e-prop) algorithm on chip whilst learning online in real time.
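
For readers unfamiliar with the rule named in the abstract, the sketch below is a minimal NumPy illustration of the e-prop update as it is usually formulated for a recurrent layer of leaky integrate-and-fire neurons: eligibility traces are built forward in time from a surrogate spike derivative and low-pass filtered presynaptic activity, then combined with a learning signal delivered through fixed random feedback weights. This is a sketch of the general algorithm under assumed parameters, not the authors' SpiNNaker implementation; all sizes, constants, and names (eprop_trial, alpha, v_th, gamma, eta, B) are made up for the example.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec, n_out = 20, 50, 2   # assumed layer sizes
alpha = 0.9                      # membrane decay factor
v_th = 1.0                       # spike threshold
gamma = 0.3                      # pseudo-derivative scale
eta = 1e-3                       # learning rate

W_in = rng.normal(0.0, 0.1, (n_rec, n_in))
W_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))
W_out = rng.normal(0.0, 0.1, (n_out, n_rec))
B = rng.normal(0.0, 0.1, (n_rec, n_out))   # fixed random feedback weights

def eprop_trial(x, y_target):
    # x: (T, n_in) input spikes, y_target: (T, n_out) target readout
    T = x.shape[0]
    v = np.zeros(n_rec)          # membrane potentials
    z = np.zeros(n_rec)          # recurrent spikes from the previous step
    x_bar = np.zeros(n_in)       # low-pass trace of input spikes
    z_bar = np.zeros(n_rec)      # low-pass trace of recurrent spikes
    dW_in = np.zeros_like(W_in)
    dW_rec = np.zeros_like(W_rec)
    for t in range(T):
        # LIF dynamics with subtractive reset after a spike
        v = alpha * v + W_in @ x[t] + W_rec @ z - z * v_th
        z = (v > v_th).astype(float)
        # pseudo-derivative (surrogate gradient) of the threshold function
        psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))
        # eligibility traces: filtered presynaptic activity times pseudo-derivative
        x_bar = alpha * x_bar + x[t]
        z_bar = alpha * z_bar + z
        e_in = psi[:, None] * x_bar[None, :]
        e_rec = psi[:, None] * z_bar[None, :]
        # learning signal: readout error fed back through fixed random weights
        err = W_out @ z - y_target[t]
        L = B @ err
        # accumulate weight updates; they could equally be applied every step
        dW_in -= eta * L[:, None] * e_in
        dW_rec -= eta * L[:, None] * e_rec
    return dW_in, dW_rec

# Example usage with sparse random input spikes and a constant target
T = 100
x = (rng.random((T, n_in)) < 0.05).astype(float)
y_target = np.tile([0.5, -0.5], (T, 1))
dW_in, dW_rec = eprop_trial(x, y_target)

Because the eligibility traces and learning signal are computed forward in time, the updates need no backward pass through the trial, which is the property that makes e-prop a candidate for the kind of online, on-chip learning the paper targets.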


Published in

NICE '22: Proceedings of the 2022 Annual Neuro-Inspired Computational Elements Conference
March 2022, 122 pages
ISBN: 9781450395595
DOI: 10.1145/3517343
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

