DOI: 10.1145/3589737.3605967

Interfacing Neuromorphic Hardware with Machine Learning Frameworks - A Review

Published: 28 August 2023

Abstract

With the emergence of neuromorphic hardware as a promising low-power parallel computing platform, the need for tools that allow researchers and engineers to interact efficiently with such hardware is growing rapidly. Machine learning frameworks like TensorFlow, PyTorch and JAX have been instrumental to the success of machine learning in recent years, as they enable seamless interaction with traditional machine learning accelerators such as GPUs and TPUs. In stark contrast, interfacing with neuromorphic hardware remains difficult, since these frameworks do not address the challenges associated with mapping neural network models and algorithms to physical hardware. In this paper, we review the various strategies employed throughout the neuromorphic computing community to tackle these challenges and categorize them according to their methodologies and implementation effort. This classification serves as a guideline for device engineers and software developers alike, enabling them to choose the solution that best fits their demands and available resources. Finally, we provide a JAX-based proof-of-concept implementation of a compilation pipeline tailored to the needs of researchers in the early stages of device development, where parts of the computational graph can be mapped onto custom hardware via operations exposed through a C++ or Python interface. The code is available at https://github.com/PGI15/xbarax.
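
To make the described mechanism concrete, the following is a minimal sketch, not the authors' xbarax implementation, of how a single operation in a JAX computational graph could be offloaded to custom hardware exposed through a Python interface. The `crossbar_matvec` driver call is a hypothetical stand-in (emulated here with NumPy), and the backward pass assumes an ideal linear model of the device; `jax.pure_callback` and `jax.custom_vjp` are used so the surrounding network remains jit-compilable and differentiable.

```python
# Sketch only: `crossbar_matvec` stands in for a Python binding to a crossbar
# driver and is emulated in NumPy; gradients use an ideal software model.
import numpy as np
import jax
import jax.numpy as jnp


def crossbar_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Hypothetical device call: program `weights`, apply input vector `x`."""
    return np.asarray(weights) @ np.asarray(x)  # emulation only


@jax.custom_vjp
def hw_matvec(weights, x):
    # Leave the traced graph via a callback into the (emulated) device driver.
    out_shape = jax.ShapeDtypeStruct((weights.shape[0],), x.dtype)
    return jax.pure_callback(crossbar_matvec, out_shape, weights, x)


def hw_matvec_fwd(weights, x):
    return hw_matvec(weights, x), (weights, x)


def hw_matvec_bwd(residuals, g):
    # Gradients w.r.t. (weights, x) from an ideal linear model of the device.
    weights, x = residuals
    return jnp.outer(g, x), weights.T @ g


hw_matvec.defvjp(hw_matvec_fwd, hw_matvec_bwd)


@jax.jit
def loss(weights, x, target):
    # Everything around the offloaded matvec stays ordinary, jit-able JAX.
    y = jax.nn.relu(hw_matvec(weights, x))
    return jnp.mean((y - target) ** 2)


w = 0.1 * jnp.ones((4, 3))
x = jnp.arange(3.0)
t = jnp.zeros(4)
print(loss(w, x, t), jax.grad(loss)(w, x, t).shape)
```

In a full pipeline the callback body would instead talk to the device (for instance through C++ bindings), while the remainder of the computational graph is still compiled and executed by the standard JAX/XLA toolchain.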

Cited By

  • (2024) jaxsnn: Event-driven Gradient Estimation for Analog Neuromorphic Hardware. 2024 Neuro Inspired Computational Elements Conference (NICE), pp. 1-6. DOI: 10.1109/NICE61972.2024.10548709. Online publication date: 23-Apr-2024.
  • (2024) SNNAX - Spiking Neural Networks in JAX. 2024 International Conference on Neuromorphic Systems (ICONS), pp. 251-255. DOI: 10.1109/ICONS62911.2024.00044. Online publication date: 30-Jul-2024.
  • (2024) Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing. Nature Communications 15:1. DOI: 10.1038/s41467-024-52259-9. Online publication date: 16-Sep-2024.


Published In

ICONS '23: Proceedings of the 2023 International Conference on Neuromorphic Systems
August 2023
270 pages
ISBN: 9798400701757
DOI: 10.1145/3589737
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article

Funding Sources

  • BMBF

Conference

ICONS '23

Acceptance Rates

Overall Acceptance Rate 13 of 22 submissions, 59%
