DOI: 10.1145/2830772.2830789
research-article

Neuromorphic accelerators: a comparison between neuroscience and machine-learning approaches

Published: 05 December 2015

Abstract

A vast array of devices, from industrial robots to self-driving cars and smartphones, requires increasingly sophisticated processing of real-world input data (images, voice, radio, ...). Interestingly, hardware neural network accelerators are re-emerging as attractive candidate architectures for such tasks. The candidate neural network algorithms come from two largely separate domains: machine learning and neuroscience. These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation, yet few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merits of each approach in terms of energy, speed, area cost, accuracy, and functionality.
Within the limits of our study (current SNN and machine-learning NN algorithms, our current best-effort hardware implementations, and the workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired by neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural network accelerators inspired by machine learning, such as MLP+BP: not only in terms of accuracy, but also, less expectedly, in terms of hardware cost for realistic implementations. However, we also show that SNN+STDP carries potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where they matter less). We further identify the key sources of inaccuracy of SNN+STDP, which are related less to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we note that for the category of applications which require permanent online learning and only moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.
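To make the neuroscience-style family concrete, the following is a minimal, illustrative sketch of the kind of model the abstract refers to: a single leaky integrate-and-fire (LIF) neuron trained with pair-based STDP. Every constant here (`tau_m`, `v_th`, `a_plus`, `a_minus`, `tau_trace`) and the input statistics are assumptions chosen for readability; they do not reproduce the paper's hardware design or benchmark settings.

```python
import numpy as np

# Illustrative LIF neuron with pair-based STDP: potentiate a synapse when
# its input spike precedes an output spike, depress it when it follows one.
# All constants are hypothetical, not taken from the paper.
rng = np.random.default_rng(0)

n_in, n_steps = 16, 500
tau_m, v_th = 20.0, 1.0          # membrane time constant (steps), firing threshold
a_plus, a_minus = 0.01, 0.012    # potentiation / depression step sizes
tau_trace = 20.0                 # decay constant of the eligibility traces

w = rng.uniform(0.05, 0.2, n_in)  # synaptic weights
v = 0.0                           # membrane potential
pre_trace = np.zeros(n_in)        # recent presynaptic spike history
post_trace = 0.0                  # recent postsynaptic spike history
n_post = 0                        # output spikes emitted

for _ in range(n_steps):
    pre = (rng.random(n_in) < 0.1).astype(float)  # Poisson-like input spikes
    pre_trace *= np.exp(-1.0 / tau_trace)         # decay both traces
    post_trace *= np.exp(-1.0 / tau_trace)
    pre_trace += pre

    # LTD: an input spike arriving after a recent output spike weakens its synapse
    w -= a_minus * post_trace * pre

    # Leaky integration of the weighted input spikes
    v = v * np.exp(-1.0 / tau_m) + w @ pre
    if v >= v_th:                                 # output spike: reset, then potentiate
        n_post += 1
        v = 0.0
        post_trace += 1.0
        # LTP: synapses whose inputs fired shortly before are strengthened
        w += a_plus * pre_trace

    np.clip(w, 0.0, 1.0, out=w)                   # keep weights bounded
```

An MLP+BP counterpart would instead consume real-valued inputs and update `w` from a backpropagated gradient; the paper's comparison implements both styles down to the layout.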

    Published In

    MICRO-48: Proceedings of the 48th International Symposium on Microarchitecture
    December 2015
    787 pages
    ISBN:9781450340342
    DOI:10.1145/2830772

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. accelerator
    2. comparison
    3. neuromorphic


    Acceptance Rates

    MICRO-48 paper acceptance rate: 61 of 283 submissions (22%)
    Overall acceptance rate: 484 of 2,242 submissions (22%)
