DOI: 10.1145/2627369.2627625

research-article

SPINDLE: SPINtronic deep learning engine for large-scale neuromorphic computing

Published: 11 August 2014

ABSTRACT

Deep Learning Networks (DLNs) are bio-inspired large-scale neural networks that are widely used in emerging vision, analytics, and search applications. The high computation and storage requirements of DLNs have led to the exploration of various avenues for their efficient realization. Concurrently, the ability of emerging post-CMOS devices to efficiently mimic neurons and synapses has led to great interest in their use for neuromorphic computing.

We describe SPINDLE, a programmable processor for deep learning based on spintronic devices. SPINDLE exploits the unique ability of spintronic devices to realize highly dense and energy-efficient neurons and memory, which form the fundamental building blocks of DLNs. SPINDLE consists of a three-tier hierarchy of processing elements to capture the nested parallelism present in DLNs, and a two-level memory hierarchy to facilitate data reuse. It can be programmed to execute DLNs with widely varying topologies for different applications. SPINDLE employs techniques to limit the overheads of spin-to-charge conversion, and utilizes output and weight quantization to enhance the efficiency of spin-neurons. We evaluate SPINDLE using a device-to-architecture modeling framework and a set of widely used DLN applications (handwriting recognition, face detection, and object recognition). Our results indicate that SPINDLE achieves a 14.4X reduction in energy consumption and a 20.4X reduction in energy-delay product (EDP) over the CMOS baseline under iso-area conditions.
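The output and weight quantization mentioned in the abstract can be illustrated with a generic uniform quantizer. This is only an illustrative sketch, not the paper's actual spin-neuron quantization scheme; the bit-widths, value range, and tanh activation below are assumptions.

```python
import numpy as np

def quantize(x, num_bits, x_max):
    """Uniform mid-rise quantizer: map x in [-x_max, x_max] to 2**num_bits levels."""
    levels = 2 ** num_bits
    step = 2.0 * x_max / levels
    code = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
    return (code + 0.5) * step

# Toy neuron with quantized weights and a quantized output
# (bit-widths and the tanh activation are illustrative assumptions).
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=16)            # full-precision weights
x = rng.uniform(-1.0, 1.0, size=16)            # inputs
w_q = quantize(w, num_bits=4, x_max=1.0)       # 4-bit weight quantization
y = np.tanh(w_q @ x)                           # neuron activation
y_q = quantize(y, num_bits=3, x_max=1.0)       # 3-bit output quantization
```

Coarser quantization reduces the precision (and hence energy) required of each analog spin-neuron evaluation, at the cost of some accuracy; the paper's evaluation explores this trade-off on real DLN workloads.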


Published in

ISLPED '14: Proceedings of the 2014 International Symposium on Low Power Electronics and Design
August 2014, 398 pages
ISBN: 9781450329750
DOI: 10.1145/2627369

      Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ISLPED '14 paper acceptance rate: 63 of 184 submissions (34%). Overall acceptance rate: 398 of 1,159 submissions (34%).
