The Next Generation of Deep Learning Hardware: Analog Computing

Abstract:

Initially developed for gaming and 3-D rendering, graphics processing units (GPUs) were recognized to be a good fit for accelerating deep learning training. The simple mathematical structure of deep learning can easily be parallelized and therefore takes advantage of GPUs in a natural way. Further progress in compute efficiency for deep learning training can be made by exploiting the more random and approximate nature of deep learning workflows. In the digital domain, this means trading numerical precision against accuracy in exchange for compute efficiency. It also opens the possibility of revisiting analog computing, which is intrinsically noisy, to execute the matrix operations for deep learning in constant time on arrays of nonvolatile memories. Current nonvolatile memory materials, however, are of limited use for taking full advantage of this in-memory compute paradigm. A detailed analysis and design guidelines for how these materials need to be reengineered for optimal performance in the deep learning space show a strong deviation from the materials used in memory applications.
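
To make the in-memory compute paradigm described above concrete, the following is a minimal Python sketch (not taken from the paper) of a matrix-vector product as a resistive crossbar would perform it: weights are stored as device conductances, the input vector is applied as voltages, and column currents accumulate the products in a single step. The function name analog_matvec and the Gaussian conductance perturbation (noise_std) are illustrative assumptions standing in for real device nonidealities.

```python
import numpy as np

def analog_matvec(weights, x, noise_std=0.02, rng=None):
    """Simulate a matrix-vector product on a resistive crossbar array.

    Each weight is stored as a device conductance; the input vector is
    applied as voltages, and the column currents implement all
    multiply-accumulate operations in parallel (Ohm's and Kirchhoff's
    laws). A Gaussian perturbation of the conductances is a crude stand-in
    for the intrinsic noise of analog, nonvolatile-memory devices.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy_weights = weights * (1.0 + rng.normal(0.0, noise_std, weights.shape))
    return noisy_weights @ x

# Compare the noisy "analog" result against an exact digital product.
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 256))
x = rng.standard_normal(256)
exact = W @ x
approx = analog_matvec(W, x, noise_std=0.02, rng=rng)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The point of the sketch is the constant-time character of the operation: the entire matrix-vector product corresponds to one read of the array, regardless of matrix size, at the cost of a bounded, noise-induced relative error of the kind deep learning training can often tolerate.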
Published in: Proceedings of the IEEE ( Volume: 107, Issue: 1, January 2019)
Page(s): 108 - 122
Date of Publication: 12 October 2018

Publisher: IEEE
