
Sub-uJ deep neural networks for embedded applications


Abstract:

To intelligently process sensor data on internet of things (IoT) devices, we require powerful classifiers that can operate at sub-uJ energy levels. Previous work has focused on spiking neural network (SNN) algorithms, which are well suited to VLSI implementation due to the single-bit connections between neurons in the network. In contrast, deep neural networks (DNNs) are not as well suited to hardware implementation, because the compute and storage demands are high. In this paper, we demonstrate that there are a variety of optimizations that can be applied to DNNs to reduce the energy consumption such that they outperform SNNs in terms of energy and accuracy. Six optimizations are surveyed and applied to a SIMD accelerator architecture. The accelerator is implemented in a 28nm SoC test chip. Measurement results demonstrate ~10X aggregate improvement in energy efficiency, with a minimum energy of 0.36uJ/inference at 667MHz clock frequency. Compared to previously published spiking neural network accelerators, we demonstrate an improvement in energy efficiency of more than an order of magnitude, across a wide energy-accuracy trade-off range.
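The abstract does not enumerate the six optimizations. As a generic illustration of the kind of DNN energy optimization the paper refers to (not the authors' implementation), the sketch below shows uniform fixed-point weight quantization in Python/NumPy: weights are stored as 8-bit integers plus one scale factor, cutting weight storage by 4x versus float32 at a small accuracy cost. The function name quantize_weights and all parameter choices are hypothetical.

import numpy as np

def quantize_weights(w, num_bits=8):
    # Map the largest-magnitude weight to the largest signed code,
    # so real values are approximately codes * scale.
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit signed
    scale = float(np.max(np.abs(w))) / qmax
    codes = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

# Toy fully connected layer (sizes are illustrative only)
rng = np.random.default_rng(0)
w_fp32 = rng.standard_normal((256, 64)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

codes, scale = quantize_weights(w_fp32)
y_ref = x @ w_fp32                             # full-precision reference
y_q = (x @ codes.astype(np.float32)) * scale   # dequantized result

print("weight storage: %d -> %d bytes" % (w_fp32.nbytes, codes.nbytes))
print("max abs output error: %.4f" % np.max(np.abs(y_ref - y_q)))

In a hardware accelerator the same idea lets multiply-accumulate units and weight memories operate at reduced bit widths, which is one common route to the energy savings discussed above.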
Date of Conference: 29 October 2017 - 01 November 2017
Date Added to IEEE Xplore: 16 April 2018
Electronic ISSN: 2576-2303
Conference Location: Pacific Grove, CA, USA
