Exploring Energy and Accuracy Tradeoff in Structure Simplification of Trained Deep Neural Networks


Abstract:

This paper presents a structure simplification procedure that enables efficient energy-accuracy tradeoffs in the implementation of trained deep neural networks (DNNs). The procedure identifies and eliminates redundant neurons in any layer of a DNN based on the output values of those neurons over the training samples, and it may be applied to all layers of the network. Pareto-optimal points in terms of accuracy and energy consumption can then be identified from the configurations across different layers. After redundant neurons are discarded, the weights connected to the remaining neurons are updated using matrix multiplication, without retraining; retraining may still be applied if desired to further fine-tune performance. In our experiments, we show an energy-accuracy tradeoff that provides clear guidance for efficient realization of trained DNNs. We also observe significant implementation cost reductions, up to 11.50× in energy and 12.30× in memory, while the relative loss in accuracy is negligible. Finally, we discuss and validate strategies for applying the procedure to multiple layers of a DNN so as to maximize the compression ratio of the simplified network subject to an overall budget on accuracy degradation across all simplified layers.
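
The abstract's key mechanism, eliminating redundant neurons and then compensating the remaining weights with a matrix multiplication, can be illustrated with a short sketch. The abstract does not give the exact redundancy criterion or update formula, so the Python sketch below assumes a common least-squares variant: each discarded neuron's output over the training samples is approximated as a linear combination of the kept neurons' outputs, and that combination is folded into the next layer's weights. The function name simplify_layer and the choice of keep_idx as an input are illustrative assumptions, not the authors' API.

import numpy as np

def simplify_layer(acts, W_next, keep_idx):
    # acts:     (samples, neurons) outputs of this layer over the training set
    # W_next:   (neurons, next_neurons) weights from this layer to the next
    # keep_idx: indices of neurons retained after the redundancy analysis
    num_neurons = acts.shape[1]
    drop_idx = np.setdiff1d(np.arange(num_neurons), keep_idx)

    # Approximate each discarded neuron's output as a linear combination
    # of the kept neurons' outputs (least squares over the training samples).
    A_keep = acts[:, keep_idx]                                # (S, K)
    A_drop = acts[:, drop_idx]                                # (S, D)
    C, _, _, _ = np.linalg.lstsq(A_keep, A_drop, rcond=None)  # (K, D)

    # Fold the discarded neurons' outgoing weights into the kept ones:
    # A_drop @ W_next[drop_idx] ~= A_keep @ (C @ W_next[drop_idx]),
    # so the compensated weights are obtained by one matrix product.
    return W_next[keep_idx] + C @ W_next[drop_idx]

Under these assumptions, the compressed layer keeps only the columns keep_idx of its incoming weight matrix and uses the returned matrix as its outgoing weights; no retraining is required, although, as the abstract notes, retraining can still be applied afterwards to fine-tune performance.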
Page(s): 836 - 848
Date of Publication: 04 May 2018


