
Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers


Abstract:

Neural Networks (NNs) have emerged as a fundamental technology for machine learning. The sparsity of weights and activations in NNs varies widely, from 5% to 90%, and can potentially lower computation requirements. However, existing designs lack a universal solution that efficiently handles the different sparsity levels found across layers and networks. This work, named STICKER, first systematically explores NN sparsity for both inference and online tuning operations. Its major contributions are: 1) an autonomous NN sparsity detector that switches the processor's operating modes; 2) multi-sparsity-compatible convolution (CONV) PE arrays, containing a multi-mode memory that supports different sparsity levels and set-associative PEs that support both dense and sparse operations while reducing memory area by 92% compared with previous hash memory banks; 3) an online tuning PE for sparse FC layers that achieves a 32.5x speedup over a conventional CPU, using quantization-center-based weight updating and Compressed Sparse Column (CSC) based back propagation. Peak energy efficiency of the 65nm STICKER chip reaches 62.1 TOPS/W at 8-bit data length.
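To make the CSC idea concrete, the sketch below (an illustration of the general CSC format, not the chip's actual implementation; all function names are hypothetical) stores, per column, only the nonzero values and their row indices, so a matrix-vector product touches no zero weights and can also skip zero activations:

```python
# Illustrative CSC (Compressed Sparse Column) sketch -- not STICKER's
# hardware design. A matrix is stored as three arrays: nonzero values,
# their row indices, and per-column start offsets (col_ptr).

def to_csc(dense):
    """Convert a dense row-major matrix to CSC (values, row_idx, col_ptr)."""
    rows, cols = len(dense), len(dense[0])
    values, row_idx, col_ptr = [], [], [0]
    for c in range(cols):
        for r in range(rows):
            if dense[r][c] != 0:
                values.append(dense[r][c])
                row_idx.append(r)
        col_ptr.append(len(values))  # column c's nonzeros end here
    return values, row_idx, col_ptr

def csc_matvec(values, row_idx, col_ptr, x, rows):
    """y = A @ x, iterating only over stored nonzero weights."""
    y = [0] * rows
    for c, xc in enumerate(x):
        if xc == 0:              # activation sparsity: skip zero inputs
            continue
        for k in range(col_ptr[c], col_ptr[c + 1]):
            y[row_idx[k]] += values[k] * xc
    return y

W = [[0, 2, 0],
     [1, 0, 0],
     [0, 3, 4]]
vals, ridx, cptr = to_csc(W)
print(csc_matvec(vals, ridx, cptr, [1, 0, 2], rows=3))  # -> [0, 1, 8]
```

Because CSC groups nonzeros by column, it pairs naturally with back propagation through an FC layer, where each input unit's contribution corresponds to one column of the weight matrix.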
Date of Conference: 18-22 June 2018
Date Added to IEEE Xplore: 25 October 2018
Print on Demand (PoD) ISSN: 2158-5601
Conference Location: Honolulu, HI, USA

