
Structured Term Pruning for Computational Efficient Neural Networks Inference



Abstract:

State-of-the-art convolutional neural network accelerators increasingly exploit bit-level sparsity by eliminating the ineffectual computations on zero bits. However, the excessive redundancy and the irregular distribution of nonzero bits limit the real speedup achieved in these accelerators. To address this, we propose an algorithm-architecture codesign, named structured term pruning (STP), to boost the computational efficiency of neural network inference. Specifically, we enhance bit sparsity by guiding the weights toward values with fewer power-of-two terms, and then structure the remaining terms under layer-wise group budgets. Retraining is adopted to recover the accuracy drop. We also design the hardware of the group processing element and a fast signed-digit encoder for efficient implementation of STP networks. The system design of STP is realized with minor alterations to an input-stationary systolic array design. Extensive evaluation results demonstrate that STP significantly reduces inference computation costs, achieving a 2.35× computational energy saving for the ResNet18 network on the ImageNet dataset.
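To make the core idea concrete, the sketch below illustrates (in Python) the two operations the abstract names: decomposing a quantized integer weight into signed power-of-two terms via a canonical signed-digit (NAF-style) recoding, and enforcing a per-group term budget by keeping only the largest-exponent terms. This is a minimal illustration under assumed conventions, not the paper's implementation: the function names, the group size, the budget value, and the smallest-term-first pruning rule are hypothetical, and the weight-guiding regularization and retraining steps are omitted.

```python
import numpy as np

def signed_digit_terms(w_int):
    """Decompose an integer weight into signed power-of-two terms using a
    non-adjacent-form (canonical signed-digit) recoding, which minimizes
    the number of nonzero terms. Returns a list of (sign, exponent)."""
    terms = []
    v = abs(int(w_int))
    sign = 1 if w_int >= 0 else -1
    k = 0
    while v != 0:
        if v & 1:
            # Pick digit +1 or -1 so the remainder becomes divisible by 4,
            # which guarantees no two adjacent nonzero digits.
            digit = 1 if (v & 3) == 1 else -1
            terms.append((sign * digit, k))
            v -= digit
        v >>= 1
        k += 1
    return terms

def prune_group_terms(weights, budget):
    """Keep at most `budget` power-of-two terms across a group of quantized
    weights, dropping the smallest-magnitude (lowest-exponent) terms first,
    and reconstruct the pruned integer weights."""
    all_terms = []
    for i, w in enumerate(weights):
        for s, e in signed_digit_terms(w):
            all_terms.append((e, i, s))
    all_terms.sort(reverse=True)          # highest exponents first
    kept = all_terms[:budget]             # enforce the group budget
    pruned = np.zeros(len(weights), dtype=np.int32)
    for e, i, s in kept:
        pruned[i] += s * (1 << e)         # rebuild each weight from kept terms
    return pruned

# Example: a group of 4 quantized weights pruned to a budget of 5 terms.
w = np.array([7, -5, 12, 3], dtype=np.int32)
print(prune_group_terms(w, budget=5))     # -> [ 8 -4 12  4]
```

Because every group is guaranteed to contain at most `budget` nonzero terms after pruning, hardware that processes one term per cycle can schedule groups with a fixed, regular latency, which is what makes the sparsity exploitable by a systolic-array design.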
Page(s): 190 - 203
Date of Publication: 19 April 2022

