
Flexible-width Bit-level Compressor for Convolutional Neural Network


Abstract:

In this paper, a weight compression technique named Flexible-width Bit-level (FWBL) coding is proposed to compress convolutional neural network (CNN) models without re-training. FWBL splits the weight parameters into independent size-optimized blocks and uses just-enough bits for each block. Bit-level run-length coding is employed on the high bits (HBs) to further compress the redundancy caused by the non-uniform weight distribution. We implemented a configurable hardware decoder and synthesized it in TSMC 28nm technology. Results show that FWBL achieves an average compression ratio of 1.6, close to that of Huffman coding. The decoder has a throughput of 3.7 GB/s running at 1.1 GHz with a power dissipation of 3.55 mW, which is 17.9x and 21x better than prior work in throughput and energy efficiency, respectively. Implemented on FPGA, our decoder is 3.36x and 4.96x better than various Huffman decoders in throughput and area efficiency, making FWBL a promising weight compression technique for mobile CNN applications.
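The core idea in the abstract, blocking the weights and spending just-enough bits per block, can be illustrated with a minimal sketch. Note the assumptions: a fixed block size of 16 and a 4-bit per-block width header are illustrative choices, and the paper's size-optimized block partitioning and run-length coding of the high bits are omitted here.

```python
import numpy as np

def fwbl_encode(weights, block_size=16):
    """Sketch of flexible-width block coding: each block stores a small
    width header, then every weight in that block at that bit-width.
    block_size=16 is an illustrative assumption, not the paper's choice."""
    blocks = []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        # Bits needed for the largest magnitude in the block, plus a sign bit.
        width = max(1, int(np.max(np.abs(block))).bit_length() + 1)
        # Store each weight in `width`-bit two's complement.
        blocks.append((width, [int(w) & ((1 << width) - 1) for w in block]))
    return blocks

def fwbl_size_bits(encoded, header_bits=4):
    # Total coded size: one width header per block plus width bits per weight.
    return sum(header_bits + width * len(vals) for width, vals in encoded)

# Usage: small-magnitude weights compress well against a fixed 8-bit format.
weights = np.array([1, -2, 0, 3] * 8)          # 32 int8 weights
encoded = fwbl_encode(weights)
ratio = (8 * len(weights)) / fwbl_size_bits(encoded)
```

Because each block's width is chosen independently, regions of small weights cost few bits, which is where the coding gain over a fixed-width format comes from.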
Date of Conference: 06-09 June 2021
Date Added to IEEE Xplore: 23 June 2021
Conference Location: Washington, DC, USA

