
Encoder Optimizations For The NNR Standard On Neural Network Compression


Abstract:

The novel Neural Network Compression and Representation Standard (NNR), recently issued by ISO/IEC MPEG, achieves very high coding gains, compressing neural networks to 5% of their original size without accuracy loss. The underlying NNR encoder technology includes parameter quantization, followed by efficient arithmetic coding, namely DeepCABAC. In addition, NNR allows very flexible adaptations, such as signaling specific local scaling values, setting quantization parameters per tensor rather than per network, and supporting specific parameter fusion operations. This paper presents our new approach for optimally deriving these parameters, namely the derivation of parameters for local scaling adaptation (LSA), inference-optimized quantization (IOQ), and batch-norm folding (BNF). By allowing inference and fine-tuning within the encoding process, quantization errors are reduced and the NNR coding efficiency is further improved, yielding a compressed bitstream of only 3% of the original model size.
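To make the encoder-side steps concrete, the following minimal NumPy sketch illustrates batch-norm folding (BNF) and a least-squares per-channel rescaling in the spirit of local scaling adaptation (LSA). The function names, the uniform step size qp_step, and the least-squares derivation of the local scales are illustrative assumptions for this sketch, not the NNR reference implementation; inference-optimized quantization (IOQ) would additionally re-run inference on calibration data to select the step size per tensor.

```python
import numpy as np

def fold_batch_norm(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold a batch-norm layer into the preceding conv/linear layer (BNF).

    weight has shape (out_channels, ...); the BN statistics and affine
    parameters (gamma, beta, mean, var) are per output channel.
    """
    scale = gamma / np.sqrt(var + eps)          # per-channel BN scale
    shape = (-1,) + (1,) * (weight.ndim - 1)    # broadcast over channel axis
    folded_weight = weight * scale.reshape(shape)
    folded_bias = (bias - mean) * scale + beta
    return folded_weight, folded_bias

def quantize_with_local_scaling(weight, qp_step):
    """Uniform quantization plus per-channel rescaling (LSA-style).

    After quantizing with a single step size, a least-squares scale per
    output channel compensates part of the quantization error; the
    integer levels would then be entropy-coded (e.g., with DeepCABAC).
    """
    q = np.round(weight / qp_step)              # integer levels to be coded
    deq = q * qp_step                           # dequantized tensor
    w2 = weight.reshape(weight.shape[0], -1)
    d2 = deq.reshape(deq.shape[0], -1)
    num = np.sum(w2 * d2, axis=1)
    den = np.sum(d2 * d2, axis=1)
    local_scales = np.where(den > 0, num / np.maximum(den, 1e-12), 1.0)
    return q.astype(np.int32), local_scales

# Toy usage: fold BN into a 4-channel conv, then quantize with local scaling.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
b = np.zeros(4)
gamma, beta = np.ones(4), np.zeros(4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
wf, bf = fold_batch_norm(w, b, gamma, beta, mean, var)
levels, scales = quantize_with_local_scaling(wf, qp_step=0.05)
```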
Date of Conference: 19-22 September 2021
Date Added to IEEE Xplore: 23 August 2021
Conference Location: Anchorage, AK, USA

