Lattice Quantization


Abstract:

Post-training quantization of neural networks consists of quantizing a model without retraining or hyperparameter search, while remaining fast and data-frugal. In this paper, we propose LatticeQ, a novel post-training weight quantization method designed for deep convolutional neural networks (DCNNs). Contrary to the scalar rounding widely used in state-of-the-art quantization methods, LatticeQ uses a quantizer based on lattices, which are discrete algebraic structures. LatticeQ exploits correlations between the model parameters to minimize quantization error. We achieve state-of-the-art results in post-training quantization. In particular, we achieve ImageNet classification results close to full precision on ResNet-18/50, with little to no accuracy drop for 4-bit models. Our code is available online, along with an extended version of the paper.
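To illustrate the general idea contrasted in the abstract, the sketch below compares scalar rounding with lattice quantization: weights are grouped into small vectors and snapped to a nearby point of a lattice (here the D4 lattice, using Babai rounding) instead of being rounded coordinate by coordinate. This is a minimal, assumption-laden sketch of lattice quantization in general, not the authors' LatticeQ implementation; the choice of lattice, the step size, and all helper names are illustrative.

```python
import numpy as np

def scalar_quantize(w, step):
    """Baseline: round each weight independently to a uniform scalar grid."""
    return np.round(w / step) * step

def lattice_quantize(w, G):
    """Quantize d-dimensional weight blocks to the lattice {z @ G : z integer}
    using Babai rounding (round the coordinates in the lattice basis, giving
    a nearby lattice point). Generic sketch, not the exact LatticeQ quantizer."""
    d = G.shape[0]
    pad = (-len(w)) % d
    blocks = np.pad(w, (0, pad)).reshape(-1, d)   # group weights into d-dim vectors
    z = np.round(blocks @ np.linalg.inv(G))       # integer coordinates in the basis G
    q = z @ G                                     # map back to lattice points
    return q.reshape(-1)[:len(w)]

# Example lattice: D4 (integer vectors in Z^4 with even coordinate sum),
# scaled by a step size; rows of G_D4 are the basis vectors.
step = 0.05
G_D4 = step * np.array([[1, -1, 0, 0],
                        [0, 1, -1, 0],
                        [0, 0, 1, -1],
                        [0, 0, 1,  1]], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000)              # toy "layer weights"
mse_scalar  = np.mean((w - scalar_quantize(w, step)) ** 2)
mse_lattice = np.mean((w - lattice_quantize(w, G_D4)) ** 2)
print(f"scalar rounding MSE:      {mse_scalar:.3e}")
print(f"lattice quantization MSE: {mse_lattice:.3e}")
```

A real post-training pipeline would additionally tune the lattice scale per layer against the weight distribution and match the bit budget between the two quantizers before comparing errors; the snippet above only shows the mechanics of rounding in a lattice basis.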
Date of Conference: 17-19 April 2023
Date Added to IEEE Xplore: 02 June 2023
Print on Demand (PoD) ISBN: 979-8-3503-9624-9

Conference Location: Antwerp, Belgium
