Abstract:
Data quantization has proven to be an effective method to compress deep neural networks (DNNs) by using fewer bits to represent the parameters and intermediate data. The bit width of the data directly affects the memory footprint, computing capability, and energy consumption during the computation of DNN models. Although there have been numerous studies on data quantization, there is still no quantitative analysis of existing quantization methods, which results in empirical quantization with unpredictable DNN accuracy loss. To address this problem, we propose an effective method, called ultra-low loss quantization (μL2Q), that provides DNN quantization schemes based on comprehensive quantitative data analysis. μL2Q transforms the original data into a data space with a standard normal distribution and then finds the optimal parameters that minimize the quantization loss for a targeted bit width. In addition, we integrate the proposed μL2Q into the popular machine learning framework Caffe for convenient end-to-end DNN design and training. Compared to state-of-the-art DNN compression designs, μL2Q shows the greatest ability to maintain DNN accuracy after quantization. In our experiments, the proposed method delivers 4.42%, 16.70%, 1.95%, and 8.26%/5.63% accuracy improvements on Lenet-5, Cifarnet, VGG7-64, and Resnet-18 (Top-1/Top-5), respectively, compared to state-of-the-art solutions with the same compression ratio.
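To illustrate the idea described in the abstract, the sketch below standardizes a weight tensor, searches for a uniform quantization step that minimizes the L2 quantization loss for a given bit width, and maps the result back to the original scale. This is a minimal illustration only: the function names (`ul2q_quantize`, `quantize_uniform`) are hypothetical, and the grid search stands in for whatever optimal-parameter derivation the paper actually uses.

import numpy as np

def quantize_uniform(x, num_bits, step):
    """Symmetric uniform quantizer with 2**num_bits levels spaced by `step`
    (midrise levels at (i + 0.5) * step around zero)."""
    levels = 2 ** num_bits
    # index of the containing cell, clipped to the representable range
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

def ul2q_quantize(weights, num_bits, candidate_steps=None):
    """Hypothetical sketch: standardize the weights, pick the step size that
    minimizes the L2 quantization loss, then map back to the original scale."""
    mu, sigma = weights.mean(), weights.std()
    z = (weights - mu) / sigma                      # standard-normal-like space
    if candidate_steps is None:
        candidate_steps = np.linspace(0.01, 4.0, 400)
    best_step, best_loss = None, np.inf
    for step in candidate_steps:                    # grid search, illustrative only
        loss = np.mean((z - quantize_uniform(z, num_bits, step)) ** 2)
        if loss < best_loss:
            best_step, best_loss = step, loss
    zq = quantize_uniform(z, num_bits, best_step)
    return zq * sigma + mu                          # back to the original scale

# Example: quantize Gaussian-like weights to 2 bits and inspect the loss
w = np.random.randn(1024).astype(np.float32) * 0.05
wq = ul2q_quantize(w, num_bits=2)
print("L2 quantization loss:", np.mean((w - wq) ** 2))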
Date of Conference: 14-19 July 2019
Date Added to IEEE Xplore: 30 September 2019