
Pse: mixed quantization framework of neural networks for efficient deployment

Research · Published in Journal of Real-Time Image Processing

Abstract

Quantization is a promising approach for deploying deep neural networks on resource-limited devices. However, existing methods struggle to achieve both computation acceleration and parameter compression while maintaining high accuracy. To this end, we propose PSE, a mixed quantization framework that combines product quantization (PQ), scalar quantization (SQ), and error correction. Specifically, we first employ PQ to obtain a floating-point codebook and an index matrix for the weight matrix. We then use SQ to quantize the codebook into integers and reconstruct an integer weight matrix. Finally, we propose an error-correction algorithm that updates the quantized codebook to minimize the quantization error. We extensively evaluate the proposed method on various backbones, including VGG-16, ResNet-18/50, MobileNetV2, ShuffleNetV2, EfficientNet-B3/B7, and DenseNet-201, on the CIFAR-10 and ILSVRC-2012 benchmarks. The experiments demonstrate that PSE reduces computation complexity and model size with acceptable accuracy loss. For example, ResNet-18 achieves a 1.8× acceleration ratio and a 30.4× compression ratio with less than 1.54% accuracy loss on CIFAR-10.
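The PQ-then-SQ pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the sub-vector dimension, codebook size, symmetric int8 scaling, and the plain k-means routine are all assumptions made for the example, and the error-correction step is only represented by the reconstruction error it would minimize.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain k-means: returns a codebook of k centroids and the
    # assignment (index) of each sub-vector to its nearest centroid.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, assign

def pse_sketch(W, sub_dim=4, k=16):
    # Step 1 (PQ): split the weight matrix into sub-vectors and learn a
    # floating-point codebook plus an index matrix.
    rows, cols = W.shape
    subs = W.reshape(-1, sub_dim)          # cols must be divisible by sub_dim
    codebook, idx = kmeans(subs, k)
    # Step 2 (SQ): scalar-quantize the codebook to int8 with a single
    # symmetric scale (an assumed, simple SQ scheme).
    scale = np.abs(codebook).max() / 127.0
    q_codebook = np.clip(np.round(codebook / scale), -127, 127).astype(np.int8)
    # Reconstruct an integer weight matrix by looking up quantized codewords.
    W_int = q_codebook[idx].reshape(rows, cols)
    # Dequantized view, used here only to report the quantization error
    # that PSE's error-correction step would further minimize.
    W_hat = W_int.astype(np.float64) * scale
    err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    return W_int, q_codebook, idx.reshape(rows, -1), scale, err

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
W_int, q_codebook, idx, scale, err = pse_sketch(W, sub_dim=4, k=8)
```

At inference time only `W_int` (int8), the per-codebook scale, and the small index matrix need to be stored, which is where the compression ratio reported in the abstract comes from.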



Acknowledgements

This work is supported in part by the National Natural Science Foundation of China under Grant 62303405, in part by Ningbo Natural Science Foundation project under Grant 2023J400, and in part by Zhejiang Provincial Basic Public Welfare Research Project of China under Grant No. LGG22F030019.

Author information

Authors and Affiliations

Authors

Contributions

YY: Conceptualization, Software, Validation, Formal analysis, Data Curation, Writing - Original Draft, Writing-Review and Editing, Visualization. GT: Resources, Supervision, Project administration, Funding acquisition. ML: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Project administration. YC: Writing-Review and Editing. JC: Conceptualization, Writing-Review and Editing. LM: Resources, Funding acquisition. YL: Resources, Funding acquisition. YP: Resources, Funding acquisition. All authors reviewed the manuscript.

Corresponding author

Correspondence to Guanzhong Tian.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Y., Tian, G., Liu, M. et al. Pse: mixed quantization framework of neural networks for efficient deployment. J Real-Time Image Proc 20, 113 (2023). https://doi.org/10.1007/s11554-023-01366-9

