
LUT-LIC: Look-Up Table-Assisted Learned Image Compression

  • Conference paper
  • Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1966)


Abstract

Image compression is indispensable in many visual applications. Recently, learned image compression (LIC) using deep learning has surpassed traditional image codecs such as JPEG in terms of compression efficiency but at the cost of increased complexity. Thus, employing LIC in resource-limited environments is challenging. In this paper, we propose an LIC model using a look-up table (LUT) to effectively reduce the complexity. Specifically, we design an LUT replacing the entropy decoder by analyzing its input characteristics and accordingly developing a dynamic sampling method for determining the indices of the LUT. Experimental results show that the proposed method achieves better compression efficiency than traditional codecs with faster runtime than existing LIC models.
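The abstract outlines the core idea: a small learned component (here, the entropy decoder) is precomputed over sampled inputs into a look-up table so that inference needs only indexing instead of a network forward pass. The minimal Python sketch below illustrates that generic precompute-then-look-up pattern on a toy one-dimensional function; toy_entropy_decoder, the non-uniform sampling grid, and the linear interpolation are illustrative assumptions and do not reproduce the paper's actual LUT design or its dynamic sampling method.

import numpy as np

def toy_entropy_decoder(z):
    # Stand-in for the small learned component being tabulated (an assumption,
    # not the paper's actual entropy decoder).
    return np.tanh(2.0 * z) + 0.1 * z ** 2

def build_lut(sample_points):
    # Offline step: evaluate the component at every sampled input and store
    # the results as the table entries.
    return toy_entropy_decoder(sample_points)

def lut_inference(z, sample_points, lut):
    # Inference step: replace the forward pass with table look-up plus linear
    # interpolation; np.interp clamps inputs outside the sampled range.
    return np.interp(z, sample_points, lut)

if __name__ == "__main__":
    # Non-uniform sample points, denser near zero where latent-like inputs
    # concentrate; a crude stand-in for input-aware ("dynamic") sampling.
    grid = 3.0 * np.tanh(np.linspace(-3.0, 3.0, 65))
    table = build_lut(grid)

    z = np.random.default_rng(0).normal(size=1000)
    approx = lut_inference(z, grid, table)
    exact = toy_entropy_decoder(z)
    print("max abs error:", float(np.max(np.abs(approx - exact))))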

This work was supported in part by the Ministry of Trade, Industry and Energy (MOTIE) under Grant P0014268, and in part by the NRF grant funded by the Korea government (MSIT) (2021R1A2C2011474).



Author information


Correspondence to Jong-Seok Lee.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yu, S., Lee, JS. (2024). LUT-LIC: Look-Up Table-Assisted Learned Image Compression. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1966. Springer, Singapore. https://doi.org/10.1007/978-981-99-8148-9_34

  • DOI: https://doi.org/10.1007/978-981-99-8148-9_34

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8147-2

  • Online ISBN: 978-981-99-8148-9

  • eBook Packages: Computer Science, Computer Science (R0)
