
Low-light image enhancement via an attention-guided deep Retinex decomposition model

Published in Applied Intelligence

Abstract

Images acquired by optical imaging devices in low-light or back-lit environments usually yield a poor visual experience. The poor visibility and the attendant contrast or color distortion may degrade the performance of subsequent vision processing. To enhance the visibility of low-light images and mitigate the degradation of vision systems, an attention-guided deep Retinex decomposition model, dubbed Ag-Retinex-Net, is proposed. Inspired by Retinex theory, Ag-Retinex-Net first decomposes the input low-light image into two layers under an elaborate multi-term regularization, and then recomposes the refined layers into the final enhanced image via attention-guided generative adversarial learning. The multi-term constraints in the decomposition module help regularize and extract the decomposed illumination and reflectance, while the attention-guided generative adversarial learning in the recomposition module helps remove the remaining degradation. Experimental results show that the proposed Ag-Retinex-Net outperforms other Retinex-based methods in terms of both visual quality and several objective evaluation metrics.
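
At a high level, the abstract describes a two-stage pipeline: a Retinex-style decomposition of the input into reflectance and illumination, followed by an attention-guided recomposition trained adversarially. The sketch below illustrates only that structure; the module names (DecompositionNet, RecompositionNet, SpatialAttention), layer sizes, and the simple Retinex reconstruction loss are assumptions made for illustration, not the authors' actual Ag-Retinex-Net architecture or loss terms.

```python
# Minimal sketch of a Retinex-style decompose-then-recompose pipeline,
# loosely following the abstract. All module names, layer widths, and the
# reconstruction loss below are illustrative assumptions, not the paper's
# implementation.
import torch
import torch.nn as nn


class DecompositionNet(nn.Module):
    """Splits a low-light image into reflectance (3 ch) and illumination (1 ch)."""
    def __init__(self, ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(ch, 4, 3, padding=1)  # 3 reflectance + 1 illumination

    def forward(self, x):
        y = torch.sigmoid(self.out(self.features(x)))
        return y[:, :3], y[:, 3:]  # reflectance R, illumination I


class SpatialAttention(nn.Module):
    """Simple spatial attention map used to guide the recomposition."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, feat):
        return torch.sigmoid(self.conv(feat))


class RecompositionNet(nn.Module):
    """Refines R and I and recombines them into the enhanced image."""
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = SpatialAttention(ch)
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, R, I):
        feat = self.encode(torch.cat([R, I], dim=1))
        feat = feat * self.attention(feat)       # attention-guided features
        return torch.sigmoid(self.decode(feat))  # enhanced image


if __name__ == "__main__":
    low = torch.rand(1, 3, 128, 128)             # dummy low-light input
    decomp, recomp = DecompositionNet(), RecompositionNet()
    R, I = decomp(low)
    recon_loss = ((R * I - low) ** 2).mean()     # Retinex consistency: R * I should reproduce the input
    enhanced = recomp(R, I)
    print(enhanced.shape, recon_loss.item())
```

In an adversarial setup such as the one the abstract mentions, the output of the recomposition network would additionally be scored by a discriminator; that part is omitted here.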



Data Availability

The data that support the findings of this study are available from the author.


Author information

Corresponding author

Correspondence to Jie Ling.

Ethics declarations

Competing Interests

The authors declare that they have no commercial or associative interests that represent a conflict of interest in connection with the submitted work.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Luo, Y., Lv, G., Ling, J. et al. Low-light image enhancement via an attention-guided deep Retinex decomposition model. Appl Intell 55, 194 (2025). https://doi.org/10.1007/s10489-024-06044-2


  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10489-024-06044-2

Keywords