
IATN: illumination-aware two-stage network for low-light image enhancement

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Images captured in low-light environments suffer from low contrast and noise caused by uneven illumination, which can seriously degrade the accuracy of high-level computer vision tasks. Most existing enhancement methods still suffer from color distortion and noise amplification. To overcome these issues, this paper proposes an illumination-aware two-stage network (IATN) for low-light image enhancement. In the first stage, a compact illumination estimation network based on Retinex theory produces a coarse enhanced image. In the second stage, an illumination-aware correction network (IACN) uses an illumination map to guide the reconstruction of features, reducing the color distortion and suppressing the noise left by the first stage and thereby yielding a refined enhancement result. Within IACN, to account for the exposure differences across image regions caused by uneven lighting, multiple illumination-aware modules correct features at different scales by exploiting long-range feature dependencies. Extensive experiments on public benchmark datasets show that IATN produces enhanced images that are more natural and colorful, outperforming several state-of-the-art methods. The source code of this work will be made available on GitHub.
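The two-stage design rests on the classical Retinex model, which treats an observed image S as the pixel-wise product of a reflectance component R and an illumination map L. The paper's first stage learns its illumination estimate; purely as an illustration of the underlying decomposition, a classical (non-learned) coarse enhancement can be sketched as below. The channel-maximum illumination estimate and the gamma value are illustrative assumptions, not the paper's method.

```python
import numpy as np

def retinex_coarse_enhance(img, gamma=0.5, eps=1e-6):
    """Coarse Retinex-style enhancement (illustrative sketch only).

    Retinex theory models an observed image S as the pixel-wise
    product of reflectance R and illumination L: S = R * L.
    Here the illumination map is approximated by the per-pixel
    channel maximum (a common classical choice, not the paper's
    learned estimator), brightened with a gamma curve, and then
    recombined with the recovered reflectance.

    img: float array in [0, 1] with shape (H, W, 3).
    """
    # Approximate the illumination map as the channel-wise maximum.
    L = np.max(img, axis=2, keepdims=True)          # shape (H, W, 1)
    # Recover reflectance under the multiplicative model S = R * L.
    R = img / (L + eps)
    # Brighten the illumination with a gamma adjustment (gamma < 1).
    L_adj = np.power(L, gamma)
    # Recombine reflectance and adjusted illumination.
    return np.clip(R * L_adj, 0.0, 1.0)
```

A dark input (e.g., intensities near 0.04) is lifted toward 0.2 under gamma = 0.5, while already well-lit pixels are left nearly unchanged; the paper's second-stage IACN would then correct the color distortion and noise such a coarse step leaves behind.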


[Figs. 1–7: figure images not included in this preview]


Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Funding

This work was supported by the National Natural Science Foundation of China (Nos. 62072218 and 61862030), the Natural Science Foundation of Jiangxi Province (Nos. 20192ACB20002 and 20192ACBL21008), and the Talent Project of the Jiangxi Thousand Talents Program (No. jxsq2019201056).

Author information


Contributions

SH and HD conducted the experiments, contributed to the methodology and software, and wrote the original draft. YY supervised the work, performed the formal analysis, and revised the paper (review & editing). YW and MR conducted experiments and contributed to data curation and software. SW revised the paper and contributed to review & editing.

Corresponding author

Correspondence to Yong Yang.

Ethics declarations

Conflicts of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Huang, S., Dong, H., Yang, Y. et al. IATN: illumination-aware two-stage network for low-light image enhancement. SIViP 18, 3565–3575 (2024). https://doi.org/10.1007/s11760-024-03021-7

