
RCFNC: a resolution and contrast fusion network with ConvLSTM for low-light image enhancement

  • Original article
  • The Visual Computer

Abstract

Low-light image enhancement based on deep learning has achieved breakthroughs in recent years. However, current deep learning methods often suffer from inadequate resolution enhancement or inadequate contrast enhancement. To address these problems, this paper proposes a resolution and contrast fusion network with ConvLSTM (RCFNC) for low-light image enhancement. The network consists of four main components: a resolution enhancement branch, a contrast enhancement branch, a multi-scale feature fusion block (MFFB), and a convolutional long short-term memory block (ConvLSTM). Specifically, to improve the resolution of the low-light image, a resolution enhancement branch composed of multi-scale differential feature blocks is proposed, which uses residual features at different scales to enhance the spatial details of the image. To enhance the contrast of the image, a contrast enhancement branch composed of adaptive convolution residual blocks is introduced to learn the mapping between global and local features in the image. In addition, the MFFB performs a weighted fusion to better balance the resolution and contrast features obtained from the two branches. Finally, to improve the learning capability of the model, a ConvLSTM is added to filter out redundant information. Experiments on the LOL and MIT5K datasets and five benchmark datasets show that RCFNC outperforms current state-of-the-art methods.
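The abstract does not give the equations behind the MFFB's weighted fusion. As a rough illustration only, the sketch below combines the outputs of the two branches with softmax-normalized scalar weights; the names `fuse_branches` and `fusion_logits` are hypothetical and not from the paper, where the fusion weights would be learned jointly with the network.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D weight vector.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def fuse_branches(res_feat, con_feat, fusion_logits):
    """Weighted fusion of resolution- and contrast-branch feature maps.

    Hypothetical stand-in for the paper's MFFB: the two feature maps are
    blended with softmax-normalized scalar weights, which in the actual
    network would be learned parameters.
    """
    w = softmax(np.asarray(fusion_logits, dtype=float))
    return w[0] * res_feat + w[1] * con_feat

# Equal logits give an even 50/50 blend of the two branches.
fused = fuse_branches(np.ones((4, 4)), np.zeros((4, 4)), [0.0, 0.0])
```

With equal logits the two branches contribute equally; during training, adjusting `fusion_logits` would let the model favor whichever branch better preserves spatial detail or contrast for a given input.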


Data availability

The datasets used and/or analyzed during the current study are publicly available. The corresponding papers are cited accordingly.

References

  1. Mustafa, W.A., Kader, M.M.M.A.: A review of histogram equalization techniques in image enhancement application. J. Phys. Conf. Ser. 1019(1), 012026 (2018)


  2. Xie, Y., Ning, L., Wang, M., et al.: Image enhancement based on histogram equalization. J. Phys. Conf. Ser. 1314(1), 012161 (2019)


  3. Wang, P., Wang, Z., Lv, D., et al.: Low illumination color image enhancement based on Gabor filtering and Retinex theory. Multimed. Tools Appl. 80(12), 17705–17719 (2021)

  4. Cai, B., Xu, X., Guo, K., et al.: A joint intrinsic-extrinsic prior model for Retinex. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4000–4009 (2017)

  5. Gao, Y., Hu, H.-M., Li, B., Guo, Q.: Naturalness preserved nonuniform illumination estimation for image enhancement based on Retinex. IEEE Trans. Multimed. 20(2), 335–344 (2018)


  6. Li, M., Liu, J., Yang, W., Sun, X., Guo, Z.: Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 27(6), 2828–2841 (2018)


  7. Hao, S., Han, X., Guo, Y., et al.: Low-light image enhancement with semi-decoupled decomposition. IEEE Trans. Multimed. 22(12), 3025–3038 (2020)


  8. Wu, W., Weng, J., Zhang, P., et al.: URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5901–5910 (2022)

  9. Fu, X., Zeng, D., Huang, Y., et al.: A weighted variational model for simultaneous reflectance and illumination estimation. In: IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2782–2790 (2016)

  10. Hao, S., Han, X., Guo, Y., et al.: Low-light image enhancement with semi-decoupled decomposition. IEEE Trans. Multimed. 22(12), 3025–3038 (2020)


  11. Yu, X., Li, H., Yang, H.: Two-stage image decomposition and color regulator for low-light image enhancement. Vis. Comput., pp. 1–11 (2022)

  12. Ren, X., Yang, W., Cheng, W.H., et al.: LR3M: robust low-light enhancement via low-rank regularized Retinex model. IEEE Trans. Image Process. 29, 5862–5876 (2020)


  13. Li, M., Liu, J., Yang, W., et al.: Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 27(6), 2828–2841 (2018)


  14. Cai, J., Gu, S., Zhang, L.: Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 27(4), 2049–2062 (2018)


  15. Fu, X., Zeng, D., Huang, Y., et al.: A weighted variational model for simultaneous reflectance and illumination estimation. In: IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2782–2790 (2016)

  16. Yu, N., Li, J., Hua, Z.: FLA-Net: multi-stage modular network for low-light image enhancement. Vis. Comput., pp. 1–20 (2022)

  17. He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 770–778 (2016)

  18. Huang, Y., Zha, Z.J., Fu, X., et al.: Real-world person re-identification via degradation invariance learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, pp. 14084–14094 (2020)

  19. Kim, H., Choi, S.-M., Kim, C.-S., et al.: Representative color transform for image enhancement. In: IEEE/CVF International Conference on Computer Vision, pp. 4459–4468 (2021)

  20. Fu, Y., Hong, Y., Chen, L., et al.: LE-GAN: unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl. Based Syst. 240, 108010 (2022)


  21. Liu, Y., Wang, Z., Zeng, Y., et al.: PD-GAN: perceptual-details GAN for extremely noisy low-light image enhancement. In: ICASSP 2021, Toronto, pp. 1840–1844 (2021)

  22. Guo, S., Wang, W., Wang, X., Xu, X.: Low-light image enhancement with joint illumination and noise data distribution transformation. Vis. Comput., pp. 1–12 (2022)

  23. Wang, X., Zhai, Y., Ma, X., et al.: Low-light image enhancement based on GAN with attention mechanism and color constancy. Multimed. Tools Appl., pp. 1–19 (2022)

  24. Reza, A.M.: Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 38(1), 35–44 (2004)


  25. Kim, Y.T.: Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 43(1), 1–8 (1997)


  26. Horiuchi, T.: Estimation of color for gray-level image by probabilistic relaxation. IEEE Int. Conf. Comput. Vis. 3, 867–870 (2002)


  27. Wang, S., Zheng, J., Hu, H., Li, B.: Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 22(9), 3538–3548 (2013)


  28. Fu, X., Zeng, D., Huang, Y., Zhang, X.P., Ding, X.: A weighted variational model for simultaneous reflectance and illumination estimation. In: IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2782–2790 (2016)

  29. Fu, X., Zeng, D., Huang, Y., et al.: A fusion-based enhancing method for weakly illuminated images. Signal Process. 129, 82–96 (2016)


  30. Guo, X., Li, Y., Ling, H.: LIME: low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 26(2), 982–993 (2017)


  31. Lore, K.G., Akintayo, A., Sarkar, S.: LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 61, 650–662 (2017)


  32. Zhang, Y., Zhang, J., Guo, X.: Kindling the darkness: a practical low-light image enhancer. In: Proceedings of the 27th ACM International Conference on Multimedia, pp. 1632–1640 (2019)

  33. Zhu, M., Pan, P., Chen, W., et al.: EEMEFN: low-light image enhancement via edge-enhanced multi-exposure fusion network. Proc. AAAI Conf. Artif. Intell. 34(7), 13106–13113 (2020)

  34. Zhang, Y., Guo, X., Ma, J., et al.: Beyond brightening low-light images. Int. J. Comput. Vis. 129(4), 1013–1037 (2021)


  35. Li, J., Fang, F., et al.: Luminance-aware pyramid network for low-light image enhancement. IEEE Trans. Multimed. 23, 3153–3165 (2020)


  36. Xu, X., Wang, R., Fu, C.W., et al.: SNR-aware low-light image enhancement. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, pp. 17714–17724 (2022)

  37. Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Wang, Z.: EnlightenGAN: deep light enhancement without paired supervision. IEEE Trans. Image Process. 30, 2340–2349 (2021)


  38. Guo, C., Li, C., Guo, J., et al.: Zero-reference deep curve estimation for low-light image enhancement. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, pp. 1780–1789 (2020)

  39. Li, C., Guo, C., Loy, C.C.: Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. (2021)

  40. Liu, R., Ma, L., Zhang, J., et al.: Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10561–10570 (2021)

  41. Liang, D., Li, L., Wei, M., et al.: Semantically contrastive learning for low-light image enhancement. Proc. AAAI Conf. Artif. Intell. 36(2), 1555–1563 (2022)

  42. Gong, M., Ma, J., Xu, H., et al.: D2TNet: a ConvLSTM network with dual-direction transfer for pan-sharpening. IEEE Trans. Geosci. Remote Sens. 60, 1–14 (2022)


  43. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

  44. Johnson, J., Alahi, A., Li, F.: Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155 (2016)

  45. Wei, C., Wang, W., Yang, W., et al.: Deep Retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560 (2018)

  46. Bychkovsky, V., Paris, S., Chan, E., et al.: Learning photographic global tonal adjustment with a database of input/output image pairs. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 97–104 (2011)

  47. Lee, C., Kim, C.: Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 22(12), 5372–5384 (2013)


  48. Ma, K., Zeng, K., Wang, Z.: Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 24(11), 3345–3356 (2015)


  49. Vonikakis, V., Andreadis, I., Gasteratos, A.: Fast centre-surround contrast modification. IET Image Process. 2(1), 19–34 (2008)

  50. Nezhad, Z.H., Karami, A., Heylen, R., Scheunders, P.: Fusion of hyperspectral and multispectral images using spectral unmixing and sparse coding. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 9(6), 2377–2389 (2016)


  51. Mittal, A., Soundararajan, R., Bovik, A.C.: Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2013)


Acknowledgements

The work was supported in part by the Science and Technology Planning Project of Henan Province under Grant 212102210097.

Author information


Corresponding author

Correspondence to Canlin Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest related to this work.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, C., Song, S., Wang, X. et al. RCFNC: a resolution and contrast fusion network with ConvLSTM for low-light image enhancement. Vis Comput 40, 2793–2806 (2024). https://doi.org/10.1007/s00371-023-02986-9

