Dual UNet low-light image enhancement network based on attention mechanism

Abstract

Low-light image enhancement is an important research direction in image processing. Recently, U-Net-based networks have shown promise for low-light image enhancement. However, the semantic gap between encoder and decoder features and the weak modeling of global contextual information in U-shaped networks lead to problems such as inaccurate color in the enhanced images. To address these problems, this paper proposes a Dual UNet low-light image enhancement network (DUAMNet) based on an attention mechanism. First, the local texture features of the original image are extracted with the Local Binary Pattern (LBP) operator; the illumination invariance of the LBP operator helps preserve the texture of the original image. The features are then passed to the Brightness Enhancement Module (BEM). In the BEM, an outer U-Net captures feature information at different levels and luminance information in different regions, while an inner, densely connected U-Net++ strengthens the correlation between features at different levels, mines more of the hidden feature information extracted by the encoder, and narrows the semantic gap between encoder and decoder. The Convolutional Block Attention Module (CBAM) is introduced into the decoder of the U-Net++; it further strengthens the modeling of global contextual relationships and effectively increases the network's attention to weakly lit regions. The network adopts a progressive recursive structure with four recursive units, where the output of each unit serves as the input of the next. Comparative experiments on seven public datasets, analyzed quantitatively and qualitatively, show that despite its simple structure, the proposed network outperforms other methods in image quality.
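For readers unfamiliar with CBAM, the sketch below is a minimal, illustrative PyTorch implementation of the attention block referenced in the abstract: channel attention followed by spatial attention, as in Woo et al. (ECCV 2018). The reduction ratio, kernel size, and channel count used here are assumptions for demonstration only and are not taken from this paper's configuration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over average- and max-pooled descriptors."""

    def __init__(self, channels, reduction=16):  # reduction=16 is an assumption
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        attn = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn  # (B, C, 1, 1) mask broadcast over the feature map


class SpatialAttention(nn.Module):
    """Spatial attention: 2D mask predicted from channel-wise avg and max maps."""

    def __init__(self, kernel_size=7):  # kernel_size=7 is an assumption
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn  # (B, 1, H, W) mask broadcast over channels


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""

    def __init__(self, channels):
        super().__init__()
        self.channel_attention = ChannelAttention(channels)
        self.spatial_attention = SpatialAttention()

    def forward(self, x):
        return self.spatial_attention(self.channel_attention(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)      # dummy decoder feature map
    print(CBAM(64)(x).shape)            # torch.Size([1, 64, 32, 32])
```

The paper's progressive recursion would then amount to chaining four brightness-enhancement units, feeding each unit's output into the next; that larger BEM/U-Net++ structure is not reproduced in this sketch.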


Data Availability

Data available on request from the authors.


Acknowledgments

The authors acknowledge the National Natural Science Foundation of China (61772319, 62002200, 62176140 and 12001327), Shandong Natural Science Foundation of China (ZR2021QF134 and ZR2021MF068), and Yantai science and technology innovation development plan (2022JCYJ031).

Author information

Corresponding author

Correspondence to Jinjiang Li.

Ethics declarations

Conflict of Interests

The authors declare that no potential competing interests exist. There is no undisclosed relationship that may pose a competing interest, and there is no undisclosed funding source that may pose a competing interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, F., Hua, Z., Li, J. et al. Dual UNet low-light image enhancement network based on attention mechanism. Multimed Tools Appl 82, 24707–24742 (2023). https://doi.org/10.1007/s11042-022-14210-2

