
Multi-scale error feedback network for low-light image enhancement

  • Original Article
Neural Computing and Applications

Abstract

Low-light image enhancement is a challenging task because brightness, contrast, noise and other factors must be considered simultaneously. However, most existing studies focus on improving illumination alone, making it difficult to obtain natural-looking results when images of complex scenes are enhanced. To address this issue, we propose a neural network—a multi-scale error feedback network (MSEFN)—to enhance low-light images. The proposed network consists of an error feedback encoder module (EFEM), an error feedback decoder module (EFDM) and a feature integration module (FIM). As the main component of EFEM and EFDM, the error feedback feature extraction module effectively retains spatial information by using a shuffle attention fusion block (SAFB) to fuse the acquired multi-scale features and nonadjacent features. FIM captures contextual information, compensating for the network's lack of global features. Furthermore, the local uneven illumination (LUI) dataset and the polynomial loss function constructed in this paper make our network more stable. Extensive experiments demonstrate that the proposed network outperforms state-of-the-art methods both qualitatively and quantitatively. The LUI dataset is publicly available at https://github.com/Qyizos/LUI-dataset.
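The error feedback mechanism described in the abstract follows the same intuition as iterative back-projection: project features to a coarser scale, reconstruct, and feed the reconstruction error back to recover spatial detail. The sketch below is a hypothetical NumPy illustration of that core computation only — average pooling and nearest-neighbour upsampling stand in for the paper's learned convolutional encoder/decoder, and it is not the authors' implementation (in MSEFN the error map would additionally pass through learned layers and the SAFB).

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling: a stand-in for a strided encoder convolution
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour upsampling: a stand-in for a decoder layer
    return x.repeat(2, axis=0).repeat(2, axis=1)

# A toy 4x4 feature map
x = np.arange(16, dtype=float).reshape(4, 4)

low = downsample(x)        # coarse-scale projection
recon = upsample(low)      # reconstruction from the coarse scale
error = x - recon          # error map: the spatial detail the projection lost

refined = recon + error    # feeding the error back restores that detail
```

With these unweighted stand-ins the feedback trivially restores the input exactly; the point of the sketch is that the error map isolates the high-frequency spatial information that a plain encode-decode path discards, which is what a learned error feedback module can then exploit.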


The full article includes Figs. 1–18.


Acknowledgements

The authors are grateful to the referees and the editor for their valuable time and contributions. This work is supported by the National Natural Science Foundation of China (62172118, 61876049) and the Natural Science Key Foundation of Guangxi (2021GXNSFDA196002), in part by the Guangxi Key Laboratory of Image and Graphic Intelligent Processing under Grants GIIP2006, GIIP2007 and GIIP2008, and in part by the Innovation Project of Guangxi Graduate Education under Grants YCB2021070, YCBZ2018052, 2021YCXS071 and YCSW2022269.

Author information


Corresponding author

Correspondence to Zetao Jiang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Qian, Y., Jiang, Z., He, Y. et al. Multi-scale error feedback network for low-light image enhancement. Neural Comput & Applic 34, 21301–21317 (2022). https://doi.org/10.1007/s00521-022-07612-8

