
Exemplar-guided low-light image enhancement

Letter to the Editor
Multimedia Systems

Abstract

This work proposes a novel exemplar-guided method for low-light image enhancement that takes two inputs: a low-light image and a corresponding normal-light exemplar. Unlike previous work, it relies on the exemplar's guidance both to restore the details of images with extremely low illumination and to control the degree of enhancement. The method employs an end-to-end framework for joint image matching and generation, consisting of a region-match module that matches the two images and an attentional feature selector module that samples pixels. For evaluation, we synthesize a pseudo-paired dataset based on the LOL dataset and collect several groups of real-captured images under different acquisition conditions. Experimental results show that the proposed method not only restores low-light images at relatively high speed but also achieves better performance in terms of PSNR and SSIM.
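Only the abstract is available here, so the following is a minimal PyTorch sketch of one plausible reading of the two components it names: a region-match module that aligns exemplar features to the low-light input, and an attentional feature selector that gates how much matched exemplar detail is blended in. The class names RegionMatch and AttentionalSelector, the correlation-based matching, the gated blending, and all shapes are assumptions for illustration, not the authors' published design; the psnr helper simply computes the standard metric used in the paper's evaluation.

```python
# Hypothetical sketch only: one plausible reading of the abstract's
# "region-match module" and "attentional feature selector module".
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionMatch(nn.Module):
    """Correlate low-light features with exemplar features and warp the
    exemplar toward the input (a common cross-domain matching pattern)."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_low: torch.Tensor, feat_ex: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_low.shape
        q = self.proj(feat_low).flatten(2)      # (B, C, HW)
        k = self.proj(feat_ex).flatten(2)       # (B, C, HW)
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)  # (B, HW, HW)
        v = feat_ex.flatten(2).transpose(1, 2)  # (B, HW, C)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)


class AttentionalSelector(nn.Module):
    """Predict a per-pixel gate deciding how much warped exemplar detail
    to keep versus the original low-light features."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_low: torch.Tensor, warped_ex: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_low, warped_ex], dim=1))
        return g * warped_ex + (1 - g) * feat_low


def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, the standard metric cited above."""
    mse = F.mse_loss(pred, target)
    return float(10 * torch.log10(max_val ** 2 / mse))


if __name__ == "__main__":
    feat_low = torch.randn(1, 64, 32, 32)  # features of the low-light input
    feat_ex = torch.randn(1, 64, 32, 32)   # features of the normal-light exemplar
    warped = RegionMatch(64)(feat_low, feat_ex)
    fused = AttentionalSelector(64)(feat_low, warped)
    print(fused.shape)                     # torch.Size([1, 64, 32, 32])
```

Note that dense HW × HW correlation is memory-hungry, so matching of this kind is usually performed on downsampled feature maps; the actual framework presumably makes similar trade-offs to reach the "relatively high speed" the abstract claims.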





Acknowledgements

This work is supported in part by the National Natural Science Foundation of China under Grant 61901434 and by the Anhui Provincial Natural Science Foundation under Grant 1908085QF254.

Author information

Corresponding author: Xiaopo Wu.

Additional information

Communicated by Bing-Kun Bao.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Shi, Y., Wu, X., Wang, B. et al. Exemplar-guided low-light image enhancement. Multimedia Systems 28, 1861–1871 (2022). https://doi.org/10.1007/s00530-022-00913-x

