
Beyond Brightening Low-light Images

Published in: International Journal of Computer Vision

Abstract

Images captured under low-light conditions often suffer from (partially) poor visibility. Besides unsatisfactory lighting, multiple types of degradation, such as noise and color distortion due to the limited quality of cameras, hide in the dark. In other words, solely turning up the brightness of dark regions inevitably amplifies the hidden pollution. Thus, low-light image enhancement should not only brighten dark regions but also remove hidden artifacts. To achieve this goal, this work builds a simple yet effective network which, inspired by Retinex theory, decomposes images into two components. Following a divide-and-conquer principle, one component (illumination) is responsible for light adjustment, while the other (reflectance) is responsible for degradation removal. In this way, the original space is decoupled into two smaller subspaces, with the expectation of better regularization/learning. It is worth noting that our network is trained with paired images shot under different exposure conditions, instead of using any ground-truth reflectance and illumination information. Extensive experiments demonstrate the efficacy of our design and its superiority over state-of-the-art alternatives, especially in terms of robustness against severe visual defects and flexibility in adjusting light levels. Our code is publicly available at https://github.com/zhangyhuaee/KinD_plus.
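The Retinex-style decomposition described above can be illustrated with a toy sketch. This is not the KinD++ network: the max-channel illumination estimate and the gamma value are illustrative assumptions, standing in for the learned illumination-adjustment and reflectance-restoration branches.

```python
import numpy as np

def retinex_adjust(img, gamma=0.6, eps=1e-3):
    """Toy Retinex-style relighting (a sketch, not the KinD++ network):
    estimate illumination as the channel-wise max, divide it out to get
    reflectance, then re-light with a gamma-adjusted illumination map."""
    img = img.astype(np.float64) / 255.0
    illumination = img.max(axis=2, keepdims=True)     # rough illumination map
    reflectance = img / (illumination + eps)          # degradations live here
    adjusted = reflectance * (illumination ** gamma)  # brighten dark regions
    return np.clip(adjusted * 255.0, 0, 255).astype(np.uint8)

dark = np.full((8, 8, 3), 30, dtype=np.uint8)  # synthetic dark patch
bright = retinex_adjust(dark)
print(bright.mean() > dark.mean())  # relit image is brighter
```

In the paper, both factors are instead predicted by trained sub-networks, so the reflectance branch can also suppress the noise and color distortion that a plain division like this one would amplify.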



Notes

  1. In the previous version, KinD (Zhang et al. 2019) was trained without using any synthetic pairs. In this version, for fairness of comparison, we retrain KinD with the synthetic data and report the new results accordingly.

  2. All the code is from the authors’ websites.

  3. The rankings and votes do not change when any competitor is set as the benchmark.
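The benchmark-invariance claimed in note 3 follows from the Bradley–Terry model (Bradley & Terry 1952), whose strengths are identifiable only up to a common scale, so rankings depend solely on ratios. A minimal sketch, with a hypothetical 3-method win matrix as the assumed input:

```python
import numpy as np

# Hypothetical pairwise "win" counts among 3 methods (w[i, j] = times i beat j).
w = np.array([[0, 6, 8],
              [4, 0, 7],
              [2, 3, 0]], dtype=float)

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths p via the classic MM update:
    p_i = (total wins of i) / sum_j n_ij / (p_i + p_j)."""
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T  # games played between each pair
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum(total[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()  # fix the arbitrary scale
    return p

p = bradley_terry(w)
# Rankings depend only on ratios p[i] / p[j], so normalizing against any
# "benchmark" method leaves the ordering unchanged.
print(np.argsort(-p))           # ranking of the three methods
print(np.argsort(-(p / p[1])))  # same ranking with method 1 as benchmark
```

Rescaling by any competitor's strength is a monotone transform, which is why the choice of benchmark cannot alter the reported rankings.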

References

  • Abdullah-Al-Wadud, M., Kabir, M. H., Dewan, M. A., & Chae, O. (2007). A dynamic histogram equalization for image contrast enhancement. IEEE TCE, 53(2), 593–600.

  • Agostinelli, F., Anderson, M. R., & Lee, H. (2013). Adaptive multicolumn deep neural networks with application to robust image denoising. In: NeurIPS, pp. 1493–1501.

  • Bradley, R. A., & Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39, 324.

  • Bychkovsky, V., Paris, S., Chan, E., & Durand, F. (2011). Learning photographic global tonal adjustment with a database of input/output image pairs. In: CVPR, pp. 97–104.

  • Cai, B., Xu, X., Jia, K., Qing, C., & Tao, D. (2016). DehazeNet: An end-to-end system for single image haze removal. IEEE TIP, 25(11), 5187–5198.

  • Cai, J., Gu, S., & Zhang, L. (2018). Learning a deep single image contrast enhancer from multi-exposure images. IEEE TIP, 27(4), 2049–2062.

  • Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018). Learning to see in the dark. In: CVPR, pp. 3291–3300.

  • Chen, Y., & Pock, T. (2017). Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE TPAMI, 39(6), 1256–1272.

  • Chen, Y., Wang, Y., Kao, M., & Chuang, Y. (2018). Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs. In: CVPR, pp. 6306–6314.

  • Cheng, H. D., & Shi, X. J. (2004). A simple and effective histogram equalization approach to image enhancement. Digital Signal Processing, 14(2), 158–170.

  • Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE TIP, 16(8), 2080–2095.

  • Dong, C., Chen, C. L., He, K., & Tang, X. (2016). Image super-resolution using deep convolutional networks. IEEE TPAMI, 38(2), 295–307.

  • Dong, C., Deng, Y., Loy, C. C., & Tang, X. (2015). Compression artifacts reduction by a deep convolutional network. In: ICCV, pp. 576–584.

  • Dong, X., Pang, Y., & Wen, J. (2011). Fast efficient algorithm for enhancement of low lighting video. In: ICME, pp. 1–6.

  • Fu, X., Zeng, D., Huang, Y., Zhang, X., & Ding, X. (2016). A weighted variational model for simultaneous reflectance and illumination estimation. In: CVPR, pp. 2782–2790.

  • Fu, X., Zeng, D., Yue, H., Liao, Y., Ding, X., & Paisley, J. (2016). A fusion-based enhancing method for weakly illuminated images. Signal Processing, 129, 82–96.

  • Gu, S., Zhang, L., Zuo, W., & Feng, X. (2014). Weighted nuclear norm minimization with application to image denoising. In: CVPR, pp. 2862–2869.

  • Guo, X., Li, Y., & Ling, H. (2017). LIME: Low-light image enhancement via illumination map estimation. IEEE TIP, 26(2), 982–993.

  • Huang, S., Cheng, F., & Chiu, Y. (2013). Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE TIP, 22(3), 1032–1041.

  • Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., & Van Gool, L. (2018). WESPE: Weakly supervised photo enhancer for digital cameras. In: CVPRW, pp. 691–700.

  • Jobson, D. J., Rahman, Z., & Woodell, G. A. (1997). Properties and performance of a center/surround Retinex. IEEE TIP, 6(3), 451–462.

  • Jobson, D. J., Rahman, Z., & Woodell, G. A. (2002). A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE TIP, 6(7), 965–976.

  • Land, E. H. (1977). The Retinex theory of color vision. Scientific American, 237(6), 108–128.

  • Lee, C., Lee, C., & Kim, C. S. (2013). Contrast enhancement based on layered difference representation of 2D histograms. IEEE TIP, 22(12), 5372–5384.

  • Li, B., Peng, X., Wang, Z., Xu, J., & Feng, D. (2017). AOD-Net: All-in-one dehazing network. In: ICCV, pp. 4780–4788.

  • Li, M., Liu, J., Yang, W., Sun, X., & Guo, Z. (2018). Structure-revealing low-light image enhancement via robust Retinex model. IEEE TIP, 27(6), 2828–2841.

  • Lore, K. G., Akintayo, A., & Sarkar, S. (2017). LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61, 650–662.

  • Ma, K., Zeng, K., & Wang, Z. (2015). Perceptual quality assessment for multi-exposure image fusion. IEEE TIP, 24(11), 3345–3356.

  • Mateescu, V., & Bajic, I. V. (2016). Visual attention retargeting. IEEE Multimedia, 23(1).

  • Mechrez, R., Shechtman, E., & Zelnik-Manor, L. (2018). Saliency driven image manipulation. WACV, 30, 189–202.

  • Mittal, A., Soundararajan, R., & Bovik, A. (2013). Making a completely blind image quality analyzer. IEEE SPL, 20(3), 209–212.

  • Pisano, E., Zong, S., Hemminger, B., Deluca, M., Johnston, R., Muller, K., et al. (1998). Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. Journal of Digital Imaging, 11(4), 193–200.

  • Rahman, S., Rahman, M. M., Abdullah-Al-Wadud, M., Al-Quaderi, G. D., & Shoyaib, M. (2016). An adaptive gamma correction for image enhancement. EURASIP Journal on Image and Video Processing, 2016(1), 35.

  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI, pp. 234–241.

  • Sharma, G., Wu, W., & Dalal, E. N. (2005). The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research and Application, 30(1), 21–30.

  • Shen, L., Yue, Z., Feng, F., Chen, Q., Liu, S., & Ma, J. (2017). MSR-Net: Low-light image enhancement using deep convolutional network. arXiv:1711.02488.

  • Stevens, S. (1957). On the psychophysical law. Psychological Review, 64(3), 153–181.

  • Turgay, C., & Tardi, T. (2011). Contextual and variational contrast enhancement. IEEE TIP, 20(12), 3431–3441.

  • Wang, R., Zhang, Q., Fu, C. W., Shen, X., Zheng, W. S., & Jia, J. (2019). Underexposed photo enhancement using deep illumination estimation. In: CVPR, pp. 6849–6857.

  • Wang, S., Zheng, J., Hu, H., & Li, B. (2013). Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE TIP, 22(9), 3538–3548.

  • Wang, W., Chen, W., Yang, W., & Liu, J. (2018). GLADNet: Low-light enhancement network with global awareness. In: FG.

  • Wang, Z. G., Liang, Z. H., & Liu, C. (2009). A real-time image processor with combining dynamic contrast ratio enhancement and inverse gamma correction for PDP. Displays, 30, 133–139.

  • Wei, C., Wang, W., Yang, W., & Liu, J. (2018). Deep Retinex decomposition for low-light enhancement. In: BMVC.

  • Xie, J., Xu, L., & Chen, E. (2012). Image denoising and inpainting with deep neural networks. In: NeurIPS, pp. 341–349.

  • Ying, Z., Ge, L., & Gao, W. (2017). A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv:1711.00591.

  • Ying, Z., Ge, L., Ren, Y., Wang, R., & Wang, W. (2018). A new low-light image enhancement algorithm using camera response model. In: ICCVW, pp. 3015–3022.

  • Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2016). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE TIP, 26(7), 3142–3155.

  • Zhang, K., Zuo, W., & Zhang, L. (2018). FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE TIP, 27(9), 4608–4622.

  • Zhang, L., Dai, J., Lu, H., He, Y., & Wang, G. (2018). A bi-directional message passing model for salient object detection. In: CVPR, pp. 1741–1750.

  • Zhang, X., Lu, Y., Liu, J., & Dong, B. (2018). Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration. In: ICLR.

  • Zhang, Y., Zhang, J., & Guo, X. (2019). Kindling the darkness: A practical low-light image enhancer. In: ACM MM, pp. 1632–1640.

Download references

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61772512 and 62072327, and in part by the National Key Research and Development Program of China under Grant No. 2019YFC1521200.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Xiaojie Guo.

Additional information

Communicated by Michael S. Brown.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Y., Guo, X., Ma, J. et al. Beyond Brightening Low-light Images. Int J Comput Vis 129, 1013–1037 (2021). https://doi.org/10.1007/s11263-020-01407-x

