
Lightness-aware contrast enhancement for images with different illumination conditions

Published in Multimedia Tools and Applications

Abstract

Taking photographs in daily life has become increasingly convenient. However, without sufficient skill, we often produce poor photographs with low contrast and unclear details under imperfect illumination conditions. Although many image-enhancement models have been developed, most impose a uniform enhancement strength on the whole image and thus tend to over-enhance regions whose illumination is already satisfactory. To address this issue, we propose a novel contrast-enhancement model that performs a simple linear fusion of an original image and its initial enhancement. As the key of our model, we construct a lightness map that estimates the scene lightness while remaining aware of image structure at the pixel level. In the fusion process, this map dynamically weights the initially enhanced image against the original image, ensuring a seamless fusion result. In our experiments, we validate our model on images with various illumination conditions, such as strong backlight, imbalanced light, and low light. The results show that our model performs well at simultaneously improving image contrast and preserving naturalness.
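The core idea described above is a per-pixel linear fusion weighted by a lightness map: dark regions take more of the enhanced image, well-lit regions keep the original. Below is a minimal sketch of that fusion idea in Python, using hypothetical stand-ins (gamma correction as the initial enhancement and a Gaussian-blurred grayscale as the lightness map); the paper's actual lightness-map construction and initial enhancement are not reproduced here.

```python
import numpy as np
import cv2


def lightness_aware_fuse(img_bgr, gamma=2.2, sigma=15):
    """Sketch of lightness-aware fusion: blend an initially enhanced image
    with the original, weighted per pixel by an estimated lightness map.
    The enhancement and lightness estimation here are simple stand-ins,
    not the paper's method."""
    img = img_bgr.astype(np.float32) / 255.0

    # Initial enhancement: simple gamma brightening (stand-in for the
    # paper's enhancement step).
    enhanced = np.power(img, 1.0 / gamma)

    # Lightness map: smoothed grayscale, so the weight reflects local
    # scene lightness rather than isolated pixel values.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    lightness = cv2.GaussianBlur(gray, (0, 0), sigma)

    # Fusion weight: dark regions (low lightness) lean on the enhanced
    # image; well-lit regions keep the original, avoiding over-enhancement.
    w = (1.0 - lightness)[..., None]
    fused = w * enhanced + (1.0 - w) * img

    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```

As a usage example, `lightness_aware_fuse(cv2.imread("backlit.jpg"))` brightens the shadowed subject of a backlit photo while leaving the already bright background largely untouched, which is the behavior the abstract attributes to the lightness-aware weighting.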



Acknowledgements

The authors sincerely appreciate the efforts of the anonymous reviewers and their useful comments during the reviewing process. This research was supported by the National Natural Science Foundation of China under grant numbers 61772171, 61702156, and 61632007.

Author information


Corresponding author

Correspondence to Yanrong Guo.


About this article


Cite this article

Hao, S., Guo, Y. & Wei, Z. Lightness-aware contrast enhancement for images with different illumination conditions. Multimed Tools Appl 78, 3817–3830 (2019). https://doi.org/10.1007/s11042-018-6257-1


