
Digital Signal Processing

Volume 58, November 2016, Pages 1-9

Adaptive contrast enhancement using edge-based lighting condition estimation

https://doi.org/10.1016/j.dsp.2016.04.009

Highlights

  • Lighting condition estimation using detected edges is presented.

  • Transfer function for improvement of local contrast and preservation of structure is proposed.

  • Contrast enhancement approach for improvement of global contrast and reduction of artifacts is proposed.

Abstract

This paper proposes a new approach to image contrast enhancement that improves the perceptual visual quality by considering the lighting condition and minimizing the structural distortion to a tolerable level. The proposed method consists of the following two major steps: lighting condition estimation and contrast enhancement processes. In the first step, the proposed method estimates the lighting condition by calculating the dynamic range along the edges of the image. In the second step, the method adaptively adjusts the luminance by considering both the estimated lighting condition and the order of luminance levels in order to improve the perceptual visual quality. In addition, the method properly reduces the structural distortion. Experimental results show that the proposed method improved the perceptual visual quality of various images by increasing the average structural fidelity, enhancement performance measure, entropy, and tone-mapped image quality index by up to 11%, 133%, 16%, and 11%, respectively, compared to the benchmark methods.

Introduction

With the recent improvement of imaging device technologies, imaging devices can capture natural scenes that possess a high dynamic range (HDR) [1]. However, HDR images cannot be displayed directly on display devices designed for low dynamic range (LDR) images [1]. Therefore, to display them properly, HDR images are commonly converted into LDR images using tone mapping algorithms [1], [2], [3].

Tone mapping algorithms, which map a wide luminosity range onto a relatively narrow brightness range, cannot avoid generating poor-contrast regions, which results in poor visual quality [4]. Fig. 1(a) shows an HDR image with high perceptual visual quality in both regions A and B. However, the LDR image in Fig. 1(b) has poor-contrast regions C and D, which produce low perceptual visual quality. Regions C and D correspond to regions A and B, respectively. To improve the perceptual visual quality, the contrast of poor-contrast regions in the LDR image must be enhanced. Therefore, many contrast enhancement algorithms that compensate for the contrast in such regions have been proposed [5], [6], [7], [8].

Multi-scale retinex with color restoration (MSRCR) was proposed to improve the perceptual visual quality. This method extracts the illumination, which represents the effect of lighting sources, and removes it from the original image to obtain an output image. It therefore increases the perceptual visual quality of poor-contrast images generated under uniform lighting conditions such as fog, haze, and other unusual weather conditions [5]. However, it adjusts the luminance without considering the order of luminance levels (OLL), which represents the ascending or descending order of pixel values and is closely correlated with the irradiance. The processed image can therefore be severely distorted when this method changes the OLL considerably. For example, it usually causes a halo artifact, i.e., unwanted halos surrounding objects in the enhanced image [9], as well as graying-out, i.e., low chroma [10].

Therefore, the adaptive and integrated neighborhood-dependent approach for nonlinear enhancement (AINDANE) was proposed to compensate for both the halo artifact and the graying-out of MSRCR [6]. AINDANE consists of two enhancement stages: adaptive luminance enhancement and adaptive contrast enhancement. In the first stage, AINDANE adjusts the luminance of each pixel depending on the luminance level, which corresponds to the degree of darkness in the image. In the second stage, it enhances the contrast by adjusting the luminance of each pixel relative to that of its neighboring pixels. This method improves the perceptual visual quality of low-luminance poor-contrast regions generated under both uniform and non-uniform lighting conditions. However, it cannot properly compensate for poor-contrast regions having high luminance.
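As background, the illumination-removal idea at the heart of retinex methods such as MSRCR can be sketched as below. This is a simplified single-scale version that uses a box blur as a stand-in for the Gaussian surround; it is not the authors' full multi-scale method with color restoration.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur used here as a crude illumination estimate
    (a stand-in for the Gaussian surround of retinex methods)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def single_scale_retinex(img, radius=8):
    """Estimate the illumination by smoothing, then remove it in the
    log domain to recover a reflectance-like image."""
    img = img.astype(np.float64) + 1.0            # offset avoids log(0)
    illumination = box_blur(img, radius)
    return np.log(img) - np.log(illumination)

# A flat scene has no reflectance detail, so the output is ~zero everywhere.
flat = np.full((32, 32), 100.0)
out = single_scale_retinex(flat)
```

Because the output is the log ratio of the image to its own smoothed version, a uniform illumination shift cancels out, which is why such methods help under uniform lighting but can ring (halo) near strong edges.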
Therefore, the space-variant luminance map (SVLM) and parallel nonlinear adaptive enhancement (PNAE) methods were proposed to improve the perceptual visual quality of poor-contrast regions having both high and low luminance [7], [8]. These methods adjust the luminance of each pixel using a gamma curve or mapping function, and then enhance the contrast by modifying the luminance of each pixel relative to that of its neighboring pixels. They improve the perceptual visual quality of high- and low-luminance poor-contrast regions generated under non-uniform lighting conditions. However, these methods do not consider the illumination of a uniform lighting condition, which produces poor contrast over the whole image, so they cannot properly improve the perceptual visual quality of images generated under uniform lighting. In summary, existing contrast enhancement algorithms adjust the luminance without considering the lighting condition of the original image and thus cannot properly improve the perceptual visual quality.
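The gamma-curve adjustment used by this family of methods can be illustrated with the toy sketch below. The exponent mapping from a smoothed luminance map is a hypothetical choice for illustration only, not the published SVLM or PNAE curve.

```python
import numpy as np

def space_variant_gamma(lum, local_mean):
    """Per-pixel gamma driven by a smoothed luminance map: dark
    neighborhoods get gamma < 1 (brightened), bright neighborhoods
    get gamma > 1 (darkened). The 2**((m - 0.5)/0.5) mapping is
    illustrative only, not the published curve."""
    gamma = 2.0 ** ((np.asarray(local_mean) - 0.5) / 0.5)
    return np.clip(lum, 0.0, 1.0) ** gamma

dark_out = space_variant_gamma(0.25, 0.25)    # dark pixel lifted above 0.25
bright_out = space_variant_gamma(0.75, 0.75)  # bright pixel pulled below 0.75
```

Driving the exponent from a *local* mean rather than a global one is what makes the correction space-variant, so a backlit foreground and a bright sky receive opposite adjustments in the same frame.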

In this paper, we propose an adaptive contrast enhancement algorithm based on the estimated lighting condition that improves the perceptual visual quality while keeping the structural distortion at a tolerable level. The proposed method consists of a lighting condition estimation process (LCEP) and contrast enhancement processes (CEPs). In the first process, the method estimates the lighting condition by calculating the dynamic range along the edges of the image; weights are then calculated based on the estimated lighting condition. In the second process, a pixel-wise contrast enhancement process (PCEP) and a frame-wise contrast enhancement process (FCEP) are performed to improve the perceptual visual quality. Finally, the output image is obtained as the weighted sum of the images derived from PCEP and FCEP.
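The overall pipeline can be sketched as below, under loudly stated assumptions: a simple gradient-magnitude threshold stands in for the paper's edge detector, the weight taken directly from the edge dynamic range is a hypothetical placeholder for the paper's weight calculation, and `pcep`/`fcep` are placeholders for the two enhancement processes.

```python
import numpy as np

def edge_dynamic_range(lum, grad_thresh=0.1):
    """Dynamic range of luminance sampled along detected edges.
    A gradient-magnitude threshold stands in for a full edge detector."""
    gy, gx = np.gradient(lum)
    edges = np.hypot(gx, gy) > grad_thresh
    if not edges.any():
        return 0.0
    vals = lum[edges]
    return float(vals.max() - vals.min())

def enhance(lum, pcep, fcep):
    """Weighted sum of a pixel-wise (PCEP) and a frame-wise (FCEP)
    result; here the weight is simply the edge dynamic range of a
    normalized image (a placeholder for the paper's weighting)."""
    w = edge_dynamic_range(lum)
    return w * pcep(lum) + (1.0 - w) * fcep(lum)

# Step image: left half 0.2, right half 0.8 -> edge dynamic range 0.6
step = np.where(np.arange(16) < 8, 0.2, 0.8) * np.ones((16, 1))
dr = edge_dynamic_range(step)
```

The intuition the sketch captures: a small dynamic range along edges suggests uniform lighting (the whole frame needs help), while a large one suggests non-uniform lighting (only local regions do), so the weight shifts the blend between the two processes accordingly.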

The remainder of this paper is organized as follows. In Section 2, we describe the proposed contrast enhancement algorithm. In Section 3, we present experimental results and evaluate the performance of the proposed method. Finally, we conclude this paper in Section 4.

Section snippets

Motivation of the proposed method

The lighting condition of poor-contrast images may be uniform or non-uniform. In the case of uniform lighting, the image generally has poor contrast over the whole image, whereas in the case of non-uniform lighting, the image has poor contrast locally. It is therefore natural to enhance the contrast with different methods depending on the lighting condition. However, AINDANE [6] attempts to enhance the contrast without considering the lighting

Experimental results

Several experiments were conducted to evaluate both the subjective and objective performance of the proposed method. For the objective evaluation, the structural fidelity (SF) [14], the enhancement performance measure (EME) [15], entropy [16], and the tone-mapped image quality index (TMQI) [14] were used as evaluation metrics. For the subjective evaluation, the image quality was visually compared using still images and video sequences enhanced by the proposed and benchmark methods. We used four
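Of the objective metrics listed above, entropy is the simplest to state: it is the Shannon entropy of the gray-level histogram, and higher values indicate fuller use of the available gray levels. A minimal sketch:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of the gray-level histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins before the log
    return float(-(p * np.log2(p)).sum())

# Two equally frequent gray levels -> exactly 1 bit of entropy.
two_level = np.array([0] * 50 + [128] * 50)
h = image_entropy(two_level)
```

A contrast-enhancement method that spreads the histogram over more levels raises this value, which is why entropy is a common (if crude) proxy for enhancement strength.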

Conclusion

This paper proposed an adaptive contrast enhancement algorithm that employs edge-based lighting condition estimation. The proposed algorithm consists of two steps. In the first step, the proposed method estimates the lighting condition and calculates the weights for the CEPs based on the estimate. In the second step, the proposed method performs both PCEP and FCEP to compensate for the poor contrast while properly preserving the OLL. Finally, the output image is obtained from the

Acknowledgements

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the “ICT Consilience Creative Program” (IITP-R0346-16-1007) supervised by the IITP (Institute for Information & Communications Technology Promotion) and IDEC.

Chan Young Jang received a B.E. degree in electronic and electrical engineering in 2008 from Pusan National University, Pusan, Korea and is currently pursuing a Ph.D. degree in electronic and electrical engineering from the Pohang University of Science and Technology, Pohang, Korea. His current research interests include image processing algorithms, LED controller design, and computer vision.

References (24)

  • Y. Monobe et al., Dynamic range compression preserving local image contrast for digital video camera, IEEE Trans. Consum. Electron. (2005)

  • C. Lee et al., Gradient domain tone mapping of high dynamic range videos

  • M. Barkowsky et al., Tone mapping HDR images using optimization: a general framework

  • K. Hasikin et al., Adaptive fuzzy intensity measure enhancement technique for non-uniform illumination and low-contrast images, Signal Image Video Process. (2013)

  • D.J. Jobson et al., A multiscale retinex for bridging the gap between color images and the human observation of scenes, IEEE Trans. Image Process. (1997)

  • L. Tao et al., Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images, J. Electron. Imaging (2005)

  • S. Lee et al., A space-variant luminance map based color image enhancement, IEEE Trans. Consum. Electron. (2010)

  • Z. Zhou et al., A parallel nonlinear adaptive enhancement algorithm for low- or high-intensity color images, EURASIP J. Adv. Signal Process. (2014)

  • H. Tsutsui et al., Halo artifacts reduction method for variational based real-time retinex image enhancement

  • Z. Rahman et al., Retinex processing for automatic image enhancement

  • P. Tsai et al., Image enhancement for backlight-scaled TFT-LCD displays, IEEE Trans. Circuits Syst. Video Technol. (2009)

  • J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. (1986)
Cited by (7)

    • Integrating image processing and classification technology into automated polarizing film defect inspection

      2018, Optics and Lasers in Engineering
      Citation Excerpt:

      The contrast of the edge and background can be enhanced, but the noise in the background is still enhanced excessively. Jang et al. [16] used edge-based lighting condition estimation for image contrast enhancement, which improved the perceptual visual quality by considering the lighting condition and minimizing the structural distortion to a tolerable level. Kim et al. [17] presented an entropy scaling contrast enhancement in the wavelet domain.



    Suk-Ju Kang received a B.S. degree in Electronic Engineering from Sogang University, Republic of Korea, in 2006 and a Ph.D. degree in electrical and computer engineering from Pohang University of Science and Technology, Republic of Korea, in 2011. From 2011 to 2012, he was a Senior Researcher at LG Display, Republic of Korea, where he was a project leader for resolution enhancement and multi-view 3D system projects. From 2012 to 2015, he was an Assistant Professor of Electrical Engineering at the Dong-A University, Busan, Republic of Korea. He is currently an Assistant Professor of Electronic Engineering at the Sogang University, Seoul, Republic of Korea. His current research interests include image analysis and enhancement, video processing, multimedia signal processing, and circuit design for LCD, OLED, and 3D display systems.

    Young Hwan Kim received the B.E. degree in electronics, in 1977, from the Kyungpook National University, Republic of Korea, and M.S. and Ph.D. degrees in electrical engineering, in 1985 and 1988, from the University of California, Berkeley, USA. From 1977 to 1982, he was with the Agency for Defense Development, Republic of Korea, where he was involved in various military research projects, including the development of auto-pilot guidance and control systems. From 1983 to 1988, he worked as a Post Graduate Researcher, developing VLSI CAD programs at the Electronic Research Laboratory of the University of California, Berkeley, USA. Since his graduation, in 1988, he has worked at the Division of Electronic and Computer Engineering at POSTECH, Republic of Korea, where he is currently a Professor. His research interests include the design of LCD display systems, MPSoC and GPGPU system design for display and computer vision applications, statistical analysis and design technology for deep-submicron semiconductor devices, and power noise analysis. He has served as an editor of the Journal of the Institute of Electronics Engineers of Korea, and as a General Chair and a committee member of various international and Korean domestic technical conferences, including International SoC Design Conference and IEEE ISCAS 2012.
