Journal of Visual Communication and Image Representation
Optimized contrast enhancement for real-time image and video dehazing
Highlights
► We propose a fast and optimized dehazing algorithm for hazy images and videos.
► We restore a hazy image by enhancing the contrast.
► The proposed algorithm enhances the contrast and preserves the information optimally.
► We reduce flickering artifacts in a dehazed video.
► The proposed algorithm is sufficiently fast for real-time applications.
Introduction
An image captured in bad weather often exhibits low contrast due to haze in the atmosphere, which attenuates scene radiance. Low-contrast images degrade the performance of various image processing and computer vision algorithms. Dehazing is the process of removing haze from hazy images and enhancing the image contrast. Histogram equalization or unsharp masking can be employed to enhance the image contrast by stretching the histogram [1]. However, these methods do not account for the fact that the haze thickness is proportional to object depth, which varies locally within an image. Thus, they cannot adaptively compensate for the contrast degradation in a hazy image. More sophisticated dehazing algorithms first estimate object depths in a scene. Several dehazing algorithms have been proposed to estimate object depths using multiple images or additional information. For example, object depths can be estimated from two images captured in different weather conditions [2], [3] or with different degrees of polarization [4], [5]. Also, Kopf et al. [6] employed prior knowledge of the scene geometry for dehazing. These algorithms can estimate scene depths and remove haze effectively, but they require multiple images or additional information, which limits their applicability.
Recently, single image dehazing algorithms have been developed to overcome the limitation of multiple image dehazing approaches. These algorithms make use of strong assumptions or constraints to remove haze from a single image. Tan [7] maximized the contrast of a hazy image, assuming that a haze-free image has a higher contrast ratio than the hazy image. Tan’s algorithm, however, tends to overcompensate for the reduced contrast, yielding halo artifacts. Fattal [8] decomposed the scene radiance of an image into the albedo and the shading, and then estimated the scene radiance based on independent component analysis (ICA), assuming that the shading and the object depth are locally uncorrelated. It can remove haze locally but cannot restore densely hazy images. Kratz and Nishino [9] estimated the albedo and the object depth jointly by modeling a hazy image as a factorial Markov random field (FMRF). Tarel and Hautiere [10] estimated the atmospheric veil, which is the map of blended atmospheric light, and refined the veil using the median filter. He et al. [11] estimated object depths in a hazy image based on the dark channel prior, which assumes that at least one color channel should have a small pixel value in a haze-free image. They also applied an alpha matting scheme to refine the object depths. Ancuti et al. [12] significantly reduced the complexity of He et al.’s algorithm by modifying the block-based approach to a layer-based one. In addition, He et al.’s algorithm has been adopted and improved in many algorithms [13], [14], [15], [16].
For video dehazing, Tarel et al. [17] focused on car vision. They partitioned a hazy video sequence into dynamically varying objects and a planar road, and then updated the scene depths only for the objects using the still image dehazing scheme in [10]. Also, Zhang et al. [18] estimated an initial depth map for each frame of a video sequence using the algorithm in [11], and then refined the depth map by exploiting spatial and temporal similarities. Oakley and Bu [19] assumed that all pixels in an image have similar depths and subtracted the same offset value from all pixels. Their algorithm is computationally simple, but it cannot adaptively remove haze when a captured image has variable scene depths.
The existing dehazing algorithms often exhibit overstretched contrast [7], [9], [10], [11] or fail to remove dense haze [8] because of incorrect estimation of scene depths. To overcome these drawbacks, the contrast enhancement should be controlled more adaptively. Furthermore, the conventional video dehazing algorithms suffer from high computational complexity [18] or yield low-quality restored videos [19]. Therefore, an efficient real-time video dehazing algorithm is required for a wide range of practical applications.
In this work, we propose a fast dehazing algorithm for images and videos based on the optimized contrast enhancement. The proposed algorithm is based on our preliminary work on static image dehazing [20] and video dehazing [21]. We increase the contrast of a restored image to remove haze. However, if the contrast is overstretched, some pixel values are truncated by overflow or underflow. We design a cost function to alleviate this information loss while maximizing the contrast. Then, we find the optimal scene depth for each block by minimizing the cost function. Furthermore, for video dehazing, assuming that the scene radiance of an object point is invariant between adjacent frames, we add a temporal coherence cost to the total cost function. We also implement a parallel computing scheme for fast dehazing. Experimental results demonstrate that the proposed algorithm can estimate object depths in a scene reliably and restore the scene radiance efficiently.
The rest of the paper is organized as follows. Section 2 describes the haze model, which is employed in this work. Section 3 proposes the static image dehazing algorithm, and Section 4 describes the video dehazing algorithm. Section 5 presents experimental results. Finally, Section 6 concludes this work.
Haze modeling
The observed color of a captured image in the presence of haze can be modeled, based on atmospheric optics [2], as

I(p) = J(p) t(p) + A (1 − t(p)),

where J(p) and I(p) denote the original and the observed colors at pixel position p, respectively, and A is the global atmospheric light that represents the ambient light in the atmosphere. Also, t(p) is the transmission of the reflected light, which is determined by the distance between the scene point and the camera.
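Under this model, restoring the scene radiance reduces to inverting the equation per pixel. The following sketch (in NumPy; the function names and the lower bound on the transmission are illustrative choices, not from the paper) synthesizes a hazy image from the model and then inverts it:

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image I from scene radiance J via the
    atmospheric scattering model: I(p) = J(p) t(p) + A (1 - t(p))."""
    t = t[..., np.newaxis]  # broadcast per-pixel transmission over color channels
    return J * t + A * (1.0 - t)

def restore_radiance(I, t, A, t_min=0.1):
    """Invert the model: J(p) = (I(p) - A) / t(p) + A.
    t is clipped below by t_min (an illustrative bound) to avoid
    amplifying noise where the haze is dense."""
    t = np.maximum(t, t_min)[..., np.newaxis]
    return (I - A) / t + A
```

Note that the restoration divides by t(p), so small transmission values (distant, densely hazed points) magnify both the signal and the noise, which is why a lower bound on t is commonly imposed.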
Static image dehazing
Fig. 1 shows the block diagram of the proposed dehazing algorithm. First, we determine the atmospheric light for an input hazy image. Then, assuming that scene depths are similar within an image block, we find the optimal transmission for each block to maximize the contrast of the restored image. We also minimize the information loss due to the truncation of pixel values while enhancing the contrast. Then, we refine the block-based transmission values into pixel-based ones by
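The block-wise transmission search can be sketched as follows. This is a simplified illustration, assuming a grayscale block with values in [0, 1]: the candidate set, the plain variance-based contrast cost, the squared-truncation loss, and the weight lam are stand-ins for the paper's exact formulation.

```python
import numpy as np

def block_transmission(block, A, lam=5.0, candidates=np.linspace(0.1, 1.0, 10)):
    """For one image block, pick the transmission t that maximizes contrast
    (minimizes negative variance) while penalizing the information loss
    from values truncated outside [0, 1]. lam plays the role of the
    trade-off weight between the two cost terms."""
    best_t, best_cost = candidates[0], np.inf
    for t in candidates:
        J = (block - A) / t + A  # tentative restoration for this candidate
        # contrast cost: negative variance of the restored block
        contrast_cost = -np.var(J)
        # information loss: squared magnitude of values truncated to [0, 1]
        loss = np.sum(np.minimum(J, 0.0) ** 2) + np.sum(np.maximum(J - 1.0, 0.0) ** 2)
        cost = contrast_cost + lam * loss / J.size
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t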
Video dehazing
The dehazing algorithm in Section 3 provides good results on static images. However, when applied to each frame of a hazy video sequence independently, it may break temporal coherence and produce a restored video with severe flickering artifacts. Moreover, its high computational complexity prohibits real-time applications, such as car vision or video surveillance. In this section, we propose a fast and temporally coherent dehazing algorithm for video sequences.
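One simple way to encourage temporal coherence, in the spirit of the radiance-invariance assumption above, is to blend each frame's transmission estimate with the previous frame's. This is an illustrative sketch rather than the paper's cost-function formulation; alpha is a hypothetical smoothing weight.

```python
import numpy as np

def temporally_smooth(t_curr, t_prev, alpha=0.8):
    """Suppress flickering by blending the current transmission map with
    the previous frame's, assuming the scene radiance of an object point
    is nearly invariant between adjacent frames."""
    if t_prev is None:  # first frame: nothing to blend with
        return t_curr
    return alpha * t_prev + (1.0 - alpha) * t_curr
```

A larger alpha yields a more stable but slower-adapting transmission map; in practice the blend would be disabled at scene changes, where the invariance assumption breaks down.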
Static image dehazing
We evaluate the performance of the proposed static image dehazing algorithm on hazy “Cones,” “Forest,” “House,” “Town,” and “Plain” images in Fig. 8(a). “Cones” and “House” were used in [8], and the others were collected from flickr.com. Fig. 8(b) shows the estimated transmission maps, where yellow and red pixels represent near and far scene points, respectively. Fig. 8(c) shows the dehazed images when the parameter in (13) is set to 5. In the “Cones” and “Forest” images, upper
Conclusions
In this work, we proposed a dehazing algorithm based on the optimized contrast enhancement. The proposed algorithm first selects the atmospheric light in a hazy image using the quadtree-based subdivision. Then, since a hazy image has low contrast, the proposed algorithm determines transmission values, which are adaptive to scene depths, to increase the contrast of the restored image. However, some pixels in the restored image can be saturated, resulting in information loss. To overcome this
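The quadtree-based selection of the atmospheric light mentioned above can be sketched roughly as follows, assuming a grayscale image with intensities in [0, 1]. Scoring quadrants by their mean intensity alone is a simplification of the paper's selection criterion, and the stopping size is an illustrative choice.

```python
import numpy as np

def atmospheric_light(gray, min_size=16):
    """Quadtree-style search for the atmospheric light: repeatedly keep
    the quadrant with the highest mean intensity, then return the maximum
    pixel of the final region."""
    y0, y1, x0, x1 = 0, gray.shape[0], 0, gray.shape[1]
    while (y1 - y0) > min_size and (x1 - x0) > min_size:
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        quads = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                 (ym, y1, x0, xm), (ym, y1, xm, x1)]
        # descend into the brightest quadrant
        y0, y1, x0, x1 = max(quads, key=lambda q: gray[q[0]:q[1], q[2]:q[3]].mean())
    return gray[y0:y1, x0:x1].max()
```

Restricting the search to the brightest region makes the estimate robust to bright but non-atmospheric objects elsewhere in the frame, such as white buildings or headlights.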
Acknowledgments
The work of J.-H. Kim, W.-D. Jang, and C.-S. Kim was supported partly by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012-011031), and partly by Basic Science Research Program through the NRF of Korea funded by the MEST (No. 2012-0000916). The work of J.-Y. Sim was supported by Basic Science Research Program through the NRF of Korea funded by the MEST (2010-0006595).
References (34)
- et al., Flicker sensitivity as a function of target area with and without temporal noise, Vis. Res. (2000)
- et al., Digital Image Processing (2007)
- et al., Vision and the atmosphere, Int. J. Comput. Vis. (2002)
- et al., Contrast restoration of weather degraded images, IEEE Trans. Pattern Anal. Mach. Intell. (2003)
- S. Shwartz, E. Namer, Y. Schechner, Blind haze separation, in: Proc. IEEE CVPR, 2006, pp. …
- Y. Schechner, S. Narasimhan, S. Nayar, Instant dehazing of images using polarization, in: Proc. IEEE CVPR, 2001, pp. …
- et al., Deep photo: model-based photograph enhancement and viewing, ACM Trans. Graph. (2008)
- R. Tan, Visibility in bad weather from a single image, in: Proc. IEEE CVPR, 2008, pp. …
- et al., Single image dehazing, ACM Trans. Graph. (2008)
- L. Kratz, K. Nishino, Factorizing scene albedo and depth from a single foggy image, in: Proc. IEEE ICCV, 2009, pp. …
- et al., Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell.