1 Introduction

Haze, fog, and smoke commonly degrade the visual quality of images and videos acquired in outdoor environments during daytime and nighttime. In particular, haze, fog, and smoke deteriorate image details such as contrast, colorfulness, texture, structure, and sharpness, which hampers various computer vision and computational photography tasks, e.g., object detection and tracking, video surveillance, intelligent transportation, and stereo reconstruction. Defogging is a topic in computational photography for which various algorithms have been developed to enhance or restore images degraded by haze, fog, and smoke.

Most currently available defogging methods focus on processing daytime foggy images. These methods are generally classified into two categories: (1) enhancement and (2) restoration techniques. Enhancement algorithms directly process the pixel intensities of foggy images to improve their contrast; typical examples include intensity transforms and histogram analysis. While enhancement techniques are generally efficient and easy to implement, they are often inaccurate and lack robustness. Most researchers have therefore worked on restoration approaches. These approaches usually formulate dehazing as an inverse, ill-posed problem on the basis of a physical imaging model that relates a haze-free image (also called the scene radiance) to its corresponding foggy image through the atmospheric light and the scene transmission map. Based on this model, various restoration methods have been proposed in the literature to estimate the unknown atmospheric light and scene transmission map from daytime foggy images. He et al. [3] calculated the atmospheric light and transmission map using the dark channel prior plus soft matting to defog a single image, at a heavy computational cost. By skipping soft matting, Meng et al. [5] introduced a boundary constraint and contextual regularization to improve the dark channel-based dehazing method. Tarel et al. [11] employed a median-of-median filtering framework that can efficiently restore daytime hazy images but usually results in color distortion. Nishino et al. [6] recovered foggy visibility with a Bayesian defogging method that estimates the scene albedo and depth as two statistically independent components. More recently, Sulami et al. [8] established a reduced formation model that analyzes image pixels in small patches as lines, which are used to estimate the orientation of the atmospheric light. Tang et al. [10] proposed a learning-based strategy to calculate the scene transmission, while Galdran et al. [2] introduced an improved variational defogging framework using inter-channel contrast. Generally speaking, restoration-based defogging methods work better than enhancement-based defogging algorithms.

Fig. 1.

An example comparing defogged results from different methods. The (signal-to-noise ratio, peak signal-to-noise ratio, structural similarity index) values measuring defogged image quality for Tarel et al. [11], Li et al. [4], and our method were (8.67, 19.8, 0.7992), (5.69, 16.9, 0.5479), and (13.0, 24.2, 0.8775), respectively.

Although most current dehazing approaches work well on daytime foggy images, few of them can defog nighttime foggy images. A recently published paper proposed a nighttime dehazing method that accounts for glow and multiple light colors [4]. Our work also aims to defog nighttime foggy images. The contribution of this paper is as follows: we propose a new visibility-guided fusion strategy for single nighttime image defogging. Compared to the previous methods [4, 11], our method provides better defogged image quality (Fig. 1). In addition, our proposed method is much faster than the nighttime defogging approach [4].

The remainder of this paper is organized as follows. Section 2 describes the technical details of our proposed defogging method, which fuses fast visibility restoration and lighting enhancement for nighttime images. We present and discuss the experimental results in Sect. 3 and conclude this work in Sect. 4.

Fig. 2.

Flowchart of our proposed defogging method for nighttime images

2 Visibility-Guided Fusion

This section details our visibility-guided fusion framework for defogging nighttime images. The framework consists of three main steps: (1) visibility restoration, (2) lighting enhancement, and (3) blending fusion (Fig. 2). Each step is explained after we define the nighttime haze model below.

2.1 Nighttime Haze Model

In the literature, a widely used physical imaging model for hazy images is established in accordance with Koschmieder's law [3]:

$$\begin{aligned} \mathbf {I}(u,v)=\mathbf {J}(u,v)\mathbf {T}(u,v) + \mathbf {A}_\infty (1-\mathbf {T}(u,v)), \end{aligned}$$
(1)

where \(\mathbf {I}(u,v)\) denotes an observed (foggy) image, \(\mathbf {J}(u,v)\) refers to the haze-free image (also called the scene radiance), and \(\mathbf {A}_\infty \) indicates the atmospheric light or sky luminance. The transmission map \(\mathbf {T}(u,v)\) describes the amount of unscattered light entering the camera, and can be computed by

$$\begin{aligned} \mathbf {T}(u,v)=\exp (-kd(u,v)) \end{aligned}$$
(2)

where \(k\) and \(d(u,v)\) are the atmosphere's scattering factor and the depth, i.e., the distance between the camera and the objects in the scene, respectively.

Based on Eq. 1, we aim to solve for the haze-free image \(\mathbf {J}(u,v)\) given the unknown variables \(\mathbf {A}_\infty \) and \(\mathbf {T}(u,v)\). In this respect, defogging is an ill-posed problem. Although this model is widely used for daytime foggy images, it cannot be directly applied to nighttime hazy imaging. The main reason lies in illumination variations, i.e., the ambient lighting is totally different during daytime and nighttime.

Similar to the recent work [4], we modify Eq. 1 by adding a new term \(\mathbf {L}(u,v)\):

$$\begin{aligned} \mathbf {I}(u,v)=\mathbf {J}(u,v)\mathbf {T}(u,v) + \mathbf {A}_\infty (1-\mathbf {T}(u,v)) + \mathbf {L}(u,v), \end{aligned}$$
(3)

where \(\mathbf {L}(u,v)\) characterizes the luminance change between daytime and nighttime in foggy images. We estimate \(\mathbf {J}(u,v)\) and \(\mathbf {L}(u,v)\) separately and combine them to recover a haze-free image from the nighttime foggy input.
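For intuition, the forward model in Eq. 3 can be written out directly. The following minimal sketch (Python with NumPy) synthesizes a nighttime hazy image from assumed inputs \(\mathbf {J}\), \(\mathbf {T}\), \(\mathbf {A}_\infty \), and \(\mathbf {L}\); it only illustrates the forward model and is not part of the defogging pipeline itself.

```python
import numpy as np

def synthesize_nighttime_haze(J, T, A_inf, L):
    """Forward nighttime haze model of Eq. 3: I = J*T + A_inf*(1 - T) + L.

    J     : haze-free scene radiance, float RGB image in [0, 1], shape (H, W, 3)
    T     : transmission map in [0, 1], shape (H, W)
    A_inf : atmospheric light (scalar or length-3 vector)
    L     : nighttime luminance term, same shape as J
    """
    T = T[..., None]                      # broadcast the transmission over the color channels
    I = J * T + A_inf * (1.0 - T) + L
    return np.clip(I, 0.0, 1.0)           # keep the synthesized image in a displayable range
```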

2.2 Visibility Restoration

We obtain \(\mathbf {J}(u,v)\) with the fast visibility recovery method of Tarel et al. [11]. Following this approach, we do not directly estimate \(\mathbf {T}(u,v)\), since the transmission map is related to depth information and is difficult to predict precisely. To avoid estimating \(\mathbf {T}(u,v)\), the atmospheric veil \(\mathbf {X}(u,v)\) was introduced [1]:

$$\begin{aligned} \mathbf {X}(u,v)=\mathbf {A}_\infty (1-\mathbf {T}(u,v)), \mathbf {T}(u,v)=1-\frac{\mathbf {X}(u,v)}{\mathbf {A}_\infty }. \end{aligned}$$
(4)

Then, Eq. 1 can be rewritten to calculate \(\mathbf {J}(u,v)\):

$$\begin{aligned} \mathbf {J}(u,v)=\frac{\mathbf {A}_\infty (\mathbf {I}(u,v) - \mathbf {X}(u,v))}{\mathbf {A}_\infty - \mathbf {X}(u,v)}. \end{aligned}$$
(5)

This requires the atmospheric light \(\mathbf {A}_\infty \) and the veil \(\mathbf {X}(u,v)\), for which robust estimates can be obtained much more easily than the depth and transmission maps of the original formulation (Eq. 1). The methods used to determine \(\mathbf {A}_\infty \) and \(\mathbf {X}(u,v)\) are discussed in the previous work [11], so we omit their technical details here.
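For illustration, Eq. 5 reduces to a few lines of code once \(\mathbf {A}_\infty \) and \(\mathbf {X}(u,v)\) are available. The minimal sketch below assumes these two quantities have already been estimated (e.g., by the procedures of [11]); the small epsilon guarding the denominator is our own addition for numerical stability.

```python
import numpy as np

def restore_visibility(I, X, A_inf, eps=1e-6):
    """Recover the scene radiance J from a hazy image via Eqs. 4-5.

    I     : hazy image, float array in [0, 1], shape (H, W, 3)
    X     : atmospheric veil X(u, v), shape (H, W)
    A_inf : atmospheric light (scalar or length-3 vector), assumed already estimated
    """
    if X.ndim == 2:                       # broadcast the veil over the color channels
        X = X[..., None]
    # Eq. 5: J = A_inf * (I - X) / (A_inf - X)
    J = A_inf * (I - X) / np.maximum(A_inf - X, eps)
    return np.clip(J, 0.0, 1.0)
```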

2.3 Lighting Enhancement

Nighttime foggy images have low contrast and limited illumination, especially in hazy regions. The purpose of lighting enhancement is to increase the contrast of the less hazy regions of the foggy image, obtain the luminance \(\mathbf {L}(u,v)\), and improve the illumination of the final defogged nighttime image.

The contrast enhancement step takes two rules into consideration: (1) most regions of the foggy image consist of hazy pixels that critically affect the mean intensity of the image, and (2) the level of haze in these regions depends on the distance between the atmospheric light and the scene, as discussed in previous work [1]. Based on these rules, we calculate the enhanced luminance \(\mathbf {L}(u,v)\) by magnifying the difference between the nighttime hazy image \(\mathbf {I}(u,v)\) and its average luminance \(\lambda \) in each of the three channels \(c\in \{r,g,b\}\):

$$\begin{aligned} \mathbf {L}_c(u,v)=\beta (\mathbf {I}_c(u,v)-\lambda ), \lambda =\frac{\sum _U \sum _V \mathbf {H}(u,v)}{UV}, \end{aligned}$$
(6)

where \(\beta \) is a magnification factor that controls the luminance of the enhanced foggy regions, and \(U\) and \(V\) are the width and height of the nighttime hazy image. The original luminance \(\mathbf {H}(u,v)\) at each pixel is computed by [9]

$$\begin{aligned} \mathbf {H}(u,v) =0.299\times \mathbf {I}_r(u,v) + 0.587\times \mathbf {I}_g(u,v) + 0.114\times \mathbf {I}_b(u,v). \end{aligned}$$
(7)
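A minimal sketch of Eqs. 6 and 7 is given below. The magnification factor \(\beta \) in the sketch is an arbitrary placeholder rather than the value used in our experiments, and clipping the result to a displayable range is likewise our own assumption.

```python
import numpy as np

def enhance_lighting(I, beta=2.0):
    """Compute the enhanced luminance L(u, v) of Eq. 6.

    I    : nighttime hazy image, float RGB array in [0, 1], shape (H, W, 3)
    beta : magnification factor (placeholder value; not the paper's setting)
    """
    # Eq. 7: per-pixel luminance H(u, v) from the RGB channels
    H = 0.299 * I[..., 0] + 0.587 * I[..., 1] + 0.114 * I[..., 2]
    lam = H.mean()                        # average luminance lambda over the U x V image
    # Eq. 6: magnify the per-channel difference from the mean luminance
    L = beta * (I - lam)
    return np.clip(L, 0.0, 1.0)           # clipping is an assumption for display purposes
```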

2.4 Blending Fusion

This step estimates the illumination of the images \(\mathbf {J}(u,v)\) and \(\mathbf {L}(u,v)\) and blends the two images accordingly to improve the illumination of the defogged image.

We transform the images \(\mathbf {J}(u,v)\) and \(\mathbf {L}(u,v)\) from the RGB to the YCbCr color space. Recursive filtering [7] is applied to their Y-components to estimate the illumination of \(\mathbf {J}(u,v)\) and \(\mathbf {L}(u,v)\), yielding \(\mathbf {G}_J(u,v)\) and \(\mathbf {G}_L(u,v)\). Using the illumination maps \(\mathbf {G}_J(u,v)\) and \(\mathbf {G}_L(u,v)\), we seek to identify pixels in hazy regions. To this end, a weight function \(W_K(\mathbf {G}_K(u,v)), K\in \{J,L\}\), is empirically introduced, and the output \(\mathbf {O}_q(u,v)\) of the blending fusion is formulated as

$$\begin{aligned} \mathbf {O}_q(u,v) =\frac{\sum _{K\in \{J,L\}}W_K(\mathbf {G}_K(u,v))\,\mathbf {K}_q(u,v)}{\sum _{K\in \{J,L\}}W_K(\mathbf {G}_K(u,v))}, \;q\in \{Y, Cb, Cr\}, \end{aligned}$$
(8)

where \(\mathbf {K}_q(u,v)\) denotes the q-component of image \(\mathbf {K}\in \{\mathbf {J},\mathbf {L}\}\) in the YCbCr space. The Y-component output \(\mathbf {O}_Y(u,v)\) may not span the full range of pixel intensities, resulting in a low-contrast image. We therefore apply the following linear transformation to stretch its histogram to a specific intensity range \([P, Q]\):

$$\begin{aligned} \mathbf {\hat{O}}_Y(u,v) = P+\frac{\mathbf {O}_Y(u,v)-\mathbf {O}_{Min}}{\mathbf {O}_{Max}-\mathbf {O}_{Min}}(Q-P), \end{aligned}$$
(9)

where \(\mathbf {\hat{O}}_Y(u,v)\) denotes the final Y-component result, and \(\mathbf {O}_{Min}\) and \(\mathbf {O}_{Max}\) are the minimum and maximum intensity of the blending output \(\mathbf {O}_Y(u,v)\), respectively. We empirically set \(P=15\) and \(Q=236\) in our work.

Finally, we combine the Y-component \(\mathbf {\hat{O}}_Y(u,v)\) with the chromatic components \(\mathbf {O}_{Cb}(u,v)\) and \(\mathbf {O}_{Cr}(u,v)\) and transform the result back to the RGB color space, obtaining the final defogged nighttime image.
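The fusion and stretching of Eqs. 8 and 9 can be sketched as follows. Since the weight function \(W_K\) and the parameters of the recursive filter [7] are empirical and not detailed here, the sketch substitutes a Gaussian filter for the recursive illumination estimation and uses a simple placeholder weighting; it illustrates the structure of the step rather than the exact implementation.

```python
import numpy as np
from skimage.color import rgb2ycbcr, ycbcr2rgb
from scipy.ndimage import gaussian_filter

def blend_fusion(J, L, P=15.0, Q=236.0, sigma=10.0):
    """Blend the restored image J and enhanced luminance L (Eqs. 8-9).

    J, L : float RGB images in [0, 1], shape (H, W, 3)
    P, Q : target intensity range of the stretched Y-component (Eq. 9)
    """
    J_ycc, L_ycc = rgb2ycbcr(J), rgb2ycbcr(L)      # skimage returns Y roughly in [16, 235]

    # Illumination estimates G_J, G_L: the paper uses recursive filtering [7];
    # a Gaussian filter stands in for it here as a placeholder.
    G_J = gaussian_filter(J_ycc[..., 0], sigma)
    G_L = gaussian_filter(L_ycc[..., 0], sigma)

    # Placeholder weight function (the paper's empirical W_K is not specified):
    # down-weight the input with brighter local illumination.
    W_J = 1.0 / (G_J + 1e-6)
    W_L = 1.0 / (G_L + 1e-6)

    # Eq. 8: per-component weighted blending of J and L
    O = (W_J[..., None] * J_ycc + W_L[..., None] * L_ycc) / (W_J + W_L)[..., None]

    # Eq. 9: stretch the Y-component histogram to [P, Q]
    Y = O[..., 0]
    O[..., 0] = P + (Y - Y.min()) / (Y.max() - Y.min() + 1e-6) * (Q - P)

    return np.clip(ycbcr2rgb(O), 0.0, 1.0)
```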

3 Results and Discussion

All the nighttime foggy images, of varying visual quality, were collected from the Internet. We validated our proposed method on these images and compared it to two other methods. In the following, (1) M1 denotes the daytime single-image defogging approach of Tarel et al. [11], (2) M2 denotes the nighttime single-image defogging strategy based on glow and multiple light colors [4], and (3) M3 denotes our proposed method as described in Sect. 2. We used three measures to evaluate the defogged results of the three approaches: (1) SNR, the signal-to-noise ratio, (2) PSNR, the peak signal-to-noise ratio, and (3) SSIM, the structural similarity index [12]. Note that all experiments were implemented in Matlab 2017a and run on a laptop with Windows 8.1 Professional (64-bit), 16.0 GB of memory, and an Intel(R) Core(TM) i7 CPU (\(\times \) 8).
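For reference, the three measures can be computed with scikit-image as sketched below, assuming a reference image is available for comparison; the particular SNR definition used here (signal power over error power, in dB) is our own assumption.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(defogged, reference):
    """Compute SNR, PSNR, and SSIM between a defogged image and a reference.

    Both inputs are float RGB images in [0, 1]. The choice of reference image
    is an assumption of this sketch.
    """
    noise = defogged - reference
    snr = 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))   # assumed SNR definition
    psnr = peak_signal_noise_ratio(reference, defogged, data_range=1.0)
    ssim = structural_similarity(reference, defogged, channel_axis=-1, data_range=1.0)
    return snr, psnr, ssim
```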

Fig. 3.

Comparison of the SNR obtained with the three nighttime defogging methods

Fig. 4.

Comparison of the PSNR obtained with the three nighttime defogging methods

Fig. 5.

Comparison of the SSIM obtained with the three nighttime defogging methods

Fig. 6.

Visual comparison of several defogged images: the first column shows the input nighttime foggy images 01, 02, 04, 08, and 09, and the remaining columns show their defogged results using M1 [11], M2 [4], and M3 (ours), respectively. The fourth column displays the better or comparable results of our proposed visibility-guided fusion approach.

Fig. 7.

Our proposed approach processed images 03, 05, 06, and 10 with SNR, PSNR, and SSIM values comparable to or worse than those of method M1.

Table 1. Computational time of the different nighttime defogging methods

Figures 3, 4 and 5 compare the SNR, PSNR, and SSIM of the nighttime images defogged by the three different approaches. The average SNR of M1, M2, and M3 was 9.46, 6.25, and 11.4, respectively, while the average PSNR of the three methods was 17.2, 16.0, and 21.2. Moreover, the average SSIM of M1, M2, and M3 was 0.72, 0.42, and 0.85. Generally speaking, the SNR, PSNR, and SSIM of our proposed method were much better than those of the other two methods.

Figure 6 displays several examples of nighttime foggy images defogged by the three compared approaches. Our visibility-guided fusion framework outperforms the other two methods. In particular, the visual naturalness of our results is much better than that of M2 [4], and our framework also provides much better colorfulness than the other two methods. In addition, note that M1 fails on images 01 and 08 and sometimes produces white images without any information. The essential median filtering step used in M1 commonly introduces null pixels into the filtered output, and these null pixels cause the local white-balance procedure of Tarel et al. [11] to fail.

Figure 7 illustrates some nighttime foggy images on which our proposed approach does not work well. Compared to method M1, our method provides comparable or worse quantitative results in terms of SNR, PSNR, and SSIM (Figs. 3, 4, and 5), mainly because of nonuniform fog. However, the visual quality of the images defogged by our approach was still much better than that of method M2.

Table 1 reports the computational time for each nighttime foggy image defogged by the three methods M1, M2, and M3. The average computational time of these approaches was 23.667, 27.704, and 0.5711 seconds, respectively. Our method thus significantly improves the computational efficiency.

The objective of this work is to remove haze or fog from nighttime images. Currently, most single-image dehazing algorithms work well for daytime foggy images but struggle with nighttime foggy images. This work developed a visibility-guided fusion strategy to deal with nighttime hazy images. Our strategy generally outperforms the two compared defogging methods. In particular, our proposed method combines the advantages of enhancement- and restoration-based dehazing algorithms to address illumination variations during nighttime imaging, while also providing an efficient nighttime defogging framework.

Unfortunately, our method still struggles with nighttime images containing nonuniform fog or haze (Fig. 7), because the illumination of these images is not estimated precisely. Moreover, it remains difficult to establish a precise nighttime hazy imaging model. The model proposed by Li et al. [4] does not precisely characterize the nighttime imaging process: it usually over-defogs the image, leading to a loss of image naturalness and large color shifts or distortions. Our future work will address these issues.

4 Conclusions

This paper proposes a visibility-guided fusion approach for single nighttime image defogging. We combine fast visibility recovery and lighting enhancement to address illumination variations during nighttime imaging. The experimental results demonstrate the effectiveness and efficiency of the proposed method. Compared to a recent nighttime defogging method, our approach provides much better image naturalness and colorfulness. In particular, our method significantly improves the SNR, PSNR, and SSIM of the defogged images.