1 Introduction

Images captured under foggy or hazy conditions appear whitish, with low contrast and saturation, resulting in significantly reduced object visibility. In recent years, haze removal methods have been actively studied to improve the clarity and visibility of such images for use in outdoor surveillance and in-vehicle camera systems. Haze-removal algorithms in previous studies can be broadly classified into (1) physical model-based methods built on an image degradation model of hazy images and (2) methods based on deep learning.

In the methods categorized as (1), a hazy image is represented as a weighted sum of the direct light observed in the absence of haze and the ambient light. The weight is related to the attenuation of the direct light, that is, the light transmittance in the scene [1]. In this model, the transmittance and the ambient light are unknown and must be estimated. Once they are properly estimated, the physical model is used to back-calculate the direct light from the observed hazy image to obtain a haze-removed output image. A typical model-based method is based on the dark channel prior (DCP) [2] proposed by He et al. DCP is the prior knowledge that, in clear outdoor images without haze, most local patches contain some pixels with very low lightness in at least one color component.

He et al.’s method uses DCP to estimate the ambient light and the transmittance of a scene. Although He et al.’s method produces good results for many hazy images, it suffers from unnatural hues, artifacts, and image darkening. To address these problems, X. Zhao proposed an automatic parameter adjustment method [3] that can be applied to He et al.’s method. However, although their method suppresses artifacts and image darkening, it does not address the problem of unnatural hue. Recently, a method for estimating the ambient light and the transmittance of a scene using deep learning [4] has also been proposed by Li et al. Although their method provides good results in many cases, it has the problem of reducing the image contrast. Other methods have been proposed for estimating the ambient light and the transmittance from multiple images [5,6,7,8,9,10]. However, these methods cannot be applied to single images.

The methods categorized as (2) learn the relationship between hazy images and their paired haze-free images, and use it to output haze-free, clear images for any hazy image. These methods do not require the estimation of the ambient light and the transmittance described above. As a typical method, Chen et al. proposed haze removal using haze maps generated by gated autoencoders [11]. However, their method shows good results only for certain datasets and is not sufficiently effective for others; that is, its generalization performance is insufficient. S. Zhao et al. proposed a method [12] that fuses the results obtained by deep learning with those of He et al.’s method. Their method shows better generalization performance than Chen et al.’s but suffers from low image contrast.

To address the above problems in improving the visibility of hazy images, this study proposes a method that uses lightness contrast enhancement and manipulation of the convex combination coefficients for white, black, and pure colors in the RGB color space. This method can overcome the problems of the conventional methods, such as unnatural hues, artifacts, and insufficient contrast. In particular, the primary motivation for this study is the implementation of a haze-removal process that addresses the issue of unnatural hue change, which has not been adequately addressed in previous studies. Image processing based on manipulation of the convex combination coefficients of white, black, and pure colors in the RGB color space has been applied to contrast enhancement [13,14,15], image sharpening [16], low-light image enhancement [17, 18], and backlit image enhancement [19, 20], yielding good results.

The proposed method first applies image enhancement to the lightness component of the input image using a multi-scale image enhancement method with S-shaped functions to obtain an enhanced image with high contrast. Next, the whiteness of the image, that is, the effect of residual haze that cannot be eliminated by this contrast enhancement, is removed by saturation enhancement. Because white is dominant in images with residual haze, when the pixel values are expressed in terms of the convex combination coefficients for white, black, and pure colors, the coefficients for white tend to be larger and those for pure colors tend to be smaller. The proposed method reduces the coefficients of white and increases those of pure colors by using a modified version of the gamma transformation and a histogram specification technique. This process makes the image more vivid, resulting in a haze-removed image with high visibility.

The academic contributions of this study are as follows:

  • In the domain of haze removal, we propose a novel processing method that emphasizes pixel value manipulation in the equi-hue plane to enhance the hue preservation performance, which has not been the primary focus of previous studies.

  • In contrast to approaches based on physical models of the scene or learning models derived from large datasets, we address the haze removal problem by combining two fundamental signal processing approaches: brightness enhancement and saturation correction using the equi-hue plane.

  • We demonstrate the novel applicability of image processing techniques utilizing convex combination coefficients of vertices of equi-hue planes in the RGB color space for haze removal.

The remainder of this paper is organized as follows. Section 2 describes the equi-hue plane and the convex combination coefficients of white, black, and a pure color in the RGB color space. Section 3 describes the proposed method in detail. In Sect. 4, the effectiveness of the proposed method is verified by comparing it with conventional methods in terms of image naturalness and haze-removal performance. Finally, Sect. 5 concludes this study.

2 Convex combination coefficients of white, black, and pure colors in the RGB color space

Let \(\varvec{I}(i,j)\!=\!(I_R(i,j), I_G(i,j), I_B(i,j))^\top \!\in \!\left[ 0,1\right] ^3\) be a column vector that represents a pixel value at two-dimensional coordinates \((i,j)\) of a 24-bit full-color input image, where \(I_R(i,j)\), \(I_G(i,j)\), and \(I_B(i,j)\) correspond to the red (R), green (G), and blue (B) components of the image, respectively.

Regarding hue-preserving transformation in the RGB color space, Naik and Murthy defined the following condition [21]:

$$\begin{aligned} \varvec{X}(i,j) = \alpha (i,j)\varvec{I}(i,j)+\beta (i,j)\varvec{e}, \end{aligned}$$
(1)

where \(\varvec{X}(i,j)\) is a transformed pixel’s value that is in the same hue plane as \(\varvec{I}(i,j)\); \(\alpha (i,j)\!\ge \!0\) and \(\beta (i,j)\) are the scaling and shifting parameters, respectively; \(\varvec{e}\) is (1, 1, 1). This equation is a parametric representation of the plane passing through three points (0, 0, 0), (1, 1, 1), and \(\left( I_R(i,j), I_G(i,j), I_B(i,j)\right) \). Thus, as shown in Fig. 1, a set of equi-hue pixels exists in an equi-hue plane in the RGB color space. The equi-hue plane is a triangular region whose vertices correspond to white (\(\varvec{w}\!=\!(1,1,1)\)), black (\(\varvec{k}\!=\!(0,0,0)\)), and a pure color defined as follows: \(\varvec{c}(i,j)\!=\!\left( \varvec{I}(i,j)-m(i,j)\varvec{e}\right) /\left( M(i,j)-m(i,j)\right) ,\) where \(M(i,\!j)\!=\!\max _{c\in \{R,G,B\}} I_c(i,\!j)\) and \(m(i,j)\!=\!\min _{c\in \{R, G, B\}} I_c(i,\!j)\) are the maximum and minimum RGB components of \(\varvec{I}(i,j)\), respectively.

Figure 2 shows an equi-hue plane in the RGB color space. A pixel \(\varvec{I}(i,j)\) exists in the triangular region whose vertices correspond to white \(\varvec{w}\), black \(\varvec{k}\), and a pure color \(\varvec{c}(i,j)\). Projecting the vector \(\varvec{I}(i,j)\) onto the achromatic axis yields the point \(P(i,j)\varvec{w}\), where \(P(i,j) = \sum _{c\in \{R,G,B\}}I_c(i,j)/3\) coincides with the definition of lightness in the HSI color space [22]. Thus, if pixel \(\varvec{I}(i,j)\) is moved parallel to edge \(\varvec{kw}\), only the lightness changes. On the other hand, if pixel \(\varvec{I}(i,j)\) is moved perpendicular to edge \(\varvec{kw}\), only the saturation changes, as the hue and the lightness remain unchanged.

Using the convex combination of white, black and a pure color, \(\varvec{I}(i,j)\) can be represented as follows:

$$\begin{aligned} \varvec{I}(i,j) = a_w(i,j)\varvec{w} + a_k(i,j)\varvec{k} + a_c(i,j)\varvec{c}(i,j), \end{aligned}$$
(2)

where \(a_w(i,j)\), \(a_k(i,j)\), and \(a_c(i,j)\) are the convex combination coefficients for white, black, and a pure color, respectively. The coefficients are calculated as follows: \(a_w(i,j)\!=\!m(i,j), a_k(i,j)\!=\!1\!-\!M(i,j), a_c(i,j)\!=\!M(i,j)\!-\!m(i,j).\) In this regard, the following conditions must be satisfied for each coefficient. \(a_w(i,j) + a_k(i,j) + a_c(i,j) = 1, 0 \le a_w(i,j), a_k(i,j), a_c(i,j) \le 1.\)
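The decomposition above can be sketched in a few lines of NumPy (the function names and the (H, W, 3) array convention are illustrative, not from the paper; achromatic pixels, where \(M=m\), have no defined pure color and need a fallback):

```python
import numpy as np

def convex_coefficients(img):
    """Coefficients a_w, a_k, a_c for white, black, and the pure color.
    img: float array of shape (H, W, 3) with values in [0, 1]."""
    M = img.max(axis=-1)   # per-pixel maximum RGB component
    m = img.min(axis=-1)   # per-pixel minimum RGB component
    return m, 1.0 - M, M - m

def pure_color(img, eps=1e-12):
    """Pure color c = (I - m e) / (M - m); achromatic pixels (M == m)
    have no defined pure color and fall back to zero here."""
    M = img.max(axis=-1, keepdims=True)
    m = img.min(axis=-1, keepdims=True)
    return np.where(M > m, (img - m) / np.maximum(M - m, eps), 0.0)
```

For any pixel, the three coefficients sum to 1, and \(a_w\varvec{w} + a_c\varvec{c}\) reproduces the pixel as in Eq. (2), since black contributes nothing.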

In this study, we propose a new image enhancement method for hazy images that manipulates the coefficients described above.

Fig. 1

An equi-hue plane in the RGB color space

Fig. 2

An example of pixel representation by the convex combination in the equi-hue plane

Fig. 3

A hazy image and its histograms of the convex combination coefficients. a Original image. b Histogram of \(a_w\). c Histogram of \(a_k\). d Histogram of \(a_c\)

3 Proposed method

Hazy images tend to be whitish and their contrast is reduced. For example, Fig. 3 shows a hazy image. Figure 3a shows the original image. Figures 3b–d show the histograms of the convex combination coefficients \(a_w\), \(a_k\), and \(a_c\) for Fig. 3a, respectively. As shown in Fig. 3b and c, the distribution spreads of the hazy image tend to be smaller for both \(a_w\) and \(a_k\). In addition, as shown in Fig. 3d, \(a_c\) tends to be small; the colors of the hazy image are less vivid.

Figure 4 shows the processing flow of the proposed method. First, to enhance the lightness contrast of the input image by widening the distributions of \(a_w\) and \(a_k\), a multi-scale image enhancement method using S-shaped functions [23] is applied. Subsequently, the saturation is enhanced by widening the distribution of \(a_c\) using a modified version of the gamma transformation and a histogram specification technique. The former nonlinearly darkens the lightness, with brighter input pixels darkened more strongly. The latter spreads the distribution, which exhibits a bias toward achromatic colors owing to the haze effects. The integration of these two techniques results in a more vivid image, thereby producing a highly visible output.

Fig. 4

Processing flow of the proposed method

Fig. 5

Enhanced images obtained by applying the S-shaped functions to the lightness image \(\overline{I}\) shown in a. a Lightness image \(\overline{I}\). b Enhanced image (\(\sigma _g=5/255\)). c Enhanced image (\(\sigma _g=100/255\)). d Enhanced image (\(\sigma _g=150/255\))

Fig. 6

An image obtained by calculating Eq. (4) \((w_n=1/3)\)

3.1 Multi-scale smoothing-based image enhancement using S-shaped functions

Now, let \(\overline{I}(i,j)\) be the lightness value of a pixel calculated as follows: \(\overline{I}(i,j)= \sum _{c\in \{R,G,B\}}I_c(i,j)/3.\) The lightness \(\overline{I}(i,j)\) is enhanced using a multi-scale enhancement method with S-shaped functions [23]. The S-shaped function is expressed as follows:

$$\begin{aligned} L\! =\! {\left\{ \begin{array}{ll} a_{\sigma _g}^{1-\lambda }\overline{I}^\lambda , & (0\!\le \! \overline{I} \!\le \! a_{\sigma _g}),\\ 1\!-\!(1\!-\!a_{\sigma _g})^{1-\lambda }(1\!-\!\overline{I})^\lambda , & (a_{\sigma _g} \!<\! \overline{I}\! \le \! 1). \end{array}\right. } \end{aligned}$$
(3)

In Eq. (3), the coordinates \((i,j)\) are omitted owing to space limitations. \(\lambda \) and \(\sigma _g\) are parameters: \(\lambda \) modulates the magnitude of enhancement, with larger values corresponding to a more pronounced enhancement effect, and \(\sigma _g\) determines the scale of the Gaussian filter, whose kernel radius is \(r=\lfloor 3\sigma _g \rfloor \), where \(\lfloor \cdot \rfloor \) denotes the floor function. \(a_{\sigma _g}(i,j)\) is the output value of this Gaussian filter applied to \(\overline{I}(i,j)\).

Figure 5 shows the results of enhancing the lightness image using the S-shaped function. Figure 5a shows the lightness image. Figures 5b–d show the resultant images when \(\sigma _g\) is set to 5/255, 100/255, and 150/255, respectively. In Fig. 5b, the edges and details of the lightness image are enhanced. In Fig. 5c and d, the contrast of the lightness image is enhanced. That is, a smaller value of \(\sigma _g\) increases the edge and detail enhancement effects, whereas a larger value of \(\sigma _g\) increases the contrast enhancement effect.
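A minimal sketch of Eq. (3) follows (the function name, the use of SciPy's `gaussian_filter` to obtain \(a_{\sigma _g}\), and the clipping that keeps the powers finite are our assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def s_shaped_enhance(I_bar, sigma_g, lam):
    """Eq. (3): adaptive S-shaped enhancement of the lightness image.
    The inflection point a_sigma is the Gaussian-smoothed lightness;
    lam > 1 pushes each pixel away from its local mean."""
    a = np.clip(gaussian_filter(I_bar, sigma=sigma_g), 1e-6, 1.0 - 1e-6)
    low = a ** (1.0 - lam) * I_bar ** lam                         # I_bar <= a
    high = 1.0 - (1.0 - a) ** (1.0 - lam) * (1.0 - I_bar) ** lam  # I_bar > a
    return np.where(I_bar <= a, low, high)
```

A pixel equal to its local mean is left unchanged, while pixels below (above) the local mean are darkened (brightened), which is what produces the contrast stretch.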

The proposed method fuses multiple lightness images enhanced with multiple \(\sigma _g\). The fused result is calculated as follows:

$$\begin{aligned} \overline{I'}(i,j) = \sum _{n=1}^N w_nL_n(i,j), \end{aligned}$$
(4)

where N is the number of images to be fused, and \(w_n\) are the weighting coefficients satisfying \(\sum _{n=1}^N w_n\!=\!1\). \(L_n(i,j)\) is the value of Eq. (3) obtained when \(\sigma _g\!=\!\sigma _{g_n}\). Figure 6 shows the result of fusing the images shown in Fig. 5b–d by Eq. (4); the edges and contrast of the lightness image are properly enhanced. The color image \(\varvec{I'}\) is then obtained by the following equation.

$$\begin{aligned} \varvec{I'}(i,j) = \varvec{I}(i,j) + (\overline{I'}(i,j) - \overline{I}(i,j))\varvec{e}. \end{aligned}$$
(5)

Figure 7 shows the movement of a pixel in the equi-hue plane in the RGB color space when the transformation is performed using Eq. (5). Figures 7a and b show the movements when \(\overline{I'}(i,j)\!>\!\overline{I}(i,j)\) and \(\overline{I'}(i,j)\!<\!\overline{I}(i,j)\), respectively. As shown in these figures, \(\varvec{I'}(i,j)\) is obtained by moving \(\varvec{I}(i,j)\) parallel to the edge \(\varvec{kw}\).
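Eqs. (4) and (5) amount to a weighted average of the enhanced lightness images followed by adding the lightness change equally to all three channels. A hedged sketch (function name and array conventions ours; clipping to [0, 1] may be needed in practice):

```python
import numpy as np

def fuse_and_recolor(img, L_list, weights):
    """Eq. (4): weighted fusion of enhanced lightness images, then
    Eq. (5): add the lightness change equally to R, G, and B, i.e.,
    a move parallel to the k-w edge that preserves hue and saturation."""
    I_bar = img.mean(axis=-1)                        # lightness of the input
    I_bar_new = sum(w * L for w, L in zip(weights, L_list))
    return img + (I_bar_new - I_bar)[..., None]      # clip to [0, 1] if required
```

Because the same offset is added to every channel, the differences between RGB components, and hence the hue and saturation, are untouched.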

Figure 8a shows an enhanced color image. This image shows that the edges and the contrast are properly enhanced. Figures 8b–d show the histograms of the coefficients of white (\(a'_w\)), black (\(a'_k\)), and the pure colors (\(a'_c\)) for Fig. 8a, respectively. As shown in Fig. 8b and c, \(a'_w\) and \(a'_k\) have wider distributions. In contrast, as shown in Fig. 8d, the distribution of \(a'_c\) is not widespread. Furthermore, Fig. 9a and b show the distributions of \(a'_w\) and \(a'_c\) corresponding to the area enclosed by the red box in Fig. 8a. The distribution of \(a'_w\) is relatively wide, whereas that of \(a'_c\) is narrow. This indicates that whitish pixels, which have high lightness and low saturation values, are dominant. That is, the haze effect remains when only the lightness contrast is enhanced.

Fig. 7

A schematic illustration of the movement of a pixel in the equi-hue plane in the RGB color space expressed by Eq. (5). a \(\overline{I'}(i,j)\!>\!\overline{I}(i,j)\). b \(\overline{I'}(i,j)\!<\!\overline{I}(i,j)\)

Fig. 8

An image obtained by calculating Eq. (5). a Resultant image. b Histogram of \(a'_w\). c Histogram of \(a'_k\). d Histogram of \(a'_c\)

Fig. 9

Histograms of \(a'_w\) and \(a'_c\) for the pixels in the red box shown in Fig. 8a. a Histogram of \(a'_w\). b Histogram of \(a'_c\)

3.2 Image saturation enhancement based on manipulation of white, black, and pure colors’ convex combination coefficients

This subsection describes the process of improving image visibility by increasing the color saturation while decreasing the lightness value. First, the following modified gamma transformation is applied to the lightness value \(\overline{I'}(i,j)\) of \(\varvec{I'}(i,j)\) to reduce the whitish appearance.

$$\begin{aligned} L_f(i,j)= & \overline{I'}(i,j)^{\gamma (i,j)}, \end{aligned}$$
(6)
$$\begin{aligned} \gamma (i,j)= & (\gamma _{\alpha }-1)\overline{I'}(i,j)^{\gamma _{\beta }}+1, \end{aligned}$$
(7)

where \(\gamma _{\alpha }\) and \(\gamma _{\beta }\) are the parameters.

Figure 10 shows how the shape of the modified gamma transformation changes with the parameters \(\gamma _{\alpha }\) and \(\gamma _{\beta }\). In Fig. 10a and b, the black dotted line represents the modified gamma transformation with \((\gamma _{\alpha }\!=\!1, \gamma _{\beta }\!=\!1)\), which is the identity transformation. Figure 10a shows the function shapes when \(\gamma _{\alpha }\!=\!1\), 2, 3, or 4, with \(\gamma _{\beta }\!=\!1\) fixed. Figure 10b shows the function shapes when \(\gamma _{\beta }\!=\!1\), 3, or 5, with \(\gamma _{\alpha }\!=\!2\) fixed. As shown in Fig. 10a, the larger \(\gamma _{\alpha }\) is, the stronger the darkening becomes. On the other hand, as shown in Fig. 10b, when \(\gamma _{\beta }\) is large, the transformation approaches the identity for most pixels.
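Eqs. (6) and (7) can be written down directly; a minimal sketch (the function name is ours, and the default parameter values are the experimental settings reported in Sect. 4):

```python
import numpy as np

def modified_gamma(I_bar, gamma_a=2.5, gamma_b=1.0):
    """Eqs. (6)-(7): per-pixel gamma whose exponent grows with the
    lightness itself, so bright (whitish) pixels are darkened more
    strongly than dark ones. gamma_a = gamma_b = 1 is the identity."""
    gamma = (gamma_a - 1.0) * I_bar ** gamma_b + 1.0
    return I_bar ** gamma
```

Since the exponent \(\gamma (i,j)\ge 1\) for \(\gamma _{\alpha }\ge 1\), the transformation never brightens a pixel, which is exactly the whiteness-reduction behavior described above.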

Fig. 10

Function shapes of the modified gamma transformation with changing parameters. a Effects of changing \(\gamma _{\alpha }\) (\(\gamma _{\beta }=1\)). b Effects of changing \(\gamma _{\beta }\) (\(\gamma _{\alpha }=2\))

Fig. 11

A schematic illustration of pixel movement in the equi-hue plane and the histogram of \(t(i,j)\). a \(\varvec{I}^{\varvec{w}}_{\text {max}}(i,j)\) in an equi-hue plane in the RGB color space. b Histogram of \(t(i,j)\)

Next, while maintaining the calculated lightness \(L_f(i,j)\), the coefficients of white are reduced and those of pure colors are increased. For this objective, suppose a line segment connects white \(\varvec{w}\) and \(\varvec{I'}(i,j)\) in the equi-hue plane in the RGB color space as shown in Fig. 11a. The intersection \(\varvec{I}^{\varvec{w}}_{\text {max}}(i,j)\) of this line segment with the edge \(\varvec{kc}\) can be obtained as follows:

$$\begin{aligned} \varvec{I}^{\varvec{w}}_{\text {max}}(i,j)\!=\!\frac{a'_c(i,j)}{1-a'_w(i,j)}\varvec{c}(i,j)\!=\!t(i,j)\varvec{c}(i,j), \end{aligned}$$
(8)

where \(a'_w(i,j)\) and \(a'_c(i,j)\) are the coefficients of white and a pure color for \(\varvec{I'}(i,j)\), respectively. Note that manipulating the pixel value \(\varvec{I'}(i,j)\) along the line segment connecting \(\varvec{w}\) and \(\varvec{I}^{\varvec{w}}_{\text {max}}(i,j)\) tends to decrease the coefficient of white but does not significantly increase that of the pure color. Figure 11b shows the distribution of \(t(i,j)\). As shown in Fig. 11b, the distribution of t is biased toward smaller values in hazy images; therefore, the coefficients of the pure colors cannot become large through the above manipulation alone.
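Eq. (8) reduces to a simple ratio of the coefficients. A sketch (name ours, with a guard against division by zero for pure-white pixels):

```python
import numpy as np

def t_coefficient(a_w, a_c, eps=1e-12):
    """Eq. (8): scale t(i,j) such that t(i,j)c(i,j) is the intersection
    of the line from white through I' with the k-c edge. Guarded
    against division by zero for pure-white pixels (a_w == 1)."""
    return a_c / np.maximum(1.0 - a_w, eps)
```

Since \(a'_c \le 1 - a'_w\) for any valid pixel, t always lies in [0, 1], and \(t\varvec{c}\) stays on the edge \(\varvec{kc}\).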

To address this problem, the proposed method spreads the distribution of t, which is biased toward small values, using a histogram specification technique whose target distribution is the histogram of t smoothed by a one-dimensional Gaussian filter, as follows:

$$\begin{aligned} t'(i,j) = \min \{z|G_{\sigma '}(z)\ge H(t(i,j))\}, \end{aligned}$$
(9)

where \(\sigma '\) is the standard deviation of the one-dimensional Gaussian filter. \(H(\cdot )\) and \(G_{\sigma '}(\cdot )\) are the normalized cumulative histograms of t and the smoothed t, respectively.
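A possible NumPy realization of Eq. (9) is sketched below (the binning resolution, the interpretation of \(\sigma '\) in normalized units, and the use of SciPy's `gaussian_filter1d` are our assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def spread_t(t, sigma_prime=0.1, bins=256):
    """Eq. (9): histogram specification of t toward its own histogram
    smoothed by a 1-D Gaussian (sigma_prime in normalized [0,1] units)."""
    hist, _ = np.histogram(t, bins=bins, range=(0.0, 1.0))
    H = np.cumsum(hist) / hist.sum()                       # source CDF
    smoothed = gaussian_filter1d(hist.astype(float), sigma_prime * bins)
    G = np.cumsum(smoothed) / smoothed.sum()               # target CDF
    idx = np.clip((t * (bins - 1)).astype(int), 0, bins - 1)
    # smallest bin z with G[z] >= H(t(i,j)), mapped back to [0, 1]
    z = np.minimum(np.searchsorted(G, H[idx]), bins - 1)
    return z / (bins - 1)
```

Because the smoothed histogram is wider than the original one, the specified values \(t'\) cover a broader range than t, which is the intended spreading effect.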

In the proposed method, the output pixel is obtained in the line segment connecting \(t'(i,j)\varvec{c}(i,j)\) and \(\varvec{I'}(i,j)\) as follows:

$$\begin{aligned} \varvec{O}(i,j)\! =\! \varvec{I'}(i,j)\! +\! s(i,j)(t'(i,j)\varvec{c}(i,j) \!-\! \varvec{I'}(i,j)), \end{aligned}$$
(10)

where \(s(i,j)\) is calculated so that the lightness value of \(\varvec{O}(i,j)\) equals \(L_f(i,j)\), as follows:

$$\begin{aligned} s(i,j)\!=\!\frac{L_f(i,j) - \overline{I'}(i,j)}{\text {mean}_{c'\in \{R,G,B\}}(t'(i,j)c_{c'}(i,j)\!-\!I'_{c'}(i,j))}. \end{aligned}$$
(11)

Figure 12 illustrates the pixel movement caused by the aforementioned manipulation. As shown in this figure, the output pixel is located on the line segment connecting \(\varvec{I'}(i,j)\) and \(t'(i,j)\varvec{c}(i,j)\), at the position where the lightness value equals \(L_f(i,j)\).
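Eqs. (10) and (11) can be sketched as follows (names ours; the denominator in Eq. (11) vanishes when the target point has the same lightness as \(\varvec{I'}\), which would need a guard in a real implementation):

```python
import numpy as np

def saturate(I_prime, c, t_prime, L_f):
    """Eqs. (10)-(11): move each pixel toward t'(i,j)c(i,j) on the k-c
    edge, with s chosen so that the output lightness equals L_f."""
    diff = t_prime[..., None] * c - I_prime          # t'c - I'
    denom = diff.mean(axis=-1)                       # lightness change per unit s
    s = (L_f - I_prime.mean(axis=-1)) / denom        # guard denom == 0 in practice
    return I_prime + s[..., None] * diff
```

The output stays on the segment between \(\varvec{I'}\) and \(t'\varvec{c}\), i.e., inside the equi-hue plane, so the hue is preserved while the saturation increases and the lightness drops to \(L_f\).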

Fig. 12

A schematic illustration of pixel movement in the saturation enhancement using the convex coefficients of white, black, and a pure color

Figure 13 shows an example of an output image obtained using the proposed method. Figure 13a shows a result for Fig. 3a. Figure 13b–d show the histograms of white (\(a''_w\)), black (\(a''_k\)) and the pure colors (\(a''_c\)). The distributions shown in Fig. 13b and c are not significantly different from those in Fig. 8b and c, respectively. In contrast, Fig. 13d shows a significantly widened distribution.

Fig. 13

An example of the output image of the proposed method. a Output image. b Histogram of \(a''_w\). c Histogram of \(a''_k\). d Histogram of \(a''_c\)

Fig. 14

Results for Image 1, Image 2, and Image 3. a Original image. b Ground-truth image. c He et al.’s method [2]. d X. Zhao’s method[3]. e Li et al.’s method[4]. f Chen et al.’s method[11]. g S. Zhao et al.’s method[12]. h Proposed method

4 Experiments

4.1 Experimental conditions

In the experiments, REalistic Single Image DEhazing (RESIDE) [24], I-HAZE [25], and O-HAZE [26] were used as datasets. RESIDE contains pseudo-hazy images generated with CG based on an observational model of haze and includes test data designated as Systematic Objective Testing Set (SOTS)-indoor (500 images) and SOTS-outdoor (500 images). I-HAZE and O-HAZE contain 30 and 45 images, respectively, in which haze was superimposed using a fog machine. All the datasets contain the corresponding ground-truth haze-free images. To confirm the effectiveness of the proposed method, we compared it with the methods of He et al. [2], X. Zhao [3], Li et al. [4], Chen et al. [11], and S. Zhao et al. [12]. The parameters of the compared methods were set according to the previous studies. The parameters of the proposed method were set to \(\lambda \!=\!4.0\), \((\sigma _{g_1}, \sigma _{g_2},\sigma _{g_3})\!=\!(5/255, 50/255, 150/255)\), \(w_n\!=\!1/3\), \(\gamma _{\alpha }\!=\!2.5\), \(\gamma _{\beta }\!=\!1.0\), and \(\sigma '\!=\!0.1\). These parameters were determined experimentally to optimize the visibility improvement while mitigating over-enhancement. The same parameter values were used for all four datasets; the fact that successful results are obtained with common parameters across datasets containing a variety of hazy images implies that the proposed method is robust to parameter variations.

4.2 Qualitative evaluation

Figure 14 shows the results for three representative images and their magnified views. The images were selected from the RESIDE, I-HAZE, and O-HAZE datasets to best show the characteristics of each method. Note, however, that the resultant images of Li et al.’s method shown in Fig. 14e differ slightly from the others in terms of resolution and magnification because, in their method, the resolution of the output image differs from that of the input image.

Table 1 Average indices for each dataset
Fig. 15

Results for images in real-world conditions. a and b are hazy images. c and d are processing results

In the resultant images of He et al.’s method shown in Fig. 14c, artifacts and unnatural colors appear in the sky. Regarding X. Zhao’s method shown in Fig. 14d, the images are darker, and the visibility of objects such as benches and floor surfaces is reduced. In the resultant images of Li et al.’s method shown in Fig. 14e, the appearance is overexposed, which reduces the visibility of the subject. In the resultant images of Chen et al.’s method in Fig. 14f, haze removal is insufficient, and the visibility of the images is not improved. Comparing the magnified views of S. Zhao et al.’s method shown in Fig. 14g and the proposed method shown in Fig. 14h, the proposed method shows higher contrast on the bench and the ground surface and improves the visibility of the objects. In addition, with the proposed method, the hue of the bench is closest in appearance to that of the ground-truth image shown in Fig. 14b.

4.3 Quantitative evaluation

In the quantitative evaluation, the peak signal-to-noise ratio (PSNR) [27], the Structural Similarity Index Measure (SSIM) [27], the Gradient Ratioing at Visible Edges (GRVE) [28], the Blind Image Quality Measure of Enhanced images (BIQME) [29], and the Hue Difference (HD) were used. The higher the PSNR value, the better the image quality. SSIM is an index of the structural similarity between two images; the closer the value is to 1, the greater the similarity. GRVE is an index of image visibility that compares the edges between the input and output images, with larger values indicating better results. BIQME is a no-reference quality measure for enhanced images, with larger values indicating better results. HD evaluates the naturalness of color tones by calculating the hue difference between the input and output images. Although there are many possible definitions of hue, this study used Raines’s hue and the CIE 1976 L*a*b* hue. The more natural the colors of the output image, the lower the HD value.

Table 1 presents the averages of each index for SOTS-indoor, SOTS-outdoor, I-HAZE, and O-HAZE. As shown in the results for SOTS-indoor and SOTS-outdoor, the proposed method yields the best results for BIQME, \(\hbox {HD}_{Lab}\), and \(\hbox {HD}_{Raines}\). This indicates that the visibility improvement of the proposed method is prominent for images in which haze is superimposed by CG, although its deviation from the ground-truth images is larger than that of the other methods. Regarding I-HAZE and O-HAZE, the proposed method shows relatively large PSNR and SSIM values. Furthermore, it yields the best results for BIQME, GRVE, \(\hbox {HD}_{Lab}\), and \(\hbox {HD}_{Raines}\). This indicates that the visibility improvement effect of the proposed method is particularly prominent for images in which haze is superimposed using fog machines.

5 Discussion and limitation

Figure 15 shows the results of applying the proposed method to hazy images captured under real-world conditions without ground truth. These results show that the proposed method can sufficiently improve the visibility of actual hazy images. These results and those presented in Sect. 4 indicate that the proposed method can be practically applied to improve the visibility of images captured by outdoor surveillance cameras and drones.

Conventional methods employ physical models or deep learning techniques to estimate the transmittance of light and subsequently utilize this information to remove haze indirectly. Thus, if a discrepancy exists between the actual conditions and the model, their performance is significantly compromised. For example, they are less effective for artificial hazy images created with fog machines, such as those in I-HAZE and O-HAZE. In contrast, the proposed method does not use physical models; instead, it enhances image appearance through direct manipulation of pixel values that have become whitish owing to haze. As a result, it demonstrates greater efficacy than conventional methods on hazy images created artificially with fog machines.

However, when employing a fog machine, haze may become more concentrated in certain areas, resulting in a non-uniform distribution. The proposed method is a global transformation approach that does not utilize local information. Therefore, in scenarios where the non-uniformity of haze distribution is exceptionally high, the efficacy of visibility improvement may be diminished. Furthermore, as the proposed method is predicated on the whiteness of haze, it may prove less effective for non-white haze. Nevertheless, non-white haze is generally uncommon; thus, it does not constitute a significant limitation.

6 Conclusion

This study proposed a method to improve the visibility of hazy images by applying an image enhancement method using the convex combination coefficients of white, black, and pure colors. The proposed method enhances the image edges and contrast by multi-scale image enhancement using S-shaped functions. Subsequently, image processing using a modified gamma transformation and a histogram specification technique increases the saturation while decreasing the lightness value of the image. To verify the effectiveness of the proposed method, experimental results were compared with those of conventional methods on several datasets. The results demonstrate that the proposed method sufficiently improves the visibility of hazy images without causing hue changes or artifacts.

Future work will focus on the development of a method for automatic parameter determination and its application to video processing.