1 Introduction

Visibility is an indicator of atmospheric transparency and is closely related to daily life in many ways, such as road safety, meteorological observation, airport operations and the monitoring of pollution in urban areas.

Little work has been done on visibility estimation based on outdoor cameras. There are two classes of methods. The first class estimates the distance to the most distant object on the road surface having a contrast above 5%. Pomerleau [19] measures the contrast attenuation of road markings at various distances in front of a moving vehicle to estimate visibility, which requires detecting road markings. Hautière et al. [7, 9] estimate two kinds of visibility distances: the meteorological visibility and the mobilized visibility. The meteorological visibility is the greatest distance at which a black object of a suitable dimension can be seen against the sky on the horizon. The method in [11] dynamically implements Koschmieder’s law [16] and estimates the visibility based on local contrasts above 5%. The mobilized visibility is the distance to the furthest pixel that still displays a contrast greater than 5%, which is very close to the meteorological visibility. They combine the computation of a depth map with local contrasts above 5% to estimate the mobilized visibility [10]. This class of methods depends on an accurate geometric calibration of the camera. The second class correlates the contrast in the scene with the visual range estimated by additional reference sensors [8]. Geometric calibration is not necessary. The contrast can be computed simply with a Sobel filter or a high-pass filter, after which a linear regression is performed to estimate the mapping function [14, 21]. Such methods can be seen as data-driven approaches. Hautière et al. propose a probabilistic model-driven approach, which fits a simplified depth distribution model of the scene and applies a non-linear fitting to the data [3]. This class of methods needs a learning phase to estimate the mapping function, which requires collecting a large amount of ground truth data. All existing methods based on outdoor cameras rely on human assistance, such as geometric calibration of the camera, road marking extraction or ground truth collection for building mapping functions. Moreover, strict constraints on the scene environment are essential for almost all methods, some of which even require that only the road and the sky be present in the scene.

This paper proposes a novel method to estimate visibility using just a single image as input. According to the haze imaging model [12], a high-quality haze-free image can be recovered, which makes it possible to obtain from a single image the ratio of two extinction coefficients: the one in the current atmosphere and the one in clear atmosphere. To obtain the current extinction coefficient itself, our work relies on an important prior: the clear atmosphere extinction coefficient is approximately a constant, and it can be calculated from the theory of atmospheric radiation. By multiplying the clear atmosphere extinction coefficient with the ratio, the extinction coefficient of the input image is obtained and the visibility is finally computed. Our method succeeds in estimating visibility in various application scenes from just a single image. The experimental results show the high accuracy and robustness of this method, which might open a new trend of visibility measurement in meteorological research.

The remainder of this paper is organized as follows. Section 2 introduces our approach in detail. Section 3 provides experimental results in various scenes and comparisons with existing approaches. A discussion and summary concludes the paper in Sect. 4.

2 Our Approach

The visibility is calculated by the following formula [11]:

$$\begin{aligned} V_{met}=\frac{1}{\beta }\ln \frac{1}{\varepsilon } \end{aligned}$$
(1)

where \(\varepsilon \) is the contrast threshold, equal to 5\(\%\), and \(\beta \) is the extinction coefficient, which describes how the scene radiance is attenuated by the turbid atmosphere from a scene point to the sensor. The procedure of our method is depicted in Fig. 1. A key observation is that the distribution of particles in clear atmosphere remains almost unchanged. Based on this, we conclude that the clear atmosphere extinction coefficient \(\beta _0\) is usually a constant. By introducing the transmittance T, \(\beta _0\) can be calculated under assumptions on the clear atmosphere developed from the theory of atmospheric radiation. Meanwhile, we advance the scene imaging model with the dark channel prior to estimate \(\beta /\beta _{0}\) of the input image. By multiplying \(\beta _0\) and \(\beta /\beta _{0}\), the extinction coefficient \(\beta \) is obtained and the visibility is then calculated by Eq. (1). Moreover, the scene distance information of the input image can also be calculated as a by-product.
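For concreteness, the following is a minimal Python sketch of Eq. (1); the function name and the numerical example are ours, not from the paper:

```python
import math

def visibility_from_extinction(beta, epsilon=0.05):
    """Meteorological visibility V_met = ln(1/epsilon) / beta  (Eq. 1)."""
    return math.log(1.0 / epsilon) / beta

# Example: with the clear-atmosphere extinction coefficient derived in
# Sect. 2.1 (beta_0 ~ 2.2518e-5 m^-1), Eq. (1) gives a visibility of ~133 km.
print(visibility_from_extinction(2.2518e-5))  # ~1.33e5 m
```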

Fig. 1. Procedure of our method

2.1 \(\beta _0\) Calculation

The atmosphere is mainly composed of air molecules and aerosol particles. Air consists of nitrogen, oxygen, noble gases, carbon dioxide, water vapor and other trace components. An aerosol is a suspension of liquid or solid particles in the air. Only large-scale aerosol particles (e.g., dust, carbon and smoke particles) are sensitive to terrain, wind direction and other geographical factors [5], and they strongly affect visibility, climate and health. In the clear atmosphere, air molecules account for about 99\(\%\) of the total atmospheric composition and the remaining 1\(\%\) are small-scale aerosols. Experiments show that the distribution of air particles remains unchanged up to a height of 100 km [18]. Therefore, a reasonable assumption is proposed in this paper: the extinction coefficient in clear atmosphere is approximately a constant. We introduce the transmittance T to calculate \(\beta _0\). The transmittance T indicates the degree to which solar radiation is weakened along the vertical direction [15]. The transmittance in clear atmosphere is defined by

$$\begin{aligned} T_{0}(\lambda ,L)=e^{-\int _{0}^{L}\beta _{0}(\lambda ,l)dl} \end{aligned}$$
(2)

where \(\lambda =0.55\,\upmu {\text {m}}\) and L is the path length of the solar radiation. The clear extinction coefficient \(\beta _0\) consists of two parts: the clear air molecular extinction coefficient \(\beta _{R_{0}}\) and the clear aerosol extinction coefficient \(\beta _{\alpha _{0}}\):

$$\begin{aligned} \beta _{0}=\beta _{R_{0}}+\beta _{\alpha _{0}} \end{aligned}$$
(3)

where \(\beta _{R_{0}}\) is a known constant [18]: \(1.159\times 10^{-5}\,m^{-1}\) at \(\lambda =0.55\,\upmu {\text {m}}\). To calculate \(\beta _{\alpha _{0}}\), the clear aerosol transmittance is described by

$$\begin{aligned} T_{\alpha _{0}}(\lambda ,L)=e^{-\int _{0}^{L}\beta _{\alpha _{0}}(\lambda ,l)dl} \end{aligned}$$
(4)

We introduce two assumptions on the clear atmosphere to simplify Eq. (4):

  • Assume that the effective height of the atmosphere H is \(10^4\) m. This is the average height of the troposphere, which contains 80% of atmospheric particles and almost all water vapor. Above this height, the particle number density has little effect on the meteorological optical range [4].

  • Assume that the clear aerosol extinction coefficient is invariant along the path. In the clear atmosphere, aerosol particle distributions are hardly affected by airflow activities such as convection and turbulence [1].

Based on the above two assumptions, Eq. (4) simplifies to

$$\begin{aligned} T_{\alpha _{0}}=e^{-\beta _{\alpha _{0}}L} \end{aligned}$$
(5)

By Eq. (5)

$$\begin{aligned} \beta _{\alpha _{0}} =-\frac{\ln (T_{\alpha _{0}})}{L} \end{aligned}$$
(6)

where \(L=m_{0}H\) and \(H=10^{4}\) m. \(m_{0}\) is the relative optical mass [20], i.e., the ratio of the path length of the solar radiation to the effective height of the atmosphere (see Fig. 2). \(m_{0}\) is calculated by

$$\begin{aligned} m_{0}=\frac{1}{\cos \theta _{s}+0.15\,(93.885-\theta _{s}\frac{180}{\pi })^{-1.253}} \end{aligned}$$
(7)

The solar zenith angle \(\theta _{s}\) describes the position of the sun relative to the observation site and is determined by the observation information: date, time and location [15].

$$\begin{aligned} t = t_{s}+0.170\sin (\frac{4\pi (J-80)}{373})-0.129\sin (\frac{2\pi (J-8)}{355}) +\frac{12(SM-L)}{\pi } \end{aligned}$$
(8)
$$\begin{aligned} \delta =0.4093\sin (\frac{2\pi (J-81)}{368}) \end{aligned}$$
(9)
$$\begin{aligned} \theta _{s}=\frac{\pi }{2}-\arcsin (\sin l\sin \delta -\cos l\cos \delta \cos \frac{\pi t}{12}) \end{aligned}$$
(10)

where t is the solar time in decimal hours, \(t_{s}\) is the observation time in decimal hours, J is the Julian date (the day of the year as an integer in the range 1–365), \(\delta \) is the solar declination, l is the site latitude in radians, L is the site longitude in radians, and SM is the standard meridian for the time zone in radians.
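For illustration, a Python sketch of Eqs. (7)–(10); the variable names follow the text, while the example observation values (a site near Beijing at noon on day 100 of the year) are hypothetical:

```python
import math

def solar_zenith_angle(t_s, J, l, L, SM):
    """Solar zenith angle theta_s (radians) from Eqs. (8)-(10).
    t_s: observation time in decimal hours, J: day of year (1-365),
    l: latitude, L: longitude, SM: standard meridian (all in radians)."""
    t = (t_s + 0.170 * math.sin(4 * math.pi * (J - 80) / 373)
             - 0.129 * math.sin(2 * math.pi * (J - 8) / 355)
             + 12 * (SM - L) / math.pi)                       # solar time, Eq. (8)
    delta = 0.4093 * math.sin(2 * math.pi * (J - 81) / 368)   # solar declination, Eq. (9)
    return (math.pi / 2
            - math.asin(math.sin(l) * math.sin(delta)
                        - math.cos(l) * math.cos(delta)
                        * math.cos(math.pi * t / 12)))        # Eq. (10)

def relative_optical_mass(theta_s):
    """Relative optical mass m_0, Eq. (7); theta_s in radians."""
    return 1.0 / (math.cos(theta_s)
                  + 0.15 * (93.885 - math.degrees(theta_s)) ** (-1.253))

# Hypothetical observation: latitude 39.9 N, longitude 116.4 E, UTC+8, noon.
theta_s = solar_zenith_angle(t_s=12.0, J=100,
                             l=math.radians(39.9), L=math.radians(116.4),
                             SM=math.radians(120.0))
print(relative_optical_mass(theta_s))
```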

Fig. 2. Diagram of relative optical mass. P is the site point, D is the under-solar point, l is the site latitude in radians, \(\delta \) is the solar declination, and \(\theta _{s}\) is the solar zenith angle.

To calculate \(\beta _{\alpha _{0}}\), we introduce the clear aerosol transmittance formula proposed by Ångström [2]:

$$\begin{aligned} T_{\alpha _{0}}=e^{-\beta _{A}m_{0}\lambda ^{-\alpha _{w}}} \end{aligned}$$
(11)

where \(\beta _{A}\) is Ångström's turbidity coefficient, indicating the level of atmospheric turbidity; its value ranges over \(0.01\le \beta _{A}\le 0.2\) [2]. \(\alpha _{w}\) is the wavelength exponent, reflecting the size distribution of the aerosol particles; its value ranges over \(0.1\le \alpha _{w}\le 4\) [6]. When the proportion of small particles increases, the value of \(\alpha _{w}\) also grows. In the clear atmosphere, air molecules account for about 99\(\%\) of the atmospheric constituents, which is close to a limiting condition of the atmosphere. We therefore substitute the minimum value of \(\beta _{A}\) and the maximum value of \(\alpha _{w}\) into Eq. (11). Combining Eq. (11) with Eq. (6) gives

$$\begin{aligned} \beta _{\alpha _{0}}=\frac{\beta _{A}m_{0}\lambda ^{-\alpha _{w}}}{L} \end{aligned}$$
(12)

Substituting \(L=m_{0}H\) yields

$$\begin{aligned} \beta _{\alpha _{0}}=\frac{\beta _{A}\lambda ^{-\alpha _{w}}}{H} \end{aligned}$$
(13)

Here \(m_{0}\) cancels out, which shows that \(\beta _{\alpha _{0}}\) is independent of the observation information (date, time and location) and is always a constant. Based on Eq. (3), we finally obtain \(\beta _{0} = 2.2518 \times 10^{-5}\,m^{-1}\).
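This constant can be reproduced in a few lines using the values quoted above (a sketch; the variable names are ours):

```python
# Clear-atmosphere extinction coefficient beta_0 (Eqs. 3 and 13),
# using the constants quoted in the text.
beta_R0 = 1.159e-5        # clear air molecular extinction coefficient, m^-1 [18]
beta_A = 0.01             # minimum Angstrom turbidity coefficient [2]
alpha_w = 4.0             # maximum wavelength exponent [6]
lam = 0.55                # wavelength in micrometers
H = 1.0e4                 # effective height of the atmosphere, m

beta_a0 = beta_A * lam ** (-alpha_w) / H   # Eq. (13)
beta_0 = beta_R0 + beta_a0                 # Eq. (3)
print(beta_0)   # ~2.2518e-5 m^-1, matching the value reported above
```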

2.2 \(\beta /\beta _{0}\) Calculation

In this paper, we combine the dark channel prior [12] with the improved scene imaging model to deduce \(\beta /\beta _0\). The dark channel prior is a statistical observation on haze-free outdoor images: in clear atmosphere, most local patches contain some pixels with very low intensity in at least one color channel. For an image I, the dark channel is defined by

$$\begin{aligned} I_{\varOmega \left( x \right) }^{dark}=\min _{c\in \left\{ r,g,b \right\} }\left( \min _{y\in \varOmega \left( x \right) }I^{c}\left( y \right) \right) \end{aligned}$$
(14)

where c is a color channel and \(\varOmega (x)\) is a local patch centered at x; the patch size is 15\(\times \)15 in this paper. Based on statistics over 1000 outdoor images, we find that \(I^{dark}\) increases as the atmospheric turbidity increases. The patch with the minimum \(I^{dark}\) at any level of atmospheric turbidity corresponds to the minimal one in clear atmosphere, i.e., the patch that best satisfies the dark channel prior. Therefore, we take the min operation over Eq. (14) to select the minimal dark channel patch \(\varOmega _{min}(x)\) in the input image I (see Fig. 3 [1st column]):

$$\begin{aligned} I_{\varOmega _{\min }(x)}^{\hat{dark}}=\min _{x\in I}(\min _{c\in \{r,g,b\}}(\min _{y\in \ \varOmega (x)}(I^{dark}(y)))) \end{aligned}$$
(15)
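A sketch of Eqs. (14) and (15) using NumPy/SciPy; the 15\(\times \)15 patch size follows the text, while the helper names are ours:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1] (Eq. 14):
    per-pixel minimum over color channels, then a patch-wise minimum."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def minimal_dark_channel_patch(img, patch=15):
    """Location of the patch with the smallest dark channel value (Eq. 15)."""
    dark = dark_channel(img, patch)
    y, x = np.unravel_index(np.argmin(dark), dark.shape)
    return (y, x), dark
```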

We refine \(I_{\varOmega _{\min }(x)}^{\hat{dark}}\) using the guided filter [13] (see Fig. 3 [2nd column]):

$$\begin{aligned} I_{\varOmega _{\min }(x)}^{dark}=guidedfilter(I_{\varOmega _{\min }(x)}^{\hat{dark}}) \end{aligned}$$
(16)
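One possible way to apply Eq. (16) is the guided filter implementation in opencv-contrib; the radius and regularization values below are illustrative assumptions, not values from the paper:

```python
import cv2
import numpy as np

def refine_with_guided_filter(img_rgb, dark, radius=40, eps=1e-3):
    """Refine the minimal dark channel map using the input image as guide (Eq. 16).
    Requires opencv-contrib-python; radius and eps are illustrative choices."""
    guide = cv2.cvtColor((img_rgb * 255).astype(np.uint8), cv2.COLOR_RGB2GRAY)
    return cv2.ximgproc.guidedFilter(guide, dark.astype(np.float32), radius, eps)
```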

Figure 3 [2nd column] shows that the patches in \(\varOmega _{\min }(x)\) may not be at the same distance, and the values of \(I_{\varOmega _{\min }(x)}^{dark}\) may be unequal within a patch due to block effects, which would influence the subsequent calculation of \(\beta /\beta _{0}\). Therefore, we need to improve \(\varOmega _{\min }(x)\). The specific process is as follows (a code sketch is given after the list):

  • Select the minimal value \(V_{\min }\) of \(I_{\varOmega _{\min }(x)}^{dark}\).

  • Retrieve all connected regions of \(\varOmega _{\min }(x)\) and find the \(\varOmega _{\min }(x_{\min })\) containing \(V_{\min }\). Note that the number of regions in \(\varOmega _{\min }(x_{\min })\) is always more than one.

  • Traverse \(\varOmega _{\min }(x_{\min })\) and calculate \(\left| I_{\varOmega _{\min }(x_{\min })}^{dark}(y)-V_{\min } \right| \) for each pixel \(y\in \varOmega _{\min }(x_{\min })\). If \(\left| I_{\varOmega _{\min }(x_{\min })}^{dark}(y)-V_{\min } \right| \ge T\), y is regarded as a false pixel and discarded. We fix T to 0.2 in this paper.

  • Count the pixels of each remaining region in \(\varOmega _{\min }(x_{\min })\) and eliminate noise regions whose pixel count is less than \(N_{num}\). We fix \(N_{num}\) to 5\(\%\) of the total number of pixels in \(\varOmega _{\min }(x)\) in this paper.
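A compact sketch of these steps, assuming the refined dark channel map and a mask of the candidate patches are available from the previous stage; T = 0.2 and the 5\(\%\) size cutoff follow the text, while the region bookkeeping is interpreted loosely:

```python
import numpy as np
from scipy.ndimage import label

def select_roi(dark_refined, candidate_mask, T=0.2, min_frac=0.05):
    """Select the region of interest from the refined minimal dark channel map.
    candidate_mask marks the pixels of Omega_min(x); T and min_frac follow the text."""
    v_min = dark_refined[candidate_mask].min()     # step 1: minimal value V_min
    labels, n = label(candidate_mask)              # step 2: connected regions
    roi = np.zeros_like(candidate_mask)
    for k in range(1, n + 1):
        region = labels == k
        # step 3: drop pixels whose value deviates from V_min by T or more
        keep = region & (np.abs(dark_refined - v_min) < T)
        # step 4: drop small noise regions (< 5% of the candidate pixels)
        if keep.sum() >= min_frac * candidate_mask.sum():
            roi |= keep
    return roi
```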

Fig. 3. Procedure of the \(\beta /\beta _{0}\) calculation. [1st column] result of \(\varOmega _{\min }(x)\) and \(I_{\varOmega _{\min }(x)}^{\hat{dark}}\). [2nd column] result of \(I_{\varOmega _{\min }(x)}^{dark}\). [3rd column] result of the ROI. [4th column] the upper result is \(e^{-(\beta -\beta _{0})d}(y)\) and the lower result is \(e^{-\beta _{0}d}(y), y\in ROI\). [5th column] result of \(\beta /\beta _{0}\).

The resulting regions are almost at the same distance and best satisfy the dark channel prior; they are taken as the region of interest (ROI) (see Fig. 3 [3rd column]). Next we calculate \(\beta /\beta _0\) over this ROI.

For an outdoor image, a model [17] is widely used to describe the imaging process (see Eq. (17)). Note that the atmosphere is assumed to be homogeneous:

$$\begin{aligned} I=\rho I_{\infty }e^{-\beta d}+I_{\infty }(1-e^{-\beta d}) \end{aligned}$$
(17)

where I is the scene image and \(\rho \) is the scene albedo. \(I_{\infty }\) is the air light, which is invariant with \(\beta \) and determined only by the illumination intensity. \(\rho I_{\infty }\) is the scene radiance and \(e^{-\beta d}\) is the medium transmission. Based on Eq. (17), the scene imaging model in clear atmosphere is easily derived as

$$\begin{aligned} J=\rho I_{\infty }e^{-\beta _{0}d}+I_{\infty }(1-e^{-\beta _{0}d}) \end{aligned}$$
(18)

where J is the intrinsic scene image. Solving Eq. (18) for \(\rho I_{\infty }\) and substituting it into Eq. (17), the advanced scene imaging model is obtained:

$$\begin{aligned} I=Je^{-(\beta -\beta _{0})d}+I_{\infty }(1-e^{-(\beta -\beta _{0})d} ) \end{aligned}$$
(19)
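The consistency of Eq. (19) with Eqs. (17) and (18) can be verified numerically; the values of \(\rho \), d, \(I_{\infty }\), \(\beta \) and \(\beta _0\) below are arbitrary:

```python
import math

rho, d, I_inf = 0.3, 500.0, 0.9       # arbitrary albedo, distance (m), air light
beta, beta_0 = 2.0e-4, 2.2518e-5      # hazy and clear extinction coefficients

I = rho * I_inf * math.exp(-beta * d) + I_inf * (1 - math.exp(-beta * d))      # Eq. (17)
J = rho * I_inf * math.exp(-beta_0 * d) + I_inf * (1 - math.exp(-beta_0 * d))  # Eq. (18)
t = math.exp(-(beta - beta_0) * d)
print(math.isclose(I, J * t + I_inf * (1 - t)))   # True: Eq. (19) holds
```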

From Eq. (19), we find that when \(\beta \) decreases to \(\beta _0\), the scene image I is recovered to J. Compared with previous model-driven methods for haze removal, this restoring process retains the fundamental cue for humans to perceive depth. We therefore use Eq. (19) to recover J and calculate \(\beta /\beta _{0}\). According to the dark channel prior, the dark channel of the scene radiance tends to 0:

$$\begin{aligned} \min _{c\in \left\{ r,g,b \right\} }\left( \min _{y\in ROI}\left( \rho I_{\infty }^{c}\left( y \right) \right) \right) =0 \end{aligned}$$
(20)

Note that the clear atmosphere is composed of air molecules and a few small-scale aerosol particles, so the dark channel of the intrinsic scene image is also close to 0:

$$\begin{aligned} \min _{c\in \left\{ r,g,b \right\} }\left( \min _{y\in ROI}\left( J^{c}\left( y \right) \right) \right) =\mu \end{aligned}$$
(21)

where \(\mu \) should be very close to 0, so we fix it to 0.001 in this paper. \(e^{-\beta _{0}d}\) and \(e^{-(\beta -\beta _{0})d}\) are obtained by

$$\begin{aligned} e^{-\beta _{0}d}(y)=guidedfilter(1-\min _{c\in \{r,g,b\}}(\min _{y\in \ ROI}(\frac{J^{c}(y)}{I_{\infty }^{c}}))) \end{aligned}$$
(22)
$$\begin{aligned} e^{-(\beta -\beta _{0})d}(y)=guidedfilter((1-\min _{c\in \{r,g,b\}}(\min _{y\in \ ROI}(\frac{I^{c}(y)}{I_{\infty }^{c}})))/(1-\frac{\mu }{I_{\infty }^{c}})) \end{aligned}$$
(23)

where \(I_\infty \) can be estimated as in [12] and J is recovered by

$$\begin{aligned} J(y)=\frac{I(y)-I_{\infty }}{\max (e^{-(\beta -\beta _0)d}(y),t_0)}+I_\infty ,y\in ROI \end{aligned}$$
(24)

where \(t_0 = 0.1\) is a lower bound that prevents \(e^{-(\beta -\beta _{0})d}\) from decreasing to 0. Figure 3 [4th column] shows the results of \(e^{-(\beta -\beta _{0})d}\) and \(e^{-\beta _{0}d}\). Then \(\beta /\beta _0(y)\) is computed by

$$\begin{aligned} \frac{\beta }{\beta _{0}}(y)=\frac{\ln (e^{-(\beta -\beta _0)d})}{\ln (e^{-\beta _0d})}(y)+1 \end{aligned}$$
(25)

We take the average value of \(\beta /\beta _0(y)\) over the ROI (see Fig. 3 [5th column]). Based on Eq. (1), the visibility \(V_{met}\) is finally obtained by

$$\begin{aligned} V_{met}=\frac{1}{\beta _{0}\times \frac{\beta }{\beta _{0}}}\ln \frac{1}{\varepsilon } \end{aligned}$$
(26)
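Putting Sect. 2.2 together, the following is a hedged sketch of Eqs. (22)–(26) over the ROI. The air light \(I_{\infty }\) is assumed to be estimated beforehand (e.g., as in [12]), the guided-filter refinement is omitted, and the minima are taken over the whole ROI so that a single ratio is obtained directly instead of averaging a per-pixel map; \(\mu \), \(t_0\) and \(\varepsilon \) take the values given in the text:

```python
import numpy as np

def estimate_visibility(I_roi, I_inf, beta_0=2.2518e-5, mu=0.001, t0=0.1, eps_c=0.05):
    """Estimate V_met from the ROI pixels of the input image (Eqs. 22-26).
    I_roi: (N, 3) RGB values in [0, 1]; I_inf: (3,) air light (estimated as in [12]).
    Guided-filter refinement of the transmission maps is omitted in this sketch."""
    # Eq. (23): relative transmission between hazy and clear atmosphere
    t_rel = (1 - (I_roi / I_inf).min()) / (1 - (mu / I_inf).min())
    # Eq. (24): recover the intrinsic scene image J on the ROI
    J_roi = (I_roi - I_inf) / max(t_rel, t0) + I_inf
    # Eq. (22): transmission in clear atmosphere
    t_clear = 1 - (J_roi / I_inf).min()
    # Eq. (25): ratio of extinction coefficients over the ROI
    ratio = np.log(t_rel) / np.log(t_clear) + 1
    # Eq. (26): meteorological visibility
    return np.log(1 / eps_c) / (beta_0 * ratio)
```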

3 Experimental Results

In our experiments, there are three kinds of scenes: city scenes, road scenes and nature scenes. We use the visibility \(V_{met}^{ob}\) provided by Yahoo! Weather as the reference for the experimental results and define the error rate E by

$$\begin{aligned} E=\frac{\left| V_{met}-V_{met}^{ob} \right| }{V_{met}^{ob}} \end{aligned}$$
(27)

If the error rate is less than 20\(\%\), the result is considered acceptable.

Figure 4 shows the experimental images of city scenes at different levels of atmospheric turbidity. As can be seen, the variation of atmospheric turbidity in city scenes is mainly caused by air pollution, and visibility is a good indicator of a city's air quality. Figures 5 and 6 show the experimental images of road scenes and nature scenes at different levels of atmospheric turbidity. Tables 1, 2 and 3 show the visibility results of Figs. 4, 5 and 6. The error rates in the different kinds of scenes are all less than 20%, which demonstrates the high accuracy of our method. Moreover, our method performs well without human assistance. [More results can be found at http://airl.csu.edu.cn/airquality.html]

Table 1. Visibility results of Fig. 4
Table 2. Visibility results of Fig. 5
Table 3. Visibility results of Fig. 6
Fig. 4. Experimental images in different city scenes

Fig. 5. Experimental images in the road scene

Fig. 6. Experimental images in nature scenes

Fig. 7. Comparisons with the methods of Hautière et al. [9, 11]

This paper also adopts two popular methods of Hautière et al. [9, 11] for comparison (see Fig. 7). The method in [9], based on in-vehicle cameras, describes two onboard techniques to estimate the meteorological visibility and the mobilized visibility; we only consider the meteorological visibility here. The method in [11] dynamically implements Koschmieder's law and estimates the visibility based on local contrasts above 5\(\%\). \(V_{met}^H\) represents the visibility estimated by the methods of Hautière et al. In Table 4, the error rates of our method are all less than 20\(\%\). Our results are comparable with those of Hautière et al. in the road scene. Furthermore, our method is also applicable to other types of scenes, to which the methods of Hautière et al. do not extend. Our approach even works for distance estimation: having obtained \(\beta \), we can calculate the distance from a scene point to the sensor according to Eq. (28):

$$\begin{aligned} d=\frac{\ln (e^{-(\beta -\beta _{0})d})}{\beta _{0}-\beta } \end{aligned}$$
(28)
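A one-line sketch of Eq. (28); the transmission term \(e^{-(\beta -\beta _{0})d}\) at a scene point and the estimated \(\beta \) are assumed to be available from Sect. 2.2:

```python
import math

def scene_distance(t_rel, beta, beta_0=2.2518e-5):
    """Distance d from the scene point to the sensor (Eq. 28),
    given t_rel = exp(-(beta - beta_0) * d) at that point."""
    return math.log(t_rel) / (beta_0 - beta)
```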
Table 4. Experimental results compared with the methods of Hautière et al.
Fig. 8. Distance results in the same scene at different scene points

To validate the above algorithm, we collect a specific scene (see Fig. 8). The building marked by a red circle is the China Central Television building in Beijing, which is about 200 m from the camera. The building marked by a green circle is the China World Trade Center Tower 3, which is about 1000 m from the sensor. Figure 8 shows the distance results at four different scene points. The reference distances at the four scene points are \(d_{1}\approx 200\) m, \(d_{2}\approx 400\) m, \(d_{3}\approx 1000\) m and \(d_{4}\approx 500\) m. The calculated distances are close to the reference data, which supports the soundness of this method. Unlike other methods that require camera calibration, our method merely depends on an outdoor image, which makes it simple and cost-effective. This method could therefore be of practical value for field measurement in outdoor scenes.

4 Conclusion

In this paper, we propose a novel method for visibility estimation using a single image. A key assumption is developed: the extinction coefficient is approximately a constant in the clear atmosphere, which is composed of air molecules and a few small-scale aerosol particles. Combining this assumption with the dark channel prior, the extinction coefficient of the input image can be calculated and the visibility finally obtained. The results, obtained without human assistance, show high accuracy and robustness, which might open a new trend in meteorological research on visibility measurement.

Compared with existing visibility measurements, our method has the following advantages:

  • The method proposes an assumption for calculating the clear atmosphere extinction coefficient and separates the extinction coefficient \(\beta \) from the distance d in the optical depth \(\beta d\), so that visibility estimation no longer relies on actual distance information.

  • The method depends on a single image and accomplishes fully automatic visibility estimation without human assistance, performing well in various types of scenes.

  • The method makes it possible to measure scene distance from a single image, which might open a new trend in meteorological research on scene distance measurement.

Our method provides a framework for visibility estimation built on a single-image haze removal algorithm. As haze removal technology develops, other methods may become applicable. Therefore, our future work is to explore the possibility of using other methods to calculate \(\beta /\beta _{0}\).