Abstract
We review physics-based illuminant estimation methods, which extract information from highlights in images. Such highlights are caused by specular reflection from the surface of dielectric materials and, according to the dichromatic reflection model, provide cues about the illumination. This paper analyzes different categories of highlight-based illuminant estimation techniques for color images from the point of view of their extension to multispectral imaging. We find that the use of chromaticity space for multispectral imaging is not straightforward, and that imposing constraints on illuminants in the multispectral domain may not be efficient either. We identify some methods that are feasible for extension to multispectral imaging, and discuss the advantage of using highlight information for illuminant estimation.
1 Introduction
Color constancy is the ability of the human visual system to perceive constant surface colors despite changes in the spectral composition of the illuminant. Creating such a capability for a computer vision system is called "computational color constancy", and it can be achieved through two major techniques. One is to use the statistical properties of the scene, while the other is to perform a physics-based analysis of surfaces. Statistics-based methods can provide fairly accurate results when their assumptions are satisfied. One common assumption is the presence of a variety of surfaces and objects in a scene; in general, these methods work poorly on images with low color complexity [1].
In physics-based illuminant estimation approaches, the interaction of light with surfaces is analyzed to estimate the scene illuminant. Optically inhomogeneous objects (e.g., plastics, ceramics, paints) exhibit neutral interface reflection [2], meaning that the spectral power distribution of the specular reflection is the same as that of the incident illumination; this property is used to estimate the scene illuminant. The basis of illuminant estimation from specular highlights in images is the dichromatic reflection model [3], which describes the light reflected from an object as a combination of diffuse and specular reflection. The specular reflection appears as highlights in an image, and therefore the problem of illuminant estimation is treated as the analysis of highlights from surfaces [4, 5].
With advances in sensor technology, the use of multispectral and hyperspectral imaging is increasing for close-range imaging applications [6]. Such images also contain diffuse and specular reflections, which can be helpful for estimating the scene illuminant. Knowledge of the scene illumination is in turn helpful for achieving multispectral constancy [7, 8]. In this paper, we discuss different categories of highlight-based illuminant estimation methods for color images and their possible extension to spectral images. This paper is organized as follows. In Sect. 2, the dichromatic reflection model and its use for illuminant estimation is discussed. In Sect. 3, different categories of work on illuminant estimation from highlights in color images are analyzed from the point of view of their possible extension towards multispectral imaging. In Sect. 4, the use of highlights in multispectral imaging and existing works dedicated to spectral imaging are discussed, followed by the conclusion in Sect. 5.
2 Dichromatic Reflection Model
In a simplified color imaging model, the image values \(\mathbf{f}(\mathbf{x})\) (where \(\mathbf {x}\) represents spatial coordinates) depend on the light source \(e(\lambda )\), the surface albedo \(s(\mathbf x ,\lambda )\) and the camera sensitivity functions \({\mathbf {c}(\lambda )}\), integrated over the visible spectrum \(\omega\):

$$\mathbf{f}(\mathbf{x}) = \int_{\omega} e(\lambda)\, s(\mathbf{x},\lambda)\, \mathbf{c}(\lambda)\, d\lambda. \qquad (1)$$
Shafer [3] proposed to decompose the reflected light into its diffuse \(m^b(\mathbf {x})\) and specular \(m^s(\mathbf {x})\) components:

$$\mathbf{f}(\mathbf{x}) = m^{b}(\mathbf{x})\int_{\omega} e(\lambda)\, s(\mathbf{x},\lambda)\, \mathbf{c}(\lambda)\, d\lambda \;+\; m^{s}(\mathbf{x})\int_{\omega} e(\lambda)\, \mathbf{c}(\lambda)\, d\lambda. \qquad (2)$$
In the dichromatic reflection model, the diffuse reflection is of low intensity and Lambertian, while the specular reflection is generally of high intensity, directional, independent of the surface albedo and dependent on the viewing direction. The illuminant in the sensor domain \(\mathbf {e}\) is defined as:

$$\mathbf{e} = \int_{\omega} e(\lambda)\, \mathbf{c}(\lambda)\, d\lambda. \qquad (3)$$
Equation 3 is the second part of Eq. 2. This relation states that the specular reflection part of Eq. 2 provides information about the illuminant. For a camera with \({\mathbf {c}(\lambda )}=\{R(\lambda ),G(\lambda ),B(\lambda )\} \), this linear combination defines a plane in the RGB color space. The data points from the RGB color space are projected onto a space normalized by \(r\,+\,g\,+\,b\,=\,1\), where they form a line. The two points defining the line consist of the diffuse and specular points on a surface. Data points from a uniformly colored surface are distributed along this line which is called the dichromatic line. Assuming uniform illumination in a scene, all the dichromatic lines have one point in common, which corresponds to the scene illuminant. Therefore, if there are two or more materials being illuminated by the same light source and their dichromatic planes are identified, the vector containing illuminant information can be estimated by the intersection of these planes. There are some methods that do not explicitly use the dichromatic reflection model, but still use highlights for illuminant estimation. In the following section, we discuss different categories of highlight based illuminant estimation models along with an analysis of their potential extension towards multispectral imaging.
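To make the inversion concrete, the following is a minimal sketch of dichromatic-line intersection in the rg chromaticity space, assuming the image has already been segmented into uniformly colored regions containing highlights (the NumPy implementation and the function names are ours, not taken from any of the cited works):

```python
import numpy as np

def rg_chromaticity(rgb):
    """Project RGB pixels (M, 3) onto the plane r + g + b = 1 and keep (r, g)."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=1, keepdims=True)
    s[s == 0] = 1e-9                      # guard against black pixels
    return (rgb / s)[:, :2]

def fit_dichromatic_line(chroma):
    """Fit a 2-D line (point p, unit direction d) to one surface's chromaticities."""
    p = chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma - p)  # first right singular vector = line direction
    return p, vt[0]

def intersect_lines(lines):
    """Least-squares point closest to all dichromatic lines (the common illuminant)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        P = np.eye(2) - np.outer(d, d)    # projector orthogonal to the line direction
        A += P
        b += P @ p
    return np.linalg.lstsq(A, b, rcond=None)[0]

def estimate_illuminant_rg(surfaces):
    """surfaces: list of (Mi, 3) pixel arrays, one per uniformly colored region."""
    lines = [fit_dichromatic_line(rg_chromaticity(s)) for s in surfaces]
    r, g = intersect_lines(lines)
    return np.array([r, g, 1.0 - r - g])  # back to (r, g, b) chromaticity
```

The least-squares intersection is the point closest to all fitted lines; with a single surface the system is under-determined, which motivates the constrained approaches discussed in Sect. 3.2.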
3 Inversion of Dichromatic Reflection Model
In this section, we analyze methods based on the dichromatic reflection model for illuminant estimation and present empirical observations regarding their extension towards multispectral imaging.
3.1 Illuminant Estimation in Two-Dimensional Chromaticity Space
Using a two-dimensional chromaticity space removes the intensity component and simplifies the illuminant estimation problem in color images. Among the earliest attempts at highlight-based illuminant estimation are those by Lee [9] and by D'Zmura and Lennie [10], who proposed using the color coordinates of specular highlights to estimate the illumination. Lehmann and Palm [11] extended Lee's approach [9] to the rg-diagram and called it "color line search". More recently, Uchimi et al. [12] proposed estimating the illumination from the \(u'v'\) chromaticity, the xy chromaticity space is used in [13], and the rg color space in [14, 15].
One way to think about extending the dichromatic reflection model to multispectral imaging is to extend the results obtained in a two-dimensional chromaticity space. For example, if the estimation is performed in rg space, the third component can be computed directly as \(b=1-(r+g)\). However, this approach may not lead to an accurate three-dimensional estimate, because the estimation errors of the first two components accumulate in the third [15]. If we imagine the same procedure for multispectral images, there are two major issues. The first is the choice of base components or channels for constructing the dichromatic lines: it is not intuitive to call an arbitrary pair of multispectral channels a chromaticity space. The second is that a space satisfying the condition \(c_1 + c_2 + \ldots + c_N=1\), where \(c_i\) is the \(i^{th}\) channel, leads to very small values, which causes instability in numerical computations.
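As a small numerical illustration of this second issue, normalizing random N-channel values to sum to one shows how the per-channel magnitude and its spread shrink roughly as 1/N, so the fitted dichromatic lines become nearly indistinguishable (illustrative values only, under the assumption of roughly uniform channel responses):

```python
import numpy as np

rng = np.random.default_rng(0)
for n_channels in (3, 8, 16, 31):
    pixels = rng.uniform(0.2, 1.0, size=(1000, n_channels))
    chroma = pixels / pixels.sum(axis=1, keepdims=True)   # c_1 + ... + c_N = 1
    # per-channel magnitude (mean) and spread (std) both shrink roughly as 1/N
    print(n_channels, round(chroma.mean(), 4), round(chroma.std(), 4))
```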
Lakehal and Ziou [15] proposed using the chromatic coordinates in the rg and gb sub-spaces to minimize the error when extending a two-dimensional estimate into three dimensions. For each sub-space, the intersection of dichromatic lines is computed, and the errors for each coordinate of both sub-spaces are combined into an objective function, which is minimized individually for each of the R, G and B channels. This method shows promising results for color images, but extending it to multispectral imaging would require at least \((N-1)\) sub-spaces, where N is the total number of channels of the multispectral camera. It might be possible to create a reduced number of sub-spaces, but it is not obvious how the spectral correlation may permit reducing the number of planes. Even if the dichromatic lines and their intersections are computed for each sub-space, projecting them back to N dimensions will be computationally expensive and unstable, being very sensitive to estimation errors in any sub-space. Given these issues, extending such methods is not expected to bring promising results.
3.2 Constraint on Possible Illuminants
To reduce the estimation error in a sub-space, some authors propose to impose constraints on the set of possible illuminants. Some algorithms [16,17,18,19] aim at finding the chromaticity of an illuminant by intersecting a dichromatic line with the Planckian locus, or with a set of known illuminants. The basic idea of such methods is to evaluate a set of candidate illuminants along the Planckian locus and pick the candidate that minimizes a distance-based error as the estimate of the scene illuminant. Toro and Funt [20] assumed the presence of a fixed number of different materials in an image patch. This method requires a discrete set of candidate illuminants; each illuminant from the given set is tested sequentially, and the illumination is estimated by solving a set of simultaneous linear equations using a Veronese projection of multilinear constraints. Toro [21] proposed optimizing a cost function on the color of the illuminant, where the scene illuminant is found by fitting the colors of candidate illuminants to the image colors. Shi and Funt [22] proposed two Hough transforms for computing a histogram that represents the likelihood that a candidate intersection line is the image illumination axis. In all these methods, the illumination estimate is obtained from a set of candidate illuminants.
Here we discuss the feasibility of extending the algorithms based on intersecting dichromatic lines with the Planckian locus. Using this technique for color images, there may be none, one or two points of intersection. In the case of no intersection, the closest point between the line and the Planckian locus is identified as the scene illuminant. If there are two intersections, one of them is selected as the correct intersection on the basis of heuristics. Therefore, in theory, the constraint of restricting illuminants to the curve of the black-body radiator may help determine the scene illuminant from a single surface in color images. However, it was found in [23] that approximating illuminants by the black-body curve is not advantageous for illuminant estimation in color images. Moreover, the Planckian locus would need to be defined for the multispectral imaging domain. Constraining the set of possible illuminants also has the same issues of computational complexity and instability as discussed in Sect. 3.1, and therefore does not guarantee better results. Exhaustive quantitative investigations would be needed to evaluate the complexity, assumptions and results of extending such algorithms, as their extension to multispectral imaging is not straightforward.
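For completeness, the candidate-selection step shared by these constrained methods can be sketched as follows, assuming a discrete set of candidate illuminant chromaticities (for instance sampled along the Planckian locus) and dichromatic lines fitted as in the sketch of Sect. 3.1; this is a generic distance-minimization stand-in, not the exact criterion of any of [16,17,18,19]:

```python
import numpy as np

def point_line_distance(x, p, d):
    """Distance from point x to the 2-D line through p with unit direction d."""
    v = x - p
    return np.linalg.norm(v - (v @ d) * d)

def pick_candidate(candidates, lines):
    """Pick the candidate illuminant chromaticity closest, in total, to all
    dichromatic lines.

    candidates : (K, 2) array of candidate illuminant chromaticities, e.g.
                 sampled along the Planckian locus (sampling is left open here)
    lines      : list of (point, unit direction) pairs, one per surface
    """
    costs = [sum(point_line_distance(c, p, d) for p, d in lines)
             for c in candidates]
    return candidates[int(np.argmin(costs))]
```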
3.3 Inverse Intensity Chromaticity Space
The inverse intensity chromaticity space (IICS) was introduced by Tan et al. [24]. It is spanned by the chromaticity of a single channel and the inverse of the image intensity. In the IICS, there is a linear correlation between the image chromaticity and the illumination chromaticity, which allows estimating the scene illumination without segmenting the color beneath the highlights. This method has the potential to be extended to multispectral imaging, since it treats each channel separately. One of its limitations is the detection of highlight regions, which can be improved by selecting a suitable method for multispectral imaging [25]. Riess et al. [26] proposed computing local estimates of the scene illuminant by decomposing the image into mini-regions and then using those estimates to compute the dominant illuminant of the scene in the inverse intensity chromaticity space. Their method is an extension of [24] and also has the potential to be extended to higher-dimensional multispectral data.
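A simplified sketch of the basic IICS estimation for an N-channel image is given below; a plain least-squares line fit per channel replaces the Hough-transform-based procedure used in [24], and the highlight mask is assumed to be provided by a separate detection step:

```python
import numpy as np

def iics_estimate(img, highlight_mask):
    """Per-channel illuminant chromaticity from the inverse intensity
    chromaticity space (simplified least-squares stand-in for [24]).

    img            : (H, W, N) image, N = number of channels
    highlight_mask : (H, W) boolean mask of detected highlight pixels
    """
    pixels = img[highlight_mask].astype(float)            # (M, N)
    intensity = pixels.sum(axis=1)
    valid = intensity > 1e-6
    pixels, intensity = pixels[valid], intensity[valid]
    inv_intensity = 1.0 / intensity
    gamma = np.empty(pixels.shape[1])
    for c in range(pixels.shape[1]):
        chroma_c = pixels[:, c] / intensity
        # line fit: chroma_c = slope * (1 / intensity) + intercept
        slope, intercept = np.polyfit(inv_intensity, chroma_c, 1)
        gamma[c] = intercept                               # illuminant chromaticity
    return gamma / gamma.sum()                             # normalize to sum to one
```

Because the fit is done independently per channel, the same code runs unchanged for N > 3 channels, which is the property that makes this family of methods attractive for multispectral data.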
3.4 Use of Polarizing Filter
The use of a polarizing filter to obtain a specular-only image is proposed in [27, 28]. In [27], knowledge of the surface geometry is required, and its extension to multispectral imaging is not obvious. However, Badawi et al. [28] proposed the use of spatially constrained independent component analysis (ICA) and mutual-information-based ICA for estimating the scene illumination. This method has the potential to be extended to multispectral imaging. It is also worth noting that the method in [28] was tested on hyperspectral images after thresholding the bright pixels, with promising results.
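A rough sketch of blind-source separation from two polarized captures is shown below; it uses plain FastICA as a stand-in for the spatially constrained and mutual-information-based ICA of [28], and the kurtosis heuristic for picking the specular layer is ours:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def separate_layers(img_pol_a, img_pol_b):
    """Separate two reflection layers from two single-channel captures taken
    with different polarizer orientations (run per channel for spectral data).
    Plain FastICA is used here as a simplified stand-in for the ICA of [28]."""
    h, w = img_pol_a.shape
    X = np.stack([img_pol_a.ravel(), img_pol_b.ravel()], axis=1).astype(float)
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                       # two independent source layers
    # heuristic (ours): the specular layer is the sparser, heavier-tailed one
    order = np.argsort([kurtosis(S[:, 0]), kurtosis(S[:, 1])])[::-1]
    specular = S[:, order[0]].reshape(h, w)
    diffuse = S[:, order[1]].reshape(h, w)
    return specular, diffuse
```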
4 Illuminant Information from Multispectral Images
4.1 Statistics Based Methods
The statistics-based methods for illuminant estimation in color images were extended by Khan et al. [29] to multispectral images; they found that algorithms based on bright pixels (Max-RGB) and edge information (Gray-Edge) provide good results. The use of highlights may further improve those methods. We identify the method by Gijsenij et al. [30], who proposed using specular and shadow edges in an image for estimating the scene illuminant. In their approach, an initial illuminant estimate is obtained in the same way as in the gray-edge algorithm [31]. This estimate is improved by iteratively removing the color cast and assigning weights to the edges; the illuminant is re-estimated with the updated weights until a stable estimate is found. The use of edge information suggests that there is some specular sheen around the edges, which can be a valuable cue for estimating the scene illuminant. It can be anticipated that extending [30] to multispectral imaging can also bring promising results.
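For reference, straightforward N-channel generalizations of Max-RGB and first-order Gray-Edge, in the spirit of [29, 31], can be sketched as follows (the exact smoothing, norms and implementation details of the cited works may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def max_spectral(img):
    """Max-RGB generalized to N channels: per-channel maximum as the illuminant."""
    e = img.reshape(-1, img.shape[-1]).max(axis=0).astype(float)
    return e / np.linalg.norm(e)

def gray_edge(img, sigma=1.0, p=1):
    """First-order Gray-Edge generalized to N channels (Minkowski norm p)."""
    e = np.empty(img.shape[-1])
    for c in range(img.shape[-1]):
        smoothed = gaussian_filter(img[..., c].astype(float), sigma)
        grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
        e[c] = (grad ** p).mean() ** (1.0 / p)
    return e / np.linalg.norm(e)
```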
Drew et al. [32] proposed estimating the scene illuminant by taking the mean of specular pixels or pixels from near-white materials. In their method, a planar constraint is imposed, which assumes that the log-relative-chromaticity values of near-specular pixels are orthogonal to the illuminant chromaticity. They defined the zeta-image, formed by taking the dot product of the log-relative-chromaticity and the light direction. Their method works well when there are strong specularities in the scene, but performance drops significantly when the specular reflection from colored surfaces is low. However, when their planar constraint is used along with the gray-world and gray-edge algorithms, the illuminant estimation performance improves. It would be interesting to evaluate the effect of imposing the planar constraint and creating the so-called zeta-images from multispectral data for estimating the scene illumination.
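A simplified sketch of this idea is given below: given a rough illuminant estimate (e.g., from the gray-edge sketch above), pixels whose log-relative-chromaticity is nearly orthogonal to the illuminant direction are treated as near-specular and averaged to refine the estimate. This follows our reading of [32] and omits the geometric-mean chromaticity and other details of the original:

```python
import numpy as np

def zeta_image(img, illum_est, eps=1e-6):
    """Per-pixel zeta values: dot product of log-relative-chromaticity with the
    unit illuminant direction; near-specular pixels give values near zero
    (simplified reading of the planar constraint of [32]).

    img       : (H, W, N) image
    illum_est : (N,) rough illuminant estimate
    """
    chroma = img / (img.sum(axis=-1, keepdims=True) + eps)
    illum_chroma = illum_est / illum_est.sum()
    psi = np.log(chroma + eps) - np.log(illum_chroma + eps)  # log-relative-chromaticity
    u = illum_chroma / np.linalg.norm(illum_chroma)
    return psi @ u

def refine_with_specular_pixels(img, illum_est, quantile=0.05):
    """Refine an illuminant estimate as the mean of the most specular-looking pixels."""
    zeta = np.abs(zeta_image(img, illum_est))
    mask = zeta <= np.quantile(zeta, quantile)
    e = img[mask].mean(axis=0)
    return e / np.linalg.norm(e)
```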
4.2 Low-Rank Matrix Factorization of Multispectral Data
Some methods for highlight-based illuminant estimation in multispectral imaging use the factorization properties of the data matrix. An et al. [33] proposed illuminant estimation from highlights in multispectral images by assuming that the chromaticity of the specular components is uniform over all pixels. They also assumed that surfaces with specular reflection have diffuse counterparts in the specular-free regions. Under these assumptions, the diffuse component of a specular pixel can be reconstructed from another diffuse pixel of the same material, and the correlation between the spectra of the specular and diffuse reflections is low. By assuming a single light source, or multiple light sources with normalized spectra, a low-rank matrix is formed from the specular pixels and is used to estimate the scene illuminant. The method in [33] requires more than four channels for its implementation. Low-rank matrix factorization is further analyzed by Zheng et al. [34] for signal separation. Their method works well even under non-uniform illumination, but performance degrades in the presence of multiple illuminants or shadows. Their work was improved by Chen et al. [35] by introducing a conditional random field model for handling scenes with multiple illuminants and shadows.
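The core low-rank step can be sketched as follows, assuming the diffuse counterparts of the specular pixels have already been reconstructed; under the uniform specular chromaticity assumption the residual matrix is approximately rank one, and its leading right singular vector gives the illuminant spectrum (this is an illustration of the idea, not the full pipeline of [33]):

```python
import numpy as np

def illuminant_from_specular_residual(specular_pixels, diffuse_estimates):
    """Estimate the illuminant spectrum from the rank-one specular residual.

    specular_pixels   : (M, N) pixels inside detected highlight regions
    diffuse_estimates : (M, N) diffuse counterparts reconstructed from
                        specular-free pixels of the same materials
    """
    residual = specular_pixels - diffuse_estimates        # ideally rank-1: m_s(x) * e^T
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    e = vt[0]                                             # leading right singular vector
    e = e if e.sum() > 0 else -e                          # resolve the sign ambiguity
    return e / np.linalg.norm(e)
```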
4.3 Use of Singular Value Decomposition
Tominaga and Wandell [36] observed that the color planes from different objects intersect in a line, which gives the scene illumination. They computed the color planes of the reflected light through singular value decomposition (SVD). This method was used for illuminant estimation in multispectral imaging by Huynh and Robles-Kelly [37], who assumed a uniformly illuminated scene and used uniform-albedo patches for the estimation. With this technique, satisfactory performance is obtained for monochromatic patches. However, the pre-processing of images before applying the method is a key factor which can significantly degrade performance if not done properly. Imai et al. [38] proposed that the surface containing specular reflection is two-dimensional; they projected the segmented high-dimensional spectral data from the specular reflection onto a two-dimensional space spanned by the first two principal components. Within this space, the pixel distribution is divided into two clusters, and the specular highlight cluster is assumed to coincide with the directional vector of the illuminant. Correct identification of highlights is crucial for the performance of their method. This method was extended to multiple illuminants in [39, 40].
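A sketch of the plane-intersection idea underlying [36, 37] is given below, assuming uniform-albedo patches have been identified beforehand; each patch's dichromatic plane is estimated with an SVD, and the illuminant is taken as the unit vector closest to lying in all planes (patch selection and pre-processing, which the cited works stress, are omitted):

```python
import numpy as np

def patch_plane_basis(pixels):
    """Orthonormal basis (N, 2) of the dichromatic plane of one uniform-albedo
    patch; the plane passes through the origin, so the data are not centered."""
    _, _, vt = np.linalg.svd(np.asarray(pixels, dtype=float), full_matrices=False)
    return vt[:2].T

def intersect_planes(bases):
    """Unit vector closest to lying in all dichromatic planes (common illuminant)."""
    n = bases[0].shape[0]
    M = np.zeros((n, n))
    for B in bases:
        M += np.eye(n) - B @ B.T           # projector onto each plane's complement
    w, v = np.linalg.eigh(M)
    e = v[:, 0]                             # eigenvector with the smallest eigenvalue
    e = e if e.sum() > 0 else -e            # resolve the sign ambiguity
    return e / np.linalg.norm(e)

def estimate_illuminant_svd(patches):
    """patches: list of (Mi, N) pixel arrays from uniformly colored surfaces."""
    return intersect_planes([patch_plane_basis(p) for p in patches])
```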
The SVD- and principal-component-based illuminant estimation methods have already been demonstrated to perform well for multispectral imaging, but they depend on the correct segmentation of highlight regions in the image. Combining a highlight detection algorithm for multispectral imaging with the algorithms discussed in this section can provide promising results for illuminant estimation in multispectral imaging.
5 Conclusion
In this paper, we reviewed illuminant estimation methods that are based on highlights in images and identified methods with potential for extension to multispectral imaging. Based on this qualitative analysis, dichromatic reflection model based methods that work in a two-dimensional chromaticity space can be computationally complex and unstable when extended to multispectral imaging. We identified some techniques that can be extended to multispectral imaging; a quantitative analysis of such extensions is still required before reaching a final conclusion. By combining the current work with state-of-the-art highlight detection algorithms [25], the reader can pick the algorithms that best suit the type of images and data to be processed for illuminant estimation.
References
Finlayson, G.D., Hordley, S.D., Hubel, P.M.: Color by correlation: a simple, unifying framework for color constancy. IEEE Trans. Pattern Anal. Mach. Intell. 23, 1209–1221 (2001)
Lee, H.C., Breneman, E.J., Schulte, C.P.: Modeling light reflection for computer color vision. IEEE Trans. Pattern Anal. Mach. Intell. 12, 402–409 (1990)
Shafer, S.A.: Using color to separate reflection components. Color Res. Appl. 10(4), 210–218 (1985)
Klinker, G.J., Shafer, S.A., Kanade, T.: The measurement of highlights in color images. Int. J. Comput. Vis. 2, 7–32 (1988)
Lin, S., Shum, H.-Y.: Separation of diffuse and specular reflection in color images. In: Conference on Computer Vision and Pattern Recognition, pp. 341–346 (2001)
Lapray, P.-J., Wang, X., Thomas, J.-B., Gouton, P.: Multispectral filter arrays: recent advances and practical implementation. Sensors 14(11), 21626–21659 (2014)
Khan, H.A., Thomas, J.B., Hardeberg, J.Y.: Multispectral constancy based on spectral adaptation transform. In: Sharma, P., Bianchi, F.M. (eds.) SCIA 2017. LNCS, vol. 10270, pp. 459–470. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59129-2_39
Khan, H.A., Thomas, J.-B., Hardeberg, J.Y., Laligant, O.: Spectral adaptation transform for multispectral constancy. J. Imaging Sci. Technol. 62(2), 20504-1–20504-12 (2018)
Lee, H.-C.: Method for computing the scene-illuminant chromaticity from specular highlights. J. Opt. Soc. Am. A 3, 1694–1699 (1986)
D’Zmura, M., Lennie, P.: Mechanisms of color constancy. J. Opt. Soc. Am. A 3, 1662–1672 (1986)
Lehmann, T.M., Palm, C.: Color line search for illuminant estimation in real-world scenes. J. Opt. Soc. Am. A 18, 2679–2691 (2001)
Uchimi, Y., Jinno, T., Kuriyama, S.: Estimation of multiple illuminant colors using color lines of single image. In: International Conference on Advanced Informatics, Concepts, Theory, and Applications, pp. 1–6, August 2017
Kim, J.-Y., Seo, Y.-S., Ha, Y.-H.: Estimation of illuminant chromaticity from single color image using perceived illumination and highlight. J. Imaging Sci. Technol. 45(3), 274–282 (2001)
Kwon, O.-S., Cho, Y.-H., Kim, Y.-T., Ha, Y.-H.: Illumination estimation based on valid pixel selection in highlight region. In: IEEE International Conference on Image Processing, October 2004
Lakehal, E., Ziou, D.: Computational color constancy from maximal projections mean assumption. Multimedia Tools Appl. (2017). https://link.springer.com/article/10.1007/s11042-017-5476-1
Finlayson, G.D., Schaefer, G.: Solving for colour constancy using a constrained dichromatic reflection model. Int. J. Comput. Vision 42, 127–144 (2001)
Schaefer, G.: Robust dichromatic colour constancy. In: Campilho, A., Kamel, M. (eds.) ICIAR 2004. LNCS, vol. 3212, pp. 257–264. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30126-4_32
Li, Y.-Y., Lee, H.-C.: Auto white balance by surface reflection decomposition. J. Opt. Soc. Am. A 34, 1800–1809 (2017)
Mazin, B., Delon, J., Gousseau, Y.: Illuminant estimation from projections on the planckian locus. In: Fusiello, A., Murino, V., Cucchiara, R. (eds.) ECCV 2012. LNCS, vol. 7584, pp. 370–379. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33868-7_37
Toro, J., Funt, B.: A multilinear constraint on dichromatic planes for illumination estimation. IEEE Trans. Image Process. 16, 92–97 (2007)
Toro, J.: Dichromatic illumination estimation without pre-segmentation. Pattern Recogn. Lett. 29(7), 871–877 (2008)
Shi, L., Funt, B.: Dichromatic illumination estimation via Hough transforms in 3D. In: Conference on Color in Graphics, Imaging, & Vision, pp. 259–262 (2008)
Ebner, M., Herrmann, C.: On determining the color of the illuminant using the dichromatic reflection model. In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663, pp. 1–8. Springer, Heidelberg (2005). https://doi.org/10.1007/11550518_1
Tan, R.T., Nishino, K., Ikeuchi, K.: Color constancy through inverse-intensity chromaticity space. J. Opt. Soc. Am. A 21, 321–334 (2004)
Khan, H.A., Thomas, J.-B., Hardeberg, J.Y.: Analytical survey of highlight detection in color and spectral images. In: Bianco, S., Schettini, R., Trémeau, A., Tominaga, S. (eds.) CCIW 2017. LNCS, vol. 10213, pp. 197–208. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56010-6_17
Riess, C., Eibenberger, E., Angelopoulou, E.: Illuminant color estimation for real-world mixed-illuminant scenes. In: International Conference on Computer Vision, pp. 782–789, November 2011
Hara, K., Nishino, K.: Variational estimation of inhomogeneous specular reflectance and illumination from a single view. J. Opt. Soc. Am. A 28, 136–146 (2011)
Badawi, W.K.M., Chibelushi, C.C., Patwary, M.N., Moniri, M.: Specular-based illumination estimation using blind signal separation techniques. IET Image Process. 6, 1181–1191 (2012)
Khan, H.A., Thomas, J.-B., Hardeberg, J.Y., Laligant, O.: Illuminant estimation in multispectral imaging. J. Opt. Soc. Am. A 34, 1085–1098 (2017)
Gijsenij, A., Gevers, T., van de Weijer, J.: Improving color constancy by photometric edge weighting. IEEE Trans. Pattern Anal. Mach. Intell. 34, 918–929 (2012)
van de Weijer, J., Gevers, T., Gijsenij, A.: Edge-based color constancy. IEEE Trans. Image Process. 16, 2207–2214 (2007)
Drew, M.S., Joze, H.R.V., Finlayson, G.D.: The zeta-image, illuminant estimation, and specularity manipulation. Comput. Vis. Image Underst. 127, 1–13 (2014)
An, D., Suo, J., Wang, H., Dai, Q.: Illumination estimation from specular highlight in a multispectral image. Opt. Express 23, 17008–17023 (2015)
Zheng, Y., Sato, I., Sato, Y.: Illumination and reflectance spectra separation of a hyperspectral image meets low-rank matrix factorization. In: Conference on Computer Vision and Pattern Recognition, pp. 1779–1787, June 2015
Chen, X., Drew, M.S., Li, Z.-N.: Illumination and reflectance spectra separation of hyperspectral image data under multiple illumination conditions. Electron. Imaging 18, 194–199 (2017)
Tominaga, S., Wandell, B.A.: Standard surface-reflectance model and illuminant estimation. J. Opt. Soc. Am. A 6, 576–584 (1989)
Huynh, C.P., Robles-Kelly, A.: A solution of the dichromatic model for multispectral photometric invariance. Int. J. Comput. Vis. 90, 1–27 (2010)
Imai, Y., Kato, Y., Kadoi, H., Horiuchi, T., Tominaga, S.: Estimation of multiple illuminants based on specular highlight detection. In: Schettini, R., Tominaga, S., Trémeau, A. (eds.) CCIW 2011. LNCS, vol. 6626, pp. 85–98. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20404-3_7
Hang, N.T.D., Horiuchi, T., Hirai, K., Tominaga, S.: Estimation of two illuminant spectral power distributions from highlights of overlapping illuminants. In: Signal-Image Technology & Internet-Based Systems, pp. 434–440, December 2013
Kato, Y., Horiuchi, T., Tominaga, S.: Estimation of multiple light sources from specular highlights. In: International Conference on Pattern Recognition, pp. 2033–2086, November 2012