
1 Introduction

In recent years, the development of the digital society has made it easier to obtain information about remote places. Various fields such as digital archiving [1], online shopping, and telemedicine are developing as a result. Since users learn about objects and obtain information through images, the reliability of color is very important. For example, if the color of a product image differs from the real product in online shopping, the user may purchase an undesirable item. In telemedicine, a doctor needs to judge the exact position and state of the affected part based on image information, so the color information of the image is critical. However, typical RGB cameras have some limitations. The first problem is that capturing natural scenes with high dynamic range content using conventional RGB cameras generally results in saturated and underexposed images. This occurs because of the narrow dynamic range of common 8-bit (256 gradations) images. HDR images are one way to solve this problem. The second problem is that the range of colors a typical RGB image can represent is narrower than the range that human beings can perceive. To solve this problem, a spectral image, which holds high-dimensional spectral information for each pixel, is effective.

In previous studies, HDR image measurement required large equipment, and color reproduction was often insufficient. Therefore, we propose a system that measures HDR and spectral images in a unified manner.

2 Related Work

In this section, we explain what HDR images and spectral images are. We then introduce previous research and clarify the problems to be overcome in this work.

2.1 HDR Image

A typical RGB image represents luminance with 8 bits (256 gradations) for each of red, green, and blue. Capturing natural scenes with high dynamic range content using conventional RGB cameras generally results in saturated and underexposed images, as shown in Fig. 1. This phenomenon occurs because the dynamic range that conventional RGB images can express is narrow.

Fig. 1. Saturated and underexposed images. The left image is too bright and the floor appears white; the right image is too dark. (Color figure online)

A high dynamic range (HDR) image, generated by combining images with different brightness, can solve this problem. Because an HDR image stores luminance information with more than 8 bits, it prevents the loss of information in bright and dark areas.

Debevec et al. proposed a method of generating HDR images by acquiring multiple images with different exposure times [2]. Haneishi et al. proposed a system that measures HDR spectral images using a multiband camera [3]. In this method, two sets of R, G, and B band (6-band) images obtained by changing the exposure time are used. However, color reproduction was not sufficient when a wide range was expressed using only two levels of brightness, and the camera system was large.

2.2 Spectral Image

A spectral image contains, for each pixel, the spectral reflectance, which is a physical characteristic of the object. Since a spectral image has higher-dimensional information than a typical RGB image, it can reproduce more colors and has attracted attention in many fields in recent years. For example, there is research that identifies objects with different spectral reflectances by using spectral information [4]. However, measuring spectral information requires expensive equipment and complicates image capture.

2.3 Problems of Previous Research

The conventional HDR method has difficulty capturing moving objects. In addition, few studies have combined HDR images and spectral images. Therefore, we propose an HDR spectral video measurement system that does not require filter exchange, using complementary color filters, a neutral density filter, and an RGB camera.

3 Method

In this section, we propose an HDR spectral video measurement system using complementary color filters, a neutral density filter, and an RGB camera. For HDR processing, we present two algorithms.

3.1 Image Acquisition

In this research, we use the camera system shown in Fig. 2. The system has four filters and a beam splitter that divides the incident light into four paths, so it can capture four filtered images at once. The mechanism of the optical element of the camera system is shown in Fig. 3. The system has cyan (C), magenta (M), yellow (Y), and neutral density (ND) filters. The image captured through the ND filter is used in HDR processing, and the images captured through the complementary color filters are used in spectral estimation. The spectral transmission characteristics of the CMY-ND filters are shown in Fig. 4.

Fig. 2. HDR spectral video measurement system. The image on the left shows the camera with the optical element. The optical element has the beam splitter and the filter cartridge shown in the right figure. The cartridge holds four filters.

Fig. 3. The mechanism of the optical element. (Color figure online)

Fig. 4. The spectral transmission characteristics of the CMY-ND filters.

3.2 Image Alignment

The camera system captures a single image composed of the four filtered images, as shown in Fig. 5. The captured image is divided into four, but the pixel positions of the four images are slightly shifted from each other. For this reason, we align the positions of the four filtered images using a homography transformation. A homography transformation projects the coordinates of one plane onto another plane. In our method, we use the ND image as the reference image and align the other three images to it. The transformation requires a homography matrix, which we calculate using the 17 \( \times \) 20 checkerboard shown in Fig. 5.

Fig. 5. The 17 × 20 checkerboard. The right image was obtained by capturing the checkerboard with the camera system.

By capturing the checkerboard with the camera system, 340 feature points can be extracted from each filtered image. The homography matrix can be calculated by Eq. (1) using the coordinates of these feature points and the pseudo-inverse matrix.

$$ \begin{aligned} {\mathbf{X}}^{'} & = {\mathbf{HX}}, \\ {\mathbf{H}} & = {\mathbf{X}}^{'} {\mathbf{X}}^{\text{T}} ({\mathbf{XX}}^{\text{T}} )^{ - 1} , \\ \end{aligned} $$
(1)

where \( {\mathbf{H}} \) is the homography matrix, \( {\mathbf{X}} ' \) is the matrix of feature-point coordinates in the ND image, and \( {\mathbf{X}} \) is the matrix of coordinates in the CMY images to be aligned. By multiplying the CMY images by the homography matrix, the positions of all the images can be matched to the ND image.
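The following sketch illustrates this alignment step in Python with NumPy (the paper does not provide code, so the function names, array shapes, and the choice of library are our assumptions). It computes \( {\mathbf{H}} \) exactly as written in Eq. (1), treating the corner coordinates as 3 \( \times \) N homogeneous matrices; in practice, OpenCV's cv2.findHomography and cv2.warpPerspective could be used to estimate the matrix robustly and to warp the CMY images.

```python
import numpy as np

def estimate_homography(pts_nd, pts_cmy):
    """Estimate H so that pts_nd ~ H @ pts_cmy, following Eq. (1).

    pts_nd, pts_cmy: (N, 2) arrays of corresponding checkerboard corners
    in the ND reference image and in one CMY image (N = 340 in the paper).
    """
    n = pts_nd.shape[0]
    # Stack the points as 3 x N homogeneous coordinate matrices.
    X_ref = np.vstack([pts_nd.T, np.ones(n)])    # X' in Eq. (1)
    X_src = np.vstack([pts_cmy.T, np.ones(n)])   # X  in Eq. (1)
    # Least-squares solution via the pseudo-inverse: H = X' X^T (X X^T)^-1.
    return X_ref @ X_src.T @ np.linalg.inv(X_src @ X_src.T)

def apply_homography(H, pts):
    """Map (N, 2) points with H and renormalize the homogeneous coordinate."""
    n = pts.shape[0]
    X = np.vstack([pts.T, np.ones(n)])
    Xw = H @ X
    return (Xw[:2] / Xw[2]).T
```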

3.3 Estimation of Spectral Information

In this research, spectral information is estimated using the RGB values captured through the CMY-ND filters. The relationship between these nine values and the spectral information is represented by Eq. (2).

$$ \begin{aligned} {\mathbf{r}} & = {\mathbf{Xg}}, \\ {\mathbf{X}} & \, = {\mathbf{rg}}^{\text{T}} ({\mathbf{gg}}^{\text{T}} )^{ - 1} . \\ \end{aligned} $$
(2)

To calculate the transformation matrix \( {\mathbf{X}} \), we use the x-rite color-checker shown in Fig. 6. The color-checker has 24 color patches, and we measure the spectral distribution and the nine band values for each color. \( {\mathbf{r}} \) represents the spectrum, and \( {\mathbf{g}} \) represents the nine values. We measure the spectral distribution data of the color-checker at 5 nm intervals in the range of 380 nm–780 nm using a spectroradiometer.
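A minimal sketch of this calibration step is shown below, again assuming NumPy and the matrix shapes implied by the text (81 spectral samples from 380 nm to 780 nm at 5 nm steps, 24 patches, nine band values per patch); the variable names are ours.

```python
import numpy as np

def fit_spectral_matrix(r, g):
    """Least-squares estimate of X in r = X g (Eq. (2)).

    r: (81, 24) measured spectra of the 24 ColorChecker patches.
    g: (9, 24) band values of the same patches captured with the camera.
    """
    return r @ g.T @ np.linalg.inv(g @ g.T)

def estimate_spectrum(X, g_pixel):
    """Estimate the spectrum of one pixel from its nine band values."""
    return X @ g_pixel   # (81, 9) @ (9,) -> (81,)
```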

Fig. 6. x-rite color-checker. (Color figure online)

3.4 HDR Processing

We propose two kinds of HDR processing. In both cases, when a pixel is saturated in the CMY images, the HDR image is generated using the value of the ND image at the same position. However, the two methods use different algorithms, which we explain in turn.

HDR Processing Using Luminance Ratio of Filtered Images

The first method uses the ratio of the luminance values of each CMY image to those of the ND image [5]. The ratios are obtained beforehand by shooting a white object with the camera system and dividing the luminance values of each CMY image by the values of the ND image, as shown in Fig. 7.

Fig. 7. The procedure of the HDR process.

When HDR processing is performed, an HDR image is generated by multiplying the value of the ND image by this ratio.
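The sketch below outlines this first method under our own assumptions (aligned 8-bit images, a saturation threshold of 255, and a per-channel ratio measured on the white object); it illustrates the replacement rule rather than reproducing the authors' implementation.

```python
import numpy as np

SATURATION = 255  # assumed clipping level for 8-bit values

def hdr_by_luminance_ratio(cmy_img, nd_img, ratio):
    """First HDR method: replace saturated CMY pixels with scaled ND values.

    cmy_img, nd_img: aligned (H, W, 3) images from one CMY filter and the ND filter.
    ratio: CMY/ND luminance ratio measured beforehand on a white object.
    """
    out = cmy_img.astype(np.float64)
    saturated = np.any(cmy_img >= SATURATION, axis=-1)   # mask of clipped pixels
    out[saturated] = nd_img[saturated].astype(np.float64) * ratio
    return out
```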

HDR Processing Using Spectral Transmission Characteristics of Filters

We also propose an HDR processing method that uses the spectral transmission characteristics of the filters. The flow of this HDR processing is shown in Fig. 8. In this method, we calculate the ratios of the spectral transmission characteristics of the CMY color filters to those of the ND filter and perform HDR processing using these ratios.

Fig. 8. The flow of HDR processing.

The spectral ratio is the transmission characteristic of each CMY color filter divided by that of the ND filter at each sampled wavelength. Although the camera system can acquire four images at once, the brightness of these images differs depending on the position within the optics before division. In the image captured without filters shown in Fig. 9, the top left of the image is dark and the bottom right is bright. To account for this optical non-uniformity of the equipment, we estimate the spectrum of each filter from the RGB values actually obtained through the filters.

Fig. 9. Captured image without filters.

The spectrum of each CMY-ND image can be estimated as shown in Eq. (3). We perform this calculation for the 24 patches of the color checker.

$$ \begin{aligned} {\mathbf{S}}_{C} = & {\mathbf{X}}_{3band} {\mathbf{g}}_{C} , \\ {\mathbf{S}}_{M} = & {\mathbf{X}}_{3band} {\mathbf{g}}_{M} , \\ {\mathbf{S}}_{Y} = & {\mathbf{X}}_{3band} {\mathbf{g}}_{Y} , \\ {\mathbf{S}}_{ND} = & {\mathbf{X}}_{3band} {\mathbf{g}}_{ND} . \\ \end{aligned} $$
(3)

\( {\mathbf{X}}_{3band} \) is a matrix for estimating a spectrum from 3-band values. \( {\mathbf{g}}_{C} , {\mathbf{g}}_{M} , {\mathbf{g}}_{Y} \;{\text{and}}\;{\mathbf{g}}_{ND} \) are the values captured with the camera system, and \( {\mathbf{S}}_{C} , {\mathbf{S}}_{M} , {\mathbf{S}}_{Y} \;{\text{and}}\;{\mathbf{S}}_{ND} \) are the spectra estimated from them. The filter ratios are calculated by dividing the spectrum of each CMY color filter by the spectrum of the ND filter and averaging over the 24 colors (Eq. (4)).

$$ \begin{aligned} {\mathbf{r}}_{C/ND} = \frac{1}{24}\mathop \sum \limits_{i = 1}^{24} ({\mathbf{S}}_{Ci} /{\mathbf{S}}_{NDi} ), \hfill \\ {\mathbf{r}}_{M/ND} = \frac{1}{24}\mathop \sum \limits_{i = 1}^{24} ({\mathbf{S}}_{Mi} /{\mathbf{S}}_{NDi} ), \hfill \\ {\mathbf{r}}_{Y/ND} = \frac{1}{24}\mathop \sum \limits_{i = 1}^{24} ({\mathbf{S}}_{Yi} /{\mathbf{S}}_{NDi} ). \hfill \\ \end{aligned} $$
(4)

When a saturated pixel is found in the CMY images, we extract the value of the ND image at the same coordinates. \( {\mathbf{g}}_{LDR - ND} \) is the value of the ND image, and its spectrum \( {\mathbf{S}}_{LDR - ND} \) can be calculated by Eq. (5) using the transformation matrix \( {\mathbf{X}}_{3band} \).

$$ {\mathbf{S}}_{LDR - ND} = {\mathbf{X}}_{{3{\text{band}}}} {\mathbf{g}}_{{{\text{LDR}} - {\text{ND}}}} . $$
(5)

The spectral information of the saturated pixel is then estimated by multiplying this \( {\mathbf{S}}_{LDR - ND} \) by the ratios of Eq. (4). The calculation is shown in Eq. (6).

$$ \begin{aligned} {\mathbf{S}}_{HDR - C} = {\mathbf{S}}_{LDR - ND} \times {\mathbf{r}}_{C/ND} , \hfill \\ {\mathbf{S}}_{HDR - M} = {\mathbf{S}}_{LDR - ND} \times {\mathbf{r}}_{M/ND} , \hfill \\ {\mathbf{S}}_{HDR - Y} = {\mathbf{S}}_{LDR - ND} \times {\mathbf{r}}_{Y/ND} , \hfill \\ \end{aligned} $$
(6)

where \( {\mathbf{S}}_{HDR - C} , {\mathbf{S}}_{HDR - M} \;{\text{and}}\;{\mathbf{S}}_{HDR - Y} \) represent the spectra of the HDR spectral image. Finally, these spectra are converted back to RGB values (Eq. (7)).

$$ \begin{aligned} {\mathbf{g}}_{HDR - C} = {\mathbf{X}}_{3band}^{ + } {\mathbf{S}}_{HDR - C} , \hfill \\ {\mathbf{g}}_{HDR - M} = {\mathbf{X}}_{3band}^{ + } {\mathbf{S}}_{HDR - M} , \hfill \\ {\mathbf{g}}_{HDR - Y} = {\mathbf{X}}_{3band}^{ + } {\mathbf{S}}_{HDR - Y} . \hfill \\ \end{aligned} $$
(7)
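Putting Eqs. (3)–(7) together, the following sketch shows how the second method could be implemented; the NumPy code, the array shapes (81 spectral samples, 24 patches), and the function names are our assumptions rather than the authors' code.

```python
import numpy as np

def filter_ratios(X_3band, g_C, g_M, g_Y, g_ND):
    """Eqs. (3)-(4): per-wavelength CMY/ND ratios averaged over the 24 patches.

    X_3band: (81, 3) matrix estimating an 81-sample spectrum from 3-band values.
    g_C, g_M, g_Y, g_ND: (3, 24) band values of the ColorChecker patches
    captured through each filter.
    """
    S_C, S_M, S_Y, S_ND = (X_3band @ g for g in (g_C, g_M, g_Y, g_ND))  # Eq. (3)
    r_C = np.mean(S_C / S_ND, axis=1)   # Eq. (4)
    r_M = np.mean(S_M / S_ND, axis=1)
    r_Y = np.mean(S_Y / S_ND, axis=1)
    return r_C, r_M, r_Y

def recover_saturated_pixel(X_3band, g_ldr_nd, r_filter):
    """Eqs. (5)-(7): recover the RGB value of a saturated CMY pixel."""
    S_ldr_nd = X_3band @ g_ldr_nd            # Eq. (5): spectrum of the ND pixel
    S_hdr = S_ldr_nd * r_filter              # Eq. (6): apply the filter ratio
    return np.linalg.pinv(X_3band) @ S_hdr   # Eq. (7): back to RGB values
```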

4 Experiment

We conducted the experiments in a darkroom to block external light. Table 1 shows the specifications of the camera system.

Table 1. The specifications of the camera system.

4.1 Output of HDR Spectral Image

We captured the scene shown in Fig. 10 with the camera system and generated HDR spectral images using the two proposed HDR methods.

Fig. 10. Captured image with the filters.

First, the output image without HDR processing is shown in Fig. 11, and the tone-mapped HDR spectral images produced by the two methods are shown in Fig. 12. There are places where noticeable differences occur between these images. Figure 13 shows enlarged parts of Figs. 11 and 12. In the image without HDR processing, the pink pattern disappears due to saturation, but the pattern is preserved in the HDR image produced by the second HDR method. Moreover, the false contours that occur in the output of the first HDR method disappear in the image produced by the second method. From this result, the second HDR method is superior in color reproduction.

Fig. 11. Output image without HDR processing.

Fig. 12. HDR images. The left image is the output of the first method, and the right image is the output of the second method.

Fig. 13. Enlarged views of Figs. 11 and 12. These show the lower right of the output images in Figs. 11 and 12. The left image is the output without HDR processing, the middle is the first method, and the right is the second method.

4.2 Spectral Evaluation After HDR Processing

To evaluate the proposed method quantitatively, we check how accurately the output image can reproduce the spectral information.

First, we capture the x-rite color-checker in a bright environment. We generate HDR spectral images from the captured images using the two HDR processing methods and evaluate the spectral information estimated for each patch against the correct values. The correct data were obtained by measuring the 24 color patches with a spectroradiometer. The mean squared error between the estimated spectrum and the correct data is used for the evaluation. We define the patch numbers as shown in Fig. 14. The evaluation results are shown in Fig. 15. In this graph, the vertical axis represents the mean squared error and the horizontal axis represents the patch number. The green bars are the results of the first HDR processing method and the orange bars are those of the second.
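As a concrete illustration, the per-patch error could be computed as below (a sketch under the same assumed array shapes as before; the paper does not specify its implementation).

```python
import numpy as np

def patch_mse(estimated, reference):
    """Mean squared error per patch, as plotted in Fig. 15.

    estimated, reference: (81, 24) spectra of the 24 patches
    (81 samples: 380-780 nm at 5 nm steps).
    """
    return np.mean((estimated - reference) ** 2, axis=0)
```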

Fig. 14. The patch numbers of the x-rite color checker. (Color figure online)

Fig. 15. The mean squared error of spectra after HDR processing. (Color figure online)

We experimented in a bright environment, but not all patches were saturated. The saturated patches are indicated by red numbers in Fig. 15. The results show that the second HDR method has better color reproducibility for many colors.

4.3 Evaluation of Luminance Linearity by HDR Processing

Since the camera sensor receives light while the shutter is open, the shutter speed (exposure time) and the brightness of the image are proportional. Therefore, we check whether the luminance of the HDR spectral image is linear with respect to the shutter speed. First, we capture a white object at various shutter speeds. HDR spectral images are generated from these images, and the luminance \( L \) is calculated by Eq. (8).

$$ L = 0.2126 \times R + 0.7152 \times G + 0.0722 \times B . $$
(8)
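A small sketch of this check is given below; it uses the Rec. 709 coefficients of Eq. (8) (with the blue coefficient taken as 0.0722), and the use of NumPy and the correlation against the ideal line are our assumptions about how Table 2 could be reproduced.

```python
import numpy as np

def luminance(rgb):
    """Luminance of an (H, W, 3) image according to Eq. (8)."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def linearity(shutter_speeds, hdr_images):
    """Correlation between exposure time and mean luminance (cf. Table 2)."""
    mean_l = [luminance(img).mean() for img in hdr_images]
    return np.corrcoef(shutter_speeds, mean_l)[0, 1]
```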

The relationship between the luminance \( L \) and the shutter speed is shown in Fig. 16. In this graph, the red, blue, and gray solid lines indicate the luminance of the output image for the first method, the second method, and without HDR processing, respectively. The yellow line represents the ideal proportional relationship. The correlations between the luminance of the three methods and the ideal line are shown in Table 2.

Fig. 16. The relationship between the luminance and the shutter speed. (Color figure online)

Table 2. The correlation between the luminance of the three methods and the ideal line.

5 Discussion

In this research, we proposed an HDR spectral video measurement system using filters and an RGB camera. We performed HDR processing using the RGB values of the ND image where the CMY images were saturated. As a result, the reproducibility of many colors improved in the spectral evaluation experiment in Sect. 4.2. However, our method cannot improve achromatic colors; we consider this is because the conventional method is an algorithm specialized for achromatic colors. In Sect. 4.3, we evaluated the linearity of luminance after HDR processing. We found that saturated pixel values can be interpolated by the HDR processing, and that the luminance of the output image is linear with respect to the exposure time.

We will further improve the color reproduction of the HDR processing by preparing more patches and applying machine learning. We also want to improve the estimation accuracy of the spectral information by considering the optical errors of the camera system.

6 Conclusions

In this research, we proposed an HDR spectral video measurement system to improve the color reproduction of images. Previous research required large-scale equipment to generate HDR images, but the proposed method solves this problem by using complementary color filters, a neutral density filter, and an RGB camera. In addition, we proposed two HDR processing methods and evaluated them experimentally. The results show that the color reproduction of the HDR spectral image can be improved by these HDR processes. In particular, the HDR method using spectral information reduces color shift during HDR processing and solves the problem of false contours. Although the proposed system cannot generate video in real time, it can produce HDR spectral video from previously captured footage. Spectral information and HDR processing technology are very important, and we want to improve the accuracy of color reproduction and expand the range of applications.