
1 Introduction

Measurement of the shape and color of a three-dimensional (3D) object surface is important in many fields, including display, digital archiving, computer graphics, and computer vision. For instance, reproducing the color appearance of an object under various illuminations requires spectral color information of the object surface, and the shape information of the surface is needed for shading. Conventional imaging systems cannot record such high-order information.

There are several methods for acquiring spectral color information and shape information simultaneously. Manabe et al. proposed a method to measure both shape and spectrum based on the projection of color coded patterns [1], and the authors proposed stereo matching based on multiband imaging [2]. These methods are usable for some applications. However, the 3D shape estimation in these methods is not accurate in comparison with recent 3D measurement methods such as time of flight (ToF) [3].

In this paper, we propose the acquisition of 3D data and spectral color of an object surface by a system consisting of a recent RGBD camera and a programmable light source that can emit narrow band light. We use the Microsoft Kinect v2 [4] (referred to as Kinect) as the RGBD camera. The Kinect is a popular low-cost RGBD camera that obtains an RGB image with depth information (RGBD data) by the ToF method. The depth corresponds to the distance from the camera and is obtained for each pixel as a depth image. The 3D information of the scene is reconstructed from the depth image.
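As an illustration of this reconstruction, the following minimal sketch back-projects a depth image to 3D points with a pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are hypothetical placeholders rather than the actual Kinect calibration, which in practice is obtained through the Kinect SDK's coordinate mapping.

```python
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy):
    """Back-project a depth image (in millimeters) to 3D points
    using a pinhole camera model."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)        # distance along the optical axis
    x = (u - cx) * z / fx                  # lateral position from column index
    y = (v - cy) * z / fy                  # vertical position from row index
    return np.stack([x, y, z], axis=-1)    # (h, w, 3) array of 3D coordinates

# Example with hypothetical intrinsics for a 512 x 424 depth image
points = depth_to_points(np.full((424, 512), 675.0),
                         fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```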

Therefore, we attempt to use the Kinect for the acquisition of 3D data and spectral color. The Kinect has auto white balance and auto contrast, and we could not find a way for the user to fix the white balance and contrast of the camera. There are works on spectral color recovery from RGB images [5,6,7], but in our case the camera parameters are unknown and change between scenes, so it is difficult to estimate the spectral color by these conventional methods. The information available for the estimation is the observed colors of objects whose reflectances are known. We therefore estimate the spectral color of the objects in each narrow band image by using the gray values of achromatic color patches in the image as references.

2 Measurement System

The measurement system consists of an RGBD camera, a programmable light source, and PCs controlling them. Figure 1 shows the system setting. Narrow band light is emitted from the light guide of the programmable light source. The light is diffused by a diffuser and illuminates the target objects. The spatial distribution of the light is compensated using an image of a large white board taken in advance at the position of the target object.

Fig. 1. Measurement system

2.1 RGBD Camera

An RGBD camera is a camera that can capture an RGB image and a depth image simultaneously. As the RGBD camera, we use the Kinect, which was launched in 2014. The Kinect was developed as a game controller based on gesture recognition; it is low cost and a software development kit (SDK) is provided, so it is used in various research areas. The resolution of the RGB image is 1920 × 1080, and that of the depth image is 512 × 424. The Kinect obtains the depth data by ToF.

2.2 Programmable Light Source

It is difficult to control the spectral power distribution of illumination with a conventional light source. We use a programmable light source (Gooch & Housego, Inc., OL490) to control the illumination spectra. Figure 2 shows the programmable light source. The light source can emit light with an arbitrary spectral power distribution. It is composed of a xenon lamp, a grating, a digital micromirror device (DMD) chip, and a liquid light guide. The light beam of the xenon lamp is separated by the grating into its constituent wavelengths, and the intensity of the light at each wavelength is controlled by the DMD chip.

Fig. 2. Programmable light source

3 Spectral Reflectance Estimation Using Achromatic Color Patches

A simple model of surface reflection defines the gray value observed by a camera as the following equation,

$$ g = \int E(\lambda) R(\lambda) S(\lambda) \, d\lambda $$
(1)

where \( E(\lambda) \) is the spectral power distribution of the illumination, \( R(\lambda) \) is the spectral reflectance of the object surface, and \( S(\lambda) \) is the spectral sensitivity of the camera. At a wavelength \( \lambda_{\alpha} \), the observed gray value \( g(\lambda_{\alpha}) \) on the object surface is simply given by the following equation,

$$ g(\lambda_{\alpha}) = E(\lambda_{\alpha}) R(\lambda_{\alpha}) S(\lambda_{\alpha}) $$
(2)

When the reflectance of the standard white is assumed to be 1, the observed gray value of the standard white is given by the following equation,

$$ w(\lambda_{\alpha}) = E(\lambda_{\alpha}) S(\lambda_{\alpha}) $$
(3)

Therefore, \( R(\lambda_{\alpha}) \), the reflectance at wavelength \( \lambda_{\alpha} \), is calculated from the gray values of the object surface \( g(\lambda_{\alpha}) \) and of the standard white \( w(\lambda_{\alpha}) \) under illumination at wavelength \( \lambda_{\alpha} \) by simple division, as in the following equation,

$$ R(\lambda_{\alpha}) = g(\lambda_{\alpha}) / w(\lambda_{\alpha}) $$
(4)

This conventional estimation assumes a linear camera response, and its precision depends on the value of \( w(\lambda_{\alpha}) \).
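As a minimal numerical sketch of Eq. (4), with illustrative gray values rather than measured data, the reflectance at each band is obtained by dividing the object gray value by the white-reference gray value:

```python
import numpy as np

# Gray values of the object and the standard white for four example bands
# (illustrative values; in practice these come from the captured band images).
g = np.array([ 30.0,  55.0,  90.0, 120.0])   # object gray values g(lambda)
w = np.array([110.0, 160.0, 200.0, 230.0])   # white reference gray values w(lambda)

reflectance = g / w    # Eq. (4): R(lambda) = g(lambda) / w(lambda)
```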

In our case, the white balance and contrast of the camera are uncontrollable, and the camera response may be nonlinear. To handle this, we use achromatic color patches as references. The spectral reflectance data of the achromatic patches are measured in advance. The spectral reflectance on the object surface is estimated by the following steps:

1. Acquisition of RGB images of the object together with the achromatic color patches under narrow band illuminations.

2. Detection of the gray values of the achromatic patches in each band image.

3. Estimation of the reflectance at each pixel on the object by piecewise linear interpolation between the reflectances of the achromatic patches whose gray values are just brighter and just darker than the gray value of the pixel in the narrow band image.

The piecewise linear interpolation is done by the following equation,

$$ R(\lambda_{\alpha}) = \frac{g(\lambda_{\alpha}) - g_{d}(\lambda_{\alpha})}{g_{b}(\lambda_{\alpha}) - g_{d}(\lambda_{\alpha})} \left( R_{b}(\lambda_{\alpha}) - R_{d}(\lambda_{\alpha}) \right) + R_{d}(\lambda_{\alpha}) $$
(5)

where \( g(\lambda_{\alpha}) \) is the gray value at the pixel, \( g_{b}(\lambda_{\alpha}) \) and \( g_{d}(\lambda_{\alpha}) \) are the gray values of the brighter and darker achromatic patches, and \( R_{b}(\lambda_{\alpha}) \) and \( R_{d}(\lambda_{\alpha}) \) are the reflectances of the brighter and darker achromatic patches, respectively. The difference between \( g_{b}(\lambda_{\alpha}) \) and \( g_{d}(\lambda_{\alpha}) \) affects the estimation precision.
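A minimal sketch of Eq. (5) in Python is shown below; the patch gray values are illustrative placeholders, and the function assumes the gray values and known reflectances of the achromatic patches have already been extracted from the same band image.

```python
import numpy as np

def estimate_reflectance(g, patch_gray, patch_refl):
    """Estimate the reflectance of a pixel with gray value g from achromatic
    references (Eq. 5). patch_gray / patch_refl are the gray values and
    known reflectances of the achromatic patches in the same band image."""
    order = np.argsort(patch_gray)
    gray = np.asarray(patch_gray, dtype=float)[order]
    refl = np.asarray(patch_refl, dtype=float)[order]

    # Find the darker/brighter patches bracketing g (clamped at the ends).
    i = np.clip(np.searchsorted(gray, g), 1, len(gray) - 1)
    gd, gb = gray[i - 1], gray[i]
    Rd, Rb = refl[i - 1], refl[i]

    # Piecewise linear interpolation between the two references.
    return (g - gd) / (gb - gd) * (Rb - Rd) + Rd

# Illustrative gray values for one band: Bl, n3.5, n5, n6.5, n8, W
patch_gray = [12, 35, 70, 110, 165, 230]
patch_refl = [0.03, 0.09, 0.19, 0.36, 0.58, 0.87]
print(estimate_reflectance(140.0, patch_gray, patch_refl))
```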

4 Experiments

4.1 Spectral Reflectance Estimation by Using Achromatic Color Patches

The camera response under narrow band illumination was low. Therefore, we set the bandwidth to 20 nm and integrated several captured shots for each band. We took 12-band images at 20 nm intervals in the wavelength range of 420–640 nm. The target object was a color chart (X-rite Color Checker CLASSIC) placed 675 mm from the RGBD camera. The illumination distributions in the captured images were compensated using the image of the white board taken in advance.
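A possible form of this compensation is sketched below, assuming a single-band gray image and a white-board image of the same size; normalizing the white-board image to its maximum as a per-pixel gain is our reading of the compensation, not necessarily the exact procedure used.

```python
import numpy as np

def compensate_illumination(band_img, white_board_img, eps=1e-6):
    """Correct the spatial non-uniformity of the narrow band illumination
    using a white-board image taken in advance at the object position.
    Both inputs are single-band gray images of the same size."""
    board = white_board_img.astype(np.float64)
    gain = board.max() / np.maximum(board, eps)   # per-pixel correction factor
    return band_img.astype(np.float64) * gain
```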

The spectral reflectance estimation was done for the six colors blue (B), green (G), red (R), yellow (Y), magenta (M), and cyan (C) in the color chart (Fig. 3). The gray value of each color patch was defined as the average over a 40 × 40 pixel area in the patch. The achromatic color patches used for the estimation are white (W), n8, n6.5, n5, n3.5, and black (Bl), as shown in Fig. 3. In our measurements, the reflectances of these patches were close to 0.87, 0.58, 0.36, 0.19, 0.09, and 0.03 at all wavelengths, respectively. In this experiment, the reflection of the color patches was regarded as Lambertian.

Fig. 3. Measured color chart (Color figure online)

Figure 4 shows the result of spectral reflectance estimation by the proposed method using achromatic color patches. In Fig. 4, solid lines indicate estimated values and dotted lines indicate values measured with a spectrophotometer (Konica Minolta, CM-2600d). For comparison, we also estimated the spectral reflectances of these color patches using only the white patch, as shown in Fig. 5. Table 1 shows the root mean square error (RMSE) of the estimation using the achromatic patches and of the estimation using only the white patch. These results show that the estimation using the achromatic patches is better than the conventional estimation using only the white reference.
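For reference, the RMSE between the estimated and measured reflectances over the 12 wavelength samples can be computed as in the following sketch (variable names are illustrative):

```python
import numpy as np

def rmse(estimated, measured):
    """Root mean square error between estimated and measured reflectances
    over the 12 wavelength samples (420-640 nm, 20 nm steps)."""
    e = np.asarray(estimated, dtype=float)
    m = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((e - m) ** 2)))
```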

Fig. 4. Spectral reflectance estimation using achromatic references (Color figure online)

Fig. 5. Spectral reflectance estimation using only the white reference (Color figure online)

Table 1. Estimation error (RMSE)

4.2 Spectral Images

We obtained images of colored objects together with the color chart under the 12 narrow band illuminations. The scene is shown in Fig. 6 and the 12 narrow band images are shown in Fig. 7. The spectral reflectance at each pixel in the center area of the image was estimated by the proposed method, and spectral images were generated as shown in Fig. 8. Each pixel of the image for a wavelength stores the reflectance at that wavelength as an 8-bit value. The images in the short wavelength range include noise because the camera response in that range is low.

Fig. 6. Target objects (Color figure online)

Fig. 7. RGB images taken under 12 narrow band illuminations (Color figure online)

Fig. 8. Spectral images

Figure 9 shows images reproduced from the spectral images under D65 illumination and under warm white, cool white, and daylight LED illuminations. In the reproduction, the spectral data at each pixel are converted to XYZ tristimulus values and then to sRGB color values. The influence of the noise in the short wavelength range is negligible in the results.
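A simplified sketch of this conversion is shown below; it assumes arrays of the illuminant spectral power distribution and the CIE XYZ color matching functions sampled at the same 12 wavelengths (not provided here), and omits chromatic adaptation.

```python
import numpy as np

# Linear transform from XYZ (D65 white) to linear sRGB
M_XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def reflectance_to_srgb(refl, illum, cmf):
    """Render a reflectance spectrum under an illuminant as an sRGB color.
    refl : (n,) reflectance samples at the measured wavelengths
    illum: (n,) illuminant spectral power distribution at the same wavelengths
    cmf  : (n, 3) CIE XYZ color matching functions at the same wavelengths"""
    k = 1.0 / np.sum(illum * cmf[:, 1])      # normalize so a perfect white gives Y = 1
    xyz = k * np.sum((illum * refl)[:, None] * cmf, axis=0)
    rgb = M_XYZ_TO_SRGB @ xyz                # linear sRGB
    rgb = np.clip(rgb, 0.0, 1.0)
    # sRGB gamma encoding
    rgb = np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
    return np.round(rgb * 255).astype(np.uint8)
```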

Fig. 9. Reproduced color images under various illumination colors (Color figure online)

4.3 Reproduction of Color Images with Depth Data

We also obtained 3D data of the objects in PLY (Polygon File) format from the Kinect. Figure 10 shows the obtained depth data visualized with MeshLab [8]. The PLY format can describe each vertex of the polygons by its 3D (x, y, z) position and RGB data. In our measurements, the resolution of the data in the image plane (x-y plane) was 1.77 mm and the resolution in depth (z-axis) was 1.1 mm at the target position. We replaced the RGB data with those of the reproduced color images in Sect. 4.2. A visualization of one of the results is shown in Fig. 11. The colors reproduced from the spectral images are shown on the 3D object surface.
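As an illustration of this data layout, the following sketch writes vertices with 3D positions and RGB colors as an ASCII PLY file; the function and variable names are hypothetical, not those of our software.

```python
def write_ply(path, points, colors):
    """Write vertices with 3D positions and RGB colors as an ASCII PLY file.
    points : iterable of (x, y, z) floats
    colors : iterable of (r, g, b) 8-bit values from the reproduced color image"""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```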

Fig. 10. 3D data of objects

Fig. 11. 3D visualization of objects with the reproduced color image under LED warm white illumination (Color figure online)

In this experiment, we obtained a high-resolution color image and low-resolution depth data. We fitted the reproduced color image to the low-resolution depth data by manually selecting reference points in each data set. The fitting was therefore not very strict, and the side of the duck toy includes the color of the background color patch in Fig. 11. Precise fitting is left for future work.

5 Conclusions

This paper proposed a method for the acquisition of 3D data and spectral color of an object surface by a system consisting of an RGBD camera and a programmable light source that can emit narrow band light. We used the Kinect as the RGBD camera. The Kinect has auto white balance and auto contrast, and they cannot be fixed through the SDK. Therefore, we estimated the reflectance of the object in each narrow band image by using achromatic color patches as references. In the experiments, we estimated spectral colors from 12-band images at 20 nm intervals in the wavelength range of 420–640 nm by using the reflectances of the achromatic color patches. The results showed that the RMSE of the spectral color estimation for six color patches was 0.035. We also generated spectral images from the 12-band images and reproduced color images under several lighting conditions from the spectral images. Then, we combined the 3D data and the reproduced color image in PLY format. We showed that it is possible to measure the 3D data and spectral color of an object with a system consisting of an RGBD camera and a programmable light source.