
1 Introduction

Projection mapping enables the control of an object's surface appearance without physical painting or finishing. It has been applied to a wide range of fields, from entertainment to product-development simulation. Raskar and Bimber studied appearance control techniques using projector-camera (procam) systems and contributed significantly to the technical development of projection mapping [1]. In particular, Raskar et al. developed shader lamps, which project textures and animations onto white mock-ups and enable the control of surface appearance using multiple projectors, establishing fundamental principles of projection mapping techniques and procam systems [2]. In addition, several researchers have addressed higher-order visual information of surface appearance with advanced projection mapping techniques. For example, Bimber et al. extended the dynamic range of reflective media by superimposing projected images [3], and Amano et al. proposed a method for controlling the glossiness and transparency of object surfaces using a procam system [4].

Recently, projection mapping has been applied to the control of facial appearance. For example, Team WOW developed a dynamic projection mapping system by attaching physical tracking markers to a face [5]. In addition, Bermano et al. developed Makeup Lamps, a markerless dynamic projection mapping system for facial appearance control that uses expression tracking under infrared illumination [6].

Although dynamic projection mapping systems have been actively developed for real-time facial appearance control, no study has addressed projection mapping that reproduces skin appearance spectrally. It is known that the colorimetric values of skin and the colors humans perceive differ. Yoshikawa et al. investigated the relationships between the perceived whiteness and the metric lightness/chroma/hue angle of the facial skin color of Japanese females, and demonstrated that the metric lightness disagrees with the perceived whiteness or brightness within a narrow lightness range [7]. Hamada et al. suggested that the discrimination ability for skin images tends to be higher than that for uniform color stimuli [8]. As these investigations show, colorimetric (photometric) approaches are often insufficient for skin color control, whereas spectral (radiometric) approaches are more accurate and useful for skin appearance control. A spectral skin appearance control technique can manipulate and reproduce the complicated optical characteristics of human skin. In addition, spectral simulation of makeup skin appearance is important for the cosmetic industry; for example, Doi et al. developed a spectral skin simulation method based on the Kubelka-Munk theory [9]. Spectral projection mapping for makeup skin simulation is therefore important for skin and cosmetic studies; however, no such projection mapping has been reported.

In this study, we propose a method to reproduce the appearance of makeup skin by spectral projection mapping, which combines spatial appearance manipulation by projection mapping with spectral appearance manipulation by spectral lighting. We first develop a spectral model for spectral projection mapping. Subsequently, we build a system that realizes the proposed method. Finally, we conduct experiments to validate the proposed system.

This paper is organized as follows: Sect. 2 describes the spectral projection mapping model for controlling the appearance of makeup skin. Sect. 3 presents a system based on the proposed model. Sect. 4 presents the experimental results and discussion. Finally, conclusions and future research are discussed in Sect. 5.

2 Proposed Model

Figure 1 compares the real appearance of makeup skin with the augmented appearance of makeup skin produced by spectral projection mapping. Humans perceive the light reflected from the skin under an environmental lighting condition. The model for reproducing makeup skin by spectral projection mapping is as follows:

Fig. 1. Illustration of real makeup skin and spectral-projection-mapped skin.

$$ E_{env} \left( \lambda \right)S_{make} \left( \lambda \right) = \left( {E_{pro} \left( \lambda \right) + E_{env} \left( \lambda \right)} \right)S_{skin} \left( \lambda \right), $$
(1)

where \( E_{env} \left( \lambda \right) \) is the spectral distribution of the uniform environmental illumination, \( S_{make} \left( \lambda \right) \) is the spectral reflectance of the makeup skin, \( E_{pro} \left( \lambda \right) \) is the spectral distribution of the projected light, and \( S_{skin} \left( \lambda \right) \) is the spectral reflectance of the target non-makeup skin. \( \lambda \) denotes the wavelength, over the visible range from 400 nm to 700 nm. From Eq. (1), the spectral distribution of the projection light source is calculated as follows:

$$ E_{pro} \left( \lambda \right) = \left( {\frac{{S_{make} \left( \lambda \right)}}{{S_{skin} \left( \lambda \right)}} - 1} \right)E_{env} \left( \lambda \right). $$
(2)

The equation above addresses the control of a single local point on the skin surface. In projection mapping, however, the skin must be controlled spatially; the spatially varying projection spectra are calculated as follows:

$$ E_{pro} \left( {x,y,\lambda } \right) = \left( {\frac{{S_{make} \left( \lambda \right)}}{{S_{skin} \left( {x,y,\lambda } \right)}} - 1} \right)E_{env} \left( \lambda \right), $$
(3)

where \( \left( {x,y} \right) \) is the pixel position on the non-makeup skin. As shown in Eq. (3), the spectral reflectance of the non-makeup skin depends on the spatial position. Meanwhile, we assume that the spectral reflectance of the makeup skin does not depend on the spatial position, because the makeup skin appears uniform.
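For concreteness, the following is a minimal NumPy sketch of Eq. (3). The wavelength grid (400–700 nm at a 4 nm pitch, i.e., 76 samples, matching Sect. 3.3), the array names, and the clipping of the result to non-negative values are our own assumptions for illustration; the paper itself does not prescribe an implementation.

```python
import numpy as np

# Visible wavelength grid: 400-700 nm at a 4 nm pitch -> 76 samples
# (the 76-dimensional spectral representation used in Sect. 3.3).
WAVELENGTHS = np.arange(400, 701, 4)

def projection_spectra(S_make, S_skin, E_env):
    """Eq. (3): spatially varying projection spectra.

    S_make : (76,)       spectral reflectance of the reference makeup skin
    S_skin : (X, Y, 76)  spectral reflectance of the target non-makeup skin
    E_env  : (76,)       spectral distribution of the environmental illumination
    returns: (X, Y, 76)  spectral distribution to be projected
    """
    ratio = S_make[None, None, :] / np.clip(S_skin, 1e-6, None)
    E_pro = (ratio - 1.0) * E_env[None, None, :]
    # A projector can only add light, so negative values are clipped to zero
    # (our assumption; this constraint is not discussed in the paper).
    return np.clip(E_pro, 0.0, None)

# Flat dummy data: makeup skin brighter than the bare skin at every wavelength.
E_pro = projection_spectra(np.full(76, 0.5), np.full((8, 8, 76), 0.4), np.ones(76))
print(E_pro.shape, E_pro.mean())   # (8, 8, 76) and roughly 0.25
```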

3 System Overview

Figure 2 shows an overview of the proposed system. Prior to spectral projection mapping, the spectral reflectances of the reference makeup skins were measured at five locations using a spectrodensitometer (KONICA MINOLTA FD-7, measurement aperture 3.5 mm), and the spectral distribution of the environmental light source was measured using a spectroradiometer (KONICA MINOLTA CS2000). These two quantities were obtained in advance. The processing flow of the projection mapping is then as follows:

Fig. 2. Flowchart of the spectral projection mapping system.

  1. The spectral reflectance of a target non-makeup skin is measured using the spectral reflectance estimation system developed by Hirai et al. [10] (see Sect. 3.2).

  2. Based on the measured information, the projection spectra are calculated with the proposed model (see Sect. 2).

  3. The spatially varying spectra for spectral projection mapping are represented by spectral basis functions and spatial weight images; non-negative matrix factorization (NMF) is used to obtain the basis functions (see Sect. 3.3).

  4. The spatial weight images are projected onto the target non-makeup skin together with their spectral basis functions. (A minimal end-to-end sketch of these steps follows this list.)
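The following is a minimal end-to-end sketch of the four steps above in Python/NumPy, using scikit-learn's NMF for step 3. All function names, the dummy skin reflectance, and the image size are hypothetical placeholders; the measurement and projection steps are stubbed because they depend on the hardware described in Sects. 3.1 and 3.2.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stand-ins for the hardware-dependent steps (Sects. 3.1 and 3.2).
def estimate_skin_reflectance(shape=(32, 32, 76)):
    return np.full(shape, 0.4)           # step 1 placeholder: flat 40% reflectance

def project(weight_image, basis_spectrum):
    pass                                 # step 4 placeholder: drive the multi-primary projector

def run_pipeline(S_make, E_env, n_bases=4):
    S_skin = estimate_skin_reflectance()                             # step 1 (Sect. 3.2)
    E_pro = np.clip((S_make / np.clip(S_skin, 1e-6, None) - 1.0)
                    * E_env, 0.0, None)                              # step 2, Eq. (3)
    X, Y, L = E_pro.shape
    nmf = NMF(n_components=n_bases, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(E_pro.reshape(X * Y, L).T)                 # step 3, Eq. (7): bases, (76, I)
    H = nmf.components_                                              # weight images, (I, N)
    for i in range(n_bases):                                         # step 4
        project(H[i].reshape(X, Y), W[:, i])

run_pipeline(S_make=np.full(76, 0.5), E_env=np.ones(76))
```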

3.1 System Configuration

In this study, we developed a spectral projection mapping system using a multi-primary image projector [11]. This projector can reproduce not only wide-gamut images but also spectral images. It has been applied to computer vision problems such as photometric and geometric image calibration and spectral three-dimensional (3D) measurement [10, 12].

The multi-primary image projector consists primarily of a light source component and an image projection component (Fig. 3). The light source component is an OL490 (Optronic Laboratories), which is programmable from a computer and is composed of a xenon lamp, a grating, a DMD chip, and a liquid light guide. Its output covers the wavelength range of 380–780 nm; in this study, the sampling pitch of the projection light source was set to a 4 nm interval. The image projection component is based on the DLP LightCrafter (Texas Instruments), whose original LED-based RGB primaries were replaced with the spectral light source described above. A DMD chip with a resolution of 608 × 684 pixels was used for image projection. In summary, the grating and the DMD in the light source component produce the spectra, whereas the DMD in the image projection component reproduces a monochromatic image for each light source spectrum. The two components are controlled by a computer to project image sequences synchronously; a trigger signal is sent from the image projection component to the light source.

Fig. 3. Configuration and projection principle of the multi-primary image projector.

3.2 Spectral Reflectance Estimation System

To measure the spectral reflectance of the target non-makeup skin, we used the spectral reflectance estimation system of [10]. This system measures spectral reflectances using the multi-primary image projector and a high-speed monochrome camera (Fig. 4). The projector and camera are synchronized by a trigger signal, and the projector projects nine illumination spectra derived from the orthonormal basis functions (described below) to estimate the spectral reflectance.

Fig. 4. Overview of the spectral measurement system.

The estimation algorithm is as follows. A set of orthonormal basis functions \( \psi_{m} \left( \lambda \right) \) is used to represent the surface spectral reflectance [13]. The surface spectral reflectance \( S\left( \lambda \right) \) can be expressed as follows:

$$ S\left( \lambda \right) = \sum\nolimits_{m = 1}^{M} {w_{m} \psi_{m} \left( \lambda \right)\,\left( {m = 1, 2, \cdots , M} \right),} $$
(4)

where M is the number of orthonormal basis functions and \( w_{m} \) are the corresponding weights. In this study, we selected five spectral basis functions, i.e., M = 5. The basis functions were computed by principal component analysis (PCA) of a spectral reflectance database containing 507 samples. If we irradiate an object surface with the spectrum \( E_{m} \left( \lambda \right) \) obtained by dividing an orthonormal basis function by the camera sensitivity \( R\left( \lambda \right) \), the camera output \( O_{m} \) can be modeled as follows:

$$ \begin{aligned} O_{m} & = \int {E_{m} \left( \lambda \right)R\left( \lambda \right)S\left( \lambda \right)d\lambda } \\ & = \int {\left( {\psi_{m} \left( \lambda \right)/R\left( \lambda \right)} \right)R\left( \lambda \right)\sum\nolimits_{m' = 1}^{M} {w_{m'} \psi_{m'} \left( \lambda \right)d\lambda } } \\ & = w_{m} . \\ \end{aligned} $$
(5)

As shown in Eq. (5), the weights \( w_{m} \) are obtained directly from the camera outputs produced by projecting the orthonormal basis functions. In practice, however, an object surface cannot be irradiated directly with the PCA basis functions, because they contain negative values. We therefore decompose each orthonormal basis function into a positive and a negative part and use the absolute values of the negative parts as additional projected illuminations. Finally, the surface spectral reflectance is estimated using the following relations:

$$ \psi_{m} \left( \lambda \right) = \psi_{m}^{ + } \left( \lambda \right) - \psi_{m}^{ - } \left( \lambda \right),\,w_{m} = O_{m}^{ + } - O_{m}^{ - } . $$
(6)

Figure 5 shows the projected spectra designed for estimating the spectral reflectance, i.e., the waveforms of the nine functions obtained by splitting the orthonormal bases, inverting the negative parts, and dividing by the camera spectral sensitivity \( R\left( \lambda \right) \). The solid lines are the positive parts of the orthonormal bases divided by the camera sensitivity; the dashed lines are the inverted negative parts divided by the camera sensitivity. The second to fifth principal components contain negative values and therefore each require two illuminations, so nine waveforms in total are projected for spectral reflectance estimation.

Fig. 5. Projected waveforms of the orthonormal basis functions. (Solid lines: positive parts of the principal components divided by the camera sensitivity. Dashed lines: inverted negative parts of the principal components divided by the camera sensitivity.)
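A minimal NumPy sketch of this estimation procedure, Eqs. (4)–(6): orthonormal spectral bases are computed from a reflectance database, split into positive and negative parts, divided by the camera sensitivity to form the projected illuminations, and the weights are then recovered directly from simulated camera outputs. The database, the camera sensitivity, and the test reflectance are all placeholder values, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 4)                         # 76 wavelength samples
R = 0.5 + 0.4 * np.exp(-((wl - 550) / 120) ** 2)    # hypothetical camera sensitivity

# Placeholder reflectance database standing in for the 507-sample set; plain SVD
# (without mean removal) is used here as a simple stand-in for PCA.
database = rng.uniform(0.05, 0.9, size=(507, wl.size))
_, _, Vt = np.linalg.svd(database, full_matrices=False)
psi = Vt[:5]                                        # M = 5 orthonormal bases, shape (5, 76)

# Eq. (6): split each basis into its positive and negative parts.
psi_pos, psi_neg = np.clip(psi, 0, None), np.clip(-psi, 0, None)

# Projected illuminations: bases divided by the camera sensitivity (up to 9 non-zero waveforms).
E_pos, E_neg = psi_pos / R, psi_neg / R

# Eq. (5): simulated camera outputs for a test reflectance S (integral approximated by a sum).
S = 0.3 + 0.3 * np.sin(wl / 60.0)
O_pos = (E_pos * R * S).sum(axis=1)
O_neg = (E_neg * R * S).sum(axis=1)
w = O_pos - O_neg                                   # weights of Eq. (6)

S_hat = w @ psi                                     # Eq. (4): reconstructed reflectance
print(np.abs(S_hat - S).max())                      # small only if S lies near the span of the bases
```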

Figure 6 shows examples of the spectral reflectance estimation accuracy of this system, evaluated with an X-Rite Mini ColorChecker. The average root mean square error (RMSE) and color difference \( \Delta E_{ab}^{*} \) over the 24 patches are 0.063 and 7.14, respectively.

Fig. 6. Estimated spectral reflectances: (left) reflectance of ColorChecker No. 1 (RMSE = 0.028, \( \Delta E_{ab}^{*} \) = 8.83), and (right) reflectance of ColorChecker No. 2 (RMSE = 0.055, \( \Delta E_{ab}^{*} \) = 11.14).

3.3 Spectral Basis Functions and Spatial Weight Images Using NMF

Let N denote the number of projections required by the proposed system. If every spatial point on the skin surface must be controlled spectrally, a brute-force spectral projection can be applied, in which case the number of projections becomes N = X × Y, where X and Y are the numbers of horizontal and vertical pixels, respectively. This number is far too large for practical spectral projection mapping. We therefore apply the NMF technique to reduce the number of projections.

NMF is an algorithm that decomposes a non-negative matrix into two non-negative matrices and is often used in image processing and computer vision. Lee and Seung extracted facial parts from facial images using NMF [14]. Akaishi et al. proposed a method for separating reflection components in a single image based on the dichromatic reflection model, using a modified sparse NMF [15]. In this study, the projection spectra of Eq. (3) are decomposed by NMF as follows:

$$ \varvec{E} \approx \varvec{WH}, $$
(7)

where E is the projection spectra matrix of size 76 × N (N = X × Y pixels; 76 spectral samples covering 400 nm to 700 nm at a 4 nm pitch), W is a basis function matrix of size 76 × I, where I is the number of spectral basis functions, and H is a weight image matrix of size I × N. Each row of H corresponds to one grayscale weight image.

The projection spectra can thus be represented by the basis functions above. In other words, as Eq. (3) shows, only the basis functions need to be calculated from the spectral reflectance of the non-makeup skin. In this study, W is determined from the measured spectral reflectances of the target non-makeup skin. Finally, Eq. (3) is rewritten using the basis functions as follows:

$$ E_{pro} \left( {x,y,\lambda } \right) = \sum\nolimits_{i = 1}^{I} {E_{i} \left( {x,y,\lambda } \right)} = \sum\nolimits_{i = 1}^{I} {\left( {\frac{{S_{make} \left( \lambda \right)}}{{W_{i} \left( {x,y} \right)S_{i} \left( \lambda \right)}} - 1} \right)} E_{env} \left( \lambda \right), $$
(8)

where \( S_{i} \) is the i-th basis function and \( W_{i} \) is the i-th weight image corresponding to the i-th basis function.
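As an illustration of Eq. (7), the following sketch factorizes a synthetic projection spectra matrix with scikit-learn's NMF and maps the result back to the notation above (W: 76 × I basis functions, H: I × N weight images). The image size, the value I = 4, and the random input spectra are assumptions for demonstration only.

```python
import numpy as np
from sklearn.decomposition import NMF

X_px, Y_px, L = 48, 48, 76   # assumed image size; 76 samples (400-700 nm, 4 nm pitch)
I = 4                        # number of spectral bases (four are used in the experiment)

# Synthetic stand-in for the projection spectra of Eq. (3), arranged as in Eq. (7):
# one column per pixel, one row per wavelength sample.
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 1.0, size=(L, X_px * Y_px))

nmf = NMF(n_components=I, init="nndsvda", max_iter=1000)
W = nmf.fit_transform(E)     # basis function matrix W, shape (76, I)
H = nmf.components_          # weight image matrix H, shape (I, N); each row is one image

weight_images = H.reshape(I, X_px, Y_px)            # grayscale weight images to project
rel_err = np.linalg.norm(E - W @ H) / np.linalg.norm(E)
print(f"relative reconstruction error of Eq. (7): {rel_err:.3f}")
```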

4 Validation Experiment

In the validation experiment, the feasibility of the proposed method was verified by comparing the reference makeup skin with the makeup skin augmented by spectral projection mapping.

4.1 Experimental Setups

Figure 7 shows the experimental conditions. The reference makeup skins were produced with powder foundations (CEZANNE Cream-Beige and Dark-Ocher) applied to the back of the right hand. The experimental environment was illuminated with the spot lighting of a xenon lamp. Figure 8 shows the spectral reflectance of the target non-makeup skin measured with the reflectance estimation method of Sect. 3.2. Figure 9 shows the four spectral basis functions obtained by applying NMF to the measured reflectances in Fig. 8, Fig. 10 shows the basis functions for reproducing the augmented makeup skin spectra, and Fig. 11 shows close-ups of the projected weight images corresponding to the projection basis spectra.

Fig. 7. Experimental conditions.

Fig. 8. Spectral reflectance of the measured non-makeup skin. The white line is the average of the measured spectral reflectances.

Fig. 9. Spectral basis functions obtained by NMF from the measured skin reflectances in Fig. 8.

Fig. 10. Spectral basis functions for reproducing augmented makeup skin based on NMF: (a) spectral basis functions for Cream-Beige, (b) spectral basis functions for Dark-Ocher.

Fig. 11. Close-ups of the projected weight images: (a) Cream-Beige, (b) Dark-Ocher. Each close-up corresponds to one of the spectral basis functions in Fig. 10 (top left: Base 1, top right: Base 2, bottom left: Base 3, bottom right: Base 4).

4.2 Results and Discussion

Figure 12 shows the calculated and measured projection spectra of Eq. (8), averaged spatially. Figure 13 shows the result of multiplying the spectra of Fig. 12 by the average spectral reflectance of the target non-makeup skin (Fig. 8). As shown in Figs. 12 and 13, the projected spectra and the augmented reflectance were reproduced accurately. By projecting the spatial weight images with the measured spectral basis functions onto the target non-makeup skin, an augmented makeup skin appearance similar to the reference makeup skin can be reproduced.

Fig. 12. Simulation result of the weight images superimpose-projected with the spectral basis functions (blue line: calculated, red line: measured). (Color figure online)

Fig. 13. Simulation result of the weight images with the spectral basis functions superimpose-projected onto the non-makeup skin reflectance of Fig. 8 (blue line: calculated, red line: measured). (Color figure online)

Figure 14 shows the spectral reflectances of the reference makeup skin and the augmented skin; in this case, the spectra were measured on the actual target skin during spectral projection mapping. The RMSEs for the Cream-Beige and Dark-Ocher makeup skins are 1.79 × 10−4 and 2.22 × 10−4, the GFC (goodness-of-fit coefficient) values [16] are 0.9944 and 0.9940, and the \( \Delta E_{ab}^{*} \) values are 7.97 and 9.83, respectively. The spectral shapes of the reference and augmented reflectances are similar; however, the augmented makeup skin is darker than the reference skin. Two primary factors limit the accuracy. The first is misalignment: the accuracy deteriorates because the spatial positions measured by the spectral reflectance estimation system do not coincide exactly with the positions addressed by the spectral projection mapping. Because the proposed projection spectra are controlled pixel by pixel, it is difficult to project the weight images onto precisely the intended positions on the target skin. Furthermore, the 3D shape of the target hand may differ between measurement and projection, and the present system does not compensate for 3D shapes. The second factor is that the proposed model does not fully consider the optical properties of the skin. Actual makeup skin forms a two-layer structure of foundation and skin, and the internally scattered light and the surface-reflected light within the two layers produce spectral effects such as sub-surface scattering, whereas the current spectral projection mapping considers only surface-reflected light.

Fig. 14. Spectral distributions of the reference makeup skin (blue line) and the augmented skin (red line); the right plot of each pair is normalized by its maximum value: (a) spectral distribution for Cream-Beige (RMSE = 1.79 × 10−4, GFC = 0.9944, \( \Delta E_{ab}^{*} \) = 7.97), and (b) spectral distribution for Dark-Ocher (RMSE = 2.22 × 10−4, GFC = 0.9940, \( \Delta E_{ab}^{*} \) = 9.83). (Color figure online)
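For reference, the following is a small sketch of the two spectral metrics reported above, RMSE and the goodness-of-fit coefficient (GFC) [16], applied to dummy spectra; the CIELAB color difference \( \Delta E_{ab}^{*} \) additionally requires an illuminant and color-matching functions and is omitted here. Note that the GFC is insensitive to a uniform scaling of one spectrum, which is consistent with the observation that the augmented skin matches the spectral shape of the reference while appearing darker.

```python
import numpy as np

def rmse(s_ref, s_test):
    """Root mean square error between two sampled spectra."""
    return float(np.sqrt(np.mean((s_ref - s_test) ** 2)))

def gfc(s_ref, s_test):
    """Goodness-of-fit coefficient [16]: cosine similarity of the two spectra."""
    return float(np.abs(np.dot(s_ref, s_test))
                 / (np.linalg.norm(s_ref) * np.linalg.norm(s_test)))

# Dummy spectra on a 400-700 nm, 4 nm grid standing in for the reference makeup
# skin and the augmented (projection-mapped) skin, which is slightly darker.
wl = np.arange(400, 701, 4)
s_ref = 0.35 + 0.25 * (wl - 400) / 300
s_aug = 0.95 * s_ref
print(rmse(s_ref, s_aug), gfc(s_ref, s_aug))   # GFC = 1.0 for a pure scaling
```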

5 Conclusion

We herein proposed a method to reproduce the appearance of makeup skin by applying spectral projection mapping to non-makeup skin. We first modeled the spectral projection mapping to calculate the projected spectra, and then developed a spectral projection mapping system using a multi-primary image projector. To realize the spectral projection mapping, we applied NMF to calculate the spectral basis functions and spatial weight images. The experimental results confirmed the feasibility of the proposed method, which exhibited good accuracy.

In our study, the spectral shape of the makeup skin reflectance could be reproduced accurately; however, the accuracy is still insufficient for practical skin appearance reproduction. One remaining problem is alignment: a spatial alignment technique is necessary to address the spatially varying reflectance of the skin surface. In addition, the sub-surface scattering characteristics of skin are important for realistic skin appearance reproduction, and handling such optical characteristics in the spectral projection mapping remains future work.