Automatic face authentication with self compensation

https://doi.org/10.1016/j.imavis.2007.10.002

Abstract

This paper presents a novel method for automatic face authentication that accounts for the variation of faces due to aging. A bilateral symmetrical plane is used to weight the correspondences between the scanned model and the database model during verification. This plane is determined by the nose tip and two canthus features, whose positions are located by a coupled 2D and 3D feature extraction method. The central profile on the bilateral symmetrical plane is the foundation of the face recognition. A weighting function selects the reliable points used as correspondences by the optimized iterative closest point method. A discrepancy value is then evaluated for authentication and for compensation between different models. We have applied this method to the practical authentication of human faces; the results show that it performs well in both self authentication and mutual authentication.

Introduction

Human face authentication provides an alternative to conventional biometric authentication, which verifies fingerprints, voiceprints, or face images in applications such as access security and intelligent robots. Most previous studies of face recognition exploited 2D images, and the use of entire 2.5D range images has also been reported [14]. Compared to 3D data, 2D images are suitable only in constrained environments and poses, because their information can be significantly affected by changes in illumination and pose. Although good performance of 2D face recognition has been achieved [15], it remains difficult to overcome the influence of head poses and expressions. In contrast, 3D face recognition can be applied under various illuminations and head poses, because the curvatures of 3D models are invariant to orientation. Curvature analysis has been a critical tool for 3D feature extraction and pattern demarcation [4], [5], [17], [18]. However, automatic 3D human face recognition is still a challenging task: it requires complex computation and comparisons among a great number of images. In typical 3D face recognition, range images are used to record the texture contours and depth values. When a frontal face is captured, the nose tip is typically assumed to be the closest point to the camera. In this case, principal component analysis (PCA) performs well in 3D face identification when position and orientation are fixed [7]. The database of face images is transformed into a finite principal set, i.e. eigenvectors. After training on the data set, test images with various expressions can be accommodated. It should be noted that PCA does not work well when the data contain excessive noise. In contrast to PCA, eigen-decomposition can be employed to extract the intrinsic geometric features of facial surfaces with geometric invariants.
Eigen-decomposition can be applied to four different data sets: range images, range images with textures, canonical images, and flattened textures. The eigenform method, which applies eigen-decomposition to flattened textures and canonical images, has successfully distinguished twins [3].
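The PCA-based identification step described above can be sketched in a few lines. This is a generic eigenface-style sketch, not the cited authors' implementation; the data layout (one flattened range image per row) and the number of retained eigenvectors are assumptions for illustration.

```python
import numpy as np

def train_pca(range_images, n_components=20):
    """Fit PCA to flattened range images (one face per row).

    Returns the mean face and the leading eigenvectors
    (the finite principal set) of the training data.
    """
    X = np.asarray(range_images, dtype=float)
    mean = X.mean(axis=0)
    centered = X - mean
    # SVD of the centered data: rows of vt are eigenvectors of the covariance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project a flattened range image into the eigenspace."""
    return components @ (face - mean)

def identify(probe, gallery_coeffs, mean, components):
    """Nearest-neighbour identification on the projection coefficients:
    return the index of the gallery face closest to the probe."""
    p = project(probe, mean, components)
    dists = np.linalg.norm(gallery_coeffs - p, axis=1)
    return int(np.argmin(dists))
```

As the text notes, this scheme presumes comparable position and orientation of the faces, and its accuracy degrades when the range data are noisy.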

Biomorphic symmetry is a crucial property for automatic face recognition and authentication. Zhang et al. [16] proposed a bilateral symmetry analysis for face authentication, introducing a MarkSkirt operator to determine the symmetrical plane of each facial surface. The resulting symmetry profiles of different faces are then used for similarity comparisons. Their authentication function consists of three weighted differences of the distances between two compared 3D models.

The iterative closest point (ICP) algorithm is a widely used method for model registration [1]. The singular value decomposition (SVD) within ICP determines the transformation that minimizes the distance between the correspondences of two related models, and ICP is thus also a useful method for estimating the difference between two models. However, ICP can converge to a local minimum when estimating a rigid registration, and non-rigid registration with the standard ICP approach requires an additional warping function. Grid-resampled control points have been used for distance matching in each hierarchical step [10], [11]. The fine alignment with hybrid ICP uses two steps to estimate one transformation matrix in a single iteration. The shape index, a function of the principal curvatures, represents a normalized scale value of the curvature. With the employment of convolution [9], the identification of specified features can be facilitated by 2D shape index images derived from 3D curvatures. This coupled method, which incorporates both 2D and 3D characteristics, is more feasible than either a pure 3D or a pure 2D method for automatic feature extraction. However, the accuracy of the feature positions is compromised after convolution. To solve this issue, Song et al. adopted an error-compensated singular value decomposition (ECSVD) method to estimate the error and compensate for it after each image rotation [13]. Because ECSVD tunes the head pose from an arbitrary view to the frontal view, reliable range images for the PCA method can be easily obtained.
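The SVD step inside one ICP iteration — computing the rigid transform that best aligns matched point pairs — can be illustrated as follows. This is a generic Kabsch-style sketch under the usual least-squares formulation, not the authors' weighted variant.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src to dst,
    given point correspondences (the SVD step of one ICP iteration)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this step with re-estimating the closest-point correspondences; as the text notes, the iteration can still settle in a local minimum if the initial (coarse) alignment is poor.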

The 2.5D range images have the advantage of maintaining 3D points and 2D images simultaneously, and their curvatures are simpler and more efficient to calculate than those of full 3D data [6]. The comparison among serial profiles, presented by Beumier and Acheroy [2], is used for 3D authentication from striped images. In our method, the 2D images are also used to acquire the positions of the eyeballs. The paired features on the nose bridge and the nose tip determine the bilateral symmetrical plane [8], and our coarse alignment for ICP is based on this plane. We use frontal range images for the human face database and implement the fine alignment with a modified ICP. A threshold value is designed to judge the result of this fine alignment: once the tested face has a weighted distance less than the threshold, the person is successfully authenticated. The 2.5D range images also allow two related models to be blended with tunable depth values. Generally speaking, a successfully authenticated model is itself another kind of database model and represents the person in the current period. In this paper, we provide a linear blending function to compensate the database models, ensuring that the compensated database model is updated to stay close to the current state of the face. Our experimental results show that the discrepancy between two different persons remains obvious after training.
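The threshold test above reduces to comparing a weighted residual of the fine alignment against a fixed bound. A minimal sketch, in which the point weights and the threshold value are hypothetical placeholders rather than values from the paper:

```python
import numpy as np

def weighted_distance(scanned, database, weights):
    """Weighted mean point-to-point distance between corresponding
    points of the aligned scanned and database models."""
    d = np.linalg.norm(np.asarray(scanned, float)
                       - np.asarray(database, float), axis=1)
    w = np.asarray(weights, float)
    return float((w * d).sum() / w.sum())

def authenticate(scanned, database, weights, threshold):
    """Accept the identity claim when the weighted distance
    of the aligned models falls below the threshold."""
    return weighted_distance(scanned, database, weights) < threshold
```

In the method described, the weights would come from the weighting function anchored on the bilateral symmetrical plane, and the threshold from the empirically determined acceptable range.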

Section snippets

Automatic face authentication

In our application, a portable 3D laser scanner is used for the enrollment and verification of human faces. The scanner consists of an infrared device and a digital camera. The infrared device acquires the depth values of the range images by sensing the reflection of a ribbon-shaped laser beam, and the digital camera captures the instant scene and stores it as a high-resolution image. Because the geometrical position between the digital camera and the infrared device is fixed, the …

Self compensation

The enrollment of each new face must be completed before verification can be performed. The face models stored in the database represent only the state at the instant of enrollment; an invariant database model will therefore not match a scanned model of a face that has aged over a long period of time. Due to aging, fattening, and thinning, the database model should be updated to adapt to the varying scanned models of the same person. In this paper, a linear blending function which is …
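The self-compensation step can be sketched as a convex combination of the stored model and the newly verified scan. The blending factor `alpha` here is a hypothetical parameter for illustration, not a value taken from the paper.

```python
import numpy as np

def compensate(database_model, scanned_model, alpha=0.3):
    """Linearly blend the verified scan into the stored model so the
    database tracks gradual changes (aging, fattening, thinning).

    alpha = 0 keeps the old model unchanged; alpha = 1 replaces it.
    """
    db = np.asarray(database_model, float)
    sc = np.asarray(scanned_model, float)
    return (1.0 - alpha) * db + alpha * sc
```

Because the blend is applied only after a successful verification, a small `alpha` lets the database model drift toward the person's current state without being corrupted by a single poor scan.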

Result and discussion

A method for face authentication is introduced. This method can compensate the database model after each successful verification. We have tested 63 adult faces with various poses and expressions. A frontal pose with a formal expression was acquired from each person for enrollment. The poses of the persons during verification are flexible, but both inner canthi must be visible to the camera; head poses with slants or tilts are acceptable. In our method, the facial expression is recommended not to be too …

Conclusion

We have successfully implemented our feature extraction method in a face authentication system. The method couples 3D and 2D features and is able to retrieve the canthus features. The bilateral symmetrical plane for coarse alignment was also constructed. The weighted point-to-point ICP was used to determine the optimal transformation matrix. The experimental results verified our method in both self authentication and mutual authentication. The range of the available threshold has …

Acknowledgements

The authors thank Mr. Wen-Chao Chen and Mr. Wei-Yih Ho for establishing scanning devices and systemic experiments. We also thank Mr. Ming-Hui Lin for providing experimental devices.

References (18)

  • C. Beumier et al., Automatic 3D face authentication, Image and Vision Computing (2000)
  • Y. Wang et al., Robust face recognition from 2D and 3D images using structural Hausdorff distance, Image and Vision Computing (2006)
  • W. Yu et al., Face recognition using discriminant locality preserving projections, Image and Vision Computing (2006)
  • P. Besl et al., A method for registration of 3D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence (1992)
  • A.M. Bronstein, M.M. Bronstein, R. Kimmel, Expression-invariant 3D face recognition, in: International Conference on...
  • K.I. Chang, K.W. Bowyer, P.J. Flynn, Face recognition using 2D and 3D facial data, Workshop in Multimodal User...
  • G.G. Gordon, Face recognition based on depth and curvature features, in: Proceedings of the IEEE Computer Society...
  • B. Hamann, Curvature approximation for triangulated surfaces, Computing (1993)
  • C. Hesher, A. Srivastava, G. Erlebacher, A novel technique for face recognition using range imaging, in: Proceedings of...
There are more references available in the full text version of this article.
