Short Communication
Face recognition using message passing based clustering method

https://doi.org/10.1016/j.jvcir.2009.09.002

Abstract

Traditional subspace analysis methods are inefficient and prone to noise because they compare the test image to all training images, especially when the number of training images is large. To address this problem, we propose a fast face recognition (FR) technique called APLDA, which combines a novel clustering method, affinity propagation (AP), with linear discriminant analysis (LDA). By applying AP to the reduced features derived from LDA, a representative face image can be obtained for each subject. Thus, APLDA uses only the representative images, rather than all training images, for identification. Consequently, APLDA is much more computationally efficient than Fisherface. Also, unlike Fisherface, which uses a pattern classifier for identification, APLDA performs identification by applying AP once again to cluster the test image with one of the representative images. Experimental results also indicate that APLDA outperforms Fisherface in terms of recognition rate.

Introduction

Face recognition (FR) [1] has received extensive attention in recent years, due to its wide applications in homeland security, surveillance systems and access control. Many techniques have been developed for the FR task in the past decades. In this work, we focus only on subspace analysis methods. In this category, there are two representative methods: principal component analysis (PCA) [2] and linear discriminant analysis (LDA) [3]. Many other methods are extensions or modifications of these two. The methods based on PCA include 2DPCA [4], independent component analysis (ICA) [5], nonlinear PCA (NLPCA) [6], IPCA-ICA [7] and kernel PCA (KPCA) [8]. Other methods are based on LDA, such as kernel-based Fisher discriminant analysis (KFDA) [9] and 2DLDA [10]. Besides directly processing image appearance, subspace analysis methods can also be combined with Gabor features [11] to derive Gabor-based kernel PCA [12], independent Gabor features (IGFs) [13] and the Gabor-feature Fisher classifier (GFC) [14].

Most subspace analysis methods represent a face as a linear combination of low-rank basis images [15]. They work within the same framework. First, the reduced features of the training images are obtained using a dimensionality reduction method. Secondly, the test image is converted into its reduced feature. Finally, the test image is identified by comparing its reduced feature to those of all training images, using a pattern classifier such as nearest-neighbor, Bayesian, or Support Vector Machine [16]. Obviously, using all training images for identification is time-consuming, especially when the training set (the dataset containing all training images) is large; the computational time for identifying a test image under such a scheme increases linearly with the size of the training set. Moreover, using all training images also affects the recognition rate, since the training set inevitably contains some noise (in a general sense: anything that degrades image quality, such as capture-device artifacts or poor lighting). For example, if the test image best matches a particular training image, it is assigned to that image's class; but if that training image is a noisy sample, the identification is incorrect, which ultimately lowers the recognition rate. To speed up identification and avoid the influence of noise, a good solution is to use far fewer, noise-free representative images for the identification task. Accordingly, we propose to combine a newly devised clustering method called affinity propagation (AP) [17] with LDA to derive APLDA for this task. The APLDA method first projects the training face images into the subspace to obtain their reduced features, then uses AP to cluster these features and detect a representative face image for each subject; the reduced feature of the test image is obtained by projecting it into the same subspace. Finally, the test image is identified by using AP once again, i.e., by using AP to find which representative face image the test image belongs to. Since only the representative face images are used, APLDA is much more computationally efficient than Fisherface, which uses all training face images for recognition. Furthermore, such an identification scheme is not affected by noise, as the representative face images are noise-free.
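For concreteness, the conventional pipeline described above can be sketched as follows. This is a minimal illustration only, assuming flattened grayscale face images and using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the LDA step; the function names train_subspace and identify_nn are illustrative and not taken from the paper.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_subspace(train_images, train_labels):
        # Flatten the images and learn an LDA subspace; the conventional
        # scheme keeps the reduced feature of every training image.
        X = train_images.reshape(len(train_images), -1).astype(float)
        lda = LinearDiscriminantAnalysis().fit(X, train_labels)
        return lda, lda.transform(X)

    def identify_nn(lda, train_features, train_labels, test_image):
        # Nearest-neighbor matching against ALL training features; the cost
        # of this step grows linearly with the size of the training set.
        y = lda.transform(test_image.reshape(1, -1).astype(float))
        dists = np.linalg.norm(train_features - y, axis=1)
        return np.asarray(train_labels)[int(np.argmin(dists))]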

The rest of this paper is organized as follows. In the next section, we review the AP clustering method and LDA. In Section 3, we explain the reasons for combining AP with LDA. In Section 4, we detail our APLDA and theoretically compare its efficiency with that of LDA. Section 5 provides the experimental results and discussion. The last section concludes this paper.

Section snippets

AP

We describe AP only briefly; more details can be found in Ref. [17]. AP is a simple yet very efficient method for clustering data. It can not only cluster data points into different classes, but also detect a representative sample (exemplar) for each class. It initially considers all data points simultaneously as potential exemplars, regards each data point as a node in a network, and then recursively transmits real-valued messages along the edges of the network
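The message-passing updates can be summarized by the following minimal sketch, a vectorized NumPy rendering of the responsibility and availability rules from Ref. [17]; the damping factor and the preference values placed on the diagonal of the similarity matrix are tuning choices of this illustration, not values prescribed by this paper.

    import numpy as np

    def affinity_propagation(S, max_iter=200, damping=0.5):
        # S is an n x n similarity matrix; its diagonal holds each point's
        # "preference" to serve as an exemplar (e.g., the median similarity).
        n = S.shape[0]
        R = np.zeros((n, n))  # responsibility: evidence that k should be i's exemplar
        A = np.zeros((n, n))  # availability: evidence that i should choose k as exemplar
        for _ in range(max_iter):
            # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
            AS = A + S
            idx = np.argmax(AS, axis=1)
            first = AS[np.arange(n), idx]
            AS[np.arange(n), idx] = -np.inf
            second = AS.max(axis=1)
            R_new = S - first[:, None]
            R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
            R = damping * R + (1 - damping) * R_new
            # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            # a(k,k) <- sum_{i' != k} max(0, r(i',k))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            A_new = Rp.sum(axis=0)[None, :] - Rp
            diag = np.diag(A_new).copy()
            A_new = np.minimum(A_new, 0)
            np.fill_diagonal(A_new, diag)
            A = damping * A + (1 - damping) * A_new
        exemplars = np.where(np.diag(A + R) > 0)[0]    # points that elect themselves
        if len(exemplars) == 0:                        # poor preference choice: keep best candidate
            exemplars = np.array([int(np.argmax(np.diag(A + R)))])
        labels = np.argmax(S[:, exemplars], axis=1)    # assign every point to its best exemplar
        labels[exemplars] = np.arange(len(exemplars))  # exemplars represent themselves
        return exemplars, labels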

The reasons for combining AP with LDA

A comparison between AP and other clustering methods can be found in Ref. [17] and is beyond the scope of this paper. In this section, we explain only the reasons for combining AP with LDA, as follows.

The reason behind this combination is twofold. On the one hand, LDA achieves high separability between different patterns and encodes discriminating information in a linearly separable space. An outstanding feature of LDA is its capability of projecting away variations in lighting and facial

Outline of APLDA

Now, let us present the detailed implementation of APLDA. Suppose there is a face dataset of N face images belonging to c different subjects.

  • (1)

    Select n (n > 1) face images per person (hence n × c in total) to form the training set and apply LDA for dimensionality reduction; this yields a transform matrix W and the corresponding n × c low-dimensional features (a minimal code sketch of these steps follows this list).

  • (2)

    Use AP to cluster these n × c features into j different classes and thus obtain j representative features, one for each person. Note that j may not equal c
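A minimal sketch of steps (1) and (2) is given below, assuming flattened grayscale training images and using scikit-learn's LinearDiscriminantAnalysis and AffinityPropagation as stand-ins for the LDA and AP components; the function name train_aplda and the default AP parameters are illustrative choices, and the preference/damping settings control how many classes j actually emerge.

    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_aplda(train_images, train_labels):
        # Step (1): LDA dimensionality reduction of the n*c training images.
        X = train_images.reshape(len(train_images), -1).astype(float)
        lda = LinearDiscriminantAnalysis().fit(X, train_labels)
        feats = lda.transform(X)
        # Step (2): AP clustering of the reduced features; the cluster centers
        # are the j representative (exemplar) features, one per detected class.
        ap = AffinityPropagation(random_state=0).fit(feats)
        idx = ap.cluster_centers_indices_
        return lda, feats[idx], np.asarray(train_labels)[idx]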

Experiments

In this section, three sets of experiments are carried out to demonstrate the effectiveness of the proposed APLDA and to compare it with Fisherface in terms of recognition rate, under conditions where the number of training images per person is varied. In the test stage, Fisherface uses the nearest-neighbor classifier (NNC) to identify a test image, while APLDA identifies the test image by using AP to find which representative face image it belongs to.
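As a hedged illustration of the APLDA test stage, the sketch below follows one plausible reading of the procedure: AP is re-run on the representative features together with the test feature, and the test image takes the label of the representative that ends up in its cluster. The fallback to nearest-representative matching is an added safeguard of this illustration, not part of the paper.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def identify_aplda(lda, rep_features, rep_labels, test_image):
        # Project the test image into the LDA subspace, then run AP once more
        # on the representative features plus the test feature.
        y = lda.transform(test_image.reshape(1, -1).astype(float))
        ap = AffinityPropagation(random_state=0).fit(np.vstack([rep_features, y]))
        same = np.where(ap.labels_[:-1] == ap.labels_[-1])[0]
        if len(same) != 1:  # AP merged or isolated the test point: fall back
            same = [int(np.argmin(np.linalg.norm(rep_features - y, axis=1)))]
        return np.asarray(rep_labels)[same[0]]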

Three benchmark face datasets were

Discussions

Three sets of experiments have been conducted. Comparative experimental results show that:

  • (1)

    APLDA gives an acceptable face recognition rate in comparison with Fisherface.

  • (2)

    As the number of training images per person increases, the recognition rate of APLDA gradually improves. This is because the more images per person are used for training, the better the representative face images that can be detected, and hence the better the results that can be achieved. Thus we can conclude that APLDA can detect a good

Conclusion and future work

A novel method, APLDA, has been proposed for face recognition. Theoretical analysis shows that APLDA is much more efficient than Fisherface. Experimental results indicate that APLDA outperforms Fisherface in terms of recognition rate. The main contributions of this paper are as follows: (1) the FR task is approached in a new way, i.e., using a novel clustering method called AP. AP is used in both the training stage and the test stage. In the former, AP is combined with LDA to detect a most representative face image

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This research has been supported by the National Natural Science Foundation of China (Nos.: 60675023 and 60602012).

References (20)

  • J. Yang et al., Two-dimensional discriminant transform for face recognition, Pattern Recognition (2005).
  • P. Phillips et al., The FERET evaluation methodology for face-recognition algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence (2000).
  • M. Turk et al., Face recognition using eigenfaces, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1991).
  • P. Belhumeur et al., Eigenfaces vs. Fisherfaces: recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence (1997).
  • J. Yang et al., Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (2004).
  • M.S. Bartlett et al., Face recognition by independent component analysis, IEEE Transactions on Neural Networks (2002).
  • M.A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal (1991).
  • I. Dagher et al., Face recognition using IPCA-ICA algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence (2006).
  • M. Yang, N. Ahuja, D. Kriegman, Face recognition using kernel eigenfaces, in: International Conference on Image...
  • Q. Liu et al., Kernel-based optimized feature vectors selection and discriminant analysis for face recognition, International Conference on Pattern Recognition (2002).
