
Digital Signal Processing

Volume 50, March 2016, Pages 103-113

Face recognition using discriminative locality preserving vectors

https://doi.org/10.1016/j.dsp.2015.11.001

Abstract

We propose an effective face recognition method based on discriminative locality preserving vectors (DLPV). Through an analysis of the eigenspectrum modeling of locality preserving projections (LPP), we select the reliable face variation subspace of LPP to construct locality preserving vectors that characterize the data set; DLPV then performs discriminant analysis on these vectors. A theoretical analysis further shows that DLPV can be viewed as a generalization of the discriminative common vectors, null space linear discriminant analysis and null space discriminant locality preserving projections methods, which provides the intuitive motivation for our approach. Extensive experiments on four well-known face databases (ORL, Yale, Extended Yale B and CMU PIE) demonstrate the effectiveness of the proposed DLPV method.

Introduction

Over the last decade or so, face recognition has become a popular area of research with a wide range of commercial and law enforcement applications [1], [2], [3], [4]. The problem continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision and psychology [5], [6], [7], [8], [9], [10]. One of the most successful and well-studied techniques is the appearance-based method [24]. However, appearance-based methods used in face recognition may suffer from the curse of dimensionality [11]. A common way to address this problem is dimensionality reduction. Many linear approaches have been proposed for dimensionality reduction, such as principal component analysis (PCA) [12] and linear discriminant analysis (LDA) [13], [14], which have been widely used in visualization and classification. However, PCA does not encode the discriminant information that is important for a recognition task, and LDA aims to preserve only the global structure of the samples. Furthermore, both PCA and LDA fail to capture the essential structure of data with a nonlinear distribution.
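To make the dimensionality reduction step concrete, the following minimal NumPy sketch projects vectorized face images onto their leading principal components (eigenfaces); the data matrix X, the image size and the number of components are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def pca_project(X, k):
    """Project row-vectorized face images X (n_samples x n_pixels)
    onto the k leading principal components (eigenfaces)."""
    mean = X.mean(axis=0)
    Xc = X - mean                               # center the data
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                         # k eigenfaces, shape (k, n_pixels)
    return Xc @ components.T, components, mean

# Toy usage: 100 random "face" vectors of 32x32 pixels reduced to 20 dimensions
X = np.random.rand(100, 32 * 32)
features, eigenfaces, mean_face = pca_project(X, k=20)
print(features.shape)                           # (100, 20)
```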

Based on eigenspectrum modeling of PCA, null space LDA (NLDA) was proposed to deal with the small sample size problem [15]. In this method, PCA is first applied to remove the null space of the total scatter matrix, which contains the intersection of the null spaces of the between-class and within-class scatter matrices. The optimal projection vectors are then found in the remaining lower-dimensional space using the null space method. Cevikalp et al. proposed a face recognition method based on discriminative common vectors (DCV) [16], which yields an optimal solution by maximizing the modified Fisher linear discriminant criterion [17]. The linear methods mentioned above, however, may fail to find the underlying nonlinear structure of a data set. To remedy this deficiency, a number of nonlinear dimensionality reduction techniques have been developed in recent years, two of which have received increasing attention: kernel-based methods and manifold-learning-based methods. Kernel principal component analysis [18], generalized discriminant analysis [19], and kernel discriminative common vectors [20] are representative kernel-based methods. However, kernel-based techniques are computationally intensive and do not explicitly consider the local structure of a data set, which is important for classification.
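As an illustration of this null space procedure, the rough NumPy sketch below removes the null space of the total scatter via PCA, finds the null space of the within-class scatter in the reduced space, and keeps the directions that maximize the between-class scatter there; the function name, the tolerance eps and all interfaces are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def null_space_lda(X, y, eps=1e-10):
    """Illustrative null-space LDA: PCA onto the range of the total scatter,
    then discriminant directions taken from the null space of the
    within-class scatter in that reduced space."""
    y = np.asarray(y)
    classes = np.unique(y)
    Xc = X - X.mean(axis=0)

    # Step 1: PCA removes the null space of the total scatter matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[S > eps].T                     # basis of range(St), shape (d, r)
    Z = Xc @ P                            # samples in the reduced space

    # Within- and between-class scatter in the reduced space
    r = Z.shape[1]
    Sw = np.zeros((r, r))
    Sb = np.zeros((r, r))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc, mc)  # global mean of Z is zero after centering

    # Step 2: null space of the within-class scatter
    w_vals, w_vecs = np.linalg.eigh(Sw)
    N = w_vecs[:, w_vals < eps]

    # Step 3: maximize between-class scatter inside that null space
    b_vals, b_vecs = np.linalg.eigh(N.T @ Sb @ N)
    order = np.argsort(b_vals)[::-1][:len(classes) - 1]
    return P @ (N @ b_vecs[:, order])     # overall projection, shape (d, c-1)
```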

Recently, a number of research efforts have shown that face images possibly reside on a nonlinear sub-manifold [21], [22], [23], [24], and many manifold-learning approaches, such as isometric feature mapping [25], locally linear embedding [26], and Laplacian eigenmaps [21], have been developed for analyzing high-dimensional data. Manifold learning methods are straightforward in finding the inherent nonlinear structure hidden in the observation space [27]. However, none of them explicitly considers the structure of the manifold on which the face images possibly reside. Locality preserving projections (LPP) [28] is a linear dimensionality reduction algorithm that maps different samples into new representations with the same linear transform while trying to preserve the local structure of the samples. Based on LPP, several methods were further developed for face recognition, such as neighborhood preserving embedding (NPE) [29], Laplacianfaces [30], orthogonal locality preserving projections (OLPP) [31], and locality preserving indexing [32], providing encouraging performance. Although LPP is effective in many domains, it suffers from a limitation: it de-emphasizes discriminant information, which makes it less suitable for a recognition task; in other words, for a classification problem the locality criterion alone is not sufficient. To encode discriminant information, discriminant locality preserving projections (DLPP) was proposed [33]. However, similar to LDA, DLPP also suffers from the small sample size problem. Developed from DLPP, null space discriminant locality preserving projections (NDLPP) [34] inherits DLPP's ability to encode both the geometrical and discriminant structure of the data, and addresses the small sample size problem by solving an eigenvalue problem in the null space.
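For reference, a minimal sketch of the basic (unsupervised) LPP algorithm described above is given below: a heat-kernel affinity graph is built over nearest neighbors, and the projection is obtained from a generalized eigenproblem involving the graph Laplacian. The neighborhood size, kernel width and small regularization term are assumptions made only to keep the example self-contained.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=10, n_neighbors=5, t=1.0):
    """Illustrative LPP: heat-kernel affinities on k nearest neighbors,
    then solve X^T L X a = lambda X^T D X a and keep the eigenvectors
    with the smallest eigenvalues (they best preserve local structure)."""
    n = X.shape[0]
    dist2 = cdist(X, X, 'sqeuclidean')

    # Symmetric k-nearest-neighbor affinity matrix with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist2[i])[1:n_neighbors + 1]   # skip the point itself
        W[i, idx] = np.exp(-dist2[i, idx] / t)
    W = np.maximum(W, W.T)

    D = np.diag(W.sum(axis=1))                   # degree matrix
    L = D - W                                    # graph Laplacian

    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability
    vals, vecs = eigh(A, B)                      # generalized symmetric eigenproblem
    return vecs[:, :n_components]                # eigenvalues come back in ascending order

# Toy usage: project 64-dimensional samples into a 10-dimensional space
X = np.random.rand(200, 64)
Y = X @ lpp(X)
```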

Inspired by LPP and DCV, we propose a new method, termed discriminative locality preserving vectors (DLPV), for face recognition. Based on an analysis of the eigenspectrum modeling of supervised locality preserving projections (LPP) [35], [36], we select the reliable face variation subspace of supervised LPP to obtain the locality preserving vectors (LPV). DLPV then performs discriminant analysis on the LPV. Furthermore, we present a theoretical analysis of DLPV and its connections with NLDA, DCV and NDLPP.
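Because the eigenspectrum model and the rule for selecting the reliable face variation subspace are given only in the full text, the sketch below merely mirrors the two-stage structure described here, chaining a supervised LPP-style projection with ordinary discriminant analysis; the supervised affinity graph, the simple component-selection step and all names and parameters are hypothetical stand-ins, not the authors' DLPV algorithm.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dlpv_like_pipeline(X, y, keep=20):
    """Hypothetical two-stage pipeline: (1) a supervised LPP-style projection
    whose affinity graph connects only same-class samples, retaining `keep`
    components as stand-ins for the locality preserving vectors, then
    (2) discriminant analysis in that subspace."""
    y = np.asarray(y)
    n, d = X.shape

    # Supervised affinity: unit weight between samples that share a label
    W = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W

    # Supervised LPP-style generalized eigenproblem (smallest eigenvalues first)
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(d)
    vals, vecs = eigh(A, B)
    V = vecs[:, :keep]                    # stand-in for the reliable variation subspace

    # Discriminant analysis on the projected samples
    lda = LinearDiscriminantAnalysis()
    Z = lda.fit_transform(X @ V, y)
    return Z, V, lda

# Toy usage: 40-dimensional samples from 4 classes
X = np.random.rand(120, 40)
y = np.repeat(np.arange(4), 30)
Z, V, lda = dlpv_like_pipeline(X, y, keep=20)
```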

The remainder of this article is organized as follows: related work is reviewed in Section 2; the DLPV method and its theoretical analysis are presented in Section 3; experimental results and analysis are reported in Section 4; finally, conclusions are given in Section 5.

Section snippets

Related works

In this section, we briefly review null space based linear discriminant analysis (NLDA), discriminative common vectors (DCV), locality preserving projections (LPP), and null space discriminant locality preserving projections (NDLPP), since our proposed DLPV stems from these methods.

Methodology

In this section, we describe our algorithm and provide the theoretical analysis. We begin with the motivations of our work.

Experiments

In a natural environment, face images are acquired under various conditions that affect the accuracy of face recognition. In this section, we investigate the performance of the proposed DLPV method under variations in facial expression, lighting, illumination and pose. The system performance is compared with PCA [12], LDA [13], LPP [28], NLDA [15], DCV [16] and NDLPP [34]. To validate the performance of the proposed DLPV, we chose two

Conclusion

A novel and effective face recognition method based on discriminative locality preserving vectors was proposed in this article. The main contribution of this paper is the use of a reliable face variation subspace of LPP to construct the locality preserving vectors (LPV), from which the discriminative locality preserving vectors are obtained by discriminant analysis on the LPV. Furthermore, we gave a theoretical analysis of DLPV and its connections with NLDA, DCV and NDLPP. The proposed

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61273261, 61272267), the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213), the Open Project of the Key Laboratory of Embedded System and Service Computing, Ministry of Education (Tongji University), and the Science and Technology Commission of Shanghai Municipality under research grant No. 14DZ2260800.


References (39)

  • J.W. Wan et al., Pairwise costs in semisupervised discriminant analysis for face recognition, IEEE Trans. Inf. Forensics Secur. (2014)
  • L.E. Shafey et al., A scalable formulation of probabilistic linear discriminant analysis: applied to face recognition, IEEE Trans. Pattern Anal. Mach. Intell. (2013)
  • K. Anderson et al., A real-time automated system for the recognition of human facial expressions, IEEE Trans. Syst. Man Cybern., Part B, Cybern. (2006)
  • G. Botella et al., Robust bioinspired architecture for optical-flow computation, IEEE Trans. Very Large Scale Integr. (VLSI) Syst. (2010)
  • C. Garcia et al., Multi-GPU based on multicriteria optimization for motion estimation system, EURASIP J. Adv. Signal Process. (February 2013)
  • C. Liu et al., Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition, IEEE Trans. Image Process. (2012)
  • M. Turk, A random walk through eigenspace, IEICE Trans. Inf. Syst. (2001)
  • I.T. Jolliffe, Principal Component Analysis (1986)

    Ying Wen received her B.Sc. degree in industrial automation from Hefei University of Technology in 1997 and the M.Sc. and Ph.D. degrees in image processing and pattern recognition from Shanghai University and Shanghai Jiao Tong University, China, in 2002 and 2009 respectively. She was a Postdoctoral Research Fellow at Columbia University in the City of New York from 2009 to 2011. She is currently an associate professor at East China Normal University. Her research interests include document processing, pattern recognition, machine learning and medical image processing.

    Le Zhang received the B.E. degree in computer science and technology from the Institute of Technology, Shanghai, China, in 2013. He is currently working toward a master's degree in pattern recognition and intelligent systems at East China Normal University, Shanghai, China. His research interests include pattern recognition and image processing.

    Karen M. von Deneen received the A.A.S. and B.U.S. degrees in education and veterinary technology from Morehead State University, Morehead, KY, USA, in 1997 and 1998, respectively, the M.S. degree in animal science from Oregon State University, Corvallis, OR, USA, in 2002, the D.V.M. degree in veterinary medicine from the College of Veterinary Medicine and Biomedical Sciences, Colorado State University, Fort Collins, CO, USA, and the Ph.D. degree in pathobiology and large animal clinical sciences from the University of Florida, Gainesville, FL, USA, in 2009.

    She is currently an Associate Professor with the School of Life Science and Technology, Xidian University, Xi'an, China. Her current research interests include functional magnetic resonance imaging and neuroimaging applications to acupuncture research in obesity and food/drug addiction.

    Lianghua He received the B.S. degree in surveying and mapping from the Wuhan Technology University of Surveying and Mapping, Wuhan, China, in 1999, the M.S. degree in surveying and mapping from Wuhan University, Wuhan, in 2002, and the Ph.D. degree in electronic engineering from Southeast University, Nanjing, China, in 2005. From 2005 to 2007, he was a Post-Doctoral Researcher with Tongji University, Shanghai, China. Since 2007, he has been with the Department of Computer Science and Technology, Tongji University, where he is currently an Associate Professor. His current research interests include pattern analysis, machine learning, and cognitive computing.
