
Locally Minimizing Embedding and Globally Maximizing Variance: Unsupervised Linear Difference Projection for Dimensionality Reduction


Abstract

Recently, many dimensionality reduction algorithms, both local and global, have been proposed. Representative local linear methods include locally linear embedding (LLE) and locality preserving projections (LPP), which seek an embedding space that preserves local neighborhood information and thereby captures the intrinsic structure of high-dimensional data. However, both methods still fail to handle sparsely sampled or noise-contaminated datasets well, because the local neighborhood structure is then critically distorted. In contrast, principal component analysis (PCA), the most frequently used global method, preserves the total variance of the data by maximizing the trace of the feature covariance matrix; but, precisely because it pursues maximal variance, PCA cannot preserve local information. To integrate locality and globality and to avoid the drawbacks of LLE and PCA, in this paper, inspired by these two methods, we propose a new dimensionality reduction method for face recognition, namely unsupervised linear difference projection (ULDP). This approach can be regarded as the integration of a local approach (LLE) and a global approach (PCA), so that it achieves better performance and robustness in applications. Experimental results on the ORL, YALE and AR face databases show the effectiveness of the proposed method for face recognition.
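
For illustration, the following is a minimal sketch of one plausible reading of the ULDP idea described in the abstract: an LLE-style local reconstruction scatter is subtracted from the PCA total scatter, and the projection is taken from the leading eigenvectors of the difference. The function name uldp_fit, the parameters n_neighbors, n_components and reg, and the criterion tr(W^T S_T W) - tr(W^T S_L W) are assumptions made for this sketch, not the authors' exact formulation.

```python
# Illustrative sketch of an unsupervised "difference projection" in the spirit of ULDP.
# Assumed criterion: maximize tr(W^T S_T W) - tr(W^T S_L W), where S_T is the PCA
# total-scatter matrix and S_L is an LLE-style local reconstruction scatter.
# This is a hedged reading of the abstract, not the authors' published algorithm.
import numpy as np
from scipy.linalg import eigh

def uldp_fit(X, n_neighbors=5, n_components=2, reg=1e-3):
    """X: (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)                    # center the data

    # Global part: PCA total-scatter matrix.
    S_T = Xc.T @ Xc / n

    # Local part: LLE-style reconstruction weights over k nearest neighbors.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    W_rec = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dists[i])[:n_neighbors]
        Z = X[nbrs] - X[i]                     # neighbors shifted to the query point
        G = Z @ Z.T                            # local Gram matrix
        G += (reg * np.trace(G) + 1e-12) * np.eye(n_neighbors)  # regularize for stability
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W_rec[i, nbrs] = w / w.sum()           # reconstruction weights sum to one

    # LLE embedding cost matrix M = (I - W)^T (I - W), pulled back to the input space.
    M = (np.eye(n) - W_rec).T @ (np.eye(n) - W_rec)
    S_L = Xc.T @ M @ Xc / n

    # Difference criterion: leading eigenvectors of (S_T - S_L).
    evals, evecs = eigh(S_T - S_L)
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_components]]

# Example usage (hypothetical data):
# X = np.random.rand(200, 50)
# W = uldp_fit(X, n_neighbors=7, n_components=10)
# X_low = X @ W
```

Under this reading, directions along which the global variance exceeds the local reconstruction error are retained, which is one concrete way the locality/globality trade-off stated in the abstract could be realized.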



Author information

Correspondence to Minghua Wan.


Cite this article

Wan, M., Lai, Z. & Jin, Z. Locally Minimizing Embedding and Globally Maximizing Variance: Unsupervised Linear Difference Projection for Dimensionality Reduction. Neural Process Lett 33, 267–282 (2011). https://doi.org/10.1007/s11063-011-9177-x
