Linear Dimension Reduction Techniques

Encyclopedia of Biometrics

Synonyms

Linear Feature Extraction

Definition

A linear dimension reduction technique reduces the dimensionality of biometric data by means of a linear transform, which is typically learned by optimizing a criterion. The biometric data are then projected onto the range space of the transform, and subsequent processing is performed in that lower-dimensional space.
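The generic pipeline described above (learn a linear transform by optimizing a criterion, then project the data onto its range space) can be sketched as follows. This is a minimal illustration assuming Python with NumPy and using the PCA criterion as one concrete choice; the synthetic data and variable names are illustrative, not part of the entry:

```python
import numpy as np

# Synthetic stand-in for biometric data: N samples of dimension d.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # N = 200 samples, d = 50 features

# Learn a linear transform by optimizing a criterion -- here PCA,
# i.e., W spans the directions of maximal variance (top right
# singular vectors of the centered data matrix).
k = 5
Xc = X - X.mean(axis=0)                          # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                                     # d x k projection matrix

# Project the data onto the range space of W; subsequent processing
# operates on this lower-dimensional representation.
Z = Xc @ W                                       # N x k
print(Z.shape)  # (200, 5)
```

Other criteria (e.g., a discriminative one, as in LDA) change how `W` is learned, but the projection step `Z = Xc @ W` is the same.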

Introduction

In biometrics, data are commonly represented as vectors, and their dimensionality is often very high. Processing such data directly would be computationally expensive for many algorithms. Moreover, it is often desirable to extract robust, informative, or discriminative information from the data. For these reasons, a lower-dimensional subspace is sought in which the most important information in the data is retained under a linear representation. Among the techniques for learning such a subspace, linear dimension reduction methods are especially popular.

Suppose we are given a set of N data samples \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{N}\}\), where \(\mathbf{x}_{i}\)…




Copyright information

© 2015 Springer Science+Business Media New York

About this entry

Cite this entry

Zheng, WS., Lai, JH., Yuen, P.C. (2015). Linear Dimension Reduction Techniques. In: Li, S.Z., Jain, A.K. (eds) Encyclopedia of Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7488-4_9220
