
Dimensionality Reduction


Synonyms

Dimensional compression; Dimensional embedding; Dimension reduction

Related Concepts

Feature Selection

Definition

Dimensionality reduction is the process of reducing the dimension of the vector space spanned by feature vectors (pattern vectors). Various kinds of reduction can be achieved by defining a map from the original space into a dimensionality-reduced space.
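
In standard notation (a sketch only; the symbols f, x, y, W, d, and d′ below are illustrative and not introduced by the entry itself), such a map and its common linear special case can be written as:

% A dimensionality-reducing map sends a feature vector x in the original
% d-dimensional space to y in a d'-dimensional space with d' < d.
\[
  f \colon \mathbb{R}^{d} \to \mathbb{R}^{d'}, \qquad y = f(x), \qquad d' < d ,
\]
% In the linear case the map is given by a projection matrix W.
\[
  y = W^{\top} x, \qquad W \in \mathbb{R}^{d \times d'} \quad \text{(linear case)} .
\]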

Background

The feature space, i.e., the vector space spanned by feature vectors (pattern vectors) defined on a d-dimensional space, can be transformed into a vector space of lower dimension d′ (< d), spanned by d′-dimensional feature vectors, through a linear or nonlinear transformation. This transformation represents each feature vector by a lower-dimensional vector, so that vector operations and statistical analyses, such as multivariate analysis, machine learning, clustering, and classification, become less expensive to perform. Moreover, it mitigates the “curse of dimensionality,” the various difficulties that arise when analyzing data in high-dimensional spaces.
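
As a concrete illustration of one such linear transformation, the sketch below projects feature vectors onto their principal directions with NumPy. This is a standard principal component analysis projection chosen as an example, not the entry's own method; the helper name pca_reduce and the synthetic data are assumptions made for this illustration.

import numpy as np

def pca_reduce(X, d_prime):
    """Project (n, d) feature vectors onto the d_prime principal directions."""
    X_centered = X - X.mean(axis=0)            # center each feature
    cov = np.cov(X_centered, rowvar=False)     # (d, d) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:d_prime]
    W = eigvecs[:, order]                      # top-d_prime eigenvectors as columns
    return X_centered @ W                      # linear map R^d -> R^{d'}

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 feature vectors, d = 10
Y = pca_reduce(X, 2)             # reduced representation, d' = 2
print(Y.shape)                   # (100, 2)

Nonlinear reductions (for example, kernel methods or manifold-learning approaches) replace the matrix projection with a nonlinear map but follow the same idea of sending d-dimensional vectors to a d′-dimensional space.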



Author information

Correspondence to Eisaku Maeda.


Copyright information

© 2014 Springer Science+Business Media New York

About this entry

Cite this entry

Maeda, E. (2014). Dimensionality Reduction. In: Ikeuchi, K. (eds) Computer Vision. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-31439-6_652
