
A Fast Algorithm for Incremental Principal Component Analysis

  • Conference paper
In: Intelligent Data Engineering and Automated Learning (IDEAL 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2690)

Abstract

We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (thus covariance-free). This new method is for real-time applications where no iterations are allowed and high-dimensional inputs are involved, such as appearance-based image analysis. CCIPCA is motivated by the concept of statistical efficiency (the estimate has the smallest variance given the observed data). The convergence rate of CCIPCA is very high compared with other IPCA algorithms on high-dimensional data, although the highest possible efficiency is not guaranteed because of the unknown sample distribution.
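The covariance-free idea described above can be sketched in a few lines of NumPy. The sketch below follows the standard CCIPCA recurrence: each eigenvector estimate v_i is updated directly from the incoming sample u as v_i(n) = ((n-1-l)/n) v_i(n-1) + ((1+l)/n) (u·v_i(n-1)/||v_i(n-1)||) u, and the sample is then deflated before estimating the next component. The function name `ccipca`, the amnesic parameter default, and the centering scheme are illustrative choices for this sketch, not the paper's exact implementation.

```python
import numpy as np

def ccipca(samples, k, amnesia=2.0):
    """Sketch of candid covariance-free IPCA: estimate the top-k
    principal components of a stream of samples without ever forming
    the d x d covariance matrix. `amnesia` is the amnesic parameter l
    (a typical choice; it down-weights old estimates)."""
    d = samples.shape[1]
    mean = np.zeros(d)
    V = np.zeros((k, d))              # unnormalized eigenvector estimates
    for n, x in enumerate(samples, start=1):
        u = x - mean                  # center with the previous mean
        mean += (x - mean) / n        # incremental mean update
        for i in range(min(k, n)):
            if n == i + 1:
                V[i] = u              # initialize with the sample itself
            else:
                vi = V[i]
                w_old = (n - 1 - amnesia) / n
                w_new = (1 + amnesia) / n
                # covariance-free update: u u^T v_i without forming u u^T
                V[i] = w_old * vi + w_new * (u @ vi) / np.linalg.norm(vi) * u
                # deflate: remove the v_i component from u so the next
                # eigenvector is estimated in the residual subspace
                vn = V[i] / np.linalg.norm(V[i])
                u = u - (u @ vn) * vn
    norms = np.linalg.norm(V, axis=1)
    # ||v_i|| converges to the i-th eigenvalue; V/||V|| to the eigenvectors
    return V / norms[:, None], norms
```

Each sample is processed once, with cost O(kd) per sample, which is what makes the method usable for real-time, high-dimensional (e.g. appearance-based image) streams.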

The work is supported in part by the National Science Foundation under grant No. IIS 9815191, DARPA ETO under contract No. DAAN02-98-C-4025, and DARPA ITO under grant No. DABT63-99-1-0014. The authors would like to thank Shaoyun Chen for his code for batch PCA.




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Weng, J., Zhang, Y., Hwang, W.S. (2003). A Fast Algorithm for Incremental Principal Component Analysis. In: Liu, J., Cheung, Y.M., Yin, H. (eds.) Intelligent Data Engineering and Automated Learning. IDEAL 2003. Lecture Notes in Computer Science, vol. 2690. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45080-1_122


  • DOI: https://doi.org/10.1007/978-3-540-45080-1_122

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40550-4

  • Online ISBN: 978-3-540-45080-1
