Abstract
We introduce a fast incremental principal component analysis (IPCA) algorithm, candid covariance-free IPCA (CCIPCA), which computes the principal components of a sequence of samples incrementally without estimating the covariance matrix (hence covariance-free). The method targets real-time applications with high-dimensional inputs, such as appearance-based image analysis, where iterative batch processing is not feasible. CCIPCA is motivated by the concept of statistical efficiency (an estimate having the smallest possible variance given the observed data). Its convergence rate is very high compared with other IPCA algorithms on high-dimensional data, although the highest possible efficiency cannot be guaranteed because the sample distribution is unknown.
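To make the covariance-free idea concrete, the following is a minimal NumPy sketch of a CCIPCA-style update under stated assumptions: each row of `V` is an unnormalized eigenvector estimate whose norm estimates the corresponding eigenvalue, each sample is processed once, and the input is assumed zero-mean. The function name and the plain incremental averaging used here are illustrative; the paper's full algorithm includes refinements (such as an amnesic averaging parameter) omitted for brevity.

```python
import numpy as np

def ccipca(X, k=2):
    """CCIPCA-style incremental PCA sketch: no covariance matrix is
    ever formed; each sample is seen exactly once."""
    d = X.shape[1]
    V = np.zeros((k, d))                      # rows: unnormalized eigenvector estimates
    for n, x in enumerate(X, start=1):        # n = number of samples seen so far
        u = x.astype(float).copy()            # residual passed down the components
        for i in range(min(k, n)):
            if i == n - 1:
                V[i] = u                      # the i-th sample initializes v_i
            else:
                # incremental average of the old estimate and the
                # new sample's projection along it (power-iteration-like pull)
                V[i] = ((n - 1) / n) * V[i] \
                     + (1 / n) * u * (u @ V[i]) / np.linalg.norm(V[i])
            vhat = V[i] / np.linalg.norm(V[i])
            u = u - (u @ vhat) * vhat         # deflate before the next component
    return V
```

On data with a dominant variance direction, the leading estimate converges toward the batch principal component while storing only `k` vectors of dimension `d`, which is what makes the approach attractive for high-dimensional streams.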
This work is supported in part by the National Science Foundation under grant No. IIS 9815191, DARPA ETO under contract No. DAAN02-98-C-4025, and DARPA ITO under grant No. DABT63-99-1-0014. The authors would like to thank Shaoyun Chen for his code for batch PCA.
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Weng, J., Zhang, Y., Hwang, WS. (2003). A Fast Algorithm for Incremental Principal Component Analysis. In: Liu, J., Cheung, Ym., Yin, H. (eds) Intelligent Data Engineering and Automated Learning. IDEAL 2003. Lecture Notes in Computer Science, vol 2690. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45080-1_122
Print ISBN: 978-3-540-40550-4
Online ISBN: 978-3-540-45080-1