Abstract
Nonlinear component analysis is a popular nonlinear feature extraction method. It generally relies on the eigen-decomposition technique to extract the principal components, but this approach is infeasible for large-scale data sets because of its storage and computational costs. To overcome these disadvantages, an efficient iterative method for computing kernel principal components, based on a fixed-point algorithm, is proposed. The kernel principal components can be computed iteratively without eigen-decomposition, reducing the space and time complexity of the method to O(m) and O(m²), respectively, where m is the number of samples. More importantly, the method remains applicable to extremely large-scale data sets for which the traditional eigen-decomposition technique cannot be used at all. Experimental results validate the effectiveness of the proposed method.
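Since the paper's exact update rule is not reproduced on this page, the following is only a plausible sketch of the idea the abstract describes: computing the leading kernel principal components by a fixed-point (power-iteration-style) update, evaluating kernel rows on the fly so that the Gram matrix is never stored, which keeps space at O(m) and each sweep at O(m²). The function names, the RBF kernel choice, the deflation scheme, and the omission of feature-space centering are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def kernel_row(x, X, gamma=0.5):
    """One RBF kernel row k(x, x_j) for all x_j, computed on the fly (O(m) memory)."""
    return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

def fixed_point_kpca(X, n_components=2, gamma=0.5, n_iter=200, tol=1e-10):
    """Leading kernel principal components via fixed-point (power) iteration.

    Returns approximate eigenvalues and coefficient vectors of the
    (uncentered) kernel matrix K without ever materializing K: each
    update a <- K a is evaluated one kernel row at a time, so storage
    stays O(m) while one sweep costs O(m^2) kernel evaluations.
    Note: a full KPCA implementation would also center K in feature space.
    """
    m = X.shape[0]
    rng = np.random.default_rng(0)
    eigvals, eigvecs = [], []
    for _ in range(n_components):
        a = rng.standard_normal(m)
        a /= np.linalg.norm(a)
        for _ in range(n_iter):
            # fixed-point update a <- K a, assembled row by row
            Ka = np.array([kernel_row(X[i], X, gamma) @ a for i in range(m)])
            # deflation: K is symmetric, so earlier eigenvectors are l2-orthogonal
            for v in eigvecs:
                Ka -= (v @ Ka) * v
            a_new = Ka / np.linalg.norm(Ka)
            converged = np.linalg.norm(a_new - a) < tol
            a = a_new
            if converged:
                break
        # Rayleigh quotient gives the corresponding eigenvalue
        lam = a @ np.array([kernel_row(X[i], X, gamma) @ a for i in range(m)])
        eigvals.append(lam)
        eigvecs.append(a)
    return np.array(eigvals), np.array(eigvecs)
```

On a small problem this sketch can be checked against a direct eigen-decomposition of the full kernel matrix; the point of the iterative scheme is precisely that, for large m, building that full matrix is no longer required.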
© 2009 Springer-Verlag Berlin Heidelberg
Shi, W., Guo, YF. (2009). Nonlinear Component Analysis for Large-Scale Data Set Using Fixed-Point Algorithm. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01513-7_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01512-0
Online ISBN: 978-3-642-01513-7