
Nonlinear Component Analysis for Large-Scale Data Set Using Fixed-Point Algorithm

  • Conference paper
Advances in Neural Networks – ISNN 2009 (ISNN 2009)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5553)


Abstract

Nonlinear component analysis is a popular nonlinear feature extraction method. It generally relies on eigen-decomposition to extract the principal components, but this approach is infeasible for large-scale data sets because of its storage and computational costs. To overcome these disadvantages, an efficient iterative method for computing kernel principal components based on a fixed-point algorithm is proposed. The kernel principal components can be computed iteratively without eigen-decomposition, reducing the space and time complexity of the proposed method to O(m) and O(m^2), respectively, where m is the number of samples. More importantly, the method remains applicable to extremely large-scale data sets for which the traditional eigen-decomposition technique cannot be used. The effectiveness of the proposed method is validated by experimental results.
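A minimal sketch of the idea described in the abstract, under stated assumptions: the leading kernel principal directions can be found by a fixed-point (power-type) iteration on the expansion coefficients, computing one kernel row at a time so the full m × m kernel matrix is never stored (O(m) space per step). The RBF kernel, function names, and parameters below are illustrative choices, not the paper's exact update rule, and kernel centering is omitted for brevity.

```python
import numpy as np

def rbf_kernel_row(X, i, gamma):
    """i-th row of the RBF kernel matrix, computed on the fly (O(m) space)."""
    d = X - X[i]
    return np.exp(-gamma * np.einsum('ij,ij->i', d, d))

def fixed_point_kpca(X, n_components=2, gamma=0.1, n_iter=50, tol=1e-6):
    """Fixed-point iteration for kernel PCA expansion coefficients.

    Returns an (n_components, m) array of unit-norm coefficient vectors,
    found one at a time with Gram-Schmidt deflation against earlier ones.
    """
    m = X.shape[0]
    alphas = []
    for _ in range(n_components):
        a = np.random.default_rng(0).standard_normal(m)
        a /= np.linalg.norm(a)
        for _ in range(n_iter):
            # K @ a evaluated one kernel row at a time:
            # the m x m kernel matrix is never materialized.
            new = np.array([rbf_kernel_row(X, i, gamma) @ a for i in range(m)])
            # Deflate components already found (Gram-Schmidt),
            # then renormalize to keep the iterate on the unit sphere.
            for b in alphas:
                new -= (b @ new) * b
            new /= np.linalg.norm(new)
            if np.linalg.norm(new - a) < tol:
                a = new
                break
            a = new
        alphas.append(a)
    return np.array(alphas)
```

Each sweep costs O(m^2) kernel evaluations while storing only O(m) values, matching the complexity the abstract claims; the iteration converges to the dominant eigenvectors of the kernel matrix because repeated application of K amplifies the largest-eigenvalue direction.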




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shi, W., Guo, Y.F. (2009). Nonlinear Component Analysis for Large-Scale Data Set Using Fixed-Point Algorithm. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01513-7_16


  • DOI: https://doi.org/10.1007/978-3-642-01513-7_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01512-0

  • Online ISBN: 978-3-642-01513-7

  • eBook Packages: Computer Science
