Error bounds for suboptimal solutions to kernel principal component analysis

  • Original Paper, Optimization Letters

Abstract

Suboptimal solutions to kernel principal component analysis are considered. Such solutions take the form of linear combinations of n-tuples of kernel functions centered on the data, where n is a positive integer smaller than the cardinality m of the data sample. Their accuracy in approximating the optimal solution, obtained in general for n = m, is estimated. The analysis made in Gnecco and Sanguineti (Comput Optim Appl 42:265–287, 2009) is extended: the estimates derived therein for the approximation of the first principal axis are improved, and corresponding bounds for the successive principal axes are derived.
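
As a rough numerical illustration of the setting described in the abstract (not of the authors' bounds themselves), the sketch below compares full kernel PCA on m points with a suboptimal first principal axis constrained to a linear combination of n < m kernel functions centered on a subset of the data. The Gaussian kernel, the function and variable names (rbf_kernel, var_opt, var_sub), the sample sizes, and the particular choice of the n centers are all assumptions made for this example.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix between the rows of X and Y (an assumed kernel choice)."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))     # m = 200 sample points in R^5
m, n = X.shape[0], 20                 # suboptimal expansion uses only n << m kernel functions

K = rbf_kernel(X, X)                  # m x m kernel matrix
H = np.eye(m) - np.ones((m, m)) / m   # centering matrix
Kc = H @ K @ H                        # centered kernel matrix

# Optimal first principal axis (full kernel PCA, n = m): the variance captured
# along it equals the largest eigenvalue of Kc divided by m.
eigvals = np.linalg.eigvalsh(Kc)
var_opt = eigvals[-1] / m

# Suboptimal solution: restrict the axis to the span of the n centered kernel
# functions indexed by S and maximize the same variance (a Rayleigh quotient)
# over that subspace, which is a generalized eigenproblem A beta = lambda B beta.
S = np.arange(n)                      # one particular n-tuple of centers (arbitrary choice)
A = Kc[S, :] @ Kc[:, S]               # numerator: m times the projected variance
B = Kc[np.ix_(S, S)]                  # denominator: squared norm of the candidate axis
vals, _ = eigh(A, B + 1e-10 * np.eye(n))   # small ridge keeps B positive definite
var_sub = vals[-1] / m

print(f"variance along the optimal axis   : {var_opt:.4f}")
print(f"variance along the suboptimal axis: {var_sub:.4f}  (never larger)")
```

The gap between var_opt and var_sub is the kind of approximation error, as a function of n and m, that the bounds discussed in the paper are meant to control; here it is only measured numerically for one arbitrary subset of centers.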

References

  1. Achlioptas, D., McSherry, F., Schölkopf, B.: Sampling techniques for kernel methods. In: Advances in Neural Information Processing Systems, vol. 14 (Proc. NIPS 2001), pp. 335–342. MIT Press, Cambridge (2002)

  2. Aronszajn N.: Theory of reproducing kernels. Trans. AMS 68, 337–404 (1950)

  3. Barron A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 39, 930–945 (1993)

  4. Bhatia R., Elsner L.: The Hoffman–Wielandt inequality in infinite dimensions. Proc. Indian Acad. Sci. (Math. Sci.) 104, 483–494 (1994)

  5. Dahlquist G., Björck Å.: Numerical Methods in Scientific Computing. SIAM, Philadelphia (2008)

  6. Georgiev P., Pardalos P., Cichocki A.: Algorithms with high order convergence speed for blind source extraction. Comput. Optim. Appl. 38, 123–131 (2007)

  7. Georgiev P., Pardalos P., Theis F.J., Cichocki A., Bakardjian H.: Sparse component analysis: a new tool for data mining. In: Pardalos, P., Boginski, V., Vazacopoulos, A. (eds) Data Mining in Biomedicine, pp. 91–116. Springer, Berlin (2007)

  8. Gnecco G., Sanguineti M.: Accuracy of suboptimal solutions to kernel principal component analysis. Comput. Optim. Appl. 42, 265–287 (2009)

  9. Hussain Z., Shawe-Taylor J.: Theory of matching pursuit. In: Koller, D., Schuurmans, D., Bengio, Y., Bottou, L. (eds) Advances in Neural Information Processing Systems 21 (Proc. NIPS 2008), MIT Press, Cambridge (2009)

  10. Jolliffe I.T.: Principal Component Analysis. Springer, New York (1986)

  11. Kůrková V.: Dimension-independent rates of approximation by neural networks. In: Warwick, K., Kárný, M. (eds) Computer-Intensive Methods in Control and Signal Processing. The Curse of Dimensionality, pp. 261–270. Birkhäuser, Boston (1997)

  12. Kůrková V., Sanguineti M.: Comparison of worst-case errors in linear and neural network approximation. IEEE Trans. Inform. Theory 48, 264–275 (2002)

  13. Kůrková V., Sanguineti M.: Error estimates for approximate optimization by the extended Ritz method. SIAM J. Optim. 15, 261–287 (2005)

  14. Koltchinskii V., Giné E.: Random matrix approximation of spectra of integral operators. Bernoulli 6, 113–167 (2000)

  15. Limaye B.V.: Functional Analysis. New Age Publishers, New Delhi (1996)

  16. Pardalos, P., Hansen, P. (eds): Data Mining and Mathematical Programming. American Mathematical Society, Providence (2008)

  17. Parlett B.N.: The Symmetric Eigenvalue Problem. SIAM, Philadelphia (1998)

  18. Schölkopf B., Smola A., Müller K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1998)

  19. Schölkopf B., Smola A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2002)

  20. Shawe-Taylor, J., Cristianini, N.: Estimating the moments of a random vector with applications. In: Proceedings of GRETSI 2003 Conference, pp. 47–52 (2003)

  21. Weinberger H.F.: Variational Methods for Eigenvalue Approximation. SIAM, Philadelphia (1974)

Author information

Corresponding author

Correspondence to Marcello Sanguineti.

Cite this article

Gnecco, G., Sanguineti, M. Error bounds for suboptimal solutions to kernel principal component analysis. Optim Lett 4, 197–210 (2010). https://doi.org/10.1007/s11590-009-0158-1
