
Sparse Kernel Partial Least Squares Regression

  • Conference paper
Learning Theory and Kernel Machines

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2777)

Abstract

Partial Least Squares Regression (PLS) and its kernel version (KPLS) have become competitive regression approaches. KPLS performs as well as or better than support vector regression (SVR) on moderately sized problems, with the advantages of simple implementation, lower training cost, and easier parameter tuning. Unlike SVR, however, KPLS requires manipulation of the full kernel matrix, and the resulting regression function depends on the full training data. In this paper we rigorously derive a sparse KPLS algorithm. The underlying KPLS algorithm is modified to maintain sparsity at every step. The resulting ν-KPLS algorithm explicitly models centering and bias rather than using kernel centering, and an ε-insensitive loss function is used to produce sparse solutions in the dual space. The final regression function for ν-KPLS requires only a relatively small set of support vectors.
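To make the density problem concrete, the sketch below implements the dense baseline that the abstract contrasts against: kernel PLS in the style of Rosipal and Trejo, with explicit kernel centering and a NIPALS-style component extraction. It is not the paper's ν-KPLS algorithm; it is the starting point that ν-KPLS sparsifies. The RBF kernel, the `gamma` value, and the synthetic data in the usage note are illustrative assumptions, not from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpls_fit(K, Y, n_components):
    """Dense kernel PLS (Rosipal & Trejo style) -- NOT the paper's nu-KPLS.
    Returns dual coefficients B, the centered kernel Kc, and the response
    mean, so training predictions are Kc @ B + ymean."""
    n = K.shape[0]
    Y = np.asarray(Y, dtype=float).reshape(n, -1)
    ymean = Y.mean(axis=0)
    Yc = Y - ymean
    # Kernel centering: the step nu-KPLS replaces with an explicit
    # bias term, because centering destroys sparsity in the dual.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    Kres, Yres = Kc.copy(), Yc.copy()
    T, U = [], []
    for _ in range(n_components):
        u = Yres[:, :1].copy()
        for _ in range(500):  # NIPALS inner iteration
            t = Kres @ u
            t /= np.linalg.norm(t)
            c = Yres.T @ t
            u_new = Yres @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < 1e-12:
                u = u_new
                break
            u = u_new
        T.append(t)
        U.append(u)
        # Deflate the kernel and responses by the extracted score t.
        P = np.eye(n) - np.outer(t, t)
        Kres = P @ Kres @ P
        Yres = Yres - t @ (t.T @ Yres)
    T, U = np.hstack(T), np.hstack(U)
    # Dual coefficients: every training point carries a nonzero weight,
    # which is exactly the density the paper sets out to remove.
    B = U @ np.linalg.solve(T.T @ Kc @ U, T.T @ Yc)
    return B, Kc, ymean
```

Note that `B` has one row per training example and is generically fully dense, and prediction on new points requires kernel evaluations against all training data; ν-KPLS instead keeps only a small set of support vectors by combining ε-insensitive loss with sparsity-preserving centering and deflation.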




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Momma, M., Bennett, K.P. (2003). Sparse Kernel Partial Least Squares Regression. In: Schölkopf, B., Warmuth, M.K. (eds) Learning Theory and Kernel Machines. Lecture Notes in Computer Science (LNAI), vol 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_17


  • DOI: https://doi.org/10.1007/978-3-540-45167-9_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40720-1

  • Online ISBN: 978-3-540-45167-9

  • eBook Packages: Springer Book Archive
