
Learning Performance of Tikhonov Regularization Algorithm with Strongly Mixing Samples

  • Conference paper
Advances in Neural Networks – ISNN 2009 (ISNN 2009)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 5551))


Abstract

Generalization performance is a central object of study in machine learning theory. Previous bounds on the generalization ability of the Tikhonov regularization algorithm are almost all based on independent and identically distributed (i.i.d.) samples. In this paper we go beyond this classical framework by establishing a bound on the generalization ability of the Tikhonov regularization algorithm with exponentially strongly mixing observations. We then show that the Tikhonov regularization algorithm with exponentially strongly mixing observations is consistent.
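For readers unfamiliar with the algorithm itself: in the least-squares setting, Tikhonov regularization minimizes the empirical risk plus a squared-norm penalty, which for linear hypotheses admits a closed-form solution. The sketch below illustrates that estimator in Python; it is a minimal illustration of the general technique (ridge regression), not the paper's exact hypothesis space or sampling setting, and the function name and the form of the objective are chosen here for exposition.

```python
import numpy as np

def tikhonov_regression(X, y, lam):
    """Minimize (1/n) * ||X w - y||^2 + lam * ||w||^2 over w.

    Closed-form solution of the regularized normal equations:
        (X^T X / n + lam * I) w = X^T y / n
    """
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    b = X.T @ y / n
    return np.linalg.solve(A, b)

# Larger lam shrinks the solution toward zero, trading empirical
# fit for a smaller-norm (smoother) hypothesis.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w_small = tikhonov_regression(X, y, 0.01)
w_big = tikhonov_regression(X, y, 10.0)
```

The regularization parameter `lam` controls the bias-variance trade-off that generalization bounds such as the one in this paper quantify.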





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Xu, J., Zou, B. (2009). Learning Performance of Tikhonov Regularization Algorithm with Strongly Mixing Samples. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5551. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01507-6_81


  • DOI: https://doi.org/10.1007/978-3-642-01507-6_81

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01506-9

  • Online ISBN: 978-3-642-01507-6

  • eBook Packages: Computer Science (R0)
