
Concentration estimates for learning with unbounded sampling

Advances in Computational Mathematics

Abstract

We consider the least-squares regression problem solved by regularization schemes in reproducing kernel Hilbert spaces, where the learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve the learning rates known in the literature.
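The regularization scheme the abstract describes is Tikhonov-regularized least squares in an RKHS, which in finite samples reduces to kernel ridge regression. The following is a minimal sketch under illustrative assumptions (Gaussian kernel, synthetic sin(x) data, hand-picked regularization parameter `lam`); it is not the paper's specific estimator or its unbounded-sampling analysis, only the underlying algorithm.

```python
# Sketch of regularized least-squares regression in an RKHS
# (kernel ridge regression). Kernel, data, and lam are assumptions
# for illustration, not choices made in the paper.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, lam=1e-3, sigma=1.0):
    """Solve (K + lam * m * I) alpha = y; the regularized estimator is
    f(x) = sum_i alpha_i K(x, x_i)."""
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate the fitted function at new points."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Usage: noisy samples of sin(x) on [0, 3]
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = krr_fit(X, y, lam=1e-3)
pred = krr_predict(X, alpha, np.array([[1.5]]))
```

The factor `lam * m` in the linear system matches the usual empirical-risk formulation (1/m) Σ (f(x_i) − y_i)² + lam ‖f‖², in which the regularization parameter multiplies the squared RKHS norm.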



Corresponding author

Correspondence to Ding-Xuan Zhou.

Additional information

Communicated by Lixin Shen.

The work described in this paper is supported partially by the Research Grants Council of Hong Kong [Project No. CityU 103508] and National Science Fund for Distinguished Young Scholars of China [Project No. 10529101].


Cite this article

Guo, ZC., Zhou, DX. Concentration estimates for learning with unbounded sampling. Adv Comput Math 38, 207–223 (2013). https://doi.org/10.1007/s10444-011-9238-8

