Abstract
The least-squares regression problem is studied via regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve learning rates known in the literature.
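The estimator studied here is the standard regularized least-squares scheme in a reproducing kernel Hilbert space: minimize the empirical squared error plus λ times the squared RKHS norm, which by the representer theorem reduces to a linear system. The sketch below illustrates this with a Gaussian kernel; the kernel choice, parameter names, and values are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rls_fit(X, y, lam=1e-2, sigma=1.0):
    # Regularized least squares in the RKHS of the kernel:
    # f_z = argmin (1/m) sum (f(x_i) - y_i)^2 + lam ||f||_K^2.
    # By the representer theorem f_z(x) = sum_i alpha_i K(x, x_i),
    # where alpha solves (K + lam * m * I) alpha = y.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def rls_predict(X_train, alpha, X_test, sigma=1.0):
    # Evaluate f_z at the test points.
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```

With a small λ the estimator nearly interpolates the training data; larger λ trades data fit for a smaller RKHS norm, which is the bias–variance trade-off the error analysis quantifies.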
Additional information
Communicated by Lixin Shen.
The work described in this paper is supported partially by the Research Grants Council of Hong Kong [Project No. CityU 103508] and National Science Fund for Distinguished Young Scholars of China [Project No. 10529101].
Cite this article
Guo, ZC., Zhou, DX. Concentration estimates for learning with unbounded sampling. Adv Comput Math 38, 207–223 (2013). https://doi.org/10.1007/s10444-011-9238-8
Keywords
- Learning theory
- Least-squares regression
- Regularization in reproducing kernel Hilbert spaces
- Empirical covering number
- Concentration estimates