Abstract
This paper studies the regularized learning algorithm associated with the least-squares loss, strongly mixing observations, and reproducing kernel Hilbert spaces. We first bound the sample error for exponentially strongly mixing observations and derive the rate of approximation via a Jackson-type approximation theorem for exponentially strongly mixing sequences. The generalization error of least-squares regularized regression is then obtained by combining the estimates of the sample error and the regularization error.
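For concreteness, here is a minimal sketch (not the authors' implementation) of the least-squares regularized regression scheme the abstract refers to: by the representer theorem, the minimizer of the regularized empirical risk over an RKHS reduces to a linear system in the kernel Gram matrix. The Gaussian kernel, the parameter values, and the AR(1) sampling process (used only as a simple example of an exponentially strongly mixing sequence) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel, a standard Mercer kernel
    # whose RKHS the algorithm operates in. (Illustrative choice.)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rls_fit(X, y, lam=0.1, sigma=1.0):
    # Representer theorem: the minimizer of
    #   (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    # over the RKHS is f(x) = sum_j alpha_j K(x, x_j), where
    #   (K + m * lam * I) alpha = y.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + m * lam * np.eye(m), y)

def rls_predict(alpha, X_train, X_new, sigma=1.0):
    # Evaluate f at new points via the kernel expansion.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy usage: inputs are drawn from an AR(1) process, a simple
# exponentially strongly mixing sequence (the samples are dependent,
# not i.i.d., matching the setting the paper analyzes).
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
X = x[:, None]
y = np.sin(3 * x) + rng.normal(scale=0.1, size=200)

alpha = rls_fit(X, y, lam=1e-2, sigma=0.5)
print(rls_predict(alpha, X, X[:5], sigma=0.5))
```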
Additional information
The research was supported by the National Natural Science Foundation of China (Nos. 90818020, 60873206).
Cite this article
Zhang, Y., Cao, F. & Yan, C. Learning rates of least-square regularized regression with strongly mixing observation. Int. J. Mach. Learn. & Cyber. 3, 277–283 (2012). https://doi.org/10.1007/s13042-011-0058-4