
Learning rates of least-square regularized regression with strongly mixing observation

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

This paper studies the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces under strongly mixing observations. We first bound the sample error for exponentially strongly mixing observations and derive the rate of approximation via a Jackson-type approximation theorem based on exponentially strongly mixing sequences. A bound on the generalization error of least-square regularized regression is then obtained by combining the estimates of the sample error and the regularization error.
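The algorithm analyzed in the paper, least-square regularized regression over a reproducing kernel Hilbert space, takes the familiar kernel ridge regression form: by the representer theorem, the minimizer of the regularized empirical risk is a kernel expansion over the sample. Below is a minimal sketch of that scheme; the Gaussian kernel, the parameter values, and the synthetic AR(1) input sequence (a standard example of an exponentially strongly mixing process) are illustrative assumptions, not choices taken from the paper.

import numpy as np

# Minimal sketch of least-square regularized regression in an RKHS
# (kernel ridge regression). Kernel, data, and parameter values are
# illustrative assumptions, not taken from the paper.

def gaussian_kernel(X, Z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(X, y, lam=0.1, sigma=1.0):
    # By the representer theorem, the minimizer of
    #   (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    # has the form f(x) = sum_i alpha_i K(x, x_i), where
    #   (K + lam * m * I) alpha = y.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def predict(X_train, alpha, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Example with weakly dependent inputs: an AR(1) path, which is
# exponentially strongly mixing, unlike an i.i.d. sample.
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
X = x[:, None]
y = np.sin(x) + rng.normal(scale=0.1, size=200)
alpha = fit(X, y, lam=0.01)
print(predict(X, alpha, X[:5]))

The dependence between successive observations is exactly what separates the paper's setting from the classical i.i.d. analysis: the sample-error bound must account for the mixing coefficients of the input sequence rather than relying on independence.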



Author information

Corresponding author

Correspondence to Feilong Cao.

Additional information

This research was supported by the National Natural Science Foundation of China (Nos. 90818020, 60873206).


About this article

Cite this article

Zhang, Y., Cao, F. & Yan, C. Learning rates of least-square regularized regression with strongly mixing observation. Int. J. Mach. Learn. & Cyber. 3, 277–283 (2012). https://doi.org/10.1007/s13042-011-0058-4
