
Generalization Bounds of Regularization Algorithm with Gaussian Kernels

Neural Processing Letters

Abstract

In many practical applications, the performance of a learning algorithm is not determined by a single factor such as the complexity of the hypothesis space, the stability of the algorithm, or the quality of the data. This paper addresses the performance of the regularization algorithm associated with Gaussian kernels. The main purpose is to provide a framework for evaluating the generalization performance of the algorithm jointly in terms of hypothesis space complexity, algorithmic stability, and data quality. New bounds on the generalization error of the algorithm, measured by the regularization error and the sample error, are established. It is shown that the regularization error decays polynomially under certain conditions, and that the new bounds depend simultaneously on the uniform stability of the algorithm, the covering number of the hypothesis space, and the information carried by the data. As an application, the obtained results are applied to several special regularization algorithms, and some new results for these algorithms are deduced.
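For concreteness, the setting can be sketched in the standard Cucker–Smale notation; the notation below is an assumption for illustration, not quoted from the paper. Taking the least-squares loss as an example, the regularization algorithm trained on a sample $\mathbf z=\{(x_i,y_i)\}_{i=1}^m$ returns the minimizer of the regularized empirical risk over the reproducing kernel Hilbert space $\mathcal H_\sigma$ induced by the Gaussian kernel, and its excess risk splits into the two terms named in the abstract:

% A minimal sketch under assumed (standard) notation:
% Gaussian kernel and regularized least-squares estimator.
\[
  K_\sigma(x,t)=\exp\!\Bigl(-\frac{\|x-t\|^2}{\sigma^2}\Bigr),
  \qquad
  f_{\mathbf z}=\operatorname*{arg\,min}_{f\in\mathcal H_\sigma}
  \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^2
  +\lambda\,\|f\|_{\mathcal H_\sigma}^{2}.
\]
% With \mathcal E the expected risk, \mathcal E_{\mathbf z} the empirical
% risk, f_\rho the regression function, and f_\lambda the minimizer of
% \mathcal E(f)+\lambda\|f\|^2 over \mathcal H_\sigma, one has
\[
  \mathcal E(f_{\mathbf z})-\mathcal E(f_\rho)\le
  \underbrace{\bigl[\mathcal E(f_{\mathbf z})-\mathcal E_{\mathbf z}(f_{\mathbf z})\bigr]
  +\bigl[\mathcal E_{\mathbf z}(f_\lambda)-\mathcal E(f_\lambda)\bigr]}_{\text{sample error}}
  \;+\;
  \underbrace{\inf_{f\in\mathcal H_\sigma}\bigl\{\mathcal E(f)-\mathcal E(f_\rho)
  +\lambda\|f\|_{\mathcal H_\sigma}^{2}\bigr\}}_{\text{regularization error }\mathcal D(\lambda)}.
\]

In this decomposition, the sample error is the part controlled through the uniform stability of the map $\mathbf z\mapsto f_{\mathbf z}$ and the covering number of the hypothesis space, while $\mathcal D(\lambda)$ is the regularization error that the abstract states decays polynomially under suitable conditions.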



Acknowledgments

This research was supported by the National Natural Science Foundation of China (Nos. 61272023, 61101240) and the Major Program of the National Social Science Foundation of China (No. 11&ZD156).

Author information

Corresponding author

Correspondence to Feilong Cao.


About this article

Cite this article

Cao, F., Liu, Y. & Zhang, W. Generalization Bounds of Regularization Algorithm with Gaussian Kernels. Neural Process Lett 39, 179–194 (2014). https://doi.org/10.1007/s11063-013-9298-5

