
Online Gradient Descent Learning Algorithms


Abstract

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity-independent approach to deriving error bounds and convergence results for this algorithm. The essential element of our analysis is the interplay between the generalization error and a weighted cumulative error, which we define in the paper. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately can yield error rates competitive with those in the literature.
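
For intuition, the algorithm in question performs, after receiving the example (x_t, y_t), a gradient step on the pointwise squared loss in the RKHS (up to a constant factor): f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K_{x_t}, where K_{x_t} = K(x_t, ·) is the kernel section at x_t and eta_t is the step size; no regularization term appears in the update. The Python sketch below is illustrative only and is not the authors' code; the Gaussian kernel and the polynomially decaying step sizes eta_t = c t^(-theta) are assumed choices.

```python
import numpy as np


def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian (RBF) kernel K(x, z); an assumed choice of Mercer kernel.
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))


def online_gradient_descent(stream, kernel, step_size):
    # Unregularized least-square online gradient descent in an RKHS:
    #   f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
    # so each hypothesis f_{t+1} is a kernel expansion over the examples seen so far.
    centers, coeffs = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        # Evaluate the current hypothesis f_t at the new input x_t.
        f_xt = sum(c * kernel(x, x_t) for c, x in zip(coeffs, centers))
        eta_t = step_size(t)
        # One gradient step on the pointwise squared loss; no RKHS regularizer.
        centers.append(x_t)
        coeffs.append(-eta_t * (f_xt - y_t))
    return centers, coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.uniform(-1.0, 1.0, size=(200, 1))
    ys = np.sin(np.pi * xs[:, 0]) + 0.1 * rng.standard_normal(200)
    # Polynomially decaying step sizes eta_t = c * t^(-theta); c and theta
    # are illustrative values, not the constants analyzed in the paper.
    centers, coeffs = online_gradient_descent(
        zip(xs, ys), gaussian_kernel, step_size=lambda t: 0.5 * t ** -0.5
    )
    x_new = np.array([0.25])
    f_new = sum(c * gaussian_kernel(x, x_new) for c, x in zip(coeffs, centers))
    print(f"f_T(0.25) = {f_new:.3f}, sin(pi*0.25) = {np.sin(np.pi * 0.25):.3f}")
```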

Author information

Corresponding authors

Correspondence to Yiming Ying or Massimiliano Pontil.

About this article

Cite this article

Ying, Y., Pontil, M. Online Gradient Descent Learning Algorithms. Found Comput Math 8, 561–596 (2008). https://doi.org/10.1007/s10208-006-0237-y

  • DOI: https://doi.org/10.1007/s10208-006-0237-y