Abstract
This paper considers the least-squares online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity-independent approach to deriving error bounds and convergence results for this algorithm. The essential element of our analysis is the interplay between the generalization error and a weighted cumulative error which we define in the paper. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately can yield error rates competitive with those in the literature.
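To make the setting concrete, the following is a minimal Python sketch of an unregularized least-squares online gradient descent update of the form f_{t+1} = f_t − η_t (f_t(x_t) − y_t) K(x_t, ·), starting from f_1 = 0, with the iterate stored as a kernel expansion over the examples seen so far. The Gaussian kernel, the step-size constants in the schedule η_t = η_1 t^{−θ}, and the toy data stream are illustrative assumptions, not the paper's specific choices or rates.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian RBF kernel K(x, z); the bandwidth sigma is an illustrative choice."""
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def online_gd_rkhs(stream, step0=0.5, theta=0.5, kernel=gaussian_kernel):
    """Unregularized least-squares online gradient descent in an RKHS.

    Update rule: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
    starting from f_1 = 0.  The step size eta_t = step0 * t**(-theta) is a
    typical polynomially decaying schedule; the constants here are
    placeholders, not the ones analyzed in the paper.
    """
    centers, coeffs = [], []

    def f(x):
        # Evaluate the current iterate f_t(x) = sum_i coeffs[i] * K(centers[i], x).
        return sum(c * kernel(z, x) for c, z in zip(coeffs, centers))

    for t, (x_t, y_t) in enumerate(stream, start=1):
        eta_t = step0 * t ** (-theta)
        residual = f(x_t) - y_t           # pointwise least-squares gradient
        centers.append(x_t)
        coeffs.append(-eta_t * residual)  # new expansion coefficient

    # Return the final iterate as a callable kernel expansion.
    return lambda x: sum(c * kernel(z, x) for c, z in zip(coeffs, centers))

# Toy usage: learn a noisy sine function from a stream of 200 examples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
f_hat = online_gd_rkhs(zip(X, y))
print(f_hat(np.array([0.5])))  # should be roughly sin(0.5) ≈ 0.48
```

Because each example is processed once and then folded into the expansion, the method has no explicit regularization; the decay of the step sizes plays the stabilizing role studied in the paper.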
Cite this article
Ying, Y., Pontil, M. Online Gradient Descent Learning Algorithms. Found Comput Math 8, 561–596 (2008). https://doi.org/10.1007/s10208-006-0237-y