Abstract
In this paper, we study a class of sample-dependent convex optimization problems and derive a general sequential approximation bound for their solutions. This analysis is closely related to the regret-bound framework in online learning. However, we apply it to batch learning algorithms rather than online stochastic gradient descent methods. Applications of this analysis to some classification and regression problems are illustrated.
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhang, T. (2001). A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science, vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_5
Print ISBN: 978-3-540-42343-0
Online ISBN: 978-3-540-44581-4