
A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2111)

Abstract

In this paper, we study a class of sample-dependent convex optimization problems and derive a general sequential approximation bound for their solutions. This analysis is closely related to the regret-bound framework in online learning; however, we apply it to batch learning algorithms rather than online stochastic gradient descent methods. We illustrate applications of this analysis to some classification and regression problems.
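To make the setting concrete, here is a minimal Python sketch, not taken from the paper: ridge regression serves as a stand-in for a sample-dependent convex optimization problem, and the batch solution fitted on the first t examples is evaluated on example t+1, the kind of sequential quantity a regret-style bound controls. The helper name `ridge_solution` and all constants are illustrative assumptions; the paper's actual bound, constants, and conditions are in the full text.

    # Hypothetical illustration (not the paper's construction): ridge
    # regression is a sample-dependent convex optimization problem.
    # We solve it in batch on growing prefixes of the sample and record
    # the loss of each prefix solution on the next, unseen example,
    # the sequential quantity a regret-style approximation bound controls.
    import numpy as np

    def ridge_solution(X, y, lam):
        # Minimizer of (1/n) * sum_i (w . x_i - y_i)^2 + lam * ||w||^2,
        # via the normal equations of this strictly convex objective.
        n, d = X.shape
        A = X.T @ X / n + lam * np.eye(d)
        return np.linalg.solve(A, X.T @ y / n)

    rng = np.random.default_rng(0)
    n, d, lam = 200, 5, 0.1
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)

    # Sequential evaluation: the batch solution fitted on the first t
    # examples is tested on example t+1 before that example is seen.
    seq_losses = []
    for t in range(d, n):
        w_t = ridge_solution(X[:t], y[:t], lam)
        seq_losses.append(float((X[t] @ w_t - y[t]) ** 2))

    print(f"mean sequential squared loss: {np.mean(seq_losses):.4f}")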




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, T. (2001). A Sequential Approximation Bound for Some Sample-Dependent Convex Optimization Problems with Applications in Learning. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science, vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_5


  • DOI: https://doi.org/10.1007/3-540-44581-1_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42343-0

  • Online ISBN: 978-3-540-44581-4

