
On relative loss bounds in generalized linear regression

Conference paper in: Fundamentals of Computation Theory (FCT 1999)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1684)

Abstract

When relative loss bounds are considered, the performance of an on-line learning algorithm is compared to the performance of a class of off-line algorithms, called experts. In this paper we reconsider a result by Vovk, namely an upper bound on the on-line relative loss for linear regression with square loss; here the experts are linear functions. We give a shorter and simpler proof of Vovk’s result and a new motivation for the choice of predictions made by Vovk’s learning algorithm. This is done by calculating the prediction that is, in a certain sense, best for the last trial of a sequence of trials when it is known that the outcome variable is bounded. We try to generalize these ideas to the case of generalized linear regression, where the experts are neurons, and give a formula for the “best” prediction for the last trial in this case, too. This prediction turns out to be essentially an integral over the “best” expert applied to the last instance. Predictions that are “optimal” in this sense might be good predictions for long sequences of trials as well.
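
The abstract’s prediction rule for the linear case can be made concrete. The following is a minimal sketch, not code from the paper: it implements the forecaster analyzed by Vovk [6] and by Azoury and Warmuth [1] for linear regression with square loss, in which the current instance is folded into the regularized covariance matrix before predicting, while its still-unknown outcome is not. The variable names and the toy trial sequence are illustrative only.

    import numpy as np

    def vovk_predict(A, b, x):
        # A: running matrix a*I + sum_s x_s x_s^T over instances seen so far
        # b: running vector sum_s y_s x_s over past (instance, outcome) pairs
        # x: current instance, whose outcome has not been revealed yet
        # The current instance enters A before the prediction is made; its
        # outcome cannot enter b, since it is unknown at prediction time.
        A = A + np.outer(x, x)
        y_hat = x @ np.linalg.solve(A, b)
        return y_hat, A

    # Illustrative on-line protocol with synthetic, bounded outcomes (|y| <= 1).
    a = 1.0                       # regularization parameter
    d = 3                         # instance dimension
    A = a * np.eye(d)
    b = np.zeros(d)
    rng = np.random.default_rng(0)
    w_true = np.array([0.5, -1.0, 2.0])        # hypothetical "best" linear expert
    for t in range(10):
        x = rng.standard_normal(d)
        y_hat, A = vovk_predict(A, b, x)       # learner commits to a prediction
        y = float(np.clip(w_true @ x, -1.0, 1.0))  # bounded outcome is revealed
        b = b + y * x                          # only now does y enter b

The only difference from ordinary ridge regression is that the current instance contributes to the inverted matrix before its outcome is known; a natural reading of the abstract is that its “best prediction for the last trial” calculation is what motivates exactly this choice.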


References

  1. Azoury, K., Warmuth, M.: Relative loss bounds for on-line density estimation with the exponential family of distributions. To appear in: Proc. 15th Conference on Uncertainty in Artificial Intelligence (UAI’99), 1999.

  2. Beckenbach, E. F., Bellman, R.: Inequalities. Springer, Berlin, 1965.

  3. Foster, D. P.: Prediction in the worst case. Annals of Statistics 19, 1084–1090, 1991.

  4. Kivinen, J., Warmuth, M.: Relative loss bounds for multidimensional regression problems. In: Jordan, M., Kearns, M., Solla, S. (eds.) Advances in Neural Information Processing Systems 10 (NIPS 97), pp. 287–293. MIT Press, Cambridge, MA, 1998.

  5. Kivinen, J., Warmuth, M.: Additive versus exponentiated gradient updates for linear prediction. Information and Computation 132:1–64, 1997.

  6. Vovk, V.: Competitive on-line linear regression. Technical Report CSD-TR-97-13, Department of Computer Science, Royal Holloway, University of London, 1997.


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Forster, J. (1999). On relative loss bounds in generalized linear regression. In: Ciobanu, G., Păun, G. (eds) Fundamentals of Computation Theory. FCT 1999. Lecture Notes in Computer Science, vol 1684. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48321-7_22

  • DOI: https://doi.org/10.1007/3-540-48321-7_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66412-3

  • Online ISBN: 978-3-540-48321-2

