
An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies

  • Conference paper
Algorithmic Learning Theory (ALT 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9925)


Abstract

This paper presents a competitive-prediction-style upper bound on the cumulative square loss of the Aggregating Algorithm for Regression with Changing Dependencies in the linear case. The algorithm can compete with any sequence of linear predictors provided the sum of squared Euclidean norms of the differences between consecutive regression coefficient vectors grows at a sublinear rate.
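For background, the changing-dependencies algorithm builds on the classic Aggregating Algorithm for Regression (AAR) of Vovk, which in the linear case admits a closed-form ridge-regression-style prediction. The sketch below implements that standard linear AAR, not the changing-dependencies variant studied in the paper; the function name and the regularisation parameter `a` are illustrative choices.

```python
import numpy as np

def aar_predictions(X, y, a=1.0):
    """Standard linear Aggregating Algorithm for Regression (Vovk).

    At step t it predicts x_t with weights from a ridge-style system
    that also includes the current input x_t in the Gram matrix.
    """
    n = X.shape[1]
    A = a * np.eye(n)        # A_t = a*I + sum_{s<t} x_s x_s^T
    b = np.zeros(n)          # b_t = sum_{s<t} y_s x_s
    preds = []
    for x, target in zip(X, y):
        A_t = A + np.outer(x, x)       # regularise against the current input
        w = np.linalg.solve(A_t, b)
        preds.append(w @ x)
        A = A_t                        # accumulate after predicting
        b = b + target * x
    return np.array(preds)
```

Since `b` is zero initially, the first prediction is exactly 0; on data generated by a fixed linear dependence the predictions converge to those of the true coefficient vector. The changing-dependencies variant instead tracks a drifting comparator sequence u_1, u_2, ..., whose total squared drift must be sublinear for the bound of the paper to be non-trivial.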


Notes

  1. The fact that the powers of n and T sum to 1 makes the straightforward kernelisation of the bound based on the representer theorem useless. This observation may potentially lead to a lower bound.


Acknowledgement

The author has been supported by the Leverhulme Trust through the grant RPG-2013-047 ‘Online self-tuning learning algorithms for handling historical information’. The author would like to thank Vladimir Vovk, Dmitry Adamskiy, and Vladimir V’yugin for useful discussions. Special thanks to Alexey Chernov, who helped to simplify the statement of the main result.

Author information

Correspondence to Yuri Kalnishkan.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Kalnishkan, Y. (2016). An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies. In: Ortner, R., Simon, H., Zilles, S. (eds.) Algorithmic Learning Theory. ALT 2016. Lecture Notes in Computer Science, vol. 9925. Springer, Cham. https://doi.org/10.1007/978-3-319-46379-7_16

  • DOI: https://doi.org/10.1007/978-3-319-46379-7_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46378-0

  • Online ISBN: 978-3-319-46379-7

  • eBook Packages: Computer Science; Computer Science (R0)
