
On-Line Learning: Where Are We So Far?

  • Chapter

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 6202))

Abstract

After a long period of neglect, on-line learning is re-emerging as an important topic in machine learning. On one hand, this is due to new applications involving data streams, the detection of, or adaptation to, changing conditions, and life-long learning. On the other hand, it is now apparent that the current statistical theory of learning, based on the assumption of independent and stationary data distributions, has reached its limits and must be extended or superseded to account for sequencing effects and, more generally, for the information carried by the evolution of the data generation process.

This chapter first presents the current, still predominant paradigm. It then highlights the deviations from this framework introduced by new on-line learning settings, and the challenges they raise both for devising novel algorithms and for developing a satisfactory new theory of learning. It concludes with a brief description of a new learning concept, called tracking, which may hint at the algorithms and theoretical questions that could emerge from looking anew at this all-pervading situation: never to stop learning.





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Cornuéjols, A. (2010). On-Line Learning: Where Are We So Far? In: May, M., Saitta, L. (eds) Ubiquitous Knowledge Discovery. Lecture Notes in Computer Science, vol. 6202. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16392-0_8


  • DOI: https://doi.org/10.1007/978-3-642-16392-0_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16391-3

  • Online ISBN: 978-3-642-16392-0

  • eBook Packages: Computer Science, Computer Science (R0)
