
Ensembles and Multiple Classifiers: A Game-Theoretic View

  • Conference paper
Multiple Classifier Systems (MCS 2011)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 6713)


Aggregating Strategies

The study of multiple classifier systems is a fundamental topic in modern machine learning. However, early work on aggregation of predictors can be traced back to the Fifties, in the area of game theory. At that time, the pioneering work of James Hannan [11] and David Blackwell [2] laid down the foundations of repeated game theory. In a nutshell, a repeated game is the game-theoretic interpretation of learning. In games played once, lacking any information about the opponent, the best a player can do is to play the minimax strategy (the best strategy against the worst possible opponent). In repeated games, by examining the history of past opponent moves, the player acquires information about the opponent’s behavior and can adapt to it, in order to achieve a better payoff than that guaranteed by the minimax strategy.
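To make the link with classifier aggregation concrete, below is a minimal illustrative sketch, in Python, of an exponentially weighted forecaster that combines the binary predictions of several base classifiers, in the spirit of the weighted majority algorithm [15] and of aggregating strategies [18]. The function and parameter names (exponentially_weighted_forecaster, eta) are invented for the example; this is not the specific construction analyzed in the paper.

```python
import math

def exponentially_weighted_forecaster(expert_predictions, outcomes, eta=0.5):
    """Aggregate binary predictions from several base classifiers ("experts")
    by exponential weighting, in the spirit of weighted majority [15] and
    aggregating strategies [18].  Illustrative sketch only.

    expert_predictions: list of rounds, each a list of {0,1} predictions,
                        one per expert.
    outcomes:           true {0,1} label revealed after each round.
    eta:                learning rate controlling how fast weights shrink.
    """
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_predictions, outcomes):
        # Weighted vote: predict 1 if the experts voting 1 carry at least
        # half of the total weight.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        forecast = 1 if vote_one >= sum(weights) / 2 else 0
        mistakes += int(forecast != y)
        # Multiplicatively shrink the weight of every expert that erred.
        weights = [w * math.exp(-eta * int(p != y))
                   for w, p in zip(weights, preds)]
    return mistakes, weights
```

After each round, the experts that erred are multiplicatively down-weighted, so the aggregated forecaster gradually concentrates its weight on the base classifiers that perform best on the observed sequence, regardless of how that sequence is generated.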


References

  1. Azoury, K.S., Warmuth, M.K.: Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning 43(3), 211–246 (2001)

  2. Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics 6, 1–8 (1956)

  3. Cavallanti, G., Cesa-Bianchi, N., Gentile, C.: Linear algorithms for online multitask classification. Journal of Machine Learning Research 11, 2597–2630 (2010)

  4. Cesa-Bianchi, N., Conconi, A., Gentile, C.: On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory 50(9), 2050–2057 (2004)

  5. Cesa-Bianchi, N., Freund, Y., Helmbold, D.P., Haussler, D., Schapire, R., Warmuth, M.K.: How to use expert advice. Journal of the ACM 44(3), 427–485 (1997)

  6. Cesa-Bianchi, N., Long, P.M., Warmuth, M.K.: Worst-case quadratic loss bounds for a generalization of the Widrow-Hoff rule. In: Proceedings of the 6th Annual ACM Workshop on Computational Learning Theory, pp. 429–438. ACM Press, New York (1993)

  7. Cover, T.: Behaviour of sequential predictors of binary sequences. In: Proceedings of the 4th Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, pp. 263–272. Publishing house of the Czechoslovak Academy of Sciences (1965)

  8. Cover, T.: Universal portfolios. Mathematical Finance 1, 1–29 (1991)

  9. Freund, Y., Schapire, R.: Game theory, on-line prediction and boosting. In: Proceedings of the 9th Annual Conference on Computational Learning Theory. ACM Press, New York (1996)

  10. Gentile, C.: The robustness of the p-norm algorithms. Machine Learning 53(3), 265–299 (2003)

  11. Hannan, J.: Approximation to Bayes risk in repeated play. Contributions to the theory of games 3, 97–139 (1957)

  12. Jie, L., Orabona, F., Fornoni, M., Caputo, B., Cesa-Bianchi, N.: OM-2: An online multi-class multi-kernel learning algorithm. In: Proceedings of the 4th IEEE Online Learning for Computer Vision Workshop. IEEE Press, Los Alamitos (2007)

  13. Kakade, S., Shalev-Shwartz, S., Tewari, A.: On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization (2009)

  14. Littlestone, N.: Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2(4), 285–318 (1988)

  15. Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information and Computation 108, 212–261 (1994)

  16. Novikoff, A.B.J.: On convergence proofs of Perceptrons. In: Proceedings of the Symposium on the Mathematical Theory of Automata vol. XII, pp. 615–622 (1962)

  17. Vovk, V.: Competitive on-line statistics. International Statistical Review 69, 213–248 (2001)

  18. Vovk, V.G.: Aggregating strategies. In: Proceedings of the 3rd Annual Workshop on Computational Learning Theory, pp. 372–383 (1990)


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cesa-Bianchi, N. (2011). Ensembles and Multiple Classifiers: A Game-Theoretic View. In: Sansone, C., Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2011. Lecture Notes in Computer Science, vol 6713. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21557-5_2

  • DOI: https://doi.org/10.1007/978-3-642-21557-5_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21556-8

  • Online ISBN: 978-3-642-21557-5

  • eBook Packages: Computer Science, Computer Science (R0)
