Theoretical Views of Boosting and Applications

  • Conference paper
  • In: Algorithmic Learning Theory (ALT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1720)

Abstract

Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting, including analyses of AdaBoost’s training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for multiclass classification problems. Some empirical work and applications are also described.
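
As a concrete illustration of the algorithm the abstract surveys, here is a minimal Python sketch of AdaBoost with decision stumps as the weak learner. The reweighting rule and the weighted majority vote follow Freund and Schapire's AdaBoost; the stump learner and the names (fit_stump, adaboost, predict) are illustrative choices of this sketch, not details taken from the paper.

import numpy as np

def fit_stump(X, y, w):
    """Weak learner: the single-feature threshold rule with lowest weighted error."""
    m, n = X.shape
    best = None  # (weighted error, feature index, threshold, polarity)
    for j in range(n):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, T=20):
    """Labels y must be in {-1, +1}; returns the weighted ensemble of stumps."""
    m = len(y)
    w = np.full(m, 1.0 / m)                       # initial distribution D_1(i) = 1/m
    ensemble = []
    for _ in range(T):
        err, j, thr, pol = fit_stump(X, y, w)
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # guard the log below
        alpha = 0.5 * np.log((1.0 - err) / err)   # weight of this weak hypothesis
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)         # up-weight examples this stump got wrong
        w = w / w.sum()                           # normalize to get the next distribution
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Final hypothesis: sign of the alpha-weighted vote of the stumps."""
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)

# Toy usage on a separable one-feature problem.
X = np.array([[0.1], [0.35], [0.4], [0.7], [0.8], [0.9]])
y = np.array([-1, -1, -1, 1, 1, 1])
print(predict(adaboost(X, y, T=5), X))            # reproduces y on this toy set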


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schapire, R.E. (1999). Theoretical Views of Boosting and Applications. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science (LNAI), vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_2

  • DOI: https://doi.org/10.1007/3-540-46769-6_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66748-3

  • Online ISBN: 978-3-540-46769-4
