Maximizing the Margin with Boosting

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2375)

Abstract

AdaBoost produces a linear combination of weak hypotheses. It has been observed that the generalization error of the algorithm continues to improve even after all examples are classified correctly by the current linear combination, i.e., by a hyperplane in the feature space spanned by the weak hypotheses. The improvement is attributed to the experimental observation that the distances (margins) of the examples to the separating hyperplane keep increasing even when the training error is already zero, that is, all examples are on the correct side of the hyperplane. We give an iterative version of AdaBoost that explicitly maximizes the minimum margin of the examples. We bound the number of iterations and the number of hypotheses used in the final linear combination, which approximates the maximum-margin hyperplane with a certain precision. Our modified algorithm essentially retains the exponential convergence properties of AdaBoost, and our result does not depend on the size of the hypothesis class.
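The algorithm itself is not reproduced on this page, but the idea described in the abstract can be illustrated with a small sketch. The Python code below (hypothetical names such as minimum_margin, adaboost_rho, X_outputs and the fixed target margin rho are illustrative assumptions, not the authors' implementation) computes the l1-normalized minimum margin of a linear combination of weak hypotheses and runs an AdaBoost-style loop whose step size is shifted toward a target margin rho; the paper's algorithm goes further by adapting this target across iterations, which is not shown here.

```python
import numpy as np

def minimum_margin(alphas, hypothesis_outputs, y):
    """Minimum l1-normalized margin of the examples under the current
    linear combination.  hypothesis_outputs is an (n_examples, n_hypotheses)
    matrix of weak-hypothesis predictions in [-1, +1]; y holds labels in {-1, +1}."""
    f = hypothesis_outputs @ alphas                    # combined (unnormalized) prediction
    norm = np.sum(np.abs(alphas))
    return np.min(y * f) / norm if norm > 0 else 0.0   # guard against the all-zero combination

def adaboost_rho(X_outputs, y, n_rounds, rho=0.0):
    """AdaBoost-style loop with a fixed target margin rho (rho = 0 recovers the
    usual AdaBoost step size).  X_outputs: (n, m) predictions of m candidate weak
    hypotheses on n examples.  Returns the coefficient vector alpha (illustrative
    sketch only, not the algorithm analyzed in the paper)."""
    n, m = X_outputs.shape
    d = np.full(n, 1.0 / n)                  # distribution over examples
    alphas = np.zeros(m)
    for _ in range(n_rounds):
        edges = (d * y) @ X_outputs          # edge of each hypothesis under d
        j = int(np.argmax(edges))            # pick the hypothesis with the largest edge
        gamma = min(edges[j], 1.0 - 1e-12)   # clip to keep the log finite
        if gamma <= rho:                     # no hypothesis beats the target margin
            break
        # standard AdaBoost step minus a correction that depends on the target margin
        alpha = 0.5 * (np.log((1 + gamma) / (1 - gamma))
                       - np.log((1 + rho) / (1 - rho)))
        alphas[j] += alpha
        # multiplicative weight update and renormalization
        d *= np.exp(-alpha * y * X_outputs[:, j])
        d /= d.sum()
    return alphas
```

With rho = 0 the loop behaves like plain AdaBoost; tracking minimum_margin(alphas, X_outputs, y) across rounds makes the margin-growth phenomenon from the abstract visible even after the training error reaches zero.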

This work was done while G. Rätsch was at Fraunhofer FIRST Berlin and at UC Santa Cruz. G. Rätsch was partially funded by the DFG under contracts JA 379/91, JA 379/71, and MU 987/1-1, and by the EU in the NeuroCOLT II project. M.K. Warmuth and the visits of G. Rätsch to UC Santa Cruz were partially funded by NSF grant CCR-9821087. G. Rätsch thanks S. Mika, S. Lemm and K.-R. Müller for discussions.





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rätsch, G., Warmuth, M.K. (2002). Maximizing the Margin with Boosting. In: Kivinen, J., Sloan, R.H. (eds) Computational Learning Theory. COLT 2002. Lecture Notes in Computer Science, vol 2375. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45435-7_23

  • DOI: https://doi.org/10.1007/3-540-45435-7_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43836-6

  • Online ISBN: 978-3-540-45435-9

  • eBook Packages: Springer Book Archive
