
A Geometric Approach to Leveraging Weak Learners

  • Conference paper
Computational Learning Theory (EuroCOLT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1572)

Abstract

AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore we provide two transformations for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost’s, and their empirical performance is similar to AdaBoost’s.
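
To make the gradient-descent view concrete, the following is a minimal sketch (in Python with NumPy) of a generic leveraging loop in which the weights handed to the weak learner are the normalized direction of steepest descent of a potential with respect to the current margins. The sketch instantiates the potential with AdaBoost's exponential potential, for which that direction is already non-negative and normalizes directly into a distribution; the paper's new potential and its two transformations for handling negative components are not reproduced here, and the weak_learner(X, y, dist) interface is an assumed convention for illustration.

    import numpy as np

    def leverage(weak_learner, X, y, T=50):
        """Generic leveraging loop viewed as gradient descent on a potential.

        Sketch only: uses AdaBoost's exponential potential sum_i exp(-margin_i).
        Assumes labels y are in {-1, +1} and that weak_learner(X, y, dist)
        returns a hypothesis h with h(X) in {-1, +1}^n.
        """
        n = len(y)
        margins = np.zeros(n)   # margins y_i * F(x_i) of the combined hypothesis F
        ensemble = []           # (coefficient, weak hypothesis) pairs

        for _ in range(T):
            # The steepest-descent direction of the potential with respect to
            # the margins is exp(-margin_i); for this potential it is already
            # non-negative, so normalizing it gives the distribution that is
            # handed to the weak learner.
            grad = np.exp(-margins)
            dist = grad / grad.sum()

            h = weak_learner(X, y, dist)
            pred = h(X)
            edge = float(np.sum(dist * y * pred))   # weighted correlation with labels
            if edge <= 0:
                break                               # weak learner gained no advantage

            alpha = 0.5 * np.log((1 + edge) / (1 - edge))  # AdaBoost's step size
            ensemble.append((alpha, h))
            margins += alpha * y * pred             # take the gradient step

        def combined(X_new):
            return np.sign(sum(a * h(X_new) for a, h in ensemble))
        return combined

For a potential whose steepest-descent direction can have negative components, the normalization step above would have to be replaced by one of the transformations the paper introduces for mapping such directions to valid distributions.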

Both authors were supported by NSF Grant CCR 9700201.

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Duffy, N., Helmbold, D. (1999). A Geometric Approach to Leveraging Weak Learners. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_3

  • DOI: https://doi.org/10.1007/3-540-49097-3_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65701-9

  • Online ISBN: 978-3-540-49097-5

  • eBook Packages: Springer Book Archive
