Abstract
AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leveraging algorithm based on a natural potential function. For this potential function, the direction of steepest descent can have negative components. Therefore we provide two transformations for obtaining suitable distributions from these directions of steepest descent. The resulting algorithms have bounds that are incomparable to AdaBoost’s, and their empirical performance is similar to AdaBoost’s.
Both authors were supported by NSF Grant CCR 9700201.
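To make the gradient-descent view of leveraging concrete, the following is a minimal Python sketch of a generic leveraging loop. For concreteness it uses AdaBoost's exponential potential as the worked instance; the paper's own potential and its two transformations are not reproduced here. The names weak_learner and to_distribution, and the clip-and-renormalize fix-up, are illustrative assumptions, not the paper's method.

import numpy as np

def to_distribution(direction):
    # Illustrative transformation (an assumption, not one of the paper's
    # two): zero out any negative components and renormalize so the
    # weights form a probability distribution over the sample.
    w = np.clip(direction, 0.0, None)
    total = w.sum()
    return w / total if total > 0 else np.full_like(w, 1.0 / len(w))

def leverage(X, y, weak_learner, rounds=50):
    # Generic leveraging loop viewed as gradient descent on a potential.
    # Here the potential is AdaBoost's exponential potential
    # Phi(m) = sum_i exp(-m_i), where m_i is example i's margin.
    n = len(y)
    margins = np.zeros(n)
    ensemble = []                          # list of (alpha, hypothesis) pairs
    for _ in range(rounds):
        # Direction of steepest descent of Phi with respect to the margins.
        # For the exponential potential every component is positive, so the
        # fix-up in to_distribution is a no-op; a potential whose descent
        # direction can have negative components is what makes such a
        # transformation necessary.
        direction = np.exp(-margins)
        d = to_distribution(direction)
        h = weak_learner(X, y, d)          # weak hypothesis, h(X) in {-1,+1}^n
        pred = h(X)
        edge = float(np.dot(d, y * pred))  # weighted advantage over random
        if edge <= 0:
            break
        alpha = 0.5 * np.log((1 + edge) / (1 - edge))  # AdaBoost's step size
        margins += alpha * y * pred
        ensemble.append((alpha, h))
    return ensemble

With the exponential potential the descent direction is automatically nonnegative, which is why AdaBoost needs no fix-up; the point of the abstract is that the new potential lacks this property, motivating the two transformations.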
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Duffy, N., Helmbold, D. (1999). A Geometric Approach to Leveraging Weak Learners. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science(), vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65701-9
Online ISBN: 978-3-540-49097-5