Abstract
This paper presents a new view of majority voting as a Monte Carlo stochastic algorithm. The relation between the two approaches allows AdaBoost's example-weighting strategy to be compared with the greedy covering strategy long used in Machine Learning. Although one might expect the greedy strategy to be highly prone to overfitting, extensive experimental results do not support this expectation: the greedy strategy shows no clear overfitting, runs in at least one order of magnitude less time, reaches zero error on the training set in a few trials, and most of the time yields a test-set error comparable to, if not lower than, that exhibited by AdaBoost.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Esposito, R., Saitta, L. (2001). Boosting as a Monte Carlo Algorithm. In: Esposito, F. (eds) AI*IA 2001: Advances in Artificial Intelligence. AI*IA 2001. Lecture Notes in Computer Science, vol 2175. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45411-X_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42601-1
Online ISBN: 978-3-540-45411-3
eBook Packages: Springer Book Archive