Abstract
We discuss algorithmic aspects of boosting techniques such as Majority Vote Boosting [Fre95], AdaBoost [FS97], and MadaBoost [DW00a]. Considering a situation where we are given a huge set of examples and asked to find some rule explaining them, we show reasonable algorithmic approaches for dealing with such a huge dataset by boosting techniques. Through this example, we explain how to use, and how to implement, “adaptivity” for scaling up existing algorithms.
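To make the reweighting at the heart of these techniques concrete, below is a minimal Python sketch of the AdaBoost loop of [FS97], using exhaustive decision stumps as the weak learner. It is an illustrative reconstruction, not code from the chapter; the names (train_adaboost, weak_learner, predict) and the {-1, +1} label convention are our own assumptions. A comment marks the one place where MadaBoost [DW00a] departs from AdaBoost, by capping each example's weight at its initial value.

import numpy as np

def weak_learner(X, y, w):
    """Return the decision stump (error, feature, threshold, sign) with the
    smallest weighted error under the current distribution w."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def train_adaboost(X, y, T=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # initial distribution D_1(i) = 1/n
    ensemble = []
    for _ in range(T):
        err, j, t, s = weak_learner(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)  # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # weight given to this hypothesis
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # exponential reweighting: mistakes grow
        # MadaBoost [DW00a] would instead cap each weight at its initial value,
        # w = np.minimum(w, 1.0 / n), keeping weights bounded so that examples
        # can be filtered by random sampling rather than fully reweighted.
        w = w / w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the collected stumps."""
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

The scaling-up theme of the chapter enters exactly where this sketch is weakest: each round above touches all n examples, whereas a filtering booster with bounded weights, such as MadaBoost, can instead work from an adaptively sized random sample per round, as developed in [DW00b, DGW01].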
References
[CSS00] M. Collins, R.E. Schapire, and Y. Singer, Logistic regression, AdaBoost and Bregman distances, in Proc. of the Thirteenth Annual Conference on Computational Learning Theory (COLT’00), ACM, 158–169, 2000.
[Die98] T.G. Dietterich, An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting and randomization, Machine Learning 32, 1–22, 1998.
[DGW01] C. Domingo, R. Gavaldà, and O. Watanabe, Adaptive sampling methods for scaling up knowledge discovery algorithms, Data Mining and Knowledge Discovery (special issue edited by H. Liu and H. Motoda), 2001, to appear.
[DW00a] C. Domingo and O. Watanabe, MadaBoost: a modification of AdaBoost, in Proc. of the Thirteenth Annual Conference on Computational Learning Theory (COLT’00), ACM, 180–189, 2000.
[DW00b] C. Domingo and O. Watanabe, Scaling up a boosting-based learner via adaptive sampling, in Proc. of the Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD’00), Lecture Notes in AI 1805, 317–328, 2000.
[Fre95] Y. Freund, Boosting a weak learning algorithm by majority, Information and Computation, 121(2):256–285, 1995.
[Fre99] Y. Freund, An adaptive version of the boost by majority algorithm, in Proc. of the Twelfth Annual Conference on Computational Learning Theory (COLT’99), ACM, 102–113, 1999.
[FHT98] J. Friedman, T. Hastie, and R. Tibshirani, Additive logistic regression: a statistical view of boosting, Technical report, Stanford University, 1998.
[FS97] Y. Freund and R.E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst. Sci., 55(1):119–139, 1997.
[Mor02] S. Morishita, Computing optimal hypotheses efficiently for boosting, in this volume.
[Qui96] J.R. Quinlan, Bagging, boosting, and C4.5, in Proc. of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), 725–730, 1996.
[Sch90] R.E. Schapire, The strength of weak learnability, Machine Learning, 5(2):197–227, 1990.
[Wat00a] O. Watanabe, Simple sampling techniques for discovery science, IEICE Transactions on Information and Systems, E83-D(1), 19–26, 2000.
[Wat00b] O. Watanabe, Sequential sampling techniques for algorithmic learning theory, in Proc. of the Eleventh International Conference on Algorithmic Learning Theory (ALT’00), Lecture Notes in AI 1968, 27–40, 2000.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this chapter
Watanabe, O. (2002). Algorithmic Aspects of Boosting. In: Arikawa, S., Shinohara, A. (eds) Progress in Discovery Science. Lecture Notes in Computer Science, vol. 2281. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45884-0_25
Print ISBN: 978-3-540-43338-5
Online ISBN: 978-3-540-45884-5