Abstract
This article discusses several boosting methods, which are notable implementations of ensemble learning. Starting from the earliest scheme, “boosting by filtering,” an embodiment of the proverb “two heads are better than one,” the more advanced boosting methods AdaBoost and U-Boost are then introduced. The geometrical structure of boosting algorithms and statistical properties such as consistency and robustness are discussed, and simulation studies are presented to confirm the described behavior of the algorithms.
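To fix ideas, the generic AdaBoost scheme mentioned above can be sketched as follows. This is an illustrative implementation with decision stumps as weak learners, not code from the article; all names and the toy interface are my own.

```python
# Illustrative AdaBoost with decision stumps, labels in {-1, +1}.
import math

def stump_predict(feature, threshold, sign, x):
    """Axis-aligned decision stump: +sign above the threshold, -sign below."""
    return sign if x[feature] > threshold else -sign

def fit_stump(X, y, w):
    """Return the stump (error, feature, threshold, sign) minimizing weighted error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for s in (-1, 1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(f, t, s, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                  # uniform initial weights
    ensemble = []                      # list of (alpha, stump parameters)
    for _ in range(rounds):
        err, f, t, s = fit_stump(X, y, w)
        if err >= 0.5:
            break                      # weak learner no better than chance
        err = max(err, 1e-12)          # guard against log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, (f, t, s)))
        # Reweight: misclassified points gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(f, t, s, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]       # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps, thresholded at zero."""
    score = sum(a * stump_predict(f, t, s, x) for a, (f, t, s) in ensemble)
    return 1 if score >= 0 else -1
```

The exponential reweighting step is exactly what gives AdaBoost its geometric interpretation as iterative projection under an exponential loss, the viewpoint developed in the article.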
Cite this article
Kanamori, T., Takenouchi, T. & Murata, N. Part 6: Geometrical Structure of Boosting Algorithm. New Gener. Comput. 25, 117–141 (2006). https://doi.org/10.1007/s00354-006-0006-0