
Part 6: Geometrical Structure of Boosting Algorithm

  • Tutorial Series on Brain-Inspired Computing

Abstract

This article discusses several boosting methods, which are notable implementations of ensemble learning. Starting from the earliest of them, “boosting by filtering,” an embodiment of the proverb “two heads are better than one,” the more advanced methods AdaBoost and U-Boost are introduced. The geometrical structure of boosting algorithms and some of their statistical properties, such as consistency and robustness, are then discussed, and simulation studies are presented to confirm the described behavior of the algorithms.
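
As a concrete companion to the abstract, the following is a minimal sketch of the standard discrete AdaBoost update with one-dimensional decision stumps as weak learners. The dataset layout, the stump search, the round count T, and all function names are illustrative assumptions, not details taken from the article; only the weight update itself follows the standard AdaBoost recipe.

    # Minimal sketch of discrete AdaBoost with decision stumps.
    # Illustrative assumptions: stump weak learner, T rounds, NumPy arrays.
    import numpy as np

    def train_stump(X, y, w):
        """Find the threshold stump minimizing weighted error over all features."""
        n, d = X.shape
        best = (np.inf, 0, 0.0, 1)          # (error, feature, threshold, sign)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def adaboost(X, y, T=20):
        """y must take values in {-1, +1}; returns a list of weighted stumps."""
        n = len(y)
        w = np.full(n, 1.0 / n)              # uniform initial weights
        ensemble = []
        for _ in range(T):
            err, j, thr, sign = train_stump(X, y, w)
            if err >= 0.5:                   # weak learner no better than chance
                break
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            pred = sign * np.where(X[:, j] <= thr, 1, -1)
            w *= np.exp(-alpha * y * pred)   # exponential reweighting:
            w /= w.sum()                     # misclassified points gain weight
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        """Sign of the weighted vote F(x) = sum_t alpha_t f_t(x)."""
        F = np.zeros(len(X))
        for alpha, j, thr, sign in ensemble:
            F += alpha * sign * np.where(X[:, j] <= thr, 1, -1)
        return np.sign(F)

The exponential reweighting step is what links AdaBoost to minimization of the exponential loss; loosely speaking, the U-Boost family replaces this exponential with a more general convex function U, which controls how strongly misclassified (and possibly mislabeled) examples are emphasized and underlies the robustness properties discussed in the article.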

Author information

Corresponding author

Correspondence to Takafumi Kanamori.

About this article

Cite this article

Kanamori, T., Takenouchi, T. & Murata, N. Part 6: Geometrical Structure of Boosting Algorithm. New Gener. Comput. 25, 117–141 (2006). https://doi.org/10.1007/s00354-006-0006-0
