Improving Boosting by Exploiting Former Assumptions

  • Conference paper
Mining Complex Data (MCD 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4944)

Abstract

Reducing the generalization error is one of the principal motivations of research in machine learning. A large body of work has therefore been devoted to classifier aggregation methods, which generally improve on the performance of a single classifier through voting techniques. Among these aggregation methods, Boosting is the most practical thanks to its adaptive update of the distribution over the examples, which exponentially increases the weights of misclassified examples. However, the method is criticized for overfitting and for its convergence speed, especially in the presence of noise. In this study, we propose a new approach consisting of modifications to the AdaBoost algorithm. We show that it is possible to improve the performance of Boosting by exploiting the assumptions (hypotheses) generated at former iterations to correct the weights of the examples. An experimental study shows the interest of this new approach, called the hybrid approach.
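
The abstract refers to two mechanisms: AdaBoost's standard exponential re-weighting of misclassified examples, and a correction of those weights that reuses the hypotheses produced at earlier iterations. The exact correction rule is not given on this page, so the following Python sketch is only illustrative: the function name boost_with_former_hypotheses, the ensemble-margin test, and the 0.5 damping factor are assumptions, not the authors' formula. It shows one plausible way former hypotheses could temper the exponential growth of the weights of hard (possibly noisy) examples.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_with_former_hypotheses(X, y, n_rounds=50):
    """Illustrative AdaBoost variant; y is a NumPy array of labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # distribution over the examples
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))            # weighted training error
        if err >= 0.5:                           # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        hypotheses.append(stump)
        alphas.append(alpha)

        # Standard AdaBoost update: exponentially increase the weights of
        # misclassified examples and decrease those of correct ones.
        w = w * np.exp(-alpha * y * pred)

        # Hypothetical correction using former hypotheses (assumption): damp the
        # weights of examples that the ensemble built so far already classifies
        # correctly, limiting runaway weight growth on noisy examples.
        ensemble_score = sum(a * h.predict(X) for a, h in zip(alphas, hypotheses))
        w[y * ensemble_score > 0] *= 0.5         # 0.5 is an arbitrary damping factor
        w = w / w.sum()                          # renormalise to a distribution

    def predict(X_new):
        score = sum(a * h.predict(X_new) for a, h in zip(alphas, hypotheses))
        return np.where(score >= 0, 1, -1)
    return predict

Whatever the precise rule, such a correction illustrates the trade-off the paper targets: softening AdaBoost's focus on repeatedly misclassified examples buys robustness to label noise at the cost of some aggressiveness on genuinely hard examples.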

Editor information

Zbigniew W. Raś, Shusaku Tsumoto, Djamel Zighed

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bahri, E., Nicoloyannis, N., Maddouri, M. (2008). Improving Boosting by Exploiting Former Assumptions. In: Raś, Z.W., Tsumoto, S., Zighed, D. (eds) Mining Complex Data. MCD 2007. Lecture Notes in Computer Science (LNAI), vol. 4944. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-68416-9_11

  • DOI: https://doi.org/10.1007/978-3-540-68416-9_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-68415-2

  • Online ISBN: 978-3-540-68416-9

  • eBook Packages: Computer Science, Computer Science (R0)
