On a Unified Framework for Sampling With and Without Replacement in Decision Tree Ensembles

  • Conference paper
Artificial Intelligence: Methodology, Systems, and Applications (AIMSA 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4183)

Abstract

Classifier ensembles are an active area of research within the machine learning community. One of the most successful techniques is bagging, where an algorithm (typically a decision tree inducer) is applied to several different training sets, each obtained by applying sampling with replacement to the original database. In this paper we define a framework in which sampling with and without replacement can be viewed as the extreme cases of a more general process, and analyze the performance of the extension of bagging to this framework.
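The abstract does not spell out the authors' unified sampling scheme; as an illustrative sketch only, here is one way sampling with and without replacement can be treated as extremes of a single parameterized process. The `p_replace` parameter is a hypothetical knob introduced for this example, not the paper's formulation:

```python
import random

def generalized_sample(data, p_replace, rng=None):
    """Draw len(data) items from data. After each draw, the chosen item
    is returned to the pool with probability p_replace.

    p_replace = 1.0 -> classic bootstrap sampling (with replacement)
    p_replace = 0.0 -> sampling without replacement (a permutation)
    Intermediate values interpolate between the two extremes.

    NOTE: illustrative sketch, not the scheme from the paper.
    """
    rng = rng or random.Random()
    pool = list(data)
    sample = []
    for _ in range(len(data)):
        idx = rng.randrange(len(pool))
        sample.append(pool[idx])
        if rng.random() >= p_replace:
            # Remove the drawn item: this draw acted "without replacement".
            pool[idx] = pool[-1]
            pool.pop()
    return sample
```

At `p_replace = 0.0` every draw removes its item, so the result is exactly a permutation of the original data; at `p_replace = 1.0` the pool is never depleted and the result is an ordinary bootstrap sample, as used by bagging.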



Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Martínez-Otzeta, J.M., Sierra, B., Lazkano, E., Jauregi, E. (2006). On a Unified Framework for Sampling With and Without Replacement in Decision Tree Ensembles. In: Euzenat, J., Domingue, J. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 2006. Lecture Notes in Computer Science, vol 4183. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11861461_14

  • DOI: https://doi.org/10.1007/11861461_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40930-4

  • Online ISBN: 978-3-540-40931-1

  • eBook Packages: Computer Science (R0)
