Abstract
Classifier ensembles are an active area of research within the machine learning community. One of the most successful techniques is bagging, in which an algorithm (typically a decision tree inducer) is applied over several different training sets, each obtained by applying sampling with replacement to the original database. In this paper we define a framework in which sampling with and without replacement can be viewed as the extreme cases of a more general process, and we analyze the performance of the extension of bagging to this framework.
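To make the idea concrete, here is a minimal, hypothetical sketch of such an interpolation (the paper's exact formulation is not reproduced on this page): a parameter `p` controls whether each drawn instance is returned to the pool, so `p = 1.0` recovers classic bootstrap sampling with replacement and `p = 0.0` recovers sampling without replacement. The function name and parameterization are illustrative assumptions, not the authors' definitions.

```python
import random

def generalized_sample(data, n, p, rng=None):
    """Draw n items from data under a hypothetical interpolation scheme.

    p = 1.0 -> every drawn item is returned to the pool: sampling
               with replacement (the bootstrap used by bagging).
    p = 0.0 -> drawn items are never returned: sampling without
               replacement.
    Intermediate p values blend the two regimes.
    """
    rng = rng or random.Random()
    pool = list(data)
    sample = []
    for _ in range(n):
        if not pool:  # pool can run dry when p < 1 and n > len(data)
            break
        i = rng.randrange(len(pool))
        sample.append(pool[i])
        # With probability p the item stays in the pool (with-replacement
        # behavior); otherwise it is removed (without-replacement behavior).
        if rng.random() >= p:
            pool.pop(i)
    return sample
```

A bagging-style ensemble would then train one tree inducer per call to `generalized_sample`, with `p` as an extra tunable knob alongside the number of base classifiers.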
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Martínez-Otzeta, J.M., Sierra, B., Lazkano, E., Jauregi, E. (2006). On a Unified Framework for Sampling With and Without Replacement in Decision Tree Ensembles. In: Euzenat, J., Domingue, J. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 2006. Lecture Notes in Computer Science(), vol 4183. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11861461_14
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40930-4
Online ISBN: 978-3-540-40931-1