Abstract
Two major types of ensemble methods exist: the first varies the training data (Boosting, Bagging), while the second exploits information on each classifier's area of expertise (Grading, Delegating, Arbitrating). This paper presents a new ensemble method, called partitioner trees, that combines both approaches. Information on misclassifications is used to train meta classifiers, called partitioners, and to split the training data into disjoint subsets, on which succeeding specialised classifiers are constructed. This process yields a binary decision tree with partitioners at the inner nodes, which perform the splitting, and specialised local classifiers at the leaves, which perform the final classification. Partitioner trees are compared to five other ensemble methods in experiments on four different datasets. The results show that on large datasets partitioner trees achieve classification accuracy similar to that of AdaBoost and superior to that of the other meta-classifier-based ensemble methods. In addition, partitioner trees outperform most other ensemble methods with regard to training time and allow parameters to be tuned adaptively.
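The construction described in the abstract can be illustrated with a minimal sketch: train a base classifier, train a partitioner on where it errs, split the data disjointly by the partitioner's routing, and recurse. All class and function names below are illustrative, not the authors' implementation; a trivial one-dimensional threshold stump stands in for both the base learners and the partitioners.

```python
# Sketch of the partitioner-tree idea (illustrative, not the paper's code):
# a partitioner learns to predict where the local classifier errs, and the
# training data is split into disjoint subsets for specialised successors.

class ThresholdStump:
    """Trivial 1-D base learner: predicts 1 if x >= threshold (illustrative)."""
    def fit(self, xs, ys):
        # pick the threshold minimising training error over candidate splits
        best_t, best_err = xs[0], len(xs) + 1
        for t in xs:
            err = sum(1 for x, y in zip(xs, ys) if int(x >= t) != y)
            if err < best_err:
                best_t, best_err = t, err
        self.t = best_t
        return self

    def predict(self, x):
        return int(x >= self.t)

def build_partitioner_node(xs, ys, depth=0, max_depth=2, min_size=4):
    """Build one node of a (sketched) partitioner tree."""
    base = ThresholdStump().fit(xs, ys)
    # partitioner target: 1 where the local classifier is wrong
    wrong = [int(base.predict(x) != y) for x, y in zip(xs, ys)]
    if depth >= max_depth or sum(wrong) in (0, len(xs)):
        return {"leaf": base}  # nothing useful to split on
    partitioner = ThresholdStump().fit(xs, wrong)
    # disjoint split: each point goes to exactly one side
    left = [(x, y) for x, y in zip(xs, ys) if partitioner.predict(x) == 0]
    right = [(x, y) for x, y in zip(xs, ys) if partitioner.predict(x) == 1]
    if len(left) < min_size or len(right) < min_size:
        return {"leaf": base}
    return {
        "partitioner": partitioner,
        "left": build_partitioner_node(*zip(*left), depth + 1, max_depth, min_size),
        "right": build_partitioner_node(*zip(*right), depth + 1, max_depth, min_size),
    }

def predict(node, x):
    """Route x through the partitioners down to a specialised leaf classifier."""
    if "leaf" in node:
        return node["leaf"].predict(x)
    branch = "right" if node["partitioner"].predict(x) else "left"
    return predict(node[branch], x)
```

The inner nodes never classify; they only route an instance toward the leaf whose local classifier is expected to be competent for it, which is the combination of data splitting (as in Boosting/Bagging) and expertise modelling (as in Arbitrating) that the abstract describes.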
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Krempl, G., Hofer, V. (2009). Partitioner Trees for Classification: A New Ensemble Method. In: Okun, O., Valentini, G. (eds) Applications of Supervised and Unsupervised Ensemble Methods. Studies in Computational Intelligence, vol 245. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03999-7_6
Print ISBN: 978-3-642-03998-0
Online ISBN: 978-3-642-03999-7