
Partitioner Trees for Classification: A New Ensemble Method


Part of the book series: Studies in Computational Intelligence (SCI, volume 245)

Abstract

Two major types of ensemble methods exist: the first varies the training data (Boosting, Bagging), while the second exploits information about each classifier's area of expertise (Grading, Delegating, Arbitrating). This paper presents a new ensemble method, called partitioner trees, that combines both approaches. Information on misclassifications is used to train meta-classifiers, called partitioners, and to split the training data into disjoint subsets, on which succeeding specialised classifiers are constructed. This process yields a binary decision tree with partitioners at the inner nodes, which perform the splitting, and specialised local classifiers at the leaves, which perform the final classification. Partitioner trees are compared to five other ensemble methods in experiments on four different datasets. The results show that on large datasets partitioner trees achieve classification accuracy similar to that of AdaBoost and superior to that of the other meta-classifier-based ensemble methods. In addition, partitioner trees outperform most other ensemble methods with regard to training time and allow parameters to be tuned adaptively.
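To make the construction described above concrete, the following is a minimal sketch of the idea, not the authors' implementation. It assumes scikit-learn-style base learners (DecisionTreeClassifier is used only as a placeholder), and the class name PartitionerTree as well as the parameters max_depth and min_size are illustrative assumptions rather than settings from the paper. Each node trains a local classifier, a partitioner is trained on the resulting misclassification labels, and the partitioner's predictions split the data into two disjoint subsets on which child nodes are built recursively; unseen instances are routed by the partitioners to a leaf, whose specialised classifier makes the final prediction.

# Minimal, hypothetical sketch of a partitioner tree (not the authors' code).
# Assumed: numpy and scikit-learn; names and default parameters are illustrative.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

class PartitionerTree:
    def __init__(self, base_learner=None, max_depth=3, min_size=50):
        self.base_learner = base_learner or DecisionTreeClassifier(max_depth=5)
        self.max_depth = max_depth      # maximum number of partitioner levels
        self.min_size = min_size        # smallest subset a child node may receive

    def fit(self, X, y, depth=0):
        X, y = np.asarray(X), np.asarray(y)
        self._y_dtype = y.dtype
        # Local classifier for this node; it becomes the leaf model if no split follows.
        self.model_ = clone(self.base_learner).fit(X, y)
        self.partitioner_ = None
        if depth >= self.max_depth or len(y) < 2 * self.min_size:
            return self
        # Meta target: was the training instance misclassified by the local model?
        miss = self.model_.predict(X) != y
        if miss.sum() < self.min_size or (~miss).sum() < self.min_size:
            return self  # a split would leave one side too small
        # The partitioner learns to predict misclassification and defines the split.
        self.partitioner_ = clone(self.base_learner).fit(X, miss)
        side = self.partitioner_.predict(X).astype(bool)
        if side.sum() == 0 or (~side).sum() == 0:
            self.partitioner_ = None  # degenerate split, keep this node as a leaf
            return self
        # Specialised child classifiers on the two disjoint subsets.
        args = (self.base_learner, self.max_depth, self.min_size)
        self.child_hard_ = PartitionerTree(*args).fit(X[side], y[side], depth + 1)
        self.child_easy_ = PartitionerTree(*args).fit(X[~side], y[~side], depth + 1)
        return self

    def predict(self, X):
        X = np.asarray(X)
        if self.partitioner_ is None:
            return self.model_.predict(X)
        # Route each instance through the partitioner to a child subtree.
        side = self.partitioner_.predict(X).astype(bool)
        out = np.empty(len(X), dtype=self._y_dtype)
        if side.any():
            out[side] = self.child_hard_.predict(X[side])
        if (~side).any():
            out[~side] = self.child_easy_.predict(X[~side])
        return out

A hypothetical usage would be tree = PartitionerTree().fit(X_train, y_train) followed by tree.predict(X_test); the depth limit and subset-size threshold stand in for the adaptively tunable parameters mentioned in the abstract.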



References

  1. Asuncion, A., Newman, D.J.: UCI machine learning repository, http://www.ics.uci.edu/~mlearn/MLRepository.html

  2. Breiman, L.: Bagging predictors. Tech. Rep. 421, Dept. Stat., Univ. California, Berkeley, CA (1994)

  3. Breiman, L., Friedman, J.H., Olshen, R., Stone, C.J.: Classification and Regression Trees. Chapman & Hall, Boca Raton (1984)

  4. Chan, P., Stolfo, S.J.: Learning arbiter and combiner trees from partitioned data for scaling machine learning. In: Fayyad, U.M., Uthurusamy, R. (eds.) Proc. 1st Int. Conf. Knowl. Discovery and Data Mining, Montreal, QC, pp. 39–44. AAAI Press, Menlo Park (1995)

  5. Ferri, C., Flach, P.A., Hernandez-Orallo, J.: Delegating classifiers. In: Brodley, C.E. (ed.) Proc. 21st Int. Conf. Mach. Learn., Banff, AB. ACM, New York (2004)

  6. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Vitányi, P.M.B. (ed.) EuroCOLT 1995. LNCS, vol. 904, pp. 23–37. Springer, Heidelberg (1995)

  7. Krempl, G., Hofer, V.: Partitioner trees: combining boosting and arbitrating. In: Okun, O., Valentini, G. (eds.) Proc. 2nd Workshop Supervised and Unsupervised Ensemble Methods and Their Appl., Patras, Greece (2007)

  8. Ortega, J., Koppel, M., Argamon, S.: Arbitrating among competing classifiers using learned referees. Knowl. Inf. Syst. 3(4), 470–490 (2001)

  9. Partridge, D., Yates, W.B.: Engineering multiversion neural-net systems. Neural Computation 8(4), 869–893 (1995)

  10. Provost, F., Fawcett, T.: Analysis and visualization of classifier performance: comparison under imprecise class and cost distributions. In: Heckerman, D., Mannila, H., Pregibon, D. (eds.) Proc. 3rd Int. Conf. Knowl. Discovery and Data Mining, Newport Beach, CA, pp. 42–48. AAAI Press, Menlo Park (1997)

  11. R Development Core Team: R: A Language and Environment for Statistical Computing (2006)

  12. Schapire, R.: The strength of weak learnability. Mach. Learn. 5(2), 197–227 (1990)

  13. Seewald, A.K., Fürnkranz, J.: An evaluation of grading classifiers. In: Hoffmann, F., Adams, N., Fisher, D., Guimarães, G., Hand, D.J. (eds.) IDA 2001. LNCS, vol. 2189, p. 115. Springer, Heidelberg (2001)

  14. Wolpert, D.H.: Stacked generalization. Neural Networks 5(2), 241–259 (1992)

  15. Yates, W.B., Partridge, D.: Use of methodological diversity to improve neural network generalisation. Neural Computing and Appl. 4(2), 114–128 (1996)



Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Krempl, G., Hofer, V. (2009). Partitioner Trees for Classification: A New Ensemble Method. In: Okun, O., Valentini, G. (eds) Applications of Supervised and Unsupervised Ensemble Methods. Studies in Computational Intelligence, vol 245. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03999-7_6


  • DOI: https://doi.org/10.1007/978-3-642-03999-7_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03998-0

  • Online ISBN: 978-3-642-03999-7

  • eBook Packages: Engineering
