
What influences the accuracy of decision tree ensembles?

Published in the Journal of Intelligent Information Systems.

Abstract

An ensemble in machine learning is a set of models (such as classifiers or predictors) that are induced individually from data, by one or more learning algorithms for a given task, and that then work collectively in the hope of producing better decisions. In this paper we investigate the factors that influence ensemble performance: the accuracy of the individual classifiers, the diversity between them, the number of classifiers in the ensemble, and the decision fusion strategy. Among these, diversity is believed to be a key factor, but it is more complex and harder to measure quantitatively, so it was chosen as the focus of this study, together with its relationships to the other factors. A technique was devised to build ensembles of decision trees induced with randomly selected features. Three sets of experiments were performed on 12 benchmark datasets, and the results indicate that (i) a high level of diversity indeed makes an ensemble more accurate and robust than its individual models; and (ii) small ensembles can produce results as good as, or better than, large ensembles, provided that appropriate (e.g. more diverse) models are selected for inclusion. This implies that, when scaling up to larger databases, the greater efficiency of smaller ensembles becomes increasingly significant and beneficial. As a test case study, ensembles were built on these findings for a real-world application, osteoporosis classification; in each of the three datasets used, the ensembles consistently and reliably out-performed individual decision trees.
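The two ensemble ingredients named above, a decision fusion strategy and a quantitative diversity measure, can be sketched in plain Python. This is an illustrative sketch only, not the authors' implementation: it fuses predictions by plurality (majority) vote and measures diversity as the mean pairwise disagreement rate between classifiers, one of several diversity measures in the literature. The prediction matrix is hypothetical.

```python
from itertools import combinations
from collections import Counter

def majority_vote(predictions):
    """Fuse one prediction per classifier into a single label by plurality vote."""
    return Counter(predictions).most_common(1)[0][0]

def pairwise_disagreement(pred_matrix):
    """Mean fraction of samples on which a pair of classifiers disagree.

    pred_matrix: one prediction list per classifier, all of equal length.
    Higher values indicate a more diverse ensemble.
    """
    n_samples = len(pred_matrix[0])
    pairs = list(combinations(range(len(pred_matrix)), 2))
    total = sum(
        sum(a != b for a, b in zip(pred_matrix[i], pred_matrix[j])) / n_samples
        for i, j in pairs
    )
    return total / len(pairs)

# Hypothetical predictions of three classifiers on five samples.
preds = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
]

# Fuse column-wise: one ensemble decision per sample.
fused = [majority_vote(col) for col in zip(*preds)]
print(fused)                        # [1, 0, 1, 1, 0]
print(pairwise_disagreement(preds)) # mean of 0.4, 0.4, 0.8 = 0.5333...
```

The same fusion routine applies regardless of how the base classifiers were induced (e.g. from randomly selected feature subsets, as in the paper).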

(Figures 1–22 appear in the full article.)

Corresponding author

Correspondence to Wenjia Wang.


Cite this article

Richards, G., & Wang, W. (2012). What influences the accuracy of decision tree ensembles? Journal of Intelligent Information Systems, 39, 627–650. https://doi.org/10.1007/s10844-012-0206-7

