
Pruning the Ensemble of ANN Based on Decision Tree Induction

Abstract

Ensemble learning is a powerful approach for achieving more accurate predictions than a single classifier. However, this classification ability comes at the expense of heavy storage requirements and a substantial computational burden on the ensemble. Ensemble pruning is a crucial step for reducing this predictive overhead without worsening the performance of the original ensemble. This paper proposes an efficient and effective ordering-based ensemble pruning method based on decision tree induction. The proposed method maps the dataset and the base classifiers to a new dataset in which ensemble pruning is transformed into a feature selection problem, so that a set of accurate, diverse and complementary base classifiers can be selected by inducing a decision tree. In addition, an evaluation function is designed that deliberately favors candidate sub-ensembles with improved performance in classifying low-margin instances. Comparative experiments on 24 benchmark datasets demonstrate the effectiveness of the proposed method.
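
The pruning idea summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a scikit-learn toolchain, uses MLPClassifier as a stand-in for the base ANNs, presumes integer-coded class labels, and the names X_train, y_train, X_val, y_val, train_bagged_anns, prune_by_tree_induction and margin_weighted_score are hypothetical placeholders. The tree-induction step keeps only the base classifiers whose prediction columns the tree actually splits on, and the margin-weighted score is a hedged stand-in for the paper's evaluation function that favors low-margin instances.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample


def train_bagged_anns(X_train, y_train, n_members=20, seed=0):
    """Train a pool of small ANNs, each on its own bootstrap sample."""
    rng = np.random.RandomState(seed)
    members = []
    for _ in range(n_members):
        Xb, yb = resample(X_train, y_train, random_state=rng.randint(1 << 30))
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=rng.randint(1 << 30))
        members.append(clf.fit(Xb, yb))
    return members


def predictions_as_features(members, X):
    """Map the data through the base classifiers: column j of the new dataset
    holds member j's predictions, so pruning becomes selecting columns.
    Assumes integer-coded class labels, so predictions are numeric features."""
    return np.column_stack([m.predict(X) for m in members])


def prune_by_tree_induction(members, X_val, y_val):
    """Induce a decision tree on the prediction matrix and keep only the
    members whose columns the tree splits on."""
    P = predictions_as_features(members, X_val)
    tree = DecisionTreeClassifier(random_state=0).fit(P, y_val)
    split_features = tree.tree_.feature              # leaf nodes are marked -2
    selected = np.unique(split_features[split_features >= 0])
    return [members[i] for i in selected]


def margin_weighted_score(members, X_val, y_val):
    """Stand-in evaluation function: weight each validation instance by how
    small its voting margin is, so a sub-ensemble scores higher when it gets
    the hard (low-margin) instances right."""
    P = predictions_as_features(members, X_val)
    y_val = np.asarray(y_val)
    frac_correct = (P == y_val[:, None]).mean(axis=1)
    margin = 2.0 * frac_correct - 1.0                # voting margin in [-1, 1]
    weights = 1.0 - np.abs(margin)                   # low margin -> high weight
    weights /= weights.sum() + 1e-12
    majority = np.array([np.bincount(row).argmax() for row in P.astype(int)])
    return float(np.sum(weights * (majority == y_val)))

Selecting members from the tree's split features roughly parallels the complementarity argument in the abstract: a prediction column is only added to the tree if it reduces impurity given the columns already chosen, so redundant base networks tend to be left out of the pruned sub-ensemble.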

Author information

Corresponding author

Correspondence to Tao Lin.

About this article

Cite this article

Ding, S., Chen, Z., Zhao, Sy. et al. Pruning the Ensemble of ANN Based on Decision Tree Induction. Neural Process Lett 48, 53–70 (2018). https://doi.org/10.1007/s11063-017-9703-6

