
Dynamic weighting ensemble classifiers based on cross-validation

  • Original Article

Neural Computing and Applications

Abstract

Ensembles of classifiers constitute one of the main current directions in machine learning and data mining. Ensemble methods are commonly divided into static and dynamic ones. Dynamic ensemble methods apply different classifiers to different samples and may therefore achieve better generalization than static methods. However, most dynamic approaches based on the KNN rule set aside an additional portion of the training samples to estimate the "local classification performance" of each base classifier. When the number of training samples is insufficient, this reduces the accuracy of the trained model and makes the estimates of the base classifiers' local performance unreliable, which in turn hurts the ensemble's overall performance. This paper presents a new dynamic ensemble model that introduces cross-validation into the evaluation of local performance and then dynamically assigns a weight to each component classifier. Experimental results on 10 UCI data sets demonstrate that when the training set is not large, the proposed method achieves better performance than several dynamic ensemble methods as well as some classical static ensemble approaches.
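The scheme described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it assumes nearest-centroid base learners trained on feature subspaces, k-fold cross-validation to record each training sample's out-of-fold correctness per classifier, and a KNN-defined local region at prediction time. All function names, the toy data, and parameters (`n_folds`, `k`) are hypothetical choices for the sketch.

```python
import math
import random
from collections import Counter

def fit_centroids(X, y, feats):
    """Train a nearest-centroid base classifier on a feature subset."""
    cent = {}
    for lb in set(y):
        pts = [[x[f] for f in feats] for x, t in zip(X, y) if t == lb]
        cent[lb] = [sum(col) / len(pts) for col in zip(*pts)]
    return cent

def predict_centroid(cent, feats, x):
    xi = [x[f] for f in feats]
    return min(cent, key=lambda lb: math.dist(cent[lb], xi))

def cv_correctness(X, y, feats, n_folds=3):
    """Record, for every training sample, whether the base learner
    (trained on the other folds) classifies it correctly (1/0).
    This replaces the held-out validation set of earlier dynamic methods."""
    correct = [0] * len(X)
    for f in range(n_folds):
        train = [i for i in range(len(X)) if i % n_folds != f]
        cent = fit_centroids([X[i] for i in train], [y[i] for i in train], feats)
        for i in range(f, len(X), n_folds):
            correct[i] = int(predict_centroid(cent, feats, X[i]) == y[i])
    return correct

def dynamic_weighted_predict(X, y, ensemble, x, k=5):
    """Weight each base classifier by its cross-validated accuracy on
    the k training samples nearest to x, then take a weighted vote."""
    neigh = sorted(range(len(X)), key=lambda i: math.dist(X[i], x))[:k]
    votes = Counter()
    for cent, feats, correct in ensemble:
        w = sum(correct[i] for i in neigh) / k  # local CV accuracy
        votes[predict_centroid(cent, feats, x)] += w
    return votes.most_common(1)[0][0]

# Toy two-class data: clusters around (0, 0) and (3, 3)
random.seed(1)
X = ([[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(30)]
     + [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(30)])
y = [0] * 30 + [1] * 30

# Base classifiers trained on different feature subspaces
ensemble = [(fit_centroids(X, y, feats), feats, cv_correctness(X, y, feats))
            for feats in ([0], [1], [0, 1])]

print(dynamic_weighted_predict(X, y, ensemble, [3.0, 3.0]))  # → 1
```

Because the per-sample correctness records come from cross-validation, every training sample contributes both to fitting the base classifiers and to estimating their local competence, which is the key saving when the training set is small.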



Author information


Corresponding author

Correspondence to Ou Ji-Shun.

Additional information

Sponsored by the Qing Lan Project of Jiangsu Province, the Innovation Fund for Small Technology-based Firms of China (No. 09C26213203797), the National Natural Science Foundation of China (No. 70971067), the High-tech Research and Development Program of Jiangsu Province (No. BG2007028), and the Natural Science Foundation of Jiangsu Province (No. 08KJA520001).


About this article

Cite this article

Yu-Quan, Z., Ji-Shun, O., Geng, C. et al. Dynamic weighting ensemble classifiers based on cross-validation. Neural Comput & Applic 20, 309–317 (2011). https://doi.org/10.1007/s00521-010-0372-x
