
Empirical Study on Weighted Voting Multiple Classifiers

  • Conference paper
Pattern Recognition and Data Mining (ICAPR 2005)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 3686)

Included in the following conference series: International Conference on Advances in Pattern Recognition (ICAPR)

Abstract

Combining multiple classifiers is expected to increase classification accuracy, and combination strategies for multiple classifiers have become a popular research topic. For a crisp classifier, which returns a discrete class label rather than a set of real-valued probabilities over the classes, the most common combination method is majority voting. Both majority and weighted majority voting are classifier-based voting schemes: each base classifier votes with a single, fixed confidence. However, a classifier's voting priority should differ across regions of its learning space, and this difference cannot be captured by a classifier-based voting strategy. In this paper, we propose two further voting strategies that take such differences into account. We apply the AdaBoost algorithm to generate multiple classifiers and vary its voting strategy, then test and compare the predictive ability of each voting strategy on 8 datasets from the UCI Machine Learning Repository. The experimental results show that one of the proposed strategies, the sample-based voting scheme, achieves better classification accuracy.
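
The abstract's distinction is between classifier-based weighting, where a base classifier casts every vote with one fixed confidence, and sample-based weighting, where a classifier's influence varies with the test sample. The paper's exact sample-based scheme is not available in this preview, so the sketch below is only illustrative: the helper names and the local-accuracy weights are assumptions, and scikit-learn stands in for the authors' MLC++-based experimental setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def classifier_based_vote(estimators, alphas, x):
    # Standard AdaBoost rule: classifier t always votes with its fixed
    # global weight alpha_t, regardless of where x lies in the input space.
    votes = {}
    for est, alpha in zip(estimators, alphas):
        label = est.predict(x.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + alpha
    return max(votes, key=votes.get)

def sample_based_vote(estimators, X_train, y_train, x, k=10):
    # Illustrative sample-based rule (an assumption, not the paper's
    # definition): classifier t votes with its accuracy on the k training
    # points nearest to x, so its influence varies over the learning space.
    idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(
        x.reshape(1, -1), return_distance=False)[0]
    votes = {}
    for est in estimators:
        weight = float(np.mean(est.predict(X_train[idx]) == y_train[idx]))
        label = est.predict(x.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# Toy data and an AdaBoost ensemble of decision stumps.
X, y = make_classification(n_samples=300, random_state=0)
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=25, random_state=0).fit(X, y)

x_test = X[0]
print(classifier_based_vote(ada.estimators_, ada.estimator_weights_, x_test))
print(sample_based_vote(ada.estimators_, X, y, x_test))

Under this stand-in weighting, a classifier that is accurate near the test sample outvotes one that is accurate only elsewhere, which is exactly the kind of per-region difference the abstract says a classifier-based scheme cannot express.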



References

  1. Alexandre, L.A., Campilho, A.C., Kamel, M.: On combining classifiers using sum and product rules. Pattern Recognition Letters 22(12), 1283–1289 (2001)

  2. Bauer, E., Kohavi, R.: An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning 36, 105–142 (1999)

  3. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)

  4. Freund, Y.: Boosting a weak learning algorithm by majority. Information and Computation 121(2), 256–285 (1995)

  5. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Proc. of the Thirteenth International Conference on Machine Learning, pp. 148–156. Morgan Kaufmann, San Francisco (1996)

  6. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)

  7. Kamel, M., Wanas, N.: Data dependence in combining classifiers. In: Multiple Classifier Systems, Fourth International Workshop, Surrey, UK, June 2003, pp. 11–13 (2003)

  8. Kohavi, R., Sommerfield, D., Dougherty, J.: Data mining using MLC++: A machine learning library in C++. In: Tools with Artificial Intelligence. IEEE CS Press, Los Alamitos (1996)

  9. Kuncheva, L.I., Bezdek, J.C., Duin, R.P.W.: Decision templates for multiple classifier fusion: An experimental comparison. Pattern Recognition 34(2), 299–314 (2001)

  10. Murphy, P.M., Aha, D.W.: UCI Repository of Machine Learning Databases. Dept. of Information and Computer Science, University of California, Irvine (1991)

  11. Osteyee, D.B., Good, I.J.: Information, Weight of Evidence, the Singularity Between Probability Measures and Signal Detection. Springer, Berlin (1974)

  12. Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics 26(5), 1651–1686 (1998)

  13. Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Machine Learning 37(3), 297–336 (1999)

  14. Wang, Y., Wong, A.K.C.: From association to classification: Inference using weight of evidence. IEEE Trans. on Knowledge and Data Engineering 15(3), 764–767 (2003)

  15. Wong, A.K.C., Wang, Y.: High-order pattern discovery from discrete-valued data. IEEE Trans. on Knowledge and Data Engineering 9(6), 877–893 (1997)




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sun, Y., Kamel, M.S., Wong, A.K.C. (2005). Empirical Study on Weighted Voting Multiple Classifiers. In: Singh, S., Singh, M., Apte, C., Perner, P. (eds) Pattern Recognition and Data Mining. ICAPR 2005. Lecture Notes in Computer Science, vol 3686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11551188_36

  • DOI: https://doi.org/10.1007/11551188_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28757-5

  • Online ISBN: 978-3-540-28758-2

  • eBook Packages: Computer Science (R0)
