
Relationship of Sum and Vote Fusion Strategies

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2096)

Abstract

Amidst conflicting evidence about the superiority of one over the other, we investigate the Sum and majority Vote combining rules for the two-class case at a single point. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote, whereas for heavy-tailed distributions Vote may outperform Sum.
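Intuitively, averaging suppresses moderate Gaussian estimation errors, whereas thresholding each estimate before voting limits the influence of the occasional gross errors produced by heavy tails. The following Monte Carlo sketch illustrates this effect; the operating point p = 0.55, the ensemble of five experts, the error scale of 0.2 and the use of Cauchy noise as the heavy-tailed case are assumptions made for the illustration, not the settings analysed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def error_rates(p_true, n_experts, noise, trials=200_000):
    # Each expert observes the true posterior of class 1 corrupted by an
    # independent estimation error; the correct decision is class 1 since p_true > 0.5.
    estimates = p_true + noise((trials, n_experts))
    # Sum (mean) rule: average the posterior estimates, then threshold at 0.5.
    sum_correct = estimates.mean(axis=1) > 0.5
    # Majority Vote rule: threshold each expert's estimate first, then count votes.
    vote_correct = (estimates > 0.5).sum(axis=1) > n_experts / 2
    return 1.0 - sum_correct.mean(), 1.0 - vote_correct.mean()

p_true, n_experts, scale = 0.55, 5, 0.2   # assumed operating point, ensemble size, error scale

gaussian = lambda size: rng.normal(0.0, scale, size)        # light-tailed estimation errors
cauchy   = lambda size: scale * rng.standard_cauchy(size)   # heavy-tailed estimation errors

print("Gaussian errors (Sum, Vote):", error_rates(p_true, n_experts, gaussian))
print("Cauchy errors   (Sum, Vote):", error_rates(p_true, n_experts, cauchy))

Under these assumed settings the Sum rule yields the lower error rate for Gaussian errors, while the Vote rule yields the lower error rate for Cauchy errors, consistent with the result stated in the abstract.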






Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kittler, J., Alkoot, F.M. (2001). Relationship of Sum and Vote Fusion Strategies. In: Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2001. Lecture Notes in Computer Science, vol 2096. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48219-9_34


  • DOI: https://doi.org/10.1007/3-540-48219-9_34

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42284-6

  • Online ISBN: 978-3-540-48219-2

  • eBook Packages: Springer Book Archive
