
Dynamic Integration of Decision Committees

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1970)

Abstract

Decision committee learning has demonstrated outstanding success in reducing classification error with an ensemble of classifiers. In essence, a decision committee is a classifier formed from an ensemble of subsidiary classifiers. Voting, which is commonly used to produce the final decision of a committee, has a shortcoming, however: it is unable to take local expertise into account. When a new instance is difficult to classify, it may easily happen that only a minority of the classifiers succeed, and majority voting will then quite probably yield a wrong classification. We suggest using dynamic integration of classifiers instead of majority voting in decision committees. Our method is based on the assumption that each classifier is best inside certain subareas of the whole domain. In this paper, the proposed dynamic integration is evaluated in combination with the well-known decision committee approaches AdaBoost and Bagging. The comparison results show that both boosting and bagging often produce significantly higher accuracy with dynamic integration than with voting.
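
To make the contrast the abstract draws more concrete, the following is a minimal Python sketch of one possible form of dynamic integration (dynamic selection of the locally most competent committee member), shown next to plain majority voting. It is not the authors' exact algorithm: the scikit-learn-style predict() interface, the held-out validation set X_val/y_val, and the neighbourhood size k are assumptions introduced purely for illustration.

# Illustrative sketch only, not the paper's algorithm. Assumes fitted classifiers
# with a scikit-learn-style predict(), a held-out validation set (X_val, y_val),
# and a neighbourhood size k chosen for the example.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def majority_vote(committee, x):
    # Standard committee decision: the most frequent label among the members' votes.
    votes = [clf.predict(x.reshape(1, -1))[0] for clf in committee]
    return max(set(votes), key=votes.count)

def dynamic_selection(committee, x, X_val, y_val, k=7):
    # Estimate each member's error in the neighbourhood of x (its "local expertise")
    # and let the locally most competent member alone classify the new instance.
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neigh_X, neigh_y = X_val[idx[0]], y_val[idx[0]]
    local_errors = [np.mean(clf.predict(neigh_X) != neigh_y) for clf in committee]
    best = committee[int(np.argmin(local_errors))]
    return best.predict(x.reshape(1, -1))[0]

Instead of selecting a single member, a dynamic scheme can also weight all members by their estimated local accuracy and combine their votes; either variant captures the idea that the combination should depend on where the new instance falls in the domain.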





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tsymbal, A., Puuronen, S. (2000). Dynamic Integration of Decision Committees. In: Valero, M., Prasanna, V.K., Vajapeyam, S. (eds) High Performance Computing — HiPC 2000. HiPC 2000. Lecture Notes in Computer Science, vol 1970. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44467-X_49


  • DOI: https://doi.org/10.1007/3-540-44467-X_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41429-2

  • Online ISBN: 978-3-540-44467-1

  • eBook Packages: Springer Book Archive
