Choosing among algorithms to improve accuracy

  • Conference paper
Computational Methods in Neural Modeling (IWANN 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2686)


Abstract

It is widely accepted that no single Machine Learning System (MLS) achieves the smallest classification error on all data sets. Different algorithms fit certain problems better, so it is interesting to combine them in some way to improve overall accuracy. In this paper, we propose a method to construct a new MLS from given ones. It is based on selecting the system that will perform best on a particular data set. We study several ways of selecting the systems and carry out experiments with well-known MLSs on the Holte data sets.
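The abstract does not specify the selection criterion, only that the best-suited system is chosen per data set. A minimal sketch of that idea, assuming k-fold cross-validation as the selector and two toy learners (a majority-class predictor and a 1-nearest-neighbour rule on one numeric feature) standing in for the well-known MLSs — all names and data here are illustrative, not the paper's actual method:

```python
# Hedged sketch: pick, among candidate learners, the one whose estimated
# accuracy on this data set is highest. The CV-based criterion and the toy
# learners are assumptions for illustration.

def majority_learner(train):
    """Always predict the most frequent class in the training data."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def one_nn_learner(train):
    """1-nearest-neighbour classifier on a single numeric feature."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def cv_accuracy(learner, data, k=5):
    """Mean accuracy of `learner` estimated over k cross-validation folds."""
    hits = 0
    for i in range(k):
        test = data[i::k]
        train = [p for j, p in enumerate(data) if j % k != i]
        model = learner(train)
        hits += sum(model(x) == y for x, y in test)
    return hits / len(data)

def select_learner(learners, data):
    """Return the candidate with the best estimated accuracy on `data`."""
    return max(learners, key=lambda l: cv_accuracy(l, data))

# Toy data set: (feature, class) pairs, linearly separable on the feature.
data = [(0.1, 'a'), (0.2, 'a'), (0.3, 'a'), (0.9, 'b'),
        (1.0, 'b'), (1.1, 'b'), (0.15, 'a'), (0.95, 'b')]
best = select_learner([majority_learner, one_nn_learner], data)
```

On this separable toy data the 1-NN rule is selected, since its cross-validated accuracy exceeds that of the majority predictor; on a data set with no informative feature the selection would flip, which is exactly the per-data-set behaviour the paper exploits.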

The research reported in this paper has been supported in part by MCyT and FEDER under grant TIC2001-3579.



References

  1. Cohen, P.R.: Empirical Methods for Artificial Intelligence. MIT Press (1995)

  2. Everitt, B.S.: The Analysis of Contingency Tables. Chapman and Hall, London (1977)

  3. Snedecor, G.W., Cochran, W.G.: Statistical Methods. Iowa State University Press, Ames, IA, 8th edition (1989)

  4. Dietterich, T.G.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation (1998) 10(7):1895–1923

  5. Brazdil, P.B., Soares, C.: A comparison of ranking methods for classification algorithm selection. In: Proceedings of the 11th European Conference on Machine Learning (ECML-2000), Barcelona, Spain. Springer-Verlag (2000) 63–74

  6. Quinlan, J.R.: Combining instance-based and model-based learning. In: Machine Learning: Proceedings of the Tenth International Conference, Amherst, Massachusetts. Morgan Kaufmann (1993) 236–243

  7. Quevedo, J.R., Bahamonde, A.: Aprendizaje de funciones usando inducción sobre clasificaciones discretas. In: Proceedings of CAEPIA'99-TTIA'99, VIII Conferencia de la Asociación Española para la Inteligencia Artificial and III Jornadas de Transferencia Tecnológica de Inteligencia Artificial, Murcia, Spain (1999) 64–71

  8. Fürnkranz, J.: Round robin classification. Journal of Machine Learning Research (2002) 2:721–747

  9. Kohavi, R., John, G., Long, R., Manley, D., Pfleger, K.: MLC++: A machine learning library in C++. In: Proceedings of the 6th International Conference on Tools with Artificial Intelligence. IEEE Computer Society Press (1994) 740–743

  10. Holte, R.C.: Very simple classification rules perform well on most commonly used datasets. Machine Learning (1993) 11:63–91

  11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann (1993)

  12. Domingos, P.: Unifying instance-based and rule-based induction. Machine Learning (1996) 24:141–168

  13. Cohen, W.W.: Fast effective rule induction. In: Proceedings of the 12th International Conference on Machine Learning (ML95). Morgan Kaufmann, San Francisco (1995) 115–123

  14. Murthy, S.K., Kasif, S., Salzberg, S.: A system for induction of oblique decision trees. Journal of Artificial Intelligence Research (1994) 2:1–33

  15. Aha, D.W., Kibler, D., Albert, M.K.: Instance-based learning algorithms. Machine Learning (1991) 6:37–66

  16. Wilson, D., Martinez, T.: Improved heterogeneous distance functions. Journal of Artificial Intelligence Research (1997) 6:1–34

  17. Kohavi, R.: Wrappers for performance enhancement and oblivious decision graphs. Ph.D. thesis, Stanford University (1995)



Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Quevedo, J.R., Combarro, E.F., Bahamonde, A. (2003). Choosing among algorithms to improve accuracy. In: Mira, J., Álvarez, J.R. (eds) Computational Methods in Neural Modeling. IWANN 2003. Lecture Notes in Computer Science, vol 2686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44868-3_32


  • DOI: https://doi.org/10.1007/3-540-44868-3_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40210-7

  • Online ISBN: 978-3-540-44868-6

