
Meta-classifiers and Selective Superiority

  • Conference paper
  • In: Intelligent Problem Solving: Methodologies and Approaches (IEA/AIE 2000)

Abstract

Since no single classification method is best on all tasks, a variety of approaches have evolved to prevent the poor performance that results from a mismatch between a method's capabilities and the task at hand. One approach is to determine when a given method is appropriate for a given problem. A second, more popular approach is to combine the capabilities of two or more classification methods. This paper provides evidence that combining classifiers can yield more robust solutions.
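The second approach the abstract describes, combining classifiers, is often realized as stacked generalization (Wolpert, 1992): base classifiers make predictions, and a meta-classifier learns from those predictions which base classifier to trust where. The sketch below is a deliberately minimal, stdlib-only illustration of that idea, not the method of this paper; the trivial threshold "stumps" and the lookup-table meta-level are assumptions chosen to keep the example self-contained.

```python
# Minimal sketch of stacked generalization: a meta-classifier learns a
# mapping from base-classifier prediction tuples to the majority true label.
from collections import Counter

def make_threshold_clf(feature_idx, threshold):
    """A 'stump' that predicts 1 when a single feature exceeds a threshold."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

def train_meta(base_clfs, X, y):
    """For each tuple of base predictions, record the majority true label."""
    votes = {}
    for x, label in zip(X, y):
        key = tuple(clf(x) for clf in base_clfs)
        votes.setdefault(key, Counter())[label] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in votes.items()}

def predict_stacked(base_clfs, meta_table, x, default=0):
    key = tuple(clf(x) for clf in base_clfs)
    return meta_table.get(key, default)

# Toy data: class 1 iff BOTH features are "high" (an AND concept that
# neither single-feature stump can represent on its own).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

base = [make_threshold_clf(0, 0.5), make_threshold_clf(1, 0.5)]
meta = train_meta(base, X, y)
print([predict_stacked(base, meta, x) for x in X])  # → [0, 0, 0, 1]
```

Note that each stump alone misclassifies one training point, while the stacked combination classifies all four correctly, which is the "more robust than any single method" effect the abstract points to.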




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Benton, R., Kubat, M., Loganantharaj, R. (2000). Meta-classifiers and Selective Superiority. In: Logananthara, R., Palm, G., Ali, M. (eds) Intelligent Problem Solving: Methodologies and Approaches. IEA/AIE 2000. Lecture Notes in Computer Science, vol 1821. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45049-1_53


  • DOI: https://doi.org/10.1007/3-540-45049-1_53

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67689-8

  • Online ISBN: 978-3-540-45049-8

  • eBook Packages: Springer Book Archive
