Abstract
Since no single classification method performs best on all tasks, a variety of approaches have been developed to avoid the poor performance that results from mismatching a method's capabilities to a problem. One approach is to determine in advance when a given method is likely to be appropriate for a given problem. A second, more popular approach is to combine the capabilities of two or more classification methods. This paper provides evidence that combining classifiers can yield more robust solutions.
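To make the idea of combining classifiers concrete, the following is a minimal, hypothetical sketch (not the meta-classifier scheme developed in this paper) in which three base learners with different inductive biases are combined by simple majority voting; it assumes scikit-learn and uses the Iris dataset purely for illustration.

# Illustrative sketch only: majority-vote combination of three classifiers.
# Not the method of this paper; scikit-learn and the Iris data are assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Three base learners with different inductive biases.
base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("mlp", MLPClassifier(max_iter=2000, random_state=0)),
]

# Hard voting: each base learner casts one vote per test example.
combined = VotingClassifier(estimators=base_learners, voting="hard")
combined.fit(X_train, y_train)

for name, clf in base_learners:
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
print("combined", accuracy_score(y_test, combined.predict(X_test)))

Majority voting is only the simplest combination rule; schemes such as stacked generalization instead train a second-level learner on the base classifiers' outputs.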
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Benton, R., Kubat, M., Loganantharaj, R. (2000). Meta-classifiers and Selective Superiority. In: Logananthara, R., Palm, G., Ali, M. (eds) Intelligent Problem Solving. Methodologies and Approaches. IEA/AIE 2000. Lecture Notes in Computer Science, vol 1821. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45049-1_53
DOI: https://doi.org/10.1007/3-540-45049-1_53
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67689-8
Online ISBN: 978-3-540-45049-8