Abstract
The use of multiple features by a classifier often reduces the probability of error, but designing an optimal Bayesian classifier for multiple features requires estimating multidimensional joint probability density functions, and therefore a design sample size that, in general, grows exponentially with the number of dimensions. The classification method described in this paper combines the decisions made by multiple marginal Bayesian classifiers using an additional classifier that estimates joint probability densities over the decision space rather than over the feature space. A proof is presented for the restricted case of two classes and two features, showing that the method always achieves a probability of error less than or equal to that of the marginal classifier with the lowest probability of error.
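The fusion scheme summarized above can be sketched in code. This is an illustrative sketch only, not the authors' implementation: each marginal classifier is assumed here to be a one-dimensional Gaussian Bayes classifier, the combiner estimates the joint probabilities of the marginal-decision pairs by simple counting, and the fallback to the first marginal's decision for unseen decision pairs is likewise a hypothetical choice.

```python
import math
from collections import Counter

def fit_marginal(xs, ys):
    """Fit per-class mean, variance, and prior for one feature
    (1-D Gaussian class-conditional densities; an assumption of this sketch)."""
    params = {}
    for c in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == c]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-9
        params[c] = (mu, var, len(vals) / len(ys))
    return params

def classify_marginal(params, x):
    """Bayes decision for a single feature value."""
    def score(c):
        mu, var, prior = params[c]
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mu) ** 2 / (2 * var))
    return max(params, key=score)

def fit_combiner(decisions, ys):
    """Estimate joint probabilities over the *decision* space:
    count occurrences of ((d1, d2), true_class) on the design sample."""
    return Counter(zip(decisions, ys))

def classify_fused(counts, d):
    """Pick the class most probable given the pair of marginal decisions d.
    Falling back to the first marginal's decision for unseen pairs is a
    simplification made for this sketch."""
    candidates = {c: n for (dec, c), n in counts.items() if dec == d}
    return max(candidates, key=candidates.get) if candidates else d[0]

# Toy two-class, two-feature design sample (hypothetical values).
f1 = [0.0, 0.5, 1.0, 5.0, 5.5, 6.0]
f2 = [0.2, 0.4, 0.9, 4.8, 5.2, 6.1]
ys = [0, 0, 0, 1, 1, 1]

m1, m2 = fit_marginal(f1, ys), fit_marginal(f2, ys)
decisions = [(classify_marginal(m1, a), classify_marginal(m2, b))
             for a, b in zip(f1, f2)]
counts = fit_combiner(decisions, ys)
```

The key point the sketch illustrates is dimensionality: the combiner's sample-size requirement is governed by the number of discrete decision pairs (here at most 2 × 2), not by a two-dimensional feature density.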
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Happel, M.D., Bock, P. (2000). Analysis of a Fusion Method for Combining Marginal Classifiers. In: Multiple Classifier Systems. MCS 2000. Lecture Notes in Computer Science, vol 1857. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45014-9_13
DOI: https://doi.org/10.1007/3-540-45014-9_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67704-8
Online ISBN: 978-3-540-45014-6