Yet Another Method for Combining Classifiers Outputs: A Maximum Entropy Approach

  • Conference paper
Multiple Classifier Systems (MCS 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3077)

Abstract

In this paper, we present a maximum entropy (maxent) approach to the problem of fusing experts' opinions, or classifier outputs. The maxent approach is quite versatile and allows us to express, in a clear and rigorous way, the a priori knowledge available on the problem. For instance, our knowledge about the reliability of the experts and the correlations between these experts can easily be integrated: each piece of knowledge is expressed in the form of a linear constraint. An iterative scaling algorithm is used to compute the maxent solution of the problem. The maximum entropy method seeks the joint probability density of a set of random variables that has maximum entropy while satisfying the constraints; it is therefore the "most honest" characterization of our knowledge given the available facts (constraints). In the case of conflicting constraints, we propose either to minimise the "lack of constraint satisfaction" or to relax some constraints and recompute the maximum entropy solution. The maxent fusion rule is illustrated by simulations.
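
To make the procedure sketched above concrete, the following is a minimal illustration, not the authors' implementation: it uses Generalized Iterative Scaling (GIS), one member of the iterative scaling family, to compute the maximum entropy distribution over class labels subject to linear constraints of the form E_p[f_i] = b_i. The function name maxent_gis, the feature matrix, and the toy targets are illustrative assumptions, and the constraints are assumed to be consistent (non-conflicting).

    import numpy as np

    def maxent_gis(F, b, n_iter=1000, tol=1e-9):
        """Generalized Iterative Scaling for maximum entropy fusion.
        F : (m, n) non-negative feature matrix, F[i, x] = f_i(x).
        b : (m,) target expectations E_p[f_i] = b_i, assumed consistent.
        Returns p, the maximum entropy distribution over the n outcomes."""
        F = np.asarray(F, dtype=float)
        b = np.asarray(b, dtype=float)
        # GIS requires the features to sum to the same constant C for
        # every outcome; add a slack feature when they do not.
        C = F.sum(axis=0).max()
        slack = C - F.sum(axis=0)
        if slack.max() > 1e-12:
            F = np.vstack([F, slack])
            b = np.append(b, C - b.sum())
        lam = np.zeros(F.shape[0])       # Lagrange multipliers
        for _ in range(n_iter):
            z = lam @ F
            p = np.exp(z - z.max())      # exponential-family form
            p /= p.sum()                 # current maxent estimate
            expect = F @ p               # model expectations E_p[f_i]
            if np.max(np.abs(expect - b)) < tol:
                break
            lam += np.log(b / expect) / C   # multiplicative GIS update
        return p

    # Toy example: each constraint fixes the fused probability of one
    # class, e.g. with targets obtained by averaging experts' posteriors.
    F = np.eye(3)                  # f_i(x) = 1 iff x is class i
    b = np.array([0.5, 0.3, 0.2])
    print(maxent_gis(F, b))        # -> approximately [0.5, 0.3, 0.2]

When the constraints conflict, no distribution satisfies them all and the loop above does not converge; that is precisely the situation in which the paper proposes minimising the lack of constraint satisfaction or relaxing some constraints before recomputing the solution.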

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Saerens, M., Fouss, F. (2004). Yet Another Method for Combining Classifiers Outputs: A Maximum Entropy Approach. In: Roli, F., Kittler, J., Windeatt, T. (eds) Multiple Classifier Systems. MCS 2004. Lecture Notes in Computer Science, vol 3077. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-25966-4_8

  • DOI: https://doi.org/10.1007/978-3-540-25966-4_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22144-9

  • Online ISBN: 978-3-540-25966-4
