
Explanation of Pseudo-Boolean Functions Using Cooperative Game Theory and Prime Implicants

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13562)

Abstract

Many explanation methods have been developed; they can be formal or heuristic-based. Formal methods provide principles for deriving sufficient reasons (prime implicants) or necessary reasons (counterfactuals, causes). These approaches are appealing but require discretizing the input and output spaces. Heuristic-based approaches such as feature attribution (e.g., the Shapley value) work under any conditions, but their relation to explanation is less clear. We show that they cannot distinguish between a conjunction and a disjunction, whereas sufficient explanations can. This initial work aims at combining ideas from prime implicants with feature attribution in order to measure the sufficiency of the features. We apply it to two values: the proportional division and the Shapley value.
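To make the conjunction/disjunction claim concrete, here is a minimal sketch (not taken from the paper; the game v(S) = f(x_S), with features outside S set to a baseline of 0, is an assumption used only for illustration). It computes exact Shapley values for the Boolean functions x1 ∧ x2 and x1 ∨ x2 and shows that both receive the same attribution.

from itertools import combinations
from math import factorial

def shapley_values(f, n):
    # Exact Shapley values of the game v(S) = f(x_S), where x_S sets
    # the features in S to 1 and the others to the (assumed) baseline 0.
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                x_without = [1 if j in S else 0 for j in range(n)]
                x_with = [1 if (j in S or j == i) else 0 for j in range(n)]
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

f_and = lambda x: x[0] and x[1]   # conjunction
f_or = lambda x: x[0] or x[1]     # disjunction

print(shapley_values(f_and, 2))   # [0.5, 0.5]
print(shapley_values(f_or, 2))    # [0.5, 0.5] -- same attribution as the conjunction

Both functions assign 0.5 to each feature, whereas a sufficient-reason (prime implicant) analysis separates them: the conjunction needs both features, while either feature alone suffices for the disjunction.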



Author information

Correspondence to Christophe Labreuche.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Labreuche, C. (2022). Explanation of Pseudo-Boolean Functions Using Cooperative Game Theory and Prime Implicants. In: Dupin de Saint-Cyr, F., Öztürk-Escoffier, M., Potyka, N. (eds) Scalable Uncertainty Management. SUM 2022. Lecture Notes in Computer Science (LNAI), vol 13562. Springer, Cham. https://doi.org/10.1007/978-3-031-18843-5_20


  • DOI: https://doi.org/10.1007/978-3-031-18843-5_20


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18842-8

  • Online ISBN: 978-3-031-18843-5

  • eBook Packages: Computer Science, Computer Science (R0)
