
Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria

  • Conference paper
  • First Online:
Explainable Artificial Intelligence (xAI 2023)

Abstract

We are interested in explaining models from Multi-Criteria Decision Aiding. Such models can be used to perform pairwise comparisons between two options, or to classify an instance into an ordered list of categories, on the basis of multiple and conflicting criteria. Several models can achieve these goals, ranging from the simplest one, which assumes independence among criteria (the weighted sum), to models able to represent complex interactions among criteria, such as the Hierarchical Choquet Integral (HCI). We consider two complementary kinds of explanations of these two models under these two goals: sufficient explanations (a.k.a. prime implicants) and necessary explanations (a.k.a. counterfactual explanations). The idea of a prime implicant is to identify the parts of the instance that must be kept unchanged for the decision to remain the same, while the other parts may be replaced by any value. We generalize the notion of the information that needs to be kept: it concerns not only the values of the criteria (the values of the instance) but also the weights of the criteria (the parameters of the model). For the HCI model, we propose a Mixed-Integer Linear Program (MILP) formulation to compute the prime implicants. We also propose a weak version of prime implicants for the case where the requirement that the other criteria may change in any possible way is too strong. Finally, we propose a MILP formulation for computing counterfactual explanations of the HCI model.
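To make the notion of a sufficient explanation more concrete, here is a minimal brute-force sketch. It is not taken from the paper (whose contribution is a MILP formulation for the HCI model): it uses a plain weighted sum with made-up weights and option values, and checks completions on a finite grid. A subset of criteria is sufficient if the preference between the two options is preserved for every possible completion of the remaining criteria; a prime implicant is a subset-minimal sufficient set.

```python
from itertools import combinations, product

# Illustrative weighted-sum model on four criteria scaled to [0, 1].
# All numbers below are made up for the example; they are not from the paper.
weights = [0.4, 0.3, 0.2, 0.1]    # criteria weights (parameters of the model)
x = [0.9, 0.8, 0.2, 0.1]          # option that the model prefers
y = [0.3, 0.4, 0.7, 0.9]          # option it is compared against


def prefers_x(xv, yv):
    """True when the weighted sum ranks xv at least as high as yv."""
    return sum(w * a for w, a in zip(weights, xv)) >= sum(w * b for w, b in zip(weights, yv))


def is_sufficient(kept, grid=(0.0, 0.5, 1.0)):
    """A set of kept criteria is sufficient if the preference x >= y survives
    every completion of the remaining criteria (checked on a finite grid,
    which is exact for a linear model since the extremes are the worst cases)."""
    free = [i for i in range(len(weights)) if i not in kept]
    for xs in product(grid, repeat=len(free)):
        for ys in product(grid, repeat=len(free)):
            xv, yv = list(x), list(y)
            for i, a, b in zip(free, xs, ys):
                xv[i], yv[i] = a, b
            if not prefers_x(xv, yv):
                return False
    return True


# Prime implicants are the subset-minimal sufficient sets of criteria.
sufficient = [set(s)
              for r in range(len(weights) + 1)
              for s in combinations(range(len(weights)), r)
              if is_sufficient(set(s))]
prime = [s for s in sufficient if not any(t < s for t in sufficient)]
print(prime)   # here [{0, 1}]: keeping criteria 0 and 1 fixed suffices
```

This enumeration scales exponentially with the number of criteria; the MILP formulations proposed in the paper are what make the computation practical for the HCI model.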



Author information


Corresponding author

Correspondence to Christophe Labreuche.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Labreuche, C., Bresson, R. (2023). Necessary and Sufficient Explanations of Multi-Criteria Decision Aiding Models, with and Without Interacting Criteria. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1902. Springer, Cham. https://doi.org/10.1007/978-3-031-44067-0_15


  • DOI: https://doi.org/10.1007/978-3-031-44067-0_15

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44066-3

  • Online ISBN: 978-3-031-44067-0

  • eBook Packages: Computer Science, Computer Science (R0)
