Abstract
A recent surge of research has focused on counterfactual explanations as a promising solution to the eXplainable AI (XAI) problem. Over 100 counterfactual XAI methods have been proposed, many emphasising the key role of "important", "causal", or "actionable" features in making explanations comprehensible to human users. However, these proposals rest on intuition rather than psychological evidence. Indeed, recent psychological evidence [22] shows that it is abstract feature-types that shape people's understanding of explanations: categorical features better support people's learning of an AI model's predictions than continuous features do. This paper proposes a more psychologically valid counterfactual method, one extending case-based techniques with additional functionality to transform feature-differences into categorical versions of themselves. This enhanced case-based counterfactual method still generates good counterfactuals relative to baseline methods on coverage and distance metrics. It is the first counterfactual method specifically designed to meet identified psychological requirements of end-users, rather than merely reflecting the intuitions of algorithm designers.
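To make the proposed transformation concrete, the following minimal Python sketch (our illustration, not the authors' implementation) shows one way a counterfactual's continuous feature-difference could be recast in categorical terms; the feature name, bin edges, and labels are all hypothetical.

```python
# Minimal sketch (not the paper's implementation): re-expressing a
# counterfactual's continuous feature-difference as a categorical one.
# The feature name, bin edges, and labels below are hypothetical.
import numpy as np

INCOME_BINS = np.array([0, 30, 60, np.inf])   # EUR (thousands), hypothetical
INCOME_LABELS = ["low", "medium", "high"]

def categorise(value, bins, labels):
    """Map a continuous value to the label of the bin containing it."""
    idx = int(np.digitize(value, bins)) - 1    # bins[idx] <= value < bins[idx+1]
    return labels[min(max(idx, 0), len(labels) - 1)]

def categorical_difference(query_val, cf_val, bins, labels, feature="income"):
    """Phrase a counterfactual feature-change categorically, not numerically."""
    q_cat = categorise(query_val, bins, labels)
    cf_cat = categorise(cf_val, bins, labels)
    return f"if {feature} had been {cf_cat} (rather than {q_cat})"

print(categorical_difference(28.0, 65.0, INCOME_BINS, INCOME_LABELS))
# -> "if income had been high (rather than low)"
```

A deployed method would presumably derive the bins from the data distribution or domain knowledge; the sketch only illustrates the shift from continuous to categorical feature-differences that the abstract describes.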
Notes
- 1.
- 2. As well as considering multiple natives, CB2-CF also considers the nearest-like-neighbours of the native's x′ (e.g., the three closest, same-class datapoints to x′) to expand the range of native variations considered. This second step is not implemented in our version of CB2-CF (a sketch of the retrieval step follows below).
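For concreteness, here is a minimal sketch of that retrieval step under our reading of the note above (our code, with illustrative names, not the CB2-CF implementation):

```python
# Minimal sketch (our reading of the CB2-CF step above, not its code):
# retrieve the k closest datapoints sharing the class of x'. In practice,
# x' itself would be excluded from the candidate set.
import numpy as np

def nearest_like_neighbours(x_prime, X, y, target_class, k=3):
    """Return the k same-class datapoints nearest to x_prime (Euclidean)."""
    candidates = X[y == target_class]                       # same class as x'
    dists = np.linalg.norm(candidates - x_prime, axis=1)    # distance to x'
    return candidates[np.argsort(dists)[:k]]                # k closest

# Toy usage: five two-feature cases, two classes.
X = np.array([[0.0, 0.0], [1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [0.9, 1.2]])
y = np.array([0, 1, 1, 1, 1])
print(nearest_like_neighbours(np.array([1.0, 1.0]), X, y, target_class=1))
```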
References
Gunning, D., Aha, D.W.: DARPA’s explainable artificial intelligence program. AI Mag. 40(2), 44–58 (2019)
Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on Explainable Artificial Intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “right to explanation.” AI Mag. 38(3), 50–57 (2017)
Leake, D., McSherry, D.: Introduction to the special issue on explanation in case-based reasoning. Artif. Intell. Rev. 24(2), 103–108 (2005)
Sørmo, F., Cassens, J., Aamodt, A.: Explanation in case-based reasoning–perspectives and goals. Artif. Intell. Rev. 24(2), 109–143 (2005)
Schoenborn, J.M., Althoff, K.D.: Recent trends in XAI. In: Case-Based Reasoning for the Explanation of Intelligent Systems (XCBR) Workshop (2019)
Kenny, E.M., Keane, M.T.: Twin-systems to explain neural networks using case-based reasoning. In: IJCAI-19, pp. 326–333 (2019)
Keane, M.T., Kenny, E.M.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11
Kenny, E.M., Keane, M.T.: Explaining deep learning using examples: optimal feature weighting methods for twin systems using post-hoc, explanation-by-example in XAI. Knowl.-Based Syst. 233, 1–14, 107530 (2021)
Nugent, C., Cunningham, P.: Gaining insight through case-based explanation. J. Intell. Inf. Syst. 32(3), 267–295 (2009)
Cummins, L., Bridge, D.: KLEOR: a knowledge lite approach to explanation oriented retrieval. Comput. Inform. 25(2–3), 173–193 (2006)
Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. In: AAAI-21, pp. 11575–11585 (2021)
Martens, D., Provost, F.: Explaining data-driven document classifications. MIS Q. 38, 73–100 (2014)
Keane, M.T., Kenny, E.M., Delaney, E., Smyth, B.: If only we had better counterfactual explanations. In: IJCAI-21, pp. 4466–4474 (2021)
Karimi, A.-H., Barthe, G., Schölkopf, B., Valera, I.: A survey of algorithmic recourse. arXiv preprint arXiv:2010.04050 (2020)
Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI-19, pp. 6276–6282 (2019)
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2018)
Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11
Smyth, B., Keane, M.T.: A few good counterfactuals: generating interpretable, plausible and diverse counterfactual explanations. In: ICCBR-22. Springer, Cham (2022)
Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F., Wilson, J.: The what-if tool: Interactive probing of machine learning models. IEEE TVCG 26(1), 56–65 (2019)
Warren, G., Keane, M.T., Byrne, R.M.J.: Features of explainability: how users understand counterfactual and causal explanations for categorical and continuous features in XAI. In: IJCAI-22 Workshop on Cognitive Aspects of Knowledge Representation (2022)
Nugent, C., Cunningham, P.: A case-based explanation system for black-box systems. Artif. Intell. Rev. 24(2), 163–178 (2005)
Kumar, R.R., Viswanath, P., Bindu, C.S.: Nearest neighbor classifiers: a review. Int. J. Comput. Intell. Res. 13(2), 303–311 (2017)
Aggarwal, C.C., Chen, C., Han, J.: The inverse classification problem. J. Comput. Sci. Technol. 25(3), 458–468 (2010)
Laugel, T., Lesot, M.J., Marsala, C., Renard, X., Detyniecki, M.: The dangers of post-hoc interpretability. In: IJCAI-19, pp. 2801–2807 (2019)
Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: FAT*20, pp. 607–617 (2020)
Van Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds.) ECML PKDD 2021. LNCS (LNAI), vol. 12976, pp. 650–665. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86520-7_40
Russell, C.: Efficient search for diverse coherent explanations. In: FAT-19, pp. 20–28 (2019)
Kahneman, D., Miller, D.T.: Norm theory: comparing reality to its alternatives. Psychol. Rev. 93(2), 136–153 (1986)
Ustun, B., Spangher, A., Liu, Y.: Actionable recourse in linear classification. In: FAT-19, pp. 10–19 (2019)
Karimi, A.H., Barthe, G., Balle, B., Valera, I.: Model-agnostic counterfactual explanations for consequential decisions. In: AISTATS-20, Palermo, Italy, vol. 108. PMLR (2020)
Wiratunga, N., Wijekoon, A., Nkisi-Orji, I., Martin, K., Palihawadana, C., Corsar, D.: Actionable feature discovery in counterfactuals using feature relevance explainers. In: CEUR Workshop Proceedings (2021)
Karimi, A.H., von Kügelgen, J., Schölkopf, B., Valera, I.: Algorithmic recourse under imperfect causal knowledge. In: NeurIPS-20, 33 (2020)
Ramon, Y., Martens, D., Provost, F., Evgeniou, T.: A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C. Adv. Data Anal. Classif. 14(4), 801–819 (2020). https://doi.org/10.1007/s11634-020-00418-3
Delaney, E., Greene, D., Keane, M.T.: Instance-based counterfactual explanations for time series classification. In: Sánchez-Ruiz, A.A., Floyd, M.W. (eds.) ICCBR 2021. LNCS (LNAI), vol. 12877, pp. 32–47. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86957-1_3
Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: IUI-19, pp. 275–285 (2019)
Lucic, A., Haned, H., de Rijke, M.: Contrastive local explanations for retail forecasting. In: FAT*20, pp. 90–98 (2020)
Van der Waa, J., Nieuwburg, E., Cremers, A., Neerincx, M.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291 (2021)
Lage, I., et al.: Human evaluation of models built for interpretability. In: HCOMP-19, pp. 59–67 (2019)
Kirfel, L., Liefgreen, A.: What if (and how...)? Actionability shapes people’s perceptions of counterfactual explanations in automated decision-making. In: ICML-21 Workshop on Algorithmic Recourse (2021)
Kahneman, D., Tversky, A.: The simulation heuristic. In: Kahneman, D., Slovic, P., Tversky, A. (eds.), Judgment Under Uncertainty: Heuristics and Biases, pp. 201–208. CUP (1982)
Dua, D., Graff, C.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2019). http://archive.ics.uci.edu/ml
Keil, F.C.: Explanation and understanding. Ann. Rev. Psychol. 57, 227–254 (2006)
Förster, M., Klier, M., Kluge, K., Sigler, I.: Evaluating explainable artificial intelligence: what users really appreciate. In: ECIS-2020 (2020)
Acknowledgments
This research was supported by (i) the UCD Foundation, (ii) Science Foundation Ireland via the Insight SFI Research Centre for Data Analytics (12/RC/2289), and (iii) the Department of Agriculture, Food and Marine via the VistaMilk SFI Research Centre (16/RC/3835).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Warren, G., Smyth, B., Keane, M.T. (2022). "Better" Counterfactuals, Ones People Can Understand: Psychologically-Plausible Case-Based Counterfactuals Using Categorical Features for Explainable AI (XAI). In: Keane, M.T., Wiratunga, N. (eds) Case-Based Reasoning Research and Development. ICCBR 2022. Lecture Notes in Computer Science, vol. 13405. Springer, Cham. https://doi.org/10.1007/978-3-031-14923-8_5
Print ISBN: 978-3-031-14922-1
Online ISBN: 978-3-031-14923-8