Abstract
Explainable artificial intelligence (XAI) aims to automatically generate user-centric explanations that help users scrutinize artificial intelligence (AI) decisions and establish trust in AI systems. XAI methods that generate counterfactual explanations are particularly promising because they mimic how humans construct explanations. Insights from the social sciences suggest that counterfactual explanations should convey the most abnormal causes leading to an AI decision, since unexpected information fosters understanding. So far, however, no XAI method incorporates abnormality when generating counterfactual explanations. This paper designs a novel XAI method to generate abnormal counterfactual explanations. To this end, we propose a novel measure to quantify the abnormality of features in explanations and integrate it into a method for generating counterfactual explanations. We demonstrate the method's applicability on a real-world data set in the use case of house price prediction and evaluate its efficacy through both functionally-grounded and human-grounded evaluation. The results indicate that our method successfully integrates abnormality into the generation of counterfactual explanations. Compared to state-of-the-art counterfactual explanations, the resulting explanations are perceived by users as more helpful for scrutinizing AI decisions and lead to higher trust in AI systems.
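To make the idea concrete, the sketch below shows one plausible way such an abnormality measure could be combined with a standard counterfactual search; it is a minimal illustration, not the authors' implementation. It assumes a scikit-learn-style regressor (model), a numeric training matrix (X_train), a query instance (x), and a target prediction (y_target), scores how unusual a feature value is via a kernel density estimate, and adds that score as a reward term to a Wachter-style counterfactual objective. The paper's actual measure, loss formulation, and optimizer may differ.

# Illustrative sketch only (assumed names: model, X_train, x, y_target); not the
# paper's implementation. Abnormality of a feature value is scored via a kernel
# density estimate, and counterfactual candidates whose changed features take
# abnormal values are rewarded in a Wachter-style objective.
import numpy as np
from scipy.stats import gaussian_kde

def abnormality(feature_values, value):
    """Return a score in [0, 1]: close to 1 means `value` is rare for this feature."""
    kde = gaussian_kde(feature_values)
    density = kde(value)[0]
    return 1.0 - density / kde(feature_values).max()

def counterfactual_objective(x_cf, x, y_target, model, X_train,
                             w_pred=1.0, w_dist=0.1, w_abn=0.5):
    """Prediction loss + distance penalty - abnormality reward (illustrative)."""
    pred_loss = (model.predict(x_cf.reshape(1, -1))[0] - y_target) ** 2
    dist_loss = np.abs(x_cf - x).sum()
    changed = np.flatnonzero(x_cf != x)  # only reward features that were altered
    abn_reward = sum(abnormality(X_train[:, j], x_cf[j]) for j in changed)
    return w_pred * pred_loss + w_dist * dist_loss - w_abn * abn_reward

A gradient-free optimizer such as Nevergrad, which the authors cite, could then minimize this objective over candidate counterfactuals x_cf.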
Cite this paper
Jahn, T., Hühn, P., Förster, M. (2024). Wasn’t Expecting that – Using Abnormality as a Key to Design a Novel User-Centric Explainable AI Method. In: Mandviwalla, M., Söllner, M., Tuunanen, T. (eds) Design Science Research for a Resilient Future. DESRIST 2024. Lecture Notes in Computer Science, vol 14621. Springer, Cham. https://doi.org/10.1007/978-3-031-61175-9_5