
Wasn’t Expecting that – Using Abnormality as a Key to Design a Novel User-Centric Explainable AI Method

  • Conference paper
Design Science Research for a Resilient Future (DESRIST 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14621)


Abstract

Explainable artificial intelligence (XAI) aims at automatically generating user-centric explanations to help users scrutinize artificial intelligence (AI) decisions and establish trust in AI systems. XAI methods that generate counterfactual explanations are particularly promising as they mimic how humans construct explanations. Insights from the social sciences suggest that counterfactual explanations should convey the most abnormal causes that lead to the AI decision, as unexpected information fosters understanding. So far, no XAI method incorporates abnormality when generating counterfactual explanations. This paper aims to design a novel XAI method to generate abnormal counterfactual explanations. To this end, we propose a novel measure to quantify the abnormality of features in explanations and integrate it into a method to generate counterfactual explanations. We demonstrate the XAI method’s applicability on a real-world data set within the use case of house price prediction. We evaluate its efficacy by means of functionally-grounded and human-grounded evaluation. The results of our evaluation indicate that our method successfully integrates abnormality in generating counterfactual explanations. The resulting explanations are perceived by users as more helpful for scrutinizing AI decisions and lead to higher trust in AI systems compared to state-of-the-art counterfactual explanations.
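The abstract does not spell out how abnormality is quantified or how it enters counterfactual generation, so the following Python sketch is purely illustrative and not the authors' method. It assumes tabular numeric data (as in the house price use case) and treats abnormality as the negative log-density of a feature value under a kernel density estimate fitted on the training data; the names abnormality_score and rank_changed_features are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def abnormality_score(values, train_column, bandwidth=1.0):
    """Illustrative abnormality measure for one feature: the negative
    log-density of a value under a KDE fitted on the training data,
    so rarer (more unexpected) values receive higher scores.
    This is an assumed stand-in, not the measure proposed in the paper."""
    kde = KernelDensity(bandwidth=bandwidth).fit(
        np.asarray(train_column, dtype=float).reshape(-1, 1))
    log_dens = kde.score_samples(np.asarray(values, dtype=float).reshape(-1, 1))
    return -log_dens

def rank_changed_features(instance, counterfactual, X_train):
    """Rank the features a counterfactual changes by how abnormal their
    new values are, so an explanation can foreground unexpected causes."""
    ranked = []
    for j in range(X_train.shape[1]):
        if instance[j] != counterfactual[j]:
            score = abnormality_score([counterfactual[j]], X_train[:, j])[0]
            ranked.append((j, score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

In the house price setting, such a score would, for example, rank an unusually large lot size above a typical room count when presenting the causes behind a counterfactual's changed prediction.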




Author information

Correspondence to Maximilian Förster.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jahn, T., Hühn, P., Förster, M. (2024). Wasn’t Expecting that – Using Abnormality as a Key to Design a Novel User-Centric Explainable AI Method. In: Mandviwalla, M., Söllner, M., Tuunanen, T. (eds) Design Science Research for a Resilient Future. DESRIST 2024. Lecture Notes in Computer Science, vol 14621. Springer, Cham. https://doi.org/10.1007/978-3-031-61175-9_5


  • DOI: https://doi.org/10.1007/978-3-031-61175-9_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-61174-2

  • Online ISBN: 978-3-031-61175-9

  • eBook Packages: Computer Science, Computer Science (R0)
