Exploring Mental Models for Explainable Artificial Intelligence: Engaging Cross-disciplinary Teams Using a Design Thinking Approach

  • Conference paper
Artificial Intelligence in HCI (HCII 2023)

Abstract

Exploring end-users’ understanding of Artificial Intelligence (AI) systems’ behaviours and outputs is crucial in developing accessible Explainable Artificial Intelligence (XAI) solutions. Investigating mental models of AI systems is core to understanding and explaining the often opaque, complex, and unpredictable nature of AI. Researchers employ surveys, interviews, and observations to evaluate software systems, yielding useful results. However, an evaluation gulf still exists, primarily around comprehending end-users’ understanding of AI systems. It has been argued that exploring theories of human decision-making from psychology, philosophy, and human-computer interaction (HCI), taking a people-centric rather than a product- or technology-centric approach, can result in initial XAI solutions with great potential. Our work presents the results of a design thinking workshop with 14 cross-disciplinary participants with backgrounds in philosophy, psychology, computer science, AI systems development, and HCI. Participants undertook design thinking activities to ideate how AI system behaviours may be explained to end-users to bridge the explanation gulf of AI systems. We reflect on design thinking as a methodology for exploring end-users’ perceptions and mental models of AI systems with a view to creating effective, useful, and accessible XAI.

Acknowledgements

The support of the TU Dublin Scholarship Programme is gratefully acknowledged.

Author information

Correspondence to Helen Sheridan.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sheridan, H., Murphy, E., O’Sullivan, D. (2023). Exploring Mental Models for Explainable Artificial Intelligence: Engaging Cross-disciplinary Teams Using a Design Thinking Approach. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science(), vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_21

  • DOI: https://doi.org/10.1007/978-3-031-35891-3_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science, Computer Science (R0)
