DOI: https://doi.org/10.1145/3523227.3547427

Designing and evaluating explainable AI for non-AI experts: challenges and opportunities

Published: 13 September 2022

Abstract

Artificial intelligence (AI) has seen a steady increase in use in the health and medical field, where it is used by lay users and health experts alike. However, these AI systems often lack transparency about their inputs and decision-making process (they are often called black boxes), which can be detrimental to users' satisfaction and trust in them. Explainable AI (XAI) aims to overcome this problem by opening up certain aspects of the black box, and has proven to be a successful means of increasing trust, transparency, and even system effectiveness. However, for certain groups, such as lay users in health, explanation methods and evaluation metrics remain underexplored. In this paper, we outline our research on designing and evaluating explanations of health recommendations for lay users and domain experts, and we list a few takeaways from our initial studies.
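
To make the idea of "opening up the black box" concrete, the sketch below applies one common post-hoc XAI technique, permutation feature importance (scikit-learn's permutation_importance), to a hypothetical health-recommendation classifier and renders the result as a plain-language explanation a lay user could read. This is a minimal illustration under stated assumptions, not the paper's method: the model, the synthetic features (daily_steps, sleep_hours, etc.), and the wording shown to the user are all invented for the example.

    # Minimal sketch of a post-hoc explanation for a health recommendation.
    # Not the paper's method: model, data, and wording are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)

    # Hypothetical lay-user health features (all synthetic).
    features = ["daily_steps", "sleep_hours", "resting_heart_rate", "age"]
    X = rng.normal(size=(500, len(features)))
    # Synthetic target: "recommend more exercise" (1) vs. not (0),
    # driven mostly by daily_steps and resting_heart_rate.
    y = ((-X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500)) > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

    # Permutation importance: how much does shuffling each input hurt accuracy?
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Render a plain-language explanation for a lay user.
    ranked = sorted(zip(features, result.importances_mean), key=lambda p: -p[1])
    print("This recommendation was mainly based on:")
    for name, score in ranked[:2]:
        print(f"  - your {name.replace('_', ' ')} (influence: {score:.2f})")

Feature-importance scores like these are only the starting point of the design problem the paper targets: how such scores are translated into wording, visuals, and level of detail for lay users versus domain experts is exactly what the outlined studies evaluate.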

Published In

RecSys '22: Proceedings of the 16th ACM Conference on Recommender Systems
September 2022
743 pages
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. explainable AI
  2. explainable recommender systems
  3. explanation interpretation
  4. health recommendations
  5. non-expert users

Qualifiers

  • Abstract
  • Research
  • Refereed limited

Funding Sources

  • Research Foundation Flanders (FWO)
  • Flanders Innovation & Entrepreneurship (VLAIO)

Conference

RecSys '22: 16th ACM Conference on Recommender Systems

Acceptance Rates

Overall acceptance rate: 254 of 1,295 submissions (20%)

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 140
  • Downloads (last 6 weeks): 20
Reflects downloads up to 05 Mar 2025.

Cited By

  • Toward explainable artificial intelligence. Neurocomputing 563:C (2024). https://doi.org/10.1016/j.neucom.2023.126919
  • Watch Out for Updates: Understanding the Effects of Model Explanation Updates in AI-Assisted Decision Making. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–19 (2023). https://doi.org/10.1145/3544548.3581366
