DOI: 10.1145/3523227.3547427

Designing and evaluating explainable AI for non-AI experts: challenges and opportunities

Published: 13 September 2022

ABSTRACT

Artificial intelligence (AI) has seen a steady increase in use in the health and medical field, where it is used by lay users and health experts alike. However, these AI systems often lack transparency regarding their inputs and decision-making processes (and are therefore often called black boxes), which can be detrimental to users' satisfaction and trust in these systems. Explainable AI (XAI) aims to overcome this problem by opening up certain aspects of the black box, and has proven to be a successful means of increasing trust, transparency, and even system effectiveness. However, for certain groups (e.g., lay users in health), explanation methods and evaluation metrics remain underexplored. In this paper, we outline our research on designing and evaluating explanations of health recommendations for lay users and domain experts, and present a few takeaways from our initial studies.

