ABSTRACT
Artificial intelligence (AI) is increasingly used in the health and medical domain, by lay users and health experts alike. However, these AI systems often lack transparency regarding their inputs and decision-making process (and are therefore often called black boxes), which can be detrimental to users' satisfaction with and trust in these systems. Explainable AI (XAI) aims to overcome this problem by opening up certain aspects of the black box, and has proven to be a successful means of increasing trust, transparency, and even system effectiveness. However, for certain groups (e.g., lay users in health), explanation methods and evaluation metrics remain underexplored. In this paper, we outline our research on designing and evaluating explanations of health recommendations for lay users and domain experts, and summarize the takeaways from our initial studies.
Designing and evaluating explainable AI for non-AI experts: challenges and opportunities