Towards Explainable Recommender Systems for Illiterate Users

ABSTRACT
Explainable AI (XAI) has emerged in recent years as a set of techniques for building systems that enable humans to understand the outcomes produced by artificially intelligent agents. Although these initiatives have advanced considerably, most approaches focus on explanations intended for literate or even highly skilled end users, such as engineers and researchers. Few works in the literature address the needs of illiterate end users in XAI (illiterate-centered design). This paper proposes a generic model that extracts the contents of explanations from a given explainable AI system and translates them into a representation format that illiterate end users can understand. The usefulness of the model is demonstrated through an application to a food recommender system.
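The translation step the abstract describes — taking the contents of an explanation and re-encoding them in a non-textual format — can be sketched in code. The following is a minimal illustration, not the paper's actual model: the `Explanation` structure, the icon vocabulary, and the feature names are all hypothetical, and the "speech" strings stand in for input to a text-to-speech component.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """Hypothetical output of an XAI system for a food recommendation."""
    recommended_item: str
    reasons: dict  # feature name -> importance weight in [0, 1]

# Assumed icon vocabulary: abstract explanation features mapped to pictograms
# that an illiterate user can recognize without reading.
ICONS = {
    "low_sugar": "🍬🚫",
    "local_produce": "🌾",
    "favorite_ingredient": "❤️",
}

def translate_for_illiterate_user(exp: Explanation, top_k: int = 2):
    """Keep only the strongest reasons and map each one to an icon plus a
    short spoken phrase (to be rendered via text-to-speech, not shown)."""
    top = sorted(exp.reasons.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    return [
        {"icon": ICONS.get(feature, "❓"),
         "speech": f"{exp.recommended_item}: {feature.replace('_', ' ')}"}
        for feature, _ in top
    ]
```

The key design point this sketch illustrates is that the explanation's *content* (which features mattered) is preserved while its *representation* is swapped from text to icons and audio, which is the translation the proposed model performs.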