ABSTRACT
Machine learning (ML) models are ubiquitous: we encounter them when using a search engine, behind online text translation, and in many other everyday systems. However, these models must be used with care, as they are susceptible to social biases. Moreover, most ML models are inherently opaque, which is a further obstacle to understanding and verifying them.
Focusing on meaningful explanations, this work puts forward two research paths: constructing counterfactual explanations with prior knowledge, and reasoning over explanations and time. Prior knowledge has the potential to significantly increase explanation quality, whereas a time dimension is necessary to track changes in ML models and their explanations. The proposal builds on (constraint) logic programming and meta-reasoning. While situated in computer science, it strives to reflect the interdisciplinary character of the field of eXplainable Artificial Intelligence (XAI).
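To make the first research path concrete, the following is a minimal sketch of counterfactual-explanation search in the style of Wachter et al., with prior knowledge encoded as actionability constraints. The toy credit-scoring rule, the feature names, and the step size are all illustrative assumptions, not part of the proposal itself.

```python
# Hypothetical sketch: counterfactual search over a toy "black-box" model.
# The model, features, and constraints below are illustrative assumptions.

def predict(x):
    """Toy black-box classifier: approve iff income - 2*debt >= 50."""
    return "approve" if x["income"] - 2 * x["debt"] >= 50 else "deny"

def counterfactual(x, step=1, max_iter=1000):
    """Greedy search for a nearby input that flips the decision,
    respecting prior knowledge as actionability constraints:
    income may only rise, debt may only fall."""
    cf = dict(x)
    for _ in range(max_iter):
        if predict(cf) == "approve":
            return cf
        # Prior knowledge: only plausible, actionable moves are allowed.
        cf["income"] += step                     # income can increase
        cf["debt"] = max(0, cf["debt"] - step)   # debt can decrease
    return None  # no counterfactual found within the search budget

applicant = {"income": 40, "debt": 10}
cf = counterfactual(applicant)  # a nearby point with the opposite prediction
```

The constraints stand in for the prior knowledge discussed above: without them, a naive search could propose implausible changes (e.g., lowering an applicant's age), which is precisely the kind of meaningless explanation the proposal aims to rule out.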