Abstract
Explainable artificial intelligence (XAI) has attracted considerable attention in recent years. The artificial intelligence (AI) community in general, and the machine learning (ML) community in particular, is coming to the realisation that in many applications, for AI to be trusted, it must not only demonstrate good performance in its decision-making but also explain those decisions and convince us that it is making them for the right reasons. However, different applications place different demands on the information an underlying AI system must provide before we deem it worthy of our trust. How do we define these requirements? In this paper, we present three dimensions for categorising the explanatory requirements of different applications: Source, Depth and Scope. We focus on the problem of matching the explanatory requirements of different applications with the capabilities of underlying ML techniques to provide them. We deliberately avoid aspects of explanation that are already well covered by the existing literature, and we focus our discussion on ML, although the principles apply to AI more broadly.
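As a purely illustrative sketch (not taken from the paper), the three dimensions could be recorded as a simple requirements profile that is compared against the declared capabilities of candidate ML techniques. The field values and the matching rule below are hypothetical assumptions introduced only to make the idea concrete; the paper's own taxonomy may differ.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExplanationProfile:
    """Hypothetical record of explanatory properties along the paper's three
    dimensions (Source, Depth, Scope). The example string values used below
    are illustrative assumptions, not the paper's taxonomy."""
    source: str  # where the explanation comes from (e.g. the model itself vs. a post-hoc surrogate)
    depth: str   # how much of the decision process the explanation reveals
    scope: str   # whether a single decision or the model as a whole is explained


def satisfies(requirement: ExplanationProfile, capability: ExplanationProfile) -> bool:
    """Toy matching rule: a technique meets an application's requirement only
    if it matches on every dimension. A real requirements analysis would
    likely allow partial orderings within each dimension."""
    return (requirement.source == capability.source
            and requirement.depth == capability.depth
            and requirement.scope == capability.scope)


# Hypothetical example: an application's requirement vs. a technique's capability.
need = ExplanationProfile(source="model-internal", depth="full reasoning trace", scope="per-decision")
offer = ExplanationProfile(source="post-hoc surrogate", depth="feature attribution", scope="per-decision")
print(satisfies(need, offer))  # False: this technique cannot meet this requirement
```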

Cite this article
Sheh, R., Monteath, I. Defining Explainable AI for Requirements Analysis. Künstl Intell 32, 261–266 (2018). https://doi.org/10.1007/s13218-018-0559-3