Abstract
Explainable Artificial Intelligence (XAI) aims to make the results of Artificial Intelligence (AI) applications more understandable. It can also help users understand the applications themselves and gain insight into how results are obtained. Such capabilities are particularly needed for Machine Learning approaches such as Deep Learning, which today must generally be regarded as black boxes. In recent years, various XAI approaches have become available. However, many of them adopt a mainly technical perspective and do not sufficiently take into account that a well-comprehensible explanation must be delivered in a human-understandable form. By supplementing Machine Learning with semantic knowledge models, Semantic XAI can fill some of these gaps. In this publication, we raise awareness of its potential and, taking Deep Learning for object recognition as an example, present initial research results on how to achieve explainability on a semantic level.
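As a minimal sketch of the underlying idea (the KNOWLEDGE_MODEL dictionary, the class labels, and the semantic_explanation function below are illustrative assumptions, not the implementation described in the paper), the following Python snippet shows how a detector's raw class label and confidence score could be mapped onto a small semantic knowledge model to produce a human-understandable explanation:

# Hypothetical knowledge model: each concept lists its superclass and the
# visual attributes the detector is assumed to have found evidence for.
KNOWLEDGE_MODEL = {
    "truck":   {"is_a": "vehicle", "attributes": ["cargo bed", "large wheels"]},
    "car":     {"is_a": "vehicle", "attributes": ["passenger cabin", "four wheels"]},
    "vehicle": {"is_a": "object",  "attributes": ["moves on roads"]},
}

def semantic_explanation(label: str, confidence: float) -> str:
    """Turn a raw class label and confidence into a semantic explanation."""
    concept = KNOWLEDGE_MODEL.get(label)
    if concept is None:
        return (f"The model predicted '{label}' ({confidence:.0%}), "
                f"but no background knowledge is available for this class.")
    attrs = ", ".join(concept["attributes"])
    return (f"The object was recognized as a {label} ({confidence:.0%} confidence). "
            f"In the knowledge model, a {label} is a kind of {concept['is_a']} "
            f"and is characterized by: {attrs}.")

if __name__ == "__main__":
    # Stand-in for the output of a Deep Learning object detector (label, score).
    print(semantic_explanation("truck", 0.93))

In a realistic setting, the knowledge model would of course be a full ontology or knowledge graph rather than a flat dictionary, and the explanation generator would reason over its relations instead of simply listing attributes; the sketch only illustrates how semantic background knowledge can be attached to a classifier's numeric output.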