Abstract
Artificial Intelligence (AI) applications are increasingly present in both professional and private life. This is due to the success of technologies such as deep learning and automatic decision-making, which allow the development of increasingly robust and autonomous AI applications. Most of them analyze historical data and learn models from the experience recorded in this data in order to make decisions or predictions. However, automatic decision-making based on AI now raises new challenges regarding human understanding of the processes resulting from learning and the explanation of the decisions made, a crucial issue when ethical or legal considerations are involved. To meet these needs, the field of Explainable Artificial Intelligence (XAI) has recently developed. According to the literature, the notion of intelligence can be considered under four abilities: (a) to perceive rich, complex and subtle information; (b) to learn in a particular environment or context; (c) to abstract, creating new meanings; and (d) to reason, for planning and decision-making. XAI implements these four abilities with the goal of building explanatory models that try to overcome the shortcomings of pure statistical learning by providing justifications, understandable by a human, for the decisions made. In the last few years, several contributions have been made to this fascinating new research field. In this chapter, we focus on the joint use of symbolic and connectionist artificial intelligence with the aim of improving explainability.
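The idea of pairing a connectionist model with a symbolic, human-readable justification can be illustrated with a minimal sketch (not taken from the chapter; the toy data, feature names and the one-threshold surrogate are hypothetical simplifications): a single logistic unit is trained on labelled readings, then a symbolic IF–THEN rule is extracted that best mimics the unit's predictions, together with its fidelity to the network.

```python
import math

# Hypothetical toy data: (temperature, vibration) readings from a
# production line, labelled 1 if the part is defective.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

# --- Connectionist part: train one logistic unit by gradient descent ---
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x):
    """Hard decision of the trained unit."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# --- Symbolic part: extract a one-threshold surrogate rule ---
# Search, per feature, for the threshold whose rule best agrees with
# the network's own predictions (its "fidelity").
best = None
for feat in (0, 1):
    for t in [i / 10 for i in range(1, 10)]:
        agree = sum(predict(x) == (1 if x[feat] > t else 0) for x, _ in data)
        if best is None or agree > best[0]:
            best = (agree, feat, t)

agree, feat, t = best
name = ["temperature", "vibration"][feat]
rule = f"IF {name} > {t} THEN defective"
fidelity = agree / len(data)
print(rule, f"(fidelity to the network: {fidelity:.0%})")
```

The surrogate rule is the "explanation": it does not replace the network, but gives a human-checkable approximation of its behavior, in the spirit of post-hoc approaches such as LIME cited in the references.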
Notes
- 1. Cédric Villani is a French mathematician and politician.
- 2. https://www.kaggle.com/c/bosch-production-line-performance.
References
D. Gunning, Explainable artificial intelligence research at DARPA. https://www.darpa.mil/program/explainable-artificial-intelligence. Accessed 06 Jan 2020
C. Villani, Donner un sens à l’Intelligence Artificielle. Pour une stratégie nationale et européenne. https://www.aiforhumanity.fr. Accessed 06 Jan 2020
J. Deng, W. Dong, R. Socher, L. Li, K. Li, L. Fei-Fei, ImageNet: a large-scale hierarchical image database, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 248–255. https://doi.org/10.1109/CVPR.2009.5206848
2018 in Review: 10 AI Failures. https://medium.com/syncedreview/2018-in-review-10-ai-failures-c18faadf5983. Accessed 06 Jan 2020
A. Adadi, M. Berrada, Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) in IEEE Access, vol. 6 (2018), pp. 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
T. Miller, Explanation in artificial intelligence: insights from the social sciences (2017) https://arxiv.org/abs/1706.07269. Accessed 20 Jan 2020
I. Tiddi, M. d’Aquin, E. Motta, An ontology design pattern to define explanations, in Proceedings of the 8th International Conference on Knowledge Capture (ACM, 2015); Article no. 3
O. Biran, C. Cotton, Explanation and justification in machine learning: a survey, in IJCAI-17 Workshop on Explainable AI (XAI) (2017)
M.T. Ribeiro, S. Singh, C. Guestrin, Why should I trust you? Explaining the predictions of any classifier, in KDD (2016)
J. Chen, F. Lecue, J. Pan, I. Horrocks, H. Chen, Knowledge-based transfer learning explanation, in Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, Oct 2018, Tempe, United States
F. Lecue, J. Chen, J.Z. Pan, H. Chen, Augmenting transfer learning with semantic reasoning, in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) (2019), pp. 1779–1785
M.A. Casteleiro, M.J.F. Prieto, G. Demetriou, N. Maroto, W.J. Read, D. Maseda-Fernandez, J.J. Des Diz, G. Nenadic, J.A. Keane, R. Stevens, Ontology learning with deep learning: a case study on patient safety using pubmed, in SWAT4LS (2016)
P. Hohenecker, T. Lukasiewicz, Deep learning for ontology reasoning (2017), arXiv:1705.10342
D. Dou, H. Wang, H. Liu, Semantic data mining: a survey of ontology-based approaches, in Semantic Computing (ICSC), 2015 IEEE International Conference (IEEE, 2015), pp. 244–251
A. Hotho, A. Maedche, S. Staab, Ontology-based text document clustering. KI 16(4), 48–54 (2002)
A. Hotho, S. Staab, G. Stumme, Ontologies improve text document clustering, in Third IEEE International Conference on Data Mining, 2003. ICDM 2003 (IEEE, 2003), pp 541–544
L. Jing, L. Zhou, M.K. Ng, J.Z. Huang, Ontology-based distance measure for text clustering, in Proceedings of SIAM SDM Workshop on Text Mining, Bethesda, Maryland, USA, 2006
N. Phan, D. Dou, H. Wang, D. Kil, B. Piniewski, Ontology-based deep learning for human behavior prediction with explanations in health social networks. Inf. Sci. 384, 298–313 (2017)
H. Wang, D. Dou, D. Lowd, Ontology-based deep restricted boltzmann machine, in International Conference on Database and Expert Systems Applications (Springer, 2016), pp. 431–445
H.M.D. Kabir, A. Khosravi, M.A. Hosen, S. Nahavandi, Neural network-based uncertainty quantification: a survey of methodologies and applications. IEEE Access 6, 36218–36234 (2018)
M. Bellucci, N. Delestre, N. Malandain, C. Zanni-Merk, Towards a terminology for a fully contextualized XAI. Submitted to KES 2021 - 25th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems
X. Huang, C. Zanni-Merk, B. Crémilleux, Enhancing deep learning with semantics: an application to manufacturing time series analysis, in KES 2019 - International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, vol. 159 (Elsevier, 2019), pp. 437–446. https://doi.org/10.1016/j.procs.2019.09.198
O. Sigaud, S.W. Wilson, Learning classifier systems: a survey. Soft Comput. 11(11), 1065–1078 (2007)
W. Stolzmann, An introduction to anticipatory classifier systems, in International Workshop on Learning Classifier Systems (Springer, Berlin, 1999)
M.V. Butz, W. Stolzmann, An algorithmic description of ACS2, in International Workshop on Learning Classifier Systems (Springer, Berlin, 2001)
P. Gerard, W. Stolzmann, O. Sigaud, YACS: a new learning classifier system using anticipation. Soft Comput. 6(3–4), 216–228 (2002)
M.V. Butz, D.E. Goldberg, Generalized state values in an anticipatory learning classifier system, in Anticipatory Behavior in Adaptive Learning Systems (Springer, Berlin, 2003), pp. 282–301
P. Gérard, J.-A. Meyer, O. Sigaud, Combining latent learning with dynamic programming in the modular anticipatory classifier system. Eur. J. Oper. Res. 160(3), 614–637 (2005)
R. Orhand, A. Jeannin-Girardon, P. Parrend, P. Collet, BACS: a thorough study of using behavioral sequences in ACS2, in International Conference on Parallel Problem Solving from Nature (Springer, Cham, 2020), pp. 524–538
R. Orhand, A. Jeannin-Girardon, P. Parrend, P. Collet, PEPACS: integrating probability-enhanced predictions to ACS2, in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (2020), pp. 1774–1781
R. Orhand, A. Jeannin-Girardon, P. Parrend, P. Collet, DeepExpert: vers une Intelligence Artificielle autonome et explicable, in Rencontres des Jeunes Chercheurs en Intelligence Artificielle (2019), pp. 63–65
B. Spinoza, Éthique, III, prop. 6
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Zanni-Merk, C., Jeannin-Girardon, A. (2022). Towards the Joint Use of Symbolic and Connectionist Approaches for Explainable Artificial Intelligence. In: Virvou, M., Tsihrintzis, G.A., Jain, L.C. (eds) Advances in Selected Artificial Intelligence Areas. Learning and Analytics in Intelligent Systems, vol 24. Springer, Cham. https://doi.org/10.1007/978-3-030-93052-3_12
Print ISBN: 978-3-030-93051-6
Online ISBN: 978-3-030-93052-3
eBook Packages: Intelligent Technologies and Robotics