Towards the Joint Use of Symbolic and Connectionist Approaches for Explainable Artificial Intelligence

  • Chapter
  • First Online: 2022

In: Advances in Selected Artificial Intelligence Areas

Part of the book series: Learning and Analytics in Intelligent Systems (LAIS, volume 24)

Abstract

Artificial Intelligence (AI) applications are increasingly present in both professional and private life. This is due to the success of technologies such as deep learning and automatic decision-making, which allow the development of increasingly robust and autonomous AI applications. Most of these applications analyze historical data and learn models from the experience recorded in that data in order to make decisions or predictions. However, automatic decision-making based on AI raises new challenges concerning human understanding of the processes resulting from learning and the explanation of the decisions made, a crucial issue when ethical or legal considerations are involved. The field of Explainable Artificial Intelligence (XAI) has recently developed to meet these needs. According to the literature, the notion of intelligence can be characterized by four abilities: (a) to perceive rich, complex and subtle information; (b) to learn in a particular environment or context; (c) to abstract, creating new meanings; and (d) to reason, for planning and decision-making. XAI implements these four abilities with the goal of building explanatory models that try to overcome the shortcomings of pure statistical learning by providing justifications, understandable by a human, for the decisions made. In recent years, several contributions have been proposed in this new research field. In this chapter, we focus on the joint use of symbolic and connectionist artificial intelligence with the aim of improving explainability.
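
To make the central idea concrete: one common pattern for combining a connectionist component with a symbolic one is to let a neural network make the predictions and then fit a shallow decision tree to those predictions, so that the tree's rules serve as a human-readable justification of the network's behavior. The sketch below illustrates only this generic surrogate-model pattern, not the authors' method; it assumes scikit-learn is available and uses the Iris dataset as a stand-in for real application data.

    # Illustrative neuro-symbolic XAI sketch (not the chapter's method):
    # a neural network (connectionist) makes the predictions, and a shallow
    # decision tree (symbolic surrogate) is fitted to those predictions to
    # produce human-readable decision rules.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # Connectionist component: an opaque neural classifier.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, y)

    # Symbolic component: a surrogate tree trained to mimic the network,
    # trading some fidelity for an explanation a human can read.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, net.predict(X))

    # Fidelity: how well the symbolic explanation tracks the network.
    print("Surrogate fidelity:", surrogate.score(X, net.predict(X)))
    print(export_text(surrogate, feature_names=list(data.feature_names)))

The depth limit on the surrogate tree controls the trade-off between the fidelity of the explanation and its readability: deeper trees mimic the network more closely but yield rules that are harder for a human to follow.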

Author information

Corresponding author

Correspondence to Cecilia Zanni-Merk.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Zanni-Merk, C., Jeannin-Girardon, A. (2022). Towards the Joint Use of Symbolic and Connectionist Approaches for Explainable Artificial Intelligence. In: Virvou, M., Tsihrintzis, G.A., Jain, L.C. (eds) Advances in Selected Artificial Intelligence Areas. Learning and Analytics in Intelligent Systems, vol 24. Springer, Cham. https://doi.org/10.1007/978-3-030-93052-3_12
