
SeXAI: A Semantic Explainable Artificial Intelligence Framework

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12414)

Abstract

Interest in Explainable Artificial Intelligence (XAI) research has grown dramatically over the last few years. The main reason is the need for systems that, beyond being effective, are also able to describe how a certain output was obtained and to present that description in a manner comprehensible to the target users. A promising research direction for making black boxes more transparent is the exploitation of semantic information. Such information can be exploited from different perspectives in order to provide a more comprehensive and interpretable representation of AI models. In this paper, we present the first version of SeXAI, a semantic-based explainable framework that aims to exploit semantic information to make black boxes more transparent. After a theoretical discussion, we show that this research direction is suitable and worthy of investigation by applying it to a real-world use case.
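
To make the framework's core idea concrete, the sketch below (our illustration, not code from the paper) shows one way a black-box prediction could be paired with concepts from a knowledge base to yield a human-readable explanation. The names FOOD_KB and semantic_explanation are hypothetical; a real system would query an ontology such as HeLiS rather than a hand-written Python dictionary.

    # Minimal illustrative sketch (not the authors' implementation): linking a
    # black-box prediction to concepts in a tiny hand-written knowledge base,
    # in the spirit of semantic XAI. FOOD_KB and semantic_explanation are
    # hypothetical names.

    # concept -> (super-concept, properties asserted in the knowledge base)
    FOOD_KB = {
        "Pizza":       ("BakedDish", ["contains Carbohydrate", "contains Lipid"]),
        "CaesarSalad": ("Salad",     ["contains Fiber", "contains Lipid"]),
    }

    def semantic_explanation(label: str, confidence: float) -> str:
        """Turn a raw (label, confidence) pair into a concept-grounded explanation."""
        parent, facts = FOOD_KB[label]
        return (
            f"The model recognised a {label} (confidence {confidence:.2f}). "
            f"In the knowledge base, {label} is a kind of {parent} and "
            + " and ".join(facts) + "."
        )

    if __name__ == "__main__":
        # The (label, confidence) pair would normally come from the black box.
        print(semantic_explanation("Pizza", 0.93))

Running the sketch prints an explanation that grounds the prediction in knowledge-base facts rather than raw scores, which is the kind of transparency the abstract refers to.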


Notes

  1. https://prodi.gy/.

  2. In the remainder of this paper, we refer to concepts defined within the HeLiS ontology; we leave it to the reader to check the meaning of each concept in the reference paper.

  3. The dataset, the comparison, and the code are available at https://bit.ly/2Y7zSWZ.


Author information


Correspondence to Ivan Donadello.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Donadello, I., Dragoni, M. (2021). SeXAI: A Semantic Explainable Artificial Intelligence Framework. In: Baldoni, M., Bandini, S. (eds.) AIxIA 2020 – Advances in Artificial Intelligence. AIxIA 2020. Lecture Notes in Computer Science, vol. 12414. Springer, Cham. https://doi.org/10.1007/978-3-030-77091-4_4


  • DOI: https://doi.org/10.1007/978-3-030-77091-4_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77090-7

  • Online ISBN: 978-3-030-77091-4

  • eBook Packages: Computer Science, Computer Science (R0)
