
A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence

  • Conference paper

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 646)

Abstract

One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. In formal logic, computational argumentation is a method, within Artificial Intelligence (AI), focused on modeling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation of a machine-learned model, employing principles and techniques from computational argumentation to enhance its degree of explainability. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique, in the form of a questionnaire for explainability, to evaluate this representation and potentially other XAI methods. An experiment was conducted with two groups of human participants, one interacting with the argument-based representation and one with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the argument-based representation, as reported by humans via the novel questionnaire, is statistically similar to that of the decision tree.
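
To make the non-monotonic behaviour concrete, the following is a minimal sketch, in Python, of a Dung-style abstract argumentation framework evaluated under the grounded semantics. The toy arguments, the attack relation, and the grounded_extension helper are illustrative assumptions for exposition, not the paper's actual procedure for building the argument-based representation.

    # Minimal sketch of Dung-style abstract argumentation (grounded semantics).
    # An argument is accepted if every one of its attackers is itself defeated
    # by an already-accepted argument; iterating this to a fixed point yields
    # the grounded extension (the least fixed point of Dung's characteristic
    # function).

    def grounded_extension(arguments, attacks):
        accepted = set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in accepted:
                    continue
                attackers = {x for (x, y) in attacks if y == a}
                # 'a' is acceptable if each attacker is attacked by an
                # accepted argument (vacuously true if 'a' is unattacked).
                if all(any((d, x) in attacks for d in accepted) for x in attackers):
                    accepted.add(a)
                    changed = True
        return accepted

    # A model inference cast as an argument: a = "passenger is satisfied".
    print(grounded_extension({"a"}, set()))              # {'a'}: accepted

    # New information arrives: b = "service was disrupted" attacks a.
    print(grounded_extension({"a", "b"}, {("b", "a")}))  # {'b'}: 'a' is retracted

Adding the attacker b flips the status of a without modifying a itself: the conclusion is retracted when new information becomes available, exactly the defeasible behaviour described above.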


Notes

  1. https://www.kaggle.com/teejmahal20/airline-passenger-satisfaction.
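
Since the control group interacted with a decision tree, the sketch below shows, in Python with scikit-learn, how such a transparent representation can be produced from this dataset. It is a minimal sketch assuming the public Kaggle CSV schema; the file name train.csv and the column names id and satisfaction come from that page, not from the paper.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # 'train.csv' is the file distributed on the Kaggle page above.
    df = pd.read_csv("train.csv")
    X = df.select_dtypes("number").drop(columns=["id"], errors="ignore").fillna(0)
    y = df["satisfaction"]  # 'satisfied' vs. 'neutral or dissatisfied'

    # Keep the tree shallow so its rules stay readable for participants.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    print(export_text(tree, feature_names=list(X.columns)))  # if-then rules
    print("held-out accuracy:", round(tree.score(X_te, y_te), 3))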


Author information

Corresponding author

Correspondence to Giulia Vilone.

Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Vilone, G., Longo, L. (2022). A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P. (eds) Artificial Intelligence Applications and Innovations. AIAI 2022. IFIP Advances in Information and Communication Technology, vol 646. Springer, Cham. https://doi.org/10.1007/978-3-031-08333-4_36

  • DOI: https://doi.org/10.1007/978-3-031-08333-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08332-7

  • Online ISBN: 978-3-031-08333-4

  • eBook Packages: Computer Science, Computer Science (R0)
