
Interactive Natural Language Technology for Explainable Artificial Intelligence

  • Conference paper
Trustworthy AI - Integrating Learning, Optimization and Reasoning (TAILOR 2020)

Abstract

We have defined an interdisciplinary programme for training a new generation of researchers who will be ready to support the use of Artificial Intelligence (AI)-based models and techniques even by non-expert users. The final goal is to make AI self-explaining, and thus to help translate knowledge into products and services for economic and social benefit, with the support of Explainable AI systems. Our focus is on the automatic generation of interactive explanations in natural language, the modality preferred among humans, with visualization as a complementary modality.

Supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.


Notes

  1. https://claire-ai.org/.

  2. https://liu.se/en/research/tailor.

  3. https://www.ai4eu.eu/.

  4. https://nl4xai.eu/.



Acknowledgment

The NL4XAI project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 860621.


Corresponding author

Correspondence to Jose M. Alonso.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Alonso, J.M., et al. (2021). Interactive Natural Language Technology for Explainable Artificial Intelligence. In: Heintz, F., Milano, M., O'Sullivan, B. (eds.) Trustworthy AI - Integrating Learning, Optimization and Reasoning. TAILOR 2020. Lecture Notes in Computer Science, vol. 12641. Springer, Cham. https://doi.org/10.1007/978-3-030-73959-1_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73958-4

  • Online ISBN: 978-3-030-73959-1

  • eBook Packages: Computer Science (R0)
