MoReXAI - A Model to Reason About the eXplanation Design in AI Systems

  • Conference paper
  • First Online:
Artificial Intelligence in HCI (HCII 2022)

Abstract

Interest in systems that use machine learning has grown in recent years. Some algorithms implemented in these intelligent systems hide their fundamental assumptions, input information, and parameters in black-box models that are not directly observable. The adoption of these systems in sensitive and large-scale application domains raises several ethical issues. One way to address these ethical requirements is to improve the explainability of the models. However, explainability may have different goals and content according to the intended audience (developers, domain experts, and end-users). Explanations do not always reflect the requirements of end-users, because developers and users do not share the same system of social meanings, which makes it difficult to build more effective explanations. This paper proposes a conceptual model, based on Semiotic Engineering, that approaches the problem of explanation as a communicative process in which designers and users work together on explanation requirements. The Model to Reason about the eXplanation design in Artificial Intelligence Systems (MoReXAI) is based on a structured conversation that promotes reflection on subjects such as Privacy, Fairness, Accountability, Equity, and Explainability, aiming to help end-users understand how the systems work and to support the design of the system's explanations. The model can work as an epistemic tool: the reflections raised in conversations about these ethical principles helped elicit important requirements for explanation design.

This work is partially supported by the FUNCAP projects 04772314/2020.



Author information

Correspondence to Niltemberg de Oliveira Carvalho.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

de Oliveira Carvalho, N., Libório Sampaio, A., de Vasconcelos, D.R. (2022). MoReXAI - A Model to Reason About the eXplanation Design in AI Systems. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science(), vol 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_9

  • DOI: https://doi.org/10.1007/978-3-031-05643-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05642-0

  • Online ISBN: 978-3-031-05643-7

  • eBook Packages: Computer Science, Computer Science (R0)
