Transparency and Explainability of AI Systems: Ethical Guidelines in Practice

  • Conference paper

Requirements Engineering: Foundation for Software Quality (REFSQ 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13216)

Abstract

[Context and Motivation] Recent studies have highlighted transparency and explainability as important quality requirements of AI systems. However, there are still relatively few case studies that describe the current state of defining these quality requirements in practice. [Question] The goal of our study was to explore what ethical guidelines organizations have defined for the development of transparent and explainable AI systems. We analyzed the ethical guidelines of 16 organizations representing different industries and the public sector. [Results] In the ethical guidelines, the importance of transparency was highlighted by almost all of the organizations, and explainability was considered an integral part of transparency. Building trust in AI systems was one of the key reasons for developing transparency and explainability, and customers and users were identified as the main target groups of the explanations. The organizations also mentioned developers, partners, and stakeholders as important groups needing explanations. The ethical guidelines named the following aspects of an AI system that should be explained: the purpose, the role of AI, inputs, behavior, the data utilized, outputs, and limitations. The guidelines also pointed out that transparency and explainability relate to several other quality requirements, such as trustworthiness, understandability, traceability, privacy, auditability, and fairness. [Contribution] For researchers, this paper provides insights into what organizations consider important in the transparency and, in particular, the explainability of AI systems. For practitioners, this study suggests a structured way to define explainability requirements of AI systems.



Author information

Correspondence to Nagadivya Balasubramaniam.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Balasubramaniam, N., Kauppinen, M., Hiekkanen, K., Kujala, S. (2022). Transparency and Explainability of AI Systems: Ethical Guidelines in Practice. In: Gervasi, V., Vogelsang, A. (eds) Requirements Engineering: Foundation for Software Quality. REFSQ 2022. Lecture Notes in Computer Science, vol 13216. Springer, Cham. https://doi.org/10.1007/978-3-030-98464-9_1

  • Print ISBN: 978-3-030-98463-2

  • Online ISBN: 978-3-030-98464-9
