Elements that Influence Transparency in Artificial Intelligent Systems - A Survey

  • Conference paper
Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Abstract

Artificial Intelligence (AI) models often operate as black boxes: most parts of the system are opaque to users, which reduces users' trust in the system. Although the Human-Computer Interaction (HCI) community has proposed design practices to improve transparency, work that maps these practices to the interactive elements that influence AI transparency is still lacking. In this paper, we conduct an in-depth literature survey to identify elements that influence transparency in the field of HCI. Research has shown that transparency gives users a better sense of the accuracy, fairness, and privacy of a system. In this context, much research has focused on explaining the decisions made by AI systems, and researchers have also studied interactive interfaces that allow users to improve a system's explanatory capability. This literature review provides key insights into transparency and how the research community conceptualizes it. Based on these insights, we conclude that a simplified explanation of the AI system is key. We close the paper with our proposal to represent an AI system as an amalgamation of the AI model (algorithms), the data (inputs and outputs, including outcomes), and the user interface, since visual representations (e.g., Venn diagrams) can aid in understanding AI systems and potentially make them more transparent.
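The proposed three-part representation can be made concrete with a small sketch. The following Python fragment is our own illustration, not code from the paper; all names (AIModel, DataFlow, UserInterface, AISystem, describe) are hypothetical, and it simply models an AI system as the amalgamation of model, data, and interface described in the abstract:

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch only: models the abstract's view of an AI system
    # as an amalgamation of the AI model, the data, and the user interface.

    @dataclass
    class AIModel:
        name: str            # e.g. the learning algorithm in use
        interpretable: bool  # whether its decisions can be explained directly

    @dataclass
    class DataFlow:
        inputs: List[str]    # what the system consumes
        outputs: List[str]   # predictions and outcomes it produces

    @dataclass
    class UserInterface:
        # explanation elements shown to the user
        explanations: List[str] = field(default_factory=list)

    @dataclass
    class AISystem:
        model: AIModel
        data: DataFlow
        ui: UserInterface

        def describe(self) -> str:
            """A simplified, user-facing summary of the whole system."""
            kind = "interpretable" if self.model.interpretable else "opaque"
            return (
                f"{self.model.name} ({kind}) maps "
                f"{', '.join(self.data.inputs)} to {', '.join(self.data.outputs)}; "
                f"UI explanations: {', '.join(self.ui.explanations) or 'none'}"
            )

    if __name__ == "__main__":
        loan_screener = AISystem(
            model=AIModel(name="gradient-boosted trees", interpretable=False),
            data=DataFlow(inputs=["income", "credit history"],
                          outputs=["loan decision"]),
            ui=UserInterface(explanations=["why/why-not explanation"]),
        )
        print(loan_screener.describe())

Run as a script, the example prints a one-line, user-facing summary of the system, i.e. the kind of simplified explanation the abstract argues is key to transparency.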



Author information

Corresponding author

Correspondence to Deepa Muralidhar.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Muralidhar, D., Belloum, R., de Oliveira, K.M., Ashok, A. (2023). Elements that Influence Transparency in Artificial Intelligent Systems - A Survey. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14142. Springer, Cham. https://doi.org/10.1007/978-3-031-42280-5_21

  • DOI: https://doi.org/10.1007/978-3-031-42280-5_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42279-9

  • Online ISBN: 978-3-031-42280-5

  • eBook Packages: Computer Science, Computer Science (R0)
