
What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice

  • Conference paper

Design, User Experience, and Usability. Design for Contemporary Interactive Environments (HCII 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12201)

Abstract

Explainability is currently a central concern for artificial intelligence (AI) systems. The growing influence of machine learning (ML) models on human decisions has drawn attention to the black box of computing systems. AI-based systems are more than just ML models: an ML model is only one element in the design of AI explainability, and it must be combined with other elements for the explanation to be meaningful to the people using the system. There are different goals and motivations for AI explainability, but whatever the goal, there is more to an AI explanation than ML models or algorithms. Explaining the behavior of an AI system requires considering three dimensions: 1) who receives the explanation, 2) why the explanation is needed, and 3) in which context, and with which situated information, the explanation is presented. By addressing these three dimensions, an explanation can be effective, fitting users' needs and expectations at the right moment and in the right format. Designing the user experience of AI explanations is central to the pressing need of people and society to understand how AI systems may affect human decisions. In this paper, we present a literature review of AI explainability research and practice. We first examine computer science (CS) research to identify the main research themes around AI explainability, or "explainable AI" (XAI). We then focus on Human-Computer Interaction (HCI) research, asking three questions of the selected publications: whom the AI explainability is for (who), what the purpose of the AI explanation is (why), and in which context the AI explanation is presented (what + when).
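To make the three dimensions concrete, the following is a minimal illustrative sketch, not taken from the paper: it models an explanation request as a who/why/context record and tailors the presentation accordingly. All type and function names (Audience, Goal, ExplanationContext, choose_presentation) are hypothetical.

```python
# Minimal illustrative sketch (hypothetical names, not from the paper):
# an explanation is designed along three dimensions -- who receives it,
# why it is needed, and in which context it is presented.
from dataclasses import dataclass
from enum import Enum


class Audience(Enum):
    """Who: the receiver of the explanation."""
    END_USER = "end user"
    DOMAIN_EXPERT = "domain expert"
    ML_ENGINEER = "ML engineer"


class Goal(Enum):
    """Why: the purpose the explanation serves."""
    CALIBRATE_TRUST = "calibrate trust"
    DEBUG_MODEL = "debug the model"
    ASSESS_FAIRNESS = "assess fairness"


@dataclass
class ExplanationContext:
    """What + when: the situated presentation of the explanation."""
    audience: Audience
    goal: Goal
    moment: str  # e.g. "at recommendation time", "on demand"


def choose_presentation(ctx: ExplanationContext) -> str:
    """Toy dispatch: fit the explanation's format to all three dimensions."""
    if ctx.audience is Audience.END_USER and ctx.goal is Goal.CALIBRATE_TRUST:
        return f"short natural-language rationale, shown {ctx.moment}"
    if ctx.audience is Audience.ML_ENGINEER:
        return f"feature-attribution view with model internals, shown {ctx.moment}"
    return f"summary explanation, shown {ctx.moment}"


print(choose_presentation(ExplanationContext(
    Audience.END_USER, Goal.CALIBRATE_TRUST, moment="at recommendation time")))
```

The point of the sketch is that the same underlying model output would be presented differently once the receiver, purpose, and moment of the explanation are made explicit, which is the framing the survey applies to the reviewed publications.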



Author information

Correspondence to Juliana J. Ferreira.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ferreira, J.J., Monteiro, M.S. (2020). What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice. In: Marcus, A., Rosenzweig, E. (eds.) Design, User Experience, and Usability. Design for Contemporary Interactive Environments. HCII 2020. Lecture Notes in Computer Science, vol. 12201. Springer, Cham. https://doi.org/10.1007/978-3-030-49760-6_4

  • DOI: https://doi.org/10.1007/978-3-030-49760-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49759-0

  • Online ISBN: 978-3-030-49760-6

  • eBook Packages: Computer Science (R0)
