How to Support Users in Understanding Intelligent Systems? An Analysis and Conceptual Framework of User Questions Considering User Mindsets, Involvement, and Knowledge Outcomes

Abstract
The opaque nature of many intelligent systems violates established usability principles and thus presents a challenge for human-computer interaction. Research in the field therefore highlights the need for transparency, scrutability, intelligibility, interpretability, and explainability, among other concepts. While all of these terms carry a vision of supporting users in understanding intelligent systems, the underlying notions and assumptions about users and their interaction with the system often remain unclear.
We review the HCI literature through the lens of implied user questions to synthesise a conceptual framework that integrates user mindsets, user involvement, and knowledge outcomes in order to reveal, differentiate, and classify current notions in prior work. This framework aims to resolve conceptual ambiguity in the field and to enable researchers to clarify their own assumptions and become aware of those made in prior work. We further discuss related aspects such as stakeholders and trust, and provide material for applying our framework in practice (e.g., in ideation and design sessions). We thus hope to advance and structure the dialogue on supporting users in understanding intelligent systems.