Research Article

How to Support Users in Understanding Intelligent Systems? An Analysis and Conceptual Framework of User Questions Considering User Mindsets, Involvement, and Knowledge Outcomes

Published: 05 November 2022

Abstract

The opaque nature of many intelligent systems violates established usability principles and thus presents a challenge for human-computer interaction. Research in the field therefore highlights the need for transparency, scrutability, intelligibility, interpretability, and explainability, among others. While all of these terms carry a vision of supporting users in understanding intelligent systems, the underlying notions and assumptions about users and their interaction with the system often remain unclear.

We review the HCI literature through the lens of implied user questions to synthesise a conceptual framework integrating user mindsets, user involvement, and knowledge outcomes, which reveals, differentiates, and classifies current notions in prior work. This framework aims to resolve conceptual ambiguity in the field and to enable researchers to clarify their own assumptions and become aware of those made in prior work. We further discuss related aspects such as stakeholders and trust, and provide material for applying our framework in practice (e.g., in ideation/design sessions). We thus hope to advance and structure the dialogue on supporting users in understanding intelligent systems.

Published in

ACM Transactions on Interactive Intelligent Systems, Volume 12, Issue 4, December 2022, 321 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/3561952


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 5 November 2022
• Online AM: 27 April 2022
• Accepted: 15 February 2022
• Revised: 26 November 2021
• Received: 28 July 2021
