DOI: 10.1145/3570945.3607335

Research article

Alexa, What's Inside of You: A Qualitative Study to Explore Users' Mental Models of Intelligent Voice Assistants

Published: 22 December 2023

Abstract

Despite the rising prevalence of intelligent voice assistants in people's homes, how they function remains opaque to most users. With the overall goal of fostering informed usage and responsible handling of personal data, users' understanding has increasingly become a focus of research. Particularly in the context of black-box technologies such as voice assistants, individuals build intuitive mental models (also called folk theories) that guide how they understand and interact with such systems. To shed light on individuals' mental models of intelligent voice assistants, we applied a visual elicitation method during a citizen science workshop: 26 participants were asked to visualize how they imagine a voice assistant to function. The resulting drawings were categorized by their level of theorization complexity. While every drawing showed a basic awareness of the overall functionality (user input -- something happens -- voice assistant output), 13 participants revealed mental models of how the input is received (e.g., audio recording and speech processing), six of how the information is then processed (e.g., database requests and data categorization), and five of how the output is generated (e.g., speech synthesis). Overall, the results underline the need to address individuals' gaps in understanding of intelligent technology that is already in widespread use.


Cited By

  • (2024) Sustainable Impact of Stance Attribution Design Cues for Robots on Human–Robot Relationships—Evidence from the ERSP. Sustainability 16, 17, 7252. DOI: 10.3390/su16177252. Online publication date: 23-Aug-2024.
  • (2024) Intentional or Designed? The Impact of Stance Attribution on Cognitive Processing of Generative AI Service Failures. Brain Sciences 14, 10, 1032. DOI: 10.3390/brainsci14101032. Online publication date: 17-Oct-2024.
  • (2024) "I look at it as the king of knowledge": How Blind People Use and Understand Generative AI Tools. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1--14. DOI: 10.1145/3663548.3675631. Online publication date: 27-Oct-2024.

Published In

IVA '23: Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents
September 2023
376 pages
ISBN: 9781450399944
DOI: 10.1145/3570945

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. Human-Computer Interaction
      2. Mental Models
      3. Qualitative Analysis
      4. Visualization
      5. Voice Assistants

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • VolkswagenStiftung

Conference

IVA '23

      Acceptance Rates

      Overall Acceptance Rate 53 of 196 submissions, 27%

      Article Metrics

      • Downloads (Last 12 months)129
      • Downloads (Last 6 weeks)8
      Reflects downloads up to 17 Jan 2025
