DOI: 10.1145/1240624.1240630

How it works: a field study of non-technical users interacting with an intelligent system

Published: 29 April 2007

Abstract

In order to develop intelligent systems that attain the trust of their users, it is important to understand how users perceive such systems and develop those perceptions over time. We present an investigation into how users come to understand an intelligent system as they use it in their daily work. During a six-week field study, we interviewed eight office workers regarding the operation of a system that predicted their managers' interruptibility, comparing their mental models to the actual system model. Our results show that by the end of the study, participants were able to discount some of their initial misconceptions about what information the system used for reasoning about interruptibility. However, the overarching structures of their mental models stayed relatively stable over the course of the study. Lastly, we found that participants were able to give lay descriptions attributing simple machine learning concepts to the system despite their lack of technical knowledge. Our findings suggest an appropriate level of feedback for user interfaces of intelligent systems, provide a baseline level of complexity for user understanding, and highlight the challenges of making users aware of sensed inputs for such systems.
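
To make the studied system concrete for readers outside machine learning, the sketch below illustrates one plausible shape for a sensor-based interruptibility predictor: a simple classifier trained on snapshots of sensed office activity that outputs an interruptibility estimate. This is a hypothetical illustration only; the sensor features, training data, and choice of a decision-tree learner are assumptions made here for exposition, not the system actually deployed in the study.

```python
# Minimal, hypothetical sketch of a sensor-based interruptibility classifier.
# Feature names, training data, and the decision-tree learner are illustrative
# assumptions, not the system studied in the paper.
from sklearn.tree import DecisionTreeClassifier

# Each row is one snapshot of sensed office activity:
# [speech_detected, keyboard_active, door_open, phone_in_use]
snapshots = [
    [1, 0, 0, 1],  # talking on the phone      -> busy
    [0, 1, 1, 0],  # typing with the door open -> interruptible
    [1, 1, 0, 0],  # talking while typing      -> busy
    [0, 0, 1, 0],  # quiet office, door open   -> interruptible
]
labels = ["busy", "interruptible", "busy", "interruptible"]

# Train a small decision tree on the labeled snapshots.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(snapshots, labels)

# Estimate interruptibility for a new sensor snapshot.
print(model.predict([[0, 1, 0, 0]]))  # ['interruptible']
```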




      Published In

      CHI '07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
      April 2007
      1654 pages
      ISBN:9781595935939
      DOI:10.1145/1240624

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. context-aware
      2. field study
      3. intelligent systems
      4. machine learning
      5. mental models
      6. qualitative research

      Qualifiers

      • Article

      Conference

      CHI '07: CHI Conference on Human Factors in Computing Systems
      April 28 - May 3, 2007
      San Jose, California, USA

      Acceptance Rates

      CHI '07 Paper Acceptance Rate: 182 of 840 submissions (22%)
      Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)


