DOI: 10.1145/2930238.2930290

Trust and Reliance Based on System Accuracy

Published: 13 July 2016

Abstract

Trust plays an important role in many user-facing systems and applications. It is particularly important in decision support systems, where the system's output serves as one of the inputs to the user's decision-making process. In this work, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system as a function of system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly.
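To make the accuracy manipulation concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of a simulated quality-monitoring aid that gives correct advice with a configurable probability, paired with a toy trust-update rule in which trust rises after correct advice and falls more sharply after errors. All names and parameters here (QualityMonitorAid, simulate_trust, the gain/loss rates, the 30% defect rate) are illustrative assumptions.

import random

class QualityMonitorAid:
    """Hypothetical simulated aid: reports whether an item is defective,
    and is correct with probability `accuracy`."""

    def __init__(self, accuracy: float, seed: int = 1):
        self.accuracy = accuracy
        self.rng = random.Random(seed)

    def advise(self, item_is_defective: bool) -> bool:
        # With probability `accuracy`, report the true state; otherwise report the opposite.
        correct = self.rng.random() < self.accuracy
        return item_is_defective if correct else not item_is_defective

def simulate_trust(accuracy: float, n_trials: int = 100,
                   gain: float = 0.05, loss: float = 0.10) -> float:
    """Toy trust dynamic: trust rises after correct advice and falls (faster) after errors."""
    rng = random.Random(42)
    aid = QualityMonitorAid(accuracy)
    trust = 0.5  # start from a neutral trust level
    for _ in range(n_trials):
        truth = rng.random() < 0.3          # assume 30% of items are defective
        advice = aid.advise(truth)
        if advice == truth:
            trust = min(1.0, trust + gain)  # correct advice nudges trust up
        else:
            trust = max(0.0, trust - loss)  # errors push trust down more strongly
    return trust

if __name__ == "__main__":
    for acc in (0.7, 0.8, 0.9, 1.0):
        print(f"nominal accuracy {acc:.1f} -> final trust ~ {simulate_trust(acc):.2f}")

Under this toy dynamic, the final trust level increases with the aid's nominal accuracy, which mirrors the qualitative pattern the paper reports: users track system accuracy and calibrate their trust to it.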




Published In

UMAP '16: Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization
July 2016
366 pages
ISBN:9781450343688
DOI:10.1145/2930238
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 July 2016


Author Tags

  1. reliance
  2. system accuracy
  3. trust formation
  4. user-system trust

Qualifiers

  • Short-paper

Funding Sources

  • NICTA
  • Asian Office of Aerospace Research & Development (AOARD)

Conference

UMAP '16: User Modeling, Adaptation and Personalization Conference
July 13 - 17, 2016
Halifax, Nova Scotia, Canada

Acceptance Rates

UMAP '16 Paper Acceptance Rate: 21 of 123 submissions, 17%
Overall Acceptance Rate: 162 of 633 submissions, 26%



Article Metrics

  • Downloads (Last 12 months): 44
  • Downloads (Last 6 weeks): 4
Reflects downloads up to 28 Feb 2025


Cited By

  • (2024) Trust in algorithmic decision-making systems in health: A comparison between ADA health and IBM Watson. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 18(1). DOI: 10.5817/CP2024-1-5. Online publication date: 1-Feb-2024.
  • (2024) Exploring the Effects of User Input and Decision Criteria Control on Trust in a Decision Support Tool for Spare Parts Inventory Management. Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, pp. 313-323. DOI: 10.1145/3701571.3701585. Online publication date: 1-Dec-2024.
  • (2024) Trust Development and Repair in AI-Assisted Decision-Making during Complementary Expertise. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 546-561. DOI: 10.1145/3630106.3658924. Online publication date: 3-Jun-2024.
  • (2024) A Decision Theoretic Framework for Measuring AI Reliance. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 221-236. DOI: 10.1145/3630106.3658901. Online publication date: 3-Jun-2024.
  • (2024) Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations. IEEE Transactions on Visualization and Computer Graphics, 30(1), pp. 348-358. DOI: 10.1109/TVCG.2023.3326579. Online publication date: 1-Jan-2024.
  • (2023) When Algorithms Err: Differential Impact of Early vs. Late Errors on Users' Reliance on Algorithms. ACM Transactions on Computer-Human Interaction, 30(1), pp. 1-36. DOI: 10.1145/3557889. Online publication date: 18-Mar-2023.
  • (2023) Integrated Recognition Assistant Framework Based on Deep Learning for Autonomous Driving: Human-Like Restoring Damaged Road Sign Information. International Journal of Human–Computer Interaction, 40(15), pp. 3982-4002. DOI: 10.1080/10447318.2023.2204274. Online publication date: 27-Apr-2023.
  • (2022) Reliance and Automation for Human-AI Collaborative Data Labeling Conflict Resolution. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), pp. 1-27. DOI: 10.1145/3555212. Online publication date: 11-Nov-2022.
  • (2022) Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1-9. DOI: 10.1145/3491102.3502104. Online publication date: 29-Apr-2022.
  • (2022) How Good is Good Enough? Quantifying the Impact of Benefits, Accuracy, and Privacy on Willingness to Adopt COVID-19 Decision Aids. Digital Threats: Research and Practice, 3(3), pp. 1-18. DOI: 10.1145/3488307. Online publication date: 26-Mar-2022.
