Research article
DOI: 10.1145/3434073.3444673

Using Trust to Determine User Decision Making & Task Outcome During a Human-Agent Collaborative Task

Published: 08 March 2021

Abstract

Optimal performance of collaborative tasks requires consideration of the interactions between socially intelligent agents, such as social robots, and their human counterparts. The functionality and success of these systems lie in their ability to establish and maintain user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology, with the work in this paper focusing on the first step: investigating user trust as a behavioural prior. Two pilot studies (Studies 1 and 2) are presented, the results of which inform the design of Study 3. Study 3 investigates whether trust can determine user decision making and task outcome during a human-agent collaborative task. Results demonstrate that trust can be behaviourally assessed in this context using an adapted version of the Trust Game. Further, an initial behavioural measure of trust can significantly predict task outcome. Finally, assistance type and task difficulty interact to impact user performance. Notably, participants improved their performance on the hard task when paired with correct assistance, with this improvement comparable to performance on the easy task with no assistance. Future work will focus on investigating factors that influence user trust during human-agent collaborative tasks and on providing a domain-independent model of trust calibration.
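The behavioural trust measure builds on the classic Trust Game of Berg, Dickhaut, and McCabe (1995): an investor may send any part of an endowment to a trustee, the experimenter multiplies the transfer (typically threefold), and the trustee decides how much to return. The amount sent serves as a behavioural index of trust. The sketch below illustrates the payoff structure of the classic game only, not the authors' adapted version; the function and the normalised trust score are illustrative assumptions.

```python
def trust_game_round(endowment, sent, returned, multiplier=3):
    """One round of the classic Berg et al. (1995) Trust Game.

    The investor sends `sent` units of `endowment` to the trustee; the
    transfer is multiplied by `multiplier`, and the trustee returns
    `returned` units. The amount sent is a behavioural proxy for trust.
    (Illustrative sketch; the paper uses an adapted version of the game.)
    """
    if not 0 <= sent <= endowment:
        raise ValueError("sent must lie between 0 and the endowment")
    pot = sent * multiplier
    if not 0 <= returned <= pot:
        raise ValueError("returned cannot exceed the multiplied transfer")
    investor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    # Hypothetical normalised trust score in [0, 1]:
    # the fraction of the endowment the investor risked.
    trust_score = sent / endowment
    return investor_payoff, trustee_payoff, trust_score
```

For example, sending 5 of a 10-unit endowment creates a 15-unit pot; if the trustee returns 7, the investor ends with 12, the trustee with 8, and the behavioural trust score is 0.5.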



        Published In

        HRI '21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
        March 2021
        425 pages
        ISBN:9781450382892
        DOI:10.1145/3434073
        General Chairs: Cindy Bethel, Ana Paiva
        Program Chairs: Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir


        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Author Tags

        1. decision making
        2. human-agent collaboration
        3. recommender system
        4. signal detection theory
        5. socially intelligent agent
        6. trust


        Funding Sources

        • Australian Government Research Training Program Scholarship

        Conference

        HRI '21

        Acceptance Rates

        Overall Acceptance Rate 268 of 1,124 submissions, 24%



        Cited By

        • (2024) Exploring the Effects of User Input and Decision Criteria Control on Trust in a Decision Support Tool for Spare Parts Inventory Management. Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, 313–323. https://doi.org/10.1145/3701571.3701585
        • (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing, 1(4), 1–45. https://doi.org/10.1145/3696449
        • (2024) Human–Robot Coordination and Collaboration in Industry 4.0. Digital Transformation, 195–219. https://doi.org/10.1007/978-981-99-8118-2_9
        • (2023) Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration. International Journal of Human–Computer Interaction, 39(9), 1740–1761. https://doi.org/10.1080/10447318.2022.2150691
        • (2023) Simulation Evidence of Trust Calibration: Using POMDP with Signal Detection Theory to Adapt Agent Features for Optimised Task Outcome During Human-Agent Collaboration. International Journal of Social Robotics, 16(6), 1381–1403. https://doi.org/10.1007/s12369-023-01041-w
        • (2022) No Evidence for an Effect of the Smell of Hexanal on Trust in Human–Robot Interaction. International Journal of Social Robotics, 15(8), 1429–1438. https://doi.org/10.1007/s12369-022-00918-6
