
How the Timing and Magnitude of Robot Errors Influence Peoples’ Trust of Robots in an Emergency Scenario

  • Conference paper
Social Robotics (ICSR 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10652)


Abstract

Trust is a key factor in human users’ acceptance of robots in a home or other human-oriented environment. Humans should be able to trust that they can safely interact with their robot. Robots will sometimes make errors due to mechanical or functional failures. It is therefore important that a domestic robot exhibits acceptable interactive behaviours when making and recovering from an error. In order to define these behaviours, it is first necessary to consider that errors can have different degrees of consequences. We hypothesise that the severity of the consequences and the timing of a robot’s different types of erroneous behaviours during an interaction may have different impacts on users’ attitudes towards a domestic robot. In this study we used an interactive storyboard presenting ten different scenarios in which a robot performed different tasks under five different conditions. Each condition included the ten tasks performed by the robot, either correctly or with small or big errors; the conditions with errors also included four correctly performed behaviours. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their current trust in the robot. We conclude that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human’s trust in the robot.



Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 642667 (Safety Enables Cooperation in Uncertain Robotic Environments - SECURE).

Author information

Corresponding author

Correspondence to Alessandra Rossi.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Rossi, A., Dautenhahn, K., Koay, K.L., Walters, M.L. (2017). How the Timing and Magnitude of Robot Errors Influence Peoples’ Trust of Robots in an Emergency Scenario. In: Kheddar, A., et al. (eds.) Social Robotics. ICSR 2017. Lecture Notes in Computer Science, vol. 10652. Springer, Cham. https://doi.org/10.1007/978-3-319-70022-9_5

  • DOI: https://doi.org/10.1007/978-3-319-70022-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70021-2

  • Online ISBN: 978-3-319-70022-9

  • eBook Packages: Computer Science; Computer Science (R0)
