The Reason for an Apology Matters for Robot Trust Repair

  • Conference paper
Social Robotics (ICSR 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13818)

Abstract

Recent advances in human-robot interaction (HRI) and robot autonomy are changing the world. Today, robots are used in a wide variety of applications, and people and robots work together in human-autonomy teams to accomplish tasks that neither could easily accomplish alone. Trust between robots and humans in teams is vital to task completion and effective team cohesion. For optimal performance and the safety of human teammates, a human's level of trust should be adjusted to match the actual performance of the robotic system; the process by which a robot adjusts a human's level of trust is called trust calibration. The cost of poor trust calibration in HRI is, at a minimum, reduced performance; at worst, it can lead to human injury or critical task failure. A robot can calibrate trust through policies that use trust calibration cues (TCCs), and verbal cues are often used for this purpose. In this experiment we test the difference between two verbal TCCs, an apology and a denial, both intended to repair trust that the robot lost during a search and rescue teaming scenario. The study included 219 participants who were split across six search and rescue (S&R) simulations. The simulations were built on two different multi-round interaction games, created to study the effectiveness of the TCCs after competence violations and moral violations. While most of the TCCs were ineffective at significantly increasing trust in the robot after a trust violation, an apology given when the robot had acted selfishly was shown to be effective.

Supported by the AI Caring Institute.
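
To make the experimental setup concrete, here is a minimal, hypothetical Python sketch of the kind of multi-round teaming game described in the abstract: a robot teammate commits either a competence violation or a moral (selfish) violation mid-game and then attempts repair with one of the two verbal TCCs. This is an illustration only, not the authors' implementation; the trust scale, initial value, and all effect sizes are invented placeholders, with the apology-after-selfishness entry set highest merely to echo the paper's headline finding.

# Illustrative sketch only (not the authors' implementation): a toy
# multi-round trust model. A robot commits a competence or moral
# violation mid-game, then attempts repair with a verbal trust
# calibration cue (TCC). All numeric values are hypothetical.

import random

VIOLATIONS = ("competence", "moral")
TCCS = ("apology", "denial")

# Hypothetical repair effectiveness per (violation, cue) pair; the
# (moral, apology) entry is largest to echo the reported finding.
REPAIR_EFFECT = {
    ("competence", "apology"): 0.05,
    ("competence", "denial"): 0.00,
    ("moral", "apology"): 0.20,
    ("moral", "denial"): 0.00,
}

def run_rounds(n_rounds: int, violation: str, tcc: str, seed: int = 0) -> float:
    """Simulate one participant's trust in the robot over n_rounds (0..1 scale)."""
    rng = random.Random(seed)
    trust = 0.7  # assumed initial trust in the robot teammate
    for round_idx in range(n_rounds):
        if round_idx == n_rounds // 2:
            trust -= 0.3                               # violation occurs mid-game
            trust += REPAIR_EFFECT[(violation, tcc)]   # TCC repair attempt
        else:
            trust += rng.uniform(0.0, 0.02)            # routine successful rounds
        trust = min(max(trust, 0.0), 1.0)
    return trust

if __name__ == "__main__":
    for v in VIOLATIONS:
        for t in TCCS:
            print(f"{v:10s} + {t:7s} -> final trust {run_rounds(8, v, t):.2f}")

Framing the repair effects as a lookup table keyed by (violation type, cue) mirrors the study's comparison of violation and repair conditions across separate simulations.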

Author information

Corresponding author

Correspondence to Russell Perkins.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Perkins, R., Khavas, Z.R., McCallum, K., Kotturu, M.R., Robinette, P. (2022). The Reason for an Apology Matters for Robot Trust Repair. In: Cavallo, F., et al. Social Robotics. ICSR 2022. Lecture Notes in Computer Science (LNAI), vol. 13818. Springer, Cham. https://doi.org/10.1007/978-3-031-24670-8_56

  • DOI: https://doi.org/10.1007/978-3-031-24670-8_56

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24669-2

  • Online ISBN: 978-3-031-24670-8

  • eBook Packages: Computer Science (R0)
