Abstract
Recent advances in human-robot interaction (HRI) and robot autonomy are changing the world, and robots are now used in a wide variety of applications. People and robots work together in human-robot teams to accomplish tasks that neither could easily accomplish alone. Trust between humans and robots is vital to task completion and effective team cohesion. For the optimal performance and safety of human teammates, their level of trust should match the actual performance of the robotic system; the process by which a robot adjusts a human's level of trust is called trust calibration. The cost of poor trust calibration in HRI is, at a minimum, degraded performance, and in severe cases it can lead to human injury or critical task failure. A robot can calibrate trust through policies that use trust calibration cues (TCCs), and verbal cues are often employed for this purpose. In this experiment we test the difference between two verbal TCCs, an apology and a denial, both of which were meant to repair trust that the robot lost during a search and rescue (S&R) teaming scenario. The study included 219 participants who were split across six S&R simulations. The simulations were built around two multi-round interaction games designed to produce two types of trust violation: competence violations and moral violations. While most of the TCCs were ineffective at significantly increasing trust in the robot after a trust violation, an apology was effective when the robot had acted selfishly.
Supported by the AI Caring Institute.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Perkins, R., Khavas, Z.R., McCallum, K., Kotturu, M.R., Robinette, P. (2022). The Reason for an Apology Matters for Robot Trust Repair. In: Cavallo, F., et al. Social Robotics. ICSR 2022. Lecture Notes in Computer Science(), vol 13818. Springer, Cham. https://doi.org/10.1007/978-3-031-24670-8_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-24669-2
Online ISBN: 978-3-031-24670-8