Abstract
Performance assessment in the era of human‒robot collaboration poses new challenges. Do human managers respond differently to the successes and failures of human versus robot employees? This study investigates people's responses to successes and errors made by humans compared with those made by robots, using self-report measures and neuroimaging. Twenty-four participants were asked to imagine themselves as managers tasked with reviewing videos of human‒robot collaboration and evaluating the human and robot employees shown in the videos. Results showed that, when the employee performed correctly, participants assigned more credit to the employee and showed stronger positive emotions when the employee was a robot rather than a human. When the employee made an error and caused a failure, participants attributed more blame to the employee and showed stronger negative emotions when the employee was a human rather than a robot. Employee errors also reduced trust, and the trust damage caused by human errors was greater than that caused by robot errors. Furthermore, functional near-infrared spectroscopy (fNIRS) showed that viewing robot errors led to decreased activation in the prefrontal cortex. These findings enrich our understanding of attribution, trust, and emotion in human‒robot collaboration from the perspective of human managers and offer practical managerial implications.
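To make the reported fNIRS contrast concrete, the sketch below illustrates one way a within-subject comparison of prefrontal activation between human-error and robot-error conditions could be run. It is a minimal illustration only, not the authors' analysis pipeline: the data layout, channel count, and block-averaged HbO values are hypothetical assumptions.

```python
# Illustrative sketch only: compare mean prefrontal HbO between conditions.
# NOT the authors' pipeline; data layout and channel averaging are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_channels = 24, 8  # 24 participants as in the study; channel count assumed

# Hypothetical block-averaged HbO (arbitrary units) per participant and PFC channel
hbo_human_error = rng.normal(0.10, 0.05, size=(n_participants, n_channels))
hbo_robot_error = rng.normal(0.04, 0.05, size=(n_participants, n_channels))

# Average across channels to obtain one PFC value per participant per condition
human_mean = hbo_human_error.mean(axis=1)
robot_mean = hbo_robot_error.mean(axis=1)

# Paired t-test: is PFC activation lower when viewing robot errors than human errors?
t, p = stats.ttest_rel(human_mean, robot_mean)
print(f"t({n_participants - 1}) = {t:.2f}, p = {p:.4f}")
```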
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 71942005).
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, F., Ji, Y., Lei, X., Rau, P.-L.P. (2024). Responses to Human and Robot Errors in Human‒Robot Collaboration: An fNIRS Study. In: Rau, P.-L.P. (ed.) Cross-Cultural Design. HCII 2024. Lecture Notes in Computer Science, vol. 14702. Springer, Cham. https://doi.org/10.1007/978-3-031-60913-8_19
DOI: https://doi.org/10.1007/978-3-031-60913-8_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60912-1
Online ISBN: 978-3-031-60913-8
eBook Packages: Computer Science, Computer Science (R0)