Abstract
Research on human interaction has shown that interpreting an agent’s actions in terms of either effort or ability can have important consequences for attributions of responsibility. In this study, these findings are applied in an HRI context. We investigate how participants’ interpretation of a robot failure in terms of effort (as opposed to ability) may be operationalized, and how this interpretation influences the human perception of the robot as having agency over, and responsibility for, its actions. Results indicate that a robot displaying lack of effort significantly increases human attributions of agency and, to some extent, moral responsibility to the robot. Moreover, we found that a robot’s display of lack of effort does not elicit the level of affective and behavioral reactions normally found in responses to other human agents.
This paper is based on a thesis that was submitted in fulfilment of the requirements for the degree of Bachelor of Science (Honours) in Psychology at Radboud University in August 2016. Another paper based on this thesis has been submitted to the journal New Ideas in Psychology [1], focusing primarily on the use of theories from social psychology for HRI. The current paper focuses specifically on the implications of Weiner’s theory for the implementation of robot behavior aimed at eliciting different attributions of agency and responsibility.
Notes
1. Videos and the complete survey can be found online: https://www.youtube.com/playlist?list=PLSQsUzV48QtG__YPY6kVcgCM8-YOcNqja, https://eu.qualtrics.com/jfe/preview/SV_6y4TuTii0CFnpch.
References
Van der Woerdt, S., Haselager, P.: When robots appear to have a mind: the human perception of machine agency and responsibility. New Ideas Psychol. (submitted)
Asaro, P.: Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press, Cambridge (2013)
Singer, P.W.: Military robotics and ethics: a world of killer apps. Nature 477(7365), 399–401 (2011)
Duffy, B.R.: Anthropomorphism and the social robot. Robot. Auton. Syst. 42(3–4), 177–190 (2003)
Złotowski, J., Strasser, E., Bartneck, C.: Dimensions of anthropomorphism: from humanness to humanlikeness. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014). ACM, New York (2014)
Schultz, J., Imamizu, H., Kawato, M., Frith, C.D.: Activation of the human superior temporal gyrus during observation of goal attribution by intentional objects. J. Cogn. Neurosci. 16(10), 1695–1705 (2004)
Alicke, M.D.: Culpable control and the psychology of blame. Psychol. Bull. 126(4), 556–574 (2000)
Epley, N., Waytz, A., Cacioppo, J.T.: On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114(4), 864–886 (2007)
Weiner, B.: Judgments of Responsibility: A Foundation for a Theory of Social Conduct. Guilford Press, New York/London (1995)
Weiner, B., Perry, R.P., Magnussen, J., Kukla, A.: An attributional analysis of reactions to stigmas. J. Pers. Soc. Psychol. 55(5), 738–748 (1988)
Epley, N., Waytz, A.: The Handbook of Social Psychology, 5th edn. Wiley, New York (2010)
Rudolph, U., Roesch, S.C., Greitemeyer, T., Weiner, B.: A meta-analytic review of help giving and aggression from an attributional perspective. Cogn. Emot. 18(6), 815–848 (2004)
Coleman, K.W.: The Stanford Encyclopedia of Philosophy. Fall 2006 Edition. The Metaphysics Research Lab, Stanford (2004). http://stanford.library.sydney.edu.au/archives/fall2006/entries/computing-responsibility/
Vilaza, G.N., Haselager, W.F.F., Campos, A.M.C., Vuurpijl, L.: Using games to investigate sense of agency and attribution of responsibility. In: Proceedings of the 2014 SBGames (SBGames 2014), SBC, Porto Alegre (2014)
Moon, Y., Nass, C.: Are computers scapegoats? Attributions of responsibility in human computer interaction. Int. J. Hum. Comput. Interact. 49(1), 79–94 (1998)
You, S., Nie, J., Suh, K., Sundar, S.S.: When the robot criticizes you: self-serving bias in human-robot interaction. In: Proceedings of the 6th International Conference on Human Robot Interaction (HRI 2011). ACM, New York (2011)
Malle, B.F., Scheutz, M., Forlizzi, J., Voiklis, J.: Which robot am i thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In: Proceedings of the 11th International Conference on Human Robot Interaction (HRI 2016). ACM, New York (2016)
Malle, B.F., Scheutz, M., Arnold, T., Voiklis, J., Cusimano, C.: Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In: Proceedings of the 10th International Conference on Human Robot Interaction (HRI 2015). ACM, New York (2015)
Weiner, B., Kukla, A.: An attributional analysis of achievement motivation. J. Pers. Soc. Psychol. 15(1), 1–20 (1970)
Weiner, B.: Intentions and Intentionality: Foundations of Social Cognition. MIT Press, Cambridge (1995)
Mantler, J., Schellenberg, E.G., Page, J.S.: Attributions for serious illness: are controllability, responsibility, and blame different constructs? Can. J. Behav. Sci. 35(2), 142–152 (2003)
Calverley, D.J.: Android science and animal rights: does an analogy exist? Connect. Sci. 18(4), 403–417 (2006)
Schaerer, E., Kelly, R., Nicolescu, M.: Robots as animals: a framework for liability and responsibility in human-robot interactions. In: Proceedings of RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, Toyama (2009)
Gray, H.M., Gray, K., Wegner, D.M.: Dimensions of mind perception. Science 315(5812), 619 (2007)
Heider, F.: The Psychology of Interpersonal Relations. Wiley, New York (1958)
Bakan, D.: The Duality of Human Existence: Isolation and Communion in Western Man. Rand McNally, Chicago (1966)
Trzebinski, J.: Action-oriented representations of implicit personality theories. J. Pers. Soc. Psychol. 48(5), 1266–1278 (1985)
Weiner, B.: An attributional theory of achievement motivation and emotion. Psychol. Rev. 92(4), 548–573 (1985)
Mayer, R.C., Davis, J.H., Schoorman, F.D.: An integrative model of organizational trust. Acad. Manag. Rev. 20(3), 709–734 (1995)
Jungermann, H., Pfister, H., Fischer, K.: Credibility, information preferences, and information interests. Risk Anal. 16(2), 251–261 (1996)
Block, N.: Oxford Companion to the Mind, 2nd edn. Oxford University Press, New York (2004)
Graham, S., Hoehn, S.: Children’s understanding of aggression and withdrawal as social stigmas: an attributional analysis. Child Dev. 66(4), 1143–1161 (1995)
Greitemeyer, T., Rudolph, U.: Help giving and aggression from an attributional perspective: why and when we help or retaliate. J. Appl. Soc. Psychol. 33(5), 1069–1087 (2003)
Waytz, A., Morewedge, C.K., Epley, N., Gao, J.H., Cacioppo, J.T.: Making sense by making sentient: effectance motivation increases anthropomorphism. J. Pers. Soc. Psychol. 99(3), 410–435 (2010)
Peterson, C., Semmel, A., von Baeyer, C., Abramson, L.Y., Metalsky, G.I., Seligman, M.E.P.: The Attributional Style Questionnaire. Cogn. Ther. Res. 6(3), 287–300 (1982)
Avis, M., Forbes, S., Ferguson, S.: The brand personality of rocks: a critical evaluation of a brand personality scale. Mark. Theor. 14(4), 451–475 (2014)
Friedman, B.: ‘It’s the computer’s fault’: reasoning about computers as moral agents. In: Proceedings of the Conference on Human Factors in Computing Systems. ACM, New York (1995)
Kahn Jr., P.H., Kanda, T., Ishiguro, H., Ruckert, J.H., Shen, S., Gary, H.R., Reichert, A.L., Freier, N.G., Severson, R.L.: Do people hold a humanoid robot morally accountable for the harm it causes? In: Proceedings of the 7th International Conference on Human Robot Interaction (HRI 2012). ACM, New York (2012)
Biswas, M., Murray, J.C.: Towards an imperfect robot for long-term companionship: case studies using cognitive biases. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, New York (2015)
Acknowledgements
We thank Luc Nies, Marc de Groot, Jan-Philip van Acken and Jesse Fenneman for their feedback and for their assistance in creating the videos for our study.
Appendix A: Description of the Videos
(1) Giraffe. Task description: a robot tidies up a room by picking up a toy giraffe, and dropping it in a toybox next to it.
LA. The robot tries to pick up a toy giraffe, but as it lifts its body, the giraffe drops out of its hands. Despite this, the robot tries to complete the task by moving its arm to the toybox and opening its hands.
LE. The robot tries to pick up a toy giraffe and looks at the toybox. Yet, instead of dropping the toy giraffe in the box, the robot throws it away from the box.
(2) High Five. Task description: a robot asks a confederate how he is doing. When the confederate gives a positive reply, the robot says: “awesome, high five” and lifts its arm to give a high five.
LA. The robot has difficulty lifting its arm. As soon as the arm is up, it immediately drops down again, without even touching the arm of the confederate. After this happens, the robot says “oops”.
LE. The robot lifts its arm. However, as soon as the confederate’s arm is up, the robot tactically dodges the high five by lowering its arm and pushing it underneath the confederate’s arm. After this happens, the robot laughs.
(3) Cardsorting. Task description: a robot is shown cards (one at a time) from a standard 52-card deck. Its task is to categorize the cards by naming the suit on the card (hearts, diamonds, clubs, spades or joker).
LA. The robot starts off by naming a few cards correctly. However, after a while it starts naming some wrong suits (e.g. saying ‘hearts’ instead of ‘spades’). Its timing is still correct.
LE. The robot starts off by naming a few cards correctly. However, after a while it ignores a card. After it has been quiet for a few seconds, the robot lifts its arms to shove the deck of cards away.
(4) Art. Task description: a robot sits in front of a table with a piece of paper on it. It holds a marker in its hand. Its task is to draw a house.
LA. The robot lowers the marker to draw something. Its arm makes movements as if it is drawing. However, because it presses the marker down too hard, the marker remains stuck at one point on the paper. Instead of a house, only a dot is drawn. The robot does not acknowledge this problem.
LE. For a brief moment, the robot looks at the paper. However, instead of drawing, it lifts its head again and throws the marker away.
(5) Dance. Task description: a robot states: “hello there! I’m a great dancer, would you like to see me dance?” After a confederate says “yes”, the robot continues: “alright! I will perform the Tai Chi Chuan dance for you!” The robot then starts to perform this dance.
LA. After announcing the dance, the robot starts playing a song (Chan Chan by Buena Vista Social Club). After a few seconds, the confederate states: “NAO, you’re not dancing”. NAO immediately replies: “oops! Wrong song. I will perform the Tai Chi Chuan dance now.” The robot then starts to perform this dance.
LE. After announcing the dance, the robot starts playing a song (Chan Chan by Buena Vista Social Club). Meanwhile it states: “…or maybe not!” While sitting down, it concludes by saying “ha, let’s take a break”. The confederate tries to communicate with the robot by asking “Naomi? Why are you taking a break?”, but it does not respond, either verbally or non-verbally.
(6) Math. Task description: a robot solves some calculations out loud. For example, it says: “5 times 10 equals… 50!”. This scenario does not include any further dialogue as introduction or conclusion.
LA. After a few calculations, the robot starts giving some wrong answers that imply that it mixes up the type of operation that is required. For example, it says: “120 divided by 3 equals… 123!”.
LE. After a few calculations, the robot starts giving some useless (but possibly correct) answers, implying it does not feel like doing the task properly. For example, it says: “80 minus 20 equals… 150! Divided by my age.”
(7) Tictactoe. Task description: a robot plays a game of tic-tac-toe on a whiteboard with a human confederate. The robot stands facing the whiteboard. The confederate stands next to the whiteboard and draws the X’s and O’s. As soon as the robot sees the confederate, it proposes to play the game. The game is then played successfully and ends in a draw. The robot concludes by saying: “well, that was fun! Wanna play again?”
LA. After a few rounds, the robot asks the confederate to draw ‘its’ X on a block that is already filled with an X. The confederate corrects the robot and suggests it try again. The robot then repeats the exact same request. The confederate thickens the lines of the X in question and asks the robot to try again. This time, the robot asks for an empty block to be filled, implying that it had not seen the X correctly before. The game continues successfully and ends in a draw.
LE. After a few rounds, the robot asks the confederate to draw ‘its’ X on a block that is already filled with an O. The confederate corrects the robot and suggests it try again. The robot then repeats the exact same request. The confederate thickens the lines of the O in question and asks the robot to try again. In response, the robot still repeats its request. When the confederate simply responds: “No!”, the robot replies: “Ha, seems like you lost. But, practice makes perfect. Wanna play again?”
Rights and permissions
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
van der Woerdt, S., Haselager, P. (2017). Lack of Effort or Lack of Ability? Robot Failures and Human Perception of Agency and Responsibility. In: Bosse, T., Bredeweg, B. (eds) BNAIC 2016: Artificial Intelligence. BNAIC 2016. Communications in Computer and Information Science, vol 765. Springer, Cham. https://doi.org/10.1007/978-3-319-67468-1_11
Print ISBN: 978-3-319-67467-4
Online ISBN: 978-3-319-67468-1