Abstract
Cheating is a universally salient and disliked behavior. Previous research has shown that a cheating robot dramatically increases its perceived agency. That research, however, did not directly compare human cheating to robot cheating. We examined whether a human and a robot cheater were evaluated differently in terms of reactionary behaviors, attribution of mental states, and perceptions of competence, warmth, agency, and capacity to experience. This study partially replicated previous findings [10]: participants were highly socially engaged with the cheating robot and reacted with hostility to its cheating, whereas these reactions were not observed in the human condition. Additionally, play interactions with the robot were rated as more discomforting than those with the human player. Finally, the robot was perceived as less warm, competent, agentic, and capable of experience than the human, a result attributable primarily to the inherent difference in human-likeness between the agents. We discuss several implications of this study for the design of robot behavior and for human social norms.
This research is supported by the Air Force Office of Scientific Research under grant number 16RT0881. The views expressed in this article are those of the authors and do not necessarily reflect those of the USAF Academy, the United States Air Force, or the United States Government.
References
Abubshait, A., Wiese, E.: You look human, but act like a machine: agent appearance and behavior modulate different aspects of human-robot interaction. Front. Psychol. 8, 1393 (2017)
Carpinella, C.M., Wyman, A.B., Perez, M.A., Stroessner, S.J.: The robotic social attributes scale (RoSAS): development and validation. In: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp. 254–262. ACM (2017)
Cosmides, L., Tooby, J.: Cognitive adaptations for social exchange. In: The Adapted Mind: Evolutionary Psychology and the Generation of Culture, pp. 163–228 (1992)
Fiske, S.T., Cuddy, A.J., Glick, P.: Universal dimensions of social cognition: warmth and competence. Trends Cogn. Sci. 11(2), 77–83 (2007)
Gray, K., Young, L., Waytz, A.: Mind perception is the essence of morality. Psychol. Inq. 23(2), 101–124 (2012)
Haring, K.S., Watanabe, K., Velonaki, M., Tossell, C.C., Finomore, V.: FFAB–the form function attribution bias in human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 10(4), 843–851 (2018)
Haring, K.S., Watanabe, K., Silvera-Tawil, D., Velonaki, M., Takahashi, T.: Changes in perception of a small humanoid robot. In: 2015 6th International Conference on Automation, Robotics and Applications (ICARA), pp. 83–89. IEEE (2015)
Jackson, R.B., Wen, R., Williams, T.: Tact in noncompliance: the need for pragmatically apt responses to unethical commands (2019)
Korman, J., Harrison, A., McCurry, M., Trafton, G.: Beyond programming: can robots’ norm-violating actions elicit mental state attributions? In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 530–531. IEEE (2019)
Litoiu, A., Ullman, D., Kim, J., Scassellati, B.: Evidence that robots trigger a cheating detector in humans. In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 165–172. ACM (2015)
Lucas, G.M., Gratch, J., King, A., Morency, L.P.: It’s only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav. 37, 94–100 (2014)
Phillips, E., Zhao, X., Ullman, D., Malle, B.F.: What is human-like?: decomposing robots’ human-like appearance using the anthropomorphic robot (abot) database. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 105–113. ACM (2018)
Short, E., Hart, J., Vu, M., Scassellati, B.: No fair!! an interaction with a cheating robot. In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 219–226. IEEE (2010)
Stafford, R.Q., MacDonald, B.A., Jayawardena, C., Wegner, D.M., Broadbent, E.: Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot. Int. J. Soc. Robot. 6(1), 17–32 (2014)
Ullman, D., Leite, L., Phillips, J., Kim-Cohen, J., Scassellati, B.: Smart human, smarter robot: how cheating affects perceptions of social agency. In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 36 (2014)
Van Lier, J., Revlin, R., De Neys, W.: Detecting cheaters without thinking: testing the automaticity of the cheater detection module. PLoS ONE 8(1), e53827 (2013)
Verplaetse, J., Vanneste, S., Braeckman, J.: You can judge a book by its cover: the sequel.: a kernel of truth in predictive cheating detection. Evol. Hum. Behav. 28(4), 260–271 (2007)
Wiese, E., Metta, G., Wykowska, A.: Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front. Psychol. 8, 1663 (2017)
Zhao, X.: Rethinking anthropomorphism: the antecedents, unexpected consequences, and potential remedy for perceiving machines as human-like. In: Symposium submitted to Proceedings of the Association for Consumer Research (in press)
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Haring, K., Nye, K., Darby, R., Phillips, E., de Visser, E., Tossell, C. (2019). I’m Not Playing Anymore! A Study Comparing Perceptions of Robot and Human Cheating Behavior. In: Salichs, M., et al. Social Robotics. ICSR 2019. Lecture Notes in Computer Science, vol 11876. Springer, Cham. https://doi.org/10.1007/978-3-030-35888-4_38
Print ISBN: 978-3-030-35887-7
Online ISBN: 978-3-030-35888-4