
How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress

Published in: International Journal of Social Robotics

Abstract

The rise of military drones and other robots deployed in ethically sensitive contexts has fueled interest in developing autonomous agents that behave ethically. The ability of autonomous agents to reason independently about situational ethics will inevitably lead to confrontations between robots and human operators over the morality of issued commands. Ideally, a robot would be able to convince a human operator to abandon a potentially unethical course of action. To investigate this issue, we conducted an experiment measuring how successfully a humanoid robot could dissuade a person from performing a task using verbal refusals and affective displays that conveyed distress. The results demonstrate a significant behavioral effect on task completion as well as significant effects on subjective measures, such as how comfortable subjects felt ordering the robot to complete the task. We discuss the potential relationship between the perceived agency of the robot and subjects’ sensitivity to robotic confrontation, as well as the possible ethical pitfalls of using robotic displays of affect to shape human behavior.


Fig. 1
Fig. 2
Fig. 3

Notes

  1. To clarify, we mean independent in the sense that the robot is engaging in a separate and parallel moral reasoning process with human partners during a situation. We do not mean the robot has learned or derived moral principles/rules without prior human instruction or programming.

  2. The only change is that the protest is worded in the third person rather than the first person.

  3. Indeed, it is the codification of laws of war that makes the warfare domain a potentially plausible application of ethically-sensitive robots [1].

References

  1. Arkin R (2009) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. Tech. Rep. GIT-GVU-07-11, Georgia Institute of Technology

  2. Bartneck C, van der Hoek M, Mubin O, Mahmud AA (2007) ‘Daisy, Daisy, give me your answer do!’: switching off a robot. In: Proceedings of the ACM/IEEE international conference on human-robot interaction, ACM, pp 217–222

  3. Bartneck C, Verbunt M, Mubin O, Mahmud AA (2007) To kill a mockingbird robot. In: Proceedings of the ACM/IEEE international conference on human-robot interaction, ACM, pp 81–87

  4. Bridewell W, Isaac A (2011) Recognizing deception: a model of dynamic belief attribution. Advances in cognitive systems: papers from the 2011 AAAI fall symposium, pp 50–57

  5. Briggs G (2012) Machine ethics, the frame problem, and theory of mind. In: Proceedings of the AISB/IACAP World Congress

  6. Briggs G, Scheutz M (2012) Investigating the effects of robotic displays of protest and distress. In: Ge SS, Li H, Cabibihan JJ, Tan YK (eds) Social robotics. Springer, Dordrecht, pp 238–247

  7. Bringsjord S, Arkoudas K, Bello P (2006) Toward a general logicist methodology for engineering ethically correct robots. IEEE Intell Syst 21(5):38–44

  8. Bringsjord S, Taylor J (2009) Introducing divine-command robot ethics. Tech. Rep. 062310, Rensselaer Polytechnic Institute

  9. Call J, Tomasello M (2008) Does the chimpanzee have a theory of mind? 30 years later. Trends Cogn Sci 12(5):187–192

  10. Crowell C, Villano M, Scheutz M, Schermerhorn P (2009) Gendered voice and robot entities: perceptions and reactions of male and female subjects. In: Proceedings of the 2009 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 3735–3741

  11. Dennett D (1971) Intentional systems. J Philos 68(4):87–106

  12. Dougherty EG, Scharfe H (2011) Initial formation of trust: designing an interaction with geminoid-dk to promote a positive attitude for cooperation. In: Ge SS, Li H, Cabibihan JJ, Tan YK (eds) Social robotics. Springer, Dordrecht, pp 95–103

  13. Epley N, Akalis S, Waytz A, Cacioppo JT (2008) Creating social connection through inferential reproduction: loneliness and perceived agency in gadgets, gods, and greyhounds. Psychol Sci 19(2):114–120

  14. Guarini M (2006) Particularism and the classification and reclassification of moral cases. IEEE Intell Syst 21(4):22–28

  15. Kahn P, Ishiguro H, Gill B, Kanda T, Freier N, Severson R, Ruckert J, Shen S (2012) Robovie, you’ll have to go into the closet now: children’s social and moral relationships with a humanoid robot. Dev Psychol 48:303–314

  16. Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, Kircher T (2008) Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3(7):e2597

  17. MacDorman KF, Coram JA, Ho CC, Patel H (2010) Gender differences in the impact of presentational factors in human character animation on decisions in ethical dilemmas. Presence: Teleoper Virtual Environ 19(3):213–229

  18. Nass C (2004) Etiquette equality: exhibitions and expectations of computer politeness. Commun ACM 47(4):35–37

  19. Nass C, Moon Y (2000) Machines and mindlessness: social responses to computers. J Soc Issues 56(1):81–103

  20. Ogawa K, Bartneck C, Sakamoto D, Kanda T, Ono T, Ishiguro H (2009) Can an android persuade you? In: Proceedings of the 18th IEEE international symposium on robot and human interactive communication, IEEE, pp 516–521

  21. Pfeiffer UJ, Timmermans B, Bente G, Vogeley K, Schilbach L (2011) A non-verbal Turing test: differentiating mind from machine in gaze-based social interaction. PLoS ONE 6(11):e27591

  22. Riek LD, Rabinowitch TC, Chakrabarti B, Robinson P (2009) Empathizing with robots: fellow feeling along the anthropomorphic spectrum. In: Proceedings of the 3rd international conference on affective computing and intelligent interaction (ACII 2009), IEEE, pp 1–6

  23. Rose R, Scheutz M, Schermerhorn P (2010) Towards a conceptual and methodological framework for determining robot believability. Interact Stud 11(2):314–335

  24. Scheutz M (2012) The affect dilemma for artificial agents: should we develop affective artificial agents? IEEE Trans Affect Comput 3:424–433

  25. Scheutz M (2012) The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin P, Bekey G, Abney K (eds) Anthology on robo-ethics. MIT Press, Cambridge

  26. Siegel M, Breazeal C, Norton M (2009) Persuasive robotics: the influence of robot gender on human behavior. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, IEEE, pp 2563–2568

  27. Sparrow R (2004) The turing triage test. Ethics Inf Technol 6(4):203–213

  28. Sung JY, Guo L, Grinter R, Christensen H (2007) ‘My Roomba is Rambo’: intimate home appliances. In: Proceedings of the 9th international conference on ubiquitous computing, UbiComp, pp 145–162

  29. Takayama L, Groom V, Nass C (2009) I’m sorry, Dave: I’m afraid I won’t do that: social aspects of human-agent conflict. In: Proceedings of the 27th international conference on human factors in computing systems, ACM SIGCHI, New York, pp 2099–2107

  30. Turkle S (2005) Relational artifacts/children/elders: the complexities of cybercompanions. In: Toward social mechanisms of android science, Cognitive Science Society, pp 62–73

  31. Wallach W (2010) Robot minds and human ethics: the need for a comprehensive model of moral decision making. Ethics Inf Technol 12:243–250

Author information

Corresponding author

Correspondence to Gordon Briggs.

Cite this article

Briggs, G., Scheutz, M. How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress. Int J of Soc Robotics 6, 343–355 (2014). https://doi.org/10.1007/s12369-014-0235-1
