
Why and How Robots Should Say ‘No’

International Journal of Social Robotics

“Listen, Mike, what did you say to Speedy when you sent him after the selenium?” Donovan was taken aback. “Well damn it – I don’t know. I just told him to get it.” “Yes, I know, but how? Try to remember the exact words.” “I said...uh...I said: ‘Speedy, we need some selenium. You can get it such-and-such a place. Go get it’ – that’s all. What more did you want me to say?” “You didn’t put any urgency into the order, did you?” “What for? It was pure routine.” Powell sighed. “Well, it can’t be helped now – but we’re in a fine fix.”

– Isaac Asimov, “Runaround” (1942)

Abstract

Language-enabled robots with moral reasoning capabilities will inevitably face situations in which they must respond to human commands that might violate normative principles and could cause harm to humans. We believe it is critical for robots to be able to reject such commands. We thus address the two key challenges of when and how to reject norm-violating directives. First, we present research on engineering language-enabled robots that can engage in rudimentary rejection dialogues, as well as related HRI research on the effectiveness of robot protest. Second, we argue that how rejections are phrased is important, and we review the factors that should guide natural-language formulations of command rejections. Finally, we identify open questions that will further inform the design of future language-capable and morally competent robots.
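To make the notion of a rejection dialogue concrete, the minimal Python sketch below shows one plausible shape such a mechanism could take: each incoming directive is screened against a small set of felicity-style conditions, and the first failed condition yields a natural-language rejection that states the reason for refusal. This is an illustrative assumption-laden example; the Directive fields, the CHECKS list, and the rejection phrasings are hypothetical and are not drawn from the authors' implementation.

    # A minimal, hypothetical sketch of a felicity-condition check for directives.
    # The Directive fields, check list, and phrasings are illustrative assumptions,
    # not the mechanism implemented by the paper's authors.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Directive:
        action: str               # e.g. "walk forward off the table"
        speaker: str              # who issued the command
        within_capability: bool   # can the robot physically do it?
        speaker_authorized: bool  # is the speaker permitted to command this?
        causes_harm: bool         # would carrying it out violate a norm or cause harm?

    # Each check returns None if satisfied, or a rejection utterance explaining
    # *why* the directive is refused (the "how" of saying 'no').
    CHECKS: List[Callable[[Directive], Optional[str]]] = [
        lambda d: None if d.within_capability
                  else f"Sorry, I am unable to {d.action}.",
        lambda d: None if d.speaker_authorized
                  else f"Sorry, {d.speaker}, you are not authorized to ask me to {d.action}.",
        lambda d: None if not d.causes_harm
                  else f"I cannot {d.action}, because doing so could hurt someone.",
    ]

    def respond(directive: Directive) -> str:
        """Accept the directive, or reject it with a stated reason."""
        for check in CHECKS:
            rejection = check(directive)
            if rejection is not None:
                return rejection
        return f"Okay, I will {directive.action}."

    # Example: an authorized but harmful command is rejected with an explanation.
    print(respond(Directive(action="walk forward off the table", speaker="visitor",
                            within_capability=True, speaker_authorized=True,
                            causes_harm=True)))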


Notes

  1. Video of the interaction can be found at https://www.youtube.com/watch?v=0tu4H1g3CtE

  2. Video at: https://www.youtube.com/watch?v=SkAAl7ERZPo

  3. Video at: https://www.youtube.com/watch?v=7YxmdpS5M_s (Note: The underscore in the URL may not copy and paste correctly).


Funding

Portions of this work were supported by a U.S. Army Research Laboratory contract award to the second author and by a National Research Council Postdoctoral Fellowship awarded to the first author. This work was also funded in part by Air Force Young Investigator Award 19RT0497 and by NSF Grants IIS-1909847, IIS-1849348, and IIS-1723963. The views expressed in this paper are solely those of the authors and should not be taken to reflect any official policy or position of the United States Government or the Department of Defense.

Author information


Corresponding author

Correspondence to Tom Williams.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest beyond the financial relationships listed above.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Briggs, G., Williams, T., Jackson, R.B. et al. Why and How Robots Should Say ‘No’. Int J of Soc Robotics 14, 323–339 (2022). https://doi.org/10.1007/s12369-021-00780-y
