“Listen, Mike, what did you say to Speedy when you sent him after the selenium?” Donovan was taken aback. “Well damn it – I don’t know. I just told him to get it.” “Yes, I know, but how? Try to remember the exact words.” “I said...uh...I said: ‘Speedy, we need some selenium. You can get it such-and-such a place. Go get it – that’s all.’ What more did you want me to say?” “You didn’t put any urgency into the order, did you?” “What for? It was pure routine.” Powell sighed. “Well, it can’t be helped now – but we’re in a fine fix.”
– Isaac Asimov, “Runaround” (1942)
Abstract
Language-enabled robots with moral reasoning capabilities will inevitably face situations in which they must respond to human commands that might violate normative principles and could cause harm to humans. We believe it is critical for robots to be able to reject such commands. We thus address two key challenges: when and how to reject norm-violating directives. First, we present research both on engineering language-enabled robots that can engage in rudimentary rejection dialogues and on related HRI research into the effectiveness of robot protest. Second, we argue that how rejections are phrased is important and review the factors that should guide natural language formulations of command rejections. Finally, we conclude by identifying relevant open questions that will further inform the design of future language-capable and morally competent robots.
Notes
Video of the interaction can be found at https://www.youtube.com/watch?v=0tu4H1g3CtE
Video at: https://www.youtube.com/watch?v=7YxmdpS5M_s (Note: The underscore in the URL may not copy and paste correctly).
References
Abel D, MacGlashan J, Littman ML (2016) Reinforcement learning as a framework for ethical decision making. In: Proceedings of the AAAI workshop on AI, ethics, and society, pp 54–61
Ågotnes T, Van Der Hoek W, Rodríguez-Aguilar JA, Sierra C, Wooldridge M (2007) On the logic of normative systems. In: Proceedings of the international joint conference on artificial intelligence (IJCAI), vol 7, pp 1181–1186
Aha DW, Coman A (2017) The AI rebellion: changing the narrative. In: Proceedings of the thirty-first AAAI conference on artificial intelligence, pp 4826–4830
Alicke MD, Zell E (2009) Social attractiveness and blame. J Appl Soc Psychol 39(9):2089–2105
Anderson M, Anderson SL (2014) GenEth: a general ethical dilemma analyzer. In: Twenty-eighth AAAI conference on artificial intelligence
Anderson SL (2011) The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, New York, pp 285–296
Andrighetto G, Villatoro D, Conte R (2010) Norm internalization in artificial societies. AI Commun 23(4):325–339
Arkin RC (2008) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. In: Proceedings of the 3rd ACM/IEEE international conference on human–robot interaction. ACM, pp 121–128
Arkin RC, Ulam P (2009) An ethical adaptor: behavioral modification derived from moral emotions. In: Proceedings of computational intelligence in robotics and automation (CIRA). IEEE, pp 381–387
Arnold T, Kasenberg D, Scheutz M (2017) Value alignment or misalignment—what will keep systems accountable? In: Proceedings of the AAAI workshop on AI, ethics, and society
Asimov I (1942) Runaround. Astounding Science Fiction 29(1):94–103
Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81
Bartneck C, Yogeeswaran K, Ser QM, Woodward G, Sparrow R, Wang S, Eyssel F (2018) Robots and racism. In: Proceedings of the 2018 ACM/IEEE international conference on human–robot interaction. ACM, pp 196–204
Bickmore TW, Trinh H, Olafsson S, O’Leary TK, Asadi R, Rickles NM, Cruz R (2018) Patient and consumer safety risks when using conversational assistants for medical information: an observational study of siri, alexa, and google assistant. J Med Internet Res 20(9):e11510
Blass JA, Forbus KD (2015) Moral decision-making by analogy: generalizations versus exemplars. In: Proceedings of the AAAI conference on artificial intelligence (AAAI), pp 501–507
Bower GH, Morrow DG (1990) Mental models in narrative comprehension. Science 247(4938):44–48
Briggs G, Gessell B, Dunlap M, Scheutz M (2014) Actions speak louder than looks: Does robot appearance affect human reactions to robot protest and distress? In: The 23rd IEEE international symposium on robot and human interactive communication. IEEE, pp 1122–1127
Briggs G, McConnell I, Scheutz M (2015) When robots object: evidence for the utility of verbal, but not necessarily spoken protest. In: International conference on social robotics. Springer, pp 83–92
Briggs G, Scheutz M (2012) Investigating the effects of robotic displays of protest and distress. In: International conference on social robotics, pp 238–247
Briggs G, Scheutz M (2014) How robots can affect human behavior: investigating the effects of robotic displays of protest and distress. Int J Soc Robot 6(3):343–355
Briggs G, Scheutz M (2015) “Sorry, I can’t do that”: Developing mechanisms to appropriately reject directives in human–robot interactions. In: Proceedings of the AAAI fall symposium series
Briggs G, Scheutz M (2017) The case for robot disobedience (cover story). Sci Am 316(1):44–47
Bringsjord S, Arkoudas K, Bello P (2006) Toward a general logicist methodology for engineering ethically correct robots. Intell Syst 21(4):38–44
Bringsjord S, Taylor J (2012) The divine-command approach to robot ethics. In: Robot ethics: the ethical and social implications of robotics, pp 85–108
Brown P, Levinson S (1987) Politeness: some universals in language usage. Cambridge University Press, Cambridge
Buhrmester M, Kwang T, Gosling SD (2011) Amazon’s mechanical turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 6(1):3–5
Carpenter J, Davis JM, Erwin-Stewart N, Lee TR, Bransford JD, Vye N (2009) Gender representation and humanoid robots designed for domestic use. Int J Soc Robot 1(3):261
Charisi V, Dennis L, Lieck MFR, Matthias A, Sombetzki MSJ, Winfield AF, Yampolskiy R (2017) Towards moral autonomous systems. arXiv preprint arXiv:1703.04741
Chita-Tegmark M, Lohani M, Scheutz M (2019) Gender effects in perceptions of robots and humans with varying emotional intelligence. In: 2019 14th ACM/IEEE international conference on human–robot interaction (HRI). IEEE, pp 230–238
Clark HH (1996) Using language. Cambridge University Press, Cambridge
Clarke R (2011) Asimov’s laws of robotics: implications for information technology. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, New York, pp 254–284
Crump MJ, McDonnell JV, Gureckis TM (2013) Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS ONE 8(3)
Cushman F (2008) Crime and punishment: distinguishing the roles of causal and intentional analyses in moral judgment. Cognition 108(2):353–380
Dannenhauer D, Floyd MW, Magazzeni D, Aha DW (2018) Explaining rebel behavior in goal reasoning agents. In: ICAPS Workshop on EXplainable AI Planning (XAIP)
Dehghani M, Tomai E, Forbus KD, Klenk M (2008) An integrated reasoning approach to moral decision-making. In: Proceedings of the AAAI conference on artificial intelligence (AAAI), pp 1280–1286
Dennis L, Fisher M, Slavkovik M, Webster M (2016) Formal verification of ethical choices in autonomous systems. Robot Auton Syst 77:1–14
Eyssel F, Hegel F (2012) (S)he’s got the look: gender stereotyping of robots. J Appl Soc Psychol 42(9):2213–2230
Frankfurt HG (1986) On bullshit. Princeton University Press, Princeton
Frasca T, Thielstrom R, Krause E, Scheutz M (2020) “Can you do this?” Self-assessment dialogues with autonomous robots before, during, and after a mission. In: HRI workshop on assessing, explaining, and conveying robot proficiency for human–robot teaming
Fraune MR, Kawakami S, Sabanovic S, De Silva PRS, Okada M (2015) Three’s company, or a crowd?: The effects of robot number and behavior on HRI in Japan and the USA. In: Robotics: Science and systems
Gervits F, Briggs G, Scheutz M (2017) The pragmatic parliament: a framework for socially-appropriate utterance selection in artificial agents. In: 39th annual meeting of the cognitive science society, London, UK
Gibbon D, Griffiths S (2017) Multilinear grammar: ranks and interpretations. Open Linguistics 3(1):265–307
de Graaf MM, Malle BF (2017) How people explain action (and autonomous intelligent systems should too). In: 2017 AAAI Fall Symposium Series
de Graaf MM, Malle BF (2019) People’s explanations of robot behavior subtly reveal mental state inferences
Greene JD (2004) Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Department of Psychology, Harvard University, Cambridge
Greene JD (2009) Dual-process morality and the personal/impersonal distinction: a reply to McGuire, Langdon, Coltheart, and Mackenzie. J Exp Soc Psychol 45(3):581–584
Gureckis TM, Martin J, McDonnell J, Rich AS, Markant D, Coenen A, Halpern D, Hamrick JB, Chan P (2016) psiturk: an open-source framework for conducting replicable behavioral experiments online. Behav Res Methods 48(3):829–842
Haring KS, Mougenot C, Ono F, Watanabe K (2014) Cultural differences in perception and attitude towards robots. Int J Affect Eng 13(3):149–157
Haring KS, Silvera-Tawil D, Matsumoto Y, Velonaki M, Watanabe K (2014) Perception of an android robot in Japan and Australia: a cross-cultural comparison. In: International conference on social robotics. Springer, pp 166–175
Hayes B, Shah JA (2017) Improving robot controller transparency through autonomous policy explanation. In: Proceedings of the 2017 ACM/IEEE international conference on human–robot interaction. ACM, pp 303–312
Jackson RB, Wen R, Williams T (2019) Tact in noncompliance: the need for pragmatically apt responses to unethical commands. In: Proceedings of the AAAI/ACM conference on artificial intelligence, ethics, and society
Jackson RB, Williams T (2018) Robot: asker of questions and changer of norms? In: Proceedings of the international conference on robot ethics and standards
Jackson RB, Williams T (2019) Language-capable robots may inadvertently weaken human moral norms. In: Proceedings of the companion of the 14th ACM/IEEE international conference on human–robot interaction
Jackson RB, Williams T (2019) On perceived social and moral agency in natural language capable robots. In: Proceedings of the 2019 HRI workshop on the dark side of human–robot interaction: ethical considerations and community guidelines for the Field of HRI
Jackson RB, Williams T, Smith NM (2020) Exploring the role of gender in perceptions of robotic noncompliance. In: Proceedings of the 15th ACM/IEEE international conference on human–robot interaction
Johnson-Laird PN (1980) Mental models in cognitive science. Cogn Sci 4(1):71–115
Johnson-Laird PN (1983) Mental models: towards a cognitive science of language, inference, and consciousness. Harvard University Press, Cambridge
Kasenberg D, Arnold T, Scheutz M (2018) Norms, rewards, and the intentional stance: Comparing machine learning approaches to ethical training. In: Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society. ACM, pp 184–190
Kasenberg D, Scheutz M (2018) Inverse norm conflict resolution. In: Proceedings of the 1st AAAI/ACM workshop on artificial intelligence, ethics, and society
Kasenberg D, Thielstrom R, Scheutz M (2020) Generating explanations for temporal logic planner decisions. In: Proceedings of the 30th international conference on automated planning and scheduling (ICAPS)
Kennedy J, Baxter P, Belpaeme T (2014) Children comply with a robot’s indirect requests. In: Proceedings of the international conference on human–robot interaction. ACM, pp 198–199
Komatsu T, Malle BF, Scheutz M (2021) Blaming the reluctant robot: parallel blame judgments for robots in moral dilemmas across U.S. and Japan. In: Proceedings of the 2021 ACM/IEEE international conference on human–robot interaction, pp 63–72
Kuipers B (2016) Human-like morality and ethics for robots. In: AAAI workshop: AI, ethics, and society
Kuipers B (2016) Toward morality and ethics for robots. In: Ethical and moral considerations in non-human agents, AAAI Spring Symposium Series
Le Bui M, Noble SU (2020) We’re missing a moral framework of justice in artificial intelligence. In: The Oxford handbook of ethics of AI
Lee HR, Šabanović S (2014) Culturally variable preferences for robot design and use in South Korea, Turkey, and the United States. In: 2014 9th ACM/IEEE international conference on human–robot interaction (HRI). IEEE, pp 17–24
Lee HR, Sung J, Šabanović S, Han J (2012) Cultural design of domestic robots: a study of user expectations in Korea and the United States. In: 2012 IEEE RO-MAN: The 21st IEEE international symposium on robot and human interactive communication. IEEE, pp 803–808
Lee N, Kim J, Kim E, Kwon O (2017) The influence of politeness behavior on user compliance with social robots in a healthcare service setting. Int J Soc Robot 9(5):727–743
Levinson SC (2000) Presumptive meanings: the theory of generalized conversational implicature. MIT Press, Cambridge
Lockshin J, Williams T (2020) “We need to start thinking ahead”: the impact of social context on linguistic norm adherence. In: Proceedings of the annual meeting of the cognitive science society
Lomas M, Chevalier R, Cross II EV, Garrett RC, Hoare J, Kopack M (2012) Explaining robot actions. In: Proceedings of the seventh annual ACM/IEEE international conference on human–robot interaction. ACM, pp 187–188
Madumal P, Miller T, Vetere F, Sonenberg L (2018) Towards a grounded dialog model for explainable artificial intelligence. arXiv preprint arXiv:1806.08055
Malle BF (2016) Integrating robot ethics and machine morality: the study and design of moral competence in robots. Ethics Info Tech 18(4):243–256
Malle BF, Guglielmo S, Monroe AE (2014) A theory of blame. Psychol Inq 25(2):147–186
Mavridis N (2007) Grounded situation models for situated conversational assistants. Ph.D. thesis, Massachusetts Institute of Technology
Miller T (2018) Explanation in artificial intelligence: insights from the social sciences. Artificial Intelligence
Mills S (2003) Gender and politeness, vol 17. Cambridge University Press, Cambridge
Mills S (2005) Gender and impoliteness
Murphy RR, Woods DD (2009) Beyond asimov: the three laws of responsible robotics. IEEE Intell Syst 24(4):14–20
Nass C, Moon Y, Green N (1997) Are machines gender neutral? gender-stereotypic responses to computers with voices. J Appl Soc Psychol 27(10):864–876
Nikolaidis S, Kwon M, Forlizzi J, Srinivasa S (2017) Planning with verbal communication for human–robot collaboration. arXiv preprint arXiv:1706.04694
Oosterveld B, Brusatin L, Scheutz M (2017) Two bots, one brain: component sharing in cognitive robotic architectures. In: Proceedings of the companion of the 2017 ACM/IEEE international conference on human–robot interaction. ACM
Park DH, Hendricks LA, Akata Z, Schiele B, Darrell T, Rohrbach M (2016) Attentive explanations: justifying decisions and pointing to the evidence. arXiv preprint arXiv:1612.04757
Pereira LM, Saptawijaya A (2009) Modelling morality with prospective logic. Int J Reason Based Intell Syst 1(3–4):209–221
Rosemont Jr H, Ames RT (2016) Confucian role ethics: a moral vision for the 21st century? V&R unipress GmbH
Russell S, Dewey D, Tegmark M (2015) Research priorities for robust and beneficial artificial intelligence. AI Mag 36(4):105–114
Šabanović S (2010) Robots in society, society in robots. Int J Soc Robot 2(4):439–450
Sarathy V, Arnold T, Scheutz M (2019) When exceptions are the norm: exploring the role of consent in HRI. ACM Trans Hum Robot Interact 9(2):1–21
Schermerhorn P, Scheutz M, Crowell CR (2008) Robot social presence and gender: Do females view robots differently than males? In: Proceedings of the 3rd ACM/IEEE international conference on human robot interaction. ACM, pp 263–270
Scheutz M (2016) The need for moral competency in autonomous agent architectures. In: Fundamental issues of artificial intelligence. Springer, pp 515–525
Scheutz M (2017) The case for explicit ethical agents. AI Mag 38(4):57–64
Scheutz M, Briggs G, Cantrell R, Krause E, Williams T, Veale R (2013) Novel mechanisms for natural human–robot interactions in the Diarc architecture. In: Proceedings of AAAI workshop on intelligent robotic systems
Scheutz M, Williams T, Krause E, Oosterveld B, Sarathy V, Frasca T (2018) An overview of the distributed integrated cognition affect and reflection Diarc architecture. In: Ferreira MIA, Sequeira JS, Ventura R (eds) Cognitive architectures (in press)
Searle JR (1969) Speech acts: an essay in the philosophy of language. Cambridge University Press, Cambridge
Searle JR (1976) A classification of illocutionary acts. Lang Soc 5(1):1–23
Shibata T, Wada K, Ikeda Y, Sabanovic S (2009) Cross-cultural studies on subjective evaluation of a seal robot. Adv Robot 23(4):443–458
Shim J, Arkin RC (2013) A taxonomy of robot deception and its benefits in HRI. In: 2013 IEEE international conference on systems, man, and cybernetics. IEEE, pp 2328–2335
Siegel M, Breazeal C, Norton MI (2009) Persuasive robotics: the influence of robot gender on human behavior. In: 2009 IEEE/RSJ international conference on intelligent robots and systems. IEEE, pp 2563–2568
Stewart N, Chandler J, Paolacci G (2017) Crowdsourcing samples in cognitive science. Trends Cogn Sci
Strait M, Briggs P, Scheutz M (2015) Gender, more so than age, modulates positive perceptions of language-based human-robot interactions. In: 4th international symposium on new frontiers in human robot interaction
Strait M, Ramos AS, Contreras V, Garcia N (2018) Robots racialized in the likeness of marginalized social identities are subject to greater dehumanization than those racialized as white. In: 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE, pp 452–457
Sun R (2013) Moral judgment, human motivation, and neural networks. Cogn Comput 5(4):566–579
Tay B, Jung Y, Park T (2014) When stereotypes meet robots: the double-edge sword of robot gender and personality in human-robot interaction. Comput Hum Behav 38:75–84
Thielstrom R, Roque A, Chita-Tegmark M, Scheutz M (2020) Generating explanations of action failures in a cognitive robotic architecture. In: Proceedings of NL4XAI: 2nd workshop on interactive natural language technology for explainable artificial intelligence
Vanderelst D, Winfield A (2017) An architecture for ethical robots inspired by the simulation theory of cognition. Cogn Syst Res
Wallach W, Franklin S, Allen C (2010) A conceptual and computational model of moral decision making in human and artificial agents. Top Cogn Sci 2(3):454–485
Wang Y, Young JE (2014) Beyond pink and blue: gendered attitudes towards robots in society. In: Proceedings of gender and IT appropriation. Science and practice on dialogue-forum for interdisciplinary exchange. European Society for Socially Embedded Technologies, p 49
Wen R, Jackson RB, Williams T, Zhu Q (2019) Towards a role ethics approach to command rejection. In: Proceedings of the 2019 HRI workshop on the dark side of human-robot interaction: ethical considerations and community guidelines for the field of HRI
Wen R, Siddiqui MA, Williams T (2020) Dempster–Shafer theoretic learning of indirect speech act comprehension norms. In: AAAI, pp 10410–10417
Williams T, Briggs G, Oosterveld B, Scheutz M (2015) Going beyond command-based instructions: extending robotic natural language interaction capabilities. In: Proceedings of twenty-ninth AAAI conference on artificial intelligence
Williams T, Jackson RB, Lockshin J (2018) A Bayesian analysis of moral norm malleability during clarification dialogues. In: Proceedings of the 40th annual meeting of the Cognitive Science Society
Williams T, Zhu Q, Wen R, de Visser EJ (2020) The confucian matador: three defenses against the mechanical bull. In: Companion of the 2020 ACM/IEEE International conference on human–robot interaction (alt.HRI), pp 25–33
Winfield AF, Blum C, Liu W (2014) Towards an ethical robot: internal models, consequences and ethical action selection. In: Conference towards autonomous robotic systems. Springer, pp 85–96
Zhu Q, Williams T, Jackson B, Wen R (2020) Blame-laden moral rebukes and the morally competent robot: a Confucian ethical perspective. Sci Eng Ethics 26(5):2511–2526
Zhu Q, Williams T, Wen R (2019) Confucian robot ethics. In: Computer Ethics-Philosophical Enquiry (CEPE) Proceedings 2019, vol 1, p 12
Zwaan RA (2016) Situation models, mental simulations, and abstract concepts in discourse comprehension. Psychon Bull Rev 23(4):1028–1034
Funding
Portions of this work were supported by a U.S. Army Research Laboratory contract award to the second author. Portions of this work were supported by a National Research Council Postdoctoral Fellowship awarded to the first author. The views expressed in this paper are solely those of the authors and should not be taken to reflect any official policy or position of the United States Government or the Department of Defense. This work was also funded in part by Air Force Young Investigator Award 19RT0497, and by NSF Grants IIS-1909847, IIS-1849348, and IIS-1723963.
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest beyond the financial relationships listed above.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Briggs, G., Williams, T., Jackson, R.B. et al. Why and How Robots Should Say ‘No’. Int J of Soc Robotics 14, 323–339 (2022). https://doi.org/10.1007/s12369-021-00780-y