ABSTRACT
Establishing when, how, and why robots should be considered moral agents is key to advancing human-robot interaction. Whether a robot is considered a moral agent has significant implications for how researchers, designers, and users can, should, and do make sense of robots, and for whether a robot's perceived agency triggers social and moral cognitive and behavioral processes in humans. Robotic moral agency also shapes how people should and do hold robots morally accountable, ascribe blame to them, develop trust in their actions, and determine when robots wield moral influence. In this workshop on Perspectives on Moral Agency in Human-Robot Interaction, we plan to bring together participants who are interested in or have studied robots' moral agency and its impact on human behavior. We intend to provide a platform for interdisciplinary discussion of (1) which elements should be considered in determining the moral agency of a robot, (2) how these elements can be measured, (3) how they can be realized computationally and applied to robotic systems, and (4) what societal impact is anticipated when moral agency is assigned to a robot. We encourage participation from diverse research fields, such as computer science, psychology, cognitive science, and philosophy, as well as from social groups marginalized in terms of gender, ethnicity, and culture.