Abstract:
The conversational ethical reasoning robot Immanuel is presented. Immanuel can reason about moral dilemmas from multiple ethical views. The reported study evaluates the perceived morality of the robot. The participants had a conversation with the robot on whether lying is permissible in a given situation. Immanuel first signaled uncertainty about whether lying is right or wrong in the situation, then disagreed with the participant's view, and finally asked for justification. The results indicate that participants with a higher tendency toward utilitarian judgments are initially more certain about their view than participants with a higher tendency toward deontological judgments. These differences vanish toward the end of the dialogue. Lying is both defended and argued against by utilitarian as well as deontologically oriented participants. The diversity of the reported arguments points to the variety of human moral judgment and calls for more fine-grained representations of moral reasons for social robots.
Published in: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Date of Conference: 28 August 2017 - 01 September 2017
Date Added to IEEE Xplore: 14 December 2017
Electronic ISSN: 1944-9437