Abstract:
Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have so far been performed by humans. Still, it will be humans who interact with these systems: relying on a system's decisions will require trust in the robot or machine and its algorithms. Trust research has a long history, but one dimension of trust, ethically or morally acceptable decisions, has received little attention so far. Humans continuously face ethical decisions, which they reach based on a personal value system and intuition. For people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g., when a decision requires favoring one person over another, this approach would violate laws and international human rights, as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make such decisions by chance; but would system users accept this? To find out whether randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey in which subjects rated their personal attitudes toward the decisions of moral algorithms in different scenarios. Our results (n=330) show that, although slightly more respondents state a preference for decisions based on ethical rules, randomization is perceived as most just and morally right and thus may drive decisions when other objective parameters equate.
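The proposal of deciding by chance when objective parameters equate can be read as randomized tie-breaking. A minimal sketch of that idea follows; it is not the authors' implementation, and the names `options` and `score` are hypothetical, with `score` standing in for whatever legally permissible, non-personal decision criteria a system applies.

```python
import random

def choose_outcome(options, score):
    """Pick the option with the best objective score; break exact ties uniformly at random.

    Hypothetical interface: `score` maps each option to a scalar built only
    from legally permissible, non-personal criteria.
    """
    best = max(score(o) for o in options)
    tied = [o for o in options if score(o) == best]
    # When all objective parameters equate, a uniform random draw avoids
    # ranking or classifying individuals by personal characteristics.
    return random.choice(tied)

# Usage: two rescue targets indistinguishable on objective criteria,
# so the choice falls to chance.
print(choose_outcome(["target_A", "target_B"], score=lambda o: 1.0))
```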
Published in: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Date of Conference: 28 August 2017 - 01 September 2017
Date Added to IEEE Xplore: 14 December 2017
Electronic ISSN: 1944-9437