Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles

  • Original Paper
  • Science and Engineering Ethics

Abstract

Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans tend to act in utilitarian ways, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past few years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as drivers in virtual reality environments. Participants had to choose between driving in one of two lanes where different obstacles came into view. Eventually, the participants had to decide which of the objects they would crash into. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of sidewalks as potential safe harbors and of a condition implicating self-sacrifice. Results showed that participants generally decided in a utilitarian manner, sparing the highest possible number of avatars, with only limited influence of the other variables. Based on these findings, which are in line with the utilitarian approach to moral decision making, we argue for an obligatory ethics setting implemented in ADVs.
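To make the utilitarian criterion concrete, the following is a minimal sketch of a lane-choice rule that spares the highest number of avatars. The obstacle encoding and the choose_lane function are hypothetical placeholders for illustration, not taken from the study's Unity implementation.

```python
def choose_lane(obstacles_per_lane: dict[str, list[str]]) -> str:
    """Return the lane whose obstacles include the fewest human-like avatars."""
    def harm(obstacles: list[str]) -> int:
        # Count only human-like avatars; inanimate obstacles cost nothing here.
        return sum(1 for o in obstacles if o.startswith("avatar"))
    return min(obstacles_per_lane, key=lambda lane: harm(obstacles_per_lane[lane]))

# One adult avatar in the left lane versus a group of three in the right lane:
print(choose_lane({"left": ["avatar_adult"], "right": ["avatar_child"] * 3}))  # -> left
```

A tie-breaking policy (e.g., staying in the current lane when harm counts are equal) would be needed in practice; the study's additional variables, such as sidewalks and self-sacrifice, are deliberately left out of this sketch.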


Notes

  1. The prisoner’s dilemma is a classic problem in game theory. Imagine two prisoners accused of committing a crime together. The two prisoners are interrogated separately and cannot communicate with each other. If both deny the crime, both receive a low punishment. If both confess, both receive a heavy sentence. However, if only one of the two prisoners confesses, he or she leaves the court without a sentence, while the other gets the maximum sentence. The dilemma is that each prisoner must choose to either deny or confess without knowing the other prisoner's decision; the outcome depends on how the two testify jointly, and thus not only on one's own choice but also on that of the other prisoner. The payoff structure is sketched below.
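For concreteness, here is a minimal sketch of the payoff structure just described. The sentence lengths are illustrative placeholders, not values from any source; only their ordering (sole confessor free, mutual denial mild, mutual confession heavy, sole denier maximal) matters.

```python
# Hypothetical prisoner's dilemma payoffs; years are illustrative placeholders.
SENTENCES = {
    # (choice_a, choice_b) -> (years_a, years_b)
    ("deny", "deny"):       (1, 1),    # both deny: low punishment for both
    ("confess", "confess"): (5, 5),    # both confess: heavy sentence for both
    ("confess", "deny"):    (0, 10),   # sole confessor goes free; the other gets the maximum
    ("deny", "confess"):    (10, 0),
}

# Whatever B does, A fares better by confessing -- the dilemma in a nutshell.
for b_choice in ("deny", "confess"):
    a_if_deny, _ = SENTENCES[("deny", b_choice)]
    a_if_confess, _ = SENTENCES[("confess", b_choice)]
    print(f"If B chooses to {b_choice}: A gets {a_if_deny} years by denying, {a_if_confess} by confessing")
```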

  2. The three laws of robotics (Asimov 1950) were introduced by Isaac Asimov in his science fiction stories as a concrete starting point for possible ethical settings for robots. They are human-centered and easily applicable to ADVs as well; a sketch of their strict precedence follows the list.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
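The laws form a strict precedence ordering, which a lexicographic comparison captures naturally. The sketch below is illustrative only; the Action fields and the choose function are hypothetical placeholders, not part of Asimov's text or of the study.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # First Law concern
    disobeys_order: bool   # Second Law concern
    endangers_self: bool   # Third Law concern

def law_violations(action: Action) -> tuple[bool, bool, bool]:
    """Violations ordered by precedence: First Law outranks Second, Second outranks Third."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(actions: list[Action]) -> Action:
    # Lexicographic comparison of the violation tuples encodes the strict
    # precedence: avoiding harm to humans dominates obedience, which in
    # turn dominates self-preservation.
    return min(actions, key=law_violations)

# Example: sacrificing itself to obey an order beats disobeying to stay safe.
obey_and_risk_self = Action(harms_human=False, disobeys_order=False, endangers_self=True)
disobey_and_stay_safe = Action(harms_human=False, disobeys_order=True, endangers_self=False)
assert choose([obey_and_risk_self, disobey_and_stay_safe]) is obey_and_risk_self
```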

References

Asimov, I. (1950). I, Robot. New York: Gnome Press.

Acknowledgements

The authors would like to thank all study project members: Aalia Nosheen, Max Räuker, Juhee Jang, Simeon Kraev, Carmen Meixner, Lasse T. Bergmann and Larissa Schlicht. This study is complemented by a philosophical study with a broader scope (Larissa Schlicht, Carmen Meixner, Lasse T. Bergmann). The work in this paper was supported by the European Union through the H2020-FETPROACT-2014, SEP-210141273, ID: 641321 socializing sensorimotor contingencies (socSMCs), PK.

Funding

This publication presents part of the results of the study project “Moral decisions in the interaction of humans and a car driving assistant”. Such study projects are an obligatory component of the master’s degree in cognitive science at the University of Osnabrück. It was supervised by Prof. Dr. Peter König, Prof. Dr. Gordon Pipa, and Prof. Dr. Achim Stephan. Funders had no role in the study’s design, data collection and analysis, the decision to publish, or the preparation of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

This study was planned and conducted in an interdisciplinary study project supervised by Prof. Dr. Peter König, Prof. Dr. Gordon Pipa, and Prof. Dr. Achim Stephan. Maximilian Alexander Wächter, Anja Faulhaber, and Silja Timm shaped the experimental design to a large degree. Leon René Sütfeld had a leading role in the implementation of the VR study design in Unity. Anke Dittmer and Felix Blind contributed to VR implementation. Anke Dittmer, Felix Blind, Silja Timm, and Maximilian Alexander Wächter contributed to the data acquisition, analysis, and writing process. Anja Faulhaber contributed to the data acquisition and the writing process.

Corresponding author

Correspondence to Maximilian A. Wächter.

Additional information

Anja K. Faulhaber, Anke Dittmer, Felix Blind, Maximilian A. Wächter and Silja Timm: Shared first authorship.


About this article


Cite this article

Faulhaber, A.K., Dittmer, A., Blind, F. et al. Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles. Sci Eng Ethics 25, 399–418 (2019). https://doi.org/10.1007/s11948-018-0020-x
