
Virtuous vs. utilitarian artificial moral agents

  • Open Forum · AI & SOCIETY

Abstract

Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well suited for machine ethics.


Figures 1–4 appear in the published article.

Notes

  1. There are approaches to ethics that eschew the big three theoretical traditions, but I would argue that such approaches cannot entirely avoid the concerns and lessons that the study of these traditions has revealed.

  2. There are ways of developing AMAs that may not explicitly reference any standard moral theories, but rely upon a set of moral standards derived from common human interests or scientific surveys of preferences.

  3. H&M (2016, 2017) prefer the term ‘AAMA’: Artificial Autonomous Moral Agent.

  4. H&M (2017: 136–138) have a fourth analogy, that the moral cognition of an AMA can be modeled on machine learning—but this is logically implied by the others, a sort of conclusion to their set of analogies.

  5. Their intention is that the virtuous AMA is geared towards specific moral domains such as patient care or autonomous vehicles (H&M 2017: 221), but it seems that with enough computational power it could be extended to handle multiple moral domains simultaneously.

  6. One key difference between the two is that Bentham thinks there is only one kind of pleasure (any differences in pleasures are merely conceptual), whereas Mill makes a real distinction between intellectual and bodily pleasures, holding that the intellectual pleasures are superior.

  7. Here, neural nets and deep learning could be used to identify and establish new rules or patterns of behavior (though how best to do so is a question for software engineers to decide).

  8. Hooker (2000: 89) argues that conflicts between rules should not be resolved by applying Act U, partly because people would lose confidence in the system of rules; however, this is not a worry for machine ethics since confidence does not enter the equation.
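The two-level architecture sketched in the abstract and in notes 7–8 — established moral rules at the first level, with a fall-back to direct utility calculation when the rules conflict or underdetermine the choice — can be illustrated in code. This is purely an illustrative sketch, not the paper's implementation: all names (`Rule`, `utility`, `two_level_decide`) and the toy utility values are hypothetical.

```python
# Hypothetical sketch of a two-level utilitarian decision procedure.
# Level 1 applies established moral rules; Level 2 is a utility
# calculator used when the rules conflict or leave options open.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    applies: Callable[[str], bool]   # does the rule cover this action?
    permits: Callable[[str], bool]   # does the rule permit this action?

def utility(action: str) -> float:
    """Stand-in utility calculator (values are invented for illustration)."""
    return {"brake": 10.0, "swerve": 4.0, "continue": -5.0}.get(action, 0.0)

def two_level_decide(actions: List[str], rules: List[Rule]) -> str:
    # Level 1: keep only actions permitted by every applicable rule.
    permitted = [a for a in actions
                 if all(r.permits(a) for r in rules if r.applies(a))]
    # Level 2: if the rules conflict (nothing permitted) or leave more
    # than one option, decide by direct utility calculation.
    candidates = permitted or actions
    return max(candidates, key=utility)

# Example: a rule forbidding 'continue' leaves 'brake' vs 'swerve'
# to the utility calculator.
no_harm = Rule("avoid collision",
               applies=lambda a: True,
               permits=lambda a: a != "continue")
print(two_level_decide(["brake", "swerve", "continue"], [no_harm]))  # brake
```

Note how the fall-back addresses Hooker's worry about rule conflicts: since (as note 8 observes) human confidence in the rule system is not at stake for a machine, the conflict case can simply hand the decision to the act-utilitarian calculation.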

  9. Thanks to an anonymous reviewer for raising this objection.

References

  • Anderson SL, Anderson M (2011) A prima facie duty approach to machine ethics and its application to elder care. American Association for Artificial Intelligence, Menlo Park, pp 2–7

  • Anderson M, Anderson SL, Armen C (2004) Towards machine ethics: implementing two action-based ethical theories. American Association for Artificial Intelligence, Menlo Park

  • Anderson M, Anderson SL, Armen C (2006) MedEthEx: a prototype medical ethics advisor. American Association for Artificial Intelligence, Menlo Park, pp 1759–1765


  • Aristotle (350 BCE) Nicomachean ethics. In: Ross WD (trans) The internet classics archive. http://classics.mit.edu/Aristotle/nicomachaen.html. Accessed 7 Oct 2018

  • Bentham J (1789) An introduction to the principles of morals and legislation (in the version by Jonathan Bennett presented at http://www.earlymoderntexts.com)


  • Bringsjord S, Arkoudas K, Bello P (2006) Toward a general logicist methodology for engineering ethically correct robots. IEEE Intell Syst 21(4):38–44


  • Doyle J (1983) What is rational psychology? Toward a modern mental philosophy. AI Mag 4(3):50–53


  • Floridi L (2013) The ethics of information. Oxford University Press, New York


  • Foot P (1967) The problem of abortion and the doctrine of double effect. Oxf Rev 5:5–15


  • Grau C (2006) There is no “I” in “robot”: robots and utilitarianism. IEEE Intell Syst 21(4):52–55


  • Hare RM (1983) Moral thinking: its levels, method, and point. Oxford University Press, New York


  • Hooker B (2000) Ideal code, real world. Oxford University Press, New York


  • Howard D, Muntean I (2016) A minimalist model of the artificial autonomous moral agent (AAMA). Association for the Advancement of Artificial Intelligence, Menlo Park


  • Howard D, Muntean I (2017) Artificial moral cognition: moral functionalism and autonomous moral agency. In: Powers TM (ed) Philosophy and computing. Philosophical studies series, vol 128. Springer, New York, pp 121–160


  • Jackson F (1998) From metaphysics to ethics: a defence of conceptual analysis. Oxford University Press, New York


  • Kahneman D (2011) Thinking, fast and slow. Farrar, Straus, and Giroux, New York


  • Kant I (1785) Groundwork for the metaphysics of morals (in the version by Jonathan Bennett presented at http://www.earlymoderntexts.com)

  • Leben D (2017) A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19:107–115


  • Mill JS (1861) Utilitarianism (in the version by Jonathan Bennett presented at http://www.earlymoderntexts.com)

  • Nathanson S (2018) Act and rule utilitarianism. In: Internet encyclopedia of philosophy. http://www.iep.utm.edu/util-a-r/. Accessed 7 Oct 2018

  • Powers TM (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51


  • Rawls J (1971) A theory of justice. Belknap (Harvard University Press), Cambridge


  • Ross WD (1930) The right and the good. Oxford University Press, New York


  • Singer P (2011) The expanding circle: ethics, evolution, and moral progress. Princeton University Press, Princeton (revised edition; original published in 1981)


  • Varner G (2012) Personhood, ethics, and animal cognition: situating animals in Hare’s two level utilitarianism. Oxford University Press, New York


  • Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, New York



Acknowledgements

Thanks to participants at the Brain-Based and Artificial Intelligence workshop held at the Center for the Study of Ethics in the Professions, Illinois Institute of Technology (May 10–11, 2018), for helpful questions and discussion. Thanks also to two anonymous reviewers for helpful comments. Images of the stop and yield signs are courtesy of http://www.pixabay.com.

Author information

Correspondence to William A. Bauer.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Bauer, W.A. Virtuous vs. utilitarian artificial moral agents. AI & Soc 35, 263–271 (2020). https://doi.org/10.1007/s00146-018-0871-3

