
Categorization and challenges of utilitarianisms in the context of artificial intelligence

Open Forum | AI & SOCIETY

Abstract

Debates about ethics in the context of artificial intelligence have recently focused primarily on various types of utilitarianisms. This article suggests categorizing the utilitarianisms presented in these debates into static utilitarianisms and dynamic utilitarianisms, and explains the main features of both. It then presents the challenges that the utilitarianisms in each group must be able to deal with. Since these challenges appear insurmountable within either group alone, the article suggests combining the two categories of utilitarianisms to resolve most of them without abandoning the concept of utilitarianism as such. Even this combination, however, comes with issues of its own that might not be resolvable within the boundaries of utilitarianism. Therefore, another potential alternative, based on a combination of various ethical systems, is suggested and briefly explored.
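For readers who prefer a concrete anchor, the distinction can be loosely sketched in code. On the reading suggested by Note 2 below, a static utilitarianism fixes its utility value function at design time, while a dynamic one allows that function itself to be revised, e.g. in response to feedback. Everything in the sketch (class names, actions, values) is an illustrative assumption, not the article's formalism:

```python
# Illustrative caricature (not from the article): static vs. dynamic
# utilitarianism, read as whether the utility value function itself
# may change after deployment.

class StaticUtilitarian:
    def __init__(self, values: dict[str, float]):
        self.values = values  # fixed at design time, never revised

    def choose(self, actions: list[str]) -> str:
        # Pick the action with the highest assigned utility value.
        return max(actions, key=lambda a: self.values.get(a, 0.0))

class DynamicUtilitarian(StaticUtilitarian):
    def update(self, action: str, feedback: float) -> None:
        # The utility value function itself is revised in response to
        # feedback, e.g. human disapproval of an outcome.
        self.values[action] = self.values.get(action, 0.0) + feedback

agent = DynamicUtilitarian({"shortcut": 2.0, "safe_route": 1.0})
print(agent.choose(["shortcut", "safe_route"]))  # shortcut
agent.update("shortcut", -3.0)                   # disapproval lowers its value
print(agent.choose(["shortcut", "safe_route"]))  # safe_route
```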


Notes

  1. It is necessary to point out that utility in classical utilitarianism translated to well-being or happiness. In modern contexts, however, utility can be any assigned value that we consider worth maximizing (see the first sketch after these notes).

  2. It could probably be argued that static utilitarianisms could circumvent this issue by including various fail-safes to prevent potentially disastrous consequences, but it is hard to imagine how this would work in practice. All such fail-safes would treat only the effects, not the causes, of problematic behavior: treating the causes would require changes to the utility value function itself, which could in turn lead to further problematic behavior (the second sketch below illustrates this).
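As a concrete anchor for Note 1, here is a minimal sketch in which utility is simply an assigned scalar rather than a measure of well-being; the names and weights are my own illustrative assumptions, not the article's:

```python
# Hypothetical illustration of the modern reading of "utility": any
# assigned scalar worth maximizing, not necessarily well-being or happiness.
assigned_weights = {"packages_delivered": 1.0, "energy_saved_kwh": 0.3}

def utility(outcome: dict[str, float]) -> float:
    # Weighted sum of whichever quantities the designer chose to value.
    return sum(assigned_weights.get(k, 0.0) * v for k, v in outcome.items())

print(utility({"packages_delivered": 4, "energy_saved_kwh": 2}))  # 4.6
```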
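And a minimal sketch of Note 2's worry, again under purely hypothetical names and values: a fail-safe that filters actions blocks the listed one, but the fixed utility value function that rewards it stays untouched, so unanticipated near-variants remain top-ranked.

```python
# Hypothetical sketch: an action filter (fail-safe) treats the effect,
# i.e. the dangerous action, while the fixed utility value function that
# makes it attractive (the cause) stays untouched.

FORBIDDEN = {"disable_oversight"}  # hand-written fail-safe list

FIXED_VALUES = {  # static: revising these is exactly what a static view avoids
    "disable_oversight": 5.0,
    "obstruct_oversight": 4.9,  # unlisted near-variant, still incentivized
    "deliver_package": 1.0,
}

def choose(actions: list[str]) -> str:
    # The fail-safe removes forbidden actions before maximization.
    permitted = [a for a in actions if a not in FORBIDDEN]
    return max(permitted, key=lambda a: FIXED_VALUES.get(a, 0.0))

# The filter blocks the top-ranked action, but the incentive survives: the
# agent routes around it via a variant the fail-safe did not anticipate.
print(choose(list(FIXED_VALUES)))  # obstruct_oversight
```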


Funding

The research was funded via a university grant (Antropocentrismus v etice, University of Ostrava).

Author information


Corresponding author

Correspondence to Štěpán Cvik.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Cvik, Š. Categorization and challenges of utilitarianisms in the context of artificial intelligence. AI & Soc 37, 291–297 (2022). https://doi.org/10.1007/s00146-021-01169-1

