
What do academics say about artificial intelligence ethics? An overview of the scholarship

  • Original Research
  • Published in: AI and Ethics

Abstract

This paper presents an overview of the academic scholarship in artificial intelligence (AI) ethics. The goal is to assess whether the academic scholarship on AI ethics constitutes a coherent field, with shared concepts and meanings, philosophical underpinnings, and citations. The data for this paper consist of the content of 221 peer-reviewed AI ethics articles published in the fields of medicine, law, science and engineering, and business and marketing. The bulk of the analysis consists of quantitative descriptions of the terms mentioned in each article. In addition, each term’s associations are analyzed to understand the specific meaning attached to each term. The analysis of the content is complemented by a social network analysis of cited authors. The findings suggest that some concepts, problem definitions and suggested solutions in the literature converge, but their content and meaning vary considerably across disciplines. Thus, there is limited support for the notion that shared concepts and meanings exist in the AI ethics literature. The field appears united in what it excludes: labor exploitation, poverty, global inequality, and gender inequality are not prominently mentioned as problems. The findings also show that the philosophical underpinnings of this academic field should be rethought: only a small number of texts mention any major philosophical tradition or concept. Moreover, the field has very few shared citations. Most of the scholarship has been developed in relative isolation from others conducting similar research. Thus, it may be premature to talk about an AI ethics canon or a coherent field.
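The abstract mentions a social network analysis of cited authors alongside the term counts. As a purely illustrative sketch (not the author's code; the article names and cited authors below are hypothetical), a co-citation network of this kind can be built by linking two authors whenever they appear in the same article's reference list:

```python
import itertools
import networkx as nx

# Hypothetical toy data: the set of cited authors for each analyzed article.
cited_authors_per_article = {
    "article_1": {"Floridi", "Wallach", "Bryson"},
    "article_2": {"Floridi", "Crawford"},
    "article_3": {"Wallach", "Bryson", "Crawford"},
}

G = nx.Graph()
for authors in cited_authors_per_article.values():
    for a, b in itertools.combinations(sorted(authors), 2):
        # Increase the edge weight for every article that co-cites a and b.
        current = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=current + 1)

# A shared canon would show up as densely connected, heavily weighted nodes;
# the paper reports that such shared citations are rare.
print(sorted(G.degree(weight="weight"), key=lambda pair: -pair[1]))
```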


Notes

  1. The query terms are: “Artificial Intelligence ethics” OR “AI ethics” OR “ethical Artificial Intelligence” OR “ethical AI” OR “ethics of Artificial Intelligence” OR “ethics of AI” OR “responsible AI” OR “responsible Artificial Intelligence”. The query was conducted on May 6, 2022.

  2. Five words before and after the mention of a concept are collected. Expanding the analysis to ten words does not change the results significantly.
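As a minimal sketch of the windowing described in note 2 (assuming whitespace tokenization and single-word, case-insensitive concept terms, neither of which is specified in the paper):

```python
def context_windows(text, concept, window=5):
    """Collect the words surrounding each mention of `concept` (note 2)."""
    tokens = text.lower().split()
    target = concept.lower()
    windows = []
    for i, tok in enumerate(tokens):
        if tok == target:
            # Up to `window` words on each side, excluding the concept itself.
            windows.append(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return windows

print(context_windows("Several articles argue that fairness in AI requires transparency.", "fairness"))
```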

  3. To check the robustness of the term associations, only terms that are mentioned at least five times in an article are included in the visualization. Changing the threshold value between one and five does not change the content of the analysis.
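A sketch of that robustness check, assuming a simple per-article term count (the exact counting procedure is not described in the paper):

```python
from collections import Counter

def frequent_terms(term_mentions, threshold=5):
    """Keep only terms mentioned at least `threshold` times in an article (note 3)."""
    counts = Counter(term_mentions)
    return {term for term, n in counts.items() if n >= threshold}

# Comparing the sets produced by thresholds 1 through 5 is one way to verify
# that the visualized associations do not depend on the cutoff.
mentions = ["bias"] * 6 + ["fairness"] * 2
for t in range(1, 6):
    print(t, frequent_terms(mentions, t))
```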

  4. The terms used for each category are as follows—capitalism: “capitalism”, “private sector”, “private initiative”; ecology: “ecological”, “climate change”, “environmental”; gender: “gender”, “women’s rights”, “LGBT”; human rights: “human rights”, “fundamental rights”; military: “military”, “autonomous weapons”; race: “Race”, “racial”; singularity: “singularity”, “Artificial General Intelligence”; Turing test: “Turing test”.
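Notes 4, 6, and 8 each map a category to a list of search terms. One plausible reading of that procedure is sketched below, using a subset of note 4's terms and case-insensitive substring matching (both assumptions, not necessarily the author's implementation):

```python
# Hypothetical subset of the category-to-term mapping from note 4.
CATEGORIES = {
    "gender": ["gender", "women's rights", "lgbt"],
    "human rights": ["human rights", "fundamental rights"],
    "military": ["military", "autonomous weapons"],
}

def category_counts(article_text, categories=CATEGORIES):
    """Count how often any of a category's terms appears in an article."""
    text = article_text.lower()
    return {cat: sum(text.count(term) for term in terms)
            for cat, terms in categories.items()}

print(category_counts("Debates on autonomous weapons raise human rights concerns."))
```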

  5. Some issue frames have strong associations with other terms. Gender and race tend to go together in texts. In addition, gender is associated with voice, owing to debates on gender bias in voice recognition. Capitalism is prefaced with surveillance in a number of law articles. Ecology and sustainability go together, especially in medicine articles. Race is associated with bias in medicine and law articles, and with discrimination and concepts related to criminal justice in law articles.

  6. The terms used for each category are as follows—deepfake: “deepfake”; destruction of humanity: “destroy human”, “destruction of human”, “humanity will be destroyed”, “end of human”, “kill human”; disinformation/misinformation: “disinformation”, “misinformation”; ecological destruction: “destruction of the environment”, “ecological destruction”, “global warming”, “environmental problem”; exploitation: “exploitation”, “exploitative”; gender bias: “gender bias”, “gender-based bias”, “bias on the basis of gender”, “discrimination against women”, “discrimination against LGBT”, “gender inequality”; global inequality: “global inequality”, “colonialism”, “imperialism”; job replacement: “job replace”, “replacing jobs”, “replacement of jobs”; killer robots: “killer robot”, “lethal autonomous weapon”; poverty: “poverty”, “economic inequality”; privacy violation: “privacy concern”, “threat to privacy”, “violation of privacy”; racist bias: “racism”, “white supremacy”, “anti-Black”, “racial bias”; social isolation: “social isolation”, “affective bonds”.

  7. Association analysis shows that what is understood by exploitation is not labor exploitation but rather the exploitation of data.

  8. The terms used for each category are as follows—better data and algorithms: “better data”, “better algorithm”, “improve algorithm”; debias: “debias”, “de-bias”; diversity: “increasing diversity”, “hiring of diverse”; guidelines: “ethical guideline”, “ethical AI guideline”; human in the loop: “human in the loop”; in-house team: “ethics team”, “in-house team”; legislation: “legislation”, “statutory”; self-regulation: “self-regulate”, “regulate themselves”; training: “ethics training”, “training in AI ethics”, “education in AI ethics”; whistleblower protection: “whistleblower”.


Acknowledgements

The author would like to thank the participants of the Seattle University Celebration of Scholarship meeting (May 20, 2021) and GESIS—Eurolab Brown Bag Series (November 18, 2021) for their helpful feedback.

Funding

On behalf of all the authors, the corresponding author states that this research was not funded by any agency.

Author information


Corresponding author

Correspondence to Onur Bakiner.

Ethics declarations

Conflict of interest

On behalf of all the authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Bakiner, O. What do academics say about artificial intelligence ethics? An overview of the scholarship. AI Ethics 3, 513–525 (2023). https://doi.org/10.1007/s43681-022-00182-4
