Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution

  • Conference paper
  • In: Artificial Intelligence Research (SACAIR 2021)

Abstract

Recent work in AI ethics has come to bear on questions of responsibility, specifically on whether the nature of AI-based systems renders various notions of responsibility inappropriate. While substantial attention has been given to backward-looking senses of responsibility, there has been little consideration of forward-looking senses. This paper aims to fill that gap and concerns itself with responsibility as moral obligation, a particular forward-looking sense of responsibility. Responsibility as moral obligation is predicated on the idea that agents have at least some degree of control over the kinds of systems they create and deploy. AI systems, by virtue of their ability to learn from experience once deployed and their often experimental nature, may therefore pose a significant challenge to forward-looking responsibility. Because the course of such systems may not be alterable once they are deployed, even if their initial programming determines their goals, the means by which they achieve those goals may lie outside the control of human operators. In such cases, we might say that there is a gap in moral obligation. In this paper, however, I argue that there are no “gaps” in responsibility as moral obligation as it bears on AI systems. I support this conclusion by focusing on the nature of risk in technology development, and by showing that technological assessment is not only about the consequences a specific technology might have: it is more than merely consequentialist, and should also include a hermeneutic component that considers the societal meaning of the system. Therefore, while the creators of AI systems might not be able to fully appreciate what the consequences of their systems will be, this does not undermine, or render improper, their responsibility as moral obligation.

Author information

Corresponding author

Correspondence to Fabio Tollon.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Tollon, F. (2022). Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution. In: Jembere, E., Gerber, A.J., Viriri, S., Pillay, A. (eds) Artificial Intelligence Research. SACAIR 2021. Communications in Computer and Information Science, vol 1551. Springer, Cham. https://doi.org/10.1007/978-3-030-95070-5_20

  • DOI: https://doi.org/10.1007/978-3-030-95070-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95069-9

  • Online ISBN: 978-3-030-95070-5

  • eBook Packages: Computer Science, Computer Science (R0)
