Abstract
This paper argues for a categorical ban on autonomous weapons systems (AWS) in warfare. First, I provide a foundational argument that international humanitarian law (jus in bello) is deontological. Following arguments shared by Peter Asaro and Robert Sparrow, I then argue that AWS lack the ability to properly acknowledge their targets and consequently breach jus in bello principles. I go further than Asaro and Sparrow, however, by emphasizing the necessity of reciprocity for deontological law. Because AWS lack a constitutive symmetry with human combatants, humans and AWS cannot coexist in warfare if the existing international principles are to be respected. After addressing foreseeable objections, including arguments from reducing deaths and from the prohibition of other weapons, I conclude that a categorical ban on AWS remains a reasonable position. A benefit of this argument is that it avoids complex and hypothetical considerations about future developments in AWS capabilities. It also shows that if the moral underpinnings of the jus in bello principles are respected, then categorically banning AWS from warfare is already an accepted position.
Notes
For example, the US set aside US$18 billion between 2016 and 2020 for AWS development [5].
According to the report, these machines were autonomous as once in the battlefield they could, “…attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.” [13, p. 17/548].
This term refers to combatants who are in a situation, often through injury, where they are unable to perform regular military duties.
There are many ways to define illegitimate harm. However, by illegitimate harm in warfare, including death, I mean only harm that occurs without respect for the principles of jus in bello.
While they all share an openness to AWS sufficiently progressing to the necessary moral standards, their arguments are not identical. Arkin requires only accurate performance, while, as mentioned in the previous section, Asaro and Sparrow require, in addition to performance, an associated awareness of the other human's intrinsic value.
This is modified from Korsgaard's example involving a military general and a private [12, p. 124].
My thanks to the anonymous reviewer for raising this objection.
I am alluding to someone who is aware of the practical realities that may require concessions to one’s philosophical obligations.
References
Arkin, R.: Lethal autonomous systems and the plight of the non-combatant. In: Kiggins, R. (ed.) The Political Economy of Robots—Prospects for Prosperity and Peace in the Automated 21st Century, pp. 317–326. Palgrave Macmillan, Cham (2018). https://doi.org/10.1007/978-3-319-51466-6_15
Asaro, P.: Autonomous Weapons and the Ethics of Artificial Intelligence. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence, pp. 212–236. Oxford University Press, Oxford (2020)
Cantrell, H.: Autonomous weapon systems and the claim-rights of innocents on the battlefield. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00119-3
Davison, N.: A legal perspective: autonomous weapon systems under international humanitarian law. UNODA Occasional Papers No. 30, November 2017 (2018). https://doi.org/10.18356/29a571ba-en
Dawes, J.: UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research. The Conversation. https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616 (2021). Accessed 22 Apr 2022
Dilmegani, C.: When will singularity happen? 995 experts’ opinions on AGI. AI Multiple. https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/ (2022). Accessed 12 Apr 2022
Dreveskracht, R.: Just war in international law: an argument for a deontological approach to humanitarian law. Buff. Hum. Rts. L. Rev. 16, 237–288 (2010)
Felter, J.H., Shapiro, J.N.: Limiting civilian casualties as part of a winning strategy: the case of courageous restraint. Dædalus (2017). https://doi.org/10.1162/DAED_a_00421
Haner, J., Garcia, D.: The artificial intelligence arms race: trends and world leaders in autonomous weapons development. Glob. Policy (2019). https://doi.org/10.1111/1758-5899.12713
Jenkins, R., Purves, D.: Robots and respect: a response to Robert Sparrow. Ethics Int. Aff. (2016). https://doi.org/10.1017/S0892679416000277
Kant, I.: Groundwork of the Metaphysics of Morals. Gregor, M. (trans. and ed.). Cambridge University Press, Cambridge (1998)
Korsgaard, C.M.: Fellow Creatures: Our Obligations to the Other Animals. Oxford University Press, Oxford (2018)
Majumdar Roy Choudhury, L. et al.: Final report of the Panel of Experts on Libya established pursuant to Security Council resolution 1973 (2011) S/2021/229. United Nations Security Council. https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement (2021). Accessed 15 Apr 2022
Moor, J.H.: The nature, importance, and difficulty of machine ethics. IEEE Intell. Syst. (2006). https://doi.org/10.1109/MIS.2006.80
Moseley, A.: Just war theory. The Internet Encyclopedia of Philosophy. https://iep.utm.edu/justwar/. Accessed 5 Apr 2021
Sparrow, R.: Robots and respect: assessing the case against autonomous weapon systems. Ethics Int. Aff. (2016). https://doi.org/10.1017/S0892679415000647
Worsnip, P.: Wars less deadly than they used to be, report says. Reuters. https://www.reuters.com/article/us-war-casualties-report-idUSTRE60J5UG20100121 (2010). Accessed 12 Apr 2022
Acknowledgements
I would like to thank Dr Shannon Vallor for introducing me to this topic and providing invaluable discussions. I would also like to thank Marijus Dingilevskis, Astrid Bertrand, and the anonymous reviewers for their insightful comments and suggestions.
Funding
No funds, grants, or other support was received.
Ethics declarations
Conflict of interest
The author has no competing interests to declare that are relevant to the content of this article.
Cite this article
Brand, J.L.M. Why reciprocity prohibits autonomous weapons systems in war. AI Ethics 3, 619–624 (2023). https://doi.org/10.1007/s43681-022-00193-1