Abstract
That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without this consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, Kantian ethics being one of the leading contenders for an ethic that can be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot is that machine ethicists need to look elsewhere for an ethic to implement into their machines.
Notes
The most thorough examination of Kantian ethics within the Machine Ethics literature thus far is offered by Powers (2006). Although such an analysis of Kant’s ethics is a step in the right direction, Powers’ discussion remains throughout at the level of implementation, and never considers whether Kantian morality permits the development of Kantian AMAs in the first place.
Nadeau (2006) even goes so far as to suggest that only androids could be ethical.
For a nice review of these weapons, see Sparrow (2007).
The Metaphysics of Morals, p. 141. (Hereafter MM).
Fundamental Principles of the Metaphysic of Morals, p. 80. (Hereafter FPMM).
FPMM, p. 65.
FPMM, p. 49.
FPMM, p. 58.
Kant puts the idea quite nicely in FPMM: “If now we attend to ourselves on occasion of any transgression of duty, we shall find that we in fact do not will that our maxim should be a universal law, for that is impossible for us; on the contrary we will that the opposite should be a universal law, only we assume the liberty of making an exception in our own favor or (just for this time only) in favor of our inclination” (52).
FPMM, pp. 17–20.
Here I assume that Kant was not a compatibilist. On this assumption, if the will of an entity is determined, as both determinists and compatibilists suggest it to be (albeit in support of different views), then that entity does not possess unabated free will. It is worth noting that although Kant believed the existence of free will could never be proven, he also believed it to be an indispensable element of genuine moral agency.
This claim is admittedly controversial. Some have argued that free will could be instilled in robots. See especially McCarthy (2000).
MM, p. 148.
See Wallach and Allen (2009) for a discussion of the benefits and promise of developing virtuous artificial moral agents.
MM, p. 186.
See Kant’s Lectures on Ethics. There Kant distinguishes between heroic (supererogatory), blameworthy (abhorrent), and permissible (accidental) suicide. Heroic suicide represents self-termination that is done with the intent of maintaining morality in the world, most notably in cases where remaining alive would initiate a more severe moral violation.
See Mill’s Utilitarianism for a classic Utilitarian account.
References
Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12(3), 251–261.
Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
Anderson, M., & Anderson, S. L. (2006). Machine ethics. IEEE Intelligent Systems, 21(4), 10–11.
Anderson, M., & Anderson, S. L. (2007a). The status of machine ethics: A report from the AAAI symposium. Minds and Machines, 17, 1–10.
Anderson, M., & Anderson, S. L. (2007b). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.
Boden, M. A. (Ed.). (1994). Dimensions of creativity. Cambridge: MIT Press.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & SOCIETY, 22(4), 523–537.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gips, J. (1995). Towards the ethical robot. In K. Ford, C. Glymour, & P. Hayes (Eds.), Android epistemology (pp. 243–252). Cambridge: MIT Press.
Gips, J. (2005). Creating ethical robots: A grand challenge. AAAI symposium on machine ethics. Washington, DC.
Grau, C. (2006). There is no ‘I’ in ‘Robot’: Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.
Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems, 21(4), 22–28.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.
Kant, I. (1785/1988). Fundamental principles of the metaphysic of morals (T. K. Abbott, Trans.). New York: Prometheus Books.
Kant, I. (1797/1996). The metaphysics of morals (M. Gregor, Trans.). Cambridge: Cambridge University Press.
Kant, I. (1997). Lectures on ethics (P. Heath, Trans.). Cambridge: Cambridge University Press.
McCarthy, J. (2000). Free will—even for robots. Journal of Experimental and Theoretical Artificial Intelligence, 12(3), 341–352.
McLaren, B. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29–37.
Mill, J. S. (1871/2000). Utilitarianism. New York: Broadview.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
Nadeau, J. E. (2006). Only androids can be ethical. In K. Ford, C. Glymour, & P. J. Hayes (Eds.), Thinking about android epistemology (pp. 241–248). Cambridge: MIT Press.
O’Neill, O. (1989). Constructions of reason: Explorations of Kant’s practical philosophy. New York: Cambridge University Press.
Picard, R. W. (1997). Affective computing. Cambridge: MIT Press.
Powers, T. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51.
Rawls, J. (2000). Lectures on the history of moral philosophy. Cambridge: Harvard University Press.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
The U.S. Army Future Combat Systems Program. (2006). Retrieved July 31, 2008, from www.cbo.gov/ftpdoc.cfm?index=7122.
Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & SOCIETY, 22(4), 495–521.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press (forthcoming).
Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & SOCIETY, 22, 565–582.
Acknowledgments
Thank you to Olaf Eleffson, Verena Gottschling, Marcello Guarini, and Hilary Martin for helpful discussion in the early stages of this research. An earlier version of this paper was published as part of the proceedings from the 2009 SSAISB Symposium on Computing and Philosophy in Edinburgh. Thank you to the engaging audience there for their comments, especially Steve Torrance.
Cite this article
Tonkens, R. A Challenge for Machine Ethics. Minds & Machines 19, 421–438 (2009). https://doi.org/10.1007/s11023-009-9159-1