Abstract
The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it means to be an artificial moral agent is provided. Then, a recent account of heroism is reviewed that defines the construct as the dynamic and interactive integration of character strengths (e.g., bravery and integrity) and situational constraints that afford the opportunity for moral behavior (i.e., moral affordances). With this as a foundation, what it might mean for a robot to be an ideal moral agent is discussed by proposing a dynamic and interactive connectionist model of robotic heroism. Given the limited real-world accounts of robots engaging in moral behavior, the case for extending robotic moral capacities beyond mere moral agency to the level of heroism is supported by drawing on exemplar situations in which robots demonstrate heroism in popular film and fiction.
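To make the flavor of such a proposal concrete, the following is a minimal illustrative sketch, not taken from the paper itself: a small interactive-activation network, written in Python with NumPy, in which character-strength units and moral-affordance units mutually excite one another and an action unit pools their activation. All unit names, weights, and settling dynamics are assumptions introduced for illustration.

```python
# Illustrative only: a tiny interactive-activation network in which
# character-strength units (bravery, integrity) and moral-affordance
# units (perceived risk, perceived need) mutually excite one another,
# and an action unit pools person and situation activation. All names,
# weights, and dynamics are assumptions, not the paper's model.
import numpy as np

# Unit order: [bravery, integrity, risk_affordance, need_affordance, act]
W = np.array([
    [0.0, 0.3, 0.4, 0.2, 0.0],  # input to bravery from the other units
    [0.3, 0.0, 0.1, 0.4, 0.0],  # input to integrity
    [0.2, 0.1, 0.0, 0.3, 0.0],  # input to perceived risk
    [0.1, 0.3, 0.3, 0.0, 0.0],  # input to perceived need
    [0.5, 0.4, 0.3, 0.5, 0.0],  # action unit pools person + situation
])

def settle(external, steps=100, rate=0.2):
    """Leaky settling: activations drift toward tanh of their net input."""
    a = np.zeros(5)
    for _ in range(steps):
        net = W @ a + external           # recurrent drive + situational input
        a += rate * (np.tanh(net) - a)   # bounded, gradually converging update
    return a

# A situation affording moral action: clear need, moderate risk,
# an agent with moderately strong character-strength priors.
external = np.array([0.3, 0.3, 0.5, 0.9, 0.0])
a = settle(external)
print(f"action-unit activation: {a[-1]:.2f}")  # higher = stronger pull to act
```

In a sketch of this kind, commitment to heroic action emerges only from the joint settling of person units and situation units, neither alone, mirroring the dynamic and interactive integration of character strengths and moral affordances described above.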

Acknowledgments
The author wishes to thank Maura Austin, Mason Cash, Stephen M. Fiore, and Emilio J. C. Lobato for their insightful comments on this manuscript. The author also thanks two anonymous reviewers whose helpful comments contributed to improving it.
Cite this article
Wiltshire, T.J. A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans. Minds & Machines 25, 57–71 (2015). https://doi.org/10.1007/s11023-015-9361-2