Abstract
In this paper, I develop a view about machine autonomy grounded in the theoretical frameworks of 4E cognition and P. F. Strawson’s reactive attitudes. I begin with critical discussion of White (this issue), and conclude that his view is strongly committed to functionalism as it has developed in mainstream analytic philosophy since the 1950s. After suggesting that there is good reason to resist this view by appeal to developments in 4E cognition, I propose an alternative view of machine autonomy. Namely, machines count as autonomous when we members of the moral community adopt reactive attitudes in response to their actions. I distinguish this view from White’s and suggest assets and liabilities of this approach.
Notes
For an excellent account of Aristotle’s theory of the will, see Kenny (1979).
In a similar vein, Strawson in “Self, Mind, and Body” (Strawson 2014/1962) argues that our concept of mind is parasitic on our concept of body, i.e., in thinking about minds we are always already thinking about bodies.
Whether or not there’s good reason to shift focus within the Kantian framework depends on larger theoretical issues on which I don’t take a stance here.
That said, an AMA that is autonomous or capable of judgment in the ways that children, or mentally ill or disabled adults, are is a real possibility that requires attention.
What seems most likely is that different agents from different cultures have family resemblances of moral attitudes.
Thanks to Stephen Cowley for bringing this to my attention.
Thanks to Stephen Cowley for making this point explicit for me.
See MacIntyre (1981) for critical discussion of ‘ought implies can.’ Note that his objection is that moral narratives can pull us in two directions; we ought to do two incompatible actions. MacIntyre doesn’t consider the principle in light of the metaphysics of moral minds. My point here and his are consistent with one another.
Kant, in “A Supposed Right to Lie Because of Philanthropic Concerns”, says that we are not allowed to lie even to the murderer standing at our door, looking for a victim whose whereabouts we know. To lie to the murderer would be to fail to respect his autonomy and rationality. But see Langton (1992) for another interpretation of this case.
‘Autonomy’ can be predicated of people but also of mental states and actions. ‘Autonomy’ is said of people whenever their mental states and actions have the property of being performed autonomously: an entity is an autonomous agent when they believe, desire, intend, and act autonomously. To keep things readable, we’ll talk about “autonomous action,” but what is said here can prima facie apply to mental states as well.
See Kim (1993) for excellent discussions.
Despite what Clark and Chalmers (1998) would have you believe.
This is just a fact of the matter: why go to all the trouble to make a deep neural network for something that can be modeled by a simple linear equation?
Many thanks to Stephen Cowley and Rasmus Gahrn-Andersen for this delightful and apt way of putting the matter.
Gahrn-Andersen and I are working in different traditions with different conceptual resources. Even so, I submit that, much as Merleau-Ponty observed of himself and Ryle, our work is not all that far apart.
I’d like to express my gratitude to Stephen Cowley and Rasmus Gahrn-Andersen for their invitation to read and respond to White’s paper “Autonomous Reboot.” Their offer and subsequent feedback have been helpful beyond measure. I am additionally grateful to White for engaging in a dialogue about these issues. His insights about autonomous machines encouraged me to think more deeply about my own commitments. Finally, my eternal love and gratitude to Michele Lassiter for listening to me drone on about machine autonomy.
References
Bruner J (1990) Acts of meaning. Harvard University Press, Cambridge, MA
Chemero A (2009) Radical embodied cognitive science. MIT Press, Cambridge, MA
Clark A, Chalmers D (1998) The extended mind. Analysis 58(1):7–19
Dennett D (1987) The intentional stance. MIT Press, Cambridge, MA
Fodor JA (1987) Psychosemantics: The problem of meaning in the philosophy of mind. MIT Press, Cambridge, MA
Greenemeier L (2017) 20 years after deep blue: how AI has advanced since conquering chess. http://www.scientificamerican.com/article/20-years-after-deep-blue-how-ai-has-advanced-since-conquering-chess/. Accessed 8 Aug 2020
Irwin T (translator) (1999) Nicomachean ethics by Aristotle. Hackett Publishing Company, Indianapolis, IN
Kenny A (1979) Aristotle’s theory of the will. Yale University Press, New Haven
Kim J (1993) Supervenience and mind: selected philosophical essays. Cambridge University Press, New York
Langton R (1992) Duty and desolation. Philosophy 67:481–505
Lassiter C (2016) Aristotle and distributed language: capacity, matter, structure, and languaging. Lang Sci 53:8–20
Lassiter C (2019) Language and simplexity: a powers view. Lang Sci 71:27–37
Lewis DK (1966) An argument for the identity theory. J Philos 63:17–25
MacIntyre A (1981) After virtue. Notre Dame University Press, South Bend
Putnam H (1975) The mental life of some machines. In: Mind, language, and reality: philosophical papers, 2. Cambridge University Press, New York, pp 408–428
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1:206–215
Strawson PF (2014/1962) Freedom and resentment and other essays. Routledge, New York
Tonkens R (2009) A challenge for machine ethics. Minds Mach 19(3):421–438
Versenyi L (1974) Can robots be moral? Ethics 84:248–259
Vukov J, Lassiter C (2020) How to power encultured minds. Synthese 197(8):3507–3534
Cite this article
Lassiter, C. Could a robot flirt? 4E cognition, reactive attitudes, and robot autonomy. AI & Soc 37, 675–686 (2022). https://doi.org/10.1007/s00146-020-01116-6