Could a robot flirt? 4E cognition, reactive attitudes, and robot autonomy

Original Article · AI & SOCIETY

Abstract

In this paper, I develop a view about machine autonomy grounded in the theoretical frameworks of 4E cognition and PF Strawson’s reactive attitudes. I begin with a critical discussion of White (this issue) and conclude that his view is strongly committed to functionalism as it has developed in mainstream analytic philosophy since the 1950s. After suggesting that there is good reason to resist this view by appeal to developments in 4E cognition, I propose an alternative view of machine autonomy. Namely, machines count as autonomous when we members of the moral community adopt reactive attitudes in response to their actions. I distinguish this view from White’s and suggest assets and liabilities of this approach.

Notes

  1. For an excellent account of Aristotle’s theory of the will, see Kenny (1979).

  2. In a similar vein, Strawson in “Self, Mind, and Body” (Strawson 2014/1962) argues that our concept of mind is parasitic on our concept of body, i.e. when thinking about minds we’re always and already thinking about bodies.

  3. Whether or not there’s good reason to shift focus within the Kantian framework depends on larger theoretical issues on which I don’t take a stance here.

  4. That said, an AMA that is autonomous or capable of judgment in the ways that children or mentally ill or mentally disabled adults are is a real possibility that requires attention.

  5. What seems most likely is that different agents from different cultures have family resemblances of moral attitudes.

  6. Thanks to Stephen Cowley for bringing this to my attention.

  7. Thanks to Stephen Cowley for making this point explicit for me.

  8. See MacIntyre (1981) for critical discussion of ‘ought implies can.’ Note that his objection is that moral narratives can pull us in two directions; we ought to do two incompatible actions. MacIntyre doesn’t consider the principle in light of the metaphysics of moral minds. My point here and his are consistent with one another.

  9. Kant, in “A Supposed Right to Lie Because of Philanthropic Concerns”, says that we are not allowed to lie even to the murderer standing at our door, looking for a victim whose whereabouts we know. To lie to the murderer would be to fail to respect his autonomy and rationality. But see Langton (1992) for another interpretation of this case.

  10. ‘Autonomy’ can be predicated of people but also of mental states and actions. ‘Autonomy’ is said of people whenever their mental states and actions have the property of being performed autonomously: an entity is an autonomous agent when they believe, desire, intend, and act autonomously. To keep things readable, we’ll talk about “autonomous action,” but what is said here can prima facie apply to mental states as well.

  11. See Kim (1993) for excellent discussions.

  12. Despite what Clark and Chalmers (1998) would have you believe.

  13. This is just a fact of the matter: why go to all the trouble to make a deep neural network for something that can be modeled by a simple linear equation?

  14. Many thanks to Stephen Cowley and Rasmus Gahrn-Andersen for this delightful and apt way of putting the matter.

  15. Gahrn-Andersen and I are working in different traditions with different conceptual resources. Even so, I submit that, much as Merleau-Ponty observed of himself and Ryle, our work is not all that far apart.

  16. I’d like to express my gratitude to Stephen Cowley and Rasmus Gahrn-Andersen for their invitation to read and respond to White’s paper “Autonomous Reboot.” Their offer and subsequent feedback have been helpful beyond measure. I am additionally grateful to White for engaging in a dialogue about these issues. His insights about autonomous machines encouraged me to think more deeply about my own commitments. Finally, my eternal love and gratitude to Michele Lassiter for listening to me drone on about machine autonomy.

References

  • Bruner J (1990) Acts of meaning. Harvard University Press, Cambridge, MA

  • Chemero A (2009) Radical embodied cognitive science. MIT Press, Cambridge, MA

  • Clark A, Chalmers D (1998) The extended mind. Analysis 58(1):7–19

  • Dennett D (1987) The intentional stance. MIT Press, Cambridge, MA

  • Fodor JA (1987) Psychosemantics: The problem of meaning in the philosophy of mind. MIT Press, Cambridge, MA

  • Greenemeier L (2017) 20 years after deep blue: how AI has advanced since conquering chess. http://www.scientificamerican.com/article/20-years-after-deep-blue-how-ai-has-advanced-since-conquering-chess/. Accessed 8 Aug 2020

  • Irwin T (translator) (1999) Nicomachean ethics by Aristotle. Hackett Publishing Company, Indianapolis, IN

  • Kenny A (1979) Aristotle’s theory of the will. Yale University Press, New Haven

  • Kim J (1993) Supervenience and mind: selected philosophical essays. Cambridge University Press, New York

  • Langton R (1992) Duty and desolation. Philosophy 67:481–505

  • Lassiter C (2016) Aristotle and distributed language: capacity, matter, structure, and languaging. Lang Sci 53:8–20

  • Lassiter C (2019) Language and simplexity: a powers view. Lang Sci 71:27–37

  • Lewis DK (1966) An argument for the identity theory. J Philos 63:17–25

  • MacIntyre A (1981) After virtue. University of Notre Dame Press, Notre Dame, IN

  • Putnam H (1975) The mental life of some machines. In: Mind, language, and reality: philosophical papers, vol 2. Cambridge University Press, New York, pp 408–428

  • Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1:206–215

  • Strawson PF (2014/1962) Freedom and resentment and other essays. Routledge, New York

  • Tonkens R (2009) A challenge for machine ethics. Minds Mach 19(3):421–438

  • Versenyi L (1974) Can robots be moral? Ethics 84:248–259

  • Vukov J, Lassiter C (2020) How to power encultured minds. Synthese 197(8):3507–3534

Author information

Corresponding author

Correspondence to Charles Lassiter.

About this article

Cite this article

Lassiter, C. Could a robot flirt? 4E cognition, reactive attitudes, and robot autonomy. AI & Soc 37, 675–686 (2022). https://doi.org/10.1007/s00146-020-01116-6
