Abstract
Ethically significant consequences of artificially intelligent artifacts will stem from their effects on existing social relations. Artifacts will serve in a variety of socially important roles: as personal companions, in the service of elderly and infirm people, and in commercial, educational, and other socially sensitive contexts. The inevitable disruptions that these technologies will cause to social norms, institutions, and communities warrant careful consideration. As we begin to assess these effects, reflection on degrees and kinds of social agency will be required to make properly informed decisions concerning the deployment of artificially intelligent artifacts in important settings. The social agency of these systems is unlike human social agency, and this paper provides a methodological framework that is better suited to inquiry into artificial social agents than conventional philosophical treatments of the concept of agency. Separate aspects and dimensions of agency can be studied without assuming that agency must always look like adult human agency. This revised approach to the agency of artifacts is conducive to progress in the topics studied by AI ethics.
Data Availability
We do not analyze or generate any datasets, because our work proceeds within a theoretical and mathematical approach.
Notes
Sam Altman, head of OpenAI, recently tweeted "i am a stochastic parrot, and so r u." https://twitter.com/sama/status/1599471830255177728?lang=en (Dec 22, 2022).
There is increased interest in Confucian approaches to these questions; see, for example, Zhu (2020), which engages with the effects of technology on social roles as traditionally conceived in Chinese thought.
https://workspace.google.com/blog/product-announcements/duet-ai-in-workspace-now-available (last accessed August 29, 2023).
A folk psychological conception of agency detection along the lines Dennett describes in The Intentional Stance (Dennett, 1987) will be of little assistance in cases where we find ourselves devoting energy and attention to determining the nature of the beings with whom we are talking and interacting. The challenge here is that unaided common sense is not equipped to detect agent behavior in suitably sophisticated AI.
Those whose moral framework involves an individualistic focus on personal utility might regard social harms as irrelevant or secondary. However, for the sake of this paper, we will assume either that a radical form of subjectivism with respect to moral matters is self-undermining or that there are indirect individualist reasons to care about social goods and harms. We are grateful to an anonymous referee for forcing us to be clear on this point.
We are grateful to an anonymous referee for pressing us on this issue and for encouraging us to discuss AI systems that have forms of non-linguistic social agency.
See Gonzalez-Gonzalez et al. (2021) for a systematic review of the scientific literature on sexbots. See also the 2022 special issue of the Journal of Future Robot Life on robot sex, edited by Simon Dube and David Levy. Other notable discussions include David Levy's (2007) book Love and Sex with Robots.
Some of these questions are touched upon in Ruiping and Cherry, eds., 2021. Adshade (2017) discusses the economic aspects of social change involving robot sex.
Under these circumstances, our desires to engage in degrading, violent, or simply obnoxious sexual encounters with others, or the desire for fully compliant or idealized partners, could be acted upon without those desires being brought into question or challenged by the vulnerability and needs of another human person. Of course, for some, the absence of a real human person would make it impossible to genuinely satisfy certain obnoxious sexual desires, given the interpersonal nature of those desires. Sadism, for example, involves the subordination of another human person. It is hard to imagine a sadist enjoying torturing his sex robot for very long, no matter how realistic the robot's expressions of pain might be, given the absence of coercion or subordination.
Traditionally, adult-level human linguistic competence provided a key benchmark for twentieth-century philosophers as they considered the questions of intelligence, agency, and moral standing. Chatbots that run on state-of-the-art LLMs now have the capacity to pass for human interlocutors under certain circumstances, and thus—in the spirit of the Turing test—we are forced to reflect on their level of agency and perhaps even on their moral status. In this paper, we will focus on the question of their agency.
We agree with one of our referees who noted that AI researchers do not simply assume that AI has agency but also presume that the goal of AI is the creation of agents of a certain kind.
According to Wooldridge and Jennings, it was not until the 1980s that the concept of agency received much attention from technologists. They note that “the problem [was] that although the term [was] widely used by many people working closely in related areas, it defied attempts to produce a single universally accepted definition” (Wooldridge & Jennings, 1995, 4). Of course, science fiction has a long history of reflection on the idea of artifacts as agents.
For an informative analysis on how people perceive dogs versus robots as companions.
Silver et al. (2022) also recognize that social agency is best modeled multidimensionally. Although their model primarily tracks the level of cooperation between agents, they note, "there are many interactions dimensions critically under researched in relation to Social Agency, and whilst this [their rendition] continuum is centered around the degree of cooperation in an interaction, as Social Agency grows as a field, it is hoped that more key elements will be incorporated into this model" (442).
For an introduction to the issue, see Nyholm (2023), especially chapter 6.
For an overview of the logic of threshold arguments in the study of cognition, see Calvo and Symons (2014).
Of course, those who hold the threshold account might retreat to some kind of instrumentalist conception of artifact agency. We can certainly act as though an artifact is an agent for instrumental reasons in the spirit of Dennett’s intentional stance (see Symons (2001) and Dennett (1987)), but given this version of the threshold view, we cannot ascribe agency to artifacts like chatbots independently of an observer’s ascription of agency. We will return to this option below.
One issue with this is that, as highlighted by Silver et al. (2022, 449), several psychological studies have demonstrated that joint action or joint agency is difficult to justify between robots and humans: humans tend not to think of, or report, a sense of joint agency when collaborating with robots. See also Nyholm (2023), who spends nearly an entire chapter (chapter 3) of his book on the various moral issues and approaches for autonomous vehicles; see chapter 4 of the same book for further debates on autonomous cars.
Floridi and Sanders also underscore the difficulty of holding humans responsible for features or actions of computing systems (AI, regular software, and so on) that are unforeseeable by humans (2004, 371–372), as in our example of the ABS system in cars.
Also, see chapter 2 of Nyholm (2020).
We thank an anonymous referee for encouraging us to distinguish between moral agency and agency per se.
Consider what van Hateren says concerning the conditions required for minimal agency: "such conditions should indicate which species have agency and which behaviors are acts [emphasis ours] rather than something else (… such as sneezing, shivering [automatic reflexes])".
Debates around group agency are also worth noting here. Groups per se lack representational content or reflective thought but do seem to take actions that, at least, appear irreducible to their individual members (e.g., parliament voted to do X). For an informative and contrasting view on group agency, see Lewis-Martin (2022). It is worth noting that some philosophers have characterized AI agency as similar to group agency; List (2021), for instance, argues that AIs are agents by drawing parallels with group agency. Group agency is a contentious topic, and nothing in our current argument rests on accepting it. We mention it here to note the possibility of agency without intentionality, or at least without intentionality in the conventional sense.
For example, when a user engages with a therapy chatbot in conversation, even if the user knows that the interlocutor is an AI, they perceive the chatbot's outputs as conversational actions. Take the example from Yang (2020): the user says to a chatbot, "Hey, I know you are not real, but I just wanted to send these pictures of my family out at Disneyland having a great time. I'm doing better now. Thank you" (35). The user seems to treat the chatbot as an agent worthy of respect, one to whom they should be polite and with whom they can share intimate family details. Another example is the language used around ChatGPT or Midjourney: it is common to see headlines or conversations with phrasing like "what does ChatGPT think X is?" or "this is what AI thinks people from Y country look like." A man in Japan married a holographic virtual avatar (Jozuka et al., 2018). Robotic animals like Paro have been around for a while, and there are now ChatGPT-enabled pet robots like Loona. One final example demonstrates the social inclusion of AI systems like chatbots: the prevalence of friendbots like Replika. During the pandemic, reports of using chatbots like Replika for therapeutic reasons were up (Weber-Guskar, 2022). As mentioned, there is also growing acceptance of using chatbots or LLM-equipped robots as sexbots.
One of our referees noted that it might be helpful to think of the social by reference to Floridi's concept of levels of abstraction (LoA) (Floridi, 2006; Floridi & Sanders, 2004). By using abstraction, one can further clarify a particular phenomenon or artifact of inquiry by focusing on one set of properties or details rather than another; usually, one set is more abstract than the other. This permits researchers to focus on a particular aspect of the inquiry for different purposes, or to be more explicit about the goals of particular explanations. Floridi illustrates the point with the wine example: different LoAs may be appropriate for different purposes. To evaluate a wine, the "tasting LoA," consisting of observables like those mentioned in the previous section, would be relevant. For the purpose of ordering wine, a "purchasing LoA" (containing observables like maker, region, vintage, supplier, quantity, and price) would be appropriate, but here the "tasting LoA" would be irrelevant. In our case, we can focus on the social LoA: the level of conversations between two entities and the socio-linguistic world.
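To make the LoA idea concrete, the following minimal Python sketch is our own hypothetical illustration, not Floridi's formal apparatus: the same entity is described once, and each LoA simply selects which observables are visible for a given purpose. All names and values below (e.g., view_at_loa, tasting_loa) are assumptions introduced for illustration only.

```python
# A minimal illustrative sketch of Floridi-style levels of abstraction (LoA).
# An LoA is treated here as a chosen set of observables; the underlying entity
# is the same. Names and fields are hypothetical examples, not Floridi's formalism.

wine = {
    # observables relevant to a "tasting LoA"
    "nose": "cherry and leather",
    "acidity": "medium-high",
    "tannin": "firm",
    # observables relevant to a "purchasing LoA"
    "maker": "Example Estate",
    "region": "Rioja",
    "vintage": 2018,
    "supplier": "Local importer",
    "quantity": 6,
    "price": 21.50,
}

def view_at_loa(entity: dict, observables: set) -> dict:
    """Return only the observables that the chosen LoA makes visible."""
    return {k: v for k, v in entity.items() if k in observables}

tasting_loa = {"nose", "acidity", "tannin"}
purchasing_loa = {"maker", "region", "vintage", "supplier", "quantity", "price"}

print(view_at_loa(wine, tasting_loa))      # evaluating the wine
print(view_at_loa(wine, purchasing_loa))   # ordering the wine

# Analogously, a "social LoA" for a chatbot would expose only conversational
# observables (turns, politeness markers, role in the exchange), abstracting
# away from implementation details such as model weights.
```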
Here, the conditions governing the individuality (rather than the identification) of the artifact come into play; see Symons (2010) for a discussion of the individuality of artifacts and organisms.
Like Barandiaran et al.'s conditions for minimal agency, Floridi and Sanders (2004) also provide base conditions for agency: (a) interactivity, responding to environmental stimuli; (b) autonomy, governing its behavior independently of environmental stimuli; and (c) adaptability, modifying its system states and transition rules according to the environment, taking into account the success and failure of tasks (357–358, 363–364). These conditions are similar to Barandiaran et al.'s: autonomy is similar to individuality, adaptability and interactivity have parallels with interactional asymmetry, and adaptability is akin to normativity (success or failure at achieving normative goals). Of course, these conditions are not exact replicas. Also, like Floridi and Sanders, we highlight the importance of LoA for chatbot agency: chatbots are best understood as agents when viewed at the social or linguistic LoA. Although Floridi and Sanders differentiate between agency and moral agency, their ultimate goal is to establish moral agency for AI systems by first showing their agential status.
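As a rough, hypothetical illustration of how such base conditions might be operationalized at a chosen LoA, consider the sketch below. It is our own simplification, not Floridi and Sanders's or Barandiaran et al.'s formal account, and the example systems and their classifications are assumptions for illustration only.

```python
from dataclasses import dataclass

# A simplified, hypothetical operationalization of the three base conditions for
# agency discussed by Floridi and Sanders (2004): interactivity, autonomy, and
# adaptability. This is an illustrative sketch, not their formalism.

@dataclass
class ObservedBehavior:
    responds_to_stimuli: bool       # interactivity: reacts to environmental input
    acts_without_stimuli: bool      # autonomy: initiates state changes on its own
    updates_transition_rules: bool  # adaptability: revises how it acts based on outcomes

def satisfies_base_agency_conditions(b: ObservedBehavior) -> bool:
    """True only if all three conditions hold at the chosen level of abstraction."""
    return b.responds_to_stimuli and b.acts_without_stimuli and b.updates_transition_rules

# A thermostat: interactive, but neither autonomous nor adaptive in the relevant sense.
thermostat = ObservedBehavior(True, False, False)
# A chatbot observed at the social/linguistic LoA might plausibly satisfy all three.
chatbot = ObservedBehavior(True, True, True)

print(satisfies_base_agency_conditions(thermostat))  # False
print(satisfies_base_agency_conditions(chatbot))     # True
```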
Shanahan (2023) underscores this point for LLMs, as he says, “a bare bone LLM [for instance] doesn’t really know anything because all it does, at a fundamental level, is a sequence prediction” (2023, 5). So, although it is tempting to ascribe intentionality, beliefs, and desires to these systems, it is a mistake. For Dennett, the intentional stance was understood to be an adaptive trait to specific environmental and evolutionary pressures. In this sense, we are “right” to ascribe beliefs and intentions to aspects of the world that evolution shaped us to detect. See AUTHOR 20?? for a discussion of the relationship between the appropriateness of taking the intentional stance and Dennett’s skepticism with respect to realism about representations and intentions.
Not all chatbots are deliberately deceptive in this respect. In 2022, DeepMind's Sparrow chatbot was explicitly built to avoid this kind of deceptive action in relation to users, and the team's working paper provides a detailed description of the heuristics they employed to guide the chatbot (The Sparrow Team, 2022).
Similarly, van Lingen et al. (2023) also affirm threshold approaches and slip between moral agency and agency simpliciter. For example, Binkley and Pilkington say that to be a minimal agent is "to simply [perform an] intentional action" (2023, 25), and van Lingen et al. (2023) differentiate between strong and weak AI. For van Lingen et al., chatbots are weak AI: strong AI can have phenomenal experiences, but weak AI cannot; therefore, weak AI is not a moral agent (22). Furthermore, on their view, weak AI cannot act without human actors; thus, chatbots cannot be agents (23). Some, like Huber, take a different approach: Huber suggests that the pragmatic benefit of AI is more important than whether AIs are actual agents or not. Lastly, Holohan et al. (2023) suggest that agency in therapeutic contexts emerges as a result of the relationship between chatbot and patient (15).
Glock provides an overview of the reasons philosophers deny that animals act. The primary basis is the claim that animals do not act in virtue of reasons (Glock, 2019, 667).
References
Adshade, M. (2017). “Sexbot-induced social change: An economic perspective.” In Robot Sex: Social and Ethical Implications, 289–300. MIT Press.
Anscombe, G. E. M. (1957). Intention. Basil Blackwell.
Barandiaran, X. E., Di Paolo, E., & Rohde, M. (2009). Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior, 17(5), 367–86. https://doi.org/10.1177/1059712309343819
Binkley, C. E., & Pilkington, B. (2023). The actionless agent: An account of human-CAI relationships. The American Journal of Bioethics, 23(5), 25–27. https://doi.org/10.1080/15265161.2023.2191035
Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI friend: How users of a social chatbot understand their human–AI friendship. Human Communication Research, 48(3), 404–429.
Brey, P. (2014). From moral agents to moral factors: The structural ethics approach. In P. Kroes & P.-P. Verbeek (Eds.), The Moral Status of Technical Artefacts 17:125–42. Philosophy of Engineering and Technology. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-7914-3_8
Burge, T. (2009). Primitive agency and natural norms. Philosophy and Phenomenological Research, 79(2), 251–278.
Calvo, P., & Symons, J. (Eds.). (2014). The architecture of cognition: Rethinking Fodor and Pylyshyn’s systematicity challenge. MIT Press.
Calvo, P., Martín, E., & Symons, J. (2014). The emergence of systematicity in minimally cognitive agents. The architecture of cognition: Rethinking Fodor and Pylyshyn’s systematicity challenge, 397.
Davidson, D. (1980). Actions, reasons, and causes (1963). Reprinted in Essays on actions and events (pp. 3–20). Clarendon Press.
De Gennaro, M., Krumhuber, E. G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10, 3061.
Dennett, D. C. (1987). The intentional stance. MIT press.
di Paolo, E. A. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 4(4), 429–452. https://doi.org/10.1007/s11097-005-9002-y
Ferrero, L. (2022). Introduction. In L. Ferrero (Ed.), The Routledge handbook of philosophy of agency (pp. 1–18). Routledge.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4358789
Friston, K., Moran, R. J., Nagai, Y., Taniguchi, T., Gomi, H., & Tenenbaum, J. (2021). World model learning and inference. Neural Networks, 144, 573–590. https://doi.org/10.1016/j.neunet.2021.09.011
Gao, J., Zheng, P., Jia, Y., Chen, H., Mao, Y., Chen, S., Wang, Y., Fu, H., & Dai, J. (2020). Mental health problems and social media exposure during COVID-19 outbreak. PLoS ONE, 15(4), e0231924.
Gillath, O., Abumusab, S., Ai, T., Branicky, M. S., Davison, R. B., Rulo, M., Symons, J., & Thomas, G. (2023). How deep is AI's love? Understanding relational AI. Behavioral and Brain Sciences, 46, e33.
Glock, H.-J. (2019). Agency, intelligence and reasons in animals. Philosophy, 94(04), 645–671. https://doi.org/10.1017/S0031819119000275
Glock, H.-J. (2009). Can animals act for reasons? Inquiry, 52(3), 232–254. https://doi.org/10.1080/00201740902917127
Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29. https://doi.org/10.1007/s10676-008-9167-5
Holohan, M., Buyx, A., & Fiske, A. (2023). Staying curious with conversational AI in psychotherapy. The American Journal of Bioethics, 23(5), 14–16. https://doi.org/10.1080/15265161.2023.2191059
Jackson, R. B., & Williams, T. (2021). A theory of social agency for human-robot interaction. Frontiers in Robotics and AI, 8, 687726. https://doi.org/10.3389/frobt.2021.687726
Jecker, N. S. (2023). Social robots for later life: Carebots, Friendbots and Sexbots. In R. Fan & M. J. Cherry (Eds.), Sex Robots: Social Impact and the Future of Human Relations (pp. 20–40). Springer.
Jecker, N. S. (2021). Nothing to be ashamed of: Sex robots for older adults with disabilities. Journal of Medical Ethics, 47(1), 26–32. https://doi.org/10.1136/medethics-2020-106645
Jozuka, E., Sato, H., Chan, A., & Mulholland, T. (2018). “Beyond dimensions: The man who marries a hologram.” CNN, December 29, 2018. https://www.cnn.com/2018/12/28/health/rise-of-digisexuals-intl/index.html
Karaian, L. (2022). Plastic fantastic: Sex robots and/as sexual fantasy. Sexualities. https://doi.org/10.1177/13634607221106667
Khan, R., & Das, A. (2018). Build better chatbots: A complete guide to getting started with chatbots. Springer.
Levy, D. N. L. (2007). Love + sex with robots: The evolution of human-robot relations (1st ed.). HarperCollins.
Lewis-Martin, J. (2022). What kinds of groups are group agents? Synthese, 200(4), 283. https://doi.org/10.1007/s11229-022-03766-z
van Lingen, M. N., Giesbertz, N. A. A., van Tintelen, J. P., & Jongsma, K. R. (2023). Why we should understand conversational AI as a tool. The American Journal of Bioethics, 23(5), 22–24. https://doi.org/10.1080/15265161.2023.2191039
List, C. (2021). Group agency and artificial intelligence. Philosophy & Technology, 34, 1213–1242.
Ma, J., Tojib, D., & Tsarenko, Y. (2022). Sex robots: Are we ready for them? An exploration of the psychological mechanisms underlying people’s receptiveness of sex robots. Journal of Business Ethics, 178(4), 1091–1107.
Marečková, A., Androvičová, R., Bártová, K., Krejčová, L., & Klapilová, K. (2022). Men with paraphilic interests and their desire to interact with a sex robot. Journal of Future Robot Life, 3(1), 39–48. https://doi.org/10.3233/FRL-210010
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
Mecacci, G., Calvert, S. C., & Sio, F. S. D. (2023). Human–machine coordination in mixed traffic as a problem of meaningful human control. AI & Society, 38(3), 1151–1166. https://doi.org/10.1007/s00146-022-01605-w
Natale, S. (2021). Deceitful media: Artificial intelligence and social life after the Turing test. Oxford University Press.
Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219. https://doi.org/10.1007/s11948-017-9943-x
Nyholm, S. (2020). Human-robot collaborations and responsibility-loci. In Humans and robots: Ethics, agency, and anthropomorphism (Philosophy, Technology and Society). Rowman & Littlefield International.
Nyholm, S. (2023). Tools and/or agents? Reflections on Sedlakova and Trachsel’s discussion of conversational artificial intelligence. The American Journal of Bioethics, 23(5), 17–19. https://doi.org/10.1080/15265161.2023.2191053
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv. http://arxiv.org/abs/2304.03442
Paul, S. K. (2021). Philosophy of action: A contemporary introduction. Routledge Contemporary Introductions to Philosophy. New York London: Routledge, Taylor & Francis Group.
Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach. Pearson Education.
Schlosser, M. (2019). Agency. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2019 ed.). https://plato.stanford.edu/archives/win2019/entries/agency/
Schwitzgebel, E., & Shevlin, H. (2023, March 5). Opinion: Is it time to start considering personhood rights for AI chatbots? Los Angeles Times. https://www.latimes.com/opinion/story/2023-03-05/chatgpt-ai-feelings-consciousness-rights
Shanahan, M. (2023). Talking about large language models. arXiv. http://arxiv.org/abs/2212.03551
Sparrow, R. (2021). Sex robot fantasies. Journal of Medical Ethics, 47(1), 33–34. https://doi.org/10.1136/medethics-2020-106932
Sternlicht, A. (2023). CarynAI will be your girlfriend for $1 a minute. Fortune. https://fortune.com/2023/05/09/snapchat-influencer-launches-carynai-virtualgirlfriend-bot-openai-gpt4/ (visited August 7, 2023).
Steward, H. (2009). Animal agency. Inquiry, 52(3), 217–31. https://doi.org/10.1080/00201740902917119
Strasser, A. (2022). Distributed responsibility in human–machine interactions. AI and Ethics, 2(3), 523–532. https://doi.org/10.1007/s43681-021-00109-5
Swanepoel, D. (2021). Does artificial intelligence have agency? In The mind-technology problem: Investigating minds, selves and 21st century artefacts (pp. 83–104).
Symons, J. (2001). On Dennett. Wadsworth.
Symons, J. (2010). The individuality of artifacts and organisms. History and philosophy of the life sciences, 233–246.
Symons, J., & Alvarado, R. (2022). Epistemic injustice and data science technologies. Synthese, 200(2), 87.
Symons, J., & Elmer, S. (2022). Resilient institutions and social norms: Some notes on ongoing theoretical and empirical research. Merrill Series on The Research Mission of Public Universities.
The Sparrow Team. (2022). Training an AI to communicate in a way that’s more helpful, correct, and harmless. Building Safer Dialogue Agents. Retrieved March 10, 2023, from https://www.deepmind.com/blog/building-safer-dialogue-agents
Ullman, T. (2023). Large language models fail on trivial alterations to theory-of-mind tasks. https://doi.org/10.48550/ARXIV.2302.08399
van Grunsven, J. (2022). Anticipating sex robots: A critique of the sociotechnical vanguard vision of sex robots as ‘good companions’. In Being and value in technology, pp. 63–91. Cham: Springer International Publishing.
van Hateren, J. H. (2015). The origin of agency, consciousness, and free will. Phenomenology and the Cognitive Sciences, 14(4), 979–1000. https://doi.org/10.1007/s11097-014-9396-5
van Hateren, J. H. (2016). Insects have agency but probably not sentience because they lack social bonding. Animal Sentience 1, no. 9. https://doi.org/10.51291/2377-7478.1130
Véliz, C. (2021). Moral zombies: Why algorithms are not moral agents. AI & SOCIETY, 36(2), 487–497. https://doi.org/10.1007/s00146-021-01189-x
Weber-Guskar, E. (2022). How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Ethics and Information Technology, 23(4), 601–610.
Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152. https://doi.org/10.1017/S0269888900008122
Yang, M. (2020). Painful conversations: Therapeutic chatbots and public capacities. Communication and the Public, 5(1–2), 35–44. https://doi.org/10.1177/2057047320950636
Zhu, Q. (2020). Ethics, society, and technology: A Confucian role ethics perspective. Technology in Society, 63, 101424.
Acknowledgements
We gratefully acknowledge the excellent feedback from two anonymous referees. Conversations with the AI ethics graduate seminar at the University of Kansas in 2021 formed the basis for this project in addition to helpful discussions with Ramon Alvarado, Oluwaseun Sanwoolu, Oluwakorede Ajibona, Francisco Pipa, Luciano Floridi, Jack Horner, Amir Modarresi, John Sullins, and Caroline Arruda.
Funding
Ripple, U.S. Department of Defense, H98230-23-C-0277, John Symons.
Contributions
The authors contributed equally to this study.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Cite this article
Symons, J., Abumusab, S. Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. DISO 3, 2 (2024). https://doi.org/10.1007/s44206-023-00086-8