
On the moral status of social robots: considering the consciousness criterion

  • Original Article, published in AI & SOCIETY

Abstract

While philosophers have been debating for decades whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered to have moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why consciousness is generally supposed to underlie the morally relevant properties (such as sentience), and I respond to some of the common objections against this view. Then, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.


Change history

  • 07 April 2024

    Change history note: The error in the article citation Benford and Malartre 2007, 163 has been resolved.

Notes

  1. Consider, for example, service robots (van Wynsberghe 2016); robotic caregivers and healthcare robots (van Wynsberghe 2015; Gunkel 2014b); pets and companions (e.g., “interactive robotic toys” such as Pleo (a robotic dinosaur), Aibo (a robotic dog), and Paro (a robotic baby seal)); as well as research robots, such as MIT’s Cog and Kismet (Tavani 2018, 3).

  2. Jaworska and Tannenbaum (2018) distinguish between sophisticated and rudimentary cognitive capacities as criteria for moral status. According to them, sentience falls under the latter category.

  3. It may be important to note that many philosophers maintain that self-consciousness itself is a necessary criterion for rationality, since it makes self-knowledge possible (Smith 2017, § 4.2).

  4. As should be evident from the very term, self-consciousness is itself a kind of consciousness (i.e., consciousness of oneself) (see Smith 2017).

  5. See Warren (1997, Ch. 4) for an in-depth discussion of the personhood criterion for moral status. Warren confirms that “a person is necessarily 〈…〉 an entity that has conscious experiences” (Warren 1997, 94) and adds that even the “minimalist” definitions of personhood require “some capacity for thought and self-awareness” (Warren 1997, 90).

  6. I owe this discussion suggestion to an anonymous referee.

  7. It must be noted that I am not claiming that consciousness is sufficient for moral status. For example, John Basl (2012) argues that a machine that can only experience colors and has no preferences for any particular color would not qualify for moral status (i.e., it would not be wronged if we were to change the color it is experiencing or simply shut it down), and I agree with that. The point is rather that consciousness is necessary for moral status without being sufficient, because non-conscious entities are incapable of having positively or negatively valenced experiences.

  8. Note that I do not mean to imply here that no doubts about the mental states of other entities are ever justified. As Erica Neely notes, there is always going to be a certain margin of error in our extrapolations from the structural and behavioral to the mental (see Neely 2014, 104–106). Similarly, as Schwitzgebel and Garza (2015) observe, our poor general understanding of consciousness imposes considerable limitations on our epistemology of consciousness as well. But the point here is simply that our doubts regarding the conscious states of other entities should not be of the Cartesian sort (i.e., based on the supposition that the so-called privileged access is the only way to go when attempting to discern whether this or that entity possesses consciousness).

  9. I am grateful to an anonymous referee for this discussion suggestion.

  10. In the context of Schwitzgebel’s work, “phenomenal realism” refers to the view that phenomenal consciousness is real, in contrast to the “illusionist” position of Keith Frankish (2012), who argues that it does not really exist. To meet Frankish’s challenge, Schwitzgebel gives a helpful analogy that concerns teaching someone the concept “pink.” Even if the language of the person being taught has no term corresponding to “pink,” once a sufficient number of different pink objects is shown to them, the person will latch onto the concept, and it will no longer make sense for them to ask questions such as, to use Schwitzgebel’s examples, “so do you mean this-shade-and-mentioned-by-you?” or “must ‘pink’ things be less than six miles wide?” (Schwitzgebel 2016, 15). Once that happens, the person should be committed to what we may refer to as “pinkness realism” regardless of what the ultimate metaphysical ground for pinkness is; whether pinkness is a primary or a secondary property; and whether someone else perceives the color of these objects differently. Of course, the metaphysical and epistemological questions remain significant, but, as may be recalled, the original objection was that the notion of consciousness is problematic in the context of moral philosophy because of conceptual and definitional difficulties, and I think that Schwitzgebel’s argument successfully meets this particular challenge.

  11. Gunkel and Coeckelbergh, for example, raise a number of objections against the standard properties-based view of moral consideration, which also apply to the consciousness criterion. I have responded to some of them elsewhere (see Mosakas 2020).

  12. See Coeckelbergh and Gunkel (2014) for the relational approach applied to animals and Coeckelbergh (2018) for relational plant ethics.

  13. The closest Coeckelbergh seems to get to explaining how his approach could be applied in practice is in his example about pigs. He claims: “Consider meat production in industrial societies. Instead of asking first what kind of animal a pig is, we must study and evaluate relations between humans and pigs within meat production systems and within industrial society and compare this with other human-animal relations such as human-pet relations” (Coeckelbergh 2010, 218). Then he goes on to suggest that we do the same with robots. Surely, however, the ways we relate to different animals are just descriptive facts about the world, and something more than that seems to be needed. Some animals, for example, are kept as pets, while others are hunted for the fun of it. The vague suggestion to consider different relations just seems too weak without any overarching moral guidelines.

  14. Worse, Gunkel does not provide any rigorous account of pluralism, nor does he explain how exactly it links to his relational approach. For the most part, what he gives is a number of citations from different authors on how there are alternatives between extreme relativism and moral absolutism (Gunkel 2018a, subsec. 6.2.3, ¶ 2–3). But that seems insufficient, given that (1) the typical concept of non-relativistic ethical pluralism seems to conflict with radically relational ethics and (2) Gunkel does not explain how the two can be linked together.

  15. For example, in both of his major works on machine rights, Gunkel (2012, § 3.5; 2018a, subsec. 6.2.3) cites Robert Scott (1976, 264), who notes that “relativism can indicate circumstances in which standards have to be established cooperatively and renewed repeatedly.”

  16. Here one may note that I have earlier granted that there is some space between extreme relativism and moral absolutism. However, I do not think that there is any such space between extreme relativism and moral centrism, which is the view that there is at least one valid central criterion for moral evaluation. Views such as “moderate” or “soft” meta-ethical relativism still have to posit at least one such criterion, even if all else is relative and even if the criterion in question is rather abstract and realizable in multiple ways. To illustrate this, suppose that “valuing life” is one such criterion. As MacKinnon and Fiala (2015, 49) write, “different cultures may share the idea that ‘life should be valued,’ for example, but they will disagree about what counts as ‘life’ and what counts as ‘valuing life.’ It might be that both human and animal lives count and so no animal lives can be taken in order to support human beings. Or it might be that some form of ritual sacrifice could be justified as a way of valuing life.” See also Nussbaum 2001 for a paradigmatic example of such an approach.

  17. I owe this way of phrasing the issue to an anonymous referee. I also pose this problem elsewhere (see Mosakas 2020).

  18. To “take on face” is to become morally considerable—Gunkel borrows the notion of face from the ethics of Emmanuel Levinas (1969).

  19. Gunkel objects to Darling’s argument on the grounds that it “renders the inclusion of previously excluded others less than altruistic 〈…〉. The rights of others, in other words, is not about them; it is all about us” (Gunkel 2018b, 95). But this fails to consider the further implications of Darling’s position—if humans become more callous and less caring, then all entities worthy of moral concern may suffer from that. Now with respect to the question of which entities these are, Gunkel may disagree with Darling, but he cannot just assume that Darling should be more altruistic towards entities that he (Gunkel) is morally concerned for and include them as well, for that is clearly question-begging.

  20. See also Cappuccio et al. 2019 for a similar sort of argument from the standpoint of virtue ethics.

  21. It should be clear that Mary is neither sentient nor rational, in which case she is below the threshold of both of the typical criteria for moral status. Floridi then considers whether one can, nevertheless, argue that Mary qualifies for moral status on these grounds by pointing out that she possesses the necessary properties “in principle, though not in practice” (Floridi 2002, 294–295), and, I think, quite rightly rejects this suggestion. Biocentrism, it is pointed out, would not work either, because then one can just tweak the thought experiment and suppose that Mary is no longer alive, and yet the intuition that she deserves moral respect would not go away (Floridi 2002, 296).

  22. For Floridi, the problem with extrinsic value seems to be that it is contingent and not grounded in any inherent properties of an object. However, it is not clear to me that the intuition that Mary’s corpse deserves respect is so powerful that it would necessarily require one to accept that we owe direct moral duties to the corpse itself.

  23. Of course, there are more possible explanations. As far as I can see, virtue ethics should have little problem in accounting for the moral wrongness of “mistreating” the corpse, although it would do so by appealing to the character of the person who is behaving inappropriately. Another interesting suggestion is put forward by Jürgen Habermas, who argues that a rational discourse on the moral self-understanding of our species would include not only the concept of human dignity as granted by law, but also the dignity of human life in general (e.g., prepersonal life), which, he claims, is a distinction that “is also echoed in the phenomenology of our highly emotional attitude toward the dead” (Habermas 2003, 36).

  24. See Brey 2008 for an alternative to IE. In Philip Brey’s view, one should abandon the idea that informational objects have intrinsic value and rather shift toward a respect-centered ethics, according to which many—but not all—inanimate objects deserve minimal respect by virtue of their non-intrinsic value. This may indeed be a more defensible view than IE.

  25. This is a slightly rephrased version based on Tavani (2018, 12) and Carr (2018, 5–6), who formalizes the argument more comprehensively than Tavani does. Carr’s (2018) unpublished commentary on different possible grounds for our moral duties to robots is available here: https://www2.rivier.edu/faculty/lcarr/OUR%20MORAL%20OBLIGATION%20TO%20ROBOTS.pdf.

  26. Notice that it cannot merely refer to some human beings, for that would make human existence possible without being-in-the-world, which seems metaphysically absurd in the context of the argument.

  27. Consider again the two ways of being part of a moral concern that I outlined in the first section based on Kamm (2007). All else being equal, an embryo may impose on us the moral duty to preserve it under the supposition that it would develop into an extraordinary person. But under the typical use of the notion, our moral duties toward an embryo of this sort would not be direct, for even “if an embryo can matter in its own right, this does not mean that its continued existence is good for it, or that it is harmed by not continuing on, or that we can act for its sake in saving its life” (Kamm 2007, 229). In other words, if the embryo had moral status, then it would impose moral duties on us not merely because it matters in its own right in some way, but also because the way it is treated would matter to it for its own sake. Only in that case would the imposed duties be direct.

  28. One could also argue against the second premise on the grounds that the idea that human existence should be conceived in terms of being-in-the-world or Dasein is itself one that numerous philosophers—especially those in the analytic tradition of thought—would reject in favor of other stances towards human ontology, such as the atomistic ones. While I do think that this offers a basis for a sound objection, it is not within the scope of this paper to cover it in any detail.

  29. Indeed, one of the creators of AlphaGo, Thore Graepel, remarked during one of the matches against Lee Sedol: “Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with” (Graepel in Metz 2016). Moreover, as Joel Walmsley rightly pointed out to me, AlphaGo even developed certain successful unconventional strategies that no human had thought of before.

  30. In a private correspondence, Joel Walmsley suggested to me an analogous reductio ad absurdum objection to IE on the grounds that it makes the sphere of moral concern too big. While I do think that he is right to point out that IE is overinclusive, I choose to object to it on different grounds, given that the radical moral inclusion entailed by IE is its raison d’être. I see IE as a sort of philosophical gambit whereby one chooses to radicalize certain aspects of a theory to see whether benefits of different sorts could be reaped. In Floridi’s case, the goal of positing IE was to propose a model that would avoid certain methodological difficulties that the standard theories face when addressing problems related to Computer Ethics. In contrast to this, the overinclusion on Tavani’s account seems like a completely unintended and unwelcome consequence that does not offer any additional benefits.

References

  • Baker LR (2000) Persons and bodies: a constitution view. Cambridge University Press, Cambridge


  • Basl J (2012) Machines as moral patients we shouldn’t care about (yet): the interests and welfare of current machines. In: Gunkel DJ, Bryson JJ, Torrance S (eds) Proceedings of the AISB/IACAP world congress 2012: the machine question: AI, ethics and moral responsibility. Birmingham, England

  • Benford G, Malartre E (2007) Beyond human: living with robots and cyborgs. Tom Doherty, New York


  • Bhatnagar S et al. (2018) Mapping intelligence: requirements and possibilities. In: Müller VC (ed) PT-AI 2017. SAPERE, vol 44. Springer, Cham, pp 117–135

  • Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey W (eds) Cambridge handbook of artificial intelligence. Cambridge University Press, New York, pp 316–334. https://intelligence.org/files/EthicsofAI.pdf. Accessed 22 Dec 2019

  • Brey P (2008) Do we have moral duties towards information objects? Ethics Inf Technol 10:109–114


  • Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins Publishing, Amsterdam, pp 63–74

  • Cappuccio ML, Peeters A, McDonald W (2019) Sympathy for Dolores: moral consideration for robots based on virtue and recognition. Philos Technol (Online First), pp 1–23

  • Carruthers P (1992) The animal issue: moral theory in practice. Cambridge University Press, Cambridge

  • Chalmers D (1995) Facing up to the problem of consciousness. J Conscious Stud 2(3):200–219


  • Chmait N et al (2016) A dynamic intelligence test framework for evaluating AI agents. In: Proceedings of evaluating general-purpose AI (EGPAI), ECAI workshop. The Hague, The Netherlands

  • Coeckelbergh M (2010) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12(3):209–221


  • Coeckelbergh M (2011) Humans, animals, and robots: a phenomenological approach to human-robot relations. Int J Soc Robot 3(2):197–204


  • Coeckelbergh M (2014) The moral standing of machines: towards a relational and non-Cartesian moral hermeneutics. Philos Technol 27(1):61–77

  • Coeckelbergh M (2018) What do we mean by a relational ethics? Growing a relational approach to the moral standing of plants, robots, and other non-humans. In: Kallhoff A, Di Paola M, Schörgenhumer M (eds) Plant ethics: concepts and applications. Routledge, pp 98–109

  • Coeckelbergh M, Gunkel DJ (2014) Facing animals: a relational, other-oriented approach to moral standing. J Agric Environ Ethics 27(5):715–733


  • Danaher J (2019) Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00119-x


  • Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Northampton, pp 213–231

  • Dennett D (1996) Kinds of minds: toward an understanding of consciousness. Basic Books, New York


  • Estrada D (2017) Made of robots 1: robot rights. Cheap, yo! https://www.youtube.com/watch?v=TUMIxBnVsGc. Accessed 14 May 2020

  • Estrada D (2018) Value alignment, fair play, and the rights of service robots. In: AIES ‘18: Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society, pp 102–107. https://doi.org/10.1145/3278721.3278730

  • Fagan A. Human rights. Internet Encyclopedia of Philosophy. https://www.iep.utm.edu/hum-rts/. Accessed 21 Dec 2019

  • Floridi L (1999) Information ethics: on the philosophical foundation of computer ethics. Ethics Inf Technol 1(1):37–56


  • Floridi L (2002) On the intrinsic value of information objects and the infosphere. Ethics Inf Technol 4(4):287–304


  • Floridi L (2008) Information ethics: its nature and scope. In: van den Hoven J, Weckert J (eds) Information technology and moral philosophy. Cambridge University Press, Cambridge, pp 40–65


  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379

  • Frankish K (2012) Quining diet qualia. Conscious Cogn 21:667–676


  • Gordon J-S (2018a) What do we owe to intelligent robots? AI Soc. https://doi.org/10.1007/s00146-018-0844-6


  • Gordon J-S (2018b) Indignity and old age. Bioethics 32(4):223–232


  • Grace K, Salvatier J, Dafoe A et al (2017) When will AI exceed human performance? Evidence from AI experts. arXiv:1705.08807 [cs.AI]

  • Gunkel DJ (2007) Thinking otherwise: philosophy, communication, technology. Purdue University Press, USA


  • Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge

  • Gunkel DJ (2014a) A vindication of the rights of machines. Philos Technol 27(1):113–132


  • Gunkel DJ (2014b) The rights of machines: caring for robotic care-givers. In: van Rysewyk SP, Pontier M (eds) Machine medical ethics. Springer, Heidelberg, pp 151–166

  • Gunkel DJ (2018a) Robot rights. MIT Press, Cambridge

  • Gunkel DJ (2018b) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99


  • Habermas J (2003) The future of human nature. Translated by Beister H, Pensky M, Rehg W. Polity Press, Cambridge


  • Hernández-Orallo J (2017) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press, Cambridge


  • Himma KE (2004) There’s something about Mary: the moral value of things qua information objects. Ethics Inf Technol 6(3):145–159

  • Jaworska A, Tannenbaum J (2018) The grounds of moral status. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2018/entries/grounds-moral-status. Accessed 22 Dec 2019

  • Jonas H (1973) Technology and responsibility: reflections on the new tasks of ethics. Soc Res 40(1):31–54


  • Kamm FM (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, New York


  • Kant I (1983) Grounding for the metaphysics of morals. Translated by Ellington JW. Hackett, Indianapolis

  • Laukyte M (2013) The capabilities approach as a bridge between animals and robots. European Institute Working Papers https://cadmus.eui.eu/handle/1814/27058. Accessed 21 Dec 2019

  • Lecky WEH (1869) History of European morals. D. Appleton & Co, New York

  • Legg S, Hutter M (2007) A collection of definitions of intelligence. In: Goertzel B, Wang P (eds) Advances in artificial general intelligence: concepts, architectures and algorithms—proceedings of the AGI workshop 2006. Frontiers in artificial intelligence and applications 157. IOS, Amsterdam, pp 17–24

  • Leopold A (1949) A sand county almanac. Oxford University Press, Oxford


  • Levinas E (1969) Totality and infinity: an essay on exteriority. Translated by Lingis A. Duquesne University Press, Pittsburgh

  • MacKinnon B, Fiala A (2015) Ethics: theory and contemporary issues, 8th edn. Cengage Learning, Stamford


  • Marquis D (2013) An argument that abortion is wrong. In: Shafer-Landau R (ed) Ethical theory: an anthology, 2nd edn. Wiley, Oxford


  • Mason E (2018) Value pluralism. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/value-pluralism/. Accessed 14 May 2020

  • McLuhan M (1995) Understanding media: the extensions of man. MIT Press, Cambridge

  • McMahan J (2002) The ethics of killing: problems at the margins of life. Oxford University Press, Oxford


  • Metz C (2016) Google’s AI wins a pivotal second game in match with go grandmaster. Wired. http://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/. Accessed 22 Dec 2019

  • Miller LF (2015) Granting automata human rights: challenge to a basis of full-rights privilege. Hum Rights Rev 16(4):369–391


  • Mosakas K (2020) Machine moral standing: in defence of the standard properties-based view. In: Gordon J-S (ed) Smart technologies and fundamental rights. Brill (forthcoming)

  • Moser PK, Carson TL (eds) (2001) Moral relativism: a reader. Oxford University Press, New York


  • Müller VC, Bostrom N (2014) Future progress in artificial intelligence: a survey of expert opinion. In: Müller VC (ed) Fundamental issues of artificial intelligence. Springer, Synthese Library, Berlin, pp 552–572


  • Musschenga AW, Meynen G (2017) Moral progress: an introduction. Ethic Theory Moral Prac 20(1):3–15


  • Nash RF (1989) The rights of nature. The University of Wisconsin Press, Madison


  • Neely EL (2014) Machines and the moral community. Philos Technol 27(1):97–111


  • Nussbaum MC (2001) Women and human development. Cambridge University Press, Cambridge


  • Nyholm S (2020) Humans and robots: ethics, agency, and anthropomorphism. Rowman & Littlefield International, London


  • Parfit D (1984) Reasons and persons. Oxford University Press, Oxford


  • Quinn W (1984) Abortion: identity and loss. Philos Public Aff 13:24–54


  • Schwitzgebel E (2016) Phenomenal consciousness, defined and defended as innocently as I can manage. J Conscious Stud 23(11–12):224–235. https://faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf. Accessed 22 Dec 2019

  • Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Stud Philos 39(1):89–119


  • Scott RL (1976) On viewing rhetoric as epistemic: ten years later. Cent States Speech J 27(4):258–266


  • Silver D et al (2017) Mastering the game of Go without human knowledge. Nature 550:354–359


  • Singer P (2011) The expanding circle: ethics, evolution and moral progress. Princeton University Press, Princeton

  • Smith J (2017) Self-consciousness. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/self-consciousness/. Accessed 22 Dec 2019

  • Sullins JP (2011) When is a robot a moral agent? In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 151–161


  • Swinburne R (2004) The existence of God. Oxford University Press, Oxford

  • Sytsma J, Machery E (2012) Two sources of moral standing. Rev Philos Psychol 3:303–324. https://www.researchgate.net/publication/257797401_The_Two_Sources_of_Moral_Standing. Accessed 22 Dec 2019

  • Tavani H (2018) Can social robots qualify for moral consideration? Reframing the question about robot rights. Information 9(4):73. https://www.mdpi.com/2078-2489/9/4/73/htm. Accessed 22 Dec 2019

  • Tooley M (1972) Abortion and infanticide. Philos Public Aff 2(1):37–65


  • Torrance S (2014) Artificial consciousness and artificial ethics: between realism and social relationism. Philos Technol 27:9–29


  • van Wynsberghe A (2015) Healthcare robots: ethics, design and implementation. Ashgate Publishing Ltd, Farnham


  • van Wynsberghe A (2016) Service robots, care ethics, and design. Ethics Inf Technol 18(4):311–321


  • Walters JW (1997) What is a person? An ethical exploration. University of Illinois Press, Urbana

  • Warren MA (1997) Moral status: obligations to persons and other living things. Oxford University Press, Oxford



Acknowledgements

I am extremely grateful to Joel Walmsley for providing a detailed commentary on an early draft of this paper. I would also like to thank the staff of the philosophy department of Maynooth University—Philipp Rosemann, Cyril McDonnell, David O’Brien, and Daire Boyle in particular—for engaging in discussions with me on related topics, as well as John-Stewart Gordon and Sven Nyholm for giving me some valuable ideas. Finally, I am grateful to the anonymous reviewers for their highly helpful feedback.

Funding

This research is funded by the European Social Fund under the activity “Improvement of researchers’ qualification by implementing world-class R&D projects” of Measure No. 09.3.3-LMT-K-712.

Author information


Corresponding author

Correspondence to Kestutis Mosakas.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised to update an incorrect reference citation in section 3.1 from Benford and Malartre (2007, 163) to Anne Foerst (cited in Benford and Malartre 2007, 163).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Mosakas, K. On the moral status of social robots: considering the consciousness criterion. AI & Soc 36, 429–443 (2021). https://doi.org/10.1007/s00146-020-01002-1

