
Can Robots Make us Better Humans?

Virtuous Robotics and the Good Life with Artificial Agents

Published in: International Journal of Social Robotics

Abstract

This position paper proposes a novel approach to the ethical design of social robots. We coin the term “Virtuous Robotics” to describe Human–Robot Interaction (HRI) designed to help humans reach a higher level of moral development. Our approach contrasts with mainstream approaches to robot design inspired by the other two major normative theories, Consequentialism and Deontology. We theoretically justify our proposal, illustrating how the Virtuous Robotics approach allows us to discriminate between positive and negative applications of robotic systems, of which we provide examples. From an ethical perspective, our proposal is theoretically robust because it rests on the assistive role played by the robot rather than on the robot’s moral agency. From a designer’s perspective, Virtuous Robotics is technically feasible because it transfers the cognitive burden of HRI from the robot to the user, bypassing the need for complex machine decision-making abilities. From the user’s perspective, it is concretely advantageous because it envisions a realistic way to make robots morally desirable in our lives, as supports for personal betterment and fulfilment.



Author information


Corresponding author

Correspondence to Massimiliano L. Cappuccio.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Funding

This research was supported by a DHRG Seedcorn Funding Grant awarded to Omar Mubin and Massimiliano L. Cappuccio by the School of Digital Humanities of Western Sydney University on 19/5/2018.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Cappuccio, M.L., Sandoval, E.B., Mubin, O. et al. Can Robots Make us Better Humans? Int J of Soc Robotics 13, 7–22 (2021). https://doi.org/10.1007/s12369-020-00700-6

