Improving of Robotic Virtual Agent’s Errors Accepted by Agent’s Reaction and Human’s Preference

  • Conference paper
Social Robotics (ICSR 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14453)


Abstract

One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. In this study, we focused on a task between an agent and a human in which the agent makes a mistake. To identify factors important for designing a robotic agent that can promote human empathy, we experimentally examined the hypothesis that the agent's reaction and the human's preference affect human empathy and acceptance of the agent's mistakes. In the experiment, participants had the agent manage their schedules by answering the questions it asked. The experiment used a three-factor mixed design with four conditions; the factors were the agent's reaction, the agent's body color (selected by the participant to reflect their preference), and task phase (pre- vs. post-task). The results showed that the agent's reaction and the human's preference did not affect empathy toward the agent but did increase acceptance of the agent's mistakes. The results also showed that empathy for the agent decreased when the agent made a mistake on the task. This study suggests ways to influence impressions of the behaviors of robotic virtual agents, which are increasingly used in society.



Acknowledgments

This work was partially supported by JST, CREST (JPMJCR21D4), Japan. This work was also supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2136.

Author information


Corresponding author

Correspondence to Takahiro Tsumura.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tsumura, T., Yamada, S. (2024). Improving of Robotic Virtual Agent’s Errors Accepted by Agent’s Reaction and Human’s Preference. In: Ali, A.A., et al. (eds.) Social Robotics. ICSR 2023. Lecture Notes in Computer Science, vol. 14453. Springer, Singapore. https://doi.org/10.1007/978-981-99-8715-3_25

  • DOI: https://doi.org/10.1007/978-981-99-8715-3_25

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8714-6

  • Online ISBN: 978-981-99-8715-3

  • eBook Packages: Computer Science; Computer Science (R0)
