DOI: 10.1145/3568294.3580170
Short Paper · Public Access

"Can You Guess My Moves?: Playing Charades with a Humanoid Robot Employing Mutual Learning with Emotional Intelligence

Published: 13 March 2023

ABSTRACT

Social play is essential in human interactions, increasing social bonding, mitigating stress, and relieving anxiety. With advancements in robotics, social robots can take on this role to assist in human-robot interaction scenarios for clinical and healthcare purposes. However, robotic intelligence still needs further development to match the wide spectrum of social behaviors and contexts in human interactions. In this paper, we present our robotic intelligence framework with a mutual learning paradigm in which we apply deep learning for emotion recognition and behavior perception, so that the robot learns human movements and contexts through the interactive game of charades. Furthermore, we designed a gesture-based social game to make the social robot more empathetic and engaging for the user. We also created a custom behavior database containing contextual behaviors for the proposed social games. A pilot study with participants aged 12 to 19 was conducted for a preliminary evaluation.
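The abstract describes the game loop and perception pipeline only at a high level. As a rough illustration, the Python sketch below shows one plausible structure for a single charades round under the described framework; the `BehaviorEntry` type, the `classify_pose_sequence` stub, and the `robot`/`camera` interfaces are all hypothetical placeholders rather than the authors' implementation, and the stand-in classifier merely substitutes for the deep emotion-recognition and behavior-perception models the paper employs.

```python
import random
from dataclasses import dataclass


@dataclass
class BehaviorEntry:
    """One contextual behavior from a (hypothetical) custom behavior database."""
    label: str              # e.g., "waving goodbye"
    joint_trajectory: list  # joint-angle frames the robot can replay


def classify_pose_sequence(frames, database):
    """Stand-in perception model: picks the behavior whose trajectory length
    best matches the observed frames. A real system would run a deep
    pose/emotion model over the recorded skeleton sequence instead."""
    return min(database,
               key=lambda b: abs(len(b.joint_trajectory) - len(frames))).label


def charades_round(robot, camera, database, memory):
    """One round of robot-acts / human-acts charades (mutual learning sketch)."""
    # Robot's turn: act out a randomly chosen behavior and check the guess.
    target = random.choice(database)
    robot.play_trajectory(target.joint_trajectory)  # hypothetical motion API
    guess = robot.listen_for_guess()                # hypothetical speech API
    robot.say("Correct!" if guess == target.label
              else f"Good try! It was {target.label}.")

    # Human's turn: record their gesture, guess its label, and log the
    # confirmed outcome so the perception model can be refined over repeated
    # rounds -- the mutual-learning part of the framework.
    frames = camera.record_pose_frames(seconds=5)   # hypothetical camera API
    prediction = classify_pose_sequence(frames, database)
    robot.say(f"Is it {prediction}?")
    confirmed = robot.listen_for_yes_no()
    memory.append((frames, prediction, confirmed))
```

In this sketch, the round is symmetric by design: each turn both entertains the user and yields a labeled observation the robot can learn from, which is one natural way to realize the mutual learning paradigm the abstract names.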


Supplemental Material

HRI23-lbr1232.mp4 (mp4, 7.2 MB)


Published in

HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
March 2023 · 612 pages
ISBN: 9781450399708
DOI: 10.1145/3568294

Copyright © 2023 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 242 of 1,000 submissions, 24%
