DOI: 10.1145/3527188.3561934

Study on the Perception of Implicit Indication When Collaborating with an Artificial Agent: Example with the Japanese Rice Cake “Mochi” Making

Published: 05 December 2022

ABSTRACT

Body language is an essential component of communication. The unspoken information it conveys during interpersonal interactions is an invaluable complement to speech alone and makes the exchange smoother and more sustainable. By contrast, existing approaches to human-robot collaboration and communication are far less intuitive, especially when robots must transmit information to their users. This issue needs to be addressed if artificial intelligence and machines are to keep augmenting our capabilities or serving as collaboration partners.

The purpose of this study was to examine how a machine could communicate with its user in a manner close to how humans communicate during collaborative work, using the Japanese rice cake (“mochi”) making process as an example. First, the elements necessary to achieve synchrony between individuals participating in the rice cake making process were identified. Then, these elements were reproduced in a simulated environment and applied to a rice cake making task, this time involving collaboration between an individual and a robot. In this second step, the robot was made to mimic key interpersonal implicit communication elements identified during the first stage of the study. Results showed an improvement in performance, along with a better capacity to predict and adapt motion according to the other party’s, suggesting that the system helped realize a certain level of understanding between biological and artificial agents in collaborative work without any form of explicit communication.
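The abstract does not spell out how the robot anticipates its partner's motion, but the core idea of predicting and adapting to the other party's timing can be pictured with a minimal sketch. The agent below (a hypothetical ImplicitTimingAgent; the class name, parameters, and constant-rhythm assumption are ours for illustration, not the system described in the paper) infers the pounder's rhythm from observed strike times alone, i.e. from an implicit cue rather than an explicit command, and fits its own mochi-turning action into the safe gap before the predicted next strike.

```python
from statistics import mean
from typing import List, Optional, Tuple


class ImplicitTimingAgent:
    """Infers the partner's pounding rhythm from observed strike times and
    schedules its own hand-turning action in the gap before the next strike.
    (Illustrative assumption only, not the system described in the paper.)"""

    def __init__(self, history_size: int = 4, safety_margin: float = 0.15):
        self.strike_times: List[float] = []   # times (s) at which the mallet landed
        self.history_size = history_size      # how many recent strikes to average over
        self.safety_margin = safety_margin    # clear the mochi this long before a strike

    def observe_strike(self, t: float) -> None:
        """Record one observed strike; only the most recent ones are kept."""
        self.strike_times.append(t)
        self.strike_times = self.strike_times[-self.history_size:]

    def predicted_next_strike(self) -> Optional[float]:
        """Extrapolate the next strike, assuming a roughly steady rhythm."""
        if len(self.strike_times) < 2:
            return None
        intervals = [b - a for a, b in zip(self.strike_times, self.strike_times[1:])]
        return self.strike_times[-1] + mean(intervals)

    def plan_turn_action(self, now: float, turn_duration: float = 0.4) -> Optional[Tuple[float, float]]:
        """Return (start, end) of a reach-in to turn the mochi, or None if the
        action cannot finish safely before the predicted strike."""
        next_strike = self.predicted_next_strike()
        if next_strike is None:
            return None
        latest_finish = next_strike - self.safety_margin
        if latest_finish - now < turn_duration:
            return None  # not enough time this cycle: wait for the next gap
        return (now, now + turn_duration)


if __name__ == "__main__":
    agent = ImplicitTimingAgent()
    for t in (0.0, 0.92, 1.80, 2.71):      # strikes observed roughly every 0.9 s
        agent.observe_strike(t)
    print("Predicted next strike:", agent.predicted_next_strike())
    print("Planned turn action:", agent.plan_turn_action(now=2.75))
```

In a real setup the observations would come from perceiving the partner's preparatory motion rather than from a list of timestamps, but the scheduling logic of acting only within the predicted safe window would be analogous.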


Published in

HAI '22: Proceedings of the 10th International Conference on Human-Agent Interaction
December 2022
352 pages
ISBN: 9781450393232
DOI: 10.1145/3527188

          Copyright © 2022 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 5 December 2022


          Qualifiers

          • research-article
          • Research
          • Refereed limited

Acceptance Rates

Overall Acceptance Rate: 121 of 404 submissions, 30%
