ABSTRACT
Body language is an essential component of communication. The unspoken information it conveys during interpersonal interactions is an invaluable complement to speech alone, making the exchange smoother and easier to sustain. In contrast, existing approaches to human-robot collaboration and communication are far less intuitive, especially when it comes to robots transmitting information to their users. This is an issue that needs to be addressed if we are to keep using artificial intelligence and machines to extend our capabilities, or as collaboration partners.
The purpose of this study was to analyze the concept of a machine communicating with its user in a manner that closely resembles how humans communicate during collaborative work, using the Japanese rice cake (mochi) making process as an example. First, the elements necessary to achieve synchrony between the individuals participating in the rice cake making process were identified. These elements were then reproduced in a simulated environment and applied to a rice cake making situation, this time requiring collaboration between an individual and a robot. In this second step, the robot was made to mimic key implicit interpersonal communication elements identified during the first stage of the study. Results showed an improvement in performance, along with a better capacity to predict and adapt motion according to the other party's, suggesting that the system can help achieve a certain level of understanding between biological and artificial agents during collaborative work without any form of explicit communication.
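The abstract describes the coordination mechanism only at a high level. As an illustration of the kind of implicit, timing-based coordination involved in mochi pounding (where the pounder strikes rhythmically and the turner must reach into the mortar between strikes), the following is a minimal Python sketch, not the authors' implementation: the class name, the mean inter-onset-interval predictor, and the safety margin parameter are all hypothetical assumptions introduced here for illustration. The agent receives no explicit commands; it only observes the timestamps of the partner's strikes and predicts the next window in which the shared workspace is free.

```python
import statistics

class ImplicitSyncAgent:
    """Hypothetical sketch of implicit rhythm coordination: the agent
    observes the partner's strike times and predicts when the shared
    workspace (the mortar) will next be free, with no explicit signaling."""

    def __init__(self, safety_margin=0.05):
        self.partner_onsets = []            # observed strike times (seconds)
        self.safety_margin = safety_margin  # clearance kept before the next strike

    def observe(self, t):
        """Record one observed partner strike."""
        self.partner_onsets.append(t)

    def predicted_next_strike(self):
        """Predict the partner's next strike from the mean inter-onset interval."""
        if len(self.partner_onsets) < 2:
            return None  # not enough history to form a prediction
        intervals = [b - a for a, b in
                     zip(self.partner_onsets, self.partner_onsets[1:])]
        return self.partner_onsets[-1] + statistics.mean(intervals)

    def should_act(self, t_now, action_duration):
        """Act only if the action fits inside the predicted free window."""
        nxt = self.predicted_next_strike()
        if nxt is None:
            return False
        return t_now + action_duration + self.safety_margin < nxt


# Usage: the partner pounds roughly every 0.8 s; the agent decides whether
# a 0.4 s turning motion fits safely before the predicted next strike.
agent = ImplicitSyncAgent()
for t in [0.0, 0.82, 1.61, 2.40]:
    agent.observe(t)
print(agent.predicted_next_strike())                      # ~3.2 s
print(agent.should_act(t_now=2.5, action_duration=0.4))   # True
```

Under these assumptions, "understanding without explicit communication" reduces to each party maintaining a running prediction of the other's rhythm and fitting its own motion into the gaps, which is consistent with the improvement in motion prediction and adaptation reported in the results.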