ABSTRACT
Body language is an essential component of communication. The unspoken information it conveys during interpersonal interaction is an invaluable complement to speech alone, making the exchange smoother and more sustainable. By contrast, existing approaches to human-machine collaboration and communication are far less intuitive. This issue needs to be addressed if artificial intelligence and machines are to continue augmenting our cognitive, and even physical, capabilities.
In this study, we analyse the potential of an intuitive communication method between biological and artificial agents, in which machines learn to understand the subtle, unspoken, and involuntary cues found in human motion during interaction. Our work was divided into two stages: the first analysed whether a machine acting on these implicit cues produces the same positive effect as when they are manifested in interpersonal communication; the second evaluated whether a machine can identify the cues manifested in human motion and learn to associate them with the command intended by its user. The results were promising, showing improved task performance and reduced cognitive load on the user's side when relying on the proposed method, hinting at the potential of more intuitive, human-to-human-inspired communication methods in human-machine interaction.
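The second stage described above — associating cues observed in human motion with the command the user intends — can be illustrated with a minimal classification sketch. The abstract does not specify the model used, so the nearest-centroid rule, the feature vectors, and the command labels below are purely hypothetical placeholders, not the authors' method:

```python
import math

# Hypothetical sketch (not from the paper): map motion-cue feature
# vectors to intended commands with a nearest-centroid rule.

def fit_centroids(features, labels):
    """Average the feature vectors observed for each command label."""
    centroids = {}
    for label in set(labels):
        rows = [f for f, l in zip(features, labels) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict_command(centroids, cue):
    """Return the command whose centroid lies closest to the observed cue."""
    return min(centroids, key=lambda label: math.dist(centroids[label], cue))

# Synthetic training data: two invented cue intensities ("lean", "reach")
# paired with two invented commands.
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = ["hand_over", "hand_over", "hold_still", "hold_still"]

model = fit_centroids(features, labels)
print(predict_command(model, [0.85, 0.15]))  # -> hand_over
```

In a real system the feature vectors would come from motion sensors or trackers, and a richer model would likely be needed; the point here is only the shape of the mapping from implicit cue to explicit command.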
Ideomotor Principle as a Human-Robot Communication Method during Collaborative Work