ABSTRACT
Communication is integral to knowledge transfer in human-human interaction. To inform effective knowledge transfer in human-robot interaction, we conducted an observational study to better understand how people use gaze and other backchannel signals to ground their mutual understanding of task-oriented instruction during learning interactions. Our results highlight qualitative and quantitative differences in how people exhibit and respond to gaze, depending on motivation and instructional context. The findings of this study inform future research that seeks to improve the efficacy and naturalness of robots as they communicate with people as both learners and instructors.