Research Article
DOI: 10.1145/2666242.2666249

Comparison of Human-Human and Human-Robot Turn-Taking Behaviour in Multiparty Situated Interaction

Published: 16 November 2014

Abstract

In this paper, we present an experiment in which two human subjects solve a team-building task together with a robot. The setting requires the speakers to direct their attention partly towards objects on the table between them and partly towards each other in order to coordinate turn-taking. The symmetrical setup allows us to compare human-human and human-robot turn-taking behaviour in the same interactional setting. The analysis centres on the interlocutors' attention (as measured by head pose) and on the gap length between turns, broken down by the pragmatic function of the utterances.
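The full text is not reproduced on this page, but the two measures named in the abstract lend themselves to a brief illustration: mapping a head-pose estimate to a visual attention target, and computing inter-turn gap lengths grouped by the pragmatic function (dialogue act) of the turn-yielding utterance. The Python sketch below is a minimal, hypothetical rendering of these measures; the `Utterance` record, the yaw values for the attention targets, and the dialogue-act labels are illustrative assumptions, not the paper's actual annotation scheme.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

# Hypothetical head-yaw directions (degrees) for the attention targets in a
# three-party setup around a table; values are illustrative, not from the paper.
TARGETS = {"partner_left": -30.0, "partner_right": 30.0, "table": 0.0}

def attention_target(head_yaw: float) -> str:
    """Map a head-yaw estimate to the nearest attention target."""
    return min(TARGETS, key=lambda t: abs(TARGETS[t] - head_yaw))

@dataclass
class Utterance:
    speaker: str       # e.g. "human_A", "human_B", "robot"
    start: float       # seconds
    end: float         # seconds
    dialogue_act: str  # pragmatic function, e.g. "question", "statement"

def mean_gap_by_act(utterances):
    """Mean gap (negative = overlap) between consecutive turns of different
    speakers, grouped by the dialogue act of the turn being yielded."""
    gaps = defaultdict(list)
    ordered = sorted(utterances, key=lambda u: u.start)
    for prev, nxt in zip(ordered, ordered[1:]):
        if prev.speaker != nxt.speaker:  # a genuine speaker change
            gaps[prev.dialogue_act].append(nxt.start - prev.end)
    return {act: mean(vals) for act, vals in gaps.items()}

if __name__ == "__main__":
    print(attention_target(-24.0))  # -> "partner_left"
    demo = [
        Utterance("human_A", 0.0, 1.8, "question"),
        Utterance("robot",   2.1, 3.5, "statement"),
        Utterance("human_B", 3.9, 5.0, "statement"),
    ]
    print(mean_gap_by_act(demo))  # question: ~0.3 s, statement: ~0.4 s
```

Such a breakdown makes it possible to ask, for example, whether responses to questions come faster than responses to statements, and whether gaps differ when the next speaker is the robot rather than a human; on real data one would additionally filter out backchannels and same-speaker pauses.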



Published In

UM3I '14: Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions
November 2014
58 pages
ISBN: 9781450306522
DOI: 10.1145/2666242
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. multiparty human-robot dialogue
  2. situated dialogue
  3. turn taking

Qualifiers

  • Research-article

Conference

ICMI '14

Acceptance Rates

UM3I '14 paper acceptance rate: 8 of 8 submissions, 100%
Overall acceptance rate: 8 of 8 submissions, 100%


Cited By

  • (2021) Multi-Agent Voice Assistants: An Investigation of User Experience. In Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia (pp. 98-107). DOI: 10.1145/3490632.3490662. Online publication date: 5-Dec-2021.
  • (2020) Social Dynamics in Human-Robot Groups – Possible Consequences of Unequal Adaptation to Group Members Through Machine Learning in Human-Robot Groups. In Artificial Intelligence in HCI (pp. 396-411). DOI: 10.1007/978-3-030-50334-5_27. Online publication date: 19-Jul-2020.
  • (2017) Gaze and filled pause detection for smooth human-robot conversations. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids) (pp. 297-304). DOI: 10.1109/HUMANOIDS.2017.8246889. Online publication date: Nov-2017.
  • (2016) Are you talking to me? In Proceedings of the Fourth International Conference on Human Agent Interaction (pp. 43-50). DOI: 10.1145/2974804.2974823. Online publication date: 4-Oct-2016.
  • (2016) Developing an Interactive Gaze Algorithm for Android Robots. In Social Robotics (pp. 441-448). DOI: 10.1007/978-3-319-47437-3_43. Online publication date: 7-Oct-2016.
  • (2015) A Collaborative Human-Robot Game as a Test-bed for Modelling Multi-party, Situated Interaction. In Intelligent Virtual Agents (pp. 348-351). DOI: 10.1007/978-3-319-21996-7_37. Online publication date: 1-Aug-2015.
  • (2016) Analysis of multi-party human interaction towards a robot mediator. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 17-21). DOI: 10.1109/ROMAN.2016.7745085.
