DOI: 10.1145/2559636.2563681
Poster

Human-robot collaborative tutoring using multiparty multimodal spoken dialogue

Published: 03 March 2014

Abstract

In this paper, we describe a project that explores a novel experimental setup for building a spoken, multimodally rich, human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue-system platform for studying verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots capable of spoken dialogue. The dialogue task centers on two participants engaged in a dialogue to solve a card-ordering game. A tutor (robot) sits with the participants, helping them perform the task and organizing and balancing their interaction. Multimodal signals, captured and auto-synchronized by several audio-visual capture technologies (a microphone array, Kinects, and video cameras), are coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their states of attention, and their conversational engagement and verbal dominance, and on how these correlate with the verbal and visual feedback, turn-management, and conversation-regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also present detailed design methodologies for an affective, multimodally rich dialogue system that incrementally measures the attention state and dominance of each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that aims to maximize agreement and each participant's contribution to solving the task.
This project takes the first steps toward exploring the potential of multimodal dialogue systems for building interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
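To make the incremental measurement described above concrete, here is a minimal sketch, in Python, of how per-participant dominance and attention estimates could be accumulated from time-sliced sensor input and used to pick the next addressee. This is not the authors' implementation: the class names, the (speaking, gazing) observation format, and the "address the least dominant participant" rule are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class ParticipantState:
    """Running totals for one participant, updated incrementally."""
    speaking_time: float = 0.0   # seconds spent holding the floor
    gaze_on_task: float = 0.0    # seconds spent looking at the cards
    total_time: float = 1e-6     # elapsed time (tiny epsilon avoids /0)

    @property
    def dominance(self) -> float:
        """Fraction of elapsed time this participant was speaking."""
        return self.speaking_time / self.total_time

    @property
    def attention(self) -> float:
        """Fraction of elapsed time spent attending to the task."""
        return self.gaze_on_task / self.total_time


class BalancingTutor:
    """Tracks participants and decides whom the tutor should address."""

    def __init__(self, names):
        self.states = {name: ParticipantState() for name in names}

    def update(self, dt, observations):
        """Consume one time slice: {name: (is_speaking, gaze_on_task)}."""
        for name, (speaking, gazing) in observations.items():
            state = self.states[name]
            state.total_time += dt
            if speaking:
                state.speaking_time += dt
            if gazing:
                state.gaze_on_task += dt

    def next_addressee(self):
        """Balance the dialogue by addressing the least dominant speaker."""
        return min(self.states, key=lambda n: self.states[n].dominance)


# Example: after one 100 ms slice in which only "left" spoke,
# the tutor would turn toward "right" to balance the interaction.
tutor = BalancingTutor(["left", "right"])
tutor.update(0.1, {"left": (True, True), "right": (False, True)})
print(tutor.next_addressee())  # -> "right"
```

Running time ratios are the simplest possible proxy; the system described in the abstract would presumably fuse richer cues (personality, engagement, and visual attention from the Kinects and microphone array) and feed the decision into Furhat's gaze and turn-management behaviors.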


Cited By

  • (2024) "Design of a Multimodal Robot-Based Conversational Interface: A Case Study with FURHAT". In HCI International 2024 – Late Breaking Papers, pp. 299-311. DOI: 10.1007/978-3-031-76803-3_17. Online publication date: 29-Jun-2024.
  • (2016) "Modern Human-Robot Interaction in Smart Services and Value Co-creation". In Digital Human Modeling: Applications in Health, Safety, Ergonomics and Risk Management, pp. 399-408. DOI: 10.1007/978-3-319-40247-5_40. Online publication date: 23-Jun-2016.


    Published In

    HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction
    March 2014
    538 pages
    ISBN:9781450326582
    DOI:10.1145/2559636


    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. furhat robot
    2. human-robot collaboration
    3. human-robot interaction
    4. multiparty interaction
    5. spoken dialogue

    Qualifiers

    • Poster

    Conference

    HRI '14

    Acceptance Rates

    HRI '14 Paper Acceptance Rate: 32 of 132 submissions, 24%
    Overall Acceptance Rate: 268 of 1,124 submissions, 24%



    Article Metrics

    • Downloads (last 12 months): 18
    • Downloads (last 6 weeks): 3

    Reflects downloads up to 16 Feb 2025
