DOI: 10.1145/2522848.2532189

Designing effective multimodal behaviors for robots: a data-driven perspective

Published: 09 December 2013

Abstract

Robots need to use multimodal behaviors, including speech, gaze, and gestures, effectively to help their users achieve intended interaction goals, such as improved task performance. The proposed research concerns designing effective multimodal behaviors for robots that interact with humans, using a data-driven approach. In particular, probabilistic graphical models (PGMs) are used to model the interdependencies among multiple behavioral channels and to generate complexly contingent multimodal behaviors that facilitate human-robot interaction. This data-driven approach not only allows the investigation of hidden and temporal relationships among behavioral channels but also provides a holistic perspective on how multimodal behaviors as a whole might shape interaction outcomes. Three studies are proposed to evaluate the data-driven approach and to investigate the dynamics of multimodal behavior and interpersonal interaction. This research will contribute to the multimodal interaction community in theoretical, methodological, and practical aspects.
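To make the core idea concrete, the following is a minimal toy sketch (not the authors' actual model) of how a graphical model can couple behavioral channels: the robot's gaze at each time step is sampled conditioned on its concurrent speech state, so the generated channels are contingent on one another rather than independent. All state names and probabilities here are illustrative assumptions, not values from the paper.

```python
import random

random.seed(7)

# P(next speech state | current speech state) -- illustrative values
SPEECH_TRANSITIONS = {
    "speaking": {"speaking": 0.7, "pausing": 0.3},
    "pausing":  {"speaking": 0.5, "pausing": 0.5},
}

# P(gaze target | concurrent speech state) -- the cross-channel dependency
GAZE_GIVEN_SPEECH = {
    "speaking": {"at_listener": 0.4, "away": 0.6},
    "pausing":  {"at_listener": 0.8, "away": 0.2},
}

def sample(dist):
    """Draw one outcome from a {value: probability} distribution."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point round-off

def generate(steps, speech="speaking"):
    """Generate a coupled (speech, gaze) behavior sequence."""
    sequence = []
    for _ in range(steps):
        gaze = sample(GAZE_GIVEN_SPEECH[speech])  # gaze depends on speech
        sequence.append((speech, gaze))
        speech = sample(SPEECH_TRANSITIONS[speech])  # speech evolves over time
    return sequence

for speech_state, gaze_target in generate(6):
    print(f"{speech_state:9s} -> gaze {gaze_target}")
```

A full PGM of the kind proposed would learn such conditional distributions from human interaction data and could include hidden states and richer temporal structure; this sketch only shows the contingency mechanism in its simplest form.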


Published In

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013
630 pages
ISBN: 9781450321297
DOI: 10.1145/2522848

Publisher

Association for Computing Machinery
New York, NY, United States

    Author Tags

    1. data-driven
    2. human-robot interaction
    3. multimodal behavior
    4. probabilistic graphical models

    Qualifiers

    • Poster

Conference

ICMI '13

Acceptance Rates

ICMI '13 Paper Acceptance Rate: 49 of 133 submissions, 37%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%
