DOI: 10.1145/1452392.1452452

Smoothing human-robot speech interactions by using a blinking-light as subtle expression

Published: 20 October 2008

Abstract

Speech overlaps, undesired collisions of utterances between systems and users, harm smooth communication and degrade the usability of systems. We propose a method for smoothing speech interactions between a user and a robot, in which the robot produces subtle expressions with a blinking LED attached to its chest. Concretely, we show that blinking the LED from the end of the user's speech until the start of the robot's speech decreases the number of undesirable repetitions, which are responsible for speech overlaps, while increasing the number of desirable repetitions. In experiments, participants played a last-and-first word game with the robot. The results suggest that the blinking light can prevent speech overlaps between a user and a robot, speed up dialogues, and improve users' impressions of the robot.
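The timing scheme described in the abstract can be sketched as follows. This is a minimal simulation, not the authors' implementation: the LED driver and dialogue-processing delay are hypothetical stand-ins, and the key idea is only that the blink runs from the end of the user's utterance until the robot takes the turn.

```python
# Hedged sketch of the paper's timing scheme: blink an LED from the end of
# the user's utterance until the robot begins its reply, signaling that the
# robot has taken the turn. All interfaces here are simulated stand-ins.
import threading
import time

class BlinkingLight:
    """Toggles a (simulated) chest LED on a fixed interval while active."""
    def __init__(self, interval=0.05):
        self.interval = interval
        self._stop = threading.Event()
        self._thread = None
        self.toggles = 0  # count of simulated on/off transitions, for inspection

    def start(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self.toggles += 1          # stand-in for driving the real LED
            time.sleep(self.interval)

    def stop(self):
        self._stop.set()
        if self._thread:
            self._thread.join()

def respond(light, think_time, speak):
    """Blink from the end of user speech until the robot's reply starts."""
    light.start()            # user utterance just ended; signal "processing"
    time.sleep(think_time)   # dialogue-processing latency (simulated)
    light.stop()             # robot takes the turn; blinking ends
    speak()

light = BlinkingLight()
respond(light, think_time=0.3, speak=lambda: print("robot: apple"))
```

The point of the sketch is the ordering: the subtle expression fills exactly the silent gap in which users would otherwise repeat themselves and collide with the robot's delayed reply.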




    Published In

    ICMI '08: Proceedings of the 10th International Conference on Multimodal Interfaces
    October 2008, 322 pages
    ISBN: 9781605581989
    DOI: 10.1145/1452392


    Publisher

    Association for Computing Machinery, New York, NY, United States



    Author Tags

    1. human-robot interaction
    2. speech overlap
    3. subtle expression
    4. turn-taking

    Qualifiers

    • Poster

    Conference

    ICMI '08: International Conference on Multimodal Interfaces
    October 20-22, 2008
    Chania, Crete, Greece

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%

    Article Metrics

    • Downloads (last 12 months): 23
    • Downloads (last 6 weeks): 6
    Reflects downloads up to 15 Feb 2025.

    Cited By

    • (2024) Can Respiration Make Spoken Interactions Better? Proceedings of the 12th International Conference on Human-Agent Interaction, 10.1145/3687272.3690904, pp. 423-425. Online publication date: 24-Nov-2024.
    • (2024) Respiration-enhanced Human-Robot Communication. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 10.1145/3610978.3640707, pp. 813-816. Online publication date: 11-Mar-2024.
    • (2023) Video-based Respiratory Waveform Estimation in Dialogue: A Novel Task and Dataset for Human-Machine Interaction. Proceedings of the 25th International Conference on Multimodal Interaction, 10.1145/3577190.3614154, pp. 649-660. Online publication date: 9-Oct-2023.
    • (2023) Modeling Adaptive Expression of Robot Learning Engagement and Exploring Its Effects on Human Teachers. ACM Transactions on Computer-Human Interaction, 10.1145/3571813, 30(5):1-48. Online publication date: 23-Sep-2023.
    • (2023) HREyes: Design, Development, and Evaluation of a Novel Method for AUVs to Communicate Information and Gaze Direction. 2023 IEEE International Conference on Robotics and Automation (ICRA), 10.1109/ICRA48891.2023.10161179, pp. 7468-7475. Online publication date: 29-May-2023.
    • (2022) Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 10.1109/HRI53351.2022.9889469, pp. 1053-1058. Online publication date: 7-Mar-2022.
    • (2022) A survey on the design and evolution of social robots — Past, present and future. Robotics and Autonomous Systems, 10.1016/j.robot.2022.104193, 156(C). Online publication date: 1-Oct-2022.
    • (2022) Understanding AI-Generated Personal Narratives as Design Material for Socially Engaging Things. [ ] With Design: Reinventing Design Modes, 10.1007/978-981-19-4472-7_75, pp. 1135-1152. Online publication date: 6-Nov-2022.
    • (2021) Modeling of Pre-Touch Reaction Distance for Faces in a Virtual Environment. Journal of Information Processing, 10.2197/ipsjjip.29.657, 29:657-666. Online publication date: 2021.
    • (2020) Gaze-Height and Speech-Timing Effects on Feeling Robot-Initiated Touches. Journal of Robotics and Mechatronics, 10.20965/jrm.2020.p0068, 32(1):68-75. Online publication date: 20-Feb-2020.
