DOI: 10.1145/3371382.3378302

Does the Appearance of an Agent Affect How We Perceive his/her Voice?: Audio-visual Predictive Processes in Human-robot Interaction

Published: 01 April 2020

ABSTRACT

Robots are increasingly becoming part of our lives, and how we perceive and predict their behavior is an important issue in HRI. To address this issue, we adapted a well-established prediction paradigm from cognitive science for HRI. Participants listened to a greeting phrase that sounded either human-like or robotic and indicated, as quickly as possible with a key press, whether the voice belonged to a human or a robot. Each voice was preceded by an image of a human or a robot (either a human-like robot or a mechanical robot) to cue the participant about the upcoming voice; the image was either congruent or incongruent with the sound stimulus. We found that people reacted faster to robotic sounds in congruent trials than in incongruent trials, suggesting that predictive processes play a role in robot perception. In sum, our study provides insights into how robots should be designed, and suggests that robots that do not violate our expectations may allow for more efficient interaction between humans and robots.
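To make the paradigm concrete, the sketch below simulates the fully crossed cue-voice design and the congruency contrast on reaction times. It is a minimal illustration in Python, not the authors' experiment code: the condition labels, the 2 x 2 trial structure repeated over blocks, and the simulated response times (with a notional ~50 ms congruency advantage) are all assumptions for demonstration; real latencies would come from participants' key presses.

    import itertools
    import random
    from statistics import mean

    # Factors of the paradigm: a visual cue (human or robot image) followed
    # by a greeting voice (human-like or robotic). A trial is congruent when
    # cue and voice belong to the same agent category.
    CUES = ["human_image", "robot_image"]
    VOICES = ["human_voice", "robot_voice"]

    def build_trials(n_repeats=10):
        """Fully crossed cue x voice design, repeated and shuffled."""
        trials = []
        for _ in range(n_repeats):
            for cue, voice in itertools.product(CUES, VOICES):
                trials.append({
                    "cue": cue,
                    "voice": voice,
                    "congruent": cue.split("_")[0] == voice.split("_")[0],
                })
        random.shuffle(trials)
        return trials

    def simulate_rt(trial):
        """Stand-in for a measured key-press latency in milliseconds.
        Congruent trials are drawn ~50 ms faster to mimic the predicted
        priming effect; these numbers are illustrative only."""
        base = 550 if trial["congruent"] else 600
        return random.gauss(base, 40)

    trials = build_trials()
    for t in trials:
        t["rt"] = simulate_rt(t)

    for is_congruent, label in [(True, "congruent"), (False, "incongruent")]:
        rts = [t["rt"] for t in trials if t["congruent"] == is_congruent]
        print(f"mean RT, {label} trials: {mean(rts):.0f} ms")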


    Published in

      HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
      March 2020, 702 pages
      ISBN: 9781450370578
      DOI: 10.1145/3371382

      Copyright © 2020 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • abstract

      Acceptance Rates

      Overall Acceptance Rate: 192 of 519 submissions, 37%
