DOI: 10.1145/1877826.1877843

research-article

Interpreting non-linguistic utterances by robots: studying the influence of physical appearance

Published: 29 October 2010

ABSTRACT

This paper presents a survey in which participants were asked to interpret non-linguistic utterances made by two different types of robot: a humanoid robot and a pet-like robot. The study set out to answer whether the interpretation of emotions differed across robot types, participant parameters, and classes of utterance. We found that both male and female subjects were consistently more coherent in interpreting human utterances than animal utterances, and animal utterances than technological utterances. This held true for the emotional and intentional interpretations, as well as for the perceived appropriateness of a particular utterance for a particular type of robot. We also found that males and females frequently differed significantly in their emotional and intentional interpretations of utterances. Finally, our results indicate that a robot's morphology influences people's judgment of which class of utterance is deemed appropriate for a particular type of robot.


Published in

AFFINE '10: Proceedings of the 3rd international workshop on Affective interaction in natural environments
October 2010, 106 pages
ISBN: 9781450301701
DOI: 10.1145/1877826
        Copyright © 2010 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
