ABSTRACT
It is expected that in the near future people will have daily natural language interactions with robots. However, we know very little about how users feel they should talk to robots, especially users who have never before interacted with one. The present study evaluated first-time users' expectations about a robot's cognitive and communicative capabilities by comparing robot-directed speech to the way in which participants talked to a human partner. The results indicate that participants spoke more loudly, raised their pitch, and hyperarticulated their messages when they spoke to the robot, suggesting that they viewed the robot as having low linguistic competence. However, the content of participants' utterances showed that speakers often assumed the robot had humanlike cognitive capabilities. Taken together, the results suggest that while first-time users were concerned with the fragility of the robot's speech recognition system, they believed that the robot had extremely strong information processing capabilities.
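The acoustic comparison described above rests on per-utterance measures of pitch (fundamental frequency, F0) and loudness, which the study extracted with Praat. As a minimal, self-contained sketch of the kind of measurement involved, the following code estimates mean F0 with a simple autocorrelation method and loudness as an RMS level in dB. The synthetic "human-directed" and "robot-directed" signals, the frequency values, and the function names are illustrative assumptions, not data or code from the study:

```python
import numpy as np

def estimate_f0_autocorr(signal, sr, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via autocorrelation,
    a rough stand-in for a Praat-style pitch tracker."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)  # shortest plausible pitch period
    lag_max = int(sr / fmin)  # longest plausible pitch period
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / lag

def rms_db(signal, ref=1.0):
    """Root-mean-square level in dB: a simple loudness proxy."""
    return 20.0 * np.log10(np.sqrt(np.mean(signal ** 2)) / ref)

# Hypothetical signals: robot-directed speech with raised pitch and
# greater amplitude, mirroring the pattern the study reports.
sr = 16000
t = np.arange(int(0.2 * sr)) / sr
human_directed = 0.1 * np.sin(2 * np.pi * 180 * t)
robot_directed = 0.3 * np.sin(2 * np.pi * 240 * t)

print(estimate_f0_autocorr(human_directed, sr))              # ~180 Hz
print(estimate_f0_autocorr(robot_directed, sr))              # ~240 Hz
print(rms_db(robot_directed) - rms_db(human_directed))       # ~+9.5 dB
```

Real analyses would operate on recorded utterances, track F0 frame by frame, and handle unvoiced segments; this sketch only illustrates the two dimensions (pitch and intensity) on which robot- and human-directed speech differed.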
Index Terms
- Robot-directed speech: using language to assess first-time users' conceptualizations of a robot