DOI: 10.1145/1040830.1040872

Animating an interactive conversational character for an educational game system

Published: 10 January 2005

Abstract

Within the framework of the NICE project (Natural Interactive Communication for Edutainment) [2], we have been developing an educational and entertaining computer game that allows children and teenagers to interact with a conversational character impersonating the fairy tale writer H.C. Andersen (HCA). The rationale behind our system is to let kids learn about HCA's life, fairy tales, and historical period while playing and having fun. We report on the character's generation and realization of both verbal and 3D graphical non-verbal output behaviors, such as speech, body gestures, and facial expressions, which together convey the impression of a human-like agent with relevant domain knowledge and a distinct personality. With the educational goal in the foreground, coherent and synchronized output presentation becomes mandatory, as any inconsistency may undermine the user's learning process rather than reinforce it.
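To make the synchronization requirement concrete, the following is a minimal, hypothetical sketch (not the paper's actual representation or toolkit): a multimodal output plan that aligns gesture and facial-expression cues to word positions in an utterance and merges speech and animation events into a single time-ordered schedule.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cue:
    """A non-verbal behavior cue anchored to a point in the speech stream."""
    kind: str        # e.g. "gesture" or "facial"
    name: str        # e.g. "beat", "smile"
    word_index: int  # word in the utterance the cue is aligned with

@dataclass
class OutputPlan:
    """One synchronized multimodal turn: speech plus aligned non-verbal cues."""
    words: List[str]
    cues: List[Cue] = field(default_factory=list)

    def schedule(self, word_duration_s: float = 0.4) -> List[Tuple[float, str, str]]:
        """Return (start_time_s, channel, item) events. A fixed per-word
        duration is assumed here; a real system would use TTS timestamps."""
        events = [(i * word_duration_s, "speech", w) for i, w in enumerate(self.words)]
        events += [(c.word_index * word_duration_s, c.kind, c.name) for c in self.cues]
        return sorted(events, key=lambda e: e[0])

# Example: the character greets the user, smiling on the first word
# and gesturing on the word "study".
plan = OutputPlan(
    words="Welcome to my study in Copenhagen".split(),
    cues=[Cue("facial", "smile", 0), Cue("gesture", "point_around_room", 3)],
)
for t, channel, item in plan.schedule():
    print(f"{t:4.1f}s  {channel:7s}  {item}")
```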

References

[1] http://www.gamestudies.org/
[2] http://www.niceproject.com
[3] http://simcity.ea.com/
[4] http://www.dfki.de/crosstalk/
[5] http://www.speech.kth.se/broker
[6] http://www.w3.org/AudioVideo/Activity.html
[7] http://www.vhml.org/workshops/AAMAS/papers.html
[8] http://www.ask.com
[9] André, E., et al. The PPP persona: a multipurpose animated presentation agent, Proceedings of the ACM International Conference on Advanced Visual Interfaces (AVI), 245--247, 1996.
[10] Bernsen, N.O., et al. First Prototype of Conversational H.C. Andersen, Proceedings of the ACM International Conference on Advanced Visual Interfaces (AVI), 458--461, 2004.
[11] Bernsen, N.O., and Dybkjær, L. Domain-Oriented Conversation with H.C. Andersen, Proceedings of the Workshop on Affective Dialogue Systems (ADS), Lecture Notes in Artificial Intelligence 3086, Springer Verlag, 305--308, 2004.
[12] Bernsen, N.O., and Dybkjær, L. Evaluation of Spoken Multimodal Conversation, Proceedings of the 6th International Conference on Multimodal Interfaces (ICMI), Penn State University (PA), USA, 38--45, 2004.
[13] Bruce, L., and Bodnar, C. Identity, TV and Papa Smurf: As children of the 1980s, we have been shaped by what we watched, Varsity Online, The University of Toronto, 120(34), February 1999.
[14] Caillois, R. Man, Play and Games, The Free Press, 1961.
[15] Cassell, J., et al. Embodiment in conversational interfaces: Rea, Proceedings of CHI, 520--527, 1999.
[16] Cassell, J., and Stone, M. Living Hand to Mouth: Psychological Theories about Speech and Gesture in Interactive Dialogue Systems, Proceedings of the AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems, 34--42, 1999.
[17] Cassell, J., et al. (eds.), Embodied Conversational Agents, MIT Press, 2000.
[18] Cassell, J., et al. BEAT: the Behavior Expression Animation Toolkit, Proceedings of SIGGRAPH, 477--486, 2001.
[19] Cohen, P.R., et al. QuickSet: Multimodal interaction for distributed applications, Proceedings of the International Multimedia Conference, ACM Press, 31--40, 1997.
[20] Corradini, A., and Cohen, P.R. On the Relationships among Speech, Gestures, and Object Manipulation in Virtual Environments: Initial Evidence, Proceedings of the International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialogue Systems, 52--61, 2002.
[21] Corradini, A., et al. Towards Believable Behavior Generation for Embodied Conversational Agents, Proceedings of the International Conference on Computational Science (ICCS), Lecture Notes in Artificial Intelligence 3038, Springer Verlag, 913--918, 2004.
[22] Fiske, S.T., et al. Social Cognition, McGraw Hill, 1991.
[23] Huizinga, J. Homo Ludens: A Study of the Play-Element in Culture, Beacon Press, 1971.
[24] Johnson, W.L., et al. Pedagogical Agents on the Web, Proceedings of the International Conference on Autonomous Agents, 283--290, 1999.
[25] Johnson, W.L., et al. Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments, International Journal of Artificial Intelligence in Education 11, 47--78, 2000.
[26] Johnston, O., and Thomas, F. The Illusion of Life, Walt Disney Productions, 1981.
[27] Katz, B. Annotating the World Wide Web using natural language, Proceedings of the 12th International Conference on Information and Knowledge Management, 2003.
[28] Loyall, A.B. Believable Agents: Building Interactive Personalities, PhD thesis, Technical Report CMU-CS-97-126, Carnegie Mellon University, 1997.
[29] Marriott, A., et al. VHML - Directing a Talking Head, Proceedings of the International Conference on Active Media Technology, 90--100, 2001.
[30] Massaro, D.W., et al. Development and Evaluation of a Computer-Animated Tutor for Language and Vocabulary Learning, Proceedings of the 15th International Congress of Phonetic Sciences, 2003.
[31] Nass, C., et al. Truth is beauty: Researching embodied conversational agents, In: Cassell, J., et al. (eds.), Embodied Conversational Agents, 374--402, 2000.
[32] Okonkwo, C., and Vassileva, J. Affective Pedagogical Agents and User Persuasion, In: Stephanidis, C. (ed.), Proceedings of Universal Access in Human-Computer Interaction, 2001.
[33] Oviatt, S.L. Multimodal interfaces, In: Jacko, J., et al. (eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 286--304, 2003.
[34] Pazzani, M., and Billsus, D. Adaptive Web Site Agents, Proceedings of the 3rd International Conference on Autonomous Agents, 1999.
[35] Pelachaud, C., et al. Embodied Contextual Agent in Information Delivering Application, Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems, 2002.
[36] Perlin, K., and Goldberg, A. Improv: A System for Scripting Interactive Actors in Virtual Worlds, Computer Graphics, 29(3):1--11, 1996.
[37] Picard, R. Affective Computing, MIT Press, 1997.
[38] Prensky, M. Digital Game-Based Learning, McGraw Hill, 2001.
[39] Reeves, B., and Nass, C. The Media Equation: How People Treat Computers, Televisions and New Media Like Real People and Places, Cambridge University Press, 1996.
[40] Waibel, A., et al. Multimodal Interfaces, Artificial Intelligence Review, 10(3-4):299--319, 1996.
[41] Zheng, Z. AnswerBus Question Answering System, Proceedings of the Human Language Technology Conference, 2002.


Reviews

Stewart Mark Godwin

This paper focuses on the continuing development of computer-generated characters and the human-computer interface (HCI). The rationale for developing this system is to provide a learning environment for children based on game play. It is widely acknowledged that playing and gaming can contribute to educational objectives and increase the involvement of the individual. In the research described in this paper, the computer-generated character produces verbal and three-dimensional (3D) graphical nonverbal output in response to user inputs that are ambiguous, asynchronous, and inaccurate. This is a significant departure from traditional HCI based on windows, icons, menus, and pointers (WIMP). The work differs from previous work in HCI in that it extends the user interface beyond traditional task-oriented activities: the computer character in this study responds to nonverbal input and initiates output without user stimulation. Furthermore, the character uses a conversation inventory list, or history, together with Web-based searches to prevent output repetition and to supply realistic answers to complex nondomain questions; that is, it attempts to answer questions that fall outside the strict context of the program parameters. The results from this work highlight a significant extension of HCI, one that accentuates nonverbal input and output relating closely to natural human communication through speech and gestures. While traditional HCI focuses on providing an interface suited to computer requirements, this paper demonstrates an interface that is more suited to human behaviors, with computer output that imitates easily recognized human behavior. Online Computing Reviews Service
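To illustrate the repetition-avoidance strategy the reviewer describes, here is a minimal Python sketch; the class, the fixed-size history, and the web-search fallback are hypothetical illustrations of the idea, not the system's actual implementation.

```python
from collections import deque
from typing import Callable, List, Optional

class ResponseSelector:
    """Pick a reply that has not been used recently; fall back to an
    external search when no fresh in-domain candidate remains."""

    def __init__(self, history_size: int = 20,
                 web_fallback: Optional[Callable[[str], str]] = None):
        self.history = deque(maxlen=history_size)  # recently spoken replies
        self.web_fallback = web_fallback           # hypothetical search hook

    def select(self, question: str, candidates: List[str]) -> str:
        # Prefer an in-domain candidate the character has not said recently.
        for reply in candidates:
            if reply not in self.history:
                self.history.append(reply)
                return reply
        # Out of fresh in-domain material: try the (hypothetical) web fallback.
        if self.web_fallback is not None:
            reply = self.web_fallback(question)
            self.history.append(reply)
            return reply
        # Last resort: acknowledge rather than repeat verbatim.
        return "I believe I have already told you about that."

# Usage with a stubbed-out search function.
selector = ResponseSelector(web_fallback=lambda q: f"About '{q}', I have heard that ...")
print(selector.select("Tell me about your childhood",
                      ["I grew up in Odense.", "My father was a shoemaker."]))
```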


Published In

IUI '05: Proceedings of the 10th international conference on Intelligent user interfaces
January 2005
344 pages
ISBN:1581138946
DOI:10.1145/1040830
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. edutainment
  2. embodied conversational agent
  3. multimodal output
  4. user interface

Qualifiers

  • Article

Conference

IUI05
IUI05: Tenth International Conference on Intelligent User Interfaces
January 10 - 13, 2005
San Diego, California, USA

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%

Article Metrics

  • Downloads (last 12 months): 9
  • Downloads (last 6 weeks): 1
Reflects downloads up to 15 Feb 2025


Cited By

  • (2020) A Graphical Tool for the Creation of Behaviors in Virtual Worlds. Natural Language Processing, DOI 10.4018/978-1-7998-0951-7.ch028, 561-583. Online publication date: 2020.
  • (2016) A Graphical Tool for the Creation of Behaviors in Virtual Worlds. Integrating Cognitive Architectures into Virtual Character Design, DOI 10.4018/978-1-5225-0454-2.ch003, 65-93. Online publication date: 2016.
  • (2015) Second Mind. Proceedings of the XVI International Conference on Human Computer Interaction, DOI 10.1145/2829875.2829908, 1-6. Online publication date: 7-Sep-2015.
  • (2015) From Small Seeds Grow Fruitful Trees: How the PHelpS Peer Help System Stimulated a Diverse and Innovative Research Agenda over 15 Years. International Journal of Artificial Intelligence in Education, 26(1), 431-447, DOI 10.1007/s40593-015-0073-9. Online publication date: 4-Nov-2015.
  • (2015) Multimodal interaction with virtual worlds XMMVR: eXtensible language for MultiModal interaction with virtual reality worlds. Journal on Multimodal User Interfaces, 9(3), 153-172, DOI 10.1007/s12193-015-0176-5. Online publication date: 10-Jun-2015.
  • (2014) Towards the Use of Dialog Systems to Facilitate Inclusive Education. Assistive Technologies, DOI 10.4018/978-1-4666-4422-9.ch068, 1292-1312. Online publication date: 2014.
  • (2014) Embodied Conversational Agents in Interactive Applications for Children with Special Educational Needs. Assistive Technologies, DOI 10.4018/978-1-4666-4422-9.ch041, 811-840. Online publication date: 2014.
  • (2014) An Approach to Behavior Authoring for Non-Playing Characters in Digital Games. Proceedings of the 2014 Multimedia, Interaction, Design and Innovation International Conference, DOI 10.1145/2643572.2643585, 1-7. Online publication date: 24-Jun-2014.
  • (2013) An Architecture to Develop Multimodal Educative Applications with Chatbots. International Journal of Advanced Robotic Systems, 10(3), DOI 10.5772/55791. Online publication date: 1-Jan-2013.
  • (2013) Embodied Conversational Agents in Interactive Applications for Children with Special Educational Needs. Technologies for Inclusive Education, DOI 10.4018/978-1-4666-2530-3.ch004, 59-88. Online publication date: 2013.
