DOI: 10.1145/1088463.1088500

XfaceEd: authoring tool for embodied conversational agents

Published: 04 October 2005

Abstract

This paper presents XfaceEd, our open source, platform-independent tool for authoring 3D embodied conversational agents (ECAs). Following the MPEG-4 Facial Animation (FA) standard, XfaceEd provides an easy-to-use interface for generating MPEG-4-ready ECAs from static 3D models. Users can set MPEG-4 Facial Definition Points (FDPs) and Facial Animation Parameter Units (FAPU), define the zone of influence of each feature point, and specify how that influence is propagated to the neighboring vertices. As an alternative to MPEG-4, users can specify morph targets in categories such as visemes, emotions, and expressions to drive facial animation by keyframe interpolation; morph targets from different categories are blended to create more lifelike behaviour. Results can be previewed and parameters tweaked in real time within the application for fine tuning, and changes take effect immediately, which ensures rapid production. The final output is a configuration file in XML format that can be interpreted by XfacePlayer or other applications, enabling easy authoring of embodied conversational agents for multimodal environments.
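The abstract does not give the deformation formula behind a feature point's zone of influence, but the idea can be sketched: when an animation parameter displaces an FDP, nearby vertices follow it with a weight that decays to zero at the edge of the zone. The following is a minimal sketch assuming a raised-cosine falloff; the function and parameter names are illustrative, not XfaceEd's actual API.

```python
import numpy as np

def propagate_fdp_displacement(vertices, fdp_index, displacement, radius):
    """Move a feature point and the vertices in its zone of influence.

    vertices:     (N, 3) array of mesh vertex positions
    fdp_index:    index of the feature point (FDP) vertex
    displacement: (3,) offset applied to the FDP, e.g. driven by a FAP value
    radius:       extent of the zone of influence, in mesh units

    Assumed raised-cosine falloff: weight 1 at the FDP itself,
    decaying smoothly to 0 at distance `radius`.
    """
    center = vertices[fdp_index]
    distances = np.linalg.norm(vertices - center, axis=1)
    weights = np.where(
        distances < radius,
        0.5 * (1.0 + np.cos(np.pi * distances / radius)),
        0.0,
    )
    return vertices + weights[:, None] * displacement
```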
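The morph-target path can be illustrated the same way: targets from independent categories (visemes, emotions, expressions) contribute weighted offsets from the neutral face, and keyframe interpolation animates those weights over time. This is a sketch assuming additive linear blending; the names are ours.

```python
import numpy as np

def blend_targets(neutral, targets, weights):
    """Blend morph targets from any category onto the neutral face.

    Assumed additive blending: each active target contributes a weighted
    offset from neutral, so e.g. a viseme and an emotion combine rather
    than overwrite each other.
    """
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

def interpolate_keyframes(neutral, targets, key_a, key_b, t):
    """Linearly interpolate two keyframe weight sets at time t in [0, 1]."""
    names = set(key_a) | set(key_b)
    weights = {n: (1.0 - t) * key_a.get(n, 0.0) + t * key_b.get(n, 0.0)
               for n in names}
    return blend_targets(neutral, targets, weights)

# Example: halfway between a pure viseme frame and the same viseme with joy.
# frame = interpolate_keyframes(neutral, targets,
#                               {"viseme_aa": 1.0},
#                               {"viseme_aa": 1.0, "emotion_joy": 0.6}, 0.5)
```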
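Finally, the XML output stage. The actual schema consumed by XfacePlayer is not described in this abstract, so every element and attribute name below is a hypothetical placeholder; the sketch only shows the general shape of such a configuration writer.

```python
import xml.etree.ElementTree as ET

def write_config(path, fdps, morph_targets):
    """Write an authoring result as an XML configuration file.

    Hypothetical schema for illustration only; the real format is
    defined by the Xface toolkit, not reconstructed here.
    """
    root = ET.Element("face")
    fdp_root = ET.SubElement(root, "fdps")
    for name, (vertex_index, radius) in fdps.items():
        ET.SubElement(fdp_root, "fdp", name=name,
                      vertex=str(vertex_index), influence=str(radius))
    morph_root = ET.SubElement(root, "morph-targets")
    for category, targets in morph_targets.items():
        cat_el = ET.SubElement(morph_root, "category", name=category)
        for target_name, mesh_file in targets.items():
            ET.SubElement(cat_el, "target", name=target_name, mesh=mesh_file)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

# Example: two feature points and a viseme/emotion split, written to disk.
write_config("agent.xml",
             fdps={"left_eyebrow": (412, 0.08), "jaw": (97, 0.15)},
             morph_targets={"visemes": {"aa": "vis_aa.obj"},
                            "emotions": {"joy": "emo_joy.obj"}})
```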




Published In

ICMI '05: Proceedings of the 7th international conference on Multimodal interfaces
October 2005
344 pages
ISBN: 1595930280
DOI: 10.1145/1088463


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D facial animation
  2. MPEG-4
  3. embodied conversational agents
  4. open source
  5. talking heads

Qualifiers

  • Article

Conference

ICMI05

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%


Cited By

  • (2014) Modeling facial signs of appraisal during interaction. Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, 765-772. DOI: 10.5555/2615731.2615855. Published: 5 May 2014.
  • (2013) A comprehensive system for facial animation of generic 3D head models driven by speech. EURASIP Journal on Audio, Speech, and Music Processing, 2013:1, 1-18. DOI: 10.1186/1687-4722-2013-5. Published: 1 December 2013.
  • (2012) On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces. Speech, Image, and Language Processing for Human Computer Interaction, 262-291. DOI: 10.4018/978-1-4666-0954-9.ch013. Published: 2012.
  • (2012) An avatar acceptance study for home automation scenarios. Proceedings of the 7th International Conference on Articulated Motion and Deformable Objects, 230-238. DOI: 10.1007/978-3-642-31567-1_23. Published: 11 July 2012.
  • (2011) Enhancement of Conversational Agents by Means of Multimodal Interaction. Conversational Agents and Natural Language Interaction, 223-252. DOI: 10.4018/978-1-60960-617-6.ch010. Published: 2011.
  • (2011) Realistic Tree-Dimensional Facial Expression Synthesis. Proceedings of the 2011 Third International Conference on Intelligent Human-Machine Systems and Cybernetics, Volume 02, 131-134. DOI: 10.1109/IHMSC.2011.102. Published: 26 August 2011.
  • (2011) Animation of generic 3D head models driven by speech. Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, 1-6. DOI: 10.1109/ICME.2011.6011861. Published: 11 July 2011.
  • (2010) A computational model of emotion for 3D talking heads. 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems, 536-539. DOI: 10.1109/ICICISYS.2010.5658438. Published: October 2010.
  • (2006) A Wizard-of-Oz platform for embodied conversational agents. Computer Animation and Virtual Worlds, 17:3-4, 249-257. DOI: 10.1002/cav.129. Published: 14 June 2006.
