Abstract
The virtual world of Second Life™ does not offer support for complex facial animations, such as those needed for an intelligent virtual agent to lip sync to audio clips. However, it is possible to access a limited range of default facial animations through the native scripting language, LSL. Our solution to produce lip sync in this environment is to rapidly trigger and stop these default animations in custom sequences to produce the illusion that the intelligent virtual agent is speaking the phrases being heard.
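The triggering approach described above can be sketched in LSL. The abstract does not give the authors' actual sequences or timings, so the animation names below are Second Life's built-in facial expressions, while the specific sequence and the 0.1-second step are illustrative assumptions:

```
// Sketch only: rapidly start/stop built-in facial animations to
// approximate mouth shapes (visemes) while a phrase plays.
// The sequence and timing values are hypothetical, not the
// authors' published data.
list gSequence = ["express_open_mouth", "express_smile",
                  "express_open_mouth", "express_toothsmile"];
integer gIndex = 0;

default
{
    state_entry()
    {
        // Animations play on the avatar, so the attachment must
        // first request permission from its wearer.
        llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
            llSetTimerEvent(0.1);   // step the sequence ~10x per second
    }

    timer()
    {
        // Stop the current expression before starting the next, so
        // each default animation reads as a brief, discrete mouth shape.
        llStopAnimation(llList2String(gSequence, gIndex));
        gIndex = (gIndex + 1) % llGetListLength(gSequence);
        llStartAnimation(llList2String(gSequence, gIndex));
    }
}
```

In practice such a script would be driven by viseme data derived from the audio clip rather than a fixed list; the fixed list here only demonstrates the rapid start/stop mechanism the abstract describes.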
© 2009 Springer-Verlag Berlin Heidelberg
Chance, E., Morie, J. (2009). Method for Custom Facial Animation and Lip-Sync in an Unsupported Environment, Second Life™. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H.H. (eds) Intelligent Virtual Agents. IVA 2009. Lecture Notes in Computer Science, vol 5773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04380-2_89
DOI: https://doi.org/10.1007/978-3-642-04380-2_89
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04379-6
Online ISBN: 978-3-642-04380-2
eBook Packages: Computer Science (R0)