Abstract
This paper describes an efficient method for creating an individual face for animation from several possible inputs, and shows how the result can be used for realistic talking-head communication in a virtual world. We present a method to reconstruct a 3D facial model from two orthogonal pictures taken from the front and side views. The method is based on extracting facial features in a semiautomatic way and deforming a generic model accordingly. Texture mapping based on cylindrical projection is applied, using an image composed from the two views. The reconstructed head can be animated immediately and can speak given text, which is transformed into corresponding phonemes and visemes. We also propose a system for individualized face-to-face communication over a network using MPEG-4.
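The cylindrical-projection texture mapping mentioned in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name `cylindrical_uv`, the choice of the vertical (y) axis as the cylinder axis, and the normalization to [0, 1] are all hypothetical.

```python
import math

def cylindrical_uv(vertices):
    """Project 3D head vertices onto a cylinder around the vertical (y) axis,
    returning (u, v) texture coordinates in [0, 1].

    u comes from the angle around the axis; v from normalized height.
    """
    ys = [y for _, y, _ in vertices]
    y_min, y_max = min(ys), max(ys)
    uvs = []
    for x, y, z in vertices:
        theta = math.atan2(z, x)                # angle around the y axis, in (-pi, pi]
        u = (theta + math.pi) / (2 * math.pi)   # normalize angle to [0, 1]
        v = (y - y_min) / (y_max - y_min) if y_max > y_min else 0.0
        uvs.append((u, v))
    return uvs

# Example: three vertices around the head
uvs = cylindrical_uv([(1, 0, 0), (0, 0, 1), (-1, 1, 0)])
```

Each mesh vertex thus receives a fixed (u, v) address in the composed front-and-side image, so the texture follows the face when the generic model is deformed.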
© 1998 Springer-Verlag Berlin Heidelberg
Lee, W.S., Lee, E., Magnenat Thalmann, N. (1998). Real Face Communication in a Virtual World. In: Heudin, J.C. (ed.) Virtual Worlds. VW 1998. Lecture Notes in Computer Science, vol. 1434. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-68686-X_1
DOI: https://doi.org/10.1007/3-540-68686-X_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64780-5
Online ISBN: 978-3-540-68686-6