
Real Face Communication in a Virtual World

  • Conference paper
  • In: Virtual Worlds (VW 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1434)


Abstract

This paper describes an efficient method for creating an individual face for animation from several possible inputs, and shows how the result supports realistic talking-head communication in a virtual world. We present a method to reconstruct a 3D facial model from two orthogonal pictures taken from the front and side views. The method extracts facial features semiautomatically and deforms a generic model accordingly. Texture mapping based on cylindrical projection is applied, using an image composed from the two views. The reconstructed head can be animated immediately and can speak given text, which is transformed into the corresponding phonemes and visemes. We also propose a system for individualized face-to-face communication over a network using MPEG-4.
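The cylindrical texture projection mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the head-centered coordinate convention (vertical y-axis as the cylinder axis) and the function name are assumptions.

```python
import math

def cylindrical_uv(vertex, y_min, y_max):
    """Map a 3D head vertex (x, y, z) to (u, v) texture coordinates
    by projecting onto a cylinder whose axis is the vertical y-axis.

    Assumes the head model is centered on the y-axis; y_min and y_max
    bound the model vertically so that v spans [0, 1].
    """
    x, y, z = vertex
    # Angle around the vertical axis gives the horizontal coordinate.
    theta = math.atan2(z, x)                 # range (-pi, pi]
    u = (theta + math.pi) / (2.0 * math.pi)  # normalized to [0, 1)
    # Height along the axis gives the vertical coordinate.
    v = (y - y_min) / (y_max - y_min)
    return u, v
```

With such a mapping, the image composed from the front and side photographs can be sampled as a single cylindrical texture, so every vertex of the deformed generic model receives a consistent color.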




Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lee, WS., Lee, E., Thalmann, N.M. (1998). Real Face Communication in a Virtual World. In: Heudin, JC. (eds) Virtual Worlds. VW 1998. Lecture Notes in Computer Science, vol 1434. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-68686-X_1

  • DOI: https://doi.org/10.1007/3-540-68686-X_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64780-5

  • Online ISBN: 978-3-540-68686-6
