
Extracting Facial Motion Parameters by Tracking Feature Points

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1554)

Abstract

A method for extracting facial motion parameters is proposed. The method consists of three steps. First, the feature points of the face, selected automatically in the first frame, are tracked in successive frames. Then, the feature points are connected with Delaunay triangulation so that the motion of each point relative to the surrounding points can be computed. Finally, muscle motions are estimated based on motions of the feature points placed near each muscle. The experiments showed that the proposed method can extract facial motion parameters accurately. In addition, the facial motion parameters are used to render a facial animation sequence.
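The second step of the pipeline, computing each feature point's motion relative to its Delaunay neighbours, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it uses `scipy.spatial.Delaunay` to triangulate the first-frame points and subtracts the mean displacement of each point's neighbours, which factors out global (rigid head) motion. The function name `relative_motions` and the neighbour-averaging rule are illustrative choices.

```python
import numpy as np
from scipy.spatial import Delaunay


def relative_motions(pts_prev, pts_curr):
    """Displacement of each tracked point minus the mean displacement
    of its Delaunay neighbours (a simple local-motion estimate)."""
    tri = Delaunay(pts_prev)                     # triangulate first-frame points
    indptr, indices = tri.vertex_neighbor_vertices
    disp = pts_curr - pts_prev                   # absolute frame-to-frame motion
    rel = np.empty_like(disp)
    for i in range(len(pts_prev)):
        nbrs = indices[indptr[i]:indptr[i + 1]]  # Delaunay neighbours of point i
        rel[i] = disp[i] - disp[nbrs].mean(axis=0)
    return rel
```

For example, if only one interior point moves while its neighbours stay fixed, its relative motion equals its absolute displacement; a uniform translation of all points yields zero relative motion everywhere.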




Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Otsuka, T., Ohya, J. (1999). Extracting Facial Motion Parameters by Tracking Feature Points. In: Nishio, S., Kishino, F. (eds) Advanced Multimedia Content Processing. AMCP 1998. Lecture Notes in Computer Science, vol 1554. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48962-2_30

  • DOI: https://doi.org/10.1007/3-540-48962-2_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65762-0

  • Online ISBN: 978-3-540-48962-7
