Abstract

Modelling and understanding the facial dynamics of individuals is crucial to achieving more realistic facial animation. We address the recognition of individuals by modelling the facial motion of several subjects. Modelling facial motion poses numerous challenges, including accurate and robust tracking of facial movement, the processing of high-dimensional data, and the non-linear spatio-temporal structure of the motion. We present a novel framework which addresses these problems through the use of video-specific Active Appearance Models (AAM) and Gaussian Process Latent Variable Models (GP-LVM). Our experiments demonstrate, both qualitatively and quantitatively, the framework's ability to differentiate individuals by temporally modelling appearance-invariant facial motion. These results support the proposition that such a facial activity model may assist in motion retargeting, motion synthesis and experimental psychology.
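
To make the modelling step concrete, the following is a minimal sketch (not the authors' code) of embedding tracked AAM parameters into a low-dimensional latent space with a GP-LVM, using the GPy library. The input file name, its layout, the RBF kernel and the two-dimensional latent space are illustrative assumptions; the paper's own modelling choices are not reproduced here.

    # Sketch only: embed per-frame AAM parameters into a low-dimensional
    # latent space with a GP-LVM (via the GPy library). File name, data
    # layout, kernel and latent dimensionality are hypothetical.
    import numpy as np
    import GPy

    # Hypothetical input: one row per video frame, columns are the AAM
    # shape/appearance parameters tracked for one subject.
    Y = np.load("aam_params_subject01.npy")      # shape: (n_frames, n_params)
    Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)     # normalise each parameter

    latent_dim = 2                               # assumed latent dimensionality
    kernel = GPy.kern.RBF(latent_dim, ARD=True)  # smooth non-linear mapping

    # GP-LVM: learn latent points X such that Y is a GP function of X.
    model = GPy.models.GPLVM(Y, latent_dim, kernel=kernel)
    model.optimize(messages=False, max_iters=1000)

    X = model.X.values                           # latent facial-motion trajectory
    print(X.shape)                               # (n_frames, latent_dim)

In a setting like the paper's, such per-subject latent trajectories would then be compared across individuals to support recognition from facial motion alone.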

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Davies, A., Ek, C.H., Dalton, C., Campbell, N. (2011). Facial Movement Based Recognition. In: Gagalowicz, A., Philips, W. (eds) Computer Vision/Computer Graphics Collaboration Techniques. MIRAGE 2011. Lecture Notes in Computer Science, vol 6930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24136-9_5

  • DOI: https://doi.org/10.1007/978-3-642-24136-9_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24135-2

  • Online ISBN: 978-3-642-24136-9
