
Facial Animation and Affective Human–Computer Interaction

  • Reference work entry
Encyclopedia of Multimedia

Synonyms

Facial and hand gestures; Facial animation in MPEG-4

Definition

Affective Human–Computer Interaction (HCI) systems utilize multimodal information about the emotional state of users.

Affective Human–Computer Interaction

Although everyday human-to-human communication is often assumed to rest on vocal and lexical content, people ground much of their expressive and cognitive capability in facial expressions and body gestures. Research in both the analysis and synthesis fields therefore attempts to recreate the way the human mind recognizes such emotion. Because this process is inherently multimodal, robust results require taking into account features such as speech, facial and hand gestures, and body pose, as well as the interaction between them. In the case of speech, features can come from both linguistic and paralinguistic analysis; in the case of facial and body gestures, messages are conveyed in a much more expressive and...
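One common way to combine such modalities is confidence-weighted late fusion: each modality produces its own distribution over emotion classes, and the distributions are merged into a single decision. The sketch below illustrates this idea only; the function name `late_fusion`, the class list, the per-modality probabilities, and the weights are all illustrative assumptions, not taken from this entry.

```python
import numpy as np

# Illustrative basic-emotion class set (e.g., Ekman's six); assumed, not from the entry.
CLASSES = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def late_fusion(modality_probs, weights):
    """Combine per-modality class probabilities by a weighted sum.

    modality_probs: list of per-class probability vectors, one per modality.
    weights: per-modality confidence weights (hypothetical values below).
    """
    fused = np.zeros(len(CLASSES))
    for probs, w in zip(modality_probs, weights):
        fused += w * np.asarray(probs, dtype=float)
    return fused / fused.sum()  # renormalize to a proper distribution

# Hypothetical per-modality outputs for one observation.
speech  = [0.10, 0.05, 0.05, 0.60, 0.10, 0.10]
face    = [0.05, 0.05, 0.10, 0.70, 0.05, 0.05]
gesture = [0.20, 0.10, 0.10, 0.40, 0.10, 0.10]

fused = late_fusion([speech, face, gesture], weights=[0.4, 0.4, 0.2])
print(CLASSES[int(np.argmax(fused))])  # here all three modalities agree on "joy"
```

In practice the weights could reflect per-modality reliability (e.g., downweighting gestures when the hands are occluded), which is one simple way to model the cross-modal interaction the text mentions.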




Copyright information

© 2008 Springer-Verlag


Cite this entry

Karpouzis, K., Kollias, S. (2008). Facial Animation and Affective Human–Computer Interaction. In: Furht, B. (eds) Encyclopedia of Multimedia. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-78414-4_323
