Facial Expression Analysis

Chapter in: Visual Analysis of Humans

Abstract

The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, and personality; it also regulates interpersonal behavior and communicates psychiatric and biomedical status, among other functions. Within the past 15 years, there has been increasing interest in automated facial expression analysis within the computer vision and machine learning communities. This chapter reviews fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition. We consider open challenges; review databases available to the research community; and survey approaches to feature detection, tracking, and representation, together with both supervised and unsupervised learning.

Notes

  1. Bold uppercase letters denote matrices (e.g., \(\mathbf{D}\)); bold lowercase letters denote column vectors (e.g., \(\mathbf{d}\)). \(\mathbf{d}_{j}\) represents the jth column of the matrix \(\mathbf{D}\), and \(d_{ij}\) denotes the scalar in the ith row and jth column of \(\mathbf{D}\). Non-bold letters represent scalar variables. \(\mathrm{tr}(\mathbf{D}) = \sum_{i} d_{ii}\) is the trace of the square matrix \(\mathbf{D}\), and \(\|\mathbf{d}\|_{2} = \sqrt{\mathbf{d}^{T}\mathbf{d}}\) designates the Euclidean norm of \(\mathbf{d}\).
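The notation above can be illustrated with a minimal NumPy sketch; the array values are arbitrary examples, not data from the chapter:

```python
import numpy as np

# D: a matrix (bold uppercase in the chapter's notation)
D = np.array([[1.0, 2.0],
              [3.0, 4.0]])

d_j = D[:, 1]        # d_j: the jth column of D (here j is the second column)
d_ij = D[0, 1]       # d_ij: the scalar in the ith row and jth column of D

trace_D = np.trace(D)            # tr(D) = sum_i d_ii = 1 + 4 = 5
d = np.array([3.0, 4.0])         # d: a column vector (bold lowercase)
norm_d = np.sqrt(d.T @ d)        # ||d||_2 = sqrt(d^T d) = 5

print(trace_D, norm_d)
```

`np.linalg.norm(d)` computes the same Euclidean norm directly; the explicit \(\sqrt{\mathbf{d}^{T}\mathbf{d}}\) form is spelled out only to mirror the footnote's definition.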


Acknowledgements

This work was partially supported by National Institutes of Health Grant R01 MH 051435 and by the National Science Foundation under Grant No. EEC-0540865. Thanks to Tomas Simon, Minh H. Nguyen, Feng Zhou, Simon Baker, Simon Lucey, and Iain Matthews for helpful discussions and for some of the figures.

Author information

Correspondence to Fernando De la Torre.


Copyright information

© 2011 Springer-Verlag London Limited

About this chapter

Cite this chapter

De la Torre, F., Cohn, J.F. (2011). Facial Expression Analysis. In: Moeslund, T., Hilton, A., Krüger, V., Sigal, L. (eds) Visual Analysis of Humans. Springer, London. https://doi.org/10.1007/978-0-85729-997-0_19

  • DOI: https://doi.org/10.1007/978-0-85729-997-0_19

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-996-3

  • Online ISBN: 978-0-85729-997-0

  • eBook Packages: Computer Science (R0)
