
Region-based facial representation for real-time Action Units intensity detection across datasets

Theoretical Advances · Published in Pattern Analysis and Applications

Abstract

Most research on facial expression recognition has focused on binary Action Unit (AU) detection, while graded changes in AU intensity have rarely been considered. This paper proposes a method for real-time detection of AU intensity on the Facial Action Coding System (FACS) scale. It is grounded in a novel and robust anatomically based facial representation strategy, in which features are registered from a different region of interest depending on the AU considered. Real-time processing is achieved by combining Histogram of Oriented Gradients (HOG) descriptors with linear-kernel Support Vector Machines (SVMs). Following this method, AU intensity detection models are built and validated on the DISFA database, outperforming previous approaches that lack real-time capabilities. An in-depth evaluation across three databases (DISFA, BP4D and UNBC Shoulder-Pain) further demonstrates that the proposed method generalizes well across datasets. The study also offers insights into existing public corpora and their impact on AU intensity prediction.
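
The following minimal sketch (Python, using scikit-image and scikit-learn) illustrates the kind of pipeline the abstract describes: HOG features extracted from a per-AU region of interest, fed to a linear SVM trained on FACS intensity labels. The region definitions, crop size and hyperparameters below are illustrative assumptions, not the paper's actual settings, and the paper's exact formulation of intensity detection may also differ (e.g. one model per intensity level rather than a single multiclass classifier).

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Hypothetical per-AU regions of interest, as (top, bottom, left, right)
# fractions of an aligned grayscale face crop. The paper's actual,
# anatomically based region definitions are not reproduced here.
AU_ROIS = {
    "AU4":  (0.0, 0.5, 0.0, 1.0),   # brow lowerer: upper half of the face
    "AU12": (0.5, 1.0, 0.0, 1.0),   # lip corner puller: lower half of the face
}

def au_features(face_crop, au):
    """HOG descriptor computed on the region of interest for a given AU."""
    h, w = face_crop.shape
    top, bottom, left, right = AU_ROIS[au]
    roi = face_crop[int(top * h):int(bottom * h),
                    int(left * w):int(right * w)]
    return hog(roi, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_au_model(face_crops, intensities, au):
    """Fit one linear SVM per AU on FACS intensity labels (0 = absent ... 5 = maximum).

    Assumes all face crops are aligned and share the same size, so the
    per-frame HOG vectors have equal length.
    """
    X = np.stack([au_features(f, au) for f in face_crops])
    return LinearSVC(C=1.0).fit(X, np.asarray(intensities))

The appeal of the linear kernel for real-time use is that, once trained, predicting an intensity for a new frame reduces to one dot product per intensity level.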

Acknowledgements

This research was supported by the Laboratory of Excellence SMART (ANR-11-LABX-65), funded by French state funds managed by the ANR within the Investissements d'Avenir programme (ANR-11-IDEX-0004-02).

Author information

Corresponding author

Correspondence to Isabelle Hupont.

About this article

Cite this article

Hupont, I., Chetouani, M. Region-based facial representation for real-time Action Units intensity detection across datasets. Pattern Anal Applic 22, 477–489 (2019). https://doi.org/10.1007/s10044-017-0645-4
