DOI: 10.1145/2671188.2749311
Research Article

Facial Action Unit Classification with Hidden Knowledge under Incomplete Annotation

Published: 22 June 2015

Abstract

Facial action unit (AU) recognition is an important task in facial expression analysis. Traditional AU recognition methods typically rely on supervised training, which requires AU-annotated training images. AU annotation is a time-consuming, expensive, and error-prone process. While AUs are hard to annotate, facial expressions are relatively easy to label. To take advantage of this, we introduce a new learning method that trains an AU classifier from images with incomplete AU annotations but complete expression labels. The goal is to use the expression labels as hidden knowledge to complement the missing AU labels. Towards this goal, we propose to construct a Bayesian network (BN) to capture the relationships between facial expressions and AUs. Structural Expectation Maximization is used to learn the structure and parameters of the BN when AU labels are missing. Given the learned BN and measurements of AUs and expression, we can then perform AU recognition through probabilistic inference in the BN. Experimental results on the CK+ and ISL databases demonstrate the effectiveness of our method.
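The inference step described above can be illustrated with a toy sketch. The snippet below assumes a fixed naive-Bayes-shaped BN (one expression node with each AU as a child, plus a noisy binary measurement per AU) and made-up conditional probabilities; the paper instead learns both the structure and the parameters via Structural EM, so this only illustrates how expression evidence and AU measurements combine in the final posterior P(AU | expression, measurement).

```python
# Toy BN: expression E -> AU_i -> measurement m_i, all binary.
# The structure and every number below are illustrative assumptions,
# not values from the paper (which learns them via Structural EM).

# P(AU_i = 1 | E) for two expressions and three AUs.
P_AU_GIVEN_EXPR = {
    "happy":    [0.9, 0.8, 0.1],
    "surprise": [0.2, 0.1, 0.9],
}

TPR = 0.85  # assumed P(m_i = 1 | AU_i = 1): detector true-positive rate
FPR = 0.15  # assumed P(m_i = 1 | AU_i = 0): detector false-positive rate


def au_posterior(expr, measurements):
    """P(AU_i = 1 | E = expr, m_i) for each AU, via Bayes' rule."""
    posts = []
    for prior, m in zip(P_AU_GIVEN_EXPR[expr], measurements):
        lik1 = TPR if m else 1 - TPR      # P(m | AU = 1)
        lik0 = FPR if m else 1 - FPR      # P(m | AU = 0)
        joint1 = lik1 * prior
        posts.append(joint1 / (joint1 + lik0 * (1 - prior)))
    return posts


# Expression evidence reinforces AUs consistent with "happy" (AU1, AU2)
# and suppresses AU3, even with identical measurement reliability.
print(au_posterior("happy", [1, 1, 0]))
print(au_posterior("surprise", [1, 1, 0]))
```

Note how the same measurement vector yields different AU posteriors under different expression labels; this is the sense in which the expression acts as hidden knowledge complementing noisy or missing AU evidence.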




Published In

ICMR '15: Proceedings of the 5th ACM on International Conference on Multimedia Retrieval
June 2015, 700 pages
ISBN: 9781450332743
DOI: 10.1145/2671188


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. bayesian network
  2. facial action unit recognition
  3. incomplete labels
  4. multi-label recognition
  5. structural expectation maximization

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China

Conference

ICMR '15

Acceptance Rates

ICMR '15 Paper Acceptance Rate: 48 of 127 submissions, 38%
Overall Acceptance Rate: 254 of 830 submissions, 31%


Cited By

  • Knowledge augmented deep neural networks for joint facial expression and action unit recognition. Proceedings of the 34th International Conference on Neural Information Processing Systems, 14338-14349 (2020). DOI: 10.5555/3495724.3496926
  • Crossing Domains for AU Coding: Perspectives, Approaches, and Measures. IEEE Transactions on Biometrics, Behavior, and Identity Science 2(2), 158-171 (2020). DOI: 10.1109/TBIOM.2020.2977225
  • Exploring Domain Knowledge for Facial Expression-Assisted Action Unit Activation Recognition. IEEE Transactions on Affective Computing 11(4), 640-652 (2020). DOI: 10.1109/TAFFC.2018.2822303
  • Weakly Supervised Dual Learning for Facial Action Unit Recognition. IEEE Transactions on Multimedia 21(12), 3218-3230 (2019). DOI: 10.1109/TMM.2019.2916063
  • Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies. IEEE Transactions on Image Processing 28(3), 1428-1442 (2019). DOI: 10.1109/TIP.2018.2878339
  • Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition. IEEE Transactions on Affective Computing 10(3), 348-359 (2019). DOI: 10.1109/TAFFC.2017.2737540
  • Cross-domain AU Detection: Domains, Learning Approaches, and Measures. 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), 1-8. DOI: 10.1109/FG.2019.8756543
  • Weakly Supervised Facial Action Unit Recognition With Domain Knowledge. IEEE Transactions on Cybernetics 48(11), 3265-3276 (2018). DOI: 10.1109/TCYB.2018.2868194
  • Facial Action Unit Recognition Augmented by Their Dependencies. 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 187-194. DOI: 10.1109/FG.2018.00036
  • Classifier Learning with Prior Probabilities for Facial Action Unit Recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5108-5116. DOI: 10.1109/CVPR.2018.00536
