DOI: 10.1145/3240508.3240613

Personalized Multiple Facial Action Unit Recognition through Generative Adversarial Recognition Network

Published: 15 October 2018

Abstract

Personalized facial action unit (AU) recognition is challenging due to subject-dependent facial behavior. This paper proposes a method to recognize personalized multiple facial AUs through a novel generative adversarial network, which adapts the distribution of source-domain facial images to that of target-domain facial images and detects multiple AUs by leveraging AU dependencies. Specifically, we use a generative adversarial network to generate synthetic images from the source domain; the synthetic images have a similar appearance to the target subject and retain the AU patterns of the source images. We simultaneously leverage AU dependencies to train a multiple-AU classifier. Experimental results on three benchmark databases demonstrate that the proposed method can successfully realize unsupervised domain adaptation for individual AU detection, and thus outperforms state-of-the-art AU detection methods.
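
The abstract describes an adversarial adaptation scheme: a generator renders labeled source-subject images with the target subject's appearance while preserving their AU patterns, a discriminator distinguishes real target images from generated ones, and a multi-label AU classifier is trained jointly on the generated images. The sketch below illustrates that training signal in PyTorch. It is not the authors' network: the module architectures, names (Generator, Discriminator, MultiAUClassifier, train_step), loss choices, and hyperparameters are hypothetical placeholders, and the explicit AU-dependency modeling the paper describes is not reproduced here; a plain joint multi-label classifier stands in for it.

```python
# Minimal sketch (not the authors' implementation) of adversarial domain
# adaptation for personalized multi-label AU detection, as outlined in the
# abstract. All architectures and names below are illustrative placeholders.
import torch
import torch.nn as nn

class ConvBlock(nn.Sequential):
    def __init__(self, c_in, c_out):
        super().__init__(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.InstanceNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

class Generator(nn.Module):          # source image -> target-like image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):      # real target image vs. generated image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(3, 32), ConvBlock(32, 64),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

class MultiAUClassifier(nn.Module):  # joint multi-label AU prediction
    def __init__(self, num_aus=12):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(3, 32), ConvBlock(32, 64),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, num_aus))
    def forward(self, x):
        return self.net(x)           # one logit per AU

G, D, C = Generator(), Discriminator(), MultiAUClassifier()
opt_g = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, au_loss = nn.BCEWithLogitsLoss(), nn.BCEWithLogitsLoss()

def train_step(src_img, src_au_labels, tgt_img):
    """src_img: labeled source-subject batch; tgt_img: unlabeled target-subject batch."""
    # 1) Discriminator: target images are "real", generated images are "fake".
    fake = G(src_img).detach()
    d_loss = adv_loss(D(tgt_img), torch.ones(tgt_img.size(0), 1)) + \
             adv_loss(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator + classifier: fool D (target-like appearance) while the
    #    source AU labels remain predictable from the generated image
    #    (AU-pattern preservation).
    gen = G(src_img)
    g_loss = adv_loss(D(gen), torch.ones(gen.size(0), 1)) + \
             au_loss(C(gen), src_au_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for aligned face crops:
src, tgt = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 2, (4, 12)).float()
train_step(src, labels, tgt)
```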




    Published In

    MM '18: Proceedings of the 26th ACM international conference on Multimedia
    October 2018
    2167 pages
    ISBN:9781450356657
    DOI:10.1145/3240508


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. domain adaptation
    2. generative adversarial networks
    3. personalized AU recognition

    Qualifiers

    • Research-article

    Funding Sources

    • National Science Foundation of China

    Conference

    MM '18: ACM Multimedia Conference
    October 22-26, 2018
    Seoul, Republic of Korea

    Acceptance Rates

    MM '18 Paper Acceptance Rate: 209 of 757 submissions, 28%
    Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


    Cited By

    • (2024) Expression Complementary Disentanglement Network for Facial Expression Recognition. Chinese Journal of Electronics 33(3), 742-752. DOI: 10.23919/cje.2022.00.351. Online publication date: May 2024.
    • (2024) Facial Action Unit detection based on multi-task learning strategy for unlabeled facial images in the wild. Expert Systems with Applications 253, 124285. DOI: 10.1016/j.eswa.2024.124285. Online publication date: November 2024.
    • (2024) Boosting Facial Action Unit Detection with CGAN-Based Data Augmentation. Decision Making in Healthcare Systems, 323-335. DOI: 10.1007/978-3-031-46735-6_13. Online publication date: 1 January 2024.
    • (2023) Cascading CNNs for facial action unit detection. Engineering Science and Technology, an International Journal 47, 101553. DOI: 10.1016/j.jestch.2023.101553. Online publication date: November 2023.
    • (2022) Pursuing Knowledge Consistency: Supervised Hierarchical Contrastive Learning for Facial Action Unit Recognition. Proceedings of the 30th ACM International Conference on Multimedia, 111-119. DOI: 10.1145/3503161.3548116. Online publication date: 10 October 2022.
    • (2022) Unconstrained Facial Action Unit Detection via Latent Feature Domain. IEEE Transactions on Affective Computing 13(2), 1111-1126. DOI: 10.1109/TAFFC.2021.3091331. Online publication date: 1 April 2022.
    • (2022) 3D-FERNet: A Facial Expression Recognition Network utilizing 3D information. 2022 26th International Conference on Pattern Recognition (ICPR), 3265-3272. DOI: 10.1109/ICPR56361.2022.9956497. Online publication date: 21 August 2022.
    • (2021) CaFGraph: Context-aware Facial Multi-graph Representation for Facial Action Unit Recognition. Proceedings of the 29th ACM International Conference on Multimedia, 1029-1037. DOI: 10.1145/3474085.3475295. Online publication date: 17 October 2021.
    • (2021) Cross-Modal Representation Learning for Lightweight and Accurate Facial Action Unit Detection. IEEE Robotics and Automation Letters 6(4), 7619-7626. DOI: 10.1109/LRA.2021.3098944. Online publication date: October 2021.
    • (2021) Facial Action Unit Detection with ViT and Perceiver Using Landmark Patches. 2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 0281-0285. DOI: 10.1109/IEMCON53756.2021.9623198. Online publication date: 27 October 2021.
