DOI: 10.1145/3674225.3674319
Research article

Cross-domain face expression recognition based on joint constrained reconstruction subspace learning

Published: 31 July 2024

Abstract

Facial expression recognition struggles to achieve good results in unlabeled cross-domain settings, so a cross-domain facial expression recognition algorithm based on joint constrained reconstruction subspace learning is proposed. The method first measures the correlations among shallow features extracted by a convolutional neural network and uses the resulting Gram matrix as the original feature representation of the expression image. Second, the original features are projected onto a latent subspace, where a dual alignment constraint on the marginal and conditional distributions is constructed to reduce the distribution gap between domains. To further preserve discriminative geometric structure in the feature representation, a sparse data matrix is built in the reconstructed subspace, and a structure-preserving constraint keeps samples of the same class close together while pushing samples of different classes apart. Finally, the reconstructed subspace is trained by combining the dual distribution alignment constraint with the structure-preserving constraint; domain-invariant facial expression features are learned from this subspace and fed into an SVM classifier to complete cross-domain facial expression recognition. Experiments on three publicly available facial expression datasets show that the proposed method achieves an average recognition rate of 59.97%, which is 2.02% higher than the second-best method, demonstrating its effectiveness for cross-domain facial expression recognition.
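
The pipeline described above can be illustrated with a short sketch. The code below (a minimal illustration assuming NumPy and scikit-learn; all function and variable names are hypothetical) shows two of the building blocks the abstract mentions: Gram-matrix descriptors computed from shallow CNN feature maps, and a simple measure of the marginal-distribution gap between source and target domains, before an SVM trained on source labels predicts the unlabeled target domain. It is not the authors' implementation; the joint objective with conditional-distribution alignment and the structure-preserving sparse reconstruction term is omitted.

```python
# Minimal sketch of two ideas from the abstract: Gram-matrix descriptors of
# shallow CNN feature maps, and a marginal-distribution gap (linear-kernel MMD)
# between source and target domains. Names and toy data are illustrative only;
# the paper's full joint objective is not reproduced here.

import numpy as np
from sklearn.svm import SVC

def gram_descriptor(feature_map):
    """Turn a (C, H, W) shallow feature map into a Gram-matrix descriptor.

    G = F F^T (C x C) captures channel-wise correlations; its upper triangle
    is vectorised to give a fixed-length expression feature.
    """
    C = feature_map.shape[0]
    F = feature_map.reshape(C, -1)      # C x (H*W)
    G = F @ F.T / F.shape[1]            # normalise by spatial size
    iu = np.triu_indices(C)
    return G[iu]

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: || mean(Xs) - mean(Xt) ||^2."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

# Toy usage: random arrays stand in for real shallow-layer CNN activations.
rng = np.random.default_rng(0)
source_maps = rng.normal(size=(40, 16, 8, 8))         # 40 source images, 16 channels
target_maps = rng.normal(size=(30, 16, 8, 8)) + 0.3   # shifted target domain

Xs = np.stack([gram_descriptor(m) for m in source_maps])
Xt = np.stack([gram_descriptor(m) for m in target_maps])
ys = rng.integers(0, 7, size=len(Xs))                 # 7 basic expression labels

print("marginal gap before alignment:", linear_mmd(Xs, Xt))

# After a transfer-subspace method projects both domains into a shared space,
# an SVM trained on labeled source features classifies the unlabeled target.
clf = SVC(kernel="linear").fit(Xs, ys)
target_pred = clf.predict(Xt)
```

In the paper's full method the projection is learned jointly, so that both the marginal and conditional gaps shrink while class structure is preserved; this sketch only isolates the feature-construction and domain-gap ideas.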




    Published In

    PEAI '24: Proceedings of the 2024 International Conference on Power Electronics and Artificial Intelligence
    January 2024
    969 pages
    ISBN:9798400716638
    DOI:10.1145/3674225

    Publisher

    Association for Computing Machinery

    New York, NY, United States
