Abstract
The contribution of appearance feature variations between expressive and neutral face images to the facial expression classification problem has received limited attention in the literature. The prime focus of the proposed work is to investigate an abstract, robust, and discriminative feature space that effectively models the expression classification problem. The significant contributions of the work are: a hybrid feature space built by integrating discriminative features derived from the histogram of oriented gradients (HOG) and local Gabor binary pattern histogram sequence (LGBPHS) descriptors; computation of the shape and texture feature variations between the expressive and neutral face images (feature difference); a novel feature space formed by combining the hybrid features of the expressive face image with this feature difference; a stacked deep convolutional autoencoder employed as an efficient feature selection algorithm for dimensionality reduction; and a multiclass support vector machine (MSVM) for classification. The combination of HOG and LGBPHS improves the recognition accuracy, robustness, and generalization capability of the model. The work is carried out on three benchmark datasets (CK+, KDEF, and JAFFE), and the model achieves strong recognition rates on all three (96.43% on CK+, 96.03% on KDEF, and 88.53% on JAFFE).
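For readers who want to prototype the pipeline outlined above, the following is a minimal, illustrative Python sketch, not the authors' implementation: it builds a HOG + LGBPHS hybrid descriptor, forms the expressive-minus-neutral feature difference, concatenates the two, and trains a multiclass SVM. All parameter values (Gabor frequencies, LBP neighbourhood, HOG cell sizes, image size) are assumptions, and the paper's stacked deep convolutional autoencoder stage for dimensionality reduction is omitted for brevity.

```python
# Minimal sketch of the HOG + LGBPHS hybrid feature space with an
# expressive-minus-neutral feature difference and a multiclass SVM.
# Parameter values are illustrative; the autoencoder-based reduction
# described in the paper is not included here.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor
from sklearn.svm import SVC

def hog_features(img):
    # Shape descriptor: histogram of oriented gradients on the grayscale face.
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def lgbphs_features(img, frequencies=(0.1, 0.25), n_theta=4, P=8, R=1):
    # Texture descriptor: LBP histograms over Gabor magnitude responses
    # (local Gabor binary pattern histogram sequence).
    hists = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_theta)
            magnitude = np.hypot(real, imag)
            lbp = local_binary_pattern(magnitude, P, R, method="uniform")
            h, _ = np.histogram(lbp.ravel(), bins=P + 2,
                                range=(0, P + 2), density=True)
            hists.append(h)
    return np.concatenate(hists)

def hybrid_features(img):
    # Hybrid shape + texture feature space for a single face image.
    return np.concatenate([hog_features(img), lgbphs_features(img)])

def fused_features(expressive_img, neutral_img):
    # Expressive-face features combined with the expressive-minus-neutral
    # feature difference, as described in the abstract.
    f_expr = hybrid_features(expressive_img)
    f_neut = hybrid_features(neutral_img)
    return np.concatenate([f_expr, f_expr - f_neut])

# Usage sketch with random arrays standing in for aligned grayscale face crops.
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.random((96, 96)), rng.random((96, 96)))
              for _ in range(20)])
y = rng.integers(0, 7, size=20)          # 7 basic expression classes
clf = SVC(kernel="rbf").fit(X, y)        # multiclass SVM (one-vs-one internally)
print(clf.predict(X[:3]))
```

In practice the random arrays would be replaced by face crops detected and aligned from CK+, KDEF, or JAFFE images, and the fused feature vectors would be compressed before classification.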




Data Availability
The data were derived from public domain resources. The data that support the findings of this study are available in references [9, 10, 11] and were obtained from the following resources available in the public domain: https://www.kaggle.com/datasets, https://www.kdef.se/download-2/register.html.
References
Pinto LVL, Alves AVN, Medeiros AM, da Silva Costa SW, Pires Y, Ribeiro Costa FA, da Rocha Seruffo MC. A systematic review of facial expression detection methods. IEEE Access. 2023. https://doi.org/10.1109/ACCESS.2023.3287090.
Mohana M, Subashini P. Facial expression recognition using machine learning and deep learning techniques: a systematic review. SN Comput Sci. 2024;5(4):1–26.
Wang Z, Zeng F, Liu S, Zeng B. OAENet: oriented attention ensemble for accurate facial expression recognition. Pattern Recogn. 2021. https://doi.org/10.1016/j.patcog.2020.107694.
Sanoar H, Umer S, Rout RK, Al Marzouqi H. A deep quantum convolutional neural network based facial expression recognition for mental health analysis. IEEE Trans Neural Syst Rehabil Eng. 2024. https://doi.org/10.1109/TNSRE.2024.3385336.
Shu L, Xu Y, Wan T, Kui X. Ada-DF: an adaptive label distribution fusion network for facial expression recognition. Preprint at arXiv:2404.15714. 2024.
Zhu Q, Mao Q, Jia H, Noi OEN, Tu J. Convolutional relation network for facial expression recognition in the wild with few-shot learning. Expert Syst Appl. 2022;189:116046.
Sun Z, Zhang H, Bai J, Liu M, Hu Z. A discriminatively deep fusion approach with improved conditional GAN (im-cGAN) for facial expression recognition. Pattern Recogn. 2023;135:109157.
Ma F, Sun B, Li S. Spatio-temporal transformer for dynamic facial expression recognition in the wild. Preprint at arXiv:2205.04749. 2022.
Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: IEEE computer society conference on computer vision and pattern recognition workshops. pp. 94–101. 2010.
Lyons M, Akamatsu S, Kamachi M, Gyoba J. Coding facial expressions with gabor wavelets. In: Proceedings Third IEEE international conference on automatic face and gesture recognition. pp. 200–205. 1998.
Lundqvist D, Litton JE. The averaged Karolinska directed emotional faces. Stockholm: Karolinska Institute, Department of Clinical Neuroscience, Section Psychology; 1998.
Chen P, Wang Z, Mao S, Hui X, Yanning H. Dual-branch residual disentangled adversarial learning network for facial expression recognition. IEEE Signal Process Lett. 2024. https://doi.org/10.1109/LSP.2024.3390987.
Yu C, Zhang D, Zou W, Li M. Joint training on multiple datasets with inconsistent labeling criteria for facial expression recognition. IEEE Trans Affective Comput. 2024. https://doi.org/10.1109/TAFFC.2024.3382618.
Zhang F, Cheng ZQ, Zhao J, Peng X, Li X. LEAF: unveiling two sides of the same coin in semi-supervised facial expression recognition. Preprint at arXiv:2404.15041. 2024.
Yang D, Yang K, Li M, Wang S, Wang S, Zhang L. Robust emotion recognition in context debiasing. Preprint at arXiv:2403.05963. 2024.
Lv Y, Huang G, Yan Y, Xue J-H, Chen S, Wang H. Visual-textual attribute learning for class-incremental facial expression recognition. IEEE Trans Multimed. 2024. https://doi.org/10.1109/TMM.2024.3374573.
Liu Y, Dai W, Fang F, Chen Y, Huang R, Wang R, Wan B. Dynamic multi-channel metric network for joint pose-aware and identity-invariant facial expression recognition. Inf Sci. 2021;578:195–213.
Zhang W, Zhang X, Tang Y. Facial expression recognition based on improved residual network. IET Image Proc. 2023;17(7):2005–14.
Jin X, Jin Z. MiniExpNet: a small and effective facial expression recognition network based on facial local regions. Neurocomputing. 2021;462:353–64.
Sun Z, Chiong R, Hu Z. Self-adaptive feature learning based on a priori knowledge for facial expression recognition. Knowl-Based Syst. 2020;204:106124.
Fan X, Tjahjadi T. Fusing dynamic deep learned features and handcrafted features for facial expression recognition. J Vis Commun Image Represent. 2019;65:102659.
Nie W, Wang Z, Wang X, Chen B, Zhang H, Liu H. Diving into sample selection for facial expression recognition with noisy annotations. IEEE Trans Biometrics Behav Identity Sci. 2024. https://doi.org/10.1109/TBIOM.2024.3435498.
Zhang Y, Fei Z, Li X, Zhou W, Fei M. A method for recognizing facial expression intensity based on facial muscle variations. Multimed Tools Appl. 2024. https://doi.org/10.1007/s11042-024-19779-4.
Wang H, Song H, Li P. Multi-task network with inter-task consistency learning for face parsing and facial expression recognition at real-time speed. J Vis Commun Image Represent. 2024;103:104213.
Tan Y, Xia H, Song S. Robust consistency learning for facial expression recognition under label noise. Vis Comput. 2024. https://doi.org/10.1007/s00371-024-03558-1.
Yang Y, Lin Hu, Chen Zu, Zhang J, Hou Y, Chen Y, Zhou J, Zhou L, Wang Y. CL-TransFER: collaborative learning based transformer for facial expression recognition with masked reconstruction. Pattern Recogn. 2024;156:110741.
Viola P, Jones MJ. Robust real-time face detection. Int J Comput Vision. 2004;57:137–54.
Naveen Kumar HN, Jagadeesha S, Jain AK. Human facial expression recognition from static images using shape and appearance features. In: 2016 2nd international conference on applied and theoretical computing and communication technology (iCATccT), IEEE. pp. 598–603. 2016.
Turan C, Lam K-M. Histogram-based local descriptors for facial expression recognition (FER): a comprehensive study. J Vis Commun Image Represent. 2018;55:331–41.
Mahmut T, Küçüksille EU. Comparative analysis of dimension reduction and classification using cardiotocography data. In: ICONST EST’21 19. 2021.
Naveen Kumar HN, Suresh Kumar A, Guru Prasad MS, Mohd AS. Automatic facial expression recognition combining texture and shape features from prominent facial regions. IET Image Process. 2023;17:1111–25.
Naveen Kumar HN, Guru Prasad MS, Mohd AS, Mahadevaswamy, Sudheesh K. Modelling appearance variations in expressive and neutral face image for automatic facial expression recognition. IET Image Process. 2024. https://doi.org/10.1049/ipr2.13109.
Ethics declarations
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Ethics Approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Jain, A.K., Naveen Kumar, H.N. Integration of Discriminative Information from Expressive and Neutral Face Image for Effective Modelling of Facial Expression Classification Problem. SN COMPUT. SCI. 5, 1157 (2024). https://doi.org/10.1007/s42979-024-03469-x