
Integration of Discriminative Information from Expressive and Neutral Face Image for Effective Modelling of Facial Expression Classification Problem

  • Original Research
  • Published in: SN Computer Science

Abstract

The contribution of appearance-feature variations between an expressive face image and its neutral counterpart to the facial expression classification problem has received limited attention in the literature. The prime focus of the proposed work is to investigate an abstract, robust, and discriminative feature space that effectively models the expression classification problem. The significant contributions of the work are: a hybrid feature space developed by integrating discriminative features derived from the histogram of oriented gradients (HOG) and local Gabor binary pattern histogram sequence (LGBPHS) feature descriptors; the shape and texture feature variations computed between the expressive and neutral face images (the feature difference); a novel feature space developed by combining the hybrid feature space of the expressive face image with this feature difference; a stacked deep convolutional autoencoder employed as an efficient feature selection algorithm for dimensionality reduction; and a multiclass support vector machine (MSVM) for classification. The combination of HOG and LGBPHS improves the recognition accuracy, robustness, and generalization capability of the model. The work is carried out on three benchmark datasets (CK+, KDEF, and JAFFE), and the model shows remarkable recognition rates on all three (96.43% on CK+, 96.03% on KDEF, and 88.53% on JAFFE).
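The feature-construction stage described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact configuration: the LGBPHS implementation here assumes a small Gabor filter bank (two frequencies, four orientations), uniform LBP histograms over a 4×4 block grid, and the function names (`lgbphs`, `hybrid_features`, `expression_descriptor`) are hypothetical. It shows the core idea of concatenating the expressive image's hybrid HOG+LGBPHS features with the feature difference against the neutral image.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from skimage.filters import gabor

def lgbphs(img, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
           grid=(4, 4), P=8, R=1):
    """Simplified local Gabor binary pattern histogram sequence:
    Gabor magnitude maps -> uniform LBP -> per-block histograms."""
    hists = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            # rescale to uint8 before LBP (LBP expects integer-valued images)
            mag8 = (255 * mag / (mag.max() + 1e-12)).astype(np.uint8)
            lbp = local_binary_pattern(mag8, P, R, method="uniform")
            # normalized histogram per spatial block, concatenated in order
            for rows in np.array_split(lbp, grid[0], axis=0):
                for cell in np.array_split(rows, grid[1], axis=1):
                    h, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2))
                    hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

def hybrid_features(img):
    """Hybrid feature space: HOG (shape) + LGBPHS (texture)."""
    return np.concatenate([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2)),
        lgbphs(img),
    ])

def expression_descriptor(expressive, neutral):
    """Expressive hybrid features concatenated with the feature difference."""
    fe = hybrid_features(expressive)
    fn = hybrid_features(neutral)
    return np.concatenate([fe, fe - fn])
```

In the full pipeline described in the abstract, vectors produced by `expression_descriptor` would then pass through the stacked deep convolutional autoencoder for dimensionality reduction before MSVM classification; those stages are omitted here.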


[Figures 1–4 appear in the full article.]


Data Availability

The data were derived from public-domain resources. The data that support the findings of this study are available in references [9–11] and were derived from the following resources available in the public domain: https://www.kaggle.com/datasets, https://www.kdef.se/download-2/register.html.

References

  1. Pinto LVL, Alves AVN, Medeiros AM, da Silva Costa SW, Pires Y, Ribeiro Costa FA, da Rocha Seruffo MC. A systematic review of facial expression detection methods. IEEE Access. 2023. https://doi.org/10.1109/ACCESS.2023.3287090.


  2. Mohana M, Subashini P. Facial expression recognition using machine learning and deep learning techniques: a systematic review. SN Comput Sci. 2024;5(4):1–26.


  3. Wang Z, Zeng F, Liu S, Zeng B. OAENet: oriented attention ensemble for accurate facial expression recognition. Pattern Recogn. 2021. https://doi.org/10.1016/j.patcog.2020.107694.


  4. Sanoar H, Umer S, Rout RK, Al Marzouqi H. A deep quantum convolutional neural network based facial expression recognition for mental health analysis. IEEE Trans Neural Syst Rehabil Eng. 2024. https://doi.org/10.1109/TNSRE.2024.3385336.


  5. Shu L, Xu Y, Wan T, Kui X. Ada-DF: an adaptive label distribution fusion network for facial expression recognition. Preprint at arXiv:2404.15714. 2024.

  6. Zhu Q, Mao Q, Jia H, Noi OEN, Tu J. Convolutional relation network for facial expression recognition in the wild with few-shot learning. Expert Syst Appl. 2022;189:116046.


  7. Sun Z, Zhang H, Bai J, Liu M, Hu Z. A discriminatively deep fusion approach with improved conditional GAN (im-cGAN) for facial expression recognition. Pattern Recogn. 2023;135:109157.


  8. Ma F, Sun B, Li S. Spatio-temporal transformer for dynamic facial expression recognition in the wild. Preprint at arXiv:2205.04749. 2022.

  9. Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: IEEE computer society conference on computer vision and pattern recognition workshops. pp. 94–101. 2010.

  10. Lyons M, Akamatsu S, Kamachi M, Gyoba J. Coding facial expressions with Gabor wavelets. In: Proceedings of the third IEEE international conference on automatic face and gesture recognition. pp. 200–205. 1998.

  11. Lundqvist D, Litton JE. The averaged Karolinska directed emotional faces. Stockholm: Karolinska Institute, Department of Clinical Neuroscience, Section Psychology; 1998.


  12. Chen P, Wang Z, Mao S, Hui X, Yanning H. Dual-branch residual disentangled adversarial learning network for facial expression recognition. IEEE Signal Process Lett. 2024. https://doi.org/10.1109/LSP.2024.3390987.


  13. Yu C, Zhang D, Zou W, Li M. Joint training on multiple datasets with inconsistent labeling criteria for facial expression recognition. IEEE Trans Affective Comput. 2024. https://doi.org/10.1109/TAFFC.2024.3382618.


  14. Zhang F, Cheng ZQ, Zhao J, Peng X, Li X. LEAF: unveiling two sides of the same coin in semi-supervised facial expression recognition. Preprint at arXiv:2404.15041. 2024.

  15. Yang D, Yang K, Li M, Wang S, Wang S, Zhang L. Robust emotion recognition in context debiasing. Preprint at arXiv:2403.05963. 2024.

  16. Lv Y, Huang G, Yan Y, Xue J-H, Chen S, Wang H. Visual-textual attribute learning for class-incremental facial expression recognition. IEEE Trans Multimed. 2024. https://doi.org/10.1109/TMM.2024.3374573.


  17. Liu Y, Dai W, Fang F, Chen Y, Huang R, Wang R, Wan B. Dynamic multi-channel metric network for joint pose-aware and identity-invariant facial expression recognition. Inf Sci. 2021;578:195–213.


  18. Zhang W, Zhang X, Tang Y. Facial expression recognition based on improved residual network. IET Image Proc. 2023;17(7):2005–14.


  19. Jin X, Jin Z. MiniExpNet: a small and effective facial expression recognition network based on facial local regions. Neurocomputing. 2021;462:353–64.


  20. Sun Z, Chiong R, Hu Z. Self-adaptive feature learning based on a priori knowledge for facial expression recognition. Knowl-Based Syst. 2020;204:106124.


  21. Fan X, Tjahjadi T. Fusing dynamic deep learned features and handcrafted features for facial expression recognition. J Vis Commun Image Represent. 2019;65: 102659.


  22. Nie W, Wang Z, Wang X, Chen B, Zhang H, Liu H. Diving into sample selection for facial expression recognition with noisy annotations. IEEE Trans Biometrics Behav Identity Sci. 2024. https://doi.org/10.1109/TBIOM.2024.3435498.


  23. Zhang Y, Fei Z, Li X, Zhou W, Fei M. A method for recognizing facial expression intensity based on facial muscle variations. Multimed Tools Appl. 2024. https://doi.org/10.1007/s11042-024-19779-4.


  24. Wang H, Song H, Li P. Multi-task network with inter-task consistency learning for face parsing and facial expression recognition at real-time speed. J Vis Commun Image Represent. 2024;103: 104213.


  25. Tan Y, Xia H, Song S. Robust consistency learning for facial expression recognition under label noise. Vis Comput. 2024. https://doi.org/10.1007/s00371-024-03558-1.


  26. Yang Y, Hu L, Zu C, Zhang J, Hou Y, Chen Y, Zhou J, Zhou L, Wang Y. CL-TransFER: collaborative learning based transformer for facial expression recognition with masked reconstruction. Pattern Recogn. 2024;156:110741.


  27. Viola P, Jones MJ. Robust real-time face detection. Int J Comput Vision. 2004;57:137–54.


  28. Naveen Kumar HN, Jagadeesha S, Jain AK. Human Facial Expression Recognition from static images using shape and appearance feature. In: 2016 2nd international conference on applied and theoretical computing and communication technology (iCATccT), IEEE. pp. 598–603. 2016.

  29. Turan C, Lam K-M. Histogram-based local descriptors for facial expression recognition (FER): a comprehensive study. J Vis Commun Image Represent. 2018;55:331–41.


  30. Mahmut T, Küçüksille EU. Comparative analysis of dimension reduction and classification using cardiotocography data. In: ICONST EST’21 19. 2021.

  31. Naveen Kumar HN, Suresh Kumar A, Guru Prasad MS, Mohd AS. Automatic facial expression recognition combining texture and shape features from prominent facial regions. IET Image Process. 2023;17:1111–25.


  32. Naveen Kumar HN, Guru Prasad MS, Mohd AS, Mahadevaswamy, Sudheesh K. Modelling appearance variations in expressive and neutral face image for automatic facial expression recognition. IET Image Process. 2024. https://doi.org/10.1049/ipr2.13109.



Author information


Corresponding author

Correspondence to H. N. Naveen Kumar.

Ethics declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Ethics Approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jain, A.K., Naveen Kumar, H.N. Integration of Discriminative Information from Expressive and Neutral Face Image for Effective Modelling of Facial Expression Classification Problem. SN COMPUT. SCI. 5, 1157 (2024). https://doi.org/10.1007/s42979-024-03469-x

