
Automated facial expression recognition using exemplar hybrid deep feature generation technique

  • Section: Data analytics and machine learning
  • Published in: Soft Computing

A Correction to this article was published on 01 February 2024, and this article has been updated.

Abstract

The perception and recognition of emotional expressions provide essential information about individuals’ social behavior, so decoding these expressions is important; facial expression recognition (FER) is accordingly one of the most frequently studied topics in this area. An accurate FER model has four prime phases: (i) facial areas are segmented from the face images; (ii) exemplar deep features are generated using two pretrained networks (AlexNet and MobileNetV2), which are merged into a single feature generation function; (iii) the 1000 most valuable features are selected by neighborhood component analysis (NCA); and (iv) these 1000 features are classified with a support vector machine (SVM). We developed our model using five FER corpora: TFEID, JAFFE, KDEF, CK+, and Oulu-CASIA, on which it yields accuracies of 97.01%, 98.59%, 96.54%, 100%, and 100%, respectively. These results show that the proposed exemplar deep feature extraction approach attains high success rates for automated FER across diverse databases.
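The four-phase pipeline outlined above can be sketched in code. This is a minimal illustration under several assumptions, not the authors' implementation: a 16-bin intensity histogram stands in for the AlexNet/MobileNetV2 feature extractors, ANOVA F-scores (`f_classif`) stand in for NCA-based feature selection, "exemplar" is taken to mean extracting features from fixed-size patches as well as the whole image, and the patch size, image size, and synthetic two-class data are all illustrative choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def toy_extractor(region):
    """Stand-in for a pretrained CNN feature generator: a normalized
    16-bin intensity histogram. In the paper, AlexNet and MobileNetV2
    features would be concatenated at this point instead."""
    hist, _ = np.histogram(region, bins=16, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def exemplar_features(image, patch=56):
    """Phase (ii): concatenate features of the whole image and of every
    non-overlapping fixed-size patch (exemplar)."""
    h, w = image.shape
    feats = [toy_extractor(image)]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.append(toy_extractor(image[y:y + patch, x:x + patch]))
    return np.concatenate(feats)

# Synthetic "face" images: two classes differing in overall brightness.
X = np.stack([
    exemplar_features(rng.random((224, 224)) * (0.5 + 0.5 * (i % 2)))
    for i in range(40)
])
y = np.array([i % 2 for i in range(40)])

# Phases (iii)-(iv): keep the k most informative features, then classify
# with an SVM. (The paper selects 1000 features via NCA weights; here we
# select 50 of 272 with f_classif as a simple stand-in.)
model = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
model.fit(X, y)
print(X.shape[1])  # 17 regions (1 whole image + 16 patches) x 16 bins = 272
print(model.score(X, y))
```

A 224-pixel image with 56-pixel patches yields a 4 × 4 grid, so each image contributes 17 histogram vectors before selection; swapping in real CNN activations and an NCA-based selector would recover the structure the abstract describes.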


Data availability

The authors do not have permission to share data.

Change history


Funding

The authors state that this work has not received any funding.

Author information

Authors and Affiliations

Authors

Contributions

MB, IT, SD, PDB, TT, KHC, and URA were involved in conceptualization, methodology, and writing—review and editing; TT and SD were involved in software; MB, IT, and SD were involved in validation and investigation; MB and IT were involved in formal analysis and visualization; MB, IT, and PDB were involved in resources; MB, IT, PDB, SD, and TT were involved in data curation; MB, SD, and TT were involved in writing—original draft preparation; URA was involved in supervision and project administration. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Sengul Dogan.

Ethics declarations

Conflict of interest

The authors of this manuscript declare no conflicts of interest.

Ethical approval

Not applicable.

Informed consent

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original article has been updated due to a reference update.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Baygin, M., Tuncer, I., Dogan, S. et al. Automated facial expression recognition using exemplar hybrid deep feature generation technique. Soft Comput 27, 8721–8737 (2023). https://doi.org/10.1007/s00500-023-08230-9
