Dominant and complementary emotion recognition using hybrid recurrent neural network

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Human emotion recognition is a complex research problem due to the diversity of individual faces, especially in scenarios involving compound emotions. A compound emotion combines two basic emotions: a dominant one and a complementary one. Several studies have reported that recognizing compound emotions is more challenging than recognizing basic emotions. This paper proposes a hybrid recurrent neural network, CNN-LSTM, for compound expression recognition, trained on Facial Action Units extracted from faces displaying compound emotions. The analysis employs the iCV-MEFED dataset, which comprises 50 classes of compound emotions captured in a controlled environment with the help of professional psychologists. The main contribution of the paper is the fusion of a CNN and an LSTM network, with the CNN acting as a feature extractor and reducer of feature dimensions, and the LSTM training on the resulting features using its memory blocks. This method shows a significant improvement in performance over previous studies on the same dataset. Across all 50 classes of compound emotions, the overall performance is a recognition accuracy of 24.7%, a precision of 25.3%, a recall of 24.6% and an F1-score of 24%. The proposed method achieves comparable and encouraging results and forms a basis for future improvements.
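To make the described architecture concrete, the following is a minimal sketch of the CNN-LSTM idea, assuming a Keras implementation, an input of 35 Facial Action Unit (AU) features per face image (roughly what an AU extractor such as OpenFace produces) and the 50 iCV-MEFED compound classes. The layer sizes, pooling and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: a 1-D CNN acts as feature extractor / dimensionality
# reducer over a Facial Action Unit (AU) vector, and an LSTM trains on the
# reduced features. All sizes below are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_AUS = 35        # assumed AU feature dimension per face image
NUM_CLASSES = 50    # compound-emotion classes in iCV-MEFED

model = models.Sequential([
    layers.Input(shape=(NUM_AUS, 1)),                         # AU vector as a 1-D sequence
    layers.Conv1D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling1D(2),                                   # reduces the feature dimension
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                          # memory blocks over pooled features
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy tensors with the assumed shapes, just to demonstrate the training call.
x = np.random.rand(8, NUM_AUS, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```

Treating the AU vector as a short 1-D sequence is one way to let convolution and pooling shrink the input before the LSTM sees it, mirroring the feature-extraction and memory-block roles the abstract assigns to the two networks.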

Data availability

Not applicable.

Acknowledgements

Not applicable.

Funding

This research received no external funding.

Author information

Contributions

SJ and KY contributed to investigation; SJ contributed to research design; KY contributed to validation; SJ contributed to writing; and KY contributed to review and editing.

Corresponding author

Correspondence to Salman Mohammed Jiddah.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Consent for publication

All authors have read and agreed to the submission of this version of the manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jiddah, S.M., Yurtkan, K. Dominant and complementary emotion recognition using hybrid recurrent neural network. SIViP 17, 3415–3423 (2023). https://doi.org/10.1007/s11760-023-02563-6
