A Temporal Approach to Facial Emotion Expression Recognition

Conference paper in Artificial Intelligence Research (SACAIR 2021)

Abstract

Systems embedded with facial emotion expression recognition models allow emotion-related knowledge to be applied to human-computer interaction, giving users a more satisfying experience. Facial expressions are among the most widely used non-verbal cues in communication, and accurate, real-time estimation of expressions and emotional changes is expected to improve existing online platforms. Mapping estimated expressions to emotions is highly useful in many applications, including sentiment analysis, market analysis, and the assessment of student comprehension. Feedback based on estimated emotions plays a crucial role in improving the usability of such models, yet few, if any, feedback mechanisms have been incorporated into them. The proposed work therefore investigates the use of deep learning to identify and estimate emotional changes in human faces, and further analyses the estimated emotions to provide feedback. The methodology follows a temporal approach comprising a VGG-19 pre-trained network for feature extraction, a BiLSTM architecture for facial emotion expression recognition, and mapping criteria that map estimated expressions to a resultant emotion (positive, negative, or neutral). The CNN-BiLSTM model achieved an accuracy of 91% on a test set covering the seven basic expressions of anger, disgust, fear, happiness, surprise, sadness, and neutral from the Denver Intensity of Spontaneous Facial Action (DISFA) database. The Dataset for Affective States in E-Environments (DAiSEE), labeled with boredom, frustration, confusion, and engagement, was used to further test the proposed model on the seven basic expressions and to re-evaluate the mapping model used to map expressions to emotions.
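To make the described pipeline concrete, the sketch below shows one plausible Keras realization of the architecture named in the abstract: a frozen VGG-19 backbone extracts per-frame features, a BiLSTM models the temporal dynamics of a clip, and a lookup table maps the predicted expression to a coarse emotion. This is a minimal sketch, not the authors' code; the sequence length, layer sizes, and the expression-to-emotion assignments are illustrative assumptions.

```python
# Minimal sketch of a VGG-19 + BiLSTM expression classifier with an
# expression-to-emotion mapping. Hyperparameters are assumptions, not the
# configuration reported in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

SEQ_LEN = 16          # assumed number of frames per clip
NUM_EXPRESSIONS = 7   # anger, disgust, fear, happiness, surprise, sadness, neutral

# Frozen VGG-19 backbone used purely as a per-frame feature extractor.
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
backbone.trainable = False

# TimeDistributed applies the backbone to every frame of the clip; the
# BiLSTM then models temporal dynamics across the resulting feature sequence.
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 224, 224, 3)),
    layers.TimeDistributed(backbone),
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(NUM_EXPRESSIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "surprise", "sadness", "neutral"]

# A hypothetical mapping from expressions to the positive/negative/neutral
# emotion categories mentioned in the abstract; the paper's own criteria
# may assign expressions differently.
EXPRESSION_TO_EMOTION = {
    "happiness": "positive", "surprise": "positive",
    "anger": "negative", "disgust": "negative",
    "fear": "negative", "sadness": "negative",
    "neutral": "neutral",
}

def map_to_emotion(probabilities: np.ndarray) -> str:
    """Map a 7-way expression distribution to a coarse emotion label."""
    return EXPRESSION_TO_EMOTION[EXPRESSIONS[int(np.argmax(probabilities))]]
```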



Author information

Correspondence to Christine Asaju.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Asaju, C., Vadapalli, H. (2022). A Temporal Approach to Facial Emotion Expression Recognition. In: Jembere, E., Gerber, A.J., Viriri, S., Pillay, A. (eds) Artificial Intelligence Research. SACAIR 2021. Communications in Computer and Information Science, vol 1551. Springer, Cham. https://doi.org/10.1007/978-3-030-95070-5_18


  • DOI: https://doi.org/10.1007/978-3-030-95070-5_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95069-9

  • Online ISBN: 978-3-030-95070-5

  • eBook Packages: Computer Science (R0)
