
Impact of lockdown on Generation-Z: a fuzzy based multimodal emotion recognition approach using CNN

Published in Multimedia Tools and Applications

Abstract

Most research to date on the effects of the pandemic-induced lockdown has been limited to areas such as clinical studies, the possible impact on the global economy, or issues related to migrant workers. However, little attempt has been made during this period to understand the emotions of Generation Z, one of the prime victims of this pandemic. Members of this generation were born after 1996, so most of them are studying in schools, colleges, or universities. In the proposed work, the emotions of students of an engineering college in West Bengal, India, have been analyzed. A multimodal approach has been applied to obtain a vivid picture of the minds of 74 students. A valence-arousal inspired Organize-Split-Fuse (OSF) model has been proposed to achieve this objective. Two conventional Convolutional Neural Network (CNN) models have been employed separately to classify human emotions using Acoustic Information (AcI) and Facial Expressions (FE) from the generated dataset. The employed models achieved satisfactory performance on the benchmark datasets (91% and 72.7% accuracy, respectively). Afterward, the classified emotions were organized and split. Finally, a fuzzy rule-based classification system was used to fuse both emotion streams at the decision level. The results show that junior students exhibit more positivity and fewer neutral emotions than seniors. In-depth analysis shows that boys are more apprehensive than girls, while girls have a more optimistic outlook for the future. The year-wise observations reveal the chaotic state of students' minds.
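As a concrete illustration of the pipeline described above: the abstract does not specify the two CNN architectures, so the following is a minimal sketch of what the Facial Expression (FE) branch might look like, assuming 48 × 48 grayscale face crops and seven basic emotion classes as in common benchmarks such as FER-2013. All layer sizes are illustrative placeholders, not the authors' actual configuration; the Acoustic Information (AcI) branch would typically feed MFCC-style features to an analogous network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # e.g., angry, disgust, fear, happy, sad, surprise, neutral

def build_fe_cnn(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    """Small illustrative CNN for facial-expression classification on grayscale crops."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),                     # 48x48 -> 24x24
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),                     # 24x24 -> 12x12
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),                     # 12x12 -> 6x6
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                       # regularization for a small dataset
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fe_cnn()
model.summary()
```

In the pipeline the abstract describes, each such branch emits a discrete emotion label per student sample; the OSF stage then organizes and splits these labels before the fuzzy rule-based step fuses them at the decision level (a sketch of that step is given after the Appendix figure captions below).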




Acknowledgments

We appreciate the efforts of the students of Hooghly Engineering and Technology College in expressing their views during the lockdown period. We also express our gratitude to the College authorities and all stakeholders for their support and encouragement.

Author information


Corresponding author

Correspondence to Sirshendu Hore.

Ethics declarations

Conflict of Interest/Competing Interest

The authors certify that there is no conflict of interest regarding the material discussed in the manuscript. The authors also declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

The appendix presents Figs. 22 and 23.

Fig. 22: Facial expression of an engineering student

Fig. 23: Valence-arousal based 2D emotion model
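Figure 23's valence-arousal plane suggests how the decision-level fusion might operate. Below is a minimal sketch of a fuzzy rule-based fusion of the FE and AcI predictions, assuming each discrete emotion is first mapped to an illustrative valence coordinate. The coordinates, triangular membership functions, and rule consequents are placeholder assumptions, not the rule base actually used in the paper.

```python
# Illustrative (valence, arousal) coordinates in [-1, 1] x [-1, 1] for the
# discrete emotions; placements follow the circumplex layout only qualitatively.
VA_MAP = {
    "happy":    ( 0.8,  0.5),
    "surprise": ( 0.4,  0.8),
    "neutral":  ( 0.0,  0.0),
    "sad":      (-0.6, -0.4),
    "fear":     (-0.6,  0.7),
    "angry":    (-0.7,  0.6),
}

LABELS = ("negative", "neutral", "positive")

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify_valence(v):
    """Membership degrees of a valence value in Negative / Neutral / Positive."""
    return {
        "negative": tri(v, -1.5, -1.0, 0.0),  # left shoulder extended past -1
        "neutral":  tri(v, -0.5,  0.0, 0.5),
        "positive": tri(v,  0.0,  1.0, 1.5),  # right shoulder extended past +1
    }

def fuse(fe_emotion, ac_emotion):
    """Mamdani-style decision-level fusion of the FE and AcI predictions.

    One rule per (FE label, AcI label) pair; the firing strength is the min
    of the antecedent memberships, and disagreeing modalities fall back to
    'neutral' as a simple illustrative consequent.
    """
    mu_fe = fuzzify_valence(VA_MAP[fe_emotion][0])
    mu_ac = fuzzify_valence(VA_MAP[ac_emotion][0])
    fired = {label: 0.0 for label in LABELS}
    for lf in LABELS:
        for la in LABELS:
            strength = min(mu_fe[lf], mu_ac[la])
            consequent = lf if lf == la else "neutral"
            fired[consequent] = max(fired[consequent], strength)
    return max(fired, key=fired.get), fired

print(fuse("happy", "happy"))    # -> ('positive', ...)
print(fuse("happy", "neutral"))  # -> ('neutral', ...)
```

In a Mamdani-style system like this, swapping the hand-coded membership functions for a fuzzy-logic library, or extending the antecedents to the arousal axis, changes only the rule table, not the overall fusion structure.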

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hore, S., Bhattacharya, T. Impact of lockdown on Generation-Z: a fuzzy based multimodal emotion recognition approach using CNN. Multimed Tools Appl 82, 33835–33863 (2023). https://doi.org/10.1007/s11042-023-14543-6

