research-article

Dynamic Transfer Exemplar based Facial Emotion Recognition Model Toward Online Video

Published: 6 October 2022

Abstract

In this article, we focus on dynamic facial emotion recognition from online video. Combining deep neural networks with transfer learning theory, we propose a novel model named DT-EFER. In detail, DT-EFER uses GoogLeNet to extract deep features of key images from video clips. To handle the dynamic facial emotion recognition scenario, the framework then introduces transfer learning theory; to improve recognition performance, DT-EFER focuses on the differences between key images rather than on the images themselves. Moreover, the time complexity of the model remains low even though previous exemplars are introduced. Experiments on two datasets, BAUM-1s and the Extended Cohn–Kanade dataset, demonstrate the efficiency of the proposed DT-EFER model in comparison with other exemplar-based models.



Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 2s
June 2022, 383 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3561949
Editor: Abdulmotaleb El Saddik


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 January 2022
• Revised: 12 April 2022
• Accepted: 6 May 2022
• Online AM: 20 May 2022
• Published: 6 October 2022

Published in TOMM Volume 18, Issue 2s

            Qualifiers

            • research-article
            • Refereed
