
Research Progress of EEG-Based Emotion Recognition: A Survey

Published: 08 July 2024

Abstract

Emotion recognition based on electroencephalography (EEG) signals has emerged as a prominent research field, enabling objective evaluation of diseases such as depression as well as emotion detection in healthy people. Starting from the basic concepts of temporal, frequency, and spatial features in EEG and the methods for cross-domain feature fusion, this survey extends from the overfitting challenge in single-modal EEG settings to the problem of modeling heterogeneous modalities in multimodal conditions. It explores issues such as feature selection, sample scarcity, cross-subject emotional transfer, physiological knowledge discovery, multimodal fusion methods, and missing modalities. These findings provide clues for researchers to further investigate emotion recognition based on EEG signals.

Supplementary Material

PDF File (csur-2023-0513-file002.pdf)

References

[1]
Mojtaba Khomami Abadi, Ramanathan Subramanian, Seyed Mostafa Kia, Paolo Avesani, Ioannis Patras, and Nicu Sebe. 2015. DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Transactions on Affective Computing 6, 3 (2015), 209–222.
[2]
Ljubomir I. Aftanas, Natalia V. Lotova, Vladimir I. Koshkarov, Vera L. Pokrovskaja, Serguei A. Popov, and Victor P. Makhnev. 1997. Non-linear analysis of emotion EEG: calculation of Kolmogorov entropy and the principal Lyapunov exponent. Neuroscience Letters 226, 1 (1997), 13–16.
[3]
Soraia M. Alarcao and Manuel J. Fonseca. 2017. Emotions recognition using EEG signals: A survey. IEEE Transactions on Affective Computing 10, 3 (2017), 374–393.
[4]
Sharifa Alghowinem, Roland Goecke, Michael Wagner, Gordon Parker, and Michael Breakspear. 2013. Eye movement analysis for depression detection. In Proceedings of the 2013 IEEE International Conference on Image Processing. IEEE, 4220–4224.
[5]
Salma Alhagry, Aly Aly Fahmy, and Reda A. El-Khoribi. 2017. Emotion recognition based on EEG using LSTM recurrent neural network. Emotion. 8, 10 (2017), 355–358.
[6]
Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning (PMLR), 214–223.
[7]
John Atkinson and Daniel Campos. 2016. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Systems with Applications 47 (2016), 35–41.
[8]
Areej Babiker, Ibrahima Faye, Wajid Mumtaz, Aamir Saeed Malik, and Hiroki Sato. 2019. EEG in classroom: EMD features to detect situational interest of students during learning. Multimedia Tools and Applications 78, 12 (2019), 16261–16281.
[9]
Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence 41, 2 (2018), 423–443.
[10]
Andrey V. Bocharov, Gennady G. Knyazev, and Alexander N. Savostyanov. 2017. Depression and implicit emotion processing: An EEG study. Neurophysiologie Clinique/Clinical Neurophysiology 47, 3 (2017), 225–230.
[11]
Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J. Smola. 2006. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 22, 14 (2006), e49–e57.
[12]
Margaret M. Bradley, Laura Miccoli, Miguel A. Escrig, and Peter J. Lang. 2008. The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 45, 4 (2008), 602–607.
[13]
Hao Chen, Zhunan Li, Ming Jin, and Jinpeng Li. 2021. MEERNet: Multi-source EEG-based emotion recognition network for generalization across subjects and sessions. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2021, 6094–6097.
[14]
Jing Chen, Bin Hu, Lixin Xu, Philip Moore, and Yun Su. 2015. Feature-level fusion of multimodal physiological signals for emotion recognition. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 395–399.
[15]
Sihan Chen, Jiajia Tang, Li Zhu, and Wanzeng Kong. 2023. A multi-stage dynamical fusion network for multimodal emotion recognition. Cognitive Neurodynamics 17 (2023), 671--680.
[16]
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555. https://arxiv.org/abs/1412.3555
[17]
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems 29 (2016), 3837–3845.
[18]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. https://arxiv.org/abs/1810.04805
[19]
Anselm Doll, Britta K. Hölzel, Satja Mulej Bratec, Christine C. Boucard, Xiyao Xie, Afra M. Wohlschläger, and Christian Sorg. 2016. Mindful attention to breath regulates emotions via increased amygdala–prefrontal cortex connectivity. Neuroimage 134 (2016), 305–313.
[20]
Changde Du, Changying Du, Hao Wang, Jinpeng Li, Wei-Long Zheng, Bao-Liang Lu, and Huiguang He. 2018. Semi-supervised deep generative modelling of incomplete multi-modality emotional data. In Proceedings of the 26th ACM International Conference on Multimedia. 108–116.
[21]
Paul Ekman and Wallace V. Friesen. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17, 2 (1971), 124–129.
[22]
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (PMLR), 1126–1135.
[23]
Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning (PMLR), 1180–1189.
[24]
Muhammad Ghifary, David Balduzzi, W. Bastiaan Kleijn, and Mengjie Zhang. 2016. Scatter component analysis: A unified framework for domain adaptation and domain generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 7 (2016), 1414–1430.
[25]
Xiaotong Gu, Zehong Cao, Alireza Jolfaei, Peng Xu, Dongrui Wu, Tzyy-Ping Jung, and Chin-Teng Lin. 2021. EEG-based brain-computer interfaces (BCIs): A survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications. IEEE/ACM Transactions on Computational Biology and Bioinformatics 18, 5 (2021), 1645–1666.
[26]
Jiang-Jian Guo, Rong Zhou, Li-Ming Zhao, and Bao-Liang Lu. 2019. Multimodal emotion recognition from eye image, eye movement and EEG using deep neural networks. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 3071–3074.
[27]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[28]
Zhipeng He, Zina Li, Fuzhou Yang, Lei Wang, Jingcong Li, Chengju Zhou, and Jiahui Pan. 2020. Advances in multimodal emotion recognition based on brain–computer interfaces. Brain Sciences 10, 10 (2020), 687.
[29]
Xiaohua Huang, Jukka Kortelainen, Guoying Zhao, Xiaobai Li, Antti Moilanen, Tapio Seppänen, and Matti Pietikäinen. 2016. Multi-modal emotion analysis from facial expressions and electroencephalogram. Computer Vision and Image Understanding 147 (2016), 114–124.
[30]
Ziyu Jia, Youfang Lin, Xiyang Cai, Haobin Chen, Haijun Gou, and Jing Wang. 2020. SST-EmotionNet: Spatial-spectral-temporal based attention 3D dense network for EEG emotion recognition. In Proceedings of the 28th ACM International Conference on Multimedia. 2909–2917.
[31]
Ziyu Jia, Youfang Lin, Jing Wang, Zhiyang Feng, Xiangheng Xie, and Caijie Chen. 2021. HetEmotionNet: Two-stream heterogeneous graph recurrent neural network for multi-modal emotion recognition. In Proceedings of the 29th ACM International Conference on Multimedia. 1047–1056.
[32]
Wei-Bang Jiang, Ziyi Li, Wei-Long Zheng, and Bao-Liang Lu. 2024. Functional emotion transformer for EEG-assisted cross-modal emotion recognition. In Proceedings of the ICASSP 2024–2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1841–1845.
[33]
Wei-Bang Jiang, Xu Yan, Wei-Long Zheng and Bao-Liang Lu. 2023. Elastic graph transformer networks for EEG-based emotion recognition. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1–5.
[34]
Wei-Bang Jiang, Li-Ming Zhao, Ping Guo, and Bao-Liang Lu. 2021. Discriminating surprise and anger from EEG and eye movements with a graph network. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 1353–1357.
[35]
Magdiel Jiménez-Guarneros and Pilar Gómez-Gil. 2021. Standardization-refinement domain adaptation method for cross-subject EEG-based classification in imagined speech recognition. Pattern Recognition Letters 141 (2021), 54–60.
[36]
Jun-Su Kang, Swathi Kavuri, and Minho Lee. 2019. ICA-evolution based data augmentation with ensemble deep neural networks using time and frequency kernels for emotion recognition from EEG-data. IEEE Transactions on Affective Computing 13, 2 (2019), 616--627.
[37]
K. Kannadasan, Sridevi Veerasingam, B. Shameedha Begum, and N. Ramasubramanian. 2023. An EEG-based subject-independent emotion recognition model using a differential-evolution-based feature selection algorithm. Knowledge and Information Systems 65, 1 (2023), 341–377.
[38]
Stamos Katsigiannis and Naeem Ramzan. 2017. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE Journal of Biomedical and Health Informatics 22, 1 (2017), 98–107.
[39]
Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In Proceedings of the ICML Deep Learning Workshop.
[40]
Sander Koelstra, Christian Muhl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. 2011. DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing 3, 1 (2011), 18–31.
[41]
Yu-Ting Lan, Wei Liu, and Bao-Liang Lu. 2020. Multimodal emotion recognition using deep generalized canonical correlation analysis with an attention mechanism. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–6.
[42]
Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M. Gordon, Chou P. Hung, and Brent J. Lance. 2018. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. Journal of Neural Engineering 15, 5 (2018), 056013.
[43]
Chao Li, Zhongtian Bao, Linhao Li, and Ziping Zhao. 2020. Exploring temporal representations by leveraging attention-based bidirectional LSTM-RNNs for multi-modal emotion recognition. Information Processing & Management 57, 3 (2020), 102185.
[44]
He Li, Yi-Ming Jin, Wei-Long Zheng, and Bao-Liang Lu. 2018. Cross-subject emotion recognition using deep adaptation networks. In Proceedings of the Neural Information Processing: 25th International Conference (ICONIP ’18). 403–413.
[45]
Jingcong Li, Shuqi Li, Jiahui Pan, and Fei Wang. 2021. Cross-subject EEG emotion recognition with self-organized graph neural network. Frontiers in Neuroscience 15 (2021), 611653.
[46]
Jinpeng Li, Shuang Qiu, Changde Du, Yixin Wang, and Huiguang He. 2020. Domain adaptation for EEG emotion recognition based on latent representation similarity. IEEE Transactions on Cognitive and Developmental Systems 12, 2 (2020), 344–353.
[47]
Jinpeng Li, Shuang Qiu, Yuan-Yuan Shen, Cheng-Lin Liu, and Huiguang He. 2020. Multisource transfer learning for cross-subject EEG emotion recognition. IEEE Transactions on Cybernetics 50, 7 (July 2020), 3281–3293.
[48]
Jinyu Li, Haoqiang Hua, Zhihui Xu, Lin Shu, Xiangmin Xu, Feng Kuang, and Shibin Wu. 2022. Cross-subject EEG emotion recognition combined with connectivity features and meta-transfer learning. Computers in Biology and Medicine 145 (2022), 105519.
[49]
Rui Li, Yiting Wang, and Bao-Liang Lu. 2021. A multi-domain adaptive graph convolutional network for EEG-based emotion recognition. In Proceedings of the 29th ACM International Conference on Multimedia. 5565–5573.
[50]
Rui Li, Yiting Wang, Wei-Long Zheng, and Bao-Liang Lu. 2022. A multi-view spectral-spatial-temporal masked autoencoder for decoding emotions with self-supervised learning. In Proceedings of the 30th ACM International Conference on Multimedia. 6–14.
[51]
Tian-Hao Li, Wei Liu, Wei-Long Zheng, and Bao-Liang Lu. 2019. Classification of five emotions from EEG and eye movement signals: Discrimination ability and stability over time. In Proceedings of the 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 607–610.
[52]
Xiang Li, Dawei Song, Peng Zhang, Yazhou Zhang, Yuexian Hou, and Bin Hu. 2018. Exploring EEG features in cross-subject emotion recognition. Frontiers in Neuroscience 12 (2018), 162.
[53]
Xiang Li, Yazhou Zhang, Prayag Tiwari, Dawei Song, Bin Hu, Meihong Yang, Zhigang Zhao, Neeraj Kumar, and Pekka Marttinen. 2022. EEG based emotion recognition: A tutorial and review. ACM Computing Surveys (CSUR) 55, 4 (2022), 1–57.
[54]
Xuelong Li. 2023. Multi-modal cognitive computing (in Chinese). Scientia Sinica Informationis 53, 1 (2023), 1–32.
[55]
Yang Li, Ji Chen, Fu Li, Boxun Fu, Hao Wu, Youshuo Ji, Yijin Zhou, Yi Niu, Guangming Shi, and Wenming Zheng. 2022. GMSS: Graph-based multi-task self-supervised learning for EEG emotion recognition. IEEE Transactions on Affective Computing 14 (2022), 2512–2525.
[56]
Yang Li, Lei Wang, Wenming Zheng, Yuan Zong, Lei Qi, Zhen Cui, Tong Zhang, and Tengfei Song. 2020. A novel bi-hemispheric discrepancy model for EEG emotion recognition. IEEE Transactions on Cognitive and Developmental Systems 13, 2 (2020), 354–367.
[57]
Yang Li, Wenming Zheng, Lei Wang, Yuan Zong, and Zhen Cui. 2019. From regional to global brain: A novel hierarchical spatial-temporal neural network model for EEG emotion recognition. IEEE Transactions on Affective Computing 13, 2 (2019), 568–578.
[58]
Zhongjie Li, Gaoyan Zhang, Jianwu Dang, Longbiao Wang, and Jianguo Wei. 2021. Multi-modal emotion recognition based on deep learning of EEG and audio signals. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–6.
[59]
Zhunan Li, Hao Chen, Ming Jin, and Jinpeng Li. 2021. Reducing the calibration effort of EEG emotion recognition using domain adaptation with soft labels. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 5962–5965.
[60]
Ziyi Li, Luyu Liu, Yihui Zhu, and Bao-Liang Lu. 2022. Exploring sex differences in key frequency bands and channel connections for EEG-based emotion recognition. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 4793–4796.
[61]
Shuaiqi Liu, Xu Wang, Ling Zhao, Jie Zhao, Qi Xin, and Shui-Hua Wang. 2021. Subject-independent emotion recognition of EEG signals based on dynamic empirical convolutional neural network. IEEE/ACM Transactions on Computational Biology and Bioinformatics 18, 5 (September-Octover 2021), 1710–1721.
[62]
Wei Liu, Jie-Lin Qiu, Wei-Long Zheng, and Bao-Liang Lu. 2019. Multimodal emotion recognition using deep canonical correlation analysis. arXiv:1908.05349. https://arxiv.org/abs/1908.05349
[63]
Wei Liu, Jie-Lin Qiu, Wei-Long Zheng, and Bao-Liang Lu. 2021. Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition. IEEE Transactions on Cognitive and Developmental Systems 14, 2 (2021), 715–729.
[64]
Wei Liu, Wei-Long Zheng, Ziyi Li, Si-Yuan Wu, Lu Gan, and Bao-Liang Lu. 2022. Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French People. Journal of Neural Engineering 19, 2 (2022), 026012.
[65]
Wei Liu, Wei-Long Zheng, and Bao-Liang Lu. 2016. Emotion recognition using multimodal deep learning. In Proceedings of the International Conference on Neural Information Processing. Springer, 521–529.
[66]
BaoLiang Lu, Yaqian Zhang, and WeiLong Zheng. 2021. A survey of affective brain-computer interface (in Chinese). Chinese Journal of Intelligent Science and Technology 3, 1 (2021), 36–48.
[67]
Yifei Lu, Wei-Long Zheng, Binbin Li, and Bao-Liang Lu. 2015. Combining eye movements and EEG to enhance emotion recognition. In Proceedings of the 24th International Joint Conference on Artificial Intelligence. 1170–1176.
[68]
Yun Luo and Bao-Liang Lu. 2018. EEG data augmentation for emotion recognition using a conditional Wasserstein GAN. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2535–2538.
[69]
Yun Luo and Bao-Liang Lu. 2021. Wasserstein-distance-based multi-source adversarial domain adaptation for emotion recognition and vigilance estimation. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 1424–1428.
[70]
Yun Luo, Si-Yang Zhang, Wei-Long Zheng, and Bao-Liang Lu. 2018. WGAN domain adaptation for EEG-based emotion recognition. In Proceedings of the International Conference on Neural Information Processing. Springer, 275–286.
[71]
Yun Luo, Li-Zhen Zhu, and Bao-Liang Lu. 2019. A GAN-based data augmentation method for multimodal emotion recognition. In Proceedings of the International Symposium on Neural Networks. Springer, 141–150.
[72]
Bo-Qun Ma, He Li, Wei-Long Zheng, and Bao-Liang Lu. 2019. Reducing the subject variability of EEG signals with adversarial domain generalization. In Proceedings of the International Conference on Neural Information Processing. Springer, 30–42.
[73]
Jiaxin Ma, Hao Tang, Wei-Long Zheng, and Bao-Liang Lu. 2019. Emotion recognition using multimodal residual LSTM network. In Proceedings of the 27th ACM International Conference on Multimedia. 176–183.
[74]
Juan Manuel Mayor-Torres, Mirco Ravanelli, Sara E. Medina-DeVilliers, Matthew D. Lerner, and Giuseppe Riccardi. 2021. Interpretable SincNet-based deep learning for emotion recognition from EEG brain activity. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 412–415.
[75]
Juan Abdon Miranda-Correa, Mojtaba Khomami Abadi, Nicu Sebe, and Ioannis Patras. 2021. AMIGOS: A dataset for affect, personality and mood research on individuals and groups. IEEE Transactions on Affective Computing 12, 2 (2021), 479–493.
[76]
Kana Miyamoto, Hiroki Tanaka, and Satoshi Nakamura. 2021. Meta-learning for emotion prediction from EEG while listening to music. In Proceedings of the Companion Publication of the 2021 International Conference on Multimodal Interaction. 324–328.
[77]
Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor. 2017. AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 10, 1 (2017), 18–31.
[78]
Saeid Motiian, Quinn Jones, Seyed Iranmanesh, and Gianfranco Doretto. 2017. Few-shot adversarial domain adaptation. Advances in Neural Information Processing Systems 30 (2017), 6670–6680.
[79]
Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. 2013. Domain generalization via invariant feature representation. In Proceedings of the International Conference on Machine Learning (PMLR), 10–18.
[80]
Murugappn Murugappan, Mohamed Rizon, Ramachandran Nagarajan, S. Yaacob, I. Zunaidi, and D. Hazry. 2007. EEG feature extraction for classifying emotions using FCM and FKM. International Journal of Computers and Communications. 1, 2 (2007), 21–25.
[81]
Bahareh Nakisa, Mohammad Naim Rastgoo, Dian Tjondronegoro, and Vinod Chandran. 2018. Evolutionary computation algorithms for feature selection of EEG-based emotion recognition using mobile sensors. Expert Systems with Applications 93 (2018), 143–155.
[82]
Wang Kay Ngai, Haoran Xie, Di Zou, and Kee-Lee Chou. 2022. Emotion recognition based on convolutional neural networks and heterogeneous bio-signal data sources. Information Fusion 77 (2022), 107–117.
[83]
Run Ning, C. L. Philip Chen, and Tong Zhang. 2021. Cross-subject EEG emotion recognition using domain adaptive few-shot learning networks. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 1468–1472.
[84]
Robert Oostenveld and Peter Praamstra. 2001. The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology 112, 4 (2001), 713–719.
[85]
Hanchuan Peng, Fuhui Long, and Chris Ding. 2005. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 8 (2005), 1226–1238.
[86]
Louise H. Phillips, Clare Scott, Julie D. Henry, Donald Mowat, and J. Stephen Bell. 2010. Emotion perception in Alzheimer's disease and mood disorder in old age. Psychology and Aging 25, 1 (2010), 38–47.
[87]
Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2015. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 2539–2544.
[88]
Darshana Priyasad, Tharindu Fernando, Simon Denman, Sridha Sridharan, and Clinton Fookes. 2022. Affect recognition from scalp-EEG using channel-wise encoder networks coupled with geometric deep learning and multi-channel feature fusion. Knowledge-Based Systems 250 (2022), 109038.
[89]
Jie-Lin Qiu, Wei Liu, and Bao-Liang Lu. 2018. Multi-view emotion recognition using deep canonical correlation analysis. In Proceedings of the International Conference on Neural Information Processing. Springer, 221–231.
[90]
XueLiang Quan, Zhigang Zeng, Jianhua Jiang, Yaqian Zhang, BaoLiang Lu, and Dongrui Wu. 2021. Physiological signals based affective computing: A systematic review (in Chinese). Acta Automatica Sinica. 47, 8 (2021), 1769–1784.
[91]
Yuan Rao, Lianwei Wu, Yiming Wang, and Cong Feng. 2018. Research progress on emotional computation technology based on semantic analysis (in Chinese). Ruan Jian Xue Bao/Journal of Software 29, 8 (2018), 2397–2426.
[92]
Soheil Rayatdoost, David Rudrauf, and Mohammad Soleymani. 2020. Expression-guided EEG representation learning for emotion recognition. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 3222–3226.
[93]
Soheil Rayatdoost, David Rudrauf, and Mohammad Soleymani. 2020. Multimodal gated information fusion for emotion recognition from EEG signals and facial behaviors. In Proceedings of the 2020 International Conference on Multimodal Interaction. 655–659.
[94]
Soheil Rayatdoost and Mohammad Soleymani. 2018. Cross-corpus EEG-based emotion recognition. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 1–6.
[95]
James A. Russell. 1980. A circumplex model of affect. Journal of Personality and Social Psychology 39, 6 (1980), 1161.
[96]
Pritam Sarkar and Ali Etemad. 2020. Self-supervised ECG representation learning for emotion recognition. IEEE Transactions on Affective Computing. 13, 3 (2020), 1541–1554.
[97]
Xinke Shen, Xianggen Liu, Xin Hu, Dan Zhang, and Sen Song. 2022. Contrastive learning of subject-invariant EEG representations for cross-subject emotion recognition. IEEE Transactions on Affective Computing 14 (2022), 2496–2511.
[98]
Yangyang Shu and Shangfei Wang. 2017. Emotion recognition through integrating EEG and peripheral signals. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2871–2875.
[99]
Aniket Singh Rajpoot and Mahesh Raveendranatha Panicker. 2021. Subject independent emotion recognition using EEG signals employing attention driven neural networks. arXiv:2106.03461. https://arxiv.org/abs/2106.03461
[100]
Uttam Singh, Rabi Shaw, and Bidyut Kr Patra. 2023. A data augmentation and channel selection technique for grading human emotions on DEAP dataset. Biomedical Signal Processing and Control 79 (2023), 104060.
[101]
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems 30 (2017), 4080--4090.
[102]
Mohammad Soleymani, Jeroen Lichtenauer, Thierry Pun, and Maja Pantic. 2012. A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing 3, 1 (2012), 42–55.
[103]
Tengfei Song, Suyuan Liu, Wenming Zheng, Yuan Zong, and Zhen Cui. 2020. Instance-adaptive graph for EEG emotion recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 2701–2708.
[104]
Tengfei Song, Wenming Zheng, Cheng Lu, Yuan Zong, Xilei Zhang, and Zhen Cui. 2019. MPED: A multi-modal physiological emotion database for discrete emotion recognition. IEEE Access 7 (2019), 12177–12191.
[105]
Tengfei Song, Wenming Zheng, Peng Song, and Zhen Cui. 2018. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing 11, 3 (2018), 532–541.
[106]
Imam Yogie Susanto, Tse-Yu Pan, Chien-Wen Chen, Min-Chun Hu, and Wen-Huang Cheng. 2020. Emotion recognition from galvanic skin response signal based on deep hybrid neural networks. In Proceedings of the 2020 International Conference on Multimedia Retrieval. 341–345.
[107]
Hao Tang, Wei Liu, Wei-Long Zheng, and Bao-Liang Lu. 2017. Multimodal emotion recognition using deep neural networks. In Proceedings of the International Conference on Neural Information Processing. Springer, 811–819.
[108]
Nattapong Thammasan, Ken-ichi Fukui, and Masayuki Numao. 2017. Multimodal fusion of EEG and musical features in music-emotion recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 4991–4992.
[109]
Howell Tong. 2009. A personal overview of non-linear time series analysis from a chaos perspective. Exploration of A Nonlinear World: An Appreciation of Howell Tong's Contributions to Statistics. World Scientific Publishing, Singapore (2009), 183--229.
[110]
Anthoula C. Tsolaki, Vasiliki E. Kosmidou, Ioannis Yiannis Kompatsiaris, Chrysa Papadaniil, Leontios Hadjileontiadis, and Magda Tsolaki. 2017. Age-induced differences in brain neural activation elicited by visual emotional stimuli: A high-density EEG study. Neuroscience 340 (2017), 268–278.
[111]
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017), 5998–6008.
[112]
Scott R. Vrana. 1993. The psychophysiology of disgust: Differentiating negative emotional contexts with facial EMG. Psychophysiology 30, 3 (1993), 279–286.
[113]
Zitong Wan, Rui Yang, Mengjie Huang, Nianyin Zeng, and Xiaohui Liu. 2021. A review on transfer learning in EEG signal analysis. Neurocomputing 421 (2021), 1–14.
[114]
Fang Wang, Sheng-hua Zhong, Jianfeng Peng, Jianmin Jiang, and Yan Liu. 2018. Data augmentation for EEG-based emotion recognition with deep convolutional neural networks. In Proceedings of the International Conference on Multimedia Modeling. Springer, 82–93.
[115]
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering 35 (2022), 8052–8072.
[116]
Lei Wang, Guizhi Xu, Jiang Wang, Shuo Yang, Lei Guo, and Weili Yan. 2011. GA-SVM based feature selection and parameters optimization for BCI research. In Proceedings of the 2011 7th International Conference on Natural Computation. IEEE, 580–583.
[117]
Qian Wang, Mou Wang, Yan Yang, and Xiaolei Zhang. 2022. Multi-modal emotion recognition using EEG and speech signals. Computers in Biology and Medicine 149 (2022), 105907.
[118]
Xiao-Wei Wang, Dan Nie, and Bao-Liang Lu. 2014. Emotional state classification from EEG data using machine learning approach. Neurocomputing 129 (2014), 94–106.
[119]
Yiming Wang, Yuan Rao, and Lianwei Wu. 2017. A review of sentiment semantic analysis technology and progress. In Proceedings of the 2017 13th International Conference on Computational Intelligence and Security (CIS). IEEE, 452–455.
[120]
Yiming Wang, Bin Zhang, and Yujiao Tang. 2024. DMMR: Cross-subject domain generalization for EEG-based emotion recognition via denoising mixed mutual reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence. 628–636.
[121]
Yingdong Wang, Jiatong Liu, Qunsheng Ruan, Shuocheng Wang, and Chen Wang. 2021. Cross-subject EEG emotion classification based on few-label adversarial domain adaption. Expert Systems with Applications. 185 (2021), 115581.
[122]
Felix Weninger, Hakan Erdogan, Shinji Watanabe, Emmanuel Vincent, Jonathan Le Roux, John R. Hershey, and Björn Schuller. 2015. Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR. In Proceedings of the International Conference on Latent Variable Analysis and Signal Separation. Springer, 91–99.
[123]
Dongrui Wu, Yifan Xu, and Bao-Liang Lu. 2020. Transfer learning for EEG-based brain–computer interfaces: A review of progress made since 2016. IEEE Transactions on Cognitive and Developmental Systems 14, 1 (2020), 4–19.
[124]
Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. 2019. Simplifying graph convolutional networks. In Proceedings of the International Conference on Machine Learning (PMLR), 6861–6871.
[125]
Xueyuan Xu, Tianyuan Jia, Qing Li, Fulin Wei, Long Ye, and Xia Wu. 2021. EEG feature selection via global redundancy minimization for emotion recognition. IEEE Transactions on Affective Computing 14 (2021), 421–435.
[126]
Xueyuan Xu, Jianhong Liu, Ziyu Li, Guangtao Zhai, and Xia Wu. 2023. EEG emotional feature selection method based on orthogonal regression and feature weighting (in Chinese). Scientia Sinica Informationis 53, 1 (2023), 33–45.
[127]
Xueyuan Xu and Xia Wu. 2020. Feature selection under orthogonal regression with redundancy minimizing. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 3457–3461.
[128]
Xu Yan, Li-Ming Zhao, and Bao-Liang Lu. 2021. Simplifying multimodal emotion recognition with single eye movement modality. In Proceedings of the 29th ACM International Conference on Multimedia. 1057–1063.
[129]
Fu Yang, Xingcong Zhao, Wenge Jiang, Pengfei Gao, and Guangyuan Liu. 2019. Multi-method fusion of cross-subject emotion recognition based on high-dimensional EEG features. Frontiers in Computational Neuroscience 13 (2019), 53.
[130]
Yang Yang, Jia Jia, Shumei Zhang, Boya Wu, Qicong Chen, Juanzi Li, Chunxiao Xing, and Jie Tang. 2014. How do your friends on social media disclose your emotions? In Proceedings of the AAAI Conference on Artificial Intelligence. 306–312.
[131]
Yilong Yang, Qingfeng Wu, Ming Qiu, Yingdong Wang, and Xiaowei Chen. 2018. Emotion recognition from multi-channel EEG through parallel convolutional recurrent neural network. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–7.
[132]
Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J. Kim. 2019. Graph transformer networks. Advances in Neural Information Processing Systems 32 (2019), 11983–11993.
[133]
Guangyi Zhang, Vandad Davoodnia, and Ali Etemad. 2022. PARSE: Pairwise alignment of representations in semi-supervised EEG learning for emotion recognition. IEEE Transactions on Affective Computing 13, 4 (2022), 2185–2200.
[134]
Guanhua Zhang, Minjing Yu, Yong-Jin Liu, Guozhen Zhao, Dan Zhang, and Wenming Zheng. 2021. SparseDGCNN: Recognizing emotion from multichannel EEG signals. IEEE Transactions on Affective Computing 14 (2021), 537–548.
[135]
Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In Proceedings of the International Conference on Learning Representations. 1–13.
[136]
Jianhua Zhang, Zhong Yin, Peng Chen, and Stefano Nichele. 2020. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Information Fusion 59 (2020), 103–126.
[137]
Rui Zhang, Feiping Nie, and Xuelong Li. 2018. Feature selection under regularized orthogonal least square regression with optimal scaling. Neurocomputing 273 (2018), 547–553.
[138]
Xiaowei Zhang, Jinyong Liu, Jian Shen, Shaojie Li, Kechen Hou, Bin Hu, Jin Gao, and Tong Zhang. 2021. Emotion recognition from multimodal physiological signals using a regularized deep fusion of kernel machine. IEEE Transactions on Cybernetics 51, 9 (2021), 4386–4399.
[139]
Zhi Zhang, Sheng-hua Zhong, and Yan Liu. 2022. GANSER: A self-supervised data augmentation framework for EEG-based emotion recognition. IEEE Transactions on Affective Computing 14 (2022), 2048–2063.
[140]
Zhi Zhang, Shenghua Zhong, and Yan Liu. 2024. Beyond mimicking under-represented emotions: Deep data augmentation with emotional subspace constraints for EEG-based emotion recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 10252–10260.
[141]
Guozhen Zhao, Yulin Zhang, and Yan Ge. 2018. Frontal EEG asymmetry and middle line power difference in discrete emotions. Frontiers in Behavioral Neuroscience 12 (2018), 225.
[142]
Li-Ming Zhao, Xu Yan, and Bao-Liang Lu. 2021. Plug-and-play domain adaptation for cross-subject EEG-based emotion recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 863–870.
[143]
Li-Ming Zhao, Rui Li, Wei-Long Zheng, and Bao-Liang Lu. 2019. Classification of five emotions from EEG and eye movement signals: Complementary representation properties. In Proceedings of the 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 611–614.
[144]
Sicheng Zhao, Guiguang Ding, Jungong Han, and Yue Gao. 2018. Personality-aware personalized emotion recognition from physiological signals. In Proceedings of the 27th International Joint Conference on Artificial Intelligence. 1660–1667.
[145]
Yuxuan Zhao, Xinyan Cao, Jinlong Lin, Dunshan Yu, and Xixin Cao. 2021. Multimodal affective states recognition based on multiscale CNNs and biologically inspired decision fusion model. IEEE Transactions on Affective Computing 14, 2 (2021), 1391–1403.
[146]
Zhi-Wei Zhao, Wei Liu, and Bao-Liang Lu. 2021. Multimodal emotion recognition using a modified dense co-attention symmetric network. In Proceedings of the 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 73–76.
[147]
Wei-Long Zheng, Bo-Nan Dong, and Bao-Liang Lu. 2014. Multimodal emotion recognition using EEG and eye tracking data. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 5040–5043.
[148]
Wei-Long Zheng, Wei Liu, Yifei Lu, Bao-Liang Lu, and Andrzej Cichocki. 2019. EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics 49, 3 (March 2019), 1110–1122.
[149]
Wei-Long Zheng and Bao-Liang Lu. 2015. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development 7, 3 (2015), 162–175.
[150]
Wei-Long Zheng, Jia-Yi Zhu, and Bao-Liang Lu. 2019. Identifying stable patterns over time for emotion recognition from EEG. IEEE Transactions on Affective Computing 10, 3 (2019), 417–429.
[151]
Wei-Long Zheng, Zhenfeng Shi, and Bao-Liang Lu. 2020. Build cross-subject EEG-based affective models using heterogeneous transfer learning (in Chinese). Chinese Journal of Computers 43, 2 (2020), 177–189.
[152]
Peixiang Zhong, Di Wang, and Chunyan Miao. 2020. EEG-based emotion recognition using regularized graph neural networks. IEEE Transactions on Affective Computing 13, 3 (2020), 1290–1301.
[153]
Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. 2022. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 45, 4 (2022), 4396–4415.
[154]
Rushuang Zhou, Zhiguo Zhang, Hong Fu, Li Zhang, Linling Li, Gan Huang, Fali Li, Xin Yang, Yining Dong, and Yuan-Ting Zhang. 2023. PR-PL: A novel prototypical representation based pairwise learning framework for emotion recognition using EEG signals. IEEE Transactions on Affective Computing 15 (2023), 657–670.

Cited By

  • (2025) Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition. Frontiers in Neurorobotics 18. DOI: 10.3389/fnbot.2024.1481746. Online publication date: 7-Jan-2025.
  • (2024) A feature-enhanced knowledge graph neural network for machine learning method recommendation. PeerJ Computer Science 10 (e2284). DOI: 10.7717/peerj-cs.2284. Online publication date: 28-Aug-2024.

      Published In

      ACM Computing Surveys, Volume 56, Issue 11
      November 2024
      977 pages
      EISSN: 1557-7341
      DOI: 10.1145/3613686

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 08 July 2024
      Online AM: 28 May 2024
      Accepted: 14 May 2024
      Revised: 06 May 2024
      Received: 27 June 2023
      Published in CSUR Volume 56, Issue 11


      Author Tags

      1. EEG
      2. emotion recognition
      3. feature analysis
      4. overfitting
      5. physiological knowledge finding
      6. multimodal

      Qualifiers

      • Survey

      Funding Sources

      • Key Research and Development Program of Shaanxi
      • National Key Projects of China

      Article Metrics

      • Downloads (Last 12 months)1,761
      • Downloads (Last 6 weeks)219
      Reflects downloads up to 05 Mar 2025

