
VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition


  • Research Article
Frontiers of Information Technology & Electronic Engineering

Abstract

Human emotions are intricate psychological phenomena that reflect an individual’s current physiological and psychological state. Emotions have a pronounced influence on human behavior, cognition, communication, and decision-making. However, current emotion recognition methods often suffer from suboptimal performance and limited scalability in practical applications. To address this problem, a novel electroencephalogram (EEG) emotion recognition network named VG-DOCoT is proposed, which is based on depthwise over-parameterized convolution (DO-Conv), transformer, and variational autoencoder-generative adversarial network (VAE-GAN) structures. Specifically, during preprocessing, differential entropy (DE) features are extracted from the EEG signals and mapped into temporal, spatial, and frequency representations. To enlarge the training set, VAE-GAN is employed for data augmentation. The traditional convolution layers are replaced with the novel DO-Conv module to improve the network, and a transformer structure is introduced into the framework to capture global dependencies in the EEG signals. Using the proposed model, binary classification on the DEAP dataset achieves an accuracy of 92.52% for arousal and 92.27% for valence. Next, ternary classification on SEED, which distinguishes neutral, positive, and negative emotions, obtains an impressive average prediction accuracy of 93.77%. The proposed method thus significantly improves the accuracy of EEG-based emotion recognition.
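The DE feature mentioned above is conventionally computed by assuming each band-filtered EEG segment is approximately Gaussian, in which case the differential entropy reduces to a closed form in the segment variance: h(X) = ½ ln(2πeσ²). The sketch below illustrates that computation in NumPy; it is a minimal illustration, not the authors' preprocessing pipeline, and the FFT-mask bandpass, the alpha-band edges (8–13 Hz), and the 128 Hz sampling rate are assumptions for the example.

```python
import numpy as np

def band_filter(x, fs, lo, hi):
    """Crude bandpass: zero out FFT bins outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def differential_entropy(x):
    """DE under a Gaussian assumption: 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

# Example: DE of a 1-s synthetic EEG segment in the alpha band (8-13 Hz)
rng = np.random.default_rng(0)
fs = 128                              # assumed sampling rate
segment = rng.standard_normal(fs)     # stand-in for one channel's samples
alpha = band_filter(segment, fs, 8, 13)
de_alpha = differential_entropy(alpha)
```

Repeating this per channel and per frequency band yields the DE values that are then arranged into the temporal/spatial/frequency representations fed to the network.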



Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Author information


Contributions

Yanping ZHU and Lei HUANG designed the research. Lei HUANG processed the data and analyzed the experimental results. Lei HUANG and Jixin CHEN prepared the figures. Lei HUANG and Yanping ZHU drafted the paper. Jixin CHEN and Jianan CHEN helped polish the paper. Yanping ZHU, Shenyun WANG, and Fayu WAN revised and finalized the paper.

Corresponding author

Correspondence to Yanping Zhu (朱艳萍).

Ethics declarations

All the authors declare that they have no conflict of interest.

Additional information

Project supported by the National Key Research and Development Program of China (No. 2022YFE0122700) and the National Natural Science Foundation of China (No. 61971230)


Cite this article

Zhu, Y., Huang, L., Chen, J. et al. VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition. Front Inform Technol Electron Eng 25, 1497–1514 (2024). https://doi.org/10.1631/FITEE.2300781

