Elsevier

Computers & Electrical Engineering

Volume 72, November 2018, Pages 383-392

Emotion recognition using empirical mode decomposition and approximation entropy

https://doi.org/10.1016/j.compeleceng.2018.09.022

Abstract

Automatic human emotion recognition is a key technology for human-machine interaction. In this paper, we propose an electroencephalogram (EEG) feature extraction method that leverages empirical mode decomposition and approximation entropy. In our proposed method, Empirical Mode Decomposition (EMD) is applied to the preprocessed EEG signals to obtain several Intrinsic Mode Functions (IMFs). The Approximation Entropy (ApEn) of the first four IMFs is then computed and used as the feature set extracted from the EEG signals for learning and recognition. An integration of a Deep Belief Network and a Support Vector Machine is devised for classification; it takes the extracted feature vectors and identifies four principal human emotions, namely happy, calm, sad, and fear. Experiments are conducted on EEG data acquired with a 16-lead device. Our experimental results demonstrate that the proposed method achieves an improved accuracy that is highly competitive with state-of-the-art methods. The average accuracy is 83.34%, and the best accuracy reaches 87.32%.

Introduction

Automatic human emotion recognition is a key technology for human-machine interaction [1], [2]. Much research on emotion recognition relies on data from images, audio, and videos [3], [4], [5]. Discrete models [6] and dimensional models [7] have been proposed to describe emotional states. Among the various data sources, physiological signals such as electroencephalogram (EEG), electrocardiogram (ECG), and electromyography (EMG) signals have been employed for emotion recognition [8]. EEG signals are closely correlated with brain activity and are the most promising for recognizing emotional states [9], [10], [11].

Most recently, Hu et al. [12] proposed a classification method that combines Correlation-based Feature Selection (CFS) with a k-nearest-neighbor (KNN) algorithm for attention recognition. Lin et al. [4] used a Support Vector Machine (SVM) to classify EEG-based emotional states into four categories, found that the frontal and temporal lobes of the brain are the main areas of emotion generation, and reported an average classification accuracy of 82.29%. Goyal et al. [13] described the acquisition of EEG signals from frontal electrodes of five subjects for the classification of emotions. Gonuguntla et al. [14] analyzed the network mechanisms related to human emotion based on the phase-locking value, a synchronization measure in EEG, to formulate emotion-specific brain functional networks.

In addition to the development of classification methods, signal processing techniques have been studied. Empirical mode decomposition methods based on the Hilbert-Huang Transform (HHT) have been explored in the field of signal processing to improve recognition performance [15], [16]. HHT consists of empirical mode decomposition (EMD) and the Hilbert transform. It decomposes a signal into several approximately cosine-like waves and examines their periods and amplitudes, which effectively suppresses noise and reveals the time-frequency characteristics of the signal. Such methods achieve better results when dealing with non-stationary signals.
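As a minimal illustration of the decomposition step (not the authors' exact pipeline), the sketch below assumes the open-source PyEMD package (PyPI name EMD-signal) and a single-channel synthetic signal; it decomposes the signal into IMFs from which time-frequency attributes could then be examined.

```python
# Minimal EMD sketch (assumes the PyEMD package: pip install EMD-signal).
# Illustrative only; the paper's exact preprocessing is not shown here.
import numpy as np
from PyEMD import EMD

fs = 128                                 # sampling rate (Hz), as in the paper's device
t = np.arange(0, 4, 1.0 / fs)            # a 4-second synthetic segment
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

emd = EMD()
imfs = emd(signal)                        # array of shape (n_imfs, len(signal))
print("number of IMFs:", imfs.shape[0])
```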

Despite the advancements in emotion recognition using EEG signals, there is much room for improvement. This paper integrates EMD and Approximation Entropy (ApEn) and proposes an EEG feature extraction method (named EMD-Approximation Entropy, E-ApEn for short). This combined method reduces the complexity of feature extraction. The emotion recognition model integrates a Deep Belief Network (DBN) and a Support Vector Machine to train on and classify the resulting feature vectors, and a higher rate of emotion recognition is expected.
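For readers unfamiliar with the entropy measure, the following is a hedged sketch of the standard approximate entropy (Pincus) computation; the parameter choices (m = 2, r = 0.2 times the standard deviation) are common defaults, not necessarily those used in the paper.

```python
# A straightforward implementation of approximate entropy (Pincus, 1991).
# Parameter defaults (m=2, r=0.2*std) are common choices, not the paper's.
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # Build the (n - m + 1) embedded vectors of length m.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of embedded vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of vectors within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Example: ApEn of a noisy sine segment.
rng = np.random.default_rng(0)
seg = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
print(approximate_entropy(seg))
```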

The rest of this paper is organized as follows: Section 2 presents our proposed method for emotion recognition using EEG signals. The section starts with an overview of the framework, followed by the feature extraction method that integrates EMD and ApEn. A DBN-SVM model is then discussed, which makes multi-class decisions for emotion recognition. Section 3 describes data acquisition and preprocessing and discusses our experimental results, including a comparison with state-of-the-art methods. Section 4 concludes the paper with a summary of our work and its highlights.

Section snippets

Framework of the proposed method

The framework of our proposed method is shown in Fig. 1. The process of establishing the emotion recognition model using EMD and approximate entropy consists of the following four steps (a sketch of the first two steps is given after the list):

1. The signal is preprocessed to obtain signal clips of a fixed size, and independent component analysis (ICA) is used to suppress noise.

2. For each attribute of a signal clip, EMD is used for decomposition, and the approximate entropy of the first 4 IMFs of the decomposed signal is calculated.

3. Select the
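The sketch below illustrates steps 1–2 for a single clip, under stated assumptions: it uses the PyEMD package and the approximate_entropy helper sketched above, and it treats a clip as a (channels × samples) array. The 14 channels and the first 4 IMFs follow the paper's description; the synthetic data and function names are illustrative, not the authors' code.

```python
# Sketch of steps 1-2: per channel, decompose the clip with EMD and take the
# approximate entropy of the first 4 IMFs as features (E-ApEn).
# Assumes the PyEMD package and the approximate_entropy() helper from the
# earlier sketch; the clip below is synthetic, not real EEG.
import numpy as np
from PyEMD import EMD

def e_apen_features(clip, n_imfs=4):
    """clip: array of shape (n_channels, n_samples). Returns a flat feature vector."""
    emd = EMD()
    features = []
    for channel in clip:
        imfs = emd(channel)
        for k in range(n_imfs):
            if k < imfs.shape[0]:
                features.append(approximate_entropy(imfs[k]))
            else:
                features.append(0.0)   # pad if fewer IMFs were extracted
    return np.array(features)

# Example with a synthetic 14-channel, 4-second clip sampled at 128 Hz.
rng = np.random.default_rng(1)
clip = rng.standard_normal((14, 4 * 128))
print(e_apen_features(clip).shape)     # 14 channels * 4 IMFs = (56,)
```

The resulting feature vectors would then be passed to the classifier described in the paper (the DBN-SVM stage), which is not reproduced here.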

Human subjects and acquisition device

Ten healthy subjects participated in the experiment (5 men and 5 women, aged 20–25). All participants had normal hearing and vision and no mental disorders. We used a 16-lead Emotiv brainwave headset (14 EEG acquisition channels and 2 reference electrodes) at a sampling frequency of 128 Hz.

Emotional induction

To induce the emotional states of human subjects, we used movie clips as experimental materials, following the strategy in [21]. We selected 20 videos from 200 videos prepared for the

Conclusion

In order to find the correlation between emotional states and EEG signals and to improve the recognition rate of emotion, an emotion recognition model using empirical mode decomposition and approximation entropy is proposed. In this method, empirical mode decomposition is performed on the EEG signals after valid data segments are extracted and noise is suppressed by ICA. Subsequently, the approximate entropy of the EEG signals is calculated. An integration of DBN and SVM is devised for

Acknowledgments

This work was supported by the Key Program of the National Natural Science Foundation of China (Grant No. 61432004); the National Natural Science Foundation of China (Grant No. 61474035); the National Scholarship Foundation of China (Grant No. 201706695016); and the fund of the Affective Computing and Advanced Intelligent Machine Anhui Province Key Laboratory (Grant No. ACAIM180101).

References (21)

  • Y. Liu et al., Conditional convolutional neural network enhanced random forest for facial expression recognition, Pattern Recognit (2018)
  • R.W. Picard et al., Toward machine emotional intelligence: analysis of affective physiological state, IEEE Trans Pattern Anal Mach Intell (2001)
  • S. Chu et al., Environmental sound recognition using MP-based features, IEEE International Conference on Acoustics, Speech and Signal Processing (2008)
  • A. Savran et al., Emotion detection in the loop from brain signals and facial images, Proceedings of the eNTERFACE 2006 Workshop (2006)
  • Y. Lin et al., EEG-based emotion recognition in music listening, IEEE Trans Biomed Eng (2010)
  • P. Ekman, An argument for basic emotions, Cognition & Emotion (1992)
  • A. Mehrabian, Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament, Current Psychology: A Journal for Diverse Perspectives on Diverse Psychological Issues (1996)
  • S. Koelstra et al., DEAP: a database for emotion analysis using physiological signals, IEEE Trans Affect Comput (2012)
  • L.I. Aftanas et al., Analysis of evoked EEG synchronization and desynchronization in conditions of emotional activation in humans: temporal and topographic characteristics, Neurosci Behav Physiol (2004)
  • S.K. Hadjidimitriou et al., Toward an EEG-based recognition of music liking using time-frequency analysis, IEEE Trans Biomed Eng (2012)
There are more references available in the full text version of this article.


Tian Chen received the B.E., M.E., and Ph.D. degrees from Hefei University of Technology, China, in 1996, 2002, and 2011, respectively. She is an associate professor of the School of Computer Science and Information Engineering at the Hefei University of Technology, China. Her current research interests include affective computing, artificial intelligence, and design for test.

Sihang Ju received his B.E. degree in Computer Science and Technology from Hefei University of Technology, China, in 2016. He has been a postgraduate student since 2016. His research interest is affective computing based on EEG.

Xiaohui Yuan received the Ph.D. degree in computer science from Tulane University in 2004. He is an Associate Professor at the University of North Texas. His research interests include computer vision and artificial intelligence. He is a recipient of the Ralph E. Powe Junior Faculty Award and a senior member of IEEE. He has published over 130 papers in journals and conferences.

Mohamed Elhoseny received the Ph.D. in Computer and Information from Mansoura University. He is an Assistant Professor at the Faculty of Computers and Information, Mansoura University, Egypt. His research interests include sensor and ad-hoc networks, Internet of things, data security, machine learning, and optimization. He published over 90 papers in journals, conferences, and books, and edited 3 books.

Fuji Ren received the B.E. and M.E. degrees from the Beijing University of Posts and Telecommunications, in 1982 and 1985, respectively, and the Ph.D. degree from Hokkaido University, Japan, in 1991. He is a Professor with the Faculty of Engineering, University of Tokushima, Tokushima, Japan. His current research interests include information science, artificial intelligence, language understanding, and affective computing.

Mingyan Fan received his B.E. degree in Computer Science and Technology from Hefei University of Technology, China, in 2016. He has been a postgraduate student since 2017. His research interest is wearable computing.

Zhangang Chen received his B.E. and M.S. degrees in Computer Science and Technology from Hefei University of Technology, China, in 2015 and 2018, respectively. His research interest is affective computing based on EEG.

Reviews processed and recommended for publication to the Editor-in-Chief by Guest Editor Dr. Guanglong Du.
