
Weighted Feature Fusion Based Emotional Recognition for Variable-length Speech using DNN



Abstract:

Emotion recognition plays an increasingly important role in human-computer interaction systems and is a key technology in multimedia communication. Because neural networks can automatically learn intermediate representations of the raw speech signal, most current methods use a Convolutional Neural Network (CNN) to extract information directly from spectrograms, but this may leave the information in hand-crafted features underused. In this work, a model based on a weighted feature fusion method is proposed for emotion recognition of variable-length speech. Since Chroma-based features are closely related to speech emotion, our model effectively exploits the useful information in the chromagram to improve performance by combining CNN-based features with Chroma-based features. We evaluated the model on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset and achieved an increase of more than 5% in both weighted accuracy (WA) and unweighted accuracy (UA) compared with existing state-of-the-art methods.
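
The abstract describes the fusion only at a high level. As a rough illustration (not the paper's implementation), the sketch below shows one way to combine a CNN branch over the spectrogram with a Chroma-based branch through a learned fusion weight before classification. It assumes PyTorch; the WeightedFusionNet name, all layer sizes, and the four emotion classes are hypothetical choices, and global average pooling stands in for whatever mechanism the paper uses to handle variable-length input.

```python
# Minimal sketch (not the paper's implementation) of weighted feature fusion:
# a CNN branch encodes the spectrogram, an MLP branch encodes Chroma features,
# and a learned scalar weight balances the two embeddings before classification.
import torch
import torch.nn as nn


class WeightedFusionNet(nn.Module):
    def __init__(self, n_chroma=12, n_classes=4, embed_dim=128):
        super().__init__()
        # CNN branch over the (1, freq, time) spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pools away variable time/freq extents
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # MLP branch over time-averaged Chroma features
        self.chroma_mlp = nn.Sequential(
            nn.Linear(n_chroma, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        # Learnable fusion weight, squashed into (0, 1) by a sigmoid
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, spectrogram, chroma):
        # spectrogram: (batch, 1, freq, time); chroma: (batch, n_chroma, time)
        spec_emb = self.cnn(spectrogram)
        chroma_emb = self.chroma_mlp(chroma.mean(dim=-1))
        w = torch.sigmoid(self.alpha)
        fused = w * spec_emb + (1.0 - w) * chroma_emb
        return self.classifier(fused)


# Example with random tensors standing in for one variable-length utterance batch
model = WeightedFusionNet()
logits = model(torch.randn(2, 1, 128, 300), torch.randn(2, 12, 300))
print(logits.shape)  # torch.Size([2, 4])
```

Because the fusion weight is a single learnable parameter, the balance between spectrogram-derived and Chroma-derived evidence is tuned jointly with the rest of the network during training.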
Date of Conference: 24-28 June 2019
Date Added to IEEE Xplore: 22 July 2019
Conference Location: Tangier, Morocco

