
Automatic emotion recognition using auditory and prosodic indicative features


Abstract:

This paper proposes a new framework for the automatic recognition of human emotions from speech. In addition to auditory indicative features, selected prosodic and voice-quality parameters are optimally combined with Mel-frequency cepstral coefficients to perform automatic emotion classification. For this purpose, the Emotion Prosody Speech and Transcript database, a certified speech corpus, was used throughout the study. An extensive set of experiments was carried out to assess the effectiveness of this original mixture of prosodic, perceptual, and auditory features on the emotion recognition task. The features were selected using Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) on the basis of their discriminative ability, and the selected features were passed to the front-end processing stage of a hybrid Gaussian Mixture Model and Support Vector Machine classifier (GSVM) to perform the emotion classification. The results show that the proposed feature extraction framework discriminates effectively between different human emotions when the LDA-PCA-GSVM classifier is used.
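The pipeline described in the abstract (feature reduction via PCA and LDA, followed by an SVM back-end) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix is synthetic stand-in data (the paper uses MFCC, prosodic, and voice-quality features extracted from speech), the dimensions and component counts are assumed for the example, and the GMM stage of the hybrid GSVM classifier is omitted for brevity.

```python
# Hedged sketch of an LDA-PCA feature-selection front end feeding an SVM,
# loosely following the pipeline described in the abstract.
# All data and dimensions below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-utterance feature vectors:
# e.g. 13 MFCCs + 13 prosodic/voice-quality parameters (assumed split).
n_classes, n_per_class, n_feats = 4, 50, 26
X = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(n_per_class, n_feats))
    for c in range(n_classes)  # one cluster per emotion class
])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, random_state=0, stratify=y
)

clf = make_pipeline(
    PCA(n_components=10),                        # decorrelate / reduce dimensionality
    LinearDiscriminantAnalysis(n_components=3),  # project onto class-discriminative axes
    SVC(kernel="rbf"),                           # SVM back-end (GMM stage omitted here)
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

On this well-separated synthetic data the classifier scores near-perfectly; on real emotional speech, feature quality and class overlap dominate, which is why the paper's contribution lies in the choice and combination of the input features.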
Date of Conference: 03-06 May 2015
Date Added to IEEE Xplore: 25 June 2015
Print ISSN: 0840-7789
Conference Location: Halifax, NS, Canada
