Abstract
Speech Emotion Recognition (SER) systems use machine learning to detect emotions from speech audio, independently of the semantic content of the utterance. Current research is limited by the variability introduced by language, accent, gender, age and intensity in speech, and developing accurate SER systems remains an open challenge. This study presents a novel approach to building a deep learning system that unifies four datasets, i.e., RAVDESS, TESS, CREMA-D and SAVEE, to detect emotions from speech. The combined data is represented by the most relevant acoustic features: Zero Crossing Rate (ZCR), chroma features, Mel-Frequency Cepstral Coefficients (MFCC), Root Mean Square (RMS) energy and the mel spectrogram. A 4-layer Convolutional Neural Network (CNN) trained on these features achieves an accuracy of 76%. The results show that the proposed approach improves reliability and makes the model less sensitive to unseen data than models trained on a single dataset. The shortcomings of the current approach and their possible solutions are also discussed.
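As a rough illustration of the feature pipeline named in the abstract, the sketch below extracts the five feature types (ZCR, chroma, MFCC, RMS and mel spectrogram) with librosa and stacks their per-frame means into a single vector. This is a minimal sketch, not the authors' exact implementation; the sampling rate, frame defaults and the time-averaging strategy are assumptions.

```python
# Minimal feature-extraction sketch (assumes librosa and numpy are installed;
# settings are illustrative, not taken from the paper).
import numpy as np
import librosa

def extract_features(path, sr=22050):
    """Return a 1-D vector of ZCR, chroma, MFCC, RMS and mel-spectrogram
    features, each averaged over time frames."""
    y, sr = librosa.load(path, sr=sr)
    zcr = np.mean(librosa.feature.zero_crossing_rate(y=y), axis=1)        # 1 value
    stft = np.abs(librosa.stft(y))
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr), axis=1)  # 12 values
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20), axis=1)   # 20 values
    rms = np.mean(librosa.feature.rms(y=y), axis=1)                       # 1 value
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)     # 128 values
    return np.hstack([zcr, chroma, mfcc, rms, mel])                       # 162 values
```

The abstract states that a 4-layer CNN is trained on these features but does not give the architecture, so the filter counts, kernel sizes and 8-class output below are illustrative assumptions for a 1-D CNN over the stacked feature vector:

```python
# A hypothetical 4-convolutional-layer 1-D CNN in Keras; sizes are assumed.
from tensorflow.keras import layers, models

def build_cnn(n_features=162, n_classes=8):
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(256, 5, padding='same', activation='relu'),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, 5, padding='same', activation='relu'),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, padding='same', activation='relu'),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, 5, padding='same', activation='relu'),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation='softmax'),  # one unit per emotion class
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```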
References
Abbaschian, B.J., Sierra-Sosa, D., Elmaghraby, A.: Deep learning techniques for speech emotion recognition, from databases to models. Sensors 21(4), 1249 (2021). https://doi.org/10.3390/s21041249
Bhavan, A., Chauhan, P., Hitkul, Shah, R.R.: Bagged support vector machines for emotion recognition from speech. Knowl.-Based Syst. 184, 104886 (2019). https://doi.org/10.1016/j.knosys.2019.104886
Fayek, H.M., Lech, M., Cavedon, L.: Evaluating deep learning architectures for speech emotion recognition. Neural Netw. 92, 60–68 (2017). https://doi.org/10.1016/j.neunet.2017.02.013
Uddin, M., Nilsson, E.G.: Emotion recognition using speech and neural structured learning to facilitate edge intelligence. Eng. Appl. Artif. Intell. 94, 103775 (2020). https://doi.org/10.1016/j.engappai.2020.103775
Bagherzadeh, S., Maghooli, K., Shalbaf, A., Maghsoudi, A.: Recognition of emotional states using frequency effective connectivity maps through transfer learning approach from electroencephalogram signals. Biomed. Signal Process. Control 75, 103544 (2022). https://doi.org/10.1016/j.bspc.2022.103544
Livingstone, S.R., Russo, F.A.: The Ryerson audio-visual database of emotional speech and song (RAVDESS): a dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5), e0196391 (2018). https://doi.org/10.1371/journal.pone.0196391
Pichora-Fuller, M.K., Dupuis, K.: Toronto emotional speech set (TESS). Borealis (2020). https://doi.org/10.5683/SP2/E8H2MF
Cao, H., Cooper, D.G., Keutmann, M.K., Gur, R.C., Nenkova, A., Verma, R.: CREMA-D: crowd-sourced emotional multimodal actors dataset. IEEE Trans. Affect. Comput. 5(4), 377–390 (2014). https://doi.org/10.1109/TAFFC.2014.2336244
Jackson, P., Ul Haq, S.: Surrey audio-visual expressed emotion (SAVEE) database (2011)
Panagiotakis, C., Tziritas, G.: A speech/music discriminator based on RMS and zero-crossings. IEEE Trans. Multimed. 7(1), 155–166 (2005). https://doi.org/10.1109/TMM.2004.840604
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ahmed, W., Riaz, S., Iftikhar, K., Konur, S.: Speech emotion recognition using deep learning. In: Bramer, M., Stahl, F. (eds.) Artificial Intelligence XL. SGAI 2023. Lecture Notes in Computer Science, vol. 14381. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-47994-6_14
DOI: https://doi.org/10.1007/978-3-031-47994-6_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47993-9
Online ISBN: 978-3-031-47994-6
eBook Packages: Computer Science, Computer Science (R0)