Abstract
In modern speech emotion recognition (SER), improving on state-of-the-art systems requires increasingly sophisticated methods for extracting discriminative information from training data. The key aspects of our research are the use of automatic speaker verification (ASV) and topic detection models to enhance emotion recognition in multimodal (audio and text) dialogues. We conduct a comparative analysis of modern SER models and experiment with various models for the text (RoBERTa, TodKat, COSMIC) and audio (WavLM, Wav2Vec2) modalities. We also employ attention-based fusion of the modalities and a BiGRU-based classification approach. The results show that our proposed model, which combines RoBERTa, TodKat, Wav2Vec2, WavLM, and attention-based fusion, achieves an F1-score of 0.657, outperforming state-of-the-art systems that use the same selection of modalities; it is surpassed only by models that exploit additional modalities beyond audio and text. We demonstrate the effectiveness of our approach to improving SER by leveraging advanced techniques for extracting discriminative information from multimodal training data. The incorporation of ASV and topic detection, together with the proposed fusion and classification methods, accounts for the improved performance of our model over existing state-of-the-art SER systems.
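To make the described pipeline more concrete, the following is a minimal sketch, assuming PyTorch and hypothetical layer sizes, of how attention-based fusion of precomputed text and audio utterance embeddings followed by a BiGRU classifier could be structured. It illustrates the general technique named in the abstract, not the authors' actual implementation; all names and dimensions are illustrative.

```python
# Hypothetical sketch: attention-based fusion of text and audio utterance
# embeddings followed by a BiGRU over the dialogue. Dimensions, head counts,
# and layer sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class FusionBiGRUClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=768, hidden_dim=256, num_emotions=7):
        super().__init__()
        # Project both modalities into a shared space before attention.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Multi-head attention weights the two modality views of each utterance.
        self.fusion_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # BiGRU models emotional context across consecutive utterances.
        self.bigru = nn.GRU(hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, text_emb, audio_emb):
        # text_emb, audio_emb: (batch, num_utterances, dim) utterance-level embeddings
        t = self.text_proj(text_emb)
        a = self.audio_proj(audio_emb)
        b, u, h = t.shape
        # Treat the two modalities as a length-2 sequence per utterance and attend over it.
        modalities = torch.stack([t, a], dim=2).view(b * u, 2, h)
        fused, _ = self.fusion_attn(modalities, modalities, modalities)
        fused = fused.mean(dim=1).view(b, u, h)  # pool the two modality views
        context, _ = self.bigru(fused)           # dialogue-level context
        return self.classifier(context)          # per-utterance emotion logits

# Example: 2 dialogues, 5 utterances each, 768-dim embeddings per modality.
logits = FusionBiGRUClassifier()(torch.randn(2, 5, 768), torch.randn(2, 5, 768))
print(logits.shape)  # torch.Size([2, 5, 7])
```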
References
Mikolov, T., Yih, W.-T., Zweig, G.: Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746–751. Association for Computational Linguistics (2013). https://aclanthology.org/N13-1090
Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching Word Vectors with Subword Information (Version 2) (2016). arXiv. https://doi.org/10.48550/ARXIV.1607.04606
Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics (2014). https://doi.org/10.3115/v1/d14-1162
Peters, M.E., et al.: Deep contextualized word representations (Version 2) (2018). arXiv. https://doi.org/10.48550/ARXIV.1802.05365
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Version 2) (2018). arXiv. https://doi.org/10.48550/ARXIV.1810.04805
Liu, Y., et al.: RoBERTa: A Robustly Optimized BERT Pretraining Approach (Version 1) (2019). arXiv. https://doi.org/10.48550/ARXIV.1907.11692
Raffel, C., et al.: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Version 4) (2019). arXiv. https://doi.org/10.48550/ARXIV.1910.10683
Lian, Z., Liu, B., Tao, J.: SMIN: semi-supervised multi-modal interaction network for conversational emotion recognition. In: IEEE Transactions on Affective Computing (Vol. 14, Issue 3, pp. 2415–2429). Institute of Electrical and Electronics Engineers (IEEE) (2023). https://doi.org/10.1109/taffc.2022.3141237
Arumugam, B., Bhattacharjee, S.D., Yuan, J.: Multimodal attentive learning for real-time explainable emotion recognition in conversations. In: 2022 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1210–1214. IEEE (2022). https://doi.org/10.1109/iscas48785.2022.9938005
Ho, N.-H., Yang, H.-J., Kim, S.-H., Lee, G.: Multimodal approach of speech emotion recognition using multi-level multi-head fusion attention-based recurrent neural network. In: IEEE Access (Vol. 8, pp. 61672–61686). Institute of Electrical and Electronics Engineers (IEEE) (2020). https://doi.org/10.1109/access.2020.2984368
Xu, Y., Xu, H., Zou, J.: HGFM: a hierarchical grained and feature model for acoustic emotion recognition. In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6499–6503. IEEE (2020). https://doi.org/10.1109/icassp40776.2020.9053039
Oliveira, J., Praca, I.: On the usage of pre-trained speech recognition deep layers to detect emotions. In: IEEE Access (Vol. 9, pp. 9699–9705). Institute of Electrical and Electronics Engineers (IEEE) (2021). https://doi.org/10.1109/access.2021.3051083
Degottex, G., Kane, J., Drugman, T., Raitio, T., Scherer, S.: COVAREP: a collaborative voice analysis repository for speech technologies. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE (2014). https://doi.org/10.1109/icassp.2014.6853739
Eyben, F., Wöllmer, M., Schuller, B.: OpenSMILE. In: Proceedings of the 18th ACM International Conference on Multimedia (MM '10). ACM (2010). https://doi.org/10.1145/1873951.1874246
Schneider, S., Baevski, A., Collobert, R., Auli, M.: wav2vec: Unsupervised Pre-training for Speech Recognition (Version 4) (2019). arXiv. https://doi.org/10.48550/ARXIV.1904.05862
Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition (Version 6) (2014). arXiv. https://doi.org/10.48550/ARXIV.1409.1556
Matveev, A., Matveev, Y., Frolova, O., Nikolaev, A., Lyakso, E.: A neural network architecture for children’s audio-visual emotion recognition. In: Mathematics (Vol. 11, Issue 22, p. 4573). MDPI AG (2023). https://doi.org/10.3390/math11224573
Meng, H., Yan, T., Yuan, F., Wei, H.: Speech emotion recognition from 3D Log-Mel spectrograms with deep learning network. In: IEEE Access (Vol. 7, pp. 125868-125881). Institute of Electrical and Electronics Engineers (IEEE) (2019). https://doi.org/10.1109/access.2019.2938007
Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., Mihalcea, R.: MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations (Version 6) (2018). arXiv. https://doi.org/10.48550/ARXIV.1810.02508
Ta, B.T., Nguyen, T.L., Dang, D.S., Le, N.M., Do, V.H.: Improving speech emotion recognition via fine-tuning ASR with speaker information. In: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 1–6. IEEE (2022). https://doi.org/10.23919/apsipaasc55919.2022.9980214
Ulgen, I.R., Du, Z., Busso, C., Sisman, B.: Revealing Emotional Clusters in Speaker Embeddings: A Contrastive Learning Strategy for Speech Emotion Recognition (2024). arXiv. https://doi.org/10.48550/ARXIV.2401.11017
Chen, S., et al.: WavLM: large-scale self-supervised pre-training for full stack speech processing. In: IEEE Journal of Selected Topics in Signal Processing (Vol. 16, Issue 6, pp. 1505–1518). Institute of Electrical and Electronics Engineers (IEEE) (2022). https://doi.org/10.1109/jstsp.2022.3188113
Ghosal, D., Majumder, N., Gelbukh, A., Mihalcea, R., Poria, S.: COSMIC: COmmonSense knowledge for eMotion Identification in Conversations (Version 1) (2020). arXiv. https://doi.org/10.48550/ARXIV.2010.02795
Zhu, L., Pergola, G., Gui, L., Zhou, D., He, Y.: Topic-driven and knowledge-aware transformer for dialogue emotion detection. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics (2021). https://doi.org/10.18653/v1/2021.acl-long.125
Sap, M., et al.: ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning (Version 3) (2018). arXiv. https://doi.org/10.48550/ARXIV.1811.00146
Lian, Z., Liu, B., Tao, J.: CTNet: conversational transformer network for emotion recognition. In: IEEE/ACM Transactions on Audio, Speech, and Language Processing (Vol. 29, pp. 985–1000). Institute of Electrical and Electronics Engineers (IEEE) (2021). https://doi.org/10.1109/taslp.2021.3049898
Huang, X., et al.: Emotion detection for conversations based on reinforcement learning framework. In: IEEE MultiMedia (Vol. 28, Issue 2, pp. 76–85). Institute of Electrical and Electronics Engineers (IEEE) (2021). https://doi.org/10.1109/mmul.2021.3065678
Ma, H., Wang, J., Lin, H., Zhang, B., Zhang, Y., Xu, B.: A transformer-based model with self-distillation for multimodal emotion recognition in conversations. In: IEEE Transactions on Multimedia (Vol. 26, pp. 776–788). Institute of Electrical and Electronics Engineers (IEEE) (2024). https://doi.org/10.1109/tmm.2023.3271019
Ren, M., Huang, X., Liu, J., Liu, M., Li, X., Liu, A.-A.: MALN: multimodal adversarial learning network for conversational emotion recognition. In: IEEE Transactions on Circuits and Systems for Video Technology (Vol. 33, Issue 11, pp. 6965–6980). Institute of Electrical and Electronics Engineers (IEEE) (2023). https://doi.org/10.1109/tcsvt.2023.3273577
Guo, L., Wang, L., Dang, J., Fu, Y., Liu, J., Ding, S.: Emotion recognition with multimodal transformer fusion framework based on acoustic and lexical information. In: IEEE MultiMedia (Vol. 29, Issue 2, pp. 94–103). Institute of Electrical and Electronics Engineers (IEEE) (2022). https://doi.org/10.1109/mmul.2022.3161411
Xu, C., Gao, Y.: Multi-modal transformer with multi-head attention for emotion recognition. In: 2023 IEEE International Conference on Sensors, Electronics and Computer Engineering (ICSECE) (pp. 826–831). IEEE (2023). https://doi.org/10.1109/icsece58870.2023.10263303
Hou, M., Zhang, Z., Lu, G.: Multi-modal emotion recognition with self-guided modality calibration. In: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4688–4692. IEEE (2022). https://doi.org/10.1109/icassp43922.2022.9747859
Zhong, P., Wang, D., Miao, C.: Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations (Version 2) (2019). arXiv. https://doi.org/10.48550/ARXIV.1909.10681
Li, J., Zhang, M., Ji, D., Liu, Y.: Multi-Task Learning with Auxiliary Speaker Identification for Conversational Emotion Recognition (Version 2) (2020). arXiv. https://doi.org/10.48550/ARXIV.2003.01478
Kim, T., Vossen, P.: EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa (Version 1) (2021). arXiv. https://doi.org/10.48550/ARXIV.2108.12009
Son, J., Kim, J., Lim, J., Lim, H.: GRASP: Guiding model with RelAtional Semantics using Prompt for Dialogue Relation Extraction (Version 4) (2022). arXiv. https://doi.org/10.48550/ARXIV.2208.12494
Hu, G., Lin, T.-E., Zhao, Y., Lu, G., Wu, Y., Li, Y.: UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition (Version 1) (2022). arXiv. https://doi.org/10.48550/ARXIV.2211.11256
Acknowledgments
The research was financially supported by the Russian Science Foundation (project 22-11-00128).
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Makhnytkina, O., Matveev, Y., Zubakov, A., Matveev, A. (2025). Utilizing Speaker Models and Topic Markers for Emotion Recognition in Dialogues. In: Karpov, A., Delić, V. (eds) Speech and Computer. SPECOM 2024. Lecture Notes in Computer Science, vol. 15300. Springer, Cham. https://doi.org/10.1007/978-3-031-78014-1_10
DOI: https://doi.org/10.1007/978-3-031-78014-1_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-78013-4
Online ISBN: 978-3-031-78014-1
eBook Packages: Computer Science, Computer Science (R0)