Abstract
The standard features used in emotion recognition carry, besides emotion-related information, cues about the speaker's identity. This is expected, since the variation introduced by emotionally colored speech resembles the variation in the speech signal caused by different speakers. We therefore present a transformation, derived by gradient descent, that decouples the emotion and speaker information contained in the acoustic features. The Interspeech '09 Emotion Challenge feature set serves as the baseline for the audio part. A similar procedure is applied to the video signal, where nuisance attribute projection (NAP) is used to derive a transformation matrix that captures information about the speaker's emotional state. Finally, different NAP transformation matrices are compared using canonical correlations. The audio and video sub-systems are combined at the matching-score level using different fusion techniques. The presented system is assessed on the publicly available eNTERFACE '05 database, where significant improvements in recognition performance are observed over the state-of-the-art baseline.
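The building blocks mentioned above can be illustrated in a few lines of numpy. The sketch below is not the authors' implementation: the function names, the SVD-based estimate of the nuisance subspace, and the min-max sum-rule fusion (one common choice for matching-score fusion) are illustrative assumptions.

```python
import numpy as np

def nap_projection(X, n_nuisance):
    """Nuisance attribute projection (sketch).

    X: (n_samples, n_dims) examples of the nuisance variation
    (e.g. within-emotion speaker differences). Returns the
    projection P = I - U U^T that removes the span of the top
    n_nuisance nuisance directions.
    """
    Xc = X - X.mean(axis=0)
    # Principal directions of the nuisance variation.
    U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
    U = U[:, :n_nuisance]                       # (n_dims, n_nuisance)
    return np.eye(X.shape[1]) - U @ U.T

def canonical_correlations(A, B):
    """Canonical correlations between the column spaces of two
    basis matrices: the singular values of Qa^T Qb, where Qa and
    Qb are orthonormal bases of the two subspaces."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

def sum_rule_fusion(audio_scores, video_scores):
    """Matching-score fusion sketch: min-max normalise each
    modality's scores, then combine them with the sum rule."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    return minmax(audio_scores) + minmax(video_scores)
```

The NAP projector is symmetric and idempotent, so applying it twice changes nothing; canonical correlations of identical subspaces are all 1, which makes them a convenient similarity measure between transformation matrices.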
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Gajšek, R., Štruc, V., Mihelič, F. (2010). Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2010. Lecture Notes in Computer Science, vol 6231. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15760-8_35
DOI: https://doi.org/10.1007/978-3-642-15760-8_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15759-2
Online ISBN: 978-3-642-15760-8
eBook Packages: Computer Science (R0)