
Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information

• Conference paper
Text, Speech and Dialogue (TSD 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6231)

Abstract

The standard features used in emotion recognition carry, besides the emotion-related information, also cues about the speaker. This is expected, since the variation that emotional colouring introduces into speech is similar to the variation in the speech signal caused by different speakers. We therefore present a transformation, derived via gradient descent, for decoupling the emotion and speaker information contained in the acoustic features. The Interspeech ’09 Emotion Challenge feature set is used as the baseline for the audio part. A similar procedure is employed on the video signal, where the nuisance attribute projection (NAP) is used to derive the transformation matrix, which contains information about the emotional state of the speaker. Finally, different NAP transformation matrices are compared using canonical correlations. The audio and video sub-systems are combined at the matching-score level using different fusion techniques. The presented system is assessed on the publicly available eNTERFACE ’05 database, where significant improvements in recognition performance are observed compared to the state-of-the-art baseline.
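For illustration, below is a minimal sketch of two generic building blocks named in the abstract: a nuisance attribute projection and matching-score-level fusion. This is not the authors' implementation; the choice of nuisance labels, the number of removed directions k, the min-max normalisation, and the fusion weight w are all assumptions made here, and neither the gradient-descent audio transformation nor the canonical-correlation comparison of NAP matrices is reproduced.

```python
import numpy as np

def nap_projection(X, nuisance_labels, k=10):
    """Sketch of a generic nuisance attribute projection (NAP).

    X               : (n_samples, n_dims) feature matrix
    nuisance_labels : per-sample nuisance attribute (hypothetical choice)
    k               : number of nuisance directions to remove (assumption)

    Returns P = I - U U^T, where U spans the top-k directions of
    within-nuisance-class variability.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(nuisance_labels)
    # Centre each nuisance class so only within-class variation remains.
    residuals = []
    for lab in np.unique(labels):
        Xc = X[labels == lab]
        residuals.append(Xc - Xc.mean(axis=0, keepdims=True))
    R = np.vstack(residuals)
    # Principal directions of the pooled nuisance variability.
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    U = Vt[:k].T                                   # (n_dims, k), orthonormal
    return np.eye(X.shape[1]) - U @ U.T

def fuse_scores(audio_scores, video_scores, w=0.5):
    """Matching-score-level fusion after min-max normalisation
    (one common normalisation choice; the weight w is an assumption)."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w * minmax(audio_scores) + (1.0 - w) * minmax(video_scores)
```

In this sketch a feature matrix is projected as X @ P, classification proceeds on the projected features, and fuse_scores combines the per-class scores of the audio and video sub-systems.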

References

  1. Gajšek, R., Štruc, V., Dobrišek, S., Mihelič, F.: Emotion Recognition Using Linear Transformations in Combination with Video. In: Proceedings of Interspeech 2009 (2009)

  2. Martin, O., Kotsia, I., Macq, B., Pitas, I.: The eNTERFACE’05 Audio-Visual Emotion Database. In: ICDEW 2006: Proceedings of the 22nd International Conference on Data Engineering Workshops, Washington, DC, USA. IEEE Computer Society Press, Los Alamitos (2006)

  3. Schuller, B., Steidl, S., Batliner, A.: The Interspeech 2009 Emotion Challenge. In: Proceedings of Interspeech 2009, pp. 312–315 (2009)

  4. Peng, H., Long, F., Ding, C.: Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 1226–1238 (2005)

  5. Gales, M.J.F.: Maximum likelihood linear transformations for HMM-based speech recognition. Computer Speech and Language 12, 75–98 (1998)

  6. Viola, P., Jones, M.: Robust real-time face detection. International Journal of Computer Vision 57, 137–154 (2004)

  7. Štruc, V., Vesnicer, B., Mihelič, F., Pavešić, N.: Removing illumination artifacts from face images using the nuisance attribute projection. In: ICASSP 2010, pp. 846–849 (2010)

  8. Kim, T., Kittler, J., Cipolla, R.: Discriminative learning and recognition of image set classes using canonical correlations. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 1005–1018 (2007)

  9. Yamaguchi, O., Fukui, K., Maeda, K.: Face recognition using temporal image sequence. In: Proc. of AFGR, pp. 318–323 (1998)

  10. Eyben, F., Wöllmer, M., Schuller, B.: openEAR – Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. In: Proc. 4th International HUMAINE Association Conference on Affective Computing and Intelligent Interaction (ACII 2009), Amsterdam, The Netherlands, vol. I, pp. 576–581. IEEE, Los Alamitos (2009)

  11. Jain, A.K., Nandakumar, K., Ross, A.: Score normalization in multimodal biometric systems. Pattern Recognition 38, 2270–2285 (2005)

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gajšek, R., Štruc, V., Mihelič, F. (2010). Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2010. Lecture Notes in Computer Science (LNAI), vol. 6231. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15760-8_35

  • DOI: https://doi.org/10.1007/978-3-642-15760-8_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15759-2

  • Online ISBN: 978-3-642-15760-8

  • eBook Packages: Computer Science (R0)
