Affect Recognition in Real Life Scenarios

Part of the book series: Lecture Notes in Computer Science (volume 6456)

Abstract

Affect awareness is important not only for improving human-computer interaction but also for detecting atypical behaviours, danger, and crisis situations in surveillance and human-behaviour-monitoring applications. The present work aims at the detection and recognition of specific affective states, such as panic, anger, and happiness, in close-to-real-world conditions. The affect recognition scheme investigated here relies on utterance-level audio parameterization and a robust pattern recognition scheme based on the Gaussian Mixture Model with Universal Background Model (GMM-UBM) paradigm. We evaluate the applicability of the suggested architecture on the PROMETHEUS database, which was collected in a number of indoor and outdoor conditions. The experimental results demonstrate the potential of the suggested architecture for the challenging task of affect recognition in real-world conditions; however, further improvement of the recognition performance is needed before deployment in practical applications.
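To make the recognition back-end concrete, the following is a minimal sketch of a GMM-UBM classifier of the kind described above, written with scikit-learn's GaussianMixture. It is not the authors' implementation: it assumes utterance-level acoustic feature matrices (e.g., frame-wise MFCC-style vectors produced by a front-end such as openEAR), and the component count, relevance factor, and helper names are illustrative choices. A universal background model is trained on pooled frames, per-emotion models are derived by MAP adaptation of the component means, and an utterance is assigned to the emotion with the highest average log-likelihood ratio against the UBM.

```python
# Minimal GMM-UBM affect-recognition sketch (illustrative; not the chapter's code).
# Assumes each utterance is a (n_frames, n_features) array of acoustic features.
import copy
import numpy as np
from sklearn.mixture import GaussianMixture


def train_ubm(all_frames, n_components=64, seed=0):
    """Universal background model trained on frames pooled over all emotions."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed, max_iter=200)
    ubm.fit(all_frames)
    return ubm


def map_adapt_means(ubm, class_frames, relevance=16.0):
    """Emotion-specific GMM obtained by MAP adaptation of the UBM means only."""
    resp = ubm.predict_proba(class_frames)            # (n_frames, n_components)
    n_k = resp.sum(axis=0) + 1e-10                    # soft counts per component
    first_moment = resp.T @ class_frames / n_k[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]        # adaptation coefficients
    adapted = copy.deepcopy(ubm)                      # weights/covariances kept
    adapted.means_ = alpha * first_moment + (1.0 - alpha) * ubm.means_
    return adapted


def classify(utterance_frames, ubm, class_models):
    """Pick the emotion whose adapted model best explains the utterance,
    scored as the average frame log-likelihood ratio against the UBM."""
    ubm_llk = ubm.score(utterance_frames)             # mean log-likelihood
    scores = {label: model.score(utterance_frames) - ubm_llk
              for label, model in class_models.items()}
    return max(scores, key=scores.get)


# Usage sketch with random stand-in data (replace with real acoustic features):
rng = np.random.default_rng(0)
train = {"panic": rng.normal(1.0, 1.0, (500, 13)),
         "anger": rng.normal(-1.0, 1.0, (500, 13)),
         "neutral": rng.normal(0.0, 1.0, (500, 13))}
ubm = train_ubm(np.vstack(list(train.values())), n_components=8)
models = {label: map_adapt_means(ubm, frames) for label, frames in train.items()}
print(classify(rng.normal(1.0, 1.0, (120, 13)), ubm, models))
```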

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Kostoulas, T., Ganchev, T., Fakotakis, N. (2011). Affect Recognition in Real Life Scenarios. In: Esposito, A., Esposito, A.M., Martone, R., Müller, V.C., Scarpetta, G. (eds) Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues. Lecture Notes in Computer Science, vol 6456. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-18184-9_37

  • DOI: https://doi.org/10.1007/978-3-642-18184-9_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-18183-2

  • Online ISBN: 978-3-642-18184-9
