
Emotion and Its Triggers in Human Spoken Dialogue: Recognition and Analysis

  • Chapter in: Situated Dialog in Speech-Based Human-Computer Interaction

Abstract

Human communication is naturally colored by emotion, often triggered by the other speakers involved in the interaction. To build a natural spoken dialogue system, it is therefore essential to consider emotional aspects, not only by identifying the user's emotion but also by investigating why that emotion occurred. This ability is especially important in situated dialogue, where the current situation plays a role in the interaction. In this paper, we propose a method for automatic emotion recognition using a support vector machine (SVM) and present a further analysis of emotion triggers. Experiments were performed on an emotionally colorful dialogue corpus, and the results show accuracy that surpasses chance level.
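As a concrete illustration of the classification setup the abstract describes, the sketch below trains an SVM on per-utterance feature vectors. This is a minimal sketch, not the authors' pipeline: the feature dimensionality, the four-class emotion label set, the random placeholder data, and the scikit-learn stack are all assumptions made here for illustration.

```python
# Minimal sketch of utterance-level emotion classification with an SVM.
# The feature vectors, label set, and hyperparameters are placeholders;
# they are not the chapter's actual corpus or feature extraction setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_utterances, n_features = 200, 384          # hypothetical acoustic features
X = rng.normal(size=(n_utterances, n_features))
y = rng.integers(0, 4, size=n_utterances)    # hypothetical 4 emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM (one-vs-one multiclass).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

On random features the held-out accuracy should hover near the 25% chance level for four classes, which is the kind of baseline the chapter's method is reported to surpass on real acoustic features.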

Acknowledgments

Part of this research was supported by a Japan Student Services Organization (JASSO) scholarship.

Author information

Correspondence to Nurul Lubis.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Lubis, N., Sakti, S., Neubig, G., Toda, T., Purwarianti, A., Nakamura, S. (2016). Emotion and Its Triggers in Human Spoken Dialogue: Recognition and Analysis. In: Rudnicky, A., Raux, A., Lane, I., Misu, T. (eds) Situated Dialog in Speech-Based Human-Computer Interaction. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-21834-2_10

  • DOI: https://doi.org/10.1007/978-3-319-21834-2_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-21833-5

  • Online ISBN: 978-3-319-21834-2

  • eBook Packages: Engineering, Engineering (R0)
