Abstract
Human communication is naturally colored by emotion, often triggered by the other speakers involved in the interaction. To build a natural spoken dialogue system, it is therefore essential to consider emotional aspects: not only identifying the user's emotion, but also investigating why it occurred. This ability is especially important in situated dialogue, where the current situation shapes the interaction. In this paper, we propose a method for automatic emotion recognition using a support vector machine (SVM) and present a further analysis of emotion triggers. Experiments were performed on an emotionally colorful dialogue corpus, and the results show recognition accuracy that surpasses chance.
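The abstract does not specify the toolkit or feature set, so the following is a minimal sketch under stated assumptions: utterance-level acoustic feature vectors (random placeholders here, standing in for real extracted features) classified into categorical emotions with an RBF-kernel SVM via scikit-learn's SVC, with cross-validated accuracy compared against chance. This illustrates the general SVM recipe named in the abstract, not the authors' exact pipeline.

```python
# Hedged sketch of SVM-based emotion recognition (not the chapter's exact setup).
# Assumptions: one fixed-length acoustic feature vector per utterance and
# categorical emotion labels; both are randomly generated placeholders below.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 384))   # placeholder utterance-level features
y = rng.integers(0, 4, size=200)  # placeholder labels: 4 emotion classes

# Standardize features, then train an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# 5-fold cross-validated accuracy; with 4 balanced classes, chance is ~0.25,
# the baseline the abstract's "surpasses random guessing" claim refers to.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f} (chance ~ 0.25)")
```

With real features in place of the placeholders, the same pipeline yields the accuracy figure one would compare against the chance baseline.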
Acknowledgments
Part of this research was supported by a Japan Student Services Organization (JASSO) scholarship.
Copyright information
© 2016 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Lubis, N., Sakti, S., Neubig, G., Toda, T., Purwarianti, A., Nakamura, S. (2016). Emotion and Its Triggers in Human Spoken Dialogue: Recognition and Analysis. In: Rudnicky, A., Raux, A., Lane, I., Misu, T. (eds) Situated Dialog in Speech-Based Human-Computer Interaction. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-21834-2_10
DOI: https://doi.org/10.1007/978-3-319-21834-2_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-21833-5
Online ISBN: 978-3-319-21834-2
eBook Packages: Engineering (R0)