Abstract
In this paper, we introduce a system that recognizes emotion from speech and displays the corresponding facial expression using a 2-dimensional emotion space. Four emotional states are classified with an artificial neural network (ANN). Features derived from the speech signal, pitch and loudness, contribute quantitatively to the classification of emotions. First, we analyze the acoustic elements for use as emotional features, and these elements are evaluated with an ANN classifier. Second, we implement an avatar (a simply drawn face) whose facial expressions change naturally according to the dynamic emotion space model.
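The idea of a 2-dimensional emotion space can be sketched as follows. This is a minimal illustration, not the paper's implementation: the axis names (valence, arousal), the coordinates assigned to the four emotional states, and the avatar parameters (mouth curvature, eye openness) are all assumptions made for the example.

```python
import math

# Hypothetical placement of four emotional states in a 2-D
# (valence, arousal) emotion space; the paper's actual axes and
# coordinates may differ.
EMOTIONS = {
    "joy":     ( 0.8,  0.6),
    "anger":   (-0.7,  0.7),
    "sadness": (-0.6, -0.6),
    "neutral": ( 0.0,  0.0),
}

def classify(valence, arousal):
    """Return the emotional state whose reference point is nearest
    to the given position in the emotion space."""
    return min(EMOTIONS,
               key=lambda name: math.dist(EMOTIONS[name], (valence, arousal)))

def blend_expression(valence, arousal):
    """Map a point in the emotion space to illustrative avatar
    parameters: mouth curvature in [-1, 1] and eye openness in [0, 1]."""
    mouth_curve = valence                     # smile for positive valence
    eye_open = 0.5 + 0.5 * arousal            # wider eyes at high arousal
    return mouth_curve, max(0.0, min(1.0, eye_open))
```

Because the avatar parameters vary continuously with the position in the space, the facial expression changes smoothly as the recognized emotion drifts, rather than jumping between four discrete faces.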
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Park, CH., Byun, KS., Sim, KB. (2005). The Implementation of the Emotion Recognition from Speech and Facial Expression System. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3611. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539117_14
Print ISBN: 978-3-540-28325-6
Online ISBN: 978-3-540-31858-3