Toward More Reliable Emotion Recognition of Vocal Sentences by Emphasizing Information of Korean Ending Boundary Tones

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3642)

Abstract

Autonomous machines that interact with humans should be able to perceive states of emotion and attitude from implicit messages in order to obtain voluntary cooperation from their users. Voice is the easiest and most natural channel for exchanging such messages. Automatic systems for recognizing states of emotion and attitude have relied on features derived from the pitch and energy of uttered sentences. Their performance can be further improved with the linguistic knowledge that a specific tonal section of a sentence is related to the states of emotion and attitude. In this paper, we attempt to improve the emotion recognition rate by incorporating such linguistic knowledge about Korean ending boundary tones into an automatic system built on pitch-related features and multilayer perceptrons. Experiments on a Korean emotional speech database confirm a substantial improvement.
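
As a rough illustration of the kind of system the abstract describes, the sketch below classifies emotion from pitch-related features of an utterance while representing the sentence-final region, where Korean ending boundary tones are realized, as a separate feature group. It is not the authors' implementation: the feature set, the 20% span taken as the ending segment, the network size, and the NumPy/scikit-learn tooling are all illustrative assumptions.

# Minimal sketch (not the paper's method): pitch-feature emotion classification
# with the sentence-final segment represented explicitly, since Korean ending
# boundary tones occur there. The 20% ending span and MLP size are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def pitch_features(f0):
    """Summary statistics of a voiced-frame F0 contour (Hz)."""
    f0 = f0[f0 > 0]                      # keep voiced frames only
    if f0.size == 0:
        return np.zeros(5)
    return np.array([f0.mean(), f0.std(), f0.min(), f0.max(),
                     f0[-1] - f0[0]])    # overall rise/fall of the contour

def utterance_features(f0, ending_ratio=0.2):
    """Whole-utterance pitch features concatenated with features of the
    final segment, so boundary-tone information gets its own dimensions."""
    cut = int(len(f0) * (1.0 - ending_ratio))
    return np.concatenate([pitch_features(f0), pitch_features(f0[cut:])])

# Hypothetical usage: X_f0 is a list of per-utterance F0 contours, y the labels.
# X = np.stack([utterance_features(f0) for f0 in X_f0])
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)

Concatenating a dedicated final-segment feature vector is one simple way to emphasize the boundary-tone section; the weighting scheme actually used in the paper may differ.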

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lee, TS., Park, M., Kim, TS. (2005). Toward More Reliable Emotion Recognition of Vocal Sentences by Emphasizing Information of Korean Ending Boundary Tones. In: Ślęzak, D., Yao, J., Peters, J.F., Ziarko, W., Hu, X. (eds) Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing. RSFDGrC 2005. Lecture Notes in Computer Science, vol 3642. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11548706_32

  • DOI: https://doi.org/10.1007/11548706_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28660-8

  • Online ISBN: 978-3-540-31824-8

  • eBook Packages: Computer Science, Computer Science (R0)
