An Empirical Study on Feature Extraction in DNN-Based Speech Emotion Recognition

  • Conference paper
  • In: HCI International 2020 – Late Breaking Posters (HCII 2020)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1293)

Abstract

This empirical study focuses on speech emotion recognition using speech data extracted from video clips. Although many studies have reported on speech emotion recognition, the majority were based on acted, clean speech. A more challenging and realistic task is recognizing emotion in spontaneous, noisy speech from video clips. In the current study, the state-of-the-art i-vector features are applied and experimentally evaluated, and comparisons with the widely used low-level descriptors (LLDs) and functionals are presented. To improve classification accuracy, a method based on late fusion is investigated; the proposed method achieves higher accuracy than the individual features used alone. For classification, a fully connected deep neural network (DNN) with several hidden layers is used.
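The abstract describes the pipeline only at a high level. As a rough illustration of how such a setup could look, the sketch below (not the authors' implementation) trains one fully connected DNN on i-vector features and one on LLD/functional features, then combines their class posteriors by late fusion. The feature dimensionalities, network sizes, fusion weights, and random data are placeholder assumptions, not values from the paper.

# Minimal sketch (not the authors' code): late fusion of two fully
# connected DNN classifiers, one per feature stream. All data,
# dimensionalities, and hyperparameters below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 200, 50, 4

# Placeholder features: 400-dim i-vectors and 384-dim LLD functionals
# (dimensions assumed for illustration only).
X_ivec_train = rng.normal(size=(n_train, 400))
X_func_train = rng.normal(size=(n_train, 384))
y_train = rng.integers(0, n_classes, size=n_train)
X_ivec_test = rng.normal(size=(n_test, 400))
X_func_test = rng.normal(size=(n_test, 384))

# One fully connected DNN with several hidden layers per feature stream.
dnn_ivec = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=300, random_state=0)
dnn_func = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=300, random_state=0)
dnn_ivec.fit(X_ivec_train, y_train)
dnn_func.fit(X_func_train, y_train)

# Late fusion: average the class posteriors of the two streams
# (equal weights assumed here) and pick the highest-scoring emotion class.
posteriors = 0.5 * dnn_ivec.predict_proba(X_ivec_test) \
           + 0.5 * dnn_func.predict_proba(X_func_test)
fused_labels = posteriors.argmax(axis=1)
print(fused_labels[:10])

Averaging posteriors with equal weights is only one common form of score-level late fusion; the weighting and network configuration in the paper would come from its experimental setup.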

Author information

Corresponding author

Correspondence to Panikos Heracleous.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Heracleous, P., Takai, K., Wang, Y., Yasuda, K., Yoneyama, A., Mohammad, Y. (2020). An Empirical Study on Feature Extraction in DNN-Based Speech Emotion Recognition. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2020 – Late Breaking Posters. HCII 2020. Communications in Computer and Information Science, vol 1293. Springer, Cham. https://doi.org/10.1007/978-3-030-60700-5_40

  • DOI: https://doi.org/10.1007/978-3-030-60700-5_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60699-2

  • Online ISBN: 978-3-030-60700-5

  • eBook Packages: Computer Science, Computer Science (R0)
