
Emotional Speech Datasets for English Speech Synthesis Purpose: A Review


Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1037)

Abstract

In this paper, we review publicly available emotional speech datasets and their usability for state-of-the-art speech synthesis. This usability is conditioned by several characteristics of these datasets: the quality of the recordings, the quantity of data, and the emotional content captured in the data. We then present a dataset recorded to address the needs observed in this area. It contains data from male and female actors in English and a male actor in French. The database covers five emotion classes, so it could be suitable for building synthesis and voice transformation systems with the potential to control the emotional dimension.


Notes

  1. https://github.com/numediart/EmoV-DB.

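The repository above hosts the database described in the abstract. As a quick illustration of how such a corpus can be inventoried before building a synthesis system, the Python sketch below groups audio files by the emotion label embedded in their file names. The label strings and the assumption that each label appears in the file name are illustrative rather than taken from this paper; check the repository README for the actual layout and adjust accordingly.

    # Minimal sketch: inventory an EmoV-DB-style corpus by emotion class.
    # Assumptions (verify against https://github.com/numediart/EmoV-DB):
    # the corpus is unpacked under a local "EmoV-DB" directory, and each
    # .wav file carries its emotion label somewhere in its file name.
    from collections import defaultdict
    from pathlib import Path

    # Hypothetical label set; adjust to the labels used in the actual release.
    EMOTIONS = ("amused", "angry", "disgusted", "neutral", "sleepy")

    def index_by_emotion(root):
        """Group .wav files under `root` by the emotion label in their name."""
        index = defaultdict(list)
        for wav in Path(root).rglob("*.wav"):
            for emotion in EMOTIONS:
                if emotion in wav.name.lower():
                    index[emotion].append(wav)
                    break
        return index

    if __name__ == "__main__":
        for emotion, files in sorted(index_by_emotion("EmoV-DB").items()):
            print(f"{emotion}: {len(files)} utterances")

Counting utterances per class this way gives a quick check of whether each emotion is covered with enough material for training, one of the usability criteria discussed in the abstract.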


Acknowledgments

Noé Tits is funded through a PhD grant from the Fonds pour la Formation à la Recherche dans l’Industrie et l’Agriculture (FRIA), Belgium.

Author information


Corresponding author

Correspondence to Noé Tits.


Copyright information

© 2020 Springer Nature Switzerland AG


Cite this paper

Tits, N., El Haddad, K., Dutoit, T. (2020). Emotional Speech Datasets for English Speech Synthesis Purpose: A Review. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham. https://doi.org/10.1007/978-3-030-29516-5_6
