
Diffusion-Based Approach to Style Modeling in Expressive TTS

  • Conference paper
  • In: Intelligent Systems (BRACIS 2022)

Abstract

In this article, we propose integrating denoising diffusion probabilistic models (DDPMs) into an end-to-end text-to-speech system to learn a distribution of reference speaking styles in an unsupervised manner. By applying a few steps of the forward noising process to an embedding extracted from a reference mel spectrogram, we exploit the information preserved in the noised embedding to shorten the diffusion chain and reconstruct an improved style embedding with only a few reverse steps, thereby performing style transfer. In addition, a proposed combination of spectrogram reconstruction and denoising losses allows the acoustic model to be conditioned on the synthesized style embeddings. A subjective perceptual evaluation was conducted to assess the naturalness and style transfer capability of the proposed approach. The results show a 5-point increase in the mean naturalness rating, and raters preferred our approach (43%) over state-of-the-art models (29%) in the style transfer scenario.
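
To make the mechanism concrete, below is a minimal, self-contained PyTorch sketch of the shallow-diffusion idea the abstract describes: instead of denoising from pure Gaussian noise over the full chain, a reference style embedding is noised for only k << T forward steps and then refined with k ancestral reverse steps. All names and values here (StyleDenoiser, the embedding dimension, T, k, the linear beta schedule, and the loss weighting) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of shallow diffusion over a style embedding.
    import torch
    import torch.nn as nn

    T = 1000                                  # full diffusion chain length (assumed)
    k = 50                                    # shallow chain actually traversed (assumed)
    betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t

    class StyleDenoiser(nn.Module):
        """Toy epsilon-predictor conditioned on the timestep (illustrative)."""
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                     nn.Linear(256, dim))
        def forward(self, z_t, t):
            t_feat = t.float().view(-1, 1) / T   # crude timestep feature
            return self.net(torch.cat([z_t, t_feat], dim=-1))

    def q_sample(z0, t):
        """Forward process q(z_t | z_0) in closed form (standard DDPM)."""
        eps = torch.randn_like(z0)
        ab = alpha_bar[t].view(-1, 1)
        return ab.sqrt() * z0 + (1 - ab).sqrt() * eps, eps

    @torch.no_grad()
    def shallow_reverse(model, z_k):
        """Run only k ancestral reverse steps, starting from the noised embedding."""
        z = z_k
        for t in reversed(range(k)):
            tt = torch.full((z.size(0),), t, dtype=torch.long)
            eps_hat = model(z, tt)
            ab, a, b = alpha_bar[t], alphas[t], betas[t]
            mean = (z - b / (1 - ab).sqrt() * eps_hat) / a.sqrt()
            z = mean + b.sqrt() * torch.randn_like(z) if t > 0 else mean
        return z

    # Style transfer: start from a reference embedding instead of pure noise.
    model = StyleDenoiser()
    ref_style = torch.randn(4, 128)            # stand-in for a reference style embedding
    z_k, eps = q_sample(ref_style, torch.full((4,), k - 1, dtype=torch.long))
    new_style = shallow_reverse(model, z_k)    # refined embedding for the acoustic model

    # Combined objective sketched in the abstract: spectrogram reconstruction
    # plus the denoising (epsilon-prediction) loss; lam is an assumed weight.
    # loss = l1(mel_pred, mel_ref) + lam * mse(eps_hat, eps)

Because the noised embedding still carries most of the reference's style information, far fewer reverse steps are needed than when sampling from pure noise, which is what allows the diffusion chain to be shortened.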

Notes

  1. https://github.com/coqui-ai/TTS.

  2. Our code is available at https://github.com/AI-Unicamp/TTS.

  3. Listening samples are available at https://ai-unicamp.github.io/publications/tts/diffusion_for_style/.

Acknowledgment

The authors would like to thank the Research and Development Institute CPQD and the Ministry of Science, Technology and Innovations for supporting and funding this project. This work is supported by the BI0S - Brazilian Institute of Data Science, grant #2020/09838-0, São Paulo Research Foundation (FAPESP).

Author information

Corresponding author

Correspondence to Leonardo B. de M. M. Marques.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

B. de M. M. Marques, L. et al. (2022). Diffusion-Based Approach to Style Modeling in Expressive TTS. In: Xavier-Junior, J.C., Rios, R.A. (eds) Intelligent Systems. BRACIS 2022. Lecture Notes in Computer Science, vol 13653. Springer, Cham. https://doi.org/10.1007/978-3-031-21686-2_18

  • DOI: https://doi.org/10.1007/978-3-031-21686-2_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21685-5

  • Online ISBN: 978-3-031-21686-2

  • eBook Packages: Computer Science, Computer Science (R0)
