ISCA Archive Interspeech 2021

A Universal Multi-Speaker Multi-Style Text-to-Speech via Disentangled Representation Learning Based on Rényi Divergence Minimization

Dipjyoti Paul, Sankar Mukherjee, Yannis Pantazis, Yannis Stylianou

In this paper, we present a universal multi-speaker, multi-style Text-to-Speech (TTS) synthesis system which is able to generate speech from text with speaker characteristics and a speaking style similar to those of a given reference signal. Training is conducted on non-parallel data, and voices are generated in an unsupervised manner, i.e., neither style annotations nor speaker labels are required. To avoid leaking content information into the style embeddings (referred to as “content leakage”) and leaking speaker information into the style embeddings (referred to as “style leakage”), we suggest a novel Rényi-divergence-based disentangled representation framework trained through adversarial learning. Similar to mutual information minimization, the proposed approach explicitly estimates, via a variational formula, and then minimizes the Rényi divergence between the joint distribution and the product of marginals for the content-style and style-speaker pairs. By doing so, the content, style and speaker spaces become representative and (ideally) independent of each other. Our proposed system greatly reduces content leakage, improving the word error rate by approximately 17–19% relative to the baseline system. In MOS speech quality, the proposed algorithm achieves an improvement of about 16–20%, while MOS style similarity improves by 15% relative to the baseline.
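To make the variational estimation concrete, the sketch below shows one way the joint-versus-product-of-marginals Rényi divergence could be estimated from minibatches of paired content and style embeddings. This is a minimal PyTorch sketch, not the authors' code: it assumes a Rényi-Donsker-Varadhan-type variational formula (which recovers the standard Donsker-Varadhan bound as alpha approaches 1), and the names Critic and renyi_estimate, the embedding dimensions, and the choice alpha=0.5 are illustrative assumptions.

import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Small MLP scoring an (content, style) embedding pair.
    Plays the role of the variational function g; architecture is assumed."""
    def __init__(self, content_dim=256, style_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, c, s):
        return self.net(torch.cat([c, s], dim=-1)).squeeze(-1)

def renyi_estimate(critic, c, s, alpha=0.5):
    """Minibatch estimate of R_alpha(joint || product of marginals) via a
    Rényi-Donsker-Varadhan-type variational formula (an assumption here):
        R_alpha = 1/(alpha-1) * log E_joint[exp((alpha-1) g)]
                - 1/alpha     * log E_prod [exp(alpha g)]
    c, s: paired content/style embeddings, each of shape (batch, dim)."""
    n = c.size(0)
    g_joint = critic(c, s)                    # paired samples ~ joint distribution
    g_prod = critic(c, s[torch.randperm(n)])  # shuffled styles ~ product of marginals
    log_mean_joint = torch.logsumexp((alpha - 1.0) * g_joint, dim=0) - math.log(n)
    log_mean_prod = torch.logsumexp(alpha * g_prod, dim=0) - math.log(n)
    return log_mean_joint / (alpha - 1.0) - log_mean_prod / alpha

In an adversarial setup of the kind the abstract describes, the critic would ascend this estimate to tighten the variational bound, while the content and style encoders would descend it (added to the TTS training loss with a weighting factor), pushing the two embedding spaces toward independence; the same construction would apply to the style-speaker pair.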


doi: 10.21437/Interspeech.2021-660

Cite as: Paul, D., Mukherjee, S., Pantazis, Y., Stylianou, Y. (2021) A Universal Multi-Speaker Multi-Style Text-to-Speech via Disentangled Representation Learning Based on Rényi Divergence Minimization. Proc. Interspeech 2021, 3625-3629, doi: 10.21437/Interspeech.2021-660

@inproceedings{paul21_interspeech,
  author={Dipjyoti Paul and Sankar Mukherjee and Yannis Pantazis and Yannis Stylianou},
  title={{A Universal Multi-Speaker Multi-Style Text-to-Speech via Disentangled Representation Learning Based on Rényi Divergence Minimization}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3625--3629},
  doi={10.21437/Interspeech.2021-660}
}