ISCA Archive Interspeech 2022

Improve emotional speech synthesis quality by learning explicit and implicit representations with semi-supervised training

Jiaxu He, Cheng Gong, Longbiao Wang, Di Jin, Xiaobao Wang, Junhai Xu, Jianwu Dang

Due to the lack of high-quality emotional speech synthesis datasets, synthesized speech still falls short of the naturalness and expressiveness needed for human-like communication. Moreover, existing emotional speech synthesis systems usually extract emotional information only from reference audio and ignore the sentiment information implicit in the text. We therefore propose a novel model that improves emotional speech synthesis quality by learning explicit and implicit representations with semi-supervised training. In addition to explicit emotional representations extracted from reference audio, we propose an implicit emotion representation learning method based on a graph neural network that considers the dependency relations of a sentence together with a text sentiment classification (TSC) task. To compensate for the lack of emotion-annotated datasets, we leverage large amounts of expressive data to train the proposed model in a semi-supervised manner. Experiments show that the proposed method improves the naturalness and expressiveness of synthetic speech and outperforms the baseline model.
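The implicit representation described above is, per the abstract, learned with a graph neural network over a sentence's dependency relations and supervised by a text sentiment classification objective when emotion labels are available. The following is a rough, hypothetical sketch of that idea in PyTorch; the dimensions, pooling choice, and classifier head are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DependencyGCNEncoder(nn.Module):
    """Toy graph-convolution encoder over a sentence's dependency graph."""

    def __init__(self, in_dim=256, hid_dim=256, num_emotions=5):
        super().__init__()
        self.gc1 = nn.Linear(in_dim, hid_dim)   # first graph convolution layer
        self.gc2 = nn.Linear(hid_dim, hid_dim)  # second graph convolution layer
        self.classifier = nn.Linear(hid_dim, num_emotions)  # auxiliary TSC head

    def forward(self, word_feats, adj):
        # word_feats: (batch, num_words, in_dim) word-level text features
        # adj: (batch, num_words, num_words) dependency adjacency with self-loops,
        #      row-normalized so each node averages over its syntactic neighbours
        h = torch.relu(self.gc1(adj @ word_feats))
        h = torch.relu(self.gc2(adj @ h))
        implicit_emb = h.mean(dim=1)                       # sentence-level implicit representation
        sentiment_logits = self.classifier(implicit_emb)   # used only when sentiment labels exist
        return implicit_emb, sentiment_logits

In a semi-supervised setup of the kind the abstract describes, the TSC loss would presumably be applied only to the emotion-annotated subset, while the pooled implicit embedding conditions the synthesizer for both labeled and unlabeled expressive utterances.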


doi: 10.21437/Interspeech.2022-11336

Cite as: He, J., Gong, C., Wang, L., Jin, D., Wang, X., Xu, J., Dang, J. (2022) Improve emotional speech synthesis quality by learning explicit and implicit representations with semi-supervised training. Proc. Interspeech 2022, 5538-5542, doi: 10.21437/Interspeech.2022-11336

@inproceedings{he22d_interspeech,
  author={Jiaxu He and Cheng Gong and Longbiao Wang and Di Jin and Xiaobao Wang and Junhai Xu and Jianwu Dang},
  title={{Improve emotional speech synthesis quality by learning explicit and implicit representations with semi-supervised training}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={5538--5542},
  doi={10.21437/Interspeech.2022-11336}
}