
Forward–Backward Decoding Sequence for Regularizing End-to-End TTS



Abstract:

Neural end-to-end TTS systems such as Tacotron-like networks can generate very high-quality synthesized speech, even close to human recordings for in-domain text. However, they perform unsatisfactorily when scaled to challenging test sets. One concern is that the attention-based encoder-decoder network is an autoregressive generative sequence model and therefore suffers from "exposure bias": errors made early can be quickly amplified, harming subsequent sequence generation. To address this issue, we propose two novel methods that aim at predicting the future by improving the agreement between the forward and backward decoding sequences. The first (denoted MRBA) adds divergence regularization terms to the model training objective to maximize the agreement between two directional models, namely L2R (which generates targets from left to right) and R2L (which generates targets from right to left). The second (denoted BDR) operates at the decoder level and exploits future information during decoding: by introducing a regularization term into the training objective of the forward and backward decoders, the forward decoder's hidden states are forced to be close to the backward decoder's, so the hidden representations of a unidirectional decoder are encouraged to embed useful information about the future. Moreover, a joint training method is designed so that forward and backward decoding improve each other in an interactive process. Experimental results on both English and Mandarin datasets show that our proposed methods, especially the second (BDR), lead to a significant improvement in both robustness and overall naturalness, achieving clear preference advantages on a challenging test set and state-of-the-art performance on a general test set (outperforming the baseline, a revised version of Tacotron 2, by 0.13 and 0.12 MOS for English and Mandarin, respectively).
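
As a rough illustration of the BDR idea described in the abstract (a sketch under our own assumptions, not the authors' exact formulation), the following PyTorch-style snippet adds an L2 agreement term between the time-aligned hidden states of the forward (L2R) and backward (R2L) decoders to the two reconstruction losses. The L1 reconstruction loss, the tensor shapes, the function name bdr_training_loss, and the weight reg_weight are illustrative assumptions.

import torch
import torch.nn.functional as F

def bdr_training_loss(mel_target, mel_fwd, mel_bwd, h_fwd, h_bwd, reg_weight=0.5):
    # mel_fwd / h_fwd: mel predictions and hidden states of the forward (L2R)
    # decoder, shaped (batch, T, n_mels) and (batch, T, dim).
    # mel_bwd / h_bwd: the same quantities from the backward (R2L) decoder,
    # stored in right-to-left time order.
    # Reconstruction losses for both decoding directions.
    loss_fwd = F.l1_loss(mel_fwd, mel_target)
    loss_bwd = F.l1_loss(mel_bwd, torch.flip(mel_target, dims=[1]))
    # Agreement regularizer: pull the forward decoder's hidden states toward
    # the time-aligned backward decoder's, so the unidirectional decoder is
    # encouraged to embed information about future frames.
    agreement = F.mse_loss(h_fwd, torch.flip(h_bwd, dims=[1]))
    return loss_fwd + loss_bwd + reg_weight * agreement

In a joint training setup of the kind the abstract describes, gradients from the agreement term would flow into both decoders, so the two directions refine each other rather than one merely distilling the other.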
Page(s): 2067 - 2079
Date of Publication: 19 August 2019
