
Building Mongolian TTS Front-End with Encoder-Decoder Model by Using Bridge Method and Multi-view Features

  • Conference paper
Neural Information Processing (ICONIP 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1143)

Abstract

In text-to-speech (TTS) systems, the front-end is a critical component that extracts linguistic features from the input text. In this paper, we propose a Mongolian TTS front-end that jointly trains grapheme-to-phoneme conversion (G2P) and phrase-break prediction (PB). We use a bidirectional long short-term memory (LSTM) network as the encoder, and build two decoders, one for G2P and one for PB, that share this encoder. Meanwhile, we feed the source input features together with the encoder hidden states into the decoders, aiming to shorten the distance between the source and target sequences and to learn the alignment information better. More importantly, to obtain a robust representation for Mongolian words, which are agglutinative in nature and lack a sufficient training corpus, we design specific multi-view input features. Our subjective and objective experiments demonstrate the effectiveness of this proposal.
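The shared-encoder architecture with the bridge described above can be sketched in PyTorch. This is a minimal illustrative sketch, not the paper's actual implementation: it assumes the bridge is realized by concatenating the source embeddings with the shared BiLSTM encoder's hidden states at each position, and all class names, dimensions, and vocabulary sizes are hypothetical.

```python
import torch
import torch.nn as nn

class MongolianFrontEnd(nn.Module):
    """Sketch of a joint G2P + phrase-break front-end: a shared BiLSTM
    encoder feeds two task-specific decoders. Following the bridge idea,
    each decoder also sees the source embeddings, concatenated with the
    encoder hidden states (a simplification of the paper's method)."""

    def __init__(self, vocab_size, emb_dim=64, hid_dim=128,
                 n_phonemes=50, n_break_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared bidirectional LSTM encoder.
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        # Bridge: decoder input = encoder states + source embeddings.
        bridge_dim = 2 * hid_dim + emb_dim
        self.g2p_decoder = nn.LSTM(bridge_dim, hid_dim, batch_first=True)
        self.pb_decoder = nn.LSTM(bridge_dim, hid_dim, batch_first=True)
        self.g2p_out = nn.Linear(hid_dim, n_phonemes)
        self.pb_out = nn.Linear(hid_dim, n_break_tags)

    def forward(self, tokens):
        emb = self.embed(tokens)                 # (B, T, E)
        enc, _ = self.encoder(emb)               # (B, T, 2H)
        bridged = torch.cat([enc, emb], dim=-1)  # bridge source into decoders
        g2p_h, _ = self.g2p_decoder(bridged)
        pb_h, _ = self.pb_decoder(bridged)
        return self.g2p_out(g2p_h), self.pb_out(pb_h)

model = MongolianFrontEnd(vocab_size=100)
tokens = torch.randint(0, 100, (2, 7))           # batch of 2, sequence length 7
g2p_logits, pb_logits = model(tokens)            # (2, 7, 50) and (2, 7, 3)
```

In practice the two decoders would be trained with a combined loss (e.g. a weighted sum of the G2P and PB cross-entropies) so that the shared encoder benefits from both tasks.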


Notes

  1. There are two writing systems of Mongolian: Cyrillic Mongolian and traditional Mongolian. This paper only studies traditional Mongolian.

  2. http://sp-tk.sourceforge.net/.


Acknowledgments

This research was supported by the National Natural Science Foundation of China (No. 61563040, No. 61773224) and the Natural Science Foundation of Inner Mongolia (No. 2018MS06006, No. 2016ZD06).

Author information

Corresponding author

Correspondence to Feilong Bao.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, R., Bao, F., Gao, G. (2019). Building Mongolian TTS Front-End with Encoder-Decoder Model by Using Bridge Method and Multi-view Features. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1143. Springer, Cham. https://doi.org/10.1007/978-3-030-36802-9_68

  • DOI: https://doi.org/10.1007/978-3-030-36802-9_68

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36801-2

  • Online ISBN: 978-3-030-36802-9

  • eBook Packages: Computer Science; Computer Science (R0)
