
A Comparison of Language Model Training Techniques in a Continuous Speech Recognition System for Serbian

  • Conference paper
  • Speech and Computer (SPECOM 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11096)


Abstract

In this paper, a number of language model training techniques are examined and utilized in a large-vocabulary continuous speech recognition system for the Serbian language (more than 120,000 words), namely the Mikolov and Yandex RNNLM toolkits, TensorFlow-based GPU approaches, and the CUED-RNNLM approach. The baseline acoustic model is a chain sub-sampled time-delay neural network, trained with cross-entropy training and a sequence-level objective function on a database of about 200 hours of speech. The baseline language model is a 3-gram model trained on the training part of the database transcriptions and the Serbian journalistic corpus (about 600,000 utterances), using the SRILM toolkit and Kneser-Ney smoothing with a pruning value of 10^-7 (the previous best configuration). The results are analyzed in terms of word and character error rates, as well as the perplexity of each language model on the training and validation sets. A relative improvement of 22.4% (best word error rate of 7.25%) is obtained in comparison to the baseline language model.
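
The abstract's numbers also imply the baseline's error rate: a 22.4% relative improvement down to a 7.25% word error rate corresponds to a baseline of roughly 7.25 / (1 - 0.224) ≈ 9.3%. As a concrete illustration of how such a baseline language model is typically built, the Python sketch below drives SRILM to train a 3-gram model with interpolated Kneser-Ney smoothing and 10^-7 pruning, then reports its perplexity on a validation set. This is a minimal sketch, not the authors' exact recipe: the file names are hypothetical, and SRILM's ngram-count and ngram binaries are assumed to be installed and on the PATH.

    import subprocess

    # Hypothetical file names (not from the paper): one utterance per line.
    # The paper's training text combines database transcriptions with a
    # Serbian journalistic corpus of about 600,000 utterances.
    train_text = "train.txt"
    valid_text = "valid.txt"
    lm_path = "baseline_3gram.arpa"

    # Train a 3-gram LM with Kneser-Ney discounting (SRILM's -kndiscount
    # implements the modified variant), interpolation, and entropy pruning
    # at 1e-7, the pruning value quoted in the abstract.
    subprocess.run(
        ["ngram-count",
         "-text", train_text,
         "-order", "3",
         "-kndiscount", "-interpolate",
         "-prune", "1e-7",
         "-lm", lm_path],
        check=True,
    )

    # Compute perplexity on the validation set, one of the metrics used
    # to compare the language models in the paper.
    subprocess.run(
        ["ngram", "-lm", lm_path, "-order", "3", "-ppl", valid_text],
        check=True,
    )

The RNNLM variants compared in the paper (Mikolov and Yandex RNNLM, TensorFlow-based models, CUED-RNNLM) would then typically be trained on the same text and used to rescore the recognizer's output in place of, or interpolated with, this n-gram baseline.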


References

  1. Goodman, J.T.: A bit of progress in language modeling, extended version. Microsoft Research, Technical report MSR-TR-2001-72 (2001)

  2. Rosenfeld, R.: Two decades of statistical language modeling: where do we go from here? Proc. IEEE 88, 1270–1278 (2000)

  3. Pakoci, E., Popović, B., Pekar, D.: Language model optimization for a deep neural network based speech recognition system for Serbian. In: Karpov, A., Potapova, R., Mporas, I. (eds.) SPECOM 2017. LNCS (LNAI), vol. 10458, pp. 483–492. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66429-3_48

  4. Mulder, W.D., Bethard, S., Moens, M.F.: A survey on the application of recurrent neural networks to statistical language modeling. Comput. Speech Lang. 30(1), 61–98 (2015)

  5. Mikolov, T., Kombrink, S., Burget, L., Černocký, J.H., Khudanpur, S.: Extensions of recurrent neural network language model. In: Proceedings of ICASSP, pp. 5528–5531. IEEE (2011)

  6. Popović, B., Pakoci, E., Pekar, D.: End-to-end large vocabulary speech recognition for the Serbian language. In: Karpov, A., Potapova, R., Mporas, I. (eds.) SPECOM 2017. LNCS (LNAI), vol. 10458, pp. 343–352. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66429-3_33

  7. Pakoci, E., Popović, B., Pekar, D.: Fast sequence-trained deep neural network models for Serbian speech recognition. In: 11th Digital Speech and Image Processing, DOGS, Novi Sad, Serbia, pp. 25–28 (2017)

  8. Mikolov, T., Kombrink, S., Deoras, A., Burget, L., Černocký, J.H.: RNNLM - recurrent neural network language modeling toolkit. In: Proceedings of ASRU Workshop (2011)

  9. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv:1301.3781 (2013)

  10. Niu, F., Recht, B., Ré, C., Wright, S.J.: Hogwild!: a lock-free approach to parallelizing stochastic gradient descent. In: Advances in Neural Information Processing Systems, pp. 693–701 (2011)

  11. Chen, X., Liu, X., Gales, M.J.F., Woodland, P.C.: Recurrent neural network language model training with noise contrastive estimation for speech recognition. In: Proceedings of ICASSP, pp. 5411–5415. IEEE (2015)

  12. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467 (2016)

  13. Chen, X., Liu, X., Qian, Y., Gales, M.J.F., Woodland, P.C.: CUED-RNNLM – an open-source toolkit for efficient training and evaluation of recurrent neural network language models. In: Proceedings of ICASSP, pp. 6000–6004. IEEE (2016)

  14. Povey, D., et al.: The Kaldi speech recognition toolkit. In: IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 1–4. IEEE Signal Processing Society (2011)

  15. Xu, H., et al.: A pruned RNNLM lattice-rescoring algorithm for automatic speech recognition (2017)


Acknowledgments

The work described in this paper was supported in part by the Ministry of Education, Science and Technological Development of the Republic of Serbia, within the project “Development of Dialogue Systems for Serbian and Other South Slavic Languages”; by the EUREKA project DANSPLAT, “A Platform for the Applications of Speech Technologies on Smartphones for the Languages of the Danube Region”, id E! 9944; and by the Provincial Secretariat for Higher Education and Scientific Research, within the project “Central Audio-Library of the University of Novi Sad”, No. 114-451-2570/2016-02.

Author information


Corresponding author

Correspondence to Branislav Popović.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Popović, B., Pakoci, E., Pekar, D. (2018). A Comparison of Language Model Training Techniques in a Continuous Speech Recognition System for Serbian. In: Karpov, A., Jokisch, O., Potapova, R. (eds) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science (LNAI), vol. 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_54


  • DOI: https://doi.org/10.1007/978-3-319-99579-3_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99578-6

  • Online ISBN: 978-3-319-99579-3

  • eBook Packages: Computer Science, Computer Science (R0)
