Exploring End-to-End Techniques for Low-Resource Speech Recognition

  • Conference paper
Speech and Computer (SPECOM 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11096)


Abstract

In this work we present a simple grapheme-based system for low-resource speech recognition, using the Babel data for Turkish spontaneous speech (80 h). We investigated the performance of different neural network architectures, including fully convolutional networks, recurrent networks, and a ResNet with GRU layers. Different features and normalization techniques are compared as well. We also propose a CTC-loss modification that uses segmentation during training, which leads to improved results when decoding with a small beam size.
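For context on the grapheme-based CTC setup mentioned above, the following is a minimal sketch of the core CTC decoding convention: the network emits one grapheme (or a blank symbol) per frame, and decoding collapses consecutive repeats and drops blanks. The label strings here are invented for illustration and are not from the paper.

```python
# Sketch of CTC output collapsing (greedy decoding), assuming a
# hypothetical grapheme label set and "_" as the CTC blank symbol.
BLANK = "_"

def ctc_collapse(frame_labels):
    """Collapse a per-frame label sequence into the output transcript:
    merge consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for sym in frame_labels:
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return "".join(out)

# Repeated characters in the output require a blank between them,
# e.g. the double 'l' below survives because a blank separates them:
print(ctc_collapse(list("hh_e_ll_l_oo")))  # -> "hello"
```

The blank symbol is what lets a CTC model with per-frame outputs represent repeated graphemes, which matters for a grapheme-level alphabet such as Turkish.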

Our best model achieved a word error rate of 45.8%, which, to the best of our knowledge, is the best reported result for end-to-end systems using in-domain data for this task.
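The word error rate quoted above is the standard metric: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, normalized by the reference length. A minimal sketch (the example sentences are invented):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat down", "the cat sat down"))  # -> 0.0
print(wer("the cat sat down", "the cat sat"))       # -> 0.25
```

In practice WER is computed with a scoring tool such as sclite rather than by hand, but the underlying computation is this edit distance.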



Acknowledgements

This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132).

Author information


Correspondence to Vladimir Bataev.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Bataev, V., Korenevsky, M., Medennikov, I., Zatvornitskiy, A. (2018). Exploring End-to-End Techniques for Low-Resource Speech Recognition. In: Karpov, A., Jokisch, O., Potapova, R. (eds) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science, vol 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_4

  • DOI: https://doi.org/10.1007/978-3-319-99579-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99578-6

  • Online ISBN: 978-3-319-99579-3

  • eBook Packages: Computer Science, Computer Science (R0)
