
Lattice Based Transcription Loss for End-to-End Speech Recognition

Journal of Signal Processing Systems

Abstract

End-to-end speech recognition systems have been successfully deployed and have become competitive replacements for hybrid systems. A common loss function for training end-to-end systems is connectionist temporal classification (CTC), which maximizes the log likelihood of the transcription sequence given the feature sequence. However, CTC training has a notable weakness: the training criterion differs from the test criterion, since training maximizes log likelihood while testing measures word error rate (WER). In this work, we introduce a new lattice based transcription loss function to address this deficiency of CTC training. In contrast to the CTC objective, our new method optimizes the model directly with respect to the transcription loss. We evaluate the new algorithm on a small speech recognition task, the Wall Street Journal (WSJ) dataset; a large vocabulary speech recognition task, the Switchboard dataset; and a low resource speech recognition task, OpenKWS16. Results demonstrate that our algorithm outperforms the traditional CTC criterion, achieving a 7% relative WER reduction. In addition, we compare the new algorithm with discriminative training algorithms such as state-level minimum Bayes risk (SMBR) and minimum word error (MWE), with results supporting the benefits of the new algorithm.
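The abstract contrasts CTC's maximum-likelihood objective with a criterion computed directly from the transcription error. The sketch below is a minimal, hypothetical Python illustration of that general idea, approximating the expectation with an N-best list rather than the full lattice used in the paper; the function names, the softmax normalisation of hypothesis scores, and the toy data are assumptions made for illustration, not details taken from the authors' method.

```python
# Hypothetical sketch: expected transcription loss over an N-best list.
# Each hypothesis' edit distance to the reference is weighted by its
# posterior probability, so lowering the loss lowers the expected word error.

import math
from typing import List, Tuple


def edit_distance(hyp: List[str], ref: List[str]) -> int:
    """Standard Levenshtein distance between two word sequences."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(hyp)][len(ref)]


def expected_transcription_loss(nbest: List[Tuple[float, List[str]]],
                                reference: List[str]) -> float:
    """Expected edit distance over an N-best list.

    `nbest` holds (log_score, word_sequence) pairs, e.g. extracted from a
    decoding lattice; log scores are normalised into posteriors via softmax.
    """
    max_score = max(s for s, _ in nbest)
    weights = [math.exp(s - max_score) for s, _ in nbest]
    z = sum(weights)
    posteriors = [w / z for w in weights]
    return sum(p * edit_distance(hyp, reference)
               for p, (_, hyp) in zip(posteriors, nbest))


if __name__ == "__main__":
    # Toy example: three hypotheses with made-up log scores.
    ref = "the cat sat on the mat".split()
    nbest = [(-1.0, "the cat sat on the mat".split()),
             (-2.5, "the cat sat on a mat".split()),
             (-4.0, "a cat sat on the mat".split())]
    print(expected_transcription_loss(nbest, ref))  # approx. 0.21
```

Because the per-hypothesis edit distances are constants, this expectation is differentiable with respect to the hypothesis posteriors, which is what makes a transcription-level loss usable as a training criterion in place of pure log likelihood.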




Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant Nos. 61370034 and 61403224.

Author information


Corresponding author

Correspondence to Wei-Qiang Zhang.


About this article


Cite this article

Kang, J., Zhang, W.-Q., Liu, W.-W., et al. Lattice Based Transcription Loss for End-to-End Speech Recognition. Journal of Signal Processing Systems, 90, 1013–1023 (2018). https://doi.org/10.1007/s11265-017-1292-0

