
Simultaneously Trained NN-Based Acoustic Model and NN-Based Feature Extractor

  • Conference paper in Text, Speech, and Dialogue (TSD 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9302)
Abstract

This paper demonstrates how a standard feature extraction method such as PLP can be successfully replaced by a neural network, and how techniques such as mean normalization, variance normalization and delta coefficients can be incorporated into a neural-network-based acoustic model at the same time. Our experiments show that this replacement is significantly beneficial. Moreover, a neural-network-based voice activity detector was also employed and trained jointly with the neural-network-based feature extractor and the neural-network-based acoustic model. The system performance was evaluated on the British English speech corpus WSJCAM0.
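The core idea of joint training can be sketched in a few lines of code. The following is a minimal, hypothetical PyTorch sketch, not the authors' actual architecture: all layer sizes, input dimensions and the use of a BatchNorm-style layer in place of mean/variance normalization are illustrative assumptions. It shows a feature-extractor network standing in for PLP, composed with an acoustic-model network, so that one loss trains both parts simultaneously.

```python
# Hedged sketch of simultaneous training of a NN feature extractor and a NN
# acoustic model (illustrative only; dimensions and layers are assumptions).
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Maps a window of stacked input frames to a learned feature vector,
    playing the role of hand-crafted PLP features."""
    def __init__(self, in_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.Tanh(),
                                 nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.net(x)

class AcousticModel(nn.Module):
    """Predicts phone-state posteriors from (normalized) learned features."""
    def __init__(self, feat_dim, n_states):
        super().__init__()
        # stands in for mean/variance normalization of the features
        self.norm = nn.BatchNorm1d(feat_dim)
        self.net = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_states))

    def forward(self, f):
        return self.net(self.norm(f))

# hypothetical dimensions: 11 stacked frames of 40 coefficients, 39-dim features
extractor = FeatureExtractor(in_dim=11 * 40, feat_dim=39)
model = AcousticModel(feat_dim=39, n_states=2000)
opt = torch.optim.SGD(list(extractor.parameters()) + list(model.parameters()),
                      lr=0.01)

frames = torch.randn(32, 11 * 40)         # dummy mini-batch of stacked frames
targets = torch.randint(0, 2000, (32,))   # dummy phone-state labels
opt.zero_grad()
loss = nn.functional.cross_entropy(model(extractor(frames)), targets)
loss.backward()                           # gradients flow through both networks
opt.step()                                # both networks are updated together
```

Because the feature extractor and the acoustic model form one differentiable graph, a single backward pass updates both; a jointly trained voice activity detector could be attached to the same graph in the same manner.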



Author information

Correspondence to Jan Zelinka.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Zelinka, J., Vaněk, J., Müller, L. (2015). Simultaneously Trained NN-Based Acoustic Model and NN-Based Feature Extractor. In: Král, P., Matoušek, V. (eds) Text, Speech, and Dialogue. TSD 2015. Lecture Notes in Computer Science, vol 9302. Springer, Cham. https://doi.org/10.1007/978-3-319-24033-6_27


  • DOI: https://doi.org/10.1007/978-3-319-24033-6_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24032-9

  • Online ISBN: 978-3-319-24033-6

  • eBook Packages: Computer Science, Computer Science (R0)
