Convolutional Variational Autoencoders for Spectrogram Compression in Automatic Speech Recognition

  • Conference paper
Recent Trends in Analysis of Images, Social Networks and Texts (AIST 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1357)

Abstract

For many Automatic Speech Recognition (ASR) tasks, spectrogram audio features give better results than Mel-Frequency Cepstral Coefficients (MFCC), but in practice they are hard to use because of the high dimensionality of the feature space. This paper presents an alternative approach to generating a compressed spectrogram representation, based on Convolutional Variational Autoencoders (VAE). A Convolutional VAE model was trained on a subsample of the LibriSpeech dataset to reconstruct short fragments of audio spectrograms (25 ms) from a 13-dimensional embedding. A model trained to produce a 40-dimensional embedding of 300 ms fragments was then used to generate features for a corpus of spoken commands, the GoogleSpeechCommands dataset. An ASR system was built on the generated features and compared to a model trained with MFCC features.
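The core of the approach is a convolutional VAE that encodes a spectrogram fragment into a low-dimensional latent vector and decodes it back. Below is a minimal PyTorch sketch of such a model; the layer sizes, the assumed input shape of 128 mel bins by 8 frames, and the loss weighting are illustrative assumptions rather than the authors' published configuration (their code is available at https://github.com/nsu-ai-team/audio_vae).

# Minimal sketch of a convolutional VAE that compresses a spectrogram
# fragment into a low-dimensional embedding, in the spirit of the paper.
# The architecture, input shape (128 mel bins x 8 frames) and latent size
# are assumptions for illustration, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 13):
        super().__init__()
        # Encoder: two strided convolutions, then linear heads that
        # predict the mean and log-variance of the latent Gaussian.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> (16, 64, 4)
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> (32, 32, 2)
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 32 * 2, latent_dim)
        self.fc_logvar = nn.Linear(32 * 32 * 2, latent_dim)
        # Decoder mirrors the encoder with transposed convolutions.
        self.fc_dec = nn.Linear(latent_dim, 32 * 32 * 2)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # -> (16, 64, 4)
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # -> (1, 128, 8)
        )

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and sigma.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.dec(self.fc_dec(z).view(-1, 32, 32, 2))
        return recon, mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl


if __name__ == "__main__":
    x = torch.randn(4, 1, 128, 8)   # batch of hypothetical spectrogram patches
    model = ConvVAE(latent_dim=13)
    recon, mu, logvar = model(x)
    print(recon.shape, vae_loss(recon, x, mu, logvar).item())

After training, the encoder mean vector (mu) serves as the compressed feature for each fragment, which is the role the 13-dimensional and 40-dimensional embeddings play in the paper's ASR experiments.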


References

  1. Amodei, D., et al.: Deep Speech 2: end-to-end speech recognition in English and Mandarin. In: ICML (2016)

  2. Dai, B., et al.: Hidden talents of the variational autoencoder (2017)

  3. van den Oord, A., et al.: Neural discrete representation learning. In: NIPS (2017)

  4. van den Oord, A., et al.: WaveNet: a generative model for raw audio. In: SSW (2016)

  5. Zhu, Z., et al.: Siamese recurrent auto-encoder representation for query-by-example spoken term detection. In: Interspeech (2018)

  6. Milde, B., Biemann, C.: Unspeech: unsupervised speech context embeddings. In: Interspeech (2018)

  7. Chung, Y.-A., Glass, J.R.: Speech2Vec: a sequence-to-sequence framework for learning word embeddings from speech. In: Interspeech (2018)

  8. LibriSpeech. http://www.openslr.org/12/

  9. Google Speech Commands. https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html

  10. audio_vae. https://github.com/nsu-ai-team/audio_vae


Author information

Correspondence to Olga Yakovenko.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Yakovenko, O., Bondarenko, I. (2021). Convolutional Variational Autoencoders for Spectrogram Compression in Automatic Speech Recognition. In: van der Aalst, W.M.P., et al. Recent Trends in Analysis of Images, Social Networks and Texts. AIST 2020. Communications in Computer and Information Science, vol 1357. Springer, Cham. https://doi.org/10.1007/978-3-030-71214-3_10


  • DOI: https://doi.org/10.1007/978-3-030-71214-3_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71213-6

  • Online ISBN: 978-3-030-71214-3

  • eBook Packages: Computer Science, Computer Science (R0)
