ISCA Archive Interspeech 2019

Building Large-Vocabulary ASR Systems for Languages Without Any Audio Training Data

Manasa Prasad, Daan van Esch, Sandy Ritchie, Jonas Fromseier Mortensen

When building automatic speech recognition (ASR) systems, some amount of audio and text data in the target language is typically needed. While text data can be obtained relatively easily for many languages, transcribed audio data is challenging to obtain. This presents a barrier to making voice technologies available in more of the world's languages. In this paper, we present a way to build an ASR system for a language even in the complete absence of audio training data in that language. We do this by simply re-using an existing acoustic model from a phonologically similar language, without any modification or adaptation towards the target language. The basic insight is that, if two languages are sufficiently similar in terms of their phonological systems, an acoustic model trained on one should hold up relatively well when used for the other. We describe how we tailor our pronunciation models to enable such re-use, and show experimental results across a number of languages from various language families. We also provide a theoretical analysis of the situations in which this approach is likely to work. Our results show that it is possible to achieve a word error rate (WER) of less than 20% using this method.
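The core idea of matching a target language to a phonologically similar donor language can be illustrated with a minimal sketch. The following is not the paper's method; it is a hypothetical example assuming languages are represented by their phoneme inventories, and using Jaccard overlap as a stand-in similarity measure. The function names and toy inventories are invented for illustration.

```python
# Hypothetical sketch: pick a "donor" language whose acoustic model might be
# re-used for a target language, based on phoneme-inventory overlap.
# (Illustrative only; the paper's actual selection criteria may differ.)

def jaccard_similarity(inventory_a, inventory_b):
    """Overlap between two phoneme inventories: |A ∩ B| / |A ∪ B|."""
    a, b = set(inventory_a), set(inventory_b)
    return len(a & b) / len(a | b)

def pick_donor(target_inventory, candidates):
    """Return the candidate language whose inventory overlaps most with the target.

    candidates: dict mapping language name -> iterable of phoneme symbols.
    """
    return max(
        candidates,
        key=lambda lang: jaccard_similarity(target_inventory, candidates[lang]),
    )

# Toy inventories (not real phonological data):
target = ["p", "t", "k", "a", "i", "u", "m", "n"]
candidates = {
    "lang_x": ["p", "t", "k", "a", "i", "u", "m", "n", "s"],   # near-identical
    "lang_y": ["b", "d", "g", "e", "o", "r", "l", "z"],        # little overlap
}

print(pick_donor(target, candidates))  # → lang_x
```

In practice, a plain set-overlap measure like this would ignore how distinctive or confusable individual phonemes are; the paper's analysis of when cross-language re-use works is considerably more nuanced.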


doi: 10.21437/Interspeech.2019-1775

Cite as: Prasad, M., Esch, D.v., Ritchie, S., Mortensen, J.F. (2019) Building Large-Vocabulary ASR Systems for Languages Without Any Audio Training Data. Proc. Interspeech 2019, 271-275, doi: 10.21437/Interspeech.2019-1775

@inproceedings{prasad19_interspeech,
  author={Manasa Prasad and Daan van Esch and Sandy Ritchie and Jonas Fromseier Mortensen},
  title={{Building Large-Vocabulary ASR Systems for Languages Without Any Audio Training Data}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={271--275},
  doi={10.21437/Interspeech.2019-1775}
}