ISCA Archive Interspeech 2016

Automatic Speech Transcription for Low-Resource Languages — The Case of Yoloxóchitl Mixtec (Mexico)

Vikramjit Mitra, Andreas Kathol, Jonathan D. Amith, Rey Castillo García

The rate at which endangered languages can be documented has been highly constrained by human factors. Although digital recording of natural speech in endangered languages may proceed at a fairly robust pace, transcription of this material is not only time-consuming but also severely limited by the lack of native-speaker personnel proficient in the orthography of their mother tongue. Our NSF-funded project in the Documenting Endangered Languages (DEL) program proposes to tackle this problem from two sides: first, via a tool that helps native speakers become proficient in the orthographic conventions of their language, and second, by using automatic speech recognition (ASR) output to assist in the transcription of newly recorded audio data. In the present study, we focus exclusively on progress in developing speech recognition for the language of interest, Yoloxóchitl Mixtec (YM), an Oto-Manguean language spoken by fewer than 5000 speakers on the Pacific coast of Guerrero, Mexico. In particular, we present results from an initial set of experiments and discuss future directions through which better and more robust acoustic models for endangered languages with limited resources can be created.


doi: 10.21437/Interspeech.2016-546

Cite as: Mitra, V., Kathol, A., Amith, J.D., García, R.C. (2016) Automatic Speech Transcription for Low-Resource Languages — The Case of Yoloxóchitl Mixtec (Mexico). Proc. Interspeech 2016, 3076-3080, doi: 10.21437/Interspeech.2016-546

@inproceedings{mitra16b_interspeech,
  author={Vikramjit Mitra and Andreas Kathol and Jonathan D. Amith and Rey Castillo García},
  title={{Automatic Speech Transcription for Low-Resource Languages — The Case of Yoloxóchitl Mixtec (Mexico)}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={3076--3080},
  doi={10.21437/Interspeech.2016-546}
}