Abstract
Code-switching is common in South African speech, yet text data for language modelling remains extremely scarce. We present techniques that allow long short-term memory (LSTM) recurrent neural networks to be applied more effectively as generative models for producing artificial code-switched text with which to augment these small training sets. We propose the use of prompting to favour the generation of sentences containing intra-sentential language switches, and introduce an extensive LSTM hyperparameter search that specifically optimises the utility of the artificially generated code-switched text. Using these strategies, we generate artificial code-switched text for four under-resourced South African languages and evaluate its utility for language modelling. We find that the optimised models generate text that leads to consistent perplexity and word error rate improvements for all four language pairs, especially at language switches. This improves on previous work using the same speech data, in which text generated without such optimisation did not improve performance. We conclude that prompting and targeted hyperparameter optimisation are effective means of improving language model data augmentation for code-switched speech recognition.
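To make the prompting idea concrete, the following is a minimal sketch of prompted sampling from a trained word-level LSTM language model. Everything in it is an illustrative assumption rather than the authors' implementation: `model` stands for any Keras-style network that returns per-step softmax distributions over the vocabulary, `word2idx`/`idx2word` are a fitted vocabulary mapping, and the prompt words (including the `</s>` end-of-sentence token) are hypothetical.

```python
import numpy as np


def sample_next(probs, temperature=1.0):
    # Temperature-scaled sampling from a softmax output distribution.
    logits = np.log(np.maximum(probs, 1e-9)) / temperature
    exp = np.exp(logits - logits.max())
    return int(np.random.choice(len(probs), p=exp / exp.sum()))


def generate_sentence(model, word2idx, idx2word, prompt,
                      max_len=25, temperature=1.0):
    # Seed the LSTM with a prompt in one language; any words it then
    # produces in the other language yield intra-sentential switches.
    seq = [word2idx[w] for w in prompt]
    for _ in range(max_len):
        # Assumption: model maps a word-index sequence of length T to
        # per-step softmax outputs of shape (1, T, vocab_size).
        probs = model.predict(np.array([seq]), verbose=0)[0, -1]
        nxt = sample_next(probs, temperature)
        seq.append(nxt)
        if idx2word[nxt] == "</s>":  # assumed sentence-end token
            break
    return " ".join(idx2word[i] for i in seq[len(prompt):])


# Hypothetical usage: an isiZulu prompt biases generation to start in
# isiZulu, so later English words produce code-switched sentences.
# text = generate_sentence(lm, word2idx, idx2word, ["<s>", "ngifuna"])
```

In the same spirit, the utility-driven hyperparameter search described in the abstract could be realised by wrapping such a generator in a loop that, for each candidate LSTM configuration, trains a language model on the real plus generated text and retains the configuration with the lowest development-set perplexity.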
Acknowledgments
We would like to thank the Council for Scientific and Industrial Research (CSIR), Department of Science and Technology, South Africa, for providing access to their CHPC cluster. We gratefully acknowledge the support of Telkom South Africa.
Cite this paper
Jansen van Vueren, J., Niesler, T. (2021). Optimised Code-Switched Language Model Data Augmentation in Four Under-Resourced South African Languages. In: Karpov, A., Potapova, R. (eds.) Speech and Computer. SPECOM 2021. Lecture Notes in Computer Science, vol. 12997. Springer, Cham. https://doi.org/10.1007/978-3-030-87802-3_28