Abstract
This article presents an overview of automatic speech recognition (ASR) technologies and describes the application of the Hiformer model, an advanced variant of the Transformer, to Kazakh speech recognition. A literature review of Kazakh speech recognition systems was conducted. The structure of the Hiformer model is described, along with how its hierarchical attention mechanisms can be applied in different parts of an attention-based encoder-decoder (AED) architecture (encoder self-attention, decoder self-attention, and cross-attention). An experiment was conducted to assess the efficacy of the Hiformer model on Kazakh speech recognition tasks. The experimental results demonstrate that the Hiformer model outperforms its predecessors, the Transformer and Conformer models, in handling the complexities of Kazakh speech recognition, underscoring its potential for advancing ASR technology for languages with distinctive linguistic characteristics. In recognizing Kazakh speech, the Hiformer model reduced the word error rate (WER) by 3.7% and the character error rate (CER) by 2.2% compared to the Transformer model.
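The WER and CER figures quoted above are standard edit-distance metrics: the Levenshtein distance between the recognizer's output and a reference transcript, normalized by the reference length, computed over words for WER and over characters for CER. As a minimal illustrative sketch (not code from the paper), they can be computed as:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences, via the classic
    # dynamic-programming recurrence with a single rolling row.
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))   # substitution
            prev = cur
    return dp[-1]

def wer(reference, hypothesis):
    # Word error rate: edit distance over word sequences,
    # normalized by the number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    # Character error rate: edit distance over character sequences
    # (spaces removed), normalized by the number of reference characters.
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    return edit_distance(ref, hyp) / len(ref)
```

In practice, published WER/CER numbers are averaged over an entire test set (total edits divided by total reference length), not per utterance as in this sketch.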
Change history
03 December 2024: A correction to this paper has been published.
Acknowledgements
This work was supported by the Ministry of Science and Higher Education of the Republic of Kazakhstan under grant SP AP19675574, "Development of an intelligent system for diagnosing structural changes in pathologies based on the analysis and processing of biomedical images."
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Mamyrbayev, O., Kurmetkan, T., Oralbekova, D., Zhumazhan, N. (2024). A Study of Kazakh Speech Recognition in Hiformer Model. In: Nguyen, N.T., et al. Recent Challenges in Intelligent Information and Database Systems. ACIIDS 2024. Communications in Computer and Information Science, vol 2145. Springer, Singapore. https://doi.org/10.1007/978-981-97-5934-7_28
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5933-0
Online ISBN: 978-981-97-5934-7