A Study of Kazakh Speech Recognition in Hiformer Model

  • Conference paper
Recent Challenges in Intelligent Information and Database Systems (ACIIDS 2024)

Abstract

This article presents an overview of automatic speech recognition (ASR) technologies and describes the application of the Hiformer model, an advanced variant of the Transformer, to Kazakh speech recognition. A literature review of Kazakh speech recognition systems is provided. The structure of the Hiformer model is described, along with how its hierarchical attention mechanism can be applied in different parts of an attention-based encoder-decoder (AED) architecture: encoder self-attention, decoder self-attention, and cross attention. An experiment was conducted to assess the efficacy of the Hiformer model on Kazakh speech recognition tasks. The experimental results demonstrate that the Hiformer model outperforms its predecessors, the Transformer and Conformer models, in handling the complexities of Kazakh speech, underscoring its potential for advancing ASR technology for languages with distinctive linguistic characteristics. Specifically, the Hiformer model reduced the word error rate (WER) by 3.7% and the character error rate (CER) by 2.2% compared to the Transformer model.
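The WER and CER figures reported above are the standard edit-distance-based metrics used throughout ASR evaluation. As a minimal sketch (not the authors' evaluation code), they can be computed from the Levenshtein distance between the reference and the hypothesis, measured over words for WER and over characters for CER:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences: minimum number of
    substitutions, insertions, and deletions to turn ref into hyp."""
    n = len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance(ref[:i], hyp[:j])
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution or match
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, `wer("the cat sat", "the cat sam")` is 1/3 (one substituted word out of three), and `cer("speech", "speach")` is 1/6. Production evaluations typically also apply text normalization (casing, punctuation) before scoring, which is omitted here.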


Change history

  • 03 December 2024

    A correction has been published.


Acknowledgements

This work was supported by the Ministry of Science and Higher Education of the Republic of Kazakhstan (grant SP AP19675574, "Development of an intelligent system for diagnosing structural changes in pathologies based on the analysis and processing of biomedical images").

Author information

Corresponding author

Correspondence to Turdybek Kurmetkan.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Mamyrbayev, O., Kurmetkan, T., Oralbekova, D., Zhumazhan, N. (2024). A Study of Kazakh Speech Recognition in Hiformer Model. In: Nguyen, N.T., et al. Recent Challenges in Intelligent Information and Database Systems. ACIIDS 2024. Communications in Computer and Information Science, vol 2145. Springer, Singapore. https://doi.org/10.1007/978-981-97-5934-7_28

  • DOI: https://doi.org/10.1007/978-981-97-5934-7_28

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5933-0

  • Online ISBN: 978-981-97-5934-7

  • eBook Packages: Computer Science, Computer Science (R0)
