
Quantitative Comparison of Translation by Transformers-Based Neural Network Models

  • Conference paper
  • In: Enterprise Information Systems (ICEIS 2022)

Abstract

One of the major tasks in producer-customer relations is processing customer feedback. Because international companies offer their products in many countries, feedback can arrive in a variety of languages, which complicates automatic text processing. One solution is to adopt a single common language: feedback written in other languages is automatically translated into it and then processed by the configured pipeline. This paper compares existing open models for automatic text translation. Only language models with the Transformer architecture were considered, since they outperform other existing approaches in translation quality. The models are M2M100, mBART, and OPUS-MT (Helsinki-NLP). A custom test dataset was built because translations of texts specific to the subject area were required. To create the dataset, Microsoft Azure Translation was chosen as the reference translation, with manual verification of its grammar. Translations produced by each model were compared with the reference translation using two metrics: BLEU and METEOR. The possibility of fast fine-tuning of the models was also investigated, with the aim of improving translation quality on the specific lexicon of the problem area. Among the reviewed models, M2M100 turned out to be the best in terms of translation quality, but it is also the most difficult to fine-tune.
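The abstract compares model outputs against a reference translation using BLEU and METEOR. As a rough illustration of what these metrics measure, here is a minimal, self-contained sketch: sentence-level BLEU as the geometric mean of modified n-gram precisions with a brevity penalty, and the core of METEOR as exact unigram matching with an F-mean and a fragmentation penalty. The smoothing choice and the greedy alignment are simplifications for illustration; real evaluations should use established implementations (e.g. sacrebleu or NLTK), and full METEOR also matches stems and synonyms.

```python
import math
from collections import Counter


def _ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    if not cand:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(_ngrams(cand, n))
        ref_counts = Counter(_ngrams(ref, n))
        total = sum(cand_counts.values())
        # clipped (modified) precision: each candidate n-gram counts
        # at most as often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        if total == 0:            # candidate shorter than n tokens
            precision = 1e-9
        elif overlap == 0:        # simple smoothing so one empty order
            precision = 1.0 / (2 * total)  # does not zero the whole score
        else:
            precision = overlap / total
        log_precisions.append(math.log(precision))
    # brevity penalty punishes candidates shorter than the reference
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * math.exp(sum(log_precisions) / max_n)


def meteor_sketch(reference, candidate):
    """Simplified METEOR: exact unigram matches only, recall-weighted
    F-mean, and a fragmentation penalty based on matched chunks."""
    ref, cand = reference.split(), candidate.split()
    used = [False] * len(ref)
    align = []  # (candidate index, reference index) of matched unigrams
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if not used[j] and tok == rtok:
                used[j] = True
                align.append((i, j))
                break
    m = len(align)
    if m == 0:
        return 0.0
    precision, recall = m / len(cand), m / len(ref)
    fmean = 10 * precision * recall / (recall + 9 * precision)
    # a chunk is a run of matches that is contiguous in both sentences
    chunks = 1
    for (i1, j1), (i2, j2) in zip(align, align[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return fmean * (1 - penalty)
```

A candidate identical to the reference scores 1.0 on this BLEU and close to 1.0 on the METEOR sketch (the fragmentation penalty never fully vanishes); dropping or reordering words lowers both scores.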



Acknowledgements

This paper results from a collaboration between SPC RAS and Festo SE & Co. KG. The methodology and experiment setup (Sect. 3) were partially supported by the State Research, project number FFZF-2022-0005.


Corresponding author

Correspondence to Nikolay Teslya.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Smirnov, A., Teslya, N., Shilov, N., Frank, D., Minina, E., Kovacs, M. (2023). Quantitative Comparison of Translation by Transformers-Based Neural Network Models. In: Filipe, J., Śmiałek, M., Brodsky, A., Hammoudi, S. (eds) Enterprise Information Systems. ICEIS 2022. Lecture Notes in Business Information Processing, vol 487. Springer, Cham. https://doi.org/10.1007/978-3-031-39386-0_8


  • DOI: https://doi.org/10.1007/978-3-031-39386-0_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-39385-3

  • Online ISBN: 978-3-031-39386-0

  • eBook Packages: Computer Science, Computer Science (R0)
