A Data Augmentation Method Based on Sub-tree Exchange for Low-Resource Neural Machine Translation

  • Conference paper
  • In: Advanced Intelligent Computing Technology and Applications (ICIC 2023)

Abstract

Neural machine translation (NMT) has recently attracted considerable attention for its ability to produce highly accurate translations. Despite this promise, NMT faces a major hurdle: insufficient training data, which can degrade translation performance, particularly for low-resource languages. This obstacle limits the applicability of NMT across diverse domains. To alleviate the issue, a novel data augmentation (DA) method is proposed to expand the training set. It enriches the diversity of the training samples with pseudo-parallel sentence pairs generated by sub-tree exchange and back-translation. The effectiveness of the proposed method is validated through a series of experiments on both simulated and real low-resource translation tasks. The results show that the method outperforms other DA methods and significantly improves translation quality over a strong baseline.
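The full algorithm is only available in the paper itself, but the core idea named in the abstract can be illustrated. The following is a minimal sketch, not the authors' exact method: it assumes constituency parses of two source sentences (written here as hand-made bracketed trees via `nltk.Tree`) and exchanges the first sub-trees that share a syntactic label; the function name `swap_subtrees` and the toy sentences are this sketch's own inventions.

```python
# Illustrative sketch only: one plausible form of sub-tree exchange for
# data augmentation, operating on constituency parses via nltk.Tree.
from nltk import Tree

def swap_subtrees(t1: Tree, t2: Tree, label: str = "NP") -> bool:
    """Swap the first subtree labelled `label` between two parse trees, in place."""
    def first_pos(t: Tree):
        for pos in t.treepositions():
            # Skip the root position () and string leaves.
            if pos and isinstance(t[pos], Tree) and t[pos].label() == label:
                return pos
        return None

    p1, p2 = first_pos(t1), first_pos(t2)
    if p1 is None or p2 is None:
        return False
    t1[p1], t2[p2] = t2[p2], t1[p1]  # RHS is evaluated first, so this swaps
    return True

src_a = Tree.fromstring("(S (NP (PRP I)) (VP (VBD saw) (NP (DT a) (NN dog))))")
src_b = Tree.fromstring("(S (NP (PRP She)) (VP (VBD bought) (NP (DT a) (NN book))))")

if swap_subtrees(src_a, src_b):
    for tree in (src_a, src_b):
        # Prints the pseudo source sentences "She saw a dog" and
        # "I bought a book". In a full pipeline, each pseudo source would
        # be paired with a target side (e.g., generated by a reverse NMT
        # model, i.e. back-translation) to form the pseudo-parallel pairs
        # that augment the training set.
        print(" ".join(tree.leaves()))
```

Because only constituents of the same syntactic category are exchanged, the pseudo sentences tend to stay grammatical, which is what distinguishes this family of DA methods from random word-level noising.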



Acknowledgement

This work was supported by the National Natural Science Foundation of Liaoning Province, China (Grant nos. 2021-YKLH-12 and 2022-YKLH-18).

Author information


Corresponding author

Correspondence to Fuxue Li.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Chi, C., Li, F., Yan, H., Guan, H., Zhao, Z. (2023). A Data Augmentation Method Based on Sub-tree Exchange for Low-Resource Neural Machine Translation. In: Huang, DS., Premaratne, P., Jin, B., Qu, B., Jo, KH., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14089. Springer, Singapore. https://doi.org/10.1007/978-981-99-4752-2_53

  • DOI: https://doi.org/10.1007/978-981-99-4752-2_53

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4751-5

  • Online ISBN: 978-981-99-4752-2

  • eBook Packages: Computer Science, Computer Science (R0)
