
Data Selection as an Alternative to Quality Estimation in Self-Learning for Low Resource Neural Machine Translation

  • Conference paper
  • Computational Science and Its Applications – ICCSA 2021 (ICCSA 2021)

Abstract

For many languages, the lack of sufficient parallel data to train translation models has resulted in the use of monolingual data, source and target, through self-learning and back-translation respectively. Most works that implemented the self-learning approach utilized a quality estimation system to ensure that the resulting additional training data is of sufficient quality to improve the model. However, a quality estimation system may not be available for many low resource languages, restricting the implementation of such an approach to very few of them. This work proposes data selection as an alternative to quality estimation. The approach ensures that the models learn only from the data that is closest to the domain of the test set, improving the performance of the translation models. While this approach is applicable to many, if not all, languages, we obtained similar and, in some implementations, even better results (+0.53 BLEU) than the self-training approach that was implemented using a quality estimation system on the low resource IWSLT’14 English-German dataset. We also showed that the proposed approach can be used to improve the performance of the back-translation approach, gaining +1.79 and +0.23 BLEU over standard back-translation and back-translation enhanced with quality-estimation-based self-learning, respectively.

This work was conducted as part of PhD research supported by the NITDEF 2018.
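To make the idea in the abstract concrete, the sketch below shows one way domain-based data selection could replace quality-estimation filtering in a self-learning loop: monolingual sentences are ranked by closeness to the test-set domain and only the closest are kept for forward translation. This is a minimal sketch, not the paper's implementation; the TF-IDF cosine-similarity criterion, the select_in_domain helper, and the top_n cut-off are all assumptions made for illustration (the paper's selection method may differ, e.g. feature decay algorithms).

    # Minimal sketch of transductive data selection for self-learning.
    # NOT the authors' implementation: TF-IDF similarity is an assumed
    # stand-in for whatever selection criterion the paper uses.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def select_in_domain(monolingual, test_set, top_n):
        """Return the top_n monolingual sentences closest to the test domain."""
        vectorizer = TfidfVectorizer()
        vectorizer.fit(monolingual + test_set)  # shared vocabulary for both corpora
        mono_vecs = vectorizer.transform(monolingual)
        test_vecs = vectorizer.transform(test_set)
        # Score each monolingual sentence by its best match in the test set.
        scores = cosine_similarity(mono_vecs, test_vecs).max(axis=1)
        ranked = sorted(zip(scores, monolingual), reverse=True)
        return [sentence for _, sentence in ranked[:top_n]]

    if __name__ == "__main__":
        mono = ["the talk covers neural networks",
                "stock prices fell sharply today",
                "speech translation needs more data"]
        test = ["neural machine translation of talks"]
        # The selected sentences would be translated by the current model and
        # the synthetic pairs added to the training data, in place of
        # quality-estimation filtering.
        print(select_in_domain(mono, test, top_n=2))

The same selection step would apply unchanged on the target side for back-translation, which is how the abstract's enhanced back-translation results could be obtained.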


Author information

Correspondence to Idris Abdulmumin.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Abdulmumin, I., Galadanci, B.S., Ahmad, I.S., Abdullahi, R.I. (2021). Data Selection as an Alternative to Quality Estimation in Self-Learning for Low Resource Neural Machine Translation. In: Gervasi, O., et al. Computational Science and Its Applications – ICCSA 2021. ICCSA 2021. Lecture Notes in Computer Science, vol. 12957. Springer, Cham. https://doi.org/10.1007/978-3-030-87013-3_24

  • DOI: https://doi.org/10.1007/978-3-030-87013-3_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87012-6

  • Online ISBN: 978-3-030-87013-3
