Improving Low-Resource NMT with Parser Generated Syntactic Phrases

  • Conference paper
  • First Online:
Computational Linguistics and Intelligent Text Processing (CICLing 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13451)

Abstract

Recently, neural machine translation (NMT) has become highly successful, achieving state-of-the-art results on many resource-rich language pairs. However, it fails when a sufficiently large parallel corpus is not available for a domain and/or language pair. In this paper, we propose an effective method for NMT under a low-resource scenario. The model operates by augmenting the original training data with examples extracted from the parse trees of the target-side sentences. These phrases provide important evidence to the model, as they are relatively short and linguistically well-formed. Our experiment on the benchmark WMT14 dataset shows an improvement of 3.28 BLEU and 3.41 METEOR points for Hindi-to-English translation. Evaluation on the same language pair with much smaller datasets from the judicial and health domains shows similar trends, with significant performance improvements in terms of BLEU (15.63 for judicial and 15.97 for health) and METEOR (14.30 for judicial and 15.93 for health).
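The phrase-extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes target-side parses are available in bracketed (Penn Treebank style) form, as produced by the Stanford parser, and the constituent labels (`NP`, `VP`, `PP`) and length cap are illustrative choices.

```python
import re

def parse_sexp(s):
    """Parse a bracketed constituency tree such as
    '(NP (DT the) (NN court))' into (label, children) tuples,
    where leaves are plain word strings."""
    tokens = re.findall(r"\(|\)|[^\s()]+", s)
    pos = 0

    def node():
        nonlocal pos
        pos += 1                      # consume "("
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:                     # terminal word
                children.append(tokens[pos])
                pos += 1
        pos += 1                      # consume ")"
        return (label, children)

    return node()

def subtrees(tree):
    """Yield every subtree in pre-order."""
    yield tree
    for child in tree[1]:
        if isinstance(child, tuple):
            yield from subtrees(child)

def leaves(tree):
    """Return the words covered by a subtree, left to right."""
    words = []
    for child in tree[1]:
        words.extend(leaves(child) if isinstance(child, tuple) else [child])
    return words

def extract_phrases(tree, labels=("NP", "VP", "PP"), max_len=5):
    """Collect short constituent phrases that could be added as
    extra target-side training examples."""
    phrases = []
    for sub in subtrees(tree):
        words = leaves(sub)
        if sub[0] in labels and 1 < len(words) <= max_len:
            phrases.append(" ".join(words))
    return phrases

parse = "(S (NP (DT the) (NN court)) (VP (VBD dismissed) (NP (DT the) (NN appeal))))"
print(extract_phrases(parse_sexp(parse)))
# ['the court', 'dismissed the appeal', 'the appeal']
```

Presumably each extracted target-side phrase must also be paired with a corresponding source-side fragment before being added to the parallel training data; that pairing depends on the alignment method and is not shown here.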

Notes

  1. Linguistically more accurate, as the lengths are short.

  2. https://nlp.stanford.edu/software/lex-parser.shtml.

  3. https://github.com/EdinburghNLP/nematus.

  4. It is based on the BLEU score with a patience value of 10.


Acknowledgement

We gratefully acknowledge TDIL, MeitY, which supported this research work under the project “Hindi to English machine translation for judicial domain”.

Author information

Correspondence to Kamal Kumar Gupta.

Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Gupta, K.K., Sen, S., Ekbal, A., Bhattacharyya, P. (2023). Improving Low-Resource NMT with Parser Generated Syntactic Phrases. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2019. Lecture Notes in Computer Science, vol 13451. Springer, Cham. https://doi.org/10.1007/978-3-031-24337-0_37

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-24337-0_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24336-3

  • Online ISBN: 978-3-031-24337-0

  • eBook Packages: Computer Science, Computer Science (R0)
