
Paragraph-Level Hierarchical Neural Machine Translation

  • Conference paper
  • In: Neural Information Processing (ICONIP 2019)

Abstract

Neural Machine Translation (NMT) has made great progress in recent years, but two challenges remain for long-text translation: establishing a high-quality corpus and finding optimal model parameters. In this paper, we first build a paragraph-parallel corpus from English and Chinese versions of novels, and then design a hierarchical model for it to address both challenges. The encoder and decoder take all the sentences of a paragraph as input and process words, sentences, and paragraphs at different levels, using a two-layer transformer. The bottom transformer of the encoder and decoder serves as an additional level of abstraction, conditioning on its own previous hidden states. Experimental results show that our hierarchical model significantly outperforms seven competitive baselines, including ensembles.
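The abstract describes the architecture only at a high level. As a rough illustration, here is a minimal PyTorch sketch of what a two-level paragraph encoder of this kind could look like: a bottom transformer encodes the tokens of each sentence, its outputs are pooled into one vector per sentence, and a top transformer contextualizes those sentence vectors across the paragraph. All names, the mean-pooling step, and the omission of positional encodings, the decoder, and the "previous hidden state" conditioning are simplifying assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a two-level (word -> sentence -> paragraph) encoder.
# Assumptions: module names, pooling choice, and hyperparameters are invented;
# positional encodings and the decoder side are omitted for brevity.
import torch
import torch.nn as nn


class HierarchicalParagraphEncoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        # Bottom transformer: attends over tokens within each sentence.
        bottom_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.bottom = nn.TransformerEncoder(bottom_layer, num_layers)
        # Top transformer: attends over sentence vectors across the paragraph.
        top_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.top = nn.TransformerEncoder(top_layer, num_layers)

    def forward(self, paragraph):
        # paragraph: (num_sentences, max_tokens) of token ids; 0 = padding.
        tokens = self.embed(paragraph)                       # (S, T, d_model)
        pad_mask = paragraph.eq(0)                           # (S, T), True = pad
        word_states = self.bottom(tokens, src_key_padding_mask=pad_mask)
        # Mean-pool non-padding token states into one vector per sentence
        # (the pooling choice here is an assumption; the paper may differ).
        lengths = (~pad_mask).sum(dim=1, keepdim=True).clamp(min=1)
        sent_vecs = (word_states.masked_fill(pad_mask.unsqueeze(-1), 0.0)
                     .sum(dim=1) / lengths)                  # (S, d_model)
        # Treat the paragraph as a "sentence of sentences" at the top level.
        para_states = self.top(sent_vecs.unsqueeze(0))       # (1, S, d_model)
        return para_states.squeeze(0)                        # (S, d_model)


# Usage: a paragraph of 3 sentences, up to 10 tokens each.
enc = HierarchicalParagraphEncoder()
para = torch.randint(1, 32000, (3, 10))
print(enc(para).shape)  # torch.Size([3, 512])
```

The key design point this sketch captures is that sentence representations are contextualized twice: once against their own tokens, and once against the other sentences in the paragraph, which is what lets paragraph-level context influence each sentence's translation.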



Acknowledgements

This research was funded by the National Natural Science Foundation of China (Grant Nos. 61772337 and U1736207) and the National Key Research and Development Program of China (Nos. 2016QY03D0604 and 2018YFC0830703).

Author information


Correspondence to Gongshen Liu.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y., Meng, K., Liu, G. (2019). Paragraph-Level Hierarchical Neural Machine Translation. In: Gedeon, T., Wong, K., Lee, M. (eds.) Neural Information Processing. ICONIP 2019. Lecture Notes in Computer Science, vol. 11955. Springer, Cham. https://doi.org/10.1007/978-3-030-36718-3_28


  • DOI: https://doi.org/10.1007/978-3-030-36718-3_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36717-6

  • Online ISBN: 978-3-030-36718-3

  • eBook Packages: Computer Science (R0)
