An Effective Ensemble Model Related to Incremental Learning in Neural Machine Translation

  • Conference paper
Neural Information Processing (ICONIP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13623)


Abstract

In recent years, machine translation has made great progress with the rapid development of deep learning. However, neural machine translation still suffers from catastrophic forgetting: when a model is trained incrementally on newly added data, its overall performance degrades. Many incremental learning methods have been proposed to address this problem in computer vision, but few exist for machine translation. In this paper, we first apply several prevailing incremental learning methods to the machine translation task, then propose an ensemble model to mitigate catastrophic forgetting, and finally evaluate the models in our experiments with widely used, authoritative metrics. The results show that incremental learning is also effective for neural machine translation, and that the proposed ensemble model further improves performance to some extent.
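
The abstract does not describe how the ensemble combines its component models. The sketch below is only an illustration of one common realization, assumed here for clarity: interpolating the per-token output distributions of the model trained on the original data with those of the model trained incrementally on the new data. All names (the function `ensemble_next_token_logprobs`, the equal interpolation weights, the toy linear stand-ins for real NMT decoders) are hypothetical and are not the paper's actual implementation.

```python
# Minimal sketch (an assumption, not the paper's method): ensemble an "old"
# model trained on the original data with a "new" model fine-tuned on the
# incremental data by averaging their next-token output distributions.
import torch
import torch.nn.functional as F


def ensemble_next_token_logprobs(models, weights, decoder_inputs):
    """Combine the next-token distributions of several models.

    models:         modules mapping decoder_inputs -> logits of shape [batch, vocab]
    weights:        interpolation weights summing to 1, e.g. [0.5, 0.5]
    decoder_inputs: whatever tensor the models expect at this decoding step
    """
    probs = None
    for model, w in zip(models, weights):
        with torch.no_grad():
            logits = model(decoder_inputs)        # [batch, vocab]
        p = w * F.softmax(logits, dim=-1)         # weighted, normalized distribution
        probs = p if probs is None else probs + p
    return torch.log(probs)                       # log-probs for beam search


# Toy usage with two linear "models" over a 10-word vocabulary.
if __name__ == "__main__":
    old_model = torch.nn.Linear(8, 10)   # stands in for the model trained on old data
    new_model = torch.nn.Linear(8, 10)   # stands in for the incrementally trained model
    x = torch.randn(2, 8)                # dummy decoder state for a batch of 2
    logp = ensemble_next_token_logprobs([old_model, new_model], [0.5, 0.5], x)
    print(logp.shape)                    # torch.Size([2, 10])
```

Interpolating probabilities rather than raw logits keeps each component distribution normalized before mixing; in a full NMT system the combined log-probabilities would feed the beam search at every decoding step.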

Author information

Correspondence to Pumeng Shi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shi, P. (2023). An Effective Ensemble Model Related to Incremental Learning in Neural Machine Translation. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_10

  • DOI: https://doi.org/10.1007/978-3-031-30105-6_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30104-9

  • Online ISBN: 978-3-031-30105-6

  • eBook Packages: Computer Science, Computer Science (R0)
