Abstract
Machine translation has made great progress in recent years with the rapid development of deep learning. However, neural machine translation still suffers from catastrophic forgetting: overall performance degrades when a model is trained incrementally on newly added data. Many incremental learning methods have been proposed to address this problem in computer vision, but few exist for machine translation. In this paper, we first apply several prevailing incremental learning methods to the machine translation task, then propose an ensemble model to mitigate catastrophic forgetting, and finally evaluate model performance in our experiments with established automatic metrics. The results show that incremental learning is also effective for neural machine translation, and that the proposed ensemble model improves performance to some extent.
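The abstract does not detail the ensemble model itself. As a rough illustration only, the sketch below (a hypothetical PyTorch toy, not the paper's implementation) averages the output distributions of an "old-data" checkpoint and a "new-data" checkpoint at each greedy decoding step, one common way to combine models so that knowledge from earlier training data still influences translation after an incremental update. All names, sizes, and the interpolation weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for an "old-data" model and a "new-data" model.
# In practice these would be two NMT checkpoints (e.g. Transformers
# trained before and after the incremental data update).
VOCAB_SIZE = 8
torch.manual_seed(0)

class ToyDecoder(torch.nn.Module):
    """Maps the previous target token to logits over the vocabulary."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB_SIZE, 16)
        self.out = torch.nn.Linear(16, VOCAB_SIZE)

    def forward(self, prev_token):
        return self.out(self.emb(prev_token))

old_model = ToyDecoder()   # hypothetical: trained on the original data
new_model = ToyDecoder()   # hypothetical: trained on the new data

def ensemble_step(prev_token, weight=0.5):
    """One greedy decoding step that averages the two models'
    output distributions; `weight` balances old vs. new model."""
    with torch.no_grad():
        p_old = F.softmax(old_model(prev_token), dim=-1)
        p_new = F.softmax(new_model(prev_token), dim=-1)
        p = weight * p_old + (1.0 - weight) * p_new
        return torch.argmax(p, dim=-1)

# Greedy decode a few steps from a BOS token (id 1 here).
token = torch.tensor([1])
for _ in range(5):
    token = ensemble_step(token)
    print(int(token))
```

In a real system the same averaging would be applied inside beam search rather than greedy decoding, and the interpolation weight could be tuned on held-out data drawn from both the old and the new domains.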
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Shi, P. (2023). An Effective Ensemble Model Related to Incremental Learning in Neural Machine Translation. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_10
DOI: https://doi.org/10.1007/978-3-031-30105-6_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6
eBook Packages: Computer Science, Computer Science (R0)