Abstract
Important natural language processing tasks such as machine translation and document summarization have made enormous strides in recent years. However, their performance is still partially limited by the standard training objectives, which operate on single tokens rather than on more global features. Moreover, such standard objectives do not explicitly consider the source documents, potentially weakening the alignment between the sources and the predictions. For these reasons, in this paper we propose using an Optimal Transport (OT) training objective to promote a global alignment between the model's predictions and the source documents. In addition, we present an original implementation of the OT objective based on the Sinkhorn divergence between the final hidden states of the model's encoder and decoder. Experimental results on machine translation and abstractive summarization tasks show that the proposed approach achieves statistically significant improvements over the baseline and other alternative objectives across all experimental settings. A qualitative analysis of the results also shows that, thanks to the supervision of the proposed objective, the predictions align more closely with the source sentences.
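To make the core idea concrete, below is a minimal PyTorch sketch of a debiased Sinkhorn divergence computed between encoder and decoder hidden states. This is not the authors' implementation: the squared-Euclidean ground cost, the uniform point weights, and all names and hyper-parameters (entropic_ot, eps, n_iters, lambda_ot) are illustrative assumptions.

```python
import math
import torch

def entropic_ot(x, y, eps=0.1, n_iters=50):
    """Entropy-regularised OT cost between two uniform point clouds
    x: (n, d) and y: (m, d), with a squared-Euclidean ground cost and
    log-domain Sinkhorn iterations (hyper-parameters are illustrative)."""
    n, m = x.size(0), y.size(0)
    C = torch.cdist(x, y) ** 2                                 # (n, m) ground-cost matrix
    log_a = torch.full((n,), -math.log(n), device=x.device)    # uniform source weights
    log_b = torch.full((m,), -math.log(m), device=x.device)    # uniform target weights
    K = -C / eps                                               # Gibbs kernel, log domain
    u = torch.zeros(n, device=x.device)
    v = torch.zeros(m, device=x.device)
    for _ in range(n_iters):                                   # Sinkhorn fixed-point updates
        u = log_a - torch.logsumexp(K + v[None, :], dim=1)
        v = log_b - torch.logsumexp(K + u[:, None], dim=0)
    log_P = K + u[:, None] + v[None, :]                        # log of the transport plan
    return (log_P.exp() * C).sum()                             # primal transport cost <P, C>

def sinkhorn_divergence(x, y, eps=0.1, n_iters=50):
    """Debiased divergence: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2."""
    return (entropic_ot(x, y, eps, n_iters)
            - 0.5 * entropic_ot(x, x, eps, n_iters)
            - 0.5 * entropic_ot(y, y, eps, n_iters))

# Hypothetical usage with random stand-ins for the final hidden states of the
# encoder (src_len x d) and decoder (tgt_len x d):
enc_h = torch.randn(17, 512, requires_grad=True)
dec_h = torch.randn(12, 512, requires_grad=True)
loss_ot = sinkhorn_divergence(enc_h, dec_h)
loss_ot.backward()                                             # gradients flow to both states
# total_loss = nll_loss + lambda_ot * loss_ot                  # lambda_ot: tunable weight
```

In training, such a term would plausibly be added to the usual token-level cross-entropy loss with a tunable weight, so that the sequence-level OT signal complements the standard single-token objective.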
Notes
1. We remark that there are a few misaligned sentence pairs in the official release of this dataset, which affect the test BLEU score. For more details, please refer to https://github.com/pytorch/fairseq/issues/4146. Herein, we report the BLEU scores on the corrected dataset.
Acknowledgements
The first author is funded by the China Scholarship Council (CSC) of the Ministry of Education of the People's Republic of China.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Li, S., Unanue, I.J., Piccardi, M. (2023). Improving Machine Translation and Summarization with the Sinkhorn Divergence. In: Kashima, H., Ide, T., Peng, W.-C. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science, vol. 13938. Springer, Cham. https://doi.org/10.1007/978-3-031-33383-5_12
DOI: https://doi.org/10.1007/978-3-031-33383-5_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33382-8
Online ISBN: 978-3-031-33383-5