Abstract
Bilingual parallel sentences, combined with visual annotations, give rise to an innovative machine translation scenario within the encoder-decoder framework, known as multimodal machine translation. Generally, the image is encoded as an additional visual representation that enhances the time-dependent context vector when generating the target translation word by word. However, this approach only models the consistency between the visual annotation and the source language; it does not sufficiently consider the consistency among the source language, the target language, and the visual context. To address this problem, we propose a novel method that injects visual features into both the encoder and the decoder. In the encoder, we design a cross-modal correlation mechanism to effectively integrate textual and visual information. In the decoder, we design a multimodal graph to strengthen the interaction between visual and textual information. Experimental results show that the proposed approach significantly improves translation performance over strong baselines on the English-German and English-French language pairs, and an ablation study further confirms its effectiveness in improving translation quality.
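As a rough, hypothetical sketch only (the paper's actual implementation is not reproduced here), the following PyTorch code illustrates one common way an encoder-side cross-modal correlation mechanism of this kind can be realized: source-token states attend over image-region features, and a learned sigmoid gate controls how much visual evidence is mixed into each token representation. All module names, dimensions, and the gating design are assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical sketch of a cross-modal correlation layer: text tokens
    attend over image-region features, and a learned gate decides how much
    visual signal to mix into each token representation."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Text states act as queries; visual regions supply keys/values.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate conditioned on both modalities; outputs values in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        # text:   (batch, src_len, d_model)   encoder states of the source sentence
        # vision: (batch, n_regions, d_model) projected image-region features
        attended, _ = self.cross_attn(query=text, key=vision, value=vision)
        g = self.gate(torch.cat([text, attended], dim=-1))
        # Residual mix: keep the textual state, add gated visual evidence.
        return self.norm(text + g * attended)

# Toy usage: 2 sentences of 7 tokens, 36 image regions (e.g., a 6x6 grid).
fusion = CrossModalFusion()
out = fusion(torch.randn(2, 7, 512), torch.randn(2, 36, 512))
print(out.shape)  # torch.Size([2, 7, 512])
```

The decoder-side multimodal graph the abstract mentions could analogously connect token and region nodes and propagate information with graph attention in the style of Veličković et al. [reference below], though the exact graph construction is specific to the paper.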
Notes
1. A widely used multimodal dataset [5] for training MMT models.
References
Caglayan, O., et al.: Cross-lingual visual pre-training for multimodal machine translation. In: Merlo, P., Tiedemann, J., Tsarfaty, R. (eds.) Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, 19–23 April 2021, pp. 1317–1324. Association for Computational Linguistics (2021). https://aclanthology.org/2021.eacl-main.112/
Caglayan, O., Madhyastha, P., Specia, L., Barrault, L.: Probing the need for visual context in multimodal machine translation. CoRR abs/1903.08678 (2019). http://arxiv.org/abs/1903.08678
Calixto, I., Liu, Q., Campbell, N.: Incorporating global visual features into attention-based neural machine translation. CoRR abs/1701.06521 (2017). http://arxiv.org/abs/1701.06521
Elliott, D.: Adversarial evaluation of multimodal machine translation. In: Riloff, E., Chiang, D., Hockenmaier, J., Tsujii, J. (eds.) Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018, pp. 2974–2978. Association for Computational Linguistics (2018). https://doi.org/10.18653/v1/d18-1329
Elliott, D., Frank, S., Sima’an, K., Specia, L.: Multi30K: multilingual English-German image descriptions. In: Proceedings of the 5th Workshop on Vision and Language, pp. 70–74. Association for Computational Linguistics, Berlin, Germany, August 2016. https://doi.org/10.18653/v1/W16-3210. https://aclanthology.org/W16-3210
Elliott, D., Kádár, Á.: Imagination improves multimodal translation. In: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 130–141. Asian Federation of Natural Language Processing, Taipei, Taiwan, November 2017. https://aclanthology.org/I17-1014
Huang, P., Liu, F., Shiang, S., Oh, J., Dyer, C.: Attention-based multimodal neural machine translation. In: Proceedings of the First Conference on Machine Translation, WMT 2016, co-located with ACL 2016, 11–12 August, Berlin, Germany, pp. 639–645. Association for Computational Linguistics (2016). https://doi.org/10.18653/v1/w16-2360
Lee, J., Cho, K., Weston, J., Kiela, D.: Emergent translation in multi-agent communication. CoRR abs/1710.06922 (2017). http://arxiv.org/abs/1710.06922
Lin, H., et al.: Dynamic context-guided capsule network for multimodal machine translation. In: Proceedings of the 28th ACM International Conference on Multimedia, MM 2020, pp. 1320–1329. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3394171.3413715
Liu, J.: Multimodal machine translation. IEEE Access (2021). https://doi.org/10.1109/ACCESS.2021.3115135
Post, M.: A call for clarity in reporting BLEU scores. In: Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186–191. Association for Computational Linguistics, Brussels, Belgium, October 2018. https://www.aclweb.org/anthology/W18-6319
Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR abs/1910.01108 (2019). http://arxiv.org/abs/1910.01108
Banerjee, S., Lavie, A.: METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp. 65–72 (2005)
Sun, Y., Zhu, S., Yifan, F., Mi, C.: Parallel sentences mining with transfer learning in an unsupervised setting. In: North American Chapter of the Association for Computational Linguistics (2021)
Vaswani, A., et al.: Attention is all you need. CoRR abs/1706.03762 (2017). http://arxiv.org/abs/1706.03762
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., Bengio, Y.: Graph attention networks. In: International Conference on Learning Representations (2018). https://openreview.net/forum?id=rJXMpikCZ
Wang, D., Xiong, D.: Efficient object-level visual context modeling for multimodal machine translation: masking irrelevant objects helps grounding. CoRR abs/2101.05208 (2021). https://arxiv.org/abs/2101.05208
Yang, P., Chen, B., Zhang, P., Sun, X.: Visual agreement regularized training for multi-modal machine translation. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, 7–12 February 2020, pp. 9418–9425. AAAI Press (2020). https://aaai.org/ojs/index.php/AAAI/article/view/6484
Yao, S., Wan, X.: Multimodal transformer for multimodal machine translation. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
Zhang, Z., et al.: Neural machine translation with universal visual representation. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26–30 April 2020. OpenReview.net (2020). https://openreview.net/forum?id=Byl8hhNYPS
Zhou, M., Cheng, R., Lee, Y.J., Yu, Z.: A visual attention grounding neural model for multimodal machine translation. CoRR abs/1808.08266 (2018). http://arxiv.org/abs/1808.08266
Zhu, S., Mi, C., Li, T., Zhang, F., Zhang, Z., Sun, Y.: Improving bilingual word embeddings mapping with monolingual context information. Mach. Transl. 35, 503–518 (2021)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cheng, P., Shi, X., Liu, B., Li, M. (2023). Glancing Text and Vision Regularized Training to Enhance Machine Translation. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14261. Springer, Cham. https://doi.org/10.1007/978-3-031-44198-1_22
Print ISBN: 978-3-031-44197-4
Online ISBN: 978-3-031-44198-1