Abstract
Cross-modal image-text retrieval is a crucial task at the intersection of vision and language, which aims to retrieve relevant samples in one modality given a query expressed in the other. While most methods developed for this task have focused on English, recent advances have expanded its scope to the multi-lingual setting. However, these methods face challenges due to the limited availability of annotated data in non-English languages. In this work, we propose a novel method that leverages an English pre-trained model as a teacher to improve multi-lingual image-text retrieval. Our method trains a student model to produce better multi-lingual image-text similarity scores by learning from the English image-text similarity scores of the trained teacher. We introduce a contrastive loss to align the image and text representations, and a Contrastive Similarity Distillation loss to align the student's multi-lingual image-text similarity distribution with that of the English teacher. We evaluate our method on two popular datasets, MS-COCO and Flickr-30K, and achieve state-of-the-art performance. Our approach shows significant improvement over existing methods and has potential for practical applications.
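The abstract describes two losses: a contrastive loss that aligns image and text embeddings, and a Contrastive Similarity Distillation loss that aligns the student's similarity distribution with the frozen English teacher's. The paper's implementation details are not given here; the following is a minimal PyTorch sketch of one plausible reading of that setup. All specifics (symmetric InfoNCE for the contrastive term, KL divergence over temperature-scaled softmax rows for the distillation term, the names `contrastive_loss`, `similarity_distillation_loss`, and the temperature `tau`) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, tau=0.05):
    """Symmetric InfoNCE loss aligning image and text embeddings.

    image_emb, text_emb: (B, D) L2-normalized embeddings; matched
    image-text pairs share the same batch index.
    """
    logits = image_emb @ text_emb.t() / tau  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Positives sit on the diagonal; average the image-to-text
    # and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def similarity_distillation_loss(student_sim, teacher_sim, tau=0.05):
    """KL divergence pulling the student's multi-lingual image-text
    similarity distribution toward the English teacher's (assumed form).

    student_sim, teacher_sim: (B, B) raw similarity matrices.
    """
    p_teacher = F.softmax(teacher_sim / tau, dim=-1)
    log_p_student = F.log_softmax(student_sim / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Usage sketch with random embeddings: the frozen teacher scores English
# captions, the student scores multi-lingual captions of the same images.
B, D = 8, 256
img = F.normalize(torch.randn(B, D), dim=-1)
en_txt = F.normalize(torch.randn(B, D), dim=-1)  # teacher text embeddings
ml_txt = F.normalize(torch.randn(B, D), dim=-1)  # student text embeddings

with torch.no_grad():
    teacher_sim = img @ en_txt.t()  # teacher provides soft targets only

student_sim = img @ ml_txt.t()
loss = contrastive_loss(img, ml_txt) + similarity_distillation_loss(student_sim, teacher_sim)
```

In this reading, the teacher never receives gradients: it only supplies the reference similarity distribution, so the student can inherit the ranking behavior learned from abundant English data while training on scarcer multi-lingual pairs.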
Acknowledgements
This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106400) and the National Natural Science Foundation of China (U21B2043, 62102416).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lu, S., Guo, L., He, X., Zhu, X., Liu, J., Liu, S. (2023). CSDNet: Contrastive Similarity Distillation Network for Multi-lingual Image-Text Retrieval. In: Lu, H., et al. (eds.) Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol. 14357. Springer, Cham. https://doi.org/10.1007/978-3-031-46311-2_32
Print ISBN: 978-3-031-46310-5
Online ISBN: 978-3-031-46311-2