Abstract
Image-text retrieval is a fundamental task in multimodal interaction that helps users conveniently retrieve relevant visual and textual information. The dominant approach learns a visual-semantic embedding space in which related visual and textual data lie close to each other. Recent research has focused on designing sophisticated pooling strategies to better aggregate visual and textual features into holistic embeddings. However, existing methods typically apply a single pooling operator to the entire dataset, ignoring that samples with diverse intra-modality relationships call for pooling operators with different parameters. To tackle this issue, we propose a novel Mixture of Pooling Experts (MoPE) framework, which combines multiple pooling operators to aggregate features for different data subsets. Specifically, we introduce a route gating strategy, combined with an aggregation expert module, to dynamically learn diverse pooling experts for samples in different data subsets. Moreover, to fully exploit intra-modality relationships, we develop a specialized router with a self-attention gate mechanism that directs each sample to the proper pooling expert. Extensive experiments on two widely used benchmark datasets, Flickr30K and MS-COCO, demonstrate that our method outperforms several state-of-the-art methods.
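The abstract only outlines the mechanism, so the following PyTorch sketch is a hedged illustration of the core idea rather than the authors' implementation: a set of pooling experts aggregates token-level features into a holistic embedding, and a self-attention gate routes each sample to a weighted combination of experts. The expert type (generalized-mean pooling), the mean-pooled attention context used for routing, the soft routing weights, and all module names and dimensions are assumptions introduced here for illustration only.

```python
# Minimal sketch of a mixture-of-pooling-experts module, assuming GeM
# pooling experts and a self-attention gate; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneralizedMeanPooling(nn.Module):
    """One pooling expert: generalized-mean (GeM) pooling with a learnable exponent."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim) -> (batch, dim)
        return x.clamp(min=self.eps).pow(self.p).mean(dim=1).pow(1.0 / self.p)


class MoPEPooling(nn.Module):
    """Mixture of pooling experts with a self-attention gate (illustrative sketch)."""
    def __init__(self, dim: int, num_experts: int = 4, num_heads: int = 4):
        super().__init__()
        # Each expert is a GeM pool with its own learnable exponent,
        # so different experts can specialize in different data subsets.
        self.experts = nn.ModuleList(
            GeneralizedMeanPooling(p=1.0 + i) for i in range(num_experts)
        )
        # Self-attention over the token features models intra-modality
        # relationships; its pooled output drives the routing decision.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), e.g. region or word features.
        ctx, _ = self.attn(x, x, x)                # (batch, tokens, dim)
        route_logits = self.gate(ctx.mean(dim=1))  # (batch, num_experts)
        weights = F.softmax(route_logits, dim=-1)  # soft routing per sample
        pooled = torch.stack([e(x) for e in self.experts], dim=1)
        # Weighted sum of expert outputs -> one holistic embedding per sample.
        return (weights.unsqueeze(-1) * pooled).sum(dim=1)


if __name__ == "__main__":
    feats = torch.rand(8, 36, 512)  # e.g. 36 region features per image
    embed = MoPEPooling(dim=512)(feats)
    print(embed.shape)  # torch.Size([8, 512])
```

In a full retrieval model, the same module could pool region features on the visual side and word features on the textual side before a standard contrastive objective (e.g., a hard-negative triplet loss) aligns the two embedding spaces; whether the paper uses soft or hard (top-1) routing is not stated in the abstract.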
Acknowledgements
This work is supported by the National Key Research and Development Program of China (2021YFC3340600), the Science and Technology Program of Shanghai, China (Grant Nos. 22511104300 and 21ZR1423800), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.
Ethics declarations
Disclaimer
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Li, J., Wang, B., Qin, Y., Zhang, C., Yu, G., Zhao, Q. (2024). MoPE: Mixture of Pooling Experts Framework for Image-Text Retrieval. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14556. Springer, Cham. https://doi.org/10.1007/978-3-031-53311-2_29
Print ISBN: 978-3-031-53310-5
Online ISBN: 978-3-031-53311-2