MoPE: Mixture of Pooling Experts Framework for Image-Text Retrieval

  • Conference paper
  • MultiMedia Modeling (MMM 2024)

Abstract

Image-text retrieval is a fundamental task in multimodal interaction that helps users retrieve relevant visual and textual information conveniently. The dominant approach learns a visual-semantic embedding space in which related visual and textual data lie close together. Recent research focuses on designing sophisticated pooling strategies to better aggregate visual and textual features into holistic embeddings. However, existing methods typically apply a single pooling operator to the whole dataset, ignoring that samples with diverse intra-modality relationships require pooling operators trained with different parameters. To tackle this issue, we propose a novel Mixture of Pooling Experts (MoPE) framework, which combines multiple pooling operators to aggregate features for different data subsets. Specifically, we introduce a novel route gating strategy, combined with an aggregation expert module, to dynamically learn diverse pooling experts for samples in different data subsets. Moreover, to fully exploit intra-modality relationships, we develop a specialized router with a self-attention gate mechanism that directs each sample to the proper pooling expert. Extensive experiments on two widely used benchmark datasets, Flickr30K and MS-COCO, demonstrate that our method outperforms several state-of-the-art methods.
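To make the routing idea concrete, below is a minimal PyTorch sketch of a mixture-of-pooling-experts aggregator in the spirit of the abstract: a bank of pooling experts turns token-level features (e.g., image regions or caption words) into candidate embeddings, and a router with a self-attention gate weights the experts per sample. This is one plausible reading only; the module names, choice of experts, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a mixture-of-pooling-experts aggregator (assumed design,
# not the authors' code). Each expert pools a (batch, tokens, dim) feature
# sequence into a (batch, dim) embedding; a self-attention gate produces
# per-sample routing weights over the experts.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPoolExpert(nn.Module):
    """Learned pooling: one score per token, softmax-weighted sum."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        w = self.score(x).softmax(dim=1)                  # (B, T, 1)
        return (w * x).sum(dim=1)                         # (B, D)


class MoPEPooling(nn.Module):
    """Routes each sample to a weighted mixture of pooling experts.

    The router runs self-attention over the tokens so the gate can see
    intra-modality relationships before weighting the experts (a
    hypothetical reading of the paper's self-attention gate mechanism).
    """

    def __init__(self, dim: int, num_experts: int = 4, num_heads: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [AttentionPoolExpert(dim) for _ in range(num_experts)]
        )
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, T, D)
        attended, _ = self.self_attn(x, x, x)                 # intra-modality mixing
        gate = self.gate(attended.mean(dim=1)).softmax(-1)    # (B, E) routing weights
        pooled = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        out = (gate.unsqueeze(-1) * pooled).sum(dim=1)        # (B, D)
        return F.normalize(out, dim=-1)  # unit-norm embedding for retrieval


if __name__ == "__main__":
    # Example: aggregate 36 region features per image into 1024-d embeddings.
    pool = MoPEPooling(dim=1024)
    regions = torch.randn(2, 36, 1024)
    print(pool(regions).shape)  # torch.Size([2, 1024])
```

In a visual-semantic embedding pipeline, one such module would sit on each branch (image regions and caption tokens), with the resulting unit-norm embeddings trained under a standard contrastive or hard-negative triplet objective, as is typical in this line of work.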


Acknowledgements

This work was supported by the National Key Research and Development Program of China (2021YFC3340600), the Science and Technology Program of Shanghai, China (Grant Nos. 22511104300 and 21ZR1423800), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.

Author information

Corresponding author

Correspondence to Qinpei Zhao.

Ethics declarations

Disclaimer

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, J., Wang, B., Qin, Y., Zhang, C., Yu, G., Zhao, Q. (2024). MoPE: Mixture of Pooling Experts Framework for Image-Text Retrieval. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14556. Springer, Cham. https://doi.org/10.1007/978-3-031-53311-2_29

  • DOI: https://doi.org/10.1007/978-3-031-53311-2_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53310-5

  • Online ISBN: 978-3-031-53311-2

  • eBook Packages: Computer Science, Computer Science (R0)
