
Complementarity-Aware Space Learning for Video-Text Retrieval


Abstract:

In general, videos are powerful at recording physical patterns (e.g., spatial layout), while texts are good at describing abstract symbols (e.g., emotion). When video and text are used together in multi-modal tasks, they are regarded as complementary and their distinct information is crucial. However, in cross-modal tasks (e.g., retrieval), existing works usually exploit only their common part through common space learning, while their distinct information is discarded. In this paper, we argue that distinct information is also beneficial for cross-modal retrieval. To address this problem, we propose a divide-and-conquer learning approach, namely Complementarity-aware Space Learning (CSL), which recasts the challenge into learning two spaces (i.e., a latent space and a symbolic space) that jointly exploit the common and distinct information of the two modalities by taking their complementary character into account. Specifically, we first learn a symbolic space from video with a memory-based video encoder and a symbolic generator. Symmetrically, we learn a latent space from text with a text encoder and a memory-based latent feature selector. Finally, we propose a complementarity-aware loss that integrates the two spaces to facilitate video-text retrieval. Extensive experiments show that our approach outperforms existing state-of-the-art methods by 5.1%, 2.1%, and 0.9% in R@10 for text-to-video retrieval on three benchmarks, respectively. An ablation study also verifies that the distinct information from video and text improves retrieval performance. Trained models and source code have been released at https://github.com/NovaMind-Z/CSL.
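To make the two-space idea concrete, the following is a minimal, illustrative sketch of how a latent (continuous) space and a symbolic (memory-slot) space could be combined for retrieval with a contrastive loss. The module names, feature dimensions, memory size, and the weighting in the loss are assumptions made for exposition; they are not the authors' implementation (see the GitHub link above for the official code).

# Illustrative sketch only: names, sizes, and the loss weighting are assumptions,
# not the released CSL implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryAttention(nn.Module):
    """Attend over a learnable memory bank and return soft assignments to its slots."""
    def __init__(self, dim, memory_size):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(memory_size, dim) * 0.02)

    def forward(self, x):                      # x: (batch, dim)
        logits = x @ self.memory.t()           # (batch, memory_size)
        return logits.softmax(dim=-1)          # soft "symbolic" assignment


class TwoSpaceModel(nn.Module):
    """Project both modalities into a latent space and a shared symbolic space."""
    def __init__(self, video_dim=1024, text_dim=768, dim=512, memory_size=256):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, dim)   # stands in for the video encoder
        self.text_proj = nn.Linear(text_dim, dim)     # stands in for the text encoder
        self.symbolic = MemoryAttention(dim, memory_size)

    def forward(self, video_feat, text_feat):
        v = F.normalize(self.video_proj(video_feat), dim=-1)   # latent space
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        v_sym = self.symbolic(v)                               # symbolic space
        t_sym = self.symbolic(t)
        return v, t, v_sym, t_sym


def complementarity_aware_loss(v, t, v_sym, t_sym, alpha=0.5, temperature=0.05):
    """Symmetric contrastive loss over a similarity that fuses both spaces."""
    sim_latent = v @ t.t()                       # cosine similarity (inputs normalized)
    sim_symbolic = F.normalize(v_sym, dim=-1) @ F.normalize(t_sym, dim=-1).t()
    sim = (alpha * sim_latent + (1 - alpha) * sim_symbolic) / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    # InfoNCE in both directions: video-to-text and text-to-video
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))


# Usage with random tensors standing in for pre-extracted video/text features.
model = TwoSpaceModel()
video_feat = torch.randn(8, 1024)
text_feat = torch.randn(8, 768)
loss = complementarity_aware_loss(*model(video_feat, text_feat))
loss.backward()

In this sketch, the distinct information of each modality is carried by the space it is weaker in (video is pushed toward discrete symbolic slots, text toward continuous latent features), and the fused similarity lets retrieval benefit from both, which mirrors the complementarity argument made in the abstract.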
Page(s): 4362 - 4374
Date of Publication: 09 January 2023
