Context Alignment Network for Video Moment Retrieval

  • Conference paper
Artificial Intelligence (CICAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13604)

Abstract

Video Moment Retrieval (VMR) is a challenging cross-modal retrieval task that aims to retrieve the moment most relevant to a given language query from an untrimmed video. In this task, cross-modal semantics must be thoroughly comprehended, and the supervisory signal available from limited annotations must be efficiently mined. Toward this end, we develop a Context Alignment Network (CAN) that tackles VMR by modeling and aligning cross-modal contexts. First, we employ fine-grained fusion to preserve rich low-level information and conduct complementary local-global context modeling to translate this low-level information into high-level semantics. Second, we propose a novel context alignment learning scheme that exploits additional context alignment supervision during training. The intuitive motivation is that the contextual information around the predicted moment boundaries should be similar to that around the ground-truth moment boundaries. We therefore define the alignment degree of boundary contexts between video moments as a proxy measure of their temporal overlap. By minimizing the context alignment loss, the model is driven to learn a context-level alignment relationship between moment boundaries. We find that context alignment learning effectively improves retrieval accuracy by exploiting context alignment as an additional supervisory signal. Extensive experiments show that CAN attains competitive performance compared with state-of-the-art methods on the Charades-STA and TACoS datasets, demonstrating the effectiveness of our proposed method.
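
As a rough illustration of the context alignment idea described above (not the authors' actual formulation), the sketch below compares the clip features surrounding a predicted boundary with those surrounding the corresponding ground-truth boundary and penalizes low similarity. The helper names, the window size, the mean pooling, and the cosine-similarity measure are all assumptions made purely for illustration; in the paper the alignment degree serves as a proxy for temporal overlap, and boundaries would be produced by the model rather than given as hard indices.

```python
# Minimal illustrative sketch of a context alignment loss, assuming clip-level
# video features of shape (T, D) and hard boundary indices. This is NOT the
# paper's implementation; window size, pooling, and similarity are assumptions.
import torch
import torch.nn.functional as F


def boundary_context(features: torch.Tensor, index: int, window: int = 2) -> torch.Tensor:
    """Mean-pool the clip features in a small window around a boundary index."""
    lo = max(index - window, 0)
    hi = min(index + window + 1, features.size(0))
    return features[lo:hi].mean(dim=0)


def context_alignment_loss(features: torch.Tensor,
                           pred_start: int, pred_end: int,
                           gt_start: int, gt_end: int) -> torch.Tensor:
    """Penalize predicted boundaries whose surrounding context differs from the
    context around the ground-truth boundaries."""
    sim_start = F.cosine_similarity(
        boundary_context(features, pred_start),
        boundary_context(features, gt_start), dim=0)
    sim_end = F.cosine_similarity(
        boundary_context(features, pred_end),
        boundary_context(features, gt_end), dim=0)
    # Higher context similarity between predicted and ground-truth boundaries
    # means a smaller loss, i.e. better (proxy) temporal overlap.
    return 1.0 - 0.5 * (sim_start + sim_end)


if __name__ == "__main__":
    T, D = 64, 512                      # hypothetical number of clips / feature dim
    feats = torch.randn(T, D)
    loss = context_alignment_loss(feats, pred_start=10, pred_end=30,
                                  gt_start=12, gt_end=28)
    print(float(loss))
```

In this toy setup the loss shrinks as the predicted boundaries move into regions whose surrounding context resembles that of the ground-truth boundaries; a trainable version would let gradients flow through soft boundary predictions rather than hard indices.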

Acknowledgements

This work was supported partially by the NSFC (U1911401, U1811461, 62076260, 61772570), Guangdong Natural Science Funds Project (2020B1515120085), Guangdong NSF for Distinguished Young Scholar (2022B1515020009), and the Key-Area Research and Development Program of Guangzhou (202007030004).

Author information

Corresponding author

Correspondence to Wei-Shi Zheng.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tan, C., Hu, J.-F., Zheng, W.-S. (2022). Context Alignment Network for Video Moment Retrieval. In: Fang, L., Povey, D., Zhai, G., Mei, T., Wang, R. (eds) Artificial Intelligence. CICAI 2022. Lecture Notes in Computer Science (LNAI), vol. 13604. Springer, Cham. https://doi.org/10.1007/978-3-031-20497-5_42

  • DOI: https://doi.org/10.1007/978-3-031-20497-5_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20496-8

  • Online ISBN: 978-3-031-20497-5

  • eBook Packages: Computer Science, Computer Science (R0)
