DOI: 10.1145/3343031.3351072
Research Article

Hierarchical Global-Local Temporal Modeling for Video Captioning

Published: 15 October 2019

ABSTRACT

In this paper, a Hierarchical Temporal Model (HTM) is proposed for the video captioning task. It explores both the global and the local temporal structure of videos to better recognize fine-grained objects and actions. In our HTM, the encoder and decoder are hierarchically aligned according to different levels of features. The encoder applies two LSTM layers to construct temporal structure at both the frame level and the object level, where an attention mechanism locates objects of interest; the decoder uses corresponding LSTM layers to extract pivotal features from global to local through a multi-level attention mechanism. Moreover, the local temporal structure is constructed implicitly from candidate object-oriented features under the guidance of the global temporal-spatial representation, which yields more accurate descriptions when handling shot switching. Experiments on the widely used Microsoft Video Description Corpus (MSVD) and Charades datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
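
The paper's implementation is not reproduced on this page, but the encoder-decoder structure the abstract describes can be sketched in code. The following is a minimal, illustrative PyTorch sketch, not the authors' method: the additive (Bahdanau-style) attention, all layer sizes, the per-frame pooling of object proposals, and the teacher-forced decoding loop are assumptions made for the example.

```python
# Illustrative sketch of a hierarchical global-local encoder-decoder with
# attention, loosely following the structure described in the abstract.
# Layer sizes, attention form, and pooling choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Additive attention: scores each timestep of `feats` against `query`."""
    def __init__(self, feat_dim, query_dim, hidden_dim=256):
        super().__init__()
        self.w_f = nn.Linear(feat_dim, hidden_dim)
        self.w_q = nn.Linear(query_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1)

    def forward(self, query, feats):              # query: (B, Q), feats: (B, T, F)
        scores = self.v(torch.tanh(self.w_f(feats) + self.w_q(query).unsqueeze(1)))
        alpha = F.softmax(scores, dim=1)           # attention weights: (B, T, 1)
        return (alpha * feats).sum(dim=1)          # attended context: (B, F)

class HierarchicalCaptioner(nn.Module):
    def __init__(self, frame_dim, obj_dim, vocab_size, hid=512, emb=300):
        super().__init__()
        # Encoder: a global LSTM over frame features and a local LSTM over
        # attention-selected object features.
        self.frame_lstm = nn.LSTM(frame_dim, hid, batch_first=True)
        self.obj_attn = Attention(obj_dim, hid)    # picks objects per frame
        self.obj_lstm = nn.LSTM(obj_dim, hid, batch_first=True)
        # Decoder: attends over both encoder streams at each word step.
        self.embed = nn.Embedding(vocab_size, emb)
        self.global_attn = Attention(hid, hid)
        self.local_attn = Attention(hid, hid)
        self.dec_lstm = nn.LSTMCell(emb + 2 * hid, hid)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, frames, objects, captions):
        # frames: (B, T, frame_dim); objects: (B, T, N, obj_dim); captions: (B, L)
        g_states, _ = self.frame_lstm(frames)      # global temporal stream
        # The global frame state guides attention over that frame's object
        # proposals, yielding one local feature per frame.
        obj_ctx = torch.stack(
            [self.obj_attn(g_states[:, t], objects[:, t])
             for t in range(frames.size(1))], dim=1)
        l_states, _ = self.obj_lstm(obj_ctx)       # local temporal stream
        h = g_states.new_zeros(frames.size(0), l_states.size(-1))
        c = torch.zeros_like(h)
        logits = []
        for t in range(captions.size(1)):          # teacher-forced decoding
            ctx = torch.cat([self.global_attn(h, g_states),
                             self.local_attn(h, l_states)], dim=-1)
            step_in = torch.cat([self.embed(captions[:, t]), ctx], dim=-1)
            h, c = self.dec_lstm(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)          # (B, L, vocab_size)

if __name__ == "__main__":
    model = HierarchicalCaptioner(frame_dim=2048, obj_dim=2048, vocab_size=10000)
    frames = torch.randn(2, 8, 2048)       # e.g. CNN frame features
    objects = torch.randn(2, 8, 5, 2048)   # e.g. detector proposals per frame
    captions = torch.randint(0, 10000, (2, 12))
    print(model(frames, objects, captions).shape)  # torch.Size([2, 12, 10000])
```

The hierarchical idea, per the abstract, is that the global frame-level states guide which object features feed the local stream, and the decoder attends over both streams at every word step.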

Published in

MM '19: Proceedings of the 27th ACM International Conference on Multimedia
October 2019, 2794 pages
ISBN: 978-1-4503-6889-6
DOI: 10.1145/3343031
Copyright © 2019 ACM

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

MM '19 paper acceptance rate: 252 of 936 submissions, 27%. Overall acceptance rate: 995 of 4,171 submissions, 24%.

      Upcoming Conference

      MM '24
      MM '24: The 32nd ACM International Conference on Multimedia
      October 28 - November 1, 2024
      Melbourne , VIC , Australia

    PDF Format

    View or Download as a PDF file.

    PDF

    eReader

    View online with eReader.

    eReader