CC-LSTM: Cross and Conditional Long-Short Time Memory for Video Captioning

  • Conference paper
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12666)

Abstract

Automatically generating natural language descriptions for in-the-wild videos is a challenging task. Most recent progress in this field has been made by combining Convolutional Neural Networks (CNNs) with encoder-decoder Recurrent Neural Networks (RNNs). However, the existing encoder-decoder RNN framework has difficulty capturing long-range dependencies as the number of LSTM units increases, which causes substantial information loss and leads to poor performance on this task. To address this problem, we propose a novel framework, Cross and Conditional Long Short-Term Memory (CC-LSTM). It is composed of a novel Cross Long Short-Term Memory (Cr-LSTM) for the encoding module and a Conditional Long Short-Term Memory (Co-LSTM) for the decoding module. In the encoding module, the Cr-LSTM encodes the visual input into a richly informative representation using a cross-input method. In the decoding module, the Co-LSTM feeds a visual feature, which is conditioned on the generated sentence and carries global information about the visual content, into the LSTM unit as an extra input. For the video captioning task, extensive experiments are conducted on two public datasets, MSVD and MSR-VTT. Along with visualizations of the results and of how the model works, these experiments quantitatively demonstrate the effectiveness of the proposed CC-LSTM in translating videos into sentences with rich semantics.
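
To make the encoder-decoder structure described above concrete, the following is a minimal PyTorch sketch of a video-captioning model in which the decoder LSTM receives, at every step, an extra global visual feature gated by its current language state. The abstract does not specify the cross-input encoding of Cr-LSTM or the exact conditioning mechanism of Co-LSTM, so the mean-pooled global feature, the `cond_gate` gating, and all dimensions below are illustrative assumptions rather than the authors' method.

```python
import torch
import torch.nn as nn

class EncoderDecoderCaptioner(nn.Module):
    """Hedged sketch of an encoder-decoder captioner with a conditioned visual input."""

    def __init__(self, feat_dim=2048, hidden=512, vocab=10000, emb=300):
        super().__init__()
        # Encoder LSTM over per-frame CNN features.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, emb)
        # Decoder input = word embedding + encoder summary + conditioned global feature.
        self.decoder = nn.LSTMCell(emb + hidden + feat_dim, hidden)
        self.cond_gate = nn.Linear(hidden, feat_dim)  # gate derived from the language state (assumption)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T, feat_dim) CNN features; captions: (B, L) token ids.
        enc_out, (h_n, c_n) = self.encoder(frame_feats)
        video_summary = enc_out[:, -1]            # (B, hidden) last encoder state
        global_feat = frame_feats.mean(dim=1)     # (B, feat_dim) global visual feature (assumption)
        h, c = h_n[0], c_n[0]                     # initialise decoder from encoder state
        logits = []
        for t in range(captions.size(1)):
            w = self.embed(captions[:, t])
            # Condition the global visual feature on the decoder's current language state.
            cond_feat = torch.sigmoid(self.cond_gate(h)) * global_feat
            step_in = torch.cat([w, video_summary, cond_feat], dim=-1)
            h, c = self.decoder(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)         # (B, L, vocab) per-step word scores
```

In this sketch the ground-truth caption tokens are fed as decoder inputs (teacher forcing); at inference time the previously generated token would be embedded instead, and the conditioned feature would then depend on the sentence generated so far, which is the behaviour the abstract ascribes to Co-LSTM.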



Acknowledgments

This work was supported in part by the National Key Research and Development Program of China under grant No. 2018AAA0102200, the Sichuan Science and Technology Program, China, under grants 2018GZDZX0032 and 2020YFS0057, the Fundamental Research Funds for the Central Universities under Project ZYGX2019Z015, the National Natural Science Foundation of China under grant 61632007, and the Dongguan Songshan Lake Introduction Program of Leading Innovative and Entrepreneurial Talents.

Author information

Corresponding author

Correspondence to Yang Yang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ai, J., Yang, Y., Xu, X., Zhou, J., Shen, H.T. (2021). CC-LSTM: Cross and Conditional Long-Short Time Memory for Video Captioning. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12666. Springer, Cham. https://doi.org/10.1007/978-3-030-68780-9_30

  • DOI: https://doi.org/10.1007/978-3-030-68780-9_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68779-3

  • Online ISBN: 978-3-030-68780-9

  • eBook Packages: Computer Science, Computer Science (R0)
