Adaptive Attention Mechanism Based Semantic Compositional Network for Video Captioning

  • Conference paper
  • First Online:
Intelligent Systems and Applications (IntelliSys 2020)

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1251))


Abstract

Video captioning is the task of generating text that describes the content of a video. To produce proper descriptions, many recent methods inject explicit semantic information into the generation process. However, as semantics are mined from the video, the semantic information in several existing methods plays a progressively smaller role during decoding. In addition, decoders apply a temporal attention mechanism to every generated word, including non-visual words, which can produce inaccurate or even wrong results. To overcome these limitations, 1) we detect visual features to compose semantic tags from each video frame and introduce a semantic compositional network in the decoding stage, using the probability of each semantic tag as an additional parameter of the long short-term memory (LSTM) so that the tags keep contributing throughout decoding, and 2) we combine two levels of LSTM with a temporal attention mechanism and an adaptive attention mechanism, respectively. We thus propose an adaptive attention mechanism based semantic compositional network (AASCNet) for video captioning. Specifically, the framework uses the temporal attention mechanism to select specific visual features for predicting the next word, and the adaptive attention mechanism to decide whether that prediction should depend on visual features or on context information. Extensive experiments on the MSVD video captioning dataset demonstrate the effectiveness of our method compared with state-of-the-art approaches.
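The abstract outlines two attention steps: temporal attention over per-frame features, and an adaptive gate that decides whether the next word should be driven by visual features or by language context. The NumPy fragment below is only a minimal sketch of that data flow; the shapes, parameter names, and gating form are assumptions chosen for illustration, not the authors' implementation, and it omits the semantic compositional part in which tag probabilities weight the LSTM parameters.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(frame_feats, hidden, W_v, W_h, w):
    # frame_feats: (T, D) visual features, one row per sampled frame
    # hidden:      (H,)   current decoder LSTM hidden state
    # Score each frame against the decoder state, then take a weighted mean.
    scores = np.tanh(frame_feats @ W_v + hidden @ W_h) @ w   # (T,)
    alpha = softmax(scores)                                   # attention weights
    context = alpha @ frame_feats                             # attended visual context (D,)
    return context, alpha

def adaptive_gate(context, sentinel, hidden, W_c, W_s, w_g):
    # Mix the visual context with a non-visual "sentinel" vector.
    # beta decides, per generated word, how much to rely on visual features
    # (e.g. for "dog") versus language context (e.g. for "the").
    beta = 1.0 / (1.0 + np.exp(-(hidden @ w_g)))              # scalar gate in (0, 1)
    return beta * (sentinel @ W_s) + (1.0 - beta) * (context @ W_c)

# Toy usage with random parameters, just to show the shapes involved.
rng = np.random.default_rng(0)
T, D, H = 8, 16, 32                       # frames, feature dim, hidden dim
frame_feats = rng.normal(size=(T, D))
hidden = rng.normal(size=H)
sentinel = rng.normal(size=H)             # would come from an extra LSTM gate
ctx, alpha = temporal_attention(frame_feats, hidden,
                                0.1 * rng.normal(size=(D, H)),
                                0.1 * rng.normal(size=(H, H)),
                                0.1 * rng.normal(size=H))
mixed = adaptive_gate(ctx, sentinel, hidden,
                      0.1 * rng.normal(size=(D, H)),
                      0.1 * rng.normal(size=(H, H)),
                      0.1 * rng.normal(size=H))
print(alpha.round(3), mixed.shape)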



Acknowledgment

This work was supported in part by the Fundamental Research Funds for the Central Universities of China under Grant 191010001, by the Hubei Key Laboratory of Transportation Internet of Things under Grants 2018IOT003 and 2020III026GX, and by the Science and Technology Department of Hubei Province under Grant 2017CFA012.

Author information

Corresponding author

Correspondence to Xian Zhong.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Dong, Z., Zhong, X., Chen, S., Liu, W., Cui, Q., Zhong, L. (2021). Adaptive Attention Mechanism Based Semantic Compositional Network for Video Captioning. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Intelligent Systems and Applications. IntelliSys 2020. Advances in Intelligent Systems and Computing, vol 1251. Springer, Cham. https://doi.org/10.1007/978-3-030-55187-2_5
