
News Video Description Based on Template Generation and Entity Insertion

  • Conference paper
Intelligent Computing Theories and Application (ICIC 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13393)


Abstract

News video description aims to generate a knowledge-rich description for a news video from the video and its attached text. The core difficulty of this task is mining events and named entities from the attached text under the guidance of the video content. Existing approaches are one-stage methods that do not filter redundant contextual sentences, which leads to inaccurate descriptions. This paper proposes a two-stage approach based on template generation and entity insertion, where the first stage focuses on generating events and the second stage on generating named entities such as event participants. Specifically, we first design a sentence ranker based on pre-trained models to filter video-related sentences from the attached text, then use a multimodal encoder and a transformer-based decoder to generate a description template, and finally insert entities drawn from the ranked sentences to obtain the final description. Experimental results show that our method achieves strong performance on the News Video Dataset.
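The second stage described in the abstract can be sketched as follows. This is a minimal, illustrative Python sketch, not the paper's implementation: the placeholder tags (`<PERSON>`, `<GPE>`, `<ORG>`) and the toy `extract_entities` lexicon are assumptions standing in for a real template vocabulary and a real NER tagger, and entities are filled greedily from the sentences in rank order (most video-relevant first).

```python
import re

def extract_entities(sentence):
    """Toy NER: a real system would use a trained tagger here."""
    lexicon = {"Angela Merkel": "PERSON", "Berlin": "GPE", "NATO": "ORG"}
    return [(name, label) for name, label in lexicon.items() if name in sentence]

def insert_entities(template, ranked_sentences):
    """Fill each typed placeholder in the template with the first matching
    entity found while scanning sentences in rank order."""
    pools = {}  # entity label -> ordered list of candidate mentions
    for sent in ranked_sentences:
        for name, label in extract_entities(sent):
            pools.setdefault(label, [])
            if name not in pools[label]:
                pools[label].append(name)

    def fill(match):
        candidates = pools.get(match.group(1))
        # Leave the placeholder untouched if no candidate entity was found.
        return candidates.pop(0) if candidates else match.group(0)

    return re.sub(r"<([A-Z]+)>", fill, template)

template = "<PERSON> arrived in <GPE> for talks."
context = ["Angela Merkel visited Berlin on Monday.", "NATO officials attended."]
print(insert_entities(template, context))
# → Angela Merkel arrived in Berlin for talks.
```

The design choice this illustrates is the decoupling the paper argues for: the template fixes the event structure first, so entity selection reduces to a retrieval problem over the ranked context sentences rather than free-form generation.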



Acknowledgments

This research was funded by the National Key Research and Development Program of China (No. 2019YFB2101600).

Author information

Correspondence to Pengjun Zhai.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yuan, Q., Zhai, P., Zheng, D., Fang, Y. (2022). News Video Description Based on Template Generation and Entity Insertion. In: Huang, DS., Jo, KH., Jing, J., Premaratne, P., Bevilacqua, V., Hussain, A. (eds) Intelligent Computing Theories and Application. ICIC 2022. Lecture Notes in Computer Science, vol 13393. Springer, Cham. https://doi.org/10.1007/978-3-031-13870-6_28


  • DOI: https://doi.org/10.1007/978-3-031-13870-6_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-13869-0

  • Online ISBN: 978-3-031-13870-6

  • eBook Packages: Computer Science (R0)
