
Image Captioning Based on Visual and Semantic Attention

  • Conference paper
  • MultiMedia Modeling (MMM 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)


Abstract

Most existing image captioning methods use only the visual information of the image to guide caption generation, lacking the guidance of effective scene-semantic information, and current visual attention mechanisms cannot adjust their focus intensity on the image. In this paper, we first propose an improved visual attention model. At each time step, we compute a focus-intensity coefficient for the attention mechanism from the model's context information and use this coefficient to automatically adjust the attention's focus intensity, so as to extract more accurate visual information from the image. In addition, we represent the scene-semantic information of the image with topic words related to the image's scene and add them to the language model. We use an attention mechanism to determine the visual and scene-semantic information that the model attends to at each time step, and combine the two to guide the model to generate more accurate, scene-specific captions. Finally, we evaluate our model on the MSCOCO dataset. The experimental results show that our approach generates more accurate captions and outperforms many recent advanced models on various evaluation metrics.
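The abstract outlines two mechanisms: a visual attention whose sharpness is modulated by a context-dependent focus-intensity coefficient, and a parallel semantic attention over scene topic words. As a rough illustration of one plausible reading, the PyTorch sketch below models the coefficient as a positive inverse softmax temperature predicted from the decoder state; all module names, layer shapes, and the softplus parameterization are our own assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticAttention(nn.Module):
    """Minimal sketch of the two attention branches described in the abstract.

    All names (lam, topics, dimensions) are illustrative assumptions,
    not the paper's exact formulation.
    """

    def __init__(self, feat_dim, topic_dim, hidden_dim, attn_dim):
        super().__init__()
        # visual branch: additive (Bahdanau-style) attention over image regions
        self.v_feat = nn.Linear(feat_dim, attn_dim)
        self.v_hid = nn.Linear(hidden_dim, attn_dim)
        self.v_score = nn.Linear(attn_dim, 1)
        # focus-intensity coefficient predicted from the decoder context;
        # it acts as an inverse softmax temperature (larger -> sharper focus)
        self.focus = nn.Linear(hidden_dim, 1)
        # semantic branch: attention over topic-word embeddings
        self.s_topic = nn.Linear(topic_dim, attn_dim)
        self.s_hid = nn.Linear(hidden_dim, attn_dim)
        self.s_score = nn.Linear(attn_dim, 1)

    def forward(self, feats, topics, hidden):
        # feats:  (B, R, feat_dim)  image region features
        # topics: (B, T, topic_dim) embeddings of scene topic words
        # hidden: (B, hidden_dim)   decoder state at the current time step
        e_v = self.v_score(torch.tanh(
            self.v_feat(feats) + self.v_hid(hidden).unsqueeze(1))).squeeze(-1)
        lam = F.softplus(self.focus(hidden))          # (B, 1), positive
        alpha = F.softmax(lam * e_v, dim=1)           # focus-adjusted weights
        v_ctx = (alpha.unsqueeze(-1) * feats).sum(1)  # attended visual context

        e_s = self.s_score(torch.tanh(
            self.s_topic(topics) + self.s_hid(hidden).unsqueeze(1))).squeeze(-1)
        beta = F.softmax(e_s, dim=1)
        s_ctx = (beta.unsqueeze(-1) * topics).sum(1)  # attended semantic context

        # the two contexts are combined and fed to the language model
        return torch.cat([v_ctx, s_ctx], dim=-1)
```

Because softplus keeps the coefficient positive, it can only sharpen or flatten the region weights, never invert their ranking: a coefficient near zero approaches uniform attention, while a large one concentrates the weights on a few regions.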



Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. 61966004, 61663004, 61762078, 61866004), the Guangxi Natural Science Foundation (Nos. 2016GXNSFAA380146, 2017GXNSFAA198365, 2018GXNSFDA281009), the Research Fund of the Guangxi Key Lab of Multi-source Information Mining and Security (16-A-03-02, MIMS18-08), the Guangxi Special Project of Science and Technology Base and Talents (AD16380008), the Innovation Project of Guangxi Graduate Education (XYCSZ2019068), the Guangxi "Bagui Scholar" Teams for Innovation and Research Project, and the Guangxi Collaborative Innovation Center of Multi-source Information Integration and Intelligent Processing.

Author information

Corresponding author

Correspondence to Zhixin Li.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wei, H., Li, Z., Zhang, C. (2020). Image Captioning Based on Visual and Semantic Attention. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol. 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_13


  • DOI: https://doi.org/10.1007/978-3-030-37731-1_13

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37730-4

  • Online ISBN: 978-3-030-37731-1

  • eBook Packages: Computer Science, Computer Science (R0)
