Attending Local and Global Features for Image Caption Generation

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1776)

Included in the following conference series: Computer Vision and Image Processing (CVIP 2022)

Abstract

Attention-based deep learning architectures have been used extensively for image captioning. Early methods applied attention over global features to generate captions. After the advent of local, object-level features, most researchers switched to local features, as they yield better performance than global features. As a result, the contribution of global features to caption generation has been largely ignored in current methods. Although local features perform well, they may not adequately capture the underlying relations among different local features. In contrast, global features can capture the relationships between different object-level features of the image and may provide essential cues for better caption generation. This work proposes a method that uses the mean global feature to attend to salient local features while generating the caption. The global features are extracted using a CNN, and the local features are extracted using a Faster R-CNN. The proposed decoder uses these features to generate the caption. Experiments on the MSCOCO dataset show that our approach generates better captions, demonstrating the effectiveness of the proposed method.
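
Since the abstract describes the mechanism only at a high level, the following PyTorch sketch illustrates one plausible reading of global-feature-guided attention: the mean-pooled CNN feature acts as a query that scores the Faster R-CNN region features, in the style of standard additive attention. The class name, dimensions, and the additive scoring form are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class GlobalGuidedAttention(nn.Module):
    # Hypothetical sketch: the mean global CNN feature attends over
    # Faster R-CNN local (object) features to produce a context vector
    # for the caption decoder. All dimensions are assumed for illustration.
    def __init__(self, local_dim=2048, global_dim=2048, attn_dim=512):
        super().__init__()
        self.w_local = nn.Linear(local_dim, attn_dim)    # project region features
        self.w_global = nn.Linear(global_dim, attn_dim)  # project mean global feature
        self.w_score = nn.Linear(attn_dim, 1)            # scalar attention score

    def forward(self, local_feats, global_feat):
        # local_feats: (batch, k, local_dim) region features from Faster R-CNN
        # global_feat: (batch, global_dim) mean-pooled CNN feature
        scores = self.w_score(torch.tanh(
            self.w_local(local_feats) + self.w_global(global_feat).unsqueeze(1)
        ))                                    # (batch, k, 1) additive attention
        alpha = torch.softmax(scores, dim=1)  # weights over the k regions
        context = (alpha * local_feats).sum(dim=1)  # attended local context
        return context, alpha.squeeze(-1)

# Usage with dummy tensors: 2 images, 36 regions, a 7x7 CNN feature map.
local_feats = torch.randn(2, 36, 2048)
feature_map = torch.randn(2, 2048, 7, 7)
global_feat = feature_map.mean(dim=(2, 3))   # mean global feature
context, weights = GlobalGuidedAttention()(local_feats, global_feat)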

Author information

Corresponding author

Correspondence to Virendra Kumar Meghwal.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Meghwal, V.K., Mittal, N., Singh, G. (2023). Attending Local and Global Features for Image Caption Generation. In: Gupta, D., Bhurchandi, K., Murala, S., Raman, B., Kumar, S. (eds) Computer Vision and Image Processing. CVIP 2022. Communications in Computer and Information Science, vol 1776. Springer, Cham. https://doi.org/10.1007/978-3-031-31407-0_47

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-31407-0_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31406-3

  • Online ISBN: 978-3-031-31407-0

  • eBook Packages: Computer Science, Computer Science (R0)
