Know More Say Less: Image Captioning Based on Scene Graphs


Abstract:

Automatically describing the content of an image has been attracting considerable research attention in the multimedia field. To represent the content of an image, many approaches directly utilize convolutional neural networks (CNNs) to extract visual representations, which are fed into recurrent neural networks to generate natural language. Recently, some approaches have detected semantic concepts from images and then encoded them into high-level representations. Although substantial progress has been achieved, most of the previous methods treat entities in images individually, thus lacking structured information that provides important cues for image captioning. In this paper, we propose a framework based on scene graphs for image captioning. Scene graphs contain abundant structured information because they not only depict object entities in images but also present pairwise relationships. To leverage both visual features and semantic knowledge in structured scene graphs, we extract CNN features from the bounding box offsets of object entities for visual representations, and extract semantic relationship features from triples (e.g., man riding bike) for semantic representations. After obtaining these features, we introduce a hierarchical-attention-based module to learn discriminative features for word generation at each time step. The experimental results on benchmark datasets demonstrate the superiority of our method compared with several state-of-the-art methods.
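As a rough illustration of the hierarchical-attention idea described in the abstract (a minimal sketch, not the authors' released implementation), the following PyTorch code first attends separately over object-region visual features and relationship-triple semantic features, then attends over the two resulting contexts to produce a fused representation for word generation. All module names, dimensions, and the additive (Bahdanau-style) attention form are assumptions made for illustration.

```python
# Minimal sketch of two-level (hierarchical) attention over visual and
# semantic feature sets. Assumes both feature sets are projected to a
# common dimension feat_dim; all names here are illustrative, not the
# paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Additive attention over a set of features, conditioned on a hidden state."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (batch, n, feat_dim); hidden: (batch, hidden_dim)
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.hidden_proj(hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)            # attention weights: (batch, n, 1)
        return (alpha * feats).sum(dim=1)      # weighted context: (batch, feat_dim)

class HierarchicalAttention(nn.Module):
    """Level 1: attend within each modality. Level 2: attend across modalities."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.visual_attn = AdditiveAttention(feat_dim, hidden_dim, attn_dim)
        self.semantic_attn = AdditiveAttention(feat_dim, hidden_dim, attn_dim)
        self.modality_attn = AdditiveAttention(feat_dim, hidden_dim, attn_dim)

    def forward(self, visual_feats, semantic_feats, hidden):
        v_ctx = self.visual_attn(visual_feats, hidden)      # object-region features
        s_ctx = self.semantic_attn(semantic_feats, hidden)  # relation-triple features
        # Second level: weigh the two modality contexts against each other.
        contexts = torch.stack([v_ctx, s_ctx], dim=1)       # (batch, 2, feat_dim)
        return self.modality_attn(contexts, hidden)

# Example usage with made-up sizes: 36 object regions, 10 relation triples.
attn = HierarchicalAttention(feat_dim=512, hidden_dim=512, attn_dim=256)
ctx = attn(torch.randn(4, 36, 512), torch.randn(4, 10, 512), torch.randn(4, 512))
# ctx: (4, 512) fused context that would feed the RNN decoder at each time step
```

The two-level structure lets the decoder decide at each time step not only which entity or relationship to focus on, but also how much to rely on visual versus semantic evidence overall.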
Published in: IEEE Transactions on Multimedia (Volume 21, Issue 8, August 2019)
Pages: 2117-2130
Date of Publication: 30 January 2019
