Abstract:
The representational quality of the generated feature vectors for images is essential for image retrieval models to achieve high performance. Spatial information is crucial in obtaining highly representative feature vectors for image retrieval, and deep convolutional neural networks provide an excellent framework for generating such features. Through convolutional operations, deep convolutional neural networks include spatial information in the feature maps. However, most available architectures cannot include adequate spatial details in the feature maps required for high-performance image retrieval. Deep residual networks are deep networks capable of including useful information through residual learning. This paper proposes a novel residual block that generates feature maps by focusing on spatial information. The proposed residual block comprises three modules: a spatial feature extraction module, a hierarchical feature extraction module, and a feature fusion module. The first module includes spatial information in the feature maps at different levels of abstraction, while the second module includes spatial information using a conventional convolution hierarchy. The third module fuses the outputs of the first two modules to provide a very rich set of feature maps. The present study tests a deep network employing the proposed residual block. The results indicate that the proposed network performs comparably to or better than state-of-the-art methods on standard benchmarks, thus showing the effectiveness of the proposed residual block in improving representational capacity.
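To make the three-module structure concrete, the following is a minimal toy sketch of a two-branch residual block with feature fusion. It is not the paper's actual design: the kernel choices, the ReLU placement, and fusing by element-wise summation are all assumptions made for illustration, and the helper names (`spatial_branch`, `hierarchical_branch`, `fuse`) are hypothetical.

```python
import numpy as np

def conv2d(x, k):
    """Single-channel 2D convolution with zero ('same') padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def spatial_branch(x, kernels):
    """Module 1 (assumed): apply several kernels in parallel,
    standing in for spatial features at different abstraction levels."""
    return [conv2d(x, k) for k in kernels]

def hierarchical_branch(x, kernels):
    """Module 2 (assumed): a conventional sequential conv hierarchy
    with ReLU activations."""
    for k in kernels:
        x = np.maximum(conv2d(x, k), 0.0)
    return x

def fuse(features):
    """Module 3 (assumed): fuse branch outputs by element-wise sum."""
    return np.sum(features, axis=0)

def residual_block(x, spatial_kernels, hier_kernels):
    """Proposed-style block sketch: fuse both branches, then add the
    identity shortcut (residual connection)."""
    fused = fuse(spatial_branch(x, spatial_kernels)
                 + [hierarchical_branch(x, hier_kernels)])
    return x + fused
```

With all-zero kernels both branches contribute nothing and the block reduces to the identity shortcut, which is the defining property of residual learning this sketch is meant to illustrate.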
Date of Conference: 21-25 May 2023
Date Added to IEEE Xplore: 21 July 2023