Impact Statement:
Hash code learning is an important technology that enables efficient image retrieval on large-scale data. While existing hashing algorithms can effectively generate compact binary codes in a supervised setting trained on a moderate-size dataset, they scale poorly to large datasets and do not generalize to unseen ones. The proposed approach overcomes these limitations. Compared with state-of-the-art methods, our solution achieves an average performance improvement of 2.1% on four moderate-size benchmarks and of 4.7% on ImageNet, a large-scale dataset with over 1.2 million training images. Beyond its superior performance on popular benchmarks for binary hash code learning, the proposed technology also performs well in cross-dataset and zero-shot (i.e., the testing concepts are unseen during training) scenarios.
Abstract:
In this article, we propose SemanticHash, a simple and effective deep neural network model that leverages semantic word embeddings (e.g., BERT) in hash code learning. Both images and class labels are compressed into K-bit binary vectors by the visual (or the semantic) hash functions, which are jointly learned and aligned to optimize semantic consistency. The K-dimensional class label prototypes, projected from semantic word embeddings, guide the hash mapping on the image side and vice versa, so that the K-bit image hash codes are aligned with their semantic prototypes and are therefore more discriminative. Extensive experimental results on four benchmarks, the CIFAR10, NUS-WIDE, ImageNet, and MS-COCO datasets, demonstrate the effectiveness of our approach. We also perform studies to analyze the effects of quantization and word semantic spaces and to explain the relations among the learned class prototypes. Finally, the generalization capability of the proposed approach is further demonstrated in cross-dataset and zero-shot retrieval scenarios.
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 2, Issue: 1, February 2021)
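To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the joint visual/semantic hashing idea: a visual hash head maps image features to K-dim relaxed binary codes, a semantic head projects class word embeddings into K-dim prototypes, and an alignment loss pulls each image code toward its class prototype. The layer sizes, the tanh relaxation, and the cosine-similarity cross-entropy loss are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

K = 64          # number of hash bits
D_IMG = 2048    # image feature dim (e.g., from a CNN backbone) -- assumed
D_TXT = 768     # word-embedding dim (e.g., BERT) -- assumed

class SemanticHashSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Visual hash function: image features -> K-dim relaxed binary codes.
        self.visual_head = nn.Linear(D_IMG, K)
        # Semantic hash function: class word embeddings -> K-dim prototypes.
        self.semantic_head = nn.Linear(D_TXT, K)

    def forward(self, img_feat, class_emb):
        # tanh is a common continuous relaxation of the sign() quantizer.
        img_code = torch.tanh(self.visual_head(img_feat))       # (B, K)
        prototypes = torch.tanh(self.semantic_head(class_emb))  # (C, K)
        return img_code, prototypes

def alignment_loss(img_code, prototypes, labels, temperature=0.1):
    # Encourage each image code to be close to its own class prototype and
    # far from the others, via softmax over cosine similarities.
    logits = F.normalize(img_code, dim=1) @ F.normalize(prototypes, dim=1).T
    return F.cross_entropy(logits / temperature, labels)

# Usage: train with the alignment loss, then binarize with sign() at
# retrieval time to obtain the K-bit codes used for Hamming-distance search.
model = SemanticHashSketch()
img_feat = torch.randn(8, D_IMG)      # a batch of image features
class_emb = torch.randn(10, D_TXT)    # embeddings of 10 class names
labels = torch.randint(0, 10, (8,))
img_code, protos = model(img_feat, class_emb)
loss = alignment_loss(img_code, protos, labels)
binary_codes = torch.sign(img_code)   # approximately {-1, +1}^K codes

Note that in the relaxed training phase the codes are real-valued; the sign() quantization at inference introduces the quantization effects the abstract mentions studying.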