Abstract:
As massive amounts of data are stored in cloud datacenters, it is necessary to efficiently locate data of interest in such a distributed environment. However, since it is difficult to create a visual vocabulary due to the lack of global information, most existing systems for Content-Based Image Retrieval (CBIR) focus only on global image features. In this paper, we propose a novel image retrieval framework that efficiently incorporates the bag-of-visual-words model into Distributed Hash Tables (DHTs). Its key idea is to establish visual words for local image features by exploiting the merits of Locality Sensitive Hashing (LSH), so that similar image patches are most likely gathered onto the same nodes without knowledge of any global information. Extensive experimental results demonstrate that our approach yields high accuracy at very low cost, while keeping the load balanced.
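The core idea above can be illustrated with a generic sketch (not the paper's actual implementation): random-hyperplane LSH hashes nearby feature vectors to the same signature, and that signature can then be mapped to a DHT node identifier, so similar patches tend to land on the same node with no global coordination. The dimensions, plane count, and `node_for` helper below are illustrative assumptions.

```python
import numpy as np

# Assumed parameters for illustration: 8 random hyperplanes over
# 16-dimensional local feature descriptors.
rng = np.random.default_rng(0)
planes = rng.standard_normal((8, 16))

def lsh_signature(vec):
    # Random-hyperplane LSH: one bit per hyperplane (sign of projection).
    # Vectors at a small angle agree on most bits with high probability.
    return tuple(bool(b) for b in (planes @ vec) > 0)

def node_for(vec, num_nodes):
    # Hypothetical mapping from an LSH signature to a DHT node id;
    # vectors with identical signatures deterministically share a node.
    return hash(lsh_signature(vec)) % num_nodes

# Two copies of the same descriptor always route to the same node.
v = rng.standard_normal(16)
print(node_for(v, 64) == node_for(v.copy(), 64))  # True
```

Because every peer draws the same hyperplanes (e.g. from a shared seed), no node needs global knowledge of the feature distribution to agree on where a patch belongs.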
Published in: 39th Annual IEEE Conference on Local Computer Networks
Date of Conference: 08-11 September 2014
Date Added to IEEE Xplore: 16 October 2014
Print ISSN: 0742-1303