Publication Type

Journal Article

Version

acceptedVersion

Publication Date

7-2016

Abstract

Cross-modal hashing integrates the advantages of traditional cross-modal retrieval and hashing, so it can solve large-scale cross-modal retrieval both effectively and efficiently. However, existing cross-modal hashing methods either rely on labeled training data or lack semantic analysis. In this paper, we propose Cross-Modal Self-Taught Hashing (CMSTH) for large-scale cross-modal and unimodal image retrieval. CMSTH can effectively capture semantic correlation from unlabeled training data. Its learning process consists of three steps: first, we propose Hierarchical Multi-Modal Topic Learning (HMMTL) to detect multi-modal topics with semantic information. Then we use Robust Matrix Factorization (RMF) to transfer the multi-modal topics to hash codes that are better suited to quantization; these codes form a unified hash space. Finally, we learn hash functions to project all modalities into the unified hash space. Experimental results on two web image datasets demonstrate the effectiveness of CMSTH compared to representative cross-modal and unimodal hashing methods.
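
To make the three-step pipeline concrete, below is a minimal, hypothetical Python sketch. The actual HMMTL and RMF formulations are defined in the paper; the truncated-SVD topic proxy, sign quantization, and ridge-regression hash functions used here are simple stand-ins that only mark where each step fits, and all names (X_img, X_txt, learn_hash_function) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_img, d_txt, k = 1000, 128, 64, 32   # samples, feature dims, code length

X_img = rng.random((n, d_img))           # unlabeled image features
X_txt = rng.random((n, d_txt))           # unlabeled text features

# Step 1 (stand-in for HMMTL): learn a shared multi-modal topic matrix.
# A real implementation detects hierarchical multi-modal topics with
# semantic information; a truncated SVD of the concatenated, centered
# features is only a rough proxy for that shared representation.
X = np.hstack([X_img, X_txt])
U, s, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
T = U[:, :k] * s[:k]                     # n x k multi-modal topic matrix

# Step 2 (stand-in for RMF): map topics to codes suited to quantization,
# then binarize to obtain the unified hash space.
B = np.sign(T - T.mean(0))               # n x k codes in {-1, +1}
B[B == 0] = 1

# Step 3: learn per-modality hash functions that project each modality
# onto the unified codes B. Closed-form ridge regression is one common
# choice for this out-of-sample projection step.
def learn_hash_function(X_mod, B, lam=1e-2):
    d = X_mod.shape[1]
    W = np.linalg.solve(X_mod.T @ X_mod + lam * np.eye(d), X_mod.T @ B)
    return W                              # d x k; hash(x) = sign(x @ W)

W_img = learn_hash_function(X_img, B)
W_txt = learn_hash_function(X_txt, B)

# Cross-modal query: hash a text query, rank images by Hamming distance.
q = np.sign(X_txt[0] @ W_txt)
codes_img = np.sign(X_img @ W_img)
hamming = (codes_img != q).sum(axis=1)
print("top-5 image indices:", np.argsort(hamming)[:5])
```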

Keywords

Cross-modal hashing, Image retrieval, Self-taught learning, Semantic correlation

Discipline

Graphics and Human Computer Interfaces | Software Engineering

Research Areas

Information Systems and Management

Publication

Signal Processing

Volume

124

First Page

81

Last Page

92

ISSN

0165-1684

Identifier

10.1016/j.sigpro.2015.10.010

Publisher

Elsevier

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1016/j.sigpro.2015.10.010
