From Sparse to Dense: Semantic Graph Evolutionary Hashing for Unsupervised Cross-Modal Retrieval

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13844)

Abstract

In recent years, cross-modal hashing has attracted increasing attention due to its fast retrieval speed and low storage requirements. However, labeled datasets are scarce in real-world applications, and existing unsupervised cross-modal hashing algorithms usually employ heuristic geometric priors as semantics, which introduces serious bias, since similarity scores computed from the original features cannot reasonably represent the relationships among instances. In this paper, we study unsupervised deep cross-modal hash retrieval and propose a novel Semantic Graph Evolutionary Hashing (SGEH) method to solve this problem. The key novelty of SGEH is its evolutionary affinity graph construction. Concretely, we first build a sparse similarity graph from clustering results; this graph evolves by fusing affinity information from a code-driven graph over the intrinsic data, and then extends to a dense hybrid semantic graph that constrains hash code learning toward more discriminative codes. Moreover, batch inputs are sampled from the edge set rather than the vertex set, which better exploits the original spatial information in the sparse graph. Experiments on four benchmark datasets demonstrate the superiority of our framework over state-of-the-art unsupervised cross-modal retrieval methods. Code is available at: https://github.com/theusernamealreadyexists/SGEH.
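To make the pipeline concrete, the following is a minimal sketch of the two mechanisms the abstract describes: evolving a sparse clustering-based affinity graph into a dense hybrid semantic graph by fusing it with a code-driven graph, and sampling mini-batches from the edge set of the sparse graph. This is not the authors' implementation (see the linked repository for that); all function names, the fusion rule, and the parameter choices below are illustrative assumptions.

```python
# Minimal sketch of sparse-to-dense graph evolution and edge-based
# batch sampling, as summarized in the abstract. NOT the authors'
# implementation; every function name and the fusion rule are
# hypothetical stand-ins for the method described in the paper.
import numpy as np
from sklearn.cluster import KMeans

def build_sparse_graph(features, n_clusters=10, seed=0):
    """Sparse affinity: connect instances that share a k-means cluster."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(features)
    S = (labels[:, None] == labels[None, :]).astype(np.float32)
    np.fill_diagonal(S, 0.0)  # no self-loops
    return S

def code_driven_graph(codes):
    """Affinity from current hash codes: normalized inner product in [0, 1]."""
    B = np.sign(codes)          # binarize continuous relaxations
    K = B.shape[1]
    return (B @ B.T / K + 1.0) / 2.0

def evolve_to_dense(S_sparse, S_code, alpha=0.5):
    """Dense hybrid semantic graph: fuse clustering and code affinities."""
    return alpha * S_sparse + (1.0 - alpha) * S_code

def sample_edge_batch(S_sparse, batch_size, rng):
    """Draw a mini-batch of (i, j) pairs from the edge set of the
    sparse graph, rather than sampling vertices independently."""
    rows, cols = np.nonzero(S_sparse)
    idx = rng.choice(len(rows), size=batch_size, replace=False)
    return list(zip(rows[idx], cols[idx]))

# Toy usage with random features standing in for modality embeddings.
rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 64)).astype(np.float32)
codes = rng.standard_normal((200, 32))
S_sparse = build_sparse_graph(feats)
S_dense = evolve_to_dense(S_sparse, code_driven_graph(codes))
batch = sample_edge_batch(S_sparse, batch_size=32, rng=rng)
```

Sampling pairs from the edge set, rather than drawing vertices at random, guarantees that every mini-batch contains related instances, so the hybrid graph's supervisory signal is present in each training step.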



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 61872187, No. 62072246 and No. 62077023, in part by the Natural Science Foundation of Jiangsu Province under Grant No. BK20201306, and in part by the “111” Program under Grant No. B13022.

Author information

Corresponding author

Correspondence to Haofeng Zhang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 549 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhao, Y., Yu, J., Liao, S., Zhang, Z., Zhang, H. (2023). From Sparse to Dense: Semantic Graph Evolutionary Hashing for Unsupervised Cross-Modal Retrieval. In: Wang, L., Gall, J., Chin, T.-J., Sato, I., Chellappa, R. (eds.) Computer Vision – ACCV 2022. Lecture Notes in Computer Science, vol. 13844. Springer, Cham. https://doi.org/10.1007/978-3-031-26316-3_31

  • DOI: https://doi.org/10.1007/978-3-031-26316-3_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26315-6

  • Online ISBN: 978-3-031-26316-3

  • eBook Packages: Computer Science, Computer Science (R0)
