
Graph Embedding Learning for Cross-Modal Information Retrieval

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10636)


Abstract

The aim of cross-modal retrieval is to learn mappings that project samples from different modalities into a common space where similarity between instances can be measured. To pursue a common subspace, traditional approaches solve for exact projection matrices, yet it is unrealistic to fully model multimodal data with linear projections alone. In this paper, we propose a novel graph embedding learning framework that directly approximates the projected manifold and exploits both label information and local geometric structure. It avoids explicit eigenvector decomposition by iterating random walks on the graph. Sampling strategies generate training pairs that fully explore inter- and intra-modality relations in the data. Moreover, the graph embedding is learned in a semi-supervised manner, which helps discriminate the underlying representations of different classes. Experimental results on the Wikipedia dataset show that the proposed framework is effective and outperforms other state-of-the-art methods on cross-modal retrieval.
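The key idea in the abstract, i.e. approximating the embedding by iterating random walks over a graph instead of performing an explicit eigenvector decomposition, can be sketched with a DeepWalk-style pair sampler followed by skip-gram training with negative sampling. This is a minimal illustration on a toy graph, not the paper's actual implementation; all function names, the graph, and the hyperparameters are assumptions for demonstration only.

```python
import random
import numpy as np

def random_walk(adj, start, length, rng):
    """Generate one truncated random walk on the graph."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def training_pairs(adj, walks_per_node, walk_length, window, seed=0):
    """Sample (node, context) pairs from random walks within a sliding window."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = random_walk(adj, start, walk_length, rng)
            for i, u in enumerate(walk):
                lo, hi = max(0, i - window), min(len(walk), i + window + 1)
                pairs.extend((u, walk[j]) for j in range(lo, hi) if j != i)
    return pairs

def train_embeddings(pairs, n_nodes, dim=16, lr=0.025, neg=3, epochs=2, seed=0):
    """Skip-gram with negative sampling: no eigendecomposition required."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(n_nodes, dim))   # node embeddings
    ctx = rng.normal(scale=0.1, size=(n_nodes, dim))   # context embeddings
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for u, v in pairs:
            # one positive sample plus `neg` random negatives
            samples = [(v, 1.0)] + [(int(rng.integers(n_nodes)), 0.0)
                                    for _ in range(neg)]
            for w, label in samples:
                g = sigmoid(emb[u] @ ctx[w]) - label
                grad_u = g * ctx[w]
                ctx[w] -= lr * g * emb[u]
                emb[u] -= lr * grad_u
    return emb

# Toy undirected graph standing in for the cross-modal similarity graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
pairs = training_pairs(adj, walks_per_node=5, walk_length=6, window=2)
emb = train_embeddings(pairs, n_nodes=len(adj))
```

In the paper's setting the graph nodes would span both modalities (images and texts), with edges induced by labels and local geometric structure, so walks naturally generate both inter- and intra-modality training pairs.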


Notes

  1.

    http://www.svcl.ucsd.edu/projects/crossmodal/.

  2.

    https://github.com/fchollet/keras.



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under grants 61371148 and 61771145. The authors would like to thank Jiayan Cao for his help with architecture modeling and implementation.

Author information

Correspondence to Xiaodong Gu.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhang, Y., Gu, X. (2017). Graph Embedding Learning for Cross-Modal Information Retrieval. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science(), vol 10636. Springer, Cham. https://doi.org/10.1007/978-3-319-70090-8_60


  • DOI: https://doi.org/10.1007/978-3-319-70090-8_60


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70089-2

  • Online ISBN: 978-3-319-70090-8

  • eBook Packages: Computer Science (R0)
