Abstract
The aim of cross-modal retrieval is to learn mappings that project samples from different modalities into a common space where similarity among instances can be measured. To pursue a common subspace, traditional approaches tend to solve for exact projection matrices, yet it is unrealistic to fully model multimodal data with linear projections alone. In this paper, we propose a novel graph embedding learning framework that directly approximates the projected manifold and utilizes both labeled information and local geometric structure. It avoids explicit eigenvector decomposition by iterating random walks on the graph. Sampling strategies are adopted to generate training pairs that fully explore inter- and intra-modality relations within the data cloud. Moreover, the graph embedding is learned in a semi-supervised manner, which helps discriminate the underlying representations of different classes. Experimental results on the Wikipedia dataset show that the proposed framework is effective and outperforms other state-of-the-art methods on cross-modal retrieval.
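The abstract describes learning embeddings by running random walks on a graph over multimodal samples and training on sampled node pairs, in the spirit of skip-gram-style graph embedding methods. The following is a minimal sketch of that general idea under assumptions of our own (toy adjacency list, uniform walks, negative sampling, labels omitted); it is not the authors' exact formulation, graph construction, or semi-supervised loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph over multimodal samples. In practice edges might come from
# intra-modality k-NN links and inter-modality ground-truth pairs; the
# values below are purely illustrative.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}

n_nodes, dim = len(adj), 8
emb = rng.normal(scale=0.1, size=(n_nodes, dim))  # embeddings to learn
ctx = rng.normal(scale=0.1, size=(n_nodes, dim))  # context vectors


def random_walk(start, length):
    """Uniform random walk used to sample co-occurring (node, context) pairs."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(adj[walk[-1]])))
    return walk


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


lr, window, n_neg = 0.05, 2, 2
for epoch in range(200):
    for start in range(n_nodes):
        walk = random_walk(start, length=6)
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if i == j:
                    continue
                v = walk[j]
                u_vec, v_vec = emb[u].copy(), ctx[v].copy()
                # Positive pair: pull u towards v's context vector.
                g = 1.0 - sigmoid(u_vec @ v_vec)
                emb[u] += lr * g * v_vec
                ctx[v] += lr * g * u_vec
                # Negative sampling: push u away from random nodes.
                for neg in rng.integers(0, n_nodes, size=n_neg):
                    n_vec = ctx[neg].copy()
                    s = sigmoid(emb[u] @ n_vec)
                    emb[u] -= lr * s * n_vec
                    ctx[neg] -= lr * s * emb[u]

# After training, similarity in `emb` approximates graph proximity, which is
# the kind of common space used for cross-modal matching.
```

A semi-supervised variant would add a classification term on the labeled nodes alongside this unsupervised objective, so that embeddings also separate classes.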
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 61371148 and 61771145. The authors thank Jiayan Cao for his help with architecture modeling and implementation.