
Efficient Cross-Modal Retrieval Using Social Tag Information Towards Mobile Applications

  • Conference paper
  • In: Mobility Analytics for Spatio-Temporal and Social Data (MATES 2017)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10731)

Abstract

With the prevalence of mobile devices, millions of multimedia items, each represented as a combination of visual, aural and textual modalities, are produced every second. To facilitate better information retrieval on mobile devices, it is imperative to develop efficient models that retrieve heterogeneous content modalities from a query in a specific modality, e.g., text-to-image or image-to-text retrieval. Unfortunately, previous works address the problem without considering the hardware constraints of mobile devices. In this paper, we propose a novel method named Trigonal Partial Least Squares (TPLS) for the task of cross-modal retrieval on mobile devices. Specifically, TPLS works under the hardware constraints of mobile devices, i.e., limited memory size and no GPU acceleration. To take advantage of users’ tags for model training, we treat the label information provided by users as a third modality. Then, each pair of the three modalities (texts, images and labels) is used to build a Kernel PLS model. As a result, TPLS is a joint model of three Kernel PLS models, together with a proposed constraint that narrows the distance between the label spaces of images and texts. To learn the model efficiently, we use parallel stochastic gradient descent (SGD) to accelerate learning while reducing memory consumption. To show the effectiveness of TPLS, we conduct experiments on popular cross-modal retrieval benchmark datasets and obtain competitive results.
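
For illustration, the sketch below shows the general pairwise kernel-PLS construction described in the abstract: one PLS model per pair of modalities (image, text, tag), with retrieval performed by cosine similarity in a shared latent space. It is not the authors' TPLS formulation: scikit-learn's PLSRegression applied to RBF kernel matrices stands in for Kernel PLS, the label-space constraint and the memory-bounded parallel SGD solver are omitted, and all feature matrices, tag matrices and sizes are illustrative assumptions.

```python
# Minimal sketch of pairwise kernel PLS over three modalities (image, text, tag).
# Not the authors' exact TPLS model; data and hyperparameters are toy assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n, d_img, d_txt, n_tags, k = 200, 128, 64, 10, 8        # toy sizes, purely illustrative

X_img = rng.standard_normal((n, d_img))                  # image features (e.g., CNN descriptors)
X_txt = rng.standard_normal((n, d_txt))                  # text features (e.g., topic proportions)
Y_tag = rng.integers(0, 2, size=(n, n_tags)).astype(float)  # user tags as the third modality

# Kernelize each feature modality over the training set.
K_img = rbf_kernel(X_img, gamma=1.0 / d_img)
K_txt = rbf_kernel(X_txt, gamma=1.0 / d_txt)

# One PLS model per modality pair: image-text, image-tag, text-tag.
pls_img_txt = PLSRegression(n_components=k).fit(K_img, K_txt)
pls_img_tag = PLSRegression(n_components=k).fit(K_img, Y_tag)
pls_txt_tag = PLSRegression(n_components=k).fit(K_txt, Y_tag)

# Cross-modal retrieval: project images and texts into the shared image-text
# latent space and rank candidates by cosine similarity.
Z_img, Z_txt = pls_img_txt.transform(K_img, K_txt)
Z_img, Z_txt = normalize(Z_img), normalize(Z_txt)
scores = Z_img @ Z_txt.T                                  # scores[i, j]: image i vs. text j
print("Top-ranked text for image 0:", int(scores[0].argmax()))
```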



Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant 61672497, Grant 61332016, Grant 61620106009, Grant 61650202 and Grant U1636214, in part by the National Basic Research Program of China (973 Program) under Grant 2015CB351802, and in part by the Key Research Program of Frontier Sciences of CAS under Grant QYZDJ-SSW-SYS013. This work was also partially supported by the CAS Pioneer Hundred Talents Program awarded to Dr. Qiang Qu.

Author information

Corresponding author

Correspondence to Shuhui Wang.


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

He, J., Wang, S., Qu, Q., Zhang, W., Huang, Q. (2018). Efficient Cross-Modal Retrieval Using Social Tag Information Towards Mobile Applications. In: Doulkeridis, C., Vouros, G., Qu, Q., Wang, S. (eds) Mobility Analytics for Spatio-Temporal and Social Data. MATES 2017. Lecture Notes in Computer Science, vol 10731. Springer, Cham. https://doi.org/10.1007/978-3-319-73521-4_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-73521-4_10

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-73520-7

  • Online ISBN: 978-3-319-73521-4

  • eBook Packages: Computer Science, Computer Science (R0)
