Abstract
With the development of the Internet, the forms of web data are increasing rapidly. However, existing cross-media retrieval methods mainly focus on coarse-grained retrieval, which falls far short of the needs of practical applications. In addition, the heterogeneity gap among different types of media tends to produce inconsistent data representations, which makes measuring cross-media similarity quite challenging. In this work, we propose a novel multi-modal network for fine-grained cross-media retrieval. Specifically, our model consists of two kinds of networks: proprietary networks and a common network. Each proprietary network is a dedicated feature extraction network for one media type, extracting that media's unique features to obtain a precise feature representation. The common network is designed to extract features shared by all four media types. Comprehensive experiments demonstrate the effectiveness of our proposed approach. The source code and models of this work have been made publicly available at: https://github.com/fgcmr/fgcmr.
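To make the two-branch design concrete, the following is a minimal PyTorch sketch. All class names, feature dimensions, and the use of pre-extracted backbone features are illustrative assumptions, not the authors' released implementation (which is available in the linked repository):

```python
# A minimal sketch of a proprietary-plus-common network, assuming each media
# type arrives as a 2048-d backbone feature. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class FineGrainedCrossMediaNet(nn.Module):
    def __init__(self, num_classes, feat_dim=1024):
        super().__init__()
        # Proprietary branch: one extractor per media type, capturing
        # media-specific cues for a precise feature representation.
        self.proprietary = nn.ModuleDict({
            media: nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
            for media in ("image", "video", "text", "audio")
        })
        # Common branch: a single network shared by all four media types,
        # mapping them into one space where similarity can be measured.
        self.common = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x, media):
        specific = self.proprietary[media](x)   # media-specific features
        shared = self.common(specific)          # common-space embedding
        return self.classifier(shared), shared

# Usage: embed an image feature and a text feature into the shared space,
# then compare them with cosine similarity for retrieval.
model = FineGrainedCrossMediaNet(num_classes=200)
img, txt = torch.randn(4, 2048), torch.randn(4, 2048)
_, img_emb = model(img, "image")
_, txt_emb = model(txt, "text")
sim = torch.cosine_similarity(img_emb, txt_emb)
```

Because every media type passes through the same common branch, embeddings of different media become directly comparable, which is what cross-media retrieval requires.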
This work was supported by the National Natural Science Foundation of China (No. 61976116, 61773117), Fundamental Research Funds for the Central Universities (No. 30920021135), and the Primary Research & Development Plan of Jiangsu Province - Industry Prospects and Common Key Technologies (No. BE2017157).
Cite this paper
Bai, J., Yao, Y., Wang, Q., Zhou, Y., Yang, W., Shen, F. (2020). Multi-model Network for Fine-Grained Cross-Media Retrieval. In: Peng, Y., et al. Pattern Recognition and Computer Vision. PRCV 2020. Lecture Notes in Computer Science, vol. 12306. Springer, Cham. https://doi.org/10.1007/978-3-030-60639-8_16