
Prompt Learning with Cross-Modal Feature Alignment for Visual Domain Adaptation

  • Conference paper
Artificial Intelligence (CICAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13604)

Abstract

Exploring how pre-trained large-scale models learn common features of multimodal data, and how that knowledge transfers to downstream tasks, are two major trends in the multimedia field. However, existing studies usually use pre-trained models either as feature extractors or as teacher models for knowledge distillation on downstream tasks, so the cross-modal knowledge transfer mechanism and the knowledge-forgetting problem of large pre-trained models have not been fully investigated. To address these issues, this paper studies the fine-tuning strategy, the feature selection strategy, and the semantic guidance approach used when transferring large pre-trained models. To mitigate knowledge forgetting during fine-tuning, an image classification algorithm (PMHANet) that integrates a pre-trained large-scale model with heterogeneous feature alignment is proposed. More importantly, this provides a cross-modal knowledge transfer paradigm for multimodal pre-training of large models. Experiments on VireoFood-172 and NUS-WIDE show that large models pre-trained on datasets such as COCO perform better on NUS-WIDE, whose domain is close to the pre-training data, than on the more domain-specific VireoFood-172, and that PMHANet, built on a partially fine-tuned pre-trained model, effectively enhances multimodal representations in downstream tasks and achieves state-of-the-art performance on both datasets.
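
As a rough illustration of the training pattern the abstract describes (a mostly frozen pre-trained backbone, heterogeneous image/text features projected into a shared space, and a downstream classifier trained jointly with an alignment objective), a minimal PyTorch sketch is given below. This is not the authors' PMHANet implementation: the module names, the choice of which layers remain trainable, and the loss weight are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlignmentHead(nn.Module):
    """Projects heterogeneous image/text features into a shared space (hypothetical module)."""

    def __init__(self, img_dim: int, txt_dim: int, shared_dim: int = 256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)

    def forward(self, img_feat, txt_feat):
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        # Cosine-distance alignment loss between paired image/text features.
        align_loss = (1.0 - (z_img * z_txt).sum(dim=-1)).mean()
        return z_img, z_txt, align_loss


def partially_freeze(backbone: nn.Module, trainable_prefixes=("3",)):
    """Freeze all backbone parameters except those whose names start with a given prefix."""
    for name, p in backbone.named_parameters():
        p.requires_grad = any(name.startswith(pre) for pre in trainable_prefixes)


# Toy stand-ins for pre-trained encoders; real experiments would load actual
# pre-trained vision/text backbones instead.
image_encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 1024),  # pretend "early pre-trained layers" (frozen below)
    nn.ReLU(),
    nn.Linear(1024, 512),          # pretend "last block" (kept trainable)
)
text_encoder = nn.Linear(768, 512)
partially_freeze(image_encoder, trainable_prefixes=("3",))  # "partial fine-tuning"

align_head = AlignmentHead(img_dim=512, txt_dim=512)
classifier = nn.Linear(256, 172)  # e.g. 172 food categories in VireoFood-172

images = torch.randn(4, 3, 32, 32)     # dummy image batch
text_emb = torch.randn(4, 768)         # dummy text/label embeddings
labels = torch.randint(0, 172, (4,))

z_img, z_txt, align_loss = align_head(image_encoder(images), text_encoder(text_emb))
logits = classifier(z_img)
loss = F.cross_entropy(logits, labels) + 0.1 * align_loss  # 0.1 is an assumed weight
loss.backward()
```

In the paper's setting the frozen portion would be the bulk of a multimodal pre-trained model rather than the toy encoders used here; only the overall pattern of partial fine-tuning plus cross-modal feature alignment is intended to carry over.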

References

  1. Chen, J., Ngo, C.W.: Deep-based ingredient recognition for cooking recipe retrieval. In: Proceedings of the 24th ACM International Conference on Multimedia, pp. 32–41 (2016)

  2. Chua, T.S., Tang, J., Hong, R., Li, H., Luo, Z., Zheng, Y.: NUS-WIDE: a real-world web image database from National University of Singapore. In: Proceedings of the ACM International Conference on Image and Video Retrieval, pp. 1–9 (2009)

  3. Chung, Y.A., Weng, W.H., Tong, S., Glass, J.: Unsupervised cross-modal alignment of speech and text embedding spaces. In: Advances in Neural Information Processing Systems 31 (2018)

  4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  6. Huang, H., et al.: Unicoder: a universal language encoder by pre-training with multiple cross-lingual tasks. arXiv preprint arXiv:1909.00964 (2019)

  7. Iki, T., Aizawa, A.: Effect of visual extensions on natural language understanding in vision-and-language models. arXiv preprint arXiv:2104.08066 (2021)

  8. Jiang, S., Min, W., Liu, L., Luo, Z.: Multi-scale multi-view deep feature aggregation for food recognition. IEEE Trans. Image Process. 29, 265–276 (2019)

  9. Kim, W., Son, B., Kim, I.: ViLT: vision-and-language transformer without convolution or region supervision. In: International Conference on Machine Learning, pp. 5583–5594. PMLR (2021)

  10. Li, L.H., Yatskar, M., Yin, D., Hsieh, C.J., Chang, K.W.: VisualBERT: a simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557 (2019)

  11. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)

  12. Lu, J., Batra, D., Parikh, D., Lee, S.: ViLBERT: pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In: Advances in Neural Information Processing Systems 32 (2019)

  13. Martinel, N., Foresti, G.L., Micheloni, C.: Wide-slice residual networks for food recognition. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 567–576. IEEE (2018)

  14. Meng, L., et al.: Learning using privileged information for food recognition. In: Proceedings of the 27th ACM International Conference on Multimedia, pp. 557–565 (2019)

  15. Qi, D., Su, L., Song, J., Cui, E., Bharti, T., Sacheti, A.: ImageBERT: cross-modal pre-training with large-scale weak-supervised image-text data. arXiv preprint arXiv:2001.07966 (2020)

  16. Sun, B., Saenko, K.: Deep CORAL: correlation alignment for deep domain adaptation. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9915, pp. 443–450. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49409-8_35

  17. Tan, H., Bansal, M.: LXMERT: learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490 (2019)

  18. Tang, J., Shu, X., Li, Z., Qi, G.J., Wang, J.: Generalized deep transfer networks for knowledge propagation in heterogeneous domains (2016)

  19. Tang, J., et al.: Tri-clustered tensor completion for social-aware image tag refinement. IEEE Trans. Pattern Anal. Mach. Intell. 39(8), 1662–1674 (2016)

  20. Tang, Z., Cho, J., Tan, H., Bansal, M.: VidLanKD: improving language understanding via video-distilled knowledge transfer. In: Advances in Neural Information Processing Systems 34 (2021)

  21. Wang, J., Wang, H., Deng, J., Wu, W., Zhang, D.: EfficientCLIP: efficient cross-modal pre-training by ensemble confident learning and language modeling. arXiv preprint arXiv:2109.04699 (2021)

  22. Yu, F., et al.: ERNIE-ViL: knowledge enhanced vision-language representations through scene graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 3208–3216 (2021)

  23. Zagoruyko, S., Komodakis, N.: Wide residual networks. arXiv preprint arXiv:1605.07146 (2016)

Acknowledgments

This work is supported in part by the Excellent Youth Scholars Program of Shandong Province (Grant no. 2022HWYQ-048) and the Oversea Innovation Team Project of the “20 Regulations for New Universities” funding program of Jinan (Grant no. 2021GXRC073).

Author information

Correspondence to Lei Meng.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, J. et al. (2022). Prompt Learning with Cross-Modal Feature Alignment for Visual Domain Adaptation. In: Fang, L., Povey, D., Zhai, G., Mei, T., Wang, R. (eds) Artificial Intelligence. CICAI 2022. Lecture Notes in Computer Science, vol. 13604. Springer, Cham. https://doi.org/10.1007/978-3-031-20497-5_34

  • DOI: https://doi.org/10.1007/978-3-031-20497-5_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20496-8

  • Online ISBN: 978-3-031-20497-5

  • eBook Packages: Computer Science, Computer Science (R0)
