Abstract
Recently, large-scale pre-trained vision-language models have demonstrated excellent performance on many downstream tasks. Prompt tuning is an efficient way to adapt such models to different downstream tasks: it freezes the parameters of the vision-language model and adjusts only the prompt parameters, exploiting the knowledge the model acquired during pre-training to solve downstream problems. However, the downstream task's loss and the vision-language model's original pre-training loss are not the same. For example, CLIP is trained with a contrastive learning loss, whereas downstream image classification uses the cross-entropy loss common to classification problems. Different losses guide the task differently, and the model's accuracy on its original task evolves differently during training than its accuracy on the downstream task. Choosing an appropriate loss function and a sound prompt tuning method therefore has a large impact on model performance. We propose a more efficient prompt tuning method for CLIP; experiments on 11 datasets demonstrate that it achieves better performance and faster convergence on downstream tasks.
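To make the setup concrete, below is a minimal PyTorch sketch of CoOp-style prompt tuning against a frozen CLIP-like model. The encoder stub, dimensions, and hyperparameters are illustrative assumptions, not the paper's method or OpenAI's CLIP API. Only the learnable context vectors are updated, and the downstream objective is the cross-entropy over cosine-similarity logits described in the abstract (CLIP's own pre-training instead used a symmetric contrastive loss over image-text pairs).

```python
# Minimal sketch of prompt tuning with a frozen CLIP-like backbone.
# The encoders here are stand-in stubs, not OpenAI's CLIP; only the
# soft-prompt context vectors receive gradient updates.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenTextEncoder(nn.Module):
    """Stand-in for CLIP's text transformer; all weights are frozen."""
    def __init__(self, dim=512):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, token_embeddings):  # (n_cls, seq_len, dim)
        # Pool over the sequence, then project, mimicking a text feature.
        return self.proj(token_embeddings.mean(dim=1))  # (n_cls, dim)

class PromptLearner(nn.Module):
    """Learnable context vectors shared across all classes (soft prompt)."""
    def __init__(self, n_cls, n_ctx=16, dim=512):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Frozen class-name embeddings; random here purely for the sketch.
        self.register_buffer("cls_emb", torch.randn(n_cls, 1, dim))

    def forward(self):
        ctx = self.ctx.unsqueeze(0).expand(self.cls_emb.size(0), -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)  # (n_cls, n_ctx+1, dim)

n_cls, dim = 10, 512
text_enc = FrozenTextEncoder(dim)
prompts = PromptLearner(n_cls, dim=dim)
opt = torch.optim.SGD(prompts.parameters(), lr=2e-3)  # prompts only

# One training step; image features would come from CLIP's frozen
# image encoder, replaced here by a random placeholder batch.
img_feat = F.normalize(torch.randn(8, dim), dim=-1)    # (batch, dim)
labels = torch.randint(0, n_cls, (8,))
txt_feat = F.normalize(text_enc(prompts()), dim=-1)    # (n_cls, dim)
logits = 100.0 * img_feat @ txt_feat.t()               # CLIP-style scaled cosine
loss = F.cross_entropy(logits, labels)                 # downstream objective
opt.zero_grad(); loss.backward(); opt.step()
```

In the real setting, `img_feat` would be produced by CLIP's frozen image encoder and `cls_emb` by embedding the tokenized class names; the point of the sketch is that the trainable state (the context vectors) is tiny compared to the frozen backbone, which is what makes prompt tuning efficient.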
B. Li, F. Li and Q. Fan — These authors contributed equally to this article and should be considered as co-first authors.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Li, B. et al. (2024). Efficient Prompt Tuning for Vision and Language Models. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1965. Springer, Singapore. https://doi.org/10.1007/978-981-99-8145-8_7
DOI: https://doi.org/10.1007/978-981-99-8145-8_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8144-1
Online ISBN: 978-981-99-8145-8