Semantic-Aware Adaptive Prompt Learning for Universal Multi-Source Domain Adaptation


Abstract:

Universal multi-source domain adaptation (UniMDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain without constraints on the label space. Due to its inherent domain shift (different data distributions) and class shift (unknown target classes), UniMDA is an extremely challenging task. Existing solutions, however, mainly focus on mining image features to detect unknown samples, ignoring the rich information contained in textual semantics. In this letter, we propose a Semantic-aware Adaptive Prompt Learning method based on Contrastive Language Image Pretraining (SAP-CLIP) for UniMDA classification tasks. Concretely, we use CLIP with learnable prompts to leverage textual information of both class semantics and domain representations, helping the model detect unknown samples and tackle domain shift. In addition, we propose a novel margin loss with a dynamic scoring function to enlarge the margin between known and unknown sample sets, enabling more precise classification. Experimental results on three benchmarks confirm the state-of-the-art performance of our method.
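To make the two components described above more concrete, the following minimal PyTorch sketch illustrates the general idea of CoOp-style learnable prompt context vectors combined with a hinge-style margin between known and unknown score sets. All names (LearnablePrompt, margin_loss) and the fixed margin value are illustrative assumptions for exposition only, not the authors' SAP-CLIP implementation or its dynamic scoring function.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnablePrompt(nn.Module):
        """Learnable context vectors prepended to frozen class-name token embeddings
        (CoOp-style); the resulting prompt embeddings are fed to the CLIP text encoder."""
        def __init__(self, n_ctx, embed_dim, class_embeds):
            super().__init__()
            # Shared trainable context tokens: (n_ctx, embed_dim).
            self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)
            # Frozen class-name token embeddings: (n_classes, n_name_tokens, embed_dim).
            self.register_buffer("class_embeds", class_embeds)

        def forward(self):
            n_cls = self.class_embeds.size(0)
            ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
            # Prompt = [learnable context tokens] + [class-name tokens].
            return torch.cat([ctx, self.class_embeds], dim=1)

    def margin_loss(known_scores, unknown_scores, margin=0.5):
        """Generic hinge margin that pushes the mean score of known samples above
        the mean score of unknown samples by at least `margin`; a stand-in for
        the paper's margin loss with a dynamic scoring function."""
        return F.relu(margin - (known_scores.mean() - unknown_scores.mean()))

In practice, the prompt embeddings produced by LearnablePrompt would be encoded by the (frozen) CLIP text encoder and matched against image features, while margin_loss would be applied to the confidence scores used to separate known from unknown target samples.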
Published in: IEEE Signal Processing Letters ( Volume: 31)
Page(s): 1444 - 1448
Date of Publication: 16 April 2024

