
A Self-Supervised-Driven Open-Set Unsupervised Domain Adaptation Method for Optical Remote Sensing Image Scene Classification and Retrieval



Abstract:

Unsupervised domain adaptation (UDA) is an important technique for reducing the bias between a labeled source domain and an unlabeled target domain, and it has attracted growing attention in optical remote sensing image scene classification and retrieval. Most previous work is devoted to closed-set UDA, yet in practice the target domain often contains unknown classes. Moreover, existing open-set UDA methods mine the structural information of the target domain mainly from the class knowledge of the source domain and rarely directly from the unlabeled target-domain data. In this article, we propose a new self-supervised-driven open-set UDA (SSOUDA) method that combines contrastive self-supervised learning with consistency self-training (CST) for optical remote sensing scene classification and retrieval. Specifically, a contrastive self-supervised learning network is introduced to learn discriminative features from the unlabeled target-domain data. Moreover, a novel open-set class learning module is developed based on two-level confidence rules and the consistency self-training strategy, which obtains reliable unknown-class samples for co-training. Finally, an open-set dataset covering six cross-domain scenarios is constructed from three public datasets, and experiments are conducted against 11 state-of-the-art domain adaptation methods. The results demonstrate that the proposed method achieves superior performance on all six open-set cross-domain scenarios in both scene classification and retrieval. In particular, on the complex University of California Merced land use Dataset (UCMD, source domain) → Northwestern Polytechnical University (NWPU, target domain) scenario, our method improves the overall classification accuracy by 9.72%–24.06% and the mean average retrieval precision by 8.06%–16.21% compared with the 11 state-of-the-art methods. Our code is available at https://github.com/GeoRSAI/SSOUDA.
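The two ingredients named in the abstract can be sketched in minimal form. The snippet below is an illustrative NumPy sketch, not the authors' implementation (see the linked repository for that): a SimCLR-style NT-Xent contrastive loss over two augmented views of unlabeled target images, and a hypothetical two-threshold confidence rule (the `low`/`high` values are assumptions, not the paper's two-level rules) that separates reliable unknown-class candidates from reliable known-class pseudo-labels for self-training.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss.
    z1, z2: (N, D) L2-normalized embeddings of two views of the same N images."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2N, D)
    sim = z @ z.T / temperature                 # cosine similarities (inputs normalized)
    sim[np.eye(2 * n, dtype=bool)] = -np.inf    # exclude self-similarity
    # positive pair of row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float((logsumexp - sim[np.arange(2 * n), pos]).mean())

def split_by_confidence(probs, low=0.3, high=0.9):
    """Illustrative confidence split over known-class softmax outputs.
    Samples whose top known-class probability falls below `low` are taken as
    reliable unknown-class candidates; above `high`, as reliable known-class
    pseudo-labels; the rest stay unlabeled for further self-training rounds."""
    top = probs.max(axis=1)
    return top < low, top > high
```

A usage note: in a training loop, `nt_xent_loss` would be minimized on target-domain batches to learn discriminative features, while `split_by_confidence` would feed the selected samples into the co-training step.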
Article Sequence Number: 5605515
Date of Publication: 23 March 2023

