Abstract
Universal Domain Adaptation (UDA) aims to transfer knowledge between two datasets. The main challenge is to distinguish “unknown” classes that exist in the unlabeled target domain but not in the labeled source domain. Existing methods often suffer from weak feature representations and limited prediction diversity, and they cannot effectively separate the common label set from the label sets private to each domain. In this paper, we propose TANet, which extracts features with a Tokens Transformer, automatically learns the classification boundaries between classes by training a one-vs-all classifier for each class, and employs a batch nuclear-norm maximization loss to ensure the discriminability of the model and the diversity of its predictions. Moreover, by combining adversarial and non-adversarial domain discriminators in the Tokens Transformer, TANet distinguishes source and target data within the common label set. Extensive experimental results show that TANet outperforms competing methods and is robust.
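The batch nuclear-norm maximization loss mentioned in the abstract (introduced by Cui et al., CVPR 2020) encourages both confident and diverse predictions by maximizing the nuclear norm of the batch prediction matrix. The sketch below is an illustrative NumPy version, not the authors' code; the function name `bnm_loss` and the batch-size normalization are our own assumptions.

```python
import numpy as np

def bnm_loss(logits: np.ndarray) -> float:
    """Batch nuclear-norm maximization (BNM) loss.

    logits: array of shape (B, C) with unnormalized class scores for a
    batch of B samples. Returns the negative nuclear norm of the softmax
    prediction matrix, scaled by 1/B, so that minimizing this loss
    maximizes the nuclear norm (discriminability + diversity).
    """
    # Row-wise softmax with a max-shift for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Nuclear norm = sum of singular values of the (B, C) prediction matrix.
    nuclear = np.linalg.svd(p, compute_uv=False).sum()
    return -nuclear / logits.shape[0]
```

A confident, class-diverse batch (near-identity prediction matrix) yields a lower loss than a collapsed batch where every sample is assigned to the same class, which is exactly the behavior the loss is designed to reward.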
Acknowledgement
This project was supported by the Natural Science Foundation of Guangdong Province of China (2022A1515010269).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wu, H., Feng, Z., Zhang, Q., Wu, J., Lai, J. (2023). TANet: Adversarial Network via Tokens Transformer for Universal Domain Adaptation. In: Lu, H., et al. Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol 14355. Springer, Cham. https://doi.org/10.1007/978-3-031-46305-1_15
Print ISBN: 978-3-031-46304-4
Online ISBN: 978-3-031-46305-1