
TANet: Adversarial Network via Tokens Transformer for Universal Domain Adaptation

  • Conference paper
Image and Graphics (ICIG 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14355)


Abstract

Universal Domain Adaptation (UDA) aims to transfer knowledge between two datasets. The main challenge is to distinguish “unknown” classes, which exist in the unlabeled target domain but not in the labeled source domain. Existing methods often have weak feature representations and limited prediction diversity, and they cannot effectively separate the common label set from the label sets private to each domain. In this paper, we propose an algorithm named TANet, which extracts features with a Tokens Transformer and automatically learns the classification boundaries between classes by training a one-vs-all classifier for each class. TANet further adopts a batch nuclear-norm maximization loss to ensure both the discriminability of the model and the diversity of its predictions. Moreover, by employing adversarial and non-adversarial domain discriminators on the Tokens Transformer features, TANet can distinguish the source and target data in the common label set. Finally, extensive experimental results show that TANet outperforms competitors and is robust.
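The batch nuclear-norm maximization loss mentioned above follows Cui et al. (CVPR 2020): maximizing the nuclear norm (sum of singular values) of the batch prediction matrix encourages confident rows (discriminability) and high matrix rank (diversity). The following is a minimal NumPy sketch of that loss, not TANet's released code; the function names and toy batches are illustrative:

```python
import numpy as np

def softmax(logits):
    # Row-wise softmax over class logits, stabilized by subtracting the row max.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def bnm_loss(logits):
    """Batch nuclear-norm maximization loss (Cui et al., CVPR 2020).

    The nuclear norm of the (batch x classes) prediction matrix is the sum
    of its singular values; we negate it so that minimizing the loss
    maximizes the norm.
    """
    probs = softmax(logits)                                   # (batch, classes)
    nuclear_norm = np.linalg.svd(probs, compute_uv=False).sum()
    return -nuclear_norm / probs.shape[0]

# A confident, diverse batch (each sample predicts a different class) gets a
# lower loss than a collapsed batch that predicts one class for every sample.
diverse = np.eye(4) * 10.0                                    # 4 samples, 4 classes
collapsed = np.tile([10.0, 0.0, 0.0, 0.0], (4, 1))            # all predict class 0
assert bnm_loss(diverse) < bnm_loss(collapsed)
```

In a training loop this term would be added, with a weighting coefficient, to the classification and adversarial losses on target-domain batches.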


Notes

  1. http://ai.bu.edu/visda-2017/
  2. http://ai.bu.edu/visda-2021/


Acknowledgement

This project was supported by the Natural Science Foundation of Guangdong Province, China (2022A1515010269).

Author information


Corresponding author

Correspondence to Zhanxiang Feng.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, H., Feng, Z., Zhang, Q., Wu, J., Lai, J. (2023). TANet: Adversarial Network via Tokens Transformer for Universal Domain Adaptation. In: Lu, H., et al. Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol 14355. Springer, Cham. https://doi.org/10.1007/978-3-031-46305-1_15


  • DOI: https://doi.org/10.1007/978-3-031-46305-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46304-4

  • Online ISBN: 978-3-031-46305-1

  • eBook Packages: Computer Science, Computer Science (R0)
