
GSA4FDA: Deep Geometric and Statistic Alignment for Fewer Labeled Domain Adaptation

Published in: Neural Processing Letters

Abstract

Domain adaptation (DA) improves the generalization ability of models across source and target domains with different distributions. Current methods aim to reduce domain distribution divergence in order to learn transferable features. In most real cases, however, the number of available labeled samples is limited, and annotating labels costs significant time and labor, which makes it difficult to achieve high accuracy. To address this problem, we propose a novel method called Deep Geometric and Statistic Alignment for Fewer Labeled Domain Adaptation (GSA4FDA). The method accomplishes the target task with fewer labeled source samples by combining manifold learning with the local geometric structure of the abundant unlabeled source samples. For domain alignment, we employ a joint geometric-statistical alignment and embed it into a specific layer of a deep convolutional neural network (CNN) to obtain high-level semantic information, exploiting the complementary nature of the two aspects. Specifically, we use the Nyström method and Maximum Mean Discrepancy (MMD) to compensate for the geometrical and statistical shifts between domains. Experimental results on several datasets demonstrate the superiority of our method, particularly when only limited labeled source samples are available.
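The abstract couples two alignment terms: MMD for the statistical shift and a Nyström kernel approximation for the geometric structure, both applied to features from a specific CNN layer. The snippet below is a minimal, hypothetical sketch of how such terms could be computed on a batch of layer features; the Gaussian kernel, bandwidth, landmark count, and the way the two terms are weighted are illustrative assumptions, not the paper's exact objective.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(src_feat, tgt_feat, sigma=1.0):
    # Squared MMD between source and target features (biased estimator).
    k_ss = gaussian_kernel(src_feat, src_feat, sigma).mean()
    k_tt = gaussian_kernel(tgt_feat, tgt_feat, sigma).mean()
    k_st = gaussian_kernel(src_feat, tgt_feat, sigma).mean()
    return k_ss + k_tt - 2 * k_st

def nystrom_features(feat, num_landmarks=64, sigma=1.0):
    # Low-rank kernel feature map via the Nystrom method:
    # pick landmark points, then map every sample through K_nm @ K_mm^{-1/2}.
    idx = torch.randperm(feat.size(0))[:num_landmarks]
    landmarks = feat[idx]
    k_nm = gaussian_kernel(feat, landmarks, sigma)       # (n, m)
    k_mm = gaussian_kernel(landmarks, landmarks, sigma)  # (m, m)
    evals, evecs = torch.linalg.eigh(k_mm)
    k_mm_inv_sqrt = evecs @ torch.diag(evals.clamp_min(1e-6).rsqrt()) @ evecs.T
    return k_nm @ k_mm_inv_sqrt                          # (n, m) approximate features

# Toy usage: stand-in features from one CNN layer for a source and a target batch.
src = torch.randn(32, 256)
tgt = torch.randn(32, 256)

stat_term = mmd_loss(src, tgt)                        # statistical alignment (MMD)
phi = nystrom_features(torch.cat([src, tgt], dim=0))  # shared Nystrom feature space
geo_term = ((phi[:32].mean(0) - phi[32:].mean(0)) ** 2).sum()
alignment_loss = stat_term + 0.1 * geo_term           # weight 0.1 is a placeholder
```

In the setting described by the abstract, an alignment loss of this kind would be added to the supervised loss on the few labeled source samples while fine-tuning the network, so that the shared layer learns features that are both class-discriminative and domain-aligned.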



Acknowledgements

This work was supported in part by the Qingdao Natural Science Foundation (Grant No. 23-2-1-161-zyyd-jch) and the Shandong Natural Science Foundation (Grant No. ZR2023MF008).

Author information


Corresponding author

Correspondence to Weifeng Liu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Cai, Y., Liu, B., Yang, X. et al. GSA4FDA: Deep Geometric and Statistic Alignment for Fewer Labeled Domain Adaptation. Neural Process Lett 55, 11333–11351 (2023). https://doi.org/10.1007/s11063-023-11378-y

