
Self-corrected unsupervised domain adaptation

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

Unsupervised domain adaptation (UDA), which aims to leverage knowledge from a label-rich source domain to learn an unlabeled target domain, has recently attracted much attention. Existing UDA methods mainly concentrate on source classification and cross-domain distribution alignment, in the hope that correct target predictions follow. In this paper, we instead attempt to learn the target predictions directly in an end-to-end manner, and develop a Self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and corrected by an anchor variable, which preserves the class candidates for each sample. Experiments on real-world datasets show the competitiveness of SCUDA.
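The abstract's core mechanism — treating target pseudo-labels as learnable quantities that are blended toward the model's predictions while an anchor restricts each sample to its retained class candidates — can be sketched as follows. This is an illustrative sketch only, not the authors' algorithm: the function name, the linear blend rule, and the parameters `lr` and `anchor_topk` are all assumptions made for the example.

```python
def correct_pseudo_labels(soft_labels, model_probs, anchor_topk=3, lr=0.5):
    """One hypothetical correction step for soft target pseudo-labels.

    soft_labels: list of per-sample class distributions (current pseudo-labels)
    model_probs: list of per-sample class distributions (model predictions)
    anchor_topk: number of class candidates the anchor preserves per sample
    lr:          blend rate pulling pseudo-labels toward the predictions
    """
    corrected = []
    for current, pred in zip(soft_labels, model_probs):
        # Update the pseudo-label by blending it toward the model's prediction.
        blended = [(1 - lr) * c + lr * p for c, p in zip(current, pred)]
        # Anchor: keep only the top-k candidate classes for this sample.
        keep = set(sorted(range(len(blended)), key=blended.__getitem__)[-anchor_topk:])
        masked = [b if i in keep else 0.0 for i, b in enumerate(blended)]
        # Renormalize so the corrected pseudo-label is a valid distribution.
        total = sum(masked)
        corrected.append([m / total for m in masked])
    return corrected
```

For instance, a uniform pseudo-label over five classes, blended with a prediction that favors class 0, ends up concentrated on class 0 with mass only on the three anchored candidates.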



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61876091, 61772284), the China Postdoctoral Science Foundation (2019M651918), and the Open Foundation of MIIT Key Laboratory of Pattern Analysis and Machine Intelligence.

Author information


Corresponding author

Correspondence to Yunyun Wang.

Additional information

Yunyun Wang received the PhD degree in Computer Science and Technology from Nanjing University of Aeronautics and Astronautics, China in 2012. She is currently with the School of Computer Science and Technology in Nanjing University of Posts and Telecommunications, China. Her current research interests include pattern recognition, machine learning, and neural computing.

Chao Wang received a BS degree from Nanjing University of Technology, China in 2018. He is a Master’s student in Computer Science and Technology at Nanjing University of Posts and Telecommunications, China. His current research interests include transfer learning, machine learning, and neural computing.

Hui Xue received her MS degree in mathematics from Nanjing University of Aeronautics & Astronautics (NUAA), China in 2005, and her PhD degree in computer application technology from NUAA in 2008. Since 2009, she has been with the School of Computer Science & Engineering at Southeast University, China. Her research interests include pattern recognition, machine learning, and neural computing.

Songcan Chen received his BS degree in mathematics from Hangzhou University (now merged into Zhejiang University), China in 1983. In 1985, he completed his MS degree in computer applications at Shanghai Jiaotong University, and then joined NUAA in January 1986, where he received a PhD degree in communication and information systems in 1997. Since 1998, he has been a full professor with the College of Computer Science & Technology at NUAA, China. His research interests include pattern recognition, machine learning, and neural computing. He is also an IAPR Fellow.

Electronic Supplementary Material


About this article


Cite this article

Wang, Y., Wang, C., Xue, H. et al. Self-corrected unsupervised domain adaptation. Front. Comput. Sci. 16, 165323 (2022). https://doi.org/10.1007/s11704-021-1010-8

