Research Article
DOI: 10.1145/3365109.3368783

Class-Level Adaptation Network with Self Training for Unsupervised Domain Adaptation

Published: 2 December 2019

ABSTRACT

Deep learning has been widely applied across a variety of tasks. In real-world scenarios, however, obtaining labeled datasets is time-consuming and difficult; most models are trained on simulated data and degrade when deployed in the real world. Unsupervised domain adaptation, a branch of transfer learning, uses a large amount of labeled source-domain data to improve, via knowledge transfer, the performance of a model on a target domain whose labels are limited or missing. However, most previous work neglects category information when aligning the source and target distributions, which gives rise to negative transfer. To address this problem, we propose the class-level adaptation network (CLAN), which optimizes a novel metric that draws each class center in the source domain close to the corresponding center in the target domain. Specifically, the class centers of the source domain are computed from the labels of the source samples, while the class centers of the unlabeled target domain are computed from high-confidence pseudo labels of the target samples. Technically, CLAN matches each target sample to its nearest source-domain center and assigns the sample a pseudo label only when its confidence exceeds a threshold. Extensive experiments show that combining these two strategies achieves state-of-the-art performance on the Office-31 and digit domain adaptation benchmarks.
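The paper itself provides no code; the sketch below is a minimal, hypothetical PyTorch rendering of the class-center mechanism the abstract describes. The function names, the softmax-over-negative-distance confidence score, and the squared-distance center loss are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def class_centers(features, labels, num_classes):
    """Mean feature vector per class; classes absent from the batch keep a zero center."""
    centers = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = features[mask].mean(dim=0)
    return centers


def confident_pseudo_labels(target_features, source_centers, threshold):
    """Label each target sample with its nearest source center; keep only
    assignments whose confidence beats the threshold. The softmax over
    negative distances is an illustrative confidence score, assumed here."""
    dists = torch.cdist(target_features, source_centers)  # (N, C) pairwise distances
    conf, labels = F.softmax(-dists, dim=1).max(dim=1)    # nearest center per sample
    keep = conf > threshold
    return labels[keep], keep


def center_alignment_loss(source_centers, target_centers):
    """Mean squared distance between corresponding class centers."""
    return F.mse_loss(target_centers, source_centers)
```

A toy run, with random features standing in for the output of a shared backbone:

```python
num_classes, dim = 5, 64
src_feat = torch.randn(128, dim)
src_lbl = torch.randint(0, num_classes, (128,))
tgt_feat = torch.randn(128, dim)

src_centers = class_centers(src_feat, src_lbl, num_classes)
pseudo, keep = confident_pseudo_labels(tgt_feat, src_centers, threshold=0.25)
tgt_centers = class_centers(tgt_feat[keep], pseudo, num_classes)
loss = center_alignment_loss(src_centers, tgt_centers)
```

In this reading, minimizing the loss pulls the per-class centers of the two domains together, while the threshold limits self-training to target samples the model is already confident about.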

References

  1. Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. 2017. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39, 12 (2017), 2481--2495.Google ScholarGoogle Scholar
  2. Zhihong Chen, Chao Chen, Zhaowei Cheng, Ke Fang, and Xinyu Jin. 2019. Selective Transfer with Reinforced Transfer Network for Partial Domain Adaptation. arXiv preprint arXiv:1905.10756 (2019).Google ScholarGoogle Scholar
  3. Zhihong Chen, Chao Chen, Xinyu Jin, Yifu Liu, and Zhaowei Cheng. 2019. Deep joint two-stream Wasserstein auto-encoder and selective attention alignment for unsupervised domain adaptation. Neural Computing and Applications (2019).Google ScholarGoogle Scholar
  4. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning. 647--655.Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17, 1 (2016), 2096--2030.Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. Leon A Gatys, Alexander S Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2414--2423.Google ScholarGoogle ScholarCross RefCross Ref
  7. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770--778.Google ScholarGoogle ScholarCross RefCross Ref
  8. Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A Efros, and Trevor Darrell. 2017. Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213 (2017).Google ScholarGoogle Scholar
  9. Jonathan J. Hull. 1994. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence 16, 5 (1994), 550--554.Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097--1105.Google ScholarGoogle Scholar
  11. Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradientbased learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278-- 2324.Google ScholarGoogle ScholarCross RefCross Ref
  12. Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, Vol. 3. 2.Google ScholarGoogle Scholar
  13. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. 2015. Learning transferable features with deep adaptation networks. arXiv preprint arXiv:1502.02791 (2015).Google ScholarGoogle Scholar
  14. Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S Yu. 2014. Transfer joint matching for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1410--1417.Google ScholarGoogle ScholarDigital LibraryDigital Library
  15. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. 2011. Reading digits in natural images with unsupervised feature learning. (2011).Google ScholarGoogle Scholar
  16. Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22, 10 (2009), 1345--1359.Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. 2009. Dataset shift in machine learning. The MIT Press.Google ScholarGoogle Scholar
  18. Marc'Aurelio Ranzato and Martin Szummer. 2008. Semi-supervised learning of compact document representations with deep networks. In Proceedings of the 25th international conference on Machine learning. ACM, 792--799.Google ScholarGoogle ScholarDigital LibraryDigital Library
  19. Baochen Sun and Kate Saenko. 2016. Deep coral: Correlation alignment for deep domain adaptation. In European Conference on Computer Vision. Springer, 443--450.Google ScholarGoogle ScholarCross RefCross Ref
  20. Isaac Triguero, Salvador García, and Francisco Herrera. 2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study. Knowledge and Information systems 42, 2 (2015), 245--284.Google ScholarGoogle Scholar
  21. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7167--7176.Google ScholarGoogle ScholarCross RefCross Ref
  22. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474 (2014).Google ScholarGoogle Scholar
  23. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics. 189--196.Google ScholarGoogle Scholar
  24. Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, and Susanne Saminger-Platz. 2017. Central moment discrepancy (cmd) for domaininvariant representation learning. arXiv preprint arXiv:1702.08811 (2017).Google ScholarGoogle Scholar
  25. Haijun Zhang, Yuzhu Ji, Wang Huang, and Linlin Liu. 2018. Sitcom-star-based clothing retrieval for video advertising: a deep learning framework. Neural computing and applications (2018), 1--20.Google ScholarGoogle Scholar

Published in

BDCAT '19: Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies
December 2019, 174 pages
ISBN: 9781450370165
DOI: 10.1145/3365109
Copyright © 2019 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 27 of 93 submissions, 29%