DOI: 10.1145/3459637.3482238

Research Article

CANN: Coupled Approximation Neural Network for Partial Domain Adaptation

Published: 30 October 2021

ABSTRACT

Unsupervised domain adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA methods learn domain-invariant features so that a classifier trained on the source labels can be applied directly to the target domain. However, recent work has shown the limitations of this approach when the label distributions of the source and target domains differ. In particular, in partial domain adaptation (PDA), where the source domain contains many labels (private labels) that do not appear in the target domain, domain-invariant features can cause catastrophic performance degradation. In this paper, building on the inherently favorable underlying structures of the two domains, we learn two kinds of target features, namely source-approximate features and target-approximate features, instead of domain-invariant features. The source-approximate features exploit the consistency between the two domains to estimate the distribution of the source private labels. The target-approximate features enhance feature discrimination in the target domain while detecting hard (outlier) target samples. We propose a novel Coupled Approximation Neural Network (CANN) that co-trains the source-approximate and target-approximate features with two parallel sub-networks that share no parameters. We evaluate CANN on three prevalent transfer learning benchmark datasets, Office-Home, Office-31, and VisDA-2017, under both UDA and PDA settings. The results show that CANN outperforms all baselines by a large margin in PDA and also performs best in UDA.
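The coupled two-branch design described in the abstract can be sketched as follows. This is purely an illustrative toy, not the paper's implementation: the linear extractors, dimensions, and variable names are all hypothetical stand-ins for the actual deep sub-networks, and it shows only the structural point that the two branches process the same target samples with fully independent parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureExtractor:
    """Toy linear feature extractor; a stand-in for one CANN sub-network."""
    def __init__(self, in_dim, out_dim, rng):
        # Each instance owns its own weights; nothing is shared between branches.
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.01

    def __call__(self, x):
        return x @ self.W

# Two parallel sub-networks with no parameter sharing (hypothetical dims 16 -> 8).
src_branch = FeatureExtractor(16, 8, rng)  # produces source-approximate features
tgt_branch = FeatureExtractor(16, 8, rng)  # produces target-approximate features

x_target = rng.standard_normal((4, 16))    # a toy batch of 4 target samples
f_src = src_branch(x_target)               # source-approximate features
f_tgt = tgt_branch(x_target)               # target-approximate features

assert src_branch.W is not tgt_branch.W    # the branches hold separate parameters
print(f_src.shape, f_tgt.shape)            # (4, 8) (4, 8)
```

In the paper's setting, the source-approximate branch would additionally be trained against the source labels to estimate the private-label distribution, while the target-approximate branch would be trained to sharpen target-domain discrimination; here both are frozen random maps for brevity.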

Supplemental Material

CIKM-cann.mp4 (mp4, 282 MB)


Published in

CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
October 2021, 4966 pages
ISBN: 9781450384469
DOI: 10.1145/3459637

      Copyright © 2021 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rate

CIKM overall: 1,861 of 8,427 submissions, 22%
