ABSTRACT
Unsupervised domain adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA methods try to learn domain-invariant features so that a classifier trained with source labels can be applied directly to the target domain. However, recent work has shown the limitations of these methods when the label distributions of the source and target domains differ. In particular, in partial domain adaptation (PDA), where the source domain contains many private labels that do not appear in the target domain, domain-invariant features can cause catastrophic performance degradation. In this paper, building on the inherently favorable underlying structures of the two domains, we learn two kinds of target features, i.e., source-approximate features and target-approximate features, instead of domain-invariant features. The source-approximate features exploit the consistency between the two domains to estimate the distribution of the source private labels. The target-approximate features enhance feature discrimination in the target domain while detecting hard (outlier) target samples. We propose a novel Coupled Approximation Neural Network (CANN) that co-trains the source-approximate and target-approximate features with two parallel sub-networks that share no parameters. We evaluate CANN on three widely used transfer learning benchmarks, Office-Home, Office-31, and VisDA-2017, under both UDA and PDA settings. The results show that CANN outperforms all baselines by a large margin in PDA and also performs best in UDA.
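The coupled two-branch idea above can be illustrated with a minimal NumPy sketch: two feature extractors with independent (unshared) parameters process the same unlabeled target batch, one producing source-approximate features and the other target-approximate features. The function and variable names (`make_branch`, `forward`, etc.) are illustrative assumptions, not the paper's actual implementation, which uses deep sub-networks and co-training losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_branch(in_dim, feat_dim):
    # Each branch owns its weights: no parameter sharing between the two.
    W = rng.normal(scale=0.1, size=(in_dim, feat_dim))
    b = np.zeros(feat_dim)
    return W, b

def forward(branch, x):
    # A one-layer ReLU feature extractor standing in for a deep sub-network.
    W, b = branch
    return np.maximum(x @ W + b, 0.0)

in_dim, feat_dim = 8, 4
src_branch = make_branch(in_dim, feat_dim)  # source-approximate sub-network
tgt_branch = make_branch(in_dim, feat_dim)  # target-approximate sub-network

x_target = rng.normal(size=(5, in_dim))     # a batch of unlabeled target samples
f_src = forward(src_branch, x_target)       # features aligned toward the source
f_tgt = forward(tgt_branch, x_target)       # features refined within the target

print(f_src.shape, f_tgt.shape)  # (5, 4) (5, 4)
```

In the full method each branch would be trained with its own objective (source-label consistency for one, target discrimination and outlier detection for the other); the sketch only shows the parallel, parameter-independent structure.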