ABSTRACT
Deep learning has been widely applied across many tasks. In real-world scenarios, however, collecting diverse labeled datasets is time-consuming and difficult, so most models are trained on simulated data and degrade when deployed in the real world. Unsupervised domain adaptation, a branch of transfer learning, uses abundant labeled source-domain data to improve a model's performance on a target domain with few or no labels through knowledge transfer. Most previous work, however, neglects category information when aligning the source and target distributions, which leads to negative transfer. To address this problem, we propose the Class-Level Adaptation Network (CLAN), which optimizes a novel metric that draws the class centers of the source and target domains close together. Specifically, the source-domain class centers are computed from the ground-truth labels of source samples, while the target-domain class centers, for which no labels are available, are computed from high-confidence pseudo labels of target samples. Technically, CLAN matches each target sample to its nearest source-domain class center and assigns a pseudo label only when the prediction confidence exceeds a threshold. Extensive experiments show that the combination of these two strategies achieves state-of-the-art performance on the Office-31 and digit domain adaptation benchmarks.
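The mechanism described above reduces to three steps: computing per-class feature centers, pseudo labeling target samples by nearest-center matching under a confidence threshold, and penalizing the distance between corresponding source and target centers. The PyTorch sketch below illustrates that idea only; the function names, the Euclidean distance, the threshold `tau`, and the use of softmax confidence as the filtering criterion are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def class_centers(features, labels, num_classes):
    """Mean feature vector per class. Source centers use ground-truth labels;
    target centers use the high-confidence pseudo labels kept below."""
    centers = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = features[mask].mean(dim=0)
    return centers

def assign_pseudo_labels(target_features, source_centers, logits, tau=0.9):
    """Match each target sample to its nearest source-class center, keeping
    only samples whose softmax confidence exceeds tau (assumed criterion)."""
    conf, _ = F.softmax(logits, dim=1).max(dim=1)
    dists = torch.cdist(target_features, source_centers)  # (N, C) Euclidean
    pseudo = dists.argmin(dim=1)
    keep = conf >= tau
    return pseudo[keep], keep

def center_alignment_loss(source_centers, target_centers):
    """Class-level alignment: pull each pair of class centers together."""
    return ((source_centers - target_centers) ** 2).sum(dim=1).mean()
```

In this reading, the threshold trades pseudo-label coverage against noise: a higher `tau` keeps fewer but cleaner target samples for estimating the target centers.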