DOI: 10.1145/3534678.3539235 · KDD Conference Proceedings · research-article

Domain Adaptation with Dynamic Open-Set Targets

Published: 14 August 2022

ABSTRACT

Open-set domain adaptation aims to improve the generalization performance of a learning algorithm on a target task of interest by leveraging label information from a relevant source task that covers only a subset of the target classes. However, most existing works are designed for the static setting and can hardly be extended to the dynamic setting common in many real-world applications. In this paper, we focus on a more realistic open-set domain adaptation setting with a static source task and a time-evolving target task, where novel unknown target classes appear over time. Specifically, we show that the classification error of the new target task can be tightly bounded in terms of the positive-unlabeled classification errors of historical tasks and the open-set domain discrepancy across tasks. By empirically minimizing this upper bound on the target error, we propose a novel positive-unlabeled learning based algorithm, named OuterAdapter, for dynamic open-set domain adaptation with time-evolving unknown classes. Extensive experiments on various datasets demonstrate the effectiveness and efficiency of our proposed OuterAdapter algorithm over state-of-the-art domain adaptation baselines.
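The error bound above is stated in terms of positive-unlabeled (PU) classification errors. As background, a minimal sketch of the non-negative PU risk estimator of Kiryo et al. (2017) is given below; this is not the paper's OuterAdapter algorithm, and the function name and the choice of a hinge-style surrogate loss are illustrative assumptions:

```python
import numpy as np

def nn_pu_risk(scores_pos, scores_unl, prior,
               loss=lambda z: np.maximum(0.0, 1.0 - z)):
    """Non-negative PU risk estimator (Kiryo et al., 2017).

    scores_pos: classifier scores g(x) on labeled positive examples
    scores_unl: classifier scores g(x) on unlabeled examples
    prior:      assumed positive class prior pi_p
    loss:       surrogate loss ell(z); hinge-style by default
    """
    risk_pos = loss(scores_pos).mean()       # R_p^+(g): positives scored as positive
    risk_pos_neg = loss(-scores_pos).mean()  # R_p^-(g): positives scored as negative
    risk_unl_neg = loss(-scores_unl).mean()  # R_u^-(g): unlabeled scored as negative
    # The negative-class risk estimate is clamped at zero, which is the
    # "non-negative" correction that prevents the estimator from going
    # negative and the model from overfitting.
    return prior * risk_pos + max(0.0, risk_unl_neg - prior * risk_pos_neg)
```

For example, with confidently separated scores (positives scored well above the margin, unlabeled scored well below), the estimated risk is zero; minimizing this quantity over a classifier trained on positive and unlabeled data is what a PU classification error in the bound refers to.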


Supplemental Material

KDD22-fp0155.mp4 (video, 31.5 MB)


Published in

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022, 5033 pages
ISBN: 9781450393850
DOI: 10.1145/3534678
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions, 13%
