Disentangled Image Generation for Unsupervised Domain Adaptation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12535)

Abstract

We explore the use of generative modeling in unsupervised domain adaptation (UDA), where annotated real images are available only in the source domain and pseudo images are generated in a manner that allows independent control of class (content) and nuisance variability (style). The proposed method differs from existing generative UDA models in that we explicitly disentangle the content and nuisance features at different layers of the generator network. We demonstrate the effectiveness of (pseudo-)conditional generation by showing that it improves upon baseline methods. Moreover, we outperform the previous state of the art by significant margins on recently introduced multi-source domain adaptation (MSDA) tasks, achieving error reduction rates of \(50.27\%\), \(89.54\%\), \(75.35\%\), \(27.46\%\), and \(94.3\%\) on the five tasks, respectively.
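The key architectural idea, conditioning different depths of the generator on different factors, can be made concrete with a short sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the network sizes, the embedding of the class label as a content seed, and the FiLM-style scale-and-shift modulation of the nuisance code are all assumptions, chosen only to show how injecting the class (content) code into early layers and the nuisance (style) code into later layers separates the two factors by construction.

```python
# Hypothetical sketch of layer-wise content/style disentanglement in a
# generator. Not the paper's released code; sizes and conditioning scheme
# are illustrative assumptions.
import torch
import torch.nn as nn


class DisentangledGenerator(nn.Module):
    def __init__(self, n_classes=10, style_dim=64, base_ch=128):
        super().__init__()
        # Content path: the class label alone seeds the early feature map,
        # so it determines *what* is drawn.
        self.class_embed = nn.Embedding(n_classes, base_ch * 4 * 4)
        # Early (content) blocks: 4x4 -> 8x8 -> 16x16.
        self.content_blocks = nn.Sequential(
            nn.ConvTranspose2d(base_ch, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, base_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Nuisance path: the style code enters only late, via per-channel
        # scale and shift (a FiLM/AdaIN-like modulation), so it can alter
        # appearance but not the class-driven layout.
        self.to_scale = nn.Linear(style_dim, base_ch)
        self.to_shift = nn.Linear(style_dim, base_ch)
        # Late (style) block: 16x16 -> 32x32 RGB image.
        self.style_block = nn.Sequential(
            nn.ConvTranspose2d(base_ch, base_ch // 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch // 2, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, labels, style):
        # labels: (B,) int64 class ids; style: (B, style_dim) nuisance code.
        b = labels.shape[0]
        h = self.class_embed(labels).view(b, -1, 4, 4)  # content seed
        h = self.content_blocks(h)                      # class-driven layout
        scale = self.to_scale(style).view(b, -1, 1, 1)
        shift = self.to_shift(style).view(b, -1, 1, 1)
        h = h * (1 + scale) + shift                     # style modulates appearance only
        return self.style_block(h)                      # (B, 3, 32, 32) image


g = DisentangledGenerator()
images = g(torch.tensor([3, 7]), torch.randn(2, 64))
print(images.shape)  # torch.Size([2, 3, 32, 32])
```

Under these assumptions, holding the class label fixed while resampling the style vector should change only nuisance factors such as texture and color, not the class identity; that independent control is what lets generated pseudo images serve as labeled training data when annotations exist only in the source domain.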



Author information


Corresponding author

Correspondence to Safa Cicek.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 2328 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Cicek, S., Xu, N., Wang, Z., Jin, H., Soatto, S. (2020). Disentangled Image Generation for Unsupervised Domain Adaptation. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12535. Springer, Cham. https://doi.org/10.1007/978-3-030-66415-2_44

  • DOI: https://doi.org/10.1007/978-3-030-66415-2_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66414-5

  • Online ISBN: 978-3-030-66415-2

  • eBook Packages: Computer Science, Computer Science (R0)
