Exploring Adversarially Robust Training for Unsupervised Domain Adaptation

  • Conference paper

Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13846)

Abstract

Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain. UDA has been extensively studied in the computer vision literature, and deep networks are known to be vulnerable to adversarial attacks. However, little attention has been devoted to improving the adversarial robustness of deep UDA models, raising serious concerns about model reliability. Adversarial Training (AT) is widely considered the most successful adversarial defense approach. Nevertheless, conventional AT requires ground-truth labels to generate adversarial examples and train models, which limits its effectiveness in the unlabeled target domain. In this paper, we explore AT as a means to robustify UDA models: how can we enhance the robustness of unlabeled data via AT while learning domain-invariant features for UDA? To answer this question, we provide a systematic study of multiple AT variants that can potentially be applied to UDA. Building on this study, we propose a novel Adversarially Robust Training method for UDA, referred to as ARTUDA. Extensive experiments on multiple adversarial attacks and UDA benchmarks show that ARTUDA consistently improves the adversarial robustness of UDA models. Code is available at https://github.com/shaoyuanlo/ARTUDA.
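For context on why labels are the sticking point: standard adversarial training crafts each adversarial example by maximizing the classification loss with respect to the ground-truth label, typically via projected gradient descent (PGD), and that label does not exist for target-domain data. The PyTorch sketch below illustrates one common label-free workaround, using the model's own predictions on clean inputs as fixed pseudo-labels for PGD. The function name, hyperparameter values, and pseudo-labeling choice are illustrative assumptions on our part, not the ARTUDA recipe, which is defined in the full paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack_pseudo(model, x, eps=8/255, alpha=2/255, steps=10):
    """Craft L-inf PGD adversarial examples without ground-truth labels.

    Hypothetical sketch: predictions on the clean batch serve as fixed
    pseudo-labels, one simple way to run AT on unlabeled target data.
    This is not necessarily the scheme used by ARTUDA.
    """
    was_training = model.training
    model.eval()
    with torch.no_grad():
        pseudo = model(x).argmax(dim=1)  # pseudo-labels from clean inputs

    # Random start inside the eps-ball, then iterated sign-gradient ascent.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), pseudo)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()

    if was_training:
        model.train()
    return x_adv
```

A robust UDA objective would then combine a supervised AT loss on source images (which do have labels) with a loss of this kind on target images, alongside the usual domain-alignment term.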

Acknowledgements

This work was supported by the DARPA GARD Program HR001119S0026-GARD-FP-052.

Author information

Corresponding author

Correspondence to Shao-Yuan Lo.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 200 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lo, SY., Patel, V.M. (2023). Exploring Adversarially Robust Training for Unsupervised Domain Adaptation. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13846. Springer, Cham. https://doi.org/10.1007/978-3-031-26351-4_34

  • DOI: https://doi.org/10.1007/978-3-031-26351-4_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26350-7

  • Online ISBN: 978-3-031-26351-4

  • eBook Packages: Computer Science, Computer Science (R0)
