
Noise-residual Mixup for unsupervised adversarial domain adaptation

Published in: Applied Intelligence

Abstract

Unsupervised domain adaptation (UDA) methods based on deep adversarial learning have proven successful in many practical fields. Deep adversarial UDA methods promote knowledge transfer by learning domain-invariant features. However, these methods have two shortcomings: the inter-domain information in the shared latent space between domains is not fully exploited, and low-level feature information in deep neural networks is often lost after multiple convolutions and layer-by-layer training. We propose noise-residual mixup for unsupervised adversarial domain adaptation (NMADA) to address these problems in adversarial UDA. NMADA combines two strategies. The first is mixup linear interpolation; to our knowledge, this is the first time noise mixup has been incorporated into UDA. This strategy enriches cross-domain feature information, further exploits the inter-domain information in the shared latent space, and reduces domain shift. The second is a noise-residual module: by connecting different convolutional layers of the network and injecting noise, it makes full use of feature information at different levels and considers the intrinsic feature structure of each level for better domain adaptation. By jointly considering features from different levels and from both domains, our method makes better use of intra-domain content and inter-domain information. Compared with mainstream methods, NMADA exploits multi-level features and richer cross-domain information to improve model robustness and performance. Experiments on unsupervised domain adaptation benchmark datasets validate the effectiveness and superiority of our approach.
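The two strategies in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the Beta prior on the mixing coefficient, and the Gaussian noise model are illustrative assumptions inferred from the general mixup and residual-connection literature.

```python
import numpy as np

def mixup(x_src, x_tgt, alpha=1.0, rng=None):
    """Mixup linear interpolation: a convex combination of source and
    target features, lam * x_src + (1 - lam) * x_tgt.
    The Beta(alpha, alpha) prior on lam is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    return lam * x_src + (1.0 - lam) * x_tgt, lam

def noise_residual(low_feat, high_feat, sigma=0.1, rng=None):
    """Noise-residual connection: carry low-level features forward into
    a higher layer with additive noise. Gaussian noise is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=low_feat.shape)
    return high_feat + (low_feat + noise)

# Stand-in feature maps (4 samples, 8-dim features).
rng = np.random.default_rng(0)
x_s = np.ones((4, 8))                     # source-domain features
x_t = np.zeros((4, 8))                    # target-domain features
mixed, lam = mixup(x_s, x_t, rng=rng)     # cross-domain interpolation
fused = noise_residual(mixed, mixed, rng=rng)
print(mixed.shape, fused.shape)
```

In a full model, `mixed` would feed the domain discriminator so the feature extractor must align points along the source-target interpolation path, while `noise_residual` would bridge early and late convolutional blocks.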



Acknowledgements

This work is supported by the National Natural Science Foundation of China (No.61402227), Hunan Education Department Project (20K129), Natural Science Foundation of Hunan Province (No.2019JJ50618), and Degree & Postgraduate Education Reform Project of Hunan Province (No. 2019JGYB116).

Author information

Corresponding author

Correspondence to Chunmei He.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

He, C., Tan, T., Fan, X. et al. Noise-residual Mixup for unsupervised adversarial domain adaptation. Appl Intell 53, 3034–3047 (2023). https://doi.org/10.1007/s10489-022-03709-8

