Abstract
Unsupervised domain adaptation (UDA) methods based on deep adversarial learning have proved successful in many practical fields. Deep adversarial UDA methods promote knowledge transfer by learning domain-invariant features, but they suffer from two problems: the inter-domain information in the shared latent space between domains is not fully exploited, and low-level feature information in deep neural networks is often lost after repeated convolutions and layer-by-layer training. We propose noise-residual mixup for unsupervised adversarial domain adaptation (NMADA) to address these problems. NMADA combines two strategies. The first is mixup linear interpolation; to our knowledge, this is the first time noise mixup has been incorporated into UDA. This strategy enriches cross-domain feature information, further explores the inter-domain information in the shared latent space, and reduces domain shift. The second is a noise-residual module: by connecting different convolutional layers of the network, it combines noise to make full use of feature information at different levels and accounts for the intrinsic feature structure of each level for better domain adaptation. Considering features from both different levels and both domain sides allows better use of intra-domain content and inter-domain information. Compared with mainstream methods, NMADA jointly exploits multi-level feature information and richer cross-domain information to improve model robustness and performance. Experiments on unsupervised domain adaptation benchmark datasets validate the effectiveness and superiority of our approach.
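To make the mixup-interpolation strategy concrete, the following is a minimal sketch of the general idea: linearly interpolate source- and target-domain features with a Beta-sampled coefficient (as in standard mixup) and perturb the result with Gaussian noise. Function and parameter names (`noise_mixup`, `alpha`, `noise_std`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_mixup(x_source, x_target, alpha=1.0, noise_std=0.1):
    """Mix source and target features with a Beta(alpha, alpha)-sampled
    coefficient, then add Gaussian noise (illustrative sketch only)."""
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in [0, 1]
    mixed = lam * x_source + (1.0 - lam) * x_target
    noise = rng.normal(0.0, noise_std, size=mixed.shape)
    return mixed + noise, lam

# Toy batch of 4 feature vectors of dimension 8 from each domain
x_s = rng.normal(size=(4, 8))   # source-domain features
x_t = rng.normal(size=(4, 8))   # target-domain features
x_mix, lam = noise_mixup(x_s, x_t)
print(x_mix.shape, 0.0 <= lam <= 1.0)
```

In the paper's setting, such mixed features would be fed to the adversarial domain discriminator so that the shared latent space is trained on points between the two domains, rather than only on pure source or target samples.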
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No.61402227), Hunan Education Department Project (20K129), Natural Science Foundation of Hunan Province (No.2019JJ50618), and Degree & Postgraduate Education Reform Project of Hunan Province (No. 2019JGYB116).
Ethics declarations
Conflict of interest
The authors declare there is no conflict of interest.
Cite this article
He, C., Tan, T., Fan, X. et al. Noise-residual Mixup for unsupervised adversarial domain adaptation. Appl Intell 53, 3034–3047 (2023). https://doi.org/10.1007/s10489-022-03709-8