Toward few-shot domain adaptation with perturbation-invariant representation and transferable prototypes

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

Domain adaptation (DA) for semantic segmentation aims to reduce the annotation burden of the dense pixel-level prediction task. It tackles the domain gap problem by transferring knowledge learned from abundant source data to new target scenes. Although recent works have made rapid progress in this field, they still underperform fully supervised models by a large margin because no labels are available in the target domain. Considering that few-shot labels are cheap to obtain in practical applications, we attempt to leverage them to close the performance gap between DA and fully supervised methods. The key to this problem is to use the few-shot labels to learn robust domain-invariant predictions effectively. To this end, we first design a data perturbation strategy to enhance the robustness of the representations. Furthermore, a transferable prototype module is proposed to bridge the domain gap based on the source data and the few-shot target labels. With these methods, our approach can perform on par with fully supervised models to some extent. We conduct extensive experiments to demonstrate the effectiveness of the proposed methods and report state-of-the-art performance on two popular DA tasks, i.e., GTA5 to Cityscapes and SYNTHIA to Cityscapes.
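The paper's implementation is not reproduced here; as a rough illustration of the prototype idea the abstract describes (in the spirit of prototypical networks), the following NumPy sketch builds one prototype per class as the mean of that class's labeled feature vectors, then classifies query features by nearest prototype. All names (`class_prototypes`, `nearest_prototype_predict`) and the toy 2-D data are assumptions for illustration only, not the authors' transferable prototype module.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """One prototype per class: the mean of that class's feature vectors."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def nearest_prototype_predict(features, protos):
    """Assign each feature to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy example: two well-separated 2-D clusters standing in for two classes.
rng = np.random.default_rng(0)
src_feats = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # class 0 near (0, 0)
                       rng.normal(2.0, 0.1, (20, 2))])  # class 1 near (2, 2)
src_labels = np.array([0] * 20 + [1] * 20)

protos = class_prototypes(src_feats, src_labels, num_classes=2)
query = np.array([[0.1, -0.1], [1.9, 2.1]])
print(nearest_prototype_predict(query, protos))  # -> [0 1]
```

In the paper's setting the labeled features would come from both the source domain and the few-shot target images, so the prototypes act as a shared anchor that transfers class structure across domains; the sketch above only shows the single-domain mechanics.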



Acknowledgements

This work was supported in part by the National Key R&D Program of China (2019QY1604), the Major Project for New Generation of AI (2018AAA0100400), the National Youth Talent Support Program, and the National Natural Science Foundation of China (Grant Nos. U21B2042, 62006231, and 62072457).

Author information

Corresponding author

Correspondence to Zhaoxiang Zhang.

Additional information

Junsong Fan received his Bachelor’s degree from Beihang University, China in 2016. He is now a PhD candidate of the Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China, under the supervision of Prof. Tieniu Tan and Zhaoxiang Zhang. His research interests include semi-/weakly-/self-supervised learning, domain adaptation, and open-world learning problems.

Yuxi Wang received the Bachelor’s degree from Northeastern University, China in 2016, and the PhD degree from the University of Chinese Academy of Sciences (Institute of Automation, Chinese Academy of Sciences), China in 2022. He is now an assistant professor in the Centre for Artificial Intelligence and Robotics, HKISI_CAS, China. His research interests include transfer learning, domain adaptation, and semantic segmentation.

He Guan is currently a PhD candidate with the University of Chinese Academy of Sciences, under the supervision of Prof. Tieniu Tan. Before that, he received his MS from the Institute of Automation, Chinese Academy of Sciences, and his BS from Harbin Institute of Technology, China. His research interests include 3D object detection and computer vision.

Chunfeng Song received the PhD degree from the University of Chinese Academy of Sciences, China in 2020. He is now an Assistant Professor at the Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. He has published more than 20 papers in venues such as IEEE TIP, IJCV, CVPR, ECCV, and AAAI. His current research focuses on person identification, image segmentation, and unsupervised learning.

Zhaoxiang Zhang received his bachelor’s degree in Circuits and Systems from the University of Science and Technology of China, China in 2004, and his PhD degree in 2009, under the supervision of Prof. Tieniu Tan. He is now a full Professor in the Center for Research on Intelligent Perception and Computing and the National Laboratory of Pattern Recognition, China. His research interests include computer vision, pattern recognition, and machine learning. Recently, he has focused on biologically inspired intelligent computing and its applications in human analysis and scene understanding. He has published more than 150 papers in international journals and conferences, such as IEEE TIP, IEEE TCSVT, IEEE TIFS, IJCV, CVPR, ICCV, ECCV, and NeurIPS.


Cite this article

Fan, J., Wang, Y., Guan, H. et al. Toward few-shot domain adaptation with perturbation-invariant representation and transferable prototypes. Front. Comput. Sci. 16, 163347 (2022). https://doi.org/10.1007/s11704-022-2015-7
