
TSP-UDANet: two-stage progressive unsupervised domain adaptation network for automated cross-modality cardiac segmentation

  • S.I.: Deep Learning in Multimodal Medical Imaging for Cancer Detection
  • Published in Neural Computing and Applications

Abstract

Accurate segmentation of cardiac anatomy is a prerequisite for the diagnosis of cardiovascular disease. However, differences between imaging modalities and imaging devices, known as domain shift, make the segmentation performance of deep learning models unreliable. In this paper, we propose a two-stage progressive unsupervised domain adaptation network (TSP-UDANet) to reduce domain shift when segmenting cardiac images from various sources. We alleviate the mismatch between the feature distributions of the source and target domains by introducing an intermediate domain as a bridge. The TSP-UDANet consists of three sub-networks: a style transfer sub-network, a segmentation sub-network, and a self-training sub-network. We cooperatively align the domains at the image, feature, and output levels. Specifically, we transform the appearance of images across domains and enhance domain invariance through adversarial learning at multiple levels to achieve unsupervised segmentation of the target modality. We validate the TSP-UDANet on the MMWHS (unpaired MRI and CT images), MS-CMRSeg (cross-modality MRI images), and M&Ms (cross-vendor MRI images) datasets. The experimental results demonstrate excellent segmentation performance and generalizability for unlabeled target modality images.
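To make the output-level adversarial alignment mentioned above concrete, the following is a minimal, hypothetical PyTorch-style sketch of a single training step: a segmenter is supervised on labelled source images while a discriminator on softmax prediction maps pushes unlabelled target predictions toward source-like structure. This is not the authors' TSP-UDANet implementation; the tiny networks, tensor shapes, loss weight, and variable names are illustrative assumptions.

```python
# Hypothetical sketch of output-level adversarial alignment for UDA segmentation.
# NOT the authors' TSP-UDANet code; all names, shapes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegmenter(nn.Module):
    """Stand-in for the segmentation sub-network (trained with source labels)."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)  # per-pixel class logits

class OutputDiscriminator(nn.Module):
    """Judges whether a softmax prediction map comes from source or target."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, p):
        return self.net(p)

seg, disc = TinySegmenter(), OutputDiscriminator()
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batch: labelled source slices and unlabelled target slices.
x_src = torch.randn(2, 1, 64, 64)
y_src = torch.randint(0, 4, (2, 64, 64))
x_tgt = torch.randn(2, 1, 64, 64)

# 1) Segmenter update: supervised loss on source + adversarial loss that makes
#    target prediction maps look "source-like" to the discriminator.
p_src, p_tgt = seg(x_src), seg(x_tgt)
loss_sup = F.cross_entropy(p_src, y_src)
d_tgt = disc(F.softmax(p_tgt, dim=1))
loss_adv = bce(d_tgt, torch.ones_like(d_tgt))  # fool the discriminator
opt_seg.zero_grad()
(loss_sup + 0.01 * loss_adv).backward()
opt_seg.step()

# 2) Discriminator update: source predictions labelled 1, target predictions 0.
d_src = disc(F.softmax(seg(x_src), dim=1).detach())
d_tgt = disc(F.softmax(seg(x_tgt), dim=1).detach())
loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
opt_disc.zero_grad()
loss_d.backward()
opt_disc.step()
```

In the full method described in the abstract, analogous adversarial terms would also operate at the image and feature levels, with the style-transfer and self-training sub-networks providing the intermediate domain and the pseudo-labels for the target modality.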




Data availability

The data that support the findings of this study are publicly available at the following links: MMWHS: https://zmiclab.github.io/projects/mmwhs/; MS-CMRSeg: https://zmiclab.github.io/projects/mscmrseg19/; M&Ms: https://www.ub.edu/mnms/.


Acknowledgements

This work is supported by the National Key Research and Development Program of China (2020YFC2004400), the National Natural Science Foundation of China (No. 61773110), the Fundamental Research Funds for the Central Universities (No. N2119008), the Natural Science Foundation of Liaoning Province (General Program) (No. 2021-MS-087), and an Open Grant from the National Health Commission Key Laboratory of Assisted Circulation (Sun Yat-sen University) (No. cvclab201901). The authors would also like to thank the editor and reviewers for their valuable advice, which has helped to improve the article.

Author information


Corresponding author

Correspondence to Lin Qi.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, Y., Zhang, Y., Xu, L. et al. TSP-UDANet: two-stage progressive unsupervised domain adaptation network for automated cross-modality cardiac segmentation. Neural Comput & Applic 35, 22189–22207 (2023). https://doi.org/10.1007/s00521-023-08939-6

