
Disentangled representation and cross-modality image translation based unsupervised domain adaptation method for abdominal organ segmentation

  • Original Article
  • International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Existing medical image segmentation models tend to achieve satisfactory performance when the training and test data are drawn from the same distribution, but they often degrade markedly when evaluated on cross-modality data. To facilitate the deployment of deep learning models in real-world medical scenarios and to mitigate the performance degradation caused by domain shift, we propose an unsupervised cross-modality segmentation framework based on representation disentanglement and image-to-image translation.

Methods

Our approach builds on a multimodal image translation framework that assumes the latent space of images can be decomposed into a content space and a style space. First, image representations are decomposed into content and style codes by the encoders and recombined to generate cross-modality images. Second, we propose content and style reconstruction losses to preserve the semantic information of the original images, and we construct content discriminators to match the content distributions of the source and target domains. Synthetic images with target-domain style and source-domain anatomical structures are then used to train the segmentation model.
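To make the decomposition and recombination concrete, the following is a minimal, illustrative PyTorch sketch. The module names (ContentEncoder, StyleEncoder, Decoder), the toy architectures, and the image sizes are assumptions for illustration only, not the networks used in the paper; only the overall flow follows the description above: encode content and style separately, swap styles across modalities, and re-encode the translated image for content/style reconstruction losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Toy stand-in: maps an image to a spatial content code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Toy stand-in: maps an image to a low-dimensional style code."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Toy stand-in: recombines a content code with a style code."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 64)
        self.net = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, content, style):
        # Inject the style code as a channel-wise bias (AdaIN-like in spirit).
        s = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content + s)

E_c, E_s, G = ContentEncoder(), StyleEncoder(), Decoder()
x_mr = torch.randn(2, 1, 256, 256)   # source-domain (MR) batch
x_ct = torch.randn(2, 1, 256, 256)   # target-domain (CT) batch

c_mr = E_c(x_mr)            # content code of the MR images
s_ct = E_s(x_ct)            # style code of the CT images
x_mr2ct = G(c_mr, s_ct)     # synthetic image: CT style, MR anatomy

# Content/style reconstruction losses: re-encode the translated image and
# require the recovered codes to match the codes it was generated from.
loss_content_rec = F.l1_loss(E_c(x_mr2ct), c_mr)
loss_style_rec = F.l1_loss(E_s(x_mr2ct), s_ct)
```

In the full framework, adversarial terms from the image and content discriminators would be added on top of these reconstruction losses before the synthetic target-style images are passed to the segmentation network.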

Results

We applied our framework to bidirectional adaptation experiments between MRI and CT images of abdominal organs. Compared with the model trained without adaptation, the Dice similarity coefficient (DSC) increased by almost 30% and 25%, and the average symmetric surface distance (ASSD) dropped by 13.3 and 12.2, respectively.
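For reference, the two reported metrics can be computed from binary segmentation masks as in the following sketch (NumPy/SciPy); the function names and the default voxel spacing are illustrative assumptions, not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (ASSD) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: mask voxels removed by a one-voxel erosion.
    pred_surf = pred ^ binary_erosion(pred)
    gt_surf = gt ^ binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d1 = dist_to_gt[pred_surf]    # pred surface -> gt surface
    d2 = dist_to_pred[gt_surf]    # gt surface -> pred surface
    return (d1.sum() + d2.sum()) / (d1.size + d2.size)
```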

Conclusion

The proposed unsupervised domain adaptation framework effectively improves the performance of cross-modality segmentation and minimizes the negative impact of domain shift. Furthermore, the translated images retain the semantic information and anatomical structure of the originals. Our method significantly outperforms several competing methods.



Acknowledgements

This research work was supported by a grant from the National Natural Science Foundation of China (61673007). We sincerely thank the reviewers for their valuable advice.

Funding

This research work was supported by a grant from the National Natural Science Foundation of China (61673007).

Author information


Corresponding author

Correspondence to Tao Gong.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

This article does not contain patient data.

Consent for publication

This article does not contain patient data.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kaida Jiang is the first author; Tao Gong is the corresponding author.

About this article

Cite this article

Jiang, K., Quan, L. & Gong, T. Disentangled representation and cross-modality image translation based unsupervised domain adaptation method for abdominal organ segmentation. Int J CARS 17, 1101–1113 (2022). https://doi.org/10.1007/s11548-022-02590-7

