
Double U-Net CycleGAN for 3D MR to CT image synthesis

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

CycleGAN and its variants are widely used in medical image synthesis because they can be trained on unpaired data. The most common approach applies a 2D generative adversarial network (GAN) to individual slices and then concatenates the synthesized slices into a 3D volume. However, this often introduces spatial inconsistencies between contiguous slices. We propose a new CycleGAN-based model that solves this problem and achieves high-quality conversion from magnetic resonance (MR) to computed tomography (CT) images.

Methods

To preserve the spatial consistency of 3D medical images while avoiding memory-heavy 3D convolutions, we reorganize every three adjacent slices into a 2.5D input image. We further propose a U-Net discriminator network that perceives input objects both locally and globally, improving accuracy. Finally, the model upsamples with Content-Aware ReAssembly of FEatures (CARAFE), which has a large receptive field and adapts its kernels to the content instead of applying a single fixed kernel to all samples.
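The 2.5D input construction described above can be sketched as follows. This is an illustrative NumPy version, not the authors' code; the function name and the edge-replication handling of boundary slices are assumptions.

```python
import numpy as np

def volume_to_25d(volume):
    """Reorganize a 3D volume (D, H, W) into D 2.5D images of shape (3, H, W).

    Each output groups a slice with its two neighbors as channels, so a 2D
    network sees local through-plane context without 3D convolutions.
    Boundary slices are handled by edge replication (an assumed choice).
    """
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[i:i + 3] for i in range(volume.shape[0])])

vol = np.random.rand(64, 128, 128)   # a toy MR volume
inputs = volume_to_25d(vol)
print(inputs.shape)                  # (64, 3, 128, 128)
```

Each 2.5D image can then be fed to the 2D generator, and the center channel of each prediction is kept when reassembling the output volume.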

Results

The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) of the 3D images synthesized by the double U-Net CycleGAN are 74.56 ± 10.02, 27.12 ± 0.71, and 0.84 ± 0.03, respectively. Our method achieves better results than state-of-the-art methods.
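For reference, the three reported metrics can be computed as in the following minimal NumPy sketch. The SSIM here is a simplified single-window version over the whole image, not the windowed variant typically used in practice; function names and the `data_range` parameter are illustrative assumptions.

```python
import numpy as np

def mae(ref, synth):
    """Mean absolute error between a reference CT and a synthesized CT."""
    return float(np.mean(np.abs(ref - synth)))

def psnr(ref, synth, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the intensity span."""
    mse = np.mean((ref - synth) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, synth, data_range, k1=0.01, k2=0.03):
    """SSIM over one global window (simplified; real SSIM averages local windows)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = ref.mean(), synth.mean()
    var_x, var_y = ref.var(), synth.var()
    cov = ((ref - mu_x) * (synth - mu_y)).mean()
    return float((2 * mu_x * mu_y + c1) * (2 * cov + c2)
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

In practice, a windowed implementation such as `skimage.metrics.structural_similarity` is the usual choice for SSIM on medical images.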

Conclusion

The experimental results indicate that our method can convert MR to CT images using unpaired data and achieves better results than state-of-the-art methods. Compared with a 3D CycleGAN, it synthesizes better 3D CT images with less computation and memory.



Author information


Correspondence to Xiling Jiang or Fucang Jia.

Ethics declarations

Funding

The present study was supported in part by the Guangdong Key Area Research and Development Program (2020B010165004), the National Natural Science Foundation of China (62172401, 12026602 and 81960208), the Shenzhen Key Basic Science Program (JCYJ20180507182437217), the National Key Research and Development Program (2019YFC0118100), the Guangdong Natural Science Foundation (2022A1515010439), and the Shenzhen Key Laboratory Program (ZDSYS201707271637577).

Conflict of interest

The authors declare that there are no conflicts of interest with regard to this study.

Ethical approval

The data used in this study come from a public dataset.

Informed consent

The data used in this study come from a public dataset.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sun, B., Jia, S., Jiang, X. et al. Double U-Net CycleGAN for 3D MR to CT image synthesis. Int J CARS 18, 149–156 (2023). https://doi.org/10.1007/s11548-022-02732-x

