
Synthesis of CT images from digital body phantoms using CycleGAN

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

The potential of neural networks in medical image analysis is limited by the restricted availability of large annotated data sets. Incorporating synthetic training data is one approach to circumvent this shortcoming, as synthetic data offer accurate annotations and an effectively unlimited data set size.

Methods

We evaluated eleven CycleGANs for the synthesis of computed tomography (CT) images based on XCAT body phantoms. Image quality was assessed in terms of anatomical accuracy and realistic noise properties. We performed two studies exploring various network and training configurations as well as a task-based adaptation of the corresponding loss function.
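The abstract does not include implementation details, so the sketch below only illustrates the generator-side objective of a standard CycleGAN (least-squares adversarial loss plus cycle-consistency loss) applied to unpaired XCAT and CT slices. The function and argument names, the loss weighting lambda_cycle = 10, and the use of PyTorch are assumptions drawn from the original CycleGAN formulation, not the authors' code; the task-based loss term mentioned in the Methods is not specified in the abstract and is therefore omitted.

```python
# Minimal sketch of a CycleGAN generator objective for phantom-to-CT translation.
# G_xcat2ct / G_ct2xcat are the two generators, D_ct / D_xcat the discriminators.
import torch
import torch.nn as nn


def cyclegan_generator_loss(G_xcat2ct, G_ct2xcat, D_ct, D_xcat,
                            xcat, ct, lambda_cycle=10.0):
    """Generator-side loss for one unpaired batch of XCAT and CT slices."""
    adv = nn.MSELoss()  # least-squares GAN loss
    l1 = nn.L1Loss()

    fake_ct = G_xcat2ct(xcat)    # synthetic CT from the phantom
    fake_xcat = G_ct2xcat(ct)    # synthetic phantom from real CT

    # Adversarial terms: the generators try to make the discriminators output 1.
    pred_ct = D_ct(fake_ct)
    pred_xcat = D_xcat(fake_xcat)
    loss_adv = adv(pred_ct, torch.ones_like(pred_ct)) + \
               adv(pred_xcat, torch.ones_like(pred_xcat))

    # Cycle consistency: translating forth and back should reproduce the input.
    loss_cycle = l1(G_ct2xcat(fake_ct), xcat) + l1(G_xcat2ct(fake_xcat), ct)

    return loss_adv + lambda_cycle * loss_cycle
```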

Results

The CycleGAN using the ResNet architecture and three XCAT input slices achieved the best overall performance in the configuration study. In the task-based study, the anatomical accuracy of the generated synthetic CTs remained high (\(\mathrm{SSIM} = 0.64\) and \(\mathrm{FSIM} = 0.76\)), while the generated noise texture was close to that of real data, with a noise power spectrum correlation coefficient of \(\mathrm{NCC} = 0.92\). In addition, we observed an improvement in annotation accuracy of 65% when using the dedicated loss function. The feasibility of combined training on both real and synthetic data was demonstrated in a blood vessel segmentation task (Dice similarity coefficient \(\mathrm{DSC} = 0.83 \pm 0.05\)).
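For orientation, the sketch below shows how two of the reported figures of merit can be computed: the Dice similarity coefficient between binary segmentation masks, and a normalized correlation coefficient between two noise power spectra estimated from mean-subtracted noise patches. The function names and the FFT-based NPS estimate are illustrative assumptions, not the authors' evaluation code.

```python
# Illustrative computation of DSC and of the NPS correlation coefficient (NCC).
import numpy as np


def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())


def noise_power_spectrum(noise_patches):
    """Average 2D noise power spectrum over mean-subtracted noise-only patches."""
    spectra = [np.abs(np.fft.fftshift(np.fft.fft2(p - p.mean()))) ** 2
               for p in noise_patches]
    return np.mean(spectra, axis=0)


def ncc(nps_a, nps_b):
    """Normalized correlation coefficient between two noise power spectra."""
    a, b = nps_a - nps_a.mean(), nps_b - nps_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```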

Conclusion

CT synthesis using CycleGAN is a feasible approach to generate realistic images from simulated XCAT phantoms. Synthetic CTs generated with a task-based loss function can be used in addition to real data to improve the performance of segmentation networks.
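As a hedged illustration of the combined training mentioned above, the snippet below pools real and synthetic (CycleGAN-generated) slices together with their annotations into a single training set for a segmentation network. The dummy tensors, sizes, and batch size are placeholders for illustration and do not reflect the authors' setup.

```python
# Sketch: training a segmentation network on pooled real and synthetic CT data.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Placeholder data: real CT slices with manual vessel masks, and synthetic CT
# slices carrying the exact XCAT annotations.
real = TensorDataset(torch.randn(100, 1, 256, 256),
                     torch.randint(0, 2, (100, 1, 256, 256)))
synthetic = TensorDataset(torch.randn(100, 1, 256, 256),
                          torch.randint(0, 2, (100, 1, 256, 256)))

loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=4, shuffle=True)
```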



Acknowledgements

We are thankful to Joshua Gawlitza and Leonard Chandra for their support regarding the CT data and the vessel segmentations.

Funding

This research project is part of the Research Campus M\(^2\)OLIE and funded by the German Federal Ministry of Education and Research (BMBF) within the framework "Forschungscampus - Public-Private Partnership for Innovation" under the funding code 13GW0388A. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the NVIDIA Titan Xp GPU used for this research.

Author information

Corresponding author

Correspondence to Tom Russ.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Russ, T., Goerttler, S., Schnurr, AK. et al. Synthesis of CT images from digital body phantoms using CycleGAN. Int J CARS 14, 1741–1750 (2019). https://doi.org/10.1007/s11548-019-02042-9

