Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

Conference paper in: Simplifying Medical Ultrasound (ASMUS 2021)

Abstract

Endoscopic ultrasound (EUS) is a challenging procedure that requires skill in both endoscopy and ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to classify ultrasound images automatically with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, also difficult to label retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data, which can be used as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices extracted so that they mimic plausible EUS views, and generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess image fidelity with the Fréchet Inception Distance. We show that synthetic data obtained from CT imposes only a minor classification accuracy penalty and may help generalization to new, unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io.
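As a concrete illustration of the unpaired translation step described above, the sketch below shows a single CycleGAN generator update for the CT-to-EUS and EUS-to-CT mappings, written here in TensorFlow/Keras. The tiny_generator and tiny_discriminator definitions, the lambda_cyc weight and the single-channel image assumption are illustrative placeholders rather than the architectures or hyperparameters used in the paper; the discriminator updates, any identity loss and image buffers are omitted for brevity.

import tensorflow as tf

def tiny_generator():
    # Placeholder encoder-decoder; a full CycleGAN generator would use a ResNet- or U-Net-style network.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 7, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 7, padding="same", activation="tanh"),
    ])

def tiny_discriminator():
    # Placeholder PatchGAN-style discriminator producing a map of real/fake scores.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 4, padding="same"),
    ])

g_ct2eus, g_eus2ct = tiny_generator(), tiny_generator()    # CT -> EUS and EUS -> CT mappings
d_eus, d_ct = tiny_discriminator(), tiny_discriminator()   # domain discriminators
gen_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
mse = tf.keras.losses.MeanSquaredError()    # least-squares adversarial loss
mae = tf.keras.losses.MeanAbsoluteError()   # L1 cycle-consistency loss
lambda_cyc = 10.0                           # illustrative cycle-consistency weight

@tf.function
def generator_step(ct, eus):
    # ct, eus: unpaired batches of shape (N, H, W, 1), intensities scaled to [-1, 1].
    with tf.GradientTape() as tape:
        fake_eus = g_ct2eus(ct, training=True)        # synthetic EUS from a CT slice
        fake_ct = g_eus2ct(eus, training=True)        # synthetic CT from an EUS image
        cyc_ct = g_eus2ct(fake_eus, training=True)    # CT -> sEUS -> CT
        cyc_eus = g_ct2eus(fake_ct, training=True)    # EUS -> sCT -> EUS
        pred_fake_eus = d_eus(fake_eus, training=True)
        pred_fake_ct = d_ct(fake_ct, training=True)
        adv = (mse(tf.ones_like(pred_fake_eus), pred_fake_eus) +
               mse(tf.ones_like(pred_fake_ct), pred_fake_ct))
        cyc = mae(ct, cyc_ct) + mae(eus, cyc_eus)
        loss = adv + lambda_cyc * cyc
    gen_vars = g_ct2eus.trainable_variables + g_eus2ct.trainable_variables
    gen_opt.apply_gradients(zip(tape.gradient(loss, gen_vars), gen_vars))
    return loss

The cycle-consistency term is what makes unpaired training possible: every CT slice translated to a synthetic EUS image must map back to the original slice, and vice versa, so the generators are constrained even though no CT slice has a paired EUS acquisition.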

Acknowledgements

This work is supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) (203145/Z/16/Z) and by Cancer Research UK (CRUK) Multidisciplinary Award (C28070/A19985). NMB is supported by the EPSRC-funded UCL Centre for Doctoral Training in Intelligent, Integrated Imaging in Healthcare (i4health) (EP/S021930/1). ZMC Baum is supported by the Natural Sciences and Engineering Research Council of Canada Postgraduate Scholarships-Doctoral Program, and the UCL Overseas and Graduate Research Scholarships. SP Pereira was supported by the UCLH/UCL Comprehensive Biomedical Centre, which receives a proportion of funding from the Department of Health's National Institute for Health Research (NIHR) Biomedical Research Centres funding scheme.

Author information

Corresponding author

Correspondence to Ester Bonmati.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Grimwood, A. et al. (2021). Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network. In: Noble, J.A., Aylward, S., Grimwood, A., Min, Z., Lee, SL., Hu, Y. (eds) Simplifying Medical Ultrasound. ASMUS 2021. Lecture Notes in Computer Science, vol 12967. Springer, Cham. https://doi.org/10.1007/978-3-030-87583-1_17

  • DOI: https://doi.org/10.1007/978-3-030-87583-1_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87582-4

  • Online ISBN: 978-3-030-87583-1

  • eBook Packages: Computer Science, Computer Science (R0)
