Transesophageal Echocardiography Generation Using Anatomical Models

  • Conference paper
  • Published in: Data Augmentation, Labelling, and Imperfections (MICCAI 2023)

Abstract

Through automation, deep learning (DL) can enhance the analysis of transesophageal echocardiography (TEE) images. However, DL methods require large amounts of high-quality data to produce accurate results, a requirement that is difficult to satisfy. Data augmentation is commonly used to tackle this issue. In this work, we develop a pipeline that generates synthetic TEE images and corresponding semantic labels. The proposed pipeline expands on an existing pipeline that generates synthetic transthoracic echocardiography images, transforming slices of anatomical models into synthetic images. We also demonstrate that such images can improve DL network performance on a left-ventricle semantic segmentation task. For the pipeline's unpaired image-to-image (I2I) translation stage, we explore two generative methods: CycleGAN and contrastive unpaired translation (CUT). We then evaluate the synthetic images quantitatively using the Fréchet Inception Distance (FID) and qualitatively through a human perception quiz involving expert cardiologists and non-expert researchers.
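The FID mentioned above compares real and synthetic image sets by fitting a Gaussian to the Inception-v3 activations of each set and computing the Fréchet distance between the two Gaussians. A minimal sketch of that distance, using toy statistics in place of real Inception activations (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    Tr((cov1 cov2)^{1/2}) is computed as the sum of square roots of the
    eigenvalues of cov1 @ cov2 (real parts, clipped at zero for stability)."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_sqrt

# In practice, mu/cov are the mean and covariance of Inception-v3
# activations over each image set; here we use small toy statistics.
mu_real, cov_real = np.zeros(2), np.eye(2)
mu_fake, cov_fake = np.array([1.0, 0.0]), 2.0 * np.eye(2)
fid = frechet_distance(mu_real, cov_real, mu_fake, cov_fake)
```

A lower FID indicates that the synthetic distribution lies closer to the real one; identical statistics give a distance of zero.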

In this study, we achieve a Dice score improvement of up to 10% when augmenting datasets with our synthetic images. Furthermore, we compare established methods for assessing unpaired I2I translation and observe that they disagree when evaluating the synthetic images. Finally, we identify which metric better predicts the generated data's efficacy when used for data augmentation.
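The Dice score reported above is the standard overlap metric for segmentation: twice the intersection of the predicted and ground-truth masks over the sum of their sizes. A minimal sketch (the helper name and toy masks are illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy example: predicted vs. ground-truth left-ventricle masks.
pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
score = dice_score(pred, gt)  # 2 * 1 / (2 + 1) = 2/3
```

A score of 1 means perfect overlap; 0 means no overlap, so a 10% improvement on this scale is substantial.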

Notes

  1. Code is available at https://github.com/adgilbert/pseudo-image-extraction.git.

  2. Details of which 19 views were used can be found in the supplementary material.

Acknowledgements

The authors thank D. Kulikova and A. Novikova for their help annotating images and participating in the quiz. We also thank the researchers who participated in the quiz.

Corresponding author

Correspondence to Emmanuel Oladokun.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 146 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Oladokun, E., Abdulkareem, M., Šprem, J., Grau, V. (2024). Transesophageal Echocardiography Generation Using Anatomical Models. In: Xue, Y., Chen, C., Chen, C., Zuo, L., Liu, Y. (eds) Data Augmentation, Labelling, and Imperfections. MICCAI 2023. Lecture Notes in Computer Science, vol 14379. Springer, Cham. https://doi.org/10.1007/978-3-031-58171-7_5

  • DOI: https://doi.org/10.1007/978-3-031-58171-7_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58170-0

  • Online ISBN: 978-3-031-58171-7

  • eBook Packages: Computer Science (R0)
