Abstract
To facilitate diagnosis from cardiac ultrasound (US), clinical practice has established several standard views of the heart, which serve as reference points for diagnostic measurements and define the viewports from which images are acquired. Automatic view recognition involves grouping these images into classes of standard views. Although deep learning techniques have been successful at this task, they still struggle to fully verify whether an image is suitable for specific measurements, which depends on factors such as the location and pose of cardiac structures and potential occlusions. Our approach goes beyond view classification and incorporates a 3D mesh reconstruction of the heart, which enables additional downstream tasks such as segmentation and pose estimation. In this work, we explore learning 3D heart meshes via graph convolutions, adapting techniques used to reconstruct 3D meshes from natural images, for example in human pose estimation. As the availability of fully annotated 3D images is limited, we generate synthetic US images from 3D meshes by training an adversarial denoising diffusion model. Experiments were conducted on synthetic and clinical cases for view recognition and structure detection. The approach yielded good performance on synthetic images and, although trained exclusively on synthetic data, it already showed promise when applied to clinical images. With this proof of concept, we aim to demonstrate the benefits of graph-based representations for cardiac view recognition, which can ultimately lead to more efficient cardiac diagnosis.
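The paper's own architecture and code are not reproduced here; the following is a minimal sketch, under assumed names and sizes, of the general idea behind graph-convolutional mesh regression from an image: a small CNN encoder produces per-vertex features from a US frame, and graph convolution layers over a template heart mesh refine them into 3D vertex coordinates. The toy adjacency, layer dimensions, and class names are illustrative placeholders, not the authors' model.

```python
# Minimal sketch (not the authors' code): image encoder + graph-convolutional
# decoder that regresses 3D coordinates for the vertices of a template mesh.
import torch
import torch.nn as nn


def normalized_adjacency(edges: torch.Tensor, num_vertices: int) -> torch.Tensor:
    """Symmetrically normalized adjacency matrix with self-loops from an edge list."""
    A = torch.zeros(num_vertices, num_vertices)
    A[edges[:, 0], edges[:, 1]] = 1.0
    A[edges[:, 1], edges[:, 0]] = 1.0
    A += torch.eye(num_vertices)
    D_inv_sqrt = torch.diag(A.sum(dim=1).rsqrt())
    return D_inv_sqrt @ A @ D_inv_sqrt


class GraphConv(nn.Module):
    """One graph convolution: aggregate neighbor features via A_hat, then a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, A_hat: torch.Tensor) -> torch.Tensor:
        return self.linear(A_hat @ x)


class MeshRegressor(nn.Module):
    """Toy CNN encoder for 1-channel US frames + GCN decoder predicting (x, y, z) per vertex."""
    def __init__(self, num_vertices: int, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_vertices * feat_dim),
        )
        self.num_vertices, self.feat_dim = num_vertices, feat_dim
        self.gcn1 = GraphConv(feat_dim, feat_dim)
        self.gcn2 = GraphConv(feat_dim, 3)  # final layer outputs 3D coordinates

    def forward(self, image: torch.Tensor, A_hat: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image).view(-1, self.num_vertices, self.feat_dim)
        feats = torch.relu(self.gcn1(feats, A_hat))
        return self.gcn2(feats, A_hat)  # (batch, num_vertices, 3)


# Usage on a 4-vertex placeholder mesh and a random 128x128 frame.
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
A_hat = normalized_adjacency(edges, num_vertices=4)
model = MeshRegressor(num_vertices=4)
vertices = model(torch.randn(1, 1, 128, 128), A_hat)
print(vertices.shape)  # torch.Size([1, 4, 3])
```

Because the decoder operates on a fixed template mesh, the predicted vertices remain in correspondence across images, which is what makes downstream tasks such as segmentation and pose estimation straightforward to derive from the reconstruction.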
Acknowledgment
We thank Anna Novikova and Daria Kulikova for their valuable clinical consultation and for annotating the training data.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Supplementary material 1 (MP4, 7865 KB)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Thomas, S., et al. (2023). Graph Convolutional Neural Networks for Automated Echocardiography View Recognition: A Holistic Approach. In: Kainz, B., Noble, A., Schnabel, J., Khanal, B., Müller, J.P., Day, T. (eds.) Simplifying Medical Ultrasound. ASMUS 2023. Lecture Notes in Computer Science, vol. 14337. Springer, Cham. https://doi.org/10.1007/978-3-031-44521-7_5