Abstract
In this work, we aimed to tackle the challenge of fusing information from multiple echocardiographic views, mimicking how cardiologists integrate views when making a diagnosis. For this purpose, we used the information available in the CAMUS dataset to experiment with combining complementary 2D views to derive 3D information, namely left ventricular (LV) volume. We proposed intra-subject and inter-subject volume contrastive losses with a varying margin to encode heterogeneous input views into a shared, view-invariant, volume-relevant feature space, where feature fusion is facilitated. The results demonstrated that the proposed contrastive losses successfully improved the integration of complementary information from the input views, achieving significantly better volume predictive performance (MAE: 10.96 ml, RMSE: 14.75 ml, R²: 0.88) than that of the late-fusion baseline without contrastive losses (MAE: 13.17 ml, RMSE: 17.91 ml, R²: 0.83). Code available at: https://github.com/LishinC/VCN.
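The abstract does not spell out the exact form of the varying-margin losses; a minimal sketch of the general idea is given below, assuming (hypothetically) that views of the same subject are pulled together while features of different subjects are pushed apart by a margin proportional to their LV-volume difference. All names (`volume_contrastive_loss`, `alpha`) and the margin design are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def volume_contrastive_loss(feats, volumes, subject_ids, alpha=0.1):
    """Hedged sketch of a varying-margin volume contrastive loss.

    feats       : list of 1-D feature vectors, one per input view
    volumes     : list of LV volumes (ml) associated with each view
    subject_ids : list of subject identifiers for each view
    alpha       : assumed scale mapping volume difference (ml) to a
                  feature-space margin
    """
    n = len(feats)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(feats[i] - feats[j])
            if subject_ids[i] == subject_ids[j]:
                # intra-subject: attract views of the same heart
                loss += d ** 2
            else:
                # inter-subject: repel up to a margin that grows with
                # the volume difference between the two subjects
                margin = alpha * abs(volumes[i] - volumes[j])
                loss += max(0.0, margin - d) ** 2
            pairs += 1
    return loss / pairs
```

Under this assumed design, subjects with similar volumes are allowed to lie close in the shared feature space, while subjects with very different volumes are pushed further apart, which is one plausible way to make the embedding volume-relevant.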
Acknowledgement
The work of LC was supported by the RISE-WELL project under H2020 Marie Skłodowska-Curie Actions.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cheng, LH., Sun, X., van der Geest, R.J. (2022). Contrastive Learning for Echocardiographic View Integration. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13434. Springer, Cham. https://doi.org/10.1007/978-3-031-16440-8_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16439-2
Online ISBN: 978-3-031-16440-8
eBook Packages: Computer Science, Computer Science (R0)