
Contrastive Learning for Echocardiographic View Integration

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13434)

Abstract

In this work, we tackled the challenge of fusing information from multiple echocardiographic views, mimicking the integrative approach cardiologists take when making diagnoses. For this purpose, we used the information available in the CAMUS dataset to experiment with combining complementary 2D views to derive 3D information on left ventricular (LV) volume. We proposed intra-subject and inter-subject volume contrastive losses with a varying margin to encode heterogeneous input views into a shared view-invariant, volume-relevant feature space in which feature fusion is facilitated. The results demonstrated that the proposed contrastive losses improved the integration of complementary information from the input views, achieving significantly better volume prediction (MAE: 10.96 ml, RMSE: 14.75 ml, R²: 0.88) than the late-fusion baseline without contrastive losses (MAE: 13.17 ml, RMSE: 17.91 ml, R²: 0.83). Code is available at: https://github.com/LishinC/VCN.
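The core idea of a volume contrastive loss with a varying margin can be sketched as follows: the target separation between two view embeddings is made proportional to the difference in their associated LV volumes, so intra-subject view pairs (near-zero volume gap) are pulled together while inter-subject pairs are kept apart by a volume-dependent margin. This is a minimal illustrative sketch in numpy; the function name, the `scale` hyperparameter, and the exact penalty form are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def volume_contrastive_loss(z_a, z_b, vol_a, vol_b, scale=0.1):
    """Contrastive loss with a volume-dependent margin (illustrative sketch).

    z_a, z_b     : (N, D) embeddings of two echocardiographic views.
    vol_a, vol_b : (N,) LV volumes (ml) associated with each embedding.
    scale        : hypothetical factor mapping volume difference (ml)
                   to a target embedding distance.
    """
    # Pairwise Euclidean distance between the two view embeddings.
    d = np.linalg.norm(z_a - z_b, axis=1)
    # Varying margin: grows with the volume gap, zero for intra-subject pairs.
    margin = scale * np.abs(vol_a - vol_b)
    # Penalize deviation of the embedding distance from the target margin:
    # same-volume pairs are pulled together, different-volume pairs are
    # pushed toward a separation proportional to their volume difference.
    return float(np.mean((d - margin) ** 2))
```

With this formulation, a pair of views from the same subject (identical volume, margin 0) is penalized for any embedding distance, while two subjects whose volumes differ by 10 ml are driven toward an embedding distance of `scale * 10`.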




Acknowledgement

The work of LC was supported by the RISE-WELL project under H2020 Marie Skłodowska-Curie Actions.

Author information


Corresponding author

Correspondence to Rob J. van der Geest.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 190 kb)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cheng, LH., Sun, X., van der Geest, R.J. (2022). Contrastive Learning for Echocardiographic View Integration. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13434. Springer, Cham. https://doi.org/10.1007/978-3-031-16440-8_33


  • DOI: https://doi.org/10.1007/978-3-031-16440-8_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16439-2

  • Online ISBN: 978-3-031-16440-8

  • eBook Packages: Computer Science (R0)
