
Temporal-Consistent Segmentation of Echocardiography with Co-learning from Appearance and Shape

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12262)

Abstract

Accurate and temporally consistent segmentation of echocardiography is important for diagnosing cardiovascular disease. Existing methods often ignore consistency among the segmentation sequences, leading to poor ejection fraction (EF) estimation. In this paper, we propose to enhance the temporal consistency of segmentation sequences with two co-learning strategies for segmentation and tracking of ultrasonic cardiac sequences in which only the end-diastole (ED) and end-systole (ES) frames are labeled. First, we design an appearance-level co-learning (CLA) strategy so that segmentation and tracking benefit each other and provide a reasonable estimation of cardiac shapes and motion fields. Second, we design a shape-level co-learning (CLS) strategy to further improve segmentation with pseudo labels propagated from the labeled frames and to enforce temporal consistency by shape tracking across the whole sequence. Experimental results on the largest publicly available echocardiographic dataset (CAMUS) show that the proposed method, denoted as CLAS, outperforms existing methods for segmentation and EF estimation. In particular, CLAS segments whole sequences with high temporal consistency and thus achieves excellent EF estimation, with a Pearson correlation coefficient of 0.926 and a bias of 0.1%, which is even better than the intra-observer agreement.
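Only the abstract is available here, but the propagation idea it describes (pseudo labels warped from the labeled ED/ES frames to the unlabeled frames via estimated motion fields) can be illustrated with a small sketch. The snippet below is an illustrative assumption, not the authors' implementation: it shows how a segmentation mask of a labeled frame could be warped to another frame with a dense displacement field using differentiable grid sampling in PyTorch. The function name `warp_mask` and the flow convention (per-pixel x/y displacements, backward warping from the target frame to the labeled frame) are hypothetical choices for this sketch.

```python
# Minimal sketch (not the authors' code): warping a labeled-frame mask to an
# unlabeled frame with a dense displacement field, as a stand-in for the
# pseudo-label propagation described in the abstract.
import torch
import torch.nn.functional as F


def warp_mask(mask: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Resample `mask` at the positions given by `flow` (backward warping).

    mask: (N, C, H, W) float tensor, e.g. a soft/one-hot segmentation of the
          labeled ED or ES frame.
    flow: (N, 2, H, W) displacement field in pixels mapping target-frame
          positions to labeled-frame positions; channel 0 is the horizontal (x)
          and channel 1 the vertical (y) displacement (an assumed convention).
    """
    n, _, h, w = mask.shape
    # Identity sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=mask.dtype, device=mask.device),
        torch.arange(w, dtype=mask.dtype, device=mask.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow                               # displaced positions
    # grid_sample expects normalised coordinates in [-1, 1], layout (N, H, W, 2).
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)
    return F.grid_sample(mask, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


# Toy usage: warp a dummy ED mask towards an intermediate frame and use the
# result as a pseudo label for that frame.
if __name__ == "__main__":
    ed_mask = torch.zeros(1, 1, 64, 64)
    ed_mask[:, :, 20:44, 20:44] = 1.0                  # dummy LV mask
    flow = torch.full((1, 2, 64, 64), 2.0)             # dummy 2-pixel shift
    pseudo_label = warp_mask(ed_mask, flow)
    print(pseudo_label.shape)                          # torch.Size([1, 1, 64, 64])
```

The link to EF is straightforward: EF = (EDV − ESV) / EDV × 100%, where EDV and ESV are the left-ventricular volumes at ED and ES (in 2D echocardiography these are typically estimated from the segmented contours with a method-of-disks rule), so temporally consistent masks across the sequence translate directly into a more reliable EF estimate.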



Acknowledgement

This work was partially supported by the National Key R&D Program of China (No. 2019YFC0118300), the Shenzhen Peacock Plan (Nos. KQTD2016053112051497 and KQJSCX20180328095606003), the Natural Science Foundation of China under Grant 61801296, and the Shenzhen Basic Research Grant JCYJ20190808115419619.

Author information


Corresponding author

Correspondence to Wufeng Xue.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wei, H. et al. (2020). Temporal-Consistent Segmentation of Echocardiography with Co-learning from Appearance and Shape. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_60


  • DOI: https://doi.org/10.1007/978-3-030-59713-9_60

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59712-2

  • Online ISBN: 978-3-030-59713-9

  • eBook Packages: Computer Science, Computer Science (R0)
