
FetalNet: Multi-task Deep Learning Framework for Fetal Ultrasound Biometric Measurements

  • Conference paper
Neural Information Processing (ICONIP 2021)

Abstract

Fetal biometric measurements are routinely performed during pregnancy to monitor fetal growth and to estimate gestational age and fetal weight. The main goal in fetal ultrasound scan video analysis is to find standard planes in which to measure the fetal head, abdomen and femur. In this paper, we propose an end-to-end multi-task neural network called FetalNet, with an attention mechanism and a stacked module, for spatio-temporal fetal ultrasound video analysis that simultaneously localizes, classifies and measures the fetal body parts. The attention mechanism and stacked module learn saliency maps that suppress irrelevant ultrasound regions and support efficient scan-plane localization. We train on fetal ultrasound videos from routine examinations of 700 different patients. FetalNet outperforms existing state-of-the-art methods in both classification and segmentation of fetal ultrasound video recordings. The source code and pre-trained weights are publicly available (https://github.com/SanoScience/FetalNet).
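The abstract does not specify how the biometric measurements are derived from the segmented body parts. As a hedged illustration only (the function name and axis values below are hypothetical, not taken from the paper), head and abdominal circumference are commonly computed by fitting an ellipse to the segmentation contour and evaluating its perimeter, e.g. with Ramanujan's approximation:

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation to the perimeter of an ellipse
    with semi-axes a and b (exact when a == b, i.e. a circle)."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Illustrative only: head circumference from hypothetical semi-axes (mm)
# obtained from an ellipse fitted to a segmented skull contour.
hc = ellipse_circumference(45.0, 38.0)
```

For a circle (a == b) the approximation reduces exactly to 2*pi*r, which makes it a convenient sanity check for such a measurement pipeline.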



Acknowledgements

We would like to thank the following medical sonographers for data, annotations and clinical expertise: Jan Klasa, MD; Bogusław Marinković, MD; Wojciech Górczewski, MD; Norbert Majewski, MD; Anita Smal-Obarska, MD. This work is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 857533 (Sano) and by the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund, and by Warsaw University of Technology (grant of the Scientific Discipline of Computer Science and Telecommunications, agreement of 18/06/2020).

Author information


Corresponding authors

Correspondence to Szymon Płotka or Arkadiusz Sitek.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Płotka, S., Włodarczyk, T., Klasa, A., Lipa, M., Sitek, A., Trzciński, T. (2021). FetalNet: Multi-task Deep Learning Framework for Fetal Ultrasound Biometric Measurements. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1517. Springer, Cham. https://doi.org/10.1007/978-3-030-92310-5_30


  • DOI: https://doi.org/10.1007/978-3-030-92310-5_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92309-9

  • Online ISBN: 978-3-030-92310-5

  • eBook Packages: Computer Science (R0)
