Semi-supervised Augmented 3D-CNN for FLARE22 Challenge

  • Conference paper

Part of the book: Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation (FLARE 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13816)

Abstract

Abdominal organ segmentation is used in many important clinical applications; however, producing cases with accurate labels demands enormous manual labour and financial resources. As a potential alternative, semi-supervised learning can extract useful information from unlabeled cases while involving only a few labeled ones. We therefore propose a baseline model built on an augmented 3D-UNet and adopt the semi-supervised Mean Teacher method, evaluating it quantitatively on the FLARE2022 validation cases. Our method achieves an average Dice similarity coefficient (DSC) of 62.16%, a Normalized Surface Distance (NSD) of 62.27%, and a running time of 9.58 s; the area under the curve (AUC) of GPU and CPU consumption is only 7424 and 199, respectively, surpassing almost all other teams in resource consumption and demonstrating the effectiveness of our method.
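The two key ingredients named above can be illustrated compactly. The sketch below is an assumption-laden toy, not the authors' implementation: it shows the core of Mean Teacher (the teacher's weights are an exponential moving average of the student's, following Tarvainen and Valpola) alongside the Dice similarity coefficient used for evaluation; the weight-dictionary representation and the `alpha` value are illustrative choices.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher core: teacher weights track an exponential
    moving average (EMA) of the student's weights after each step."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks:
    2*|A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In training, the student is updated by gradient descent on labeled data plus a consistency loss against the teacher's predictions on unlabeled data, while the teacher is only ever updated via `ema_update`.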

Z. Chen and T. Wang contributed equally.


References

  1. Chen, X., Yuan, Y., Zeng, G., Wang, J.: Semi-supervised semantic segmentation with cross pseudo supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2613–2622 (2021)

  2. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46723-8_49

  3. Clark, K., et al.: The cancer imaging archive (TCIA): maintaining and operating a public information repository. J. Digit. Imaging 26(6), 1045–1057 (2013)

  4. Heller, N., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the KiTS19 challenge. Med. Image Anal. 67, 101821 (2021)

  5. Heller, N., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. Proc. Am. Soc. Clin. Oncol. 38(6), 626 (2020)

  6. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)

  7. Lin, T.Y., Goyal, P., Girshick, R.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)

  8. Ma, J., et al.: AbdomenCT-1K: is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6695–6714 (2022)

  9. Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)

  10. Souly, N., Spampinato, C., Shah, M.: Semi supervised semantic segmentation using generative adversarial network. In: 2017 IEEE International Conference on Computer Vision (ICCV) (2017)

  11. Tarvainen, A., Valpola, H.: Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results. In: Advances in Neural Information Processing Systems (2017)


Acknowledgements

The authors of this paper declare that the segmentation method they implemented for participation in the FLARE 2022 challenge has not used any pre-trained models nor additional datasets other than those provided by the organizers.

Author information

Correspondence to Zining Chen.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, Z., Wang, T., Han, S., Song, Y., Li, S. (2022). Semi-supervised Augmented 3D-CNN for FLARE22 Challenge. In: Ma, J., Wang, B. (eds) Fast and Low-Resource Semi-supervised Abdominal Organ Segmentation. FLARE 2022. Lecture Notes in Computer Science, vol 13816. Springer, Cham. https://doi.org/10.1007/978-3-031-23911-3_6

  • DOI: https://doi.org/10.1007/978-3-031-23911-3_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23910-6

  • Online ISBN: 978-3-031-23911-3

  • eBook Packages: Computer Science (R0)
