KISEG: A Three-Stage Segmentation Framework for Multi-level Acceleration of Chest CT Scans from COVID-19 Patients

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12264)

Abstract

During the ongoing COVID-19 outbreak, it is critical to perform an accurate diagnosis of COVID-19 pneumonia by computed tomography (CT). Although chest lesion segmentation plays a pivotal role in computer-aided diagnosis (CAD), accuracy is hindered by the lack of a publicly available CT dataset with manual annotations. In addition, balancing accuracy against efficiency in semantic segmentation models remains a challenge for clinical deployment. To address these issues, we construct the first CT dataset of COVID-19 pneumonia with pixel-wise lesion annotations. We propose a three-stage framework, called KISEG (Key and Intermediate frame of Segmentation), to enhance serial CT image segmentation with multi-level acceleration. First, a partition policy divides the frames of a serial CT scan into two groups, key frames and intermediate frames. Second, KISEG applies a main model (accurate but computationally heavy) to segment the key frames. Third, a lightweight auxiliary model segments the intermediate frames, incorporating information from the key frames through a fusion module. We further propose Gaussian Kernel Dropout for data augmentation. Experiments on our dataset demonstrate that KISEG achieves accuracy comparable to state-of-the-art methods with fewer GFLOPs, yielding speed-ups of 2.88× to 9.16×. The dataset has been released to the AI community for further COVID-19 research at http://ncov-ai.big.ac.cn/download.
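The abstract describes the three-stage pipeline only at a high level. The sketch below is one possible Python reading of that pipeline, not the authors' implementation: the frame-partition rule (here, a fixed key_interval), the main_model/aux_model interfaces, and the fuse() call standing in for the paper's fusion module are all assumptions introduced purely for illustration.

    import numpy as np

    def kiseg_infer(volume, main_model, aux_model, fuse, key_interval=4):
        # volume: (N, H, W) stack of serial CT slices (assumption).
        # main_model / aux_model: callables mapping one slice to a lesion mask;
        # fuse: callable combining an auxiliary prediction with key-frame output.
        # All names and the fixed key_interval are illustrative, not from the paper.
        num_frames = volume.shape[0]

        # Stage 1: partition policy -- every key_interval-th slice is a key frame.
        key_idx = list(range(0, num_frames, key_interval))

        masks = [None] * num_frames
        key_masks = {}

        # Stage 2: accurate but heavy main model runs on key frames only.
        for i in key_idx:
            key_masks[i] = main_model(volume[i])
            masks[i] = key_masks[i]

        # Stage 3: lightweight auxiliary model handles intermediate frames,
        # fused with information from the nearest key frame.
        for i in range(num_frames):
            if masks[i] is None:
                nearest = min(key_idx, key=lambda k: abs(k - i))
                masks[i] = fuse(aux_model(volume[i]), key_masks[nearest])

        return np.stack(masks)

Under this reading, the multi-level acceleration reported in the abstract comes from running the expensive main model only on the sparse key frames while the cheap auxiliary model covers the remaining slices.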


Notes

  1. The dataset is released on http://ncov-ai.big.ac.cn/download.


Acknowledgements

We thank Dr. Julian McAuley for helping with the revision of the manuscript. This work is supported by the National Key R&D Program of China (2019YFB1404804), the National Natural Science Foundation of China (grants 61906105, 61872218, 61721003 and 61673241), the Tsinghua-Fuzhou Institute of Digital Technology, the Beijing National Research Center for Information Science and Technology (BNRist), and the Tsinghua University-Peking Union Medical College Hospital Initiative Scientific Research Program. The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

Author information

Corresponding author

Correspondence to Guangyu Wang.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, X., Wang, K., Wang, K., Chen, T., Zhang, K., Wang, G. (2020). KISEG: A Three-Stage Segmentation Framework for Multi-level Acceleration of Chest CT Scans from COVID-19 Patients. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12264. Springer, Cham. https://doi.org/10.1007/978-3-030-59719-1_3

  • DOI: https://doi.org/10.1007/978-3-030-59719-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59718-4

  • Online ISBN: 978-3-030-59719-1

  • eBook Packages: Computer Science, Computer Science (R0)
