
Coarse-To-Fine Segmentation of Organs at Risk in Nasopharyngeal Carcinoma Radiotherapy

  • Conference paper
  • Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (MICCAI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12901)

Abstract

Accurate segmentation of organs at risk (OARs) from medical images plays a crucial role in nasopharyngeal carcinoma (NPC) radiotherapy. Several deep learning approaches have been proposed for automatic OAR segmentation; however, most of them suffer from the severe foreground-background imbalance in NPC medical images, which leads to unsatisfactory segmentation performance, especially for small-size OARs. In this paper, we propose a novel end-to-end two-stage segmentation network: the first stage performs coarse segmentation with an encoder-decoder architecture embedded with a target detection module (TDM), and the second stage refines the results with two dedicated strategies for large- and small-size OARs, respectively. Specifically, guided by the TDM, the coarse segmentation network generates preliminary results, which are divided into large- and small-size OAR groups according to a preset threshold on target size. For large-size OARs, considering the boundary ambiguity of the targets, we design an edge-aware module (EAM) to preserve boundary details and thus improve segmentation performance. For small-size OARs, we devise a point cloud module (PCM) to refine the segmentation results, since point cloud data is sensitive to sparse structures and matches the characteristics of small-size OARs. We evaluate our method on the public Head&Neck dataset, and the experimental results demonstrate its superiority over state-of-the-art methods. Code is available at https://github.com/DeepMedLab/Coarse-to-fine-segmentation.
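The abstract describes a two-stage architecture: a coarse encoder-decoder guided by a target detection module, followed by size-dependent refinement with an edge-aware module for large OARs and a point cloud module for small ones. The PyTorch-style sketch below illustrates only the size-based routing logic under stated assumptions; the class name, the placeholder submodules (coarse_net, eam, pcm), and the voxel-count threshold value are hypothetical and are not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class CoarseToFineOARSeg(nn.Module):
    """Sketch of the coarse-to-fine pipeline described in the abstract.

    Stage 1: an encoder-decoder with a target detection module (TDM) produces
    a coarse multi-organ segmentation. Stage 2 routes each organ to either an
    edge-aware branch (large OARs) or a point-cloud branch (small OARs) based
    on a preset size threshold. Submodule internals and the threshold value
    are placeholders, not the authors' implementation.
    """

    def __init__(self, coarse_net, eam, pcm, size_threshold=500):
        super().__init__()
        self.coarse_net = coarse_net          # encoder-decoder + TDM (stage 1)
        self.eam = eam                        # edge-aware module for large OARs
        self.pcm = pcm                        # point cloud module for small OARs
        self.size_threshold = size_threshold  # voxel count separating the two groups (assumed value)

    def forward(self, ct_volume):
        # Stage 1: coarse per-organ probability maps, shape (B, C, D, H, W)
        coarse_probs = self.coarse_net(ct_volume)
        coarse_masks = coarse_probs.argmax(dim=1)  # (B, D, H, W) label map

        refined = []
        num_classes = coarse_probs.shape[1]
        for organ_idx in range(1, num_classes):    # skip background channel 0
            organ_mask = (coarse_masks == organ_idx).float()
            organ_size = organ_mask.sum().item()   # coarse voxel count of this OAR
            if organ_size >= self.size_threshold:
                # Large OAR: refine boundaries with the edge-aware branch
                refined.append(self.eam(ct_volume, coarse_probs[:, organ_idx]))
            else:
                # Small OAR: refine with the point-cloud branch
                refined.append(self.pcm(ct_volume, coarse_probs[:, organ_idx]))
        # Stack refined per-organ maps back into a (B, C-1, D, H, W) tensor
        return torch.stack(refined, dim=1)
```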



Acknowledgments

This work is supported by National Natural Science Foundation of China (NSFC 62071314) and Sichuan Science and Technology Program (2021YFG0326, 2020YFG0079).


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ma, Q., Zu, C., Wu, X., Zhou, J., Wang, Y. (2021). Coarse-To-Fine Segmentation of Organs at Risk in Nasopharyngeal Carcinoma Radiotherapy. In: de Bruijne, M., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol. 12901. Springer, Cham. https://doi.org/10.1007/978-3-030-87193-2_34


  • DOI: https://doi.org/10.1007/978-3-030-87193-2_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87192-5

  • Online ISBN: 978-3-030-87193-2

  • eBook Packages: Computer Science, Computer Science (R0)
