
The Head and Neck Tumor Segmentation in PET/CT Based on Multi-channel Attention Network

  • Conference paper
Head and Neck Tumor Segmentation and Outcome Prediction (HECKTOR 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13209)

Abstract

Automatic segmentation of head and neck (H&N) tumors is important yet challenging for both clinical practice and radiomics research. In this paper, we developed an automated tumor segmentation method for the combined positron emission tomography/computed tomography (PET/CT) images provided by the MICCAI 2021 Head and Neck Tumor Segmentation (HECKTOR) Challenge. Our model takes 3D U-Net as the backbone architecture, to which residual connections are added. In addition, we propose a multi-channel attention network (MCA-Net), which fuses information from different receptive fields and assigns a different weight to each channel to better capture image details. Our network performed well on the test set, achieving a Dice similarity coefficient (DSC) of 0.7681 and a 95th-percentile Hausdorff distance (HD95) of 3.1549 (team id: siat).
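The abstract describes a block that processes features through branches with different receptive fields and then re-weights the fused result per channel. The following is a minimal, illustrative PyTorch sketch of such a multi-branch channel-attention block; the class name, kernel sizes, reduction ratio, and normalization choices are assumptions made for illustration and do not reproduce the authors' MCA-Net implementation.

# Illustrative sketch only: a 3D multi-branch channel-attention block in the
# spirit of the abstract (fuse features from different receptive fields, then
# re-weight channels). Names and hyperparameters are assumptions, not the
# authors' MCA-Net.
import torch
import torch.nn as nn


class MultiChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two parallel branches with different receptive fields (3x3x3 and 5x5x5).
        self.branch3 = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.InstanceNorm3d(channels), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=5, padding=2, bias=False),
            nn.InstanceNorm3d(channels), nn.ReLU(inplace=True))
        # Squeeze: global average pooling to a per-channel descriptor.
        self.pool = nn.AdaptiveAvgPool3d(1)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        # One weight vector per branch; a softmax across branches decides how much
        # each receptive field contributes, channel by channel.
        self.attn3 = nn.Linear(hidden, channels)
        self.attn5 = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f3, f5 = self.branch3(x), self.branch5(x)
        fused = f3 + f5                                  # fuse receptive fields
        s = self.pool(fused).flatten(1)                  # (N, C) channel descriptor
        z = self.fc(s)
        w = torch.softmax(torch.stack([self.attn3(z), self.attn5(z)], dim=1), dim=1)
        w3, w5 = w[:, 0], w[:, 1]                        # per-channel branch weights
        out = f3 * w3[..., None, None, None] + f5 * w5[..., None, None, None]
        return out + x                                   # residual connection


if __name__ == "__main__":
    block = MultiChannelAttention3D(channels=32)
    volume = torch.randn(1, 32, 16, 64, 64)              # e.g. cropped PET/CT feature map
    print(block(volume).shape)                           # torch.Size([1, 32, 16, 64, 64])

Normalizing the branch weights with a softmax (as in selective-kernel-style attention) lets the block choose, per channel, which receptive field to emphasize while keeping the overall feature magnitude stable.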



Author information

Corresponding author

Correspondence to Zhanli Hu.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, G., Huang, Z., Shen, H., Hu, Z. (2022). The Head and Neck Tumor Segmentation in PET/CT Based on Multi-channel Attention Network. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds) Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2021. Lecture Notes in Computer Science, vol 13209. Springer, Cham. https://doi.org/10.1007/978-3-030-98253-9_5

  • DOI: https://doi.org/10.1007/978-3-030-98253-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98252-2

  • Online ISBN: 978-3-030-98253-9

  • eBook Packages: Computer Science, Computer Science (R0)
