Abstract
In this paper, we present a fully automatic system that segments primary head and neck tumors as well as lymph node tumors given only FDG-PET and non-contrast-enhanced CT scans. With only these two modalities, state-of-the-art (SOTA) models typically achieve Dice scores below 0.8, lower than what is attainable with additional modalities, owing to the low resolution of PET and the noise in non-enhanced CT images. We therefore seek to improve tumor segmentation accuracy under this two-modality constraint. We introduce the Transfiner, a novel octree-based refinement system that harnesses the fidelity of transformers while keeping computation and memory costs low for fast inference. The observation behind our method is that, for predictions from a well-trained model, segmentation errors almost always occur at the edges of the mask. The Transfiner takes the base network's feature maps together with the raw modalities as input, selects boundary regions of interest from them, processes these regions with a transformer network, and decodes them with a CNN. Our framework achieved a Dice Similarity Coefficient (DSC) of 0.76426 on the first task of the Head and Neck Tumor Segmentation Challenge (HECKTOR), ranking 6th.
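As a rough illustration of the boundary-selection idea described above (not the authors' implementation), the sketch below shows one way an octree could isolate only the cubes that intersect the edge of a coarse binary prediction, so that a transformer refines just those regions. The helper names, the erosion-based edge detection, and the minimum block size are assumptions for the sake of the example.

```python
# Minimal sketch, assuming an octree over a cubic volume with power-of-two side length.
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_mask(pred: np.ndarray) -> np.ndarray:
    """Voxels on the edge of a binary prediction (mask minus its erosion)."""
    pred = pred.astype(bool)
    return pred & ~binary_erosion(pred)

def octree_boundary_leaves(edges: np.ndarray, origin=(0, 0, 0), min_size=8):
    """Recursively split the volume into octants, keeping only leaf cubes
    that contain boundary voxels. Returns a list of (origin, size) tuples."""
    size = edges.shape[0]
    if not edges.any():
        return []                       # homogeneous region: nothing to refine
    if size <= min_size:
        return [(origin, size)]         # smallest refinement unit
    half = size // 2
    leaves = []
    for dz in (0, half):
        for dy in (0, half):
            for dx in (0, half):
                sub = edges[dz:dz + half, dy:dy + half, dx:dx + half]
                leaves += octree_boundary_leaves(
                    sub, (origin[0] + dz, origin[1] + dy, origin[2] + dx), min_size)
    return leaves

# Toy usage: a coarse spherical "prediction" inside a 64^3 volume.
zz, yy, xx = np.mgrid[:64, :64, :64]
coarse_pred = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
rois = octree_boundary_leaves(boundary_mask(coarse_pred))
# Each ROI would then be cropped from the feature maps and PET/CT volumes
# and passed to the transformer refinement stage.
print(f"{len(rois)} boundary cubes selected out of {(64 // 8) ** 3} total")
```

Because interior and background cubes are discarded early, the transformer only ever sees the small fraction of the volume where boundary errors concentrate, which is what keeps the refinement cheap relative to applying a transformer over the full scan.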
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, A., Bai, T., Nguyen, D., Jiang, S. (2023). Octree Boundary Transfiner: Efficient Transformers for Tumor Segmentation Refinement. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds) Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2022. Lecture Notes in Computer Science, vol 13626. Springer, Cham. https://doi.org/10.1007/978-3-031-27420-6_5
DOI: https://doi.org/10.1007/978-3-031-27420-6_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-27419-0
Online ISBN: 978-3-031-27420-6
eBook Packages: Computer Science, Computer Science (R0)