Abstract:
Although deep convolutional neural networks have achieved satisfactory performance in many medical image segmentation tasks, a considerable annotation burden remains: producing labels is expensive and time-consuming for radiologists. Most existing semi-supervised methods impose data-level perturbations (e.g., rotation, noising) or feature-level perturbations (e.g., MC dropout) on unlabeled data. In this paper, we propose a novel semi-supervised segmentation strategy with meaningful feature-level perturbations to leverage the abundant useful information naturally embedded in unlabeled data. Specifically, we develop a dual-task network in which the segmentation head produces multiple predictions through a perturbation module, and the reconstruction head further exploits semantic information to enhance segmentation performance. The proposed framework subtly perturbs the network at the feature level to generate predictions that should be similar and consistent. However, roughly enforcing consistency at every pixel harms training stability and neglects much delicate information. To better utilize these predictions and estimate their uncertainty, we further propose a feature-perturbed consistency that exploits reliable regions for the framework to learn from. Extensive experiments on the public BraTS2020 dataset and the ACDC 2017 dataset confirm the efficiency and effectiveness of our method. In particular, the proposed method demonstrates remarkable superiority in segmenting boundary regions. The project is available at https://github.com/youngyzzZ/SFPC.
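
The abstract's description of the dual-task design and the feature-perturbed consistency can be made concrete with a small sketch. The snippet below is only an illustrative assumption of how such a scheme might look in PyTorch, not the authors' implementation (which is released at https://github.com/youngyzzZ/SFPC): a shared encoder feeds a segmentation head and a reconstruction head, Gaussian noise on the shared features stands in for the perturbation module, and the consistency loss is restricted to pixels whose prediction variance (a simple uncertainty estimate) falls below a threshold. The network architecture, the noise-based perturbation, and the threshold value are all hypothetical choices.

# Minimal sketch (not the authors' implementation) of a dual-task network with
# feature-level perturbation and an uncertainty-masked consistency loss.
# The module sizes, noise perturbation, and variance threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualTaskNet(nn.Module):
    """Shared encoder with a segmentation head and a reconstruction head."""

    def __init__(self, in_ch=1, num_classes=2, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(feat_ch, num_classes, 1)  # segmentation logits
        self.rec_head = nn.Conv2d(feat_ch, in_ch, 1)         # image reconstruction

    def forward(self, x, n_perturb=3, noise_std=0.1):
        feat = self.encoder(x)
        # Feature-level perturbation: each forward pass adds Gaussian noise to
        # the shared features before the segmentation head (an assumed choice).
        seg_preds = [
            self.seg_head(feat + torch.randn_like(feat) * noise_std)
            for _ in range(n_perturb)
        ]
        return seg_preds, self.rec_head(feat)


def feature_perturbed_consistency(seg_preds, var_threshold=0.05):
    """Consistency loss restricted to low-variance (reliable) pixels."""
    probs = torch.stack([F.softmax(p, dim=1) for p in seg_preds])  # (K, B, C, H, W)
    mean_prob = probs.mean(dim=0)
    # Uncertainty estimated as the variance across perturbed predictions.
    uncertainty = probs.var(dim=0).mean(dim=1, keepdim=True)       # (B, 1, H, W)
    reliable = (uncertainty < var_threshold).float()
    # Pull each perturbed prediction toward the mean on reliable pixels only.
    mse = ((probs - mean_prob) ** 2).mean(dim=(0, 2)).unsqueeze(1)  # (B, 1, H, W)
    return (mse * reliable).sum() / reliable.sum().clamp(min=1.0)


if __name__ == "__main__":
    model = DualTaskNet()
    unlabeled = torch.randn(2, 1, 64, 64)                  # toy unlabeled batch
    seg_preds, recon = model(unlabeled)
    loss_cons = feature_perturbed_consistency(seg_preds)   # unsupervised term
    loss_rec = F.mse_loss(recon, unlabeled)                 # reconstruction term
    (loss_cons + loss_rec).backward()
    print(float(loss_cons), float(loss_rec))

Masking the consistency term by prediction variance, as sketched here, is one common way to keep unreliable pixels from destabilizing training; the paper's actual uncertainty estimation and loss weighting may differ.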
Date of Conference: 05-08 December 2023
Date Added to IEEE Xplore: 18 January 2024