Abstract
The high cost of manual annotation for medical images leads to an extreme scarcity of annotated samples for image segmentation. Moreover, the scales of target regions in medical images are diverse, and local features such as texture and contour are often poorly distinguished in some images (e.g., skin lesions and polyps). To address these problems, this paper proposes a novel semi-supervised medical image segmentation method based on multi-scale knowledge discovery and multi-task ensemble, incorporating two key improvements. Firstly, to detect targets of various scales and focus on local information, a multi-scale knowledge discovery framework (MSKD) is introduced to discover multi-scale semantic features and dense spatial detail features from cross-level (image and patch) inputs. Secondly, by integrating the ideas of multi-task learning and ensemble learning, this paper leverages the knowledge discovered by MSKD to perform three subtasks: semantic constraints for the target regions, reliability learning for unsupervised data, and representation learning for local features. Each subtask is treated as a weak learner that focuses on learning features unique to its specific task. Through the three-task ensemble, the model achieves multi-task feature sharing. Finally, comparative experiments conducted on datasets for skin lesion, polyp, and multi-object cell nucleus segmentation indicate the superior segmentation accuracy and robustness of the proposed method.
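The multi-task ensemble described above can be sketched in miniature: three subtask losses, each acting as a weak learner, are combined into a single objective, and cross-level inputs are formed by tiling an image into patches. This is an illustrative sketch under assumed names and weights, not the authors' implementation; `ensemble_loss`, `split_into_patches`, and the weight values are hypothetical.

```python
# Illustrative sketch (assumed names/weights, not the paper's code):
# combine the three subtask losses -- semantic constraint, reliability
# learning, and representation learning -- into one ensemble objective,
# and form cross-level (image and patch) inputs.

def ensemble_loss(l_semantic, l_reliability, l_representation,
                  weights=(1.0, 0.5, 0.5)):
    """Weighted sum of the three weak-learner losses (weights assumed)."""
    w_sem, w_rel, w_rep = weights
    return w_sem * l_semantic + w_rel * l_reliability + w_rep * l_representation

def split_into_patches(image, patch):
    """Tile a 2-D list `image` into non-overlapping patch x patch blocks."""
    h, w = len(image), len(image[0])
    return [[row[j:j + patch] for row in image[i:i + patch]]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]
```

In practice each loss term would be computed by a dedicated head over the MSKD features, and the shared encoder receives gradients from all three, which is what realizes the multi-task feature sharing the abstract refers to.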
Acknowledgment
This work is supported by the National Natural Science Foundation of China (No. 11973022, 12373108) and the Natural Science Foundation of Guangdong Province (2020A1515010710).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Tu, Y., Li, X., Zhong, Y., Mei, H. (2024). Semi-supervised Medical Image Segmentation Based on Multi-scale Knowledge Discovery and Multi-task Ensemble. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14437. Springer, Singapore. https://doi.org/10.1007/978-981-99-8558-6_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8557-9
Online ISBN: 978-981-99-8558-6
eBook Packages: Computer Science (R0)