Abstract
Automatic segmentation of medical images plays a crucial role in scientific research and healthcare. In many clinical applications, however, obtaining large-scale training datasets with high-quality manual annotations is difficult, so learning from noisy datasets has become increasingly important; label noise significantly degrades the performance of deep learning models. Sample selection is an effective method for handling label noise. In this study, we propose a medical image segmentation framework that selects samples based on entropy-estimated uncertainty to cope with noisily labeled datasets. Specifically, two networks are trained in parallel and exchange information across models for collaborative optimization. Based on the exchanged information, samples are selected by entropy-estimated uncertainty, following a carefully designed schedule that gradually filters and corrects noisy labels. The framework is flexible with respect to the particular deep neural network (DNN) models used. Method analysis and empirical evaluation demonstrate that our approach achieves superior performance on open datasets with noisy annotations: the sample selection method outperforms small-loss-criterion approaches, and the segmentation results surpass those of traditional fully supervised models. Our framework thus provides a practical solution for handling noisy-label datasets in medical image segmentation tasks.
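The core selection step described above can be illustrated with a minimal sketch: given per-pixel softmax probabilities from one of the two networks, compute the predictive entropy of each pixel, average it per sample, and keep only the most confident fraction of the batch for training. This is an assumption-level illustration of entropy-based uncertainty selection, not the paper's exact implementation; the function names, the `(N, C, H, W)` tensor layout, and the `keep_ratio` parameter are all hypothetical.

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy.

    probs: softmax probabilities of shape (N, C, H, W),
           where C is the number of segmentation classes.
    Returns an array of shape (N, H, W).
    """
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_low_uncertainty(probs, keep_ratio=0.7):
    """Select the indices of the most confident samples in a batch.

    A sample's uncertainty is its mean pixel entropy; the keep_ratio
    fraction with the lowest uncertainty is retained for training
    (analogous in spirit to small-loss selection, but driven by
    entropy rather than loss).
    """
    sample_entropy = pixel_entropy(probs).mean(axis=(1, 2))  # (N,)
    k = max(1, int(len(sample_entropy) * keep_ratio))
    return np.argsort(sample_entropy)[:k]

# Toy batch: sample 0 is near-one-hot (confident), sample 1 is
# uniform over two classes (maximally uncertain).
probs = np.zeros((2, 2, 4, 4))
probs[0, 0], probs[0, 1] = 0.99, 0.01
probs[1] = 0.5
selected = select_low_uncertainty(probs, keep_ratio=0.5)
print(selected)  # the confident sample, index 0, is kept
```

In a co-teaching-style setup such as the one described in the abstract, each network would apply this selection to the other network's predictions, and `keep_ratio` would follow the gradual schedule rather than staying fixed.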
Acknowledgment
This work was supported by the National Natural Science Foundation of China (62276116, 61976106); Six talent peaks project in Jiangsu Province (DZXX-122); Jiangsu Province Graduate Research Innovation Program (KYCX23_3677).
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Hao, S. et al. (2024). Sample Selection Based on Uncertainty for Combating Label Noise. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1963. Springer, Singapore. https://doi.org/10.1007/978-981-99-8138-0_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8137-3
Online ISBN: 978-981-99-8138-0