Abstract
Noisy label training is the problem of training a neural network from a dataset whose labels contain errors. Selective prediction is the problem of retaining only those predictions of a neural network that have sufficient confidence. Both problems are important in medical deep learning, where they commonly occur together. Existing methods, however, tackle one problem but not the other. We show that the two are interdependent and propose the first integrated framework to tackle them jointly, which we call Unsupervised Confidence Approximation (UCA). UCA trains a neural network simultaneously for its main task (e.g. image segmentation) and for confidence prediction, from noisy label datasets. UCA does not require confidence labels and is thus unsupervised in this respect. UCA is generic, as it can be used with any neural architecture. We evaluated its performance on the CIFAR-10N and Gleason-2019 datasets. UCA's prediction accuracy increases with the required level of confidence. UCA-equipped networks are on par with the state of the art in noisy label training when used in regular, full-coverage mode. In addition, they provide a risk-management facility, showing flawless risk-coverage curves with a substantial performance gain over existing selective prediction methods.
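Selective prediction of the kind the abstract describes is typically evaluated with a risk-coverage curve: predictions are ranked by confidence, and at each coverage level the risk is the error rate among the retained, most confident predictions. As a minimal illustration of this evaluation (not the paper's code; the function name and toy data are assumptions for the sketch):

```python
import numpy as np

def risk_coverage_curve(confidences, correct):
    """Rank predictions by descending confidence; at each coverage level,
    risk is the error rate among the retained (most confident) predictions."""
    order = np.argsort(-confidences)          # most confident first
    errors = 1.0 - correct[order].astype(float)
    n = len(errors)
    coverage = np.arange(1, n + 1) / n        # fraction of predictions kept
    risk = np.cumsum(errors) / np.arange(1, n + 1)  # error rate among kept
    return coverage, risk

# Toy example: the confident predictions are correct, the least confident is not.
conf = np.array([0.9, 0.8, 0.7, 0.6])
corr = np.array([1, 1, 1, 0])
cov, risk = risk_coverage_curve(conf, corr)
# risk stays at 0.0 until the erroneous low-confidence prediction is admitted
# at full coverage, where it rises to 0.25.
```

A well-calibrated confidence predictor yields a monotonically non-decreasing curve like this one; the "flawless" curves reported for UCA mean the errors are concentrated at the low-confidence end.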
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Rabbani, N., Bartoli, A. (2024). A Unified Approach to Learning with Label Noise and Unsupervised Confidence Approximation. In: Xue, Y., Chen, C., Chen, C., Zuo, L., Liu, Y. (eds) Data Augmentation, Labelling, and Imperfections. MICCAI 2023. Lecture Notes in Computer Science, vol 14379. Springer, Cham. https://doi.org/10.1007/978-3-031-58171-7_4
DOI: https://doi.org/10.1007/978-3-031-58171-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-58170-0
Online ISBN: 978-3-031-58171-7