A Unified Approach to Learning with Label Noise and Unsupervised Confidence Approximation

  • Conference paper
Data Augmentation, Labelling, and Imperfections (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14379)

Abstract

Noisy label training is the problem of training a neural network from a dataset whose labels contain errors. Selective prediction is the problem of retaining only those predictions of a neural network that have sufficient confidence. Both problems are important in medical deep learning, where they commonly occur simultaneously; existing methods, however, tackle one but not both. We show that the two problems are interdependent and propose the first integrated framework to tackle them jointly, which we call Unsupervised Confidence Approximation (UCA). UCA trains a neural network simultaneously for its main task (e.g. image segmentation) and for confidence prediction, from noisy-label datasets. UCA does not require confidence labels and is thus unsupervised in this respect. UCA is generic: it can be used with any neural architecture. We evaluated its performance on the CIFAR-10N and Gleason-2019 datasets. UCA's prediction accuracy increases with the required level of confidence. UCA-equipped networks are on par with the state of the art in noisy label training when used in regular, full-coverage mode, but additionally offer a risk-management facility, showing flawless risk-coverage curves with substantial performance gains over existing selective prediction methods.
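
The abstract describes UCA only at a high level, so the sketch below illustrates, in PyTorch, the two ingredients it names: a network trained jointly for its main task and for confidence prediction without confidence labels, and a risk-coverage evaluation for selective prediction. This is a minimal sketch under stated assumptions, not the authors' implementation: ConfidenceNet, joint_loss, the specific loss form, and all parameters are hypothetical; the actual UCA formulation is given in the paper.

```python
# Minimal sketch, NOT the authors' implementation: a backbone with a task
# head and an auxiliary confidence head, a confidence-weighted loss that
# needs no confidence labels, and a risk-coverage evaluation. All names
# and the loss form are hypothetical assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    """Backbone with two heads: class logits and a scalar confidence in (0, 1)."""
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, num_classes)
        self.conf_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), torch.sigmoid(self.conf_head(h)).squeeze(-1)

def joint_loss(logits, conf, targets, lam=0.1):
    # Down-weight each sample's task loss by its predicted confidence, so the
    # network can attenuate likely-mislabelled samples; the -log(conf) term
    # penalises the trivial solution conf -> 0. No confidence labels are used.
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (conf * ce - lam * torch.log(conf + 1e-8)).mean()

@torch.no_grad()
def risk_coverage(conf, correct):
    # Sort predictions from most to least confident; at coverage k/N, risk is
    # the error rate over the k retained predictions (CPU tensors assumed).
    order = torch.argsort(conf, descending=True)
    err = (~correct[order]).float()
    n = torch.arange(1, len(conf) + 1, dtype=torch.float32)
    return n / len(conf), torch.cumsum(err, dim=0) / n

# Example with random data (hypothetical shapes):
net = ConfidenceNet(in_dim=32, num_classes=10)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
logits, conf = net(x)
loss = joint_loss(logits, conf, y)
coverage, risk = risk_coverage(conf, logits.argmax(1) == y)
```

At test time, abstaining on predictions whose confidence falls below a threshold trades coverage for risk; sweeping the threshold traces a risk-coverage curve of the kind reported for CIFAR-10N and Gleason-2019.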

Author information

Corresponding author

Correspondence to Navid Rabbani.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Rabbani, N., Bartoli, A. (2024). A Unified Approach to Learning with Label Noise and Unsupervised Confidence Approximation. In: Xue, Y., Chen, C., Chen, C., Zuo, L., Liu, Y. (eds) Data Augmentation, Labelling, and Imperfections. MICCAI 2023. Lecture Notes in Computer Science, vol 14379. Springer, Cham. https://doi.org/10.1007/978-3-031-58171-7_4

  • DOI: https://doi.org/10.1007/978-3-031-58171-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58170-0

  • Online ISBN: 978-3-031-58171-7

  • eBook Packages: Computer Science, Computer Science (R0)
