Towards Robust Uncertainty Estimation in the Presence of Noisy Labels

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2022 (ICANN 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13529)


Abstract

In security-critical applications, it is essential to know how confident a model is in its predictions. Many uncertainty estimation methods have been proposed in recent years, and they are reliable when the training data contain no labeling errors. However, we find that the quality of these uncertainty estimates degrades dramatically when noisy labels are present in the training data. On some datasets, the uncertainty estimates become entirely unreliable, even though the label noise barely affects test accuracy. We further analyze how existing label-noise-handling methods affect the reliability of uncertainty estimates, although most of these methods aim only at improving model accuracy. We find that data-cleaning-based approaches can alleviate the influence of label noise on uncertainty estimates to some extent, but drawbacks remain. Finally, we propose an uncertainty estimation method that is robust to label noise. Compared with other algorithms, our approach produces more reliable uncertainty estimates in the presence of noisy labels, especially when the training data contain large-scale labeling errors.
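
To make the setting concrete, here is a minimal, hypothetical sketch (in PyTorch, not the authors' code) of the kind of experiment the abstract describes: symmetric label noise is injected into a toy classification task, a small network is trained on the noisy labels, and predictive uncertainty is then estimated with Monte Carlo dropout. The architecture, noise rate, and all hyperparameters below are illustrative assumptions.

# Sketch: Monte Carlo dropout uncertainty under symmetric label noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, NOISE_RATE = 4, 0.4  # assumed noise rate, for illustration only

# Synthetic data: one Gaussian cluster per class.
centers = torch.randn(NUM_CLASSES, 2) * 4.0
y_clean = torch.randint(0, NUM_CLASSES, (2000,))
x = centers[y_clean] + torch.randn(2000, 2)

# Symmetric label noise: flip a fraction of labels uniformly at random.
flip = torch.rand(len(y_clean)) < NOISE_RATE
y_noisy = torch.where(flip, torch.randint(0, NUM_CLASSES, y_clean.shape), y_clean)

# Small MLP with dropout, trained directly on the noisy labels.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, NUM_CLASSES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(x), y_noisy).backward()
    opt.step()

def mc_dropout_entropy(model, x, samples=50):
    # Keep dropout active at evaluation time (model.train()), so each
    # forward pass samples a different subnetwork; the entropy of the
    # averaged softmax is a standard predictive-uncertainty score.
    model.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(samples)])
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)

print("mean predictive entropy:", mc_dropout_entropy(model, x).mean().item())

Under such a protocol, one would sweep the noise rate and check how well the uncertainty scores separate correct from incorrect predictions; the paper's observation is that these scores can degrade long before test accuracy does.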

This work was supported by the Research Institute of Trustworthy Autonomous Systems, the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), the Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531) and Huawei project on “Fundamental Theory and Key Technologies of Trustworthy Systems”.



Author information

Correspondence to Chao Pan.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pan, C., Yuan, B., Zhou, W., Yao, X. (2022). Towards Robust Uncertainty Estimation in the Presence of Noisy Labels. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13529. Springer, Cham. https://doi.org/10.1007/978-3-031-15919-0_56

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-15919-0_56

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15918-3

  • Online ISBN: 978-3-031-15919-0

  • eBook Packages: Computer Science, Computer Science (R0)
