Abstract
In security-critical applications, it is essential to know how confident a model is in its predictions. Many uncertainty estimation methods have been proposed recently, and they are reliable when the training data contain no labeling errors. However, we find that the quality of these uncertainty estimates degrades dramatically when noisy labels are present in the training data. On some datasets, the uncertainty estimates become completely unreliable, even though the label noise barely affects test accuracy. We further analyze how existing label-noise handling methods affect the reliability of uncertainty estimates, although most of these methods focus only on improving model accuracy. We find that data-cleaning approaches can alleviate the influence of label noise on uncertainty estimates to some extent, but they still have drawbacks. Finally, we propose an uncertainty estimation method that is robust to label noise. Compared with other algorithms, our approach produces more reliable uncertainty estimates in the presence of noisy labels, especially when the training data contain large-scale labeling errors.
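The uncertainty estimation methods discussed above include Monte Carlo dropout, which averages the softmax outputs of several stochastic forward passes and scores predictions by, e.g., predictive entropy. A minimal NumPy sketch of this idea for a single linear layer (the layer shapes, dropout rate, and number of passes are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W, b, p=0.5, T=100, rng=None):
    """MC dropout for a toy linear classifier: drop inputs with
    probability p (inverted-dropout rescaling), average the softmax
    outputs over T stochastic forward passes."""
    rng = rng or np.random.default_rng(0)
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) > p      # keep each unit with prob 1-p
        xd = x * mask / (1.0 - p)           # rescale surviving units
        probs.append(softmax(xd @ W + b))
    return np.mean(probs, axis=0)

def predictive_entropy(p):
    """Entropy of the averaged predictive distribution: higher = less confident."""
    return -np.sum(p * np.log(p + 1e-12), axis=-1)
```

Under label noise, the paper's observation is that such entropy-based scores can become uninformative even while the averaged prediction (and hence test accuracy) stays correct.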
Acknowledgements
This work was supported by the Research Institute of Trustworthy Autonomous Systems, the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), the Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531), and the Huawei project on "Fundamental Theory and Key Technologies of Trustworthy Systems".
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Pan, C., Yuan, B., Zhou, W., Yao, X. (2022). Towards Robust Uncertainty Estimation in the Presence of Noisy Labels. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13529. Springer, Cham. https://doi.org/10.1007/978-3-031-15919-0_56
Print ISBN: 978-3-031-15918-3
Online ISBN: 978-3-031-15919-0