Abstract
Training a classifier on web-crawled data demands learning algorithms that are robust to annotation errors and irrelevant examples. This paper builds upon the recent empirical observation that applying unsupervised contrastive learning to noisy, web-crawled datasets yields a feature representation under which in-distribution (ID) and out-of-distribution (OOD) samples are linearly separable [2]. We show that direct estimation of the separating hyperplane can indeed offer accurate detection of OOD samples, yet, surprisingly, this detection does not translate into gains in classification accuracy. Digging deeper into this phenomenon, we discover that the near-perfect detection misses a type of clean example that is valuable for supervised learning. These examples often represent visually simple images that are relatively easy to identify as clean using standard loss- or distance-based methods, despite being poorly separated from the OOD distribution by unsupervised learning. We further observe a low correlation with state-of-the-art (SOTA) noise-detection metrics, which urges us to propose a hybrid solution that alternates between noise detection using linear separation and a SOTA small-loss approach. When combined with the SOTA algorithm PLS, we substantially improve SOTA results for real-world image classification in the presence of web noise. Code is available at https://github.com/PaulAlbert31/LSA.
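As a rough illustration of the hybrid detection strategy described above, the sketch below combines a linear ID/OOD separator fitted on contrastive features with a classic small-loss criterion, alternating between the two across training epochs. This is a minimal, hypothetical sketch using NumPy and scikit-learn: the function names, thresholds, and synthetic data are illustrative assumptions and do not reproduce the released PLS/LSA implementation.

# Hedged sketch of the hybrid (linear-separation + small-loss) noise detection
# discussed in the abstract. Not the authors' code; all names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

def linear_separation_scores(features, is_clean_guess):
    """Fit a hyperplane separating presumed-clean (ID) from presumed-noisy (OOD)
    samples in the contrastive feature space and return P(ID) per sample."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, is_clean_guess.astype(int))
    return clf.predict_proba(features)[:, 1]

def small_loss_scores(losses):
    """Classic small-loss detection: fit a two-component GMM to per-sample
    losses and return the posterior of the low-loss (clean) component."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_component = np.argmin(gmm.means_.ravel())
    return gmm.predict_proba(losses.reshape(-1, 1))[:, clean_component]

def alternating_detection(features, losses, epoch, threshold=0.5):
    """Alternate between the two detectors across epochs (one possible
    reading of the hybrid strategy described in the abstract)."""
    loss_clean = small_loss_scores(losses) > threshold
    if epoch % 2 == 0:
        return loss_clean
    # Odd epochs: add the linear ID/OOD separator, seeded by the small-loss
    # guess, so visually simple clean samples are not discarded.
    linear_clean = linear_separation_scores(features, loss_clean) > threshold
    return loss_clean | linear_clean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 128))      # stand-in for contrastive features
    losses = rng.gamma(2.0, 1.0, size=1000)   # stand-in for per-sample losses
    mask = alternating_detection(feats, losses, epoch=3)
    print(f"kept {mask.sum()} / {len(mask)} samples as clean")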
References
Albert, P., Arazo, E., Krishna, T., O’Connor, N.E., McGuinness, K.: Is your noise correction noisy? PLS: robustness to label noise with two stage detection. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2023)
Albert, P., Arazo, E., O’Connor, N.E., McGuinness, K.: Embedding contrastive unsupervised features to cluster in- and out-of-distribution noise in corrupted image datasets. In: European Conference on Computer Vision (ECCV) (2022)
Albert, P., Ortego, D., Arazo, E., O’Connor, N., McGuinness, K.: Addressing out-of-distribution label noise in webly-labelled data. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2022)
Ankerst, M., Breunig, M.M., Kriegel, H.P., Sander, J.: OPTICS: ordering points to identify the clustering structure. ACM SIGMOD Rec. 28(2), 49–60 (1999)
Arazo, E., Ortego, D., Albert, P., O’Connor, N., McGuinness, K.: Unsupervised label noise modeling and loss correction. In: International Conference on Machine Learning (ICML) (2019)
Arazo, E., Ortego, D., Albert, P., O’Connor, N., McGuinness, K.: Pseudo-labeling and confirmation bias in deep semi-supervised learning. In: International Joint Conference on Neural Networks (IJCNN) (2020)
Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., Raffel, C.: MixMatch: a holistic approach to semi-supervised learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2019)
Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning (ICML) (2020)
Chrabaszcz, P., Loshchilov, I., Hutter, F.: A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv:1707.08819 (2017)
Cordeiro, F.R., Belagiannis, V., Reid, I., Carneiro, G.: PropMix: hard sample filtering and proportional mixup for learning with noisy labels. arXiv:2110.11809 (2021)
Cordeiro, F.R., Sachdeva, R., Belagiannis, V., Reid, I., Carneiro, G.: Longremix: robust learning with high confidence samples in a noisy label environment. Pattern Recognit. 133, 109013 (2023)
Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: practical automated data augmentation with a reduced search space. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2020)
Fooladgar, F., To, M.N.N., Mousavi, P., Abolmaesumi, P.: Manifold DivideMix: a semi-supervised contrastive learning framework for severe label noise. arXiv:2308.06861 (2023)
Han, B., et al.: Co-teaching: robust training of deep neural networks with extremely noisy labels. In: Advances in Neural Information Processing Systems (NeurIPS) (2018)
He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision (ECCV) (2016)
Iscen, A., Valmadre, J., Arnab, A., Schmid, C.: Learning with neighbor consistency for noisy labels. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
Jiang, L., Zhou, Z., Leung, T., Li, L., Fei-Fei, L.: MentorNet: learning data-driven curriculum for very deep neural networks on corrupted labels. In: International Conference on Machine Learning (ICML) (2018)
Jiang, L., Huang, D., Liu, M., Yang, W.: Beyond synthetic noise: deep learning on controlled noisy labels. In: International Conference on Machine Learning (ICML) (2020)
Kim, H., Chang, H.S., Cho, K., Lee, J., Han, B.: Learning with noisy labels: interconnection of two expectation-maximizations. arXiv:2401.04390 (2024)
Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NeurIPS) (2012)
Lee, K., Zhu, Y., Sohn, K., Li, C.L., Shin, J., Lee, H.: i-Mix: a strategy for regularizing contrastive representation learning. In: International Conference on Learning Representations (ICLR) (2021)
Li, J., Socher, R., Hoi, S.: DivideMix: learning with noisy labels as semi-supervised learning. In: International Conference on Learning Representations (ICLR) (2020)
Li, J., Xiong, C., Hoi, S.C.: Learning from noisy data with robust representation learning. In: IEEE/CVF International Conference on Computer Vision (ICCV) (2021)
Li, W., Wang, L., Li, W., Agustsson, E., Van Gool, L.: WebVision database: visual learning and understanding from web data. arXiv:1708.02862 (2017)
Liu, S., Niles-Weed, J., Razavian, N., Fernandez-Granda, C.: Early-learning regularization prevents memorization of noisy labels. In: Advances in Neural Information Processing Systems (NeurIPS) (2020)
Ortego, D., Arazo, E., Albert, P., O’Connor, N.E., McGuinness, K.: Multi-objective interpolation training for robustness to label noise. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2021)
Ortego, D., Arazo, E., Albert, P., O’Connor, N.E., McGuinness, K.: Towards robust learning with different label noise distributions. In: International Conference on Pattern Recognition (ICPR) (2021)
Sachdeva, R., Cordeiro, F.R., Belagiannis, V., Reid, I., Carneiro, G.: EvidentialMix: learning with combined open-set and closed-set noisy labels. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2021)
Sachdeva, R., Cordeiro, F.R., Belagiannis, V., Reid, I., Carneiro, G.: ScanMix: learning from severe label noise via semantic clustering and semi-supervised learning. Pattern Recognit. 134, 109121 (2023)
Sohn, K., et al.: FixMatch: simplifying semi-supervised learning with consistency and confidence. arXiv:2001.07685 (2020)
Song, H., Kim, M., Lee, J.G.: SELFIE: refurbishing unclean samples for robust deep learning. In: International Conference on Machine Learning (ICML) (2019)
Sun, Z., et al.: Webly supervised fine-grained recognition: benchmark datasets and an approach. In: IEEE/CVF International Conference on Computer Vision (ICCV) (2021)
Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.: Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Association for the Advancement of Artificial Intelligence (AAAI) (2016)
Toneva, M., Sordoni, A., Combes, R., Trischler, A., Bengio, Y., Gordon, G.: An empirical study of example forgetting during deep neural network learning. In: International Conference on Learning Representations (ICLR) (2019)
Da Costa, V.G.T., Fini, E., Nabi, M., Sebe, N., Ricci, E.: Solo-learn: a library of self-supervised methods for visual representation learning. J. Mach. Learn. Res. 23(56), 1–6 (2022)
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2016)
Wang, T., Isola, P.: Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In: International Conference on Machine Learning (ICML) (2020)
Xu, Y., Zhu, L., Jiang, L., Yang, Y.: Faster meta update strategy for noise-robust deep learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021)
Yao, Y., et al.: Jo-SRC: a contrastive approach for combating noisy labels. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021)
Yi, K., Wu, J.: Probabilistic end-to-end noise correction for learning with noisy labels. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Zhang, B., et al.: FlexMatch: boosting semi-supervised learning with curriculum pseudo labeling. In: Advances in Neural Information Processing Systems (NeurIPS) (2021)
Zhang, H., Cisse, M., Dauphin, Y., Lopez-Paz, D.: mixup: beyond empirical risk minimization. In: International Conference on Learning Representations (ICLR) (2018)
Zhang, Y., Zheng, S., Wu, P., Goswami, M., Chen, C.: Learning with feature-dependent label noise: a progressive approach. In: International Conference on Learning Representations (ICLR) (2021)
Zhang, Z., et al.: RankMatch: fostering confidence and consistency in learning with noisy labels. In: IEEE/CVF International Conference on Computer Vision (ICCV) (2023)
Zheltonozhskii, E., Baskin, C., Mendelson, A., Bronstein, A.M., Litany, O.: Contrast to divide: self-supervised pre-training for learning with noisy labels. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2022)
Acknowledgments
This publication has emanated from research conducted with the joint financial support of the Center for Augmented Reasoning (CAR) and Science Foundation Ireland (SFI) under grant number SFI/12/RC/2289_P2. The authors additionally acknowledge the Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support. The authors also wish to pay special tribute to the memory of our dearly missed friend and colleague Kevin McGuinness for his invaluable contributions to our research.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Albert, P., Valmadre, J., Arazo, E., Krishna, T., O’Connor, N.E., McGuinness, K. (2025). An Accurate Detection Is Not All You Need to Combat Label Noise in Web-Noisy Datasets. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15107. Springer, Cham. https://doi.org/10.1007/978-3-031-72967-6_4
DOI: https://doi.org/10.1007/978-3-031-72967-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72966-9
Online ISBN: 978-3-031-72967-6