
Enhancing Continual Noisy Label Learning with Uncertainty-Based Sample Selection and Feature Enhancement

  • Conference paper
  • First Online:
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14432)


Abstract

Continual learning aims to design algorithms that overcome catastrophic forgetting. In real-world settings, however, labels are often noisy due to inaccurate human annotation and other factors, and such noise exacerbates catastrophic forgetting. To tackle catastrophic forgetting and label noise jointly, we propose a novel framework. It leverages sample uncertainty to purify the data stream and selects representative samples for replay, effectively alleviating catastrophic forgetting. In addition, we fine-tune the model with a semi-supervised approach so that all available samples contribute to training, and we incorporate contrastive learning and entropy minimization to mitigate the memorization of noisy labels. Extensive experiments on two benchmark datasets, CIFAR-10 and CIFAR-100, validate the effectiveness of the proposed method; on CIFAR-10, it achieves a 2% accuracy gain under 20% label noise.
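As a concrete illustration of two ingredients named in the abstract, the sketch below shows one plausible way to rank samples by predictive uncertainty (entropy of the mean softmax over several stochastic forward passes) when selecting presumed-clean samples for replay, together with an entropy-minimization penalty. This is a minimal PyTorch sketch under our own assumptions: the function names, the Monte Carlo dropout uncertainty estimate, and the keep_ratio parameter are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predictive_entropy(model, x, n_passes=5):
    """Per-sample entropy of the mean softmax over stochastic forward passes."""
    model.train()  # keep dropout active so repeated passes differ (MC dropout)
    probs = torch.stack(
        [F.softmax(model(x), dim=1) for _ in range(n_passes)]
    ).mean(dim=0)                                              # (batch, classes)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (batch,)

def select_for_replay(model, x, y, keep_ratio=0.5):
    """Keep the lowest-uncertainty fraction of a batch as presumed-clean samples."""
    ent = predictive_entropy(model, x)
    k = max(1, int(keep_ratio * x.size(0)))
    idx = ent.argsort()[:k]  # ascending: most confident samples first
    return x[idx], y[idx]

def entropy_minimization_loss(logits):
    """Encourage confident predictions by penalizing high-entropy outputs."""
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
```

In a continual-learning loop of this shape, select_for_replay would filter each incoming task batch before samples enter the replay buffer, and the entropy term would be added to the training loss with a small weight; the paper's actual selection criterion and loss weighting may differ.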

This work was supported by the Natural Science Foundation of Shandong Province, China, under Grant Nos. ZR2020MF041 and ZR2022MF237, and the National Natural Science Foundation of China under Grant No. 11901325.



Author information

Corresponding author

Correspondence to Jinyong Cheng.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Guo, G., Wei, Z., Cheng, J. (2024). Enhancing Continual Noisy Label Learning with Uncertainty-Based Sample Selection and Feature Enhancement. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14432. Springer, Singapore. https://doi.org/10.1007/978-981-99-8543-2_40


  • DOI: https://doi.org/10.1007/978-981-99-8543-2_40

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8542-5

  • Online ISBN: 978-981-99-8543-2

  • eBook Packages: Computer Science, Computer Science (R0)
