Noisy Label Learning in Deep Learning

Conference paper
Intelligence Science IV (ICIS 2022)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 659)

Abstract

Currently, the construction of large-scale, manually annotated databases remains a prerequisite for the success of deep neural networks (DNNs). Although raw data are plentiful, cleanly labeled data are scarce in many fields, because building such databases demands substantial time and labor. Many studies have shown that noisy labels seriously degrade the stability and performance of DNNs. Learning from noisy labels has therefore become increasingly important, and many methods have been proposed. The purpose of this paper is to systematically summarize the different ideas for solving the noisy label learning problem, analyze the shortcomings of existing methods, and discuss how these shortcomings might be resolved. First, we describe the problem of learning with label noise from the perspective of supervised learning. We then summarize existing methods from the perspective of dataset usage. Subsequently, we analyze the problems with the data and with existing methods. Finally, we outline some possible solutions.
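
To make the supervised-learning formulation concrete, the label-noise literature commonly casts the problem as risk minimization over a corrupted distribution. The following is that standard formulation in conventional notation (not necessarily the exact notation used in the paper). Given a clean distribution \(\mathcal{D}\) over pairs \((x, y)\) with \(y \in \{1, \dots, K\}\), one only observes samples \((x, \tilde{y})\) from a noisy distribution \(\tilde{\mathcal{D}}\), often assumed class-conditional:

\[
P(\tilde{y} = j \mid y = i) = T_{ij},
\qquad
\tilde{R}(f) = \mathbb{E}_{(x, \tilde{y}) \sim \tilde{\mathcal{D}}}\big[\ell(f(x), \tilde{y})\big],
\]

where \(T \in [0,1]^{K \times K}\) is the noise transition matrix and \(\ell\) is a surrogate loss. The goal is to learn a classifier \(f\) with low risk under the clean distribution \(\mathcal{D}\), even though training can only minimize (a corrected or regularized form of) the noisy risk \(\tilde{R}(f)\).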

This work was supported by the Guangdong Provincial Key Research and Development Programme under Grant 2021B0101410002.

Author information

Correspondence to Xuefeng Liang.

Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Liang, X., Yao, L., Liu, X. (2022). Noisy Label Learning in Deep Learning. In: Shi, Z., Jin, Y., Zhang, X. (eds) Intelligence Science IV. ICIS 2022. IFIP Advances in Information and Communication Technology, vol 659. Springer, Cham. https://doi.org/10.1007/978-3-031-14903-0_10

  • DOI: https://doi.org/10.1007/978-3-031-14903-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14902-3

  • Online ISBN: 978-3-031-14903-0

  • eBook Packages: Computer Science, Computer Science (R0)
