Reduction from Complementary-Label Learning to Probability Estimates

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13936)


Abstract

Complementary-Label Learning (CLL) is a weakly-supervised learning problem that aims to learn a multi-class classifier from only complementary labels, each of which indicates a class to which an instance does not belong. Existing approaches mainly adopt the paradigm of reduction to ordinary classification, applying specific transformations and surrogate losses to connect CLL back to ordinary classification. Those approaches, however, face several limitations, such as a tendency to overfit. In this paper, we sidestep those limitations with a novel perspective: reduction to probability estimates of complementary classes. We prove that accurate probability estimates of complementary labels lead to good classifiers through a simple decoding step. This proof establishes a reduction framework from CLL to probability estimates. The framework explains several key CLL approaches as special cases and allows us to design an improved algorithm that is more robust in noisy environments. It also suggests a validation procedure based on the quality of probability estimates, offering a way to validate models with only complementary labels. The flexible framework opens a wide range of unexplored opportunities for using deep and non-deep models for probability estimation to solve CLL. Empirical experiments further verify the framework's efficacy and robustness in various settings. The full paper can be accessed at https://arxiv.org/abs/2209.09500.
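To make the decoding step concrete, the sketch below shows one natural reading of the framework: first estimate, for each instance, the probability that each class is the complementary label, then predict the class whose ideal complementary-label distribution (a row of the label-transition matrix) lies closest to the estimate. This is a minimal NumPy sketch for illustration only; the function names, the uniform transition matrix, and the choice of L1 distance are assumptions made here, not the paper's exact specification.

    import numpy as np

    def decode(comp_prob, transition):
        # comp_prob:  (n, K) estimated P(complementary label | x) per instance
        # transition: (K, K) row k = P(complementary label | true label = k)
        # Predict, for each instance, the class k whose transition row is
        # nearest (in L1 distance) to the estimated probability vector.
        dists = np.abs(comp_prob[:, None, :] - transition[None, :, :]).sum(axis=2)
        return dists.argmin(axis=1)

    K = 4
    # Uniform complementary-label generation: the complementary label is drawn
    # uniformly from the K - 1 classes other than the true one.
    T = (np.ones((K, K)) - np.eye(K)) / (K - 1)

    # Toy estimate for one instance: class 2 is rarely flagged as "not the
    # class", so under uniform generation it is the most plausible true label.
    p = np.array([[0.30, 0.32, 0.05, 0.33]])
    print(decode(p, T))  # -> [2]

Under the uniform transition matrix, this decoding essentially reduces to predicting the class with the smallest estimated complementary-label probability; non-uniform (biased or noisy) generation processes are what make the general nearest-row decoding necessary.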





Acknowledgement

We thank the anonymous reviewers and the members of NTU CLLab for their valuable suggestions. This work was partially supported by the National Science and Technology Council of Taiwan under grants 110-2628-E-002-013 and 111-2628-E-002-018. We also thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational resources.

Author information

Correspondence to Hsuan-Tien Lin.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF, 1227 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lin, W.-I., Lin, H.-T. (2023). Reduction from Complementary-Label Learning to Probability Estimates. In: Kashima, H., Ide, T., Peng, W.-C. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science, vol. 13936. Springer, Cham. https://doi.org/10.1007/978-3-031-33377-4_36


  • DOI: https://doi.org/10.1007/978-3-031-33377-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33376-7

  • Online ISBN: 978-3-031-33377-4

  • eBook Packages: Computer Science, Computer Science (R0)
