
Learning a Classifier when the Labeling Is Known

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6925)

Abstract

We introduce a new model of learning, Known-Labeling-Classifier-Learning (KLCL). The goal of such learning is to find a low-error classifier from some given target-class of predictors, when the correct labeling is known to the learner. This learning problem can be viewed as measuring the information conveyed by the identity of input examples, rather than by their labels.

Given some class of predictors \({\mathcal H}\), a labeling function, and an i.i.d. unlabeled sample generated by some unknown data distribution, the goal of our learner is to find a classifier in \({\mathcal H}\) whose error with respect to the sample-generating distribution and the given labeling function is as low as possible. When the labeling function does not belong to the target class, the error of members of the class (and thus their relative quality as label predictors) varies with the marginal of the underlying data distribution.
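
To make the setup concrete, here is a minimal sketch of a KLCL learner that labels its own sample with the known labeling function and runs empirical risk minimization. The threshold class, interval labeling function, and sample size below are illustrative assumptions, not the paper's construction.

import random

def empirical_error(h, sample, labeling):
    # Fraction of sample points on which h disagrees with the known labeling.
    return sum(h(x) != labeling(x) for x in sample) / len(sample)

def klcl_erm(hypotheses, sample, labeling):
    # The KLCL learner receives only an unlabeled sample, but since the
    # labeling function is known it can label the sample itself and pick
    # the empirical-error minimizer in the class.
    return min(hypotheses, key=lambda h: empirical_error(h, sample, labeling))

if __name__ == "__main__":
    # Hypothetical target class H: threshold classifiers x -> 1[x >= t] on [0, 1].
    H = [lambda x, t=t: int(x >= t) for t in [i / 20 for i in range(21)]]

    # Known labeling function; it need not belong to H (here, an interval).
    labeling = lambda x: int(0.3 <= x <= 0.8)

    # Unlabeled i.i.d. sample from an unknown marginal distribution.
    rng = random.Random(0)
    sample = [rng.random() for _ in range(200)]

    h_hat = klcl_erm(H, sample, labeling)
    print("empirical error of selected hypothesis:",
          empirical_error(h_hat, sample, labeling))

Because the labeling is known, the only uncertainty the learner faces is about the marginal distribution over examples, which is exactly the quantity the KLCL sample complexity measures.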

We prove a trichotomy with respect to the KLCL sample complexity. Namely, we show that for any learnable concept class \({\mathcal H}\), its KLCL sample complexity is either 0, Θ(1/ε), or Ω(1/ε²). Furthermore, we give a simple combinatorial property of concept classes that characterizes this trichotomy.
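
Writing \(m_{\mathrm{KLCL}}(\varepsilon, {\mathcal H})\) as assumed shorthand for the KLCL sample complexity of \({\mathcal H}\) at accuracy ε (the paper's formal definition may also involve a confidence parameter), the trichotomy can be restated as:

% Assumed notation: m_KLCL denotes the KLCL sample complexity.
For every learnable class $\mathcal{H}$, exactly one of the following holds:
\[
  m_{\mathrm{KLCL}}(\varepsilon, \mathcal{H}) = 0, \qquad
  m_{\mathrm{KLCL}}(\varepsilon, \mathcal{H}) = \Theta(1/\varepsilon), \qquad
  m_{\mathrm{KLCL}}(\varepsilon, \mathcal{H}) = \Omega(1/\varepsilon^{2}).
\]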

Our results imply new sample-size lower bounds for the common agnostic PAC model: a lower bound of Ω(1/ε²) on the sample complexity of learning deterministic classifiers, as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ben-David, S., Ben-David, S. (2011). Learning a Classifier when the Labeling Is Known. In: Kivinen, J., Szepesvári, C., Ukkonen, E., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2011. Lecture Notes in Computer Science (LNAI), vol 6925. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24412-4_34


  • DOI: https://doi.org/10.1007/978-3-642-24412-4_34

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24411-7

  • Online ISBN: 978-3-642-24412-4

  • eBook Packages: Computer Science (R0)
