Abstract
We introduce a new model of learning, Known-Labeling-Classifier-Learning (KLCL). The goal of such learning is to find a low-error classifier from some given target-class of predictors, when the correct labeling is known to the learner. This learning problem can be viewed as measuring the information conveyed by the identity of input examples, rather than by their labels.
Given some class of predictors \({\mathcal H}\), a labeling function, and an i.i.d. unlabeled sample generated by some unknown data distribution, the goal of our learner is to find a classifier in \({\mathcal H}\) that has as low as possible error with respect to the sample-generating distribution and the given labeling function. When the labeling function does not belong to the target class, the error of members of the class (and thus their relative quality as label predictors) varies with the marginal of the underlying data distribution.
We prove a trichotomy with respect to the KLCL sample complexity. Namely, we show that for any learnable concept class \({\mathcal H}\), its KLCL sample complexity is either 0 or Θ(1/ε) or Ω(1/ε²). Furthermore, we give a simple combinatorial property of concept classes that characterizes this trichotomy.
Our results imply new sample-size lower bounds for the common agnostic PAC model: a lower bound of Ω(1/ε²) on the sample complexity of learning deterministic classifiers, as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.
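The setup described above can be sketched as empirical risk minimization over a self-labeled sample: since the labeling function is known, the learner labels the unlabeled sample itself and picks the hypothesis in the class with the fewest disagreements. The following is a minimal illustrative sketch, not the paper's algorithm; the finite threshold class and the particular labeling function are assumptions made for the example.

```python
import random

def klcl_erm(hypotheses, labeling, unlabeled_sample):
    """ERM sketch for the KLCL setting: the learner knows the labeling,
    so it labels the i.i.d. unlabeled sample itself and returns the
    hypothesis with minimal empirical error on that sample."""
    def empirical_error(h):
        return sum(h(x) != labeling(x) for x in unlabeled_sample)
    return min(hypotheses, key=empirical_error)

# Hypothetical illustration: threshold classifiers on [0, 1], with a
# target labeling that lies outside the class, so the best-in-class
# predictor depends on the marginal of the data distribution.
H = [lambda x, t=t: int(x >= t) for t in [i / 10 for i in range(11)]]
f = lambda x: int(0.25 <= x <= 0.9)   # an interval, not a threshold
random.seed(0)
sample = [random.random() for _ in range(1000)]
best = klcl_erm(H, f, sample)
```

Here `best` is the threshold function in the class that best approximates the interval labeling under the sampled marginal, even though no threshold matches the labeling exactly.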
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Ben-David, S., Ben-David, S. (2011). Learning a Classifier when the Labeling Is Known. In: Kivinen, J., Szepesvári, C., Ukkonen, E., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2011. Lecture Notes in Computer Science, vol 6925. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24412-4_34
Print ISBN: 978-3-642-24411-7
Online ISBN: 978-3-642-24412-4