
Efficient distribution-free population learning of simple concepts

  • Published in: Annals of Mathematics and Artificial Intelligence

Abstract

We consider a variant of the ‘population learning model’ proposed by Kearns and Seung [8], in which the learner is required to be ‘distribution-free’ as well as computationally efficient. A population learner receives as input hypotheses from a large population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain a labeled sample for the target concept and output a hypothesis. A polynomial time population learner is said to PAC-learn a concept class if its hypothesis is probably approximately correct whenever the population size exceeds a certain polynomial bound, even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies, and some simple concept classes that can be learned by them. These strategies include the ‘supremum hypothesis finder’, the ‘minimum superset finder’ (a special case of the ‘supremum hypothesis finder’), and various voting schemes. When coupled with appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the ‘high–low game’, conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these cases, and show that these systems can be used to obtain a speed-up over the ordinary PAC-learning model [11], with appropriate choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively a model of ‘population prediction’, in which the learner is to predict the value of the target concept at an arbitrarily drawn point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification noise, and exhibit a population learner for the class of conjunctions in this model.
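As an informal illustration (not taken from the paper itself), two of the strategies above can be sketched for axis-parallel rectangles. The domain, the uniform example distribution, and the bounding-box agent algorithm are all hypothetical choices made here for concreteness: each agent returns the smallest rectangle consistent with its few positive examples, the ‘minimum superset finder’ outputs the smallest rectangle containing all agent hypotheses, and the voting-scheme learner predicts a point positive when a majority of agents do.

```python
import random

# Hypothetical agent algorithm: draw a few labeled points from the unit
# square and return the smallest axis-parallel rectangle containing the
# positive examples seen (None if no positive example was drawn).
def agent_hypothesis(target, sample_size, rng):
    (a1, b1), (a2, b2) = target
    positives = []
    for _ in range(sample_size):
        x, y = rng.uniform(0, 1), rng.uniform(0, 1)
        if a1 <= x <= a2 and b1 <= y <= b2:
            positives.append((x, y))
    if not positives:
        return None
    xs = [p[0] for p in positives]
    ys = [p[1] for p in positives]
    return ((min(xs), min(ys)), (max(xs), max(ys)))

# 'Minimum superset finder' for rectangles: the smallest rectangle in the
# class containing every agent hypothesis, i.e. their common bounding box.
def minimum_superset(hypotheses):
    hs = [h for h in hypotheses if h is not None]
    if not hs:
        return None
    return ((min(h[0][0] for h in hs), min(h[0][1] for h in hs)),
            (max(h[1][0] for h in hs), max(h[1][1] for h in hs)))

# Voting-scheme 'population predictor': classify a point positive iff a
# strict majority of the agents' hypotheses contain it.
def majority_predict(hypotheses, point):
    x, y = point
    votes = sum(1 for h in hypotheses if h is not None
                and h[0][0] <= x <= h[1][0] and h[0][1] <= y <= h[1][1])
    return 2 * votes > len(hypotheses)
```

Since every agent hypothesis lies inside the target rectangle, the combined hypothesis never overshoots the target, and as the population grows it converges to the target from inside even though each agent sees only a constant-size sample.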


References

  1. D. Angluin and P. Laird, Learning from noisy examples, Machine Learning 2 (1988) 343–370.


  2. A. Blumer, A. Ehrenfeucht, D. Haussler and M.K. Warmuth, Learnability and the Vapnik–Chervonenkis dimension, Journal of the ACM 36(4) (1989) 929–965.


  3. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2 (Wiley, New York, 2nd ed., 1971).


  4. P. Fischer and H.U. Simon, On learning ring-sum-expansions, SIAM Journal on Computing 21(1) (1992) 181–192.


  5. S. Goldman, M. Kearns and R. Schapire, On the sample complexity of weak learning, Information and Computation 117 (1995) 276–287.


  6. D. Haussler, Quantifying inductive bias: AI learning algorithms and Valiant's learning framework, Artificial Intelligence 36 (1988) 177–221.


  7. D. Helmbold, R. Sloan and M.K. Warmuth, Learning nested differences of intersection closed concept classes, Machine Learning 5(1) (1990).

  8. M. Kearns and S. Seung, Learning from a population of hypotheses, Machine Learning 18 (1995) 255–276.


  9. B.K. Natarajan, On learning boolean functions, in: Proc. 19th ACM Symposium on Theory of Computing (1987) pp. 296–304.

  10. L. Pitt and M.K. Warmuth, Prediction-preserving reducibility, Journal of Computer and System Sciences 41(3) (1990).

  11. L.G. Valiant, A theory of the learnable, Communications of the ACM 27 (1984) 1134–1142.


Cite this article

Nakamura, A., Takeuchi, J. & Abe, N. Efficient distribution-free population learning of simple concepts. Annals of Mathematics and Artificial Intelligence 23, 53–82 (1998). https://doi.org/10.1023/A:1018908122958
