PAC Learning


Synonyms

Distribution-free learning; PAC identification; Probably approximately correct learning

Motivation and Background

A very important learning problem is the task of learning a concept. Concept learning has attracted much attention in learning theory. As a running example, consider humans, who are able to distinguish between different “things,” e.g., chair, table, car, airplane, etc. There is no doubt that humans have to learn how to distinguish “things.” Thus, in this example, each concept is a thing. To model this learning task, we have to convert “real things” into mathematical descriptions. One possibility is to fix some language to express a finite list of properties. Afterward, we decide which of these properties are relevant for the particular things we want to deal with, and which of them have to be fulfilled or not fulfilled, respectively. The list of properties comprises qualities or traits such as “has four legs,” “has wings,” “is...
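The property-list view sketched above can be made concrete with a small, self-contained example. The following Python snippet is an illustration added here, not part of the original entry: it assumes things are encoded as Boolean property vectors drawn from a fixed distribution and that the target concept (say, “chair”) is a conjunction of some of these properties. It then runs the classical PAC algorithm for learning conjunctions (start with all properties and drop every property violated by a positive example) and estimates the error of the learned hypothesis on fresh examples. The number of properties, the target indices, the sample sizes, and the uniform distribution are illustrative assumptions.

```python
import random

# A "thing" is encoded as a tuple of Boolean properties, e.g.
# (has_four_legs, has_wings, has_a_back, ...).  A concept such as
# "chair" is then a Boolean function over these property vectors.
# Here the (hypothetical) target concept is a conjunction of properties.

N_PROPS = 10          # number of properties (illustrative assumption)
TARGET = {0, 2, 5}    # indices that must be True for membership (assumption)

def target_concept(x):
    """Membership in the target concept: all required properties hold."""
    return all(x[i] for i in TARGET)

def draw_example(rng):
    """Draw a property vector from a fixed distribution D (here: uniform)."""
    x = tuple(rng.random() < 0.5 for _ in range(N_PROPS))
    return x, target_concept(x)

def learn_conjunction(sample):
    """Classical conjunction learner: keep only the properties that are
    True in every positive example; negatives are ignored."""
    kept = set(range(N_PROPS))
    for x, label in sample:
        if label:
            kept &= {i for i in range(N_PROPS) if x[i]}
    return kept

rng = random.Random(0)
sample = [draw_example(rng) for _ in range(200)]
hypothesis = learn_conjunction(sample)

# Estimate the error of the learned hypothesis on fresh examples from D.
test = [draw_example(rng) for _ in range(5000)]
errors = sum(all(x[i] for i in hypothesis) != label for x, label in test)
print("learned properties:", sorted(hypothesis),
      "estimated error:", errors / len(test))
```

Because the learned conjunction always contains the target properties, it can only err by rejecting positive examples; with enough samples this one-sided error shrinks, which is the intuition behind the “probably approximately correct” guarantee.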


Recommended Reading

  • Angluin D (1987) Learning regular sets from queries and counterexamples. Inf Comput 75(2):87–106
  • Angluin D (1988) Queries and concept learning. Mach Learn 2(4):319–342
  • Angluin D (1992) Computational learning theory: survey and selected bibliography. In: Proceedings of the 24th annual ACM symposium on theory of computing. ACM Press, New York, pp 351–369
  • Anthony M, Biggs N (1992) Computational learning theory. Cambridge tracts in theoretical computer science, vol 30. Cambridge University Press, Cambridge
  • Anthony M, Biggs N, Shawe-Taylor J (1990) The learnability of formal concepts. In: Fulk MA, Case J (eds) Proceedings of the third annual workshop on computational learning theory. Morgan Kaufmann, San Mateo, pp 246–257
  • Arora S, Barak B (2009) Computational complexity: a modern approach. Cambridge University Press, Cambridge
  • Blum A, Singh M (1990) Learning functions of k terms. In: Proceedings of the third annual workshop on computational learning theory. Morgan Kaufmann, San Mateo, pp 144–153
  • Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK (1987) Occam’s razor. Inf Process Lett 24(6):377–380
  • Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK (1989) Learnability and the Vapnik-Chervonenkis dimension. J ACM 36(4):929–965
  • Bshouty NH (1993) Exact learning via the monotone theory. In: Proceedings of the 34th annual symposium on foundations of computer science. IEEE Computer Society Press, Los Alamitos, pp 302–311
  • Bshouty NH, Jackson JC, Tamon C (2004) More efficient PAC-learning of DNF with membership queries under the uniform distribution. J Comput Syst Sci 68(1):205–234
  • Bshouty NH, Jackson JC, Tamon C (2005) Exploring learnability between exact and PAC. J Comput Syst Sci 70(4):471–484
  • Ehrenfeucht A, Haussler D (1989) Learning decision trees from random examples. Inf Comput 82(3):231–246
  • Ehrenfeucht A, Haussler D, Kearns M, Valiant L (1988) A general lower bound on the number of examples needed for learning. In: Haussler D, Pitt L (eds) Proceedings of the 1988 workshop on computational learning theory (COLT’88), 3–5 Aug. MIT/Morgan Kaufmann, San Francisco, pp 139–154
  • Haussler D (1987) Bias, version spaces and Valiant’s learning framework. In: Langley P (ed) Proceedings of the fourth international workshop on machine learning. Morgan Kaufmann, San Mateo, pp 324–336
  • Haussler D, Kearns M, Littlestone N, Warmuth MK (1991) Equivalence of models for polynomial learnability. Inf Comput 95(2):129–161
  • Jackson JC (1997) An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. J Comput Syst Sci 55(3):414–440
  • Jerrum M (1994) Simple translation-invariant concepts are hard to learn. Inf Comput 113(2):300–311
  • Kearns M, Valiant L (1994) Cryptographic limitations on learning Boolean formulae and finite automata. J ACM 41(1):67–95
  • Kearns M, Valiant LG (1989) Cryptographic limitations on learning Boolean formulae and finite automata. In: Proceedings of the 21st symposium on theory of computing. ACM Press, New York, pp 433–444
  • Kearns MJ, Vazirani UV (1994) An introduction to computational learning theory. MIT Press, Cambridge
  • Linial N, Mansour Y, Rivest RL (1991) Results on learnability and the Vapnik-Chervonenkis dimension. Inf Comput 90(1):33–49
  • Littlestone N (1988) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Mach Learn 2(4):285–318
  • Maass W, Turán G (1990) On the complexity of learning from counterexamples and membership queries. In: Proceedings of the 31st annual symposium on foundations of computer science (FOCS 1990), St. Louis, 22–24 Oct 1990. IEEE Computer Society, Los Alamitos, pp 203–210
  • Natarajan BK (1991) Machine learning: a theoretical approach. Morgan Kaufmann, San Mateo
  • Pitt L, Valiant LG (1988) Computational limitations on learning from examples. J ACM 35(4):965–984
  • Rivest RL (1987) Learning decision lists. Mach Learn 2(3):229–246
  • Schapire RE (1990) The strength of weak learnability. Mach Learn 5(2):197–227
  • Schapire RE (1999) Theoretical views of boosting and applications. In: Algorithmic learning theory, 10th international conference (ALT ’99), Tokyo, Dec 1999, Proceedings. Lecture notes in artificial intelligence, vol 1720. Springer, pp 13–25
  • Schapire RE, Sellie LM (1996) Learning sparse multivariate polynomials over a field with queries and counterexamples. J Comput Syst Sci 52(2):201–213
  • Valiant LG (1984) A theory of the learnable. Commun ACM 27(11):1134–1142
  • Xiao D (2009) On basing ZKBPP on the hardness of PAC learning. In: Proceedings of the 24th annual IEEE conference on computational complexity (CCC 2009), Paris, 15–18 July 2009. IEEE Computer Society, Los Alamitos, pp 304–315


Author information

Correspondence to Thomas Zeugmann.

Copyright information

© 2017 Springer Science+Business Media New York

About this entry

Cite this entry

Zeugmann, T. (2017). PAC Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_631
