Motivation and Background
A central learning problem is the task of learning a concept, and concept learning has attracted much attention in learning theory. As a running example, consider humans, who are able to distinguish between different “things,” e.g., chair, table, car, airplane, etc. There is no doubt that humans have to learn how to distinguish “things.” Thus, in this example, each concept is a thing. To model this learning task, we have to convert “real things” into mathematical descriptions of things. One possibility is to fix some language to express a finite list of properties. Afterward, we decide which of these properties are relevant for the particular things we want to deal with and which of them have to be fulfilled or not to be fulfilled, respectively. The list of properties comprises qualities or traits such as “has four legs,” “has wings,” “is...
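The property-list encoding described above can be sketched in code: a thing becomes a boolean vector over a fixed finite list of properties, and a concept becomes a subset of such vectors, here represented as a predicate. The property names, the example things, and the predicate below are illustrative assumptions, not taken from the entry.

```python
# A minimal sketch of the property-list encoding (illustrative names only).

# Fix a finite list of properties expressible in our chosen language.
PROPERTIES = ["has_four_legs", "has_wings", "has_a_backrest"]

# A "thing" is described by a boolean vector stating which properties it fulfills.
chair = (True, False, True)
airplane = (False, True, False)

def chair_concept(instance):
    """A concept as a predicate: the set of property vectors it contains.

    Here we (hypothetically) require "has four legs" and "has a backrest"
    to be fulfilled, while "has wings" is treated as irrelevant.
    """
    has_four_legs, has_wings, has_a_backrest = instance
    return has_four_legs and has_a_backrest

print(chair_concept(chair))     # True
print(chair_concept(airplane))  # False
```

Under this encoding, learning a concept amounts to identifying, from examples, which properties are relevant and which of those must hold, i.e., learning a predicate over {0, 1}^n.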
Recommended Reading
Angluin D (1987) Learning regular sets from queries and counterexamples. Inf Comput 75(2):87–106
Angluin D (1988) Queries and concept learning. Mach Learn 2(4):319–342
Angluin D (1992) Computational learning theory: survey and selected bibliography. In: Proceedings of the 24th annual ACM symposium on theory of computing. ACM Press, New York, pp 351–369
Anthony M, Biggs N (1992) Computational learning theory. Cambridge tracts in theoretical computer science, vol 30. Cambridge University Press, Cambridge
Anthony M, Biggs N, Shawe-Taylor J (1990) The learnability of formal concepts. In: Fulk MA, Case J (eds) Proceedings of the third annual workshop on computational learning theory. Morgan Kaufmann, San Mateo, pp 246–257
Arora S, Barak B (2009) Computational complexity: a modern approach. Cambridge University Press, Cambridge
Blum A, Singh M (1990) Learning functions of k terms. In: Proceedings of the third annual workshop on computational learning theory. Morgan Kaufmann, San Mateo, pp 144–153
Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK (1987) Occam’s razor. Inf Process Lett 24(6):377–380
Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK (1989) Learnability and the Vapnik-Chervonenkis dimension. J ACM 36(4):929–965
Bshouty NH (1993) Exact learning via the monotone theory. In: Proceedings of the 34th annual symposium on foundations of computer science. IEEE Computer Society Press, Los Alamitos, pp 302–311
Bshouty NH, Jackson JC, Tamon C (2004) More efficient PAC-learning of DNF with membership queries under the uniform distribution. J Comput Syst Sci 68(1):205–234
Bshouty NH, Jackson JC, Tamon C (2005) Exploring learnability between exact and PAC. J Comput Syst Sci 70(4):471–484
Ehrenfeucht A, Haussler D (1989) Learning decision trees from random examples. Inf Comput 82(3):231–246
Ehrenfeucht A, Haussler D, Kearns M, Valiant L (1988) A general lower bound on the number of examples needed for learning. In: Haussler D, Pitt L (eds) Proceedings of the 1988 workshop on computational learning theory (COLT’88), 3–5 Aug. MIT/Morgan Kaufmann, San Francisco, pp 139–154
Haussler D (1987) Bias, version spaces and Valiant’s learning framework. In: Langley P (ed) Proceedings of the fourth international workshop on machine learning. Morgan Kaufmann, San Mateo, pp 324–336
Haussler D, Kearns M, Littlestone N, Warmuth MK (1991) Equivalence of models for polynomial learnability. Inf Comput 95(2):129–161
Jackson JC (1997) An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. J Comput Syst Sci 55(3):414–440
Jerrum M (1994) Simple translation-invariant concepts are hard to learn. Inf Comput 113(2):300–311
Kearns M, Valiant L (1994) Cryptographic limitations on learning Boolean formulae and finite automata. J ACM 41(1):67–95
Kearns M, Valiant LG (1989) Cryptographic limitations on learning Boolean formulae and finite automata. In: Proceedings of the 21st symposium on theory of computing. ACM Press, New York, pp 433–444
Kearns MJ, Vazirani UV (1994) An introduction to computational learning theory. MIT Press, Cambridge
Linial N, Mansour Y, Rivest RL (1991) Results on learnability and the Vapnik-Chervonenkis dimension. Inf Comput 90(1):33–49
Littlestone N (1988) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Mach Learn 2(4):285–318
Maass W, Turán G (1990) On the complexity of learning from counterexamples and membership queries. In: Proceedings of the 31st annual symposium on foundations of computer science (FOCS 1990), St. Louis, 22–24 Oct 1990. IEEE Computer Society, Los Alamitos, pp 203–210
Natarajan BK (1991) Machine learning: a theoretical approach. Morgan Kaufmann, San Mateo
Pitt L, Valiant LG (1988) Computational limitations on learning from examples. J ACM 35(4):965–984
Rivest RL (1987) Learning decision lists. Mach Learn 2(3):229–246
Schapire RE (1990) The strength of weak learnability. Mach Learn 5(2):197–227
Schapire RE (1999) Theoretical views of boosting and applications. In: Algorithmic learning theory, 10th international conference (ALT ’99), Tokyo, Dec 1999, Proceedings. Lecture notes in artificial intelligence, vol 1720. Springer, pp 13–25
Schapire RE, Sellie LM (1996) Learning sparse multivariate polynomials over a field with queries and counterexamples. J Comput Syst Sci 52(2):201–213
Valiant LG (1984) A theory of the learnable. Commun ACM 27(11):1134–1142
Xiao D (2009) On basing ZK ≠ BPP on the hardness of PAC learning. In: Proceedings of the 24th annual IEEE conference on computational complexity (CCC 2009), Paris, 15–18 July 2009. IEEE Computer Society, Los Alamitos, pp 304–315
© 2017 Springer Science+Business Media New York
Zeugmann, T. (2017). PAC Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_631