Keywords and Synonyms
Representation-based hardness of learning
Problem Definition
The work of Pitt and Valiant [16] deals with learning Boolean functions in the Probably Approximately Correct (PAC) learning model introduced by Valiant [17]. A learning algorithm in Valiant's original model is given random examples of a function \( f: \{0,1\}^n \rightarrow \{0,1\} \) from a representation class \( \mathcal{F} \) and produces a hypothesis \( h \in \mathcal{F} \) that closely approximates \( f \). Here a representation class is a set of functions together with a language for describing the functions in the set. Pitt and Valiant give examples of natural representation classes that are NP-hard to learn in this model but become learnable if the algorithm is allowed to output hypotheses from a richer representation class \( \mathcal{H} \). Such an algorithm is said to learn \( \mathcal{F} \) by \( \mathcal{H} \); learning \( \mathcal{F} \) by \( \mathcal{F} \) is referred to as proper learning.
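For reference, the PAC criterion implicit in the paragraph above can be sketched as follows (this is the usual formulation, omitting the dependence on the size of the target representation): an algorithm learns \( \mathcal{F} \) by \( \mathcal{H} \) if, for every target \( f \in \mathcal{F} \), every distribution \( D \) over \( \{0,1\}^n \), and every \( \epsilon, \delta \in (0,1) \), given random examples \( (x, f(x)) \) with \( x \) drawn from \( D \), it runs in time polynomial in \( n \), \( 1/\epsilon \), and \( 1/\delta \) and outputs a hypothesis \( h \in \mathcal{H} \) such that, with probability at least \( 1 - \delta \),

\[ \Pr_{x \sim D}\left[\, h(x) \neq f(x) \,\right] \leq \epsilon . \]

When \( \mathcal{H} = \mathcal{F} \) the learner is proper; the hardness results discussed in this entry show that this syntactic restriction on the output can make an otherwise tractable learning problem NP-hard.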
Recommended Reading
1. Alekhnovich, M., Braverman, M., Feldman, V., Klivans, A., Pitassi, T.: Learnability and automatizability. In: Proceedings of FOCS, pp. 621–630 (2004)
2. Ben-David, S., Eiron, N., Long, P.M.: On the difficulty of approximately maximizing agreements. In: Proceedings of COLT, pp. 266–274 (2000)
3. Blum, A.L., Rivest, R.L.: Training a 3-node neural network is NP-complete. Neural Netw. 5(1), 117–127 (1992)
4. Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.: Learnability and the Vapnik–Chervonenkis dimension. J. ACM 36(4), 929–965 (1989)
5. Bshouty, N.: Exact learning via the monotone theory. Inf. Comput. 123(1), 146–153 (1995)
6. Feldman, V.: Hardness of approximate two-level logic minimization and PAC learning with membership queries. In: Proceedings of STOC, pp. 363–372 (2006)
7. Feldman, V.: Optimal hardness results for maximizing agreements with monomials. In: Proceedings of Conference on Computational Complexity (CCC), pp. 226–236 (2006)
8. Garey, M., Johnson, D.S.: Computers and Intractability. W. H. Freeman, San Francisco (1979)
9. Guruswami, V., Raghavendra, P.: Hardness of learning halfspaces with noise. In: Proceedings of FOCS, pp. 543–552 (2006)
10. Hancock, T., Jiang, T., Li, M., Tromp, J.: Lower bounds on learning decision lists and trees. In: Proceedings of STACS, pp. 527–538 (1995)
11. Haussler, D.: Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput. 100(1), 78–150 (1992)
12. Jackson, J.: An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. J. Comput. Syst. Sci. 55, 414–440 (1997)
13. Kearns, M., Schapire, R., Sellie, L.: Toward efficient agnostic learning. Mach. Learn. 17(2–3), 115–141 (1994)
14. Kearns, M., Valiant, L.: Cryptographic limitations on learning Boolean formulae and finite automata. J. ACM 41(1), 67–95 (1994)
15. Kearns, M., Vazirani, U.: An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA (1994)
16. Pitt, L., Valiant, L.: Computational limitations on learning from examples. J. ACM 35(4), 965–984 (1988)
17. Valiant, L.: A theory of the learnable. Commun. ACM 27(11), 1134–1142 (1984)
Copyright information
© 2008 Springer-Verlag
Cite this entry
Feldman, V. (2008). Hardness of Proper Learning. In: Kao, MY. (eds) Encyclopedia of Algorithms. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30162-4_177
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-387-30770-1
Online ISBN: 978-0-387-30162-4