Abstract
We study a model of Probably Exactly Correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are drawn from a distribution rather than chosen adversarially, or as the Probably Approximately Correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of Probably Almost Exactly Correct (PAExact) learning, which requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
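To fix intuition, the following display sketches the error criteria that distinguish the models. This is a paraphrase for orientation only, not the paper's formal definitions; the notation $\mathrm{err}_D$, the confidence parameter $\delta$, and the negligible bound $\varepsilon(n)$ are illustrative choices. For a target concept $f$, an example distribution $D$, and a hypothesis $h$ output by the learner, let

\[
  % err_D(h): probability mass on which the hypothesis disagrees with the target
  \mathrm{err}_D(h) \;=\; \Pr_{x \sim D}\bigl[\,h(x) \neq f(x)\,\bigr].
\]

Then, with each guarantee required to hold with probability at least $1-\delta$ over the learner's sample and randomness,

\[
  \text{PAC:}\ \ \mathrm{err}_D(h) \le \varepsilon,
  \qquad
  \text{PAExact:}\ \ \mathrm{err}_D(h) \le \varepsilon(n)
  \ \text{for some negligible } \varepsilon(n),
  \qquad
  \text{PExact:}\ \ \mathrm{err}_D(h) = 0.
\]

Read this way, PExact tightens the PAC accuracy parameter $\varepsilon$ all the way to zero, while PAExact sits between the two by permitting a vanishing but possibly nonzero error.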
Supported by the fund for promotion of research at the Technion, Research no. 120-138.
This material is based upon work supported by the National Science Foundation under Grant No. CCR-9877079.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Bshouty, N.H., Jackson, J.C., Tamon, C. (2002). Exploring Learnability between Exact and PAC. In: Kivinen, J., Sloan, R.H. (eds) Computational Learning Theory. COLT 2002. Lecture Notes in Computer Science, vol 2375. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45435-7_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43836-6
Online ISBN: 978-3-540-45435-9
eBook Packages: Springer Book Archive