Abstract
Over the last few years, the efficient learnability of logic programs has been studied extensively. Positive and negative learnability results now exist for a number of restricted classes of logic programs that are closely related to the classes used in practice within inductive logic programming. This paper surveys these results, and also introduces some of the more useful techniques for deriving such results. The paper does not assume any prior background in computational learning theory.
Additional information
William W. Cohen, Ph.D.: He is a Member of Technical Staff in the Department of Information Extraction, part of the Center for Software and Systems Research at AT&T Bell Laboratories. He received a B.S. from Duke University in 1984, and an M.S. and Ph.D. from Rutgers University in 1988 and 1990, respectively. His present interests include inductive logic programming, computational learning theory, and learning from large datasets.
David Page: He received a B.A. in Political Science, with a minor in Computer Science, from Clemson University in 1985, and an M.S. in Computer Science from the same university in 1987. He received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1993; his thesis was entitled “Anti-Unification in Constraint Logics: Foundations and Applications to Learnability in First-Order Logic, to Speed-up Learning, and to Deduction.” From 1993 to the present he has been a research officer at the Oxford University Computing Laboratory. His research interests are inductive logic programming, machine learning and computational learning theory, knowledge representation and reasoning, automated deduction, and logic programming.
About this article
Cite this article
Cohen, W.W., Page, C.D. Polynomial learnability and Inductive Logic Programming: Methods and results. NGCO 13, 369–409 (1995). https://doi.org/10.1007/BF03037231