Polynomial learnability and Inductive Logic Programming: Methods and results

  • Special Issue
  • Published in: New Generation Computing

Abstract

Over the last few years, the efficient learnability of logic programs has been studied extensively. Positive and negative learnability results now exist for a number of restricted classes of logic programs that are closely related to the classes used in practice within inductive logic programming. This paper surveys these results, and also introduces some of the more useful techniques for deriving such results. The paper does not assume any prior background in computational learning theory.
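For readers unfamiliar with the setting, the following minimal sketch shows the kind of hypothesis an inductive logic programming system induces: a definite clause generalized from ground facts and labelled examples. It is illustrative only; the predicates shown are the standard textbook family-relations example, not one drawn from this paper.

```prolog
% Background knowledge: ground facts (hypothetical example data).
parent(ann, bob).
parent(bob, carl).

% Given positive examples such as grandparent(ann, carl) and
% suitable negative examples, an ILP learner searches a restricted
% clause language for a definition like:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
```

The learnability results surveyed in the paper concern how syntactic restrictions on such clauses (for example, bounds on variable depth or determinacy) affect whether a correct definition can be found efficiently.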



Author information

William W. Cohen, Ph. D.: He is a Member of Technical Staff in the Department of Information Extraction, part of the Center for Software and Systems Research at AT&T Bell Laboratories. He received a B. S. from Duke University in 1984, and an M. S. and Ph. D. from Rutgers University in 1988 and 1990, respectively. His present interests include inductive logic programming, computational learning theory, and learning from large datasets.

David Page: He received a B. A. in Political Science from Clemson University in 1985, with a minor in Computer Science. In 1987 he received an M. S. in Computer Science from the same university. He received a Ph. D. in Computer Science from the University of Illinois at Urbana-Champaign in 1993. His thesis was entitled “Anti-Unification in Constraint Logics: Foundations and Applications to Learnability in First-Order Logic, to Speed-up Learning, and to Deduction.” Since 1993 he has been a research officer in the Oxford University Computing Laboratory. His research interests are Inductive Logic Programming, Machine Learning and Computational Learning Theory, Knowledge Representation and Reasoning, Automated Deduction, and Logic Programming.

About this article

Cite this article

Cohen, W.W., Page, C.D. Polynomial learnability and Inductive Logic Programming: Methods and results. NGCO 13, 369–409 (1995). https://doi.org/10.1007/BF03037231
