
Learning Horn definitions: Theory and an application to planning

  • Special Issue
  • Published in: New Generation Computing

Abstract

A Horn definition is a set of Horn clauses with the same predicate in all head literals. In this paper, we consider learning non-recursive, first-order Horn definitions from entailment. We show that this class is exactly learnable from equivalence and membership queries. It then follows that this class is PAC-learnable using examples and membership queries. Finally, we apply our results to learning control knowledge for efficient planning in the form of goal-decomposition rules.
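To make the learning-from-entailment setting concrete, the sketch below is a minimal, hypothetical Python illustration, not the paper's algorithm. It represents a non-recursive Horn definition as a set of clauses and implements the test a membership oracle would answer: assuming example bodies use only background predicates, a definite-clause example is entailed by a non-recursive definition exactly when some clause of the definition theta-subsumes it. All predicate, constant, and function names are invented for the example.

```python
# Illustrative sketch only (hypothetical names, not the authors' learner).
from itertools import product

# An atom is a tuple: (predicate, arg1, arg2, ...).  Variables are strings
# starting with an uppercase letter; constants are lowercase strings.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def substitute(atom, theta):
    return (atom[0],) + tuple(theta.get(a, a) for a in atom[1:])

def subsumes(clause, example):
    """Does `clause` = (head, body) theta-subsume `example`?  Brute force
    over substitutions mapping the clause's variables to the example's terms."""
    head, body = clause
    ex_head, ex_body = example
    variables = sorted({a for atom in [head, *body] for a in atom[1:] if is_var(a)})
    terms = sorted({a for atom in [ex_head, *ex_body] for a in atom[1:]})
    for values in product(terms, repeat=len(variables)):
        theta = dict(zip(variables, values))
        if substitute(head, theta) == ex_head and \
           all(substitute(b, theta) in ex_body for b in body):
            return True
    return False

def entails(definition, example):
    """Membership-oracle check: for a non-recursive Horn definition (and
    example bodies built from background predicates only), entailment of a
    definite clause reduces to subsumption by some clause of the definition."""
    return any(subsumes(c, example) for c in definition)

# Hypothetical target definition: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
target = [(("grandparent", "X", "Z"),
           [("parent", "X", "Y"), ("parent", "Y", "Z")])]

# A positive example clause, entailed by the target definition.
example = (("grandparent", "ann", "carl"),
           [("parent", "ann", "bob"), ("parent", "bob", "carl"),
            ("parent", "carl", "dee")])
print(entails(target, example))      # True

# A clause not entailed by the target (a membership query would answer "no").
non_example = (("grandparent", "ann", "dee"),
               [("parent", "ann", "bob")])
print(entails(target, non_example))  # False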



Author information

Corresponding author

Correspondence to Chandra Reddy.

Additional information

Chandra Reddy, Ph.D.: He is currently a doctoral student in the Department of Computer Science at Oregon State University and is completing his Ph.D. on June 30, 1998. His dissertation is entitled "Learning Hierarchical Decomposition Rules for Planning: An Inductive Logic Programming Approach." Earlier, he earned an M.Tech in Artificial Intelligence and Robotics from the University of Hyderabad, India, and an M.Sc. (Tech.) in Computer Science from the Birla Institute of Technology and Science, India. His current research interests broadly fall under machine learning and planning/scheduling, more specifically inductive logic programming, speedup learning, data mining, and hierarchical planning and optimization.

Prasad Tadepalli, Ph.D.: He holds an M.Tech in Computer Science from the Indian Institute of Technology, Madras, India, and a Ph.D. from Rutgers University, New Brunswick, USA. He joined Oregon State University, Corvallis, as an assistant professor in 1989 and is now an associate professor in the Department of Computer Science. His main area of research is machine learning, including reinforcement learning, inductive logic programming, and computational learning theory, with applications to classification, planning, scheduling, manufacturing, and information retrieval.

About this article

Cite this article

Reddy, C., Tadepalli, P. Learning Horn definitions: Theory and an application to planning. New Gener Comput 17, 77–98 (1999). https://doi.org/10.1007/BF03037583
