Abstract
A Horn definition is a set of Horn clauses whose head literals all share the same predicate. In this paper, we consider learning non-recursive, first-order Horn definitions from entailment. We show that this class is exactly learnable from equivalence and membership queries. It then follows that the class is PAC learnable from examples and membership queries. Finally, we apply our results to learning control knowledge for efficient planning, in the form of goal-decomposition rules.
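To make the setting concrete, the sketch below illustrates a non-recursive Horn definition and the test behind a membership query. This is a minimal Python illustration, not the paper's algorithm; the grandparent/parent predicates and all function names are invented for the example. For non-recursive, function-free clauses, entailment of a ground clause by the definition reduces to subsumption: some clause of the definition must map onto the queried clause under a variable substitution.

```python
# A hypothetical non-recursive Horn definition (one clause, head predicate
# grandparent/2):  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Atoms are (predicate, args) pairs; variables start with an uppercase letter.

def is_var(term):
    return term[0].isupper()

def _match(atom, ground, theta):
    """Extend substitution theta so that atom maps onto the ground atom,
    or return None if impossible."""
    (pred, args), (gpred, gargs) = atom, ground
    if pred != gpred or len(args) != len(gargs):
        return None
    theta = dict(theta)  # copy so the caller can backtrack
    for a, g in zip(args, gargs):
        if is_var(a):
            if theta.get(a, g) != g:   # variable already bound differently
                return None
            theta[a] = g
        elif a != g:                   # constant mismatch
            return None
    return theta

def subsumes(clause, example):
    """Does `clause` subsume the ground `example` clause?  I.e., is there a
    substitution sending its head and every body literal into the example?"""
    (head, body), (ehead, ebody) = clause, example
    theta = _match(head, ehead, {})
    if theta is None:
        return False
    def search(lits, theta):           # backtracking search over body literals
        if not lits:
            return True
        return any(
            (t := _match(lits[0], g, theta)) is not None and search(lits[1:], t)
            for g in ebody
        )
    return search(body, theta)

# Membership query: does the target entail
#   grandparent(ann, carl) :- parent(ann, bob), parent(bob, carl)?
target = (("grandparent", ("X", "Z")),
          [("parent", ("X", "Y")), ("parent", ("Y", "Z"))])
pos = (("grandparent", ("ann", "carl")),
       [("parent", ("ann", "bob")), ("parent", ("bob", "carl"))])
neg = (("grandparent", ("ann", "carl")),
       [("parent", ("ann", "bob"))])

print(subsumes(target, pos))  # -> True
print(subsumes(target, neg))  # -> False
```

An exact learner in this setting would pose such membership queries to decide which literals to keep while generalizing counterexamples returned by equivalence queries.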
Additional information
Chandra Reddy, Ph.D.: He is currently a doctoral student in the Department of Computer Science at Oregon State University and is completing his Ph.D. on June 30, 1998. His dissertation is entitled “Learning Hierarchical Decomposition Rules for Planning: An Inductive Logic Programming Approach.” He earlier received an M.Tech. in Artificial Intelligence and Robotics from the University of Hyderabad, India, and an M.Sc. (Tech.) in Computer Science from the Birla Institute of Technology and Science, India. His research interests fall broadly under machine learning and planning/scheduling, more specifically inductive logic programming, speedup learning, data mining, and hierarchical planning and optimization.
Prasad Tadepalli, Ph.D.: He holds an M.Tech. in Computer Science from the Indian Institute of Technology, Madras, India, and a Ph.D. from Rutgers University, New Brunswick, USA. He joined Oregon State University, Corvallis, as an assistant professor in 1989 and is now an associate professor in its Department of Computer Science. His main area of research is machine learning, including reinforcement learning, inductive logic programming, and computational learning theory, with applications to classification, planning, scheduling, manufacturing, and information retrieval.
Cite this article
Reddy, C., Tadepalli, P. Learning Horn definitions: Theory and an application to planning. New Gener Comput 17, 77–98 (1999). https://doi.org/10.1007/BF03037583