Abstract
We study agents situated in partially observable environments that do not have the resources to create conformant plans. Instead, they create partial conditional plans and learn from experience to choose the best of them for execution. Our agent employs an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge in order to choose the best plan for execution.
We show results of using the PROGOL learning algorithm to distinguish “bad” plans, and we present three modifications which make the algorithm fit this class of problems better. Specifically, we limit the search space by fixing the semantics of conditional branches within plans, we guide the search by specifying the relative relevance of portions of the knowledge base, and we integrate the learning algorithm into the agent architecture by allowing it to directly access the agent’s knowledge encoded in Active Logic. We report on experiments which show that these extensions lead to significantly better learning results.
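The overall loop the abstract describes — classify candidate partial plans as “bad” using learned rules, then execute the best remaining one — can be illustrated with a minimal sketch. All names below (`bad_plan`, `choose_plan`, the rule bodies, and the plan/knowledge encodings) are hypothetical stand-ins for clauses an ILP system such as PROGOL might induce, not the paper’s actual implementation.

```python
# Minimal sketch (hypothetical names and rules) of selecting the best
# conditional partial plan once a learner has induced "bad plan" rules.

def bad_plan(plan, knowledge):
    """Return True if any learned rule classifies the plan as bad."""
    # Hypothetical rule 1: a plan whose conditional branch tests a
    # fluent the agent cannot observe is classified as bad.
    for step in plan:
        if step[0] == "branch" and step[1] not in knowledge["observable"]:
            return True
    # Hypothetical rule 2: a plan relying on an action that failed in
    # similar past situations is classified as bad.
    if any(step[0] == "act" and step[1] in knowledge["failed_actions"]
           for step in plan):
        return True
    return False

def choose_plan(plans, knowledge):
    """Pick the first plan not classified as bad; fall back to the first."""
    for plan in plans:
        if not bad_plan(plan, knowledge):
            return plan
    return plans[0]

# Toy knowledge base and candidate plans (illustrative only).
knowledge = {"observable": {"door_open"}, "failed_actions": {"jump"}}
plans = [
    [("branch", "battery_full"), ("act", "move")],  # bad: unobservable test
    [("act", "jump"), ("act", "move")],             # bad: failed action
    [("branch", "door_open"), ("act", "move")],     # acceptable
]
print(choose_plan(plans, knowledge))
# → [('branch', 'door_open'), ('act', 'move')]
```

In the paper’s setting the rules are induced from observations and Active Logic deductions rather than hand-written, but the selection step has this shape: learned clauses filter candidates, and the agent executes a surviving plan.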
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Nowaczyk, S., Malec, J. (2007). Inductive Logic Programming Algorithm for Estimating Quality of Partial Plans. In: Gelbukh, A., Kuri Morales, Á.F. (eds) MICAI 2007: Advances in Artificial Intelligence. MICAI 2007. Lecture Notes in Computer Science(), vol 4827. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76631-5_34
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-76630-8
Online ISBN: 978-3-540-76631-5
eBook Packages: Computer Science (R0)