ABSTRACT
A significant challenge in developing planning systems for practical applications is the difficulty of acquiring the domain knowledge those systems require. One way to acquire this knowledge is to learn it from plan traces, but that approach typically requires a very large number of plan traces to converge. In this paper, we show that the problem of slow convergence can be circumvented by having the learner generate solution plans even before the planning domain is completely learned. Our empirical results show that these improvements reduce the size of the training set needed to find correct answers to a large percentage of planning problems in the test set.
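The key idea can be illustrated with a minimal sketch. Suppose a learner maintains, for each HTN method, the set of candidate preconditions consistent with the traces seen so far (a version-space-style hypothesis space). Even before this set narrows to a single hypothesis, the learner can commit to an answer whenever all remaining candidates agree. The code below is an illustrative sketch under simplifying assumptions (states as sets of ground literals, preconditions as conjunctions of such literals); the function names are hypothetical and are not the paper's API.

```python
def learn_specific(positive_states):
    """Most specific candidate precondition: the literals true in
    every state where the method was successfully applied."""
    specific = set(positive_states[0])
    for state in positive_states[1:]:
        specific &= set(state)  # drop literals falsified by some example
    return specific

def applicable(specific, state):
    """Answer before convergence: if the most specific candidate holds
    in the state, every weaker (more general) candidate holds as well,
    so all remaining hypotheses agree the method is applicable."""
    if specific <= set(state):
        return True
    return None  # hypotheses disagree: the learner cannot yet commit

# Two traces in which the method was applied:
traces = [
    {"at(home)", "have(key)", "daytime"},
    {"at(home)", "have(key)", "raining"},
]
spec = learn_specific(traces)          # {"at(home)", "have(key)"}
print(applicable(spec, {"at(home)", "have(key)", "night"}))  # True
print(applicable(spec, {"at(home)"}))                        # None
```

Here the learner answers "applicable" for the first query without having converged, since every candidate between the most specific and most general hypotheses is satisfied; the second query remains undecided.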