Abstract
Plan recognition in a dialogue system is the process of explaining why an utterance was made, in terms of the plans and goals that its speaker was pursuing in making it. I present a theory of how such an explanation may be judged on its merits, and propose three criteria for making these judgments: applicability, grounding, and completeness. Applicability measures how well the explanation serves the needs of the system that will use it; grounding measures how firmly the explanation is rooted in what is already known of the speaker and of the dialogue; completeness measures how fully the explanation covers the goals that motivated the production of the utterance. An explanation of an utterance is a good explanation to the extent that it meets these three criteria. Beyond forming the basis of a method for evaluating the merit of an explanation, the criteria are useful in designing and evaluating a plan recognition algorithm and its associated knowledge base.
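To make the three criteria concrete, the following is a minimal sketch of how an evaluation along these lines might be scored. All names, the set-based representation of goals and beliefs, and the equal-weight combination are illustrative assumptions, not the paper's actual method: the paper argues the criteria qualitatively rather than prescribing a numeric formula.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Explanation:
    """A candidate explanation of an utterance (hypothetical representation):
    the goals it ascribes to the speaker and the prior beliefs it rests on."""
    ascribed_goals: frozenset     # goals the explanation attributes to the speaker
    usable_by_system: frozenset   # ascribed goals the consuming system can act on
    presuppositions: frozenset    # facts about the speaker/dialogue it assumes

def applicability(e: Explanation) -> float:
    """Fraction of ascribed goals that the system using the explanation
    can actually do something with (criterion 1)."""
    if not e.ascribed_goals:
        return 0.0
    return len(e.usable_by_system & e.ascribed_goals) / len(e.ascribed_goals)

def grounding(e: Explanation, known_context: frozenset) -> float:
    """Fraction of the explanation's presuppositions already established
    by the speaker model or the dialogue so far (criterion 2)."""
    if not e.presuppositions:
        return 1.0
    return len(e.presuppositions & known_context) / len(e.presuppositions)

def completeness(e: Explanation, motivating_goals: frozenset) -> float:
    """Fraction of the goals that actually motivated the utterance
    that the explanation accounts for (criterion 3)."""
    if not motivating_goals:
        return 1.0
    return len(e.ascribed_goals & motivating_goals) / len(motivating_goals)

def merit(e: Explanation, known_context: frozenset,
          motivating_goals: frozenset) -> float:
    """Equal-weight average of the three criteria -- an assumed
    combination rule, chosen only for illustration."""
    return (applicability(e)
            + grounding(e, known_context)
            + completeness(e, motivating_goals)) / 3.0
```

Under this sketch, an explanation that ascribes goals the system cannot use, presupposes facts not in evidence, or misses goals that drove the utterance scores low on the corresponding criterion, mirroring the paper's argument that all three must be satisfied for an explanation to count as good.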
Mayfield, J. Evaluating Plan Recognition Systems: Three Properties of a Good Explanation. Artificial Intelligence Review 14, 351–376 (2000). https://doi.org/10.1023/A:1026411904041