
Evaluating Plan Recognition Systems: Three Properties of a Good Explanation

Published in: Artificial Intelligence Review

Abstract

Plan recognition in a dialogue system is the process of explaining why an utterance was made, in terms of the plans and goals that its speaker was pursuing in making it. I present a theory of how such an explanation may be judged on its merits, and propose three criteria for making such judgments: applicability, grounding, and completeness. The first criterion is the applicability of the explanation to the needs of the system that will use it. The second is the grounding of the explanation in what is already known of the speaker and of the dialogue. The third is the completeness of the explanation's coverage of the goals that motivated the production of the utterance. An explanation of an utterance is a good explanation to the extent that it meets these three criteria. Beyond providing a basis for evaluating the merit of an individual explanation, the criteria are useful in designing and evaluating a plan recognition algorithm and its associated knowledge base.
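The three criteria can be pictured as independent scores attached to a candidate explanation. The sketch below is purely illustrative: the paper proposes the criteria themselves, not any particular numeric scoring or aggregation, so the `[0, 1]` scales, the `Explanation` structure, and the unweighted average are all assumptions introduced here for concreteness.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """A candidate explanation of an utterance: the plans and goals
    ascribed to its speaker. Fields are hypothetical scores, not from
    the paper, each taken to lie in [0, 1]."""
    applicability: float  # usefulness to the system that will consume it
    grounding: float      # fit with what is known of the speaker and dialogue
    completeness: float   # coverage of the goals behind the utterance

def merit(e: Explanation) -> float:
    """Combine the three criteria into one merit score.
    An unweighted average is used purely for illustration; the paper
    does not prescribe an aggregation function."""
    return (e.applicability + e.grounding + e.completeness) / 3.0

# A well-grounded but incomplete explanation ranks below one that also
# covers the speaker's goals fully.
partial = Explanation(applicability=0.9, grounding=0.8, completeness=0.3)
full = Explanation(applicability=0.9, grounding=0.8, completeness=0.9)
assert merit(full) > merit(partial)
```

Under this reading, evaluating a plan recognition system amounts to asking how well the explanations it produces score on each axis, rather than collapsing everything into a single accuracy figure.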



Cite this article

Mayfield, J. Evaluating Plan Recognition Systems: Three Properties of a Good Explanation. Artificial Intelligence Review 14, 351–376 (2000). https://doi.org/10.1023/A:1026411904041
