
Building a user model implicitly from a cooperative advisory dialog


Abstract

This paper reviews existing methods for building user models to support adaptive, interactive systems, identifies significant problems with these approaches, and describes a new method for implicitly acquiring user models from an ongoing user-system dialog. Existing explicit user model acquisition methods, such as user-edited models or model-building dialogs, place an additional burden on the user and introduce artificial model-acquisition dialogs. Hand-coding stereotypes, another explicit acquisition method, is tedious and error-prone. On the other hand, implicit acquisition techniques such as computing presuppositions or entailments either draw too few inferences to be generally useful or too many to be trusted.

In contrast, this paper describes GUMAC, a General User Model Acquisition Component that uses heuristic rules to make default inferences about users' beliefs from their interaction with an advisory expert system. These rules are based on features of human action and conversation that constrain people's behavior and establish expectations about their knowledge. The application of these rules is illustrated with two examples of extended dialogs between users and an investment advisory system. Over the course of these conversations, GUMAC acquires an extensive model of each user's beliefs about the aspects of the domain considered in the dialog. These models, in turn, provide the sort of information an explanation generator needs to tailor the explanations the advisory system gives to its users.
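The abstract only outlines GUMAC's approach. As a rough illustration of the kind of heuristic, defeasible belief inferences it describes, the following minimal Python sketch updates a user model from dialog events. The event types, rule conditions, and class names are illustrative assumptions for this sketch, not the paper's actual rules or implementation.

    # A minimal sketch (not the actual GUMAC implementation) of heuristic rules
    # that draw default inferences about a user's beliefs from an advisory dialog.
    # All names and rules here are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class DialogEvent:
        speaker: str    # "user" or "system"
        act: str        # e.g. "ask", "assert", "explain"
        concepts: list  # domain concepts mentioned in the utterance

    @dataclass
    class UserModel:
        believed: set = field(default_factory=set)    # concepts the user is assumed to know
        unfamiliar: set = field(default_factory=set)  # concepts the user has asked about

        def add_belief(self, concept):
            self.believed.add(concept)
            self.unfamiliar.discard(concept)

    def apply_default_rules(model: UserModel, event: DialogEvent) -> None:
        """Defeasible default rules: later dialog evidence could retract them."""
        if event.speaker == "user" and event.act == "assert":
            # Rule 1: a user who asserts something about a concept presumably knows it.
            for c in event.concepts:
                model.add_belief(c)
        elif event.speaker == "user" and event.act == "ask":
            # Rule 2: asking about a concept suggests unfamiliarity with it,
            # unless the model already records it as believed.
            for c in event.concepts:
                if c not in model.believed:
                    model.unfamiliar.add(c)
        elif event.speaker == "system" and event.act == "explain":
            # Rule 3: after an unchallenged explanation, assume by default that
            # the user now believes the explained concepts.
            for c in event.concepts:
                model.add_belief(c)

    if __name__ == "__main__":
        model = UserModel()
        dialog = [
            DialogEvent("user", "assert", ["IRA", "tax deferral"]),
            DialogEvent("user", "ask", ["money market fund"]),
            DialogEvent("system", "explain", ["money market fund"]),
        ]
        for ev in dialog:
            apply_default_rules(model, ev)
        print("believed:", sorted(model.believed))
        print("unfamiliar:", sorted(model.unfamiliar))

In this sketch the inferences are stored as plain set membership; the point is only that each conversational move licenses a default assumption about the user's knowledge, which a fuller system would treat as retractable.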




Cite this article

Kass, R. Building a user model implicitly from a cooperative advisory dialog. User Model User-Adap Inter 1, 203–258 (1991). https://doi.org/10.1007/BF00141081
