
An explicit representation of reasoning failures

Conference paper
Case-Based Reasoning Research and Development (ICCBR 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1266)


Abstract

This paper focuses on the content, and the level of granularity, at which representations of the mental world should be placed in case-based explainers that employ introspective reasoning. That is, for a case-based reasoning system to represent thinking about the self, about the states and processes of reasoning, at what level of detail should one attempt to declaratively capture the contents of thought? Some claim that a set of just two mental primitives is sufficient to represent human utterances concerning verbs of thought such as “I forgot his birthday.” Alternatively, many in the CBR community have built systems that record elaborate traces of reasoning, keep track of knowledge dependencies or inferences, or encode extensive metaknowledge concerning the structure of internal rules and defaults. The position here is that a system should instead capture enough detail to causally represent a common set of reasoning-failure symptoms. I propose a simple model of expectation-driven reasoning, derive a taxonomy of reasoning failures from the model, and present a declarative representation of the failure symptoms, implemented in a CBR simulation. Such representations enable a system to explain reasoning failures by mapping from failure symptoms to the causal factors involved.
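
To make the idea concrete, the sketch below illustrates one way a taxonomy of failure symptoms could be derived from an expectation-driven model: compare what the reasoner expected against what it actually observed, then map the resulting symptom to a candidate causal factor. This is a minimal, hypothetical rendering, not the paper's actual representation; the symptom names, the trace structure, and the symptom-to-cause table are illustrative assumptions made here.

```python
# Illustrative sketch (not the paper's representation): classify a
# reasoning-failure symptom by comparing an expectation with an outcome.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Symptom(Enum):
    """Hypothetical failure-symptom labels for this sketch."""
    CONTRADICTION = auto()      # expected one outcome, observed a different one
    FALSE_EXPECTATION = auto()  # expected an outcome that never occurred
    SURPRISE = auto()           # observed an outcome with no prior expectation
    IMPASSE = auto()            # no expectation generated, nothing observed
    NONE = auto()               # expectation matched the outcome


@dataclass
class ReasoningTrace:
    """A single expectation/outcome pair from a reasoning episode."""
    expected: Optional[str]  # what the reasoner predicted, if anything
    actual: Optional[str]    # what was actually observed, if anything


def classify(trace: ReasoningTrace) -> Symptom:
    """Map a trace to a failure symptom by comparing expectation and outcome."""
    if trace.expected is not None and trace.actual is not None:
        if trace.expected == trace.actual:
            return Symptom.NONE
        return Symptom.CONTRADICTION
    if trace.expected is not None:
        return Symptom.FALSE_EXPECTATION
    if trace.actual is not None:
        return Symptom.SURPRISE
    return Symptom.IMPASSE


# A symptom-to-cause table stands in for the paper's mapping from failure
# symptoms to causal factors; these causes are placeholders, not the paper's.
CAUSAL_FACTORS = {
    Symptom.CONTRADICTION: "faulty domain knowledge or misindexed case",
    Symptom.FALSE_EXPECTATION: "overgeneral expectation",
    Symptom.SURPRISE: "missing expectation (novel situation)",
    Symptom.IMPASSE: "missing knowledge or retrieval failure",
}

if __name__ == "__main__":
    trace = ReasoningTrace(expected="suspect flees", actual="suspect surrenders")
    symptom = classify(trace)
    print(symptom.name, "->", CAUSAL_FACTORS.get(symptom, "no repair needed"))
```

The point of the sketch is only the shape of the mapping: a declarative record of expectation versus outcome suffices to name a symptom, and the symptom then indexes candidate causal factors for explanation and repair.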




Editor information

David B. Leake, Enric Plaza


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cox, M.T. (1997). An explicit representation of reasoning failures. In: Leake, D.B., Plaza, E. (eds) Case-Based Reasoning Research and Development. ICCBR 1997. Lecture Notes in Computer Science, vol 1266. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63233-6_493


  • DOI: https://doi.org/10.1007/3-540-63233-6_493


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63233-7

  • Online ISBN: 978-3-540-69238-6

