
Cognitive Defeasible Reasoning: the Extent to Which Forms of Defeasible Reasoning Correspond with Human Reasoning

  • Conference paper
  • Artificial Intelligence Research (SACAIR 2021)

Abstract

Classical logic forms the basis of knowledge representation and reasoning in AI. In the real world, however, classical logic alone is insufficient to describe human reasoning behaviour: it lacks the flexibility required for reasoning under uncertainty, reasoning with incomplete information and reasoning with new information, as humans must. In response, non-classical extensions to propositional logic have been formulated to provide non-monotonicity, and previous studies have shown that human reasoning exhibits non-monotonicity. This work is the product of merging three independent studies, each focusing on a different formalism for non-monotonic reasoning: KLM defeasible reasoning, AGM belief revision and KM belief update. For each of the postulates proposed to characterise these formalisms, we investigate the extent to which they correspond with human reasoning. We do this via three respective experiments, presenting each postulate in both concrete and abstract form. We discuss related work, our experiment design, testing and evaluation, and report the results of our experiments. We find evidence to believe that 1 out of 5 KLM defeasible reasoning postulates, 3 out of 8 AGM belief revision postulates and 4 out of 8 KM belief update postulates conform in both the concrete and abstract cases. For each experiment, we performed an additional investigation. In the KLM defeasible reasoning and AGM belief revision experiments, we analyse the explanations given by participants to determine whether the postulates have a normative or descriptive relationship with human reasoning. Overall, we find evidence suggesting that KLM defeasible reasoning has a normative relationship with human reasoning, while AGM belief revision has a descriptive one. In the KM belief update experiment, we discuss counter-examples to the KM postulates.

Supported by Centre for Artificial Intelligence Research (CAIR).


References

  1. Alchourrón, C.E., Gärdenfors, P., Makinson, D.: On the logic of theory change: partial meet contraction and revision functions. J. Symb. Logic 50, 510–530 (1985). https://doi.org/10.2307/2274239


  2. Buhrmester, M.: M-turk guide (2018). https://michaelbuhrmester.wordpress.com/mechanical-turk-guide/

  3. Creswell, J.W.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, vol. 4, pp. 245–253. SAGE Publications, Thousand Oaks (2014)


  4. Darwiche, A., Pearl, J.: On the logic of iterated belief revision. Artif. Intell. 89, 1–29 (1997). https://doi.org/10.1016/S0004-3702(96)00038-0


  5. Gärdenfors, P., Makinson, D.: Nonmonotonic inference based on expectations. Artif. Intell. 65(2), 197–245 (1994)


  6. Gärdenfors, P.: Belief Revision: An Introduction, pp. 1–26. Cambridge University Press, Cambridge (1992). https://doi.org/10.1017/CBO9780511526664.001


  7. Governatori, G., Terenziani, P.: Temporal extensions to defeasible logic. In: Orgun, M.A., Thornton, J. (eds.) AI 2007. LNCS (LNAI), vol. 4830, pp. 476–485. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-76928-6_49


  8. Hansson, S.: A Textbook of Belief Dynamics: Theory Change and Database Updating. Kluwer Academic Publishers, Berlin (1999)


  9. Herzig, A., Rifi, O.: Update operations: a review. In: Prade, H. (ed.) Proceedings of the 13th European Conference on Artificial Intelligence, pp. 13–17. John Wiley & Sons, Ltd., New York (1998)


  10. Amazon Mechanical Turk, Inc.: FAQs (2018). https://www.mturk.com/help

  11. Katsuno, H., Mendelzon, A.O.: On the difference between updating a knowledge base and revising it. In: Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, KR 1991, pp. 387–394. Morgan Kaufmann Publishers Inc., San Francisco (1991). http://dl.acm.org/citation.cfm?id=3087158.3087197

  12. Kennedy, R., Clifford, S., Burleigh, T., Jewell, R., Waggoner, P.: The shape of and solutions to the MTurk quality crisis, October 2018


  13. Kraus, S., Lehmann, D., Magidor, M.: Nonmonotonic reasoning, preferential models and cumulative logics. Artif. Intell. 44, 167–207 (1990)


  14. Krosnick, J., Presser, S.: Question and questionnaire design. In: Handbook of Survey Research, March 2009


  15. Lang, J.: Belief update revisited. In: Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI 2007, pp. 1534–1540, 2517–2522. Morgan Kaufmann Publishers Inc., San Francisco (2007). http://dl.acm.org/citation.cfm?id=1625275.1625681

  16. Lehmann, D.: Another perspective on default reasoning. Ann. Math. Artif. Intell. 15(1), 61–82 (1995). https://doi.org/10.1007/BF01535841


  17. Lieto, A., Minieri, A., Piana, A., Radicioni, D.: A knowledge-based system for prototypical reasoning. Connect. Sci. 27(2), 137–152 (2015). https://doi.org/10.1080/09540091.2014.956292


  18. Makinson, D.: Bridges between classical and nonmonotonic logic. Logic J. IGPL 11(1), 69–96 (2003)


  19. Martins, J., Shapiro, S.: A model for belief revision. Artif. Intell. 35, 25–79 (1988). https://doi.org/10.1016/0004-3702(88)90031-8


  20. Over, D.: Rationality and the normative/descriptive distinction. In: Koehler, D.J., Harvey, N. (eds.) Blackwell Handbook of Judgment and Decision Making, pp. 3–18. Blackwell Publishing Ltd., United States (2004)


  21. Pelletier, F., Elio, R.: The case for psychologism in default and inheritance reasoning. Synthese 146, 7–35 (2005). https://doi.org/10.1007/s11229-005-9063-z


  22. Peppas, P.: Belief revision. In: Harmelen, F., Lifschitz, V., Porter, B. (eds.) Handbook of Knowledge Representation. Elsevier Science, December 2008. https://doi.org/10.1016/S1574-6526(07)03008-8

  23. Pollock, J.: A theory of defeasible reasoning. Int. J. Intell. Syst. 6, 33–54 (1991)


  24. Ragni, M., Eichhorn, C., Bock, T., Kern-Isberner, G., Tse, A.P.P.: Formal nonmonotonic theories and properties of human defeasible reasoning. Minds Mach. 27(1), 79–117 (2017). https://doi.org/10.1007/s11023-016-9414-1


  25. Ragni, M., Eichhorn, C., Kern-Isberner, G.: Simulating human inferences in light of new information: a formal analysis. In: Kambhampati, S. (ed.) Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 16), pp. 2604–2610. IJCAI Press (2016)


  26. Ross, J., Zaldivar, A., Irani, L., Tomlinson, B.: Who are the turkers? Worker demographics in Amazon mechanical turk, January 2009


  27. Rott, H.: Change, Choice and Inference: A Study of Belief Revision and Nonmonotonic Reasoning. Oxford University Press (2001)


  28. Sullivan, G., Artino, R., Artino, J.: Analyzing and interpreting data from Likert-type scales. J. Grad. Med. Educ. 5(4), 541–542 (2013)


  29. Amazon Mechanical Turk: Qualifications and worker task quality best practices, April 2019. https://blog.mturk.com/qualifications-and-worker-task-quality-best-practices-886f1f4e03fc

  30. TurkPrime: After the bot scare: understanding what’s been happening with data collection on MTurk and how to stop it, September 2018. https://blog.turkprime.com/after-the-bot-scare-understanding-whats-been-happening-with-data-collection-on-mturk-and-how-to-stop-it

  31. Verheij, B.: Correct grounded reasoning with presumptive arguments. In: Michael, L., Kakas, A. (eds.) JELIA 2016. LNCS (LNAI), vol. 10021, pp. 481–496. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48758-8_31


  32. Witte, J.: Introduction to the special issue on web surveys. Sociol. Methods Res. 37(3), 283–290 (2009)



Author information

Correspondence to Clayton Kevin Baker.

A Appendix: Supplementary Information


1.1 A.1 External Resources

We have created a GitHub repository which contains additional resources for this project: our survey questions, our raw data and the codebooks used for our data analysis. As mentioned in the abstract, this work is the product of merging three independent papers, one each for KLM defeasible reasoning [13], AGM belief revision [1] and KM belief update [11]. These independent papers are also included in the GitHub repository. In addition, a summary of our project work is showcased on our project website.

1.2 A.2 Defeasible Reasoning

KLM Postulates. Table 1 presents the KLM postulates. For ease of comparison, we present the postulates translated in a manner similar to [27]. We write \(C_n(S)\) to represent the smallest set closed under classical consequence containing all sentences in S, and \(D_C(S)\) to represent the resulting set if defeasible consequence is used instead. \(D_C(S)\) is assumed defined only for finite S. \(C_n(\alpha )\) is an abbreviation for \(C_n(\{\alpha \})\), and \(D_C(\alpha )\) is an abbreviation for \(D_C(\{\alpha \})\).

Table 1. KLM postulates

Reflexivity states that every formula is a plausible consequence of itself. Left Logical Equivalence states that logically equivalent formulas have the same consequences. Right Weakening expresses the fact that one should accept as plausible consequences all that is logically implied by what one thinks are plausible consequences. And expresses the fact that the conjunction of two plausible consequences is a plausible consequence. Or says that any formula that is, separately, a plausible consequence of two different formulas should also be a plausible consequence of their disjunction. Cautious Monotonicity expresses the fact that learning a new fact, the truth of which could have been plausibly concluded, should not invalidate previous conclusions.
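For reference, these six properties can be written as inference rules on a plausible-consequence relation \(\mathrel{|\!\sim}\), reading \(\alpha \mathrel{|\!\sim} \beta \) as "\(\beta \) is a plausible consequence of \(\alpha \)". This is the standard rule form from [13], which may differ cosmetically from the translated notation of Table 1:

```latex
\begin{align*}
&\text{(Reflexivity)} && \alpha \mathrel{|\!\sim} \alpha \\
&\text{(Left Logical Equivalence)} && \frac{\models \alpha \leftrightarrow \beta \quad \alpha \mathrel{|\!\sim} \gamma}{\beta \mathrel{|\!\sim} \gamma} \\
&\text{(Right Weakening)} && \frac{\models \alpha \rightarrow \beta \quad \gamma \mathrel{|\!\sim} \alpha}{\gamma \mathrel{|\!\sim} \beta} \\
&\text{(And)} && \frac{\alpha \mathrel{|\!\sim} \beta \quad \alpha \mathrel{|\!\sim} \gamma}{\alpha \mathrel{|\!\sim} \beta \wedge \gamma} \\
&\text{(Or)} && \frac{\alpha \mathrel{|\!\sim} \gamma \quad \beta \mathrel{|\!\sim} \gamma}{\alpha \vee \beta \mathrel{|\!\sim} \gamma} \\
&\text{(Cautious Monotonicity)} && \frac{\alpha \mathrel{|\!\sim} \beta \quad \alpha \mathrel{|\!\sim} \gamma}{\alpha \wedge \beta \mathrel{|\!\sim} \gamma}
\end{align*}
```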

Additional Postulates. Table 2 presents additional defeasible reasoning postulates. Cut expresses the fact that one may, on the way to a plausible conclusion, first add a hypothesis to the facts known to be true, prove the plausibility of the conclusion from this enlarged set of facts, and then deduce (plausibly) the added hypothesis from the facts alone. Rational Monotonicity expresses the fact that only additional information whose negation was expected should force us to withdraw plausible conclusions previously drawn. Transitivity expresses that if the second fact is a plausible consequence of the first and the third fact is a plausible consequence of the second, then the third fact is also a plausible consequence of the first. Contraposition allows the contrapositive of the original proposition to be inferred, by negating both terms and reversing their order.
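These additional postulates admit the same rule form; this is again the standard presentation from the literature rather than a reproduction of Table 2, where \(\alpha \mathrel{|\!\not\sim} \lnot \beta \) means that \(\lnot \beta \) is not a plausible consequence of \(\alpha \):

```latex
\begin{align*}
&\text{(Cut)} && \frac{\alpha \wedge \beta \mathrel{|\!\sim} \gamma \quad \alpha \mathrel{|\!\sim} \beta}{\alpha \mathrel{|\!\sim} \gamma} \\
&\text{(Rational Monotonicity)} && \frac{\alpha \mathrel{|\!\sim} \gamma \quad \alpha \mathrel{|\!\not\sim} \lnot\beta}{\alpha \wedge \beta \mathrel{|\!\sim} \gamma} \\
&\text{(Transitivity)} && \frac{\alpha \mathrel{|\!\sim} \beta \quad \beta \mathrel{|\!\sim} \gamma}{\alpha \mathrel{|\!\sim} \gamma} \\
&\text{(Contraposition)} && \frac{\alpha \mathrel{|\!\sim} \beta}{\lnot\beta \mathrel{|\!\sim} \lnot\alpha}
\end{align*}
```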

Table 2. Additional postulates

1.3 A.3 Belief Revision

AGM Postulates. Table 3 presents the AGM postulates. \(K*\alpha \) denotes the belief set that results from revising the knowledge base K with the sentence \(\alpha \). We assume that K is a set of sentences closed under classical deductive consequence.

Table 3. AGM postulates

Closure implies logical omniscience on the part of the ideal agent or reasoner, including after revision of their belief set. Success expresses that the new information should always be part of the new belief set. Inclusion and Vacuity are motivated by the principle of minimal change. Together, they express that when the new information \(\alpha \) is consistent with the belief set or knowledge base K, belief revision amounts to expanding K by \(\alpha \), i.e. none of the original beliefs need to be withdrawn. Consistency expresses that the agent should prioritise consistency; the only acceptable case of not doing so is when the new information \(\alpha \) is inherently inconsistent, in which case Success overrules Consistency. Extensionality expresses that the content, i.e. the belief represented, and not the syntax, affects the revision process: logically equivalent sentences cause logically equivalent changes to the belief set. Super-expansion and Sub-expansion are motivated by the principle of minimal change. Together, they express that for two propositions \(\alpha \) and \(\phi \), if revising belief set K by \(\alpha \) yields a belief set K' consistent with \(\phi \), then the effect of revising K with \(\alpha \wedge \phi \) can be obtained by simply expanding K' with \(\phi \). In short, \(K * (\alpha \wedge \phi ) = (K * \alpha ) + \phi \).
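As a reference point, the eight AGM revision postulates are standardly stated as follows, where \(K + \alpha \) denotes expansion, i.e. \(C_n(K \cup \{\alpha \})\). This is the usual formulation from [1, 22] and may differ in notation from Table 3:

```latex
\begin{align*}
&(K{*}1)\ \text{Closure:} && K * \alpha = C_n(K * \alpha) \\
&(K{*}2)\ \text{Success:} && \alpha \in K * \alpha \\
&(K{*}3)\ \text{Inclusion:} && K * \alpha \subseteq K + \alpha \\
&(K{*}4)\ \text{Vacuity:} && \text{if } \lnot\alpha \notin K \text{, then } K + \alpha \subseteq K * \alpha \\
&(K{*}5)\ \text{Consistency:} && K * \alpha \text{ is consistent if } \alpha \text{ is consistent} \\
&(K{*}6)\ \text{Extensionality:} && \text{if } \models \alpha \leftrightarrow \phi \text{, then } K * \alpha = K * \phi \\
&(K{*}7)\ \text{Super-expansion:} && K * (\alpha \wedge \phi) \subseteq (K * \alpha) + \phi \\
&(K{*}8)\ \text{Sub-expansion:} && \text{if } \lnot\phi \notin K * \alpha \text{, then } (K * \alpha) + \phi \subseteq K * (\alpha \wedge \phi)
\end{align*}
```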

Table 4. KM postulates

1.4 A.4 Belief Update

KM Postulates. Table 4 presents the KM postulates. For ease of comparison, the postulates have been rephrased as in the AGM paradigm [22]. We use \(\diamond \) to represent the update operator. U1 states that updating with a new fact must ensure that the fact is a consequence of the update. U2 states that updating on a fact that is already known has no effect. U3 states the reasonable requirement that we cannot lapse into impossibility unless we either start with it or are directly confronted by it. U4 requires that syntax is irrelevant to the result of an update. U5 says that first updating on \(\alpha \) and then simply adding the new information \(\gamma \) is at least as strong as (i.e. entails) updating on the conjunction of \(\alpha \) and \(\gamma \). U6 states that if updating on \(\alpha _1\) entails \(\alpha _2\) and updating on \(\alpha _2\) entails \(\alpha _1\), then the effect of updating on either is equivalent. U7 applies only to complete knowledge bases, that is, knowledge bases with a single model: if some situation results from updating a complete K on \(\alpha \) and also from updating that K on \(\phi \), then it must also result from updating that K on \(\alpha \vee \phi \). U8 is the disjunction rule. U*9 is not necessary in the propositional formulation of the postulates and is listed only for completeness; it was not tested in the survey.
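For reference, the eight KM postulates are standardly stated as follows for a knowledge base K represented as a propositional formula. This is the usual formulation from [11, 22] and may differ in notation from Table 4:

```latex
\begin{align*}
&(U1) && K \diamond \alpha \models \alpha \\
&(U2) && \text{if } K \models \alpha \text{, then } K \diamond \alpha \equiv K \\
&(U3) && \text{if } K \text{ and } \alpha \text{ are both satisfiable, then } K \diamond \alpha \text{ is satisfiable} \\
&(U4) && \text{if } K_1 \equiv K_2 \text{ and } \alpha_1 \equiv \alpha_2 \text{, then } K_1 \diamond \alpha_1 \equiv K_2 \diamond \alpha_2 \\
&(U5) && (K \diamond \alpha) \wedge \gamma \models K \diamond (\alpha \wedge \gamma) \\
&(U6) && \text{if } K \diamond \alpha_1 \models \alpha_2 \text{ and } K \diamond \alpha_2 \models \alpha_1 \text{, then } K \diamond \alpha_1 \equiv K \diamond \alpha_2 \\
&(U7) && \text{if } K \text{ is complete, then } (K \diamond \alpha) \wedge (K \diamond \phi) \models K \diamond (\alpha \vee \phi) \\
&(U8) && (K_1 \vee K_2) \diamond \alpha \equiv (K_1 \diamond \alpha) \vee (K_2 \diamond \alpha)
\end{align*}
```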

Fig. 1. Hit rate (%) for defeasible reasoning postulates

Fig. 2. Hit rate (%) for belief revision postulates

1.5 A.5 Results

In Fig. 1, we show the Hit Rate (%) for each defeasible reasoning postulate. In Fig. 2, we show the Hit Rate (%) for each belief revision postulate. In Fig. 3, we show the Hit Rate (%) for each belief update postulate.

Fig. 3. Hit rate (%) for belief update postulates


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Baker, C.K., Denny, C., Freund, P., Meyer, T. (2020). Cognitive Defeasible Reasoning: the Extent to Which Forms of Defeasible Reasoning Correspond with Human Reasoning. In: Gerber, A. (eds) Artificial Intelligence Research. SACAIR 2021. Communications in Computer and Information Science, vol 1342. Springer, Cham. https://doi.org/10.1007/978-3-030-66151-9_13


  • DOI: https://doi.org/10.1007/978-3-030-66151-9_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66150-2

  • Online ISBN: 978-3-030-66151-9

  • eBook Packages: Computer Science (R0)
