Predictive Theory of Mind Models Based on Public Announcement Logic

Conference paper. In: Dynamic Logic. New Trends and Applications (DaLí 2023).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14401)

Abstract

Epistemic logic can be used to reason about statements such as ‘I know that you know that I know that \(\varphi \)’. In this logic, and its extensions, it is commonly assumed that agents can reason about epistemic statements of arbitrary nesting depth. In contrast, empirical findings on Theory of Mind, the ability to (recursively) reason about mental states of others, show that human recursive reasoning capability has an upper bound.

In the present paper we work towards resolving this disparity by proposing some elements of a logic of bounded Theory of Mind, built on Public Announcement Logic. Using this logic, and a statistical method called Random-Effects Bayesian Model Selection, we estimate the distribution of Theory of Mind levels in the participant population of a previous behavioral experiment. Despite not modeling stochastic behavior, we find that approximately three-quarters of participants’ decisions can be described using Theory of Mind. In contrast to previous empirical research, our models estimate the majority of participants to be second-order Theory of Mind users.

Notes

  1. For solving the game of Aces and Eights, all players also need to be truthful, perfect logical reasoners, and there needs to be common knowledge of this.

  2. Note that this differs from [11], where the horizon of a player i at (M, s) contains all states player i can ‘reach’ by taking one step along one of her own edges, followed by any number of steps along any agent’s edges. Closer to our intentions, but more general, is the notion of admissibility on E [22, 24].

  3. We use \(l=0\) as the only special case, but for situations other than Aces and Eights we need a more general solution, found in Appendix A. Furthermore, our semantics can be made equivalent to one with the usual knowledge operator if we ‘unfold’ our models such that we have \(R : (A \times \mathbb {N}) \rightarrow \mathcal {P}(S \times S)\).

  4. All code used for this article can be found at https://github.com/jdtoprug/EpistemicToMProject and doi: 10.5281/zenodo.8382660. Note that we implemented the model updates needed for Aces and Eights and related games, and not a general logical framework.

  5. We cannot test these predictions as we do not have access to the computational power required to fit the SUWEB model of [8] in a reasonable amount of time.

  6. Because knowledge can be false, using ‘knowledge’ and K may not be entirely accurate. We use it because the model for Aces and Eights is S5, but for future work we recommend using ‘beliefs’ and B.

References

  1. Arslan, B., Verbrugge, R., Taatgen, N., Hollebrandse, B.: Accelerating the development of second-order false belief reasoning: a training study with different feedback methods. Child Dev. 91(1), 249–270 (2020). https://doi.org/10.1111/cdev.13186

  2. Arthaud, F., Rinard, M.: Depth-bounded epistemic logic. In: Verbrugge, L.C. (ed.) Proceedings of the 19th Conference on Theoretical Aspects of Rationality and Knowledge (TARK 23), pp. 46–65 (2023). https://doi.org/10.4204/EPTCS.379.7

  3. Baltag, A., Moss, L.S., Solecki, S.: The logic of public announcements, common knowledge, and private suspicions. In: Gilboa, I. (ed.) Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge (TARK 98), pp. 43–46 (1998)

  4. Baltag, A., Moss, L.S., Solecki, S.: The logic of public announcements, common knowledge, and private suspicions. In: Arló-Costa, H., Hendricks, V.F., van Benthem, J. (eds.) Readings in Formal Epistemology. SGTP, vol. 1, pp. 773–812. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-20451-2_38

  5. Barrett, J.L., Richert, R.A., Driesenga, A.: God’s beliefs versus mother’s: The development of nonhuman agent concepts. Child Dev. 72(1), 50–65 (2001). https://doi.org/10.1111/1467-8624.00265

  6. Blackburn, P., De Rijke, M., Venema, Y.: Modal Logic. Cambridge Tracts in Theoretical Computer Science, no. 53. Cambridge University Press, Cambridge (2001)

  7. Camerer, C.F., Ho, T.H., Chong, J.K.: A cognitive hierarchy model of games. Q. J. Econ. 119(3), 861–898 (2004). https://doi.org/10.1162/0033553041502225

  8. Cedegao, Z., Ham, H., Holliday, W.H.: Does Amy know Ben knows you know your cards? A computational model of higher-order epistemic reasoning. In: Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, pp. 2588–2594 (2021)

  9. De Weerd, H., Diepgrond, D., Verbrugge, R.: Estimating the use of higher-order theory of mind using computational agents. BE J. Theor. Econ. 18(2) (2018). https://doi.org/10.1515/bejte-2016-0184

  10. De Weerd, H., Verbrugge, L.C., Verheij, B.: How much does it help to know what she knows you know? An agent-based simulation study. Artif. Intell. 199–200, 67–92 (2013). https://doi.org/10.1016/j.artint.2013.05.004

  11. Dégremont, C., Kurzen, L., Szymanik, J.: Exploring the tractability border in epistemic tasks. Synthese 191(3), 371–408 (2014). https://doi.org/10.1007/s11229-012-0215-7

  12. Devaine, M., Hollard, G., Daunizeau, J.: The social Bayesian brain: Does mentalizing make a difference when we learn? PLoS Comput. Biol. 10(12), e1003992 (2014). https://doi.org/10.1371/journal.pcbi.1003992

  13. Etel, E., Slaughter, V.: Theory of mind and peer cooperation in two play contexts. J. Appl. Dev. Psychol. 60, 87–95 (2019). https://doi.org/10.1016/j.appdev.2018.11.004

  14. Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.: Reasoning About Knowledge. MIT Press, Cambridge (1995)

  15. Gierasimczuk, N., Szymanik, J.: A note on a generalization of the muddy children puzzle. In: Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 257–264 (2011). https://doi.org/10.1145/2000378.2000409

  16. Goodie, A.S., Doshi, P., Young, D.L.: Levels of theory-of-mind reasoning in competitive games. J. Behav. Decis. Mak. 25(1), 95–108 (2012). https://doi.org/10.1002/bdm.717

  17. Hall-Partee, B.: Semantics-mathematics or psychology? In: Bäuerle, R., Egli, U., Von Stechow, A. (eds.) Semantics from Different Points of View, SSLC, vol. 6, pp. 1–14. Springer, Berlin, Heidelberg (1979). https://doi.org/10.1007/978-3-642-67458-7_1

  18. Hayashi, H.: Possibility of solving complex problems by recursive thinking. Jpn. J. Psychol. 73(2), 179–185 (2002). https://doi.org/10.4992/jjpsy.73.179

  19. Hintikka, J.: Knowledge and Belief: An Introduction to the Logic of the Two Notions. Cornell University Press, Ithaca, NY, USA (1962)

  20. Jonker, C.M., Treur, J.: Modelling the dynamics of reasoning processes: Reasoning by assumption. Cogn. Syst. Res. 4(2), 119–136 (2003). https://doi.org/10.1016/S1389-0417(02)00102-X

  21. Kahneman, D., Slovic, P., Tversky, A.: Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge (1982)

  22. Kaneko, M., Suzuki, N.Y.: Epistemic logic of shallow depths and game theoretical applications. In: Advances In Modal Logic, vol. 3, pp. 279–298. World Scientific (2002). https://doi.org/10.1142/9789812776471_0015

  23. Keysar, B., Lin, S., Barr, D.J.: Limits on theory of mind use in adults. Cognition 89(1), 25–41 (2003). https://doi.org/10.1016/S0010-0277(03)00064-7

  24. Kline, J.J.: Evaluations of epistemic components for resolving the muddy children puzzle. Econ. Theor. 53(1), 61–83 (2013). https://doi.org/10.1007/s00199-012-0735-x

  25. McCarthy, J.: Formalization of two puzzles involving knowledge. Formalizing Common Sense: Papers by John McCarthy, pp. 158–166 (1990)

  26. Meijering, B., van Rijn, H., Taatgen, N.A., Verbrugge, R.: What eye movements can tell about theory of mind in a strategic game. PLoS ONE 7(9), 1–8 (2012). https://doi.org/10.1371/journal.pone.0045961

  27. Nagel, R.: Unraveling in guessing games: An experimental study. Am. Econ. Rev. 85(5), 1313–1326 (1995)

  28. Paal, T., Bereczkei, T.: Adult theory of mind, cooperation, Machiavellianism: The effect of mindreading on social relations. Pers. Individ. Differ. 43(3), 541–551 (2007). https://doi.org/10.1016/j.paid.2006.12.021

  29. Plaza, J.: Logics of public announcements. In: Emrich, M., Pfeifer, M., Hadzikadic, M., Ras, Z. (eds.) Proceedings of the 4th International Symposium on Methodologies for Intelligent Systems: Poster Session Program, pp. 201–216. Oak Ridge National Laboratory (1989)

  30. Premack, D., Woodruff, G.: Does the chimpanzee have a theory of mind? Behav. Brain Sci. 1(4), 515–526 (1978). https://doi.org/10.1017/S0140525X00076512

  31. Solaki, A.: The effort of reasoning: modelling the inference steps of boundedly rational agents. J. Log. Lang. Inform. 31(4), 529–553 (2022). https://doi.org/10.1007/s10849-022-09367-w

  32. Stahl, D.O., II, Wilson, P.W.: Experimental evidence on players’ models of other players. J. Econ. Behav. Organ. 25(3), 309–327 (1994). https://doi.org/10.1016/0167-2681(94)90103-1

  33. Stephan, K.E., Penny, W.D., Daunizeau, J., Moran, R.J., Friston, K.J.: Bayesian model selection for group studies. Neuroimage 46(4), 1004–1017 (2009). https://doi.org/10.1016/j.neuroimage.2009.03.025

  34. Top, J.D., Verbrugge, R., Ghosh, S.: An automated method for building cognitive models for turn-based games from a strategy logic. Games 9(3), 44 (2018). https://doi.org/10.3390/g9030044

  35. Top, J.D., Verbrugge, R., Ghosh, S.: Automatically translating logical strategy formulas into cognitive models. In: 16th International Conference on Cognitive Modelling, pp. 182–187 (2018)

  36. Van Ditmarsch, H.: Dynamics of lying. Synthese 191(5), 745–777 (2014)

  37. Van Ditmarsch, H., van der Hoek, W., Kooi, B.: Dynamic Epistemic Logic, Synthese Library, vol. 337. Springer Science & Business Media, Dordrecht, Netherlands (2007). https://doi.org/10.1007/978-1-4020-5839-4

  38. Veltman, K., de Weerd, H., Verbrugge, R.: Training the use of theory of mind using artificial agents. J. Multimodal User Interfaces 13(1), 3–18 (2019). https://doi.org/10.1007/s12193-018-0287-x

  39. Verbrugge, R.: Logic and social cognition: The facts matter, and so do computational models. J. Philos. Log. 38(6), 649–680 (2009). https://doi.org/10.1007/s10992-009-9115-9

  40. Verbrugge, R., Meijering, B., Wierda, S., Van Rijn, H., Taatgen, N.: Stepwise training supports strategic second-order theory of mind in turn-taking games. Judgm. Decis. Mak. 13(1), 79–98 (2018). https://doi.org/10.1017/S1930297500008846

  41. Wimmer, H., Perner, J.: Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition 13(1), 103–128 (1983). https://doi.org/10.1016/0010-0277(83)90004-5

Acknowledgements

This research was funded by the project ‘Hybrid Intelligence: Augmenting Human Intellect’, a 10-year Gravitation programme funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, grant number 024.004.022. We would also like to thank our four anonymous reviewers and prof. dr. Hans van Ditmarsch for providing us with helpful comments, suggestions, and discussion.

Author information

Corresponding author: Jakob Dirk Top.

Appendices

Appendix A

This appendix describes how to extend our work beyond Aces and Eights.

In [22], concatenation of sequences is defined: \(e \circ e' = (i_1,\dotsc ,i_m,j_1,\dotsc ,j_k)\) for \(e = (i_1,\dotsc ,i_m)\), \(e' = (j_1,\dotsc ,j_k)\). The empty sequence is \(\epsilon \), and \(e \circ \epsilon = \epsilon \circ e = e\).

The epistemic depth \(\delta (F)\) of a formula F is inductively defined as follows:

D0: \(\delta (p) = \{\epsilon \}\) for any \(p \in P\);

D1: \(\delta (\lnot F) = \delta (F)\);

D2: \(\delta (F \rightarrow G) = \delta (F) \cup \delta (G)\);

D3: \(\delta (\wedge \varPhi ) = \delta (\vee \varPhi ) = \cup _{F\in \varPhi }\delta (F)\);

D4: \(\delta (K_i(F)) = \{(i)\circ e : e \in \delta (F)\}\);

D5: \(\delta ([F]G) = \{f \circ e : e \in \delta (F), f \in \delta (G)\}\).

We added D5, which is not present in [22]. Moving to novel work, we define the ToM structure \(\mathcal {T}_{(p,l)}\), with \(p \in A\) and \(l \in \mathbb {N}\), inductively as follows:

Base case: \(e \in \mathcal {T}_{(p,l)}\) for every \(e = (i_1, \dotsc , i_m)\) where \(0 \le m \le l\), and for every \(i_j \in e\) we have that \(i_j \in A\) and [if \(0 < j < m\), then \(i_j \ne i_{j+1}\)]. If \(m \le 0\) then \(e = \epsilon \).

Inductive step 1: If \(e \in \mathcal {T}_{(p,l)}\) and \(l \ge 0\), then \((p) \circ e \in \mathcal {T}_{(p,l)}\).

Inductive step 2: If, for any \(e_1, i, e_2\), \(e_1 \circ ((i) \circ e_2) \in \mathcal {T}_{(p,l)}\), then \((e_1 \circ (i)) \circ ((i) \circ e_2) \in \mathcal {T}_{(p,l)}\).

Our base case corresponds to the requirement that the number of ‘perspective switches’ is bounded by an agent’s ToM order. Inductive steps 1 and 2 cover steps that do not switch perspectives and therefore require no additional ToM.

For zero or more repetitions of i we write \(i^*\). As an example, consider \(A = \{0, 1\}\). Then, \(\mathcal {T}_{(0,2)} = \{\epsilon , (0^*), (1^*), (0^*,1^*), (1^*,0^*), (0^*,1^*,0^*)\}\).
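To make these definitions concrete, the following Python sketch computes \(\delta \) for formulas encoded as nested tuples and tests whether every sequence in \(\delta (\varphi )\) lies in \(\mathcal {T}_{(p,l)}\). It is an illustration only, not the released implementation of footnote 4; the formula encoding and the function names are assumptions made for this example.

    # Illustrative sketch only; the formula encoding and helper names are assumptions.
    # Formulas are nested tuples: ("atom", "p"), ("not", F), ("implies", F, G),
    # ("and", [F, ...]), ("or", [F, ...]), ("K", i, F), and ("announce", F, G) for [F]G.

    def delta(formula):
        """Epistemic depth: a set of agent sequences (tuples), following D0-D5."""
        kind = formula[0]
        if kind == "atom":                                     # D0
            return {()}
        if kind == "not":                                      # D1
            return delta(formula[1])
        if kind == "implies":                                  # D2
            return delta(formula[1]) | delta(formula[2])
        if kind in ("and", "or"):                              # D3
            return set().union(*(delta(f) for f in formula[1]))
        if kind == "K":                                        # D4: prepend agent i
            return {(formula[1],) + e for e in delta(formula[2])}
        if kind == "announce":                                 # D5: [F]G
            return {f + e for e in delta(formula[1]) for f in delta(formula[2])}
        raise ValueError(f"unknown constructor: {kind}")

    def in_tom_structure(seq, p, l):
        """Membership of an agent sequence in T_(p,l)."""
        # Inductive step 2: adjacent repetitions never cost extra ToM, so collapse them.
        collapsed = []
        for agent in seq:
            if not collapsed or collapsed[-1] != agent:
                collapsed.append(agent)
        # Inductive step 1: one leading occurrence of p is free.
        if collapsed and collapsed[0] == p:
            collapsed = collapsed[1:]
        # Base case: the remaining alternating sequence may have length at most l.
        return len(collapsed) <= l

    # Example: 'player 0 knows that player 1 knows that player 0 knows p'.
    phi = ("K", 0, ("K", 1, ("K", 0, ("atom", "p"))))
    print(delta(phi))                                           # {(0, 1, 0)}
    print(all(in_tom_structure(e, 0, 2) for e in delta(phi)))   # True: within ToM-2
    print(all(in_tom_structure(e, 0, 1) for e in delta(phi)))   # False: needs ToM-2

The membership test uses the observation that, after collapsing adjacent repetitions (inductive step 2) and dropping one leading occurrence of p (inductive step 1), a sequence belongs to \(\mathcal {T}_{(p,l)}\) exactly when at most l agents remain (base case).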

We then modify our semantic definition of \([\varphi ]\psi \) in Definition 3:

$$M, (s, (i,l)) \models [\varphi ]\psi \quad \Leftrightarrow \quad M, (s, (i,l)) \models \varphi \text { implies } M|\varphi , (s, (i,l)) \models \psi $$

where we define the model restriction \(M|\varphi = (S, R, V, T')\) with \((i,l)\in T'(s)\) iff \((i,l)\in T(s)\) and [\(M, (s, (i,l))\models \varphi \) or \(\delta (\varphi ) \not \subseteq \mathcal {T}_{(i,l)}\)].

Note that \(\delta (\varphi ) \not \subseteq \mathcal {T}_{(i,0)}\) is equivalent to “\(\varphi \) contains an operator \(K_j\) with \(i \ne j\)”, as \(\mathcal {T}_{(i,0)} = \{\epsilon ,(i^*)\}\). With this substitution, our proofs for Theorems 1–3 hold, and our models can be used with any announcements.
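Continuing the sketch above (and reusing its delta and in_tom_structure), the restriction \(M|\varphi \) can be computed as follows when a model is represented as a tuple (S, R, V, T), with T mapping each state to a set of (agent, level) pairs. The truth predicate satisfies, implementing Definition 3, is assumed to be supplied elsewhere, since that definition is not reproduced in this appendix; all names here are illustrative assumptions.

    def restrict(model, phi, satisfies):
        """Compute M|phi: a tuple (i, l) survives at state s iff phi holds there
        for (i, l), or phi lies outside (i, l)'s ToM structure and is ignored."""
        S, R, V, T = model
        T_new = {
            s: {(i, l) for (i, l) in T[s]
                if satisfies(model, s, i, l, phi)
                or not all(in_tom_structure(e, i, l) for e in delta(phi))}
            for s in S
        }
        return (S, R, V, T_new)

    def satisfies_announcement(model, s, i, l, phi, psi, satisfies):
        """Semantic clause for [phi]psi at (s, (i, l)): phi there implies psi in M|phi."""
        if not satisfies(model, s, i, l, phi):
            return True
        return satisfies(restrict(model, phi, satisfies), s, i, l, psi)

The second disjunct in restrict is what lets announcements whose depth exceeds an agent's ToM bound leave her tuples untouched, in line with the substitution discussed above.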

Appendix B

Table 1. Tuples at each relevant state during a series of announcements.

There are two games where non-stochastic EL-2 answers correctly whereas our ToM-2 models answer incorrectly (see footnote 6). In both of these, the participant is player 0. The card distributions in these games are AA8A88 and 8A8AAA. For the former, we show the removal of tuples after each announcement in Table 1, where each column is a relevant state and each row corresponds to an announcement. Column ordering corresponds to the order of states in Fig. 1. The rightmost column shows the next announcement, where the index denotes the player, k is ‘I know my cards’, and \(k\lnot \) is ‘I do not know my cards’. Tuples that will be removed after the next announcement are marked in the table. After six announcements, player 0 at ToM-2 will incorrectly answer ‘I know my cards’, whereas at ToM-5 she will answer ‘I do not know my cards’, which is the correct answer. We recommend using Fig. 1 as a companion when working through the example.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Top, J.D., Jonker, C., Verbrugge, R., de Weerd, H. (2024). Predictive Theory of Mind Models Based on Public Announcement Logic. In: Gierasimczuk, N., Velázquez-Quesada, F.R. (eds) Dynamic Logic. New Trends and Applications. DaLí 2023. Lecture Notes in Computer Science, vol 14401. Springer, Cham. https://doi.org/10.1007/978-3-031-51777-8_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-51777-8_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-51776-1

  • Online ISBN: 978-3-031-51777-8

  • eBook Packages: Computer Science, Computer Science (R0)
