Research Article | Open Access

On the Complexity of Model Checking Knowledge and Time

Published: 16 January 2024


Abstract

We establish the precise complexity of the model-checking problem for the main logics of knowledge and time. While this problem was known to be non-elementary for agents with perfect recall, with a number of exponentials that increases with the alternation of knowledge operators, the precise complexity of the problem when the maximum alternation is fixed has been an open problem for 20 years. We close it by establishing improved upper bounds for CTL* with knowledge and providing matching lower bounds that also apply for epistemic extensions of LTL and CTL. We also study the model-checking problem for these logics on systems satisfying the “no learning” property, introduced by Halpern and Vardi in their taxonomy of logics of knowledge and time, and we settle the complexity in almost all cases.


1 INTRODUCTION

A central aspect of multi-agent systems is that agents only have partial knowledge about the system [47]. This implies that some facts about the system are known to them, while others are not. Epistemic logics are a standard framework that has been developed precisely to model and reason about what agents know about the world and about each other’s knowledge—the latter is called higher-order knowledge. To talk about behaviors of multi-agent systems, epistemic logics have been combined with temporal logics such as LTL [49], CTL, and CTL* [20]. The resulting epistemic temporal logics can express properties about the evolution of multi-agent systems and agents’ knowledge over time. These logics have been applied to the modeling and analysis of, e.g., distributed protocols [23, 36], information flow and cryptographic protocols [26, 60], information-theoretically optimal behaviors [14, 46], and knowledge-based programs [61].

The satisfiability problem for this family of logics has been thoroughly studied in References [28, 29], which categorize epistemic temporal logics according to a number of criteria: (1) is the system synchronous or asynchronous; (2) does it have a unique initial state known to all agents; (3) do agents have no memory or perfect recall; (4) can agents learn; (5) is the temporal part of the language linear or branching; (6) can the epistemic part of the language talk about the knowledge of several agents; and (7) can it talk about common knowledge. By considering all the possible combinations, the authors identify 96 logics and study their satisfiability/validity problem. Sound and complete axiomatizations for those logics that admit one are also provided in Reference [27].

While the picture is clear for satisfiability and axiomatization of these logics, our understanding remains partial regarding the model-checking problem, which is arguably at least as important for the verification of multi-agent systems as satisfiability or axiomatization (see, for instance, Reference [16] for a survey on the use of model checking of epistemic temporal logics in the verification of security protocols).

For agents with no memory, the situation is well understood. In particular, in the asynchronous setting, adding knowledge operators for multiple agents, and even common-knowledge operators, to LTL, CTL, or CTL* does not increase the complexity of model checking: It is Pspace-complete for extensions of LTL and CTL* [35] and Ptime-complete for extensions of CTL [23, 39]. In the synchronous setting, i.e., for agents without memory but with access to a global clock (the so-called clock semantics), the situation is the same, except for extensions of CTL, for which model checking becomes PH-hard1 and lies in Pspace [22, 32].

For agents with a bounded amount of memory, the situation is quite similar, as this case easily reduces to the memoryless case by storing the memory inside the positions of the model. If the memory of agents is given as finite-state machines (as in Reference [19]), then the reduction is polynomial in the size of the formula and the model (including the memory machines) and exponential in the number of agents. If the number of agents is fixed, then the complexity of model checking the different logics is therefore the same as in the memoryless setting. If instead agents remember a “window” containing the last \(n\) states, as in Reference [8], then the reduction is still polynomial in the size of the formula and the model, exponential in \(n\) if \(n\) is given in unary, and doubly exponential if it is given in binary (see Reference [8] for a detailed study of this case for CTLK).

For agents with perfect recall, the model-checking problem is known to be undecidable when common knowledge is part of the language [59], so most works focus on the extensions of LTL, CTL, and CTL* with knowledge but no common knowledge operators, denoted, respectively, LTLK, CTLK, and \(\texttt { CTL}^{*}\texttt {K}\). For these logics, the model-checking problem is known to be decidable but non-elementary [1, 3, 12, 17, 59]. It was noted that the non-elementary blow-up depends on the alternation depth of formulas, the maximal number of alternations between knowledge operators for different agents: Each additional alternation forces the model-checking procedure to maintain an additional layer of information about what agents know. For a fixed alternation depth \(k\ge 1\), in the synchronous setting, model checking is known to be in \(k\)-Expspace for LTLK [59]. For both the synchronous and asynchronous semantics, it is known to be in \((k-1)\)-Expspace for CTLK [1] and in \(k\)-Exptime for \(\texttt { CTL}^{*}\texttt {K}\) [3, 12]. However, it is not known whether these bounds are tight.

Main contribution. We show that the model-checking problem for LTLK, CTLK, and \(\texttt { CTL}^{*}\texttt {K}\) is actually \((k-1)\)-Expspace-complete for alternation depth at most \(k\) (for \(k\ge 1\)), both for synchronous and asynchronous perfect recall. This improves on the previously known upper bounds for synchronous and asynchronous \(\texttt { CTL}^{*}\texttt {K}\) and LTLK. Besides, the lower bounds are new for all six logics. We summarize the main results in Table 1, and we point out that our Pspace-completeness result for the fragment of LTLK with alternation depth one generalizes the one for synchronous LTLK with one agent proved in Reference [22].

Table 1. In white, previously known results.

|                    | CTL                     | LTL                      | CTL*                     |
|--------------------|-------------------------|--------------------------|--------------------------|
| nm, asy, K/CK      | Ptime-c [23, 39]        | Pspace-c [35]            | Pspace-c [35]            |
| nm, sy, K/CK       | PH-hard, in Pspace [32] | Pspace-c [22]            | Pspace-c [32]            |
| pr, sy/asy, CK     | undecidable [59]        | undecidable [59]         | undecidable [59]         |
| pr, sy/asy, K      | \((k\!-\!1)\)-Expspace-c | \((k\!-\!1)\)-Expspace-c | \((k\!-\!1)\)-Expspace-c |
| nl, nm, asy, K/CK  | Ptime-c                 | Pspace-c                 | Pspace-c                 |
| nl, nm, sy, K/CK   | PH-hard, in Pspace      | Pspace-c                 | Pspace-c                 |
| nl, pr, sy, K/CK   | PH-hard, in Pspace      | Pspace-c                 | Pspace-c                 |
| nl, pr, asy, K/CK  | ?                       | ?                        | ?                        |

  • Green and blue results are new. Those in green are direct consequences of known results (see below); those in blue required new proofs, presented in this work. “nm” and “pr” stand for “no memory” and “perfect recall,” “sy” and “asy” for “synchronous” and “asynchronous,” and “nl” stands for “no learning.” “CK” indicates extensions with both knowledge and common knowledge operators, while “K” indicates the absence of common knowledge. Finally, \(k\ge 1\) is the maximal alternation depth of formulas, for cases where the complexity depends on it. Commas indicate conjunction of features; slashes indicate disjunction.

As a base case for our model-checking procedure, we combine three existing ideas to obtain a Pspace model-checking procedure for \(\texttt { CTL}^{*}\texttt {K}\) formulas of alternation depth one. The first is the original Pspace model-checking procedure for LTL by Sistla and Clarke [55], which can be seen as an on-the-fly construction and resolution of Büchi automata for LTL formulas [63]. We extend this algorithm with an on-the-fly construction of the powerset model, which allows us to evaluate LTLK formulas of alternation depth one in polynomial space (it was already noted in Reference [1] that the powerset construction can be done on-the-fly for CTLK). Finally, we use the meta-algorithm by Emerson and Lei [21] to extend this procedure from LTLK to the full branching-time setting of \(\texttt { CTL}^{*}\texttt {K}\). For the inductive step of our procedure, we resort to a classic powerset construction to eliminate one level of alternation. The lower bounds are obtained by a reduction from a tiling problem (sometimes called the corridor problem) with rows of width \(k\)-exponential in the input.
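To illustrate the core of the powerset construction mentioned above, the sketch below (in Python, with names of our own choosing; the states and transitions are illustrative, not the paper's) shows the one-step update of an agent's information set under synchronous perfect recall: take all successors of the states currently considered possible, then keep only those compatible with the new observation.

```python
def update_info_set(info_set, successors, obs_class):
    """One step of the powerset construction (synchronous perfect
    recall): the agent's new information set is R(I) intersected
    with the equivalence class of the newly observed state.

    info_set:   set of states the agent currently considers possible
    successors: dict mapping each state to its set of R-successors
    obs_class:  the equivalence class [s']_a of the observed state
    """
    return {t for s in info_set for t in successors[s]} & obs_class
```

For instance, if the only successors of s0 are s1 and s2, and the agent observes the class {s0, s2}, then updating the information set {s0} yields {s2}: the agent has pinpointed the current state.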

No Learning. Except for CTLK and \(\texttt { CTLK}_{\texttt { C}}\) with clock semantics, for which a gap remains between PH and Pspace, the exact complexity is now settled for all logics in the general case where agents can learn (see criterion (4)). We then turn to study the complexity of the model-checking problem under the no learning assumption. While this criterion was considered both in relation to the satisfiability problem [29] and to axiomatization [27], we are not aware of any work that considers it in relation to the model-checking problem. As a result, we first have to define what this criterion means on finite systems, and then we study how it impacts the complexity of model checking.

Since assuming no learning amounts to restricting the class of models considered, it can only make the problem easier (while for satisfiability it tends to make it harder). We can thus directly inherit a number of results, in particular when the problem cannot become easier because the known upper bounds already match the lower bounds for the underlying purely temporal language (results in green in Table 1).

We prove in addition that with no learning the problem remains PH-hard for CTLK with clock semantics, but that it becomes much easier for synchronous perfect recall: From undecidable with common knowledge, or nonelementary without, it drops to Pspace when we restrict to models with no learning, even with common knowledge.

The only case that remains is that of asynchronous perfect recall. As we explain in Section 5.4, the technique used to obtain a Pspace procedure for synchronous perfect recall does not work in the asynchronous case, so we only have the \((k-1)\)-Expspace upper bounds inherited from asynchronous perfect recall without the no learning assumption. However, the model built in the proof of the \((k-1)\)-Expspace lower bounds is very far from satisfying the no learning property, so the best lower bounds we have are PH for CTLK and Pspace for LTLK and \(\texttt { CTL}^{*}\texttt {K}\). Similarly, the proof that model checking is undecidable with common knowledge and asynchronous perfect recall also produces models that do not satisfy no learning, so the decidability status of the model-checking problem for the logics with common knowledge, asynchronous perfect recall, and no learning remains an open problem.

Related work. Besides those already mentioned, we point out the following related works:

On model checking: The most closely related works are those dealing with model checking of different variants of epistemic temporal logics. First, model-checking procedures for some of the logics studied here were implemented in the tool MCMAS [38] for the memoryless semantics, and in MCK [24], which also supports clock semantics as well as synchronous and asynchronous perfect recall. In References [53, 54], Garanina et al. consider epistemic extensions of a number of branching-time temporal logics (propositional dynamic logic, mu-calculus, and a multimodal variant of CTL with actions), for which they also study the complexity of model checking. A number of works study, for various epistemic temporal logics, a variant of the model-checking problem called symbolic model checking; in this variant, the model’s states and transitions are not given explicitly (as in the present work), but rather described in some succinct formalism [30, 33, 34, 39, 44, 48, 51].

On synthesis: Besides model checking, different synthesis problems have been considered for specifications given in epistemic temporal logic. Games with imperfect information and objectives given as formulas in epistemic temporal logic were studied in References [10, 45, 50, 62], all for synchronous perfect recall, except for Puchala, who considers both synchronous and asynchronous perfect recall. In the setting of Dynamic Epistemic Logic (DEL), where games are implicitly described by epistemic models, synthesis of plans and protocols as, respectively, sequences and trees of epistemic actions, was studied in References [18, 40] for epistemic temporal objectives. DEL controller synthesis was studied in Reference [42] for LTLK objectives, and DEL multiplayer games with imperfect information were defined and studied in Reference [43].

On strategic logics: Logics for knowledge and time have also been extended with strategic operators, yielding in particular epistemic variants of Alternating-time Temporal Logic (ATL) [2, 25, 56] and Strategy Logic [4, 5, 6, 7, 15, 41], and others [31]. An epistemic extension of ATL was also studied on DEL presentations of game arenas in Reference [43].

Finally, epistemic temporal logic was compared to hyper temporal logic in Reference [13], and epistemic temporal logic with real time was investigated in Reference [37].

The present work is an extended version of Reference [11]. The main additions are the following:

(1)

A correction: In the conference version, we stated that model checking CTL with knowledge operators under the clock semantics is PSPACE-complete. This result was announced by Huang and van der Meyden in the paper “The Complexity of Epistemic Model Checking: Clock Semantics and Branching Time.” However, it turns out that the proof presented there only allows establishing hardness for each level of the polynomial hierarchy, as confirmed by a private communication with Ron van der Meyden.

(2)

New results concerning agents who do not learn, not studied in the conference paper.

Plan. The article is organized as follows: In Section 2, we recall the syntax and semantics of the different logics. We then focus on the case of perfect recall without the no learning assumption, establishing our upper bounds in Section 3 and the matching lower bounds in Section 4, both for the synchronous and asynchronous cases. We define and study the case of no learning in Section 5, and we conclude in Section 6.


2 PRELIMINARIES

Given a finite set of symbols \(\Sigma\), called an alphabet, a finite (respectively, infinite) word over \(\Sigma\) is an element of \(\Sigma ^{*}\) (respectively, \(\Sigma ^{\omega }\)). Let \(w\) be a finite or infinite word over \(\Sigma\). We denote by \(|w|\) the length of \(w\): If \(w=w_{0}\ldots w_{n}\), then we let \(|w|=n+1\), and we let \(|w|=\infty\) if \(w\) is infinite. For all \(0\le i,j\lt |w|\), with \(i\le j\), \(w_i\) is the \(i\)th letter of \(w\), \(w_{\le i}\) is the prefix of \(w\) that ends at position \(i\), \(w_{\ge i}\) is the suffix that starts at position \(i\), and \(w_{[i,j]}=w_i\ldots w_j\). For words \(w\) and \(w^{\prime }\), we write \(w\preccurlyeq w^{\prime }\) if \(w\) is a prefix of \(w^{\prime }\).

Let \(\mathbb {N}\) be the set of natural numbers. For all \(n,k\in \mathbb {N}\), define \({\it Tower}(n,0)=n\) and \({\it Tower}(n,k+1)=2^{{\it Tower}(n,k)}\). \({\it Tower}(n,k)\) denotes a tower of exponentials of height \(k\) and argument \(n\).
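The definition of \({\it Tower}\) can be sketched directly (a minimal Python illustration; the function name is ours):

```python
def tower(n: int, k: int) -> int:
    """Tower(n, 0) = n and Tower(n, k+1) = 2^Tower(n, k):
    a tower of exponentials of height k with argument n."""
    result = n
    for _ in range(k):
        result = 2 ** result
    return result

# For example: Tower(2, 0) = 2, Tower(2, 1) = 4,
# Tower(2, 2) = 16, Tower(2, 3) = 65536.
```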

2.1 Syntax and Fragments

We first recall the logic \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\), the extension of CTL* with operators for knowledge and common knowledge.

Let us fix a countably infinite set of atomic propositions \(\mathcal {AP}\) and a finite set of agents \(\mathit {Ag}\). As for state and path formulas in CTL*, we distinguish between history formulas and path formulas. We say history formulas instead of state formulas because, considering agents with perfect recall of the past, the truth of epistemic formulas depends not only on the current state, but also on the history before reaching this state.

Definition 2.1

(Syntax of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\))

The sets of history formulas \(\varphi\) and path formulas \(\psi\) are defined as follows: \(\begin{equation*} \begin{array}{ccl} \varphi & ::= & p \mid \lnot \varphi \mid \varphi \vee \varphi \mid \mathit {E}\psi \mid \mathit {K}_{a}\varphi \mid {C_G}\varphi \\ \psi & ::= & \varphi \mid \lnot \psi \mid \psi \vee \psi \mid \mathit {X}\psi \mid \psi \mathit {U}\psi , \end{array} \end{equation*}\) where \(p\in \mathcal {AP}\), \(a\in \mathit {Ag}\) and \(\emptyset \subsetneq G\subseteq \mathit {Ag}\).

Operators \(\mathit {X}\) and \(\mathit {U}\) are the standard next and until temporal operators of LTL, \(\mathit {E}\) is the existential path quantifier of CTL*, \(\mathit {K}_{a}\) is the knowledge operator for agent \(a\) from epistemic logics, and \({C_G}\) is the operator of common knowledge for the nonempty group of agents \(G\). Formula \(\mathit {K}_{a}\varphi\) reads as “agent \(a\) knows that \(\varphi\) is true,” and \({C_G}\varphi\) reads as “there is common knowledge among agents in \(G\) that \(\varphi\) is true.” Intuitively, this means that everybody in \(G\) knows that \(\varphi\) is true, and everybody knows that everybody knows that it is true, and everybody knows that everybody knows that everybody knows that it is true, and so on (see, for instance, Reference [23] for more on common knowledge).

As usual, we define \(\top =p\vee \lnot p\), \(\varphi \wedge \varphi ^{\prime }=\lnot (\lnot \varphi \vee \lnot \varphi ^{\prime })\), \(\varphi \rightarrow \varphi ^{\prime } = \lnot \varphi \vee \varphi ^{\prime }\), the temporal operators finally (\(\mathit {F}\)) and always (\(\mathit {G}\)) by \(\mathit {F}\varphi = \top \mathit {U}\varphi\) and \(\mathit {G}\varphi =\lnot \mathit {F}\lnot \varphi\), and the universal path quantifier \(\mathit {A}\psi := \lnot \mathit {E}\lnot \psi\).

The language of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) consists of the history formulas. We let \(\mathrm{Sub}(\varphi)\) be the set of subformulas in \(\varphi\), and we define the size of a formula \(\varphi\) as \(|\varphi |=|\mathrm{Sub}(\varphi)|\).
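As an illustration, the grammar of Definition 2.1 and the derived operators can be encoded as a small abstract syntax tree. This is a sketch in Python; all class and helper names are our own, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Any, FrozenSet

# History formulas: p | ¬φ | φ ∨ φ | Eψ | K_a φ | C_G φ
# Path formulas:    φ | ¬ψ | ψ ∨ ψ | Xψ | ψ U ψ
@dataclass(frozen=True)
class Prop: name: str
@dataclass(frozen=True)
class Not: sub: Any
@dataclass(frozen=True)
class Or: left: Any; right: Any
@dataclass(frozen=True)
class E: path: Any                        # existential path quantifier
@dataclass(frozen=True)
class K: agent: str; sub: Any             # K_a φ
@dataclass(frozen=True)
class C: group: FrozenSet[str]; sub: Any  # C_G φ, with G nonempty
@dataclass(frozen=True)
class X: sub: Any
@dataclass(frozen=True)
class U: left: Any; right: Any

def A(psi):
    """Universal path quantifier: Aψ := ¬E¬ψ."""
    return Not(E(Not(psi)))

def F(psi):
    """Finally: Fψ = ⊤ U ψ, with ⊤ = p ∨ ¬p."""
    return U(Or(Prop("p"), Not(Prop("p"))), psi)
```

For example, the formula \(\mathit {K}_{a}\mathit {E}\mathit {X}p\) would be written `K("a", E(X(Prop("p"))))`.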

We now define the alternation depth of a formula \(\varphi\), written \(\mathrm{ad}(\varphi)\). Intuitively, \(\mathrm{ad}(\varphi)\) is the maximum number of alternations between knowledge operators for different agents in the formula (note that we only take into account the modalities \(\mathit {K}_{a}\) and not the modalities \({C_G}\) for common knowledge).

Definition 2.2

(Alternation Depth).

Let \(\mathcal {O}= \bigcup _{a\in \mathit {Ag}}\lbrace \mathit {K}_{a}\rbrace\), i.e., the set of knowledge modalities \(\mathit {K}_{a}\) for the various agents \(a\in \mathit {Ag}\). We first define the alternation length \(\ell (\chi)\) of a finite sequence \(\chi \in \mathcal {O}^{*}\): \(\ell (\varepsilon)=0\), \(\ell (O)=1\) for all \(O\in \mathcal {O}\), and \(\begin{equation*} \ell (OO^{\prime }\chi)= {\left\lbrace \begin{array}{ll}\ell (O^{\prime }\chi) & \text{if }O=O^{\prime }\\ \ell (O^{\prime }\chi) +1 & \text{otherwise.} \end{array}\right.} \end{equation*}\) Then, the alternation depth \(\mathrm{ad}(\varphi)\) of a \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) formula \(\varphi\) is the maximum over the alternation lengths \(\ell (\chi)\), where \(\chi\) ranges over the sequences of knowledge modalities in \(\mathcal {O}\) along the paths in the tree encoding of \(\varphi\).

For instance, \(\mathrm{ad}(p)=0\), \(\mathrm{ad}(\mathit {K}_{a}p)=1\), \(\mathrm{ad}(\mathit {K}_{a}\lnot \mathit {K}_{a}p)=1\) and \(\mathrm{ad}(\mathit {K}_{b} \mathit {K}_{a}q \vee \mathit {K}_{a}p)=2\).
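The alternation length \(\ell\) simply counts maximal blocks of adjacent identical modalities. A minimal sketch (ours), representing a sequence of modalities \(\mathit {K}_{a}\) by the list of agent names:

```python
from itertools import groupby

def alternation_length(chi):
    """ℓ(χ): the number of maximal blocks of adjacent identical
    knowledge modalities in the sequence χ; ℓ(ε) = 0."""
    return sum(1 for _ in groupby(chi))

# Matches the examples above: ℓ(ε) = 0, ℓ(K_a) = 1,
# ℓ(K_a K_a) = 1 (no alternation), ℓ(K_b K_a) = 2.
```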

Fragments of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\). We consider the usual syntactic fragments \(\texttt { LTLK}_{\texttt { C}}\) and \(\texttt { CTLK}_{\texttt { C}}\) of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\). \(\texttt { LTLK}_{\texttt { C}}\) consists of formulas of the form \(\mathit {E}\psi\) or \(\lnot \mathit {E}\psi\) where path quantifiers in \(\psi\) are immediately preceded by epistemic modalities, while \(\texttt { CTLK}_{\texttt { C}}\) is obtained by requiring that the temporal modalities \(\mathit {X}\) and \(\mathit {U}\) are immediately preceded by a path quantifier. Formally, the syntax of \(\texttt { LTLK}_{\texttt { C}}\) and \(\texttt { CTLK}_{\texttt { C}}\) are defined by the following grammars: \(\begin{equation*} \begin{array}{lccl} \text{$\texttt { LTLK}_{\texttt { C}}$}: & \varphi & ::= & \mathit {E}\psi \mid \lnot \mathit {E}\psi \\ & \psi & ::= & \lnot \psi \mid \psi \vee \psi \mid \mathit {X}\psi \mid \psi \mathit {U}\psi \mid \mathit {K}_{a}\varphi , \end{array} \end{equation*}\) \(\begin{equation*} \begin{array}{lccl} \text{$\texttt { CTLK}_{\texttt { C}}$}: & \varphi & ::= & p \mid \lnot \varphi \mid \varphi \vee \varphi \mid \mathit {E}\mathit {X}\varphi \mid \mathit {E}\varphi \mathit {U}\varphi \mid \mathit {A}\varphi \mathit {U}\varphi \mid \mathit {K}_{a}\varphi \mid {C_G}\varphi . \end{array} \end{equation*}\)

We also let LTLK, CTLK, and \(\texttt { CTL}^{*}\texttt {K}\) be the fragments of \(\texttt { LTLK}_{\texttt { C}}\), \(\texttt { CTLK}_{\texttt { C}}\), and \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\), respectively, obtained by forbidding the common knowledge operator \({C_G}\). Finally, for every \(k\in \mathbb {N}\), we define \(\texttt { LTLK}_{k}\), \(\texttt { CTLK}_{k}\), and \(\texttt { CTL}^{*}\texttt { K}_{k}\) the restrictions of LTLK, CTLK, and \(\texttt { CTL}^{*}\texttt {K}\), respectively, to formulas of alternation depth at most \(k\).

2.2 Models

\(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) formulas are interpreted over Kripke structures (KS) equipped with one indistinguishability relation \(\sim _{a}\) for each agent \(a\).

Definition 2.3.

A Kripke structure is a structure \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\), where

\(\text {AP}\subset \mathcal {AP}\) is a finite subset of atomic propositions,

\(S\) is a set of states,

\(R\subseteq S\times S\) is a left-total transition relation,

\(V:S\rightarrow 2^{\text {AP}}\) is a valuation function,

\(\sim _{a}\;\subseteq S\times S\) is an equivalence relation, for each \(a\in \mathit {Ag}\), and

\({{s}^\iota }\in S\) is an initial state.

The size \(|M|\) of \(M\) is the number of states in \(M\). A path is an infinite sequence of states \(\pi =s_0 s_1\dots\) such that for all \(i\ge 0\), \(s_i Rs_{i+1}\), and a history \(\tau\) is a non-empty prefix of a path. We denote by \(\textrm {H}(s)\) (respectively, \(\Pi (s)\)) the set of histories (respectively, paths) that start in \(s\). Unless specified otherwise, all histories and paths are assumed to start in the initial state \({{s}^\iota }\). For \(I\subseteq S\), we write \(R(I)=\lbrace s^{\prime } \mid \exists s\in I\mbox{ s.t. }sRs^{\prime }\rbrace\) for the set of successors of states in \(I\). Since the relation \(R\) is left-total, i.e., for every \(s\in S\) there exists \(s^{\prime }\in S\) such that \(sRs^{\prime }\), the set \(R(I)\) is nonempty for every nonempty \(I\). Finally, for \(a\in \mathit {Ag}\) and \(s\in S\), we let \([s]_{a}\) be the equivalence class of \(s\) for the relation \(\sim _{a}\), which is also called agent \(a\)’s observation relation. For a history \(\tau\), \(\mbox{lst}(\tau)\) is the last state of \(\tau\).
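A minimal encoding of Definition 2.3 can be sketched as follows (Python; the field names and helper methods are ours). Each relation \(\sim _{a}\) is represented by an observation labeling: two states are equivalent for \(a\) exactly when they carry the same label.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class KripkeStructure:
    """Sketch of a Kripke structure with indistinguishability
    relations given as per-agent observation labelings."""
    states: Set[str]
    R: Set[Tuple[str, str]]          # left-total transition relation
    V: Dict[str, Set[str]]           # valuation: state -> atomic props
    obs: Dict[str, Dict[str, str]]   # agent -> (state -> observation)
    initial: str

    def successors(self, I):
        """R(I) = { s' | there is s in I with s R s' }."""
        return {t for (s, t) in self.R if s in I}

    def eq_class(self, agent, s):
        """[s]_a: the equivalence class of s under ~a."""
        o = self.obs[agent]
        return {t for t in self.states if o[t] == o[s]}
```

A usage sketch on a small hypothetical model (the transitions are illustrative, not those of Figure 1): with `R = {("s0","s1"), ("s0","s2"), ...}` and agent `a` observing the same label on `s0` and `s2`, `eq_class("a", "s0")` returns `{"s0", "s2"}`.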

Example 2.4.

We represent in Figure 1 a Kripke structure that we use in the following to illustrate the different semantics. There are two atomic propositions: \(p\), which is satisfied in \(s_2\), \(s_3\), and \(s_4\), and \(q\), satisfied only in \(s_4\). There is only one agent, \(a\), and its observation relation \(\sim _{a}\) defines two equivalence classes, \(\lbrace s_0,s_2\rbrace\) and \(\lbrace s_1,s_3,s_4\rbrace\).

Fig. 1. A Kripke structure \(M\) with one agent \(a\) whose observation relation defines two equivalence classes, represented in white and gray.

2.3 Synchronous and Asynchronous Memoryless Semantics

We first recall both the synchronous and asynchronous memoryless semantics of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\). As usual, when not specified otherwise, memoryless semantics refers to the asynchronous variant, where agents do not remember anything, while we may use the term clock semantics for the synchronous variant in which agents only remember how many transitions were taken. We will write \(\models _m\) for the (asynchronous) memoryless semantics, and \(\models _{ck}\) for the clock semantics (synchronous memoryless).

2.3.1 Asynchronous Memoryless.

To define the semantics of common knowledge for a group of agents, given a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\), we define the following relation on states, for every nonempty group \(G\subseteq \mathit {Ag}\): \(\sim _{G}\,=(\cup _{a\in G} \sim _{a})^*\). So, the relation for common knowledge in \(G\) is the reflexive and transitive closure of the union of the relations of all agents in the group.
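The equivalence class of a state under \(\sim _{G}\) can be computed by a breadth-first search through the union of the agents' relations; a sketch (all names ours), where each agent's relation is given as a map from a state to its equivalence class:

```python
from collections import deque

def common_class(state, relations):
    """States related to `state` by ~G = (union of ~a for a in G)*:
    the reflexive-transitive closure of the union of the agents'
    relations, computed by breadth-first search.

    relations: one dict per agent in G, mapping each state to its
    equivalence class (a set of states) for that agent."""
    seen = {state}          # reflexivity: state is in its own class
    frontier = deque([state])
    while frontier:
        s = frontier.popleft()
        for rel in relations:
            for t in rel[s] - seen:   # one step of some ~a
                seen.add(t)
                frontier.append(t)
    return seen
```

For example, if \(\sim _{a}\) relates states 1 and 2, and \(\sim _{b}\) relates 2 and 3, then the \(\sim _{G}\) class of 1 for \(G=\lbrace a,b\rbrace\) is \(\lbrace 1,2,3\rbrace\), even though no single agent relates 1 and 3.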

Definition 2.5

(Memoryless Semantics).

Given a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\), a state \(s\in S\), and a path \(\pi\), the memoryless semantics is defined inductively as follows: \(\begin{equation*} \begin{array}{l c l} s\models _mp & \mbox{if} & p\in V(s)\\ s\models _m\lnot \varphi & \mbox{if} & s\not\models _m\varphi \\ s\models _m\varphi _1\vee \varphi _2 & \mbox{if} & s\models _m\varphi _1 \text{ or }s\models _m\varphi _2\\ s\models _m\mathit {E}\psi & \mbox{if} & \mbox{for some }\pi \in \Pi (s),\;\; \pi \models _m\psi \\ s\models _m\mathit {K}_a\varphi & \mbox{if} & \mbox{for all } s^{\prime } \mbox{ s.t. }s^{\prime }\;\sim _{a}\; s,\;\; s^{\prime }\models _m\varphi \\ s\models _m{C_G}\varphi & \mbox{if} & \mbox{for all } s^{\prime } \mbox{ s.t. }s^{\prime }\;\sim _{G}\; s,\;\; s^{\prime }\models _m\varphi \\ \pi \models _m\varphi & \mbox{if} & \pi _0\models _m\varphi \\ \pi \models _m\lnot \psi & \mbox{if} & \pi \not\models _m\psi \\ \pi \models _m\psi _1\vee \psi _2 & \mbox{if} &\pi \models _m\psi _1 \text{ or } \pi \models _m\psi _2\\ \pi \models _m\mathit {X}\psi & \mbox{if} & \pi _{\ge 1}\models _m\psi \\ \pi \models _m\psi _1\mathit {U}\psi _2 & \mbox{if} & \exists i\ge 0 \mbox{ such that } \pi _{\ge i}\models _m\psi _2 \mbox{ and } \forall j \mbox{ such that }0\le j \lt i,\; \pi _{\ge j}\models _m\psi _1 . \end{array} \end{equation*}\)

We may write \(M,\tau \models _m\varphi\) for \(M,\mbox{lst}(\tau)\models _m\varphi\), and \(M\models _m\varphi\) for \({{s}^\iota }\models _m\varphi\).

2.3.2 Synchronous Memoryless.

To define the clock semantics, we first define the clock indistinguishability relation on histories.

Definition 2.6.

Let \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\) be a model. Two histories \(\tau\) and \(\tau ^{\prime }\) are indistinguishable for an agent \(a\) with clock semantics, written \(\tau \approx ^{ck}_{a}{}\tau ^{\prime }\), if they have the same length and their last states are indistinguishable to agent \(a\), i.e., \(|\tau |=|\tau ^{\prime }|\) and \(\mbox{lst}(\tau)\sim _{a} \mbox{lst}(\tau ^{\prime })\).

For common knowledge, we also define, for every nonempty group of agents \(\emptyset \subsetneq G\subseteq \mathit {Ag}\), the relation \(\approx ^{ck}_{G}\,=(\cup _{a\in G}\approx ^{ck}_{a})^*\).

In the asynchronous memoryless case, the past does not matter, and we defined the semantics of history formulas directly on states. With the clock semantics, the length of histories matters, and we thus define the semantics on histories.

Definition 2.7

(Clock Semantics).

Let \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\) be a model. The clock semantics of a history formula \(\varphi\) on a history \(\tau\) and a path formula \(\psi\) on a path \(\pi\) and a point in time \(n\in \mathbb {N}\) is defined by induction as follows: \(\begin{equation*} \begin{array}{lcl} \tau \models _{ck}p & \mbox{ if } & p\in V(\mbox{lst}(\tau))\\ \tau \models _{ck}\lnot \varphi & \mbox{ if } & \tau \not\models _{ck}\varphi \\ \tau \models _{ck}\varphi _1\vee \varphi _2 & \mbox{ if } &\tau \models _{ck}\varphi _1~\text{or}~ \tau \models _{ck}\varphi _2\\ \tau \models _{ck}\mathit {E}\psi & \mbox{ if } & \exists \pi \mbox{ s.t. }\tau \preccurlyeq \pi \text{ and } \;\pi ,|\tau |-1\models _{ck}\psi \\ \tau \models _{ck}\mathit {K}_{a}\varphi & \mbox{ if } & \forall \tau ^{\prime }\in \textrm {H}({{s}^\iota }) \text{ such that } \tau ^{\prime }\approx ^{ck}_{a}\tau ,\; \tau ^{\prime }\models _{ck}\varphi \\ \tau \models _{ck}{C_G}\varphi & \mbox{ if } & \forall \tau ^{\prime }\in \textrm {H}({{s}^\iota }) \text{ such that } \tau ^{\prime }\approx ^{ck}_{G}\tau ,\; \tau ^{\prime }\models _{ck}\varphi \\ \pi ,n\models _{ck}\varphi & \mbox{ if } & \pi _{\le n}\models _{ck}\varphi \\ \pi ,n\models _{ck}\lnot \psi & \mbox{ if } & \pi ,n\not\models _{ck}\psi \\ \pi ,n\models _{ck}\psi _1\vee \psi _2 & \mbox{ if } &\pi ,n\models _{ck}\psi _1 \mbox{ or }\pi ,n\models _{ck}\psi _2\\ \pi ,n\models _{ck}\mathit {X}\psi & \mbox{ if } & \pi ,(n+1)\models _{ck}\psi \\ \pi ,n\models _{ck}\psi _1\mathit {U}\psi _2 & \mbox{ if } & \exists m\ge n \mbox{ s.t. } \pi ,m\models _{ck}\psi _2 \mbox{ and } \forall k \mbox{ s.t. } n\le k \lt m,\; \pi ,k\models _{ck}\psi _1. \end{array} \end{equation*}\)

We may write \(M\models _{ck}\varphi\) for \({{s}^\iota }\models _{ck}\varphi\).

Example 2.8.

Consider history \(s_0s_2\) in the Kripke structure from Example 2.4. After this history, under the asynchronous memoryless semantics, since \(s_0\sim _{a}s_2\), agent \(a\) considers that the current state is either \(s_0\) or \(s_2\), and thus she does not know that \(p\) holds: \(s_0s_2\models _m\lnot \mathit {K}_{a}p\). With the synchronous memoryless semantics (clock semantics) instead, the only history of length two other than \(s_0s_2\) is \(s_0s_1\), and it ends in a state that is not equivalent to \(s_2\) for agent \(a\). As a result, the only history in the equivalence class of \(s_0s_2\) for the relation \(\approx ^{ck}_{a}\) is \(s_0s_2\) itself, and thus in this semantics agent \(a\) knows that \(p\) holds: \(s_0s_2\models _{ck}\mathit {K}_{a}p\).

2.4 Synchronous and Asynchronous Perfect Recall Semantics

We now define the perfect recall semantics of knowledge modalities, where agents remember all of the past. Two main notions of perfect recall have been considered for the semantics of epistemic temporal logics [27], as well as in games with imperfect information [50], depending on whether the system is assumed to be synchronous or not. While in synchronous systems agents always observe when a transition takes place, in asynchronous ones, agents cannot tell that a transition occurred if their observation of the state remains unchanged.

Definition 2.9.

Two histories \(\tau\) and \(\tau ^{\prime }\) are indistinguishable for an agent \(a\) with synchronous perfect recall (SPR for short), written \(\tau \approx ^{\mbox{s}}_{a}{}\tau ^{\prime }\), if they are point-wise indistinguishable to \(a\), i.e., \(|\tau |=|\tau ^{\prime }|\) and \(\tau _i\sim _{a} \tau ^{\prime }_i\) for each \(i\lt |\tau |\).

To define asynchronous perfect recall, we first define the sequence of observations that an agent has along a history, in which sequences of successive identical observations collapse to a single observation. Formally, for an agent \(a\), we let \(\text{Obs}_{a}(s)=[s]_{a}\), and \(\begin{equation*} \text{Obs}_{a}(\tau \cdot s)= {\left\lbrace \begin{array}{ll}\text{Obs}_{a}(\tau)\cdot [s]_{a} & \text{if }[s]_{a}\ne \mbox{lst}(\text{Obs}_{a}(\tau)),\\ \text{Obs}_{a}(\tau) & \text{otherwise.} \end{array}\right.} \end{equation*}\)
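The collapsing of successive identical observations can be sketched in Python as follows (a hypothetical encoding, not from the paper, where `obs` maps each state to a label for its observation class):

```python
def obs_sequence(history, obs):
    """Compute Obs_a(tau): the sequence of observations along `history`,
    with runs of successive identical observations collapsed to one."""
    seq = [obs[history[0]]]
    for s in history[1:]:
        if obs[s] != seq[-1]:  # a visible change: record the new observation
            seq.append(obs[s])
    return seq
```

With this encoding, two histories are APR-indistinguishable exactly when `obs_sequence` returns the same list for both.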

Definition 2.10.

Two histories \(\tau\) and \(\tau ^{\prime }\) are indistinguishable for an agent \(a\) with asynchronous perfect recall (APR for short), written \(\tau \approx ^{\mbox{as}}_{a}{}\tau ^{\prime }\), if \(\text{Obs}_{a}(\tau)=\text{Obs}_{a}(\tau ^{\prime })\).

We also define, for every nonempty group of agents \(\emptyset \subsetneq G\subseteq \mathit {Ag}\), the relations for common knowledge \(\approx ^{\mbox{s}}_{G}\,=(\cup _{a\in G}\approx ^{\mbox{s}}_{a})^*\) and \(\approx ^{\mbox{as}}_{G}\,=(\cup _{a\in G}\approx ^{\mbox{as}}_{a})^*\).

We define the semantics of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) formulas for synchronous and asynchronous perfect recall simultaneously: the two definitions differ only in the relation on histories used, either \(\approx ^{\mbox{s}}_{a}\) and \(\approx ^{\mbox{s}}_{G}\) or \(\approx ^{\mbox{as}}_{a}\) and \(\approx ^{\mbox{as}}_{G}\). This relation on histories is also the only difference with the synchronous memoryless semantics (clock semantics) defined earlier.

Definition 2.11

(Perfect-recall Semantics).

Fix a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\). The perfect-recall semantics of a history formula \(\varphi\) on a history \(\tau\) and a path formula \(\psi\) on a path \(\pi\) and a point in time \(n\in \mathbb {N}\) is defined by induction as follows, where \(\approx _{a}\) and \(\approx _{G}\) stand for \(\approx ^{\mbox{s}}_{a}\) and \(\approx ^{\mbox{s}}_{G}\) in the case of SPR (written \(\models _{\mbox{s}}\)), and for \(\approx ^{\mbox{as}}_{a}\) and \(\approx ^{\mbox{as}}_{G}\) in the case of APR (written \(\models _{\mbox{as}}\)); we write \(\models\) for the corresponding satisfaction relation: \(\begin{equation*} \begin{array}{lcl} \tau \models p & \mbox{ if } & p\in V(\mbox{lst}(\tau))\\ \tau \models \lnot \varphi & \mbox{ if } & \tau \not\models \varphi \\ \tau \models \varphi _1\vee \varphi _2 & \mbox{ if } &\tau \models \varphi _1~\text{or}~ \tau \models \varphi _2\\ \tau \models \mathit {E}\psi & \mbox{ if } & \exists \pi \mbox{ s.t. }\tau \preccurlyeq \pi \text{ and } \;\pi ,|\tau |-1\models \psi \\ \tau \models \mathit {K}_{a}\varphi & \mbox{ if } & \forall \tau ^{\prime }\in \textrm {H}({{s}^\iota }) \text{ such that } \tau ^{\prime }\approx _{a}\tau ,\; \tau ^{\prime }\models \varphi \\ \tau \models {C_G}\varphi & \mbox{ if } & \forall \tau ^{\prime }\in \textrm {H}({{s}^\iota }) \text{ such that } \tau ^{\prime }\approx _{G}\tau ,\; \tau ^{\prime }\models \varphi \\ \pi ,n\models \varphi & \mbox{ if } & \pi _{\le n}\models \varphi \\ \pi ,n\models \lnot \psi & \mbox{ if } & \pi ,n\not\models \psi \\ \pi ,n\models \psi _1\vee \psi _2 & \mbox{ if } &\pi ,n\models \psi _1 \mbox{ or }\pi ,n\models \psi _2\\ \pi ,n\models \mathit {X}\psi & \mbox{ if } & \pi ,(n+1)\models \psi \\ \pi ,n\models \psi _1\mathit {U}\psi _2 & \mbox{ if } & \exists m\ge n \mbox{ s.t. } \pi ,m\models \psi _2 \mbox{ and } \forall k \mbox{ s.t. } n\le k \lt m,\; \pi ,k\models \psi _1. \end{array} \end{equation*}\)

A model \(M\) with initial state \({{s}^\iota }\) satisfies a \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) formula \(\varphi\) under the \(\text{SPR}\) semantics, written \(M\models _{\mbox{s}}\varphi\), if \({{s}^\iota }\models _{\mbox{s}}\varphi\), and similarly for the APR semantics: \(M\models _{\mbox{as}}\varphi\) if \({{s}^\iota }\models _{\mbox{as}}\varphi\).

Example 2.12.

Take again the model from Example 2.4, and consider this time history \(s_0s_1s_3\). With the synchronous perfect-recall semantics, no other history is considered possible by agent \(a\), which implies that she knows that \(p\) holds: \(s_0s_1s_3\models _{\mbox{s}}\mathit {K}_{a}p\). With the asynchronous perfect-recall semantics instead, history \(s_0s_1\) is equivalent to \(s_0s_1s_3\), as both have the same sequence of observations (white followed by gray): \(\text{Obs}_{a}(s_0s_1)=\text{Obs}_{a}(s_0s_1s_3)\). It follows that with this semantics agent \(a\) does not know that \(p\) holds: \(s_0s_1s_3\models _{\mbox{as}}\lnot \mathit {K}_{a}p\).

With the synchronous semantics, it is also the case that agent \(a\) knows that \(q\) does not hold: \(s_0s_1s_3\models _{\mbox{s}}\mathit {K}_{a}\lnot q\). It is not so with the asynchronous perfect-recall semantics, however, where agent \(a\) considers that \(q\) may hold: \(s_0s_1s_3\models _{\mbox{as}}\lnot \mathit {K}_{a}\lnot q\). Indeed, history \(s_0s_2s_4\) also defines the same sequence of observations \(\text{Obs}_{a}(s_0s_2s_4)=\text{Obs}_{a}(s_0s_1s_3)=\text{Obs}_{a}(s_0s_1)\), i.e., white and then gray.

Remark 1.

Observe that we assume the definition of the model to be common knowledge among the agents, which is standard, for instance, in game theory. As a result, the initial state is common knowledge, which corresponds to the unique initial state assumption in References [27, 29]. This is reflected in the semantics in the fact that we only consider indistinguishable histories that start in the initial state. We note that one can simulate in this semantics the case where the initial state is unknown by adding an artificial initial state \({{s}^\iota }_0\) such that \(({{s}^\iota }_0,s)\in R\) and \((s,{{s}^\iota }_0)\notin R\) for all states \(s\) in the original model, and labeling each state \(s\) with an atom \(p_s\). Then evaluating \(s\models \varphi\) (with unknown initial state) is equivalent to evaluating \({{s}^\iota }_0\models \mathit {A}\mathit {X}(p_s\rightarrow \varphi)\) (with unique initial state).

2.5 Main Result

The model-checking problem for a semantics \(\models\) is the following: Given a finite model \(M\) and a formula \(\varphi\), decide whether \(M\models \varphi\). The main result in this work is the following:

Theorem 2.13.

For every \(k\in \mathbb {N}\), the model-checking problem for \(\texttt { LTLK}_{k+1}\), \(\texttt { CTLK}_{k+1}\), and \(\texttt { CTL}^{*}\texttt { K}_{k+1}\) is \(k\)-Expspace -complete, both for synchronous perfect recall and asynchronous perfect recall semantics.

In particular, for \(k=0\), we get that model checking formulas of alternation depth at most one is in Pspace , which generalizes a similar result from Reference [22] for LTLK with one agent, a strict subset of \(\texttt { LTLK}_{1}\). The upper bound was proved in Reference [1] for (a variant of) \(\texttt { CTLK}_{k}\), both for synchronous and asynchronous perfect recall. However, the best-known upper bound for \(\texttt { CTL}^{*}\texttt { K}_{k+1}\) was \((k+1)\)-Exptime , which we improve by extending the approach of Reference [1] to the full branching-time logic \(\texttt { CTL}^{*}\texttt {K}\), showing that it is no more expensive than CTLK. In addition, while these problems were known to be nonelementary, no precise lower bounds in terms of alternation depth were known, so it was open whether the above upper bounds were optimal. We settle this issue by providing matching lower bounds in all cases.

In the next two sections (Sections 3 and 4), we establish Theorem 2.13, so we focus on the logics LTLK, CTLK, and \(\texttt { CTL}^{*}\texttt {K}\) (i.e., without common knowledge), and the perfect recall semantics (both synchronous and asynchronous). In Section 5, we focus on the no learning assumption, which we introduce there and for which we establish additional results on the complexity of model checking the various logics under the different memoryless and perfect recall semantics, with and without common knowledge.

Skip 3UPPER BOUNDS FOR PERFECT RECALL Section

3 UPPER BOUNDS FOR PERFECT RECALL

In this section, we prove the upper bounds in Theorem 2.13 for the model-checking problem for \(\texttt { CTL}^{*}\texttt {K}\) with either synchronous or asynchronous perfect recall. We start by recalling a classic powerset construction for algorithmic questions related to imperfect information. It was first used by Reif in Reference [52] to eliminate imperfect information from two-player games, and later in References [1, 57, 59] to model check variants of LTLK and CTLK. We show that it can also be used to model check \(\texttt { CTL}^{*}\texttt {K}\).

3.1 Information Sets and Updates

An information set captures the set of states that an agent considers possible at a given moment. The following definition is common to synchronous and asynchronous perfect recall, and \(\approx _{a}\) stands for either \(\approx ^{\mbox{s}}_{a}\) or \(\approx ^{\mbox{as}}_{a}\):

Definition 3.1.

Given a model \(M\) with state set \(S\) and initial state \({{s}^\iota }\), an information set \(I\subseteq S\) is a set of states. Given a history \(\tau\) and an agent \(a\), the information set of \(a\) at \(\tau\) is defined as \(\begin{equation*} I_{a}(\tau)=\lbrace s\mid \exists \tau ^{\prime }\in \textrm {H}({{s}^\iota }) \text{ s.t. }\tau \approx _{a}\tau ^{\prime } \text{ and }s=\mbox{lst}(\tau ^{\prime })\rbrace . \end{equation*}\)

We write \(I^{\mbox{s}}_{a}\) when referring to the synchronous semantics and \(I^{\mbox{as}}_{a}\) when referring to the asynchronous one.

We now define two different update functions, one for the synchronous and one for the asynchronous case. The role of these functions is to compute the new information set of an agent after a transition, given her former information set and the new state. We start with the synchronous case, which is easier and standard (see, for instance, Reference [50] or References [57, 59] for the more general case of \(k\)-trees).

Definition 3.2.

The synchronous update of an information set \(I_{a}\) for agent \(a\) with a new state \(s\) is \(\begin{equation*} \mathrm{Up}^{\mbox{s}}(I_{a},s)=R(I_{a})\cap [s]_{a}. \end{equation*}\)

This definition says that after taking a transition that arrives in state \(s\), the states that agent \(a\) considers possible are the successors of states she previously considered possible and that are compatible with what she observes of the new state \(s\).
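The synchronous update can be sketched in Python as follows (again under the hypothetical encoding where `succ` maps states to successors and `obs` labels each state with its observation class; this is an illustration, not the paper's own code):

```python
def up_sync(info_set, new_state, succ, obs):
    """Up^s(I_a, s'): successors R(I_a) of the states the agent
    considered possible, intersected with the observation class [s']_a."""
    successors = {t for s in info_set for t in succ[s]}          # R(I_a)
    return {t for t in successors if obs[t] == obs[new_state]}   # ∩ [s']_a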

For asynchronous perfect recall, the update is slightly more involved, as the agent may consider that arbitrarily many steps occurred without her noticing. We call a transition between two states \(s\) and \(s^{\prime }\) such that \(s\sim _{a}s^{\prime }\) an invisible step (for agent \(a\)), and given a set of states \(I\), we let \(\text{Reach}_i^a(I)\supseteq I\) be the set of states reachable from \(I\) via steps invisible for \(a\). We can now define the update as follows:

Definition 3.3.

The asynchronous update of an information set \(I_{a}\) for agent \(a\) with a new state \(s\) is \(\begin{equation*} \mathrm{Up}^{\mbox{as}}(I_{a},s)= {\left\lbrace \begin{array}{ll} I_{a}&\text{if } I_{a}\subseteq [s]_{a},\\ \text{Reach}_i^a(R(I_{a})\cap [s]_{a}) & \text{otherwise}. \end{array}\right.} \end{equation*}\)
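Under the same hypothetical encoding as before (`succ` for transitions, `obs` for observation classes), Definition 3.3 can be sketched as follows; `reach_invisible` plays the role of \(\text{Reach}_i^a\):

```python
def reach_invisible(states, succ, obs):
    """States reachable from `states` via steps invisible to the agent,
    i.e., transitions between states carrying the same observation."""
    seen, frontier = set(states), list(states)
    while frontier:
        s = frontier.pop()
        for t in succ[s]:
            if obs[t] == obs[s] and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def up_async(info_set, new_state, succ, obs):
    """Up^as: if the agent's observation cannot have changed, she notices
    nothing; otherwise update as in the synchronous case, then close
    under invisible steps."""
    if all(obs[s] == obs[new_state] for s in info_set):  # I_a ⊆ [s']_a
        return set(info_set)
    core = {t for s in info_set for t in succ[s] if obs[t] == obs[new_state]}
    return reach_invisible(core, succ, obs)
```

Note how the first branch captures that a transition within the agent's current observation class is indistinguishable from no transition at all.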

This definition is an adaptation to our setting of the one in Reference [50], which considers two-player games. The following result follows directly by applying the definitions:

Lemma 3.4.

For every history \(\tau \cdot s\), \(I^{\mbox{s}}_{a}(\tau \cdot s)=\mathrm{Up}^{\mbox{s}}(I^{\mbox{s}}_{a}(\tau),s)\) and \(I^{\mbox{as}}_{a}(\tau \cdot s)=\mathrm{Up}^{\mbox{as}}(I^{\mbox{as}}_{a}(\tau),s)\).

3.2 Powerset Construction

Given a model \(M\), we define a powerset model \(\widehat{M}\) of exponential size in which formulas of alternation depth 1 can be evaluated positionally. States of \(\widehat{M}\) contain, in addition to the current state in \(M\), the current information set of each agent. This construction can be instantiated either for synchronous or asynchronous perfect recall by choosing the appropriate update function for information sets. In the following, we will often omit to specify which semantics is considered, because the construction and reasoning work for both.

Definition 3.5.

Given a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\), we define the powerset model \(\widehat{M}=(\text {AP},\widehat{S},\widehat{R},\widehat{V},\lbrace \widehat{\sim }_{a}\rbrace _{a\in \mathit {Ag}},\widehat{s}^{\,\iota })\), where

\(\widehat{S}= S\times (2^{S})^\mathit {Ag}\)

\((s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\, \widehat{R}\, (s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})\) if \(s\,R\, s^{\prime }\) and for each \(a\in \mathit {Ag}\), \(I_{a}^{\prime }=\mathrm{Up}(I_{a},s^{\prime })\)

\(\widehat{V}(s,\langle I_{a}\rangle _{a\in \mathit {Ag}})=V(s)\)

for \(b\in \mathit {Ag}\), \((s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\,\widehat{\sim }_{b}\,(s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})\) if \(s^{\prime } \in I_{b}\) and \(I_{b}=I_{b}^{\prime }\)

\(\widehat{s}^{\,\iota }=({{s}^\iota },\langle \lbrace {{s}^\iota }\rbrace \rangle _{a\in \mathit {Ag}}).\)

Note that \(|\widehat{M}| = |M|\, 2^{|\mathit {Ag}||M|}\). Because the update of information sets with a new state is deterministic, every history \(\tau\) in \(M\) defines a unique history \(\widehat{\tau }\) of length \(|\tau |\) in \(\widehat{M}\) that starts in \(\widehat{s}^{\,\iota }\) and follows the transitions of \(\tau\), and this naturally extends to (infinite) paths. The next lemma follows from the definition of \(\widehat{M}\) and \(\widehat{\tau }\) and an application of Lemma 3.4.

Lemma 3.6.

For every history \(\tau\), \(\mbox{lst}(\widehat{\tau })=(\mbox{lst}(\tau),\langle I_{a}(\tau)\rangle _{a\in \mathit {Ag}})\).
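Because the updates are deterministic, \(\widehat{\tau }\) can be tracked by a simple fold over the history. The following Python sketch (an illustration under the same hypothetical encoding as before, with one update function per agent) makes this concrete:

```python
def hat_history(history, agents, up):
    """Track the powerset-model state (s, <I_a>_a) along a history.
    Every agent starts with information set {s^iota}; `up[a]` is the
    update function of agent a (either the synchronous or the
    asynchronous variant)."""
    s = history[0]
    info = {a: frozenset([s]) for a in agents}
    track = [(s, dict(info))]
    for s in history[1:]:
        info = {a: frozenset(up[a](info[a], s)) for a in agents}
        track.append((s, dict(info)))
    return track
```

The last element of the returned list is exactly \(\mbox{lst}(\widehat{\tau })\) as described by Lemma 3.6.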

In \(\widehat{M}\), the information contained in states is sufficient to evaluate epistemic formulas positionally. However, since states contain only one “level” of knowledge, we cannot evaluate nested knowledge operators for different agents. But, we can evaluate nested knowledge operators for the same agent, because \(\approx _{a}\) is an equivalence relation: If \(\tau \approx _{a}\tau ^{\prime }\), then agent \(a\) knows the same things in \(\tau\) and in \(\tau ^{\prime }\). This is reflected in the definition of \(\widehat{\sim }_{a}\).

The following proposition establishes that for formulas of alternation depth at most one, the memoryless semantics on the powerset model is equivalent to the perfect-recall semantics on the original model (in the proposition below, \(\models\) can be either \(\models _{\mbox{s}}\) or \(\models _{\mbox{as}}\), and the right-hand part then refers to the corresponding powerset construction).

Proposition 3.7.

For every history formula \(\varphi\) and path formula \(\psi\) of alternation depth at most one, each model \(M\), history \(\tau\), path \(\pi\), and time \(n\in \mathbb {N}\), \(\begin{align*} \tau \models \varphi &\quad \mbox{iff}\quad \mbox{lst}(\widehat{\tau })\models _m\varphi , \mbox{ and}\\ \pi ,n\models \psi &\quad \mbox{iff}\quad \widehat{\pi }_{\ge n}\models _m\psi . \end{align*}\)

Proof.

The proof is by induction on formulas. We only treat the cases of atomic propositions, path quantifier, and knowledge operators; all remaining cases follow directly by definition of the semantics and application of the induction hypothesis.

[\(\varphi =p\):] By Lemma 3.6, \(\widehat{\tau }\) ends in state \((\mbox{lst}(\tau),\langle I_{a}(\tau)\rangle _{a\in \mathit {Ag}})\), so by definition of \(\widehat{M}\), we have \(\widehat{V}(\mbox{lst}(\widehat{\tau }))=V(\mbox{lst}(\tau))\), and the result follows.

[\(\varphi =\mathit {E}\psi\):] It is enough to observe that \(\Pi (\mbox{lst}(\widehat{\tau }))=\lbrace \widehat{\pi }_{\ge |\tau |-1}\mid \tau \preccurlyeq \pi \rbrace\); the result then follows by induction hypothesis.

[\(\varphi ={\mathit {K}_{b}}\varphi ^{\prime }\):] Let us write \(\mbox{lst}(\widehat{\tau })=(s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\), and recall that by Lemma 3.6, for each \(a\in \mathit {Ag}\), (1) \(\begin{equation} I_{a}=I_{a}(\tau)=\lbrace \mbox{lst}(\tau ^{\prime })\mid \tau \approx _{a}\tau ^{\prime }\rbrace . \end{equation}\)

Assume that \(\mbox{lst}(\widehat{\tau })\models _m\mathit {K}_{b}\varphi ^{\prime }\). By definition of \(\widehat{M}\) and \(\models _m\), \(\widehat{s}\,^{\prime }\models _m\varphi ^{\prime }\) for all \(\widehat{s}\,^{\prime }=(s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})\) with \(s^{\prime }\in I_{b}\) and \(I_{b}^{\prime }=I_{b}\). Now, let \(\tau ^{\prime }\approx _{b}\tau\). By Equation (1), \(\mbox{lst}(\tau ^{\prime })\in I_{b}\), hence \((\mbox{lst}(\tau ^{\prime }),\langle I_{a}\rangle _{a\in \mathit {Ag}})\models _m\varphi ^{\prime }\). Now, note that \(I_{b}=I_{b}(\tau)=I_{b}(\tau ^{\prime })\), and since \(\mathit {K}_{b}\varphi ^{\prime }\) is of alternation depth one, \(\varphi ^{\prime }\) does not contain any operator \(\mathit {K}_{a}\) for \(a\ne b\). As a result, the semantics of \(\varphi ^{\prime }\) does not depend on \(I_{a}\) for \(a\ne b\), and thus we also have \((\mbox{lst}(\tau ^{\prime }),\langle I_{a}(\tau ^{\prime })\rangle _{a\in \mathit {Ag}})\models _m\varphi ^{\prime }\). By induction hypothesis, we conclude that \(\tau ^{\prime }\models \varphi ^{\prime }\). Since \(\tau ^{\prime }\) was an arbitrary history such that \(\tau ^{\prime }\approx _{b}\tau\), it follows that \(\tau \models \mathit {K}_{b}\varphi ^{\prime }\).

For the other direction, assume that \(\tau \models \mathit {K}_{b}\varphi ^{\prime }\). To show that \(\mbox{lst}(\widehat{\tau })\models _m\mathit {K}_{b}\varphi ^{\prime }\), we take some \(\widehat{s}\,^{\prime }~\widehat{\sim }_{b}\,\mbox{lst}(\widehat{\tau })\) and show that \(\widehat{s}\,^{\prime }\models _m\varphi ^{\prime }\). By definition of \(\widehat{M}\) and by Equation (1), \(\widehat{s}\,^{\prime }\) is of the form \(\widehat{s}\,^{\prime }=(s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})\) with \(s^{\prime }\in I_{b}(\tau)\) and \(I_{b}^{\prime }=I_{b}(\tau)\). Let \(\tau ^{\prime }\) be such that \(\tau ^{\prime }\approx _{b} \tau\) and \(\mbox{lst}(\tau ^{\prime })=s^{\prime }\). Since \(\tau \models \mathit {K}_{b}\varphi ^{\prime }\), we have \(\tau ^{\prime }\models \varphi ^{\prime }\), and by induction hypothesis \(\mbox{lst}(\widehat{\tau ^{\prime }})\models _m\varphi ^{\prime }\). Now, by Lemma 3.6, \(\mbox{lst}(\widehat{\tau ^{\prime }})=(s^{\prime },\langle I_{a}(\tau ^{\prime })\rangle _{a\in \mathit {Ag}})\), and since \(\tau \approx _{b}\tau ^{\prime }\), we have \(I_{b}(\tau)=I_{b}(\tau ^{\prime })\), so \(\mbox{lst}(\widehat{\tau ^{\prime }})\) and \(\widehat{s}\,^{\prime }\) may differ only on \(I_{a}\) for \(a\ne b\). Since \(\mathit {K}_{b}\varphi ^{\prime }\) has alternation depth one, \(\varphi ^{\prime }\) contains no \(\mathit {K}_{a}\) for \(a\ne b\), and thus its semantics does not depend on \(I_{a}\) for \(a\ne b\), so \(\widehat{s}\,^{\prime }\models _m\varphi ^{\prime }\).

 □

We now establish the upper bounds in Theorem 2.13. We start with the base case, showing that we can model check formulas of alternation depth at most one in polynomial space. We then use a marking algorithm on the powerset construction to show that model-checking formulas of alternation depth \(k+1\) reduces to model checking formulas of alternation depth \(k\) on an exponentially larger model. All the reasoning in this section is independent of the chosen semantics, synchronous or asynchronous perfect recall.

3.3 Alternation Depth One

For the base case, i.e., for formulas of alternation depth at most one, we combine three existing ideas to obtain a Pspace model-checking procedure. The first one is the original Pspace model-checking procedure for LTL by Sistla and Clarke [55], which can be seen as an on-the-fly construction and resolution of Büchi automata for LTL formulas [63]. We extend this algorithm with an on-the-fly construction of the powerset model, which allows us to evaluate LTLK formulas of alternation depth one in polynomial space. It was already noted in Reference [1], for a variant of CTLK, that the powerset construction can be done on-the-fly. Finally, we use the meta-algorithm by Emerson and Lei [21] to extend this procedure from LTLK to the full branching-time setting of \(\texttt { CTL}^{*}\texttt {K}\).

In the main model-checking procedure, formulas of the form \(\mathit {E}\psi\) are dealt with by guessing a path in the powerset model, which is built on-the-fly, and evaluating \(\psi\) on it in polynomial space. However, for our algorithm to terminate, we need to bound the length of paths that need to be searched, and we need this bound to be at most exponential so we can count up to it in polynomial space. We now prove that it is indeed the case.

An infinite word \(w\) is ultimately periodic if there exist \(i,j\in \mathbb {N}\) with \(i\le j\) such that \(w=w_{\le i-1}(w_{[i,j]})^\omega\). Letting \(i\) and \(j\) be the smallest such values, we call \(i\) the start index of \(w\), and \(j-i+1\) is called its period.

Lemma 3.8.

Let \(\psi\) be a \(\texttt { CTL}^{*}\texttt {K}\) path formula of alternation depth at most 1, let \(M\) be a model, \(\widehat{M}\) the powerset model, and \(\widehat{s}\) a state in \(\widehat{M}\). If \(\widehat{s}\models _m\mathit {E}\psi\), then there exists a path \(\widehat{\pi }\) starting in \(\widehat{s}\) such that \(\widehat{\pi }\models _m\psi\) and \(\widehat{\pi }\) is ultimately periodic with start index and period less than \(|M| 2^{|\mathit {Ag}| |M|+|\psi |}\).

Proof.

According to Proposition 3.7, we can evaluate the maximal history subformulas of \(\psi\) positionally in states of \(\widehat{M}\), mark states where these subformulas hold with fresh atoms \(\text {AP}_f\), and replace the maximal history subformulas in \(\psi\) with these atoms. We thus get a marked powerset model \(\widehat{M}^{\prime }\) and a formula \(\psi ^{\prime }\) of alternation depth 0 such that \(\widehat{\pi }\models _m\psi\) iff \(\widehat{\pi }^{\prime }\models _m\psi ^{\prime }\) (see the proof of Proposition 3.10 below for more details on this classic construction).

Since \(\psi ^{\prime }\) is an LTL formula, one can build a nondeterministic Büchi word automaton \(\mathcal {A}_{\psi ^{\prime }}\) of size at most \(2^{|\psi ^{\prime }|}\) that accepts precisely the infinite words on \(2^{\text {AP}\cup \text {AP}_f}\) that satisfy \(\psi ^{\prime }\) [63]. By taking the product of \(\mathcal {A}_{\psi ^{\prime }}\) with \(\widehat{M}^{\prime }\), we obtain an automaton \(\mathcal {A}_{\widehat{M}^{\prime },\psi ^{\prime }}\) over the states of \(\widehat{M}^{\prime }\) that has size at most \(|M|2^{|\mathit {Ag}||M|+|\psi ^{\prime }|}\) and accepts precisely paths in \(\widehat{M}^{\prime }\) that satisfy \(\psi ^{\prime }\). By definition of the Büchi acceptance condition, there exists such a path if and only if there is an ultimately periodic path in the transition graph of \(\mathcal {A}_{\widehat{M}^{\prime },\psi ^{\prime }}\) with an accepting state in the periodic part. So, if there is such a path \(\widehat{\pi }^{\prime }\), then there is an ultimately periodic one with start index and period less than the size of \(\mathcal {A}_{\widehat{M}^{\prime },\psi ^{\prime }}\), i.e., \(|M| 2^{|\mathit {Ag}| |M|+|\psi |}\). Finally, observe that this path \(\widehat{\pi }^{\prime }\) such that \(\widehat{\pi }^{\prime }\models _m\psi ^{\prime }\) defines an ultimately periodic path \(\widehat{\pi }\) in the unmarked powerset model \(\widehat{M}\), with same start index and period, such that \(\widehat{\pi }\models _m\psi\). □

We now adapt Emerson and Lei’s algorithm, which shows how to turn a polynomial-space model-checking procedure for LTL into one for CTL* that also runs in polynomial space [21]. The interesting case is for formulas of the form \(\mathit {E}\psi\). The proof of Lemma 3.8 provides a model-checking procedure for such formulas, but building the full automaton \(\mathcal {A}_\psi\) and powerset model \(\widehat{M}\) takes exponential space. We tackle this by building them both on-the-fly. The marking procedure of maximal history subformulas in states of \(\widehat{M}\) is replaced with recursive calls to the model-checking procedure for \(\texttt { CTL}^{*}\texttt { K}_{1}\), and by Lemma 3.8, we can implement in polynomial space a counter that indicates when the nondeterministic search of a satisfying path can be stopped.

Proposition 3.9.

Model checking \(\texttt { CTL}^{*}\texttt {K}\) formulas of alternation depth one is in Pspace both for synchronous and asynchronous perfect-recall semantics.

Proof.

We define algorithm \(\mbox{MC}^1(\varphi ,M,s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\), which takes as input a \(\texttt { CTL}^{*}\texttt {K}\) history formula \(\varphi\) of alternation depth at most one, a model \(M\), and a state \((s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\) of the powerset model \(\widehat{M}\), and returns true if \((s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\models _m\varphi\), and false otherwise. The algorithm is the same for the synchronous and asynchronous case, which are obtained by instantiating \(\mathrm{Up}\) with \(\mathrm{Up}^{\mbox{s}}\) and \(\mathrm{Up}^{\mbox{as}}\), respectively. The algorithm is defined by induction on \(\varphi\) as follows:

[\(\varphi =p\):] return \(p\in V(s)\)

[\(\varphi =\varphi _1 \vee \varphi _2\):] return \(\mbox{MC}^1(\varphi _1,M,s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\) or \(\mbox{MC}^1(\varphi _2,M,s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\)

[\(\varphi =\lnot \varphi ^{\prime }\):] return not \(\mbox{MC}^1(\varphi ^{\prime },M,s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\)

[\(\varphi ={\mathit {K}_{b}}\varphi ^{\prime }\):] return \(\mathop{\text{And}}\limits _{{s^{\prime }\in I_{b}}}\;\mbox{MC}^1(\varphi ^{\prime },M,s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})\), where \(I_{a}^{\prime }= {\left\lbrace \begin{array}{ll} I_{a}& \text{if }a=b,\\ \emptyset & \text{otherwise.} \end{array}\right.}\)

[\(\varphi =\mathit {E}\psi\):] Let \(\textrm {MaxSub}(\psi)\) be the set of maximal history subformulas of \(\psi\), let \(\psi ^{\prime }\) be the LTL formula obtained from \(\psi\) by considering subformulas in \(\textrm {MaxSub}(\psi)\) as atoms, and let \(\textrm {Cl}(\psi)\) be the closure of \(\mbox{Sub}(\psi ^{\prime })\) under negation. First, guess a subset \(S_1\subseteq \textrm {Cl}(\psi)\) of formulas that currently hold in state \((s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\). Check Boolean consistency, i.e., check that the following two conditions hold:

\(\quad \varphi _1\vee \varphi _2\in S_1\) iff \(\varphi _1\in S_1\) or \(\varphi _2\in S_1\)

\(\quad \lnot \varphi ^{\prime }\in S_1\) iff \(\varphi ^{\prime }\notin S_1\) .

Check that \(\psi ^{\prime }\in S_1\). Also, check that the truth of maximal history subformulas was guessed correctly: For all \(\varphi ^{\prime }\in \textrm {MaxSub}(\psi)\cap S_1\), check that \(\mbox{MC}^1(\varphi ^{\prime },M,s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\) returns true.

Now, by Lemma 3.8, we know that if there exists a path that satisfies \(\psi\), there exists an ultimately periodic one with start index and period less than \(|M| 2^{|\mathit {Ag}| |M|+|\psi |}\). So, let us guess \(n_1,n_2\le |M| 2^{|\mathit {Ag}| |M|+|\psi |}\), representing, respectively, the start index and the period of the ultimately periodic path that the algorithm is going to guess. Set a counter \(\texttt {c}\) to zero.

While \(\texttt {c}\lt n_1\), do:

Step \({\left\lbrace \begin{array}{ll} \mbox{guess }s^{\prime }\in R(s)\\ \text{for each }a\in \mathit {Ag}, I_{a}^{\prime }:=\mathrm{Up}(I_{a},s^{\prime }) \\ \mbox{guess a set }S_2\subseteq \textrm {Cl}(\psi)\\ \mbox{check Boolean consistency of $S_2$}\\ \mbox{check dynamic consistency of $S_1$ and $S_2$:}\\ \quad \mathit {X}\varphi ^{\prime }\in S_1\mbox{ iff } \varphi ^{\prime }\in S_2, \mbox{ and}\\ \quad \varphi _1\mathit {U}\varphi _2\in S_1\mbox{ iff } \mbox{$\varphi _2\in S_1$, or ($\varphi _1\in S_1$ and $\varphi _1\mathit {U}\varphi _2\in S_2$)}\\ \mbox{check the truth of $\textrm {MaxSub}(\psi)\cap S_2$ on the new state $(s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}})$:}\\ \quad \mbox{for all } \varphi ^{\prime }\in \textrm {MaxSub}(\psi)\cap S_2, \mbox{ check that } \mbox{MC}^1(\varphi ^{\prime },M,s^{\prime },\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}}).\\ s:=s^{\prime }, \langle I_{a}\rangle _{a\in \mathit {Ag}}:=\langle I_{a}^{\prime }\rangle _{a\in \mathit {Ag}}, S_1:=S_2, \texttt {c}:=\texttt {c}+1 \end{array}\right.}\)

Once \(\texttt {c}=n_1\), let \(S^{\mathrm{period}}:=S_1\), \(s^{\mathrm{period}}:=s\), \(\langle I_{a}\rangle _{a\in \mathit {Ag}}^{\mathrm{period}}:=\langle I_{a}\rangle _{a\in \mathit {Ag}}\), and \(\texttt {c}:=0\).

While \(\texttt {c}\lt n_2\), do:

Mark which eventualities (formulas of the form \(\varphi _1\mathit {U}\varphi _2\)) in \(S^{\mathrm{period}}\) are satisfied

Execute Step

Once \(\texttt {c}=n_2\), check that \(s=s^{\mathrm{period}}\) and \(\langle I_{a}\rangle _{a\in \mathit {Ag}}=\langle I_{a}\rangle _{a\in \mathit {Ag}}^{\mathrm{period}}\). If it is the case, then we indeed guessed an ultimately periodic path in the powerset model, and we just need to check that all eventualities in \(S^{\mathrm{period}}\) have been satisfied somewhere in the period. Return true if it is the case, false otherwise.

Since the algorithm simply follows the memoryless semantics, its correctness follows from Proposition 3.7, together with Lemma 3.8 for the bound on the length of paths. We thus have that \(M\models \varphi\) if, and only if, \(\mbox{MC}^1(\varphi ,M,{{s}^\iota },\langle \lbrace {{s}^\iota }\rbrace \rangle _{a\in \mathit {Ag}})\).

A simple analysis shows that each recursive call uses a polynomial amount of memory: Indeed, subsets of subformulas (\(S_1,S_2,S^{\mathrm{period}}\)) each take space at most \(|\varphi |^2\), tuples of information sets (\(\langle I_{a}\rangle _{a\in \mathit {Ag}}\) and \(\langle I_{a}\rangle _{a\in \mathit {Ag}}^{\mathrm{period}}\)) each take space at most \(|M||\mathit {Ag}|\), and \(n_1\) and \(n_2\) being smaller than \(|M|2^{|\mathit {Ag}| |M|+|\psi |}\), their binary encoding can be stored with \(O(|\mathit {Ag}| |M|+|\psi |)\) bits. Since the number of nested recursive calls is bounded by \(|\varphi |\), the overall procedure uses polynomial space. □
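The existence check at the heart of the \(\mathit {E}\psi\) case, finding an ultimately periodic path whose period satisfies all eventualities, is Büchi non-emptiness. On a small explicit graph it can be sketched deterministically as follows (an illustration only; the paper's algorithm instead guesses the lasso on-the-fly to stay within polynomial space):

```python
def reachable(succ, sources):
    """All states reachable from `sources` (including the sources)."""
    seen, frontier = set(sources), list(sources)
    while frontier:
        for t in succ[frontier.pop()]:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def has_accepting_lasso(succ, start, accepting):
    """Is there a path from `start` to an accepting state on a cycle?
    This is the explicit-graph analogue of guessing the start index n1
    and period n2 of an ultimately periodic satisfying path."""
    for f in reachable(succ, [start]) & set(accepting):
        if f in reachable(succ, succ[f]):  # f can reach itself again
            return True
    return False
```

The bound of Lemma 3.8 corresponds to the fact that any such lasso can be chosen with both its stem and its cycle shorter than the number of product states.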

3.4 Reducing Alternation

We now describe how we can use the powerset construction to eliminate one level of alternation of knowledge operators. This is done with a classic procedure that we recall for completeness.

Proposition 3.10.

Given a \(\texttt { CTL}^{*}\texttt {K}\) formula \(\Phi\) of alternation depth \(k+1\) and a model \(M\), one can compute in exponential space a model \(M^{\prime }\) of size at most \(|M| 2^{|\mathit {Ag}||M|}\) and a \(\texttt { CTL}^{*}\texttt {K}\) formula \(\Phi ^{\prime }\) of alternation depth \(k\) and size \(|\Phi ^{\prime }|\le |\Phi |\) such that \(M\models \Phi\) iff \(M^{\prime }\models \Phi ^{\prime }\).

The construction of \(M^{\prime }\) is as follows: First, build the powerset model \(\widehat{M}\) as in Definition 3.5. In this model, history formulas of alternation depth one can be evaluated positionally, as stated in Proposition 3.7. Let \(\mbox{Sub}_1(\Phi)\) be the set of maximal such formulas in \(\mbox{Sub}(\Phi)\). For each formula \(\varphi\) in \(\mbox{Sub}_1(\Phi)\) and each state \(\widehat{s}\) of \(\widehat{M}\), evaluate whether \(\widehat{s}\models _m\varphi\) (which can be done in space polynomial in \(\widehat{M}\) and \(\varphi\) [35], hence in exponential space, since \(\widehat{M}\) is of size exponential in the original model), and mark state \(\widehat{s}\) with the fresh atomic proposition \(p_\varphi\) if \(\widehat{s}\models _m\varphi\). We abuse notation and still call \(\widehat{M}\) the model obtained after this marking procedure (and similarly, \(\widehat{s}\), \(\widehat{\tau }\), and \(\widehat{\pi }\) refer to states, histories, and paths in the marked model). Also, for every subformula \(\varphi\) of \(\Phi\), define \(\widehat{\varphi }\) by replacing each \(\varphi ^{\prime }\) in \(\mbox{Sub}_1(\varphi)\) with atom \(p_{\varphi ^{\prime }}\) (note that if \(\varphi\) contains no knowledge operator, then \(\widehat{\varphi }=\varphi\)).

Unlike in Proposition 3.7, where we used the memoryless semantics to evaluate formulas of alternation depth one positionally, this time we interpret \(\widehat{\varphi }\) on \(\widehat{M}\) with the perfect-recall semantics. One can prove the following lemma:

Lemma 3.11.

For every history subformula \(\varphi\) and path subformula \(\psi\) of \(\Phi\), every history \(\tau\) and path \(\pi\) in \(M\), \(\begin{align*} \tau \models \varphi &\quad \mbox{iff} \quad \widehat{\tau }\models \widehat{\varphi }\\ \pi ,n\models \psi &\quad \mbox{iff} \quad \widehat{\pi },n\models \widehat{\psi }. \end{align*}\)

Proof.

The proof is by induction on formulas. Again, we only treat the cases of atomic propositions, path quantifier, and knowledge operators; all remaining cases follow directly by definition of the semantics and application of the induction hypothesis.

[\(\varphi =p\):] By Lemma 3.6, \(\widehat{\tau }\) ends in state \((\mbox{lst}(\tau),\langle I_{a}(\tau)\rangle _{a\in \mathit {Ag}})\), so by definition of \(\widehat{M}\), we have \(\widehat{V}(\mbox{lst}(\widehat{\tau }))=V(\mbox{lst}(\tau))\), and we conclude by noting that \(\widehat{p}=p\).

[\(\varphi =\mathit {E}\psi\):] We observe that the set \(\lbrace \pi \mid \tau \preccurlyeq \pi \rbrace\) is in bijection with the set \(\lbrace \widehat{\pi }\mid \widehat{\tau }\preccurlyeq \widehat{\pi }\rbrace\); the result then follows by induction hypothesis and the fact that \(\widehat{\mathit {E}\psi }=\mathit {E}\widehat{\psi }\).

[\(\varphi =\mathit {K}_{a}\varphi ^{\prime }\):] Let us write \(\mbox{lst}(\widehat{\tau })=(s,\langle I_{a}\rangle _{a\in \mathit {Ag}})\). We consider two cases.

If \(\mathrm{ad}(\mathit {K}_{a}\varphi ^{\prime })=1\), then \(\widehat{\varphi }=p_\varphi\). Thus, \(\widehat{\tau }\models p_\varphi\) iff \(\mbox{lst}(\widehat{\tau })\) has been marked with \(p_\varphi\), which by construction is done iff \(\mbox{lst}(\widehat{\tau })\models _m\varphi\), which by Proposition 3.7 is equivalent to \(\tau \models \varphi\), and we are done.

If \(\mathrm{ad}(\mathit {K}_{a}\varphi ^{\prime })\gt 1\), then \(\widehat{\varphi }=\mathit {K}_{a}\widehat{\varphi ^{\prime }}\). In this case \(\tau \models \mathit {K}_{a}\varphi ^{\prime }\) iff (2) \(\begin{equation} \text{for all }\tau ^{\prime }\approx _{a}\tau ,\; \tau ^{\prime }\models \varphi ^{\prime }, \end{equation}\) which by induction hypothesis is equivalent to (3) \(\begin{equation} \text{for all }\tau ^{\prime }\approx _{a}\tau ,\; \widehat{\tau }^{\prime }\models \widehat{\varphi ^{\prime }}. \end{equation}\) Because all histories start in the initial state, there is a bijection between \(\lbrace \tau ^{\prime }\mid \tau ^{\prime }\approx _{a}\tau \rbrace\) and \(\lbrace \widehat{\tau }^{\prime }\mid \widehat{\tau }^{\prime }\approx _{a}\widehat{\tau }\rbrace\). Thus, Equation (3) can be rewritten as (4) \(\begin{equation} \text{for all }\widehat{\tau }^{\prime }\approx _{a}\widehat{\tau },\; \widehat{\tau }^{\prime }\models \widehat{\varphi ^{\prime }}, \end{equation}\) which is equivalent to \(\widehat{\tau }\models \mathit {K}_{a}\widehat{\varphi ^{\prime }}\), and we are done.

 □

Using Proposition 3.10 for the inductive case and Proposition 3.9 for the base case, we easily prove the following by induction on \(k\).

Theorem 3.12.

Model checking \(\texttt { CTL}^{*}\texttt {K}\) formulas of alternation depth at most \(k+1\) is in \(k\)-Expspace, for both synchronous and asynchronous perfect recall.

Skip 4LOWER BOUNDS FOR PERFECT RECALL Section

4 LOWER BOUNDS FOR PERFECT RECALL

In this section, we establish the following result, which provides lower bounds for model checking against \(\texttt { CTL}^{*}\texttt {K}\), matching the upper bounds of Theorem 3.12.

Theorem 4.1.

For \(k\in \mathbb {N}\), the model-checking problem for \(\texttt { CTL}^{*}\texttt { K}_{k+1}\) under both the SPR and APR semantics is \(k\)-Expspace-hard. This is already the case for two agents and a fixed \(\texttt { LTLK}_{k+1} \cap \texttt { CTLK}_{k+1}\) formula.

Theorem 4.1 is proved by a polynomial-time reduction from a domino-tiling problem for grids with rows of length \({\it Tower}(n,k)\) [9], where \(n\) is an input parameter. Formally, an instance \(\mathcal {I}\) of this problem is a tuple \(\mathcal {I}=(C,\Delta ,n,d_{\it in},d_{\it acc})\), where \(C\) is a finite set of colors, \(\Delta \subseteq C^{4}\) is a set of tuples \((c_{\it down},c_{\it left},c_{\it up},c_{\it right})\) of four colors, called domino-types, \(n\gt 0\) is a natural number encoded in unary, and \(d_{\it in},d_{\it acc}\in \Delta\) are domino-types. Given \(k\in \mathbb {N}\), a \(k\)-grid of \(\mathcal {I}\) is a mapping \(f:[0,\ell ]\times [0,{\it Tower}(n,k)-1] \rightarrow \Delta\) for some \(\ell \in \mathbb {N}\). Intuitively, a \(k\)-grid is a finite grid, where each row consists of \({\it Tower}(n,k)\) cells, and each cell contains a domino type. A \(k\)-tiling of \(\mathcal {I}\) is a \(k\)-grid \(f\) satisfying the following additional constraints:

[Initialization:] \(f(0,0)=d_{\it in}\)

[Row adjacency:] two adjacent cells in a row have the same color on the shared edge: for all \((i,j)\in [0,\ell ]\times [0,{\it Tower}(n,k)-2]\), \([f(i,j)]_{{\it right}}=[f(i,j+1)]_{{\it left}}\)

[Column adjacency:] two adjacent cells in a column have the same color on the shared edge: for all \((i,j)\in [0,\ell -1]\times [0,{\it Tower}(n,k)-1]\), \([f(i,j)]_{{\it up}}=[f(i+1,j)]_{{\it down}}\)

[Acceptance:] \(f(\ell ,j)=d_{\it acc}\) for some \(j\in [0,{\it Tower}(n,k)-1]\).

Given \(k\in \mathbb {N}\), the problem of checking the existence of a \(k\)-tiling for \(\mathcal {I}\) is \(k\)-Expspace-complete [9]. Hence, Theorem 4.1 directly follows from the following proposition:
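For concreteness, the four tiling constraints can be phrased as a short Python check. This is an illustrative sketch only (the names tower and is_k_tiling are ours, not part of the reduction); it assumes the usual convention \({\it Tower}(n,0)=n\) and \({\it Tower}(n,h)=2^{{\it Tower}(n,h-1)}\), and represents a \(k\)-grid as a list of rows, each row a list of domino-types \((c_{\it down},c_{\it left},c_{\it up},c_{\it right})\):

```python
def tower(n, h):
    # Tower(n, 0) = n and Tower(n, h) = 2^Tower(n, h-1)
    v = n
    for _ in range(h):
        v = 2 ** v
    return v

def is_k_tiling(f, instance, k):
    """Check the four k-tiling constraints for a grid f (a list of rows,
    each row a list of domino-types (c_down, c_left, c_up, c_right))."""
    _colors, _delta, n, d_in, d_acc = instance
    width = tower(n, k)
    if not f or any(len(row) != width for row in f):
        return False
    DOWN, LEFT, UP, RIGHT = range(4)
    if f[0][0] != d_in:                                     # initialization
        return False
    for row in f:                                           # row adjacency
        if any(row[j][RIGHT] != row[j + 1][LEFT] for j in range(width - 1)):
            return False
    for i in range(len(f) - 1):                             # column adjacency
        if any(f[i][j][UP] != f[i + 1][j][DOWN] for j in range(width)):
            return False
    return d_acc in f[-1]                                   # acceptance
```

The reduction never builds such a grid explicitly, of course: rows have \({\it Tower}(n,k)\) cells, which is why the constraints must instead be checked symbolically on the codes defined below.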

Proposition 4.2.

Let \(k\ge 0\). There is a fixed formula \(\varphi _k\) of \(\texttt { LTLK}_{k+1} \cap \texttt { CTLK}_{k+1}\) such that one can build, in time polynomial in the size of the given instance \(\mathcal {I}\), a Kripke structure \(M_{\mathcal {I},k}\) with two agents for which \(\mathcal {I}\) has a \(k\)-tiling iff \(M_{\mathcal {I},k}\models _{\mbox{s}}\varphi _k\) (respectively, \(M_{\mathcal {I},k}\models _{\mbox{as}}\varphi _k\)).

In Section 4.1, we first provide a proof of Proposition 4.2 for the synchronous setting. Then, in Section 4.2, we explain the easy adaptation to the asynchronous case.

Traces of a Kripke structure. Let \(M\) be a Kripke structure over \(\text {AP}\) with valuation function \(V\). Given a history \(\tau\) (respectively, a path \(\pi\)), the trace of \(\tau\) (respectively, \(\pi\)) is the finite (respectively, infinite) word over \(2^{\text {AP}}\) given by \(V(\tau _0)\ldots V(\tau _{m-1})\) where \(m=|\tau |\) (respectively, \(V(\pi _0)V(\pi _1)\ldots\)). A trace of \(M\) is a trace of some history or path in \(M\).

4.1 Proof of Proposition 4.2 for the synchronous setting

Fix \(k\ge 0\). In the following, we assume that \(k\ge 1\) (the proof of Proposition 4.2 for the case \(k=0\) being simpler). First, we define a suitable encoding of the \(k\)-grids by infinite words over the set \(\text {AP}\) of atomic propositions given by \(\text {AP}= {{\it Main}}\cup {{\it Tags}}\), where

\({{\it Main}}= \lbrace \$,{\it acc},\bot \rbrace \cup (\lbrace \$_{k}\rbrace \times \Delta) \cup (\lbrace \$_1,\ldots ,\$_{k-1}\rbrace \times \lbrace 0,1\rbrace)\cup \lbrace 0,1\rbrace\)

\({{\it Tags}}= \displaystyle {\lbrace \#_1,\#_2, {\it row},{\it col},{\it good}\rbrace \cup \bigcup _{h=1}^{k}\lbrace (h,=),(h,{\it inc})\rbrace .}\)

The propositions in \({{\it Main}}\) are used to encode the \(k\)-grids, while the propositions in \({{\it Tags}}\), whose meaning will be explained later, are used to mark in a suitable way the codes of \(k\)-grids. Essentially, the unmarked code of a \(k\)-grid \(f\) is obtained by concatenating the codes of the rows of \(f\) starting from the first row and adding the suffix \({\it acc}^{\omega }\) if \(f\) satisfies the acceptance requirement, and the suffix \(\bot ^{\omega }\) otherwise. The code of a row is in turn obtained by concatenating the codes of the row’s cells starting from the first cell.

In the encoding of a cell of a \(k\)-grid, we keep track of the content of the cell together with a suitable encoding of the cell number, which is a natural number in \([0,{\it Tower}(n,k)-1]\). To this end, for all \(1\le h\le k\), we define the notions of \(h\)-block and well-formed \(h\)-block. Essentially, for \(1\le h\lt k\), well-formed \(h\)-blocks are finite words over \((\lbrace \$_1,\ldots ,\$_{h}\rbrace \times \lbrace 0,1\rbrace)\cup \lbrace 0,1\rbrace\), which encode integers in \([0,{\it Tower}(n,h)-1]\), while well-formed \(k\)-blocks are finite words over \({{\it Main}}\setminus \lbrace \$,{\it acc},\bot \rbrace\), which encode the cells of \(k\)-grids. In particular, for \(h\gt 1\), a well-formed \(h\)-block encoding a natural number \(m\in [0,{\it Tower}(n,h)-1]\) is a sequence of \({\it Tower}(n,h-1)\) \((h-1)\)-blocks, where the \(i\)th \((h-1)\)-block encodes both the value and (recursively) the position of the \(i\)th bit in the binary representation of \(m\). Formally, the set of (well-formed) \(h\)-blocks is defined by induction on \(h\) as follows:

Base Step: \(h=1\). The notions of 1-block and well-formed 1-block coincide, and a 1-block is a finite word \({\it bl}\) of length \(n+1\) having the form \({\it bl}=(\$_{1},\tau){\it bit}_1\ldots {\it bit}_n\) such that \({\it bit}_1,\ldots ,{\it bit}_n\in \lbrace 0,1\rbrace\) and \(\tau \in \lbrace 0,1\rbrace\) if \(k\gt 1\), and \(\tau \in \Delta\) otherwise. For all \(1\le \ell \le n\), we say that \({\it bit}_\ell\) is the \(\ell\)th bit of \({\it bl}\). The content of \({\it bl}\) is \(\tau\), and the index of \({\it bl}\) is the natural number in \([0,{\it Tower}(n,1)-1]\) (recall that \({\it Tower}(n,1)=2^n\)) whose binary code is \({\it bit}_1\ldots {\it bit}_n\).3 The 1-block \({\it bl}\) is initial (respectively, final) if \({\it bit}_i=0\) (respectively, \({\it bit}_i=1\)) for all \(1\le i\le n\).

Induction Step: \(1\lt h\le k\). An \(h\)-block is a finite word \({\it bl}\) having the form \((\$_{h},\tau)\, {\it bl}_0 \ldots {\it bl}_j\) such that \(j\gt 0\), \({\it bl}_0,\ldots ,{\it bl}_j\) are \((h-1)\)-blocks, and \(\tau \in \lbrace 0,1\rbrace\) if \(h\lt k\), and \(\tau \in \Delta\) otherwise. Additionally, we require that \({\it bl}_0\) is initial, \({\it bl}_j\) is final, and for all \(0\lt i\lt j\), \({\it bl}_i\) is not final. The content of \({\it bl}\) is \(\tau\). The \(h\)-block \({\it bl}\) is initial (respectively, final) if the content of \({\it bl}_i\) is 0 (respectively, 1) for all \(0\le i\le j\). The \(h\)-block \({\it bl}\) is well-formed if additionally, the following holds: \(j={\it Tower}(n,h-1)-1\) and for all \(0\le i\le j\), \(bl_i\) is well-formed and has index \(i\). If \({\it bl}\) is well-formed, then its index is the natural number in \([0,{\it Tower}(n,h)-1]\) whose binary code is given by \({\it bit}_0,\ldots ,{\it bit}_j\), where \({\it bit}_i\) is the content of the sub-block \({\it bl}_i\) for all \(0\le i\le j\).
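As a sanity check on this definition, the following Python sketch (with hypothetical names; recall \({\it Tower}(n,1)=2^{n}\) and \({\it Tower}(n,h)=2^{{\it Tower}(n,h-1)}\)) recovers the index of a well-formed \(h\)-block, given as a flat list in which a block is its header symbol followed by its bits (for \(h=1\)) or its sub-blocks (for \(h\gt 1\)), bits being listed least-significant first:

```python
def tower(n, h):
    # Tower(n, 1) = 2^n and Tower(n, h) = 2^Tower(n, h-1)
    v = n
    for _ in range(h):
        v = 2 ** v
    return v

def block_len(n, h):
    """Length of a well-formed h-block: a header, then Tower(n, h-1) sub-blocks."""
    if h == 1:
        return n + 1
    return 1 + tower(n, h - 1) * block_len(n, h - 1)

def block_index(n, h, block):
    """Index of a well-formed h-block; also checks that sub-block i has index i."""
    if h == 1:
        # bits follow the header, least significant bit first
        return sum(b << i for i, b in enumerate(block[1:]))
    step = block_len(n, h - 1)
    subs = [block[1 + i * step: 1 + (i + 1) * step]
            for i in range(tower(n, h - 1))]
    assert all(block_index(n, h - 1, sb) == i for i, sb in enumerate(subs))
    # bit i of the index is the content stored in the header of sub-block i
    return sum(sb[0][1] << i for i, sb in enumerate(subs))
```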

Encoding of \(k\)-grids. A row-code is a finite word \(w_r=\${\it bl}_0\ldots {\it bl}_j\) satisfying the following:

\({\it bl}_0,\ldots ,{\it bl}_j\) are \(k\)-blocks;

\({\it bl}_0\) is initial, \({\it bl}_j\) is final, and no other \({\it bl}_i\) is final.

The row-code \(w_r\) is well-formed if additionally, \(j={\it Tower}(n,k)-1\) and for all \(0\le i\le j\), \({\it bl}_i\) is well-formed and has index \(i\). A \(k\)-grid code (respectively, well-formed \(k\)-grid code) is an infinite word over \(\text {AP}\) of the form \(w\cdot \tau ^{\omega }\) such that (i) \(w\) is a finite sequence of row-codes (respectively, well-formed row-codes), and (ii) \(\tau ={\it acc}\) if the last row-code of \(w\) contains a \(k\)-block whose content is \(d_{\it acc}\) (acceptance), and \(\tau =\bot\) otherwise. A \(k\)-grid code is initialized if the first \(k\)-block of the first row-code has content \(d_{\it in}\). Note that while \(k\)-grid codes encode grids of \(\mathcal {I}\) having rows of arbitrary length, well-formed \(k\)-grid codes encode the \(k\)-grids of \(\mathcal {I}\). In particular, there is exactly one well-formed \(k\)-grid code associated with a given \(k\)-grid of \(\mathcal {I}\).

Example 4.3.

Let \(n=2\) and \(k=2\). In this case, \({\it Tower}(n,k)=16\) and \({\it Tower}(n,k-1)=4\). Thus, we can encode by well-formed 2-blocks all the integers in \([0,15]\). For example, let us consider the number 14, whose binary code (using \({\it Tower}(n,k-1)=4\) bits) is given by 0111 (assuming that the first bit is the least significant one). For each \(d\in \Delta\), the well-formed 2-block with content \(d\) and index 14 (encoding a grid-cell with content \(d\) and cell number 14) is given by \(\begin{equation*} (\$_{2},d)(\$_{1},0) 00(\$_{1},1) 10(\$_{1},1) 01(\$_{1},1) 11. \end{equation*}\) Note that we also encode the position of each bit in the binary code of 14. Now, let us consider a word \(r= d_0,\ldots ,d_{15}\) over \(\Delta\) of length 16 representing a row of a \(k\)-grid. The row \(r\) is encoded by the word \(w_r\) over \({{\it Main}}\) obtained from \(r\) by replacing, for each \(0\le i\le 15\), the \(i\)th symbol \(d_i\) of \(r\) with the well-formed 2-block having content \(d_i\) and index \(i\) (so, we also encode the position of \(d_i\) along the row \(r\)). Formally, \(w_r\) is given by \(\begin{equation*} w_r=\$(\$_{2},d_0)v_0 \ldots (\$_{2},d_{15})v_{15}, \end{equation*}\) where for each \(0\le i\le 15\), \((\$_{2},d_i)v_i\) is the well-formed 2-block with content \(d_i\) and index \(i\): \(\begin{equation*} v_i= (\$_{1},h_{i,0}) 00(\$_{1},h_{i,1}) 10(\$_{1},h_{i,2}) 01(\$_{1},h_{i,3}) 11 , \end{equation*}\) where \(h_{i,0}h_{i,1}h_{i,2}h_{i,3}\) is the binary code of \(i\).
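The block of Example 4.3 can be produced mechanically. Below is an illustrative Python sketch (the function name is ours) that builds the well-formed \(h\)-block with a given content and index, representing a block as its header pair followed by its bits or sub-blocks, bits listed least-significant first:

```python
def tower(n, h):
    # Tower(n, 1) = 2^n and Tower(n, h) = 2^Tower(n, h-1)
    v = n
    for _ in range(h):
        v = 2 ** v
    return v

def encode_block(n, h, index, content):
    """Well-formed h-block with the given content and index."""
    if h == 1:
        # ($_1, content) bit_1 ... bit_n, least significant bit first
        return [("$1", content)] + [(index >> i) & 1 for i in range(n)]
    block = [(f"${h}", content)]
    for i in range(tower(n, h - 1)):
        # sub-block i stores bit i of `index` as its content and i as its index
        block += encode_block(n, h - 1, i, (index >> i) & 1)
    return block
```

For \(n=2\), encode_block(2, 2, 14, "d") yields exactly the 2-block with content \(d\) and index 14 displayed above.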

Construction of \(M_{\mathcal {I},k}\) in Proposition 4.2 for the synchronous case. For a fixed \(k\ge 1\), we now illustrate the construction of the finite Kripke structure \(M_{\mathcal {I},k}\) over two agents, say, \(a_1\) and \(a_2\), in Proposition 4.2. Essentially, \(M_{\mathcal {I},k}\) nondeterministically generates all the initialized \(k\)-grid codes, with the additional ability of nondeterministically marking some positions with the propositions in \({{\it Tags}}\). This marking is exploited for checking, by a suitable fixed \(\texttt { LTLK}_{k+1} \cap \texttt { CTLK}_{k+1}\) formula, that one of the initialized \(k\)-grid codes generated by \(M_{\mathcal {I},k}\) is well-formed and encodes a \(k\)-tiling of \(\mathcal {I}\). The main idea is to decompose the verification that a \(k\)-grid code \(\nu\) is well-formed and encodes a \(k\)-tiling into layers implementable with polynomially many states of \(M_{\mathcal {I},k}\), where each layer corresponds to a tagged version of some prefix of \(\nu\), and to invoke other layers for \(\nu\) thanks to the knowledge modalities for the two agents \(a_1\) and \(a_2\). In particular, the propositions in \({{\it Main}}\) are observable by both agents, while the tag propositions in \({{\it Tags}}\setminus \lbrace \#_1,\#_2\rbrace\) are observable by neither agent. For the remaining two tag propositions, \(\#_1\) is observable by agent \(a_1\) but not by agent \(a_2\), and symmetrically for \(\#_2\).

We now define the marking performed by the Kripke structure \(M_{\mathcal {I},k}\). For a word \(w\) over \(2^{\text {AP}}\), the content of \(w\) is the word over \(2^{\text {AP}\setminus {{\it Tags}}}\) obtained by removing from each letter in \(w\) the propositions in \({{\it Tags}}\). Let \(1\le h\le k\). A tagged \(h\)-block is a word \({\it bl}\) over \(2^{\text {AP}}\) whose content is an \(h\)-block and:

the initial position of \({\it bl}\) is marked by the tag \(\#_1\) if \(h\) is odd, and the tag \(\#_2\) otherwise;

if \(h=1\), then there is \(1\le \ell \le n\) such that the \(\ell\)th bit of \({\it bl}\) is marked by the tag \(\#_2\);

if \(h\gt 1\), then there is exactly one \((h-1)\)-sub-block \({\it sb}\) of \({\it bl}\) whose first position is marked by the tag \(\#_1\) if \(h-1\) is odd, and the tag \(\#_2\) otherwise;

no other position of \({\it bl}\) is marked.

A simple tagged \(h\)-block \({\it bl}\) is defined in a similar way, but we require that only the first position of \({\it bl}\) is marked, with \(\#_1\) if \(h\) is odd, and \(\#_2\) otherwise. See Figure 2 for an illustration of a tagged \(h\)-block and Figure 3 for a simple tagged \(h\)-block.


Fig. 2. Tagged \(h\)-block, with odd \(h\gt 1\).


Fig. 3. Simple tagged \(h\)-block, with odd \(h\gt 1\).

The initialized \(k\)-grid codes are marked by the Kripke structure \(M_{\mathcal {I},k}\) as follows: A tagged \(k\)-grid code is an infinite word \(\nu\) over \(2^{\text {AP}}\) such that the content of \(\nu\) is an initialized \(k\)-grid code and one of the following holds:

\((h,=)\)-tagging with \(1\le h\le k\): there are two tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) along \(\nu\) such that \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\) and:

if \(h=1\), then the marked bit of \({\it bl}\) has the same position as the marked bit of \({\it bl}\hspace{1.00006pt}^{\prime }\);

each position in \(\nu\) following the last position of \({\it bl}\hspace{1.00006pt}^{\prime }\) is marked by the tags in \(O\) where \(\begin{equation*} \lbrace (h,=)\rbrace \subseteq O\subseteq \lbrace (h,=),{\it good}\rbrace , \end{equation*}\) and \({\it good}\in O\) iff either \(h=1\) and the marked bit of \({\it bl}\) has the same value as the marked bit of \({\it bl}\hspace{1.00006pt}^{\prime }\), or \(h\gt 1\) and the marked sub-block of \({\it bl}\) and the marked sub-block of \({\it bl}\hspace{1.00006pt}^{\prime }\) have the same content.

no other position of \(\nu\) is marked.

\((h,{\it inc})\)-tagging with \(1\le h\le k\): there are two tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) along \(\nu\) such that \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\) and:

if \(h=1\), then the marked bit of \({\it bl}\) has the same position as the marked bit of \({\it bl}\hspace{1.00006pt}^{\prime }\);

\({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are adjacent within the same \((h+1)\)-block if \(h\lt k\), and within the same row-code otherwise;

each position in \(\nu\) following the last position of \({\it bl}\hspace{1.00006pt}^{\prime }\) is marked by the tags in \(O\) where \(\lbrace (h,{\it inc})\rbrace \subseteq O\subseteq \lbrace (h,{\it inc}),{\it good}\rbrace\), and:

case \(h=1\): let \(i_0\) be the position of the least significant (i.e., leftmost) bit of \({\it bl}\) whose value is 0 (note that, since \({\it bl}\hspace{1.00006pt}^{\prime }\) comes after \({\it bl}\), \({\it bl}\) is not final and thus such a bit exists). Then, we require that \({\it good}\in O\) if and only if the marked bits of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have the same value if the marked bit of \({\it bl}\) follows the \(i_0\)th bit of \({\it bl}\), and the marked bits of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have distinct value otherwise.

case \(h\gt 1\): let \({\it sb}_0\) be the first \((h-1)\)-sub-block of \({\it bl}\) whose content is 0 (again, since \({\it bl}\hspace{1.00006pt}^{\prime }\) comes after \({\it bl}\), \({\it bl}\) is not final and thus such a sub-block \({\it sb}_0\) exists). Then, we require that \({\it good}\in O\) if and only if the marked sub-blocks of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have the same content if the marked sub-block of \({\it bl}\) follows \({\it sb}_0\), and the marked sub-blocks of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have distinct content otherwise.

no other position of \(\nu\) is marked (see Figure 4).


Fig. 4. \((h,{\it inc})\)-tagging, with odd \(h\gt 1\).
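The position-wise comparison behind the \((h,{\it inc})\)-tagging is the standard increment test for a counter written least-significant bit first: all bits up to and including position \(i_0\) flip (the carry), and all later bits are unchanged. The following Python sketch (illustrative only, with names of our choosing) shows that this test exactly characterizes consecutive indices:

```python
def to_bits(m, width):
    # m written least-significant bit first, over `width` bits
    return [(m >> i) & 1 for i in range(width)]

def is_increment(bits, bits_next):
    """True iff bits_next encodes the value of bits plus one (same width,
    both least-significant bit first), checked position by position."""
    if 0 not in bits:            # bl is final: no successor within the counter
        return False
    i0 = bits.index(0)           # position of the least significant 0-bit of bl
    # positions after i0 must agree, positions up to i0 must differ
    return all((b == b2) == (i > i0)
               for i, (b, b2) in enumerate(zip(bits, bits_next)))
```

In the reduction, for \(h\gt 1\) the "bits" are themselves \((h-1)\)-blocks, so matching the compared positions requires the nested \((h-1,=)\)-layer described below.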

row-tagging: there are two simple tagged \(k\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) along \(\nu\) such that \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\) and:

\({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are adjacent within the same row-code;

each position in \(\nu\) following the last position of \({\it bl}\hspace{1.00006pt}^{\prime }\) is marked by the tags in \(O\) where \(\lbrace {\it row}\rbrace \subseteq O\subseteq \lbrace {\it row},{\it good}\rbrace\). Moreover, \({\it good}\in O\) iff \([d]_{{\it right}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it left}}\), where \(d\in \Delta\) is the content of \({\it bl}\) and \({\it d}\hspace{1.00006pt}^{\prime }\in \Delta\) is the content of \({\it bl}\hspace{1.00006pt}^{\prime }\);

no other position of \(\nu\) is marked (see Figure 5).


Fig. 5. Row-tagging, with odd \(k\).

column-tagging: there are two simple tagged \(k\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) along \(\nu\) such that \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\) and:

\({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) belong to two adjacent row-codes in \(\nu\);

each position in \(\nu\) following the last position of \({\it bl}\hspace{1.00006pt}^{\prime }\) is marked by the tags in \(O\) where \(\lbrace {\it col}\rbrace \subseteq O\subseteq \lbrace {\it col},{\it good}\rbrace\), and \({\it good}\in O\) iff \([d]_{{\it up}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it down}}\), where \(d\in \Delta\) is the content of \({\it bl}\) and \({\it d}\hspace{1.00006pt}^{\prime }\in \Delta\) is the content of \({\it bl}\hspace{1.00006pt}^{\prime }\);

no other position of \(\nu\) is marked.

A partial tagged \(k\)-grid code is the prefix of some tagged \(k\)-grid code whose last position is labeled by tags in \({{\it Tags}}\setminus \lbrace \#_1,\#_2\rbrace = \lbrace {\it row},{\it col},{\it good}\rbrace \cup \bigcup _{h=1}^{k}\lbrace (h,=),(h,{\it inc})\rbrace\). Thus, we have four different types of partial tagged \(k\)-grid codes \(\rho\), where a type is identifiable by the tag proposition in \({{\it Tags}}\setminus \lbrace \#_1,\#_2,{\it good}\rbrace\), which marks the last position of \(\rho\). The additional proposition \({\it good}\) is used to check whether some additional condition is fulfilled, depending on the specific type. Intuitively, partial \((h,=)\)-tagged \(k\)-grid codes are exploited as nested layers for checking that distinct well-formed \(h\)-blocks along the given initialized \(k\)-grid code have the same index, while partial \((h,{\it inc})\)-tagged \(k\)-grid codes are used as nested layers to check that the indices of adjacent well-formed \(h\)-blocks \({\it bl}_1\) and \({\it bl}_2\) are consecutive (i.e., \({\it bl}_1\) is not final and the index of \({\it bl}_2\) is the index of \({\it bl}_1\) plus one); this is used for testing well-formedness of blocks and \(k\)-grid codes. Finally, partial row-tagged \(k\)-grid codes and partial column-tagged \(k\)-grid codes are exploited as first-level layers for verifying the row adjacency and column adjacency requirements.

As an example, let us consider a partial \((h,{\it inc})\)-tagged \(k\)-grid code \(\rho\) corresponding to some arbitrary prefix of a given initialized \(k\)-grid code \(\nu\). Moreover, assume that \(h\) is odd and the \(h\)-blocks along \(\nu\) are well-formed (this well-formedness requirement will be checked by other marking layers). To check that the two tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) along \(\rho\) have consecutive indices, we consider the set \(\Pi\) of partial \((h,{\it inc})\)-tagged \(k\)-grid codes \(\rho ^{\prime }\) having the same content and the same marked \(h\)-blocks as \(\rho\). So, \(\rho\) and \(\rho ^{\prime }\) differ only in the choice of the marked bits of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) if \(h=1\), and the marked \((h-1)\)-sub-blocks of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) otherwise. By construction of \(M_{\mathcal {I},k}\) (see the following Lemma 4.4), the set \(\Pi\) corresponds to the set of finite traces \(\rho ^{\prime }\) of \(M_{\mathcal {I},k}\) that are SPR \(a_1\)-indistinguishable from \(\rho\) and whose last position is tagged by \((h,{\it inc})\).4 Thus, for selecting the set \(\Pi\), we use the knowledge modalities for agent \(a_1\). Moreover, by definition of the tagging schemas, if \(h=1\), then \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have consecutive indices iff for each such trace \(\rho ^{\prime }\in \Pi\), the last position of \(\rho ^{\prime }\) is also marked by \({\it good}\).
If instead \(h\gt 1\), then \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have consecutive indices iff for all \(\rho ^{\prime }\in \Pi\) such that the marked \((h-1)\)-sub-block \({\it sb}\) of \({\it bl}\) in \(\rho ^{\prime }\) and the marked \((h-1)\)-sub-block \({\it sb}\hspace{1.00006pt}^{\prime }\) of \({\it bl}\hspace{1.00006pt}^{\prime }\) in \(\rho ^{\prime }\) have the same index, it holds that the last position of \(\rho ^{\prime }\) is also marked by \({\it good}\). In this case, for selecting the finite traces in \(\Pi\) satisfying the previous condition, starting from a selected trace \(\rho ^{\prime }\in \Pi\), we use the knowledge modality for agent \(a_2\) and the nested marked layer of type \((h-1,=)\) (for details, see Lemma 4.5).

We now prove the following result concerning the construction of the finite Kripke structure \(M_{\mathcal {I},k}\):

Lemma 4.4.

Let \(k\ge 1\). One can construct in time polynomial in the size of \(\mathcal {I}\) a finite Kripke structure \(M_{\mathcal {I},k}=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \lbrace a_1,a_2\rbrace },{{s}^\iota })\) over two agents \(a_1\) and \(a_2\) such that:

(1)

the set of finite traces of \(M_{\mathcal {I},k}\) coincides with the set of prefixes of tagged \(k\)-grid codes5;

(2)

the set of infinite traces of \(M_{\mathcal {I},k}\) contains the set of initialized \(k\)-grid codes;

(3)

for all \(i=1,2\) and states \(s\) and \(s^{\prime }\), \(\begin{equation*} s\sim _{a_i}s^{\prime } \quad \text{iff} \quad V(s)\cap ({{\it Main}}\cup \lbrace \#_i\rbrace)=V(s^{\prime })\cap ({{\it Main}}\cup \lbrace \#_i\rbrace). \end{equation*}\)

Proof.

We illustrate the construction of the Kripke structure \(M_{\mathcal {I},k}\) satisfying Lemma 4.4. The set \(S\) of states is given by \(S= Q_{{\it Main}}\times Q_{{\it Tags}}\), the initial state \({{s}^\iota }\) is given by \((q_{{\it Main}}^{\iota },q_{{\it Tags}}^{\iota })\), and the transition relation \(R\) is given by \(R=R_{{\it Main}}\cap R_{{\it Tags}}\), where \(Q_{{\it Main}}\), \(Q_{{\it Tags}}\), \(q_{{\it Main}}^{\iota }\), \(q_{{\it Tags}}^{\iota }\), \(R_{{\it Main}}\), and \(R_{{\it Tags}}\) are defined in the following. Intuitively, the main component \(Q_{{\it Main}}\) of \(S\) and the binary relation \(R_{{\it Main}}\subseteq S\times S\) are used for ensuring that the contents of the finite traces of \(M_{\mathcal {I},k}\) are prefixes of initialized \(k\)-grid codes, while the tag component \(Q_{{\it Tags}}\) of \(S\) and the binary relation \(R_{{\it Tags}}\subseteq S\times S\) are used for generating the marking information. Moreover, for each state \(s=(q_{{\it Main}},q_{{\it Tags}})\), \(V(s)=\lbrace V_{{\it Main}}(q_{{\it Main}})\rbrace \cup V_{{\it Tags}}(q_{{\it Tags}})\), where \(V_{{\it Main}}:Q_{{\it Main}}\rightarrow {{\it Main}}\) and \(V_{{\it Tags}}:Q_{{\it Tags}}\rightarrow 2^{{{\it Tags}}}\) are defined in the following:

For all \(i,j\in \mathbb {N}\) with \(i\le j\), let \([i,j]\) be the set of natural numbers \(\ell\) such that \(i\le \ell \le j\).

Generation of initialized \(k\)-grid codes: definition of \(Q_{{\it Main}}\), \(q_{{\it Main}}^{\iota }\), \(R_{{\it Main}}\), and \(V_{{\it Main}}\). The main component \(Q_{{\it Main}}\) of the set of states \(S\) exploits the special symbols \({{\it in}}\), \({{\it fin}}\), \({{\it his}_0}\), and \({{\it his}_1}\), and is given by \(Q_{{\it Main}}= Q_0 \times \cdots \times Q_{k+1} \times \widehat{{{\it Main}}}\), where:

\(Q_0\) consists of the sets \(X\subset \lbrace {{\it his}_0},{{\it his}_1}\rbrace\) such that \(|X|\le 1\);

for each \(h\in [1,k-1]\), \(Q_h\) consists of the sets \(X\subset \lbrace {{\it his}_0},{{\it his}_1},{{\it in}},{{\it fin}}\rbrace\) such that at most one element in \(\lbrace {{\it his}_0},{{\it his}_1}\rbrace\) (respectively, in \(\lbrace {{\it in}},{{\it fin}}\rbrace\)) can occur in \(X\);

\(Q_k\) consists of the sets \(X\subset \lbrace {{\it in}},{{\it fin}},{\it acc}\rbrace\) such that at most one element in \(\lbrace {{\it in}},{{\it fin}}\rbrace\) can occur in \(X\);

\(Q_{k+1}\) consists of the sets \(X\subseteq \lbrace {{\it in}}\rbrace\);

\(\widehat{{{\it Main}}}=({{\it Main}}\setminus \lbrace 0,1\rbrace) \cup (\lbrace 0,1\rbrace \times [1,n])\).

Moreover, \(q_{{\it Main}}^{\iota }=(\emptyset ,\ldots ,\emptyset ,\lbrace {{\it in}}\rbrace ,\$)\) and for each \(q_{{\it Main}}=(q_0,\ldots ,q_{k+1},p_{{\it Main}})\in Q_{{\it Main}}\), \(V_{{\it Main}}(q_{{\it Main}})=b\) if \(p_{{\it Main}}=(b,j)\) for some \((b,j)\in \lbrace 0,1\rbrace \times [1,n]\), and \(V_{{\it Main}}(q_{{\it Main}})=p_{{\it Main}}\) otherwise.

Intuitively, for a state having main component \(q_{{\it Main}}=(q_0,\ldots ,q_{k+1},p_{{\it Main}})\in Q_{{\it Main}}\), the last component \(p_{{\it Main}}\) of \(q_{{\it Main}}\) keeps track of the proposition in \({{\it Main}}\) currently generated. States generating the proposition \({\it acc}\) (respectively, \(\bot\)) are sink states. For the other states (i.e., \(p_{{\it Main}}\in \widehat{{{\it Main}}}\setminus \lbrace {\it acc},\bot \rbrace\)), \(p_{{\it Main}}\) is either of the form \((\$_h,\tau)\) for some \(1\le h\le k\), where \(\tau\) is the content of the currently generated \(h\)-block; or an element \((b,\ell)\in \lbrace 0,1\rbrace \times [1,n]\), representing the \(\ell\)th bit of the currently generated 1-block; or the proposition \(\$\), which is the first symbol of the currently generated row-code. For these cases, the additional components \(q_0,\ldots ,q_{k+1}\) of \(q_{{\it Main}}\) keep track of a few bits of information regarding each level \(1\le h \le k\):

First, if a 1-block \({\it bl}\) is currently being generated (i.e., a main proposition representing a symbol of a 1-block holds at the current state), then \(q_0\) records whether some bit of \({\it bl}\) generated so far is 0 (\({{\it his}_0}\in q_0\)) or not (\({{\it his}_1}\in q_0\)). This information is used to ensure that in the generation of 1-blocks \({\it bl}\), whenever \({\it bl}\) is guessed to be a non-extremal sub-block of a 2-block, then the index of \({\it bl}\) is distinct from \(2^{n}-1\) (i.e., the \(i\)th bit of \({\it bl}\) is 0 for some \(1\le i\le n\)).

For each \(1\le h\le k\), if an \(h\)-block is currently being generated (i.e., a main proposition representing a symbol of an \(h\)-block \({\it bl}\) holds at the current state), then \(q_h\) keeps track of the following information:

case \(h\lt k\): the indication whether the currently generated \(h\)-block \({\it bl}\) is either the first sub-block of the current \((h+1)\)-block (\({{\it in}}\in q_h\)) or a non-extremal sub-block (\({{\it in}}\notin q_h\) and \({{\it fin}}\notin q_h\)) or the last sub-block (\({{\it fin}}\in q_h\)). Additionally, \(q_h\) keeps track of whether, for the current \((h+1)\)-block, the content of some \(h\)-sub-block generated so far is 0 (\({{\it his}_0}\in q_h\)) or not (\({{\it his}_1}\in q_h\)). This last information is used for ensuring that for a non-extremal \((h+1)\)-block \({\it bl}\), the content of some \(h\)-sub-block of \({\it bl}\) is 0.

case \(h=k\): the indication whether the currently generated \(k\)-block \({\it bl}\) is either the first \(k\)-block of the current row-code (\({{\it in}}\in q_k\)) or a guessed non-extremal block (\({{\it in}}\notin q_k\) and \({{\it fin}}\notin q_k\)) or the last block of the row-code (\({{\it fin}}\in q_k\)). Moreover, \(q_k\) keeps track of whether, for the current row-code, the content of some \(k\)-block generated so far is the accepting domino-type \(d_{\it acc}\) (\({\it acc}\in q_k\)) or not (\({\it acc}\notin q_k\)). In this way, whenever the generation of a row-code terminates and the row-code contains some accepting \(k\)-block (respectively, the row-code does not contain accepting \(k\)-blocks), \(M_{\mathcal {I},k}\) can choose either to generate the proposition \({\it acc}\) (respectively, \(\bot\)) or to generate the next row-code.

Additionally, the component \(q_{k+1}\) keeps track of whether the current row-code is the first one (\(q_{k+1}=\lbrace {{\it in}}\rbrace\)) or not (\(q_{k+1}=\emptyset\)). In this way, \(M_{\mathcal {I},k}\) ensures that starting from the initial state, the content of the first nondeterministically generated \(k\)-block is always \(d_{\it in}\) (initialization).

Observe that \(|Q_{{\it Main}}|=O((n+|\Delta |)^{9^{k+2}})\); it is thus polynomial in \(n\) and \(|\Delta |\), since \(k\) is a fixed parameter.

It is worth noting that when \(M_{\mathcal {I},k}\) starts the generation of an \(h\)-block for some \(h\ne 1\), it can nondeterministically get trapped in the generation of the \(h\)-block, either by producing an infinite sequence of non-final \((h-1)\)-sub-blocks or by getting trapped in the generation of an \((h-1)\)-sub-block in case \(h-1\gt 1\). However, the behavior described above, which is captured by the binary relation \(R_{{\it Main}}\), ensures that the contents of the prefixes of such computations are also prefixes of initialized \(k\)-grid codes. Formally, \(R_{{\it Main}}\) is defined as follows:

\((((q_0,\ldots ,q_{k+1},p_{{\it Main}}),q_{{\it Tags}}),((q^{\prime }_0,\ldots ,q^{\prime }_{k+1},p^{\prime }_{{\it Main}}),q^{\prime }_{{\it Tags}}))\in R_{{\it Main}}\) if one of the following conditions holds:

\(p_{{\it Main}}=\$\):

\(q^{\prime }_i= q_i\) for each \(i\in [0,k+1]\setminus \lbrace k\rbrace\);

\(p^{\prime }_{{\it Main}}= (\$_k,d)\) for some \(d\in \Delta\) such that \(d=d_{\it in}\) if \(q_{k+1}=\lbrace {{\it in}}\rbrace\). Moreover, either \(q^{\prime }_k=\lbrace {{\it in}},{\it acc}\rbrace\) and \(d=d_{\it acc}\), or \(q^{\prime }_k=\lbrace {{\it in}}\rbrace\) and \(d\ne d_{\it acc}\).

\(p_{{\it Main}}=(\$_h,\tau)\) for some \(h\in [1,k]\), where \(\tau \in \Delta\) if \(h=k\), and \(\tau \in \lbrace 0,1\rbrace\) otherwise:

\(q^{\prime }_i = q_i\) for each \(i\in [0,k+1]\setminus \lbrace h-1\rbrace\);

if \(h\gt 1\) then: either (i) \(p^{\prime }_{{\it Main}}=(\$_{h-1},0)\), \(q^{\prime }_{h-1}=\lbrace {{\it in}}, {{\it his}_0}\rbrace\), and \({{\it fin}}\notin q_h\), or (ii) \(p^{\prime }_{{\it Main}}=(\$_{h-1},1)\), \(q^{\prime }_{h-1}=\lbrace {{\it in}}, {{\it his}_1}\rbrace\), and \({{\it in}}\notin q_h\);

if \(h=1\) then: either (i) \(p^{\prime }_{{\it Main}}=(0,1)\), \(q^{\prime }_{0}=\lbrace {{\it his}_0}\rbrace\), and \({{\it fin}}\notin q_1\), or (ii) \(p^{\prime }_{{\it Main}}=(1,1)\), \(q^{\prime }_{0}=\lbrace {{\it his}_1}\rbrace\), and \({{\it in}}\notin q_1\).

\(p_{{\it Main}}=(b,j)\) for some \(b\in \lbrace 0,1\rbrace\) and \(j\in [1,n-1]\):

\(q^{\prime }_i = q_i\) for each \(i\in [1,k+1]\);

either \(p^{\prime }_{{\it Main}}=(0,j+1)\) and \({{\it fin}}\notin q_1\), or \(p^{\prime }_{{\it Main}}=(1,j+1)\) and \({{\it in}}\notin q_1\);

if \({{\it his}_0}\in q_0\) or \(p^{\prime }_{{\it Main}}=(0,j+1)\), then \(q^{\prime }_0=\lbrace {{\it his}_0}\rbrace\); otherwise \(q^{\prime }_0=\lbrace {{\it his}_1}\rbrace\).

\(p_{{\it Main}}=(b,n)\) for some \(b\in \lbrace 0,1\rbrace\) and there is some \(h\in [1,k]\) such that \({{\it fin}}\notin q_h\). Let \(\ell\) be the smallest such \(h\). Then:

\(q^{\prime }_i = q_i\) for each \(i\in [0,k+1]\setminus \lbrace \ell \rbrace\) and \({{\it in}}\notin q^{\prime }_\ell\);

if \(\ell \lt k\), \(p^{\prime }_{{\it Main}}=(\$_\ell ,c)\) for some \(c\in \lbrace 0,1\rbrace\) and (i) \(c=0\) if either \({{\it in}}\in q_{\ell +1}\), or \({{\it fin}}\notin q_{\ell +1}\), \({{\it his}_1}\in q_{\ell }\), and \({{\it fin}}\in q^{\prime }_\ell\) and (ii) \(c=1\) if \({{\it fin}}\in q_{\ell +1}\). Moreover, if \(c=0\) or \({{\it his}_0}\in q_\ell\), then \({{\it his}_0}\in q^{\prime }_\ell\); otherwise \({{\it his}_1}\in q^{\prime }_\ell\).

if \(\ell =k\), \(p^{\prime }_{{\it Main}}=(\$_\ell ,d)\) for some \(d \in \Delta\). Moreover, if \({\it acc}\in q_k\) or \(d=d_{\it acc}\), then \({\it acc}\in q^{\prime }_k\); otherwise \({\it acc}\notin q^{\prime }_k\).

\(p_{{\it Main}}=(b,n)\) for some \(b\in \lbrace 0,1\rbrace\) and for all \(h\in [1,k]\), \({{\it fin}}\in q_h\):

\(q^{\prime }_i = q_i\) for each \(i\in [0,k]\) and \(q^{\prime }_{k+1}=\emptyset\);

either (i) \(p^{\prime }_{{\it Main}}=\$\), or (ii) \(p^{\prime }_{{\it Main}}=\bot\) and \({\it acc}\notin q_k\), or (iii) \(p^{\prime }_{{\it Main}}={\it acc}\) and \({\it acc}\in q_k\).

\(p_{{\it Main}}\in \lbrace \bot ,{\it acc}\rbrace\): \(p^{\prime }_{{\it Main}}= p_{{\it Main}}\) and \(q^{\prime }_i=q_i\) for all \(i\in [0,k+1]\).
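As a small illustration of the bookkeeping above, the update of the history flags \({{\it his}_0}/{{\it his}_1}\) during bit generation (the last sub-case of the \((b,j)\) rule) can be sketched in Python; the function name and the string encoding of the flags are ours and purely illustrative:

```python
def update_his(q0, new_bit):
    # Once a 0 has been generated in the current 1-block (or his_0
    # already holds), his_0 persists; otherwise all bits generated so
    # far are 1 and his_1 holds. Sketch of the last sub-case of the
    # (b, j) rule; names and encoding are illustrative only.
    return {'his_0'} if 'his_0' in q0 or new_bit == 0 else {'his_1'}

# Generating the bits 1, 1, 0, 1 of a 1-block: the first bit is 1,
# so q0 is initialized to {his_1}; then the flag is updated per bit.
q0 = {'his_1'}
for bit in (1, 0, 1):
    q0 = update_his(q0, bit)
```

Once \({{\it his}_0}\) is set it is never cleared within the block, matching the monotone condition "if \({{\it his}_0}\in q_0\) or the new bit is 0".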

Generation of the tagging information: definition of \(Q_{{\it Tags}}\), \(q_{{\it Tags}}^{\iota }\), \(R_{{\it Tags}}\) and \(V_{{\it Tags}}\). For \(h\in [0,k]\), let \(pr(h)=1\) if \(h\) is odd, and \(pr(h)=2\) otherwise. The tag component \(Q_{{\it Tags}}\) of the set of states \(S\) is given by \(\begin{equation*} Q_{{\it Tags}}= \lbrace \top \rbrace \cup Q_{\it row}\cup Q_{\it col}\cup \bigcup _{h=1}^{k}(Q_{(h,=)}\cup Q_{(h,{\it inc})}) \end{equation*}\) and \(q_{{\it Tags}}^{\iota }=\top\), where:

\(Q_{\it row}= \lbrace \top ,{\top _{\mathrm{f}}},{{\it cur}}\rbrace \times (\lbrace {\it row}\rbrace \times \Delta)\times (\lbrace \top \rbrace \cup \Delta)\).

\(Q_{\it col}= \lbrace \top ,{\top _{\mathrm{f}}},{{\it cur}}\rbrace \times (\lbrace {\it col}\rbrace \times \Delta)\times (\lbrace \top ,{\it col}\rbrace \cup \Delta)\).

For each \(h\in [1,k]\), let \(B_h=\lbrace 0,1\rbrace\) if \(h\gt 1\) and \(B_h=\lbrace 0,1\rbrace \times [1,n]\) otherwise. Then, \(Q_{(h,=)}\) is the set of tuples \((\tau _1,\tau _2,\tau _3,\tau _4,\tau _5)\) in \(\begin{equation*} \lbrace \top ,{\top _{\mathrm{f}}},{{\it cur}}\rbrace \times \lbrace (h,=)\rbrace \times (\lbrace \top \rbrace \cup B_h) \times \lbrace \top ,(h,=)\rbrace \times \lbrace \top ,0,1\rbrace \end{equation*}\) such that \(\tau _{i+1}= \top\) if \(\tau _{i}= \top\) for each \(2\le i \le 4\).

For each \(h\in [1,k]\), let \(C_h=\lbrace 0,1,\widehat{0},\widehat{1}\rbrace\) if \(h\gt 1\) and \(C_h=\lbrace 0,1,\widehat{0},\widehat{1}\rbrace \times [1,n]\) otherwise. Then, \(Q_{(h,{\it inc})}\) is the set of tuples \((\tau _1,\tau _2,\tau _3,\tau _4,\tau _5)\) in \(\begin{equation*} \lbrace \top ,{\top _{\mathrm{f}}},{{\it cur}}\rbrace \times \lbrace (h,{\it inc})\rbrace \times (\lbrace \top \rbrace \cup C_h) \times \lbrace \top ,(h,{\it inc})\rbrace \times \lbrace \top ,0,1\rbrace \end{equation*}\) such that \(\tau _{i+1}= \top\) if \(\tau _{i}= \top\) for each \(2\le i \le 4\).

Intuitively, for \(q_{{\it Tags}}\in Q_{{\it Tags}}\), \(q_{{\it Tags}}=\top\) means that no tagging information has been generated so far. When instead \(q_{{\it Tags}}=(\tau _1,\tau _2,\tau _3)\in Q_t\) for some \(t\in \lbrace {\it row},{\it col}\rbrace\), then \(q_{{\it Tags}}\) keeps track of the \(t\)-tagging information generated so far: \(\tau _1={{\it cur}}\) means that the tag \(\#_{pr(k)}\) is currently being generated (this happens when the first symbol of a guessed simple tagged \(k\)-block is generated), \(\tau _2=(t,d)\) (respectively, \(\tau _3=d\)) means that the first (respectively, second) guessed simple tagged \(k\)-block has content \(d\), and \(\tau _3 =\top\) means that only one \(k\)-block has been tagged so far. Moreover, if \(t={\it col}\), then \(\tau _3={\it col}\) means that only one \(k\)-block has been tagged so far and the tagged \(k\)-block belongs to the row preceding the current row (recall that in a column-tagging the two simple tagged \(k\)-blocks belong to two adjacent row-codes). Furthermore, the value \({\top _{\mathrm{f}}}\) of the first component of \(q_{{\it Tags}}\) is used during the generation of the symbols of the second simple tagged \(k\)-block \({\it bl}\), except its first symbol. Upon generating the symbols following \({\it bl}\), \(M_{\mathcal {I},k}\) moves to states whose tag component \(q_{{\it Tags}}\) is of the form \((\top ,(t,d),{\it d}\hspace{1.00006pt}^{\prime })\), where \(d\) (respectively, \({\it d}\hspace{1.00006pt}^{\prime }\)) is the content of the first (respectively, second) simple tagged \(k\)-block, and marks these symbols with the tags in \(O\), where \(\lbrace t\rbrace \subseteq O\subseteq \lbrace t,{\it good}\rbrace\) and \({\it good}\in O\) if and only if \([d]_{{\it right}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it left}}\) if \(t={\it row}\), and \([d]_{{\it up}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it down}}\) if \(t={\it col}\).

Elements \((\tau _1,\tau _2,\tau _3,\tau _4,\tau _5)\in Q_{(h,=)}\) are used for generating an \((h,=)\)-tagging: \(\tau _2\) and \(\tau _3\) keep track of the information associated with the first tagged \(h\)-block (\(\tau _3=\top\) means that only the first symbol of the tagged \(h\)-block has been tagged), while \(\tau _4\) and \(\tau _5\) keep track of the information associated with the second tagged \(h\)-block. Note that \(\tau _1={{\it cur}}\) means that a tag in \(\lbrace \#_1,\#_2\rbrace\) is currently being generated: this happens when generating the first symbols of the two guessed tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\), and the marked bits of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) if \(h=1\), and the first symbols of the marked sub-blocks of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\), otherwise. Moreover, when \(h=1\), we keep track of the position in \([1,n]\) of the tagged bit of the first tagged 1-block: this information ensures that the marked bits of the tagged 1-blocks have the same position.

Similarly, elements \((\tau _1,\tau _2,\tau _3,\tau _4,\tau _5)\in Q_{(h,{\it inc})}\) are used for generating an \((h,{\it inc})\)-tagging. Note that in this case, if \(h=1\), then \(\tau _3=(b,j)\) (respectively, \(\tau _3=(\widehat{b},j)\)), for some \(b\in \lbrace 0,1\rbrace\) and \(j\in [1,n]\), means that the tagged bit of the first tagged 1-block \({\it bl}\) has position \(j\) and precedes or coincides with (respectively, follows) the first bit of \({\it bl}\) having value 0. Moreover, if \(h\gt 1\), then \(\tau _3= b\) (respectively, \(\tau _3= \widehat{b}\)) for some \(b\in \lbrace 0,1\rbrace\) means that the tagged \((h-1)\)-sub-block of the first tagged \(h\)-block \({\it bl}\) precedes or coincides with (respectively, follows) the first \((h-1)\)-sub-block of \({\it bl}\) having content 0. To ensure that the \((h,{\it inc})\)-tagging is generated correctly, we use the element \(q_{h-1}\) of the main component \((q_0,\ldots ,q_{k+1},p_{{\it Main}})\) of a state, which keeps track, in case \(h\gt 1\) (respectively, \(h=1\)), of whether the content of some \((h-1)\)-sub-block (respectively, some bit) of the currently generated \(h\)-block is 0 (\({{\it his}_0}\in q_{h-1}\)) or not (\({{\it his}_1}\in q_{h-1}\)).

In accordance with the previous intuitions, the set \(V_{{\it Tags}}(q_{{\it Tags}})\) of tags associated with each element \(q_{{\it Tags}}\in Q_{{\it Tags}}\) is defined as follows, where for simplicity we write \(V(q_{{\it Tags}})\) instead of \(V_{{\it Tags}}(q_{{\it Tags}})\):

\(q_{{\it Tags}}=\top\): \(V(q_{{\it Tags}})=\emptyset\).

\(q_{{\it Tags}}=(\tau _1,\tau _2,\tau _3)\in Q_{\it row}\cup Q_{\it col}\) and (\(\tau _1\in \lbrace {\top _{\mathrm{f}}},{{\it cur}}\rbrace\) if \(\tau _3\in \Delta\)):

\(V(q_{{\it Tags}})=\lbrace \#_{pr(k)}\rbrace\) if \(\tau _1={{\it cur}}\), and \(V(q_{{\it Tags}})=\emptyset\) otherwise.

\(q_{{\it Tags}}=(\top ,({\it row},d), {\it d}\hspace{1.00006pt}^{\prime })\in Q_{\it row}\):

\(\lbrace {\it row}\rbrace \subseteq V(q_{{\it Tags}})\subseteq \lbrace {\it row},{\it good}\rbrace\) and \({\it good}\in V(q_{{\it Tags}})\) iff \([d]_{{\it right}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it left}}\).

\(q_{{\it Tags}}=(\top ,({\it col},d), {\it d}\hspace{1.00006pt}^{\prime })\in Q_{\it col}\):

\(\lbrace {\it col}\rbrace \subseteq V(q_{{\it Tags}})\subseteq \lbrace {\it col},{\it good}\rbrace\) and \({\it good}\in V(q_{{\it Tags}})\) iff \([d]_{{\it up}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it down}}\).

\(q_{{\it Tags}}=(\tau _1,\tau _2,\tau _3,\tau _4,\tau _5)\in Q_{(h,=)}\cup Q_{(h,{\it inc})}\) and (\(\tau _1\in \lbrace {\top _{\mathrm{f}}},{{\it cur}}\rbrace\) if \(\tau _5\in \Delta\)):

if \(\tau _1\ne {{\it cur}}\): \(V(q_{{\it Tags}})=\emptyset\). Otherwise, \(V(q_{{\it Tags}})=\lbrace \#_{pr(h)}\rbrace\) if \(\tau _5=\top\) and either \(\tau _4\ne \top\) or \(\tau _3=\top\), and \(V(q_{{\it Tags}})=\lbrace \#_{pr(h-1)}\rbrace\) otherwise.

\(q_{{\it Tags}}=(\top ,\tau _2,\tau _3,\tau _4,b)\in Q_{(h,=)}\) for some \(b\in \lbrace 0,1\rbrace\):

\(\lbrace (h,=)\rbrace \subseteq V(q_{{\it Tags}})\subseteq \lbrace (h,=),{\it good}\rbrace\) and \({\it good}\in V(q_{{\it Tags}})\) iff \(b\) and the bit value occurring in \(\tau _3\) coincide.

\(q_{{\it Tags}}=(\tau _1,\tau _2,\tau _3,\tau _4,b)\in Q_{(h,{\it inc})}\) for some \(b\in \lbrace 0,1\rbrace\):

let \(\overline{b}\) be the possibly marked bit value occurring in \(\tau _3\). Then, \(\lbrace (h,{\it inc})\rbrace \subseteq V(q_{{\it Tags}})\subseteq \lbrace (h,{\it inc}),{\it good}\rbrace\) and \({\it good}\in V(q_{{\it Tags}})\) iff either (i) \(b=1-\overline{b}\) and \(\overline{b}\) is unmarked, or (ii) \(\overline{b}\) is marked, and \(b\) and the bit value of \(\overline{b}\) coincide.
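The good-tag condition above is exactly the per-position check for a binary increment of counters written least-significant bit first: every bit at or before the first 0 flips, and every later bit is unchanged. A minimal Python sketch of this check, under our own encoding of blocks as bit lists (illustrative only, not part of the construction):

```python
def is_increment(bits, bits2):
    # bits, bits2: equal-length LSB-first bit lists; bits must contain
    # a 0 (as non-extremal blocks do). bits2 encodes bits + 1 iff every
    # bit at or before the first 0 flips and every later bit is equal,
    # mirroring the per-position good-tag condition.
    z = bits.index(0)  # position of the first 0 (the "marked" boundary)
    return all((b2 == 1 - b) if j <= z else (b2 == b)
               for j, (b, b2) in enumerate(zip(bits, bits2)))

def value(bits):
    # LSB-first numeric value, for cross-checking the claim
    return sum(b << j for j, b in enumerate(bits))
```

For instance, with 4-bit counters, `is_increment` accepts exactly the pairs \((c,c+1)\) for \(0\le c\lt 15\), and rejects non-consecutive pairs.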

It is worth noting that after generating the first symbol of a guessed tagged \(h\)-block \({\it bl}\), \(M_{\mathcal {I},k}\) can generate an infinite number of unmarked \((h-1)\)-sub-blocks of \({\it bl}\), or can get trapped in the generation of some \((h-1)\)-sub-block of \({\it bl}\) if \(h-1\gt 1\). However, \(M_{\mathcal {I},k}\) ensures that the next \(h\)-block can be generated only if some \((h-1)\)-sub-block of the current \(h\)-block \({\it bl}\) is marked.

Formally, the binary relation \(R_{{\it Tags}}\) ensuring the correct generation of the tagging information is defined as follows: \((((q_0,\ldots ,q_{k+1},p_{{\it Main}}),q_{{\it Tags}}),((q^{\prime }_0,\ldots ,q^{\prime }_{k+1},p^{\prime }_{{\it Main}}),q^{\prime }_{{\it Tags}}))\in R_{{\it Tags}}\) if one of the following conditions holds, where for each \(h\in [1,k]\), \(\widehat{{{\it Main}}}_h\) denotes the set of symbols in \(\widehat{{{\it Main}}}\) used for generating \(h\)-blocks.

Rules for tagging initialization.

\(q_{{\it Tags}}=\top\): either \(q^{\prime }_{{\it Tags}}=\top\),

or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,=),\top ,\top ,\top)\) and \(p^{\prime }_{{\it Main}}\) is of the form \((\$_h,c)\),

or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,{\it inc}),\top ,\top ,\top)\), \(p^{\prime }_{{\it Main}}\) is of the form \((\$_h,c)\), and \({{\it fin}}\notin q^{\prime }_h\),

or \(q^{\prime }_{{\it Tags}}=({{\it cur}},({\it row},d),\top)\), \(p^{\prime }_{{\it Main}}=(\$_k,d)\) and \({{\it fin}}\notin q^{\prime }_k\),

or \(q^{\prime }_{{\it Tags}}=({{\it cur}},({\it col},d),\top)\) and \(p^{\prime }_{{\it Main}}=(\$_k,d)\).

Rules for row-tagging.

\(q_{{\it Tags}}=(\tau _1,({\it row},d),\top)\): either \(q^{\prime }_{{\it Tags}}=(\top ,({\it row},d),\top)\) and \(p^{\prime }_{{\it Main}}\in \widehat{{{\it Main}}}_{k-1}\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},({\it row},d), {\it d}\hspace{1.00006pt}^{\prime })\) and \(p^{\prime }_{{\it Main}}=(\$_k,{\it d}\hspace{1.00006pt}^{\prime })\).

\(q_{{\it Tags}}=(\tau _1,({\it row},d),d^{\prime })\): if \(\tau _1\in \lbrace {{\it cur}},{\top _{\mathrm{f}}}\rbrace\) and \(p^{\prime }_{{\it Main}}\in \widehat{{{\it Main}}}_{k-1}\), then \(q^{\prime }_{{\it Tags}}=({\top _{\mathrm{f}}},({\it row},d),d^{\prime })\); otherwise, \(q^{\prime }_{{\it Tags}}=(\top ,({\it row},d),d^{\prime })\).

Rules for column-tagging.

\(q_{{\it Tags}}=(\tau _1,({\it col},d),\top)\): either \(q^{\prime }_{{\it Tags}}=(\top ,({\it col},d),\top)\) and \(p^{\prime }_{{\it Main}}\notin \lbrace \$,{\it acc},\bot \rbrace\), or \(q^{\prime }_{{\it Tags}}=(\top ,({\it col},d),{\it col})\) and \(p^{\prime }_{{\it Main}}= \$\).

\(q_{{\it Tags}}=(\tau _1,({\it col},d),{\it col})\): either \(q^{\prime }_{{\it Tags}}=(\top ,({\it col},d),{\it col})\) and \({{\it fin}}\notin q^{\prime }_k\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},({\it col},d), {\it d}\hspace{1.00006pt}^{\prime })\) and \(p^{\prime }_{{\it Main}}=(\$_k,{\it d}\hspace{1.00006pt}^{\prime })\).

\(q_{{\it Tags}}=(\tau _1,({\it col},d),d^{\prime })\): if \(\tau _1\in \lbrace {{\it cur}},{\top _{\mathrm{f}}}\rbrace\) and \(p^{\prime }_{{\it Main}}\in \widehat{{{\it Main}}}_{k-1}\), then \(q^{\prime }_{{\it Tags}}=({\top _{\mathrm{f}}},({\it col},d),d^{\prime })\); otherwise, \(q^{\prime }_{{\it Tags}}=(\top ,({\it col},d),d^{\prime })\).

Rules for \((h,=)\)-tagging with \(h\in [1,k]\).

\(q_{{\it Tags}}=(\tau _1,(1,=),\top ,\top ,\top)\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \(p^{\prime }_{{\it Main}}\notin \lbrace 0,1\rbrace \times \lbrace n\rbrace\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(1,=),p^{\prime }_{{\it Main}},\top ,\top)\) and \(p^{\prime }_{{\it Main}}\in \lbrace 0,1\rbrace \times [1,n]\).

\(q_{{\it Tags}}=(\tau _1,(h,=),\top ,\top ,\top)\) for some \(h\in [2,k]\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \({{\it fin}}\notin q^{\prime }_{h-1}\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,=),b,\top ,\top)\) and \(p^{\prime }_{{\it Main}}=(\$_{h-1},b)\).

\(q_{{\it Tags}}=(\tau _1,(h,=),\tau _3,\top ,\top)\) with \(h\in [1,k]\) and \(\tau _3\ne \top\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \(p^{\prime }_{{\it Main}}\notin \lbrace {\it acc},\bot \rbrace\), or \(q^{\prime }_{{\it Tags}}=(\top ,(h,=),\tau _3,(h,=),\top)\) and \(p^{\prime }_{{\it Main}}\) is of the form \((\$_h,c)\).

\(q_{{\it Tags}}=(\tau _1,(1,=),(b,j),(1,=),\top)\): if \(p^{\prime }_{{\it Main}}\notin \lbrace 0,1\rbrace \times \lbrace j\rbrace\) then \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\); otherwise, \(q^{\prime }_{{\it Tags}}=({{\it cur}},(1,=),(b,j),(1,=),p^{\prime }_{{\it Main}})\).

\(q_{{\it Tags}}=(\tau _1,(h,=),\tau _3,(h,=),\top)\) for some \(h\in [2,k]\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \({{\it fin}}\notin q^{\prime }_{h-1}\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,=),\tau _3,(h,=),b)\) and \(p^{\prime }_{{\it Main}}=(\$_{h-1},b)\).

\(q_{{\it Tags}}=(\tau _1,(h,=),\tau _3,(h,=),\tau _5)\) and \(\tau _5 \ne \top\): if \(\tau _1\in \lbrace {{\it cur}},{\top _{\mathrm{f}}}\rbrace\) and \(p^{\prime }_{{\it Main}}\in \widehat{{{\it Main}}}_{h-1}\), then \(q_{{\it Tags}}^{\prime }=({\top _{\mathrm{f}}},(h,=),\tau _3,(h,=),\tau _5)\); otherwise, \(q_{{\it Tags}}^{\prime }=(\top ,(h,=),\tau _3,(h,=),\tau _5)\).

Rules for \((h,{\it inc})\)-tagging with \(h\in [1,k]\).

\(q_{{\it Tags}}=(\tau _1,(1,{\it inc}),\top ,\top ,\top)\): either \(q^{\prime }_{{\it Tags}}= q_{{\it Tags}}\) and \(p^{\prime }_{{\it Main}}\notin \lbrace 0,1\rbrace \times \lbrace n\rbrace\), or \(p^{\prime }_{{\it Main}}=(b,\ell)\in \lbrace 0,1\rbrace \times [1,n]\) and \(q^{\prime }_{{\it Tags}}=({{\it cur}},(1,{\it inc}),\tau ,\top ,\top)\) where \(\tau = (\widehat{b},\ell)\) if \({{\it his}_0}\in q_0\) and \(\tau =(b,\ell)\) otherwise.

\(q_{{\it Tags}}=(\tau _1,(h,{\it inc}),\top ,\top ,\top)\) and \(h\in [2,k]\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \({{\it fin}}\notin q^{\prime }_{h-1}\), or \(p^{\prime }_{{\it Main}}=(\$_{h-1},b)\) and \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,{\it inc}),\tau ,\top ,\top)\) where \(\tau =\widehat{b}\) if \({{\it his}_0}\in q_{h-1}\) and \(\tau = b\) otherwise.

\(q_{{\it Tags}}=(\tau _1,(h,{\it inc}),\tau _3,\top ,\top)\) with \(h\in [1,k]\) and \(\tau _3\ne \top\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \({{\it fin}}\notin q^{\prime }_{h}\), or \(q^{\prime }_{{\it Tags}}=(\top ,(h,{\it inc}),\tau _3,(h,{\it inc}),\top)\) and \(p^{\prime }_{{\it Main}}\) is of the form \((\$_h,c)\).

\(q_{{\it Tags}}=(\tau _1,(1,{\it inc}),(c,j),(1,{\it inc}),\top)\): if \(p^{\prime }_{{\it Main}}\notin \lbrace 0,1\rbrace \times \lbrace j\rbrace\) then \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\); otherwise, \(q^{\prime }_{{\it Tags}}=({{\it cur}},(1,{\it inc}),(c,j),(1,{\it inc}),p^{\prime }_{{\it Main}})\).

\(q_{{\it Tags}}=(\tau _1,(h,{\it inc}),\tau _3,(h,{\it inc}),\top)\) for some \(h\in [2,k]\): either \(q^{\prime }_{{\it Tags}}=q_{{\it Tags}}\) and \({{\it fin}}\notin q^{\prime }_{h-1}\), or \(q^{\prime }_{{\it Tags}}=({{\it cur}},(h,{\it inc}),\tau _3,(h,{\it inc}),b)\) and \(p^{\prime }_{{\it Main}}=(\$_{h-1},b)\).

\(q_{{\it Tags}}=(\tau _1,(h,{\it inc}),\tau _3,(h,{\it inc}),\tau _5)\) and \(\tau _5 \ne \top\): if \(\tau _1\in \lbrace {{\it cur}},{\top _{\mathrm{f}}}\rbrace\) and \(p^{\prime }_{{\it Main}}\in \widehat{{{\it Main}}}_{h-1}\), then \(q_{{\it Tags}}^{\prime }=({\top _{\mathrm{f}}},(h,{\it inc}),\tau _3,(h,{\it inc}),\tau _5)\); otherwise, \(q_{{\it Tags}}^{\prime }=(\top ,(h,{\it inc}),\tau _3,(h,{\it inc}),\tau _5)\).

Note that the number of states is polynomial in the size of \(\mathcal {I}\) and singly exponential in the fixed parameter \(k\). This concludes the proof of Lemma 4.4. □

Construction of the fixed formula \(\varphi _k\) in Proposition 4.2. A \(\textsf {K}\)-propositional formula is a \(\texttt { CTL}^{*}\texttt {K}\) formula that only contains knowledge modalities, Boolean connectives, and atomic propositions. For each \(h\in \mathbb {N}\), a \(\textsf {K}_h\)-propositional formula is a \(\textsf {K}\)-propositional formula with alternation depth \(h\). Let \(k\ge 1\) and \(M_{\mathcal {I},k}\) be the Kripke structure of Lemma 4.4. By Lemma 4.4, the indistinguishability relations \(\sim _{a_i}\) of \(M_{\mathcal {I},k}\) (for \(i=1,2\)) depend only on the valuation function. Hence, histories that have the same trace are indistinguishable by any agent and satisfy the same \(\textsf {K}\)-propositional formulas. Thus, for a finite trace \(\rho\) of \(M_{\mathcal {I},k}\) and a \(\textsf {K}\)-propositional formula \(\psi\), we write \(\rho \models \psi\) to mean that \(\tau \models \psi\) under the SPR semantics for any history \(\tau\) whose trace is \(\rho\). The core result of the proposed reduction is the following lemma, which, together with Lemma 4.4, concludes the proof of Proposition 4.2 for the SPR semantics.

Lemma 4.5.

Let \(k\ge 1\) and \(M_{\mathcal {I},k}\) be the Kripke structure of Lemma 4.4. Then there are a fixed \(\texttt { LTLK}_{k+1} \cap \texttt { CTLK}_{k+1}\) formula \(\varphi _k\) and, for each \(1\le h \le k\), two fixed \(\textsf {K}_h\)-propositional formulas \(\varphi _{=}^{h}\) and \(\varphi _{\it inc}^{h}\) such that the following holds:

(1)

Let \(\rho\) be a partial \((h,=)\)-tagged \(k\)-grid code. If the two tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) of \(\rho\) are well-formed, then \(\rho \models \varphi _{=}^{h}\) iff \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have the same index.

(2)

Let \(\rho\) be a partial \((h,{\it inc})\)-tagged \(k\)-grid code. If the two tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) of \(\rho\) are well-formed, and \({\it bl}\) precedes \({\it bl}\hspace{1.00006pt}^{\prime }\), then \(\rho \models \varphi _{{\it inc}}^{h}\) iff the indices of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are consecutive.

(3)

\(M_{\mathcal {I},k}\models \varphi _k\) iff there is a path of \(M_{\mathcal {I},k}\) whose trace is a well-formed \(k\)-code encoding a \(k\)-tiling of \(\mathcal {I}\).

Proof.

We first prove Properties (1) and (2). This is done by induction on \(1\le h\le k\). We also show that \(\varphi _{=}^{h}\) (respectively, \(\varphi _{{\it inc}}^{h}\)) is a fixed \(\textsf {K}_h\)-propositional formula of the form \(\mathit {K}_{a_i}\psi\), where \(i=1\) if \(h\) is odd, and \(i=2\) otherwise. Here, we focus on Property (2) (the proof of Property (1) is similar).

Let \(\rho\) be a partial \((h,{\it inc})\)-tagged \(k\)-grid code such that the tagged \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) of \(\rho\) are well-formed and \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\).

For the base case, assume that \(h=1\). By construction, the first positions of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are marked by \(\#_1\), and the marked bits of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are marked by \(\#_2\). Moreover, the last position of \(\rho\) is marked by the tags in \(O\), where \(\lbrace (1,{\it inc})\rbrace \subseteq O\subseteq \lbrace (1,{\it inc}),{\it good}\rbrace\). Let \(\Pi\) be the set of partial \((1,{\it inc})\)-tagged \(k\)-grid codes \(\rho ^{\prime }\) having the same content as \(\rho\) and the same marked \(\#_1\)-positions as \(\rho\) (i.e., \(\rho\) and \(\rho ^{\prime }\) mark the same two adjacent 1-blocks). By construction, the indices of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are consecutive if and only if, for each \(\rho ^{\prime }\in \Pi\), the last position of \(\rho ^{\prime }\) is marked by \({\it good}\). By Lemma 4.4, \(\Pi\) coincides with the set of finite traces of \(M_{\mathcal {I},k}\) that are SPR \(a_1\)-indistinguishable from \(\rho\) and whose last position is tagged by \((1,{\it inc})\). Hence, the fixed \(\textsf {K}_1\)-propositional formula \(\varphi _{{\it inc}}^{1}\) capturing Property (2) for \(h=1\) is defined as \(\begin{equation*} \varphi _{{\it inc}}^1=\mathit {K}_{a_1}((1,{\it inc})\rightarrow {\it good}). \end{equation*}\)
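The base-case formula simply quantifies over the \(a_1\)-indistinguishable traces. Its evaluation can be sketched as follows in a toy setting where traces are lists of tag sets and the indistinguishability relation is given as a predicate; all names and encodings here are ours and purely illustrative:

```python
def K(indist, universe, rho, phi):
    # rho satisfies K_a(phi) iff phi holds on every trace of the
    # universe that agent a cannot distinguish from rho
    return all(phi(r) for r in universe if indist(rho, r))

def phi_inc1(indist, universe, rho):
    # K_{a1}((1,inc) -> good): on every a1-indistinguishable trace
    # whose last position carries the (1,inc) tag, good also holds
    return K(indist, universe, rho,
             lambda r: 'good' in r[-1] if '(1,inc)' in r[-1] else True)
```

In the actual construction, the universe is the set of finite traces of \(M_{\mathcal {I},k}\) and the indistinguishability relation is the SPR relation of agent \(a_1\).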

Now, assume that \(h\gt 1\) and \(h\) is odd (the case where \(h\) is even is similar). By construction, the first positions of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are marked by \(\#_1\), and the marked \((h-1)\)-sub-blocks of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are marked by \(\#_2\). Moreover, the last position of \(\rho\) is marked by the tags in \(O\), where \(\lbrace (h,{\it inc})\rbrace \subseteq O\subseteq \lbrace (h,{\it inc}),{\it good}\rbrace\). Let \(\Pi\) be the set of partial \((h,{\it inc})\)-tagged \(k\)-grid codes \(\rho ^{\prime }\) having the same content as \(\rho\) and the same marked \(\#_1\)-positions as \(\rho\) (i.e., \(\rho\) and \(\rho ^{\prime }\) mark the same two adjacent \(h\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\)). By construction, the indices of \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) are consecutive if and only if, for each \(\rho ^{\prime }\in \Pi\) such that the two \(\#_2\)-marked \((h-1)\)-sub-blocks of \(\rho ^{\prime }\) have the same index, the last position of \(\rho ^{\prime }\) is marked by \({\it good}\). By Lemma 4.4, \(\Pi\) coincides with the set of finite traces of \(M_{\mathcal {I},k}\) that are SPR \(a_1\)-indistinguishable from \(\rho\) and whose last position is tagged by \((h,{\it inc})\). Moreover, for each \(\rho ^{\prime }\in \Pi\), the set of partial \((h-1,=)\)-tagged \(k\)-grid codes that have the same content as \(\rho ^{\prime }\) and the same marked \(\#_2\)-positions as \(\rho ^{\prime }\) (i.e., the same initial positions of the two \(\#_2\)-marked \((h-1)\)-sub-blocks of \(\rho ^{\prime }\)) coincides with the set of finite traces of \(M_{\mathcal {I},k}\) that are SPR \(a_2\)-indistinguishable from \(\rho ^{\prime }\) and whose last position is tagged by \((h-1,=)\). 
Thus, by the induction hypothesis on Property (1), the fixed \(\textsf {K}_h\)-propositional formula \(\varphi _{{\it inc}}^{h}\) capturing Property (2) when \(h\) is odd is defined as \(\begin{equation*} \varphi _{{\it inc}}^h=\mathit {K}_{a_1}\bigl (((h,{\it inc})\wedge \mathit {K}_{a_2}((h-1,=)\rightarrow \varphi _{=}^{h-1}))\rightarrow {\it good}\bigr). \end{equation*}\) Note that, since \(h-1\) is even, by the induction hypothesis \(\varphi _{=}^{h-1}\) is of the form \(\mathit {K}_{a_2}\psi\) and has alternation depth \(h-1\). Hence, \(\varphi _{{\it inc}}^{h}\) has alternation depth \(h\).

Proof of Property (3). We now illustrate the construction of the fixed \(\texttt { LTLK}_{k+1} \cap \texttt { CTLK}_{k+1}\) formula \(\varphi _k\) ensuring Property (3) of Lemma 4.5. For this, exploiting the \(\textsf {K}\)-propositional formulas \(\varphi _{=}^{h}\) and \(\varphi _{\it inc}^{h}\), we first construct, for all \(2\le h\le k\), a \(\textsf {K}_{h}\)-propositional formula \(\varphi _{{\it bl}}^{h}\), and two \(\textsf {K}_{k+1}\)-propositional formulas \(\varphi _{{\it row}}\) and \(\varphi _{{\it col}}\) satisfying the following for each initialized \(k\)-grid code \(\nu\):

if all the \((h-1)\)-blocks in \(\nu\) are well-formed, then \(\nu _{\le i}\models \varphi _{{\it bl}}^{h}\) for all \(i\ge 0\) iff all the \(h\)-blocks in \(\nu\) are well-formed, too;

if all \(k\)-blocks in \(\nu\) are well-formed, then \(\nu _{\le i}\models \varphi _{{\it row}}\) for all \(i\ge 0\) iff the row-codes in \(\nu\) are well-formed and, for all adjacent \(k\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) in a row-code of \(\nu\) such that \({\it bl}\hspace{1.00006pt}^{\prime }\) follows \({\it bl}\), it holds that \([d]_{{\it right}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it left}}\), where \(d\in \Delta\) is the content of \({\it bl}\) and \({\it d}\hspace{1.00006pt}^{\prime }\in \Delta\) is the content of \({\it bl}\hspace{1.00006pt}^{\prime }\);

if \(\nu\) is well-formed, then \(\nu _{\le i}\models \varphi _{{\it col}}\) for all \(i\ge 0\) iff for all the \(k\)-blocks \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) such that \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) belong to two adjacent row-codes and \({\it bl}\) and \({\it bl}\hspace{1.00006pt}^{\prime }\) have the same index, it holds that \([d]_{{\it up}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it down}}\) where \(d\in \Delta\) is the content of \({\it bl}\) and \({\it d}\hspace{1.00006pt}^{\prime }\in \Delta\) is the content of \({\it bl}\hspace{1.00006pt}^{\prime }\).

Intuitively, the formulas \(\varphi _{{\it bl}}^{2},\ldots ,\varphi _{{\it bl}}^{k}\), \(\varphi _{\it row}\), and \(\varphi _{\it col}\) require that an initialized \(k\)-grid code be well-formed and satisfy the row-adjacency and column-adjacency requirements. First, let us define the formula \(\varphi _{{\it bl}}^{h}\), where \(2\le h \le k\). Assume that \(h\) is even (the case where \(h\) is odd is similar). Let \(\nu\) be an initialized \(k\)-grid code whose \((h-1)\)-blocks are well-formed. To ensure that the \(h\)-blocks of \(\nu\) are well-formed as well, we need to require that adjacent \((h-1)\)-blocks of \(\nu\) belonging to the same \(h\)-block have consecutive indices. For this, we exploit the \(\textsf {K}_{h-1}\)-propositional formula \(\varphi _{{\it inc}}^{h-1}\) and the \((h-1,{\it inc})\)-tagging. The formula \(\varphi _{{\it bl}}^{h}\) is defined as follows: \(\begin{equation*} \varphi _{{\it bl}}^{h}:= \mathit {K}_{a_2} \mathit {K}_{a_1} ((h-1,{\it inc})\rightarrow \varphi _{{\it inc}}^{h-1}). \end{equation*}\) Since agent \(a_2\) observes tag \(\#_2\) but not \(\#_1\), the knowledge modality \(\mathit {K}_{a_2}\) quantifies over all placements of the tag \(\#_1\), and similarly the modality \(\mathit {K}_{a_1}\) quantifies over all placements of the tag \(\#_2\). Combined with the tag proposition \((h-1,{\it inc})\), this selects all and only the partial \((h-1,{\it inc})\)-tagged \(k\)-grid codes that have the same content as the current prefix of \(\nu\). Hence, correctness of the construction follows directly from Property (2). The definition of the \(\textsf {K}_{k+1}\)-propositional formulas \(\varphi _{{\it row}}\) and \(\varphi _{{\it col}}\) follows a similar pattern, with \(\varphi _{{\it row}}\) exploiting row-tagging and \(\varphi _{\it col}\) exploiting column-tagging. We only describe the construction of \(\varphi _{\it col}\). 
Assuming that \(k\) is odd, \(\varphi _{\it col}\) is defined as follows: \(\begin{equation*} \varphi _{{\it col}} := \mathit {K}_{a_2}\bigl ([{\it col}\wedge \mathit {K}_{a_1}((k,=)\rightarrow \varphi _{=}^{k})]\rightarrow {\it good}\bigr). \end{equation*}\) Given an initialized well-formed \(k\)-grid code \(\nu\), the formula above asserts that for all partial column-tagged \(k\)-grid codes \(\rho\) having the same content as the current prefix of \(\nu\), if the two simple tagged \(k\)-blocks of \(\rho\) have the same index, then \([d]_{{\it up}}=[{\it d}\hspace{1.00006pt}^{\prime }]_{{\it down}}\), where \(d\) is the content of the first simple tagged block of \(\rho\), and \({\it d}\hspace{1.00006pt}^{\prime }\) is the content of the second. Thus, since in partial column-tagged \(k\)-grid codes the two simple tagged \(k\)-blocks belong to two adjacent row-codes, correctness of the construction follows directly.

Finally, the fixed \(\texttt { LTLK}_{k+1} \bigcap \texttt { CTLK}_{k+1}\) formula \(\varphi _k\) ensuring Property 3 of Lemma 4.5 is defined as follows: \(\begin{equation*} \varphi _k := \mathit {E}\left(\left(\bigwedge _{t\in {{\it Tags}}}\lnot t \wedge \varphi _{\it row}\wedge \varphi _{\it col}\wedge \bigwedge _{h=2}^{k}\varphi _{{\it bl}}^{h}\right) \,\mathit {U}\, {\it acc}\right), \end{equation*}\) which by Lemma 4.4 ensures the existence of a path of \(M_{\mathcal {I},k}\) whose trace is an initialized well-formed \(k\)-grid code encoding a \(k\)-tiling. This concludes the proof of Lemma 4.5. □

4.2 Proof of Proposition 4.2 for the Asynchronous Setting

For the asynchronous case, we slightly modify the construction of model \(M_{\mathcal {I},k}\) in Lemma 4.4 by incorporating a bit represented by a fresh atomic proposition \(p_{b}\) that is flipped at every transition and is observed by all agents. This way, the resulting model \(M^{\prime }_{\mathcal {I},k}\) generates the same traces as \(M_{\mathcal {I},k}\) (modulo \(p_b\)), and for all histories \(\tau\) and \(\tau ^{\prime }\), \(\tau \approx ^{\mbox{as}}_{a}\tau ^{\prime }\) iff \(\tau \approx ^{\mbox{s}}_{a}\tau ^{\prime }\) (the asynchronous and synchronous semantics coincide). We point out that a similar trick was used in Reference [57] to turn asynchronous systems into synchronous ones.

Formally, let \(M_{\mathcal {I},k}=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},{{s}^\iota })\). We define \(\begin{equation*} M^{\prime }_{\mathcal {I},k}=(\text {AP}\cup \lbrace p_b\rbrace ,S\times \lbrace 0,1\rbrace ,R^{\prime },V^{\prime },\lbrace \sim _{a}^{\prime }\rbrace _{a\in \mathit {Ag}},({{s}^\iota },0)), \end{equation*}\) where

\((s,i)R^{\prime }(s^{\prime },j)\) if \(sRs^{\prime }\) and \(j=1-i\)

\(V^{\prime }(s,i)= {\left\lbrace \begin{array}{ll} V(s) & \text{if }i=0\\ V(s)\cup \lbrace p_b\rbrace & \text{otherwise} \end{array}\right.}\)

\((s,i)\sim _{a}^{\prime }(s^{\prime },j)\) if \(s\sim _{a}s^{\prime }\) and \(i=j.\)

It is clear that \(M^{\prime }_{\mathcal {I},k}\) generates the same traces as \(M_{\mathcal {I},k}\), modulo the valuations of \(p_b\). It is also clear that all agents observe every step, so the asynchronous and synchronous semantics coincide. Also, since the bit \(i\in \lbrace 0,1\rbrace\) is reflected by \(p_b\), histories that have the same trace are indistinguishable for both agents and satisfy the same \(\textsf {K}\)-propositional formulas, so all the reasoning in the proof of Lemma 4.5 still holds. Finally, since the formulas built for Lemma 4.5 do not mention \(p_b\), we obtain that \(M_{\mathcal {I},k}\models _{\mbox{s}}\varphi _k\) iff \(M^{\prime }_{\mathcal {I},k}\models _{\mbox{as}}\varphi _k\), which concludes the proof of Proposition 4.2 for the asynchronous case.
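The product construction above is mechanical enough to sketch in code. The following Python fragment is an illustrative sketch only, not the paper's formal construction: the data-structure choices (relations as sets of pairs, valuations as dictionaries) and the function name are ours.

```python
def add_clock_bit(states, trans, val, obs, init, p_b="p_b"):
    """Pair each state with a bit flipped at every transition and observed
    by all agents, so that asynchronous and synchronous indistinguishability
    coincide on the resulting model (sketch of the M -> M' construction)."""
    states2 = [(s, i) for s in states for i in (0, 1)]
    # (s, i) -> (s', 1 - i) whenever s -> s' in the original model.
    trans2 = {((s, i), (t, 1 - i)) for (s, t) in trans for i in (0, 1)}
    # Valuation: keep val(s), and add the fresh proposition p_b when the bit is 1.
    val2 = {(s, i): (val[s] | {p_b}) if i else set(val[s])
            for (s, i) in states2}
    # Indistinguishability: originally indistinguishable states with equal bits.
    obs2 = {a: {((s, i), (t, i)) for (s, t) in rel for i in (0, 1)}
            for a, rel in obs.items()}
    return states2, trans2, val2, obs2, (init, 0)
```

Formulas that do not mention `p_b` then behave on the new model under the asynchronous semantics as on the original model under the synchronous one, mirroring the argument above.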

Skip 5THE CASE OF NO LEARNING Section

5 THE CASE OF NO LEARNING

The notion of no learning was formalized in Reference [29] as the dual to the notion of perfect recall (or “no forgetting,” as it is called there). Intuitively, if an agent with no learning cannot distinguish between two paths \(\pi\) and \(\pi ^{\prime }\) at some point in time, then there will always be points in the future at which these paths are indistinguishable to the agent: She will never learn how to tell them apart. A similar assumption was made earlier in Ladner and Reif’s Linear Logic of Protocols [36], which captures blindfold games where players do not observe anything. However, no learning is more general than being blindfold, as an agent with no learning may tell the difference between states, as long as this does not allow her to distinguish between two paths that she could not distinguish at some point in the past. See Example 5.4 for an instance where this happens.

In Reference [29], which studies the complexity of the satisfiability problem for logics of knowledge and time, it was observed that adding either the no forgetting or the no learning assumption tends to make the problem harder, as it puts constraints on the model to synthesize. For the model-checking problem that we study here, no forgetting (or perfect recall) still makes things harder (see Table 1), as it forces the procedure to keep track of what agents remember of the past. However, as we will see, assuming no learning for the model-checking problem amounts to restricting attention to a subclass of models, and thus can only make the problem easier.

In References [29] and [27], the notion of no learning is defined with respect to interpreted systems, in which infinite runs are given explicitly, and states are tuples of local states, one for each agent, indicating the information each agent has about the current execution. We first recall these interpreted systems and the original definition of no learning and then transpose this definition to models given as finite Kripke structures.

5.1 Definition of No Learning

An interpreted system for agents \(\mathit {Ag}\) is a tuple \(\text{IS}=(\Pi ,V)\) where \(\Pi\) is a set of infinite paths \(\pi =\pi _0\pi _1\pi _2\ldots \in (L^\mathit {Ag})^\omega\), where \(L\) is a set of local states, and \(V:\Pi \times \mathbb {N}\rightarrow 2^\mathcal {AP}\) is a valuation function indicating for each path \(\pi \in \Pi\) and timestep \(n\in \mathbb {N}\) the set of atomic propositions that hold at \(\pi _n\). A pair \((\pi ,n)\) is called a point, and two points \((\pi ,n)\) and \((\pi ^{\prime },n^{\prime })\) are indistinguishable for agent \(a\), written \((\pi ,n)\approx _{a}(\pi ^{\prime },n^{\prime })\), if the local states of agent \(a\) at \(\pi _n\) and \(\pi ^{\prime }_{n^{\prime }}\) are the same. Formally, letting \(\pi _n=(l_a)_{a\in \mathit {Ag}}\) and \(\pi ^{\prime }_{n^{\prime }}=(l^{\prime }_a)_{a\in \mathit {Ag}}\), relation \(\approx _{a}\) is defined by letting \((\pi ,n)\approx _{a}(\pi ^{\prime },n^{\prime })\) if \(l_a=l^{\prime }_a\).

Intuitively, the local state of agent \(a\) in \(\pi _n\) represents all the information she has accumulated along \(\pi\) until time \(n\). In particular, it does not correspond to the “instantaneous” observation that agent \(a\) makes in our setting at time \(n\) of path \(\pi =s_0s_1\ldots\), where she observes \([s_n]_{a}\) and updates her memory with this new piece of information. It is closer to the notion of information set (see Definition 3.1), but it is finer, because two histories can have the same information set without being equivalent (think, for instance, of two histories of different length in the synchronous setting). Instead, a local state characterizes precisely a set of equivalent histories.

In this setting, perfect recall is therefore not a way of extending instantaneous equivalence relations to finite plays, as in Definitions 2.9 and 2.10. Instead, it is said that an agent in an interpreted system has perfect recall if the relation \(\approx _{a}\), defined by local states, satisfies a certain property. This property, intuitively, says that the local states of agent \(a\) encode all that this agent has observed from the beginning. From the various equivalent formal characterizations of perfect recall, we recall the one from Reference [27], which is more intuitive. Let \(\text{IS}=(\Pi ,V)\) be an interpreted system, and define agent \(a\)’s local state sequence at point \((\pi ,n)\) as the sequence of local states \(l_0,\ldots ,l_k\) that agent \(a\) sees along \(\pi\) until time \(n\), after removal of successive repetitions (note that in a synchronous system there can never be repetitions of local states along a run, as by definition of \(\approx _{a}\) there would then be equivalent histories of different length). Now, agent \(a\) has perfect recall in \(\text{IS}\) if for all points \((\pi ,n)\) and \((\pi ^{\prime },n^{\prime })\) in \(\text{IS}\) such that \((\pi ,n)\approx _{a}(\pi ^{\prime },n^{\prime })\), agent \(a\) has the same local state sequence at \((\pi ,n)\) and \((\pi ^{\prime },n^{\prime })\). So, agent \(a\)’s local state is the same in two points if and only if the local state sequences at these points are the same, which captures the intuition that an agent with perfect recall remembers her view of the past.

For no learning, the definition is dual. First, define agent \(a\)’s future local state sequence at \((\pi ,n)\) as the sequence of local states \(l_0,l_1,\ldots\) of agent \(a\) along \(\pi\) starting at time \(n\), with consecutive repetitions omitted. Agent \(a\) does not learn if for all \(\pi ,\pi ^{\prime },n,n^{\prime }\) such that \((\pi ,n)\approx _{a}(\pi ^{\prime },n^{\prime })\), agent \(a\)’s future local state sequence is the same at \((\pi ,n)\) and \((\pi ^{\prime },n^{\prime })\). So, if two runs are indistinguishable at some points, then they will remain indistinguishable.
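The definition lends itself to a direct finite-horizon check. The sketch below is illustrative only: local states are encoded as strings, `no_learning` checks the condition on an explicitly given finite set of points, and the future sequences are truncations of the infinite runs once they stabilize (all encodings are our assumptions, not the paper's formalism).

```python
def dedupe(seq):
    """Drop consecutive repetitions, as in the definition of a (future)
    local state sequence."""
    out = []
    for x in seq:
        if not out or out[-1] != x:
            out.append(x)
    return out

def no_learning(points, local_state, future):
    """An agent does not learn iff any two points with the same local state
    have the same future local-state sequence; checked here on a finite
    set of points with finite-horizon futures."""
    return all(dedupe(future(p)) == dedupe(future(q))
               for p in points for q in points
               if local_state(p) == local_state(q))
```

For eventually-periodic runs such as those of Example 5.1 below, a truncation that includes the stabilized tail suffices for the check.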

Example 5.1.

Fig. 6. Example of an interpreted system \(I\) with no learning and a Kripke structure \(M\) that generates \(I\).

In the interpreted system depicted on the left of Figure 6, both agents \(a\) and \(b\) do not learn. For agent \(a\) it is clear, as she is blindfold: She has the same local state \(l_a\) at all times in both paths \(\pi _1\) and \(\pi _2\), and thus at any point her future local state sequence is simply \(l_a\). Agent \(b\) first has local state \(l_b\) in both paths, and then has local state \(l^{\prime }_b\) forever, starting at time 1 in \(\pi _1\) and time 2 in run \(\pi _2\). Thus, at all points where she has local state \(l_b\), her future local state sequence is \(l_bl^{\prime }_b\), and at all points where her local state is \(l^{\prime }_b\), her future local state sequence is just \(l^{\prime }_b\). Since two points are equivalent if they contain the same local state, equivalent points indeed have the same future local state sequence. Note that this system is asynchronous.

Definition of No Learning for Finite Models

We now adapt the definition of no learning to our formal framework, where models are not given as explicit sets of infinite paths, but as finite Kripke structures that generate these paths.

First, when considering no learning, it becomes relevant to distinguish between models that have a unique initial state and those that have several. Indeed, the trick mentioned in Remark 1 to simulate the case of several possible initial states by adding an artificial unique initial state does not preserve no learning: By imposing that all paths start in the same state, which is necessarily equivalent to itself, we would impose that all paths have the same sequence of future local states for no learning to hold. So, from now on, we consider that models are of the form \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},S^\iota)\), where \(S^\iota \subseteq S\) is a set of initial states, and we write \(\textrm {H}(S^\iota)\) (respectively, \(\Pi (S^\iota)\)) for the set of histories (respectively, paths) that start in some state \(s\in S^\iota\): Formally, \(\textrm {H}(S^\iota):=\cup _{s\in S^\iota }\textrm {H}(s)\) and \(\Pi (S^\iota):=\cup _{s\in S^\iota }\Pi (s)\). Finally, the notion of information set from Definition 3.1 is also generalized as follows:

Definition 5.2.

Given a history \(\tau\) and an agent \(a\), the information set of \(a\) at \(\tau\) is defined as \(\begin{equation*} I_{a}(\tau)=\lbrace s\mid \exists \tau ^{\prime }\in \textrm {H}(S^\iota) \text{ s.t. }\tau \approx _{a}\tau ^{\prime } \text{ and }s=\mbox{lst}(\tau ^{\prime })\rbrace . \end{equation*}\)

Here, a crucial difference with the setting of interpreted systems is that equivalence of histories is not defined directly by the local states contained in the last state, but rather is inferred from how agents observe states of the system, their memory (memoryless or perfect recall), and whether they have access to a global clock (synchronous or asynchronous). Fix a model \(M\) with initial states \(S^\iota\), and assume that indistinguishability relations \(\approx _{a}\) on histories are defined. We call the local state of an agent \(a\) at a history \(\tau \in \textrm {H}(S^\iota)\), denoted \(ls_{a}(\tau)\), the set of all histories indistinguishable from \(\tau\) for \(a\): \(ls_{a}(\tau):=\lbrace \tau ^{\prime }\in \textrm {H}(S^\iota) \mid \tau ^{\prime } \approx _{a}\tau \rbrace\). To see that this corresponds to the notion of local state in interpreted systems, observe that two histories are equivalent if and only if they have the same local state. Given a path \(\pi \in \Pi (S^\iota)\) and a point in time \(n\in \mathbb {N}\), the future local-state sequence at \((\pi ,n)\) for agent \(a\) is the sequence of local states \(ls_{a}(\pi _{\le n}),ls_{a}(\pi _{\le n+1}),\ldots\) in which consecutive repetitions are omitted.
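To make the correspondence concrete, local states can be computed directly as equivalence classes of histories. In the sketch below the indistinguishability relation is a parameter; the instantiation shown (same length and observationally equal last states) corresponds to synchronous memoryless agents, and the state names and “colors” are hypothetical.

```python
def local_state(tau, histories, indist):
    """Local state of an agent at history tau: the equivalence class of
    tau under the given indistinguishability relation."""
    return frozenset(t for t in histories if indist(tau, t))

# Assumed instantiation (synchronous memoryless agent): two histories are
# indistinguishable iff they have the same length and their last states
# carry the same observation. States and colors below are hypothetical.
obs = {"s0": "white", "s1": "gray", "s2": "white"}
indist = lambda t, u: len(t) == len(u) and obs[t[-1]] == obs[u[-1]]
H = [("s0",), ("s0", "s1"), ("s0", "s2"),
     ("s0", "s1", "s0"), ("s0", "s2", "s2")]
ls = {t: local_state(t, H, indist) for t in H}
```

As stated in the text, two histories then have the same local state exactly when they are indistinguishable.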

Definition 5.3

(No Learning).

Agent \(a\) does not learn in model \(M\) if for all paths \(\pi ,\pi ^{\prime }\) and points in time \(n,n^{\prime }\in \mathbb {N}\) such that \(\pi _{\le n}\approx _{a}\pi ^{\prime }_{\le n^{\prime }}\), the future local-state sequences of agent \(a\) are the same at \((\pi ,n)\) and \((\pi ^{\prime },n^{\prime })\).

Example 5.4.

Consider the Kripke structure \(M\) depicted in Figure 6 (right). It is similar to that of Figure 1, except for the valuation (which is here empty) and the equivalence relations: This time, the states’ colors represent agent \(b\)’s equivalence relation \(\sim _{b}\), while agent \(a\)’s indistinguishability relation is the total relation and is not represented (agent \(a\) is blind). In this model, there are again two possible paths, namely, \(\pi _1=s_0s_1s_3^\omega\) and \(\pi _2=s_0s_2s_4^\omega\). We consider the asynchronous semantics, with either memoryless or perfect recall (it makes no difference in this example). With \(\approx _{a}\) and \(\approx _{b}\) thus defined, all histories are equivalent for agent \(a\), and all histories whose last states have the same color are equivalent for agent \(b\). To see that this model is fundamentally the same as the interpreted system \(I\) represented on the left of the figure, let \(ls_{a}\) be the set of all histories in \(M\), \(ls_{b}\) the set of all histories ending in a white state, and \(ls_{b}^{\prime }\) the set of all histories ending in a gray state. Then, identifying \(ls_{a}\), \(ls_{b}\), and \(ls_{b}^{\prime }\) with, respectively, \(l_a\), \(l_b\), and \(l^{\prime }_b\), the future local state sequences at each point of the paths \(\pi _1\) and \(\pi _2\) are the same in \(M\) and \(I\), and thus both agents do not learn, as in \(I\).
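The reasoning of the example can be replayed mechanically. Since Figure 6 itself is not reproduced here, the color assignment below is an assumption consistent with the discussion (white for \(s_0,s_2\), gray for \(s_1,s_3,s_4\)), and the lasso paths are truncated after a few repetitions.

```python
# Assumed color classes for Figure 6 (right); the figure is not reproduced
# in the text, so this assignment is a reconstruction from the discussion.
color = {"s0": "white", "s2": "white",
         "s1": "gray", "s3": "gray", "s4": "gray"}

def future_colors(path, n):
    """Future local-state sequence of memoryless agent b under the
    asynchronous semantics: colors of the states visited from time n on,
    with consecutive repetitions removed (finite lasso truncation)."""
    out = []
    for s in path[n:]:
        if not out or out[-1] != color[s]:
            out.append(color[s])
    return out

# The two paths of M, truncated inside their final self-loop.
pi1 = ["s0", "s1"] + ["s3"] * 4
pi2 = ["s0", "s2"] + ["s4"] * 4

# Points with the same color (= same local state for b) have the same
# deduplicated future sequence, so agent b does not learn.
points = [(p, n) for p in (pi1, pi2) for n in range(4)]
assert all(future_colors(p, n) == future_colors(q, m)
           for (p, n) in points for (q, m) in points
           if color[p[n]] == color[q[m]])
```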

We can make the connection between the two notions more formal. First, for each Kripke structure \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},S^\iota)\), define its generated interpreted system \(\text{IS}_M=(\Pi _M,V_M)\) over agents \(\mathit {Ag}\cup \lbrace a_0\rbrace\) as follows: First, \(\Pi _M=\lbrace \widehat{\pi }\mid \pi \in \Pi (S^\iota)\rbrace\), where for each \(i\in \mathbb {N}\), \(\widehat{\pi }_i=(\pi _i,(ls_{a}(\pi _{\le i}))_{a\in \mathit {Ag}})\): Each path \(\pi\) in \(M\) gives rise to a path \(\widehat{\pi }\) in \(\text{IS}_M\) by taking the sequence of local states for each agent along \(\pi\). We use the additional agent \(a_0\) to retain the actual sequence of states, so we can define the valuation as follows: \(V_M(\widehat{\pi },i)=V(\pi _i)\). For instance, in Figure 6, the interpreted system \(I\) is isomorphic to \(\text{IS}_M\) (with respect to agents \(\mathit {Ag}\)). In particular, \(I\) does not contain the additional agent \(a_0\) and its local states, which are only useful when different paths share the same sequence of local states for all agents and would thus be merged into the same run without \(a_0\). By applying the definitions, we can prove that:

Proposition 5.5.

For every Kripke structure \(M\), each agent \(a\in \mathit {Ag}\) does not learn in \(M\) if and only if she does not learn in \(\text{IS}_M\).

We now study the complexity of the model-checking problem for agents that do not learn.

5.2 No Learning with No Memory

First, considering the no learning assumption amounts to restricting attention to a subclass of models, and therefore it cannot make model checking harder. The complexity results in the general case thus provide upper bounds for the case of no learning. Also, because the no learning assumption only impacts epistemic aspects, we inherit the lower bounds for the underlying temporal languages of the different logics, i.e., the syntactic fragments obtained by removing knowledge operators. Whenever the upper bounds for an epistemic logic are the same as the lower bounds for its purely temporal fragment, model checking cannot get easier with no learning, and we thus get the following results (see Table 1 for the works where these upper bounds are established in the general case, i.e., without the no learning assumption).

Theorem 5.6.

For no-learning agents with no memory, model checking \(\texttt { CTLK}_{\texttt { C}}\) is Ptime -complete for asynchronous semantics, and model checking \(\texttt { LTLK}_{\texttt { C}}\) and \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) is Pspace -complete both for synchronous and asynchronous semantics.

Remark 2.

While model checking \(\texttt {LTL}\) and \(\texttt {CTL}^{*}\) takes time polynomial in the size of the model and exponential in the size of the formula, adding knowledge operators for memoryless agents makes the problem hard for every level of the polynomial hierarchy already for fixed formulas and synchronous semantics [22].

Concerning memoryless agents, it only remains to treat the case of \(\texttt { CTLK}_{\texttt { C}}\) with synchronous semantics (the so-called “clock semantics” [22, 32]). We know from Reference [32] that in the general case the model-checking problem for \(\texttt { CTLK}_{\texttt { C}}\) with clock semantics is in Pspace and is PH -hard (hard for every level of the polynomial hierarchy PH).6 First, this gives us Pspace membership for the particular case of no learning. We also observe that in the hardness proof in Reference [32] the models that are built, which have two agents and multiple initial states, satisfy the no learning assumption for both agents, so we directly obtain the lower bounds for the case of multiple initial states. We then show that we can adapt the proof to obtain models with a unique initial state and a single blindfold agent (who thus satisfies no learning), thus obtaining the lower bound also for a unique initial state and a single agent.

Theorem 5.7.

Model checking \(\texttt { CTLK}_{\texttt { C}}\) for no-learning agents with clock semantics is in Pspace and is PH -hard, even with unique initial state and a single blindfold agent.

Proof.

The PH -hardness proof in Reference [32] builds on the one in Reference [22, Theorem 6], which shows that model checking LTLK for agents with synchronous bounded memory is hard for every level of the polynomial hierarchy. We briefly recall the proof and show how to adapt it to obtain models with single initial state while maintaining the no learning property.

The proof is by reduction from QBF with fixed alternation \(k\in \mathbb {N}\). Let \(\Phi\) be a QBF formula of the form \(\begin{equation*} \Phi =\forall q_1^k\ldots q_n^k\exists q_1^{k-1} \ldots q_n^{k-1}\ldots (\forall /\exists)q_1^1\ldots q_n^1\; \alpha , \end{equation*}\) where \(\alpha\) is a propositional formula in conjunctive normal form.

Let also \(p_1^1,\ldots ,p_n^1,\ldots ,p_1^k,\ldots ,p_n^k\) be the sequence of the first \(nk\) prime numbers greater than 2, and for \(i\in \lbrace 1,\ldots ,n\rbrace\) and \(j\in \lbrace 1,\ldots ,k\rbrace\) let \(N_i^j=\prod _{1\le j^{\prime }\le j}p_i^{j^{\prime }}\). The largest number \(p_n^k\) is in \(O(nk \log (nk))\), hence in \(O(n^2)\), since \(k\) is fixed. The biggest \(N_i^j\), which is \(N_n^k\), is thus in \(O(n^{2k})\).
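These arithmetic facts are easy to check concretely. The sketch below computes the primes \(p_i^j\) and the products \(N_i^j\); laying the \(nk\) primes out as \(p_1^1,\ldots ,p_n^1,\ldots ,p_1^k,\ldots ,p_n^k\) is our assumption, since the text fixes only the sequence of primes itself.

```python
def first_primes_above_2(count):
    """First `count` primes greater than 2, by simple trial division."""
    primes, cand = [], 3
    while len(primes) < count:
        if all(cand % d for d in range(2, int(cand ** 0.5) + 1)):
            primes.append(cand)
        cand += 2
    return primes

def cycle_lengths(n, k):
    """Primes p_i^j and products N_i^j = prod_{j' <= j} p_i^{j'}.
    The (i, j) layout of the n*k primes is an assumed indexing."""
    ps = first_primes_above_2(n * k)
    p = {(i, j): ps[(j - 1) * n + (i - 1)]
         for j in range(1, k + 1) for i in range(1, n + 1)}
    N = {}
    for i in range(1, n + 1):
        acc = 1
        for j in range(1, k + 1):
            acc *= p[(i, j)]
            N[(i, j)] = acc
    return p, N
```

For fixed \(k\) the largest product \(N_n^k\) grows only polynomially in \(n\), as claimed above.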

The model \(M\) consists of a set of disjoint cycles: For each proposition \(q_i^j\) and clause \(c\) in which \(q_i^j\) appears either positively or negatively (we assume that the same variable never appears both positively and negatively in the same clause), \(M\) contains one cycle \(s_0^{c,i,j}\rightarrow \ldots \rightarrow s_{N}^{c,i,j}\rightarrow s_0^{c,i,j}\) where \(N=N_i^j-1\). So, there are potentially several cycles for each proposition \(q_i^j\), one for each clause in which it appears, but all of the same length \(N_i^j\). The model is thus of size \(O(|\Phi |\cdot N_n^k)=O(|\Phi |^{2k+1})\). Each \(s_0^{c,i,j}\) is an initial state.

For every \(m\in \mathbb {N}\), we let \(S_m\) be the set of possible states at time \(m\), i.e., states that appear at position \(m\) in some history starting in some initial state. For each \(j\in \lbrace 1,\ldots ,k\rbrace\), we also let \(X^j\) be the set of states of the form \(s_l^{c,i,j}\) for some \(l,c,i\), i.e., states that belong to cycles corresponding to propositions at level \(j\) of quantification. For every \(m\) and \(j\), we have that (5) \(\begin{equation} S_m\cap X^j= \lbrace s_l^{c,i,j} \mid 1\le i \le n, \lbrace q_i^j,\lnot q_i^j\rbrace \cap c\ne \emptyset , \text{ and }l=m \text{ mod } N_i^j\rbrace . \end{equation}\)

States are labeled with atomic propositions as follows: First, each state \(s_l^{c,i,j}\) is labeled with proposition \(\texttt {level}_j\), which characterizes cycles that correspond to propositions at level \(j\) of quantification. For each level of quantification \(j\), state \(s_l^{c,i,j}\) is also labeled with \(\texttt {assgt}_j\) if \(j=1\), or \(j\gt 1\) and \(l\) is divisible by \(N_i^j/p_i^j=N_i^{j-1}\). Proposition \(\texttt {assgt}_j\) thus marks \(p_i^j\) states evenly distributed in the cycle and indicates that state \(s_l^{c,i,j}\) codes for a possible truth assignment for \(q_i^j\): true if \(l\) is even and false if \(l\) is odd. Note that if a proposition \(q_i^j\) appears in two clauses \(c\) and \(c^{\prime }\), since the cycles corresponding to these two occurrences have the same length \(N_i^j\), at each time \(m\) the positions \(s_{m \text{ mod } N_i^j}^{c,i,j}\) and \(s_{m \text{ mod } N_i^j}^{c^{\prime },i,j}\) in both cycles agree on \(\texttt {assgt}_j\) and the associated truth assignment for \(q_i^j\). In addition, a state labeled with \(\texttt {assgt}_j\) is labeled with \(\texttt {sat}_j\) if \(l\) is even and \(q_i^j\) appears positively in \(c\), or if \(l\) is odd and \(q_i^j\) appears negatively in \(c\).

We are especially interested in times \(m\) at which all states in \(S_m\cap X^j\) are marked with \(\texttt {assgt}_j\), since such a time \(m\) defines a truth assignment for each proposition \(q_i^j\) at level \(j\) of quantification. For \(j=1\) this is the case for all \(m\), but for \(j\gt 1\) this is the case if and only if \(m\) is divisible by each \(N_i^{j-1}\), or equivalently, if \(m\) is divisible by \(\prod _{i=1}^nN_i^{j-1}\).

The main idea is that, since the \(N_i^j\) for a given \(j\) are pairwise co-prime, the sequence \((S_m\cap X^j)_{m\in \mathbb {N}}\) is periodic with period \(\prod _{i=1}^nN_i^j\), and each period visits each set of the form \(\begin{equation*} \left\lbrace s_{f(i)}^{c,i,j} \mid 1\le i \le n, \left\lbrace q_i^j,\lnot q_i^j\right\rbrace \cap c\ne \emptyset \right\rbrace \end{equation*}\) for every function \(f\) mapping each \(i\in \lbrace 1,\ldots ,n\rbrace\) to some \(f(i)\in \lbrace 0,\ldots ,N_i^j-1\rbrace\). Thus, for any given truth assignment for propositions at level \(j\) of quantification, in each period there is a time \(m\) for which the set \(S_m\cap X^j\) codes for this assignment. In addition, if \(j\gt 1\), then there are \(\prod _{i=1}^nN_i^{j-1}\) timesteps between two moments that code for truth assignments of all level \(j\) propositions, and in this interval all possible truth assignments for propositions at level \(j-1\) are visited.
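The periodicity claim rests on the Chinese Remainder Theorem: The moduli \(N_1^j,\ldots ,N_n^j\) are built from disjoint sets of primes and are therefore pairwise co-prime, so over one period every tuple of residues occurs exactly once. A small numerical check, with toy moduli standing in for the \(N_i^j\):

```python
from math import gcd, prod

def residue_tuples(moduli):
    """Tuples (m mod N_1, ..., m mod N_n) for m ranging over one period.
    With pairwise co-prime moduli, the Chinese Remainder Theorem guarantees
    that every combination of residues occurs exactly once per period."""
    period = prod(moduli)
    return [tuple(m % N for N in moduli) for m in range(period)]

moduli = [3, 5, 7]  # toy stand-ins for N_1^j, N_2^j, N_3^j
assert all(gcd(a, b) == 1 for a in moduli for b in moduli if a != b)
tuples = residue_tuples(moduli)
assert len(set(tuples)) == 3 * 5 * 7  # every residue combination appears
```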

We now finish the description of the reduction. There are two agents, \(a\) and \(b\): Agent \(a\) cannot distinguish anything, i.e., \(\sim _{a}\) is the universal relation, and agent \(b\) can only distinguish the clause from which a state arises: \(s_l^{c,i,j}\sim _{b}s_{l^{\prime }}^{c^{\prime },i^{\prime },j^{\prime }}\) if and only if \(c=c^{\prime }\).

The following formulas are then defined: \(\begin{equation*} \texttt {Assgt}_j=\mathit {K}_a (\texttt {level}_j\rightarrow \texttt {assgt}_j) \end{equation*}\) expresses that all level \(j\) states at the current time satisfy \(\texttt {assgt}_j\) (recall that agent \(a\) is blind, but with the clock semantics she perceives the ticks of time). Then, formula \(\begin{equation*} \texttt {all}_j(\varphi)=\mathit {A}\mathit {X}[\mathit {A}(\texttt {Assgt}_j\rightarrow \varphi)\;\mathit {U}\;\texttt {Assgt}_{j+1}] \end{equation*}\) expresses that \(\varphi\) holds at all future states in the cycle that code for a level \(j\) assignment, until the next state corresponding to a level \(j+1\) assignment (note that in a cycle, \(\mathit {A}\mathit {X}\varphi\) is equivalent to \(\mathit {E}\mathit {X}\varphi\), as there is only one possible future, and similarly for \(\mathit {U}\)). We also define the dual formula \(\begin{equation*} \texttt {some}_j(\varphi)=\lnot \texttt {all}_j(\lnot \varphi), \end{equation*}\) which expresses that \(\varphi\) holds at some point coding for a level \(j\) assignment before the next level \(j+1\) assignment.

Next, define formula \(\begin{equation*} \texttt {Holds}=\lnot \mathit {K}_b \lnot \beta , \quad \quad \text{where } \beta =\bigvee _{j=1\ldots k} \mathit {A}\mathit {X}\mathit {A}(\lnot \texttt {Assgt}_j)\mathit {U}(\texttt {Assgt}_j\wedge \texttt {sat}_j). \end{equation*}\) By definition of the observation relation for agent \(b\), \(\mathit {K}_b\) ranges over all cycles that correspond to the same clause as the one in which the formula is evaluated. Formula \(\texttt {Holds}\), when evaluated in a history ending in a state of the form \(s_l^{c,i,j}\), expresses that some literal in the clause \(c\) causes \(c\) to be satisfied at the next point where it is assigned a valuation.

Finally, define the CTLK formula \(\begin{equation*} \Phi ^{\prime }=\mathit {A}\mathit {G}(\texttt {Assgt}_k \rightarrow \texttt {some}_{k-1}\texttt {all}_{k-2}\ldots \mathit {K}_a \texttt {Holds}) \end{equation*}\) and we have that \(M\models \Phi ^{\prime }\) if and only if \(\Phi\) is true (see Reference [22, Appendix A] for more detail).

We show that in \(M\) both agents satisfy no learning. Let \(S^\iota =\lbrace s_0^{c,i,j}\mid q_i^j\text{ occurs in }c\rbrace\) be the set of initial states in \(M\). For agent \(b\), if \(\pi _{\le m}\approx _{b} \pi ^{\prime }_{\le m^{\prime }}\) for some paths \(\pi ,\pi ^{\prime }\in \Pi (S^\iota)\), then by the clock semantics we have \(m=m^{\prime }\), and by definition of \(\sim _{b}\), the paths \(\pi\) and \(\pi ^{\prime }\) are in cycles that correspond to the same clause \(c\). Because all states in cycles arising from the same clause are equivalent for \(\sim _{b}\), for every \(m\) the local state \(ls_{b}(\pi _{\le m})\) at \((\pi ,m)\) is the set of histories of length \(m\) in the cycles for clause \(c\), and thus the future local-state sequences at \((\pi ,m)\) and \((\pi ^{\prime },m^{\prime })\) are the same. For agent \(a\) it is even clearer, as she is blindfold and thus all histories of the same length are equivalent for her. We thus obtain the lower bound if multiple initial states are allowed.

To strengthen this result, we define a model \(M^{\prime }\) by adding to \(M\) a fresh unique initial state \(s_\iota\) that branches to all states in \(S^\iota\). The problem is that now agent \(b\) no longer satisfies no learning, as the history \(s_\iota\) is necessarily equivalent to itself, but \(s_\iota s_0^{c,i,j}\) and \(s_\iota s_0^{c^{\prime },i^{\prime },j^{\prime }}\) are not equivalent if \(c\ne c^{\prime }\). We note that the only place where agent \(b\) is used is in formula \(\texttt {Holds}\), and it is used only to make sure that we remain in cycles that correspond to the same clause as the one in which the formula is evaluated. We can actually enforce this without resorting to a second agent by labeling each state \(s_l^{c,i,j}\) with an additional proposition \(\texttt {clause}_c\). We then redefine formula \(\texttt {Holds}\) as \(\begin{equation*} \texttt {Holds}^{\prime }= \bigvee _c \texttt {clause}_c \wedge \lnot \mathit {K}_a \lnot (\texttt {clause}_c \wedge \beta). \end{equation*}\) We let agent \(a\) be blindfold again, i.e., \(\sim _{a}\) is the universal relation. We now have a single blindfold agent. With the clock semantics, it is clear that her local state in any history \(\tau \in \textrm {H}(s_\iota)\) is the set of histories \(\tau ^{\prime }\in \textrm {H}(s_\iota)\) of same length as \(\tau\). Agent \(a\) thus satisfies the no learning assumption. It only remains to redefine \(\Phi ^{\prime }\) as \(\begin{equation*} \Phi ^{\prime \prime }=\mathit {A}\mathit {X}\Phi ^{\prime }. \end{equation*}\) We have that \(M^{\prime }\models _{ck}\Phi ^{\prime \prime }\) if and only if \(\Phi\) is true, which concludes the proof. □

5.3 No Learning with Synchronous Perfect Recall

We now show that for synchronous perfect recall, the no-learning assumption makes the problem drastically easier: From undecidable (with operators for common knowledge) or nonelementary (without common knowledge), it becomes solvable in Pspace even with common knowledge operators.

It is observed in Reference [27] that assuming no learning together with a unique initial state and synchronous semantics implies that all agents always have the same knowledge, so this case boils down to the one-agent case: Every formula is equivalent to the same formula with each knowledge operator replaced with \(\mathit {K}_{a}\) for a single agent \(a\). We prove a more general result that will be central to building a Pspace procedure for no-learning agents with synchronous perfect recall.

Lemma 5.8.

Let \(M\) be a model with initial states \(S^\iota\) where all agents satisfy no learning. For all histories \(\tau ,\tau ^{\prime }\in \textrm {H}(S^\iota)\), agent \(a\in \mathit {Ag}\), and group \(G\subseteq \mathit {Ag}\), it holds that:

(1)

\(\tau \approx ^{\mbox{s}}_{a}\tau ^{\prime }\) if and only if \(\tau _0\sim _{a}\tau ^{\prime }_0\) and \(|\tau |=|\tau ^{\prime }|\).

(2)

\(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\) if and only if \(\tau _0\sim _{G}\tau ^{\prime }_0\) and \(|\tau |=|\tau ^{\prime }|\).

Proof.

For both points the left-to-right implication follows directly from the definitions of \(\approx ^{\mbox{s}}_{a}\) and \(\approx ^{\mbox{s}}_{G}\).

For the other implication of point \((1)\), assume that \(\tau _0\sim _{a}\tau ^{\prime }_0\) and \(|\tau |=|\tau ^{\prime }|\). Since \(\tau _0\sim _{a}\tau ^{\prime }_0\) and agents do not learn, for all paths \(\pi\) starting with \(\tau _0\) and \(\pi ^{\prime }\) starting with \(\tau ^{\prime }_0\), the future local-state sequences for agent \(a\) are the same at \((\pi ,0)\) and \((\pi ^{\prime },0)\). Since we have synchrony, the local state of a history is never equal to that of one of its strict prefixes, so the future local-state sequence at \((\pi ,0)\) is exactly \(ls_{a}(\pi _{\le 0}),ls_{a}(\pi _{\le 1}),\ldots\) and similarly for \(\pi ^{\prime }\). Thus, \(ls_{a}(\pi _{\le 0}),ls_{a}(\pi _{\le 1}),\ldots =ls_{a}(\pi ^{\prime }_{\le 0}),ls_{a}(\pi ^{\prime }_{\le 1}),\ldots\) This implies that for every \(i\in \mathbb {N}\), \(ls_{a}(\pi _{\le i})=ls_{a}(\pi ^{\prime }_{\le i})\), hence \(\pi _{\le i}\approx ^{\mbox{s}}_{a}\pi ^{\prime }_{\le i}\). Since \(|\tau |=|\tau ^{\prime }|\), there exist \(i\) and paths \(\pi ,\pi ^{\prime }\) starting, respectively, in \(\tau _0\) and \(\tau ^{\prime }_0\), such that \(\pi _{\le i}=\tau\) and \(\pi ^{\prime }_{\le i}=\tau ^{\prime }\), and thus \(\tau \approx ^{\mbox{s}}_{a}\tau ^{\prime }\).

Finally, for the right-to-left implication of point \((2)\), we prove by induction on \(k\) that if \((\tau _0,\tau ^{\prime }_0)\in (\cup _{a\in G} \sim _{a})^k\) and \(|\tau |=|\tau ^{\prime }|\), then \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\). If \(k=0\), then \(\tau _0=\tau ^{\prime }_0\), hence \(\tau _0\sim _{a}\tau ^{\prime }_0\) for all agents \(a\). In particular, \(\tau _0\sim _{a}\tau ^{\prime }_0\) for some agent \(a\in G\) (which exists, since \(G\) is nonempty). By point \((1)\) it follows that \(\tau \approx ^{\mbox{s}}_{a}\tau ^{\prime }\) and thus \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\). For the inductive step assume that \((\tau _0,\tau ^{\prime }_0)\in (\cup _{a\in G}\sim _{a})^{k+1}\). There exists a sequence of agents \(a_1,\ldots ,a_{k+1}\in G^+\) and a sequence of states \(s_1,\ldots ,s_{k}\in S\) such that \(\tau _0\sim _{a_1}s_1\sim _{a_2}\ldots s_{k}\sim _{a_{k+1}}\tau ^{\prime }_0\). Since the transition relation in \(M\) is left-total, there exists a history \(\tau ^{\prime \prime }\) that starts in \(s_k\) and has length \(|\tau ^{\prime \prime }|=|\tau |=|\tau ^{\prime }|\). We have \((\tau _0,\tau ^{\prime \prime }_0)\in (\cup _{a\in G}\sim _{a})^{k}\), so by induction hypothesis \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime \prime }\). We also have \(\tau ^{\prime \prime }_0\sim _{a_{k+1}}\tau ^{\prime }_0\), so by point \((1)\) it holds that \(\tau ^{\prime \prime }\approx ^{\mbox{s}}_{a_{k+1}} \tau ^{\prime }\). From \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime \prime } \approx ^{\mbox{s}}_{a_{k+1}} \tau ^{\prime }\), we conclude that \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\). □

So, the information set \(I_{a}(\tau)\) of agent \(a\) at history \(\tau\) is entirely determined by the first state of \(\tau\) and its length; in particular, all histories that start in the same state and have the same length are equivalent for all agents. We strongly rely on this fact and define a variant of the notion of information sets, parameterized no longer by agents but by starting states.

In the following, let us fix a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},S^\iota)\) where all agents do not learn. For every starting state \(s\in S^\iota\) and every history \(\tau \in \textrm {H}(S^\iota)\), we let \(\begin{equation*} I_{s}(\tau)=\lbrace s^{\prime }\mid \exists \tau ^{\prime }\in \textrm {H}(s) \text{ s.t. }|\tau |= |\tau ^{\prime }| \text{ and }s^{\prime }=\mbox{lst}(\tau ^{\prime })\rbrace . \end{equation*}\)

\(I_{s}(\tau)\) is the set of possible states after histories of length \(|\tau |\) that start in \(s\). Note that it does not depend on the agent. The following lemma follows directly from the definitions of \(I_{a}(\tau)\) and \(I_{s}(\tau)\) and Lemma 5.8:

Lemma 5.9.

For every agent \(a\) and history \(\tau \in \textrm {H}(s)\) where \(s\in S^\iota\), it holds that \(\begin{equation*} I_{a}(\tau)=\bigcup _{s^{\prime }\in S^\iota \text{ s.t. }s\sim _{a} s^{\prime }}I_{s^{\prime }}(\tau). \end{equation*}\)
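Because the update of each \(I_{s}\) is deterministic, \(I_{s}(\tau)\) can be computed by iterating the image operation \(|\tau |-1\) times, and Lemma 5.9 then recovers \(I_{a}(\tau)\) as a union over indistinguishable initial states. The following minimal Python sketch illustrates this; the encoding (a transition relation as a dict from states to sets of successors, an indistinguishability relation as a set of pairs) and the function names are illustrative assumptions, not the paper's notation.

```python
def info_set(R, s, length):
    """I_s(tau) for any history tau of the given length starting in s:
    iterate the (deterministic) image computation length-1 times."""
    I = {s}
    for _ in range(length - 1):
        I = set().union(*(R[t] for t in I)) if I else set()
    return I

def agent_info_set(R, sim_a, S_iota, s, length):
    """I_a(tau) as in Lemma 5.9: the union of I_{s'}(tau) over the
    initial states s' that agent a cannot distinguish from s."""
    return set().union(*(info_set(R, sp, length)
                         for sp in S_iota if (s, sp) in sim_a))
```

Note that `info_set` depends only on the starting state and the length, mirroring the remark after Lemma 5.8.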

Based on this, we now adapt the powerset construction from Section 3.2 to obtain a structure of single exponential size that contains enough information to evaluate formulas with arbitrary nesting of knowledge operators.

Definition 5.10.

Given a model \(M=(\text {AP},S,R,V,\lbrace \sim _{a}\rbrace _{a\in \mathit {Ag}},S^\iota)\), we define the powerset model \(\widehat{M}=(\text {AP},\widehat{S},\widehat{R},\widehat{V},\lbrace \widehat{\sim }_{a}\rbrace _{a\in \mathit {Ag}},\widehat{S}^{\,\iota })\), where

\(\widehat{S}= S^\iota \times S\times (2^{S})^{S^\iota }\)

\((s,t,\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\, \widehat{R}\, (s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota })\) if

\(s^{\prime }=s\),

\(t\, R\, t^{\prime }\), and

for each \(s^{\prime \prime }\in S^\iota\), \(I_{s^{\prime \prime }}^{\prime }=R(I_{s^{\prime \prime }})\)

\(\widehat{V}(s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })=V(t)\)

for \(a\in \mathit {Ag}\), \((s,t,\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\,\widehat{\sim }_{a}\,(s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota })\) if

\(s\sim _{a} s^{\prime }\),

\(t^{\prime }\in I_{s^{\prime }}\), and

for each \(s^{\prime \prime }\in S^\iota\), \(I_{s^{\prime \prime }}^{\prime }=I_{s^{\prime \prime }}\)

\(\widehat{S}^{\,\iota }=\lbrace (s,s,\langle \lbrace s^{\prime }\rbrace \rangle _{s^{\prime }\in S^\iota })\mid s\in S^\iota \rbrace .\)

This model is of size \(|\widehat{M}| = |M|^2\times 2^{|M|^2}\). Each set \(I_{s}\) is updated by simply taking the set of successors of states in the current \(I_{s}\). As in the construction of Section 3.2, this update is deterministic and thus every history \(\tau\) starting in state \(s\in S^\iota\) defines a unique history \(\widehat{\tau }\) of length \(|\tau |\) in \(\widehat{M}\), which starts in \((s,s,\langle \lbrace s^{\prime }\rbrace \rangle _{s^{\prime }\in S^\iota })\in \widehat{S}^{\,\iota }\) and follows transitions in \(\tau\). Similarly, every path \(\pi\) in \(M\) induces a unique path \(\widehat{\pi }\) in \(\widehat{M}\). We have the following:
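To make the construction of Definition 5.10 concrete, here is a hedged Python sketch of the temporal successor relation \(\widehat{R}\) and the epistemic relation \(\widehat{\sim }_{a}\) on powerset states; the dict-based encoding and the function names are our own illustrative assumptions, not the paper's notation.

```python
def hat_successors(R, state):
    """Temporal successors under R-hat: the first component is frozen,
    the second moves along R, and each I_s is replaced by its image R(I_s)."""
    s, t, I = state  # I: dict mapping each initial state to a frozenset of states
    I_next = {sp: frozenset().union(*(frozenset(R[x]) for x in Is))
              for sp, Is in I.items()}
    return [(s, tp, I_next) for tp in sorted(R[t])]

def hat_epistemic(sim_a, st1, st2):
    """st1 ~hat_a st2 as in Definition 5.10: a-indistinguishable initial
    components, t' is a possible current state from s', and the same family of sets."""
    (s, t, I), (sp, tp, Ip) = st1, st2
    return (s, sp) in sim_a and tp in Ip[sp] and I == Ip
```

The update of the sets is deterministic, so `hat_successors` branches only on the choice of the next state `tp`, matching the remark above.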

Lemma 5.11.

For every history \(\tau\) starting in \(s\), \(\begin{equation*} \mbox{lst}(\widehat{\tau })=(s,\mbox{lst}(\tau),\langle I_{s^{\prime }}(\tau)\rangle _{s^{\prime }\in S^\iota }). \end{equation*}\)

Proof.

By induction on \(|\tau |\). The base case \(|\tau |=1\) is direct. Now, let \(\tau =\tau ^{\prime }\cdot s^{\prime }\), and let \(\mbox{lst}(\widehat{\tau ^{\prime }})=(s,\mbox{lst}(\tau ^{\prime }),\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota })\). By construction \(\mbox{lst}(\widehat{\tau })=(s,s^{\prime },\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\) where \(I_{s^{\prime \prime }}=R(I_{s^{\prime \prime }}^{\prime })\). By induction hypothesis, \(I_{s^{\prime \prime }}^{\prime }=I_{s^{\prime \prime }}(\tau ^{\prime })= \lbrace \mbox{lst}(\tau ^{\prime \prime })\mid \tau ^{\prime \prime }\in \textrm {H}(s^{\prime \prime }) \text{ and }|\tau ^{\prime \prime }|= |\tau ^{\prime }|\rbrace\). It follows that \(R(I_{s^{\prime \prime }}^{\prime })=\lbrace \mbox{lst}(\tau ^{\prime \prime })\mid \tau ^{\prime \prime }\in \textrm {H}(s^{\prime \prime }) \text{ and }|\tau ^{\prime \prime }|= |\tau ^{\prime }|+1=|\tau |\rbrace =I_{s^{\prime \prime }}(\tau)\). □

Together with Lemma 5.9, this explains why, as in the general case, we can evaluate one level of knowledge operators positionally in the model \(\widehat{M}\), using the relations \(\widehat{\sim }_{a}\): after a history \(\tau\) starting in \(s\), if we write \(\mbox{lst}(\widehat{\tau })=(s,\mbox{lst}(\tau),\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\), then for each agent \(a\), we have \(I_{a}(\tau)=\bigcup _{s^{\prime }\sim _{a}s} I_{s^{\prime }}\). In fact, we can do more: \(\widehat{M}\) contains enough information to evaluate positionally formulas with arbitrary nesting of knowledge operators, and even common knowledge.

In the general case, after history \(\tau\), \(\mbox{lst}(\widehat{\tau })\) contains the information set \(I_{a}(\tau)\) for each agent \(a\), and in particular for all \(\tau ^{\prime }\) such that \(\tau \approx _{a}\tau ^{\prime }\), \(I_{a}(\tau)\) contains \(\mbox{lst}(\tau ^{\prime })\), which is enough information to evaluate whether a purely temporal formula holds at \(\tau ^{\prime }\), but not to evaluate epistemic formulas. For this, we would need the information sets of all agents after \(\tau ^{\prime }\), and while \(I_{a}(\tau)=I_{a}(\tau ^{\prime })\) if \(\tau \approx _{a}\tau ^{\prime }\), in general, we may have \(I_{b}(\tau)\ne I_{b}(\tau ^{\prime })\) for \(b\ne a\). This is why we iterate the powerset construction as presented in Section 3.4.

With synchronous perfect recall and no learning, the information sets \(I_{a}(\tau)\) can be deduced from the sets \(I_{s}(\tau)\). Since for each \(s\in S^\iota\), \(I_{s}(\tau)\) is completely determined by the length of \(\tau\), it follows by synchrony that \(I_{s}(\tau)=I_{s}(\tau ^{\prime })\) for all \(\tau \approx _{a}\tau ^{\prime }\). This explains the last part of the definition of \(\widehat{\sim }_{a}\) in Definition 5.10, and it also explains why we can use the memoryless semantics on the powerset model to evaluate formulas of arbitrary alternation depth and with common knowledge. This is formalized by the following variant of Proposition 3.7:

Proposition 5.12.

For every history formula \(\varphi\) and path formula \(\psi\) of \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\), each model \(M\), history \(\tau\), path \(\pi\), and time \(n\in \mathbb {N}\), \(\begin{align*} \tau \models _{\mbox{s}}\varphi &\quad \mbox{iff}\quad \mbox{lst}(\widehat{\tau })\models _m\varphi , \mbox{ and}\\ \pi ,n\models _{\mbox{s}}\psi &\quad \mbox{iff}\quad \widehat{\pi }_{\ge n}\models _m\psi . \end{align*}\)

Proof.

The proof is very similar to that of Proposition 3.7, using Lemma 5.11 instead of Lemma 3.6. The main difference is the case of the knowledge operator and the additional case for common knowledge. We only prove the latter; the former follows from the fact that \(\mathit {K}_{a}\varphi \equiv {C_{\lbrace a\rbrace }}\varphi\).

[\(\varphi ={{C_G}}\varphi ^{\prime }\):] Let us write \(\mbox{lst}(\widehat{\tau })=(s,t,\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\), and recall that by Lemma 5.11, for each \(s^{\prime \prime }\in S^\iota\), (6) \(\begin{equation} I_{s^{\prime \prime }}=I_{s^{\prime \prime }}(\tau)=\lbrace \mbox{lst}(\tau ^{\prime })\mid \tau ^{\prime }\in \textrm {H}(s^{\prime \prime }) \text{ and }|\tau ^{\prime }|= |\tau |\rbrace . \end{equation}\) By definition, \(\tau \models _{\mbox{s}}{C_G}\varphi ^{\prime }\) if and only if \(\tau ^{\prime }\models _{\mbox{s}}\varphi ^{\prime }\) for all \(\tau ^{\prime }\) such that \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\). By induction hypothesis, this becomes: \(\tau \models _{\mbox{s}}{C_G}\varphi ^{\prime }\) if and only if \(\mbox{lst}(\widehat{\tau ^{\prime }})\models _m\varphi ^{\prime }\) for all \(\tau ^{\prime }\) such that \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\). Since, by definition, \(\mbox{lst}(\widehat{\tau })\models _m{C_G}\varphi ^{\prime }\) iff \(\widehat{s}\,^{\prime }\models _m\varphi ^{\prime }\) for all \(\widehat{s}\,^{\prime }\) such that \(\widehat{s}\,^{\prime }\widehat{\sim }_{G}\, \mbox{lst}(\widehat{\tau })\), all that remains to prove is that \(\lbrace \mbox{lst}(\widehat{\tau ^{\prime }})\mid \tau ^{\prime }\approx ^{\mbox{s}}_{G}\tau \rbrace =\lbrace \widehat{s}\,^{\prime }\mid \widehat{s}\,^{\prime }\widehat{\sim }_{G}\, \mbox{lst}(\widehat{\tau })\rbrace\).

For the first inclusion, let \(\tau ^{\prime }\) be such that \(\tau ^{\prime }\approx ^{\mbox{s}}_{G}\tau\), and let \(\mbox{lst}(\widehat{\tau ^{\prime }})=(s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota })\). By Lemma 5.8, \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\) if and only if \(\tau _0\sim _{G}\tau ^{\prime }_0\) and \(|\tau |=|\tau ^{\prime }|\), so \(\tau _0=s\, \sim _{G}\, s^{\prime }=\tau ^{\prime }_0\). In addition, \(|\tau |=|\tau ^{\prime }|\) implies that \(I_{s^{\prime \prime }}^{\prime }=I_{s^{\prime \prime }}\) for each \(s^{\prime \prime }\in S^\iota\), by construction of \(\widehat{M}\). Also by construction, we have that \(t^{\prime }=\mbox{lst}(\tau ^{\prime })\in I_{s^{\prime }}\). Put together, this implies that we can go from \(\mbox{lst}(\widehat{\tau })\) to \(\mbox{lst}(\widehat{\tau ^{\prime }})\) through a finite number of states related by \(\cup _{a\in G}\,\widehat{\sim }_{a}\), i.e., \(\mbox{lst}(\widehat{\tau })\,\widehat{\sim }_{G}\,\mbox{lst}(\widehat{\tau ^{\prime }})\).

For the other inclusion, let \(\widehat{s}\,^{\prime }=(s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota })\) be such that \(\widehat{s}\,^{\prime }\widehat{\sim }_{G}\,\mbox{lst}(\widehat{\tau })\). By definition of \(\widehat{\sim }_{G}\), we have \(s\sim _{G}s^{\prime }\), \(t^{\prime }\in I_{s^{\prime }}^{\prime }\) and \(I_{s^{\prime \prime }}^{\prime }=I_{s^{\prime \prime }}\) for each \(s^{\prime \prime }\in S^\iota\). Further, by Lemma 5.11, we have that \(I_{s^{\prime \prime }}=I_{s^{\prime \prime }}(\tau)\) for each \(s^{\prime \prime }\in S^\iota\). From \(t^{\prime }\in I_{s^{\prime }}^{\prime }=I_{s^{\prime }}(\tau)\), we get that there exists a history \(\tau ^{\prime }\in \textrm {H}(s^{\prime })\) such that \(|\tau ^{\prime }|=|\tau |\) and \(\mbox{lst}(\tau ^{\prime })=t^{\prime }\). By construction, \(\mbox{lst}(\widehat{\tau ^{\prime }})=\widehat{s}\,^{\prime }\). Finally, since \(s\sim _{G}s^{\prime }\), we have by Lemma 5.8 that \(\tau \approx ^{\mbox{s}}_{G}\tau ^{\prime }\), which concludes the proof.

 □

Proposition 5.12 shows that to model check \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) with synchronous perfect recall and no-learning agents, we can equivalently model check the same formula with the (asynchronous) memoryless semantics \(\models _m\) on the powerset model. Model checking for the memoryless semantics is in Pspace [35], and by building the powerset model on the fly as in Section 3.3, we obtain a Pspace model-checking procedure.

Proposition 5.13.

Model checking \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) with synchronous perfect recall and no learning is in Pspace .

Proof.

We define function \(\mbox{MC}^{\text{nl}}(\varphi ,M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\) that takes as input a \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) history formula \(\varphi\), a model \(M\), and a state \((s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\) of the powerset model \(\widehat{M}\), and returns true if \((s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\models _m\varphi\), and false otherwise. The algorithm is very similar to algorithm \(\mbox{MC}^1\) of Proposition 3.9, but we describe it entirely for completeness. The main differences are in the case of the knowledge operator, the new case for common knowledge, and in the way states are updated along a branch in the case of \(\mathit {E}\psi\). \(\mbox{MC}^{\text{nl}}(\varphi ,M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\) is defined by induction on \(\varphi\) as follows:

[\(\varphi =p\):] return \(p\in V(t)\)

[\(\varphi =\varphi _1 \vee \varphi _2\):] return \(\mbox{MC}^{\text{nl}}(\varphi _1,M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\) or \(\mbox{MC}^{\text{nl}}(\varphi _2,M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\)

[\(\varphi =\lnot \varphi ^{\prime }\):] return not \(\mbox{MC}^{\text{nl}}(\varphi ^{\prime },M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\)

[\(\varphi =\mathit {K}_{a}\varphi ^{\prime }\):] return \(\mathop{\text{And}}\limits _{s^{\prime }\sim _{a}s, \,{t^{\prime }\in I_{s^{\prime }}}}\mbox{MC}^{\text{nl}}(\varphi ^{\prime },M,s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\)

[\(\varphi ={C_G}\varphi ^{\prime }\):] return \(\mathop{\text{And}}\limits _{s^{\prime }\sim _{G}s, \,{t^{\prime }\in I_{s^{\prime }}}}\mbox{MC}^{\text{nl}}(\varphi ^{\prime },M,s^{\prime },t^{\prime },\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota })\)

[\(\varphi =\mathit {E}\psi\):] Let \(\textrm {MaxSub}(\psi)\) be the set of maximal history subformulas of \(\psi\), let \(\psi ^{\prime }\) be the LTL formula obtained from \(\psi\) by considering subformulas in \(\textrm {MaxSub}(\psi)\) as atoms, and let \(\textrm {Cl}(\psi)\) be the closure of \(\mbox{Sub}(\psi ^{\prime })\) under negation. First, guess a subset \(S_1\subseteq \textrm {Cl}(\psi)\) of formulas that currently hold at state \((s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\). Check Boolean consistency, i.e., check that the following two conditions hold:

\(\quad \varphi _1\vee \varphi _2\in S_1\) iff \(\varphi _1\in S_1\) or \(\varphi _2\in S_1\)

\(\quad \lnot \varphi ^{\prime }\in S_1\) iff \(\varphi ^{\prime }\notin S_1.\)

Check that \(\psi \in S_1\). Also, check that the truth of maximal history subformulas was guessed correctly: For all \(\varphi ^{\prime }\in \textrm {MaxSub}(\psi)\cap S_1\), check that \(\mbox{MC}^{\text{nl}}(\varphi ^{\prime },M,s,t,\langle I_{s^{\prime }}\rangle _{s^{\prime }\in S^\iota })\).

Now, by Lemma 3.8, we know that if there exists a path that satisfies \(\psi\), then there exists an ultimately periodic one with start index and period less than \(|M| 2^{|\mathit {Ag}| |M|+|\psi |}\). So, let us guess \(n_1,n_2\le |M| 2^{|\mathit {Ag}| |M|+|\psi |}\), representing, respectively, the start index and the period of the ultimately periodic path that the algorithm is going to guess. Set a counter \(\texttt {c}\) to zero.

While \(\texttt {c}\lt n_1\), do:

Step \({\left\lbrace \begin{array}{ll} \mbox{guess }t^{\prime }\in R(t)\\ \text{for each }s^{\prime \prime }\in S^\iota , I_{s^{\prime \prime }}^{\prime }:=R(I_{s^{\prime \prime }})\\ \mbox{guess a set }S_2\subseteq \textrm {Cl}(\psi)\\ \mbox{check Boolean consistency of $S_2$}\\ \mbox{check dynamic consistency of $S_1$ and $S_2$:}\\ \quad \mathit {X}\varphi ^{\prime }\in S_1\mbox{ iff } \varphi ^{\prime }\in S_2, \mbox{ and}\\ \quad \varphi _1\mathit {U}\varphi _2\in S_1\mbox{ iff } \mbox{$\varphi _2\in S_1$, or ($\varphi _1\in S_1$ and $\varphi _1\mathit {U}\varphi _2\in S_2$)}\\ \mbox{check the truth of }\textrm {MaxSub}(\psi)\cap S_2\mbox{ on the new state }(s,t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota }):\\ \quad \mbox{for all } \varphi ^{\prime }\in \textrm {MaxSub}(\psi)\cap S_2, \mbox{ check that } \mbox{MC}^{\text{nl}}(\varphi ^{\prime },M,s,t^{\prime },\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota }).\\ t:=t^{\prime }, \langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota }:=\langle I_{s^{\prime \prime }}^{\prime }\rangle _{s^{\prime \prime }\in S^\iota }, S_1:=S_2, \texttt {c}:=\texttt {c}+1 \end{array}\right.}\)

Once \(\texttt {c}=n_1\), let \(S^{\mathrm{period}}:=S_1\), \(t^{\mathrm{period}}:=t\), \(\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota }^{\mathrm{period}}:=\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota }\), and \(\texttt {c}:=0\).

While \(\texttt {c}\lt n_2\), do:

Mark which eventualities (formulas of the form \(\varphi _1\mathit {U}\varphi _2\)) in \(S^{\mathrm{period}}\) are satisfied

Execute Step

Once \(\texttt {c}=n_2\), check that \(t=t^{\mathrm{period}}\), \(S_1=S^{\mathrm{period}}\), and \(\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota }=\langle I_{s^{\prime \prime }}\rangle _{s^{\prime \prime }\in S^\iota }^{\mathrm{period}}\) (the first component \(s\) never changes along \(\widehat{R}\), so it need not be checked). If it is the case, then we indeed guessed an ultimately periodic path in the powerset model, and we just need to check that all eventualities in \(S^{\mathrm{period}}\) have been satisfied somewhere in the period. Return true if it is the case, false otherwise.

The correctness follows from Proposition 5.12 together with the correctness of the LTL model-checking procedure and Emerson and Lei’s meta-algorithm. We thus have that \(M\models _{\mbox{s}}\varphi\) if, and only if, \(\mbox{MC}^{\text{nl}}(\varphi ,M,{{s}^\iota },{{s}^\iota },\langle \lbrace s^{\prime }\rbrace \rangle _{s^{\prime }\in S^\iota })\) returns true for every initial state \({{s}^\iota }\in S^\iota\). A complexity analysis similar to that of Proposition 3.9 shows that this procedure runs in polynomial space. □
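The Boolean and dynamic consistency checks performed in Step are the standard LTL expansion-law checks. As an illustration only, here is a small Python sketch with an assumed tuple encoding of formulas (an atom is a string; \(\lnot\), \(\vee\), \(\mathit{X}\), and \(\mathit{U}\) are tagged tuples); the encoding and names are ours, not part of the procedure above.

```python
# Formulas over Cl(psi): an atom is a string, and compound formulas are
# tagged tuples ('not', f), ('or', f, g), ('X', f), ('U', f, g).

def boolean_consistent(S, closure):
    """S (a subset of the closure) must decide disjunctions and negations
    coherently: f1 v f2 in S iff f1 in S or f2 in S; not f in S iff f not in S."""
    for f in closure:
        if not isinstance(f, tuple):
            continue  # atoms impose no Boolean constraint
        if f[0] == 'or' and ((f in S) != (f[1] in S or f[2] in S)):
            return False
        if f[0] == 'not' and ((f in S) == (f[1] in S)):
            return False
    return True

def dynamic_consistent(S1, S2, closure):
    """LTL expansion laws between consecutive guessed sets:
    X f in S1 iff f in S2; f U g in S1 iff g in S1, or (f in S1 and f U g in S2)."""
    for f in closure:
        if not isinstance(f, tuple):
            continue
        if f[0] == 'X' and ((f in S1) != (f[1] in S2)):
            return False
        if f[0] == 'U' and ((f in S1) != (f[2] in S1 or (f[1] in S1 and f in S2))):
            return False
    return True
```

Because the closure is closed under negation, Boolean consistency forces exactly one of \(\varphi ^{\prime }\) and \(\lnot \varphi ^{\prime }\) into each guessed set.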

Theorem 5.14.

For no-learning agents with synchronous perfect recall, model checking \(\texttt { LTLK}_{\texttt { C}}\) and \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) is Pspace -complete. For \(\texttt { CTLK}_{\texttt { C}}\), it is PH -hard and in Pspace .

Proof.

The upper bounds are from Proposition 5.13. The lower bounds for \(\texttt { LTLK}_{\texttt { C}}\) and \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) are inherited from LTL and CTL*. The PH -hard lower bound for \(\texttt { CTLK}_{\texttt { C}}\) is inherited from the case of CTLK with synchronous memoryless agents with no learning. Indeed, Theorem 5.7 shows that this problem is PH -hard already for a single blindfold agent, and for such an agent, the indistinguishability relation for synchronous perfect recall is the same as the one for the synchronous memoryless semantics (it is the relation that relates all histories of the same length). □

5.4 No Learning with Asynchronous Perfect Recall

The last remaining case is that of no learning with asynchronous perfect recall. In this case, indistinguishable histories can be of different lengths, and in a given history the number of events observed may differ from agent to agent. As a consequence, the characterization in Lemma 5.8, on which the Pspace model-checking procedure for no learning with synchronous perfect recall relies, does not hold. We have instead the following characterization, where \(|\tau |_a:=|\text{Obs}_{a}(\tau)|\) denotes the number of visible events for agent \(a\) in history \(\tau\):

Lemma 5.15.

Let \(M\) be a model with initial states \(S^\iota\) in which all agents satisfy no learning. For all histories \(\tau ,\tau ^{\prime }\in \textrm {H}(S^\iota)\) and every agent \(a\in \mathit {Ag}\), it holds that \(\tau \approx ^{\mbox{as}}_{a}\tau ^{\prime }\) if and only if \(\tau _0\sim _{a}\tau ^{\prime }_0\) and \(|\tau |_a=|\tau ^{\prime }|_a\).

Proof.

Again, one implication is clear. For the other direction, assume that \(\tau _0\sim _{a}\tau ^{\prime }_0\) and \(|\tau |_a=|\tau ^{\prime }|_a\). Since \(\tau _0\sim _{a}\tau ^{\prime }_0\) and agents do not learn, for all paths \(\pi\) starting with \(\tau _0\) and \(\pi ^{\prime }\) starting with \(\tau ^{\prime }_0\), the future local-state sequences for agent \(a\) are the same at \((\pi ,0)\) and \((\pi ^{\prime },0)\). This implies that the sequences of observations (modulo stuttering) for agent \(a\) are also the same along \(\pi\) and \(\pi ^{\prime }\). It follows that if \(\tau\) and \(\tau ^{\prime }\) have the same number of observations for agent \(a\), then their sequences of observations for \(a\) are equal, and thus \(\tau \approx ^{\mbox{as}}_{a}\tau ^{\prime }\). □
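Assuming \(\text{Obs}_{a}(\tau)\) is the sequence of agent \(a\)'s local states along \(\tau\) with consecutive repetitions collapsed (the usual stuttering-insensitive asynchronous view), the quantity \(|\tau |_a\) used above can be sketched as follows; the function names and encoding are our own illustrative assumptions.

```python
def obs(local_states):
    """Asynchronous observation sequence of an agent along a history:
    collapse consecutive repetitions of the same local state (stuttering)."""
    out = []
    for ls in local_states:
        if not out or out[-1] != ls:
            out.append(ls)
    return out

def num_visible_events(local_states):
    """|tau|_a: the number of visible events for the agent along tau."""
    return len(obs(local_states))
```

Two histories with equal collapsed sequences look identical to the agent, even when their lengths differ, which is exactly why the synchronous argument breaks here.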

However, since the number of observations made along a history may differ from agent to agent, this characterization does not rid us of the need to record nested information for different agents in order to evaluate nested knowledge operators. Intuitively, two histories that are indistinguishable to agent \(a\) may have different numbers of observations for agent \(b\).

Concerning upper bounds, we can infer from Theorem 3.12 that for \(\texttt { CTL}^{*}\texttt {K}\) (without common knowledge) the problem is in \((k-1)\)-Expspace for alternation depth \(k\). For lower bounds, we know that the problem is Pspace -hard for LTLK and \(\texttt { CTL}^{*}\texttt {K}\), and PH -hard for CTLK. The latter result is obtained by a simple adaptation of the proof of Theorem 5.7: using the same trick as in the proof of Theorem 4.1 to deal with the asynchronous case, the only agent is turned from one that observes nothing into one that observes only the clock, by switching between two observations at every transition in the model. The exact complexity remains an open problem. In particular, we do not know whether model checking \(\texttt { CTL}^{*}\texttt {K}_{\texttt {C}}\) with asynchronous perfect recall remains undecidable on systems with no learning or whether it becomes decidable.


6 CONCLUSION

In this work, we settle the exact complexity of model checking the main logics of knowledge and time for almost all cases that were still unknown. In particular, in the case of agents with perfect recall, for 20 years the best known upper bound for \(\texttt { CTL}^{*}\texttt {K}\) and LTLK remained \(k\)-Exptime for formulas of alternation depth \(k\), and no matching lower bound was known. We showed that the problem is \((k-1)\)-Expspace -complete for formulas of alternation depth at most \(k\ge 1\), both in the synchronous and in the asynchronous case. We also studied the case of agents with no learning, a notion dual to perfect recall stemming from seminal works on epistemic temporal logics [28, 29], for which we established the complexity of the model-checking problem in almost all cases.

To close the picture for the 96 logics identified by Halpern and Vardi, it remains to settle the exact complexity for CTLK with clock semantics and for the case of asynchronous perfect recall with no learning. The former was announced to be Pspace -complete in Reference [32], but the proof actually only allows one to conclude that it is PH -hard and in Pspace . Note that it is still an open question whether the inclusion of PH in Pspace is strict. Concerning the latter case, the gap to be closed is much wider, as we only have PH or Pspace lower bounds, nonelementary upper bounds for the logics without common knowledge, and no upper bound for the logics with common knowledge. On the one hand, the no-learning assumption very strongly constrains the models and gives the impression that the problem should be much easier than the general case. This is indeed what happens for synchronous perfect recall, which we showed drops from undecidable with common knowledge and nonelementary without, to Pspace even with common knowledge. On the other hand, it seems that with asynchronous perfect recall, models with no learning still display phenomena that may force recording nested levels of information to evaluate nested knowledge operators.

The fact that the synchronous and asynchronous cases seem so different may be surprising, as in the general case both can be dealt with very similarly, as shown in Sections 3 and 4. However, it is known that for the satisfiability problem with no learning there is a big difference between synchronous and asynchronous perfect recall: one is nonelementary while the other is undecidable [29]. We leave these last remaining cases for future work.

Footnotes

1. It is announced in Reference [22] that the problem is Pspace -hard, but the proof actually only establishes hardness for every level of the polynomial hierarchy (PH) [58].

2. Actually, it would also work for KD45 relations (see Reference [23] for more on weaker notions of knowledge).

3. We assume that the first bit in the binary encoding of a natural number is the least significant one.

4. Since the indistinguishability relations of \(M_{\mathcal {I},k}\) depend only on the valuation function, histories that have the same trace are indistinguishable by any agent.

5. Note that the prefix of an initialized \(k\)-grid code is always the prefix of some tagged \(k\)-grid code.

6. It is announced in Reference [32] that it is Pspace -hard, but the proof actually only establishes PH -hardness [58].

REFERENCES

[1] Rajeev Alur, Pavol Černý, and Swarat Chaudhuri. 2007. Model checking on trees with path equivalences. In TACAS. Springer, 664–678.
[2] Benjamin Aminof, Aniello Murano, Sasha Rubin, and Florian Zuleger. 2016. Prompt alternating-time epistemic logics. In KR.
[3] Guillaume Aucher. 2014. Supervisory control theory in epistemic temporal logic. In AAMAS. 333–340. Retrieved from http://dl.acm.org/citation.cfm?id=2615787
[4] Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano, and Laurent Perrussel. 2022. Reasoning about human-friendly strategies in repeated keyword auctions. In AAMAS, Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, and Matthew E. Taylor (Eds.). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 62–71.
[5] Francesco Belardinelli, Sophia Knight, Alessio Lomuscio, Bastien Maubert, Aniello Murano, and Sasha Rubin. 2021. Reasoning about agents that may know other agents’ strategies. In IJCAI, Zhi-Hua Zhou (Ed.). ijcai.org, 1787–1793.
[6] Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2017. Verification of broadcasting multi-agent systems against an epistemic strategy logic. In IJCAI, Vol. 17. 91–97.
[7] Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2020. Verification of multi-agent systems with public actions against strategy logic. Artif. Intell. 285 (2020), 103302.
[8] Francesco Belardinelli, Alessio Lomuscio, and Emily Yu. 2020. Model checking temporal epistemic logic under bounded recall. In AAAI. 7071–7078.
[9] P. Van Emde Boas. 1997. The convenience of tilings. In Complexity, Logic, and Recursion Theory. Marcel Dekker Inc, 331–363.
[10] Rodica Bozianu, Cătălin Dima, and Emmanuel Filiot. 2014. Safraless synthesis for epistemic temporal specifications. In CAV. Springer, 441–456.
[11] Laura Bozzelli, Bastien Maubert, and Aniello Murano. 2019. The complexity of model checking knowledge and time. In IJCAI. 1595–1601.
[12] Laura Bozzelli, Bastien Maubert, and Sophie Pinchinat. 2015. Uniform strategies, rational relations and jumping automata. Inf. Comput. 242 (2015), 80–107.
[13] Laura Bozzelli, Bastien Maubert, and Sophie Pinchinat. 2015. Unifying hyper and epistemic temporal logics. In FoSSaCS. Springer, 167–182.
[14] Ronen I. Brafman, Jean-Claude Latombe, Yoram Moses, and Yoav Shoham. 1997. Applications of a logic of knowledge to motion planning under uncertainty. J. ACM 44, 5 (1997), 633–668.
[15] Petr Čermák, Alessio Lomuscio, Fabio Mogavero, and Aniello Murano. 2014. MCMAS-SLK: A model checker for the verification of strategy logic specifications. In CAV. Springer, 525–532.
[16] Francien Dechesne and Yanjing Wang. 2010. To know or not to know: Epistemic approaches to security protocol verification. Synthese 177, 1 (2010), 51–76.
[17] Cătălin Dima. 2009. Revisiting satisfiability and model-checking for CTLK with synchrony and perfect recall. In CLIMA. 117–131.
[18] Gaëtan Douéneau-Tabot, Sophie Pinchinat, and François Schwarzentruber. 2018. Chain-monadic second order logic over regular automatic trees and epistemic planning synthesis. Adv. Modal Logic 7 (2018).
[19] Stefan Dziembowski, Marcin Jurdzinski, and Igor Walukiewicz. 1997. How much memory is needed to win infinite games? In LICS. IEEE, 99–110.
[20] E. Allen Emerson and Joseph Y. Halpern. 1986. “Sometimes” and “not never” revisited: On branching versus linear time temporal logic. J. ACM 33, 1 (1986), 151–178.
[21] E. Allen Emerson and Chin-Laung Lei. 1987. Modalities for model checking: Branching time logic strikes back. Sci. Comput. Program. 8, 3 (1987), 275–306.
[22] Kai Engelhardt, Peter Gammie, and Ron van der Meyden. 2007. Model checking knowledge and linear time: PSPACE cases. In LFCS. Springer, 195–211.
[23] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Vardi. 2004. Reasoning about Knowledge. MIT Press.
[24] Peter Gammie and Ron van der Meyden. 2004. MCK: Model checking the logic of knowledge. In CAV. Springer, 479–483.
[25] Dimitar P. Guelev, Catalin Dima, and Constantin Enea. 2011. An alternating-time temporal logic with knowledge, perfect recall and past: Axiomatisation and model-checking. J. Appl. Non-classic. Logics 21, 1 (2011), 93–131.
[26] Joseph Y. Halpern and Kevin R. O’Neill. 2005. Anonymity and information hiding in multiagent systems. J. Comput. Secur. 13, 3 (2005), 483–512. Retrieved from http://content.iospress.com/articles/journal-of-computer-security/jcs237
[27] Joseph Y. Halpern, Ron van der Meyden, and Moshe Y. Vardi. 2004. Complete axiomatizations for reasoning about knowledge and time. SIAM J. Comput. 33, 3 (2004), 674–703.
[28] Joseph Y. Halpern and Moshe Y. Vardi. 1986. The complexity of reasoning about knowledge and time. In STOC. ACM, 304–315.
[29] Joseph Y. Halpern and Moshe Y. Vardi. 1989. The complexity of reasoning about knowledge and time. I. Lower bounds. J. Comput. Syst. Sci. 38, 1 (1989), 195–237.
[30] Xiaowei Huang, Qingliang Chen, and Kaile Su. 2015. The complexity of model checking succinct multiagent systems. In IJCAI. 1076–1082. Retrieved from http://ijcai.org/Abstract/15/156
[31] Xiaowei Huang and Ron van der Meyden. 2018. An epistemic strategy logic. ACM Trans. Comput. Logic 19, 4 (2018), 1–45.
[32] Xiaowei Huang and Ron van der Meyden. 2010. The complexity of epistemic model checking: Clock semantics and branching time. In ECAI. 549–554.
  33. [33] Kacprzak Magdalena, Lomuscio Alessio, and Penczek Wojciech. 2004. From bounded to unbounded model checking for temporal epistemic logic. Fundam. Inform. 63, 2-3 (2004), 221240.Google ScholarGoogle ScholarDigital LibraryDigital Library
  34. [34] Kacprzak Magdalena, Lomuscio Alessio, and Penczek Wojciech. 2004. Verification of multiagent systems via unbounded model checking. In AAMAS. IEEE, 638645.Google ScholarGoogle Scholar
  35. [35] Kong Jeremy and Lomuscio Alessio. 2017. Symbolic model checking multi-agent systems against CTL*K specifications. In AAMAS. 114122. Retrieved from http://dl.acm.org/citation.cfm?id=3091147Google ScholarGoogle Scholar
  36. [36] Ladner Richard E. and Reif John H.. 1986. The logic of distributed protocols. In TARK. 207222.Google ScholarGoogle Scholar
  37. [37] Lomuscio Alessio, Penczek Wojciech, and Woźna Bożena. 2007. Bounded model checking for knowledge and real time. Artif. Intell. 171, 16-17 (2007), 10111038.Google ScholarGoogle ScholarDigital LibraryDigital Library
  38. [38] Lomuscio Alessio, Qu Hongyang, and Raimondi Franco. 2017. MCMAS: An open-source model checker for the verification of multi-agent systems. Int. J. Softw. Tools Technol. Transf. 19, 1 (2017), 930.Google ScholarGoogle ScholarDigital LibraryDigital Library
  39. [39] Lomuscio Alessio and Raimondi Franco. 2006. The complexity of model checking concurrent programs against CTLK specifications. In AAMAS. 548550. DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  40. [40] Maubert Bastien. 2014. Logical Foundations of Games with Imperfect Information: Uniform Strategies. (Fondations logiques des jeux à information imparfaite : Stratégies uniformes). Ph. D. Dissertation. University of Rennes 1, France. Retrieved from https://tel.archives-ouvertes.fr/tel-00980490Google ScholarGoogle Scholar
  41. [41] Maubert Bastien and Murano Aniello. 2018. Reasoning about knowledge and strategies under hierarchical information. In KR.Google ScholarGoogle Scholar
  42. [42] Maubert Bastien, Murano Aniello, Pinchinat Sophie, Schwarzentruber François, and Stranieri Silvia. 2020. Dynamic epistemic logic games with epistemic temporal goals. In ECAI(Frontiers in Artificial Intelligence and Applications, Vol. 325). IOS Press, 155162. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  43. [43] Maubert Bastien, Pinchinat Sophie, Schwarzentruber François, and Stranieri Silvia. 2020. Concurrent games in dynamic epistemic logic. In IJCAI, Bessiere Christian (Ed.). ijcai.org, 18771883. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  44. [44] Mȩski Artur, Penczek Wojciech, Szreter Maciej, Woźna-Szcześniak Bożena, and Zbrzezny Andrzej. 2014. BDD-versus SAT-based bounded model checking for the existential fragment of linear temporal logic with knowledge: Algorithms and their performance. Auton. Agents Multi-agent Syst. 28, 4 (2014), 558604.Google ScholarGoogle ScholarDigital LibraryDigital Library
  45. [45] Meyden Ron van der and Vardi Moshe Y.. 1998. Synthesis from knowledge-based specifications. In CONCUR. Springer, 3449.Google ScholarGoogle Scholar
  46. [46] Neiger Gil and Bazzi Rida. 1992. Using knowledge to optimally achieve coordination in distributed systems. In TARK. 4359.Google ScholarGoogle Scholar
  [47] Rohit Parikh and Ramaswamy Ramanujam. 1985. Distributed processes and the logic of knowledge. In Logic of Programs. Springer, 256–268.
  [48] Wojciech Penczek and Alessio Lomuscio. 2003. Verifying epistemic properties of multi-agent systems via bounded model checking. Fundam. Inform. 55, 2 (2003), 167–185.
  [49] Amir Pnueli. 1977. The temporal logic of programs. In FOCS. IEEE, 46–57.
  [50] Bernd Puchala. 2010. Asynchronous omega-regular games with partial information. In MFCS. 592–603.
  [51] Franco Raimondi and Alessio Lomuscio. 2007. Automatic verification of multi-agent systems by model checking via ordered binary decision diagrams. J. Appl. Logic 5, 2 (2007), 235–251.
  [52] John H. Reif. 1984. The complexity of two-player games of incomplete information. J. Comput. Syst. Sci. 29, 2 (1984), 274–301.
  [53] Nikolay V. Shilov and Natalya Olegovna Garanina. 2002. Model checking knowledge and fixpoints. In FICS. 25–39.
  [54] Nikolay V. Shilov, Natalya Olegovna Garanina, and K.-M. Choe. 2006. Update and abstraction in model checking of knowledge and branching time. Fundam. Inform. 72, 1-3 (2006), 347–361.
  [55] A. Prasad Sistla and Edmund M. Clarke. 1985. The complexity of propositional linear temporal logics. J. ACM 32, 3 (1985), 733–749.
  [56] W. van der Hoek and M. Wooldridge. 2003. Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Studia Logica 75, 1 (2003), 125–157.
  [57] Ron van der Meyden. 1998. Common knowledge and update in finite environments. Inform. Comput. 140, 2 (1998), 115–157.
  [58] Ron van der Meyden. 2020. Personal communication.
  [59] Ron van der Meyden and Nikolay V. Shilov. 1999. Model checking knowledge and time in systems with perfect recall (extended abstract). In FSTTCS. 432–445.
  [60] Ron van der Meyden and Kaile Su. 2004. Symbolic model checking the knowledge of the dining cryptographers. In CSFW. 280–291.
  [61] Ron van der Meyden and Moshe Y. Vardi. 1998. Synthesis from knowledge-based specifications. In CONCUR. Springer, 34–49.
  [62] Ron van der Meyden and Thomas Wilke. 2005. Synthesis of distributed systems from knowledge-based specifications. In CONCUR, Vol. 5. Springer, 562–576.
  [63] Moshe Y. Vardi and Pierre Wolper. 1994. Reasoning about infinite computations. Inform. Comput. 115, 1 (1994), 1–37.

Published in

ACM Transactions on Computational Logic, Volume 25, Issue 1 (January 2024), 312 pages. ISSN: 1529-3785; EISSN: 1557-945X; Issue DOI: 10.1145/3613508. Editor: Anuj Dawar. Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History

Received: 24 June 2021; Revised: 3 November 2023; Accepted: 27 November 2023; Online AM: 13 December 2023; Published: 16 January 2024.
