
1 Introduction

Resource consciousness is routinely cited as a motivation for considering substructural logics (see, e.g., [10]). But usually the reference to resources is kept informal, as in Girard’s well-known example of being able to buy a pack of Camels and/or a pack of Marlboros [5] with a single dollar, illustrating linear implication as well as the ambiguity of conjunction between the “multiplicative” and the “additive” reading. The invitation to distinguish, e.g., between a “causal”, action-oriented interpretation of implication and a more traditional understanding of implication as a timeless, abstract relation between propositions is certainly inspiring and motivating. However, the specific shape and properties of proof systems for the usual substructural logics owe more to a deep analysis of Gentzen’s sequent system than to action-oriented models of handling scarce resources of a specific kind.

Various semantics, in particular so-called game semantics for (fragments of) linear logics [1, 3] offer additional leverage points for a logical analysis of resource consciousness. But these semantics hardly support a straightforward reading of sequent derivations as action plans devised by resource conscious agents. Moreover, the inherent level of abstraction often does not match the appeal of (e.g.) Girard’s very concrete and simple picture of action-oriented inference.

We introduce a two-player game based on the idea that a proof is an action plan, i.e. a strategy for one of the players (the “Client”) to reduce particular structured information to information provided by the other player (the “Server”). As we will show, the interpretation of game states as single-conclusion sequents leads to variations of the basic game that match (affine) intuitionistic linear logic, but also other substructural logics. To emphasize the indicated shift of perspective, relative to traditional interpretations of formulas as sentences, propositions, or types, we introduce the notion of an information package, which emphasizes the interpretation of formulas as (in general) compound information, built up from atomic pieces of information using constructors that indicate possible ways of accessing the information.

Obviously our Client-Server games constitute a variant of game semantics; therefore a few words on the relation to other forms of game semantics are appropriate. Already in the late 1950s Lorenzen [9] proposed to justify intuitionistic logic in terms of a dialogue game, where a proponent defends a statement against systematic attacks by an opponent. Logical validity is identified with the existence of a winning strategy for the proponent. This setup has later been generalized to other logics; see, e.g., [7, 11]. While there are some obvious similarities between Lorenzen-style dialogue games and our Client-Server games, there are differences at the structural level. In particular, Lorenzen and his followers argue that the two players should have ‘equal rights’: both the specific rules for the logical connectives and the so-called frame rules, which regulate the overall progression of a dialogue, should be as symmetric as possible. In contrast, we deliberately break this symmetry and view the Client as the active ‘scheduler’ of the interaction with a largely passive or at least disinterested Server. Similar remarks hold for game semantics developed for (fragments of) linear logic in the wake of [1, 3, 8]. The idea there is to view propositions as games and connectives as operators on games. Again, the symmetry between the two players is important, as witnessed by the prominence of the copy-cat strategy, which has no counterpart in our Client-Server games. Finally, Japaridze’s Computability Logic [6] deserves to be mentioned, where formulas are interpreted as computational problems. The underlying model of interactive computation is a game between a machine and the environment. While somewhat related in spirit to our (much simpler and more specific) game model, the corresponding logics and inference mechanisms are again quite different.
Probably the most important feature of our approach is that we aim at a direct interpretation of sequent rules as rules for systematically reducing information packages to their components.

The paper is structured as follows: in Sect. 2, we introduce our client-server game in its basic form. In Sect. 3, we show that this game captures provability in intuitionistic logic. Section 4 describes a resource-aware version of the game, which is shown to capture affine logic, and, with a small modification, intuitionistic linear logic. In Sect. 5, we make some remarks on the interpretation of (sub)exponentials. The final Sect. 6 discusses a variant of the game where information packages are arranged in a stack.

2 A Client-Server Game for Intuitionistic Logic

In our \(\mathbf{C /\mathbf S }(\text {I})\)-game, a client \(\mathbf C \) maintains that the information packaged as G can be obtained from the information represented by the packages \(F_1,\ldots ,F_n,\) provided by a server \(\mathbf S \), via stepwise reduction of complex information packages (henceforth ips for short; singular ip) into simpler ones. At any state of the game, the bunch of information provided by \(\mathbf S \) is a (possibly empty) multiset of ips. The ip G which \(\mathbf C \) currently claims to be obtainable from that information is called C’s current ip. The corresponding state is denoted by

$$ F_1,\ldots ,F_n\vartriangleright G. $$

The game proceeds in rounds that are always initiated by \(\mathbf C \) and, in general, solicit some action from \(\mathbf S \). We look at the game from the client’s point of view.Footnote 1 There are two different types of requests that \(\mathbf C \) may submit to \(\mathbf S \): (1) \(\text {U}\textsc {npack}\) an ip provided by the server, and (2) \(\text {C}\textsc {heck}\) my (i.e. the client’s) current ip. We call the ip chosen by \(\mathbf C \) for either the \(\text {U}\textsc {npack}\)- or \(\text {C}\textsc {heck}\)-request the active ip. Thus in a \(\text {C}\textsc {heck}\)-request the active ip is always C’s current ip. Both \(\text {U}\textsc {npack}\)- and \(\text {C}\textsc {heck}\)-requests depend on the structure of the active ip. For now, we will consider the following types of ips:

  • atomic ips, which admit no further reduction

  • among those, a special ip \(\bot \), denoting an elementary inconsistency

  • complex ips which are built from simpler ips by means of the constructors \(\wedge \), \(\vee \), and \(\rightarrow \) (called any of, some of and given respectively).

We use lowercase letters a, b, c for atomic ips and uppercase letters F, G, H, K for ips which may be either complex or atomic. Multisets of ips are denoted by \(\varGamma \) or \(\varDelta \). The rules for reducing complex ips are given in Table 1. One may easily introduce other constructors for complex ips into the game by specifying their \(\text {U}\textsc {npack}\)- and \(\text {C}\textsc {heck}\)-rules, and we will see some examples of that later.

Table 1. Atoms, constructors and rules for \(\mathbf{C /\mathbf S }(\text {I})\)

At the beginning of each round of the game \(\mathbf C \) is free to choose whether she wants to continue with a request of type \(\text {U}\textsc {npack}\) (if possible) or of type \(\text {C}\textsc {heck}\); moreover, in the first case \(\mathbf C \) can freely choose any occurrence of a non-atomic ip or an occurrence of \(\bot \) in the bunch of information provided by \(\mathbf S \). Formally, each initial state \(F_1,\ldots ,F_n\vartriangleright G\) induces an extensive two-player win/lose (zero-sum) game of perfect information in the usual game-theoretic sense.

The corresponding game tree is finitely branching, but may be infinite since \(\mathbf C \) may request to unpack the same ip repeatedly. Intuitively, a strategy for \(\mathbf C \) is a function telling \(\mathbf C \) how to move in (some initial part of) the game when it is her turn. We require strategies to be finite objects. A strategy \(\tau \) for \(\mathbf C \) can therefore be identified with a finite subtree of the game tree satisfying

  1.

    the root of \(\tau \) is the initial state of the \(\mathbf{C /\mathbf S }(\text {I})\)-game in question

  2.

    at each state S, if the strategy \(\tau \) tells \(\mathbf C \) to continue with a round of type (\(\text {U}\textsc {npack}\) \(F_1\vee F_2\)), (\(\text {U}\textsc {npack}\) \(F_1\rightarrow F_2\)) or (\(\text {C}\textsc {heck}\) \(F_1\wedge F_2\)), then \(\tau \) branches at S into two successor states according to the possible choices available to \(\mathbf S \) as specified by the rules. On the other hand, no branching occurs at states where \(\tau \) tells \(\mathbf C \) to continue according to any other rule, since those rules do not involve a choice of \(\mathbf S \).

A strategy \(\tau \) is called a winning strategy if, additionally, all leaves are winning states for \(\mathbf C \) according to either rule (\(\text {C}\textsc {heck}\) a) or (\(\text {U}\textsc {npack}\) \(\bot \)).
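To make the game concrete, here is a small prototype that searches for a winning \(\mathbf C \)-strategy up to a given depth. It encodes the \(\text {U}\textsc {npack}\)/\(\text {C}\textsc {heck}\) behaviour described above (and made precise by the sequent translation in Sect. 3). The tuple encoding of ips and the name `client_wins` are our own, and the depth bound merely makes the search terminate: a `False` answer at a given depth does not by itself prove that no strategy exists.

```python
from functools import lru_cache

# ips as nested tuples: ('atom', name), ('bot',), and ('and'|'or'|'imp', F, G)
BOT = ('bot',)

def atom(name):
    return ('atom', name)

@lru_cache(maxsize=None)
def client_wins(gamma, goal, depth):
    """True if C has a winning strategy in  gamma |> goal  within `depth` rounds.

    gamma is a frozenset of ips: in C/S(I) unpacked ips stay available
    (contraction is built in), so multiplicities are irrelevant."""
    if depth < 0:
        return False
    if BOT in gamma:                            # winning state: (Unpack bot)
        return True
    if goal[0] == 'atom' and goal in gamma:     # winning state: (Check a)
        return True
    # Check-requests on C's current ip
    tag = goal[0]
    if tag == 'and':    # 'any of': S chooses the conjunct C must produce
        if client_wins(gamma, goal[1], depth - 1) and client_wins(gamma, goal[2], depth - 1):
            return True
    elif tag == 'or':   # 'some of': C chooses the disjunct
        if client_wins(gamma, goal[1], depth - 1) or client_wins(gamma, goal[2], depth - 1):
            return True
    elif tag == 'imp':  # 'given': the antecedent joins S's bunch
        if client_wins(gamma | {goal[1]}, goal[2], depth - 1):
            return True
    # Unpack-requests on ips provided by S (the active ip remains available)
    for f in gamma:
        if f[0] == 'and':    # C picks the conjunct to be added
            if (client_wins(gamma | {f[1]}, goal, depth - 1)
                    or client_wins(gamma | {f[2]}, goal, depth - 1)):
                return True
        elif f[0] == 'or':   # S picks the disjunct: C must win both branches
            if (client_wins(gamma | {f[1]}, goal, depth - 1)
                    and client_wins(gamma | {f[2]}, goal, depth - 1)):
                return True
        elif f[0] == 'imp':  # S either hands over f[2] or makes f[1] C's current ip
            if (client_wins(gamma | {f[2]}, goal, depth - 1)
                    and client_wins(gamma, f[1], depth - 1)):
                return True
    return False

a, b, c = atom('a'), atom('b'), atom('c')
# F or G, F -> H, G -> H  |>  H  (the example discussed in Sect. 3):
print(client_wins(frozenset({('or', a, b), ('imp', a, c), ('imp', b, c)}), c, 8))  # True
# Peirce's law is not intuitionistically valid:
print(client_wins(frozenset({('imp', ('imp', a, b), a)}), a, 8))                   # False
```

Note that the branching structure mirrors the strategy trees above: \(\mathbf S \)'s choices become conjunctions of recursive calls, \(\mathbf C \)'s choices become disjunctions.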

The game rules are local: the validity of a move of \(\mathbf C \) only depends on the presence of a certain ip in the current game state, but not on the complete bunch of provided information. Furthermore, S’s moves are restricted to ips previously chosen by \(\mathbf C \), and ips different from the active one are never touched at all in a move. It follows that we can regard a strategy \(\tau \) for \(\mathbf C \) in a game state \(\varGamma \vartriangleright F\) also as a strategy in \(\varDelta ,\varGamma \vartriangleright F\) for any multiset of ips \(\varDelta \). Indeed, viewed as a subtree of the full game tree for \(\varGamma \vartriangleright F\), \(\tau \) is isomorphic to a subtree \(\tau ^\varDelta \) of the full game tree for \(\varDelta ,\varGamma \vartriangleright F\) obtained by adding the multiset \(\varDelta \) to all the nodes in \(\tau \). By abuse of notation, we will not distinguish between \(\tau \) and \(\tau ^\varDelta \).

The following proposition sums up these observations and some easy consequences for further reference:

Proposition 1

Let \(\varGamma \vartriangleright F\) be a game state and \(\varDelta \) a multiset of ips.

  1.

    If \(\tau \) is a strategy for \(\mathbf C \) in \(\varGamma \vartriangleright F\), then \(\tau \) is also a strategy for \(\mathbf C \) in \(\varDelta ,\varGamma \vartriangleright F\).

  2.

    Furthermore, if a sequence of moves in the game \(\varGamma \vartriangleright F\) according to \(\tau \) leads to a state \(\varGamma '\vartriangleright F'\), then the same sequence of moves leads to the state \(\varDelta ,\varGamma '\vartriangleright F'\) in the game \(\varDelta ,\varGamma \vartriangleright F\).

  3.

    If \(\tau \) is winning strategy for \(\mathbf C \) in \(\varGamma \vartriangleright F\), then \(\tau \) is also a winning strategy for \(\mathbf C \) in \(\varDelta ,\varGamma \vartriangleright F\).

Proof

(1) and (2) are immediate from the discussion preceding the proposition. For (3), let \(\tau \) be a winning strategy for \(\mathbf C \) in \(\varGamma \vartriangleright F\). Then by (2), moving according to \(\tau \) in \(\varDelta ,\varGamma \vartriangleright F\) leads to states of the form \(\varDelta ,\varGamma '\vartriangleright F'\) where \(\varGamma '\vartriangleright F'\) is a winning state. But if \(\varGamma '\vartriangleright F'\) is a winning state for \(\mathbf C \), then so is \(\varDelta ,\varGamma '\vartriangleright F'\), since the winning conditions for \(\mathbf C \) are local. Hence \(\tau \) is also a winning strategy for \(\mathbf C \) in \(\varDelta ,\varGamma \vartriangleright F\).    \(\square \)

3 The Adequateness of \(\mathbf{C /\mathbf S }(\text {I})\) for Intuitionistic Logic

Let us now identify atomic ips with propositional variables and complex ips with their corresponding propositional formulas. It is well-known that we may read winning strategies for \(\mathbf C \) as proofs in a sequent calculus, where the turnstile \(\Rightarrow \) stands for \(\vartriangleright \) and the initial sequents correspond to winning states. In our case, the initial sequents are thus

$$ \varGamma ,a\Rightarrow a \qquad \qquad \varGamma ,\bot \Rightarrow F $$

corresponding to the states \(\varGamma ,a\vartriangleright a\) (where \(\mathbf C \) wins by sending a (\(\text {C}\textsc {heck}\) a)-request) and \(\varGamma ,\bot \vartriangleright F\) (where \(\mathbf C \) wins by sending an (\(\text {U}\textsc {npack}\) \(\bot \))-request). The \(\text {U}\textsc {npack}\)-rule for \(\vee \) translates to the sequent rule

$$ \frac{\varGamma ,F\vee G,F\Rightarrow H\qquad \varGamma ,F\vee G,G\Rightarrow H}{\varGamma ,F\vee G\Rightarrow H} $$

where the two premises correspond to the two possible choices of \(\mathbf S \). The \(\text {C}\textsc {heck}\)-rule for \(\vee \) translates to the pair of rules

$$ \frac{\varGamma \Rightarrow F}{\varGamma \Rightarrow F\vee G} \qquad \qquad \frac{\varGamma \Rightarrow G}{\varGamma \Rightarrow F\vee G} $$

corresponding to the two possible choices of \(\mathbf C \). Similarly, one writes down the sequent rules for the remaining connectives \(\wedge ,\rightarrow \). Using this translation, the rules and initial sequents exactly match the sequent calculus \(\mathbf {LIk}\) for intuitionistic logic (cf. [12]). We obtain:

Theorem 2

The following are equivalent:

  1.

    \(\mathbf C \) has a winning strategy in the \(\mathbf{C /\mathbf S }(\text {I})\)-game \((\varGamma \vartriangleright H)\)

  2.

    \((\mathbf {LIk} \vdash \varGamma \Rightarrow H)\)

  3.

    \((\bigwedge \varGamma \Rightarrow H)\) is intuitionistically valid.Footnote 2

Proof

The equivalence of (2) and (3) is the soundness and completeness theorem for \(\mathbf {LIk}\). For the equivalence of (1) and (2), recall that we can view a winning strategy in a \(\mathbf{C /\mathbf S }(\text {I})\)-game \(\varGamma \vartriangleright F\) as a subtree of the full game tree, where a branching occurs iff \(\mathbf S \) chooses the next move. Using the translation given above, such a subtree can be read as a proof in \(\mathbf {LIk}\) of the sequent \(\varGamma \Rightarrow F\), and conversely, every \(\mathbf {LIk}\)-proof with end-sequent \(\varGamma \Rightarrow F\) can be read as a winning strategy in the \(\mathbf{C /\mathbf S }(\text {I})\)-game \(\varGamma \vartriangleright F\).    \(\square \)

Example: Consider the \(\mathbf {LIk}\)-proof

figure d

where we have labelled the inference steps with the principal formula of the applied \(\mathbf {LIk}\)-rule. The corresponding winning strategy for the game state \(F\vee G,F\rightarrow H,G\rightarrow H\vartriangleright H\) can be described as follows: First, \(\mathbf C \) sends an (\(\text {U}\textsc {npack}\) \(F\vee G\))-request, forcing \(\mathbf S \) to add either F or G to the bunch of provided information. Then \(\mathbf C \) sends either an (\(\text {U}\textsc {npack}\) \(F\rightarrow H\)) or an (\(\text {U}\textsc {npack}\) \(G\rightarrow H\))-request, depending on which ip out of F, G has been chosen by \(\mathbf S \) in the previous move. \(\mathbf S \) can now either add H to the bunch of provided information, in which case \(\mathbf C \) wins with a subsequent \(\text {C}\textsc {heck}\)-request, since H is her current ip; or \(\mathbf S \) can replace C’s current ip by F or G respectively, but this is exactly the ip that \(\mathbf S \) has added to the bunch of provided information in a previous move. Hence, \(\mathbf C \) also wins in this situation by sending a \(\text {C}\textsc {heck}\)-request.

\(\mathbf {LIk}\) arises from the traditional sequent calculus \(\mathbf {LI}\) for intuitionistic logic by eliminating contraction through building it into the logical rules, and by eliminating weakening through generalizing the initial sequents (axioms) accordingly.Footnote 3

We get a game directly matching the rules for \(\mathbf {LI}\) by making the following modifications to the \(\mathbf{C /\mathbf S }(\text {I})\)-game: First, we change the \(\text {U}\textsc {npack}\)-rules such that the active ip is removed from the bunch of provided information after use; second, we add two types of request called Dismiss and Copy, which allow \(\mathbf C \) either to remove ips from, or to duplicate ips in, the bunch of provided information; and finally we allow only

$$ a\vartriangleright a \text { and } \bot \vartriangleright F $$

as winning states for \(\mathbf C \). Let us call the modified game \(\mathbf{C /\mathbf S }(\text {I})^*\).

Via Theorem 2, results from the structural proof theory of \(\mathbf {LIk}\) or \(\mathbf {LI}\) turn into statements about winning strategies in \(\mathbf{C /\mathbf S }(\text {I})\) or \(\mathbf{C /\mathbf S }(\text {I})^*\). As a simple example (which works for either variant of the calculus/game), the soundness of the rule

$$ \frac{\varGamma \Rightarrow F\qquad \varGamma \Rightarrow G}{\varGamma \Rightarrow F\wedge G}\ (\wedge \text {R}) $$

says that if \(\mathbf C \) has a winning strategy \(\tau \) for \(\varGamma \vartriangleright F\) and \(\sigma \) for \(\varGamma \vartriangleright G\), then she has a winning strategy in \(\varGamma \vartriangleright F\wedge G\). The winning strategy, of course, is this: In her first move, \(\mathbf C \) sends a (\(\text {C}\textsc {heck}\) \(F\wedge G\)) request. If \(\mathbf S \) now chooses F, the game is in a state \(\varGamma \vartriangleright F\) where she can move according to \(\tau \) to win; otherwise, if \(\mathbf S \) picks G, she moves according to \(\sigma \).

More interestingly, the invertibility of the (\(\wedge \)R) rule – the fact that the validity of its conclusion implies the validity of its premises – says that if \(\mathbf C \) has a winning strategy in \(\varGamma \vartriangleright F\wedge G\), then she has such a winning strategy where her first move is (\(\text {C}\textsc {heck}\) \(F\wedge G\)).

The correspondence of Theorem 2 goes both ways; for example, Proposition 1 is nothing but a game-theoretic proof of the admissibility of the weakening rule in \(\mathbf {LIk}\). As yet another example, the cut-elimination theorem for the calculus \(\mathbf {LIk}\) tells us that if \(\mathbf C \) has winning strategies in \(\varGamma \vartriangleright G\) and \(G,\varDelta \vartriangleright H\) then she also has a winning strategy in \(\varGamma ,\varDelta \vartriangleright H\). Below, we give a proof of cut-admissibility for the \(\rightarrow \)-free fragment of \(\mathbf {LI}\) by using the game semantics of \(\mathbf{C /\mathbf S }(\text {I})^*\). In this fragment, we can give a particularly simple and intuitive description of the winning strategy obtained from combining the winning strategies for \(\varGamma \vartriangleright G\) and \(G,\varDelta \vartriangleright H\).
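In sequent notation, the cut rule in question reads:

```latex
$$ \frac{\varGamma \Rightarrow G\qquad G,\varDelta \Rightarrow H}{\varGamma ,\varDelta \Rightarrow H}\ (\text {cut}) $$
```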

Proposition 3

Assume that \(\rightarrow \) does not appear in \(\varGamma ,\varDelta ,G,H\). If \(\mathbf C \) has winning strategies in the \(\mathbf{C /\mathbf S }(\text {I})^*\)-games \(\varGamma \vartriangleright G\) and \(G,\varDelta \vartriangleright H\) then she also has a winning strategy in \(\varGamma ,\varDelta \vartriangleright H\).

Proof

Let \(\tau \) be a winning strategy for \(\varGamma \vartriangleright G\) and \(\sigma \) a winning strategy for \(G,\varDelta \vartriangleright H\). We prove by induction on the structure of G that \(\mathbf C \) wins in \(\varGamma ,\varDelta \vartriangleright H\).

  1.

    \(G\equiv a\) for atomic a: Since the game ends when atomic ips are checked, all but the last move in \(\tau \) must be \(\text {U}\textsc {npack}\)-requests. Since \(\tau \) is winning, a play on \(\varGamma \vartriangleright a\) according to \(\tau \) always ends in a state of the form \(\bot \vartriangleright a\) or \(a\vartriangleright a\). \(\mathbf C \) can thus move according to \(\tau \) in the game \(\varGamma ,\varDelta \vartriangleright H\) to arrive at a state \(\bot ,\varDelta \vartriangleright H\) or \(a,\varDelta \vartriangleright H\). In the first case she wins by sending Dismiss-requests repeatedly until she is in the winning state \(\bot \vartriangleright H\). In the second case, she can move according to \(\sigma \) to win.

  2.

    \(G\equiv F_1\wedge F_2\): \(\mathbf C \) starts moving according to \(\tau \) in the game \(\varGamma ,\varDelta \vartriangleright H\) until a (\(\text {C}\textsc {heck}\) \(F_1\wedge F_2\))-request appears (if that does not happen, the game must eventually arrive at a state \(\varGamma ',\bot ,\varDelta \vartriangleright H\) where \(\mathbf C \) can easily win). The game is now in a state \(\varGamma ',\varDelta \vartriangleright H\). Note that \(\mathbf C \) must have winning strategies in \(\varGamma '\vartriangleright F_1\) and \(\varGamma '\vartriangleright F_2\): by moving according to \(\tau \) in the game \(\varGamma \vartriangleright F_1\wedge F_2\) she ends up in the state \(\varGamma '\vartriangleright F_1\wedge F_2\), and since the next step in \(\tau \) is (\(\text {C}\textsc {heck}\) \(F_1\wedge F_2\)), \(\mathbf C \) must be prepared for either choice of \(F_1,F_2\) by \(\mathbf S \).

    Back to the game state \(\varGamma ',\varDelta \vartriangleright H\). Here, \(\mathbf C \) now switches to the strategy \(\sigma \) and moves until an (\(\text {U}\textsc {npack}\) \(F_1\wedge F_2\))-request appears (again, if this does not happen, the game must arrive at a state where \(\mathbf C \) obviously wins). The game is then in a state \(\varGamma ',\varDelta '\vartriangleright H'\). Without loss of generality, let us assume that \(\sigma \) tells \(\mathbf C \) to pick \(F_1\) in the rule for \(\wedge \). Then \(\mathbf C \) has a winning strategy for \(\varDelta ',F_1\vartriangleright H'\), because this state arises by starting in \(F_1\wedge F_2,\varDelta \vartriangleright H\) and moving according to the winning strategy \(\sigma \).

    Applying the induction hypothesis to the states \(\varGamma '\vartriangleright F_1\) and \(\varDelta ',F_1\vartriangleright H'\) (and their respective winning strategies), we thus know that \(\mathbf C \) has a winning strategy in \(\varGamma ',\varDelta '\vartriangleright H'\), which is exactly the current game state.

  3.

    \(G\equiv F_1\vee F_2\): similar to the previous case.   \(\square \)

Remark 4

Note that the number of moves in the winning strategy constructed in the above proof is polynomially bounded in the number of moves in the winning strategies for \(\varGamma \vartriangleright G\) and \(G,\varDelta \vartriangleright H\). This cannot hold in general once we include \(\rightarrow \), since it is known that cut elimination in full intuitionistic logic can increase proof size exponentially.

4 Resource Consciousness

Probably the most important step in turning the \(\mathbf{C /\mathbf S }(\text {I})\)-game into a ‘resource conscious’ one concerns rules that entail a choice by \(\mathbf S \) and thus require \(\mathbf C \) to be prepared to act in more than just one possible successor state to the current state. The \(\mathbf{C /\mathbf S }(\text {I})\)-rules allow \(\mathbf C \) to use all the information provided by \(\mathbf S \) in each of the possible successor states. If, instead, we require \(\mathbf C \) to declare which ips she intends to use for which of those options – taking care that she uses each occurrence of an ip exactly once – then we arrive at rules that match multiplicative instead of additive connectives.

Following the tradition of linear logic, we do not discard the previously defined rules, but rather extend the game by new ip constructors and their corresponding resource conscious rules. We also introduce a unary ‘safety’ constructor ! (called exponential in the literature on linear logic). Ips prefixed by ! are meant to be exempt from resource consciousness and thus behave like ips in the \(\mathbf{C /\mathbf S }(\text {I})\)-game. Ips not prefixed by ! are called unsecured. Table 2 lists all new constructors and their corresponding rules.Footnote 4 Let us denote by C/S(IAL) the following modification of game \(\mathbf{C /\mathbf S }(\text {I})\):

Table 2. Resource conscious rules in C/S(IAL)
  1.

    Constructors and rules for \(\otimes ,\multimap \) and ! are added as in Table 2

  2.

    The \(\text {U}\textsc {npack}\)-rules for \(\wedge ,\vee \) and \(\rightarrow \) are changed so that the active ip is removed at the end of the request.
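To convey the flavor of the multiplicative splitting, here is a prototype restricted to the \(\otimes ,\multimap \) fragment of C/S(IAL) (no additives and no !). The tuple encoding, the helper `splits` and the depth bound are our own; the winning condition is affine, i.e. surplus ips may simply remain unused.

```python
from itertools import combinations

def splits(gamma):
    """All ways to split the multiset gamma (a tuple of ips) into two parts."""
    idx = range(len(gamma))
    for r in range(len(gamma) + 1):
        for left in combinations(idx, r):
            yield (tuple(gamma[i] for i in left),
                   tuple(gamma[i] for i in idx if i not in left))

def client_wins(gamma, goal, depth):
    """C's winning strategies in the tensor/lollipop fragment of C/S(IAL)."""
    if depth < 0:
        return False
    if ('bot',) in gamma:
        return True
    if goal[0] == 'atom' and goal in gamma:   # affine: extra ips are simply discarded
        return True
    if goal[0] == 'tensor':   # C declares which ips go to which component
        for d1, d2 in splits(gamma):
            if client_wins(d1, goal[1], depth - 1) and client_wins(d2, goal[2], depth - 1):
                return True
    elif goal[0] == 'lolli':
        if client_wins(gamma + (goal[1],), goal[2], depth - 1):
            return True
    for i, f in enumerate(gamma):
        rest = gamma[:i] + gamma[i + 1:]      # the active ip is consumed on Unpack
        if f[0] == 'tensor':
            if client_wins(rest + (f[1], f[2]), goal, depth - 1):
                return True
        elif f[0] == 'lolli':                 # C splits; S delivers f[2] or demands f[1]
            for d1, d2 in splits(rest):
                if (client_wins(d1, f[1], depth - 1)
                        and client_wins(d2 + (f[2],), goal, depth - 1)):
                    return True
    return False

a, b = ('atom', 'a'), ('atom', 'b')
print(client_wins((a, ('lolli', a, b)), b, 6))   # True:  a, a -o b  |>  b
print(client_wins((a,), ('tensor', a, a), 6))    # False: a single a cannot be used twice
```

The second query illustrates the resource consciousness: unlike in the \(\mathbf{C /\mathbf S }(\text {I})\)-sketch of Sect. 2, each occurrence of an ip may serve only one side of a tensor split.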

We claim that the logic captured by C/S(IAL) is intuitionistic affine logic \(\mathbf {IAL}\), i.e. intuitionistic linear logic with weakening [5]. A standard sequent calculus for \(\mathbf {IAL}\) is presented in Table 3. We need the following preliminary result analogous to Proposition 1:

Proposition 5

If \(\mathbf C \) has a winning strategy in the C/S(IAL)-game \(\varGamma \vartriangleright F\) and \(\varDelta \) is any multiset of ips, then \(\mathbf C \) also has a winning strategy in \(\varDelta ,\varGamma \vartriangleright F\).

Proof

By induction on the number of steps in a winning strategy for \(\varGamma \vartriangleright F\). We only consider the case that \(F\equiv ~!G\) and the first step in the winning strategy is to send a (\(\text {C}\textsc {heck}\)  !G)-request. Let us write the state as \(!\varGamma _1,\varGamma _2\vartriangleright !G\), where we assume that all ips in \(\varGamma _2\) are unsecured (\(!\varGamma \) denotes \(\{!F\mid F\in \varGamma \}\)). The request results in the state \(!\varGamma _1\vartriangleright G\), for which \(\mathbf C \) therefore has a winning strategy. It follows that \(\mathbf C \) wins in \(\varDelta ,!\varGamma _1,\varGamma _2\vartriangleright !G\): She starts by sending a (\(\text {C}\textsc {heck}\)  !G)-request, resulting in the state \(\varDelta _1,!\varGamma _1\vartriangleright G\), where \(\varDelta _1\) denotes the multiset of all safe ips in \(\varDelta \). Since \(\mathbf C \) has a winning strategy for \(!\varGamma _1\vartriangleright G\), the induction hypothesis implies that she also wins in \(\varDelta _1,!\varGamma _1\vartriangleright G\).    \(\square \)

Table 3. The sequent calculus \(\mathbf {IAL}\)

Theorem 6

The following are equivalent:

  1.

    \(\mathbf C \) has a winning strategy in the C/S(IAL)-game \(\varGamma \vartriangleright H\)

  2.

    \(\mathbf {IAL} \vdash \varGamma \Rightarrow H\)

Proof

(Sketch). Again, we use the correspondence between winning strategies and proofs described in Sect. 3. However, the game rules do not directly match the rules of \(\mathbf {IAL}\) in all cases, thus we have to provide some further arguments.

First, there is no game rule corresponding to weakening (W). This is not a problem, because weakening is admissible in the game theoretic version of the rules by Proposition 5.Footnote 5

Second, there is no game rule corresponding to (!C). Rather, the splitting in multiplicative rules is changed so that safe formulas never need to be split, making the duplication of safe formulas obsolete. The equivalence of the calculus thus obtained with \(\mathbf {IAL}\) is known in the literature (see for example the dyadic calculus of [2]).

Finally, the (\(\text {U}\textsc {npack}\)  !F)-rule in our game semantics forces us to immediately unpack the copy of F after it has been created. There is no such requirement in \(\mathbf {IAL}\): here we may create a copy of a safe formula by a combination of (!C) and (!dR), which might be used only later in a proof (if at all). It is however not hard to check that such a detour is never necessary. This can also be seen as a special case of Andreoli’s results on Focusing [2].    \(\square \)

Before closing this section, let us remark that we also obtain a game adequate for \(\mathbf {ILL}\) (full intuitionistic linear logic) by allowing only

$$ a\vartriangleright a \text { and } \bot \vartriangleright F $$

as winning states for \(\mathbf C \) and introducing atomic ips \(0,1,\top \) with their corresponding rules. This amounts to an interpretation of sequents as C/S-game states, where \(\mathbf C \) announces that she needs precisely the information provided by \(\mathbf S \) to obtain her current ip.

5 Interpreting Exponentials and Subexponentials

The \(\text {U}\textsc {npack}\)-rule for ! (together with the \(\text {C}\textsc {heck}\)-rule for \(\otimes \) and the \(\text {U}\textsc {npack}\)-rule for \(\multimap \)) shows that safe ips are exempt from resource consciousness: operations are performed on copies of the safe ip rather than on the ip itself. The \(\text {U}\textsc {npack}\)-rule for ! says that the safety predicate is hereditary: If F can be demonstrated from a bunch of safe ips, then F is also safe.

\(\mathbf C \) can send (\(\text {U}\textsc {npack}\) !F)-requests to the same ip !F as often as she wishes. Furthermore, if \(\mathbf C \) has a winning strategy for \(\varGamma \vartriangleright !F\) then she also has winning strategies for \(\varGamma \vartriangleright F^{\otimes n}\) for any n, where \(F^{\otimes n}\) denotes \(\underbrace{F\otimes \ldots \otimes F}_{n}\). This is most easily seen by first checking that \(\mathbf C \) has a winning strategy in \(!F\vartriangleright F^{\otimes n}\) and then using the fact that the cut rule is admissible in \(\mathbf {IAL}\).
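For instance, for \(n=2\) the claim \(!F\Rightarrow F\otimes F\) admits the following derivation sketch (we write \(F\Rightarrow F\) for the derivable identity sequent; the rule names (\(\otimes \)R), (!dR) and (!C) are our reading of Table 3 and Sect. 4):

```latex
\[
\dfrac{\dfrac{\dfrac{F\Rightarrow F}{!F\Rightarrow F}\;(!dR)\qquad
              \dfrac{F\Rightarrow F}{!F\Rightarrow F}\;(!dR)}
             {!F,!F\Rightarrow F\otimes F}\;(\otimes \mathrm{R})}
      {!F\Rightarrow F\otimes F}\;(!C)
\]
```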

The meaning of !F is often paraphrased as ‘arbitrarily many F’. But this intuition is not without pitfalls, as the following observation demonstrates.

Lemma 7

Assume \(a,b\ne \bot \). \(\mathbf C \) has a winning strategy in \(a,!(a\multimap a\otimes b)\vartriangleright b^{\otimes n}\) for any n, but she has no winning strategy in \(a,!(a\multimap a\otimes b)\vartriangleright !b\).

Formulated proof-theoretically, Lemma 7 entails that the infinitary rule

$$ \frac{\varGamma \Rightarrow F^{\otimes n}\quad \text {for all } n}{\varGamma \Rightarrow \,!F} $$

is not admissible in \(\mathbf {IAL}\). The interpretation of ! is improved by thinking of !F not as arbitrarily many F’s, but as a single container holding (potentially) arbitrarily many F’s. The problem is that this does not tell us much about what we should require from a proof of !F.

Instead, we invite the reader to think of the rules for the safety predicate as (partially) specifying a concept of safety, where being exempt from consumption through unpacking (i.e., resource consciousness) is the essential minimal requirement. This also aligns with the observation that when adding another unary constructor \(!'\) with the same rules as ! to \(\mathbf {IAL}\), one cannotFootnote 6 prove the equivalence of ! and \(!'\). Variants of the standard exponential introduced in this way are usually called subexponentials. In the ‘arbitrarily many’-interpretation of the exponential, the existence of subexponentials seems mysterious – how can there be two different concepts of ‘arbitrarily many’?

In the safety interpretation, we may think of different subexponentials as corresponding to different levels of safety. In fact, we can add constructors \(!_1,!_2,\ldots ,!_n\), where greater indices denote greater safety. A natural generalization of the !-rule is then the following:

$$ \frac{!_{j_1}F_1,\ldots ,!_{j_k}F_k\Rightarrow G}{!_{j_1}F_1,\ldots ,!_{j_k}F_k,\varDelta \Rightarrow \,!_iG}\quad \text {where } j_1,\ldots ,j_k\ge i $$

One may go further and arrange the safety levels in a partial order rather than a linear order, with the obvious modification of the (\(\text {C}\textsc {heck}\))-rule. At some point, one loses cut-admissibility of the logic – we refer the reader to [4, Chap. 5].

6 The Server as Stack

In the games considered so far, C’s choice of the active ip at the beginning of each round was completely free. We now consider a variant of the game where the bunch of provided information is a list rather than a multiset, and \(\mathbf C \) can only access the last element in the list. In other words, we think of the server as a stack. We include this new game in the discussion as an example of a variant which arises naturally in the context of Client/Server-interactions, but not in the proof-theoretic context.

The game rules are as given in Table 4. Note that in \(\text {U}\textsc {npack}\)-requests, the active ip is now always the topmost element of the stack.

Table 4. Constructors and rules for C/S(STACK)

Let us call the resulting game C/S(STACK). Again, we translate game states to sequents (which are now lists of ips) and game rules to sequent rules. We write stacks from left to right, so that the rightmost element of a list of ips corresponds to the topmost element of the stack. Let us call the resulting system \(\mathbf {LSTACK}\). The initial sequents are thus

figure j

Of the rules, we mention explicitly only those for \(\rightarrow \) and (;). They are

figure k

and

figure l

where \(\varGamma _1\) and \(\varGamma _2\) correspond to the lower and the upper part of the stack in the rule (\(\text {C}\textsc {heck}\) (F;G)), respectively.

Analogously to Theorems 2 and 6, we have

Theorem 8

The following are equivalent:

  1. \(\mathbf C \) has a winning strategy in the C/S(STACK)-game \(\varGamma \vartriangleright H\).

  2. \(\mathbf {LSTACK} \vdash \varGamma \Rightarrow H\).

The rules for the connective (;) resemble those for \(\otimes \) in linear logic, except that in the right rule the premises are split in an ordered way. (;) internalizes the linear order of the stack. It has the following properties, which are straightforward to check:
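For comparison, the right rule for \(\otimes \) in (intuitionistic) linear logic splits the context as a multiset:

$$ \frac{\varGamma _1\Rightarrow F\qquad \varGamma _2\Rightarrow G}{\varGamma _1,\varGamma _2\Rightarrow F\otimes G}\;(\otimes \text {R}) $$

where \(\varGamma _1,\varGamma _2\) denotes multiset union, so the split carries no order information; in the right rule for (;), the two parts are instead contiguous segments of the stack.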

Proposition 9

  1. (non-commutativity) \(\mathbf C \) has no winning strategy in \((F;G)\vartriangleright (G;F)\).

  2. (associativity 1) \(\mathbf C \) has a winning strategy in \((F;(G;H))\vartriangleright ((F;G);H)\).

  3. (associativity 2) \(\mathbf C \) has a winning strategy in \(((F;G);H)\vartriangleright (F;(G;H))\).

Proposition 10

  1. \(\mathbf C \) has a winning strategy in \(\varGamma ,F\vartriangleright F\).

  2. \(\mathbf C \) has a winning strategy in \(\varGamma ,F,F\rightarrow G\vartriangleright G\).

Proof

The proof of (1) proceeds by induction on F. If F is atomic, \(\varGamma ,F\vartriangleright F\) is already a winning state for \(\mathbf C \). If \(F\equiv G\rightarrow H\), the \(\mathbf {LSTACK}\)-derivation

figure m

demonstrates that \(\mathbf C \) can always move to a state \(\varGamma ,G,H\vartriangleright H\) or \(\varGamma ,G\vartriangleright G\), for both of which she has winning strategies by the induction hypothesis. If \(F\equiv (G;H)\), the \(\mathbf {LSTACK}\)-derivation

figure n

demonstrates that \(\mathbf C \) can always move to a state \(G\vartriangleright G\) or \(\varGamma ,H\vartriangleright H\), and again she has winning strategies for both states by the induction hypothesis. The other cases are similar.

For (2), \(\mathbf C \) starts the game \(\varGamma ,F,F\rightarrow G\vartriangleright G\) by sending an (\(\text {U}\textsc {npack}\) \(F\rightarrow G\))-request. Depending on the subsequent choice of \(\mathbf S \), the game is then either in the state \(\varGamma ,F,G\vartriangleright G\) or \(\varGamma ,F\vartriangleright F\). For both of these states, \(\mathbf C \) has a winning strategy by (1).    \(\square \)

Proposition 11

If \(\mathbf C \) has a winning strategy in \(\varGamma ,(F;G),\varDelta \Rightarrow H\), then she also has a winning strategy in \(\varGamma ,G,F,\varDelta \Rightarrow H\).

Proof

Let \(\tau \) be a winning strategy for \(\mathbf C \) in \(\varGamma ,(F;G),\varDelta \Rightarrow H\). \(\mathbf C \) can use essentially the same strategy \(\tau \) in \(\varGamma ,G,F,\varDelta \Rightarrow H\): if, during the game, the indicated occurrence of \(G,F\) is on top of the stack and \(\tau \) tells her to send an (\(\text {U}\textsc {npack}\) \((F;G)\))-request, \(\mathbf C \) simply skips this step.    \(\square \)

The converse to Proposition 11 fails: For example, \(\mathbf C \) has a winning strategy in

$$ K,F\rightarrow G,G\rightarrow H \vartriangleright F\rightarrow H $$

as the following \(\mathbf {LSTACK}\)-derivation shows:

figure o

In contrast, \(\mathbf C \) has no winning strategy in \( ((F\rightarrow G);K),G\rightarrow H \vartriangleright F\rightarrow H\). This is because (;) prevents \(\mathbf C \) from inserting the premise \(F\) below \(F\rightarrow G\) in the stack as the first step of the above winning strategy. One easily checks that no other strategy succeeds, assuming that \(F,G,H,K\) are pairwise distinct atoms.
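The (non-)provability claims in this section can be checked mechanically by exhaustive backward proof search, which terminates because every premise of every rule is strictly smaller than its conclusion. The following Python sketch implements such a search for a reconstruction of \(\mathbf {LSTACK}\): the tuple encoding of ips, the insertion position in the (\(\text {C}\textsc {heck}\) \(\rightarrow \))-rule, the push order of the (\(\text {U}\textsc {npack}\) (;))-rule, and the split direction of the (\(\text {C}\textsc {heck}\) (;))-rule are all assumptions inferred from the derivations in this section, not official definitions.

```python
from functools import lru_cache

# Hypothetical encoding (an assumption of this sketch): atoms are strings,
# "bot" plays the role of bottom, ("->", F, G) is F -> G, and (";", F, G)
# is (F;G).  A sequent Gamma => H is the pair (stack, H), where stack is a
# tuple whose LAST element is the topmost ip.
#
# Reconstructed rules (inferred from the proofs in this section):
#   initial:      Gamma, p => p  (p atomic)   and   Gamma, bot => H
#   (Check ->):   Gamma1, F, Gamma2 => G   gives   Gamma1, Gamma2 => F -> G
#   (Unpack ->):  Gamma => F  and  Gamma, G => H   give   Gamma, F -> G => H
#   (Check ;):    Gamma2 => F  and  Gamma1 => G   give   Gamma1, Gamma2 => (F;G)
#   (Unpack ;):   Gamma, G, F => H   gives   Gamma, (F;G) => H

@lru_cache(maxsize=None)
def provable(stack, goal):
    """Exhaustive backward proof search over the reconstructed rules."""
    top = stack[-1] if stack else None
    # initial sequents: atomic goal on top of the stack, or bottom on top
    if top == goal and isinstance(goal, str):
        return True
    if top == "bot":
        return True
    # Check rules, driven by the goal
    if isinstance(goal, tuple):
        op, f, g = goal
        if op == "->":      # the premise f may be inserted at any position
            if any(provable(stack[:i] + (f,) + stack[i:], g)
                   for i in range(len(stack) + 1)):
                return True
        if op == ";":       # ordered split: the upper part proves f
            if any(provable(stack[i:], f) and provable(stack[:i], g)
                   for i in range(len(stack) + 1)):
                return True
    # Unpack rules, applicable to the topmost ip only
    if isinstance(top, tuple):
        op, f, g = top
        rest = stack[:-1]
        if op == "->" and provable(rest, f) and provable(rest + (g,), goal):
            return True
        if op == ";" and provable(rest + (g, f), goal):
            return True
    return False

def imp(f, g): return ("->", f, g)   # F -> G
def pr(f, g):  return (";", f, g)    # (F;G)
```

Under these assumed rules the search confirms, e.g., that \(K,F\rightarrow G,G\rightarrow H\Rightarrow F\rightarrow H\) is derivable, while \(((F\rightarrow G);K),G\rightarrow H\Rightarrow F\rightarrow H\), the sequent \((F;G)\Rightarrow (G;F)\), and the sequent \(b,a,b\rightarrow c\Rightarrow c\) from Proposition 13 are not.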

The discussed properties allow one to wrap up whole game states in single information packages: For any game state \(S\equiv F_1,\ldots ,F_n\vartriangleright G\) let \( \textit{IP}(S):=((\ldots (F_n;F_{n-1});F_{n-2});\ldots );F_1)\rightarrow G. \)

Proposition 12

\(\mathbf C \) has a winning strategy in a game state S iff \(\mathbf C \) has a winning strategy in the state \(\vartriangleright \textit{IP}(S)\).

Proof

For the direction from left to right, \(\mathbf C \) starts the game for \(\vartriangleright \textit{IP}(S)\) by sending a (\(\text {C}\textsc {heck}\) \(\rightarrow \))-request, followed by \((n-1)\)-many \(\text {U}\textsc {npack}\) (;)-requests. The game is then in the state S, for which \(\mathbf C \) has a winning strategy by assumption. For the other direction, it is clear (for lack of other choices) that a winning strategy for \(\textit{IP}(S)\) must start with a (\(\text {C}\textsc {heck}\) \(\rightarrow \))-request, and hence \(\mathbf C \) has a winning strategy for the subsequent state \(((\ldots (F_n;F_{n-1});F_{n-2});\ldots );F_1)\vartriangleright G\). By applying Proposition 11 \((n-1)\) times, we see that \(\mathbf C \) has a winning strategy in \(F_1,\ldots ,F_n\vartriangleright G\).    \(\square \)

Formulated proof-theoretically, Proposition 12 says that \(\mathbf {LSTACK}\) is an internal calculus: There is a uniform way of mapping sequents S to formulas \(\textit{IP}(S)\) such that S is provable iff its formula interpretation \(\textit{IP}(S)\) is provable.
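The mapping \(S\mapsto \textit{IP}(S)\) is easy to compute. As a small standalone illustration (with formulas encoded as nested tuples, an assumption of this sketch):

```python
# Encoding assumption: atoms are strings, ("->", F, G) is F -> G and
# (";", F, G) is (F;G).  The stack is a non-empty tuple, topmost ip last.

def ip(stack, goal):
    """Fold a game state  F_1,...,F_n |> G  into the single formula
       ((...(F_n;F_{n-1});F_{n-2});...);F_1) -> G."""
    acc = stack[-1]                 # start from the topmost ip F_n
    for f in reversed(stack[:-1]):  # then pair with F_{n-1}, ..., F_1
        acc = (";", acc, f)
    return ("->", acc, goal)
```

For instance, \(\textit{IP}(F_1,F_2,F_3\vartriangleright G)=((F_3;F_2);F_1)\rightarrow G\).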

Finally, observe that combining winning strategies for different game states in C/S(STACK) would require merging stacks. Hence the following should not come as a surprise.

Proposition 13

The cut rule is not admissible in \(\mathbf {LSTACK}\).

Proof

Let \(a,b,c\) be pairwise distinct atoms with \(a\ne \bot \). The sequents \(a,b\rightarrow c\Rightarrow b\rightarrow c\) and \(b,b\rightarrow c\Rightarrow c\) are provable. Applying the cut rule (with cut formula \(b\rightarrow c\)) yields the sequent \(b,a,b\rightarrow c\Rightarrow c\), which is not provable:

figure p

   \(\square \)

7 Conclusion

We have introduced an interpretation of single-conclusion sequent calculi as a means of information extraction: formulas are seen as information packages, and a derivation of \(\varGamma \Rightarrow F\) corresponds to a winning strategy of a Client \(\mathbf C \) who seeks to reduce the information F to the information \(\varGamma \) provided by the Server \(\mathbf S \). In this manner we obtain an interpretation of a standard sequent calculus for intuitionistic logic that naturally extends to (affine) intuitionistic linear logic \(\mathbf {IAL} \). In particular, exponentials and subexponentials receive a robust interpretation in terms of safety from destruction through consumption. To demonstrate that our game semantics fits not only already known calculi, we also applied it to a new concept: sequents whose left hand side represents a stack, rather than a set, multiset, or list of information packages.

We view the presented ideas and results as just a starting point for a more thorough analysis of deduction in analytic calculi in terms of reducing structured information to atomic information and plan to address, e.g., the following questions in future research: Which further operators for packaging information should be considered? Which alternative forms of storing information on a server lead to sequent calculi? Can the approach be lifted to quantifiers? Does the new interpretation of rule-admissibility lead to further insights into the underlying logics? How can the Client/Server view assist in organizing efficient proof search?