1 Introduction

End-to-end security is the Holy Grail of information-flow security [38]. It guarantees the absence of information leaks between all endpoints of a system. Enforcing end-to-end security is challenging for two main reasons. One is that modern software is large and complex: software platforms execute third-party programs, which have access to user-sensitive data and can interact with each other, the user, and the operating system. The other is that even if a piece of software is secure, a leak may emerge when it is used as part of a larger system. This is because any security guarantee makes assumptions about the system environment, which the larger system can violate [26, 28]. For instance, FlowFox [9] (by design) has a timing leak [34], since it violates an assumption that its built-in enforcement mechanism relies on to eliminate timing leaks. To address these challenges, theories for secure composition have been studied extensively in event systems (e.g. [24, 26, 42, 44]), process calculi (e.g. [12, 16, 17, 31, 36, 37]), transition systems (e.g. [32, 35]), and thread pools (e.g. [2, 25]). These theories facilitate compositional reasoning: sub-components can be analyzed in isolation, and security properties of the entire system can be derived from security properties of its components.

This paper investigates compositional reasoning for eliminating timing leaks in interactive systems. Timing channels are a key concern in computer security; they can be used by an adversary to reliably obtain sensitive information [6, 11, 22, 30], and building systems free of timing channels is a nontrivial matter. Many timing leaks are caused by the environment violating a system assumption, e.g. when the cache affects the timing behavior of an application [11, 30]. Despite great interest in eliminating timing leaks [1, 5, 10, 14, 46, 47], little has been done towards secure composition that eliminates timing leaks [13].

To bridge this gap, we present a theory for secure composition of timed systems. We first define a general model of computation, with a notion of interface that simplifies compositional reasoning. For this model of computation, we formalize our security property, timing-sensitive noninterference. We develop a core of combinators for composing systems, designed to be expressive yet easy to reason about formally. With it, we implement more practical combinators, i.e. a language for building composite systems, which supports reasoning about process scheduling, message routing, and state. We establish compositionality results for the core of combinators, which then translate to compositionality results for the whole language of combinators. Finally, as a case study, we implement secure multi-execution (SME) [10] (an enforcement of timing-sensitive noninterference), and its variant used by FlowFox [9] (which is timing-insensitive). This demonstrates how our formalism makes it straightforward to prove noninterference of a complex system, and to trace the insecurity of a system to faulty component(s).

Our contributions are as follows:

  • We define a general system model for timed asynchronous interactive systems (Sect. 3) and formalize timing-sensitive noninterference for these systems (Sect. 4).

  • We develop a generic language of process combinators, with primitives for routing messages, maintaining state, scheduling processes, and wiring processes together arbitrarily (Sect. 6).

  • Crucially, we identify and prove conditions under which our combinators preserve timing-sensitive noninterference under composition (Sect. 5).

  • We demonstrate the practicality of our formalism and language by conducting case studies on secure multi-execution (SME) (Sect. 7).

By implementing \(\text {SME}\), we give a complete approach for building large systems free of timing leaks: apply \(\text {SME}\) to atomic parts, and build the rest using our language. Detailed definitions and proofs can be found in our technical report [33]. The main technical results are the theorems in Sect. 5. The culmination of our work is Fig. 2, which describes the language and lists the compositionality results for all 28 combinators in it. We begin by motivating our approach in Sect. 2.

2 Motivation

A system is a whole of interacting components, which can themselves be systems. We refer to the system boundary as its interface, and to what lies beyond as its environment. We reason about the behavior of a system in terms of how it interacts with its environment through its interface. Compositional reasoning is the use of compositionality results on parts to derive facts about the whole. Secure composition is the study of compositionality results stating conditions under which a secure system can be constructed from secure components. Secure composition is a crucial challenge for securing composite systems: even if all components are secure, insecurities can arise under composition. However, obtaining compositionality results is a nontrivial matter. Each definition of security makes assumptions on how a system is used; if a composition operator (a combinator) violates such an assumption, then its use may introduce a leak.

To motivate our work, we give examples of timing leaks that arise under composition, and outline challenges for secure composition of interactive systems.

[Listing: a program that sleeps for a duration derived from secret input \(\mathtt{H}\) before producing output on public channel \(\mathtt{L}\)]

Timing leaks. A timing channel is one through which an adversary learns sensitive information by observing the time at which observed effects occur. A timing leak is an information leak through such a channel. For instance, consider the program on the right. Here, “\(\mathtt {H}\)” and “\(\mathtt {L}\)” denote “high” (i.e. secret) and “low” (i.e. public) confidentiality. Upper-case variables are shared, and we refer to these as channels. Lower-case variables are local. We use this convention throughout the paper. The output on the public channel \(\mathtt {L}\) is delayed as a function of the secret input channel \(\mathtt {H}\); by observing the timing of this event, an adversary can infer information about \(\mathtt {H}\). Similar to \(\mathtt {sleep}\), a loop on \(\mathtt {h}\) (e.g. a key-value lookup), or a branch on \(\mathtt {h}\) where one branch takes longer to execute, also leaks information.

[Listing: a program that, on a message on \(\mathtt{H}\) carrying value n, sleeps for n time units, and on a message on \(\mathtt{L}\), outputs 42 to \(\mathtt{L}\)]

Timing leaks from insecure composition. Timing leaks can arise as a result of composing secure systems. For instance, FlowFox [9] is a prototype of an information-flow secure browser, based on secure multi-execution (\(\text {SME}\)) [10, 34]. \(\text {SME}\) is a black-box enforcement that removes insecurities (including timing leaks) in any given process. It does so by running two copies, \(\mathtt{H}\) and \(\mathtt{L}\), of a given process; feeding (a copy of) \(\mathtt{H}\) and \(\mathtt{L}\) input to the \(\mathtt{H}\)-copy, and dropping its \(\mathtt{L}\) output; and feeding only \(\mathtt{L}\) input to the \(\mathtt{L}\)-copy, and dropping its \(\mathtt{H}\) output. Since the only source of \(\mathtt{L}\) output (the \(\mathtt{L}\)-copy) receives no \(\mathtt{H}\) input, no \(\mathtt{H}\) information can leak. FlowFox implements \(\text {SME}\) on a per-event basis; inputs are queued, and the queue is serviced by first running the \(\mathtt{L}\)-copy on the \(\mathtt{L}\) projection of the next input, then running the \(\mathtt{H}\)-copy on the input. Each copy finishes handling an input before passing control over to the next copy, implementing cooperative scheduling. However, while this approach prevents leaks to output values, the time at which the \(\mathtt{L}\)-copy processes the next input depends on how long it takes for the \(\mathtt{H}\)-copy to finish processing previous inputs. Thus, despite the process copies being run securely, and the environment just being a queue, the way the two are put together and scheduled creates a timing leak. This is illustrated by the program on the right. This program will, upon receiving a message on \(\texttt {H}\) with value n, sleep for n time units, and upon receiving a message on \(\mathtt {L}\), output 42 to \(\mathtt {L}\). However, running an \(\mathtt{H}\) and \(\mathtt{L}\) copy of this program on a queue starting with \(\mathtt{H}n\) makes the time at which the \(\mathtt{L}\)-copy produces its output depend on the time it takes for the \(\mathtt{H}\)-copy to react to \(\mathtt{H}n\), which is a function of n.

Secure composition & interaction: challenges. We have seen that a secure system can easily cause an information leak by being used in unexpected ways by its environment. While it is best that a secure system assumes as little as possible of its environment, such a security guarantee would be very strict, and might not be preserved under composition. The design of a theory for secure composition thus balances 1) environment assumptions, 2) security guarantee, and 3) choice of combinators; each of these factors dramatically impacts the others. We outline some challenges that interaction introduces in this context.

[Listing: the Clark–Hunt example [7], two components composed with ||]

One challenge involves the notion of environment that the security definition needs to consider. Clark and Hunt showed that for deterministic programs, an environment can w.l.o.g. be considered a fixed stream of inputs [7]. However, this does not apply to nondeterministic programs, as demonstrated by the example on the right [7]. Here, || interleaves components nondeterministically, and \(\texttt {0|1}\) is a nondeterministic choice. The right component outputs a secret bit \(\texttt{H}\), encrypted (using XOR \(\oplus \)) with key \(\texttt{x}\), to \(\texttt{L}\). The output is 0 or 1, independently of \(\texttt{H}\). The left component has no outputs. Thus, both components (and the whole) are secure. Say || models hardware interleaving that is, while a priori unknown, deterministic. Then the nondeterminism in || masks a covert channel that emerges when this nondeterminism is refined [46] to that of the hardware. For instance, in the interleaving right (line) 1, left 1, right 2, we have \(\texttt {H} = \texttt {H'} \oplus \texttt {x}\) at the time of the \(\mathtt {L}\) output, so \(\texttt {H'} \oplus \texttt {x} \oplus \texttt {x} = \texttt {H'}\) is written to \(\texttt{L}\).

[Listing: three components composed with \(\gg\) into a synchronous pipe]

The main problem is that the right component does not keep its encryption key \(\texttt{x}\) to itself. Its environment can thus, through accident or malice, adapt input to the right component, causing the insecurity. To capture this, “animate” environments need to be considered, e.g. strategies [42]. While expressive, strategies are always ready to synchronize with a system on input and output operations. Strategies thus do not consider leaks caused by blocking communication, which can occur under composition when components are wired together directly. Consider the program on the right. With strategies as environments, all three components are secure; the left component interacts only on \(\mathtt{H}\) channels, and, since a strategy always provides input on request, the other two components output an infinite sequence of 1s and 0s, respectively. However, when composed with \(\gg \), which wires its components in a synchronous pipe (i.e. any right-hand side global variable read blocks until the left-hand side writes to said variable, and vice versa), the first output is 0 only if the bitwise representation of \(\mathtt{h}\) contains a 0.

Our assumptions. Considering systems that assume that their environment is always ready to synchronize, but that do not guarantee the same of themselves, is an incongruous basis for a theory of secure interaction. We therefore adopt an asynchronous model of interaction in our theory. We assume systems can always receive any input (making them input total [23, 26]), and can always take a step (which may produce an output message). Our timing-sensitive noninterference assumes the same of the environment. This strikes a good balance between Pts. 1–3 in the challenges above; since interaction is nonblocking, composing components will not introduce adverse blocking behavior. This enables rich forms of composition, and at the same time yields a clean, not-too-strict notion of noninterference.

3 System model

We begin by presenting our system model, the constraints we impose on it for reasoning about interaction, and our model of time.

Process domain. We consider a model of computation for processes that interact with their environment (e.g. other processes) by receiving input or producing output. We formalize this as a pair of relations, one specifying which inputs the process can receive, the other which outputs it can produce. Let \(p\) and \(q\) range over processes. For processes \(p\) and \(q\), the output relation states that \(p\) can produce output \(o\) and become \(q\), and the input relation states that \(p\) can receive input \(i\) and become \(q\).

We write \(\mathbb{P}(I, O)\) to denote the semantic domain of processes that take inputs of type \(I\) and produce outputs of type \(O\). We define this set as the greatest fixpoint of the following equation:

\[
\mathbb{P}(I, O) \;=\; \mathcal{P}\big(I \times \mathbb{P}(I, O)\big) \times \mathcal{P}\big(O \times \mathbb{P}(I, O)\big)
\qquad \text{(where } \mathcal{P}(\cdot) \text{ denotes powerset)}
\]

We take the greatest fixpoint because we wish to reason about the security of processes that possibly run forever. This coalgebraic [18] approach is inspired by the interaction trees of Zanarini et al. [45]. As demonstrated below, this approach is just another way to define a labeled transition system. In contrast to more standard transition system definitions, our approach is less cumbersome since we do not need explicitly named states.

Example 1:

Let the process be defined as the greatest fixpoint of the following equations.

[Defining equations and transition graph of the process]

This process outputs a Boolean indicating whether it has received a unit input since its last actuation. The graph describes this behavior; straight arrows are outputs, and wavy arrows are inputs.    \(\triangle \)

Example 2:

Let \(M\) be the set of messages, and let \(C\) be the set of channels. Let \(c\) range over \(C\), let \(cv\) abbreviate a message on \(c\) carrying value \(v\), and let \(m\) range over \(M\). The set of message-passing processes is the set of processes that take messages as input and produce outputs of type \(\mathtt{Maybe}\ M\). These can receive a message, or take a step whilst sending a message (\(\mathtt{Just}\ m\)) or not (\(\mathtt{Nothing}\)).    \(\triangle \)

Example 3:

Let \(s\), \(e\), \(x\), and \(\mu\) range over programs, expressions, variables, and memories, respectively. We give the semantics of our example programs as message-passing processes, denotationally, as the greatest fixpoint of a function on program-memory configurations. The full definition is given in the TR [33].

Inputs update memory without stepping the program, and each step produces output \(\mathtt {Nothing}\) except in the global variable assignment case.    \(\triangle \)

Process behavior. We reason about the behavior of a process strictly in terms of its inputs and outputs. Process inputs and outputs thus constitute its interface to its environment. Let \(p \in \mathbb{P}(I, O)\), with \(p = (p_?, p_!)\). We write \(p \mathrel{?i} q\) iff \((i, q) \in p_?\), and \(p \mathrel{!o} q\) iff \((o, q) \in p_!\). We write \(p \mathrel{?i}\) iff \(p \mathrel{?i} q\) for some \(q\), and \(p \mathrel{!o}\) iff \(p \mathrel{!o} q\) for some \(q\).

Example: The process from Example 1 is the least process satisfying its two defining equations.    \(\triangle \)

A process thus defines a labeled transition system with input-output effects as labels. We define \(\mathbb{E}(I, O)\), the set of \(I\)-input and \(O\)-output effects, as follows.

\[
\mathbb{E}(I, O) \;=\; \{\, ?i \mid i \in I \,\} \;\cup\; \{\, !o \mid o \in O \,\}
\]

Let \(\alpha \) range over effects. We write \(?i\) and \(!o\) for the input and output effects, respectively. The transition relation is then: \(p \xrightarrow{?i} q\) iff \(p \mathrel{?i} q\), and \(p \xrightarrow{!o} q\) iff \(p \mathrel{!o} q\).

We consider the sequences of effects performed by a process. Let \(t\) range over traces, i.e. finite words, and let \(s\) range over streams, i.e. infinite words. Let \(\epsilon \) denote the empty word, and “\(\cdot \)” concatenation. Let \(A^*\) and \(A^\omega \) be the set of finite and infinite words over set \(A\). For each \(p\), let \(p \xrightarrow{\epsilon} p\), let \(p \xrightarrow{\alpha \cdot t} q\) iff \(p \xrightarrow{\alpha} p'\) and \(p' \xrightarrow{t} q\) for some \(p'\), and let \(p \xrightarrow{t}\) iff \(p \xrightarrow{t} q\) for some \(q\). Likewise, \(p \xrightarrow{\alpha \cdot s}\) iff \(p \xrightarrow{\alpha} p'\) and \(p' \xrightarrow{s}\) for some \(p'\).

Example: For the process \(p\) from Example 1, we have, e.g., \(p \xrightarrow{t}\) for \(t = \mathord{!}\mathtt{ff} \cdot \mathord{?}() \cdot \mathord{?}() \cdot \mathord{!}\mathtt{tt}\). Let \(s = t \cdot t \cdot t \cdots \). Then \(p \xrightarrow{t \cdot t}\), and \(p \xrightarrow{s}\).    \(\triangle \)

Interactive processes. Since we are interested in the interaction of processes, the model of interaction that we consider is of central importance. Ours has two properties. The first property is that processes are productive: they can always produce output. This is intuitive, since outputs represent work performed by the process, and the processes that we consider can always perform work (this is similar to e.g. weakly time alive in tSPA [13]). The second property is that processes are input total [27] (a.k.a. input enabled [23]): processes can always receive any input. This makes communication asynchronous, which simplifies compositional reasoning [26, 44] since processes cannot block their environment. This assumption is typically achieved by queuing input or by buffering channels.

Definition 1

(interactive process): \(p\) is interactive iff

\[
\exists o, q.\ p \xrightarrow{!o} q,
\qquad
\forall i.\ \exists q.\ p \xrightarrow{?i} q,
\qquad
\text{and every } q \text{ with } p \xrightarrow{\alpha} q \text{ is interactive.}
\]

An interactive process can always take action, and always accept any input. Interaction between an interactive process and its environment thus never blocks or stops; to reason about such behavior, it must be modeled, making its effect, e.g. on timing, explicit. We define \(\mathbb{I}(I, O)\), the set of interactive \(p \in \mathbb{P}(I, O)\), as

\[
\mathbb{I}(I, O) \;=\; \{\, p \in \mathbb{P}(I, O) \mid p \text{ is interactive} \,\}
\]

Example: The process from Example 1 is in \(\mathbb{I}\), i.e. it is interactive. If we remove a transition from it, the resulting process will not be interactive. For instance, removing the input transition from the state in which an input has already been received yields a process that is not input total, as it cannot receive more than one \(()\) between actuations.    \(\triangle \)

Timing. We use a discrete model of time and conflate transitions with time, similar to prior work (e.g. [5, 10, 13, 20]). Our formalism times the work performed by a process, which is producing output, since systems receive input asynchronously. As a result, outputs are timed, and inputs are untimed. Each output takes one unit of time, and an input's arrival time is given by its position between outputs.

Example: For the process from Example 1, in performing \(t\) from the previous example, the process performed two time units of work (one per output). Between the outputs, the environment provided two inputs without the process itself performing work.    \(\triangle \)

To motivate this timing model, consider an operating system process \(p\), waiting to be scheduled. While \(p\) is idle, another process can write to \(p\)'s memory, thus delivering an input to \(p\). \(p\) performed no work in receiving it; however, the writing process (and thus the computer) performs work producing said input, and thus the passing of time in this exchange is accounted for in the actions of processes. This model of time makes explicit, in the transition history of the whole system, the time that passes while processes wait. This simplifies reasoning.

Our work makes no restriction on how fine the discretization of time is; it can be chosen as needed when a process is being modeled (e.g. to a constant factor of the motherboard clock frequency).

4 Security definition

Based on a notion of attacker observation, we formalize absence of attacks as a semantic security property: timing-sensitive noninterference.

Threat model. We consider an attacker that observes public process inputs and outputs, as well as how much time passes between their occurrence. We assume that the attacker knows how the process is defined. Our goal is to facilitate building processes that preserve confidentiality: an attacker that interacts with such a process through its interface learns nothing about inputs to the process that the attacker is not allowed to know.

Observables. We formalize what each principal is allowed to know by means of a security lattice \((\mathcal{L}, \sqsubseteq)\), where \(\mathcal{L}\) is a set, \(\sqsubseteq \) is a partial order over \(\mathcal{L}\), and every pair of elements \(\ell_1\) and \(\ell_2\) in \(\mathcal{L}\) has a least upper bound \(\ell_1 \sqcup \ell_2\) and a greatest lower bound \(\ell_1 \sqcap \ell_2\). Any principal, including the attacker, is assumed to be associated with an element of \(\mathcal{L}\), and \(\sqsubseteq \) expresses the relative privileges of principals. Information from a principal may only flow to more privileged principals (i.e. only upwards in the security lattice). We refer to elements of \(\mathcal{L}\) as security levels, expressing levels of confidentiality. In examples, we use a two-point lattice \(\mathcal{L} = \{\mathtt{L}, \mathtt{H}\}\), where \(\mathtt{L} \sqsubseteq \mathtt{H}\) and \(\mathtt{H} \not\sqsubseteq \mathtt{L}\).

We express what each principal observes in inputs and outputs by defining, for each principal, which values are observably equivalent. To identify values that are unobservable to a principal, we introduce a distinguished value \(\bullet \) that we assume is not an element of any value space. Any value observably equivalent to \(\bullet \) is considered unobservable. Let \(A_\bullet = A \cup \{\bullet\}\), and let \(a\) range over \(A_\bullet \). Let \(\mathcal{E}(A)\) be the set of equivalence relations over \(A_\bullet \).

Definition 2:

We say \(a\) is \(\ell \)-equivalent to \(a'\) iff \(a \approx_\ell a'\).    \(\diamondsuit \)

We define the set of \(\mathcal{L}\)-equivalences over \(A\) as

\[
\mathcal{E}_{\mathcal{L}}(A) \;=\; \{\, ({\approx_\ell})_{\ell \in \mathcal{L}} \mid \forall \ell \in \mathcal{L}.\ {\approx_\ell} \in \mathcal{E}(A) \,\}
\]

We will consider different \(\mathcal{L}\)-equivalences over the same set at the same time; when \(A\) is clear from the context, we let \({\approx}\), \({\simeq}\), and \({\cong}\) range over \(\mathcal{E}_{\mathcal{L}}(A)\).

Example 4:

For the process from Example 1, say \(\mathtt{L}\) observes the Boolean outputs, but does not observe the (unit) inputs. We capture this as \(\mathcal{L}\)-equivalences as follows.

[\(\mathcal{L}\)-equivalences for Example 1: on inputs, \(() \approx_\mathtt{L} \bullet\); on outputs, \(\approx_\mathtt{L}\) relates each Boolean only to itself; \(\approx_\mathtt{H}\) is the identity on both]

Since \(() \approx_\mathtt{L} \bullet\), \(\mathtt{L}\) cannot distinguish \(()\) from \(\bullet \), making the presence of input to the process unobservable to \(\mathtt{L}\). The \(\mathtt{H}\) principal, however, can distinguish all values.    \(\triangle \)

Example 5:

Revisiting Example 2, assume a mapping from channels to security levels, \(\mathrm{lvl} : C \to \mathcal{L}\). We express that an \(\ell \)-observer observes messages over channels \(c\) with \(\mathrm{lvl}(c) \sqsubseteq \ell\), using the following projection function

\[
\lfloor cv \rfloor_\ell \;=\;
\begin{cases}
cv, & \text{if } \mathrm{lvl}(c) \sqsubseteq \ell \\
\bullet, & \text{otherwise}
\end{cases}
\]

We define two messages to be \(\ell \)-equivalent iff what an \(\ell \)-observer observes in them is the same. That is, for all \(\ell \), \(\approx_\ell \) is the least equivalence relation satisfying

\[
\lfloor m \rfloor_\ell = \lfloor m' \rfloor_\ell \implies m \approx_\ell m'
\]

Since \(\lfloor cv \rfloor_\ell = \bullet \) for messages on channels with \(\mathrm{lvl}(c) \not\sqsubseteq \ell\), such messages satisfy \(cv \approx_\ell \bullet \), meaning \(\ell \) will not observe the presence of such inputs. We let \(M_? = \mathtt{Maybe}\ M\), and let \(o\) range over \(M_?\). Let \(\approx^{?}_\ell \) be the least equivalence relation satisfying

\[
m \approx_\ell m' \implies \mathtt{Just}\ m \approx^{?}_\ell \mathtt{Just}\ m',
\qquad
m \approx_\ell \bullet \implies \mathtt{Just}\ m \approx^{?}_\ell \mathtt{Nothing}
\]

For a message \(m\), \(\{\ell \mid m \not\approx_\ell \bullet\}\) is the set of principals that can distinguish \(\mathtt{Just}\ m\) from the unobservable \(\mathtt{Nothing}\). We compare outputs with \(\approx^{?}\).    \(\triangle \)

Noninterference. An interactive process is noninterfering iff unobservable input does not interfere with observable output. An attacker observing public effects of such a process thus cannot infer any knowledge of its secret inputs. To motivate our formalization of noninterference, consider the set of streams a process can perform. Each time the process performs an effect, this set shrinks to the set of streams prefixed by the effects that the process has performed so far. To violate noninterference, a process must receive secret input that renders some public behavior impossible. Our formalization stipulates that a process can, through its own actions, avoid states where it can be influenced by its environment in this manner. We achieve this by requiring that, at any point of the execution, secret input can be inserted, changed or removed, without affecting the ability of the process to perform a given stream of observable effects.

Definition 3:

[Definition of an \(\ell \)-stream-simulation: a relation between streams and processes subject to the four conditions (Pts. 1–4) discussed below]

Definition 4

(noninterfering p):

[\(p\) satisfies noninterference iff, for every \(\ell \) and every stream \(s\) with \(p \xrightarrow{s}\), some \(\ell \)-stream-simulation relates \(s\) and \(p\)]

This coinductive definition requires that, for each \(\ell \), and for each stream \(s\) that \(p\) can perform, \(p\) must \(\ell \)-simulate \(s\) (Definition 3). For \(p\) to \(\ell \)-simulate \(s\), \(p\) needs to satisfy four conditions. Pts. 1 and 2 deal with unobservable input (and are therefore vacuously true when \(s\) has no values unobservable to \(\ell \)). Pt. 1 states that if \(s = \mathord{?}i \cdot s'\) with \(i \approx_\ell \bullet\), the presence of \(\mathord{?}i\) in \(s\) is not required for \(p\) to be able to simulate \( s \). Similarly, Pt. 2 states that the absence of unobservable input is not required either. Pts. 3 and 4 deal with observable as well as unobservable effects. Pt. 3 states that if \(s = \mathord{?}i \cdot s'\), \(p\) must simulate \(s'\) after any \(i' \approx_\ell i\) has been inserted, i.e. unobservably changing the next input will not prevent the process from simulating the rest. Finally, Pt. 4 states that if \(s = \mathord{!}o \cdot s'\), \(p\) must be capable of producing some \(o' \approx_\ell o\) and subsequently simulating \(s'\).

This definition is timing-sensitive; \(p\) must be able to simulate \(s\) without inserting, observably changing, or deleting output, or any observable input. Thus, \(p\) must be able to preserve the timing of public effects in \(s\).

Example: The top four programs on the right violate noninterference. The first has an explicit flow. Assume \(\mathcal{L} = \{\mathtt{L}, \mathtt{H}\}\), with \(\approx \) and \(\approx^{?}\) as defined in Example 5. Any relation witnessing noninterference must, by Definition 3 Pt. 2, still simulate the remainder of a stream after an \(\mathtt{L}\)-unobservable input has been removed; however, the only output the program can then perform is not \(\mathtt{L}\)-equivalent to the output in the stream, violating Pt. 4. Thus the program does not satisfy noninterference. The second has an implicit flow; the proof that it violates noninterference is nearly identical. The third has a progress leak: it can perform a stream of public outputs, but if a secret input is inserted (Pt. 3), it eventually stops producing them. The fourth has a timing leak: it can perform a stream of public outputs, but inserting a secret input delays the next public output.

The last two programs satisfy noninterference. For the fifth, let \(R\) relate each stream the program can perform with the process performing it, and let \(\approx \) and \(\approx^{?}\) be as given in Example 5. Since secret inputs do not affect the program's public outputs, one shows that \(R\) is an \(\ell \)-stream-simulation for every \(\ell \) by picking any pair in \(R\) and showing that Pts. 1–4 of Definition 3 hold. Similarly, the sixth program satisfies noninterference, since it ignores inputs.    \(\triangle \)

5 Combinator core

We develop a core of combinators for composing processes, presented in Figure 1. The core is expressive yet easy to reason about; instead of striving for a minimal core, we designed it such that each combinator embodies a clearly defined responsibility. We prove that the core combinators are all security-preserving: composing secure components yields a secure whole. We use this core to implement a language of security-preserving combinators in Sect. 6.

Core. Each core combinator in Figure 1 is a function that takes a set of processes as parameter and returns a new process. The combinators are designed for building secure composites from secure parts. By introducing a primitive process, e.g. a program denotation from Example 3, the core becomes a core language for implementing processes.

Fig. 1. Core combinators [figure omitted]

The \(\texttt{map}\) combinator transforms incoming and outgoing messages. With \(\texttt{map}\), we can tag messages, providing a means of routing messages. The \(\texttt{sta}\) combinator maintains state, updating and forwarding it upon receiving input and output. With \(\texttt{sta}\), we can implement queues and counters. The compositionality results for \(\texttt{sta}\) enable reasoning about the security of state maintained by a system. The \(\texttt{swi}\) combinator maintains a Boolean state that determines whether the given process is “on” or “off”. In \(\texttt{swi}\ p\), this Boolean determines whether or not \(p\) is running; if it is false, \(p\) is “off”. Thus, when an “off” \(\texttt{swi}\ p\) is tasked for output, it merely produces output without touching \(p\). With \(\texttt{swi}\), we can implement scheduling strategies and process termination, facilitating secure implementation of runtime systems. Notice that even when “off”, \(p\) receives input. This lets the environment write values into \(p\)'s memory while \(p\) is waiting. The \(\texttt{maybe}\) combinator ignores non-value inputs. That is, \(\texttt{maybe}\ p\) ignores \(\mathtt{Nothing}\) input, and inputs \(v\) to \(p\) on receiving \(\mathtt{Just}\ v\). With \(\texttt{maybe}\), together with \(\texttt{map}\), we can filter incoming messages, removing those not intended for the process. The \(\texttt{par}\) combinator executes two processes in parallel. With \(\texttt{par}\), composite processes can be built. The \(\texttt{loop}\) combinator feeds process output back in as input, which can orchestrate interactions between subcomponents.

Compositionality of core. Our main results are compositionality results for each core combinator, stating how each preserves security. The proofs are by coinduction. We sketch the proof for \(\texttt {map}\); the other proofs are similar.

map. The \(\texttt {map}\) combinator preserves the security of its given process as long as its given functions do not introduce insecurities. We identify two ways a function can introduce insecurities. The first is when a function maps observably equivalent values to observably different values. Functions that do not are noninterfering. The second is when the input function maps an unobservable input to an observable one. Functions that do not are unobservable-preserving.

Definition 5

(noninterference): for all \(f\), \({\approx}\), and \({\simeq}\), \(f\) is \({\approx}\)-\({\simeq}\)-noninterfering iff \(\forall \ell, a, a'.\ a \approx_\ell a' \implies f(a) \simeq_\ell f(a')\).    \(\diamondsuit \)

Definition 6

(unobservable-preserving): for all \(f\), \({\approx}\), and \({\simeq}\), \(f\) is \({\approx}\)-\({\simeq}\)-unobservable-preserving iff \(\forall \ell, a.\ a \approx_\ell \bullet \implies f(a) \simeq_\ell \bullet\).    \(\diamondsuit \)

Theorem 1

(map): for all \(p\), \(f\), \(g\), \({\approx}\), and \({\simeq}\), if \(p\) satisfies noninterference, \(f\) is noninterfering and unobservable-preserving, and \(g\) is noninterfering, then \(\texttt{map}\ f\ g\ p\) satisfies noninterference.

Proof sketch

Pick everything universally quantified in Theorem 1, satisfying the stated assumptions. By Definition 4, the proof is carried out in two steps: given \(\ell \) and \(s\) such that \(\texttt{map}\ f\ g\ p \xrightarrow{s}\), the first step is to find a relation \(R\) that relates \(s\) and \(\texttt{map}\ f\ g\ p\); the second step is to prove that \(R\) is an \(\ell \)-stream-simulation (Definition 3). Let

[Definition of the relation \(R\) (omitted)]

Here, \(R\) relates an activity of the composite process to the activity of the inner process: a stream of the composite is related to \(\texttt{map}\ f\ g\ q\) iff the corresponding stream of the inner process is performed by \(q\) (the former is what the latter did, as computed through \(f\) and \(g\)). To see that \(R\) relates \(s\) and \(\texttt{map}\ f\ g\ p\), construct from the proof of \(\texttt{map}\ f\ g\ p \xrightarrow{s}\) a stream \(s'\) such that \(p \xrightarrow{s'}\) and \(s\) is \(s'\) computed through \(f\) and \(g\). Then invoke the assumptions to establish membership in \(R\). The proof that \(R\) is an \(\ell \)-stream-simulation involves picking any pair in \(R\), and showing that points 1) through 4) of Definition 3 hold through case analysis.    \(\blacksquare \)

sta. The compositionality result for \(\texttt{sta}\) states how to introduce state into a large system without violating security: \(\texttt{sta}\) preserves the security of a given process as long as the state-update functions do not introduce insecurities. These functions can do so in two ways: by using unobservable parts of input and state to observably update state, and by observably updating state upon receiving an unobservable input. Functions that do neither are noninterfering and equivalence-preserving.

Definition 7:

for all \(f\), \({\approx}\), and \({\cong}\), \(f\) is \({\approx}\)-\({\cong}\)-noninterfering iff \(\forall \ell, \sigma, \sigma', a, a'.\ \sigma \cong_\ell \sigma' \wedge a \approx_\ell a' \implies f(\sigma, a) \cong_\ell f(\sigma', a')\).    \(\diamondsuit \)

Definition 8

(equivalence-preserving): for all \(f\), \({\approx}\), and \({\cong}\), \(f\) is \({\approx}\)-\({\cong}\)-equivalence-preserving iff \(\forall \ell, \sigma, a.\ a \approx_\ell \bullet \implies f(\sigma, a) \cong_\ell \sigma\).    \(\diamondsuit \)

Theorem 2

(sta): for all \(p\), \(f_{\mathrm{i}}\), \(f_{\mathrm{o}}\), and \(\sigma \), if \(p\) satisfies noninterference, \(f_{\mathrm{i}}\) and \(f_{\mathrm{o}}\) are noninterfering, and \(f_{\mathrm{i}}\) is equivalence-preserving, then \(\texttt{sta}\ f_{\mathrm{i}}\ f_{\mathrm{o}}\ \sigma\ p\) satisfies noninterference, where the input and output equivalences are lifted as described below.    \(\square \)

The three lifted equivalences involved satisfy (1), (1) and (2), and (1) and (3), respectively. Here, (1) is componentwise observable equivalence, with observable presence; (2) and (3) weaken (1) by making the presence of pairs unobservable when both components, or the right component, are unobservable, respectively.

swi. The compositionality result for \(\texttt{swi}\) states how to switch processes (e.g. to implement schedulers) securely: \(\texttt{swi}\) preserves security as long as unobservables cannot affect the switch state and, as a result, stagger observable process output. We consider two ways to meet this restriction. One way this restriction is met for a principal \(\ell \) is for \(\ell \) to fully observe the switch state; that way, no information can ever leak to \(\ell \) through it. Such observers are aware of the value of the switch.

Definition 9

(awareness): for all \({\approx}\) and \(a\), a principal \(\ell \) is aware of \(a\) under \({\approx}\) iff \(\forall a'.\ a \approx_\ell a' \implies a' = a\).    \(\diamondsuit \)

For instance, the set of principals aware of \(a\) under \({\approx}\) is the set of principals who can distinguish \(a\) from every other value. In the case of \(\texttt{swi}\), those observers observe the switch signals, and thus the switch state. Since the switch state can be inferred by knowing whether the switched process took a step, only principals aware of the switch are allowed to distinguish process output from the dummy output produced while the process is “off”. The output equivalence in Theorem 3 achieves this. Another way this restriction is met for a principal \(\ell \) is if all process output is \(\ell \)-unobservable. Then, \(\ell \) is oblivious to the process.

Definition 10

(oblivious): for all \(\ell \), \(p\), and \({\simeq}\), \(\ell \) is \({\simeq}\)-oblivious to \(p\) iff every output that \(p\) (or any process reachable from \(p\)) can produce is \({\simeq_\ell}\)-equivalent to \(\bullet \).    \(\diamondsuit \)

An observer that is not aware of the value of the switch will then, by obliviousness, not be able to infer any information about the switch state, since all output from the switched process looks the same.

Theorem 3

(swi): for all \(p\), \({\approx}\), and \({\simeq}\), if \(p\) satisfies noninterference, and every principal is either aware of the switch signals or \({\simeq}\)-oblivious to \(p\), then \(\texttt{swi}\ p\) satisfies noninterference, under suitably lifted input and output equivalences.    \(\square \)

maybe, loop, par. The compositionality results for \(\texttt{maybe}\), \(\texttt{loop}\), and \(\texttt{par}\) are simple in comparison to the above. For instance, \(\texttt{maybe}\) preserves the security of a process, even for principals who do not observe \(\mathtt{Nothing}\) inputs, since nothing is ever delivered to the process when such input is received. Using \(\texttt{loop}\) to create feedback around a secure process does not introduce insecurities, since the process must always meet its public deadlines regardless of what the source of its input is. Looping thus cannot cause an interactive process to block itself. Our theory therefore eliminates known challenges for security under feedback [26, 35, 43]. Finally, composing secure processes with \(\texttt{par}\) yields a secure process, since all it does is run the processes in parallel.

Theorem 4

(maybe): for all \(p\), if \(p\) satisfies noninterference, then \(\texttt{maybe}\ p\) satisfies noninterference, with the input equivalence lifted to \(\mathtt{Maybe}\) inputs.    \(\square \)

Theorem 5

(loop): for all \(p\) and equivalences, if \(p\) satisfies noninterference, then so does \(\texttt{loop}\ p\).    \(\square \)

Theorem 6

(par): for all \(p_1\), \(p_2\), and equivalences, if \(p_1\) and \(p_2\) satisfy noninterference, then so does \(\texttt{par}\ p_1\ p_2\), with outputs compared componentwise.    \(\square \)

6 Combinator language

With this core, we build a rich language (Figure 2a) of combinators that mediate the interaction of processes. The language, in addition to facilitating the wiring of process outputs and inputs, includes combinators for transforming and filtering messages, maintaining state, and switching processes on or off. Complex systems, including schedulers, runtime monitors, and even runtime systems, can be implemented in this language. By virtue of the compositionality results for our core, the combinators in our language are security-preserving. The crucial point is that the compositionality results can be invoked to prove noninterference of processes implemented in our language, obtaining noninterference by construction. To demonstrate, we use this language to implement an enforcement of timing-sensitive noninterference in Sect. 7.

Language. The language is summarized in Figure 2a. The figure displays the type of each combinator in the language, along with a brief description of its semantics. For brevity, we leave out descriptions of combinators that are trivial specializations of a more general combinator (e.g. ones with suffix \(\mathtt {I}\) or \(\mathtt {O}\): specializations that operate only on input or output, respectively). The implementation of each combinator in terms of core combinators is given in the TR [33].

Message transformation & process state. \(\mathtt{mapI}\), \(\mathtt{mapO}\), \(\mathtt{staI}\), and \(\mathtt{staO}\) are trivial specializations of the core \(\texttt{map}\) and \(\texttt{sta}\) combinators. For instance, \(\mathtt{mapI}\ f\) is defined as \(\texttt{map}\ f\ \mathrm{id}\). Thus, \(\mathtt{mapI}\) only transforms inputs. We make heavy use of \(\mathtt{mapI}\) and \(\mathtt{mapO}\) for routing and restructuring messages in Sect. 7.

Message filtering. The filtering combinators drop messages that do not satisfy a predicate. We implement input filtering using \(\mathtt{mapI}\), by transforming predicates into functions that map messages that do not satisfy the predicate to \(\mathtt{Nothing}\). We then use \(\texttt{maybe}\) to discard the resulting input. We cannot do the same for output; the process still performed the work. A further combinator drops all input.

Message tagging. The tagging combinators tag and untag messages. These are simple specializations of \(\texttt{map}\); for instance, the input-tagging combinator tags each input with a given tag. The only nontrivial tagging combinator treats a tag as a token, only passing an input to the process if the input is tagged with the token (consuming the token in doing so). A sample use of the tag combinators is implementing point-to-point communication; this can be done by having senders tag a message with the ID of a recipient process, and having said process only process messages addressed to it.

Process switching. Two specializations of \(\texttt{swi}\) are noteworthy. The first, by only switching its subprocess on or off upon receiving input, implements a preemptive switching strategy. The second, by using input to switch its subprocess on, and output to switch it off, implements a cooperative switching strategy. We use these to implement scheduling strategies in Sect. 7.

Process composition. With \(\texttt{par}\) and \(\texttt{loop}\), we can compose any number of processes such that all receive copies of each other's output. This “universal” composition can be specialized to more restricted forms of communication, including “sequential” composition, using our other combinators to selectively route messages.

Compositionality of language. The compositionality results for our language are listed in Figure 2b. Each black-bordered box contains a compositionality result; the first line is its guarantee, while subsequent lines in the box are assumptions under which that guarantee holds. Occurrences of unbound variables in a box are implicitly universally quantified. For instance, the first six lines under “process state” constitute one compositionality result, namely Theorem 2 restated.

The meaning of each assumption has already been explained in Sect. 5, save for three. The first states that a principal must either be aware of the token (thus observing the presence of all input), or oblivious to all input (thus public output is independent of all input). The second and third restrict an equivalence to observable values; we use these two definitions to state that, on observably equivalent observable values, the filter functions make the same filtering decision.

The compositionality results follow from Theorems 1, 2, 3, 4, 5, and 6.

Corollary 1

(composition): Each statement in Figure 2b is true.    \(\square \)

Fig. 2. Language [figure omitted]

7 Case study: SME

To demonstrate the practicality of our results, we implement secure multi-execution (\(\text {SME}\)) [10], the enforcement that we discussed in Sect. 2.

We develop two variations of \(\text {SME}\), which differ in how the execution of process copies is managed. The former variant uses a preemptive scheduling strategy to schedule the process copies. For this variant, we show how a proof of soundness can be straightforwardly obtained by invoking our compositionality results. The latter variant uses a cooperative scheduling strategy. Here we demonstrate a timing leak, and, using our compositionality results, trace the insecurity in the implementation to a single component. Together, this demonstrates that our theory can be used to straightforwardly establish timing-sensitive noninterference of a complex system, and to identify subcomponents that cause insecurities.

We stress that our construction easily generalizes to lattices of any shape and size, like \(\text{SME}\) does [34], even though, for clarity of presentation, we assume the two-point lattice \(\{\mathtt{L}, \mathtt{H}\}\). We will use the definition of message-passing processes and their observables, i.e. \({\approx}\) and \({\approx^{?}}\), from Examples 2 and 5.

Secure execution. At first, it appears our compositionality results will not aid us in establishing soundness for an implementation of \(\text {SME}\) in our language; our results assume that the processes being composed are secure, while \(\text {SME}\) makes no such assumption. We observe, however, that only a tiny part of \(\text {SME}\) is responsible for enforcing security. We deconstruct \(\text {SME}\), separating plumbing and scheduling from this part, prove that this part enforces noninterference, and then leverage our compositionality results to show that the plumbing and scheduling do not introduce insecurities.

This tiny part is \(\mathtt{SE}\): a combinator for executing any given process securely at a given level. \(\mathtt{SE}\) secures the \(\ell \)-copies. With \(\mathtt{SE}\ \ell\ p\) denoting the securely executed \(\ell \)-copy of \(p\), \(\mathtt{SE}\) achieves this effect by 1) feeding only the \(\ell \)-observable part of input to the \(\ell \)-copy, and 2) dropping all non-\(\ell \) parts of output from it. Intuitively, \(\mathtt{SE}\ \ell\ p\) is a secure process since it outputs messages only on channels labeled \(\ell \), and computes these using only input observable to \(\ell \). Both 1) and 2) are needed; without 1), input unobservable to \(\ell \) can flow to output channels labeled \(\ell \), and without 2), information can leak between incomparable channels in \(\mathcal{L}\).

[Listing 1.1: the \(\mathtt{SE}\) combinator]

To achieve 1), we use a combinator that preprocesses input to the \(\ell \)-copy in the manner required by 1). To achieve 2), we use \(\mathtt{mapO}\ f\), where \(f\) is a function that projects output on non-\(\ell \) channels to \(\mathtt{Nothing}\).

[Definition of the output-projection function]

With this, we define \(\mathtt {SE}\) as in Listing 1.1.

Theorem 7: \(\mathtt{SE}\ \ell\ p\) satisfies noninterference, for all \(\ell \) and interactive \(p\).

Proof sketch

Pick \(\ell \), \(p\), and a principal \(\ell'\). We need to prove that \(\mathtt{SE}\ \ell\ p\) is noninterfering at \(\ell'\). Pick a stream \(s\) such that \(\mathtt{SE}\ \ell\ p \xrightarrow{s}\). Case on whether \(\ell \sqsubseteq \ell'\). We use the following simulations in the two cases.

[Definitions of the two stream-simulations used in the proof]

In the “true” case (\(\ell \sqsubseteq \ell'\)), the input preprocessing replaces \(\ell \)-unobservable input with \(\mathtt{Nothing}\), which in turn gets dropped by \(\texttt{maybe}\). Since \(\ell \)-observable inputs are only observably equivalent to themselves, this together gives that changes in \(\ell \)-unobservable input never propagate into the \(\ell \)-copy. Thus, we can show that the first relation, which relates streams to processes very tightly, is a stream-simulation. It is also easy to see that it relates \(s\) and \(\mathtt{SE}\ \ell\ p\). In the “false” case, we use a different observation: the output projection maps all output to \(\mathtt{Nothing}\) if it is not a message on an \(\ell \)-labeled channel. Since messages on \(\ell \)-labeled channels are unobservable to \(\ell'\) in this case, none of the outputs from \(\mathtt{SE}\ \ell\ p\) are \(\ell'\)-observable. This lets us use the second relation. To establish that it relates \(s\) and \(\mathtt{SE}\ \ell\ p\), we use the following lemma.    \(\blacksquare \)

Lemma 1:

Every output of \(\mathtt{SE}\ \ell\ p\) is either \(\mathtt{Nothing}\) or a message on a channel labeled \(\ell \).    \(\square \)

Scheduler processes. Our two variations of \(\text {SME}\) execute \(\ell \)-copies concurrently, with executions coordinated by a scheduler process. A scheduler chooses which process copy goes next by outputting its security level. Like previous work on \(\text {SME}\) [10, 20, 34], our schedulers receive no input. This simplifies reasoning (this way, schedulers cannot leak information [20]). Our schedulers are rich enough to express practical scheduling strategies, including Round-Robin scheduling.

The set of schedulers is the set of interactive processes that output security levels. Since schedulers receive no input, we make scheduler choices public: every principal observes scheduler output, and scheduler outputs are compared with equality. Given this, the following is clear.

Corollary 2:

Every scheduler satisfies noninterference.    \(\square \)

Secure multi-execution, preemptive. Our first variation of \(\text {SME}\) schedules \(\ell \)-copies preemptively. Example \(\text {SME}\) schedulers of this sort are Multiplex-2 [20] and the deterministic fair schedulers [34]. In this variation, the \(\ell \)-copies run in parallel with a scheduler. In each time unit, the scheduler can switch one of the process copies on or off (preempting it).

[Listing 1.2: the \(\mathtt{SME}_\mathtt{P}\) combinator]

The \(\mathtt{SME}_\mathtt{P}\) combinator in Listing 1.2 achieves this effect. Here, \(\mathtt{SME}_\mathtt{P}\) securely executes an \(\mathtt{L}\)- and an \(\mathtt{H}\)-copy of a process in parallel (line 5). These \(\ell \)-copies are made switchable by the preemptive switch (defined later). The scheduler interacts with these switches by means of the \(\texttt{loop}\) construct. Whereas message tagging ensures the scheduler interacts only with the \(\ell \)-copies, dropping all input to the scheduler makes this interaction unidirectional.

[Listing 1.3: the preemptive process switch]

Before explaining the \(\texttt{map}\)s on line 3, let's delve into the preemptive switch in Listing 1.3. Besides making \(p\) preemptively switchable (by \(\texttt{swi}\)), the switch defines the interface between an \(\ell \)-copy and the scheduler. As its type indicates, the switch receives switch commands from the scheduler, and messages from the environment. A function specifies how the switch reacts to input: it outputs a pair whose first component determines whether the switch should be flipped, and whose second component is the input message (if any) to \(p\). Here, the first component is true iff the input is from the scheduler, and the second component carries a message iff the input is from the environment.

Note that this function only changes how data is packaged, without changing the data itself (except the switch flag, which is public). Thus, the function is noninterfering. This, together with our compositionality results, gives us that the preemptive switch is security-preserving.

Corollary 3:

The preemptive process switch preserves noninterference, under the lifted equivalences.    \(\square \)

In Listing 1.2 line 3, the first \(\texttt{map}\) maps environment input into the space of values that switched \(\ell \)-copies receive. Finally, the second projects each pair of output messages (if any) from the \(\ell \)-copies to a single message. It does so by preferring the right component, choosing the left component only if the right component is \(\mathtt{Nothing}\). We define this projection as follows.

\[
\mathrm{prj}(o_1, o_2) \;=\;
\begin{cases}
o_2, & \text{if } o_2 \neq \mathtt{Nothing} \\
o_1, & \text{otherwise}
\end{cases}
\]

Lemma 2:

\(\mathrm{prj}\) is noninterfering, where its input pairs are compared componentwise and its output with \({\approx^{?}}\).    \(\square \)

By Lemma 1, the output space of \(\mathtt{SE}\ \ell\ p\) is restricted to \(\ell \)-messages and \(\mathtt{Nothing}\).

Corollary 4:

The parallel composition of the switched, securely executed \(\ell \)-copies satisfies noninterference.    \(\square \)

Now, the output space of this composition is the space of pairs that \(\mathrm{prj}\) expects. This lets us invoke Lemma 2 on the \(\texttt{map}\) part of \(\mathtt{SME}_\mathtt{P}\). By invoking the compositionality results for \(\texttt{map}\), \(\texttt{loop}\), and \(\texttt{par}\), we get a proof of soundness of \(\mathtt{SME}_\mathtt{P}\).

Corollary 5:

\(\mathtt{SME}_\mathtt{P}\) applied to any process and scheduler satisfies noninterference.    \(\square \)

This venture highlights the power of our approach: it enables \(\text {SME}\) to be implemented simply, reducing soundness to proving properties of simple components.

Secure multi-execution, cooperative. Our second variation of \(\text {SME}\) schedules \(\ell \)-copies cooperatively. An example scheduler of this sort is \(\texttt {select}_\text {lowprio}\) [10], implemented in FlowFox [9] on a per-event basis. Here, processes are arranged like in \(\mathtt {SME}_\mathtt {P}\). The key difference is that only one process (including the scheduler) can be active at a time. An active process remains active until it releases control. When an \(\ell \)-copy does, the scheduler receives control, remaining active until it determines which process copy to activate, and activates it.

However, as we will confirm, this approach has a timing leak: allowing the \(\mathtt{H}\)-copy to control when it releases control to the scheduler means that the time at which the \(\mathtt{L}\)-copy is subsequently activated can depend on \(\mathtt{H}\) information [20, 34].

[Listing 1.4: the \(\mathtt{SME}_\mathtt{C}\) combinator]
[Listing 1.5: the cooperative process switch]

The \(\mathtt{SME}_\mathtt{C}\) combinator, in Listing 1.4, implements this approach. The structure is exactly like that of \(\mathtt {SME}_\mathtt {P}\). However, a few combinators have been modified. First, the type of the multi-executed processes is different; processes to be multi-executed now additionally output a Boolean that signifies control release. Second, \(\mathtt{SE}\) needs to be modified slightly as a result. The new combinator enforces noninterference for this richer type; see the TR [33] for details. Third, the process switch needs to be updated to match the new scheduling semantics. The new switch is given in Listing 1.5. Compared to the preemptive switch, it switches cooperatively, and propagates a release signal from the process to both the switch and the scheduler. The following should thus come as no surprise.

Corollary 6:

The cooperative process switch preserves noninterference, under the equivalences below.    \(\square \)

Here, the equivalence used for release signals defines that values are observable only to principals at or above a given level; for all \(\ell \), it is the least equivalence relation satisfying the rules below.

[Definition of the level-indexed equivalence]
[Listing 1.6: the scheduler switch]

Things start to go wrong in the scheduler switch, sketched in Listing 1.6. This switch follows the structure of the process switch. When switched on (by an \(\ell \)-copy releasing control), the scheduler remains active until it produces a security level (which, in turn, switches the corresponding \(\ell \)-copy on).

Now a problem emerges in \(\mathtt{SME}_\mathtt{C}\). Since the Boolean used to switch the scheduler comes from both the \(\mathtt{L}\)- and the \(\mathtt{H}\)-copy, the \(\mathtt{L}\) principal is not aware of the scheduler's switch state, so \(\mathtt{L}\) needs to be oblivious to the scheduler process. However, the scheduler process outputs security levels to the \(\ell \)-copies, and these are public. If we instead make security levels secret, Corollary 6 becomes false; the switch signal sent to the \(\mathtt{L}\)-copy becomes unobservable to \(\mathtt{L}\), forcing \(\mathtt{L}\) to be unaware of the switch on the \(\mathtt{L}\)-copy, and since \(\mathtt{L}\) is not oblivious to the \(\mathtt{L}\)-copy, a leak can occur. There thus appears to be an irreparable conflict in this variation of \(\text {SME}\); scheduler output must be independent of \(\mathtt{H}\) input, but the time at which the \(\mathtt{L}\)-copy regains control depends on output from the \(\mathtt{H}\)-copy, which depends on \(\mathtt{H}\) input.

8 Related Work

We discuss work in areas most related to ours: information-flow control of timing channels, timed interaction, and theories of information-flow secure composition.

Timing channels. Timing channels can be categorized as internal and external [39]. Several program analyses and transformations have been proposed to stop leaks through external channels. Proposed white-box approaches include the following. Hedin and Sands developed a type system that rejects programs for which the time it takes to reach the point of an effect can depend on secrets [14]. Zhang et al. annotate statements in an imperative language with a read and write label expressing how information can flow through the runtime [47]. Agat gave a program transformation that, in a program that passes Denning-style enforcement [41] (which rules out explicit and implicit flows), pads ifs and bans whiles [1]. Askarov et al. present a black-box timing leak mitigator [5]. Here, outputs are queued, and released FIFO according to a pre-programmed schedule. If no output is in the FIFO when a release is scheduled, the schedule is updated (i.e. slowed). This places a logarithmic bound on timing leaks. Devriese and Piessens formalize secure multi-execution (SME), which executes a program multiple times, once for each security level, while carefully dispatching inputs and ensuring that an execution at a given level is responsible for producing outputs for sinks at that level [10].

Whereas the above approaches perform little or no exploration of compositionality, we demonstrate that our timing-sensitive noninterference is preserved under composition. Our combinators can be used to prove timing-sensitive noninterference of large systems, by construction. By implementing SME, we have shown that it is compatible with our theory. The mitigation approach is not compatible with our theory as-is, since it allows leaks through timing, whereas our theory allows no leaks. Modifying our theory to accommodate mitigation is a promising line of future work. The compatibility of the other approaches with our theory is unclear, as they make environment assumptions that may be incompatible with ours. Compared to [14], our discrete-timed model is simplistic. We note, however, that no part of our theory places restrictions on how fine the discretization of time can be. Our work focuses on eliminating external timing channels, because they have been demonstrated to be exploitable [6, 11, 22, 30], and because internal timing channels are caused by external timing channels of subcomponents.

Timed interaction. Timed models of interaction have been studied extensively in a process algebraic setting [8, 13, 15, 29, 40]. The prevalent approach has been to introduce a special timed tick action to the model, leaving synchronization constructs untimed [8, 13, 15, 40]. This tick action requires special attention in the theory; for instance, it is useful to require that processes are weakly time alive, i.e. never prevent time from passing by engaging in infinite interaction. Instead, our model times output, alleviating the need to introduce a special action and machinery around it. This yields a cleaner theory; for instance, progress is already built into our definition of interactive process. While this limits how much work a process can do in a time unit, the discretization of time can be arbitrarily fine. Whereas these calculi mostly use bisimulation to compare processes, our simulation relation is more forgiving when it comes to reasoning about nondeterministic choice. Since our theory operates on transition systems, as opposed to a language of processes, our theory is more general.

Focardi and Gorrieri's work on information-flow secure interaction is particularly related to ours [13]. Their security properties are bisimulation-based, with the \(\mathtt{H}\) part of the environment modeled explicitly as a process that binds all \(\mathtt{H}\) channels and only interacts on \(\mathtt{H}\) channels. In contrast, our environments are implicit, and can e.g. be any interactive process.

Timed I/O automata are real-time systems that synchronize through discrete, timeless actions [21]. Like our interactive processes, these systems are input total, and it is assumed that time can pass. However, systems are finite-state, and, like the process algebras, passage of time is separate from synchronization.

In summary, while our model of time is weaker than those in some other timed computation models (notably, dense time), time can be discretized as needed, and conflating output with passage of time greatly simplifies our theory.

Theories for secure composition (information-flow). With his seminal paper [26], McCullough sparked a study into information-flow secure composition of nondeterministic systems in the 80s that continues to this day [19, 24, 28, 35, 42, 44]. This work studies the relative merits of several trace-based formalizations of possibilistic progress-sensitive noninterference [3, 4], in terms of whether they are preserved under e.g. universal composition, sequential composition (a.k.a. cascade), and feedback. Whereas some properties are preserved under all of these [19], others fail for some combinators, most notably feedback [43]. These models are all untimed. It would be worthwhile to apply our timing model in these settings and explore how these security properties classify programs. Requiring that the presence of all output is observable is a good starting point, since this makes these properties timing-sensitive in our timing model. However, more work may be needed, since the system models differ subtly (e.g. they are not all input total). Our simulation relation is inspired by Rafnsson and Sabelfeld [35]. While their relation was designed to facilitate an inductive proof principle, ours is designed around a coinductive proof principle. Our simulation is simpler as a result.

Secure composition has also been studied in great detail in a process algebraic setting [12, 16, 17, 31, 36, 37]. Parallel composition is one of the defining features of process algebra, making compositional reasoning a key concern. In contrast to this work (which studies compositionality of parallel composition), our work studies compositionality of a language of combinators. Further, our model is timed, while these are not. Finally, the behavioral equivalence of choice is bisimulation, which we find to be too strict for possibilistic noninterference.

More recently, Mantel et al. explore secure composition in a shared-memory concurrent setting [2, 25]. They develop a security condition that is sensitive to the assumptions that each thread makes on whether other threads can read or write to shared variables. For instance, the right component of the Clark-Hunt example in Sect. 2 assumes that no other thread reads \(\texttt{x}\), and, thus, the two components cannot be securely composed, since the left component violates this assumption. Their approach is more fine-grained than ours, since compositionality is parameterized by individual environment assumptions of subcomponents. However, their system model is untimed, threads are arranged in a fixed, flat structure, communication is only via shared memory, and only parallel composition is considered. In contrast, our system model is timed, and our combinators enable modeling fairly arbitrary structures of interacting processes (including shared memory), as demonstrated in Sect. 7. Exploring whether this finer granularity can be introduced into our theory is a promising direction of future work.

All of these approaches consider only combinators that passively glue together two processes, facilitating interaction. In contrast, our combinators actually do something, e.g. maintain state, switch processes on or off, and transform messages. As a result, our theory presents a rich toolset for reasoning about secure composition, made even richer by its generic nature (arbitrary message types, combinators parameterized by functions, etc.).