1 Introduction

Canetti [3], and independently Pfitzmann and Waidner [11] propose security frameworks for reactive processes. Both frameworks have composition theorems, and are based on older definitional work. The initial ideal-model based definitional approach for secure function evaluation is informally proposed by Goldreich, Micali, and Wigderson in [6]. The first formalizations appear in Goldwasser and Levin [7], Micali and Rogaway [10], and Beaver [1]. Canetti [2] presents the first definition of security that is preserved under composition. See [2, 3] for an excellent background.

The basic approach of all these models is the same. An ideal functionality is defined that implicitly captures the functionality and security properties we expect from a real protocol. The real protocol is then said to be secure if it is indistinguishable from the ideal functionality by any efficient distinguisher. However, in an execution of the real protocol the adversary may influence the execution or extract information that it passes on to the distinguisher. Thus, we introduce a simulation adversary (simulator) that is given the same task, but when interacting with the ideal functionality. The ideal functionality is secure by inspection, so the simulation adversary can by definition not attack the ideal functionality in any meaningful way. Instead it must simulate a real attack to the distinguisher. The definition of security then says that the real protocol securely realizes the ideal functionality if for every real adversary there exists a simulation adversary such that no efficient distinguisher can distinguish: (1) an interaction with the real protocol and the real adversary from (2) an interaction with the ideal functionality and the simulation adversary.

The UC framework is an ambitious attempt to capture the security of a wide range of settings in a uniform way, but the original UC framework was flawed in several ways. The most recent version of the online paper [3] contains a discussion about the issues and pointers to relevant literature. However, the core ideas of the UC framework are correct, and there are no flaws in the basic instantiations needed to prove the security of practical protocols. In this paper we detail one possible instantiation, but before we do so, we point out the main areas where our particular instantiation is more restricted, and hence less complex, than the general framework.

Canetti assumes the existence of an “operating system” that takes care of the instantiation of subprotocols when needed. This is necessary to handle dynamically instantiated subprotocols, but in our application we may assume that all subprotocols are instantiated at the start of the execution. This means that we can view each instance of a subprotocol as a separate Turing machine that exists from the start of the execution and interacts with the invoking protocol using a predefined session identifier.

Canetti models an asynchronous communication network, where the adversary has the power to delete, modify, and insert any messages of his choice. To do this he must specify exactly what the adversary is allowed to do to messages passed in different ways between interactive Turing machines, which quickly becomes quite complex. We instead factor out all aspects of the communication network into a separate concrete “communication model”-machine. The real, ideal, and hybrid models are then defined solely by how certain machines are linked. The adversary is defined as any interactive Turing machine, and how the adversary can interact with other machines also follows implicitly from the definitions of the real and ideal communication models. With our approach there is also no need for session identifiers.

The above means that the real, ideal, and hybrid models can not only be illustrated by a graph of connected parties, they are graphs of Turing machines in a very tangible way, which makes the composition theorem almost trivial.

There are several ways to model corruption in cryptographic protocols. In this paper, we only consider static corruption, i.e., the adversary must decide which parties to corrupt before the execution starts. However, it is straightforward to extend the model to adaptive corruption as explained in Remark 2. Even dynamic adversaries could be handled in a similar way, so there is no inherent restriction to static adversaries.

1.1 Contribution

We present a precise and workable security framework using modularized definitions that are easily verified to be sound. Abstractions emerge in a natural way that are firmly grounded in the underlying definitions. Although our treatment may initially seem more complex than the description of the UC framework, the actual content is captured faithfully in simple drawings that are enough to understand the framework, and the composition theorem becomes almost trivial.

Explicit invertible transforms are introduced that can turn any hybrid model into a hybrid model with a single ideal functionality (or a real model). Thus, it suffices to consider how the security of such a protocol in our simplified UC framework relates to its security in any other security framework, in particular the UC framework. This also immediately generalizes the single composition theorem to allow multiple compositions.

We introduce a novel generalization of the UC framework and the other frameworks we are aware of, in that the definition of security captures the case where a hybrid protocol securely realizes another hybrid protocol, and not only ideal functionalities. This allows a novel type of proof that is not only based on securely realizing ideal functionalities and applying the composition theorem. We give natural examples where this technique is applicable.

The essential restriction in our framework compared to general UC is that the set of parties and the protocol, including all subprotocols and ideal functionalities used, are determined at the start of the execution.

1.2 Related Work

Several frameworks have been proposed to date, but we only mention the two frameworks that are perhaps closest to ours at a philosophical level.

Constructive cryptography was developed and proposed by Maurer and Renner [8, 9] independently of our work. The design of cryptographic primitives and protocols in this framework is viewed as the construction of an ideal resource from assumed or real resources. It shares with our framework the aims of achieving simplicity and eliminating irrelevant artefacts. We have not carried out a detailed analysis of the relations between their model and ours, but we are currently corresponding with the authors.

In subsequent, but independent, work, Canetti et al. [4] propose an alternative formalization of a simplified UC framework motivated by the same problems as ours, and to some extent they also use the same approach. Their motivation and the restrictions they introduce compared to the full UC framework are the same. Several features of their formalization that distinguish it from the UC framework are also similar, e.g., their explicit “router” corresponds to our “communication model”.

We consider the main difference between our framework and theirs to be that they use a top-down approach, whereas we gradually build the model from the bottom up. They explicitly relate their model to the general UC model. We instead provide transforms that allow us to relate our framework to any other framework with ease, since today there are many proposed security frameworks and it is nearly impossible to understand each of them sufficiently well to perform a valid comparison.

That said, we hope that the reader takes the time to read both papers, since they both attempt to capture the core ideas of the UC framework in a way that is easier to understand and use.

2 Interactive Turing Machines

Parties and algorithms are modeled as probabilistic Turing machines, but to be able to talk about multiple parties that interact with each other we need to augment this model with a notion of communication. We follow the approach of Goldreich [5] and Canetti [3] and define interactive Turing machines, but we replace the activation bit used by Goldreich by a slightly more complicated gadget to allow seamless treatment of multiparty protocols.

Definition 1

(Interactive Turing Machine). An interactive Turing machine (ITM) is a Turing machine with the following tapes and tape heads in addition to its work tapes: a read-only identity tape, a read-only security parameter tape, a read-once input tape, a write-once output tape, a read-once random tape, a write-once send head s, a read-once receive head r, and two single-bit read/write activity heads \(a_s\) and \(a_r\). The following restrictions apply to an ITM, where we use brackets to indicate the value stored in the cell pointed at by a tape head.

  1. If \(([a_s],[a_r])\in \{(0,0),(1,0)\}\), then it is inactive and can not change its state in a state transition, or read, write, or move on any tape.

  2. If \(([a_s],[a_r])=(0,1)\), then it is active and can change its state in a state transition.

  3. A special instruction allows it to atomically: set \(([a_s],[a_r])=(1,0)\) and become inactive.

Fig. 1.

The ITM’s \(\mathcal {M}_{0}\) and \(\mathcal {M}_{1}\) share activation and send/receive tapes. The send head of \(\mathcal {M}_{0}\) points to same tape as the receive head of \(\mathcal {M}_{1}\) and vice versa. A corresponding configuration is used for the activation tapes. The figure does not contain the other tapes of the ITM’s.

Note that a single ITM is not a complete computational model, since some tape heads do not have matching tapes. Two ITM’s are connected by adding the missing tapes and pairing the write-once send head of one party with the read-once receive head of the other and the activity head \(a_s\) of one party with the activity head \(a_r\) of the other. Intuitively, the activation tapes implement an “activation token” that is passed back and forth between the parties. This is illustrated in Fig. 1. We denote the set of all ITM’s by \(\mathsf {ITM}\).
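
To make the activation mechanism concrete, the following minimal Python sketch models two machines that share message queues and a single activation token, so that exactly one of them acts at a time. The class and method names are illustrative only and not part of the formal model.

```python
from collections import deque

class Machine:
    """Toy stand-in for an ITM: it may only act while it holds the activation token."""

    def __init__(self, name):
        self.name = name
        self.inbox = deque()  # plays the role of the shared receive tape
        self.peer = None      # the machine whose receive tape our send head writes to

    def step(self):
        # Read the pending message (if any), reply, and pass the token on, mirroring
        # the atomic "set ([a_s],[a_r]) = (1,0) and become inactive" instruction.
        msg = self.inbox.popleft() if self.inbox else ""
        print(f"{self.name} activated with {msg!r}")
        self.peer.inbox.append(f"ack from {self.name}")
        return self.peer  # the holder of the activation token after this step

def run(m0, m1, rounds=4):
    m0.peer, m1.peer = m1, m0
    active = m0                 # M0 is activated first
    for _ in range(rounds):
        active = active.step()  # exactly one machine is active at any time

if __name__ == "__main__":
    run(Machine("M0"), Machine("M1"))
```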

3 Graph of Interactive Turing Machines

To connect multiple ITM’s with each other without introducing extra tapes for each machine and thereby change the computational model, we introduce a gadget that plays the role of a router. A router is a Turing machine with several sets of tape heads that can share tapes with interactive Turing machines (ITM) or other routers.

Definition 2

(Router). An \(l\)-router is a Turing machine with write-once send heads denoted \(s_0,\ldots ,s_l\), read-once receive heads denoted \(r_0,\ldots ,r_l\), and single-bit read/write activity heads \(a_{s,i}\) and \(a_{r,i}\) for \(i\in [0,l]\) such that \(\sum _{i=0}^{l}([a_{s,i}]+[a_{r,i}])\in \{0,1\}\).

Active. If \([a_{r,i}]=1\) for some \(i\in [0,l]\), then it is active and proceeds as follows.

  1. To form a string \(w\) it reads and stores symbols from its ith receive tape using \(r_i\) until it encounters \(\bot \).

  2. If \(i=0\) then

    • if \(|w|\ge n\) and the last \(n\) bits of \(w\) represent an integer \(j\in [l]\), then it writes all but the last \(n\) bits of \(w\) to its jth send tape using \(s_j\), and

    • otherwise it writes \(\Diamond \Vert w\) to its 0th send tape using \(s_0\).

    If \(i\ne 0\), then it sets \(j=0\) and writes \(w\) and an \(n\)-bit representation of i to its 0th send tape using \(s_0\).

  3. It sets \(([a_{s,j}],[a_{r,i}])=(1,0)\) (as an atomic operation) to pass the activity token to the jth party.

Inactive. If \([a_{r,i}]=0\) for all \(i\in [0,l]\), then it is inactive and keeps its state and does not read, write, or move on any tape.

The use of routers in between ITM’s makes sure that an ITM activates another ITM (indirectly through the router) if and only if it first sends it a message. The message may of course be empty to simply pass activation. Note that the address of a message is appended to the end of the message. This may seem odd, but it turns out to be useful for technical reasons (see Appendix A.4 for details).

Due to the test in step 2, a message can only be copied from the 0th receive tape to the ith send tape for \(i>0\), or from the ith receive tape for \(i>0\) to the 0th send tape. Furthermore, data written to or read from the 0th tape contains the index of another pair of tapes as an \(n\)-bit appendix, whereas it does not for other tapes. Thus, the data read from the 0th receive tape may be badly formed, in which case the data is simply written back to the 0th send tape with the prefix \(\Diamond \). This prefix is a special symbol used only for this purpose and indicates badly formed inputs.
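
The forwarding rule of Definition 2 can be summarized in a short Python sketch. This is an illustration only, under simplifying assumptions: messages are bit strings, the address suffix is read as an \(n\)-bit binary integer, and the symbol \(\Diamond \) is represented by a reserved character.

```python
DIAMOND = "\u25c7"  # stand-in for the special symbol used for badly formed inputs

def route(i, w, n, l):
    """Sketch of step 2 of Definition 2 for one activation of an l-router.

    i : index of the receive tape on which the message w arrived.
    w : the message, as a bit string.
    Returns (j, out): the index of the send tape written to and the data written.
    """
    if i == 0:
        # Outbound: the last n bits must address one of the l outer slots.
        if len(w) >= n:
            j = int(w[-n:], 2)
            if 1 <= j <= l:
                return j, w[:-n]       # strip the address and forward to slot j
        return 0, DIAMOND + w          # badly formed: bounce back with prefix ◇
    # Inbound from slot i > 0: append the n-bit source address and deliver to slot 0.
    return 0, w + format(i, f"0{n}b")

if __name__ == "__main__":
    n, l = 4, 3
    print(route(0, "101" + format(2, "04b"), n, l))  # ('101' forwarded to slot 2)
    print(route(0, "1", n, l))                       # too short: bounced with ◇
    print(route(2, "110", n, l))                     # tagged with source address 2
```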

Remark 1

(Concatenation). Concatenations such as the one in Step 2 are common throughout this paper. Care has to be taken so that such concatenations do not, directly or indirectly, give rise to strings that can not be decoded uniquely into the original components. We can not solve this by simply stating that concatenation is a shorthand for an invertible encoding algorithm, since we need the associative property of concatenation to prove that routers and communication models “commute”. Fortunately, it is easy to see that there is no risk of ambiguous representations for most uses of concatenation.

To connect routers and ITM’s with each other we let them share tapes pairwise. We formalize this as follows.

Definition 3

(Slot of Interactive Turing Machine or Router). A tuple of heads of an ITM \((s,r,a_s,a_r)\) or a tuple of heads of a router \((s_i,r_i,a_{s,i},a_{r,i})\) is a slot. (Using notation from Definitions 1 and  2.)

Definition 4

(Linked). Two slots \((s,r,a_s,a_r)\) and \((s',r',a_s',a_r')\) are linked if there are four tapes such that the heads of each pair \((s,r')\), \((s',r)\), \((a_s,a_r')\), and \((a_s',a_r)\) point to the same tape and no other heads point to any of these tapes.

An ITM graph is simply a number of ITM’s that are linked to each other indirectly using routers. Note that a router of which the 0th slot is linked to an ITM effectively increases the number of slots of the ITM. From now on we take this view. A basic requirement for an ITM graph to be executable is that no ITM has any “dangling” tape heads.

Definition 5

(ITM Graph). An ITM graph is a set V of ITM’s, a set R of routers, and a set of additional tapes such that the slot of each ITM is linked to the 0th slot of a router, the 0th slot of each router is linked to the slot of an ITM, and every other slot of every router in R is linked to a slot of a different router in R. The set of all ITM graphs is denoted \(\mathsf {G}_{\mathsf {ITM}}\).

In other words, we use the routers to increase the number of slots of ITM’s and then link the slots of routers to each other to allow the ITM’s to communicate. Figure 2 illustrates this. The idea behind this approach is to restrict the notion of an ITM to Turing machines that have a fixed number of tapes. This avoids the need to change the computational model by adding tapes for parties in a protocol depending on how many parties there are.
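
The well-formedness condition of Definition 5 is easy to check mechanically. The following Python sketch uses a hypothetical representation of slots as tuples and verifies that every ITM slot is linked to the 0th slot of a router, that the 0th slot of every router is linked to an ITM, and that every remaining router slot is linked to a slot of a different router.

```python
def is_itm_graph(itms, routers, links):
    """Sketch of the condition of Definition 5.

    itms    : set of ITM names; the single slot of ITM M is the tuple (M,).
    routers : dict mapping a router name R to its number of outer slots l;
              its slots are (R, 0), ..., (R, l).
    links   : set of frozensets {slot, slot} describing which slots share tapes.
    """
    linked = {}
    for a, b in (tuple(link) for link in links):
        if a in linked or b in linked:
            return False  # a slot may take part in at most one link
        linked[a], linked[b] = b, a

    for m in itms:  # each ITM slot is linked to the 0th slot of some router
        other = linked.get((m,))
        if other is None or len(other) != 2 or other[0] not in routers or other[1] != 0:
            return False
    for r, l in routers.items():  # 0th slot goes to an ITM, all others to other routers
        zeroth = linked.get((r, 0))
        if zeroth is None or zeroth[0] not in itms:
            return False
        for i in range(1, l + 1):
            other = linked.get((r, i))
            if other is None or other[0] not in routers or other[0] == r:
                return False
    return True

if __name__ == "__main__":
    itms = {"M1", "M2"}
    routers = {"R1": 1, "R2": 1}
    links = {frozenset({("M1",), ("R1", 0)}), frozenset({("M2",), ("R2", 0)}),
             frozenset({("R1", 1), ("R2", 1)})}
    print(is_itm_graph(itms, routers, links))  # True
```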

Fig. 2.

An ITM graph consisting of parties \(\mathcal {M}_{1}\), \(\mathcal {M}_{2}\), and \(\mathcal {M}_{3}\) linked by three unnamed routers providing three slots each. The 0th slot of each router is marked by an arrow. We use this convention throughout this paper.

Definition 6

(Initializing an ITM Graph). To initialize an ITM graph with ITM’s \(\mathcal {M}_{1},\ldots ,\mathcal {M}_{k}\), the identity tape of \(\mathcal {M}_{j}\) is assigned the integer j in binary, every cell of every activity tape is set to zero, every cell of every random tape is set to a randomly chosen bit, every cell of every other tape is set to \(\bot \), and tape heads pointing to the same tape are set to point to the same cell.

We say that a tape of an initialized ITM graph is assigned a string x when we fill the consecutive cells starting at the cell pointed to by the tape heads with x. This is done in the reachable direction for directed tape heads and in some canonical direction for other tape heads.

To simplify the analysis of running times, we ignore the state transitions occurring in routers when stating running times. This does not change any results about concrete protocols in any essential way, since only a small constant number of routers are used and they all run in linear time in the messages forwarded.

Definition 7

(Executing an ITM Graph). An ITM graph with ITM’s \(\mathcal {M}_{1},\ldots ,\mathcal {M}_{k}\), that has been initialized, is executed starting at \(\mathcal {M}_{1}\) on security parameter \(n\) and input \(z\) to \(\mathcal {M}_{1}\) as follows.

  1. Assign \(1^n\) to the security parameter tape of \(\mathcal {M}_{j}\) for \(j\in [k]\).

  2. Set the input tape of \(\mathcal {M}_{1}\) to \(z\).

  3. Set \([a_r]=1\), where \(a_r\) is the receiving activity head of \(\mathcal {M}_{1}\).

  4. Repeatedly execute the transition functions of all ITM’s in unison.

Note that due to the requirement that an ITM or a router must be active to change its state, or to read, write, or move on a tape, effectively a single machine is executing at any given time.

Definition 8

(Bounding the Running Time). Let G be an ITM graph and let X be a subset of the ITM’s in G. We say that the running time of G is bounded at X by \(T_{X}\) if the number of active state transitions taking place in ITM’s in X is bounded by \(T_{X}\).

The above gives a solid foundation for defining a simple and explicit version of the UC framework, but the notation is cumbersome. From now on we say that two ITM’s are linked if two or more slots of their routers are linked. This allows us to take an abstract view of an ITM graph as a set of ITM’s V and a set of links E describing how the ITM’s are connected. If two machines are linked, then they can exchange messages and activate each other.

However, an ITM with a set of slots not only expects to be linked to some other ITM’s, it expects that particular slots are used to form links to particular slots of other ITM’s. Thus, we must label the slots of each ITM and introduce notation for forming a link using two such slots. Suppose that the ITM’s \(\mathcal {M}_{1}\) and \(\mathcal {M}_{2}\) have slots \({{\scriptstyle [a]}}\) and \({{\scriptstyle [b]}}\) respectively. Then \(\langle \mathcal {M}_{1}{{\scriptstyle [a]}},\mathcal {M}_{2}{{\scriptstyle [b]}}\rangle \) denotes a link formed between slot \({{\scriptstyle [a]}}\) of \(\mathcal {M}_{1}\) and slot \({{\scriptstyle [b]}}\) of \(\mathcal {M}_{2}\). Due to the restrictions on ITM’s, the definition of a router, and the starting state of an initialized ITM graph, this guarantees that exactly one ITM is active at any given time. In figures, we now draw the machines as circles instead of squares to indicate that we have abstracted from the details of communication.

Throughout we use the convention that a small letter in a slot, e.g., a in \({{\scriptstyle [a]}}\), is a variable over the set of all labels of slots, and a capital letter is the label given verbatim, e.g., \(\mathcal {M}\) in \({{\scriptstyle [\mathcal {M}]}}\).

4 Entities of Models

Before we introduce the real, ideal, and hybrid models, we introduce the ITM’s used to form these models. To be able to talk about different types of ITM’s below without ambiguity we mark them. This can be formalized by adding an additional read-only tape on which the marking is written when the ITM is initialized, but we refrain from formalizing this to avoid clutter. Furthermore, each ITM of a given type has dedicated named slots.

An implementation of a function in software typically checks that the input is of a given form and returns an error code or throws an exception otherwise. It is then the responsibility of the caller of the function to deal with the error or exception. We mirror this in that if an ITM receives a message \(w\) on a slot \({{\scriptstyle [a]}}\) that does not match the explicitly stated format of valid messages, then \(\Diamond \Vert w\) is written to \({{\scriptstyle [a]}}\). We have already used this convention in Definition 2.

A communication model captures how the parties of a protocol can communicate in the presence of an adversary.

Definition 9

(Communication Model). A \(k\)-communication model \(\mathcal {C}\) is an ITM marked as a “communication model” with one ideal functionality slot \({{\scriptstyle [\mathcal {F}]}}\), party slots \({{\scriptstyle [\mathcal {P}_{1}]}},\ldots ,{{\scriptstyle [\mathcal {P}_{k}]}}\), and an adversary slot \({{\scriptstyle [\mathcal {A}]}}\). If \(\Diamond \Vert w\) is read from \({{\scriptstyle [\mathcal {P}_{i}]}}\) or \({{\scriptstyle [\mathcal {F}]}}\), then \(\Diamond \Vert w\) is written to \({{\scriptstyle [\mathcal {A}]}}\).

The adversary slot is used by an adversary to influence the behaviour of the communication model, e.g., if the communication model represents the Internet, then the adversary can insert, delay, or remove messages. The party slots are used by parties to communicate through the communication model. The ideal functionality slot is used to communicate with an ideal functionality. Note that the above definition implies that whenever a party or an ideal functionality refuses to accept an input, then the adversary is informed about this incident and activated. When no ideal functionality is needed we tacitly assume that an ideal functionality that refuses any input is used.

Definition 10

(Ideal Functionality). An ideal functionality \(\mathcal {F}\) is an ITM marked as an “ideal functionality” with a single communication slot \({{\scriptstyle [\mathcal {C}]}}\).

The communication slot is used by the ideal functionality both to accept inputs and to return outputs.

Definition 11

(Party). An f-party \(\mathcal {P}\) is an ITM marked “party” with an environment slot \({{\scriptstyle [\mathcal {Z}]}}\), a communication slot \({{\scriptstyle [\mathcal {C}]}}\), f subparty slots \({{\scriptstyle [\mathcal {U}_{1}]}},\ldots ,{{\scriptstyle [\mathcal {U}_{f}]}}\), and an adversary slot \({{\scriptstyle [\mathcal {A}]}}\). When \(f=0\) we simply say that \(\mathcal {P}\) is a party.

The subparty slots are used in the hybrid model to formalize access to subprotocols and ideal functionalities. The adversary slot is only used by corrupted parties. If it is not used in the formation of a model, then we assume that it is simply linked to an ITM that does not accept any input.

Definition 12

(Protocol). A \((k,f)\)-protocol \(\pi \) is a list \((\mathcal {P}_{1},\ldots ,\mathcal {P}_{k})\) of f-parties. When \(f=0\) we simply say that \(\pi \) is a \(k\)-protocol (or protocol when \(k\) is clear from the context).

Fig. 3.

To the left a 3-communication model \(\mathcal {C}\) with ideal functionality slot \({{\scriptstyle [\mathcal {F}]}}\), adversary slot \({{\scriptstyle [\mathcal {A}]}}\), and party slots \({{\scriptstyle [\mathcal {P}_{1}]}}\), \({{\scriptstyle [\mathcal {P}_{2}]}}\), and \({{\scriptstyle [\mathcal {P}_{3}]}}\). In the middle an ideal functionality \(\mathcal {F}\) with a single communication slot \({{\scriptstyle [\mathcal {C}]}}\). To the right a 2-party with subparty slots \({{\scriptstyle [\mathcal {U}_{1}]}}\) and \({{\scriptstyle [\mathcal {U}_{2}]}}\), communication slot \({{\scriptstyle [\mathcal {C}]}}\), and environment slot \({{\scriptstyle [\mathcal {Z}]}}\).

Fig. 4.

To the left a (3, 2)-adversary with a communication slot \({{\scriptstyle [\mathcal {C}]}}\), subadversary slots \({{\scriptstyle [\mathcal {A}_{1}]}}\) and \({{\scriptstyle [\mathcal {A}_{2}]}}\), an environment slot \({{\scriptstyle [\mathcal {Z}]}}\), and corrupted party slots \({{\scriptstyle [\mathcal {P}_{1}^*]}}\), \({{\scriptstyle [\mathcal {P}_{2}^*]}}\), and \({{\scriptstyle [\mathcal {P}_{3}^*]}}\). To the right a corrupted 2-party \(\mathcal {P}_{}^*\) with a communication slot \({{\scriptstyle [\mathcal {C}]}}\), subparty slots \({{\scriptstyle [\mathcal {U}_{1}]}}\) and \({{\scriptstyle [\mathcal {U}_{2}]}}\), an adversary slot \({{\scriptstyle [\mathcal {A}]}}\), and an environment slot \({{\scriptstyle [\mathcal {Z}]}}\).

Definition 13

(Adversary). A \((k,f)\)-adversary \(\mathcal {A}\) is an ITM marked as an “adversary” with a communication slot \({{\scriptstyle [\mathcal {C}]}}\), an environment slot \({{\scriptstyle [\mathcal {Z}]}}\), f subadversary slots \({{\scriptstyle [\mathcal {A}_{1}]}},\ldots ,{{\scriptstyle [\mathcal {A}_{f}]}}\), and \(k\) corrupted party slots \({{\scriptstyle [\mathcal {P}_{1}^*]}},\ldots ,{{\scriptstyle [\mathcal {P}_{k}^*]}}\). When \(f=0\) we simply say that \(\mathcal {A}\) is a \(k\)-adversary.

The corrupted party slots are used to communicate with corrupted parties in protocols. Depending on which parties, and how many parties, are corrupted some of these slots may remain unused. To meet the requirement that a model is an ITM graph we assume that each such slot is linked to an ITM that does not accept any input. Figures 3 and 4 illustrate a communication model, an ideal functionality, a party, an adversary, and a corrupt party.

5 Real Free Models

The real communication model formalizes a network in which the adversary can read, delete, modify, and insert any message of its choice. The Internet is an example of such a network.

Definition 14

(Real Communication Model). The real \(k\)-communication model \(\mathcal {N}_{k}\) is defined as follows.

  • If \(w\) is read from \({{\scriptstyle [\mathcal {P}_{i}]}}\), where \(i\in [k]\), then \(\mathcal {P}_{i}\Vert w\) is written to \({{\scriptstyle [\mathcal {A}]}}\).

  • If \(\mathcal {P}_{i}\Vert w\) is read from \({{\scriptstyle [\mathcal {A}]}}\), where \(i\in [k]\), then \(w\) is written to \({{\scriptstyle [\mathcal {P}_{i}]}}\).
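
The two rules of Definition 14 amount to tagging and untagging messages with the identity of a party. The Python sketch below illustrates one activation of \(\mathcal {N}_{k}\); the string encoding of \(\mathcal {P}_{i}\Vert w\) and the bouncing of badly formed adversarial input with the prefix \(\Diamond \) (the general convention stated in Sect. 4) are simplifying assumptions of the sketch.

```python
def real_comm_step(slot, msg, k):
    """Sketch of one activation of the real k-communication model N_k (Definition 14).

    slot : ("P", i) for a party slot or ("A",) for the adversary slot.
    msg  : the string read from that slot.
    Returns (destination slot, string written).
    """
    if slot[0] == "P":                        # traffic from party i is handed to the adversary
        return ("A",), f"P{slot[1]}||{msg}"
    # Traffic from the adversary must be addressed as "Pi||w" with i in [k].
    prefix, sep, w = msg.partition("||")
    if sep and prefix.startswith("P") and prefix[1:].isdigit() and 1 <= int(prefix[1:]) <= k:
        return ("P", int(prefix[1:])), w      # deliver w to party i
    return ("A",), "\u25c7" + msg             # badly formed input is bounced back with ◇

if __name__ == "__main__":
    print(real_comm_step(("P", 2), "hello", k=3))    # (('A',), 'P2||hello')
    print(real_comm_step(("A",), "P2||hello", k=3))  # (('P', 2), 'hello')
```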

A real free model describes a protocol that executes over a real communication model. We define a map that combines a communication model, parties, and an adversary into a graph of linked ITM’s. Recall that \(\langle \mathcal {M}_{0}{{\scriptstyle [a]}},\mathcal {M}_{1}{{\scriptstyle [b]}}\rangle \) denotes a link between slot \({{\scriptstyle [a]}}\) of \(\mathcal {M}_{0}\) and slot \({{\scriptstyle [b]}}\) of \(\mathcal {M}_{1}\).

Definition 15

(Real Model Map). The real \((k,I,f)\)-model map is the map \(\mathscr {R}_{k,I,f}:(\pi ,\mathcal {A},\pi ^{*})\mapsto (V,E)\), where \(\pi =(\mathcal {P}_{1},\ldots ,\mathcal {P}_{k})\) is a \((k,f)\)-protocol, \(\mathcal {A}\) is a \((k,f)\)-adversary, \(I\subset [k]\) is the set of indices of corrupted parties, and \(\pi ^{*}=\{\mathcal {P}_{i}^*\}_{i\in I}\) is a set of corrupted f-parties, defined by

$$\begin{aligned} V=&\{\mathcal {N}_{k},\mathcal {A}\}\cup \bigcup \nolimits _{i\notin I}\{\mathcal {P}_{i}\}\cup \bigcup \nolimits _{i\in I}\{\mathcal {P}_{i}^*\}\quad \text {and}\\ E=&\big \{\langle \mathcal {A}{{\scriptstyle [\mathcal {C}]}},\mathcal {N}_{k}{{\scriptstyle [\mathcal {A}]}}\rangle \big \}\cup \bigcup \nolimits _{i\notin I}\big \{\langle \mathcal {P}_{i}{{\scriptstyle [\mathcal {C}]}},\mathcal {N}_{k}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle \big \}\\&\cup \bigcup \nolimits _{i\in I}\big \{\langle \mathcal {P}_{i}^*{{\scriptstyle [\mathcal {C}]}},\mathcal {N}_{k}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle ,\langle \mathcal {P}_{i}^*{{\scriptstyle [\mathcal {A}]}},\mathcal {A}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle \big \}. \end{aligned}$$

Definition 16

(Real Free Model). A real free \((k,I,f)\)-model M is an output of the real free \((k,I,f)\)-model map. If \(f=0\), then we simply say that M is a real free \((k,I)\)-model.

We say that the real model is free, since the parties and the adversary in it have free environment slots (and possibly free subparty or subadversary slots), i.e., a real free model is not an ITM graph and can not be executed. Figures 5 and 6 illustrate real free models without and with corruption.
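
For concreteness, the real model map of Definition 15 can be phrased as a small graph-building procedure. The Python sketch below constructs the vertex set V and the link set E for given parties, corrupted parties, and an adversary; the machine names, the string encoding of slots, and the use of the label \({{\scriptstyle [\mathcal {P}_{i}^*]}}\) for the adversary's corrupted party slots are illustrative choices of the sketch.

```python
def real_model_map(k, I, parties, corrupted, adversary="A", comm="N_k"):
    """Sketch of the real (k, I, 0)-model map of Definition 15.

    parties   : dict i -> name of the party P_i (used for i not in I)
    corrupted : dict i -> name of the corrupted party P_i* (used for i in I)
    Returns the vertex set V and the set of links E, where a slot is written
    "Machine[label]" and a link is a pair of slots.
    """
    V = ({comm, adversary}
         | {parties[i] for i in range(1, k + 1) if i not in I}
         | {corrupted[i] for i in I})
    E = {(f"{adversary}[C]", f"{comm}[A]")}
    for i in range(1, k + 1):
        if i not in I:
            E.add((f"{parties[i]}[C]", f"{comm}[P{i}]"))
        else:
            E.add((f"{corrupted[i]}[C]", f"{comm}[P{i}]"))        # corrupted party on the network
            E.add((f"{corrupted[i]}[A]", f"{adversary}[P{i}*]"))  # and linked to the adversary
    return V, E

if __name__ == "__main__":
    V, E = real_model_map(3, {3}, {1: "P1", 2: "P2", 3: "P3"}, {3: "P3*"})
    print(sorted(V))
    print(sorted(E))
```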

Fig. 5.

A real free \((3,\emptyset )\)-model \(\mathscr {R}_{3,\emptyset ,0}(\pi ,\mathcal {A},\emptyset )\) with a real 3-communication model \(\mathcal {N}_{3}\), 3-protocol \(\pi =(\mathcal {P}_{1},\mathcal {P}_{2},\mathcal {P}_{3})\), and real 3-adversary \(\mathcal {A}\).

Fig. 6.

The real (3, I)-model \(\mathscr {R}_{3,I,0}(\pi ,\mathcal {A},\pi ^{*})\) with indices of corrupted parties \(I=\{3\}\), real 3-communication model \(\mathcal {N}_{3}\), 3-protocol \(\pi =(\mathcal {P}_{1},\mathcal {P}_{2},\mathcal {P}_{3})\), real 3-adversary \(\mathcal {A}\), and set of corrupted parties \(\pi ^{*}=\{\mathcal {P}_{3}^*\}\). Note the link between \(\mathcal {A}\) and the corrupted party  \(\mathcal {P}_{3}^*\).

6 Ideal Free Models

The ideal model formalizes a protocol execution in an ideal world where there is an ideal functionality, i.e., a trusted party that performs some service. The trusted party is simply an ITM executing a program, and it communicates with the parties through the ideal communication model.

The ideal communication model below captures the fact that the adversary may decide if and when it would like to deliver a message from the ideal functionality to a party, but it cannot read the contents of the communication between parties and the ideal functionality.

Definition 17

(Ideal Communication Model). The ideal \(k\)-communication model \(\mathcal {I}_{k}\) is defined as follows.

  • If \(\mathcal {F}\Vert m\) is read from \({{\scriptstyle [\mathcal {A}]}}\), then \(\mathcal {S}\Vert m\) is written to \({{\scriptstyle [\mathcal {F}]}}\).

  • If \(\mathcal {S}\Vert m\) is read from \({{\scriptstyle [\mathcal {F}]}}\), then \(\mathcal {F}\Vert m\) is written to \({{\scriptstyle [\mathcal {A}]}}\).

  • If \(w\) is read from \({{\scriptstyle [\mathcal {P}_{i}]}}\), then \(\mathcal {P}_{i}\Vert w\) is written to \({{\scriptstyle [\mathcal {F}]}}\).

  • If \(w\Vert (\mathcal {P}_{j},w_{j})_{j\in J}\Vert e\) is read from \({{\scriptstyle [\mathcal {F}]}}\), where \(J\subset [k]\), then for \(j\in J\):

    1. \(\tau _j\) is chosen randomly, and

    2. \((\mathcal {P}_{j},w_{j}\Vert e)\) is stored in a database under \(\tau _j\).

    Then \(w\Vert (\mathcal {P}_{j},\tau _j)_{j\in J}\Vert e\) is written to \({{\scriptstyle [\mathcal {A}]}}\).

  • If \(\tau \) is read from \({{\scriptstyle [\mathcal {A}]}}\) and \((\mathcal {P}_{j},w\Vert e)\) is stored under \(\tau \) in the database, then \(w\Vert e\) is written to \({{\scriptstyle [\mathcal {P}_{j}]}}\).
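
The delayed-delivery mechanism of Definition 17, where the adversary receives random handles and decides when, but not what, is delivered, can be sketched as follows. This is an illustration only; the tag generation, the removal of delivered entries, and the tuple encodings are simplifying assumptions.

```python
import secrets

class IdealCommSketch:
    """Sketch of the delayed delivery mechanism of the ideal communication model."""

    def __init__(self):
        self.db = {}  # tag tau -> (receiver index j, payload w_j || e)

    def from_functionality(self, w, recipients, e=""):
        """recipients: list of (party index j, message w_j). Returns what the adversary sees."""
        tagged = []
        for j, w_j in recipients:
            tau = secrets.token_hex(8)   # random handle tau_j
            self.db[tau] = (j, w_j + e)  # store (P_j, w_j || e) in the database
            tagged.append((j, tau))
        return (w, tagged, e)            # w || (P_j, tau_j)_{j in J} || e goes to the adversary

    def from_adversary(self, tau):
        """The adversary releases the message stored under tau (if any) to its receiver."""
        if tau in self.db:
            j, payload = self.db.pop(tau)
            return ("P", j), payload
        return None

if __name__ == "__main__":
    comm = IdealCommSketch()
    leak = comm.from_functionality("public part", [(1, "secret for P1"), (2, "secret for P2")])
    print(leak)                                # the adversary sees handles, not contents
    _, handles, _ = leak
    print(comm.from_adversary(handles[0][1]))  # delivered to P1 only when the adversary says so
```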

In our thesis we use an authenticated bulletin board for communication. Authenticated channels are trivial to define using an ideal functionality. Although we could absorb this into a separate communication model, this makes little sense.

Definition 18

(Authenticated Channels Functionality). The authenticated channels functionality \({\mathcal {F}}_{auth}\) repeatedly reads an input of the form \(\mathcal {P}_{i}\Vert (\mathcal {P}_{j},m)\) from \({{\scriptstyle [\mathcal {C}]}}\) and writes \((\mathcal {P}_{j},\mathcal {P}_{i}\Vert m)\Vert (\mathcal {P}_{j},\mathcal {P}_{i}\Vert m)\) to \({{\scriptstyle [\mathcal {C}]}}\).

In most formalizations the lengths of messages are provided to the simulation adversary by the communication model. This is needed to prove the security of most protocols, since without it the ideal functionality could hide the lengths of messages from the simulation adversary (something that would be impossible to achieve in a real protocol). Our formalization requires the definition of each ideal functionality to provide the lengths explicitly. However, for concrete protocols this is rarely needed, since the lengths of messages can be derived by the simulation adversary from the security parameter.

Definition 19

(Dummy Party). A dummy party is a party that writes any input on \({{\scriptstyle [\mathcal {Z}]}}\) to \({{\scriptstyle [\mathcal {C}]}}\), and writes any input on \({{\scriptstyle [\mathcal {C}]}}\) to \({{\scriptstyle [\mathcal {Z}]}}\).

Dummy parties are introduced to provide identical interfaces to the parties in real models and to ideal functionalities. There may be many copies of the dummy party. Dummy parties are denoted by \(\mathcal {Q}_{i}\) to distinguish them from real parties and may be thought of as labels for links. We denote a dummy \(k\)-protocol by \((\mathcal {Q}_{1},\ldots ,\mathcal {Q}_{k})\).

The ideal free model below captures the setup one wishes to realize, i.e., the environment may interact with the ideal functionality \(\mathcal {F}\), except that the adversary \(\mathcal {S}\) has some control over how the communication model behaves.

Definition 20

(Ideal Free Model Map). The ideal free \((k,I)\)-model map is the map \(\mathscr {I}_{k,I}:(\mathcal {F},\mathcal {S},\sigma ^{*})\mapsto (V,E)\), where \(I\subset [k]\) is a set of indices of corrupted parties, \(\mathcal {F}\) is an ideal functionality, \(\mathcal {S}\) is a simulation \(k\)-adversary, and \(\sigma ^{*}=\{\mathcal {Q}_{i}^*\}_{i\in I}\) is a set of corrupted parties, defined by

$$\begin{aligned} V=&\{\mathcal {I}_{k},\mathcal {F},\mathcal {S}\}\cup \bigcup \nolimits _{i\notin I}\{\mathcal {Q}_{i}\}\cup \bigcup \nolimits _{i\in I}\{\mathcal {Q}_{i}^*\},\quad \text {and}\\ E=&\big \{\langle \mathcal {I}_{k}{{\scriptstyle [\mathcal {F}]}},\mathcal {F}{{\scriptstyle [\mathcal {C}]}}\rangle ,\langle \mathcal {S}{{\scriptstyle [\mathcal {C}]}},\mathcal {I}_{k}{{\scriptstyle [\mathcal {A}]}}\rangle \big \}\cup \bigcup \nolimits _{i\notin I}\big \{\langle \mathcal {Q}_{i}{{\scriptstyle [\mathcal {C}]}},\mathcal {I}_{k}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle \big \}\\&\cup \bigcup \nolimits _{i\in I}\big \{\langle \mathcal {Q}_{i}^*{{\scriptstyle [\mathcal {C}]}},\mathcal {I}_{k}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle ,\langle \mathcal {Q}_{i}^*{{\scriptstyle [\mathcal {A}]}},\mathcal {S}{{\scriptstyle [\mathcal {P}_{i}]}}\rangle \big \} . \end{aligned}$$
Fig. 7.

An ideal free \((3,\emptyset )\)-model \(\mathscr {I}_{3,\emptyset }(\mathcal {F},\mathcal {S},\emptyset )\) with ideal 3-communication model \(\mathcal {I}_{3}\), dummy 3-protocol \((\mathcal {Q}_{1},\mathcal {Q}_{2},\mathcal {Q}_{3})\), ideal functionality \(\mathcal {F}\), and simulation 3-adversary \(\mathcal {S}\).

Definition 21

(Ideal Free Model). An ideal free \((k,I)\)-model is an output of the ideal free \((k,I)\)-model map.

Figure 7 illustrates an ideal free model without corruption.

7 Hybrid Free Models

A hybrid free model formalizes the execution of a real protocol that has access to other real subprotocols, ideal functionalities, or hybrid protocols. It can both be used to describe protocols that need setup assumptions (or trusted parties) for specific tasks and as a tool to construct protocols in a modular way.

Note that the following definitions give a joint inductive definition of the hybrid free model map and hybrid free models.

Definition 22

(Hybrid Free Model). A hybrid free \((k,I,f)\)-model is an output of the hybrid free \((k,I,f)\)-model map \(\mathscr {H}_{k,I,f}\) of Definition 26 below. We drop f from our notation if it is zero.

Definition 23

(Free Model). A free \((k,I,f)\)-model is a real free \((k,I,f)\)-model, a hybrid free \((k,I,f)\)-model, or provided \(f=0\), an ideal free \((k,I)\)-model.

A free model is complete if it does not have any dangling subparty slots. Thus, every free ideal model and every real/hybrid \((k,I)\)-model is complete.

Definition 24

(Complete Free Model). A free \((k,I,0)\)-model is complete.

Definition 25

(Root of Free Model). The root of a free \((k,I,f)\)-model \((V,E)\) is the unique pair of a protocol and an adversary \(((\mathcal {X}_{1},\ldots ,\mathcal {X}_{k}),\mathcal {A})\) such that \(\mathcal {X}_{i}\in V\) is a party with a free slot \({{\scriptstyle [\mathcal {Z}]}}\) for \(i\in [k]\) and \(\mathcal {A}\in V\) is an adversary with a free slot \({{\scriptstyle [\mathcal {Z}]}}\).

We stress that if \(i\in I\), then \(\mathcal {X}_{i}\) is a corrupted party usually denoted \(\mathcal {P}_{i}^*\) (or \(\mathcal {Q}_{i}^*\)), and otherwise it is an uncorrupted party \(\mathcal {P}_{i}\) (or \(\mathcal {Q}_{i}\)) defined by the original protocol or dummy protocol of the ideal functionality.

Definition 26

(Hybrid Free Model Map). The hybrid free \((k,I,f)\)-model map is the map \(\mathscr {H}_{k,I,f}\) with \(f>0\) that takes as input:

  • A real free \((k,I,f)\)-model (VE) with root \(((\mathcal {X}_{1},\ldots ,\mathcal {X}_{k}),\mathcal {A})\).

  • A complete free \((k,I)\)-model \((V_j,E_j)\) with root \(((\mathcal {X}_{j,1},\ldots ,\mathcal {X}_{j,k}),\mathcal {A}_{j})\) for \(j\in [f]\).

and outputs a complete free model \((V',E')\) where

$$\begin{aligned} V'&=V\cup \bigcup \nolimits _{j\in [f]}V_j\quad \text {and}\\ E'&=E\cup \bigcup \nolimits _{j\in [f]} \left( E_j \cup \big \{\langle \mathcal {A}{{\scriptstyle [\mathcal {A}_{j}]}},\mathcal {A}_{j}{{\scriptstyle [\mathcal {Z}]}}\rangle \big \} \cup \bigcup \nolimits _{i\in [k]}\big \{\langle \mathcal {X}_{i}{{\scriptstyle [\mathcal {U}_{j}]}},\mathcal {X}_{j,i}{{\scriptstyle [\mathcal {Z}]}}\rangle \big \}\right) . \end{aligned}$$

8 Environments and Models

To be able to execute a free model we need an environment that connects to the free slots of the root protocol and root adversary. We formalize the environment in which a protocol is executed as an ITM (Fig. 8).

Fig. 8.

A hybrid free model \(\mathscr {H}_{3,I,1}\big (\mathscr {R}_{3,I,1}(\pi ,\mathcal {A},\emptyset ),\mathscr {I}_{3,I}(\mathcal {F},\mathcal {S},\emptyset )\big )\) with indices of corrupted parties \(I=\emptyset \), real 3-communication model \(\mathcal {N}_{3}\), root (3, 1)-protocol \(\pi =(\mathcal {P}_{1},\mathcal {P}_{2},\mathcal {P}_{3})\), root (3, 1)-adversary \(\mathcal {A}\), ideal 3-communication model \(\mathcal {I}_{3}\), dummy 3-protocol \((\mathcal {Q}_{1},\mathcal {Q}_{2},\mathcal {Q}_{3})\), ideal functionality \(\mathcal {F}\), and simulation 3-subadversary \(\mathcal {S}\).

Definition 27

(Environment). A \(k\)-environment is an ITM marked as an “environment” with party slots \({{\scriptstyle [\mathcal {P}_{1}]}},\ldots ,{{\scriptstyle [\mathcal {P}_{k}]}}\) and an adversary slot \({{\scriptstyle [\mathcal {A}]}}\).

Figure 9 illustrates an environment. The environment provides the data used by the parties in the protocol and is always the first ITM to be activated during the execution of the model.

Fig. 9.

A 3-environment with party slots \({{\scriptstyle [\mathcal {P}_{1}]}}\), \({{\scriptstyle [\mathcal {P}_{2}]}}\), and \({{\scriptstyle [\mathcal {P}_{3}]}}\), and an adversary slot \({{\scriptstyle [\mathcal {A}]}}\).

Definition 28

(Environment Map). The \((k,I)\)-environment map \(\mathscr {Z}_{k}:(M,\mathcal {Z})\mapsto (V',E')\) takes a complete free \((k,I)\)-model \(M=(V,E)\) with root \(\big ((\mathcal {X}_{1},\ldots ,\mathcal {X}_{k}),\mathcal {A}\big )\) and a \(k\)-environment \(\mathcal {Z}\) as input and outputs \((V',E')\) where

$$\begin{aligned} V'&=V\cup \{\mathcal {Z}\} \quad \text {and}\\ E'&=E\cup \big \{\langle \mathcal {Z}{{\scriptstyle [\mathcal {A}]}},\mathcal {A}{{\scriptstyle [\mathcal {Z}]}}\rangle \big \}\cup \bigcup \nolimits _{i\in [k]}\big \{\langle \mathcal {Z}{{\scriptstyle [\mathcal {P}_{i}]}},\mathcal {X}_{i}{{\scriptstyle [\mathcal {Z}]}}\rangle \big \}. \end{aligned}$$

Definition 29

(Model). A \((k,I)\)-model is an output of the \((k,I)\)-environment map.

Note that a model is an ITM graph, which means that it can be executed. In an execution of a model the environment is always activated first with some auxiliary input. Figures 10, 11, and 12 illustrate a real model, an ideal model, and a hybrid model respectively. We abuse notation and write \(\mathscr {R}_{k,I,f}(\pi ,\mathcal {A},\pi ^{*},\mathcal {Z})\) instead of \(\mathscr {Z}_{k}(\mathscr {R}_{k,I,f}(\pi ,\mathcal {A},\pi ^{*}),\mathcal {Z})\) and correspondingly for ideal and hybrid free model maps.

Fig. 10.

A real \((3,\emptyset )\)-model \(\mathscr {R}_{3,\emptyset ,0}(\pi ,\mathcal {A},\emptyset ,\mathcal {Z})\) with real 3-communication model \(\mathcal {N}_{3}\), 3-protocol \(\pi =(\mathcal {P}_{1},\mathcal {P}_{2},\mathcal {P}_{3})\), real 3-adversary \(\mathcal {A}\), and 3-environment \(\mathcal {Z}\).

Fig. 11.

An ideal \((3,\emptyset )\)-model \(\mathscr {I}_{3,\emptyset }(\mathcal {F},\mathcal {S},\emptyset ,\mathcal {Z})\) with ideal 3-communication model \(\mathcal {I}_{3}\), dummy 3-protocol \((\mathcal {Q}_{1},\mathcal {Q}_{2},\mathcal {Q}_{3})\), ideal functionality \(\mathcal {F}\), simulation 3-adversary \(\mathcal {S}\), and 3-environment \(\mathcal {Z}\).

Fig. 12.

A hybrid model \(\mathscr {H}_{3,I,1}\big (\mathscr {R}_{3,I,1}(\pi ,\mathcal {A},\emptyset ),\mathscr {I}_{3,I}(\mathcal {F},\mathcal {S},\emptyset ),\mathcal {Z}\big )\) with real 3-communication model \(\mathcal {N}_{3}\), root (3, 1)-protocol \(\pi =(\mathcal {P}_{1},\mathcal {P}_{2},\mathcal {P}_{3})\), root (3, 1)-adversary \(\mathcal {A}\), ideal 3-communication model \(\mathcal {I}_{3}\), dummy 3-protocol \((\mathcal {Q}_{1},\mathcal {Q}_{2},\mathcal {Q}_{3})\), ideal functionality \(\mathcal {F}\), simulation 3-subadversary \(\mathcal {S}\), and 3-environment \(\mathcal {Z}\).

9 Classes of Adversaries

We need to bound the running times of the adversary, the simulation adversary, and the environment to give a definition of security. Several ways to do this have been proposed in the literature. We choose a simple solution that gives concrete bounds on the security reductions. Given a model \(M=(V,E)\) with an adversary \(\mathcal {H}\) (real, ideal, or hybrid) and environment \(\mathcal {Z}\) we say that:

  1. \(\mathcal {H}\) has running time \(T_{\mathcal {H}}\) if the running time of M is bounded by \(T_{\mathcal {H}}\) at \(V\setminus \{\mathcal {Z}\}\).

  2. \(\mathcal {Z}\) has running time \(T_{\mathcal {Z}}\) if the running time of M is bounded by \(T_{\mathcal {Z}}\) at \(\{\mathcal {Z}\}\).

We remark that this approach differs from the simpler approach used in our thesis [12] and in [4], where the running time of each ITM was simply bounded by a polynomial in the security parameter. The advantage with the current approach is that ideal functionalities and protocols never halt until they are explicitly asked to by the adversary or the environment. However, both approaches are possible in our formalization.

10 Simplified Notation

At this point we have defined the models of the simplified UC framework rigorously, but it is convenient to introduce some alternative notation more in line with the literature to emphasize protocols, ideal functionalities, and adversaries instead of the technical details of how these are linked. We stress that we do not abandon the original notation; the freedom to change notation when convenient greatly simplifies describing and analyzing protocols.

It is easy to see that we may assume that all corrupted parties and all adversaries except the one linked to the environment are simulations of the router of Definition 2 with a suitable number of heads. This is illustrated in Fig. 13.

Fig. 13.

A modification of a hybrid free model with corruption, where \(\mathcal {Q}_{3}^*\), \(\mathcal {P}_{3}^*\), and \(\mathcal {S}\) are replaced by routers and \(\mathcal {A}'\) is a corresponding modification of \(\mathcal {A}\), but with an environment \(\mathcal {Z}\) turning it into a model. The 0th slot of each router is marked by an arrow. We stress that strictly speaking each router is simulated by an ITM to adhere to our definitions. The routers needed for this ITM to have multiple links are hidden by our abstractions.

The subprotocols and ideal functionalities of a hybrid model are arranged in a tree of subprotocols where every ideal functionality is a leaf. Thus, given the set of indices of corrupted parties and the tree of subprotocols and ideal functionalities, an adversary, and an environment we can introduce an indexing scheme and recover the hybrid model. We denote a tree of subprotocols and ideal functionalities by inductively applying the rules that:

  1. An ideal free model based on an ideal functionality \(\mathcal {F}\) is denoted by \(\mathcal {F}\).

  2. A real free model based on a protocol \(\pi \) is denoted by \(\pi \).

  3. A hybrid free model based on a real protocol \(\pi \), and complete free models based on hybrid protocols \(\rho _{1},\ldots ,\rho _{t}\), is denoted \(\pi (\rho _{1},\ldots ,\rho _{t})\).

We may consider the set of indices of corrupted parties to be embedded in the description of the adversary and simply say that we consider an adversary that corrupts a certain set of parties. This convention gives less concrete notation than the original, but it is more in line with the literature.

Suppose that \(\rho \) is such a description of a protocol, \(\mathcal {Z}\) is an environment, and \(\mathcal {A}\) is an adversary (where the indices of corrupted parties have been encoded). Then we denote by \(\mathcal {Z}_{z}(\rho ,\mathcal {A})\) the output of the environment \(\mathcal {Z}\) running on auxiliary input \(z\) when executing the model recovered from \(\rho \), \(\mathcal {Z}\), and \(\mathcal {A}\). Sometimes we structure the adversary to match the topology of the protocols and ideal functionalities, i.e., we denote each simulation subadversary by \(\mathcal {S}\) and each hybrid or real subadversary by \(\mathcal {A}\) with suitable subscripts.

We remark that in hybrid models the number of dummy parties linked to any ideal functionalities that are used is easily derived. Thus, there is no need to state this explicitly. This is not the case for ideal models, but the number of parties is always clear from the context.

Example 1

Suppose that \(\pi \) is a protocol that uses real subprotocols \(\pi _{0}\) and \(\pi _{1}\), and an ideal functionality \(\mathcal {F}\), where \(\pi _{1}\) in turn uses an ideal functionality \(\mathcal {F}_{1}\). Suppose further that \(\mathcal {A}\) is the overall adversary that attacks \(\pi \), and orchestrates: (1) subadversaries \(\mathcal {S}\) and \(\mathcal {S}_{1}\) of \(\mathcal {F}\) and \(\mathcal {F}_{1}\) respectively, and (2) real subadversaries \(\mathcal {A}_{0}\) and \(\mathcal {A}_{1}\) of \(\pi _{0}\) and \(\pi _{1}\) respectively. Then the output of the corresponding model executed with auxiliary input \(z\) is denoted by \(\mathcal {Z}_{z}\big (\pi (\pi _{0},\pi _{1}(\mathcal {F}_{1}),\mathcal {F}),\mathcal {A}(\mathcal {A}_{0},\mathcal {A}_{1}(\mathcal {S}_{1}),\mathcal {S})\big )\). If we are not interested in the internal structure of \(\mathcal {A}\), then we simply write \(\mathcal {A}\) instead of \(\mathcal {A}(\mathcal {A}_{0},\mathcal {A}_{1}(\mathcal {S}_{1}),\mathcal {S})\).
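
The nested notation of Example 1 is simply a compact way of writing the tree of subprotocols together with a matching tree of subadversaries. As an illustration, the following Python sketch (with a hypothetical `Node` class) prints exactly the two expressions used above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in the tree of subprotocols or subadversaries; ideal functionalities
    and simulation adversaries are leaves."""
    name: str
    children: List["Node"] = field(default_factory=list)

    def __str__(self):
        if not self.children:
            return self.name
        return f"{self.name}({', '.join(map(str, self.children))})"

if __name__ == "__main__":
    protocol = Node("pi", [Node("pi0"), Node("pi1", [Node("F1")]), Node("F")])
    adversary = Node("A", [Node("A0"), Node("A1", [Node("S1")]), Node("S")])
    print(protocol)   # pi(pi0, pi1(F1), F)
    print(adversary)  # A(A0, A1(S1), S)
```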

11 Definition of Security

Following the approach outlined at the beginning of the paper we now formalize the security of protocols. In this paper we only consider static corruption, i.e., an adversary may only choose a set of parties to corrupt before execution starts.

Remark 2

Adaptive corruption is easy to add to our framework as follows. (1) Add a link between each party and the adversary. There are already slots prepared for this. (2) Wrap each party in an ITM that simulates the party until it receives “corrupt” from the adversary, at which point it writes the state of the party to the adversary, and waits for a new ITM with a given state in return that it executes instead. The adversary may now use the link to the wrapped replacement freely. (3) Stipulate to which sets of parties the adversary may send “corrupt”. Another wrapper of the adversary can be used to enforce this to avoid restrictions when quantifying over adversaries.

One would typically assume a uniform adversarial structure for subprotocols as for static corruption, but the approach works even when this is not the case.
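
As an illustration of the wrapper described in Remark 2, the Python sketch below runs a party until the adversary sends “corrupt”, leaks the party's state, and thereafter runs an adversary-supplied replacement. All class and method names and the message formats are hypothetical.

```python
class CorruptionWrapperSketch:
    """Sketch of the party wrapper of Remark 2 for adaptive corruption."""

    def __init__(self, party):
        self.party = party       # the honest party being wrapped
        self.replacement = None  # installed by the adversary after corruption

    def on_adversary_message(self, msg):
        if msg == "corrupt" and self.replacement is None:
            return ("state", self.party.state())   # hand the current state to the adversary
        if isinstance(msg, tuple) and msg[0] == "install":
            self.replacement = msg[1]              # adversary-chosen replacement machine
            return ("installed",)
        if self.replacement is not None:
            return self.replacement.handle(msg)    # the adversary uses the link freely
        return ("ignored",)                        # before corruption the link is unused

    def on_protocol_message(self, msg):
        target = self.replacement or self.party
        return target.handle(msg)

if __name__ == "__main__":
    class Echo:  # toy stand-in for a party and for a replacement machine
        def __init__(self, tag): self.tag = tag
        def state(self): return {"tag": self.tag}
        def handle(self, msg): return (self.tag, msg)

    w = CorruptionWrapperSketch(Echo("honest"))
    print(w.on_protocol_message("input"))          # handled by the honest party
    print(w.on_adversary_message("corrupt"))       # the state leaks to the adversary
    print(w.on_adversary_message(("install", Echo("corrupted"))))
    print(w.on_protocol_message("input"))          # now handled by the replacement
```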

Most proofs of security only hold as long as the adversary does not corrupt certain parties or some subsets of parties. An adversarial structure is a collection of sets, where each set is a set of indices of parties that the adversary can corrupt. We use \(\mathsf {J}\) to denote an adversarial structure.

Example 2

Suppose that we have five parties \(\mathcal {P}_{1},\ldots ,\mathcal {P}_{5}\) in a protocol and that we are able to prove that the protocol is secure provided that at most one out of \(\mathcal {P}_{1}\) and \(\mathcal {P}_{2}\) and at most two out of \(\mathcal {P}_{3}\), \(\mathcal {P}_{4}\), and \(\mathcal {P}_{5}\) are corrupted. Then the adversarial structure we consider is \(\mathsf {J}=\{\{1,3,4\},\{1,4,5\},\{1,3,5\},\{2,3,4\},\{2,4,5\},\{2,3,5\}\}\).
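
The sets of Example 2 can be enumerated mechanically, which is a convenient sanity check when adversarial structures grow. A minimal Python sketch, under the assumption that the structure is described by its maximal sets:

```python
from itertools import combinations

# Maximal corruptible sets of Example 2: exactly one of {P1, P2} together with
# exactly two of {P3, P4, P5}; smaller corruptions are subsets of these.
J = sorted(
    sorted(set(a) | set(b))
    for a in combinations([1, 2], 1)
    for b in combinations([3, 4, 5], 2)
)
print(J)  # [[1, 3, 4], [1, 3, 5], [1, 4, 5], [2, 3, 4], [2, 3, 5], [2, 4, 5]]
```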

Here we only consider the case where corruption takes place in a uniform way in all free models within a model, i.e., if a party is corrupted, then so are all its subparties recursively. However, it is quite natural to generalize this in certain situations.

We use \(\mathsf {A}\) to denote a class of adversaries with running time bounded by \(T_{\mathsf {A}}\), where the number of parties \(k\) and the topology of hybrid adversaries are implicit. Furthermore, the subset of such adversaries that corrupt the parties with indices in a set J is denoted by \(\mathsf {A}_{J}\). We use the same conventions for a class of simulation adversaries \(\mathsf {S}\) and the corresponding class \(\mathsf {S}_{J}\) of adversaries that corrupt dummy parties with indices in J. Finally, we use \(\mathsf {Z}\) to denote a class of environments with running time bounded by \(T_{\mathsf {Z}}\). Given two classes \(\mathsf {A}\) and \(\mathsf {A}'\) of adversaries with the same topology, we simply write \(\mathsf {A}+\mathsf {A}'\) to denote the class of adversaries with the same topology and running time \(T_{\mathsf {A}}+T_{\mathsf {A}'}\).

For standard asymptotic security we can simply require that \(T_{\mathsf {A}}\), \(T_{\mathsf {S}}\), and \(T_{\mathsf {Z}}\) are polynomially bounded, but for concrete security claims we can give explicit upper bounds.

Definition 30

(Secure Realization). A protocol \(\rho \) is a \((\mathsf {J},\mathsf {A},\mathsf {S},\mathsf {Z},\mu )\)-secure realization of a target protocol \(\tau \) if for every \(J\in \mathsf {J}\) and every adversary \(\mathcal {A}\in \mathsf {A}_{J}\), there exists a simulation adversary \(\mathcal {S}\in \mathsf {S}_{J}\) such that for every environment \(\mathcal {Z}\in \mathsf {Z}\) and every auxiliary input \(z\in \{0,1\}^{*}\):

$$\begin{aligned} \left| \Pr \left[ \mathcal {Z}_{z}(\rho ,\mathcal {A})=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right| \le \mu . \end{aligned}$$
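
For intuition, the quantity bounded by \(\mu \) in Definition 30 can be estimated empirically for toy experiments by sampling the environment's output bit in both executions. The Python sketch below is purely illustrative; the two callables stand in for complete executions of \(\mathcal {Z}_{z}(\rho ,\mathcal {A})\) and \(\mathcal {Z}_{z}(\tau ,\mathcal {S})\) and are hypothetical.

```python
import random

def advantage(run_real, run_ideal, trials=10000, seed=0):
    """Monte Carlo sketch of |Pr[Z_z(rho, A) = 1] - Pr[Z_z(tau, S) = 1]|.

    run_real, run_ideal : callables taking a random source and returning the
    environment's output bit for one execution of the respective model.
    """
    rng = random.Random(seed)
    real = sum(run_real(rng) for _ in range(trials)) / trials
    ideal = sum(run_ideal(rng) for _ in range(trials)) / trials
    return abs(real - ideal)

if __name__ == "__main__":
    # Toy experiments: the environment outputs 1 with probability 0.50 in one
    # model and 0.51 in the other, so the estimate should be close to 0.01.
    print(advantage(lambda r: r.random() < 0.50, lambda r: r.random() < 0.51))
```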

The above definition is considerably more general than other flavours of the UC framework in that a protocol can securely realize another protocol and not only an ideal functionality. This may seem contrived at first glance, but is in fact an important generalization that simplifies the description and analysis of concrete protocols.

Consider for example an ideal functionality for distributed key generation and decryption. It outputs a public key and can then be used to decrypt ciphertexts if asked to do so by the parties using its service. This works well with a CCA2-secure cryptosystem, but for IND-CPA secure cryptosystems the functionality can not be securely realized, since a simulator has no way of getting access to the plaintexts it needs to simulate decryption. Thus, any application of such a functionality must ensure that this information is otherwise available, but there are several ways to do this, e.g., using a trusted party, secret sharing, or proofs of knowledge, and these are actually used in various electronic voting systems (see [13] for a discussion).

We can formalize an intuitive ideal functionality \(\mathcal {F}\) for distributed key generation and decryption, and several different ideal functionalities \(\mathcal {F}_{1},\ldots ,\mathcal {F}_{l}\) for submitting a ciphertext as an input to the ideal functionality. The individual functionalities may be impossible to securely realize in isolation, but we can consider a hybrid protocol \(\pi (\mathcal {F},\mathcal {F}_{i})\), where \(\pi \) forces any inputs to \(\mathcal {F}\) to first be processed by \(\mathcal {F}_{i}\) (possibly along with other information or through interaction) in such a way that \(\mathcal {F}_{i}\), and hence the simulator, knows the plaintext of any ciphertexts decrypted by \(\mathcal {F}\). This hybrid protocol can then be securely realized by a protocol of the form \(\pi (\sigma ,\sigma _{i})\), where \(\sigma \) and \(\sigma _{i}\) are the natural and often classic implementations in practice. The hybrid protocol \(\pi (\mathcal {F},\mathcal {F}_{i})\) may either be viewed as a type of ideal functionality that is secure by inspection, in which \(\pi \) should be a “thin” middle layer that is trivial to understand, or there could be another ideal functionality \(\mathcal {F}'\) that it securely realizes. Thus, this approach avoids some of the artificial complexity of the UC framework and allows a more modular approach.

12 Universal Composition Theorem

Canetti [3] proves a powerful composition theorem. Loosely speaking it says that if a protocol \(\pi \) securely realizes some functionality \(\mathcal {F}\), then the protocol \(\pi \) can be used instead of the ideal functionality regardless of how the functionality \(\mathcal {F}\) is employed. The general composition theorem can handle polynomially many instances of a constant number of ideal functionalities for many different adversarial models, but we only need the following weaker special case due to the results in Appendix A.

Theorem 1

(Special Universal Composition Theorem). If \(\rho _{0}\) is a \((\mathsf {J},\mathsf {A},\mathsf {S},\mathsf {Z},\mu )\)-secure realization of \(\tau _{0}\) and \(\pi (\tau _{0},\mathcal {F}_{1})\) is a \((\mathsf {J},\mathsf {A}+\mathsf {S},\mathsf {S}',\mathsf {Z},\mu ')\)-secure realization of \(\tau \), then \(\pi (\rho _{0},\mathcal {F}_{1})\) is a \((\mathsf {J},\mathsf {A},\mathsf {S}',\mathsf {Z},\mu +\mu ')\)-secure realization of \(\tau \).

Proof

The triangle inequality implies that for every simulation adversary \(\mathcal {S}_{0}\), every hybrid adversary \(\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\), every simulation adversary \(\mathcal {S}\), every environment \(\mathcal {Z}\) and every auxiliary input \(z\in \{0,1\}^{*}\)

$$\begin{aligned}&\left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\rho _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right| \nonumber \\&\,\le \left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\rho _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}\big (\pi (\tau _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\big )=1\right] \right| \nonumber \\&\quad +\left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\tau _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right| \end{aligned}$$
(1)

We now denote by \(\mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})\) the environment that simulates the environment \(\mathcal {Z}\) on auxiliary input \(z\), the real free model \(\mathscr {R}_{k,J,2}(\pi ,\mathcal {A},\pi ^{*})\), and the ideal free model \(\mathscr {I}_{k,J}(\mathcal {F}_{1},\mathcal {S}_{1},\sigma ^{*}_{1})\). Here \(\pi ^{*}\) and \(\sigma ^{*}_{1}\) are the sets of corrupted subparties, but without loss of generality we may assume that they are routers. This allows us to rewrite the right side of Inequality (1) as

$$\begin{aligned}&\left| \Pr \left[ \mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})(\rho _{0},\mathcal {A}_{0})=1\right] -\Pr \left[ \mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})(\tau _{0},\mathcal {S}_{0})=1\right] \right| \\&\,+\left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\tau _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right| , \end{aligned}$$

without restricting the quantification.

Note that if \(\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\in \mathsf {A}_{J}\) and \(\mathcal {S}_{0}\in \mathsf {S}_{J}\), then \(\mathcal {A}_{0}\in \mathsf {A}_{J}\) and \(\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\in \mathsf {A}_{J}+\mathsf {S}_{J}\). Moreover, if \(\mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})\in \mathsf {Z}\), then \(\mathcal {Z}\in \mathsf {Z}\). From the hypothesis of the theorem we know that for every hybrid adversary \(\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\in \mathsf {A}_{J}\) there exists a simulation adversary \(\mathcal {S}_{0}\in \mathsf {S}_{J}\) such that for the hybrid adversary \(\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\in (\mathsf {A}_{J}+\mathsf {S}_{J})\) there exists a simulation adversary \(\mathcal {S}\in \mathsf {S}_{J}'\) such that for every environment \(\mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})\in \mathsf {Z}\) and every auxiliary input \(z\in \{0,1\}^{*}\)

$$\begin{aligned} \left| \Pr \left[ \mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})(\rho _{0},\mathcal {A}_{0})=1\right] -\Pr \left[ \mathcal {Z}_{z}(\mathcal {A},\mathcal {S}_{1})(\tau _{0},\mathcal {S}_{0})=1\right] \right|&\le \mu \quad \text {and}\\ \left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\tau _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {S}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right|&\le \mu '. \end{aligned}$$

We conclude that for every \(\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\in \mathsf {A}_{J}\) there exists a simulation adversary \(\mathcal {S}\in \mathsf {S}'\) such that for every \(\mathcal {Z}\in \mathsf {Z}\) and every auxiliary input \(z\in \{0,1\}^{*}\)

$$\begin{aligned} \left| \Pr \left[ \mathcal {Z}_{z}\big (\pi (\rho _{0},\mathcal {F}_{1}),\mathcal {A}(\mathcal {A}_{0},\mathcal {S}_{1})\big )=1\right] -\Pr \left[ \mathcal {Z}_{z}(\tau ,\mathcal {S})=1\right] \right| \le \mu +\mu '.\end{aligned}$$

13 Transforms of Models

It is intuitively clear that we can absorb any real subprotocols into the main protocol by simply combining each real party and its subparties into a single new real party, but this does not give a valid model according to our definitions, since each such party is linked to multiple real communication models. A similar problem appears when bundling multiple ideal communication models.

In Appendix A we describe and analyze three explicit faithful transforms that allow us to: (1) simulate multiple ITM’s in a single ITM, (2) simulate multiple links between two ITM’s using a single link, and (3) simulate multiple identical communication models using a single communication model. The first two are straightforward, but the third depends on the details of the definitions of the communication models. A transform is faithful if it is invertible and preserves functionality.

These transforms give us the freedom to view protocols with subprotocols and ideal functionalities in the most convenient way for each situation without sacrificing rigor. In particular, it means that we can apply Theorem 1 to protocols with more than two ideal functionalities. More precisely, we can transform any protocol and adversary into a protocol of the form \(\pi (\mathcal {F}_{0},\mathcal {F}_{1})\), as required by the composition theorem, and a corresponding adversary \(\mathcal {A}\). Suppose that \(\pi _{0}\) securely realizes \(\mathcal {F}_{0}\). Then, due to the composition theorem we know that there is a simulation adversary \(\mathcal {S}\) which shows that \(\pi (\pi _{0},\mathcal {F}_{1})\) securely realizes \(\pi (\mathcal {F}_{0},\mathcal {F}_{1})\). Due to faithfulness, we may then recover the original protocol along with a modified simulation adversary \(\mathcal {S}'\), which implies that the composition is secure for the original protocol. We provide details in Appendix A.

14 Relation to Other Security Frameworks

It is natural to ask if the simplified UC framework captures the same notion of security as other security frameworks. Instead of providing relations and proofs for particular other frameworks we exploit our transforms to make this easy for any security framework.

The faithful transforms allow us to turn any protocol into a protocol with at most one ideal functionality. If a protocol securely realizes an ideal functionality, then its transform does as well. Thus, proving that it securely realizes the functionality in another security framework is reduced to the special case where the protocol has at most one ideal functionality. More precisely, to relate the simplified UC framework to an alternative framework it suffices that: (1) protocols with at most one ideal functionality can be expressed in the alternative framework (with suitable restrictions), and (2) if there is an adversary that contradicts the security of such a protocol in the alternative framework, then there is an adversary that violates the security in the simplified UC framework.

In particular, relating the simplified UC framework to any reasonable presentation of the UC framework is straightforward. This should be contrasted with the analysis of Canetti et al. [4], which relates their presentation of the simplified UC framework to a particular presentation of the UC framework. Determining if their proof still holds after further modifications of the UC framework or for other alternative presentations is cumbersome.