1 Introduction

1.1 Composable Security

One can distinguish two different types of security statements about multi-party protocols. Stand-alone security considers only the protocol at hand and does not capture (at least not explicitly) what it means to use the protocol in a larger context. This can cause major problems. For example, if one intuitively understands an r-round broadcast protocol as implementing a functionality where the sender inputs a value and r rounds later everybody learns this value, then one misses the point that a dishonest party already learns the value in the first round. Therefore a naive randomness generation protocol, in which each party broadcasts (using a broadcast protocol) a random string and then all parties compute the XOR of all the strings, is insecure even though it may naively look secure [17]. There are also more surprising and involved examples of failures when using stand-alone secure protocols in larger contexts.
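To make this failure concrete, the following is a minimal sketch (our own illustration, not from the paper) of the rushing attack on the naive XOR protocol: because a broadcast leaks the sender's value to a dishonest party in the first round, a dishonest party can choose its own string as a function of all honest strings and thereby force the XOR to any target value.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def naive_coin_toss(honest_strings, dishonest_string):
    """XOR all broadcast strings into a joint 'random' value."""
    result = dishonest_string
    for s in honest_strings:
        result = xor_bytes(result, s)
    return result

# Honest parties broadcast their strings; a rushing adversary sees them
# in the first round, before committing to its own string.
honest = [secrets.token_bytes(4) for _ in range(3)]

# The adversary picks target XOR (XOR of honest strings), forcing the
# protocol output to equal any target it likes.
target = bytes.fromhex("deadbeef")
forced = target
for s in honest:
    forced = xor_bytes(forced, s)

assert naive_coin_toss(honest, forced) == target  # output fully biased
```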

The goal of composable security frameworks is to capture all aspects of a protocol that can be relevant in any possible application; hence the term universal composability [6]. While composable security is more difficult to achieve than some form of stand-alone security, one can argue that it is ultimately necessary. Indeed, one can sometimes reinterpret stand-alone results in a composable framework. There exist several frameworks for defining and reasoning about composable security (e.g. [6, 11, 19, 24, 29, 32, 34]).

1.2 Composable Synchronous Models

One can classify results on distributed protocols according to the underlying interaction model. Synchronous models, where parties are synchronized and proceed in rounds, were first considered in the literature because they are relatively simple in terms of the design and analysis of protocols. Asynchronous models are closer to the physical reality, but designing protocols for them and proving their security is significantly more involved, and the achievable results (e.g. the fraction of tolerable dishonest parties) are significantly weaker than in a synchronous model. Synchronous models are nevertheless justified: if one assumes a maximal latency of all communication channels as well as sufficiently well-synchronized clocks, then one can execute a synchronous protocol over an asynchronous network.

Most composable treatments of synchronous protocols are in (versions of) the UC framework by Canetti [6], which is an inherently asynchronous model. The models presented in [6, 18, 22, 33] take different approaches to modelling synchronous communication on top of the UC framework. These approaches inherit the complexity of the UC framework, which was designed to capture full asynchrony. Another approach was introduced with the Timing Model [12, 14, 21]. This model integrates a notion of time in an intuitive manner but, as noted in [22], fails to exactly capture the guarantees expected from a synchronous network. A similar approach was proposed in [2], which modifies the asynchronous reactive-simulatability framework [3] by adding an explicit time port to each automaton.

Despite the large number of synchronous composable frameworks, the overhead created when using them is still too large. For example, when using a model built on top of UC, one typically needs to consider clock/synchronization functionalities, activation tokens, message scheduling, etc. Researchers wish to make composable statements, but using these models often turns out to be a burden and creates substantial overhead. As a consequence, papers written in synchronous UC models tend to be rather informal: the descriptions of the functionalities are incomplete, clock functionalities are missing, protocols are underspecified, and the proofs are often made at an intuitive level. This leaves open the question:

Can one design a composable framework targeted to minimally capture synchronous protocols?

Composable frameworks for restricted settings have been considered (e.g. [7, 36]), but to the best of our knowledge, there is no composable framework targeted to minimally capture any form of synchronous setting.

1.3 Multi-party Computation

In the literature on secure multi-party computation (MPC) protocols, of which secure function evaluation (SFE) is a special case, most of the results are for the synchronous model as well as stand-alone security, even though intuitively most protocols seem to provide composable security. To the best of our knowledge, the first paper proving the composable security of a classical SFE protocol is [8], where the security of the famous GMW-protocol [15] is proved. The protocol assumes trusted setup, and security is obtained in the UC framework. In [1], the security of the famous BGW-protocol [4] is proved in the plain model. With the results in [22, 23], one can prove security in the UC framework.

1.4 Contributions of this Paper

A guiding principle in this work is to strive for minimality and to avoid unnecessary artefacts, thus lowering the entrance fee for getting into the field of composable security and also bringing the reasoning about composable security for synchronous protocols closer to being tractable by formal methods.

Our contributions are two-fold. First, we introduce a new composable framework to capture settings where parties have synchronized clocks (in particular, traditional synchronous protocols), and illustrate the framework with a few simple examples. Our focus is on the meaningful class of information-theoretic security as well as static corruption. However, in Sect. 9, we discuss how one can further extend the framework.

As a second contribution, we prove the composable security of Maurer’s simple-MPC protocol [27] and demonstrate that it perfectly constructs a versatile computer resource which can be (re-)programmed during the execution. Compared to [1, 8], our treatment is significantly simpler for two reasons. First, the protocol of [27] is simpler than the BGW-protocol. Second, and more importantly, the simplicity of our framework allows us to prove the security of the protocols without the overhead of asynchronous models: we do not deal with activation tokens, message scheduling, running time, etc.

Synchronous Constructive Cryptography. Our framework is an instantiation of the Constructive Cryptography framework [28,29,30], for specific instantiations of the resource and converter concepts. Moreover, we introduce a new type of construction notion, parameterized by the set Z of potentially dishonest parties, which captures the guarantees for every such dishonest set Z. An often considered special case is that nothing is guaranteed if Z contains too many parties.

Synchronous resources are very simple: They are (random) systems where the alphabet is list-valued. That is, a system takes a complete input list and produces a complete output list. Parallel composition of resources is naturally defined. There is no need to talk about a scheduler or activation patterns.

To allow that dishonest parties can potentially make their inputs depend on some side information of the round, we let one round r of the protocol correspond to two rounds, r.a and r.b (called semi-rounds). Honest parties provide the round input in semi-round r.a and the dishonest parties receive some information already in the same semi-round r.a. In semi-round r.b, the dishonest parties give their inputs and everybody receives the round’s output.

The framework is aimed at being minimal and differs from other frameworks in several ways. One aspect is that the synchronous communication network is simply a resource and not part of the framework; hence it can be modelled arbitrarily, allowing to capture incomplete networks and various types of channels (e.g., delay channels, secure, authenticated, insecure, etc.).

We demonstrate the usage of our model with three examples: a two-party protocol to construct a common randomness resource (Sect. 5), the protocol introduced in [5] to construct a broadcast resource (Sect. A), and the simple MPC protocol [27] as the construction of a computer resource (Sects. 7 and 8).

The Computer Resource. We introduce a system \(\mathsf {Computer}\) which intuitively captures what traditional MPC protocols like GMW, BGW or CCD [4, 9, 13, 15, 27, 35] achieve. Traditionally, in a secure function evaluation protocol among n parties, the function to compute is modelled as an arithmetic circuit assumed to be known in advance. However, the same protocols are intuitively secure even if the parties do not know the entire circuit in advance: it is enough that the parties agree on the next instruction to execute.

We capture such guarantees in an interactive computer resource, similar to a (programmable) old-school calculator with a small instruction set (read, write, addition, and multiplication in our case), an array of value-registers, and an instruction queue. The resource has n interfaces. The interfaces \(1, \dots , n-1\) are used to give inputs to the resource and receive outputs from the resource. Interface n is used to write instructions into the queue. A read instruction \((\textsc {input},i, p)\) instructs the computer to read a value from a value space \(\mathcal {V}\) at interface i and store it at position p of the value register. A write instruction \((\textsc {output},i,p)\) instructs the computer to output the value stored at position p to interface i. A computation instruction \((\textsc {op},p_1,p_2,p_3)\), \(\textsc {op} \in \{\textsc {add}, \textsc {mult} \}\), instructs the computer to add or multiply the values at positions \(p_1\) and \(p_2\) and store the result at position \(p_3\). We then show how to construct the computer resource using the Simple MPC protocol [27]. A similar statement could be obtained using other traditional MPC protocols.
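The instruction semantics just described can be sketched as follows (a toy Python model of our own; the choice of \(\mathcal {V}\) as a small prime field and the single-instruction-per-step schedule are assumptions for illustration, not the resource's formal definition):

```python
class Computer:
    """Toy sketch of the Computer resource's instruction semantics."""
    def __init__(self, modulus=101):           # assume V = Z_p for a small prime p
        self.mod = modulus
        self.reg = {}                          # value registers, addressed by position
        self.queue = []                        # instruction queue, fed via interface n

    def push(self, instruction):               # interface n: enqueue the next instruction
        self.queue.append(instruction)

    def step(self, inputs):
        """Execute the next instruction; `inputs` maps interface -> input value."""
        op, *args = self.queue.pop(0)
        if op == "input":                      # (input, i, p): read from interface i
            i, p = args
            self.reg[p] = inputs[i] % self.mod
        elif op == "output":                   # (output, i, p): write reg[p] to interface i
            i, p = args
            return {i: self.reg[p]}
        elif op in ("add", "mult"):            # (op, p1, p2, p3): compute and store
            p1, p2, p3 = args
            f = (lambda a, b: a + b) if op == "add" else (lambda a, b: a * b)
            self.reg[p3] = f(self.reg[p1], self.reg[p2]) % self.mod
        return {}

c = Computer()
for ins in [("input", 1, 0), ("input", 2, 1), ("mult", 0, 1, 2), ("output", 1, 2)]:
    c.push(ins)
c.step({1: 3}); c.step({2: 4}); c.step({})
assert c.step({}) == {1: 12}                   # interface 1 learns 3 * 4
```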

1.5 Notation

We denote random variables by capital letters. Prefixes of sequences of random variables are denoted by a superscript, e.g. \(X^i\) denotes the finite sequence \(X_1,\dots ,X_i\). For random variables X and Y, we denote by \(\mathrm {p}_{X|Y}\) the corresponding conditional probability distribution. Given a tuple t, we write the projection to the j-th component of the tuple as \([t]_j\). Given a sequence \(t^i\) of tuples \(t_1,\dots ,t_i\), we write \([t^i]_j\) for the sequence \([t_1]_j,\dots ,[t_i]_j\). For a finite set X, \(x \leftarrow _\$ X\) denotes sampling x uniformly at random from X.

2 Constructive Cryptography

The basic concepts of the Constructive Cryptography framework by Maurer and Renner [28,29,30] needed for this paper are quite simple and natural and are summarized below.

2.1 Specifications

A basic idea, which one finds in many disciplines, is that one considers a set \(\varPhi \) of objects and specifications of such objects. A specification \(\mathcal {U}\subseteq \varPhi \) is a subset of \(\varPhi \) and can equivalently be understood as a predicate on \(\varPhi \) defining the set of objects satisfying the specification, i.e., being in \(\mathcal {U}\). Examples of this general paradigm are the specification of mechanical parts in terms of certain tolerances (e.g. the thickness of a bolt is between 1.33 and 1.34 mm), the specification of the property of a program (e.g. the set of programs that terminate, or the set of programs that compute a certain function within a given accuracy and time limit), or in a cryptographic context the specification of a close-to-uniform n-bit key as the set of probability distributions over \(\{0,1\}^n\) with statistical distance at most \(\epsilon \) from the uniform distribution.
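The last example can be made concrete with a small sketch (our own illustration; distributions are represented as hypothetical Python dictionaries mapping outcomes to probabilities): a specification is simply a predicate, here "within statistical distance \(\epsilon \) of uniform".

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions over the same support."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def close_to_uniform(p, n_bits, eps):
    """Predicate defining the specification: is p within statistical
    distance eps of the uniform distribution over n-bit strings?"""
    uniform = {i: 1 / 2**n_bits for i in range(2**n_bits)}
    return statistical_distance(p, uniform) <= eps

biased = {0: 0.30, 1: 0.20, 2: 0.25, 3: 0.25}   # a slightly biased 2-bit key
assert close_to_uniform(biased, 2, eps=0.10)     # satisfies the weaker spec
assert not close_to_uniform(biased, 2, eps=0.01) # but not the stronger one
```

A smaller \(\epsilon \) cuts out a smaller set of distributions, matching the statement below that smaller specifications correspond to stronger guarantees.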

A specification corresponds to a guarantee, and smaller specifications hence correspond to stronger guarantees. An important principle is to abstract a specification \(\mathcal {U}\) by a larger specification \(\mathcal {V}\) (i.e., \(\mathcal {U}\subseteq \mathcal {V}\)) which is simpler to understand and work with. One could call \(\mathcal {V}\) an ideal specification to hint at a certain resemblance with terminology often used in the cryptographic literature. If a construction (see below) requires an object satisfying specification \(\mathcal {V}\), then it also works if the given object actually satisfies the stronger specification \(\mathcal {U}\).

2.2 Constructions

A construction is a function \(\gamma :\varPhi \rightarrow \varPhi \) transforming objects into (usually in some sense more useful) objects. A well-known example of a construction useful in cryptography, achieved by a so-called extractor, is the transformation of a pair of independent random variables (say a short uniform random bit-string, called seed, and a long bit-string for which only a bound on the min-entropy is known) into a close-to-uniform string.
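As an illustration of the extractor example (our own toy sketch, not the paper's construction), the following implements a one-bit inner-product (Hadamard) extractor and checks empirically that, averaged over uniform seeds, the output bit on a small fixed-support source is close to unbiased; real extractors output many bits.

```python
import itertools

def hadamard_extract(seed, source):
    """One output bit: the inner product of seed and source over GF(2).
    (A toy instance of a seeded extractor.)"""
    return sum(s & x for s, x in zip(seed, source)) % 2

# A toy weak source: uniform over a fixed subset of 8-bit strings
# (so it has some min-entropy but is far from uniform).
support = [tuple(map(int, f"{v:08b}")) for v in (3, 12, 48, 77, 129, 200, 201, 255)]

# Average over all uniform seeds: the output bit should be nearly unbiased.
bias = 0.0
for seed in itertools.product((0, 1), repeat=8):
    ones = sum(hadamard_extract(seed, src) for src in support)
    bias += abs(ones / len(support) - 0.5)
bias /= 2 ** 8
assert bias < 0.25   # small average bias, although some seeds (e.g. all-zero) are bad
```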

A construction statement of specification \(\mathcal {S}\) from specification \(\mathcal {R}\) using construction \(\gamma \), denoted \(\mathcal {R} \xrightarrow {\gamma } \mathcal {S}\), is of the form

$$ \mathcal {R} \xrightarrow {\gamma } \mathcal {S} \ \ \ :\Longleftrightarrow \ \ \ \gamma (\mathcal {R}) \subseteq \mathcal {S} . $$

It states that if construction \(\gamma \) is applied to any object satisfying specification \(\mathcal {R}\), then the resulting object is guaranteed to satisfy (at least) specification \(\mathcal {S}\).

The composability of this construction notion follows immediately from the transitivity of the subset relation:

$$ \mathcal {R} \xrightarrow {\gamma } \mathcal {S} \ \wedge \ \mathcal {S} \xrightarrow {\gamma '} \mathcal {T} \ \ \Longrightarrow \ \ \mathcal {R} \xrightarrow {\gamma '\circ \gamma } \mathcal {T}. $$
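The construction notion and its composability can be mirrored in a toy sketch (our own; specifications as finite sets of integers, constructions as plain functions): the check \(\gamma (\mathcal {R}) \subseteq \mathcal {S}\) becomes a subset test, and transitivity of \(\subseteq \) gives composability for free.

```python
def constructs(gamma, R, S):
    """R --gamma--> S  iff  gamma(R) is a subset of S."""
    return {gamma(r) for r in R} <= S

R = {0, 1, 2}
S = {0, 2, 4, 6}
T = {0, 4, 8, 16, 36}

gamma  = lambda x: 2 * x        # constructs S from R: doubles each object
gamma2 = lambda x: x * x        # constructs T from S: squares each object

assert constructs(gamma, R, S)
assert constructs(gamma2, S, T)
# Composability: the composed construction works, by transitivity of subset.
assert constructs(lambda x: gamma2(gamma(x)), R, T)
```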

2.3 Resources and Converters

The above natural and very general viewpoint is also taken in Constructive Cryptography, where the objects in \(\varPhi \) are systems, called resources, with interfaces to the parties considered in the given setting. If a party performs actions at its interface, this corresponds to applying a so-called converter which can also be thought of as a system or protocol engine. At its inside, the converter “talks to” the party’s interface of the resource and at the outside it emulates an interface (of the transformed resource). Applying such a converter induces a mapping \(\varPhi \rightarrow \varPhi \). We denote the set of converters as \(\varSigma \).

Figure 1 shows a resource with four interfaces where converters are applied at two of the interfaces. The resource obtained by applying a converter \(\pi \) at interface j of resource \(\mathbf{R} \) is denoted as \(\pi ^j\mathbf{R} \). Applying converters at different interfaces commutes. The resource shown in Fig. 1 can hence be written

$$ \pi ^2 \rho ^4 \mathbf{R} , $$

which is equal to \(\rho ^4 \pi ^2 \mathbf{R} \).

Fig. 1. Example of a resource with 4 interfaces, where converters \(\pi \) and \(\rho \) are attached to interfaces 2 and 4.

Several resources (more precisely a tuple of resources) can be understood as a single resource, i.e., as being composed in parallel. One can think that for each party, all its interfaces are merged into a single interface, where the original interfaces can be thought of as sub-interfaces.

2.4 Multi-party Protocols and Constructions

Let us consider a setting with n parties, where \(\mathcal {P}=\{1,\ldots ,n\}\) denotes the set of parties (or, rather, interfaces). A protocol consists of a tuple \(\varvec{\pi } = (\pi _1,\dots ,\pi _n)\) of converters, one for each party, and a construction consists of each party applying its converter. However, an essential aspect of reasoning in cryptography is that one considers that parties can either be honest or dishonest, and the goal is to state meaningful guarantees for the honest parties. While an honest party applies its converter, there is no such guarantee for a dishonest party, meaning that a dishonest party may apply an arbitrary converter to its interface, including the identity converter that gives direct access to the interface.

In many cryptographic settings one considers a set of (honest) parties and a fixed dishonest party (often called the adversary). However, in a so-called multi-party context one considers each party to be either honest or dishonest. For each subset \(Z\subseteq \mathcal {P}\) of dishonest parties one states a separate guarantee: If the assumed resource satisfies specification \(\mathcal {R}_Z\), then, if all parties in \(\mathcal {P}\setminus Z\) apply their converter, the resulting resource satisfies specification \(\mathcal {S}_Z\). Typically, but not necessarily, all guarantees \(\mathcal {R}_Z\) (and analogously all \(\mathcal {S}_Z\)) are compactly described, possibly all derived as variations of the same resource.

Definition 1

The protocol \(\varvec{\pi } = (\pi _1,\dots ,\pi _n)\) constructs specifications \(\mathcal {S}_Z\) from \(\mathcal {R}_Z\) if

$$\begin{aligned} \forall Z \subseteq \mathcal {P}\ \ \ \mathcal {R}_Z \xrightarrow {\varvec{\pi }_{\mathcal {P}\setminus Z}} \mathcal {S}_Z. \end{aligned}$$

A special case often considered is that one provides guarantees only if the set of dishonest parties is within a so-called adversary structure [16], for example that there are at most t dishonest parties. This simply corresponds to the special case where \(\mathcal {S}_{Z} = \varPhi \) if \(|Z| > t\). In other words, if Z is not in the adversary structure, then the resource is only known to satisfy the trivial specification \(\varPhi \).

2.5 Specification Relaxations

As mentioned above, that a party j is possibly dishonest means that we have no guarantee about which converter is applied at that interface. For a given specification \(\mathcal {S}\), this is captured by relaxing the specification to the larger specification \(\mathcal {S}^{*_{j}}\):

$$\begin{aligned} \mathcal {S}^{*_{j}} := \{\pi ^j \mathbf{S} \ \vert \ \pi \in \varSigma \ \wedge \ \mathbf{S} \in \mathcal {S}\}. \end{aligned}$$

If we consider a set Z of potentially dishonest parties, we can consider the set of interfaces in Z as being merged to a single interface with several sub-interfaces, and applying the above relaxation to this interface. The resulting specification is denoted \(\mathcal {S}^{*_{Z}}\). This corresponds to the viewpoint that all dishonest parties collude (or, as sometimes stated in the literature, are under control of a central adversary). It is easy to see that the described \(*\)-relaxation is idempotent: For any specification \(\mathcal {S}\) and any set of interfaces Z, we have \((\mathcal {S}^{*_{Z}})^{*_{Z}} = \mathcal {S}^{*_{Z}}\).
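The \(*\)-relaxation and its idempotence can be mirrored in a toy model (our own sketch; resources are integers, converters are functions applied at the single merged dishonest interface). The idempotence relies on the converter set containing the identity and being closed under composition, which holds for actual converters and for the toy set below.

```python
from itertools import product

# Toy converter set: identity and negation. It contains the identity and
# is closed under composition, as required for idempotence.
converters = [lambda x: x, lambda x: -x]

def relax(spec):
    """S^{*} = { pi(S) : pi a converter, S in spec }."""
    return {pi(s) for pi, s in product(converters, spec)}

S = {1, 2, 5}
assert S <= relax(S)                 # relaxing only enlarges the specification
assert relax(relax(S)) == relax(S)   # idempotence of the *-relaxation
```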

If one wants to prove that a given specification \(\mathcal {U}\) is contained in \(\mathcal {S}^{*_{Z}}\), one can exhibit for every element \(U\in \mathcal {U}\) a converter \(\alpha \) such that \(U=\alpha ^Z S\) for some \(S\in \mathcal {S}\). Here \(\alpha ^Z S\) means applying \(\alpha \) to the interface resulting from merging the interfaces in Z. If the same \(\alpha \) works for every U, then one can think of \(\alpha \) as corresponding to a (joint) simulator for the interfaces in Z.

It should be pointed out that Constructive Cryptography [30] considers general specifications, and the specification type described above is only a special case. Therefore the construction notion does not involve a simulator. Indeed, this natural viewpoint allows one to circumvent impossibility results in classical simulation-based frameworks (including the early version of Constructive Cryptography [28, 29]), because the type of specifications resulting from requiring a single simulator is too restrictive. See [20] for an example.

3 Synchronous Systems

To instantiate the Constructive Cryptography framework at the level of synchronous discrete systems, we need to instantiate the notions of a resource \(\mathbf{R} \in \varPhi \) and a converter \(\pi \in \varSigma \). We define each of them as special types of random systems [26, 31]. We briefly explain the role of random systems in such definitions.

3.1 Random Systems

Definition 2

An \((\mathcal {X},\mathcal {Y})\)-random system \(\mathbf{R} \) is a sequence of conditional probability distributions \(\mathrm {p}_{Y_i|X^iY^{i-1}}^\mathbf{R }\), for \(i \ge 1\). Equivalently, the random system can be characterized by the sequence \(\mathrm {p}_{Y^i|X^i}^\mathbf{R } = \prod _{k=1}^{i} \mathrm {p}_{Y_k|X^kY^{k-1}}^\mathbf{R }\), for \(i \ge 1\).

As explained in [25], a random system is the mathematical object corresponding to the behavior of a discrete system. A deterministic system is a special type of function (or sequence of functions), and the composition of systems is defined via function composition. Probabilistic systems are often thought about (and described) at a more concrete level, where the randomness is made explicit (e.g. as the randomness of an algorithm or the random tape of a Turing machine). Hence a probabilistic discrete system (PDS) corresponds to a probability distribution over deterministic systems, and the definition of the composition of probabilistic systems is induced by the definition of composition of deterministic systems (analogously to the fact that the definition of the sum of real-valued random variables is naturally induced by the definition of the sum of real numbers, which are not probabilistic objects).

Different PDS can have the same behavior, which means that the behavior, i.e., a random system, corresponds to an equivalence class of PDS (with the same behavior). The fact that the composition of (independent) random systems corresponds to a particular product of the involved conditional distributions can be proved and should not be seen as the definition. However, in this paper, which only considers random systems (the actual mathematical objects of study), the product of distributions appears as the definition.

It is important to distinguish between the type and the description of a mathematical object. An object of a given type can be described in many different ways. For example, a random system can be described by several variants of pseudo-code, and as is common in the literature we also use such an ad-hoc description language. The fact that a random system is defined via conditional probability distributions does not mean that it has to be described in that way.
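Definition 2 can itself be given as a short executable sketch (our own illustration; the conditional distributions are supplied as a hypothetical Python callable, and querying the system samples from them):

```python
import random

class RandomSystem:
    """An (X,Y)-random system given by its conditional distributions
    p_{Y_i | X^i, Y^{i-1}}, supplied as a callable mapping the input and
    output histories to a dictionary {y: probability}."""
    def __init__(self, cond):
        self.cond = cond
        self.x_hist, self.y_hist = [], []

    def query(self, x):
        self.x_hist.append(x)
        dist = self.cond(tuple(self.x_hist), tuple(self.y_hist))
        y = random.choices(list(dist), weights=list(dist.values()))[0]
        self.y_hist.append(y)
        return y

# A deterministic system is the special case of point distributions:
# here, the i-th output is the XOR of the first i inputs.
xor_system = RandomSystem(lambda xs, ys: {sum(xs) % 2: 1.0})
assert [xor_system.query(b) for b in (1, 0, 1, 1)] == [1, 1, 0, 1]
```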

3.2 Resources

A resource (the mathematical type) is a special type of random system [26, 31].


A resource with n interfaces takes one input per interface and produces an output at every interface (see Fig. 2). Without loss of generality, we assume that the alphabets at all interfaces and for all indices i are the same. An \((n,\mathcal {X},\mathcal {Y})\)-resource is a resource with n interfaces and input (resp. output) alphabet \(\mathcal {X}\) (resp. \(\mathcal {Y}\)).

Fig. 2. An example resource with 4 interfaces. At each invocation, the resource takes an input \(x_j \in \mathcal {X}\) at each interface j, and outputs a value \(y_j \in \mathcal {Y}\) at each interface j.

Definition 4

An \((n,\mathcal {X},\mathcal {Y})\)-resource is an \((\mathcal {X}^n,\mathcal {Y}^n)\)-random system.

Parallel Composition. One can take several independent \((n,\mathcal {X}_j,\mathcal {Y}_j)\)-resources \(\mathbf{R} _1,\dots , \mathbf{R} _k\) and form an \((n,\mathcal {X}_1 \times \dots \times \mathcal {X}_k,\mathcal {Y}_1 \times \dots \times \mathcal {Y}_k)\)-resource, denoted \([\mathbf{R} _1,\dots , \mathbf{R} _k]\). A party interacting with the composed resource \([\mathbf{R} _1,\dots , \mathbf{R} _k]\) can give an input \(\mathbf {a} = (a^1,\dots ,a^k)\), which is interpreted as giving each input \(a^j \in \mathcal {X}_j\) to resource \(\mathbf{R} _j\), and then receive an output \(\mathbf {b} = (b^1,\dots ,b^k)\) containing the output from each of the resources.

In the following definition, we denote by \(x_i = (\mathbf {a}_{1,i},\dots ,\mathbf {a}_{n,i})\) the i-th input to the resource, and by \(y_i = (\mathbf {b}_{1,i},\dots ,\mathbf {b}_{n,i})\) the i-th output from the resource. We further let \([[x_i]]_j = ([\mathbf {a}_{1,i}]_j,\dots ,[\mathbf {a}_{n,i}]_j)\) be the tuple with the j-th component of each tuple \(\mathbf {a}_{\cdot ,i}\); and let \([[x^i]]_j\) be the finite sequence \([[x_1]]_j, \dots ,[[x_i]]_j\). We let \([[y_i]]_j\) and \([[y^i]]_j\) be defined accordingly.

Definition 5

Let \((\mathbf{R} _1,\dots , \mathbf{R} _k)\) be a tuple of resources, where \(\mathbf{R} _j\) is an \((n,\mathcal {X}_j,\mathcal {Y}_j)\)-resource. The parallel composition \(\mathbf{R} := [\mathbf{R} _1,\dots , \mathbf{R} _k]\) is the \((n,\mathcal {X}_1 \times \dots \times \mathcal {X}_k,\mathcal {Y}_1 \times \dots \times \mathcal {Y}_k)\)-resource defined as follows:

$$\begin{aligned} \mathrm {p}_{Y_i|X^iY^{i-1}}^\mathbf{R }(y_i,x^i,y^{i-1}) = \prod _{j=1}^k \mathrm {p}_{Y_i|X^iY^{i-1}}^\mathbf{R _j}([[y_i]]_j,[[x^i]]_j,[[y^{i-1}]]_j) \end{aligned}$$
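The input/output re-bundling behind this formula can be sketched for the deterministic special case (our own toy model; each resource is a hypothetical one-function-per-invocation object): each interface's input is a tuple with one component per sub-resource, and the outputs are bundled back the same way.

```python
class Resource:
    """Toy deterministic (n, X, Y)-resource: one function per invocation,
    mapping the complete input list to the complete output list."""
    def __init__(self, round_fn):
        self.round_fn = round_fn

    def invoke(self, inputs):               # inputs: one value per interface
        return self.round_fn(inputs)

def parallel(*resources):
    """[R_1, ..., R_k]: split each party's input tuple among the
    sub-resources and re-bundle their outputs per party."""
    def round_fn(inputs):
        n = len(inputs)
        outs = [r.invoke([inputs[p][j] for p in range(n)])
                for j, r in enumerate(resources)]
        return [tuple(outs[j][p] for j in range(len(resources)))
                for p in range(n)]
    return Resource(round_fn)

swap = Resource(lambda xs: xs[::-1])        # two channels: exchange the inputs
echo = Resource(lambda xs: xs)              # identity resource
both = parallel(swap, echo)
assert both.invoke([(1, "a"), (2, "b")]) == [(2, "a"), (1, "b")]
```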

3.3 Converters

An \((\mathcal {X},\mathcal {Y})\)-converter is a system (of a different type than resources) with two interfaces, an outside interface \(\mathtt {out}\) and an inside interface \(\mathtt {in}\). The inside interface is connected to the \((n,\mathcal {X},\mathcal {Y})\)-resource, and the outside interface serves as the interface of the combined system. When an input is given (an input at the outside), the converter invokes the resource (with an input on the inside), and then converts its response into a corresponding output (an output on the outside). When a converter is connected to several resources in parallel \([\mathbf{R} _1,\dots ,\mathbf{R} _k]\), we address the corresponding sub-interfaces with the name of the resource, i.e., \(\mathtt {in}.\mathtt {R1}\) is the sub-interface connected to \(\mathbf{R} _1\).

More concretely, an \((\mathcal {X},\mathcal {Y})\)-converter is an \((\mathcal {X} \cup \mathcal {Y},\mathcal {X} \cup \mathcal {Y})\)-random system whose input and output alphabets alternate between \(\mathcal {X}\) and \(\mathcal {Y}\). That is,

  • On the first input, and further odd inputs, it takes a value \(x \in \mathcal {X}\) and produces a value \(x'\in \mathcal {X}\).

  • On the second input, and further even inputs, it takes a value \(y' \in \mathcal {Y}\), and produces a value \(y \in \mathcal {Y}\).

Definition 6

An \((\mathcal {X},\mathcal {Y})\)-converter \(\pi \) is a pair of sequences of conditional probability distributions \(\mathrm {p}_{X'_i|X^iX'^{i-1}Y'^{i-1}Y^{i-1}}^{\pi }\) and \(\mathrm {p}_{Y_i|X^iX'^iY'^{i}Y^{i-1}}^{\pi }\), for \(i \ge 1\). Equivalently, a converter can be characterized by the sequence

\(\mathrm {p}_{X'^iY^i|X^iY'^{i}}^{\pi } = \prod _{k=1}^i \mathrm {p}_{X'_k|X^kX'^{k-1}Y'^{k-1}Y^{k-1}}^{\pi } \cdot \mathrm {p}_{Y_k|X^kX'^kY'^{k}Y^{k-1}}^{\pi }\), for \(i \ge 1\).

Application of a Converter to a Resource Interface. The application of a converter \(\pi \) to a resource \(\mathbf{R} \) at interface j can be naturally understood as the resource that operates as follows (see Fig. 3):

  • On input \((x_1,\dots ,x_n) \in \mathcal {X}^n\): input \(x_j\) to \(\pi \), and let \(x_j'\) be the output.

    Then, input \((x_1,\dots ,x_{j-1},x_{j}',x_{j+1},\dots ,x_n) \in \mathcal {X}^n\) to \(\mathbf{R} \).

  • On output \((y_1,\dots ,y_{j-1},y_{j}',y_{j+1},\dots ,y_n) \in \mathcal {Y}^n\) from \(\mathbf{R} \), input \(y_j'\) to \(\pi \), and let \(y_j\) be the output.

    The output is \((y_1,\dots ,y_n) \in \mathcal {Y}^n\).

Fig. 3. The application of a converter \(\pi \) to interface 2 of a resource \(\mathbf{R} \). On input a value \(x_2 \in \mathcal {X}\) at interface \(\mathtt {out}\) of \(\pi \), the converter \(\pi \) outputs a value \(x_2' \in \mathcal {X}\) at interface \(\mathtt {in}\). The resource \(\mathbf{R} \) takes as input \((x_1,x_2',x_3,x_4) \in \mathcal {X}^4\) and outputs \((y_1,y_2',y_3,y_4) \in \mathcal {Y}^4\). On input \(y_2'\) at interface \(\mathtt {in}\) of \(\pi \), the converter outputs a value \(y_2\) at interface \(\mathtt {out}\).

Given a tuple \(a = (a_1,\dots ,a_n)\), we denote by \(a_{\{j \rightarrow b\}}\) the tuple where the j-th component is substituted by the value b, i.e. the tuple \((a_1,\dots ,a_{j-1},b,a_{j+1},\dots ,a_n)\). Moreover, given a sequence \(a^i\) of tuples \(t^1, \dots , t^i\) and a sequence \(b^i\) of values \(b_1, \dots , b_i\), we denote by \(a^i_{\{j \rightarrow b^i\}}\) the sequence of tuples \(t^1_{\{j \rightarrow b_1\}}\), \(\dots \), \(t^i_{\{j \rightarrow b_i\}}\).

Definition 7

The application of an \((\mathcal {X},\mathcal {Y})\)-converter \(\pi \) at interface j of an \((n,\mathcal {X},\mathcal {Y})\)-resource \(\mathbf{R} \) is the \((n,\mathcal {X},\mathcal {Y})\)-resource \(\pi ^j \mathbf{R} \) defined as follows:

$$\begin{aligned} \begin{aligned} \mathrm {p}_{Y^i|X^i}^{\pi ^j\mathbf{R} }\left( y^i,x^i\right) = \sum _{x'^i,y'^i}&\mathrm {p}_{X'^iY^i|X^iY'^{i}}^{\pi }\left( x'^i,[y^i]_j,[x^i]_j,y'^i\right) \mathrm {p}_{Y^i|X^i}^\mathbf{R }\left( y^i_{\{j \rightarrow y'^i\}},x^i_{\{j \rightarrow x'^i\}}\right) \end{aligned} \end{aligned}$$

One can see that applying converters at distinct interfaces commutes. That is, for any converters \(\pi \) and \(\rho \), any resource \(\mathbf{R} \), and any distinct interfaces j and k, we have \(\pi ^j \rho ^k \mathbf{R} = \rho ^k \pi ^j \mathbf{R} \).
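For the deterministic one-round special case, converter application and the commuting property can be sketched as follows (our own toy model with hypothetical names; a converter is a pair of maps, one for the outside-to-inside direction and one for inside-to-outside):

```python
class Converter:
    """Toy deterministic converter: translates the outside input x into an
    inside input x', and the inside output y' into an outside output y."""
    def __init__(self, fwd, bwd):
        self.fwd, self.bwd = fwd, bwd       # fwd: x -> x',  bwd: y' -> y

def apply_at(pi, j, resource_fn):
    """pi^j R: run pi at interface j of a one-round resource R."""
    def wrapped(inputs):
        xs = list(inputs)
        xs[j] = pi.fwd(xs[j])               # x_j  -> x_j', fed into R
        ys = list(resource_fn(xs))
        ys[j] = pi.bwd(ys[j])               # y_j' -> y_j, returned outside
        return ys
    return wrapped

double = Converter(lambda x: 2 * x, lambda y: y + 1)
negate = Converter(lambda x: -x, lambda y: y)
R = lambda xs: [sum(xs)] * len(xs)          # broadcast the sum of all inputs

# Applying converters at distinct interfaces commutes.
lhs = apply_at(double, 0, apply_at(negate, 1, R))
rhs = apply_at(negate, 1, apply_at(double, 0, R))
assert lhs([3, 4, 5]) == rhs([3, 4, 5]) == [8, 7, 7]
```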

For a tuple of converters \(\varvec{\pi } = (\pi _1,\dots ,\pi _n)\), we denote by \(\varvec{\pi } \mathbf{R} \) the resource where each converter \(\pi _j\) is attached to interface j. Given a subset of interfaces I, we denote by \(\varvec{\pi }_I \mathbf{R} \) the resource where each converter \(\pi _j\) with \(j \in I\), is attached to interface j.

4 Resources with Specific Round-Causality Guarantees

The resource type of Definition 4 captures that all parties act in a synchronized manner. The definition also implies that any (dishonest) party’s input depends solely on the previous outputs seen by the party.

In practice this assumption is often not justified. For example, consider a resource consisting of two parallel communication channels (in a certain round) between two parties, one in each direction. Then it is typically unrealistic to assume that a dishonest party cannot delay giving its input until having seen the output on the other channel. Such adversarial behavior is typically called “rushing” in the literature. More generally, a dishonest party’s input can depend on partial information about the current round inputs of honest parties.

To model such causality guarantees, we introduce resources that proceed in two rounds (called semi-rounds) per actual protocol round. This makes explicit what a dishonest party’s input can (and cannot) depend on.

More concretely, each round r consists of two semi-rounds, denoted r.a and r.b. In the first semi-round, r.a, the resource takes inputs from the honest parties and gives an output to the dishonest parties. No output is given to honest parties, and no input is taken from dishonest parties. In the second semi-round, r.b, the resource takes inputs from the dishonest parties and gives an output to all parties. Figure 4 illustrates the behavior of such a resource within one round. When describing such resources, we often omit specifying the semi-round when it is clear from the context.

Fig. 4. A resource operating within one round. The dashed lines indicate where no value is input to, or output from, the resource. The honest (resp. dishonest) parties give inputs to the resource in the first (resp. second) invocation, and all parties receive an output in the second invocation. The dishonest parties additionally receive an output in the first invocation.

When applying a protocol converter to such a resource, we formally attach the corresponding converter that operates in semi-rounds, where round-r inputs are given to the resource at r.a, and round-r outputs are obtained at r.b.
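The r.a/r.b schedule can be sketched in a few lines (our own toy model with hypothetical names; a one-round resource exposing the r.a leakage and the r.b output separately), including the "rushing" dependence of the dishonest input on the leakage:

```python
class ExchangeRound:
    """Toy one-round resource: party 1 (honest) and party 2 (dishonest)
    exchange one value each, modelling a pair of channels in one round."""
    def leak(self, honest_inputs):           # semi-round r.a: leakage to dishonest
        return honest_inputs[1]
    def output(self, honest_inputs, dishonest_inputs):   # semi-round r.b
        return {1: dishonest_inputs[2], 2: honest_inputs[1]}

def run_round(resource, honest_inputs, dishonest_strategy):
    leakage = resource.leak(honest_inputs)               # r.a: honest give inputs
    dishonest_inputs = dishonest_strategy(leakage)       # r.b: rushing dependence
    return resource.output(honest_inputs, dishonest_inputs)

# A rushing party 2 copies party 1's message back within the same round.
out = run_round(ExchangeRound(), {1: 42}, lambda leaked: {2: leaked})
assert out == {1: 42, 2: 42}
```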

5 A First Example

We demonstrate the use of our model by describing a very simple 2-party protocol that uses delay channels to generate common randomness. The protocol uses a channel with a known lower and upper bound on the delay, and proceeds as follows: Each party generates a random value and sends it to the other party via a delay channel. Then, once the value is received, each party outputs the sum of the received value and the previously generated random value. It is intuitively clear that the protocol works because 1) a dishonest party does not learn the message before round r, and 2) an honest party is guaranteed to learn the message at round R.
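As a sanity check, one honest execution of this protocol can be sketched in Python (the modulus Q and all names are illustrative, not part of the model):

```python
import secrets

Q = 2**32  # illustrative finite message space: integers mod Q

def run_protocol():
    """One honest execution: each party draws a random value, sends it
    through its delay channel, and outputs the sum once it arrives."""
    r1 = secrets.randbelow(Q)  # party 1's local randomness
    r2 = secrets.randbelow(Q)  # party 2's local randomness
    # Round 1: each value is input to a delay channel; an honest receiver
    # obtains the other value at round R, a dishonest one not before r.
    out1 = (r1 + r2) % Q       # party 1's output at round R
    out2 = (r2 + r1) % Q       # party 2's output at round R
    return out1, out2
```

Both outputs coincide, and each is uniform as long as at least one party’s value is uniform.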

Bounded-Delay Channel with Known Lower and Upper Bound. We model a simple delay channel \(\overrightarrow{\mathcal {DC}}\) (resp. \(\overleftarrow{\mathcal {DC}}\)) from party 1 to party 2 (resp. party 2 to party 1) with known lower and upper bounds on the delay. It takes a message at round 1, is guaranteed not to deliver the message to a dishonest party before round r, and is guaranteed to deliver it to an honest party at round R. To model such a delay channel, we define a delay channel \(\overrightarrow{\mathsf {DC}}_{r,R,Z}\) with message space \(\mathcal {M}\) from party 1 to party 2 with fixed delay that takes a message at round 1 and delivers it at round r if the receiver is dishonest, and at round R if the receiver is honest. The set Z indicates the set of dishonest parties. The channel \(\overleftarrow{\mathsf {DC}}_{r,R,Z}\) in the other direction is analogous.

figure a

To capture that the delay channel is not guaranteed to deliver the message to a dishonest receiver exactly at round r, we consider the \(*\)-relaxation \((\overrightarrow{\mathsf {DC}}_{r,R,Z})^{*_Z}\) on the delay channel at the dishonest interfaces Z. This specification includes resources with no guarantees at Z. For example, the resource may deliver the message later than r, or garbled, or not at all.

Common Randomness Resource. The sketched protocol constructs a common randomness resource \(\mathsf {CRS}\) that outputs a random string. We would like to model a \(\mathsf {CRS}\) that is guaranteed to output the random string at round R to an honest party, but does not output the random string before r to a dishonest party. For that, we first consider a resource which outputs a random string to each honest (resp. dishonest) party at round R (resp. r).

figure b

With the same idea as with the delay channels, we can model a common randomness resource that is guaranteed to deliver the randomness to the honest parties at round R but is not guaranteed to deliver the output to the dishonest parties at round r, by considering a \(*\)-relaxation on the resource over the dishonest interfaces Z, \((\mathsf {CRS}_{r,R,Z})^{*_Z}\).

Two-Party Construction. We describe the 2-party protocol \(\varvec{\pi }= (\pi _1,\pi _2)\) sketched at the beginning of the section and show that it constructs a common randomness resource.

figure c

Lemma 1

\(\varvec{\pi }= (\pi _1,\pi _2)\) constructs the specification \((\mathsf {CRS}_{r,R,Z})^{*_Z}\) from the specification \([(\overrightarrow{\mathsf {DC}}_{r,R,Z})^{*_Z}, (\overleftarrow{\mathsf {DC}}_{r,R,Z})^{*_Z}]\).

Proof

We prove each case separately.

1) \(Z = \varnothing \): In this case, it is easy to see that \(\pi _1 \pi _2 [\overrightarrow{\mathsf {DC}}_{r,R,\varnothing }, \overleftarrow{\mathsf {DC}}_{r,R,\varnothing }] = \mathsf {CRS}_{r,R,\varnothing }\) holds, since the sum of two uniformly random messages is uniformly random.
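The underlying fact, that adding any independent value to a uniform one yields a uniform value, can be checked exhaustively over a small illustrative modulus:

```python
# For every fixed v, the map rnd -> (rnd + v) mod q is a bijection on
# {0, ..., q-1}, so rnd + v is uniform whenever rnd is.
q = 5  # small illustrative modulus
for v in range(q):
    images = sorted((rnd + v) % q for rnd in range(q))
    assert images == list(range(q))
```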

2) \(Z = \{2\}\): Consider now the case where party 2 is dishonest (the case where party 1 is dishonest is similar). Let \(\mathbf{S} := [\overrightarrow{\mathsf {DC}}_{r,R,Z}, \overleftarrow{\mathsf {DC}}_{r,R,Z}]\). It suffices to prove that \(\pi _1 \mathbf{S} \in (\mathsf {CRS}_{r,R,Z})^{*_Z}\) because:

figure d

where the last equality holds because the \(*\)-relaxation is idempotent. Hence, we show that the converter \(\sigma \) described below is such that \(\pi _1 \mathbf{S} = \sigma ^2\mathsf {CRS}_{r,R,Z}\).

figure e

Consider the system \(\pi _1 \mathbf{S} \). At interface 1, at round R.b, the system outputs the value \(\texttt {rnd} + v\), where \(\texttt {rnd} \) is a random value and v is the value received at interface 2 at round 1.b (with \(v=0\) if no value was received). Moreover, at interface 2, at round r.a, the system outputs the value \(\texttt {rnd} \).

Now consider the system \(\sigma ^2\mathsf {CRS}_{r,R,Z}\). At interface 1, at round R.b, the system outputs a random value \(\texttt {rnd} '\). Moreover, at interface 2, at round r.a, the system outputs the value \(\texttt {rnd} ' - v\), where v is the same value received at interface 2 at round 1.b (with \(v=0\) if no value was received).

Since the joint distributions \(\{\texttt {rnd} + v, \texttt {rnd} \}\) and \(\{\texttt {rnd} ', \texttt {rnd} ' - v\}\) are identical, we conclude that \(\pi _1 \mathbf{S} = \sigma ^2\mathsf {CRS}_{r,R,Z}\).
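For a finite message space, this equality of joint distributions can be verified exhaustively; q and v below are illustrative:

```python
q = 7  # illustrative modulus for the message space
v = 3  # an arbitrary fixed value chosen by the dishonest party
# Real system pi_1 S: outputs (rnd + v, rnd) for uniform rnd.
real = sorted(((rnd + v) % q, rnd) for rnd in range(q))
# Ideal system sigma^2 CRS: outputs (rnd', rnd' - v) for uniform rnd'.
ideal = sorted((rnd2, (rnd2 - v) % q) for rnd2 in range(q))
assert real == ideal  # identical joint distributions
```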

   \(\square \)

6 Communication Resources

6.1 Point-to-Point Channels

We model the standard synchronous communication network, where parties have the guarantee that messages input at round k are received by round \(k+1\), and dishonest parties’ round-k messages potentially depend on the honest parties’ round-k messages. Let \(\mathsf {CH}_{\ell , Z}(s,r)\) be a bilateral channel resource with n interfaces, one designated to each party \(i \in \mathcal {P}\), where two of the interfaces, s and r, are designated to the sender and the receiver. The channel is parameterized by the set of dishonest parties \(Z \subseteq \mathcal {P}\). The privacy guarantees are formulated by a leakage function \(\ell (\cdot )\) that determines the information leaked to dishonest parties. For example, in an authenticated channel \(\ell (m) = m\), and in a secure channel \(\ell (m) = |m|\).

figure f

Let \(\mathcal {N}_{Z}\) be the complete network of pairwise secure channels. That is, \(\mathcal {N}_{Z}\) is the parallel composition of secure channels \(\mathsf {CH}_{\ell , Z}(i,j)\) with \(\ell (m) = |m|\), for each pair of parties \(i,j \in \mathcal {P}\).

6.2 Broadcast Resource Specification

Broadcast is an important building block that many distributed protocols use. It allows a specific party, called the sender, to consistently distribute a message. More formally, it provides two guarantees: 1) Every honest party outputs the same value (consistency), and 2) the output value is the sender’s value in case the sender is honest (validity).

The broadcast specification \(\mathcal {BC}_{k,l,Z}(s)\) involves a set of parties \(\mathcal {P}\), where one of the parties is the sender s. It is parameterized by the round numbers k and l indicating when the sender distributes the message and when the parties are guaranteed to receive it. The specification \(\mathcal {BC}_{k,l,Z}(s)\) is the set of all resources satisfying both validity and consistency. That is, there is a value v such that the output at each interface j for \(j \notin Z\) at round l.b is \(y_j^{l.b} = v\), and if the sender is honest, this value is the sender’s input \(x_s^{k.a}\) at round k.a. That is:

figure g

We show how to construct such a broadcast specification in Sect. A. Let \(\mathcal {BC}_{\varDelta ,Z}(s)\) be the parallel composition of \(\mathcal {BC}_{k,k+\varDelta ,Z}(s)\), for each \(k \ge 1\), and let \(\mathcal {BC}_{\varDelta ,Z}\) be the parallel composition of \(\mathcal {BC}_{\varDelta ,Z}(s)\), for each party \(s \in \mathcal {P}\).
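The two guarantees can be phrased as a predicate on a single execution; the helper below is an illustrative sketch (the names are ours, not the paper’s notation):

```python
def is_valid_broadcast(outputs, honest, sender, sender_input):
    """outputs: dict mapping each party to its value at round l.b.
    Checks consistency (all honest outputs equal) and validity
    (an honest sender's input is the common output)."""
    vals = {outputs[j] for j in honest}
    consistency = len(vals) == 1
    validity = (sender not in honest) or vals == {sender_input}
    return consistency and validity
```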

7 The Interactive Computer Resource

In this section, we introduce a simple ideal interactive computer resource with n interfaces. Interfaces \(1,\dots ,n-1\) are used to give input values and receive output values. Interface n is used to input instruction commands. The resource has a memory which is split into two parts: an array S storing values and a queue C storing instruction commands to be processed. We describe the functionality of the resource in two parts: storing the instructions that are input at \(n\), and processing the instructions.

Store Instructions. On input an instruction at interface \(n\) at round r, the instruction is stored in the queue C. Then, after a fixed number of rounds, the instruction is output at each honest interface i; at the dishonest interfaces it is output already in the first semi-round, r.a.

Instruction Processing. The interactive computer processes instructions sequentially. There are three types of instructions that the resource can process. Each instruction type has a fixed number of rounds.

  1.

    An input instruction \((\textsc {input},i, p)\) instructs the resource to read a value from a value space \(\mathcal {V}\) at interface i and store it at position p of the array S. If party i is honest, it inputs the value at the first round of processing the input instruction, otherwise it inputs the value at the last round. This models the fact that a dishonest party i can defer the choice of the input value to the end of processing the instruction.

  2.

    An output instruction \((\textsc {output},i,p)\) instructs the computer to output the value stored at position p to interface i. If party i is dishonest, it receives the value at the first round of processing the output instruction. Otherwise, the value is output at the last round of processing the instruction.

  3.

    A computation instruction \((\textsc {op},p_1,p_2,p_3)\), \(\textsc {op} \in \{\textsc {add}, \textsc {mult} \}\), instructs the computer to add or to multiply the values at positions \(p_1\) and \(p_2\) and store the result at \(p_3\).

One could consider different refinements of the interactive computer. For example, a computer that can receive lists of instructions, process instructions in parallel, or a computer that allows instructions to be the result of a computation using values from S. For simplicity, we stick to a simple version of the computer and leave possible refinements to future work.

figure h

8 Protocol Simple MPC

We adapt Maurer’s Simple MPC protocol [27], originally described for SFE in the stand-alone setting, to realize the resource \(\mathsf {Computer}\) from Sect. 7, thereby proving a much stronger (and composable) statement. The protocol is run among a set \(\mathcal {P}= \{1,\dots ,n\}\) of n parties. Parties \(1,\dots ,n-1\) process the instructions, give input values and obtain output values. Party n has access to the instructions that the other parties need to execute.

General Adversaries. In many protocols, the sets of possible dishonest parties are specified by a threshold t, which indicates that any set of dishonest parties has size at most t. In this protocol, however, one specifies a so-called adversary structure \(\mathcal {Z}\), which is a monotone set of subsets of parties, where each subset indicates a possible set of dishonest parties. We are interested in the condition that no three sets in \(\mathcal {Z}\) cover \([n-1]\), also known as \(\mathcal {Q}^3([n-1],\mathcal {Z})\) [16].
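The \(\mathcal {Q}^3\) condition can be tested directly on the maximal sets of a monotone adversary structure; the following helper is an illustrative sketch:

```python
from itertools import combinations_with_replacement

def q3(parties, Z_max):
    """True iff no three (not necessarily distinct) sets of the monotone
    structure, given by its maximal sets Z_max, cover `parties`."""
    return all(not parties <= (A | B | C)
               for A, B, C in combinations_with_replacement(Z_max, 3))
```

For example, with four parties the threshold-1 structure satisfies the condition, while the threshold-2 structure does not, since two sets of size 2 already cover all parties.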

8.1 Protocol Description

Let \(\mathcal {Z}\) be an adversary structure that satisfies \(\mathcal {Q}^3([n-1],\mathcal {Z})\). Protocol \(\mathsf {sMPC}= (\pi _1,\dots ,\pi _{n})\) constructs the resource \(\mathsf {Computer}_Z\), introduced in Sect. 7, for any \(Z \in \mathcal {Z}\). For sets \(Z \notin \mathcal {Z}\), the protocol constructs the trivial specification \(\varPhi \).

Assumed Specifications. The protocol assumes the following specifications: a network specification \(\mathcal {N}_Z\) among the parties in \(\mathcal {P}\) (see Sect. 6.1) and a parallel broadcast specification \(\mathcal {BC}_{\varDelta ,Z}\) which is the parallel composition of broadcast channels where any party in \(\mathcal {P}\) can be a sender and the set of recipients is \(\mathcal {P}\) (see Sect. 6.2).

Converters. The converter \(\pi _{n}\) is the identity converter. It gives direct access to the flow of instructions that the parties need to process. Because the instructions are delivered to the parties in \(\mathcal {P}\) via the broadcast specification \(\mathcal {BC}_{\varDelta ,Z}(n)\), the parties agree on the next instruction to execute.

We now describe the converters \(\pi _1,\dots ,\pi _{n-1}\). Each converter \(\pi _i\) keeps an (initially empty) array L with the current stored values, and a queue C of instructions to be executed. Each time an instruction is received from \(\mathcal {BC}_{\varDelta ,Z}(n)\), it is added to C and also output. Each instruction in C is processed sequentially.

In order to describe how to process each instruction, we consider the adversary structure \(\mathcal {Z}' := \{Z \setminus \{n\} : \ Z \in \mathcal {Z}\}\). Let the maximal sets in \(\mathcal {Z}'\) be \(\max (\mathcal {Z}') := \{Z_1,\dots ,Z_m\}\).

Input Instruction \((\mathbf{input} ,i,p)\), for \(i \in [n-1]\). Converter \(\pi _i\) does as follows: On input a value s from the outside interface, compute shares \(s_1,\dots ,s_m\) using an m-out-of-m secret-sharing scheme (m is the number of maximal sets in \(\mathcal {Z}'\)). That is, compute random summands such that \(s = \sum _{j=1}^m s_j\). Then, output \(s_j\) to the inside interface \(\mathtt {in}.\mathtt {net}.\mathtt {ch}_{i,k}\), for each party \(k \in \overline{Z_j}\).

Then each converter for a party in \(\overline{Z_j}\) echoes the received shares to all parties in \(\overline{Z_j}\), i.e. it outputs the received shares to \(\mathtt {in}.\mathtt {net}.\mathtt {ch}_{i,k}\), for each party \(k \in \overline{Z_j}\). If a converter obtained different values, it broadcasts a complaint message, i.e. it outputs a complaint message at \(\mathtt {in}.\mathtt {bc}\). In such a case, \(\pi _i\) broadcasts the share \(s_j\). At the end of the process, the converters store the received shares in their arrays, along with the information that the value was assigned to position p. Intuitively, a consistent sharing ensures that no matter which set \(Z_k\) of parties is dishonest, they miss the share \(s_k\), and hence s remains secret.
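The sharing step itself, splitting s into random summands, can be sketched as follows (the modulus Q is illustrative, and the echo/complaint phase is omitted):

```python
import secrets

Q = 2**61 - 1  # illustrative modulus for the value space

def share(s, m):
    """m-out-of-m additive sharing: m random summands with sum s mod Q.
    Share s_j is later sent to every party outside the maximal set Z_j."""
    summands = [secrets.randbelow(Q) for _ in range(m - 1)]
    summands.append((s - sum(summands)) % Q)
    return summands
```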

Output Instruction \((\mathbf{output} ,i,p)\), for \(i \in [n-1]\). Each converter \(\pi _l\), \(l \in [n-1]\), outputs all the stored shares assigned to position p at interface \(\mathtt {in}.\mathtt {net}.\mathtt {ch}_{l,i}\). Converter \(\pi _i\) does the following: Let \(v_j^l\) be the value received from party l as share j at \(\mathtt {in}.\mathtt {net}.\mathtt {ch}_{l,i}\). Then, converter \(\pi _i\) reconstructs each share \(s_j\) as the value v such that \(\{l \ \vert \ v_j^l \ne v\} \in \mathcal {Z}\), and outputs \(\sum _j s_j\).
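The reconstruction of a single share from the reported values can be sketched as follows, with the adversary structure represented by its maximal sets (an illustrative helper; under the protocol’s conditions the admissible value is the correct one):

```python
def reconstruct_share(reported, Z_max):
    """reported: dict mapping each party to the value it claims for one
    share s_j.  Returns a value v such that the set of parties deviating
    from v is an admissible dishonest set."""
    for v in set(reported.values()):
        deviating = {l for l, u in reported.items() if u != v}
        if any(deviating <= bad for bad in Z_max):
            return v
    raise ValueError("no admissible value found")
```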

Addition Instruction \((\mathbf{add} ,p_1,p_2,p_3)\). Each converter for a party in \(\overline{Z_j}\) adds the j-th shares of the values assigned to positions \(p_1\) and \(p_2\), and stores the result as the j-th share of the value at position \(p_3\).

Multiplication Instruction \((\mathbf{mult} ,p_1,p_2,p_3)\). The goal is to compute a share of the product ab, assuming that the converters have stored shares of a and of b respectively. Given that \(ab = \sum _{p,q = 1}^m a_pb_q\), it suffices to compute shares of each term \(a_pb_q\), and add the shares locally. In order to compute a sharing of \(a_pb_q\), the converter for each party \(i \in \overline{Z_p} \cap \overline{Z_q}\) executes the same steps as the input instruction, with the value \(a_pb_q\). Then, converters for parties in \(\overline{Z_p} \cap \overline{Z_q}\) check that they all shared the same value by reconstructing the difference of every pair of shared values. In the case that all differences are zero, they store the shares of a fixed party (e.g. the shares from the party in \(\overline{Z_p} \cap \overline{Z_q}\) with the smallest index). Otherwise, each term \(a_p\) and \(b_q\) is reconstructed, and the default sharing \((a_pb_q,0,\dots ,0)\) is adopted.
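The identity behind this step, that the cross terms \(a_pb_q\) sum to ab, can be checked directly (modulus and parameters are illustrative):

```python
import secrets

q = 97  # illustrative modulus
m = 4   # illustrative number of maximal sets, i.e. number of summands
a, b = 41, 57

def additive_share(x):
    """m random summands with sum x mod q."""
    sh = [secrets.randbelow(q) for _ in range(m - 1)]
    sh.append((x - sum(sh)) % q)
    return sh

a_sh, b_sh = additive_share(a), additive_share(b)
# Re-sharing every product a_p * b_q and summing the resulting shares
# locally yields an additive sharing of a * b.
total = sum(a_sh[p] * b_sh[s] for p in range(m) for s in range(m)) % q
assert total == (a * b) % q
```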

Theorem 1

Let \(\mathcal {P}= \{1,\dots ,n\}\), and let \(\mathcal {Z}\) be an adversary structure that satisfies \(\mathcal {Q}^3([n-1], \mathcal {Z})\). Protocol \(\mathsf {sMPC}\) constructs \((\mathsf {Computer}_Z)^{*_Z}\) with parameters \((r_i,r_o,r_a,r_m,r_s) = (2\varDelta +2,1,0,2\varDelta + 4,\varDelta )\) from \([\mathcal {N}_{Z},\mathcal {BC}_{\varDelta ,Z}]\), for any \(Z \in \mathcal {Z}\), and constructs \(\varPhi \) otherwise.

Proof

Case \(Z = \varnothing \): In this case all parties are honest. We need to argue that:

$$\begin{aligned} \mathcal {R}_{\varnothing } := \mathsf {sMPC}[\mathcal {N}_{\varnothing },\mathcal {BC}_{\varDelta ,\varnothing }] = \mathsf {Computer}_{\varnothing }. \end{aligned}$$

At the start of the protocol, the computer resource \(\mathsf {Computer}_{\varnothing }\) and each protocol converter have an empty queue C of instructions and an empty array L of values. Consider the system \(\mathcal {R}_{\varnothing }\). Each time party \(n\) inputs an instruction I to \(\mathcal {BC}_{\varDelta ,\varnothing }(n)\), validity guarantees that after \(\varDelta \) rounds each protocol converter receives I, stores I in the queue C and outputs I at interface \(\mathtt {out}\). Each converter processes the instructions in its queue sequentially, and each instruction takes the same fixed number of rounds to be processed for all parties. Hence, all honest parties keep a queue with the same instructions throughout the execution of the protocol.

Now consider the system \(\mathsf {Computer}_{\varnothing }\). It stores each instruction input at interface \(n\) in its queue C, and outputs the instruction I at each party interface \(i \in \mathcal {P}\) after \(\varDelta \) rounds. The instructions are processed sequentially, and it takes the same amount of rounds to process each instruction as in \(\mathcal {R}_{\varnothing }\).

We then conclude that each queue for each protocol converter in \(\mathcal {R}_{\varnothing }\) contains exactly the same instructions as the queue in \(\mathsf {Computer}_{\varnothing }\).

We now argue that the behavior of both systems is identical not only when storing the instructions, but also when processing them.

Let us look at the content of the arrays L that \(\mathsf {Computer}_{\varnothing }\) and each protocol converter in \(\mathcal {R}_{\varnothing }\) hold. Whenever a value s is stored in the array L of \(\mathsf {Computer}_{\varnothing }\) at position p, there are values \(s_l\) such that \(s = \sum _{l=1}^m s_l\) and \(s_l\) is stored in each converter \(\pi _j\) with \(j \notin Z_l\). For each value \(s_l\), the converters that store \(s_l\) also store additional information containing the position p and the index l.

Consider an input instruction \((\textsc {input},i,p)\) at round k, where a value x is input at interface i in the next round. In the system \(\mathcal {R}_{\varnothing }\), the converter \(\pi _i\) computes values \(s_l\) such that \(x = \sum _{l=1}^m s_l\) and sends each \(s_l\) to each converter \(\pi _j\) with \(j \notin Z_l\). All broadcast messages are 0, i.e. there are no complaints, and as a consequence \(s_l\) is stored in each converter \(\pi _j\) with \(j \notin Z_l\). In the system \(\mathsf {Computer}_{\varnothing }\), the value x is stored at the p-th position of the array L.

Consider an output instruction \((\textsc {output},i,p)\). In the system \(\mathcal {R}_{\varnothing }\), each converter \(\pi _j\) sends the corresponding previously stored values \(s_l\) associated with position p, and \(\pi _i\) outputs \(x = \sum _{l=1}^m s_l\). In the system \(\mathsf {Computer}_{\varnothing }\), the value x stored at the p-th position of the array L is output at interface i.

Consider an addition instruction, \((\textsc {add},p_1,p_2,p_3)\). In the system \(\mathcal {R}_{\varnothing }\) each converter adds, for each share index l, the corresponding values associated with position \(p_1\) and \(p_2\), and stores the result as a value associated with position \(p_3\) and index l. In the system \(\mathsf {Computer}_{\varnothing }\), the sum of the values a and b stored at the \(p_1\)-th and \(p_2\)-th positions is stored at position \(p_3\).

Consider a multiplication instruction \((\textsc {mult},p_1,p_2,p_3)\). In the ideal system \(\mathsf {Computer}_{\varnothing }\), the product of the values a and b stored at positions \(p_1\) and \(p_2\) is stored at position \(p_3\). In the system \(\mathcal {R}_{\varnothing }\), let \(a_p\) (resp. \(b_q\)) be the value associated with position \(p_1\) (resp. \(p_2\)) and with index p (resp. q) that each converter for a party in \(\overline{Z_p}\) (resp. \(\overline{Z_q}\)) holds. For each \(1 \le p,q \le m\), consider each protocol converter for party \(j \in \overline{Z_p} \cap \overline{Z_q}\). (Note that since the adversary structure satisfies \(\mathcal {Q}^{3}([n-1],\mathcal {Z})\), for any two sets \(Z_p, Z_q \in \mathcal {Z}\) we have \(\overline{Z_p} \cap \overline{Z_q} \ne \varnothing \).) The converter does the following steps:

  1.

    Input instruction steps with the value \(a_pb_q\) as input. As a result, each converter for a party in \(\overline{Z_u}\) stores, for each \(j \in \overline{Z_p} \cap \overline{Z_q}\), a value which we denote \(v_{j}^u\).

  2.

    Execute the output instruction with the values \(v_{j}^u - v_{j_0}^u\) towards all parties in \([n-1]\). As a result, every party obtains 0, and the value \(v_{j_0}^u\) is stored.

  3.

    The value associated with position \(p_3\) and index p, stored by each converter for a party in \(\overline{Z_p}\), is the sum \(w_p = \sum _{j_0} v_{j_0}^p\).

As a result, each party in \(\overline{Z_p}\) stores \(w_p\), and \(\sum _{p} w_p = ab\).

Case \(Z \ne \varnothing \): In this case, the statement is only non-trivial if \(Z \in \mathcal {Z}\), because otherwise the ideal system specification is \(\mathcal {S}_Z = \varPhi \), i.e. there are no guarantees.

We need to show that when executing \(\mathsf {sMPC}\) with the assumed specification, we obtain a system in the specification \((\mathsf {Computer}_Z)^{*_Z}\). That is, for each network resource \(\mathsf {N} \in \mathcal {N}_{Z}\) and parallel broadcast resource \(\mathsf {PBC} = [\mathsf {BC}_1,\dots ,\mathsf {BC}_n] \in \mathcal {BC}_{\varDelta ,Z}\) we need to find a system \(\sigma \) such that:

$$\begin{aligned} \mathbf{R} := \mathsf {sMPC}_{\mathcal {P}\setminus Z}[\mathsf {N},\mathsf {PBC}] = \mathbf{S} := \sigma ^Z \mathsf {Computer}_Z. \end{aligned}$$
figure i

We first argue that the instructions written to the queue C in the resource \(\sigma ^Z\mathsf {Computer}_Z\) follow the same distribution as the instructions that the honest parties store in their queues in the system \(\mathbf{R} \). If party \(n\) is honest, this is true, as argued in the previous case for \(Z = \varnothing \). If party \(n\) is dishonest, the converter \(\sigma \) inputs instructions distributed identically to those that \(\mathsf {BC}_n\) outputs to the honest parties in \(\mathbf{R} \): it emulates the behavior of \(\mathsf {BC}_n\), taking into account the inputs from dishonest parties provided at the outside interface, with the honest parties’ inputs set to \(\bot \).

Now we need to show that the messages that dishonest parties receive in both systems are equally distributed. We argue about each single instruction separately. Let I be the next instruction to be executed.

Input instruction: \(I=(\textsc {input},i,p)\). We consider two cases, depending on whether party i is honest.

Dishonest party i. In the system \(\mathbf{R} \), if a complaint message is generated by an honest party, the exact same complaint message will be output by \(\sigma \) in the system \(\mathbf{S} \). This is because \(\sigma \) stores the shares that the dishonest parties input at the outside interface and checks that they are consistent. Moreover, at the end of the input instruction it is guaranteed that all shares are consistent (i.e., all honest parties in each \(\overline{Z_q}\) have the same share), and hence the sum of the shares is well-defined. This exact sum is input to \(\mathsf {Computer}_Z\) by \(\sigma \).

Honest party i. In this case, the converter \(\sigma \) generates and outputs random consistent values as the shares for the dishonest parties. On input a complaint from a dishonest party, it outputs the corresponding share at the broadcast interface to all dishonest parties. In the system \(\mathbf{R} \), dishonest parties also receive shares that are randomly distributed. Observe that in this case, the correct value is stored in the array of \(\mathsf {Computer}_Z\), but \(\sigma \) only has the shares of the dishonest parties.

Output instruction: \(I=(\textsc {output},i,p)\). In this case, the emulation is only non-trivial if party i is dishonest. The converter outputs random shares such that the sum of these random shares and the stored shares of the dishonest parties equals the output value x obtained from \(\mathsf {Computer}_Z\). Observe that in the system \(\mathbf{R} \), the shares sum up to the value x as well, because of the \(\mathcal {Q}^3\) condition. Given that the correct value was stored in every input instruction, the shares output by \(\sigma \) follow the same distribution as the shares received by the dishonest parties in \(\mathbf{R} \) (namely, random shares subject to the constraint that the sum of the random shares and the dishonest shares equals x).
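The sampling performed by \(\sigma \) for this step can be sketched as follows: the honest-looking shares are uniform, subject to totalling x together with the stored dishonest shares (names and modulus are illustrative):

```python
import secrets

Q = 101  # illustrative modulus

def simulate_honest_shares(x, dishonest_shares, num_honest):
    """Sample shares for the honest parties, uniformly random subject to
    the constraint that all shares together sum to x mod Q."""
    sh = [secrets.randbelow(Q) for _ in range(num_honest - 1)]
    sh.append((x - sum(dishonest_shares) - sum(sh)) % Q)
    return sh
```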

Addition instruction: \(I = (\textsc {add},p_1,p_2,p_3)\). The converter \(\sigma \) simply adds the corresponding shares and stores them in the correct location.

Multiplication instruction: \(I = (\textsc {mult},p_1,p_2,p_3)\). Consider each \(1 \le p,q \le m\). Consider the following steps in the execution of the multiplication instruction in \(\mathbf{R} \):

  1.

    Honest parties execute the input instruction steps with the value \(a_pb_q\) as input. Dishonest parties can use any value as input. However, it is guaranteed that the sharing is consistent. That is, each converter for an honest party in \(\overline{Z_u}\) stores, for each \(j \in \overline{Z_p} \cap \overline{Z_q}\), a value which we denote \(v_{j}^u\).

  2.

    Execute the output instruction with the values \(v_{j}^u - v_{j_0}^u\) towards all parties in \(\mathcal {P}\). If any dishonest party used a value different from \(a_pb_q\) in the previous step, one difference will be non-zero, and the default sharing \((a_pb_q,0,\dots ,0)\) is adopted. Otherwise, the sharing from \(P_{j_0}\), i.e. the values \(v_{j_0}^u\), is adopted.

Case \(Z \cap \overline{Z_{p}} \cap \overline{Z_{q}} \ne \varnothing \): If there is a dishonest party in \(\overline{Z_{p}} \cap \overline{Z_{q}}\), then the converter \(\sigma \) has the values \(a_p\) and \(b_q\) stored.

Step 1: For each dishonest party \(i \in \overline{Z_{p}} \cap \overline{Z_{q}}\), the converter \(\sigma \) checks whether the shares are correctly shared (it checks that the dishonest parties in \(\overline{Z_{p}} \cap \overline{Z_{q}}\) input a consistent sharing), in the same way as when emulating the input instruction.

Step 2: After that, \(\sigma \) checks that the shares from party i add up to \(a_pb_q\). If not, the converter \(\sigma \) defines the sharing of \(a_pb_q\) as \((a_pb_q,0,\dots ,0)\), and outputs the corresponding shares to the dishonest parties.

Observe that given that the adversary structure satisfies the \(\mathcal {Q}^3\) condition, there is always an honest party in \(\overline{Z_{p}} \cap \overline{Z_{q}}\). Then, in the system \(\mathbf{R} \), it is guaranteed that the value \(a_pb_q\) is shared. Moreover, as in \(\mathbf{S} \), the default sharing is adopted if and only if a dishonest party shared a value different from \(a_pb_q\).

Case \(Z \cap \overline{Z_{p}} \cap \overline{Z_{q}} = \varnothing \): If all parties in \(\overline{Z_{p}} \cap \overline{Z_{q}}\) are honest, dishonest parties receive random shares in \(\mathbf{R} \). Moreover, all reconstructed differences are 0, since the honest parties in \(\overline{Z_{p}} \cap \overline{Z_{q}}\) share the same value. In \(\mathbf{S} \), \(\sigma \) generates random values as the shares of \(a_pb_q\) as well, and then opens 0s as the reconstructed differences.

   \(\square \)

9 Concluding Remarks

The fact that the construction notion in Definition 1 states a guarantee for every possible set of dishonest parties might suggest that our model cannot be extended to the setting of adaptive corruptions. However, the term adaptive corruption most often refers to the fact that a resource can be adaptively compromised, e.g. a party’s computer has a weakness (e.g. a virus) which allows the adversary to take it over, depending on environmental events. This can be modeled by stating the party’s resources explicitly, with an interface to the adversary and with a so-called free interface on which the corruptibility can be (adaptively) initiated. If one takes this viewpoint, it is actually natural to consider a more fine-grained model of the resources (e.g. the computer, the memory, and the randomness resource as separate resources), each with its own meaning of what “corruption” means. Note that the guarantees for honest parties whose resources have been (partially) taken over are (and must be) still captured by the constructed resource specification.