1 Introduction

1.1 Background

In the setting of secure multiparty computation, a set of mutually distrusting parties wish to jointly and securely compute a function of their inputs. This computation should be such that each party receives its correct output, and none of the parties learn anything beyond their prescribed output. In more detail, the most important security properties that we wish to capture are: privacy (no party should learn anything more than its prescribed output), correctness (each party is guaranteed that the output that it receives is correct), independence of inputs (the corrupted parties must choose their inputs independently of the honest parties’ inputs), fairness (corrupted parties should receive their output if and only if honest parties do; see Footnote 1), and guaranteed output delivery (corrupted parties should not be able to prevent honest parties from receiving their output). The standard definition today [4, 15] formalizes the above requirements (and others) in the following general way. Consider an ideal world in which an external trusted party is willing to help the parties carry out their computation. An ideal computation takes place in this ideal world by having the parties simply send their inputs to the trusted party, who then computes the desired function and passes each party its prescribed output. The security of a real protocol is established by comparing the outcome of the protocol to the outcome of an ideal computation. Specifically, a real protocol that is run by the parties (without any trusted party) is secure if an adversary controlling a coalition of corrupted parties can do no more harm in a real execution than in the above ideal execution.

The above informal description is “overly ideal” in the following sense. It is well known that unless an honest majority is assumed, it is impossible to obtain generic protocols for secure multiparty computation that guarantee output delivery and fairness [6]. The definition is therefore typically relaxed when no honest majority is assumed. In particular, under certain circumstances, honest parties may not receive any output, and fairness is not always guaranteed. Recently, it was shown that it is actually possible to securely compute some (in fact, many) two-party functionalities fairly [2, 10]. In addition, it is even possible to compute some multiparty functionalities fairly, for any number of corrupted parties; in particular, the majority function may be securely computed fairly with three parties, and the Boolean OR function may be securely computed fairly for any number of parties [11]. This has spurred interest in the question of fairness in the setting of no honest majority.

1.2 Fairness Versus Guaranteed Output Delivery

The two notions of fairness and of guaranteed output delivery are quite similar and are often interchanged. However, there is a fundamental difference between them. If a protocol guarantees output delivery, then the parties always obtain output and cannot abort. In contrast, if a protocol is fair, then it is only guaranteed that if one party receives output then all parties receive output. Thus, it is possible that all parties abort. In order to emphasize the difference between the notions, we note that every protocol that provides guaranteed output delivery can be transformed into a protocol that provides fairness but not guaranteed output delivery, as follows. At the beginning, every party broadcasts OK; if some party did not send OK, then all parties output \(\bot \); otherwise, the parties execute the original protocol (which ensures guaranteed output delivery). Clearly, any single party can cause the protocol to abort. However, it can do so only before any information has been obtained. Thus, the resulting protocol is fair, but does not guarantee output delivery.
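As a toy illustration (not the paper's formal model), this transformation can be sketched in code; `run_original` is a hypothetical stand-in for the original protocol with guaranteed output delivery, and parties are collapsed into plain data:

```python
# Sketch of the wrapper: every party first announces OK; if any announcement
# is missing, all parties output the abort symbol before any information
# about the inputs has been exchanged, so the wrapped protocol is fair but
# does not guarantee output delivery.

BOT = "bot"  # stands for the abort symbol ⊥

def wrapped_protocol(parties, run_original):
    """parties: list of dicts with keys 'ok' and 'input';
    run_original: the original protocol with guaranteed output delivery,
    modeled here simply as a function from the input vector to the
    output vector."""
    # Round 1: every party broadcasts OK.
    if not all(p["ok"] for p in parties):
        # Some party withheld OK: everyone aborts, no output delivered.
        return [BOT] * len(parties)
    # Afterward: run the original protocol, which always gives output.
    return run_original([p["input"] for p in parties])

# Example with three parties computing Boolean OR.
f_or = lambda xs: [int(any(xs))] * len(xs)
all_ok = [{"ok": True, "input": b} for b in (1, 0, 0)]
one_refuses = [{"ok": i != 1, "input": b} for i, b in enumerate((1, 0, 0))]
print(wrapped_protocol(all_ok, f_or))       # [1, 1, 1]
print(wrapped_protocol(one_refuses, f_or))  # ['bot', 'bot', 'bot']
```

The wrapper aborts, if at all, before the first bit of the inputs is used, which is exactly why fairness is preserved while output delivery is not.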

It is immediate to see that guaranteed output delivery implies fairness, since if all parties must receive output then it is not possible for the corrupted parties to receive output while the honest do not. However, the opposite direction is not clear. In the two-party case, guaranteed output delivery is indeed implied by fairness since upon receiving abort, the honest party can just compute the function on its own input and a default input for the other party. However, when there are many parties involved, it is not possible to replace inputs with default inputs since the honest parties do not necessarily know who is corrupted (and security mandates that honest parties’ inputs cannot be changed; otherwise, this could be disastrous in an election-type setting). This leads us to the following fundamental questions, which until now have not been considered at all (indeed, fairness and guaranteed output delivery are typically used synonymously):

Does fairness imply guaranteed output delivery? Do there exist functionalities that can be securely computed with fairness but not with guaranteed output delivery? Are there conditions on the function/network model for which fairness implies guaranteed output delivery?

The starting point of our work is the observation that the broadcast functionality does actually separate guaranteed output delivery and fairness. Specifically, let n denote the overall number of parties, and let t denote an upper bound on the number of corrupted parties. Then, it is well known that secure broadcast can be achieved if and only if \(t<n/3\) [17, 19] (see Footnote 2). However, it is also possible to achieve detectable broadcast (meaning that either all parties abort and no one receives output, or all parties receive and agree upon the broadcasted value) for any \(t<n\) [8]. In our terms, this is a secure computation of the broadcast functionality with fairness but no guaranteed output delivery. Thus, we see that for \(t\ge n/3\) there exist functionalities that can be securely computed with fairness but not with guaranteed output delivery (the fact that broadcast cannot be securely computed with guaranteed output delivery for \(t \ge n/3\) follows directly from the bounds on Byzantine Generals [17, 19]). Although broadcast does provide a separation, it is an atypical function. Specifically, there is no notion of privacy, and the functionality can be computed information theoretically for any \(t<n\) given a secure setup phase [20]. Thus, broadcast is a trivial functionality (see Footnote 3). This leaves open the question of whether fairness and guaranteed output delivery remain distinct for more “standard” secure computation tasks.

It is well known that for \(t<n/2\) any multiparty functionality can be securely computed with guaranteed output delivery given a broadcast channel [14, 21]. Fitzi et al. [9] used detectable broadcast in the protocols of [14, 21] and showed that any functionality can be securely computed with fairness for \(t<n/2\), even without a broadcast channel. This leaves open the question of whether there exist functionalities (apart from broadcast) that cannot be securely computed with guaranteed output delivery in a point-to-point network for \(n/3 \le t < n/2\).

Gordon and Katz [11] showed that the three-party majority function and multiparty Boolean OR function can be securely computed with guaranteed output delivery for any number of corrupted parties (in particular, with an honest minority). However, the constructions of [11] use a broadcast channel. This leads us to the following questions for the range of \(t\ge n/3\):

  1. Can the three-party majority function and multiparty Boolean OR function be securely computed with guaranteed output delivery without a broadcast channel?

  2. Can the three-party majority function and multiparty Boolean OR function be securely computed with fairness without a broadcast channel?

  3. Does the existence of broadcast make a difference with respect to fairness and/or guaranteed output delivery in general?

We remark that, conceptually, guaranteed output delivery is a stronger notion of security, and it is what is required in some applications. Consider the application of “mental poker”: if guaranteed output delivery is not achieved, then a corrupted party can cause the execution to abort whenever it is dealt a bad hand. This is clearly undesirable.

1.3 Our Results

1.3.1 Separating Fairness and Guaranteed Output Delivery

We show that the three-party majority function, which can be securely computed with fairness [11], cannot be securely computed with guaranteed output delivery. Thus, there exist non-trivial functionalities (i.e., functionalities that cannot be securely computed in the information-theoretic setting without an honest majority) for which fairness can be achieved but guaranteed output delivery cannot. Technically, we show this by proving that the three-party majority function can be used to achieve broadcast, implying that it cannot be securely computed with guaranteed output delivery.

Theorem 1.1

(informal). Consider a model without a broadcast channel and consider any \(t\ge n/3\). Then, there exist non-trivial functionalities f (e.g., the majority function) such that f can be securely computed with fairness but f cannot be securely computed with guaranteed output delivery.

This proves that fairness and guaranteed output delivery are distinct, at least in a model without a broadcast channel.

1.3.2 Feasibility of Guaranteed Output Delivery Without Broadcast

The protocols of [11] for majority and Boolean OR both use a broadcast channel to achieve guaranteed output delivery. As shown in Theorem 1.1, this is essential for achieving their result for the majority function. However, is this also the case for the Boolean OR function? In general, do there exist non-trivial functionalities for which guaranteed output delivery is achievable without a broadcast channel and for any number of corrupted parties?

Theorem 1.2

(informal). Consider a model without a broadcast channel and consider any number of corruptions. Then, there exist non-trivial functionalities f (e.g., the Boolean OR function) such that f can be securely computed with guaranteed output delivery.

1.3.3 On the Role of Broadcast

We show that the existence or nonexistence of broadcast is meaningless with respect to fairness, but of great significance with respect to guaranteed output delivery. Specifically, we show the following:

Theorem 1.3

(informal). Let f be a multiparty functionality. Then:

  1. There exists a protocol for securely computing f with fairness with a broadcast channel if and only if there exists a protocol for securely computing f with fairness without a broadcast channel.

  2. If there exists a protocol for securely computing f with fairness (with or without a broadcast channel), then there exists a protocol for securely computing f with guaranteed output delivery with a broadcast channel.

Thus, fairness and guaranteed output delivery are equivalent in a model with a broadcast channel. In contrast, by Theorem 1.1, without broadcast fairness does not imply guaranteed output delivery (otherwise, the separation in Theorem 1.1 would not be possible), and so the two notions are distinct in that setting. We also show that under black-box reductions, fairness never helps to achieve guaranteed output delivery. That is:

Theorem 1.4

(informal). Let f be a multiparty functionality and consider a hybrid model where a trusted party computes f fairly for the parties (i.e., either all parties receive output or none do). Then, there exists a protocol for securely computing f with guaranteed output delivery in this hybrid model if and only if there exists a protocol for securely computing f with guaranteed output delivery in the real model with no trusted party.

Intuitively, Theorem 1.4 follows from the fact that an adversary can always cause the calls to f to result in abort, in which case they are of no help. This does not contradict item (2) of Theorem 1.3, since given a broadcast channel and non-black-box access to the protocol that computes f with fairness, it is possible to apply a variant of the GMW compiler [14] and detect which party cheated and caused the abort.

1.3.4 Conditions Under which Fairness Implies Guaranteed Output Delivery

We have already seen that fairness implies guaranteed output delivery given broadcast. We also consider additional scenarios in which fairness implies guaranteed output delivery. We prove that if a functionality can be securely computed with fairness and identifiable abort (meaning that the identity of the cheating party is detected), then the functionality can be securely computed with guaranteed output delivery. Finally, we show that in the fail-stop model (where the only thing an adversary can do is instruct a corrupted party to halt prematurely), fairness is always equivalent to guaranteed output delivery. This follows from the fact that broadcast is trivial in the fail-stop model.

1.3.5 Identifiable Abort and Broadcast

In the model of identifiable abort, the identity of the cheating party is revealed to the honest parties. This definition was explicitly used by Aumann and Lindell [1], who remarked that it is met by most protocols (e.g., [14]), but not all (e.g., [13]). This model has the advantage that a cheating adversary who runs a “denial of service” attack and causes the protocol to abort cannot go undetected. Thus, it cannot repeatedly prevent the parties from obtaining output. An interesting corollary that comes out of our work (albeit not related to fairness and guaranteed output delivery) is that security with identifiable abort cannot be achieved in general for \(t\ge n/3\) without broadcast. This follows from the fact that if identifiable abort can be achieved in general (even without fairness), then it is possible to achieve broadcast. Thus, we conclude:

Corollary 1.5

(informal). Consider a model without a broadcast channel and consider any \(t\ge n/3\). Then, there exist functionalities f that cannot be securely computed with identifiable abort.

1.3.6 Summary of Feasibility Results

Table 1 below summarizes the state of affairs regarding feasibility for secure computation with fairness and guaranteed output delivery, for different ranges regarding the number of corrupted parties.

Table 1 Feasibility of fairness and guaranteed output delivery

2 Definitions and Preliminaries

Notation We let \(\kappa \in {\mathbb {N}}\) denote the security parameter. A function \(\mathsf{negl}(\kappa )\) is negligible if for every positive polynomial \(p(\kappa )\) and all sufficiently large \(\kappa \in {\mathbb {N}}\) it holds that \(\mathsf{negl}(\kappa ) < 1/p(\kappa )\). A distribution ensemble \(X = \left\{ {X(a,\kappa )}\right\} _{a\in \{0,1\}^*,\kappa \in {\mathbb {N}}}\) is an infinite sequence of random variables indexed by \(a\in \{0,1\}^*\) and \(\kappa \in {\mathbb {N}}\). Two distribution ensembles \(X=\left\{ {X(a,\kappa )}\right\} _{a\in \{0,1\}^*,\kappa \in {\mathbb {N}}}\) and \(Y=\left\{ {Y(a,\kappa )}\right\} _{a\in \{0,1\}^*,\kappa \in {\mathbb {N}}}\) are computationally indistinguishable (denoted \(X\mathop {\equiv }\limits ^\mathrm{c}Y\)) if for every non-uniform polynomial-time distinguisher \(\mathcal{D}\) there exists a function \(\mathsf{negl}(\kappa )\), such that for every \(a\in \{0,1\}^*\) and all sufficiently large \(\kappa \)’s

$$\begin{aligned} \left| {\Pr }[\mathcal{D}(X(a,\kappa ),1^\kappa )=1] - {\Pr }[\mathcal{D}(Y(a,\kappa ),1^\kappa )=1] \right| \le \mathsf{negl}(\kappa ). \end{aligned}$$

Functionalities An n-party functionality is a random process that maps vectors of n inputs to vectors of n outputs, denoted as \(f : (\{0,1\}^*)^n \rightarrow (\{0,1\}^*)^n\), where \(f = (f_1, \ldots , f_n)\). That is, for a vector of inputs \(\mathbf {x} = (x_1, \ldots , x_n)\), the output vector is a random variable \((f_1(\mathbf {x}), \ldots , f_n(\mathbf {x}))\) ranging over vectors of strings. The output for the ith party (with input \(x_i\)) is defined to be \(f_i(\mathbf {x})\). We denote an empty input by \(\lambda \). For symmetric functionalities, where \(f_1=f_2=\cdots =f_n\), we abuse notation and refer to the functionality f as \(f_1\).

All of the results in this paper apply to the case of reactive functionalities, which are multi-phase computations, e.g., commitment schemes. In this case, the functionality to be computed is modeled by a Turing machine that continually receives inputs and generates outputs. Our definition is based on function evaluation in order to simplify the presentation.

Adversarial Behavior Loosely speaking, the aim of a secure multiparty protocol is to protect the honest parties against dishonest behavior from the corrupted parties. This is normally modeled using a central adversarial entity, which controls the set of corrupted parties and instructs them how to operate. That is, the adversary obtains the views of the corrupted parties, consisting of their inputs, random tapes and incoming messages, and provides them with the messages that they are to send in the execution of the protocol.

We differentiate between three types of adversaries:

  • Semi-honest adversaries: a semi-honest adversary always instructs the corrupted parties to follow the protocol. Semi-honest adversaries model “honest but curious” behavior, where the adversary tries to learn additional information other than the output, based on the internal states of the corrupted parties.

  • Fail-stop adversaries: a fail-stop adversary instructs the corrupted parties to follow the protocol as a semi-honest adversary, but it may also instruct a corrupted party to halt early (only sending some of its messages in a round).

  • Malicious adversaries: a malicious adversary can instruct the corrupted parties to deviate from the protocol in any arbitrary way it chooses. There are no restrictions on the behavior of malicious adversaries.

Unless stated otherwise, we consider malicious adversaries who may arbitrarily deviate from the protocol specification. When considering malicious adversaries, there are certain undesirable actions that cannot be prevented. Specifically, parties may refuse to participate in the protocol, may substitute their local input (and enter with a different input) and may cease participating in the protocol before it terminates. Essentially, secure protocols limit the adversary to such behavior only.

We further assume that the adversary is computationally bounded and static. By computationally bounded, we mean that the adversary is modeled by a non-uniform probabilistic polynomial-time interactive Turing machine. By static, we mean that at the beginning of the execution, the adversary is given a set \(\mathcal{I}\) of corrupted parties which it controls.

Security of Protocols We consider a number of different ideal models: security with guaranteed output delivery, with fairness, with abort, with identifiable abort (meaning that in the case of abort one of the corrupted parties is identified by the honest parties), and fairness with identifiable abort. The ideal models are, respectively, denoted \(\text{ IDEAL }^{\mathsf{g.d.}}\), \(\text{ IDEAL }^{\mathsf{fair}}\), \(\text{ IDEAL }^{\mathsf{abort}}\), \(\text{ IDEAL }^{\mathsf{id\hbox {-} abort}}\), \(\text{ IDEAL }^{\mathsf{id\hbox {-} fair}}\). We also consider hybrid model protocols where the parties send regular messages to each other, and also have access to a trusted party who computes some function f for them. The trusted party may compute according to any of the specified ideal models. Letting \(\mathsf{type}\in \left\{ {\mathsf{g.d.}, \mathsf{fair}, \mathsf{abort}, \mathsf{id\hbox {-} abort}, \mathsf{id\hbox {-} fair}}\right\} \), we call this the \((f,\mathsf{type})\)-hybrid model and denote it \(\text{ HYBRID }^{f,\mathsf{type}}\). Full definitions can be found in Appendix 8.

Definitions of Specific Functionalities We next define three functionalities that will be used throughout the paper. The first functionality we consider is the n-party broadcast functionality, \(f_{\mathsf{bc}}:(\{0,1\}^*)^n\rightarrow (\{0,1\}^*)^n\), where the sender \(P_1\) has an input \(x\in \{0,1\}^*\) while all other parties have the empty input \(\lambda \) (in plain English, this means that only the first party \(P_1\) has input). The output of each party is x.

$$\begin{aligned} f_\mathsf{bc}(x,\lambda , \ldots , \lambda )=(x,\ldots ,x). \end{aligned}$$

The second functionality is the n-party Boolean OR functionality, \(f_{\mathsf{or}}:\{0,1\}^n\rightarrow \{0,1\}^n\), where each party \(P_i\) has an input bit \(x_i\in \{0,1\}\). The output of each party is the OR of all the inputs, \(x=x_1\vee \ldots \vee x_n\).

$$\begin{aligned} f_\mathsf{or}(x_1,\ldots , x_n)=(x,\ldots ,x)\quad {\mathrm{where }}\quad x=x_1\vee \ldots \vee x_n. \end{aligned}$$

The third functionality is the majority functionality for three parties, \(f_{\mathsf{maj}}:\{0,1\}^3\rightarrow \{0,1\}^3\), where each party \(P_i\) has an input bit \(x_i\in \{0,1\}\). The output of each party is the majority value of the input bits.

$$\begin{aligned} f_\mathsf{maj}(x_1,x_2, x_3)=(x,x,x)\quad {\mathrm{where }}\quad x=(x_1 \wedge x_2) \vee (x_1 \wedge x_3) \vee (x_2 \wedge x_3). \end{aligned}$$
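A direct transcription of these three functionalities (a minimal sketch; deterministic Python functions that return the common output vector):

```python
def f_bc(x, *rest):
    """n-party broadcast: the sender P1 holds x, the remaining n-1 inputs
    are empty, and every party outputs x."""
    return (x,) * (1 + len(rest))

def f_or(*bits):
    """n-party Boolean OR: every party outputs x1 or ... or xn."""
    x = int(any(bits))
    return (x,) * len(bits)

def f_maj(x1, x2, x3):
    """Three-party majority: every party outputs the majority bit
    (x1 AND x2) OR (x1 AND x3) OR (x2 AND x3)."""
    y = (x1 & x2) | (x1 & x3) | (x2 & x3)
    return (y, y, y)

print(f_bc("m", None, None))  # ('m', 'm', 'm')
print(f_or(0, 1, 0, 0))       # (1, 1, 1, 1)
print(f_maj(1, 0, 1))         # (1, 1, 1)
```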

3 Separating Fairness from Guaranteed Output Delivery

In this section, we prove Theorem 1.1. As mentioned in the Introduction, it is known that broadcast can be t-securely computed with guaranteed output delivery if and only if \(t<n/3\). In addition, broadcast can be securely computed with fairness, for any \(t<n\), using the protocol of [8]. Thus, broadcast already constitutes a separation of fairness from guaranteed output delivery; however, since broadcast can be computed information theoretically (and is trivial in the technical sense; see Footnote 3), we ask whether or not such a separation also exists for more standard secure computation tasks.

In order to show a separation, we need to take a function for which fairness in the multiparty setting is feasible. Very few such functions are known, and the focus of this paper is not the construction of new protocols. Fortunately, Gordon and Katz [11] showed that the three-party majority function can be securely computed with fairness. (In [11] a broadcast channel is used. However, as we show in Sect. 5.1, this implies the result also without a broadcast channel.) We stress that the three-party majority function is not trivial, and in fact the ability to securely compute it with any number of corruptions implies the existence of oblivious transfer (this is shown by reducing the two-party greater-than functionality to it and applying [16]).

We show that the three-party majority function \(f_\mathsf{maj}\) cannot be securely computed with guaranteed output delivery, for any number of corrupted parties, in the point-to-point network model, by showing that it actually implies broadcast. The key observation is that on the input (1, 1, 1) the output of \(f_\mathsf{maj}\) is 1, even if a single corrupted party changes its input to 0. Similarly, on the input (0, 0, 0) the output of \(f_\mathsf{maj}\) is 0, even if a single corrupted party changes its input to 1. Using this property, we show that if \(f_\mathsf{maj}\) can be computed with guaranteed output delivery, then there exists a broadcast protocol for three parties that is secure against a single corruption. Given an input bit \(\beta \), the sender sends \(\beta \) to each other party, and all parties compute \(f_\mathsf{maj}\) on the inputs they received. This works since a corrupted sender cannot make two honest parties output inconsistent values, because \(f_\mathsf{maj}\) provides the same output to all parties. Likewise, if there is one corrupted receiver, then it cannot change the majority value (as described above). Finally, if there are two corrupted receivers, then it makes no difference what they output anyway.
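The robustness of the unanimous inputs is easy to confirm exhaustively; a quick sketch:

```python
def maj(x1, x2, x3):
    # Majority of three bits, as in f_maj.
    return (x1 & x2) | (x1 & x3) | (x2 & x3)

# On a unanimous input (b, b, b), no single party can change the output
# by deviating: flipping any one coordinate leaves the majority at b.
for b in (0, 1):
    for i in range(3):
        deviated = [b, b, b]
        deviated[i] = 1 - b
        assert maj(*deviated) == b
print("no single deviation changes a unanimous majority")
```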

Theorem 3.1

Let t be a parameter and let \(f_\mathsf{maj}:\{0,1\}^3\rightarrow \{0,1\}^3\) be the majority functionality for three parties \(f_\mathsf{maj}(x_1,x_2,x_3)=(y,y,y)\) where \(y= (x_1 \wedge x_2) \vee (x_1 \wedge x_3) \vee (x_2 \wedge x_3)\). If \(f_\mathsf{maj}\) can be t-securely computed with guaranteed output delivery in a point-to-point network, then there exists a protocol that t-securely computes the three-party broadcast functionality.

Proof

We construct a protocol \(\pi \) for securely computing the three-party broadcast functionality \(f_\mathsf{bc}(x,\lambda ,\lambda )=(x,x,x)\) in the \((f_\mathsf{maj},\mathsf{g.d.})\)-hybrid model (i.e., in a hybrid model where a trusted party computes the \(f_\mathsf{maj}\) functionality with guaranteed output delivery). Protocol \(\pi \) works as follows:

  1. The sender \(P_1\) with input \(x\in \{0,1\}\) sends x to \(P_2\) and \(P_3\).

  2. Party \(P_1\) sends x to the trusted party computing \(f_\mathsf{maj}\). Each party \(P_i\) \((i\in \{2,3\})\) sends the value it received from \(P_1\) to \(f_\mathsf{maj}\).

  3. Party \(P_1\) always outputs x. The parties \(P_2\) and \(P_3\) output whatever they receive from the trusted party computing \(f_\mathsf{maj}\).
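Collapsing the network and the trusted party into plain function calls, a toy rendering of protocol \(\pi \) (a sketch only; the actual argument is the simulation-based proof below) shows why the two receivers always agree, even against an equivocating sender:

```python
def f_maj(x1, x2, x3):
    """Trusted party computing three-party majority with guaranteed
    output delivery: all parties always receive the same output."""
    return (x1 & x2) | (x1 & x3) | (x2 & x3)

def pi_broadcast(to_p2, to_p3, to_fmaj):
    """Protocol pi with a possibly corrupted sender P1, who may send
    different bits to P2, to P3, and to the trusted party. P2 and P3
    forward whatever they received and output the trusted party's answer."""
    y = f_maj(to_fmaj, to_p2, to_p3)
    return (y, y)  # outputs of P2 and P3

# Honest sender with input b: both receivers output b.
assert pi_broadcast(1, 1, 1) == (1, 1)
assert pi_broadcast(0, 0, 0) == (0, 0)
# Equivocating sender: the receivers still agree on some common bit.
for msgs in [(0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 1)]:
    out2, out3 = pi_broadcast(*msgs)
    assert out2 == out3
print("agreement holds in all cases")
```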

Let \(\mathcal{A}\) be an adversary attacking the execution of \(\pi \) in the \((f_\mathsf{maj},\mathsf{g.d.})\)-hybrid model; we construct an ideal-model adversary \(\mathcal{S}\) in the ideal model for \(f_\mathsf{bc}\) with guaranteed output delivery. \(\mathcal{S}\) invokes \(\mathcal{A}\) and simulates the interaction of \(\mathcal{A}\) with the honest parties and with the trusted party computing \(f_\mathsf{maj}\). \(\mathcal{S}\) proceeds based on the following corruption cases:

  • \(P_1\) alone is corrupted: \(\mathcal{S}\) receives from \(\mathcal{A}\) the values \(x_2,x_3\in \{0,1\}\) that it sends to parties \(P_2\) and \(P_3\), respectively. Next, \(\mathcal{S}\) receives the value \(x_1\in \{0,1\}\) that \(\mathcal{A}\) sends to \(f_\mathsf{maj}\). \(\mathcal{S}\) computes \(x=f_\mathsf{maj}(x_1,x_2,x_3)\) and sends x to the trusted party computing \(f_\mathsf{bc}\). \(\mathcal{S}\) simulates \(\mathcal{A}\) receiving x back from \(f_\mathsf{maj}\), and outputs whatever \(\mathcal{A}\) outputs.

  • \(P_1\) and one of \(P_2\) or \(P_3\) are corrupted: the simulation is the same as in the previous case except that if \(P_2\) is corrupted then the value \(x_2\) is taken from what \(\mathcal{A}\) sends in the name of \(P_2\) to \(f_\mathsf{maj}\) (and not the value that \(\mathcal{A}\) sends first to \(P_2\)); likewise for \(P_3\). Everything else is the same.

  • \(P_1\) is honest: \(\mathcal{S}\) sends an empty input \(\lambda \) to the trusted party for every corrupted party, and receives back some \(x\in \{0,1\}\). Next, \(\mathcal{S}\) simulates \(P_1\) sending x to both \(P_2\) and \(P_3\). If both \(P_2\) and \(P_3\) are corrupted, then \(\mathcal{S}\) obtains from \(\mathcal{A}\) the values \(x_2\) and \(x_3\) that they send to \(f_\mathsf{maj}\), computes \(x'=f_\mathsf{maj}(x,x_2,x_3)\) and simulates the trusted party sending \(x'\) back to all parties. If only one of \(P_2\) and \(P_3\) is corrupted, then \(\mathcal{S}\) simulates the trusted party sending x back to all parties. Finally, \(\mathcal{S}\) outputs whatever \(\mathcal{A}\) outputs.

The fact that the simulation is good is straightforward. If \(P_1\) is corrupted, then only consistency is important, and \(\mathcal{S}\) ensures that the value sent to \(f_\mathsf{bc}\) is the one that the honest party/parties would output. If \(P_1\) is not corrupted, and both \(P_2\) and \(P_3\) are corrupted, then \(P_1\) always outputs the correct x as required, and the outputs of \(P_2\) and \(P_3\) are not important. Finally, if \(P_1\) and \(P_2\) are corrupted, then \(\mathcal{S}\) sends \(f_\mathsf{bc}\) the value that \(P_3\) would output in the real protocol as required; likewise for \(P_1\) and \(P_3\) corrupted. \(\square \)

Theorem 3.1 implies that \(f_\mathsf{maj}\) cannot be securely computed with guaranteed output delivery for any \(t<3\) in a point-to-point network; this follows immediately from the fact that the broadcast functionality can be securely computed if and only if \(t<n/3\). Furthermore, by [11], \(f_\mathsf{maj}\) can be securely computed fairly given oblivious transfer (and as shown in Sect. 5.1 this also holds in a point-to-point network). Thus, we have:

Corollary 3.2

Assume that oblivious transfer exists. Then, there exist non-trivial functionalities f such that f can be t-securely computed with fairness but cannot be t-securely computed with guaranteed output delivery, in a point-to-point network and with \(t\ge n/3\).

Three-Party Functionalities that Imply Broadcast It is possible to generalize the property that we used to show that \(f_\mathsf{maj}\) implies broadcast. Specifically, consider a functionality f with the property that there exist inputs \((x_1,x_2,x_3)\) and \((x_1',x_2',x_3')\) such that \(f(x_1,x_2,x_3)=0\) and \(f(x_1',x_2',x_3')=1\), and such that if either \(x_2\) or \(x_3\) (resp., \(x_2'\) or \(x_3'\)) is changed arbitrarily, then the output of f remains the same. Then, this function can be used to achieve broadcast. We describe the required property formally inside the proof of the theorem below. We show that, out of the 256 Boolean functions over 3-bit inputs, 110 have this property. It follows that none of these can be securely computed with guaranteed output delivery in the presence of one or two corrupted parties. We prove the following:

Theorem 3.3

There are 110 functions from the family of all three-party Boolean functions

$$\begin{aligned} \{f:\{0,1\}\times \{0,1\}\times \{0,1\}\rightarrow \{0,1\}\} \end{aligned}$$

that cannot be securely computed with guaranteed output delivery in a point-to-point network with \(t=1\) or \(t=2\).

Proof

We provide a combinatorial proof of the theorem, by counting how many functions have the property that arbitrarily changing one of the inputs does not affect the output, and there are inputs that yield output 0 and inputs that yield output 1. As we have seen in the proof of Theorem 3.1, it is possible to securely realize the broadcast functionality given a protocol that securely computes any such functionality with guaranteed output delivery.

We prove that there are 110 functions \(f:\{0,1\}^3\rightarrow \{0,1\}\) in the union of the following sets \(F_1,F_2,F_3\):

  1. Let \(F_1\) be the set of all functions for which there exist \((a,b,c), (a',b',c')\in \{0,1\}^3\) such that \(f(a,b,\cdot )=f(a,\cdot ,c)=1\) and \(f(a',b',\cdot )=f(a',\cdot ,c')=0\).

  2. Let \(F_2\) be the set of all functions for which there exist \((a,b,c), (a',b',c')\in \{0,1\}^3\) such that \(f(a,b,\cdot )=f(\cdot ,b,c)=1\) and \(f(a',b',\cdot )=f(\cdot ,b',c')=0\).

  3. Let \(F_3\) be the set of all functions for which there exist \((a,b,c), (a',b',c')\in \{0,1\}^3\) such that \(f(\cdot ,b,c)=f(a,\cdot ,c)=1\) and \(f(\cdot ,b',c')=f(a',\cdot ,c')=0\).

Observe that any function in one of these sets can be used to achieve broadcast, as described above. Based on the inclusion–exclusion principle and using Lemma 3.5 proven below, it follows that:

$$\begin{aligned} |F_1 \cup F_2 \cup F_3| = 3\cdot 50 - 3\cdot 16 + 8 = 110, \end{aligned}$$

as required. We first prove the following lemma:

Lemma 3.4

If \(f\in F_1\) then \(a \ne a'\); if \(f\in F_2\) then \(b\ne b'\); and if \(f\in F_3\) then \(c\ne c'\).

Proof

Let \(f\in F_1\) (the proof for \(F_2, F_3\) is similar) and let \(a,a',b,b',c,c'\in \{0,1\}\) be inputs fulfilling the condition for the set \(F_1\). Then,

$$\begin{aligned}&f(a,b,c)=f(a,\bar{b},c)=f(a,b,\bar{c})=1 \quad {\mathrm{and}} \\&f(a',b',c') =f(a',\bar{b}',c')=f(a',b',\bar{c}') =0. \end{aligned}$$

On the one hand, \(f(a,b',c) = 1\), because \(f(a, \cdot , c) = 1\). On the other hand, if \(a = a'\), then \(f(a, b', c) = f(a', b', c) = 0\), because \(f(a', b', \cdot ) = 0\). This is a contradiction, and hence \(a \ne a'\). \(\square \)
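The counts that feed the inclusion–exclusion computation above (50 per set, 16 per pairwise intersection, 8 for the triple intersection, and 110 in the union) can also be checked by exhaustive enumeration over all 256 truth tables; a sketch written directly from the definitions of \(F_1, F_2, F_3\):

```python
from itertools import product

INPUTS = list(product((0, 1), repeat=3))

def in_family(f, fixed):
    """f: dict mapping (x1, x2, x3) -> output bit. fixed = 0, 1, 2 selects
    F1, F2, F3: the coordinate held fixed while each of the other two may
    be flipped. Membership requires a point whose output survives either
    single flip with value 1, and another such point with value 0."""
    free = [j for j in range(3) if j != fixed]
    robust = {0: False, 1: False}
    for x in INPUTS:
        vals = {f[x]}
        for j in free:
            y = list(x)
            y[j] ^= 1
            vals.add(f[tuple(y)])
        if len(vals) == 1:  # output unchanged under either single flip
            robust[f[x]] = True
    return robust[0] and robust[1]

families = [set(), set(), set()]
for bits in product((0, 1), repeat=8):  # all 256 truth tables
    f = dict(zip(INPUTS, bits))
    for i in range(3):
        if in_family(f, i):
            families[i].add(bits)

F1, F2, F3 = families
print(len(F1), len(F2), len(F3))                 # 50 50 50
print(len(F1 & F2), len(F1 & F3), len(F2 & F3))  # 16 16 16
print(len(F1 & F2 & F3), len(F1 | F2 | F3))      # 8 110
```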

To derive the theorem, it remains to prove the following lemma.

Lemma 3.5

  1.

    \(|F_1| = |F_2| = |F_3| = 50\).

  2.

    \(|F_1 \cap F_2| = |F_1 \cap F_3| = |F_2 \cap F_3| = 16\).

  3.

    \(|F_1 \cap F_2 \cap F_3| = 8\).

Proof

Let \(f :\{0,1\}^3\rightarrow \{0,1\}\) be a function, and consider the representation of f using a binary string \((\beta _0 \beta _1 \beta _2 \beta _3\beta _4 \beta _5 \beta _6 \beta _7)\) as shown in Table 2:

  1.

    Assume \(f\in F_1\) (the proof for \(F_2, F_3\) is similar). The first quadruple \((\beta _0 \beta _1 \beta _2 \beta _3)\) corresponds to \(a=0\) and the second quadruple \((\beta _4 \beta _5 \beta _6 \beta _7)\) corresponds to \(a=1\). There exist \(b,c\) such that \(f(a, b, c)=f(a, \bar{b}, c)=f(a, b, \bar{c})\) and \(b',c'\) such that \(f(\bar{a}, b', c')=f(\bar{a}, \bar{b}', c')=f(\bar{a}, b', \bar{c}')\); in addition, \(f(a,b,c)\ne f(\bar{a}, b', c')\). Therefore, in each such quadruple there must be a triplet of 3 identical bits, and the two triplets have opposite values.

    Denote \(\beta =f(a, b, c)\); there are 5 options for \((\beta _0 \beta _1 \beta _2 \beta _3)\) in which at least 3 of the bits equal \(\beta \):

    $$\begin{aligned} (\beta \beta \beta \beta ), (\beta \beta \beta \bar{\beta }), (\beta \beta \bar{\beta }\beta ), (\beta \bar{\beta }\beta \beta ), (\bar{\beta }\beta \beta \beta ). \end{aligned}$$

    For each such option, there are 5 options for \((\beta _4 \beta _5 \beta _6 \beta _7)\) in which at least 3 of the bits equal \(\bar{\beta }\):

    $$\begin{aligned} (\bar{\beta }\bar{\beta }\bar{\beta }\bar{\beta }), (\bar{\beta }\bar{\beta }\bar{\beta }\beta ), (\bar{\beta }\bar{\beta }\beta \bar{\beta }), (\bar{\beta }\beta \bar{\beta }\bar{\beta }), (\beta \bar{\beta }\bar{\beta }\bar{\beta }). \end{aligned}$$

    There are 2 options for the value of \(\beta \), so in total \(|F_1|=2\cdot 5 \cdot 5=50\).

  2.

    Assume \(f\in F_1\cap F_2\) (the proof for \(F_1\cap F_3, F_2\cap F_3\) is similar). In this case \(a'=\bar{a}\) and \(b'=\bar{b}\) and the constraints are

    $$\begin{aligned} f(a, b, c)\!=\! f(\bar{a}, b, c)\!=\! f(a, \bar{b}, c)\!=\! f(a, b, \bar{c}) \!\ne \! f(\bar{a}, \bar{b}, c')\!=\! f(a, \bar{b}, c')=f(\bar{a}, b, c')=f(\bar{a}, \bar{b}, \bar{c}'). \end{aligned}$$

    Therefore, the string is balanced (there are 4 zeros and 4 ones), where 3 of the bits \((\beta _0\beta _1\beta _2\beta _3)\) are equal to \(\beta \) and one to \(\bar{\beta }\), and 3 of the bits \((\beta _4\beta _5\beta _6\beta _7)\) are equal to \(\bar{\beta }\) and one to \(\beta \).

    There are 4 options to select 3 bits in \((\beta _0\beta _1\beta _2\beta _3)\), and 2 options to select one bit in \((\beta _4\beta _5\beta _6\beta _7)\). These two options correspond to either \((\bar{a}, b, c)\) or \((\bar{a}, b, \bar{c})\). Hence, \(|F_1\cap F_2|=2\cdot 4\cdot 2=16\).

  3.

    Assume \(f\in F_1\cap F_2\cap F_3\). In this case \(a'=\bar{a}\), \(b'=\bar{b}\) and \(c'=\bar{c}\) and the constraints are

    $$\begin{aligned} f(a, b, c)=f(\bar{a}, b, c)=f(a, \bar{b}, c)=f(a, b, \bar{c}) \ne f(\bar{a}, \bar{b}, \bar{c})=f(a, \bar{b}, \bar{c})=f(\bar{a}, b, \bar{c})=f(\bar{a}, \bar{b}, c). \end{aligned}$$

    Therefore, the string is of the form \((\beta _0\beta _1\beta _2\beta _3\bar{\beta _0}\bar{\beta _1}\bar{\beta _2}\bar{\beta _3})\), where 3 of the bits \((\beta _0\beta _1\beta _2\beta _3)\) are equal to \(\beta \) and one to \(\bar{\beta }\). There are 4 options to select 3 bits in \((\beta _0\beta _1\beta _2\beta _3)\), and setting them to the same value determines the rest of the string. Hence, \(|F_1\cap F_2\cap F_3|=2\cdot 4=8\).

This completes the proof of Theorem 3.3. \(\square \)

Table 2 Representation of a Boolean function \(\{0,1\}^3\rightarrow \{0,1\}\)

As we have mentioned in the Introduction, in the case that \(t=1\) (i.e., when there is an honest majority), all functions can be securely computed with fairness in a point-to-point network. Thus, we have that all 110 functions of Theorem 3.3 constitute a separation of fairness from guaranteed output delivery. That is, in the case of \(n/3 \le t < n/2\), we have that many functions can be securely computed with fairness but not with guaranteed output delivery. In addition, 8 out of these 110 functions reduce to three-party majority and so can be computed fairly for any \(t\le n\). Thus, these 8 functions form a separation for the range of \(t\ge n/2\).
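The counts in Theorem 3.3 can also be verified by exhaustive search. The following Python sketch (an editorial verification aid, not part of the proof) enumerates all 256 functions \(f:\{0,1\}^3\rightarrow \{0,1\}\), represented as dictionaries over the cube \(\{0,1\}^3\), and tests membership in \(F_1,F_2,F_3\) directly from their definitions:

```python
from itertools import product

INPUTS = list(product((0, 1), repeat=3))

def has_witness(f, lines, v):
    # some (a, b, c) whose two axis-parallel lines are constantly v
    return any(all(f[p] == v for pts in lines(a, b, c) for p in pts)
               for (a, b, c) in INPUTS)

def member(f, lines):
    # f belongs to the set if it has both a 1-witness and a 0-witness
    return has_witness(f, lines, 1) and has_witness(f, lines, 0)

def F1(f):  # f(a,b,.) and f(a,.,c) constant
    return member(f, lambda a, b, c: ([(a, b, 0), (a, b, 1)],
                                      [(a, 0, c), (a, 1, c)]))

def F2(f):  # f(a,b,.) and f(.,b,c) constant
    return member(f, lambda a, b, c: ([(a, b, 0), (a, b, 1)],
                                      [(0, b, c), (1, b, c)]))

def F3(f):  # f(.,b,c) and f(a,.,c) constant
    return member(f, lambda a, b, c: ([(0, b, c), (1, b, c)],
                                      [(a, 0, c), (a, 1, c)]))

funcs = [dict(zip(INPUTS, bits)) for bits in product((0, 1), repeat=8)]
n1, n2, n3 = (sum(map(F, funcs)) for F in (F1, F2, F3))
pair = sum(1 for f in funcs if F1(f) and F2(f))
triple = sum(1 for f in funcs if F1(f) and F2(f) and F3(f))
union = sum(1 for f in funcs if F1(f) or F2(f) or F3(f))
print(n1, n2, n3, pair, triple, union)  # 50 50 50 16 8 110
```

The printed counts match Lemma 3.5 and the inclusion–exclusion computation: 50 for each set, 16 for each pairwise intersection, 8 for the triple intersection, and 110 for the union.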

4 Fairness Implies Guaranteed Output Delivery for Default-Output Functionalities

In this section, we prove Theorem 1.2. In fact, we prove a stronger theorem, stating that fairness implies guaranteed output delivery for functions with the property that there exists a “default value” such that any single party can fully determine the output to that value. For example, the multiparty Boolean AND and OR functionalities both have this property (for the AND functionality any party can always force the output to be 0, and for the OR functionality any party can always force the output to be 1). We call such a function a default-output functionality. Intuitively, such a function can be securely computed with guaranteed output delivery if it can be securely computed fairly, since the parties can first try to compute it fairly. If they succeed, then they are done. Otherwise, they all received abort and can just output their respective default-output value for the functionality. This can be simulated since any single corrupted party in the ideal model can choose an input that results in the default-output value.

Definition 4.1

Let \(f:(\{0,1\}^*)^n \rightarrow (\{0,1\}^*)^n\) be an n-party functionality. f is called a default-output functionality with default output \((\tilde{y}_1, \ldots , \tilde{y}_n)\), if for every \(i\in \left\{ {1,\ldots ,n}\right\} \) there exists a special input \(\tilde{x}_i\) such that for every \(x_j\) with \(j\ne i\) it holds that \(f(x_1,\ldots ,\tilde{x}_i,\ldots , x_n) = (\tilde{y}_1,\ldots ,\tilde{y}_n)\).

Observe that \((0,\ldots ,0)\) is a default output for the Boolean AND function, and \((1,\ldots ,1)\) is a default output for the Boolean OR function. We now prove that if a functionality f has a default-output value, then the existence of a fair protocol for f implies the existence of a protocol with guaranteed output delivery for f.
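For intuition, the default-output property can be checked mechanically for small functionalities. The sketch below (ours; it specializes Definition 4.1 to symmetric Boolean functionalities in which all parties receive the same output bit) searches, for each party, for a special input that forces the output to a single value:

```python
from itertools import product

def default_output(f, n):
    # for each party i, collect the output values that some special
    # input of P_i forces regardless of the other parties' inputs
    forced = []
    for i in range(n):
        vals = set()
        for special in (0, 1):
            outs = {f(*xs[:i], special, *xs[i:])
                    for xs in product((0, 1), repeat=n - 1)}
            if len(outs) == 1:        # this input forces the output
                vals.add(outs.pop())
        forced.append(vals)
    common = set.intersection(*forced) if all(forced) else set()
    return min(common) if common else None

AND = lambda a, b, c: a & b & c
OR = lambda a, b, c: a | b | c
XOR = lambda a, b, c: a ^ b ^ c
print(default_output(AND, 3), default_output(OR, 3), default_output(XOR, 3))
# 0 1 None
```

As expected, AND has default output 0, OR has default output 1, and XOR is not a default-output functionality, since no single party's input can fix its output.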

Theorem 4.2

Let \(f:(\{0,1\}^*)^n \rightarrow (\{0,1\}^*)^n\) be a default-output functionality and let \(t<n\). If f can be t-securely computed with fairness (with or without a broadcast channel), then f can be t-securely computed with guaranteed output delivery, in a point-to-point network.

Proof

Let f be as in the theorem statement, and let the default output be \((\tilde{y}_1, \ldots , \tilde{y}_n)\). Assume that f can be securely computed with fairness with or without a broadcast channel. By Theorem 5.1, f can be securely computed with fairness without a broadcast channel. We now construct a protocol \(\pi \) that securely computes f with guaranteed output delivery in the \((f,\mathsf{fair})\)-hybrid model:

  1.

    Each \(P_i\) sends its input \(x_i\) to the trusted party computing f.

  2.

    Denote by \(y_i\) the value received by \(P_i\) from the trusted party.

  3.

    If \(y_i\ne \bot \), \(P_i\) outputs \(y_i\), otherwise \(P_i\) outputs \(\tilde{y}_i\).
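The three steps above can be sketched as follows, modelling a fair abort as the fair call returning \(\bot \) (here `None`) to everyone; the sample function and default values are illustrative:

```python
ABORT = None  # models every party receiving bottom from the fair call

def run_with_god(fair_call, default_outputs):
    # steps 1-2: invoke the fair computation of f
    ys = fair_call()
    # step 3: on a fair abort, party P_i falls back to y~_i
    if ys is ABORT:
        return list(default_outputs)
    return ys

# toy runs for 3-party OR, whose default output is (1, 1, 1):
honest = run_with_god(lambda: [0, 0, 0], (1, 1, 1))   # all inputs 0
aborted = run_with_god(lambda: ABORT, (1, 1, 1))      # adversary aborts
```

In the aborting run every party outputs its default value, exactly the situation the simulator reproduces by sending \(\tilde{x}_i\) to its trusted party.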

Let \(\mathcal{A}\) be an adversary attacking the execution of \(\pi \) in the \((f,\mathsf{fair})\)-hybrid model. We construct an ideal-model adversary \(\mathcal{S}\) in the ideal model with guaranteed output delivery. Let \(\mathcal{I}\) be the set of corrupted parties, let \(i\in \mathcal{I}\) be one of the corrupted parties (if no parties are corrupted then there is nothing to simulate), and let \(\tilde{x}_i\) be the input guaranteed to exist by Definition 4.1. Then, \(\mathcal{S}\) invokes \(\mathcal{A}\) and simulates the interaction of \(\mathcal{A}\) with the trusted party computing f (note that there is no interaction between \(\mathcal{A}\) and honest parties). \(\mathcal{S}\) receives the inputs that \(\mathcal{A}\) sends to f. If any of the inputs equal abort  then \(\mathcal{S}\) sends \(\tilde{x}_i\) as \(P_i\)’s input to its own trusted party computing f (with guaranteed output delivery), and arbitrary inputs for the other parties. Then, \(\mathcal{S}\) simulates the corrupted parties receiving \(\bot \) as output from the trusted party in \(\pi \), and outputs whatever \(\mathcal{A}\) outputs. Else, if none of the inputs equal abort, then \(\mathcal{S}\) sends its trusted party the inputs that \(\mathcal{A}\) sent. \(\mathcal{S}\) then receives the outputs of the corrupted parties from its trusted party, and internally sends these to \(\mathcal{A}\) as the corrupted parties’ outputs from the trusted party computing f in \(\pi \). Finally, \(\mathcal{S}\) outputs whatever \(\mathcal{A}\) outputs.

If \(\mathcal{A}\) sends abort, then in the real execution every honest party \(P_j\) outputs \(\tilde{y}_j\). However, since \(\mathcal{S}\) sends the input \(\tilde{x}_i\) to the trusted party computing f, by Definition 4.1 we have that the output of every honest party \(P_j\) in the ideal execution is also \(\tilde{y}_j\). Furthermore, if \(\mathcal{A}\) does not send abort, then \(\mathcal{S}\) just uses exactly the same inputs that \(\mathcal{A}\) sent. It is clear that the view of \(\mathcal{A}\) is identical in the execution of \(\pi \) and the simulation with \(\mathcal{S}\). We therefore conclude that \(\pi \) t-securely computes f with guaranteed output delivery, as required. \(\square \)

We have proven that fairness implies guaranteed output delivery for default-output functionalities; it remains to show the existence of fair protocols for some default-output functionalities. Fortunately, this was already proven in [11]. The only difference is that [11] uses a broadcast channel. Noting that the multiparty Boolean OR functionality is non-trivial (in the sense of Footnote 3) and that it has default output \((1,\ldots ,1)\) as mentioned above, we have the following corollary.

Corollary 4.3

Assume that oblivious transfer exists. Then, there exist non-trivial functionalities f that can be t-securely computed with guaranteed output delivery in a point-to-point network, for any \(t<n\).

Feasibility of Guaranteed Output Delivery. In Theorem 4.4, we prove that 16 non-trivial functionalities can be securely computed with guaranteed output delivery in a point-to-point network (by showing that they are default-output functionalities). Thus, guaranteed output delivery can be achieved for a significant number of functions.

Theorem 4.4

Assume that oblivious transfer exists. There are 16 non-trivial functions from the family of all three-party Boolean functions \(\{f:\{0,1\}\times \{0,1\}\times \{0,1\}\rightarrow \{0,1\}\}\) that can be securely computed with guaranteed output delivery in a point-to-point network for any number of corrupted parties.

Proof

When represented using its truth table as a binary string (see Table 2), the three-party Boolean OR function is (01111111); similarly, the Boolean AND function is (00000001). Every function \((\beta _0 \beta _1 \beta _2 \beta _3\beta _4 \beta _5 \beta _6 \beta _7)\) such that there exists i for which \(\beta _i=\beta \) and \(\beta _j=\bar{\beta }\) for every \(j\ne i\) can be reduced to computing Boolean OR. Since there are 8 ways to choose i and 2 ways to choose \(\beta \), we conclude that there are 16 such functions. \(\square \)
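The count can be checked directly; the following snippet (an illustrative check of ours) enumerates the truth tables with exactly one minority bit:

```python
from itertools import product

# truth tables (beta_0 ... beta_7, as in Table 2) with exactly one
# minority bit: a single 1 (AND-like) or a single 0 (OR-like)
special = [bits for bits in product((0, 1), repeat=8)
           if sum(bits) in (1, 7)]
print(len(special))  # 16
```

The OR table (01111111) and the AND table (00000001) are both in this family, as are their 14 input/output relabelings.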

5 The Role of Broadcast

In this section, we prove Theorem 1.3 and show that a functionality can be securely computed fairly with broadcast if and only if it can be securely computed fairly without broadcast. In addition, we show that if a functionality can be securely computed with fairness, then with a broadcast channel it can be securely computed with guaranteed output delivery.

5.1 Fairness is Invariant to Broadcast

Gordon and Katz construct two fair multiparty protocols in [11], both of which require a broadcast channel. In this section, we show that both protocols remain fair even without a broadcast channel. More generally, fairness can be achieved with a broadcast channel if and only if it can be achieved without one.

It is immediate that fairness without broadcast implies fairness with broadcast. The other direction follows by using the protocol of Fitzi et al. [8] for detectable broadcast. In the first stage, the parties execute a protocol that establishes a public-key infrastructure. This protocol is independent of the parties’ inputs and is computed with abort. If the adversary aborts during this phase, it learns nothing about the output and fairness is retained. If the adversary does not abort, the parties can use the public-key infrastructure to execute multiple (sequential) instances of authenticated broadcast, and so can run the original fair protocol that uses broadcast.

One subtlety arises since the composition theorem replaces every ideal call to the broadcast functionality with a protocol computing broadcast. However, in this case, each authenticated broadcast protocol relies on the same public-key infrastructure that is generated using a protocol with abort. We therefore define a reactive ideal functionality which allows abort only in the first “setup” call. If no abort was sent in this call, then the functionality provides a fully secure broadcast (with guaranteed output delivery) from there on. The protocol of [8] securely computes this functionality with guaranteed output delivery, and thus, constitutes a sound replacement of the broadcast channel (unless an abort took place).

Theorem 5.1

Let f be an n-party functionality and let \(t\le n\). Then, assuming the existence of one-way functions, f can be t-securely computed with fairness assuming a broadcast channel if and only if f can be t-securely computed with fairness in a point-to-point network.

Proof Sketch

If f can be t-securely computed with fairness in a point-to-point network, then it can be t-securely computed with fairness with a broadcast channel by just having parties broadcast messages and stating who the intended recipient is. (Recall that in the point-to-point network we assume authenticated but not private channels.)

Next, assume that f can be t-securely computed with fairness assuming a broadcast channel. We now show that it can be t-securely computed with fairness in a point-to-point network. We define the reactive functionality for conditional broadcast \(f_{\mathsf{condbc}}\). In the first call to \(f_{\mathsf{condbc}}\), the functionality computes the AND function, i.e., each party has an input bit \(b_i\) and the functionality returns \(b=b_1 \wedge \ldots \wedge b_n\) to each party. In addition, the functionality stores the bit b as its internal state for all future calls. In all future calls to \(f_{\mathsf{condbc}}\), if \(b=1\) it behaves exactly like \(f_{\mathsf{bc}}\), whereas if \(b=0\) it returns \(\bot \) to all the parties and halts. By inspection, it is immediate that the protocol of [8] securely computes \(f_{\mathsf{condbc}}\) with guaranteed output delivery, for any \(t\le n\) in a point-to-point network.
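A minimal sketch of the reactive functionality \(f_{\mathsf{condbc}}\) (our illustration; the class and method names are hypothetical) may help clarify its two phases:

```python
class CondBroadcast:
    """Sketch of the reactive functionality f_condbc."""

    def __init__(self, n):
        self.n = n
        self.b = None               # internal state, set by the first call

    def setup(self, bits):
        # first call: compute the AND of the parties' input bits
        assert self.b is None and len(bits) == self.n
        self.b = int(all(bits))
        return [self.b] * self.n    # every party learns b

    def broadcast(self, msg):
        # all future calls: behave like f_bc if b = 1, else return bottom
        if self.b == 1:
            return [msg] * self.n
        return [None] * self.n      # bottom (None) to all parties

f = CondBroadcast(3)
ok = f.setup([1, 1, 1])             # no party sent 0, so b = 1
out = f.broadcast("m")              # delivered to everyone, like f_bc
```

If any party inputs 0 in the setup call, b is set to 0 and every later call returns \(\bot \), matching the behavior used in the simulation below.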

Let \(\pi \) be the protocol that t-securely computes f assuming a broadcast channel; stated differently, \(\pi \) t-securely computes f in the \((f_\mathsf{bc},\mathsf{g.d.})\)-hybrid model. We construct a protocol \(\pi '\) for t-securely computing f in the \((f_{\mathsf{condbc}},\mathsf{fair})\)-hybrid model. \(\pi '\) begins by all parties sending the bit 1 to \(f_{\mathsf{condbc}}\) and receiving back output. If a party receives back \(b=0\), it aborts and outputs \(\bot \). Else, it runs \(\pi \) with the only difference that all broadcast messages are sent to \(f_{\mathsf{condbc}}\) instead of to \(f_\mathsf{bc}\). Since \(f_{\mathsf{condbc}}\) behaves exactly like \(f_\mathsf{bc}\) as long as \(b=1\) is returned from the first call, we have that in this case the output of \(\pi \) and \(\pi '\) is identical. Furthermore, \(\pi '\) is easily simulated by first invoking the adversary \(\mathcal{A}'\) for \(\pi '\) and obtaining the corrupted parties’ inputs to \(f_{\mathsf{condbc}}\) in the first call. If any 0 bit is sent, then the simulator \(\mathcal{S}'\) for \(\pi '\) sends abort to the trusted party, outputs whatever \(\mathcal{A}'\) outputs and halts. Otherwise, it invokes the simulator \(\mathcal{S}\) that is guaranteed to exist for \(\pi \) on the residual adversary \(\mathcal{A}\) that is obtained by running \(\mathcal{A}'\) until the end of the first call to \(f_{\mathsf{condbc}}\) (including \(\mathcal{A}'\) receiving the corrupted parties’ output bits from this call). Then, \(\mathcal{S}'\) sends whatever \(\mathcal{S}\) wishes to send to the trusted party, and outputs whatever \(\mathcal{S}\) outputs. Since \(f_{\mathsf{condbc}}\) behaves exactly like \(f_\mathsf{bc}\) when \(b=1\) in the first phase, we have that the output distribution generated by \(\mathcal{S}'\) is identical to that of \(\mathcal{S}\) when \(b=1\). Furthermore, when \(b=0\), it is clear that the simulation is perfect. \(\square \)

5.2 Fairness with Identifiable Abort Implies Guaranteed Output Delivery

Before proceeding to prove that fairness implies guaranteed output delivery in a model with a broadcast channel, we first show that fairness with identifiable abort implies guaranteed output delivery. Recall that a protocol securely computes a functionality f with identifiable abort if, when the adversary causes an abort, all honest parties receive \(\bot \) as output along with the identity of a corrupted party. If a protocol securely computes f with fairness and identifiable abort, then it is guaranteed that if the adversary aborts, it learns nothing about the output and all honest parties learn an identity of a corrupted party. In this situation, the parties can eliminate the identified corrupted party and execute the protocol again, where an arbitrary party emulates the operations of the eliminated party using a default input. Since nothing was learned by the adversary when an abort occurs, the parties can rerun the protocol from scratch (without the identified corrupted party) and nothing more than a single output will be revealed to the adversary. Specifically, given a protocol \(\pi \) that computes f with fairness and identifiable abort, we can construct a new protocol \(\pi '\) that computes f with guaranteed output delivery. In the protocol \(\pi '\), the parties iteratively execute \(\pi \), where in each iteration, either the adversary does not abort and all honest parties receive consistent output, or the adversary aborts without learning anything and the parties identify a corrupted party, who is eliminated from the next iteration.

Theorem 5.2

Let f be an n-party functionality and let \(t\le n\). If f can be t-securely computed with fairness and identifiable abort, then f can be t-securely computed with guaranteed output delivery.

Proof

We prove the theorem by constructing a protocol \(\pi \) that t-securely computes f with guaranteed output delivery in the \((f,\mathsf{id\hbox {-} fair})\)-hybrid model. For every party \(P_i\), we assign a default-input value \(\tilde{x}_i\) and construct the protocol \(\pi \) as follows:

  1.

    Let \(\mathcal{P}_1=\left\{ {1, \ldots , n}\right\} \) denote the set of indices of all participating parties.

  2.

    For \(i=1,\ldots ,t+1\)

    (a)

      All parties in \(\mathcal{P}_i\) send their inputs to the trusted party computing f, where the party with the lowest index in \(\mathcal{P}_i\) simulates all parties in \(\mathcal{P}_1\setminus \mathcal{P}_i\), using their predetermined default-input values.

      For each \(j\in \mathcal{P}_i\), denote the output of \(P_j\) from f by \(y_j\).

    (b)

      For every \(j\in \mathcal{P}_i\), party \(P_j\) checks whether \(y_j\) is a valid output; if so, \(P_j\) outputs \(y_j\) and halts. Otherwise, all parties receive \((\bot ,{i^*})\) as output, where \(i^{*}\) is an index of a corrupted party. If \(i^{*}\notin \mathcal{P}_i\) (and so \(i^{*}\) is a previously identified corrupted party), then all parties set \(i^*\) to be the party with the lowest index in \(\mathcal{P}_i\).

    (c)

      Set \(\mathcal{P}_{i+1}=\mathcal{P}_i\setminus \left\{ {{i^*}}\right\} \).
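The iterated execution above can be sketched as follows; the callable `id_fair` is a stand-in (of ours) for one call to the \((f,\mathsf{id\hbox {-} fair})\) hybrid, returning either the outputs or \(\bot \) together with an identified cheater:

```python
def god_from_id_fair(id_fair, inputs, defaults, t):
    # iterated elimination: rerun the fair computation with identifiable
    # abort, replacing eliminated parties' inputs by default values
    n = len(inputs)
    active = set(range(n))
    for _ in range(t + 1):                  # at most t + 1 iterations
        xs = [inputs[j] if j in active else defaults[j] for j in range(n)]
        outputs, cheater = id_fair(xs)
        if outputs is not None:             # valid output: done
            return outputs
        if cheater not in active:           # previously identified party
            cheater = min(active)           # blame lowest active index
        active.discard(cheater)             # eliminate and rerun

# toy run for 3-party OR with t = 2: one abort blaming party 2, then success
events = iter([(None, 2), ([1, 1, 1], None)])
result = god_from_id_fair(lambda xs: next(events), [0, 1, 1], [0, 0, 0], 2)
```

Since each abort eliminates one party, at most \(t+1\) calls are needed before an iteration with no corrupted active party succeeds.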

First note that there are at most \(t+1\) iterations; therefore, \(\pi \) terminates in polynomial time. Let \(\mathcal{A}\) be an adversary attacking \(\pi \) and let \(\mathcal{I}\) be the set of corrupted parties. We construct a simulator \(\mathcal{S}\) for the ideal model with f and guaranteed output delivery, as follows. \(\mathcal{S}\) invokes \(\mathcal{A}\) and receives its inputs to f in every iteration. If an iteration contains an abort, then \(\mathcal{S}\) simulates sending the response \((\bot ,{i^*})\) to all parties, and proceeds to the next iteration. In the first iteration in which no abort is sent (and such an iteration must exist since there are \(t+1\) iterations and in every iteration except for the last, one corrupted party is removed), \(\mathcal{S}\) sends the inputs of the corrupted parties that \(\mathcal{A}\) sent to the trusted party computing f. In addition, \(\mathcal{S}\) sends the values for any corrupted parties that were identified in previous iterations: if the lowest index remaining is honest, then \(\mathcal{S}\) sets these values to be the default values; else, it sets these values to be the values sent by \(\mathcal{A}\) for these parties. Upon receiving the output from its trusted party, \(\mathcal{S}\) hands it to \(\mathcal{A}\) as if it were the output of the corrupted parties in the iteration of \(\pi \), and outputs whatever \(\mathcal{A}\) outputs.

The simulation in the \((f,\mathsf{id\hbox {-} fair})\)-hybrid model is perfect since \(\mathcal{S}\) can perfectly simulate the trusted party for all iterations in which an abort is sent. Furthermore, in the first iteration for which an abort is not sent, \(\mathcal{S}\) sends f the exact inputs upon which the function f is computed in the protocol. Thus, the view of \(\mathcal{A}\) and the output of the honest parties in the simulation with \(\mathcal{S}\) are identical to their view and output in an execution of \(\pi \) in the \((f,\mathsf{id\hbox {-} fair})\)-hybrid model. \(\square \)

5.3 Fairness with Broadcast Implies Guaranteed Output Delivery

In Sect. 5.2, we saw that if a functionality can be securely computed with fairness and identifiable abort, then it can be securely computed with guaranteed output delivery. In this section, we show that assuming the existence of a broadcast channel, there is a protocol compiler that given a protocol computing a functionality f with fairness, outputs a protocol computing f with fairness and identifiable abort. Therefore, assuming broadcast, fairness implies guaranteed output delivery.

The protocol compiler we present is a modification of the GMW compiler, which relies on the code of the underlying fair protocol and requires non-black-box access to the protocol. (Therefore, this result does not contradict the proof in Sect. 6 that black-box access to an ideal functionality that computes f with fairness does not help to achieve guaranteed output delivery.) The underlying idea is to use the GMW compiler [14, 15]. However, instead of enforcing semi-honest behavior, the compiler is used in order to achieve security with identifiable abort. This is accomplished by tweaking the GMW compiler so that, first, only public-coin zero-knowledge proofs are used, and, second, if an honest party detects dishonest behavior—i.e., if some party does not send a message or fails to provide a zero-knowledge proof for a message it sent—the honest parties record the identity \({i^*}\) of the cheating party. We stress that the parties do not abort the protocol at this point, but rather continue until the end to see if they received \(\bot \) or not. If they received \(\bot \), then they output \((\bot ,{i^*})\) and halt. Else, if they received proper output, then they output it. Note that if the parties were to halt as soon as they detected a cheating party, then this would not be secure since it is possible that some of the corrupted parties already received output by that point. Thus, they conclude the protocol to determine whether they should abort or not.

The soundness of this method holds because in the GMW compiler with public-coin zero-knowledge proofs, a corrupted party cannot make an honest party fail, and all parties can verify whether the zero-knowledge proof was successful or not. A brief description of the GMW compiler appears in Appendix 9.1. We prove the following:

Theorem 5.3

Assume the existence of one-way functions and let \(t\le n\). If a functionality f can be t-securely computed with fairness assuming a broadcast channel, then f can be t-securely computed with guaranteed output delivery.

Proof

We begin by proving that fairness with a broadcast channel implies fairness with identifiable abort.

Lemma 5.4

Assume the existence of one-way functions and let \(t\le n\). Then, there exists a polynomial-time protocol compiler that receives any protocol \(\pi \), running over a broadcast channel, and outputs a protocol \(\pi '\), such that if \(\pi \) t-securely computes a functionality f with fairness then \(\pi '\) t-securely computes f with fairness and identifiable abort.

Proof Sketch

Since the protocol is run over a broadcast channel, if at any point a party does not broadcast a message when it is supposed to, then all the parties detect it and can identify this party as corrupted.

We consider a tweaked version of the GMW compiler. The input-commitment phase and the coin-generation phase are kept the same, with the sole exception that if a party is identified as corrupted at this stage (e.g., if it does not send any value), then all the parties hard-wire the default-input value corresponding to this party into the function. In the protocol-emulation phase, when a sender transmits a message to a receiver, they execute a strong zero-knowledge proof of knowledge with perfect completeness, in which the sender acts as the prover and the receiver as the verifier. The statement is that the message was constructed by the next-message function, based on the sender’s input, random coins and the history of all the messages the sender received in the protocol. However, if the prover fails to prove the statement, unlike in the GMW compiler, the verifier does not immediately broadcast the verification coins, but stores the verification coins along with the identity of the sender in memory, and resumes the protocol.

At the end of the protocol emulation, each party checks whether it received an output; if so, it outputs it and halts. If a party did not receive an output and it received a message for which the corresponding zero-knowledge proof failed, it broadcasts the verification coins it used during the zero-knowledge proof. In this case, the other parties verify whether this is a justified reject, and if so they output \(\bot \) along with the identity of the prover. If the reject is not justified, the parties output \(\bot \) along with the identity of the party that sent the false verification coins.

Since the zero-knowledge proof has perfect completeness, a corrupted party cannot produce verification coins that will falsely reject an honest party. Hence, only parties that deviate from the protocol can be identified as corrupted.

In case each honest party finishes the execution of the compiled protocol with some output, the compiled protocol remains secure, based on the security of the underlying protocol and of the zero-knowledge proof.

In case one of the honest parties did not get an output, there must be at least one message that does not meet the protocol’s specification, hence at least one honest party received a message without a valid proof. Therefore, all the honest parties output \(\bot \) along with an identity of a corrupted party. However, in this situation, the adversary does not learn anything about the output, since otherwise there exists an attack violating the fairness of the underlying protocol \(\pi \). Hence, the compiled protocol retains fairness. \(\square \)

Applying Theorem 5.2 to Lemma 5.4, we have that f can be t-securely computed with guaranteed output delivery, completing the proof of the theorem. \(\square \)

6 Black-Box Fairness Does Not Help Guaranteed Output Delivery

In this section, we show that the ability to securely compute a functionality with complete fairness does not assist in computing the functionality with guaranteed output delivery, at least in a black-box manner. More precisely, a functionality f can be securely computed with guaranteed output delivery in the \((f,\mathsf{fair})\)-hybrid model if and only if f can be securely computed with guaranteed output delivery in the plain model.

The idea is simply that any protocol that provides guaranteed output delivery in the \((f,\mathsf{fair})\)-hybrid model has to work even if the output of every call to the trusted party computing f fairly concludes with an abort. This is because a corrupted party can always send abort to the trusted party in every such call.

Proposition 6.1

Let f be an n-party functionality and let \(t\le n\). Then, f can be t-securely computed in the \((f,\mathsf{fair})\)-hybrid model with guaranteed output delivery if and only if f can be t-securely computed in the real model with guaranteed output delivery.

Proof Sketch

If f can be t-securely computed in the real model with guaranteed output delivery, then clearly it can be t-securely computed in the \((f,\mathsf{fair})\)-hybrid model with guaranteed output delivery by simply not sending anything to the trusted party.

For the other direction, let \(\pi \) be a protocol that t-securely computes f in the \((f,\mathsf{fair})\)-hybrid model with guaranteed output delivery. We construct a protocol \(\pi '\) in the real model which operates exactly like \(\pi \), except that whenever there is a call in \(\pi \) to the ideal functionality f, the parties in \(\pi '\) emulate receiving \(\bot \) as output. It is immediate that for every adversary \(\mathcal{A}'\) for \(\pi '\), there exists an adversary \(\mathcal{A}\) for \(\pi \) so that the output distributions of the two executions are identical (\(\mathcal{A}\) just sends \(\mathsf{abort}\) to every ideal call in \(\pi \), and otherwise sends the same messages that \(\mathcal{A}'\) sends). By the assumption that \(\pi \) is secure, there exists a simulator \(\mathcal{S}\) for the ideal model for f with guaranteed output delivery. This implies that \(\mathcal{S}\) is also a good simulator for \(\mathcal{A}'\) in \(\pi '\), and so \(\pi '\) t-securely computes f with guaranteed output delivery in the real model. \(\square \)

7 Additional Results

In this section, we prove two additional results. First, there exist functionalities for which identifiable abort cannot be achieved (irrespective of fairness). Second, fairness and guaranteed output delivery are equivalent for fail-stop adversaries.

7.1 Broadcast is Necessary for Identifiable Abort

We show that security with identifiable abort cannot be achieved in general without assuming a broadcast channel.

Proposition 7.1

Assume the existence of one-way functions and let \(t\ge n/3\). There exist functionalities that cannot be t-securely computed with identifiable abort, in the point-to-point network model.

Proof Sketch

Assume by contradiction that the PKI setup functionality defined by

$$\begin{aligned} f_{\mathsf{PKI}}(\lambda , \ldots , \lambda )=((\mathbf {pk},sk_1), \ldots , (\mathbf {pk},sk_n)), \end{aligned}$$

can be t-securely computed with identifiable abort for \(t=n/3\), where \(\mathbf {pk}=(pk_1,\ldots ,pk_n)\) and each \((pk_i,sk_i)\) is a public/private key pair for a secure digital signature scheme (which exists if one-way functions exist). Then, we can t-securely compute \(f_\mathsf{bc}\) by running the protocol \(\pi \) that is assumed to exist for \(f_{\mathsf{PKI}}\), where \(\pi \) is t-secure with identifiable abort. As in the proof of Theorem 5.2, if \(\pi \) ends with abort, then the party who is identified as corrupted is removed. This continues iteratively until the protocol \(\pi \) terminates without abort, in which case a valid PKI is established among all remaining parties. Given this PKI, the parties can run authenticated broadcast in order to securely compute \(f_\mathsf{bc}\). Since \(f_\mathsf{bc}\) cannot be securely computed for \(t=n/3\), we have a contradiction. \(\square \)

7.2 Fairness Implies Guaranteed Output Delivery for Fail-Stop Adversaries

In the presence of malicious adversaries, fairness and guaranteed output delivery are different notions, since there exist functionalities that can be computed with complete fairness but cannot be computed with guaranteed output delivery. In the presence of semi-honest adversaries, it is immediate that both notions are equivalent, since the adversary cannot abort. In this section, we show that in the presence of fail-stop adversaries, i.e., when the corrupted parties follow the protocol except that the adversary is allowed to abort, fairness implies guaranteed output delivery.

The underlying idea is that if a corrupted party does not send a message to an honest party during the execution of a fair protocol, the honest party can inform all parties that it identified a corrupted party. Since the adversary is fail-stop, corrupted parties cannot lie and falsely incriminate an honest party. Similarly to the proof of Theorem 5.3, the parties do not halt if a party is detected cheating (i.e., halting early). Rather, the parties continue to the end of the protocol: if the protocol ended with output, then they take the output and halt; otherwise, they remove the cheating party and begin again. Since the original protocol is fair, this guarantees that nothing is learned by any party if anyone receives abort; thus, they can safely run the protocol again. As in the proof of Theorem 5.2, this process is repeated iteratively until no abort is received. We conclude that:

Theorem 7.2

Let f be an n-party functionality and let \(t\le n\). Then, f can be t-securely computed with fairness in the presence of fail-stop adversaries, if and only if f can be t-securely computed with guaranteed output delivery in the presence of fail-stop adversaries.