1 Introduction

In this paper we give the first construction of boolean circuits which are secure against attacks that can toggle an arbitrary subset of the wires, in the sense that every such attack is equivalent to attacking only the inputs and outputs of the circuit. We begin with a short overview of the problem and related background.

An Algebraic Manipulation Detection (AMD) code [3] over a finite field \(\mathbb {F}\) is a randomized coding scheme that offers the best possible protection against additive attacks, namely attacks that can blindly add a fixed (but possibly different) element from \(\mathbb {F}\) to every entry of the codeword. Since an attacker can destroy all information by adding a random field element to every symbol, the best one can hope for is to detect errors with high probability, rather than correct them.

An analogous goal of protecting computations against additive attacks was recently considered by Genkin et al. [11]. This goal is captured by the notion of an AMD circuit, a randomized arithmetic circuit which offers the best possible protection against additive attacks that may add a (possibly different) field element to every wire. Since the adversary can legitimately attack input and output wires, the best one can hope for is to limit the adversary to these inevitable attacks. That is, in an AMD circuit the effect of every additive attack that may apply to all internal wires in the circuit can be simulated by an ideal attack that applies only to the inputs and outputs. Combining such AMD circuits with a standard AMD code, one can also protect the inputs and outputs by employing a small tamper-proof input encoder and output decoder.

The study of AMD circuits in [11] was motivated by the observation that in the simplest information-theoretic MPC protocols from the literature, which were only designed to offer protection against passive (i.e., semi-honest) adversaries, the effect of every active (malicious) adversary corresponds precisely to an additive attack on the circuit being evaluated. Thus, a useful paradigm for tackling the difficult goal of protecting against active attacks is to apply such a simple passive-secure protocol to an AMD-encoded computation. This paradigm seems quite promising even from a concrete efficiency perspective [10, 13].

The main result of [11] is that every arithmetic circuit C over \(\mathbb {F}\) can be transformed into an equivalent AMD circuit of size O(|C|) with \(O(1/|\mathbb {F}|)\) simulation error. This provides poor security guarantees over small fields, and in fact the construction used to achieve this can be completely broken when applied over the binary field \(\mathbb {F}=\mathbb {F}_2\). (The natural approach of using an arithmetic circuit over a large extension field does not apply here, because the computation of field multiplications is also subject to attacks.) For the binary case, an alternative construction from [11] relies on the use of a tamper-proof output decoder and can only realize a weaker notion of security that allows for arbitrary correlations between the input and the event that an attack is detected.

The goal of this work is to remedy this state of affairs and provide fully secure AMD circuits over small fields, with a primary focus on the binary case. Binary AMD circuits can be viewed as standard (randomized) boolean circuits (over the full basis) that are subject to arbitrary toggling attacks: the adversary may choose to toggle the values of an arbitrary subset of the wires. This seems quite natural even from a pure fault tolerance perspective and can be viewed as a strict generalization of the classical “random noise” fault model considered by von Neumann [20], Dobrushin and Ortyukov [7], and Pippenger [18]. Such a toggling attack model may not be too far from some real-life scenarios like faults introduced by faulty hardware or cosmic radiation.

In the context of applications to MPC, the binary case is important because it enables us to apply the AMD circuits methodology also to natural protocols that are cast in the OT-hybrid model. These include the simple passive-secure version of the GMW protocol [12]. In contrast, the MPC applications in [11] for the case of dishonest majority could only apply to arithmetic extensions of the GMW protocol that employ an arithmetic extension of OT, known as oblivious linear-function evaluation (OLE). Replacing OLE by OT is particularly attractive in light of efficient OT extension techniques [14, 17] that do not apply to OLE.

We obtain the first constructions of fully secure binary AMD circuits. Given a boolean circuit C and a statistical security parameter \(\sigma \), we construct an equivalent binary AMD circuit \(\widehat{\mathsf C}\) of size \(|C|\cdot {\text {polylog} }(|C|,\sigma )\) (ignoring lower order additive terms) with \(2^{-\sigma }\) simulation error. That is, the effect of toggling an arbitrary subset of wires can be simulated by toggling only input and output wires.

Our construction combines in a general way two types of “simple” honest-majority MPC protocols: protocols that only offer security against passive adversaries, and protocols that only offer correctness against active adversaries. It proceeds according to the following steps. First, we use the correct-only MPC protocol to convert a relatively simple AMD circuit that provides only constant correctness (i.e., any “potentially harmful” attack is detected with some positive probability) into one that offers full correctness (i.e., attacks are detected except with \(2^{-\sigma }\) probability). However, this notion of correctness is not enough, mainly because it does not rule out correlations between the input and the event that an attack is detected. We eliminate such correlations generically by distributing the computation using a passive-secure MPC protocol. The analysis of this step crucially relies on a recent lemma due to Bogdanov et al. [1] that uses the degree required to approximate the OR function by real-valued polynomials to upper bound the advantage of OR in distinguishing between two distributions that are t-wise indistinguishable.

As a byproduct, we get a conceptually different technique for constructing active-secure two-party protocols in the OT-hybrid model from these simpler building blocks. This technique is appealing because in a sense it counters the common wisdom that “security” is more than a combination of “correctness” and “secrecy.” Indeed, our construction shows a general way to obtain full security (for MPC protocols in the OT-hybrid model) by only combining one MPC protocol that guarantees correctness and another that only guarantees secrecy, namely security in the presence of passive attacks. Moreover, the “correct-only MPC” component can be instantiated by a trivial protocol in which each party performs the entire computation locally. (To get the asymptotic efficiency mentioned above, we need to apply more sophisticated correct-only MPC protocols that offer better efficiency.) This can be compared with the IPS compiler [16], which also provides a general way of obtaining active-secure protocols in the OT-hybrid model, but requires an honest-majority MPC protocol that provides active security (which is strictly stronger than relying on active correctness and passive security).

In addition to its conceptual appeal, our new methodology also sheds new light on an intriguing open question about the complexity of secure computation [15]: Are there active-secure two-party protocols that achieve constant computational overhead? In other words, does the asymptotic multiplicative cost of protecting against active adversaries have to grow with the level of security? This question is open even when allowing a trusted source of correlated randomness, and in particular it is open in the OT-hybrid model. The best known protocols [6] have a polylogarithmic overhead in the security parameter (a result that we can match using binary AMD circuits). Our work reduces this question to the same open question in arguably simpler models. Indeed, while our construction involves some additional ad-hoc components (on top of the two types of MPC protocols discussed above) the additional cost they incur depends only on the input and output sizes, and not on the size of the computation. Furthermore, our construction also employs AMD codes to encode the entire protocol transcript, but these can be implemented with constant computational overhead (see Claim 18 and Corollary 1 in Sect. 6).

1.1 Our Results and Techniques

We now provide more details about our results, and the underlying techniques (summarized in Fig. 1 below). We begin by defining the notion of additive correctness, which allows the evaluation of a function \(f:\mathbb {F}^n\rightarrow \mathbb {F}^k\) in the presence of an additive attack on the circuit computing f.

Definition 1

(Additive correctness; cf. full version of [11], Definition 4.1). Let \(\epsilon >0\). We say that a randomized circuit \(\widehat{\mathsf C}:\mathbb {F}^{n} \rightarrow \mathbb {F}^t\times \mathbb {F}^{k}\) is an \(\epsilon \)-additively-correct implementation of a function \(f:\mathbb {F}^n \rightarrow \mathbb {F}^k\) if the following holds:

  • Completeness. For all \(\mathbf{{x}}\in \mathbb {F}^n\) it holds that \(\Pr [\widehat{\mathsf C}(\mathbf{{x}}) = (0^t,f(\mathbf{{x}}))] = 1.\)

  • Additive correctness. For any additive attack \( \mathbf{A} \) there exist \(\mathbf{{a}}^{\mathsf {in}}\in \mathbb {F}^n\) and \(\mathbf{{a}}^{\mathsf {out}}\in \mathbb {F}^k\), such that for every input \(\mathbf{{x}}\) it holds that \(\Pr [\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}}) \notin \mathsf{ERR}\cup \{(0^t,f(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}] \le \epsilon \), where \(\widehat{\mathsf C}^ \mathbf{A} \) is the circuit obtained by subjecting \(\widehat{\mathsf C}\) to the additive attack \( \mathbf{A} \), and \(\mathsf{ERR}= \left( \mathbb {F}^t {\setminus } \{0^t\}\right) \times \mathbb {F}^k\).

We say that \(\widehat{\mathsf C}\) is an \(\epsilon \)-additively-correct implementation of a circuit \({C}\) if \(\widehat{\mathsf C}\) is an \(\epsilon \)-additively-correct implementation of the function \(f_{C}\) computed by \({C}\).

Previous works [10, 11] constructed additively correct implementations for arithmetic circuits over any finite field \(\mathbb {F}\), with constant overhead, and \(\epsilon =O\left( 1/|\mathbb {F}|\right) \). In particular, for \(\mathbb {F}=\mathbb {F}_2\) the error is constant.

1.1.1 Correctness Amplification via Correct-Only MPC

For any function f, and security parameter \(\sigma \), we show the first \(2^{-\sigma }\)-additively-correct implementation of f, with polylogarithmic blowup:

Theorem 1

(Cf. Theorem 11 ). For any depth-d arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\), and any security parameter \(\sigma \), there exists a \(2^{-\sigma }\)-additively-correct implementation \(\widehat{\mathsf C}\) of \({C}\), where \(|\widehat{\mathsf C}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma ).\)

To prove Theorem 1, we present a general method of amplifying additive correctness based on “correct-only” MPC protocols. Such protocols enable a single client, aided by m servers, to evaluate an arithmetic circuit \({C}\) on its input, while guaranteeing correctness of the computation in the presence of an active adversary that corrupts a constant fraction of the servers. Moreover, the only interaction between the client and servers is in the first and last rounds.

More specifically, for m servers, and some constant c, let \(\pi \) be a d-round cm-correct MPC protocol, namely correctness holds even if cm servers are corrupted. Let \(\mathsf{InpEnc},\mathsf{OutDec}\) denote the functions used by the client in the first and last rounds (respectively) to compute its messages to the servers, and its output (respectively). Let \(\mathsf{NextMSG}_{}^{}\) denote the function used by the servers to compute their messages in each round of the protocol. The naive approach towards implementing the circuit \(\widehat{\mathsf C}\) using \(\pi \) is to implement every sub-circuit (namely, each of \(\mathsf{NextMSG}_{}^{}\), \(\mathsf{InpEnc}\), and \(\mathsf{OutDec}\)) using an \(\epsilon \)-additively-correct implementation. This naive approach fails because an additive attack may influence the computation of all \(\mathsf{NextMSG}_{}^{}\) functions, which corresponds to actively corrupting all servers in \(\pi \), whereas the correctness of the protocol only holds when at most cm servers are corrupted. Consequently, additive attacks on \(\widehat{\mathsf C}\) can be divided into two categories:

  1. “Small” Attacks. The sub-circuits of \(\widehat{\mathsf C}\) that these attacks influence correspond to at most cm servers of \(\pi \), so by the cm-correctness of \(\pi \), such attacks cannot affect the output.

  2. “Large” Attacks. The sub-circuits of \(\widehat{\mathsf C}\) that these attacks affect correspond to more than cm servers of \(\pi \). Since each sub-circuit (computing \(\mathsf{NextMSG}_{}^{}\)) is implemented using an \(\epsilon \)-additively-correct implementation, except with probability \(\epsilon ^{cm}\) either at least one of these attacks is detected, or their effect on the computations in the sub-circuits is equivalent to additive attacks on the inputs and outputs of the sub-circuits.

Additionally, we notice that any additive attack on \(\pi \) consists of sub-attacks of one of three types:

  1. Attacks on communication channels. These attacks only affect the messages that parties receive in \(\pi \), but do not modify the \(\mathsf{NextMSG}_{}^{}\) functions. By encoding all messages sent in the protocol using an AMD encoding scheme (and altering \(\mathsf{InpEnc},\mathsf{NextMSG}_{}^{},\mathsf{OutDec}\) to operate on AMD codewords) we can guarantee that such attacks are detected with high probability.

  2. Attacks on \({\mathbf {\mathsf{{NextMSG}}}}\) functions. These attacks arbitrarily modify the \(\mathsf{NextMSG}_{}^{}\) function of the corresponding server, but (as noted above) can be protected against by replacing all \(\mathsf{NextMSG}_{}^{}\) functions with their \(\epsilon \)-additively-correct implementations.

  3. Attacks on client functions. Since \(\pi \) is correct only as long as the client is honest, such attacks may arbitrarily affect the outputs. Therefore, to guarantee that such attacks are detected except with negligible probability, \(\mathsf{InpEnc}\) and \(\mathsf{OutDec}\) should be replaced with their \(2^{-\sigma }\)-additively-correct implementations. The crucial point here is that since \(\left| \mathsf{InpEnc}\right| +\left| \mathsf{OutDec}\right| \) is polynomial in the input and output lengths, but otherwise independent of \(\left| {C}\right| \), any efficient \(2^{-\sigma }\)-additively-correct implementation will do, and the resultant overhead would still be \({\text {polylog} }\left( m\left| C\right| \right) \). (We show an example of a \(2^{-\sigma }\)-additively-correct implementation in Appendix A.)

Consequently, we implement the circuit \(\widehat{\mathsf C}\) using \(\pi \) as follows. We first replace the \(\mathsf{NextMSG}_{}^{}\) functions of \(\pi \) with the functions \(\mathsf{NextMSG}_{}^{\prime }\) that operate on AMD codewords, and replace \(\mathsf{NextMSG}_{}^{\prime }\) with its \(\epsilon \)-additively correct implementation, \(\widehat{\mathsf{NextMSG}_{}^{\prime }}\), such that \(\left| \widehat{\mathsf{NextMSG}_{}^{\prime }}\right| =O\left( \left| \mathsf{NextMSG}_{}^{\prime }\right| \right) \), and \(\epsilon \) is constant. Additionally, we replace \(\mathsf{InpEnc}\) (resp., \(\mathsf{OutDec}\)) with the function \(\mathsf{InpEnc}'\) (resp., \(\mathsf{OutDec}'\)) which outputs (resp., takes as input) AMD codewords, and replace \(\mathsf{InpEnc}',\mathsf{OutDec}'\) with their \(2^{-\sigma }\)-additively correct implementations \(\widehat{\mathsf{InpEnc}},\widehat{\mathsf{OutDec}}\). Thus, \(|\widehat{\mathsf C}| = |\widehat{\mathsf{InpEnc}}|+|\widehat{\mathsf{OutDec}}|+ \sum _{i=1}^m \sum _{j=1}^d |\widehat{\mathsf{NextMSG}_{i}^{j}}|\). We use an efficient correct-only MPC protocol \(\pi \) (e.g., a slightly simplified version of [6]) to guarantee that when \(m=\sigma \), the multiplicative computational overhead is only \({\text {polylog} }\left( \sigma ,\left| {C}\right| \right) \). (Since we would like the overhead to be sublinear in \(\sigma \), we cannot use a trivial correct-only MPC protocol for evaluating C on input x.) For this choice of \(\pi \), \(|{\mathsf{InpEnc}}|+|{\mathsf{OutDec}}| = {\text {poly}}(n,k)\), so \(|\widehat{\mathsf{InpEnc}}|+|\widehat{\mathsf{OutDec}}| = {\text {poly}}(n,k)\). Similarly, \(\sum _{i=1}^\sigma \sum _{j=1}^d |{\mathsf{NextMSG}_{i}^{j}}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma )\), so \(\sum _{i=1}^\sigma \sum _{j=1}^d |\widehat{\mathsf{NextMSG}_{i}^{j}}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma )\). (See Sect. 4 for a more complete discussion.)

1.1.2 From Correctness to Security via Passive-Secure MPC

Additive correctness (as guaranteed by Theorem 1) does not rule out the possibility that the probability of \(\mathsf{ERR}\) (due to set flags) is correlated with the inputs of \(\widehat{\mathsf C}\). Thus, additive attacks on additively-correct circuits may leak information about the inputs to \(\widehat{\mathsf C}\), making additive correctness insufficient for applications to secure multiparty computation (as described in, e.g., [11]) that require that no such correlations exist. This stronger property is achieved by the following additive security property which, intuitively, guarantees that any additive attack on \(\widehat{\mathsf C}\) is equivalent (up to a small statistical distance) to an additive attack on the inputs and outputs of the function that \(\widehat{\mathsf C}\) computes. Formally,

Definition 2

(Additively-secure implementation). Let \(\epsilon >0\). We say that a randomized circuit \({C}:\mathbb {F}^n\rightarrow \mathbb {F}^k\) is an \(\epsilon \)-additively-secure implementation of a function \(f:\mathbb {F}^n\rightarrow \mathbb {F}^k\) if the following holds.

  • Completeness. For every \(x\in \mathbb {F}^n\), \(\Pr \left[ C\left( x\right) =f\left( x\right) \right] =1\).

  • Additive-attack security. For any additive attack \( \mathbf{A} \) there exist \(\mathbf{{a}}^{\mathsf {in}}\in \mathbb {F}^n\) and a distribution \(\mathcal {A}^{\mathsf {out}}\) over \(\mathbb {F}^k\), such that for every \(\mathbf{{x}}\in \mathbb {F}^n\), \(\mathsf{{SD}}(C^{ \mathbf{A} }\left( \mathbf{{x}}\right) ,f\left( \mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}}\right) +\mathcal {A}^{\mathsf {out}})\le \epsilon \).

As in the case of additive correctness, previous works [10, 11] constructed additively-secure implementations for arithmetic circuits over any finite field \(\mathbb {F}\), with constant overhead, and \(\epsilon =O\left( 1/\left| \mathbb {F}\right| \right) \). Unfortunately, their results and techniques are of little use in the binary case, since the error is too large. We present the first additively-secure circuits with negligible error probability over the binary field. Formally:

Theorem 2

(Cf. Theorem 14 ). For any depth-d arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\), and security parameter \(\sigma \), there exists a \(2^{-\sigma }\)-additively-secure implementation \(\widehat{\mathsf C}\) of \({C}\), where \(|\widehat{\mathsf C}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma ).\)

As in Sect. 1.1.1, the high-level idea is to implement C using an m-party protocol (in the standard MPC model, namely not in the client-server model), where the functions computed by the parties are replaced with additively-correct implementations that operate over AMD encodings. However, since our main concern now is privacy, and not correctness, we use passive-secure protocols which only guarantee privacy against a constant fraction c of passively-corrupted parties. This privacy guarantee allows us to decouple the probability of \(\mathsf{ERR}\) of the additively correct circuits from their inputs, resulting in additively secure circuits.

More specifically, the input of the circuit C is shared between the parties using an additive secret-sharing, and the d-round passive-secure protocol \(\pi \) computes the functionality that reconstructs the input from the shares, evaluates C, and outputs an additive secret-sharing of the output. The privacy property of \(\pi \), together with the secrecy property of the secret-sharing scheme, guarantee that the joint view of a constant fraction of passively-corrupted parties reveals no information about the inputs, or outputs, of the computation. As in Sect. 1.1.1, \(\widehat{\mathsf C}\) is obtained from \(\pi \) by first replacing all \(\mathsf{NextMSG}_{}^{}\) functions with the functions \(\mathsf{NextMSG}_{}^{\prime }\) that operate on AMD encodings, and then implementing each \(\mathsf{NextMSG}_{}^{\prime }\) using a \(2^{-\sigma }\)-additively-correct implementation \(\widehat{\mathsf{NextMSG}_{}^{\prime }}\) with constant overhead (such as the one from Theorem 1). As \(\widehat{\mathsf C}\) should emulate C (rather than output a secret sharing of the output of C), the output is reconstructed from the outputs of the parties in \(\pi \) by summing their shares, and is then combined with the flags generated by all the additively-correct implementations, such that if any of the flags were set then the output of \(\widehat{\mathsf C}\) is random.
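
To make the sharing step concrete, the following is a minimal sketch (in Python, purely illustrative and not taken from the paper) of additive secret sharing over \(\mathbb {F}_2\): the input bit is split into m shares that XOR to the bit, any \(m-1\) shares are uniformly random, and reconstruction simply XORs all shares.

```python
import secrets

def share_bit(x: int, m: int) -> list[int]:
    """Additively share a bit x among m parties: the shares XOR to x."""
    shares = [secrets.randbelow(2) for _ in range(m - 1)]
    last = x
    for s in shares:
        last ^= s            # any m-1 shares are uniform; the last one fixes the sum
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Reconstruct the secret by XORing all shares."""
    out = 0
    for s in shares:
        out ^= s
    return out

x = 1
shares = share_bit(x, m=5)
assert reconstruct(shares) == x
```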

Using a union bound over the additive-correctness property of the additively-correct implementations, except with probability at most \(|{C}|\cdot 2^{-\sigma }\) any additive attack on the \(\widehat{\mathsf{NextMSG}_{}^{\prime }}\) functions either sets a flag, or is equivalent to an attack on the inputs and outputs of \(\mathsf{NextMSG}_{}^{\prime }\). Except for the inputs and outputs of \(\widehat{\mathsf C}\) itself, the inputs and outputs of the \(\mathsf{NextMSG}_{}^{\prime }\) functions are protected by the AMD encoding scheme, so by its additive soundness any attack on them will set a flag with overwhelming probability. Thus, the only additive attacks that do not set a flag (with overwhelming probability) are attacks on the inputs and outputs of \(\widehat{\mathsf C}\), which are equivalent to attacks on the inputs and outputs of \({C}\). Consequently, with overwhelming probability the execution of \(\pi \) is correct even in the presence of additive attacks.

It remains to show that the probability of setting a flag in \(\widehat{\mathsf C}\), thus causing the output to be random, is input independent. We use the fact that the probability that a subset of \(\widehat{\mathsf{NextMSG}_{}^{\prime }}\) implementations set their flags depends only on their joint inputs and outputs, and distinguish between two types of attacks.

  1. “Small” attacks. These attacks attempt to corrupt fewer than cm parties. Therefore, the probability that a flag is set depends only on the inputs and outputs of these parties which, by the privacy of \(\pi \), and the secrecy of the secret-sharing scheme, are independent of the inputs of \(\widehat{\mathsf C}\).

  2. “Large” attacks. These attacks attempt to corrupt more than cm parties, and so we can no longer use the privacy of \(\pi \). However, notice that in this case the output of \(\widehat{\mathsf C}\) is random if and only if at least one additively-correct implementation set a flag (regardless of the identity or number of flags that were set). That is, the output is random if and only if the OR of the flags is 1. Using a recent lemma of [1] (stated as Lemma 1 below), the correlation of the OR with the input is negligible, because the OR is computed over a large fraction of the flags.

As for the size of \(\widehat{\mathsf C}\), notice that \(|\widehat{\mathsf C}| = \sum _{i=1}^m \sum _{j=1}^d |\widehat{\mathsf{NextMSG}_{i}^{j}}|\). To obtain the small overhead guaranteed by Theorem 2, we use a cm-private (for some constant \(c>0\)) m-party protocol of [6] in which the total circuit size of all the \(\mathsf{NextMSG}_{}^{}\) functions is \(|{C}|\cdot {\text {polylog} }(|{C}|,m)+ {\text {poly}}(m,n,k,d,\log |{C}|)\). Setting \(m={\text {poly}}\left( \sigma \right) \), \(\sum _{i=1}^\sigma \sum _{j=1}^d |{\mathsf{NextMSG}_{i}^{j}}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma )\), and so if all the \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) are generated using Theorem 1, \(|\widehat{\mathsf C}|=|{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma )\). (See Sect. 5 for a more detailed analysis.)

1.2 On the Difference Between Additive Correctness and Additive Security

As noted in Sect. 1.1.2, Definition 1 is weaker than Definition 2. In particular, the correctness guarantee of Definition 1 is insufficient for many MPC applications, since the probability of \(\mathsf{ERR}\) (due to set flags) might be correlated with the inputs, and consequently reveal information regarding the inputs of \(\widehat{\mathsf C}\). As we now show, such correlations exist in many natural constructions of additively correct implementations (and, in particular, in all additively-correct constructions discussed in this paper as well as the constructions in [10, 11]).

As a typical example of correlations between inputs and the probability of \(\mathsf{ERR}\) created by additive attacks, consider the simpler case of an AMD code. Specifically, consider the code which encodes a field element \(x \in \mathbb {F}\) as \((x,v_1,\cdots ,v_\sigma , r_1,\cdots ,r_\sigma )\), where \(v_1,\cdots ,v_\sigma \in _R\mathbb {F}\) are uniformly random, and \(r_i = v_i \cdot x\) for all \(1 \le i \le \sigma \). To decode \((x,v_1,\cdots ,v_\sigma , r_1 ,\cdots ,r_\sigma )\), the decoder verifies that \(x \cdot v_i = r_i\) for all \(1 \le i \le \sigma \). Consider the additive attack that adds the same arbitrary constant \(\delta \ne 0\) to all the \(v_i\)’s. If \(x=0\) then \(r_i=0\) for every \(1 \le i \le \sigma \), thus the test \(0 \cdot (v_i+\delta )=0\) passes for all i, and decoding succeeds. However, if \(x\ne 0\) then every \(x \cdot v_i = r_i\) test fails except with probability \(1/|\mathbb {F}|\). Since decoding succeeds only if all tests succeed, decoding fails in this case with probability at least \(1-1/|\mathbb {F}|^\sigma \).

Overall, this attack leaks information regarding the value of x because if \(x=0\) then the decoder aborts with probability zero, whereas if \(x\ne 0\) then the decoder aborts with probability almost 1. Similar attacks apply to all additively-correct constructions presented in this paper, thus requiring the transformation of Sect. 5.
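
The following sketch (Python over a small prime field; the field size and number of checks are illustrative choices, not taken from the paper) simulates this attack and shows that the detection event is perfectly correlated with whether \(x=0\).

```python
import random

P = 101       # a small prime field, chosen only for illustration
SIGMA = 8     # number of checks

def encode(x):
    v = [random.randrange(P) for _ in range(SIGMA)]
    r = [(vi * x) % P for vi in v]
    return (x, v, r)

def decode(word):
    x, v, r = word
    ok = all((x * vi) % P == ri for vi, ri in zip(v, r))
    return ("ok", x) if ok else ("ERR", None)

def add_delta_to_v(word, delta):
    x, v, r = word                       # the attack touches only the v_i entries
    return (x, [(vi + delta) % P for vi in v], r)

for x in (0, 1, 7):
    detected = sum(decode(add_delta_to_v(encode(x), delta=5))[0] == "ERR"
                   for _ in range(1000))
    print(f"x = {x}: attack detected in {detected}/1000 encodings")
# x = 0 is never detected, while x != 0 is always detected, so the
# detection event reveals whether x = 0.
```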

Fig. 1. Additive security from weak additive correctness (both steps use AMD codes)

2 Preliminaries

In the following, \(\mathbb {F}\) will denote a finite field, n usually denotes the input length, k usually denotes the output length, d, s denote depth and size, respectively (e.g., of circuits, as defined below), and m is used to denote the number of parties. Vectors will be denoted by boldface letters (e.g., \(\mathbf a \)). If \(\mathcal {D}\) is a distribution then \(X\leftarrow \mathcal {D}\), or \(X\in _R \mathcal {D}\), denotes sampling X according to the distribution \(\mathcal {D}\). Given two distributions X, Y, \(\mathsf{{SD}}\left( X,Y\right) \) denotes the statistical distance between X and Y.

The following lemma regarding k-wise indistinguishable distributions over \(\{0,1\}^n\) will be used to construct additively-secure circuits.

Lemma 1

(Cf. Claim 3.9 in [1]). Let n, k be positive integers, and \(\mathcal {X}\), \(\mathcal {Y}\) be k-wise indistinguishable distributions over \(\{0,1\}^n\). Then

$$\begin{aligned} \left| \Pr [(x_1,\cdots ,x_n) \leftarrow \mathcal {X}: \vee _{i=1}^n x_i =1 ] - \Pr [(y_1,\cdots ,y_n) \leftarrow \mathcal {Y}: \vee _{i=1}^n y_i =1 ] \right| \le 2^{-\Omega (k/\sqrt{n})}. \end{aligned}$$
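
As a small sanity check (not part of the paper), the following sketch compares two \((n-1)\)-wise indistinguishable distributions over \(\{0,1\}^n\), namely the uniform distributions over even-parity and odd-parity strings, and computes the gap between the two OR-probabilities exactly.

```python
from itertools import product

def or_probability(strings):
    """Probability that the OR of the bits is 1, under the uniform
    distribution over the given set of strings."""
    return sum(any(s) for s in strings) / len(strings)

n = 10
even = [s for s in product((0, 1), repeat=n) if sum(s) % 2 == 0]
odd = [s for s in product((0, 1), repeat=n) if sum(s) % 2 == 1]
# Any n-1 coordinates of either distribution are uniform over {0,1}^{n-1},
# so the two distributions are (n-1)-wise indistinguishable.
gap = abs(or_probability(even) - or_probability(odd))
print(gap)   # 2^{-(n-1)} = 1/512, consistent with the 2^{-Omega(k/sqrt(n))} bound
```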

Additive Attacks. We follow the terminology of [10].

Definition 3

(Additive attack). An additive attack \( \mathbf{A} \) on a circuit \({C}\) is a fixed vector of field elements which is independent of the inputs and internal values of \({C}\). \( \mathbf{A} \) contains an entry for every wire, and every output gate, of \({C}\), and has the following effect on the evaluation of the circuit. For every wire \(\omega \) connecting gates \(\mathsf{a}\) and \(\mathsf{b}\) in \({C}\), the entry of \( \mathbf{A} \) that corresponds to \(\omega \) is added to the output of \(\mathsf{a}\), and the computation of the gate \(\mathsf{b}\) uses the derived value. Similarly, for every output gate \(\mathsf{o}\), the entry of \( \mathbf{A} \) that corresponds to the output wire of \(\mathsf{o}\) is added to the value of this output.
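
To illustrate the definition, here is a toy sketch (Python over a small prime field, an illustrative example not taken from the paper) of evaluating a two-gate circuit computing \((x_1+x_2)\cdot x_3\) under an additive attack on one internal wire and on the output wire.

```python
P = 101  # a small prime field, chosen only for illustration

def circuit(x1, x2, x3, a_wire=0, a_out=0):
    """Computes (x1 + x2) * x3 over F_P.

    a_wire is the attack entry for the wire carrying x1 + x2 into the
    multiplication gate; a_out is the attack entry for the output wire."""
    s = (x1 + x2) % P          # addition gate
    s = (s + a_wire) % P       # attack added to the internal wire
    y = (s * x3) % P           # multiplication gate
    return (y + a_out) % P     # attack added to the output wire

print(circuit(3, 4, 5))                      # honest evaluation: 35
print(circuit(3, 4, 5, a_wire=2, a_out=7))   # attacked: (3 + 4 + 2) * 5 + 7 = 52
```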

Notation 3

For a (possibly randomized) circuit \({C}\) and for a gate g of \({C}\), we denote by \(g_\mathbf{{x}}\) the distribution of the output value of g (defined in a natural way) when \({C}\) is evaluated on an input \(\mathbf{{x}}\).

Notation 4

Let \({C}\) be a (possibly randomized) circuit, and \( \mathbf{A} \) be an additive attack on \({C}\). We denote by \( \mathbf{A} _{{c},{c'}}\) the attack \( \mathbf{A} \) restricted to the wire connecting the gates \(c,c'\) of \({C}\). Similarly we denote by \( \mathbf{A} ^{\mathsf{out}}\) the restriction of \( \mathbf{A} \) to all the outputs of \({C}\).

Encoding Schemes. An encoding scheme \(\mathsf {E}\) over a set \(\Sigma \) of symbols (called “the alphabet”) is a pair \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) of algorithms, where the encoding algorithm \(\mathsf{Enc}\) is a PPT algorithm that given a message \(x \in \Sigma ^n\) outputs an encoding \(\hat{x}\in \Sigma ^{\hat{n}}\) for some \(\hat{n} = \hat{n}\left( n\right) \); and the decoding algorithm \(\mathsf{Dec}\) is a deterministic algorithm, that given an \(\hat{x}\) of length \(\hat{n}\) in the image of \(\mathsf{Enc}\), outputs an \(x\in \Sigma ^n\). Moreover, \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( x\right) \right) =x\right] =1\) for every \(x\in \Sigma ^n\). We will assume that when \(n>1\), \(\mathsf{Enc}\) encodes every symbol of x separately, and in particular \(\hat{n}\left( n\right) =n\cdot \hat{n}\left( 1\right) \).

Parameterized Encoding Schemes. We consider encoding schemes in which the encoding and decoding algorithms are given an additional input \(1^t\), which is used as a security parameter. Concretely, the encoding length depends also on t (and not only on n), i.e., \(\hat{n}=\hat{n}\left( n,t\right) \), and for every t the resultant scheme is an encoding scheme (in particular, for every \(x\in \Sigma ^n\) and every \(t\in \mathbb {N}\), \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( x,1^{t}\right) ,1^{t}\right) =x\right] =1\)). We call such schemes parameterized. We will only consider parameterized encoding schemes, and therefore when we say “encoding scheme” we mean a parameterized encoding scheme.

Algebraic Manipulation Detection (AMD) Encoding Schemes. Informally, AMD encoding schemes over a finite field \(\mathbb {F}\) guarantee that additive attacks on codewords are detected by the decoder with some non-zero probability:

Definition 4

(AMD encoding scheme, [3, 11]). Let \(\mathbb {F}\) be a finite field, \(n\in \mathbb {N}\) be an input length parameter, \(t\in \mathbb {N}\) be a security parameter, and \(\epsilon \left( n,t\right) :\mathbb {N}\times \mathbb {N}\rightarrow \mathbb {R}^+\). An \(\left( n,t,\epsilon \left( n,t\right) \right) \)-algebraic manipulation detection (AMD) encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) over \(\mathbb {F}\) is an encoding scheme with the following guarantees.

  • Perfect completeness. For every \(\mathbf{{x}}\in \mathbb {F}^n\), \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) ,1^t\right) =\left( 0,\mathbf{{x}}\right) \right] =1\).

  • Additive soundness. For every \(0^{\hat{n}\left( n,t\right) }\ne \mathbf{{a}}\in \mathbb {F}^{\hat{n}\left( n,t\right) }\), and every \(\mathbf{{x}}\in \mathbb {F}^n\), \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) +\mathbf{{a}},1^t\right) \notin \mathsf{ERR}\right] \le \epsilon \left( n,t\right) \) where \(\mathsf{ERR}=(\mathbb {F}{\setminus } \{0\})\times \mathbb {F}^n \), and the probability is over the randomness of \(\mathsf{Enc}\).
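
For concreteness, here is a minimal sketch of a single-element AMD code over a prime field \(\mathbb {F}_p\), in the spirit of the constructions of [3]; the specific tag polynomial \(r^3+x\cdot r\) and the field size are illustrative assumptions rather than the exact construction of [3]. A fixed nonzero additive attack on the codeword passes the decoder's check with probability at most \(2/p\) over the encoding randomness.

```python
import random

P = 1009  # an illustrative prime (any prime other than 3 works for the r^3 tag)

def enc(x):
    """Encode x in F_P as (x, r, r^3 + x*r) for a uniformly random r."""
    r = random.randrange(P)
    return [x % P, r, (pow(r, 3, P) + x * r) % P]

def dec(word):
    """Return (0, x) if the tag checks out, and a nonzero flag otherwise."""
    x, r, tag = word
    return (0, x) if (pow(r, 3, P) + x * r) % P == tag else (1, None)

def acceptance_rate(x, a, trials=10_000):
    """Empirical probability that Dec(Enc(x) + a) does not land in ERR."""
    hits = sum(dec([(c + ai) % P for c, ai in zip(enc(x), a)])[0] == 0
               for _ in range(trials))
    return hits / trials

print(acceptance_rate(x=5, a=(3, 7, 11)))   # at most about 2/P
print(acceptance_rate(x=5, a=(0, 0, 0)))    # 1.0: no attack, always accepted
```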

Remark 1

It will sometimes be useful to represent \((\mathsf{Enc},\mathsf{Dec})\) as families of arithmetic circuits (instead of polynomial-time algorithms) that are parameterized by the security parameter t. That is, \(\left( \mathsf{Enc}=\left\{ \mathsf{Enc}_{n}\right\} ,\mathsf{Dec}=\left\{ \mathsf{Dec}_{n}\right\} \right) \) are families of arithmetic circuits over \(\mathbb {F}\), where \(\mathsf{Enc}_{n}:\mathbb {F}^n \rightarrow \mathbb {F}^{\hat{n}}\) is randomized, and \(\mathsf{Dec}_{n}:\mathbb {F}^{\hat{n}} \rightarrow \mathbb {F}\times \mathbb {F}^{n}\) is deterministic. (Here, the security parameter t is “hard-wired” into the circuits.) Somewhat abusing notation, we use \(\mathsf{Enc},\mathsf{Dec}\) to denote both the families of circuits, and the circuits \(\mathsf{Enc}_{n},\mathsf{Dec}_{n}\) for a specific n, omitting the subscript (when n is clear from the context).

We will sometimes need AMD codes with a stronger robustness guarantee which, roughly speaking, guarantees additive correctness even in the presence of additive attacks on the internal wires of the encoding procedure, where the equivalent ideal attack on the encoded input does not depend on any additional additive attack applied to the resulting codeword:

Definition 5

(Robust AMD encoding schemes). Let \(\mathbb {F}\) be a finite field, \(n\in \mathbb {N}\) be an input length parameter, \(\hat{n}\in \mathbb {N}\) be an output length parameter, \(t\in \mathbb {N}\) be a security parameter, and \(\epsilon \left( n,t\right) :\mathbb {N}\times \mathbb {N}\rightarrow \mathbb {R}^+\). We say that an encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) over \(\mathbb {F}\) is an \(\left( n,\hat{n},t,\epsilon \left( n,t\right) \right) \)-robust AMD encoding scheme, if it is an \(\left( n,t,\epsilon \left( n,t\right) \right) \)-AMD encoding scheme in which the additive soundness property is replaced with the following additive robustness property. Let \(\mathsf{Enc}:\mathbb {F}^n \rightarrow \mathbb {F}^{\hat{n}}\), \(\mathsf{Dec}:\mathbb {F}^{\hat{n}} \rightarrow \mathbb {F}\times \mathbb {F}^{n}\), then for any additive attack \( \mathbf{A} \) on \(\mathsf{Enc}\) there exists an ideal attack \(\mathbf{{a}}^{\mathsf {in}}\in \mathbb {F}^n\) such that for any \(\mathbf{{b}}\in \mathbb {F}^{\hat{n}}\), and any \(\mathbf{{x}}\in \mathbb {F}^n\), it holds that \( \Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}^ \mathbf{A} \left( \mathbf{{x}},1^t\right) + \mathbf{{b}},1^t\right) \notin \mathsf{ERR}\cup \left\{ \left( 0,\mathbf{{x}}+ \mathbf{{a}}^{\mathsf {in}}\right) \right\} \right] \le \epsilon \), where \(\mathsf{ERR}=(\mathbb {F}{\setminus } \{0\})\times \mathbb {F}^n \), and the probability is over the randomness of \(\mathsf{Enc}\).

Secure Multiparty Computation. We recall a few standard definitions that will be used in subsequent sections.

We view an MPC protocol \(\pi \) as a collection of \(\mathsf{NextMSG}_{}^{}\) functions. The protocol proceeds in rounds, where in round j, the description of \(\pi \) contains a next message function \(\mathsf{NextMSG}_{i}^{j}\) of round j for party \(P_i\), defined as follows. \(\mathsf{NextMSG}_{i}^{j}\) takes as input all the messages \(m_i^{j-1}\) that \(P_i\) received before round j, its input \(x_i\), and its randomness \(r_i\); and outputs the messages that \(P_i\) sends in round j. If j is the last round of \(\pi \), then for every party \(P_i\), \(\mathsf{NextMSG}_{i}^{j}\) outputs the output of \(P_i\) in \(\pi \).
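
The following sketch (Python, with hypothetical interfaces chosen only to make the round structure explicit) drives a protocol that is specified as a collection of \(\mathsf{NextMSG}\) functions: in every round each party applies its \(\mathsf{NextMSG}\) function to its input, its randomness, and the messages it has received so far, and the last-round outputs are taken as the parties' outputs.

```python
def run_protocol(next_msg, inputs, randomness, num_rounds):
    """Drive a protocol given as a collection of NextMSG functions.

    next_msg[i][j](x_i, r_i, received_i) returns, in rounds j < num_rounds - 1,
    a list of (recipient, message) pairs; in the last round it returns
    party i's output."""
    m = len(inputs)
    received = [[] for _ in range(m)]        # messages received so far, per party
    for j in range(num_rounds - 1):
        outgoing = [next_msg[i][j](inputs[i], randomness[i], received[i])
                    for i in range(m)]
        for sender, msgs in enumerate(outgoing):
            for recipient, msg in msgs:
                received[recipient].append((sender, msg))
    # last round: NextMSG produces each party's output
    return [next_msg[i][num_rounds - 1](inputs[i], randomness[i], received[i])
            for i in range(m)]
```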

The Client-Server Model. The client-server model (see [2, 4, 5] for a more detailed discussion) is a refinement of the standard MPC model in which each party has one of two possible roles: clients hold inputs and receive outputs; and servers have no inputs and receive no outputs, but may participate in the computation. Notice that every protocol in the client-server model can be converted to a protocol in the standard MPC model by asking every party to emulate a single server and a single client (assuming the protocol has the same number of clients and servers). See Fig. 2.

Fig. 2. MPC protocol with a single client and m servers

In the following, we assume that the protocol consists of a single input client, a single output client, and \(m_S\) servers. We call such protocols \(m_S\)-server protocols. We use the simulation-based paradigm, and say that a protocol \(\pi \) in the client-server model is \((s,\epsilon )\)-secure (\((s,\epsilon )\)-private) if it is secure (up to distance \(\epsilon \)) against all active (passive) adversaries corrupting at most s servers, and no clients. We assume that the description of a protocol in the client-server model consists of the following:

  1. Input Encoding. A description of a function \(\mathsf{InpEnc}\) whose input is the input of the input client, and whose output is the messages that the input client sends to the servers.

  2. Circuit Evaluation. For every server \(S_i\), and every round j, a description of a function \(\mathsf{NextMSG}_{i}^{j}\) which specifies the messages that \(S_i\) sends to all the servers (to the output client) in round j (in the last round).

  3. Output Decoding. A description of a function \(\mathsf{OutDec}\) whose input is the messages sent to the output client (from the servers) in the last round, and whose output is the output of \(\pi \).

We will use a relaxed notion of security, which we call correct-only MPC. Intuitively, it guarantees output correctness even in the presence of an active adversary that corrupts a “small” subset of the servers. This notion relaxes the standard security notion because it does not guarantee input privacy. We formalize correct-only MPC as follows, where for a protocol \(\pi \), and an adversary \(\mathsf{Adv}\), \(\pi ^\mathsf{Adv}(\mathbf{{x}})\) denotes the outputs (of the clients) in an execution of \(\pi \) on inputs \(\mathbf{{x}}\) in the presence of \(\mathsf{Adv}\).

Definition 6

Let \(f:X \rightarrow Y\) be a function, and \(\pi \) be a single client, \(m_S\)-server protocol. We say that \(\pi \) \((t,\epsilon )\)-correctly computes f if for every active adversary \(\mathsf{Adv}\) corrupting a set \(T,|T| \le t\) of servers, and every client input \(\mathbf{{x}}\in X\), it holds that \(\Pr \left[ f(\mathbf{{x}})\ne \pi ^\mathsf{Adv}(\mathbf{{x}})\right] \le \epsilon .\)

We say that \(\pi \) t-correctly computes f if it \(\left( t,\epsilon \right) \)-correctly computes f for \(\epsilon =0\).

Remark 2

Notice that any protocol \(\pi \) for t-correctly computing f in the client-server model can be assumed to be deterministic without loss of generality. This is because the adversary \(\mathsf{Adv}\) has no effect on the randomness used by the input clients. Therefore, any \(\pi \) can be de-randomized by fixing its randomness to some arbitrary value.

Next, we describe a simple replication-based m-server protocol for \(\left( \lceil m/2 \rceil -1\right) \)-correctly computing a function f.

Theorem 5

Let \(\mathbb {F}\) be a finite field. Then for every arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\), and \(m\in \mathbb {N}\), there exists an m-server protocol \(\pi \) for \(\left( \lceil m/2 \rceil -1\right) \)-correctly computing the function \(f_{C}\) computed by \({C}\). Moreover, the computational complexity (in field operations) of \(\pi \) is \(O(|{C}|\cdot m)\).

Proof

The input client replicates the input \(\mathbf{{x}}\) among all the servers, who locally compute \(\mathbf{{z}}_i \leftarrow {C}(\mathbf{{x}})\) and send \(\mathbf{{z}}_i\) to the output client, who outputs \({\mathrm {maj}}\{\mathbf{{z}}_1,\cdots ,\mathbf{{z}}_m\}\). \(\square \)
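
A minimal sketch of this replication-based protocol (Python; the adversary is modeled simply as a set of servers whose answers are replaced by arbitrary values):

```python
from collections import Counter

def replicated_compute(C, x, m, corrupted=None):
    """Each of m servers evaluates C on the replicated input; the output
    client returns the majority answer. Correct whenever fewer than m/2
    servers are corrupted."""
    corrupted = corrupted or {}
    answers = [tuple(corrupted[i]) if i in corrupted else tuple(C(x))
               for i in range(m)]
    majority, _ = Counter(answers).most_common(1)[0]
    return list(majority)

C = lambda x: [(x[0] + x[1]) % 2, (x[0] * x[1]) % 2]    # a toy boolean circuit
print(replicated_compute(C, [1, 0], m=5, corrupted={0: [9, 9], 3: [9, 9]}))
# -> [1, 0]: two corrupted servers out of five cannot sway the majority
```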

We will use the following theorem regarding the existence of correct-only MPC protocols.

Theorem 6

(Implicit in [6]). Let \(\sigma \) be a security parameter, \(m\in \mathbb {N}\), \(\mathbb {F}\) be a finite field, and \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) be a depth-d arithmetic circuit. Then there exists a d-round, m-server protocol \(\pi \) that m/10-correctly computes \({C}\), where:

  • The total circuit size of the input encoding function \(\mathsf{InpEnc}\), and the output decoding function \(\mathsf{OutDec}\), is \({\text {poly}}(n,k,m)\).

  • The total circuit size of all the \(\mathsf{NextMSG}_{}^{}\) functions is \(|{C}|\cdot {\text {polylog} }(|{C}|,\sigma )+{\text {poly}}(m,d,n,k,\log |{C}|)\).

  • In each round of \(\pi \), the messages sent by each party contain in total at most \({\text {poly}}(n,k,\log |{C}|)\) field elements.

3 Circuit Transformations

In this section we describe a few circuit transformations which will be used in Sects. 4 and 5 to construct additively-correct and additively-secure circuits. At a high level, these transformations replace a given circuit C over field \(\mathbb {F}\) with a new circuit that operates on AMD encodings. We first describe a randomized gadget that combines and amplifies error flags. This gadget will be used in the following constructions to combine error flags obtained from AMD decoding of several codewords.

Construction 1

Let \(n_f\in \mathbb {N}\) be an input length parameter, and \(\sigma \in \mathbb {N}\) be a security parameter. The flag combining gadget \(\mathcal {F}_{\mathsf{{comb}}}:\mathbb {F}^{n_f}\rightarrow \mathbb {F}^{\sigma }\), on input \(f_1,\cdots ,f_{n_f}\in \mathbb {F}\), operates as follows.

  1. Generates \(n_f\) random vectors \(\mathbf{{r}}_1,\cdots ,\mathbf{{r}}_{n_f} \in _R \mathbb {F}^{\sigma }\).

  2. Outputs \(\mathbf{{f}}\leftarrow \sum _{i=1}^{n_f} \mathbf{{r}}_i \cdot f_i \).

Observation 7

If \(\left( f_1,\cdots ,f_{n_f}\right) \ne \mathbf {0}\) then \(\mathcal {F}_{\mathsf{{comb}}}\left( f_1,\cdots ,f_{n_f}\right) \ne \mathbf {0}\) except with probability at most \(2^{-\sigma }\).
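
A minimal sketch of the flag-combining gadget over \(\mathbb {F}_2\) (illustrative code, not from the paper): every flag is multiplied by a fresh random \(\sigma \)-bit vector and the products are summed, so if at least one flag is nonzero the output is uniform over \(\mathbb {F}_2^{\sigma }\) and hence nonzero except with probability \(2^{-\sigma }\).

```python
import secrets

def f_comb(flags, sigma):
    """Combine error flags over F_2: the output is 0^sigma whenever all
    flags are 0, and uniform (hence nonzero w.h.p.) otherwise."""
    out = [0] * sigma
    for f in flags:
        r = [secrets.randbelow(2) for _ in range(sigma)]   # fresh random vector
        out = [(o + ri * f) % 2 for o, ri in zip(out, r)]
    return out

print(f_comb([0, 0, 0, 0], sigma=8))   # always the all-zero vector
print(f_comb([0, 1, 0, 0], sigma=8))   # nonzero except with probability 2^-8
```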

Next, we describe a circuit transformation \(\mathcal {T}_{\mathsf{{inter}}}\) that will be used to replace intermediate rounds in secure protocols. Intuitively, given a circuit C, the transformed circuit \({\mathcal T}_{\mathsf{{inter}}}\left( C\right) \) takes AMD encodings of the inputs of C, decodes them, uses the flag combining gadget \(\mathcal {F}_{\mathsf{{comb}}}\) of Construction 1 to combine the error flags generated during decoding, evaluates the circuit C, and outputs AMD encodings of the output, concatenated with the combined error flag.

Construction 2

Given a circuit \(C:\mathbb {F}^n\rightarrow \mathbb {F}^k\), and an AMD encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) that outputs encodings of length \(\hat{n}\left( n\right) \), the circuit \({\mathcal T}_{\mathsf{{inter}}}\left( C\right) :\mathbb {F}^{\hat{n}\left( n\right) }\rightarrow \mathbb {F}^{\sigma }\times \mathbb {F}^{\hat{n}\left( k\right) }\), on input \(\left( \mathbf{{x}}_1,\cdots ,\mathbf{{x}}_n\right) \), operates as follows.

  1. For every \(1 \le i \le n\), computes \((f_i,\mathbf{{x}}'_i) \leftarrow \mathsf{Dec}(\mathbf{{x}}_i)\).

  2. Computes \((y_1,\cdots ,y_k) \leftarrow C(\mathbf{{x}}'_1,\cdots ,\mathbf{{x}}'_n)\).

  3. Computes \(\mathbf{{f}}\leftarrow \mathcal {F}_{\mathsf{{comb}}}\left( f_1,\cdots ,f_n\right) \).

  4. Outputs \((\mathbf{{f}},\mathsf{Enc}(y_1),\cdots ,\mathsf{Enc}(y_k))\).
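
A minimal sketch of \({\mathcal T}_{\mathsf{{inter}}}\) as a higher-order function (Python; the enc, dec, and flag-combining helpers are passed in as parameters, so for instance the illustrative gadget sketched after Observation 7 could be plugged in):

```python
def t_inter(C, enc, dec, f_comb, sigma):
    """Wrap a circuit C so that it consumes and produces AMD-encoded values.

    enc(y) encodes one field element, dec(word) returns (flag, value),
    and f_comb(flags, sigma) combines the decoding flags."""
    def wrapped(encoded_inputs):
        flags, values = [], []
        for word in encoded_inputs:            # step 1: decode every input
            f, v = dec(word)
            flags.append(f)
            values.append(v)
        ys = C(values)                         # step 2: evaluate C
        fvec = f_comb(flags, sigma)            # step 3: combine the flags
        return fvec, [enc(y) for y in ys]      # step 4: re-encode the outputs
    return wrapped

# Exercising the wiring with trivial stand-in encodings:
g = t_inter(C=lambda xs: [sum(xs) % 2],
            enc=lambda y: y, dec=lambda w: (0, w),
            f_comb=lambda flags, sigma: [int(any(flags))] * sigma, sigma=4)
print(g([1, 0, 1]))   # ([0, 0, 0, 0], [0])
```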

Finally, we describe a circuit transformation \(\mathcal {T}_{\mathsf{{fin}}}\) that will be used to replace the output generation rounds. This transformation differs from the transformation \({\mathcal T}_{\mathsf{{inter}}}\) of Construction 2 only in the fact that it does not encode the outputs.

Construction 3

Given a circuit \(C:\mathbb {F}^n\rightarrow \mathbb {F}^k\), and an AMD encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) that outputs encodings of length \(\hat{n}\left( n\right) \), the circuit \({\mathcal T}_{\mathsf{{fin}}}\left( C\right) :\mathbb {F}^{\hat{n}\left( n\right) }\rightarrow \mathbb {F}^{\sigma }\times \mathbb {F}^{k}\), on input \(\left( \mathbf{{x}}_1,\cdots ,\mathbf{{x}}_n\right) \), operates as follows.

  1. Performs Steps 1–3 of Construction 2; let \((y_1,\cdots ,y_k)\) and \(\mathbf{{f}}\) denote the outputs of Steps 2 and 3, respectively.

  2. Outputs \((\mathbf{{f}},y_1,\cdots ,y_k)\).

We will also use the following notation.

Notation 8

Given a circuit \(C:\mathbb {F}^n\rightarrow \mathbb {F}^k\), and an AMD encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) that outputs encodings of length \(\hat{n}\left( n\right) \), we use \(\left( \mathsf{Enc}\circ C\right) :\mathbb {F}^n\rightarrow \mathbb {F}^{\hat{n}\left( k\right) }\) to denote the circuit that on input \(\mathbf{{x}}\in \mathbb {F}^n\), computes \(\left( y_1,\cdots ,y_k\right) \leftarrow C\left( \mathbf{{x}}\right) \), and outputs \(\left( \mathsf{Enc}\left( y_1\right) ,\cdots ,\mathsf{Enc}\left( y_k\right) \right) \).

4 Efficient Additive Correctness Using Correct-Only MPC

In this section we construct a \(2^{-\sigma }\)-additively-correct circuit with \({\text {polylog} }(|{C}|,\sigma )\) overhead. Specifically, for every depth-d arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) we construct a \(2^{-\sigma }\)-additively correct implementation \(\widehat{\mathsf C}\), where \(\left| \widehat{\mathsf C}\right| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma )\), thus proving Theorem 1.

Recall that when \(\widehat{\mathsf C}\) is constructed from a correct-only MPC protocol \(\pi \) then each attack on \(\widehat{\mathsf C}\) can be divided into three “parts”. The first “part” attacks connecting wires between sub-circuits of \(\widehat{\mathsf C}\) (these sub-circuits are \(\mathsf{InpEnc},\mathsf{OutDec}\) and \(\mathsf{NextMSG}_{}^{}\)), and we protect against such attacks by having these sub-circuits operate on AMD codewords. The second “part” attacks the \(\mathsf{NextMSG}_{}^{}\) functions, and we protect against such attacks by replacing \(\mathsf{NextMSG}_{}^{}\) with its \(\epsilon \)-additively correct implementation. Thus, every such attack either affects only few \(\mathsf{NextMSG}_{}^{}\) functions, in which case the correctness of \(\pi \) guarantees that it does not affect the outputs; or it affects many \(\mathsf{NextMSG}_{}^{}\) functions, in which case \(\epsilon \)-additive correctness guarantees that (except with negligible probability) the attack is either detected, or corresponds to an additive attack on the inputs and outputs of \(\mathsf{NextMSG}_{}^{}\). (Additive attacks on the inputs and outputs correspond to the first type of attacks, namely attacks on the connecting wires, which are detected by the AMD encoding scheme.) The third and final “part” attacks the clients, and we protect against such attacks by replacing \(\mathsf{InpEnc},\mathsf{OutDec}\) with their \(2^{-\sigma }\)-additively-correct implementations (e.g., Construction 9 and Appendix A). This is formalized in the following construction, and described in Fig. 3.

Construction 4

Let \(\mathbb {F}\) be a finite field, \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) be an arithmetic circuit over \(\mathbb {F}\), \(\sigma \) be a security parameter, and \(\pi \) be a d-round, \(\sigma \)-correct m-server protocol for computing \({C}\) using only point-to-point channels. We assume (without loss of generality) that every message sent in \(\pi \) consists of exactly s field elements, for some \(s\in \mathbb {N}\). Let \((\mathsf{Enc},\mathsf{Dec})\) be an \((s,\sigma ,2^{-\sigma })\)-AMD encoding scheme that outputs encodings of length \(\hat{n}\left( s\right) \). The circuit \(\widehat{\mathsf C}\) will use the following ingredients.

  1. Input Encoding. Let h denote the number of messages sent by the input client in the first round, namely \(\mathsf{InpEnc}:\mathbb {F}^n \rightarrow \left( \mathbb {F}^s\right) ^h\). Let \(\widehat{\mathsf{InpEnc}}:\mathbb {F}^n \rightarrow \mathbb {F}^{t'} \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) denote the \(2^{-\sigma }\)-additively correct implementation, with \({t'}\) flags, of the circuit \(\left( \mathsf{Enc}\circ \mathsf{InpEnc}\right) :\mathbb {F}^n \rightarrow \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) (as defined in Notation 8).

  2. Message Generation. For every \(1 \le i \le m\), and \(2 \le j \le d-1\), let g (h) denote the number of messages received (sent) by the i’th server in round \(j-1\) (j). That is, \(\mathsf{NextMSG}_{i}^{j}:\left( \mathbb {F}^s\right) ^g \rightarrow \left( \mathbb {F}^s\right) ^h\). Let \(\widehat{\mathsf{NextMSG}_{i}^{j}}:\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^{t} \times \mathbb {F}^{\sigma } \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) denote the \(\epsilon \)-additively correct implementation, with t flags, of the circuit \({\mathcal T}_{\mathsf{{inter}}}\left( \mathsf{NextMSG}_{i}^{j}\right) :\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^{\sigma } \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) (see Construction 2).

  3. Output Generation. Let g denote the number of messages received by the output client in the final round, namely \(\mathsf{OutDec}:\left( \mathbb {F}^s\right) ^g \rightarrow \mathbb {F}^k\). Let \(\widehat{\mathsf{OutDec}}:\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^{t''} \times \mathbb {F}^{\sigma } \times \mathbb {F}^k\) denote the \(2^{-\sigma }\)-additively correct implementation, with \(t''\) flags, of the circuit \({\mathcal T}_{\mathsf{{fin}}}\left( \mathsf{OutDec}\right) :\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^{\sigma } \times \mathbb {F}^k\) (see Construction 3).

  4. Circuit Construction. The circuit \(\widehat{\mathsf C}\), on input \(\mathbf{{x}}\in \mathbb {F}^n\):

    (a) Emulates \(\pi \), with x as the input of the client, and where \(\widehat{\mathsf{InpEnc}}\), \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) and \(\widehat{\mathsf{OutDec}}\) of Steps 1–3 above (connected in the natural way) replace \(\mathsf{InpEnc}\), \(\mathsf{NextMSG}_{i}^{j}\) and \(\mathsf{OutDec}\). That is, for every round \(1 \le j \le d\), if server \(S_i\) sends a message to server \(S_{i'}\), then the corresponding output of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) is wired to the corresponding input of \(\widehat{\mathsf{NextMSG}_{i'}^{j+1}}\). Denote the output of the client in the above execution by \(\mathbf{{z}}\).

    (b) For every \(1 \le i \le m\), and every \(1 \le j \le d\), let \(f_{i,1}^{\prime j},\cdots ,f_{i,t}^{\prime j}\) be the first t outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\), and let \(f_{i,1}^{j},\cdots ,f_{i,\sigma }^{j}\) be the next \(\sigma \) outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\). (The \(f_{i,w}^{\prime j}\)’s are the flags of the \(\epsilon \)-correct implementation, and the \(f_{i,w}^{j}\)’s are the flags generated during the AMD decoding.)

    (c) Let \(f_{1}^{\prime 1},\cdots ,f_{t'}^{\prime 1}\) be the first \(t'\) outputs of \(\widehat{\mathsf{InpEnc}}\). (These are the flags of the \(2^{-\sigma }\)-correct implementation.)

    (d) Let \(f_{1}^{\prime d},\cdots ,f_{t''}^{\prime d}\) be the first \(t''\) outputs of \(\widehat{\mathsf{OutDec}}\) and let \(f_{1}^{d},\cdots ,f_{\sigma }^{d}\) be the next \(\sigma \) outputs of \(\widehat{\mathsf{OutDec}}\). (The \(f_{i}^{\prime d}\)’s are the flags of the \(2^{-\sigma }\)-correct implementation, and the \(f_{i}^{d}\)’s are the flags generated during the AMD decoding.)

    (e) For every \(1 \le w' \le \sigma \), compute \(f''_{w'} \leftarrow \sum _{i=1}^m \sum _{j=2}^{d-1} \left( \sum _{w=1}^{t} f^{\prime j}_{i,w} \cdot r_{i,j,w,w'} + \sum _{w=1}^{\sigma } f^{ j}_{i,w} \cdot r_{i,j,t+w,w'} \right) + \sum _{w=1}^{t''} f^{\prime d}_{w} \cdot r_{1,d,w,w'} + \sum _{w=1}^{\sigma } f^{ d}_{w} \cdot r_{1,d,t+w,w'}+ \sum _{w=1}^{t'} f^{\prime 1}_{w} \cdot r_{1,1,w,w'} \) where \(r_{i,j,w,w'} \in _R \mathbb {F}\).

    (f) Output \(\mathbf{{z}}+\sum _{w=1}^\sigma f''_w\cdot \mathbf{{r}}'_w\) where \(\mathbf{{r}}'_w \in _R \mathbb {F}^k\).

Fig. 3. Components of Construction 4

We now analyze the properties of Construction 4. The following notation will be useful.

Notation 9

We denote the ingredients of Construction 4 as follows.

  • We use \(\mathsf{InpEnc}^{\prime }\) to denote the circuit \(\left( \mathsf{Enc}\circ \mathsf{InpEnc}\right) \) obtained in Step 1.

  • For every \(1 \le i \le m\), and \(2 \le j \le d-1\), we use \(\mathsf{NextMSG}_{i}^{\prime j}\) to denote the circuit \({\mathcal T}_{\mathsf{{inter}}}\left( \mathsf{NextMSG}_{i}^{j}\right) \) obtained in Step 2.

  • We use \(\mathsf{OutDec}^{\prime }\) to denote the circuit \({\mathcal T}_{\mathsf{{fin}}}\left( \mathsf{OutDec}\right) \) obtained in Step 3.

The next theorem shows that Construction 4 produces a \(2^{-\Omega (\sigma )}\)-additively-correct implementation.

Theorem 10

Let \(\sigma \) be a security parameter, \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) be an arithmetic circuit, and \(\pi \) be an m-server, d-round protocol for \(\left( \sigma ,2^{-\sigma }\right) \)-correctly computing \({C}\). Then the circuit \(\widehat{\mathsf C}\) obtained by applying Construction 4 to \({C}\) is a \(2^{-\Omega (\sigma )}\)-additively-correct implementation of \({C}\).

Proof

The completeness property of \(\widehat{\mathsf C}\) immediately follows from Construction 4, the correctness of \(\pi \), and the perfect completeness of the underlying AMD code. We now proceed to proving additive correctness. Let \( \mathbf{A} \) be an additive attack on \(\widehat{\mathsf C}\), and let \( \mathbf{A} ^\mathsf{out}\) denote the attacks on the outputs of \(\widehat{\mathsf C}\) as specified by \( \mathbf{A} \). Let \( \mathbf{A} _{\mathsf{InpEnc}}\), \( \mathbf{A} _{\mathsf{OutDec}{}}\) denote the restrictions of \( \mathbf{A} \) to the wires of \(\widehat{\mathsf{InpEnc}}\) and \(\widehat{\mathsf{OutDec}}\) respectively. Additionally, for every \(1 \le i \le m\) and every \(2 \le j \le d-1\) let \( \mathbf{A} ^j_i\) denote the restriction of \( \mathbf{A} \) to \(\widehat{\mathsf{NextMSG}_{i}^{j}}\). Let \((\mathbf{{a}}^{\mathsf {in}, 1}_{},\mathbf{{a}}^{\mathsf {out}, 1}_{})\) and \((\mathbf{{a}}^{\mathsf {in}, d}_{},\mathbf{{a}}^{\mathsf {out}, d}_{})\) be the ideal additive attacks on the inputs and outputs of \(\widehat{\mathsf{InpEnc}}\) and \(\widehat{\mathsf{OutDec}}\) corresponding to \( \mathbf{A} _{\mathsf{InpEnc}}\), \( \mathbf{A} _{\mathsf{OutDec}}\). Similarly, for every \(1 \le i \le m\) and every \(2 \le j \le d-1\), let \(\mathbf{{a}}^{\mathsf {in}, j}_{i}\), and \(\mathbf{{a}}^{\mathsf {out}, j}_{i}\) be the ideal additive attacks on the inputs and outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) corresponding to \( \mathbf{A} ^j_i\). Define \(\mathbf{{a}}^{\mathsf {in}}= \mathbf{{a}}^{\mathsf {in}, 1}_{}\) and \(\mathbf{{a}}^{\mathsf {out}}= \mathbf{{a}}^{\mathsf {out}, d}_{} + \mathbf{A} ^\mathsf{out}\). We claim that for every input \(\mathbf{{x}}\) it holds that

$$\begin{aligned} \Pr [\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}}) \notin \mathsf{ERR}\cup \{(0^\sigma ,{C}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}] \le 2^{-\Omega (\sigma )} \end{aligned}$$

where \(\mathsf{ERR}= \left( \mathbb {F}^\sigma {\setminus } \{0^\sigma \}\right) \times \mathbb {F}^k\).

Indeed, let \(\mathbf{{x}}\in \mathbb {F}^n\) be an input to \(\widehat{\mathsf C}\), and define \(P_\mathsf{bad}\) as the event that \(\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}}) \notin \mathsf{ERR}\cup \{(0^\sigma ,{C}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}\), namely

$$\begin{aligned} \Pr [\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}}) \notin \mathsf{ERR}\cup \{(0^\sigma ,{C}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}] = \Pr \left[ P_\mathsf{bad}\right] . \end{aligned}$$

Next, denote by \(P_f\) the event that

$$\begin{aligned} \bigwedge _{i=1}^m \bigwedge _{j=2}^{d-1} \bigwedge _{w=1}^t (f^{\prime j \mathbf{A} }_{i,w,\mathbf{{x}}} = f^{j \mathbf{A} }_{i,w,\mathbf{{x}}} = 0 ) \ \bigwedge \ \bigwedge _{w=1}^{t'} f^{\prime 1}_w=0 \ \bigwedge \ \bigwedge _{w=1}^{t''} f^{\prime d}_w = 0. \end{aligned}$$

Notice that by construction of \(\widehat{\mathsf C}\) we obtain that

$$\begin{aligned} \Pr [\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}}) \notin \mathsf{ERR}\cup \{(0^\sigma ,{C}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}] \le 2^{-\Omega (\sigma )} + \Pr \left[ P_\mathsf{bad} \wedge P_f\right] . \end{aligned}$$

We proceed by defining the event \(P_\mathsf{OK}^{1,1}\) as \( \widehat{\mathsf{InpEnc}}^ \mathbf{A} (\mathbf{{x}}) \in \mathsf{ERR}\cup \{ \mathsf{InpEnc}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}, 1}_{})+\mathbf{{a}}^{\mathsf {out}, 1}_{} \} \) and \(P_\mathsf{OK}^{d,d}\) as \( \widehat{\mathsf{OutDec}}^ \mathbf{A} (\mathbf{{y}}^ \mathbf{A} _\mathbf{{x}}) \in \mathsf{ERR}\cup \{ \mathsf{OutDec}(\mathbf{{y}}^ \mathbf{A} _\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}, d}_{})+\mathbf{{a}}^{\mathsf {out}, d}_{} \}, \) where \(\mathbf{{y}}^ \mathbf{A} _\mathbf{{x}}\) is the random variable corresponding to the messages received by the client from the servers during the last round of \(\pi \) inside \(\widehat{\mathsf C}^ \mathbf{A} (\mathbf{{x}})\). We notice that by the \(2^{-\sigma }\)-correctness of \(\widehat{\mathsf{InpEnc}}\) and \(\widehat{\mathsf{OutDec}}\) it holds that

$$\begin{aligned} \Pr \left[ P_\mathsf{bad} \wedge P_f\right] \le 2^{-\Omega (\sigma )}+ \Pr \left[ P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \right] . \end{aligned}$$

Next, for every round \(2 \le j \le d-1\) and party \(1 \le i \le m\), denote by \(\mathsf{In}_i^j\) the set of servers which send messages to the ith server during the jth round, and denote by \(\mathbf{{a}}^{\mathsf {in}, j}_{i,i'}\) the ideal additive attacks on the inputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) which correspond to the message received by server i from server \(i'\) during the jth round. Similarly, denote by \(\mathsf{Out}_i^j\) the set of servers to which the ith server sends messages during the jth round, and denote by \(\mathbf{{a}}^{\mathsf {out}, j}_{i,i'}\) the ideal additive attacks on the outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) which correspond to the message sent by server i to server \(i'\) during the jth round. In addition, we assume without loss of generality that the client sends a message to all the servers during the first round, and receives a message from all the servers during the last round. Finally, for every server \(1 \le i \le m\), we denote by \(\mathbf{{a}}^{\mathsf {out}, 1}_{i}\) the restriction of \(\mathbf{{a}}^{\mathsf {out}, 1}_{}\) to the messages that the client sends to the ith server during the first round and by \(\mathbf{{a}}^{\mathsf {in}, d}_{i}\) the restriction of \(\mathbf{{a}}^{\mathsf {in}, d}_{}\) to the messages that the client receives from the ith server during the dth round. Likewise, we denote by \(\mathbf{{a}}^{\mathsf {in}, 2}_{i}\) the ideal additive attack on the messages received by the ith server from the client, and by \(\mathbf{{a}}^{\mathsf {out}, d-1}_{i'}\) the ideal additive attack on the messages sent by the \(i'\)th server to the client.

For any \(1 \le i,i' \le m\) and \(1 \le j \le d\), we say that a tuple \((i',i,j)\) is problematic if one of the following three conditions holds.

  1.

    Input Corruption. It holds that \(\mathbf{{a}}^{\mathsf {in}, 2}_{i} + \mathbf{{a}}^{\mathsf {out}, 1}_{i} \ne 0\) and \(i'=j=1\).

  2.

    Intermediate Corruption. It holds that \(\mathbf{{a}}^{\mathsf {in}, j}_{i,i'} +\mathbf{{a}}^{\mathsf {out}, j-1}_{i',i} \ne 0\).

  3.

    Output Corruption. It holds that \(\mathbf{{a}}^{\mathsf {out}, d-1}_{i'} + \mathbf{{a}}^{\mathsf {in}, d}_{i'} \ne 0\) and \(i=j=d\).

Next, we define the set \(\mathcal {A}= \{(i',i,j): \text {the tuple (i',i,j) is problematic}\}\) and we split the proof into two cases.

  • Case 1: \(|\mathcal {A}| \ge \sigma \). Intuitively, in this case a large portion of \(\widehat{\mathsf C}\) was corrupted. We show that in this case \(\widehat{\mathsf C}\) will almost always abort the computation by setting at least one of the flags to a non-zero value, namely the probability of an incorrect output (i.e., not in \(\mathsf{ERR}\cup \{(0^\sigma ,{C}(\mathbf{{x}}+\mathbf{{a}}^{\mathsf {in}})+\mathbf{{a}}^{\mathsf {out}})\}\)) is low. We denote the random variables describing the messages exchanged during the evaluation of \(\widehat{\mathsf C}^ \mathbf{A} \) on input \(\mathbf{{x}}\) as follows: for every \(1 \le i \le m\) and \(2 \le j \le d-2\), \(\widehat{y}^{ \mathbf{A} , j}_{i,i',\mathbf{{x}}}\) corresponds to the message sent by the ith server to the \(i'\)th server in round j; \(\widehat{y}^{ \mathbf{A} , 1}_{i,\mathbf{{x}}}\) corresponds to the messages sent by the client to the ith server in the first round; and \(\widehat{y}^{ \mathbf{A} , d-1}_{i,\mathbf{{x}}}\) corresponds to the message sent by the ith server to the client in round \(d-1\). Next, for any \(1 \le i \le m\) and \(2 \le j \le d-1\) denote by \(P_\mathsf{OK}^{i,j}\) the event that \( \widehat{\mathsf{NextMSG}_{i}^{ \mathbf{A} , j}}\left( \left( \widehat{y}^{ \mathbf{A} , j-1}_{i',i,\mathbf{{x}}}\right) _{i' \in \mathsf{In}^{j}_i}\right) \) is in \( \mathsf{ERR}\cup \left\{ \left( 0^t, {\mathsf{NextMSG}_{i}^{\prime j}}\left( \left( \widehat{y}^{ \mathbf{A} , j-1}_{i',i,\mathbf{{x}}}\right) _{i' \in \mathsf{In}^{j}_i} + \mathbf{{a}}^{\mathsf {in}, j}_{i} \right) + \mathbf{{a}}^{\mathsf {out}, j}_{i}\right) \right\} \), where \(\mathsf{ERR}= \left( \mathbb {F}^t{\setminus } \{0^t\}\right) \times \mathbb {F}^{o_i^j}\), and \(o_{i}^j\) is the output length of \({\mathsf{NextMSG}_{i}^{\prime j}}\).

    Next, notice that for every tuple \((i',i,j)\) the randomness of \(\widehat{\mathsf{NextMSG}_{i}^{ \mathbf{A} , j}}\) is independent of the randomness of \(\widehat{\mathsf{NextMSG}_{i'}^{ \mathbf{A} , j-1}}\). Thus, it holds that \( \Pr \left[ P_\mathsf{OK}^{i',j-1} \wedge P_\mathsf{OK}^{i,j} \right] \ge (1-\epsilon )^2 \), yielding \( \Pr \left[ \overline{ P_\mathsf{OK}^{i',j-1} \wedge P_\mathsf{OK}^{i,j}} \right] \le 1-(1-\epsilon )^2. \) Next, taking this bound across all the problematic tuples in \(\mathcal {A}\) (of which there are at least \(\sigma \) in this case), we obtain that \(\Pr \left[ P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \right] \) is at most

    $$\begin{aligned} \left( 1-\left( 1-\epsilon \right) ^2\right) ^\sigma + \Pr \left[ \begin{array}{c} P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \wedge \\ \left( \exists (i',i,j) \in \mathcal {A}: (P_\mathsf{OK}^{i',j-1} \wedge P_\mathsf{OK}^{i,j}) \right) \end{array} \right] . \end{aligned}$$

    Finally, if \(P_\mathsf{OK}^{i',j-1} \wedge P_\mathsf{OK}^{i,j}\) holds for some problematic tuple \((i',i,j) \in \mathcal {A}\), then there is a non-zero additive attack on the wires between server \(i'\) (or the client in case \(j=1\)) and server i (again, or the client in case \(j=d\)) during the jth round. Thus, by the additive soundness of \((\mathsf{Enc},\mathsf{Dec})\) we obtain that except with probability \(2^{-\sigma }\), \((f^j_{i,1},\cdots ,f^j_{i,\sigma }) \ne 0\), namely \(P_f\) does not hold. Consequently,

    $$\begin{aligned} \Pr \left[ \begin{array}{c} P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \wedge \\ \left( \exists (i',i,j) \in \mathcal {A}: (P_\mathsf{OK}^{i',j-1} \wedge P_\mathsf{OK}^{i,j}) \right) \end{array} \right] \le 2^{-\Omega (\sigma )}. \end{aligned}$$
  • Case 2: \(|\mathcal {A}| < \sigma \). Notice that having fewer than \(\sigma \) problematic tuples implies that for the protocol \(\pi \) emulated inside \(\widehat{\mathsf C}\), the additive attack \( \mathbf{A} \) corrupts fewer than \(\sigma \) parties. In this case, except with probability \(2^{-\sigma }\), the protocol \(\pi \) correctly computes \({C}\). Thus, in this case

    $$\begin{aligned} \Pr \left[ P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \right] \le 2^{-\Omega (\sigma )}. \end{aligned}$$

       \(\square \)
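In summary, the chain of inequalities established in the proof can be recapped as follows, where \(\epsilon \) is the per-sub-circuit bound used in Case 1 (i.e., each event \(P_\mathsf{OK}^{i,j}\) holds with probability at least \(1-\epsilon \)):

$$\begin{aligned} \Pr \left[ P_\mathsf{bad}\right]&\le 2^{-\Omega (\sigma )} + \Pr \left[ P_\mathsf{bad} \wedge P_f\right] \le 2^{-\Omega (\sigma )} + \Pr \left[ P_\mathsf{bad} \wedge P_f \wedge P_\mathsf{OK}^{1,1} \wedge P_\mathsf{OK}^{d,d} \right] \\&\le 2^{-\Omega (\sigma )} + \left( 1-\left( 1-\epsilon \right) ^2\right) ^\sigma = 2^{-\Omega (\sigma )}, \end{aligned}$$

where the final equality holds whenever \(\epsilon \) is bounded away from 1 by a constant (as is the case, e.g., for the constant \(\epsilon \) of Theorem 13 below).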

We show that for an appropriate choice of parameters, Construction 4 yields a \(2^{-\Omega (\sigma )}\)-additively-correct implementation with only a polylogarithmic size overhead. This is formalized in the next theorem.

Theorem 11

For any depth-d arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\), and any security parameter \(\sigma \), there exists a \(2^{-\Omega (\sigma )}\)-additively-correct implementation \(\widehat{\mathsf C}\) of \({C}\) where \(|\widehat{\mathsf C}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma ).\)

We first state several results regarding AMD encoding schemes, which will be used in the proof.

Asymptotically optimal constructions of AMD encoding schemes have been presented by [3, 8]. In fact, [3] consider a slightly weaker definition of AMD codes which guarantees that \(\Pr [\mathsf{Dec}(\mathsf{Enc}(\mathbf{{x}}) + \mathbf{{a}}) \notin \mathsf{ERR}\cup \{(0,\mathbf{{x}})\}] \le \epsilon \), allowing for \(\mathsf{ERR}\) on some inputs and correct output on others (see Definition 7 below). However, their construction actually possesses the stronger security property of Definition 4.

Theorem 12

(Implicit in [3], Corollary 1). For any \(n,\sigma \in \mathbb {N}\), and field \(\mathbb {F}\), there exists a pair of families of circuits \((\mathsf{Enc},\mathsf{Dec})\) over \(\mathbb {F}\) that is an \((n,\sigma ,\frac{1}{|\mathbb {F}|^\sigma })\)-AMD encoding scheme with encodings of length \(n+\sigma \). Moreover, the size of \(\mathsf{Enc}\) and \(\mathsf{Dec}\) is \(\widetilde{O}(n +\sigma )\).

Theorem 13

(Implicit in [10]). There exists a constant \(\epsilon \in (0,1)\) such that for any field \(\mathbb {F}\) and arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) there exists a circuit \(\widehat{\mathsf C}:\mathbb {F}^n \rightarrow \mathbb {F}\times \mathbb {F}^k\) which is an \(\epsilon \)-additively-correct implementation of \({C}\). Moreover, \(\left| \widehat{\mathsf C}\right| =O\left( \left| {C}\right| \right) \).

Proof

(of Theorem 11). Apply Construction 4 to \({C}\) using the AMD code of Theorem 12, the \(\epsilon \)-additively-correct construction from Theorem 13, and the \(\sigma \)-server protocol \(\pi \) from Theorem 6. To obtain the \(2^{-\sigma }\)-additively-correct implementations \(\widehat{\mathsf{InpEnc}}\) and \(\widehat{\mathsf{OutDec}}\) used in Steps 1 and 3 of Construction 4, we use an additively-correct circuit compiler \(\mathsf{{Comp}} ^{\mathsf{In}}\) that on input a circuit C outputs a circuit \(\widehat{\mathsf C}\) such that \(\left| \widehat{\mathsf C}\right| =\sigma \cdot |{C}|\) (e.g., Construction 9 of Appendix A). Since \(\pi \) \((\sigma /10)\)-correctly computes \({C}\), we obtain that \(\widehat{\mathsf C}\) is a \(2^{-\Omega (\sigma )}\)-additively-correct implementation of \({C}\).

Next, we proceed to analyze the size of \(\widehat{\mathsf C}\). By the construction of \(\widehat{\mathsf C}\) we have that \(|\widehat{\mathsf C}| = |\widehat{\mathsf{InpEnc}}|+|\widehat{\mathsf{OutDec}}|+ \sum _{i=1}^\sigma \sum _{j=1}^d |\widehat{\mathsf{NextMSG}_{i}^{j}}|\). From Theorem 6 we obtain that \(|{\mathsf{InpEnc}}|+|{\mathsf{OutDec}}|\) is \({\text {poly}}(n,k,\sigma )\). Thus, when \(\mathsf{InpEnc}\) and \(\mathsf{OutDec}\) are implemented using Construction 9 (Appendix A), \(|\widehat{\mathsf{InpEnc}}|+|\widehat{\mathsf{OutDec}}|\) is also \({\text {poly}}(n,k,\sigma )\). We now proceed to analyze \(\sum _{i=1}^\sigma \sum _{j=1}^d |\widehat{\mathsf{NextMSG}_{i}^{j}}|\).

We begin by noticing that in each round of \(\pi \), each server sends messages containing a total of \({\text {poly}}(n,k,\log |{C}|)\) field elements. Thus, by having \(\mathsf{NextMSG}_{}^{\prime }\) encode every message sent during the execution of \(\pi \) with the AMD codes from Theorem 12, we obtain that the circuit size of every \(\mathsf{NextMSG}_{}^{\prime }\) function increases by an additive term which is \({\text {poly}}(n,k,\log |{C}|,\sigma )\) compared to the corresponding \(\mathsf{NextMSG}_{}^{}\). Next, since the overall circuit size of all the \(\mathsf{NextMSG}_{}^{}\) functions is \(|{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(\sigma ,d,n,k,\log |{C}|)\) and since \(|\widehat{\mathsf{NextMSG}_{}^{}}| = O(|\mathsf{NextMSG}_{}^{\prime }|)\), we obtain that the total circuit size of all the \(\widehat{\mathsf{NextMSG}_{}^{}}\) circuits inside \(\widehat{\mathsf C}\) is also \(|{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(\sigma ,d,n,k,\log |{C}|)\). \(\square \)

Remark 3

The proof of Theorem 11 uses an ad-hoc “feasibility” construction to achieve \({\text {polylog} }\left( \sigma \right) \) overhead. However, it is possible to improve the simplicity, and concrete efficiency, of the construction by replacing the feasibility construction with simpler gadgets implementing the input encoder and output decoder. We now outline a more direct construction (which matches the complexity of Theorem 11). We begin by observing that for the protocol of Theorem 6, we can assume (without loss of generality) that \(\mathsf{InpEnc}(\mathbf{{x}}) = (\mathbf{{x}},\cdots ,\mathbf{{x}})\), and \(\mathsf{OutDec}(\mathbf{{y}}_1,\cdots ,\mathbf{{y}}_m)\) outputs \((0^\sigma ,\mathbf{{y}}_1)\) if \(\mathbf{{y}}_1=\cdots =\mathbf{{y}}_m\), otherwise it outputs a random value in \((\mathbb {F}^\sigma {\setminus } \{0^\sigma \}) \times \mathbb {F}^k\). Next, we implement \(\widehat{\mathsf{InpEnc}}\) and \(\widehat{\mathsf{OutDec}}\) directly using the following simple gadgets.

  • Implementing \(\varvec{\widehat{\mathsf{InpEnc}}}{} \mathbf{{.}}\) We define \(\widehat{\mathsf{InpEnc}}(\mathbf{{x}}) = (\mathsf{Enc}(\mathbf{{x}}),\cdots ,\mathsf{Enc}(\mathbf{{x}}))\), where \(\mathsf{Enc}\) is the encoding procedure of a \(2^{-\sigma }\)-robust AMD code (as in Definition 5). The stronger robustness property guarantees the existence of a single consistent value such that (with high probability) every server either decodes to it, or aborts.

  • Implementing \(\varvec{\widehat{\mathsf{OutDec}}}{} \mathbf{{.}}\) We modify each server to compute a MAC value of its outputs. In addition, \({C}\) is evaluated in the clear: \(\mathbf{{z}}\leftarrow {C}(\mathbf{{x}})\), and the output \(\mathbf{{z}}\) is MACed to obtain \(\widetilde{\mathbf{{z}}}\). Finally, \(\widehat{\mathsf{OutDec}}\) contains a gadget that compares all MACed outputs of the servers to \(\widetilde{\mathbf{{z}}}\), and outputs \(\mathbf{{z}}\) if the test passes; otherwise it outputs \(\mathsf{ERR}\).
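As a rough illustration of the second gadget, the following Python sketch (ours, not part of the construction; the remark leaves the concrete MAC unspecified, so the tags are passed in as opaque values) compares the servers' MACed outputs against the tag \(\widetilde{\mathbf{{z}}}\) of the output computed in the clear, releases \(\mathbf{{z}}\) only if all of them agree, and otherwise outputs a random element of \((\mathbb {F}^\sigma {\setminus } \{0^\sigma \}) \times \mathbb {F}^k\), mirroring the behavior of \(\mathsf{OutDec}\) described above.

```python
import secrets

SIGMA = 40  # number of flag symbols in an ERR output (illustrative choice)

def outdec_gadget(z, z_tag, server_tags, field_size=2):
    """Sketch of the comparison gadget inside OutDec-hat (Remark 3).

    z           -- output of C computed in the clear (list of k field elements)
    z_tag       -- MAC tag of z (the MAC itself is left unspecified here)
    server_tags -- the MACed outputs received from the servers
    Returns (flags, value): all-zero flags and value z if every server tag
    matches z_tag; otherwise a random non-zero flag vector and a random value,
    i.e. an ERR output.
    """
    if all(tag == z_tag for tag in server_tags):
        return ([0] * SIGMA, list(z))
    flags = [secrets.randbelow(field_size) for _ in range(SIGMA)]
    while not any(flags):  # resample until the flag vector is non-zero
        flags = [secrets.randbelow(field_size) for _ in range(SIGMA)]
    value = [secrets.randbelow(field_size) for _ in range(len(z))]
    return (flags, value)
```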

5 From Additive Correctness to Additive Security via Passive-Secure MPC

In this section we combine additively-correct circuits with passive-secure MPC protocols to construct binary additively-secure circuits with a negligible error, thus proving Theorem 2.

Recall that (as described in Sect. 1.1.2) we construct the additively-secure implementation \(\widehat{\mathsf C}\) of C from a passive-secure MPC protocol \(\pi \). More specifically, the inputs of parties in \(\pi \) are additive secret-shares of the input of C, and \(\pi \) evaluates the function that: (1) reconstructs the input from the secret shares; (2) evaluates C; and (3) outputs an additive secret-sharing of the output.

Consequently, every additive attack on \(\widehat{\mathsf C}\) can be divided into two “parts”. The first “part” targets the wires connecting different sub-circuits \(\mathsf{NextMSG}_{}^{}\) of \(\widehat{\mathsf C}\), and we protect against such attacks by having these sub-circuits operate on AMD codewords. The second “part” modifies the internal computations of the \(\mathsf{NextMSG}_{}^{}\) functions, and we protect against such attacks by replacing each \(\mathsf{NextMSG}_{}^{}\) with its \(2^{-\sigma }\)-additively-correct implementation. Thus, the resultant \(\widehat{\mathsf C}\) is a \(2^{-\Omega \left( \sigma \right) }\)-additively correct implementation of C, where every attack is with overwhelming probability either “harmless” (namely, corresponds to an additive attack on the inputs and output of C), or causes the output to be random. Moreover, as we argued in Sect. 1.1.2, the probability that the output is random is independent of the inputs.

We start by defining the circuit \({C}_\mathsf{AUG }\), which implements the functionality computed by \(\pi \) (namely, emulates C on secret shares).

Construction 5

Let \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) be an arithmetic circuit, and \(m\in \mathbb {N}\). The circuit \({C}_\mathsf{AUG }\), on inputs \((\mathbf{{x}}_1,\cdots ,\mathbf{{x}}_m) \in \left( \mathbb {F}^n\right) ^m \), performs the following.

  1.

    Computes \(\mathbf{{x}}\leftarrow \sum _{i=1}^m \mathbf{{x}}_i\), and \(\mathbf{{y}}\leftarrow {C}(\mathbf{{x}})\). (This step reconstructs the input to C from the secret shares, and evaluates C.)

  2.

    Generates \(\mathbf{{y}}_1,\cdots ,\mathbf{{y}}_{m-1} \in \mathbb {F}^k\) uniformly at random, and computes \(\mathbf{{y}}_m \leftarrow \mathbf{{y}}- \sum _{i=1}^{m-1} \mathbf{{y}}_i\). (\(\mathbf{{y}}_1,\cdots ,\mathbf{{y}}_m\) is an additive secret sharing of the output \(\mathbf{{y}}\).)

  3.

    Outputs \((\mathbf{{y}}_1,\cdots ,\mathbf{{y}}_m)\).
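To spell out the three steps, the following Python sketch (our own illustration; the construction itself is field-generic, and here the field size is a parameter with \(p=2\) giving the binary case) reconstructs the input from its additive shares, evaluates \({C}\), and outputs a fresh additive sharing of the result.

```python
import secrets

P = 2  # field size; P = 2 gives the binary case emphasized in this paper

def c_aug(x_shares, C):
    """Sketch of C_AUG: x_shares is a list of m additive input shares (each a
    list of n field elements) and C is the underlying circuit, modelled as a
    function from F^n to F^k. Returns m additive shares of C(sum of shares)."""
    m = len(x_shares)
    n = len(x_shares[0])
    # Step 1: reconstruct the input from the shares and evaluate C.
    x = [sum(share[i] for share in x_shares) % P for i in range(n)]
    y = C(x)
    k = len(y)
    # Step 2: pick y_1, ..., y_{m-1} uniformly at random and set y_m so that
    # the shares sum to y.
    y_shares = [[secrets.randbelow(P) for _ in range(k)] for _ in range(m - 1)]
    y_last = [(y[i] - sum(sh[i] for sh in y_shares)) % P for i in range(k)]
    y_shares.append(y_last)
    # Step 3: output the shares.
    return y_shares
```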

Next, we use \({C}_\mathsf{AUG }\) to construct the circuit \(\widehat{\mathsf C}\), see also Fig. 4.

Construction 6

Let \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\) be an arithmetic circuit over a finite field \(\mathbb {F}\), \(\sigma \) be a security parameter, and \(\pi \) be a d-round, t-private, m-party protocol for computing the circuit \({C}_\mathsf{AUG }\) of Construction 5, using only point-to-point channels. We assume (without loss of generality) that every message sent in \(\pi \) consists of exactly s field elements, for some \(s\in \mathbb {N}\). Let \((\mathsf{Enc},\mathsf{Dec})\) be an \((s,\sigma ,2^{-\sigma })\)-AMD encoding scheme that outputs encodings of length \(\hat{n}\left( s\right) \), and \(\mathsf{Dec}\) outputs \(\sigma \) flags during decoding. The circuit \(\widehat{\mathsf C}\) will use the following ingredients.

  1.

    Protecting the first round. For every \(1 \le i \le m\), assume that party \(P_i\) sends h messages in the first round, namely \(\mathsf{NextMSG}_{i}^{1}:\mathbb {F}^n \rightarrow \left( \mathbb {F}^s\right) ^h\). Let \(\widehat{\mathsf{NextMSG}_{i}^{1}}:\mathbb {F}^n \rightarrow \mathbb {F}^t \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) be the \(2^{-\sigma }\)-additively correct implementation, with t flags, of the circuit \(\left( \mathsf{Enc}\circ \mathsf{NextMSG}_{i}^{1}\right) :\mathbb {F}^n \rightarrow \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) (see Construction 3).

  2.

    Protecting middle rounds. For every \(1 \le i \le m\), and \(2 \le j \le d-1\), assume that \(P_i\) receives g messages in round \(j-1\) and sends h messages in round j, namely \(\mathsf{NextMSG}_{i}^{j}:\left( \mathbb {F}^s\right) ^g \rightarrow \left( \mathbb {F}^s\right) ^h\). Let \(\widehat{\mathsf{NextMSG}_{i}^{j}}:\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^t \times \mathbb {F}^\sigma \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) be the \(2^{-\sigma }\)-additively correct implementation, with t flags, of the circuit \({\mathcal T}_{\mathsf{{inter}}}\left( \mathsf{NextMSG}_{i}^{j}\right) :\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^\sigma \times \left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^h\) (see Construction 2).

  3.

    Protecting the last round. For every \(1 \le i \le m\) assume that \(P_i\) receives g messages in the final round, namely \(\mathsf{NextMSG}_{i}^{d}:\left( \mathbb {F}^s\right) ^g \rightarrow \mathbb {F}^k\). Let \(\widehat{\mathsf{NextMSG}_{i}^{d}}:\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^t \times \mathbb {F}^\sigma \times \mathbb {F}^k\) be the \(2^{-\sigma }\)-additively correct implementation, with t flags, of the circuit \({\mathcal T}_{\mathsf{{fin}}}\left( \mathsf{NextMSG}_{i}^{d}\right) :\left( \mathbb {F}^{\hat{n}\left( s\right) }\right) ^g \rightarrow \mathbb {F}^t \times \mathbb {F}^k\) (see Construction 3).

  4.

    Circuit construction. The circuit \(\widehat{\mathsf C}\) on input \(\mathbf{{x}}\) performs the following.

    (a):

    Generate \(\mathbf{{x}}_1,\cdots ,\mathbf{{x}}_{m-1} \in \mathbb {F}^n\) uniformly at random and compute \(\mathbf{{x}}_m \leftarrow \mathbf{{x}}- \sum _{i=1}^{m-1} \mathbf{{x}}_i\).

    (b):

    Emulates \(\pi \) with \(\mathbf{{x}}_i\) as the input of party \(P_i\), where the \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) described in Steps 1–3 (connected in the natural way) replace the \(\mathsf{NextMSG}_{i}^{j}\). That is, for every round \(1 \le j \le d-1\), if party \(P_i\) sends a message to party \(P_{i'}\), we wire the corresponding output of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) to the corresponding input of \(\widehat{\mathsf{NextMSG}_{i'}^{j+1}}\).

    (c):

    Let \(\mathbf{{z}}_i\) denote \(P_i\)’s output in the above execution. Compute \(\mathbf{{z}}\leftarrow \sum _{i=1}^m \mathbf{{z}}_i\).

    (d):

    For every \(1 \le i \le m\), and \(2 \le j \le d\), let \(f_{i,1}^{\prime j},\cdots ,f_{i,t}^{\prime j}\) denote the first t outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\), and \(f_{i,1}^{j},\cdots ,f_{i,\sigma }^{j}\) denote the \((t+1)\)-th through \((t+\sigma )\)-th outputs of \(\widehat{\mathsf{NextMSG}_{i}^{j}}\). (The \(f_{i,w}^{\prime j}\)’s are the flags of the \(2^{-\sigma }\)-correct implementation, and the \(f_{i,w}^{j}\)’s are the flags generated during the AMD decoding.)

    (e):

    For every \(1 \le i \le m\), let \(f_{i,1}^{\prime 1},\cdots ,f_{i,t}^{\prime 1}\) denote the first t outputs of \(\widehat{\mathsf{NextMSG}_{i}^{1}}\). (These are the flags of the \(2^{-\sigma }\)-correct implementation.)

    (f):

    For every \(1 \le w' \le \sigma \), compute \(f''_{w'} \leftarrow \sum _{i=1}^m \sum _{j=2}^{d} \left( \sum _{w=1}^{t} f^{\prime j}_{i,w} \cdot r_{i,j,w} + \sum _{w=1}^{\sigma } f^{ j}_{i,w} \cdot r_{t+i,j,w} \right) + \sum _{i=1}^m \sum _{w=1}^t f^{\prime 1}_{i,w} \cdot r_{i,d,w}\), where \(r_{i,j,w} \in _R \mathbb {F}\).

    (g):

    Output \(\mathbf{{z}}+\sum _{w=1}^\sigma f''_w\cdot \mathbf{{r}}'_w\), where \(\mathbf{{r}}'_w \in _R \mathbb {F}^k\).

Fig. 4. Components of Construction 6
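To see how steps (f) and (g) turn any raised flag into a randomized output, the following Python sketch (our own illustration for the binary case \(\mathbb {F}=\mathbb {F}_2\); the per-round indexing of the flags and of the random coefficients is flattened into a single list) computes \(\sigma \) fresh random linear combinations of all collected flags and adds the corresponding random masks to the output \(\mathbf{{z}}\).

```python
import secrets

def mask_output(z, flags, sigma):
    """Sketch of steps (f)-(g) of Construction 6 over F_2.

    z     -- the reconstructed output sum_i z_i, as a list of k bits
    flags -- all flag bits output by the NextMSG-hat sub-circuits, flattened
    sigma -- security parameter: number of combinations f''_1, ..., f''_sigma

    If every flag is 0, the output equals z. If some flag is 1, each f''_{w'}
    equals 1 with probability 1/2 independently, so except with probability
    2^{-sigma} at least one fresh random mask r'_{w'} is added to the output,
    making it uniformly random.
    """
    k = len(z)
    out = list(z)
    for _ in range(sigma):
        # f''_{w'}: a fresh random F_2-linear combination of all the flags
        f_comb = 0
        for f in flags:
            f_comb ^= f & secrets.randbelow(2)
        # r'_{w'}: a fresh uniformly random vector in F_2^k
        r = [secrets.randbelow(2) for _ in range(k)]
        # add f''_{w'} * r'_{w'} to the output (no branching, as in a circuit)
        out = [o ^ (f_comb & r_i) for o, r_i in zip(out, r)]
    return out
```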

We show that any additive attack on \(\widehat{\mathsf C}\) is either equivalent to an additive attack on the inputs and output of \({C}\), or sets flags inside \(\widehat{\mathsf C}\) to non-zero values. Moreover, the probability that a flag is set depends only on the additive attack, and is almost independent of the input. This is captured by the next theorem.

Theorem 14

For any depth-d arithmetic circuit \({C}:\mathbb {F}^n \rightarrow \mathbb {F}^k\), and security parameter \(\sigma \), there exists a \(2^{-\Omega (\sigma )}\)-additively-secure implementation \(\widehat{\mathsf C}\) of \({C}\), where \(|\widehat{\mathsf C}| = |{C}|\cdot {\text {polylog} }(|{C}|,\sigma ) + {\text {poly}}(n,k,d,\sigma ).\) Moreover, \(\widehat{\mathsf C}\) can be constructed from \({C}\) in \({\text {poly}}\left( |{C}|,\sigma ,m\right) \) time.

The proof of Theorem 14, which follows the outline presented in Sect. 1.1.2, is deferred to the full version. Here, we only outline the main points and subtle issues in the proof. We first show that with overwhelming probability any additive attack on \(\widehat{\mathsf C}\) either sets error flags in \(\widehat{\mathsf C}\), or is equivalent to an additive attack on its inputs and output. This is proved in two steps: first, using the additive correctness property of the \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) sub-circuits, except with negligible probability additive attacks on the internal wires of every \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) can be “pushed” to an additive attack on its inputs and outputs. Second, we examine the additive attacks obtained in this manner between every pair of adjacent \(\widehat{\mathsf{NextMSG}_{i'}^{j-1}}\) and \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) sub-circuits. If all these attacks cancel out, then the output of \(\widehat{\mathsf C}\) is correct. Otherwise, the additive-security property of the AMD code protecting the communication channels between the \(\widehat{\mathsf{NextMSG}_{}^{}}\) sub-circuits guarantees that with overwhelming probability an error flag will be set, causing \(\widehat{\mathsf C}\) to abort.

Next, we prove that the probability of abort is almost independent of the inputs of \(\widehat{\mathsf C}\). As before, we first “push” additive attacks on the \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) sub-circuits to additive attacks on their inputs and outputs. We then traverse the layers of \(\widehat{\mathsf C}\) from the inputs to the output. In each layer j, a flag can be raised either by a \(\widehat{\mathsf{NextMSG}_{i}^{j}}\) sub-circuit (which corresponds to the computation performed by a single party \(P_i\)), or by the AMD decoding performed in \(\widehat{\mathsf{NextMSG}_{i}^{j}}\). In either case, the event that a flag is set depends only on the view of \(P_i\) which, by the t-privacy of \(\pi \) (and of the additive secret sharing of the input), guarantees that the distributions of the flags when evaluating \(\widehat{\mathsf C}\) on two different inputs \(\mathbf{{x}},\mathbf{{x}}'\) are t-wise indistinguishable. Since a single set flag suffices to cause an abort, the “OR lemma” (Lemma 1) guarantees that the probability of abort is independent of the inputs to \(\widehat{\mathsf C}\).

6 Constant-Overhead AMD Codes and Their Applications to Constant-Overhead MPC

In this section we use AMD codes to relate the open question of constructing actively-secure two-party protocols with constant computational overhead to the simpler questions of constructing passively-secure honest-majority MPC protocols, and correct-only honest-majority MPC protocols, with constant computational overhead. This is done by combining our constructions from Sects. 4 and 5 with a (relaxed) AMD encoding scheme that has constant overhead.

More formally, we say that a secure implementation of a circuit C (e.g., an additively-secure implementation of C, or a secure protocol for evaluating C) has constant computational overhead if its circuit size is \(O(|{C}|)+{\text {poly}}(\log |C|,\sigma ,d,n,k)\), where \(\sigma \) is the security parameter, d is the circuit depth, and n, k are the input and output lengths, respectively. (The circuit size of a protocol \(\pi \) is the total circuit size of all the \(\mathsf{NextMSG}_{}^{}\) functions of \(\pi \).)

We first construct relaxed AMD encoding schemes with constant overhead, namely the size of the encoding and decoding circuits is linear in the message length. At a high level, relaxed AMD encoding schemes, first considered by [3], have a weaker soundness guarantee: as long as the output is correct with high probability, (non-zero) additive attacks are allowed to pass unnoticed. This should be contrasted with (standard) AMD codes, in which every additive attack is guaranteed to be detected (with high probability).

Definition 7

(Relaxed AMD encoding scheme [3]). Let \(\mathbb {F}\) be a finite field, \(n\in \mathbb {N}\) be an input length parameter, \(t\in \mathbb {N}\) be a security parameter, and \(\epsilon \left( n,t\right) :\mathbb {N}\times \mathbb {N}\rightarrow \mathbb {R}^+\). An \(\left( n,t,\epsilon \left( n,t\right) \right) \)-relaxed AMD encoding scheme \(\left( \mathsf{Enc},\mathsf{Dec}\right) \) over \(\mathbb {F}\) is an encoding scheme with the following properties.

  • Perfect completeness. For every \(\mathbf{{x}}\in \mathbb {F}^n\), \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) ,1^t\right) =\left( 0,\mathbf{{x}}\right) \right] =1\).

  • Relaxed additive soundness. For every \(0^{\hat{n}\left( n,t\right) }\ne \mathbf{{a}}\in \mathbb {F}^{\hat{n}\left( n,t\right) }\), and every \(\mathbf{{x}}\in \mathbb {F}^n\), \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) +\mathbf{{a}},1^t\right) \notin \mathsf{ERR}\cup \{(0,\mathbf{{x}})\}\right] \le \epsilon \left( n,t\right) \) where \(\mathsf{ERR}=(\mathbb {F}{\setminus } \{0\})\times \mathbb {F}^n \), and the probability is over the randomness of \(\mathsf{Enc}\).
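To make the definition concrete, here is a minimal toy instance for a single field element (\(n=1\)), written in Python; it is our own illustrative example (not the construction of [3]), it ignores the security parameter t, and it uses the multiplicative encoding \(\mathsf{Enc}(x;r)=(x,r,x\cdot r)\) over a prime field \(\mathbb {F}_p\).

```python
import secrets

P = 2**61 - 1  # an arbitrary prime; the field is F_P (illustrative choice)

def enc(x):
    """Encode a single field element x as (x, r, x*r) for a fresh random r."""
    r = secrets.randbelow(P)
    return (x % P, r, (x * r) % P)

def dec(c):
    """Return (0, x) if the multiplicative check passes, an ERR pair otherwise."""
    x, r, t = c
    return (0, x) if (x * r) % P == t else (1, 0)

def add_attack(c, a):
    """Additive attack: blindly add a fixed offset vector a to the codeword."""
    return tuple((ci + ai) % P for ci, ai in zip(c, a))
```

For an attack vector \((a,b,c)\), an attack with \(a \ne 0\) passes the check only when \(a\cdot r \equiv c - xb - ab \pmod p\), which happens with probability 1/p over the random r; an attack with \(a = 0\) leaves the decoded value equal to x, so the decoder outputs either \(\mathsf{ERR}\) or \((0,x)\). This yields relaxed additive soundness with \(\epsilon = 1/p\); note that whether an attack with \(a=0\) is detected depends on x, which is exactly the behavior the relaxed definition permits.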

Roughly speaking, we construct a constant-overhead AMD encoding scheme by composing a linearly encodable and decodable AMD encoding scheme with constant additive-soundness error, with a linearly encodable error-correcting code with constant rate and relative distance. We will need the following notion of an [n, k, d]-error-correcting code.

Definition 8

We say that a pair \((\mathsf{Enc}:\mathbb {F}^k \rightarrow \mathbb {F}^n,\mathsf{Dec}:\mathbb {F}^n \rightarrow \mathbb {F}^k)\) of deterministic circuits is an [n, k, d]-error-correcting code (ECC) over \(\mathbb {F}\) if \(\Pr \left[ \mathsf{Dec}(\mathsf{Enc}(\mathbf{{x}})) = \mathbf{{x}}\right] =1\) for every \(\mathbf{{x}}\in \mathbb {F}^k\), and \(|\{i: (\mathsf{Enc}(\mathbf{{x}}))_i \ne (\mathsf{Enc}(\mathbf{{y}}))_i \}| \ge d \) for every distinct \(\mathbf{{x}},\mathbf{{y}}\in \mathbb {F}^k\).

The following theorem is due to Spielman [19] (see also [9]):

Theorem 15

There exist constants \(d_1>1\), and \(d_2 >0\), such that for any field \(\mathbb {F}\), and any \(k\in \mathbb {N}\), there exists a pair of circuits \((\mathsf{Enc}_k,\mathsf{Dec}_k)\) which is a \([\lfloor d_1 k \rfloor ,k,\lceil d_2 k \rceil ]\)-ECC over \(\mathbb {F}\). Moreover, the size of \(\mathsf{Enc}_k\) is O(k).

We can now construct a relaxed AMD encoding scheme with constant overhead.

Construction 7

Let n be a positive integer, \(\mathbb {F}\) be a finite field, and \((\mathsf{Enc}_n,\mathsf{Dec}_n)\) be an \([n',n,d]\)-ECC over \(\mathbb {F}\). In addition, let \((\mathsf{Enc}_{amd}:\mathbb {F}\rightarrow \mathbb {F}^{k},\mathsf{Dec}_{amd}:\mathbb {F}^{k} \rightarrow \mathbb {F}\times \mathbb {F})\) be a \((1,t,\epsilon (t))\)-AMD encoding scheme. Consider the circuits \(\mathsf{Enc}:\mathbb {F}^{n} \rightarrow \mathbb {F}^{n+k\cdot n'}\) and \(\mathsf{Dec}:\mathbb {F}^{n+k\cdot n'} \rightarrow \mathbb {F}\times \mathbb {F}^n\) which are defined as follows.

  • The circuit \(\mathsf{Enc}\) on input \(\mathbf{{x}}\in \mathbb {F}^n\) performs the following:

    1.:

    Computes \(\mathbf{{x}}' \leftarrow \mathsf{Enc}_n (\mathbf{{x}})\) and for all \(1 \le i \le n'\) computes \(\widehat{x}_i \leftarrow \mathsf{Enc}_{amd} (x'_i)\).

    2.:

    Outputs \((\mathbf{{x}},\widehat{\mathbf{{x}}})\).

  • The circuit \(\mathsf{Dec}\) on input \((\mathbf{{x}},\widehat{\mathbf{{x}}})\) performs the following:

    1.:

    Computes \(\mathbf{{x}}' \leftarrow \mathsf{Enc}_n (\mathbf{{x}})\).

    2.:

    For all \(1 \le i \le n'\) computes \((f_i,y'_i) \leftarrow \mathsf{Dec}_{amd}(\widehat{x}_i)\) and \(f'_i \leftarrow x'_i - y'_i\).

    3.:

    If there exists some \(1 \le i \le n'\) such that \(f_i \ne 0\) or \(f'_i \ne 0\), outputs \((1,0^n)\). Otherwise, outputs \((0,\mathbf{{x}})\).
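The following Python sketch (our own illustration) shows the structure of Construction 7, with stand-ins for both ingredients: a \([3n,n,3]\) repetition code replaces the constant-rate ECC of Theorem 15, and the toy multiplicative AMD code sketched after Definition 7 replaces \((\mathsf{Enc}_{amd},\mathsf{Dec}_{amd})\). It illustrates the composition only, not the parameters of Theorem 17.

```python
import secrets

P = 2**61 - 1  # prime field for the toy per-symbol AMD code

def ecc_enc(x):
    """Stand-in for Enc_n of Theorem 15: a [3n, n, 3] repetition code."""
    return [xi % P for xi in x for _ in range(3)]

def amd_enc(s):
    """Toy (1, t, 1/P)-AMD encoding of a single symbol s: (s, r, s*r)."""
    r = secrets.randbelow(P)
    return [s, r, (s * r) % P]

def amd_dec(c):
    s, r, t = c
    return (0, s) if (s * r) % P == t else (1, 0)

def Enc(x):
    """Output x in the clear, plus per-symbol AMD encodings of Enc_n(x)."""
    x_prime = ecc_enc(x)
    x_hat = [amd_enc(s) for s in x_prime]
    return (list(x), x_hat)

def Dec(codeword):
    x, x_hat = codeword
    x_prime = ecc_enc(x)                      # re-encode the received x
    for x_i, c_i in zip(x_prime, x_hat):
        f_i, y_i = amd_dec(c_i)
        if f_i != 0 or (x_i - y_i) % P != 0:  # flag raised, or f'_i != 0
            return (1, [0] * len(x))
    return (0, list(x))
```

To change the decoded message an attacker must change \(\mathbf{{x}}\), which changes at least \(d=3\) symbols of the re-encoding \(\mathsf{Enc}_n(\mathbf{{x}})\); each such symbol is protected by an independently encoded AMD codeword, so all of the corresponding checks pass simultaneously with probability at most \((1/P)^3\).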

Theorem 16

For any positive integer n, the pair of circuits \(\mathsf{Enc},\mathsf{Dec}\) of Construction 7 is an \((n,t,\epsilon (t)^d)\)-relaxed AMD encoding scheme.

Proof

The perfect completeness property follows directly from the construction. We now prove the relaxed additive soundness property. Let \(\mathbf{{x}}\in \mathbb {F}^n\) be an input to \(\mathsf{Enc}\), and \( \mathbf{A} =(\mathbf{{a}},\mathbf{{b}}) \in \mathbb {F}^n \times \mathbb {F}^{kn'}\) be an additive attack on the outputs of \(\mathsf{Enc}\). We consider two possible cases.

1.:

\(\mathbf{{a}}= 0\). In this case, the additive attack does not attempt to alter the value \(\mathbf{{x}}\) passed from \(\mathsf{Enc}\) to \(\mathsf{Dec}\), so \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) +(\mathbf{{a}},\mathbf{{b}}),1^t\right) \notin \mathsf{ERR}\cup \{(0,\mathbf{{x}})\}\right] = 0\).

2.:

\(\mathbf{{a}}_i \ne 0\) for some \(1 \le i \le n\). In this case, let \(\mathcal {I}= \left\{ i: \left( \mathsf{Enc}_n(\mathbf{{x}}+\mathbf{{a}})\right) _i \ne \left( \mathsf{Enc}_n(\mathbf{{x}})\right) _i \right\} \). For an additive attack to successfully cause \(\mathsf{Dec}\) to output some \(\tilde{\mathbf{{x}}}\ne \mathbf{{x}}\), it must be the case that \(x_i^{\prime \mathbf{A} } =y_i^{\prime \mathbf{A} }\) for every \(i \in \mathcal {I}\), where \(x_i^{\prime \mathbf{A} } =\left( \mathsf{Enc}_n(\mathbf{{x}}+\mathbf{{a}})\right) _i\), and \(y_i^{\prime \mathbf{A} } = \mathsf{Dec}_{amd}(\widehat{x}_i+\mathbf{{b}}_i) = \mathsf{Dec}_{amd}\left( \mathsf{Enc}_{amd}((\mathsf{Enc}_n(\mathbf{{x}}))_i)+\mathbf{{b}}_i\right) \) (the right equality follows from the definition of \(\widehat{\mathbf{{x}}}\)). For every \(i\in \mathcal {I}\), if \(\mathbf{{b}}_i=0\) then by the completeness of \((\mathsf{Enc}_{amd}, \mathsf{Dec}_{amd})\), \(\mathsf{Dec}_{amd}\left( \mathsf{Enc}_{amd}((\mathsf{Enc}_n(\mathbf{{x}}))_i)+\mathbf{{b}}_i\right) =\mathsf{Dec}_{amd}\left( \mathsf{Enc}_{amd}((\mathsf{Enc}_n(\mathbf{{x}}))_i)\right) =(\mathsf{Enc}_n(\mathbf{{x}}))_i \ne \left( \mathsf{Enc}_n(\mathbf{{x}}+\mathbf{{a}})\right) _i\) (the right-most inequality holds since \(i\in \mathcal {I}\)), so \(\mathsf{Dec}\) outputs \(\mathsf{ERR}\) (with probability 1); otherwise, the additive soundness of \((\mathsf{Enc}_{amd}, \mathsf{Dec}_{amd})\) guarantees that the attack passes undetected at coordinate i (i.e., that \(f_i = 0\) and \(y_i^{\prime \mathbf{A} } = x_i^{\prime \mathbf{A} }\)) with probability at most \(\epsilon (t)\). Moreover, the relative distance property of the ECC guarantees that \( |\mathcal {I}| \ge d\), and since \(\mathsf{Enc}_{amd}\) encodes each coordinate with fresh randomness, these events are independent across the coordinates in \(\mathcal {I}\). Consequently, \(\Pr \left[ \mathsf{Dec}\left( \mathsf{Enc}\left( \mathbf{{x}},1^t\right) +(\mathbf{{a}},\mathbf{{b}}),1^t\right) \notin \mathsf{ERR}\cup \{(0,\mathbf{{x}})\}\right] \le \epsilon (t)^d\). \(\square \)

Instantiating Construction 7 with the ECC of Theorem 15, we obtain the following result.

Theorem 17

For any positive integer n there exists an \((n,t,2^{-\Omega (n)})\)-relaxed AMD encoding scheme with encoding and decoding circuits of size \(\Theta (n)\).

Theorem 17 can be used to relate the open question of constructing actively-secure two-party protocols with constant computational overhead to the simpler questions of constructing passively-secure honest-majority MPC protocols, and correct-only honest-majority MPC protocols, with constant computational overhead. We first show that actively secure 2-party MPC protocols in the OT-hybrid model, with constant computational overhead, can be constructed from additively-secure circuits with constant computational overhead. Formally,

Claim 18

Assume that any boolean circuit \({C}\) admits an additively-secure implementation \(\widehat{\mathsf C}\) with constant computational overhead. Then there exists an actively secure 2-party protocol \(\pi \) for evaluating \({C}\) in the OT-hybrid model with constant computational overhead.

Proof

(sketch). The work of [11] observed that the effect of an active attack on an arithmetic version of the passively-secure GMW protocol [12] \(\pi _\mathsf{GMW}\) (in the OLE-hybrid model) corresponds to an additive attack on the underlying circuit being evaluated. This observation holds in the binary case as well (where \(\pi _\mathsf{GMW}\) is executed in the OT-hybrid model). Thus, given an additively-secure implementation \(\widehat{\mathsf C}\) of \({C}\) with constant computational overhead, one can construct an actively secure 2-party protocol \(\pi \) for evaluating \({C}\) in the OT-hybrid model, with constant computational overhead, simply by running \(\pi _\mathsf{GMW}\) on \(\widehat{\mathsf C}\). \(\square \)

The following corollary reduces the task of constructing actively-secure 2-party protocols in the OT-hybrid model, with constant computational overhead, to the following simpler tasks:

  1.

    Constructing passively-secure honest-majority MPC protocols with constant computational overhead.

  2.

    Constructing correct-only (as per Definition 6) honest-majority MPC protocols with constant computational overhead.

Corollary 1

If there exist both correct-only MPC protocols and passively secure MPC protocols with constant computational overhead, then there is an actively secure 2-party protocol in the OT-hybrid model with constant computational overhead.

Proof

(sketch). Let \(\pi _1\) be a correct-only protocol and \(\pi _2\) a passively secure protocol, both with constant computational overhead. The protocol \(\pi \) for evaluating a circuit \({C}\) is obtained by applying Claim 18 to the circuit \(\widehat{\mathsf C}_\mathsf{sec}\) constructed below.

  1.

    Construct an additively-correct implementation \(\widehat{\mathsf C}_\mathsf{corr}\) of \({C}\) (as per Definition 1) with constant computational overhead using \(\pi _1\), Construction 4, and the relaxed AMD codes of Theorem 17.

  2.

    Construct an additively-secure implementation \(\widehat{\mathsf C}_\mathsf{sec}\) of \({C}\) (as per Definition 2) with constant computational overhead using \(\pi _2\), Construction 6, and the relaxed AMD codes of Theorem 17.

By repeating the analysis of Constructions 4 and 6 while replacing the protocol from [6] with \(\pi _1,\pi _2\), we obtain that \(\pi \) has constant computational overhead. Regarding the security of \(\pi \), the only difference from the analysis in Sects. 4 and 5 is that \(\pi \) employs a relaxed AMD encoding scheme (whereas Constructions 4 and 6 used (standard) AMD encoding schemes). However, since AMD codes are used in these constructions only to protect the communication channels of \(\pi _1,\pi _2\), relaxed additive soundness suffices for the analysis: it guarantees that, except with negligible probability, no attack can undetectably alter the values of these messages. \(\square \)