1 Introduction

Historically, Cryptography has been used to protect information (either in transit or stored) from unauthorized access. One of the most important developments in Cryptography in the last thirty years has been the ability to protect not only information but also the computations performed on data that needs to remain secure. Starting with the work on secure multiparty computation [Yao82], continuing with ZK proofs [GMR89], and more recently with Fully Homomorphic Encryption [Gen09], verifiable outsourcing of computation [GKR08, GGP10], SNARKs [GGPR13, BCI+13] and obfuscation [GGH+16], we now have cryptographic tools that protect the secrecy and integrity not only of data, but also of the programs which run on that data.

Another crucial development in Modern Cryptography has been the adoption of a more “fine-grained” notion of computational hardness and security. The traditional cryptographic approach modeled computational tasks as either “easy” (for the honest parties to perform) or “hard” (infeasible for the adversary). Yet we have also seen moderately hard problems used to attain certain security properties. The best example of this approach might be the use of moderately hard inversion problems in blockchain protocols such as Bitcoin. Although present in many works since the inception of Modern Cryptography, this approach was first formalized in a work of Dwork and Naor [DN92].

In the second part of this work we consider the following model (which can be traced back to the seminal paper by Merkle [Mer78] on public-key cryptography). Honest parties will run a protocol which will cost (Footnote 1) them C, while an adversary who wants to compromise the security of the protocol will incur a cost \(C'=\omega (C)\). Note that while \(C'\) is asymptotically larger than C, it might still be a feasible cost to incur – the only guarantee is that it is substantially larger than the work of the honest parties. For example, in Merkle’s original proposal for public-key cryptography the honest parties can exchange a key in time T, but the adversary can only learn the key in time \(T^2\). Other examples include primitives introduced by Cachin and Maurer [CM97] and Håstad [Has87], where the cost is the space and the parallel time complexity of the parties, respectively.
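To make the cost gap concrete, here is a small, purely illustrative Python sketch in the spirit of Merkle puzzles (a toy rendering, not the construction of [Mer78] itself): each honest party spends time roughly N, while an eavesdropper must spend time roughly \(N^2\).

```python
import os, random, hashlib

N = 64  # number of puzzles; each puzzle is also solvable with ~N work

def _pad(secret: int) -> bytes:
    return hashlib.sha256(secret.to_bytes(4, "big")).digest()

def make_puzzle(pid: int, key: bytes) -> bytes:
    """Hide (pid, key) behind a secret from a space of size N: brute-forceable in O(N)."""
    msg = b"KEY!" + pid.to_bytes(4, "big") + key
    secret = random.randrange(N)
    return bytes(a ^ b for a, b in zip(msg, _pad(secret)))

def solve_puzzle(puzzle: bytes):
    """O(N) brute force over the secret space of a single puzzle."""
    for secret in range(N):
        msg = bytes(a ^ b for a, b in zip(puzzle, _pad(secret)))
        if msg[:4] == b"KEY!":
            return int.from_bytes(msg[4:8], "big"), msg[8:]
    raise ValueError("unsolvable puzzle")

# Alice: ~N work to prepare N puzzles.
keys = [os.urandom(16) for _ in range(N)]
puzzles = [make_puzzle(i, keys[i]) for i in range(N)]

# Bob: ~N work to solve ONE randomly chosen puzzle; he announces pid in the clear.
pid, key = solve_puzzle(random.choice(puzzles))
assert key == keys[pid]   # Alice looks pid up; both parties now share `key`.

# An eavesdropper sees all N puzzles and pid, but not which puzzle Bob chose,
# so it must solve ~N/2 puzzles on average: ~N^2 work versus the parties' ~N.
```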

Recently there has been renewed interest in this model. Degwekar et al. [DVV16] show how to construct certain cryptographic primitives in \(\mathsf {NC}^1\) [resp. \(\mathsf {AC}^0\)] that are secure against all adversaries in \(\mathsf {NC}^1\) [resp. \(\mathsf {AC}^0\)]. In conceptually related work, Ball et al. [BRSV17] present computational problems that are “moderately hard” on average if they are moderately hard in the worst case, a useful property for using such problems as cryptographic primitives.

The goal of this paper is to initiate a study of Fine-Grained Secure Computation, thereby connecting these two major developments in Modern Cryptography. The question we ask is whether it is possible to construct secure computation primitives that are secure against “moderately complex” adversaries. We answer this question in the affirmative, presenting definitions and constructions for the tasks of Fully Homomorphic Encryption and Verifiable Computation in the fine-grained model. In our constructions, the goal is to simultaneously optimize, to the extent possible, depth, size, round and communication complexity. Our constructions rely on a widely believed complexity separation (Footnote 2). We also present two application scenarios for our model: (i) hardware chips that prove their own correctness and (ii) protocols against rational adversaries, including potential solutions to the Verifier’s Dilemma in smart-contract transactions in systems such as Ethereum.

1.1 Our Results

Our starting point is the work in [DVV16], and specifically their public-key encryption scheme secure against \(\mathsf {NC}^1\) circuits. Recall that \(\mathsf {AC}^0[2]\) is the class of Boolean circuits of constant depth and unbounded fan-in, augmented with parity gates. If the number of \(\mathsf {AND}\) (and \(\mathsf {OR}\)) gates of non-constant fan-in is constant, we say that the circuit belongs to the class \(\mathsf {AC}^0_{\text {Q}}[2]\subset \mathsf {AC}^0[2]\).

Our results can be summarized as follows:

  • We first show that the techniques in [DVV16] can be used to build a somewhat homomorphic encryption (SHE) scheme. We note that, because honest parties are limited to \(\mathsf {NC}^1\) computations, the best we can hope for is a scheme that is homomorphic for computations in \(\mathsf {NC}^1\). However, our scheme can only support computations that can be expressed in \(\mathsf {AC}^0_{\text {Q}}[2]\).

  • We then use our SHE scheme, in conjunction with protocols described in [GGP10, CKV10, AIK10], to construct verifiable computation protocols for functions in \(\mathsf {AC}^0_{\text {Q}}[2]\), secure and input/output private against any adversary in \(\mathsf {NC}^1\).

Our somewhat homomorphic encryption also allows us to obtain the following protocols secure against \(\mathsf {NC}^1\) adversaries: (i) constant-round 2PC, secure in the presence of semi-honest static adversaries for functions in \(\mathsf {AC}^0_{\text {Q}}[2]\); (ii) Private Function Evaluation in a two party setting for circuits of constant multiplicative depth without relying on universal circuits. These results stem from well-known folklore transformations and we do not prove them formally.

The class \(\mathsf {AC}^0_{\text {Q}}[2]\) includes many natural and interesting problems such as: fixed precision arithmetic, evaluation of formulas in 3CNF (or kCNF for any constant k), a representative subset of SQL queries, and S-Boxes [BP11] for symmetric key encryption.
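For instance, to see why kCNF evaluation fits in this class, note that over \(\text {GF}(2)\) each clause is a constant-degree polynomial and only the final conjunction has non-constant fan-in:

$$ \phi (x) = \bigwedge _{j=1}^{m} \big ( \ell _{j,1} \vee \ell _{j,2} \vee \ell _{j,3} \big ), \qquad \ell _{j,1} \vee \ell _{j,2} \vee \ell _{j,3} = 1 \oplus (1\oplus \ell _{j,1})(1\oplus \ell _{j,2})(1\oplus \ell _{j,3}), $$

so each clause has degree at most 3 (a negated literal is just \(1 \oplus x_i\)), and the single top \(\mathsf {AND}\) of fan-in m is the only gate of non-constant fan-in, as allowed in \(\mathsf {AC}^0_{\text {Q}}[2]\).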

Our results (like those of [DVV16]) hold under the assumption that \(\mathsf {NC}^1 \subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}\), a widely believed worst-case assumption on the separation of complexity classes. Notice that this assumption does not imply the existence of one-way functions (or even \(\mathsf {P}\not = \mathsf {NP}\)). Thus, our work shows that it is possible to obtain “advanced” cryptographic schemes, such as somewhat homomorphic encryption and verifiable computation, even if we do not live in Minicrypt (Footnotes 3 and 4).

Comparison with other approaches. One important question is: in what ways are our schemes better than “generic” cryptographic schemes, which after all are secure against any polynomial-time adversary?

One such feature is the type of assumption one must make to prove security. As we said above, our schemes rely on a very mild worst-case complexity assumption, while cryptographic SHE and VC schemes rely on very specific assumptions, which are much stronger than the above.

For the case of Verifiable Computation, there are also information-theoretic protocols which are secure against any (possibly computationally unbounded) adversary: for example the “Muggles” protocol in [GKR08], which can compute any (log-space uniform) \(\mathsf {NC}\) function and is also reasonably efficient in practice [CMT12], or the more recent work [GR18], which obtains efficient VC for functions in a subset of \(\mathsf {NC}\cap {\mathsf {SC}}\). Compared to these results, one aspect in which our protocol fares better is that our Prover/Verifier can be implemented with a constant-depth circuit (in particular in \(\mathsf {AC}^0[2]\), see Sect. 4), which is not possible for the Prover/Verifier in [GKR08, GR18], which needs to be in \(\mathsf {TC}^0\) (Footnote 5). Moreover, our protocol is non-interactive (while [GKR08, GR18] require \(\varOmega (1)\) rounds of interaction) and, because our protocols work in the “pre-processing model”, we do not require any uniformity or regularity condition on the circuit being outsourced (such conditions are required by [GKR08, CMT12]). Finally, our verification scheme achieves input and output privacy.

Another approach to obtaining information-theoretic security for Verifiable Computation is to use the framework of randomized encodings (RE) [IK00a, AIK04] (e.g. [GGH+07], which uses related techniques). In this work we build schemes with additional requirements: compact homomorphic encryption (Footnote 6) and overall efficient verification for verifiable computation (Footnote 7). We do not see how to achieve these additional requirements via current RE-based approaches. We further discuss these and other limitations of directly using RE in Appendix D.

1.2 Overview of Our Techniques

Homomorphic Encryption. In [DVV16] the authors already point out that their scheme is linearly homomorphic. We make use of the re-linearization technique from [BV14] to construct a leveled homomorphic encryption scheme.

Our scheme (like the one in [DVV16]) is secure against adversaries in the class of (non-uniform) \(\mathsf {NC}^1\). This implies that we can only evaluate functions in \(\mathsf {NC}^1\), since otherwise the evaluator would be able to break the semantic security of the scheme. In addition, we have to ensure that the whole homomorphic evaluation stays in \(\mathsf {NC}^1\); the problem is that homomorphically evaluating a function f might increase the depth of the computation.

In terms of circuit depth, the main overhead will be (as usual) the computation of multiplication gates. As we show in Sect. 3, a single homomorphic multiplication can be performed by a depth-two \(\mathsf {AC}^0[2]\) circuit, but requires depth \(O(\log (n))\) with a circuit of fan-in two. Therefore, a circuit for f with \(\omega (1)\) multiplicative depth would require an evaluation of depth \(\omega (\log (n))\), which would be outside \(\mathsf {NC}^1\). Our first scheme can therefore only evaluate functions of constant multiplicative depth, since in that case the evaluation stays in \(\mathsf {AC}^0[2]\).

We then present a second scheme that extends the class of computable functions to \(\mathsf {AC}^0_{\text {Q}}[2]\) by allowing for a negligible error in the correctness of the scheme. We use techniques from a work by Razborov [Raz87] on approximating \(\mathsf {AC}^0[2]\) circuits with low-degree polynomials – the correctness of the approximation (appropriately amplified) will be the correctness of our scheme.

Reusable Verifiable Computation. The core of our approach is the construction in [CKV10] for deriving Verifiable Computation from Homomorphic Encryption; the details of this approach follow. Recall that we are working in a model with an expensive preprocessing phase (executed by the Client only once, before providing any inputs to the Server) and an inexpensive online phase. The online phase is in turn composed of two algorithms: one to encode the input for the Server and one to check its response. In the preprocessing phase of [CKV10], the Client selects a random input r, encrypts it as \(c_r=E(r)\) and homomorphically computes \(c_{f(r)}\), an encryption of f(r). During the online phase, the Client, on input x, computes \(c_x=E(x)\) and submits the ciphertexts \(c_x,c_r\) in random order to the Server, who homomorphically computes \(c_{f(r)}=E(f(r))\) and \(c_{f(x)}=E(f(x))\) and returns them to the Client. The Client, given the message \(c_0,c_1\) from the Server, checks that \(c_b=c_{f(r)}\) (for the appropriate bit b) and, if so, accepts \(y=D(c_{f(x)})\) as \(y=f(x)\). The semantic security of E guarantees that this protocol has soundness error 1/2. This error can be reduced by “scaling” this approach: the two ciphertexts \(c_x\) and \(c_r\) are replaced with 2t ciphertexts (t distinct encryptions of x and t encryptions of random values \(r_1,\dots ,r_t\)), sent to the prover after being shuffled by a random permutation. The scheme as described is, however, only one-time secure, since a malicious server can figure out which ciphertext is the test ciphertext \(c_{f(r)}\) if it is used again. To make this scheme “many-times secure”, [CKV10] uses the paradigm introduced in [GGP10] of running the one-time scheme “under the hood” of a different homomorphic encryption key each time.
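As a reference point, the following Python sketch renders the basic two-ciphertext version of this one-time protocol over an abstract homomorphic encryption interface; the function names (`keygen`, `enc`, `dec`, `evaluate`) are placeholders, not the algorithms constructed later in this paper.

```python
import secrets

# Placeholders for a homomorphic encryption scheme with *deterministic* evaluation
# (the requirement noted below): keygen() -> (pk, sk), enc(pk, m) -> c,
# dec(sk, c) -> m, evaluate(pk, f, c) -> encryption of f(m).

def vc_preprocess(keygen, enc, evaluate, f, n):
    """Client, once: sample a random test input r and precompute E(f(r))."""
    pk, sk = keygen()
    r = secrets.randbits(n)
    c_r = enc(pk, r)
    c_fr = evaluate(pk, f, c_r)
    return pk, sk, c_r, c_fr

def vc_probgen(enc, pk, c_r, x):
    """Client, online: encrypt x and hide which of the two ciphertexts is the test."""
    c_x = enc(pk, x)
    b = secrets.randbits(1)                 # slot holding the test ciphertext c_r
    query = (c_r, c_x) if b == 0 else (c_x, c_r)
    return b, query

def vc_compute(evaluate, pk, f, query):
    """Server: homomorphically evaluate f on both ciphertexts (it cannot tell them apart)."""
    return tuple(evaluate(pk, f, c) for c in query)

def vc_verify(dec, sk, b, c_fr, response):
    """Client: accept iff the test slot matches the precomputed E(f(r))."""
    if response[b] != c_fr:                 # equality check relies on deterministic evaluation
        return None                         # reject
    return dec(sk, response[1 - b])         # accepted: y = f(x)

# A server answering incorrectly in one slot is caught with probability 1/2; using t test
# inputs and t copies of x (2t ciphertexts in a random permutation) drives the soundness
# error down exponentially in t.
```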

When applying these techniques in our fine-grained context, the main technical challenge is to guarantee that they also work within \(\mathsf {NC}^1\). In particular, we need to ensure that: (i) the constructions can be computed in low depth; (ii) the reductions in the security proofs can be carried out in low depth. We rely on results from [MV91] to make sure a random permutation can be sampled by an appropriately low-depth scheme (Footnote 8). Moreover, we cannot simply make black-box use of the one-time construction in [CKV10]: their construction works only for homomorphic encryption schemes with deterministic evaluation, whereas the more expressive of our constructions (Sect. 3.3) is randomized (Footnote 9).

1.3 Application Scenarios

The applications described in this section refer to the problem of Verifying Computation, where a Client outsources an algorithm f and an input x to a Server, who returns a value y and a proof that \(y=f(x)\). The security property is that it should be infeasible to convince the verifier to accept \(y' \ne f(x)\), and the crucial efficiency property is that verifying the proof should cost less than computing f (since avoiding that cost was the reason the Client hired the Server to compute f).

Hardware Chips That Prove Their Own Correctness. Verifiable Computation (VC) can be used to verify the execution of hardware chips designed by untrusted manufacturers. One could envision chips that provide (efficient) proofs of their correctness for every input-output computation they perform. These proofs must be verifiable using less time and energy than re-executing the computation itself.

When working in hardware, however, one may not need the full power of cryptographic protection against arbitrary malicious attacks, since one can bound the computational power of the malicious chip. The bound could be obtained by making (reasonable and evidence-based) assumptions on how much computational power can fit in a given chip area. For example, one could safely assume that a malicious chip can perform at most a constant factor more work than the original function, because of the basic physics of size and power constraints. In other words, if C is the cost of the honest Server in a VC protocol, then in this model the adversary is limited to computations of cost O(C), and therefore a protocol guaranteeing that successful cheating strategies require cost \(\omega (C)\) suffices. This is exactly the model in our paper. Our results apply to the case in which we define the cost as the depth (i.e. the parallel time complexity) of the computation implemented in the chip.

Rational Proofs. The problem above is related to the notion of composable Rational Proofs defined in [CG15]. In a Rational Proof (introduced by Azar and Micali [AM12, AM13]), given a function f and an input x, the Server returns the value \(y=f(x)\), and (possibly) some auxiliary information, to the Client. The Client in turn pays the Server for its work with a reward based on the transcript exchanged with the Server and some randomness chosen by the Client. The crucial property is that this reward is maximized in expectation when the Server returns the correct value y. Clearly a rational prover who is only interested in maximizing his reward will always answer correctly.

The authors of [CG15] show however that the definition of Rational Proofs in [AM12, AM13] does not satisfy a basic compositional property needed for the case in which many computations are outsourced to many servers who compete with each other for rewards (e.g. the case of volunteer computations [ACK+02]). A “rational proof” for the single-proof setting may no longer be rational when a large number of “computation problems” are outsourced. If one can produce T “random guesses” to problems in the time it takes to solve 1 problem correctly, it may be preferable to guess! That’s because even if each individual reward for an incorrect answer is lower than the reward for a correct answer, the total reward of T incorrect answers might be higher (and this is indeed the case for some of the protocols presented in [AM12, AM13]).

The question (only partially answered in [CG15, CG17], and only for a limited class of computations) is how to design protocols where the reward is strictly connected not just to the correctness of the result, but to the amount of work done by the prover. Consider for example a protocol where the prover collects the reward only if he produces a proof of correctness of the result, and assume that the cost to produce a valid proof for an incorrect result is higher than the cost of computing the correct result together with its correct proof. Then obviously a rational prover will always answer correctly, because the above strategy of fast incorrect answers no longer works. While the application is different, the goal is the same as in the previous verifiable-hardware scenario.

The Verifier’s Dilemma. In blockchain systems such as Ethereum, transactions can be expressed by arbitrary programs. To add a transaction to a block, miners have to verify its validity, which could be too costly if the program is complex. This creates the so-called Verifier’s Dilemma [LTKS15]: given a costly valid transaction Tr, a miner who spends time verifying it is at a disadvantage with respect to a miner who does not verify it and accepts it “uncritically”, since the latter will produce a valid block faster and claim the reward. On the other hand, if the transaction is invalid, accepting it without verifying it first will lead to the rejection of the entire block by the blockchain and a waste of work by the uncritical miner. The solution is to require efficiently verifiable proofs of validity for transactions, an approach already pursued by various startups in the Ethereum ecosystem (e.g. TrueBit (Footnote 10)). We note that it suffices for these proofs to satisfy the condition above: we do not need the full power of information-theoretic or cryptographic security; it is enough to guarantee that producing a proof of correctness for a false transaction is more costly than producing a valid transaction and its correct proof, which is exactly the model we are proposing.

1.4 Future Directions

Our work opens up many interesting future directions.

First of all, it would be nice to extend our results to the case where the cost is the actual running time, rather than the “parallel running time”/“circuit depth” of our model. The techniques in [BRSV17] (which presents problems conjectured to have \(\varOmega (n^2)\) complexity on average), if not even the original work of Merkle [Mer78], might be useful in building a verifiable computation scheme where, if computing the function takes time T, producing a false proof of correctness must take time \(\varOmega (T^2)\).

For the specifics of our constructions, it would be nice to “close the gap” between what we can achieve and the complexity assumption: our schemes can only compute \(\mathsf {AC}^0_{\text {Q}}[2]\) against adversaries in \(\mathsf {NC}^1\), while ideally we would like to be able to compute all of \(\mathsf {NC}^1\) (or at the very least all of \(\mathsf {AC}^0[2]\)).

Finally, to apply these schemes in practice it is important to have tight concrete security reductions and proof-of-concept implementations.

2 Preliminaries

For a distribution D, we denote by \(x \leftarrow D\) the fact that x is sampled according to D. We remind the reader that an ensemble \(\mathcal {X} = \{X_{\lambda }\}_{\lambda \in \mathbb {N}}\) is a family of probability distributions over a family of domains \(\mathcal {D}=\{D_{\lambda }\}_{\lambda \in \mathbb {N}}\). We say two ensembles \(\mathcal {D} = \{D_{\lambda }\}_{\lambda \in \mathbb {N}}\) and \(\mathcal {D}' = \{D'_{\lambda }\}_{\lambda \in \mathbb {N}}\) are statistically indistinguishable if the statistical distance \(\frac{1}{2}\sum _{x}\big |\Pr [D_{\lambda }=x] - \Pr [D'_{\lambda }=x]\big |\) is negligible in \(\lambda \). Finally, we note that all arithmetic computations (such as sums, inner products, matrix products, etc.) in this work will be over \(\text {GF}(2)\) unless specified otherwise.

Definition 2.1

(Function Family). A function family is a family of (possibly randomized) functions \(F = \{f_{\lambda }\}_{\lambda \in \mathbb {N}}\), where for each \(\lambda \), \(f_{\lambda }\) has domain \(D^f_{\lambda }\) and co-domain \(R^f_{\lambda }\). A class \(\mathcal {C}\) is a collection of function families.

In most of our constructions \(D^f_{\lambda }=\{0,1\}^{d_\lambda ^f}\) and \(R^f_{\lambda }=\{0,1\}^{r_\lambda ^f}\) for sequences \(\{d_\lambda ^f\}_\lambda \), \(\{r_\lambda ^f\}_\lambda \).

In the rest of the paper we will focus on the class \(\mathcal {C}=\mathsf {NC}^1\) of functions for which there is a polynomial \(p(\cdot )\) and a constant c such that, for each \(\lambda \), the function \(f_\lambda \) can be computed by a Boolean (randomized) fan-in-2 circuit of size \(p(\lambda )\) and depth \(c \log (\lambda )\). In the formal statements of our results we will also use the following classes: \(\mathsf {AC}^0\), the class of functions of polynomial size and constant depth with \(\mathsf {AND}, \mathsf {OR}\) and \(\mathsf {NOT}\) gates of unbounded fan-in; and \(\mathsf {AC}^0[2]\), the class of functions of polynomial size and constant depth with \(\mathsf {AND}, \mathsf {OR}, \mathsf {NOT}\) and \(\mathsf {PARITY}\) gates of unbounded fan-in.

Given a function f, we can think of its multiplicative depth as the degree of the lowest-degree polynomial in \(\text {GF}(2)\) that evaluates to f. We denote by \(\mathsf {AC}^0_{\text {CM}}[2]\) the class of circuits in \(\mathsf {AC}^0[2]\) with constant multiplicative depth. We say that a circuit has quasi-constant multiplicative depth if it has a constant number of gates with non-constant fan-in (an example is a circuit composed of a single \(\mathsf {AND}\) of fan-in n). We denote the class of such circuits by \(\mathsf {AC}^0_{\text {Q}}[2]\). See Appendix A for a formal treatment.
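For intuition, the standard arithmetization of Boolean gates over \(\text {GF}(2)\) makes the degree (and hence the multiplicative depth) of a circuit easy to track:

$$ \mathsf {NOT}(x) = 1 \oplus x, \qquad \mathsf {AND}(x_1,\dots ,x_n) = \prod _{i=1}^{n} x_i, \qquad \mathsf {OR}(x_1,\dots ,x_n) = 1 \oplus \prod _{i=1}^{n} (1 \oplus x_i). $$

Parity and \(\mathsf {NOT}\) gates have degree 1 and contribute no multiplicative depth, while an \(\mathsf {AND}\) or \(\mathsf {OR}\) of fan-in n has degree n; thus a circuit in \(\mathsf {AC}^0_{\text {CM}}[2]\) computes a constant-degree polynomial, and a circuit in \(\mathsf {AC}^0_{\text {Q}}[2]\) may additionally contain a constant number of such high-degree gates.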

Limited Adversaries. We define adversaries also as families of randomized algorithms \(\{A_\lambda \}_\lambda \), one for each security parameter (note that this is a non-uniform notion of security). We denote the class of adversaries we consider as \(\mathcal {A}\), and in the rest of the paper we will also restrict \(\mathcal {A}\) to \(\mathsf {NC}^1\).

Infinitely-Often Security. We now move to define security against all adversaries \(\{A_\lambda \}_\lambda \) that belong to a class \(\mathcal {A}\). Our results achieve an “infinitely often” notion of security, which states that for all adversaries outside of our permitted class \(\mathcal {A}\) our security property holds infinitely often (i.e. for an infinite sequence of security parameters rather than for every sufficiently large security parameter). This limitation seems inherent to the techniques in this paper and in [DVV16]. We informally denote with \(\mathcal {X} \sim _{\varLambda } \mathcal {Y}\) the fact that two ensembles \(\mathcal {X}\) and \(\mathcal {Y}\) are indistinguishable by \(\mathsf {NC}^1\) adversaries for an infinite sequence of parameters \(\varLambda \). See also Appendix A.

3 Fine-Grained SHE

We start by recalling the public key encryption from [DVV16] which is secure against adversaries in \(\mathsf {NC}^1\).

The scheme is described in Fig. 1. Its security relies on the following result, implicit in [IK00a] (Footnote 11). We will also use this lemma when proving the security of our construction in Sect. 3.

Lemma 3.1

([IK00a]). If \(\mathsf {NC}^1 \subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}\) then there exist a distribution \(\mathcal {D}^{\text {kg}}_{\lambda }\) over \(\{0, 1\}^{\lambda \times \lambda }\), a distribution \(\mathcal {D}^{f}_{\lambda }\) over full-rank matrices in \(\{0, 1\}^{\lambda \times \lambda }\), and an infinite set \(\varLambda \subseteq \mathbb {N}\) such that

$$ \mathbf {M}^{\text {kg}}\sim _{\varLambda }\mathbf {M}^{\text {f}}$$

where \(\mathbf {M}^{\text {f}}\leftarrow \mathcal {D}^{f}_{\lambda }\) and \(\mathbf {M}^{\text {kg}}\leftarrow \mathcal {D}^{\text {kg}}_{\lambda }\).

The following result is central to the correctness of the scheme \(\mathsf {PKE}\) in Fig. 1 and is implicit in [DVV16].

Lemma 3.2

([DVV16]). There exists a sampling algorithm \(\mathsf {KSample}\) such that, for \((\mathbf {M}, \mathbf {k}) \leftarrow \mathsf {KSample}(1^{\lambda })\), the matrix \(\mathbf {M}\) is distributed according to \(\mathcal {D}^{\text {kg}}_{\lambda }\) (as in Lemma 3.1), and \(\mathbf {k}\) is a vector in the kernel of \(\mathbf {M}\) of the form

\(\mathbf {k}= (r_1, \, r_2, \, \dots , \, r_{\lambda -1}, \, 1) \in \{0, 1\}^{\lambda }\), where the \(r_i\) are uniformly random bits.

Fig. 1. PKE construction [DVV16]

Theorem 3.1

([DVV16]). Assume \(\mathsf {NC}^1 \subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}\). Then, the scheme \(\mathsf {PKE}= (\mathsf {PKE.Keygen}, \mathsf {PKE.Enc}, \mathsf {PKE.Dec})\) defined in Fig. 1 is a Public Key Encryption scheme secure against \(\mathsf {NC}^1\) adversaries. All algorithms in the scheme are computable in \(\mathsf {AC}^0[2]\).
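To fix ideas, here is a toy Python sketch consistent with the structure of \(\mathsf {PKE}\) as recalled above (Fig. 1 is not reproduced): decryption is the inner product \(\langle \mathbf {k}, \varvec{v}\rangle \), and correctness follows from \(\mathbf {M}\mathbf {k}=0\) and \(\mathbf {k}[\lambda ]=1\) (Lemma 3.2). The encryption below (a random vector in the row space of \(\mathbf {M}\), plus \(\mu \) in the last coordinate) is one concrete choice compatible with that decryption rule, and the stand-in key sampler only enforces the kernel relation; security requires sampling \(\mathbf {M}\) from the distribution \(\mathcal {D}^{\text {kg}}_{\lambda }\) of Lemma 3.1, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng()

def ksample(lam):
    """Stand-in for KSample (Lemma 3.2): outputs (M, k) with M @ k = 0 over GF(2)
    and k = (r_1, ..., r_{lam-1}, 1). A secure instantiation would sample M from
    the distribution D^kg of [IK00a]; this toy sampler only enforces the kernel relation."""
    k = np.append(rng.integers(0, 2, lam - 1), 1).astype(np.uint8)
    M = rng.integers(0, 2, (lam, lam)).astype(np.uint8)
    M[:, -1] = (M[:, :-1] @ k[:-1]) % 2   # since k[-1] = 1, this forces M @ k = 0 over GF(2)
    return M, k                            # pk = M, sk = k

def pke_enc(M, mu):
    """Encrypt a bit mu as v = M^T r + mu * e_lam over GF(2), with r uniform."""
    lam = M.shape[0]
    r = rng.integers(0, 2, lam).astype(np.uint8)
    v = (M.T @ r) % 2
    v[-1] ^= mu
    return v

def pke_dec(k, v):
    """Decrypt: <k, v> = r^T (M k) + mu * k[-1] = mu, since M k = 0 and k[-1] = 1."""
    return int(np.dot(k, v) % 2)

lam = 16
M, k = ksample(lam)
assert all(pke_dec(k, pke_enc(M, b)) == b for b in (0, 1))
```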

3.1 Leveled Homomorphic Encryption for \(\mathsf {AC}^0_{\text {CM}}[2]\) Functions Secure Against \(\mathsf {NC}^1\)

We denote by \(\varvec{x}[i]\) the i-th bit of a vector of bits \(\varvec{x}\). Below, the scheme \(\mathsf {PKE}= (\mathsf {PKE.Keygen},\mathsf {PKE.Enc}, \mathsf {PKE.Dec})\) is the one defined in Fig. 1. Our SHE scheme is defined by the following four algorithms:

  • \(\mathsf {HE.Keygen}_{\mathsf {sk}}(1^{\lambda }, L):\) For key generation, sample \(L+1\) key pairs \((\mathbf {M}_0, \mathbf {k}_0),\dots ,(\mathbf {M}_L, \mathbf {k}_L) \leftarrow \mathsf {PKE.Keygen}(1^{\lambda })\), and compute, for all \(\ell \in \{0, \dots , L-1\}\), \(i,j \in [\lambda ]\), the value

    $$ \varvec{a}_{\ell ,i,j} \leftarrow \mathsf {PKE.Enc}_{\mathbf {M}_{\ell +1}}(\mathbf {k}_{\ell }[i]\cdot \mathbf {k}_{\ell }[j]) \in \{0, 1\}^{\lambda }$$

    We define \(\mathbf {A}= \{\varvec{a}_{\ell ,i,j}\}_{\ell ,i,j}\) to be the set of all these values. The algorithm then outputs the secret key \(\mathsf {sk}= \mathbf {k}_L\) and the public key \(\mathsf {pk}= (\mathbf {M}_0, \mathbf {A})\). In the following we call \(\mathsf {evk}= \mathbf {A}\) the evaluation key. We point out a property that will be useful later: by the definition above, for all \(\ell \in \{0, \dots , L-1 \}\) and \(i,j \in [\lambda ]\) we have

    $$\begin{aligned} \langle \mathbf {k}_{\ell +1} \, , \varvec{a}_{\ell ,i,j} \rangle = \mathbf {k}_{\ell }[i] \cdot \mathbf {k}_{\ell }[j]\,. \end{aligned}$$
    (1)
  • \(\mathsf {HE.Enc}_{\mathsf {pk}}(\mu ):\) Recall that \(\mathsf {pk}\) includes \(\mathbf {M}_0\). To encrypt a message \(\mu \) we compute \(\varvec{v} \leftarrow \mathsf {PKE.Enc}_{\mathbf {M}_0}(\mu )\). The output ciphertext contains \(\varvec{v}\) in addition to a “level tag”, an index in \(\{0, \dots , L \}\) denoting the “multiplicative depth” of the generated ciphertext. The encryption algorithm outputs \(c = (\varvec{v}, 0)\).

  • \(\mathsf {HE.Dec}_{\mathbf {k}_L}(c):\) To decrypt a ciphertext (Footnote 12) \(c = (\varvec{v},L)\), compute \(\mathsf {PKE.Dec}_{\mathbf {k}_L}(\varvec{v})\), i.e. output

    $$ \langle \mathbf {k}_L \, , \varvec{v} \rangle $$
  • \(\mathsf {HE.Eval}_{\mathsf {evk}}(f, c_1,\dots ,c_n):\) where \(f : \{0, 1\}^n \rightarrow \{0, 1\}\). We require that f is represented as an arithmetic circuit in \(\text {GF}(2)\) with addition gates of unbounded fan-in and multiplication gates of fan-in 2. We also require the circuit to be layered, i.e. the set of gates can be partitioned into subsets (layers) such that wires only connect gates in adjacent layers; each layer should be composed homogeneously of either addition or multiplication gates. Finally, we require the number of multiplication layers (i.e. the multiplicative depth) of f to be L. We homomorphically evaluate f gate by gate; below we show how to perform multiplication (resp. addition) of two (resp. many) ciphertexts. Carrying out this procedure recursively we can homomorphically compute any circuit f of multiplicative depth L.

Ciphertext Structure During Evaluation. During the homomorphic evaluation a ciphertext will be of the form \(c = (\varvec{v}, \ell )\), where \(\ell \) is the “level tag” mentioned above. At any point of the evaluation, \(\ell \) is between 0 (for fresh ciphertexts at the input layer) and L (at the output layer). We define homomorphic evaluation only among ciphertexts at the same level; since the circuit is layered, we do not have to worry about evaluation occurring among ciphertexts at different levels. Consistent with the fact that the level tag represents the multiplicative depth of a ciphertext, addition gates keep the level of ciphertexts unchanged, whereas multiplication gates increase it by one. Finally, we keep the invariant that the output \(c = (\varvec{v}, \ell )\) of each gate evaluation is such that

$$\begin{aligned} \langle \mathbf {k}_{\ell } \, , \varvec{v} \rangle = \mu \end{aligned}$$
(2)

where \(\mu \) is the correct plaintext output of the gate. We prove our construction satisfies this invariant in Appendix B.

Homomorphic Evaluation of Gates:

  • Addition gates. Homomorphic evaluation of an addition gate on inputs \(c_1,\dots ,c_n\), where \(c_i = (\varvec{v}_i, \ell )\), is performed by outputting \(c_{\text {add}} = \left( \sum _{i=1}^{n} \varvec{v}_i, \; \ell \right) \), i.e. the component-wise sum over \(\text {GF}(2)\) of the input vectors, with the level tag unchanged.

  • Multiplication gates. We show how to multiply ciphertexts \(c, c'\), where \(c = (\varvec{v}, \ell )\) and \(c' = (\varvec{v}', \ell )\), to obtain an output ciphertext \(c_{\text {mult}} = (\varvec{v}_{\text {mult}}, \ell +1)\). The homomorphic multiplication algorithm sets

    $$ \varvec{v}_{\text {mult}} = \sum _{i,j \in [\lambda ]} h_{i,j} \cdot \varvec{a}_{\ell ,i,j}, $$

    where \(h_{i,j} = \varvec{v}[i] \cdot \varvec{v}'[j]\) for \(i,j \in [\lambda ]\). The final output ciphertext is \(c_{\text {mult}} = (\varvec{v}_{\text {mult}}, \ell +1)\): by Eq. (1), \(\langle \mathbf {k}_{\ell +1} \, , \varvec{v}_{\text {mult}} \rangle = \sum _{i,j} h_{i,j}\, \mathbf {k}_{\ell }[i]\, \mathbf {k}_{\ell }[j] = \langle \mathbf {k}_{\ell } \, , \varvec{v} \rangle \cdot \langle \mathbf {k}_{\ell } \, , \varvec{v}' \rangle \), so the invariant of Eq. (2) is preserved (a toy implementation of both gate evaluations is sketched below).
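Continuing the toy \(\mathsf {PKE}\) sketch given after Theorem 3.1 (same caveats: the stand-in key sampler is for illustration only and provides no security), the two gate evaluations and the invariant of Eq. (2) can be exercised as follows.

```python
import numpy as np
# Continues the toy PKE sketch above: reuses ksample and pke_enc.

def he_keygen(lam, L):
    """HE.Keygen: L+1 PKE key pairs plus the evaluation key A satisfying Eq. (1)."""
    pairs = [ksample(lam) for _ in range(L + 1)]
    evk = {}
    for lvl in range(L):                   # a_{lvl,i,j} encrypts k_lvl[i]*k_lvl[j] under M_{lvl+1}
        _, k_lvl = pairs[lvl]
        M_next, _ = pairs[lvl + 1]
        for i in range(lam):
            for j in range(lam):
                evk[(lvl, i, j)] = pke_enc(M_next, int(k_lvl[i] & k_lvl[j]))
    pk = (pairs[0][0], evk)                # pk = (M_0, A); evaluation key kept with pk
    return pk, pairs[-1][1], [k for _, k in pairs]   # sk = k_L; all k_lvl returned for testing only

def he_enc(pk, mu):
    return (pke_enc(pk[0], mu), 0)         # fresh ciphertexts carry level tag 0

def he_add(cs):
    """Addition gate: XOR the vectors, keep the level tag."""
    return (np.bitwise_xor.reduce([v for v, _ in cs]), cs[0][1])

def he_mult(pk, c1, c2):
    """Multiplication gate: re-linearize via the evaluation key; level goes up by one."""
    evk = pk[1]
    (v1, lvl), (v2, _) = c1, c2
    out = np.zeros(len(v1), dtype=np.uint8)
    for i in range(len(v1)):
        for j in range(len(v2)):
            if v1[i] & v2[j]:              # h_{i,j} = v1[i] * v2[j]
                out ^= evk[(lvl, i, j)]    # sum_{i,j} h_{i,j} * a_{lvl,i,j} over GF(2)
    return (out, lvl + 1)

def he_dec(k_lvl, c):
    return int(np.dot(k_lvl, c[0]) % 2)    # invariant of Eq. (2): <k_lvl, v> = plaintext

lam, L = 8, 1
pk, sk, ks = he_keygen(lam, L)
for x in (0, 1):
    for y in (0, 1):
        assert he_dec(ks[0], he_add([he_enc(pk, x), he_enc(pk, y)])) == x ^ y
        assert he_dec(ks[1], he_mult(pk, he_enc(pk, x), he_enc(pk, y))) == (x & y)
```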

The following theorem states the security of our scheme under our complexity assumption.

Theorem 3.2

(Security). The scheme \(\mathsf {HE}\) is \(\text {CPA}\) secure against \(\mathsf {NC}^1\) adversaries (Definition A.5) under the assumption \(\mathsf {NC}^1 \subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}\).

3.2 Efficiency and Homomorphic Properties of Our Scheme

Our scheme is secure against adversaries in the class \(\mathsf {NC}^1\). This implies that we can run \(\mathsf {HE.Eval}\) only on functions f that are in \(\mathsf {NC}^1\), since otherwise the evaluator would be able to break the semantic security of the scheme. However, we also have to ensure that the whole homomorphic evaluation stays in \(\mathsf {NC}^1\). The problem is that homomorphically evaluating f has an overhead with respect to the “plain” evaluation of f. Therefore, we need to determine for which functions f we can guarantee that \(\mathsf {HE.Eval}(f, \dots )\) stays in \(\mathsf {NC}^1\). The class of such functions turns out to be the class of functions implementable in constant multiplicative depth, i.e. \(\mathsf {AC}^0_{\text {CM}}[2]\) (Footnote 13).

These observations, plus the fact that the invariant in Eq. 2 is preserved throughout homomorphic evaluation, imply the following result.

Theorem 3.3

The scheme \(\mathsf {HE}\) is leveled \(\mathsf {AC}^0_{\text {CM}}[2]\)-homomorphic. Key generation, encryption, decryption and evaluation are all computable in \(\mathsf {AC}^0_{\text {CM}}[2]\).

3.3 Beyond Constant Multiplicative Depth

In the previous section we saw how our scheme is homomorphic for a class of constant-depth, unbounded fan-in arithmetic circuits in \(\text {GF}(2)\) with constant multiplicative depth. We now show how to overcome this limitation by first extending techniques from [Raz87] to approximate \(\mathsf {AC}^0[2]\) circuits with low-degree polynomials and then designing a construction that internally uses our scheme \(\mathsf {HE}\) from Sect. 3.1.

Approximating \(\varvec{\mathsf {AC}^0_{\text {Q}}[2]}\) in \(\varvec{\mathsf {AC}^0_{\text {CM}}[2]}\). Our approach to homomorphically evaluating a function \(f \in \mathsf {AC}^0_{\text {Q}}[2]\) is as follows. Instead of evaluating f we evaluate \(f^*\), an approximate version of f that is computable in \(\mathsf {AC}^0_{\text {CM}}[2]\). The function \(f^*\) is randomized, and we denote by \(n'\) the number of random bits \(f^*\) takes as input (in addition to the n bits of the input x). If \(\varvec{\hat{x}} = {\mathsf {Enc}}(x)\) and \(\varvec{\hat{r}} = {\mathsf {Enc}}(r)\), where r is uniformly random in \(\{0, 1\}^{n'}\), then decrypting \(\mathsf {HE.Eval}(f^*, \varvec{\hat{x}}, \varvec{\hat{r}})\) (Footnote 14) yields f(x) with constant error probability. One way to reduce the error would be to let evaluation compute \(f^*\) s times on s independent random inputs; however, this requires particular care to avoid using majority gates in the decryption algorithm. With this goal in mind we extend the output of the approximating function \(f^*\): when performing evaluation we perform s evaluations of \(f'\), the “extension” of \(f^*\). This additional information is returned (encrypted) by the evaluation algorithm and allows correct decryption with overwhelming probability, in low depth and without majority gates.

In the next constructions we will make use of the functions \(\mathsf {GenApproxFun}\), \({\mathsf {GenDecodeAux}}\) and \({\mathsf {DecodeApprox}}\), here only informally defined (Footnote 15). The function \(\mathsf {GenApproxFun}(f)\) returns the (extended) approximating function \(f'\); the function \({\mathsf {GenDecodeAux}}(f)\) returns a constant-size string \(\mathbf {aux}_f\) used to decode (multiple) outputs of \(f'(x)\); the function \({\mathsf {DecodeApprox}}(\mathbf {aux}_f, \varvec{y}^{\text {out}}_1, \dots , \varvec{y}^{\text {out}}_s )\) returns f(x) w.h.p. if each \(\varvec{y}^{\text {out}}_i\) is an output of \(f'(x;r)\) for independently random r.
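The heart of the approximation is the classic trick from [Raz87] of replacing an unbounded fan-in \(\mathsf {OR}\) by a product of a few random parities. The following self-contained Python sketch (illustrative only, not the actual \(\mathsf {GenApproxFun}\)) checks empirically that the degree-t approximation has one-sided error at most \(2^{-t}\).

```python
import random

def approx_or(x, t, rng=random):
    """Degree-t GF(2) approximation of OR(x_1,...,x_n) in the style of [Raz87]:
       1 XOR prod_{j=1..t} (1 XOR parity of a random subset of the x_i).
       Always correct when OR(x) = 0; errs with probability 2^-t otherwise.
       (In Sect. 3.3 the subset choices play the role of the n' extra random input bits.)"""
    prod = 1
    for _ in range(t):
        parity = 0
        for xi in x:
            if xi and rng.random() < 0.5:   # include each index in the subset w.p. 1/2
                parity ^= 1
        prod &= 1 ^ parity
    return 1 ^ prod

# Empirical check of the one-sided error on the hardest-looking input (a single 1).
t, trials = 5, 20000
x = [1] + [0] * 31
err = sum(approx_or(x, t) != 1 for _ in range(trials)) / trials
print(f"observed error {err:.4f} vs bound {2**-t:.4f}")
```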

Homomorphic Evaluations of \(\varvec{\mathsf {AC}^0_{\text {Q}}[2]}\) Circuits. Below is our construction of a homomorphic scheme that can evaluate all circuits in \(\mathsf {AC}^0_{\text {Q}}[2]\) while itself remaining in \(\mathsf {AC}^0[2]\). This time, in order to evaluate a circuit C, we perform several homomorphic evaluations of the randomized circuit \(C'\) (as in Lemma B.2). To obtain the plaintext output of C, we decrypt all the ciphertext outputs and use \({\mathsf {DecodeApprox}}\). Notice that this scheme is still compact. As we use a randomized approach to evaluate f, the scheme \(\mathsf {HE'}\) is implicitly parametrized by a soundness parameter s; intuitively, the probability of a function f being evaluated incorrectly is upper bounded by \(2^{-s}\).

For our new scheme we will use the following auxiliary functions:

Definition 3.1

(Auxiliary Functions for \(\mathsf {HE'}\)). Let \(f: \{0, 1\}^n \rightarrow \{0, 1\}\) be represented as an arithmetic circuit as in \(\mathsf {HE}\), and let \(\mathsf {pk}\) be a public key for the scheme \(\mathsf {HE}\) (which includes the evaluation key). Let s be a soundness parameter, let \(f'\) be the extended approximating function described above, and let \(n' = O(n)\) be the number of additional random bits that \(f'\) takes as input.

  • \(\mathsf {SampleAuxRandomness}_s(\mathsf {pk}, f'):\)

    1. Sample \(s\cdot n'\) random bits \(r^{(1)}_1,\dots ,r^{(1)}_{n'}, \dots , r^{(s)}_1,\dots ,r^{(s)}_{n'}\);

    2. Compute \(\hat{r}^{(i)}_j \leftarrow \mathsf {HE.Enc}_{\mathsf {pk}}(r^{(i)}_j)\) for all \(i \in [s]\), \(j \in [n']\), and let \(\varvec{\hat{r}}_{\text {aux}}= \{ \hat{r}^{(i)}_j \ | \ i \in [s], j \in [n'] \}\);

    3. Output \(\varvec{\hat{r}}_{\text {aux}}\).

  • \(\mathsf {EvalApprox}_s(\mathsf {pk}, f', c_1,\dots ,c_{n}, \varvec{\hat{r}}_{\text {aux}}):\)

    1. Let \(\varvec{\hat{r}}_{\text {aux}}= \{ \hat{r}^{(i)}_j \ | \ i \in [s], j \in [n'] \} \).

    2. For \(i \in [s]\), compute \(\varvec{c}^{\text {out}}_i \leftarrow \mathsf {HE.Eval}_{\mathsf {evk}}(f', c_1, \dots , c_{n}, \hat{r}^{(i)}_1, \dots , \hat{r}^{(i)}_{n'})\);

    3. Output \(\varvec{c} = (\varvec{c}^{\text {out}}_1, \dots , \varvec{c}^{\text {out}}_{s})\) (Footnote 16).

The new scheme \(\mathsf {HE'}\) with soundness parameter s follows. Notice that the evaluation function outputs an auxiliary string \(\mathbf {aux}_f\) together with the proper ciphertext \(\varvec{c}\); this is necessary for correct decoding in the decryption phase.

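The construction itself is given as a figure; as a rough guide, the following Python sketch (assuming placeholder implementations of the \(\mathsf {HE}\) algorithms and of \(\mathsf {GenApproxFun}\), \(\mathsf {GenDecodeAux}\), \(\mathsf {DecodeApprox}\), \(\mathsf {SampleAuxRandomness}\) and \(\mathsf {EvalApprox}\) as in Definition 3.1) shows how its algorithms plug together the pieces above.

```python
# Sketch of HE' with soundness parameter s, assuming implementations of:
#   he_keygen, he_enc, he_eval, he_dec          (the scheme HE of Sect. 3.1)
#   gen_approx_fun(f)   -> f_ext                (GenApproxFun: extended approximation of f)
#   gen_decode_aux(f)   -> aux_f                (GenDecodeAux: constant-size decoding hint)
#   decode_approx(aux_f, y_outs) -> f(x) w.h.p. (DecodeApprox)
#   sample_aux_randomness, eval_approx          (Definition 3.1)

def he2_keygen(lam, L, s):
    pk, sk = he_keygen(lam, L)
    return pk, sk                                # keys are those of HE; s is a public parameter

def he2_enc(pk, mu):
    return he_enc(pk, mu)                        # encryption unchanged

def he2_eval(pk, f, cs, s):
    f_ext = gen_approx_fun(f)                    # AC0_CM[2] (extended) approximation f'
    aux_f = gen_decode_aux(f)                    # constant-size decoding information
    r_aux = sample_aux_randomness(pk, f_ext, s)  # s * n' encrypted random bits
    c = eval_approx(pk, f_ext, cs, r_aux, s)     # s homomorphic evaluations of f'
    return c, aux_f                              # ciphertext plus decoding hint

def he2_dec(sk, c, aux_f):
    y_outs = [he_dec(sk, c_out) for c_out in c]  # decrypt each of the s outputs
    return decode_approx(aux_f, y_outs)          # majority-free decoding; error at most 2^-s
```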

The following theorem summarizes the properties of this construction.

Theorem 3.4

The scheme \(\mathsf {HE'}\) above with soundness parameter \(s = \varOmega (\lambda )\) is leveled \(\mathsf {AC}^0_{\text {Q}}[2]\)-homomorphic. Key generation, encryption and evaluation can be computed in \(\mathsf {AC}^0_{\text {CM}}[2]\). Decryption is computable in \(\mathsf {AC}^0_{\text {Q}}[2]\).

4 Fine-Grained Verifiable Computation

In this section we describe our private verifiable computation scheme. Our constructions are based on the techniques in [CKV10] to obtain (reusable) verifiable computation from fully homomorphic encryption; see Sect. 1.2 for a high-level description.

4.1 A One-time Verification Scheme

In Fig. 2 we describe an adaptation of the one-time secure delegation scheme from [CKV10]. We make non-black-box use of our homomorphic encryption scheme \(\mathsf {HE'}\) (Sect. 3.3) with soundness parameter \(s= \lambda \). Notice that, during the preprocessing phase, we fix the “auxiliary randomness” for \(\mathsf {EvalApprox}\) (and thus for \(\mathsf {HE'.Eval}\)) once and for all, and use that same randomness for all input instances; this choice does not affect the security of the construction. We remind the reader that, to simplify notation, we consider the evaluation key of our somewhat homomorphic encryption scheme as part of its public key.

If x is a vector of bits \(x_1, \dots , x_n\), below we denote by \(\mathsf {HE'.Enc}(x)\) the concatenation of the bit-by-bit ciphertexts \(\mathsf {HE'.Enc}(x_1), \dots , \mathsf {HE'.Enc}(x_n)\), and by \(\mathsf {HE'.Enc}(\bar{0})\) the concatenation of n encryptions of 0, i.e. \(\mathsf {HE'.Enc}(0)\).

Fig. 2. One-Time Delegation Scheme \(\mathcal{VC}\)

The scheme \(\mathcal{VC}\) in Fig. 2 has overwhelming completeness and is one-time secure when t is chosen to be \(\omega (\log (\lambda ))\). We prove these results in Appendix C.

Remark 4.1

(Efficiency of \(\mathcal{VC}\)). In the following we consider the verifiable computation of a function \(f: \{0, 1\}^n \rightarrow \{0, 1\}^m\) computable by an \(\mathsf {AC}^0_{\text {Q}}[2]\) circuit of size S.

  • \(\mathsf {VC.KeyGen}\) is computable by an \(\mathsf {AC}^0[2]\) circuit of size \(O({\mathsf {poly}}(\lambda )S)\);

  • \(\mathsf {VC.ProbGen}\) is computable by an \(\mathsf {AC}^0[2]\) circuit of size \(O({\mathsf {poly}}(\lambda )(m+n))\);

  • \(\mathsf {VC.Compute}\) is computable by an \(\mathsf {AC}^0[2]\) circuit of size \(O({\mathsf {poly}}(\lambda )S)\);

  • \(\mathsf {VC.Verify}\) is computable by an \(\mathsf {AC}^0[2]\) circuit of size \(O({\mathsf {poly}}(\lambda )(m+n))\).

The (constant) depth of \(\mathsf {VC.ProbGen}\) and \(\mathsf {VC.Verify}\) is independent of the depth of f (Footnote 17).

4.2 A Reusable Verification Scheme

We obtain our reusable verification scheme \(\overline{\mathcal{VC}}\) by applying the transformation in [CKV10] from one-time sound verification schemes, through fully homomorphic encryption. The core idea behind this transformation is to encapsulate all the operations of a one-time verifiable computation scheme (such as \(\mathcal{VC}\) in Fig. 2) under homomorphic encryption. We instantiate this transformation with the simpler of our two somewhat homomorphic encryption schemes, \(\mathsf {HE}\) (described in Sect. 3.1). The full construction of \(\overline{\mathcal{VC}}\) is in Appendix C (Fig. 3), and its shape is sketched below.
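A minimal Python sketch of the encapsulation step, assuming placeholder implementations `ot_*` of the one-time scheme in Fig. 2 and `he_*` of the scheme in Sect. 3.1 (the actual construction, with its low-depth requirements, is the one in Appendix C): each query is run “under the hood” of a fresh \(\mathsf {HE}\) key, so nothing the server learns in one query helps it identify the test ciphertexts in the next.

```python
# Reusable scheme VC-bar from a one-time scheme VC (Fig. 2) and the scheme HE (Sect. 3.1).
# Assumed placeholders: ot_keygen, ot_probgen, ot_compute, ot_verify (one-time scheme)
# and he_keygen, he_enc, he_eval, he_dec.

def vcbar_keygen(lam, f):
    """Run the one-time preprocessing once; its keys are reused for every query."""
    ek, vk = ot_keygen(lam, f)
    return ek, vk

def vcbar_probgen(vk, x, lam, L):
    """Encode x with the one-time scheme, then encrypt the encoding under a FRESH HE key."""
    sigma_x, tau_x = ot_probgen(vk, x)       # one-time encoded input + private verification state
    pk, sk = he_keygen(lam, L)               # fresh key for this query only
    return (pk, he_enc(pk, sigma_x)), (sk, tau_x)

def vcbar_compute(ek, f, query):
    """Server: run the one-time worker homomorphically, never seeing sigma_x in the clear."""
    pk, c_sigma = query
    return he_eval(pk, lambda sigma: ot_compute(ek, f, sigma), c_sigma)

def vcbar_verify(state, response):
    """Client: decrypt the response and verify it exactly as in the one-time scheme."""
    sk, tau_x = state
    return ot_verify(tau_x, he_dec(sk, response))
```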

Remark 4.2

(Efficiency of \(\overline{\mathcal{VC}}\)). The efficiency of \(\overline{\mathcal{VC}}\) is analogous to that of \(\mathcal{VC}\) with the exception of a circuit size overhead of a factor \(O(\lambda )\) on the problem generation and verification algorithms and of \(O(\lambda ^2)\) for the computation algorithm. The (constant) depth of \(\mathsf {\overline{VC}.ProbGen}\) and \(\mathsf {\overline{VC}.Verify}\) is independent of the depth of f.

Theorem 4.1

(Completeness of \(\overline{\mathcal{VC}}\)). The verifiable computation scheme \(\overline{\mathcal{VC}}\) has overwhelming completeness (Definition A.10) for the class \(\mathsf {AC}^0_{\text {Q}}[2]\).

Theorem 4.2

(Many-Times Soundness of \(\overline{\mathcal{VC}}\)). Under the assumption that \(\mathsf {NC}^1 \subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}\) the scheme \(\overline{\mathcal{VC}}\) is many-times secure against \(\mathsf {NC}^1\) adversaries whenever t is chosen to be \(\omega (\log (\lambda ))\) in the underlying scheme \(\mathcal{VC}\).