1 Introduction

Fully Homomorphic Signatures. A signature scheme is said to be homomorphic when given signatures \(\sigma _1,\ldots ,\sigma _n\) of messages \(m_1,\ldots ,m_n\), it is possible to publicly compute a signature \(\sigma _f\) of the message \(f(m_1,\ldots ,m_n)\) for any function f. This evaluated signature \(\sigma _f\) is verified with respect to the verification key of the scheme, the message \(m=f(m_1,\ldots ,m_n)\) and the function f.

Given a set of signatures \(\sigma _1,\ldots ,\sigma _n\), unforgeability prevents an adversary from deriving a signature \(\sigma _f\) that verifies with respect to a function f and a message \(y \ne f(m_1,\ldots ,m_n)\). In other words, the signature certifies that the message corresponds to the proper evaluation of the function f on the original messages.

Akin to homomorphic encryption, the signing algorithm is a homomorphism from the message space to the signature space. Computing the addition of signatures \(\sigma _1 \boxplus \sigma _2\) results in the signature of the message \(m_1 + m_2\), where \(\boxplus \) and \(+\) denote the addition in the signature and message space, respectively. The same goes for multiplication. Schemes equipped with a ring homomorphism (with both addition and multiplication) are referred to as fully homomorphic, since these operations are sufficient to capture all possible Boolean functions.
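A toy computation (independent of any particular scheme) makes the last claim concrete: over GF(2), addition is XOR and multiplication is AND, and these two ring operations directly express NAND, which is universal for Boolean circuits.

```python
# Over GF(2), a scheme that is homomorphic for both addition and
# multiplication can evaluate NAND(a, b) = 1 + a*b (mod 2), and NAND
# gates suffice to build any Boolean circuit.
def nand(a: int, b: int) -> int:
    return (1 + a * b) % 2

truth_table = [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert truth_table == [1, 1, 1, 0]
```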

Applications of FHS. Homomorphic signatures are applicable in a wide range of scenarios, such as:

  • Integrity for Network Coding. Network performance can be improved by encoding outgoing messages into vectors and letting each node perform linear operations on these encodings, instead of simply forwarding them. Unfortunately, because these encodings are modified by every node, the integrity property is lost when using traditional signatures. Homomorphic signatures (or their secret-key counterpart, as in [AB09]) that support linear operations can be used to preserve integrity throughout the network. In particular, each node updates not only the encoded messages, but also the homomorphic signatures associated with them.
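A minimal sketch of the linear case (a toy one-time MAC in the spirit of secret-key schemes such as [AB09], not the actual construction; the key size and field are illustrative): the tag of any linear combination of vectors equals the same linear combination of the tags, so intermediate nodes can update tags themselves.

```python
# Toy one-time linearly homomorphic MAC over a prime field (illustration
# only; a real scheme needs per-vector labels and a PRF, and this toy
# variant is only usable once per key).
import random

p = 2_147_483_647                              # field modulus (a Mersenne prime)
k = [random.randrange(p) for _ in range(4)]    # secret MAC key, one entry per coordinate

def mac(v):
    # Tag = inner product <k, v> mod p; linear in v by construction.
    return sum(ki * vi for ki, vi in zip(k, v)) % p

v1, v2 = [1, 2, 3, 4], [5, 6, 7, 8]
t1, t2 = mac(v1), mac(v2)
a, b = 10, 20                                  # coefficients chosen by a node
combined = [(a * x + b * y) % p for x, y in zip(v1, v2)]
assert mac(combined) == (a * t1 + b * t2) % p  # the node updates the tag itself
```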

  • Verifying Delegated Computations. A client that wishes to delegate some computation on his data to a cloud provider could authenticate it via homomorphic signatures, then send it away to the cloud. The cloud performs the computation and updates the signatures accordingly, then sends the result back to the client, who can then verify the evaluated signature. If verification is successful, then the client is guaranteed that the cloud computed the intended function on the data. It is the perfect complement to fully homomorphic encryption (FHE), which preserves the confidentiality of the data in use, but not its integrity.

  • Anonymous Credentials. Consider the scenario where a user obtains signatures \(\sigma _1,\ldots ,\sigma _n\) of her credentials \(m_1,\ldots ,m_n\), produced by some authority (the authority is associated to the \(\textsf{vk} \) of the signature scheme). Later on, the user is asked by a service provider (say, an insurance company) to prove that her credentials satisfy a policy expressed by a predicate \(\textsf{P}\). The user can compute the signature \(\sigma _{\textsf{P}}\) and send it to the provider. If this signature verifies successfully with respect to \(\textsf{vk} \) and the message 1 (the output of the predicate should be 1), then it proves the user’s credentials fulfill the policy. Assuming the homomorphic signatures satisfy some mild re-randomizability property (so that evaluated signatures look fresh), this does not reveal the underlying credentials to the provider (only that they satisfy the policy). Giving the policy explicitly to the user provides some transparency (for instance, the predicate \(\textsf{P}\) can be signed by a trusted regulator, ensuring the insurance company is not performing some discriminatory screening). We can even evaluate a function f on the signatures that not only indicates whether the user is eligible to an insurance scheme, but also outputs the price to be paid based on the credentials.

State of the Art. The first construction of homomorphic signatures [AB09] was limited to additive homomorphism in the secret-key setting, i.e. it is a message authentication code (MAC) scheme. Later on, [BF11a] built the first homomorphic signature for constant-degree polynomials, subsequently improved by [CFW14]. In [GW13], the authors built the first fully homomorphic MAC from FHE, while [CF13] built a homomorphic MAC with better efficiency for a restricted class of functions. Then, [GVW15] built the first leveled fully homomorphic signature (FHS) scheme.

All existing works suffer from the fact that the depth of the functions that can be homomorphically evaluated is bounded at setup. In other words, these are leveled FHS. This stands in contrast with FHE, where unleveled schemes can be obtained via bootstrapping [Gen09] and circular security. Bootstrapping requires an FHE encryption of the secret decryption key, and relies on evaluating homomorphically the (shallow) decryption algorithm to “refresh” ciphertexts. This idea is not straightforwardly transferable to the signature case, and unleveled FHS have so far been elusive.

Another approach to building FHS is to use Succinct Non-interactive Arguments of Knowledge (SNARKs) for NP, but this requires the use of strong knowledge assumptions, which we discuss in more detail in the full version of this paper [GU23].

Given this state of affairs, a natural question comes up:

Can we build unleveled FHS from falsifiable assumptions?

This was left as an open problem in [GVW15], and has remained unsolved until our construction.

Our Result. We answer the question positively. Namely, we build the first unleveled FHS from falsifiable assumptions, in the standard model. Our feasibility result relies on indistinguishability obfuscation (iO), of which promising constructions appeared recently in [BDGM20a, JLS21, GP21, WW21, AP20, BDGM20b, DQV+21, JLS22], unleveled fully homomorphic encryption, and a non-interactive zero-knowledge proof system (NIZK). While iO is not a falsifiable assumption itself (see footnote 1), most of the iO candidates rely on falsifiable assumptions. The second building block, fully homomorphic encryption, can be instantiated using circularly-secure LWE [GSW13], or alternatively using indistinguishability obfuscation [CLTV15]. Instantiating the FHE scheme using [CLTV15] yields a fully homomorphic signature construction that does not require any circular security assumption.

Building Blocks. We give more details on the building blocks and the assumptions from which they can be instantiated. To build our FHS, we use an unleveled Fully-Homomorphic Encryption (FHE) scheme, which can be chosen to be either:

  • a variant of the FHE scheme from [GSW13], slightly modified to ensure that it has unique random coins (which is needed for technical reasons in the proof). This scheme can be built from circularly-secure LWE.

  • the FHE scheme of [CLTV15], which is instantiable using subexponentially-secure iO and a re-randomizable public-key encryption scheme. This second type of FHE scheme does not require a circular assumption. Moreover, the re-randomizable encryption scheme can be any one of the following: Goldwasser-Micali [GM82], ElGamal [ElG85], Paillier [Pai99] or Damgard-Jurik [DJ01] (which are secure assuming QR, DDH, or DCR).

Moreover, we rely on Non-Interactive Zero Knowledge (NIZK) proof systems satisfying a proof of knowledge property and composable zero-knowledge, which can also be built from subexponentially secure iO and lossy trapdoor functions [HU19]. Lossy trapdoor functions can be based on a multitude of standard assumptions such as DDH, k-LIN, QR or DCR. Other NIZK systems also offer the properties required, but from bilinear maps [GS08].

In the NIZKs above [HU19, GS08], the common reference string (CRS) can be generated in one of two modes: honestly, yielding a binding CRS that ensures soundness (i.e. the fact that only true statements can be proved), or alternatively in a hiding way, providing a simulation mode that ensures zero-knowledge. In fact, the binding CRS is generated together with an extraction trapdoor that can be used to efficiently extract a witness from any valid proof (thereby ensuring that the statement proved is indeed true). The simulated CRS is generated together with a simulation trapdoor, which can be used to generate proofs of any statement (without requiring a witness). The two modes (real or simulated) are computationally indistinguishable.

Technical Overview

Overview of Our Construction. The verification key \(\textsf{vk} \) of our scheme contains several FHE encryptions of an arbitrary message (for example the message equal to 0). The number of such encryptions, \(N \), determines the arity of the functions that can be homomorphically evaluated. We require that the FHE is unleveled. This differs from the FHS scheme of [GVW15], which uses homomorphic commitments instead of FHE encryptions. That scheme crucially relies on the fact that these commitments are non-binding, which prevents bootstrapping and only yields leveled FHS. To produce signatures, we rely on the NIZK proof system. To sign a message \(m_i\) for \(i=1,\ldots ,N \), the signer produces a simulated proof stating (falsely) that the i’th encryption from \(\textsf{vk} \), which we denote by \(\textsf{ct} _i\), is an FHE encryption of \(m_i\). This can be done since the NIZK common reference string CRS is simulated with an associated simulation trapdoor \(\textsf{td}_{\textsf{sim}} \). Creating these simulated proofs requires the trapdoor, which is set to be the signing key. A signature is simply the ZK proof \(\pi _i\) stating that the ciphertext \(\textsf{ct} _i\) is an encryption of \(m_i\). To homomorphically evaluate a function f on signatures \(\sigma _1,\ldots ,\sigma _N \) of the messages \(m_1,\ldots ,m_N \), we use an obfuscated circuit containing the simulation trapdoor \(\textsf{td}_{\textsf{sim}} \) that, given as input the tuple \((\sigma _1,m_1,\ldots ,\sigma _N ,m_N ,f)\), first checks that the signatures \(\sigma _i\) are valid ZK proofs (of false statements), by running the verification algorithm of the NIZK proof system. If the check is successful, then it homomorphically evaluates the function f on the FHE encryptions \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \) that are part of \(\textsf{vk} \), which yields an FHE ciphertext \(\textsf{ct} _f\). 
It also generates a proof \(\pi \) that \(\textsf{ct} _f\) is an FHE encryption of \(f(m_1,\ldots ,m_N )\), using \(\textsf{td}_{\textsf{sim}} \). The signature \(\sigma _f\) is set to be the proof \(\pi \), which the evaluation circuit outputs. To verify a signature \(\sigma _f\) with respect to a function f and a value y, the verification algorithm computes \(\textsf{ct} _f\) by evaluating f on the FHE encryptions \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \) from \(\textsf{vk} \) and verifies that \(\sigma _f\) is a valid proof stating that \(\textsf{ct} _f\) is an FHE encryption of y.
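The data flow of signing, evaluation and verification can be sketched as follows. This is only a toy model of the flow, with no security whatsoever: the NIZK is mocked by a MAC keyed with the "simulation trapdoor", the FHE evaluation by an opaque label, and all names are illustrative.

```python
# Toy data-flow sketch of the FHS construction (NOT secure): a proof for a
# statement is an HMAC under the simulation trapdoor, so anyone holding
# td_sim can "prove" any (possibly false) statement, as simulation does.
import hashlib
import hmac

TD_SIM = b"simulation-trapdoor"   # hypothetical; stands in for td_sim

def sim_prove(statement: bytes) -> bytes:
    return hmac.new(TD_SIM, statement, hashlib.sha256).digest()

def verify_proof(statement: bytes, proof: bytes) -> bool:
    return hmac.compare_digest(sim_prove(statement), proof)

# vk holds N "FHE encryptions of 0" (here: just opaque labels ct_0, ct_1).
N = 2
vk = [f"ct_{i}".encode() for i in range(N)]

def sign(i: int, m: int) -> bytes:
    # Signature = simulated proof that ct_i encrypts m (a false statement).
    return sim_prove(vk[i] + b"|enc-of|" + str(m).encode())

def eval_nand(sig0: bytes, m0: int, sig1: bytes, m1: int):
    # Obfuscated Eval: verify both input proofs, evaluate the gate on the
    # messages and "homomorphically" on the ciphertexts, then re-prove.
    assert verify_proof(vk[0] + b"|enc-of|" + str(m0).encode(), sig0)
    assert verify_proof(vk[1] + b"|enc-of|" + str(m1).encode(), sig1)
    y = 1 - (m0 & m1)                                 # NAND on the messages
    ct_f = b"NAND(" + vk[0] + b"," + vk[1] + b")"     # mocked FHE evaluation
    return y, sim_prove(ct_f + b"|enc-of|" + str(y).encode())

def verify(sig_f: bytes, y: int) -> bool:
    # Recompute ct_f from vk and check the proof against the claimed value y.
    ct_f = b"NAND(" + vk[0] + b"," + vk[1] + b")"
    return verify_proof(ct_f + b"|enc-of|" + str(y).encode(), sig_f)

s0, s1 = sign(0, 1), sign(1, 1)
y, sig_f = eval_nand(s0, 1, s1, 1)
assert y == 0 and verify(sig_f, 0)
```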

Let us now have a look at the proof of unforgeability. For simplicity, we consider the selective setting, where the adversary first sends messages \(m^\star _1,\ldots ,m^\star _N \), then receives \(\textsf{vk} \) and the signatures \(\sigma _1^\star ,\ldots ,\sigma _N ^\star \). Finally, the adversary sends a forgery \((\sigma _f,f,y)\). It wins if the signature \(\sigma _f\) verifies successfully with respect to \(\textsf{vk},f,y\) and \(y \ne f(m_1^\star ,\ldots ,m_N ^\star )\). The first step of the proof is to switch the FHE encryptions \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \) of 0 in \(\textsf{vk} \) to FHE encryptions of \(m_1^\star ,\ldots ,m_N ^\star \), respectively. This way, we can change the signatures \(\sigma _i^\star \) to proofs that are computed using a witness (where the witness is the randomness used to compute the FHE encryptions in \(\textsf{vk} \)). The main implication is that we do not need to simulate proofs using \(\textsf{td}_{\textsf{sim}} \) anymore. The intent is to get rid of \(\textsf{td}_{\textsf{sim}} \) altogether and switch to an honestly computed CRS, so that we can use the soundness of the NIZK to prevent forgeries. Unfortunately, it is not clear at this point how to remove \(\textsf{td}_{\textsf{sim}} \) from \(\textsf{Eval} \), the obfuscated circuit that performs the homomorphic evaluations. What if we use proofs of knowledge? This way, if the signatures input to the \(\textsf{Eval} \) algorithm are valid ZK proofs, then \(\textsf{Eval} \) can efficiently extract witnesses (i.e. the randomness of the corresponding FHE ciphertexts), which can be used to compute the randomness of the evaluated FHE ciphertext. This requires a so-called randomness homomorphism of the FHE scheme. 
Namely, given the FHE secret key \(\textsf{sk} \), randomness \(r_1,r_2\) and messages \(m_1,m_2\) such that \(\textsf{ct} _1 = \textsf{FHE}.\textsf{Enc} (\textsf{pk},m_1;r_1)\) and \(\textsf{ct} _2=\textsf{FHE}.\textsf{Enc} (\textsf{pk},m_2;r_2)\), one can compute a randomness r such that \(\textsf{FHE}.\textsf{EvalNAND} (\textsf{ct} _1,\textsf{ct} _2) = \textsf{FHE}.\textsf{Enc} (\textsf{pk},\textsf{NAND} (m_1,m_2);r)\). A stronger variant, where the randomness r can be computed using only the public key \(\textsf{pk} \), is satisfied by most lattice-based FHE schemes (e.g. [GSW13]), while the secret-key variant is satisfied by the FHE scheme from [CLTV15]. Then, \(\textsf{Eval} \) can use this randomness r as a witness to produce the ZK proof that constitutes the evaluated signature \(\sigma _f\).
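To illustrate, the following toy example (with deliberately tiny, insecure parameters) shows the analogous randomness homomorphism of Paillier encryption, for homomorphic addition rather than NAND: the randomness of the evaluated ciphertext is computed from \(r_1,r_2\) and the public key alone.

```python
# Toy Paillier-style demo of randomness homomorphism for addition.
# Enc(m; r) = (1+n)^m * r^n mod n^2, where n = p*q is the public key.
# Multiplying ciphertexts adds the plaintexts, and the randomness of the
# product is simply r1*r2 mod n (computable with the public key only).
p, q = 5, 7                    # far too small to be secure; illustration only
n, n2 = p * q, (p * q) ** 2

def enc(m: int, r: int) -> int:
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

c1, c2 = enc(3, 2), enc(4, 3)
c_sum = (c1 * c2) % n2          # homomorphic addition of plaintexts
r = (2 * 3) % n                 # randomness homomorphism: r = r1*r2 mod n
assert c_sum == enc((3 + 4) % n, r)
```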

This approach runs into a circular issue: while it is true that the \(\sigma _i^\star \) are proofs that are computed without \(\textsf{td}_{\textsf{sim}} \), to use the proof of knowledge property and extract witnesses, we need to first remove \(\textsf{td}_{\textsf{sim}} \) and switch to an honestly generated CRS. To do so, we need \(\textsf{Eval} \) to produce the signatures \(\sigma _f\) without \(\textsf{td}_{\textsf{sim}} \), but using witnesses instead, which already requires the proof of knowledge property and an honest CRS.

To solve this circular issue, our scheme uses a different NIZK proof system for each depth level of the circuit that is homomorphically evaluated. That is, to evaluate a function f, represented as a depth-d circuit, we evaluate the circuit gate by gate. Starting at level 0, signatures \(\sigma _1,\ldots ,\sigma _n\) of messages \(m_1,\ldots ,m_n\) are ZK proofs stating (falsely) that the FHE ciphertexts \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \) from \(\textsf{vk} \) are encryptions of \(m_1,\ldots ,m_n\), respectively, computed using \(\textsf{crs} _0\), a simulated CRS generated together with a simulation trapdoor \(\textsf{td}_{\textsf{sim}} ^0\). Then \(\textsf{Eval} \) takes as input these level 0 signatures \(\sigma _1,\ldots ,\sigma _n\), the messages \(m_1,\ldots ,m_n\) and an n-ary gate g, verifies that the \(\sigma _i\) are valid proofs, computes the gate g on the messages, which yields the value \(y=g(m_1,\ldots ,m_n)\), homomorphically evaluates g on the ciphertexts \(\textsf{ct} _1,\ldots ,\textsf{ct} _n\), which yields \(\textsf{ct} _g\), and computes a ZK proof \(\pi \) stating that \(\textsf{ct} _g\) is an FHE encryption of y, using \(\textsf{crs} _1\), a simulated CRS generated together with a simulation trapdoor \(\textsf{td}_{\textsf{sim}} ^1\). The \(\textsf{Eval} \) algorithm performs just one more level of the homomorphic computation; it is applied repeatedly to obtain the final signature \(\sigma _f\) for the function f. To keep track of the gate-by-gate evaluation of the circuit, each signature is of the form \(\sigma =(\pi ,i,\textsf{ct})\), where \(i \in \mathbb {N}\) indicates the level of the signature, \(\pi \) is a proof computed using \((\textsf{crs} _i,\textsf{td}_{\textsf{sim}} ^i)\), and \(\textsf{ct} \) is a homomorphically evaluated ciphertext (if \(i=0\) it is one ciphertext from \(\textsf{vk} \)). This way, \(\textsf{Eval} \) takes as input signatures of level i, and outputs signatures of level \(i+1\).

To prove the unforgeability of this scheme, as before, we start by replacing the FHE ciphertexts \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \) in \(\textsf{vk} \) with encryptions of the messages \(m_1^\star ,\ldots ,m_N ^\star \) chosen by the adversary, using the semantic security of FHE. Then, we generate level 0 signatures using witnesses (the randomness used to compute the \(\textsf{ct} _i\)) instead of \(\textsf{td}_{\textsf{sim}} ^0\). At this point, we can switch \(\textsf{crs} _0\) to a real CRS, generated along with an extraction trapdoor, since \(\textsf{td}_{\textsf{sim}} ^0\) is not used anymore. The rest of the proof proceeds using a hybrid argument over all the levels \(i=1,\ldots ,d\), where d is the (unbounded) depth of the circuit chosen by the adversary. By induction, we assume \(\textsf{crs} _i\) is generated honestly along with an extraction trapdoor \(\textsf{td}_{\textsf{ext}} ^i\). Therefore, we can switch the way \(\textsf{Eval} \) computes the ZK proof for the level \(i+1\). Instead of using a simulation trapdoor \(\textsf{td}_{\textsf{sim}} ^{i+1}\) with respect to \(\textsf{crs} _{i+1}\) and computing simulated proofs, it extracts witnesses from the level i signatures using \(\textsf{td}_{\textsf{ext}} ^i\), and uses them to compute the proofs without the trapdoor \(\textsf{td}_{\textsf{sim}} ^{i+1}\). At this point \(\textsf{td}_{\textsf{sim}} ^{i+1}\) is not used anymore, so we can also switch \(\textsf{crs} _{i+1}\) to a real CRS, and go to the next step until we reach the depth of the function f chosen by the adversary.

While using a different CRS for each level seems to solve the circularity issue, this approach creates another problem: if we simply generate \(\textsf{crs} _i\) for all levels in advance and put them in \(\textsf{vk} \), we necessarily have to bound the maximum depth of the functions that can be homomorphically evaluated. In other words, we have a leveled FHS. To avoid that, \(\textsf{Eval} \) samples the \(\textsf{crs} _i\) on the fly using a pseudorandom function (the key of the PRF is hard-coded in the obfuscated circuit \(\textsf{Eval} \)). This complicates the security proof, but it can be made to work using puncturing techniques. Namely, to switch \(\textsf{crs} _i\) from a simulated to a real CRS and use the proof of knowledge property of the proof system associated with \(\textsf{crs} _i\), we need \(\textsf{crs} _i\) to be generated with truly random coins, as opposed to pseudorandom ones. We simply hard-code the PRF value at i, puncture the PRF key, and switch the value to random (this is a standard technique for security proofs using iO, see for instance [SW14]). The crucial fact that makes these techniques applicable is that at any point in our security proof, we only require the CRS of one specific level to be generated with truly random coins. That is, we only need to hard-code the value of one CRS to perform the hybrid argument that goes over each level one by one. Ultimately, we show that the CRS for the last level, which corresponds to the depth of f chosen by the adversary, is generated honestly, and the soundness of the proof system directly prevents any successful FHS forgery.
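A minimal sketch of the on-the-fly CRS derivation (all names are hypothetical; \(\textsf{SimSetup}\) and the PRF are mocked with hash functions, so this captures only the data flow, not the actual primitives):

```python
# Sketch: deriving per-level CRSs on the fly from a PRF key K that is
# hard-coded in the obfuscated Eval circuit. No bound on the level i has
# to be fixed at setup, and nothing per-level is stored in vk.
import hashlib
import hmac

K = b"prf-key"  # hard-coded in the obfuscated Eval circuit

def prf(key: bytes, i: int) -> bytes:
    return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()

def sim_setup(coins: bytes):
    # Mock SimSetup: derive (crs, td_sim) deterministically from the coins.
    crs = hashlib.sha256(b"crs" + coins).hexdigest()
    td = hashlib.sha256(b"td" + coins).hexdigest()
    return crs, td

def crs_for_level(i: int):
    # crs_i = SimSetup(1^lambda; PRF_K(i)), recomputable whenever needed.
    return sim_setup(prf(K, i))

# Eval for level i verifies proofs under crs_i and proves under crs_{i+1}.
crs0, td0 = crs_for_level(0)
crs1, td1 = crs_for_level(1)
assert crs_for_level(0) == (crs0, td0)  # reproducible without storing all CRSs
```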

High-Level Description of our FHS Scheme. In this description, \(\textsf{SimSetup}\) generates a simulated CRS with an associated simulation trapdoor \(\textsf{td}_{\textsf{sim}}\). In the unforgeability proof, we will use the honest variant \(\textsf{Setup} \) that generates a real CRS along with an extraction trapdoor \(\textsf{td}_{\textsf{ext}} \). For simplicity, we consider an algorithm \(\textsf{Eval} \) that only evaluates binary NAND gates (this is without loss of generality). Our scheme is as follows:

  • \(\textsf{vk} = (\textsf{FHE}.\textsf{Enc} (0),\ldots ,\textsf{FHE}.\textsf{Enc} (0),\textsf{crs} _0)\), where \((\textsf{crs} _0,\textsf{td}_{\textsf{sim}} ^0) \leftarrow \textsf{SimSetup}(1^\lambda )\) and \(\lambda \in \mathbb {N}\) denotes the security parameter. The verification key \(\textsf{vk} \) contains \(N \) FHE encryptions of 0, namely \(\textsf{ct} _1,\ldots ,\textsf{ct} _N \).

  • \(\textsf{sk} = K\), where K is a PRF key.

  • \(\textsf{EvalNAND} \Big ((\sigma _0,m_0),(\sigma _1,m_1)\Big ) = \widetilde{\mathcal {C} _{[\textsf{td}_{\textsf{sim}} ^0,K]}}\Big ((\sigma _0,m_0),(\sigma _1,m_1)\Big )\), where \(\widetilde{\mathcal {C} _{[\textsf{td}_{\textsf{sim}} ^0,K]}}\) denotes an obfuscation of the circuit \(\mathcal {C} _{[\textsf{td}_{\textsf{sim}} ^0,K]}\) that has the values \(\textsf{td}_{\textsf{sim}} ^0\) and K hard-coded, described in Fig. 1 below; \(\sigma _0\) and \(\sigma _1\) are signatures of level \(i \in \mathbb {N}\) of the messages \(m_0\) and \(m_1\) respectively, on which a binary NAND gate is homomorphically evaluated.

  • \(\textsf{Verify} (\sigma _f,f,y)\): parses \(\sigma _f\) as \((\textsf{ct},\pi ,d)\). The proof \(\pi \) is a ZK proof with respect to \(\textsf{crs} _d\), where d is the depth of f and \((\textsf{crs} _d,\textsf{td} _d) = \textsf{SimSetup}(1^\lambda ;\textsf{PRF} _K(d))\), i.e. \(\textsf{SimSetup}\) is run on the pseudorandom coins \(\textsf{PRF} _K(d)\). Then, it homomorphically evaluates f on the ciphertexts \(\textsf{ct} _i=\textsf{FHE}.\textsf{Enc} (0)\) from \(\textsf{vk} \) to obtain \(\textsf{ct} _f\). It checks that \(\pi \) is a valid proof stating that \(\textsf{ct} _f\) is an encryption of y, with respect to \(\textsf{crs} _d\) (it outputs 1 if the check passes, 0 otherwise). Note that the ciphertext \(\textsf{ct} \) that is part of the signature is not used by \(\textsf{Verify} \); it is only useful if further homomorphic evaluations are to be performed on the evaluated signature.

Fig. 1. Circuit \(\mathcal {C} _{[\textsf{td} _0,K]}(\cdot ,\cdot )\) used by \(\textsf{Eval} \).

We summarize the unforgeability proof using the list of hybrid games presented in Fig. 2. Note that \(\textsf{G} _{3.0} = \textsf{G} _2\), and in the last game \(\textsf{G} _{3.d}\), where d denotes the depth of the function f chosen by the adversary, security simply follows from the soundness of the level d NIZK.

Fig. 2. Hybrid games for the selective unforgeability proof of our FHS. We denote by \(m_i^\star \) the messages sent by the adversary, by \(\sigma _i^\star \) the signatures it receives, by \(\textsf{SimSetup}\) the algorithm that generates a simulated CRS with a trapdoor \(\textsf{td}_{\textsf{sim}} \), by \(\textsf{Setup} (1^\lambda )\) the honest variant that generates a real CRS together with an extraction trapdoor, and by K a puncturable PRF key. We denote by \(\textsf{Setup} (1^\lambda ;r)\) the algorithm \(\textsf{Setup} \) run with coins r (which can be truly random or pseudorandom). When omitted, truly random coins are implicitly used. We use the same notation when writing \(\textsf{SimSetup}(1^\lambda ;r)\) or \(\textsf{FHE}.\textsf{Enc} (m;r)\).

Complexity Leveraging and Adaptive Security. In the overview above, we skipped over some technical details. In the unforgeability proof of our FHS scheme, the challenger that interacts with the adversary does not know in advance the depth d of the function f chosen. To solve this problem, the challenger chooses a super-polynomial number of levels, e.g. \(2^{\omega (\log \lambda )}\), to perform the hybrid argument sketched above. This gives a super-polynomial security loss, which is why we require subexponential security of the underlying assumptions. A similar complexity leveraging argument can be used to obtain adaptive security, where the adversary is not restricted to choosing the messages \(m^\star _1,\ldots ,m^\star _N \) before seeing the verification key of the scheme. The challenger guesses the messages in advance and acts as though the adversary were selective. The security loss due to the guessing argument is \(2^N \), which we can accommodate by choosing appropriately large parameters, relying again on the subexponential security of the underlying building blocks.

Unique Randomness. For technical reasons, we additionally require that the FHE has unique randomness: given a message m and a ciphertext \(\textsf{ct} =\textsf{Enc} (\textsf{pk},m;r)\), there cannot be another randomness \(r' \ne r\) such that \(\textsf{Enc} (\textsf{pk},m;r')=\textsf{ct} \). In the full version of this paper [GU23], we show that a slight modification of the GSW FHE scheme [GSW13] directly achieves such a property. We also show that the FHE from [CLTV15] can be adapted straightforwardly to obtain unique randomness. Simply put, their scheme relies on iO and a re-randomizable encryption scheme (such as Goldwasser-Micali, ElGamal, Paillier or Damgard-Jurik). If the latter has unique randomness, then the resulting FHE also has this property.
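As a sanity check of the definition, the following brute-force test on a toy Paillier-style scheme (tiny, insecure parameters, for illustration only) confirms that for each message the map \(r \mapsto \textsf{Enc} (\textsf{pk},m;r)\) is injective, i.e. the scheme has unique randomness.

```python
# Toy check of the unique-randomness property on a small Paillier-style
# scheme: for every fixed message m, distinct randomness values r yield
# distinct ciphertexts Enc(m; r) = (1+n)^m * r^n mod n^2.
from math import gcd

p, q = 5, 7                    # far too small to be secure; illustration only
n, n2 = p * q, (p * q) ** 2

def enc(m: int, r: int) -> int:
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

units = [r for r in range(1, n) if gcd(r, n) == 1]   # valid randomness values
for m in range(n):
    cts = {enc(m, r) for r in units}
    assert len(cts) == len(units)   # r -> Enc(m; r) is injective
```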

Related Works. The work of [JMSW02] introduced a similar notion of homomorphic signature but where the verification algorithm does not take the function f as an input. That is, signatures can be manipulated homomorphically, thereby changing the underlying message being signed, but the verification does not track which function was applied. In that case, the notion of unforgeability only makes sense when the homomorphism property is limited, so that from a set of signatures, one can only get a signature on some but not all messages. Typically, the messages are vectors, and given signatures on vectors \(\textbf{v}_1,\ldots ,\textbf{v}_n\), only signatures on the linear combinations of the vectors \(\textbf{v}_1,\ldots ,\textbf{v}_n\) can be obtained. In particular if n is less than the dimension of the vectors, then there are some vectors for which signatures cannot be generated (those outside the span of \(\textbf{v}_1,\ldots ,\textbf{v}_n\)) and the unforgeability property is meaningful. These are referred to as linearly homomorphic signatures, such as in [BF11b, Fre12, ALP13, LPJY15, CFN15, CLQ16, HPP20]. This is similar to the notion of equivalence-class signatures [HS14, FHS19, FG18, KSD19], where signatures can be combined homomorphically within a given equivalence class, but forgeries outside the class are prohibited. The notion also requires a re-randomizability property, and is used in particular for anonymous credentials.

Other works [LTWC18, FP18, AP19, SBB19] consider the multi-key extension of homomorphic signatures, where the signatures to be homomorphically evaluated come from different users with different signing keys.

In [BFS14], the authors provide a fully-homomorphic signature from lattices that has the advantage of being adaptively secure (where the adversary can send the messages of her choice after receiving the verification key in the security game). In [CFN18], the authors study the security notions of homomorphic signatures in the adaptive setting, provide a simpler and stronger definition, and a compiler that generically strengthens the security of a scheme. The work of [Tsa17] establishes an equivalence between homomorphic signatures and the related notion of attribute-based signatures, and provides new constructions for both.

Another recent line of work [CFT22, BCFL23] on functional commitments also addresses the problem of homomorphic signatures. [BCFL23] instantiates the framework of [CFT22] with a functional commitment for circuits of unbounded depth, resulting in a homomorphic signature that supports circuits of unbounded depth (though the circuit width is bounded). In this way, [BCFL23] proposes schemes based on new falsifiable assumptions which rely on pairings and lattices (the pairing assumption holds in the bilinear generic group model, while the lattice one is an extension of the k-R-ISIS assumption of [ACL+22]). Comparing our work to [BCFL23], our basic scheme only relies on a bound on the input size (see footnote 2). Moreover, our scheme allows for arbitrary compositions of signatures, as was the case in [GVW15]. The signatures in [BCFL23] can be composed only sequentially, by feeding an entire signature as the input to another circuit: given a signature \(\sigma \) for \(y=f(m)\), their scheme can compute a signature \(\sigma '\) for \(z=g(y)\); the resulting signature \(\sigma '\) is then with respect to z, the circuit \(g\circ f\) and the input m.

As mentioned earlier, [CLTV15] builds an unleveled FHE scheme from subexponentially secure iO and re-randomizable encryption. Remarkably, their FHE does not require any circular security assumption, since it does not rely on the bootstrapping technique. Although we use a similar complexity leveraging argument to handle unbounded depth, the technical similarities end there.

Fully-Homomorphic Signatures from SNARKs. It was claimed in previous works [GW13, GVW15] that FHS can be built using succinct non-interactive arguments of knowledge (SNARKs) for NP. This comes at a cost: in the FHS regime, that would mean using non-falsifiable assumptions (even in the random oracle model), as we explain in further detail in the full version of this paper [GU23]. This stands in contrast with our scheme, which can be instantiated from falsifiable assumptions, since general indistinguishability obfuscation itself can be built from falsifiable assumptions [JLS21, GP21, JLS22].

Full Context-Hiding. Our FHS scheme is also the first to achieve a strong notion of context hiding, more powerful than the one achieved by [GVW15]. Consider a signature \(\sigma \) for \(m=f(m_1,\ldots ,m_N )\), which was obtained by homomorphically evaluating a function f on signature-message pairs \((\sigma _1,m_1),\ldots ,(\sigma _N ,m_N )\). Full context-hiding (see footnote 3) guarantees that the signature \(\sigma \) only certifies m and does not leak any information on the messages \(m_1,\ldots ,m_N \). A signature \(\sigma \) in [GVW15] is not context-hiding, but can be post-processed into another signature \(\sigma '\) that achieves context-hiding, at the cost that the homomorphism property is broken: no homomorphic operations can be applied to \(\sigma '\).

In contrast, our FHS construction achieves full context hiding for signatures at all levels out of the box, and context-hiding signatures can be homomorphically combined an unbounded number of times. Our construction is the first to achieve this stronger notion of context-hiding in the standard model. More details can be found in the full version of this paper [GU23].

Roadmap. In Sect. 2 we define the building blocks used in our construction, then we describe our scheme in Sect. 3 and prove its security in Sect. 4.

Due to space limitations, some of our results are deferred to the full version of this paper [GU23] which contains:

  • a description of several schemes that satisfy unique randomness, a property needed from the FHE building block in the proof.

  • a variation of the scheme that supports datasets of unbounded length, albeit relying on the random oracle model.

  • an analysis of the context-hiding security of our scheme.

  • a detailed description of how SNARKs can be used to build FHS. While such an approach would be much more practical in terms of the efficiency of the scheme, there would also be drawbacks with respect to the falsifiability of the assumptions used.

  • a brief description of multi-data FHS, which allows for the signing of multiple datasets by associating each one with a label (the label is an arbitrary binary string, for example an encoding of a filename or a timestamp). Signing and verification is done with respect to the label, but the scheme uses the same signing and verification key for multiple labels. A generic transformation from single-data to multi-data FHS is known due to [GVW15] and is recalled in the full version of this paper [GU23].

2 Preliminaries

Notation. Throughout this paper, \(\lambda \) denotes the security parameter. For all \(n\in \mathbb {N}\), [n] denotes the set \(\{1, \dots , n\}\). An algorithm is said to be efficient if it is a probabilistic polynomial time (PPT) algorithm. A function \(f: \mathbb {N}\rightarrow \mathbb {R}_{\ge 0}\) is negligible if for any polynomial p there exists a bound \(B>0\) such that, for any integer \(k\ge B\), \(f(k)\le 1/{\vert p(k)\vert }\). An event depending on \(\lambda \) occurs with overwhelming probability when its probability is at least \(1-{{\,\mathrm{\textsf{negl}}\,}}(\lambda )\) for a negligible function \({{\,\mathrm{\textsf{negl}}\,}}\). Given a finite set S, the notation \(x\leftarrow _{\textsc {r}}S\) means a uniformly random assignment of an element of S to the variable x. For all probabilistic algorithms \(\mathcal {A} \) and all inputs x, we denote by \(y \leftarrow \mathcal {A} (x)\) the process of running \(\mathcal {A} \) on x and assigning the output to y. The notation \(\mathcal {A} ^{\mathcal {O}}\) indicates that the algorithm \(\mathcal {A} \) is given oracle access to \(\mathcal {O}\). For all algorithms \(\mathcal {A},\mathcal {B},\ldots \), all inputs \(x,y,\ldots \) and all predicates \(\textsf{P}\), we denote by \(\Pr [a \leftarrow \mathcal {A} (x);b \leftarrow \mathcal {B}(a);\ldots : \textsf{P}(a,b,\ldots )]\) the probability that the predicate \(\textsf{P}\) holds on the values \(a,b,\ldots \) computed by first running \(\mathcal {A} \) on x, then \(\mathcal {B}\) on a, and so forth. For two distributions \(D_1,D_2\), we denote by \(\varDelta (D_1,D_2)\) their statistical distance. We denote by \(\mathcal {D}_1 \approx _c \mathcal {D}_2\) two computationally indistinguishable distribution ensembles \(\mathcal {D}_1\) and \(\mathcal {D}_2\), and by \(\mathcal {D}_1 \approx _s \mathcal {D}_2\) two statistically close ensembles.

Subexponential Security. The security definitions we consider will require that for every efficient algorithm \(\mathcal {A} \), there exists some negligible function \({{\,\mathrm{\textsf{negl}}\,}}\) such that for all \(\lambda \in \mathbb {N}\), \(\mathcal {A} \) succeeds in “breaking security” w.r.t. the security parameter \(\lambda \) with probability at most \({{\,\mathrm{\textsf{negl}}\,}}(\lambda )\). All the definitions that we consider can be extended to subexponential security; this is done by requiring the existence of a constant \(\varepsilon > 0\) such that every PPT algorithm \(\mathcal {A} \) succeeds in “breaking security” w.r.t. the security parameter \(\lambda \) with probability at most \(2^{- \lambda ^\varepsilon }\). The security notions of obfuscation (Sect. 2.3) and NIZK (Sect. 2.4) are traditionally defined for non-uniform adversaries. We write our security definitions for uniform adversaries for simplicity, but they can easily be adapted to non-uniform adversaries.

2.1 Puncturable Pseudorandom Functions

A pseudorandom function (PRF) is a tuple of PPT algorithms \((\textsf{PRF}.\textsf{KeyGen},\textsf{PRF}.\textsf{Eval})\) where \(\textsf{PRF}.\textsf{KeyGen} \) generates a key which is used by \(\textsf{PRF}.\textsf{Eval} \) to evaluate outputs. The core property of PRFs states that for a random choice of key, the outputs of \(\textsf{PRF}.\textsf{Eval} \) are pseudorandom. Puncturable PRFs (pPRFs) have the additional property that keys can be generated punctured at any input x in the domain. As a result, the punctured key can be used to evaluate the PRF at all inputs but x. Moreover, revealing the punctured key does not violate the pseudorandomness of the image of x. This notion can be generalized to allow the key to be punctured at multiple points.

As observed in [BW13, BGI14, KPTZ13], it is possible to construct such puncturable PRFs from the original PRF construction of [GGM84], which can be based on any one-way function [HILL99]. While these PRFs support puncturing at polynomially many points, in this paper we only need to puncture at sets that contain at most two points.
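To make the tree-based GGM construction and its puncturing concrete, here is a minimal Python sketch, punctured at a single point. The SHA-256-based length-doubling PRG and all function names are illustrative assumptions for this sketch, not part of the cited constructions.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG G: {0,1}^lambda -> {0,1}^(2*lambda),
    # modeled here with SHA-256 for illustration only.
    left = hashlib.sha256(seed + b"0").digest()
    right = hashlib.sha256(seed + b"1").digest()
    return left, right

def eval_prf(key: bytes, x: str) -> bytes:
    # GGM evaluation: walk the binary tree along the bits of x.
    s = key
    for bit in x:
        left, right = prg(s)
        s = left if bit == "0" else right
    return s

def puncture(key: bytes, x: str) -> dict:
    # Punctured key K{x}: the sibling seed at every level of the path to x.
    pk, s = {}, key
    for i, bit in enumerate(x):
        left, right = prg(s)
        # Store the seed of the sibling of the edge we take.
        pk[x[:i] + ("1" if bit == "0" else "0")] = right if bit == "0" else left
        s = left if bit == "0" else right
    return pk

def eval_punctured(pk: dict, x: str) -> bytes:
    # Evaluate at any x' != x (same fixed length): exactly one stored
    # prefix covers x'; descend from the corresponding seed.
    for prefix, seed in pk.items():
        if x.startswith(prefix):
            return eval_prf(seed, x[len(prefix):])
    raise ValueError("point was punctured")
```

Functionality preservation under puncturing is immediate from the tree structure, and pseudorandomness at the punctured point rests on the security of the underlying PRG.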

Definition 1 (Puncturable Pseudorandom Function)

A puncturable pseudorandom function (pPRF) is a triple of PPT algorithms \((\textsf{PRF}.\textsf{KeyGen},\textsf{PRF}.\textsf{Puncture},\textsf{PRF}.\textsf{Eval})\) such that:

  • \(\textsf{PRF}.\textsf{KeyGen} (1^{\lambda })\): on input the security parameter, it outputs a key K in the key space \(\mathcal {K}_\lambda \). It also defines a domain \(\mathcal {X} _\lambda \), a range \(\mathcal {Y} _\lambda \) and a punctured key space \(\mathcal {K}^*_\lambda \).

  • \(\textsf{PRF}.\textsf{Puncture} (K,S)\): on input a key \(K \in \mathcal {K}_\lambda \) and a set \(S \subseteq \mathcal {X} _\lambda \), it outputs a punctured key \(K\{S\} \in \mathcal {K}_\lambda ^*\).

  • \(\textsf{PRF}.\textsf{Eval} (K,x)\): on input a key K (punctured or not, i.e. \(K \in \mathcal {K}_\lambda \cup \mathcal {K}^*_\lambda \)), and a point \(x\in \mathcal {X} _\lambda \), it outputs a value in \(\mathcal {Y} _\lambda \).

We require the pPRF algorithms to meet the following conditions:

  • Functionality Preserved under Puncturing. For all \(\lambda \in \mathbb {N}\), for all subsets \(S\subseteq \mathcal {X}_\lambda \),

    $$\begin{aligned} \Pr [&K\leftarrow \textsf{PRF}.\textsf{KeyGen} (1^{\lambda }), K\{S\}\leftarrow \textsf{PRF}.\textsf{Puncture} (K, S):\\ &\forall x'\in \mathcal {X}_\lambda \setminus S :\textsf{PRF}.\textsf{Eval} (K, x')=\textsf{PRF}.\textsf{Eval} (K\{S\}, x')]=1\text {.} \end{aligned}$$
  • Pseudorandom at Punctured Points. For every stateful PPT adversary \(\mathcal {A} \) and every security parameter \(\lambda \in \mathbb {N}\), the advantage of \(\mathcal {A} \) in \({\textsf{Exp}\text {-}\textsf{pPRF}}\) (described in Fig. 3) is negligible, namely:

    $$\textsf{Adv} _{\textsf{pPRF}}(\lambda ,{\mathcal {A}}){:}{=}\big |\Pr [{\textsf{Exp}\text {-}\textsf{pPRF}} (1^{\lambda },{\mathcal {A}})=1]-\frac{1}{2}\big |\le {{\,\mathrm{\textsf{negl}}\,}}(\lambda ).$$

For ease of notation we often write \(\textsf{PRF} (\cdot , \cdot )\) instead of \(\textsf{PRF}.\textsf{Eval} (\cdot , \cdot )\). When S is a singleton set \(S=\{x\}\), we denote the punctured key at S as \(K\{S\}=K\{x\}\), and when \(S=\{x_1,x_2\}\), we denote \(K\{S\}=K\{x_1,x_2\}\).

Theorem 2

[GGM84, BW13, BGI14, KPTZ13] Consider a fixed polynomial \(p(\lambda )\) and two arbitrary polynomials \(n(\lambda ),m(\lambda )\) in the security parameter \(\lambda \). If one-way functions exist, then there exists a puncturable PRF family that maps \(n(\lambda )\) bits to \(m(\lambda )\) bits and supports punctured sets S of size at most \(p(\lambda )\).

As explained at the beginning of this section, in this paper we use puncturing for sets that contain at most two elements.

Fig. 3.

Experiment \({\textsf{Exp}\text {-}\textsf{pPRF}} (1^{\lambda },\mathcal {A})\) for the pseudo-randomness at punctured points.

2.2 Fully Homomorphic Encryption

We recall the definition of unleveled FHE here, where there is no a-priori bound on the depth of circuits that can be homomorphically evaluated. For simplicity we consider messages to be bits.

Definition 3 (Fully Homomorphic Encryption)

A fully homomorphic encryption scheme \(\textsf{FHE} \) is a tuple of PPT algorithms \((\textsf{FHE}.\textsf{KeyGen},\textsf{FHE}.\textsf{Enc}, \textsf{FHE}.\textsf{Dec}, \textsf{FHE}.\textsf{Eval})\), where:

  • \(\textsf{FHE}.\textsf{KeyGen} (1^\lambda )\): outputs a public encryption/evaluation key \(\textsf{pk} \) and a secret key \(\textsf{sk} \).

  • \(\textsf{FHE}.\textsf{Enc} (\textsf{pk},m)\): outputs an encryption \(\textsf{ct} \) of message \(m \in \{0,1\} \). We denote by \(\mathcal {R}\) the randomness space of \(\textsf{FHE}.\textsf{Enc} \).

  • \(\textsf{FHE}.\textsf{Dec} (\textsf{sk},\textsf{ct})\): uses \(\textsf{sk} \) to decrypt \(\textsf{ct} \). It outputs a message.

  • \(\textsf{FHE}.\textsf{Eval} (\textsf{pk},f,\textsf{ct} _1\ldots \textsf{ct} _N)\): it is a deterministic algorithm that takes as input a circuit f of arity \(N \), and employs \(\textsf{pk} \) to compute an evaluated ciphertext \(\textsf{ct} _f\).

An \(\textsf{FHE} \) scheme must satisfy the following requirements:

Encryption Correctness. For all \(\lambda \in \mathbb {N}\), all messages \(m \in \{0,1\} \), all \((\textsf{pk},\textsf{sk})\) in the support of \(\textsf{FHE}.\textsf{KeyGen} (1^\lambda )\), all ciphertexts \(\textsf{ct} \) in the support of \(\textsf{FHE}.\textsf{Enc} (\textsf{pk},m)\), we have \(\textsf{FHE}.\textsf{Dec} (\textsf{sk},\textsf{ct})=m\).

Evaluation Correctness. For all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\) in the support of \(\textsf{FHE}.\textsf{KeyGen} (1^\lambda )\), all messages \(m_1,\ldots ,m_N \in \{0,1\} \), all ciphertexts \((\textsf{ct} _1\ldots \textsf{ct} _{N})\) such that \(\textsf{FHE}.\textsf{Dec} (\textsf{sk},\textsf{ct} _i)=m_i\) for all \(i \in [N ]\), all circuits f of arity \(N \), it holds that:

$$ \textsf{FHE}.\textsf{Dec} (\textsf{sk},\textsf{FHE}.\textsf{Eval} (\textsf{pk},f,\textsf{ct} _1\ldots \textsf{ct} _N))=f(m_1,\ldots ,m_{N}). $$

Randomness Homomorphism. There exists an efficient deterministic algorithm \(\textsf{FHE}.\textsf{EvalRand} \) such that for all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\) in the support of \(\textsf{FHE}.\textsf{KeyGen} (1^\lambda )\), all messages \(m_1,\ldots ,m_N \in \{0,1\} \) and randomness \(r_1,\ldots ,r_N \in \mathcal {R}\), all circuits f of arity \(N \), writing \(r_f = \textsf{FHE}.\textsf{EvalRand} (\textsf{sk},\textsf{pk},r_1,\ldots ,r_N,m_1,\ldots ,m_N,f)\) and \(\textsf{ct} _i = \textsf{FHE}.\textsf{Enc} (\textsf{pk},m_i;r_i)\) for all \(i \in [N ]\), we have:

$$\begin{aligned} \textsf{FHE}.\textsf{Enc} (\textsf{pk},f(m_1,\ldots ,m_N);r_f) = \textsf{FHE}.\textsf{Eval} (\textsf{pk},f,\textsf{ct} _1,\ldots ,\textsf{ct} _N). \end{aligned}$$

For most lattice-based FHE schemes, such as [GSW13], a stronger property holds: \(\textsf{EvalRand} \) can be publicly evaluated from the initial randomness and messages, and does not require \(\textsf{sk} \) (only \(\textsf{pk} \)). Nevertheless, the \(\textsf{FHE}\) scheme based on \(\textsf{iO}\) from [CLTV15] does require the use of the secret key to compute the evaluated randomness (which will consist of the key of a puncturable PRF). Both variants can be used as a building block in our construction.
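As a toy illustration of the randomness homomorphism property, consider exponential ElGamal, which is additively (not fully) homomorphic: the randomness of a homomorphic sum is simply the sum of the randomnesses, and the analogue of \(\textsf{EvalRand} \) is publicly computable. The sketch below uses deliberately tiny, insecure parameters and is not any scheme from the cited works.

```python
import random

# Illustrative parameters only: far too small for any security.
p = 7919            # small prime modulus
g = 2               # group element used as generator
s = random.randrange(1, p - 1)   # secret key
h = pow(g, s, p)                 # public key

def enc(m: int, r: int) -> tuple[int, int]:
    # Enc(pk, m; r) = (g^r, g^m * h^r): the randomness r is an explicit input.
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def add(ct1, ct2):
    # Homomorphic addition of plaintexts: component-wise product of ciphertexts.
    return ((ct1[0] * ct2[0]) % p, (ct1[1] * ct2[1]) % p)

def eval_rand(r1: int, r2: int) -> int:
    # Randomness homomorphism for addition: r_f = r1 + r2 (publicly computable,
    # as in the "stronger property" discussed above).
    return (r1 + r2) % (p - 1)

# Re-encrypting the sum with the evaluated randomness reproduces
# exactly the homomorphically evaluated ciphertext.
assert add(enc(3, 11), enc(4, 29)) == enc(7, eval_rand(11, 29))
```

Note that unique randomness also holds here: the first component \(g^r\) determines r uniquely modulo the group order.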

Fig. 4.

Experiment \({\textsf{Exp}\text {-}\textsf{IND}\text {-}\textsf{CPA}}\) for the selective indistinguishable security of \(\textsf{FHE}\).

Unique Randomness. For all \(\lambda \in \mathbb {N}\), all \((\textsf{pk},\textsf{sk})\) in the support of \(\textsf{FHE}.\textsf{KeyGen} (1^\lambda )\), all messages \(m \in \{0,1\}\), all \(r \in \mathcal {R}\) where \(\mathcal {R}\) denotes the randomness space, there is no \(r' \in \mathcal {R}\) such that \(r' \ne r\) and \(\textsf{Enc} (\textsf{pk},m;r)=\textsf{Enc} (\textsf{pk},m;r')\).

Selective \(\boldsymbol{\mathsf {IND\text {-}CPA}}\) Security. For any PPT adversary \(\mathcal {A}\), we require that \(\textsf{Adv} ^{\textsf{FHE}}_{\mathsf {IND\text {-}CPA}}(\lambda ,\mathcal {A})\) in the experiment \({\textsf{Exp}\text {-}\textsf{IND}\text {-}\textsf{CPA}}\) from Fig. 4 is negligible, namely:

$$\textsf{Adv} ^{\textsf{FHE}}_{\mathsf {IND\text {-}CPA}}(\lambda ,\mathcal {A}){:}{=}\big |\Pr [{\textsf{Exp}\text {-}\textsf{IND}\text {-}\textsf{CPA}} ^{\textsf{FHE}}(1^{\lambda },{\mathcal {A}})=1]-\frac{1}{2}\big |\le {{\,\mathrm{\textsf{negl}}\,}}(\lambda )$$

2.3 Indistinguishability Obfuscation

We recall the definition of indistinguishability obfuscation, originally from [BGI+01].

Definition 4 (Indistinguishability Obfuscator)

An indistinguishability obfuscator for a circuit class \({\{\mathcal {C} _\lambda \}}_{\lambda \in \mathbb {N}}\) is an efficient algorithm \(\textsf{iO} \) such that:

  • Perfect correctness: for all \(\lambda \in \mathbb {N}\), all \(C \in \mathcal {C} _\lambda \), all inputs x, we have:

    $$\Pr [C' \leftarrow \textsf{iO} (1^\lambda ,C):C'(x)=C(x)]=1$$
  • Security: for all efficient algorithms \(\mathcal {A} \), there exists a negligible function \({{\,\mathrm{\textsf{negl}}\,}}\) such that for all \(\lambda \in \mathbb {N}\), all pairs of circuits \(C_0,C_1 \in \mathcal {C} _\lambda \) such that \(C_0(x) = C_1(x)\) for all inputs x, we have:

    $$\textsf{Adv} ^{\textsf{iO}}(\lambda ,{\mathcal {A}}){:}{=}|\Pr [{\mathcal {A}}(\textsf{iO} (1^\lambda ,C_0))=1] -\Pr [{\mathcal {A}}(\textsf{iO} (1^\lambda ,C_1))=1]| \le {{\,\mathrm{\textsf{negl}}\,}}(\lambda )$$

2.4 Non-interactive Zero Knowledge Proofs

Given a binary relation \(R: \mathcal {X}\times \mathcal {W}\rightarrow \{0,1\} \) defined over a set of statements \(\mathcal {X}\) and a set of witnesses \(\mathcal {W}\), let \(\mathcal {L}_R\) be the language defined as \(\mathcal {L}_R = \{ x \in \mathcal {X}\,\,|\,\, \exists w \in \mathcal {W}: R(x,w) = 1\}\). A Non-Interactive Zero Knowledge proof system for the binary relation R (originally introduced in [BFM88]) allows a prover in possession of a statement x and a witness w such that \(R(x,w)=1\) to produce a proof that convinces a verifier that \(x \in \mathcal {L}_R\) without revealing any information about w. The soundness property ensures that no proof can convince the verifier of the validity of a false statement, i.e. a statement \(x \notin \mathcal {L}_R\). We require the existence of an extractor that efficiently recovers a witness from a valid proof \(\pi \) of a statement x, using an extraction trapdoor. Such proof systems are called proofs of knowledge. We focus on NIZK for relations R where the sizes of all statements and witnesses are bounded, which we call size-bounded relations. We now give the formal definition of a NIZK proof of knowledge.

Definition 5 (NIZK-PoK)

Let R be a size-bounded relation. A Non-Interactive Zero-Knowledge Proof of Knowledge (NIZK-PoK) for R consists of the following PPT algorithms:

  • \(\textsf{Setup} (1^\lambda )\): on input the security parameter, it outputs a common reference string \(\textsf{crs} \) and an extraction trapdoor \(\textsf{td}_{\textsf{ext}} \).

  • \(\textsf{Prove} (\textsf{crs}, x, w)\): on input \(\textsf{crs} \), a statement x and a witness w, it outputs an argument \(\pi \).

  • \(\textsf{Verify} (\textsf{crs}, x, \pi )\): on input \(\textsf{crs} \), a statement x and an argument \(\pi \), it deterministically outputs a bit representing acceptance (1) or rejection (0).

The PPT algorithms satisfy the following properties.

Composable Zero-Knowledge. There exist two PPT algorithms \(\textsf{SimSetup}\) and \(\textsf{Sim}\) such that for all PPT adversaries \(\mathcal {A} \), the following advantages \(\textsf{Adv} ^{\textsf{crs}}_{\varPi }(\lambda ,\mathcal {A})\) and \(\textsf{Adv} ^{\textsf {ZK}}_{\varPi }(\lambda ,\mathcal {A})\) are negligible in \(\lambda \):

$$\begin{aligned} \textsf{Adv} ^{\textsf{crs}}_{\varPi }(\lambda ,\mathcal {A})=\Big |1/2- \Pr \Big [ &(\textsf{crs},\textsf{td}_{\textsf{ext}}) \leftarrow \textsf{Setup} (1^\lambda ), (\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) \leftarrow \textsf{SimSetup}(1^\lambda ), \\ &b \leftarrow \{0,1\}, \textsf{crs} _0 = \textsf{crs}, \textsf{crs} _1 = \textsf{crs}_{\textsf{sim}}, b' \leftarrow \mathcal {A} (\textsf{crs} _b) : b'=b\Big ]\Big |. \end{aligned}$$
$$\begin{aligned} \textsf{Adv} ^{\textsf {ZK}}_{\varPi }(\lambda ,\mathcal {A})=\Big |1/2- \Pr \Big [&(x,w) \leftarrow \mathcal {A} (1^\lambda ), (\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) \leftarrow \textsf{SimSetup}(1^\lambda ), \\ & \pi _0 \leftarrow \textsf{Prove} (\textsf{crs}_{\textsf{sim}},x,w), \pi _1 \leftarrow \textsf{Sim}(\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}},x), \\ & b \leftarrow \{0,1\},b' \leftarrow \mathcal {A} (\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}},\pi _b) : R(x,w)=1 \ \wedge \ b'=b\Big ]\Big |. \end{aligned}$$

Completeness on Simulated CRS. For all efficient adversaries \(\mathcal {A} \), the following advantage is negligible in the security parameter \(\lambda \in \mathbb {N}\): \(\Pr \Big [(x,w) \leftarrow \mathcal {A} (1^\lambda ), (\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) \leftarrow \textsf{NIZK}.\textsf{SimSetup}(1^\lambda ), \pi \leftarrow \textsf{NIZK}.\textsf{Prove} (\textsf{crs}_{\textsf{sim}},x,w): R(x,w)=1 \ \wedge \ \textsf{NIZK}.\textsf{Verify} (\textsf{crs}_{\textsf{sim}},x,\pi )= 0\Big ]\).

Knowledge-Soundness. There exists an efficient algorithm \(\textsf{Extract}\) such that the following probability \(\nu _{\textsf{sound}}(\lambda )\) is a negligible function of \(\lambda \in \mathbb {N}\), defined as:

$$\begin{aligned} \nu _{\textsf{sound}}(\lambda )=\Pr \Big [&(\textsf{crs},\textsf{td}_{\textsf{ext}}) \leftarrow \textsf{Setup} (1^\lambda ): \exists \ \pi ,x,w \in \textsf{Supp} (\textsf{Extract}(\textsf{crs},\textsf{td}_{\textsf{ext}},x,\pi ))\\ & \ s.t. \ \textsf{Verify} (\textsf{crs},x,\pi )=1 \ \wedge \ R(x,w)=0\Big ]. \end{aligned}$$

We say subexponential knowledge-soundness holds if \(\nu _{\textsf{sound}}\) is subexponentially small in the security parameter \(\lambda \).

2.5 Fully Homomorphic Signatures

We recall the definition of Fully-Homomorphic Signature (FHS), which was originally given in [BF11a]. When many datasets are present, the signing algorithm takes as an additional input a tag \(\tau \) that identifies the dataset that is being signed. Only signatures issued for the same tag can be combined together. For simplicity, we focus on the single dataset setting here (where there are no tags), since [GVW15] showed how to generically transform any FHS for single dataset to many datasets. This transformation relies on regular (non-homomorphic) signature schemes. Again for simplicity, we focus on bit messages and Boolean functions.

Definition 6 (FHS, Single Dataset)

An FHS scheme is a tuple of PPT algorithms \(\varSigma =(\textsf{KeyGen},\textsf{Sign},\textsf{Verify},\textsf{Eval})\), such that:

  • \(\textsf{KeyGen} (1^\lambda ,1^{N})\): on input the security parameter \(\lambda \) and a data-size bound \(N \), it generates a public verification key \(\textsf{vk} \), along with a secret signing key \(\textsf{sk} \).

  • \(\textsf{Sign} (\textsf{sk},m,i)\): on input the secret key \(\textsf{sk} \), a message \(m\in \{0,1\} \) and an index \(i \in [N ]\), it outputs a signature \(\sigma \).

  • \(\textsf{Eval} (\textsf{vk},f,(m_1,\sigma _1),\ldots ,(m_N,\sigma _N))\): on input the public key \(\textsf{vk} \), a function f of arity \(N \) and pairs \((m_i,\sigma _i)\), it deterministically outputs an evaluated signature \(\sigma \) of the message \(f(m_1,\ldots ,m_N)\).

  • \(\textsf{Verify} (\textsf{vk},f,y,\sigma ):\) on input the public key \(\textsf{vk} \), a function f, a value y and a signature \(\sigma \), it outputs a bit. 0 means the signature \(\sigma \) is deemed invalid, 1 means it is considered valid.

The algorithms satisfy the following properties.

Perfect Signing Correctness. For all \(\lambda ,N \in \mathbb {N}\), all pairs \((\textsf{vk},\textsf{sk})\) in the support of \(\textsf{KeyGen} (1^\lambda ,1^N)\), all \(i \in [N ]\), all messages \(m\in \{0,1\} \), all signatures \(\sigma \) in the support of \(\textsf{Sign} (\textsf{sk},m,i)\), we have \(\textsf{Verify} (\textsf{vk},\textsf{id} _i,m,\sigma )=1\), where \(\textsf{id} _i\) is the projection function that takes \(N \) messages \(m_1,\ldots ,m_N \in \{0,1\} \), and outputs the i’th message \(m_i\).

In our scheme, we achieve a weaker, computational variant of the correctness property, which roughly states that an efficient algorithm cannot find messages (with more than negligible probability) on which properly generated signatures do not verify successfully.

Computational Signing Correctness. For all efficient algorithms \(\mathcal {A} \), the following probability, defined for all \(\lambda ,N \in \mathbb {N}\), is negligible in \(\lambda \): \(\Pr [(\textsf{vk},\textsf{sk}) \leftarrow \textsf{KeyGen} (1^\lambda ,1^N), (m_1,\ldots ,m_N) \leftarrow \mathcal {A} (\textsf{vk}), \forall i \in [N ], \sigma _i \leftarrow \textsf{Sign} (\textsf{sk},m_i,i): \exists \ i \in [N ] \ s.t.\ \textsf{Verify} (\textsf{vk},\textsf{id} _i,m_i,\sigma _i)= 0]\).

Perfect Evaluation Correctness. For all \(\lambda ,N \in \mathbb {N}\), all pairs \((\textsf{vk},\textsf{sk})\) in the support of \(\textsf{KeyGen} (1^\lambda ,1^N)\), all messages \(m_1,\ldots ,m_N \in \{0,1\} \), all signatures \(\sigma _1,\ldots ,\sigma _N \) in the support of \(\textsf{Sign} (\textsf{sk},m_1,1),\ldots ,\textsf{Sign} (\textsf{sk},m_N,N)\) respectively, for all functions f of arity \(N \), writing \(\sigma _f = \textsf{Eval} (\textsf{vk},f,(\sigma _1,m_1),\ldots ,(\sigma _N,m_N))\) and \(y=f(m_1,\ldots ,m_N)\), we have \(\textsf{Verify} (\textsf{vk},f,y,\sigma _f)=1\). Moreover, it is possible to perform additional homomorphic operations on signatures that are themselves the result of homomorphic evaluation. That is, correctness holds when functions are composed. Namely, for all \(\ell \in \mathbb {N}\), all functions g of arity \(\ell \), all tuples \((\sigma _1,f_1,m_1),\ldots ,(\sigma _\ell ,f_\ell ,m_\ell )\) such that for all \(i \in [\ell ]\), \(\textsf{Verify} (\textsf{vk},f_i,m_i,\sigma _i)=1\), writing \(\textsf{Eval} (\textsf{vk},g,(m_1,\sigma _1),\ldots ,(m_\ell ,\sigma _\ell ))=\sigma \) and \(y=g(m_1,\ldots ,m_\ell )\), we have \(\textsf{Verify} (\textsf{vk},g,y,\sigma )=1\).

Similarly to signing correctness, we define a computational variant of evaluation correctness. For simplicity, we split the property in two: the first is a computational evaluation correctness that only considers one-shot homomorphic evaluation, and does not take into account the possibility of performing homomorphic evaluations in several steps, i.e. composing functions. The second property, called weak context hiding, states that composing functions using \(\textsf{Eval} \) many times yields the same signature as using \(\textsf{Eval} \) once on the composed function. The (non-weak) context hiding property additionally requires that evaluated signatures be independent of the underlying dataset, apart from the output of the evaluated function.

Computational Evaluation Correctness. For all efficient algorithms \(\mathcal {A} \), the following probability, defined for all \(\lambda , N \in \mathbb {N}\), is negligible in \(\lambda \): \(\Pr [(\textsf{vk},\textsf{sk}) \leftarrow \textsf{KeyGen} (1^\lambda ,1^N), (m_1,\ldots ,m_N,f) \leftarrow \mathcal {A} (\textsf{vk}), \forall i \in [N ], \sigma _i \leftarrow \textsf{Sign} (\textsf{sk},m_i,i), \sigma _f \leftarrow \textsf{Eval} (\textsf{vk},f,(m_1,\sigma _1),\ldots ,(m_N,\sigma _N)), y=f(m_1,\ldots ,m_N): \textsf{Verify} (\textsf{vk},f,y,\sigma _f)= 0]\).

Weak Context Hiding. For all \(\lambda , N, t, \ell \in \mathbb {N}\), all \((\textsf{vk},\textsf{sk})\) in the support of \(\textsf{KeyGen} (1^\lambda ,1^N)\), all messages \(m_1,\ldots ,m_t \in \{0,1\} \), functions \(\theta _1,\ldots ,\theta _t\) and signatures \(\sigma _1,\ldots ,\sigma _t\) such that for all \(i \in [t]\), \(\textsf{Verify} (\textsf{vk},\theta _i,m_i,\sigma _i)=1\), all t-ary functions \(f_1,\ldots ,f_\ell \), all \(\ell \)-ary functions g, we have:

$$\begin{aligned} \sigma _{g \circ \vec {f}} = \sigma _h, \end{aligned}$$

where \(\sigma _{g \circ \vec {f}} = \textsf{Eval} (\textsf{vk},g,(\sigma _{f_1},f_1(\vec {m})),\ldots ,(\sigma _{f_\ell },f_\ell (\vec {m})))\), \(\sigma _{f_j} = \textsf{Eval} (\textsf{vk},f_j,(\sigma _1,m_1),\ldots ,(\sigma _t,m_t))\) for all \(j \in [\ell ]\), \(\sigma _h = \textsf{Eval} (\textsf{vk},h,(\sigma _1,m_1),\ldots ,(\sigma _t,m_t))\), and h is the t-ary function defined on any input \(m_1,\ldots ,m_t\) as \(h(\vec {m}) = g(f_1(\vec {m}),\ldots ,f_\ell (\vec {m}))\), which we denote by \(h = g \circ \vec {f}\). We also use the notation \(\vec {m} = (m_1,\ldots ,m_t)\).
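For instance (a hypothetical toy case with \(t=\ell =2\)), taking \(f_1=\textsf{AND} \), \(f_2=\textsf{XOR} \) and \(g=\textsf{NAND} \), weak context hiding requires

$$\begin{aligned} \textsf{Eval} \big (\textsf{vk},g,(\sigma _{f_1},m_1\wedge m_2),(\sigma _{f_2},m_1\oplus m_2)\big ) = \textsf{Eval} \big (\textsf{vk},h,(\sigma _1,m_1),(\sigma _2,m_2)\big ), \end{aligned}$$

where h is the composed function \(h(m_1,m_2)=\textsf{NAND} (m_1\wedge m_2,\ m_1\oplus m_2)\) and \(\sigma _{f_1},\sigma _{f_2}\) are the one-step evaluated signatures.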

Pre-processing. The scheme can be endowed with a pre-processing algorithm \(\textsf{Process} \). Just like the FHS scheme from [GVW15], our \(\textsf{Verify} \) algorithm works in two steps. The first step only depends on the inputs \(\textsf{vk} \) and f. Thus, it can be run offline, before knowing the signature \(\sigma \) and message y to verify. It produces a short processed \(\textsf{vk} \), denoted by \(\alpha _f\) (whose size is independent of the size of f). This first phase constitutes the \(\textsf{Process} \) algorithm. The second, online step takes as input \(\alpha _f\), y and \(\sigma \) and outputs a bit. The online step runs in time independent of the complexity of f.

Adaptive Unforgeability. For all stateful PPT adversaries \(\mathcal {A} \) and all data bound \(N \in \mathbb {N}\), the advantage \(\textsf{Adv} ^\textsf{forg}_{\varSigma }(\lambda ,\mathcal {A})\) defined below is a negligible function of the security parameter \(\lambda \in \mathbb {N}\):

$$\begin{aligned} \textsf{Adv} ^\textsf{forg}_{\varSigma }(\lambda ,\mathcal {A})=\Pr \Big [&(\textsf{sk},\textsf{vk}) \leftarrow \textsf{KeyGen} (1^\lambda ,1^N), (m_1,\ldots ,m_N) \leftarrow \mathcal {A} (\textsf{vk}),\\ & \forall i \in [N ], \sigma _i \leftarrow \textsf{Sign} (\textsf{sk},m_i,i), (f,y,\sigma ^\star ) \leftarrow \mathcal {A} (\sigma _1,\ldots ,\sigma _N): \\ &\textsf{Verify} (\textsf{vk},f,y,\sigma ^\star )=1 \wedge y\ne f(m_1,\ldots ,m_N)\Big ]. \end{aligned}$$

Selective unforgeability is defined identically, except that the adversary \(\mathcal {A} \) must send the messages \(m_1,\ldots ,m_N \) of its choice before seeing the public key \(\textsf{vk} \).

3 Construction

We describe our unleveled \(\textsf{FHS}\) scheme in Fig. 5. We choose to focus on single-dataset FHS (as per Definition 6) rather than multi-dataset FHS for simplicity, since the work of [GVW15] presents a generic transformation from single to multi datasets, relying only on (non-homomorphic) signatures. Our FHS is for bit messages, and can evaluate arbitrary Boolean circuits. Without loss of generality, we focus on evaluating binary NAND gates.
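The restriction to NAND gates is without loss of generality because NAND is functionally complete; the following self-contained Python check makes this explicit (the helper names are ours, purely illustrative):

```python
def nand(a: int, b: int) -> int:
    # Binary NAND on bits.
    return 1 - (a & b)

# NOT, AND, OR, XOR expressed with NAND gates only.
def not_(a): return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b): return nand(not_(a), not_(b))
def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check over all bit pairs.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```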

We use a puncturable \(\textsf{PRF} \), an indistinguishability obfuscator \(\textsf{iO}\), an \(\textsf{FHE}\) scheme and a NIZK-PoK as building blocks, whose definitions are given in the previous section. Our construction can be implemented using the dual-mode NIZK from [GS08] (from pairings) or [HU19] (from iO and lossy trapdoor functions), for instance. The \(\textsf{FHE}\) can be implemented using most lattice-based FHE schemes (with bootstrapping, since the FHE must be unleveled, which requires circular security), or with the construction from [CLTV15], which does not require any circularity assumption (it relies on iO and lossy trapdoor functions). Altogether, if we use the NIZK from [HU19] and the FHE from [CLTV15], we obtain our main result, which follows from Theorem 12 (unforgeability of our FHS).

Theorem 7

(Main Result). Assume the existence of subexponentially secure iO and lossy trapdoor functions. Then subexponentially adaptively unforgeable unleveled FHS exist.

Fig. 5.

Fully-homomorphic signature scheme \(\textsf{FHS} =(\textsf{FHS}.\textsf{KeyGen},\textsf{FHS}.\textsf{Sign},\textsf{FHS}.\textsf{Verify},\textsf{FHS}.\textsf{Eval})\). \(\textsf{PRF} \) is a puncturable pseudo-random function, \(\textsf{NIZK} \) is a proof of knowledge (NIZK PoK), \(\textsf{FHE} \) is a fully-homomorphic encryption scheme, and \(\textsf{iO} \) is an indistinguishability obfuscator. By \(\textsf{stat} _{m,\textsf{ct}}\) we denote the statement which claims that \(\exists \ r \in \mathcal {R}\) such that \(\textsf{ct} =\textsf{FHE}.\textsf{Enc} (\textsf{fpk},m;r)\), where \(\mathcal {R}\) denotes the randomness space of the FHE encryption algorithm. The parameter \(\kappa (\lambda ) = (N +2\log ^2 \lambda + 5)^{1/\varepsilon }\), where \(\varepsilon >0\) is a constant whose existence is ensured by the subexponential security of the underlying building blocks.

3.1 Choice of Parameters

In our FHS, we rely on building blocks \(\textsf{PRF}\), \(\textsf{iO}\), \(\textsf{NIZK}\), \(\textsf{FHE}\) that are subexponentially secure, that is, for which efficient adversaries can succeed in breaking security with advantage at most \(2^{-\kappa ^\varepsilon }\), for a constant \(\varepsilon > 0\), where \(\kappa \) is the parameter chosen to run the setup of these primitives. We denote by \(\kappa _1\) the parameter used for \(\textsf{FHE} \) and by \(\kappa _2\) the parameter used for \(\textsf{PRF}\), \(\textsf{iO}\), and \(\textsf{NIZK}\). Correctness is satisfied as long as Eqs. (1) and (2) hold. Adaptive unforgeability is satisfied as long as Eq. (3) holds. These equations are simultaneously satisfied when:

$$\begin{aligned} \kappa _1&= (N + \log N + 2\log ^2 \lambda )^{1/\varepsilon }\\ \kappa _2&= \big (|\textsf{ct} | + N + \log N + 2\log ^2 \lambda +O(1)\big )^{1/\varepsilon } \end{aligned}$$

where \(|\textsf{ct} |\) denotes the size of the FHE ciphertexts.

3.2 Correctness of the FHS

In this section we prove the computational signing correctness, the computational evaluation correctness, the weak context hiding and the pre-processing property of our scheme, all given in Definition 6.

Lemma 8

(Computational Signing Correctness). The FHS scheme from Fig. 5 satisfies the computational signing correctness as per Definition 6, assuming \(\textsf{NIZK}\) satisfies the subexponential composable zero-knowledge and completeness on simulated crs properties (as per Definition 5), \(\textsf{FHE}\) satisfies the subexponential (selective) IND-CPA security (as per Definition 3), \(\textsf{PRF}\) satisfies the subexponential pseudorandomness at punctured points and the functionality preservation under puncturing (as per Definition 1), and \(\textsf{iO}\) satisfies the correctness and subexponential security properties (as per Definition 4).

Proof

We first explain how to prove the computational signing property in the selective case, where \(\mathcal {A} \) sends the messages \(m_1,\ldots ,m_N \in \{0,1\} \) before receiving \(\textsf{vk} \). In this case, we can prove correctness using a hybrid argument, where we first switch the ciphertexts \(\textsf{ct} '_i\) from \(\textsf{vk} \) to \(\textsf{FHE}.\textsf{Enc} (\textsf{fpk},m_i;r_i)\), using the selective IND-CPA security of \(\textsf{FHE}\). Then, we want to change the way \(\textsf{FHS}.\textsf{Sign} (\textsf{sk},m_i,i)\) computes the ZK proofs, using \(\pi \leftarrow \textsf{NIZK}.\textsf{Prove} (\textsf{crs}_{\textsf{sim}},\textsf{stat} _{m_i,\textsf{ct} '_i},r_i)\), where \(r_i\) is a witness for \(\textsf{stat} _{m_i,\textsf{ct} '_i}\), instead of producing \(\pi \leftarrow \textsf{NIZK}.\textsf{Sim} (\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}},\textsf{stat} _{m_i,\textsf{ct} '_i})\). This change would be justified by the composable zero knowledge property of NIZK. Finally, we would conclude the correctness proof using the completeness of \(\textsf{NIZK}\) on the simulated \(\textsf{crs}_{\textsf{sim}}\). To perform these changes, we first need to puncture the PRF key \(K_1\) on the point 0, and hardcode the pair \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};\textsf{PRF} (K_1,0))\) in the obfuscated circuits (which relies on the functionality preservation under puncturing of \(\textsf{PRF}\) and the security of \(\textsf{iO}\)), then switch the value \(\textsf{PRF} (K_1,0)\) to truly random (which relies on the pseudorandomness at punctured points of \(\textsf{PRF}\)). Then, we can switch the way the proof \(\pi \) is computed by \(\textsf{FHS}.\textsf{Sign} (\textsf{sk},m_i,i)\) as we explained, using the composable zero-knowledge property of \(\textsf{NIZK} \). Finally, we use the completeness on simulated crs property of \(\textsf{NIZK}\).
To obtain correctness in the adaptive case, where \(\mathcal {A} \) can choose the messages \(m_1,\ldots ,m_N \) after seeing \(\textsf{vk} \), we simply guess all the messages \(m_i\) in advance, which incurs a security loss of \(2^N \). Since we assume subexponential security of the underlying building blocks, we know that an adversary against the selective correctness can only succeed with probability at most \(N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon }\) for some constant \(\varepsilon >0\), where \(\kappa _1\) is the parameter used for \(\textsf{FHE}\), and \(\kappa _2\) is the parameter used for \(\textsf{NIZK}\), \(\textsf{PRF}\) and \(\textsf{iO}\). Note that \(\varepsilon \) does not depend on \(N \), so we can choose \(\kappa _1,\kappa _2\) as polynomials in the security parameter \(\lambda \) and the arity \(N \) such that \(2^N (N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon })\) is a negligible function of \(\lambda \), e.g.

$$\begin{aligned} \kappa _1,\kappa _2\ge (N + \log N + \log ^2 \lambda )^{1/\varepsilon }. \end{aligned}$$
(1)
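To spell out the arithmetic behind this choice (a sketch, using that Eq. (1) gives \(\kappa _1^\varepsilon ,\kappa _2^\varepsilon \ge N + \log N + \log ^2 \lambda \)):

$$\begin{aligned} 2^N \big (N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon }\big ) \le 2^{N + \log N - \kappa _1^\varepsilon } + 2^{N+2-\kappa _2^\varepsilon } \le 5 \cdot 2^{-\log ^2 \lambda } = 5\lambda ^{-\log \lambda }, \end{aligned}$$

which is indeed a negligible function of \(\lambda \).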

Lemma 9

(Computational Evaluation Correctness). The FHS scheme from Fig. 5 satisfies the computational evaluation correctness as per Definition 6, assuming NIZK satisfies the subexponential zero-knowledge and completeness on simulated crs properties (as per Definition 5), \(\textsf{FHE}\) satisfies the subexponential (selective) IND-CPA security and the randomness homomorphism properties (as per Definition 3), \(\textsf{PRF}\) satisfies the subexponential pseudorandomness at punctured points and the functionality preservation under puncturing (as per Definition 1), and \(\textsf{iO}\) satisfies the subexponential security and the perfect correctness properties (as per Definition 4).

Proof

First, we prove the evaluation correctness in the selective case where the adversary \(\mathcal {A} \) sends the messages \(m_1,\ldots ,m_N \) and the depth d of the circuit f before seeing the public key \(\textsf{vk} \). Then, \(\mathcal {A} \) receives \(\textsf{vk} \) and chooses the circuit f of depth d. To obtain computational evaluation correctness in the adaptive setting where \(\mathcal {A} \) can choose f and the messages \(m_1,\ldots ,m_N \) after seeing \(\textsf{vk} \) (as per Definition 6), we use a guessing argument together with the subexponential security of the underlying building blocks, similarly to the proof of the signing correctness. Namely, we choose a superpolynomial function \(L(\lambda )\), e.g. \(L(\lambda )=2^{\log ^2 \lambda }\), and we guess the messages \(m_1,\ldots ,m_N \) at random over \(\{0,1\} ^N\) and the depth d at random between 1 and \(L(\lambda )\). Because we choose \(L(\lambda )\) superpolynomial, we know that the depth d chosen by \(\mathcal {A} \) is less than \(L(\lambda )\), so the guess of the depth is correct with probability \(1/L(\lambda )\). Overall the guessing incurs a security loss of \(2^N L(\lambda )\).

Now we prove the selective variant of computational evaluation correctness. To begin with, we switch the ciphertexts \(\textsf{ct} '_i\) in \(\textsf{vk} \) to FHE encryptions of \(m_i\) of the form \(\textsf{FHE}.\textsf{Enc} (\textsf{fpk},m_i;r_i)\), using the selective IND-CPA security of \(\textsf{FHE}\), just as in the computational signing correctness proof. Moreover, by perfect correctness of \(\textsf{iO}\), we know that an evaluated signature \(\sigma _f = \textsf{Eval} (\textsf{vk},f,(\sigma _1,m_1),\ldots ,(\sigma _N,m_N))\) is of the form \(\sigma _f = (\textsf{ct},\pi ,d)\) where \(\textsf{ct} = \textsf{FHE}.\textsf{Eval} (\textsf{fpk},f,\textsf{ct} '_1,\ldots ,\textsf{ct} '_N)\), and d is the depth of f. By evaluation correctness of \(\textsf{FHE}\), we know that \(\textsf{ct} \) is an encryption of the message \(f(m_1,\ldots ,m_N)\). In fact, by the randomness homomorphism property of \(\textsf{FHE}\), we know that \(\textsf{ct} =\textsf{FHE}.\textsf{Enc} (\textsf{fpk},f(m_1,\ldots ,m_N);r_f)\) where \(r_f = \textsf{FHE}.\textsf{EvalRand} (\textsf{fsk},r_1,\ldots ,r_N,m_1,\ldots ,m_N,f)\). Then, we want to switch the way the proof \(\pi \) in \(\sigma _f\) is computed: using \(\textsf{NIZK}.\textsf{Prove} \) and the witness \(r_f\) instead of using \(\textsf{NIZK}.\textsf{Sim} \) and the simulation trapdoor \(\textsf{td}_{\textsf{sim}} \). This switch would be justified by the composable zero-knowledge property of \(\textsf{NIZK}\). We would then conclude the proof using the completeness of \(\textsf{NIZK}\) on simulated crs. However, to use these properties of \(\textsf{NIZK}\), we first need to generate \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}})\) of level d using truly random coins, as opposed to pseudo-random ones.
As is typical, this requires puncturing the PRF key \(K_1\) and hardcoding the pair \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};\textsf{PRF} (K_1,d))\) in the obfuscated circuits (thanks to the security of \(\textsf{iO}\) and the functionality preservation under puncturing of \(\textsf{PRF}\)), then switching the value \(\textsf{PRF} (K_1,d)\) to truly random (thanks to the pseudorandomness at punctured points property of \(\textsf{PRF}\)). Afterwards, we can use the properties of NIZK to conclude the proof, as explained above.

Since we assume subexponential security of the underlying building blocks, we know that an adversary against selective computational evaluation correctness can succeed with probability at most \(N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon }\) for some constant \(\varepsilon >0\), where \(\kappa _1\) is the parameter used for \(\textsf{FHE}\), and \(\kappa _2\) is the parameter used for \(\textsf{NIZK}\), \(\textsf{PRF}\) and \(\textsf{iO}\). Note that \(\varepsilon \) does not depend on \(N \), so we can choose \(\kappa _1,\kappa _2\) as polynomials in the security parameter \(\lambda \) and the arity \(N \) such that \(2^N L(\lambda ) (N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon })\) is a negligible function of \(\lambda \), e.g.

$$\begin{aligned} \kappa _1,\kappa _2\ge (N + \log N + 2\log ^2 \lambda )^{1/\varepsilon }. \end{aligned}$$
(2)
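The parameter constraint above can be sanity-checked numerically. The following sketch (Python, with illustrative values of \(\varepsilon \), \(N \) and \(\lambda \); the constant \(\varepsilon \) is supplied by the subexponential security assumption and is not fixed by the scheme) evaluates the total loss \(2^N L(\lambda ) (N \cdot 2^{-\kappa _1^\varepsilon }+4 \cdot 2^{-\kappa _2^\varepsilon })\) for the choice of \(\kappa _1,\kappa _2\) in (2):

```python
import math

def total_loss(N, lam, kappa1, kappa2, eps):
    """Upper bound 2^N * L(lam) * (N * 2^{-kappa1^eps} + 4 * 2^{-kappa2^eps}),
    with L(lam) = 2^{log^2 lam}; computed in log2 to avoid overflow."""
    log_L = math.log2(lam) ** 2          # log2 of L(lam)
    t1 = N + log_L + math.log2(N) - kappa1 ** eps
    t2 = N + log_L + 2 - kappa2 ** eps
    return 2.0 ** t1 + 2.0 ** t2

eps = 0.5                                # illustrative subexponential constant
N, lam = 64, 128                         # illustrative arity and security parameter
# Choice (2): kappa >= (N + log N + 2 log^2 lam)^{1/eps}
kappa = math.ceil((N + math.log2(N) + 2 * math.log2(lam) ** 2) ** (1 / eps))
bound = total_loss(N, lam, kappa, kappa, eps)
assert bound < 2.0 ** -40                # comfortably negligible for these values
```

With these toy values, \(\kappa = 168^2 = 28224\) and the loss is below \(2^{-48}\); larger \(N \) or smaller \(\varepsilon \) simply push \(\kappa \) up polynomially.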

Lemma 10

(Weak Context Hiding). The FHS scheme from Fig. 5 satisfies the weak context hiding property as per Definition 6, assuming the perfect correctness of \(\textsf{iO}\).

Proof

This property follows straightforwardly from the description of the \(\textsf{Eval} \) algorithm and the correctness of \(\textsf{iO}\). Indeed, \(\textsf{Eval} \) evaluates circuits gate by gate, using the \(\textsf{Eval NAND} \) algorithm (see Fig. 5), which performs a deterministic evaluation on the FHE ciphertexts and then derives a ZK proof deterministically from the statement and the depth level (using \(\textsf{PRF} \) with the key \(K_2\)). Thus, we have \(\sigma _{g \circ \vec {f}} = \sigma _h\).
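The argument can be illustrated with a toy model of the gate-by-gate evaluation (Python; the tuple \((v,d,\mathrm {tag})\) and the hash-derived tag are illustrative stand-ins for the FHE ciphertext and the PRF-derived proof, not the actual scheme): since each \(\textsf{Eval NAND} \) step is deterministic, evaluating g on the output of f yields bit-for-bit the same signature as evaluating the composed circuit \(h = g \circ \vec {f}\) directly.

```python
import hashlib

# Toy model of the deterministic, gate-by-gate Eval: a "signature" is a tuple
# (bit, depth, tag), where the hash-derived tag stands in for the FHE
# ciphertext plus the PRF-derived proof. Illustrative only.

def eval_nand(sig_a, sig_b):
    (va, da, ta), (vb, db, tb) = sig_a, sig_b
    d = max(da, db) + 1                          # depth of the new wire
    v = 1 - (va & vb)                            # NAND on the underlying bit
    tag = hashlib.sha256(f"{ta}|{tb}|{d}".encode()).hexdigest()
    return (v, d, tag)

def eval_circuit(gates, input_sigs):
    """gates: list of (i, j) wire-index pairs; each gate appends a new wire."""
    wires = list(input_sigs)
    for i, j in gates:
        wires.append(eval_nand(wires[i], wires[j]))
    return wires[-1]

sigs = [(1, 0, "s0"), (0, 0, "s1"), (1, 0, "s2")]    # fresh level-0 signatures
sig_f = eval_circuit([(0, 1)], sigs)                  # sigma_f = Eval(f, ...)
sig_g = eval_nand(sig_f, sigs[2])                     # Eval(g, sigma_f, sigma_2)
sig_h = eval_circuit([(0, 1), (3, 2)], sigs)          # Eval(g o f, ...) directly
assert sig_g == sig_h                                 # weak context hiding
```

Because no step consumes randomness, the two evaluation orders produce identical signatures, which is exactly the equality \(\sigma _{g \circ \vec {f}} = \sigma _h\).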

Lemma 11

(Pre-processing). The FHS scheme from Fig. 5 satisfies the pre-processing property as per Definition 6.

Proof

This simply follows from the description of \(\textsf{FHS}.\textsf{Verify} \). First, during a pre-processing phase, it computes the values \(\textsf{ct} _f\) and \(\textsf{crs} \) from \(\textsf{vk} \) and f. This can be performed offline, since it does not require knowledge of the message y or the signature \(\sigma \). The result is a short pre-processed key \(\alpha _f = (\textsf{ct} _f,\textsf{crs})\). Then, during the online phase, \(\textsf{FHS}.\textsf{Verify} \) uses \(\alpha _f\), \(\sigma \) and y to run the \(\textsf{NIZK}.\textsf{Verify} \) algorithm. The running time of this online phase is independent of the size or depth of f.
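A minimal sketch of this offline/online split (Python; all names and the hash-based modelling of \(\textsf{ct} _f\), \(\textsf{crs} \) and the proof check are illustrative stand-ins, not the actual algorithms):

```python
import hashlib

def H(*parts):
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

def preprocess(vk, f):
    """Offline phase: derive ct_f and crs from vk and f; the cost here grows
    with the description of f (both names and the hashing are illustrative)."""
    ct_f = H("ct", vk, f)
    crs = H("crs", vk, len(f))        # len(f) stands in for the depth of f
    return (ct_f, crs)                # short pre-processed key alpha_f

def verify_online(alpha_f, y, sigma):
    """Online phase: a single check, independent of the size of f."""
    ct_f, crs = alpha_f
    return sigma == H("proof", crs, ct_f, y)

vk = "toy-vk"
f = "NAND;" * 1000                    # a large circuit description
alpha_f = preprocess(vk, f)           # done once, offline
sigma = H("proof", alpha_f[1], alpha_f[0], 1)   # honestly derived "proof" for y = 1
assert verify_online(alpha_f, 1, sigma)
assert not verify_online(alpha_f, 0, sigma)
```

Only `preprocess` touches f; `verify_online` reads the short key \(\alpha _f\), so its cost stays constant however large f grows.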

4 Proof of Unforgeability

Theorem 12

(Adaptive Unforgeability). Assuming subexponential security of \(\textsf{PRF}\), \(\textsf{FHE}\), \(\textsf{iO}\), and \(\textsf{NIZK}\), the FHS from Fig. 5 satisfies subexponential unforgeability as per Definition 6.

Proof of Theorem 12. We first prove the selective unforgeability (as per Definition 6), where the adversary \(\mathcal {A} \) must send the messages \(m_1,\ldots ,m_N \) before receiving \(\textsf{vk} \). Then we show how to obtain adaptive unforgeability using a guessing argument and the subexponential security of the underlying building blocks (just as in the proof of computational signing and evaluation correctness in the previous section).

To prove unforgeability in the selective setting, we use a sequence of hybrid games, starting with \(\textsf{G} _0\), defined exactly as the selective unforgeability game from Definition 6. For any game \(\textsf{G} _i\), we denote by \(\textsf{Adv} _i(\mathcal {A})\) the advantage of \(\mathcal {A} \) in \(\textsf{G} _i\), that is, \(\Pr [\textsf{G} _i(1^{\lambda },\mathcal {A}) = 1]\), where the probability is taken over the random coins of \(\textsf{G} _i\) and \(\mathcal {A} \). Before we proceed to describe the other hybrids, we make several technical remarks.

Remark 13

When we hardcode a value in a subprogram, it is understood that this value is also hardcoded in all the programs that run it, and if a \(\textsf{PRF}\) key K is punctured in a subprogram, it is also punctured in all the programs that run it.

Remark 14

(Padding the programs). The security of \(\textsf{iO}\) can only be invoked for programs of the same size. For brevity, we assume without loss of generality that all programs in the security proof are padded to the size of the longest program. Since our hybrids extend up to a superpolynomial level \(L(\lambda )=2^{\omega (\log \lambda )}\), this implies a small increase in the size of the programs contained in the real verification key (since the last hybrid must keep track of the level, and its bit representation requires \(\omega (\log \lambda )\) bits). For example, choosing \(L(\lambda )=2^{\log ^2 \lambda }\) only requires \(\log ^2 \lambda \) bits to represent the level.

Remark 15

(Bounding the Sizes of Punctured PRF Keys). The security proof will require that the PRF keys \(K_1\) and \(K_2\) are punctured at levels \(i=0,\ldots ,L(\lambda )\), where \(L(\lambda )=2^{\log ^2 \lambda }\). Puncturing increases the size of the keys. In existing constructions of PRFs (e.g. [GGM84]), the size of the punctured keys only grows logarithmically with the number of levels. This results in a size increase of the keys (and therefore of the programs) of up to \(O(\log ^2 \lambda )\). In particular, it is important to note that this size increase is independent of the specific level at which the adversary will output a forgery.
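For concreteness, here is a minimal GGM-style puncturable PRF sketch (Python, with SHA-256 standing in for the length-doubling PRG; illustrative only, not the construction used in the scheme). The punctured key consists of one off-path sibling seed per tree level, so its size grows with the logarithm of the domain size, and evaluation is preserved on all non-punctured points:

```python
import hashlib

# Minimal GGM-style puncturable PRF over {0, ..., 2^DEPTH - 1}, with SHA-256
# as the length-doubling PRG. Illustrative sketch only.

DEPTH = 12                                        # log2 of the number of levels

def G(seed, bit):
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key, x):
    s = key
    for i in reversed(range(DEPTH)):              # walk the tree, MSB first
        s = G(s, (x >> i) & 1)
    return s

def puncture(key, x):
    """Punctured key: the off-path sibling seed at each level of the path to x."""
    pk, s = [], key
    for i in reversed(range(DEPTH)):
        b = (x >> i) & 1
        pk.append((i, 1 - b, G(s, 1 - b)))
        s = G(s, b)
    return pk

def prf_punctured(pk, x):
    for level, bit, seed in pk:                   # find where x leaves the path
        if (x >> level) & 1 == bit:
            s = seed
            for i in reversed(range(level)):
                s = G(s, (x >> i) & 1)
            return s
    raise ValueError("x is the punctured point")

key = b"\x00" * 32
pk = puncture(key, 5)
assert len(pk) == DEPTH                           # key size: one seed per level
assert all(prf_punctured(pk, x) == prf(key, x)    # functionality preserved
           for x in [0, 4, 7, 2**DEPTH - 1])
```

The punctured key has exactly `DEPTH` entries, i.e. its size is logarithmic in the number of levels, matching the \(O(\log ^2 \lambda )\) bound for \(L(\lambda )=2^{\log ^2 \lambda }\).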

  • Game \(\textsf{G} _1\): same as \(\textsf{G} _0\), except that we change the \(\textsf{FHS}.\textsf{KeyGen} \) algorithm. Instead of computing the \(\textsf{ct} '_i\) in the verification key as encryptions of 0, we compute \(\textsf{ct} '_i \leftarrow \textsf{FHE}.\textsf{Enc} (m_i;r_i)\), where \(m_i\) are the messages sent by \(\mathcal {A} \). The randomness \(r_i\) used to compute the ciphertext \(\textsf{ct} '_i\) is stored in the secret key \(\textsf{sk} \).

Lemma 16

(From \(\textsf{G} _{0}\) \(\textbf{to}\) \(\textsf{G} _{1}\)). For every PPT adversary \(\mathcal {A}\), there exists a PPT adversary \(\mathcal {B}\), such that: \(|\textsf{Adv} _{0}({\mathcal {A}})-\textsf{Adv} _{1}({\mathcal {A}})|\le \textsf{Adv} ^{\textsf{FHE}}_{\mathsf {IND\text {-}CPA}}(\kappa _1,\mathcal {B})\).

Proof

The reduction \(\mathcal {B}\) starts by sending \((0,\ldots ,0)\) and \((m_1,\ldots ,m_{N})\) to the IND-CPA challenger. It receives \((\textsf{ct} '_1,\ldots ,\textsf{ct} '_{N})\), which it embeds in \(\textsf{vk} \). During the execution of \(\textsf{FHS}.\textsf{KeyGen} \), all the other obfuscated programs in \(\textsf{vk} \) are generated as before, but using the ciphertexts received from the challenger.

  • Game \(\textsf{G} _2\): same as \(\textsf{G} _1\), except that we change the \(\textsf{FHS}.\textsf{Sign} \) algorithm and replace it with \(\textsf{Hybrid Sign} \), defined in Fig. 6. The latter computes the signatures \(\sigma _1,\ldots ,\sigma _N \) sent to \(\mathcal {A} \) (after \(\mathcal {A} \) sends the messages \(m_1,\ldots ,m_N \)) as \(\sigma _i = (\textsf{ct} '_i,\pi _i,0)\) where \(\textsf{ct} '_i=\textsf{FHE}.\textsf{Enc} (\textsf{fpk},m_i;r_i)\) is the i’th FHE encryption contained in \(\textsf{vk} \), 0 indicates the level, and \(\pi _i\) is computed using the witness \(r_i\) (which is stored in \(\textsf{sk} \)), instead of using a simulation trapdoor.

Lemma 17

(From \(\textsf{G} _{1}\) to \(\textsf{G} _{2}\)). For every PPT adversary \(\mathcal {A} \), there exist PPT adversaries \(\mathcal {B}_1\), \(\mathcal {B}_2\), \(\mathcal {B}_3\) such that:

$$\begin{aligned} |\textsf{Adv} _{1}({\mathcal {A}})-\textsf{Adv} _{2}({\mathcal {A}})| \le 2\big (\textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_1)+\textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_2)\big )+N \cdot \textsf{Adv} _{\textsf {ZK}}(\kappa _2,\mathcal {B}_3). \end{aligned}$$

Proof

To switch from proofs \(\pi _i\) generated using \(\textsf{NIZK}.\textsf{Sim} \) and the simulation trapdoor \(\textsf{td}_{\textsf{sim}} \) to proofs generated using \(\textsf{NIZK}.\textsf{Prove} \) and the witnesses \(r_i\), as described in Fig. 6, we want to use the composable zero-knowledge property of \(\textsf{NIZK}\). To do so, we first have to hard-code the pair \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};\textsf{PRF} (K_1,0))\) in the obfuscated circuit instead of using the key \(K_1\) on the point 0. To generate the pairs \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}})\) for all other levels \(i \ne 0\), we compute \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};\textsf{PRF} (K_1\{0\},i))\), where \(K_1\{0\}\) is a key punctured at the point 0. Because puncturing preserves the functionality of \(\textsf{PRF}\) (as per Definition 1), this does not change the input/output behavior of the obfuscated circuit. Thus, we can use the security of \(\textsf{iO}\) to argue that this change is computationally undetectable by the adversary. Then, we switch the hardcoded pair \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};\textsf{PRF} (K_1,0))\) to \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}}) = \textsf{NIZK}.\textsf{SimSetup}(1^{\kappa _2};r_0)\), where \(r_0\) is truly random. This is possible by the pseudorandomness at punctured points property of \(\textsf{PRF}\). Then, we use the composable zero-knowledge property of \(\textsf{NIZK}\) to switch \(\pi _i\) to \(\pi _i \leftarrow \textsf{NIZK}.\textsf{Prove} (\textsf{crs}_{\textsf{sim}},\textsf{stat} _{\textsf{ct} '_i,m_i},r_i)\) for all \(i \in [N]\).
Finally, we switch the generation of the pairs \((\textsf{crs}_{\textsf{sim}},\textsf{td}_{\textsf{sim}})\) back to using pseudo-random coins for all levels (instead of truly random coins for level 0), and we unpuncture the key \(K_1\).

Fig. 6.

In \(\textsf{G} _{2}\), we replace the \(\textsf{FHS}.\textsf{Sign} \) algorithm with \(\textsf{Hybrid Sign} \). Changes are highlighted in gray.

  • Game \(\textsf{G} _{3,\ell }\): At this point, the proof proceeds in a series of \(L(\lambda )=2^{\log ^2 \lambda }\) hybrids, where \(\textsf{G} _{3,\ell }\) is defined for all \(\ell \in \{0,\ldots ,L(\lambda )\}\) identically to \(\textsf{G} _2\), except that:

    1.

      the program \(\textsf{Gen CRS} \) is replaced by \(\textsf{Hybrid Gen CRS} ^{\ell }\), described in Fig. 7. The latter generates a crs with an extraction trapdoor using \(\textsf{NIZK}.\textsf{Setup} \) on any level \(< \ell \), and generates a simulated crs with a simulation trapdoor using \(\textsf{NIZK}.\textsf{SimSetup}\) on any level \(\ge \ell \).

    2.

      the program \(\textsf{Eval NAND} \) is replaced by \(\textsf{Hybrid Eval NAND} ^{\ell }\), described in Fig. 7. For any level \(<\ell \), the latter generates proofs for the next level using witnesses obtained via an extraction trapdoor and the randomness homomorphism property of \(\textsf{FHE}\). For any level \(\ge \ell \), it generates proofs for the next level using a simulation trapdoor.

    Note that \(\textsf{G} _{3,0}=\textsf{G} _2\). In Theorem 18, we prove that for all \(\ell \in \{0,\ldots ,L(\lambda )-1\}\), \(\textsf{G} _{3,\ell } \approx _c \textsf{G} _{3,\ell +1}\).

  • Game \(\textsf{G} _4\): same as \(\textsf{G} _{3,L(\lambda )}\), except the game guesses the depth of the function f chosen by the adversary \(\mathcal {A} \) for its forgery, by sampling \(d^\star \leftarrow _{\textsc {r}}\{1,\ldots ,L(\lambda )\}\). At the end of the game, \(\mathcal {A} \) sends the forgery \((f,y,\sigma ^\star )\). If \(d^\star \ne d\), then the game \(\textsf{G} _4\) outputs 0. Otherwise it proceeds as in \(\textsf{G} _{3,L(\lambda )}\). Since \(L(\lambda )\) is superpolynomial in \(\lambda \), we know that the function f has depth \(d \le L(\lambda )\). Thus, with probability \(1/L(\lambda )\), the guess is correct, i.e. we have \(d^\star =d\). Therefore,

    $$\begin{aligned} \textsf{Adv} _4(\mathcal {A}) = \frac{\textsf{Adv} _{3,L(\lambda )}({\mathcal {A}})}{L(\lambda )}. \end{aligned}$$
  • Game \(\textsf{G} _5\): same as \(\textsf{G} _4\), except we puncture the key \(K_1\) at \(d^\star \) and hardcode the value \(\textsf{PRF} (K_1,d^\star )\) in the obfuscated circuit. Since puncturing preserves the functionality, we can use the security of \(\textsf{iO}\) to argue that there exists a PPT adversary \(\mathcal {B}_5\) such that:

    $$\begin{aligned} |\textsf{Adv} _5(\mathcal {A}) - \textsf{Adv} _4(\mathcal {A})| = \textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_5). \end{aligned}$$
  • Game \(\textsf{G} _6\): same as \(\textsf{G} _5\), except the value \(\textsf{PRF} (K_1,d^\star )\) hardcoded in the obfuscated circuit is replaced with a truly random value. By the pseudorandomness of \(\textsf{PRF}\) at punctured points, we know there exists a PPT \(\mathcal {B}_6\) such that:

    $$\begin{aligned} |\textsf{Adv} _6(\mathcal {A}) - \textsf{Adv} _5(\mathcal {A})| = \textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_6). \end{aligned}$$

    We now proceed to bound \(\textsf{Adv} _6(\mathcal {A})\). By the knowledge soundness property of \(\textsf{NIZK}\), we know that \(\textsf{Adv} _6(\mathcal {A}) \le \nu _\textsf{sound}(\kappa _2)\). Putting things together, we have \(\textsf{Adv} _4(\mathcal {A}) \le \nu _\textsf{sound}(\kappa _2) + \textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_6) + \textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_5)\) and \(\textsf{Adv} _{3,L(\lambda )}(\mathcal {A}) = L(\lambda )\, \textsf{Adv} _4(\mathcal {A})\). Together with the result of Theorem 18, we have:

    $$\begin{aligned} \textsf{Adv} _0(\mathcal {A}) \le &(2^{|\textsf{ct} |+2}+L(\lambda )+8)\textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_1) + (2^{|\textsf{ct} |+2}+L(\lambda )+6)\textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_2) \\ &+\, \textsf{Adv} _{\textsf{crs}}(\kappa _2,\mathcal {B}_3) + (2^{|\textsf{ct} |+1}+N) \textsf{Adv} _{\textsf {ZK}}(\kappa _2,\mathcal {B}_4)\\ &+\, (L(\lambda )+2)\nu _\textsf{sound}(\kappa _2) + \textsf{Adv} ^{\textsf{FHE}}_{\mathsf {IND\text {-}CPA}}(\kappa _1,\mathcal {B}_5) .\end{aligned}$$

    The subexponential security of the building blocks implies that there exists a constant \(\varepsilon > 0\) such that \(\textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_1),\textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_2),\textsf{Adv} _{\textsf{crs}}(\kappa _2,\mathcal {B}_3),\textsf{Adv} _{\textsf {ZK}}(\kappa _2,\mathcal {B}_4),\nu _\textsf{sound}(\kappa _2) \le 2^{-\kappa _2^\varepsilon }\) and \(\textsf{Adv} ^{\textsf{FHE}}_{\mathsf {IND\text {-}CPA}}(\kappa _1,\mathcal {B}_5) \le 2^{-\kappa _1^\varepsilon }\). Thus, we have

    $$\begin{aligned} \textsf{Adv} _0(\mathcal {A}) \le 2^{-\kappa _2^\varepsilon }(5\cdot 2^{|\textsf{ct} |+1} + 3L(\lambda ) + N + 17) + 2^{-\kappa _1^\varepsilon }. \end{aligned}$$

    Since we chose \(L(\lambda ) = 2^{\log ^2 \lambda }\), selective security can be achieved by choosing for instance

    $$\begin{aligned} \kappa _2&\ge (|\textsf{ct} |+ \log N + 2 \log ^2 \lambda + O(1))^{1/\varepsilon },\\ \kappa _1&\ge (\log ^2 \lambda )^{1/\varepsilon }. \end{aligned}$$

    To achieve adaptive unforgeability, we use the same guessing technique as in the proofs of computational correctness (both signing and evaluation) in the previous section. Namely, we simply guess the messages \(m^\star _1,\ldots ,m^\star _N \leftarrow _{\textsc {r}}\{0,1\} \) in advance, then proceed as in the selective game (but with the guesses \(m_i^\star \) instead of the real messages chosen by the adversary). If the guess is correct, we have the same advantage as in the selective security game. If the guess is incorrect, the game outputs 0. This guessing argument incurs a security loss of \(2^N \): the advantage of an adaptive adversary \(\mathcal {A} \) against the unforgeability of our FHS is at most \(2^{N}\) times the selective advantage bounded above. Therefore, adaptive unforgeability can be achieved by choosing for instance

    $$\begin{aligned} \kappa _2\ge (|\textsf{ct} |+ N + \log N + 2 \log ^2 \lambda + O(1))^{1/\varepsilon },\ \kappa _1\ge (N +\log ^2 \lambda )^{1/\varepsilon }. \end{aligned}$$
    (3)

    This concludes the unforgeability proof.   \(\square \)
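The guessing argument used above (for both the messages and the depth) can be illustrated by exhaustively enumerating the reduction's guesses against a toy deterministic adversary (Python, all values illustrative): exactly one of the \(2^N \cdot L\) guesses survives, matching the claimed security loss.

```python
from itertools import product

# Toy illustration of the selective-to-adaptive guessing argument: the
# reduction guesses the adversary's messages and forgery depth up front and
# aborts on a wrong guess. All values are illustrative.

N, L = 4, 8
adversary_msgs = (1, 0, 1, 1)       # what the (deterministic) adversary will choose
adversary_depth = 3                 # depth of the circuit in its forgery

wins = 0
for guess in product([0, 1], repeat=N):
    for d_star in range(1, L + 1):
        # The reduction outputs 1 only when both guesses were correct and the
        # adversary forges (our toy adversary always forges).
        if guess == adversary_msgs and d_star == adversary_depth:
            wins += 1

assert wins == 1                    # success probability is 1 / (2^N * L)
```

Since each of the \(2^N \cdot L\) guesses is equally likely and exactly one succeeds, the reduction's advantage is the adversary's advantage divided by \(2^N L(\lambda )\), which is exactly the loss absorbed by the parameter choice (3).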

Fig. 7.

Algorithms \(\textsf{Hybrid Gen CRS} ^\ell \) and \(\textsf{Hybrid Eval NAND} ^\ell \), used in the games \(\textsf{G} _{3,\ell }\), for all \(\ell \in \{0,\ldots ,L(\lambda )\}\).

Theorem 18

(From \(\textsf{G} _{3,\ell }\) to \(\textsf{G} _{3,\ell +1}\)). For every PPT adversary \(\mathcal {A}\), there exist PPT adversaries \(\mathcal {B}_1,\mathcal {B}_2,\mathcal {B}_3,\mathcal {B}_4\), such that:

$$|\textsf{Adv} _{3,\ell }({\mathcal {A}})-\textsf{Adv} _{3,\ell +1}({\mathcal {A}})|\le (2^{|\textsf{ct} |+2} + 6) \textsf{Adv} _{\textsf{iO}}(\kappa _2,\mathcal {B}_1) + (2^{|\textsf{ct} |+2}+4)\textsf{Adv} _{\textsf{cPRF}}(\kappa _2,\mathcal {B}_2) + 2^{|\textsf{ct} |+1} \textsf{Adv} _{\textsf {ZK}}(\kappa _2,\mathcal {B}_3) + \textsf{Adv} _{\textsf{crs}}(\kappa _2,\mathcal {B}_4)+2\nu _\textsf{sound}(\kappa _2).$$

Due to space constraints, we provide the technical proof of this theorem in the full version of the paper [GU23].