1 Reference to Full Version

The full version of this paper [LV20] is freely available on the Cryptology ePrint Archive. We refer the reader to that version for a complete description of our results and proofs.

2 Introduction

The Fiat-Shamir transform [FS86] is a methodology for compiling a public-coin interactive proof (or argument) system for a language L into a non-interactive argument system for L. While originally developed in order to convert 3-message identification schemes into signature schemes, the methodology readily generalized [BR93] to apply to a broad, expressive class of interactive protocols, with applications including non-interactive zero knowledge for \(\mathbf {NP}\) [BR93], succinct non-interactive arguments for \(\mathbf {NP}\) [Mic00, BCS16], and widely used, practically efficient signature schemes [Sch89].

However, these constructions and results come with a big caveat: the security of the Fiat-Shamir transformation is typically heuristic. While the transformation has been proved secure (in high generality) in the random oracle model [BR93, PS96, Mic00, BCS16], it is known that some properties that hold in the random oracle model – including the soundness of Fiat-Shamir for certain contrived interactive arguments – cannot be instantiated at all in the standard model [CGH04, DNRS99, Bar01, GK03, BBH+19].

Given these negative results, security in the random oracle model is by no means the end of the story. Indeed, the question of whether Fiat-Shamir can be instantiated for any given interactive argument system (and under what computational assumptions this can be done) has been a major research direction over the last twenty years [DNRS99, Bar01, GK03, BLV06, CCR16, KRR17, CCRR18, HL18, CCH+19, PS19, BBH+19, BFJ+19, JJ19, LVW19]. After much recent work, some positive results are known, falling into three categories (in decreasing order of the strength of the assumptions required):

  1.

    We can compile arbitrary (constant-round, public-coin) interactive proofs under extremely strong assumptions [KRR17, CCRR18] that are non-falsifiable in the sense of [Nao03].

  2.

    We can compile certain succinct interactive proofs [LFKN92, GKR08] – and variants of other interactive proofs not captured in item (3) below, such as [GMW91] – under extremely strong but falsifiable assumptions [CCH+19].

  3.

    We can compile variants of some classical 3-message zero knowledge proof systems [GMR85, Blu86, FLS99] under standard cryptographic assumptions [CCH+19, PS19].

Elaborating on item (2) above, what is currently known is that the sumcheck protocol [LFKN92] and the related Goldwasser-Kalai-Rothblum (GKR) [GKR08] interactive proof system can be compiled under an “optimal security assumption” related to (secret-key) Regev encryption. Roughly speaking, an optimal hardness assumption is the assumption that some search problem cannot be solved with probability significantly better than repeatedly guessing a solution at random. This is an extremely strong assumption that (in the context of Regev encryption) requires careful parameter settings to avoid being trivially false.

In this work, we focus on improving item (2); in particular, we ask:

figure a

Instead of considering the [LFKN92, GKR08] protocols, we work on compiling a protocol of Pietrzak [Pie18] for the “repeated-squaring language” [RSW96]. At a high level, Pietrzak constructs a “sumcheck-like” succinct interactive proof system for the computation \(f_{N, g}(T) = g^{2^T} \pmod N\) over an RSA modulus \(N = pq\). Compiling this protocol turns out to have applications related to verifiable delay functions (VDFs) [BBBF18] and hardness in the complexity class \(\mathbf {PPAD}\) [CHK+19a, CHK+19b, EFKP19], which we elaborate on below.
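To make the object of the protocol concrete, here is a minimal sketch of the repeated-squaring computation \(f_{N,g}(T) = g^{2^T} \pmod N\): the honest (slow) evaluation is T sequential modular squarings, while knowledge of the factorization of N (hence of \(\phi (N)\)) permits a shortcut by reducing the exponent modulo the group order. The toy modulus in the test is ours; real instances use a large RSA modulus.

```python
def repeated_square(g: int, T: int, N: int) -> int:
    """Compute f_{N,g}(T) = g^(2^T) mod N by T sequential modular squarings."""
    y = g % N
    for _ in range(T):
        y = (y * y) % N
    return y

def repeated_square_with_trapdoor(g: int, T: int, N: int, phi: int) -> int:
    """Shortcut available only given the factorization of N: reduce the
    exponent 2^T modulo phi(N) first (valid when gcd(g, N) = 1)."""
    return pow(g, pow(2, T, phi), N)
```

The gap between these two procedures (sequential squarings versus a single fast exponentiation) is exactly what the hardness assumption (1) below asserts for parties who do not know the factorization.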

Applications. We consider two apparently different questions: the first is that of establishing the hardness of the complexity class \(\mathbf {PPAD}\) (“polynomial parity arguments on directed graphs”)  [Pap94] that captures the hardness of finding Nash equilibria in bimatrix games  [DGP09, CDT09]; the second is that of constructing verifiable delay functions (VDFs), a recently introduced cryptographic primitive  [BBBF18] which gives us a way to introduce delays in decentralized applications such as blockchains.

The Hardness of \(\mathbf {PPAD}\). Establishing the hardness of \(\mathbf {PPAD}\) [Pap94], possibly under cryptographic assumptions, is a long-standing question in the foundations of cryptography and computational game theory. After two decades of little progress on the question, a recent sequence of works [BPR15, HY17, CHK+19a, CHK+19b, EFKP19] has managed to prove that there are problems in \(\mathbf {PPAD}\) (and indeed a smaller complexity class, \(\mathbf {CLS}\) [DP11]) that are hard (even on average) under strong cryptographic assumptions. The results so far fall roughly into two categories, depending on the techniques used.

  1.

    Program Obfuscation. Bitansky, Paneth and Rosen  [BPR15], inspired by an approach outlined in [AKV04], showed that \(\mathbf {PPAD}\) is hard assuming the existence of subexponentially secure indistinguishability obfuscation (IO) [BGI+01, GGH+13] and one-way functions. This was later improved [GPS16, HY17] to rely on polynomially-secure functional encryption and to give hardness in \(\mathbf {CLS}\subset \mathbf {PPAD}\).

  2.

    Unambiguously Sound Incrementally Verifiable Computation. The recent beautiful work [CHK+19a] constructs a hard-on-average \(\mathbf {CLS}\) instance assuming the existence of a special kind of incrementally verifiable computation (IVC) [Val08]. Instantiating this approach, they show that \(\mathbf {CLS}\subset \mathbf {PPAD}\) is hard-on-average if there exists a hash function family that soundly instantiates the Fiat-Shamir heuristic [FS86] for the sumcheck interactive proof system for \(\mathsf {\#P}\) [LFKN92]. Two follow-up works [CHK+19b, EFKP19] show the same conclusion if Fiat-Shamir for Pietrzak’s interactive proof system [Pie18] can be soundly instantiated (and if the underlying “repeated squaring language” is hard).

Regarding the first approach [BPR15, GPS16, HY17], secure indistinguishability obfuscators have recently been constructed based on a number of non-standard assumptions (see, e.g., [AJL+19, BDGM20]). Regarding the second approach [CHK+19a, CHK+19b, EFKP19], the hash function can be instantiated in the random oracle model, or under “optimal KDM-security” assumptions [CCRR18, CCH+19].

In summary, despite substantial effort, there are no known constructions of hard \(\mathbf {PPAD}\) instances from standard cryptographic assumptions (although see Section 2.3 for a recent independent work  [KPY20] that shows such a result under a new assumption on bilinear groups).

Verifiable Delay Functions. A Verifiable Delay Function (VDF)  [BBBF18] is a function f with the following properties:

  • f can be evaluated in some (moderately large) time T.

  • Computing f (on average) requires time close to T, even given a large amount of parallelism.

  • There is a time \(T + o(T)\) procedure that computes \(y=f(x)\) on an input x along with a proof \(\pi \) that \(y=f(x)\) was computed correctly. This proof (argument) system should be verifiable in time \(\ll T\) (ideally \(\mathrm {poly}(\lambda , \log T)\)) and satisfy standard (computational) soundness.

Since their introduction [BBBF18], there have been a few proposed candidate VDF constructions [BBBF18, Pie18, Wes19, dFMPS19, EFKP19]. There are currently no constructions based on standard cryptographic assumptions, but this is somewhat inherent to the primitive: a secure VDF implies the existence of a problem which can be solved in time T and also requires (sequential) time close to T. Nonetheless, one can ask (Footnote 1) whether VDFs can be constructed from “more standard-looking” assumptions, a question partially answered by [Pie18, Wes19]. In particular, each of their constructions relies on two assumptions:

  (1)

    The T-repeated squaring problem [RSW96] requires sequential time close to T.

  (2)

    The Fiat-Shamir heuristic for some specific public-coin interactive proof/argument (Footnote 2) can be soundly instantiated.

The techniques used in both the construction of hard \(\mathbf {PPAD}\) instances and the construction of VDFs are similar, and so are the underlying assumptions (this is due to the connection between \(\mathbf {PPAD}\) and incrementally verifiable computation [Val08, CHK+19a]). In particular, the works of [CHK+19b, EFKP19] construct hard \(\mathbf {PPAD}\) (and even \(\mathbf {CLS}\)) instances under two assumptions:

(1\({}^\prime \)):

The T-repeated squaring problem [RSW96] requires super-polynomial (standard) time for some \(T = \lambda ^{\omega (1)}\).

(2\({}^\prime \)):

The Fiat-Shamir heuristic for a variant of the [Pie18] interactive proof system can be soundly instantiated.

The assumption (1) (and its weakening, assumption \((1^\prime )\)) is the foundation of the Rivest-Shamir-Wagner time-lock puzzle [RSW96] and has been around for over 20 years. In particular, breaking the RSW assumption has received renewed cryptanalytic interest recently  [Riv99, Fab19].

On the other hand, as previously discussed, the assumptions \((2, 2')\) are not well understood. Indeed, our main question about Fiat-Shamir for succinct arguments (if specialized to the [Pie18] protocol) is intimately related to the following question.

figure b

2.1 Our Results

We show how to instantiate the Fiat-Shamir heuristic for the [Pie18] protocol under a quantitatively strong (but relatively standard) variant of the Learning with Errors (\(\mathsf {LWE}\)) assumption [Reg09]. We give a family of constructions of hash functions that run in subexponential (or even quasi-polynomial or polynomial) time, and prove that they soundly instantiate Fiat-Shamir for this protocol under a sufficiently strong \(\mathsf {LWE}\) assumption.

More generally, we extend the “bad-challenge function” methodology of [CCH+19] for proving the soundness of Fiat-Shamir to a class of protocols whose bad-challenge functions are not efficiently computable. We elaborate on this below in the technical overview (Sect. 2.4).

As a consequence, we obtain \(\mathbf {CLS}\)-hardness and VDFs from a pair of quantitatively related assumptions on the [RSW96] repeated squaring problem and on the learning with errors (\(\mathsf {LWE}\)) problem  [Reg09]; the latter can in turn be based on the worst-case hardness of the (approximate) shortest vector problem (GapSVP) on lattices. In particular, we can base the hardness of \(\mathbf {CLS}\subset \mathbf {PPAD}\), as well as the security of a VDF, on the hardness of two relatively well-studied problems.

Fiat-Shamir for Pietrzak’s Protocol. For our main result, we show that for any \(\epsilon > 0\), an \(\mathsf {LWE}\) assumption of quantitative strength \(2^{n^{1-\epsilon }}\) allows for a Fiat-Shamir instantiation with verification runtime \(2^{\tilde{O}(n^\epsilon )}\) on a repeated squaring instance with security parameter \(\lambda = O(n\log n)\). Such a result is meaningful as long as the verification runtime is smaller than the time it takes to solve the repeated squaring problem; the current best known algorithms for repeated squaring run in heuristic time \(2^{\tilde{O}(\lambda ^{ 1/3})} = 2^{\tilde{O}(n^{ 1/3})}\) [LLMP90].

Here and throughout the paper, we will use \((t, \delta )\)-hardness to denote that a cryptographic problem is hard for t-time algorithms to solve with \(\delta \) probability (or distinguishing advantage).

Theorem 2.1

Let \(\epsilon > 0\) be arbitrary. Assume that (decision) \(\mathsf {LWE}\) is \(\Big (2^{\tilde{O}(n^{1/2})}, 2^{-n^{1-\epsilon }}\Big )\)-hard (or alternatively, \(\left( 2^{\tilde{O}(n^{\epsilon })}, 2^{-n^{1-\epsilon }}\right) \)-hard for non-uniform algorithms). Then, there exists a hash family \(\mathcal {H}\) that soundly instantiates the Fiat-Shamir heuristic for Pietrzak’s interactive proof system  [Pie18]. When the proof system is instantiated for repeated squaring over groups of size \(2^{O(\lambda )}\) with \(\lambda = O(n\log n)\), the hash function h from the family \(\mathcal {H}\) can be evaluated in time \(2^{\tilde{O}(\lambda ^\epsilon )}\).

Under the assumption that (decision) \(\mathsf {LWE}\) is \(\left( 2^{\tilde{O}(n^{1/2})}, 2^{-\frac{n}{\log ^c n}}\right) \)-hard for some constant \(c>0\) (or alternatively, \(\left( \mathsf {quasipoly}(n), 2^{-\frac{n}{\log ^c n}}\right) \)-hard for non-uniform algorithms), there exists such a hash family \(\mathcal {H}\) with quasi-polynomial evaluation time.

Moreover, the \(\mathsf {LWE}\) assumption that we make falls into the parameter regime where we know worst-case to average-case reductions [Reg09, BLP+13, PRS17], so we obtain the following corollary.

Corollary 2.1

The conclusions of Theorem 2.1 (with parameter \(\epsilon < \frac{1}{2}\)) follow from the assumption that the worst case problem \(\mathrm {poly}(n)\)-GapSVP for rank n lattices requires time \(2^{ \omega (n^{1-\epsilon })}\). Similarly, the protocol with quasi-polynomial verification time is sound under the assumption that \(\mathrm {poly}(n)\)-GapSVP requires time \(2^{\frac{n}{\log (n)^c}}\) for some \(c>0\).

The Shortest Vector Problem (SVP) on integer lattices is a well-studied problem (see discussion in [Pei16, ADRS15]); despite substantial effort, all known \(\mathrm {poly}(n)\)-approximation algorithms for the problem have exponential run-time \(2^{\Omega (n)}\). As a result, our current understanding of the approximate-SVP landscape is consistent with the following conjecture.

Conjecture 2.1

(Exponential Time Hypothesis for GapSVP). For any fixed \(\gamma (n) = \mathrm {poly}(n)\), the \(\gamma (n)\)-GapSVP problem cannot be solved in time \(2^{o(n)}\).

Assuming Conjecture 2.1, the conclusion of Theorem 2.1 holds for every \(\epsilon > 0\); moreover, the variant of the Theorem 2.1 protocol with quasi-polynomial time evaluation is sound as well.

What about polynomial-time verification? Given a non-interactive protocol for repeated squaring with \(2^{\tilde{O}(\lambda ^\epsilon )}\) verification time (or quasi-polynomial evaluation time), one can always define a new security parameter \(\kappa = 2^{\tilde{O}(\lambda ^\epsilon )}\) (or \(\kappa = 2^{\log (\lambda )^c}\)) to obtain a protocol with polynomial-time verification. However, this makes use of complexity leveraging [CGGM00], so (i) this requires making the assumption that repeated squaring (on instances with security parameter \(\lambda \)) is hard for \(\mathrm {poly}(\kappa (\lambda ))\)-time adversaries, and (ii) the resulting protocol cannot have security subexponential in \(\kappa \).

If one does not wish to use complexity leveraging, we give an alternative construction that has (natively) polynomial-time verification, at the cost of a stronger LWE assumption.

Theorem 2.2

Let \(\delta > 0\) be arbitrary and \(q(n) = \mathrm {poly}(n)\) be a fixed (sufficiently large) polynomial in n. Assume that (decision) \(\mathsf {LWE}\) is \(\Big (\mathrm {poly}(n), q^{-\delta n}\Big )\)-hard for non-uniform distinguishers (or \(\Big (2^{\tilde{O}(n^{1/2})}, q^{-\delta n}\Big )\)-hard for uniform distinguishers). Then, there exists a hash family \(\mathcal {H}\) that soundly instantiates the Fiat-Shamir heuristic for Pietrzak’s interactive proof system  [Pie18] with \(\mathrm {poly}(\lambda ) = \mathrm {poly}(n\log n)\)-time verification. More specifically, the verification time is \(\lambda ^{O(1/\delta )}\).

Moreover, this strong LWE assumption still falls into the parameter regime with a meaningful worst-case to average-case reduction:

Corollary 2.2

The conclusion of Theorem 2.2 follows from the assumption that worst-case \(\gamma (n)\)-GapSVP (for a fixed \(\gamma (n) = \mathrm {poly}(n)\)) cannot be solved in time \(n^{o(n)}\) with \(\mathrm {poly}(n)\) space and \(\mathrm {poly}(n)\) bits of nonuniform advice (independent of the lattice).

Polynomial-space algorithms for GapSVP have themselves been an object of study for over 25 years [Kan83, KF16, BLS16, ABF+20], but the current best (poly-space) algorithms for this problem run in time \(n^{\Omega (\epsilon n)}\) for approximation factor \(n^{1/\epsilon }\). Therefore, under a sufficiently strong (and plausible) worst-case assumption about GapSVP, we have a polynomial-time Fiat-Shamir compiler without complexity leveraging.

By combining Theorems 2.1 and 2.2 with the results of [CHK+19b, EFKP19], we obtain the following construction of hard-on-average \(\mathbf {CLS}\) instances.

Theorem 2.3

For a constant \(\epsilon > 0\), suppose that

  • n-dimensional \(\mathsf {LWE}\) (with polynomial modulus) is \( \left( 2^{\tilde{O}(n^{1/2})}, 2^{-n^{1-\epsilon }}\right) \)-hard, and

  • The repeated squaring problem on an instance of size \(2^\lambda \) requires \(2^{\lambda ^{\epsilon } \log (\lambda )^{\omega (1)}}\) time.

Then, there is a hard-on-average problem in \(\mathbf {CLS}\subset \mathbf {PPAD}\). The same conclusion holds if for some \(c>0\),

  • \(\mathsf {LWE}\) is \( \left( 2^{\tilde{O}(n^{1/2})}, 2^{-\frac{n}{\log (n)^c}}\right) \)-hard, and

  • The repeated squaring problem is hard for quasi-polynomial time algorithms.

The same conclusion also holds if for some \(\delta > 0\),

  • \(\mathsf {LWE}\) (with modulus q) is \(\Big (\mathrm {poly}(n), q^{-\delta n}\Big )\)-hard for non-uniform distinguishers, and

  • The repeated squaring problem is hard for polynomial time algorithms.

We obtain Theorem 2.3 by plugging our standard model Fiat-Shamir instantiation into the complexity-theoretic reduction of [CHK+19b] (Footnote 3). For use in this reduction, our non-interactive protocol must satisfy a stronger security notion called (adaptive) unambiguous soundness [RRR16, CHK+19a], which we show is indeed the case.

Note that the two hardness assumptions in the theorem statement are in tension with each other. As \(\epsilon \) becomes smaller, the repeated squaring assumption becomes weaker, but the LWE assumption becomes stronger. In particular, we cannot set \(\epsilon \ge 1/3\), as there are known algorithms [LLMP90] solving repeated squaring in (heuristic) time \(2^{\widetilde{O}(\lambda ^{1/3})}\).

Additionally, as a direct consequence of Theorem 2.1, we obtain VDFs in the standard model as long as the underlying repeated squaring problem is sufficiently (sequentially) hard. Recall that the repeated squaring problem [RSW96] is the computation of the function \(f_{N,g}(T) = g^{2^T}\) (mod N), for the appropriate distribution on \(N = pq\) and g.

Theorem 2.4

For a constant \(\epsilon > 0\), suppose that

  • \(\mathsf {LWE}\) is \( \left( 2^{\tilde{O}(n^{1/2})}, 2^{-n^{1-\epsilon }}\right) \)-hard, and

  • The repeated squaring problem [RSW96] over groups of size \(2^{O(\lambda )}\) requires \(T(1-o(1))\) sequential time for \(T \gg 2^{\tilde{O}(\lambda ^{\epsilon })}\).

Then, the repeated squaring function \(f_{N, g}\) can be made into a VDF with verification time \(2^{\tilde{O}(\lambda ^\epsilon )}\) on groups of size \(2^{O(\lambda )}\) (with \(\lambda = O(n\log n)\)). Similarly, if for some \(c>0\),

  • \(\mathsf {LWE}\) is \( \left( 2^{\tilde{O}(n^{1/2})}, 2^{-\frac{n}{\log (n)^c}}\right) \)-hard, and

  • The repeated squaring problem requires \(T(1-o(1))\) sequential time for \(T \gg 2^{\tilde{O}(\log (\lambda )^{c+1})}\),

Then, \(f_{N, g}\) can be made into a VDF with verification time \(2^{\tilde{O}(\log (\lambda )^{c+1})}\). Finally, if for some \(\delta > 0\),

  • \(\mathsf {LWE}\) (with modulus q) is \(\Big (\mathrm {poly}(n), q^{-\delta n}\Big )\)-hard for non-uniform distinguishers, and

  • The repeated squaring problem requires \(T(1-o(1))\) sequential time for all \(T = \mathrm {poly}(\lambda )\),

Then, \(f_{N,g}\) can be made into a VDF with \(\lambda ^{O(1/\delta )}\)-time verification.

Theorem 2.4 follows immediately from Theorem 2.1 along with the construction of Pietrzak [Pie18]. While many of the VDFs in Theorem 2.4 have super-polynomial verification time (and therefore do not fit the standard definition), they can be converted into (standard) VDFs with polynomial verification time via complexity leveraging; however, the leveraged VDFs will only support quasi-polynomial (respectively, \(2^{2^{\mathrm {poly}\log \log \kappa }}\)) time computation (and soundness of the VDF will only hold against adversaries running in time quasi-polynomial in the new security parameter \(\kappa \)). Because of this, we consider the formulation in terms of super-polynomial time verification to be more informative.

2.2 Comparison with Prior Work

Cryptographic Hardness of \(\mathbf {PPAD}\). As described in the introduction, prior works on the cryptographic hardness of \(\mathbf {PPAD}\) fall into two categories – those based on obfuscation and those based on incrementally verifiable computation (IVC). The obfuscation-based constructions all make cryptographic assumptions related to the existence of indistinguishability obfuscation or closely related primitives that we currently do not know how to instantiate based on well-studied assumptions. (For the latest in obfuscation technology, we refer the reader to [JLMS19, JLS19].) We therefore focus on comparing to the previous IVC-based constructions.

  • [CHK+19a] constructs hard problems in \(\mathbf {CLS}\) under the polynomial hardness of #\(\mathsf {SAT}\) with poly-logarithmically many variables along with the assumption that Fiat-Shamir can be soundly instantiated for the sumcheck protocol [LFKN92]. The latter follows either in the random oracle model or under the assumption that a \(\mathsf {LWE}\)-based fully homomorphic encryption scheme is “optimally circular-secure” [CCH+18, CCH+19] for quasi-polynomial time adversaries.

    While the hardness of #\(\mathsf {SAT}\) (with this parameter regime) is a weaker assumption than the subexponential hardness of repeated squaring, the [CHK+19a] (standard model) result has the drawback of relying on an optimal hardness assumption. Roughly speaking, an optimal hardness assumption is the assumption that some search problem cannot be solved with probability significantly better than repeatedly guessing a solution at random. This is an extremely strong assumption that requires careful parameter settings to avoid being trivially false.

    In contrast, our main \(\mathsf {LWE}\) assumption is subexponential (concerning distinguishing advantage \(2^{-n^{1-\epsilon }}\)) and follows from the worst-case hardness of \(\mathrm {poly}(n)\)-GapSVP for time \(2^{n^{1-\epsilon }}\) algorithms. Even our most optimistic LWE assumption (as in Theorem 2.2) follows from a form of worst-case hardness quantitatively far from the corresponding best known algorithms.

  • [CHK+19b, EFKP19] construct hard problems in \(\mathbf {CLS}\) assuming the polynomial hardness of repeated squaring along with a generic assumption that the Fiat-Shamir heuristic can be instantiated for round-by-round sound (see [CCH+18, CCH+19]) public-coin interactive proofs. The latter can be instantiated either in the random oracle model, or under the assumption that Regev encryption (or ElGamal encryption) is “optimally KDM-secure” for unbounded KDM functions [CCRR18].

    The [CCRR18] assumption is (up to minor technical details) stronger than the optimal security assumption used in [CHK+19a] (because the security game additionally involves an unbounded function), so the [CHK+19b, EFKP19] results are mostly framed in the random oracle model. In this work, we give a new Fiat-Shamir instantiation to plug into the [CHK+19b, EFKP19] framework.

VDFs. We compare our construction of VDFs to previous constructions [BBBF18, Pie18, Wes19, dFMPS19, EFKP19].

  • [BBBF18] and [dFMPS19] give constructions of VDFs from new cryptographic assumptions related to permutation polynomials and isogenies over supersingular elliptic curves, respectively. These assumptions are incomparable to ours, but we rely on the hardness of older, better-studied problems.

  • [Pie18, EFKP19] have the same basic VDF construction as ours; the main difference is that they use a random oracle to instantiate their hash function, while we use a hash function in the standard model and prove its security under a quantitatively strong variant of \(\mathsf {LWE}\).

  • [Wes19] also builds a VDF based on the hardness of repeated squaring, but by building a different interactive argument for computing the function and assuming that Fiat-Shamir can be instantiated for this argument. Again, this assumption holds in the random oracle model, but we know of no instantiation of this VDF in the standard model.

On the negative side, our main VDF (for the natural choice of security parameter) has verification time \(2^{\tilde{O}(\lambda ^\epsilon )}\); this can be thought of as polynomial-time via complexity leveraging, but this results in a VDF that is only quasi-polynomially secure. Alternatively, based on our optimistic LWE assumption, we only obtain a VDF with large polynomial (i.e., \(\lambda ^{O(1/\delta )}\) for small \(\delta \)) verification time. As a result, we consider our VDF construction to be a proof of concept that VDFs can be built from “more standard-looking assumptions”, in particular without invoking the random oracle model.

2.3 Additional Related Work

[BG20] constructs hard instances in the complexity class \(\mathbf {PLS}\) – which contains \(\mathbf {CLS}\) and is incomparable to \(\mathbf {PPAD}\) – under a falsifiable assumption on bilinear maps introduced in [KPY19] (along with the randomized exponential time hypothesis (ETH)).

In recent independent work, [KPY20] constructs hard-on-average \(\mathbf {CLS}\) instances under the (quasi-polynomial) [KPY19] assumption. In fact, they give a protocol for unambiguous and incrementally verifiable computation for all languages decidable in bounded space and slightly super-polynomial time.

2.4 Technical Overview

We now discuss the ideas behind our main result, Theorem 2.1, which is an instantiation of the Fiat-Shamir heuristic for the [Pie18] repeated squaring protocol. In obtaining this result, we also broaden the class of interactive proofs for which we have Fiat-Shamir instantiations under standard assumptions.

The main tool used by our construction is a hash function family \(\mathcal {H}\) that is correlation intractable [CGH04] for efficiently computable functions [CLW18, CCH+19]. Recall that a hash family \(\mathcal {H}\) is correlation intractable for t-time computable functions if for every function f computable in time t, the following computational problem is hard: given a description of a hash function h, find an input x such that \(h(x) = f(x)\). We now know [PS19] that such hash families can be constructed under the \(\mathsf {LWE}\) assumption.
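As a toy illustration of this security game (this is not the lattice-based construction of [PS19]; the hash, the output length, and the target function f below are arbitrary stand-ins of our choosing), breaking correlation intractability means finding an input on which the hash agrees with f, and the generic attack is to guess inputs at random:

```python
import hashlib

OUTPUT_BITS = 16  # toy output length (a stand-in parameter)

def h(x: int) -> int:
    """A toy hash, standing in for a function drawn from the family H."""
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << OUTPUT_BITS)

def f(x: int) -> int:
    """A fixed, efficiently computable target function (arbitrary choice)."""
    return (3 * x + 7) % (1 << OUTPUT_BITS)

def generic_attack(bound: int):
    """Guess inputs until h(x) = f(x); for a 'random-looking' h, each guess
    succeeds with probability about 2^-OUTPUT_BITS, so the attack needs
    roughly 2^OUTPUT_BITS attempts in expectation."""
    for x in range(bound):
        if h(x) == f(x):
            return x
    return None
```

Correlation intractability asserts that, for an appropriately constructed family \(\mathcal {H}\), no efficient adversary does substantially better than this generic attack.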

Correlation Intractability and Fiat-Shamir. In order to describe our result, we first sketch the [CCH+19] paradigm for using such a hash family \(\mathcal {H}\) to instantiate the Fiat-Shamir heuristic.

For simplicity, consider a three-message (public-coin) interactive proof system (\(\varSigma \)-protocol)

Fig. 1. A \(\varSigma \)-protocol \(\varPi \).

as well as its corresponding Fiat-Shamir round-reduced protocol \(\varPi _{\mathrm {FS}, \mathcal {H}}\) for a hash family \(\mathcal {H}\).

Fig. 2. The Protocol \(\varPi _{\mathrm {FS}, \mathcal {H}}\).

Moreover, suppose that this protocol \(\varPi \) satisfies the following soundness property (sometimes referred to as “special soundness”): for every \(x\not \in L\) and every prover message \(\alpha \), there exists at most one verifier message \(\beta ^*(x, \alpha )\) allowing the prover to cheat (Footnote 4).

It then follows that if a hash family \(\mathcal {H}\) is correlation intractable for the function family \(f_x(\alpha ) = \beta ^*(x, \alpha )\), then \(\mathcal {H}\) instantiates the Fiat-Shamir heuristic for \(\varPi \) (Footnote 5). This is because a cheating prover \(P^*_{\mathrm {FS}}\) breaking the soundness of \(\varPi _{\mathrm {FS}, \mathcal {H}}\) must find a first message \(\alpha \) such that its corresponding challenge \(h(x, \alpha )\) is equal to the bad challenge \(f_x(\alpha )\) (or else it has no hope of successfully cheating).

Therefore, using the hash family of [PS19], we can (under the \(\mathsf {LWE}\) assumption) do Fiat-Shamir for any protocol \(\varPi \) whose “bad-challenge function” \(f_x(\alpha )\) is computable in polynomial time; this has the important caveat that the complexity of computing the hash function h is at least the complexity of computing \(f_x(\alpha )\).
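Schematically, the round reduction of Fig. 2 replaces the verifier's random coin \(\beta \) with a hash of the transcript so far. A sketch, with the \(\varSigma \)-protocol's three algorithms left abstract as callbacks of our own naming, and SHA-256 as an arbitrary stand-in for a hash \(h \in \mathcal {H}\):

```python
import hashlib

def fs_challenge(x: bytes, alpha: bytes) -> int:
    """beta = h(x, alpha): the challenge, now derived deterministically."""
    return int.from_bytes(hashlib.sha256(x + alpha).digest(), "big")

def fs_prove(x: bytes, first_msg, third_msg):
    """Non-interactive prover: it computes beta itself rather than
    receiving it from the verifier, then answers as in the Sigma-protocol."""
    alpha = first_msg(x)
    beta = fs_challenge(x, alpha)
    return alpha, third_msg(x, alpha, beta)

def fs_verify(x: bytes, proof, sigma_verify) -> bool:
    """Verifier: recompute the same challenge and run the Sigma-verifier."""
    alpha, gamma = proof
    beta = fs_challenge(x, alpha)
    return sigma_verify(x, alpha, beta, gamma)
```

The correlation-intractability argument above says exactly that a cheating prover cannot steer \(\alpha \) so that `fs_challenge` lands on the bad challenge \(f_x(\alpha )\).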

This paradigm seems to run into the following roadblock: intuitively, for protocols \(\varPi \) of interest, computing \(f_x(\alpha )\) appears to be hard rather than easy. For example,

  1.

    For a standard construction of zero-knowledge proofs for \(\mathbf {NP}\) such as [Blu86], computing \(f_x(\alpha )\) involves breaking a cryptographically secure commitment scheme.

  2.

    For (unconditional) statistical zero knowledge protocols such as the [GMR85] Quadratic Residuosity protocol, computing \(f_x(\alpha )\) involves deciding the underlying hard language L.

  3.

    For doubly efficient interactive proofs such as the [GKR08] interactive proof for logspace-uniform \(\mathsf {NC}\), computing \(f_x(\alpha )\) again involves deciding the underlying language L; in this case, L is in \(\mathsf {P}\), but this Fiat-Shamir compiler would result in a non-interactive argument whose verifier runs in time longer than it takes to decide L.

The work [CCH+19] resolves issues (1) and (2) in the following way: in both cases, we can arrange for \(f_x(\alpha )\) to be efficiently computable given an appropriate trapdoor: in the case of [Blu86], the commitment scheme can have a trapdoor allowing for efficient extraction, while in the case of [GMR85], \(f_x(\alpha )\) is efficient given an appropriate \(\mathbf {NP}\)-witness for the complement language \(\overline{L}\). However, we have no analogous resolution to (3), which is the setting of interest to us (Footnote 6).

The bad-challenge function of the [Pie18] protocol. With this context in mind, we now consider the [Pie18] protocol (Footnote 7). This protocol (like the [GKR08] protocol and the related sumcheck protocol [LFKN92]) is not a constant-round protocol, but is instead composed of up to polynomially many “reduction steps” of the following form.

  • The prover, given (NgT), computes and sends \(u= g^{2^{T/2}}\), the (supposed) “halfway point” of the computation.

  • The message u indicates (to the verifier) two derivative claims: \(u = g^{2^{T/2}}\) and \(h = u^{2^{T/2}}\).

  • The verifier then challenges the prover to prove a random linear combination of the two statements: \(h\cdot u^r = (u\cdot g^r)^{2^{T/2}}\).

Soundness can then be analyzed in a “round-by-round” fashion [CCH+19]: if the round starts with a false statement (or with a true statement for which the prover sends an incorrect value \(\tilde{u}\ne u\)), there is at most one bad challenge \(r^*\) (Footnote 8) resulting in a recursive call on a true statement.

To invoke the [CCH+19] paradigm, we ask: how efficiently can we compute the function \(f(N, T, g, h, u) = r^*\)? To answer this question, let \(\tilde{g}\) denote a fixed group element of order \(\phi (N)/2\) such that \(g, h, u\in \langle \tilde{g} \rangle \). Letting \(\gamma , \eta , \omega \) denote the discrete logs of g, h, and u in base \(\tilde{g}\), we see that (for corresponding challenge r) the statement \((N, T/2, g', h')\) is true if and only if

$$\eta + r\cdot \omega \equiv 2^{T/2} (\omega + r\cdot \gamma ) \pmod {\phi (N)/2}. $$

As a result, we see that r can be efficiently computed from the following information:

  • The discrete logarithms \(\eta , \omega , \gamma \), and

  • The factorization of N.

While the factorization of N can be known a priori in the security reduction (similar to prior work), the discrete logarithms depend on the prover message u and the (adaptively chosen) statement (g, h). We conclude that the “bottleneck” for computing f is the problem of computing a constant number of discrete logarithms in \(\mathbb {Z}_p^\times \).
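To make the congruence concrete, here is a toy Python computation of \(r^*\). The modulus, the cheating exponent \(\omega\), and the fact that the linear coefficient is directly invertible are illustrative assumptions for this sketch, not features of the actual security reduction:

```python
# Computing the bad challenge r* from the discrete logs, with toy numbers.
# Setup: N = 7*11 = 77, phi(N)/2 = 30, and gtilde = 2 has order 30 mod 77.
N, T = 77, 4
M = 30                       # phi(N)/2, known given the factorization of N
S = 2 ** (T // 2)            # the exponent 2^(T/2)

# Discrete logs (base gtilde = 2) of g, h, and a *cheating* prover message
# u~ != g^(2^(T/2)): gamma, eta, omega respectively.
gamma = 1                    # g = 2
eta = pow(2, T) % M          # h = g^(2^T), so eta = 2^T mod M
omega = 5                    # cheater sent u~ = 2^5 instead of 2^4

# Bad challenge: solve  eta + r*omega = S*(omega + r*gamma)  (mod M), i.e.
#   r * (omega - S*gamma) = S*omega - eta  (mod M).
a = (omega - S * gamma) % M
b = (S * omega - eta) % M
r_star = (b * pow(a, -1, M)) % M   # a happens to be invertible mod M here

# Sanity check: under r*, the recursive statement is true even though the
# prover cheated.
g_new = (pow(2, omega, N) * pow(2, r_star, N)) % N        # u~ * g^r
h_new = (pow(2, eta, N) * pow(2, omega * r_star, N)) % N  # h * u~^r
assert h_new == pow(g_new, S, N)
```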

Since computing discrete logarithms over \(\mathbb {Z}_p^\times \) is believed to be hard, and is not known to have a trapdoor, it appears unlikely that this approach would allow us to rely on the polynomial hardness of the [PS19] hash family. However, it is plausible that we could use a variant of the [PS19] hash family supporting super-polynomial time computation (proven secure under a super-polynomial variant of \(\mathsf {LWE}\)) to capture the complexity of computing discrete logarithms.

Unfortunately, the naive version of this approach fails: the best known runtime boundsFootnote 9 for computing discrete logarithms over \(\mathbb {Z}_p^\times \) for \(p = 2^{O(\lambda )}\) are of the form \(2^{\tilde{O}(\lambda ^{1/2})}\) [Adl79, Pom87], and the best known heuristic algorithms (plausibly) run in time \(2^{\tilde{O}(\lambda ^{1/3})}\) [LLMP90]. If we were to instantiate the [PS19] hash family to support functions of this complexity, we could prove the soundness of Fiat-Shamir for the [Pie18] protocol, but the resulting non-interactive protocol would run in time \(2^{\tilde{O}(\lambda ^{1/2})}\) (or in time \(2^{\tilde{O}(\lambda ^{1/3})}\) with a heuristic security proof); these are the same runtime bounds for the best known algorithms for solving the repeated squaring problem [Dix81, Pom87, LLMP90] (via factoring the modulus N). In other words, the verifier would run in enough time to be able to solve the repeated squaring problem itself. This is a very similar problem to issue (3) regarding the [LFKN92, GKR08] protocols, so we appear to be stuck.

Computing bad-challenge functions with low probability. We overcome the above problem with the following idea: it suffices to compute the bad-challenge function efficiently with small but non-trivial probability, provided the hash family's correlation intractability is quantitatively strong enough to absorb this loss.

In other words, we consider a new variant of the [CCH+19] framework for instantiating Fiat-Shamir in the standard model, where:

  • An interactive protocol \(\varPi \) is characterized by some bad-challenge function f,

  • f can be computed by a time t algorithm (or size s circuit) with some small but non-trivial probability \(\delta \), and

  • The hash function \(\mathcal {H}\) is assumed to be correlation intractable – with sufficiently strong quantitative security – against adversaries running in time t (or with size s).

Then, it turns out that the resulting non-interactive protocol is sound! Informally, this is because if f is “approximated” by a time-t-computable randomized function \(g_r\) (in the sense that \(g_r(x)\) and f(x) agree with probability \(\delta \) on a worst-case input), then an adversary breaking the protocol \(\varPi _{\mathrm {FS}, \mathcal {H}}\) will break the correlation intractability of \(\mathcal {H}\) with respect to \(g_r\) (rather than f) with probability roughly \(\delta \). More formally, a cheating prover \(P^*_{\mathrm {FS}}\) yields an algorithm that breaks the correlation intractability of \(\mathcal {H}\) with respect to f, which in turn breaks the correlation intractability of \(\mathcal {H}\) with respect to \(g_r\) (for hard-coded randomness r) with probability \(\delta \cdot \frac{1}{\mathrm {poly}(\lambda )}\) (since \(g_r\) and f agree on any given input with probability at least \(\delta \)). Therefore, if \(\mathcal {H}\) is \((t, \delta \cdot \lambda ^{-\omega (1)})\)-secure, we conclude that \(\varPi _{\mathrm {FS}, \mathcal {H}}\) is sound.
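One way to write out this probability calculation (a sketch, where \(\mathcal{A}\) denotes the correlation-intractability adversary derived from \(P^*_{\mathrm{FS}}\), x its output, and \(\varepsilon \ge 1/\mathrm{poly}(\lambda)\) its success probability against f):

```latex
\Pr_{x \leftarrow \mathcal{A},\, r}\bigl[\mathcal{H}(x) = g_r(x)\bigr]
  \;\ge\; \Pr_{x \leftarrow \mathcal{A}}\bigl[\mathcal{H}(x) = f(x)\bigr]
          \cdot \min_{x}\,\Pr_{r}\bigl[g_r(x) = f(x)\bigr]
  \;\ge\; \varepsilon \cdot \delta .
```

The first inequality uses that r is chosen independently of x; averaging over r, some fixed choice of randomness achieves at least \(\varepsilon\cdot\delta\), which is what justifies hard-coding r.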

This modification allows us to instantiate Fiat-Shamir for the [Pie18] protocol. In particular, we make use of folkloreFootnote 10 [CCRR18] preprocessing algorithms for the discrete logarithm problem over \(\mathbb {Z}_p^\times \) that run in time \(2^{\lambda ^\epsilon }\) and have success probability \(2^{-\lambda ^{1-\epsilon }}\). More specifically, we consider a computation of the bad-challenge function f(N, T, g, h, u) in the following model:

  • Hard-code (1) the factorization \(N=pq\), (2) an appropriately chosen group element \(\tilde{g}\) of high order, and (3) \(2^{\tilde{O}(\lambda ^{\epsilon })}\) discrete logarithms (of fixed numbers modulo p and modulo q, respectively) in base \(\tilde{g}\).

  • Compute a (constant-size) collection of worst-case discrete logarithms by the standard index calculus algorithm [Adl79] in time \(2^{\tilde{O}(\lambda ^{\epsilon })}\) with success probability \(2^{-\lambda ^{1-\epsilon }}\).

This can be thought of as either a non-uniform \(2^{\tilde{O}(\lambda ^\epsilon )}\)-time algorithm, or a \(2^{\tilde{O}(\lambda ^\epsilon )}\)-time algorithm with \(2^{\tilde{O}(\lambda ^{1/2})}\)-time preprocessing.Footnote 11 By using this algorithm for the computation of the bad-challenge function f(N, T, g, h, u), we obtain a Fiat-Shamir instantiation with verification time \(2^{\tilde{O}(\lambda ^\epsilon )}\) – a meaningful result as long as this runtime does not allow for solving the repeated squaring problem. Finally, the required assumption is that the [PS19] hash function is correlation intractable for adversaries that succeed with probability \(2^{-\lambda ^{1-\epsilon }}\), which holds under the claimed \(\mathsf {LWE}\) assumption with parameters (n, q) for \(\lambda = n\log q\).
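As an illustration of this computational model (and emphatically not of the actual preprocessing algorithm, which operates at cryptographic sizes), here is a toy single-trial index-calculus sketch in Python. A brute-forced factor-base table stands in for the hard-coded discrete logarithms, and the prime, generator, and factor base are illustrative choices:

```python
import random

def toy_index_calculus(p, g, y, base=(2, 3, 5, 7), trials=10_000, seed=0):
    """Toy dlog-with-preprocessing: a table of factor-base logs is computed
    once; each online trial succeeds only if a random shift of y happens to
    be smooth over the base (a low-probability event at real sizes)."""
    n = p - 1  # order of Z_p^*; g is assumed to be a generator
    # Preprocessing: discrete logs (base g) of the factor base.  Brute force
    # here, standing in for the hard-coded table of precomputed logs.
    powers = {}
    for e in range(n):
        powers.setdefault(pow(g, e, p), e)
    table = {q: powers[q] for q in base}
    # Online phase: randomized trials, each succeeding with small probability
    # (this per-trial success probability plays the role of delta).
    rng = random.Random(seed)
    for _ in range(trials):
        k = rng.randrange(n)
        z = (y * pow(g, k, p)) % p
        exps, rest = {}, z
        for q in base:
            while rest % q == 0:
                rest //= q
                exps[q] = exps.get(q, 0) + 1
        if rest == 1:  # z is smooth: log(y) = sum e_q * log(q) - k  (mod n)
            return (sum(eq * table[q] for q, eq in exps.items()) - k) % n
    return None  # all trials failed

# Example: recover log_2(10) in Z_107^* (2 generates the full group).
x = toy_index_calculus(107, 2, 10)
```

At toy sizes a smooth shift is found almost immediately; the point of the model is that at cryptographic sizes the table is large and each trial succeeds only with probability roughly \(2^{-\lambda^{1-\epsilon}}\).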

Generalizations. In this overview, we focused specifically on the [Pie18] protocol, but our techniques give general blueprints for obtaining Fiat-Shamir instantiations. We believe these blueprints may be useful in future work, so we state them (as “meta-theorems”) explicitly here:

  • Fiat-Shamir for protocols with low success probability bad-challenge functions. Our approach shows that if an interactive protocol \(\varPi \) is governed by a bad-challenge function f that is computable by an efficient randomized algorithm that is only correct with (potentially very) low probability, it is still possible to instantiate Fiat-Shamir for \(\varPi \) under a sufficiently strong LWE assumption.

  • Fiat-Shamir for discrete-log based bad-challenge functions. Our approach also shows that if a protocol \(\varPi \) is governed by a bad-challenge function f that is efficiently computable given oracle accessFootnote 12 to a discrete log solver (over \(\mathbb {Z}_p^\times \) for \(p\le 2^{O(\lambda )}\)), then it is possible to instantiate Fiat-Shamir for \(\varPi \) under a sufficiently strong LWE assumption.

We formalize both of these “meta-theorems” in the language of correlation intractability (rather than Fiat-Shamir) in the full version of this paper.