1 Introduction

1.1 Background and Motivation

In recent years, researchers have uncovered a variety of ways to capture cryptographic keys through side-channel attacks: physical measurements such as execution time, power consumption, and even the sound waves generated by the processor. This has prompted cryptographers to model these attacks and to construct leakage-resilient schemes that remain secure in their presence. Of course, if the adversary can leak the entire secret key, security becomes impossible, and so the “bounded” leakage model was introduced (cf. [2, 11, 39, 46]). Here, one either assumes a fixed upper bound L on the number of bits the attacker may leak, independent of the parameters of the scheme, or allows the attacker to leak \(L = \lambda \cdot |\mathsf{sk}|\) bits in total, so that the permitted leakage grows with the size of the secret key. Various works constructed public key encryption and signature schemes with the optimal leakage rate \(\lambda = 1-o(1)\) from specific assumptions (cf. [11, 46]). Hazay et al. [35] even constructed a leakage-resilient public key encryption scheme in this model assuming only the existence of standard public key encryption, although the leakage rate achieved by their scheme was not optimal.

Surprisingly, it is possible to do better; a strengthening of the model—the “continual” leakage model—allows the adversary to request unbounded leakage. This model was introduced by Brakerski et al. [12]—who constructed continual leakage-resilient (CLR) public key encryption and signature schemes—and Dodis et al. [21]—who constructed CLR signature schemes. Intuitively, the CLR model divides the lifetime of the attack, which may be unbounded, into time periods and: (1) allows the adversary to obtain the output of a “bounded” leakage function in each time period and (2) allows the secret key (but not the public key!) to be updated between time periods. So, while the adversary’s leakage in each round is bounded, the total leakage is unbounded.

Note that the algorithm used by any CLR scheme to update the current secret key to the next one must be randomized, since otherwise the adversary can obtain some future secret key, bit by bit, via its leakage in each time period. While the CLR schemes of [12, 21] were able to tolerate a \(1-o(1)\) leakage rate, handling leakage during the update procedure itself—that is, produced as a function of the randomness used by the update algorithm as well as the current secret key—proved to be much more challenging. The first substantial progress on this problem of “leakage on key updates” was made by Lewko et al. [43], with their techniques being considerably refined and generalized by Dodis et al. [24]. In particular, they give encryption and signature schemes that are CLR with leakage on key updates tolerating a constant leakage rate, using “dual-system” techniques (cf. [48]) in bilinear groups.

1.2 Overview of Our Results

Our first main contribution is to show how to compile any public key encryption or signature scheme that satisfies a slight strengthening of CLR (which we call “consecutive” CLR or 2CLR) without leakage on key updates to one that is CLR with leakage on key updates. Our compiler is based on a new connection we make between the problems of leakage on key updates and “sender deniability” [13] for encryption schemes. In particular, our compiler uses program obfuscation—either indistinguishability obfuscation (iO) [5, 29] or public-coin differing-inputs obfuscation (diO) [37]—and adapts and extends techniques recently developed by Sahai and Waters [47] to achieve sender-deniable encryption. This demonstrates the applicability of the techniques of [47] to other, seemingly unrelated, contexts. We then show that the existing CLR encryption scheme of Brakerski et al. [12] can be extended to meet the stronger notion of 2CLR that we require for our compiler. Additionally, we show that all our results carry over to signatures as well. In particular, we show that 2CLR PKE implies 2CLR signatures (via the intermediate notion of CLR “one-way relations” of Dodis et al. [21]), and observe that our compiler also upgrades 2CLR signatures to ones that are CLR with leakage on updates.

Our second main contribution concerns constructions of leakage-resilient public key encryption directly from obfuscation. In particular, we show that the approach of Sahai and Waters to achieve public key encryption from iO and punctured pseudorandom functions [47] can be extended to achieve leakage resilience in the bounded leakage model. Specifically, we achieve (1) leakage-resilient public key encryption tolerating L bits of leakage for any L from iO and one-way functions, (2) leakage-resilient public key encryption with optimal leakage rate of \(1-o(1)\) based on public-coin differing-inputs obfuscation and collision-resistant hash functions, and (3) (consecutive) CLR public key encryption with constant (although not optimal, on the order of one over several hundred) leakage rate from differing-inputs obfuscation (not public coin) and standard assumptions. Extending the construction from (2) to achieve continual leakage resilience, without these additional assumptions, is an interesting open problem.

1.3 Summary and Perspective

In summary, we provide a thorough study of the connection between program obfuscation and leakage resilience. We define a new notion of leakage resilience (2CLR) and demonstrate new constructions of 2CLR-secure encryption and signature schemes from program obfuscation. Also using program obfuscation, we construct a compiler that lifts 2CLR-secure schemes to CLR with leakage on key updates; together with our new constructions, this provides a unified and modular method for constructing CLR with leakage on key updates. Under appropriate assumptions (namely the ones used by Brakerski et al. [12] in their construction), this approach allows us to achieve a leakage rate of \(1/4 - o(1)\) with leakage on key updates, a large improvement over prior work, where the best leakage rate was \(1/258 - o(1)\) [43]. Our result nearly matches the trivial upper bound of \(1/2-o(1)\). In the bounded leakage model, we show that it is possible to achieve optimal-rate leakage-resilient public key encryption from obfuscation and generic assumptions.

Comparing our results in the bounded leakage model with the work of Hazay et al. [35], we have (1) leakage-resilient public key encryption tolerating L bits of leakage from iO and one-way functions and (2) leakage-resilient public key encryption with optimal leakage rate based on public-coin differing-inputs obfuscation and collision-resistant hash functions. As mentioned above, Hazay et al. [35] constructed leakage-resilient public key encryption in the bounded leakage model from a far weaker generic assumption (they require only standard public key encryption). Moreover, the leakage rate of Hazay et al. [35] is far better than the leakage rate we achieve in (1), since in our iO-based construction the secret key consists of an entire obfuscated program, which will be extremely large. Thus, the work of Hazay et al. [35] completely subsumes (1). On the other hand, the leakage rate we achieve in (2) is optimal, and so in this case our leakage rate improves upon that of Hazay et al. [35], though we require the far stronger assumption of public-coin differing-inputs obfuscation.

Finally, we discuss result (3) in the continual leakage model: (consecutive) CLR public key encryption with constant leakage rate from differing-inputs obfuscation and standard assumptions. When instantiating our construction in (3), the assumptions and parameters achieved are inferior to those of the Brakerski et al. [12] scheme (which we adapt to our setting). Our intention in (3) is therefore to explore what can be done from generic assumptions, ideally showing that (consecutive) CLR public key encryption can be constructed from any PKE scheme and diO. Unfortunately, we fall somewhat short, requiring that the underlying encryption scheme possess various additional properties.

Given the above discussion, we feel that the main value of our results in the bounded leakage model is that they provide direct insight into the connection between obfuscation and leakage resilience. We are also hopeful that our techniques in the continual model might lead to future improvements in rate as well as a better understanding of the relationship between obfuscation and continual leakage resilience.

1.4 Details and Techniques

Part I: The Leak-on-Update Compiler. As described above, in the model of continual leakage resilience (CLR) [12, 21] for public key encryption or signature schemes, the secret key can be updated periodically (according to some algorithm \(\mathsf {Update} \)) and the adversary can obtain bounded leakage between any two updates. Our compiler applies to schemes that satisfy a slight strengthening of CLR we call consecutive CLR, where the adversary can obtain bounded leakage as a joint function of any two consecutive keys. More formally, let \(\mathsf{sk}_{0}, \mathsf{sk}_{1},\mathsf{sk}_{2},\dots , \mathsf{sk}_{t},\dots \) be the secret keys at each time period, where \(\mathsf{sk}_{i} = \mathsf {Update} (\mathsf{sk}_{i-1},r_{i})\), and each \(r_{i}\) denotes fresh random coins used at that round. For leakage functions \( f _{1},\dots , f _{t},\dots \) (chosen adaptively by the adversary), consider the following two leakage models:

  1. For consecutive CLR (2CLR), the adversary obtains leakage

    $$\begin{aligned} f _{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), f _{2}(\mathsf{sk}_{1},\mathsf{sk}_{2}), \dots , f _{t}(\mathsf{sk}_{t-1},\mathsf{sk}_{t}), \dots \;.\end{aligned}$$
  2. For CLR with leakage on key updates, the adversary obtains leakage

    $$\begin{aligned} f _{1}(\mathsf{sk}_{0},r_{1}), f _{2}(\mathsf{sk}_{1},r_{2}), \dots , f _{t}(\mathsf{sk}_{t-1},r_{t}), \dots \;.\end{aligned}$$
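The difference between the two models can be made concrete with a toy sketch of the two leakage oracles. This is purely illustrative Python (a hash-based stand-in plays the role of \(\mathsf {Update} \); none of these names belong to any scheme in this paper):

```python
import hashlib
import os

def update(sk: bytes, r: bytes) -> bytes:
    """Toy stand-in for Update: derive the next key from (sk, r)."""
    return hashlib.sha256(sk + r).digest()

class TwoCLROracle:
    """2CLR: each query leaks jointly on two consecutive keys, f_i(sk_{i-1}, sk_i)."""
    def __init__(self, sk0: bytes):
        self.sk = sk0
    def leak(self, f):
        r = os.urandom(32)                 # update coins are never given to f
        prev, self.sk = self.sk, update(self.sk, r)
        return f(prev, self.sk)

class LeakOnUpdateOracle:
    """Leak-on-update: each query leaks on the key and the update coins, f_i(sk_{i-1}, r_i)."""
    def __init__(self, sk0: bytes):
        self.sk = sk0
    def leak(self, f):
        r = os.urandom(32)
        out = f(self.sk, r)                # f sees the coins used for this update
        self.sk = update(self.sk, r)
        return out
```

Note that in the second oracle, leakage on \((\mathsf{sk}_{i-1}, r_i)\) is leakage on everything that determines \(\mathsf{sk}_i\), which is exactly what makes this model harder to handle.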

Our compiler from 2CLR to CLR with leakage on key updates produces a slightly different \(\mathsf {Update} \) algorithm for the compiled scheme depending on whether we assume indistinguishability obfuscation (iO) [5, 29] or public-coin differing-inputs obfuscation [37]. In both cases, if we start with an underlying scheme that is consecutive two-key CLR while allowing \(\mu \) bits of leakage, then our compiled scheme is CLR with leakage on key updates with leakage rate

$$\begin{aligned}\frac{\mu }{|\mathsf{sk}| + |r_{up}|} \;,\end{aligned}$$

where \(|r_{up}|\) is the length of the randomness required by \(\mathsf {Update} \). When using iO, we obtain \(|r_{up}| = 5|\mathsf{sk}|\), where \(|\mathsf{sk}|\) is the secret key length for the underlying 2CLR scheme, whereas using public-coin differing-inputs obfuscation we obtain \(|r_{up}| = |\mathsf{sk}|\). Thus:

  • Assuming iO, the compiled scheme is CLR with leakage on key updates with leakage rate \(\frac{\mu }{6 \cdot |\mathsf{sk}|}\).

  • Assuming public-coin differing-inputs obfuscation, the compiled scheme is CLR with leakage on key updates with leakage rate \(\frac{\mu }{2 \cdot |\mathsf{sk}|}\).

Thus, if the underlying 2CLR scheme tolerates the optimal number of bits of leakage (\(\approx 1/2 \cdot |\mathsf{sk}|\)), then our resulting public-coin differing-inputs-based scheme achieves leakage rate \(1/4 - o(1)\).
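The rates above follow by direct arithmetic; a quick check in exact rationals (Python, with \(\mu \) and \(|r_{up}|\) measured in units of \(|\mathsf{sk}|\)):

```python
from fractions import Fraction

def leakage_rate(mu: Fraction, r_up: Fraction) -> Fraction:
    """Rate = mu / (|sk| + |r_up|), all quantities in units of |sk|."""
    return mu / (Fraction(1) + r_up)

mu = Fraction(1, 2)  # optimal 2CLR leakage: mu ~ |sk|/2
assert leakage_rate(mu, Fraction(5)) == Fraction(1, 12)  # iO: |r_up| = 5|sk|
assert leakage_rate(mu, Fraction(1)) == Fraction(1, 4)   # pc-diO: |r_up| = |sk|
```

Setting \(|r_{up}| = 0\) in the same formula gives the trivial upper bound \(1/2\), which is why shrinking \(|r_{up}|\) below \(|\mathsf{sk}|\) is the only route to a better rate.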

Our compiler is obtained by adapting and extending the techniques developed by [47] to achieve sender-deniable PKE from any PKE scheme. In sender-deniable PKE, a sender, given a ciphertext and any message, is able to produce coins that make it appear that the ciphertext is an encryption of that message. Intuitively, the connection we make to leakage on key updates is that the simulator in the security proof faces a similar predicament to the coerced sender in the case of deniable encryption; it needs to come up with some randomness that “explains” a current secret key as the update of an old one. Our compiler makes any two such keys explainable in a way that is similar to how Sahai and Waters make any ciphertext and message explainable. Intuitively, this is done by “encoding” a secret key in the explained randomness in a special way that can be detected only by the (obfuscated) \(\mathsf {Update} \) algorithm. Once detected, the \(\mathsf {Update} \) algorithm outputs the encoded secret key, instead of running the normal procedure.

However, in our context, naïvely applying their techniques would result in the randomness required by our \(\mathsf {Update} \) algorithm being very long, which, as described above, affects the leakage rate of our resulting CLR scheme with leakage on key updates in a crucial way (we would not even be able to get a constant leakage rate). We decrease the length of this randomness in two steps. First, we note that the sender-deniable encryption scheme of Sahai and Waters encrypts a message bit by bit and “explains” each message bit individually. This appears to be necessary in their context in order to allow the adversary to choose its challenge messages adaptively, depending on the public key. In our setting this is not the case: the secret key is chosen honestly (not by the adversary), so “non-adaptive” security suffices and we can “explain” a secret key all at once. This gets us to \(|r_{up}| = 5 \cdot |\mathsf{sk}|\) and thus a \(1/12 - o(1)\) leakage rate, assuming the underlying 2CLR scheme can tolerate the optimal leakage. Second, we observe that by switching assumptions from iO to public-coin differing-inputs obfuscation, we can replace some instances of \(\mathsf{sk}\) in the explained randomness with its value under a collision-resistant hash, which gets us to \(|r_{up}| = |\mathsf{sk}|\) and thus a \(1/4 - o(1)\) leakage rate in this case.
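The “explainability” mechanism can be sketched in miniature. In this toy Python stand-in, an HMAC tag under a hardwired trapdoor key plays the role of the hidden encoding; in the actual compiler the branch is hidden by the obfuscation of \(\mathsf {Update} \) rather than by keeping a key secret, so this is a sketch of the control flow only:

```python
import hashlib
import hmac
import os

TRAPDOOR = os.urandom(32)  # hardwired inside the (obfuscated) Update program

def normal_update(sk: bytes, r: bytes) -> bytes:
    """The honest update rule (toy: a hash chain)."""
    return hashlib.sha256(b"upd" + sk + r).digest()

def explain(target_sk: bytes) -> bytes:
    """Simulator's coins: encode target_sk so that Update outputs it.
    Toy encoding = target key || MAC tag under the trapdoor."""
    tag = hmac.new(TRAPDOOR, target_sk, hashlib.sha256).digest()
    return target_sk + tag

def obf_update(sk: bytes, r: bytes) -> bytes:
    """The compiled Update: hidden branch detects encoded coins."""
    if len(r) == 64:
        cand, tag = r[:32], r[32:]
        if hmac.compare_digest(tag, hmac.new(TRAPDOOR, cand, hashlib.sha256).digest()):
            return cand               # encoded key detected: output it
    return normal_update(sk, r)       # honest coins: run the normal procedure
```

Honest coins are random, so the hidden branch fires only with negligible probability; the simulator, by contrast, uses `explain` to produce coins that “explain” any current key as an update of the previous one.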

A natural question is whether the upper bound of \(1/2 - o(1)\) leakage rate for CLR with leakage on key updates can be attained via our techniques (if at all). We leave this as an intriguing open question, but note that the only way to do so would be to further decrease \(|r_{up}|\) so that \(|r_{up}| < |\mathsf{sk}|\).

Part II: Constructions against Two-key Consecutive Continual Leakage. We revisit the existing CLR public key encryption scheme of [12] and show that a suitable modification of it achieves 2CLR with optimal \(1/4 - o(1)\) leakage rate, under the same assumption used by [12] to achieve optimal leakage rate in the basic CLR setting (namely, the symmetric external Diffie–Hellman (SXDH) assumption in bilinear groups; smaller leakage rates can be obtained under weaker assumptions). Our main technical tool is a new generalization of the Crooked Leftover Hash Lemma [6, 26]: where [12] shows that “random subspaces are leakage resilient,” we show that random subspaces are in fact resilient to “consecutive leakage.” Our claim also leads to a simpler analysis of the scheme than appears in [12].

Finally, we also show (via techniques from learning theory) that 2CLR public key encryption generically implies 2CLR one-way relations. Via a transformation of Dodis et al. [21], this then yields 2CLR signatures with the same leakage rate as the starting encryption scheme. Therefore, all the above results translate to the signature setting as well. We also show a direct approach to constructing 2CLR one-way relations following [21], based on the SXDH assumption in bilinear groups, although we are not able to achieve as good a leakage rate this way (only \(1/8-o(1)\)).

Part III: Exploring the relationship between (bounded and continual) leakage resilience and obfuscation. Note that, interestingly, even the strong notion of virtual black-box (VBB) obfuscation does not immediately lead to constructions of leakage-resilient public key encryption. In particular, if we replace the secret key of a public key encryption scheme with a VBB obfuscation of the decryption algorithm, it is not clear that we gain anything: for example, the VBB obfuscation may output a circuit of size |C| in which only \(\sqrt{|C|}\) of the gates are “meaningful” and the remaining gates are simply “dummy” gates, in which case we cannot hope for a leakage bound better than \(L = \sqrt{|C|}\), or a leakage rate better than \(1/\sqrt{|C|}\). Nevertheless, we are able to show that the PKE scheme of Sahai and Waters (SW) [47], which is built from iO and “punctured pseudorandom functions (PRFs),” can naturally be made leakage resilient. To give some brief intuition, a ciphertext in our construction is of the form \((r, w, \mathsf {Ext} (\mathsf{PRF}(k; r), w) \oplus m)\), where \(\mathsf {Ext} \) is a strong extractor, r and w are random values, and the \(\mathsf{PRF}\) key k is embedded in obfuscated programs that are used in both encryption and decryption. In the security proof, we “puncture” the key k at the challenge point \(t^*\) and hardcode the mapping \(t^* \rightarrow y\), where \(y = \mathsf{PRF}(k; t^*)\), in order to preserve the input/output behavior. As in SW, we switch the mapping to \(t^* \rightarrow y^*\) for a random \(y^*\) via security of the puncturable PRF. But now observe that the min-entropy of \(y^*\) is high even after leakage, so the output of the extractor is close to uniform. To achieve optimal leakage rate, we further modify the scheme to separate \(t^* \rightarrow y^*\) from the obfuscated program and store only an encryption of \(t^* \rightarrow y^*\) in the secret key.
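The shape of such a ciphertext, \((r, w, \mathsf {Ext} (\mathsf{PRF}(k; r), w) \oplus m)\), is easy to see in code. This is a toy symmetric sketch only: HMAC stands in for both the PRF and the seeded extractor, and we elide the obfuscated programs that actually hide k in the real scheme:

```python
import hashlib
import hmac
import os

def prf(k: bytes, r: bytes) -> bytes:
    """Toy PRF (HMAC-SHA256 stand-in for the punctured PRF)."""
    return hmac.new(k, r, hashlib.sha256).digest()

def ext(x: bytes, w: bytes) -> bytes:
    """Toy stand-in for a strong seeded extractor with seed w."""
    return hmac.new(w, x, hashlib.sha256).digest()

def encrypt(k: bytes, m: bytes):
    """Ciphertext = (r, w, Ext(PRF(k; r), w) XOR m), for 32-byte m."""
    assert len(m) == 32
    r, w = os.urandom(32), os.urandom(32)
    pad = ext(prf(k, r), w)
    return (r, w, bytes(a ^ b for a, b in zip(pad, m)))

def decrypt(k: bytes, ct) -> bytes:
    r, w, c = ct
    pad = ext(prf(k, r), w)
    return bytes(a ^ b for a, b in zip(pad, c))
```

The leakage-resilience argument lives entirely in the proof: after puncturing, \(y^*\) retains high min-entropy even given \(\mu \) bits of leakage, so the extractor output masking m is close to uniform.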

Note that the last change lends itself to achieving (consecutive) CLR, since the secret key can be refreshed by re-randomizing the encryption. However, the information-theoretic argument above about the entropy remaining in \(y^*\) no longer holds, since additional entropy is lost in every round, and, eventually, \(y^*\) might be recovered in full. To address this issue, we must prevent the attacker from directly leaking on \(y^*\) in each round. Instead of embedding an encryption of \(t^* \rightarrow y^*\) in the secret key, we embed an encryption of a tuple \((s_i, \alpha _i, H(t^*)) \rightarrow y^*\) using a fresh \(s_i\) in each round i, subject to the constraint that \(\alpha _i = \langle s_i, t^* \rangle \). In order to determine whether to output \(y^*\) on some input t, our obfuscated circuit decrypts and checks whether \(H(t^*) = H(t) \wedge \langle s_i, t \rangle = \alpha _i\), where H is a collision-resistant hash function. We rely on the following facts to ensure that \(y^*\) remains indistinguishable from random given the adversary’s view: (a) the adversary must form its leakage queries before learning \(t^*\); (b) very little information about \(t^*\) is contained in the secret key; and (c) due to the previous facts, and since the inner product is a good two-source extractor, \(\langle s_i, t^* \rangle \) remains very close to uniform, even under the leakage. It follows that we can switch, even under leakage, to a random \(\alpha ^*\), uncorrelated with \(s_i, t^*\). Since it is now hard to find inputs satisfying \(H(t^*) = H(t) \wedge \langle s_i, t \rangle = \alpha ^*\), we can, using security of the diO, ignore this conditional statement and replace \(y^*\) with a 0 string in the secret key, while still using \(y^*\) in the challenge ciphertext.
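The release condition \(H(t^*) = H(t) \wedge \langle s_i, t \rangle = \alpha _i\) can be sketched concretely. In this toy Python sketch the inner product is taken over GF(2), SHA-256 stands in for H, and the tuple is shown in the clear, whereas in the actual scheme it sits encrypted inside the secret key and the check runs inside an obfuscated circuit:

```python
import hashlib
import os

def ip2(s: bytes, t: bytes) -> int:
    """Inner product of the bit-strings s, t over GF(2):
    the parity of popcount(s AND t)."""
    acc = 0
    for a, b in zip(s, t):
        acc ^= a & b          # XOR-folding bytes preserves parity mod 2
    return bin(acc).count("1") & 1

def maybe_release(entry, t: bytes):
    """Output y_star only if t passes both checks, else nothing."""
    s, alpha, h_tstar, y_star = entry
    if hashlib.sha256(t).digest() == h_tstar and ip2(s, t) == alpha:
        return y_star
    return None
```

Collision resistance of H forces \(t = t^*\) in the first check, and once \(\alpha \) is switched to an uncorrelated \(\alpha ^*\), no input satisfies both checks, which is what lets the diO argument erase \(y^*\) from the key.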

In the above discussion, we omitted some additional technical challenges due to lack of space. Most notably, we also require that the encryption scheme used for encrypting the tuple in the secret key satisfy a notion of “diO-compatible RCCA-secure re-randomizability,” which we introduce (see Sect. A.2); we show that the “controlled-malleable” RCCA-secure PKE due to Chase et al. [17], based on the Decision-Linear assumption in bilinear groups, satisfies it, giving us a constant leakage rate for our (2)CLR scheme. For an in-depth technical overview and complete proof, see Sect. A.

1.5 Related Work

Leakage-Resilient Cryptography. We discuss various types of memory leakage attacks that have been studied in the literature. Memory attacks are a strong type of attack, where all secrets in memory are subject to leakage, whether or not they are actively being computed on. Memory leakage attacks are motivated by the cold-boot attack of Halderman et al. [34], who showed that for some time after power is shut down, partial data can be recovered from random access memory (DRAM and SRAM). Akavia et al. [2] introduced the model of bounded memory attacks, where arbitrary leakage on memory is allowed, as long as the output size of the leakage function is bounded. Additional models introduced by [16, 27] and [23] allow unbounded-length noisy leakage, unbounded-length leakage under restricted leakage functions, or unbounded-length hard-to-invert leakage, respectively. The works of [12] and [21] introduced the notion of “continual memory leakage” for public key primitives where the secret key is updated while the public key remains the same. This model allows bounded memory leakage between key refreshes. Finally, [12, 21, 24, 43] considered the model of continual memory leakage with leak on update, where leakage can occur while the secret key is being updated. In this work, we consider bounded memory attacks, continual memory leakage and continual memory leakage with leak on update.

There is a long line of constructions of leakage-resilient cryptographic primitives, including public key encryption that are leakage resilient (LR) against bounded memory attacks [2, 46]; public key encryption that is continual leakage resilient (CLR) without leak on update [12]; public key encryption that is CLR with leak on update [43]; digital signature schemes that are leakage resilient (LR) against bounded memory attacks [39]; digital signature schemes that are LR against bounded memory attacks on both secret key and random coins for signing [11, 39, 44]; digital signature schemes that are CLR without leak on update [21]; digital signature schemes that are CLR with leak on update [43].

Obfuscation and Its Applications. Since the breakthrough result of Garg et al. [29], demonstrating the first candidate of indistinguishability obfuscation (iO) for all circuits, a myriad of uses for iO in cryptography have been found. Among these results, the puncturing methodology by Sahai and Waters [47] has been found very useful. Related notions such as differing-inputs obfuscation (diO) [4] have been studied [3, 9, 37]. Please refer to [49, 50] for new constructions, applications, and limitations of obfuscation.

1.6 Organization

We present definitions and preliminaries in Sect. 2. In Sect. 3, we present our compiler from 2CLR public key encryption/signatures to CLR public key encryption/signatures with leakage on key update. In Sect. 4, we prove that the public key encryption scheme of Brakerski et al. [12] achieves 2CLR. In Sect. 5, we present constructions of leakage-resilient public key encryption (in the non-continual setting) from obfuscation and generic assumptions. In Sect. 6, we define 2CLR security for one-way relations and prove that the construction of Dodis et al. [21] achieves the 2CLR notion. In Sect. 7, we present a construction of 2CLR signatures from 2CLR one-way relations. Finally, in Appendix A, we address the question of constructing 2CLR public key encryption from obfuscation and generic assumptions.

2 Definitions and Preliminaries

Statistical Indistinguishability. The statistical distance between two random variables X, Y is defined by

$$\begin{aligned} \varDelta (X,Y) = \frac{1}{2}\sum _x \left| \Pr [X=x] -\Pr [Y=x] \right| \end{aligned}$$

We write \(X{\mathop {\approx }\limits ^{\text {s}}}Y\) to denote that the statistical distance is negligible in the security parameter, and we say that X, Y are statistically indistinguishable.
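For finitely supported distributions the definition is immediate to compute; a small Python sketch:

```python
def statistical_distance(p: dict, q: dict) -> float:
    """Delta(X, Y) = (1/2) * sum_x |Pr[X=x] - Pr[Y=x]|,
    for distributions given as {outcome: probability} dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# a fair bit vs. a 3/4-biased bit are at distance 1/4
fair = {0: 0.5, 1: 0.5}
biased = {0: 0.75, 1: 0.25}
assert abs(statistical_distance(fair, biased) - 0.25) < 1e-12
```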

2.1 Security Definitions for Leakage-Resilient Public Key Encryption

In this subsection, we present definitions of various leakage-resilient public key encryption schemes; these definitions are from the literature. In Subsect. 2.2, we present the definitions for leakage-resilient signature schemes. Jumping ahead, our new definition of consecutive continual leakage resilience (2CLR) is presented in Subsect. 3.1.

We present definitions for obfuscation and puncturable PRFs in Subsects. 2.3 and 2.4.

2.1.1 One-Time Leakage Model

A public key encryption scheme \(\mathsf {PKE}\) consists of three algorithms: \(\mathsf {PKE}.\mathsf{Gen}, \mathsf {PKE}.\mathsf {Enc} \), and \(\mathsf {PKE}.\mathsf {Dec} \).

  • \(\mathsf {PKE}.\mathsf{Gen}(1^{\kappa }) \rightarrow (\mathsf {pk}, \mathsf{sk})\). The key generation algorithm takes in the security parameter \(\kappa \) and outputs a public key \(\mathsf {pk}\) and a secret key \(\mathsf{sk}\).

  • \(\mathsf {PKE}.\mathsf {Enc} (\mathsf {pk}, m) \rightarrow c\). The encryption algorithm takes in a public key \(\mathsf {pk}\) and a message m. It outputs a ciphertext c.

  • \(\mathsf {PKE}.\mathsf {Dec} (\mathsf{sk}, c) \rightarrow m\). The decryption algorithm takes in a ciphertext c and a secret key \(\mathsf{sk}\). It outputs a message m.

Correctness. The PKE scheme satisfies correctness if \(\mathsf {PKE}.\mathsf {Dec} (\mathsf{sk}, c) = m\) with all but negligible probability whenever \((\mathsf {pk}, \mathsf{sk})\) is produced by \(\mathsf {PKE}.\mathsf{Gen}\) and c is produced by \(\mathsf {PKE}.\mathsf {Enc} (\mathsf {pk}, m)\).

Security. We define one-time leakage-resilient security for PKE schemes in terms of the following game between a challenger and an attacker. (This extends the usual notion of semantic security to our leakage setting.) We let \(\kappa \) denote the security parameter, and the parameter \(\mu \) controls the amount of leakage allowed.

Setup Phase:

The game begins with a setup phase. The challenger calls \(\mathsf {PKE}.\mathsf{Gen}(1^\kappa )\) to create the initial secret key \(\mathsf{sk}\) and public key \(\mathsf {pk}\). It gives \(\mathsf {pk}\) to the attacker. No leakage is allowed in this phase.

Query Phase:

The attacker specifies an efficiently computable leakage function \( f \), whose output is at most \(\mu \) bits. The challenger returns \( f (\mathsf{sk})\) to the attacker. We sometimes refer to the challenger as a stateful “leakage oracle,” denoted \(\mathcal {O}\), during the query phase of the security experiment.

Challenge Phase:

The attacker chooses two messages \(m_0,m_1\) which it gives to the challenger. The challenger chooses a random bit \(b \in {\{0,1\}}\), encrypts \(m_b\), and gives the resulting ciphertext to the attacker. The attacker then outputs a guess \(b'\) for b. The attacker wins the game if \(b = b'\). We define the advantage of the attacker in this game as \(|\frac{1}{2} - \Pr [b' = b]|\).

Definition 1

(One-time Leakage Resilience) We say a public key encryption scheme is \(\mu \)-leakage resilient against one-time key leakage if any probabilistic polynomial-time attacker only has a negligible advantage (negligible in \(\kappa \)) in the above game.
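The game above can be phrased as a small experiment driver. This Python sketch is illustrative only: the scheme and adversary interfaces are hypothetical, and leakage is counted as a list of bits:

```python
import secrets

def one_time_leakage_game(gen, enc, adversary, mu: int) -> bool:
    """One run of the mu-bounded one-time leakage experiment.
    Returns True iff the adversary guesses the challenge bit b."""
    pk, sk = gen()
    f = adversary.leakage_function(pk)   # attacker picks f after seeing pk
    leak = f(sk)                         # a list of bits
    assert len(leak) <= mu, "leakage budget exceeded"
    m0, m1 = adversary.challenge(leak)
    b = secrets.randbits(1)
    c = enc(pk, (m0, m1)[b])
    return adversary.guess(c) == b
```

A scheme is \(\mu \)-leakage resilient exactly when, over many independent runs, no PPT adversary wins noticeably more often than half the time.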

2.1.2 Continual Leakage Model

In the continual leakage setting, we require an additional algorithm \(\mathsf {PKE}.\mathsf {Update} \) which updates the secret key. Specifically, the update algorithm takes in a secret key \(\mathsf{sk}_{i-1}\) and some randomness \(r_i\), and produces a new secret key \(\mathsf{sk}_i\) for the same public key. Thus, scheme \(\mathsf {PKE}\) consists of four algorithms: \(\mathsf {PKE}.\mathsf{Gen}, \mathsf {PKE}.\mathsf {Enc},\mathsf {PKE}.\mathsf {Dec} \), and \(\mathsf {PKE}.\mathsf {Update} \).

  • \(\mathsf {PKE}.\mathsf{Gen}(1^\kappa ) \rightarrow (\mathsf {pk}, \mathsf{sk}_0)\). The key generation algorithm takes in the security parameter and outputs a public key \(\mathsf {pk}\) and a secret key \(\mathsf{sk}_0\).

  • \(\mathsf {PKE}.\mathsf {Enc} (\mathsf {pk}, m) \rightarrow c\). The encryption algorithm takes in a public key \(\mathsf {pk}\) and a message m. It outputs a ciphertext c.

  • \(\mathsf {PKE}.\mathsf {Dec} (\mathsf{sk}_i, c) \rightarrow m\). The decryption algorithm takes in a ciphertext c and a secret key \(\mathsf{sk}_i\). It outputs a message m.

  • \(\mathsf {PKE}.\mathsf {Update} (\mathsf{sk}_{i-1}) \rightarrow \mathsf{sk}_i\). The update algorithm takes in a secret key \(\mathsf{sk}_{i-1}\) and produces a new secret key \(\mathsf{sk}_i\) for the same public key. The update algorithm is randomized; we write \(r_i\) for the fresh random coins it uses.

Correctness. The PKE scheme satisfies correctness if \(\mathsf {PKE}.\mathsf {Dec} (\mathsf{sk}_i, c) = m\) with all but negligible probability whenever \((\mathsf {pk}, \mathsf{sk}_0)\) is produced by \(\mathsf {PKE}.\mathsf{Gen}\), \(\mathsf{sk}_i\) is obtained by calls to \(\mathsf {PKE}.\mathsf {Update} \) on previously obtained secret keys (starting with \(\mathsf{sk}_0\)), and c is produced by \(\mathsf {PKE}.\mathsf {Enc} (\mathsf {pk}, m)\).

Security. We define continual leakage-resilient security for PKE schemes in terms of the following game between a challenger and an attacker. (This extends the usual notion of semantic security to our leakage setting.) We let \(\kappa \) denote the security parameter, and the parameter \(\mu \) controls the amount of leakage allowed.

Setup Phase:

The game begins with a setup phase. The challenger calls \(\mathsf {PKE}.\mathsf{Gen}(1^\kappa )\) to create the initial secret key \(\mathsf{sk}_0\) and public key \(\mathsf {pk}\). It gives \(\mathsf {pk}\) to the attacker. No leakage is allowed in this phase.

Query Phase:

In this phase, the attacker makes a polynomial number of leakage queries. In the ith query, the attacker provides an efficiently computable leakage function \( f _i\) whose output is at most \(\mu \) bits; the challenger chooses randomness \(r_i\), updates the secret key from \(\mathsf{sk}_{i-1}\) to \(\mathsf{sk}_i\), and gives the attacker the leakage response \(\ell _i\). In the regular continual leakage model, the leakage is applied to a single secret key, so \(\ell _i = f _i(\mathsf{sk}_{i-1})\). In the continual leak-on-update model, the leakage is applied to the current secret key and the randomness used for updating it, i.e., \(\ell _i = f _i(\mathsf{sk}_{i-1}, r_{i})\). We sometimes refer to the challenger as a stateful “leakage oracle,” denoted \(\mathcal {O}\), during the query phase of the security experiment.

Challenge Phase:

The attacker chooses two messages \(m_0\) and \(m_1\) which it gives to the challenger. The challenger chooses a random bit \(b \in {\{0,1\}}\), encrypts \(m_b\), and gives the resulting ciphertext to the attacker. The attacker then outputs a guess \(b'\) for b. The attacker wins the game if \(b = b'\). We define the advantage of the attacker in this game as \(|\frac{1}{2} - \Pr [b' = b]|\).

Definition 2

(Continual Leakage Resilience) We say a public key encryption scheme is \(\mu \)-CLR secure (respectively, \(\mu \)-CLR secure with leakage on key updates) if any \(\textsc {ppt}\) attacker only has a negligible advantage (negligible in \(\kappa \)) in the above game.

2.2 Leakage-Resilient Signatures

A digital signature scheme \({\mathsf {SIG}} \) consists of three algorithms: \({\mathsf {SIG}}.\mathsf {Gen}, {\mathsf {SIG}}.\mathsf {Sign} \), and \({\mathsf {SIG}}.\mathsf {Verify} \). In the continual leakage setting, we require an additional algorithm \({\mathsf {SIG}}.\mathsf {Update} \) which updates the secret keys. Note that the verification key remains unchanged.

  • \({\mathsf {SIG}}.\mathsf {Gen} (1^\kappa ) \rightarrow (\mathsf{vk}, \mathsf{sk}_0)\). The key generation algorithm takes in the security parameter \(\kappa \), and outputs a secret key \(\mathsf{sk}_0\) and a public verification key \(\mathsf{vk}\).

  • \({\mathsf {SIG}}.\mathsf {Sign} (m,\mathsf{sk}_i) \rightarrow \sigma \). The signing algorithm takes in a message m and a secret key \(\mathsf{sk}_i\), and outputs a signature \(\sigma \).

  • \({\mathsf {SIG}}.\mathsf {Verify} (\mathsf{vk}, \sigma , m) \rightarrow {\{0,1\}} \). The verification algorithm takes in the verification key \(\mathsf{vk}\), a signature \(\sigma \), and a message m. It outputs either 0 or 1.

  • \({\mathsf {SIG}}.\mathsf {Update} (\mathsf{sk}_{i-1}) \rightarrow \mathsf{sk}_i\). The update algorithm takes in a secret key \(\mathsf{sk}_{i-1}\) and produces a new secret key \(\mathsf{sk}_i\) for the same verification key.

Correctness. The signature scheme satisfies correctness if \({\mathsf {SIG}}.\mathsf {Verify} (\mathsf{vk},\sigma , m)\) outputs 1 whenever \((\mathsf{vk}, \mathsf{sk}_0)\) is produced by \({\mathsf {SIG}}.\mathsf {Gen} \), and \(\sigma \) is produced by \({\mathsf {SIG}}.\mathsf {Sign} (m,\mathsf{sk}_i)\) for some \(\mathsf{sk}_i\) obtained by calls to \({\mathsf {SIG}}.\mathsf {Update} \), starting with \(\mathsf{sk}_0\). (If the verification algorithm is randomized, we may relax this requirement to hold with all but negligible probability.)

Security. We define continual leakage security for signatures in terms of the following game between a challenger and an attacker. (This extends the usual notion of existential unforgeability to our leakage setting.) The game is parameterized by two values: the security parameter \(\kappa \), and the parameter \(\mu \) which controls the amount of leakage allowed. For the sake of simplicity, we assume that the signing algorithm calls the update algorithm on each invocation. Since updates in our scheme do occur with each signature, we find it more convenient to work with the simplified definition given below.

Setup Phase.

The game begins with a setup phase. The challenger calls \(\mathsf {Gen} (1^\kappa )\) to create the signing key, \(\mathsf{sk}_0\), and the verification key, \(\mathsf{vk}\). It gives \(\mathsf{vk}\) to the attacker. No leakage is allowed in this phase.

Query Phase.

In this phase, the attacker issues a polynomial number of combined signing and leakage queries. In the ith query, the attacker specifies a message \(m_i\) and provides an efficiently computable leakage function \( f _i\) whose output is at most \(\mu \) bits; the challenger chooses randomness \(r_i\), updates the secret key from \(\mathsf{sk}_{i-1}\) to \(\mathsf{sk}_i\), and gives the attacker the corresponding signature on \(m_i\) as well as the leakage response \(\ell _i\). In the CLR model, the leakage function is applied to a single secret key, and the leakage response is \(\ell _i = f _i(\mathsf{sk}_{i-1})\). In the CLR model with leakage on key updates, the leakage function is applied to the current secret key and the randomness used for updating it, i.e., \(\ell _i = f _i(\mathsf{sk}_{i-1}, r_{i})\).

Forgery Phase.

The attacker gives the challenger a message, \(m^*\), and a signature \(\sigma ^*\) such that \(m^*\) has not been previously queried. The attacker wins the game if \((m^*, \sigma ^*)\) passes the verification algorithm using \(\mathsf{vk}\).

Definition 3

(Continual Leakage Resilience) We say a digital signature scheme is \(\mu \)-CLR secure (respectively, \(\mu \)-CLR secure with leakage on key updates) if any \(\textsc {ppt}\) attacker only has a negligible advantage (negligible in \(\kappa \)) in the above game.

2.3 Obfuscation

Indistinguishability Obfuscation. A uniform \(\textsc {ppt}\) machine \({\mathsf {iO}} \) is called an indistinguishability obfuscator [4, 5, 29, 33] for a circuit family \(\{\mathcal {C}_\kappa \}\) if the following conditions hold:

  • (Correctness) For all \(\kappa \in \mathbb {N}\), for all \(C \in \mathcal {C}_\kappa \), for all inputs x, we have

    $$\begin{aligned} \Pr \left[ C'(x) = C(x) \ : \ C' \leftarrow {\mathsf {iO}} (\kappa , C) \right] = 1 \end{aligned}$$
  • (Indistinguishability) For any uniform or non-uniform \(\textsc {ppt}\) distinguisher D, for all security parameters \(\kappa \in \mathbb {N}\), and for all pairs of circuits \(C_0, C_1 \in \mathcal {C}_\kappa \) such that \(C_0(x) = C_1(x)\) for all inputs x, we have

    $$\begin{aligned} \left| \Pr \left[ D({\mathsf {iO}} (\kappa , C_0)) = 1 \right] - \Pr \left[ D({\mathsf {iO}} (\kappa , C_1)) = 1 \right] \right| \le \mathsf{negl}(\kappa ) \end{aligned}$$

For simplicity, when the security parameter \(\kappa \) is clear from context, we write \({\mathsf {iO}} (C)\) for short.

Public-Coin Differing-inputs Obfuscation for Circuits. Barak et al. [4, 5] defined the notion of differing-inputs obfuscation, which was later re-formulated in the works of Ananth et al. and Boyle et al. [3, 9]. In our work, we use a weaker notion known as public-coin differing-inputs obfuscation, due to Ishai et al. [37]. To the best of our knowledge, unlike the case of differing-inputs obfuscation, there are no impossibility results for public-coin differing-inputs obfuscation. Below, we closely follow the definitions presented in [37].

Definition 4

(Public-Coin Differing-Inputs Sampler for Circuits) An efficient non-uniform sampling algorithm \(\mathsf {Samp}= \{\mathsf {Samp}_{\kappa }\}\) is called a public-coin differing-inputs sampler for the parameterized collection of circuits \(\mathcal {C} = \{\mathcal {C}_{\kappa } \}\) if the output of \(\mathsf {Samp}_{\kappa }\) is distributed over \(\mathcal {C}_{\kappa } \times \mathcal {C}_{\kappa }\) and for every efficient non-uniform algorithm \(\mathcal {A}= \{\mathcal {A}_{\kappa }\}\) there exists a negligible function \(\mathsf{negl}\) such that for all \(\kappa \in \mathbb {N}\):

$$\begin{aligned} \Pr _r[C_{0}(x) \ne C_{1}(x) : (C_{0},C_{1})\leftarrow \mathsf {Samp}_{\kappa }(r), x\leftarrow \mathcal {A}_{\kappa }(r)]\le \mathsf{negl}(\kappa ). \end{aligned}$$

Note that in the above definition the sampler and attacker circuits both receive the same random coins as input.

Definition 5

(Public-Coin Differing-inputs Obfuscator for Circuits) A uniform \(\textsc {ppt}\) machine \({\mathsf {diO}} \) is called a public-coin differing-inputs obfuscator for the parameterized collection of circuits \(\mathcal {C} = \{\mathcal {C}_{\kappa }\}\) if the following conditions are satisfied:

  • (Correctness): For all security parameters \(\kappa \), all \(C\in \mathcal {C}_{\kappa }\), and all inputs x, we have

    $$\begin{aligned} \Pr [C'(x)= C(x): C' \leftarrow {\mathsf {diO}} (\kappa ,C)] =1. \end{aligned}$$
  • (Differing-inputs): For every public-coin differing-inputs sampler \(\mathsf {Samp}= \{\mathsf {Samp}_{\kappa }\}\) for the collection \(\mathcal {C}\) and every (not necessarily uniform) \(\textsc {ppt}\) distinguisher D, there exists a negligible function \(\mathsf{negl}\) such that for all security parameters \(\kappa \):

    $$\begin{aligned}&\left| \begin{array}{l} \Pr [D_{\kappa }(r, C') = 1 : (C_0, C_1) \leftarrow \mathsf {Samp}_{\kappa }(r), C' \leftarrow {\mathsf {diO}} (\kappa , C_{0})]-\\ \qquad \qquad \Pr [D_{\kappa }(r, C') = 1 : (C_0, C_1) \leftarrow \mathsf {Samp}_{\kappa }(r), C' \leftarrow {\mathsf {diO}} (\kappa , C_{1})] \end{array} \right| \\&\quad \le \mathsf{negl}(\kappa ), \end{aligned}$$

    where the probability is taken over r and the coins of \({\mathsf {diO}} \).

2.4 Puncturable Pseudorandom Functions

Puncturable PRF families are a special case of constrained PRFs [8, 10, 41], in which the PRF is defined on all input strings except for a set of size polynomial in the security parameter. Below we recall their definition, as given in [47].

A puncturable family of PRFs \({\mathsf {PRF}} \) is defined by a tuple of efficient algorithms \((\mathsf {Gen}, \mathsf{Eval}, \mathsf {Punct})\) and a pair of polynomials n() and m():

  • Key Generation. \(\mathsf {Gen} (1^\kappa )\) is a \(\textsc {ppt}\) algorithm that takes as input the security parameter \(\kappa \) and outputs a PRF key K.

  • Punctured Key Generation. \(\mathsf {Punct}(K,S)\) is a \(\textsc {ppt}\) algorithm that takes as input a PRF key K and a set \(S \subset {\{0,1\}}^{n(\kappa )}\), and outputs a punctured key \(K_S\).

  • Evaluation. \(\mathsf{Eval}(K,x)\) is a deterministic algorithm that takes as input a key K (punctured key or PRF key) and a string \(x\in {\{0,1\}}^{n(\kappa )}\), and outputs \(y\in {\{0,1\}}^{m(\kappa )}\).

Definition 6

A family of PRFs \((\mathsf {Gen}, \mathsf{Eval}, \mathsf {Punct})\) is puncturable if it satisfies the following properties:

  • Functionality preserved under puncturing. Let \(K\leftarrow \mathsf {Gen} (1^\kappa )\) and \(K_S\leftarrow \mathsf {Punct}(K,S)\). Then for all \(x\not \in S, \mathsf{Eval}(K,x)=\mathsf{Eval}(K_S, x)\).

  • Pseudorandom at (non-adaptively) punctured points. For every \(\textsc {ppt}\) adversary \((\mathcal {A}_1,\mathcal {A}_2)\) such that \(\mathcal {A}_1()\) outputs a set \(S \subset {\{0,1\}}^{n(\kappa )}\) and a point \(x\in S\), consider the experiment in which \(K\leftarrow \mathsf {Gen} (1^\kappa )\) and \(K_S\leftarrow \mathsf {Punct}(K,S)\). Then

    $$\begin{aligned}\left| \Pr [\mathcal {A}_2(K_S, x, \mathsf{Eval}(K,x))=1] - \Pr [\mathcal {A}_2(K_S, x, U_{m(\kappa )})=1] \right| \le \mathsf{negl}(\kappa )\end{aligned}$$

    where \(U_{m(\kappa )}\) denotes the uniform distribution over \(m(\kappa )\) bits. Note that the set S is chosen non-adaptively, before the key K is generated.

Theorem 1

[8, 10, 32, 41] If one-way functions exist, then for all polynomials n() and m(), there exists a puncturable PRF family that maps n() bits to m() bits.
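The existence claim in Theorem 1 is realized by the classic GGM tree construction. The following sketch, with toy parameters and function names of our own, uses HMAC-SHA256 as a stand-in for the length-doubling PRG; the punctured key for a single point x is the sequence of sibling seeds along x's root-to-leaf path.

```python
import hmac
import hashlib
import os

N = 8  # toy input length n in bits; real schemes take n = n(kappa)

def prg(seed):
    """Length-doubling PRG stand-in: seed -> (left child, right child)."""
    left = hmac.new(seed, b"0", hashlib.sha256).digest()
    right = hmac.new(seed, b"1", hashlib.sha256).digest()
    return left, right

def eval_prf(key, x):
    """GGM evaluation: descend the binary tree along the bits of x."""
    s = key
    for i in range(N):
        s = prg(s)[(x >> (N - 1 - i)) & 1]
    return s

def puncture(key, x):
    """Punctured key K_S for S = {x}: the sibling seed at every level of x's path."""
    sibs, s = [], key
    for i in range(N):
        b = (x >> (N - 1 - i)) & 1
        left, right = prg(s)
        sibs.append((i, 1 - b, right if b == 0 else left))  # seed off x's path
        s = left if b == 0 else right
    return sibs

def eval_punctured(k_x, y):
    """Evaluate at any y != x; at the punctured point itself, no seed applies."""
    for depth, bit, seed in k_x:
        if (y >> (N - 1 - depth)) & 1 == bit:  # first divergence from x's path
            s = seed
            for i in range(depth + 1, N):
                s = prg(s)[(y >> (N - 1 - i)) & 1]
            return s
    raise ValueError("cannot evaluate at the punctured point")

key = os.urandom(32)   # K <- Gen(1^kappa)
x = 0b10110101         # point to puncture
k_x = puncture(key, x)
```

Functionality is preserved under puncturing: for every \(y \ne x\) the punctured evaluation agrees with the full key, while the value at x remains pseudorandom given only `k_x`.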

Next we consider PRF families that are injective with high probability:

Definition 7

A statistically injective (puncturable) PRF family with failure probability \(\epsilon ()\) is a family of (puncturable) PRFs such that with probability \(1-\epsilon (\kappa )\) over the random choice of key \(K\leftarrow \mathsf {Gen} (1^\kappa )\), we have that \(\mathsf{Eval}(K, \cdot )\) is injective.

If the failure probability function \(\epsilon ()\) is not specified, then \(\epsilon ()\) is a negligible function.

Theorem 2

[47] If one-way functions exist, then for all efficiently computable functions \(n(\kappa ),m(\kappa )\), and \(e(\kappa )\) such that \(m(\kappa ) > 2 n(\kappa ) + e(\kappa )\), there exists a puncturable statistically injective PRF family with failure probability \(2^{-e(\kappa )}\) that maps \(n(\kappa )\) bits to \(m(\kappa )\) bits.

Finally, we consider PRFs that are also (strong) extractors over their inputs:

Definition 8

An extracting (puncturable) PRF family with error \(\epsilon ()\) for min-entropy \(k(\kappa )\) is a family of (puncturable) PRFs mapping \(n(\kappa )\) bits to \(m(\kappa )\) bits such that for all \(\kappa \), if X is any distribution over \(n(\kappa )\) bits with min-entropy greater than \(k(\kappa )\) then the statistical distance between \((K\leftarrow \mathsf {Gen} (1^\kappa ), \mathsf{Eval}(K,X))\) and \((K\leftarrow \mathsf {Gen} (1^\kappa ), U_{m(\kappa )})\) is at most \(\epsilon (\kappa )\), where \(U_\ell \) denotes the uniform distribution over \(\ell \) bit strings.

Theorem 3

[47] If one-way functions exist, then for all efficiently computable functions \(n(\kappa ),m(\kappa ),k(\kappa )\) and \(e(\kappa )\) such that \(n(\kappa )> k(\kappa )> m(\kappa ) + 2 e(\kappa )+2\), there exists an extracting puncturable PRF family that maps \(n(\kappa )\) bits to \(m(\kappa )\) bits with error \(2^{-e(\kappa )}\) for min-entropy \(k(\kappa )\).

For ease of presentation, for a puncturable family of PRFs \({\mathsf {PRF}} \), we often write \({\mathsf {PRF}} (K,x)\) to represent \({\mathsf {PRF}}.\mathsf{Eval}(K,x)\).

3 Compiler from 2CLR to Leakage on Key Updates

In this section, we present a compiler that upgrades any public key encryption (PKE) or digital signature (SIG) scheme that is consecutive two-key leakage resilient into one that is secure against leakage on key updates. We first introduce the notion of an explainable update transformation, a generalization of the idea of universal deniable encryption by Sahai and Waters [47]. We show how to use such a transformation to upgrade a scheme (PKE or SIG) that is secure in the consecutive two-key leakage model into one that is secure in the leak-on-update model (Sect. 3.2). Finally, we show two instantiations of the explainable update transformation: one based on indistinguishability obfuscation and the other on public-coin differing-inputs obfuscation (Sect. 3.3). For clarity of exposition, the following sections focus on constructions of PKE. In Sect. 3.4, we show that the results translate to SIG.

3.1 Consecutive Continual Leakage Resilience (2CLR)

In this subsection, we present a new notion of consecutive continual leakage resilience for public key encryption (PKE). We remark that this notion can be easily extended to different cases, such as signatures or leakage-resilient one-way relations [21]. For simplicity and concreteness, we only present the PKE version. Let \(\kappa \) denote the security parameter and \(\mu \) be the leakage bound between two updates. Let \(\mathsf {PKE}= \{\mathsf {Gen},\mathsf {Enc},\mathsf {Dec},\mathsf {Update} \}\) be an encryption scheme with update.

Setup Phase.

The game begins with a setup phase. The challenger calls \(\mathsf {PKE}.\mathsf {Gen} (1^\kappa )\) to create the initial secret key \(\mathsf{sk}_0\) and public key \(\mathsf {pk}\). It gives \(\mathsf {pk}\) to the attacker. No leakage is allowed in this phase.

Query Phase.

The attacker specifies an efficiently computable leakage function \( f _1\), whose output is at most \(\mu \) bits. The challenger updates the secret key (changing it from \(\mathsf{sk}_0\) to \(\mathsf{sk}_1\)), and then gives the attacker \( f _1(\mathsf{sk}_0,\mathsf{sk}_1)\). The attacker then repeats this a polynomial number of times, each time supplying an efficiently computable leakage function \( f _i\) whose output is at most \(\mu \) bits. Each time, the challenger updates the secret key from \(\mathsf{sk}_{i-1}\) to \(\mathsf{sk}_i\) according to \(\mathsf {Update} (\cdot )\) and gives the attacker \( f _i(\mathsf{sk}_{i-1}, \mathsf{sk}_i)\).

Challenge Phase.

The attacker chooses two messages \(m_0,m_1\) which it gives to the challenger. The challenger chooses a random bit \(b \in {\{0,1\}}\), encrypts \(m_b\), and gives the resulting ciphertext to the attacker. The attacker then outputs a guess \(b'\) for b. The attacker wins the game if \(b = b'\). We define the advantage of the attacker in this game as \(|\frac{1}{2} - \Pr [b' = b]|\).

Definition 9

(Continual Consecutive Leakage Resilience) We say a public key encryption scheme is \(\mu \)-leakage resilient against consecutive continual leakage (or \(\mu \)-2CLR) if any probabilistic polynomial-time attacker only has a negligible advantage (negligible in \(\kappa \)) in the above game.

3.2 Explainable Key Update Transformation

Now we introduce a notion of explainable key update transformation and show how it can be used to upgrade security of a PKE scheme from 2CLR to CLR with leakage on key updates. Informally, an encryption scheme has an “explainable” update procedure if given both \(\mathsf{sk}_{i-1}\) and \(\mathsf{sk}_{i} = \mathsf {Update} (\mathsf{sk}_{i-1},r_{i})\), there is an efficient way to come up with some explained random coins \({\hat{r}}_{i}\) such that no adversary can distinguish the real coins \(r_{i}\) from the explained coins \({\hat{r}}_{i}\). Intuitively, this gives a way to handle leakage on random coins given just leakage on two consecutive keys.
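As a degenerate but runnable illustration of this explainability property (a toy of our own, not the paper's construction): if the update procedure simply XORs the key with its random coins, then the coins can be recovered exactly from two consecutive keys, so the "explained" coins coincide with the real ones.

```python
import os

def toy_update(sk: bytes, r: bytes) -> bytes:
    """Toy key update: sk' = sk XOR r (illustrative only, not secure)."""
    return bytes(a ^ b for a, b in zip(sk, r))

def toy_explain(sk: bytes, sk_next: bytes) -> bytes:
    """Explain algorithm: coins r-hat with toy_update(sk, r-hat) == sk_next."""
    return bytes(a ^ b for a, b in zip(sk, sk_next))

sk0 = os.urandom(16)
r1 = os.urandom(16)             # real update coins
sk1 = toy_update(sk0, r1)       # sk_1 = Update(sk_0; r_1)
r1_hat = toy_explain(sk0, sk1)  # explained coins
```

Here \({\hat{r}}_1\) equals \(r_1\) exactly, which trivially satisfies the requirement; the actual transformation only needs the explained coins to be computationally indistinguishable from the real ones, which is what the obfuscation-based instantiations of Sect. 3.3 provide.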

We start with any encryption scheme \(\mathsf {PKE}\) that has some key update procedure, and we introduce a transformation that produces a scheme \(\mathsf {PKE}'\) with an explainable key update procedure.

Definition 10

(Explainable Key Update Transformation) Let \(\mathsf {PKE}= \mathsf {PKE}. \{\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec}, \mathsf {Update} \}\) be an encryption scheme with key update. An explainable key update transformation for \(\mathsf {PKE}\) is a \(\textsc {ppt} \) algorithm \(\mathsf{TransformGen}\) that takes as input a security parameter \(1^{\kappa }\), an update circuit \(C_{\mathsf {Update}}\) (implementing the key update algorithm \(\mathsf {PKE}.\mathsf {Update} (1^{\kappa }, \cdot ; \cdot )\)), and a public key \(\mathsf {pk}\) of \(\mathsf {PKE}\), and outputs two programs \({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}\) with the following syntax:

Let \((\mathsf {pk}, \mathsf{sk})\) be a pair of public and secret keys of the encryption scheme:

  • \({\mathcal {P}_\mathsf{update}}\) takes inputs \(\mathsf{sk}\), random coins r, and \({\mathcal {P}_\mathsf{update}}(\mathsf{sk}; r) \) outputs an updated secret key \(\mathsf{sk}'\);

  • \({\mathcal {P}_\mathsf{explain}}\) takes inputs \((\mathsf{sk}, \mathsf{sk}')\), random coins \({\bar{v}}\), and \({\mathcal {P}_\mathsf{explain}}(\mathsf{sk},\mathsf{sk}'; {\bar{v}})\) outputs a string r.

Given a polynomial \(\rho (\cdot )\) and a public key \(\mathsf {pk}\), we define \(\varPi _{\mathsf {pk}} = \bigcup _{j=0}^{\rho (\kappa )} \varPi _{j}\), where \(\varPi _{0} = \{\mathsf{sk}: (\mathsf {pk},\mathsf{sk}) \in \mathsf {PKE}.\mathsf {Gen} \}, \varPi _{i} = \{\mathsf{sk}: \exists \mathsf{sk}' \in \varPi _{i-1}, \mathsf{sk}\in \mathsf {Update} (\mathsf{sk}')\}\) for \(i=1, 2, \ldots , \rho (\kappa )\). In words, \(\varPi _{\mathsf {pk}}\) is the set of all secret keys \(\mathsf{sk}\) such that either \((\mathsf {pk},\mathsf{sk})\) is in the support of \(\mathsf {PKE}.\mathsf {Gen} \) or \(\mathsf{sk}\) can be obtained by the update procedure \(\mathsf {Update} \) (up to polynomially many times) with an initial \((\mathsf {pk},\mathsf{sk}') \in \mathsf {PKE}.\mathsf {Gen} \).

We say the transformation is secure if:

  (a)

    For any polynomial \(\rho (\cdot )\), any \(\mathsf {pk}\), all \(\mathsf{sk}\in \varPi _{\mathsf {pk}}\), any \({\mathcal {P}_\mathsf{update}}\in \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update},\mathsf {pk})\), the following two distributions are statistically close: \(\{{\mathcal {P}_\mathsf{update}}(\mathsf{sk})\} \approx \{\mathsf {PKE}.\mathsf {Update} (\mathsf{sk})\}\). Note that the circuit \({\mathcal {P}_\mathsf{update}}\) and the update algorithm \(\mathsf {PKE}.\mathsf {Update} \) might have different spaces for random coins, but the distributions can still be statistically close.

  (b)

    For any public key \(\mathsf {pk}\) and secret key \(\mathsf{sk}\in \varPi _{\mathsf {pk}}\), the following two distributions are computationally indistinguishable:

    $$\begin{aligned}\{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk},\mathsf{sk},u)\} \approx \{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk},\mathsf{sk},e)\},\end{aligned}$$

    where \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update}, \mathsf {pk}),u\leftarrow U_{\mathrm{poly}(\kappa )}, \mathsf{sk}' = {\mathcal {P}_\mathsf{update}}(\mathsf{sk};u)\), \(e \leftarrow {\mathcal {P}_\mathsf{explain}}(\mathsf{sk},\mathsf{sk}')\), and \(U_{\mathrm{poly}(\kappa )}\) denotes the uniform distribution over a polynomial number of bits.

Let \(\mathsf {PKE}= \mathsf {PKE}.\{\mathsf {Gen}, \mathsf {Enc},\mathsf {Dec},\mathsf {Update} \}\) be a public key encryption scheme and \(\mathsf{TransformGen}\) be an explainable key update transformation for \(\mathsf {PKE}\) as above. We define the following transformed scheme \(\mathsf {PKE}' = \mathsf {PKE}'.\{\mathsf {Gen}, \mathsf {Enc},\mathsf {Dec},\mathsf {Update} \}\) as follows:

  • \(\mathsf {PKE}'.\mathsf {Gen} (1^{\kappa })\): compute \((\mathsf {pk},\mathsf{sk}) \leftarrow \mathsf {PKE}.\mathsf {Gen} (1^{\kappa })\). Then compute \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update}, \mathsf {pk})\). Finally, output \(\mathsf {pk}' = (\mathsf {pk}, {\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\) and \(\mathsf{sk}' = \mathsf{sk}\).

  • \(\mathsf {PKE}'.\mathsf {Enc} (\mathsf {pk}',m)\): parse \(\mathsf {pk}' = (\mathsf {pk}, {\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\). Then output \(c \leftarrow \mathsf {PKE}.\mathsf {Enc} (\mathsf {pk}, m)\).

  • \(\mathsf {PKE}'.\mathsf {Dec} (\mathsf{sk}',c)\): output \(m = \mathsf {PKE}.\mathsf {Dec} (\mathsf{sk}', c)\).

  • \(\mathsf {PKE}'.\mathsf {Update} (\mathsf{sk}')\): output \(\mathsf{sk}'' \leftarrow {\mathcal {P}_\mathsf{update}}(\mathsf{sk}')\).
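The four algorithms of \(\mathsf {PKE}'\) are a thin wrapper around \(\mathsf {PKE}\). The following Python sketch shows only that structure, with all base algorithms and \(\mathsf{TransformGen}\) passed in as hypothetical callables; the toy instantiation at the bottom is insecure and exists purely to exercise the wiring.

```python
import os

class TransformedPKE:
    """Structural sketch of PKE': bundle (P_update, P_explain) into the
    public key and route key updates through the public program P_update."""

    def __init__(self, base_gen, base_enc, base_dec, transform_gen):
        self.base_gen, self.base_enc, self.base_dec = base_gen, base_enc, base_dec
        self.transform_gen = transform_gen  # -> (P_update, P_explain)

    def gen(self, kappa):
        pk, sk = self.base_gen(kappa)
        p_update, p_explain = self.transform_gen(kappa, pk)
        return (pk, p_update, p_explain), sk   # pk' = (pk, P_update, P_explain)

    def enc(self, pk_prime, m):
        return self.base_enc(pk_prime[0], m)   # encryption ignores the programs

    def dec(self, sk, c):
        return self.base_dec(sk, c)

    def update(self, pk_prime, sk, coins):
        return pk_prime[1](sk, coins)          # sk'' <- P_update(sk'; r)

# Insecure toy instantiation (pk == sk, XOR everywhere), wiring check only:
toy = TransformedPKE(
    base_gen=lambda kappa: ((k := int.from_bytes(os.urandom(2), "big")), k),
    base_enc=lambda pk, m: m ^ pk,
    base_dec=lambda sk, c: c ^ sk,
    transform_gen=lambda kappa, pk: (lambda sk, r: sk ^ r,       # P_update
                                     lambda sk, sk2: sk ^ sk2),  # P_explain
)
pk_prime, sk = toy.gen(128)
assert toy.dec(sk, toy.enc(pk_prime, 42)) == 42
```

Note the design point this mirrors: encryption and decryption are untouched, so any efficiency or correctness properties of the base scheme carry over; only the update path changes.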

We then obtain the following theorem for the upgraded scheme \(\mathsf {PKE}'\).

Theorem 4

Let \(\mathsf {PKE}= \mathsf {PKE}.\{\mathsf {Gen}, \mathsf {Enc},\mathsf {Dec},\mathsf {Update} \}\) be a public key encryption scheme that is \(\mu \)-2CLR (without leakage on update), and \(\mathsf{TransformGen}\) a secure explainable key update transformation for \(\mathsf {PKE}\). Then the transformed scheme \(\mathsf {PKE}' = \mathsf {PKE}'.\{\mathsf {Gen}, \mathsf {Enc},\mathsf {Dec},\mathsf {Update} \}\) described above is \(\mu \)-CLR with leakage on key updates.

Proof

Assume toward contradiction that there exist a \(\textsc {ppt}\) adversary \(\mathcal {A} \) and a non-negligible \({\epsilon }(\cdot )\) such that for infinitely many values of \(\kappa \), \(\mathsf{Adv}_{\mathcal {A}, \mathsf {PKE}'} \ge {\epsilon }(\kappa )\) in the leak-on-update model. We show that there then exists \(\mathcal {B} \) that breaks the security of the underlying \(\mathsf {PKE}\) (in the consecutive two-key leakage model) with advantage \({\epsilon }(\kappa )- \mathsf{negl}(\kappa )\). This is a contradiction.

For notational simplicity, we will use \(\mathsf{Adv}_{\mathcal {A},\mathsf {PKE}'}\) to denote the advantage of the adversary \(\mathcal {A} \) attacking the scheme \(\mathsf {PKE}'\) (under leak-on-update attacks), and \(\mathsf{Adv}_{\mathcal {B},\mathsf {PKE}}\) to denote the advantage of the adversary \(\mathcal {B} \) attacking the scheme \(\mathsf {PKE}\) (under consecutive two-key leakage attacks).

We define \(\mathcal {B} \) in the following way: \(\mathcal {B} \) internally instantiates \(\mathcal {A} \) and participates externally in a continual consecutive two-key leakage experiment on the public key encryption scheme \(\mathsf {PKE}\). Specifically, \(\mathcal {B} \) does the following:

  • Upon receiving \(\mathsf {pk}^{*}\) externally, \(\mathcal {B} \) runs \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}) \leftarrow \mathsf{TransformGen}(1^{\kappa }, \mathsf {PKE}.\mathsf {Update}, \mathsf {pk}^{*})\). Note that by the properties of the transformation, this can be done given only \(\mathsf {pk}^{*}\). \(\mathcal {B} \) sets \(\mathsf {pk}' = (\mathsf {pk}^{*}, {\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\) to be the public key for the \(\mathsf {PKE}'\) scheme and forwards \(\mathsf {pk}'\) to \(\mathcal {A} \).

  • When \(\mathcal {A} \) asks a leakage query \(f(\mathsf{sk}_{i-1}', r_i)\), \(\mathcal {B} \) asks the following leakage query on \((\mathsf{sk}_{i-1}, \mathsf{sk}_i)\): \(f'(\mathsf{sk}_{i-1}, \mathsf{sk}_i) = f(\mathsf{sk}_{i-1}, {\mathcal {P}_\mathsf{explain}}(\mathsf{sk}_{i-1}, \mathsf{sk}_i))\), and forwards the response to \(\mathcal {A} \). Note that the output lengths of f and \(f'\) are the same.

  • At some point, \(\mathcal {A} \) submits \(m_0, m_1\) and \(\mathcal {B} \) forwards them to its external experiment.

  • Upon receiving the challenge ciphertext \(c^*\), \(\mathcal {B} \) forwards it to \(\mathcal {A} \) and outputs whatever \(\mathcal {A} \) outputs.
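The heart of the reduction is the second bullet: \(\mathcal {B}\) rewrites each leak-on-update query into a consecutive two-key query of the same output length, simulating the update coins via \({\mathcal {P}_\mathsf{explain}}\). A minimal sketch of this translation (names and the XOR toy are ours):

```python
def translate_leakage(f, p_explain):
    """Turn A's leak-on-update query f(sk_prev, r) into B's consecutive
    two-key query f'(sk_prev, sk_next). The coins are simulated via
    P_explain, so the output length of f' equals that of f and the
    per-round leakage bound mu is preserved."""
    def f_prime(sk_prev, sk_next):
        return f(sk_prev, p_explain(sk_prev, sk_next))
    return f_prime

# Toy check with XOR-style update coins: r = sk_prev XOR sk_next
toy_explain = lambda sk_prev, sk_next: sk_prev ^ sk_next
f = lambda sk, r: (sk + r) % 2  # a 1-bit leakage function
f_prime = translate_leakage(f, toy_explain)
```

Whether this translation goes unnoticed by \(\mathcal {A}\) is exactly what property (b) of the transformation and Lemma 1 establish.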

Now we analyze the advantage of \(\mathcal {B} \). It is easy to see that \(\mathcal {B} \) has the same advantage as \(\mathcal {A} \); however, there is a subtlety: \(\mathcal {A} \) does not necessarily have advantage \({\epsilon }(\kappa )\), because the simulation of leakage queries provided by \(\mathcal {B} \) is not identical to the distribution \(\mathcal {A} \) would see in the real game. Recall that in the security experiment for the scheme \(\mathsf {PKE}'\), the secret keys are updated according to \({\mathcal {P}_\mathsf{update}}\). In the experiment above (which \(\mathcal {B} \) set up), the secret keys were updated using the external \(\mathsf {Update} \) algorithm, and the random coins were simulated by the \({\mathcal {P}_\mathsf{explain}}\) algorithm.

Our goal is to show that actually \(\mathcal {A} \) has essentially the same advantage in this modified experiment as in the original experiment. We show this by the following lemma:

Lemma 1

For any polynomial n, the following two distributions are computationally indistinguishable.

$$\begin{aligned} D_{1}\equiv & {} ({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk},\mathsf{sk}_{0}, r_{1}, \mathsf{sk}_1, \ldots , \mathsf{sk}_{n-1}, r_{n}, \mathsf{sk}_{n}) \approx \\ D_{2 }\equiv & {} ({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk},\mathsf{sk}_0,{\widehat{r}}_1, {\widehat{\mathsf{sk}}}_1, \ldots , {\widehat{\mathsf{sk}}}_{n-1}, {\widehat{r}}_{n}, {\widehat{\mathsf{sk}}}_{n} ), \end{aligned}$$

where the initial \(\mathsf {pk},\mathsf{sk}_{0}\) and \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update},\mathsf {pk})\) are sampled identically in both experiments; in \(D_{1}\), \(\mathsf{sk}_{i+1} = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}_{i};r_{i+1})\) and the \(r_{i+1}\)’s are uniformly random; in \(D_{2}\), \({\widehat{\mathsf{sk}}}_{i+1} \leftarrow \mathsf {Update} ({\widehat{\mathsf{sk}}}_{i})\) and \({\widehat{r}}_{i+1} \leftarrow {\mathcal {P}_\mathsf{explain}}({\widehat{\mathsf{sk}}}_{i},{\widehat{\mathsf{sk}}}_{i+1})\). (Note that \({\widehat{\mathsf{sk}}}_{0} = \mathsf{sk}_{0}\).)

Proof

To show the lemma, we consider the following hybrids: for \(i\in [n]\) define

$$\begin{aligned}H^{(i)}= & {} ({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk},\mathsf{sk}_0,{\widehat{r}}_1, {\widehat{\mathsf{sk}}}_1, \ldots , {\widehat{\mathsf{sk}}}_{i-1}, \\&r_{i}, \mathsf{sk}_{i}, r_{i+1}, \mathsf{sk}_{i+1},r_{i+2}, \ldots , \mathsf{sk}_{n}),\end{aligned}$$

where the experiment is identical to \(D_{2}\) up to \({\widehat{\mathsf{sk}}}_{i-1}\). It then samples a uniformly random \(r_{i}\), sets \(\mathsf{sk}_{i}= {\mathcal {P}_\mathsf{update}}({\widehat{\mathsf{sk}}}_{i-1}; r_{i})\), and proceeds as in \(D_{1}\).

$$\begin{aligned}H^{(i.5)}= & {} ({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk},\mathsf{sk}_0,{\widehat{r}}_1, {\widehat{\mathsf{sk}}}_1, \ldots ,\\&{\widehat{\mathsf{sk}}}_{i-1}, {\widehat{r}}_{i}, \mathsf{sk}_{i}, r_{i+1}, \mathsf{sk}_{i+1},r_{i+2}, \ldots , \mathsf{sk}_{n}),\end{aligned}$$

where the experiment is identical to \(H^{(i)}\) up to \({\widehat{\mathsf{sk}}}_{i-1}\); it then samples \(\mathsf{sk}_{i} \leftarrow {\mathcal {P}_\mathsf{update}}({\widehat{\mathsf{sk}}}_{i-1}) \) and \({\widehat{r}}_{i} \leftarrow {\mathcal {P}_\mathsf{explain}}({\widehat{\mathsf{sk}}}_{i-1}, \mathsf{sk}_{i})\). The experiment is identical to \(D_{1}\) for the rest.

We then establish the following two lemmas, from which Lemma 1 follows directly.

Lemma 2

For \(i \in [n-1],H^{(i.5)}\) is statistically close to \(H^{(i+1)}\).

Lemma 3

For \(i \in [n],H^{(i)}\) is computationally indistinguishable from \(H^{(i.5)}\).

Lemma 2 follows directly from property (a) of Definition 10. We now prove Lemma 3.

Proof

Suppose there exists a (poly-size) distinguisher \(\mathcal {D}\) that distinguishes \(H^{(i)}\) from \(H^{(i.5)}\) with non-negligible probability. We show that there then exist \(\mathsf {pk}^{*},\mathsf{sk}^{*}\) and another distinguisher \(\mathcal {D}'\) that break property (b).

From the definition of the experiments, the prefix \(\mathbf {p} = (\mathsf {pk},\mathsf{sk}_0, {\widehat{\mathsf{sk}}}_1, \ldots , {\widehat{\mathsf{sk}}}_{i-1})\) is sampled independently of \({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}\) (given \(\mathsf {pk}\)), and it is identically distributed in \(H^{(i)}\) and \(H^{(i.5)}\). By an averaging argument, there exists a fixed

$$\begin{aligned} \mathbf {p}^{*} = (\mathsf {pk}^{*},\mathsf{sk}^{*}_0, {\widehat{\mathsf{sk}}}^{*}_1, \ldots , {\widehat{\mathsf{sk}}}^{*}_{i-1}) \end{aligned}$$

such that \(\mathcal {D}\) distinguishes \(H^{(i)}\) from \(H^{(i.5)}\) conditioned on \(\mathbf {p}^{*}\) with non-negligible probability. (The probability is over the randomness of the rest of the experiment.) We then argue that there exist a poly-size distinguisher \(\mathcal {D}'\) and a key pair \((\mathsf {pk}', \mathsf{sk}')\) such that \(\mathcal {D}'\) can distinguish \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}',\mathsf{sk}', u)\) from \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}',\mathsf{sk}', e)\), where u is uniformly random, \(\mathsf{sk}'' = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}'; u)\), and \(e \leftarrow {\mathcal {P}_\mathsf{explain}}(\mathsf{sk}', \mathsf{sk}'')\).

Let \(\mathsf {pk}' = \mathsf {pk}^{*}\) and \(\mathsf{sk}' = {\widehat{\mathsf{sk}}}^{*}_{i-1}\), and define \(\mathcal {D}'\) (with the prefix \(\mathbf {p}^{*}\) hardwired), which on the challenge input \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk}',\mathsf{sk}', z )\) does the following:

  • For \(j \in [i-1]\), \(\mathcal {D}'\) samples \({\widehat{r}}_{j} \leftarrow {\mathcal {P}_\mathsf{explain}}(\mathsf{sk}^{*}_{j-1},\mathsf{sk}^{*}_{j})\).

  • Set \(\mathsf{sk}_{i-1}=\mathsf{sk}'\), \(r_{i}= z\), and \(\mathsf{sk}_{i} = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}_{i-1}; z)\).

  • For \(j \ge i+1\), \(\mathcal {D}'\) samples \(r_{j}\) uniformly at random and sets \(\mathsf{sk}_{j} = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}_{j-1};r_{j})\).

  • Finally, \(\mathcal {D}'\) outputs \(\mathcal {D}({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}}, \mathsf {pk}',\mathsf{sk}^{*}_{0}, {\widehat{r}}_{1}, \mathsf{sk}^{*}_{1}, \ldots , \mathsf{sk}_{i-1},r_{i},\mathsf{sk}_{i},r_{i+1},\ldots , \mathsf{sk}_{n} )\).

Clearly, if the challenge z was sampled uniformly at random (as u), then \(\mathcal {D}'\) outputs according to \(\mathcal {D}(H^{(i)}|_{\mathbf {p}^{*}})\). On the other hand, if it was sampled according to \({\mathcal {P}_\mathsf{explain}}\) (as e), then \(\mathcal {D}'\) outputs according to \(\mathcal {D}(H^{(i.5)}|_{\mathbf {p}^{*}})\). This completes the proof of the lemma. \(\square \)

Remark 1

The non-uniform argument above is not necessary; we present it this way for simplicity. A uniform reduction can be obtained using a standard Markov-type argument, which we omit here.

Now, we are ready to analyze the advantage of \(\mathcal {B} \) (and \(\mathcal {A} \)). Denote by \(\mathsf{Adv}_{\mathcal {A}, \mathsf {PKE}' ; D} \) the advantage of \(\mathcal {A} \) in the experiment where the leakage queries are answered according to the distribution D. By assumption, we know that \(\mathsf{Adv}_{\mathcal {A},\mathsf {PKE}' ; D_{1}} \ge {\epsilon }(\kappa )\), since by definition the leakage queries in the real game are answered according to \(D_{1}\). By the above lemma, we know that \(|\mathsf{Adv}_{\mathcal {A}, \mathsf {PKE}'; D_{1}} -\mathsf{Adv}_{\mathcal {A},\mathsf {PKE}'; D_{2}} | \le \mathsf{negl}(\kappa )\); otherwise, \(D_{1}\) and \(D_{2}\) would be distinguishable. Thus, we know \(\mathsf{Adv}_{\mathcal {A},\mathsf {PKE}'; D_{2}}\ge {\epsilon }(\kappa ) - \mathsf{negl}(\kappa )\). It is not hard to see that \(\mathsf{Adv}_{\mathcal {B}, \mathsf {PKE}} = \mathsf{Adv}_{\mathcal {A},\mathsf {PKE}'; D_{2}}\), since \(\mathcal {B} \) answers \(\mathcal {A} \)’s leakage queries exactly according to the distribution \(D_{2}\). Thus, \(\mathsf{Adv}_{\mathcal {B}, \mathsf {PKE}} \ge {\epsilon }(\kappa ) - \mathsf{negl}(\kappa )\), which is a contradiction. This completes the proof of the theorem. \(\square \)

3.3 Instantiations via Obfuscation

In this section, we show how to build an explainable key update transformation from program obfuscation. There are two variants of our construction: one from the weaker notion of indistinguishability obfuscation (iO) [5, 29] and one from the stronger notion of public-coin differing-inputs obfuscation (public-coin diO) [37]. Since our best parameters are achieved using public-coin diO, we present the public-coin diO variant of our construction/proof and indicate the points in the construction/proof where the iO variant differs.

Let \(\mathsf {PKE}= (\mathsf {Gen}, \mathsf {Enc},\mathsf {Dec},\mathsf {Update})\) be a public key encryption scheme (or a signature scheme with algorithms \(\mathsf {Verify},\mathsf {Sign} \)) with key update, and let \({\mathsf {diO}} \) (resp. \({\mathsf {iO}} \)) be a public-coin differing-inputs obfuscator (resp. indistinguishability obfuscator) for a class of circuits defined later. Let \(\kappa \) be a security parameter. Let \(L_{\mathsf{sk}}\) be the length of secret keys in \(\mathsf {PKE}\) and \(L_{r}\) be the length of randomness used by \(\mathsf {Update} \). For ease of notation, we suppress the dependence of these lengths on \(\kappa \). We note that in the 2CLR case, we may assume without loss of generality that \(L_{r}\ll L_{\mathsf{sk}}\), because we can always use pseudorandom coins (e.g., the output of a PRG) to perform the update. Since only two consecutive keys are leaked (not the randomness, e.g., the seed of the PRG), the update with pseudorandom coins remains secure, assuming the PRG is secure.
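As a toy illustration of this PRG-based coin shortening (with `shake_256` standing in for the PRG; the names `expand_seed`, `KAPPA`, and `L_R` and all lengths are our own illustrative choices, not the paper's), only the short seed needs to be stored:

```python
import hashlib

KAPPA = 16   # toy seed length in bytes, standing in for the kappa-bit PRG seed
L_R = 64     # toy length of the update coins L_r (may be much larger than the seed)

def expand_seed(seed: bytes, out_len: int = L_R) -> bytes:
    # PRG stand-in: deterministically expand a short stored seed into L_r coins
    assert len(seed) == KAPPA
    return hashlib.shake_256(seed).digest(out_len)
```

Since the expansion is deterministic, the update can be rerun from the seed alone, and the long coins never need to be part of the secret state.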

Let \({\mathcal {H}} \) be a family of public-coin collision-resistant hash functions that is also a family of \((2\kappa ,{\epsilon })\)-good extractors,Footnote 9 mapping \(2L_\mathsf{sk}+ 2\kappa \) bits to \(\kappa \) bits. Let \(F_1\) and \(F_2\) be families of puncturable pseudorandom functions, where \(F_1\) has input length \(2 L_\mathsf{sk}+ 3\kappa \) bits and output length \(L_r\) bits, and is also an (\(L_{r} + \kappa ,{\epsilon }\))-good unseeded extractor; \(F_2\) has input length \( \kappa \) bits and output length \(L_\mathsf{sk}+ 2\kappa \) bits. Below, \(|u_{1}|=\kappa \), \(|u_{2}| = L_{\mathsf{sk}}+2\kappa \), and \(|r'| = 2\kappa \).

Define the algorithm \(\mathsf{TransformGen}\), which, on input the security parameter \(1^{\kappa }\), a public key \(\mathsf {pk}\), and a circuit implementing \(\mathsf {PKE}.\mathsf {Update} (\cdot )\), proceeds as follows:

  • \(\mathsf{TransformGen}\) samples \(K_{1},K_{2}\) as keys for the puncturable PRFs above, and \(h \leftarrow {\mathcal {H}} \). Let \(P_{1}\) be the program in Fig. 1 and \(P_{2}\) the program in Fig. 2.

  • Then it samples \({\mathcal {P}_\mathsf{update}}\leftarrow {\mathsf {diO}} (P_{1})\), and \({\mathcal {P}_\mathsf{explain}}\leftarrow {\mathsf {diO}} (P_{2})\). It outputs \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\).

Fig. 1: Program update

Fig. 2: Program explain
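To make the interplay of the two programs concrete, here is a toy Python sketch of the logic of Figs. 1 and 2. HMAC-SHA256 stands in for the puncturable PRFs \(F_1, F_2\), truncated SHA-256 for the hash/extractor h, and `pke_update` is a placeholder for the underlying update algorithm; all names and lengths are illustrative assumptions of ours, and puncturing is omitted entirely.

```python
import hashlib
import hmac
import os

KLEN = 16  # one toy length for keys, coins, and hash outputs (stands in for L_sk, kappa)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def prf(key: bytes, data: bytes, out_len: int) -> bytes:
    # stand-in for the puncturable PRFs F_1, F_2: HMAC-SHA256 in counter mode
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hmac.new(key, data + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:out_len]

def h(data: bytes) -> bytes:
    # stand-in for the collision-resistant, extracting hash h
    return hashlib.sha256(data).digest()[:KLEN]

def pke_update(sk: bytes, coins: bytes) -> bytes:
    # placeholder for PKE.Update: any deterministic map (sk, coins) -> sk'
    return prf(coins, sk, KLEN)

def P_update(K1: bytes, K2: bytes, sk1: bytes, u: bytes) -> bytes:
    # Program update (Fig. 1, simplified): if u decodes, under F_2, to a pair
    # (sk2, r) passing the hash check, replay sk2; otherwise derive fresh
    # pseudorandom coins with F_1 and run the real update
    u1, u2 = u[:KLEN], u[KLEN:]
    dec = xor(prf(K2, u1, 2 * KLEN), u2)
    sk2, r = dec[:KLEN], dec[KLEN:]
    if u1 == h(sk1 + sk2 + r):
        return sk2
    x = prf(K1, sk1 + u, KLEN)
    return pke_update(sk1, x)

def P_explain(K2: bytes, sk1: bytes, sk2: bytes, r: bytes) -> bytes:
    # Program explain (Fig. 2, simplified): fake randomness e = (e1, e2)
    # such that P_update(sk1, e) outputs sk2
    e1 = h(sk1 + sk2 + r)
    e2 = xor(prf(K2, e1, 2 * KLEN), sk2 + r)
    return e1 + e2

K1, K2, sk1 = os.urandom(KLEN), os.urandom(KLEN), os.urandom(KLEN)
sk2 = P_update(K1, K2, sk1, os.urandom(3 * KLEN))  # honest update with random u
e = P_explain(K2, sk1, sk2, os.urandom(KLEN))      # "explain" that update
assert P_update(K1, K2, sk1, e) == sk2             # replaying e reproduces sk2
```

The last three lines show the intended behavior: honest random coins almost never pass the hash check (so the real update runs), whereas the explained coins e decode to exactly the pair that passes it.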

The \({\mathsf {iO}} \) variant. Essentially, the difference between the construction based on iO and the one based on public-coin diO is that the hash function \({\mathcal {H}} \) is replaced with an injective puncturable PRF, \(F_3: {\{0,1\}}^\kappa \times {\{0,1\}}^{2L_\mathsf{sk}+ \kappa } \rightarrow {\{0,1\}}^{4L_\mathsf{sk}+ 3\kappa }\), which can be constructed from one-way functions (see [47]). The \({\mathsf {iO}} \)-based construction is a simplified version of the deniable encryption scheme of [47], in that our construction does not use a PRG in the Explain program. The security proof relies directly on the puncturing technique applied to the key \( K_{3}\), with a condition check. We elaborate on the details below.

  • Instead of sampling a hash function \(h \leftarrow {\mathcal {H}} \), \(\mathsf{TransformGen}\) samples an additional PRF key \(K_3 \leftarrow F_3.\mathsf{Gen}(1^\kappa )\).

  • We modify program \(P_{1}\) in Fig. 1 by embedding an additional key, \(K_3\), and checking whether \(u_1 = F_3(K_3,\mathsf{sk}_1, \mathsf{sk}_2, r')\).

  • We modify program \(P_{2}\) in Fig. 2, by embedding an additional key, \(K_3\), and setting \(u_1 = F_3(K_3,\mathsf{sk}_1, \mathsf{sk}_2, r)\).

  • All input/output lengths of the programs and pseudorandom functions are adjusted to be consistent with the fact that \(u_1\) now has length \(4L_\mathsf{sk}+ 3\kappa \) (whereas previously it had length \(\kappa \)).

We now establish the following theorem.

Theorem 5

Let \(\mathsf {PKE}\) be any public key encryption scheme with key update. Assume \({\mathsf {diO}} \) (resp. \({\mathsf {iO}} \)) is a secure public-coin differing-inputs obfuscator (resp. indistinguishability obfuscator) for the circuits required by the construction, \(F_{1},F_{2}\) are puncturable pseudorandom functions as above, and \({\mathcal {H}} \) is a family of public-coin collision-resistant hash functions as above. Then the transformation \(\mathsf{TransformGen}\) (resp. \(\mathsf{TransformGen}'\)) defined above is a secure explainable update transformation for \(\mathsf {PKE}\) as defined in Definition 10, which takes randomness \(u = (u_1, u_2)\) of length \(L_1 + L_2\), where \(L_1 := \kappa , L_2 := L_{\mathsf{sk}} + 2 \kappa \) (resp. \(L_1 := 4L_{\mathsf{sk}} + 3\kappa , L_2 := L_{\mathsf{sk}} + 2\kappa \)).

Looking at the big picture, recall that the entire secret state required for continually updating the secret key consists of the current secret key, \(\mathsf{sk}\), and randomness, \(u = (u_1, u_2)\), which together have total length \(L_{\mathsf{sk}} + L_1 + L_2\). Thus, when plugging in our public-coin diO-based construction, we ultimately achieve leakage rate \(\frac{\mu }{2 L_{\mathsf{sk}} + 3 \kappa } = \frac{\mu }{2L_{\mathsf{sk}}} - o(1)\), where \(\mu \) is the leakage rate of the underlying 2CLR public key encryption scheme. On the other hand, when plugging in our iO-based construction, we achieve leakage rate \(\frac{\mu }{6 L_{\mathsf{sk}} + 5 \kappa } = \frac{\mu }{6L_{\mathsf{sk}}} - o(1)\).
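As a quick sanity check of this bookkeeping, the total state length \(L_{\mathsf{sk}} + L_1 + L_2\) implied by Theorem 5 can be tallied mechanically (a toy script of ours; the helper names and sample values are illustrative):

```python
def total_state_dio(L_sk, kappa):
    # L_sk + L_1 + L_2 with L_1 = kappa, L_2 = L_sk + 2*kappa
    return L_sk + kappa + (L_sk + 2 * kappa)

def total_state_io(L_sk, kappa):
    # L_sk + L_1 + L_2 with L_1 = 4*L_sk + 3*kappa, L_2 = L_sk + 2*kappa
    return L_sk + (4 * L_sk + 3 * kappa) + (L_sk + 2 * kappa)

for L_sk in (100, 1000):
    for kappa in (10, 128):
        assert total_state_dio(L_sk, kappa) == 2 * L_sk + 3 * kappa
        assert total_state_io(L_sk, kappa) == 6 * L_sk + 5 * kappa
```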

Proof

Recall that to show that \(\mathsf{TransformGen}\) satisfies property (a) of Definition 10, we need to demonstrate that for any polynomial \(\rho (\cdot )\), any \(\mathsf {pk}\), all \(\mathsf{sk}\in \varPi _{\mathsf {pk}}\), and any \({\mathcal {P}_\mathsf{update}}\in \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update},\mathsf {pk})\), the following two distributions are statistically close: \(\{{\mathcal {P}_\mathsf{update}}(\mathsf{sk})\} \approx \{\mathsf {PKE}.\mathsf {Update} (\mathsf{sk})\}\). Inspecting program \({\mathcal {P}_\mathsf{update}}(\mathsf{sk})\), this follows in a straightforward manner from the following two facts: (1) when u is chosen uniformly at random, the probability that \(F_2(K_2, u_1) \oplus u_2 = ( \mathsf{sk}_2, r')\) with \(u_1 = h(\mathsf{sk}_1, \mathsf{sk}_2, r')\) is negligible, and (2) when u is chosen uniformly at random, \(x = F_1(K_1, (\mathsf{sk}_1, u))\) is statistically close to uniform. For the analysis showing that (1) holds, see the analysis of Hybrid 1; (2) holds since \(F_1\) is an (\(L_{r} + \kappa ,{\epsilon }\))-good unseeded extractor.

Recall that to show that \(\mathsf{TransformGen}\) satisfies property (b) of Definition 10, we need to demonstrate that for any public key \(\mathsf {pk}^*\) and secret key \(\mathsf{sk}^* \in \varPi _{\mathsf {pk}}\), the following two distributions are computationally indistinguishable:

$$\begin{aligned}\{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}^*,\mathsf{sk}^*,u^*)\} \approx \{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}^*,\mathsf{sk}^*,e^*)\},\end{aligned}$$

where these values are generated by

  1.

    \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update}, \mathsf {pk}^*)\),

  2.

    \(u^* = (u_1^*, u_2^*) \leftarrow {\{0,1\}}^{L_{\mathsf{sk}}+3\kappa }\) (in the \({\mathsf {iO}} \) variant, \(u^* = (u_1^*, u_2^*) \leftarrow {\{0,1\}}^{5L_{\mathsf{sk}}+5\kappa }\)),

  3.

    Set \(x^* = F_1(K_1, \mathsf{sk}^*||u^*)\) and \(\mathsf{sk}' = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}^*;u^*)\). Then choose uniformly random \(r^*\) of length \(2\kappa \), and set \(e_1^* = h(\mathsf{sk}^*, \mathsf{sk}', r^*)\) (in the \({\mathsf {iO}} \) variant, \(e_1^* = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r^*)\)) and \(e_2^* = F_2(K_2, e_1^*)\oplus (\mathsf{sk}', r^*)\).

We prove this through the following sequence of hybrid steps.

Hybrid 1: In this hybrid step, we change Step 3 of the above challenge. Instead of computing \(\mathsf{sk}' = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}^*;u^*)\), we compute \(\mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*)\):

  1.

    \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },\mathsf {PKE}.\mathsf {Update}, \mathsf {pk}^*)\),

  2.

    \(u^* = (u_1^*, u_2^*) \leftarrow {\{0,1\}}^{L_{\mathsf{sk}}+3\kappa }\) (in the \({\mathsf {iO}} \) variant, \(u^* = (u_1^*, u_2^*) \leftarrow {\{0,1\}}^{5L_{\mathsf{sk}}+5\kappa }\)),

  3.

    Set \(x^* = F_1(K_1, \mathsf{sk}^*||u^*)\), \(\underline{\mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*)}\), and choose uniformly random \(r^*\) of length \(2\kappa \). Then set \(e_1^* = h(\mathsf{sk}^*, \mathsf{sk}', r^*)\) (in the \({\mathsf {iO}} \) variant, \(e_1^* = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r^*)\)) and \(e_2^* = F_2(K_2, e_1^*)\oplus (\mathsf{sk}', r^*)\).

Note that the only case in which this changes the experiment is when the values \((u_1^*, u_2^*) \leftarrow {\{0,1\}}^{L_{\mathsf{sk}}+3\kappa }\) happen to satisfy \(F_2(K_2, u_1^*)\oplus u_2^* = (\mathsf{sk}', r')\) with \(u_1^* = h(\mathsf{sk}^*, \mathsf{sk}', r')\). For any fixed \(u_1^*, \mathsf{sk}^{*},\mathsf{sk}'\) and a random \(u_{2}^{*}\), the marginal distribution of \(r'\) is still uniform given \(u_1^*, \mathsf{sk}^{*},\mathsf{sk}'\). Therefore, we have \(\Pr _{u_{2}^{*}}[h(\mathsf{sk}^*, \mathsf{sk}', r') = u_1^*] = \Pr _{r'}[h(\mathsf{sk}^*, \mathsf{sk}', r') = u_1^*] < 2^{-\kappa } + \epsilon \). This is because h is a \((2\kappa ,\epsilon )\)-extractor, so the output of h is \({\epsilon }\)-close to uniform over \({\{0,1\}}^{\kappa }\), and a uniform distribution hits a particular string with probability \(2^{-\kappa }\). Since we set \({\epsilon }\) to be negligible, the two distributions differ only by a negligible quantity.
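The bound \(2^{-\kappa } + \epsilon \) can be illustrated with a toy experiment: model h as a uniformly random function with a short output and count how often a fixed target \(u_1^*\) is hit. This is only a sanity check with made-up parameters of ours, not part of the proof:

```python
import random

random.seed(0)          # deterministic toy experiment
K_BITS = 8              # toy output length, standing in for kappa
DOMAIN = 1 << 16        # toy space for the randomness r'

# model h as a uniformly random function r' -> {0,1}^K_BITS
h_table = [random.randrange(1 << K_BITS) for _ in range(DOMAIN)]

u1_target = random.randrange(1 << K_BITS)   # a fixed target value u_1^*
hits = sum(1 for v in h_table if v == u1_target)
fraction = hits / DOMAIN

# for a (near-)uniform output, the hit probability is about 2^-K_BITS
assert fraction < 2 ** -(K_BITS - 2)
```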

The \({\mathsf {iO}} \) variant. We make the same modification as in the public-coin \({\mathsf {diO}} \) case and set \(\mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*)\). However, the analysis of the hybrid changes, as we now describe. In the \({\mathsf {iO}} \) setting, the only case in which the above modification changes the output of the experiment is when the values \((u_1^*, u_2^*) \leftarrow {\{0,1\}}^{5L_{\mathsf{sk}}+5\kappa }\) happen to satisfy \(F_2(K_2, u_1^*)\oplus u_2^* = (\mathsf{sk}', r')\) with \(u_1^* = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r')\). The only way the above can be satisfied is if \(u_1^*\) lies in the range of \(F_3(K_3, \cdot )\). Note that the range of \(F_3(K_3, \cdot )\) has size at most \(2^{2L_\mathsf{sk}+ \kappa }\) (the size of its domain), while \(u_1^*\) is chosen independently and uniformly at random from \({\{0,1\}}^{4L_\mathsf{sk}+ 3\kappa }\). This means that the probability that \(u_1^*\) is in the range of \(F_3(K_3, \cdot )\) is at most \(\frac{2^{2L_\mathsf{sk}+ \kappa }}{2^{4L_\mathsf{sk}+ 3\kappa }} = 2^{-2L_\mathsf{sk}-2\kappa }\), which is negligible.
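This counting argument can be illustrated numerically with a hypothetical injection from 8 bits into 20 bits; the function f below is our own toy example (injective because x is recoverable from the low bits), not the paper's \(F_3\):

```python
DOM_BITS, COD_BITS = 8, 20   # toy stand-ins for 2*L_sk + kappa and 4*L_sk + 3*kappa

def f(x):
    # an arbitrary injection: the low DOM_BITS bits store x itself,
    # so distinct inputs always map to distinct outputs
    scramble = (x * 2654435761) & ((1 << (COD_BITS - DOM_BITS)) - 1)
    return (scramble << DOM_BITS) | x

rng = {f(x) for x in range(1 << DOM_BITS)}
assert len(rng) == 1 << DOM_BITS     # injectivity: range size equals domain size

# a uniform codomain element lands in the range with probability 2^(dom - cod)
prob = len(rng) / (1 << COD_BITS)
assert prob == 2 ** (DOM_BITS - COD_BITS)
```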

Hybrid 2: In this hybrid step, we modify the program in Fig. 1, puncturing key \(\underline{K_1}\) at the points \(\underline{\{\mathsf{sk}^* || u^* \}}\) and \(\underline{\{ \mathsf{sk}^* || e^* \}}\), and adding a line of code at the beginning of the program to ensure that the PRF is never evaluated at these two points. See Fig. 3. We claim that with overwhelming probability over the choice of \(u^*\), this modified program has identical input/output behavior to the program used in Hybrid 1 (Fig. 1). Note that on input \((\mathsf{sk}^*, e^*)\) the output of the original program was already \(\mathsf{sk}'\) as defined in Hybrid 1, so the outputs of the two programs are identical on this input. (This follows because \(e^*\) already encodes \(\mathsf{sk}'\), so when the “If”-statement is triggered in the program of Fig. 1, the output is \(\mathsf{sk}'\).) As long as \(u_1^*\) and \(u_2^*\) do not have the property that \(F_2(K_2, u_1^*)\oplus u_2^* = (\mathsf{sk}', r')\) with \(u_1^* = h(\mathsf{sk}^*, \mathsf{sk}', r')\), the programs have identical output on input \((\mathsf{sk}^*, u^*)\) as well. (This follows because \(\mathsf{sk}'\) is defined as \(\mathsf{sk}' = {\mathcal {P}_\mathsf{update}}(\mathsf{sk}^*;F_1(K_1, \mathsf{sk}^*||u^*))\) in the challenge game, which is also the output of the program in Fig. 1 when \(u_1^*\) and \(u_2^*\) fail this condition.) As we argued in Hybrid 1, with very high probability, \(u^*\) does not have this property. (We stress that \(u^*\) is fixed before we construct the obfuscated program described in Fig. 3, so with overwhelming probability over the choice of \(u^*\), the two programs have identical input/output behavior.) Indistinguishability of Hybrids 1 and 2 then follows from the security of the obfuscation. Note that this hybrid requires only the weaker notion of indistinguishability obfuscation.

The \({\mathsf {iO}} \) variant. The modification and security argument are identical, with the exception that we require that \(u_1^*\) and \(u_2^*\) do not have the property that \(F_2(K_2, u_1^*)\oplus u_2^* = (\mathsf{sk}', r')\) with \(u_1^* = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r')\). We invoke the argument from the previous hybrid to show that this holds with overwhelming probability over the choice of \(u^*\).

Fig. 3: Program update, as used in Hybrid 2

Hybrid 3: In this hybrid, we change the challenge game to use truly random \(\underline{x^*}\) when computing \(\underline{\mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*)}\) (instead of \(x^* = F_1(K_1, \mathsf{sk}^*|| u^*)\)). Security holds by a reduction to the pseudorandomness of \(F_1\) at the punctured point \((\mathsf{sk}^*, u^*)\). More specifically, given an adversary \(\mathcal {A}\) that distinguishes Hybrid 2 from Hybrid 3 on values \(\mathsf {pk}^*, \mathsf{sk}^*\), we describe a reduction \(\mathcal {B}\) that attacks the security of the puncturable PRF \(F_1\). \(\mathcal {B}\) generates \(u^*\) at random and submits \((\mathsf{sk}^*, u^*)\) to his challenger. He receives \({\widetilde{K}}_1 = \mathsf{PRF}.\mathsf{Punct}(K_{1}, \{\mathsf{sk}^*|| u^*\})\) and a value \(x^*\) as a challenge. \(\mathcal {B}\) computes \(\mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*)\), chooses \(r^*\) at random, and computes \(e^*\) as in the original challenge game. He creates \({\mathcal {P}_\mathsf{update}}\) using \({\widetilde{K}}_1\), sampling \(K_2\) honestly; the same \(K_2\) is used for creating \({\mathcal {P}_\mathsf{explain}}\). \(\mathcal {B}\) obfuscates both circuits, which completes the simulation of \(\mathcal {A}\)’s view.
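The punctured key \({\widetilde{K}}_1\) can be instantiated with the standard GGM tree construction. The following is our own minimal Python sketch (SHA-256 as the length-doubling PRG, inputs given as bit lists), illustrating that a key punctured at x evaluates correctly everywhere except at x; it is a didactic aid, not the scheme's actual PRF:

```python
import hashlib

def _g(seed: bytes, bit: int) -> bytes:
    # length-doubling PRG, split into left/right halves via domain separation
    return hashlib.sha256(seed + bytes([bit])).digest()

def ggm_eval(key: bytes, x_bits):
    # plain GGM evaluation: walk the tree according to the input bits
    s = key
    for b in x_bits:
        s = _g(s, b)
    return s

def ggm_puncture(key: bytes, x_bits):
    # punctured key: the seeds of all siblings along the path to x
    copath = []
    s = key
    for b in x_bits:
        copath.append(_g(s, 1 - b))
        s = _g(s, b)
    return copath

def ggm_punctured_eval(copath, x_bits, y_bits):
    # evaluate at any y != x using the first sibling seed off the path to x
    assert y_bits != x_bits, "cannot evaluate at the punctured point"
    i = next(j for j in range(len(x_bits)) if x_bits[j] != y_bits[j])
    s = copath[i]
    for b in y_bits[i + 1:]:
        s = _g(s, b)
    return s
```

The punctured key reveals nothing about the value at x itself, which is exactly the property the reduction \(\mathcal {B}\) exploits.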

The \({\mathsf {iO}} \) variant. The modification and security argument are identical for the \({\mathsf {iO}} \) setting.

Hybrid 4: In this hybrid, we puncture \(\underline{K_2}\) at both \(\underline{u_1^*}\) and \(\underline{e_1^*}\), and modify the Update program to output appropriate hardcoded values on these inputs (see Fig. 4). To prove that Hybrids 3 and 4 are indistinguishable, we rely on the security of the public-coin differing-inputs obfuscator and of the public-coin collision-resistant hash function. In particular, we show that if the hybrids are distinguishable, then we can break the security of the collision-resistant hash function.

Fig. 4: Program update, as used in Hybrid 4

Consider the following sampler \(\mathsf {Samp}(1^{\kappa })\): it outputs \(C_{0},C_{1}\), the two update programs from Hybrids 3 and 4, respectively, together with an auxiliary input \(\mathsf {aux}= (\mathsf {pk}^{*},\mathsf{sk}^{*},\mathsf{sk}',u^{*},e^{*},K_{2},h,r^{*})\) sampled as in both hybrids. Note that \(\mathsf {aux}\) includes all the random coins of the sampler. Suppose there exists a distinguisher \(\mathcal {D}\) for the two hybrids; then there exists a distinguisher \(\mathcal {D}'\) that distinguishes \(({\mathsf {diO}} (C_{0}),\mathsf {aux})\) from \(({\mathsf {diO}} (C_{1}),\mathsf {aux})\). This is because, given the challenge input, \(\mathcal {D}'\) can complete the rest of the experiment according to either Hybrid 3 or Hybrid 4. Then, by the security of the \({\mathsf {diO}} \), there exists an adversary (extractor) \(\mathcal {B}\) that, given \((C_{0},C_{1},\mathsf {aux})\), finds an input on which \(C_{0}\) and \(C_{1}\) evaluate differently. However, this contradicts the security of the public-coin collision-resistant hash function, which we establish in the following lemma.

Lemma 4

Assume h is sampled from a family of public-coin collision-resistant hash functions that is \((2\kappa ,{\epsilon })\)-extracting, as above. Then for any \(\textsc {ppt}\) adversary, the probability of finding a differing input given \((C_{0},C_{1},\mathsf {aux})\) as above is negligible.

Proof

By examining the two circuits, we observe that any differing input must have one of the following two forms: \((\bar{\mathsf{sk}}, u_{1}^{*},{\bar{u}}_{2})\) such that \(u_{1}^{*} = h(\bar{\mathsf{sk}}, F_{2}(K_{2};u_{1}^{*}) \oplus {\bar{u}}_{2})\) and \(({\bar{\mathsf{sk}}},{\bar{u}}_{2}) \ne (\mathsf{sk}^{*},u_{2}^{*})\); or \((\bar{\mathsf{sk}}, e_{1}^{*},{\bar{e}}_{2})\) such that \(e_{1}^{*} = h(\bar{\mathsf{sk}}, F_{2}(K_{2};e_{1}^{*}) \oplus {\bar{e}}_{2})\) and \(({\bar{\mathsf{sk}}}, {\bar{e}}_{2} )\ne (\mathsf{sk}^{*},e_{2}^{*})\). This is because such inputs enter the first Else If branch in Hybrid 3 (Fig. 3), but the modified line (the first Else If) in Hybrid 4 (Fig. 4). We argue that both cases happen with negligible probability; otherwise, the security of the hash function can be broken.

For the first case, we observe that collision resistance together with the \((2\kappa ,{\epsilon })\)-extracting property guarantees that the probability of finding a pre-image of a random value \(u_{1}^{*}\) is small, even given \(\mathsf {aux}\); otherwise, there is an adversary who can break collision resistance. For the second case, we know that \(e_{1}^{*}= h(\mathsf{sk}^{*},\mathsf{sk}',r^{*})=h(\bar{\mathsf{sk}}, F_{2}(K_{2};e_{1}^{*}) \oplus {\bar{e}}_{2}) = h(\bar{\mathsf{sk}}, e_2^{*} \oplus (\mathsf{sk}',r^{*})\oplus {\bar{e}}_{2})\). Since \(({\bar{\mathsf{sk}}}, {\bar{e}}_{2} )\ne (\mathsf{sk}^{*},e_{2}^{*})\), this yields a collision, which again remains hard to find even given \(\mathsf {aux}\).

Thus, if there exists a differing-inputs finder \(\mathcal {A}\), we can define an adversary \(\mathcal {B}\) that breaks the collision-resistant hash function: on input h, \(\mathcal {B}\) simulates the sampler \(\mathsf {Samp}\) with this h, and then runs \(\mathcal {A}\) to find a differing input. By the above argument, either of the two cases leads to finding a collision. \(\square \)

The \({\mathsf {iO}} \) variant. The hybrid proceeds identically to the \({\mathsf {diO}} \) variant. To prove that Hybrids 3 and 4 are indistinguishable, we rely on the security of the indistinguishability obfuscator. In particular, we show that the functionality of the update program is identical in the two hybrids. By examining the two circuits, we must show that if the event \((u_1 = u_1^* \vee u_1 = e_1^*) \wedge F_2(K_2; u_1) \oplus u_2 = (\mathsf{sk}_2, r') \text{ such } \text{ that } u_1 = F_3(K_3,\mathsf{sk}_1, \mathsf{sk}_2, r')\) occurs, then \(\mathsf{sk}_2 = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}_1; x)\), where \(x = F_1(\widetilde{K}_1, \mathsf{sk}_1||u)\). Indeed, this is the only case in which we enter the first Else If branch in Hybrid 3 (Fig. 3), but the modified line (the first Else If) in Hybrid 4 (Fig. 4). To see that the above holds, we consider two cases: \(u_1 = u_1^*\) and \(u_1 = e_1^*\). First, if \(u_1 = u^*_1\) then, as argued above, \(u_1\) is not in the range of \(F_3(K_3, \cdot )\) (with overwhelming probability over the choice of \(u^*_1\)), and so the event cannot occur. Second, note that if \(u_1 = e_1^*\) then \(u_1 = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r^*)\). Since \(F_3(K_3, \cdot )\) is injective, if the above event occurs, it means that \(\mathsf{sk}_1 = \mathsf{sk}^*\) and \(u_2 = (\mathsf{sk}', r^*) \oplus F_2(K_2; u_1) = e_2^*\). This in turn means that \(x = F_1(\widetilde{K}_1, \mathsf{sk}^*||e^*) = x^*\). Therefore, by the definition of \(\mathsf{sk}'\), we have that \(\mathsf{sk}_2 = \mathsf{sk}' = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}^*; x^*) = \mathsf {PKE}.\mathsf {Update} (\mathsf {pk}^*, \mathsf{sk}_1; x)\), as desired.

Hybrid 5: In this hybrid, we puncture \(\underline{K_2}\) at both \(\underline{u_1^*}\) and \(\underline{e_1^*}\), and modify the Explain program to output appropriate hardcoded values on these inputs (see Fig. 5). Similar to the argument for the previous hybrids, we argue that Hybrids 4 and 5 are indistinguishable by the security of the public-coin differing-inputs obfuscator and of the public-coin collision-resistant hash function. Consider a sampler \(\mathsf {Samp}(1^{\kappa })\): it outputs \(C_{0},C_{1}\), the two explain programs from Hybrids 4 and 5, respectively, together with an auxiliary input \(\mathsf {aux}= (\mathsf {pk}^{*},\mathsf{sk}^{*},\mathsf{sk}',u^{*},e^{*},K_{2},h,r^{*})\) sampled as in both hybrids (note that \(\mathsf {aux}\) includes all the random coins of the sampler). As in the argument above, if there exists a distinguisher \(\mathcal {D}\) that distinguishes Hybrids 4 and 5, then we can construct a distinguisher \(\mathcal {D}'\) that distinguishes \(({\mathsf {diO}} (C_{0}),\mathsf {aux})\) from \(({\mathsf {diO}} (C_{1}),\mathsf {aux})\). This is because, given the challenge input, \(\mathcal {D}'\) can simulate the hybrids. Then, by the security of the \({\mathsf {diO}} \), there exists an adversary (extractor) \(\mathcal {B}\) that can find a differing input. We now argue that if h comes from a public-coin collision-resistant hash family, then no \(\textsc {ppt}\) adversary can find a differing input; this yields a contradiction.

Lemma 5

Assume h is sampled from a family of public-coin collision-resistant hash functions that is \((2\kappa ,{\epsilon })\)-extracting, as above. Then for any \(\textsc {ppt}\) adversary, the probability of finding a differing input given \((C_{0},C_{1},\mathsf {aux})\) as above is negligible.

Proof

The proof is almost identical to that of Lemma 4. We omit the details. \(\square \)

The \({\mathsf {iO}} \) variant. Program explain is modified with the following lines: If \(\underline{u_{1}^{*} = F_3(K_3, \mathsf{sk}_1, \mathsf{sk}_2, r)}\), output \(\underline{u^{*}}\); Else If \(\underline{e_{1}^{*} = F_3(K_3,\mathsf{sk}_1, \mathsf{sk}_2, r)}\), output \(\underline{e^{*}}\). The proof is almost identical to that of the previous hybrid.

Fig. 5: Program explain, as used in Hybrid 5

Hybrid 6: In this hybrid, we change both \(\underline{e_{1}^{*}}\) and \(\underline{e_{2}^{*}}\) to uniformly random values. Hybrids 5 and 6 are indistinguishable by the security of the puncturable PRF \(F_{2}\) and by the fact that h is \((2\kappa ,{\epsilon })\)-extracting. Clearly, in this hybrid, the distributions of \(\{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}^*,\mathsf{sk}^*,u^*)\}\) and \(\{({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}},\mathsf {pk}^*,\mathsf{sk}^*,e^*)\}\) are identical. Since the original game and Hybrid 6 are indistinguishable, the two distributions in the original game are computationally indistinguishable as well. This concludes the proof.

The \({\mathsf {iO}} \) variant. We must first puncture \(\underline{K_3}\) at \(\underline{(\mathsf{sk}^*, \mathsf{sk}', r^*)}\), and modify both Update and Explain so that whenever we would check whether \(u_1 = F_3(K_3, \mathsf{sk}^*, \mathsf{sk}', r^*)\), we instead check whether \(u_1 = e_1^*\). Once we have done this, we can proceed as in the \({\mathsf {diO}} \) variant and switch both \(\underline{e_{1}^{*}}\) and \(\underline{e_{2}^{*}}\) to uniformly random values. \(\square \)

3.4 Extension to Digital Signatures

We have already demonstrated a compiler that upgrades any 2CLR public key encryption scheme into one that is secure against leakage on update. The same result can be translated to digital signature schemes.

Continual Consecutive Leakage Resilience for Digital Signatures. Following the presentation in Subsect. 3.1, we can modify the security game and define continual consecutive leakage resilience for digital signature schemes. We say a digital signature scheme is \(\mu \)-leakage resilient against consecutive continual leakage (or \(\mu \)-2CLR) if any probabilistic polynomial-time attacker has only a negligible advantage (negligible in \(\kappa \)) in the modified game.

Explainable Key-Update Transformation for Digital Signature Schemes. Following the presentation in Subsect. 3.2, we can define an explainable key update transformation for digital signature schemes. We start with any digital signature scheme \({\mathsf {SIG}} = {\mathsf {SIG}}.\{\mathsf {Gen}, \mathsf {Sign},\mathsf {Verify},\mathsf {Update} \}\) that has a key update procedure. Then, following Definition 10, we introduce a transformation \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },{\mathsf {SIG}}.\mathsf {Update}, \mathsf {pk})\). This transformation allows us to define a scheme \({\mathsf {SIG}} ' = {\mathsf {SIG}} '.\{\mathsf {Gen}, \mathsf {Sign},\mathsf {Verify},\mathsf {Update} \}\) with an explainable key update procedure.

  • \({\mathsf {SIG}} '.\mathsf {Gen} (1^{\kappa })\): compute \((\mathsf {pk},\mathsf{sk}) \leftarrow {\mathsf {SIG}}.\mathsf {Gen} (1^{\kappa })\). Then compute \(({\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\leftarrow \mathsf{TransformGen}(1^{\kappa },{\mathsf {SIG}}.\mathsf {Update}, \mathsf {pk})\). Finally, output \(\mathsf {pk}' = (\mathsf {pk}, {\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\) and \(\mathsf{sk}' = \mathsf{sk}\).

  • \({\mathsf {SIG}} '.\mathsf {Sign} (\mathsf{sk}',m)\): output \(\sigma = {\mathsf {SIG}}.\mathsf {Sign} (\mathsf{sk}', m)\).

  • \({\mathsf {SIG}} '.\mathsf {Verify} (\mathsf {pk}',m,\sigma )\): parse \(\mathsf {pk}' = (\mathsf {pk}, {\mathcal {P}_\mathsf{update}},{\mathcal {P}_\mathsf{explain}})\). Then output \(b \leftarrow {\mathsf {SIG}}.\mathsf {Verify} (\mathsf {pk}, m,\sigma )\).

  • \({\mathsf {SIG}} '.\mathsf {Update} (\mathsf{sk}')\): output \(\mathsf{sk}'' \leftarrow {\mathcal {P}_\mathsf{update}}(\mathsf{sk}')\).

Similarly, we can prove an analogous theorem for the upgraded scheme \({\mathsf {SIG}} '\).

Theorem 6

Let \({\mathsf {SIG}} = {\mathsf {SIG}}.\{\mathsf {Gen}, \mathsf {Sign},\mathsf {Verify},\mathsf {Update} \}\) be a digital signature scheme that is \(\mu \)-2CLR (without leakage on update), and \(\mathsf{TransformGen}\) a secure explainable key update transformation for \({\mathsf {SIG}} \). Then the transformed scheme \({\mathsf {SIG}} ' = {\mathsf {SIG}} '.\{\mathsf {Gen}, \mathsf {Sign},\mathsf {Verify},\mathsf {Update} \}\) described above is \(\mu \)-CLR with leakage on key updates.

The proof of the above theorem follows exactly the proof of Theorem 4, and so we omit it.

Instantiations. Finally, as in Subsect. 3.3, we can instantiate the explainable update transformation via indistinguishability obfuscation or differing-inputs obfuscation, and establish the same theorem for digital signature schemes.

Theorem 7

Let \({\mathsf {SIG}} \) be any digital signature scheme with key update. Assume \({\mathsf {diO}} \) (resp. \({\mathsf {iO}} \)) is a secure public-coin differing-inputs obfuscator (resp. indistinguishability obfuscator) for the circuits required by the construction, \(F_{1},F_{2}\) are puncturable pseudorandom functions as above, and \({\mathcal {H}} \) is a family of public-coin collision-resistant hash functions as above. Then the transformation \(\mathsf{TransformGen}\) (resp. \(\mathsf{TransformGen}'\)) defined above is a secure explainable update transformation for \({\mathsf {SIG}} \) which takes randomness \(u = (u_1, u_2)\) of length \(L_1 + L_2\), where \(L_1 := \kappa , L_2 := L_{\mathsf{sk}} + 2 \kappa \) (resp. \(L_1 := 4L_{\mathsf{sk}} + 3\kappa , L_2 := L_{\mathsf{sk}} + 2\kappa \)).

4 2CLR from “Leakage-Resilient Subspaces”

We show that the PKE scheme of Brakerski et al. [12] (BKKV), which was previously proved CLR, can achieve 2CLR (with a slight adjustment of the scheme’s parameters). We note that our focus on PKE here is justified by the fact that we show generically that any CLR (resp. 2CLR) PKE scheme implies a CLR (resp. 2CLR) “one-way relation” (OWR) [21]; to the best of our knowledge, such an implication was not previously known. Therefore, by the results of Dodis et al. [21], this translates all our results about PKE to the signature setting as well. We also show that the approach of Dodis et al. [21] for constructing CLR OWRs can be extended to 2CLR one-way relations, but we achieve weaker parameters that way.

Recall that in [12], to prove that their scheme is CLR, the authors show that “random subspaces are leakage resilient.” In particular, they show that for a random subspace X, the statistical distance between \(\big (X,f(v) \big )\) and \(\big (X,f(u)\big )\) is negligible, where f is an arbitrary length-bounded function, v is a random point in the subspace, and u is a random point in the whole space. Then, by a simple hybrid argument, they show that \(\big (X,f_{1}(v_{0}), f_{2}(v_{1}),\dots , f_{t}(v_{t-1}) \big )\) and \(\big (X,f_{1}(u_{0}), f_{2}(u_{1}),\dots , f_{t}(u_{t-1})\big )\) are indistinguishable, where \(f_{1},\dots ,f_{t}\) are arbitrary and adaptively chosen length-bounded functions, \(v_{0}, v_{1}, \dots , v_{t-1}\) are independent random points in the subspace, and \(u_{0},u_{1},\dots , u_{t-1}\) are independent random points in the whole space. This lemma plays the central role in their proof.

In order to show that their scheme satisfies the 2CLR security, we consider random subspaces under “consecutive” leakage. That is, we want to show:

$$\begin{aligned}&\big (X,f_{1}(v_{0},v_{1}), f_{2}(v_{1},v_{2}),\dots , f_{t}(v_{t-1},v_{t})\big ) \\&\quad \approx \big (X,f_{1}(u_{0},u_{1}), f_{2}(u_{1},u_{2}),\dots , f_{t}(u_{t-1},u_{t})\big ),\end{aligned}$$

for arbitrary and adaptively chosen \(f_{i}\)’s, i.e., each \(f_{i}\) can be chosen after seeing the previous leakage values \(f_{1},\dots , f_{i-1}\). However, this does not follow by a hybrid argument from \(\big (X,f(v) \big ) \approx \big (X,f(u)\big )\), because in the 2CLR case each point is leaked twice. It is not clear how to embed a challenge instance \((X, f(z))\) into the larger experiment while still being able to simulate the rest.

To handle this technical issue, we establish a new lemma showing that random subspaces are “consecutive” leakage resilient. With this lemma and a hybrid argument, we can show that the above experiments are indistinguishable. We then show how to use this fact to prove that the scheme of BKKV is 2CLR.

Lemma 6

Let \(t, n,\ell , d \in \mathbb {N}\) with \(n\ge \ell \ge 3d\), and let q be a prime. Let \((A,X)\leftarrow \mathbb {Z}_{q}^{t\times n} \times \mathbb {Z}_{q}^{n\times \ell }\) such that \(A\cdot X = 0\), let \(T, T' \leftarrow \mathsf {Rk}_{d}(\mathbb {Z}_{q}^{\ell \times d})\), let \(U \leftarrow \mathbb {Z}_{q}^{n\times d}\) such that \(A\cdot U =0\) (i.e., U is a random matrix in \(\mathsf {Ker}(A)\)), and let \(f: \mathbb {Z}_{q}^{t\times n} \times \mathbb {Z}_{q}^{n\times 2d} \rightarrow W\) be any function.Footnote 10 Then we have:

$$\begin{aligned} \varDelta \left( \big (A, X,f( A, X T, X T' ), X T' \big ), \big ( A, X, f( A, U, X T' ), X T' \big ) \right) \le \epsilon , \end{aligned}$$

as long as \(|W| \le (1-1/q) \cdot q^{\ell -3d +1} \cdot \epsilon ^{2}\).

Proof

We will actually prove something stronger: under the assumptions of Lemma 6,

$$\begin{aligned}&\varDelta \left( \Big (A, X,f(A, X \cdot T, X \cdot T'), X \cdot T', T' \Big ), \Big ( A, X, f(A, U, X \cdot T'), X \cdot T', T' \Big ) \right) \\&\qquad \le \frac{1}{2} \sqrt{\frac{3 |W|}{(1-1/q) q^{\ell - 3d + 1}}} < \epsilon \;. \end{aligned}$$

Note that this implies the lemma by solving for \(\epsilon \), after noting that dropping the last component of each tuple can only decrease the statistical distance.

For the proof, we will apply Lemma 7 as follows. We will take the hash function H to be \(H :\mathbb {Z}_q^{n \times \ell } \times \mathbb {Z}_q^{\ell \times d} \rightarrow \mathbb {Z}_q^{n \times d}\) where \(H_K(D) = K D\) (matrix multiplication), and take the set \(\mathcal{Z}\) to be \(\mathbb {Z}_q^{n \times \ell } \times \mathbb {Z}_q^{\ell \times d}\). Next we take the random variable K to be uniform on \(\mathbb {Z}_q^{n \times \ell }\) (denoted by the matrix X), D to be uniform on \(\mathsf {Rk}_{d}(\mathbb {Z}_{q}^{\ell \times d})\), and finally \(Z = (A, X T', T')\), where A is uniform conditioned on \(AX =0\) and \(T' \leftarrow \mathsf {Rk}_{d}(\mathbb {Z}_{q}^{\ell \times d})\) is independent and uniform. We define \(U_{|Z}\) as the uniform distribution conditioned on \(A U =0\); that is, U is a random matrix in the kernel of A.

It remains to prove under these settings that

$$\begin{aligned}{\Pr \left[ \,{(D,D',Z) \in \mathsf{BAD}}\,\right] } \le \frac{1}{(1-1/q)q^{\ell - 3d + 1}}\end{aligned}$$

with \(\mathsf{BAD}\) defined as in Lemma 7. For this, let us consider

$$\begin{aligned} \varDelta \big ((H_{K|_{Z}}(T_1),H_{K|_{Z}}(T_2)), (U_{|Z}, U'_{|Z}) \big ) \; \end{aligned}$$

where \(Z = (A, X T', T')\) is as defined above. The above statistical distance is zero whenever the columns of \(T_1,T_2,T'\) are jointly linearly independent, which is possible because \(\ell \ge 3d\). Now, by a standard formula, the probability that the columns of \(T_1,T_2,T'\) have a linear dependency is bounded by \(\frac{1}{(1-1/q)q^{\ell - 3d + 1}}\), and we are done. \(\square \)
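The dependency bound can be checked exactly on toy parameters. The following sketch (our illustration; q = 3, \(\ell = 3\), d = 1 are assumptions chosen so that \(\mathsf {Rk}_{d}\) matrices are simply nonzero vectors) enumerates all triples \(T_1, T_2, T'\) and compares the exact dependency probability with the stated bound \(\frac{1}{(1-1/q)q^{\ell - 3d + 1}}\).

```python
from itertools import product
from fractions import Fraction

q, ell, d = 3, 3, 1  # toy parameters with ell = 3d (bound is then 1/2)

def rank(M):
    # Gaussian elimination over Z_q (q prime)
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % q), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], q - 2, q)
        M[r] = [x * inv % q for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % q for a, b in zip(M[i], M[r])]
        r += 1
    return r

# for d = 1, a rank-d ell x d matrix is just a nonzero vector in Z_q^ell
vecs = [v for v in product(range(q), repeat=ell) if any(v)]
# count ordered triples whose columns are linearly dependent
dep = sum(rank([list(row) for row in zip(t1, t2, t3)]) < 3 * d
          for t1 in vecs for t2 in vecs for t3 in vecs)
prob = Fraction(dep, len(vecs) ** 3)
bound = Fraction(1) / ((1 - Fraction(1, q)) * q ** (ell - 3 * d + 1))
print(prob, "<=", bound)
```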

We note that this lemma is slightly different from the original lemma in the work [12]: the leakage function considered here also takes as input a public matrix A, which is used as the public key in the system. We observe that both our work and [12] need this version of the lemma to prove security of the encryption scheme.

We actually prove Lemma 6 as a consequence of a new generalization of the Crooked Leftover Hash Lemma (LHL) [6, 26] we introduce (to handle hash functions that are only pairwise independent if some bad event does not happen), as follows.

Lemma 7

Let \(H :\mathcal{K}\times \mathcal{D}\rightarrow \mathcal{R}\) be a hash function, and let \((K, Z)\) be joint random variables over \(\mathcal{K}\times \mathcal{Z}\) for some set \(\mathcal{Z}\). Define the following set

$$\begin{aligned} \mathsf{BAD}&=\Big \{ \big (d,d',z\big ) \in \mathcal{D}\times \mathcal{D}\times \mathcal{Z}: \varDelta \big ((H_{K|_{Z=z}}(d),H_{K|_{Z=z}}(d')), \nonumber \\&\qquad (U_{|Z=z}, U'_{|Z=z}) \big ) > 0\Big \}, \end{aligned}$$
(1)

where \(U_{|Z=z},U'_{|Z=z}\) denote two independent uniform distributions over \(\mathcal{R}\) conditioned on \(Z=z\), and \(K|_{Z=z}\) is the conditional distribution of K given \(Z=z\). We note that \(\mathcal{R}\) might depend on z, so when we describe a uniform distribution over \(\mathcal{R}\), we need to specify the condition \(Z=z\).

Suppose D and \(D'\) are i.i.d. random variables over \(\mathcal{D}\), and (K, Z) are random variables over \(\mathcal{K}\times \mathcal{Z}\) satisfying \({\Pr \left[ \,{(D,D',Z) \in \mathsf{BAD}}\,\right] } \le \epsilon '\). Then for any set \(\mathcal{S}\) and function \(f :\mathcal{R}\times \mathcal{Z}\rightarrow \mathcal{S}\), it holds that

$$\begin{aligned} \varDelta ( (K,Z,f(H_K(D),Z)), (K,Z,f(U_{|Z},Z))) \le \frac{1}{2} \sqrt{ 3 \epsilon ' \; |\mathcal{S}| } \;. \end{aligned}$$

Proof

The proof is an extension of the proof of the Crooked LHL given in [6]. First, using Cauchy–Schwarz and Jensen’s inequality we have

$$\begin{aligned}&\varDelta ((K,Z,f(H_K(D),Z)), (K,Z,f(U_{|Z},Z))) \\&\quad \le \frac{1}{2} \sqrt{|S| \, {\mathbf {E}} _{k,z} \left[ \sum _s ({\Pr \left[ \,{f(H_k(D),z) = s}\,\right] } - {\Pr \left[ \,{ f(U_{|Z=z},z) = s}\,\right] })^2 \right] } \;, \end{aligned}$$

where \(U_{|Z=z}\) is uniform on \(\mathcal{R}\) conditioned on \(Z=z\), and the expectation is over (kz) drawn from (KZ). Thus, to complete the proof it suffices to prove the following lemma.
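For completeness, the displayed inequality compresses two standard steps. Writing \(p_{k,z}(s) = {\Pr \left[ \,{f(H_k(D),z) = s}\,\right] }\) and \(q_{z}(s) = {\Pr \left[ \,{f(U_{|Z=z},z) = s}\,\right] }\), we have

$$\begin{aligned} \varDelta&= \frac{1}{2}\, {\mathbf {E}} _{k,z}\Big [ \sum _s \big |p_{k,z}(s) - q_{z}(s)\big | \Big ] \le \frac{1}{2}\, {\mathbf {E}} _{k,z}\Big [ \sqrt{|S| \sum _s \big (p_{k,z}(s) - q_{z}(s)\big )^2} \Big ] \\&\le \frac{1}{2} \sqrt{|S| \, {\mathbf {E}} _{k,z}\Big [ \sum _s \big (p_{k,z}(s) - q_{z}(s)\big )^2 \Big ]}, \end{aligned}$$

where the first inequality is Cauchy–Schwarz (\(\Vert x\Vert _1 \le \sqrt{|S|}\cdot \Vert x\Vert _2\)) and the second is Jensen's inequality applied to the concave square-root function.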

Lemma 8

$$\begin{aligned} {\mathbf {E}} _{k,z} \left[ \sum _s \Big ( {\Pr \left[ \,{f(H_k(D),z) = s}\,\right] } - {\Pr \left[ \,{ f(U_{|Z=z},z) = s}\,\right] } \Big )^2 \right] \le 3\epsilon ' \;. \end{aligned}$$
(2)

Proof

By the linearity of expectation, we can express Eq. 2 as:

$$\begin{aligned}&{\mathbf {E}} _{k,z} \sum _s {\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] }^2 \nonumber \\&- 2 {\mathbf {E}} _{k,z} \sum _s {\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] } {\Pr \left[ \,{f(U_{|Z=z},z) = s}\,\right] }\nonumber \\&+ {\mathbf {E}} _{z}\mathsf{Col}(f(U_{|Z=z},z)), \; \end{aligned}$$
(3)

where \(U_{|Z=z}\) is uniform on \(\mathcal{R}\) conditioned on \(Z=z\), and \(\mathsf{Col}\) is the collision probability of its input random variable. Note that since \(f(U_{|Z=z},z)\) is independent of k, we can drop it in the third term. In the following, we are going to calculate bounds for the first two terms.

For any \(s \in \mathcal {S}\), we can write \({\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] } = \sum _d {\Pr \left[ \,{D = d}\,\right] } \delta _{f(H_k(d),z), s}\) where \(\delta _{a,b}\) is 1 if \(a = b\) and 0 otherwise, and thus

$$\begin{aligned} \sum _s {\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] }^2 = \sum _{d,d'} {\Pr \left[ \,{D = d}\,\right] } {\Pr \left[ \,{D = d'}\,\right] } \delta _{f(H_k(d),z),f(H_k(d'),z)} \;. \end{aligned}$$

So we have

$$\begin{aligned}&{\mathbf {E}} _{k,z} \sum _s {\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] }^2 \nonumber \\&\quad = {\mathbf {E}} _{k,z} \left[ \sum _{d,d'} {\Pr \left[ \,{D = d}\,\right] } {\Pr \left[ \,{D = d'}\,\right] } \delta _{f(H_k(d),z),f(H_k(d'),z)} \right] \nonumber \\&\quad = {\mathbf {E}} _z \left[ \sum _{d,d'} {\Pr \left[ \,{D = d}\,\right] } {\Pr \left[ \,{D = d'}\,\right] } {\mathbf {E}} _k \left[ \delta _{f(H_k(d),z),f(H_k(d'),z)} \right] \right] \nonumber \\&\quad \le \sum _{z,d,d' \notin \mathsf{BAD}} {\Pr \left[ \,{Z = z}\,\right] } {\Pr \left[ \,{D = d}\,\right] } {\Pr \left[ \,{D = d'}\,\right] } {\mathbf {E}} _k \left[ \delta _{f(H_k(d),z),f(H_k(d'),z)} \right] + \epsilon '\nonumber \\&\quad = {\mathbf {E}} _z \left[ \mathsf{Col}(f(U_{|Z=z},z))\right] + \epsilon ', \end{aligned}$$
(4)

where \(\mathsf{BAD}\) is defined as in Eq. (1) from Lemma 7. The inequality holds because, by our definition of \(\mathsf{BAD}\), if \((z,d,d')\notin \mathsf{BAD}\), then \((H_{k}(d), H_{k}(d'))\) is distributed exactly as two uniformly chosen elements (conditioned on \(Z=z\)), and because \(\Pr [(z, d, d') \in \mathsf{BAD}] \le {\epsilon }'\).

By a similar calculation, we have:

$$\begin{aligned}&{\mathbf {E}} _{k,z}\sum _s {\Pr \left[ \,{f(H_{k}(D),z) = s}\,\right] } {\Pr \left[ \,{f(U_{|Z=z},z) = s}\,\right] } \nonumber \\&\quad \ge {\mathbf {E}} _z \left[ \mathsf{Col}(f(U_{|Z=z},z))\right] - \epsilon ' \;. \end{aligned}$$
(5)

By the same reasoning, \(H_{k}(D)\) is uniformly random except when the bad event occurs, whose probability is bounded by \({\epsilon }'\).

Putting things together, the inequality in Eq. 2 follows immediately by plugging in the bounds of Eqs. 4 and 5. This concludes the proof. \(\square \)

Here we describe the BKKV encryption scheme and show it is 2CLR secure. We begin by presenting the main scheme in BKKV, which uses the weaker linear assumption in bilinear groups but achieves a worse leakage rate (tolerating roughly \(1/8 \cdot |\mathsf{sk}| - o(\kappa )\) bits of leakage). The work [12] also points out that under the stronger SXDH assumption in bilinear groups, the rate can be improved to tolerate roughly \(1/4 \cdot |\mathsf{sk}| - o(\kappa )\), with essentially the same proof. The same argument also holds in the 2CLR setting. To avoid repetition, we describe only the original scheme in BKKV and prove that it is 2CLR under the linear assumption in bilinear groups.

  • Parameters. Let \(G,G_{T}\) be two groups of prime order p such that there exists a bilinear map \(e: G\times G \rightarrow G_{T}\). Let g be a generator of G (and so e(g, g) is a generator of \(G_{T}\)). An additional parameter \(\ell \ge 7\) is polynomial in the security parameter. (Setting different \(\ell \) enables a trade-off between efficiency and the rate of tolerable leakage.) For the scheme to be secure, we require that the linear assumption holds in the group G, which implies that the size of the group must be super-polynomial, i.e., \(p = \kappa ^{\omega (1)}\).

  • Key generation. The algorithm samples \(A \leftarrow \mathbb {Z}_{p}^{2\times \ell }\), and \(Y\leftarrow \mathsf {Ker}^{2}(A)\), i.e., \(Y\in \mathbb {Z}_{p}^{\ell \times 2}\) can be viewed as two random (linearly independent) points in the kernel of A. Then it sets \(\mathsf {pk}= g^{A},\mathsf{sk}= g^{Y}\). Note that since A is known, Y can be sampled efficiently.

  • Key update. Given a secret key \(g^{Y}\in G^{\ell \times 2}\), the algorithm samples \(R\leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{2\times 2})\) and then sets \(\mathsf{sk}' = g^{Y \cdot R}\).

  • Encryption. Given a public key \(\mathsf {pk}=g^{A}\), to encrypt 0, it samples a random \(r\in \mathbb {Z}_{p}^{2}\) and outputs \(c = g^{r^{T}\cdot A}\). To encrypt 1, it just outputs \(c= g^{u^{T}}\) where \(u\leftarrow \mathbb {Z}_{p}^{\ell }\) is a uniformly random vector.

  • Decryption. Given a ciphertext \(c = g^{v^{T}}\) and a secret key \(\mathsf{sk}= g^{Y}\), the algorithm computes \(e(g,g)^{v^{T} \cdot Y}\). If the result is \(e(g,g)^{0}\), then it outputs 0, otherwise 1.
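As a sanity check of the algebra (not of security), the following toy implementation works with exponents in the clear instead of group elements: it replaces \(g^{M}\) by M and simulates the pairing step \(e(g,g)^{v^{T}\cdot Y}\) by computing \(v^{T}\cdot Y \bmod p\) directly. The tiny prime p = 1009 and the skipped rank checks are simplifications for illustration; a real instantiation needs a bilinear group of super-polynomial order.

```python
import random

p, ell = 1009, 7  # toy parameters; the real scheme needs p superpolynomial
random.seed(7)

def rref_nullspace(A):
    """Return a basis of Ker(A) over Z_p, one length-ell vector per free column."""
    m, n = len(A), len(A[0])
    M = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:
        v = [0] * n
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = -M[i][fc] % p
        basis.append(v)
    return basis

def keygen():
    A = [[random.randrange(p) for _ in range(ell)] for _ in range(2)]
    X = rref_nullspace(A)  # ell-2 independent kernel vectors (A full rank whp)
    C = [[random.randrange(p) for _ in range(2)] for _ in X]
    # Y = X*C: two random kernel points (rank 2 whp; rank check skipped here)
    Y = [[sum(X[k][i] * C[k][j] for k in range(len(X))) % p
          for j in range(2)] for i in range(ell)]
    return A, Y

def update(Y):
    R = [[random.randrange(p) for _ in range(2)] for _ in range(2)]  # rank 2 whp
    return [[sum(Y[i][k] * R[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(ell)]

def enc(A, bit):
    if bit == 0:  # c = r^T * A lies in the row space of A
        r = [random.randrange(p) for _ in range(2)]
        return [sum(r[i] * A[i][j] for i in range(2)) % p for j in range(ell)]
    return [random.randrange(p) for _ in range(ell)]  # uniform u^T

def dec(c, Y):
    # pairing step e(g,g)^{v^T Y} simulated by computing v^T * Y directly
    w = [sum(c[i] * Y[i][j] for i in range(ell)) % p for j in range(2)]
    return 0 if w == [0, 0] else 1
```

Note that \(r^{T}A\cdot Y = r^{T}(AY) = 0\) exactly, so decryption of an encryption of 0 succeeds even after arbitrarily many key updates, since updating multiplies Y on the right and preserves its column space inside \(\mathsf {Ker}(A)\).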

We then obtain the following theorem:

Theorem 8

Under the linear assumption, for every \(\ell \ge 7\), the encryption scheme above is \(\mu \)-bit leakage resilient against two-key continual and consecutive leakage, where \(\mu =\frac{(\ell - 6 )\cdot \log p}{2} - \omega (\kappa )\). Note that the leakage rate is \(\frac{\mu }{|\mathsf{sk}| + |\mathsf{sk}|} \approx 1/8\) when \(\ell \) is chosen sufficiently large.

Proof

The theorem follows directly from the following lemma:

Lemma 9

For any \(t\in \mathrm{poly}(\kappa )\), \(r \leftarrow \mathbb {Z}_p^2\), \(A \leftarrow \mathbb {Z}_p^{2\times \ell }\), random \(Y\in \mathsf {Ker}^{2}(A)\), and polynomial-sized functions \(f_{1},f_{2},\dots , f_{t}\), where each \(f_{i}: \mathbb {Z}_{p}^{\ell \times 2} \times \mathbb {Z}_{p}^{\ell \times 2}\rightarrow {\{0,1\}}^{\mu }\) can be adaptively chosen (i.e., \(f_{i}\) can be chosen after seeing the leakage values of \(f_{1},\dots , f_{i-1}\)), the following two distributions, \(D_{0}\) and \(D_{1}\), are computationally indistinguishable:

$$\begin{aligned} D_{0}&= (g,g^{A}, g^{r^{T}\cdot A }, f_{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), \dots , f_{t}(\mathsf{sk}_{t-1},\mathsf{sk}_{t})) \\ D_{1}&= (g,g^{A}, g^{u }, f_{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), \dots , f_{t}(\mathsf{sk}_{t-1},\mathsf{sk}_{t})),\end{aligned}$$

where \(\mathsf{sk}_{0}= g^{Y}\) and \(\mathsf{sk}_{i+1} \) is the updated key from \(\mathsf{sk}_{i}\) using random coins \(R_{i}\leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{2\times 2})\) as defined in the key update procedure.

Basically, the distribution \(D_{0}\) is the view of the adversary when given an encryption of 0 as the challenge ciphertext and continual leakage of the secret keys; \(D_{1}\) is the same except the challenge ciphertext is an encryption of 1. Our goal is to show that no polynomial sized adversary can distinguish between them.

We show the lemma in the following steps:

  1. We first consider two modified experiments \(D_{0}'\) and \(D_{1}'\), where all the secret keys are sampled independently, i.e., \(\mathsf{sk}_{i+1}' \leftarrow \mathsf {Ker}^{2}(A)\). In other words, instead of using a rotation of the current secret key, the update procedure re-samples two random (linearly independent) points in the kernel of A. Denote \(D_{b}' = (g,g^{A}, g^{z}, f_{1}(\mathsf{sk}_{0}',\mathsf{sk}_{1}'), \dots , f_{t}(\mathsf{sk}_{t-1}',\mathsf{sk}_{t}')) \), where \(g^{z}\) is sampled either from \(g^{r^{T}\cdot A}\) or \(g^{u}\) depending on \(b\in {\{0,1\}}\). Intuitively, the operations are computed in the exponent, so the adversary cannot distinguish the modified experiments from the original ones. We formally prove this using the linear assumption.

  2. Then we consider the following modified experiments: for \(b\in {\{0,1\}}\), define

    $$\begin{aligned} D_{b}'' = (g,g^{A},g^{z},f_{1}(g^{u_{0}}, g^{u_{1}}),f_{2}(g^{u_{1}}, g^{u_{2}}),\cdots , f_{t}(g^{u_{t-1}}, g^{u_{t}})), \end{aligned}$$

    where the distribution samples a random \(X \in \mathbb {Z}_{p}^{\ell \times (\ell -3)}\) such that \(A\cdot X =0\); then it samples each \(u_{i}= X \cdot T_{i}\) for \(T_{i} \leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{(\ell -3) \times 2})\); finally, it samples z either as \(r^{T}\cdot A\) or uniformly random as in \(D'_{b}\). We then show that \(D_{b}''\) is indistinguishable from \(D_{b}'\) using the new geometric lemma.

  3. Finally, we show that \(D_{0}'' \approx D_{1}''\) under the linear assumption.

To implement the approach just described, we establish the following lemmas.

Lemma 10

For \(b\in {\{0,1\}}\), \(D_{b}\) is computationally indistinguishable from \(D_{b}'\).

To show this lemma, we first establish an auxiliary lemma:

Lemma 11

Under the linear assumption, \((g, g^{A}, g^{Y}, g^{Y\cdot U}) \approx (g,g^{A},g^{Y}, g^{Y'})\), where \(A\leftarrow \mathbb {Z}_{p}^{2\times \ell }\), \(Y,Y' \leftarrow \mathsf {Ker}^{2}(A)\), and \(U\leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{2\times 2})\).

Suppose there exists a distinguisher \(\mathcal {A}\) that breaks the above statement with non-negligible probability; then we can construct \(\mathcal {B}\) that breaks the linear assumption (in matrix form). In particular, \(\mathcal {B}\) distinguishes \((g,g^{C}, g^{C\cdot U})\) from \((g,g^{C}, g^{C'})\), where C and \(C'\) are two independent and uniformly random samples from \(\mathbb {Z}_{p}^{(\ell -2) \times 2} \), and U is a uniformly random matrix from \(\mathbb {Z}_{p}^{2\times 2}\). Note that when \(p = \kappa ^{\omega (1)}\) (as required by the linear assumption), with overwhelming probability \((C||C')\) is a rank 4 matrix and \((C||C\cdot U)\) is a rank 2 matrix. The linear assumption states that no polynomial-time adversary can distinguish the two distributions when they are given in the exponent.

\(\mathcal {B}\) does the following on input \((g,g^{C}, g^{Z})\), where Z is either \(C\cdot U\) or a uniformly random matrix \(C'\):

  • \(\mathcal {B}\) samples a random rank 2 matrix \(A\in \mathbb {Z}_{p}^{2\times \ell }\). Then \(\mathcal {B}\) computes an arbitrary basis of \(\mathsf {Ker}(A)\) (note that \(\mathsf {Ker}(A)=\{v\in \mathbb {Z}_{p}^{\ell }: A\cdot v=0 \}\)), denoted X. By the rank–nullity theorem, the dimension of \(\mathsf {Ker}(A)\) plus the rank of A equals \(\ell \), so \(X\in \mathbb {Z}_{p}^{\ell \times (\ell -2)}\), i.e., X consists of \(\ell -2\) linearly independent vectors.

  • \(\mathcal {B}\) computes \(g^{X \cdot C}\) and \(g^{X \cdot Z}\). This can be done efficiently given \((g^{C}, g^{Z})\) and X in the clear.

  • \(\mathcal {B}\) outputs \(\mathcal {A}(g,g^{A}, g^{X\cdot C}, g^{X \cdot Z} )\).

We observe that when \(p = \kappa ^{\omega (1)}\), the distribution of A is statistically close to that of a uniformly random matrix, and U is statistically close to a random rank 2 matrix. It is then not hard to see that \( g^{X\cdot C}\) is distributed identically to \(g^{Y} \), and that \(g^{X\cdot Z} \) is distributed as \(g^{(X\cdot C) \cdot U}\) if \(Z = C\cdot U\), and as \(g^{Y'}\) otherwise. So \(\mathcal {B}\) breaks the linear assumption with probability essentially the same as that of \(\mathcal {A}\). This completes the proof of the lemma.

Lemma 10 can then be proved from this lemma via a standard hybrid argument. We show that \(D_{0} \approx D_{0}'\); the other case follows by the same argument. For \(i\in [t+1] \), define hybrid \(H_{i}\) to be the same experiment as \(D_{0} \), except that the first i secret keys are sampled independently, as in \(D_{0}'\); the rest are sampled via rotations, as in \(D_{0}\). It is not hard to see that \(H_{1}= D_{0}\), \(H_{t+1}=D_{0}'\), and \(H_{i} \approx H_{i+1}\) by the lemma. The argument is standard, so we omit the details.

Then we recall the modified distribution \(D_{b}''\): for \(b\in {\{0,1\}}\),

$$\begin{aligned} D_{b}'' = (g,g^{A},g^{z},f_{1}(g^{u_{0}}, g^{u_{1}}),f_{2}(g^{u_{1}}, g^{u_{2}}),\cdots , f_{t}(g^{u_{t-1}}, g^{u_{t}})), \end{aligned}$$

where the distribution samples a random \(X \in \mathbb {Z}_{p}^{\ell \times (\ell -3)}\) such that \(A\cdot X =0\); then it samples each \(u_{i}= X \cdot T_{i}\) for \(T_{i} \leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{(\ell -3) \times 2})\), and z is sampled either as \(r^{T}\cdot A\) or uniformly at random. We then establish the following lemma.

Lemma 12

For \(b\in {\{0,1\}}\), \(D_{b}'\) is computationally indistinguishable from \(D_{b}''\).

We prove the lemma using another hybrid argument. We prove that \(D_{0}' \approx D_{0}''\); the other case follows from the same argument. We define hybrids \(Q_{i}\) for \(i\in \{0,1,\dots ,t+1\}\), where in \(Q_{i}\) the first i secret keys (the exponents) are sampled randomly from \(\mathsf {Ker}^{2}(A)\) (as in \(D_{0}'\)), and the remaining secret keys (the exponents) are sampled as \(X\cdot T\) (as in \(D_{0}''\)). Clearly, \(Q_{0}= D_{0}''\) and \(Q_{t+1} = D_{0}'\). We then show that \(Q_{i}\) is indistinguishable from \(Q_{i+1}\) using the extended geometric lemma (Lemma 6).

For any \(i\in [t+1]\), we argue that if there exists an (even unbounded) adversary that distinguishes \(Q_{i}\) from \(Q_{i+1}\) with probability better than \({\epsilon }\), then there exist a leakage function L and an adversary \(\mathcal {B}\) such that \(\mathcal {B}\) distinguishes \(\Big (A, X,L( A, X \cdot T, X \cdot T'), X \cdot T' \Big )\) from \( \Big (A, X, L(A, U, X \cdot T'), X \cdot T' \Big )\) in Lemma 6 with probability better than \({\epsilon }- \mathsf{negl}(\kappa )\) (dimensions will be set below). We will set the parameters of Lemma 6 such that the two distributions have negligible statistical distance; thus \({\epsilon }\) can be at most negligible.

Now we formally set the dimensions: let X be a random matrix in \( \mathbb {Z}_{p}^{\ell \times (\ell -3 )}\); let \(T, T'\) be two random rank 2 matrices in \(\mathbb {Z}_{p}^{(\ell -3 )\times 2}\), i.e., sampled from \( \mathsf {Rk}_{2}\left( \mathbb {Z}_{p}^{(\ell -3 )\times 2}\right) \); and let \(L: \mathbb {Z}_{p}^{\ell \times 2} \times \mathbb {Z}_{p}^{\ell \times 2} \rightarrow {\{0,1\}}^{2\mu }\). Recall that \(2\mu =(\ell - 6 )\cdot \log p - \omega (\kappa ) \), and thus the range of L has size \(2^{2\mu } \le p^{\ell -6} \cdot 2^{-\omega (\kappa )}\). By Lemma 6, for any (even computationally unbounded) L, we have

$$\begin{aligned}&\varDelta \left( \Big (A, X,L(A, X \cdot T, X \cdot T'), X \cdot T' \Big ), \right. \\&\left. \Big (A, X,L(A, U, X \cdot T'), X \cdot T' \Big ) \right) < \kappa ^{-\omega (1)} = \mathsf{negl}(\kappa ). \end{aligned}$$

Let g be a random generator of G, and let \(\omega \) be some uniformly chosen randomness. We define a particular function \(L^{*}\), with \(g, \omega \) hardwired, as follows: on input \((A, w, v)\), \(L^{*}\) does the following:

  • It first samples \(Y_{0},\dots , Y_{i-1} \leftarrow \mathsf {Ker}^{2}(A)\), using the random coins \(\omega \). Then it sets \(\mathsf{sk}_{j}=g^{Y_{j}}\) for \(j\in \{0,\dots ,i-1\}\).

  • It simulates the leakage functions, adaptively, obtains the values \(f_{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), \dots , f_{i-1}(\mathsf{sk}_{i-2},\mathsf{sk}_{i-1})\), and obtains the next leakage function \(f_{i}\).

  • It computes \(f_{i}(\mathsf{sk}_{i-1}, g^{w})\), and then obtains the next leakage function \(f_{i+1}\).

  • Finally it outputs \(f_{i}(\mathsf{sk}_{i-1},g^{w}) || f_{i+1}(g^{w},g^{v})\).

Recall that \(f_{i},f_{i+1}\) are two leakage functions with \(\mu \) bits of output, so \(L^{*}\) has \(2\mu \) bits of output. Now we construct the adversary \(\mathcal {B}\) as follows:

  • Let g be the random generator, \(\omega \) be the random coins as stated above, and \(L^{*}\) be the function defined above. Then \(\mathcal {B}\) gets input \((A, X, L^{*}(A, Z, X \cdot T'), X \cdot T' )\) where Z is either uniformly random or \(X\cdot T\).

  • \(\mathcal {B}\) samples \(Y_{0},\dots , Y_{i-1} \leftarrow \mathsf {Ker}^{2}(A)\), using the random coins \(\omega \). Then it sets \(\mathsf{sk}_{j}=g^{Y_{j}}\) for \(j\in \{0,\dots ,i-1\}\). We note that the secret keys (in the first \(i-1\) rounds) are consistent with the values used in the leakage function, since they use the same randomness \(\omega \).

  • \(\mathcal {B}\) sets \(\mathsf{sk}_{i+2} = g^{ X \cdot T'}\).

  • \(\mathcal {B}\) samples \(T_{i+3},\dots , T_{t+1}\leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{(\ell -3)\times 2})\) and sets \(\mathsf{sk}_{j} = g^{X\cdot T_{j} }\) for \(j\in \{ i+3, \dots ,t+1\}\).

  • \(\mathcal {B}\) outputs \(\mathcal {A}\Big (g, g^{A},g^{z}, f_{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), f_{2}(\mathsf{sk}_{1},\mathsf{sk}_{2}), \cdots , f_{i-1}(\mathsf{sk}_{i-2},\mathsf{sk}_{i-1} ), L^{*}(A, Z, X \cdot T'), f_{i+2} (\mathsf{sk}_{i+2}, \mathsf{sk}_{i+3}'), \dots , f_{t} (\mathsf{sk}_{t}',\mathsf{sk}_{t+1}')\Big ).\)

It is then not hard to see that if Z comes from the distribution \(X\cdot T\), the simulation of \(\mathcal {B}\) and \(L^{*}\) is distributed as \(Q_{i}\), and otherwise as \(Q_{i+1}\). Thus, if \(\mathcal {A}\) can distinguish \(Q_{i}\) from \(Q_{i+1}\) with non-negligible probability \({\epsilon }\), then \(\mathcal {B}\) can distinguish the two distributions with non-negligible probability. This contradicts Lemma 6.

Finally, we show that \(D_{0}''\) is computationally indistinguishable from \(D_{1}''\) under the linear assumption.

Lemma 13

Under the linear assumption, the distributions \(D_{0}''\) and \(D_{1}''\) are computationally indistinguishable.

We use the same argument as in the work [12]. In particular, we show that if there exists an adversary \(\mathcal {A}\) that distinguishes \(D_{0}''\) from \(D_{1}''\), then there exists an adversary \(\mathcal {B}\) that distinguishes the distributions \(\{g^{C}: C \leftarrow \mathbb {Z}_{p}^{3\times 3}\}\) and \(\{g^{C}: C \leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{3\times 3})\}\). We assume that the second distribution samples two random rows and then sets the third row to a random linear combination of the first two; as argued in the work [12], this assumption is without loss of generality.

Now we describe the adversary \(\mathcal {B}\). \(\mathcal {B}\) on input \(g^{C}\) does the following.

  • \(\mathcal {B}\) samples a random matrix \(X \leftarrow \mathbb {Z}_{p}^{\ell \times (\ell -3)}\), and a random matrix \(B \leftarrow \mathbb {Z}_{p}^{3\times \ell } \) such that \(B \cdot X=0\).

  • \(\mathcal {B}\) computes \(g^{CB}\), and sets its first two rows as \(g^{A}\) and the last row as \(g^{z}\).

  • \(\mathcal {B}\) samples \(T_{1},\dots , T_{t} \leftarrow \mathsf {Rk}_{2}(\mathbb {Z}_{p}^{(\ell -3)\times 2})\), and sets \(\mathsf{sk}_{i} = g^{XT_{i}}\) for \(i\in [t]\).

  • \(\mathcal {B}\) outputs \(\mathcal {A}(g,g^{A},g^{z}, f_{1}(\mathsf{sk}_{0},\mathsf{sk}_{1}), \dots , f_{t}(\mathsf{sk}_{t-1},\mathsf{sk}_{t}))\).

As argued in the work [12], if C is uniformly random, then (A, z) is distributed uniformly, as in \(D_{1}''\). If C has rank 2, then (A, z) is distributed as \((A, r^{T}A)\) for some random \(r \in \mathbb {Z}_{p}^{2}\), as in \(D_{0}''\). Thus, if \(\mathcal {A}\) can distinguish \(D_{0}''\) from \(D_{1}''\) with non-negligible probability, then \(\mathcal {B}\) breaks the linear assumption with non-negligible probability.
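The key algebraic step in this simulation is an exact identity: if the third row of C is the combination \(r^{T}\) of the first two rows, then the third row of \(C\cdot B\) equals \(r^{T}\cdot A\) for A the first two rows of \(C\cdot B\). The following sketch (our illustration; the tiny prime and the omission of the constraint \(B\cdot X=0\), which is irrelevant to this identity, are simplifications) checks it:

```python
import random

p, ell = 101, 7  # toy prime and dimension, for illustration only
random.seed(0)

def matmul(M, N):
    # matrix product over Z_p
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*N)] for row in M]

B = [[random.randrange(p) for _ in range(ell)] for _ in range(3)]
# rank-2 C: two random rows, third row = combination r^T of the first two
C_top = [[random.randrange(p) for _ in range(3)] for _ in range(2)]
r = [random.randrange(p) for _ in range(2)]
C = C_top + [[(r[0] * C_top[0][j] + r[1] * C_top[1][j]) % p
              for j in range(3)]]
CB = matmul(C, B)
A, z = CB[:2], CB[2]          # pk exponent A and ciphertext exponent z
rA = [(r[0] * A[0][j] + r[1] * A[1][j]) % p for j in range(ell)]
print(z == rA)                # z is an encryption of 0 under pk = g^A
```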

Lemma 9 (\(D_{0}\approx D_{1}\)) follows directly from Lemmas 10, 12, and 13, whose proofs were given above. This completes the proof of the theorem. \(\square \)

5 Leakage-Resilient PKE from Obfuscation

5.1 Making Sahai–Waters PKE Leakage-Resilient

We show that by modifying the Sahai–Waters (SW) public key encryption scheme [47] in two simple ways, the scheme already becomes non-trivially leakage resilient in the one-time, bounded setting. Recall that in this setting, the adversary, after seeing the public key and before seeing the challenge ciphertext, may request a single leakage query of length L bits. We require that semantic security hold, even given this leakage.

Our scheme can tolerate an arbitrary amount of one-time leakage. Specifically, for any \(L = L(\kappa ) = \mathrm{poly}(\kappa )\), we can obtain a scheme which is L-leakage resilient by setting the parameter \(\rho \) in Fig. 6 depending on L. However, our leakage rate is far from optimal, since the size of the secret key, \(\mathsf{sk}\), grows with L. Indeed, the result of this section is subsumed by the work of Hazay et al. [35]. We view this section as a warm-up; in Sect. 5.2, we will show how to further modify the construction to achieve optimal leakage rate, though we rely on much stronger assumptions than those of Hazay et al. [35].

At a high-level, we modify SW in the following ways: (1) Instead of following the general paradigm of encrypting a message m by xoring with the output of a PRF, we first apply a strong randomness extractor \(\mathsf {Ext} \) to the output of the PRF and then xor with the message m; (2) we modify the secret key of the new scheme to be an \({\mathsf {iO}} \) of the underlying decryption circuit. Recall that in SW, decryption essentially consists of evaluating a puncturable PRF. In our scheme, \(\mathsf{sk}\) consists of an \({\mathsf {iO}} \) of the puncturable PRF, padded with \(\mathrm{poly}(L)\) bits.

We show that, even given L bits of leakage, the attacker cannot distinguish \(\mathsf {Ext} (y)\) from random, where y is the output of the PRF on a fixed input \(t^*\). This will be sufficient to prove security. We proceed by a sequence of hybrids: First, we switch \(\mathsf{sk}\) to be an obfuscation of a circuit which has a PRF key punctured at \(t^*\) and a point function \(t^* \rightarrow y\) hardcoded. On input \(t \ne t^*\), the punctured PRF is used to compute the output, whereas on input \(t^*\), the point function is used. Since the circuits compute the same function and—due to appropriate padding—they are both the same size, security of the \({\mathsf {iO}} \) implies that an adversary cannot distinguish the two scenarios. Next, just as in SW, we switch from \(t^* \rightarrow y\) to \(t^* \rightarrow y^*\), where \(y^*\) is uniformly random of length \(L + L_\mathsf{msg}+ 2\log (1/\epsilon )\) bits; here, we rely on the security of the punctured PRF. Now, observe that since \(y^*\) is uniform and since \(\mathsf {Ext} \) is a strong extractor for inputs of min-entropy \(L_\mathsf{msg}+ 2\log (1/\epsilon )\) and output length \(L_\mathsf{msg}\), the value \(\mathsf {Ext} (y^*)\) looks random, even under L bits of leakage (Figs. 7, 8).
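On the extractor itself: the Leftover Hash Lemma turns any universal hash family into a strong extractor, and a random GF(2) matrix encoded in the seed is the textbook universal family. The following sketch (our illustration, not the paper's concrete instantiation; the toy sizes n = 3, m = 2 are assumptions) checks universality exactly: for any fixed \(y_1 \ne y_2\), a collision occurs for exactly a \(2^{-m}\) fraction of seeds.

```python
from itertools import product

def ext(y, seed, m):
    """Hash y (n bits) with the m x n GF(2) matrix packed row-major in seed."""
    n = len(y)
    return tuple(sum(seed[i * n + j] & y[j] for j in range(n)) % 2
                 for i in range(m))

n, m = 3, 2
y1, y2 = (1, 0, 1), (0, 1, 1)
# enumerate all 2^(m*n) seeds and count collisions ext(y1) == ext(y2)
coll = sum(ext(y1, s, m) == ext(y2, s, m)
           for s in product((0, 1), repeat=m * n))
print(coll, "of", 2 ** (m * n), "seeds collide")
```

Universality here is exact, not just empirical: a collision means the matrix annihilates \(y_1 \oplus y_2 \ne 0\), which each row does independently with probability 1/2.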

Fig. 6: One-time, bounded leakage encryption scheme, \(\mathcal {E}\)

Fig. 7: This program \(C_k\) is obfuscated using \({\mathsf {iO}} \) and placed in the public key to be used for encryption

Fig. 8: The circuit above is padded with \(\mathrm{poly}(\kappa + \rho )\) dummy gates to obtain the circuit \(C_{k,\kappa + \rho }\). \(C_{k,\kappa + \rho }\) is then obfuscated using \({\mathsf {iO}} \) and placed in the secret key

Theorem 9

Assume

  • \(G: {\{0,1\}}^\kappa \rightarrow {\{0,1\}}^\rho \) is a pseudorandom generator (\(\mathsf {PRG}\)) with output length \(\rho \ge 2\kappa \).

  • \(\mathsf{PRF}: {\{0,1\}}^\kappa \times {\{0,1\}}^\rho \rightarrow {\{0,1\}}^\rho \) is a puncturable pseudorandom function.

  • \({\mathsf {iO}} \) is an indistinguishability obfuscator for the circuits used in this scheme.

  • \(\mathsf {Ext}: {\{0,1\}}^{\rho } \times {\{0,1\}}^d \rightarrow {\{0,1\}}^{L_\mathsf{msg}}\) is an \((L_\mathsf{msg}+ 2\log (1/\epsilon ), \epsilon )\)-strong extractor, where \(\epsilon = \mathsf{negl}(\kappa )\).

Then \(\mathcal {E}\) is L-leakage resilient against one-time key leakage where

$$\begin{aligned} L = \rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\end{aligned}$$

Note that in the above theorem statement, \(\rho \) can be increased arbitrarily while all other parameters remain fixed. Therefore, to achieve an arbitrary amount, L, of leakage, we fix \(L_\mathsf{msg}\) and \(\epsilon \) and then set \(\rho := L + 2\log (1/\epsilon ) + L_\mathsf{msg}\). Additionally, note that extractors that satisfy the requirements of Theorem 9 can be constructed via the Leftover Hash Lemma (cf. [36]).
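The parameter setting is simple arithmetic; the following sketch (the function name and the concrete values L = 1024, \(L_\mathsf{msg}\) = 128, \(\epsilon = 2^{-80}\) are our illustrative assumptions) computes \(\rho \) for a desired leakage bound:

```python
import math

def prg_output_len(L, L_msg, eps):
    """Set rho := L + 2*log(1/eps) + L_msg, per the discussion of Theorem 9."""
    return L + 2 * math.ceil(math.log2(1 / eps)) + L_msg

rho = prg_output_len(L=1024, L_msg=128, eps=2 ** -80)
print(rho)  # 1024 + 2*80 + 128 = 1312
```

Inverting the same equation recovers the theorem's leakage bound \(L = \rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\).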

In order to prove Theorem 9, we prove (in Lemma 14) that even under leakage, it is hard for any \(\textsc {ppt}\) adversary \(\mathcal {A}\) to distinguish the output of the extractor \(\mathsf {Ext} \) from uniformly random. Given this, Theorem 9 follows immediately.

Lemma 14

For every \(\textsc {ppt}\) leaking adversary \(\mathcal {A}\), which is given access to a leakage oracle \(\mathcal {O}\) and may leak at most \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\) bits of the secret key, there exist random variables \(\mathsf {pk}', \widetilde{\mathsf{sk}}\) such that:

$$\begin{aligned}&\left( \mathsf {pk}, t, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}) \right) \\&\quad {\mathop {\approx }\limits ^{c}} \left( \mathsf {pk}', U_{\rho }, w, U_{L_\mathsf{msg}}, f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \end{aligned}$$

where \(y = C_{\mathsf{Dec}}(t)\) for \(t = G(r)\), and the distributions are taken over the coins of \(\mathcal {A}\) and the choice of \((\mathsf {pk}, \mathsf{sk}) \leftarrow \mathcal {E}.\mathsf{Gen}(1^\kappa ), w, r\) and the choice of \(\mathsf {pk}', \widetilde{\mathsf{sk}}, w\), respectively.

We prove the lemma via the following sequence of hybrids. Note that Hybrids 1 and 2 are essentially identical to the Sahai–Waters hybrids; we depart from Sahai–Waters when we modify the secret key in Hybrids 3 and 4.

Hybrid 0: This hybrid is identical to the real game.

Let \(\mathcal {D}^\mathcal {A}_{H_0}\) denote the distribution \((\mathsf {pk}, t, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}))\) as in the left side of Lemma 14.

Hybrid 1: This hybrid is the same as Hybrid 0 except we replace pseudorandom \(t = G(r)\) in the challenge ciphertext with uniform random \(t^* \leftarrow \{0,1\}^\rho \).

Let \(\mathcal {D}^\mathcal {A}_{H_1}\) denote the distribution \((\mathsf {pk}, t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}))\) where \(y = C_{\mathsf{Dec}}(t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), choice of \((\mathsf {pk}, \mathsf{sk}),w, t^*\) as described above (Fig. 9).

Claim 1

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_0} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_1}. \end{aligned}$$

Proof

The proof is by reduction to the security of the pseudorandom generator G. Assume toward contradiction that there exists a \(\textsc {ppt}\) adversary \(\mathcal {A}\), a corresponding \(\textsc {ppt}\) distinguisher D and a polynomial \(p(\cdot )\) such that for infinitely many \(\kappa , D\) distinguishes \(\mathcal {D}^\mathcal {A}_{H_0}\) and \(\mathcal {D}^\mathcal {A}_{H_1}\) with probability at least \(1/p(\kappa )\). We construct a \(\textsc {ppt}\) adversary \(\mathcal {S}\) that distinguishes the output of the PRG from uniform random with probability at least \(1/p(\kappa )\), for infinitely many \(\kappa \). \(\mathcal {S}\) does the following: \(\mathcal {S}\) runs \(\mathcal {E}.\mathsf{Gen}(1^\kappa )\) honestly to generate \((\mathsf {pk}, \mathsf{sk})\). \(\mathcal {S}\) hands \(\mathsf {pk}\) to \(\mathcal {A}\) and responds to leakage query \( f \) by applying the leakage function directly to \(\mathsf{sk}\) to compute \( f (\mathsf{sk})\). Upon receiving its challenge \(t'\) as the external PRG challenge, \(\mathcal {S}\) sets \(y = C_{\mathsf{Dec}}(t')\), hands \((\mathsf {pk}, t', w, \mathsf {Ext} (y, w), f (\mathsf{sk}))\) to the distinguisher D, and outputs whatever D does. The reader can verify that \(\mathcal {S}\)’s distinguishing advantage is the same as D’s. \(\square \)

Hybrid 2: This hybrid is the same as Hybrid 1 except we replace the key k used in \(C_{\mathsf{Enc}}\) with a punctured key, \(\widetilde{k}=\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), and denote the resulting circuit by \(C'_{\mathsf{Enc}}\). We denote the resulting public key by \(\mathsf {pk}'\).

Let \(\mathcal {D}^\mathcal {A}_{H_2}\) denote the distribution \(\left( \mathsf {pk}', t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \) where \(y = C_{\mathsf{Dec}}(t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\mathsf {pk}', \mathsf{sk}),w, t^*\) as described above.

Fig. 9: Program \(C_{\widetilde{k}}\). This program replaces \(C_{k}\). It is obfuscated and placed in the public key to be used for encryption

Claim 2

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_1} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_2}. \end{aligned}$$

Proof

The proof is by a reduction to the security of the indistinguishability obfuscator. The main observation is that with all but negligible probability, \(t^*\) is not in the range of the PRG, in which case \(C_{\mathsf{Enc}}\) and the modified circuit \(C'_{\mathsf{Enc}}\) used in Hybrid 2 have identical behavior: with overwhelming probability, on no input does either program evaluate the PRF at the punctured point \(t^*\). Therefore, puncturing \(t^*\) out of the key k does not affect the input/output behavior, and if there is a difference in advantage, we can construct an algorithm \({\mathcal {B}} \) that breaks the security of indistinguishability obfuscation.
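The probability that \(t^*\) lands in the range of G can be bounded explicitly: since G maps \(\kappa \) bits to \(\rho \ge 2\kappa \) bits, its image contains at most \(2^\kappa \) strings, so

$$\begin{aligned} \Pr _{t^* \leftarrow \{0,1\}^\rho } \left[ \exists \, r \in \{0,1\}^\kappa : G(r) = t^* \right] \le \frac{2^\kappa }{2^\rho } \le 2^{-\kappa }, \end{aligned}$$

which is negligible in \(\kappa \).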

\({\mathcal {B}} \) runs as the challenger, but with \(t^*\) chosen at random. When it is to create the obfuscated program, it submits both programs \(C_0= C_{k}\) and \(C_1=C_{\widetilde{k}}\) to an \({\mathsf {iO}} \) challenger. If the \({\mathsf {iO}} \) challenger chooses the first, then we are in Hybrid 1; if it chooses the second, then we are in Hybrid 2. Thus, any adversary with a non-negligible difference in advantage between the two hybrids yields, via \({\mathcal {B}} \), an attacker on \({\mathsf {iO}} \) security. \(\square \)

Hybrid 3: This hybrid is the same as Hybrid 2 except we replace \(C_{\mathsf{Dec}} = {\mathsf {iO}} (C_{k, \kappa + \rho })\) with \(C'_{\mathsf{Dec}} = {\mathsf {iO}} (C'_{\widetilde{k}})\), where \(C'_{\widetilde{k}}\) is specified in Fig. 10. Note that we puncture k at the challenge point \(t^*\). We denote by \(\mathsf{sk}'\) the resulting secret key.

Fig. 10: Program \(C'_{\widetilde{k}}\). This program replaces \(C_{k, \kappa + \rho }\). It is obfuscated and placed in the secret key

Let \(\mathcal {D}^\mathcal {A}_{H_3}\) denote the distribution \(\left( \mathsf {pk}', t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}') \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \) where \(y = C'_{\mathsf{Dec}}(t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\mathsf {pk}', \mathsf{sk}'),w, t^*\) as described above.

Claim 3

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_2} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_3}. \end{aligned}$$

Proof

The proof is by a reduction to the security of the indistinguishability obfuscator. The main observation is that the size of the circuit does not change, since the description of \(C_{k, \kappa + \rho }\) is padded with \(\mathrm{poly}(\kappa + \rho )\) gates (for an appropriate \(\mathrm{poly}\)); thus, \(C_{k, \kappa + \rho }\) and \(C'_{\widetilde{k}}\) are the same size. Moreover, puncturing \(t^*\) out of the key k does not affect the input/output behavior, since on input \(t^*\) we output the hardcoded value \(\beta = \mathsf{PRF}.\mathsf{Eval}(k, t^*)\). Therefore, if there is a difference in advantage, we can construct an algorithm \({\mathcal {B}} \) that breaks the security of indistinguishability obfuscation.

\({\mathcal {B}} \) runs as the challenger, but with \(t^*\) chosen at random. When it is to create the obfuscated program, it submits both programs \(C_0= C_{k, \kappa + \rho }\) and \(C_1=C'_{\widetilde{k}}\) to an \({\mathsf {iO}} \) challenger. If the \({\mathsf {iO}} \) challenger chooses the first, then we are in Hybrid 2; if it chooses the second, then we are in Hybrid 3. Thus, any adversary with a non-negligible difference in advantage between the two hybrids yields, via \({\mathcal {B}} \), an attacker on \({\mathsf {iO}} \) security. \(\square \)

Hybrid 4: This hybrid is the same as Hybrid 3 except we replace the hardcoded \(\beta \) with \(y^*\), where \(y^*\) is uniformly random. We denote by \(\widetilde{\mathsf{sk}}\) the resulting secret key. Note that the public key \(\mathsf {pk}'\) remains the same.

Let \(\mathcal {D}^\mathcal {A}_{H_4}\) denote the distribution \(\left( \mathsf {pk}', t^*, w, \mathsf {Ext} (y^*, w), f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \) where \(y^* = C'_{\mathsf{Dec}}(t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\mathsf {pk}', \widetilde{\mathsf{sk}}),w, t^*\) as described above.

Claim 4

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_3} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_4}. \end{aligned}$$

Proof

The proof is through a reduction to the security of the puncturable \(\mathsf{PRF}\). Recall, the security notion of puncturable \(\mathsf{PRF}\)s states that, given \(\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), an adversary cannot distinguish \(\mathsf{PRF}.\mathsf{Eval}(k, t^*)\) from random. The reduction is straightforward: to break the security of the \(\mathsf{PRF}, \mathcal {S}\) generates \(t^*\) at random and submits it to his challenger. He receives \(\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), along with either \(y^* = \mathsf{PRF}.\mathsf{Eval}(k, t^*)\) or \(y^* \leftarrow \{0,1\}^\rho \) as a challenge. He uses \(y^*\), and samples all the remaining necessary keys for simulating \(\mathsf {pk}'\) and \(\widetilde{\mathsf{sk}}\). He chooses w at random and computes \(\mathsf {Ext} (y^*, w)\). He answers leakage queries on \(\widetilde{\mathsf{sk}}\) honestly. The reader can verify that \(\mathcal {S}\)'s advantage is the same as \(\mathcal {A}\)'s advantage in distinguishing the two hybrids. \(\square \)
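The puncturing mechanism used here can be made concrete: the classic GGM construction yields a puncturable PRF from any length-doubling PRG. The toy Python sketch below is our own illustration (SHA-256 stands in for the PRG, and all names are ours, not the paper's): the punctured key consists of the off-path sibling seeds of the GGM tree, which determine \(\mathsf{PRF}.\mathsf{Eval}(k, x)\) for every \(x \ne t^*\) while revealing nothing about the value at \(t^*\).

```python
import hashlib

def _left(s: bytes) -> bytes:
    return hashlib.sha256(s + b"L").digest()

def _right(s: bytes) -> bytes:
    return hashlib.sha256(s + b"R").digest()

def prf_eval(key: bytes, x: int, n: int) -> bytes:
    """GGM PRF: walk the n-bit path of input x down from the root seed."""
    s = key
    for i in range(n - 1, -1, -1):
        s = _right(s) if (x >> i) & 1 else _left(s)
    return s

def prf_punct(key: bytes, t: int, n: int):
    """Punctured key: the off-path sibling seed at every level of t's path."""
    s, sibs = key, []
    for i in range(n - 1, -1, -1):
        bit = (t >> i) & 1
        sibs.append(_left(s) if bit else _right(s))  # sibling off t's path
        s = _right(s) if bit else _left(s)           # continue along t
    return (t, n, sibs)

def prf_eval_punct(pkey, x: int) -> bytes:
    """Evaluate from a punctured key; agrees with prf_eval for all x != t."""
    t, n, sibs = pkey
    if x == t:
        raise ValueError("cannot evaluate at the punctured point")
    for i in range(n - 1, -1, -1):
        if ((x >> i) & 1) != ((t >> i) & 1):   # first divergence from t
            s = sibs[n - 1 - i]                # subtree root containing x
            for j in range(i - 1, -1, -1):
                s = _right(s) if (x >> j) & 1 else _left(s)
            return s
```

Pseudorandomness of the missing value at \(t^*\) then follows from PRG security along the punctured path.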

Claim 5

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_4} {\mathop {\approx }\limits ^{s}} \left( \mathsf {pk}', U_{\rho }, w, U_{L_\mathsf{msg}}, f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \end{aligned}$$

Note that the right side above is the same as the right side of Lemma 14.

Proof

We claim that the min-entropy of \(y^*\) conditioned on \(\mathsf {pk}', f (\widetilde{\mathsf{sk}})\) is at least \(L_\mathsf{msg}+ 2\log (1/\epsilon )\). Note that \(y^*\) initially has min-entropy \(\rho \) since it is chosen uniformly at random. Thus, leaking \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\) bits of the secret key reduces \(y^*\)'s min-entropy by at most \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\). Therefore, \(y^*\) maintains min-entropy at least \(L_\mathsf{msg}+ 2\log (1/\epsilon )\), and the claim follows by the properties of the strong extractor, \(\mathsf {Ext} \). \(\square \)
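Writing \(\widetilde{H}_\infty \) for (average) conditional min-entropy, the accounting in the claim is simply

$$\begin{aligned} \widetilde{H}_\infty \left( y^* \mid \mathsf {pk}', f (\widetilde{\mathsf{sk}}) \right) \ge \rho - \left( \rho - 2\log (1/\epsilon ) - L_\mathsf{msg}\right) = L_\mathsf{msg}+ 2\log (1/\epsilon ). \end{aligned}$$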

This concludes the proof of Lemma 14.

5.2 Improving the Leakage Rate

In this section, we show how to modify the previous construction to achieve optimal leakage rate. The key observation is that the leakage rate tolerated by the previous construction is low because the entire obfuscated circuit \({\mathsf {iO}} (C_{k,\kappa + \rho })\) must be stored in the secret key. Ideally, since the circuit is obfuscated, we would like to put it in the public key. However, this cannot possibly work since anyone can then decrypt the challenge ciphertext. Therefore, we store a collision-resistant hash \(h(\mathsf {ct}_\mathsf{dummy})\) in the obfuscated circuit, and include a ciphertext encrypted using a symmetric key encryption scheme, \(\mathsf {ct}_\mathsf{dummy}\), in the secret key: the circuit will only decrypt if the user provides a proper pre-image to the hardcoded value \(h(\mathsf {ct}_\mathsf{dummy})\). This scheme seems to preserve semantic security, but we must prove security in the LR setting. Specifically, we must show that even when leaking \(1-o(1)\)-fraction of \(\mathsf {ct}_\mathsf{dummy}\), the adversary cannot find a valid input to the obfuscated circuit. To prove this, the idea is that in the hybrids, we switch \(\mathsf {ct}_\mathsf{dummy}\) from a “dummy input” to an encryption of the point function \(t^* \rightarrow y^*\), where \(y^*\) is random. The obfuscated circuit will also be changed (as in the proof of the previous construction) so that on input \(t^*\), it outputs the output of the point function. Note that even under leakage, \(y^*\) has high min-entropy and thus \(\mathsf {Ext} (y^*)\) will still look random. 
Finally, we note that in order for the argument to work, we must now rely on public-coin differing-inputs obfuscation, since in the hybrid arguments the obfuscated circuits in the public key produce different outputs on inputs \(\mathsf {ct}_\mathsf{dummy}' \ne \mathsf {ct}_\mathsf{dummy}\) with \(h(\mathsf {ct}_\mathsf{dummy}') = h(\mathsf {ct}_\mathsf{dummy})\), which are hard for an efficient adversary to find (Figs. 11, 12, 13, 14).
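To make the hash-gating mechanism concrete, here is a toy Python sketch of the gated decryption logic (our own illustration, not the paper's circuits: HMAC-SHA256 stands in for the puncturable PRF, and all names are ours). The program releases a PRF output only when handed a pre-image of the hardcoded digest, so anyone holding only the public key, without \(\mathsf {ct}_\mathsf{dummy}\), cannot use it to decrypt.

```python
import hashlib
import hmac

def make_dec_program(k: bytes, h_star: bytes):
    """Toy analogue of the obfuscated decryption program: evaluation is
    gated on presenting ct_dummy such that h(ct_dummy) == h_star."""
    def c_dec(ct_dummy: bytes, t: bytes):
        if hashlib.sha256(ct_dummy).digest() != h_star:
            return None  # reject: caller has no pre-image of h_star
        return hmac.new(k, t, hashlib.sha256).digest()  # y = PRF(k, t)
    return c_dec
```

In the scheme itself, only a collision-resistant hash of \(\mathsf {ct}_\mathsf{dummy}\) lives inside the public program, while \(\mathsf {ct}_\mathsf{dummy}\) itself is the (large) secret key on which leakage is measured.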

Fig. 11: One-time, bounded leakage encryption scheme, \(\mathcal {E}\)

Fig. 12: This program \(C_{k}\) is obfuscated and placed in the public key to be used for encryption

Fig. 13: This program \(C_{\mathsf{keys}}\) is obfuscated and placed in the public key. It is used during decryption

Fig. 14: Program \(C_{\widetilde{k}}\). This program replaces \(C_{k}\). It is obfuscated and placed in the public key to be used for encryption

Theorem 10

Assume

  • \(\mathsf {E} \) is a semantically secure symmetric key encryption scheme with ciphertexts of length \(L_\mathsf{ct}(\kappa , L_\mathsf{msg})\) for \(L_\mathsf{msg}\)-bit messages and security parameter \(\kappa \).

  • h is a collision-resistant hash function with output length \(L_\mathsf{h}(\kappa )\) for security parameter \(\kappa \).

  • \(G: {\{0,1\}}^\kappa \rightarrow {\{0,1\}}^\rho \) is a pseudorandom generator (\(\mathsf {PRG}\)) with output length \(\rho \ge 2\kappa \).

  • \(\mathsf{PRF}: {\{0,1\}}^\kappa \times {\{0,1\}}^\rho \rightarrow {\{0,1\}}^\rho \) is a puncturable pseudorandom function.

  • \({\mathsf {diO}} \) is a public-coin, differing-inputs obfuscator for circuits in this scheme.

  • \(\mathsf {Ext}: {\{0,1\}}^{\rho } \times {\{0,1\}}^d \rightarrow {\{0,1\}}^{L_\mathsf{msg}}\) is a \((L_\mathsf{msg}+ 2\log (1/\epsilon ), \epsilon )\)-strong extractor, where \(\epsilon = \mathsf{negl}(\kappa )\).

Then \(\mathcal {E}\) is L-leakage resilient against one-time key leakage where

$$\begin{aligned} L = |\mathsf{sk}| \cdot \frac{\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}-L_\mathsf{h}(\kappa )}{(L_\mathsf{ct}(\kappa , \kappa + \rho ))} \end{aligned}$$

Proof

First, note that extractors satisfying the requirements of Theorem 10 can be constructed via the Leftover Hash Lemma (cf. [36]). We can choose a semantically secure symmetric key encryption scheme with \(L_\mathsf{ct}(\kappa , \kappa + \rho ) = O(\kappa ) + \kappa + \rho \) for messages of length \(\kappa + \rho \), as this is achieved by appropriate modes of operation. Finally, choosing a collision-resistant hash function h with output length \(L_\mathsf{h}(\kappa ) = O(\kappa )\), and setting \(\rho = \omega (\kappa ),\epsilon = 2^{-\varTheta (\kappa )},L_\mathsf{msg}= \varTheta (\kappa )\), yields an encryption scheme for messages of length \(\varTheta (\kappa )\) with leakage rate \(1-o(1)\).
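For concreteness, one universal hash family to which the Leftover Hash Lemma applies is the Toeplitz family; the Python sketch below is an illustration of the principle (our own, not the paper's instantiation), extracting m output bits from a \(\rho \)-bit source using a \((\rho + m - 1)\)-bit seed w.

```python
def ext(y: int, w: int, rho: int, m: int) -> int:
    """Toeplitz-hash extractor: output bit j is the GF(2) inner product
    of the rho-bit source y with the j-th sliding window of seed w."""
    out = 0
    mask = (1 << rho) - 1
    for j in range(m):
        window = (w >> j) & mask                       # row j of the Toeplitz matrix
        out |= (bin(y & window).count("1") & 1) << j   # parity = GF(2) inner product
    return out
```

Since each output is a linear function of the source, the family is 2-universal, and the Leftover Hash Lemma gives a \((L_\mathsf{msg}+ 2\log (1/\epsilon ), \epsilon )\)-strong extractor for suitable m.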

In order to prove Theorem 10, we prove (in Lemma 15) that even under leakage, it is hard for any \(\textsc {ppt}\) adversary \(\mathcal {A}\) to distinguish the output of the extractor, \(\mathsf {Ext} \) from uniform random. Given this, Theorem 10 follows immediately. \(\square \)
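To see the rate claim concretely, the following sketch plugs illustrative constants of our own choosing (e.g., \(\rho = \kappa ^2\), \(\log (1/\epsilon ) = \kappa \), and small constant factors in \(L_\mathsf{ct}\) and \(L_\mathsf{h}\)) into the leakage bound of Theorem 10; since \(|\mathsf{sk}| \approx L_\mathsf{ct}(\kappa , \kappa + \rho )\), the ratio \(L/|\mathsf{sk}|\) tends to 1 as \(\kappa \) grows.

```python
def leakage_rate(kappa: int) -> float:
    # Illustrative parameter choices (assumptions, not the paper's constants):
    # rho = kappa**2 (so rho = omega(kappa)), log(1/eps) = kappa,
    # L_msg = kappa, L_h = 2*kappa, L_ct(kappa, m) = 3*kappa + m.
    rho = kappa ** 2
    L_msg = kappa
    L_h = 2 * kappa
    L_ct = 3 * kappa + (kappa + rho)
    return (rho - 2 * kappa - L_msg - L_h) / L_ct

# The rate is (rho - Theta(kappa)) / (rho + Theta(kappa)) -> 1 as kappa grows.
rates = [leakage_rate(k) for k in (32, 128, 512, 2048)]
```

Under these choices the rate climbs monotonically toward 1, matching the \(1-o(1)\) claim.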

Lemma 15

For every \(\textsc {ppt}\) leaking adversary \(\mathcal {A}\), who is given oracle access to a leakage oracle \(\mathcal {O}\) and may leak at most \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}- L_\mathsf{h}(\kappa )\) bits of the secret key, there exist random variables \(\widetilde{\mathsf {pk}}, \widetilde{\mathsf{sk}}\) such that:

$$\begin{aligned}&\left( \mathsf {pk}, t, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}) \right) \\&\quad {\mathop {\approx }\limits ^{c}} \left( \widetilde{\mathsf {pk}}, U_\rho , w, U_{L_\mathsf{msg}}, f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\widetilde{\mathsf {pk}}) \right) \end{aligned}$$

where \(y = C_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}, t)\) with \(t = G(r)\), and the distributions are taken over coins of \(\mathcal {A}\) and choice of \((\mathsf {pk}, \mathsf{sk}) \leftarrow \mathcal {E}.\mathsf{Gen}(1^\kappa ), w, r\) and choice of \(\widetilde{\mathsf {pk}}, \widetilde{\mathsf{sk}}, w\), respectively.

We prove the lemma via the following sequence of hybrids:

Hybrid 0: This hybrid is identical to the real game.

Let \(\mathcal {D}^\mathcal {A}_{H_0}\) denote the distribution \((\mathsf {pk}, t, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}))\) as in the left side of Lemma 15.

Hybrid 1: This hybrid is the same as Hybrid 0 except we replace pseudorandom \(t = G(r)\) in the challenge ciphertext with uniform random \(t^* \leftarrow \{0,1\}^\rho \).

Let \(\mathcal {D}^\mathcal {A}_{H_1}\) denote the distribution \((\mathsf {pk}, t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}))\) where \(y = C_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}, t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), choice of \((\mathsf {pk}, \mathsf{sk}),w, t^*\) as described above.

Claim 6

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_0} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_1}. \end{aligned}$$

Proof

The proof is by reduction to the security of the pseudorandom generator G. Assume toward contradiction that there exists a \(\textsc {ppt}\) adversary \(\mathcal {A}\), a corresponding \(\textsc {ppt}\) distinguisher D and a polynomial \(p(\cdot )\) such that for infinitely many \(\kappa , D\) distinguishes \(\mathcal {D}^\mathcal {A}_{H_0}\) and \(\mathcal {D}^\mathcal {A}_{H_1}\) with probability at least \(1/p(\kappa )\). We construct a \(\textsc {ppt}\) adversary \(\mathcal {S}\) that distinguishes the output of the PRG from uniform random with probability at least \(1/p(\kappa )\), for infinitely many \(\kappa \). \(\mathcal {S}\) does the following: \(\mathcal {S}\) runs \(\mathcal {E}.\mathsf{Gen}(1^\kappa )\) honestly to generate \((\mathsf {pk}, \mathsf{sk})\). \(\mathcal {S}\) hands \(\mathsf {pk}\) to \(\mathcal {A}\) and responds to leakage query \( f \) by applying the leakage function directly to \(\mathsf{sk}\) to compute \( f (\mathsf{sk})\). Upon receiving its challenge \(t'\) as the external PRG challenge, \(\mathcal {S}\) sets \(y = C_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}, t')\), hands \((\mathsf {pk}, t', w, \mathsf {Ext} (y, w), f (\mathsf{sk}))\) to the distinguisher D, and outputs whatever D does. The reader can verify that \(\mathcal {S}\)'s distinguishing advantage is the same as D's. \(\square \)

Hybrid 2: This hybrid is the same as Hybrid 1 except we replace the key k used in \(C_{\mathsf{Enc}}\) with a punctured key, \(\widetilde{k}=\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), and denote it as \(C'_{\mathsf{Enc}}\). We denote the resulting public key by \(\mathsf {pk}'\).

Let \(\mathcal {D}^\mathcal {A}_{H_2}\) denote the distribution \(\left( \mathsf {pk}', t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}') \right) \) where \(y = C_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}, t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\mathsf {pk}', \mathsf{sk}),w, t^*\) as described above.

Claim 7

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_1} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_2}. \end{aligned}$$

Proof

The proof is by a reduction to the security of the indistinguishability obfuscation \(({\mathsf {iO}})\). The main observation is that with all but negligible probability, \(t^*\) is not in the range of the PRG, in which case \(C_{\mathsf{Enc}}\) and the modified circuit \(C'_{\mathsf{Enc}}\) used in Hybrid 2 have identical behavior: with overwhelming probability, on no input does either program evaluate the PRF at the punctured point \(t^*\). Therefore, puncturing \(t^*\) out of the key k does not affect the input/output behavior, and if there is a difference in advantage, we can construct an algorithm \({\mathcal {B}} \) that breaks the security of indistinguishability obfuscation.

\({\mathcal {B}} \) runs as the challenger, but with \(t^*\) chosen at random. When it is to create the obfuscated program, it submits both programs \(C_0= C_{k}\) and \(C_1=C_{\widetilde{k}}\) to an \({\mathsf {iO}} \) challenger. If the \({\mathsf {iO}} \) challenger chooses the first, then we are in Hybrid 1; if it chooses the second, then we are in Hybrid 2. Thus, any adversary with a non-negligible difference in advantage between the two hybrids yields, via \({\mathcal {B}} \), an attacker on \({\mathsf {iO}} \) security. Note that since we only require indistinguishability obfuscation for this hybrid, it suffices in our construction to generate \(C_{\mathsf{Enc}} \leftarrow {\mathsf {iO}} (C_k)\), i.e., we require only \({\mathsf {iO}} \), rather than \({\mathsf {diO}} \), for this program. \(\square \)

Hybrid 3: This hybrid is the same as Hybrid 2 except:

  • we replace \(\mathsf {ct}_\mathsf{dummy}\) with \(\mathsf {ct}_\mathsf{dummy}''\), where \(\mathsf {ct}_\mathsf{dummy}''\) is an encryption of \(t^* || y\) and \(y = \mathsf{PRF}.\mathsf{Eval}(k, t^*)\).

  • we replace \(h^*\) with \(h^{''*} = h(\mathsf {ct}_\mathsf{dummy}'')\).

We denote the resulting public key by \(\mathsf {pk}''\) and the resulting secret key by \(\mathsf{sk}''\).

Let \(\mathcal {D}^\mathcal {A}_{H_3}\) denote the distribution \(\left( \mathsf {pk}'', t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}'') \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\mathsf {pk}'') \right) \) where \(y = C_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}'', t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\mathsf {pk}'', \mathsf{sk}''),w, t^*\) as described above.

Claim 8

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_2} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_3}. \end{aligned}$$

Proof

The proof is by a reduction to the semantic security of \(\mathsf {E} \). \(\square \)

Hybrid 4: This hybrid is the same as Hybrid 3 except we replace \(C_{\mathsf{Dec}} = {\mathsf {diO}} (C_{\mathsf{keys}})\) with \(C'_{\mathsf{Dec}} = {\mathsf {diO}} (C_{\mathsf{keys}'})\), where \(C_{\mathsf{keys}'}\) is specified in Fig. 15. We denote by \(\widehat{\mathsf {pk}}\) the resulting public key.

Fig. 15: Program \(C_{\mathsf{keys}'}\). This program replaces \(C_{\mathsf{keys}}\). It is obfuscated and placed in the public key. It is used during decryption

Let \(\mathcal {D}^\mathcal {A}_{H_4}\) denote the distribution \(\left( \widehat{\mathsf {pk}}, t^*, w, \mathsf {Ext} (y, w), f (\mathsf{sk}'') \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\widehat{\mathsf {pk}}) \right) \) where \(y = C'_{\mathsf{Dec}}(\mathsf {ct}_\mathsf{dummy}'', t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\widehat{\mathsf {pk}}, \mathsf{sk}''),w, t^*\) as described above.

Claim 9

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_3} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_4}. \end{aligned}$$

Proof

We define the following sampler \(\mathsf {Samp}\) and show that the circuit family \(\mathcal {C}\) associated with \(\mathsf {Samp}\) is a differing-inputs circuit family.

\(\mathsf {Samp}(1^\kappa )\) does the following:

  • Set \({\mathsf{keys}} = (k, h^{''*})\) and set \({\mathsf{keys}'} = (\mathsf{sk}_\mathsf {E}, \widetilde{k}, h^{''*})\).

  • Let \(C_0 = C_{\mathsf{keys}}\) and let \(C_1 = C_{\mathsf{keys}'}\).

  • Set \(\mathsf {aux}= (\mathsf{sk}_\mathsf {E}, h, h^{''*}, \mathsf {ct}_\mathsf{dummy}'', r, t^*, y)\), where r is the randomness used for \(\mathsf {ct}_\mathsf{dummy}''\).

  • Return \((C_0, C_1, \mathsf {aux})\).

Note that \(\mathsf {aux}\) contains all of the random coins used by \(\mathsf {Samp}\).

We now show that for every \(\textsc {ppt}\) adversary \(\mathcal {A}\) there exists a negligible function \(\mathsf{negl}\) such that

$$\begin{aligned}&\Pr [C_0(x) \ne C_1(x): (C_0, C_1, \mathsf {aux}) \leftarrow \mathsf {Samp}(1^\kappa ), \\&\quad x\leftarrow \mathcal {A}(1^\kappa , C_0, C_1, \mathsf {aux})] \le \mathsf{negl}(\kappa ). \end{aligned}$$

Assume toward contradiction that there exists a \(\textsc {ppt}\) adversary \(\mathcal {A}\) and a polynomial \(p(\cdot )\) such that for infinitely many \(\kappa , \mathcal {A}\) outputs a distinguishing input with probability at least \(1/p(\kappa )\). We construct a \(\textsc {ppt}\) adversary \(\mathcal {S}\) that finds a collision on h.

On input \(h \leftarrow {\mathcal {H}},\mathcal {S}\) does the following:

  • \(\mathcal {S}\) simulates \(\mathsf {Samp}\) by doing the following:

    • Run \((\mathsf{sk}_\mathsf {E})\leftarrow \mathsf {E}.\mathsf {Gen} (1^\kappa ), k \leftarrow \mathsf{PRF}.\mathsf {Gen} (1^\kappa )\). Choose \(t^*\) at random and set \(\widetilde{k} = \mathsf{PRF}.\mathsf{Punct}(k, t^*)\).

    • \(\mathcal {S}\) computes \(y = \mathsf{PRF}.\mathsf{Eval}(k, t^*)\) and encrypts \(t^* || y\) to generate \(\mathsf {ct}_\mathsf{dummy}''\). It computes \(h^{''*} = h(\mathsf {ct}_\mathsf{dummy}'')\).

    • Set \({\mathsf{keys}} = (k, h^{''*})\) and \({\mathsf{keys}'} = (\mathsf{sk}_\mathsf {E}, \widetilde{k}, h^{''*})\).

    • Let \(C_0 = C_{\mathsf{keys}}\) and let \(C_1 = C_{\mathsf{keys}'}\).

    • Set \(\mathsf {aux}= (\mathsf{sk}_\mathsf {E}, h, h^{''*}, \mathsf {ct}_\mathsf{dummy}'', r, t^*, y)\).

  • \(\mathcal {S}\) runs \(\mathcal {A}(1^\kappa , C_0, C_1, \mathsf {aux})\) and receives x in return.

  • \(\mathcal {S}\) parses x as (m, t) and outputs the colliding pair \((m, \mathsf {ct}_\mathsf{dummy}'')\).

Note that \(C_0(\mathsf {ct}_\mathsf{dummy}'', \cdot )\) and \(C_1(\mathsf {ct}_\mathsf{dummy}'', \cdot )\) are functionally equivalent. Furthermore, on any input (m, t) where \(h(m) \ne h^{''*}\), both circuits output \(\bot \). Therefore, if \(\mathcal {A}\) finds a differing input \(x = (m, t)\), then it must be the case that both of the following hold:

  • \((m \ne \mathsf {ct}_\mathsf{dummy}'')\)

  • \(h(m) = h^{''*}\).

Thus, whenever \(\mathcal {A}\) outputs a differing input, \(\mathcal {S}\) successfully finds a collision on h. Therefore, we have that for infinitely many \(\kappa , \mathcal {S}\) outputs a collision with probability at least \(1/p(\kappa )\).

Claim 9 follows from the fact that \({\mathsf {diO}} \) is a public-coin differing-inputs obfuscator and from the fact that the circuit family \(\mathcal {C}\) associated with \(\mathsf {Samp}\) is a differing-inputs family. This is the case since \(\mathcal {D}^\mathcal {A}_{H_3}\) can be simulated given \(({\mathsf {diO}} (C_0), \mathsf {aux})\) and \(\mathcal {D}^\mathcal {A}_{H_4}\) can be simulated given \(({\mathsf {diO}} (C_1), \mathsf {aux})\). \(\square \)

Hybrid 5: This hybrid is the same as Hybrid 4 except we replace \(\mathsf {ct}_\mathsf{dummy}''\) with \(\widetilde{\mathsf {ct}}_\mathsf{dummy}\), where \(\widetilde{\mathsf {ct}}_\mathsf{dummy}\) is an encryption of \(t^* || y^*\), where \(y^*\) is uniformly random. We denote by \(\widetilde{\mathsf{sk}}\) the resulting secret key. We replace \(h^*\) with \(\widetilde{h}^* = h(\widetilde{\mathsf {ct}}_\mathsf{dummy})\) and denote by \(\widetilde{\mathsf {pk}}\) the updated public key.

Let \(\mathcal {D}^\mathcal {A}_{H_5}\) denote the distribution \(\left( \widetilde{\mathsf {pk}}, t^*, w, \mathsf {Ext} (y^*, w), f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\widetilde{\mathsf {pk}}) \right) \) where \(y^* = C'_{\mathsf{Dec}}(\widetilde{\mathsf {ct}}_\mathsf{dummy}, t^*)\) and the distribution is taken over coins of \(\mathcal {A}\), and choice of \((\widetilde{\mathsf {pk}}, \widetilde{\mathsf{sk}}),w, t^*\) as described above.

Claim 10

For every \(\textsc {ppt}\) adversary \(\mathcal {A}\),

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_4} {\mathop {\approx }\limits ^{c}} \mathcal {D}^\mathcal {A}_{H_5}. \end{aligned}$$

Proof

The proof is through a reduction to the security of the puncturable \(\mathsf{PRF}\). Recall, the security notion of puncturable \(\mathsf{PRF}\)s states that, given \(\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), an adversary cannot distinguish \(\mathsf{PRF}.\mathsf{Eval}(k, t^*)\) from random. The reduction is straightforward: to break the security of the \(\mathsf{PRF}, \mathcal {S}\) generates \(t^*\) at random and submits it to his challenger. He receives \(\mathsf{PRF}.\mathsf{Punct}(k, t^*)\), along with either \(y^* = \mathsf{PRF}.\mathsf{Eval}(k, t^*)\) or \(y^* \leftarrow \{0,1\}^\rho \) as a challenge. He uses \(y^*\), and samples all the remaining necessary keys for simulating \(\widetilde{\mathsf {pk}}\) and \(\widetilde{\mathsf{sk}}\). He chooses w at random and computes \(\mathsf {Ext} (y^*, w)\). He answers leakage queries on \(\widetilde{\mathsf{sk}}\) honestly. The reader can verify that \(\mathcal {S}\)'s advantage is the same as \(\mathcal {A}\)'s advantage in distinguishing the two hybrids. \(\square \)

Claim 11

$$\begin{aligned} \mathcal {D}^\mathcal {A}_{H_5} {\mathop {\approx }\limits ^{s}} \left( \widetilde{\mathsf {pk}}, U_{\rho }, w, U_{L_\mathsf{msg}}, f (\widetilde{\mathsf{sk}}) \leftarrow \mathcal {A}^{\mathcal {O}(\cdot )}(\widetilde{\mathsf {pk}}) \right) \end{aligned}$$

Note that the right side above is the same as the right side of Lemma 15.

Proof

We claim that the min-entropy of \(y^*\) conditioned on \(\widetilde{\mathsf {pk}}, f (\widetilde{\mathsf{sk}})\) is at least \(L_\mathsf{msg}+ 2\log (1/\epsilon )\). Note that \(y^*\) initially has min-entropy \(\rho \) since it is chosen uniformly at random. Recall that \(\mathsf {ct}_\mathsf{dummy}\) has length \(L_\mathsf{ct}(\kappa , \kappa + \rho )\), and h has output length \(L_\mathsf{h}(\kappa )\). Thus, conditioning on \(\widetilde{\mathsf {pk}}\) reduces \(y^*\)'s min-entropy by at most \(L_\mathsf{h}(\kappa )\) (since only \(\widetilde{h}^*\) contains information about \(y^*\)). Moreover, leaking another \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}- L_\mathsf{h}(\kappa )\) bits of \(\mathsf {ct}_\mathsf{dummy}\) reduces \(y^*\)'s min-entropy further by at most \(\rho - 2\log (1/\epsilon ) - L_\mathsf{msg}- L_\mathsf{h}(\kappa )\). Therefore, \(y^*\) maintains min-entropy at least \(L_\mathsf{msg}+ 2\log (1/\epsilon )\), and the claim follows by the properties of the strong extractor, \(\mathsf {Ext} \). \(\square \)
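Writing \(\widetilde{H}_\infty \) for (average) conditional min-entropy, the chain of entropy losses is

$$\begin{aligned} \widetilde{H}_\infty \left( y^* \mid \widetilde{\mathsf {pk}}, f (\widetilde{\mathsf{sk}}) \right)&\ge \rho - L_\mathsf{h}(\kappa ) - \left( \rho - 2\log (1/\epsilon ) - L_\mathsf{msg}- L_\mathsf{h}(\kappa ) \right) \\&= L_\mathsf{msg}+ 2\log (1/\epsilon ). \end{aligned}$$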

This concludes the proof of Lemma 15. \(\square \)

6 Continual Leakage Resilience for One-Way Relations

Dodis et al. [21] defined one-way relations (OWR) in the regular continual leakage resilience setting and presented a construction based on a simpler primitive, the leakage-indistinguishable re-randomizable relation (LIRR). In this section, we first extend their definition to 2CLR and to CLR with leakage on key updates. We then prove that their LIRR-based construction actually achieves 2CLR security. Applying our generic transformation from Sect. 3, we obtain a construction achieving CLR with leakage on key updates. Additionally, we give a new construction of 2CLR OWR based on 2CLR PKE, which can be obtained from the previous sections.

6.1 Continual Leakage Model

A one-way relation scheme \(\mathsf{OWR}\) consists of two algorithms: \(\mathsf{OWR}.\mathsf {Gen} \) and \(\mathsf{OWR}.\mathsf {Verify} \). In the continual leakage setting, we require an additional algorithm \(\mathsf{OWR}.\mathsf {Update} \) which updates the secret keys. Note that the public key remains unchanged.

  • \(\mathsf{OWR}.\mathsf {Gen} (1^\kappa ) \rightarrow (\mathsf {pk}, \mathsf{sk}_0)\). The key generation algorithm takes in the security parameter \(\kappa \), and outputs a secret key \(\mathsf{sk}_0\) and a public key \(\mathsf {pk}\).

  • \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}) \rightarrow {{\{0,1\}}^{}} \). The verification algorithm takes in the public key \(\mathsf {pk}\) and a secret key \(\mathsf{sk}\), and outputs either 0 or 1.

  • \(\mathsf{OWR}.\mathsf {Update} (\mathsf{sk}_{i-1}) \rightarrow \mathsf{sk}_i\). The update algorithm takes in a secret key \(\mathsf{sk}_{i-1}\) and produces a new secret key \(\mathsf{sk}_i\) for the same public key.

Correctness. The OWR scheme satisfies correctness if for any polynomial \(q=q(\kappa )\), it holds that for all \(i\in \{0,1, \ldots , q\},\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk},\mathsf{sk}_i)=1\), where \((\mathsf {pk}, \mathsf{sk}_0)\leftarrow \mathsf{OWR}.\mathsf {Gen} (1^\kappa )\), and \(\mathsf{sk}_{i+1} \leftarrow \mathsf{OWR}.\mathsf {Update} (\mathsf{sk}_{i})\).
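The syntax and the correctness requirement can be illustrated with a toy one-way relation in Python (our own example, in no way leakage resilient): the secret key is an additive sharing of a discrete logarithm, and \(\mathsf {Update} \) re-randomizes the shares while \(\mathsf {pk}\) stays fixed. The group and constants below are illustrative assumptions.

```python
import secrets

# Toy group: P is the Mersenne prime 2^127 - 1; G is an arbitrary base.
P = 2 ** 127 - 1
G = 3

def gen():
    """OWR.Gen: pk = G^x mod P; sk_0 = additive shares (x1, x2) of x mod P-1."""
    x = secrets.randbelow(P - 1)
    x1 = secrets.randbelow(P - 1)
    return pow(G, x, P), (x1, (x - x1) % (P - 1))

def verify(pk, sk) -> bool:
    """OWR.Verify: accept any sharing whose exponents recombine to x."""
    x1, x2 = sk
    return (pow(G, x1, P) * pow(G, x2, P)) % P == pk

def update(sk):
    """OWR.Update: re-randomize the shares; pk is untouched."""
    x1, x2 = sk
    d = secrets.randbelow(P - 1)
    return ((x1 + d) % (P - 1), (x2 - d) % (P - 1))
```

Every updated key still verifies against the same \(\mathsf {pk}\), which is exactly the correctness condition above; the continual-leakage definitions then ask that no leakage strategy across updates yields a valid \(\mathsf{sk}^*\).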

Security. We define continual leakage security for one-way relations in terms of the following game between a challenger and an attacker. We let \(\kappa \) denote the security parameter, and the parameter \(\mu \) controls the amount of leakage allowed.

Setup Phase. :

The game begins with a setup phase. The challenger calls \(\mathsf{OWR}.\mathsf{Gen}(1^\kappa )\) to create the initial secret key \(\mathsf{sk}_0\) and public key \(\mathsf {pk}\). It gives \(\mathsf {pk}\) to the attacker. No leakage is allowed in this phase.

Query Phase. :

In this phase, the attacker makes a polynomial number of leakage queries. Each time, say in the ith query, the attacker provides an efficiently computable leakage function \( f _i\) whose output is at most \(\mu \) bits, and the challenger chooses randomness \(r_i\), updates the secret key from \(\mathsf{sk}_{i-1}\) to \(\mathsf{sk}_i\), and gives the attacker the leakage response \(\ell _i\). In the CLR model, the leakage function is applied to a single secret key, and the leakage response is \(\ell _i = f _i(\mathsf{sk}_{i-1})\). In the 2CLR model, the leakage function is applied to two consecutive secret keys, i.e., \(\ell _i = f _i(\mathsf{sk}_{i-1}, \mathsf{sk}_{i})\). In the model of CLR with leakage on key updates, the leakage function is applied to the current secret key and the randomness used for updating it, i.e., \(\ell _i = f _i(\mathsf{sk}_{i-1}, r_{i})\).

Recovery Phase. :

The attacker outputs some \(\mathsf{sk}^*\). The attacker wins the game if \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}^*) = 1\). We define the success probability of the attacker in this game as \(\Pr [\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}^*) = 1]\).

Definition 11

(Continual Leakage Resilience) We say a one-way relation scheme is \(\mu \)-CLR secure (respectively, \(\mu \)-2CLR secure, or \(\mu \)-CLR secure with leakage on key updates) if any \(\textsc {ppt}\) attacker only has a negligible advantage (negligible in \(\kappa \)) in the above game.

6.2 Construction Based on LIRR

6.2.1 Leakage-Indistinguishable Re-randomizable Relation

In [21], Dodis et al. introduced a new primitive, the leakage-indistinguishable re-randomizable relation (LIRR), and showed that it can be used to construct an OWR in the CLR model, where the adversary may leak on the secret key in each round of the leakage attack. A LIRR allows one to sample two types of secret keys: “good” keys and “bad” keys. Both types look valid and are accepted by the verification procedure, but they are produced in very different ways. In fact, given the ability to produce good keys, it is hard to produce any bad key, and vice versa. On the other hand, even though the two types of keys are very different, they are hard to distinguish from each other. More precisely, given the ability to produce both types of keys, and \(\mu \) bits of leakage on a “challenge” key of an unknown type (good or bad), it is hard to come up with a new key of the same type as the challenge. More formally, a LIRR consists of \(\textsc {ppt}\) algorithms \((\mathsf {Setup}, \mathsf {SampG}, \mathsf {SampB}, \mathsf {Update}, \mathsf {Verify}, \mathsf {isGood})\) with the following syntax:

  • \((\mathsf {pk},s_G,s_B,\mathsf {dk})\leftarrow \mathsf {Setup} (1^{\kappa })\): This algorithm returns a public key \(\mathsf {pk}\), a “good” sampling key \(s_G\), a “bad” sampling key \(s_B\), and a distinguishing trapdoor \(\mathsf {dk} \).

  • \(\mathsf{sk}_G\leftarrow \mathsf {SampG}_\mathsf {pk}(s_G)\) and \(\mathsf{sk}_B\leftarrow \mathsf {SampB}_\mathsf {pk}(s_B)\): These algorithms sample good/bad secret keys using good/bad sampling keys, respectively. We omit the subscript \(\mathsf {pk}\) when clear from context.

  • \(b\leftarrow \mathsf {isGood}(\mathsf {pk},\mathsf{sk},\mathsf {dk})\): This algorithm uses \(\mathsf {dk} \) to distinguish good secret keys \(\mathsf{sk}\) from bad ones.

  • \(\mathsf{sk}_i \leftarrow \mathsf {Update} (\mathsf{sk}_{i-1})\) and \(b\leftarrow \mathsf {Verify} (\mathsf {pk},\mathsf{sk})\): These two algorithms have the same syntax as in the definition of OWR in the CLR model.

Definition 12

We say \((\mathsf {Setup},\mathsf {SampG},\mathsf {SampB},\mathsf {Update},\mathsf {Verify},\mathsf {isGood})\) is a \(\mu \)-leakage-indistinguishable re-randomizable relation (\(\mu \)-LIRR) if it satisfies the following properties:

  • Correctness: If \((\mathsf {pk}, s_G, s_B, \mathsf {dk})\leftarrow \mathsf {Setup} (1^{\kappa })\), \(\mathsf{sk}_G \leftarrow \mathsf {SampG}(s_G),\mathsf{sk}_B \leftarrow \mathsf {SampB}(s_B)\) then \(\Pr \left[ \begin{array}{l} \mathsf {Verify} (\mathsf {pk}, \mathsf{sk}_G) = 1 \bigwedge \mathsf {isGood}(\mathsf {pk}, \mathsf{sk}_G, \mathsf {dk}) = 1\\ \qquad \bigwedge \mathsf {Verify} (\mathsf {pk}, \mathsf{sk}_B) = 1 \bigwedge \mathsf {isGood}(\mathsf {pk}, \mathsf{sk}_B, \mathsf {dk}) = 0 \end{array} \right] =1-\mathsf{negl}(\kappa )\)

  • Re-Randomization: We require that \((\mathsf {pk}, s_G, \mathsf{sk}_0, \mathsf{sk}_1) {\mathop {\approx }\limits ^{\text {c}}}(\mathsf {pk}, s_G, \mathsf{sk}_0, \mathsf{sk}'_1) \), where \((\mathsf {pk}, s_G, s_B, \mathsf {dk}) \leftarrow \mathsf {Setup} (1^{\kappa })\), \(\mathsf{sk}_0\leftarrow \mathsf {SampG}(s_G)\), \(\mathsf{sk}_1\leftarrow \mathsf {Update} (\mathsf{sk}_0)\), and \(\mathsf{sk}'_1\leftarrow \mathsf {SampG}(s_G)\).

  • Hardness of Bad Keys: Given \(s_G\), it is hard to produce a valid “bad” key. Formally, for any \(\textsc {ppt}\) adversary \(\mathcal {A}\),

    $$\begin{aligned} \Pr \left[ \begin{array}{c} (\mathsf {pk}, s_G, s_B, \mathsf {dk}) \leftarrow \mathsf {Setup} (1^{\kappa }), \mathsf{sk}^*\leftarrow \mathcal {A}(\mathsf {pk},s_G) : \\ \qquad \mathsf {Verify} (\mathsf {pk},\mathsf{sk}^*)=1 \bigwedge \mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*,\mathsf {dk})=0 \end{array} \right] \le \mathsf{negl}(\kappa )\end{aligned}$$
  • Hardness of Good Keys: Given \(s_B\), it is hard to produce a valid “good” key. Formally, for any \(\textsc {ppt}\) adversary \(\mathcal {A}\),

    $$\begin{aligned} \Pr \left[ \begin{array}{c} (\mathsf {pk}, s_G, s_B, \mathsf {dk}) \leftarrow \mathsf {Setup} (1^{\kappa }), \mathsf{sk}^*\leftarrow \mathcal {A}(\mathsf {pk},s_B) : \\ \qquad \qquad \mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*,\mathsf {dk})=1 \end{array} \right] \le \mathsf{negl}(\kappa ) \end{aligned}$$
  • \(\mu \)-Leakage Indistinguishability: Given both sampling keys \(s_G, s_B\), and \(\mu \) bits of leakage on a secret key \(\mathsf{sk}\) (which is either good or bad), it is hard to produce a secret key \(\mathsf{sk}^*\) of the same type as \(\mathsf{sk}\). Formally, for any \(\textsc {ppt}\) adversary \(\mathcal {A}\), we have \(|\Pr [\mathcal {A}\ \ \mathrm {wins}\ ] -1/2| \le \mathsf{negl}(\kappa )\) in the following game:

  • The challenger chooses \((\mathsf {pk}, s_G, s_B, \mathsf {dk})\leftarrow \mathsf {Setup} (1^{\kappa })\) and gives \(\mathsf {pk}, s_G, s_B\) to \(\mathcal {A}\). The challenger chooses a random bit \(b\in \{0,1\}\). If \(b = 1\), then it samples \(\mathsf{sk}\leftarrow \mathsf {SampG}(s_G)\), and otherwise, it samples \(\mathsf{sk}\leftarrow \mathsf {SampB}(s_B)\).

  • The adversary \(\mathcal {A}\) can adaptively query a leakage oracle on \(\mathsf{sk}\), learning at most \(\mu \) bits of leakage in total.

  • The adversary outputs \(\mathsf{sk}^*\) and wins if \(\mathsf {isGood}(\mathsf {pk}, \mathsf{sk}^*, \mathsf {dk}) = b\).

6.2.2 Construction

A \(2\mu \)-LIRR can be used to construct a \(\mu \)-2CLR-secure OWR, as follows:

  • \(\mathsf {Gen} (1^{\kappa })\): Sample \((\mathsf {pk},s_G,\cdot , \cdot )\leftarrow \mathsf {Setup} (1^{\kappa })\), \(\mathsf{sk}\leftarrow \mathsf {SampG}(s_G)\), and output \((\mathsf {pk},\mathsf{sk})\).

  • \(\mathsf {Update} (\cdot ), \mathsf {Verify} (\cdot ,\cdot )\): Same as for the LIRR.

Note that the CLR-OWR completely ignores the bad sampling algorithm \(\mathsf {SampB}\), the “bad” sampling key \(s_B\), the distinguishing algorithm \(\mathsf {isGood}\), and the distinguishing key \(\mathsf {dk} \) of the LIRR. These are only used in the argument of security. Moreover, the “good” sampling key \(s_G\) is only used as an intermediate step during key generation to sample the secret key \(\mathsf{sk}\), but is never explicitly stored afterward.
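This separation of roles can be sketched as a thin wrapper. The following is illustrative Python; the `lirr` object with `setup`/`samp_good`/`update`/`verify` methods is a hypothetical interface for the algorithms above, not a concrete instantiation.

```python
# Schematic wrapper showing which LIRR components the OWR actually uses.
# SampB, isGood, and the distinguishing key dk are never touched outside
# the security proof.

def owr_gen(lirr, kappa):
    pk, s_good, _s_bad, _dk = lirr.setup(kappa)  # s_bad and dk are discarded
    sk = lirr.samp_good(pk, s_good)              # s_good is used once, then dropped
    return pk, sk

def owr_update(lirr, sk):
    return lirr.update(sk)                       # same as LIRR Update

def owr_verify(lirr, pk, sk):
    return lirr.verify(pk, sk)                   # same as LIRR Verify
```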

Theorem 11

Given any \(2 \mu \)-LIRR scheme, the construction above is a \(\mu \)-2CLR-secure OWR.

Proof

The proof is very similar to that in [21]. To prove the theorem statement, we develop a sequence of games. \(\square \)

Game \(\mathcal {H}_0\): This is the original \(\mu \)-2CLR game of Definition 11. In each round of the leakage attack, the adversary may apply a leakage function to two consecutive secret keys.

Games \(\mathcal {H}_{0.i}\)–\(\mathcal {H}_1\): Let q be the total number of leakage rounds for which \(\mathcal {A}\) runs. We define the Games \(\mathcal {H}_{0.i}\) for \(i = 0, 1, \ldots , q\) as follows. The challenger initially samples \((\mathsf {pk}, s_G, s_B, \mathsf {dk}) \leftarrow \mathsf {Setup} (1^{\kappa })\) and \(\mathsf{sk}_0\leftarrow \mathsf {SampG}(s_G)\) and gives \(\mathsf {pk}\) to \(\mathcal {A}\). The game then proceeds as before with many leakage rounds, except that the secret keys in rounds \(j \le i\) are generated as \(\mathsf{sk}_j\leftarrow \mathsf {SampG}(s_G)\), independently of all previous rounds, and in the rounds \(j > i\), they are generated as \(\mathsf{sk}_j \leftarrow \mathsf {Update} (\mathsf{sk}_{j-1})\). Note that Game \(\mathcal {H}_{0.0}\) is the same as Game \(\mathcal {H}_0\), and we define Game \(\mathcal {H}_1\) to be the same as Game \(\mathcal {H}_{0.q}\).

Claim 12

For \(i=1, \ldots , q\), it holds that \(|\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{0.(i-1)}] -\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{0.i}] | \le \mathsf{negl}(\kappa )\)

Proof

We use the re-randomization property to argue that, for \(i = 1, \ldots , q\), the winning probability of \(\mathcal {A}\) is the same in Game \(\mathcal {H}_{0.(i-1)}\) as in Game \(\mathcal {H}_{0.i}\), up to negligible factors. We construct a reduction \(\mathcal {B}\), with input \((\mathsf {pk}, s_G,\mathsf{sk}', \mathsf{sk}'')\). Here \(\mathsf{sk}' \leftarrow \mathsf {SampG}(s_G)\), and \(\mathsf{sk}''\) is sampled based on randomly chosen b: if \(b=1\), then \(\mathsf{sk}''\leftarrow \mathsf {SampG}(s_G)\) and if \(b=0\), then \(\mathsf{sk}''\leftarrow \mathsf {Update} (\mathsf{sk}')\).

More concretely, the reduction \(\mathcal {B}\) internally emulates a copy of \(\mathcal {A}\) and simulates its view as follows: for all \(j < i-1\), \(\mathcal {B}\) generates \(\mathsf{sk}_j \leftarrow \mathsf {SampG}(s_G)\); it sets \(\mathsf{sk}_{i-1}:=\mathsf{sk}'\) and \(\mathsf{sk}_{i}:=\mathsf{sk}''\); and for all \(j > i\), it generates \(\mathsf{sk}_j \leftarrow \mathsf {Update} (\mathsf{sk}_{j-1})\). Upon receiving a leakage query \( f _j\) from \(\mathcal {A}\), the reduction \(\mathcal {B}\) returns \( f _j(\mathsf{sk}_{j-1}, \mathsf{sk}_j)\) to \(\mathcal {A}\).

If \(\mathsf{sk}''\) was generated via \(\mathsf {SampG}\), this corresponds to the view of \(\mathcal {A}\) in Game \(\mathcal {H}_{0.i}\); if \(\mathsf{sk}''\) was generated via \(\mathsf {Update} \), it corresponds to Game \(\mathcal {H}_{0.(i-1)}\). Therefore, if \(\mathcal {A}\) can distinguish the two games, then \(\mathcal {B}\) breaks the re-randomization property. \(\square \)

Game \(\mathcal {H}_2\): Game \(\mathcal {H}_2\) is the same as Game \(\mathcal {H}_1\), except for the winning condition: now the adversary wins only if, at the end, it outputs \(\mathsf{sk}^*\) such that \(\mathsf {isGood}(\mathsf {pk}, \mathsf{sk}^*, \mathsf {dk}) = 1\).

Claim 13

\(|\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{1}] -\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{2}] | \le \mathsf{negl}(\kappa )\)

Proof

The winning probability of \(\mathcal {A}\) in Game \(\mathcal {H}_2\) is at least that of Game \(\mathcal {H}_1\) minus the probability that \(\mathsf{sk}^*\) satisfies \(\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}^*)= 1 \wedge \mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*, \mathsf {dk}) = 0\). However, since the entire interaction between the challenger and the adversary in games \(\mathcal {H}_1,\mathcal {H}_2\) can be simulated using \((\mathsf {pk}, s_G)\), we can use the “hardness of bad keys” property to argue that the probability of the above happening is negligible. Therefore, the probability of \(\mathcal {A}\) winning in Game \(\mathcal {H}_2\) is at least that of Game \(\mathcal {H}_1\), up to negligible factors. \(\square \)

Games \(\mathcal {H}_{2.i}\)–\(\mathcal {H}_3\): Let q be the total number of leakage rounds for which \(\mathcal {A}\) runs. We define the Games \(\mathcal {H}_{2.i}\) for \(i = 0, 1, \ldots , q\) as follows. The challenger initially samples \((\mathsf {pk}, s_G, s_B, \mathsf {dk})\leftarrow \mathsf {Setup} (1^{\kappa })\) and gives \(\mathsf {pk}\) to \(\mathcal {A}\). The game then proceeds as before with many leakage rounds, except that the secret keys in rounds \(j \le i\) are generated as \(\mathsf{sk}_j \leftarrow \mathsf {SampB}(s_B)\), and in the rounds \(j > i\), they are generated as \(\mathsf{sk}_j \leftarrow \mathsf {SampG}(s_G)\). Note that Game \(\mathcal {H}_{2.0}\) is the same as Game \(\mathcal {H}_2\), and we define Game \(\mathcal {H}_3\) to be the same as Game \(\mathcal {H}_{2.q}\).

Claim 14

For \(i = 1, \ldots , q\), it holds that \(|\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{2.(i-1)}] -\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{2.i}] | \le \mathsf{negl}(\kappa )\).

Proof

We use the \(2\mu \)-leakage indistinguishability property to argue that, for \(i = 1, \ldots , q\), the winning probability of \(\mathcal {A}\) is the same in Game \(\mathcal {H}_{2.(i-1)}\) as in Game \(\mathcal {H}_{2.i}\), up to negligible factors. We construct a reduction \(\mathcal {B}\), with input \((\mathsf {pk}, s_G,s_B)\) and with leakage access to \(\mathsf{sk}\). Here \(\mathsf{sk}\) is sampled based on randomly chosen b: if \(b=1\), then \(\mathsf{sk}\leftarrow \mathsf {SampG}(s_G)\) and if \(b=0\), then \(\mathsf{sk}\leftarrow \mathsf {SampB}(s_B)\).

More concretely, the reduction \(\mathcal {B}\) internally emulates a copy of \(\mathcal {A}\) and simulates its view as follows: in each leakage round \(j < i\), \(\mathcal {B}\) uses \(s_B\) to generate \(\mathsf{sk}_j \leftarrow \mathsf {SampB}(s_B)\), and in each round \(j > i\), it uses \(s_G\) to generate \(\mathsf{sk}_j \leftarrow \mathsf {SampG}(s_G)\); the challenge key \(\mathsf{sk}\) plays the role of \(\mathsf{sk}_i\). Upon receiving a leakage query \( f _j\) from \(\mathcal {A}\) in leakage round j: if \(j<i\), \(\mathcal {B}\) returns \( f _j(\mathsf{sk}_{j-1}, \mathsf{sk}_j)\) to \(\mathcal {A}\); if \(j=i\), \(\mathcal {B}\) defines \({\hat{ f }}_j = f _j(\mathsf{sk}_{j-1},\cdot )\), applies \({\hat{ f }}_j\) to \(\mathsf{sk}\) via its leakage oracle, and returns \({\hat{ f }}_j(\mathsf{sk})\) to \(\mathcal {A}\); if \(j=i+1\), \(\mathcal {B}\) defines \({\hat{ f }}_j = f _j(\cdot ,\mathsf{sk}_{j})\), applies \({\hat{ f }}_j\) to \(\mathsf{sk}\), and returns \({\hat{ f }}_j(\mathsf{sk})\) to \(\mathcal {A}\); if \(j>i+1\), \(\mathcal {B}\) returns \( f _j(\mathsf{sk}_{j-1}, \mathsf{sk}_j)\) to \(\mathcal {A}\). At the end, \(\mathcal {B}\) outputs the value \(\mathsf{sk}^*\) output by \(\mathcal {A}\). Note that \(\mathcal {B}\) makes only the leakage queries \({\hat{ f }}_{i}\) and \({\hat{ f }}_{i+1}\), which are \(2\mu \) bits in total.

If \(\mathcal {B}\)’s challenger uses a good key, this corresponds to the view of \(\mathcal {A}\) in Game \(\mathcal {H}_{2.(i-1)}\), and a bad key corresponds to Game \(\mathcal {H}_{2.i}\). Therefore, letting b be the bit used by \(\mathcal {B}\)’s challenger, we have:

$$\begin{aligned} \left| \Pr [\mathcal {B}\ \mathrm {wins}\ ] - 1/2 \right|= & {} \left| \Pr [\mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*,\mathsf {dk})=b] - 1/2 \right| \\= & {} 1/2 \cdot \left| \Pr [\mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*,\mathsf {dk})=1 | b=1 ] \right. \\&\left. - \Pr [\mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*,\mathsf {dk})=1 | b=0 ] \right| \\= & {} 1/2 \cdot \left| \Pr [\mathcal {A}\ \mathrm {wins}\ | \mathcal {H}_{2.(i-1)}] - \Pr [\mathcal {A}\ \mathrm {wins}\ | \mathcal {H}_{2.i} ] \right| \end{aligned}$$

Claim 15

\(\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{3}] \le \mathsf{negl}(\kappa )\)

Proof

We now argue that the probability of \(\mathcal {A}\) winning Game \(\mathcal {H}_3\) is negligible, by the “hardness of good keys”. Notice that \(\mathcal {A}\)’s view in Game \(\mathcal {H}_3\) can be simulated entirely just given \((\mathsf {pk}, s_B)\). Therefore, there is a \(\textsc {ppt}\) algorithm which, given \((\mathsf {pk},s_B)\) as input, can run Game \(\mathcal {H}_3\) with \(\mathcal {A}\) and output \(\mathsf{sk}^*\) such that \(\mathsf {isGood}(\mathsf {pk},\mathsf{sk}^*, \mathsf {dk}) = 1\) whenever \(\mathcal {A}\) wins. So the probability of \(\mathcal {A}\) winning in Game \(\mathcal {H}_3\) is negligible. \(\square \)

By the hybrid argument, the probability of \(\mathcal {A}\) winning in Game \(\mathcal {H}_0\) is at most that of \(\mathcal {A}\) winning in Game \(\mathcal {H}_3\), up to negligible factors. Since the latter is negligible, the former must be negligible as well; that is, \(\Pr [\mathcal {A}\ \ \mathrm {wins}\ | \mathcal {H}_{0}] \le \mathsf{negl}(\kappa )\). This concludes the proof of the theorem. \(\square \)

Based on the result in [21], we have the following corollary.

Corollary 1

Fix a constant \(K\ge 1\), and assume that the K-linear assumption holds in the base groups of some pairing. Then, for any constant \(\epsilon >0\), there exists a \(\mu \)-2CLR-secure OWR scheme with relative-leakage \(\frac{\mu }{|\mathsf{sk}|} \ge \frac{1}{2(K+1)} -\epsilon \).

6.3 A Generic Construction Based on PKE

In this section, we describe a generic construction of CLR-secure OWR (resp., 2CLR-secure, and CLR with leakage on key updates) from CLR-secure PKE (resp., 2CLR-secure, and CLR with leakage on key updates). An OWR requires verification of the relation to be deterministic, but a PKE does not necessarily yield an OWR, because there might not be a deterministic way to check a key pair \((\mathsf {pk},\mathsf{sk})\) of a PKE. Here we present a way to check the key pair of a PKE deterministically, so that a PKE can be used to construct an OWR.

Fig. 16: Transformation of PKE to OWR
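The transformation of Fig. 16 can be sketched as follows, as reconstructed from the proof of Theorem 12 below: the OWR public key embeds \(p = p(\kappa )\) random plaintext/ciphertext pairs, and verification deterministically checks that a candidate key decrypts every pair. This is illustrative Python; the `pke` object with `keygen`/`enc`/`dec` methods is a hypothetical interface, not a real library.

```python
import secrets

# Sketch of the PKE-to-OWR transformation: publish p encryptions of
# known random messages, and accept a secret key iff it decrypts all
# of them correctly (a deterministic check).

def owr_gen(pke, kappa, p):
    pk_e, sk = pke.keygen(kappa)
    pairs = []
    for _ in range(p):
        m = secrets.token_bytes(32)           # random message m_i
        pairs.append((m, pke.enc(pk_e, m)))   # pair (m_i, e_i)
    return (pk_e, pairs), sk                  # pk = (pk^E, (m_1,e_1),...,(m_p,e_p))

def owr_verify(pke, pk, sk):
    pk_e, pairs = pk
    # Deterministic: accept iff sk decrypts every published e_i to m_i.
    return all(pke.dec(sk, e) == m for (m, e) in pairs)
```

Intuitively, a key that passes this check must agree with the real decryption circuit on many random inputs, which is exactly what the Occam's Razor argument in the proof exploits.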

Theorem 12

Let \(\mathcal {E}\) be a public key encryption scheme secure in the model of CLR (respectively, of 2CLR, and of CLR with leakage on key updates) with leakage rate \(\rho \), then for appropriate choice of polynomial \(p(\cdot )\), the one-way relation scheme \(\mathsf{OWR}\) in Fig. 16 is secure in the model of CLR (respectively, of 2CLR, and of CLR with leakage on key updates) with leakage rate \(\rho \).

Proof

(Sketch.) A well-known result from learning theory known as Occam’s Razor (see, for example, Kearns and Vazirani [40], Theorem 2.1) says that if a class of circuits \(\mathcal {C}\) has size \(|\mathcal {C}|\) and a circuit \(C \in \mathcal {C}\) agrees with a target circuit \(C^* \in \mathcal {C}\) on \(\mathrm{poly}(\log (|\mathcal {C}|), 1/\epsilon , \log (1/\delta ))\) random inputs, then with probability \(1-\delta \), \(C\) agrees with \(C^*\) on a uniformly random input with probability \(1-\epsilon \). In the following, we always set \(\log (1/\delta ) \ge \kappa \), so that \(\delta \le 1/2^\kappa \).

Assume we have an adversary \(\mathcal {A} \) breaking the security of the one-way relation; we use it to construct an adversary \(\mathcal {A} '\) breaking the security of the encryption scheme \(\mathcal {E}\). The class \(\mathcal {C}\) consists of the circuits \(\mathcal {E}.\mathsf {Dec} (\widetilde{\mathsf{sk}}, \cdot )\) for all possible \(\widetilde{\mathsf{sk}}\). Clearly, \(\log (|\mathcal {C}|) = |\mathsf{sk}| = \mathrm{poly}(\kappa )\). Now, C corresponds to the circuit \(\mathcal {E}.\mathsf {Dec} (\mathsf{sk}', \cdot )\), where \(\mathsf{sk}'\) is the secret key submitted by \(\mathcal {A} \) such that \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}') = 1\). Furthermore, \(C^*\) is the circuit \(\mathcal {E}.\mathsf {Dec} (\mathsf{sk}, \cdot )\), where \(\mathsf{sk}\) is a real secret key. Note that C and \(C^*\) agree on \(p(\kappa ) = \mathrm{poly}(\log (|\mathcal {C}|), 1/\epsilon , \log (1/\delta ))\) random inputs, since \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}') = 1\). Thus, we are guaranteed that with probability \(1-\delta \) over the choice of the input/output pairs \((m_i, e_i)\) in \(\mathsf {pk}\), the key \(\mathsf{sk}'\) decrypts a fresh random ciphertext correctly with probability \(1-\epsilon \). We are now ready to define the adversary \(\mathcal {A} '\).

\(\mathcal {A} '\) internally instantiates \(\mathcal {A} \) while participating externally in a leakage game (resp., with leakage on two consecutive keys, and with leakage on both key and update randomness) against the encryption scheme \(\mathcal {E}\). Specifically, \(\mathcal {A} '\) does the following:

  • Upon receiving \(\mathsf {pk}^{\mathcal {E}}\) from the external experiment, do the following:

    • Choose \(p = p(\kappa )\) random messages \(m_1, \ldots , m_p\) from the message space of \(\mathcal {E}\).

    • For \(i \in [p]\), compute a random encryption \(e_i \leftarrow \mathcal {E}.\mathsf {Enc} (\mathsf {pk}^{\mathcal {E}}, m_i)\).

    • Output public key \(\mathsf {pk}= (\mathsf {pk}^{\mathcal {E}}, (m_1, e_1), \ldots , (m_p, e_p))\) to the internal adversary \(\mathcal {A} \). Note that secret key \(\mathsf{sk}_0 = \mathsf{sk}^{\mathcal {E}}_0\) is a correctly distributed secret key for this \(\mathsf {pk}\).

  • Whenever \(\mathcal {A} \) submits a leakage query \( f \), \(\mathcal {A} '\) submits the same query to its external challenger, who applies it to the secret key (resp., to two consecutive keys, and to both key and update randomness) and forwards the answer to \(\mathcal {A} \).

  • Finally, \(\mathcal {A} \) submits \(\mathsf{sk}'\) to \(\mathcal {A} '\). If there exists \(i \in [p]\) such that \(\mathcal {E}.\mathsf {Dec} (\mathsf{sk}', e_i) \ne m_i\), then \(\mathcal {A} '\) outputs a random bit \(b'\).

  • Otherwise, \(\mathcal {A} '\) chooses two independent, uniformly random messages \(m_0, m_1\) and submits them to its external challenger.

  • \(\mathcal {A} '\) then receives the challenge ciphertext \(c^*\).

  • \(\mathcal {A} '\) computes \(m^* = \mathcal {E}.\mathsf {Dec} (\mathsf{sk}', c^*)\). If \(m^* = m_0\), \(\mathcal {A} '\) outputs 0. Otherwise, \(\mathcal {A} '\) outputs 1.

Note that \(\mathcal {A} '\) perfectly simulates \(\mathcal {A} \)’s view in the \(\mathsf{OWR}\) game. It is then not hard to see that if \(\mathcal {A} \) succeeds with probability \(p_1 = p_1(\kappa ) \ge 8/2^\kappa \), then \(\mathcal {A} '\) succeeds with probability at least \(1/2 \cdot (1-p_1) + (p_1 - \delta )(1-\epsilon )\). For \(\epsilon \le 1/7\), we have \((p_1 - \delta )(1-\epsilon ) \ge 3p_1/4\). Thus, \(\mathcal {A} '\) succeeds with probability at least \(1/2 + p_1/4\) and obtains advantage \(\mathsf{Adv}_{\mathcal {A} ', \mathcal {E}} = p_1/4\). This, in turn, implies that \(\mathcal {A} \) succeeds with only negligible probability \(p_1(\kappa )\), since otherwise we would contradict the security of \(\mathcal {E}\). \(\square \)
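As a quick numeric sanity check (not part of the proof), the advantage bound can be verified directly: the constraints \(p_1 \ge 8/2^\kappa \) and \(\delta \le 1/2^\kappa \) give \(\delta \le p_1/8\), and together with \(\epsilon \le 1/7\) the claimed lower bound follows.

```python
# Check that 1/2*(1-p1) + (p1-delta)*(1-eps) >= 1/2 + p1/4
# whenever delta <= p1/8 and eps <= 1/7, as used in the proof sketch.

def success_prob(p1, delta, eps):
    return 0.5 * (1 - p1) + (p1 - delta) * (1 - eps)

for p1 in [0.5, 0.1, 1e-3, 1e-6]:
    delta, eps = p1 / 8, 1 / 7            # worst cases allowed by the bounds
    assert success_prob(p1, delta, eps) >= 0.5 + p1 / 4 - 1e-12
```

In the worst case the bound is tight: \((p_1 - p_1/8)(1-1/7) = (7p_1/8)(6/7) = 3p_1/4\) exactly.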

7 Continual Leakage Resilience for Digital Signatures

In the previous section, we extended the techniques of Dodis et al. [21] to construct \(\mu \)-2CLR-secure one-way relations. Dodis et al. [21] showed how to construct continual leakage-resilient signature schemes from one-way relations secure against continual leakage. In this section, we extend their techniques to construct \(\mu \)-2CLR-secure signature schemes from \(\mu \)-2CLR-secure one-way relations. Finally, combining our resulting \(\mu \)-2CLR-secure signature scheme with Theorems 6 and 7, we obtain a continual leakage-resilient signature scheme with leakage on key updates (but no leakage on the randomness used for signing).

See Sects. 2.2 and 3.4 for the formal definition of continual (consecutive) leakage resilience for digital signature schemes.

7.1 NIZK and True-Simulation Extractability

Dodis et al. [21] constructed CLR-secure signatures based on a CLR-secure OWR and another primitive called a true-simulation extractable (tSE) NIZK. Here we recall the syntax and security properties of NIZK; the definitions below are taken from [21] for completeness.

Let R be an NP relation on pairs \((y, x)\) with corresponding language \(L_R = \{ y \ | \ \exists x \text{ s.t. } (y, x) \in R \}\). A NIZK argument for a relation R consists of four \(\textsc {ppt}\) algorithms \((\mathsf {Setup}, \mathsf {Prove}, \mathsf {Verify}, \mathsf {Sim})\) with the following syntax:

  • \((\mathsf{crs}, \mathsf {tk}) \leftarrow \mathsf {Setup} (1^\kappa )\): Creates a common reference string (CRS) and a trapdoor key to the CRS.

  • \(\pi \leftarrow \mathsf {Prove}_\mathsf{crs}(y, x)\): Creates an argument that \(y \in L_R\).

  • \(\pi \leftarrow \mathsf {Sim} _\mathsf{crs}(y, \mathsf {tk})\): Creates a simulated argument that \(y \in L_R\), using the trapdoor \(\mathsf {tk} \) instead of a witness.

  • \(b\leftarrow \mathsf {Verify} _\mathsf{crs}(y, \pi )\): Verifies whether or not the argument \(\pi \) is correct.

For the sake of clarity, we write \(\mathsf {Prove}, \mathsf {Verify}, \mathsf {Sim} \) without the \(\mathsf{crs}\) in the subscript when the \(\mathsf{crs}\) can be inferred from the context.

Definition 13

We say that \((\mathsf {Setup}, \mathsf {Prove}, \mathsf {Verify})\) is a NIZK argument system for the relation R if the following three properties hold.

Completeness:

For any \((y, x) \in R\), if \((\mathsf{crs}, \mathsf {tk}) \leftarrow \mathsf {Setup} (1^\kappa )\) and \(\pi \leftarrow \mathsf {Prove}(y, x)\), then \(\mathsf {Verify} (y, \pi ) = 1\).

Soundness:

For any \(\textsc {ppt}\) adversary \(\mathcal {A}\), \(\Pr [\mathsf {Verify} (y, \pi ^*)=1 \wedge y \not \in L_R \ : \ (\mathsf{crs}, \mathsf {tk}) \leftarrow \mathsf {Setup} (1^\kappa ), (y, \pi ^*) \leftarrow \mathcal {A}(\mathsf{crs}) ] \le \mathsf{negl}(\kappa )\)

Composable Zero Knowledge:

For any \(\textsc {ppt}\) adversary \(\mathcal {A}\), we have \(|\Pr [\mathcal {A}\ \mathrm {wins}\ ] -1/2| \le \mathsf{negl}(\kappa )\) in the following game:

  • The challenger samples \((\mathsf{crs}, \mathsf {tk}) \leftarrow \mathsf {Setup} (1^\kappa )\) and gives \((\mathsf{crs}, \mathsf {tk})\) to \(\mathcal {A}\).

  • The adversary \(\mathcal {A}\) chooses \((y, x) \in R\) and gives these to the challenger.

  • The challenger samples \(\pi _0 \leftarrow \mathsf {Prove}(y, x)\), \(\pi _1\leftarrow \mathsf {Sim} (y, \mathsf {tk})\), \(b\leftarrow \{0,1\}\) and gives \(\pi _b\) to \(\mathcal {A}\).

  • The adversary \(\mathcal {A}\) outputs a bit \(b'\) and wins if \(b' = b\).

Definition 14

(True-Simulation Extractability [22]) Let \(\mathsf {NIZK}= (\mathsf {Setup}, \mathsf {Prove},\)\(\mathsf {Verify}, \mathsf {Sim})\) be an NIZK argument for an NP relation R, satisfying the completeness, soundness and zero-knowledge properties. We say that \(\mathsf {NIZK}\) is true-simulation extractable (tSE), if:

  • Apart from outputting a CRS and a trapdoor key, \(\mathsf {Setup} \) also outputs an extraction key: \((\mathsf{crs}, \mathsf {tk}, \mathsf {ek})\leftarrow \mathsf {Setup} (1^\kappa )\).

  • There exists a \(\textsc {ppt}\) algorithm \(\mathsf {Ext} _\mathsf {ek} \) such that for all \(\mathcal {A}\) we have \(\Pr [\mathcal {A}\ \mathrm {wins}\ ] \le \mathsf{negl}(\kappa )\) in the following game:

    1.

      The challenger runs \((\mathsf{crs}, \mathsf {tk}, \mathsf {ek}) \leftarrow \mathsf {Setup} (1^\kappa )\) and gives \(\mathsf{crs}\) to \(\mathcal {A}\).

    2.

      \(\mathcal {A}\) is given access to a simulation oracle \(\mathcal {SIM}_\mathsf {tk}(\cdot )\), which it can query adaptively. A query to the simulation oracle consists of a pair \((y, x)\). The oracle checks whether \((y, x) \in R\). If so, it ignores x and outputs a simulated argument \(\mathsf {Sim} _\mathsf {tk}(y)\). Otherwise, the oracle outputs \(\perp \).

    3.

      \(\mathcal {A}\) outputs a pair \((y^*,\sigma ^*)\), and the challenger runs \(x^*\leftarrow \mathsf {Ext} _\mathsf {ek} (y^*, \sigma ^*)\).

\(\mathcal {A}\) wins if \((y^*,x^*) \not \in R\), \(\mathsf {Verify} (y^*,\sigma ^*) = 1\), and \(y^*\) was not part of a query to the simulation oracle.

7.2 CLR Signatures with Leakage on Key Updates from OWR

Next, we recall Dodis et al.’s construction and then show that their construction is in fact 2CLR secure if the underlying OWR is 2CLR secure. We then combine this with our generic transformation from Sect. 3 to obtain CLR-secure signatures with leakage on key updates.

In the following, \(\mathsf{OWR}:= (\mathsf{OWR}.\mathsf {Gen}, \mathsf{OWR}.\mathsf {Update}, \mathsf{OWR}.\mathsf {Verify})\) is a 2CLR-secure one-way relation and \(\mathsf {NIZK}:= (\mathsf {NIZK}.\mathsf {Setup}, \mathsf {NIZK}.\mathsf {Prove}, \mathsf {NIZK}.\mathsf {Verify})\) is a tSE-NIZK for the relation

$$\begin{aligned} R = \{(y,x) \mid y = (\mathsf {pk},m), x = \mathsf{sk} \text{ s.t. } \mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}) = 1\}. \end{aligned}$$

Although the input m appears unused in the above relation, looking ahead, m will play the role of the message to be signed. Note the important property that when the message m changes, the statement \(y = (\mathsf {pk}, m)\) also changes.

  • \({\mathsf {SIG}}.\mathsf {Gen} (1^{\kappa }):\) Output \((\mathsf{vk},\mathsf{sk})\) where \(\mathsf{vk}= (\mathsf {pk}, \mathsf {crs}), (\mathsf {pk}, \mathsf{sk})\leftarrow \mathsf{OWR}.\mathsf {Gen} (1^{\kappa })\) and \(\mathsf {crs}\leftarrow \mathsf {NIZK}.\mathsf {Setup} (1^{\kappa })\).

  • \({\mathsf {SIG}}.\mathsf {Sign} _\mathsf{sk}(m)\): Output \(\sigma \leftarrow \mathsf {NIZK}.\mathsf {Prove}((\mathsf {pk},m), \mathsf{sk})\).

  • \({\mathsf {SIG}}.\mathsf {Verify} _\mathsf{vk}(m,\sigma )\): Output \(b:=\mathsf {NIZK}.\mathsf {Verify} ((\mathsf {pk},m), \sigma )\).

  • \({\mathsf {SIG}}.\mathsf {Update} (\mathsf{sk})\): Output \(\mathsf{OWR}.\mathsf {Update} (\mathsf{sk})\).
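The four algorithms above can be sketched as follows. This is illustrative Python; the `owr` and `nizk` objects are hypothetical interfaces matching the syntax defined earlier in this section, not concrete instantiations.

```python
# Schematic signature scheme: a signature on m is a NIZK argument of
# knowledge of sk for the statement y = (pk, m); key updates simply
# delegate to OWR.Update.

def sig_gen(owr, nizk, kappa):
    pk, sk = owr.gen(kappa)
    crs = nizk.setup(kappa)
    return (pk, crs), sk                        # vk = (pk, crs)

def sig_sign(nizk, vk, sk, m):
    pk, crs = vk
    return nizk.prove(crs, (pk, m), sk)         # statement (pk, m), witness sk

def sig_verify(nizk, vk, m, sigma):
    pk, crs = vk
    return nizk.verify(crs, (pk, m), sigma)

def sig_update(owr, sk):
    return owr.update(sk)                       # re-randomize the secret key
```

Since the statement \((\mathsf {pk}, m)\) includes the message, a proof for one message is not a valid proof for any other, which is what makes the argument usable as a signature.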

Theorem 13

If one-way relation \(\mathsf{OWR}\) is \(\mu \)-2CLR secure and \(\mathsf {NIZK}\) is true-simulation extractable, then the above signature scheme is \(\mu \)-2CLR secure.

Proof

The proof here is very similar to that in [21]. We prove the above theorem through a sequence of games. \(\square \)

Game\(\mathcal {H}_0\): This is the original \(\mu \)-2CLR game described in Definition 3, in which signing queries are answered honestly by running \(\sigma \leftarrow \mathsf {NIZK}.\mathsf {Prove}((\mathsf {pk},m), \mathsf{sk})\) and \(\mathcal {A}\) wins if she produces a valid forgery \((m^*,\sigma ^*)\).

Game\(\mathcal {H}_1\): In this game, the signing queries are answered by generating simulated arguments, i.e., \(\sigma \leftarrow \mathsf {NIZK}.\mathsf {Sim} _{\mathsf {tk}}(\mathsf {pk},m)\). Games \(\mathcal {H}_0\) and \(\mathcal {H}_1\) are indistinguishable by the zero-knowledge property of \(\mathsf {NIZK}\). Here the simulated arguments given to \(\mathcal {A}\) as answers to signing queries are always of true statements.

Game\(\mathcal {H}_2\): In this game, we modify the winning condition so that the adversary only wins if it produces a valid forgery \((m^*, \sigma ^*)\) and the challenger is able to extract a valid secret key \(\mathsf{sk}^*\) for \(\mathsf {pk}\) from \((m^*,\sigma ^*)\). That is, \(\mathcal {A}\) wins if both \({\mathsf {SIG}}.\mathsf {Verify} (m^*,\sigma ^*)=1\) and \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk},\mathsf{sk}^*)=1\), where \(\mathsf{sk}^*\leftarrow \mathsf {NIZK}.\mathsf {Ext} _{\mathsf {ek}}((\mathsf {pk},m^*),\sigma ^*)\). The winning probability of \(\mathcal {A}\) in Game \(\mathcal {H}_2\) is at least that of Game \(\mathcal {H}_1\) minus the probability that \(\mathsf {NIZK}.\mathsf {Verify} ((\mathsf {pk},m^*),\sigma ^*) =1\) and \(\mathsf{OWR}.\mathsf {Verify} (\mathsf {pk}, \mathsf{sk}^*)= 0\). By the true-simulation extractability of the argument \(\mathsf {NIZK}\), we know that this probability is negligible. Therefore, the winning probability of \(\mathcal {A}\) in Game \(\mathcal {H}_2\) differs from that in Game \(\mathcal {H}_1\) by a negligible amount.

We have shown that the probability that \(\mathcal {A}\) wins in Game \(\mathcal {H}_0\) is the same as that in Game \(\mathcal {H}_2\), up to negligible factors. We now argue that the probability that \(\mathcal {A}\) wins in Game \(\mathcal {H}_2\) is negligible, which proves that the probability that \(\mathcal {A}\) wins in Game \(\mathcal {H}_0\) is negligible as well. Assume otherwise; we show that there then exists a \(\textsc {ppt}\) algorithm \(\mathcal {B}\) that breaks the \(\mu \)-2CLR security of \(\mathsf{OWR}\). On input \(\mathsf {pk}\), \(\mathcal {B}\) generates \((\mathsf{crs},\mathsf {tk},\mathsf {ek}) \leftarrow \mathsf {NIZK}.\mathsf {Setup} (1^{\kappa })\) and emulates \(\mathcal {A}\) on input \(\mathsf{vk}= (\mathsf{crs}, \mathsf {pk})\). In each leakage round, \(\mathcal {B}\) answers \(\mathcal {A}\)’s leakage queries using its own leakage oracle and answers signing queries \(m_i\) by creating simulated arguments \(\sigma _i\leftarrow \mathsf {NIZK}.\mathsf {Sim} _{\mathsf {tk}}(\mathsf {pk}, m_i)\). When \(\mathcal {A}\) outputs her forgery \((m^*,\sigma ^*)\), \(\mathcal {B}\) runs \(\mathsf{sk}^*\leftarrow \mathsf {NIZK}.\mathsf {Ext} _{\mathsf {ek}}((\mathsf {pk}, m^*),\sigma ^*)\) and outputs \(\mathsf{sk}^*\). Notice that \(\Pr [ \mathcal {B}\ \mathrm {wins}\ ] = \Pr [\mathcal {A}\ \mathrm {wins}\ ]\), so if \(\Pr [\mathcal {A}\ \mathrm {wins}\ ]\) is non-negligible, then \(\mathcal {B}\) breaks the \(\mu \)-2CLR security of \(\mathsf{OWR}\). We therefore conclude that the probability that \(\mathcal {A}\) wins in Game \(\mathcal {H}_2\) is negligible. This concludes the proof of the theorem. \(\square \)

Based on the result in [21], we have the following corollary.

Corollary 2

Fix a constant \(K\ge 1\), and assume that the K-linear assumption holds in the base groups of some pairing. Then, for any constant \(\epsilon >0\), there exists a \(\mu \)-2CLR-secure signature scheme with relative-leakage \(\frac{\mu }{|\mathsf{sk}|} \ge \frac{1}{2(K+1)} -\epsilon \).