
1 Introduction

DAPS. Double authentication preventing signature (DAPS) schemes were introduced by Poettering and Stebila (PS) [15]. In such a signature scheme, the message being signed is a pair \(m=(a,p)\) consisting of an “address” a and a “payload” p. Let us say that messages \((a_1,p_1),(a_2,p_2)\) are colliding if \(a_1=a_2\) but \(p_1\ne p_2\). The double authentication prevention requirement is that there be an efficient extraction algorithm that, given a public key and valid signatures \(\sigma _1, \sigma _2\) on colliding messages \((a,p_1),(a,p_2)\), respectively, returns the secret signing key underlying the public key. Additionally, the scheme must satisfy standard unforgeability under a chosen-message attack [10], but in light of the first property we must make the restriction that the address components of all messages signed in the attack are different.

Why DAPS? PS [15] suggested that DAPS could deter certificate subversion. This is of particular interest now in light of the Snowden revelations. We know that the NSA obtains court orders to compel corporations into measures that compromise security. The case we consider here is that the corporation is a Certificate Authority (CA) and the court order asks it to produce a rogue certificate. Thus, the CA (e.g. Comodo, Go Daddy, ...) has already issued a (legitimate) certificate for a server example.com. This certificate consists of the public key of example.com together with \(\sigma _1\), the CA’s signature on the pair (example.com, public key), computed under the secret key of the CA. Big brother (this is what we will call the subverting adversary) is targeting clients communicating with example.com. It obtains a court order that requires the CA to issue another certificate—this is the rogue certificate—in the name of example.com, where now the certified public key is one supplied by big brother, so that the latter knows the corresponding secret key, and \(\sigma _2\) is the CA’s signature on the pair (example.com, rogue public key), again computed under the secret key of the CA. With this rogue certificate in hand, big brother could impersonate example.com in a TLS session with a client, compromising security of example.com’s communications.

The CA wants to deny the order (complying with it only hurts its reputation and business) but, under normal conditions, has no argument to make to the court in support of such a denial. Using DAPS to create certificates, rather than ordinary signatures, gives the CA such an argument, namely that complying with the order (issuing the rogue certificate) would compromise not just the security of big brother’s target clients communicating with example.com, but would compromise security much more broadly. Indeed, if big brother uses the rogue certificate with a client, it puts the rogue certificate in the client’s hand. The legitimate certificate can be viewed as public. So the client has \(\sigma _1,\sigma _2\). But these are valid signatures on colliding messages—both have address example.com, while the payloads (the certified public keys) differ—which means that the client can extract the CA’s signing key. This would lead to widespread insecurity. The court may be willing to allow big brother to compromise communications of clients with example.com, but it will not be willing to create a situation where the security of all TLS hosts with certificates from this CA is compromised. Ultimately this means the court would have strong incentives to deny big brother’s request for a court order to issue a rogue certificate in the first place.

Further discussion of this application of DAPS may be found in [15, 16] and also in the full version of this paper [2]. The latter includes comparisons with other approaches such as certificate transparency and public key pinning.

Prior DAPS schemes. PS [15, 16] give a factoring-based DAPS that we call \(\mathsf {PS}\). Its signature contains \(n+1\) elements in a group \({{\mathbb Z}}_N^*\), where n is the length of the output of a hash function and N is a (composite) modulus in the public key. With a 2048-bit modulus and 256-bit hash, a signature contains 257 group elements, for a length of 526,336 bits or 64.25 KiB. This is 257 times the length of a 2048-bit RSA PKCS#1 signature. Signing and verifying times are also significantly greater than for RSA PKCS#1. Ruffing, Kate, and Schröder [17, Appendix A] give a chameleon hash function (CHF) based DAPS that we call \(\mathsf {RKS}\) and recall in the full version of this paper [2]. Instantiating it with DLP-based CHFs makes signing quite efficient, but signature sizes and verification times are about the same as in \(\mathsf {PS}\). In particular, the large signature sizes of both \(\mathsf {PS}\) and \(\mathsf {RKS}\) inhibit their use in practice.

Goals and contributions. If we want DAPS to be a viable practical option, we need DAPS schemes that are competitive with current non-DAPS schemes on all cost parameters, meaning signature size, key size, signing time and verifying time. Furthermore, to not lose efficiency via inflated security parameters, we need to establish the unforgeability with tight security reductions. Finally, given the high damage that would be created by certificate forgery, we want these reductions to be to assumptions that are standard (factoring, RSA, ...) rather than new. This is what we deliver. We will give two general methods to build DAPS, and thence obtain many particular schemes that are efficient while having tight security reductions to standard algebraic assumptions in the random oracle model. We begin with some background on our main tool, identification schemes.

Background. An identification scheme is a three-move protocol \(\mathsf {ID}\) where the prover sends a commitment \(Y\) computed using private randomness y, the verifier sends a random challenge \(c\), the prover returns a response \(z\) computed using y and its secret key, and the verifier computes a boolean decision from the conversation transcript \(Y\Vert c\Vert z\) and the public key (see Fig. 2). Practical ID schemes are typically Sigma protocols, which means they satisfy honest-verifier zero-knowledge and special soundness. The latter says that from two accepting conversation transcripts with the same commitment but different challenges, one can extract the secret key. The identification scheme is trapdoor [3, 12] if the prover can pick the commitment \(Y\) directly at random from the commitment space and compute the associated private randomness y using its secret key.
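As a concrete illustration, the following toy run shows the three moves for a GQ-style identification scheme, where the secret key is an RSA e-th root. This is our own sketch, not the paper's specification, and the parameters are far too small for any real security.

```python
# Toy GQ-style identification run: commit, challenge, respond, verify.
# Parameters are illustrative only and far too small for real security.
import math
import secrets

# RSA-style parameters (toy values, assumed for illustration)
p, q = 1019, 1031
N = p * q
e = 65537                       # prime, larger than any 16-bit challenge

def rand_unit(n):
    """Sample a uniform element of Z_n^*."""
    while True:
        v = secrets.randbelow(n - 1) + 1
        if math.gcd(v, n) == 1:
            return v

# key generation: secret x, public X = x^e mod N
x = rand_unit(N)
X = pow(x, e, N)

# move 1 -- prover commits: Y = y^e mod N for private randomness y
y = rand_unit(N)
Y = pow(y, e, N)

# move 2 -- verifier sends a random 16-bit challenge
c = secrets.randbits(16)

# move 3 -- prover responds: z = y * x^c mod N
z = (y * pow(x, c, N)) % N

# verifier accepts iff z^e == Y * X^c (mod N),
# since z^e = y^e * x^(c*e) = Y * X^c
accept = pow(z, e, N) == (Y * pow(X, c, N)) % N
print(accept)
```

The check succeeds for honest runs because \(z^e = y^e x^{ce} = Y X^c \bmod N\).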

The classic way to get a signature scheme from an identification scheme is via the Fiat-Shamir transform [9], denoted \(\mathbf {FS}\). Here, a signature of a message \(m\) is a pair \((Y,z)\) such that the transcript \(Y\Vert c\Vert z\) is accepting for \(c= \textsc {H}(Y\Vert m)\), where \(\textsc {H}\) is a random oracle. This signature scheme meets the standard unforgeability notion of [10] assuming the identification scheme is secure against impersonation under passive attack (IMP-PA) [1]. BPS [3] give several alternative transforms of (trapdoor) identification schemes to unforgeable signature schemes, the advantage over \(\mathbf {FS}\) being that in some cases the reduction of unforgeability to the underlying algebraic assumption is tight. (That of \(\mathbf {FS}\) is notoriously loose.) No prior transform yields DAPS. Our first transform, described next, is however an adaptation and extension of the \(\mathbf {MdCmtCh}\) transform of [3].

Double-hash transform \(\mathbf {H2}\). The novel challenge in getting DAPS is to provide the double authentication prevention property. Our idea is to turn to identification schemes, and specifically to exploit their special soundness. Recall this says that from two accepting conversations with the same commitment and different challenges, one can extract the secret key. What we want now is to create identification-based signatures in such a way that signatures are accepting conversations and signatures of messages with the same address have the same commitment, but if payloads differ then challenges differ. This will allow us, from valid signatures of colliding messages, to obtain the secret key.

To ensure signatures of messages with the same address have the same commitment, we make the commitment a hash of the address. This, however, leaves us in general unable to complete the signing, because the prover in an identification scheme relies on having created the commitment \(Y\) in such a way that it knows some underlying private randomness y, which is used crucially in the identification. To get around this, we use identification schemes that are trapdoor (see above), so y can be derived from the commitment given a secret key. To ensure unforgeability, we incorporate a fresh random seed into each signature.

In more detail, our first method to obtain DAPS from a trapdoor identification scheme is via a transform that we call the double-hash transform and denote \(\mathbf {H2}\) (cf. Sect. 5.1). To sign a message \(m=(a,p)\), the signer specifies the commitment as a hash \(Y= \textsc {H}_1(a)\) of the address, picks a random seed \(s\) of length \({\mathsf {sl}}\) (a typical seed length would be \({\mathsf {sl}}=256\)), obtains a challenge \(c= \textsc {H}_2(a\Vert p\Vert s)\), uses the trapdoor property of the identification scheme and the secret key to compute a response \(z\), and returns \((z,s)\) as the signature. Additionally the public key is enhanced so that recovery of the secret identification key allows recovery of the full DAPS secret key. Theorem 1 establishes the double-authentication prevention property via the special soundness property of the identification Sigma protocol, and is unconditional. Theorem 2 shows unforgeability of the DAPS in the ROM under two assumptions on the identification scheme: (1) CIMP-UU, a notion defined in [3] (security under constrained impersonation attacks in which both the commitment and the challenge of the successful impersonation are unchosen by the adversary), and (2) KR, security against key recovery. Specific identification schemes can be shown to meet both notions under standard assumptions [3], yielding DAPS from the same assumptions. If typical factoring or RSA based identification schemes are used, DAPS signatures have size about \(k+{\mathsf {sl}}\) bits, where k is the bitlength of the modulus.
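The following sketch instantiates this idea for a GQ-style scheme. It is a simplification under several assumptions of ours: toy parameters, hashing into \({{\mathbb Z}}_N^*\) by naive rejection sampling (the helper names h_to_group and h_challenge are hypothetical), 16-bit challenges so that the prime e exceeds any challenge, and no enhanced public key.

```python
# Minimal sketch of H2-style signing and verification over a toy GQ scheme.
# Assumptions: toy parameters, simplistic hashing into Z_N^*, and the
# enhanced public key of the actual transform is omitted.
import hashlib
import math
import secrets

p, q = 1019, 1031
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # RSA trapdoor, part of the signing key

x = 424242                          # secret identification key, gcd(x, N) == 1
X = pow(x, e, N)                    # public key component

def h_to_group(data):
    """Hash a string into Z_N^* (simplified rejection sampling)."""
    ctr = 0
    while True:
        h = hashlib.sha256(f"{ctr}|{data}".encode()).digest()
        v = int.from_bytes(h, "big") % N
        if v > 1 and math.gcd(v, N) == 1:
            return v
        ctr += 1

def h_challenge(a, payload, s):
    """16-bit challenge from address, payload, and seed."""
    h = hashlib.sha256(f"{a}|{payload}|{s}".encode()).digest()
    return int.from_bytes(h, "big") % (1 << 16)

def sign(a, payload):
    Y = h_to_group(a)               # commitment determined by the address
    y = pow(Y, d, N)                # trapdoor recovers private randomness
    s = secrets.token_hex(16)       # fresh random seed
    c = h_challenge(a, payload, s)
    z = (y * pow(x, c, N)) % N
    return (z, s)

def verify(a, payload, sig):
    z, s = sig
    Y = h_to_group(a)
    c = h_challenge(a, payload, s)
    return pow(z, e, N) == (Y * pow(X, c, N)) % N
```

Signing two different payloads under the same address yields two accepting transcripts with the same commitment \(Y\) and, with overwhelming probability, different challenges, which is exactly the situation in which special soundness recovers the signing key.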

Double-ID transform \(\mathbf {ID2}\). The signature size of \(\mathbf {H2}\) when instantiated with RSA is more than the length k of a signature in RSA PKCS#1. We address this via a second transform of trapdoor identification schemes into DAPS that we call the double ID transform, denoted \(\mathbf {ID2}\). When instantiated with the same identification schemes as above, corresponding DAPS signatures have length \(k+1\) bits, while maintaining (up to a small constant factor) the signing and verifying times of schemes obtained via \(\mathbf {H2}\).

The \(\mathbf {ID2}\) transform has several novel features. It requires that the identification scheme supports multiple challenge lengths, specifically challenge lengths 1 and l (e.g., \(l=256\)). To sign a message \(m=(a,p)\), first we work with the single challenge-bit version of the identification scheme, computing for this a commitment \(Y_1=H_1(a)\), picking a random 1-bit challenge \(c_1\), and letting \(z_1\) be the response, computed using the trapdoor and secret key. Now a random bijection (a public bijection accessible, in both directions, via oracles) is applied to \(z_1\) to get a commitment \(Y_2\) for the l-bit challenge version of the identification scheme. A challenge for this is computed as \(H_2(a,p)\), and then a response \(z_2\) is produced. The signature is simply \((c_1,z_2)\). Section 5.2 specifies the transform in detail and proves the DAP property and unforgeability, modeling the random bijection as ideal. Notably, the CIMP-UU assumption used for the \(\mathbf {H2}\) transform needs to be replaced by the (slightly stronger) CIMP-UC notion [3] (in CIMP-UC, the challenge in the successful impersonation can be chosen by the adversary).

Instantiations. We discuss three different instantiations of the above in Sect. 6. The RSA-based \(\mathsf {GQ}\) identification scheme [11] is not trapdoor as usually written, but can be made so by including the decryption exponent in the secret key [3]. Applying \(\mathbf {H2}\) and \(\mathbf {ID2}\), we get \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ}]\). The factoring-based \(\mathsf {MR}\) identification scheme of Micali and Reyzin [12] is trapdoor, which we exploit (in the full version [2]) to get \(\mathbf {H2}[\mathsf {MR}]\). For details see Fig. 15. (Both \(\mathsf {GQ}\) and \(\mathsf {MR}\) support multiple challenge lengths and meet the relevant security requirements.) Figure 1 shows the signing time, verifying time and signature size for these schemes. In a bit we will discuss implementation results that measure actual performance.

Fig. 1.

DAPS efficiency. Performance indications for the DAPS obtained by our \(\mathbf {H2}\) and \(\mathbf {ID2}\) transforms applied to the \(\mathsf {GQ}\) and \(\mathsf {MR}\) trapdoor identification schemes. The first two rows show the prior scheme of PS [15, 16] and the scheme of RKS [17], with n being the length of the output of a hash function, e.g. \(n=256\). By k we denote the length of a composite modulus N in the public key, e.g. \(k=2048\). The challenge length of \(\mathsf {GQ}\) and \(\mathsf {MR}\) is l, and \({\mathsf {sl}}\) is the seed length, e.g. \({\mathsf {sl}}=256\). The fourth column is the size of a signature in bits. Absolute runtimes and signature sizes are for \(k=2048\)-bit moduli and 256-bit hashes/challenges/seeds; details appear in Sect. 6.

Reduction tightness. Figure 1 says the signing time for \(\mathbf {H2}[\mathsf {GQ}]\) is \(\mathcal{O}(lk^2+k^3)\), but what this means in practice depends very much on the choice of k (the length of composite N). Roughly speaking, we can expect that doubling k leads to an 8-fold increase in runtime, so signing with \(k=2048\) is 8 times slower than with \(k=1024\). So we want to use the smallest k for which we have a desired level of security. Suppose this is approximately 128 bits. Many keylength recommendations match the difficulty of breaking a 128-bit symmetric cipher with the difficulty of factoring a 2048-bit modulus. But this does not generally mean it is safe to use \(\mathbf {H2}[\mathsf {GQ}]\) with \(k=2048\), because the reduction of unforgeability to RSA may not be tight: the Fiat-Shamir transform \(\mathbf {FS}\) has a very loose reduction, so when signatures are identification based, one should be extra suspicious. Remarkably, our reductions are tight, so we can indeed get 128 bits of security with \(k=2048\). This tightness has two steps or components. First, the reduction of unforgeability to the CIMP-UU/CIMP-UC and KR security of the identification scheme, as given by Theorems 2 and 4, is tight. Second, the reductions of CIMP-UU/CIMP-UC and KR to the underlying algebraic problem (here RSA or factoring) are also tight (cf. Lemma 1, adapting [3]).

Implementation. The efficiency measures of Fig. 1 are asymptotic, with hidden constants. Implementation is key to gauging and comparing performance in practice. We implement our two \(\mathsf {GQ}\) based schemes, \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ}]\), as well as \(\mathbf {H2}[\mathsf {MR}]\). For comparison we also implement the prior \(\mathsf {PS}\) DAPS, and also compare with the existing implementation of \(\mathsf {RKS}\). Figure 16 shows the signing time, verifying time, signature size and key sizes for all schemes. \(\mathbf {H2}[\mathsf {GQ}]\) emerges as around 587 times faster than \(\mathsf {PS}\) for signing and 394 times faster for verifying, while also having signatures about 229 times shorter. Compared with the previous fastest and smallest DAPS, \(\mathsf {RKS}\), \(\mathbf {H2}[\mathsf {GQ}]\) is 15\(\times \) faster for both signing and verifying, with signatures \(56\times \) shorter. \(\mathbf {ID2}[\mathsf {GQ}]\) is about a factor two slower than \(\mathbf {H2}[\mathsf {GQ}]\) but with signatures about 15% shorter. \(\mathbf {H2}[\mathsf {MR}]\) has the smallest public keys of our new DAPS schemes, with signing runtime about halfway between \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ}]\). The DAPS by RKS remains the one with the smallest public keys (640 bits), but the schemes in this paper have public keys that are still quite reasonable (between 2048 and 6144 bits). As Fig. 16 shows, \(\mathbf {H2}[\mathsf {GQ}]\), \(\mathbf {H2}[\mathsf {MR}]\), and \(\mathbf {ID2}[\mathsf {GQ}]\) are close to RSA PKCS#1 in all parameters and runtimes (but with potentially improved security, considering our reductions to RSA and factoring are tight). This means that DAPS can replace the signatures currently used for certificates with minimal loss in performance.

Necessity of our assumption. Trapdoor identification schemes may seem a very particular assumption from which to obtain DAPS. However we show in the full version of this paper [2] that from any DAPS satisfying double authentication prevention and unforgeability, one can build a CIMP-UU and CIMP-UC secure trapdoor identification scheme. This shows that the assumption we make is effectively necessary for DAPS.

2 Preliminaries

Notation. By \(\varepsilon \) we denote the empty string. If X is a finite set, \(x \leftarrow _{\$}X\) denotes selecting an element of X uniformly at random and |X| denotes the size of X. We use \(a_1\Vert a_2\Vert \cdots \Vert a_n\) as shorthand for \((a_1,a_2,\ldots ,a_n)\), and by \(a_1\Vert a_2\Vert \cdots \Vert a_n \leftarrow x\) we mean that x is parsed into its constituents. If A is an algorithm, \(y \leftarrow A(x_1, \dots ; r)\) denotes running A on inputs \(x_1, \dots \) with random coins r and assigning the result to y, and \(y \leftarrow _{\$}A(x_1, \dots )\) means we pick r at random and let \(y \leftarrow A(x_1, \dots ; r)\). By \([A(x_1, \dots )]\) we denote the set of all y that have positive probability of being returned by \(A(x_1, \dots )\).

Our proofs use the code-based game playing framework of BR [5]. In these proofs, \(\Pr [\mathrm {G}]\) denotes the event that game \(\mathrm {G}\) returns \(\mathsf {true}\). When we speak of running time of algorithms, we mean worst case. For adversaries playing games, this includes the running time of the adversary and that of the game, i.e., the time taken by game procedures to respond to oracle queries is included. Boolean flags (like \(\mathsf {bad}\)) in games are assumed initialized to \(\mathsf {false}\).

In our constructions, we will need random oracles with different ranges. For example we may want one random oracle returning points in a group \({{\mathbb Z}}_N^*\) and another returning strings of some length l. To provide a single unified notation, following [3], we have the game procedure \(\textsc {H}\) take not just the input x but a description \(\mathrm {Rng}\) of the set from which outputs are to be drawn at random. Thus \(y \leftarrow \textsc {H}(x,{{\mathbb Z}}_N^*)\) will return a random element of \({{\mathbb Z}}_N^*\), and so on.
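This unified notation lends itself to lazy sampling. The sketch below is our own illustration, with an ad hoc encoding of range descriptions as tuples: ("bits", l) for \(\{0,1\}^l\) and ("units", N) for \({{\mathbb Z}}_N^*\).

```python
# Lazy-sampled random oracle H(x, Rng) parameterized by a range description.
# The tuple encoding of ranges is our own, for illustration.
import math
import secrets

_table = {}   # memoizes previous answers, as in standard lazy sampling

def H(x, rng):
    key = (x, rng)
    if key not in _table:
        kind, param = rng
        if kind == "bits":
            # uniform element of {0,1}^param, as an integer
            _table[key] = secrets.randbits(param)
        elif kind == "units":
            # uniform element of Z_param^* by rejection sampling
            while True:
                v = secrets.randbelow(param - 1) + 1
                if math.gcd(v, param) == 1:
                    _table[key] = v
                    break
        else:
            raise ValueError("unknown range description")
    return _table[key]
```

Repeated queries on the same input and range return the same value, while distinct ranges for the same input are treated as independent oracles.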

Our \(\mathbf {ID2}\) transform also relies on a random bijection. In the spirit of a random oracle, a random bijection is an idealized unkeyed public bijection to which algorithms and adversaries have access via two oracles, one for the forward direction and one for the backward direction. Cryptographic constructions that build on such objects include the Even-Mansour cipher and the SHA3 hash function. We denote by \(\varPi ^{+}(\cdot ,\mathrm {Dom},\mathrm {Rng})\) a bijection from \(\mathrm {Dom}\) to \(\mathrm {Rng}\), and by \(\varPi ^{-}(\cdot ,\mathrm {Dom},\mathrm {Rng})\) its inverse. Once \(\mathrm {Dom}\) and \(\mathrm {Rng}\) are fixed, our results view \(\varPi ^{+}(\cdot ,\mathrm {Dom},\mathrm {Rng})\) as being randomly sampled from the set of all bijections from \(\mathrm {Dom}\) to \(\mathrm {Rng}\). We discuss instantiation of a random bijection in Sect. 6.

Signature schemes. A signature scheme \(\mathsf {DS}\) specifies the following. The signer runs key generation algorithm \(\mathsf {DS}.\mathsf {Kg}\) to get a verification key and a signing key. A signature of a message \(m\) is generated by running the signing algorithm on the signing key and \(m\). Verification takes the verification key, the message, and a candidate signature, and returns a boolean v. \(\mathsf {DS}\) is correct if for all key pairs that can be output by \(\mathsf {DS}.\mathsf {Kg}\), all messages \(m\in \{0,1\}^*\) and all signatures output by the signing algorithm, verification accepts.

3 Identification Schemes

Identification schemes are our main tool. Here we give the necessary definitions and results.

Fig. 2.

Top: Message flow of an identification scheme \(\mathsf {ID}\). Bottom: Games defining extractability and HVZK of an identification scheme \(\mathsf {ID}\).

Identification. An identification (ID) scheme \(\mathsf {ID}\) is a three-move protocol between a prover and a verifier, as shown in Fig. 2. A novel feature of our formulation (which we exploit for the \(\mathbf {ID2}\) transform) is that identification schemes support challenges of multiple lengths. Thus, associated to \(\mathsf {ID}\) is a set \(\mathsf {ID}.\mathsf {clS}\subseteq {{\mathbb N}}\) of admissible challenge lengths. At setup time the prover runs key generation algorithm \(\mathsf {ID}.\mathsf {Kg}\) to generate a public verification key, a private identification key, and a trapdoor. To execute a run of the identification scheme for a challenge length \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\), the prover runs its commitment algorithm to generate a commitment \(Y\) and a private state \(y\). The prover sends \(Y\) to the verifier, who samples a random challenge \(c\) of length \({\mathsf {cl}}\) and returns it to the prover. The prover computes its response \(z\). The verifier checks the response by invoking the verification algorithm \(\mathsf {ID}.\mathsf {Vf}\), which returns a boolean value. We require perfect correctness. For any admissible challenge length, the commitment space and response space denote the sets of possible commitments and responses, respectively.

In basic ID schemes, key generation outputs only the verification and identification keys. The inclusion of a trapdoor was given by [3] in their definition of trapdoor ID schemes. Following [3] (and extending to multiple challenge lengths) we say \(\mathsf {ID}\) is trapdoor if it specifies an additional algorithm \(\mathsf {ID}.\mathsf {Cmt}^{-1}\) that can compute \(y\) from any \(Y\) using the trapdoor. The property required of \(\mathsf {ID}.\mathsf {Cmt}^{-1}\) is that the following two distributions on \((Y, y)\) are identical for any admissible challenge length \({\mathsf {cl}}\): (1) generate \((Y,y)\) honestly via the prover's commitment algorithm and return \((Y,y)\), and (2) pick \(Y\) at random from the commitment space, compute \(y\) from \(Y\) via \(\mathsf {ID}.\mathsf {Cmt}^{-1}\) using the trapdoor, and return \((Y,y)\).
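For RSA-based schemes such as GQ, the two sampling orders coincide because \(y \mapsto y^e \bmod N\) is a bijection on \({{\mathbb Z}}_N^*\). A toy check of both paths (our own sketch, illustrative parameters):

```python
# Trapdoor commitment sampling for a GQ-style scheme (toy parameters).
# Path 1 picks the private randomness y first; path 2 picks the commitment
# Y first and uses the RSA trapdoor d to recover y. Both yield a uniform
# commitment with matching private state.
import math
import secrets

p, q = 1019, 1031
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # the trapdoor

def rand_unit(n):
    """Sample a uniform element of Z_n^*."""
    while True:
        v = secrets.randbelow(n - 1) + 1
        if math.gcd(v, n) == 1:
            return v

# path 1: honest prover, randomness first
y1 = rand_unit(N)
Y1 = pow(y1, e, N)

# path 2: trapdoor (Cmt^{-1}), commitment first
Y2 = rand_unit(N)
y2 = pow(Y2, d, N)

# both pairs satisfy the same relation Y = y^e mod N
print(pow(y1, e, N) == Y1, pow(y2, e, N) == Y2)
```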

Further properties. We give several further identification-related definitions we will use. First we extend honest-verifier zero-knowledge (HVZK) and extractability to identification schemes with variable challenge length.

HVZK of \(\mathsf {ID}\) asks that there exists an algorithm \(\mathsf {ID}.\mathsf {Sim}\) (called the simulator) that given the verification key and challenge length, generates transcripts which have the same distribution as honest ones. Formally, if \(\mathcal{A}\) is an adversary and \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\) is an admissible challenge length, let \(\mathbf {Adv}^{\mathrm {zk}}_{\mathsf {ID},{\mathsf {cl}}}(\mathcal{A}) = 2\Pr [\mathbf {G}^\mathrm{zk}_{\mathsf {ID},{\mathsf {cl}}}(\mathcal{A})]-1\) where the game is shown in Fig. 2. Then \(\mathsf {ID}\) is HVZK if \(\mathbf {Adv}^{\mathrm {zk}}_{\mathsf {ID},{\mathsf {cl}}}(\mathcal{A}) = 0\) for all (even computationally unbounded) adversaries \(\mathcal{A}\) and all \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\).

Extractability of \(\mathsf {ID}\) asks that there exists an algorithm \(\mathsf {ID}.\mathsf {Ex}\) (called the extractor) which from any two (valid) transcripts that have the same commitment but different same-length challenges can recover the secret key. Formally, if \(\mathcal{A}\) is an adversary, let \(\mathbf {Adv}^{\mathrm {ex}}_{\mathsf {ID}}(\mathcal{A}) = \Pr [\mathbf {G}^\mathrm {ex}_{\mathsf {ID}}(\mathcal{A})]\) where the game is shown in Fig. 2. Then \(\mathsf {ID}\) is perfectly extractable if \(\mathbf {Adv}^{\mathrm {ex}}_{\mathsf {ID}}(\mathcal{A}) = 0\) for all (even computationally unbounded) adversaries \(\mathcal{A}\). Perfect extractability is sometimes called special soundness. We say that an identification scheme is a Sigma protocol [7] if it is both HVZK and perfectly extractable.
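For a GQ-style scheme this extractor is explicit: from two accepting transcripts with the same commitment and distinct challenges, extended Euclid on the exponents recovers the secret e-th root. A toy sketch, under the same illustrative parameters used above (our own code, not the paper's):

```python
# Special-soundness extractor for a toy GQ scheme: given accepting
# transcripts (Y, c1, z1) and (Y, c2, z2) with c1 != c2, recover x.
# Since e is prime and 0 < |c1 - c2| < e, there are integers u, v with
# u*e + v*(c1 - c2) = 1, and then x = X^u * (z1/z2)^v mod N.
p, q = 1019, 1031
N = p * q
e = 65537

# secret/public key and two transcripts sharing a commitment (toy values)
x = 424242                      # gcd(x, N) == 1
X = pow(x, e, N)
y = 98765
Y = pow(y, e, N)
c1, c2 = 51234, 7               # distinct 16-bit challenges
z1 = (y * pow(x, c1, N)) % N
z2 = (y * pow(x, c2, N)) % N

def egcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

g, u, v = egcd(e, c1 - c2)
assert g == 1                   # guaranteed since e is prime and |c1-c2| < e
ratio = (z1 * pow(z2, -1, N)) % N              # equals x^(c1-c2) mod N
x_rec = (pow(X, u, N) * pow(ratio, v, N)) % N  # negative exponents need Python >= 3.8
print(x_rec == x)
```

The recovery works because \(x_{\mathrm {rec}} = x^{ue} \cdot x^{v(c_1-c_2)} = x^{ue+v(c_1-c_2)} = x \bmod N\).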

We define three further notions that are not standard, but sometimes needed and true of typical schemes (cf. Sect. 6). For instance, at times we require that \(\mathsf {ID}\) includes a key-verification algorithm \(\mathsf {ID}.\mathsf {KVf}\) that, given a verification key and a candidate identification key, accepts iff the pair could have been output by \(\mathsf {ID}.\mathsf {Kg}\) (together with some trapdoor). We say that \(\mathsf {ID}\) is commitment recovering if \(\mathsf {ID}.\mathsf {Vf}\) verifies a transcript \(Y{\Vert }c{\Vert }z\) by recovering \(Y\) from \(c,z\) and then comparing. More precisely, we require that there exist an efficient algorithm \(\mathsf {ID}.\mathsf {Rsp}^{-1}\) that takes a verification key, a challenge, and a response, and outputs a commitment, such that \(\mathsf {ID}.\mathsf {Vf}\) accepts \(Y{\Vert }c{\Vert }z\) iff \(\mathsf {ID}.\mathsf {Rsp}^{-1}\) applied to \(c\) and \(z\) returns \(Y\). Finally, \(\mathsf {ID}\) is said to have unique responses if for any commitment \(Y\) and any challenge \(c\) there is precisely one response \(z\) such that the verifier accepts \(Y{\Vert }c{\Vert }z\).

Fig. 3.

Games defining security of identification scheme \(\mathsf {ID}\) against constrained impersonation (\(\mathrm {CIMP}\text{- }\mathrm {UU}\) and \(\mathrm {CIMP}\text{- }\mathrm {UC}\)) and against key recovery under passive attack.

Security of identification. A framework of notions of security under constrained impersonation was given in [3]. We reproduce and use their \(\mathrm {CIMP}\text{- }\mathrm {UU}\) and \(\mathrm {CIMP}\text{- }\mathrm {UC}\) notions but extend them to support multiple challenge lengths. The value of these notions as starting points is that they can be proven to be achieved by typical identification schemes with tight reductions to standard assumptions, following [3], which is not true of classical notions like IMP-PA (impersonation under passive attack [1]). The formalization relies on the games \(\mathbf {G}^\mathrm{cimp\text{- }xy}_{\mathsf {ID}}(\mathcal{P})\) of Fig. 3 associated to identification scheme \(\mathsf {ID}\) and adversary \(\mathcal{P}\), where \(\mathrm {xy} \in \{\mathrm {uu},\mathrm {uc}\}\). The transcript oracle \(\textsc {Tr}\) returns a fresh identification transcript \(Y_i{\Vert }c_i{\Vert }z_i\) each time it is called, for a challenge length passed in by the adversary. This models a passive attack. In the \(\mathrm {xy}=\mathrm {uu}\) case, the adversary can call \(\textsc {Ch}\) with the index l of an existing transcript \(Y_l{\Vert }c_l{\Vert }z_l\) to indicate that it wants to be challenged to produce a response for a fresh challenge against the commitment \(Y_l\). The index j records the session for future reference. In the \(\mathrm {xy}=\mathrm {uc}\) case, the adversary continues to call \(\textsc {Ch}\) with the index l of an existing transcript, but this time provides its own challenge \(c\), indicating it wants to be challenged to find a response. The game allows this only if the provided challenge is different from the one in the original transcript. The adversary can call \(\textsc {Tr}\) and \(\textsc {Ch}\) as many times as it wants, in any order. The adversary terminates by outputting the index k of a challenge session against which it hopes its response \(z\) will verify. 
Define the advantage via \(\mathbf {Adv}^\mathrm{cimp\text{- }xy}_{\mathsf {ID}}(\mathcal{P}) = \Pr [\mathbf {G}^\mathrm{cimp\text{- }xy}_{\mathsf {ID}}(\mathcal{P})]\).

We also define a metric of security of the identification scheme against key recovery under passive attack. The formalization considers game \(\mathbf {G}^\mathrm {kr\text{- }pa}_{\mathsf {ID}}(\mathcal{I})\) of Fig. 3 associated to identification scheme \(\mathsf {ID}\) and kr adversary \(\mathcal{I}\). The transcript oracle \(\textsc {Tr}\) is as before. Adversary \(\mathcal{I}\) aims to find a private key that is functionally equivalent to the real one, in the sense that responses computed under it verify just like honestly computed ones. (In particular, it certainly succeeds if it recovers the private key itself.) We let \(\mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I}) = \Pr [\mathbf {G}^\mathrm {kr\text{- }pa}_{\mathsf {ID}}(\mathcal{I})]\) be the probability that it succeeds. The notion of KR security from [3, 14] did not give the adversary a \(\textsc {Tr}\) oracle (excluding even passive attacks) and required that for success it find the target key (rather than, as here, being allowed to get away with something functionally equivalent).

Achieving the notions. For typical identification schemes that are HVZK, security against key recovery under passive attack corresponds exactly to the standard assumption underlying the scheme, for example the one-wayness of RSA for \(\mathsf {GQ}\). The following says that under the assumption of security against key recovery under passive attack, we can establish both \(\mathrm {CIMP}\text{- }\mathrm {UC}\) and \(\mathrm {CIMP}\text{- }\mathrm {UU}\) for identification schemes that are extractable. In the second case, however, we require that the challenge-lengths used be large.

The identification schemes we will use to build DAPS are Sigma protocols, meaning perfectly extractable, and hence for these schemes \(\mathbf {Adv}^{\mathrm {ex}}_{\mathsf {ID}}(\mathcal{A})\) below will be 0. We omit the proof as it uses standard arguments [3].

Lemma 1

Let \(\mathsf {ID}\) be an identification scheme. For any adversary \(\mathcal{P}\) against \(\mathrm {CIMP}\text{- }\mathrm {UC}\) we construct a key recovery adversary \(\mathcal{I}\) and extraction adversary \(\mathcal{A}\) such that

$$\begin{aligned} \mathbf {Adv}^\mathrm{cimp\text{- }uc}_{\mathsf {ID}}(\mathcal{P}) \le \mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I})+\mathbf {Adv}^{\mathrm {ex}}_{\mathsf {ID}}(\mathcal{A}) . \end{aligned}$$

Also for any adversary \(\mathcal{P}\) against \(\mathrm {CIMP}\text{- }\mathrm {UU}\) that makes \(q_c\) queries to its \(\textsc {Ch}\) oracle, each with challenge length at least \({\mathsf {cl}}\), we construct a key recovery adversary \(\mathcal{I}\) and extraction adversary \(\mathcal{A}\) such that

$$\begin{aligned} \mathbf {Adv}^\mathrm{cimp\text{- }uu}_{\mathsf {ID}}(\mathcal{P}) \le \mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I})+\mathbf {Adv}^{\mathrm {ex}}_{\mathsf {ID}}(\mathcal{A})+q_c \cdot 2^{-{\mathsf {cl}}} . \end{aligned}$$

In both cases, the running times of \(\mathcal{I}\) and \(\mathcal{A}\) are about that of \(\mathcal{P}\) plus the time for one execution of \(\mathsf {ID}.\mathsf {Ex}\).

Above, \(\mathrm {CIMP}\text{- }\mathrm {UU}\) was established assuming long challenges. We note that this is necessary, meaning \(\mathrm {CIMP}\text{- }\mathrm {UU}\) does not hold for short challenges, such as one-bit ones. To see this, assume \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\) and \(q \ge 1\) is a parameter. Consider the following attack (adversary) \(\mathcal{P}\). It makes a single query to its transcript oracle \(\textsc {Tr}\) for challenge length \({\mathsf {cl}}\), obtaining a transcript \(Y{\Vert }c{\Vert }z\). Then for \(i=1,\ldots ,q\) it queries \(\textsc {Ch}\) on this transcript, obtaining a fresh random challenge \(c_i\). If there is a k such that \(c_k=c\) then it returns \((k,z)\) and wins, else it returns \(\bot \). We have

$$\begin{aligned} \mathbf {Adv}^\mathrm{cimp\text{- }uu}_{\mathsf {ID}}(\mathcal{P}) = 1- \left( 1-\frac{1}{2^{{\mathsf {cl}}}}\right) ^q \approx \frac{q}{2^{{\mathsf {cl}}}} . \end{aligned}$$

Thus, roughly, the attack succeeds in time \(2^{{\mathsf {cl}}}\), so if the latter is small, \(\mathrm {CIMP}\text{- }\mathrm {UU}\) security will not hold. Our \(\mathbf {H2}\) transform will use long challenges and can thus rely on \(\mathrm {CIMP}\text{- }\mathrm {UU}\) alone, but our \(\mathbf {ID2}\) transform will require security for both long and short (1-bit) challenges, and thus will rely on \(\mathrm {CIMP}\text{- }\mathrm {UC}\) in addition to \(\mathrm {CIMP}\text{- }\mathrm {UU}\). We note that given Lemma 1, we could use \(\mathrm {CIMP}\text{- }\mathrm {UC}\) throughout, but for the reductions it is simpler and more convenient to work with \(\mathrm {CIMP}\text{- }\mathrm {UU}\) when possible.
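The success probability displayed above is easy to check numerically (a quick sanity check of ours, not from the paper):

```python
# Numeric check of the attack's success probability 1 - (1 - 2^-cl)^q
# against the approximation q / 2^cl, for a short challenge length.
cl = 16
for qn in (1, 16, 1024):
    exact = 1 - (1 - 2.0 ** -cl) ** qn
    approx = qn / 2.0 ** cl
    print(f"q={qn}: exact={exact:.6f} approx={approx:.6f}")
```

As expected, the approximation is an upper bound (by the union bound) and is tight while \(q \ll 2^{{\mathsf {cl}}}\).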

4 DAPS Definitions

Let \(\mathsf {DS}\) be a signature scheme. When used as a DAPS [15, 16], a message \(m=(a,p)\) for \(\mathsf {DS}\) is a pair consisting of an address \(a\) and a payload \(p\). We require (1) the double authentication prevention (DAP) property and (2) a restricted form of unforgeability, as defined below.

Fig. 4. Games defining unforgeability and the DAP property of signature scheme \(\mathsf {DS}\).

The DAP property. Call messages \(m_1 = (a_1,p_1)\) and \(m_2 = (a_2,p_2)\) colliding if \(a_1 = a_2\) but \(p_1 \ne p_2\). Double authentication prevention (DAP) [15, 16] requires that possession of signatures on colliding messages allows anyone to extract the signing key. It is captured formally by the advantage \(\mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})= \Pr [\mathbf {G}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})]\) associated to adversary \(\mathcal{A}\), where game \(\mathbf {G}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\) is in Fig. 4. The adversary produces messages \(m_1,m_2\) and signatures \(\sigma _1,\sigma _2\), and an extraction algorithm \(\mathsf {DS}.\mathsf {Ex}\) associated to the scheme then attempts to compute the signing key. The adversary wins if the key produced by \(\mathsf {DS}.\mathsf {Ex}\) is different from the true signing key yet extraction should have succeeded, meaning the messages were colliding and their signatures were valid. The adversary is given the signing key as input to cover the fact that the signer is the one attempting—due to coercion and subversion, but nonetheless—to produce signatures on colliding messages. (And thus it does not need access to a \(\textsc {Sign}\) oracle.) We note that we are not saying it is hard to produce signatures on colliding messages—it isn’t, given the signing key—but rather that doing so will reveal that key. We also stress that extraction is not required just for honestly-generated signatures, but for any, even adversarially generated, signatures that are valid, again because the signer is the adversary here.

Unforgeability. Let \(\mathbf {Adv}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})= \Pr [\mathbf {G}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})]\) be the uf-advantage associated to adversary \(\mathcal{A}\), where game \(\mathbf {G}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})\) is in Fig. 4. This is the classical notion of [10], except that the addresses of the messages the signer signs must all be different, as captured through the set A in the game. This is necessary because the double authentication prevention requirement precludes security if the signer releases signatures of two messages with the same address. In practice it means that the signer must maintain a log of all messages it has signed and make sure that it does not sign two messages with the same address. A CA is likely to maintain such a log in any case, so this is unlikely to be an extra burden.
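In code, the required discipline amounts to keeping a set of used addresses and refusing repeats. The following sketch (ours; `sign_fn` is a hypothetical stand-in for the underlying DAPS signing algorithm) illustrates this:

```python
class AddressLoggingSigner:
    """Wrapper enforcing the DAPS usage discipline: never sign two
    messages with the same address. `sign_fn` is a placeholder for
    the actual DAPS signing algorithm."""
    def __init__(self, sign_fn):
        self.sign_fn = sign_fn
        self.used_addresses = set()  # plays the role of the set A in the uf game

    def sign(self, address, payload):
        if address in self.used_addresses:
            raise ValueError(f"address already signed: {address!r}")
        self.used_addresses.add(address)
        return self.sign_fn(address, payload)

signer = AddressLoggingSigner(lambda a, p: f"sig({a},{p})")
assert signer.sign("example.com", "pk1") == "sig(example.com,pk1)"
try:
    signer.sign("example.com", "pk2")  # second payload for same address
    assert False, "duplicate address must be rejected"
except ValueError:
    pass
```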

Discussion. Regarding the dap property, asking that the key returned by the extractor \(\mathsf {DS}.\mathsf {Ex}\) be equal to the signing key may seem unnecessarily strong. It might suffice if the extracted key was “functionally equivalent” to the signing key, allowing computation of signatures that could not be distinguished from real ones. Such a property is considered in PS [16]. Formalizing it would require adding another security game based on indistinguishability. As our schemes (as well as the ones from [15, 16]) achieve the simpler and stronger property we have defined, we adopt it in our definition.

The dap game chooses the keys honestly. Allowing these to be adversarially chosen would result in a stronger requirement, also formalized in PS [15, 16]. Our view is that our (weaker) requirement is appropriate for the application we envision because the CA does not wish to create rogue certificates and has no incentive to create keys allowing it, and the court order happens after the CA and its keys are established, so that key establishment is honest.

5 Our ID to DAPS Transforms

We specify and analyze our two generic transformations, \(\mathbf {H2}\) and \(\mathbf {ID2}\), of trapdoor identification schemes to DAPS. Both deliver efficient DAPS, signature sizes being somewhat smaller in the second case.

5.1 The Double-Hash Transform

The construction. Let \(\mathsf {ID}\) be a trapdoor identification scheme. Our \(\mathbf {H2}\) (double hash) transform associates to it, a supported challenge length \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\), and a seed length, a DAPS \(\mathsf {DS}\). The algorithms of \(\mathsf {DS}\) are defined in Fig. 5. We give some intuition on the design. In the signing algorithm, we specify the commitment \(Y\) as a hash of the address, i.e., messages with the same address result in transcripts with the same commitment. We then specify the challenge \(c\) as a hash of the message (i.e., address and payload) and a random seed. Signatures consist of the seed and the corresponding response. Concerning the extractability property, observe that the \(\mathsf {ID}.\mathsf {Ex}\) algorithm, when applied to colliding signature transcripts, reveals the identification secret key but not the trapdoor, whereas DAPS extraction needs to recover both, i.e., the full secret key. We resolve this by putting in the verification key a particular encryption, denoted \( ITK \), of the trapdoor, under the identification secret key (we assume the trapdoor can be encoded in \(\mathsf {tl}\) bits).
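The signing flow can be summarized in a short structural sketch (ours, not the paper's pseudocode; `rsp` is a hypothetical stand-in for the trapdoor response computation \(\mathsf {ID}.\mathsf {Rsp}\), and SHA-256 stands in for the random oracles):

```python
import hashlib, os

def h(i: int, data: bytes, nbytes: int) -> bytes:
    # domain-separated hash standing in for the random oracles
    return hashlib.sha256(bytes([i]) + data).digest()[:nbytes]

def h2_sign(a: bytes, p: bytes, rsp, sl: int = 16, cl: int = 2):
    """Sketch of H2 signing: the commitment depends only on the address,
    so messages with the same address share a commitment; the challenge
    hashes address, payload and a fresh random seed."""
    Y = h(1, a, 32)            # commitment Y as a hash of the address
    s = os.urandom(sl)         # random seed
    c = h(2, a + p + s, cl)    # challenge from (address, payload, seed)
    z = rsp(Y, c)              # response, computed via the trapdoor
    return (s, z)              # signature = seed plus response

# Two signatures on the same message differ (fresh seeds each time),
# even though the underlying commitment hashes the address only.
toy_rsp = lambda Y, c: Y + c   # toy placeholder, not a real ID scheme
assert h2_sign(b"example.com", b"pk1", toy_rsp) != h2_sign(b"example.com", b"pk1", toy_rsp)
```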

The scheme uses three random oracles: \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\), \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\), and one mapping into the commitment space of \(\mathsf {ID}\). For simplicity it is assumed that the three range sets involved here are distinct, which makes the random oracles independent. If the range sets are not distinct, the scheme must be modified to use domain separation [4] in calling these oracles. This can be done simply by prefixing the query to the i-th oracle with i (\(i=1,2,3\) for our three oracles).
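The prefixing trick can be sketched as follows (a minimal illustration with SHA-256 standing in for the random oracle):

```python
import hashlib

def oracle(i: int, query: bytes, out_bytes: int) -> bytes:
    """Domain-separated oracle: prefixing the query with the oracle
    index i makes the oracles behave independently even when their
    range sets coincide. Output is extended by a counter as needed."""
    out = b""
    ctr = 0
    while len(out) < out_bytes:
        out += hashlib.sha256(bytes([i]) + ctr.to_bytes(4, "big") + query).digest()
        ctr += 1
    return out[:out_bytes]

# Same query, different oracle index: unrelated outputs.
assert oracle(1, b"addr", 32) != oracle(2, b"addr", 32)
assert oracle(1, b"addr", 32) == oracle(1, b"addr", 32)  # still a function
```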

Fig. 5. Our construction of a DAPS from a trapdoor identification scheme \(\mathsf {ID}\), a challenge length \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\), and a seed length.

DAP security of our construction. The following confirms that double authentication prevention is achieved. We model \(\textsc {H}\) as a random oracle.

Theorem 1

Let DAPS \(\mathsf {DS}\) be obtained from trapdoor identification scheme \(\mathsf {ID}\), challenge length \({\mathsf {cl}}\), and seed length as above. Let \(\mathcal{A}\) be an adversary making \(q\ge 2\) distinct \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\) queries. If \(\mathsf {ID}\) has perfect extractability then

$$\begin{aligned} \mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\le q(q-1)/2^{{\mathsf {cl}}+1} . \end{aligned}$$

Proof

(Theorem 1). In game \(\mathbf {G}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\) of Fig. 4, consider the execution of the algorithm \(\mathsf {DS}.\mathsf {Ex}^{\textsc {H}}\) of Fig. 5 on the messages and signatures output by \(\mathcal{A}\). Let \(Y_1{\Vert }c_1{\Vert }z_1,Y_2{\Vert }c_2{\Vert }z_2\) be the transcripts computed within. Assume \(\sigma _1,\sigma _2\) are valid signatures of \(m_1,m_2\), respectively, relative to the verification key. As per the verification algorithm \(\mathsf {DS}.\mathsf {Vf}^{\textsc {H}}\) of Fig. 5 this means that the transcripts \(Y_1{\Vert }c_1{\Vert }z_1,Y_2{\Vert }c_2{\Vert }z_2\) are valid under the ID scheme. If the messages \(m_1= (a_1,p_1)\) and \(m_2 = (a_2, p_2)\) output by \(\mathcal{A}\) are colliding then we also have \(Y_1=Y_2\). This is because \(a_1=a_2\) and verification ensures that each commitment is the hash of the corresponding address. So if \(c_1\ne c_2\) then the extraction property of \(\mathsf {ID}\) ensures that the identification secret key is recovered. If so, we can also decrypt \( ITK \) to obtain the trapdoor, so that the full secret key is recovered. So \(\mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\) is at most the probability that the challenges are equal even though the payloads are not. But the challenges are outputs of \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\), to which the game makes at most q queries. So the probability that these challenges collide is at most \(q(q-1)/2^{{\mathsf {cl}}+1}\).    \(\square \)
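The final step is a standard birthday bound. The following sketch (ours) checks numerically that \(q(q-1)/2^{{\mathsf {cl}}+1}\) dominates the exact collision probability of q uniform \({\mathsf {cl}}\)-bit values:

```python
from fractions import Fraction

def collision_bound(q: int, cl: int) -> Fraction:
    """Birthday bound: the probability that q uniform cl-bit values
    contain a collision is at most q(q-1)/2^(cl+1)."""
    return Fraction(q * (q - 1), 2 ** (cl + 1))

def exact_collision_probability(q: int, cl: int) -> Fraction:
    """Exact collision probability for q independent uniform cl-bit values."""
    n = 2 ** cl
    no_coll = Fraction(1)
    for i in range(q):
        no_coll *= Fraction(n - i, n)
    return 1 - no_coll

# The bound dominates the exact probability.
assert exact_collision_probability(8, 8) <= collision_bound(8, 8)
```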

We note that this proof does not essentially rely on \(\textsc {H}\) being a random oracle.

Unforgeability of our construction. The following shows that the restricted unforgeability of our DAPS tightly reduces to the \(\mathrm {cimp}\text{- }\mathrm {uu}\) plus kr security of the underlying ID scheme. As before we model \(\textsc {H}\) as a random oracle.

Theorem 2

Let DAPS \(\mathsf {DS}\) be obtained from trapdoor identification scheme \(\mathsf {ID}\), challenge length \({\mathsf {cl}}\), and seed length as in Fig. 5. Let \(\mathcal{A}\) be a uf adversary against \(\mathsf {DS}\) and suppose the numbers of queries that \(\mathcal{A}\) makes to its \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\), commitment-space \(\textsc {H}\), \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\), and \(\textsc {Sign}\) oracles are, respectively, \(q_1,q_2,q_3,q_s\). Then from \(\mathcal{A}\) we can construct \(\mathrm {cimp}\text{- }\mathrm {uu}\) adversary \(\mathcal{P}\) and kr adversary \(\mathcal{I}\) such that

Adversaries \(\mathcal{P},\mathcal{I}\) make \(q_2+q_s+1\) queries to \(\textsc {Tr}\). Adversary \(\mathcal{P}\) makes \(q_3\) queries to \(\textsc {Ch}\). The running time of adversary \(\mathcal{P}\) is about that of \(\mathcal{A}\). The running time of adversary \(\mathcal{I}\) is that of \(\mathcal{A}\) plus the time for \(q_1\) executions of \(\mathsf {ID}.\mathsf {KVf}\).

Fig. 6. Games for proof of Theorem 2. Games \(\mathrm {G}_1,\mathrm {G}_2\) include the boxed code and games \(\mathrm {G}_0,\mathrm {G}_3\) do not.

Proof

(Theorem 2). We assume that \(\mathcal{A}\) avoids certain pointless behavior that would only cause it to lose. Thus, we assume that, in the messages it queries to \(\textsc {Sign}\), the addresses are all different. Also we assume it did not query to \(\textsc {Sign}\) the message \(m\) in the forgery \((m,\sigma )\) that it eventually outputs. The two together mean that the sets AM in game \(\mathbf {G}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})\), and the code and checks associated with them, are redundant and can be removed. We will work with this simplified form of the game, which we call \(\mathrm {G}_0\).

Identical-until-\(\mathsf {bad}\) games \(\mathrm {G}_0,\mathrm {G}_1\) of Fig. 6 let us pick a random seed in responding to a \(\textsc {Sign}\) query, regardless of whether the corresponding hash-table entry was already defined. We have

$$\begin{aligned} \mathbf {Adv}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A}) = \Pr [\mathrm {G}_0]&= \Pr [\mathrm {G}_1] + \Pr [\mathrm {G}_0]-\Pr [\mathrm {G}_1] \\&\le \Pr [\mathrm {G}_1] + \Pr [\mathrm {G}_0\,\text{ sets }\,\mathsf {bad}] , \end{aligned}$$

where the inequality is by the Fundamental Lemma of Game Playing of [5]. The random choice of \(s\) made by procedure \(\textsc {Sign}\) lets us bound \(\Pr [\mathrm {G}_0\,\text{ sets }\,\mathsf {bad}]\) by a birthday-type term in the seed length.

Now we need to bound \(\Pr [\mathrm {G}_1]\). We start by considering whether the ciphertext \( ITK \) helps \(\mathcal{A}\) over and above access to \(\textsc {Sign}\). Consider the games \(\mathrm {G}_2,\mathrm {G}_3\) of Fig. 6. They pick \( ITK \) directly at random rather than as prescribed in the scheme. However, via the boxed code that it contains, game \(\mathrm {G}_2\) compensates, replying to \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\) queries in such a way that the prescribed relation between \( ITK \) and the secret key is maintained. Thus \(\mathrm {G}_2\) is equivalent to \(\mathrm {G}_1\). Game \(\mathrm {G}_3\) omits the boxed code, but the games are identical-until-\(\mathsf {bad}\). So we have

$$\begin{aligned} \Pr [\mathrm {G}_1] = \Pr [\mathrm {G}_2]&= \Pr [\mathrm {G}_3] + \Pr [\mathrm {G}_2] - \Pr [\mathrm {G}_3] \nonumber \\&\le \Pr [\mathrm {G}_3] + \Pr [\mathrm {G}_3\,\text{ sets }\,\mathsf {bad}] , \end{aligned}$$
(1)

where again the inequality is by the Fundamental Lemma of Game Playing of [5]. Now we have two tasks, namely to bound \(\Pr [\mathrm {G}_3]\) and to bound \(\Pr [\mathrm {G}_3\,\text{ sets }\,\mathsf {bad}]\). The first corresponds to showing \(\mathcal{A}\) cannot forge if ciphertext \( ITK \) is random, and the second corresponds to showing that changing the ciphertext to random makes little difference. The first relies on the \(\mathrm {cimp}\text{- }\mathrm {uu}\) security of \(\mathsf {ID}\), the second on its kr security.

To bound \(\Pr [\mathrm {G}_3]\), consider game \(\mathrm {G}_4\) of Fig. 7. It moves us towards using \(\mathrm {cimp}\text{- }\mathrm {uu}\) by generating conversation transcripts \(Y_i{\Vert }c_i{\Vert }z_i\) and having \(\textsc {Sign}\) use these. We have

$$\begin{aligned} \Pr [\mathrm {G}_3] = \Pr [\mathrm {G}_4] . \end{aligned}$$

We build \(\mathrm {cimp}\text{- }\mathrm {uu}\) adversary \(\mathcal{P}\) so that

$$\begin{aligned} \Pr [\mathrm {G}_4] \le \mathbf {Adv}^\mathrm{cimp\text{- }uu}_{\mathsf {ID}}(\mathcal{P}) . \end{aligned}$$

The construction of \(\mathcal{P}\) is described in detail in Fig. 8. The idea is as follows. Adversary \(\mathcal{P}\) uses its transcript oracle \(\textsc {Tr}\) to generate the transcripts that \(\mathrm {G}_4\) generates directly. It can then simulate \(\mathcal{A}\)’s \(\textsc {Sign}\) oracle as per game \(\mathrm {G}_4\). Simulation of \(\textsc {H}(\cdot ,\mathrm {Rng})\) is done directly as in the game for \(\mathrm {Rng}=\{0,1\}^{\mathsf {tl}}\) and for the commitment space. When a query x is made to \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\), adversary \(\mathcal{P}\) parses x as \(a{\Vert }p{\Vert }s\), sends the index of the corresponding \(\textsc {Tr}\) query to its challenge oracle \(\textsc {Ch}\) to get back a challenge, and returns this challenge as the response to the oracle query. Finally, when \(\mathcal{A}\) produces a forgery, the response in the corresponding signature is output as an impersonation that is successful as long as the forgery was valid.

To bound \(\Pr [\mathrm {G}_3\,\text{ sets }\,\mathsf {bad}]\), consider game \(\mathrm {G}_5\) of Fig. 7. It answers \(\textsc {Sign}\) queries just like \(\mathrm {G}_4\), and the only modification in answering \(\textsc {H}\) queries is to keep track of queries to \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\) in the set T. The game ignores the forgery, returning \(\mathsf {true}\) if the identification secret key was queried to \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\). We have

$$\begin{aligned} \Pr [\mathrm {G}_3\,\text{ sets }\,\mathsf {bad}] = \Pr [\mathrm {G}_5] . \end{aligned}$$

We build \(\mathcal{I}\) so that

$$\begin{aligned} \Pr [\mathrm {G}_5] \le \mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I}) . \end{aligned}$$

The idea is simple, namely that if the adversary queries the identification secret key to \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\) then we can obtain it by watching the oracle queries of \(\mathcal{A}\). The difficulty is that, to run \(\mathcal{A}\), one first has to simulate answers to \(\textsc {Sign}\) queries using transcripts, and it is to enable this that we moved to \(\mathrm {G}_5\). Again the game was crafted to make the construction of adversary \(\mathcal{I}\) quite direct. The construction is described in detail in Fig. 8. The simulation of the \(\textsc {Sign}\) oracle is as before. The simulation of \(\textsc {H}\) is more direct, following game \(\mathrm {G}_5\) rather than invoking the \(\textsc {Ch}\) oracle. When \(\mathcal{A}\) returns its forgery, the set T contains candidates for the identification secret key. Adversary \(\mathcal{I}\) now verifies each candidate using the key-verification algorithm of the identification scheme, returning a successful candidate if one exists.   \(\square \)

Fig. 7. More games for the proof of Theorem 2.

Fig. 8. Adversaries for proof of Theorem 2.

5.2 The Double-ID Transform

Our \(\mathbf {ID2}\) transform roughly maintains signing and verifying time compared to \(\mathbf {H2}\), but signatures are shorter, consisting of an ID response plus one bit. Since the verifier can try both possibilities for this bit, even this bit is expendable if one is willing to double the verification time.
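Dropping the bit changes verification as sketched below, where `verify_with_bit` is a hypothetical stand-in for full \(\mathbf {ID2}\) verification with an explicit \(c_1\):

```python
def verify_dropping_bit(a, p, z2, verify_with_bit):
    """ID2 signatures are (c1, z2) with c1 a single bit. The bit can be
    dropped from the signature if the verifier simply tries both values,
    at the cost of up to doubling the verification time."""
    return verify_with_bit(a, p, 0, z2) or verify_with_bit(a, p, 1, z2)

# Toy check: a verifier that accepts only c1 = 1 for this (a, p, z2).
assert verify_dropping_bit("a", "p", "z", lambda a, p, c1, z: c1 == 1)
assert not verify_dropping_bit("a", "p", "z", lambda a, p, c1, z: False)
```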

The construction. Our construction assumes two main ingredients. The first is a trapdoor identification scheme \(\mathsf {ID}\) that is commitment recovering, has unique responses, and simultaneously supports challenge lengths 1 and \({\mathsf {cl}}\gg 1\). For the choice of \({\mathsf {cl}}\) we further assume that, for all verification keys, the response space for 1-bit challenges has the same cardinality as the commitment space for \({\mathsf {cl}}\)-bit challenges. The second ingredient is a random bijection \(\varPi \) (cf. Sect. 2) between these two sets, i.e., oracle \(\varPi ^{+1}\) implements a random mapping from the response space for 1-bit challenges to the commitment space for \({\mathsf {cl}}\)-bit challenges, and oracle \(\varPi ^{-1}\) implements its inverse. In Sect. 6 we discuss trapdoor ID schemes that fulfill these requirements and show how random bijections with the required domain and range can be obtained.

Fig. 9. Our construction of a DAPS \(\mathbf {ID2}[\mathsf {ID},{\mathsf {cl}}]\) from a trapdoor identification scheme \(\mathsf {ID}\), where \(\{1,{\mathsf {cl}}\}\subseteq \mathsf {ID}.\mathsf {clS}\).

The details of the \(\mathbf {ID2}\) transform are specified in Fig. 9. We write \(\textsc {H}_1(\cdot )\) as shorthand for the random oracle mapping into the commitment space, and \(\textsc {H}_2(\cdot ,\cdot )\) as shorthand for \(\textsc {H}((\cdot ,\cdot ),\{0,1\}^{\mathsf {cl}})\). As in Sect. 5.1 we assume these random oracles are independent. Key generation is as in \(\mathbf {H2}\). Signing works as follows: First a commitment \(Y_1\leftarrow \textsc {H}_1(a)\) is derived from the address, then a random 1-bit challenge \(c_1\) is picked and the corresponding response \(z_1\) of the ID scheme computed. Using bijection \(\varPi ^{+1}\), response \(z_1\) is mapped to a commitment \(Y_2\). A corresponding \({\mathsf {cl}}\)-bit challenge is derived from the address and the payload per \(c_2\leftarrow \textsc {H}_2(a,p)\). The DAPS signature consists of the response \(z_2\) corresponding to \(Y_2\) and \(c_2\), together with the one-bit challenge \(c_1\). Signatures are verified by using the commitment recovery algorithm \(\mathsf {ID}.\mathsf {Rsp}^{-1}\) to recover \(Y_2\) from \(z_2\), computing \(z_1\leftarrow \varPi ^{-1}(Y_2)\), recovering \(Y_1\) from \(c_1\) and \(z_1\) (again using the commitment recovery algorithm), and comparing with \(\textsc {H}_1(a)\). Extraction algorithm \(\mathsf {DS}.\mathsf {Ex}\) works in the obvious way.
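The signing chain can be summarized in a short sketch (ours; every argument abstracts one of the components above and is a hypothetical stand-in, not the paper's pseudocode):

```python
def id2_sign(a, p, H1, H2, Pi_fwd, rsp, coin):
    """Data flow of ID2 signing:
      H1: address -> commitment Y1;   c1: random 1-bit challenge;
      z1: response for (Y1, c1);      Y2 = Pi^{+1}(z1);
      c2 = H2(a, p);                  z2: response for (Y2, c2).
    The signature is (c1, z2)."""
    Y1 = H1(a)           # commitment derived from the address
    c1 = coin()          # random 1-bit challenge
    z1 = rsp(Y1, c1)     # ID response for 1-bit challenge length
    Y2 = Pi_fwd(z1)      # random bijection maps response to a commitment
    c2 = H2(a, p)        # cl-bit challenge from address and payload
    z2 = rsp(Y2, c2)     # ID response for cl-bit challenge length
    return (c1, z2)

# Toy instantiation just to exercise the plumbing.
sig = id2_sign("addr", "pay", H1=len, H2=lambda a, p: a + p,
               Pi_fwd=lambda z: z, rsp=lambda Y, c: (Y, c), coin=lambda: 1)
assert sig == (1, ((4, 1), "addrpay"))
```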

DAP security. The \(\mathbf {ID2}\) construction achieves double authentication prevention, as the following result confirms. The proof relies on the extractability property of the ID scheme twice: once for each challenge length. We model \(\textsc {H}\) as a random oracle as usual. Nothing is assumed of \(\varPi \) other than it being a bijection.

Theorem 3

Let DAPS \(\mathsf {DS}= \mathbf {ID2}[\mathsf {ID},{\mathsf {cl}}]\) be obtained from trapdoor identification scheme \(\mathsf {ID}\) and challenge length \({\mathsf {cl}}\) as above. Let \(\mathcal{A}\) be an adversary making at most q queries to the \(\textsc {H}_2(\cdot ,\cdot )=\textsc {H}((\cdot ,\cdot ),\{0,1\}^{{\mathsf {cl}}})\) oracle. If \(\mathsf {ID}\) has unique responses and perfect extractability, then \(\mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\le q(q-1)/2^{{\mathsf {cl}}+1}\).

Proof

(Theorem 3). Assume, in experiment \(\mathbf {G}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A})\), that the adversary outputs message-signature pairs \((m_1,\sigma _1)\) and \((m_2,\sigma _2)\) such that for \(i\in \{1,2\}\) signature \(\sigma _i\) is valid on \(m_i\). The latter implies for \(m_i=(a_i,p_i)\) and \(\sigma _i=(c_{1,i},z_{2,i})\) that for recoverable values \(z_{1,i},Y_{2,i}\) and the corresponding transcripts \(T_{1,i}=\textsc {H}_1(a_i){\Vert }c_{1,i}{\Vert }z_{1,i}\) and \(T_{2,i}=Y_{2,i}{\Vert }\textsc {H}_2(a_i,p_i){\Vert }z_{2,i}\) we have that both transcripts are valid under \(\mathsf {ID}\) and \(Y_{2,i}=\varPi ^{+1}(z_{1,i})\). Assume \(a_1=a_2\) and \(p_1\ne p_2\). We have either \(c_{1,1}\ne c_{1,2}\) or \(c_{1,1}=c_{1,2}\). In the former case, the two transcripts \(T_{1,1},T_{1,2}\) have the same commitment but different challenges. This allows us to extract the identification secret key via the extractability property of \(\mathsf {ID}\); further, by decrypting \( ITK \) we can recover the trapdoor, as required. Consider thus the case \(c_{1,1}=c_{1,2}\), which implies \(z_{1,1}=z_{1,2}\) and \(Y_{2,1}=Y_{2,2}\) by the unique response property of \(\mathsf {ID}\). If \(\textsc {H}_2(a_1,p_1)\ne \textsc {H}_2(a_2,p_2)\) we can extract the secret key from the two transcripts \(T_{2,1},T_{2,2}\) as above. As \(p_1\ne p_2\) and \(\textsc {H}\) is a random oracle, the probability that \(\textsc {H}_2(a_1,p_1)=\textsc {H}_2(a_2,p_2)\) is at most \(q(q-1)/2^{{\mathsf {cl}}+1}\).    \(\square \)

Fig. 10. Games \(\mathrm {G}_0,\mathrm {G}_1\) for proof of Theorem 4. Game \(\mathrm {G}_0\) includes the boxed code and game \(\mathrm {G}_1\) does not.

Unforgeability. The following establishes that if the ID scheme offers \(\mathrm {cimp}\text{- }\mathrm {uc}\) and kr security, then \(\mathbf {ID2}\) transforms it into an unforgeable DAPS. Here we model \(\textsc {H}\) as a random oracle and \(\varPi \) as a public random bijection.

Theorem 4

Let DAPS \(\mathsf {DS}= \mathbf {ID2}[\mathsf {ID},{\mathsf {cl}}]\) be obtained from trapdoor identification scheme \(\mathsf {ID}\) as in Fig. 9. Let N be the minimum cardinality of the commitment space for challenge length \({\mathsf {cl}}\), where the minimum is over all verification keys. Let \(\mathcal{A}\) be a uf adversary against \(\mathsf {DS}\) and suppose the numbers of queries that \(\mathcal{A}\) makes to its \(\textsc {H}(\cdot ,\{0,1\}^{\mathsf {tl}})\), \(\textsc {H}_1\), \(\textsc {H}(\cdot ,\{0,1\}^{{\mathsf {cl}}})\), \(\varPi ^{\pm 1}\), and \(\textsc {Sign}\) oracles are, respectively, \(q_1,q_2,q_3,q_4,q_s\). Then from \(\mathcal{A}\) we can construct dap adversary \(\mathcal{A}'\), kr adversary \(\mathcal{I}\), and \(\mathrm {cimp}\text{- }\mathrm {uc}\) adversaries \(\mathcal{P}_1,\mathcal{P}_2\) such that

$$\begin{aligned} \mathbf {Adv}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})&\le \mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A}') + \mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I}) \\&\quad + 2\mathbf {Adv}^\mathrm{cimp\text{- }uc}_{\mathsf {ID}}(\mathcal{P}_1)+2\mathbf {Adv}^\mathrm{cimp\text{- }uc}_{\mathsf {ID}}(\mathcal{P}_2)+\frac{(q_4+q_s)^2}{2N}. \end{aligned}$$

Adversaries \(\mathcal{I},\mathcal{P}_1,\mathcal{P}_2\) make \(q_2+q_3+q_4+q_s\) queries to \(\textsc {Tr}\), and adversaries \(\mathcal{P}_1,\mathcal{P}_2\) make one query to \(\textsc {Ch}\). Beyond that, the running time of \(\mathcal{A}',\mathcal{P}_1,\mathcal{P}_2\) is about that of \(\mathcal{A}\), and the running time of \(\mathcal{I}\) is that of \(\mathcal{A}\) plus the time for \(q_1\) executions of \(\mathsf {ID}.\mathsf {KVf}\).

Fig. 11. Game \(\mathrm {G}_2\) for proof of Theorem 4.

Proof

(Theorem 4). In the proof, we handle queries to the random bijection \(\varPi \) (with oracles \(\varPi ^{+1}\) and \(\varPi ^{-1}\)) via lazy sampling and track input-output pairs using a table \(\mathrm {PT}\). Notation-wise we consider \(\mathrm {PT}\) a binary relation to which a mapping of the form \(\varPi ^{+1}(\alpha )=\beta \) or, equivalently, \(\varPi ^{-1}(\beta )=\alpha \) can be added by assigning \(\mathrm {PT}\leftarrow \mathrm {PT}\cup \{(\alpha ,\beta )\}\). We use functional expressions for table look-up, e.g., whenever \((\alpha ,\beta )\in \mathrm {PT}\) we write \(\mathrm {PT}^{+1}(\alpha )=\beta \) and \(\mathrm {PT}^{-1}(\beta )=\alpha \). We denote the domain of \(\mathrm {PT}\) by \({{\mathrm{dom}}}(\mathrm {PT})=\{\alpha :(\alpha ,\beta )\in \mathrm {PT}\text { for some }\beta \}\), and its range by \({{\mathrm{rng}}}(\mathrm {PT})=\{\beta :(\alpha ,\beta )\in \mathrm {PT}\text { for some }\alpha \}\).
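A lazily sampled random bijection with such a table can be sketched as follows (ours; the resampling loops mirror the collision handling discussed below for the resampling steps of \(\varPi ^{+1}\) and \(\varPi ^{-1}\)):

```python
import random

class LazyRandomBijection:
    """Lazily sampled random bijection between two finite sets, with a
    table PT of already-defined pairs. Resampling on collision keeps
    the partial map injective."""
    def __init__(self, domain, rng, seed=0):
        self.domain, self.rng = list(domain), list(rng)
        self.pt = {}        # PT: alpha -> beta
        self.pt_inv = {}    # PT^{-1}: beta -> alpha
        self.r = random.Random(seed)

    def forward(self, alpha):                   # Pi^{+1}
        if alpha not in self.pt:
            beta = self.r.choice(self.rng)
            while beta in self.pt_inv:          # resample on collision
                beta = self.r.choice(self.rng)
            self.pt[alpha], self.pt_inv[beta] = beta, alpha
        return self.pt[alpha]

    def inverse(self, beta):                    # Pi^{-1}
        if beta not in self.pt_inv:
            alpha = self.r.choice(self.domain)
            while alpha in self.pt:             # resample on collision
                alpha = self.r.choice(self.domain)
            self.pt[alpha], self.pt_inv[beta] = beta, alpha
        return self.pt_inv[beta]

pi = LazyRandomBijection(range(16), range(16))
images = [pi.forward(x) for x in range(16)]
assert sorted(images) == list(range(16))        # bijective on the full domain
assert all(pi.inverse(pi.forward(x)) == x for x in range(16))
```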

Without loss of generality we assume from \(\mathcal{A}\) the following behavior: (a) if \(\mathcal{A}\) outputs a forgery attempt \((m,\sigma )\) then \(\sigma \) was not returned by \(\textsc {Sign}\) on input \(m\); (b) \(\mathcal{A}\) does not query \(\textsc {Sign}\) twice on the same address; (c) for all messages \(m=(a,p)\), \(\mathcal{A}\) always queries \(\textsc {H}_1(a)\) before \(\textsc {H}_2(a,p)\); further, \(\mathcal{A}\) always queries \(\textsc {H}_2(a,p)\) before querying \(\textsc {Sign}(m)\); (d) before outputting a forgery attempt, \(\mathcal{A}\) makes all random oracle and random bijection queries required by the verification algorithm to verify the signature. We further may assume that \(\mathcal{A}\) does not forge on an address \(a\) for which it queried a signature before: Otherwise, by DAP security, the adversary could extract the secret key and forge also on a fresh address; this is accounted for by the \(\mathbf {Adv}^{\mathrm {dap}}_{\mathsf {DS}}(\mathcal{A}')\) term in the theorem statement. The correspondingly simplified form of the \(\mathbf {G}^{\mathrm {uf}}_{\mathsf {DS}}(\mathcal{A})\) game is given as \(\mathrm {G}_0\) in Fig. 10. (Note that queries to \(\varPi ^{+1}\) and \(\varPi ^{-1}\) are expected to be answered with elements drawn uniformly at random from the range and the domain of \(\varPi \), respectively, and that our implementation does precisely this, though in an initially surprising form.)

Observe that in \(\mathrm {G}_0\) the flag \(\mathsf {bad}\) is set when resampling is required in the processing of \(\varPi ^{+1}\) and \(\varPi ^{-1}\). The probability that this happens is at most \((0+1+\ldots +(q_4+q_s-1))/N\), where N is the minimum cardinality of the commitment space for challenge length \({\mathsf {cl}}\), as defined in the theorem statement. We define game \(\mathrm {G}_1\) like \(\mathrm {G}_0\) but with the resampling steps in the \(\varPi ^{+1}\) and \(\varPi ^{-1}\) oracles removed. We obtain

$$\begin{aligned} \Pr [\mathrm {G}_0] \le \Pr [\mathrm {G}_1] + \frac{(q_4+q_s)^2-(q_4+q_s)}{2N} . \end{aligned}$$

Consider next game \(\mathrm {G}_2\) from Fig. 11. It is obtained from \(\mathrm {G}_1\) by applying the following rewriting steps. First, instead of computing \( ITK \) as prescribed in the scheme, it picks \( ITK \) at random and programs random oracle \(\textsc {H}\) such that the prescribed relation is maintained. Second, the way random oracle queries of the form \(\textsc {H}_1(x)\) and \(\textsc {H}(x,\{0,1\}^{{\mathsf {cl}}})\) are processed is changed: Now, the internal \(\textsc {Transc}\) algorithm is invoked to produce full identification transcripts for the corresponding challenge length; the \(\textsc {H}\) oracle outputs one component of these transcripts and keeps the other components for itself. Also the implementation of \(\varPi ^{+1}\) is modified to use the \(\textsc {Transc}\) algorithm.

Concerning the \(\textsc {Sign}\) oracle, observe that \(\mathrm {G}_1\) samples challenge \(c_1\) and derives the corresponding \(y_1\) and \(z_1\) values by itself. In \(\mathrm {G}_2\), as we assume that \(\textsc {H}_1(a)\) is always queried before \(\textsc {Sign}(a,p)\), and as the \(\textsc {H}_1(a)\) implementation now internally prepares a full transcript, the \(c_1,y_1,z_1\) values from this transcript generation can be used within the \(\textsc {Sign}\) oracle. That is, we replace the first invocations of \(\mathsf {ID}.\mathsf {Cmt}^{-1}\) and \(\mathsf {ID}.\mathsf {Rsp}\) in \(\textsc {Sign}\) of \(\mathrm {G}_1\) by the assignments \(Y_1\leftarrow Y_1[a]\), \(y_1\leftarrow y_1[a]\), \(c_1\leftarrow c_1[a]\), and \(z_1\leftarrow z_1[a]\) in \(\mathrm {G}_2\). (Note that this works only because we also assume that \(\textsc {Sign}\) is not queried more than once on the same address.) Consider next the assignment \(Y_2\leftarrow \varPi ^{+1}(z_1)\) of \(\textsc {Sign}\) in \(\mathrm {G}_1\) (which now would be annotated \(Y_2\leftarrow \varPi ^{+1}(z_1[a])\)) and the fact that \(Y_2\) is completed by \(\textsc {Sign}\) to a transcript with challenge \(c_2[a,p]\). In the evaluation of \(\varPi ^{+1}(z_1)\), two cases can be distinguished: either the query is ‘old’, i.e., \(z_1\in {{\mathrm{dom}}}(\mathrm {PT})\), in which case \(\textsc {Sign}\) continues its computation using the stored commitment \(Y_2=\mathrm {PT}^{+1}(z_1)\), or the query is ‘fresh’, i.e., \(z_1\notin {{\mathrm{dom}}}(\mathrm {PT})\), in which case a new value \(Y_2\) is sampled. In both cases \(\textsc {Sign}\) completes \(Y_2\) to a full transcript with challenge \(\textsc {H}_2(a,p)=c_2[a,p]\). As we assume that each \(\textsc {Sign}(a,p)\) query is preceded by an \(\textsc {H}_2(a,p)\) query, and the latter internally generates a full transcript with challenge \(c_2[a,p]\), similarly to what we did for the values \(Y_1,y_1,c_1,z_1\) above, in the case of a ‘fresh’ \(\varPi ^{+1}(z_1)\) query game \(\mathrm {G}_2\) sets \(Y_2\leftarrow Y_2[a,p]\), \(y_2\leftarrow y_2[a,p]\), \(c_2\leftarrow c_2[a,p]\), and \(z_2\leftarrow z_2[a,p]\). The two described cases correspond to the two branches of the second If-statement in \(\textsc {Sign}\) of Fig. 11.

The remaining changes between \(\mathrm {G}_1\) and \(\mathrm {G}_2\) concern the two added flags \(\mathsf {bad}_1\) and \(\mathsf {bad}_2\) and can be ignored for now. Thus all changes between games \(\mathrm {G}_1\) and \(\mathrm {G}_2\) are pure rewriting, so we obtain

$$\begin{aligned} \Pr [\mathrm {G}_1]=\Pr [\mathrm {G}_2] . \end{aligned}$$

Consider next in more detail the flags \(\mathsf {bad}_1\) and \(\mathsf {bad}_2\) that appear in game \(\mathrm {G}_2\). The former is set whenever a value is queried to \(\textsc {H}(\cdot ,\{0,1\}^\mathsf {tl})\) that is a valid secret identification key for the verification key, and the latter is set when \(\textsc {Sign}\) is queried on some address \(a\) and the domain of \(\mathrm {PT}\) contains an element that is a valid response for commitment \(Y_1[a]\) and one of the two possible challenges \(c_1\in \{0,1\}\). Observe that any use of the secret key in \(\textsc {H}\) is preceded by setting \(\mathsf {bad}_1\leftarrow 1\), and that any execution of the first branch of the second If-statement of \(\textsc {Sign}\) in \(\mathrm {G}_2\) is preceded by setting \(\mathsf {bad}_2\leftarrow 1\).

We would like to proceed by bounding the probabilities \(\Pr [\mathrm {G}_2\text { sets }\mathsf {bad}_1]\) and \(\Pr [\mathrm {G}_2\text { sets }\mathsf {bad}_2]\) (based on the hardness of key recovery and \(\mathrm {cimp}\text{- }\mathrm {uc}\) impersonation, respectively). However, the following technical problem arises: While in the two corresponding reductions we would be able to simulate the \(\textsc {Transc}\) algorithm with the \(\textsc {Tr}\) oracle, when aiming at bounding the probability of \(\mathsf {bad}_1\leftarrow 1\) it would be unclear how to simulate the \(\textsc {Sign}\) oracle (which uses secret key material in the first If-branch), and when aiming at bounding the probability of \(\mathsf {bad}_2\leftarrow 1\) it would be unclear how to simulate the \(\textsc {H}\) oracle (which uses secret key material in the \(\mathrm {Rng}=\{0,1\}^\mathsf {tl}\) branch). We help ourselves by defining the following three complementary events: (a) neither \(\mathsf {bad}_1\) nor \(\mathsf {bad}_2\) is set, (b) \(\mathsf {bad}_1\) is set before \(\mathsf {bad}_2\) (this includes the case that \(\mathsf {bad}_2\) is not set at all), and (c) \(\mathsf {bad}_2\) is set before \(\mathsf {bad}_1\) (this includes the case that \(\mathsf {bad}_1\) is not set at all). In Fig. 12 we construct a kr adversary \(\mathcal{I}\) and a \(\mathrm {cimp}\text{- }\mathrm {uc}\) adversary \(\mathcal{P}_1\) from \(\mathcal{A}\) such that

$$\begin{aligned} \Pr [\mathrm {G}_2\text { sets } \mathsf {bad}_1 \text { first}]=\mathbf {Adv}^{\mathrm {kr\text{- }pa}}_{\mathsf {ID}}(\mathcal{I}) \end{aligned}$$

and

$$\begin{aligned} \Pr [\mathrm {G}_2\text { sets } \mathsf {bad}_2 \text { first}]=2\mathbf {Adv}^\mathrm{cimp\text{- }uc}_{\mathsf {ID}}(\mathcal{P}_1) . \end{aligned}$$

The strategy for constructing the adversaries is clear: We derive \(\mathcal{I}\) from \(\mathrm {G}_2\) by stripping off all code that is only executed after \(\mathsf {bad}_2\) is set, and we construct \(\mathcal{P}_1\) by removing all code only executed after \(\mathsf {bad}_1\) is set. The \(\mathcal{P}_1\)-related code in \(\textsc {Sign}\) deserves further explanation. The reduction obtains commitment \(Y_1[a]\), via \(\textsc {H}\), from the \(\textsc {Tr}\) oracle of the \(\mathrm {cimp}\text{- }\mathrm {uc}\) game, together with challenge \(c_1[a]\) and response \(z_1[a]\). As at the time the \(\mathsf {bad}_2\) flag is set in \(\mathrm {G}_2\) no information on \(c_1[a]\) was used in the game or exposed to the adversary, we have \(c^*\ne c_1[a]\) with probability 1/2 for the challenge \(c^*\) of the \(\mathrm {cimp}\text{- }\mathrm {uc}\) game. The reduction thus tries to break \(\mathrm {cimp}\text{- }\mathrm {uc}\) security with challenge \(1-c_1[a]\) and response \(z\). Whenever this challenge is admissible (i.e., with probability 1/2), the response is correct. That is, \(\mathcal{P}_1\) succeeds in impersonation with half the probability of \(\mathcal{A}\) having flag \(\mathsf {bad}_2\) set first.

In Fig. 13 we define game \(\mathrm {G}_3\) which behaves exactly like \(\mathrm {G}_2\) until either \(\mathsf {bad}_1\) or \(\mathsf {bad}_2\) is set. Thus we have

$$\begin{aligned} \Pr [\mathrm {G}_2\text { sets neither } \mathsf {bad}_1 \text { nor } \mathsf {bad}_2]=\Pr [\mathrm {G}_3] . \end{aligned}$$

In \(\mathrm {G}_3\) we expand the \(\mathsf {DS}.\mathsf {Vf}\) algorithm, i.e., the steps in which the forgery attempt of \(\mathcal{A}\) is verified. If signature \(\sigma =(c_1,z_2)\) is identified as valid, the game sets flag \(\mathsf {bad}\) to 1 if \(c_1\ne c_1[a]\), i.e., if the challenge \(c_1\) included in the signature does not coincide with the one simulated in the \(\textsc {H}\) oracle for address \(a\). Using the assumption that \(\mathcal{A}\) does not forge on addresses \(a\) for which it posed a \(\textsc {Sign}(a,\cdot )\) query, observe that the game released no information on \(c_1[a]\); hence, by an information-theoretic argument, \(c_1\ne c_1[a]\), and thus \(\mathsf {bad}\leftarrow 1\), with probability 1/2.

In Fig. 13 we construct a \(\mathrm {cimp}\text{- }\mathrm {uc}\) adversary \(\mathcal{P}_2\) from \(\mathcal{A}\) that is successful whenever \(\mathsf {bad}\) is set in game \(\mathrm {G}_3\). We obtain

$$\begin{aligned} \Pr [\mathrm {G}_3]=2\mathbf {Adv}^\mathrm{cimp\text{- }uc}_{\mathsf {ID}}(\mathcal{P}_2) . \end{aligned}$$

Taken together, the established bounds imply the theorem statement.   \(\square \)

Fig. 12. Adversaries for proof of Theorem 4. The oracles and the \(\textsc {Transc}\) implementation are shared by both adversaries. In \(\textsc {Sign}\), we write \(\#Y_1[a]\) for the number of the \(\textsc {Tr}\) query in which the value of \(Y_1[a]\) was established.

Fig. 13. Top: game \(\mathrm {G}_3\) for proof of Theorem 4. Bottom: one more adversary for proof of Theorem 4. We write \(\#Y_1[a]\) for the number of the \(\textsc {Tr}\) query in which the value of \(Y_1[a]\) was established.

Fig. 14. Trapdoor identification scheme \(\mathsf {GQ}\) associated to RSA generator \(\mathsf {RSA}\), and the game defining RSA one-wayness.

6 Instantiation and Implementation

We illustrate how to instantiate our \(\mathbf {H2}\) and \(\mathbf {ID2}\) transforms, using the \(\mathsf {GQ}\) identification scheme as an example, to obtain \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ}]\). Similar instantiations and implementations are possible with many other trapdoor identification schemes. For instance, see the full version of this paper [2] for instantiations based on claw-free permutations [10] or the \(\mathsf {MR}\) identification scheme of Micali and Reyzin [12]. We implement \(\mathbf {H2}[\mathsf {GQ}]\), \(\mathbf {ID2}[\mathsf {GQ}]\), and \(\mathbf {H2}[\mathsf {MR}]\) to obtain performance data.

6.1 \(\mathsf {GQ}\)-Based Schemes

\(\mathsf {GQ}\). An RSA generator for modulus length k is an algorithm \(\mathsf {RSA}\) that returns a tuple (N, p, q, e, d) where p, q are distinct odd primes, \(N=pq\) is the modulus, in the range \(2^{k-1}< N < 2^k\), the encryption and decryption exponents e, d are in \({{\mathbb Z}}_{\varphi (N)}^*\), and \(ed\equiv 1\pmod {\varphi (N)}\). The assumption is one-wayness, formalized by defining the ow-advantage of an adversary \(\mathcal{A}\) against \(\mathsf {RSA}\) as \(\mathbf {Adv}^{\mathrm {ow}}_{\mathsf {RSA}}(\mathcal{A})= \Pr [\mathrm {OW}_{\mathsf {RSA}}^{\mathcal{A}}]\), where the game is in Fig. 14. Let L be a parameter and let \(\mathsf {RSA}\) be such that \(\gcd (e,c)=1\) for all \(0<c<2^L\). (For instance, \(\mathsf {RSA}\) may select the encryption exponent e as an \((L+1)\)-bit prime.) If \(\mathrm {egcd}\) denotes the extended gcd algorithm that, given relatively prime inputs \(e,c\), returns a, b such that \(ea+cb=1\), then the \(\mathsf {GQ}\) identification scheme associated to \(\mathsf {RSA}\) is as shown in Fig. 14. Any challenge length up to L is admissible, i.e., \(\mathsf {ID}.\mathsf {clS}\subseteq \{1,\ldots ,L\}\), and for all \({\mathsf {cl}}\in \mathsf {ID}.\mathsf {clS}\) the commitment and response space is \({{\mathbb Z}}_N^*\). Extraction works because of the identity \(X^az^b=x^{ea}x^{(c_1-c_2)b}=x\). Algorithm \(\mathsf {GQ}.\mathsf {Cmt}^{-1}\) shows that the scheme is trapdoor; that it is also commitment-recovering and has unique responses follows by inspection of the \(z^e=YX^c\) condition of the verification algorithm. Finally, it is a standard result, and in particular follows from Lemma 1, that the KR, \(\mathrm {CIMP}\text{- }\mathrm {UU}\), and \(\mathrm {CIMP}\text{- }\mathrm {UC}\) security of \(\mathsf {GQ}\) tightly reduce to the one-wayness of \(\mathsf {RSA}\) (the \(\mathrm {CIMP}\text{- }\mathrm {UU}\) case requires a restriction on the deployed challenge lengths).
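The extraction identity can be checked with toy numbers. The following sketch (illustrative parameters only, not a secure instantiation) recovers the secret \(x\) from two accepting transcripts that share a commitment:

```python
# Toy check of the GQ extraction identity X^a * z^b = x. Parameters are
# illustrative only (tiny primes, fixed values); a real instantiation uses
# a 2048-bit modulus and a prime exponent e > 2^L. Requires Python 3.8+
# for pow() with negative exponents modulo N.
from math import gcd

p, q = 1009, 1013                    # toy primes, NOT secure
N = p * q
e = 257                              # prime exponent with e > 2^L for L = 8

x = 123456                           # secret, x in Z_N^*
assert gcd(x, N) == 1
X = pow(x, e, N)                     # public value X = x^e mod N

# Two accepting transcripts sharing the commitment Y = y^e mod N:
y, c1, c2 = 789, 200, 57
Y = pow(y, e, N)
z1 = y * pow(x, c1, N) % N           # response z_i = y * x^{c_i} mod N
z2 = y * pow(x, c2, N) % N
assert pow(z1, e, N) == Y * pow(X, c1, N) % N   # both transcripts verify
assert pow(z2, e, N) == Y * pow(X, c2, N) % N

def egcd(u, v):
    """Extended gcd: returns (g, a, b) with u*a + v*b = g."""
    if v == 0:
        return u, 1, 0
    g, s, t = egcd(v, u % v)
    return g, t, s - (u // v) * t

# Extraction: z = z1/z2 = x^{c1-c2}; with e*a + (c1-c2)*b = 1 we get
# X^a * z^b = x^{e*a} * x^{(c1-c2)*b} = x (mod N).
z = z1 * pow(z2, -1, N) % N
g, a, b = egcd(e, c1 - c2)
assert g == 1
recovered = pow(X, a, N) * pow(z, b, N) % N
assert recovered == x
```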

Fig. 15. DAPS schemes \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ},{\mathsf {cl}}]\) derived via our transforms from the ID scheme \(\mathsf {GQ}\).

\(\mathbf {H2}[\mathsf {GQ}]\). Figure 15 shows the algorithms of the \(\mathbf {H2}[\mathsf {GQ}]\) DAPS scheme derived by applying our \(\mathbf {H2}\) transform to the \(\mathsf {GQ}\) identification scheme of Fig. 14. To estimate security for a given modulus length k we use Theorems 1 and 2, together with the reductions of Lemma 1 between the \(\mathrm {CIMP}\text{- }\mathrm {UU}\) and KR security of \(\mathsf {GQ}\) and the one-wayness of \(\mathsf {RSA}\). The reductions are tight, so we need only estimate the advantage of an adversary against the one-wayness of \(\mathsf {RSA}\). We do this under the assumption that the Number Field Sieve (NFS) is the best factoring method. Accordingly, our implementation uses a 2048-bit modulus and 256-bit hashes and seeds. See below and Fig. 16 for implementation and performance information.
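The modulus-length choice can be sanity-checked against the asymptotic NFS running time \(L_N[1/3,(64/9)^{1/3}]\). The helper below is our own rough sketch (the function name is ours, and the \(o(1)\) term is ignored), so it is a coarse back-of-the-envelope estimate rather than a calibrated key-size recommendation:

```python
# Back-of-the-envelope estimate of factoring cost for a k-bit modulus via
# the GNFS asymptotic L_N[1/3, (64/9)^(1/3)], ignoring the o(1) term.
import math

def gnfs_bits(k):
    """Approximate log2 of the GNFS running time for a k-bit modulus N."""
    ln_n = k * math.log(2)                       # ln N for N ~ 2^k
    c = (64 / 9) ** (1 / 3)                      # GNFS constant, about 1.923
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for k in (1024, 2048, 3072):
    print(f"{k}-bit modulus: ~{gnfs_bits(k):.0f} bits of security")
```

For \(k=2048\) this crude formula yields on the order of 115–120 bits, broadly consistent with pairing 2048-bit moduli with 256-bit hashes and seeds at the targeted security level.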

\(\mathbf {ID2}[\mathsf {GQ}]\). Figure 15 also shows the algorithms of the DAPS scheme derived by applying the \(\mathbf {ID2}\) transform to \(\mathsf {GQ}\). The reductions continue to be tight, so instantiation and implementation choices are as for \(\mathbf {H2}[\mathsf {GQ}]\). Concerning the random permutation \(\varPi \) on \({{\mathbb Z}}_N^*\) that the scheme requires, it effectively suffices to construct one that maps \({{\mathbb Z}}_N\) to \({{\mathbb Z}}_N\), and we propose one way to instantiate it below.

A random permutation \(\varPi \) on \({{\mathbb Z}}_N\) can be constructed from a random permutation \(\varGamma \) on \(\{0,1\}^k\), where \(2^{k-1}< N < 2^k\), by cycle walking [6, 13]: if x is the input, let \(c \leftarrow \varGamma (x)\); if \(c \in {{\mathbb Z}}_N\), return c; else recurse on c; the inverse is analogous. A Feistel network can be used to construct a random permutation \(\varGamma \) on \(\{0,1\}^{2n}\) from a set of public random functions \(F_1, \dots , F_r\) on \(\{0,1\}^n\): for input \(x_0 \Vert x_1 \in \{0,1\}^{2n}\), return \(x_r \Vert x_{r+1}\) where \(x_{i+1} = x_{i-1} \oplus F_i(x_i)\). Dai and Steinberger [8] give an indifferentiability result for 8 rounds, under the assumption that the \(F_i\) are independent public random functions. We construct \(F_i\) on \(\{0,1\}^n\) as \(F_i(x) = H(i \Vert 1 \Vert x) \Vert \dots \Vert H(i \Vert \ell \Vert x)\) using \(H=\) SHA-256, where \(\ell = n / 256\) (assuming for simplicity that n is a multiple of 256), and the inputs to SHA-256 are encoded with a fixed length to avoid the length-extension attacks that make Merkle–Damgård constructions differentiable from a random oracle. Our implementation uses \(r=20\) rounds of the Feistel network as a safety margin, for good indifferentiability and to avoid the non-tightness of the result of [8] for \(r=8\).
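The construction above can be sketched as follows. Parameters here are deliberately toy-sized so the bijection can be checked exhaustively (a deployment uses \(k=2048\) and the SHA-256 encoding described above); the 2-byte input encodings and the toy modulus are our own illustrative choices:

```python
# Toy sketch of the permutation: a SHA-256-based Feistel network Gamma on
# k-bit strings, plus cycle walking to obtain Pi on Z_N.
import hashlib

K_BITS = 10                 # bit length k of the toy modulus (must be even)
HALF = K_BITS // 2          # Feistel half-width n
ROUNDS = 20                 # r = 20, as in the implementation
MASK = (1 << HALF) - 1
N = 29 * 31                 # toy semiprime with 2^(k-1) < N < 2^k

def F(i, x):
    """Round function F_i on HALF-bit values, derived from SHA-256.
    The 2-byte fixed-length encodings are an illustrative choice."""
    data = i.to_bytes(2, "big") + x.to_bytes(2, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big") & MASK

def gamma(x, inverse=False):
    """Feistel permutation Gamma on {0,1}^K_BITS (and its inverse)."""
    L, R = x >> HALF, x & MASK
    if not inverse:
        for i in range(ROUNDS):
            L, R = R, L ^ F(i, R)
    else:
        for i in reversed(range(ROUNDS)):
            L, R = R ^ F(i, L), L
    return (L << HALF) | R

def pi(x, inverse=False):
    """Permutation Pi on Z_N by cycle walking: iterate Gamma
    (or its inverse) until the value lands back in Z_N."""
    c = gamma(x, inverse)
    while c >= N:
        c = gamma(c, inverse)
    return c

# Pi is a bijection on Z_N, and walking backwards inverts it.
assert sorted(pi(x) for x in range(N)) == list(range(N))
assert all(pi(pi(x), inverse=True) == x for x in range(N))
```

Cycle walking terminates because \(\varGamma \) permutes a finite set: the walk follows a cycle of \(\varGamma \) and must eventually revisit an element of \({{\mathbb Z}}_N\).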

Fig. 16. Operation count, average runtime, and public key/signature sizes of DAPS schemes and RSA signatures. By \(\exp _m^x\) we denote the cost of computing a modular exponentiation with a modulus of bitlength m and an exponent of bitlength x. All concrete values are for the \(\lambda =128\)-bit security level: timing and size values for RSA and factoring-based schemes are based on \(k=2048\)-bit moduli and \(n=l=2\lambda =256\)-bit hash values, and for the \(\mathsf {RKS}\) scheme we assume a group with \(2\lambda =256\)-bit element representation, hash values of the same length, and a binary tree. See also [2].

6.2 Implementation and Performance

Implementation. We implemented \(\mathbf {H2}[\mathsf {GQ}]\), \(\mathbf {H2}[\mathsf {MR}]\), and \(\mathbf {ID2}[\mathsf {GQ}]\) (see [2] for the specification of \(\mathsf {MR}\)). For comparison purposes we also implemented the original \(\mathsf {PS}\) scheme. Our implementation is in C, using OpenSSL's BIGNUM library for number-theoretic operations. We also compared with OpenSSL's implementation of the standard RSA PKCS#1v1.5 signatures currently used by CAs for creating certificates. We use the Chinese remainder theorem to speed up secret-key operations whenever possible. For \(\mathsf {GQ}\), we use encryption exponent \(e=\mathrm {nextprime}(2^{{\mathsf {cl}}})\); for RSA PKCS#1v1.5 we use OpenSSL's default public exponent, \(e=65537\). We also compared against the \(\mathsf {RKS}\) DAPS implementation.
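The exponent choice \(e=\mathrm {nextprime}(2^{{\mathsf {cl}}})\) guarantees \(\gcd (e,c)=1\) for every challenge \(0<c<2^{\mathsf {cl}}\), since e is a prime exceeding all such c. A small sanity-check sketch, with a toy challenge length and our own helper functions:

```python
# Check that e = nextprime(2^cl) is coprime to every challenge c < 2^cl.
# Toy challenge length cl = 16; the helpers are illustrations only.
from math import gcd

def is_prime(m):
    """Trial division; adequate for small toy values."""
    if m < 2:
        return False
    f = 2
    while f * f <= m:
        if m % f == 0:
            return False
        f += 1
    return True

def nextprime(m):
    """Smallest prime strictly greater than m."""
    m += 1
    while not is_prime(m):
        m += 1
    return m

cl = 16
e = nextprime(2 ** cl)
assert all(gcd(e, c) == 1 for c in range(1, 2 ** cl))
print("e =", e)                      # prints: e = 65537
```

For cl = 16 this happens to equal the OpenSSL default exponent 65537 mentioned above; larger challenge lengths yield correspondingly larger prime exponents.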

Performance. We measured timings of our implementations on an Intel Core i7 (6700K “Skylake”) with 4 cores each running at 4.0 GHz. The tests were run on a single core with TurboBoost and hyper-threading disabled. Software was compiled for the x86_64 architecture with -O3 optimizations using llvm 8.0.0 (clang 800.0.38). The OpenSSL version used was v1.0.2j. We use RKS’ implementation of their DAPS, which relies on a different library for the secp256k1 elliptic curve. Figure 16 shows mean runtimes in milliseconds (with standard deviations) and key sizes using 2048-bit moduli and 256-bit hashes. For the DAPS schemes, the address is 15 bytes and the payload is 33 bytes; for RSA PKCS#1v1.5, the message is 48 bytes. Times reported are averages over 30 s. The table omits runtimes for key generation, as this is a one-time operation.

Compared with the existing \(\mathsf {PS}\), our \(\mathbf {H2}[\mathsf {GQ}]\), \(\mathbf {ID2}[\mathsf {GQ}]\), and \(\mathbf {H2}[\mathsf {MR}]\) schemes are several orders of magnitude faster for both signing and verification. When using 2048-bit moduli, \(\mathbf {H2}[\mathsf {GQ}]\) signatures can be generated 587\(\times \) and verified 394\(\times \) faster, and \(\mathbf {ID2}[\mathsf {GQ}]\) signatures can be generated 287\(\times \) and verified 108\(\times \) faster; moreover, our signatures are 229\(\times \) and 257\(\times \) shorter, respectively, compared with \(\mathsf {PS}\), and nearly the same size as RSA PKCS#1v1.5 signatures. Compared with the previous fastest and smallest DAPS, \(\mathsf {RKS}\), \(\mathbf {H2}[\mathsf {GQ}]\) signatures can be generated and verified 15\(\times \) faster; \(\mathbf {ID2}[\mathsf {GQ}]\) signatures generated 7\(\times \) and verified 4\(\times \) faster; and \(\mathbf {H2}[\mathsf {MR}]\) signatures generated 10\(\times \) and verified 16\(\times \) faster. \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {H2}[\mathsf {MR}]\) signatures are 56\(\times \) shorter compared with \(\mathsf {RKS}\); \(\mathbf {H2}[\mathsf {GQ}]\) and \(\mathbf {ID2}[\mathsf {GQ}]\) public keys are 9.6\(\times \) larger, though still under 1 KiB total, and \(\mathbf {H2}[\mathsf {MR}]\) keys are only 3.2\(\times \) larger than those of \(\mathsf {RKS}\).

Signing times for our schemes are competitive with RSA PKCS#1v1.5: using \(\mathbf {H2}[\mathsf {GQ}]\), \(\mathbf {ID2}[\mathsf {GQ}]\), or \(\mathbf {H2}[\mathsf {MR}]\) for signatures in digital certificates would incur little computational or size overhead relative to currently used signatures.