1 Introduction

A threshold signature scheme [36, 37, 70] enables a group of n signers to jointly sign a message as long as more than t of them participate. To this end, each of the n signers holds a share of the secret key associated with the public key of the group. When \(t+1\) of them come together and run a signing protocol for a particular message, they obtain a compact signature (independent in size of t and n) without revealing their secret key shares to each other. On the other hand, no subset of at most t potentially malicious signers can generate a valid signature. Despite being a well-studied cryptographic primitive, threshold signatures have experienced a renaissance due to their use in cryptocurrencies [64] and other modern applications [30]. This new attention has also led to ongoing standardization efforts [19]. In this work, we study threshold signatures in the pairing-free discrete logarithm setting. As noted in previous works [29, 78, 79], pairings are not supported in popular libraries and are substantially more expensive to compute, which makes pairing-free solutions appealing.

Static vs. Adaptive Security. When defining security for threshold signatures, the adversary is allowed to concurrently interact with honest signers in the signing protocol. Additionally, it may corrupt up to t out of n parties, thereby learning their secret key material and internal state. Here, we distinguish between static corruptions and adaptive corruptions. For static corruptions, the adversary declares the set of corrupted parties ahead of time before any messages have been signed. For adaptive corruptions, the adversary can corrupt parties dynamically, depending on previous signatures and corruptions.

Adaptive security is a far stronger notion than static security and matches reality more closely. Unfortunately, proving adaptive security for threshold signatures is highly challenging, and previous works in the pairing-free setting rely on strong interactive assumptions to simulate the state of adaptively corrupted parties [28]. This simulation strategy, however, is at odds with rewinding the adversary as part of a security proof. Roughly, if the adversary is allowed to corrupt up to \(t_c\) parties, then in the two runs induced by rewinding, it may corrupt up to \(2t_c\) parties in total. Thus, for the reduction to obtain meaningful information from the adversary’s forgery, it has to be restricted to corrupt at most \(t_c \le t/2\) parties [28]. To bypass this unnatural restriction, prior work heavily relies on the algebraic group model (AGM) [42] in order to avoid rewinding (Footnote 1). In summary: to support an arbitrary corruption threshold, one has to use the AGM or sacrifice adaptive security.

1.1 Our Contribution

Motivated by this unsatisfactory state of affairs, we construct \(\textsf{Twinkle}\). \(\textsf{Twinkle}\) is the first threshold signature scheme in the pairing-free setting which combines all of the following characteristics:

  • Adaptive Security. We prove \(\textsf{Twinkle}\) secure under adaptive corruptions. Notably, we do not rely on secure erasures of private state.

  • Non-Interactive Assumptions. Our security proof relies on a non-interactive and well-studied assumption, namely, the \(\textsf{DDH}\) assumption. As a slightly more efficient alternative, we give an instantiation based on a one-more variant of \(\textsf{CDH}\), for which we provide evidence of its hardness.

  • No AGM. Our security proof does not rely on the algebraic group model, but only on the random oracle model.

  • Arbitrary Threshold. \(\textsf{Twinkle}\) supports an arbitrary corruption threshold \(t < n\) for n parties. Essentially, this is established by giving a proof without rewinding.

For a comparison of schemes in the pairing-free discrete logarithm setting, see Table 1. We also emphasize that we achieve our goal without the use of heavy cryptographic techniques, and our scheme is practical. For example, signatures of \(\textsf{Twinkle}\) (from \(\textsf{DDH}\)) are at most 3 times as large as regular Schnorr signatures [74], and \(\textsf{Twinkle}\) has three rounds. In the context of our proof, we also identify a gap in the analysis of \(\textsf{Sparkle}\) [28] and develop new proof techniques to fix it in the context of our scheme (Footnote 2).

Table 1. Comparison of different threshold signature schemes in the discrete logarithm setting without pairings and the two instantiations of our \(\textsf{Twinkle}\) scheme. We compare whether the schemes are proven secure under adaptive corruptions and under which assumption and idealized model they are proven. We also compare the corruption thresholds that they support. For all schemes, we assume that there is a trusted dealer distributing key shares securely. For \(\textsf{GJKR}\)  [48]/\(\textsf{StiStr}\)  [77], broadcast channels are assumed, which adds rounds when implemented.

Conceptually, the design of our threshold signature is inspired by five-move identification schemes, which already have found use in the construction of tightly secure signature schemes [24, 49, 57]. We achieve our result in two main steps:

  1. We first phrase our scheme abstractly using (a variant of) linear function families [23, 52, 55, 69, 79]. To prove security under adaptive corruptions, we define a security notion for linear functions resembling a one-more style \(\textsf{CDH}\) assumption. This is the step where we identify the gap in the analysis of \(\textsf{Sparkle}\) [28].

  2. We then instantiate the linear function family such that this one-more notion follows from the (non-interactive) \(\textsf{DDH}\) assumption. Note that Tessaro and Zhu [79] showed a related statement, namely, that a suitable one-more variant of \(\textsf{DLOG}\) follows from \(\textsf{DLOG}\). In this sense, our work makes a further step in an agenda aimed at replacing interactive assumptions with non-interactive ones. We are confident that this is interesting in its own right.

1.2 Technical Overview

We keep the technical overview self-contained, but some background on Schnorr signatures [58, 74], five-move identification [24, 49, 57], and \(\textsf{Sparkle}\)  [28] is helpful.

Sparkle and The Problem with Rewinding. As our starting point, let us review the main ideas behind \(\textsf{Sparkle}\)  [28], and why the use of rewinding limits us to tolerating at most t/2 corruptions. For that, we fix a group \({\mathbb {G}} \) with generator g and prime order p. Each signer \(i \in [n]\) holds a secret key share \({\textsf{sk}} _i \in \mathbb {Z}_p\) such that \({\textsf{sk}} _i = f(i)\) for a polynomial f of degree t. Further, the public key is \({\textsf{pk}} = g^{f(0)}\). To sign a message \(\textsf{m}\), a set \(S \subseteq [n]\) of signers engage in the following interactive signing protocol, omitting some details:

  1. Each party \(i \in S\) samples a random \(r_i \xleftarrow {{\!\!\tiny \$}}\mathbb {Z}_p\) and computes \(R_i = g^{r_i}\). It then sends a hash \(\textsf{com} _i\) of \(R_i, S\), and \(\textsf{m}\) to the other signers to commit to \(R_i\). We call \(R_i\) a preimage of \(\textsf{com} _i\). The hash function is modeled as a random oracle.

  2. Once a party has received all hashes from the first round, it sends \(R_i\) to the other signers to open the commitment.

  3. If all commitments are correctly opened, each signer computes the combined nonce \(R=\prod _i R_i\). Then, it derives a challenge \(c \in \mathbb {Z}_p\) from \({\textsf{pk}},R,\) and \(\textsf{m}\) using another random oracle. Each signer i computes and sends its response share \(s_i := c \cdot \ell _{i,S} \cdot {\textsf{sk}} _i + r_i\), where \(\ell _{i,S}\) is a Lagrange coefficient. The signature is \((c,s)\), where \(s=\sum _i s_i\).

The overall proof strategy adopted in [28] follows a similar paradigm as that of proving Schnorr signatures, with appropriate twists. Namely, one first takes care of simulating signing queries using honest-verifier zero-knowledge (HVZK) and by suitably programming the random oracle. We will come back to this part of the proof later. Then, via rewinding, one can extract the secret key from a forgery. To simulate adaptive corruption queries, the proof of \(\textsf{Sparkle}\) relies on a \(\textsf{DLOG}\) oracle on each corruption query, i.e., security is proven under the one-more version of \(\textsf{DLOG}\) (\(\textsf{OMDL}\)). Specifically, getting \(t+1\) \(\textsf{DLOG}\) challenges from the \(\textsf{OMDL}\) assumption and t-time access to a \(\textsf{DLOG}\) oracle, the reduction defines a degree t polynomial “in the exponent”, simulates the game as explained, and uses rewinding to solve the final \(\textsf{DLOG}\) challenge. Note that if we allow the adversary to corrupt at most \(t_c\) parties throughout the experiment, it may corrupt up to \(2t_c\) parties over both runs, meaning that the reduction has to query the \(\textsf{DLOG}\) oracle up to \(2t_c\) times. Therefore, we have to require that \(2t_c \le t\).

How to Avoid Rewinding. Now it should be clear that the restriction on the corruption threshold is induced by the use of rewinding. If we avoid rewinding, we can also remove the restriction. To do so, it is natural to follow existing approaches from the literature on tightly-secure (and thus rewinding-free) signatures. A common approach is to rely on lossy identification [1, 56, 58] that has already been used in the closely-related multi-signature setting [69]. We find this unsuitable for two reasons. Namely, (a) these schemes rely on the \(\textsf{DDH}\) assumption, and it is not clear at all what a suitable one-more variant would look like, and (b) the core idea of this technique is to move to a hybrid in which there is no secret key for \({\textsf{pk}} \) at all. This seems hard to combine with adaptive corruptions. Roughly, this is because if there is no secret key for \({\textsf{pk}} \), then at most t of the \({\textsf{pk}} _i\) can have a secret key, meaning that we would have to guess the set of corruptions. Instead, we take inspiration from five-move identification [24, 49, 57], for which problems (a) and (b) do not show up. Namely, (a) such schemes rely on the \(\textsf{CDH}\) assumption, and (b) there is always a secret key. To explain the idea, we directly focus on our threshold signature scheme. For that, let \(h \in {\mathbb {G}} \) be derived from the message via a random oracle. Given h, our signing protocol is as follows:

  1. Each signer \(i \in S\) samples \(r_i \xleftarrow {{\!\!\tiny \$}}\mathbb {Z}_p\) and computes \(R_{i}^{(1)} = g^{r_i},R_{i}^{(2)} = h^{r_i}\), and \({\textsf{pk}} _{i}^{(2)} = h^{{\textsf{sk}} _i}\). It then sends a hash of \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\) to the other signers.

  2. Once a party has received all hashes from the first round, it sends \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\).

  3. If all commitments are correctly opened, each signer computes the combined nonces \(R^{(k)}\) for \(k \in \{1,2\}\) and secondary public key \({\textsf{pk}} ^{(2)}\) in a natural way. Then, it derives a challenge c from \(R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)}\), and \(\textsf{m}\) and computes \(s_i := c \cdot \ell _{i,S} \cdot {\textsf{sk}} _i + r_i\). The signature is \(({\textsf{pk}} ^{(2)},c,s)\) with \(s=\sum _i s_i\).
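The doubled protocol can be sketched in the same toy group as before. This is a straight-line sketch under the same toy-parameter assumptions (order-11 subgroup of \(\mathbb {Z}_{23}^*\), a hash standing in for the random oracles, commitment rounds collapsed); the hash-to-group step for h is a simplification, and the combination of the \({\textsf{pk}} _{i}^{(2)}\) uses Lagrange coefficients as one natural reading of "in a natural way":

```python
import hashlib
import random

P, Q, G = 11, 23, 2   # order-P subgroup of Z_Q^*, generator G (toy sizes)

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def lagrange(i, S):
    num = den = 1
    for j in S:
        if j != i:
            num, den = num * j % P, den * (j - i) % P
    return num * pow(den, -1, P) % P

def gprod(vals):
    out = 1
    for v in vals:
        out = out * v % Q
    return out

# Dealer: f(x) = 5 + 7x mod P, shares sk_i = f(i), pk = g^{f(0)}.
f0, f1 = 5, 7
sk = {i: (f0 + f1 * i) % P for i in (1, 2, 3)}
pk = pow(G, f0, Q)

m, S = "hello", [1, 2]
h = pow(G, 1 + H("tag", m) % (P - 1), Q)   # toy hash-to-group (nonzero exponent)

# Rounds 1-2 (commit/open; the checks are as in Sparkle and omitted here):
r = {i: random.randrange(P) for i in S}
R1 = {i: pow(G, r[i], Q) for i in S}
R2 = {i: pow(h, r[i], Q) for i in S}
pk2_share = {i: pow(h, sk[i], Q) for i in S}

R1bar, R2bar = gprod(R1.values()), gprod(R2.values())
pk2 = gprod(pow(pk2_share[i], lagrange(i, S), Q) for i in S)   # = h^{f(0)}

# Round 3: the challenge binds both nonces and the secondary key.
c = H("chal", R1bar, R2bar, pk2, m)
s = sum(c * lagrange(i, S) * sk[i] + r[i] for i in S) % P

# Verify (pk2, c, s): recompute both nonces and re-derive the challenge.
V1 = pow(G, s, Q) * pow(pk, (P - c) % P, Q) % Q
V2 = pow(h, s, Q) * pow(pk2, (P - c) % P, Q) % Q
assert c == H("chal", V1, V2, pk2, m)
assert pk2 == pow(h, f0, Q)   # the honest pk^(2) is the CDH value h^{f(0)}
```

The last assertion illustrates the first case of the reduction described below it: an honestly produced \({\textsf{pk}} ^{(2)}\) equals \(h^{f(0)}\), i.e., a \(\textsf{CDH}\) solution for \({\textsf{pk}} \) and h.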

Intuitively, the signers engage in two executions of \(\textsf{Sparkle}\) with generators g and h, respectively, using the same randomness \(r_i\). To understand why we can avoid rewinding with this scheme, let us ignore signing and corruption queries for a moment, and focus on how to turn a forgery \(({\textsf{pk}} ^{(2)},c,s)\) into a solution for a hard problem, concretely, \(\textsf{CDH}\). For that, we consider two cases. First, if \({\textsf{pk}} ^{(2)} = h^{f(0)}\), then \({\textsf{pk}} ^{(2)}\) is a \(\textsf{CDH}\) solution for \({\textsf{pk}} = g^{f(0)}\) and h. Indeed, this is what should happen in an honest execution. Second, we can bound the probability that the forgery is valid and \({\textsf{pk}} ^{(2)} \ne h^{f(0)}\) using a statistical argument. Roughly, \((c,s)\) acts as a statistically sound proof for the statement \({\textsf{pk}} ^{(2)} = h^{f(0)}\). To simulate adaptive corruptions, for now assume that we can rely on a one-more variant of the \(\textsf{CDH}\) assumption, in which we have t-time access to a \(\textsf{DLOG}\) oracle. We come back to this later. What remains is to simulate honest parties during the signing. For that, the first trick is to set up h (by programming the random oracle) in a special way. Roughly, we want to be able to translate valid transcripts with respect to g into valid transcripts with respect to h. Once this is established, we can focus on simulating the g-side of the protocol.

A Gap in the Proof of Sparkle. If we only focus on the g-side, our protocol is essentially \(\textsf{Sparkle}\). Therefore, it should be possible to simulate signing exactly as in \(\textsf{Sparkle}\) using HVZK. Unfortunately, when looking at this part of \(\textsf{Sparkle}\) ’s proof, we discovered that a certain adversarial behavior is not covered. Namely, the proof does not correctly simulate the case in which the adversary sends inconsistent sets of commitments to different honest parties. It turns out that handling this requires fundamentally new techniques. To understand the gap, it is instructive to consider \(\textsf{Sparkle}\) ’s proof for an example of three signers in a session sid, with two of them being honest, say Signer 1 and 2, and the third one being malicious. Let us assume that Signers 1 and 2 are already in the second round of the protocol. That is, both already sent their commitments \(\textsf{com} _1\) and \(\textsf{com} _2\) and now expect a list of commitments \(\mathcal {M} = (\textsf{com} _1,\textsf{com} _2,\textsf{com} _3)\) from the first round as input. In \(\textsf{Sparkle}\) ’s proof, the reduction sends random commitments \(\textsf{com} _1\) and \(\textsf{com} _2\) on behalf of the honest parties. Later, when Signer 1 (resp. 2) gets \(\mathcal {M} \), it has to output its second message \(R_1\) (resp. \(R_2\)) and program the random oracle at \(R_1\) (resp. \(R_2\)) to be \(\textsf{com} _1\) (resp. \(\textsf{com} _2\)). The goal of the reduction is to set up \(R_1\) and \(R_2\) using HVZK such that the responses \(s_1\) and \(s_2\) can be computed without using the secret key. To understand how the reduction proceeds, assume that Signer 1 is asked (by the adversary) to reveal his nonce \(R_1\) first. When this happens, the reduction samples a challenge c and a response \(s_1\). It then defines \(R_1\) as \(R_1 := g^{s_1}{\textsf{pk}} _1^{-c\ell _{1,S}}\). 
Ideally, the reduction would now program the random oracle on the combined nonce \(R = R_1R_2R_3\) to return c, and output \(R_1\) to the adversary. However, while the reduction can extract \(R_3\) from \(\textsf{com} _3\) by observing the random oracle queries, \(R_2\) is not yet defined at that point. The solution proposed in \(\textsf{Sparkle}\) ’s proof is as follows. Before returning \(R_1\) to the adversary, the reduction also samples \(s_2\) and defines \(R_2 := g^{s_2}{\textsf{pk}} _2^{-c\ell _{2,S}}\). Then, the reduction can compute the combined nonce \(R = R_1R_2R_3\) and program the random oracle on input R to return c. Later, it can use \(s_1\) and \(s_2\) as responses.

However, as we will argue now, this strategy is flawed (Footnote 3). Think about what happens if the first-round messages \(\mathcal {M} '\) that Signer 2 sees do not contain \(\textsf{com} _3\), but instead a different commitment \(\textsf{com} _3'\) (Footnote 4) to a nonce \(R_3' \ne R_3\). Then, with high probability, the combined nonce \(R'\) that Signer 2 will compute is different from R, meaning that its challenge \(c'\) will also be different from c, and so \(s_2\) is not a valid response. One naive idea to solve this is to program \(R_2 := g^{s_2}{\textsf{pk}} _2^{-c'\ell _{2,S}}\) for an independent \(c'\) when we reveal \(R_1\). In this case, however, the adversary may just choose to submit \(\mathcal {M} ' = \mathcal {M} \) to Signer 2, making the simulation fail.

Equivalence Classes to the Rescue. Our actual solution is highly technical, so we sketch a greatly simplified version here. Abstractly speaking, we want to be able to identify whether two queries \(q = (sid,i,\mathcal {M})\) and \(q' = (sid',i',\mathcal {M} ')\) will result in the same combined nonce before all commitments \(\textsf{com} _j\) in \(\mathcal {M} \) and \(\mathcal {M} '\) have preimages \(R_j\). To do so, we define an equivalence relation \(\sim \) on such queries for which we show two properties.

  1. First, the equivalence relation is consistent over time, namely, (a) if \(q \sim q'\) at some point in time, then \(q \sim q'\) at any later point, and (b) if \(q \not \sim q'\) at some point in time, then \(q \not \sim q'\) at any later point.

  2. Second, assume that all commitments in \(\mathcal {M} \) and \(\mathcal {M} '\) have preimages. Then the resulting combined nonces R and \(R'\) are the same if and only if \(q \sim q'\).

The technical challenge is that \(\sim \) has to stay consistent while also adapting to changes in the random oracle over time. Assuming we have such a relation, we can make the simulation work. Namely, when we have to reveal the nonce \(R_i\) of an honest signer i, we first define \(c := \textsf{C}(q)\), where \(\textsf{C}\) is a random oracle on equivalence classes and is only known to the reduction. That is, \(\textsf{C}\) is a random oracle with the additional condition that \(\textsf{C}(q) = \textsf{C}(q')\) if \(q\sim q'\). Then, we define \(R_i := g^{s_i}{\textsf{pk}} _i^{-c\ell _{i,S}}\). We do not define any other \(R_{i'}\) of honest parties at that point, meaning that we also may not know the combined nonce yet. Instead, we carefully delay the random oracle programming of the combined nonce until it is completely known.
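The bookkeeping behind \(\textsf{C}\) can be illustrated with a toy union-find structure. This is a hypothetical simplification we introduce for intuition only: the paper's actual relation \(\sim \) must additionally track random-oracle preimages as they appear, which is precisely the technical challenge mentioned above. The sketch only captures the invariants: challenges are sampled lazily per class, and merges must never link two classes that already received distinct challenges.

```python
import random

class ClassChallengeOracle:
    """Toy model of a random oracle C defined on equivalence classes of
    queries q = (sid, i, M). Classes may merge over time; consistency
    (property 1 in the text) forbids merging classes whose challenges
    were already sampled and differ."""

    def __init__(self, p):
        self.p = p
        self.parent = {}      # union-find forest over queries
        self.challenge = {}   # class root -> assigned challenge

    def _root(self, q):
        self.parent.setdefault(q, q)
        while self.parent[q] != q:
            self.parent[q] = self.parent[self.parent[q]]  # path halving
            q = self.parent[q]
        return q

    def union(self, q1, q2):
        """Record that q1 ~ q2 (e.g., their combined nonces must coincide)."""
        r1, r2 = self._root(q1), self._root(q2)
        if r1 == r2:
            return
        c1, c2 = self.challenge.get(r1), self.challenge.get(r2)
        # Consistency over time: conflicting challenges must never merge.
        assert c1 is None or c2 is None or c1 == c2
        self.parent[r2] = r1
        if c1 is None and c2 is not None:
            self.challenge[r1] = c2

    def C(self, q):
        """Lazily sample the challenge of q's class; equal on ~-equivalent queries."""
        r = self._root(q)
        if r not in self.challenge:
            self.challenge[r] = random.randrange(self.p)
        return self.challenge[r]
```

For instance, after `oracle.union(q, q_prime)`, the calls `oracle.C(q)` and `oracle.C(q_prime)` return the same value, matching the condition \(\textsf{C}(q) = \textsf{C}(q')\) if \(q \sim q'\).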

Cherry on Top: Non-interactive Assumptions. While the scheme we have so far does its job, we still rely on an interactive assumption, and we are eager to avoid it. For that, it is useful to write our scheme abstractly, replacing every exponentiation with the function \({\textsf{T}} (t,x) = t^x\). Note that for almost every \(t \in {\mathbb {G}} \), the function \({\textsf{T}} (t,\cdot )\) is a bijection. Our hope is that by instantiating our scheme with a different function with suitable properties, we can show that the corresponding one-more assumption is implied by a non-interactive assumption. Indeed, Tessaro and Zhu [79] recently used a similar strategy to avoid \(\textsf{OMDL}\) in certain situations. To do so, they replace the bijective function with a compressing function. In our case, the interactive assumption, written abstractly using \({\textsf{T}} \), asks an adversary to win the following game:

  • Random tags g and h and random \(x_0,\dots ,x_t\) are sampled. Then, g, h, and all \(X_i = {\textsf{T}} (g,x_i)\) for \(0\le i \le t\) are given to the adversary.

  • Roughly, the adversary gets t-time access to an algebraic oracle inverting \({\textsf{T}} \). More precisely, the oracle outputs \(\sum _{i=0}^t \alpha _i x_i\) on input \(\alpha _0,\dots ,\alpha _t\).

  • The adversary outputs \(X'_i\) for all \(0\le i \le t\). It wins if all solutions are valid, meaning that for each i there is a \(z_i\) such that \({\textsf{T}} (g,z_i) = X_i \wedge {\textsf{T}} (h,z_i)=X_i'\). Intuitively, the adversary has to “shift” the images \(X_i\) from g to h.
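Instantiating \({\textsf{T}} \) with plain exponentiation in a toy group, the game can be sketched as follows. This is an illustration of the interface only (toy group sizes, and `wins` brute-forces the validity check, which is feasible only at this scale); the sanity check at the end is not an attack, it merely confirms that knowing all \(x_i\) lets one shift the images:

```python
import random

P, Q, G = 11, 23, 2            # toy order-P subgroup of Z_Q^*

def T(tag, x):                 # the abstract T, instantiated as T(t, x) = t^x
    return pow(tag, x % P, Q)

t = 2
xs = [random.randrange(P) for _ in range(t + 1)]   # secrets x_0, ..., x_t
h = T(G, random.randrange(1, P))                   # second random tag
Xs = [T(G, x) for x in xs]                         # given out with G and h

budget = t
def inv_oracle(alphas):
    """t-time algebraic inversion oracle: returns sum_i alpha_i * x_i."""
    global budget
    assert budget > 0 and len(alphas) == t + 1
    budget -= 1
    return sum(a * x for a, x in zip(alphas, xs)) % P

def wins(outputs):
    """Brute-force the winning condition: a common preimage z_i per pair."""
    return all(any(T(G, z) == X and T(h, z) == Xp for z in range(P))
               for X, Xp in zip(Xs, outputs))

# Sanity check of the condition: with all x_i known, shifting succeeds.
# The assumption says this is infeasible given only t linear equations
# about the t+1 secrets.
assert wins([T(h, x) for x in xs])
assert inv_oracle([1, 0, 0]) == xs[0]   # spends one of the t allowed queries
```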

Under a suitable instantiation of \({\textsf{T}} \) and a well-studied non-interactive assumption, we want to show that no adversary can win this game. Unfortunately, if we just use a compressing function as in the case of [79], it is not clear how to make use of the winning condition. Instead, our idea is to use a function that can dynamically be switched between a bijective and a compressing mode. A bit more precisely, a proof sketch works as follows:

  1. We start with the game we introduced above. With overwhelming probability, the functions \({\textsf{T}} _g := {\textsf{T}} (g,\cdot )\) and \({\textsf{T}} _h := {\textsf{T}} (h,\cdot )\) should be bijective.

  2. Assume that we can efficiently invert \({\textsf{T}} _h\) using knowledge of h. Then, we can state our winning condition equivalently by requiring that \({\textsf{T}} _h^{-1}(X_i') = x_i\) for all i. Roughly, this means that the adversary has to find the \(x_i\) to win.

  3. We assume that we can indistinguishably switch g to a mode in which \({\textsf{T}} _g\) is compressing.

  4. Finally, we use a statistical argument to show that the adversary cannot win. Intuitively, this is because \({\textsf{T}} _g\) is compressing and the inversion oracle does not leak too much about the \(x_i\)’s.

It turns out that, by choosing \({\textsf{T}} \) carefully, we obtain a function that (1) has all the properties we need for our scheme and (2) allows us to follow our proof sketch under the \(\textsf{DDH}\) assumption.
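The mode-switching idea in step 3 can be made concrete with a standard example (an illustration of the bijective/compressing dichotomy, not necessarily the paper's exact instantiation): a linear map \(x \mapsto Ax\) over \(\mathbb {Z}_p^2\), published "in the exponent" as \(g^{A}\). A full-rank A gives a bijection of \(\mathbb {Z}_p^2\), while a rank-1 A gives a compressing map with image of size p, and under \(\textsf{DDH}\) the two kinds of exponent tags are computationally indistinguishable:

```python
# Toy illustration over Z_11: image sizes of x -> A*x for the two modes.
P = 11

def image_size(A):
    """Enumerate A*x over all x in Z_P^2 and count distinct images."""
    img = set()
    for x1 in range(P):
        for x2 in range(P):
            y = ((A[0][0] * x1 + A[0][1] * x2) % P,
                 (A[1][0] * x1 + A[1][1] * x2) % P)
            img.add(y)
    return len(img)

full_rank = [[3, 1], [4, 9]]                  # det = 23 = 1 mod 11, invertible
a, w = 3, 5
rank_one = [[a, a * w % P], [2, 2 * w % P]]   # second column = w * first column

assert image_size(full_rank) == P * P         # bijective mode
assert image_size(rank_one) == P              # compressing mode
```

Distinguishing \(g^{A}\) for random full-rank A from rank-1 A is exactly a \(\textsf{DDH}\)-style problem, which is what makes the indistinguishable switch in step 3 plausible.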

1.3 More on Related Work

We discuss further related work, including threshold signatures from other assumptions, and related cryptographic primitives.

Techniques for Adaptive Security. General techniques for achieving adaptive security have been studied [21, 54, 65]. Unfortunately, these techniques often rely on heavy cryptographic machinery and assumptions, e.g., secure erasures or broadcast channels.

Other Algebraic Structures. In the pairing setting, a natural construction is the (non-interactive) threshold version of the BLS signature scheme [14, 17], which has been modified to achieve adaptive security in [62]. Recently, Bacho and Loss [6] have proven adaptive security of threshold BLS in the AGM. Das et al. have constructed weighted threshold signatures in the pairing setting [33], and Crites et al. have constructed structure-preserving threshold signatures in the pairing setting [26]. Threshold signatures have been constructed based on RSA [3, 35, 41, 47, 72, 75, 79]. Notably, adaptive security has been considered in [3]. A few works have also constructed threshold signatures from lattices [2, 12, 16, 32, 51]. Finally, several works have proposed threshold signing protocols for ECDSA signatures [20, 22, 31, 38, 43–46, 64]. Except for [20], these works focus on static corruptions. For an overview of this line of work, see [5].

Robustness. Recently, there has been renewed interest in robust (Schnorr) threshold signing protocols [13, 50, 73, 76]. Such robust protocols additionally ensure that no malicious party can prevent honest parties from signing. Notably, all of these protocols assume static corruptions.

Multi-signatures. Multi-signatures [10, 53] are threshold signatures with \(t = n-1\), i.e., all n parties need to participate in the signing protocol, with the advantage that parties generate their keys independently and come together to sign spontaneously without setting up a shared key. There is a rich literature on multi-signatures, e.g., [4, 9, 14, 15, 18, 40, 66–68, 79]. Closest in spirit to our work are the work of Pan and Wagner [69], which avoids rewinding, and that of Tessaro and Zhu [79], which aims at non-interactive assumptions.

Distributed Key Generation. In principle, one can rely on generic secure multi-party computation to set up key shares for a threshold signature scheme without using a trusted dealer. To get a more efficient solution, dedicated distributed key generation protocols have been studied [21, 34, 48, 54, 59, 61, 71], with some of them being adaptively secure [21, 54, 59].

2 Preliminaries

By \(\lambda \) we denote the security parameter. We assume all algorithms get \(\lambda \) in unary as input. If X is a finite set, we write \(x \xleftarrow {{\!\!\tiny \$}}X\) to indicate that x is sampled uniformly at random from X. If \(\mathcal {A}\) is a probabilistic algorithm, we write \(y := \mathcal {A}(x;\rho )\) to state that y is assigned to the output of \(\mathcal {A}\) on input x with random coins \(\rho \). If \(\rho \) is sampled uniformly at random, we simply write \(y \leftarrow \mathcal {A}(x)\). Further, the notation \(y \in \mathcal {A}(x)\) indicates that y is a possible output of \(\mathcal {A}\) on input x, i.e., there are random coins \(\rho \) such that \(\mathcal {A}(x;\rho )\) outputs y.

Threshold Signatures. We define threshold signatures, assuming a trusted key generation, which can be replaced by a distributed key generation in practice. Our syntax matches the three-round structure of our protocol. Namely, a (t, n)-threshold signature scheme is a tuple of PPT algorithms \({\textsf{TS}} = ({\textsf{Setup}},{\textsf{Gen}},{\textsf{Sig}},{\textsf{Ver}})\), where \({\textsf{Setup}} (1^\lambda )\) outputs system parameters \({\textsf{par}} \), and \({\textsf{Gen}} ({\textsf{par}})\) outputs a public key \({\textsf{pk}} \) and secret key shares \({\textsf{sk}} _1,\dots ,{\textsf{sk}} _n\). Further, \({\textsf{Sig}} \) specifies a signing protocol, formally split into four algorithms \(({\textsf{Sig}} _0,{\textsf{Sig}} _1,{\textsf{Sig}} _2,{\textsf{Combine}})\). Here, algorithm \({\textsf{Sig}} _j\) models how the signers locally compute their \((j+1)\)st protocol message \(\textsf{pm}_{j+1}\) and advance their state, where \({\textsf{Sig}} _0(S,i,{\textsf{sk}} _i,\textsf{m})\) takes as input the signer set S, the index of the signer \(i \in [n]\), its secret key share \({\textsf{sk}} _i\), and the message \(\textsf{m}\), and \({\textsf{Sig}} _1\) (resp. \({\textsf{Sig}} _2\)) takes as input the current state of the signer and the list \(\mathcal {M} _1\) (resp. \(\mathcal {M} _2\)) of all protocol messages from the previous round. Finally, \({\textsf{Combine}} (S,\textsf{m},\mathcal {M} _1,\mathcal {M} _2,\mathcal {M} _3)\) can be used to publicly turn the transcript into a signature \(\sigma \), which can then be verified using \({\textsf{Ver}} ({\textsf{pk}},\textsf{m},\sigma )\). Roughly, we say that the scheme is complete if for any such parameters and keys, the signing protocol run among \(t+1\) parties outputs a signature for which \({\textsf{Ver}} \) outputs 1. For a more formal and precise definition of syntax and completeness, we refer to the full version [7].
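One way to transcribe this syntax into code is as a structural interface. The type aliases `State` and `Msg` and all parameter types are placeholders we introduce here; the method names mirror the algorithms in the text:

```python
from __future__ import annotations
from typing import Any, Protocol

# Placeholder types for signer state and protocol messages (our choice).
State = Any
Msg = Any

class ThresholdSignature(Protocol):
    """Structural interface for a three-round (t, n)-threshold signature."""

    def setup(self, security: int) -> Any:                 # Setup(1^lambda) -> par
        ...

    def gen(self, par: Any) -> tuple[Any, list[Any]]:      # Gen(par) -> (pk, [sk_1..sk_n])
        ...

    def sig0(self, S: list[int], i: int, sk_i: Any, m: str) -> tuple[State, Msg]:
        ...                                                # produce pm_1, initial state

    def sig1(self, st: State, M1: list[Msg]) -> tuple[State, Msg]:
        ...                                                # all round-1 messages -> pm_2

    def sig2(self, st: State, M2: list[Msg]) -> tuple[State, Msg]:
        ...                                                # all round-2 messages -> pm_3

    def combine(self, S: list[int], m: str,
                M1: list[Msg], M2: list[Msg], M3: list[Msg]) -> Any:
        ...                                                # public transcript -> signature

    def ver(self, pk: Any, m: str, sig: Any) -> bool:
        ...
```

Any concrete scheme exposing these seven methods conforms to the interface without inheriting from it, which matches the paper's view of \({\textsf{Sig}} \) as a tuple of locally executed algorithms.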

Our security game is in line with the established template and is presented in Fig. 1. First, the adversary gets an honestly generated public key as input. At any point in time, the adversary can start a new signing session with signer set S and message \(\textsf{m}\) with session identifier sid by calling an oracle \({\textsc {Next}} (sid,S,\textsf{m})\). Additionally, the adversary may adaptively corrupt up to t users via an oracle \({\textsc {Corr}} \). Thereby, it learns their secret key and private state in all currently open signing sessions. To interact with honest users in signing sessions, the adversary has access to per-round signing oracles \({\textsc {Sig}} _0,{\textsc {Sig}} _1,{\textsc {Sig}} _2\). Roughly, each signing oracle can be called with respect to a specific honest user i and a session identifier sid, given that the user is already in the respective round for that session (modeled by algorithm \(\textsf{Allowed}\)). Further, when calling such an oracle, the adversary inputs the vector of all messages of the previous round. In particular, the adversary could send different messages to two different honest parties within the same session, i.e., we assume no broadcast channels. Additionally, this means that the adversary can arbitrarily decide which message to send to an honest party on behalf of another honest party, i.e., we assume no authenticated channels. Finally, the adversary outputs a forgery \((\textsf{m}^*,\sigma ^*)\). It wins the security game if it never started a signing session for message \(\textsf{m}^*\) and the signature \(\sigma ^*\) is valid. Therefore, our notion is (an interactive version of) TS-UF-0 using the terminology of [8, 11], which is similar to recent works [25, 28].

Fig. 1. The game \(\mathbf {TS\text {-}EUF\text {-}CMA} \) for a (three-round) (t, n)-threshold signature scheme \({\textsf{TS}} = ({\textsf{Setup}},{\textsf{Gen}},{\textsf{Sig}},{\textsf{Ver}})\) and an adversary \(\mathcal {A}\).

No Erasures. In our pseudocode, the private state of signer i in session sid is stored in \(\textsf{state} [sid,i]\), where \(\textsf{state} \) is a map. After each signing round, this state is updated. We choose to update the state instead of adding a new state to avoid clutter, which is similar to earlier works [28]. On the downside, this means that potentially, schemes that are secure in our model could rely on erasures, i.e., on safely deleting part of the state of an earlier round before a user gets corrupted. We emphasize that in our scheme, any state in earlier rounds can be computed from the state in the current round and the secret key. This means that our schemes do not rely on erasures.

Definition 1

(\(\mathsf {TS\text {-}EUF\text {-}CMA}\) Security). Let \({\textsf{TS}} = ({\textsf{Setup}},{\textsf{Gen}},{\textsf{Sig}},{\textsf{Ver}})\) be a (t, n)-threshold signature scheme. Consider the game \(\mathbf {TS\text {-}EUF\text {-}CMA} \) defined in Fig. 1. We say that \({\textsf{TS}} \) is \(\mathsf {TS\text {-}EUF\text {-}CMA}\) secure if for all PPT adversaries \(\mathcal {A}\), the following advantage is negligible:

$$\begin{aligned} {\textsf{Adv}_{\mathcal {A},{\textsf{TS}}}^{\mathsf {TS\text {-}EUF\text {-}CMA}}} (\lambda ) := {\Pr \left[ {\mathbf {TS\text {-}EUF\text {-}CMA} _{\textsf{TS}} ^\mathcal {A}(\lambda ) \Rightarrow 1}\right] }. \end{aligned}$$

3 Our Construction

In this section, we present our new threshold signature scheme. Before presenting it, we first introduce a building block we need, which we call tagged linear function families.

3.1 Tagged Linear Function Families

Similar to what is done in other works [23, 52, 55, 69, 79], we use the abstraction of linear function families to describe our scheme in a generic way. However, we slightly change the notion by introducing tags to cover different functions with the same set of parameters.

Definition 2

(Tagged Linear Function Family). A tagged linear function family is a tuple of PPT algorithms \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) with the following syntax:

  • \({\textsf{Gen}} (1^\lambda ) \rightarrow {\textsf{par}} \) takes as input the security parameter \(1^\lambda \) and outputs parameters \({\textsf{par}} \). We assume that \({\textsf{par}} \) implicitly defines the following sets: A set of scalars \(\mathcal {S} _{\textsf{par}} \), which forms a field; a set of tags \(\mathcal {T} _{\textsf{par}} \); a domain \(\mathcal {D} _{\textsf{par}} \) and a range \(\mathcal {R} _{\textsf{par}} \), where each forms a vector space over \(\mathcal {S} _{\textsf{par}} \). If \({\textsf{par}} \) is clear from the context, we omit the subscript \({\textsf{par}} \). We naturally denote the operations of these fields and vector spaces by \(+\) and \(\cdot \), and assume that these operations can be evaluated efficiently.

  • \({\textsf{T}} ({\textsf{par}},g,x) \rightarrow X\) is deterministic, takes as input parameters \({\textsf{par}} \), a tag \(g \in \mathcal {T} \), a domain element \(x \in \mathcal {D} \), and outputs a range element \(X \in \mathcal {R} \). For all parameters \({\textsf{par}} \), and for all tags \(g \in \mathcal {T} \), the function \({\textsf{T}} ({\textsf{par}},g,\cdot )\) realizes a homomorphism, i.e.

    $$\begin{aligned} \forall s \in \mathcal {S}, x,y \in \mathcal {D}:~{\textsf{T}} ({\textsf{par}},g,s \cdot x + y) = s \cdot {\textsf{T}} ({\textsf{par}},g,x) + {\textsf{T}} ({\textsf{par}},g,y). \end{aligned}$$

    For \({\textsf{T}} \), we also omit the input \({\textsf{par}} \) if it is clear from the context.
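For intuition, the running example from the overview, \({\textsf{T}} ({\textsf{par}},g,x) = g^x\), already satisfies this syntax: scalars and domain are \(\mathbb {Z}_p\), tags and range are the group, and the vector-space "+" on the range is group multiplication. A toy check of the homomorphism (same toy group as before, our parameter choices):

```python
# T(par, g, x) = g^x in the order-P subgroup of Z_Q^* (toy sizes).
P, Q = 11, 23          # scalars/domain Z_P; tags/range the subgroup of Z_Q^*

def T(g, x):
    """The tagged linear function; par = (P, Q) is left implicit."""
    return pow(g, x % P, Q)

g = 2                  # a tag (here: a group element)
s, x, y = 3, 4, 9
# Homomorphism: T(g, s*x + y) = s * T(g, x) + T(g, y), where on the range
# "+" is group multiplication and scalar action is exponentiation.
lhs = T(g, (s * x + y) % P)
rhs = pow(T(g, x), s, Q) * T(g, y) % Q
assert lhs == rhs
```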

For our construction, we require that images are uniformly distributed. More precisely, we say that \({\textsf{TLF}} \) is \(\varepsilon _{\textsf{r}}\)-regular, if there is a set \(\textsf{Reg}\) of pairs \(({\textsf{par}},g)\) such that random parameters \({\textsf{par}} \) and tags g are in \(\textsf{Reg}\) with probability at least \(1-\varepsilon _{\textsf{r}}\), and for each such pair in \(\textsf{Reg}\), \({\textsf{T}} ({\textsf{par}},g,x)\) is uniformly distributed over the range, assuming \(x \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \). We postpone a more formal definition to the full version [7]. Next, we show that tagged linear function families satisfy a statistical property that turns out to be useful. This property is implicitly present in other works as well, e.g., in [1, 56, 57, 69], and can be interpreted in various ways, e.g., as the soundness of a natural proof system.

Lemma 1

Let \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) be a tagged linear function family. For every fixed parameters \({\textsf{par}} \) and tags \(g,h \in \mathcal {T} \), define the set

$$ \textsf{Im}({\textsf{par}},g,h) := \left\{ (X_1,X_2)\in \mathcal {R} ^2~\left| ~\exists x \in \mathcal {D}:~{\textsf{T}} (g,x)=X_1 \wedge {\textsf{T}} (h,x)=X_2\right. \right\} . $$

Then, for any (even unbounded) algorithm \(\mathcal {A}\), we have

$$\begin{aligned} \Pr \left[ \begin{array}{rl} &{} (X_1,X_2) \notin \textsf{Im}({\textsf{par}},g,h) \\ \wedge &{} {\textsf{T}} (g,s) = c\cdot X_1 + R_1 \\ \wedge &{} {\textsf{T}} (h,s) = c\cdot X_2 + R_2 \end{array} ~\left| ~ \begin{array}{l} {\textsf{par}} \leftarrow {\textsf{Gen}} (1^\lambda ), \\ (St,g,h,X_1,X_2,R_1,R_2) \leftarrow \mathcal {A}({\textsf{par}}), \\ c \xleftarrow {{\!\!\tiny \$}}\mathcal {S},~~s\leftarrow \mathcal {A}(St,c) \end{array}\right. \right] \le \frac{1}{|\mathcal {S} |}. \end{aligned}$$

The proof of Lemma 1 is postponed to the full version [7]. As another technical tool in our proof, we need our tagged linear function families to be translatable, a notion we define next. Informally, it means that we can rerandomize a given tag g into a tag h, such that we can efficiently compute \({\textsf{T}} (h,x)\) from \({\textsf{T}} (g,x)\) without knowing x.

Definition 3

(Translatability). Let \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) be a tagged linear function family. We say that \({\textsf{TLF}} \) is \(\varepsilon _{\textsf{t}}\)-translatable, if there is a PPT algorithm \({\textsf{Shift}} \) and deterministic polynomial time algorithms \({\textsf{Translate}} \) and \({\textsf{InvTranslate}} \), such that the following properties hold:

  • Well Distributed Tags. The statistical distance between the following distributions \(\mathcal {X} _0\) and \(\mathcal {X} _1\) is at most \(\varepsilon _{\textsf{t}}\):

    $$\begin{aligned} \mathcal {X} _0 &:= \left\{ ({\textsf{par}},g,h)~\left| ~{\textsf{par}} \leftarrow {\textsf{Gen}} (1^\lambda ),~g \xleftarrow {{\!\!\tiny \$}}\mathcal {T},~h\xleftarrow {{\!\!\tiny \$}}\mathcal {T} \right. \right\} , \\ \mathcal {X} _1 &:= \left\{ ({\textsf{par}},g,h)~\left| ~{\textsf{par}} \leftarrow {\textsf{Gen}} (1^\lambda ),~g \xleftarrow {{\!\!\tiny \$}}\mathcal {T},~(h,{\textsf{td}})\leftarrow {\textsf{Shift}} ({\textsf{par}},g)\right. \right\} . \end{aligned}$$
  • Translation Completeness. For every \({\textsf{par}} \in {\textsf{Gen}} (1^\lambda )\), for any \(g\in \mathcal {T} \), any \(x \in \mathcal {D} \), and any \((h,{\textsf{td}}) \in {\textsf{Shift}} ({\textsf{par}},g)\), we have

    $$ {\textsf{Translate}} ({\textsf{td}}, {\textsf{T}} (g,x)) = {\textsf{T}} (h,x) \text { and } {\textsf{InvTranslate}} ({\textsf{td}}, {\textsf{T}} (h,x)) = {\textsf{T}} (g,x). $$
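As an illustration (ours, with toy insecure parameters), translatability holds for the discrete-log instantiation \({\textsf{T}} (g,x) = g^x\): \({\textsf{Shift}} \) picks a trapdoor td and outputs the shifted tag \(h = g^{td}\), translating an image is exponentiation by td, and inverting is exponentiation by \(td^{-1} \bmod q\).

```python
# Toy sketch (ours, NOT secure) of translatability for the discrete-log
# instantiation T(g, x) = g^x: Shift picks a trapdoor td and outputs the
# shifted tag h = g^td; Translate raises images to the td-th power and
# InvTranslate to the (td^{-1} mod Q)-th power. Parameters illustrative.
import random

Q = 1019                                 # prime group order
P = 2 * Q + 1                            # P prime; squares mod P have order Q

def T(g, x):
    return pow(g, x % Q, P)

def shift(g):
    td = random.randrange(1, Q)          # trapdoor
    return pow(g, td, P), td             # shifted tag h = g^td

def translate(td, X):
    return pow(X, td, P)                 # maps T(g, x) to T(h, x)

def inv_translate(td, X):
    return pow(X, pow(td, -1, Q), P)     # maps T(h, x) back to T(g, x)

g = pow(4, random.randrange(1, Q), P)
h, td = shift(g)
x = random.randrange(Q)
assert translate(td, T(g, x)) == T(h, x)
assert inv_translate(td, T(h, x)) == T(g, x)
```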

Next, we define the main security property that we will require for our construction. Intuitively, it should not be possible for an adversary to translate \({\textsf{T}} (g,x)\) into \({\textsf{T}} (h,x)\) if g, h, and x are chosen randomly. Our actual notion is a one-more variant of this intuition.

Definition 4

(Algebraic Translation Resistance). Let \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) be a tagged linear function family, and \(t \in \mathbb N \) be a number. Consider the game \(\mathbf {A\text {-}TRAN\text {-}RES} \) defined in Fig. 2. We say that \({\textsf{TLF}} \) is t-algebraic translation resistant, if for any PPT algorithm \(\mathcal {A}\), the following advantage is negligible:

$$ {\textsf{Adv}_{\mathcal {A},{\textsf{TLF}}}^{t\text {-}\mathsf {A\text {-}TRAN\text {-}RES}}} (\lambda ):= {\Pr \left[ {t\text {-}\mathbf {A\text {-}TRAN\text {-}RES} _{\textsf{TLF}} ^\mathcal {A}(\lambda )\Rightarrow 1}\right] }. $$
Fig. 2. Game \(\mathbf {A\text {-}TRAN\text {-}RES} \) for a tagged linear function family \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) and adversary \(\mathcal {A}\).

3.2 Construction

Let \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) be a tagged linear function family. Further, let \(\textsf{H}:{{\{0,1\}}^{*}} \rightarrow \mathcal {T} \), \(\hat{\textsf{H}}:{{\{0,1\}}^{*}} \rightarrow {{\{0,1\}}^{}} ^{2\lambda }\), \(\bar{\textsf{H}}:{{\{0,1\}}^{*}} \rightarrow \mathcal {S} \) be random oracles. We construct a \((t,n)\)-threshold signature scheme \({\textsf{Twinkle}[{{\textsf{TLF}}}]} = ({\textsf{Setup}},{\textsf{Gen}},{\textsf{Sig}},{\textsf{Ver}})\). We assume that there is an implicit injection from [n] into \(\mathcal {S} \). Further, let \(\ell _{i,S}(x) := \prod _{j \in S\setminus \{i\}} (j-x)/(j-i) \in \mathcal {S} \) denote the \(i\)-th Lagrange coefficient for all \(i \in [n]\) and \(S\subseteq [n]\), and let \(\ell _{i,S} := \ell _{i,S}(0)\). We describe our scheme verbally.
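The Lagrange coefficients can be computed directly over \(\mathcal {S} = \mathbb {Z}_q\); the short sketch below (illustrative field size, our own) also checks the interpolation identity \(\sum _{i\in S}\ell _{i,S}\,f(i) = f(0)\) for a degree-t polynomial and any set of \(t+1\) points.

```python
# Sketch (illustrative field size) of the Lagrange coefficients
# l_{i,S}(x) = prod_{j in S\{i}} (j - x)/(j - i) over Z_q, together
# with the interpolation identity sum_i l_{i,S} * f(i) = f(0).
Q = 1019  # prime scalar field size, illustrative

def lagrange(i, S, x=0):
    num, den = 1, 1
    for j in S:
        if j != i:
            num = num * (j - x) % Q
            den = den * (j - i) % Q
    return num * pow(den, -1, Q) % Q

# A degree-t polynomial is recovered at 0 from any t+1 evaluations:
coeffs = [7, 3, 5]                              # f(X) = 7 + 3X + 5X^2, t = 2
f = lambda z: sum(c * z**k for k, c in enumerate(coeffs)) % Q
S = [1, 3, 4]                                   # any t+1 = 3 indices
assert sum(lagrange(i, S) * f(i) for i in S) % Q == coeffs[0]
```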

Setup and Key Generation. All parties have access to public parameters \({\textsf{par}} \leftarrow {\textsf{TLF}}.{\textsf{Gen}} (1^\lambda )\) which define the function \({\textsf{T}} \), and sets \(\mathcal {S},\mathcal {T},\mathcal {D},\) and \(\mathcal {R} \), and to a random tag \(g \xleftarrow {{\!\!\tiny \$}}\mathcal {T} \). To generate keys, elements \(a_j \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) for \(j \in \{0\}\cup [t]\) are sampled. These elements form the coefficients of a polynomial of degree t. For each \(i \in [n]\), we define the key pair \(({\textsf{pk}} _i,{\textsf{sk}} _i)\) for the ith signer as

$$ {\textsf{sk}} _i := \sum _{j=0}^{t}a_j i^j,~~{\textsf{pk}} _i := {\textsf{T}} (g,{\textsf{sk}} _i). $$

The shared public key is defined as \({\textsf{pk}}:= {\textsf{pk}} _0 := {\textsf{T}} (g,a_0)\).
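A minimal sketch (ours, with toy insecure parameters) of this setup and key generation for the discrete-log instantiation \({\textsf{T}} (g,x)=g^x\); it also verifies that any \(t+1\) public key shares recombine to \({\textsf{pk}} \) via Lagrange coefficients in the exponent.

```python
# Toy sketch (ours, NOT secure) of setup and key generation: Shamir
# shares sk_i = f(i) of a random degree-t polynomial f, public key
# shares pk_i = T(g, sk_i) = g^{sk_i}, and shared key pk = g^{a_0}.
import random

Q = 1019
P = 2 * Q + 1
n, t = 5, 2

def lagrange(i, S):
    num, den = 1, 1
    for j in S:
        if j != i:
            num, den = num * j % Q, den * (j - i) % Q
    return num * pow(den, -1, Q) % Q

g = pow(4, random.randrange(1, Q), P)            # public random tag
a = [random.randrange(Q) for _ in range(t + 1)]  # polynomial coefficients
sk = {i: sum(c * i**j for j, c in enumerate(a)) % Q for i in range(1, n + 1)}
pk_shares = {i: pow(g, sk[i], P) for i in sk}    # pk_i = T(g, sk_i)
pk = pow(g, a[0], P)                             # shared public key

# Any t+1 shares recombine to pk via Lagrange coefficients in the exponent:
S = [2, 3, 5]
acc = 1
for i in S:
    acc = acc * pow(pk_shares[i], lagrange(i, S), P) % P
assert acc == pk
```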

Signing Protocol. Let \(S \subseteq [n]\) be a set of signers of size \(t+1\). We assume all signers are aware of the set S and a message \(\textsf{m}\in {{\{0,1\}}^{*}} \) to be signed. First, they all compute \(h := \textsf{H}(\textsf{m})\). Then, they run the following protocol phases to compute the signature:

  1.

    Commitment Phase. Each signer \(i\in S\) samples \(r_i \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and computes

    $$ R_{i}^{(1)} := {\textsf{T}} (g,r_i),~~R_{i}^{(2)} := {\textsf{T}} (h,r_i),~~{\textsf{pk}} _{i}^{(2)} := {\textsf{T}} (h,{\textsf{sk}} _i). $$

    Then, each signer \(i\in S\) computes a commitment

    $$\textsf{com} _i := \hat{\textsf{H}}(S,i,R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)})$$

    and sends \(\textsf{com} _i\) to the other signers.

  2.

    Opening Phase. Each signer \(i\in S\) sends \(R_{i}^{(1)},R_{i}^{(2)}\) and \({\textsf{pk}} _{i}^{(2)}\) to all other signers.

  3.

    Response Phase. Each signer \(i\in S\) checks that \(\textsf{com} _{j} = \hat{\textsf{H}}(S,j,R_{j}^{(1)},R_{j}^{(2)},{\textsf{pk}} _{j}^{(2)})\) holds for all \(j \in S\). If one of these equations does not hold, the signer aborts. Otherwise, the signer defines

    $$ R^{(1)} := \sum _{j\in S} R_{j}^{(1)} ,~~R^{(2)}:= \sum _{j\in S} R_{j}^{(2)},~~ {\textsf{pk}} ^{(2)} := \sum _{j\in S}\ell _{j,S}{\textsf{pk}} _{j}^{(2)}. $$

    The signer computes \(c := \bar{\textsf{H}}({\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m})\) and \(s_i := c \cdot \ell _{i,S}\cdot {\textsf{sk}} _i + r_i\). It sends \(s_i\) to all other signers.

The signature is \(\sigma := ({\textsf{pk}} ^{(2)},c,s)\) for \(s := \sum _{j\in S} s_j\).

Verification. Let \({\textsf{pk}} \) be a public key, let \(\textsf{m}\in {{\{0,1\}}^{*}} \) be a message and let \(\sigma = ({\textsf{pk}} ^{(2)},c,s)\) be a signature. To verify \(\sigma \) with respect to \({\textsf{pk}} \) and \(\textsf{m}\), one first computes \(h := \textsf{H}(\textsf{m})\) and \(R^{(1)} := {\textsf{T}} (g,s)- c\cdot {\textsf{pk}} \), \(R^{(2)} := {\textsf{T}} (h,s)-c\cdot {\textsf{pk}} ^{(2)}\). Then, one accepts the signature, i.e., outputs 1, if and only if \(c = \bar{\textsf{H}}({\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}).\)
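The signing and verification equations above can be exercised end to end. The following sketch (our own, with toy insecure parameters and plain hashes standing in for the random oracles \(\textsf{H},\hat{\textsf{H}},\bar{\textsf{H}}\)) simulates all \(t+1\) signers honestly in one process for the discrete-log instantiation \({\textsf{T}} (g,x)=g^x\), where the additive range operation becomes group multiplication.

```python
# Toy end-to-end sketch (ours, NOT secure) of the Twinkle signing flow
# for the discrete-log instantiation T(g, x) = g^x. All t+1 signers are
# simulated honestly; sha256-based functions stand in for the oracles.
import hashlib, random

Q = 1019
P = 2 * Q + 1
G0 = 4

def T(g, x): return pow(g, x % Q, P)

def H_tag(m):                  # H: {0,1}* -> tags (order-Q elements)
    e = int.from_bytes(hashlib.sha256(b"tag" + m).digest(), "big")
    return pow(G0, e % (Q - 1) + 1, P)

def H_scalar(*vals):           # H-bar: -> scalars in Z_q
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def H_com(*vals):              # H-hat: commitment oracle
    data = b"|".join(str(v).encode() for v in vals)
    return hashlib.sha256(b"com" + data).hexdigest()

def lagrange(i, S):
    num, den = 1, 1
    for j in S:
        if j != i:
            num, den = num * j % Q, den * (j - i) % Q
    return num * pow(den, -1, Q) % Q

# --- setup and key generation ---
n, t = 5, 2
g = pow(G0, random.randrange(1, Q), P)
a = [random.randrange(Q) for _ in range(t + 1)]
sk = {i: sum(c * i**j for j, c in enumerate(a)) % Q for i in range(1, n + 1)}
pk = T(g, a[0])

# --- signing: signer set S of size t+1, message m ---
S, m = [1, 2, 4], b"hello"
h = H_tag(m)
r = {i: random.randrange(Q) for i in S}
R1 = {i: T(g, r[i]) for i in S}
R2 = {i: T(h, r[i]) for i in S}
pk2 = {i: T(h, sk[i]) for i in S}
com = {i: H_com(S, i, R1[i], R2[i], pk2[i]) for i in S}       # phase 1
assert all(com[j] == H_com(S, j, R1[j], R2[j], pk2[j]) for j in S)  # phase 2
Ragg1 = Ragg2 = pk2agg = 1                                    # phase 3
for j in S:
    Ragg1 = Ragg1 * R1[j] % P
    Ragg2 = Ragg2 * R2[j] % P
    pk2agg = pk2agg * pow(pk2[j], lagrange(j, S), P) % P
c = H_scalar(pk, pk2agg, Ragg1, Ragg2, m)
s = sum(c * lagrange(i, S) * sk[i] + r[i] for i in S) % Q
sig = (pk2agg, c, s)

# --- verification ---
pk2v, cv, sv = sig
hv = H_tag(m)
R1v = T(g, sv) * pow(pk, -cv % Q, P) % P    # additive "T(g,s) - c*pk"
R2v = T(hv, sv) * pow(pk2v, -cv % Q, P) % P
assert cv == H_scalar(pk, pk2v, R1v, R2v, m)
```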

Theorem 1

Let \({\textsf{TLF}} = ({\textsf{Gen}},{\textsf{T}})\) be a tagged linear function family and let \(\textsf{H}:{{\{0,1\}}^{*}} \rightarrow \mathcal {T} \), \(\hat{\textsf{H}}:{{\{0,1\}}^{*}} \rightarrow {{\{0,1\}}^{}} ^{2\lambda }\), \(\bar{\textsf{H}}:{{\{0,1\}}^{*}} \rightarrow \mathcal {S} \) be random oracles. Assume that \({\textsf{TLF}} \) is \(\varepsilon _{\textsf{r}}\)-regular and \(\varepsilon _{\textsf{t}}\)-translatable. Further, assume that \({\textsf{TLF}} \) is t-algebraic translation resistant. Then, \({\textsf{Twinkle}[{{\textsf{TLF}}}]} \) is \(\mathsf {TS\text {-}EUF\text {-}CMA}\) secure.

Proof

Fix an adversary \(\mathcal {A}\) against the security of \({\textsf{TS}}:= {\textsf{Twinkle}[{{\textsf{TLF}}}]} \). We prove the statement by presenting a sequence of games \({{\textbf {G}}} _0\)-\({{\textbf {G}}} _{11}\). All games and associated oracles and algorithms are presented as pseudocode in the full version [7].

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{0}{} \mathbf{:}}\) This game is the security game \(\mathbf {TS\text {-}EUF\text {-}CMA} _{\textsf{TS}} ^\mathcal {A}\) for threshold signatures. We recall the game to fix some notation. First, the game samples parameters \({\textsf{par}} '\) for \({\textsf{TLF}} \) and a tag \(g \xleftarrow {{\!\!\tiny \$}}\mathcal {T} \). It also samples random coefficients \(a_0,\dots ,a_t \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and computes a public key \({\textsf{pk}}:= {\textsf{pk}} _0 := {\textsf{T}} (g,a_0)\) and secret key shares \({\textsf{sk}} _i := \sum _{j=0}^{t}a_ji^j\) for each \(i \in [n]\). For convenience, denote the corresponding public key shares by \({\textsf{pk}} _i := {\textsf{T}} (g,{\textsf{sk}} _i)\). Then, the game runs \(\mathcal {A}\) on input \({\textsf{par}}:= ({\textsf{par}} ',g)\) and \({\textsf{pk}} \) with access to signing oracles, corruption oracles, and random oracles. Concretely, it gets access to random oracles \(\textsf{H},\hat{\textsf{H}},\) and \(\bar{\textsf{H}}\), which are provided by the game in the standard lazy way using maps \(h[\cdot ],\hat{h}[\cdot ],\) and \(\bar{h}[\cdot ]\), respectively. The set of corrupted parties is denoted by \(\textsf{Corrupted} \) and the set of queried messages is denoted by \(\textsf{Queried} \). Finally, the adversary outputs a forgery \((\textsf{m}^*,\sigma ^*)\) and the game outputs 1 if \(\textsf{m}^* \notin \textsf{Queried} \), \(|\textsf{Corrupted} | \le t\), and \(\sigma ^*\) is a valid signature for \(\textsf{m}^*\).

We make three purely conceptual changes to the game. First, we will never keep the secret key share \({\textsf{sk}} _i\) explicitly in the states \(\textsf{state} [sid,i]\) for users i in a session sid, although the scheme description would require this. This is without loss of generality, as the adversary only gets to see the states when it corrupts a user, and in this case it also gets \({\textsf{sk}} _i\). Second, we assume the adversary always queried \(\textsf{H}(\textsf{m}^*)\) before outputting its forgery. Third, we assume that the adversary makes exactly t (distinct) corruption queries. These changes are without loss of generality and do not change the advantage of \(\mathcal {A}\). Formally, one could build a wrapper adversary that internally runs \(\mathcal {A}\), but makes a query \(\textsf{H}(\textsf{m}^*)\) and enough corruption queries before terminating, and on every corruption query includes \({\textsf{sk}} _i\) in the states before passing the result back to \(\mathcal {A}\). Clearly, we have \( {\textsf{Adv}_{\mathcal {A},{\textsf{TS}}}^{\mathsf {TS\text {-}EUF\text {-}CMA}}} (\lambda ) = {\Pr \left[ {{{\textbf {G}}} _0 \Rightarrow 1}\right] }.\)

The remainder of our proof is split into three parts. In the first part (\({{\textbf {G}}} _1\)-\({{\textbf {G}}} _3\)), we ensure that the game no longer needs secret key shares \({\textsf{sk}} _i\) to compute \({\textsf{pk}} _i^{(2)}\) in the signing oracle. Roughly, this is done by embedding shifted tags \((h,{\textsf{td}}) \leftarrow {\textsf{Shift}} ({\textsf{par}} ',g)\) into random oracle \(\textsf{H}\) for signing queries, and keeping random tags h for the query related to the forgery. In the second part (\({{\textbf {G}}} _4\)-\({{\textbf {G}}} _{11}\)), we use careful delayed random oracle programming, observability of the random oracle, and an honest-verifier zero-knowledge-style programming to simulate the remaining parts of the signing queries without \({\textsf{sk}} _i\). As a result, \({\textsf{sk}} _i\) is only needed when the adversary corrupts users. In the third part, we analyze \({{\textbf {G}}} _{11}\). This is done by distinguishing two cases. One of the cases is bounded using a statistical argument. The other case is bounded using a reduction breaking the t-algebraic translation resistance of \({\textsf{TLF}} \). We now proceed with the details.

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{1}{} \mathbf{:}}\) In this game, we introduce a map \(b[\cdot ]\) that maps messages \(\textsf{m}\) to bits \(b[\textsf{m}] \in {{\{0,1\}}^{}} \). Concretely, whenever a query \(\textsf{H}(\textsf{m})\) is made for which the hash value is not yet defined, the game samples \(b[\textsf{m}]\) from a Bernoulli distribution \({\mathcal {B}_{\gamma }} \) with parameter \(\gamma = 1/(Q_S+1)\). That is, \(b[\textsf{m}]\) is set to 1 with probability \(1/(Q_S+1)\) and to 0 otherwise. The game aborts if \(b[\textsf{m}] = 1\) for some message \(\textsf{m}\) for which the signing oracle is called, or \(b[\textsf{m}^*] = 0\) for the forgery message \(\textsf{m}^*\). Clearly, if no abort occurs, games \({{\textbf {G}}} _0\) and \({{\textbf {G}}} _1\) are the same. Further, the view of \(\mathcal {A}\) is independent of the map b. We obtain

$$ {\Pr \left[ {{{\textbf {G}}} _1 \Rightarrow 1}\right] } = \gamma \left( 1-\gamma \right) ^{Q_S}\cdot {\Pr \left[ {{{\textbf {G}}} _0 \Rightarrow 1}\right] } $$

Now, we can use the fact \((1-1/x)^x \ge 1/4\) for all \(x \ge 2\) and get

$$ \gamma \left( 1-\gamma \right) ^{Q_S} = \frac{1}{Q_S+1} \left( 1-\frac{1}{Q_S+1} \right) ^{Q_S} = \frac{1}{Q_S} \left( 1-\frac{1}{Q_S+1} \right) ^{Q_S+1} \ge \frac{1}{4Q_S}, $$

where the second equality is shown in the full version [7]. In combination, we get \( {\Pr \left[ {{{\textbf {G}}} _1 \Rightarrow 1}\right] } \ge \frac{1}{4Q_S}\cdot {\Pr \left[ {{{\textbf {G}}} _0 \Rightarrow 1}\right] }. \)
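The bound is easy to check numerically; a quick sanity check (ours) of \(\gamma (1-\gamma )^{Q_S} \ge 1/(4Q_S)\) for a few values of \(Q_S\):

```python
# Quick numeric sanity check (ours) of the bound
# gamma * (1 - gamma)^{Q_S} >= 1/(4 * Q_S) for gamma = 1/(Q_S + 1).
for Q_S in [1, 2, 10, 100, 10**4]:
    gamma = 1 / (Q_S + 1)
    assert gamma * (1 - gamma) ** Q_S >= 1 / (4 * Q_S)
```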

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{2}{} \mathbf{:}}\) In game \({{\textbf {G}}} _2\), we change the way queries to random oracle \(\textsf{H}\) are answered. Namely, for a query \(\textsf{H}(\textsf{m})\) for which the hash value \(h[\textsf{m}]\) is not yet defined, the game samples \(h[\textsf{m}] \xleftarrow {{\!\!\tiny \$}}\mathcal {T} \) as a random tag exactly as the previous game did. However, now, if \(b[\textsf{m}] = 0\), the game samples \((h,{\textsf{td}}) \leftarrow {\textsf{Shift}} ({\textsf{par}} ',g)\) and sets \(h[\textsf{m}]:=h\). Further, it stores \({\textsf{td}} \) in a map tr as \(tr[\textsf{m}] := {\textsf{td}} \). Clearly, \({{\textbf {G}}} _1\) and \({{\textbf {G}}} _2\) are indistinguishable by the \(\varepsilon _{\textsf{t}}\)-translatability of \({\textsf{TLF}} \). Concretely, one can easily see that \( \left| {\Pr \left[ {{{\textbf {G}}} _1 \Rightarrow 1}\right] }-{\Pr \left[ {{{\textbf {G}}} _2 \Rightarrow 1}\right] }\right| \le Q_\textsf{H}\varepsilon _{\textsf{t}}. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{3}{} \mathbf{:}}\) In this game, we change how the values \({\textsf{pk}} _i^{(2)}\) are computed by the signing oracle. To recall, in the commitment phase of the signing protocol, the signing oracle for user \(i\in [n]\) in \({{\textbf {G}}} _2\) would compute the value \({\textsf{pk}} _i^{(2)} := {\textsf{T}} (h,{\textsf{sk}} _i)\), where \(h = \textsf{H}(\textsf{m})\) and \(\textsf{m}\) is the message to be signed. Also, the value \({\textsf{pk}} _i^{(2)} := {\textsf{T}} (h,{\textsf{sk}} _i)\) is recomputed in the opening phase of the signing protocol and included in the output sent to the adversary. From \({{\textbf {G}}} _3\) on, \({\textsf{pk}} _i^{(2)}\) is computed differently, namely, as \({\textsf{pk}} _i^{(2)}:= {\textsf{Translate}} (tr[\textsf{m}], {\textsf{pk}} _i)\). Observe that if the game did not abort, we know that \(b[\textsf{m}] = 0\) (see \({{\textbf {G}}} _1\)) and therefore h has been generated as \((h,{\textsf{td}}) \leftarrow {\textsf{Shift}} ({\textsf{par}} ',g)\) where \(tr[\textsf{m}] = {\textsf{td}} \). Thus, it follows from the translatability of \({\textsf{TLF}} \), or more concretely from the translation completeness, that the view of \(\mathcal {A}\) is not changed. We get \( {\Pr \left[ {{{\textbf {G}}} _2 \Rightarrow 1}\right] } = {\Pr \left[ {{{\textbf {G}}} _3 \Rightarrow 1}\right] }. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{4}{} \mathbf{:}}\) In this game, we let the game abort if \(({\textsf{par}} ',g)\notin \textsf{Reg}\), where \(\textsf{Reg}\) is the set from the regularity definition of \({\textsf{TLF}} \). By regularity of \({\textsf{TLF}} \), we have \( \left| {\Pr \left[ {{{\textbf {G}}} _3 \Rightarrow 1}\right] }-{\Pr \left[ {{{\textbf {G}}} _4 \Rightarrow 1}\right] }\right| \le \varepsilon _{\textsf{r}}. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{5}{} \mathbf{:}}\) In this game, we change the signing oracle again. Specifically, we change the commitment and opening phase. Recall that until now, in the commitment phase for an honest party i in a signer set \(S \subseteq [n]\) and message \(\textsf{m}\), an element \(r_i \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) is sampled and the party sends a commitment \(\textsf{com} _i := \hat{\textsf{H}}(S,i,R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)})\) for \(R_{i}^{(1)} := {\textsf{T}} (g,r_i), R_{i}^{(2)} := {\textsf{T}} (h,r_i)\), and \({\textsf{pk}} _i^{(2)}:= {\textsf{Translate}} (tr[\textsf{m}], {\textsf{pk}} _i)\). As before, h is defined as \(h := \textsf{H}(\textsf{m})\). Later, in the opening phase, the party sends \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\). Now, we change this as follows: The signing oracle computes \({\textsf{pk}} _i^{(2)}\) as in \({{\textbf {G}}} _4\), but it does not compute \(R_{i}^{(1)},R_{i}^{(2)}\) and instead sends a random commitment \(\textsf{com} _i \xleftarrow {{\!\!\tiny \$}}{{\{0,1\}}^{}} ^{2\lambda }\) on behalf of party i. It also inserts an entry \((S,i,\textsf{com} _i)\) into a list \(\textsf{Sim} \) that keeps track of these simulated commitments. If there is already an \((S',i')\ne (S,i)\) such that \((S',i',\textsf{com} _i) \in \textsf{Sim} \), then the game aborts. Note that there are two situations where the preimage of \(\textsf{com} _i\) has to be revealed. Namely, \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\) has to be given to the adversary in the opening phase, and whenever party i is corrupted the game needs to output \(r_i\). To handle this, consider the opening phase or the case where party i is corrupted before it reaches the opening phase. Here, we let the game sample \(r_i \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and define \(R_{i}^{(1)} := {\textsf{T}} (g,r_i)\) and \(R_{i}^{(2)} := {\textsf{T}} (h,r_i)\). 
Then, the game checks if \(\hat{h}[S,i,R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}] = \bot \). If it is not, the game aborts. Otherwise, it programs \(\hat{h}[S,i,R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}] := \textsf{com} _i\) and continues. That is, in the opening phase it would output \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\), and during a corruption, it would output \(r_i\) as part of its state. If a corruption occurs after the opening phase, then \(r_i\) has already been defined, and corruption is handled as before. Clearly, the view of \(\mathcal {A}\) is only affected by this change if \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\) matches a previous query of \(\mathcal {A}\) or the same commitment has been sampled by the game twice. The latter event occurs only with probability \(Q_{S}^2/2^{2\lambda }\) by a union bound over all pairs of queries. To bound the former event, we use the regularity of \({\textsf{TLF}} \), which implies that \(R_{i}^{(1)}\) is uniform over the range \(\mathcal {R} \). Now, for each fixed pair of signing query and random oracle query, the random oracle query matches \(R_{i}^{(1)},R_{i}^{(2)},{\textsf{pk}} _{i}^{(2)}\) with probability at most \(1/|\mathcal {R} |\). Thus, the former event occurs only with probability \(Q_{S}Q_{\hat{\textsf{H}}}/|\mathcal {R} |\). We get \( \left| {\Pr \left[ {{{\textbf {G}}} _4 \Rightarrow 1}\right] }-{\Pr \left[ {{{\textbf {G}}} _5 \Rightarrow 1}\right] }\right| \le {Q_{S}Q_{\hat{\textsf{H}}}}/{|\mathcal {R} |} + {Q_{S}^2}/{2^{2\lambda }}. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{6}{} \mathbf{:}}\) In this game, we rule out collisions for random oracle \(\hat{\textsf{H}}\). Namely, the game aborts if there are \(x\ne x'\) such that \(\hat{h}[x] = \hat{h}[x'] \ne \bot \). Clearly, we have \( \left| {\Pr \left[ {{{\textbf {G}}} _5 \Rightarrow 1}\right] }-{\Pr \left[ {{{\textbf {G}}} _6 \Rightarrow 1}\right] }\right| \le \frac{Q_{\hat{\textsf{H}}}^2}{2^{2\lambda }}. \) Subsequent games will internally make use of an algorithm \(\hat{\textsf{H}}^{-1}\). On input y the algorithm searches for an x such that \(\hat{h}[x] = y\). If no such x is found, or if multiple x are found, then the algorithm returns \(\bot \). Note that if multiple preimages exist, the game aborts anyway, and so we can assume that if there is a preimage of y, then this preimage is uniquely determined by y.
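The lazy simulation of \(\hat{\textsf{H}}\) and the preimage search \(\hat{\textsf{H}}^{-1}\) can be sketched as follows (our own sketch with hypothetical helper names): the oracle samples fresh \(2\lambda \)-bit strings on demand, and the inverse returns the unique recorded preimage, or \(\bot \) (here None) when none or several exist; the multiple-preimage case is exactly the collision that \({{\textbf {G}}} _6\) rules out.

```python
# Minimal sketch (hypothetical helper names, ours) of a lazily sampled
# random oracle H-hat together with the preimage search H-hat^{-1}:
# return the unique recorded preimage, or None (bot) if the image was
# never output or has several preimages (the collision case).
import secrets

h_hat = {}                                 # the map \hat{h}[.]

def H_hat(x):
    if x not in h_hat:
        h_hat[x] = secrets.token_hex(32)   # fresh 2*lambda-bit string
    return h_hat[x]

def H_hat_inv(y):
    preimages = [x for x, v in h_hat.items() if v == y]
    return preimages[0] if len(preimages) == 1 else None

x = ("S", 1, "R1", "R2", "pk2")
y = H_hat(x)
assert H_hat_inv(y) == x
assert H_hat_inv("unqueried image") is None
```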

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{7}{} \mathbf{:}}\) In this game, we introduce a list \(\textsf{Pending} \) and associated algorithms \(\textsf{UpdatePending} \) and \(\textsf{AddToPending} \) to manage this list. Intuitively, the list keeps track of honest users i and signing sessions sid for which the game cannot yet extract preimages of all commitments sent in the commitment phase. More precisely, the list contains a tuple \((sid,i,\mathcal {M} _1)\) if and only if the following two conditions hold:

  • The opening phase oracle \({\textsc {Sig}} _1(sid,i,\mathcal {M} _1)\) has been called with valid inputs, i.e., for this query the game did not output \(\bot \) due to \(\textsf{Allowed}(sid,i,1,\mathcal {M} _1) = 0\), and at that point the following was true: For every commitment \(\textsf{com} _j\) in \(\mathcal {M} _1\) such that \((S,j,\textsf{com} _j) \notin \textsf{Sim} \), we have \(\hat{\textsf{H}}^{-1}(\textsf{com} _j) \ne \bot \) and with \((S',k,R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)}):= \hat{\textsf{H}}^{-1}(\textsf{com} _j)\) we have \(S' = S\) and \(k = j\), where S is the signer set associated with sid.

  • There is a commitment \(\textsf{com} _j\) in \(\mathcal {M} _1\) such that \(\hat{\textsf{H}}^{-1}(\textsf{com} _j) = \bot \).

To ensure that the list satisfies this invariant, we add a triple \((sid,i,\mathcal {M} _1)\) to \(\textsf{Pending} \) when the first condition holds. This is done by algorithm \(\textsf{AddToPending} \). Concretely, whenever \(\mathcal {A}\) calls \({\textsc {Sig}} _1(sid,i,\mathcal {M} _1)\), the oracle returns \(\bot \) in case \(\textsf{Allowed}(sid,i,1,\mathcal {M} _1) = 0\). If \(\textsf{Allowed}(sid,i,1,\mathcal {M} _1) = 1\), the game immediately calls \(\textsf{AddToPending} (sid,i,1,\mathcal {M} _1)\), which checks the first condition of the invariant and inserts the triple \((sid,i,\mathcal {M} _1)\) into \(\textsf{Pending} \) if it holds. Then, the game continues the simulation of \({\textsc {Sig}} _1\) as before. Further, we invoke algorithm \(\textsf{UpdatePending} \) whenever the map \(\hat{h}\) is changed, i.e., during queries to \(\hat{\textsf{H}}\), and in corruption and signing oracles (see \({{\textbf {G}}} _5\)). On every invocation, the algorithm does the following:

  1.

    Initialize an empty list \(\textsf{New}\).

  2.

    Iterate through all entries \((sid,i,\mathcal {M} _1)\) in \(\textsf{Pending} \), and do the following:

    (a)

      Check if the entry has to be removed because it is violating the invariant. That is, check if for all j in the signer set S associated with session sid, we have \(\hat{\textsf{H}}^{-1}(\textsf{com} _j) \ne \bot \), where \(\mathcal {M} _1 = (\textsf{com} _j)_{j\in S}\). If this is not the case, skip this entry and keep it in \(\textsf{Pending} \).

    (b)

      We know that for all indices \(j \in S\), the value \((S'_j,k_j,R_{j}^{(1)},R_{j}^{(2)},{\textsf{pk}} _{j}^{(2)}) = \hat{\textsf{H}}^{-1}(\textsf{com} _j)\) exists. Further, it must hold that \(S'_j = S\) and \(k_j = j\), as otherwise this entry would not have been added to \(\textsf{Pending} \) in the first place. Remove the entry from \(\textsf{Pending} \), and determine the combined nonces and secondary public key

      $$ R^{(1)} = \sum _{j\in S} R_{j}^{(1)},~~R^{(2)} = \sum _{j\in S} R_{j}^{(2)},~~{\textsf{pk}} ^{(2)} = \sum _{j\in S}\ell _{j,S}{\textsf{pk}} _{j}^{(2)}. $$
    (c)

      Let \(\textsf{m}\) be the message associated with the session sid.

    (d)

      If \((R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m}) \notin \textsf{New}\) but \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \ne \bot \), abort the execution of the entire game (see bad event \(\textsf{Defined}\) below).

    (e)

      Otherwise, sample \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \xleftarrow {{\!\!\tiny \$}}\mathcal {S} \) and insert the tuple \((R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m})\) into \(\textsf{New}\).

To summarize, this algorithm removes all entries violating the invariant from the list \(\textsf{Pending} \). For each such entry that is removed, the algorithm computes the combined nonces \(R^{(1)},R^{(2)}\) and secondary public key \({\textsf{pk}} ^{(2)}\). Roughly, it aborts the execution if random oracle \(\bar{\textsf{H}}\) is already defined on these inputs. List \(\textsf{New}\) ensures that the abort is not triggered if the algorithm itself programmed \(\bar{h}\) in a previous iteration within the same invocation. In addition to algorithm \(\textsf{UpdatePending} \), we introduce the following events, on which the game aborts its execution:

  • Event \(\textsf{BadQuery}\): This event occurs if, for a random oracle query to \(\hat{\textsf{H}}\) for which the hash value is not yet defined and freshly sampled as \(\textsf{com} \xleftarrow {{\!\!\tiny \$}}{{\{0,1\}}^{}} ^{2\lambda }\), there is an entry \((sid,i,\mathcal {M} _1)\) in \(\textsf{Pending} \) such that \(\textsf{com} \) is in \(\mathcal {M} _1\).

  • Event \(\textsf{Defined}\): This event occurs if the execution is aborted during algorithm \(\textsf{UpdatePending} \).

For shorthand notation, we set \(\textsf{Bad} := \textsf{BadQuery} \vee \textsf{Defined}\). The probability of \(\textsf{BadQuery}\) can be bounded as follows: Fix a random oracle query to \(\hat{\textsf{H}}\) for which the hash value is not yet defined. Fix an entry \((sid,i,\mathcal {M} _1)\). Note that over the entire game, there are at most \(Q_{S}\) of these entries. Further, fix an index \(j \in [t+1]\). The probability that \(\textsf{com} \) collides with the jth entry of \(\mathcal {M} _1\) is clearly at most \(1/2^{2\lambda }\). With a union bound over all triples of queries, entries, and indices, we get that the probability of \(\textsf{BadQuery}\) is at most \(Q_{\hat{\textsf{H}}}Q_{S}(t+1)/2^{2\lambda }\). Next, we bound the probability of \(\textsf{Defined}\) assuming \(\textsf{BadQuery}\) does not occur. Under this assumption, one can easily observe that when an entry is removed from list \(\textsf{Pending} \) and \(R^{(1)} = \sum _{j\in S} R_{j}^{(1)}\) is the combined first nonce, then there is a \(j^* \in S\) such that the game sampled \(R_{j^*}^{(1)}\) just before invoking algorithm \(\textsf{UpdatePending} \). Precisely, it must have set \(R_{j^*}^{(1)} := {\textsf{T}} (g,r)\) for some random \(r \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \). By regularity of \({\textsf{TLF}} \), this means \(R_{j^*}^{(1)}\) is uniform over \(\mathcal {R} \), and this means that the combined first nonce \(R^{(1)}\) is also uniform. Thus, for any fixed entry in \(\textsf{Pending} \), the probability that \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}]\) is already defined when the entry is removed, is at most \(Q_{\hat{\textsf{H}}}/|\mathcal {R} |\). With a union bound over all entries we can now bound the probability of \(\textsf{Defined}\) by \(Q_{\hat{\textsf{H}}}Q_{S}/|\mathcal {R} |\). In combination, we get

$$\begin{aligned} {\Pr \left[ {\textsf{Bad}}\right] } \le {\Pr \left[ {\textsf{BadQuery}}\right] }+{\Pr \left[ { \textsf{Defined} \mid \lnot \textsf{BadQuery}}\right] } \le \frac{Q_{\hat{\textsf{H}}}Q_{S}(t+1)}{2^{2\lambda }}+\frac{Q_{\hat{\textsf{H}}}Q_{S}}{|\mathcal {R} |}. \end{aligned}$$

and thus

$$\begin{aligned} \left| {\Pr \left[ {{{\textbf {G}}} _6 \Rightarrow 1}\right] }-{\Pr \left[ {{{\textbf {G}}} _7 \Rightarrow 1}\right] }\right| &\le {\Pr \left[ {\textsf{Bad}}\right] } \le \frac{Q_{\hat{\textsf{H}}}Q_{S}(t+1)}{2^{2\lambda }}+\frac{Q_{\hat{\textsf{H}}}Q_{S}}{|\mathcal {R} |}. \end{aligned}$$

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{8}{} \mathbf{:}}\) In this game, we change algorithm \(\textsf{UpdatePending} \). Specifically, we change what we insert into list \(\textsf{New}\). Recall from the previous game that when we removed an entry \((sid,i,\mathcal {M} _1)\) from \(\textsf{Pending} \), we aborted the game if \((R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m}) \notin \textsf{New}\) but \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \ne \bot \). Otherwise, we inserted tuples \((R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m})\). Now, we instead abort if \((S,R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m}) \notin \textsf{New}\) but \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \ne \bot \), and otherwise insert \((S,R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m})\), where S is the signer set associated with session sid. One can see that the two games can only differ if for two entries \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) that are removed from \(\textsf{Pending} \) in the same invocation of \(\textsf{UpdatePending} \), the signer sets S and \(S'\) differ but the respective tuples \((R^{(1)},R^{(2)},{\textsf{pk}} ^{(2)},\textsf{m})\) and \((R^{'(1)},R^{'(2)},{\textsf{pk}} ^{'(2)},\textsf{m}')\) are the same and \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \ne \bot \). In this case, game \({{\textbf {G}}} _8\) would abort, but game \({{\textbf {G}}} _7\) would not. We argue that this can not happen: Assume that two entries \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) with associated signer sets S and \(S'\) are removed from \(\textsf{Pending} \). 
Then, we know that algorithm \(\textsf{UpdatePending} \) has been invoked because the game programmed \(\hat{h}\) at some point, say \(\hat{h}[S_*,j_*,R_*^{(1)},R_{*}^{(2)},{\textsf{pk}} _{*}^{(2)}] := \textsf{com} _*\), such that \(\textsf{com} _*\) is in both \(\mathcal {M} _1\) and \(\mathcal {M} _1'\). Thus, the algorithm only removes the entry \((sid,i,\mathcal {M} _1)\) from the list if the first component of \(\hat{\textsf{H}}^{-1}(\textsf{com} _*)\) is S, i.e., if \(S_* = S\). Similarly, it only removes the entry \((sid',i',\mathcal {M} _1')\) if the first component of \(\hat{\textsf{H}}^{-1}(\textsf{com} _*)\) is \(S'\), i.e., if \(S_* = S'\). Thus, it only removes both if \(S = S_* = S'\). With that, we have \( {\Pr \left[ {{{\textbf {G}}} _7 \Rightarrow 1}\right] } = {\Pr \left[ {{{\textbf {G}}} _8 \Rightarrow 1}\right] }. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{9}{} \mathbf{:}}\) We introduce two more algorithms. Intuitively, these allow us to group tuples of the form \((sid,i,\mathcal {M} _1)\) that have been inserted into list \(\textsf{Pending} \) into equivalence classes. To be clear, the relation is defined on all triples in \(\textsf{Pending} \) and on all triples that have already been removed from \(\textsf{Pending} \), but not on any other entries. The intuition, roughly, is that such triples lead to the same combined nonces if and only if they are in the same equivalence class. The effect of this will be that we know the challenge just from the tuple \((sid,i,\mathcal {M} _1)\). We now turn to the details. We introduce an algorithm \(\textsf{Equivalent} \) that takes as input two triples \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) and decides whether they are equivalent as follows:

  1.

    Let \(S,S'\) and \(\textsf{m},\textsf{m}'\) be the signer sets and messages associated with sessions sid and \(sid'\), respectively. If \(S \ne S'\) or \(\textsf{m}\ne \textsf{m}'\), the triples are not equivalent.

  2.

    Thus, assume \(S = S'\) and write \(\mathcal {M} _1 = (\textsf{com} _j)_{j \in S}\) and \(\mathcal {M} _1' = (\textsf{com} '_j)_{j \in S}\). Let \(F\subseteq S\) (resp. \(F'\subseteq S'\)) be the set of indices \(j \in S\) (resp. \(j \in S'\)) such that \(\hat{\textsf{H}}^{-1}(\textsf{com} _j) = \bot \) (resp. \(\hat{\textsf{H}}^{-1}(\textsf{com} '_j) = \bot \)). If \((\textsf{com} _j)_{j\in F} \ne (\textsf{com} '_j)_{j\in F'}\), then the triples are not equivalent.

  3.

    Define \(\bar{F} := S \setminus F\) and \(\bar{F}' := S \setminus F'\). For each \(j\in \bar{F}\), we know that the value \((\tilde{S}_j,k_j,R_{j}^{(1)},R_{j}^{(2)},{\textsf{pk}} _{j}^{(2)}) = \hat{\textsf{H}}^{-1}(\textsf{com} _j)\) exists. Similarly, for each \(j \in \bar{F}'\), we know that the value \((\tilde{S}_j',k_j',{R_{j}'}^{(1)},{R_{j}'}^{(2)},{{\textsf{pk}} _{j}'}^{(2)}) = \hat{\textsf{H}}^{-1}(\textsf{com} '_j)\) exists. With these, we can define partially combined nonces and secondary keys

    $$\begin{aligned} \begin{array}{lll} \bar{R}^{(1)} := \sum _{j\in \bar{F}} R_{j}^{(1)}, & \bar{R}^{(2)} := \sum _{j\in \bar{F}} R_{j}^{(2)}, & \bar{{\textsf{pk}}}^{(2)} := \sum _{j\in \bar{F}}\ell _{j,S}{\textsf{pk}} _{j}^{(2)}, \\ \bar{R}^{'(1)} := \sum _{j\in \bar{F}'} R_{j}^{'(1)}, & \bar{R}^{'(2)} := \sum _{j\in \bar{F}'} R_{j}^{'(2)}, & \bar{{\textsf{pk}}}^{'(2)} := \sum _{j\in \bar{F}'}\ell _{j,S}{\textsf{pk}} _{j}^{'(2)}. \end{array} \end{aligned}$$

    The triples are not equivalent, if \((\bar{R}^{(1)},\bar{R}^{(2)},\bar{{\textsf{pk}}}^{(2)}) \ne ({\bar{R}}^{'(1)},{\bar{R}}^{'(2)},{\bar{{\textsf{pk}}}}^{'(2)})\). Otherwise, they are equivalent.
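The three-step check above can be sketched in Python over a toy group, with group elements modeled as exponents modulo a small prime. All names are hypothetical stand-ins; in particular, `preimage` models the inverse map \(\hat{\textsf{H}}^{-1}\) and commitments are opaque strings:

```python
P = 101  # toy prime; group elements are modeled as exponents mod P

def lagrange(j, S, p=P):
    """Lagrange coefficient ell_{j,S} for interpolation at 0."""
    num = den = 1
    for k in S:
        if k != j:
            num = num * (-k) % p
            den = den * (j - k) % p
    return num * pow(den, -1, p) % p

def equivalent(t1, t2, preimage, p=P):
    """Decide equivalence of two triples, here modeled as (S, m, coms).
    `preimage` maps a commitment to (S~, k, R1, R2, pk2), or is undefined."""
    S1, m1, coms1 = t1
    S2, m2, coms2 = t2
    if S1 != S2 or m1 != m2:                  # step 1: signer sets and messages
        return False
    F1 = [j for j in S1 if coms1[j] not in preimage]
    F2 = [j for j in S2 if coms2[j] not in preimage]
    if [coms1[j] for j in F1] != [coms2[j] for j in F2]:  # step 2: unopened commitments
        return False
    def partial(coms, F, S):                  # step 3: partially combined values
        R1 = R2 = pk2 = 0
        for j in S:
            if j not in F:
                _, _, r1, r2, pkj2 = preimage[coms[j]]
                R1, R2 = (R1 + r1) % p, (R2 + r2) % p
                pk2 = (pk2 + lagrange(j, S, p) * pkj2) % p
        return (R1, R2, pk2)
    return partial(coms1, F1, S1) == partial(coms2, F2, S2)
```

For instance, two triples that agree on the signer set, message, and unopened commitments, and whose opened commitments carry the same nonces, are classified as equivalent even if the opened commitment strings differ.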

In summary, two triples are equivalent if their signer sets, messages, partially combined nonces and secondary public keys, and remaining commitments match. It is clear that at any fixed point in time during the experiment, this is indeed an equivalence relation. In the following two claims, we argue that this relation is preserved over time. For that, we first make some preliminary observations, using notation as in the definition of equivalence above:

  1.

    The equivalence relation can potentially only change when oracle \(\hat{\textsf{H}}\) is updated during queries to \({\textsc {Sig}} _1\) (i.e., the opening phase) or during corruption queries, which may make the sets F and \(F'\) change. This is because triples are only inserted into \(\textsf{Pending} \) if the only commitments without preimages are simulated, and the preimages of these are only set in such calls (see \({{\textbf {G}}} _7\)).

  2.

    The sets F and \(F'\) can only get smaller over time, as we assume that no collisions occur.

  3.

    When the oracle is programmed during such calls, say by setting \(\hat{h}[S_*,j_*,R_*^{(1)},R_{*}^{(2)},{\textsf{pk}} _{*}^{(2)}] := \textsf{com} _*\), then it must hold that \((S_*,j_*,\textsf{com} _*) \in \textsf{Sim} \). In particular, if in this case some j is removed from F (or \(F'\)) because \(\textsf{com} _j\) (or \(\textsf{com} '_j\)) now has a preimage, then it must hold that \(\textsf{com} _* = \textsf{com} _j\) and \(j_* = j\). This is because otherwise, if \(j \ne j_*\), then we would have \((\tilde{S},j,\textsf{com} _*) \in \textsf{Sim} \) for some \(\tilde{S}\) (because the entry was added to \(\textsf{Pending} \)) and \((S_*,j_*,\textsf{com} _*) \in \textsf{Sim} \), and such a collision was ruled out in \({{\textbf {G}}} _5\).

  4.

    Again, assume that the oracle is programmed during such calls by setting \(\hat{h}[S_*,j_*,R_*^{(1)},R_{*}^{(2)},{\textsf{pk}} _{*}^{(2)}] := \textsf{com} _*\). Now, assume that both F and \(F'\) change. Then, we know (because of the previous observation), that the same \(j = j_*\) is removed from both F and \(F'\), and \(\textsf{com} _j = \textsf{com} _* = \textsf{com} '_j\) is removed from both \((\textsf{com} _j)_{j\in F}\) and \((\textsf{com} '_j)_{j\in F'}\). Thus, these lists are the same before the update if and only if they are the same after the update.

  5.

    In the setting of the previous observation, denote the point in time before the update as \(t_0\), and the point in time after the update as \(t_1\). Further, denote the associated partially combined nonces and secondary public keys at time \(t_b\) for \(b \in \{0,1\}\) by

    $$\begin{aligned} \bar{R}_{b}^{(1)},~\bar{R}_{b}^{(2)},~\bar{{\textsf{pk}}}_{b}^{(2)}, \text { and } \bar{R}_{b}^{'(1)},~\bar{R}_{b}^{'(2)},~\bar{{\textsf{pk}}}_{b}^{'(2)}. \end{aligned}$$

    Now, we observe that

    $$ \bar{R}_{1}^{(1)} = \bar{R}_{0}^{(1)} + R_*^{(1)},~~\bar{R}_{1}^{(2)} = \bar{R}_{0}^{(2)} + R_*^{(2)},~~\bar{{\textsf{pk}}}_{1}^{(2)} = \bar{{\textsf{pk}}}_{0}^{(2)} + \ell _{j_*,S_*}{\textsf{pk}} _{*}^{(2)}. $$

    The same holds for \(\bar{R}_{b}^{'(1)}\), \(\bar{R}_{b}^{'(2)}\), and \(\bar{{\textsf{pk}}}_{b}^{'(2)}\). Therefore, we see that

    $$\begin{aligned} &~~(\bar{R}_{0}^{(1)},\bar{R}_{0}^{(2)},\bar{{\textsf{pk}}}_{0}^{(2)})=(\bar{R}_{0}^{'(1)},\bar{R}_{0}^{'(2)},\bar{{\textsf{pk}}}_{0}^{'(2)}) \\ \text {if and only if} &~~(\bar{R}_{1}^{(1)},\bar{R}_{1}^{(2)},\bar{{\textsf{pk}}}_{1}^{(2)})=(\bar{R}_{1}^{'(1)},\bar{R}_{1}^{'(2)},\bar{{\textsf{pk}}}_{1}^{'(2)}). \end{aligned}$$

Now, we show that the equivalence relation does not change over time, using our notation from above and the observations we made.

Equivalence Claim 1. If two triples \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) are equivalent at some point in time, then they stay equivalent for the rest of the game.

Proof of Equivalence Claim 1. Both signer set and message do not change over time. For the other components that determine whether the triples are equivalent, we consider two cases: Either, on an update of \(\hat{\textsf{H}}\), neither of the lists \((\textsf{com} _j)_{j\in F}\) and \((\textsf{com} '_j)_{j\in F'}\) changes, in which case the triples trivially stay equivalent. In the other case, both of them change, as the lists \((\textsf{com} _j)_{j\in F}\) and \((\textsf{com} '_j)_{j\in F'}\) are the same before the update. Now, it easily follows from our last observation above that the triples stay equivalent.

Equivalence Claim 2. If two triples \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) are not equivalent at some point in time, then the probability that they become equivalent later is negligible. Concretely, if \(\textsf{Converge}\) is the event that any two non-equivalent triples become equivalent at some point in time, then

$$ {\Pr \left[ {\textsf{Converge}}\right] } \le \frac{Q_{S}^2(Q_{S}+t)}{|\mathcal {R} |}. $$

Proof of Equivalence Claim 2. Clearly, if \(\textsf{m}\ne \textsf{m}'\) or \(S \ne S'\), then the triples will stay non-equivalent. Now, consider an update of \(\hat{\textsf{H}}\) that is caused by a query to \({\textsc {Sig}} _1\) or the corruption oracle and will potentially change the equivalence relation. We consider two cases: In the first case, the lists \((\textsf{com} _j)_{j\in F}\) and \((\textsf{com} '_j)_{j\in F'}\) are the same before the update. In this case, they either do not change, in which case the triples trivially stay non-equivalent, or they both change, in which case it follows from our last observation above that they stay non-equivalent. In the second case, the lists \((\textsf{com} _j)_{j\in F}\) and \((\textsf{com} '_j)_{j\in F'}\) are different before the update. If they stay different after the update, the triples stay non-equivalent. If they become the same after the update, this means that an entry was removed from only one of them, say \(j = j_*\) from F and thus \(\textsf{com} _j = \textsf{com} _*\) from \((\textsf{com} _j)_{j\in F}\). For this case, use notation \(\bar{R}_{b}^{(1)}\) and \(\bar{R}_{b}^{'(1)}\) as in the last observation above and notice that \(\bar{R}_{1}^{'(1)} = \bar{R}_{0}^{'(1)}\) because \((\textsf{com} '_j)_{j\in F'}\) is not changed during the update. On the other hand, \((\textsf{com} _j)_{j\in F}\) is changed by the update and we have \(\bar{R}_{1}^{(1)} = \bar{R}_{0}^{(1)} + R_*^{(1)}\). Thus, if the triples become equivalent, we must have

$$ \bar{R}_{0}^{'(1)} = \bar{R}_{1}^{'(1)} = \bar{R}_{1}^{(1)} = \bar{R}_{0}^{(1)} + R_*^{(1)}. $$

Notice that \(R_*^{(1)}\) is sampled in the signing or corruption oracle by sampling some \(r_* \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and setting \(R_*^{(1)} = {\textsf{T}} (g,r_*)\). Thus, \(R_*^{(1)}\) is uniformly distributed over \(\mathcal {R} \) by the regularity of \({\textsf{TLF}} \) and independent of \(\bar{R}_{0}^{'(1)}\) and \(\bar{R}_{0}^{(1)}\), which means that this equation holds with probability at most \(1/|\mathcal {R} |\). Taking a union bound over all pairs of triples and all queries to the signing oracle and the corruption oracle, the claim follows.
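The counting step at the heart of this bound can be checked concretely: for fixed left- and right-hand sides, exactly one choice of \(R_*^{(1)}\) in a group of size \(|\mathcal {R} |\) satisfies the equation, so a uniformly random nonce hits it with probability \(1/|\mathcal {R} |\). A minimal sketch in a toy additive group (all numbers arbitrary):

```python
# The equation \bar{R}'_0 = \bar{R}_0 + R_* has exactly one solution R_* in the
# group, so a uniformly random R_* satisfies it with probability 1/|R|.
p = 101                     # toy group order |R|
R0, R0p = 17, 64            # arbitrary fixed partially combined nonces
solutions = [r for r in range(p) if (R0 + r) % p == R0p]
assert solutions == [(R0p - R0) % p]   # unique solution, hence probability 1/p
```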

With our equivalence relation at hand, we introduce an algorithm \(\textsf{GetChallenge} \) that behaves as a random oracle on equivalence classes. That is, it assigns each class a random challenge \(c \xleftarrow {{\!\!\tiny \$}}\mathcal {S} \) in a lazy manner. More precisely, it gets as input a triple \((sid,i,\mathcal {M} _1)\) and checks if a triple in the same equivalence class is already assigned a challenge c. This is done using algorithm \(\textsf{Equivalent} \). If so, it returns this challenge c. If not, it assigns a random challenge \(c \xleftarrow {{\!\!\tiny \$}}\mathcal {S} \) to the triple \((sid,i,\mathcal {M} _1)\).
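This lazy assignment can be pictured as follows; the `equivalent` predicate (standing in for algorithm \(\textsf{Equivalent} \)) and the toy scalar set are hypothetical placeholders:

```python
import random

class GetChallenge:
    """Lazy random oracle on equivalence classes (sketch)."""
    def __init__(self, equivalent, num_scalars):
        self.equivalent = equivalent     # predicate playing the role of Equivalent
        self.assigned = []               # list of (representative triple, challenge)
        self.num_scalars = num_scalars   # |S|, size of the scalar set

    def __call__(self, triple):
        for rep, c in self.assigned:
            if self.equivalent(rep, triple):    # class already has a challenge
                return c
        c = random.randrange(self.num_scalars)  # fresh challenge c <-$- S
        self.assigned.append((triple, c))
        return c
```

Note that storing one representative per class suffices: by the stability claims above, membership queries against the representative answer consistently over time.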

These two new algorithms are used in the following way: Recall that in previous games, algorithm \(\textsf{UpdatePending} \) would program \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] \xleftarrow {{\!\!\tiny \$}}\mathcal {S} \) whenever an entry \((sid,i,\mathcal {M} _1)\) is removed from \(\textsf{Pending} \) and no abort occurs, where \({\textsf{pk}} ^{(2)},R^{(1)}, R^{(2)},\textsf{m}\) are the corresponding secondary public keys, combined nonces, and messages. Now, instead of sampling \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}]\) at random, the algorithm sets \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] := \textsf{GetChallenge} (sid,i,\mathcal {M} _1)\). We need to argue that this way of programming the random oracle does not change the view of the adversary. Concretely, all we need to argue is that two different inputs \(x\ne x'\) to random oracle \(\bar{\textsf{H}}\) get independently sampled outputs. Clearly, it is sufficient to consider inputs of the form

$$\begin{aligned} x = ({\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}),~~ x' = ({\textsf{pk}},{\textsf{pk}} ^{'(2)},R^{'(1)},R^{'(2)},\textsf{m}'), \end{aligned}$$

which both are covered by the newly introduced programming in algorithm \(\textsf{UpdatePending} \). Let \((sid,i,\mathcal {M} _1)\) be the entry removed from \(\textsf{Pending} \) associated with x and \((sid',i',\mathcal {M} _1')\) be the entry removed from \(\textsf{Pending} \) associated with \(x'\). Consider the point in time where the second entry, say \((sid',i',\mathcal {M} _1')\) has been removed. One can see that the outputs \(\bar{\textsf{H}}(x)\) and \(\bar{\textsf{H}}(x')\) are independent, unless at this point in time \((sid,i,\mathcal {M} _1)\) and \((sid',i',\mathcal {M} _1')\) were equivalent. However, by definition of equivalence (algorithm \(\textsf{Equivalent} \)), them being equivalent would mean that \(\textsf{m}= \textsf{m}'\) and \(({\textsf{pk}} ^{(2)},R^{(1)},R^{(2)}) = ({\textsf{pk}} ^{'(2)},R^{'(1)},R^{'(2)})\), as the sets F and \(F'\) are both empty because both entries have been removed from \(\textsf{Pending} \). Thus, we would have \(x = x'\). This shows that the distribution of random oracle outputs does not change, and so we have \( {\Pr \left[ {{{\textbf {G}}} _8 \Rightarrow 1}\right] } = {\Pr \left[ {{{\textbf {G}}} _9 \Rightarrow 1}\right] }. \)

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{10}{} \mathbf{:}}\) In this game, we change the signing oracle and corruption oracle. Roughly, we use an honest-verifier zero-knowledge-style simulation to simulate signing without secret keys. Intuitively, we can do that because we now know the challenge already in the opening phase, before fixing nonces. More precisely, recall that until now, signers in the opening phase, i.e., on a query \({\textsc {Sig}} _1(sid,i,\mathcal {M} _1)\), sampled a random \(r_i \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and set \(R_{i}^{(1)} := {\textsf{T}} (g,r_i)\) and \(R_{i}^{(2)} := {\textsf{T}} (h,r_i)\). Later, in the response phase, the signer sent \(s_i := c \cdot \ell _{i,S} \cdot {\textsf{sk}} _i + r_i\), where \(c := \bar{\textsf{H}}({\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m})\) and \({\textsf{pk}} ^{(2)},R^{(1)},R^{(2)}\) are the combined secondary public key and nonces. Additionally, when the signer is corrupted, it has to send \(r_i\) as part of its state. We change this as follows: In the opening phase, consider two cases: First, if \((sid,i,\mathcal {M} _1)\) has not been added to the list \(\textsf{Pending} \), then the signer sets \(\tilde{c} := 0\). Observe that in this case, we can assume that the signer never reaches the response phase for this session due to our changes in \({{\textbf {G}}} _6\) and \({{\textbf {G}}} _7\). Otherwise, it sets \(\tilde{c} := \textsf{GetChallenge} (sid,i,\mathcal {M} _1)\). In both cases, the signer samples \(s_i \xleftarrow {{\!\!\tiny \$}}\mathcal {D} \) and sets \(R_{i}^{(1)} := {\textsf{T}} (g,s_i)-\tilde{c}\cdot \ell _{i,S} \cdot {\textsf{pk}} _i\) and \(R_{i}^{(2)} := {\textsf{T}} (h,s_i)-\tilde{c}\cdot \ell _{i,S} \cdot {\textsf{pk}} _{i}^{(2)}\). Later, when the signer has to output something in the response phase, it outputs the \(s_i\) that it sampled in the opening phase. 
Further, when the signer is corrupted after the opening phase, it sets \(r_i := s_i - \tilde{c}\cdot \ell _{i,S}\cdot {\textsf{sk}} _i\). To argue indistinguishability, we need to show that \(\tilde{c}\) and \(c = \bar{\textsf{H}}({\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m})\) are the same. This is established as follows:

  1.

    When the signer is queried in the response phase and does not return \(\bot \), we know that the entry \((sid,i,\mathcal {M} _1)\) has been removed from \(\textsf{Pending} \).

  2.

    When it was removed from the list, the combined nonce and secondary public key that have been computed are exactly \(R^{(1)},R^{(2)},\) and \({\textsf{pk}} ^{(2)}\).

  3.

    Therefore, in the invocation of \(\textsf{UpdatePending} \) in which the entry was removed from the list, one of two events happened:

    (a)

      Either the map \(\bar{h}\) has been programmed as \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] := \textsf{GetChallenge} (sid,i,\mathcal {M} _1)\);

    (b)

      Or, the map \(\bar{h}\) has been programmed as \(\bar{h}[{\textsf{pk}},{\textsf{pk}} ^{(2)},R^{(1)},R^{(2)},\textsf{m}] := \textsf{GetChallenge} (sid',i',\mathcal {M} _1')\) for some triple \((sid',i',\mathcal {M} _1')\) with the same associated signer set S (see \({{\textbf {G}}} _8\)) and message \(\textsf{m}\). In this case, we know that \((sid',i',\mathcal {M} _1')\) is equivalent to \((sid,i,\mathcal {M} _1)\) and therefore \(\textsf{GetChallenge} (sid',i',\mathcal {M} _1')\) returned the same as what the query \(\textsf{GetChallenge} (sid,i,\mathcal {M} _1)\) would have returned at that point.

  4.

    Thus, we only need to argue that the output of \(\textsf{GetChallenge} (sid,i,\mathcal {M} _1)\) did not change over time. This follows from our claims about the stability of equivalence classes over time, assuming event \(\textsf{Converge}\) does not occur.

We get

$$ \left| {\Pr \left[ {{{\textbf {G}}} _9 \Rightarrow 1}\right] } - {\Pr \left[ {{{\textbf {G}}} _{10} \Rightarrow 1}\right] }\right| \le {\Pr \left[ {\textsf{Converge}}\right] } \le \frac{Q_{S}^2(Q_{S}+t)}{|\mathcal {R} |}. $$
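The simulation introduced in game \({{\textbf {G}}} _{10}\) can be illustrated with a Schnorr-style sketch over a toy tagged linear function \({\textsf{T}} (\textrm{tag},x) := x\cdot \textrm{tag} \bmod p\). This toy function is trivially insecure and all concrete numbers are arbitrary; the point is only that the simulated transcript and the back-computed corruption state satisfy the same linear relations as the real ones:

```python
import random

p = 101                           # toy prime; the range R = Z_p, written additively
g, h = 3, 7                       # tags of the toy TLF
T = lambda tag, x: (x * tag) % p  # toy tagged linear function (insecure, illustrative)

sk, ell = 42, 5                   # secret share sk_i and Lagrange coefficient ell_{i,S}
pk = T(g, sk)                     # pk_i = T(g, sk_i)

# Real signer: commit to the nonce first, answer s_i after learning c.
r = random.randrange(p)
R_real = T(g, r)
c = random.randrange(p)           # challenge (output of \bar{H} in the real game)
s_real = (c * ell * sk + r) % p

# Simulator: knows the challenge in advance, samples s_i first.
s_sim = random.randrange(p)
R_sim = (T(g, s_sim) - c * ell * pk) % p

# Both transcripts satisfy the verification relation T(g, s) = R + c*ell*pk.
assert T(g, s_real) == (R_real + c * ell * pk) % p
assert T(g, s_sim) == (R_sim + c * ell * pk) % p

# On corruption, the simulator back-computes r_i := s_i - c*ell*sk_i, which is
# consistent with the published nonce.
r_back = (s_sim - c * ell * sk) % p
assert T(g, r_back) == R_sim
```

The same computation applies to the second nonce component with tag h and the secondary key share.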

\(\underline{{\textbf {Game }}{{\textbf {G}}} _{11}{} \mathbf{:}}\) We change the game by no longer assuming that \(({\textsf{par}} ',g)\in \textsf{Reg}\). Clearly, we have \( \left| {\Pr \left[ {{{\textbf {G}}} _{10} \Rightarrow 1}\right] } - {\Pr \left[ {{{\textbf {G}}} _{11} \Rightarrow 1}\right] }\right| \le \varepsilon _{\textsf{r}}. \)

It remains to bound the probability that game \({{\textbf {G}}} _{11}\) outputs 1. Before turning to that, we emphasize the main property we have established via our changes: We no longer need the secret key shares \({\textsf{sk}} _i\) to simulate the signing oracle; we only need them on corruption queries. Due to space constraints, we postpone the final part of the proof to the full version [7] and only give a short summary here. To bound the probability that game \({{\textbf {G}}} _{11}\) outputs 1, we consider two cases depending on the final forgery \((\textsf{m}^*,\sigma ^*)\) with \(\sigma ^* = ({\textsf{pk}} ^{*(2)},c^*,s^*)\). First, if there is no \(x_0 \in \mathcal {D} \) such that \({\textsf{T}} (g,x_0) = {\textsf{pk}} \) and \({\textsf{T}} (h^*,x_0) = {\textsf{pk}} ^{*(2)}\), where \(h^* = \textsf{H}(\textsf{m}^*)\), then we can bound the probability using Lemma 1. Second, if there is such an \(x_0\), then we bound the probability using a reduction against the t-algebraic translation resistance of \({\textsf{TLF}} \). The reduction defines all keys from its initial input by interpolation, simulates the signing oracle without any secret keys as in \({{\textbf {G}}} _{11}\), and uses its own oracle to answer corruption queries. From the forgery and the corruption queries, it can then interpolate a solution for t-algebraic translation resistance. See the full version [7] for details.    \(\square \)

4 Instantiations

In this section, we instantiate our threshold signature scheme by providing concrete tagged linear function families.

4.1 Instantiation from (Algebraic) One-More CDH

We can instantiate the tagged linear function family by mapping a tag \(h \in {\mathbb {G}} \) and a domain element \(x \in \mathbb {Z}_p\) to \(h^x \in {\mathbb {G}} \). Regularity and translatability are easy to show, and algebraic translation resistance follows from an algebraic one-more variant of \(\textsf{CDH}\). We postpone the details to the full version [7].
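The map just described can be sketched with toy parameters. The small modulus below is purely illustrative and offers no security; the real instantiation uses a cryptographic group:

```python
p = 1019                        # toy prime modulus (illustration only, not secure)
g = 2                           # generator of a subgroup of Z_p^*
T = lambda h, x: pow(h, x, p)   # tag h in G, input x in Z_p: T(h, x) = h^x

# Linearity in the input: T(h, x + y) = T(h, x) * T(h, y)
h = pow(g, 5, p)
x, y = 15, 27
assert T(h, x + y) == T(h, x) * T(h, y) % p
```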

4.2 Instantiation from DDH

Here, we present our construction \({\textsf{TLF}} _\textsf{DDH} = ({\textsf{Gen}} _\textsf{DDH},{\textsf{T}} _\textsf{DDH})\) of a tagged linear function family based on the \(\textsf{DDH}\) assumption. Recall that the \(\textsf{DDH}\) assumption states that it is hard to distinguish tuples \(({\mathbb {G}},p,g,h,g^a,h^a)\) from tuples \(({\mathbb {G}},p,g,h,u,v)\), where \({\mathbb {G}} \) is a cyclic group with generator g and prime order p, \(h,u,v\) are random group elements, and \(a \in \mathbb {Z}_p\) is a random exponent. From now on, let \(\textsf{GGen}\) be an algorithm that takes as input \(1^\lambda \) and outputs the description of a group \({\mathbb {G}} \) of prime order p, along with some generator \(g \in {\mathbb {G}} \). Algorithm \({\textsf{Gen}} _\textsf{DDH} \) simply runs \(\textsf{GGen}\) and outputs the description of \({\mathbb {G}} \), p, and g as parameters \({\textsf{par}} \). We make use of the implicit notation for group elements from [39]. That is, we write \([\vec{A}] \in {\mathbb {G}} ^{r\times l}\) for the matrix of group elements with exponents given by the matrix \(\vec{A} \in \mathbb {Z}_p^{r\times l}\). Precisely, if \(\vec{A} = (A_{i,j})_{i\in [r],j\in [l]}\), then \([\vec{A}] := (g^{A_{i,j}})_{i\in [r],j\in [l]}\). With this notation, observe that one can efficiently compute \([\vec{A}\vec{B}]\) for any matrices \(\vec{A} \in \mathbb {Z}_p^{r\times l},~\vec{B} \in \mathbb {Z}_p^{l\times s}\) with matching dimensions from either \([\vec{A}]\) and \(\vec{B}\) or from \(\vec{A}\) and \([\vec{B}]\). For our tagged linear function family, we define the following sets of scalars, tags, and the domain and range, respectively: \(\mathcal {S}:= \mathbb {Z}_p\), \(\mathcal {T}:= {\mathbb {G}} ^{2\times 2}\), \(\mathcal {D}:= \mathbb {Z}_p^2\), \(\mathcal {R}:= {\mathbb {G}} ^2\). Clearly, \(\mathcal {D} \) and \(\mathcal {R} \) are vector spaces over \(\mathcal {S} \). 
For a tag \([\vec{G}] \in {\mathbb {G}} ^{2\times 2}\) and an input \(\vec{x} \in \mathbb {Z}_p^2\), the tagged linear function \({\textsf{T}} _\textsf{DDH} \) is defined as \({\textsf{T}} _\textsf{DDH} ([\vec{G}],\vec{x}) := [\vec{G}\vec{x}] \in {\mathbb {G}} ^2.\) We emphasize that the tag \([\vec{G}]\) is given in the group, and the domain element \(\vec{x}\) is given over the field. It is clear that \({\textsf{T}} _\textsf{DDH} \) can be computed efficiently and that it is a homomorphism. What remains is to show regularity, translatability and algebraic translation resistance.
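The computation of \([\vec{G}\vec{x}]\) from the encoded tag \([\vec{G}]\) and the field-side input \(\vec{x}\) can be sketched as follows. The toy modulus is again purely illustrative (and its multiplicative group does not even have prime order, which is fine for checking the arithmetic):

```python
p = 1019                     # toy prime; group Z_p^* used for illustration only
g = 2

def enc(a):                  # implicit notation: [a] = g^a
    return pow(g, a, p)

def T_ddh(G_enc, x):
    """T([G], x) = [G x], computed from [G] (2x2 group elements) and x over the field."""
    return [(pow(G_enc[i][0], x[0], p) * pow(G_enc[i][1], x[1], p)) % p
            for i in range(2)]

# Sanity check against the exponent-side computation G*x (mod the group order p-1).
A = [[3, 5], [7, 11]]
x = [4, 9]
G_enc = [[enc(a) for a in row] for row in A]
expected = [enc((A[i][0] * x[0] + A[i][1] * x[1]) % (p - 1)) for i in range(2)]
assert T_ddh(G_enc, x) == expected
```

This mirrors the observation above that \([\vec{A}\vec{B}]\) is computable from \([\vec{A}]\) and \(\vec{B}\): each output coordinate is a product of powers of the encoded tag entries.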

Lemma 2

\({\textsf{TLF}} _\textsf{DDH} \) is \(\varepsilon _{\textsf{r}}\)-regular, where \(\varepsilon _{\textsf{r}} \le (p+1)/p^2\).

Lemma 3

\({\textsf{TLF}} _\textsf{DDH} \) is \(\varepsilon _{\textsf{t}}\)-translatable, where \(\varepsilon _{\textsf{t}} \le (3+3p)/p^2\).

Lemma 4

Let \(t \in \mathbb N \) be a number polynomial in \(\lambda \). If the \(\textsf{DDH}\) assumption holds relative to \(\textsf{GGen}\), then \({\textsf{TLF}} _\textsf{DDH} \) is t-algebraic translation resistant.

We postpone the proofs to the full version [7].

5 Concrete Parameters and Efficiency

Our schemes are slightly less efficient than previous schemes, but they are still in a highly practical regime. Given the strong properties that our schemes achieve from conservative assumptions without the algebraic group model, it is natural to pay such a small price in terms of efficiency. We present a more detailed discussion on efficiency in the full version [7].