1 Introduction

Consider n parties where each party independently generates a key pair for a signature scheme. Some time later all n parties want to sign the same message m. A multi-signature scheme [29, 39] is a protocol that enables the n signers to jointly generate a short signature \(\sigma \) on m so that \(\sigma \) convinces a verifier that all n parties signed m. Specifically, the verification algorithm is given as input the n public keys, the message m, and the multi-signature \(\sigma \). The algorithm either accepts or rejects \(\sigma \). The multi-signature \(\sigma \) should be short – its length should be independent of the number of signers n. We define this concept more precisely in the next section, where we also present the standard security model for such schemes [39]. Secure multi-signatures have been constructed from Schnorr signatures (e.g. [9]), from BLS signatures (e.g. [10]), and from many other schemes as discussed in Sect. 1.4.

A more general concept called an aggregate signature scheme [13] lets each of the n parties sign a different message, but all these signatures can be aggregated into a single short signature \(\sigma \). As before, this short signature should convince the verifier that all signers signed their designated message.

Applications to Bitcoin. Multi-signatures and aggregate signatures can be used to shrink the size of the Bitcoin blockchain [41]. In recent work, Maxwell, Poelstra, Seurin, and Wuille [36] suggest using multi-signatures to shrink the transaction data associated with Bitcoin Multisig addresses. Conceptually, a Multisig address is the hash of n public keys \( pk _1,\ldots , pk _n\) along with some number \(t \in \{1,\ldots ,n\}\) called a threshold (see [2, 36] for details). To spend funds associated with this address, one creates a transaction containing all n public keys \( pk _1,\ldots , pk _n\) followed by t valid signatures from t of the n public keys, and writes this transaction to the blockchain. The message being signed is the same in all t signatures, namely the transaction data.

In practice, Multisig addresses often use \(t=n\), so that signatures from all n public keys are needed to spend funds from this address. In this case, all n signatures can be compressed using a multi-signature scheme into a single short signature. This shrinks the overall transaction size and reduces the amount of data written to the blockchain. This approach can also be made to work for \(t<n\), when \({n \atopwithdelims ()t}\) is small, by enumerating all t-size subsets [36, Sect. 5.2]. Multi-signatures can also be used to compress multi-input transactions, but for simplicity we will focus on Multisig addresses.

Notice that we still need to write all n public keys to the blockchain, so compressing the signatures does not save too much. Fortunately, there is a solution to this as well. Maxwell et al. [36], building on the work of Bellare and Neven [9], construct a Schnorr-based multi-signature scheme that also supports public key aggregation; the verifier only needs a short aggregate public key instead of an explicit list of all n public keys. With this approach, an n-of-n Multisig address is simply the hash of the short aggregate public key, and the data written to the blockchain in a spending transaction is this single short aggregate public key, a single short compressed signature, and the message. This data is sufficient to convince the verifier that all n signers signed the transaction. It shrinks the amount of data written to the blockchain by a factor of n.

Maxwell et al. call this primitive a multi-signature scheme with public key aggregation. Their signing protocol requires two rounds of communication among the signing parties, and they prove security of their scheme assuming the one-more discrete-log assumption (an assumption introduced in [8]). However, recent work [22] has shown that there is a gap in the security proof, and that security cannot be proven under this assumption. Whether their scheme can be proved secure under a different assumption or in a generic group model is currently an open problem.

In Sect. 5, we present a modification of the scheme by Maxwell et al. that we prove secure under the standard discrete-log assumption. Our scheme retains all the benefits of the original scheme, and in particular uses the same key aggregation technique, but we add one round to the signing protocol. Independently from our work, Maxwell et al. [37] revised their work to use the same protocol we present here.

1.1 Better Constructions Using Pairings

Our main results show that we can do much better by replacing the Schnorr signature scheme in [36] by BLS signatures [14]. The resulting schemes are an extremely good fit for Bitcoin, but are also very useful wherever multi-signatures are needed.

To describe our new constructions, we first briefly review the BLS signature scheme and its aggregation mechanism. Recall that the scheme needs: (1) An efficiently computable non-degenerate pairing \(e:\mathbb {G}_\mathrm {1}\times \mathbb {G}_\mathrm {2}\rightarrow \mathbb {G}_\mathrm {t}\) in groups \(\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t}\) of prime order q. We let \(g_\mathrm {1}\) and \(g_\mathrm {2}\) be generators of \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively. (2) A hash function \(\mathsf {H} _0: \mathcal {M}\rightarrow \mathbb {G}_\mathrm {1}\). Now the BLS signature scheme works as follows:

  • Key generation: choose a random \( sk \leftarrow \mathbb {Z} _q\) and output \(( pk , sk )\) where \( pk \leftarrow g_\mathrm {2}^{ sk } \in \mathbb {G}_\mathrm {2}\).

  • \(\text {Sign}( sk , m)\): output \(\sigma \leftarrow \mathsf {H} _0(m)^ sk \in \mathbb {G}_\mathrm {1}\).

  • \(\text {Verify}( pk ,m,\sigma )\): if \(e(\sigma , g_\mathrm {2}) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} e\big (\mathsf {H} _0(m), pk \big )\) output “accept”, otherwise output “reject”.
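To make the scheme concrete, the following sketch (ours, purely illustrative) exercises it in a toy model that represents every element of \(\mathbb {G}_\mathrm {1}\), \(\mathbb {G}_\mathrm {2}\), and \(\mathbb {G}_\mathrm {t}\) by its discrete logarithm modulo q, so that the pairing becomes multiplication of exponents; such a model is completely insecure (discrete logs are in the clear) and only serves to check the verification equation. All names in the code are ours.

```python
# Toy exponent model (ours, insecure): a G1/G2/Gt element g^x is stored as x mod q,
# so group multiplication is addition of exponents and e(g1^a, g2^b) = gt^(a*b)
# is multiplication of exponents. Used only to check the BLS equations.
import hashlib, secrets

q = 2**255 - 19  # a prime standing in for the group order

def H0(m: bytes) -> int:
    # hash to "G1", i.e. to the exponent of an element of G1
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def pairing(a: int, b: int) -> int:
    # e(g1^a, g2^b) = gt^(a*b) in the toy model
    return a * b % q

def keygen():
    sk = secrets.randbelow(q)
    pk = sk                      # pk = g2^sk, represented by its exponent
    return pk, sk

def sign(sk: int, m: bytes) -> int:
    return H0(m) * sk % q        # sigma = H0(m)^sk in G1

def verify(pk: int, m: bytes, sigma: int) -> bool:
    return pairing(sigma, 1) == pairing(H0(m), pk)   # e(sigma, g2) =? e(H0(m), pk)

pk, sk = keygen()
assert verify(pk, b"m", sign(sk, b"m"))
```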

This signature scheme supports a simple signature aggregation procedure. Given triples \(( pk _i,\ m_i,\ \sigma _i)\) for \(i=1,\ldots ,n\), anyone can aggregate the signatures \(\sigma _1,\ldots ,\sigma _n \in \mathbb {G}_\mathrm {1}\) into a short convincing aggregate signature \(\sigma \) by computing

$$\begin{aligned} \sigma \leftarrow \sigma _1 \cdots \sigma _n \in \mathbb {G}_\mathrm {1}. \end{aligned}$$
(1)

To verify this aggregate signature \(\sigma \in \mathbb {G}_\mathrm {1}\) one checks that:

$$\begin{aligned} e(\sigma , g_\mathrm {2}) = e\big (\mathsf {H} _0(m_1), pk _1\big ) \cdots e\big (\mathsf {H} _0(m_n), pk _n \big ). \end{aligned}$$
(2)

Note that verification requires all \(( pk _i,\ m_i)\) for \(i=1,\ldots ,n\). When all the messages being signed are the same (i.e., \(m_1 = \ldots = m_n\)) the verification relation (2) reduces to a simpler test that requires only two pairings:

$$\begin{aligned} e(\sigma , g_\mathrm {2}) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} e\Big (\mathsf {H} _0(m_1),\ pk _1 \cdots pk _n\Big ). \end{aligned}$$
(3)

Observe that the verifier only needs to be given a short aggregate public-key \( apk \mathrel {\mathop :}= pk _1 \cdots pk _n \in \mathbb {G}_\mathrm {2}\).
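Continuing the same toy exponent model (ours, insecure, for illustration only), the following sketch checks that aggregation as in (1) and the common-message verification (3) with the aggregate public key go through:

```python
# Toy exponent model again (insecure, illustration only).
import hashlib, secrets

q = 2**255 - 19
H0 = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % q
pairing = lambda a, b: a * b % q          # e(g1^a, g2^b) = gt^(a*b)

n, m = 5, b"common message"
sks = [secrets.randbelow(q) for _ in range(n)]
pks = sks[:]                              # pk_i = g2^{sk_i}
sigs = [H0(m) * sk % q for sk in sks]     # sigma_i = H0(m)^{sk_i}

sigma = sum(sigs) % q                     # (1): sigma = sigma_1 * ... * sigma_n
apk = sum(pks) % q                        # apk   = pk_1 * ... * pk_n

# (3): e(sigma, g2) =? e(H0(m), apk) -- two pairings, only apk is needed
assert pairing(sigma, 1) == pairing(H0(m), apk)
```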

The Rogue Public-Key Attack. The simple signature aggregation method in (1) is insecure on its own, and needs to be enhanced. To see why, consider the following rogue public-key attack: an attacker registers a rogue public key \( pk _2 \mathrel {\mathop :}=g_\mathrm {2}^\alpha \cdot ( pk _1)^{-1} \in \mathbb {G}_\mathrm {2}\), where \( pk _1 \in \mathbb {G}_\mathrm {2}\) is the public key of some unsuspecting user Bob, and \(\alpha \in \mathbb {Z} _q\) is chosen by the attacker. The attacker can then claim that both it and Bob signed some message \(m \in \mathcal {M}\) by presenting the aggregate signature \( \sigma \mathrel {\mathop :}=\mathsf {H} _0(m)^\alpha . \) This signature verifies as an aggregate of two signatures, one from \( pk _1\) and one from \( pk _2\), because

$$ e(\sigma , g_\mathrm {2}) = e\big (\mathsf {H} _0(m)^\alpha , g_\mathrm {2}\big ) = e\big (\mathsf {H} _0(m), g_\mathrm {2}^\alpha \big ) = e\big (\mathsf {H} _0(m), pk _1 \cdot pk _2\big ). $$

Hence, this \(\sigma \) satisfies (3). In effect, the attacker committed Bob to the message m, without Bob ever signing m.
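The attack can be replayed mechanically in the toy exponent model used above (ours, insecure, illustration only): the attacker never touches Bob's secret key, yet the resulting \(\sigma \) passes the two-signer check (3).

```python
# Toy exponent model (insecure): rogue public-key attack against naive aggregation.
import hashlib, secrets

q = 2**255 - 19
H0 = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % q
pairing = lambda a, b: a * b % q

sk_bob = secrets.randbelow(q)
pk1 = sk_bob                              # Bob's honest key pk_1 = g2^{sk_bob}

alpha = secrets.randbelow(q)              # attacker's secret choice
pk2 = (alpha - pk1) % q                   # rogue key pk_2 = g2^alpha * pk_1^{-1}

m = b"message Bob never signed"
sigma = H0(m) * alpha % q                 # sigma = H0(m)^alpha, computed without sk_bob

apk = (pk1 + pk2) % q                     # pk_1 * pk_2 (= g2^alpha)
assert pairing(sigma, 1) == pairing(H0(m), apk)   # accepted as if Bob co-signed
```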

Defenses. There are two standard defenses against the rogue public-key attack:

  • Require every user to prove knowledge or possession of the corresponding secret key [10, 33, 47]. However, this is difficult to enforce in practice, as argued in [7, 47], and does not fit well with applications to cryptocurrencies, as explained in [36].

  • Require that the messages being aggregated are distinct [7, 13], namely the verifier rejects an aggregate signature on non-distinct messages. This is sufficient to prevent the rogue key attack. Moreover, message distinctness can be enforced by always prepending the public key to every message prior to signing. However, because now all messages are distinct, we cannot take advantage of fast verification and public-key aggregation as in (3) when aggregating signatures on a common message m.

1.2 Our Pairing-Based Results

In Sect. 3 we propose a different defense against the rogue public-key attack that retains all the benefits of both defenses above without the drawbacks. Our multi-signature scheme supports public key aggregation and fast verification as in (3). Moreover, the scheme is secure in the plain public-key model, which means that users do not need to prove knowledge or possession of their secret key. The scheme has two additional useful properties:

  • The scheme supports batch verification where a set of multi-signatures can be verified as a batch faster than verifying them one by one.

  • We show in Sect. 3.3 that given several multi-signatures on different messages, it is possible to aggregate them all using (1) into a single short signature. This can be used to aggregate signatures across many transactions and further shrink the data on the blockchain.

Our construction is based on the approach developed in [9] and [36] for securing Schnorr multi-signatures against the rogue public key attack.

Our BLS-based multi-signature scheme is much easier to use than Schnorr multi-signatures. Recall that aggregation in Schnorr can only take place at the time of signing and requires a multi-round protocol between the signers. In our new scheme, aggregation can take place publicly by a simple multiplication, even long after all the signatures have been generated and the signers are no longer available. Concretely, in the context of Bitcoin this means that all the signers behind a Multisig address can simply send their signatures to one party who aggregates all of them into a single signature. No interaction is needed, and the parties do not all need to be online at the same time.

Accountable-Subgroup Multi-signatures. Consider again n parties where each party generates an independent signing key pair. An accountable-subgroup multi-signature (ASM) scheme enables any subset \( S \) of the n parties to jointly sign a message m, so that a valid signature implicates the subset \( S \) that generated the signature; hence \( S \) is accountable for signing m. The verifier in an ASM is given as input the (aggregate) ASM public key representing all n parties, the set \( S \subseteq \{1,\ldots ,n\}\), the signature generated by the set \( S \), and the message m. It accepts or rejects the signature. Security should ensure that a set of signers \( S ' \not \supseteq S \) cannot issue a signature that will be accepted as if it were generated by \( S \). We define ASMs and their security properties precisely in Sect. 4. This concept was previously studied by Micali et al. [39].

Any secure signature scheme gives a trivial ASM: every party generates an independent signing key pair. A signature by a set \( S \) on message m is simply the concatenation of all the signatures by the members of \( S \). For a security parameter \(\kappa \), the public key size in this trivial ASM is \(O(n \times \kappa )\) bits. The signature size is \(O(| S | \times \kappa )\) bits.

Our new ASM scheme is presented in Sect. 4.2. It is the first ASM where the signature size is only \(O(\kappa )\) bits beyond the description of the set \( S \), independent of n. The public key is only \(O(\kappa )\) bits. Concretely, the signature is only two group elements, along with the description of \( S \), and the public key is a single group element. The signing process is non-interactive, but initial key generation requires a simple one-round protocol among all n signers. We also present an aggregate ASM that partially aggregates w signatures into a single signature that is considerably smaller than w separate signatures.

To see how all this can be used, consider again a Bitcoin n-of-n Multisig address. We already saw that multi-signatures with public key aggregation reduce the amount of data written to the blockchain to only \(O(\kappa )\) bits when spending funds from this address (as opposed to \(O(\kappa \times n)\) bits as currently done in Bitcoin). The challenge is to do the same for a t-of-n Multisig address where \(t<n\). Our ASM gives a complete solution; the only information that is written to the blockchain is a description of \( S \) plus three additional group elements: one for the public key and two for the signature, even when \({n \atopwithdelims ()t}\) is exponential. When a block contains w such transactions, our aggregate ASM in Sect. 4.4 can reduce this further to two (amortized) group elements per transaction. This is significantly better than the trivial linear size ASM scheme currently employed by Bitcoin.

Proofs of Possession. We further observe that all our schemes, both BLS-based and Schnorr-based, can be adapted to a setting where all users are required to provide a proof of possession (PoP) of their secret key. Proofs of possession increase the size of individual public keys, but there are applications where the size of individual keys is less relevant. For example, Multisig addresses in Bitcoin only need to store the aggregate public key on the blockchain, whereas the individual public keys are only relevant to the signers and can be kept off-chain, or verified once and then discarded. Other applications may involve a more or less static set of signing nodes whose keys can be verified once and used in arbitrary combinations thereafter.

The PoP variants offer some advantages over our main schemes, such as simply using the product or hash of the public keys as the aggregate public key (as opposed to a multi-exponentiation), and having tighter security proofs to the underlying security assumption. Due to space constraints, the PoP variants are only presented in the full version of this work [12].

Table 1. Comparison of the space required to authorize a block in the Bitcoin blockchain containing \(tx \) transactions, each containing \(inp \) inputs, all from \(n \)-out-of-\(n \) multisig wallets. Space is counted in group elements, each taken at the size required to represent an element of its group. The fourth column shows the concrete number of bytes taken in a Bitcoin block by choosing some sample parameters (\(tx = 1500\), \(inp = 3\), \(n = 3\)), using secp256k1 [19] for Bitcoin and the discrete-logarithm-based schemes, and BLS381 [6] for the pairing-based schemes. In the right-most column, “linear” denotes that \(t \)-of-\(n \) thresholds are supported with key and signature sizes linear in \(n \) and \(t \), “small” denotes that support is limited to \({n \atopwithdelims ()t}\) being small, and “any” denotes support for arbitrary (polynomial size) t and n.

1.3 Efficiency Comparison

Table 1 shows to what extent our constructions reduce the size of the Bitcoin blockchain. Our pairing-based scheme and our discrete-logarithm-based scheme both require less than 20% of the space used by the currently deployed solution to authenticate all transactions in a Bitcoin block, assuming realistic parameters. While not immediately visible from the table, accountable-subgroup multi-signatures are most useful for \(t \)-of-\(n \) signatures when \({n \atopwithdelims ()t}\) is very large. For instance, for a 50-of-100 multisig wallet, the currently deployed Bitcoin solution would require almost 60 times more space than our scheme. The other schemes support threshold signatures using Merkle trees [38] as outlined in [36, Sect. 5.2], but only when \({n \atopwithdelims ()t}\) is small enough to generate the tree. This method would, for example, be infeasible for a 50-of-100 threshold scheme.

1.4 Related Work

Multi-signatures have been studied extensively based on RSA [29, 30, 43, 45], discrete logarithms [3, 4, 9, 17, 18, 20, 22, 26, 27, 28, 32, 35, 36, 39, 44], pairings [10, 11, 31, 33, 47], and lattices [5]. Defending against rogue public-key attacks has always been a primary concern in the context of multi-signature schemes based on discrete-log and pairings [7, 9, 13, 28, 39, 40, 47], and is the main reason for the added complexity in discrete-log-based multi-signature systems. Aggregate signatures [1, 13, 25] are a closely related concept where signatures by different signers on different messages can be compressed together. Sequential aggregate signatures [15, 24, 33, 34, 42] are a variant where signers take turns adding their own signature onto the aggregate. The concept of public-key aggregation in addition to signature compression has not been explicitly discussed in the plain public key model until [36] and this work. This concept greatly reduces the combined length of the data needed to verify a multi-signature.

2 Preliminaries

2.1 Bilinear Groups

Let \(\mathcal {G}\) be a bilinear group generator that takes as input a security parameter \(\kappa \) and outputs the descriptions of multiplicative groups \((q,\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t},e,g_\mathrm {1},g_\mathrm {2})\) where \(\mathbb {G}_\mathrm {1}\), \(\mathbb {G}_\mathrm {2}\), and \(\mathbb {G}_\mathrm {t}\) are groups of prime order q, \(e\) is an efficient, non-degenerate bilinear map \(e: \mathbb {G}_\mathrm {1}\times \mathbb {G}_\mathrm {2}\rightarrow \mathbb {G}_\mathrm {t}\), and \(g_\mathrm {1}\) and \(g_\mathrm {2}\) are generators of the groups \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\), respectively.

2.2 Computational Problems

Definition 1

(Discrete Log Problem). For a group \(\mathbb {G} = \langle g \rangle \) of prime order q, we define the advantage of an adversary \(\mathcal {A} \) as

$$\begin{aligned} \mathsf {Adv}^{\mathsf {dl}}_{\mathcal {A}} \;=\; \Pr \big [\, \mathcal {A} (g, g^y) = y \;:\; y \leftarrow \mathbb {Z} _q \,\big ], \end{aligned}$$

where the probability is taken over the random choices of \(\mathcal {A} \) and the random selection of y. \(\mathcal {A}\) \((\tau , \epsilon )\)-breaks the discrete log problem if it runs in time at most \(\tau \) and has advantage at least \(\epsilon \). Discrete log is \((\tau , \epsilon )\)-hard if no such adversary exists.

Definition 2

(Computational co-Diffie-Hellman Problem). For groups \(\mathbb {G}_\mathrm {1}, \mathbb {G}_\mathrm {2}\) of prime order q, define the advantage of an adversary \(\mathcal {A}\) as

$$\begin{aligned} \mathsf {Adv}^{\mathsf {co\text{- }CDH}}_{\mathcal {A}} \;=\; \Pr \big [\, \mathcal {A} (g_\mathrm {1}^\alpha , g_\mathrm {1}^\beta , g_\mathrm {2}^\beta ) = g_\mathrm {1}^{\alpha \beta } \;:\; \alpha , \beta \leftarrow \mathbb {Z} _q \,\big ], \end{aligned}$$

where the probability is taken over the random choices of \(\mathcal {A} \) and the random selection of \((\alpha , \beta )\). \(\mathcal {A}\) \((\tau , \epsilon )\)-breaks the \(\mathsf {co\text{- }CDH}\) problem if it runs in time at most \(\tau \) and has advantage at least \(\epsilon \). \(\mathsf {co\text{- }CDH}\) is \((\tau , \epsilon )\)-hard if no such adversary exists.

Definition 3

(Computational \(\psi \) -co-Diffie-Hellman Problem). For groups \(\mathbb {G}_\mathrm {1}, \mathbb {G}_\mathrm {2}\) of prime order q, let \(\mathcal {O}^\mathtt{{\psi }} (\cdot )\) be an oracle that on input \(g_\mathrm {2}^x \in \mathbb {G}_\mathrm {2}\) returns \(g_\mathrm {1}^x \in \mathbb {G}_\mathrm {1}\). Define the advantage of an adversary \(\mathcal {A}\) as

$$\begin{aligned} \mathsf {Adv}^{\mathsf {\psi \text{- }co\text{- }CDH}}_{\mathcal {A}} \;=\; \Pr \big [\, \mathcal {A} ^{\mathcal {O}^\mathtt{{\psi }}}(g_\mathrm {1}^\alpha , g_\mathrm {1}^\beta , g_\mathrm {2}^\beta ) = g_\mathrm {1}^{\alpha \beta } \;:\; \alpha , \beta \leftarrow \mathbb {Z} _q \,\big ], \end{aligned}$$

where the probability is taken over the random choices of \(\mathcal {A} \) and the random selection of \((\alpha , \beta )\). \(\mathcal {A}\) \((\tau , \epsilon )\)-breaks the \(\mathsf {\psi \text{- }co\text{- }CDH}\) problem if it runs in time at most \(\tau \) and has advantage at least \(\epsilon \). \(\mathsf {\psi \text{- }co\text{- }CDH}\) is \((\tau , \epsilon )\)-hard if no such adversary exists.

2.3 Generalized Forking Lemma

The forking lemma of Pointcheval and Stern [46] is commonly used to prove the security of schemes based on Schnorr signatures [48] in the random-oracle model. Their lemma was later generalized to apply to a wider class of schemes [3, 9]. We recall the version due to Bagherzandi, Cheon, and Jarecki [3] here.

Let \(\mathcal {A} \) be an algorithm that on input \( in \) interacts with a random oracle \(\mathsf {H}: \{0,1\}^* \rightarrow \mathbb {Z} _q\). Let \(f = (\rho , h_1, \ldots , h_{q_\mathrm {H}})\) be the randomness involved in an execution of \(\mathcal {A} \), where \(\rho \) is \(\mathcal {A} \)’s random tape, \(h_i\) is the response to \(\mathcal {A} \)’s i-th query to \(\mathsf {H} \), and \(q_\mathrm {H}\) is its maximal number of random-oracle queries. Let \(\varOmega \) be the space of all such vectors f and let \(f|_i = (\rho , h_1, \ldots , h_{i-1})\). We consider an execution of \(\mathcal {A} \) on input \( in \) and randomness f, denoted \(\mathcal {A} ( in ,f)\), as successful if it outputs a pair \((J, \{ out _j\}_{j \in J})\), where J is a multi-set that is a non-empty subset of \(\{1,\ldots ,q_\mathrm {H}\}\) and \(\{ out _j\}_{j \in J}\) is a multi-set of side outputs. We say that \(\mathcal {A} \) failed if it outputs \(J = \emptyset \). Let \(\epsilon \) be the probability that \(\mathcal {A} ( in ,f)\) is successful for fresh randomness and for an input generated by an input generator \(\mathsf {IG}\).

For a given input \( in \), the generalized forking algorithm \(\mathcal {GF}_\mathcal {A} \) is defined as follows:

[Figure: pseudocode of the generalized forking algorithm \(\mathcal {GF}_\mathcal {A} \).]
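For readers who prefer code, the following Python-style sketch (ours) shows only the rewinding structure of \(\mathcal {GF}_\mathcal {A} \) under our reading of [3]: run \(\mathcal {A} \) once, and for every \(j \in J\) rerun it on the same randomness prefix \(f|_j\) with fresh random-oracle responses from the j-th query onward, until a second success on j with a different j-th response is obtained. The retry bound max_retries and the precise bookkeeping should be taken from [3] and Lemma 1 below.

```python
import secrets

def generalized_fork(A, inp, q_H, q, max_retries):
    """Structural sketch of GF_A (indices j are 1-based, as in the text).
    A(inp, rho, h) returns None on failure, or (J, out) with out a dict
    mapping each j in J to its side output out_j."""
    rho = secrets.token_bytes(32)                 # A's random tape
    h = [secrets.randbelow(q) for _ in range(q_H)]
    first = A(inp, rho, h)
    if first is None:
        return "fail"
    J, out = first
    out_forked = {}
    for j in sorted(J):
        for _ in range(max_retries):              # rewind to the j-th H-query
            h2 = h[:j - 1] + [secrets.randbelow(q) for _ in range(q_H - j + 1)]
            second = A(inp, rho, h2)              # same prefix f|_j, fresh responses
            if second is not None and j in second[0] and h2[j - 1] != h[j - 1]:
                out_forked[j] = second[1][j]
                break
        else:
            return "fail"
    return J, out, out_forked
```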

We say that \(\mathcal {GF}_\mathcal {A} \) succeeds if it doesn’t output \(\mathtt {fail}\). Bagherzandi et al. proved the following lemma for this forking algorithm.

Lemma 1

(Generalized Forking Lemma [3]). Let \(\mathsf {IG}\) be a randomized algorithm and \(\mathcal {A} \) be a randomized algorithm running in time \(\tau \) making at most \(q_\mathrm {H}\) random-oracle queries that succeeds with probability \(\epsilon \). If \(q > 8nq_\mathrm {H}/ \epsilon \), then \(\mathcal {GF}_\mathcal {A} ( in )\) runs in time at most \(\tau \cdot 8n^2q_\mathrm {H}/\epsilon \cdot \ln (8n/\epsilon )\) and succeeds with probability at least \(\epsilon /8\), where the probability is over the choice of \( in \leftarrow \mathsf {IG}\) and over the coins of \(\mathcal {GF}_\mathcal {A} \).

2.4 Multi-signatures and Aggregate Multi-signatures

We follow the definition of Bellare and Neven [9] and define a multisignature scheme as algorithms \(\mathsf {Pg}\), \(\mathsf {Kg}\), \(\mathsf {Sign}\), \(\mathsf {KAg}\), and \(\mathsf {Vf}\). A trusted party generates the system parameters \( par \leftarrow \mathsf {Pg}\). Every signer generates a key pair \(( pk , sk ) \leftarrow \mathsf {Kg}( par )\), and signers can collectively sign a message m by each calling the interactive algorithm \(\mathsf {Sign}( par , \mathcal {PK}, sk , m)\), where \(\mathcal {PK}\) is the set of the public keys of the signers, and \( sk \) is the signer’s individual secret key. At the end of the protocol, every signer outputs a signature \(\sigma \). Algorithm \(\mathsf {KAg}\) on input a set of public keys \(\mathcal {PK}\) outputs a single aggregate public key \( apk \). A verifier can check the validity of a signature \(\sigma \) on message m under an aggregate public key \( apk \) by running \(\mathsf {Vf}( par , apk , m, \sigma )\) which outputs 0 or 1 indicating that the signature is invalid or valid, respectively.

A multisignature scheme should satisfy completeness, meaning that for any n, if we have \(( pk _i, sk _i) \leftarrow \mathsf {Kg}( par )\) for \(i = 1, \ldots , n\), and for any message m, if every signer i runs \(\mathsf {Sign}( par , \{ pk _1, \ldots , pk _n\}, sk _i, m)\), then every signer will output a signature \(\sigma \) such that \(\mathsf {Vf}( par , \mathsf {KAg}( par , \{ pk _i\}_{i=1}^n), m, \sigma ) = 1\). Second, a multisignature scheme should satisfy unforgeability. Unforgeability of a multisignature scheme is defined by a three-stage game.

Setup. The challenger generates the parameters \( par \leftarrow \mathsf {Pg}\) and a challenge key pair \(( pk ^*, sk ^*) \leftarrow \mathsf {Kg}( par )\). It runs the adversary on the public key: \(\mathcal {A} ( par , pk ^*)\).

Signature queries. \(\mathcal {A}\) is allowed to make signature queries on any message \( m \) for any set of signer public keys \(\mathcal {PK}\) with \( pk ^* \in \mathcal {PK}\), meaning that it has access to an oracle that simulates the honest signer interacting in a signing protocol with the other signers of \(\mathcal {PK}\) to sign message \( m \). Note that \(\mathcal {A}\) may make any number of such queries concurrently.

Output. Finally, the adversary outputs a multisignature forgery \(\sigma \), a message \( m ^*\), and a set of public keys \(\mathcal {PK}\). The adversary wins if \( pk ^* \in \mathcal {PK}\), \(\mathcal {A}\) made no signing queries on \( m ^*\), and \(\mathsf {Vf}( par , \mathsf {KAg}( par , \mathcal {PK}), m ^*, \sigma ) = 1\).

Definition 4

We say \(\mathcal {A}\) is a \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger for a multisignature scheme if it runs in time \(\tau \), makes \(q_\mathrm {S}\) signing queries, makes \(q_\mathrm {H}\) random-oracle queries, and wins the above game with probability at least \(\epsilon \). A multisignature scheme is \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable if no \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger exists.

2.5 Aggregate Multi-signatures

We now introduce aggregate multi-signatures, combining the concepts of aggregate signatures and multisignatures, allowing multiple multisignatures to be aggregated into one. More precisely, we extend the definition of multisignatures with two algorithms. \(\mathsf {SAg}\) takes as input a set of tuples, each tuple containing an aggregate public key \( apk \), a message \( m \), and a multisignature \(\sigma \), and outputs a single aggregate multisignature \(\varSigma \). \(\mathsf {AVf}\) takes as input a set of tuples, each tuple containing an aggregate public key \( apk \) and a message \( m \), and an aggregate multisignature \(\varSigma \), and outputs 0 or 1 indicating that the aggregate multisignature is invalid or valid, respectively. Observe that any multisignature scheme can be transformed into an aggregate multisignature scheme in a trivial manner, by implementing \(\mathsf {SAg}( par , \{ apk _i, m _i, \sigma _i\})\) to output \(\varSigma \leftarrow (\sigma _1, \ldots , \sigma _n)\), and \(\mathsf {AVf}\big ( par , \{ apk _i, m _i\}, (\sigma _1,\ldots ,\sigma _n)\big )\) to output 1 if all individual multisignatures are valid. The goal however is to have \(\varSigma \) much smaller than the concatenation of the individual multisignatures, and ideally of constant size.

The security of aggregate multisignatures is very similar to the security of multisignatures. First, an aggregate multisignature scheme should satisfy completeness, meaning that 1) for any n, if we have \(( pk _i, sk _i) \leftarrow \mathsf {Kg}( par )\) for \(i = 1, \ldots , n\), and for any message m, if every signer i runs \(\mathsf {Sign}( par , \{ pk _1, \ldots , pk _n\}, sk _i, m)\), then every signer will output a signature \(\sigma \) such that \(\mathsf {Vf}( par , \mathsf {KAg}( par , \{ pk _i\}_{i=1}^n), m, \sigma ) = 1\), and 2) for any set of valid multisignatures \(\{( apk _i, m_i, \sigma _i)\}\) (with \(\mathsf {Vf}( par , apk _i, m_i, \sigma _i) = 1\)), the aggregated multisignature is also valid: \(\mathsf {AVf}( par , \{ apk _i, m_i\}, \mathsf {SAg}( par , \{( apk _i, m _i,\sigma _i)\})) = 1\). Second, an aggregate multisignature scheme should satisfy unforgeability. Unforgeability of an aggregate multisignature scheme is defined by a three-stage game, where the setup stage and the signature queries stage are the same as in the multisignature unforgeability game. The output stage is changed as follows:

Output. Finally, the adversary halts by outputting an aggregate multisignature forgery \(\varSigma \), a set of aggregate public key and message pairs \(\{( apk _i, m _i)\}\), a set of public keys \(\mathcal {PK}\), and a message \( m ^*\). The adversary wins if \( pk ^* \in \mathcal {PK}\), \(\mathcal {A}\) made no signing queries on \( m ^*\), and \(\mathsf {AVf}( par , \{( apk _i, m _i)\} \cup \{(\mathsf {KAg}( par , \mathcal {PK}), m ^*)\}, \varSigma ) = 1\).

Definition 5

We say \(\mathcal {A}\) is a \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger for an aggregate multisignature scheme if it runs in time \(\tau \), makes \(q_\mathrm {S}\) signing queries, makes \(q_\mathrm {H}\) random-oracle queries, and wins the above game with probability at least \(\epsilon \). An aggregate multisignature scheme is \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable if no \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger exists.

3 Multi-signatures with Key Aggregation from Pairings

We begin by presenting our new pairing-based multi-signature scheme that supports public-key aggregation. Bilinear groups are typically asymmetric, in the sense that one of the two groups has a more compact representation. The pairing-based schemes below require public keys and signatures to live in different groups. For standard signatures, a single public key is used to sign many messages, so it would make sense to use the more compact group for signatures. Because our schemes below enable aggregation of both signatures and public keys, however, this may no longer be true, and the best choice of groups may depend strongly on the concrete application. We describe our schemes below placing signatures in \(\mathbb {G}_\mathrm {1}\) and public keys in \(\mathbb {G}_\mathrm {2}\), but leave it open which of those two groups has the more compact representation. Note that efficient hash functions exist mapping into either of the groups [16, 23, 49].

3.1 Description of Our Pairing-Based Scheme

Our pairing-based multi-signature with public-key aggregation is built from the BLS signature scheme [14]. The scheme is secure in the plain public key model, and assumes hash functions \(\mathsf {H} _0 : \{0,1\}^* \rightarrow \mathbb {G}_\mathrm {1}\) and \(\mathsf {H} _1 : \{0,1\}^* \rightarrow \mathbb {Z} _q\).

Parameters Generation. \(\mathsf {Pg}(\kappa )\) sets up bilinear group \((q, \mathbb {G}_\mathrm {1}, \mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t},e,g_\mathrm {1},g_\mathrm {2}) \leftarrow \mathcal {G}(\kappa )\) and outputs \( par \leftarrow (q,\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t},e,g_\mathrm {1}, g_\mathrm {2})\).

Key Generation. The key generation algorithm \(\mathsf {Kg}( par )\) chooses a random \( sk \leftarrow \mathbb {Z} _q\), computes \( pk \leftarrow g_\mathrm {2}^{ sk }\), and outputs \(( pk , sk )\).

Key Aggregation. \(\mathsf {KAg}(\{ pk _1, \ldots , pk _n\})\) outputs

$$\begin{aligned} apk \leftarrow \prod _{i = 1}^n pk _i^{\mathsf {H} _1( pk _i, \{ pk _1, \ldots , pk _n\})} \ . \end{aligned}$$

Signing. Signing is a single round protocol. \(\mathsf {Sign}( par , \{ pk _1, \ldots , pk _n\}, sk _i, m )\) computes \(s_i \leftarrow \mathsf {H} _0( m )^{a_i \cdot sk _i}\), where \(a_i \leftarrow \mathsf {H} _1( pk _i, \{ pk _1, \ldots , pk _n\})\), and sends \(s_i\) to a designated combiner who computes the final signature as \(\sigma \leftarrow \prod _{j = 1}^n s_j\). This designated combiner can be one of the signers or an external party.

Multi-signature Verification. \(\mathsf {Vf}( par , apk , m, \sigma )\) outputs 1 iff

$$\begin{aligned} e(\sigma , g_\mathrm {2}^{-1}) \cdot e(\mathsf {H} _0( m ), apk ) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} 1_{\mathbb {G}_\mathrm {t}}. \end{aligned}$$
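The whole scheme can be exercised end to end in the toy exponent model introduced earlier (ours, insecure, illustration only); the sketch below implements key aggregation, the signing shares, the combiner, and the verification equation above.

```python
# Toy exponent model (insecure) of the multi-signature scheme of Sect. 3.1.
import hashlib, secrets

q = 2**255 - 19
def H(tag: bytes, *parts) -> int:
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(repr(p).encode())
    return int.from_bytes(h.digest(), "big") % q
H0 = lambda m: H(b"H0", m)                         # hash to G1
H1 = lambda pk, PK: H(b"H1", pk, tuple(PK))        # hash to Z_q
pairing = lambda a, b: a * b % q                   # e(g1^a, g2^b) = gt^(a*b)

def keygen():
    sk = secrets.randbelow(q)
    return sk, sk                                  # (pk, sk); pk = g2^sk

def key_agg(PK):                                   # apk = prod pk_i^{H1(pk_i, PK)}
    return sum(pk * H1(pk, PK) for pk in PK) % q

def sign_share(PK, pk_i, sk_i, m):                 # s_i = H0(m)^{a_i * sk_i}
    return H0(m) * H1(pk_i, PK) * sk_i % q

def combine(shares):                               # sigma = prod s_j
    return sum(shares) % q

def verify(apk, m, sigma):                         # e(sigma, g2) =? e(H0(m), apk)
    return pairing(sigma, 1) == pairing(H0(m), apk)

keys = [keygen() for _ in range(4)]
PK = [pk for pk, _ in keys]
sigma = combine([sign_share(PK, pk, sk, b"tx") for pk, sk in keys])
assert verify(key_agg(PK), b"tx", sigma)
```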

Batch Verification. We note that a set of b multi-signatures can be verified as a batch faster than verifying them one by one. To see how, suppose we are given triples \((m_i, \sigma _i, apk _i)\) for \(i=1,\ldots ,b\), where \( apk _i\) is the aggregated public-key used to verify the multi-signature \(\sigma _i\) on \(m_i\). If all the messages \(m_1,\ldots ,m_b\) are distinct then we can use signature aggregation as in (1) to verify all these triples as a batch:

  • Compute an aggregate signature \(\tilde{\sigma } = \sigma _1 \cdots \sigma _b \in \mathbb {G}_\mathrm {1}\),

  • Accept all b multi-signature tuples as valid iff

    $$\begin{aligned} e(\tilde{\sigma }, g_\mathrm {2}) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} e\big (\mathsf {H} _0(m_1), apk _1\big ) \cdots e\big (\mathsf {H} _0(m_b), apk _b\big ). \end{aligned}$$

This way, verifying the b multi-signatures requires only \(b+1\) pairings instead of 2b pairings to verify them one by one. This simple batching procedure can only be used when all the messages \(m_1,\ldots ,m_b\) are distinct. If some messages are repeated then batch verification can be done by first choosing random exponents \(\rho _1,\ldots ,\rho _b \leftarrow \{1,\ldots ,2^{\kappa }\}\), where \(\kappa \) is a security parameter, computing \(\tilde{\sigma } = \sigma _1^{\rho _1} \cdots \sigma _b^{\rho _b} \in \mathbb {G}_\mathrm {1}\), and checking that

$$\begin{aligned} e(\tilde{\sigma }, g_\mathrm {2}) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} e\big (\mathsf {H} _0(m_1), apk _1^{\rho _1}\big ) \cdots e\big (\mathsf {H} _0(m_b), apk _b^{\rho _b}\big ). \end{aligned}$$

Of course the pairings on the right hand side can be coalesced for repeated messages.
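A sketch of the batching check in the same toy exponent model (ours, insecure, illustration only); the exponents \(\rho _i\) are drawn below \(2^{\kappa }\), and the coalescing of pairings for repeated messages is omitted for brevity.

```python
# Toy exponent model (insecure): batch verification of multi-signature triples
# (m_i, sigma_i, apk_i) with random exponents rho_i, as in the text above.
import hashlib, secrets

q = 2**255 - 19
kappa = 128
H0 = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % q
pairing = lambda a, b: a * b % q

def batch_verify(triples):
    rhos = [secrets.randbelow(1 << kappa) for _ in triples]
    sigma_tilde = sum(r * s for (_, s, _), r in zip(triples, rhos)) % q   # prod sigma_i^{rho_i}
    rhs = sum(pairing(H0(m), apk * r % q) for (m, _, apk), r in zip(triples, rhos)) % q
    return pairing(sigma_tilde, 1) == rhs

# three single-signer multi-signatures, two of them on the same message
sks = [secrets.randbelow(q) for _ in range(3)]
msgs = [b"a", b"a", b"b"]
triples = [(m, H0(m) * sk % q, sk) for m, sk in zip(msgs, sks)]
assert batch_verify(triples)
```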

3.2 Security Proof

Theorem 1

Our multi-signature scheme of Sect. 3.1 is an unforgeable multisignature scheme under the computational co-Diffie-Hellman problem in the random-oracle model. More precisely, it is \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable in the random-oracle model if \(q > 8q_\mathrm {H}/\epsilon \) and if \(\mathsf {co\text{- }CDH}\) is \(((\tau + q_\mathrm {H}\tau _\mathrm {exp_1}+ q_\mathrm {S}(\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + \tau _\mathrm {exp_2^{l}}) \cdot 8 q_\mathrm {H}^2 / \epsilon \cdot \ln (8q_\mathrm {H}/\epsilon ),\ \epsilon /(8q_\mathrm {H}))\)-hard, where l is the maximum number of signers involved in a single multisignature, \(\tau _\mathrm {exp_1}\) and \(\tau _\mathrm {exp_2}\) denote the time required to compute exponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively, and \(\tau _\mathrm {exp_1^{i}}\) and \(\tau _\mathrm {exp_2^{i}}\) denote the time required to compute i-multiexponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively.

Proof

Suppose we have a \((\tau ,q_\mathrm {S}, q_\mathrm {H}, \epsilon )\) forger \(\mathcal {F} \) against the multisignature scheme. Then consider an input generator \(\mathsf {IG}\) that generates random tuples \((A, B_1, B_2) = (g_\mathrm {1}^\alpha , g_\mathrm {1}^\beta , g_\mathrm {2}^\beta )\) where \(\alpha , \beta \leftarrow \mathbb {Z} _q\), and an algorithm \(\mathcal {A} \) that on input \((A, B_1, B_2)\) and randomness \(f = (\rho , h_1,\ldots ,h_{q_\mathrm {H}})\) proceeds as follows.

Algorithm \(\mathcal {A} \) picks a random index \(k \leftarrow \{1,\ldots ,q_\mathrm {H}\}\) and runs the forger \(\mathcal {F} \) on input \( pk ^* \leftarrow B_2\) with random tape \(\rho \). It responds to \(\mathcal {F} \)’s i-th \(\mathsf {H} _0\) query by choosing \(r_i \leftarrow \mathbb {Z} _q\) and returning \(g_\mathrm {1}^{r_i}\) if \(i \ne k\). The k-th \(\mathsf {H} _0\) query is answered by returning A. We assume w.l.o.g. that \(\mathcal {F}\) makes no repeated \(\mathsf {H} _0\) queries. \(\mathcal {A}\) responds to \(\mathcal {F} \)’s \(\mathsf {H} _1\) queries as follows. We distinguish three types of \(\mathsf {H} _1\) queries:

  1. 1.

    A query on \(( pk , \mathcal {PK})\) with \( pk \in \mathcal {PK}\) and \( pk ^* \in \mathcal {PK}\), and this is the first such query with \(\mathcal {PK}\).

  2. 2.

    A query on \(( pk , \mathcal {PK})\) with \( pk \in \mathcal {PK}\) and \( pk ^* \in \mathcal {PK}\), and a prior query of this form with \(\mathcal {PK}\) has been made.

  3. 3.

    Queries of any other form.

\(\mathcal {A}\) handles the i-th query of type (1) by choosing a random value for \(\mathsf {H} _1( pk _i, \mathcal {PK})\) for every \( pk _i \ne pk ^* \in \mathcal {PK}\). It fixes \(\mathsf {H} _1( pk ^*, \mathcal {PK})\) to \(h_i\), and returns \(\mathsf {H} _1( pk , \mathcal {PK})\). \(\mathcal {A}\) handles a type (2) query by returning the values chosen earlier when the type (1) query for \(\mathcal {PK}\) was made. \(\mathcal {A}\) handles a type (3) query by simply returning a random value in \(\mathbb {Z} _q\).

When \(\mathcal {F} \) makes a signing query on message \( m \) with signers \(\mathcal {PK}\), \(\mathcal {A}\) computes \( apk \leftarrow \mathsf {KAg}( par ,\mathcal {PK})\) and looks up \(\mathsf {H} _0( m )\). If this is A, then \(\mathcal {A}\) aborts with output \((0,\bot )\). Else, it must be of the form \(g_\mathrm {1}^r\), and \(\mathcal {A}\) can simulate the honest signer by computing \(s_i \leftarrow B_1^{r \cdot a_i}\), where \(a_i = \mathsf {H} _1( pk ^*, \mathcal {PK})\). If \(\mathcal {F} \) fails to output a successful forgery, then \(\mathcal {A} \) outputs \((0,\bot )\). If \(\mathcal {F} \) successfully outputs a forgery for a message \( m \) such that \(\mathsf {H} _0( m ) \ne A\), then \(\mathcal {A} \) also outputs \((0,\bot )\). Otherwise, \(\mathcal {F} \) has output a forgery \((\sigma , \mathcal {PK}, m )\) such that

$$ e(\sigma , g_\mathrm {2}) = e(A, \mathsf {KAg}( par , \mathcal {PK})) . $$

Let \(j_\mathrm {f}\) be the index such that \(\mathsf {H} _1( pk ^*, \mathcal {PK}) = h_{j_\mathrm {f}}\), let \( apk \leftarrow \mathsf {KAg}( par ,\mathcal {PK})\), and let \(a_j \leftarrow \mathsf {H} _1( pk _j, \mathcal {PK})\) for \(\mathcal {PK}= \{ pk _1,\ldots , pk _n\}\). Then \(\mathcal {A}\) outputs \((J = \{j_\mathrm {f}\}, \{(\sigma , \mathcal {PK}, apk , a_1,\ldots ,a_n)\})\).

The running time of \(\mathcal {A} \) is that of \(\mathcal {F} \) plus the additional computation \(\mathcal {A} \) makes. Let \(q_\mathrm {H}\) denote the total number of hash queries \(\mathcal {F} \) makes, i.e., the queries to \(\mathsf {H} _0\) and \(\mathsf {H} _1\) combined. \(\mathcal {A} \) needs one exponentiation in \(\mathbb {G}_\mathrm {1}\) to answer \(\mathsf {H} _0\) queries, so it spends at most \(q_\mathrm {H}\cdot \tau _\mathrm {exp_1}\) to answer the hash queries. For signing queries with a \(\mathcal {PK}\) of size at most l, \(\mathcal {A} \) computes one multi-exponentiation costing time \(\tau _\mathrm {exp_2^{l}}\), and one exponentiation in \(\mathbb {G}_\mathrm {1}\) costing \(\tau _\mathrm {exp_1}\), giving a total of \(q_\mathrm {S}\cdot (\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1})\). Finally, \(\mathcal {A} \) computes the output values, which costs an additional \(\tau _\mathrm {exp_2^{l}}\) to compute \( apk \). \(\mathcal {A} \)’s runtime is therefore \(\tau + q_\mathrm {H}\tau _\mathrm {exp_1}+ q_\mathrm {S}(\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + \tau _\mathrm {exp_2^{l}}\). The success probability of \(\mathcal {A} \) is the probability that \(\mathcal {F} \) succeeds and that \(\mathcal {A} \) guessed the hash index of \(\mathcal {F} \)’s forgery correctly, which happens with probability at least \(1/q_\mathrm {H}\), making \(\mathcal {A} \)’s overall success probability \(\epsilon _\mathcal {A} \ge \epsilon /q_\mathrm {H}\).

We prove the theorem by constructing an algorithm \(\mathcal {B} \) that, on input a \(\mathsf {co\text{- }CDH}\) instance \((A, B_1, B_2) \in \mathbb {G}_\mathrm {1}\times \mathbb {G}_\mathrm {1}\times \mathbb {G}_\mathrm {2}\) and given a forger \(\mathcal {F} \), solves the \(\mathsf {co\text{- }CDH}\) problem in \((\mathbb {G}_\mathrm {1}, \mathbb {G}_\mathrm {2})\). Namely, \(\mathcal {B} \) runs the generalized forking algorithm \(\mathcal {GF}_\mathcal {A} \) from Lemma 1 on input \((A, B_1, B_2)\) with the algorithm \(\mathcal {A} \) described above. Observe that the \(\mathsf {co\text{- }CDH}\)-instance is distributed identically to the output of \(\mathsf {IG}\). If \(\mathcal {GF}_\mathcal {A} \) outputs \(\mathtt {fail}\), then \(\mathcal {B} \) outputs \(\mathtt {fail}\). If \(\mathcal {GF}_\mathcal {A} \) outputs \((\{j_\mathrm {f}\}, \{ out \}, \{ out '\})\), then \(\mathcal {B} \) proceeds as follows. \(\mathcal {B} \) parses \( out \) as \((\sigma , \mathcal {PK}, apk , a_1,\ldots ,a_n)\) and \( out '\) as \((\sigma ', \mathcal {PK}', apk ', a'_1,\ldots ,a'_{n'})\). From the construction of \(\mathcal {GF}_\mathcal {A} \), we know that \( out \) and \( out '\) were obtained from two executions of \(\mathcal {A} \) with randomness f and \(f'\) such that \(f|_{j_\mathrm {f}} = f'|_{j_\mathrm {f}}\), meaning that these executions are identical up to the \(j_\mathrm {f}\)-th \(\mathsf {H} _1\) query of type (1). In particular, this means that the arguments of this query are identical, i.e., \(\mathcal {PK}= \mathcal {PK}'\) and \(n=n'\). If i is the index of \( pk ^*\) in \(\mathcal {PK}\), then again by construction of \(\mathcal {GF}_\mathcal {A} \), we have \(a_i = h_{j_\mathrm {f}}\) and \(a'_i = h_{j_\mathrm {f}}'\), and by the forking lemma it holds that \(a_i \ne a_i'\). By construction of \(\mathcal {A} \), we know that \( apk = \prod _{j=1}^n pk _j^{a_j}\) and \( apk ' = \prod _{j=1}^n pk _j^{a'_j}\). Since \(\mathcal {A} \) assigned \(\mathsf {H} _1( pk _j, \mathcal {PK}) \leftarrow a_j\) for all \(j \ne i\) before the forking point, we have that \(a_j = a'_j\) for \(j \ne i\), and therefore that \( apk / apk ' = { pk ^*}^{a_i - a'_i}\). We know that \(\mathcal {A} \)’s output satisfies \(e(\sigma , g_\mathrm {2}) = e(A, apk )\) and \(e(\sigma ', g_\mathrm {2}) = e(A, apk ')\), so that \(e(\sigma /\sigma ', g_\mathrm {2}) = e(A, {B_2}^{a_i - a'_i})\), showing that \((\sigma /\sigma ')^{1/(a_i-a'_i)}\) is a solution to the \(\mathsf {co\text{- }CDH}\) instance.

Using Lemma 1, we know that if \(q > 8 q_\mathrm {H}/ \epsilon \), then \(\mathcal {B} \) runs in time at most \((\tau + q_\mathrm {H}\tau _\mathrm {exp_1}+ q_\mathrm {S}(\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + \tau _\mathrm {exp_2^{l}}) \cdot 8 q_\mathrm {H}^2 / \epsilon \cdot \ln (8q_\mathrm {H}/\epsilon )\) and succeeds with probability \(\epsilon ' \ge \epsilon /(8q_\mathrm {H})\).

3.3 Aggregating Multi-signatures

It is possible to further aggregate multi-signatures from the scheme of Sect. 3.1 by multiplying them together, as long as the messages of the aggregated multi-signatures are different. The easiest way to guarantee that messages are different is by including the aggregate public key in the message to be signed, which is how we define the aggregate multisignature scheme here. That is, the aggregate scheme shares the \(\mathsf {Pg}\), \(\mathsf {Kg}\), and \(\mathsf {KAg}\) algorithms of the scheme of Sect. 3.1, but has slightly modified \(\mathsf {Sign}\) and \(\mathsf {Vf}\) algorithms that include \( apk \) in the signed message, and has additional algorithms \(\mathsf {SAg}\) and \(\mathsf {AVf}\) to aggregate signatures and verify aggregate signatures, respectively.

Signing. \(\mathsf {Sign}( par , \mathcal {PK}, sk _i, m )\) computes \(s_i \leftarrow \mathsf {H} _0( apk , m )^{a_i \cdot sk _i}\), where \( apk \leftarrow \mathsf {KAg}( par ,\mathcal {PK})\) and \(a_i \leftarrow \mathsf {H} _1( pk _i, \{ pk _1, \ldots , pk _n\})\). The designated combiner collects all signatures \(s_i\) and computes the final signature \(\sigma \leftarrow \prod _{j = 1}^n s_j\).

Multi-signature Verification. \(\mathsf {Vf}( par , apk , m, \sigma )\) outputs 1 if and only if \(e(\sigma , g_\mathrm {2}^{-1}) \cdot e(\mathsf {H} _0( apk , m ), apk ) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} 1_{\mathbb {G}_\mathrm {t}}\).

Signature Aggregation. \(\mathsf {SAg}( par , \{( apk _i, m _i, \sigma _i)\}_{i = 1}^n)\) outputs \(\varSigma \leftarrow \prod _{i = 1}^n \sigma _i\).

Aggregate Signature Verification. \(\mathsf {AVf}(\{( apk _i, m _i)\}_{i = 1}^n, \varSigma )\) outputs 1 if and only if \(e(\varSigma , g_\mathrm {2}^{-1}) \cdot \prod _{i = 1}^n e(\mathsf {H} _0( apk _i, m _i), apk _i) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} 1_{\mathbb {G}_\mathrm {t}}\).
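A toy-model sketch (ours, insecure, illustration only) of this aggregation layer: each multi-signature is produced on \(( apk , m )\), \(\mathsf {SAg}\) multiplies the multi-signatures together, and \(\mathsf {AVf}\) checks a single product of pairings.

```python
# Toy exponent model (insecure) of SAg / AVf from Sect. 3.3.
import hashlib, secrets

q = 2**255 - 19
def H(tag: bytes, *parts) -> int:
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(repr(p).encode())
    return int.from_bytes(h.digest(), "big") % q
pairing = lambda a, b: a * b % q

def multisig(sks, m):                     # one transaction, all signers honest
    PK = list(sks)                        # pk_i = g2^{sk_i} in the exponent model
    apk = sum(pk * H(b"H1", pk, tuple(PK)) for pk in PK) % q
    sigma = sum(H(b"H0", apk, m) * H(b"H1", pk, tuple(PK)) * sk
                for pk, sk in zip(PK, sks)) % q       # s_i = H0(apk,m)^{a_i sk_i}
    return apk, m, sigma

def SAg(tuples):                          # Sigma = prod sigma_i
    return sum(sigma for _, _, sigma in tuples) % q

def AVf(pairs, Sigma):                    # e(Sigma, g2) =? prod e(H0(apk_i,m_i), apk_i)
    rhs = sum(pairing(H(b"H0", apk, m), apk) for apk, m in pairs) % q
    return pairing(Sigma, 1) == rhs

txs = [multisig([secrets.randbelow(q) for _ in range(3)], f"tx {i}".encode())
       for i in range(4)]
assert AVf([(apk, m) for apk, m, _ in txs], SAg(txs))
```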

The security proof is almost identical to that of Theorem 1, but now requires an isomorphism \(\psi \) between \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\). We therefore prove security under the stronger \(\mathsf {\psi \text{- }co\text{- }CDH}\) assumption, which is equivalent to \(\mathsf {co\text{- }CDH}\) but offers this isomorphism as an oracle to the adversary.

Theorem 2

Our aggregate multisignature scheme is a secure aggregate multisignature scheme under the computational \(\psi \)-co-Diffie-Hellman problem in the random-oracle model. More precisely, it is \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable in the random-oracle model if \(q > 8q_\mathrm {H}/\epsilon \) and if the computational \(\psi \)-co-Diffie-Hellman problem is \(((\tau + q_\mathrm {H}\tau _\mathrm {exp_1}+ q_\mathrm {S}(\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + \tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1^{n}}) \cdot 8 q_\mathrm {H}^2 / \epsilon \cdot \ln (8q_\mathrm {H}/\epsilon ), \epsilon /(8q_\mathrm {H}))\)-hard, where l is the maximum number of signers involved in a single multisignature, n is the number of multisignatures aggregated into the forgery, \(\tau _\mathrm {exp_1}\) and \(\tau _\mathrm {exp_2}\) denote the time required to compute exponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively, and \(\tau _\mathrm {exp_1^{i}}\) and \(\tau _\mathrm {exp_2^{i}}\) denote the time required to compute i-multiexponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively.

Proof

Suppose we have a \((\tau ,q_\mathrm {S}, q_\mathrm {H}, \epsilon )\) forger \(\mathcal {F} \) against the aggregate multisignature scheme. We construct \(\mathcal {A}\) exactly as in the proof of Theorem 1, except that \(\mathcal {F} \) now outputs an aggregate multisignature forgery instead of a plain multisignature forgery. That is, \(\mathcal {F} \) outputs an aggregate multisignature \(\varSigma \), a set of aggregate public key and message pairs \(\{( apk _1, m _1),\ldots ,( apk _n, m _n)\}\), a set of public keys \(\mathcal {PK}\), and a message \( m ^*\). Let \( apk ^* \leftarrow \mathsf {KAg}( par , \mathcal {PK})\). If \(\mathcal {A} \) correctly guessed that the k-th \(\mathsf {H} _0\) query is \(\mathsf {H} _0( apk ^*,m^*)\), then we have that

$$ e(\varSigma , g_\mathrm {2}^{-1}) \cdot e(A, apk ^*) \cdot \prod _{i = 1}^n e(\mathsf {H} _0( apk _i, m _i), apk _i) = 1_{\mathbb {G}_\mathrm {t}}. $$

\(\mathcal {A}\) looks up \(r_i\) for every \(( apk _i, m _i)\) such that \(\mathsf {H} _0( apk _i, m _i) = g_\mathrm {1}^{r_i}\). It computes \(\sigma \leftarrow \varSigma \cdot \prod _{i = 1}^n \mathcal {O}^\mathtt{{\psi }} ( apk _i^{-r_i})\), so that

$$ e(\sigma , g_\mathrm {2}) = e(A, apk ^*). $$

Note that \(\mathcal {A}\) has now extracted a forgery, meaning that the rest of the reduction is exactly as in the proof of Theorem 1. The success probability of the reduction is therefore the same, and the runtime is only increased by the extra steps required to compute \(\sigma \), which costs \(\tau _\mathrm {exp_1^{n}}\).

4 Accountable-Subgroup Multisignatures

Micali, Ohta, and Reyzin [39] defined an accountable-subgroup multisignature scheme as a multisignature scheme where any subset of a group of signers \(\mathcal {PK}\) can create a valid multisignature that can be verified against the public keys of signers in the subset. An ASM scheme can be combined with an arbitrary access structure over \(\mathcal {PK}\) to determine whether the subset \( S \) is authorized to sign on behalf of \(\mathcal {PK}\). For example, requiring that \(|S| \ge t\) turns the ASM scheme into a type of threshold signature scheme whereby the signature also authenticates the set of signers that participated.

Verification of an ASM scheme obviously requires a description of the set \( S \) of signers which can be described by their indices in the group \(\mathcal {PK}\) using \(\min (|\mathcal {PK}|,\ | S | \times \lceil \log _2 |\mathcal {PK}| \rceil )\) bits. We describe the first ASM scheme that, apart from the description of \( S \), requires no data items with sizes depending on \(| S |\) or \(|\mathcal {PK}|\). Verification is performed based on a compact aggregate public key and signature. The aggregate public key is publicly computable from the individual signers’ public keys, but we do require all members of \(\mathcal {PK}\) to engage in a one-time group setup after which each signer obtains a group-specific membership key that it needs to sign messages for the group \(\mathcal {PK}\).

4.1 Definition of ASM Schemes

We adapt the original syntax and security definition of ASM schemes [39] to support public-key aggregation and an interactive group setup procedure.

An ASM scheme consists of algorithms \(\mathsf {Pg}\), \(\mathsf {Kg}\), \(\mathsf {GSetup}\), \(\mathsf {KAg}\), \(\mathsf {Sign}\), and \(\mathsf {Vf}\). The common system parameters are generated as \( par \leftarrow \mathsf {Pg}\). Each signer generates a key pair \(( pk , sk ) \leftarrow \mathsf {Kg}( par )\). To participate in a group of signers \(\mathcal {PK}= \{ pk _1,\ldots , pk _n\}\), each signer in \(\mathcal {PK}\) runs the interactive algorithm \(\mathsf {GSetup}( sk , \mathcal {PK})\) to obtain a membership key \( mk \). We assume that each signer in \(\mathcal {PK}\) is assigned a publicly computable index \(i \in \{1,\ldots ,|\mathcal {PK}|\}\), e.g., the index of \( pk \) in a sorted list of \(\mathcal {PK}\). Any subgroup of signers \( S \subseteq \{1,\ldots ,|\mathcal {PK}|\}\) of \(\mathcal {PK}\) can then collectively sign a message m by each calling the interactive algorithm \(\mathsf {Sign}( par , \mathcal {PK}, S , sk , mk , m)\), where \( mk \) is the signer’s membership key for this group of signers, to obtain a signature \(\sigma \). The key aggregation algorithm \(\mathsf {KAg}\), on input the public keys of a group of signers \(\mathcal {PK}\), outputs an aggregate public key \( apk \). A signature \(\sigma \) is verified by running \(\mathsf {Vf}( par , apk , S , m, \sigma )\) which outputs 0 or 1.

Correctness requires that for all \(n > 0\), for all \( S \subseteq \{1,\ldots ,n\}\), and for all \(m \in \{0,1\}^*\) it holds that \(\mathsf {Vf}( par , apk , S , m, \sigma )=1\) with probability one when \( par \leftarrow \mathsf {Pg}\), \(( pk _i, sk _i) \leftarrow \mathsf {Kg}( par )\) for \(i = 1,\ldots ,n\), \( mk _i \leftarrow \mathsf {GSetup}( sk _i, \mathcal {PK}= \{ pk _1,\ldots , pk _n\})\), \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\), and \(\sigma \leftarrow \mathsf {Sign}( par , \mathcal {PK}, S , sk _i, mk _i, m)\), where \(\mathsf {GSetup}\) is executed by all signers \(1,\ldots ,n\) while \(\mathsf {Sign}\) is only executed by the members of \( S \).

Security. Unforgeability is described by the following game.

Setup. The challenger generates \( par \leftarrow \mathsf {Pg}\) and \(( pk ^*, sk ^*) \leftarrow \mathsf {Kg}( par )\), and runs the adversary \(\mathcal {A} ( par , pk ^*)\).

Group Setup. The adversary can perform the group setup protocol \(\mathsf {GSetup}( sk ^*, \mathcal {PK})\) for any set of public keys \(\mathcal {PK}\) so that \( pk ^* \in \mathcal {PK}\), where the challenger plays the role of the target signer \( pk ^*\). The challenger stores the resulting membership key \( mk ^*_\mathcal {PK}\), but doesn’t hand it to \(\mathcal {A} \).

Signature queries. The adversary can also engage in arbitrarily many concurrent signing protocols for any message m, for any group of signers \(\mathcal {PK}\) for which \( pk ^* \in \mathcal {PK}\) and \( mk ^*_\mathcal {PK}\) is defined, and for any \( S \subseteq \{1,\ldots ,|\mathcal {PK}|\}\) so that \(i \in S \), where i is the index of \( pk ^*\) in \(\mathcal {PK}\). The challenger runs \(\mathsf {Sign}( par , \mathcal {PK}, S , sk ^*, mk ^*, m)\) to play the role of the i-th signer and hands the resulting signature \(\sigma \) to \(\mathcal {A} \).

Output. The adversary outputs a set of public keys \(\mathcal {PK}\), a set \( S \subseteq \{1,\ldots ,|\mathcal {PK}|\}\), a message m and an ASM signature \(\sigma \). It wins the game if \(\mathsf {Vf}( par , apk , S , m, \sigma )=1\), where \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\), \( pk ^* \in \mathcal {PK}\) and i is the index of \( pk ^*\) in \(\mathcal {PK}\), \(i \in S \), and \(\mathcal {A} \) never submitted m as part of a signature query.

Definition 6

We say that \(\mathcal {A}\) is a \((\tau , q_\mathrm {G}, q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger for an accountable-subgroup multisignature scheme if it runs in time \(\tau \), makes \(q_\mathrm {G}\) group setup queries, \(q_\mathrm {S}\) signing queries, \(q_\mathrm {H}\) random-oracle queries, and wins the above game with probability at least \(\epsilon \). An ASM scheme is \((\tau , q_\mathrm {G}, q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable if no \((\tau , q_\mathrm {G}, q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger exists.

4.2 Our ASM Scheme

Key generation and key aggregation in our ASM scheme are the same as for our aggregatable multi-signature scheme in the previous section. We construct an ASM scheme by letting all signers, during group setup, contribute to multi-signatures on the aggregate public key and the index of every signer, such that the i-th signer in \(\mathcal {PK}\) has a “membership key” which is a multi-signature on \(( apk , i)\). On a high level, an accountable-subgroup multi-signature now consists of the aggregation of the individual signers’ signatures and their membership keys, together with the aggregate public key of the subgroup \( S \). To verify whether a subgroup \( S \) signed a message, one checks that the signature is a valid aggregate of a signature on the message under the aggregate public key of the subgroup and of the membership keys corresponding to \( S \).

The scheme uses hash functions \(\mathsf {H} _0 : \{0,1\}^* \rightarrow \mathbb {G}_\mathrm {1}\), \(\mathsf {H} _1 : \{0,1\}^* \rightarrow \mathbb {Z} _q\), and \(\mathsf {H} _2 : \{0,1\}^* \rightarrow \mathbb {G}_\mathrm {1}\). Parameter generation, key generation, and key aggregation are the same as for the aggregate multi-signature scheme in Sect. 3.

Group Setup. \(\mathsf {GSetup}( sk _i, \mathcal {PK}= \{ pk _1,\ldots , pk _n\})\) checks that \( pk _i \in \mathcal {PK}\) and that i is the index of \( pk _i\) in \(\mathcal {PK}\). Signer i computes the aggregate public key \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\) as well as \(a_i \leftarrow \mathsf {H} _1( pk _i, \mathcal {PK})\). It then sends \(\mu _{j,i} = \mathsf {H} _2( apk ,j)^{a_i \cdot sk _i}\) to signer j for \(j\ne i\), or simply publishes these values. After having received \(\mu _{i,j}\) from all other signers \(j \ne i\), it computes \(\mu _{i,i} \leftarrow \mathsf {H} _2( apk ,i)^{a_i \cdot sk_i}\) and returns the membership key \( mk _i \leftarrow \prod _{j=1}^n \mu _{i,j}\). Note that if all signers behave honestly, we have that

$$\begin{aligned} e( mk _i, g_\mathrm {2}) = e(\mathsf {H} _2( apk ,i), apk ). \end{aligned}$$

In other words, this \( mk _i\) is a valid multi-signature on the message \(( apk ,i)\) by all n parties, as defined in the scheme in Sect. 3.1.

Signing. \(\mathsf {Sign}( par , \mathcal {PK}, S , sk _i, mk _i, m)\) computes \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\) and

$$ s_i \leftarrow \mathsf {H} _0( apk ,m)^{ sk _i} \cdot mk _i, $$

and sends \(( pk _i,s_i)\) to a designated combiner (either one of the members of \( S \) or an external party). The combiner computes

$$ PK \leftarrow \prod _{j \in S } pk _j,\qquad s \leftarrow \prod _{j \in S } s_j, $$

and outputs the multisignature \(\sigma \mathrel {\mathop :}=( PK ,s)\). Note that the set \( S \) does not have to be fixed at the beginning of the protocol, but can be determined as partial signatures are collected.

Verification. \(\mathsf {Vf}( par , apk , S , m, \sigma )\) parses \(\sigma \) as \(( PK ,s)\) and outputs 1 iff

$$ e(\mathsf {H} _0( apk ,m), PK ) \cdot e(\prod _{j \in S } \mathsf {H} _2( apk ,j), apk ) \;{\mathop {=}\limits ^{{\scriptscriptstyle ?}}}\; e(s, g_\mathrm {2}) $$

and \( S \) is a set authorized to sign.

The presented ASM scheme satisfies correctness. If parties honestly execute the group setup and signing protocols, we have \( PK = g_\mathrm {2}^{\sum _{i \in S } sk _i}\), \( apk = g_\mathrm {2}^{\sum _{i = 1, \ldots , n} a_i \cdot sk _i}\), and \(s = \mathsf {H} _0( apk ,m)^{\sum _{i \in S } sk _i} \cdot \prod _{i \in S } \mathsf {H} _2( apk , i)^{\sum _{j = 1, \ldots , n} a_j \cdot sk _j}\), which passes verification:

$$\begin{aligned} e(s, g_\mathrm {2})&= e\big (\mathsf {H} _0( apk ,m)^{\sum _{i \in S } sk _i} \cdot \prod _{i \in S } \mathsf {H} _2( apk , i)^{\sum _{j = 1, \ldots , n} a_j \cdot sk _j}, g_\mathrm {2}\big ) \\&= e\big (\mathsf {H} _0( apk ,m), PK \big ) \cdot e\big (\prod _{i \in S } \mathsf {H} _2( apk ,i), g_\mathrm {2}^{\sum _{j = 1, \ldots , n} a_j \cdot sk _j}\big ) \\&= e\big (\mathsf {H} _0( apk ,m), PK \big ) \cdot e\big (\prod _{i \in S } \mathsf {H} _2( apk ,i), apk \big ) \end{aligned}$$
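The same correctness computation can be replayed in the toy exponent model used earlier (ours, insecure, illustration only); the sketch below runs the group setup, lets a subgroup \( S \) sign, and checks the verification equation of the scheme.

```python
# Toy exponent model (insecure) of the ASM scheme of Sect. 4.2.
import hashlib, secrets

q = 2**255 - 19
def H(tag: bytes, *parts) -> int:
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(repr(p).encode())
    return int.from_bytes(h.digest(), "big") % q
pairing = lambda a, b: a * b % q

n = 4
sks = [secrets.randbelow(q) for _ in range(n)]
PK = list(sks)                                     # pk_i = g2^{sk_i}
a = [H(b"H1", pk, tuple(PK)) for pk in PK]
apk = sum(ai * pk for ai, pk in zip(a, PK)) % q

# Group setup: signer i's membership key mk_i = prod_j mu_{i,j},
# where mu_{i,j} = H2(apk, i)^{a_j * sk_j} is contributed by signer j.
mk = [sum(H(b"H2", apk, i) * a[j] * sks[j] for j in range(n)) % q for i in range(n)]

def sign_share(i, m):                              # s_i = H0(apk,m)^{sk_i} * mk_i
    return (H(b"H0", apk, m) * sks[i] + mk[i]) % q

def combine(S, m):                                 # (PK, s) for the subgroup S
    return sum(PK[i] for i in S) % q, sum(sign_share(i, m) for i in S) % q

def verify(S, m, sig):
    PK_S, s = sig
    lhs = (pairing(H(b"H0", apk, m), PK_S)
           + pairing(sum(H(b"H2", apk, j) for j in S) % q, apk)) % q
    return lhs == pairing(s, 1)                    # e(H0,PK) e(prod H2, apk) =? e(s, g2)

S = {0, 2, 3}
assert verify(S, b"block header", combine(S, b"block header"))
```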

4.3 Security of Our ASM Scheme

Theorem 3

Our ASM scheme is unforgeable under the hardness of the computational \(\psi \)-co-Diffie-Hellman problem in the random-oracle model. More precisely, it is \((\tau , q_\mathrm {G}, q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable in the random-oracle model if \(q > 8q_\mathrm {H}/\epsilon \) and if \(\mathsf {\psi \text{- }co\text{- }CDH}\) is \((\tau ', \epsilon ')\)-hard for

$$\begin{aligned} \tau '&\;=\; (\tau + \tau '') \cdot \frac{8 q_\mathrm {H}^2 q }{ (q- q_\mathrm {S}- q_\mathrm {H}) \cdot \epsilon } \cdot \ln \frac{8q_\mathrm {H}q}{(q-q_\mathrm {S}- q_\mathrm {H}) \cdot \epsilon },\\ \tau ''&\;=\; q_\mathrm {H}\cdot \max {(\tau _\mathrm {exp_2^{l}}, \tau _\mathrm {exp_1^{2}})} + (l q_\mathrm {G}+ q_\mathrm {S}) \cdot \tau _\mathrm {exp_1}+ q_\mathrm {S}\cdot \tau _\mathrm {exp_2^{l}} + 2 \cdot \tau _\mathrm {pair}+ \tau _\mathrm {exp_1^{3}}, \\ \epsilon '&\;=\; \frac{\epsilon }{8 q_\mathrm {H}} - \frac{q_\mathrm {S}+ q_\mathrm {H}}{8qq_\mathrm {H}}, \end{aligned}$$

where l is the maximum number of signers involved in any group setup, \(\tau _\mathrm {exp_1}\) and \(\tau _\mathrm {exp_2}\) denote the time required to compute exponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively, and \(\tau _\mathrm {exp_1^{i}}\) and \(\tau _\mathrm {exp_2^{i}}\) denote the time required to compute i-multi-exponentiations in \(\mathbb {G}_\mathrm {1}\) and \(\mathbb {G}_\mathrm {2}\) respectively, and \(\tau _\mathrm {pair}\) denotes the time required to compute a pairing operation.

Proof

Given a forger \(\mathcal {F} \) against the ASM scheme, we construct a wrapper algorithm \(\mathcal {A} \) that can be used by the generalized forking algorithm \(\mathcal {GF}_\mathcal {A} \). We then give an adversary \(\mathcal {B} \) that can solve the \(\mathsf {\psi \text{- }co\text{- }CDH}\) problem by running \(\mathcal {GF}_\mathcal {A} \). The proof essentially combines techniques related to the non-extractability of BGLS aggregate signatures [13, 21] with Maxwell et al.’s key aggregation technique [36].

Given a forger \(\mathcal {F} \), consider the following algorithm \(\mathcal {A} \). On input \( in = (q,\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2}, \mathbb {G}_\mathrm {t},e,g_\mathrm {1}, g_\mathrm {2}, A = g_\mathrm {1}^\alpha , B_1 = g_\mathrm {1}^\beta , B_2 = g_\mathrm {2}^{\beta })\) and randomness \(f = (\rho ,h_1,\ldots ,h_{q_\mathrm {H}})\), and given access to a homomorphism oracle \(\mathcal {O}^\mathtt{{\psi }} (\cdot )\), \(\mathcal {A} \) proceeds as follows. It guesses a random index \(k \leftarrow _{\$}\{1,\ldots ,q_\mathrm {H}\}\) and runs \(\mathcal {F} \) on input \( par \leftarrow (q,\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t}, e,g_\mathrm {1}, g_\mathrm {2})\) and \( pk ^* \leftarrow B_2\), answering its oracle queries using initially empty lists \(L_0, L_2\) as follows:

  • \(\mathsf {H} _1(x)\): If x can be parsed as \(( pk ,\mathcal {PK})\) and \( pk ^* \in \mathcal {PK}\) and \(\mathcal {F} \) did not make any previous query \(\mathsf {H} _1( pk ',\mathcal {PK})\), then it sets \(\mathsf {H} _1( pk ^*,\mathcal {PK})\) to the next unused value \(h_i\) and, for all \( pk \in \mathcal {PK}\setminus \{ pk ^*\}\), assigns a random value in \(\mathbb {Z} _q\) to \(\mathsf {H} _1( pk ,\mathcal {PK})\). Let \( apk \leftarrow \prod _{ pk \in \mathcal {PK}} pk ^{\mathsf {H} _1( pk ,\mathcal {PK})}\) and let i be the index of \( pk ^*\) in \(\mathcal {PK}\). If \(\mathcal {F} \) previously made any random-oracle or signing queries involving \( apk \), then we say that event \(\mathbf {bad}\) happened and \(\mathcal {A} \) gives up by outputting \((0,\bot )\). If \(\mathsf {H} _1(x)\) did not yet get assigned a value, then \(\mathcal {A} \) assigns a random value \(\mathsf {H} _1(x) \leftarrow _{\$}\mathbb {Z} _q\).

  • \(\mathsf {H} _2(x)\): If x can be parsed as \(( apk , i)\) such that there exist defined entries for \(\mathsf {H} _1\) such that \( apk = \prod _{ pk \in \mathcal {PK}} pk ^{\mathsf {H} _1( pk ,\mathcal {PK})}\), \( pk ^* \in \mathcal {PK}\), and i is the index of \( pk ^*\) in \(\mathcal {PK}\), then \(\mathcal {A} \) chooses \(r \leftarrow _{\$}\mathbb {Z} _q\), adds \((( apk , i), r, 1)\) to \(L_2\) and assigns \(\mathsf {H} _2(x) \leftarrow g_\mathrm {1}^r A^{-1/a_i}\) where \(a_i = \mathsf {H} _1( pk ^*,\mathcal {PK})\). If not, then \(\mathcal {A} \) chooses \(r \leftarrow _{\$}\mathbb {Z} _q\), adds \((x, r, 0)\) to \(L_2\) and assigns \(\mathsf {H} _2(x) \leftarrow g_\mathrm {1}^r\).

  • \(\mathsf {H} _0(x)\): If this is \(\mathcal {F} \)’s k-th random-oracle query, then \(\mathcal {A} \) sets \(m^* \leftarrow x\), hoping that \(\mathcal {F} \) will forge on message \(m^*\). It then chooses \(r \leftarrow _{\$}\mathbb {Z} _q\), adds \((m^*,r, 1)\) to \(L_0\) and assigns \(\mathsf {H} _0(m^*) \leftarrow g_\mathrm {1}^{r}\). If this is not \(\mathcal {F} \)’s k-th random-oracle query, then \(\mathcal {A} \) chooses \(r \leftarrow _{\$}\mathbb {Z} _q\), adds \((x, r, 0)\) to \(L_0\) and assigns \(\mathsf {H} _0(x) \leftarrow g_\mathrm {1}^{r} A\).

  • \(\mathsf {GSetup}(\mathcal {PK})\): If \( pk ^* \not \in \mathcal {PK}\), then \(\mathcal {A} \) ignores this query. Otherwise, it computes \( apk \leftarrow \prod _{ pk \in \mathcal {PK}} pk ^{\mathsf {H} _1( pk ,\mathcal {PK})}\), internally simulating the random-oracle queries \(\mathsf {H} _1( pk , \mathcal {PK})\) if needed. It also internally simulates queries \(\mathsf {H} _2( apk ,j)\) for \(j=1,\ldots ,|\mathcal {PK}|, j \ne i\), to create entries \((( apk , j), r_j, 0) \in L_2\), as well as \(a_i \leftarrow \mathsf {H} _1( pk ^*, \mathcal {PK})\), where i is the index of \( pk ^*\) in \(\mathcal {PK}\). Since \(\mathsf {H} _2( apk ,j) = g_\mathrm {1}^{r_j}\), \(\mathcal {A} \) can simulate the values \(\mu _{j,i} = \mathsf {H} _2( apk ,j)^{a_i \cdot sk ^*} = \mathsf {H} _2( apk ,j)^{a_i \cdot \beta }\) for \(j \ne i\) as \(\mu _{j,i} \leftarrow B_1^{a_i \cdot r_j}\).

    After having received \(\mu _{i,j}\) from all other signers \(j \ne i\), \(\mathcal {A} \) internally stores \(\mu _ apk \leftarrow \prod _{j \ne i} \mu _{i,j}\).

  • \(\mathsf {Sign}(\mathcal {PK}, S , m)\): If \(\mathcal {F} \) did not perform group setup for \(\mathcal {PK}\), then \(\mathcal {A} \) ignores this query. If \(m=m^*\), then \(\mathcal {A} \) gives up by outputting \((0,\bot )\). Otherwise, it recomputes \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\) and looks up \((( apk ,m),r_0, 0) \in L_0\) and \((( apk , i), r_2, 1) \in L_2\), internally simulating queries \(\mathsf {H} _0( apk ,m)\) and \(\mathsf {H} _2( apk ,i)\) to create them if needed, where i is the index of \( pk ^*\) in \(\mathcal {PK}\). Now \(\mathcal {A} \) must simulate the partial signature \(s_i = \mathsf {H} _0( apk ,m)^{ sk ^*} \cdot \mu _ apk \cdot \mathsf {H} _2( apk ,i)^{a_i \cdot sk ^*}\), where \(a_i = \mathsf {H} _1( pk ^*,\mathcal {PK})\). From the way \(\mathcal {A} \) responded to random-oracle queries, we know that \(\mathsf {H} _0( apk ,m) = g_\mathrm {1}^{r_0} A = g_\mathrm {1}^{r_0 + \alpha }\) and \(\mathsf {H} _2( apk ,i) = g_\mathrm {1}^{r_2} A^{-1/a_i} = g_\mathrm {1}^{r_2-\alpha /a_i}\), so that \(\mathcal {A} \) has to simulate \(s_i = g_\mathrm {1}^{\beta (r_0 + \alpha )} \cdot \mu _ apk \cdot g_\mathrm {1}^{\beta (a_i r_2 - \alpha )} = \mu _ apk \cdot g_\mathrm {1}^{\beta (r_0+a_i r_2)}\), which it can easily compute as \(s_i \leftarrow \mu _ apk \cdot B_1^{r_0+a_i r_2}\).

When \(\mathcal {F} \) eventually outputs its forgery \((\mathcal {PK}, S , m, \sigma )\), \(\mathcal {A} \) recomputes \( apk ^* \leftarrow \mathsf {KAg}(\mathcal {PK}) = \prod _{j=1}^{|\mathcal {PK}|} pk _j^{a_j}\), where \( pk _j\) is the j-th public key in \(\mathcal {PK}\) and \(a_j = \mathsf {H} _1( pk _j,\mathcal {PK})\), and checks that the forgery is valid, i.e., \(\mathsf {Vf}( par , apk ^*, S , m, \sigma )=1\), \( pk ^* \in \mathcal {PK}\), \(i \in S \) where i is the index of \( pk ^*\) in \(\mathcal {PK}\), and \(\mathcal {F} \) never made a signing query for m. If any of these checks fails, \(\mathcal {A} \) outputs \((0, \bot )\). If \(m \ne m^*\), then \(\mathcal {A} \) also outputs \((0, \bot )\). Else, observe that \(\sigma = ( PK ,s)\) such that

$$ s \;=\; \mathsf {H} _0( apk ^*,m^*)^{\log PK } \cdot \prod _{j \in S } \mathsf {H} _2( apk ^*,j)^{\log apk ^*}. $$

Because of how \(\mathcal {A} \) simulated \(\mathcal {F} \)’s random-oracle queries, it can look up \((( apk ^*,m^*),r_0, 1) \in L_0\), \((( apk ^*,j),r_{2,j},0) \in L_2\) for \(j \in S \setminus \{i\}\), and \((( apk ^*,i),r_{2,i},1) \in L_2\), where i is the index of \( pk ^*\) in \(\mathcal {PK}\), such that

$$\begin{aligned} \mathsf {H} _0( apk ^*,m^*)&= g_\mathrm {1}^{r_0} \\ \mathsf {H} _2( apk ^*,j)&= g_\mathrm {1}^{r_{2,j}} \text { for } j \in S \setminus \{i\} \\ \mathsf {H} _2( apk ^*, i)&= g_\mathrm {1}^{r_{2,i}} A^{-1/a_i} \end{aligned}$$

so that we have that

$$ s = g_\mathrm {1}^{\log PK \cdot r_0} \cdot g_\mathrm {1}^{\log apk ^* \cdot \sum _{j \in S } r_{2,j}} \cdot A^{- \log apk ^* / a_i} $$

If we let

$$ t \leftarrow \big ( \mathcal {O}^\mathtt{{\psi }} ( PK )^{r_0} \cdot \mathcal {O}^\mathtt{{\psi }} ( apk ^*)^{\sum _{j \in S } r_{2,j}} \cdot s^{-1} \big )^{a_i} $$

then we have that

$$ t \;=\; A^{\log apk ^*} \;=\; A^{\sum _{j=1}^{|\mathcal {PK}|} a_j \log pk _j}. $$

If I is the index such that \(\mathsf {H} _1( pk ^*,\mathcal {PK}) = h_I\), then algorithm \(\mathcal {A} \) outputs \((I, (t, \mathcal {PK}, a_1,\ldots ,a_n))\).

\(\mathcal {A} \)’s runtime is \(\mathcal {F} \)’s runtime plus the additional computation \(\mathcal {A} \) performs. Let \(q_\mathrm {H}\) denote the total number of hash queries \(\mathcal {F} \) makes, i.e., the queries to \(\mathsf {H} _0\), \(\mathsf {H} _1\), and \(\mathsf {H} _2\) combined. To answer an \(\mathsf {H} _1\) query, \(\mathcal {A}\) computes \( apk \), which costs at most \(\tau _\mathrm {exp_2^{l}}\) for groups consisting of up to l signers. To answer an \(\mathsf {H} _0\) or \(\mathsf {H} _2\) query, \(\mathcal {A}\) performs at most a 2-multi-exponentiation in \(\mathbb {G}_\mathrm {1}\), costing \(\tau _\mathrm {exp_1^{2}}\). \(\mathcal {A}\) therefore spends at most \(q_\mathrm {H}\cdot \max {(\tau _\mathrm {exp_2^{l}}, \tau _\mathrm {exp_1^{2}})}\) answering hash queries. For every group-setup query with l signers, \(\mathcal {A}\) computes \( apk \) costing \(\tau _\mathrm {exp_2^{l}}\), and \(\mathcal {A}\) computes the values \(\mu _{j, i}\) costing \((l-1)\tau _\mathrm {exp_1}\), meaning \(\mathcal {A}\) spends \(q_\mathrm {G}\cdot (l-1) \tau _\mathrm {exp_1}\) answering group-setup queries. For signing queries with a \(\mathcal {PK}\) of size at most l, \(\mathcal {A} \) computes \( apk \) costing time \(\tau _\mathrm {exp_2^{l}}\), and one exponentiation in \(\mathbb {G}_\mathrm {1}\) costing \(\tau _\mathrm {exp_1}\), giving a total of \(q_\mathrm {S}\cdot (\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1})\). Finally, \(\mathcal {A} \) computes the output values, which involves verifying the forgery (costing \(2 \tau _\mathrm {pair}\)) and computing t (costing \(\tau _\mathrm {exp_1^{3}}\)), giving \(\mathcal {A}\) a total runtime of \(\tau + q_\mathrm {H}\cdot \max {(\tau _\mathrm {exp_2^{l}}, \tau _\mathrm {exp_1^{2}})} + q_\mathrm {G}\cdot (l-1) \tau _\mathrm {exp_1}+ q_\mathrm {S}\cdot (\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + 2 \tau _\mathrm {pair}+ \tau _\mathrm {exp_1^{3}}\).

\(\mathcal {A}\) is successful if the \(\mathbf {bad}\) event does not happen, if it guesses the index of the forgery correctly, and if \(\mathcal {F}\) successfully forges. Event \(\mathbf {bad}\) happens with probability at most \((q_\mathrm {S}+ q_\mathrm {H})/q\) for every hash query, so it happens with probability at most \(q_\mathrm {H}(q_\mathrm {S}+ q_\mathrm {H}) / q\). \(\mathcal {A}\) guesses the forgery index correctly with probability \(1/q_\mathrm {H}\), and \(\mathcal {F}\) forges with probability \(\epsilon \), giving \(\mathcal {A}\) a success probability of at least \((1-(q_\mathrm {S}+ q_\mathrm {H})/q) \cdot \epsilon /q_\mathrm {H}\).

Using the generalized forking lemma from Lemma 1, we can build an algorithm \(\mathcal {B} \) that solves the \(\mathsf {\psi \text{- }co\text{- }CDH}\) problem by, on input \((A = g_\mathrm {1}^\alpha , B_1 = g_\mathrm {1}^\beta , B_2 = g_\mathrm {2}^\beta )\), running \(\mathcal {GF}_\mathcal {A} (q,\mathbb {G}_\mathrm {1},\mathbb {G}_\mathrm {2},\mathbb {G}_\mathrm {t},e,g_\mathrm {1}, g_\mathrm {2}, A, B_1, B_2)\) to obtain two outputs \((I, (t, \mathcal {PK}, a_1,\ldots ,a_n))\) and \((I, (t', \mathcal {PK}', a'_1,\ldots ,a'_n))\), giving \(\mathcal {GF}_\mathcal {A} \) access to the homomorphism oracle \(\mathcal {O}^\mathtt{{\psi }} (\cdot )\) offered by \(\mathsf {\psi \text{- }co\text{- }CDH}\). Since the two executions of \(\mathcal {A} \) are identical up to the first query \(\mathsf {H} _1( pk ,\mathcal {PK})\) involving the forged set of signers \(\mathcal {PK}\), we have that \(\mathcal {PK}=\mathcal {PK}'\). Also, from the way \(\mathcal {A} \) assigns values to outputs of \(\mathsf {H} _1\), one can see that \(a_j = a'_j\) for \(j \ne i\) and \(a_i \ne a'_i\), where i is the index of \( pk ^*\) in \(\mathcal {PK}\). We therefore have that

$$ t / t' = A^{(a_i-a'_i) \log pk ^*} = g_\mathrm {1}^{\alpha \beta {(a_i-a'_i)}}, $$

so that \(\mathcal {B} \) can output its solution \(g_\mathrm {1}^{\alpha \cdot \beta } = (t/t')^{1/(a_i-a'_i)}\).

Using Lemma 1, we know that if \(q > 8 q_\mathrm {H}/ \epsilon \), then \(\mathcal {B} \) runs in time at most \((\tau + q_\mathrm {H}\cdot \max {(\tau _\mathrm {exp_2^{l}}, \tau _\mathrm {exp_1^{2}})} + q_\mathrm {G}\cdot (l-1) \tau _\mathrm {exp_1}+ q_\mathrm {S}\cdot (\tau _\mathrm {exp_2^{l}} + \tau _\mathrm {exp_1}) + 2 \tau _\mathrm {pair}+ \tau _\mathrm {exp_1^{3}}) \cdot 8 q_\mathrm {H}^2 / ((1-(q_\mathrm {S}+ q_\mathrm {H})/q) \cdot \epsilon ) \cdot \ln (8q_\mathrm {H}/((1-(q_\mathrm {S}+ q_\mathrm {H})/q) \cdot \epsilon ))\) and succeeds with probability \((1-(q_\mathrm {S}+ q_\mathrm {H})/q) \cdot \epsilon /(8 q_\mathrm {H})\), proving the bounds in the theorem.

4.4 Partial Aggregation of ASM Signatures

Looking at the description of the ASM scheme above, one might expect that the second components of several such ASM signatures can be further aggregated. The first components are needed separately for verification, though, so while we do not obtain full aggregation to constant-size signatures, we can shave a factor of two off the total signature length.

The straightforward way to partially aggregate ASM signatures is insecure, however, because the link between membership keys and signed messages is lost. For example, an aggregate ASM signature \(( PK _1, PK _2,s)\) for a set of tuples \(\{( apk , S _1, m _1), ( apk , S _2, m _2)\}\) would also be a valid signature for \(\{( apk , S _1, m _2), ( apk , S _2, m _1)\}\), leading to easy forgery attacks.

We show that a variation on Maxwell et al.’s key aggregation technique [36] can be used to create a provably secure scheme. We define an aggregate accountable-subgroup multi-signature (AASM) scheme as an ASM scheme with two additional algorithms \(\mathsf {SAg}\) and \(\mathsf {AVf}\), where \(\mathsf {SAg}\) takes as input a set of tuples \(\{ ( apk _i, S _i, m _i,\sigma _i)_{i=1}^n \}\) where \( apk _i\) is an aggregate public key, \( S _i\) is a set of signers, \( m _i\) is a message, and \(\sigma _i\) is an accountable-subgroup multi-signature, and outputs an aggregate multi-signature \(\varSigma \), while \(\mathsf {AVf}\) takes a set of tuples \(( apk , S , m )\) and an AASM signature \(\varSigma \), and outputs 0 or 1 indicating that the signature is invalid or valid, respectively.

Apart from satisfying the natural correctness definition, AASM schemes must satisfy an unforgeability notion that is similar to that of ASM schemes, but where the adversary outputs a signature \(\varSigma \), a set of tuples \(\{( apk _i, S _i, m _i)\}\), a set of public keys \(\mathcal {PK}^*\), a set of signers \( S ^*\), and a message \( m ^*\). The adversary wins if \( pk ^* \in \mathcal {PK}^*\), it made no signing queries on \( m ^*\), and \(\mathsf {AVf}( par , \{( apk _i, S _i, m _i)\} \cup \{(\mathsf {KAg}(\mathcal {PK}^*), S ^*, m ^*)\}, \varSigma ) = 1\).

Our scheme uses the \(\mathsf {Pg}\), \(\mathsf {Kg}\), \(\mathsf {GSetup}\), \(\mathsf {Sign}\), \(\mathsf {KAg}\), and \(\mathsf {Vf}\) algorithms of our ASM scheme, and adds the following two algorithms as well as a hash function \(\mathsf {H} _3: \{0,1\}^* \rightarrow \mathbb {Z} _q\).

Signature Aggregation. \(\mathsf {SAg}( par , \{( apk _i, S _i, m _i, \sigma _i)_{i = 1}^n\})\) parses \(\sigma _i\) as \(( PK _i, s_i)\) and for \(i=1,\ldots ,n\) computes

$$\begin{aligned} b_i \leftarrow \mathsf {H} _3(( apk _i, S _i, m _i, PK _i), \{( apk _j, S _j, m _j, PK _j)_{j=1}^n\}). \end{aligned}$$

It aggregates the signatures by computing \(s \leftarrow \prod _{i=1}^n s_i^{b_i}\), and outputs \(\varSigma \leftarrow ( PK _1,\ldots , PK _n, s)\).

Aggregate Signature Verification. \(\mathsf {AVf}( par , \{( apk _i, S _i, m _i)_{i = 1}^n\}, \varSigma )\) parses \(\varSigma \) as \(( PK _1,\ldots , PK _n, s)\), computes

$$\begin{aligned} b_i \leftarrow \mathsf {H} _3(( apk _i, S _i, m _i, PK _i), \{( apk _j, S _j, m _j, PK _j)_{j=1}^n\}) \end{aligned}$$

for \(i=1,\ldots ,n\), and outputs 1 if and only if

$$ \prod _{i=1}^n \bigg ( e(\mathsf {H} _0( apk _i,m_i), PK _i^{b_i}) \cdot e(\prod _{j \in S _i} \mathsf {H} _2( apk _i,j), apk _i^{b_i}) \bigg ) \;{\mathop {=}\limits ^{{\scriptscriptstyle ?}}}\; e(s, g_\mathrm {2}). $$
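
The aggregation and verification algorithms translate directly into code. The following is a sketch over the same hypothetical pairing interface as before (`e`, `H0`, `H2`, `g2` are assumed callables and values, `H3` returns an integer, and `*`/`**` denote the group operation and exponentiation); it is illustrative only, not a hardened implementation.

```python
from functools import reduce
from operator import mul

def sag(H3, tuples):
    """tuples: list of (apk_i, S_i, m_i, (PK_i, s_i)); returns Sigma = (PK list, s)."""
    keyed = [(apk, S, m, PK) for (apk, S, m, (PK, _s)) in tuples]
    s = None
    for apk, S, m, (PK, s_i) in tuples:
        b = H3((apk, S, m, PK), keyed)            # coefficient b_i
        s = s_i ** b if s is None else s * s_i ** b
    return [PK for (_apk, _S, _m, (PK, _s)) in tuples], s

def avf(e, H0, H2, H3, g2, tuples, Sigma):
    """tuples: list of (apk_i, S_i, m_i); Sigma = (PK list, s)."""
    PKs, s = Sigma
    keyed = [(apk, S, m, PK) for (apk, S, m), PK in zip(tuples, PKs)]
    lhs = None
    for (apk, S, m), PK in zip(tuples, PKs):
        b = H3((apk, S, m, PK), keyed)
        h2_prod = reduce(mul, (H2(apk, j) for j in S))
        term = e(H0(apk, m), PK ** b) * e(h2_prod, apk ** b)
        lhs = term if lhs is None else lhs * term
    return lhs == e(s, g2)
```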

Theorem 4

Our AASM scheme is unforgeable in the random-oracle model if the underlying ASM scheme is unforgeable in the random-oracle model. More precisely, the concrete unforgeability bounds follow from Lemma 1 and the reduction below, where \(l\) is the maximum number of multi-signatures that can be aggregated and \(\tau _\mathrm {exp_1}\) denotes the time required to compute an exponentiation in \(\mathbb {G}_\mathrm {1}\).

Proof

Given a forger \(\mathcal {F} \) for the AASM scheme, we construct a forger \(\mathcal {G} \) for the underlying ASM scheme as follows. We first build a wrapper algorithm \(\mathcal {A} \) to be used in the forking lemma, and then construct \(\mathcal {G} \) based on \(\mathcal {GF}_\mathcal {A} \). We actually use a slight variation on the forking lemma (Lemma 1) by giving \(\mathcal {A} \) access to oracles. To ensure that the executions of \(\mathcal {A} \) are identical up to the forking points \(j_i\), the forking algorithm \(\mathcal {GF}_\mathcal {A} \) remembers the oracle responses during \(\mathcal {A} \)’s first run, and returns the same responses in all subsequent runs up to the respective forking points \(j_i\). One can see that the same bounds hold as in Lemma 1.

Algorithm \(\mathcal {A} \), on input a target public key \( pk ^*\) and \(f = (\rho , h_1,\ldots ,h_{q_\mathrm {H}})\) and given access to oracles \(\mathsf {H} _0\), \(\mathsf {H} _1\), \(\mathsf {H} _2\), \(\mathsf {GSetup}\), and \(\mathsf {Sign}\), runs the forger \(\mathcal {F} \) on input \( pk ^*\) by relaying queries and responses for the mentioned oracles, and responding to \(\mathsf {H} _3\) queries as:

  • \(\mathsf {H} _3(x)\): If x can be parsed as \((y,\mathcal {APK})\) with \(\mathcal {APK}= \{( apk _i, S _i, m _i, PK _i)_{i=1}^n \}\) and \(\mathcal {F} \) did not make any previous query \(\mathsf {H} _3(y',\mathcal {APK})\), then \(\mathcal {A} \) guesses an index \(i^* \leftarrow _{\$}\{1,\ldots ,n\}\) and sets \(\mathsf {H} _3(( apk _{i^*}, S _{i^*}, m _{i^*}, PK _{i^*}),\mathcal {APK})\) to the next unused value from \(h_1,\ldots ,h_{q_\mathrm {H}}\). For all other indices \(j \in \{1,\ldots ,n\} \setminus \{i^*\}\), it assigns random values \(\mathsf {H} _3(( apk _{j}, S _{j}, m _{j}, PK _{j}),\mathcal {APK}) \leftarrow _{\$}\mathbb {Z} _q\). If \(\mathsf {H} _3(x)\) did not yet get assigned a value, then \(\mathcal {A} \) assigns a random value \(\mathsf {H} _3(x) \leftarrow _{\$}\mathbb {Z} _q\). Finally, \(\mathcal {A} \) makes queries \(\mathsf {H} _0( apk _i, m _i)\) and \(\mathsf {H} _2( apk _i,j)\) for all \(i=1,\ldots ,n\) and \(j \in S _i\), just to fix their values at this point.

When \(\mathcal {F} \) outputs a valid forgery \(\varSigma \), \(\{( apk _i, S _i, m _i)_{i=1}^n\}\), \(\mathcal {PK}^*\), \( S ^*\), and \( m ^*\), \(\mathcal {A} \) checks in its records for \(\mathsf {H} _3\) whether \(\mathsf {H} _3(( apk ^*, S ^*, m ^*, PK ^*),\mathcal {APK}^*)\) was the random-oracle query for which \(\mathcal {A} \) returned a value from \(h_1,\ldots ,h_{q_\mathrm {H}}\), where \(\mathcal {APK}^* = \{( apk _i, S _i, m _i, PK _i)_{i=1}^n, ( apk ^*, S ^*, m ^*, PK ^*)\}\) and \( apk ^* \leftarrow \mathsf {KAg}(\mathcal {PK}^*)\). If so, then let \(j_\mathrm {f}\) be the index of that query and let \(b^*\) be the response to that query, and let \(\mathcal {A} \) return \((\{j_\mathrm {f}\}, \{(\mathcal {PK}^*, S ^*, m ^*, PK ^*,s,b^*)\})\). Otherwise, \(\mathcal {A} \) returns \((\emptyset , \emptyset )\). The success probability of \(\mathcal {A} \) is \(\epsilon _\mathcal {A} \ge \epsilon /l\), while its running time is \(\tau _\mathcal {A} = \tau + O(lq_\mathrm {H})\).

For the forgery to be valid, it must hold that

$$\begin{aligned} s \;=\;&\mathsf {H} _0( apk ^*,m^*)^{b^* \log PK ^*} \cdot \prod _{j \in S ^*} \mathsf {H} _2( apk ^*,j)^{b^* \log apk ^*} \nonumber \\&\cdot \prod _{i=1}^n \bigg ( \mathsf {H} _0( apk _i,m_i)^{\log PK _i} \cdot \prod _{j \in S _i} \mathsf {H} _2( apk _i,j)^{\log apk _i} \bigg )^{b_i}, \end{aligned}$$
(4)

where \(b_i = \mathsf {H} _3(( apk _i, S _i, m _i, PK _i),\mathcal {APK}^*)\).

Now consider the forger \(\mathcal {G} \) against that runs \(\mathcal {GF}_\mathcal {A} \) to obtain two outputs \((\mathcal {PK}^*, S ^*, m ^*, PK ^*,s,b^*)\) and \((\mathcal {PK}^*, S ^*, m ^*, PK ^*,s',b^*{}')\). From the way \(\mathcal {A} \) simulated \(\mathcal {F} \)’s oracle queries, one can see that all variables and random-oracle responses in Eq.  (4) are the same in both executions of \(\mathcal {A} \), except that \(s \ne s'\) and \(b^* \ne b^*{}'\). By dividing both equations, we have that

$$ s/s' \;=\; \mathsf {H} _0( apk ^*,m^*)^{(b^*-b^*{}') \log PK ^*} \cdot \prod _{j \in S ^*} \mathsf {H} _2( apk ^*,j)^{(b^*-b^*{}') \log apk ^*}, $$

so that \(\mathcal {G} \) can output \(\mathcal {PK}^*, S ^*, m ^*, \sigma = ( PK ^*, (s/s')^{1/(b^*-b^*{}')})\) as its forgery against the ASM scheme. The bounds stated by the theorem follow from Lemma 1.

5 A Scheme from Discrete Logarithms

The basic key aggregation technique of our pairing-based schemes is due to Maxwell et al. [36], who presented a Schnorr-based multi-signature scheme that uses the same key aggregation technique and that also saves one round of interaction in the signing protocol with respect to Bellare-Neven’s scheme [9]. Unfortunately, their security proof was found to be flawed due to a problem in the simulation of the signing protocol [22]. In the following, we recover Maxwell et al.’s key aggregation technique for ordinary (i.e., non-pairing-friendly) curves by combining it with Bellare-Neven’s preliminary round of hashes. The resulting scheme achieves the same space savings as Maxwell et al.’s original scheme, but is provably secure under the hardness of the discrete-logarithm assumption. Independently from our work, Maxwell et al. [37] revised their work to use the same protocol we present here.

5.1 Description of Our Discrete-Logarithm Scheme

Our discrete-logarithm based multi-signature scheme uses hash functions \(\mathsf {H} _0, \mathsf {H} _1, \mathsf {H} _2: \{0,1\}^* \rightarrow \mathbb {Z} _q\), which can be instantiated from a single hash function using domain separation.
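
For instance, the three hash functions can be derived from SHA-256 by prefixing a distinct tag to the input. The sketch below is one illustrative way to do this (the tag strings, the encoding, and the simple modular reduction, which ignores the small bias, are choices of this example rather than part of the scheme):

```python
import hashlib

def hash_to_zq(tag: bytes, data: bytes, q: int) -> int:
    """Domain-separated hash into Z_q: H_tag(x) = SHA-256(tag || x) mod q."""
    return int.from_bytes(hashlib.sha256(tag + data).digest(), "big") % q

H0 = lambda data, q: hash_to_zq(b"H0", data, q)  # signing challenges
H1 = lambda data, q: hash_to_zq(b"H1", data, q)  # key-aggregation coefficients
H2 = lambda data, q: hash_to_zq(b"H2", data, q)  # nonce commitments
```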

Parameters Generation. \(\mathsf {Pg}(\kappa )\) sets up a cyclic group \(\mathbb {G}\) of order q with generator g, where q is a \(\kappa \)-bit prime, and outputs \( par \leftarrow (\mathbb {G}, g, q)\).

Key Generation. The key generation algorithm \(\mathsf {Kg}( par )\) chooses \( sk \leftarrow _{\$}\mathbb {Z} _q\) and computes \( pk \leftarrow g^{ sk }\). It outputs \(( pk , sk )\).

Key Aggregation. \(\mathsf {KAg}(\{ pk _1, \ldots , pk _n\})\) outputs

$$\begin{aligned} apk \leftarrow \prod _{i = 1}^n pk _i^{\mathsf {H} _1( pk _i , \{ pk _1, \ldots , pk _n\})}. \end{aligned}$$

Signing. Signing is an interactive three-round protocol. On input \(\mathsf {Sign}( par , \{ pk _1, \ldots , pk _n\}, sk _i, m)\), signer i behaves as follows:

Round 1. Choose \(r_i \leftarrow _{\$}\mathbb {Z} _q\) and compute \(R_i \leftarrow g^{r_{i}}\). Let \(t_i \leftarrow \mathsf {H} _2(R_i)\). Send \(t_i\) to all other signers corresponding to \( pk _1, \ldots , pk _n\) and wait to receive \(t_j\) from all other signers \(j \ne i\).

Round 2. Send \(R_i\) to all other signers corresponding to \( pk _1, \ldots , pk _n\) and wait to receive \(R_j\) from all other signers \(j\ne i\). Check that \(t_j = \mathsf {H} _2(R_j)\) for all \(j = 1, \ldots , n\).

Round 3. Compute \( apk \leftarrow \mathsf {KAg}(\{ pk _1, \ldots , pk _n\})\) and let \(a_i \leftarrow \mathsf {H} _1( pk _i , \{ pk _1, \ldots , pk _n\})\). Note that when multiple messages are signed with the same set of signers, \( apk \) and \(a_i\) can be stored rather than recomputed.

Compute \(\bar{R} \leftarrow \prod _{j = 1}^n R_j\) and \(c \leftarrow \mathsf {H} _0(\bar{R}, apk , m)\). Compute \(s_{i} \leftarrow r_{i} + c \cdot sk _i \cdot a_i \bmod q\). Send \(s_{i}\) to all other signers and wait to receive \(s_j\) from all other signers \(j \ne i\). Compute \(s \leftarrow \sum _{j = 1}^n s_{j}\) and output \(\sigma \leftarrow (\bar{R}, s)\) as the final signature.

Verification. \(\mathsf {Vf}( par , apk , m, \sigma )\) parses \(\sigma \) as \((\bar{R}, s)\), computes \(c \leftarrow \mathsf {H} _0(\bar{R}, apk , m)\) and outputs 1 iff \(g^{s} \cdot apk ^{-c} {\mathop {=}\limits ^{{\scriptscriptstyle ?}}} \bar{R}\).
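
The scheme is simple enough to exercise end to end. Below is a self-contained Python sketch over a toy Schnorr group (\(p = 2039\), \(q = 1019\), \(g = 4\); these parameters are far too small to be secure and are chosen only so the example runs instantly). The message encodings and the simulation of all three rounds inside one process are illustrative choices, not part of the specification.

```python
import hashlib, random

# Toy Schnorr group: q divides p - 1 and g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(tag, *parts):
    """Domain-separated hash into Z_q (input encoding is illustrative)."""
    data = tag.encode() + b"|" + b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(rng):
    sk = rng.randrange(1, q)
    return pow(g, sk, p), sk          # (pk, sk)

def key_agg(pks):
    apk = 1
    for pk in pks:
        apk = apk * pow(pk, H("H1", pk, tuple(pks)), p) % p
    return apk

def sign(pks, sks, m, rng):
    """Simulates all n signers locally; each signer i runs the three rounds."""
    n = len(pks)
    r = [rng.randrange(1, q) for _ in range(n)]            # Round 1: nonces
    R = [pow(g, ri, p) for ri in r]
    t = [H("H2", Ri) for Ri in R]                          # commitments t_i
    assert all(t[j] == H("H2", R[j]) for j in range(n))    # Round 2: open, check
    apk = key_agg(pks)                                     # Round 3
    Rbar = 1
    for Ri in R:
        Rbar = Rbar * Ri % p
    c = H("H0", Rbar, apk, m)
    s = sum(r[i] + c * sks[i] * H("H1", pks[i], tuple(pks)) for i in range(n)) % q
    return Rbar, s

def verify(apk, m, sig):
    Rbar, s = sig
    c = H("H0", Rbar, apk, m)
    # g^s == Rbar * apk^c, which is equivalent to g^s * apk^{-c} == Rbar.
    return pow(g, s, p) == Rbar * pow(apk, c, p) % p

rng = random.Random(1)
keys = [keygen(rng) for _ in range(3)]
pks, sks = [k[0] for k in keys], [k[1] for k in keys]
sig = sign(pks, sks, "transaction data", rng)
assert verify(key_agg(pks), "transaction data", sig)
```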

The scheme allows for more efficient batch verification, which allows a verifier to check the validity of n signatures with one 3n-multi-exponentiation instead of n 2-multi-exponentiations. To verify that every signature in a list of n signatures \(\{( apk _i, m_i, (\bar{R}_i, s_i))\}_{i = 1}^n\) is valid, compute \(c_i \leftarrow \mathsf {H} _0(\bar{R}_i, apk _i, m_i)\), pick random \(z_i \leftarrow _{\$}\mathbb {Z} _q\) for \(i = 1, \ldots , n\), and accept iff

$$ g^{\sum _{i=1}^n z_i s_i} \;{\mathop {=}\limits ^{{\scriptscriptstyle ?}}}\; \prod _{i=1}^n \bar{R}_i^{z_i} \cdot apk _i^{z_i c_i}. $$
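
A sketch of this batch check follows, written as a function that reuses the toy-group conventions of the previous snippet (the group parameters and the domain-separated hash `H` are passed in, so the function itself makes no further assumptions):

```python
import random

def batch_verify(p, q, g, H, items, rng=random):
    """items: list of (apk_i, m_i, (Rbar_i, s_i)). Verifies all signatures at once
    via a random linear combination with coefficients z_i."""
    lhs_exp = 0      # exponent of g on the left-hand side
    rhs = 1          # product of Rbar_i^{z_i} * apk_i^{z_i * c_i}
    for apk, m, (Rbar, s) in items:
        z = rng.randrange(1, q)
        c = H("H0", Rbar, apk, m)
        lhs_exp = (lhs_exp + z * s) % q
        rhs = rhs * pow(Rbar, z, p) % p
        rhs = rhs * pow(apk, z * c % q, p) % p
    return pow(g, lhs_exp, p) == rhs
```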

5.2 Security Proof

The security proof follows that of [36] by applying the forking lemma twice: once by forking on a random-oracle query \(\mathsf {H} _0(\bar{R}, apk ,m)\) to obtain two forgeries from which the discrete logarithm w of \( apk \) can be extracted, and then once again by forking on a query \(\mathsf {H} _1( pk _i, \{ pk _1,\ldots , pk _n\})\) to obtain two such pairs \(( apk , w)\) and \(( apk ',w')\) from which the discrete logarithm of the target public key can be extracted.

Theorem 5

Our discrete-logarithm multi-signature scheme is an unforgeable multisignature scheme (as defined in Definition 4) in the random-oracle model if the discrete-logarithm problem is hard. More precisely, it is \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-unforgeable in the random-oracle model if \(q > 8q_\mathrm {H}/\epsilon \) and if the discrete-logarithm problem in \(\mathbb {G}\) is \((\tau ', \epsilon ')\)-hard with \(\epsilon ' = (\epsilon - \delta )/64\) and \(\tau ' = (\tau + 4 lq_\mathrm {T}\cdot \tau _\mathrm {exp}+ O(lq_\mathrm {T})) \cdot \frac{512q_\mathrm {T}^2}{\epsilon -\delta } \cdot \ln ^2 \frac{64}{\epsilon -\delta }\), where \(l\) is the maximum number of signers involved in a single multisignature, \(q_\mathrm {T}= q_\mathrm {H}+q_\mathrm {S}+1\), \(\delta = 4 lq_\mathrm {T}^2/q\), and \(\tau _\mathrm {exp}\) is the time required to compute an exponentiation in \(\mathbb {G}\).

Proof

We first wrap the forger \(\mathcal {F} \) into an algorithm \(\mathcal {A} \) that can be used in the forking lemma. We then describe an algorithm \(\mathcal {B} \) that runs \(\mathcal {GF}_\mathcal {A} \) to obtain an aggregated public key \( apk \) and its discrete logarithm w. We finally describe a discrete-logarithm algorithm \(\mathcal {D} \) that applies the forking lemma again to \(\mathcal {B} \) by running \(\mathcal {GF}_\mathcal {B} \) and using its output to compute the wanted discrete logarithm.

Algorithm \(\mathcal {A} \), on input \( in = (y,h_{1,1},\ldots ,h_{1,q_\mathrm {H}})\) and randomness \(f = (\rho , h_{0,1},\ldots ,h_{0,q_\mathrm {H}})\) runs \(\mathcal {F} \) on input \( pk ^* = y\) and random tape \(\rho \), responding to its queries as follows:

  • \(\mathsf {H} _0(\bar{R}, apk , m )\): Algorithm \(\mathcal {A} \) returns the next unused value \(h_{0,i}\) from its randomness f.

  • \(\mathsf {H} _1( pk _i,\mathcal {PK})\): If \( pk ^* \in \mathcal {PK}\) and \(\mathcal {F} \) did not make any previous query \(\mathsf {H} _1( pk ',\mathcal {PK})\), then \(\mathcal {A} \) sets \(\mathsf {H} _1( pk ^*,\mathcal {PK})\) to the next unused value \(h_{1,i}\) from its input and assigns random values \(\mathsf {H} _1( pk ,\mathcal {PK}) \leftarrow _{\$}\mathbb {Z} _q\) for all \( pk \in \mathcal {PK}\setminus \{ pk ^*\}\). Let \( apk \leftarrow \prod _{ pk \in \mathcal {PK}} pk ^{\mathsf {H} _1( pk ,\mathcal {PK})}\). If \(\mathcal {F} \) already made any random-oracle or signing queries involving \( apk \), then we say that event \(\mathbf {bad}_1\) happened and \(\mathcal {A} \) gives up by outputting \((0,\bot )\).

  • \(\mathsf {H} _2(R)\): \(\mathcal {A} \) simply chooses a random value \(t \leftarrow _{\$}\mathbb {Z} _q\) and assigns \(\mathsf {H} _2(R) \leftarrow t\). If there exists another \(R' \ne R\) such that \(\mathsf {H} _2(R')=t\), or if t has already been used (either by \(\mathcal {F} \) or in \(\mathcal {A} \)’s simulation) in the first round of a signing query, then we say that event \(\mathbf {bad}_2\) happened and \(\mathcal {A} \) gives up by outputting \((0,\bot )\).

  • \(\mathsf {Sign}(\mathcal {PK}, m)\): Algorithm \(\mathcal {A} \) first computes \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\), simulating internal queries to \(\mathsf {H} _1\) as needed. In the first round of the protocol, \(\mathcal {A} \) returns a random value \(t_i \leftarrow _{\$}\mathbb {Z} _q\).

    After receiving values \(t_j\) from all other signers, it looks up the corresponding values \(R_j\) such that \(\mathsf {H} _2(R_j) = t_j\). If not all such values can be found, then \(\mathcal {A} \) sends a random value \(R_i \leftarrow _{\$}\mathbb {G}\) to all signers; unless \(\mathbf {bad}_2\) happens, the signing protocol finishes in the next round. If all values \(R_j\) are found, then \(\mathcal {A} \) chooses \(s_i, c \leftarrow _{\$}\mathbb {Z} _q\), simulates an internal query \(a_i \leftarrow \mathsf {H} _1( pk ^*, \mathcal {PK})\), computes \(R_i \leftarrow g^{s_i} { pk ^*}^{-a_i \cdot c}\) and \(\bar{R} \leftarrow \prod _{j=1}^n R_j\), assigns \(\mathsf {H} _2(R_i) \leftarrow t_i\) and \(\mathsf {H} _0(\bar{R}, apk , m) \leftarrow c\), and sends \(R_i\) to all signers. If the latter assignment failed because the entry was taken, we say that event \(\mathbf {bad}_3\) happened and \(\mathcal {A} \) gives up by outputting \((0,\bot )\). (Note that the first assignment always succeeds, unless \(\mathbf {bad}_2\) occurs.) After it received the values \(R_j\) from all other signers, \(\mathcal {A} \) sends \(s_i\).

When \(\mathcal {F} \) outputs a valid forgery \((\bar{R},s)\) on message \( m \) for a set of signers \(\mathcal {PK}= \{ pk _1,\ldots , pk _n\}\), \(\mathcal {A} \) computes \( apk \leftarrow \mathsf {KAg}(\mathcal {PK})\), \(c \leftarrow \mathsf {H} _0(\bar{R}, apk , m )\), and \(a_i \leftarrow \mathsf {H} _1( pk _i, \mathcal {PK})\) for \(i=1,\ldots ,n\). If j is the index such that \(c = h_{0,j}\), then \(\mathcal {A} \) returns \((j, (\bar{R}, c, s, apk , \mathcal {PK}, a_1,\ldots ,a_n))\).

Note that \( apk = \prod _{i=1}^n pk _i^{a_i}\) and, because the forgery is valid, \(g^s = \bar{R} \cdot apk ^c\). If \(\mathcal {F} \) is a \((\tau , q_\mathrm {S}, q_\mathrm {H}, \epsilon )\)-forger, then \(\mathcal {A} \) succeeds with probability

$$\begin{aligned} \epsilon _\mathcal {A}&\; = \; \Pr [\mathcal {F} \text { succeeds} \wedge \overline{\mathbf {bad}_1} \wedge \overline{\mathbf {bad}_2} \wedge \overline{\mathbf {bad}_3}] \\&\; \ge \; \Pr [\mathcal {F} \text { succeeds}] - \Pr [ \mathbf {bad}_1] - \Pr [\mathbf {bad}_2] - \Pr [\mathbf {bad}_3] \\&\; \ge \; \epsilon - \frac{q_\mathrm {H}(q_\mathrm {H}+q_\mathrm {S}+1)}{q} - \left( \frac{(q_\mathrm {H}+ q_\mathrm {S})^2}{2q} + \frac{lq_\mathrm {H}q_\mathrm {S}}{q} \right) - \frac{q_\mathrm {H}(q_\mathrm {H}+q_\mathrm {S}+1)}{q} \\&\; \ge \; \epsilon - \frac{4 lq_\mathrm {T}^2}{q} \; = \; \epsilon - \delta \end{aligned}$$

where \(q_\mathrm {T}= q_\mathrm {H}+q_\mathrm {S}+1\) and \(\delta = 4 l(q_\mathrm {H}+q_\mathrm {S}+1)^2/q\). The running time of \(\mathcal {A} \) is \(\tau _\mathcal {A} = \tau + 4 lq_\mathrm {T}\cdot \tau _\mathrm {exp}+ O(lq_\mathrm {T})\).

We now construct algorithm \(\mathcal {B} \) that runs the forking algorithm \(\mathcal {GF}_\mathcal {A} \) on algorithm \(\mathcal {A} \), but that itself is a wrapper algorithm around \(\mathcal {GF}_\mathcal {A} \) that can be used in the forking lemma. Algorithm \(\mathcal {B} \), on input \( in = y\) and randomness \(f = (\rho , h_{1,1}, \ldots , h_{1,q_\mathrm {H}})\), runs \(\mathcal {GF}_\mathcal {A} \) on input \( in ' = (y, h_{1,1}, \ldots , h_{1,q_\mathrm {H}})\) to obtain output

$$ \big (j,\ (\bar{R}, c, s, apk , \mathcal {PK}, a_1,\ldots ,a_n),\ \ (\bar{R}', c', s', apk ', \mathcal {PK}', a'_1,\ldots ,a'_n) \big ). $$

In its two executions by \(\mathcal {GF}_\mathcal {A} \), \(\mathcal {F} \)’s view is identical up to the j-th \(\mathsf {H} _0\) query \(\mathsf {H} _0(\bar{R}, apk ,m)\), meaning that also the arguments of that query are identical in both executions, and hence \(\bar{R} = \bar{R}'\) and \( apk = apk '\). From the way \(\mathcal {A} \) answers \(\mathcal {F} \)’s \(\mathsf {H} _1\) queries by aborting when \(\mathbf {bad}_1\) happens, the fact that \( apk = apk '\) also means that \(\mathcal {PK}= \mathcal {PK}'\) and that \(a_i=a'_i\) for \(i=1,\ldots ,n\). The forking algorithm moreover guarantees that \(c \ne c'\).

By dividing the two verification equations \(g^s = \bar{R} \cdot apk ^c\) and \(g^{s'} = \bar{R}' \cdot { apk '}^{c'} = \bar{R} \cdot apk ^{c'}\), one can see that \(w \leftarrow (s-s')/(c-c') \bmod q\) is the discrete logarithm of \( apk \). If i is the index such that \(\mathsf {H} _1( pk ^*, \mathcal {PK}) = h_{1,i}\), then \(\mathcal {B} \) outputs \((i,(w,\mathcal {PK}, a_1,\ldots ,a_n))\). It does so whenever \(\mathcal {GF}_\mathcal {A} \) is successful, which according to Lemma 1 occurs with probability \(\epsilon _\mathcal {B} \) and running time \(\tau _\mathcal {B} \):

$$\begin{aligned} \epsilon _\mathcal {B}&\;\ge \; \frac{\epsilon _\mathcal {A}}{8} \;\ge \; \frac{\epsilon -\delta }{8} \\ \tau _\mathcal {B}&\;=\; \tau _\mathcal {A} \cdot 8q_\mathrm {H}/\epsilon _\mathcal {A} \cdot \ln (8/\epsilon _\mathcal {A}) \\&\;\le \; (\tau + 4 lq_\mathrm {T}\cdot \tau _\mathrm {exp}+ O(lq_\mathrm {T})) \cdot \frac{8q_\mathrm {T}}{\epsilon -\delta } \cdot \ln \frac{8}{\epsilon -\delta }. \end{aligned}$$
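
The extraction performed by \(\mathcal {B} \) is plain linear algebra in the exponent; the following numeric check illustrates it in the toy group used in the earlier sketch (\(p = 2039\), \(q = 1019\), \(g = 4\)). The two accepting transcripts are fabricated directly from a known exponent, purely to show that \(w = (s-s')/(c-c') \bmod q\) recovers it; Python 3.8+ is assumed for the modular inverse via pow.

```python
# Two accepting transcripts with the same Rbar but different challenges
# reveal the discrete logarithm of apk.
p, q, g = 2039, 1019, 4

w_true = 777                        # secret exponent: apk = g^w_true
apk = pow(g, w_true, p)
r = 123                             # shared nonce exponent: Rbar = g^r
Rbar = pow(g, r, p)

c1, c2 = 41, 907                    # distinct challenges from the two forks
s1, s2 = (r + c1 * w_true) % q, (r + c2 * w_true) % q

# Both transcripts satisfy the verification equation g^s == Rbar * apk^c.
assert pow(g, s1, p) == Rbar * pow(apk, c1, p) % p
assert pow(g, s2, p) == Rbar * pow(apk, c2, p) % p

# Extract w = (s1 - s2) / (c1 - c2) mod q.
w = (s1 - s2) * pow(c1 - c2, -1, q) % q
assert w == w_true
```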

Now consider the discrete-logarithm algorithm \(\mathcal {D} \) that, on input y, runs \(\mathcal {GF}_\mathcal {B} \) on input y to obtain output \((i, (w,\mathcal {PK},a_1,\ldots ,a_n), (w',\mathcal {PK}', a'_1,\ldots ,a'_n))\). Both executions of \(\mathcal {B} \) in \(\mathcal {GF}_\mathcal {B} \) are identical up to the i-th \(\mathsf {H} _1\) query \(\mathsf {H} _1( pk ,\mathcal {PK})\), so we have that \(\mathcal {PK}=\mathcal {PK}'\). Because \(\mathcal {A} \) immediately assigns outputs of \(\mathsf {H} _1\) for all public keys in \(\mathcal {PK}\) as soon as the first query for \(\mathcal {PK}\) is made, and because it uses \(h_{1,i}\) to answer \(\mathsf {H} _1( pk ^*,\mathcal {PK})\), we also have that \(a_j = a'_j\) for \( pk _j \ne pk ^*\) and \(a_i \ne a'_i\) for \( pk _i = pk ^*\). By dividing the equations \( apk = \prod _{j=1}^n pk _j^{a_j} = g^w\) and \( apk ' = \prod _{j=1}^n pk _j^{a'_j} = g^{w'}\), one can see that \(\mathcal {D} \) can compute the discrete logarithm of \( pk ^*=y\) as \(x \leftarrow (w-w') / (a_i-a'_i) \bmod q\), where i is the index such that \( pk _i = pk ^*\). By Lemma 1, it can do so with the following success probability \(\epsilon _\mathcal {D} \) and running time \(\tau _\mathcal {D} \):

$$\begin{aligned} \epsilon _\mathcal {D}&\;\ge \; \frac{\epsilon _\mathcal {B}}{8} \;\ge \; \frac{\epsilon -\delta }{64} \\ \tau _\mathcal {D}&\;=\; \tau _\mathcal {B} \cdot 8q_\mathrm {H}/\epsilon _\mathcal {B} \cdot \ln (8/\epsilon _\mathcal {B}) \\&\;\le \; (\tau + 4 lq_\mathrm {T}\cdot \tau _\mathrm {exp}+ O(lq_\mathrm {T})) \cdot \frac{512q_\mathrm {T}^2}{\epsilon -\delta } \cdot \ln ^2 \frac{64}{\epsilon -\delta }. \end{aligned}$$