1 Introduction

Structure-Preserving Signatures. Structure-preserving signature schemes (SPS for short), introduced by Abe et al. [4], are signatures defined over bilinear groups in which the messages, public keys and signatures are required to be source group elements. Moreover, signature verification consists only of group membership tests and the evaluation of pairing product equations (PPE). SPS are very attractive as they can be combined with efficient pairing-based non-interactive zero-knowledge (NIZK) proofs due to Groth and Sahai (GS) [46]. This makes it possible to construct many privacy-preserving cryptographic primitives and protocols under standard assumptions with reasonable practical efficiency.

SPS have been used in the literature to construct numerous cryptographic primitives and building blocks. Among them are many variants of signatures such as blind signatures [4, 40], group signatures [4, 56], traceable signatures [3], policy-compliant signatures [16, 17], homomorphic and network coding signatures [13, 55], as well as protocols such as anonymous credentials [26], delegatable anonymous credentials [39], compact verifiable shuffles [30] and anonymous e-cash [21]. Due to their wide range of applications, SPS have attracted significant research interest. Looking ahead to the threshold setting (i.e., TSPS), we note that typical privacy-preserving applications of SPS proceed as follows: a user obtains a signature from some entity and then proves possession of a valid signature without revealing it using GS NIZK. Consequently, thresholdizing the SPS signing process does not affect the remainder of the protocol, and thus TSPS can be considered a drop-in replacement for SPS.

The first SPS scheme presented by Abe et al. in [4] was followed by a line of research to obtain SPS with short signatures in the generic group model (GGM) [5, 7, 44, 45], lower bounds [1, 5, 6], security under standard assumptions [2, 25, 47, 50, 51, 56] as well as tight security reductions [8, 9, 10, 31, 42, 49].

Threshold Signatures. Motivated by real-world deployments in decentralized systems such as distributed ledger technologies, cryptocurrencies, and decentralized identity management, the use of threshold cryptography [37], and in particular threshold signatures, has become a very active field of research in recent years, with a main focus on ECDSA [11, 24, 28, 34, 36, 43, 62], Schnorr [33, 53] and BLS [14] signatures. We recall that an \((n,t)\) threshold signature allows a set of \(n\) potential signers to jointly compute a signature for a message m, which verifies under a single verification key, as long as at least \(t\) signers participate.

There are different types of constructions in the literature: ones that require multiple rounds of interaction (e.g., ECDSA [28, 43]), ones that require a pre-processing round that does not depend on the message (often called non-interactive schemes), e.g., FROST [53], and finally, ones that are fully non-interactive. The latter are schemes where every participating signer simply sends a partial signature, and the final signature can then be combined from threshold-many valid partial signatures, e.g., BLS [22].

Security of Threshold Signatures. Although many works on threshold signatures were known in the literature, the rigorous study of their security notions started only very recently. In particular, Bellare et al. in [18] studied a hierarchy of different notions of security for non-interactive schemes. As our work focuses on fully non-interactive schemes, we do not recall the entire hierarchy but only the notions relevant to this setting. In particular, the \(\mathsf {TS\text {-}UF\text {-}}0\) notion is the weaker of the two and prohibits adversaries from querying the signing oracle for partial signatures on the challenge message, i.e., the message corresponding to the forged signature. The stronger \(\mathsf {TS\text {-}UF\text {-}}1\) notion, which will be our main focus, allows adversaries to query the signing oracle up to \(t-|\textsf{CS}|\) times for partial signatures, even on the challenge message. Here \(\textsf{CS}\) with \(|\textsf{CS}|<t\) denotes the set of (statically corrupted) signers. Surprisingly, the majority of works on threshold signatures in the literature relied on weaker \(\mathsf {TS\text {-}UF\text {-}}0\)-style notions instead of the much more realistic \(\mathsf {TS\text {-}UF\text {-}}1\) notion.

Another dimension in the security of threshold signatures is whether they support static or adaptive corruptions. In the case of static corruptions, the adversary has to declare the set of corrupted signers, \(\textsf{CS}\), before seeing any parameters of the system apart from \((n,t)\). In contrast, an adaptive adversary can choose the set of corrupted signers during the security game based on its view of the execution, which is a realistic assumption in the decentralized setting. All the notions in [18] consider only a static setting and refer to a complexity leveraging argument for adaptive security. Precisely, it suggests that for a small number of parties, a guessing argument can yield adaptive security for any statically secure scheme with a loss of \(\left( {\begin{array}{c}n\\ t-1\end{array}}\right) \), i.e., guessing the set of corrupted parties and aborting if the guess is wrong. However, this exponential loss of security becomes significant as the number of parties increases, e.g., when supporting \(n\ge 1024\) (cf. [33]). While there are known generic techniques to lift statically secure schemes to adaptively secure ones [29, 48, 57], they all have undesirable side-effects, such as relying on additional heavy tools, e.g., non-committing encryption [27], or relying on strong assumptions such as reliable erasure of secret states (cf. [33]).
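To get a feeling for the magnitude of this loss, the following snippet evaluates the guessing factor \(\left( {\begin{array}{c}n\\ t-1\end{array}}\right) \). The parameters are illustrative (only \(n\ge 1024\) is mentioned above; the threshold \(t=512\) is our own choice for the sketch):

```python
from math import comb, log2

# Security loss of the naive guessing argument: the reduction guesses the
# corrupted set among C(n, t-1) possibilities and aborts on a wrong guess.
n, t = 1024, 512  # illustrative parameters; the text only mentions n >= 1024
loss = comb(n, t - 1)
print(f"guessing loss ~ 2^{log2(loss):.1f}")
```

For these parameters the loss exceeds \(2^{1000}\), wiping out any concrete security of the underlying statically secure scheme.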

Apart from adaptively secure threshold RSA signatures [12], until recently there were no results on adaptively secure threshold signatures based on popular signature schemes in the discrete logarithm or pairing setting. Only very recently, Bacho and Loss [14] as well as Crites et al. [33] have shown tight adaptive security for threshold versions of the popular BLS [23] and Schnorr [60] schemes, respectively. Notably, all these adaptive security proofs rely on interactive assumptions, in particular variants of the One-More Discrete Logarithm assumption [19], which is considered a strong assumption. Only very recently and concurrently to this work, Bacho et al. [15] as well as Das and Ren [35] presented schemes from standard and non-interactive assumptions in the pairing-free discrete logarithm setting and the pairing setting, respectively. Interestingly, only a few of the existing works achieve adaptive security under the \(\mathsf {TS\text {-}UF\text {-}}1\) notion, e.g., [14, 35, 54], with [54] being the only one from standard assumptions and without requiring idealized models.

Threshold SPS. Recently, Crites et al. [32] extended the concept of threshold signatures to threshold SPS (TSPS). They introduce a definitional framework for fully non-interactive TSPS and provide a construction that is proven secure in the Random Oracle Model (ROM) [20] under the hardness of a new interactive assumption, called the \(\text {GPS}_3\) assumption, which is analyzed in the Algebraic Group Model (AGM) [41]. The authors start from an SPS proposed by Ghadafi [44], which is secure in the Generic Group Model (GGM), and introduce a message indexing technique to avoid non-linear operations in the signature components and thus obtain a fully non-interactive threshold version. While the TSPS proposed in [32] is highly efficient and compact (only 2 group elements), the defined message space is restricted to a so-called indexed Diffie-Hellman message space. This prevents its use as a drop-in replacement for SPS in arbitrary applications that one wishes to thresholdize. Additionally, the security of their proposed TSPS is only shown in the \(\mathsf {TS\text {-}UF\text {-}}0\) model and only under static corruptions.

1.1 Our Contributions

In this paper, we ask whether it is possible to construct TSPS without the aforementioned restrictions, and we answer this question affirmatively. We start with the observation that the SPS of Kiltz, Pan and Wee [51] has an interesting structure that makes it amenable to thresholdizing, although this process requires some modifications of the original scheme. While Crites et al. [32] prove security in the \(\mathsf {TS\text {-}UF\text {-}}0\) model and only under static corruptions, we are able to prove that our construction is secure in the strongest model for non-interactive threshold signatures [18], \(\mathsf {TS\text {-}UF\text {-}}1\), and even under fully adaptive corruptions (which we denote as \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security). Table 1 provides a brief overview of our results.

Interestingly, we can do so by relying on standard assumptions, i.e., the Matrix Diffie-Hellman (MDDH) assumption family [38, 58]. While this comes at some cost in concrete efficiency, as shown in Table 2, the overhead is not significant. For instance, when instantiated in type III bilinear groups under the SXDH assumption (\(k=1\)), signatures consist of 7 group elements. With the popular BLS12-381 curve, giving around 110 bits of security, this amounts to signatures of around 380 bytes. Compared to 256 bytes for an RSA signature of comparable security (2048 bit modulus), this is an increase of around 50%, which seems perfectly tolerable for most practical applications.

As can be seen from Table 2, an important benefit of our TSPS over the one by Crites et al. [32] is that it is not limited to an indexed Diffie-Hellman message space, but works for arbitrary vectors of group messages. Thus, it represents a drop-in replacement for SPS when aiming to thresholdize its applications (such as anonymous credentials, e-cash, etc.). Moreover, we prove the unforgeability of the proposed TSPS against an adaptive adversary under the stronger \(\mathsf {TS\text {-}UF\text {-}}1\) notion of security. We recall that, in contrast, the TSPS proposed by Crites et al. in [32] only achieves \(\mathsf {TS\text {-}UF\text {-}}0\) security against a static adversary, based on an interactive assumption, called \(\text {GPS}_3\), in the AGM and ROM.

Table 1. Overview of security notions and our results. t denotes the threshold, \(M^*\) the message corresponding to the forgery, \(S_1\) the set recording signer indices of issued partial signatures and \(\textsf{CS}\) the set of corrupted signers.
Table 2. Comparison with the existing threshold structure-preserving signature by Crites et al. [32]. \(\textsf{iDH}\) refers to the indexed Diffie-Hellman message spaces. \(\ell \) is the length of the message vector to be signed. \(|\mathbb {G}_i|\) denote the bit-length of elements in groups \(\mathbb {G}_i\) for \(i \in \{1,2\}\). NI stands for Non-Interactive.

1.2 Technical Overview

Considering the insights discussed in [32, Section 1], it can be deduced that a fully non-interactive TSPS scheme cannot involve any non-linear operations during the partial signing phase: such operations prevent the reconstruction of the final signature from the partial signatures via Lagrange interpolation. These non-linear operations include the inversion of secret key shares (i.e., \([1/\textsf{sk}_i]\)), the multiplication of fresh randomness with secret shares (i.e., \([r_i\textsf{sk}_i]\)), as well as raising either secret shares or fresh randomness to a power (e.g., \([\textsf{sk}_i^\zeta ]\) or \([r_i^\zeta ]\) for any \(\zeta >1\)). By employing an indexing approach, the authors of [32] were able to circumvent the multiplication of randomness and secret keys required by Ghadafi’s SPS [44]. In contrast, our proposed TSPS adopts a different perspective for avoiding non-linear operations.

We start from an observation regarding the SPS construction of Kiltz et al. [51], which computes the first and second components of a signature on a message \([\textbf{m}]_1 \in \mathbb {G}_1^\ell \) as:

$$\begin{aligned} \text {KPW15}:\; (\sigma _{1},\sigma _{2}):=\left( \underbrace{\left[ \left( \begin{matrix}1&\textbf{m}^\top \end{matrix}\right) \right] _1\textbf{K}}_\text {SP-OTS}+\overbrace{\textbf{r}^\top \left[ \textbf{B}^{\top }(\textbf{U}+\tau \cdot \textbf{V})\right] _1,\left[ \textbf{r}^\top \textbf{B}^{\top }\right] _1}^\text {randomized PRF}\right) \; , \end{aligned}$$

where \(\tau \) is a fresh random integer and \(\textbf{r}\) is a fresh random vector of appropriate size. Additionally, the secret signing and verification keys are defined as follows:

$$\begin{aligned} \begin{aligned} \text {KPW15}:\; & \textsf{sk}:=(\textbf{K}, \left[ \textbf{B}^{\top }\textbf{U}\right] _1, \left[ \textbf{B}^{\top }\textbf{V}\right] _1, \left[ \textbf{B}\right] _1)\; ,\\ & \textsf{vk}:=(\left[ \textbf{K}\textbf{A}\right] _2,\left[ \textbf{U}\textbf{A}\right] _2,\left[ \textbf{V}\textbf{A}\right] _2,\left[ \textbf{A}\right] _2)\; , \end{aligned} \end{aligned}$$

where \(\textbf{K}\), \(\textbf{A}\), \(\textbf{B}\), \(\textbf{U}\) and \(\textbf{V}\) are random matrices of appropriate dimensions.
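For orientation, we sketch how such signatures verify. Since both components are linear in the secret exponents, a verifier can check a pairing-product equation of roughly the following shape (a sketch under the simplifying assumption that the tag \(\tau \) is publicly available; the actual scheme of [51] carries an additional signature element binding \(\tau \), which we omit here):

$$\begin{aligned} e\left( \sigma _{1},\left[ \textbf{A}\right] _2\right) = e\left( \left[ \left( \begin{matrix}1&\textbf{m}^\top \end{matrix}\right) \right] _1,\left[ \textbf{K}\textbf{A}\right] _2\right) \cdot e\left( \sigma _{2},\left[ (\textbf{U}+\tau \cdot \textbf{V})\textbf{A}\right] _2\right) \; . \end{aligned}$$

Both sides equal \(\left[ \left( (1\;\;\textbf{m}^\top )\textbf{K}+\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}+\tau \cdot \textbf{V})\right) \textbf{A}\right] _T\), so correctness follows from bilinearity.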

As noted by Kiltz et al. in their work [51], their SPS is built from two fundamental primitives: (i) a structure-preserving one-time signature (SP-OTS), \(\left[ \left( \begin{matrix}1&\textbf{m}^\top \end{matrix}\right) \right] _1\textbf{K}\), and (ii) a randomized pseudorandom function (PRF), \((\textbf{r}^\top \left[ \textbf{B}^{\top }(\textbf{U}+\tau \cdot \textbf{V})\right] _1,\left[ \textbf{r}^\top \textbf{B}^{\top }\right] _1)\). We observe that in their security proof the two building blocks are only loosely coupled. In particular, in most of their proofs, the reduction samples the SP-OTS signing key \(\textbf{K}\) itself, and this remains true even when they argue about the security of the randomized PRF. This observation motivates our approach and inspires us to modify Kiltz et al.’s SPS as follows: we define the secret key as \(\textsf{sk}:=\textbf{K}\), move the remaining parameters to the public parameters, i.e., \(\textsf{pp}:= ([\textbf{A}]_2,[\textbf{U}\textbf{A}]_2,[\textbf{V}\textbf{A}]_2,[\textbf{B}]_1,[\textbf{B}^{\top }\textbf{U}]_1,[\textbf{B}^{\top }\textbf{V}]_1)\), and define the verification key as \(\textsf{vk}:=\left[ \textbf{K}\textbf{A}\right] _2\). This rather simple structure allows us to obtain the first TSPS for general message spaces in the standard model that withstands adaptive corruptions without the exponential degradation [18] and can be proven secure in the \(\mathsf {TS\text {-}UF\text {-}}1\) model.

Consider the following setting: there are \(n\) signers, each equipped with their own signing key, obtained either through a trusted dealer or by running a Distributed Key Generation (DKG) protocol. Their collective objective is to generate a signature on a given message \(\left[ \textbf{m}\right] _1\in \mathbb {G}_1^\ell \). The linear structure of the SP-OTS components \(\{\left[ \left( \begin{matrix}1&\textbf{m}^\top \end{matrix}\right) \right] _1\textbf{K}_{i}\}_{i \in S}\) allows for effortless aggregation over any subset \(S \subseteq [1,n]\). However, since the random quantities \(\tau _i\) and \(\textbf{r}_i\) are sampled independently and uniformly by each signer \(i\in [1,n]\), aggregating the PRF components remains challenging. Consequently, we must modify Kiltz et al.’s SPS so that these components can be aggregated. We choose to make the tag \(\tau \) dependent on the message. Thus, the randomized PRF computed by every signer, while still being a random element in the respective space, now allows aggregation. Moreover, by establishing an injective mapping between \(\left[ \textbf{m}\right] _1\) and \(\tau \), the randomized PRF structure still guarantees unforgeability as in [51] when the adversary attempts to forge a signature on a distinct message. We employ a collision-resistant hash function (CRHF), \(\mathcal {H}(\cdot )\), to derive \(\tau \) from \(\left[ \textbf{m}\right] _1\). This gives the basis of our construction, where each signer \(i \in [1,n]\) computes a partial signature on \(\left[ \textbf{m}\right] _1\) as

$$(\sigma _{1},\sigma _{2})=\left( \left[ \left( \begin{matrix}1&\textbf{m}^\top \end{matrix}\right) \right] _1\textbf{K}_{i}+\textbf{r}_i^\top \left[ \textbf{B}^{\top }(\textbf{U}+\tau \cdot \textbf{V})\right] _1,\left[ \textbf{r}_i^\top \textbf{B}^{\top }\right] _1\right) \; .$$

Here signer \(i\) holds the secret share \(\textbf{K}_{i}\), chooses fresh randomness \(\textbf{r}_i\) of appropriate size, and uses \(\tau =\mathcal {H}(\left[ \textbf{m}\right] _1)\). It is easy to verify that such signatures can be aggregated in a non-interactive manner. Looking ahead, as a first step we prove that this construction achieves \(\mathsf {TS\text {-}UF\text {-}}{0}\) security, relying on a well-established, non-interactive standard assumption, namely the MDDH assumption.
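The aggregation argument can be illustrated with a toy computation over \(\mathbb {Z}_p\) instead of actual group elements (a sketch only: the scalars m, K, u, v stand in for \([\textbf{m}]_1\), \(\textbf{K}\), \(\textbf{B}^\top \textbf{U}\), \(\textbf{B}^\top \textbf{V}\), and SHA-256 stands in for the CRHF \(\mathcal {H}\)):

```python
import hashlib
import secrets

P = 2**61 - 1  # toy prime standing in for the group order p

def H(msg: bytes) -> int:
    """Message-dependent tag (stand-in for the CRHF H applied to [m]_1)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def lagrange_at_zero(xs):
    """Lagrange coefficients lambda_i(0) for evaluation points xs."""
    lams = {}
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        lams[i] = num * pow(den, P - 2, P) % P
    return lams

# Setup: Shamir-share the signing key K among n = 4 signers with t = 3.
n, t = 4, 3
K = secrets.randbelow(P)
coeffs = [K] + [secrets.randbelow(P) for _ in range(t - 1)]
K_shares = {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}
u, v = secrets.randbelow(P), secrets.randbelow(P)  # stand-ins for B^T U, B^T V

m = 42                   # toy scalar message, standing in for [m]_1
tau = H(b"toy-message")  # every signer derives the SAME tag from the message

def par_sign(i):
    """Partial signature of signer i: linear in its key share K_i."""
    r_i = secrets.randbelow(P)  # fresh per-signer randomness
    return ((m * K_shares[i] + r_i * (u + tau * v)) % P, r_i)

# Combine the partial signatures of T = {1, 2, 4} via Lagrange interpolation.
T = [1, 2, 4]
parts = {i: par_sign(i) for i in T}
lams = lagrange_at_zero(T)
s1 = sum(lams[i] * parts[i][0] for i in T) % P
s2 = sum(lams[i] * parts[i][1] for i in T) % P

# "Verification" in the exponent: sigma_1 = m*K + sigma_2*(u + tau*v).
assert s1 == (m * K + s2 * (u + tau * v)) % P
```

Because every signer uses the same message-derived tag \(\tau \), the PRF parts interpolate to a single well-formed randomizer, exactly as the SP-OTS parts interpolate to the term under the full key \(\textbf{K}\).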

In the case of a \(\mathsf {TS\text {-}UF\text {-}}{1}\) adversary, we need to deal with the fact that the adversary is allowed to obtain partial signatures on the forgery message \(\left[ \textbf{m}^*\right] _1\). Let us first consider the case of static corruptions. We cannot apply the unforgeability of [51] here, as it does not consider strong \(\textsf{Uf}\text {-}\textsf{CMA}\) security. To overcome this problem, we introduce an information-theoretic step to argue that, given fewer than threshold-many partial signatures on the forgery message \(\left[ \textbf{m}^*\right] _1\), the adversary does not gather extra information. In particular, we use the reconstruction security of Shamir’s secret sharing to ensure that partial signatures do not leak useful information. In this argument, we implicitly use the “selective security” of Shamir’s secret sharing, where all the parties in the corrupted set are fixed at the start of the game.

In the case of adaptive corruptions, an \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) adversary is not only allowed to obtain partial signatures on the forgery message \(\left[ \textbf{m}^*\right] _1\), but can also adaptively corrupt different users during the security game to obtain their secret keys. We could, of course, follow a standard guessing argument to achieve \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security based on \(\mathsf {TS\text {-}UF\text {-}}{1}\) security; however, this induces a significant security loss. Instead, we take a closer look at our proof of \(\mathsf {TS\text {-}UF\text {-}}{1}\) security sketched above. To make our construction \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) secure, we show that it suffices to argue that the underlying secret sharing achieves “adaptive security”. In this work, we indeed show that Shamir’s secret sharing achieves “adaptive security”, which in turn makes our construction \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) secure.

Next, we provide a brief intuition for the formal argument for the “adaptive security” of Shamir’s secret sharing. Informally speaking, given an adaptive adversary \(\mathcal {A}\) against the secret sharing, we build a reduction \(\mathcal {B}\) that breaks its “selective security”. Being an information-theoretic reduction, \(\mathcal {B}\) may run the adaptive adversary \(\mathcal {A}\) an exponential number of times. Since \(\mathcal {B}\) chooses the target set \(S\) independently of \(\mathcal {A}\)’s run, the expected number of runs of \(\mathcal {A}\) required until all the parties whose secrets \(\mathcal {A}\) queried indeed lie in \(S\) is at most exponential. Being information-theoretically secure, Shamir’s secret sharing thus achieves “adaptive security” via complexity leveraging, but without any degradation in the advantage of the adversary. While we use Shamir secret sharing as our canonical choice, we believe that any information-theoretically secure linear secret sharing scheme can be used instead.
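The rewinding strategy of \(\mathcal {B}\) can be pictured with a small simulation (a toy sketch; the uniform k-subset adversary is purely illustrative):

```python
import random

# Toy simulation of the rewinding reduction B: fix a "selective" corruption
# set S first, then rerun the adaptive adversary A with fresh randomness
# until every party that A chooses to corrupt happens to lie inside S.
random.seed(1)
n, k = 10, 3  # n parties; the adversary adaptively corrupts k of them

S = set(random.sample(range(n), k))  # B's guess, fixed before any run of A

def run_adversary():
    """Stand-in adaptive adversary: corrupts k uniformly random parties."""
    return set(random.sample(range(n), k))

runs = 0
while True:
    runs += 1
    if run_adversary() <= S:  # all of A's corruptions fall inside S
        break

# For this uniform adversary a run succeeds only when it picks S itself, so
# the expected number of runs is C(10,3) = 120: exponential in general, but
# since the reduction is information-theoretic, rerunning costs no advantage,
# only (unbounded) time.
print("succeeded after", runs, "runs")
```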

2 Preliminaries

Notation. Throughout the paper, we let \(\kappa \in \mathbb {N}\) denote the security parameter and \(1^\kappa \) its unary representation. An efficient randomized algorithm \(\mathcal {A}\) is called probabilistic polynomial time (PPT for short) if there exists a polynomial \(p(\cdot )\) such that its running time is bounded by p(|x|) for every input x. A function \(\textsf{negl}:\mathbb {N}\rightarrow \mathbb {R}^+\) is called negligible if for every positive polynomial f(x), there exists \(x_0\) such that for all \(x>x_0 :\textsf{negl}(x)<1/f(x)\). If clear from the context, we sometimes omit \(\kappa \) for improved readability. The set \(\{1,\dots , n\}\) is denoted as [1, n] for a positive integer n. For the equality check of two elements, we use “\(=\)”. The assign operator is denoted with “\(:=\)”, whereas randomized assignment is denoted by \(a\leftarrow A\) for a randomized algorithm A, where the randomness is not explicit. We write \(\mathcal D_1\approx _c \mathcal D_2\) to denote that two distributions \(\mathcal D_1\) and \(\mathcal D_2\) are computationally indistinguishable.

Definition 1

(Secret Sharing). For any two positive integers \(n\) and \(t<n\), an \((n,t)_{\mathbb {Z}_p^{a\times b}}\)-secret-sharing scheme over \(\mathbb {Z}_p^{a\times b}\) for \(a,b\in \mathbb {N}\) consists of two functions \(\textsf{Share}\) and \(\textsf{Rec}\). \(\textsf{Share}\) is a randomized function that takes a secret \({\textbf{M}}\in \mathbb {Z}_p^{a\times b}\) and outputs \(({\textbf{M}}_1, \ldots , {\textbf{M}}_{n})\leftarrow \textsf{Share}({\textbf{M}},\mathbb {Z}_p^{a\times b},n,t)\), where \({\textbf{M}}_i\in \mathbb {Z}_p^{a\times b}\) for all \(i\in [1,n]\). The pair of functions \((\textsf{Share}, \textsf{Rec})\) satisfies the following requirements.

  • Correctness: For any secret \({\textbf{M}}\in \mathbb {Z}_p^{a\times b}\) and a set of parties \(\{i_1, i_2, \ldots , i_k\} \subseteq [1,n]\) such that \(k\ge t\), we have

    $$\begin{aligned} \Pr [\textsf{Rec}({\textbf{M}}_{i_1}, \ldots , {\textbf{M}}_{i_k}: ({\textbf{M}}_1, \ldots , {\textbf{M}}_{n})\leftarrow \textsf{Share}({\textbf{M}},\mathbb {Z}_p^{a\times b},n,t)) = {\textbf{M}}] = 1 \; . \end{aligned}$$
  • Security: For any secret \({\textbf{M}}\in \mathbb {Z}_p^{a\times b}\) and a set of parties \(S=\{i_1, i_2, \ldots , i_k\} \subseteq [1,n]\) such that \(|S|=k<t\), for every information-theoretic adversary \(\mathcal {A}\) we have

    $$\begin{aligned} \Pr \left[ S=\{i_j\}_{j\in [1,k]} \wedge {\textbf{M}}^*={\textbf{M}}\left| \begin{aligned} & ({\textbf{M}}_1, \ldots , {\textbf{M}}_{n})\leftarrow \textsf{Share}({\textbf{M}},\mathbb {Z}_p^{a\times b},n,t)\\ & S \leftarrow \mathcal {A}() \\ & {\textbf{M}}^*\leftarrow \mathcal {A}({\textbf{M}}_{i_1}, \ldots , {\textbf{M}}_{i_k})\\ \end{aligned} \right. \right] =1/p\; . \end{aligned}$$

    We follow standard nomenclature to call this “selective security”. In case of “adaptive security”, \(\mathcal {A}\) adaptively chooses \(i_j\in [1,n]\) to get \({\textbf{M}}_{i_j}\) one at a time.

We briefly recall the well-known secret sharing scheme due to Shamir [61]. In \((n,t)\)-Shamir Secret Sharing, a secret s is shared among n parties via n evaluations of a polynomial of degree \(t-1\). Reconstruction of the secret is essentially Lagrange interpolation, where one computes the Lagrange polynomials \(\{\lambda _{i}(x)\}_{i\in S}\) and linearly combines them with the given polynomial evaluations. The degree of the polynomial ensures that one needs at least \(|S|=t\) polynomial evaluations. In this work, we use Shamir Secret Sharing to secret-share a matrix of size \(a\times b\), i.e., we use ab parallel instances of Shamir Secret Sharing. To keep our exposition simple, we however assume that we have an \((n,t)\)-Shamir Secret Sharing scheme \((\textsf{Share}, \textsf{Rec})\) which operates on matrices. Since our work uses Shamir Secret Sharing quite generically, it is convenient to make this abstraction without going into the details.
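For concreteness, a minimal sketch of \(\textsf{Share}\) and \(\textsf{Rec}\) for a scalar secret (Python; a Mersenne prime stands in for the group order p, and the matrix case of the definition corresponds to running ab such instances in parallel):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime standing in for the group order p

def share(secret, n, t):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    # Party i's share is the degree-(t-1) polynomial evaluated at x = i.
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def lagrange_at_zero(xs):
    """Lagrange coefficients lambda_i(0) for the evaluation points xs."""
    lams = {}
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        lams[i] = num * pow(den, P - 2, P) % P
    return lams

def rec(shares):
    """Reconstruct the secret from >= t shares {party index: share}."""
    lams = lagrange_at_zero(list(shares))
    return sum(lams[i] * s for i, s in shares.items()) % P

secret = 123456789
shares = share(secret, n=5, t=3)
assert rec({i: shares[i] for i in (1, 3, 5)}) == secret
```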

Definition 2

(Bilinear Groups). Let \(\textsf{ABSGen}(1^\kappa )\) be an asymmetric bilinear group generator that returns a tuple \(\mathcal {G}:=(p,\mathbb {G}_1, \mathbb {G}_2,\mathbb {G}_T,\textsf{P}_1,\textsf{P}_2,e)\), where \(\mathbb {G}_1\), \(\mathbb {G}_2\) and \(\mathbb {G}_T\) are cyclic groups of the same prime order p and no efficiently computable homomorphism between \(\mathbb {G}_1\) and \(\mathbb {G}_2\) is known. \(\textsf{P}_1\) and \(\textsf{P}_2\) are generators of \(\mathbb {G}_1\) and \(\mathbb {G}_2\), respectively, and \(e:\mathbb {G}_1\times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) is an efficiently computable non-degenerate bilinear map with the following properties:

  • \(\forall \, a, b\in \mathbb {Z}_p\), \(e([a]_1,[b]_2)=[ab]_T= e([b]_1,[a]_2) ,\)

  • \(\forall \, a, b\in \mathbb {Z}_p\), \(e([a+b]_1,[1]_2)=e([a]_1,[1]_2)e([b]_1,[1]_2) ,\)

where we use the implicit representation of group elements: for \(\zeta \in \{1,2,T\}\) and an integer \(\alpha \in \mathbb {Z}_p\), the implicit representation of \(\alpha \) in group \(\mathbb {G}_\zeta \) is defined by \([\alpha ]_\zeta =\alpha \textsf{P}_\zeta \in \mathbb {G}_\zeta \), where \(\textsf{P}_{T}=e(\textsf{P}_1,\textsf{P}_2)\). More generally, the implicit representation of a matrix \(\textbf{A}=(\alpha _{ij})\in \mathbb {Z}_p^{m \times n}\) in \(\mathbb {G}_\zeta \) is denoted by \([\textbf{A}]_\zeta \) and we have:

$$\begin{aligned}{}[\textbf{A}]_\zeta = \begin{pmatrix} \alpha _{1,1}\textsf{P}_\zeta & \cdots & \alpha _{1,n}\textsf{P}_\zeta \\ \alpha _{2,1}\textsf{P}_\zeta & \cdots & \alpha _{2,n}\textsf{P}_\zeta \\ \vdots & \ddots & \vdots \\ \alpha _{m,1}\textsf{P}_\zeta & \cdots & \alpha _{m,n}\textsf{P}_\zeta \end{pmatrix} \; . \end{aligned}$$

For two matrices \(\textbf{A}\) and \(\textbf{B}\) with matching dimensions we define \(e([\textbf{A}]_1, [\textbf{B}]_2) =[\textbf{A}\textbf{B}]_T\).
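The identity \(e([\textbf{A}]_1, [\textbf{B}]_2) =[\textbf{A}\textbf{B}]_T\) is pure bookkeeping on the exponents. The following toy check (which manipulates exponents directly, something impossible in a real bilinear group) makes this explicit:

```python
# Toy check of e([A]_1, [B]_2) = [AB]_T, working on exponents directly
# (impossible in a real bilinear group; we only verify the bookkeeping).
P = 101  # small prime for readability

def matmul(A, B):
    """Matrix product over Z_P."""
    return [[sum(a * b for a, b in zip(row, col)) % P
             for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # exponents of "[A]_1", a 3x2 matrix
B = [[7, 8, 9], [10, 11, 12]]  # exponents of "[B]_2", a 2x3 matrix

# Entry (i,j) of [AB]_T is the product of pairings e([a_ik]_1, [b_kj]_2)
# = [a_ik * b_kj]_T, i.e. on exponents: sum_k a_ik * b_kj mod P.
AB = matmul(A, B)
assert AB[0][0] == (1 * 7 + 2 * 10) % P  # = 27
```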

Definition 3

(Matrix Distribution). Let \(k,\ell \in \mathbb {N}^*\) s.t. \(k< \ell \). We call \(\mathcal {D}_{\ell ,k}\) a matrix distribution if it outputs matrices over \(\mathbb {Z}_p^{\ell \times k}\) of full rank k in polynomial time. W.l.o.g., we assume that the first k rows of a matrix \(\textbf{A}\leftarrow \mathcal {D}_{\ell ,k}\) form an invertible matrix. For \(\ell =k+1\), we write \(\mathcal {D}_k\) for short.

Next, we recall the Matrix Decisional Diffie-Hellman assumption, which is defined over \(\mathbb {G}_\zeta \) for \(\zeta \in \{1,2\}\) and states that the two distributions \(([\textbf{A}]_\zeta , [\textbf{A}\textbf{r}]_\zeta )\) and \(([\textbf{A}]_\zeta , [\textbf{u}]_\zeta )\), where \(\textbf{A}\leftarrow \mathcal {D}_{\ell ,k}, \textbf{r}\leftarrow \mathbb {Z}_p^k, \textbf{u}\leftarrow \mathbb {Z}_p^{\ell }\), are computationally indistinguishable.

Definition 4

(\(\mathcal {D}_{\ell ,k}\)-Matrix Decisional Diffie-Hellman (\(\mathcal {D}_{\ell ,k}\) -MDDH) Assumption [38]). For a given security parameter \(\kappa \), let \(k, \ell \in \mathbb {N}^*\) s.t. \(k< \ell \) and let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution as in Definition 3. We say the \(\mathcal {D}_{\ell ,k}\) -MDDH assumption holds over \(\mathbb {G}_\zeta \) for \(\zeta \in \{1,2\}\) if for all PPT adversaries \(\mathcal {A}\) we have:

$$\begin{aligned} \begin{aligned} Adv_{\mathcal {D}_{\ell ,k},\mathbb {G}_\zeta ,\mathcal {A}}^\textsf{MDDH}(\kappa )= &\Big |\Pr \left[ \mathcal {A}(\mathcal {G},[\textbf{A}]_\zeta , [\textbf{A}\textbf{r}]_\zeta )=1\right] \\ {} & -\Pr \left[ \mathcal {A}(\mathcal {G},[\textbf{A}]_\zeta , [\textbf{u}]_\zeta )=1\right] \Big |\le \textsf{negl}(\kappa )\; , \end{aligned} \end{aligned}$$

where \(\mathcal {G}\leftarrow \textsf{ABSGen}(1^\kappa )\), \(\textbf{A}\leftarrow \mathcal {D}_{\ell ,k}, \textbf{r}\leftarrow \mathbb {Z}_p^k\) and \(\textbf{u}\leftarrow \mathbb {Z}_p^\ell \).
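The following toy sampler illustrates the two MDDH distributions for \(\mathcal {D}_1\) (i.e., \(\ell =2\), \(k=1\)). Note that with the exponents in the clear, as here, the distributions are trivially distinguishable via a column-span test; the assumption is only meaningful when the values are hidden in the group:

```python
import secrets

P = 2**61 - 1  # toy prime standing in for the group order p
ell, k = 2, 1  # D_1 parameters: A is a 2x1 matrix

def sample_real():
    """One sample of ([A], [A r]) -- here with exponents in the clear."""
    A = [[secrets.randbelow(P)] for _ in range(ell)]
    r = secrets.randbelow(P)
    return A, [row[0] * r % P for row in A]

def sample_random():
    """One sample of ([A], [u]) with u uniform."""
    A = [[secrets.randbelow(P)] for _ in range(ell)]
    return A, [secrets.randbelow(P) for _ in range(ell)]

# With exponents visible, a span test distinguishes immediately: A*r lies in
# the column span of A (for k = 1: z is a multiple of A's single column),
# while a uniform u does so only with negligible probability.
A, z = sample_real()
is_in_span = z[0] * A[1][0] % P == z[1] * A[0][0] % P
assert is_in_span
```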

Definition 5

(\(\mathcal {D}_{k}\)-Kernel Matrix Diffie-Hellman (\(\mathcal {D}_{k}\) -KerMDH) Assumption [58]). For a given security parameter \(\kappa \), let \(k \in \mathbb {N}^*\) and let \(\mathcal {D}_k\) be a matrix distribution as in Definition 3. We say the \(\mathcal {D}_{k}\) -KerMDH assumption holds over \(\mathbb {G}_\zeta \) for \(\zeta \in \{1,2\}\) if for all PPT adversaries \(\mathcal {A}\) we have:

$$\begin{aligned} Adv_{\mathcal {D}_{k},\mathbb {G}_\zeta ,\mathcal {A}}^\textsf{KerMDH}(\kappa )=\Pr \left[ \textbf{c}^\top \textbf{A}=\textbf{0} \,\wedge \, \textbf{c}\ne \textbf{0} \,:\, [\textbf{c}]_{3-\zeta }\leftarrow \mathcal {A}(\mathcal {G},[\textbf{A}]_\zeta )\right] \le \textsf{negl}(\kappa )\; , \end{aligned}$$

where \(\mathcal {G}\leftarrow \textsf{ABSGen}(1^\kappa )\) and \(\textbf{A}\leftarrow \mathcal {D}_{k}\).

The Kernel Matrix Diffie-Hellman assumption is a natural computational analog of the MDDH assumption. It is well-known that for all \(k\ge 1\), \(\mathcal {D}_{k}\) -MDDH \(\Rightarrow \) \(\mathcal {D}_{k}\) -KerMDH [51, 58].

3 Threshold Structure-Preserving Signatures

In this section, we first present our security model for Threshold Structure-Preserving Signatures (TSPS) and then give our construction and prove its security.

3.1 TSPS: Syntax and Security Definitions

First, we recall the definition of Threshold Structure-Preserving Signatures (TSPS) from [32] and their main security properties: correctness and threshold unforgeability. Informally, a threshold signature scheme enables a group of \(n\) servers to collaboratively sign a message. In this paper, we assume the existence of a trusted dealer who shares the secret key among the signers. However, there are straightforward and well-known techniques, in particular distributed key generation (DKG) protocols (e.g., [59]), that eliminate the need for this trust.

Definition 6

(Threshold Structure-Preserving Signatures [32]). Given a security parameter \(\kappa \) and a bilinear group, an \((n,t)\)-TSPS consists of the following PPT algorithms:

  • \(\textsf{pp}\leftarrow \textsf{Setup}(1^\kappa )\): The setup algorithm takes the security parameter \(\kappa \) as input and returns the set of public parameters \(\textsf{pp}\) as output.

  • \((\{\textsf{sk}_i, \textsf{vk}_i\}_{i \in [1,n]}, \textsf{vk})\leftarrow \textsf{KeyGen}(\textsf{pp},n,t)\): The key generation algorithm takes the public parameters \(\textsf{pp}\) along with two integers \(n,t\) s.t. \(1\le t\le n\) as inputs. It then returns secret/verification keys \((\textsf{sk}_i,\textsf{vk}_i)\) for \(i \in [1,n]\) along with a global verification key \(\textsf{vk}\) as output.

  • \(\varSigma _i \leftarrow \textsf{ParSign}(\textsf{pp},\textsf{sk}_i, [\textbf{m}])\): The partial signing algorithm takes \(\textsf{pp}\), the \(i^{th}\) party’s secret key, \(\textsf{sk}_i\), and a message \([\textbf{m}]\in \mathcal {M}\) as inputs. It then returns a partial signature \(\varSigma _i\) as output.

  • \(0/1 \leftarrow \textsf{ParVerify}(\textsf{pp}, \textsf{vk}_i, [\textbf{m}], \varSigma _i)\): The partial verification algorithm is deterministic; it takes \(\textsf{pp}\), the \(i^{th}\) verification key, \(\textsf{vk}_i\), a message \([\textbf{m}] \in \mathcal {M}\) and a partial signature \(\varSigma _i\) as inputs. It returns 1 (accept) if the partial signature is valid and 0 (reject) otherwise.

  • \(\varSigma \leftarrow \textsf{CombineSign}(\textsf{pp}, T, \{\varSigma _i\}_{i\in T})\): The combine algorithm takes a subset \(T\subseteq [1,n]\) and partial signatures \(\varSigma _i\) for \(i \in T\), and returns an aggregated signature \(\varSigma \) as output.

  • \(0/1 \leftarrow \textsf{Verify}(\textsf{pp}, \textsf{vk}, [\textbf{m}], \varSigma )\): The verification algorithm is deterministic; it takes \(\textsf{pp}\), the global verification key, \(\textsf{vk}\), a message \([\textbf{m}] \in \mathcal {M}\) and an aggregated signature \(\varSigma \) as inputs. It returns 1 (accept) if the aggregated signature is valid and 0 (reject) otherwise.
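The six algorithms above can be summarized as an interface sketch (Python; the types are placeholders of our own choosing and not part of the definition):

```python
from typing import Any, Dict, List, Protocol, Tuple

class TSPS(Protocol):
    """Interface sketch of an (n,t)-TSPS; `Any` is a placeholder type.

    A concrete scheme would fix message, key and signature types to
    (vectors of) group elements.
    """

    def setup(self, kappa: int) -> Any: ...                       # pp
    def keygen(self, pp: Any, n: int, t: int
               ) -> Tuple[Dict[int, Any], Dict[int, Any], Any]: ...  # ({sk_i}, {vk_i}, vk)
    def par_sign(self, pp: Any, sk_i: Any, m: Any) -> Any: ...    # Sigma_i
    def par_verify(self, pp: Any, vk_i: Any, m: Any, sigma_i: Any) -> bool: ...
    def combine(self, pp: Any, T: List[int],
                sigmas: Dict[int, Any]) -> Any: ...               # Sigma
    def verify(self, pp: Any, vk: Any, m: Any, sigma: Any) -> bool: ...
```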

Correctness. Correctness guarantees that a signature obtained from a set \(T \subseteq [1,n]\) of honest signers always verifies for \(|T|\ge t\).

Definition 7

(Correctness). An \((n,t)\)-TSPS scheme is called correct if we have:

$$\begin{aligned} \Pr \left[ \begin{aligned} & \forall ~\textsf{pp}\leftarrow \textsf{Setup}(1^\kappa ), (\{\textsf{sk}_i, \textsf{vk}_i\}_{i \in [1,n]},\textsf{vk}) \leftarrow \textsf{KeyGen}(\textsf{pp},n,t), [\textbf{m}] \in \mathcal {M}, \\ {} & \varSigma _i \leftarrow \textsf{ParSign}(\textsf{pp},\textsf{sk}_i,[\textbf{m}]) ~ \text {for}~ i \in [1,n], \forall ~T\subseteq [1,n], |T|\ge t, \\ {} & \varSigma \leftarrow \textsf{CombineSign}\left( \textsf{pp}, T, \left\{ \varSigma _i \right\} _{i \in T}\right) :\textsf{Verify}\left( \textsf{pp},\textsf{vk},[\textbf{m}],\varSigma \right) = 1 \end{aligned} \right] =1 \; . \end{aligned}$$

Unforgeability. Our security model for threshold unforgeability extends that of Crites et al. [32]. We therefore recall a recent work by Bellare et al. [18], which investigates existing security notions and proposes stronger, more realistic notions for threshold signatures under static corruptions. In particular, the authors of [18] present a hierarchy of security notions for non-interactive schemes. We focus on fully non-interactive schemes, i.e., schemes that do not require a pre-processing round, and thus only the \(\mathsf {TS\text {-}UF\text {-}}0\) and \(\mathsf {TS\text {-}UF\text {-}}1\) notions are relevant in this paper. The \(\mathsf {TS\text {-}UF\text {-}}0\) notion is the less stringent notion of unforgeability: if the adversary has previously seen a partial signature on a challenge message \([\textbf{m}^*]\), forging a signature for that specific message is considered a trivial forgery. The security of the original TSPS [32] is proven under this notion of unforgeability.

The stronger \(\mathsf {TS\text {-}UF\text {-}}1\) notion, which is our main focus, allows adversaries to query the signing oracle up to \(t-|\textsf{CS}|\) times for partial signatures, even on the challenge message. Here \(\textsf{CS}\) with \(|\textsf{CS}|<t\) denotes the set of (statically corrupted) signers. The model in [18], as well as the TSPS construction in [32], only considers static corruptions. We additionally integrate the core elements of the model introduced in the recent work of Crites et al. [33], adapted to fully non-interactive schemes, in order to support fully adaptive corruptions. Our model is depicted in Fig. 1. The dashed box and the solid white box in the winning condition apply to the \(\mathsf {TS\text {-}UF\text {-}}0\) and \(\mathsf {TS\text {-}UF\text {-}}1\) notions, respectively. Grey boxes are only present in the adaptive version of the game, i.e., \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}0\) and \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}1\).

Definition 8

(Threshold Unforgeability). Let \(\textsf{TSPS}=(\textsf{Setup},\textsf{KeyGen},\) \(\textsf{ParSign},\textsf{ParVerify}, \textsf{CombineSign},\textsf{Verify})\) be an \((n,t)\)-TSPS scheme over message space \(\mathcal {M}\) and let \(\textsf{prop} \in \{\mathsf {TS\text {-}UF\text {-}}b, \textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}b\}_{b \in \{0,1\}}\). The advantage of a PPT adversary \(\mathcal {A}\) playing the security games described in Fig. 1 is defined as

$$\begin{aligned} \textbf{Adv}_\mathsf{TSPS,\mathcal {A}}^\textsf{prop}(\kappa )=\Pr \left[ \begin{aligned} \textbf{G}^\textsf{prop}_\mathsf{TS,\mathcal {A}}(\kappa )=1 \end{aligned} \right] . \end{aligned}$$

A TSPS achieves \(\textsf{prop}\)-security if \(\textbf{Adv}_\mathsf{TSPS,\mathcal {A}}^\textsf{prop}(\kappa )\le \textsf{negl}(\kappa )\) holds for every PPT adversary \(\mathcal {A}\).

Fig. 1.

Games defining the \(\mathsf {TS\text {-}UF\text {-}}0\), \(\mathsf {TS\text {-}UF\text {-}}1\), \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}0\) and \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}1\) unforgeability notions of threshold signatures.

3.2 Core Lemma

Prior to introducing our construction, we first present the core lemma that forms the basis of the proofs of our proposed TSPS. It extends the core lemmas from [51, 52]; however, it is important to note that both of these schemes are standard SPS, where there is no need to simulate signatures on forged messages. In contrast, both the \(\mathsf {TS\text {-}UF\text {-}}{1}\) and \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security models necessitate the simulation of partial signature queries on forged messages. Thus we define our core lemma with a key difference being the introduction of a new oracle, denoted as \(\mathcal {O}^{**}(\cdot )\).

Lemma 1

(Core Lemma). Let the game \(\textbf{G}^\textsf{Core}_{\mathcal {D}_{k}, \textsf{ABSGen}}(\kappa )\) be defined as in Fig. 2. For any adversary \(\mathcal {A}\) with advantage \({Adv}^{\textsf{Core}}_{\mathcal {D}_{k}, \textsf{ABSGen},\mathcal {A}}(\kappa ):=|\Pr [\textbf{G}^\textsf{Core}_{\mathcal {D}_{k}, \textsf{ABSGen}}(\kappa )=1]-1/2|\), there exists an adversary \(\mathcal {B}\) against the \(\mathcal {D}_{k}\)-MDDH assumption with running time \(\textbf{T}(\mathcal {B})\approx \textbf{T}(\mathcal {A})\) such that

$$\begin{aligned} { Adv}^{\textsf{Core}}_{\mathcal {D}_{k}, \textsf{ABSGen},\mathcal {A}}(\kappa ) \le 2q{ Adv}^\mathsf{{MDDH}}_{\mathcal {D}_{k}, \mathbb {G}_1,\mathcal {B}}(\kappa )+q/p \; , \end{aligned}$$

where q is a bound on the number of queries requested by adversary \(\mathcal {A}\) to oracle \(\mathcal {O}_b(\cdot )\). Note that \(\mathcal {A}\) can query each of the other oracles only once.

Fig. 2.

Game defining the core lemma, \(\textbf{G}^\textsf{Core}_{\mathcal {D}_{k}, \textsf{ABSGen}}(\kappa )\).

Proof Sketch. The proof of this lemma follows the proofs of the core lemmas in [51, 52]. The fundamental concept of these proofs is an information-theoretic argument that \((\textbf{t}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{}),\textbf{U}_{}+\tau ^*\textbf{V}_{})\) is identically distributed to \((\mu {\textbf{a}^{\perp }}^{\top }+\textbf{t}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{}),\textbf{U}_{}+\tau ^*\textbf{V}_{})\) for \(\mu \leftarrow \mathbb {Z}_p\), \(\textbf{a}^{\perp },\textbf{t}\leftarrow \mathbb {Z}_p^{k+1}\) and \(\tau \ne \tau ^*\). We use \(\left[ b\mu {\textbf{a}^{\perp }}^{\top }+\textbf{t}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})\right] _1\) to simulate \(\mathcal {O}_{b}(\left[ \tau \right] _1)\), \(\left[ \textbf{U}_{}+\tau ^*\textbf{V}_{}\right] _2\) to simulate \(\mathcal {O}^{*}(\left[ \tau ^*\right] _2)\) and \(\left[ \textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})\right] _1\) to simulate \(\mathcal {O}^{**}(\left[ \tau ^*\right] _1)\). The detailed proof can be found in Sect. 3.5.    \(\square \)

Fig. 3.

Our proposed TSPS construction.

3.3 Our Threshold SPS Construction

Given a collision-resistant hash function \(\mathcal {H}:\{0,1\}^* \rightarrow \mathbb {Z}_p\) and message space \(\mathcal {M}:= \mathbb {G}_1^\ell \), we present our \((n,t)\)-TSPS construction in Fig. 3. It consists of the six PPT algorithms \(\textsf{Setup}\), \(\textsf{KeyGen}\), \(\textsf{ParSign}\), \(\textsf{ParVerify}\), \(\textsf{CombineSign}\) and \(\textsf{Verify}\), as defined in Definition 6. Similar to the setting of Bellare et al. [18], we assume there is a dealer who is responsible for generating the key pairs of all signers and the global verification key.

3.4 Security

Theorem 1

Under the \(\mathcal {D}_{k}\)-MDDH Assumption in \(\mathbb {G}_1\) and the \(\mathcal {D}_{k}\)-KerMDH Assumption in \(\mathbb {G}_2\), the proposed Threshold Structure-Preserving Signature construction in Fig. 3 achieves \({\mathsf {TS\text {-}UF\text {-}}0}\) security against an efficient adversary making at most q partial signature queries.

Proof

We prove the above theorem through a series of games, and we use \(\textbf{Adv}_i\) to denote the advantage of the adversary \(\mathcal {A}\) in winning Game \(i\). The games are described below.

  • Game 0. This is the \(\mathsf {TS\text {-}UF\text {-}}0\) security game described in Definition 8. As shown in Fig. 4, an adversary \(\mathcal {A}\), after receiving the set of public parameters \(\textsf{pp}\), returns \((n, t, \textsf{CS})\), where \(n\), \(t\) and \(\textsf{CS}\) represent the total number of signers, the threshold, and the set of corrupted signers, respectively. The adversary can query the partial signing oracle \(\mathcal {O}^\textsf{PSign}(\cdot )\) to receive partial signatures, and \(q\) denotes the total number of these queries. In the end, the adversary outputs a message \([\textbf{m}^*]_1\) and a forged signature \(\varSigma ^*\).

  • Game 1. We modify the verification procedure to the one described in Fig. 5. Consider any forged message/signature pair \(([\textbf{m}^*]_1,\varSigma ^*=(\widehat{\sigma }_{1},\widehat{\sigma }_{2},\widehat{\sigma }_{3},\widehat{\sigma }_{4}))\), where \(e(\widehat{\sigma }_{2},\widehat{\sigma }_{4})= e(\widehat{\sigma }_{3},[{1}]_2)\), \(|\textsf{CS}|<t\) and \(S_1([\textbf{m}^*]_1)=\emptyset \). It is easy to observe that if the pair \(([\textbf{m}^*]_1,\varSigma ^*)\) meets the \(\textsf{Verify}^*(\cdot )\) criteria outlined in Fig. 5, it also satisfies the \(\textsf{Verify}(\cdot )\) procedure described in Fig. 4. This is primarily due to the fact that:

    $$ \widehat{\sigma }_{1}=[\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}]_1+ \widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{}\;\; \Longrightarrow \;\; \widehat{\sigma }_{1}\textbf{A}=[\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}\textbf{A}]_1+ (\widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{})\textbf{A} \; . $$

    Suppose there exists a message/signature pair \(([\textbf{m}^*]_1,\varSigma ^*=(\widehat{\sigma }_{1},\widehat{\sigma }_{2},\widehat{\sigma }_{3},\widehat{\sigma }_{4}))\) that satisfies \(\textsf{Verify}(\cdot )\) but not \(\textsf{Verify}^*(\cdot )\); then we can compute a non-zero vector \(\textbf{c}\) in the kernel of \(\textbf{A}\) as follows:

    $$ \textbf{c}:= {\widehat{\sigma }_{1}}-{([\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}]_1+ \widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{})}\in \mathbb {G}_1^{1\times (k+1)} \; . $$

    According to the \(\mathcal {D}_{k}\)-KerMDH assumption over \(\mathbb {G}_2\), described in Definition 5, computing such a vector \(\textbf{c}\) is computationally hard. Thus,

    $$ |\textbf{Adv}_0-\textbf{Adv}_1|\le { Adv}^{\textsf{KerMDH}}_{\mathcal {D}_{k},\mathbb {G}_2, \mathcal {B}_0}(\kappa ) \; . $$
  • Game 2. On receiving a partial signature query on a message \([\textbf{m}_i]_1\), the query list is updated to include the message \([\textbf{m}_i]_1\) along with its corresponding tag, \(\tau _i:=\mathcal {H}([{\textbf{m}_i}]_1)\). The challenger aborts if an adversary can generate two tuples \(([\textbf{m}_i]_1,\tau _i)\), \(([\textbf{m}_j]_1,\tau _j)\) with \([\textbf{m}_i]_1\ne [\textbf{m}_j]_1\) and \(\tau _i=\tau _j\). By the collision resistance property of the underlying hash function we have,

    $$\begin{aligned} |\textbf{Adv}_{1}-\textbf{Adv}_{2}|\le { Adv}^\textsf{CRHF}_{\mathcal {H}}(\kappa )\; . \end{aligned}$$
  • Game 3. In this game, we introduce randomness into the partial signatures by adding \(\mu \textbf{a}^{\perp }\) to each partial signature, where \(\mu \) is chosen uniformly at random and \(\textbf{a}^{\perp }\) is a non-zero vector in the kernel of \(\textbf{A}\). The new partial signatures still satisfy the verification procedure since \(\textbf{a}^{\perp }\textbf{A}=\textbf{0}\). Figure 6 describes the new partial signing oracle, \(\mathcal {O}^{\textsf{PSign}^*}(\cdot )\).
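The bookkeeping and abort condition introduced in Game 2 above can be sketched as follows. This is a schematic Python illustration (the names `record_query` and `CollisionAbort` are ours, not from the paper): the challenger stores each (tag, message) pair and aborts exactly when two distinct messages collide under \(\mathcal {H}\), the event bounded by \({ Adv}^\textsf{CRHF}_{\mathcal {H}}(\kappa )\).

```python
# Minimal sketch (names are ours) of the Game 2 bookkeeping: record each
# (tag, message) pair and abort if two distinct messages ever share a tag,
# which is precisely a collision for the hash function H.
import hashlib

class CollisionAbort(Exception):
    pass

tag_of = {}  # tag -> message that produced it

def record_query(message: bytes) -> int:
    tag = int.from_bytes(hashlib.sha256(message).digest(), "big")
    prev = tag_of.get(tag)
    if prev is not None and prev != message:
        # two distinct messages with the same tag: the challenger aborts
        raise CollisionAbort("two distinct messages share a tag")
    tag_of[tag] = message
    return tag

t1 = record_query(b"m1")
t2 = record_query(b"m2")
assert t1 != t2
assert record_query(b"m1") == t1  # repeating a message causes no abort
```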

Fig. 4.

\(\textsf{Game}_{{0}}\).

Fig. 5.

Modifications in \(\textsf{Game}_{{1}}\).

Fig. 6.

Modifications in \(\textsf{Game}_{{3}}\).

Lemma 2

\(|\textbf{Adv}_2-\textbf{Adv}_3|\le 2q{Adv}^\textsf{MDDH}_{\mathcal {D}_{k},\mathbb {G}_1, \mathcal {B}_1}(\kappa )+q/p\).

Fig. 7.

Reduction to the core lemma in Lemma 1.

Proof

We prove this lemma through a reduction to the core lemma, Lemma 1. Assume there exists an adversary \(\mathcal {A}\) that can distinguish the games \(\textsf{Game}_{{2}}\) and \(\textsf{Game}_{{3}}\); we can then use it to build an adversary \(\mathcal {B}_1\), defined in Fig. 7, which breaks the core lemma, Lemma 1. The adversary \(\mathcal {B}_1\) has access to four oracles, \(\textsf{Init}(\cdot ), \mathcal {O}_b(\cdot ), \mathcal {O}^*(\cdot ),\mathcal {O}^{**}(\cdot )\); however, in this reduction we only use the first three, defined as follows:

  • Oracle \(\textsf{Init}(\cdot )\): The oracle \(\textsf{Init}\) provides the set of public parameters \(\textsf{pp}\).

  • Oracle \(\mathcal {O}_b(\cdot )\): On the i-th query to this oracle on \(\left[ \tau \right] _1\), it outputs \(\left( [b\mu \textbf{a}^{\bot }+\textbf{r}_i^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \cdot \textbf{V}_{})]_1,[\textbf{r}_i^\top \textbf{B}^{\top }]_1\right) \) depending on a random bit b.

  • Oracle \(\mathcal {O}^*(\cdot )\): On input \([\tau ^{*}]_2\), it returns \([\textbf{U}_{}+\tau ^*\textbf{V}_{}]_2 \).

When the lemma challenger selects the challenge bit as \(b=0\), it leads to the game \(\textsf{Game}_{{2}}\), and when \(b=1\), it results in the game \(\textsf{Game}_{{3}}\). All the other values are simulated perfectly. Thus, \(|\textbf{Adv}_2-\textbf{Adv}_3|\le { Adv}^\textsf{Core}_{\mathcal {D}_{k},\textsf{ABSGen}, \mathcal {B}_1}(\kappa )\) holds and therefore we have,

$$\begin{aligned} |\textbf{Adv}_2-\textbf{Adv}_3|\le { Adv}^\textsf{Core}_{\mathcal {D}_{k},\textsf{ABSGen}, \mathcal {B}_1}(\kappa ) \le 2q{ Adv}^\textsf{MDDH}_{\mathcal {D}_{k},\mathbb {G}_1, \mathcal {B}_1}(\kappa )+q/p \; . \end{aligned}$$    \(\square \)
  • Game 4. In this game, we apply the modifications described in Fig. 8. Shamir secret sharing (see Definition 1) ensures that \((\textbf{K}_{1},\ldots ,\textbf{K}_{n})\) in \(\textsf{Game}_{{3}}\) and \((\widetilde{\textbf{K}}_{1},\ldots ,\widetilde{\textbf{K}}_{n})\) in \(\textsf{Game}_{{4}}\) have identical distributions; in particular, \(\textbf{K}_{i}\) in \(\textsf{Game}_{{3}}\) and \(\widetilde{\textbf{K}}_{i}\) in \(\textsf{Game}_{{4}}\) are identically distributed. In \(\textsf{Game}_{{4}}\), on the other hand, \(\widetilde{\textbf{K}}_{i}\) and \(\textbf{K}_{i}=\widetilde{\textbf{K}}_{i}-\textbf{u}_i\textbf{a}^{\perp }\) are identically distributed. Combining these observations, it follows that \(\textbf{K}_{i}\) in \(\textsf{Game}_{{3}}\) and \(\textbf{K}_{i}\) in \(\textsf{Game}_{{4}}\) are identically distributed for all \(i\in [1,n]\). Consequently, \(\textbf{K}_{}\) in \(\textsf{Game}_{{3}}\) and \(\textbf{K}_{}+\textbf{u}_0\textbf{a}^{\perp }\) in \(\textsf{Game}_{{4}}\) are identically distributed. This change is therefore purely conceptual and we have,

    $$\begin{aligned} |\textbf{Adv}_3-\textbf{Adv}_4|=0 \; . \end{aligned}$$
    Fig. 8.

    Modification from \(\textsf{Game}_{{3}}\) to \(\textsf{Game}_{{4}}\).

    Now, we give a bound on \(\textbf{Adv}_4\) via an information-theoretic argument. We first consider the information about \(\textbf{u}_0\) (and subsequently \(\{\textbf{u}_i\}_{i\in [1,n] \setminus \textsf{CS}}\)) leaked from \(\textsf{vk}\) (and subsequently \(\{\textsf{vk}_i\}_{i\in [1,n]}\)) and partial signing queries:

    • \(\textsf{vk}:=\left[ \textbf{K}_{}\textbf{A}\right] _2=\left[ \widetilde{\textbf{K}}_{}\textbf{A}\right] _2\) and \(\textsf{vk}_i:=\left[ \textbf{K}_{i}\textbf{A}\right] _2=\left[ \widetilde{\textbf{K}}_{i}\textbf{A}\right] _2\) for all \(i\in [1,n]\).

    • The output of the \(j^{th}\) partial signature query on \((i,\left[ \textbf{m}\right] _1)\) for \(\left[ \textbf{m}\right] _1\ne \left[ \textbf{m}^*\right] _1\) completely hides \(\{\textbf{u}_i\}_{i\in [1,n] \setminus \textsf{CS}}\) (and subsequently \(\textbf{u}_0\) as the adversary has only \(|\textsf{CS}|\) many \(\textbf{u}_i\) with \(|\textsf{CS}|<t\)), since

      $$\begin{aligned} \left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{K}_{i}+\mu _j\textbf{a}^{\perp }= \left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \widetilde{\textbf{K}}_{i}+\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{u}_i\textbf{a}^{\perp }+\mu _j\textbf{a}^{\perp }\end{aligned}$$

      is distributed identically to \(\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \widetilde{\textbf{K}}_{i}+\mu _j\textbf{a}^{\perp }\), because \(\mu _j\textbf{a}^{\perp }\) already hides \(\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{u}_i\textbf{a}^{\perp }\) for uniformly random \(\mu _j\leftarrow \mathbb {Z}_p\).

    To convince the verification procedure to accept a signature \(\varSigma ^*\) on \(\textbf{m}^*\), the adversary must correctly compute \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) (\textbf{K}_{}+\textbf{u}_0\textbf{a}^{\perp })\) and thus \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{u}_0\). Since \(\{\textbf{u}_i\}_{i\in [1,n] \setminus \textsf{CS}}\) (and thereby \(\textbf{u}_0\)) are completely hidden from the adversary, \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{u}_0\) is uniformly random over \(\mathbb {Z}_p\) from the adversary's viewpoint. Therefore, \(\textbf{Adv}_4= 1/p\).    \(\square \)
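The masking step used in the argument above, namely that adding a uniformly random \(\mu \leftarrow \mathbb {Z}_p\) makes \(c+\mu \bmod p\) uniform regardless of the hidden value \(c\), can be checked exactly by enumeration over a small prime. This toy check (the parameter \(p=11\) is our choosing) only illustrates the one-time-pad argument and is not part of the proof:

```python
# Toy, exact check of the information-theoretic step: for a uniform mask
# mu in Z_p, the distribution of c + mu (mod p) is uniform regardless of
# the hidden value c -- the reason mu*a_perp hides (1, m^T) u_i a_perp.
from collections import Counter

p = 11  # small prime, chosen for exhaustive enumeration

def masked_distribution(c: int) -> Counter:
    # distribution of c + mu mod p as mu ranges uniformly over Z_p
    return Counter((c + mu) % p for mu in range(p))

baseline = masked_distribution(0)
# every hidden value c induces exactly the same (uniform) distribution
assert all(masked_distribution(c) == baseline for c in range(p))
assert all(count == 1 for count in baseline.values())  # uniform over Z_p
```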

Theorem 2

Under the \(\mathcal {D}_{k}\)-MDDH Assumption in \(\mathbb {G}_1\) and the \(\mathcal {D}_{k}\)-KerMDH Assumption in \(\mathbb {G}_2\), our Threshold Structure-Preserving Signature construction achieves \({\mathsf {TS\text {-}UF\text {-}}1}\) security against an efficient adversary making at most q partial signature queries.

Proof Sketch. The difference between \({\mathsf {TS\text {-}UF\text {-}}0}\) and \({\mathsf {TS\text {-}UF\text {-}}1}\) lies in the fact that, in the latter model, an adversary can request \(\mathcal {O}^\textsf{PSign}(\cdot )\) queries on the message \(\left[ \textbf{m}^*\right] _1\) for which it aims to forge a signature. The natural restriction in Fig. 1 is expressed as \(|S_1(\left[ \textbf{m}^*\right] _1)|< t - |\textsf{CS}|\), where \(t\) is the threshold value and the corrupted parties \(\textsf{CS}\) are fixed at the beginning of the game. As this security model allows partial signature oracle queries on \(\left[ \textbf{m}^*\right] _1\), we next explore the changes we need to make to the proof of Theorem 1.

\(\textsf{Game}_{{0}}\), \(\textsf{Game}_{{1}}\) and \(\textsf{Game}_{{2}}\) stay the same. To handle \({\mathsf {TS\text {-}UF\text {-}}1}\) adversaries, we introduce an additional game \(\textsf{Game}_{{2}}'\) that handles partial signature queries on the forged message. In \(\textsf{Game}_{{2}}'\), the challenger keeps a list of all partial signature queries and guesses, among the queried messages, the one on which the forgery will be done. More precisely, if \(\mathcal {A}\) makes partial signature queries on \(\left[ \textbf{m}_1\right] _1,\ldots ,\left[ \textbf{m}_\mathcal {Q}\right] _1\) with \(\mathcal {Q}\le q\), the challenger of \(\textsf{Game}_{{2}}'\) correctly guesses the forged message with probability \(1/\mathcal {Q}\), which introduces a degradation in the advantage. This small yet powerful modification allows the challenger in \(\textsf{Game}_{{3}}\) to add a uniformly random quantity \(\mu \) to partial signature oracle responses on \(\left[ \textbf{m}\right] _1\ne \left[ \textbf{m}^*\right] _1\). This is formulated by adding an additional line between lines 2 and 3 in Fig. 6. In particular, the new \(\textsf{Game}_{{3}}'\) (see Fig. 9) sets \(\mu =0\) if \(\left[ \textbf{m}\right] _1=\left[ \textbf{m}^*\right] _1\). Next, we give an intuitive explanation of the indistinguishability of \(\textsf{Game}_{{2}}'\) and \(\textsf{Game}_{{3}}'\), which is essentially a modification of the proof of Lemma 2.
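The \(1/\mathcal {Q}\) degradation from the guessing step in \(\textsf{Game}_{{2}}'\) can be checked on a toy example: because the challenger's uniform guess among \(\mathcal {Q}\) candidate messages is independent of the adversary's behaviour, conditioning a win on a correct guess scales the success probability by exactly \(1/\mathcal {Q}\). The following exact enumeration (all parameters hypothetical) illustrates this:

```python
# Toy, exact check of the guessing degradation used in Game 2': if the
# challenger's uniform guess j* in [1..Q] is independent of the adversary's
# (fixed) forged index, requiring a correct guess scales the adversary's
# success probability by exactly 1/Q. All parameters below are hypothetical.
from fractions import Fraction
from itertools import product

Q = 4                   # number of distinct queried messages (toy value)
forged_index = 2        # adversary forges on message m_2 (arbitrary)
p_win = Fraction(3, 5)  # adversary's standalone success probability (toy)

# enumerate the joint space: (adversary's win draw, challenger's guess)
wins = Fraction(0)
for win_draw, guess in product(range(p_win.denominator), range(1, Q + 1)):
    adversary_wins = win_draw < p_win.numerator
    if adversary_wins and guess == forged_index:
        wins += Fraction(1, p_win.denominator * Q)

assert wins == p_win / Q  # success probability degrades by exactly 1/Q
```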

Fig. 9.

\(\textsf{Game}_{{3}}'\) in the proof of Theorem 2.

The novelty of this work lies in the need to simulate partial signature queries on the forged message \(\left[ \textbf{m}^*\right] _1\), a challenge not addressed in previous works such as [51, 52], upon which this study is based. It is important to mention that an extra oracle, termed \(\mathcal {O}^{**}(\cdot )\), is sufficient for our objectives. On any partial signature query on the forged message \(\left[ \textbf{m}^*\right] _1\), the reduction calls \(\mathcal {O}^{**}(\left[ \tau ^*\right] _1)\) for \(\tau ^*\leftarrow \mathcal {H}(\left[ \textbf{m}^*\right] _1)\). In fact, a single query to \(\mathcal {O}^{**}(\left[ \tau ^*\right] _1)\) suffices to handle multiple partial signature queries on \(\left[ \textbf{m}^*\right] _1\): given a partial signature oracle query on \((i,\left[ \textbf{m}^*\right] _1)\), the reduction uses \(\mathcal {O}^{**}(\cdot )\) of the core lemma (Lemma 1) to get \(\textbf{X}=\left[ \textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})\right] _1\), where \(\tau ^*=\mathcal {H}(\left[ \textbf{m}^*\right] _1)\), and replies with \(\big (\left[ \left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \right] _1\textbf{K}_{i}+\textbf{r}^\top \cdot \textbf{X}\), \(\left[ \textbf{r}^\top \textbf{B}^{\top }\right] _1\), \(\left[ \tau ^*\textbf{r}^\top \textbf{B}^{\top }\right] _1\), \(\left[ \tau ^*\right] _2\big )\) as the partial signature response to \(\mathcal {A}\).

We define \(\textsf{Game}_{{4}}\) exactly as in the proof of Theorem 1. In fact, the argument for the indistinguishability of \(\textsf{Game}_{{3}}\) and \(\textsf{Game}_{{4}}\) from the proof of Theorem 1 applies here as well. The argument that \(\textbf{Adv}_4\) is negligible, however, requires a small modification. As in the proof of Theorem 1, the verification keys \(\textsf{vk}\) and \(\{\textsf{vk}_i\}_{i\in [1,n]}\) stay the same, and partial signature queries on \(\left[ \textbf{m}\right] _1\ne \left[ \textbf{m}^*\right] _1\) do not leak any information about \(\{\textbf{u}_i\}_{i\in [1,n] \setminus \textsf{CS}}\). Since partial signature oracle queries are now allowed on \(\left[ \textbf{m}^*\right] _1\), at most \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)}\) are additionally leaked to the adversary. To summarise, an adversary in \({\mathsf {TS\text {-}UF\text {-}}1}\) obtains at most \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)\sqcup \textsf{CS}}\), even when it is unbounded. The natural restriction \(|S_1(\left[ \textbf{m}^*\right] _1)|+|\textsf{CS}|< t\) then ensures that \(\textbf{u}_0\) stays completely hidden from the adversary. Thus, \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{u}_0\) is uniformly random over \(\mathbb {Z}_p\) from the adversary's viewpoint. Therefore, \(\textbf{Adv}_4\le 1/p\).    \(\square \)

Theorem 3

Under the \(\mathcal {D}_{k}\)-MDDH Assumption in \(\mathbb {G}_1\) and the \(\mathcal {D}_{k}\)-KerMDH Assumption in \(\mathbb {G}_2\), the proposed Threshold Structure-Preserving Signature construction in Fig. 3 achieves \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security against an efficient adversary making at most q partial signature queries.

Proof

The difference between \(\mathsf {TS\text {-}UF\text {-}}{1}\) and \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) is that an adversary in the latter model has access to the \(\mathcal {O}^\textsf{Corrupt}(\cdot )\) oracle and can corrupt honest signers adaptively. As per Fig. 1, an \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) adversary proposes an initial corrupted set \(\textsf{CS}\) at the start of the game and updates it incrementally as the game progresses. At the time of forgery, the natural restriction in Fig. 1 is formulated as \(|S_1(\left[ \textbf{m}^*\right] _1)|< t - |\textsf{CS}|\), where \(t\) is the threshold value and \(\textsf{CS}\) is the set of corrupted signers at the forgery phase. Given that this security model permits an adversary to obtain the secret keys of signers it may have queried via the \(\mathcal {O}^\textsf{PSign}(\cdot )\) oracle in the past, we next investigate the main modifications required to the proof of Theorem 2.

\(\textsf{Game}_{{0}}\), \(\textsf{Game}_{{1}}\), \(\textsf{Game}_{{2}}\), and \(\textsf{Game}_{{2}}'\) stay the same. In the proof of Theorem 2, we have already shown \(\textsf{Game}_{{2}}'\) and \(\textsf{Game}_{{3}}'\) to be indistinguishable due to the core lemma, Lemma 1, and we reuse the reduction in Fig. 7 for this purpose. The reduction in Fig. 7 samples \(\textbf{K}_{}\leftarrow \mathbb {Z}_p^{(\ell +1)\times (k+1)}\) and generates \((\textbf{K}_{1},\ldots ,\textbf{K}_{n})\leftarrow \textsf{Share}(\textbf{K}_{}, \mathbb {Z}_p^{(\ell +1)\times (k+1)}, n, t)\). Recall that the \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) adversary \(\mathcal {A}\) of Lemma 2 corrupts parties \(i\in [1,n]\) adaptively. Since the reduction of Lemma 2 already knows each \(\textbf{K}_{i}\) in the clear, it can handle \(\mathcal {O}^\textsf{Corrupt}(\cdot )\) oracle queries quite naturally.

The indistinguishability of \(\textsf{Game}_{{3}}\) and \(\textsf{Game}_{{4}}\) is argued exactly as in Theorem 2. We now focus on \(\textbf{Adv}_4\). In \(\textsf{Game}_{{4}}\), the adversary gets to update \(\textsf{CS}\) adaptively. Intuitively, all \(\textbf{K}_i\) are independently sampled, so giving a few of them to the adversary does not change the adversary's view. Moreover, in the proof of Theorem 2 we have already addressed partial signature queries on the forged message. Up to a few details, this ensures that our proof carries over. We next give a formal argument.

Fig. 10.

\(\textsf{Game}_{{0}}\).

We prove this theorem through a series of games, and we use \(\textbf{Adv}_i\) to denote the advantage of the adversary \(\mathcal {A}\) in winning Game \(i\). The games are described below.

  • Game 0. This is the \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security game described in Definition 8. As shown in Fig. 10, an adversary \(\mathcal {A}\), after receiving the set of public parameters \(\textsf{pp}\), returns \((n, t, \textsf{CS})\), where \(n\), \(t\) and \(\textsf{CS}\) represent the total number of signers, the threshold, and the set of corrupted signers, respectively. The adversary can query the partial signing oracle \(\mathcal {O}^\textsf{PSign}(\cdot )\) to receive partial signatures. Let \(\mathcal {Q}\) denote the number of distinct messages on which partial signing queries are made. In the end, the adversary outputs a message \([\textbf{m}^*]_1\) and a forged signature \(\varSigma ^*\).

  • Game 1. We modify the verification procedure to the one described in Fig. 11. Consider any forged message/signature pair \(([\textbf{m}^*]_1,\varSigma ^*=(\widehat{\sigma }_{1},\widehat{\sigma }_{2},\widehat{\sigma }_{3},\widehat{\sigma }_{4}))\) where \(e(\widehat{\sigma }_{2},\widehat{\sigma }_{4})= e(\widehat{\sigma }_{3},[{1}]_2)\), \(|\textsf{CS}|<t\) and \(S_1([\textbf{m}^*]_1)=\emptyset \). Note that if the pair \(([\textbf{m}^*]_1,\varSigma ^*)\) meets the \(\textsf{Verify}^*(\cdot )\) conditions outlined in Fig. 11, it also satisfies the \(\textsf{Verify}(\cdot )\) procedure described in Fig. 10. This is primarily due to the fact that:

    $$ \widehat{\sigma }_{1}=[\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}]_1+ \widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{}\;\; \Longrightarrow \;\; \widehat{\sigma }_{1}\textbf{A}=[\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}\textbf{A}]_1+ (\widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{})\textbf{A} \; . $$

    Suppose there exists a message/signature pair \(([\textbf{m}^*]_1,\varSigma ^*=(\widehat{\sigma }_{1},\widehat{\sigma }_{2},\widehat{\sigma }_{3},\widehat{\sigma }_{4}))\) that satisfies \(\textsf{Verify}(\cdot )\) but not \(\textsf{Verify}^*(\cdot )\); then we can compute a non-zero vector \(\textbf{c}\) in the kernel of \(\textbf{A}\) as follows:

    $$ \textbf{c}:= {\widehat{\sigma }_{1}}-{([\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}]_1+ \widehat{\sigma }_{2}\textbf{U}_{}+ \widehat{\sigma }_{3}\textbf{V}_{})}\in \mathbb {G}_1^{1\times (k+1)} \; . $$

    According to the \(\mathcal {D}_{k}\)-KerMDH assumption over \(\mathbb {G}_2\), described in Definition 5, such a vector \(\textbf{c}\) is hard to compute. Thus,

    $$ |\textbf{Adv}_0-\textbf{Adv}_1|\le { Adv}^{\textsf{KerMDH}}_{\mathcal {D}_{k},\mathbb {G}_2, \mathcal {B}_0}(\kappa ) \; . $$
  • Game 2. On receiving a partial signature query on a message \([\textbf{m}_i]_1\), a list is updated with the message \([\textbf{m}_i]_1\) and the corresponding tag \(\tau _i:=\mathcal {H}([{\textbf{m}_i}]_1) \). The challenger aborts if an adversary can generate two tuples \(([\textbf{m}_i]_1,\tau _i)\), \(([\textbf{m}_j]_1,\tau _j)\) with \([\textbf{m}_i]_1\ne [\textbf{m}_j]_1\) and \(\tau _i=\tau _j\). By the collision resistance property of the underlying hash function we have:

    $$\begin{aligned} |\textbf{Adv}_{1}-\textbf{Adv}_{2}|\le { Adv}^\textsf{CRHF}_{\mathcal {H}}(\kappa ) \; . \end{aligned}$$
  • Game \(2'\). In \(\textsf{Game}_{{2}}'\), the challenger randomly chooses an index \(j^*\leftarrow [1,\mathcal {Q}]\) as its guess of the message on which the forgery will be done. This game is the same as Game 2 except that the challenger aborts immediately if the forged message \(\left[ \textbf{m}^*\right] _1\ne \left[ \textbf{m}_{j^*}\right] _1\). The challenger of \(\textsf{Game}_{{2}}'\) correctly guesses the forged message \(\left[ \textbf{m}^*\right] _1\) with probability \(1/\mathcal {Q}\), which introduces a degradation in the advantage of \(\textsf{Game}_{{2}}'\): \(\textbf{Adv}_{2'}=\frac{1}{\mathcal {Q}}\textbf{Adv}_{2}\).

  • Game \(3'\). This game is the same as \(\textsf{Game}_{{2}}'\) except that we introduce randomness into the partial signatures by adding \(\mu \textbf{a}^{\perp }\) to the response of each partial signature query on any message \(\left[ \textbf{m}\right] _1\) except \(\left[ \textbf{m}^*\right] _1\), on which the forgery is done. We show that we can build a reduction algorithm \(\mathcal {B}\) for the core lemma (Lemma 1) using \(\mathcal {A}\). At the start of the game, \(\mathcal {B}\) randomly chooses an index \(j^*\leftarrow [1,\mathcal {Q}]\) as its guess of the message on which the forgery will be done. If \(\left[ \textbf{m}^*\right] _1\ne \left[ \textbf{m}_{j^*}\right] _1\), \(\mathcal {B}\) aborts; otherwise, \(\mathcal {B}\) outputs \(\mathcal {A}\)'s output as it is. In particular, \(\mathcal {B}\) does the following:

    1. \(\mathcal {B}\) receives \(\textsf{pp}\) from the challenger.

    2. \(\mathcal {B}\) samples \(\textbf{K}_{}\leftarrow \mathbb {Z}_p^{(\ell +1)\times (k+1)}\).

    3. \(\mathcal {B}\) then secret shares \(\textbf{K}_{}\) into \((\textbf{K}_{1},\ldots ,\textbf{K}_{n})\leftarrow \textsf{Share}(\textbf{K}_{}, \mathbb {Z}_p^{(\ell +1)\times (k+1)}, n, t)\).

    4. On an \(\mathcal {O}^\textsf{Corrupt}(\cdot )\) oracle query on \(j\in [1,n]\), \(\mathcal {B}\) returns \(\textbf{K}_{j}\).

    5. \(\mathcal {B}\) simulates the partial signature query on \((i,\left[ \textbf{m}\right] _1)\) as follows:

      • If \(\left[ \textbf{m}\right] _1=\left[ \textbf{m}^*\right] _1\), it makes a query \((i,\tau ^*)\) to \(\mathcal {O}^{**}(\cdot )\), where \(\tau ^*\leftarrow \mathcal {H}(\left[ \textbf{m}^*\right] _1)\).

        • Let \(val\) be the response \(\mathcal {B}\) receives to the above query.

        • \(\mathcal {B}\) samples \(\textbf{r}_i\leftarrow \mathbb {Z}_p^{k}\) and returns \(\varSigma _i:=(\left[ \left( \begin{matrix}1&{}\textbf{m}^{\top }\\ \end{matrix}\right) \textbf{K}_{i}\right] _1\cdot \textbf{r}^\top _i\cdot val,\textbf{r}^\top _i\cdot val,\tau ^*\cdot \textbf{r}^\top _i\cdot val,\left[ \tau ^*\right] _2)\) to \(\mathcal {A}\) as the partial signature.

      • If \(\left[ \textbf{m}\right] _1\ne \left[ \textbf{m}^*\right] _1\), it makes a query \((i,\tau )\) to \(\mathcal {O}_{b}(\cdot )\), where \(\tau \leftarrow \mathcal {H}(\left[ \textbf{m}\right] _1)\).

        • Let \((val_1,val_2)\) be the response \(\mathcal {B}\) receives to the above query.

        • It returns \(\varSigma _i:=\left( \left[ \left( \begin{matrix}1&{}\textbf{m}^{\top }\\ \end{matrix}\right) \textbf{K}_{i}\right] _1\cdot val_1,val_2,\tau \cdot val_2,\left[ \tau \right] _2\right) \) to \(\mathcal {A}\) as the partial signature.

    6.

      On a \(\textsf{Verify}^*(.)\) query on \((\textsf{vk},\left[ \textbf{m}^*\right] _1,\varSigma ^*)\), \(\mathcal {B}\) queries \(\mathcal {O}^{*}(\cdot )\) on \(\left[ \tau ^*\right] _2\), where \(\tau ^*\leftarrow \mathcal {H}(\left[ \textbf{m}^*\right] _1)\).

      • Parse \(\varSigma ^*\) as \((\sigma _{1},\sigma _{2},\sigma _{3},\sigma _{4}=\left[ \tau ^*\right] _2)\).

      • Let \(val\) be the response that \(\mathcal {B}\) receives to the above query.

      • \(\mathcal {B}\) verifies the signature by checking: \(e(\sigma _{1},[1]_2)= e\left( \left[ \left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{K}_{}\right] _1,\left[ 1\right] _2\right) \cdot e(\sigma _{2}, val) \wedge e(\sigma _{2},\sigma _{4})= e(\sigma _{3},\left[ 1\right] _2)\).

    \(\textsf{Game}_{{2}}'\) and \(\textsf{Game}_{{3}}'\) are indistinguishable due to the so-called core lemma (Lemma 1); thus we have:

    $$\begin{aligned} |\textbf{Adv}_{2'}-\textbf{Adv}_{3'}|\le 2Q\,\textbf{Adv}^\textsf{MDDH}_{\mathcal {D}_{k},\mathbb {G}_1, \mathcal {B}_1}(\kappa )+Q/p \; \cdot \end{aligned}$$
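To make the pairing checks in step 6 concrete, the following is a small self-contained sketch in which every group element is written by its discrete logarithm modulo a toy prime, so both pairing-product equations reduce to matrix identities over \(\mathbb {Z}_p\). This is only an illustrative sketch, not the scheme itself: it assumes toy dimensions \(\ell =k=1\) and that \(val\) corresponds to \(\textbf{U}+\tau ^*\textbf{V}\) in \(\mathbb {G}_2\); all names are illustrative.

```python
import random

p = 10007  # toy prime; a real instantiation uses a ~256-bit group order
rng = random.Random(1)

def rand_mat(rows, cols):
    return [[rng.randrange(p) for _ in range(cols)] for _ in range(rows)]

def mat_mul(A, B):
    return [[sum(A[i][h] * B[h][j] for h in range(len(B))) % p
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal_mul(c, A):
    return [[c * a % p for a in row] for row in A]

# toy dimensions ell = k = 1: K, U, V are 2x2 and B is 2x1
K, U, V, B = rand_mat(2, 2), rand_mat(2, 2), rand_mat(2, 2), rand_mat(2, 1)
m, tau, r = rng.randrange(p), rng.randrange(p), rng.randrange(p)

one_m = [[1, m]]                       # the row (1  m^T)
Bt = [[B[0][0], B[1][0]]]              # B^T
UtauV = mat_add(U, scal_mul(tau, V))   # U + tau*V

# signer: sigma_1 = (1 m^T)K + r^T B^T (U + tau V), computed left-to-right
sigma1 = mat_add(mat_mul(one_m, K), scal_mul(r, mat_mul(Bt, UtauV)))
sigma2 = scal_mul(r, Bt)               # r^T B^T
sigma3 = scal_mul(tau, sigma2)         # tau * r^T B^T
sigma4 = tau                           # [tau]_2, written as its exponent

# verifier: first PPE, with val standing in for U + tau* V (tau* = tau here);
# the verifier recomputes the product in a different order than the signer
assert sigma1 == mat_add(mat_mul(one_m, K), mat_mul(sigma2, UtauV))
# second PPE: e(sigma_2, sigma_4) = e(sigma_3, [1]_2)
assert scal_mul(sigma4, sigma2) == sigma3
print("both pairing-product equations hold in the exponent")
```

This only checks correctness of the verification equations in the exponent; it says nothing about security, which is what the game hops establish.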
  • Game 4. This game is the same as \(\textsf{Game}_{{3}}'\) except for how \(\{\textbf{K}_{i}\}_{i\in [n]}\) are sampled. In particular, we sample \(\widetilde{\textbf{K}}_{i}\) and set \(\textbf{K}_{i}=\widetilde{\textbf{K}}_{i}+\textbf{u}_i\textbf{a}^{\perp }\) for \(i\in [1,n]\).

    Shamir secret sharing (see Definition 1) ensures that \((\textbf{K}_{1},\ldots ,\textbf{K}_{n})\) in \(\textsf{Game}_{{3}}'\) and \((\widetilde{\textbf{K}}_{1},\ldots ,\widetilde{\textbf{K}}_{n})\) in \(\textsf{Game}_{{4}}\) are identically distributed. In \(\textsf{Game}_{{4}}\), on the other hand, \(\widetilde{\textbf{K}}_{i}\) and \(\textbf{K}_{i}=\widetilde{\textbf{K}}_{i}+\textbf{u}_i\textbf{a}^{\perp }\) are identically distributed. Considering both together, \(\textbf{K}_{i}\) in \(\textsf{Game}_{{3}}'\) and \(\textbf{K}_{i}\) in \(\textsf{Game}_{{4}}\) are identically distributed for all \(i\in [1,n]\). This further ensures that \(\textbf{K}_{}\) in \(\textsf{Game}_{{3}}'\) and \(\textbf{K}_{}+\textbf{u}_0\textbf{a}^{\perp }\) in \(\textsf{Game}_{{4}}\) are identically distributed. Therefore, this change is purely conceptual and \(\textbf{Adv}_{3'}-\textbf{Adv}_4=0\).

    Finally, we argue that \(\textbf{Adv}_4= 1/p\). Notice that the adversary gets to update \(\textsf{CS}\) adaptively. To complete the argument, we have to ensure that, even after obtaining \(\textbf{K}_{i}=\widetilde{\textbf{K}}_{i}+\textbf{u}_i\textbf{a}^{\perp }\) for adaptively chosen \(i\in \textsf{CS}\) and even after seeing several partial signatures (possibly under the corrupted keys too), \(\textbf{u}_0\) remains hidden from the adversary.

    • Firstly, \(\textsf{vk}\) and \(\{\textsf{vk}_i\}_{i\in [1,n]}\) do not leak anything about \(\textbf{u}_0\) and \(\{\textbf{u}_i\}_{i\in [1,n]}\), respectively. Note that \(\mathcal {A}\) gets \(\textsf{sk}_i=\textbf{K}_{i}=\widetilde{\textbf{K}}_{i}+\textbf{u}_i\textbf{a}^{\perp }\) for \(i\in \textsf{CS}\) as a part of \(\textsf{Input}\).

    • The output of the \(j\)-th partial signature query on \((i,\left[ \textbf{m}\right] _1)\) for \(\left[ \textbf{m}\right] _1\ne \left[ \textbf{m}^*\right] _1\) completely hides \(\{\textbf{u}_i\}_{i\in [1,n] \setminus \textsf{CS}}\) (and subsequently \(\textbf{u}_0\), as the adversary holds only \(|\textsf{CS}|<t\) of the \(\textbf{u}_i\)), since

      $$\begin{aligned} \left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{K}_{i}+\mu _j\textbf{a}^{\perp }= \left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \widetilde{\textbf{K}}_{i}+\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{u}_i\textbf{a}^{\perp }+\mu _j\textbf{a}^{\perp } \end{aligned}$$

      is distributed identically to \(\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \widetilde{\textbf{K}}_{i}+\mu _j\textbf{a}^{\perp }\). This is because \(\mu _j\textbf{a}^{\perp }\) already hides \(\left( \begin{matrix}1&{}\textbf{m}^\top \\ \end{matrix}\right) \textbf{u}_i\textbf{a}^{\perp }\) for uniformly random \(\mu _j\leftarrow \mathbb {Z}_p\).

    • In the case of the \(j\)-th partial signature query on \((i,\left[ \textbf{m}^*\right] _1)\), observe that at most \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)}\) are leaked to the adversary. To summarise, an \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) adversary learns at most \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)}\), even when it is unbounded.

    • Finally, we take a look at the corrupted set \(\textsf{CS}\). We emphasize that this set was updated adaptively throughout the game.

    From the above discussion, it is clear that even an information-theoretic adversary can at most get hold of \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)\sqcup \textsf{CS}}\), chosen adaptively. Note that the only way to successfully convince the verifier to accept a signature \(\varSigma ^*\) on \(\textbf{m}^*\) is for the adversary to correctly compute \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) (\textbf{K}_{}+\textbf{u}_0\textbf{a}^{\perp })\), and thus \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{u}_0\). The question therefore reduces to whether the adversary can compute \(\textbf{u}_0\) from the \(\{\textbf{u}_i\}_{i\in S_1(\left[ \textbf{m}^*\right] _1)\sqcup \textsf{CS}}\) it obtained adaptively. Since Shamir secret sharing is information-theoretically secure, the advantage of an adversary under adaptive corruption of users is the same as under selective corruption. Thus \(\textbf{u}_0\) is completely hidden from the adaptive adversary, and \(\left( \begin{matrix}1&{}\textbf{m}^{*\top }\\ \end{matrix}\right) \textbf{u}_0\) is uniformly random in \(\mathbb {Z}_p\) from its viewpoint. Therefore, \(\textbf{Adv}_4= 1/p\) (Fig. 12).
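The argument above rests on the information-theoretic security of Shamir sharing: any set of fewer than \(t\) shares is consistent with every candidate secret, each by the same number of sharing polynomials. The following sketch demonstrates this by brute force over a tiny field (sharing a single scalar; the matrices in the proof are shared entrywise, so the same argument applies coordinate-wise). Parameters and names here are illustrative, not the paper's.

```python
import random
from collections import Counter

p, n, t = 13, 5, 3  # tiny prime so the hiding claim can be brute-forced
rng = random.Random(7)

def poly_eval(coeffs, x):
    return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p

def share(secret):
    """Shamir t-out-of-n sharing over Z_p: f(0) = secret, shares are f(1..n)."""
    coeffs = [secret] + [rng.randrange(p) for _ in range(t - 1)]
    return [(x, poly_eval(coeffs, x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        lam = 1
        for xj, _ in shares:
            if xj != xi:
                lam = lam * (-xj) * pow(xi - xj, -1, p) % p
        secret = (secret + yi * lam) % p
    return secret

u0 = 5  # stand-in for one coordinate of the secret u_0
shares = share(u0)
assert reconstruct(shares[:t]) == u0  # any t shares recover the secret

# Fewer than t shares are consistent with EVERY candidate secret, each with
# the same number of polynomials -- so they leak nothing, even to an
# unbounded (information-theoretic) adversary.
observed = shares[:t - 1]
tally = Counter()
for a0 in range(p):
    for a1 in range(p):
        for a2 in range(p):
            if all(poly_eval([a0, a1, a2], x) == y for x, y in observed):
                tally[a0] += 1
assert set(tally) == set(range(p)) and len(set(tally.values())) == 1
print("t-1 shares are consistent with every secret, uniformly")
```

This is exactly why selective and adaptive corruption give the same advantage here: the distribution of the unopened shares is independent of \(\textbf{u}_0\) regardless of when the corruptions happen.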

   \(\square \)

Fig. 11. Modifications in \(\textsf{Game}_{{1}}\).

Fig. 12. \(\textsf{Game}_{{3}}'\) in the proof of Theorem 3.

3.5 Proof of Core Lemma

Proof of Lemma 1. We proceed through a series of games from \(\textsf{Game}_{{0}}\) to \(\textsf{Game}_{{q}}\). Note that \(\textsf{Init}\) outputs the same in all the games. In \(\textsf{Game}_{{i}}\), the first \(i\) queries to the oracle \(\mathcal {O}_b(.)\) are answered with \(([\mu \textbf{a}^{\bot }+\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{r}^\top \textbf{B}^{\top }]_1)\) and the remaining \(q-i\) queries with \(([\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{r}^\top \textbf{B}^{\top }]_1)\). Hence, the adjacent games \(\textsf{Game}_{{i}}\) and \(\textsf{Game}_{{i+1}}\) differ only in their response to the \((i+1)\)-th query to \(\mathcal {O}_b(.)\). Denoting the advantage of the adversary in \(\textsf{Game}_i\) by \(\textbf{Adv}_i\) for \(i=0,\dots ,q\), we bound below the advantage of the adversary in distinguishing two adjacent games. On querying \(\mathcal {O}_b(\cdot )\), \(\textsf{Game}_{{i}}\) responds to the \((i+1)\)-th query with

$$ ([\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{r}^\top \textbf{B}^{\top }]_1)\; ,$$

where \(\textbf{r}\leftarrow \mathbb {Z}_p^{k}\).
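The hybrid structure above can be sketched as follows. This is a purely structural toy (oracle responses modeled as scalars modulo a toy prime, with hypothetical stand-in functions for the real and masked responses): \(\textsf{Game}_{{0}}\) is the fully unmasked oracle, \(\textsf{Game}_{{q}}\) masks every response, and adjacent hybrids differ in exactly one query, which is what enables bounding the total distance hop by hop.

```python
p = 101  # toy modulus; responses are written as exponents mod p

def make_hybrid(i, mask):
    """Game_i oracle: the first i queries get the extra mask, the rest do not."""
    def oracle(queries, real):
        return [(real(q) + (mask(q) if j < i else 0)) % p
                for j, q in enumerate(queries)]
    return oracle

real = lambda q: (7 * q + 3) % p   # stand-in for r^T B^T (U + tau*V)
mask = lambda q: (5 * q + 11) % p  # stand-in for the mask mu * a_perp
qs = list(range(8))                # q = 8 oracle queries

for i in range(len(qs)):
    g_i = make_hybrid(i, mask)(qs, real)
    g_next = make_hybrid(i + 1, mask)(qs, real)
    diff = [j for j in range(len(qs)) if g_i[j] != g_next[j]]
    assert diff == [i]  # adjacent hybrids differ in exactly the (i+1)-th query
print("Game_0 ... Game_q differ one query at a time")
```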

We define a sub-game \(\textsf{Game}_{{i.1}}\) where \([\textbf{B}\textbf{r}]_1\) is replaced with \([\textbf{w}]_1\) for \([\textbf{w}]_1\leftarrow \mathbb {G}_1^{k+1}\). Under the \(\textsf{MDDH}\) assumption, an adversary cannot distinguish between the distributions \(([\textbf{B}]_1, [\textbf{B}\textbf{r}]_1)\) and \(( [\textbf{B}]_1, [\textbf{w}]_1)\). Thus,

$$\begin{aligned} ([\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{r}^\top \textbf{B}^{\top }]_1) \approx _c ([\textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{w}]_1) \; \cdot \end{aligned}$$

All the other values can be perfectly simulated in the reduction by choosing \(\textbf{U}_{}\) and \( \textbf{V}_{}\) from the appropriate distributions. In the next sub-game \(\textsf{Game}_{{i.2}}\), we introduce the randomness \(\mu \textbf{a}^{\bot }\) to \([ \textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1 \) and use an information-theoretic argument to bound the advantage in this experiment. As shown in [52], for every \(\textbf{A},\textbf{B}\leftarrow \mathcal {D}_{k}\) and \(\tau \ne \tau ^*\), the following two distributions are identical

$$\begin{aligned} (\textsf{vk}, [\textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,\textbf{U}_{}+\tau ^*\textbf{V}_{}) \text { and } (\textsf{vk}, [\mu \textbf{a}^{\bot }+\textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,\textbf{U}_{}+\tau ^*\textbf{V}_{}) \end{aligned}$$

with probability \(1-1/p\) over the choice of \(\textbf{w}\). The values \([\textbf{B}^{\top }\textbf{U}_{}]_1\) and \([\textbf{B}^{\top }\textbf{V}_{}]_1\) are part of the public values \(\textsf{vk}:=(\textbf{A},\textbf{U}_{}\textbf{A},\textbf{V}_{}\textbf{A},[\textbf{B}]_1,[\textbf{B}^{\top }\textbf{U}_{}]_1,[\textbf{B}^{\top }\textbf{V}_{}]_1)\), and anyone can compute \([\textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})]_1\) for a given \(\tau ^*\). Thus, for \(\tau \ne \tau ^*\), we have the following two identical distributions:

$$\begin{aligned} \begin{aligned} &(\textsf{vk}, [\textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{U}_{}+\tau ^*\textbf{V}_{}]_2, [\textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})]_1) \text { and } \\ {} & (\textsf{vk}, [\mu \textbf{a}^{\bot }+\textbf{w}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{U}_{}+\tau ^*\textbf{V}_{}]_2, [\textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})]_1) \; \cdot \end{aligned} \end{aligned}$$
(1)

From Equation (1), the subgames \(\textsf{Game}_{{i.1}}\) and \(\textsf{Game}_{{i.2}}\) are statistically close. We use the \(\textsf{MDDH}\) assumption again in the next sub-game \(\textsf{Game}_{{i.3}}\) and replace \([\textbf{w}]_1\) with \([\textbf{B}\textbf{r}]_1\). The resulting distribution is

$$\begin{aligned} (\textsf{vk},[\mu \textbf{a}^{\bot }+\textbf{r}^\top \textbf{B}^{\top }(\textbf{U}_{}+\tau \textbf{V}_{})]_1,[\textbf{U}_{}+\tau ^*\textbf{V}_{}]_2,[\textbf{B}^{\top }(\textbf{U}_{}+\tau ^*\textbf{V}_{})]_1) \; , \end{aligned}$$

which is the same as \(\textsf{Game}_{{i+1}}\). Thus, combining the two \(\textsf{MDDH}\) hops with the information-theoretic argument,

$$ |\textbf{Adv}_i-\textbf{Adv}_{i+1}|\le 2\,\textbf{Adv}^\mathsf{{MDDH}}_{\mathcal {D}_{k}, \mathbb {G}_1,\mathcal {B}}(\kappa )+1/p \; \cdot $$
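Summing this per-hop bound over all \(q\) hybrid transitions (a standard telescoping argument, made explicit here for completeness) yields the overall bound of the core lemma:

```latex
$$\begin{aligned} |\textbf{Adv}_0-\textbf{Adv}_{q}|\le \sum _{i=0}^{q-1}|\textbf{Adv}_i-\textbf{Adv}_{i+1}|\le 2q\,\textbf{Adv}^\mathsf{{MDDH}}_{\mathcal {D}_{k}, \mathbb {G}_1,\mathcal {B}}(\kappa )+q/p \; \cdot \end{aligned}$$
```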

   \(\square \)

4 Conclusion

In this paper, we give the first construction of a non-interactive threshold structure-preserving signature (TSPS) scheme from standard assumptions. We prove our construction secure in the \(\textsf{adp}\text {-}\mathsf {TS\text {-}UF\text {-}}{1}\) security model, where the adversary is allowed to obtain partial signatures on the forged message and to adaptively corrupt parties. Although our signatures are constant-size (and in fact quite small), we consider further improving the efficiency of TSPS under standard assumptions an interesting direction for future work.