1 Introduction

1.1 Background

Lattice-based cryptography has gained much attention in recent years due to its simplicity, expressibility, efficiency, and resistance to quantum computers. The main focus of this work is on the expressibility and efficiency aspects.

Expressibility: homomorphic computation (in \({\textbf {NC}}^1\)) One major reason for the expressiveness of lattice-based cryptography is its capability of handling homomorphic computations. Starting with the celebrated work of Gentry [27] on fully homomorphic encryption (\(\textsf {FHE}\)), techniques for lattice-based homomorphic computation have been refined to a great extent, and we have seen significant advancements [4, 18, 19, 21, 28, 30,31,32].

Of particular interest is the line of works subsequent to the result of Gentry, Sahai, and Waters (GSW) [32]. Originally, [32] provided a set of handy tools for homomorphic computation to construct a conceptually very simple \(\textsf {FHE}\) based on the learning with errors (\(\textsf {LWE}\)) assumption [51] with a super-polynomial-sized modulus (or \(\textsf {superpoly}\)-\(\textsf {LWE}\)). However, this \(\textsf {superpoly}\)-sized modulus was quite unsatisfactory since it resulted in a significant efficiency loss and a much stronger assumption compared to the standard \(\textsf {LWE}\) assumption with a polynomial-sized modulus (or \(\textsf {poly}\)-\(\textsf {LWE}\)). A major milestone among the works following [32] was that of Brakerski and Vaikuntanathan [19], who noticed a “quasi-additive” error growth under sequential GSW homomorphic multiplication. They exploited this property, along with the fact that the decryption circuit of the GSW-\(\textsf {FHE}\) is in \({\textbf {NC}}^1\), to construct \(\textsf {FHE}\) based on \(\textsf {poly}\)-\(\textsf {LWE}\). Soon after, Gorbunov and Vinayagamurthy [33] extended the techniques of [19] and combined them with [14] to construct the first attribute-based encryption (\(\textsf {ABE}\)) for \({\textbf {NC}}^1\) circuits with short secret keys under \(\textsf {poly}\)-\(\textsf {LWE}\).Footnote 1 The crux of [19] and [33] was the realization that the GSW homomorphic computation of an \({\textbf {NC}}^1\) circuit can be done while incurring only a \(\textsf {poly} \)-sized error growth.

The techniques of [19] and [33], originally developed for \(\textsf {FHE}\) and \(\textsf {ABE}\), have proven to be ubiquitous and have been used in countless other lattice-based contexts that require some type of homomorphic computation in \({\textbf {NC}}^1\). Examples of such primitives include, but are not limited to, compact identity-based encryption (\(\textsf {IBE}\)) [57], tightly-secure signature and \(\textsf {IBE}\) [16], all-but-many lossy trapdoor function (\(\textsf {ABM-LTDF}\)) [17, 44], predicate encryption [35], fully homomorphic signature [36, 42], distributed pseudorandom function (\(\textsf {PRF}\)) [45], and group signature [38].

Efficiency: converting \({\textbf {NC}}^1\) circuits to branching programs The polynomial error growth in the \({\textbf {NC}}^1\) homomorphic computation techniques of [19] and [33] is a significant feature as it allows one to construct certain cryptographic primitives based on \(\textsf {poly}\)-\(\textsf {LWE}\). However, taking a closer look, we see that the required modulus size for \(\textsf {poly}\)-\(\textsf {LWE}\) is a very large polynomial in most cases. This is a significant drawback from a practical point of view since we have to pay a prohibitive price in efficiency and security.

As mentioned above, the core observation of [19] was noticing a quasi-additive error growth under sequential GSW homomorphic multiplication. That is, if we carefully multiply two terms with a large error \(\Delta \) and a small error \(\delta \), we obtain a term with error \(\Delta + \delta \cdot {\tilde{O}}(\lambda )\) rather than \(\Delta \cdot \delta \cdot {\tilde{O}}(\lambda )\), where \(\lambda \) denotes the security parameter. To exploit this observation, [19] transformed circuits in \({\textbf {NC}}^1\) with depth \(d = O(\log \lambda )\) into length-\(4^d\) branching programs via Barrington’s Theorem [9]. Since branching programs capture a sequential computational model by nature, the quasi-additive error growth of GSW multiplication allowed one to homomorphically compute the branching program sequentially while incurring only an error that grows proportionally to its length. The error grows by only \(4^d \cdot \delta \cdot {\tilde{O}}(\lambda ) = \textsf {poly}(\lambda )\) when \(d = O(\log \lambda )\).

However, due to this indirect way of expressing computation as branching programs, the transformation incurs a massive overhead. For instance, most applications listed above require homomorphic computation of simple modulo-p arithmetic. Since expressing modulo-p arithmetic requires quite a deep \({\textbf {NC}}^1\) circuit, the resulting length of the branching program can get extremely long; even if the depth is a modest \(d = 5 \log \lambda \), this already leads to a branching program of length \(4^d = \lambda ^{10}\). Due to the sequential nature, this means that we require a running time of at least \(\lambda ^{10}\) just to compute the branching program. Therefore, many primitives based on the techniques of [19] and [33] require a large polynomial modulus and running time. Since the required modulus determines the overall efficiency of the scheme and the strength of the \(\textsf {LWE}\) assumption, most of the primitives relying on [19] or [33] are mainly of theoretical interest.

1.2 Our contributions

Our main result is formalizing a set of tools to efficiently homomorphically compute inner-products over the ring \({\mathbb {Z}}_p\) for small \(p = \textsf {poly}(\lambda )\). We view the inner-product computation directly as a particular type of branching program of short polynomial length (\(\approx O(\lambda \log \lambda )\)). The substantial gain in runtime and the modest \(\textsf {poly}\)-error growth come as a result of not having to go through the aforementioned indirect transformation. If we were to rewrite the inner-product computation over \({\mathbb {Z}}_p\) as an \({\textbf {NC}}^1\) circuit and then transform it into a branching program via Barrington’s Theorem, it would have resulted in a branching program of large polynomial length (\(\gtrapprox \lambda ^{16}\)).Footnote 2 Our technique is an extension of Alperin-Sheriff and Peikert [5], who optimized the bootstrapping algorithm of GSW-\(\textsf {FHE}\) by viewing the decryption circuit in \({\textbf {NC}}^1\) as a particular type of branching program and applying the idea of [19].

Moreover, we show that our simple homomorphic computation of inner-products over a small ring \({\mathbb {Z}}_p\) can be bootstrapped to other interesting computations by combining it with specific cryptographic primitives and/or by exploiting additional algebraic structures of lattices. Notably, using our homomorphic computation technique, we obtain two concrete results which improve the efficiency of prior works using [19] or [33]:

  • a (selectively) secure \(\textsf {ABE}\) for several useful predicates in \({\textbf {NC}}^1\), and

  • a tightly (adaptively) secure signature and \(\textsf {IBE}\) based on \(\textsf {poly}\)-\(\textsf {LWE}\).

For our second result, we rely on a \(\textsf {PRF}\) proposed by Boneh et al. [15]. Their \(\textsf {PRF}\) is based on new assumptions that have natural connections to hardness questions in complexity and learning theory. Very recently, Cheon et al. [25] proposed attacks and fixes to the weak \(\textsf {PRF}\) construction of Boneh et al. [15]. However, their attacks do not affect our results since we rely on the standard (i.e., non-weak) \(\textsf {PRF}\). Although the in-depth cryptanalysis of the standard \(\textsf {PRF}\) provided in [15] remains intact, further study is needed to gain confidence in its security. We refer to Tables 1 and 2 for comparisons of tightly secure IBE and signature schemes in the standard model, respectively.

Table 1 Comparison of tightly secure lattice-based IBE schemes
Table 2 Comparison of tightly secure lattice-based signature schemes

We believe our technique is general enough to be used to optimize other cryptographic primitives using the techniques of [19] or [33], such as the all-but-many lossy trapdoor function (\(\textsf {ABM-LTDF}\)) of [17, 44]. We leave the investigation of other applications as future work.

1.3 Technical overview

We first explain how to compute inner-products over a small ring \({\mathbb {Z}}_p\) directly as a branching program. We then provide a brief overview of two concrete applications which benefit from the proposed homomorphic computation.

Expressing inner-products directly as branching programs When we represent elements in \({\mathbb {Z}}_p\) for small \(p = \textsf {poly}(\lambda )\) as binary strings \( \{ 0,1 \} ^{\left\lceil \log p\right\rceil }\) in the conventional bit representation, computing inner-products over \({\mathbb {Z}}_p\) with a (boolean) circuit becomes quite involved (see [10] for example). In particular, if we want to transform such a circuit into a branching program via Barrington’s Theorem, we are bound to incur a significant blow-up in the length.

The main trick we use to compute inner-products over \({\mathbb {Z}}_p\) directly as a short branching program is to instead represent elements in \({\mathbb {Z}}_p\) as unit vectors in \( \{ 0,1 \} ^p\) as in [5]. Namely, we represent an element \(a \in {\mathbb {Z}}_p\) as a unit vector in \( \{ 0,1 \} ^p\) where the a-th entry is set to 1 and all other entries are set to 0. Although this representation may look redundant at first glance, unlike binary representations, it naturally supports modulo p additions. By Cayley’s Theorem, which states that any finite group G embeds into the symmetric group \(S_{\left|G\right|}\), the additive group of \({\mathbb {Z}}_p\) embeds into \(S_p\), which is isomorphic to the multiplicative group of p-by-p permutation matrices. Moreover, any permutation matrix in \(S_p\) has a natural unit vector representation in \( \{ 0,1 \} ^p\) obtained by taking the first column of the permutation matrix. Therefore, in this representation, a sum \(x_1 + x_2 \mod p\) can be computed by rewriting the unit vector representation of \(x_1\) into its associated permutation matrix, and then multiplying it with the unit vector representation of \(x_2\). In particular, given the unit vector representations as inputs, we can view modulo p addition as a specific type of width-p branching programFootnote 3 which computes matrix multiplications. This argument readily generalizes to inner-products between two vectors \({\mathbf {x}}, {\mathbf {y}}\in {\mathbb {Z}}_p^\ell \) since \(\langle {\mathbf {x}}, {\mathbf {y}}\rangle = \sum _{i \in [\ell ]} ( \sum _{j \in [y_i]} x_i ) \mod p\), which is a specific branching program of length \(\ell \cdot p\). We show that with some optimization we can further reduce the required length to only \(\ell \cdot \log p = O(\lambda \log \lambda )\) when \(\ell = O(\lambda )\).
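To make the encoding concrete, here is a minimal Python sketch (illustrative only; the toy parameter and the helper names unit and perm_matrix are ours) of the unit-vector representation and of modulo p addition realized as a permutation-matrix/vector product.

```python
# Minimal sketch (illustrative only): the unit-vector representation of Z_p and
# modulo-p addition via a permutation-matrix/vector product.
import numpy as np

p = 5

def unit(a):
    """Unit-vector representation of a in Z_p: entry a is 1, all others are 0."""
    v = np.zeros(p, dtype=int)
    v[a % p] = 1
    return v

def perm_matrix(a):
    """Permutation matrix of the map 'add a mod p'; its first column is unit(a)."""
    return np.stack([unit(a + j) for j in range(p)], axis=1)

x1, x2 = 3, 4
# Rewrite unit(x1) as its permutation matrix, then multiply by unit(x2):
assert np.array_equal(perm_matrix(x1) @ unit(x2), unit(x1 + x2))
```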

\(\textsf {ABE}\) for useful predicates We obtain more efficient constructions of selectively secure lattice-based \(\textsf {ABEs}\) for several useful predicates. Specifically, we construct a non-zero inner-product encryption (\(\textsf {NIPE}\)) scheme over a large field \({\mathbb {F}}_{p^t}\) where \(p, t = \textsf {poly}(\lambda )\), an identity-based revocation (\(\textsf {IBR}\)) scheme, and a fuzzy identity-based encryption (\(\textsf {FIBE}\)) scheme. We note that all predicates considered in this work are in \({\textbf {NC}}^1\), so theoretically, we can construct \(\textsf {NIPE} \), \(\textsf {IBR} \), and \(\textsf {FIBE} \) from \(\textsf {poly}\)-\(\textsf {LWE}\) based on the work of Gorbunov and Vinayagamurthy [33]. However, as already mentioned above, these predicates require a very long branching program to describe, so this approach remains mostly of theoretical interest.

In attribute-based encryption (\(\textsf {ABE}\)), a ciphertext and a secret key are associated with attributes X and Y, respectively, and decryption is possible only when they satisfy \(R(X, Y) = 1\) for a certain relation R. Thanks to the work of Boneh et al. [14], who constructed lattice-based \(\textsf {ABEs}\) for general circuits, we can reduce the problem of constructing an \(\textsf {ABE}\) for a relation R to the problem of homomorphically computing the relation R. Therefore, in the following, we will mainly be discussing how to compute specific relations R using our base technique for computing inner-products over a small ring \({\mathbb {Z}}_p\).

Constructing \(\textsf {NIPE}\). \(\textsf {NIPE}\) is a specific type of \(\textsf {ABE}\) where vectors are associated with the secret key and ciphertext, and one can decrypt if and only if the inner-product between the vectors does not equal zero [6, 41]. Namely,

$$\begin{aligned} R^{\textsf {NIPE}}({\mathbf {x}}, {\mathbf {y}}) = 1 \text { if and only if } \langle {\mathbf {x}}, {\mathbf {y}}\rangle \ne 0 \text { over ``some ring'' }. \end{aligned}$$

Since many predicates such as polynomial evaluations, disjunction and/or conjunctions, and membership tests can be encoded as inner-products, \(\textsf {NIPE}\) can be quite useful (see [13, 41] for more motivating examples). The state-of-the-art result for lattice-based \(\textsf {NIPE}\) is the work of Katsumata and Yamada [39]. With similar motivation in mind, they provided lattice-based constructions of \(\textsf {NIPE}\) for several rings that were arguably more efficient than the generic construction of [33]. However, their construction of \(\textsf {NIPE}\) over \({\mathbb {Z}}_p\) with large prime \(p = 2^{O(\lambda )}\) required the secret key generation to be stateful and assumed hardness of the sub-exponential \(\textsf {LWE}\) problem. Since several of the applications of \(\textsf {NIPE}\) stated above require a large finite field for the encoding to work, all of those applications must inherit the undesirable nature of having stateful secret key generation.

In our work, we show how to bootstrap our aforementioned efficient homomorphic computation of inner-products over small rings \({\mathbb {Z}}_{p}\) to inner-products over large fields \({\mathbb {F}}_{p^t}\). We view \({\mathbb {F}}_{p^t}\) as the polynomial ring \({\mathbb {Z}}_p[X] / \left\langle {\mathfrak {g}}\right\rangle \) where \({\mathfrak {g}} \in {\mathbb {Z}}_p[X]\) is a monic degree t polynomial. At a high level, we compute the inner-product of polynomials by embedding the polynomials either in \({\mathbb {Z}}_p^t\) (by the natural coefficient embedding) via a map \(\theta \), or in the set of (generalized) circulant matrices in \({\mathbb {Z}}^{t \times t}_p\) whose columns are the shifts of the coefficient vector (reduced modulo \({\mathfrak {g}}\)), via a map \(\textsf {Rot}\). The key observation is that multiplication of two polynomials \({\mathfrak {a}}, {\mathfrak {b}} \in {\mathbb {F}}_{p^t}\) can be expressed by t inner-products. That is, we have \(\theta ({\mathfrak {a}} \cdot {\mathfrak {b}}) = \textsf {Rot}({\mathfrak {a}}) \cdot \theta ({\mathfrak {b}})\); the i-th coefficient of the polynomial \({\mathfrak {a}} \cdot {\mathfrak {b}}\) can be computed by the inner-product between the vectors \(\textsf {Rot}({\mathfrak {a}})_i\) (i.e., the i-th row of \(\textsf {Rot}({\mathfrak {a}}))\) and \(\theta ({\mathfrak {b}})\). Thus we can reduce inner-products of polynomials over \({\mathbb {F}}_{p^t}\) to inner-products of vectors over \({\mathbb {Z}}_{p}\). More details are provided in Sect. 4.2.
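To illustrate this reduction, the following is a minimal Python sketch under toy parameters of our own choosing (the modulus \({\mathfrak {g}}\), the example polynomials, and the helper names are ours, not from the paper); it checks that each coefficient of a product in \({\mathbb {F}}_{p^t}\) is obtained as one inner-product over \({\mathbb {Z}}_p\).

```python
# Minimal sketch (illustrative only; parameters and helper names are ours):
# multiplying a, b in F_{p^t} = Z_p[X]/<g> via the maps theta and Rot, so that
# each coefficient of a*b is one inner-product over Z_p.
import numpy as np

p, t = 5, 3
g = np.array([1, 1, 0, 1])             # g(X) = X^3 + X + 1, monic and irreducible over Z_5

def reduce_mod_g(c):
    """Reduce a coefficient vector (constant term first) modulo g and modulo p."""
    c = np.array(c, dtype=int) % p
    for i in range(len(c) - 1, t - 1, -1):
        c[i - t:i] = (c[i - t:i] - c[i] * g[:t]) % p
        c[i] = 0
    return c[:t]

def theta(a):                          # coefficient embedding into Z_p^t
    return np.array(a, dtype=int) % p

def rot(a):
    """Matrix whose j-th column is theta(a * X^j mod g), j = 0, ..., t-1."""
    padded = np.concatenate([theta(a), np.zeros(t, dtype=int)])
    return np.stack([reduce_mod_g(np.roll(padded, j)) for j in range(t)], axis=1)

a = np.array([1, 4, 2])                # a(X) = 1 + 4X + 2X^2
b = np.array([3, 0, 1])                # b(X) = 3 + X^2

# The i-th coefficient of a*b is the inner-product <Rot(a)_i, theta(b)> over Z_p.
prod = np.array([int(np.dot(rot(a)[i], theta(b))) % p for i in range(t)])
assert np.array_equal(prod, reduce_mod_g(np.convolve(a, b)))
```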

As a direct consequence of our \(\textsf {NIPE}\) over a large field, we obtain an \(\textsf {IBR}\) scheme following the transformation given in [6], where an \(\textsf {IBR}\) scheme is a type of broadcast encryption scheme that allows for efficient revocation of a small set of members.

Constructing \(\textsf {FIBE}\). \(\textsf {FIBE}\) is a specific type of \(\textsf {ABE}\) where the secret key and ciphertext are associated with strings over some alphabet. A secret key can decrypt a ciphertext if and only if the two associated strings are “close” with respect to some metric [3, 52]. A notable application of \(\textsf {FIBE}\) is a generalization of standard \(\textsf {IBEs}\) where we can use one’s biometric information as the identity; biometrics are by default “fuzzy” data due to measurement errors. In this work, we propose \(\textsf {FIBE}\) where the strings are taken from \( \{ 0,1 \} ^\ell \) and the closeness is determined by the Hamming distance. Namely,

$$\begin{aligned} R^{\textsf {FIBE}}({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}') = 1 \text { if and only if } {{\textsf {H}}}{{\textsf {D}}}( {{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}' ) \le d, \end{aligned}$$

where d is some pre-determined threshold smaller than \(\ell \). The special case of \(d = 0\) corresponds to the standard definition of \(\textsf {IBEs}\). Other than the generic construction of [33], the only work regarding \(\textsf {FIBE}\) is the direct construction of Agrawal et al. [3]. Although the construction itself is very simple, their construction requires a sub-exponential \(\textsf {LWE}\) assumption, and in particular, it requires the modulus size to grow with \((\ell !)^4\). This severely impacts the overall efficiency and security of the scheme since we require \(\ell = O(\lambda )\).

In our work, we make a similar observation as Alperin-Sheriff and Peikert [5], who noticed that the “closer to” function can be computed efficiently by branching programs. Recall that what we require from \(\textsf {FIBE}\) is to efficiently compute the Hamming distance \(\omega \) of two strings \({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell \), and then check whether the distance \(\omega \) is at most some threshold \(d \in [\ell ]\). Assume we had an efficient method to compute the Hamming distance \(\omega \in {\mathbb {N}}\) in its unit vector representation \({\widetilde{\omega }} \in \{ 0,1 \} ^{\ell + 1}\). Then, we can compute the boolean value \([\omega \le d]\) (i.e., \([X] = 1\) if and only if X is true) by checking whether \( \sum _{i = 1}^{ d + 1 }{\widetilde{\omega }}_i = 1,\) where \({\widetilde{\omega }}_i \) is the i-th element of \({\widetilde{\omega }}\). This follows because in the unit vector representation, \({\widetilde{\omega }}_i = 1\) if and only if \(\omega = i - 1\).

Finally, we show that using our homomorphic computation of inner-products over small rings \({\mathbb {Z}}_{p}\), we can efficiently compute the Hamming distance between \({{\textsf {I}}}{{\textsf {D}}}\) and \({{\textsf {I}}}{{\textsf {D}}}'\) in unit vector representation.
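As a sanity check of this idea, the following is a minimal Python sketch (toy parameters and helper names are ours) that accumulates the Hamming distance in the unit-vector representation over \({\mathbb {Z}}_{\ell +1}\), one position at a time, and then tests the threshold by summing the first \(d+1\) entries.

```python
# Minimal sketch (illustrative only): Hamming distance of two strings in the
# unit-vector representation over Z_{l+1}, followed by the threshold test.
import numpy as np

l, d = 8, 2
ID  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
ID2 = np.array([1, 1, 1, 0, 0, 0, 1, 1])
m = l + 1                                    # distances live in {0, ..., l}

def unit(a):
    v = np.zeros(m, dtype=int)
    v[a % m] = 1
    return v

def perm_matrix(a):                          # permutation matrix of 'add a' over Z_{l+1}
    return np.stack([unit(a + j) for j in range(m)], axis=1)

# Sequentially accumulate the distance, one position at a time.
omega = unit(0)
for i in range(l):
    step = int(ID[i] != ID2[i])              # 1 iff the strings differ at position i
    omega = perm_matrix(step) @ omega        # unit-vector "+step"

# omega is now the unit-vector representation of HD(ID, ID').
accept = int(np.sum(omega[: d + 1]) == 1)    # boolean [HD(ID, ID') <= d]
assert accept == int(np.sum(ID != ID2) <= d)
```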

Tightly secure signature and \(\textsf {IBE}\) We obtain a tightly adaptively secure signature and \(\textsf {IBE}\) assuming the hardness of the \(\textsf {poly}\)-\(\textsf {LWE}\) problem and the pseudorandomness of \(\textsf {PRF} _\mathsf{BIP^+}\) by Boneh et al. [15]. Compared to the state-of-the-art result of Boyen and Li [16], our construction is far more efficient. In particular, it offers a significantly smaller modulus size and a shorter computation of branching programs (by at least a factor of \(\lambda ^{15}\)). However, we would like to point out that the efficiency gain comes at the cost of assuming the security of \(\textsf {PRF} _\mathsf{BIP^+}\) rather than the hardness of the \(\textsf {superpoly}\)-\(\textsf {LWE}\) problem used by Boyen and Li [16]. Below, for simplicity, we focus on the details of tightly secure signatures.

At a high level, our construction follows the template of Boyen and Li: We simulate the behavior of the random oracle in the tightly-secure signature construction of Katz and Wang [40] by implicitly computing a \(\textsf {PRF}\) during the security proof. Boyen and Li showed that if the \(\textsf {PRF}\) can be computed by an \({\textbf {NC}}^1\) circuit, then we can use the homomorphic computation technique of [19] to obtain a tightly-secure signature scheme based on the \(\textsf {poly}\)-\(\textsf {LWE}\) assumption and any assumption implying pseudorandomness of the \({\textbf {NC}}^1\)-computable \(\textsf {PRF}\). They instantiated their generic construction with the \({\textbf {NC}}^1\)-computable lattice-based \(\textsf {PRF}\) of [7, 8] based on the \(\textsf {superpoly}\)-\(\textsf {LWE}\) assumption.Footnote 4 Although the \(\textsf {PRF} \) is expressible as a \(\textsf {poly}\)-length branching program, the concrete length of the branching program is extremely long and has a significant undesirable impact on the concrete efficiency.

In our work, we instead instantiate the Boyen-Li construction with \(\textsf {PRF} _\mathsf{BIP^+}\) proposed by Boneh et al. [15]. The \(\textsf {PRF} _\mathsf{BIP^+}: {\mathbb {Z}}_2^{\kappa \times 2 \eta } \times {\mathbb {Z}}_2^{\ell } \rightarrow {\mathbb {Z}}_3\) is defined as follows:

$$\begin{aligned} \textsf {PRF} _\mathsf{BIP^+}({\mathbf {K}}, {\mathbf {x}}) {:}{=}\textsf {map}({\mathbf {K}} \cdot \textsf {bin}({\mathbf {H}}\cdot {\mathbf {x}})), \end{aligned}$$

where \(\kappa , \eta , \ell \) are all \(O(\lambda )\). Here, \(\textsf {map}: {\mathbb {Z}}_2^{\kappa } \rightarrow {\mathbb {Z}}_3\) is a function that maps \({\mathbf {y}}\mapsto \sum _{i \in [\kappa ]} y_i \mod 3\), \({\mathbf {H}}\in {\mathbb {Z}}_3^{\eta \times \ell }\) is a publicly known matrix, and \(\textsf {bin}: {\mathbb {Z}}_3^{\eta } \rightarrow {\mathbb {Z}}_2^{2 \eta }\) is a component-wise binary decomposition function. The security of \(\textsf {PRF} _\mathsf{BIP^+}\) is not based on any standard hardness assumption; however, [15] provides extensive cryptanalysis and argues plausible security against quantum computers.
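For concreteness, the following minimal Python sketch (toy dimensions; the key, matrix, and input are random placeholders, and the bit ordering inside the binary decomposition is our own assumption) evaluates \(\textsf {PRF} _\mathsf{BIP^+}\) in the clear, directly following the formula above.

```python
# Minimal sketch (illustrative only; toy dimensions, random placeholder data):
# evaluating PRF_BIP+(K, x) = map(K * bin(H * x)) in the clear.
import numpy as np

rng = np.random.default_rng(0)
kappa, eta, ell = 16, 8, 12
K = rng.integers(0, 2, size=(kappa, 2 * eta))   # key over Z_2
H = rng.integers(0, 3, size=(eta, ell))         # public matrix over Z_3
x = rng.integers(0, 2, size=ell)                # input over Z_2

def binary_decompose(v):
    """bin: Z_3^eta -> Z_2^{2*eta}, component-wise (low bit, then high bit)."""
    return np.concatenate([[c & 1, (c >> 1) & 1] for c in v])

def prf_bip_plus(K, x):
    u = (H @ x) % 3                 # linear map over Z_3
    w = binary_decompose(u)         # switch to a Z_2 representation
    y = (K @ w) % 2                 # linear map over Z_2
    return int(np.sum(y) % 3)       # map: sum the bits modulo 3

print(prf_bip_plus(K, x))           # an element of Z_3
```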

At first glance, \(\textsf {PRF} _\mathsf{BIP^+}\) may seem to be easily computable by a short branching program or a very shallow (e.g., depth \(d = \log \lambda \)) circuit. However, it turns out that this is not the case due to the non-linear mapping between \({\mathbb {Z}}_2\) and \({\mathbb {Z}}_3\) elements. Indeed, this non-linear operation was one of the main reasons why \(\textsf {PRF} _\mathsf{BIP^+}\) is claimed to be a secure \(\textsf {PRF}\). In this work, we show that \(\textsf {PRF} _\mathsf{BIP^+}\) can be separated into two short branching programs and that each branching program can be computed using our base branching program for computing inner-products. We note that unlike circuits, branching programs are in general not closed under sequential composition since the input and output have different representations. Specifically, we need to encode the output of the first branching program in a particular manner so that it is compatible with the encoding of the input to the second branching program. More details are provided in Sect. 5.1.

Finally, we note that we cannot directly apply the work of Boyen and Li [16] to our setting. This is because the way they use lattice trapdoors during the security proof is tailored to \(\textsf {PRFs}\) with output in \( \{ 0,1 \} \). Notably, their construction and proof no longer work when the output of the \(\textsf {PRF}\) is in \({\mathbb {Z}}_3\), as with the above \(\textsf {PRF} _\mathsf{BIP^+}\). Moreover, the proof for \(\textsf {IBE}\) becomes slightly more involved compared to [16] since we require an additional artificial abort step [55] to compensate for \(\textsf {PRF} _\mathsf{BIP^+}\) not being distributed uniformly over 0 and 1. This artificial abort step is not required for the construction of signature schemes. For the knowledgeable readers, we remark that we do not incur a huge reduction loss as with standard artificial aborts for \(\textsf {IBEs}\). This is because in our case the reduction algorithm only aborts when the guess for the output value of \(\textsf {PRF} _\mathsf{BIP^+}\) is wrong, which happens with probability 2/3.

1.4 Related works

Unit vector encoding To the best of our knowledge, Alperin-Sheriff and Peikert [5] is the only paper which explicitly uses the idea of embedding \({\mathbb {Z}}_p\) into unit vectors. Their main motivation was to optimize the bootstrapping algorithm of GSW-\(\textsf {FHE}\). They observed that the modular rounding function \(\lfloor \cdot \rceil _2: {\mathbb {Z}}_p \rightarrow \{ 0,1 \} \), which maps elements “close” to 0 modulo p to 0 and all other elements to 1, can be computed easily in the unit vector representation. Specifically, one simply sums the entries of the unit vector \({\mathbf {v}}\) at the positions corresponding to the elements of \({\mathbb {Z}}_p\) that are close to 0.
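As a small illustration, here is a minimal Python sketch (the exact “close to 0” interval below is our own placeholder assumption) that sums the near-zero entries of the unit vector to obtain the indicator of rounding to 0; the rounded bit is then simply its complement.

```python
# Minimal sketch (illustrative only; the rounding interval is an assumed
# placeholder): modular rounding evaluated on the unit-vector representation
# of a in Z_p by summing a fixed subset of its entries.
import numpy as np

p = 8

def unit(a):
    v = np.zeros(p, dtype=int)
    v[a % p] = 1
    return v

near_zero = [a for a in range(p) if min(a, p - a) <= p // 4]    # elements close to 0

def rounded_bit(v):
    rounds_to_zero = int(sum(v[a] for a in near_zero))          # boolean [a is close to 0]
    return 1 - rounds_to_zero

assert rounded_bit(unit(1)) == 0 and rounded_bit(unit(4)) == 1
```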

Lattice-based tightly secure signature and \(\textsf {IBE}\) Boyen and Li were the first to give a construction of a tightly secure \(\textsf {IBE}\) scheme from lattices [16]. Subsequently, Tsabary gave the first adaptively secure \(\textsf {ABE}\) scheme for t-CNF predicates for constant t from lattices [53]. As she sketched, her idea can be used to obtain a tightly secure \(\textsf {IBE}\). These schemes are based on the idea of homomorphically evaluating a \(\textsf {PRF}\) on GSW-\(\textsf {FHE}\) encodings. Brakerski et al. gave a construction of an almost tightly secure \(\textsf {IBE}\) scheme from lattices based on completely different ideas [23], where almost tight security means that the reduction cost in the security proof is independent of the number of queries made by the adversary. However, their construction is highly inefficient due to the use of garbled circuits. Lai et al. proposed tightly secure primitives, including \(\textsf {IBE}\), based on \(\textsf {poly}\)-\(\textsf {LWE}\) [43]. Their constructions have better parameters but longer running times since they need to homomorphically evaluate, using \(\textsf {FHE}\), a \(\textsf {PRF}\) that is not computable in \({\textbf {NC}}^1\). All these \(\textsf {IBE}\) schemes imply (almost) tightly secure signature schemes via the Naor transform [12]. Blazy et al. showed yet another way of constructing tightly secure signatures from any chameleon hash function [11]. This, in particular, implies tightly secure signatures from lattices. However, their construction is tree-based and results in relatively large signatures.

\(\textsf {FIBE}\) and IPE. Katz, Sahai, and Waters gave a way to convert inner product encryption (\(\textsf {IPE}\)) into \(\textsf {FIBE} \), where \(\textsf {IPE}\) is the dual notion of \(\textsf {NIPE}\) in which decryption is possible only when the vector associated with a ciphertext is orthogonal to that associated with a secret key. However, the transformation considerably blows up the secret key size, and if we apply the conversion to an existing \(\textsf {IPE}\) such as [2], it results in longer keys than ours.

1.5 Roadmap

In Sect. 2, we recall basic tools for lattice-based cryptography. In Sect. 3, we show a set of tools to efficiently compute inner-products homomorphically over a small ring. In Sect. 4, we provide our lattice-based \(\textsf {ABE}\) scheme for several useful predicates in \({\textbf {NC}}^1\). In Sect. 5, we provide our lattice-based \(\textsf {IBE}\) scheme with small polynomial modulus.

2 Preliminaries

Notations We denote by [a, b] the set \(\{ a, a+1, \ldots , b-1, b \}\) for any integers \(a,b \in {\mathbb {N}}\) satisfying \(a \le b\), and for simplicity write [b] for the special case of \(a=1\). We denote by \(x \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }X\) the process of sampling a value x according to the distribution X. Similarly, for a finite set \({\mathcal {S}}\), we denote by \(x \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathcal {S}}\) the process of sampling a value x according to the uniform distribution over \({\mathcal {S}}\). Let X and Y be two random variables over some finite sets \({\mathcal {S}}_X\) and \({\mathcal {S}}_Y\), respectively. The statistical distance between X and Y is defined as \(\Delta (X,Y) {:}{=}\frac{1}{2}\sum _{s \in {\mathcal {S}}_X \cup {\mathcal {S}}_Y}^{}{\left|\Pr [X=s] - \Pr [Y=s]\right|}\). A function \(f:{\mathbb {N}}\rightarrow [0, 1]\) is said to be negligible if for all positive polynomials \(p(\cdot )\) and all sufficiently large \(\lambda \in {\mathbb {N}}\), we have \(f(\lambda ) < 1/p(\lambda )\). Throughout this paper, we use \(\lambda \in {\mathbb {N}}\) to denote a security parameter. We denote by \(\textsf {poly}(\lambda )\) an unspecified integer-valued positive polynomial of \(\lambda \) and by \(\textsf {negl}(\lambda )\) an unspecified negligible function of \(\lambda \).

We treat vectors in their column form. For a vector \({\mathbf {v}}\in {\mathbb {R}}^n\), \(\left\| {\mathbf {v}}\right\| \) and \(\Vert {\mathbf {v}}\Vert _{\infty }\) denote the \(\ell _2\) and \(\ell _{\infty }\) norms, respectively. For a matrix \({\mathbf {R}}\in {\mathbb {R}}^{n \times n}\), we denote by \(\Vert {\mathbf {R}}\Vert _{\infty }\) its infinity norm, and by \(\left\| {\mathbf {R}}\right\| _2\) the operator norm of \({\mathbf {R}}\). Namely, \(\left\| {\mathbf {R}}\right\| _2 {:}{=}\sup _{\left\| {\mathbf {x}}\right\| =1}{\left\| {\mathbf {R}}{\mathbf {x}}\right\| }\). We denote by \([\cdot \vert \vert \cdot ]\) the horizontal concatenation of vectors and matrices. We use \(\otimes \) to denote the Kronecker product of two matrices. For any bit string \(x = (x_1,\ldots ,x_\ell ) \in \{ 0,1 \} ^\ell \) and any matrix \({\mathbf {A}}\in {\mathbb {R}}^{n \times m}\), we denote \(\left[ x_1 {\mathbf {A}}\vert \vert \cdots \vert \vert x_\ell {\mathbf {A}}\right] \) by \(x \otimes {\mathbf {A}}\). We denote by \({\mathbf {1}}_n\) the n-dimensional vector whose entries are all 1.

For a statement X, the boolean value [X] is 1 if X is true and 0 otherwise. In particular, for \(x \in \{ 0,1 \} \), we have \([x=0] = 1-x\) and \([x=1] = x\).

2.1 Lattices

Here, we recall some facts on lattices that will be used in our paper. Throughout this paper, n, m, \(m'\) and q are positive integers.

Gaussian distributions For an integer \(m > 0\) and a real \(\gamma > 0\), let \(D_{{\mathbb {Z}}^m, \gamma }\) be the discrete Gaussian distribution over \({\mathbb {Z}}^m\) with parameter \(\gamma \). Regarding the Gaussian distributions, the following lemmas hold.

Lemma 1

[51, Lemma 2.5].

Lemma 2

[37, Lemma 1] Let r be a positive real satisfying \(r > \Omega (\sqrt{n})\). Let \({\mathbf {b}}\in {\mathbb {Z}}_q^m\) be arbitrary and \({\mathbf {z}}\) chosen from \(D_{{\mathbb {Z}}^m, r}\). Then for any \({\mathbf {V}}\in {\mathbb {Z}}^{m \times m'}\) and positive real \(s > \left\| {\mathbf {V}}\right\| _2\), there exists a PPT algorithm \(\textsf {ReRand}({\mathbf {V}}, {\mathbf {b}}+ {\mathbf {z}}, r, s)\) that outputs \({\mathbf {b}}' = {\mathbf {V}}^\top {\mathbf {b}}+ {\mathbf {z}}' \in {\mathbb {Z}}_q^{m'}\), where the distribution of \({\mathbf {z}}'\) is within \(2^{-\Omega (n)}\) statistical distance to \(D_{{\mathbb {Z}}^{m'}, 2rs}\).

Random matrices The following lemma states the properties of random matrices.

Lemma 3

(Leftover Hash Lemma) Let \(q>2\) be a prime, m, n be positive integers such that \(m > n \log {q} + \Omega (n)\). For any integer \(k = \textsf {poly}(n)\), if \({\mathbf {A}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m}\), \({\mathbf {B}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times k}\), and \({\mathbf {R}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ -1,1 \}^{m \times k}\), then the statistical distance between \(({\mathbf {A}},{\mathbf {B}})\) and \(({\mathbf {A}},{\mathbf {A}}{\mathbf {R}})\) is within \(2^{-\Omega (n)}\).

Trapdoors Here, we follow the presentation of [20]. Let \({\mathbf {A}}\in {\mathbb {Z}}_q^{n \times m}\). For all \({\mathbf {V}}\in {\mathbb {Z}}_q^{n \times m'}\), we let \({\mathbf {A}}_{\gamma }^{-1}({\mathbf {V}})\) denote the random variable whose distribution is a discrete Gaussian \(D_{{\mathbb {Z}}^m, \gamma }^{m'}\) conditioned on \({\mathbf {A}}\cdot {\mathbf {A}}_{\gamma }^{-1}({\mathbf {V}}) = {\mathbf {V}}\). A \(\gamma \)-trapdoor for \({\mathbf {A}}\) is a procedure that can sample from a distribution within \(2^{-\Omega (n)}\) statistical distance of \({\mathbf {A}}_{\gamma }^{-1}({\mathbf {V}})\) in time \(\textsf {poly}(n,m,m',\log {q})\), for any \({\mathbf {V}}\in {\mathbb {Z}}_q^{n \times m'}\). We slightly overload notation and denote \(\gamma \)-trapdoor for \({\mathbf {A}}\) by \({\mathbf {A}}_{\gamma }^{-1}\).

We use the gadget matrix \({\mathbf {G}}\in {\mathbb {Z}}^{n \times m}\) defined in [47]. The following properties have been established in a long sequence of works [1, 22, 24, 29, 47].

Lemma 4

(Properties of trapdoors) Lattice trapdoors exhibit the following properties.

  1. Given \({\mathbf {A}}_{\gamma }^{-1}\), one can obtain \({\mathbf {A}}_{\gamma '}^{-1}\) for any \(\gamma ' \ge \gamma \).

  2. Given \({\mathbf {A}}_{\gamma }^{-1}\), one can obtain \([{\mathbf {A}}\vert \vert {\mathbf {B}}]_{\gamma }^{-1}\) and \([{\mathbf {B}}\vert \vert {\mathbf {A}}]_{\gamma }^{-1}\) for any \({\mathbf {B}}\in {\mathbb {Z}}_q^{n \times m'}\).

  3. For all \({\mathbf {A}}\in {\mathbb {Z}}_q^{n \times m}\), \({\mathbf {R}}\in {\mathbb {Z}}^{m \times m}\) with \(m \ge n \left\lceil \log {q}\right\rceil \), and an invertible element \(t \in {\mathbb {Z}}_q^*\), one can obtain \([{\mathbf {A}}{\mathbf {R}}+ t {\mathbf {G}}\vert \vert {\mathbf {A}}]_{\gamma }^{-1}\) for \(\gamma = \left\| {\mathbf {R}}\right\| _2 \cdot O(\sqrt{\log {m}})\).

  4. There exists an efficient algorithm \(\textsf {TrapGen}(1^n, 1^m, q)\) that outputs \(({\mathbf {A}}, {\mathbf {A}}_{\gamma _0}^{-1})\) where \({\mathbf {A}}\in {\mathbb {Z}}_q^{n \times m}\) for some \(m \ge 2 n \left\lceil \log {q}\right\rceil \) and the distribution of \({\mathbf {A}}\) is within \(2^{-\Omega (n)}\) statistical distance to uniform, where \(\gamma _0 = O(\sqrt{n\log {q}\log {m}})\).

Hardness assumption We recall the definition of the Learning with Errors (\(\textsf {LWE}\)) problem.

Definition 1

For integers \(n=n(\lambda )\), \(m=m(\lambda )\), a prime \(q=q(\lambda )>2\), a distribution \(\chi =\chi (\lambda )\) over \({\mathbb {Z}}\), and a PPT algorithm \({\mathcal {A}}\), the advantage of \({\mathcal {A}}\) for the learning with errors problem \(\textsf {LWE}_{n, m, q, \chi }\) is defined as follows:

$$\begin{aligned} \textsf {Adv}^{\textsf {LWE}_{n, m, q, \chi }}_{{\mathcal {A}}}(\lambda ) {:}{=}\left|\Pr \left[ {\mathcal {A}}({\mathbf {A}}, {\mathbf {A}}^\top {\mathbf {s}}+ {\mathbf {z}}) = 1\right] - \Pr \left[ {\mathcal {A}}({\mathbf {A}}, {\mathbf {v}}+ {\mathbf {z}}) = 1\right] \right|, \end{aligned}$$

where \({\mathbf {A}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m}\), \({\mathbf {s}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), \({\mathbf {v}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^m\), and \({\mathbf {z}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\chi ^m\). We say that the \(\textsf {LWE}_{n, m, q, \chi }\) assumption holds if \(\textsf {Adv}^{\textsf {LWE}_{n, m, q, \chi }}_{{\mathcal {A}}}(\lambda )\) is negligible in \(\lambda \) for all PPT \({\mathcal {A}}\).

For \(\chi = D_{{\mathbb {Z}}, \alpha q}\) and \(\alpha q > 2 \sqrt{n}\), it is known that the \(\textsf {LWE}_{n, m, q, \chi }\) problem is as hard as certain worst-case lattice problems with approximation factor \({\tilde{O}}(n/\alpha )\). We refer to [22, 49, 51] for more on the hardness of \(\textsf {LWE}\).

2.2 Pseudorandom function

Let \({\textsf {F}} = \{ {\textsf {F}}_{\lambda } : {\mathcal {K}}_{\lambda } \times {\mathcal {X}}_{\lambda } \rightarrow {\mathcal {Y}}_{\lambda } \}\) be an ensemble of function families. For a PPT adversary \({\mathcal {A}}\), we consider the following two experiments:

$$\begin{aligned} \boxed {\begin{array}{l|l} \quad \textsf {Expt}_{{\textsf {F}},{\mathcal {A}}}^{\text {real}}(1^\lambda ): &{} \quad \textsf {Expt}_{{\textsf {F}},{\mathcal {A}}}^{\text {rand}}(1^\lambda ):\\ k \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathcal {K}}_{\lambda } \qquad &{} k \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathcal {K}}_{\lambda }\\ x^*\leftarrow {\mathcal {A}}^{{\textsf {F}}_{\lambda }(k,\cdot )}(1^\lambda ) &{} x^*\leftarrow {\mathcal {A}}^{{\textsf {F}}_{\lambda }(k,\cdot )}(1^\lambda )\\ y^*\leftarrow {\textsf {F}}_{\lambda }(k,x^*) &{}y^*\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathcal {Y}}_{\lambda }\\ b \leftarrow {\mathcal {A}}(1^\lambda ,y^*) &{} b \leftarrow {\mathcal {A}}(1^\lambda ,y^*)\\ \end{array}} \end{aligned}$$

The advantage of a PPT adversary \({\mathcal {A}}\) is defined as

$$\begin{aligned} \textsf {Adv}^{\textsf {PRF}}_{{\mathcal {A}}}(\lambda ) {:}{=}\left|\Pr \left[ \textsf {Expt}_{{\textsf {F}},{\mathcal {A}}}^{\text {real}}(1^\lambda ) = 1\right] - \Pr \left[ \textsf {Expt}_{{\textsf {F}},{\mathcal {A}}}^{\text {rand}}(1^\lambda ) = 1\right] \right|. \end{aligned}$$

We say that \({\textsf {F}}\) is a pseudorandom function if the advantage of any PPT \({\mathcal {A}}\) is negligible in \(\lambda \).

2.3 Digital signature scheme

Syntax A digital signature scheme consists of the following three algorithms:

  • \(\textsf {KGen}(1^\lambda ) \rightarrow ({{\textsf {v}}}{{\textsf {k}}}, {{\textsf {s}}}{{\textsf {k}}})\): The key generation algorithm takes as input a security parameter \(1^\lambda \), and outputs a verification key \({{\textsf {v}}}{{\textsf {k}}}\) and signing key \({{\textsf {s}}}{{\textsf {k}}}\).

  • \(\textsf {Sig}({{\textsf {s}}}{{\textsf {k}}}, {{\textsf {v}}}{{\textsf {k}}}, {\textsf {M}}) \rightarrow \sigma \): The signing algorithm takes as inputs the signing key \({{\textsf {s}}}{{\textsf {k}}}\), the verification key \({{\textsf {v}}}{{\textsf {k}}}\) and a message \({\textsf {M}}\), and outputs a signature \(\sigma \).

  • \({{\textsf {V}}}{{\textsf {f}}}({{\textsf {v}}}{{\textsf {k}}}, {\textsf {M}}, \sigma ) \rightarrow 1\) or 0: The verification algorithm takes as inputs the verification key \({{\textsf {v}}}{{\textsf {k}}}\), a message \({\textsf {M}}\), and a signature \(\sigma \), and outputs 1 or 0.

Correctness We require that for all \(\lambda \in {\mathbb {N}}\) and all \({\textsf {M}}\) in the specified message space, \(\Pr [{{\textsf {V}}}{{\textsf {f}}}({{\textsf {v}}}{{\textsf {k}}}, {\textsf {M}}, \textsf {Sig}({{\textsf {s}}}{{\textsf {k}}}, {{\textsf {v}}}{{\textsf {k}}}, {\textsf {M}})) = 1] = 1-\textsf {negl}(\lambda )\) holds, where the probability is taken over the randomness used in all the algorithms.

Security We define the security of a signature scheme by the following game between a challenger and an adversary. During the game, the challenger maintains a list \({\mathcal {Q}}\), which is set to be empty at the beginning of the game.

  • Setup: The challenger runs \(({{\textsf {v}}}{{\textsf {k}}}, {{\textsf {s}}}{{\textsf {k}}}) \leftarrow \textsf {KGen}(1^\lambda )\) and gives \({{\textsf {v}}}{{\textsf {k}}}\) to \({\mathcal {A}}\).

  • Query Phase: When \({\mathcal {A}}\) submits a message \({\textsf {M}}\), the challenger responds by returning \(\sigma \leftarrow \textsf {Sig}({{\textsf {s}}}{{\textsf {k}}},{{\textsf {v}}}{{\textsf {k}}},{\textsf {M}})\). The challenger adds \(({\textsf {M}},\sigma )\) to \({\mathcal {Q}}\).

  • Forge: Finally, \({\mathcal {A}}\) outputs a pair \(({\textsf {M}}^*, \sigma ^*)\).

The advantage of \({\mathcal {A}}\) is defined as

$$\begin{aligned} \textsf {Adv}^{\textsf {Sig}}_{{\mathcal {A}}}(\lambda ) {:}{=}\Pr \left[ {{\textsf {V}}}{{\textsf {f}}}({{\textsf {v}}}{{\textsf {k}}}, {\textsf {M}}^*, \sigma ^*) = 1 \wedge {\textsf {M}}^*\notin {\mathcal {Q}} \right] . \end{aligned}$$

We say that a signature scheme is secure if the advantage of any PPT \({\mathcal {A}}\) is negligible in \(\lambda \).

2.4 Identity-based encryption

Syntax Let \( \{ 0,1 \} ^{\ell }\) be the identity space of the scheme, where \(\ell \) is a positive integer. An \(\textsf {IBE}\) scheme consists of the four algorithms:

  • \(\textsf {Setup}(1^\lambda ) \rightarrow (\textsf {MPK},\textsf {MSK})\): The setup algorithm takes as input a security parameter \(1^\lambda \), and outputs a master public key \(\textsf {MPK}\) and a master secret key \(\textsf {MSK}\).

  • \(\textsf {KGen}(\textsf {MPK},\textsf {MSK},{{\textsf {I}}}{{\textsf {D}}}) \rightarrow {{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}}\): The key generation algorithm takes as input the master public key \(\textsf {MPK}\), the master secret key \(\textsf {MSK}\), and an identity \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\). It outputs a secret key \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}}\). We assume that the identity \({{\textsf {I}}}{{\textsf {D}}}\) is implicitly included in \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}}\).

  • \(\textsf {Enc}(\textsf {MPK},{{\textsf {I}}}{{\textsf {D}}},{\textsf {M}}) \rightarrow {{\textsf {C}}}{{\textsf {T}}}\): The encryption algorithm takes as input the master public key \(\textsf {MPK}\), an identity \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\), and a message \({\textsf {M}}\). It outputs a ciphertext \({{\textsf {C}}}{{\textsf {T}}}\).

  • \(\textsf {Dec}(\textsf {MPK},{{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}},{{\textsf {C}}}{{\textsf {T}}}) \rightarrow {\textsf {M}}\) or \(\bot \): The decryption algorithm takes as input the master public key \(\textsf {MPK}\), a secret key \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}}\), and a ciphertext \({{\textsf {C}}}{{\textsf {T}}}\). It outputs the message \({\textsf {M}}\) or \(\bot \), which means that the ciphertext is not in a valid form.

Correctness We require that for all \(\lambda \), \((\textsf {MPK},\textsf {MSK}) \leftarrow \textsf {Setup}(1^\lambda )\), \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\), and \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}} \leftarrow \textsf {KGen}(\textsf {MPK},\textsf {MSK},{{\textsf {I}}}{{\textsf {D}}})\), we have

$$\begin{aligned} \Pr [\textsf {Dec}(\textsf {MPK},{{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}},\textsf {Enc}(\textsf {MPK},{{\textsf {I}}}{{\textsf {D}}},{\textsf {M}})) = {\textsf {M}}] = 1 - \textsf {negl}(\lambda ), \end{aligned}$$

where the probability is taken over the randomness used in all the algorithms.

Security We define the security of an \(\textsf {IBE}\) scheme by the following game between a challenger and an adversary \({\mathcal {A}}\).

  • Setup. The challenger runs \((\textsf {MPK},\textsf {MSK}) \leftarrow \textsf {Setup}(1^\lambda )\) and gives the master public key \(\textsf {MPK}\) to \({\mathcal {A}}\).

  • Phase 1. \({\mathcal {A}}\) may adaptively make key generation queries. If \({\mathcal {A}}\) submits an identity \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\) to the challenger, the challenger runs \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}} \leftarrow \textsf {KGen}(\textsf {MPK},\textsf {MSK},{{\textsf {I}}}{{\textsf {D}}})\) and returns \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}}\) to \({\mathcal {A}}\).

  • Challenge Phase. At some point, \({\mathcal {A}}\) outputs two messages \({\textsf {M}}_0, {\textsf {M}}_1\) and an identity \({{\textsf {I}}}{{\textsf {D}}}^*\in \{ 0,1 \} ^{\ell }\), on which it wishes to be challenged. Then, the challenger picks a random bit \(b \in \{ 0,1 \} \) and returns \({{\textsf {C}}}{{\textsf {T}}}^*\leftarrow \textsf {Enc}(\textsf {MPK},{{\textsf {I}}}{{\textsf {D}}}^*,{\textsf {M}}_b)\) to \({\mathcal {A}}\). We prohibit \({\mathcal {A}}\) from choosing a challenge identity \({{\textsf {I}}}{{\textsf {D}}}^*\) for which it has already made a key generation query, and vice versa.

  • Phase 2. After the challenge phase, \({\mathcal {A}}\) may continue to make key generation queries as in Phase 1, with the added restriction that \({{\textsf {I}}}{{\textsf {D}}}\ne {{\textsf {I}}}{{\textsf {D}}}^*\).

  • Guess. Finally, \({\mathcal {A}}\) outputs a guess \(b'\) for b.

The advantage of \({\mathcal {A}}\) in this game is defined as \(\textsf {Adv}^{\textsf {IBE}}_{{\mathcal {A}}}(\lambda ) {:}{=}\left|\Pr [b'=b]-\frac{1}{2}\right|\). We say that an \(\textsf {IBE}\) scheme is adaptively secure, if the advantage of any PPT \({\mathcal {A}}\) is negligible in \(\lambda \).

2.5 Attribute-based encryption

Syntax Let \(R:{\mathcal {K}} \times {\mathcal {X}} \rightarrow \{ 0,1 \} \) be a relation, where \({\mathcal {K}}\) and \({\mathcal {X}}\) denote “key attribute” and “ciphertext attribute” spaces. An \(\textsf {ABE}\) scheme for R consists of the four algorithms:

  • \(\textsf {Setup}(1^\lambda ) \rightarrow (\textsf {MPK},\textsf {MSK})\): The setup algorithm takes as input a security parameter \(1^\lambda \), and outputs a master public key \(\textsf {MPK}\) and a master secret key \(\textsf {MSK}\).

  • \(\textsf {KGen}(\textsf {MPK},\textsf {MSK},k) \rightarrow {{\textsf {s}}}{{\textsf {k}}}_k\): The key generation algorithm takes as input the master public key \(\textsf {MPK}\), the master secret key \(\textsf {MSK}\), and a key attribute \(k \in {\mathcal {K}}\). It outputs a secret key \({{\textsf {s}}}{{\textsf {k}}}_k\). We assume that the key attribute k is implicitly included in \({{\textsf {s}}}{{\textsf {k}}}_k\).

  • \(\textsf {Enc}(\textsf {MPK},x,{\textsf {M}}) \rightarrow {{\textsf {C}}}{{\textsf {T}}}\): The encryption algorithm takes as input the master public key \(\textsf {MPK}\), a ciphertext attribute \(x \in {\mathcal {X}}\), and a message \({\textsf {M}}\). It outputs a ciphertext \({{\textsf {C}}}{{\textsf {T}}}\).

  • \(\textsf {Dec}(\textsf {MPK},{{\textsf {s}}}{{\textsf {k}}}_k,(x,{{\textsf {C}}}{{\textsf {T}}})) \rightarrow {\textsf {M}}\) or \(\bot \): The decryption algorithm takes as input the master public key \(\textsf {MPK}\), a secret key \({{\textsf {s}}}{{\textsf {k}}}_k\), and a ciphertext \({{\textsf {C}}}{{\textsf {T}}}\) with an associated ciphertext attribute x. It outputs the message \({\textsf {M}}\) or \(\bot \), which means that the ciphertext is not in a valid form.

Correctness We require that for all \(\lambda \), \((\textsf {MPK},\textsf {MSK}) \leftarrow \textsf {Setup}(1^\lambda )\), \(k \in {\mathcal {K}}\), \(x \in {\mathcal {X}}\) such that \(R(k,x) = 0\), and \({{\textsf {s}}}{{\textsf {k}}}_k \leftarrow \textsf {KGen}(\textsf {MPK},\textsf {MSK},k)\), we have

$$\begin{aligned} \Pr [\textsf {Dec}(\textsf {MPK},{{\textsf {s}}}{{\textsf {k}}}_{k},(x,\textsf {Enc}(\textsf {MPK},x,{\textsf {M}}))) = {\textsf {M}}] = 1-\textsf {negl}(\lambda ), \end{aligned}$$

where the probability is taken over the randomness used in all the algorithms.

Security We define the security of an \(\textsf {ABE}\) scheme by the following game between a challenger and an adversary \({\mathcal {A}}\).

  • Setup. At the outset of the game, \({\mathcal {A}}\) submits to the challenger a ciphertext attribute \(x^*\in {\mathcal {X}}\) on which it wishes to be challenged. Then, the challenger runs \((\textsf {MPK},\textsf {MSK}) \leftarrow \textsf {Setup}(1^\lambda )\) and gives the master public key \(\textsf {MPK}\) to \({\mathcal {A}}\).

  • Phase 1. \({\mathcal {A}}\) may adaptively make key generation queries. If \({\mathcal {A}}\) submits a key attribute \(k \in {\mathcal {K}}\) to the challenger, the challenger runs \({{\textsf {s}}}{{\textsf {k}}}_k \leftarrow \textsf {KGen}(\textsf {MPK},\textsf {MSK},k)\) and returns \({{\textsf {s}}}{{\textsf {k}}}_k\) to \({\mathcal {A}}\). Here, we require the key attribute k to satisfy \(R(k,x^*) = 1\) (that is, \({{\textsf {s}}}{{\textsf {k}}}_k\) does not decrypt the challenge ciphertext).

  • Challenge Phase. At some point, \({\mathcal {A}}\) outputs two messages \({\textsf {M}}_0, {\textsf {M}}_1\). Then, the challenger picks a random bit \(b \in \{ 0,1 \} \) and returns \({{\textsf {C}}}{{\textsf {T}}}^*\leftarrow \textsf {Enc}(\textsf {MPK},x^*,{\textsf {M}}_b)\) to \({\mathcal {A}}\).

  • Phase 2. After the challenge phase, \({\mathcal {A}}\) may continue to make key generation queries as in Phase 1.

  • Guess. Finally, \({\mathcal {A}}\) outputs a guess \(b'\) for b.

The advantage of \({\mathcal {A}}\) in this game is defined as \(\textsf {Adv}^{\textsf {ABE}}_{{\mathcal {A}}}(\lambda ) {:}{=}\left|\Pr [b'=b]-\frac{1}{2}\right|\). We say that an \(\textsf {ABE}\) scheme is selectively secure, if the advantage of any PPT \({\mathcal {A}}\) is negligible in \(\lambda \).

3 Evaluating inner-products via branching program

In this section, we introduce a toolset to efficiently compute inner-products homomorphically over a small ring \({\mathbb {Z}}_p\) for small \(p = \textsf {poly}(\lambda )\). We first recap some basic facts about symmetric groups and then explain how to compute inner-products over \({\mathbb {Z}}_p\) directly as a particular type of branching program.

3.1 Symmetric groups and group embeddings

Let \({\mathcal {S}}_p\) denote the symmetric group of degree p, i.e., the group of permutations \(\pi :[p] \rightarrow [p]\) with function composition as the group operation. The group \({\mathcal {S}}_p\) is isomorphic to the multiplicative group of p-by-p permutation matrices via the map that associates \(\pi \in {\mathcal {S}}_p\) with the permutation matrix \({\mathbf {P}}_\pi = \begin{pmatrix} {\mathbf {e}}_{\pi (1)}&{\mathbf {e}}_{\pi (2)}&\ldots&{\mathbf {e}}_{\pi (p)} \end{pmatrix}\), where \({\mathbf {e}}_i \in \{ 0,1 \} ^p\) is the i-th standard basis vector.

The additive cyclic group \(({\mathbb {Z}}_p,+)\) can be embedded into the symmetric group \({\mathcal {S}}_p\) via the injective homomorphism that maps the generator \(1 \in {\mathbb {Z}}_p\) to the cyclic shift permutation \(\pi \in {\mathcal {S}}_p\), defined as \(\pi (i)=i+1\) for \(1 \le i < p\) and \(\pi (p)=1\). Clearly, this embedding and its inverse can be computed efficiently. Note that the permutation matrices in the image of this embedding can be represented more compactly by just their first column because the remaining columns are just the successive cyclic shifts of this column. In the following, we identify such permutations with their associated vector. Then, we have a group embedding \(\phi _p:{\mathbb {Z}}_p \rightarrow \{ 0,1 \} ^p\). Note that the group embedding \(\phi _p\) can be seen as \(\phi _p(x) = \left( [x=0],[x=1],\ldots ,[x=p-1]\right) ^\top .\)

For any \(x, y \in {\mathbb {Z}}_p\), we define the multiplication \(\phi _p(x) \cdot \phi _p(y)\) as the product of the permutation matrix associated with \(\phi _p(x)\) and the unit vector \(\phi _p(y)\). From the above, we have \(\phi _p(x) \cdot \phi _p(y) = \phi _p(x+y)\).
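For example, with \(p = 3\), the permutation matrix associated with \(\phi _3(1)\) is the cyclic shift by one position, and indeed \(1 + 2 = 0 \bmod 3\):

$$\begin{aligned} \phi _3(1) = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \qquad \phi _3(1) \cdot \phi _3(2) = \begin{pmatrix} 0 &{} 0 &{} 1 \\ 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \phi _3(0). \end{aligned}$$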

3.2 Computation of inner-products over a small ring

We explain how to compute the inner-product over \({\mathbb {Z}}_p\). Let \(\ell \) be a positive integer. For vectors \({\mathbf {x}}, {\mathbf {y}}\in {\mathbb {Z}}_p^\ell \), we consider a function \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}: \{ 0,1 \} ^{\ell \left\lceil \log {p}\right\rceil } \rightarrow \{ 0,1 \} ^p\) such that \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}}) = \phi _p(\langle {\mathbf {x}}, {\mathbf {y}}\rangle )\), where \({\hat{x}} \in \{ 0,1 \} ^{\ell \left\lceil \log {p}\right\rceil }\) is the binary representation of \({\mathbf {x}}\). We can compute this function \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}\) as follows:

$$\begin{aligned} f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}}) = \prod _{i=1}^{\ell } \prod _{j=1}^{\left\lceil \log {p}\right\rceil } \left( (1-x_{i,j}) \cdot \phi _p(0) + x_{i,j} \cdot \phi _p(2^{j-1} \cdot y_i) \right) , \end{aligned}$$

where \(x_{i,j}\) is the j-th bit of the binary representation of the i-th element of \({\mathbf {x}}\). It is easy to see that the function \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}})\) exactly computes \(\phi _p(\langle {\mathbf {x}}, {\mathbf {y}}\rangle )\) since

$$\begin{aligned} f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}})&= \prod _{i=1}^{\ell } \prod _{j=1}^{\left\lceil \log {p}\right\rceil } \left( (1-x_{i,j}) \cdot \phi _p(0) + x_{i,j} \cdot \phi _p(2^{j-1} \cdot y_i) \right) \\&= \prod _{i=1}^{\ell } \prod _{j=1}^{\left\lceil \log {p}\right\rceil } \phi _p(x_{i,j} \cdot 2^{j-1} \cdot y_i) \\&= \prod _{i=1}^{\ell } \phi _p\Bigg ( \sum _{j=1}^{\left\lceil \log {p}\right\rceil }(x_{i,j} \cdot 2^{j-1}) \cdot y_i \Bigg ) \\&= \prod _{i=1}^{\ell } \phi _p\big ( x_i \cdot y_i \big ) \\&= \phi _p(\langle {\mathbf {x}}, {\mathbf {y}}\rangle ). \end{aligned}$$

The second equality above follows from the fact that \((1 - x_{i,j}) \cdot \phi _p(0) + x_{i,j} \cdot \phi _p(2^{j-1} \cdot y_i)\) is \(\phi _p(0)\) if \(x_{i,j} = 0\) and \(\phi _p(2^{j-1} \cdot y_i)\) otherwise. The third and the fifth equalities follow from the fact that \(\phi _p\) is a group homomorphism.
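To make the evaluation order concrete, here is a minimal Python sketch (random toy inputs; the helper names are ours) that evaluates \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}\) as the chain of \(\ell \left\lceil \log {p}\right\rceil \) permutation selections above, starting from \(\phi _p(0)\), and checks the result against a direct computation of \(\langle {\mathbf {x}}, {\mathbf {y}}\rangle \bmod p\).

```python
# Minimal sketch (illustrative only; random toy inputs, helper names are ours):
# evaluating f_y^IP as a chain of l * ceil(log p) permutation selections.
import math
import numpy as np

p, l = 5, 4
logp = math.ceil(math.log2(p))

def phi(a):                              # unit-vector embedding of Z_p
    v = np.zeros(p, dtype=int)
    v[a % p] = 1
    return v

def perm_matrix(a):                      # permutation matrix of 'add a mod p'
    return np.stack([phi(a + j) for j in range(p)], axis=1)

rng = np.random.default_rng(1)
x = rng.integers(0, p, size=l)
y = rng.integers(0, p, size=l)

# Start from phi(0); at step (i, j) read the j-th bit of x_i and apply either
# the identity permutation phi_p(0) or the permutation 'add 2^j * y_i'.
state = phi(0)
for i in range(l):
    for j in range(logp):
        b = (int(x[i]) >> j) & 1
        M = (1 - b) * perm_matrix(0) + b * perm_matrix((2 ** j) * int(y[i]))
        state = M @ state

assert np.array_equal(state, phi(int(np.dot(x, y)) % p))
```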

3.3 Homomorphic evaluation of inner-products over lattices

The above evaluation process of \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}(\cdot )\) (for a fixed \({\mathbf {y}}\)) meets the definition of a branching program, and in particular, we are able to homomorphically compute \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}(\cdot )\) using the evaluation algorithms in [33] that exploit the “quasi-additive” nature of GSW-\(\textsf {FHE}\). Specifically, at each step of the computation, \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}(\cdot )\) reads one of the input bits \(x_{i, j} \in \{ 0,1 \} \) and chooses the permutation \(\phi _p(0)\) or \(\phi _p(2^{j-1} \cdot y_i)\) accordingly. The following, which is a direct consequence of the results in [33, Sect. 3], is the main technical toolset that allows efficient computation of inner-products over a small ring \({\mathbb {Z}}_p\). It is much more efficient compared to previous techniques that indirectly evaluate inner-products via Barrington’s theorem (such as in [33]).

Theorem 5

(Homomorphic Inner-Product) Let \(p = p(\lambda )\), \(\ell = \ell (\lambda )\), and \(\ell _p = \ell \cdot \left\lceil \log {p}\right\rceil \) be positive integers. There exist efficient deterministic algorithms \(\left( \textsf {PubIP}_{p,\ell }, \textsf {CTIP}_{p,\ell }, \textsf {TrapIP}_{p,\ell }\right) \) with the following properties.

  • \(\textsf {PubIP}_{p,\ell } ( {\mathbf {y}}\in {\mathbb {Z}}_p^\ell , \vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot \ell _p} ) \rightarrow \vec {{\mathbf {B}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} \in {\mathbb {Z}}_q^{n \times m \cdot p}\).

  • \(\textsf {CTIP}_{p,\ell } ( {\mathbf {x}}\in {\mathbb {Z}}_p^\ell , {\mathbf {y}}\in {\mathbb {Z}}_p^\ell , \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot \ell _p} ) \rightarrow \vec {{\mathbf {c}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} \in {\mathbb {Z}}_q^{m \cdot p}\). Furthermore, we have

    $$\begin{aligned} \Vert \vec {{\mathbf {c}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} - (\vec {{\mathbf {B}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} + f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}}) \otimes {\mathbf {G}})^{\top } {\mathbf {s}}\Vert _{\infty } \le (3 m \ell _p + 1) \cdot \Vert \vec {{\mathbf {z}}} \Vert _{\infty } \end{aligned}$$

    if \(\vec {{\mathbf {c}}} = ( \vec {{\mathbf {B}}} + {\hat{x}} \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\) for some \({\mathbf {s}}\in {\mathbb {Z}}_q^n\) and \(\vec {{\mathbf {z}}} \in {\mathbb {Z}}^{m \cdot \ell _p}\).

  • \(\textsf {TrapIP}_{p,\ell } ( {\mathbf {x}}\in {\mathbb {Z}}_p^\ell , {\mathbf {y}}\in {\mathbb {Z}}_p^\ell , \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot \ell _p} ) \rightarrow \vec {{\mathbf {R}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} \in {\mathbb {Z}}^{m \times m \cdot p}\). Furthermore, we have

    $$\begin{aligned} \textsf {PubIP}_{p,\ell } ( {\mathbf {y}}, {\mathbf {A}}\vec {{\mathbf {R}}} - {\hat{x}} \otimes {\mathbf {G}}) = {\mathbf {A}}\vec {{\mathbf {R}}}_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}} - f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}({\hat{x}}) \otimes {\mathbf {G}}\end{aligned}$$

    for any \({\mathbf {A}}\in {\mathbb {Z}}_q^{n \times m}\).

  • Their running time is bounded by \(\textsf {poly}(n,m,p,\ell _p,\log {q})\).

4 Attribute-based encryption for useful predicates

In this section, we show how to construct an attribute-based encryption (\(\textsf {ABE}\)) for several useful predicates using the toolset from Theorem 5. Specifically, we construct a non-zero inner-product encryption (\(\textsf {NIPE}\)), an identity-based revocation (\(\textsf {IBR}\)), and a fuzzy identity-based encryption (\(\textsf {FIBE}\)). These constructions substantially improve the efficiency of previous constructions. Notably, unlike the state-of-the-art construction by Katsumata and Yamada [39], our \(\textsf {NIPE}\) is stateless and can be instantiated assuming only \(\textsf {LWE}\) with a polynomial-size modulus, even when we handle inner-products over an exponentially large field. Since our \(\textsf {NIPE}\) handles exponentially large fields, it can be converted into an \(\textsf {IBR}\) scheme with an exponentially large ID-space using standard encoding techniques. Finally, unlike the previous construction by Agrawal et al. [3], our \(\textsf {FIBE}\) scheme does not require a sub-exponentially large modulus. All our constructions are much more efficient than those derived from \(\textsf {ABE}\) for \({\textbf {NC}}^1\) circuits [33, 34] by specializing to the corresponding predicates, since they do not involve the step of converting an \({\textbf {NC}}^1\) circuit into a branching program via Barrington’s theorem.

4.1 \(\textsf {ABE}\) and \(\textsf {ABE}\) enabling algorithms

Boneh et al. [14] provide a generic way of constructing a lattice-based \(\textsf {ABE}\) scheme from fully key homomorphic algorithms. In this paper, in addition to these algorithms, we use an \(\textsf {Encode}\) algorithm that converts a ciphertext attribute into a bit string in \( \{ 0,1 \} ^u\).

Definition 2

(\(\textsf {ABE}\) Enabling Algorithms) We say that the deterministic algorithms \((\textsf {Encode}\), \(\textsf {PubEval}\), \(\textsf {CTEval}\), \(\textsf {TrapEval})\) are \(\alpha _R\)-\(\textsf {ABE}\) enabling for a relation \(R:{\mathcal {K}} \times {\mathcal {X}} \rightarrow \{ 0,1 \} \) if they are efficient and satisfy the following properties:

  • \(\textsf {Encode}(x \in {\mathcal {X}}) \rightarrow {\hat{x}} \in \{ 0,1 \} ^u\).

  • \(\textsf {PubEval}(k \in {\mathcal {K}}, \vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot u}) \rightarrow {\mathbf {B}}_k \in {\mathbb {Z}}_q^{n \times m}\).

  • \(\textsf {CTEval}(k \in {\mathcal {K}}, x \in {\mathcal {X}}, \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot u}) \rightarrow {\mathbf {c}}_k \in {\mathbb {Z}}_q^m\). Furthermore, we have

    $$\begin{aligned} \Vert \underbrace{{\mathbf {c}}_k - ({\mathbf {B}}_k + R(k,x) \cdot {\mathbf {G}})^{\top } {\mathbf {s}}}_{{=}{:}{\mathbf {z}}_k}\Vert _{\infty } \le \alpha _R \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty } \end{aligned}$$

    if \(\vec {{\mathbf {c}}} = (\vec {{\mathbf {B}}} + {\hat{x}} \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\) for some \({\mathbf {s}}\in {\mathbb {Z}}_q^n\) and \(\vec {{\mathbf {z}}} \in {\mathbb {Z}}^{m \cdot u}\).

  • \(\textsf {TrapEval}(k \in {\mathcal {K}}, x \in {\mathcal {X}}, \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot u}) \rightarrow {\mathbf {R}}_k \in {\mathbb {Z}}^{m \times m}\). Furthermore, we have

    $$\begin{aligned} \textsf {PubEval}(k, {\mathbf {A}}\vec {{\mathbf {R}}} - {\hat{x}} \otimes {\mathbf {G}}) = {\mathbf {A}}{\mathbf {R}}_k - R(k,x) \cdot {\mathbf {G}}, \end{aligned}$$

    and \(\Vert {\mathbf {R}}_k \Vert _{\infty } \le \alpha _R \cdot \Vert \vec {{\mathbf {R}}} \Vert _{\infty }\).
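Viewed operationally, Definition 2 specifies a four-algorithm interface. The following Python stub (purely illustrative; the class and method names are ours, and nothing here enforces the stated conditions) records the shapes and the correctness/noise conditions as docstrings, which may help when reading the instantiations in Sects. 4.2–4.4.

```python
from typing import Protocol
import numpy as np

class ABEEnabling(Protocol):
    """alpha_R-ABE enabling algorithms for a relation R: K x X -> {0, 1}."""

    def encode(self, x) -> np.ndarray:
        """Encode a ciphertext attribute x as a bit string in {0, 1}^u."""

    def pub_eval(self, k, B_vec: np.ndarray) -> np.ndarray:
        """Map B_vec in Z_q^{n x (m*u)} to B_k in Z_q^{n x m}."""

    def ct_eval(self, k, x, c_vec: np.ndarray) -> np.ndarray:
        """Given c_vec = (B_vec + encode(x) (x) G)^T s + z, return c_k with
        c_k = (B_k + R(k, x) * G)^T s + z_k and ||z_k||_inf <= alpha_R * ||z||_inf."""

    def trap_eval(self, k, x, R_vec: np.ndarray) -> np.ndarray:
        """Return R_k with pub_eval(k, A @ R_vec - encode(x) (x) G) = A @ R_k - R(k, x) * G
        and ||R_k||_inf <= alpha_R * ||R_vec||_inf."""
```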

Once we have enabling algorithms satisfying the above definition, we can construct an \(\textsf {ABE}\) scheme for the corresponding relation R as was shown by Boneh et al. [14]. We sketch the construction here and refer to Sect. A for the details. The master public key of the construction consists of matrices \({\mathbf {A}}\in {\mathbb {Z}}_q^{n\times m}\), \(\vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n\times m\cdot u}\) and a vector \({\mathbf {u}}\in {\mathbb {Z}}_q^n\), and the master secret key is the trapdoor for \({\mathbf {A}}\). The secret key for a key attribute k is a short vector \({\mathbf {d}}\) that satisfies \([{\mathbf {A}}\Vert {\mathbf {B}}_k] \cdot {\mathbf {d}}={\mathbf {u}}\), where \({\mathbf {B}}_k\) is computed by running \(\textsf {PubEval}\) on input \(\vec {{\mathbf {B}}}\) and k, and \({\mathbf {d}}\) is sampled using the trapdoor for \({\mathbf {A}}\). To encrypt a message \({\textsf {M}}\in \{0,1\}\) for a ciphertext attribute x, we generate \({\mathbf {c}}={\mathbf {A}}^\top {\mathbf {s}}+ \textsf {noise}\), \(\vec {{\mathbf {c}}}=(\vec {{\mathbf {B}}}\ + {\hat{x}}\otimes {\mathbf {G}})^\top {\mathbf {s}}+ \textsf {noise}\), and \(c={\mathbf {u}}^\top {\mathbf {s}}+ {\textsf {M}}\lceil q/2 \rceil + \textsf {noise}\) as a ciphertext, where \({\hat{x}} = \textsf {Encode}(x)\). To decrypt the ciphertext, we first compute \({\mathbf {c}}_k:=({\mathbf {B}}_k + R(k,x) \cdot {\mathbf {G}})^{\top } {\mathbf {s}}+ \textsf {noise}\) by running \(\textsf {CTEval}\) on input \((k,x,\vec {{\mathbf {c}}})\); when \(R(k,x) = 0\), i.e., when decryption is permitted, this equals \({\mathbf {B}}_k^{\top } {\mathbf {s}}+ \textsf {noise}\). We then compute \([{\mathbf {c}}^{\top } \vert \vert {\mathbf {c}}_k^{\top }] \cdot {\mathbf {d}}= {\mathbf {u}}^\top {\mathbf {s}}+ \textsf {noise}\) and subtract it from c to recover the message.
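As a quick sanity check of this decryption equation, the following toy Python/NumPy snippet (the dimensions, noise levels, and the trick of choosing \({\mathbf {d}}\) first and defining \({\mathbf {u}}{:}{=}[{\mathbf {A}}\Vert {\mathbf {B}}_k]\cdot {\mathbf {d}}\) are our own illustrative choices, so that no trapdoor sampling is needed) verifies that subtracting \([{\mathbf {c}}^{\top } \vert \vert {\mathbf {c}}_k^{\top }] \cdot {\mathbf {d}}\) from c recovers \({\textsf {M}}\) in the case \(R(k,x) = 0\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 32, 12289                     # toy parameters (ours)
A   = rng.integers(0, q, size=(n, m))
B_k = rng.integers(0, q, size=(n, m))      # stands for PubEval's output when R(k, x) = 0
d   = rng.integers(-1, 2, size=2 * m)      # short secret key; u is defined from it
u   = (np.concatenate([A, B_k], axis=1) @ d) % q

def noise(k):
    return rng.integers(-2, 3, size=k)     # small illustrative noise

def encrypt(M):
    s = rng.integers(0, q, size=n)
    c_vec   = (A.T   @ s + noise(m)) % q                 # c   = A^T s + noise
    c_k_vec = (B_k.T @ s + noise(m)) % q                 # c_k = B_k^T s + noise (R = 0 case)
    c0      = (u @ s + noise(1)[0] + M * (q // 2)) % q   # c   = u^T s + M*(q//2) + noise
    return c0, c_vec, c_k_vec

def decrypt(c0, c_vec, c_k_vec):
    e = (c0 - np.concatenate([c_vec, c_k_vec]) @ d) % q  # subtract [c^T || c_k^T] d
    centered = ((e + q // 4) % q) - q // 4               # representative of e near 0 or q/2
    return int(abs(centered) > q // 4)                   # close to q/2  =>  M = 1

for M in (0, 1):
    assert decrypt(*encrypt(M)) == M
```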

Thanks to the work of Boneh et al. [14], the problem of constructing an \(\textsf {ABE}\) for a relation R is thus reduced to the problem of homomorphically computing R. Therefore, in the following, we will mainly discuss how to compute the specific relations R using our base technique for computing inner-products over a small ring \({\mathbb {Z}}_p\).

4.2 \(\textsf {ABE}\) enabling algorithms for \(\textsf {NIPE}\)

We show \(\textsf {ABE}\) enabling algorithms for \(\textsf {NIPE}\) relations over an exponentially large field. As briefly explained in the introduction, we bootstrap our algorithms in Theorem 5 for homomorphic computation over a small ring to an exponentially large field. We first recall the definition of \(\textsf {NIPE}\) and polynomial rings.

Definition 3

Let \({\mathcal {R}}\) be a ring. An \(\textsf {NIPE}\) over \({\mathcal {R}}\) with dimension \(\ell \) is an \(\textsf {ABE}\) for \(R^{\textsf {NIPE}}:{\mathcal {R}}^\ell \times {\mathcal {R}}^\ell \rightarrow \{ 0,1 \} \) defined by \(R^{\textsf {NIPE}}({\mathbf {x}},{\mathbf {y}}) = 0\) iff \(\langle {\mathbf {x}}, {\mathbf {y}}\rangle \ne 0\) over \({\mathcal {R}}\).

Polynomial rings

Let \(p,\ell ,t \in {\mathbb {N}}\) and consider a finite ring \({\mathcal {R}} = {\mathbb {Z}}_p[X]/\left\langle {\mathfrak {g}}\right\rangle \), where \({\mathfrak {g}} \in {\mathbb {Z}}_p[X]\) is a monic polynomial with degree t. If p is prime and \({\mathfrak {g}}\) is irreducible over \({\mathbb {Z}}_p\), the ring \({\mathcal {R}}\) is the field \(\mathrm {GF}(p^t)\). We define the mappings \(\theta :{\mathcal {R}} \rightarrow {\mathbb {Z}}_p^t\) by \({\mathfrak {a}} = a_0 + a_1X + \cdots + a_{t-1}X^{t-1} \mapsto (a_0,a_1,\ldots ,a_{t-1})^{\top }\) and \(\textsf {Rot}:{\mathcal {R}} \rightarrow {\mathbb {Z}}_p^{t \times t}\) by \({\mathfrak {a}} \mapsto \left( \theta ({\mathfrak {a}}),\theta ({\mathfrak {a}}X),\ldots ,\theta ({\mathfrak {a}}X^{t-1})\right) \). We denote by \(\rho _i:{\mathcal {R}} \rightarrow {\mathbb {Z}}_p^t\) the i-th row of \(\textsf {Rot}({\mathfrak {a}})\), i.e., \(\textsf {Rot}({\mathfrak {a}}) = \left( \rho _1({\mathfrak {a}}), \ldots , \rho _t({\mathfrak {a}})\right) ^\top \). We note that \(\theta ({\mathfrak {a}}) + \theta ({\mathfrak {b}}) = \theta ({\mathfrak {a}}+{\mathfrak {b}})\) and \(\textsf {Rot}({\mathfrak {a}}) \cdot \theta ({\mathfrak {b}}) = \theta ({\mathfrak {a}}{\mathfrak {b}})\). For \(\vec {{\mathfrak {a}}} = ({\mathfrak {a}}_1,\ldots ,{\mathfrak {a}}_\ell ), \vec {{\mathfrak {b}}} = ({\mathfrak {b}}_1,\ldots ,{\mathfrak {b}}_\ell ) \in {\mathcal {R}}^{\ell }\), we have

$$\begin{aligned} \theta \left( \langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle \right)&= \theta \left( \sum _{i=1}^{\ell }{{\mathfrak {a}}_i {\mathfrak {b}}_i}\right) \\&= \sum _{i=1}^{\ell } \theta ({\mathfrak {a}}_i {\mathfrak {b}}_i) \\&= \sum _{i=1}^{\ell } \textsf {Rot}({\mathfrak {a}}_i) \cdot \theta ({\mathfrak {b}}_i) \\&= \sum _{i=1}^{\ell } \left( \rho _1({\mathfrak {a}}_i), \ldots , \rho _t({\mathfrak {a}}_i)\right) ^\top \cdot \theta ({\mathfrak {b}}_i) \\&= \sum _{i=1}^{\ell } \left( \langle \rho _1({\mathfrak {a}}_i), \theta ({\mathfrak {b}}_i)\rangle , \ldots , \langle \rho _t({\mathfrak {a}}_i), \theta ({\mathfrak {b}}_i)\rangle \right) ^{\top } \\&= \left( \langle \Psi _1(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}})\rangle , \ldots , \langle \Psi _t(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}})\rangle \right) ^{\top }, \end{aligned}$$

where \(\Theta (\vec {{\mathfrak {b}}}) {:}{=}[\theta ({\mathfrak {b}}_1)^{\top } \vert \vert \cdots \vert \vert \theta ({\mathfrak {b}}_\ell )^{\top }]^{\top } \in {\mathbb {Z}}_p^{\ell t}\) and \(\Psi _i(\vec {{\mathfrak {a}}}) {:}{=}[\rho _i({\mathfrak {a}}_1)^{\top } \vert \vert \cdots \vert \vert \rho _i({\mathfrak {a}}_\ell )^{\top }]^{\top } \in {\mathbb {Z}}_p^{\ell t}\) for \(i \in [t]\). This means that the inner-product over a polynomial ring \({\mathcal {R}}\) can be computed by the inner-products over \({\mathbb {Z}}_p\).
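The identities used in this derivation are easy to check numerically. The following Python/NumPy sketch (our own illustration; the helper names and the toy modulus \({\mathfrak {g}}\) are ours) implements \(\theta \) and \(\textsf {Rot}\) for \({\mathcal {R}} = {\mathbb {Z}}_p[X]/\left\langle {\mathfrak {g}}\right\rangle \) and verifies that the coordinates of \(\theta (\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle )\) are the \({\mathbb {Z}}_p\) inner-products \(\langle \Psi _i(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}})\rangle \).

```python
import numpy as np

def pmod(a, g, p):
    """Coefficients (low degree first, length t = deg g) of a mod (g, p); g is monic."""
    a = [c % p for c in a]
    t = len(g) - 1
    while len(a) > t:
        lead, d = a[-1], len(a) - 1 - t
        for i in range(t + 1):
            a[d + i] = (a[d + i] - lead * g[i]) % p
        a.pop()
    return np.array(a + [0] * (t - len(a)))

def theta(a, g, p):
    return pmod(list(a), g, p)

def Rot(a, g, p):
    """Matrix with columns theta(a), theta(aX), ..., theta(aX^{t-1})."""
    t, cols, cur = len(g) - 1, [], list(a)
    for _ in range(t):
        red = pmod(cur, g, p)
        cols.append(red)
        cur = [0] + list(red)               # multiply by X; reduced on the next pass
    return np.stack(cols, axis=1)

def ring_mul(a, b, g, p):
    return pmod(list(np.convolve(a, b)), g, p)

p, g = 5, [2, 0, 1, 1]                      # toy ring: g = X^3 + X^2 + 2 over Z_5 (our choice)
a_vec = [[1, 3, 0], [2, 2, 4]]              # vectors over R of dimension ell = 2
b_vec = [[4, 0, 2], [0, 1, 1]]

ip = np.zeros(len(g) - 1, dtype=int)        # theta(<a, b>) computed directly in R
for a, b in zip(a_vec, b_vec):
    ip = (ip + ring_mul(a, b, g, p)) % p

Theta = np.concatenate([theta(b, g, p) for b in b_vec])
Psi = [np.concatenate([Rot(a, g, p)[i] for a in a_vec]) for i in range(len(g) - 1)]
assert all(ip[i] == Psi[i] @ Theta % p for i in range(len(g) - 1))
```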

Finally, we state the following lemma that we use to construct \(\textsf {ABE}\) enabling algorithms for \(\textsf {NIPE}\).

Lemma 6

(Homomorphic multiplication [33, 57]) Let \(d \in {\mathbb {N}}\). There exist three efficient deterministic algorithms \((\textsf {PubMult}_d,\textsf {CTMult}_d,\textsf {TrapMult}_d)\) with the following properties:

  • \(\textsf {PubMult}_d(\vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot d}) \rightarrow {\mathbf {B}}_d^\times \in {\mathbb {Z}}_q^{n \times m}\).

  • \(\textsf {CTMult}_d\left( x \in \{ 0,1 \} ^d, \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot d}\right) \rightarrow {\mathbf {c}}_d^\times \in {\mathbb {Z}}_q^m\). Furthermore, we have

    $$\begin{aligned} \left\| {\mathbf {c}}_d^\times - \left( {\mathbf {B}}_d^\times + \prod _{i=1}^{d} x_i \cdot {\mathbf {G}}\right) ^{\top } {\mathbf {s}}\right\| _{\infty } \le m d \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty } \end{aligned}$$

    if \(\vec {{\mathbf {c}}} = (\vec {{\mathbf {B}}} + x \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\) for some \({\mathbf {s}}\in {\mathbb {Z}}_q^n\) and \(\vec {{\mathbf {z}}} \in {\mathbb {Z}}^{m \cdot d}\).

  • \(\textsf {TrapMult}_d( x \in \{ 0,1 \} ^d, \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot d} ) \rightarrow {\mathbf {R}}_d^\times \in {\mathbb {Z}}^{m \times m}\). Furthermore, we have

    $$\begin{aligned} \textsf {PubMult}_d({\mathbf {A}}\vec {{\mathbf {R}}} - x \otimes {\mathbf {G}}) = {\mathbf {A}}{\mathbf {R}}_d^\times - \prod _{i=1}^{d}x_i \cdot {\mathbf {G}}, \end{aligned}$$

    and \(\Vert {\mathbf {R}}_d^\times \Vert _{\infty } \le m d \cdot \Vert \vec {{\mathbf {R}}} \Vert _{\infty }\).

\(\textsf {ABE}\) Enabling Algorithms for \(R^{\textsf {NIPE}}\). We provide \(\textsf {ABE}\) enabling algorithms \((\textsf {Encode}_{\textsf {NIPE}}\), \(\textsf {PubEval}_{\textsf {NIPE}}\), \(\textsf {CTEval}_{\textsf {NIPE}}\), \(\textsf {TrapEval}_{\textsf {NIPE}})\) for \(R^{\textsf {NIPE}}\). The main technicality is bootstrapping our efficient homomorphic computation of inner-products over a small ring \({\mathbb {Z}}_{p}\) from Theorem 5 to inner-products over a large polynomial ring \({\mathcal {R}}\) (or equivalently a large field). As mentioned above, we do this by reducing inner-products of polynomials over \({\mathcal {R}}\) to inner-products of vectors over \({\mathbb {Z}}_{p}\).

Below, we describe our \(\textsf {ABE}\) enabling algorithms for \(R^{\textsf {NIPE}}\). Let \(u(\lambda ) {:}{=}\ell (\lambda ) t(\lambda ) \left\lceil \log {p(\lambda )}\right\rceil \).

  • \(\textsf {Encode}_{\textsf {NIPE}}( \vec {{\mathfrak {b}}} \in {\mathcal {R}}^\ell )\): It outputs the binary representation \({\hat{b}} \in \{ 0,1 \} ^u\) of \(\vec {{\mathfrak {b}}}\).

  • \(\textsf {PubEval}_{\textsf {NIPE}}( \vec {{\mathfrak {a}}} \in {\mathcal {R}}^\ell , \vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot u} )\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {PubIP}_{p,\ell t}\left( \Psi _i(\vec {{\mathfrak {a}}}), \vec {{\mathbf {B}}}\right) \rightarrow \vec {{\mathbf {B}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}} = \left[ {\mathbf {B}}_{\Psi _i(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {B}}_{\Psi _i(\vec {{\mathfrak {a}}}),p-1}^{{{\textsf {I}}}{{\textsf {P}}}} \right] \in {\mathbb {Z}}_q^{n \times m \cdot p} \end{aligned}$$

    for \(i \in [t]\).

  2. It sets \(\vec {{\mathbf {B}}}_{\vec {{\mathfrak {a}}}}' {:}{=}[ {\mathbf {B}}_{\Psi _1(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {B}}_{\Psi _t(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}_q^{n \times m \cdot t}\).

  3. It computes \({\mathbf {B}}_{\vec {{\mathfrak {a}}}} {:}{=}\textsf {PubMult}_t( \vec {{\mathbf {B}}}_{\vec {{\mathfrak {a}}}}' )\) and finally outputs \({\mathbf {B}}_{\vec {{\mathfrak {a}}}} \in {\mathbb {Z}}_q^{n \times m}\).

  • \(\textsf {CTEval}_{\textsf {NIPE}}(\vec {{\mathfrak {a}}} \in {\mathcal {R}}^\ell , \vec {{\mathfrak {b}}} \in {\mathcal {R}}^\ell , \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot u})\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {CTIP}_{p,\ell t}\left( \Psi _i(\vec {{\mathfrak {a}}}),\Theta (\vec {{\mathfrak {b}}}),\vec {{\mathbf {c}}}\right) \rightarrow \vec {{\mathbf {c}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}} = \left[ {\mathbf {c}}_{\Psi _i(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {c}}_{\Psi _i(\vec {{\mathfrak {a}}}),p-1}^{{{\textsf {I}}}{{\textsf {P}}}} \right] \in {\mathbb {Z}}_q^{m \cdot p} \end{aligned}$$

    and \(y_i = \langle \Psi _i(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}}) \rangle \in {\mathbb {Z}}_p\) for \(i \in [t]\).

  2. It sets \(\vec {{\mathbf {c}}}_{\vec {{\mathfrak {a}}}}' {:}{=}[ {\mathbf {c}}_{\Psi _1(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {c}}_{\Psi _t(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}_q^{m \cdot t}\) and \(y {:}{=}\left( [y_1=0],\ldots ,[y_t=0]\right) \).

  3. It computes \({\mathbf {c}}_{\vec {{\mathfrak {a}}}} {:}{=}\textsf {CTMult}_t\left( y,\vec {{\mathbf {c}}}_{\vec {{\mathfrak {a}}}}'\right) \) and \(y' = \prod _{i=1}^{t}[y_i=0]\) and outputs \({\mathbf {c}}_{\vec {{\mathfrak {a}}}} \in {\mathbb {Z}}_q^m\).

  • \(\textsf {TrapEval}_{\textsf {NIPE}}( \vec {{\mathfrak {a}}} \in {\mathcal {R}}^\ell , \vec {{\mathfrak {b}}} \in {\mathcal {R}}^\ell , \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot u} )\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {TrapIP}_{p,\ell t}\left( \Psi _i(\vec {{\mathfrak {a}}}),\Theta (\vec {{\mathfrak {b}}}),\vec {{\mathbf {R}}}\right) \rightarrow \vec {{\mathbf {R}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}} = \left[ {\mathbf {R}}_{\Psi _i(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {R}}_{\Psi _i(\vec {{\mathfrak {a}}}),p-1}^{{{\textsf {I}}}{{\textsf {P}}}} \right] \in {\mathbb {Z}}^{m \times m \cdot p} \end{aligned}$$

    and \(y_i =\langle \Psi _i(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}})\rangle \in {\mathbb {Z}}_p\) for \(i \in [t]\).

  2. It sets \(\vec {{\mathbf {R}}}_{\vec {{\mathfrak {a}}}}' {:}{=}[ {\mathbf {R}}_{\Psi _1(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {R}}_{\Psi _t(\vec {{\mathfrak {a}}}),0}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}^{m \times m \cdot t}\) and \(y {:}{=}\left( [y_1=0],\ldots ,[y_t=0]\right) \).

  3. It computes \({\mathbf {R}}_{\vec {{\mathfrak {a}}}} {:}{=}\textsf {TrapMult}_t( y, \vec {{\mathbf {R}}}_{\vec {{\mathfrak {a}}}}' )\) and \(y' = \prod _{i=1}^{t}[y_i=0]\) and outputs \({\mathbf {R}}_{\vec {{\mathfrak {a}}}} \in {\mathbb {Z}}^{m \times m}\).
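To see the data flow of the above algorithms at the plaintext level, the following Python sketch (our own illustration with a toy ring; the lattice algorithms above track the same values inside matrices and ciphertexts) computes the coordinates \(y_i = \langle \Psi _i(\vec {{\mathfrak {a}}}), \Theta (\vec {{\mathfrak {b}}})\rangle \), the selector \(y = ([y_1 = 0], \ldots , [y_t = 0])\), and \(y' = \prod _{i}[y_i = 0]\).

```python
import numpy as np

def pmod(a, g, p):
    """a mod (g, p): low-degree-first coefficients of length t = deg g (g monic)."""
    a = [c % p for c in a]
    t = len(g) - 1
    while len(a) > t:
        lead, d = a[-1], len(a) - 1 - t
        for i in range(t + 1):
            a[d + i] = (a[d + i] - lead * g[i]) % p
        a.pop()
    return a + [0] * (t - len(a))

def nipe_relation(a_vec, b_vec, g, p):
    """Plaintext mirror of CTEval_NIPE: y_i is the i-th coefficient of <a, b> in R
    (equivalently <Psi_i(a), Theta(b)> mod p), and the output is y' = prod_i [y_i = 0]."""
    t = len(g) - 1
    inner = [0] * t
    for a, b in zip(a_vec, b_vec):
        prod = pmod(list(np.convolve(a, b)), g, p)
        inner = [(u + v) % p for u, v in zip(inner, prod)]
    y = [int(yi == 0) for yi in inner]       # the bit string handed to CTMult_t
    return int(all(y))                       # y' = prod_i [y_i = 0]

p, g = 5, [2, 0, 1, 1]                       # toy ring R = Z_5[X]/<X^3 + X^2 + 2> (ours)
# <a, b> != 0 in R: the relation evaluates to 0 (decryption permitted for NIPE).
assert nipe_relation([[1, 3, 0], [2, 2, 4]], [[4, 0, 2], [0, 1, 1]], g, p) == 0
# <a, b> = 0 in R (here 1*1 + 1*(-1) = 0): the relation evaluates to 1.
assert nipe_relation([[1, 0, 0], [1, 0, 0]], [[1, 0, 0], [4, 0, 0]], g, p) == 1
```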

Lemma 7

The above algorithms \((\textsf {Encode}_{\textsf {NIPE}},\textsf {PubEval}_{\textsf {NIPE}},\textsf {CTEval}_{\textsf {NIPE}},\textsf {TrapEval}_{\textsf {NIPE}})\) are \(mt \cdot (3mu+1)\)-\(\textsf {ABE}\) enabling algorithms for \(R^{\textsf {NIPE}}\).

Proof

We check that all the requirements of Definition 2 are satisfied. Let \(\vec {{\mathbf {B}}} = {\mathbf {A}}\vec {{\mathbf {R}}} - {\hat{b}} \otimes {\mathbf {G}}\) and \(\vec {{\mathbf {c}}} = (\vec {{\mathbf {B}}} + {\hat{b}} \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\). By applying Theorem 5, we have

$$\begin{aligned} \vec {{\mathbf {B}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}}&= {\mathbf {A}}\vec {{\mathbf {R}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}} - \phi _p\left( y_i\right) ^{\top } \otimes {\mathbf {G}}, \\ \vec {{\mathbf {c}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}}&= (\vec {{\mathbf {B}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}} + \phi _p\left( y_i\right) ^{\top } \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}_i, \\ \Vert \vec {{\mathbf {z}}}_i\Vert _{\infty }&\le (3m \ell t \left\lceil \log {p}\right\rceil +1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty } = (3mu+1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty }, \\ \Vert \vec {{\mathbf {R}}}_{\Psi _i(\vec {{\mathfrak {a}}})}^{{{\textsf {I}}}{{\textsf {P}}}}\Vert _{\infty }&\le (3mu+1) \cdot \Vert \vec {{\mathbf {R}}}\Vert _{\infty }. \end{aligned}$$

Next, after step 2 of each algorithm, we have

$$\begin{aligned} \vec {{\mathbf {B}}}_{\vec {{\mathfrak {a}}}}'&= {\mathbf {A}}\vec {{\mathbf {R}}}_{\vec {{\mathfrak {a}}}}' - y \otimes {\mathbf {G}}\text { and } \vec {{\mathbf {c}}}_{\vec {{\mathfrak {a}}}}' = (\vec {{\mathbf {B}}}_{\vec {{\mathfrak {a}}}}' + y \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}', \end{aligned}$$

since \(\phi _p(y_i) = ([y_i = 0], \ldots , [y_i = p-1])^\top \) and \(y = ([y_1 = 0], \ldots , [y_t = 0])\). Finally, by applying Lemma 6, we have

$$\begin{aligned} {\mathbf {B}}_{\vec {{\mathfrak {a}}}}&= {\mathbf {A}}{\mathbf {R}}_{\vec {{\mathfrak {a}}}} - \prod _{i=1}^{t}[y_i=0] \cdot {\mathbf {G}}= {\mathbf {A}}{\mathbf {R}}_{\vec {{\mathfrak {a}}}} - y' \cdot {\mathbf {G}}, \\ {\mathbf {c}}_{\vec {{\mathfrak {a}}}}&= ({\mathbf {B}}_{\vec {{\mathfrak {a}}}} + y' \cdot {\mathbf {G}})^{\top } {\mathbf {s}}+ {\mathbf {z}}'', \\ \Vert {\mathbf {z}}''\Vert _{\infty }&\le mt \cdot \Vert {\mathbf {z}}'\Vert _{\infty } \le mt \cdot (3mu+1) \cdot \Vert {\mathbf {z}}\Vert _{\infty }, \\ \Vert {\mathbf {R}}_{\vec {{\mathfrak {a}}}}\Vert _{\infty }&\le mt \cdot \Vert {\mathbf {R}}_{\vec {{\mathfrak {a}}}}'\Vert _{\infty } \le mt \cdot (3mu+1) \cdot \Vert \vec {{\mathbf {R}}}\Vert _{\infty }. \end{aligned}$$

To complete the proof of Lemma 7, it suffices to show \(y' = R^{\textsf {NIPE}}(\vec {{\mathfrak {a}}},\vec {{\mathfrak {b}}})\). We have

$$\begin{aligned} y' = \prod _{i=1}^{t}[y_i = 0] = \prod _{i=1}^{t}\left[ \left\langle \Psi _i(\vec {{\mathfrak {a}}}),\Theta (\vec {{\mathfrak {b}}})\right\rangle = 0\right] = \prod _{i=1}^{t}\left[ \theta _i\left( \left\langle \vec {{\mathfrak {a}}},\vec {{\mathfrak {b}}}\right\rangle \right) = 0\right] , \end{aligned}$$

where \(\theta _i(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) \in {\mathbb {Z}}_p\) is the i-th coordinate of \(\theta (\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) \in {\mathbb {Z}}_p^t\). Then, we consider the following cases.

  • If \(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle \ne 0\), then there exists at least one index \(i \in [t]\) such that \(\theta _i(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) \ne 0\), and for such an index we have \([\theta _i(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) = 0] = 0\). Hence, we have

    $$\begin{aligned} \prod _{i=1}^{t}\left[ \theta _i\left( \left\langle \vec {{\mathfrak {a}}},\vec {{\mathfrak {b}}}\right\rangle \right) = 0\right] = 0. \end{aligned}$$
  • If \(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle = 0\), then we have \(\theta _i(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) = 0\) and \([\theta _i(\langle \vec {{\mathfrak {a}}}, \vec {{\mathfrak {b}}}\rangle ) = 0] = 1\) for all \(i \in [t]\). Hence, we have

    $$\begin{aligned} \prod _{i=1}^{t}\left[ \theta _i\left( \left\langle \vec {{\mathfrak {a}}},\vec {{\mathfrak {b}}}\right\rangle \right) = 0\right] = 1. \end{aligned}$$

Therefore, we obtain \(y' = R^{\textsf {NIPE}}(\vec {{\mathfrak {a}}},\vec {{\mathfrak {b}}})\). \(\square \)

4.3 \(\textsf {ABE}\) enabling algorithms for \(\textsf {IBR}\)

Here, we show \(\textsf {ABE}\) enabling algorithms for \(\textsf {IBR}\) relations. First, we give the definition of \(\textsf {IBR}\).

Definition 4

Let \({\mathcal {I}}\) be an identity space. An \(\textsf {IBR}\) with the maximal bound \(\ell \) on the number of revoked identities per ciphertext is an \(\textsf {ABE}\) for \(R^{\textsf {IBR}}:{\mathcal {I}} \times {\mathcal {I}}^{<\ell } \rightarrow \{ 0,1 \} \) defined by \(R^{\textsf {IBR}}({{\textsf {I}}}{{\textsf {D}}},{\mathcal {S}}) = 0\) iff \({{\textsf {I}}}{{\textsf {D}}}\notin {\mathcal {S}}\), where \({\mathcal {I}}^{<\ell } {:}{=}\{ {\mathcal {S}} \mid {\mathcal {S}} \subseteq {\mathcal {I}}, \left|{\mathcal {S}}\right| < \ell \}\) for \(\ell \le \left|{\mathcal {I}}\right|\).

As was shown by Attrapadung and Libert [6], \(R^{\textsf {IBR}}\) can be expressed by \(R^{\textsf {NIPE}}\) by appropriately encoding an identity set and an identity into vectors. We recall their encoding in the following. Let \({\mathbb {F}}\) be a finite field and \(\ell \in {\mathbb {N}}\). We consider relations \(R^{\textsf {NIPE}}:{\mathbb {F}}^\ell \times {\mathbb {F}}^\ell \rightarrow \{ 0,1 \} \) and \(R^{\textsf {IBR}}:{\mathbb {F}}\times {\mathbb {F}}^{<\ell } \rightarrow \{ 0,1 \} \). For an identity \({{\textsf {I}}}{{\textsf {D}}}\in {\mathbb {F}}\), we set a vector \({\mathbf {x}}= (x_1,\ldots ,x_\ell )^{\top } \in {\mathbb {F}}^\ell \) with \(x_i = {{\textsf {I}}}{{\textsf {D}}}^{i-1}\). For a set \({\mathcal {S}} = \{ {{\textsf {I}}}{{\textsf {D}}}_1,\ldots ,{{\textsf {I}}}{{\textsf {D}}}_{\ell '} \} \in {\mathbb {F}}^{<\ell }\), we set a vector \({\mathbf {y}}= (y_1,\ldots ,y_\ell )^{\top } \in {\mathbb {F}}^\ell \) as a coefficient vector from

$$\begin{aligned} P_{{\mathcal {S}}}(Z) = \sum _{i=1}^{\ell '+1}{y_i Z^{i-1}} = \prod _{{{\textsf {I}}}{{\textsf {D}}}_j\in {\mathcal {S}}}^{}(Z - {{\textsf {I}}}{{\textsf {D}}}_j), \end{aligned}$$

where if \(\ell '+1 < \ell \) the coefficients \(y_{\ell '+2},\ldots ,y_\ell \) are set to 0. The inner-product \(\langle {\mathbf {x}}, {\mathbf {y}}\rangle \) will be non-zero if and only if \(P_{{\mathcal {S}}}({{\textsf {I}}}{{\textsf {D}}}) \ne 0\) or equivalently \({{\textsf {I}}}{{\textsf {D}}}\notin {\mathcal {S}}\) as desired. By combining the above encoding with the enabling algorithm for \(R^{\textsf {NIPE}}\) in Lemma 7, we obtain the following lemma.
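The encoding is easy to state in code. The following Python sketch (our own illustration; for simplicity it works over a prime field \({\mathbb {Z}}_p\) rather than \(\mathrm {GF}(p^t)\), and the function names are ours) builds \({\mathbf {x}}\) from \({{\textsf {I}}}{{\textsf {D}}}\) and \({\mathbf {y}}\) from a revoked set \({\mathcal {S}}\), and checks that \(\langle {\mathbf {x}}, {\mathbf {y}}\rangle = P_{{\mathcal {S}}}({{\textsf {I}}}{{\textsf {D}}})\) is non-zero exactly when \({{\textsf {I}}}{{\textsf {D}}}\notin {\mathcal {S}}\).

```python
def encode_id(identity, ell, p):
    """x = (1, ID, ID^2, ..., ID^{ell-1}) over Z_p."""
    return [pow(identity, i, p) for i in range(ell)]

def encode_set(S, ell, p):
    """Low-degree-first coefficients of P_S(Z) = prod_{ID in S}(Z - ID), padded to length ell."""
    y = [1]
    for rid in S:   # multiply the current polynomial by (Z - rid)
        y = [(-rid * y[0]) % p] + \
            [(y[i - 1] - rid * y[i]) % p for i in range(1, len(y))] + \
            [y[-1] % p]
    return y + [0] * (ell - len(y))

p, ell = 97, 4                                  # toy prime field and dimension (ours)
S = [12, 30, 45]                                # revoked identities, |S| < ell
y = encode_set(S, ell, p)
for identity in [12, 30, 45, 7, 88]:
    x = encode_id(identity, ell, p)
    ip = sum(a * b for a, b in zip(x, y)) % p   # <x, y> = P_S(ID) mod p
    assert (ip != 0) == (identity not in S)     # non-zero iff ID is not revoked
```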

Lemma 8

There exist \(mt \cdot (3m\ell t \left\lceil \log {p}\right\rceil +1)\)-\(\textsf {ABE}\) enabling algorithms for \(R^{\textsf {IBR}}:\mathrm {GF}(p^t) \times \mathrm {GF}(p^t)^{<\ell } \rightarrow \{ 0,1 \} \) defined by \(R^{\textsf {IBR}}({{\textsf {I}}}{{\textsf {D}}},{\mathcal {S}}) = 0\) iff \({{\textsf {I}}}{{\textsf {D}}}\notin {\mathcal {S}}\).

4.4 \(\textsf {ABE}\) enabling algorithms for \(\textsf {FIBE}\)

\(\textsf {FIBE}\) is a specific type of \(\textsf {ABE}\) for a relation \(R^{\textsf {FIBE}}\) defined as follows:

$$\begin{aligned} R^{\textsf {FIBE}}({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}') = 0 \text { if and only if } {{\textsf {H}}}{{\textsf {D}}}( {{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}' ) \le d, \end{aligned}$$

where \({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell \) are strings and d is some pre-determined threshold smaller than \(\ell \). Recall that, as in the construction sketched in Sect. 4.1, the value 0 of the relation corresponds to successful decryption, so a key can decrypt a ciphertext exactly when the two identities are within Hamming distance d.

Here, we show \(\textsf {ABE}\) enabling algorithms for \(\textsf {FIBE}\) relations. First, we give the definition of \(\textsf {FIBE}\).

Definition 5

Let \({\mathcal {I}}\) be an identity space and \(\ell \in {\mathbb {N}}\). An \(\textsf {FIBE}\) is an \(\textsf {ABE}\) for \(R^{\textsf {FIBE}}:\left( {\mathcal {I}}^{\ell } \times [0,\ell ]\right) \times {\mathcal {I}}^{\ell } \rightarrow \{ 0,1 \} \) defined by \(R^{\textsf {FIBE}}(({{\textsf {I}}}{{\textsf {D}}},d),{{\textsf {I}}}{{\textsf {D}}}') = 0\) iff \({{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}') \le d\), where \({{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}}, {{\textsf {I}}}{{\textsf {D}}}')\) is the Hamming distance between \({{\textsf {I}}}{{\textsf {D}}}\) and \({{\textsf {I}}}{{\textsf {D}}}'\).

Before describing our \(\textsf {ABE}\) enabling algorithms for \(R^{\textsf {FIBE}}\), we state the following lemma.

Lemma 9

(Homomorphic Hamming distance) There exist three efficient algorithms \((\textsf {PubHD}_{\ell },\textsf {CTHD}_{\ell },\textsf {TrapHD}_{\ell })\) with the following properties:

  • \(\textsf {PubHD}_{\ell }(y \in \{ 0,1 \} ^{\ell }, \vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot \ell }) \rightarrow \vec {{\mathbf {B}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} \in {\mathbb {Z}}_q^{n \times m \cdot (\ell +1)}\).

  • \(\textsf {CTHD}_{\ell }(x \in \{ 0,1 \} ^\ell , y \in \{ 0,1 \} ^\ell , \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot \ell }) \rightarrow \vec {{\mathbf {c}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} \in {\mathbb {Z}}_q^{m \cdot (\ell +1)}\). Furthermore, we have

    $$\begin{aligned} \Vert \vec {{\mathbf {c}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} - \left( \vec {{\mathbf {B}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} + \phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}(x,y)\right) ^{\top } \otimes {\mathbf {G}}\right) ^{\top } {\mathbf {s}}\Vert _{\infty } \le (3m\ell +1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty } \end{aligned}$$

    if \(\vec {{\mathbf {c}}} = (\vec {{\mathbf {B}}} + x \otimes {\mathbf {G}})^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\) for some \({\mathbf {s}}\in {\mathbb {Z}}_q^n\) and \(\vec {{\mathbf {z}}} \in {\mathbb {Z}}^{m \cdot \ell }\).

  • \(\textsf {TrapHD}_{\ell }(x \in \{ 0,1 \} ^\ell , y \in \{ 0,1 \} ^\ell , \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot \ell }) \rightarrow \vec {{\mathbf {R}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} \in {\mathbb {Z}}^{m \times m \cdot (\ell +1)}\). Furthermore, we have

    $$\begin{aligned} \textsf {PubHD}_{\ell }({\mathbf {A}}\vec {{\mathbf {R}}} - x \otimes {\mathbf {G}}) = {\mathbf {A}}\vec {{\mathbf {R}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} - \phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}(x,y)\right) ^{\top } \otimes {\mathbf {G}}, \end{aligned}$$

    and \(\Vert \vec {{\mathbf {R}}}_{y}^{{{\textsf {H}}}{{\textsf {D}}}} \Vert _{\infty } \le (3 m \ell + 1) \cdot \Vert \vec {{\mathbf {R}}} \Vert _{\infty }\).

Proof

We can compute \(\phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}(x,y)\right) \) by computing the function

$$\begin{aligned} f_{y}^{{{\textsf {H}}}{{\textsf {D}}}}(x) = \prod _{i=1}^{\ell }\left( (1-x_i) \cdot \phi _{\ell +1}(y_i) + x_i \cdot \phi _{\ell +1}(1-y_i)\right) , \end{aligned}$$

because

$$\begin{aligned} f_{y}^{{{\textsf {H}}}{{\textsf {D}}}}(x)&= \prod _{i=1}^{\ell }\left( (1-x_i) \cdot \phi _{\ell +1}(y_i) + x_i \cdot \phi _{\ell +1}(1-y_i)\right) \\&= \prod _{i=1}^{\ell }\phi _{\ell +1}([x_i \ne y_i]) \\&= \phi _{\ell +1}\left( \sum _{i=1}^{\ell }[x_i \ne y_i]\right) \\&= \phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}(x,y)\right) , \end{aligned}$$

where the second equality follows from the fact that

$$\begin{aligned} \phi _{\ell +1}([x_i \ne y_i])&= {\left\{ \begin{array}{ll} \phi _{\ell +1}([y_i = 1]) &{}\text { if } x_i = 0 \\ \phi _{\ell +1}([y_i = 0]) &{}\text { if } x_i = 1 \end{array}\right. } \\&= {\left\{ \begin{array}{ll} \phi _{\ell +1}(y_i) &{}\text { if } x_i = 0 \\ \phi _{\ell +1}(1-y_i) &{}\text { if } x_i = 1 \end{array}\right. } \\&= (1-x_i) \cdot \phi _{\ell +1}(y_i) + x_i \cdot \phi _{\ell +1}(1-y_i). \end{aligned}$$

The third equality uses the homomorphism property of \(\phi _{\ell +1}\) (as in Sect. 3.2), and the last one is simply the definition of the Hamming distance. Similarly to the function \(f_{{\mathbf {y}}}^{{{\textsf {I}}}{{\textsf {P}}}}(\cdot )\) in Sect. 3.2, the function \(f_{y}^{{{\textsf {H}}}{{\textsf {D}}}}(\cdot )\) (for a fixed y) can be seen as a branching program. Hence, using the evaluation algorithms in [33], we obtain the statements in this lemma. \(\square \)
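A plaintext-level Python sketch of \(f_{y}^{{{\textsf {H}}}{{\textsf {D}}}}\) (our own illustration, not the lattice evaluation): the state is a one-hot vector of length \(\ell +1\), and the i-th step shifts it by one position exactly when \(x_i \ne y_i\), so the final state is \(\phi _{\ell +1}({{\textsf {H}}}{{\textsf {D}}}(x,y))\).

```python
import numpy as np

def phi(mod, v):
    """Unit-vector encoding of v over Z_mod."""
    e = np.zeros(mod, dtype=int)
    e[v % mod] = 1
    return e

def f_hd(x, y):
    """Branching-program view of f_y^HD(x) = phi_{ell+1}(HD(x, y)):
    the state counts mismatches, and step i shifts it by [x_i != y_i]."""
    state = phi(len(x) + 1, 0)
    for xi, yi in zip(x, y):
        state = np.roll(state, int(xi != yi))   # cyclic shift by 0 or 1
    return state

x = [1, 0, 1, 1, 0, 0]
y = [1, 1, 0, 1, 0, 1]
hd = sum(int(a != b) for a, b in zip(x, y))
assert np.array_equal(f_hd(x, y), phi(len(x) + 1, hd))
```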

\(\textsf {ABE}\) Enabling Algorithms for \(R^{\textsf {FIBE}}\). We provide \(\textsf {ABE}\) enabling algorithms \((\textsf {Encode}_{\textsf {FIBE}}\), \(\textsf {PubEval}_{\textsf {FIBE}}\), \(\textsf {CTEval}_{\textsf {FIBE}}\), \(\textsf {TrapEval}_{\textsf {FIBE}})\) for \(R^{\textsf {FIBE}}\). We set \(u(\lambda ) = \ell (\lambda )\).

  • \(\textsf {Encode}_{\textsf {FIBE}}({{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell )\): It outputs \({{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell = \{ 0,1 \} ^u\).

  • \(\textsf {PubEval}_{\textsf {FIBE}}( ({{\textsf {I}}}{{\textsf {D}}}, d) \in \{ 0,1 \} ^\ell \times [0, \ell ], \vec {{\mathbf {B}}} \in {\mathbb {Z}}_q^{n \times m \cdot u} )\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {PubHD}_{\ell }({{\textsf {I}}}{{\textsf {D}}}, \vec {{\mathbf {B}}}) \rightarrow \vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}} = \left[ {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{{{\textsf {H}}}{{\textsf {D}}}} \vert \vert \cdots \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},\ell }^{{{\textsf {H}}}{{\textsf {D}}}} \right] \in {\mathbb {Z}}_q^{n \times m \cdot (\ell +1)}. \end{aligned}$$
  2. It then computes \({\mathbf {B}}_{({{\textsf {I}}}{{\textsf {D}}},d)} = \sum _{i=d+1}^{\ell }{{\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{{{\textsf {H}}}{{\textsf {D}}}}} \in {\mathbb {Z}}_q^{n \times m}\) and outputs \({\mathbf {B}}_{({{\textsf {I}}}{{\textsf {D}}},d)}\).

  • \(\textsf {CTEval}_{\textsf {FIBE}}(({{\textsf {I}}}{{\textsf {D}}},d) \in \{ 0,1 \} ^\ell \times [0, \ell ], {{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell , \vec {{\mathbf {c}}} \in {\mathbb {Z}}_q^{m \cdot u})\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {CTHD}_{\ell }({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}',\vec {{\mathbf {c}}}) \rightarrow \vec {{\mathbf {c}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}} = \left[ {\mathbf {c}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{{{\textsf {H}}}{{\textsf {D}}}} \vert \vert \cdots \vert \vert {\mathbf {c}}_{{{\textsf {I}}}{{\textsf {D}}},\ell }^{{{\textsf {H}}}{{\textsf {D}}}} \right] \in {\mathbb {Z}}_q^{m \cdot (\ell +1)}. \end{aligned}$$
  2. It then computes \({\mathbf {c}}_{({{\textsf {I}}}{{\textsf {D}}},d)} = \sum _{i=d+1}^{\ell }{{\mathbf {c}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{{{\textsf {H}}}{{\textsf {D}}}}} \in {\mathbb {Z}}_q^m\) and outputs \({\mathbf {c}}_{({{\textsf {I}}}{{\textsf {D}}},d)}\).

  • \(\textsf {TrapEval}_{\textsf {FIBE}}( ({{\textsf {I}}}{{\textsf {D}}}, d) \in \{ 0,1 \} ^\ell \times [0, \ell ], {{\textsf {I}}}{{\textsf {D}}}' \in \{ 0,1 \} ^\ell , \vec {{\mathbf {R}}} \in {\mathbb {Z}}^{m \times m \cdot u} )\): It proceeds as follows:

  1. It computes

    $$\begin{aligned} \textsf {TrapHD}_{\ell }({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}',\vec {{\mathbf {R}}}) \rightarrow \vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}} = \left[ {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{{{\textsf {H}}}{{\textsf {D}}}} \vert \vert \cdots \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},\ell }^{{{\textsf {H}}}{{\textsf {D}}}} \right] \in {\mathbb {Z}}^{m \times m \cdot (\ell +1)}. \end{aligned}$$
  2. It then computes \({\mathbf {R}}_{({{\textsf {I}}}{{\textsf {D}}},d)} = \sum _{i=d+1}^{\ell }{{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{{{\textsf {H}}}{{\textsf {D}}}}} \in {\mathbb {Z}}^{m \times m}\) and outputs \({\mathbf {R}}_{({{\textsf {I}}}{{\textsf {D}}},d)}\).
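The summation in step 2 of the above algorithms is just a coordinate-wise way of testing \({{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') > d\); a minimal plaintext-level Python sketch (ours) follows.

```python
def fibe_relation(id1, id2, d):
    """Plaintext mirror of the FIBE algorithms: sum the coordinates d+1, ..., ell of
    phi_{ell+1}(HD(id1, id2)); the result is [HD > d], i.e., R^FIBE((id1, d), id2)."""
    ell = len(id1)
    hd = sum(int(a != b) for a, b in zip(id1, id2))
    phi = [int(hd == i) for i in range(ell + 1)]     # unit-vector encoding of HD
    return sum(phi[d + 1:])                          # equals 1 iff HD > d

id1 = [1, 0, 1, 1, 0, 0]
id2 = [1, 1, 0, 1, 0, 1]                             # Hamming distance 3
assert fibe_relation(id1, id2, d=3) == 0             # HD <= d: relation is 0 (decryptable)
assert fibe_relation(id1, id2, d=2) == 1             # HD  > d: relation is 1
```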

Lemma 10

The above algorithms \((\textsf {Encode}_{\textsf {FIBE}},\textsf {PubEval}_{\textsf {FIBE}},\textsf {CTEval}_{\textsf {FIBE}},\textsf {TrapEval}_{\textsf {FIBE}})\) are \((\ell - d) \cdot (3mu+1)\)-\(\textsf {ABE}\) enabling algorithms for \(R^{\textsf {FIBE}}\).

Proof

We prove that the algorithms \((\textsf {Encode}_{\textsf {FIBE}},\textsf {PubEval}_{\textsf {FIBE}},\textsf {CTEval}_{\textsf {FIBE}},\textsf {TrapEval}_{\textsf {FIBE}})\) in Sect. 4.4 satisfy the desired property in Definition 2. Let \(\vec {{\mathbf {B}}} = {\mathbf {A}}\vec {{\mathbf {R}}} - {{\textsf {I}}}{{\textsf {D}}}' \otimes {\mathbf {G}}\) and \(\vec {{\mathbf {c}}} = \left( \vec {{\mathbf {B}}} + {{\textsf {I}}}{{\textsf {D}}}' \otimes {\mathbf {G}}\right) ^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}\). By applying Lemma 9, we have

$$\begin{aligned} \vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}}&= {\mathbf {A}}\vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}} - \phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}')\right) ^{\top } \otimes {\mathbf {G}}, \\ \vec {{\mathbf {c}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}}&= \left( \vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}} + \phi _{\ell +1}\left( {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}')\right) ^{\top } \otimes {\mathbf {G}}\right) ^{\top } {\mathbf {s}}+ \vec {{\mathbf {z}}}', \\ \Vert \vec {{\mathbf {z}}}'\Vert _{\infty }&\le (3m\ell +1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty } = (3mu+1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty }, \\ \Vert \vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}}\Vert _{\infty }&\le (3mu+1) \cdot \Vert \vec {{\mathbf {R}}}\Vert _{\infty }. \end{aligned}$$

Furthermore, after step 2 of each evaluation algorithm, we have

$$\begin{aligned} {\mathbf {B}}_{({{\textsf {I}}}{{\textsf {D}}},d)}&= {\mathbf {A}}{\mathbf {R}}_{({{\textsf {I}}}{{\textsf {D}}},d)} - \underbrace{\sum _{i=d+1}^{\ell }\phi _{\ell +1, i}\left( {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}')\right) }_{{=}{:}y} \cdot {\mathbf {G}}, \\ {\mathbf {c}}_{({{\textsf {I}}}{{\textsf {D}}},d)}&= \left( {\mathbf {B}}_{({{\textsf {I}}}{{\textsf {D}}},d)} + y \cdot {\mathbf {G}}\right) ^{\top } {\mathbf {s}}+ {\mathbf {z}}'', \\ \Vert {\mathbf {z}}''\Vert _{\infty }&\le (\ell - d) \cdot \Vert \vec {{\mathbf {z}}}'\Vert _{\infty } \le (\ell - d) \cdot (3mu+1) \cdot \Vert \vec {{\mathbf {z}}}\Vert _{\infty }, \\ \Vert {\mathbf {R}}_{({{\textsf {I}}}{{\textsf {D}}},d)}\Vert _{\infty }&\le (\ell - d) \cdot \Vert \vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{{{\textsf {H}}}{{\textsf {D}}}}\Vert _{\infty } \le (\ell - d) \cdot (3mu+1) \cdot \Vert \vec {{\mathbf {R}}}\Vert _{\infty }. \end{aligned}$$

To complete the proof of Lemma 10, it suffices to show \(y = R^{\textsf {FIBE}}\left( ({{\textsf {I}}}{{\textsf {D}}},d),{{\textsf {I}}}{{\textsf {D}}}'\right) \). We have

$$\begin{aligned} y = \sum _{i=d+1}^{\ell }\phi _{\ell +1,i}\left( {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}')\right) = \sum _{i=d+1}^{\ell }\left[ {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') = i\right] \end{aligned}$$

from the property of \(\phi _{\ell +1}\). Then, we consider the following cases.

  • If \({{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') \le d\), then we have \(\left[ {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') = i\right] = 0\) for all \(i \in [d+1,\ell ]\). Hence, we have

    $$\begin{aligned} \sum _{i=d+1}^{\ell }\left[ {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') = i\right] = 0. \end{aligned}$$
  • If \({{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') > d\), then there exists exactly one index \(i \in [d+1,\ell ]\) such that \(\left[ {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') = i\right] = 1\). Hence, we have

    $$\begin{aligned} \sum _{i=d+1}^{\ell }\left[ {{\textsf {H}}}{{\textsf {D}}}({{\textsf {I}}}{{\textsf {D}}},{{\textsf {I}}}{{\textsf {D}}}') = i\right] = 1. \end{aligned}$$

Therefore, we obtain \(y = R^{\textsf {FIBE}}\left( ({{\textsf {I}}}{{\textsf {D}}},d),{{\textsf {I}}}{{\textsf {D}}}'\right) \). \(\square \)

5 Tightly secure lattice-based primitives

In this section, we propose a new construction of a tightly secure \(\textsf {IBE}\) scheme from lattices using the toolset from Theorem 5. Compared with the previous construction by Boyen and Li [16], which computes a \(\textsf {PRF}\) via Barrington’s theorem, our construction is much more efficient since it does not involve the step of converting an \({\textbf {NC}}^1\) circuit into an equivalent branching program. The main technical difference is how we compute the \(\textsf {PRF} \); otherwise, the high-level structure of our construction is identical to [16]. Regarding the security of the scheme, other than the \(\textsf {poly}\)-\(\textsf {LWE}\) assumption, we additionally assume the security of the recently proposed \(\textsf {PRF} \) by Boneh et al. [15], whereas [16] relies additionally on the security of a lattice-based \(\textsf {PRF} \) based on \(\textsf {superpoly}\)-\(\textsf {LWE}\). The description of the \(\textsf {PRF} \) by Boneh et al. [15] follows.

Boneh et al.’s Candidate PRF [15]. Let \(\kappa = \kappa (\lambda )\), \(\ell = \ell (\lambda )\), and \(\eta = \eta (\lambda )\) be positive integers. Consider a \(\textsf {PRF}:{\mathbb {Z}}_2^{\kappa \times 2\eta } \times {\mathbb {Z}}_2^\ell \rightarrow {\mathbb {Z}}_3\) with key space \({\mathcal {K}}_{\lambda } = {\mathbb {Z}}_2^{\kappa \times 2\eta }\), input space \({\mathcal {X}}_{\lambda } = {\mathbb {Z}}_2^\ell \), and output space \({\mathcal {Y}}_{\lambda } = {\mathbb {Z}}_3\). Let \({\mathbf {H}}\in {\mathbb {Z}}_3^{\eta \times \ell }\) be a fixed public matrix, \(\textsf {bin}:{\mathbb {Z}}_3^\eta \rightarrow {\mathbb {Z}}_2^{2\eta }\) be the component-wise binary decomposition function (that maps each \({\mathbb {Z}}_3\) component into two bits corresponding to the binary representation of the component), and \(\textsf {map}:{\mathbb {Z}}_2^\kappa \rightarrow {\mathbb {Z}}_3\) be the function that maps \(y \in {\mathbb {Z}}_2^\kappa \mapsto \sum _{i \in [\kappa ]}{y_i} \bmod 3\). For a key \({\mathbf {K}} \in {\mathbb {Z}}_2^{\kappa \times 2\eta }\) and an input \({\mathbf {x}}\in {\mathbb {Z}}_2^\ell \), their \(\textsf {PRF}\) is defined as

$$\begin{aligned} \textsf {PRF}_{{\mathbf {H}}}({\mathbf {K}}, {\mathbf {x}}) {:}{=}\textsf {map}({\mathbf {K}} \cdot \textsf {bin}({\mathbf {H}}\cdot {\mathbf {x}})). \end{aligned}$$

For simplicity, we will often drop the subscript \({\mathbf {H}}\) on \(\textsf {PRF}\). Boneh et al. [15] provide cryptanalysis supporting the plausibility of the security of their \(\textsf {PRF}\) when \(\kappa \), \(\ell \), and \(\eta \) are all \(O(\lambda )\).
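Concretely, the candidate \(\textsf {PRF}\) is a mod-2 matrix-vector product sandwiched between a public mod-3 hash and a mod-3 sum. The following Python sketch (our own illustration; the toy sizes and the bit ordering inside \(\textsf {bin}\) are our choices) spells this out.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, ell, eta = 16, 24, 16                 # toy sizes; [15] takes all of them O(lambda)
H = rng.integers(0, 3, size=(eta, ell))      # fixed public matrix over Z_3
K = rng.integers(0, 2, size=(kappa, 2 * eta))

def bin3(v):
    """Component-wise two-bit binary decomposition of a Z_3 vector."""
    return np.concatenate([[b & 1, (b >> 1) & 1] for b in v])

def prf(K, x):
    """PRF_H(K, x) = map(K * bin(H x)): a mod-2 product followed by a mod-3 sum."""
    z = bin3(H @ x % 3)                      # in Z_2^{2*eta}
    y = K @ z % 2                            # in Z_2^{kappa}
    return int(y.sum() % 3)                  # map: sum the kappa bits mod 3

x = rng.integers(0, 2, size=ell)
print(prf(K, x))                             # a value in {0, 1, 2}
```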

5.1 Embedding \(\textsf {PRF}\) into matrices

Here, we describe how to homomorphically evaluate \(\textsf {PRF}\). For a \(\textsf {PRF}\) key \({\mathbf {K}} = \left( {\mathbf {k}}_1, \ldots , {\mathbf {k}}_\kappa \right) ^{\top } \in {\mathbb {Z}}_2^{\kappa \times 2\eta }\) and an input \({\mathbf {x}}\in {\mathbb {Z}}_2^{\ell }\), the \(\textsf {PRF}({\mathbf {K}},{\mathbf {x}})\) can be computed as follows:

  1. \({\mathbf {z}}= \textsf {bin}({\mathbf {H}}\cdot {\mathbf {x}}) \in {\mathbb {Z}}_2^{2\eta }\),

  2. \(y_i = \langle {\mathbf {k}}_i, {\mathbf {z}}\rangle \bmod 2 \in {\mathbb {Z}}_2\) for \(i \in [\kappa ]\),

  3. \(y' = \sum _{i\in [\kappa ]}^{}{y_i} \bmod 3 = \langle {\mathbf {y}}, {\mathbf {1}}_\kappa \rangle \bmod 3 \in {\mathbb {Z}}_3\), where \({\mathbf {y}}= (y_1,\ldots ,y_\kappa )^{\top } \in {\mathbb {Z}}_2^\kappa \).

This means that the \(\textsf {PRF}\) can be computed by computing inner-products over \({\mathbb {Z}}_2\) and \({\mathbb {Z}}_3\). Our main observation is that the \(\textsf {PRF}\) can be computed sequentially by two separate short branching programs and that each branching program can be computed using the algorithms in Theorem 5. However, we note that unlike circuits, branching programs are in general not closed under sequential composition since the input and output have different representations. Therefore, we need to encode the output of the first branching program in a particular manner so that it is compatible with the encoding of the input to the second branching program. We briefly explain how to do this. When the first inner-product \(y_i = \langle {\mathbf {k}}_i, {\mathbf {z}}\rangle \bmod 2\) is computed directly as a branching program, the output is the unit-vector representation \(\phi _2(y_i) = \left( [y_i = 0], [y_i = 1]\right) ^{\top }\) of \(y_i\). Then, we have \([y_i = 1] = y_i\). Therefore, we can use \(([y_1 = 1], \ldots , [y_\kappa = 1])^\top = (y_1, \ldots , y_\kappa )^\top = {\mathbf {y}}\) directly as the input of the second branching program to compute the second inner-product over \({\mathbb {Z}}_3\).
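The following Python sketch (our own illustration, with the same toy sizes as before) spells out this two-stage evaluation and the gluing step \([y_i = 1] = y_i\); \(\textsf {PubPRF}\) and \(\textsf {TrapPRF}\) below perform the same computation with \(\textsf {PubIP}\)/\(\textsf {TrapIP}\) in place of the plaintext inner-products.

```python
import numpy as np

rng = np.random.default_rng(2)
kappa, ell, eta = 16, 24, 16                     # toy sizes (ours)
H = rng.integers(0, 3, size=(eta, ell))
K = rng.integers(0, 2, size=(kappa, 2 * eta))
x = rng.integers(0, 2, size=ell)

def bin3(v):
    return np.concatenate([[b & 1, (b >> 1) & 1] for b in v])

z = bin3(H @ x % 3)

# Stage 1: one short branching program per key row, with unit-vector output phi_2(y_i).
phi2 = []
for k_i in K:
    y_i = int(k_i @ z % 2)
    phi2.append([int(y_i == 0), int(y_i == 1)])

# Gluing step: [y_i = 1] = y_i, so the second coordinates already form the Z_2 vector y.
y = np.array([e[1] for e in phi2])

# Stage 2: a length-kappa branching program computing <y, 1_kappa> mod 3.
out = int(y.sum() % 3)

assert out == int((K @ z % 2).sum() % 3)         # matches the direct definition of the PRF
```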

Below, we provide the two (deterministic) algorithms \((\textsf {PubPRF},\textsf {TrapPRF})\) that homomorphically compute the \(\textsf {PRF}\), using the algorithms in Theorem 5 as building blocks.

  • \(\textsf {PubPRF}( {\mathbf {x}}\in {\mathbb {Z}}_2^\ell , \{ \vec {{\mathbf {B}}}_i \in {\mathbb {Z}}_q^{n \times m \cdot 2\eta } \}_{i \in [\kappa ]} ):\) It proceeds as follows:

  1. It first computes \({\mathbf {z}}= \textsf {bin}({\mathbf {H}}\cdot {\mathbf {x}}) \in {\mathbb {Z}}_2^{2\eta }\).

  2. It computes

    $$\begin{aligned} \textsf {PubIP}_{2,2\eta }( {\mathbf {z}}, \vec {{\mathbf {B}}}_i ) \rightarrow \vec {{\mathbf {B}}}_i^{{{\textsf {I}}}{{\textsf {P}}}} = [ {\mathbf {B}}_{i,0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert {\mathbf {B}}_{i,1}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}_q^{n \times m \cdot 2} \end{aligned}$$

    for \(i \in [\kappa ]\) and sets

    $$\begin{aligned} \vec {{\mathbf {B}}}' = [ {\mathbf {B}}_{1,1}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {B}}_{\kappa ,1}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}_q^{n \times m \cdot \kappa }. \end{aligned}$$
  3. It then computes

    $$\begin{aligned} \textsf {PubIP}_{3,\kappa }({\mathbf {1}}_\kappa , \vec {{\mathbf {B}}}') \rightarrow \vec {{\mathbf {B}}}_{{\mathbf {x}}}^{\textsf {PRF}} \in {\mathbb {Z}}_q^{n \times m \cdot 3} \end{aligned}$$

    and outputs \(\vec {{\mathbf {B}}}_{{\mathbf {x}}}^{\textsf {PRF}}\).

  • \(\textsf {TrapPRF}( {\mathbf {x}}\in {\mathbb {Z}}_2^\ell , {\mathbf {K}} \in {\mathbb {Z}}_2^{\kappa \times 2\eta }, \{ \vec {{\mathbf {R}}}_i \in {\mathbb {Z}}^{m \times m \cdot 2\eta } \}_{i \in [\kappa ]} ):\) It proceeds as follows:

  1. It first computes \({\mathbf {z}}= \textsf {bin}({\mathbf {H}}\cdot {\mathbf {x}}) \in {\mathbb {Z}}_2^{2\eta }\).

  2. It computes

    $$\begin{aligned} \textsf {TrapIP}_{2,2\eta }( {\mathbf {z}}, {\mathbf {k}}_i, \vec {{\mathbf {R}}}_i ) \rightarrow \vec {{\mathbf {R}}}_i^{{{\textsf {I}}}{{\textsf {P}}}} = [ {\mathbf {R}}_{i,0}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert {\mathbf {R}}_{i,1}^{{{\textsf {I}}}{{\textsf {P}}}} ] \in {\mathbb {Z}}^{m \times m \cdot 2} \end{aligned}$$

    and \(y_i = \langle {\mathbf {k}}_i, {\mathbf {z}}\rangle \bmod 2 \in {\mathbb {Z}}_2\) for \(i \in [\kappa ]\), and sets

    $$\begin{aligned} \vec {{\mathbf {R}}}' {:}{=}\left[ {\mathbf {R}}_{1,1}^{{{\textsf {I}}}{{\textsf {P}}}} \vert \vert \cdots \vert \vert {\mathbf {R}}_{\kappa ,1}^{{{\textsf {I}}}{{\textsf {P}}}}\right] \in {\mathbb {Z}}^{m \times m \cdot \kappa } \end{aligned}$$

    and \({\mathbf {y}}{:}{=}(y_1,\ldots ,y_\kappa )^\top \in {\mathbb {Z}}_2^\kappa \).

  3. It then computes

    $$\begin{aligned} \textsf {TrapIP}_{3,\kappa }( {\mathbf {y}}, {\mathbf {1}}_\kappa , \vec {{\mathbf {R}}}' ) \rightarrow \vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}} \in {\mathbb {Z}}^{m \times m \cdot 3} \end{aligned}$$

    and \(y' = \langle {\mathbf {y}}, {\mathbf {1}}_\kappa \rangle \bmod 3\), and outputs \(\vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}}\).

Lemma 11

The above algorithms \((\textsf {PubPRF},\textsf {TrapPRF})\) satisfy the following properties:

  • \(\textsf {PubPRF}( {\mathbf {x}}, \{ {\mathbf {A}}\vec {{\mathbf {R}}}_i - {\mathbf {k}}_i^{\top } \otimes {\mathbf {G}} \}_{i\in [\kappa ]} ) = {\mathbf {A}}\vec {{\mathbf {R}}}_{\mathbf {x}}^{\textsf {PRF}} - \phi _3\left( \textsf {PRF}({\mathbf {K}},{\mathbf {x}})\right) ^{\top } \otimes {\mathbf {G}}\).

  • If \(\vec {{\mathbf {R}}}_{i} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ -1,1 \}^{m \times m \cdot 2\eta }\) for all \(i \in [\kappa ]\), then \(\Vert \vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}} \Vert _{\infty } = O(m^2 \kappa \eta )\).

Proof

Let \(\vec {{\mathbf {B}}}_i = {\mathbf {A}}\vec {{\mathbf {R}}}_i - {\mathbf {k}}_i^{\top } \otimes {\mathbf {G}}\). Then, after step 2 of \(\textsf {PubPRF}\) (and the corresponding step of \(\textsf {TrapPRF}\)), we have

$$\begin{aligned} \vec {{\mathbf {B}}}_i^{{{\textsf {I}}}{{\textsf {P}}}}&= {\mathbf {A}}\vec {{\mathbf {R}}}_i^{{{\textsf {I}}}{{\textsf {P}}}} - \phi _2(\langle {\mathbf {k}}_i, {\mathbf {z}}\rangle )^{\top } \otimes {\mathbf {G}}= {\mathbf {A}}\vec {{\mathbf {R}}}_i^{{{\textsf {I}}}{{\textsf {P}}}} - \phi _2(y_i)^{\top } \otimes {\mathbf {G}}, \end{aligned}$$
(1)
$$\begin{aligned} \vec {{\mathbf {B}}}'&= {\mathbf {A}}\vec {{\mathbf {R}}}' - {\mathbf {y}}^{\top } \otimes {\mathbf {G}}, \end{aligned}$$
(2)
$$\begin{aligned} \Vert \vec {{\mathbf {R}}}'\Vert _{\infty }&= O\left( m \eta \Vert \vec {{\mathbf {R}}}_i\Vert _{\infty }\right) = O(m \eta ), \end{aligned}$$
(3)

where Eqs. (1) and (3) follow from Theorem 5, and Eq. (2) follows from the facts that \(\phi _2(y_i) = \left( [y_i = 0], [y_i = 1]\right) ^{\top }\) and \([y_i = 1] = y_i\). Next, by applying Theorem 5, we also have

$$\begin{aligned} \vec {{\mathbf {B}}}_{{\mathbf {x}}}^{\textsf {PRF}}&= {\mathbf {A}}\vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}} - \phi _3(\left\langle {\mathbf {y}},{\mathbf {1}}_\kappa \right\rangle )^{\top } \otimes {\mathbf {G}}= {\mathbf {A}}\vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}} - \phi _3(\textsf {PRF}({\mathbf {K}},{\mathbf {x}}))^{\top } \otimes {\mathbf {G}}, \\ \Vert \vec {{\mathbf {R}}}_{{\mathbf {x}}}^{\textsf {PRF}}\Vert _{\infty }&= O\left( m \kappa \Vert \vec {{\mathbf {R}}}'\Vert _{\infty }\right) = O(m^2 \kappa \eta ). \end{aligned}$$

This completes the proof of Lemma 11. \(\square \)

5.2 Tightly secure identity-based encryption

Here, we construct a tightly adaptively secure \(\textsf {IBE}\) scheme based on \((\textsf {PubPRF}, \textsf {TrapPRF})\).

Overview We first give the intuition of our construction. At a high level, our construction follows the template of Boyen and Li [16]: instead of simulating the behavior of the random oracle in the tightly-secure construction of Katz and Wang [40], we implicitly compute a \(\textsf {PRF}\) during the security proof. Boyen and Li showed that if the \(\textsf {PRF}\) can be computed by an \({\textbf {NC}}^1\) circuit, then we can use the homomorphic computation technique of [19] to obtain a tightly-secure \(\textsf {IBE}\) scheme based on the \(\textsf {poly}\)-\(\textsf {LWE}\) assumption and any assumption implying pseudorandomness of the \({\textbf {NC}}^1\)-computable \(\textsf {PRF}\). They instantiated their generic construction with the \({\textbf {NC}}^1\)-computable lattice-based \(\textsf {PRF}\) of [7, 8] based on the \(\textsf {superpoly}\)-\(\textsf {LWE}\) assumption.Footnote 5 Although this \(\textsf {PRF} \) is expressible as a \(\textsf {poly}\)-length branching program, the concrete length of the branching program is extremely long and has a significant undesirable impact on concrete efficiency. In our construction, we instead instantiate the Boyen-Li construction with Boneh et al.’s simple \(\textsf {PRF} \) to improve efficiency. However, since the output space of Boneh et al.’s \(\textsf {PRF} \) is \({\mathbb {Z}}_3\) rather than \({\mathbb {Z}}_2\), the concrete construction and proof depart slightly from Boyen and Li [16]. In the construction, to fit the output space of Boneh et al.’s PRF, we generate three GPV-style ciphertexts [29] during encryption, instead of two ciphertexts as in [16]. In the security proof, we add an artificial abort step to compensate for \(\textsf {PRF} _\mathsf{BIP+}\) not being distributed uniformly over 0 and 1.

Construction Let \( \{ 0,1 \} ^\ell \) be the identity space of the scheme. For simplicity, we let the message space of the scheme be \( \{ 0,1 \} \). We can easily extend the scheme to a multi-bit variant using techniques similar to those of [1, 50, 56].

  • \(\textsf {Setup}(1^\lambda )\): On input \(1^\lambda \), it first sets the parameters n, m, q, \(\gamma \), \(\alpha \), and \(\alpha '\) as specified later in this section. Then, it picks \(({\mathbf {A}},{\mathbf {A}}_{\gamma _0}^{-1}) \leftarrow \textsf {TrapGen}(1^n,1^m,q)\) such that \({\mathbf {A}}\in {\mathbb {Z}}_q^{n \times m}\). It also picks random matrices \(\vec {{\mathbf {B}}}_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m \cdot 2\eta }\) for \(i \in [\kappa ]\), a vector \({\mathbf {u}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), a \(\textsf {PRF}\) key \({\mathbf {K}} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_2^{\kappa \times 2\eta }\), and a matrix \({\mathbf {H}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_3^{\eta \times \ell }\). It finally outputs \(\textsf {MPK}= ( {\mathbf {A}}, \{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]}, {\mathbf {u}}, {\mathbf {H}})\) and \(\textsf {MSK}= ( {\mathbf {A}}_{\gamma _0}^{-1}, {\mathbf {K}} )\).

  • \(\textsf {KGen}(\textsf {MPK},\textsf {MSK},{{\textsf {I}}}{{\textsf {D}}})\): Given an identity \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\), it first computes \(\vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{\textsf {PRF}} = [ {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},1}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},2}^{\textsf {PRF}} ]\) by running \(\textsf {PubPRF}( {{\textsf {I}}}{{\textsf {D}}}, \{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]} )\) and \(y = \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}})\). Then, it samples \({\mathbf {d}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }[ {\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}} ]_{\gamma }^{-1}({\mathbf {u}})\) such that \({\mathbf {d}}\in {\mathbb {Z}}^{2m}\) and \([{\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}}] \cdot {\mathbf {d}}= {\mathbf {u}}\). It finally outputs \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}} =({\mathbf {d}},y)\).

  • \(\textsf {Enc}(\textsf {MPK},{{\textsf {I}}}{{\textsf {D}}},{\textsf {M}})\): Given an identity \({{\textsf {I}}}{{\textsf {D}}}\in \{ 0,1 \} ^{\ell }\) and a message \({\textsf {M}}\in \{ 0,1 \} \) as inputs, it first computes \([ {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},1}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},2}^{\textsf {PRF}} ]\) by running \(\textsf {PubPRF}( {{\textsf {I}}}{{\textsf {D}}}, \{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]} )\). For \(i \in \{ 0,1,2 \}\), it picks \({\mathbf {s}}_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), \(z_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}},\alpha q}\), and \({\mathbf {z}}_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^{2m},\alpha ' q}\), and computes \(c_i = {\mathbf {u}}^{\top } {\mathbf {s}}_i + z_i + {\textsf {M}}\cdot \left\lceil q/2\right\rceil \in {\mathbb {Z}}_q\) and \({\mathbf {c}}_i = [ {\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{\textsf {PRF}} ]^{\top } {\mathbf {s}}_i + {\mathbf {z}}_i \in {\mathbb {Z}}_q^{2m}\). Finally, it returns the ciphertext \({{\textsf {C}}}{{\textsf {T}}}= \left( \{ c_i, {\mathbf {c}}_i \}_{i \in \{ 0,1,2 \}}\right) \).

  • \(\textsf {Dec}(\textsf {MPK},{{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}},{{\textsf {C}}}{{\textsf {T}}})\): To decrypt a ciphertext \({{\textsf {C}}}{{\textsf {T}}}= \left( \{ c_i, {\mathbf {c}}_i \}_{i \in \{ 0,1,2 \}}\right) \) using a secret key \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}} = ({\mathbf {d}},y)\), it computes \(e = c_y - {\mathbf {c}}_y^{\top } \cdot {\mathbf {d}}\in {\mathbb {Z}}_q\). Finally, it returns 1 if \(\left|e-\left\lceil q/2\right\rceil \right| < \left\lceil q/4\right\rceil \) and 0 otherwise.

Correctness Here, we prove the correctness of the scheme. When the scheme is operated as specified, we have

$$\begin{aligned} e = c_y - {\mathbf {c}}_y^{\top } \cdot {\mathbf {d}}= {\textsf {M}}\cdot \left\lceil \frac{q}{2}\right\rceil + \underbrace{z_y - {\mathbf {z}}_y^{\top } \cdot {\mathbf {d}}}_{\text {noise term}}. \end{aligned}$$

Lemma 12

Assuming \(\alpha ' > \alpha \), the noise term \(z_y - {\mathbf {z}}_y^{\top } \cdot {\mathbf {d}}\) is bounded by \(O(\alpha ' q \gamma \sqrt{m})\) with overwhelming probability.

Proof

We have the following upper bound on the noise.

$$\begin{aligned} \left|z_y - {\mathbf {z}}_y^{\top } \cdot {\mathbf {d}}\right|&\le \left|z_y\right| + \left|{\mathbf {z}}_y^{\top } \cdot {\mathbf {d}}\right| \le \alpha q + \alpha ' q \gamma \sqrt{2m} = O(\alpha ' q \gamma \sqrt{m}). \end{aligned}$$

The second inequality follows from Lemma 1 and the linearity of subgaussian variables. We refer to [47, Sec. 2.4] for more on the properties of subgaussian variables. \(\square \)

Parameter selection We claim that the correctness and security of the scheme can be proven under the following parameter selection: \(m = 2 n \left\lceil \log {q}\right\rceil \), \(q = O(\lambda ^4 m^7 \sqrt{\log {m}})\), \(\gamma = O(\lambda ^2 m^3 \sqrt{\log {m}})\), \(\alpha = O(\lambda ^4 m^{13/2} \sqrt{\log {m}})^{-1}\), and \(\alpha ' = O(\lambda ^2 m^3 \cdot \alpha )\). In the above, we round up q to the nearest larger prime.

To guarantee correctness and to make the security proof go through, we need the following requirements:

  • the noise term is less than q/5 with overwhelming probability (i.e., \(O(\alpha ' q \gamma \sqrt{m}) < q/5\) by Lemma 12),

  • \(\textsf {TrapGen}\) operates properly (i.e., \(m \ge 2 n \left\lceil \log {q}\right\rceil \) by Lemma 4),

  • we can apply Lemma 3 in the security proof (i.e., \(m \ge n \log {q} + \Omega (n)\)),

  • \(\gamma \) is sufficiently large so that the distribution of the secret keys in the real world is the same as that in the simulation (i.e., \(\gamma > \gamma _0 = O(\sqrt{n \log {q} \log {m}})\) and \(\gamma = O(m^3 \kappa \eta ) \cdot O(\sqrt{\log {m}})\)),

  • \(\textsf {ReRand}\) operates properly in the security proof (i.e., \(\alpha ' / 2 \alpha \ge O(m^3 \kappa \eta )\) and \(\alpha q > \Omega (\sqrt{n})\)),

  • the \(\textsf {PRF}\) is secure (i.e., \(\kappa = O(\lambda )\) and \(\eta = O(\lambda )\) from [15]),

  • the worst case to average case reduction works (i.e., \(\alpha q > 2 \sqrt{n}\)).

To satisfy the above requirements, one way to set the parameters is as follows:

$$\begin{aligned} m&= 2 n \left\lceil \log {q}\right\rceil ,&q&= O(\lambda ^4 m^7 \sqrt{\log {m}}),&\quad \gamma&= O(\lambda ^2 m^3 \sqrt{\log {m}}), \\ \alpha&= O(\lambda ^4 m^{13/2} \sqrt{\log {m}})^{-1},&\quad \alpha '&= O(\lambda ^2 m^3 \cdot \alpha ), \end{aligned}$$

and round up q to the nearest larger prime.
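
Since \(m = 2 n \left\lceil \log {q}\right\rceil \) while q is itself polynomial in m, the two parameters determine each other only through a fixed point. The following Python sketch resolves this circular dependency by simple iteration; it takes every constant hidden in the \(O(\cdot )\) notation to be 1 and skips the final rounding of q to a prime, so the numbers it prints are purely illustrative and are not claimed by the analysis above.

```python
import math


def derive_params(lam: int, n: int, iters: int = 30):
    """Fixed-point iteration for the mutually dependent parameters m and q.

    All O(.) constants are set to 1, so the result only indicates asymptotic
    sizes; rounding q up to the nearest larger prime is omitted here.
    """
    q = 2.0 ** 20                                   # arbitrary starting point
    for _ in range(iters):
        m = 2 * n * math.ceil(math.log2(q))
        q = lam ** 4 * m ** 7 * math.sqrt(math.log(m))
    gamma = lam ** 2 * m ** 3 * math.sqrt(math.log(m))
    alpha = 1.0 / (lam ** 4 * m ** 6.5 * math.sqrt(math.log(m)))
    alpha_prime = lam ** 2 * m ** 3 * alpha
    return m, q, gamma, alpha, alpha_prime


if __name__ == "__main__":
    lam, n = 128, 512
    m, q, gamma, alpha, alpha_prime = derive_params(lam, n)
    # With the constants set to 1 we get alpha * q = sqrt(m), which comfortably
    # satisfies the requirement alpha * q > 2 * sqrt(n) listed above.
    assert alpha * q > 2 * math.sqrt(n)
    print(f"m = {m}, log2(q) ~ {math.log2(q):.1f}, alpha*q ~ {alpha * q:.1f}")
```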

Security proof The following theorem states the security of the scheme.

Theorem 13

The above \(\textsf {IBE}\) scheme is adaptively secure if the \(\textsf {LWE}_{n,m+1,q,D_{{\mathbb {Z}},\alpha q}}\) assumption holds and \(\textsf {PRF}\) is secure. Namely, for any PPT adversary \({\mathcal {A}}\) making at most Q key generation queries, there exist PPT algorithms \({\mathcal {B}}_{\textsf {LWE}}\) and \({\mathcal {B}}_{\textsf {PRF}}\) such that

$$\begin{aligned} \textsf {Adv}^{\textsf {IBE}}_{{\mathcal {A}}}(\lambda ) \le \frac{7}{2} \textsf {Adv}^{\textsf {LWE}_{n,m+1,q,D_{{\mathbb {Z}},\alpha q}}}_{{\mathcal {B}}_{\textsf {LWE}}}(\lambda ) + \textsf {Adv}^{\textsf {PRF}}_{{\mathcal {B}}_{\textsf {PRF}}}(\lambda ) + Q \cdot 2^{-\Omega (n)}. \end{aligned}$$
(4)

Proof

The proof proceeds in a sequence of games where the first game is identical to the real security game. In the last game in the sequence, the adversary has no advantage. In the following, let \({\mathcal {A}}\) be a PPT adversary that attacks the security of the scheme, and let \(W_i\) denote the event that \({\mathcal {A}}\) wins in Game i.

  • Game 0: This is the real security game between the challenger and \({\mathcal {A}}\). Then, we have \(\left|\Pr [W_0]-1/2\right| = \textsf {Adv}^{\textsf {IBE}}_{{\mathcal {A}}}(\lambda )\).

  • Game 1: In this game, we change the way \(\vec {{\mathbf {B}}}_i\) for \(i \in [\kappa ]\) are chosen. At the beginning of the game, for \(i \in [\kappa ]\), the challenger picks random matrices \(\vec {{\mathbf {R}}}_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ -1, 1 \}^{m \times m \cdot 2\eta }\), and sets

    $$\begin{aligned} \vec {{\mathbf {B}}}_i = {\mathbf {A}}\vec {{\mathbf {R}}}_i - {\mathbf {k}}_i^\top \otimes {\mathbf {G}}. \end{aligned}$$
    (5)

    By Lemma 3, the distributions of \(\left( {\mathbf {A}}, \{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]}\right) \) in the two games are statistically close. Therefore, we have \(\left|\Pr [W_0] - \Pr [W_1]\right| \le 2^{-\Omega (n)}\).

  • Game 2: In this game, we change the way key generation queries are answered. By the end of this game, the challenger will no longer require the trapdoor \({\mathbf {A}}_{\gamma _0}^{-1}\) to generate the secret keys. When \({\mathcal {A}}\) queries a secret key for \({{\textsf {I}}}{{\textsf {D}}}\), the challenger first computes

    $$\begin{aligned} \textsf {TrapPRF}\left( {{\textsf {I}}}{{\textsf {D}}},{\mathbf {K}},\{ \vec {{\mathbf {R}}}_i \}_{i\in [\kappa ]}\right) \rightarrow \vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{\textsf {PRF}} = \left[ {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{\textsf {PRF}} \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},1}^{\textsf {PRF}} \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},2}^{\textsf {PRF}} \right] \in {\mathbb {Z}}^{m \times m \cdot 3} \end{aligned}$$

    and \(y = \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}})\). Then it computes

    $$\begin{aligned} \left[ {\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}}\right] _{\gamma }^{-1} = \left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}} - [y=y] \cdot {\mathbf {G}}\right] _{\gamma }^{-1} = \left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}} - {\mathbf {G}}\right] _{\gamma }^{-1} \end{aligned}$$

    from \({\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}}\) and y using the algorithms in Lemma 4. It samples

    $$\begin{aligned} {\mathbf {d}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}} - {\mathbf {G}}\right] _{\gamma }^{-1}({\mathbf {u}}) \end{aligned}$$

    and finally outputs \({{\textsf {s}}}{{\textsf {k}}}_{{{\textsf {I}}}{{\textsf {D}}}} = ({\mathbf {d}},y)\) as a secret key on \({{\textsf {I}}}{{\textsf {D}}}\).

    By Lemma 11, we have

    $$\begin{aligned} \vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{\textsf {PRF}} = {\mathbf {A}}\vec {{\mathbf {R}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{\textsf {PRF}} - \phi _3(y)^{\top } \otimes {\mathbf {G}}, \end{aligned}$$

    where \(\textsf {PubPRF}\left( {{\textsf {I}}}{{\textsf {D}}},\{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]}\right) \rightarrow \vec {{\mathbf {B}}}_{{{\textsf {I}}}{{\textsf {D}}}}^{\textsf {PRF}} = \left[ {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},0}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},1}^{\textsf {PRF}} \vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},2}^{\textsf {PRF}}\right] \), as in \(\textsf {Enc}\). Note that we have

    $$\begin{aligned} {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{\textsf {PRF}} = {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{\textsf {PRF}} - \phi _{3,i}(y) \cdot {\mathbf {G}}= {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}},i}^{\textsf {PRF}} - [y=i] \cdot {\mathbf {G}}\end{aligned}$$

    for \(i \in \{ 0,1,2 \}\) from the property of \(\phi _{3}\).

    By Lemma 4, the distributions of \({\mathbf {d}}\) in the two games are statistically close. Since \({\mathcal {A}}\) obtains at most Q secret keys, we have \(\left|\Pr [W_1] - \Pr [W_2]\right| \le Q \cdot 2^{-\Omega (n)}\).

  • Game 3: In this game, we change the way \({\mathbf {A}}\) is sampled. Namely, the challenger samples \({\mathbf {A}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m}\) instead of generating it with a trapdoor. By Lemma 4, the distributions of \({\mathbf {A}}\) in the two games are statistically close. Therefore, we have \(\left|\Pr [W_2] - \Pr [W_3]\right| \le 2^{-\Omega (n)}\).

  • Game 4: In this game, we change the way the challenge ciphertext \(\left( \{ c_i, {\mathbf {c}}_i \}_{i \in \{ 0,1,2 \}}\right) \) is created. To create the challenge ciphertext for \({{\textsf {I}}}{{\textsf {D}}}^*\), the challenger first computes \(y^*= \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}}^*)\) and sets \(y_1 {:}{=}y^*+ 1 \bmod 3\) and \(y_2 {:}{=}y^*+ 2 \bmod 3\). It computes \((c_{y^*,0}, {\mathbf {c}}_{y^*,1})\) and \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\) via \(\textsf {Enc}\). The remaining part of the challenge ciphertext \((c_{y_1,0}, {\mathbf {c}}_{y_1,1})\) is created as follows. The challenger picks \({\mathbf {s}}_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), \(z_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}},\alpha q}\), and \({\bar{{\mathbf {z}}}}_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\) and sets \(w_{y_1} {:}{=}{\mathbf {u}}^{\top } {\mathbf {s}}_{y_1} + z_{y_1}\) and \({\mathbf {w}}_{y_1} {:}{=}{\mathbf {A}}^{\top } {\mathbf {s}}_{y_1} + {\bar{{\mathbf {z}}}}_{y_1}\). Then, it computes \({\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}\) using \(\textsf {TrapPRF}\) and sets \((c_{y_1,0}, {\mathbf {c}}_{y_1,1})\) as

    $$\begin{aligned} c_{y_1,0}&{:}{=}w_{y_1} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil \text { and } {\mathbf {c}}_{y_1,1} \leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{y_1},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$

    We show that the view of \({\mathcal {A}}\) in Game 4 is negligibly close to that in Game 3. To see this, we apply Lemma 2 with \({\mathbf {V}}= [{\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]\), \({\mathbf {b}}= {\mathbf {A}}^{\top } {\mathbf {s}}\), and \({\mathbf {z}}= {\bar{{\mathbf {z}}}}_{y_1}\) to obtain that the distribution of \({\mathbf {c}}_{y_1,1}\) in Game 4 is negligibly close to the following:

    $$\begin{aligned} {\mathbf {c}}_{y_1,1}&= [{\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]^{\top } {\mathbf {A}}^{\top } {\mathbf {s}}+ {\mathbf {z}}' = [{\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]^{\top } {\mathbf {s}}+ {\mathbf {z}}' = [{\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]^{\top } {\mathbf {s}}+ {\mathbf {z}}', \end{aligned}$$

    where the last equality follows from Lemma 11 and the fact that \(y^*\ne y_1\), and \({\mathbf {z}}'\) is distributed negligibly close to \(D_{{\mathbb {Z}}^{2m},\alpha ' q}\). Note that we can apply Lemma 2 because

    $$\begin{aligned} \frac{\alpha '}{2\alpha } \ge O(m^3 \kappa \eta ) \ge \sqrt{m} \cdot \sqrt{2m} \cdot \Vert [{\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]\Vert _{\infty } \ge \left\| [{\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}]\right\| _2 \end{aligned}$$

    where the third inequality follows from the relationship between the infinity norm and the operator norm. It can be seen that \((c_{y_1,0}, {\mathbf {c}}_{y_1,1})\) is distributed statistically close to that in Game 3. Therefore, we have \(\left|\Pr [W_3] - \Pr [W_4]\right| \le 2^{-\Omega (n)}\).

  • Game 5: In this game, we further change the way a part of the challenge ciphertext \((c_{y_1,0}, {\mathbf {c}}_{y_1,1})\) is created. To create \((c_{y_1,0}, {\mathbf {c}}_{y_1,1})\), the challenger first picks \(w_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q\), \({\mathbf {w}}_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^m\), and \({\bar{{\mathbf {z}}}}_{y_1} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\), and sets

    $$\begin{aligned} c_{y_1,0}&{:}{=}w_{y_1} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil , \end{aligned}$$
    (6)
    $$\begin{aligned} {\mathbf {c}}_{y_1,1}&\leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_1}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{y_1} + {\bar{{\mathbf {z}}}}_{y_1},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$
    (7)

    We will show in Lemma 14 that for any PPT adversary \({\mathcal {A}}\), there exists a PPT adversary \({\mathcal {B}}_{\textsf {LWE}}\) such that \(\left|\Pr [W_4] - \Pr [W_5]\right| \le \textsf {Adv}^{\textsf {LWE}_{n,m+1,q,\chi }}_{{\mathcal {B}}_{\textsf {LWE}}}(\lambda ) {=}{:}\epsilon _{\textsf {LWE}}\).

  • Game 6: In this game, we change the way a part of the challenge ciphertext \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\) is created. To create \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\), the challenger picks \({\mathbf {s}}_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), \(z_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}},\alpha q}\), and \({\bar{{\mathbf {z}}}}_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\) and sets \(w_{y_2} {:}{=}{\mathbf {u}}^{\top } {\mathbf {s}}_{y_2} + z_{y_2}\) and \({\mathbf {w}}_{y_2} {:}{=}{\mathbf {A}}^{\top } {\mathbf {s}}_{y_2} + {\bar{{\mathbf {z}}}}_{y_2}\). Then, it computes \({\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_2}^{\textsf {PRF}}\) using \(\textsf {TrapPRF}\) and sets \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\) as

    $$\begin{aligned} c_{y_2,0}&{:}{=}w_{y_2} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil \text { and } {\mathbf {c}}_{y_2,1} \leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_2}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{y_2},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$

    Similarly to the change from Game 3 to Game 4, we have \(\left|\Pr [W_5] - \Pr [W_6]\right| \le 2^{-\Omega (n)}\).

  • Game 7: In this game, we further change the way a part of the challenge ciphertext \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\) is created. To create \((c_{y_2,0}, {\mathbf {c}}_{y_2,1})\), the challenger first picks \(w_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q\), \({\mathbf {w}}_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^m\), and \({\bar{{\mathbf {z}}}}_{y_2} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\), and sets

    $$\begin{aligned} c_{y_2,0}&{:}{=}w_{y_2} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil , \end{aligned}$$
    (8)
    $$\begin{aligned} {\mathbf {c}}_{y_2,1}&\leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,y_2}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{y_2} + {\bar{{\mathbf {z}}}}_{y_2},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$
    (9)

    Similarly to the change from Game 4 to Game 5, we have \(\left|\Pr [W_6]-\Pr [W_7]\right| \le \epsilon _{\textsf {LWE}}\).

  • Game 8: In this game, \({\mathbf {A}}\) is sampled with a trapdoor as \(({\mathbf {A}},{\mathbf {A}}_{\gamma _0}^{-1}) \leftarrow \textsf {TrapGen}(1^n,1^m,q)\). Similarly to the change from Game 2 to Game 3, we have \(\left|\Pr [W_7] - \Pr [W_8]\right| \le 2^{-\Omega (n)}\).

  • Game 9: In this game, the challenger generates the secret keys as in the real scheme. Similarly to the change from Game 1 to Game 2, we have \(\left|\Pr [W_8] - \Pr [W_9]\right| \le Q \cdot 2^{-\Omega (n)}\).

  • Game 10: In this game, \(\vec {{\mathbf {B}}}_i\) for \(i \in [\kappa ]\) are sampled as \(\vec {{\mathbf {B}}}_i \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m \cdot 2\eta }\). Similarly to the change from Game 0 to Game 1, we have \(\left|\Pr [W_9] - \Pr [W_{10}]\right| \le 2^{-\Omega (n)}\).

  • Game 11: In this game, we change the way the challenge ciphertext is created. To create the challenge ciphertext, the challenger first picks \(v^*\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ 0,1,2 \}\) and computes \((c_{v^*,0}, {\mathbf {c}}_{v^*, 1})\) via \(\textsf {Enc}\), and sets the remaining part of the challenge ciphertext \(\left( c_{i,0}, {\mathbf {c}}_{i,1}\right) \) for \(i \in \{ 0,1,2 \} \setminus \{ v^* \}\) as in Eqs. (6)–(9).

    The only difference between Game 10 and Game 11 is whether to create the challenge ciphertext according to \(y^*= \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}}^*)\) or \(v^*\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ 0,1,2 \}\). If there exists a PPT algorithm that can distinguish Game 10 and Game 11, then we can obtain a PPT algorithm \({\mathcal {B}}_{\textsf {PRF}}\) that breaks the security of \(\textsf {PRF}\). Hence, we have \(\left|\Pr [W_{10}] - \Pr [W_{11}]\right| \le \textsf {Adv}^{\textsf {PRF}}_{{\mathcal {B}}_{\textsf {PRF}}}(\lambda ) {=}{:}\epsilon _{\textsf {PRF}}\).

  • Game 12: In this game, we change Game 11 so that the challenger performs the following additional step at the challenge phase. First, the challenger picks \(v^*\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }\{ 0,1,2 \}\) and computes \(y^*\leftarrow \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}}^*)\). Then, the challenger checks whether

    $$\begin{aligned} y^*= v^*. \end{aligned}$$
    (10)

    If it holds, the challenger outputs \(b' \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow } \{ 0,1 \} \) and aborts. If condition (10) does not hold, the challenger proceeds with the game as in Game 11. Then, we have

    $$\begin{aligned} \left|\Pr [W_{12}] - \frac{1}{2}\right|&= \left|\Pr [\lnot \text {abort}] \cdot \Pr [W_{12} \mid \lnot \text {abort}] + \Pr [\text {abort}] \cdot \Pr [W_{12} \mid \text {abort}] - \frac{1}{2}\right| \\&= \left|\Pr [v^*\ne y^*] \cdot \Pr [W_{11}] + \Pr [v^*= y^*] \cdot \frac{1}{2} - \frac{1}{2}\right| \\&= \left|\frac{2}{3} \cdot \Pr [W_{11}] + \frac{1}{3} \cdot \frac{1}{2} - \frac{1}{2}\right| \\&= \frac{2}{3} \cdot \left|\Pr [W_{11}] - \frac{1}{2}\right|. \end{aligned}$$

  • Game 13: In this game, \(\vec {{\mathbf {B}}}_i\) for \(i \in [\kappa ]\) are chosen as in Eq. (5). Similarly to the change from Game 0 to Game 1, we have \(\left|\Pr [W_{12}] - \Pr [W_{13}]\right| \le 2^{-\Omega (n)}\).

  • Game 14: In this game, the secret keys are generated as in Game 2. Similarly to the change from Game 1 to Game 2, we have \(\left|\Pr [W_{13}] - \Pr [W_{14}]\right| \le Q \cdot 2^{-\Omega (n)}\).

  • Game 15: In this game, \({\mathbf {A}}\) is sampled as \({\mathbf {A}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{n \times m}\). Similarly to the change from Game 2 to Game 3, we have \(\left|\Pr [W_{14}] - \Pr [W_{15}]\right| \le 2^{-\Omega (n)}\).

  • Game 16: In this game, we change the way a part of the challenge ciphertext \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\) is generated. To create \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\), the challenger first picks \({\mathbf {s}}_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^n\), \(z_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}},\alpha q}\), and \({\bar{{\mathbf {z}}}}_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\) and sets \(w_{v^*} {:}{=}{\mathbf {u}}^{\top } {\mathbf {s}}_{v^*} + z_{v^*}\) and \({\mathbf {w}}_{v^*} {:}{=}{\mathbf {A}}^{\top } {\mathbf {s}}_{v^*} + {\bar{{\mathbf {z}}}}_{v^*}\). Then, it computes \({\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\) using \(\textsf {TrapPRF}\) and sets \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\) as

    $$\begin{aligned} c_{v^*,0}&{:}{=}w_{v^*} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil \text { and } {\mathbf {c}}_{v^*,1} \leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{v^*},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$

    We show that the view of \({\mathcal {A}}\) in Game 16 is negligibly close to that in Game 15. In Game 15, whenever the challenger does not abort, condition (10) does not hold and hence \(y^*\ne v^*\), where \(y^*= \textsf {PRF}({\mathbf {K}},{{\textsf {I}}}{{\textsf {D}}}^*)\). Then, we have

    $$\begin{aligned} {\mathbf {c}}_{v^*,1}&= \left[ {\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ^{\top } {\mathbf {s}}+ {\mathbf {z}}_{v^*} \\&= \left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}} - [y^*= v^*] \cdot {\mathbf {G}}\right] ^{\top } {\mathbf {s}}+ {\mathbf {z}}_{v^*} \\&= \left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ^{\top } {\mathbf {s}}+ {\mathbf {z}}_{v^*}, \end{aligned}$$

    where \({\mathbf {z}}_{v^*}\) is distributed according to \(D_{{\mathbb {Z}}^{2m}, \alpha ' q}\). On the other hand, in Game 16, by applying Lemma 2 with \({\mathbf {V}}= [{\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}]\), \({\mathbf {b}}= {\mathbf {A}}^{\top } {\mathbf {s}}\), and \({\mathbf {z}}= {\bar{{\mathbf {z}}}}_{v^*}\), we obtain

    $$\begin{aligned} {\mathbf {c}}_{v^*,1} = \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ^{\top } {\mathbf {A}}^\top {\mathbf {s}}+ {\mathbf {z}}' = \left[ {\mathbf {A}}\vert \vert {\mathbf {A}}{\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ^{\top } {\mathbf {s}}+ {\mathbf {z}}', \end{aligned}$$

    where \({\mathbf {z}}'\) is distributed negligibly close to \(D_{{\mathbb {Z}}^{2m}, \alpha ' q}\). Here, we can apply Lemma 2 thanks to our parameter selection. It can be seen that the distributions of \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\) in the two games are statistically close. Therefore, we have \(\left|\Pr [W_{15}] - \Pr [W_{16}]\right| \le 2^{-\Omega (n)}\).

  • Game 17: In this game, we further change the way a part of the challenge ciphertext \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\) is created. To create \((c_{v^*,0}, {\mathbf {c}}_{v^*,1})\), the challenger first picks \(w_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q\), \({\mathbf {w}}_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^m\), and \({\bar{{\mathbf {z}}}}_{v^*} \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^m,\alpha q}\), and sets

    $$\begin{aligned} c_{v^*,0}&{:}{=}w_{v^*} + {\textsf {M}}_b \cdot \left\lceil \frac{q}{2}\right\rceil \text { and } {\mathbf {c}}_{v^*,1} \leftarrow \textsf {ReRand}\left( \left[ {\mathbf {I}}_m \vert \vert {\mathbf {R}}_{{{\textsf {I}}}{{\textsf {D}}}^*,v^*}^{\textsf {PRF}}\right] ,{\mathbf {w}}_{v^*} + {\bar{{\mathbf {z}}}}_{v^*},\alpha q, \frac{\alpha '}{2\alpha }\right) . \end{aligned}$$

    Similarly to the change from Game 4 to Game 5, we have \(\left|\Pr [W_{16}] - \Pr [W_{17}]\right| \le \epsilon _{\textsf {LWE}}\).

    Furthermore, since \(w_{i}\) for \(i \in \{ 0,1,2 \}\) are uniformly random over \({\mathbb {Z}}_q\) and independent of the other values, the term in the challenge ciphertext \(c_{i,0} = w_{i} + {\textsf {M}}_b \cdot \left\lceil q/2\right\rceil \) that conveys the information on the message is distributed independently of the value of \({\textsf {M}}_b\). Therefore, we have \(\Pr [W_{17}] = 1/2\).

Combining everything together, we have

$$\begin{aligned} \left|\Pr [W_0] - \frac{1}{2}\right|&\le \sum _{i=0}^{10}\left|\Pr [W_i] - \Pr [W_{i+1}]\right| + \left|\Pr [W_{11}] - \frac{1}{2}\right| \\&\le 2\epsilon _{\textsf {LWE}} + \epsilon _{\textsf {PRF}} + Q \cdot 2^{-\Omega (n)} + \frac{3}{2} \left|\Pr [W_{12}] - \frac{1}{2}\right| \\&\le 2\epsilon _{\textsf {LWE}} + \epsilon _{\textsf {PRF}} + Q \cdot 2^{-\Omega (n)} + \frac{3}{2} \left( \sum _{i=12}^{16}\left|\Pr [W_i] - \Pr [W_{i+1}]\right| + \left|\Pr [W_{17}] - \frac{1}{2}\right| \right) \\&\le 2\epsilon _{\textsf {LWE}} + \epsilon _{\textsf {PRF}} + Q \cdot 2^{-\Omega (n)} + \frac{3}{2} \left( \epsilon _{\textsf {LWE}} + Q \cdot 2^{-\Omega (n)} \right) \\&\le \frac{7}{2}\epsilon _{\textsf {LWE}} + \epsilon _{\textsf {PRF}} + Q \cdot 2^{-\Omega (n)}. \end{aligned}$$

Therefore, we obtain Eq. (4). To complete the proof of Theorem 13, it remains to prove the following lemma. \(\square \)

Lemma 14

For any PPT adversary \({\mathcal {A}}\), there exists another PPT adversary \({\mathcal {B}}_{\textsf {LWE}}\) such that

$$\begin{aligned} \left|\Pr [W_4] - \Pr [W_5]\right| \le \textsf {Adv}^{\textsf {LWE}_{n,m+1,q,\chi }}_{{\mathcal {B}}_{\textsf {LWE}}}(\lambda ). \end{aligned}$$

Proof

Suppose that there exists an adversary \({\mathcal {A}}\) with non-negligible advantage in distinguishing between Game 4 and Game 5. We use \({\mathcal {A}}\) to construct an algorithm \({\mathcal {B}}_{\textsf {LWE}}\) that solves the \(\textsf {LWE}\) problem as follows.

  • Instance. \({\mathcal {B}}_{\textsf {LWE}}\) is given \(\left( {\mathbf {A}}', {\mathbf {w}}'\right) \in {\mathbb {Z}}_q^{n \times (m+1)} \times {\mathbb {Z}}_q^{m+1}\) as the problem instance of \(\textsf {LWE}_{n,m+1,q,\chi }\), where \(\chi = D_{{\mathbb {Z}},\alpha q}\). We can assume without loss of generality that \({\mathbf {w}}' = {\mathbf {w}}'' + {\mathbf {z}}\) for \({\mathbf {z}}\overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^{m+1},\alpha q}\) and restate the \(\textsf {LWE}\) problem so that \({\mathcal {B}}_{\textsf {LWE}}\)’s task is to distinguish whether \({\mathbf {w}}'' = {\mathbf {A}}'^{\top } {\mathbf {s}}\) for some \({\mathbf {s}}\in {\mathbb {Z}}_q^n\) or \({\mathbf {w}}'' \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{m+1}\). We note that this subtle change from the standard \(\textsf {LWE}\) problem is only a syntactical change made for the convenience of the proof.

  • Setup. To construct the master public key \(\textsf {MPK}\), \({\mathcal {B}}_{\textsf {LWE}}\) first parses \({\mathbf {A}}'\) into \(({\mathbf {u}},{\mathbf {A}}) \in {\mathbb {Z}}_q^n \times {\mathbb {Z}}_q^{n \times m}\) and \({\mathbf {w}}'\) into \((w,{\mathbf {w}}) \in {\mathbb {Z}}_q \times {\mathbb {Z}}_q^m\). Using these terms, \({\mathcal {B}}_{\textsf {LWE}}\) sets the matrices \(\{ \vec {{\mathbf {B}}}_i \}_{i\in [\kappa ]}\) as in Eq. (5) and gives the resulting master public key to \({\mathcal {A}}\).

  • Phase 1 and Phase 2. The key generation queries made by \({\mathcal {A}}\) are answered as in Game 2 (which is equivalent to both Game 4 and Game 5), without knowledge of the trapdoor of \({\mathbf {A}}\).

  • Challenge Phase. When \({\mathcal {A}}\) makes the challenge query for the challenge identity \({{\textsf {I}}}{{\textsf {D}}}^*\) and two messages \({\textsf {M}}_0, {\textsf {M}}_1\), \({\mathcal {B}}_{\textsf {LWE}}\) sets the challenge ciphertext \({{\textsf {C}}}{{\textsf {T}}}^*\) as in Game 5 and returns \({{\textsf {C}}}{{\textsf {T}}}^*\) to \({\mathcal {A}}\).

  • Guess. At last, \({\mathcal {A}}\) outputs its guess \({\hat{b}}\) for the bit b used to create the challenge ciphertext. Then, \({\mathcal {B}}_{\textsf {LWE}}\) outputs 1 if \({\hat{b}} = b\) and 0 otherwise.

Analysis It can be seen that \({\mathcal {B}}_{\textsf {LWE}}\) perfectly simulates the view of \({\mathcal {A}}\) in Game 4 if \(({\mathbf {A}}',{\mathbf {w}}')\) is a valid \(\textsf {LWE}\) instance (i.e., \({\mathbf {w}}''={\mathbf {A}}'^{\top } {\mathbf {s}}\)) and in Game 5 otherwise (i.e., \({\mathbf {w}}'' \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }{\mathbb {Z}}_q^{m+1}\)). Moreover, \({\mathcal {B}}_{\textsf {LWE}}\) outputs 1 exactly when \({\mathcal {A}}\) wins, so it outputs 1 with probability \(\Pr [W_4]\) in the former case and \(\Pr [W_5]\) in the latter. We therefore conclude that \(\textsf {Adv}^{\textsf {LWE}_{n,m+1,q,\chi }}_{{\mathcal {B}}_{\textsf {LWE}}}(\lambda ) = \left|\Pr [W_4] - \Pr [W_5]\right|\) as desired. \(\square \)

Multi-bit variant Here, we explain how to extend our scheme to a multi-bit variant without significantly increasing the size of the master public keys, secret keys, and ciphertexts, following the techniques of [1, 50, 56]. To modify the scheme to deal with a message space of length \(\ell _M\), we replace \({\mathbf {u}}\in {\mathbb {Z}}_q^n\) in \(\textsf {MPK}\) with \({\mathbf {U}}\in {\mathbb {Z}}_q^{n \times \ell _M}\). The component \(c_i\) for \(i \in \{ 0,1,2 \}\) in the ciphertext is replaced with \({\mathbf {c}}_i' = {\mathbf {U}}^\top {\mathbf {s}}_i + {\mathbf {z}}_i' + {\textsf {M}}\cdot \left\lceil q/2\right\rceil \in {\mathbb {Z}}_q^{\ell _M}\), where \({\mathbf {z}}_i' \overset{{\scriptscriptstyle \textsf {\$}}}{\leftarrow }D_{{\mathbb {Z}}^{\ell _M}, \alpha q}\) and \({\textsf {M}}\in \{ 0,1 \} ^{\ell _M}\) is the message to be encrypted. The secret key for \({{\textsf {I}}}{{\textsf {D}}}\) is replaced with \({\mathbf {D}}\in {\mathbb {Z}}^{2m \times \ell _M}\) such that \([{\mathbf {A}}\vert \vert {\mathbf {B}}_{{{\textsf {I}}}{{\textsf {D}}},y}^{\textsf {PRF}}] \cdot {\mathbf {D}}= {\mathbf {U}}\). We can prove security for the multi-bit variant from \(\textsf {LWE}_{n,m+\ell _M,q,\chi }\) by naturally extending the proof of Theorem 13. We note that the same parameters as in the single-bit variant work for the multi-bit variant. By this change, the sizes of the master public keys, ciphertexts, and secret keys become \({\tilde{O}}(n^2 \kappa \eta + n \ell _M)\), \({\tilde{O}}(n + \ell _M)\), and \({\tilde{O}}(n \ell _M)\) from \({\tilde{O}}(n^2 \kappa \eta )\), \({\tilde{O}}(n)\), and \({\tilde{O}}(n)\), respectively. The sizes of the master public keys and ciphertexts remain asymptotically the same as long as \(\ell _M = {\tilde{O}}(n)\). To deal with longer messages, we employ a KEM-DEM approach as suggested in [56]. Namely, we encrypt a random ephemeral key of sufficient length and then encrypt the message using the ephemeral key, as sketched below.
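
The KEM-DEM step can be made concrete as in the following Python sketch. It is a minimal illustration only: ibe_encrypt is a hypothetical stand-in for the multi-bit encryption algorithm above (treating its \(\ell _M\)-bit plaintext as a byte string), and SHAKE-256 is used as an off-the-shelf way to expand the ephemeral key into a key stream; neither choice is prescribed by our scheme.

```python
import hashlib
import secrets


def kem_dem_encrypt(ibe_encrypt, mpk, identity: str, message: bytes, ell_m: int = 256):
    """Encrypt an arbitrary-length message under an identity via KEM-DEM.

    ibe_encrypt(mpk, identity, plaintext_bytes) is a placeholder for the
    multi-bit IBE encryption of an ell_m-bit string; it is assumed here,
    not defined.
    """
    # KEM: encrypt a fresh random ephemeral key of ell_m bits with the IBE scheme.
    ephemeral_key = secrets.token_bytes(ell_m // 8)
    kem_ct = ibe_encrypt(mpk, identity, ephemeral_key)
    # DEM: expand the ephemeral key into a one-time key stream and mask the message.
    pad = hashlib.shake_256(ephemeral_key).digest(len(message))
    dem_ct = bytes(mi ^ pi for mi, pi in zip(message, pad))
    return kem_ct, dem_ct
```

Decryption reverses the two steps: the IBE secret key for the identity recovers the ephemeral key from the KEM part, and the same key stream removes the mask from the DEM part.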

5.3 Tightly secure signature

Our \(\textsf {IBE}\) scheme can be converted into a tightly secure signature scheme via the Naor transform [12]. For the sake of completeness, we describe the scheme and provide the full security proof in Sect. B. While a black-box application of the Naor transform requires the \(\textsf {LWE}\) assumption for the security of the signature, the \(\textsf {SIS}\) assumption suffices in our direct analysis. Furthermore, the security analysis is simpler than in the \(\textsf {IBE}\) case because we do not need the artificial abort step.