1 Introduction

Encryption has traditionally been regarded as a way to ensure the confidentiality of end-to-end communication. However, with the emergence of complex networks and cloud computing, the cryptographic community has recently been rethinking the notion of encryption to address the security concerns that arise in these more complex environments. Functional encryption [10, 21], generalized from identity-based encryption [8, 23] and attribute-based encryption [7, 18], provides a satisfying solution to this problem in theory. Two features provided by functional encryption are fine-grained access and computing on encrypted data. The fine-grained access part is formalized as a cryptographic notion named predicate encryption [11, 19]. In a predicate encryption system, each ciphertext \(\mathsf {ct} \) is associated with an attribute a while each secret key \(\mathsf {sk} \) is associated with a predicate f. A user holding the key \(\mathsf {sk} \) can decrypt the ciphertext \(\mathsf {ct} \) if and only if \(f(a) = 0\). Moreover, the attribute a is kept hidden.

With several significant advances in quantum computing, the community is working intensively on developing applications whose security holds even against quantum attacks. Lattice-based cryptography, the most promising candidate against quantum attacks, has matured significantly since the early works of Ajtai [3] and Regev [22]. Most cryptographic primitives, ranging from basic public-key encryption (PKE) [22] to more advanced schemes, e.g., identity-based encryption (IBE) [1, 12], attribute-based encryption (ABE) [9, 17], fully-homomorphic encryption (FHE) [13], etc., can be built from now-canonical lattice hardness assumptions, such as Regev's Learning with Errors (LWE). From these facts, we can conclude that our understanding of how to instantiate different cryptographic primitives from lattices is quite mature. However, when it comes to improving the efficiency of existing lattice-based constructions, e.g., reducing the size of public parameters and ciphertexts, or simplifying the decryption algorithm, our understanding is limited. Beyond the theoretical interest in shrinking ciphertexts, the main motivation for studying functional encryption is its potential deployment in complex networks and cloud computing, where the size of transmitted data is a bottleneck for current lattice-based constructions. This brings us to the following open question:

$$\begin{aligned} \begin{array}{c} \textit{Can we optimize the size of public parameters and ciphertexts of other} \\ \textit{functional encryption schemes beyond identity-based encryption?} \end{array} \end{aligned}$$

1.1 Our Contributions

We positively answer the above question by proposing the first lattice-based compact inner product encryption (IPE). Roughly speaking, in an IPE scheme, the secret key \(\mathsf {sk} \) is associated with a predicate vector \(\varvec{v} \in \mathbb {Z}_q^t\) and the ciphertext is associated with an attribute vector \(\varvec{w} \in \mathbb {Z}_q^t\). The decryption works if and only if the inner product \(\langle \varvec{v}, \varvec{w} \rangle = 0\). Despite this apparently restrictive structure, inner product predicates can support conjunction, subset and range queries on encrypted data [11], as well as disjunctions, polynomial evaluation, and CNF and DNF formulas [19]. Our construction can be summarized in the following informal theorem:
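As a toy illustration of the decryption predicate (plain modular arithmetic, not a cryptosystem; the vectors and modulus below are our own illustrative choices), the condition \(\langle \varvec{v}, \varvec{w} \rangle = 0\) over \(\mathbb {Z}_q\) can be checked as follows:

```python
# Decryption predicate of IPE: sk_v opens ct_w iff <v, w> = 0 over Z_q.
q = 97  # small illustrative prime modulus

def ip_mod(v, w, q):
    """Inner product of two equal-length vectors over Z_q."""
    assert len(v) == len(w)
    return sum(vi * wi for vi, wi in zip(v, w)) % q

v = [3, 1, 0, 5]        # predicate vector
w = [2, 91, 0, 0]       # attribute vector: 3*2 + 1*91 = 97 = 0 mod 97
assert ip_mod(v, w, q) == 0              # sk_v decrypts ct_w
assert ip_mod(v, [1, 0, 0, 0], q) != 0   # but not this ciphertext
```

Conjunctions, subset queries, and the other predicates mentioned above are obtained by choosing how attributes are packed into \(\varvec{w}\) and predicates into \(\varvec{v}\).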

Theorem 1.1

(Main). Under the standard Learning with Errors assumption, there is an IPE scheme satisfying the weak attribute-hiding property for predicate/attribute vectors of length \(t = \log n\), where (1) the modulus q is a prime of size polynomial in the security parameter n, (2) ciphertexts consist of a vector in \(\mathbb {Z}_q^{2m + 1}\), where m is the lattice column dimension, and (3) the public parameters consist of two matrices in \(\mathbb {Z}_q^{n \times m}\) and a vector in \(\mathbb {Z}_q^n\).

Remark 1.2

Our technique only allows us to prove a weak form of anonymity (“attribute hiding”). Specifically, given a ciphertext \(\mathsf {ct} \) and a number of keys that do not decrypt \(\mathsf {ct} \), the user cannot determine the attribute associated with \(\mathsf {ct} \). In the strong form of attribute hiding, the user cannot determine the attribute associated with \(\mathsf {ct} \) even when given keys that do decrypt \(\mathsf {ct} \). The weakened form of attribute hiding we do achieve is nonetheless more than is required for \(\mathsf {ABE} \) and should be sufficient for many applications of \(\mathsf {PE} \). See Sect. 2 for more detail.

We can also extend our compact IPE construction to support \(t = \mathsf {poly}(n)\)-length attribute vectors. Letting \(t' = t / \log n\), our IPE construction supporting \(\mathsf {poly}(n)\)-length vectors can be stated in the following corollary:

Corollary 1.3

Under the standard Learning with Errors assumption, there is an IPE scheme with the weak attribute-hiding property supporting predicate/attribute vectors of length \(t = \mathsf {poly}(n)\), where (1) the modulus q is a prime of size polynomial in the security parameter n, (2) ciphertexts consist of a vector in \(\mathbb {Z}_q^{(t' + 1)m + 1}\), where m is the lattice column dimension, and (3) the public parameters consist of \((t' + 1)\) matrices in \(\mathbb {Z}_q^{n \times m}\) and a vector in \(\mathbb {Z}_q^n\).

In addition to reducing the size of public parameters and ciphertexts, our decryption algorithm operates in a Single-Instruction-Multiple-Data (SIMD) manner. In prior works [2, 24], decryption computes the inner product between the predicate vector and the ciphertext by (1) decomposing the predicate vector, and (2) multiplying-then-adding each vector bit and its corresponding ciphertext component, entry by entry. Our decryption algorithm computes the inner product with just one vector-matrix multiplication.

1.2 Our Techniques

Our high-level approach to compact inner product encryption from LWE begins by revisiting the first lattice-based IPE construction [2] and the novel fully homomorphic encryption scheme recently proposed by Gentry et al. [15].

The Agrawal-Freeman-Vaikuntanathan IPE. We first briefly review the IPE construction of [2]. Their construction relies on the algebraic structure of ABB-IBE [1] to solve the “lattice matching” problem. Lattice matching means that the lattice computed in the decryption algorithm matches the structure used in key generation; since the secret key is a short trapdoor for the matched lattice, decryption succeeds. To encode a predicate vector \(\varvec{v} \in \mathbb {Z}_q^t\) according to [2], the key generation first computes the r-ary decomposition of each entry of \(\varvec{v}\) as \(v_i = \sum _{j = 0}^k v_{ij} r^j\), and constructs the \(\varvec{v}\)-specific lattice as

$$[\mathbf {A} | \mathbf {A}_{\varvec{v}}] = [\mathbf {A} | \sum _{i = 1}^t \sum _{j = 0}^k v_{ij} \mathbf {A}_{ij}]$$

by “mixing” a long list of public matrices \((\mathbf {A}, \{\mathbf {A}_{ij}\})\), each in \(\mathbb {Z}_q^{n \times m}\). The secret key \(\mathsf {sk} _{\varvec{v}}\) is a short trapdoor of the lattice \(\varLambda _q^{\bot }([\mathbf {A} | \sum _{i = 1}^t \sum _{j = 0}^k v_{ij} \mathbf {A}_{ij}])\). To encode an attribute vector \(\varvec{w} \in \mathbb {Z}_q^{t}\), for \(i \in [t]\) and \(j \in \{0,\ldots ,k\}\), construct the \(\varvec{w}\)-specific vectors as

$$\varvec{c}_{ij} = \varvec{s}^\mathsf {T}( \mathbf {A}_{ij} + r^j w_i \mathbf {B}) + \mathsf {noise}$$

for a randomly chosen vector \(\varvec{s} \in \mathbb {Z}_q^n\) and a public matrix \(\mathbf {B} \in \mathbb {Z}_q^{n \times m}\). To reduce the noise growth in the inner product computation, decryption only needs to multiply-then-add each r-ary digit \(v_{ij}\) with its corresponding \(\varvec{c}_{ij}\), as

$$\sum _{i = 1}^t \sum _{j = 0}^k v_{ij}\varvec{c}_{ij} = \varvec{s}^\mathsf {T}(\sum _{i = 1}^t \sum _{j = 0}^k v_{ij} \mathbf {A}_{ij} + \langle \varvec{v}, \varvec{w}\rangle \mathbf {B}) + \mathsf {noise}$$

When \(\langle \varvec{v}, \varvec{w}\rangle = 0\), the \((\langle \varvec{v}, \varvec{w}\rangle \mathbf {B})\) part vanishes, so the lattice computed after the inner product matches the \(\mathbf {A}_{\varvec{v}}\) part from key generation, and the secret key \(\mathsf {sk} _{\varvec{v}}\) can be used to decrypt the ciphertext. As a result, however, the number of matrices in the public parameters (and of vectors in the ciphertext) is quasilinear in the dimension of the vectors.
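The r-ary decomposition step used above can be sketched in a few lines of Python (a toy illustration; the base r, digit count k, and sample value are our own choices):

```python
# r-ary decomposition of a predicate-vector entry v_i:
# v_i = sum_j v_ij * r^j with small digits v_ij in {0, ..., r-1}.
def r_ary(v_i, r, k):
    """Return the digits (v_i0, ..., v_ik) of v_i in base r."""
    digits = []
    for _ in range(k + 1):
        digits.append(v_i % r)
        v_i //= r
    return digits

r, k = 4, 3                      # base r, k+1 digits (covers values < r^(k+1))
digits = r_ary(150, r, k)
assert sum(d * r**j for j, d in enumerate(digits)) == 150
assert all(0 <= d < r for d in digits)   # small digits -> low noise growth
```

Each digit being at most r - 1 is exactly what keeps the noise growth in the multiply-then-add step manageable.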

Using GSW-FHE to compute inner products. Recent progress in fully homomorphic encryption [15] led us to re-think the process of computing the inner product. We asked whether we could use GSW-FHE [15], along with its simplification [4], to simplify this computation. Recall that a ciphertext of a message \(x \in \mathbb {Z}_q\) in GSW-FHE can be viewed in the form \(\mathsf {ct} _x = \mathbf {A} \mathbf {R} + x {\mathbf {G}}\), where \(\mathbf {A} \in \mathbb {Z}_q^{n \times m}\) is an LWE matrix, \(\mathbf {R} \in \mathbb {Z}_q^{m \times m}\) is a random small matrix, and \({\mathbf {G}}\) is the “gadget matrix” first (explicitly) introduced in [20]. The salient point is that there is an efficiently computable function \({\mathbf {G}}^{-1}\) such that (1) \(\mathsf {ct} _x \cdot {\mathbf {G}}^{-1}(y {\mathbf {G}}) = \mathsf {ct} _{xy}\), and (2) each entry of the matrix \({\mathbf {G}}^{-1}(y {\mathbf {G}})\) is 0 or 1, and thus has small norm. These two properties let us shrink the size of the public parameters (and ciphertext) from quasilinear to linear. In particular, to encode a predicate vector \(\varvec{v} \in \mathbb {Z}_q^t\), we construct the \(\varvec{v}\)-specific lattice as

$$[\mathbf {A} | \mathbf {A}_{\varvec{v}}] = [\mathbf {A} | \sum _{i = 1}^t \mathbf {A}_{i} {\mathbf {G}}^{-1}(v_i {\mathbf {G}})]$$

where the number of public matrices is \(t + 1\). To encode an attribute vector \(\varvec{w} \in \mathbb {Z}_q^{t}\), for \(i \in [t]\), construct the \(\varvec{w}\)-specific vector as

$$\varvec{c}_i = \varvec{s}^\mathsf {T}( \mathbf {A}_i + w_i {\mathbf {G}}) + \mathsf {noise}$$

Then, we can compute the inner product as

$$\sum _{i = 1}^t \varvec{c}_i \cdot {\mathbf {G}}^{-1}(v_i {\mathbf {G}}) = \varvec{s}^\mathsf {T}(\sum _{i = 1}^t \mathbf {A}_{i} {\mathbf {G}}^{-1}(v_i {\mathbf {G}}) + \langle \varvec{v}, \varvec{w} \rangle {\mathbf {G}}) + \mathsf {noise}$$

Since \({\mathbf {G}}^{-1}(v_i {\mathbf {G}})\) has small norm, decryption succeeds when \(\langle \varvec{v}, \varvec{w} \rangle = 0\).
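The identity \(\mathsf {ct} _x \cdot {\mathbf {G}}^{-1}(y {\mathbf {G}}) = \mathsf {ct} _{xy}\) can be checked directly on toy parameters. The Python sketch below (insecure toy sizes of our own choosing, dense list-of-lists matrices) verifies both that \({\mathbf {G}}^{-1}(y{\mathbf {G}})\) has 0/1 entries and that the product has the claimed \(\mathbf {A}\mathbf {R}' + xy{\mathbf {G}}\) form:

```python
import random

q, n = 17, 2                # toy parameters, illustration only
k = q.bit_length()          # bits per Z_q entry
m = n * k

# gadget matrix G = g (x) I_n with g = (1, 2, ..., 2^{k-1})
G = [[(1 << (c // n)) if c % n == i else 0 for c in range(m)]
     for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[(a + b) % q for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(c, M):
    return [[(c * e) % q for e in row] for row in M]

def G_inv(M):
    """Bit-decompose each Z_q entry of an n x w matrix M (0/1 output)."""
    w = len(M[0])
    out = [[0] * w for _ in range(m)]
    for i in range(n):
        for c in range(w):
            for j in range(k):
                out[j * n + i][c] = (M[i][c] >> j) & 1
    return out

random.seed(0)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
R = [[random.choice([-1, 0, 1]) for _ in range(m)] for _ in range(m)]
x, y = 3, 5

ct_x = matadd(matmul(A, R), scale(x, G))   # ct_x = A R + x G
D = G_inv(scale(y, G))                     # G^{-1}(y G): 0/1 entries
assert all(e in (0, 1) for row in D for e in row)

lhs = matmul(ct_x, D)                                   # ct_x . G^{-1}(y G)
rhs = matadd(matmul(A, matmul(R, D)), scale(x * y, G))  # A R' + x y G
assert lhs == rhs
```

Because \(\mathbf {R}' = \mathbf {R} \cdot {\mathbf {G}}^{-1}(y{\mathbf {G}})\) is a product of small matrices, the noise stays small, which is exactly what the construction above exploits.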

Achieving public parameters of two matrices. Our final step is to bring the size of the public parameters (and ciphertext) down to a constant for \((t = \log \lambda )\)-length vectors. Inspired by recent work [6] on optimizing the size of public parameters in the IBE setting, we use their vector encoding method to further optimize our IPE construction. The encoding of a vector \(\varvec{v} \in \mathbb {Z}_q^t\) is

$$\mathbf {E}_{\varvec{v}} = \big [v_1 \mathbf {I}_n | \cdots | v_t \mathbf {I}_n \big ] \cdot {\mathbf {G}}_{tn, \ell , m}$$

where \({\mathbf {G}}_{tn, \ell , m} \in \mathbb {Z}_q^{tn \times m}\) is the generalized gadget matrix introduced in [6, 20]. The dimension of this generalized gadget matrix is \(tn \times tn\lceil \log _\ell q \rceil \). By setting \(t = \log q\) and \(\ell = n\), we obtain a column dimension similar to that of the original gadget matrix, i.e. \(O(n \log q)\). Then the \(\varvec{v}\)-specific lattice becomes

$$\mathbf {A}_{\varvec{v}} = \mathbf {A}_1 \cdot {\mathbf {G}}_{tn, \ell , m}^{-1} \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{t}\mathbf {I}_n \end{bmatrix} \cdot {\mathbf {G}}_{n, 2, m}\Bigg ) $$

and the \(\varvec{w}\)-specific ciphertext becomes

$$\varvec{c} = \varvec{s}^\mathsf {T}( \mathbf {A}_1 + \mathbf {E}_{\varvec{w}}) + \mathsf {noise}$$

The inner product can be computed in an SIMD way, as

$$ \varvec{c} \cdot {\mathbf {G}}_{tn, \ell , m}^{-1} \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{t}\mathbf {I}_n \end{bmatrix} \cdot {\mathbf {G}}_{n, 2, m}\Bigg ) \approx \varvec{s}^\mathsf {T}(\mathbf {A}_1\cdot {\mathbf {G}}_{tn, \ell , m}^{-1} \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{t}\mathbf {I}_n \end{bmatrix} {\mathbf {G}}_{n, 2, m}\Bigg ) + \langle \varvec{v}, \varvec{w} \rangle {\mathbf {G}}_{n, 2, m})$$

As such, our final IPE system contains only two matrices \((\mathbf {A}, \mathbf {A}_1)\) (and a vector \(\varvec{u}\)), and the ciphertext consists of two vectors. By carefully tweaking the vector encoding and the proof techniques of [2], we show that our IPE construction is weakly attribute-hiding. Our IPE system can also be extended in a “parallel repetition” manner to support \(\mathsf {poly}(n)\)-length vectors, as Corollary 1.3 states.

1.3 Related Work

In this section, we provide a comparison with the first IPE construction [2] and its follow-up improvement [24]. In [24], Xagawa used the “Full-Rank Difference encoding” proposed in [1] to map vectors in \(\mathbb {Z}_q^t\) to matrices in \(\mathbb {Z}_q^{n \times n}\). The size of the public parameters (and ciphertext) in his scheme depends linearly on the length of the predicate/attribute vectors, and the “Full-Rank Difference encoding” incurs more computational overhead than embedding the GSW-FHE structure in an IPE construction as described above. A detailed comparison is provided in Table 1 for length parameter \(t = \log \lambda \).

Table 1. Comparison of lattice-based IPE schemes

2 Preliminaries

Notation. Let \(\lambda \) be the security parameter, and let \(\textsc {ppt} \) denote probabilistic polynomial time. We use bold uppercase letters to denote matrices \(\mathbf{M}\), and bold lowercase letters to denote vectors \(\varvec{v}\). We write \(\widetilde{\mathbf {M}}\) to denote the Gram-Schmidt orthogonalization of \(\mathbf {M}\). We write [n] to denote the set \(\{1,\ldots ,n\}\), and \(|\varvec{t}|\) to denote the number of bits in the string \(\varvec{t}\). We denote the i-th bit of \(\varvec{s}\) by \(\varvec{s}[i]\). We say a function \(\mathsf {negl} (\cdot ): \mathbb {N}\rightarrow (0,1)\) is negligible if for every constant \(c \in \mathbb {N}\), \(\mathsf {negl} (n) < n^{-c}\) for sufficiently large n.

2.1 Inner Product Encryption

We recall the syntax and security definition of inner product encryption (IPE) [2, 19]. IPE can be regarded as a generalization of predicate encryption. An IPE scheme \(\varPi = (\mathsf {Setup}, \mathsf {KeyGen}, \mathsf {Enc}, \mathsf {Dec})\) can be described as follows:

  • \(\mathsf {Setup} (1^\lambda )\): On input the security parameter \(\lambda \), the setup algorithm outputs public parameters \(\mathsf {pp} \) and master secret key \(\mathsf {msk} \).

  • \(\mathsf {KeyGen} (\mathsf {msk}, \varvec{v})\): On input the master secret key \(\mathsf {msk} \) and a predicate vector \(\varvec{v}\), the key generation algorithm outputs a secret key \(\mathsf {sk} _{\varvec{v}}\) for vector \(\varvec{v}\).

  • \(\mathsf {Enc} (\mathsf {pp}, \varvec{w}, \mu )\): On input the public parameters \(\mathsf {pp} \) and an attribute/message pair \((\varvec{w}, \mu )\), it outputs a ciphertext \(\mathsf {ct} _{\varvec{w}}\).

  • \(\mathsf {Dec} (\mathsf {sk} _{\varvec{v}}, \mathsf {ct} _{\varvec{w}})\): On input the secret key \(\mathsf {sk} _{\varvec{v}}\) and a ciphertext \(\mathsf {ct} _{\varvec{w}}\), it outputs the corresponding plaintext \(\mu \) if \(\langle \varvec{v}, \varvec{w} \rangle = 0\); otherwise, it outputs \(\bot \).

Definition 2.1

(Correctness). We say the IPE scheme described above is correct if for any \((\mathsf {msk}, \mathsf {pp}) \leftarrow \mathsf {Setup} (1^\lambda )\), any message \(\mu \), any predicate vector \(\varvec{v} \in \mathbb {Z}_q^d\), and any attribute vector \(\varvec{w} \in \mathbb {Z}_q^d\) such that \(\langle \varvec{v}, \varvec{w}\rangle = 0\), we have \(\mathsf {Dec} (\mathsf {sk} _{\varvec{v}}, \mathsf {ct} _{\varvec{w}}) = \mu \), where \(\mathsf {sk} _{\varvec{v}} \leftarrow \mathsf {KeyGen} (\mathsf {msk}, \varvec{v})\) and \(\mathsf {ct} _{\varvec{w}} \leftarrow \mathsf {Enc} (\mathsf {pp}, \varvec{w}, \mu )\).

Security. We describe the weakly attribute-hiding property of IPE via the following experiment. Formally, for any \(\textsc {ppt} \) adversary \(\mathcal {A} \), we consider the experiment \(\mathbf {Expt} _{\mathcal {A}}^{\mathsf {IPE}}(1^\lambda )\):

  • Setup: Adversary \(\mathcal {A} \) sends two challenge attribute vectors \(\varvec{w}_{0}, \varvec{w}_{1} \in \mathbb {Z}_q^d\) to the challenger. The challenger runs the \(\mathsf {Setup} (1^\lambda )\) algorithm, and sends back the public parameters \(\mathsf {pp} \).

  • Query Phase I: Proceeding adaptively, the adversary \(\mathcal {A} \) queries a sequence of predicate vectors \((\varvec{v}_1,\ldots , \varvec{v}_m)\) subject to the restriction that \(\langle \varvec{v}_i, \varvec{w}_{0} \rangle \ne 0\) and \(\langle \varvec{v}_i, \varvec{w}_{1} \rangle \ne 0\). On the i-th query, the challenger runs \(\mathsf {sk} _{\varvec{v}_i} \leftarrow \mathsf {KeyGen} (\mathsf {msk}, \varvec{v}_i)\), and sends the result \(\mathsf {sk} _{\varvec{v}_i}\) to \(\mathcal {A} \).

  • Challenge: Once adversary \(\mathcal {A} \) decides that Query Phase I is over, it outputs two equal-length messages \((\mu ^*_0, \mu ^*_1)\) and sends them to the challenger. In response, the challenger selects a random bit \(b^* \in \{0,1\}\), and sends the ciphertext \(\mathsf {ct} ^* \leftarrow \mathsf {Enc} (\mathsf {pp}, \varvec{w}_{b^*}, \mu ^*_{b^*})\) to adversary \(\mathcal {A} \).

  • Query Phase II: Adversary \(\mathcal {A} \) continues to issue secret key queries \((\varvec{v}_{m + 1},\ldots , \varvec{v}_{n})\) adaptively, subject to the restriction that \(\langle \varvec{v}_i, \varvec{w}_{0} \rangle \ne 0\) and \(\langle \varvec{v}_i, \varvec{w}_{1} \rangle \ne 0\). The challenger responds by sending back keys \(\mathsf {sk} _{\varvec{v}_i}\) as in Query Phase I.

  • Guess: Adversary \(\mathcal {A} \) outputs a guess \(b' \in \{0,1\}\).

We note that the number of queries in Phases I and II can be polynomial in the security parameter. The advantage of adversary \(\mathcal {A} \) in attacking an IPE scheme \(\varPi \) is defined as:

$$\mathbf {Adv}_{\mathcal {A}}(1^\lambda ) = \left| \Pr [b^* = b'] - \frac{1}{2}\right| ,$$

where the probability is over the randomness of the challenger and adversary.

Definition 2.2

(Weakly attribute-hiding). We say an IPE scheme \(\varPi \) is weakly attribute-hiding against chosen-plaintext attacks in the selective attribute setting if for all \(\textsc {ppt} \) adversaries \(\mathcal {A} \) engaging in experiment \(\mathbf {Expt} _{\mathcal {A}}^{\mathsf {IPE}}(1^\lambda )\), we have

$$\mathbf {Adv}_{\mathcal {A}}(1^\lambda ) \le \mathsf {negl} (\lambda ).$$

2.2 LWE and Sampling Algorithms over Lattices

Learning with Errors. The LWE problem was introduced by Regev [22], who showed that solving LWE on average is as hard as (quantumly) solving GapSVP and SIVP in the worst case, under various parameter regimes.

Definition 2.3

(LWE). For an integer \(q = q(n) \ge 2\), and an error distribution \(\chi = \chi (n)\) over \(\mathbb {Z}_q\), the Learning With Errors problem \(\mathsf {LWE}_{n, m, q, \chi }\) is to distinguish between the following pairs of distributions (e.g. as given by a sampling oracle \(\mathcal {O}\in \{\mathcal {O}_{\varvec{s}}, \mathcal {O}_{\$}\}\)):

$$\{\mathbf {A}, \varvec{s}^\mathsf {T}\mathbf {A} + \varvec{x}^\mathsf {T}\} \ \text {and} \ \{\mathbf {A}, \varvec{u}\}$$

where \(\mathbf {A} \overset{\$}{\leftarrow }\mathbb {Z}^{n \times m}_q\), \(\varvec{s} \overset{\$}{\leftarrow } \mathbb {Z}^n_q\), \(\varvec{u} \overset{\$}{\leftarrow } \mathbb {Z}^m_q\), and \(\varvec{x} \overset{\$}{\leftarrow } \chi ^m\).
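A toy sampler for the distribution \(\mathcal {O}_{\varvec{s}}\) (with insecure, illustrative parameters of our own choosing and a bounded uniform error standing in for \(\chi \)) looks as follows; distinguishing its output from a uniform pair \((\mathbf {A}, \varvec{u})\) is exactly the LWE problem:

```python
import random

def lwe_sample(n, m, q, bound):
    """Sample (A, s^T A + x^T mod q) with uniform A, s and small error x."""
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    s = [random.randrange(q) for _ in range(n)]
    x = [random.randint(-bound, bound) for _ in range(m)]
    b = [(sum(s[i] * A[i][j] for i in range(n)) + x[j]) % q
         for j in range(m)]
    return A, b, s

random.seed(1)
n, m, q, bound = 8, 16, 97, 2
A, b, s = lwe_sample(n, m, q, bound)
# sanity check: b - s^T A mod q recovers the small (centered) error
resid = [(b[j] - sum(s[i] * A[i][j] for i in range(n))) % q
         for j in range(m)]
assert all(r <= bound or r >= q - bound for r in resid)
```

Of course, a real instantiation uses a discrete Gaussian \(\chi \) and much larger dimensions; the point here is only the shape of the two distributions.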

Two-Sided Trapdoors and Sampling Algorithms. We will use the following algorithms to sample short vectors from specified lattices.

Lemma 2.4

[5, 14]. Let q, n, m be positive integers with \(q\ge 2\) and sufficiently large \(m = \varOmega (n \log q)\). There exists a \(\textsc {ppt} \) algorithm \(\mathsf {TrapGen} (q, n, m)\) that outputs a pair \((\mathbf {A}\in \mathbb {Z}_q^{n\times m}, \mathbf {T}_\mathbf {A} \in \mathbb {Z}^{m\times m})\) such that \(\mathbf {A}\) is statistically close to uniform in \(\mathbb {Z}_q^{n\times m}\) and \(\mathbf {T}_\mathbf {A}\) is a basis for \(\varLambda _q^{\bot }(\mathbf {A})\) satisfying

$$||\mathbf {T}_\mathbf {A}||\le O(n\log q)\quad \text {and}\quad ||\widetilde{\mathbf {T}_{\mathbf {A}}}||\le O(\sqrt{n\log q})$$

except with \(\mathsf {negl}(n)\) probability.

Lemma 2.5

[1, 12, 14]. Let \(q>2\) and \(m>n\). There are two sampling algorithms as follows:

  • There is a \(\textsc {ppt}\) algorithm \(\mathsf {SampleLeft} (\mathbf {A}, \mathbf {B}, \mathbf {T}_{\mathbf {A}}, \varvec{u}, s)\) taking as input: (1) a rank-n matrix \(\mathbf {A}\in \mathbb {Z}_q^{n\times m}\) and any matrix \(\mathbf {B}\in \mathbb {Z}_q^{n\times m_1}\), (2) a “short” basis \(\mathbf {T}_{\mathbf {A}}\) for the lattice \(\varLambda _q^{\bot }(\mathbf {A})\) and a vector \(\varvec{u}\in \mathbb {Z}_q^n\), (3) a Gaussian parameter \(s > ||\widetilde{\mathbf {T}_{\mathbf {A}}}||\cdot \omega (\sqrt{\log (m+m_1)})\). It outputs a vector \(\varvec{r}\in \mathbb {Z}^{m+m_1}\) distributed statistically close to \(\mathcal {D}_{\varLambda _q^{\varvec{u}}(\mathbf {F}), s}\), where \(\mathbf {F}:=[\mathbf {A}|\mathbf {B}]\).

  • There is a \(\textsc {ppt}\) algorithm \(\mathsf {SampleRight} (\mathbf {A}, \mathbf {B}, \mathbf {R}, \mathbf {T}_{\mathbf {B}}, \varvec{u}, s)\) taking as input: (1) a matrix \(\mathbf {A}\in \mathbb {Z}_q^{n \times m}\), a rank-n matrix \(\mathbf {B}\in \mathbb {Z}_q^{n\times m}\), and a matrix \(\mathbf {R}\in \mathbb {Z}_q^{m \times m}\), where \(s_{\mathbf {R}} := ||\mathbf {R}|| = \sup _{\varvec{x} : ||\varvec{x}||=1}||\mathbf {R}\varvec{x}||\), (2) a “short” basis \(\mathbf {T}_{\mathbf {B}}\) for the lattice \(\varLambda _q^{\bot }(\mathbf {B})\) and a vector \(\varvec{u}\in \mathbb {Z}_q^n\), (3) a Gaussian parameter \(s > ||\widetilde{\mathbf {T}_{\mathbf {B}}}||\cdot {s_{\mathbf {R}}}\cdot \omega (\sqrt{\log {m}})\). It outputs a vector \(\varvec{r}\in \mathbb {Z}^{2m}\) distributed statistically close to \(\mathcal {D}_{\varLambda _q^{\varvec{u}}(\mathbf {F}), s}\), where \(\mathbf {F}:=[\mathbf {A}|\mathbf {A}\mathbf {R} + \mathbf {B}]\).

Gadget Matrix. We now recall the gadget matrix [4, 20] and the extended gadget matrix technique that appeared in [6], both of which are important to our construction.

Definition 2.6

Let \(m = n \cdot \lceil \log q \rceil \), and define the gadget matrix

$$\mathbf {G}_{n, 2, m} = \varvec{g} \otimes \mathbf {I}_n \in \mathbb {Z}_q^{n \times m}$$

where the vector \(\varvec{g} = (1, 2, 4,\ldots , 2^{\lfloor \log q \rfloor }) \in \mathbb {Z}_q^{\lceil \log q \rceil }\), and \(\otimes \) denotes the tensor product. We will also refer to this gadget matrix as the “powers-of-two” matrix. We define the inverse function \(\mathbf {G}^{-1}_{n, 2, m}: \mathbb {Z}_q^{n \times m} \rightarrow \{0,1\}^{m \times m}\), which expands each entry \(a \in \mathbb {Z}_q\) of the input matrix into a column of size \(\lceil \log q \rceil \) consisting of the bits of its binary representation. For any matrix \(\mathbf {A} \in \mathbb {Z}_q^{n \times m}\), it holds that \({\mathbf {G}}_{n, 2, m} \cdot {\mathbf {G}}^{-1}_{n, 2, m}(\mathbf {A}) = \mathbf {A}\).
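The defining identity \({\mathbf {G}}_{n, 2, m} \cdot {\mathbf {G}}^{-1}_{n, 2, m}(\mathbf {A}) = \mathbf {A}\) can be verified directly on toy sizes (the parameters below are illustrative choices of ours, not from the scheme):

```python
import random

q, n = 17, 2                       # toy sizes for illustration
k = q.bit_length()                 # ceil(log q) = 5 here
m = n * k

# G_{n,2,m} = g (x) I_n with g = (1, 2, ..., 2^{k-1})
G = [[(1 << (c // n)) if c % n == i else 0 for c in range(m)]
     for i in range(n)]

def G_inv(M):
    """Expand each Z_q entry of the n x m matrix M into its k bits."""
    out = [[0] * m for _ in range(m)]
    for i in range(n):
        for c in range(m):
            for j in range(k):
                out[j * n + i][c] = (M[i][c] >> j) & 1
    return out

random.seed(2)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
D = G_inv(A)
GA = [[sum(G[i][t] * D[t][c] for t in range(m)) % q for c in range(m)]
      for i in range(n)]
assert GA == A                                    # G . G^{-1}(A) = A over Z_q
assert all(e in (0, 1) for row in D for e in row)
```

The identity holds because every entry of \(\mathbf {A}\) is smaller than \(2^{\lceil \log q \rceil }\), so its bits recompose it exactly.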

As mentioned by [20] and explicitly described in [6], the results for \(\mathbf {G}_{n, 2, m}\) and its trapdoor can be extended to other integer powers or mixed-integer products; we write \({\mathbf {G}}_{tn, \ell , m}\) for such generalized gadget matrices.

3 Our Construction

In this section, we describe our compact IPE construction. Before diving into the details, we first revisit a novel encoding method implicitly employed in the adaptively secure IBE setting of [6]. Consider the vector space \(\mathbb {Z}_q^d\). For a vector \(\varvec{v} = (v_1,\ldots , v_d) \in \mathbb {Z}_q^d\), we define the following encoding algorithm, which maps a d-dimensional vector to an \(n \times m\) matrix:

$$\begin{aligned} \mathsf {encode}(\varvec{v}) = \mathbf {E}_{\varvec{v}} = \big [v_1 \mathbf {I}_n | \cdots | v_d \mathbf {I}_n \big ] \cdot {\mathbf {G}}_{dn, \ell , m} \end{aligned}$$
(1)

Similarly, we also define the encoding for an integer \(a \in \mathbb {Z}_q\) as: \(\mathsf {encode}(a) = \mathbf {E}_a = a {\mathbf {G}}_{n, 2, m}.\) The above encoding supports the vector space operations naturally, and our compact IPE construction relies on this property.
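The \(\mathbb {Z}_q\)-linearity of the encoding in Eq. (1) can be checked on toy sizes; the sketch below (our own illustrative parameters, base \(\ell = 2\) for simplicity) verifies \(\mathsf {encode}(\varvec{v}) + \mathsf {encode}(\varvec{w}) = \mathsf {encode}(\varvec{v} + \varvec{w})\):

```python
import random

q, n, d = 17, 2, 3
k = q.bit_length()
dn, m = d * n, d * n * k

# generalized gadget G_{dn,2,m} = g (x) I_{dn}, a dn x m matrix
G = [[(1 << (c // dn)) if c % dn == r else 0 for c in range(m)]
     for r in range(dn)]

def encode(v):
    """E_v = [v_1 I_n | ... | v_d I_n] . G_{dn,2,m}, an n x m matrix."""
    # S is the n x dn block matrix [v_1 I_n | ... | v_d I_n]
    S = [[v[t // n] if t % n == i else 0 for t in range(dn)]
         for i in range(n)]
    return [[sum(S[i][t] * G[t][c] for t in range(dn)) % q
             for c in range(m)] for i in range(n)]

random.seed(3)
v = [random.randrange(q) for _ in range(d)]
w = [random.randrange(q) for _ in range(d)]
vw = [(a + b) % q for a, b in zip(v, w)]
lhs = [[(a + b) % q for a, b in zip(r1, r2)]
       for r1, r2 in zip(encode(v), encode(w))]
assert lhs == encode(vw)      # the encoding is Z_q-linear
```

Since \(\varvec{v} \mapsto [v_1 \mathbf {I}_n | \cdots | v_d \mathbf {I}_n]\) is itself linear and the gadget matrix is fixed, linearity is immediate; the check above just makes it concrete.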

3.1 IPE Construction Supporting \(\log (\lambda )\)-Length Attributes

We describe our IPE scheme, in which each secret key is associated with a predicate vector \(\varvec{v} \in \mathbb {Z}^{d}_{q}\) (for some fixed \(d = \log \lambda \)) and each ciphertext is associated with an attribute vector \(\varvec{w} \in \mathbb {Z}^{d}_{q}\). Decryption succeeds if and only if \(\langle \varvec{v},\varvec{w} \rangle =0 \bmod q\). We further extend our IPE construction to support \(d = \mathsf {poly}(\lambda )\)-length vectors in Sect. 3.3. The description of \(\varPi = (\mathsf {Setup}, \mathsf {KeyGen}, \mathsf {Enc}, \mathsf {Dec})\) is as follows:

  • \(\mathsf {Setup} (1^\lambda , 1^d)\): On input the security parameter \(\lambda \) and the length parameter d, the setup algorithm first sets the parameters (q, n, m, s) as below. We assume the parameters (q, n, m, s) are implicitly included in both \(\mathsf {pp} \) and \(\mathsf {msk} \). It then generates a random matrix \(\mathbf {A} \in \mathbb {Z}_q^{n \times m}\) along with its trapdoor \(\mathbf {T}_{\mathbf {A}} \in \mathbb {Z}_q^{m \times m}\), using \((\mathbf {A}, \mathbf {T}_{\mathbf {A}}) \leftarrow \mathsf {TrapGen} (q, n, m)\). Next, it samples a random matrix \(\mathbf {B} \in \mathbb {Z}_q^{n \times m}\) and a random vector \(\varvec{u} \in \mathbb {Z}_q^n\). It outputs the public parameters \(\mathsf {pp} \) and master secret key \(\mathsf {msk} \) as

    $$\mathsf {pp} = (\mathbf {A}, \mathbf {B}, \varvec{u}), \qquad \mathsf {msk} = (\mathsf {pp}, \mathbf {T}_{\mathbf {A}})$$
  • \(\mathsf {KeyGen} (\mathsf {msk}, \varvec{v})\): On input the master secret key \(\mathsf {msk} \) and a predicate vector \(\varvec{v} = (v_1,\ldots , v_d) \in \mathbb {Z}_q^d\), the key generation algorithm first sets the matrix \(\mathbf {B}_{\varvec{v}}\) as

    $$\mathbf {B}_{\varvec{v}} = \mathbf {B} \cdot {\mathbf {G}}_{dn, \ell , m}^{-1} \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{d}\mathbf {I}_n \end{bmatrix} \cdot {\mathbf {G}}_{n, 2, m}\Bigg ) $$

    Then sample a low-norm vector \(\varvec{r}_{\varvec{v}} \in \mathbb {Z}^{2m}\) using the algorithm \(\mathsf {SampleLeft} (\mathbf {A}, \mathbf {B}_{\varvec{v}}, \mathbf {T}_{\mathbf {A}}, \varvec{u}, s)\), such that \([\mathbf {A} | \mathbf {B}_{\varvec{v}}] \cdot \varvec{r}_{\varvec{v}} = \varvec{u} \bmod q\). Output the secret key \(\mathsf {sk} _{\varvec{v}} = \varvec{r}_{\varvec{v}}\).

  • \(\mathsf {Enc} (\mathsf {pp}, \varvec{w}, \mu )\): On input the public parameters \(\mathsf {pp} \), an attribute vector \(\varvec{w} = (w_1,\ldots , w_d) \in \mathbb {Z}_q^d\) and a message \(\mu \in \{0,1\}\), the encryption algorithm first chooses a random vector \(\varvec{s} \in \mathbb {Z}_q^n\) and a random matrix \(\mathbf {R} \in \{-1, 1\}^{m \times m}\). It then encodes the attribute vector \(\varvec{w}\) as in Eq. (1):

    $$\mathbf {E}_{\varvec{w}} = \big [w_1 \mathbf {I}_n | \cdots | w_d \mathbf {I}_n \big ] \cdot {\mathbf {G}}_{dn, \ell , m}$$

    Let the ciphertext \(\mathsf {ct} _{\varvec{w}} = (\varvec{c}_0, \varvec{c}_1, c_2) \in \mathbb {Z}_q^{2m + 1}\) be

    $$\varvec{c}_0 = \varvec{s}^\mathsf {T}\mathbf {A}+\varvec{e}^\mathsf {T}_0, \quad \varvec{c}_1 = \varvec{s}^\mathsf {T}(\mathbf {B} + \mathbf {E}_{\varvec{w}})+ \varvec{e}_0^\mathsf {T}\mathbf {R}, \quad c_2 = \varvec{s}^\mathsf {T}\varvec{u} + e_1 + \lceil q / 2 \rceil \mu $$

    where errors \(\varvec{e}_0 \leftarrow \mathcal {D}_{\mathbb {Z}^m, s}, e_1 \leftarrow \mathcal {D}_{\mathbb {Z}, s}\).

  • \(\mathsf {Dec} (\mathsf {sk} _{\varvec{v}}, \mathsf {ct} _{\varvec{w}})\): On input the secret key \(\mathsf {sk} _{\varvec{v}} = \varvec{r}_{\varvec{v}}\) and ciphertext \(\mathsf {ct} _{\varvec{w}} = (\varvec{c}_0, \varvec{c}_1, c_2)\), if \(\langle \varvec{v}, \varvec{w} \rangle \ne 0 \bmod q\), then output \(\bot \). Otherwise, first compute

    $$\varvec{c}'_1 = \varvec{c}_1 \cdot {\mathbf {G}}_{dn, \ell , m}^{-1} \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{d}\mathbf {I}_n \end{bmatrix} \cdot {\mathbf {G}}_{n, 2, m}\Bigg )$$

    then output \(\mathsf {Round}(c_2 - \langle (\varvec{c}_0, \varvec{c}'_1), \varvec{r}_{\varvec{v}} \rangle )\).
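The decryption equation can be sanity-checked with all error terms set to zero. In the toy Python sketch below (our own simplification, not the real scheme: we avoid trapdoor sampling by picking the short vector \(\varvec{r}_{\varvec{v}}\) first and defining \(\varvec{u} = [\mathbf {A} | \mathbf {B}_{\varvec{v}}] \cdot \varvec{r}_{\varvec{v}}\)), \(c_2\) minus the inner product leaves exactly \(\lceil q/2 \rceil \mu \):

```python
import random

random.seed(4)
q, n, m = 97, 4, 8   # toy sizes, illustration only

def vec_mat(s, M):
    """Row vector s times an n x m matrix M over Z_q."""
    return [sum(s[i] * M[i][j] for i in range(n)) % q for j in range(m)]

A  = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
Bv = [[random.randrange(q) for _ in range(m)] for _ in range(n)]  # stands in for B . G^{-1}(...)
F  = [A[i] + Bv[i] for i in range(n)]                     # F = [A | B_v], n x 2m
r  = [random.choice([-1, 0, 1]) for _ in range(2 * m)]    # short key vector r_v
u  = [sum(F[i][j] * r[j] for j in range(2 * m)) % q for i in range(n)]

s, mu = [random.randrange(q) for _ in range(n)], 1
c0 = vec_mat(s, A)                  # noiseless c_0 = s^T A
c1 = vec_mat(s, Bv)                 # noiseless c_1' = s^T B_v (case <v,w> = 0)
c2 = (sum(s[i] * u[i] for i in range(n)) + ((q + 1) // 2) * mu) % q

inner = sum(c * rj for c, rj in zip(c0 + c1, r)) % q
diff = (c2 - inner) % q             # = ceil(q/2) * mu exactly (no noise)
assert diff == (q + 1) // 2         # Round(diff) decodes mu = 1
```

In the real scheme the difference additionally carries the small error \(e_1 - \langle (\varvec{e}_0, \varvec{e}'_0), \varvec{r}_{\varvec{v}} \rangle \), which rounding tolerates as long as it stays below \(q/4\).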

Lemma 3.1

The IPE scheme \(\varPi \) described above is correct (cf. Definition 2.1).

Proof

When the predicate vector \(\varvec{v}\) and attribute vector \(\varvec{w}\) satisfy \(\langle \varvec{v}, \varvec{w} \rangle = 0 \bmod q\), it holds that \(\varvec{c}'_1 = \varvec{s}^\mathsf {T}\mathbf {B}_{\varvec{v}} + \varvec{e}'_0\). Therefore, during decryption, we have

$$\mu ' = \mathsf {Round}\bigg (\lceil q / 2 \rceil \mu + \underbrace{e_1 - \langle (\varvec{e}_0, \varvec{e}'_0), \varvec{r}_{\varvec{v}} \rangle }_{\text {small}}\bigg ) = \mu \in \{0,1\}$$

The second equality holds provided \((e_1 - \langle (\varvec{e}_0, \varvec{e}'_0), \varvec{r}_{\varvec{v}} \rangle )\) is indeed small, which holds with high probability when the parameters are set appropriately, as below.    \(\square \)

Parameter Selection. To support \(d = \log (\lambda )\)-length predicate/attribute vectors, we set the system parameters according to Table 2, where \(\epsilon > 0\) is an arbitrarily small constant.

Table 2. \(\log (\lambda )\)-length IPE Parameters Setting

These values are chosen in order to satisfy the following constraints:

  • To ensure correctness, we require \(|e_1 - \langle (\varvec{e}_0, \varvec{e}'_0), \varvec{r}_{\varvec{v}} \rangle | < q/4\). Writing \(\varvec{r}_{\varvec{v}} = (\varvec{r}_1, \varvec{r}_2)\), we can bound the dominating term:

    $$ |\varvec{e}_{0}^{'\mathsf {T}} \varvec{r}_{2}| \le || \varvec{e}_{0}^{'\mathsf {T}}|| \cdot ||\varvec{r}_{2} || \approx s \sqrt{m} d \ell \log _\ell q \cdot s \sqrt{m}= s^2 m n^{1 + \epsilon } < q/4$$
  • For \(\mathsf {SampleLeft}\), we know \(||\widetilde{\mathbf {T}_{\mathbf {A}}}|| = O(\sqrt{n\log (q)})\), so the sampling width s must satisfy \(s > \sqrt{n\log (q)}\cdot \omega (\sqrt{\log (m)})\). For \(\mathsf {SampleRight} \), we need \(s > ||\widetilde{\mathbf {T}_{{\mathbf {G}}_{n, 2, m}}}|| \cdot ||\mathbf {R}|| \cdot \omega (\sqrt{\log m}) = n^{1 + \epsilon } \omega (\sqrt{\log m})\). To apply Regev’s reduction, we need \(s > \sqrt{n}\omega (\log (n))\) (s here is an absolute value, not a ratio). Therefore, we need \(s > n^{1 + \epsilon }\).

  • To apply the Leftover Hash Lemma, we need \(m\ge (n+1)\log (q) + \omega (\log (n)).\)

3.2 Security Proof

In this part, we show the weakly attribute-hiding property of our IPE construction. We adapt the simulation technique of [2] by plugging in our encoding of vectors. Intuitively, to prove the theorem we define a sequence of hybrids against an adversary \(\mathcal {A}\) in the weak attribute-hiding experiment. The adversary \(\mathcal {A} \) outputs two attribute vectors \(\varvec{w}_{0}\) and \(\varvec{w}_{1}\) at the beginning of each game, and at some point outputs two messages \(\mu _{0},\mu _{1}\). The first and last games correspond to the real security game with challenge ciphertexts \(\mathsf {Enc} (\mathsf {pp},\varvec{w}_{0},\mu _{0})\) and \(\mathsf {Enc} (\mathsf {pp},\varvec{w}_{1},\mu _{1})\), respectively. In the intermediate games we use the “alternative” simulation algorithms \((\mathsf {Sim}.\mathsf {Setup},\mathsf {Sim}.\mathsf {KeyGen}, \mathsf {Sim}.\mathsf {Enc})\). During the course of the game the adversary can only request keys for predicate vectors \(\varvec{v}_i\) such that \(\langle \varvec{v}_i, \varvec{w}_{0} \rangle \ne 0\) and \(\langle \varvec{v}_i, \varvec{w}_{1} \rangle \ne 0\). We first define the simulation algorithms \((\mathsf {Sim}.\mathsf {Setup},\mathsf {Sim}.\mathsf {KeyGen}, \mathsf {Sim}.\mathsf {Enc})\):

  • \(\mathsf {Sim}.\mathsf {Setup} (1^\lambda , 1^d, \varvec{w}^*)\): On input the security parameter \(\lambda \), the length parameter d, and an attribute vector \(\varvec{w}^{*} \in \mathbb {Z}^{d}_{q}\), the simulation setup algorithm first chooses a random matrix \(\mathbf {A} \leftarrow \mathbb {Z}_q^{n \times m}\) and a random vector \(\varvec{u} \leftarrow \mathbb {Z}_q^n\). Then set matrix

    $$\mathbf {B} = \mathbf {A} \mathbf {R}^* - \mathbf {E}_{\varvec{w}^*}, \quad \mathbf {E}_{\varvec{w}^*} = \big [w^*_1 \mathbf {I}_n | \cdots | w^*_d \mathbf {I}_n \big ] \cdot {\mathbf {G}}_{dn, \ell , m}$$

    where matrix \(\mathbf {R}^*\) is chosen randomly from \(\{-1, 1\}^{m \times m}\). Output \(\mathsf {pp} = (\mathbf {A}, \mathbf {B}, \varvec{u})\) and \(\mathsf {msk} = \mathbf {R}^*\).

  • \(\mathsf {Sim}.\mathsf {KeyGen} (\mathsf {msk}, \varvec{v})\): On input the master secret key \(\mathsf {msk} \) and a vector \(\varvec{v} \in \mathbb {Z}_q^d\), the simulation key generation algorithm sets matrix \(\mathbf {R}_{\varvec{v}}\) and \(\mathbf {B}_{\varvec{v}}\) as

    $$\mathbf {R}_{\varvec{v}} = \Bigg (\begin{bmatrix} v_{1}\mathbf {I}_n \\ \vdots \\ v_{d}\mathbf {I}_n \end{bmatrix} \cdot {\mathbf {G}}_{n, 2, m}\Bigg ), \quad \mathbf {B}_{\varvec{v}} = \mathbf {B} \cdot {\mathbf {G}}_{dn, \ell , m}^{-1}(\mathbf {R}_{\varvec{v}})$$

    Then sample a low-norm vector \(\varvec{r}_{\varvec{v}} \in \mathbb {Z}^{2m}\) using algorithm

    $$\varvec{r}_{\varvec{v}} \leftarrow \mathsf {SampleRight} (\mathbf {A}, \langle \varvec{v}, \varvec{w}^* \rangle {\mathbf {G}}_{n, 2, m}, \mathbf {R}^*{\mathbf {G}}_{dn, \ell , m}^{-1}(\mathbf {R}_{\varvec{v}}), \mathbf {T}_{{\mathbf {G}}_{n, 2, m}}, \varvec{u}, s)$$

    such that \([\mathbf {A} | \mathbf {B}_{\varvec{v}}] \cdot \varvec{r}_{\varvec{v}} = \varvec{u} \bmod q\). Output secret key \(\mathsf {sk} _{\varvec{v}} = \varvec{r}_{\varvec{v}}\).

  • \(\mathsf {Sim}.\mathsf {Enc} (\mathsf {pp}, \varvec{w}^*, \mu )\): The simulation encryption algorithm is the same as its counterpart in the real scheme, except that the matrix \(\mathbf {R}^*\) from \(\mathsf {msk} \) is used in generating the ciphertext instead of a freshly sampled random matrix \(\mathbf {R} \in \{-1, 1\}^{m \times m}\).

Due to the space limit, we include the proof of the following theorem in the full version.

Theorem 3.2

Under the \((n, q, \chi )\)-LWE assumption, the IPE scheme described above is weakly attribute-hiding (cf. Definition 2.2).

3.3 IPE Construction Supporting \(\mathsf {poly}(\lambda )\)-length Vectors

We also extend our IPE construction to support \(t = \mathsf {poly}(\lambda )\)-length vectors, i.e., predicate and attribute vectors chosen from the vector space \(\mathbb {Z}_q^t\). Intuitively, the construction described below can be regarded as a \(t' = \lceil t / d \rceil \)-fold “parallel repetition” of the IPE construction for \(d = \log (\lambda )\)-length vectors. In particular, we encode each length-\(\log (\lambda )\) block of the vector separately, and then concatenate these encodings to form the encoding of the whole vector. Due to the space limit, we include the detailed scheme and proof in the full version.