
Integer LWE with Non-subgaussian Error and Related Attacks

Conference paper published in Information Security (ISC 2021), part of the book series Lecture Notes in Computer Science, volume 13118.

Abstract

This paper focuses on the security of lattice-based Fiat-Shamir signatures in leakage scenarios; more specifically, on how to recover the complete private key after obtaining a large number of noisy linear equations in the private key without modular reduction. Such a set of equations can be obtained, for example, as in [5], by attacking the rejection sampling step with a side-channel attack. That paper refers to the mathematical problem of recovering the secret vector from this structure as the ILWE problem and proves that it can be solved by the least squares method. A similar mathematical structure was obtained in [13] by leaking a single bit at certain specific positions of the randomness.

However, the ILWE problem requires the error term to be subgaussian, which is not always the case in practice. This paper therefore extends the original ILWE problem to the non-subgaussian ILWE problem, proves that it can be solved by the least squares method combined with a correction factor, and gives two attack scenarios: an attack exploiting leakage of lower bits of the randomness than in [13], and an attack on a careless implementation of the randomness. In the lower-bit randomness leakage case, our experiments succeed with leaked bits 2 or 3 positions lower than those in [13], and in the careless implementation attack we recover the private key successfully even when rejection sampling partially fails.
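The recovery procedure described above, least squares followed by a scalar correction factor, can be illustrated with a toy numerical sketch. This is not the paper's implementation: the perturbation here is the simple linear map \(f(y) = -0.2y\), for which the correction factor \(\lambda = \sigma _1^2/\sigma _a^2\) equals 0.8 exactly, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 20000
sigma_a, sigma_e = 1.0, 2.0

s = rng.integers(-2, 3, size=n)            # small secret vector
A = rng.normal(0.0, sigma_a, size=(m, n))  # sample matrix
f = lambda y: -0.2 * y                     # toy perturbation (illustrative choice)
e = rng.normal(0.0, sigma_e, size=m)       # subgaussian noise
b = A @ s + f(A @ s) + e                   # noisy equations without modular reduction

s_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # plain least squares converges to lambda * s
lam = 0.8                                  # correction factor lambda = sigma1^2 / sigma_a^2
s_hat = np.rint(s_ls / lam).astype(int)    # rescale by 1/lambda and round to recover s
print(np.array_equal(s_hat, s))
```

With enough samples the least squares estimate concentrates around \(\lambda \mathbf{s}\), so dividing by \(\lambda \) and rounding coordinate-wise recovers the secret exactly.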

This work is supported in part by National Natural Science Foundation of China (No. U1936209, No. 61632020, No. 62002353 and No. 61732021) and Beijing Natural Science Foundation (No. 4192067).


Notes

  1. According to our assumption, the distribution of \(\mathbf{Pa}\) can also be treated as \(\chi _a\), so the latter set of samples can also be treated as sampled from the non-subgaussian ILWE distribution \(\mathcal {D}_{\mathbf{s},\chi _a,\chi _e,f}\).

  2. “\(\approx \)” means “converges to” as m tends to infinity.

  3. In fact \(\mathbb {E}([y]_{2^{l}})=-\frac{2^l}{2^{\gamma +1}-1}\langle \mathbf {s,\bar{c}}\rangle \) is close to 0, since \(2^{\gamma } \gg 2^l\). So \([y_i]_{2^l}\) can be regarded as subgaussian.

  4. More precisely, it is the FS-ILWE problem presented in [13], but due to their similarity, we view this problem as ILWE.

  5. In fact, the nonce in [7] is also Gaussian, but the signature is of the form \(z = y + (-1)^bsc\), which is different from our case.

  6. \(\mathbf{P} = (\mathbf{p}_1,\mathbf{p}_2,\ldots ,\mathbf{p}_n)^T\) can be constructed as follows: \(\mathbf{p}_1 = (s_1/\Vert \mathbf{s}\Vert _2,s_2/\Vert \mathbf{s}\Vert _2,\ldots ,s_n/\Vert \mathbf{s}\Vert _2)^T\), and \(\mathbf{p}_2,\mathbf{p}_3,\ldots ,\mathbf{p}_n\) form an orthonormal basis of \(Span\{{\mathbf{p}_1}\}^{\perp }\).

  7. We have omitted an infinitesimal term, which has only a tiny impact. Similar simplifications are applied in other proofs.

References

  1. Bindel, N., et al.: qTESLA. Submission to the NIST Post-Quantum Cryptography Standardization (2017). https://tesla.informatik.tu-darmstadt.de/de/tesla/

  2. Bleichenbacher, D.: On the generation of one-time keys in DL signature schemes. In: Presentation at IEEE P1363 Working Group Meeting, p. 81 (2000)


  3. Boneh, D., Durfee, G.: Cryptanalysis of RSA with private key d less than N\(^{0.292}\). In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 1–11. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48910-X_1


  4. Boneh, D., Venkatesan, R.: Hardness of computing the most significant bits of secret keys in Diffie-Hellman and related schemes. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 129–142. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-68697-5_11


  5. Bootle, J., Delaplace, C., Espitau, T., Fouque, P.-A., Tibouchi, M.: LWE without modular reduction and improved side-channel attacks against BLISS. In: Peyrin, T., Galbraith, S. (eds.) ASIACRYPT 2018. LNCS, vol. 11272, pp. 494–524. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03326-2_17


  6. De Mulder, E., Hutter, M., Marson, M.E., Pearson, P.: Using Bleichenbacher’s solution to the hidden number problem to attack nonce leaks in 384-bit ECDSA. In: Bertoni, G., Coron, J.-S. (eds.) CHES 2013. LNCS, vol. 8086, pp. 435–452. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40349-1_25


  7. Ducas, L., Durmus, A., Lepoint, T., Lyubashevsky, V.: Lattice signatures and bimodal Gaussians. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8042, pp. 40–56. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40041-4_3


  8. Espitau, T., Fouque, P.A., Gérard, B., Tibouchi, M.: Side-channel attacks on BLISS lattice-based signatures: exploiting branch tracing against strongswan and electromagnetic emanations in microcontrollers. In: CCS, pp. 1857–1874. ACM, New York (2017)


  9. Groot Bruinderink, L., Hülsing, A., Lange, T., Yarom, Y.: Flush, gauss, and reload – a cache attack on the BLISS lattice-based signature scheme. In: Gierlichs, B., Poschmann, A.Y. (eds.) CHES 2016. LNCS, vol. 9813, pp. 323–345. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53140-2_16


  10. Heninger, N., Shacham, H.: Reconstructing RSA private keys from random key bits. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 1–17. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_1


  11. Howgrave-Graham, N.A., Smart, N.P.: Lattice attacks on digital signature schemes. Des. Codes Crypt. 23(3), 283–290 (2001)


  12. Hsu, D., Kakade, S.M., Zhang, T.: Tail inequalities for sums of random matrices that depend on the intrinsic dimension. Electron. Commun. Probab. 17(14), 1–13 (2012)


  13. Liu, Y., Zhou, Y., Sun, S., Wang, T., Zhang, R., Ming, J.: On the security of lattice-based Fiat-Shamir signatures in the presence of randomness leakage. IEEE Trans. Inf. Forensics Secur. 16, 1868–1879 (2020)


  14. Lyubashevsky, V.: Lattice signatures without trapdoors. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 738–755. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_43


  15. Lyubashevsky, V., et al.: CRYSTALS-Dilithium. Submission to the NIST Post-Quantum Cryptography Standardization (2017). https://pq-crystals.org/dilithium

  16. Nguyen, P.Q., Shparlinski, I.E.: The insecurity of the digital signature algorithm with partially known nonces. J. Cryptol. 15(3), 151–176 (2002)


  17. Nguyen, P.Q., Shparlinski, I.E.: The insecurity of the elliptic curve digital signature algorithm with partially known nonces. Des. Codes Crypt. 30(2), 201–217 (2003)


  18. Pessl, P., Bruinderink, L.G., Yarom, Y.: To BLISS-B or not to be: attacking strongswan’s implementation of post-quantum signatures. In: CCS, pp. 1843–1855. ACM, New York (2017)


  19. Ravi, P., Jhanwar, M.P., Howe, J., Chattopadhyay, A., Bhasin, S.: Side-channel assisted existential forgery attack on Dilithium - a NIST PQC candidate. Cryptology ePrint Archive, Report 2018/821 (2018). https://eprint.iacr.org/2018/821

Corresponding author

Correspondence to Yuejun Liu.


A Proof of Proposition 1

Reduce the General Case to the Special Case. This step reduces any non-subgaussian ILWE problem to a special non-subgaussian ILWE problem in which the secret satisfies \(\mathbf{s} = (s_1,0,\ldots ,0)^T\). Let \((\mathbf{A}, \mathbf{b})\) be m samples from the non-subgaussian ILWE distribution \(\mathcal {D}_{\mathbf{s},\chi _a,\chi _e,f}\), \(\mathbf{e}' = \mathbf{e} + \mathbf{f(As)}\) and \(\mathbf{P} = (\mathbf{p}_1,\mathbf{p}_2,\ldots ,\mathbf{p}_n)^T\in O_n(\mathbb {R})\); then \(\mathbf{b}=(\mathbf{A}{} \mathbf{P}^T)(\mathbf{P}{} \mathbf{s})+\mathbf{e}'\). In particular, there exists an orthogonal transformation \(\mathbf{P}\) such that \(\mathbf{P}{} \mathbf{s} = (\Vert \mathbf{s}\Vert _2,0,\ldots ,0)^T\).Footnote 6 As we have illustrated above, the result of the least squares method is

$$\begin{aligned}&(\mathbf{A}^T\mathbf{A})^{-1}{} \mathbf{A}^T\mathbf{b} = \mathbf{P}^T((\mathbf{A}{} \mathbf{P}^T)^T\mathbf{A}{} \mathbf{P}^T)^{-1}(\mathbf{A}{} \mathbf{P}^T)^T\mathbf{b}. \end{aligned}$$

If we can prove \(((\mathbf{A}{} \mathbf{P}^T)^T\mathbf{A}{} \mathbf{P}^T)^{-1}(\mathbf{A}{} \mathbf{P}^T)^T\mathbf{b} \approx \lambda \mathbf{Ps} = (\lambda \Vert \mathbf{s}\Vert _2,0,\ldots ,0)^T,\) then \((\mathbf{A}^T\mathbf{A})^{-1}{} \mathbf{A}^T\mathbf{b}\approx \mathbf{P}^T(\lambda \Vert \mathbf{s}\Vert _2,0,\ldots ,0)^T = \lambda \mathbf{s}.\)

Notice that \(((\mathbf{A}{} \mathbf{P}^T)^T\mathbf{A}{} \mathbf{P}^T)^{-1}(\mathbf{A}{} \mathbf{P}^T)^T\mathbf{b}\) is the least squares estimator for the samples \((\mathbf{AP}^T, \mathbf{b}=\mathbf{AP}^T\mathbf{Ps}+\mathbf{e}')\), where \(\mathbf{Ps} = (\Vert \mathbf{s}\Vert _2,0,\ldots ,0)^T\). Since the distribution of the random vector \(\mathbf{Pa}\) can be treated as the distribution of \(\mathbf{a}\), we have transformed a general non-subgaussian ILWE problem into a special one in which only the first component of \(\mathbf{s}\) is nonzero. In the rest of this paper, \(\mathbf{s}=(s_1,0,\ldots ,0)^T\) and \(\mathbf{a}_i\) is written as \(({ a}_i^{(1)},{ a}_i^{(2)},\ldots ,{ a}_i^{(n)})^T\).

Next, combine the terms \(\langle \mathbf{a}_i,\mathbf{s}\rangle \) and \(f(\langle \mathbf{a}_i,\mathbf{s}\rangle )\) and rewrite \(\langle \mathbf{a}_i,\mathbf{s}\rangle + f(\langle \mathbf{a}_i,\mathbf{s}\rangle )\) as \(\langle r(a_i^{(1)})\mathbf{a}_i,\mathbf{s}\rangle \). Now \(b_i = \langle r(a_i^{(1)})\mathbf{a}_i,\mathbf{s}\rangle + e_i\) and the error becomes subgaussian, where \(r_i = r(a_i^{(1)}) = \frac{a_i^{(1)}s_1 + f(a_i^{(1)}s_1)}{a_i^{(1)}s_1}\). Let \(\mathbf{R} = diag(r_1,r_2,\ldots ,r_m)\); then these samples can be rewritten in the following matrix form

$$\begin{aligned} (\mathbf{A},\mathbf{b}=\mathbf{RA}{} \mathbf{s}+\mathbf{e}). \end{aligned}$$
(3)

Estimate \(\boldsymbol{\lambda }\). We restate that the result of the least squares method is

$$\begin{aligned} (\mathbf{A}^T\mathbf{A})^{-1}{} \mathbf{A}^T\mathbf{b}=(\mathbf{A}^T\mathbf{A})^{-1}{} \mathbf{A}^T\mathbf{e} + (\mathbf{A}^T\mathbf{A})^{-1}(\mathbf{A}^T\mathbf{RA})\mathbf{s}. \end{aligned}$$

According to Theorem 1, \((\mathbf{A}^T\mathbf{A})^{-1}{} \mathbf{A}^T\mathbf{e}\) tends to zero, so what remains to be analyzed is

$$(\mathbf{A}^T\mathbf{A})^{-1}(\mathbf{A}^T\mathbf{RA})\mathbf{s}.$$

Let \((\mathbf{A}^T\mathbf{A})^{-1}=\mathbf{M}_1\) and \(\mathbf{A}^T\mathbf{RA}=\mathbf{M}_2\); we now analyze \(\mathbf{M}_2\). Since \(\mathbf{R} = diag(r_1,r_2,\ldots ,r_m)\), we have \(\mathbf{R}^{\frac{1}{2}} = diag(\sqrt{r_1},\sqrt{r_2},\ldots ,\sqrt{r_m})\) and \(\mathbf{M}_2 = (\mathbf{R}^{\frac{1}{2}}{} \mathbf{A})^T(\mathbf{R}^{\frac{1}{2}}{} \mathbf{A}) \). The rows of \(\mathbf{R}^{\frac{1}{2}}{} \mathbf{A}=\mathbf{V}\) no longer follow the distribution \(\chi _a^n\). In order to obtain tail inequalities for the spectral radius of \(\mathbf{V}^T\mathbf{V}\), we split \(\mathbf{V}^T\mathbf{V}\) as \(\sum _{k=1}^{m}{} \mathbf{v}_k\mathbf{v}_k^T\), where \(\mathbf{V} = (\mathbf{v}_1,\mathbf{v}_2,\ldots ,\mathbf{v}_m)^T\) and \(\mathbf{v}_k = r_k^{\frac{1}{2}}{} \mathbf{a}_k\). The entry in row i and column j of the matrix \(\mathbf{V}^T\mathbf{V}\) can be expressed as \(\sum _{k=1}^{m}v_k^{(i)}v_k^{(j)} = \sum _{k=1}^{m}r_ka_k^{(i)}a_k^{(j)};\) then

$$\begin{aligned}&\mathbb {E}[r_ka_k^{(i)}a_k^{(j)}] = 0&i\ne j\\&\mathbb {E}[r_ka_k^{(i)}a_k^{(j)}] = \mathbb {E}[r_k]\sigma _a^2=\sigma _2^2&i = j\ne 1\\&\mathbb {E}[r_ka_k^{(1)}a_k^{(1)}] = \sigma _1^2&i = j = 1. \end{aligned}$$

According to the law of large numbers, the matrix \(m\mathbf{M}_1\) will converge to \(\sigma _a^{-2}{} \mathbf{I}_n\) and \(\frac{1}{m}{} \mathbf{M}_2\) will converge to the n-dimensional diagonal matrix \(diag(\sigma _1^2, \sigma _2^2,\ldots ,\sigma _2^2)\). We claim that \(\mathbf{M}_1\mathbf{M}_2\mathbf{s}\) will converge to

$$(\frac{1}{m}\sigma _a^{-2}{} \mathbf{I}_n) (m\cdot diag(\sigma _1^2, \sigma _2^2,\ldots ,\sigma _2^2))\mathbf{s} = (\sigma _1/\sigma _a)^2\mathbf{s}.$$
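This convergence claim is easy to check numerically. The sketch below is illustrative (not from the paper): it uses standard normal entries (so \(\sigma _a = 1\)), a secret of the special form \((s_1,0,\ldots ,0)^T\), and the bounded function \(r(x) = 1\) for \(x > 0\) and 0.5 otherwise, for which \(\sigma _1^2 = \mathbb {E}[r(x)x^2] = 0.75\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 50000
s = np.zeros(n)
s[0] = 3.0                                # special-case secret (s1, 0, ..., 0)
A = rng.normal(size=(m, n))               # sigma_a = 1

r = np.where(A[:, 0] > 0, 1.0, 0.5)       # bounded r depending only on a_i^(1)
M1 = np.linalg.inv(A.T @ A)               # (A^T A)^{-1}
M2 = A.T @ (r[:, None] * A)               # A^T R A with R = diag(r_1, ..., r_m)
est = M1 @ M2 @ s

sigma1_sq = 0.75                          # E[r(x) x^2] for standard normal x
print(np.max(np.abs(est - sigma1_sq * s)))  # deviation is small for large m
```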

Formally speaking, we want to prove the following proposition:

Proposition 2

Let \(\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_m\) be random vectors in \(\mathbb {R}^n\) whose components are independently and identically distributed with standard deviation \(\sigma _x\), and let \(\mathbf{s} = (s_1,0,\ldots ,0)^T\in \mathbb {Z}^n\). If for some \(\gamma \ge 0\),

$$\mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{x}_i)]\le \exp (\Vert \mathbf{\alpha }\Vert _2^2\gamma /2),\quad \forall \alpha \in \mathbb {R}^n$$

for all \(i = 1,2,\ldots ,m\), then for any \(a>0 \) and \(t>0\), when

$$m > M = 2^9\cdot (n \ln 9 + t)\gamma ^2a^2|s_1|^2\max \{\frac{\max \{\sigma _1,\sigma _2\}^2}{\sigma _x^8},\frac{K^2\max \{\sigma _1,\sigma _2\}^4}{\min \{\sigma _1,\sigma _2\}^2}\},$$
$$\mathrm {Pr}[\Vert (\sum _{i=1}^m\mathbf{x}_i\mathbf{x}_i^T)^{-1}(\sum _{i=1}^mr(x_i^{(1)})\mathbf{x}_i\mathbf{x}_i^T)\mathbf{s} - \frac{\sigma _1^2}{\sigma _x^2}{} \mathbf{s}\Vert _2 \le 1/a]\ge 1 - 4e^{-t}$$

where \(x_i^{(1)}\) is the first component of the random vector \(\mathbf{x}_i\), r is a bounded function satisfying \(\mathbb {E}[r(x_i^{(1)})(x_i^{(1)})^2] \ne 0\) and \(|r| < K\) for some \(K > 0\), \(\sigma _1^2 = \mathbb {E}[r(x_i^{(1)})(x_i^{(1)})^2]\) and \(\sigma _2^2 = \mathbb {E}[r(x_i^{(1)})]\sigma _x^2\).

The proof will be given at the end of this section. Using this proposition, we can prove Proposition 1.

Proof

According to Theorem 1 and Proposition 2, we can choose \(m \ge M\) and \(t = \ln (8n)\), then

$$\begin{aligned}&\mathrm {Pr}[\Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}\cdot \mathbf{A} ^T\mathbf{e} '\Vert _{\infty }<1/4]&\ge 1 - 1/2n\\&\mathrm {Pr}[\Vert (\sum _{i=1}^m\mathbf{a}_i\mathbf{a}_i^T)^{-1}(\sum _{i=1}^mr(a_i^{(1)})\mathbf{a}_i\mathbf{a}_i^T)\mathbf{s} - \frac{\sigma _1^2}{\sigma _a^2}{} \mathbf{s}\Vert _2 < 1/4]&\ge 1 - 1/2n, \end{aligned}$$

where each component of \(\mathbf{e}'\) satisfies \( e' = e + f(\langle \mathbf{a},\mathbf{s}\rangle )\); thus the least squares estimator satisfies

$$\begin{aligned}&\Vert \tilde{\mathbf{s}} - \sigma _1^2/\sigma _a^2\mathbf{s}\Vert _{\infty } = \Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}\cdot \mathbf{A} ^T\mathbf{b} - \sigma _1^2/\sigma _a^2\mathbf{s}\Vert _{\infty }\\&\le \Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}\cdot \mathbf{A} ^T\mathbf{e} '\Vert _{\infty } + \Vert (\sum _{i=1}^m\mathbf{a}_i\mathbf{a}_i^T)^{-1}(\sum _{i=1}^mr(a_i^{(1)})\mathbf{a}_i\mathbf{a}_i^T)\mathbf{s} - \sigma _1^2/\sigma _a^2\mathbf{s}\Vert _{\infty } \end{aligned}$$
$$\begin{aligned}&\mathrm{then}\quad \Pr [\Vert \tilde{\mathbf{s}} - \sigma _1^2/\sigma _a^2\mathbf{s}\Vert _{\infty }< 1/2]\\&\ge \Pr [\Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}{} \mathbf{A} ^T\mathbf{e} '\Vert _{\infty } + \Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}(\sum _{i=1}^mr(a_i^{(1)})\mathbf{a}_i\mathbf{a}_i^T)\mathbf{s} - \frac{\sigma _1^2}{\sigma _a^2}{} \mathbf{s}\Vert _{\infty }< 1/2]\\&\ge \Pr [\Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}{} \mathbf{A} ^T\mathbf{e} '\Vert _{\infty }<\frac{1}{4}]\cdot \Pr [\Vert (\mathbf{A} ^{T}{} \mathbf{A} )^{-1}(\sum _{i=1}^mr(a_i^{(1)})\mathbf{a}_i\mathbf{a}_i^T)\mathbf{s} - \frac{\sigma _1^2}{\sigma _a^2}{} \mathbf{s}\Vert _{\infty } < \frac{1}{4}]\\&\ge (1-1/2n)^2 = 1-\frac{1}{n}+\frac{1}{4n^2}.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \blacksquare \end{aligned}$$

The bound of Proposition 1 is relatively loose. In fact, when the function f is close to the zero function, the number of samples required in practice is almost the same as the bound in Theorem 1.

To prove Proposition 2, we need to bound the operator norm of a sum of random matrices. Fortunately, the following lemma in [12] provides a method to solve this problem.

Lemma 3

(Lemma A.1 in [12]). Let \(\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_m\) be random vectors in \(\mathbb {R}^n\) such that, for some \(\gamma \ge 0\),

$$\mathbb {E}[\mathbf{x}_i\mathbf{x}_i^T|\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_{i-1}]=\mathbf{I}_n \quad and$$
$$\mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{x}_i)|\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_{i-1}]\le \exp (\Vert \mathbf{\alpha }\Vert _2^2\gamma /2),\quad \forall \alpha \in \mathbb {R}^n$$

for all \(i = 1,2,\ldots ,m\), almost surely. For all \(\epsilon _0\in (0,1/2)\) and \(t>0\),

$$\mathrm {Pr}[\Vert (\sum _{i=1}^m\frac{1}{m}{} \mathbf{x}_i\mathbf{x}_i^T) - \mathbf{I}_n\Vert _2^{op}>\frac{1}{1-2\epsilon _0}\cdot \varepsilon _{\epsilon _0,t,m}]\le 2e^{-t}$$

where

$$\varepsilon _{\epsilon _0,t,m}:=\gamma \cdot (\sqrt{\frac{32(n\ln (1+2/\epsilon _0)+t)}{m}}+\frac{2(n\ln (1+2/\epsilon _0)+t)}{m}).$$
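As a numerical sanity check of this tail bound (an illustrative sketch, with \(\epsilon _0 = 1/4\), \(t = 1\), and standard normal vectors, which satisfy the hypotheses with \(\gamma = 1\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, t, eps0 = 8, 40000, 1.0, 0.25
gamma = 1.0                                # standard normal vectors are 1-subgaussian

X = rng.normal(size=(m, n))
dev = np.linalg.norm(X.T @ X / m - np.eye(n), 2)   # operator-norm deviation from I_n

c = n * np.log(1 + 2 / eps0) + t
eps = gamma * (np.sqrt(32 * c / m) + 2 * c / m)    # epsilon_{eps0, t, m} from the lemma
bound = eps / (1 - 2 * eps0)
print(dev < bound)
```

The empirical deviation typically falls well below the bound, consistent with the failure probability \(2e^{-t}\).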

Using Lemma 3, we can obtain the two corollaries below.

Corollary 1

Let \(\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_m\) be random vectors in \(\mathbb {R}^n\) such that, for some \(\gamma \ge 0\),

$$\mathbb {E}[\mathbf{x}_i\mathbf{x}_i^T|\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_{i-1}]=\sigma _x^2\mathbf{I}_n \quad and$$
$$\mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{x}_i)|\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_{i-1}]\le \exp (\Vert \mathbf{\alpha }\Vert _2^2\gamma /2),\quad \forall \alpha \in \mathbb {R}^n$$

for all \(i = 1,2,\ldots ,m\), almost surely. For all \(\epsilon _0\in (0,1/2)\) and \(t>0\),

$$\mathrm {Pr}[\Vert (\sum _{i=1}^m\frac{1}{m}{} \mathbf{x}_i\mathbf{x}_i^T)^{-1} - {\sigma _x^{-2}}{} \mathbf{I}_n\Vert _2^{op}>\frac{\sigma _x^{-2}}{1-2\epsilon _0}\cdot \varepsilon _{\epsilon _0,t,m}]\le 2e^{-t}$$

where

$$\varepsilon _{\epsilon _0,t,m}:=\sigma _x^{-2}\gamma \cdot (\sqrt{\frac{32(n\ln (1+2/\epsilon _0)+t)}{m}}+\frac{2(n\ln (1+2/\epsilon _0)+t)}{m}).$$

Proof

Let \(\mathbf{y}_i = \frac{1}{\sigma _x}{} \mathbf{x}_i\) for \(i\in \{1,2,\ldots ,m\}\) and \(\mathbf{A}_1 = \sum _{i=1}^m\mathbf{y}_i\mathbf{y}_i^T\), so \(\mathbb {E}[\frac{1}{m}{} \mathbf{A}_1]=\mathbf{I}_n\) and \(\mathbf{y}_i\) is \((\sqrt{\gamma }/\sigma _x)\)-subgaussian. Let \(\mathbf{\varDelta A}_1 = \frac{1}{m}{} \mathbf{A}_1 - \mathbf{I}_n\), so that \(\mathbf{I}_n+\varDelta \mathbf{A}_1 = \frac{1}{m}{} \mathbf{A}_1\). Notice that \(\mathbf{I}_n = (m\cdot \mathbf{A}_1^{-1})(\mathbf{I}_n+\varDelta \mathbf{A}_1) = m\cdot \mathbf{A}_1^{-1} + m\cdot \mathbf{A}_1^{-1}\varDelta \mathbf{A}_1\), so it is clear that

$$\begin{aligned} \Vert \mathbf{I}_n - m\cdot \mathbf{A}_1^{-1}\Vert _2^{op} \le m\Vert \mathbf{A}_1^{-1}\Vert _2^{op}\Vert \mathbf{\varDelta A}_1\Vert _2^{op}. \end{aligned}$$
(4)

Furthermore,

$$\begin{aligned}&m\Vert \mathbf{A}_1^{-1}\Vert _2^{op} = \Vert \mathbf{I}_n - m\cdot \mathbf{A}_1^{-1}\varDelta \mathbf{A}_1\Vert _2^{op} \le \Vert \mathbf{I}_n\Vert _2^{op}+m\Vert \mathbf{A}_1^{-1}\Vert _2^{op}\Vert \varDelta \mathbf{A}_1\Vert _2^{op} \end{aligned}$$

so

$$\begin{aligned} m\Vert \mathbf{A}_1^{-1}\Vert _2^{op}\le \frac{1}{1-\Vert \varDelta \mathbf{A}_1\Vert ^{op}_2}. \end{aligned}$$
(5)

Combining Eqs. 4 and 5, we have

$$\begin{aligned} \Vert \mathbf{I}_n - m\cdot \mathbf{A}_1^{-1}\Vert _2^{op} \le \frac{\Vert \varDelta \mathbf{A}_1\Vert ^{op}_2}{1-\Vert \varDelta \mathbf{A}_1\Vert ^{op}_2}. \end{aligned}$$
(6)

According to Lemma 3, we have:Footnote 7

$$\begin{aligned}&\mathrm {Pr}[\Vert (\sum _{i=1}^m\frac{1}{m}{} \mathbf{x}_i\mathbf{x}_i^T)^{-1} - {\sigma _x^{-2}}{} \mathbf{I}_n\Vert _2^{op}>\frac{\sigma _x^{-2}}{1-2\epsilon _0}\cdot \varepsilon _{\epsilon _0,t,m}]\\&\le \mathrm {Pr}[\Vert \varDelta \mathbf{A}_1\Vert ^{op}_2 > \frac{1}{1-2\epsilon _0}\cdot \varepsilon _{\epsilon _0,t,m}] \le 2e^{-t}. \qquad \qquad \qquad \qquad \blacksquare \end{aligned}$$
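Corollary 1 can likewise be checked empirically: for \(\sigma _x \ne 1\), the inverse of the empirical second-moment matrix should concentrate around \(\sigma _x^{-2}{} \mathbf{I}_n\). A minimal sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma_x = 8, 40000, 2.0
X = rng.normal(0.0, sigma_x, size=(m, n))

inv = np.linalg.inv(X.T @ X / m)                       # (sum x_i x_i^T / m)^{-1}
dev = np.linalg.norm(inv - np.eye(n) / sigma_x**2, 2)  # deviation from sigma_x^{-2} I_n
print(dev)
```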

Corollary 2

Let \(\mathbf{x}_i = (x_i^{(1)},x_i^{(2)},\ldots ,x_i^{(n)})^T, i\in \{1,\ldots ,m\}\) be m random vectors whose components are independent random variables with standard deviation \(\sigma \), and

$$\exists \gamma \ge 0,\quad \mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{x}_i)|\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_{i-1}]\le \exp (\Vert \mathbf{\alpha }\Vert _2^2\gamma /2),\quad \forall \alpha \in \mathbb {R}^n$$

for all \(i = 1,2,\ldots ,m\), almost surely. For all \(\epsilon _0\in (0,1/2)\) and \(t>0\),

$$\mathrm {Pr}[\Vert (\sum _{i=1}^m\frac{1}{m}r(x_i^{(1)})\mathbf{x}_i\mathbf{x}_i^T) - diag(\sigma _1^2,\sigma _2^2,\ldots ,\sigma _2^2)\Vert _2^{op}>\frac{\max \{\sigma _1,\sigma _2\}^2}{(1-2\epsilon _0)}\cdot \varepsilon _{\epsilon _0,t,m}]\le 2e^{-t}$$

where r is a function satisfying \(\mathbb {E}[r(x_i^{(1)})(x_i^{(1)})^2] \ne 0\), \(|r(x_i^{(1)})| < K\),

\(\sigma _1^2 = \mathbb {E}[r(x_i^{(1)})(x_i^{(1)})^2]\), \(\sigma _2^2 = \mathbb {E}[r(x_i^{(1)})]\sigma ^2\) and

$$\varepsilon _{\epsilon _0,t,m}:=K\gamma /\min \{\sigma _1,\sigma _2\}^2\cdot (\sqrt{\frac{32(n\ln (1+2/\epsilon _0)+t)}{m}}+\frac{2(n\ln (1+2/\epsilon _0)+t)}{m}).$$

Proof

Let \(\mathbf{v}_i = r(x_i^{(1)})^{1/2}\cdot \mathbf{x}_i\); we can prove \(\mathbb {E}[\mathbf{v}_i\mathbf{v}_i^T]=diag(\sigma _1^2, \sigma _2^2,\ldots ,\sigma _2^2) = \mathbf{\Lambda }\). Then \(\mathbb {E}[\mathbf{\Lambda }^{-1/2}{} \mathbf{v}_i(\mathbf{\Lambda }^{-1/2}{} \mathbf{v}_i)^T]=\mathbf{I}_n\) and

$$\begin{aligned}&\mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{\Lambda }^{-1/2}{} \mathbf{v}_i)] = \mathbb {E}[\exp (\mathbf {\alpha }^T\mathbf{\Lambda }^{-1/2}r(x_i^{(1)})^{1/2}{} \mathbf{x}_i)]\\&\le \exp (\Vert \mathbf{\Lambda }^{-1/2}{} \mathbf{\alpha }\Vert _2^2K\gamma /2)\le \exp (\Vert \mathbf{\Lambda }^{-1}\Vert _2^{op}\Vert \mathbf{\alpha }\Vert _2^2K\gamma /2),\quad \forall \alpha \in \mathbb {R}^n. \end{aligned}$$

This implies that \(\mathbf{\Lambda }^{-1/2}{} \mathbf{v}_i\) is a \(\Vert \mathbf{\Lambda }^{-1/2}\Vert _2^{op}\sqrt{K\gamma }\)-subgaussian random vector. Let \(\mathbf{A}_2 = \sum _{i=1}^mr(x_i^{(1)})\mathbf{x}_i\mathbf{x}_i^T\); then the rest is similar to Corollary 1.      \(\blacksquare \)
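The rescaling at the heart of this proof can also be checked empirically: for \(\mathbf{v}_i = r(x_i^{(1)})^{1/2}{} \mathbf{x}_i\), the sample covariance should be close to \(\mathbf{\Lambda } = diag(\sigma _1^2,\sigma _2^2,\ldots ,\sigma _2^2)\). A minimal sketch with \(\sigma = 1\) and a bounded r of our own choosing, where \(\sigma _1^2\) and \(\sigma _2^2\) are estimated from the same samples:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 200000
X = rng.normal(size=(m, n))               # sigma = 1
r = 1.0 / (1.0 + X[:, 0] ** 2)            # bounded r with 0 < r <= 1 (illustrative)

V = np.sqrt(r)[:, None] * X               # v_i = r(x_i^(1))^{1/2} x_i
C = V.T @ V / m                           # empirical E[v v^T]

sigma1_sq = np.mean(r * X[:, 0] ** 2)     # estimate of E[r(x) x^2]
sigma2_sq = np.mean(r)                    # estimate of E[r(x)] (sigma = 1)
Lam = np.diag([sigma1_sq] + [sigma2_sq] * (n - 1))
print(np.max(np.abs(C - Lam)))            # small for large m
```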

Now we can prove Proposition 2.

Proof

Let \(\mathbf{A}_1 = \sum _{i=1}^m\mathbf{x}_i\mathbf{x}_i^T\) and \(\mathbf{A}_2 = \sum _{i=1}^mr(x_i^{(1)})\mathbf{x}_i\mathbf{x}_i^T\), then \(\mathbb {E}[\mathbf{x}_i\mathbf{x}_i^T]=\sigma _x^2\mathbf{I}_n\),

\(\mathbb {E}[r(x_i^{(1)})\mathbf{x}_i\mathbf{x}_i^T] = diag(\sigma _1^2, \sigma _2^2,\ldots ,\sigma _2^2) = \mathbf{\Lambda }\), and

$$\begin{aligned}&\Vert (m\mathbf{A}_1^{-1})\cdot (\frac{1}{m}{} \mathbf{A}_2)\mathbf{s}-\sigma _x^{-2}{} \mathbf{I}_n \cdot \mathbf{\Lambda }{} \mathbf{s}\Vert _2\\&= \Vert (m\mathbf{A}_1^{-1})\cdot (\frac{1}{m}{} \mathbf{A}_2)\mathbf{s} - (m\mathbf{A}_1^{-1}) \cdot \mathbf{\Lambda }{} \mathbf{s} + (m\mathbf{A}_1^{-1}) \cdot \mathbf{\Lambda }{} \mathbf{s} - \sigma _x^{-2}{} \mathbf{I}_n \cdot \mathbf{\Lambda }{} \mathbf{s}\Vert _2\\&\le (\Vert m\mathbf{A}_1^{-1}\Vert _2^{op}\cdot \Vert \frac{1}{m}{} \mathbf{A}_2 - \mathbf{\Lambda }\Vert _2^{op} + \Vert m\mathbf{A}_1^{-1} - \sigma _x^{-2}{} \mathbf{I}_n \Vert _2^{op}\cdot \Vert \mathbf{\Lambda }\Vert _2^{op})\cdot |s_1|. \end{aligned}$$

Using Corollaries 1 and 2 with \(\epsilon _0 = 1/4\) and M as in the proposition, we have

$$\begin{aligned}&\qquad \mathrm {Pr}[\Vert m\mathbf{A}_1^{-1} - \sigma _x^{-2}{} \mathbf{I}_n\Vert ^{op}_2< \frac{1}{2a\max \{\sigma _1,\sigma _2\}|s_1|}]> 1-2e^{-t}\\&\qquad \mathrm {Pr}[\Vert \frac{1}{m}{} \mathbf{A}_2 - \mathbf{\Lambda }\Vert _2^{op} < \frac{1}{2a\min \{\sigma _1,\sigma _2\}|s_1|}] > 1-2e^{-t}, \end{aligned}$$
$$\begin{aligned}&\mathrm{so} \quad \mathrm {Pr}[\Vert (m\mathbf{A}_1^{-1})\cdot (\frac{1}{m}{} \mathbf{A}_2)\mathbf{s}-\sigma _x^{-2}{} \mathbf{I}_n \cdot \mathbf{\Lambda }{} \mathbf{s}\Vert _2< \frac{1}{a}]\\&> \mathrm {Pr}[\Vert m\mathbf{A}_1^{-1}\Vert _2^{op}\Vert \frac{1}{m}{} \mathbf{A}_2 - \mathbf{\Lambda }\Vert _2^{op}|s_1|< \frac{1}{2a}] \times \mathrm {Pr}[\Vert m\mathbf{A}_1^{-1} - \sigma _x^{-2}{} \mathbf{I}_n \Vert _2^{op}\Vert \mathbf{\Lambda }\Vert _2^{op}|s_1|< \frac{1}{2a}]\\&> \mathrm {Pr}[\Vert \frac{1}{m}{} \mathbf{\Lambda }^{-1/2}{} \mathbf{A}_2\mathbf{\Lambda }^{-1/2} - \mathbf{I}_n\Vert _2^{op}< \frac{1+\Vert \varDelta \mathbf{A}_1\Vert ^{op}_2}{2a\min \{\sigma _1,\sigma _2\}|s_1|}]\\&\;\times \mathrm {Pr}[\Vert m\mathbf{A}_1^{-1} - \sigma _x^{-2}{} \mathbf{I}_n \Vert _2^{op} < \frac{1}{2a\max \{\sigma _1,\sigma _2\}|s_1|}]\\&> (1-2e^{-t})^2 > 1-4e^{-t}.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \blacksquare \end{aligned}$$

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, T., Liu, Y., Xu, J., Hu, L., Tao, Y., Zhou, Y. (2021). Integer LWE with Non-subgaussian Error and Related Attacks. In: Liu, J.K., Katsikas, S., Meng, W., Susilo, W., Intan, R. (eds) Information Security. ISC 2021. Lecture Notes in Computer Science, vol 13118. Springer, Cham. https://doi.org/10.1007/978-3-030-91356-4_1


  • DOI: https://doi.org/10.1007/978-3-030-91356-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91355-7

  • Online ISBN: 978-3-030-91356-4
