
Solving the Learning Parity with Noise Problem Using Quantum Algorithms

Conference paper in Progress in Cryptology - AFRICACRYPT 2022 (AFRICACRYPT 2022).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13503).

Abstract

The Learning Parity with Noise (LPN) problem is a well-known cryptographic problem which consists in recovering a secret from noisy samples. It is usually solved via reduction techniques: one reduces the original instance to a smaller one, substitutes the recovered unknowns back, and repeats the process. An extensive body of work has considered time-memory trade-offs, optimal chains of reductions and alternative solving techniques, but hardly any of it involves quantum algorithms. In this work, we study the improvements brought by quantum computers when attacking the LPN search problem in the sparse noise regime. Our primary contribution is a novel, efficient quantum algorithm based on Grover’s algorithm which searches for permutations achieving specific error patterns. This algorithm non-asymptotically outperforms the known techniques in a low-noise regime while using a low amount of memory.

B. Tran—Supported by the Swiss National Science Foundation (SNSF) through project grant No. 192364 on Post-Quantum Cryptography.


Notes

  1.

    Here, it is important to keep the O(n) factor as it will influence the quantum algorithm presented later.

  2.

    The o(1) term in the exponent was actually missing in [11], but may be negligible in practice since we are usually interested in the logarithmic complexity.

  3.

    In [11], the cost of this partitioning step, namely O(n), was not considered.

  4.

    In [11], the binomial coefficients were wrongly written as \(\left( {\begin{array}{c}w\\ i\end{array}}\right) \) instead of \(\left( {\begin{array}{c}b\\ i\end{array}}\right) \).

References

  1. Akavia, A.: Learning noisy characters, MPC, and cryptographic hardcore predicates. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, USA (2008)

  2. Asaka, R., Sakai, K., Yahagi, R.: Quantum circuit for the fast Fourier transform. Quantum Inf. Process. 19(8), 1–20 (2020). https://doi.org/10.1007/s11128-020-02776-5

  3. Becker, A., Joux, A., May, A., Meurer, A.: Decoding random binary linear codes in \(2^{n/20}\): how \(1 + 1 = 0\) improves information set decoding. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 520–536. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_31

  4. Bernstein, D.J.: Optimizing linear maps modulo 2 (2009). http://binary.cr.yp.to/linearmod2-20090830.pdf

  5. Bernstein, D.J.: Grover vs. McEliece. In: Sendrier, N. (ed.) PQCrypto 2010. LNCS, vol. 6061, pp. 73–80. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12929-2_6

  6. Bernstein, D.J., Lange, T.: Never trust a bunny. In: Hoepman, J.-H., Verbauwhede, I. (eds.) RFIDSec 2012. LNCS, vol. 7739, pp. 137–148. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36140-1_10

  7. Bernstein, D.J., Lange, T., Peters, C.: Smaller decoding exponents: ball-collision decoding. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 743–760. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_42

  8. Bleichenbacher, D.: On the generation of one-time keys in DL signature schemes (2000). https://blog.cr.yp.to/20191024-bleichenbacher.pdf

  9. Blum, A., Kalai, A., Wasserman, H.: Noise-tolerant learning, the parity problem, and the statistical query model. CoRR cs.LG/0010022 (2000)

  10. Bogos, S., Tramèr, F., Vaudenay, S.: On solving LPN using BKW and variants. IACR Cryptology ePrint Archive 2015, 49 (2015)

  11. Bogos, S., Vaudenay, S.: Optimization of LPN solving algorithms. Cryptology ePrint Archive, Report 2016/288 (2016). https://ia.cr/2016/288

  12. Bogos, S.M.: LPN in Cryptography: an algorithmic study. Ph.D. thesis, Lausanne (2017). http://infoscience.epfl.ch/record/228977

  13. Choi, G.: Applying the SFT algorithm for cryptography (2017). https://lasec.epfl.ch/intranet/projects/year16_17/Fall-16_17_Gwangbae_Choi_Applying_The_SFT/report.pdf. Access on demand

  14. Dachman-Soled, D., Gong, H., Kippen, H., Shahverdi, A.: BKW meets Fourier: new algorithms for LPN with sparse parities. In: Nissim, K., Waters, B. (eds.) TCC 2021. LNCS, vol. 13043, pp. 658–688. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-90453-1_23

  15. Esser, A., Kübler, R., May, A.: LPN decoded. Cryptology ePrint Archive, Report 2017/078 (2017)

  16. Galbraith, S.D., Laity, J., Shani, B.: Finding significant Fourier coefficients: clarifications, simplifications, applications and limitations. Chic. J. Theor. Comput. Sci. 2018 (2018)

  17. Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC 1996, pp. 212–219. Association for Computing Machinery, New York (1996). https://doi.org/10.1145/237814.237866

  18. Guo, Q., Johansson, T., Löndahl, C.: Solving LPN using covering codes. In: Sarkar, P., Iwata, T. (eds.) ASIACRYPT 2014. LNCS, vol. 8873, pp. 1–20. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-45611-8_1

  19. Hallgren, S., Vollmer, U.: Quantum computing. In: Bernstein, D.J., Buchmann, J., Dahmen, E. (eds.) Post-Quantum Cryptography. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-88702-7_2

  20. Jiao, L.: Specifications and improvements of LPN solving algorithms. IET Inf. Secur. 14(1), 111–125 (2020). https://doi.org/10.1049/iet-ifs.2018.5448

  21. Kachigar, G., Tillich, J.-P.: Quantum information set decoding algorithms. In: Lange, T., Takagi, T. (eds.) PQCrypto 2017. LNCS, vol. 10346, pp. 69–89. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59879-6_5

  22. Lee, P.J., Brickell, E.F.: An observation on the security of McEliece’s public-key cryptosystem. In: Barstow, D., et al. (eds.) EUROCRYPT 1988. LNCS, vol. 330, pp. 275–280. Springer, Heidelberg (1988). https://doi.org/10.1007/3-540-45961-8_25

  23. Levieil, É., Fouque, P.-A.: An improved LPN algorithm. In: De Prisco, R., Yung, M. (eds.) SCN 2006. LNCS, vol. 4116, pp. 348–359. Springer, Heidelberg (2006). https://doi.org/10.1007/11832072_24

  24. May, A., Meurer, A., Thomae, E.: Decoding random linear codes in \(\tilde{\cal{O}}(2^{0.054n})\). In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 107–124. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25385-0_6

  25. May, A., Ozerov, I.: On computing nearest neighbors with applications to decoding of binary linear codes. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 203–228. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46800-5_9

  26. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, Cambridge (2010). https://doi.org/10.1017/CBO9780511976667

  27. Prange, E.: The use of information sets in decoding cyclic codes. IRE Trans. Inf. Theory 8(5), 5–9 (1962). https://doi.org/10.1109/TIT.1962.1057777

  28. Wagner, D.: A generalized birthday problem. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 288–304. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45708-9_19

  29. Wiggers, T., Samardjiska, S.: Practically solving LPN. In: IEEE International Symposium on Information Theory, ISIT 2021, pp. 2399–2404. IEEE (2021). https://doi.org/10.1109/ISIT45174.2021.9518109

  30. Xie, Z., Qiu, D., Cai, G.: Quantum algorithms on Walsh transform and Hamming distance for Boolean functions. Quantum Inf. Process. 17(6), 1–17 (2018). https://doi.org/10.1007/s11128-018-1885-y


Author information

Correspondence to Bénédikt Tran.

Appendices

A Reductions

Algorithms for solving LPN are tuned so as to minimize their time complexity. In the literature, authors usually brute-force the parameter space in order to find the best parameters. Most LPN solving techniques fit the same framework: the original \(\textsf{LPN}(k,\tau ,s)\) instance is reduced to a smaller \(\textsf{LPN}(k',\tau ',s')\) instance whose secret \(s'\) is recovered by a solving algorithm. The queries are then updated accordingly and the process is repeated until the original secret is completely recovered. In this section, we restrict our attention to some well-studied reductions \(\pi \) and describe the update rule

$$(k,n,\delta _\tau ,\delta _s) \mathop {\longrightarrow }\limits ^{\pi } (k',n',\delta '_\tau ,\delta '_s),$$

where \(k' = \pi (k)\) is the updated secret size, \(n' = \pi (n)\) the updated number of queries, \(\delta '_\tau =\pi (\delta _\tau )\) the updated noise bias and \(\delta '_s = \pi (\delta _s)\) the updated secret bias. The updated noise rate \(\tau '\) is recovered via \(\tau ' = \frac{1-\delta '_\tau }{2} = \pi (\tau )\). When available, the classical and quantum time (resp. memory) complexities of a reduction \(\pi \) are denoted by \({\boldsymbol{\tau }}_\pi \) (resp. \({\boldsymbol{\mu }}_\pi \)) and \({\boldsymbol{\tau }}_{q,\pi }\) (resp. \({\boldsymbol{\mu }}_{q,\pi }\)) respectively. Unless stated otherwise, the reductions that we present can be found in the existing literature, such as [10] or [11], with their memory complexities in [29]. In this section, we recall the reductions presented in [11]; these reductions were used to construct the chains (see Appendix B for the formal description) yielding the results in Table 1.

A.1 Reduction: Sparse-Secret

The sparse-secret reduction, described by Algorithm 3, transforms an LPN instance with \(\delta _s = 0\) (corresponding to a uniformly distributed secret) into an LPN instance where the secret bits, like the noise bits, follow a \(\texttt{Ber}_\tau \) distribution. To that end, the idea is to consider a portion of the noise vector as the new secret, at the cost of dropping some of the queries. The time complexity of the reduction depends on the choice of the underlying matrix multiplication algorithm. There exist two versions of the reduction, one given by Guo, Johansson and Löndahl in [18] and the other by Bernstein in [4]. We assume that inverting a \(k\times k\) binary matrix takes \(k^{\omega +o(1)}\) field operations, where \(\omega \ge 2\) is the matrix multiplication exponentFootnote 2. Since this reduction is typically applied only once, this term can usually be ignored.
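For concreteness, the linear-algebra step underlying the reduction can be sketched as follows (a minimal Python/NumPy sketch, not the optimized versions of [4, 18]; for simplicity we assume the first k queries happen to be linearly independent, whereas a real implementation would search for an invertible submatrix):

```python
import numpy as np

def gf2_inv(A):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.concatenate([A % 2, np.eye(n, dtype=np.uint8)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        assert pivot is not None, "matrix is singular over GF(2)"
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]          # eliminate column col in row r
    return M[:, n:]

def sparse_secret(queries, labels):
    """Rewrite the LPN instance so that the new secret is the noise vector
    e_A on the first k queries: if c_A = A s + e_A, then for any other
    query (a, c) one has c + a A^{-1} c_A = (a A^{-1}) e_A + e."""
    k = queries.shape[1]
    A, c_A = queries[:k].astype(np.uint8), labels[:k]
    A_inv = gf2_inv(A)
    a_rest, c_rest = queries[k:], labels[k:]
    new_a = (a_rest @ A_inv) % 2        # a' = a A^{-1}
    new_c = (c_rest + new_a @ c_A) % 2  # c' = c + <a A^{-1}, c_A>
    return new_a.astype(np.uint8), new_c.astype(np.uint8)
```

Since the noise bits are \(\texttt{Ber}_\tau \)-distributed, so is the new secret \(e_A\), which is the point of the reduction.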

[Algorithm 3: the sparse-secret reduction]

If we were to replace the classical algorithm with a quantum one, we would essentially need to perform Boolean matrix multiplications. While there exist quantum algorithms that improve on the classical time complexity, they usually depend on the sparsity of the input or output matrices. As such, a quantum algorithm equivalent to Algorithm 3 would need to be designed differently. On the other hand, this reduction is usually applied at most once and has polynomial complexity, hence there is no advantage in making it quantum.

A.2 Reduction: Part-Reduce (LF1) and Xor-Reduce (LF2)

Notation 2

Given \(\mathcal {Q}\subseteq {\textbf{Z}}_2^{k+1}\) and \(I\subseteq \llbracket k\rrbracket \), we denote by \(\sim _I\) the equivalence relation defined over \(\mathcal {Q}\times \mathcal {Q}\) by \(\psi \sim _I\psi '\) if and only if \(\psi _I = \psi '_I\). The corresponding canonical projection is denoted by \(\psi \mapsto [\psi ]_I\).

The \(\mathrm {\texttt {part-reduce}}(b)\) reduction, also called the \(\mathrm {\texttt {LF1}}(b)\) reduction, is the original reduction in the BKW algorithm [9]; it consists in partitioning the set of queries according to some equivalence relation \(\sim _I\). More precisely, an indexation set \(I\subseteq \llbracket k\rrbracket \) of size b is picked uniformly at random and the queries \(\psi \in \mathcal {Q}\) are sorted according to \([\psi ]_I\). Then, for each equivalence class, one fixes a representative and XORs it with the rest of the class before dropping it. The effective size of the secret is then reduced by b bits, at the cost of discarding \(2^b\) queries. By effective size, we mean that we ignore the bits at positions \(i\in I\) since they would always be XORed with zero bits. The noise bias \(\delta ' = \delta _{\tau '}\) is amplified as \(\delta ' = \delta ^2_\tau \) while the secret bias \(\delta _s\) remains the same.
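As an illustration, the partition-and-XOR step can be sketched in plain Python (a toy sketch with queries as bit tuples; the helper name `part_reduce` is ours, not from the paper):

```python
from collections import defaultdict

def part_reduce(queries, labels, I):
    """LF1: partition the queries by their bits at positions I, then XOR a
    fixed representative into the rest of its class and drop it."""
    classes = defaultdict(list)
    for a, c in zip(queries, labels):
        classes[tuple(a[i] for i in I)].append((a, c))
    new_q, new_c = [], []
    for members in classes.values():
        rep_a, rep_c = members[0]       # representative of the class
        for a, c in members[1:]:
            new_q.append(tuple(x ^ y for x, y in zip(a, rep_a)))
            new_c.append(c ^ rep_c)
    return new_q, new_c
```

Every surviving query is zero on the positions of I, so those b secret bits no longer matter.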

[Algorithm: the part-reduce (LF1) reduction]

Another reduction similar to \(\mathrm {\texttt {part-reduce}}(b)\) is the \(\mathrm {\texttt {xor-reduce}}(b)\) reduction, also called the \(\mathrm {\texttt {LF2}}(b)\) reduction, introduced by Levieil and Fouque in [23]. Instead of XORing a single representative with the rest of the class, \(\mathrm {\texttt {xor-reduce}}(b)\) applies a pairwise XOR over the whole equivalence class. The effective size of the secret is then reduced by b bits, while the expected number of queries \(n' = \frac{n(n-1)}{2^{b+1}}\) increases if \(n > 1 + 2^{b+1}\), remains unchanged if \(n \approx 1 + 2^{b+1}\) and decreases otherwise. According to practical experiments, \(\mathrm {\texttt {xor-reduce}}(b)\) performs better than \(\mathrm {\texttt {part-reduce}}(b)\), even though it relies on heuristic assumptions.
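The pairwise variant can be sketched in the same toy fashion (queries as bit tuples; the helper name `xor_reduce` is ours):

```python
from collections import defaultdict
from itertools import combinations

def xor_reduce(queries, labels, I):
    """LF2: XOR every pair of queries lying in the same class modulo ~_I."""
    classes = defaultdict(list)
    for a, c in zip(queries, labels):
        classes[tuple(a[i] for i in I)].append((a, c))
    new_q, new_c = [], []
    for members in classes.values():
        for (a1, c1), (a2, c2) in combinations(members, 2):
            new_q.append(tuple(x ^ y for x, y in zip(a1, a2)))
            new_c.append(c1 ^ c2)
    return new_q, new_c
```

A class of size m yields \(\binom{m}{2}\) new queries, which gives the expected total \(n' = \frac{n(n-1)}{2^{b+1}}\) when the n queries spread uniformly over the \(2^b\) classes.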

[Algorithm: the xor-reduce (LF2) reduction]

A partitionFootnote 3 for \(\sim _I\) can be constructed by picking any query \(\psi \in \mathcal {Q}\) and searching for all queries in the same equivalence class. The process is repeated until all queries are exhausted. On average, there are \(2^b\) equivalence classes, each containing \(\frac{n}{2^b}\) queries, so that the partitioning would be achieved in quantum time complexity \({\boldsymbol{\tau }}_{q,\pi } \approx \sum \limits _{m=0}^{2^b-1}\dfrac{n-\frac{mn}{2^b}}{2^{b/2}} = \dfrac{n(2^b+1)}{2^{1+b/2}} > n\). Therefore, using Grover’s algorithm does not improve the complexity of the reduction. A similar argument holds for the \(\mathrm {\texttt {part-reduce}}\) reduction.
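The closed form of this sum is easy to verify numerically (a throwaway sanity check, not part of any algorithm in the paper):

```python
def partition_cost_sum(n, b):
    """Left-hand side: per-class Grover costs (n - m*n/2^b) / 2^(b/2)."""
    return sum((n - m * n / 2**b) / 2 ** (b / 2) for m in range(2**b))

def partition_cost_closed(n, b):
    """Right-hand side: n(2^b + 1) / 2^(1 + b/2)."""
    return n * (2**b + 1) / 2 ** (1 + b / 2)
```

Both sides exceed n for every \(b\ge 1\) since \(2^b+1 > 2^{1+b/2}\), which is what rules out a Grover speed-up here.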

A.3 Reduction: Drop-Reduce

In the \(\mathrm {\texttt {drop-reduce}}(b)\) reduction, queries that are nonzero on a set of b bits are dropped, where the b positions are chosen uniformly at random. The resulting LPN instance consists of a secret \(s'\) of effective size \(k' = k - b\) with an expected number of remaining queries \(n' = \frac{n}{2^b}\). The noise and secret biases remain unchanged.
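Classically, the reduction is a plain filter, as the following toy Python sketch shows (queries as bit tuples; the helper name is ours):

```python
def drop_reduce(queries, labels, positions):
    """Keep only the queries that are zero on all chosen positions; the
    secret bits at those positions then never contribute to <a, s>."""
    kept = [(a, c) for a, c in zip(queries, labels)
            if all(a[i] == 0 for i in positions)]
    return [a for a, _ in kept], [c for _, c in kept]
```

On uniformly random queries, each of the \(2^b\) patterns on the b positions is equally likely, so a fraction \(2^{-b}\) of the queries survives on average.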

[Algorithm: the drop-reduce reduction]

Quantum Algorithm. The \(\mathrm {\texttt {drop-reduce}}(b)\) reduction is essentially a filter. Grover’s algorithm [17] searches for one marked item in an unsorted database of N items in quantum time \({\boldsymbol{\Theta }}(\sqrt{N})\). If there are \(1\le M\le N\) marked items, then the running time decreases [26] to \({\boldsymbol{\Theta }}(\sqrt{N/M})\) so that finding all marked items takes quantum time \({\boldsymbol{\Theta }}(\sqrt{NM})\). Since we expect to keep \(n/2^b\) queries, the expected quantum time of \(\mathrm {\texttt {drop-reduce}}(b)\) is \({\boldsymbol{\tau }}_{q,\pi }=O(n2^{-b/2})\).

Remark 3

While the quantum version of \(\mathrm {\texttt {drop-reduce}}(b)\) may a priori improve the classical time complexity, one critical observation is that this result is entirely based on the assumptions that a sufficient amount of quantum memory is accessible, that the LPN queries can be accessed in superposition, and that the oracle construction cost is negligible.

A.4 Reduction: Code-Reduce

The \(\mathrm {\texttt {code-reduce}}(k,k',\textsf{params})\) reduction approximates the queries \(\psi _i = (a_i,c_i)\) with codewords in a \([k,k']_2\)-code \(\mathcal {C}\) with \(k' < k\), characterized by \(\textsf{params}\) and generated by a known matrix \(M\in {\textbf{Z}}_2^{k\times k'}\). Let \(\mathfrak {D}_{\mathcal {C}}\) be a decoder which decodes a word to a codeword of \(\mathcal {C}\) in time \({\boldsymbol{\tau }}_{dec}\). Let \(\nu _i{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\nu _i'M^T\in {\textbf{Z}}_2^k\) be the nearest codeword in \(\mathcal {C}\) to \(a_i\). Then, \(c_i = \langle a_i,s\rangle \oplus \varepsilon _i = \langle \nu _i'M^T,s\rangle \oplus \langle a_i-\nu _i,s\rangle \oplus \varepsilon _i = \langle \nu _i',sM\rangle \oplus \langle a_i-\nu _i,s\rangle \oplus \varepsilon _i\). By setting \(\varepsilon '_i{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\langle a_i-\nu _i,s\rangle \oplus \varepsilon _i\), the queries for the \(\textsf{LPN}(k',\tau ',s')\) instance with secret \(s'{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}sM\) are exactly those of the form \(\psi _i' = (\nu _i',c_i)\), hence \(n' = n\). On the other hand, the new noise bias \(\delta '_\tau \) is expressed as \(\delta '_\tau = \delta _\tau \cdot \texttt{bc}\), where \(\texttt{bc}\) denotes the bias of \(\langle a_i-\nu _i,s\rangle \), that is, \(\texttt{bc} = \mathbb {E}[(-1)^{\langle a_i-\nu _i,s\rangle }]\). The secret bias \(\delta '_s\) is expressed as a function of \(\delta _s\) and \(\mathcal {C}\).

[Algorithm: the code-reduce reduction]

In practice, the idea is to choose \(\mathcal {C}\) with \(\texttt{bc}\) as large as possible and \({\boldsymbol{\tau }}_{dec} = O(1)\). For general LPN instances, the authors of [11] provided three classes of codes, namely repetition codes, perfect codes and quasi-perfect codes defined by some set of parameters, and computed their corresponding \(\texttt{bc}\).
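As a toy instance of the reduction, take \(\mathcal {C}\) to be a concatenation of \([b,1]_2\) repetition codes (odd b, so majority voting never ties); decoding is then a block-wise majority vote. A Python sketch under these assumptions, with our own helper names:

```python
def repetition_decode(a, b):
    """Majority-decode each length-b block of the bit tuple a; the decoded
    bits form the reduced query nu' of length len(a)/b."""
    assert len(a) % b == 0 and b % 2 == 1
    return tuple(int(2 * sum(a[i:i + b]) > b) for i in range(0, len(a), b))

def code_reduce(queries, labels, b):
    """Replace each query by its decoded word; the labels are kept (n' = n)
    and the approximation error <a - nu, s> moves into the noise term."""
    return [repetition_decode(a, b) for a in queries], list(labels)
```

Here \({\boldsymbol{\tau }}_{dec} = O(k)\) per query, and the generator M of the concatenated code simply repeats each bit of \(\nu '_i\) b times.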

A.5 Reduction: Guess-Reduce

The \(\mathrm {\texttt {guess-reduce}}(b,w)\) reduction [6, 18] forces the secret to satisfy some distribution. More precisely, one selects \(0\le w'\le w\) and \(b\ge w\) positions of the secret, say \(s_1,\ldots ,s_b\). Then, out of those b unknowns, \(w'\) are set to 1, say the first \(w'\), and the others are set to 0. In this example, this is equivalent to assuming that the secret is of the form \(s = \textbf{1}^{w'}||\textbf{0}^{b-w'}||(s_{b+1},\ldots ,s_k)\). More generally, the reduction succeeds if the secret contains a pattern of b bits with at most w errors, which occurs with probabilityFootnote 4

$$ \sum _{i=0}^w\left( {\begin{array}{c}b\\ i\end{array}}\right) \left( \frac{1-\delta _s}{2}\right) ^i\left( \frac{1+\delta _s}{2}\right) ^{b-i} = \sum _{i=0}^w\left( {\begin{array}{c}b\\ i\end{array}}\right) \tau _s^i(1-\tau _s)^{b-i}. $$

Furthermore, the solving algorithm must be iterated \(\sum _{i=0}^w\left( {\begin{array}{c}b\\ i\end{array}}\right) \) times since each pattern of b bits with at most w errors must be tested. The complexity of the reduction step itself is O(1) as there is “nothing” to do except choosing and replacing variables (the complexity is at most \(O(b\log b)\), the cost of randomly sampling b positions, but this can be amortized to O(1)). The reduction can furthermore be integrated after (resp. before) a \(\mathrm {\texttt {sparse-secret}}\) (resp. \(\mathrm {\texttt {code-reduce}}\)) reduction, in which case the two reductions are merged as \(\mathrm {\texttt {sparse+guess}}\) (resp. \(\mathrm {\texttt {guess+code}}\)).
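Both quantities are straightforward to evaluate (a small Python helper mirroring the formula above; the function names are ours):

```python
from math import comb

def guess_success_prob(b, w, tau_s):
    """Pr[a fixed b-bit window of the secret has at most w ones] for
    i.i.d. Ber(tau_s) secret bits, where tau_s = (1 - delta_s)/2."""
    return sum(comb(b, i) * tau_s**i * (1 - tau_s)**(b - i)
               for i in range(w + 1))

def guess_iterations(b, w):
    """Number of weight-at-most-w patterns the solver must try."""
    return sum(comb(b, i) for i in range(w + 1))
```

This makes the trade-off explicit: increasing w raises the success probability but multiplies the number of solver runs.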

[Algorithm: the guess-reduce reduction]

Arbitrary guessing-based reductions do not lend themselves well to quantum speed-ups since they essentially require repeating the same algorithm for different choices of the bits and weights. One idea is to create a superposition of \(N = \sum _{i=0}^w\left( {\begin{array}{c}b\\ i\end{array}}\right) \) states, each encoding a fixed pattern, and run Grover’s algorithm over this set until a “good” pattern is found, which corresponds to recovering the secret. Such a set could be found in time of order \(\sqrt{N}\le 2^{b/2}\). The issue is that the predicate deciding whether a pattern is good entirely depends on whether the solver succeeded, hence the cost of evaluating Grover’s predicate is likely to explode. As such, we do not consider a quantum version of this reduction. Additionally, the \(\mathrm {\texttt {guess-reduce}}\) reduction may appear in the middle of a reduction chain, making the rest of the chain part of Grover’s predicate. According to [11, §6], the classical \(\mathrm {\texttt {guess-reduce}}\) does not seem to bring substantial improvements. Future research may however investigate whether a quantum \(\mathrm {\texttt {guess-reduce}}\) can be introduced in such a way that the underlying Grover predicate is efficiently evaluated.

Other reductions such as \(\mathrm {\texttt {LF4}}(b)\) and \(\mathrm {\texttt {(u)trunc-reduce}}(b)\) were presented in [11] and [12], but were considered inefficient compared to the existing ones. The \(\mathrm {\texttt {LF4}}(b)\) reduction is similar to \(\mathrm {\texttt {LF1}}(b)\) and \(\mathrm {\texttt {LF2}}(b)\) but is based on Wagner’s algorithm [28]. It has a worse complexity because it is essentially equivalent to two consecutive \(\mathrm {\texttt {LF2}}(b)\) reductions. On the other hand, the \(\mathrm {\texttt {trunc-reduce}}(b)\) and \(\mathrm {\texttt {utrunc-reduce}}(b)\) reductions introduce secret bits into the error vector by truncating bits. According to [11, §6, Table 3], those reductions do not bring notable improvements, and thus were not considered in this study.

B Graph of Reductions

In [11], Bogos and Vaudenay considered each reduction as an edge of a graph and tried to find a path whose end vertex is an LPN instance whose secret is recovered by one of the presented solvers. Stated otherwise, one chooses a set of reductions \({\boldsymbol{\Pi }}\) and starts from an initial \(\textsf{LPN}(k,\tau ,s)\) instance, identified with some vertex \((k,\log n)\) where n is the number of available queries. By applying consecutive reduction steps \(\pi \in {\boldsymbol{\Pi }}\), one eventually reaches an \(\textsf{LPN}(k',\tau ',s')\) instance whose secret \(s'\) is recovered by the solving algorithm. In this section, we suggest an optimized way to construct such a graph and extend the formalism introduced in [11] to an arbitrary set of reductions.

Notation 3

For \(\mathcal {L}\subseteq \mathbb {R}\), we define a flooring \(\lfloor \cdot \rfloor _\mathcal {L}\) function, a ceiling \(\lceil \cdot \rceil _\mathcal {L}\) function and a rounding \(\lfloor \cdot \rceil _\mathcal {L}\) function by \(\lfloor x\rfloor _\mathcal {L}{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\max _{z\in \mathcal {L}}\left\{ z\le x\right\} \), \(\lceil x\rceil _\mathcal {L}{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\min _{z\in \mathcal {L}}\left\{ z\ge x\right\} \) and \(\lfloor x\rceil _\mathcal {L}{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\arg \min _{z\in \mathcal {L}}\left|{x-z}\right|\) respectively.
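Over a finite grid \(\mathcal {L}\), these three operations translate directly into Python (a literal, inefficient sketch; a real implementation over a uniform grid would use index arithmetic instead):

```python
def floor_L(x, L):
    """Largest z in L with z <= x."""
    return max(z for z in L if z <= x)

def ceil_L(x, L):
    """Smallest z in L with z >= x."""
    return min(z for z in L if z >= x)

def round_L(x, L):
    """Element of L closest to x."""
    return min(L, key=lambda z: abs(x - z))
```

These operators are what snap the exact (real-valued) logarithmic query counts onto the discretized vertex set of the reduction graph.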

Definition 7 (reduction graph)

Let \({\boldsymbol{\Pi }}\) be a set of reductions and let \(\mathcal {L}\subseteq \mathbb {R}\) and \(k\in \mathbb {N}\). A k-dimensional \(({\boldsymbol{\Pi }},\mathcal {L})\)-reduction graph is a directed and labelled graph \(\mathcal {G}= (\mathcal {V},\mathcal {E})\) defined over \(\mathcal {V}{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\llbracket 1,k\rrbracket \times \mathcal {L}\) such that any edge \((\varepsilon :v\mathop {\longrightarrow }\limits ^{\lambda }v')\in \mathcal {E}\) from \(v=(k,\eta )\in \mathcal {V}\) to \(v'=(k',\eta ')\in \mathcal {V}\) labelled by \(\lambda =(\alpha _\lambda ,\beta _\lambda ,\pi _\lambda )\in \mathbb {R}\times \mathbb {R}\times {\boldsymbol{\Pi }}\) satisfies \(k' = \pi _\lambda (k)\) and \(\eta ' = \lfloor \log \pi _\lambda (2^\eta )\rceil _\mathcal {L}\). Here, \(\alpha _\lambda \) and \(\beta _\lambda \) are real values describing the evolution of the noise bias via \(\delta '_\tau = \alpha _\lambda \delta _\tau + \beta _\lambda \).

Remark 4

We assume that \(\pi _\lambda \) encodes all the statically known parameters of a reduction. For instance, the reduction \(\mathrm {\texttt {code-reduce}}(k,k',\textsf{params})\) for fixed \(k,k'\) and \(\textsf{params}\) can only be applied to instances of size k, whereas \(\mathrm {\texttt {drop-reduce}}(b)\) does not depend on the initial LPN instance.

Notation 4

Given a label \(\lambda \), integers \(k'\le k\) and \(\eta \in \mathbb {R}\), let \(\eta '=\eta ^\lambda _{out}(k,\eta ,k')\) be the binary logarithm of the exact number of queries obtained after applying the reduction \((v=(k,\eta )\mathop {\longrightarrow }\limits ^{\lambda }(k',\eta ')=v')\) and let \({\boldsymbol{\tau }}^\lambda _{log}(v,v')\) be the corresponding logarithmic time complexity.

Definition 8 (reduction chain)

Let \({\boldsymbol{\Pi }}\) be a set of reductions and let \(\mathcal {C}\) be a finite sequence of length m of the form

$$ v_0=(k_0,\log n_0) \mathop {\longrightarrow }\limits ^{\lambda _1} \ldots \mathop {\longrightarrow }\limits ^{\lambda _{m}} (k_m,\log n_m)=v_m,\,\quad \lambda _i=(\alpha _{\lambda _i},\beta _{\lambda _i},\pi _{\lambda _i}). $$

Then, \(\mathcal {C}\) is said to be a \({\boldsymbol{\Pi }}\)-reduction chain if \((k_{i-1},\log n_{i-1})\mathop {\longrightarrow }\limits ^{\lambda _i}(k_i,\log n_i)\) follows the update rule defined by the reduction \(\pi _{\lambda _i}\in {\boldsymbol{\Pi }}\) for all \(1\le i \le m\). If this is the case, we abusively write \(v_i = \lambda _i(v_{i-1})\). The time complexity \({\boldsymbol{\tau }}_\mathcal {C}\) and the time max-complexity \({\boldsymbol{\tau }}_{\mathcal {C},\infty }\) of \(\mathcal {C}\) are respectively defined by \({\boldsymbol{\tau }}_{\mathcal {C}} {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\sum _{\lambda }{\boldsymbol{\tau }}_{\lambda }\) and \({\boldsymbol{\tau }}_{\mathcal {C},\infty }{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\max _{\lambda }{\boldsymbol{\tau }}_{\lambda }\), where \({\boldsymbol{\tau }}_\lambda {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}{\boldsymbol{\tau }}_{\pi _\lambda }\) is the time complexity of a reduction step \(\lambda \in \mathcal {C}\).

Definition 9 (simple reduction chain)

A \({\boldsymbol{\Pi }}\)-reduction chain \(\mathcal {C}\) is said to be simple if it is accepted by the automaton described in Fig. 1, where the dotted lines are transitions described by an arbitrary solving algorithm \(\mathcal {A}\). The transition map \({\textbf{T}}:\llbracket 1,4\rrbracket \times {\boldsymbol{\Pi }}\longrightarrow \llbracket 0,4\rrbracket \), denoted by \((\sigma ,\pi )\mapsto {\textbf{T}}^\sigma _\pi \), maps \((\sigma ,\pi )\) to \(\sigma '\) if \(\pi \) is a valid transition from \(\sigma \) to \(\sigma '\), and to 0 otherwise.

Fig. 1. Automaton accepting simple chains.

Given an algorithm \(\mathcal {A}\) recovering a \(k'\)-dimensional \(\textsf{LPN}\) secret in time \({\boldsymbol{\tau }}_\vartheta \) with probability at least \(1-\vartheta \), a reduction chain \(\mathcal {C}\) from \(\textsf{LPN}(k,\tau ,s)\) to \(\textsf{LPN}(k',\tau ',s')\) is said to be \(\vartheta \)-valid for \(\mathcal {A}\). Given an upper bound \({\boldsymbol{\tau }}_\infty \in \mathbb {R}\) on the logarithmic time complexity, the goal is to find a chain \(\mathcal {C}\) for which \({\boldsymbol{\tau }}_{\mathcal {C},\vartheta } {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}{\boldsymbol{\tau }}_{\mathcal {C}} + {\boldsymbol{\tau }}_{\vartheta } = \mathcal {O}(2^{{\boldsymbol{\tau }}_{\infty }})\). This can be achieved by searching for relatively small chains \(\mathcal {C}\) whose time max-complexity \({\boldsymbol{\tau }}_{\mathcal {C},\vartheta ,\infty }{{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\max \left( {\boldsymbol{\tau }}_{\mathcal {C},\infty },\,{\boldsymbol{\tau }}_{\vartheta }\right) \) is upper-bounded by \(2^{{\boldsymbol{\tau }}_{\infty }}\), as the max-complexity metric is a relatively good approximation of the full complexity.

B.1 Finding Optimal \(\vartheta \)-valid Chains

To find optimal \(\vartheta \)-valid chains, [11] constructed a directed graph \(\mathcal {G}^* = (\mathcal {V}^*,\mathcal {E}^*)\) where the set of vertices \(\mathcal {V}^* = \mathcal {V}\times \llbracket 1,4\rrbracket \) furthermore encodes the automaton state. The action of a reduction step \(\lambda \) on \(\mathcal {V}\) is extended to \(\mathcal {V}^*\) via

$$\lambda (k,\eta ,\sigma ) {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\left( \pi _\lambda (k),\,\lfloor \log \pi _\lambda (2^\eta )\rceil _\mathcal {L},\,{\textbf{T}}^\sigma _{\pi _\lambda }\right) $$

and \(\delta '_\tau = \alpha _\lambda \delta _\tau +\beta _\lambda \). In particular, \(\mathcal {E}^*\subseteq \left\{ \varepsilon :v\mathop {\longrightarrow }\limits ^{\lambda }v'\,\mid \,v,v'\in \mathcal {V}^*,\,v'=\lambda (v)\right\} \). The construction of \(\mathcal {G}^*\) is described by [11, §4.1, Alg. 2]. In practice, \(\mathcal {G}^*\) is lazily constructed by iteratively looking for the optimal edges, namely those for which \(\delta _\tau \) is the largest at each reduction step. Algorithm 8 describes the high-level idea: it iterates over the possible vertices and adds them to the graph according to the strategy described by the transition map. For practical reasons, \(\mathcal {L}\) is a discretization of the real segment \([0,{\boldsymbol{\tau }}_\infty ]\) with step size \(\varrho \) (which plays the role of a “precision”) used to approximate the number of required queries.

B.2 Optimizing the \(\texttt {build()}\) Algorithm

In their original paper, the authors considered LPN instances up to a dimension of \(k=756\) and \(\tau \in \left\{ 0.05,0.1,0.2,0.25\right\} \) and with a precision \(\varrho =10^{-1}\). For small dimensions, their algorithm is sufficiently fast, but for larger instances, there is a way to reduce the running time of the optimization algorithm by half in practice. Indeed, since the conditions at line 11 of Algorithm 8 depend on the loop index i, it suffices to find a smaller interval \(\llbracket i_{min}^\lambda ,i_{max}^\lambda \rrbracket \subseteq \llbracket j+1,k\rrbracket \) for which the conditions hold.

Lemma 1

Let \({\boldsymbol{\tau }}_\infty ,\varrho >0\) and let \(\mathcal {L}= \left\{ \eta _\ell =\varrho (\ell -1):1\le \ell \le N = \lfloor \frac{{\boldsymbol{\tau }}_{\infty }}{\varrho }\rfloor +1\right\} \). Let \(\mathcal {G}^*\) be a \(({\boldsymbol{\Pi }},\mathcal {L})\)-reduction graph. For all \(1\le j < i\le k\) and for all \(1\le \ell _1\le N\), we define \(\eta _j^\lambda = \eta ^\lambda _{out}(i,\eta _{\ell _1},j)\) and \(\ell _2 = \lfloor \eta _j^\lambda /\varrho \rceil + 1\). Let \(v_1=(i,\eta _{\ell _1})\in \mathcal {V}\) be a vertex and let \(v_2 = (j,\eta _{\ell _2})\) be a point, not necessarily in \(\mathcal {V}\) as \(\ell _2\) may be outside the range \(\llbracket 1,N\rrbracket \). Then, the following assertions are verified for \(i_{min}^\lambda = j + 1\):

  1.

    The conditions \(\left\{ \eta _j^\lambda \ge 0\right\} \), \(\left\{ 1\le \ell _2\le N\right\} \) and \(\left\{ {\boldsymbol{\tau }}_{log}^\lambda (v_1,v_2)\le {\boldsymbol{\tau }}_\infty \right\} \) are satisfied for \(\lambda ={ \mathrm {\texttt {drop-reduce}}}\) if \(i_{max}^\lambda = \min \left(k,\eta _{\ell _1}+j,I\right)\), where \(I = \infty \) if \({\boldsymbol{\tau }}_\infty \ge \eta _{\ell _1}+1\) and \( I=j-\log (1-2^{{\boldsymbol{\tau }}_\infty -\eta _{\ell _1}-1})\) otherwise.

  2.

    The conditions \(\left\{ \eta _j^\lambda \ge 0\right\} \) and \(\left\{ 1\le \ell _2\le N\right\} \) are satisfied for \(\lambda ={ \mathrm {\texttt {xor-reduce}}}\) if \(i_{max}^{\lambda } = \min \left( k,2^{{\boldsymbol{\tau }}_\infty -\eta _{\ell _1}}, I\right) \), where \(I = \eta _{\ell _1} + \log (2^{\eta _{\ell _1}}-1) + j - 1\) if \(\eta _{\ell _1}\ge 1\) and 0 otherwise.

  3.

    The conditions \(\left\{ \eta _j^\lambda \ge 0\right\} \), \(\left\{ 1\le \ell _2\le N\right\} \) and \(\left\{ {\boldsymbol{\tau }}_{log}^\lambda (v_1,v_2)\le {\boldsymbol{\tau }}_\infty \right\} \) are satisfied for \(\lambda ={ \mathrm {\texttt {code-reduce}}}\) if \(i_{max}^{\lambda } = \min \left(k,2^{{\boldsymbol{\tau }}_\infty -\eta _{\ell _1}-\log {\boldsymbol{\tau }}_{dec}}\right)\), where \({\boldsymbol{\tau }}_{dec}\) is the time complexity of the corresponding decoder.
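Assuming base-2 logarithms throughout (as in the rest of the paper), the closed-form bounds \(i_{max}^\lambda \) of the lemma can be sketched in Python as follows; the function names are ours:

```python
import math

# Item 1 (drop-reduce): i_max = min(k, eta + j, I), with I = infinity if
# tau_inf >= eta + 1 and I = j - log2(1 - 2^(tau_inf - eta - 1)) otherwise.
def i_max_drop_reduce(k, j, eta, tau_inf):
    if tau_inf >= eta + 1:
        I = math.inf
    else:
        I = j - math.log2(1 - 2 ** (tau_inf - eta - 1))
    return min(k, eta + j, I)

# Item 2 (xor-reduce): i_max = min(k, 2^(tau_inf - eta), I), with
# I = eta + log2(2^eta - 1) + j - 1 if eta >= 1 and I = 0 otherwise.
def i_max_xor_reduce(k, j, eta, tau_inf):
    I = eta + math.log2(2 ** eta - 1) + j - 1 if eta >= 1 else 0
    return min(k, 2 ** (tau_inf - eta), I)

# Item 3 (code-reduce): i_max = min(k, 2^(tau_inf - eta - log2(tau_dec))).
def i_max_code_reduce(k, eta, tau_inf, log_tau_dec):
    return min(k, 2 ** (tau_inf - eta - log_tau_dec))
```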


Proof

Let \(\eta = \eta _{\ell _1}\) and \(\eta ' = \eta _{\ell _2}\), \(n=2^\eta \) and \(n' = 2^{\eta '}\) and \(b = i-j\). We claim that if \(i_{min}^\lambda \le i\le i_{max}^\lambda \), then \(1\le \ell _2\le N\) and \(\eta '\ge 0\), and optionally \({\boldsymbol{\tau }}^\lambda _{log}(v_1,v_2)\le {\boldsymbol{\tau }}_\infty \).

  1.

    Since \(\eta ' = \eta -b\), it follows that \(i \le \min (k,\eta + j)\) ensures \(\eta '\ge 0\). On the other hand, \({\boldsymbol{\tau }}_{log}^\lambda {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}\log \frac{n(2^b-1)}{2^{b-1}} = \eta + 1 + \log (1-2^{-b}) \le {\boldsymbol{\tau }}_{\infty }\) holds if \({\boldsymbol{\tau }}_\infty \ge \eta +1\) or if \(i \le I {{\,\mathrm{{\mathop {=}\limits ^{{\scriptscriptstyle \triangle }}}}\,}}j - \log (1-2^{{\boldsymbol{\tau }}_\infty -\eta -1})\) and \({\boldsymbol{\tau }}_\infty < \eta + 1\). Setting \(I = \infty \) if \({\boldsymbol{\tau }}_\infty \ge \eta +1\) then justifies \(i_{max}^\lambda =\min (k,\eta +j,I)\) as a suitable upper bound.

  2.

    Since \(\eta ' = \eta + \log (n-1) - b - 1\), we have \(\eta '\ge 0\) whenever \(\eta \ge 1\); otherwise, we need \(i \le \eta + \log (n-1) + j - 1\). If \(\eta \in [0,1)\), then \(\eta + \log (n-1) \le b + 1\), so \(\eta ' < 0\), which is impossible since the number of queries is integral; in that case, the loop is empty. On the other hand, \({\boldsymbol{\tau }}_{log}^\lambda = \log i + \max (\eta ,\eta ')\) is at most \({\boldsymbol{\tau }}_\infty \) when \(\eta '\le \eta \), provided that \(i\le \min (k,2^{{\boldsymbol{\tau }}_\infty -\eta })\). For the case \(\eta '>\eta \), one would need to optimize i so that \(\log i + \eta ' \le {\boldsymbol{\tau }}_{\infty }\), but since \(\eta '\) depends on i itself, we do not delve into these details.

  3.

    Since the number of queries is maintained, only the condition on the time complexity needs to be checked. Since \({\boldsymbol{\tau }}_{log}^{\lambda } = \log i + \eta + \log {\boldsymbol{\tau }}_{dec}\), it suffices that \(i \le 2^{{\boldsymbol{\tau }}_\infty - \eta - \log {\boldsymbol{\tau }}_{dec}}\) to ensure \({\boldsymbol{\tau }}_{log}^{\lambda }\le {\boldsymbol{\tau }}_{\infty }\) and this concludes the proof.   \(\square \)
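As a numeric sanity check of the first item, a short script (ours, assuming base-2 logarithms) can verify that the threshold \(I\) derived in the proof separates exactly the indices i satisfying the time constraint when \({\boldsymbol{\tau }}_\infty < \eta + 1\):

```python
import math

# Checks that tau_log = eta + 1 + log2(1 - 2^(-b)), with b = i - j,
# satisfies tau_log <= tau_inf exactly when i <= I, where
# I = j - log2(1 - 2^(tau_inf - eta - 1)), assuming tau_inf < eta + 1
# (otherwise the constraint holds for all i and I = infinity).
def check_drop_reduce_bound(j, eta, tau_inf, k):
    assert tau_inf < eta + 1
    I = j - math.log2(1 - 2 ** (tau_inf - eta - 1))
    for i in range(j + 1, k + 1):
        b = i - j
        tau_log = eta + 1 + math.log2(1 - 2 ** (-b))
        assert (tau_log <= tau_inf) == (i <= I)
    return True
```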

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this paper

Tran, B., Vaudenay, S. (2022). Solving the Learning Parity with Noise Problem Using Quantum Algorithms. In: Batina, L., Daemen, J. (eds) Progress in Cryptology - AFRICACRYPT 2022. AFRICACRYPT 2022. Lecture Notes in Computer Science, vol 13503. Springer, Cham. https://doi.org/10.1007/978-3-031-17433-9_13

  • Print ISBN: 978-3-031-17432-2

  • Online ISBN: 978-3-031-17433-9
