Near-Optimal Private Information Retrieval with Preprocessing

  • Conference paper
  • In: Theory of Cryptography (TCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14370)

Abstract

In Private Information Retrieval (PIR), a client wishes to access an index i from a public n-bit database without revealing any information about i. Recently, a series of works starting with the seminal paper of Corrigan-Gibbs and Kogan (EUROCRYPT 2020) considered PIR with client preprocessing and no additional server storage. In this setting, we now have protocols that achieve \(\widetilde{O}(\sqrt{n})\) (amortized) server time and \(\widetilde{O}(1)\) (amortized) bandwidth in the two-server model (Shi et al., CRYPTO 2021) as well as \(\widetilde{O}(\sqrt{n})\) server time and \(\widetilde{O}(\sqrt{n})\) bandwidth in the single-server model (Corrigan-Gibbs et al., EUROCRYPT 2022). Given existing lower bounds, a single-server PIR scheme with \(\widetilde{O}(\sqrt{n})\) (amortized) server time and \(\widetilde{O}(1)\) (amortized) bandwidth is still feasible; however, to date, no known protocol achieves such complexities. In this paper we fill this gap by constructing the first single-server PIR scheme with \(\widetilde{O}(\sqrt{n})\) (amortized) server time and \(\widetilde{O}(1)\) (amortized) bandwidth. Our scheme achieves near-optimal (optimal up to polylogarithmic factors) asymptotics in every relevant dimension. Central to our approach is a new cryptographic primitive that we call an adaptable pseudorandom set: With an adaptable pseudorandom set, one can represent a large pseudorandom set with a succinct fixed-size key k, and can both add to and remove from the set a constant number of elements by manipulating the key k, while maintaining its concise description as well as its pseudorandomness (under a certain security definition).

Notes

  1. In particular, in Step 1 of the actual protocol’s online phase, the client sends \(S_j \setminus \{ i \}\) with probability \(1-1/\sqrt{n}\) and \(S_j \setminus \{ r \}\), for a random element r, with probability \(1/\sqrt{n}\), to ensure no information is leaked about i (see the sketch following these notes). Also, \(\omega (\log \lambda )\) parallel executions are required to guarantee overwhelming correctness in \(\lambda \), accounting for the cases where puncturing ‘fails’ or where no set \(S_j\) containing i can be found.

  2. Amortization is over \(\sqrt{n}\) queries.

  3. We pick \(\sqrt{n}\) concretely for exposition. Looking ahead, our scheme achieves the same smooth tradeoff, whereby preprocessing O(Q) sets achieves O(n/Q) amortized online time.

  4. Previously this operation was called “puncture”. We rename it to “resample” for ease of understanding and for consistency with the rest of our work.
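
To make footnote 1 concrete, here is a minimal Python sketch of the randomized choice the client makes in Step 1 of the online phase. The function name online_message and the rule for drawing r (uniformly from \(S_j\)) are illustrative assumptions, not details taken from the protocol:

```python
import math
import random

def online_message(S_j: set, i: int, n: int) -> set:
    """Illustrative sketch of footnote 1: with probability 1 - 1/sqrt(n) send S_j
    minus the queried index i; with probability 1/sqrt(n) remove a random element
    r instead, so that the removed position leaks nothing about i.
    Assumption: r is drawn uniformly from S_j (the exact rule is in the protocol)."""
    if random.random() < 1 / math.sqrt(n):
        r = random.choice(sorted(S_j))   # hypothetical choice of r
        return S_j - {r}
    return S_j - {i}
```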

References

  1. Angel, S., Chen, H., Laine, K., Setty, S.: PIR with compressed queries and amortized query processing. In: 2018 IEEE Symposium on Security and Privacy (SP), pp. 962–979 (2018). https://doi.org/10.1109/SP.2018.00062. ISSN: 2375-1207

  2. Angel, S., Setty, S.: Unobservable communication over fully untrusted infrastructure. In: Proceedings of the 12th USENIX conference on Operating Systems Design and Implementation, pp. 551–569. OSDI2016, USENIX Association, USA (2016)

  3. Backes, M., Kate, A., Maffei, M., Pecina, K.: ObliviAd: provably secure and practical online behavioral advertising. In: 2012 IEEE Symposium on Security and Privacy, pp. 257–271 (2012). https://doi.org/10.1109/SP.2012.25. ISSN: 2375-1207

  4. Beimel, A., Ishai, Y.: Information-theoretic private information retrieval: a unified construction. In: Orejas, F., Spirakis, P.G., van Leeuwen, J. (eds.) ICALP 2001. LNCS, vol. 2076, pp. 912–926. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-48224-5_74

  5. Beimel, A., Ishai, Y., Malkin, T.: Reducing the servers computation in private information retrieval: PIR with preprocessing. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 55–73. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44598-6_4

  6. Bell, J.H., Bonawitz, K.A., Gascón, A., Lepoint, T., Raykova, M.: Secure single-server aggregation with (poly)logarithmic overhead. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 1253–1269. CCS 2020, Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3372297.3417885

  7. Boneh, D., Kim, S., Montgomery, H.: Private puncturable PRFs from standard lattice assumptions. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017. LNCS, vol. 10210, pp. 415–445. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56620-7_15

  8. Brakerski, Z., Tsabary, R., Vaikuntanathan, V., Wee, H.: Private constrained PRFs (and more) from LWE. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017. LNCS, vol. 10677, pp. 264–302. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70500-2_10

  9. Brakerski, Z., Vaikuntanathan, V.: Fully homomorphic encryption from ring-LWE and security for key dependent messages. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 505–524. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_29

  10. Canetti, R., Chen, Y.: Constraint-hiding constrained PRFs for NC\(^1\) from LWE. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017. LNCS, vol. 10210, pp. 446–476. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56620-7_16

  11. Chor, B., Goldreich, O., Kushilevitz, E., Sudan, M.: Private information retrieval. In: 36th Annual Symposium on Foundations of Computer Science (FOCS 1995), pp. 41–50. IEEE Computer Society (1995). https://doi.org/10.1109/SFCS.1995.492461

  12. Chor, B., Gilboa, N.: Computationally private information retrieval (extended abstract). In: Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pp. 304–313. STOC 1997, Association for Computing Machinery, New York, NY, USA (1997). https://doi.org/10.1145/258533.258609

  13. Chor, B., Gilboa, N., Naor, M.: Private information retrieval by keywords. Cryptology ePrint Archive, Report 1998/003 (1998). https://eprint.iacr.org/1998/003

  14. Chor, B., Kushilevitz, E., Goldreich, O., Sudan, M.: Private information retrieval. J. ACM 45(6), 965–981 (1998)

  15. Corrigan-Gibbs, H., Henzinger, A., Kogan, D.: Single-server private information retrieval with sublinear amortized time. In: Dunkelman, O., Dziembowski, S. (eds.) Advances in Cryptology – EUROCRYPT 2022. EUROCRYPT 2022. LNCS, vol. 13276. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-07085-3_1

  16. Corrigan-Gibbs, H., Kogan, D.: Private information retrieval with sublinear online time. In: Canteaut, A., Ishai, Y. (eds.) EUROCRYPT 2020. LNCS, vol. 12105, pp. 44–75. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45721-1_3

  17. Devadas, S., van Dijk, M., Fletcher, C.W., Ren, L., Shi, E., Wichs, D.: Onion ORAM: a constant bandwidth blowup oblivious ram. In: Kushilevitz, E., Malkin, T. (eds.) TCC 2016. LNCS, vol. 9563, pp. 145–174. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49099-0_6

  18. Di Crescenzo, G., Ishai, Y., Ostrovsky, R.: Universal service-providers for private information retrieval. J. Cryptol. 14(1), 37–74 (2001)

  19. Di Crescenzo, G., Malkin, T., Ostrovsky, R.: Single database private information retrieval implies oblivious transfer. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 122–138. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-45539-6_10

  20. Dong, C., Chen, L.: A fast single server private information retrieval protocol with low communication cost. In: Kutyłowski, M., Vaidya, J. (eds.) ESORICS 2014. LNCS, vol. 8712, pp. 380–399. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11203-9_22

  21. Dvir, Z., Gopi, S.: 2-server PIR with subpolynomial communication. J. ACM 63(4), 1–15 (2016)

  22. Efremenko, K.: 3-query locally decodable codes of subexponential length. SIAM J. Comput. 41(6), 1694–1703 (2012)

  23. Garg, S., Mohassel, P., Papamanthou, C.: TWORAM: efficient oblivious ram in two rounds with applications to searchable encryption. In: Robshaw, M., Katz, J. (eds.) CRYPTO 2016. LNCS, vol. 9816, pp. 563–592. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53015-3_20

  24. Gentry, C.: Fully homomorphic encryption using ideal lattices. In: Proceedings of the 41st Annual ACM symposium on Symposium on Theory of Computing - STOC 2009, p. 169. ACM Press, Bethesda, MD, USA (2009). https://doi.org/10.1145/1536414.1536440

  25. Gentry, C., Ramzan, Z.: Single-database private information retrieval with constant communication rate. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 803–815. Springer, Heidelberg (2005). https://doi.org/10.1007/11523468_65

  26. Gentry, C., Sahai, A., Waters, B.: Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8042, pp. 75–92. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40041-4_5

  27. Goldreich, O., Goldwasser, S., Micali, S.: How to construct random functions (Extended Abstract). In: FOCS (1984). https://doi.org/10.1109/SFCS.1984.715949

  28. Gupta, T., Crooks, N., Mulhern, W., Setty, S., Alvisi, L., Walfish, M.: Scalable and private media consumption with Popcorn. In: Proceedings of the 13th USENIX Conference on Networked Systems Design and Implementation, pp. 91–107. NSDI2016, USENIX Association, USA (2016)

  29. Kazama, K., Kamatsuka, A., Yoshida, T., Matsushima, T.: A note on a relationship between smooth locally decodable codes and private information retrieval. In: 2020 International Symposium on Information Theory and Its Applications (ISITA), pp. 259–263 (2020). ISSN: 2689–5854

  30. Kiayias, A., Leonardos, N., Lipmaa, H., Pavlyk, K., Tang, Q.: Optimal rate private information retrieval from homomorphic encryption. Proceed. Privacy Enhan. Technol. 2015(2), 222–243 (2015)

  31. Kogan, D., Corrigan-Gibbs, H.: Private blocklist lookups with checklist. In: 30th USENIX Security Symposium (USENIX Security 21), pp. 875–892. USENIX Association (2021). https://www.usenix.org/conference/usenixsecurity21/presentation/kogan

  32. Kushilevitz, E., Ostrovsky, R.: Replication is not needed: single database, computationally-private information retrieval. In: Proceedings 38th Annual Symposium on Foundations of Computer Science, pp. 364–373. IEEE Comput. Soc, Miami Beach, FL, USA (1997). https://doi.org/10.1109/SFCS.1997.646125. https://ieeexplore.ieee.org/document/646125/

  33. Lazzaretti, A., Papamanthou, C.: Near-optimal private information retrieval with preprocessing. Cryptology ePrint Archive, Report 2022/830 (2022). https://eprint.iacr.org/2022/830

  34. Lipmaa, H.: An oblivious transfer protocol with log-squared communication. In: Zhou, J., Lopez, J., Deng, R.H., Bao, F. (eds.) ISC 2005. LNCS, vol. 3650, pp. 314–328. Springer, Heidelberg (2005). https://doi.org/10.1007/11556992_23

  35. Lipmaa, H., Pavlyk, K.: A simpler rate-optimal CPIR protocol. In: Financial Cryptography and Data Security 2017 (2017). https://eprint.iacr.org/2017/722

  36. Mughees, M.H., Chen, H., Ren, L.: OnionPIR: response efficient single-server PIR. In: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 2292–2306. CCS 2021, Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3460120.3485381

  37. Shi, E., Aqeel, W., Chandrasekaran, B., Maggs, B.: Puncturable pseudorandom sets and private information retrieval with near-optimal online bandwidth and time. In: Malkin, T., Peikert, C. (eds.) CRYPTO 2021. LNCS, vol. 12828, pp. 641–669. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-84259-8_22

  38. Singanamalla, S., et al.: Oblivious DNS over HTTPS (ODoH): a practical privacy enhancement to DNS. In: Proceedings on Privacy Enhancing Technologies 2021(4), 575–592 (2021)

  39. Yekhanin, S.: Towards 3-query locally decodable codes of subexponential length. J. ACM 55(1), 1–16 (2008)

  40. Yekhanin, S.: Locally decodable codes and private information retrieval schemes. Information Security and Cryptography, Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14358-8

  41. Zhou, M., Lin, W.K., Tselekounis, Y., Shi, E.: Optimal single-server private information retrieval. Cryptology ePrint Archive (2022)

Acknowledgement

This work was supported by the NSF, VMware and Protocol Labs.

Author information

Correspondence to Arthur Lazzaretti.

Appendices

A Definitions

1.1 A.1 Additional Definitions for Adaptable PRSs

Our adaptable PRS primitive will satisfy the following definitions.
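
For orientation, here is a minimal Python sketch of the syntax these definitions refer to. The method names mirror (Gen, EnumSet, InSet, Resample, Add); the exact argument conventions vary slightly between definitions (e.g., Resample is sometimes written as also taking sk), and all type choices here are illustrative assumptions rather than part of the scheme.

```python
from abc import ABC, abstractmethod
from typing import Any, Set, Tuple

Key = Any        # a succinct set key sk
MasterKey = Any  # the corresponding master key msk

class AdaptablePRS(ABC):
    """Syntax of an adaptable pseudorandom set (sketch; types and conventions assumed)."""

    @abstractmethod
    def Gen(self, lam: int, n: int) -> Tuple[Key, MasterKey]:
        """Sample (sk, msk) describing a pseudorandom subset of {0, ..., n-1}."""

    @abstractmethod
    def EnumSet(self, sk: Key) -> Set[int]:
        """Expand the succinct key sk into the explicit set it represents."""

    @abstractmethod
    def InSet(self, sk: Key, x: int) -> bool:
        """Membership test for x without enumerating the whole set."""

    @abstractmethod
    def Resample(self, msk: MasterKey, x: int) -> Key:
        """Return a key sk_x whose set no longer contains x (Definitions A2, A3)."""

    @abstractmethod
    def Add(self, msk: MasterKey, sk: Key, x: int) -> Tuple[MasterKey, Key]:
        """Return (msk^x, sk^x) whose set additionally contains x (Definitions A4, A5)."""
```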

Definition A1

(Pseudorandomness with respect to some distribution \(\mathbb {D}_n\) for privately-puncturable PRSs [37]) A privately-puncturable PRS scheme (Gen, EnumSet, InSet, Resample) satisfies pseudorandomness with respect to some distribution \(\mathbb {D}_n\) if the distribution of \(\textsf {EnumSet }(sk)\), where sk is output by \(\textsf {Gen }(\lambda , n)\), is indistinguishable from a set sampled from \(\mathbb {D}_n\).

Definition A2

(Security in resampling for privately-puncturable PRSs [37]). A privately-puncturable PRS scheme (Gen, EnumSet, InSet, Resample) satisfies security in resampling if, for any \(x \in \{0,\ldots ,n-1\}\), the following two distributions are computationally indistinguishable.

  • Run \(\textsf {Gen }(\lambda ,n) \rightarrow (sk,msk)\), output sk.

  • Run \(\textsf {Gen }(\lambda ,n) \rightarrow (sk,msk)\) until \(\textsf {InSet }(sk,x)\rightarrow 1\), output \(sk_x = \textsf {Resample }(msk,x)\).

Definition A3

(Functionality preservation in resampling for privately-puncturable PRSs [37]). We say that a privately-puncturable PRS scheme (Gen, EnumSet, InSet, Resample) satisfies functionality preservation in resampling with respect to a predicate Related if, with probability \(1- \texttt {negl} (\lambda )\) for some negligible function \(\texttt {negl} (.)\), the following holds. If \(\textsf {Gen }(1^\lambda ,n)\rightarrow (sk, msk)\) and \(\textsf {Resample }(msk, x)\rightarrow sk_x \) where \(\textsf {InSet }(sk,x) = 1\) then

  1. 1.

    \(\textsf {EnumSet }(sk_x ) \subseteq \textsf {EnumSet }(sk)\);

  2. 2.

    \(\textsf {EnumSet }(sk_x )\) runs in time no more than \(\textsf {EnumSet }(sk )\);

  3. 3.

    For any \(y\in \textsf {EnumSet }(sk) \setminus \textsf {EnumSet }(sk_x)\), it must be that \(\texttt {{Related}}(x, y) = 1\).

Definition A4

(Security in addition for adaptable PRSs). We say that an adaptable PRS scheme (Gen, EnumSet, InSet, Resample, Add) satisfies security in addition if, for any \(x \in \{0,\ldots ,n-1\}\), the following two distributions are computationally indistinguishable.

  • Run \(\textsf {Gen }(1^\lambda ,n) \rightarrow (sk,msk)\) until \(\textsf {InSet }(sk,x)\rightarrow 1\). Let \(msk[0] = null\) and output (msk, sk).

  • Run \(\textsf {Gen }(1^\lambda ,n) \rightarrow (sk,msk )\). Output \((msk^x,sk^x)\leftarrow \textsf {Add }(msk,sk,x)\).

Definition A5

(Functionality preservation in addition for adaptable PRS). We say that an adaptable PRS scheme \((\textsf {Gen }, \textsf {EnumSet }, \textsf {InSet }, \textsf {Resample }, \textsf {Add })\) satisfies functionality preservation in addition with respect to a predicate Related if, with probability \(1- \texttt {negl} (\lambda )\) for some negligible function \(\texttt {negl} (.)\), the following holds. If \(\textsf {Gen }(1^\lambda , n)\rightarrow (sk,msk)\) and \(\textsf {Add }(msk,sk,x)\rightarrow sk^x\) then

  • \(\textsf {EnumSet }(sk) \subseteq \textsf {EnumSet }(sk^x)\);

  • For all \(y \in \textsf {EnumSet }(sk^x) \setminus \textsf {EnumSet }(sk)\) it must be that Related\((x,y) = 1\).
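
As a reading aid, the pairs of distributions in Definitions A2 and A4 can be phrased as sampling procedures over the interface sketched in Appendix A.1; the rejection-sampling loops make the conditioning “run Gen until InSet(sk, x) returns 1” explicit. Security in resampling (resp. addition) then requires the outputs of the two procedures in each pair to be computationally indistinguishable. The assumption that msk is a pair (msk[0], msk[1]) follows the notation used later in Appendix C; everything else is illustrative.

```python
def dist_resample_real(prs, lam, n):
    """Definition A2, first distribution: an honestly generated key sk."""
    sk, msk = prs.Gen(lam, n)
    return sk

def dist_resample_punctured(prs, lam, n, x):
    """Definition A2, second distribution: condition on x being in the set, then
    resample x out of it; the loop realizes 'until InSet(sk, x) -> 1'."""
    while True:
        sk, msk = prs.Gen(lam, n)
        if prs.InSet(sk, x):
            return prs.Resample(msk, x)

def dist_add_conditioned(prs, lam, n, x):
    """Definition A4, first distribution: a key conditioned on containing x,
    with msk[0] set to null (msk is assumed to be a pair, as in Appendix C)."""
    while True:
        sk, msk = prs.Gen(lam, n)
        if prs.InSet(sk, x):
            return (None, msk[1]), sk

def dist_add_added(prs, lam, n, x):
    """Definition A4, second distribution: a fresh key with x added afterwards."""
    sk, msk = prs.Gen(lam, n)
    return prs.Add(msk, sk, x)
```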

B Correctness Lemmata

Below we give the proof of Lemma 31, which we then use to prove Theorem 31.

Proof

Recall that we fix \(B = 2 \log \log n\). As alluded to in Sect. 3, we can split the failure probability into three cases:

  • Case 1: \(x_i\) is not in any primary set that was preprocessed.

  • Case 2: The resampling does not remove \(x_i\).

  • Case 3: The resampling removes more than just \(x_i\) from the set.

Case 1: We first note that, from our distribution \(\mathbb {D}_n\), for any \(x \in \{0,\ldots ,n-1 \}\), we have that, for \(S \sim \mathbb {D}_n\),

$$\begin{aligned} \text {Pr}[x \in S] &= \left( \frac{1}{2}\right) ^ {\frac{1}{2} \log n + B} = \frac{1}{\sqrt{n}} \left( \frac{1}{2}\right) ^B = \frac{1}{2^B \sqrt{n}}\,. \end{aligned}$$

Then note that the expected size of S is the sum of the probability of each element being in the set, i.e.,

$$\begin{aligned} \mathop {\mathrm {\mathbb {E}}}\limits \left[ |S| \right] &= \sum _{x=0}^{n-1} \text {Pr}[x \in S] = \sum _{x=0}^{n-1} \frac{1}{2^B \sqrt{n}} = \frac{\sqrt{n}}{2^B} \le \frac{\sqrt{n}}{(\log n)^2}\,. \end{aligned}$$

We can conclude that the desired probability is

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [x \notin \cup _{i \in [1,\ell ]} S_i] &= \left( 1- \frac{1}{\sqrt{n} (\log n)^2}\right) ^{\sqrt{n}(\log n)^3} \le \left( \frac{1}{e}\right) ^{\log n} \le \frac{1}{n}\,, \end{aligned}$$

where \(\ell = \sqrt{n}\log ^3 n\) and \(S_1,\ldots ,S_{\ell }\sim (\mathbb {D}_n)^{\ell }\).
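
As a purely numeric sanity check of the chain of inequalities above (with the arbitrary example n = 2^20 and logarithms taken base 2):

```python
import math

n = 2 ** 20                        # example database size (illustrative)
logn = math.log2(n)
B = 2 * math.log2(logn)            # B = 2 log log n, so 2^B = (log n)^2
p = 1 / (math.sqrt(n) * 2 ** B)    # Pr[x in S] = 1 / (2^B sqrt(n)) = 1 / (sqrt(n) (log n)^2)
ell = math.sqrt(n) * logn ** 3     # ell = sqrt(n) log^3 n preprocessed primary sets

print((1 - p) ** ell)              # Pr[x not in any set] ~ 2.06e-09
print(math.exp(-logn))             # (1/e)^{log n}        ~ 2.06e-09
print(1 / n)                       # 1/n                  ~ 9.54e-07, so the chain holds
```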

Case 2: Assuming there is a set S such that \(x_i \in S\), by construction of Resample, the event that \(x_i\) is not removed from S is a Bernoulli variable that is 1 with probability \(p=\frac{1}{\sqrt{n} \cdot 2^B}\): we toss \(1/2 \log n + B\) coins, and \(x_i\) is not removed only if all of these coins evaluate to 1. Therefore

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [x_i \in \texttt {Resample}(S,x_i)] &= \frac{1}{\sqrt{n} \cdot 2^B} \le \frac{1}{\sqrt{n}\log ^2 n}\,. \end{aligned}$$

Case 3: Note that for any k less than \(\log n\), there are exactly \(2^{\log n - k} - 1\) (in particular, fewer than \(2^{\log n - k}\)) strings in \(\{0,1\}^{\log n}\) that differ from x and share a suffix of length \(\ge k\) with x. Since x is in the set, for any such k, the probability that a string y whose longest common suffix with x has length exactly k is included in the set is the probability that its initial B bits and the bits it does not share with x all evaluate to 1. Namely, for any k less than \(\log n\) and \(y = \{0,1\}^{\log n - k} || x[\log n - k:]\) with \(y \ne x\), we have that:

$$\begin{aligned} \text {Pr}[y \in S] = \frac{1}{2^B 2^{\log n - k}}\,. \end{aligned}$$

Let \(N_k\) be the set of strings in the set that share a longest common suffix with x of length k. Then, since we know that there are at most \(2^{\log n - k}\) such strings, we can say that for any k, the expected size of \(N_k\) is

$$\begin{aligned} \mathop {\mathrm {\mathbb {E}}}\limits \left[ |N_k| \right] \le \sum _{j = 1}^{2^{\log n - k}} \frac{1}{2^B 2^{\log n - k}} = \frac{2^{\log n - k}}{2^B 2^{\log n - k}} = \frac{1}{2^B}. \end{aligned}$$

Then, for our construction, where we only consider suffixes of length k greater than \((1/2)\log n\), the sum of the expected sizes of \(N_k\) over all such k is

$$\begin{aligned} \mathop {\mathrm {\mathbb {E}}}\limits \left[ \sum _{k = \frac{1}{2} \log n + 1}^{\log n - 1} |N_k|\right] =\sum _{k = \frac{1}{2} \log n + 1}^{\log n - 1} \mathop {\mathrm {\mathbb {E}}}\limits \left[ |N_k|\right] & \le \left( \frac{1}{2} \log n - 1\right) \frac{1}{2^B} \le \frac{1}{2 \log n}. \end{aligned}$$

Clearly, we can bound the probability of removing an element along with \(x_i\) by the probability that there exists a related element to \(x_i\) in the set, by previous discussion in Sect. 3. Then, given each bound above, assuming that the previous query was correct and that the refresh phase maintains the set distribution, we see that the probability that the returned bit DB \([x_i]\) is incorrect for query step i is

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\textsf {DB}[x_i] \text {is incorrect}] &\le \frac{1}{n} +\frac{1}{\sqrt{n}\log ^2 n} + \frac{1}{2 \log n} \le \frac{3}{2 \log n} < \frac{1}{3}\,, \end{aligned}$$

for \(n\ge 32\).    \(\square \)
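
The constants in the last step can also be checked directly; the short script below (again taking logarithms base 2) evaluates the per-query failure bound at the stated threshold:

```python
import math

def failure_bound(n: int) -> float:
    """1/n + 1/(sqrt(n) log^2 n) + 1/(2 log n): the per-query failure bound above."""
    logn = math.log2(n)
    return 1 / n + 1 / (math.sqrt(n) * logn ** 2) + 1 / (2 * logn)

for n in (32, 2 ** 10, 2 ** 20):
    b = failure_bound(n)
    print(n, b, b < 1 / 3)   # 32 -> ~0.138 < 1/3, and the bound shrinks as n grows
```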

Now we introduce a new lemma that will help us prove Theorem 31. This lemma bounds the probability that Add does not work as expected. The intuition is that, just like Resample can remove elements (already in the set) related to the resampled element, Add can add elements (not in the set) related to the added element. Below, we bound the expected number of elements other than x that are added to the set when we add x. As explained in Sect. 3, this is a “failure case”, since it means that our set will not be what we expect.

Lemma B1

(Adding related elements). For \(S \sim \mathbb {D}_n\), and any \(x \in \{0,\ldots ,n-1\}\), the related set \( S_{almost,x}\) is defined as

$$ S_{almost,x} = \{y | y \in \texttt {{Add}}(S,x) \setminus (S \cup \{x\})\}\,. $$

Then the expected size of \(S_{almost,x}\) is at most \(\frac{1}{2\log n}\).

Proof

Note that for any k less than \(\log n\), there are fewer than \(2^{\log n - k}\) strings in \(\{0,1\}^{\log n}\) that differ from x and share a suffix of length greater than or equal to k with x. The probability that a string y whose longest common suffix with x has length exactly k is included in \(S_{almost,x}\) is the probability that its initial B bits and the bits it does not share with x all evaluate to 1. Namely, let us write

$$ S_{almost,x} = \bigcup N_k\,, $$

where the union is over all \(k \in \mathbb {N}\) with \((1/2)\log n< k < \log n\). We define each \(N_k\) as

$$ N_k = \{y \in S_{almost,x} : y= \{0,1\}^{\log n - k} || x[\log n - k:]\}\,. $$

Since each \(N_k\) obeys the same expected-size bound as the corresponding \(N_k\) in Case 3 of Lemma 31, and we sum over the same range of k, the expected size of \(S_{almost,x}\) is

$$\begin{aligned} \mathop {\mathrm {\mathbb {E}}}\limits \left[ |S_{almost,x}| \right] \le \frac{1}{2 \log n}\,. \end{aligned}$$

   \(\square \)

We are now equipped with all the tools we need to prove Theorem 31. We prove it below:

Proof

We first prove privacy of the scheme, then proceed to prove correctness. The asymptotics follow by construction and were argued in Sect. 3.

Privacy. Privacy for \({\textbf {server}}_{1}\) is trivial: it only ever sees random sets generated completely independently of the queries, and it is not contacted during the online phase. We present the privacy proof for \({\textbf {server}}_{2}\) below.

Privacy with respect to \({\textbf {server}}_{2}\), as per our definition, must be argued by showing there exists a stateful algorithm Sim that can run without knowledge of the query and be indistinguishable from an honest execution of the protocol, from the view of any PPT adversary \(\mathcal {A}\) acting as \({\textbf {server}}_{2}\), for any protocol \({\textbf {server}}_1^*\) acting as \({\textbf {server}}_{1}\). First, we note that the execution of the protocol between client and \({\textbf {server}}_{2}\) is independent of client’s interaction with \({\textbf {server}}_{1}\): client generates sets and queries \({\textbf {server}}_{1}\) for their parities in the offline phase. Although this affects the correctness of each query, it does not affect the message sent to \({\textbf {server}}_{2}\) at each step of the online phase, since that message is determined by the sets, which are generated by client. We can therefore equivalently rewrite our security definition disregarding client’s interactions with \({\textbf {server}}_{1}\).

We want to show that for any query \(q_t\) for \(t \in [1,Q]\), \(q_t\) leaks no information about the query index \(x_t\) to \({\textbf {server}}_{2}\), or that interactions between client and \({\textbf {server}}_{2}\) can be simulated with no knowledge of \(x_t\). To do this, we show, equivalently, that the following two experiments are computationally indistinguishable.

  • Expt\(_0\): Here, for each query index \(x_t\) that client receives, client interacts with \({\textbf {server}}_{2}\) as in our PIR protocol.

  • Expt\(_1\): In this experiment, for each query index \(x_t\) that client receives, client ignores \(x_t\), samples a fresh \(S \sim \mathbb {D}_n\) and sends S to \({\textbf {server}}_{2}\).

First we define an intermediate experiment \({\textbf {Expt}}_1^*\).

  • \({\textbf {Expt}}_1^*:\) For each query index \(x_t\) that client receives, client samples \(S \sim \mathbb {D}_n^{x_t}\). client sends \(S' = \texttt {Resample}(S,x_t)\) to the \({\textbf {server}}_{2}\).

By Property 1 defined in Sect. 3, \(S'\) is computationally indistinguishable from a fresh set sampled from \(\mathbb {D}_n\). Therefore, we have that \({\textbf {Expt}}_1^*\) and \({\textbf {Expt}}_1\) are indistinguishable. Next, we define another intermediate experiment Expt\(_0^*\) to help in the proof.

  • Expt\(_0^*\): Here, for each query index \(x_t\) that client receives, client interacts with \({\textbf {server}}_{2}\) as in our PIR protocol, except that in the refresh phase after each query, instead of picking a table entry \(B_k\) = (\(S_k,P_k\)) from our secondary sets and running \(S_k' = \texttt {Add}(S_k,x_t)\), we generate a new random set \(S \sim \mathbb {D}_n^{x_t}\) and replace the used set with S instead.

First, we note that by Property 2 defined in Sect. 3, it follows directly that Expt\(_0\) and Expt\(_0^*\) are computationally indistinguishable. Now, we continue to show that Expt\(_0^*\) and Expt\(_1^*\) are computationally indistinguishable. At the beginning of the protocol, right after the offline phase, the client holds |T| primary sets picked at random. For the first query index, \(x_1\), we either pick an entry \((S_j,p_j) \in T\) from these random sets where \(x_1 \in S_j\) or, if that fails, we sample \(S_j \sim \mathbb {D}_n^{x_1}\).

Then, we send \(S_j' = \texttt {Resample}(S_j,x_1)\) to \({\textbf {server}}_{2}\). Note that the second case is trivially equivalent to generating a random set containing \(x_1\) and resampling it at \(x_1\). In the first case, note that T holds sets sampled from \(\mathbb {D}_n\) in order. Looking at it this way, \(S_j\) is the first output in a sequence of samplings that satisfies the constraint of \(x_1\) being in the set. Then, if we consider just the executions from 1 to j, picking \(S_j\) is equivalent to sampling from \(\mathbb {D}_n^{x_1}\), by definition. Then, by Property 1, it follows that the set that the server sees in the first query is indistinguishable from a freshly sampled set.
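
The statistical fact used in this paragraph (picking the first preprocessed set containing \(x_1\) and then resampling \(x_1\) out of it looks like a fresh sample) can be illustrated in a toy model in which sets are explicit and each element is included independently with probability p. The real Resample must additionally hide correlated “related” elements, which this toy ignores; all names below (sample_set, toy_resample, first_set_containing) are illustrative.

```python
import random

def sample_set(n: int, p: float) -> set:
    """Toy D_n: include each element of {0, ..., n-1} independently with probability p."""
    return {x for x in range(n) if random.random() < p}

def toy_resample(S: set, x: int, p: float) -> set:
    """Toy Resample: re-flip x's membership coin, so x survives with probability p."""
    T = set(S)
    T.discard(x)
    if random.random() < p:
        T.add(x)
    return T

def first_set_containing(table, x):
    """Mimic the client's choice of the first primary set S_j with x in S_j."""
    for S in table:
        if x in S:
            return S
    return None

# In the toy model, the set the server sees after resampling x out of the first
# table entry containing x is distributed like a fresh sample from D_n; this is
# what allows the simulator Sim to answer each query with a fresh sample.
n, p, x, trials = 64, 0.2, 7, 5000
size_real = size_fresh = 0.0
for _ in range(trials):
    table = [sample_set(n, p) for _ in range(20)]
    S = first_set_containing(table, x) or (sample_set(n, p) | {x})
    size_real += len(toy_resample(S, x, p))
    size_fresh += len(sample_set(n, p))
print(size_real / trials, size_fresh / trials)   # the two averages agree (about n*p = 12.8)
```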

It follows from above that for the first query, \(q_1\), \({\textbf {Expt}}_0^*\) is indistinguishable from Expt\(_1^*\). To show that this holds for all \(q_t\) for \(t \in [1,Q]\) we show, by induction, that after each query, we refresh our set table T to have the same distribution as initially. Then, by the same arguments above, it will follow that every query \(q_t\) in Expt\(_0^*\) is indistinguishable from each query in Expt\(_1^*\).

Base Case. Initially, our table T is a set of |T| random sets sampled from \(\mathbb {D}_n\) independently from the queries, offline.

Inductive Step. After each query \(q_t\), the table entry \((S_j,p_j)\) with the smallest index j such that \(x_t \in S_j\) is replaced with a set sampled from \(\mathbb {D}_n^{x_t}\). Since the replaced entry and its replacement are identically distributed (both are distributed as \(\mathbb {D}_n^{x_t}\)), the table of set keys T maintains the same distribution after each query refresh.

Since our set distribution is unchanged across all queries, the same argument as for the first query shows that the set client sends in each query \(q_t\) is indistinguishable, from \({\textbf {server}}_{2}\)’s view, from a freshly sampled set. Then, we can say that Expt\(_1^*\) is indistinguishable from Expt\(_0^*\). This concludes our proof of experiment indistinguishability. Since we have defined a way to simulate our protocol without access to each \(x_t\), it follows that we satisfy \({\textbf {server}}_{2}\) privacy for any PPT non-uniform adversary \(\mathcal {A}\).

Correctness. To show correctness, we consider a slightly modified version of the scheme: After the refresh phase has used the auxiliary set \((S_j, p_j)\), the client stores \((S_j, p_j,z_j)\), where \(z_j\) is the element that was added to \(S_j\) as part of the protocol—for the sets that have not been used, we simply set \(z_j=null\). Note that the rest of the scheme functions exactly as in Fig. 1 and therefore never uses \(z_j\). It follows, then, that the correctness of this modified scheme is exactly equivalent to the correctness of the scheme we presented. Note that the query phase will fail to output the correct bit only in the following four cases: (Case 1) \(x_i\) is not in any primary set that was preprocessed. (Case 2) The resampling does not remove \(x_i\). (Case 3) The resampling removes more than just \(x_i\) from the set. (Case 4) The parity is incorrect because Add added a related element during the refresh phase.

Case 1: From the privacy proof above, we know that refreshing the sets maintains the primary set distribution. Then, we can use the same argument as in Lemma 31 and say that, for a query \(x_i\), for all \(i \in \{1,\ldots ,Q\}\), we have:

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [x_{i} \notin \cup _{j \in [1,\ell ]} S_j] &\le \left( \frac{1}{e}\right) ^{\log n} \le \frac{1}{n}\,. \end{aligned}$$

Case 2: Since Resample is independent of the set (it just tosses random coins), we can again re-use the proof of Lemma 31 and say that, for any \(x_i\), for all \( i \in \{1,\ldots ,Q\}\), we have:

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [x_{i} \in \texttt {Resample}(S,x_{i})] \le \frac{1}{\sqrt{n}(\log n)^2}\,. \end{aligned}$$

Case 3: Case 3 requires us to look into our modified scheme. For the initial primary sets, the probability of removing an element related to the query is exactly the same as in Case 3 of Lemma 31. However, for sets that were refreshed, we need to account for the fact that these are not freshly sampled sets; rather, they are sets that were sampled and then had an Add operation performed on them. For a given query \(x_i\), let \(S_j\) be the first set in T that contains \(x_i\). Let PuncRel denote the event that we remove more than just \(x_{i}\) when resampling \(S_j\) on \(x_i\). We split the probability of PuncRel as

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel}] &= \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel} \mid \texttt {Related}(x_{i},z_{j}) = 1 \wedge x_{i} \ne z_{j}] \times \mathop {\mathrm {\text {Pr}}}\limits [ \texttt {Related}(x_{i},z_{j}) = 1 \wedge x_{i} \ne z_{j}] \\ {} &\quad + \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel} \mid \texttt {Related}(x_{i},z_{j}) = 0 \vee x_i = z_j] \times \mathop {\mathrm {\text {Pr}}}\limits [ \texttt {Related}(x_{i},z_{j}) = 0 \vee x_i = z_j]\,. \end{aligned}$$

The first term corresponds to the case where the added element in a previous refresh phase, \(z_{j}\), is related to the current query element, \(x_{i}\). Note that if \(x_{i}\) equals \(z_{j}\), we get the same distribution as the initial \(S_j\) by Property 2 in Sect. 3. Then, we consider only the case where \(z_{j}\) does not equal \(x_{i}\). Note that we can bound

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {Related}(x_{i},z_{j}) = 1 \wedge x_{i} \ne z_{j}] &\le \mathop {\mathrm {\text {Pr}}}\limits [\texttt {Related}(S_j,z_{j}) = 1] \le \frac{1}{2 \log n}\,. \end{aligned}$$

Above, we use \(\texttt {Related}(S_j,z_{j})\) to denote the event that there is any element related to \(z_j\) (and not equal to \(z_j\)) in \(S_j\). We can bound the probability of this event by Lemma 31 (see Case 3). Then, we have

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel} \mid \texttt {Related}(x_{i},z_{j}) = 1 \wedge x_{i} \ne z_{j}] \times &\mathop {\mathrm {\text {Pr}}}\limits [ \texttt {Related}(x_{i},z_{j}) = 1 \wedge x_{i} \ne z_{j}] \le \frac{1}{2\log n}\,. \end{aligned}$$

For the second term of our initial equation, where Related\((x_{i},z_{j})\) is 0 or \(x_i\) equals \(z_j\), note that our probability of resampling incorrectly is either independent of \(z_{j}\) (since \(z_{j}\) is unrelated to \(x_{i}\), the resampling cannot affect \(z_{j}\) or its related elements in any way, by definition), or it is identical to the probability for the initial set, by Property 2. Therefore, we have that the probability of removing a related element is at most the probability of removing a related element in the original set, which by Lemma 31 is

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel} \mid \texttt {Related}(x_{i},z_{j}) = 0 \vee x_i = z_j] \le \frac{1}{2\log n}. \end{aligned}$$

And, therefore, it follows that

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel} \mid \texttt {Related}(x_{i},z_{j}) = 0 \vee x_i = z_j] \times \mathop {\mathrm {\text {Pr}}}\limits [ \texttt {Related}(x_{i},z_{j}) = 0 \vee x_i = z_j] \le \frac{1}{2\log n}\,. \end{aligned}$$

Finally, we have that \(\mathop {\mathrm {\text {Pr}}}\limits [\texttt {PuncRel}] \le \frac{1}{2\log n} + \frac{1}{2 \log n} \le \frac{1}{ \log n}\,.\)

Case 4: Lastly, we have the case that query \(x_i\) is incorrect because the parity \(p_{j}\) from the set \(S_j\) where we found \(x_i\) is incorrect. This will only happen when we added elements related to \(z_{j}\) when adding \(z_{j}\) during the refresh phase. We denote this event AddRel. By Lemma B1, we have that

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\texttt {AddRel}] \le \frac{1}{2 \log n}\,. \end{aligned}$$

We can conclude that at each query \(x_i\), \(i \in \{1,\ldots ,Q\}\), assuming the previous query was correct, it follows that the probability of a query being incorrect, such that the output of the query does not equal DB \([x_i]\), is:

$$\begin{aligned} \mathop {\mathrm {\text {Pr}}}\limits [\text {incorrect query}] &\le \frac{1}{n} + \frac{1}{\sqrt{n}\log ^2 n} + \frac{1}{ \log n} + \frac{1}{2 \log n} \le \frac{2}{\log n} \le \frac{1}{3} \text { for } n > 405. \end{aligned}$$

Because at each step we run a majority vote over \(\omega (\log n)\) parallel instances, and the failure probability of each instance is less than \(\frac{1}{2}\), we can guarantee that the majority vote returns the correct DB \([x_i]\) with overwhelming probability.    \(\square \)
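
The last sentence is the standard amplification-by-majority argument. As a sketch (using the per-instance failure bound of 1/3 from above, and a few arbitrary repetition counts k standing in for the \(\omega (\log n)\) repetitions), the script below computes the exact probability that a majority of the k instances fail:

```python
from math import comb

def majority_failure(k: int, q: float) -> float:
    """Pr[at least ceil(k/2) of k independent instances fail], each failing w.p. q."""
    return sum(comb(k, j) * q ** j * (1 - q) ** (k - j) for j in range((k + 1) // 2, k + 1))

q = 1 / 3                              # per-instance failure bound from the proof
for k in (9, 33, 65, 129):             # k stands in for the omega(log n) repetitions
    print(k, majority_failure(k, q))   # decays exponentially in k, hence negligible
```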

C PRS Constructions and Proofs

This section presents a construction and proof for the Adaptable PRS, as introduced and defined in Sect. 4. We present a construction of our Adaptable PRS in Fig. 3. In the proof, we use a function time\(:f(\cdot ) \rightarrow \mathbb {N}\) that takes in a function \(f(\cdot )\) and outputs the number of calls made in \(f(\cdot )\) to any PRF function. We prove Theorem 41 for our construction in Fig. 3 below. In the proof, we use properties of the underlying PRF found only within the full version of the paper [33, Appendix E] or previous work [37].

Proof

We begin the proof by showing that our scheme in Fig. 3 satisfies the definitions in Appendix A. We then argue efficiencies.

Fig. 3. Our Adaptable PRS Implementation.

Correctness and Pseudorandomness with Respect to \(\mathbb {D}_n\). Correctness follows from our construction and functionality preservation of the underlying PRF. Pseudorandomness follows from pseudorandomness of the underlying PRF ( [33, Definition E1]). Both incur a negligible probability of failure in \(\lambda \), inherited from the underlying PRF.

Functionality preservation in resampling and addition. Assuming pseudorandomness and functionality preservation of the underlying PRF ([33, Definitions E1, E2]), our PRS scheme satisfies Functionality Preservation in Resampling and Functionality Preservation in Addition, as we argue next.

For \((sk,msk) \leftarrow \textsf {Gen}(1^\lambda , n)\) with \(\textsf {InSet}(sk,x) = 1\), and \(sk_x \leftarrow \textsf {Resample}(msk,x)\):

  • From construction, \(\textsf {EnumSet}(sk_x) \subseteq \textsf {EnumSet}(sk)\), since resampling strings that evaluate to 1 can only reduce the size of the set (we only resample elements that are in the set).

  • From the point above, and construction of our EnumSet, it follows that \(\texttt {time}(\textsf {EnumSet}(sk)) \ge \texttt {time}(\textsf {EnumSet}(sk_x))\).

  • By construction of our resampling operation and Related function, it must be that

    $$\begin{aligned} y \in \textsf {EnumSet}(sk) \setminus \textsf {EnumSet}(sk_x) \leftrightarrow \texttt {Related}(x,y) = 1. \end{aligned}$$

Also, for any \(n ,\lambda \in \textbf{N}\), \(x \in \{0,\ldots ,n-1\}\), for \((sk,msk) \leftarrow \textsf {Gen}(1^\lambda , n)\), \(sk^x \leftarrow \textsf {Add}(msk,sk,x)\) we note that:

  • By construction, \(\textsf {EnumSet}(sk) \subseteq \textsf {EnumSet}(sk^x)\), since we only ever turn 0s into 1s.

  • By the converse of the same argument as for Functionality Preservation in Resampling above, it follows that

    $$y \in \textsf {EnumSet}(sk^x) \setminus \textsf {EnumSet}(sk) \leftrightarrow \texttt {Related}(x,y) = 1.$$

Therefore, our scheme satisfies Functionality preservation in resampling and addition.

Security in resampling. We show that our scheme satisfies Definition A2 below, assuming pseudorandomness and privacy w.r.t. puncturing of the underlying PRF ([33, Definitions E1, E3], respectively).

To aid in the proof, we define an intermediate experiment, Expt\(_1^*\), defined as:

  • Expt\(_1^*\): Run Gen\((\lambda ,n) \rightarrow (sk,msk)\), and return \(sk_x \leftarrow \textsf {Resample}(msk,sk,x)\).

Each sk output by Gen has the form \(sk = (sk[0],sk[1])\), i.e., two keys of m-puncturable PRFs. First, we show indistinguishability between Expt\(_1^*\) and Expt\(_0\):

Assume that there exists a distinguisher \(D_0\) that can distinguish Expt\(_1^*\) and Expt\(_0\). Let us say that \(D_0\) outputs 0 whenever it believes it is in Expt\(_0\) and 1 whenever it believes it is in Expt\(_1^*\). Then, we can construct a \(D_0^*\) with access to \(D_0\) that breaks the privacy w.r.t. puncturing of the PRF as follows. For any \(x \in \{0,\ldots ,n-1 \}\):

(The reduction \(D_0^*\) is specified in a figure, not reproduced here.)

Note that in the case where b equals 0, the experiment is exactly equivalent to \(D_0\)’s view of Expt\(_0\), since \(sk'\) consists of two random m-privately-puncturable PRF keys punctured at m points starting with a 1-bit. Also, when b is 1, \(D_0\)’s view is exactly equivalent to Expt\(_1^*\), since we pass in two random m-privately-puncturable PRF keys, one punctured at m points starting with a 1-bit, and the other at \(\{z[i:]\}_{i \in [0,m]}\), with no constraints on whether x was in the set before or after the puncturings. Then, since \(D_0\)’s view is exactly the same as in its experiment, it will distinguish between the two with non-negligible probability, and whatever it outputs will, by construction, be the correct guess for b with non-negligible advantage.

Now we proceed to show that Expt\(_1^*\) and Expt\(_1\) are indistinguishable, assuming pseudorandomness of the underlying PRF. Assume there exists a distinguisher \(D_1\) that can distinguish between Expt\(_1^*\) and Expt\(_1\) with non-negligible probability. Then, we can construct a distinguisher \(D_1^*\) that uses \(D_1\) to break the pseudorandomness of the underlying PRF as follows. For any \(x \in \{0,\ldots ,n-1\}\):

(The reduction \(D_1^*\) is specified in a figure, not reproduced here.)

Note that \(D_1\)’s view, in the case where the evaluations described above all output 1, is exactly its view when distinguishing between our Expt\(_1\) and Expt\(_1^*\). With probability \(\frac{1}{2}\), it is given a punctured key where x was an element of the original set, and with probability \(\frac{1}{2}\) it is given a punctured key where x was sampled at random. In this case, it will be able to distinguish between the two with non-negligible probability by assumption, and therefore distinguish between the real and random experiments for pseudorandomness of the PRF. Since the probability that all the evaluations output 1 is non-negligible, this breaks the pseudorandomness of the PRF. By contraposition, assuming pseudorandomness of the PRF, it must be that Expt\(_1\) and Expt\(_1^*\) are indistinguishable. This concludes our proof.

Security in Addition. We now show that our scheme satisfies Definition A4, assuming privacy w.r.t. puncturing of the underlying PRF. Assume there exists a distinguisher D that can distinguish between these two with non-negligible probability. Then, we can construct a distinguisher \(D^*\) that breaks privacy w.r.t. puncturing of the PRF as follows, for any \(x \in \{0,\ldots ,n-1\}\):

(The reduction \(D^*\) is specified in a figure, not reproduced here.)

Consider the case where \(x \in \textsf {EnumSet}(sk')\):

  • If \(P_0\) was punctured, D’s view is exactly equivalent to Expt\(_0\) in its experiment, since in Add we output a secret key \(sk = (sk[0],sk[1])\) where sk[0] is punctured at x, sk[1] is punctured at m random points starting with a 1, and \(\textsf {InSet}(sk,x)\) returns true.

  • If \(P_1\) was punctured, D’s view is exactly equivalent to Expt\(_1\) in its experiment: by construction of Gen, \(P_1\) and \(P_2\), the sk output is equivalent to a key output by \(\textsf {Gen}(1^\lambda , n)\) for which \(\textsf {InSet}(sk,x)\) returns true.

We conclude that, conditioned on \(\textsf {InSet}(sk_{P_b},x)\) returning true, D’s view of the experiment is exactly equivalent to the experiment from our Definition A4, and therefore it will be able to distinguish whether \(P_0\) or \(P_1\) was punctured with non-negligible probability. If we fix a random sk[1], the probability of this conditioning is:

$$\text {Pr} \left[ \textsf {InSet}(sk',x) = true \right] = \frac{1}{\sqrt{n}} > \texttt {negl}(n).$$

Then, the algorithm \(D^*\) we constructed will break the privacy w.r.t. puncturing of the PRF with non-negligible probability. By contraposition, assuming privacy w.r.t. puncturing, \(sk^x\) and sk are computationally indistinguishable. Following almost exactly the same argument as above, we can show that the tuples \((sk^x[0],msk^x[1])\) and (sk[0], msk[1]) are also indistinguishable. Also, in both tuples \((msk^x[1],sk^x[1])\) and (msk[1], sk[1]) the master key is just the unpunctured counterpart of the secret key. Finally, \(msk^x[0]=msk[0]=null\). Then, since we have shown that, assuming the privacy w.r.t. puncturing property, the keys involved are pairwise indistinguishable, by the transitive property we see that \((msk^x,sk^x)\) and (msk, sk) are computationally indistinguishable, and therefore security in addition holds.

Efficiencies. Efficiency for our Gen, InSet and Resample follows from the construction and the efficiencies of our underlying PRF. The two efficiencies left to show are those of EnumSet and Add. Note that in EnumSet, step 1 takes \(\widetilde{O}(\sqrt{n})\) time to evaluate every string of length \(\frac{\log n}{2}\); then, by pseudorandomness of the PRF, at each subsequent step we keep only about \(\sqrt{n}\) strings, since half are eliminated at each step. Since there is a logarithmic number of steps, we can say that EnumSet runs in probabilistic \(\widetilde{O}(\sqrt{n})\) time. For Add, by pseudorandomness of the PRF, our construction will take probabilistic \(\widetilde{O}(\sqrt{n})\) time. (We provide better, deterministic bounds in the full version of the paper [33]).    \(\square \)
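
The \(\widetilde{O}(\sqrt{n})\) counting in this paragraph can be made concrete with a toy, non-private model of a suffix-based pseudorandom set whose membership probabilities match those used in the correctness proofs (one pseudorandom bit per suffix of length greater than \((1/2)\log n\), plus B bits per full string). This is only an illustration: the names below (coin, in_set, enum_set) are made up, a keyed hash stands in for the PRF, and the actual construction in Fig. 3 is built from privately-puncturable PRFs.

```python
import hashlib, math

def coin(key: bytes, label: str) -> int:
    """One pseudorandom bit per label (stand-in for a PRF evaluation)."""
    return hashlib.sha256(key + label.encode()).digest()[0] & 1

def all_strings(length: int) -> list:
    return [format(i, f'0{length}b') for i in range(2 ** length)]

def in_set(key: bytes, x: str, B: int) -> bool:
    """x (a log n-bit string) is in the set iff all its suffix coins for suffixes of
    length > (1/2) log n, plus B per-string coins, are 1; Pr ~ 2^{-((1/2)log n + B)}."""
    half = len(x) // 2
    suffix_ok = all(coin(key, 's' + x[i:]) for i in range(half))
    extra_ok = all(coin(key, f'e{j}' + x) for j in range(B))
    return suffix_ok and extra_ok

def enum_set(key: bytes, logn: int, B: int) -> list:
    """Breadth-first expansion: evaluate all (roughly 2*sqrt(n)) strings of length
    logn//2 + 1, keep those whose coin is 1 (about sqrt(n)), then prepend one bit at
    a time, keeping survivors; about half of the candidates survive per level, so the
    frontier stays around sqrt(n) and total work is O~(sqrt(n))."""
    half = logn // 2
    frontier = [s for s in all_strings(half + 1) if coin(key, 's' + s)]
    for _ in range(logn - half - 1):
        frontier = [b + s for s in frontier for b in '01' if coin(key, 's' + b + s)]
    return [x for x in frontier if all(coin(key, f'e{j}' + x) for j in range(B))]

logn, key = 20, b'example-key'
B = round(2 * math.log2(logn))                     # B = 2 log log n, as in the proofs
S = enum_set(key, logn, B)
print(len(S), all(in_set(key, x, B) for x in S))   # expected size ~ sqrt(n) / 2^B
```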

Copyright information

© 2023 International Association for Cryptologic Research

Cite this paper

Lazzaretti, A., Papamanthou, C. (2023). Near-Optimal Private Information Retrieval with Preprocessing. In: Rothblum, G., Wee, H. (eds) Theory of Cryptography. TCC 2023. Lecture Notes in Computer Science, vol 14370. Springer, Cham. https://doi.org/10.1007/978-3-031-48618-0_14

  • DOI: https://doi.org/10.1007/978-3-031-48618-0_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48617-3

  • Online ISBN: 978-3-031-48618-0