
Locality-Preserving Oblivious RAM

  • Research Article
  • Published in Journal of Cryptology

Abstract

Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM’96], compile any RAM program into one that is “memory oblivious,” i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory). In this work, we initiate the study of locality-preserving ORAMs—ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with polylogarithmic overhead both in terms of bandwidth and locality. We also study the trade-off between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed, suffers from prohibitive bandwidth. To further improve the parameters, we also consider a weaker notion of a File ORAM, which supports accesses to predefined non-overlapping regions. Assuming one-way functions, we present a computationally secure File ORAM that has a work overhead and locality of roughly \(O(\log ^2 N)\), while ignoring \(\log \log N\) factors. To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT’14), Asharov et al. (STOC’16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.


Notes

  1. Intuitively, a file stores the identifiers of the documents matching a keyword search in SSE schemes.

  2. We emphasize that many practical applications leak some more information even when using standard ORAM, e.g., in the form of communication volume. See the discussion below.

  3. For a function f(n), we let \({\tilde{O}}(f(n))\) denote \(O(f(n)\log f(n))\).

  4. Recall that we use n to denote the size of an instance of the underlying building block, in our case, an Oblivious Range Tree, and N to denote the total size of the memory.

  5. However, as we shall see, m does not play a role in the lower bound.

  6. We assume that all information that the client stores at the external memory is encrypted and authenticated using some global secret key. The secret key used in the construction can be derived from the global secret key that the client holds by a proper labeling of the instance of the file hashing scheme, and therefore, no additional key should be stored.

  7. This assumes that there is no file with more than B blocks. In case this assumption does not hold, some bins will contain more than one block of the file. We will visit those bins more than once, extracting exactly one block in each scan of the bin.

  8. Recall that we use n to denote the size of an instance of the underlying building block, in our case, non-recurrent file hashing scheme, and N to denote the total size of the memory.

References

  1. G. Asharov, T.-H.H. Chan, K. Nayak, R. Pass, L. Ren, E. Shi, Bucket oblivious sort: An extremely simple oblivious sort, in 3rd Symposium on Simplicity in Algorithms, SOSA@SODA 2020 (SIAM, 2020), pp. 8–14

  2. L. Arge, P. Ferragina, R. Grossi, J.S. Vitter, On sorting strings in external memory (extended abstract), in ACM Symposium on the Theory of Computing (STOC ’97) (1997), pp. 540–548

  3. G. Asharov, I. Komargodski, W.-K. Lin, K. Nayak, E. Peserico, E. Shi, Optorama: Optimal oblivious RAM, in Advances in Cryptology—EUROCRYPT 2020, Proceedings, Part II, volume 12106 of Lecture Notes in Computer Science (Springer, 2020), pp. 403–432

  4. M. Ajtai, J. Komlós, E. Szemerédi, An \(O(N \log N)\) sorting network, in ACM Symposium on Theory of Computing (STOC ’83) (1983), pp. 1–9

  5. D. Apon, J. Katz, E. Shi, A. Thiruvengadam, Verifiable oblivious storage, in Public Key Cryptography (PKC’14) (2014), pp. 131–148

  6. G. Asharov, M. Naor, G. Segev, I. Shahaf, Searchable symmetric encryption: optimal locality in linear space via two-dimensional balanced allocations, in ACM Symposium on Theory of Computing (STOC ’16) (2016), pp. 1101–1114

  7. G. Asharov, G. Segev, I. Shahaf, Tight tradeoffs in searchable symmetric encryption, in CRYPTO (1), vol. 10991 (2018), pp. 407–436

  8. K.E. Batcher, Sorting Networks and Their Applications. AFIPS ’68 (1968)

  9. E. Boyle, M. Naor, Is there an oblivious RAM lower bound?, in ACM Conference on Innovations in Theoretical Computer Science (ITCS ’16) (2016), pp. 357–368

  10. E. Brewer, L. Ying, L. Greenfield, R. Cypher, T. Ts’o, Disks for data centers—white paper for FAST 2016. Technical report, Google (2016)

  11. A. Chakraborti, A.J. Aviv, S.G. Choi, T. Mayberry, D.S. Roche, R. Sion, rORAM: Efficient Range ORAM with \(O(\log ^2 N)\) Locality, in Network and Distributed System Security (NDSS) (2019)

  12. R. Canetti, Security and composition of multiparty cryptographic protocols. J. Cryptology, 13(1), 143–202 (2000)

  13. T.H.H. Chan, K.-M. Chung, B. Maggs, E. Shi, Foundations of differentially oblivious algorithms, in Symposium on Discrete Algorithms (SODA) (2019)

  14. R. Curtmola, J.A. Garay, S. Kamara, R. Ostrovsky, Searchable symmetric encryption: improved definitions and efficient constructions, in ACM Conference on Computer and Communications Security (CCS ’06) (2006), pp. 79–88

  15. D. Cash, S. Jarecki, C.S. Jutla, H. Krawczyk, M.-C. Rosu, M. Steiner, Highly-scalable searchable symmetric encryption with support for boolean queries, in Advances in Cryptology—CRYPTO 2013. Proceedings, Part I (2013), pp. 353–373

  16. M. Chase, S. Kamara, Structured encryption and controlled disclosure, in Asiacrypt (Springer, 2010), pp. 577–594

  17. K.-M. Chung, Z. Liu, R. Pass, Statistically-secure ORAM with \({{\tilde{O}}}(\log ^2n)\) overhead, in Asiacrypt (2014)

  18. T.-H.H. Chan, K. Nayak, E. Shi, Perfectly secure oblivious parallel RAM, in Theory of Cryptography Conference (TCC) (2018)

  19. T.-H.H. Chan, E. Shi, Circuit OPRAM: unifying statistically and computationally secure orams and oprams, in Theory of Cryptography—15th International Conference, TCC 2017, volume 10678 of Lecture Notes in Computer Science (Springer, 2017), pp. 72–107

  20. D. Cash, S. Tessaro, The locality of searchable symmetric encryption, in Advances in Cryptology—EUROCRYPT 2014, vol. 8441 (2014), pp. 351–368

  21. I. Demertzis, C. Papamanthou, Fast searchable encryption with tunable locality, in SIGMOD Conference (ACM, 2017), pp. 1053–1067

  22. I. Demertzis, D. Papadopoulos, C. Papamanthou, Searchable encryption with optimal locality: Achieving sublogarithmic read efficiency, in CRYPTO (2018)

  23. S. Devadas, M. van Dijk, C.W. Fletcher, L. Ren, E. Shi, D. Wichs, Onion ORAM: a constant bandwidth blowup oblivious RAM, in TCC (2016)

  24. M.T. Goodrich, M. Mitzenmacher, Privacy-preserving access of outsourced data via oblivious RAM simulation, in ICALP (2011)

  25. O. Goldreich, R. Ostrovsky, Software protection and simulation on oblivious RAMs. J. ACM (1996)

  26. O. Goldreich, Towards a theory of software protection and simulation by oblivious RAMs, in STOC (1987)

  27. O. Goldreich, The Foundations of Cryptography—Volume 2, Basic Applications (Cambridge University Press, 2004)

  28. M.T. Goodrich, Zig-zag sort: a simple deterministic data-oblivious sorting algorithm running in \(O(n \log n)\) time, in STOC (2014)

  29. G. Kellaris, G. Kollios, K. Nissim, A. O’Neill, Generic attacks on secure outsourced databases, in ACM CCS (2016), pp. 1329–1340

  30. G. Kellaris, G. Kollios, K. Nissim, A. O’Neill, Accessing data while preserving privacy. CoRR, arXiv:abs/1706.01552 (2017)

  31. E. Kushilevitz, S. Lu, R. Ostrovsky, On the (in)security of hash-based oblivious RAM and a new balancing scheme, in SODA (2012)

  32. K. Kurosawa, Y. Ohtaki, How to update documents verifiably in searchable symmetric encryption, in International Conference on Cryptology and Network Security (Springer, 2013), pp. 309–328

  33. S. Kamara, C. Papamanthou, Parallel and dynamic searchable symmetric encryption, in Financial Cryptography and Data Security (2013), pp. 258–274

  34. K.G. Larsen, J.B. Nielsen, Yes, there is an oblivious RAM lower bound!, in CRYPTO (2018), pp. 523–542

  35. S. Patel, G. Persiano, M. Raykova, K. Yeo, Panorama: Oblivious RAM with logarithmic overhead, in FOCS (2018)

  36. C. Ruemmler, J. Wilkes, An introduction to disk drive modeling, IEEE Computer, 27(3), 17–28 (1994)

  37. E. Shi, T.-H.H. Chan, E. Stefanov, M. Li, Oblivious RAM with \(O((\log N)^3)\) worst-case cost, in ASIACRYPT (2011)

  38. E. Stefanov, M. van Dijk, E. Shi, C. Fletcher, L. Ren, X. Yu, S. Devadas, Path ORAM—an extremely simple oblivious ram protocol, in CCS (2013)

  39. J.S. Vitter, External memory algorithms and data structures. ACM Comput. Surv. 33(2), 209–271 (2001)

  40. J.S. Vitter, Algorithms and data structures for external memory. Foundations and Trends in Theoretical Computer Science 2(4), 305–474 (2006)

  41. P. Van Liesdonk, S. Sedghi, J. Doumen, P. Hartel, W. Jonker, Computationally efficient searchable symmetric encryption, in Workshop on Secure Data Management (Springer, 2010), pp. 87–100

  42. X. Wang, T.-H.H. Chan, E. Shi, Circuit ORAM: On Tightness of the Goldreich-Ostrovsky Lower Bound, in ACM Conference on Computer and Communications Security (ACM, 2015), pp. 850–861

  43. X.S. Wang, Y. Huang, T.-H.H. Chan, A. Shelat, E. Shi, SCORAM: Oblivious RAM for Secure Computation, in CCS (2014)

  44. Wikipedia. Bitonic sorter. https://en.wikipedia.org/wiki/Bitonic_sorter#/media/File:BitonicSort1.svg. Online; accessed August 2021.

  45. P. Williams, R. Sion, Usable PIR, in Network and Distributed System Security Symposium (NDSS) (2008)

  46. P. Williams, R. Sion, Round-optimal access privacy on outsourced storage, in ACM Conference on Computer and Communication Security (CCS) (2012)

  47. P. Williams, R. Sion, B. Carbunar, Building castles out of mud: practical access pattern privacy and correctness on untrusted storage, in CCS (2008), pp. 139–148


Acknowledgements

This work was partially supported by a Junior Fellow award from the Simons Foundation to Gilad Asharov. He is currently supported by the Israel Science Foundation (grant No. 2439/20), by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office, and by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 891234. Supported in part by NSF Award CNS-1561209, NSF Award CNS-1217821, NSF Award CNS-1704788, AFOSR Award FA9550-15-1-0262, a Microsoft Faculty Fellowship, a Google Faculty Research Award and a JP Morgan fellowship to Rafael Pass. This work was supported in part by NSF grants CNS-1314857, CNS-1514261, CNS-1544613, CNS-1601879, CNS-1617676, an Office of Naval Research Young Investigator Program Award, a Packard Fellowship, a Sloan Fellowship, Google Faculty Research Awards, a VMWare Research Award, and a Baidu Faculty Research Award to Elaine Shi. Kartik Nayak was partially supported by a Google Ph.D. Fellowship Award. T-H. Hubert Chan was partially supported by the Hong Kong RGC under the Grants 17200418 and 17201220.

Author information


Corresponding author

Correspondence to Gilad Asharov.

Additional information

Communicated by Stefano Tessaro.


A preliminary version of this paper appeared in IACR-EUROCRYPT 2019.

Appendices

Appendix: Locality of Bitonic sort

In this section, we first analyze the locality of Bitonic sort, which runs in \(O(n \log ^2 n)\) time.

We call an array of numbers bitonic if it consists of two monotonic sequences, the first one ascending and the other descending, or vice versa. For an array S, we write it as \({\widehat{S}}\) if it is bitonic, as \(\overrightarrow{S}\) (resp. \(\overleftarrow{S}\)) if it is sorted in an ascending (resp. descending) order.

The algorithm is based on a “bitonic split” procedure \({\overrightarrow{\textsf {Split}}}\), which receives as input a bitonic sequence \({\widehat{S}}\) of length n and outputs a sorted sequence \(\overrightarrow{S}\). \({\overrightarrow{\textsf {Split}}}\) first separates \({\widehat{S}}\) into two bitonic sequences \({\widehat{S}}_1,{\widehat{S}}_2\), such that all the elements in \(S_1\) are smaller than all the elements in \(S_2\). It then calls \({\overrightarrow{\textsf {Split}}}\) recursively on each sequence to get a sorted sequence. For simplicity, we assume that the length of the array is a power of 2.

Procedure A.1

\(\overrightarrow{S} = {\overrightarrow{\textsf {Split}}}({\widehat{S}})\)

  • If \(|{\widehat{S}}|=1\) then return the single element. Otherwise:

  • Set \(k = |{\widehat{S}}|/2\) and write \({\widehat{S}}=\left\langle a_0,\ldots ,a_{2k-1}\right\rangle \).

  • Let \({\widehat{S}}_1 = \left\langle \min (a_0,a_{k}), \min (a_1,a_{k+1}),\ldots ,\min (a_{k-1},a_{2k-1}) \right\rangle \).

  • Let \({\widehat{S}}_2 = \left\langle \max (a_0,a_{k}), \max (a_1,a_{k+1}),\ldots ,\max (a_{k-1},a_{2k-1}) \right\rangle \).

  • \(\overrightarrow{S}_1={\overrightarrow{\textsf {Split}}}({\widehat{S}}_1)\), \(\overrightarrow{S}_2={\overrightarrow{\textsf {Split}}}({\widehat{S}}_2)\) and \(\overrightarrow{S}=(\overrightarrow{S}_1,\overrightarrow{S}_2)\).

Similarly, \(\overleftarrow{S} = {\overleftarrow{\textsf {Split}}}({\widehat{S}})\) sorts the array in a descending order. We refer to [8] for details.

To sort an array S of n elements, the algorithm first converts S into a bitonic sequence using the \(\textsf {Split}\) procedures in a bottom up fashion, similar to the structure of merge–sort. Specifically, any size-2 sequence is a bitonic sequence. In each iteration \(i =1,\ldots ,\log n-1\), the algorithm merges each pair of size-\(2^i\) bitonic sequences into a size-\(2^{i+1}\) bitonic sequence. Toward this end, it uses the \({\overrightarrow{\textsf {Split}}}\) and \({\overleftarrow{\textsf {Split}}}\) alternately, as two sorted sequences \((\overrightarrow{S}_1,\overleftarrow{S}_2)\) form a bitonic sequence. The full Bitonic sort algorithm is presented below:

Algorithm A.2

\(\textsf {BitonicSort}(S)\)

  1. Convert S to a bitonic sequence: For \(i=1,\ldots ,\log n-1\):

     (a) Let \(S=({\widehat{S}}_0,\ldots ,{\widehat{S}}_{n/2^i-1})\) be the size-\(2^i\) bitonic sequences from the previous iteration.

     (b) For \(j=0,\ldots ,n/2^{i+1}-1\), \({\widehat{B}}_{j} = ({\overrightarrow{\textsf {Split}}}({\widehat{S}}_{2j}),{\overleftarrow{\textsf {Split}}}({\widehat{S}}_{2j+1}))\).

     (c) Set \(S = ({\widehat{B}}_0,\ldots ,{\widehat{B}}_{n/2^{i+1}-1})\).

  2. The array \({\widehat{S}}\) is now a bitonic sequence. Apply \(\overrightarrow{S}={\overrightarrow{\textsf {Split}}}({\widehat{S}})\) to obtain a sorted sequence.
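As a concrete illustration, the recursion above can be sketched in Python. This is our own sketch of the standard bitonic network operating on an in-memory list (not the two-disk model of the paper); `bitonic_split` plays the role of both \({\overrightarrow{\textsf {Split}}}\) and \({\overleftarrow{\textsf {Split}}}\) via a direction flag, and the input length is assumed to be a power of two.

```python
def bitonic_split(a, lo, n, ascending=True):
    """Split: sort the bitonic region a[lo:lo+n], ascending or descending."""
    if n == 1:
        return
    k = n // 2
    for i in range(lo, lo + k):
        # compare-and-swap the pair at distance k
        if (a[i] > a[i + k]) == ascending:
            a[i], a[i + k] = a[i + k], a[i]
    bitonic_split(a, lo, k, ascending)        # both halves are again bitonic
    bitonic_split(a, lo + k, k, ascending)

def bitonic_sort(a):
    """BitonicSort for a list whose length is a power of two."""
    n = len(a)
    size = 2
    while size <= n:
        for lo in range(0, n, size):
            # alternate directions so adjacent runs form bitonic sequences;
            # the final stage (size == n) sorts everything ascending
            ascending = size == n or (lo // size) % 2 == 0
            bitonic_split(a, lo, size, ascending)
        size *= 2
    return a
```

The `while` loop corresponds to the \(\log n\) iterations of step 1 together with the final Split of step 2.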

Locality and obliviousness It is easy to see that the sorting algorithm is oblivious, as all accesses to the memory are independent of the input data. For locality, first note that the procedures \({\overrightarrow{\textsf {Split}}}\) and \({\overleftarrow{\textsf {Split}}}\) are \((2,O(\log n))\)-local. No \(\textsf {move}\) operations are needed between recursive calls, as these can be executed one after another as iterations (using some vacuous reads). Thus, Algorithm A.2 is \((2,O(\log ^2n))\)-local, as it runs in \(\log n\) iterations, each invoking \({\overrightarrow{\textsf {Split}}}\) and \({\overleftarrow{\textsf {Split}}}\). Figure 3 gives a graphic representation of the algorithm for input size 8, and Fig. 4 illustrates its \((2,O(\log ^2n))\) locality.

Remark. Observe that in each pass of \({\overrightarrow{\textsf {Split}}}\) (or \({\overleftarrow{\textsf {Split}}}\)), a min/max operation is a read–compare–write operation. Thus, strictly speaking, each memory location is accessed twice for this operation—once for reading and once for writing. When the write is performed, the read/write head has already moved forward and is thus not writing back to the same two locations that it read from. Going back to the same two locations would incur an undesirable move head operation. However, we can easily convert this into a solution that still preserves (2, O(1))-locality for each pass of \({\overrightarrow{\textsf {Split}}}\) by introducing a slack after every memory location (and thus using twice the amount of storage). In this solution, every memory location \(a_i\) is followed by \(a_i'\); the entire array is stored as \(((a_0, a_0'), \ldots , (a_{n-1}, a_{n-1}'))\) where \(a_i\) stores real blocks and \(a_i'\) is a slack location. When \(a_i\) and \(a_j\) are compared, the results can be written to \(a_i'\) and \(a_j'\), respectively, without incurring a move operation. Before starting the next iteration, we can move the data from slack locations to the actual locations in a single pass, thus preserving (2, O(1))-locality for each pass of \({\overrightarrow{\textsf {Split}}}\) (and \({\overleftarrow{\textsf {Split}}}\)).
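A small Python model may help make the slack-cell layout concrete. This is our own illustration: `mem` interleaves real and slack cells, one compare pass at distance k writes its results only into slack cells, and one extra forward pass copies the slack cells back into the real cells (the paper's model has physical heads; here we model only the data movement).

```python
def compare_pass_with_slack(mem, k, ascending=True):
    """One bitonic compare pass over 2k elements stored with a slack cell
    after each real cell: mem[2*i] holds a_i and mem[2*i+1] is the slack a_i'.
    Results land in the slack cells, so a head never moves backward."""
    for i in range(k):
        x, y = mem[2 * i], mem[2 * (i + k)]              # forward reads
        lo, hi = (min(x, y), max(x, y)) if ascending else (max(x, y), min(x, y))
        mem[2 * i + 1], mem[2 * (i + k) + 1] = lo, hi    # writes go to slack
    for i in range(2 * k):
        mem[2 * i] = mem[2 * i + 1]                      # compact slack -> real
```

For example, on the bitonic array 1, 3, 5, 7, 8, 6, 4, 2 with k = 4, the pass leaves the two bitonic halves 1, 3, 4, 2 and 8, 6, 5, 7 in the real cells.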

Fig. 3

Bitonic sorting network for 8 inputs. Inputs enter at the left end, and outputs exit at the right end. When two numbers are joined by an arrow, they are compared and, if necessary, swapped so that the arrow points from the smaller number toward the larger one. This figure is modified from [44]

Fig. 4

Locality of Bitonic sort for 8 elements. The figure shows the allocation of the data in the two disks for an 8 element array. For each input, either a compare–and–swap operation is performed in the specified direction or the input is ignored as denoted by \(\bot \). The figure shows the first 3 passes out of the required 6 passes for 8 elements (see Fig. 3)

1.1 Concurrent Executions of Bitonic Sorts

In our constructions, we sometimes need to invoke Bitonic sorts on disjoint segments of equal size in an array. Let n be the array size and k be the segment size. If we naively sort each segment sequentially, we incur \((2, O((n/k) \cdot \log ^2 k))\) locality. We can save the factor n/k by running each step of the Bitonic sort over all instances before starting the next step. Each step requires a scan over the segments, so after finishing one segment, the memory heads are positioned right at the start of the next segment. It is not hard to see that this approach of “striped concurrent execution” achieves \((2, O(\log ^2 k))\) locality.
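Ignoring the disk-head model, the striped schedule can be sketched as follows (our own illustration, using the standard iterative formulation of the bitonic network; n and k are assumed to be powers of two). The two outer loops enumerate the compare passes; the `base` loop sweeps each pass across every segment before the next pass begins.

```python
def striped_bitonic_sort(a, k):
    """Sort every disjoint, aligned size-k segment of `a`, running each
    compare pass over ALL segments before starting the next pass."""
    n = len(a)
    size = 2
    while size <= k:                       # stages of the bitonic network
        dist = size // 2
        while dist >= 1:                   # one compare pass of the stage...
            for base in range(0, n, k):    # ...striped across every segment
                for i in range(base, base + k):
                    j = i ^ dist           # partner index within the segment
                    if j > i:
                        # direction of the bitonic run containing position i
                        up = ((i - base) & size) == 0
                        if (a[i] > a[j]) == up:
                            a[i], a[j] = a[j], a[i]
            dist //= 2
        size *= 2
    return a
```

Setting k = n recovers a plain Bitonic sort of the whole array.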

Theorem A.3

(Perfectly secure concurrent oblivious sorts with locality) Concurrent Bitonic sort can obliviously sort all disjoint size-k segments of a length-n array in \(O(n \cdot \log ^2 k)\) work and \((2,O(\log ^2k))\) locality.

Specifically, the \(k=n\) case is Theorem 2.2.

The Locality of Bucket Oblivious Sort [1]

In Sect. 2.3 we mentioned that Bitonic sort is locality-friendly. However, compared to AKS and Zig-zag sorts, Bitonic sort is asymptotically worse, although it performs much better in practice. When analyzing the asymptotic overhead of Range ORAM, using Bitonic sort incurs an additional logarithmic factor in bandwidth, so we look for an alternative oblivious sort that is locality-friendly and does not introduce this additional overhead.

In the following, we give an overview of the Bucket Oblivious Sort of [1]. The sort is statistically secure, locality-friendly, and only a \(\textsf {poly} \log \log \) factor away from the optimal oblivious sorts [4, 28]. Let Z be a statistical security parameter that controls the error probability.

The basic idea. The core idea of the algorithm is to assign each element a random bin and then route the elements to the bins obliviously through a butterfly network. In more detail, the n elements are divided into \(B=2n/Z\) buckets of size Z/2 each, and Z/2 dummy elements are added to each bucket. Now, imagine that these B buckets form the inputs of a butterfly network—for simplicity, assume B is a power of two. Each element is assigned uniformly at random to one of the B output buckets, represented by a key of \(\log B\) bits. The elements are then routed through the butterfly network to their respective destinations. Since the client can store two buckets locally at a time, at level i the client simply reads elements from two buckets that are at distance \(2^i\) from each other in level i, and writes them to two adjacent buckets in level \(i+1\). The decision of how to split the union of the two buckets from level i into two buckets in level \(i+1\) is made using the ith bit of each element’s key. See Fig. 5 for a graphical illustration.

The above algorithm is clearly oblivious, as the order in which the client reads and writes the buckets is fixed and independent of the input array. If no bucket overflows, all elements reach their assigned destinations. By setting Z appropriately, the overflow probability can be bounded.
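The routing logic can be sketched in Python as follows. This is our own illustration of the data movement only—the obliviousness of the real scheme comes from the fixed, input-independent schedule of bucket reads and writes, not from this code—and, for simplicity, we fix key bits from the least-significant end (level i pairs buckets whose indices differ in bit i), a standard butterfly variant of the MSB-first indexing used in [1]; with this convention every element ends up in the bucket whose index equals its key.

```python
import random

DUMMY = None

def merge_split(bkt0, bkt1, bit, Z):
    """MergeSplit: route the real elements of two level-i buckets into two
    level-(i+1) buckets according to bit `bit` of their destination keys,
    padding each output bucket back to Z slots with dummies."""
    reals = [e for e in bkt0 + bkt1 if e is not DUMMY]
    out0 = [e for e in reals if ((e[0] >> bit) & 1) == 0]
    out1 = [e for e in reals if ((e[0] >> bit) & 1) == 1]
    if len(out0) > Z or len(out1) > Z:
        raise OverflowError("bucket overflow")   # bounded by choosing Z
    return out0 + [DUMMY] * (Z - len(out0)), out1 + [DUMMY] * (Z - len(out1))

def random_bin_assignment(elems, Z):
    """Tag each element with a random key and route it to bucket `key`
    through a butterfly network of log B levels."""
    B = 2 * len(elems) // Z                  # number of buckets (power of two)
    keyed = [(random.randrange(B), v) for v in elems]
    # B buckets of Z slots each: Z/2 real elements, Z/2 dummies
    buckets = [keyed[j * (Z // 2):(j + 1) * (Z // 2)] + [DUMMY] * (Z // 2)
               for j in range(B)]
    for i in range(B.bit_length() - 1):      # level i fixes bit i of the index
        for j in range(B):
            if ((j >> i) & 1) == 0:          # pair bucket j with j + 2^i
                buckets[j], buckets[j | 1 << i] = merge_split(
                    buckets[j], buckets[j | 1 << i], i, Z)
    return buckets
```

If a bucket overflows, the routing fails; as discussed next, the overflow probability is bounded by choosing Z appropriately.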

Fig. 5

Oblivious random bin assignment with 8 buckets. The MergeSplit procedure takes elements from two buckets at level i and puts them into two buckets at level \(i+1\), according to the \((i+1)\)-th most significant bit of the keys. At level i, every \(2^{i}\) consecutive buckets are semi-sorted by the i most significant bits of the keys

After assigning the elements into the bins, the dummy elements are removed (using a linear scan), each output bucket is permuted, and all buckets are concatenated. Asharov et al. [1] showed that this implements an oblivious random permutation. Then, to obtain an oblivious sort, they also show that any non-oblivious comparison-based sorting algorithm can be applied to the output of the oblivious random permutation, and the resulting sorting algorithm is oblivious. They show:

Theorem B.1

[1] Assuming that the client has local memory of size O(1), there exists a statistically secure oblivious sort algorithm that, except with probability \(\approx e^{-Z/6}\), completes in \(O(n(\log n+Z)\log ^2 Z)\) bandwidth.

The locality of Bucket oblivious sort. While it is mentioned in [1] that the algorithm can be implemented with good locality, no analysis is given there. We now analyze the locality of this sort.

Each MergeSplit can be realized with a single invocation of Bitonic sort. Concretely, we first scan the two input buckets to count how many real elements should go to bucket \(A'_0\) versus \(A'_1\), then tag the correct number of dummy elements for each output bucket, and finally perform a Bitonic sort to move the elements obliviously. Next, we need to permute each output bucket obliviously with O(1) local storage. This can be done as follows. First, assign each element in a bucket a uniformly random label of \(c \cdot \log n\) bits for some constant \(c>3\). Then, obliviously sort the elements by their random labels using Bitonic sort. Since the labels are “short,” two labels may collide: the probability that a fixed pair of labels collides is \(n^{-c}\), so the probability of any collision is bounded by \({n \atopwithdelims ()2}n^{-c} < 1/n\) for \(c>3\). In case of a collision, we simply retry; repeating the process Z times yields a failure probability smaller than \(e^{-Z/6}\). Finally, run merge–sort. We have:
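The random-label permutation step can be sketched as follows (our own illustration; Python's `sorted` stands in for the Bitonic sort by label, so this models the retry logic but not the obliviousness).

```python
import random

def permute_bucket(bucket, c=4):
    """Permute a bucket via random labels: tag each element with a fresh
    (c * log n)-bit label, sort by label, and retry if two labels collide."""
    n = len(bucket)
    bits = max(1, c * max(1, (n - 1).bit_length()))
    while True:
        labels = [random.getrandbits(bits) for _ in bucket]
        if len(set(labels)) == n:            # all labels distinct:
            order = sorted(range(n), key=lambda i: labels[i])
            return [bucket[i] for i in order]
        # a collision occurred (probability < 1/n for c > 3): redraw labels
```

Conditioned on the labels being distinct, sorting by them applies a uniformly random permutation to the bucket.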

Theorem B.2

The Bucket oblivious sort implements the sorting functionality except with probability \(\approx e^{-Z/6}\); it uses O(1) client storage, consumes \(2n (\log n + Z)\log ^2Z\) work, and has \((3,O((\log n+Z)\log ^2 Z))\)-locality.

Theorem 7.1 is obtained by setting \(Z = \alpha \log {\lambda }\) for some super-constant function \(\alpha :=\omega (1)\) and for \(n \ge \log ^2 {\lambda }\). In that case, the probability of failure is negligible, and the algorithm uses \(O(n (\log n)\log ^2 (\alpha \log {\lambda }))\) work and \((3,O(\log n\log ^2 (\alpha \log {\lambda })))\)-locality.

Proof of Theorem B.2:

We show efficiency and locality.

Efficiency The algorithm starts with a linear scan that adds the dummy elements and tags each element with a random bin. Then, we have \(\log B\) iterations, where each iteration runs B/2 instances of Bitonic sort on 2Z elements each. This step can be bounded by \(\log B \cdot B/2 \cdot 2Z \cdot \log ^2(2Z)\), which is bounded by \(2 n\log n \log ^2 Z\). Finally, we also have the permutation of each bucket. We have B buckets, and to permute them with error probability \(e^{-Z/6}\) we have to run Z invocations of Bitonic sort; that is, this step costs \(B \cdot Z \cdot Z \log ^2 Z\), which is \(2nZ \log ^2 Z\). The final step is an invocation of merge–sort, which results in \(2n\log n\) work. Overall, the bucket sort algorithm performs \(2n (\log n + Z) \log ^2 Z\) work.

Locality The algorithm consists of a few linear scans and concurrent executions of Bitonic sort (i.e., we run Bitonic sort on pairs of buckets concurrently). In “Appendix A.1,” we show that concurrent execution of Bitonic sort can sort disjoint size-Z segments of a length-n array in \(O(n \log ^2 Z)\) work and \((2,O(\log ^2 Z))\) locality. We have \(\log B\) levels in the algorithm, which results in \(O(n \log n\log ^2 Z)\) work and \((2, O(\log n\log ^2Z))\) locality. Permuting the bins takes \(O(\log ^2 Z)\) locality for one trial, and we have to repeat it Z times, which takes \(O(Z \log ^2 Z)\) locality in total. Finally, we invoke merge–sort. It is easy to see that merging two sorted arrays can be implemented with \((3, O(1))\) locality: we linearly scan the input arrays and write into the output array. In merge–sort, several pairs of arrays have to be merged at the same level; this is implemented in a manner similar to the concurrent executions of Bitonic sort. We conclude that merge–sort requires \(O(n \log n)\) time and \((3, O(\log n))\)-locality. Overall, the Bucket Oblivious Sort algorithm requires \(2n (\log n+Z)\log ^2 Z\) work and \((3, O((\log n+Z)\log ^2 Z))\)-locality. \(\square \)


Cite this article

Asharov, G., Chan, TH.H., Nayak, K. et al. Locality-Preserving Oblivious RAM. J Cryptol 35, 6 (2022). https://doi.org/10.1007/s00145-022-09419-1
