1 Introduction

Over the years, garbling methods [Yao86, LP09, AIK04, BHR12b, App17] have been extremely influential and have engendered an enormous number of applications in cryptography. Informally, garbling a function f and an input x yields a function encoding \(\widehat{f}\) and an input encoding \(\widehat{x}\). Given \(\widehat{f}\) and \(\widehat{x}\), there exists an efficient decoding algorithm that recovers f(x). The security property requires that \(\widehat{f}\) and \(\widehat{x}\) reveal nothing about f or x except f(x). By now, it is well established that realizing garbling schemes [BHR12b, App17] is an important cryptographic goal.

One shortcoming of standard garbling techniques has been that the size of the function encoding grows linearly in the size of the circuit computing the function and thus leads to large communication costs. Several methods have been devised to overcome this constraint.

  • Lu and Ostrovsky [LO13] addressed the question of garbling RAM program execution on a persistent garbled database. Here, the efficiency requirement is that the size of the function encoding grows only with the running time of the RAM program. This work has led to a fruitful line of research [GHL+14, GLOS15, GLO15, LO17] that reduces the communication cost to grow linearly with the running times of the executed programs, rather than the corresponding circuit sizes. A key benefit of this approach is that it has led to constructions based on one-way functions.

  • Goldwasser, Kalai, Popa, Vaikuntanathan, and Zeldovich [GKP+13] addressed the question of reducing the communication cost by reusing the encodings. Specifically, they provided a construction of reusable garbled circuits based on standard assumptions (namely, learning-with-errors). However, their construction requires the input encoding to grow with the depth of the circuit being garbled.

  • Finally, starting with Gentry, Halevi, Raykova, and Wichs [GHRW14], a collection of works [CHJV15, BGL+15, KLW15, CH16, CCHR16, ACC+16] have attempted to obtain garbling schemes where the size of the function encoding only grows with its description size and is otherwise independent of its running time on various inputs. However, these constructions are proven secure only assuming indistinguishability obfuscation [BGI+01, GGH+13].

A recurring theme in all the above research efforts has been the issue of adaptivity: Can the adversary adaptively choose the input after seeing the function encoding?

This task is trivial if one reveals both the function encoding and the input encoding together after the input is specified. However, it becomes highly non-trivial if we require the size of the input encoding to grow only with the size of the input, independent of the complexity of computing f. The first solution to this problem was provided by Bellare, Hoang and Rogaway [BHR12a] for the case of circuits in the random oracle model [BR93]. Subsequently, several adaptive circuit garbling schemes have been obtained in the standard model from (i) one-way functions [HJO+16, JW16, JKK+17],Footnote 1 or (ii) using laconic OT [GS18a] which relies on public-key assumptions [CDG+17, DG17, DGHM18, BLSV18].

However, constructing adaptively secure schemes for more communication constrained settings has proved much harder. In this paper, we focus on the case of RAM programs. More specifically, adaptively secure garbled RAM is known only using random oracles (e.g. [LO13, GLOS15]) or under very strong assumptions such as indistinguishability obfuscation [CCHR16, ACC+16]. In this work, we ask:

Can we realize adaptively secure garbled RAM from standard assumptions?

Further motivating the above question is the closely related application of constructing constant round secure RAM computation over a persistent database in the malicious setting. More specifically, as shown by Beaver, Micali and Rogaway [BMR90], garbling techniques can be used to realize constant round secure computation [Yao82, GMW87]. Similarly, the above-mentioned garbling schemes for RAM programs also yield constant round, communication efficient secure computation solutions [HY16, Mia16, GGMP16, KY18]. However, preserving persistence of RAM programs in the malicious setting requires the underlying garbling techniques to provide adaptive security.Footnote 2

1.1 Our Results

In this work, we obtain a construction of adaptively secure garbled RAM based on the assumption that laconic oblivious transfer [CDG+17] exists. Laconic oblivious transfer can be based on a variety of public-key assumptions such as (i) the Computational Diffie-Hellman Assumption [DG17], (ii) the Factoring Assumption [DG17], or (iii) the Learning-With-Errors Assumption [BLSV18, DGHM18]. In our construction, the size of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The main result in our paper is:

Theorem 1

(Informal). Assuming either the Computational Diffie-Hellman assumption or the Factoring assumption or the Learning-with-Errors assumption, there exists a construction of an adaptively secure garbled RAM scheme where the time required to garble a database, a program and an input grows linearly (up to polylogarithmic factors) with the size of the database, the running time of the program and the length of the input, respectively.Footnote 3

Additionally, plugging our adaptively secure garbled RAM scheme into a maliciously secure constant round secure computation protocol yields a maliciously secure constant round secure RAM computation protocol [IKO+11, ORS15, BL18, GS18b] for a persistent database. Again, this construction relies on the existence of laconic OT together with the assumptions needed for the underlying constant round protocol.

2 Our Techniques

In this section, we outline the main challenges and the techniques used in our construction of adaptive garbled RAM.

Starting Point. In a recent result, Garg and Srinivasan [GS18a] gave a construction of adaptively secure garbled circuits where the size of the input encoding grows only with the input and output lengths. The main idea behind their construction is a technique to “linearize” a garbled circuit. Informally, a garbled circuit is said to be linearized if the simulation of a particular garbled gate depends only on simulating one other gate (in other words, the simulation dependency graph is a line). In order to linearize a garbled circuit, their work transforms a circuit into a sequence of CPU step circuits that make read and write accesses at fixed locations in an external memory. The individual step circuits are garbled using a (plain) garbling scheme and the access to the memory is mediated using a laconic OT.Footnote 4 The use of laconic OT gives the above garbling scheme a “linear” structure wherein the simulation of a particular CPU step depends only on simulating the previous step circuit.

A Generalization. Though the approach of Garg and Srinivasan shares some similarities with garbling a RAM program (like garbling a sequence of CPU step circuits), there are some crucial differences.

  1. The first difference is that, unlike a circuit, the locations accessed by a RAM program are dynamically chosen depending on the program's input.

  2. The second difference is that the accessed locations might leak information about the program and the input, and a garbled RAM scheme must protect against such leakage.

As a first step towards constructing an adaptive garbled RAM scheme, we generalize the above approach of Garg and Srinivasan [GS18a] to obtain an adaptively secure garbled RAM scheme with a weaker security guarantee, namely that of unprotected memory access [GHL+14]. Informally, a garbled RAM scheme is said to have unprotected memory access if both the contents of the database and the memory locations that are accessed are revealed in the clear. This generalization is given in Sect. 4.

In the non-adaptive setting, there are standard transformations (outlined in [GHL+14]) from a garbled RAM with unprotected memory access to a standard garbled RAM scheme where both the memory contents and the access patterns are hidden. This transformation involves the additional use of an ORAM scheme. Somewhat surprisingly, these transformations fail in the adaptive setting! The details follow.

Challenges. To understand the main challenges, let us briefly explain how the security proof proceeds in the work of Garg and Srinivasan [GS18a]. In a typical construction of a garbled RAM program using a sequence of garbled circuits, one would expect the garbled circuits to be simulated from the first CPU step to the last. However, in the proof of [GS18a], the simulation is done in a rather unusual manner: from the last CPU step to the first. Of course, it is not possible to simulate the last CPU step directly. Thus, the process of simulating the last CPU step itself involves a sequence of hybrids that simulate and “un-simulate” the garblings of the previous CPU steps. Extending this approach so that the memory contents and the access patterns are both hidden faces the following two main challenges.

  • Challenge 1: In the Garg and Srinivasan construction [GS18a], memory contents were encrypted using one-time pads. Since the locations that each CPU step (for a circuit) reads from and writes to are fixed, the one-time pad corresponding to each location could be hardwired into the relevant CPU steps. On the other hand, in the case of RAM programs the accessed locations are dynamically chosen, and thus it is not possible to hard-wire the entire one-time pad into each CPU step, as this would blow up the size of these CPU steps.

    It is instructive to note that encrypting the memory using an encryption scheme and decrypting the read memory contents does not suffice. See more on this in the preliminary attempt below.

  • Challenge 2: In the non-adaptive setting, it is easy to amplify unprotected memory access security to a setting where memory accesses are hidden, using an oblivious RAM scheme [Gol87, Ost90, GO96]. However, in the adaptive setting this transformation turns out to be tricky. In a bit more detail, the Garg and Srinivasan [GS18a] approach of simulating CPU step circuits from the last to the first conflicts with the security of the ORAM scheme, where the simulation is typically done from the first to the last CPU step. We note here that the techniques of Canetti et al. [CCHR16] and Ananth et al. [ACC+16], though useful, do not apply directly to our setting. In particular, in the Canetti et al. [CCHR16] and Ananth et al. [ACC+16] constructions, CPU steps were obfuscated using an indistinguishability obfuscation scheme. Thus, in their scheme the obfuscation of any individual CPU step could be changed independently. For example, the PRF key used in any CPU step could be punctured independently of the other CPU steps. On the other hand, in our construction, even though each CPU step is garbled separately, its input labels are hardwired in the previous garbled circuit. Therefore, a change to a hardwired secret value (like puncturing a key) in a CPU step requires an intricate sequence of hybrids. For instance, in the example above, it is not possible to puncture the PRF key hardwired in a particular CPU step in one simple hybrid step. Instead, any change in this CPU step must also change the CPU step before it, and so on. In summary, in our case, any such change involves a new and intricate hybrid argument.

2.1 Solving Challenge 1

In this subsection, we describe our techniques to solve challenge 1.

Preliminary Attempt. A very natural approach to encrypting external memory would be to use a pseudorandom function to encrypt the memory content at each location. More precisely, a data value d in location L is encrypted using the key \(\mathsf {PRF}_K(L)\) where K is the PRF key. The key K for this pseudorandom function is hardwired in each CPU step so that it first decrypts the ciphertext that is read from the memory and uses the underlying data for further processing. This approach to solving Challenge 1 was in fact used in the works of Canetti et al. [CCHR16] and Ananth et al. [ACC+16] (and several other prior works) in a similar context. However, in order to use the security of this PRF, we must first remove the hardwired key from each of the CPU steps. This is easily achieved if we rely on indistinguishability obfuscation: indeed, a single hybrid change suffices to have the punctured key hardwired in each of the CPU steps. However, in our setting this does not work! In particular, we need to puncture the PRF key in each of the CPU step circuits by simulating them individually, and the delicate dependencies involved in garbling each CPU step cause the size of the garbled input to grow with the running time of the program.Footnote 5 For the same reason, the approaches of encrypting the memory by maintaining a tree of secret keys [GLOS15, GLO15] do not work.

Our New Idea: A Careful Timed Encryption Mechanism. The above attempts highlight the following aspect of garbling RAM programs securely. Prior approaches for garbling RAM programs use PRF keys that in some sense “decrease in power”Footnote 6 as the hybrid steps sequentially simulate the CPU steps, starting with the first CPU step and ending with the last. However, in the approach of [GS18a], the hybrids make a backward pass, from the last CPU step circuit to the first. Therefore, we need a mechanism wherein the hardwired encryption key in some sense “strengthens” from the first to the last CPU step.

Location vs. Time. In almost all garbled RAM constructions, the data stored at a particular location is encrypted using a location dependent key (e.g. [GLOS15]). This was not a problem when the keys were being weakened across CPU steps. However, in our case we need the keys to strengthen in power across CPU steps. Thus, we need a special purpose encryption scheme where the keys are derived based on time rather than location. Towards this goal, we construct a special purpose encryption scheme called a timed encryption scheme. Let us explain this in more detail.

Timed Encryption. A timed encryption scheme is just like any (plain) symmetric key encryption scheme except that every message is encrypted with respect to a timestamp. Additionally, there is a special key constraining algorithm that constrains a key so that it can only decrypt ciphertexts that were encrypted with respect to timestamps up to a given bound. The security requirement is that a constrained key does not help in distinguishing ciphertexts of two messages that are encrypted with respect to some future timestamp. We additionally require an encryption using a key constrained with respect to a timestamp \(\mathsf {time}\) to have the same distribution as an encryption using an unconstrained key, as long as the timestamp to which we are encrypting is less than or equal to \(\mathsf {time}\). For efficiency, we require that the size of a constrained key grows only with the length of the binary representation of the timestamp.

Solving Challenge 1. Timed encryption provides a natural approach to solving Challenge 1. In every CPU step, we hardwire a time constrained key that allows that CPU step to decrypt all the memory updates done by the prior CPU steps. The last CPU step has, in some sense, the most powerful key hardwired, i.e., it can decrypt the updates made by all the prior CPU steps, and the first CPU step has the least powerful key hardwired. Thus, the hardwired secret key strengthens from the first CPU step to the last. In the security proof, a backward pass simulating the CPU steps from the last to the first conforms well with the semantics and security properties of a timed encryption scheme: we remove the most powerful keys first, and the hardwired secret keys remaining in the earlier CPU steps do not help in distinguishing between encryptions of the actual value that is written and some junk value. We believe that the notion of timed encryption might have other applications and be of independent interest.

Constructing Timed Encryption. We give a construction of a timed encryption scheme from any one-way function. Towards this goal, we introduce a notion called a range constrained PRF. A range constrained PRF is a special constrained PRF [BW13] where the PRF key can be constrained to evaluate input points that fall within a particular range. The ranges that we are interested in are of the form [0, x]; that is, the constrained key can be used to evaluate the PRF on any \(y \in [0,x]\). For efficiency, we require that the size of the constrained key grows only with the length of the binary representation of x. Given such a PRF, we can construct a timed encryption scheme as follows. The key generation samples a range constrained PRF key. The encryption of a message m with respect to a timestamp \(\mathsf {time}\) proceeds by evaluating the PRF on \(\mathsf {time}\) to derive sk and then using sk as a key for a symmetric encryption scheme to encrypt the message m. The time constraining algorithm just constrains the PRF key with respect to the range \([0,\mathsf {time}]\). Thus, the goal of constructing a timed encryption scheme reduces to that of constructing a range constrained PRF. In this work, we give a construction of a range constrained PRF by adding a range constraining algorithm to the tree-based PRF scheme of Goldreich, Goldwasser and Micali [GGM86].
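To make this concrete, here is a minimal Python sketch of a range constrained key for the GGM PRF, consistent with the description above (the construction in the full version may differ in details). HMAC-SHA256 stands in for the two halves of the length-doubling PRG, the input length is fixed to 32 bits for brevity, and the function names are our own. The constrained key for \([0,x]\) consists of the keys of the left siblings hanging off the root-to-leaf path of x, plus the leaf key of x itself, so it contains at most one key per bit of x.

```python
import hmac, hashlib

DEPTH = 32  # bit-length of PRF inputs (e.g., timestamps)

def prg_half(key: bytes, bit: int) -> bytes:
    """G_bit(key): one half of the length-doubling PRG (HMAC stand-in)."""
    return hmac.new(key, bytes([bit]), hashlib.sha256).digest()

def ggm_eval(root: bytes, x: int) -> bytes:
    """Full-key GGM evaluation F_K(x), most significant bit first."""
    k = root
    for i in reversed(range(DEPTH)):
        k = prg_half(k, (x >> i) & 1)
    return k

def constrain_range(root: bytes, x: int) -> dict:
    """Constrained key for [0, x]: whenever the path to leaf x goes
    right, store the left sibling's key (it covers only inputs < x);
    finally store the leaf key of x itself."""
    ckey, k, prefix = {}, root, ''
    for i in reversed(range(DEPTH)):
        bit = (x >> i) & 1
        if bit == 1:
            ckey[prefix + '0'] = prg_half(k, 0)  # covers prefix||0||*
        k = prg_half(k, bit)
        prefix += str(bit)
    ckey[prefix] = k                             # the leaf x itself
    return ckey

def eval_constrained(ckey: dict, y: int) -> bytes:
    """Evaluate on y using the constrained key; succeeds iff y <= x."""
    bits = format(y, '0{}b'.format(DEPTH))
    for plen in range(DEPTH, -1, -1):            # longest covering prefix
        if bits[:plen] in ckey:
            k = ckey[bits[:plen]]
            for b in bits[plen:]:
                k = prg_half(k, int(b))
            return k
    raise ValueError('y lies outside the constrained range')

# Constrained evaluation agrees with the full key on the range:
K = bytes(32)
assert eval_constrained(constrain_range(K, 1000), 999) == ggm_eval(K, 999)
```

The constrained key stores at most DEPTH + 1 node keys, matching the efficiency requirement that its size grow only with the binary representation of x.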

2.2 Solving Challenge 2

Challenge 1 involves protecting the contents of the memory whereas challenge 2 involves protecting the access pattern. As mentioned before, in the non-adaptive setting, this problem is easily solved using an oblivious RAM scheme. However, in our setting we need an oblivious RAM scheme with some special properties.

The works of Canetti et al. [CCHR16] and Ananth et al. [ACC+16] define a property of ORAM schemes called strong localized randomness and use this property to hide their access patterns. Informally, an ORAM scheme is said to have the strong localized randomness property if the portions of the random tape used by the oblivious program for each memory access are disjoint. Further, the number of random-tape locations touched for each memory access must be polylogarithmic in the size of the database. These works further proved that the Chung-Pass ORAM scheme [CP13] satisfies the strong localized randomness property. Unfortunately, this strong localized randomness property alone is not sufficient for our purposes. Let us give the details.

To understand why the strong localized randomness property alone is not sufficient, we first recall the details of the Chung-Pass ORAM (henceforth denoted as CP ORAM) scheme. The CP ORAM is a tree-based ORAM scheme where the leaves of the tree are associated with the actual memory. A position map associates each data block in the memory with a random leaf node. Accessing a memory location involves first reading the position map to get the address of the leaf where this data block resides. Then, the path from the root to this particular leaf is traversed and the content of this data block is read. It is guaranteed that the data block is located somewhere along the path from the root to this leaf node. The read data block is then placed in the root, and the position map is updated so that another random leaf node is associated with this data block. To balance the memory, an additional flush operation is performed, but for the sake of this introduction we ignore this step. The CP ORAM scheme has strong localized randomness, as the randomness used in each memory access involves choosing a random leaf to update the position map. Let us now explain why this property alone is not sufficient for our purpose; a toy sketch of the access procedure follows.
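For intuition, the following toy Python sketch (our own simplification, not the actual scheme) mirrors the access procedure just described: read the position map, traverse the root-to-leaf path, move the found block to the root, and remap the block to a fresh random leaf. The flush step is omitted, and the position map is kept as a plain dictionary rather than being stored recursively in memory.

```python
import random

class ToyCPOram:
    def __init__(self, num_blocks: int):
        self.depth = max(1, num_blocks.bit_length())
        self.buckets = {}       # heap node id -> {block: value}
        self.pos = {b: self._rand_leaf() for b in range(num_blocks)}

    def _rand_leaf(self) -> int:
        return random.randrange(2 ** self.depth)

    def _path(self, leaf: int):
        """Heap-numbered node ids on the root-to-leaf path."""
        node, path = leaf + 2 ** self.depth, []
        while node >= 1:
            path.append(node)
            node //= 2
        return path

    def access(self, block: int, new_value=None):
        leaf = self.pos[block]                # read the position map
        value = None
        for node in self._path(leaf):         # scan the whole path
            bucket = self.buckets.get(node, {})
            if block in bucket:
                value = bucket.pop(block)     # remove the found block
        self.pos[block] = self._rand_leaf()   # remap to a fresh leaf
        self.buckets.setdefault(1, {})[block] = \
            value if new_value is None else new_value   # place in root
        return value                          # None if never written
```

The only randomness consumed by an access is the choice of the fresh leaf, which is what gives the scheme the localized-randomness structure formalized in Sect. 3.4.1.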

Recall that in the security proof of [GS18a], the CPU steps are simulated from the last step to the first. Simulating a CPU step involves changing the bit written by the step to some junk value and changing the accessed location to a random one. We can change the written bit to a junk value using the security of the timed encryption scheme; however, changing the accessed location to random is problematic. Note that the location being accessed in the CP ORAM is a random root-to-leaf path. However, the address of this leaf is stored in the memory via the position map. Therefore, to simulate a particular CPU step, we must first change the contents of the position map. This change must be performed in those CPU steps that last updated this memory location. Unfortunately, timed encryption is not useful in this setting, as we can use its security only after removing all the secret keys that are hardwired in the future time steps. However, in our case, the CPU steps that last updated this particular location might be so far in the past that removing all the intermediate encryption keys would blow up the cost of the input encoding to be as large as the program running time.

To solve this issue, we modify the Chung-Pass ORAM to additionally have the CPU steps encrypt the data block that is written using a puncturable PRF. Unlike the previous approaches of encrypting the data block with respect to the location, we encrypt it with respect to the time step that modifies the location. This helps in circumventing the above problem, as we can first puncture the PRF key (which in turn involves a careful sequence of hybrids) and use its security to change the position map to contain an encryption of a junk value instead of the actual address of the leaf node.Footnote 7 Once this change is done, the locations that the concerned CPU step accesses form a random root-to-leaf path.

3 Preliminaries

Let \(\lambda \) denote the security parameter. A function \(\mu (\cdot ) : \mathbb {N} \rightarrow \mathbb {R}^+\) is said to be negligible if for any polynomial \(\mathsf {poly}(\cdot )\) there exists \(\lambda _0 \in \mathbb {N}\) such that for all \(\lambda > \lambda _0\) we have \(\mu (\lambda ) < \frac{1}{\mathsf {poly}(\lambda )}\). For a probabilistic algorithm A, we denote by \(A(x;r)\) the output of A on input x with the content of the random tape being r. When r is omitted, A(x) denotes a distribution. For a finite set S, we denote by \(x \leftarrow S\) the process of sampling x uniformly from the set S. We will use PPT to denote Probabilistic Polynomial Time. We denote by [a] the set \(\{1,\ldots ,a\}\) and by \([a,b]\) the set \(\{a,a+1,\ldots ,b\}\) for \(a \le b\) and \(a,b \in \mathbb {Z}\). For a binary string \(x \in \{0,1\}^n\), we will denote the \(i^{th}\) bit of x by \(x_i\). We assume without loss of generality that the length of the random tape used by all cryptographic algorithms is \(\lambda \). We will use \(\mathsf {negl}(\cdot )\) to denote an unspecified negligible function and \(\mathsf {poly}(\cdot )\) to denote an unspecified polynomial function.

We assume the reader's familiarity with the notions of a puncturable PRF and selectively secure garbled circuits, and omit the formal definitions here due to lack of space.

3.1 Updatable Laconic Oblivious Transfer

In this subsection, we recall the definition of updatable laconic oblivious transfer from [CDG+17]. We generalize their definition to work for blocks of data instead of bits: the reads and the updates happen at the block level rather than at the bit level.

Definition 1

([CDG+17]). An updatable laconic oblivious transfer consists of the following algorithms:

  • \(\mathsf {crs}\leftarrow \mathsf {crsGen}(1^{\lambda },1^N){:}\) It takes as input the security parameter \(1^{\lambda }\) (encoded in unary) and a block size N and outputs a common reference string \(\mathsf {crs}\).

  • \((\mathsf {d},\widehat{D}) \leftarrow \mathsf {Hash}(\mathsf {crs},D){:}\) It takes as input the common reference string \(\mathsf {crs}\) and a database \(D \in \{\{0,1\}^N\}^*\) and outputs a digest \(\mathsf {d}\) and a state \(\widehat{D}\). We assume that the state \(\widehat{D}\) also includes the database D.

  • \(e \leftarrow \mathsf {Send}(\mathsf {crs},\mathsf {d},L,\{m_{i,0},m_{i,1}\}_{i\in [N]}){:}\) It takes as input the common reference string \(\mathsf {crs}\), a digest \(\mathsf {d}\), a location \(L \in \mathbb {N}\) and a set of messages \(m_{i,0},m_{i,1} \in \{0,1\}^{p(\lambda )}\) for every \(i \in [N]\), and outputs a ciphertext e.

  • \((m_1,\ldots ,m_{N}) \leftarrow \mathsf {Receive}^{\widehat{D}}(\mathsf {crs},e,L){:}\) This is a RAM algorithm with random read access to \(\widehat{D}\). It takes as input a common reference string \(\mathsf {crs}\), a ciphertext e, and a location \(L \in \mathbb {N}\) and outputs a set of messages \(m_1,\ldots ,m_{N}\).

  • \(e_w \leftarrow \mathsf {SendWrite}(\mathsf {crs},\mathsf {d}, L,\{b_i\}_{i\in [N]}, \{m_{j,0},m_{j,1}\}_{j = 1}^{|\mathsf {d}|}){:}\) It takes as input the common reference string \(\mathsf {crs}\), a digest \(\mathsf {d}\), and a location \(L \in \mathbb {N}\), bits \(b_i \in \{0,1\}\) for each \(i \in [N]\) to be written, and \(|\mathsf {d}|\) pairs of messages \(\{m_{j,0},m_{j,1}\}_{j = 1}^{|\mathsf {d}|}\), where each \(m_{j,c}\) is of length \(p(\lambda )\) and outputs a ciphertext \(e_w\).

  • \(\{m_j\}_{j = 1}^{|\mathsf {d}|} \leftarrow \mathsf {ReceiveWrite}^{\widehat{D}}(\mathsf {crs},L,\{b_i\}_{i\in [N]},e_w){:}\) This is a RAM algorithm with random read/write access to \(\widehat{D}\). It takes as input the common reference string \(\mathsf {crs}\), a location L, a set of bits \(b_1,\ldots ,b_{N} \in \{0,1\}\) and a ciphertext \(e_w\). It updates the state \(\widehat{D}\) (such that \(D[L] = b_1\ldots b_N\)) and outputs messages \(\{m_j\}_{j = 1}^{|\mathsf {d}|}\).

We require an updatable laconic oblivious transfer to satisfy the following properties.

  • Correctness: We require that for any database D of size at most \(M = \mathsf {poly}(\lambda )\), any memory location \(L \in [M]\), and any set of messages \((m_{i,0},m_{i,1}) \in \{0,1\}^{p(\lambda )}\) for each \(i \in [N]\), where \(p(\cdot )\) is a polynomial, we have that

    $$\begin{aligned} \Pr \left[ \begin{array}{l} \forall i \in [N],\,\, m_i = m_{i,D[L,i]}\\ \end{array}\begin{array}{|cl} \mathsf {crs}&{}\leftarrow \mathsf {crsGen}(1^\lambda ,1^N)\\ (\mathsf {d}, \widehat{D}) &{}\leftarrow \mathsf {Hash}(\mathsf {crs},D)\\ e &{}\leftarrow \mathsf {Send}(\mathsf {crs},\mathsf {d},L,\{m_{i,0},m_{i,1}\}_{i\in [N]})\\ (m_1,\ldots ,m_{N}) &{}\leftarrow \mathsf {Receive}^{\widehat{D}}(\mathsf {crs},e,L) \end{array} \right] = 1, \end{aligned}$$

    where D[Li] denotes the \(i^{th}\) bit in the \(L^{th}\) block of D.

  • Correctness of Writes: Let database D be of size at most \(M = \mathsf {poly}(\lambda )\) and let \(L \in [M]\) be any memory location. Let \(D^*\) be a database that is identical to D except that \(D^*[L,i] = b_i\) for all \(i \in [N]\), for some sequence of bits \(b_i \in \{0,1\}\). For any sequence of messages \(\{m_{j,0},m_{j,1}\}_{j \in [|\mathsf {d}|]}\) with each message in \(\{0,1\}^{p(\lambda )}\), we require that

    $$ \Pr \left[ \begin{array}{l} m_j' = m_{j,\mathsf {d}^*_{j}}\\ \forall j\in [|\mathsf {d}|] \end{array} \begin{array}{|cl} \mathsf {crs}&{}\leftarrow \mathsf {crsGen}(1^\lambda ,1^N)\\ (\mathsf {d}, \widehat{D}) &{}\leftarrow \mathsf {Hash}(\mathsf {crs},D)\\ (\mathsf {d}^*, \widehat{D}^*) &{}\leftarrow \mathsf {Hash}(\mathsf {crs},D^*)\\ e_w &{}\leftarrow \mathsf {SendWrite}(\mathsf {crs},\mathsf {d}, L,\{b_i\}_{i\in [N]}, \{m_{j,0},m_{j,1}\}_{j = 1}^{|\mathsf {d}|})\\ \{m_j'\}_{j=1}^{|\mathsf {d}|} &{} \leftarrow \mathsf {ReceiveWrite}^{\widehat{D}}(\mathsf {crs},L,\{b_i\}_{i\in [N]},e_w) \end{array} \right] = 1, $$
  • Sender Privacy: There exists a PPT simulator \(\mathsf {Sim}_{{\ell \mathsf {OT}}}\) such that for any non-uniform PPT adversary \(\mathcal {A}= (\mathcal {A}_1,\mathcal {A}_2)\) there exists a negligible function \(\mathsf {negl}(\cdot )\) s.t.,

    $$\begin{aligned} \big |\Pr [\mathsf {Expt}^{\mathsf {real}}(1^{\lambda },\mathcal {A}) = 1] - \Pr [\mathsf {Expt}^{\mathsf {ideal}}(1^{\lambda },\mathcal {A}) = 1] \big | \le \mathsf {negl}(\lambda )\end{aligned}$$

    where \(\mathsf {Expt}^{\mathsf {real}}\) and \(\mathsf {Expt}^{\mathsf {ideal}}\) are described in Fig. 1.

  • Sender Privacy for Writes: There exists a PPT simulator \(\mathsf {Sim}_{{\ell \mathsf {OT}\mathrm {W}}}\) such that for any non-uniform PPT adversary \(\mathcal {A}= (\mathcal {A}_1,\mathcal {A}_2)\) there exists a negligible function \(\mathsf {negl}(\cdot )\) s.t.,

    $$ \big |\Pr [\mathsf {WriSenPrivExpt}^{\mathsf {real}}(1^{\lambda },\mathcal {A}) = 1] - \Pr [\mathsf {WriSenPrivExpt}^{\mathsf {ideal}}(1^{\lambda },\mathcal {A}) = 1] \big | \le \mathsf {negl}(\lambda ) $$

    where \(\mathsf {WriSenPrivExpt}^{\mathsf {real}}\) and \(\mathsf {WriSenPrivExpt}^{\mathsf {ideal}}\) are described in Fig. 2.

  • Efficiency: The algorithm \(\mathsf {Hash}\) runs in time \(|D| \cdot \mathsf {poly}(\log |D|, \lambda )\). The algorithms \(\mathsf {Send}\), \(\mathsf {SendWrite}\), \(\mathsf {Receive}\), \(\mathsf {ReceiveWrite}\) run in time \(N \cdot \mathsf {poly}(\log |D|, \lambda )\).

Fig. 1. Sender privacy security game

Fig. 2. Sender privacy for writes security game

Theorem 2

([CDG+17, DG17, BLSV18, DGHM18]). Assuming either the Computational Diffie-Hellman assumption or the Factoring assumption or the Learning with Errors assumption, there exists a construction of updatable laconic oblivious transfer.

Remark 1

We note that the security requirements given in Definition 1 are stronger than those in [CDG+17], as we require the \(\mathsf {crs}\) to be generated before the adversary provides the database D and the location L. However, the construction in [CDG+17] already satisfies this definition since, in the proof, we can guess the location while incurring a 1/|D| loss in the security reduction.

3.2 Somewhere Equivocal Encryption

We now recall the definition of Somewhere Equivocal Encryption from the work of [HJO+16]. Informally, a somewhere equivocal encryption allows one to create a simulated ciphertext encrypting a message m with certain positions of the message being “fixed” and the other positions having a “hole”. The simulator can later fill these “holes” with arbitrary message values by deriving a suitable decryption key. The main efficiency requirement is that the size of the decryption key grows only with the number of “holes” and is otherwise independent of the message size. We give the formal definition below.
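The intended interaction with this interface can be summarized by the following hypothetical Python sketch (the method names mirror the algorithms defined below; this is not an implementation of the [HJO+16] scheme):

```python
# The ciphertext is committed before the messages at the "hole"
# positions I are known; the key derived afterwards makes the holes
# decrypt to the late-arriving messages.
def equivocate(sse, known_blocks, holes, late_blocks):
    # known_blocks: {i: m_i for i not in I};  holes: the index set I
    st, ct = sse.SimEnc(known_blocks, holes)  # ciphertext fixed now
    key = sse.SimKey(st, late_blocks)         # fill the holes later
    return sse.Dec(key, ct)                   # full message vector
```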

Definition 2

([HJO+16]). A somewhere equivocal encryption scheme with block-length s, message length n (in blocks) and equivocation parameter t (all polynomials in the security parameter) is a tuple of probabilistic polynomial-time algorithms \(\varPi = (\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec},\mathsf {SimEnc},\mathsf {SimKey})\) such that:

  • \(\mathsf {key}\leftarrow \mathsf {KeyGen}(1^{\lambda }){:}\) It is a PPT algorithm that takes as input the security parameter (encoded in unary) and outputs a key \(\mathsf {key}\).

  • \(\overline{c} \leftarrow \mathsf {Enc}(\mathsf {key},m_1\ldots m_n){:}\) It is a PPT algorithm that takes as input a key \(\mathsf {key}\) and a vector of messages \(\overline{m} = m_1\ldots m_n\) with each \(m_i \in \{0,1\}^s\) and outputs a ciphertext \(\overline{c}\).

  • \(\overline{m} \leftarrow \mathsf {Dec}(\mathsf {key},\overline{c}){:}\) It is a deterministic algorithm that takes as input a key \(\mathsf {key}\) and a ciphertext \(\overline{c}\) and outputs a vector of messages \(\overline{m} = m_1\ldots m_n\).

  • \((\mathsf {st},\overline{c})\leftarrow \mathsf {SimEnc}((m_i)_{i \notin I},I){:}\) It is a PPT algorithm that takes as input a set of indices \(I \subseteq [n]\) and a vector of messages \((m_i)_{i \notin I}\) and outputs a ciphertext \(\overline{c}\) and a state \(\mathsf {st}\).

  • \(\mathsf {key}' \leftarrow \mathsf {SimKey}(\mathsf {st},(m_i)_{i \in I}){:}\) It is a PPT algorithm that takes as input the state information \(\mathsf {st}\) and a vector of messages \((m_i)_{i \in I}\) and outputs a key \(\mathsf {key}'\).

and satisfies the following properties:

Correctness. For every \(\mathsf {key}\leftarrow \mathsf {KeyGen}(1^{\lambda })\), for every \(\overline{m} \in \{0,1\}^{s \times n}\) it holds that:

$$ \mathsf {Dec}(\mathsf {key},\mathsf {Enc}(\mathsf {key},\overline{m})) = \overline{m} $$

Simulation with No Holes. We require that the distribution of \((\overline{c},\mathsf {key})\) computed via \((\mathsf {st},\overline{c})\leftarrow \mathsf {SimEnc}(\overline{m},\emptyset )\) and \(\mathsf {key}\leftarrow \mathsf {SimKey}(\mathsf {st},\emptyset )\) is identical to that obtained via \(\mathsf {key}\leftarrow \mathsf {KeyGen}(1^{\lambda })\) and \(\overline{c} \leftarrow \mathsf {Enc}(\mathsf {key},m_1\ldots m_n)\). In other words, simulation when there are no holes (i.e., \(I = \emptyset \)) is identical to honest key generation and encryption.

Security. For any PPT adversary \(\mathcal {A}\), there exists a negligible function \(\nu = \nu (\lambda )\) such that:

$$ \big | \Pr [\mathsf {Exp}^{\mathsf {simenc}}_{\mathcal {A},\varPi }(1^{\lambda },0) = 1] - \Pr [\mathsf {Exp}^{\mathsf {simenc}}_{\mathcal {A},\varPi }(1^{\lambda },1) = 1]\big | \le \nu (\lambda ) $$

where the experiment \(\mathsf {Exp}^{\mathsf {simenc}}_{\mathcal {A},\varPi }\) is defined as follows:

Experiment \(\mathsf {Exp}^{\mathsf {simenc}}_{\mathcal {A},\varPi }(1^{\lambda },b)\)

  1. The adversary \(\mathcal {A}\) on input \(1^{\lambda }\) outputs a set \(I \subseteq [n]\) s.t. \(|I| < t\), a vector \((m_i)_{i \not \in I}\), and a challenge \(j \in [n] \setminus I\). Let \(I' = I \cup \{j\}\).

  2. • If \(b = 0\), compute \(\overline{c}\) as follows: \((\mathsf {st},\overline{c}) \leftarrow \mathsf {SimEnc}((m_i)_{i \not \in I},I)\).

     • If \(b = 1\), compute \(\overline{c}\) as follows: \((\mathsf {st},\overline{c}) \leftarrow \mathsf {SimEnc}((m_i)_{i \not \in I'},I')\).

  3. Send \(\overline{c}\) to the adversary \(\mathcal {A}\).

  4. The adversary \(\mathcal {A}\) outputs the set of remaining messages \((m_i)_{i \in I}\).

     • If \(b = 0\), compute \(\mathsf {key}\) as follows: \(\mathsf {key}\leftarrow \mathsf {SimKey}(\mathsf {st},(m_i)_{i \in I})\).

     • If \(b = 1\), compute \(\mathsf {key}\) as follows: \(\mathsf {key}\leftarrow \mathsf {SimKey}(\mathsf {st},(m_i)_{i \in I'})\).

  5. Send \(\mathsf {key}\) to the adversary.

  6. \(\mathcal {A}\) outputs \(b'\) which is the output of the experiment.

Theorem 3

([HJO+16]). Assuming the existence of one-way functions, there exists a somewhere equivocal encryption scheme for any polynomial message-length n, block-length s and equivocation parameter t, having key size \(t\cdot s \cdot \mathsf {poly}(\lambda )\) and ciphertext size \(n\cdot s \cdot \mathsf {poly}(\lambda )\) bits.

3.3 Random Access Machine (RAM) Model of Computation

We now describe the Random Access Machine (RAM) model of computation. Most of this subsection is taken verbatim from [CDG+17].

Notation for the RAM Model of Computation. The RAM model consists of a CPU and a memory storage of M blocks where each block has length N. The CPU executes a program that can access the memory by using read/write operations. In particular, for a program P with memory of size M, we denote the initial contents of the memory data by \(D \in \{\{0,1\}^N\}^M\). Additionally, the program gets a “short” input \(x \in \{0,1\}^n\), which we alternatively think of as the initial state of the program. We use |P| to denote the running time of program P. We use the notation \(P^D(x)\) to denote the execution of program P with initial memory contents D and input x. The program P can read from and write to various locations in memory D throughout its execution.Footnote 8

We will also consider the case where several different programs are executed sequentially and the memory persists between executions. We denote this process as \((y_1, \ldots , y_\ell ) = (P_1(x_1),\ldots ,P_\ell (x_\ell ))^D\) to indicate that first \(P_1^D(x_1)\) is executed, resulting in some memory contents \(D_1\) and output \(y_1\), then \(P_2^{D_1}(x_2)\) is executed resulting in some memory contents \(D_2\) and output \(y_2\) etc. As an example, imagine that D is a huge database and the programs \(P_i\) are database queries that can read and possibly write to the database and are parameterized by some values \(x_i\).

CPU-Step Circuit. Consider an execution of a RAM program which involves at most T CPU steps. We represent a RAM program P via T small CPU-Step Circuits each of which executes one CPU step. In this work we will denote one CPU step by:Footnote 9

$$C_{\mathsf {CPU}}^{P}(\mathsf {state},\mathsf {rData}) = (\mathsf {state}',\mathsf {R/W},L,\mathsf {wData})$$

This circuit takes as input the current CPU state \(\mathsf {state}\) and a block \(\mathsf {rData}\in \{0,1\}^N\). Looking ahead, the data \(\mathsf {rData}\) will be read from the memory location that was requested by the previous CPU step. The circuit outputs an updated state \(\mathsf {state}'\), a read or write indicator \(\mathsf {R/W}\), the next location to read/write \(L\in [M]\), and data \(\mathsf {wData}\) to write into that location (\(\mathsf {wData}=\bot \) when reading). The sequence of locations accessed during the execution of the program collectively forms what is known as the access pattern, namely \(\mathsf {MemAccess}= \{(\mathsf {R/W}^{\tau }, L^{\tau }) : \tau = 1,\ldots ,T\}\). We assume that the CPU state \(\mathsf {state}\) contains information about the location that the previous CPU step requested to read from. In particular, \(\mathsf {lastLocation}(\mathsf {state})\) outputs the location that the previous CPU step requested to read, and it is \(\bot \) if the previous CPU step was a write.

Note that in the description above without loss of generality we have made some simplifying assumptions. We assume that each CPU-step circuit always reads from or writes to some location in memory. This is easy to implement via a dummy read and write step. Moreover, we assume that the instructions of the program itself are hardwired into the CPU-step circuits.

Representing RAM computation by CPU-Step Circuits. The computation \(P^D(x)\) starts with the initial state set as \(\mathsf {state}_1 = x\). In each step \(\tau \in \{1,\ldots T\}\), the computation proceeds as follows: If \(\tau =1\) or \(\mathsf {R/W}^{\tau -1} = \mathsf {write}\), then \(\mathsf {rData}^{\tau } := \bot \); otherwise \(\mathsf {rData}^{\tau } := D[L^{\tau -1}]\). Next it executes the CPU-Step Circuit \(C_{\mathsf {CPU}}^{P,\tau }(\mathsf {state}^{\tau }, \mathsf {rData}^{\tau }) = (\mathsf {state}^{\tau +1}, \mathsf {R/W}^{\tau }, L^{\tau }, \mathsf {wData}^{\tau })\). If \(\mathsf {R/W}^{\tau } = \mathsf {write}\), then set \(D[L^{\tau }] = \mathsf {wData}^{\tau }\). Finally, when \(\tau = T\), then \(\mathsf {state}^{\tau +1}\) is the output of the program.
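The execution semantics above can be transcribed directly into code. In the following minimal sketch (our own notation), each step circuit is modeled as a function \((\mathsf {state},\mathsf {rData}) \mapsto (\mathsf {state}',\mathsf {R/W},L,\mathsf {wData})\):

```python
def run_ram(step_circuits, D, x):
    """Execute P^D(x) given its list of CPU-step circuits."""
    state, rw, loc = x, None, None             # state^1 = x
    for tau, step in enumerate(step_circuits):
        rdata = None if (tau == 0 or rw == 'write') else D[loc]
        state, rw, loc, wdata = step(state, rdata)
        if rw == 'write':
            D[loc] = wdata
    return state                               # state^{T+1} is the output

# Memory persistence: (P_1(x_1), ..., P_l(x_l))^D is simply repeated
# calls of run_ram on the same (mutated) list D.
```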

3.4 Oblivious RAM

In this subsection, we recall the definition of oblivious RAM [Gol87, Ost90, GO96].

Definition 3

(Oblivious RAM). An Oblivious RAM scheme consists of two procedures \((\mathsf {OProg},\mathsf {OData})\) with the following syntax:

  • \({P^*} \leftarrow \mathsf {OProg}(1^\lambda ,1^{\log M},1^T,P)\): Given a security parameter \(\lambda \), a memory size \(M\) and a program P that runs in time \(T\), \(\mathsf {OProg}\) outputs a probabilistic oblivious program \({P^*}\) that can access \(D^*\) as RAM. A probabilistic RAM program is modeled exactly as a deterministic program except that each step circuit additionally takes random coins as input.

  • \(D^*\leftarrow \mathsf {OData}(1^{\lambda },D){:}\) Given the security parameter \(\lambda \) and the contents of the database \(D \in \{\{0,1\}^N\}^M\), it outputs the oblivious database \(D^*\). For convenience, we assume that \(\mathsf {OData}\) works by compiling a program P that writes D to the memory using \(\mathsf {OProg}\) to obtain \(P^*\). It then evaluates the program \(P^*\) using a uniform random tape and outputs the contents of the memory as \(D^*\).

Efficiency. We require that the run-time of \(\mathsf {OData}\) should be \(M\cdot N \cdot \mathsf {poly}(\log (MN))\cdot \mathsf {poly}(\lambda )\), and the run-time of \(\mathsf {OProg}\) should be \(T\cdot \mathsf {poly}(\lambda )\cdot \mathsf {poly}(\log (MN))\). Finally, the oblivious program \(P^*\) itself should run in time \(T' = T\cdot \mathsf {poly}(\lambda )\cdot \mathsf {poly}(\log (MN))\). Both the new memory size \(M'=|D^*|\) and the running time \(T'\) should be efficiently computable from \(M, N, T,\) and \(\lambda \).

Correctness. Let \(P_1,\ldots ,P_\ell \) be programs running in polynomial times \(t_1,\ldots ,t_\ell \) on memory D of size \(M\). Let \(x_1,\ldots ,x_\ell \) be the inputs and \(\lambda \) be a security parameter. Then we require that:

$$\Pr [{(P^*_1(x_1),\ldots ,P^*_\ell (x_\ell ))}^{D^*} = (P_1(x_1),\ldots ,P_\ell (x_\ell ))^D] = 1 $$

where \(D^*\leftarrow \mathsf {OData}(1^\lambda ,D)\), \(P^*_i \leftarrow \mathsf {OProg}(1^\lambda ,1^{\log M},1^T,P_i)\) and \((P^*_1(x_1),\ldots , P^*_\ell (x_\ell ))^{D^*}\) indicates running the ORAM programs on \(D^*\) sequentially using a uniform random tape.

Security. For security, we require that there exists a PPT simulator \(\mathsf {Sim}\) such that for any sequence of programs \({P_1,\ldots ,P_\ell }\) (running in time \(t_1,\ldots ,t_{\ell }\) respectively), initial memory data \(D\in \{\{0,1\}^N\}^M\), and inputs \({x_1,\ldots ,x_\ell }\) we have that:

$$\mathsf {MemAccess}{\mathop {\approx }\limits ^{s}} \mathsf {Sim}(1^\lambda , \{1^{t_i}\}_{i=1}^\ell )$$

where \((y_1,\ldots , y_\ell ) = (P_1(x_1),\ldots ,P_\ell (x_\ell ))^D\), \(D^*\leftarrow \mathsf {OData}(1^\lambda ,1^N,D)\), \(P^*_i \leftarrow \mathsf {OProg}(1^\lambda ,1^{\log M},1^T,P_i)\) and \(\mathsf {MemAccess}\) corresponds to the access pattern of the CPU-step circuits during the sequential execution of the oblivious programs \((P^*_1(x_1),\ldots , P^*_\ell (x_\ell ))^{D^*}\) using a uniform random tape.

3.4.1 Strong Localized Randomness

For our construction of adaptively secure garbled RAM, we need an additional property of the ORAM scheme called strong localized randomness [CCHR16]. We need a slightly stronger formalization than the one given in [CCHR16] (refer to Footnote 10).

Strong Localized Randomness. Let \(D \in \{\{0,1\}^N\}^M\) be any database and (Px) be any program/input pair. Let \(D^*\leftarrow \mathsf {OData}(1^\lambda ,1^N,D)\) and \({P^*} \leftarrow \mathsf {OProg}(1^\lambda ,1^{\log M},1^T,P)\). Further, let the step circuits of \(P^*\) be indicated by \(\{C_{\mathsf {CPU}}^{P^*,\tau }\}_{\tau \in [T']}\). Let R be the contents of the random tape used in the execution of \(P^*\).

Definition 4

([CCHR16]). We say that an ORAM scheme has the strong localized randomness property if there exists a sequence of efficiently computable values \(\tau _1< \tau _2<\ldots < \tau _m\) where \(\tau _1 = 1\), \(\tau _m = T'\) and \(\tau _t -\tau _{t-1} \le \mathsf {poly}(\log MN)\) for all \(t \in [2,m]\) such that:

  1. 1.

    For every \(j \in [m-1]\) there exists an interval \(I_j\) (efficiently computable from j) of size \(\mathsf {poly}(\log MN,\lambda )\) s.t. for any \(\tau \in [\tau _{j},\tau _{j+1})\), the random tape accessed by \(C_{\mathsf {CPU}}^{P^*,\tau }\) is given by \(R_{I_j}\) (here, \(R_{I_j}\) denotes the random tape restricted to the interval \(I_j\)).

  2. 2.

    For every \(j,j' \in [m-1]\) and \(j \ne j'\), \(I_j \cap I_{j'} = \emptyset \).

  3. 3.

    Further, for every \(j \in [m]\), there exists a \(k < j\) such that given \(R_{\setminus \{I_k \cup I_j\}}\) (where \(R_{\setminus \{I_k \cup I_j\}}\) denotes the content of the random tape except in positions \(I_j \cup I_k\)) and the output of the step circuits \(C_{\mathsf {CPU}}^{P^*,\tau }\) for \(\tau \in [\tau _k,\tau _{k+1})\), the memory accesses made by the step circuits \(C_{\mathsf {CPU}}^{P^*,\tau }\) for \(\tau \in [\tau _{j},\tau _{j+1})\) are computationally indistinguishable from random. This k is efficiently computable given the program P and the input x.Footnote 10
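As a toy illustration of conditions 1 and 2 (our own example, not part of the definition): if the j-th memory access consumes a dedicated block of fresh tape bits, say the bits selecting one new leaf in a tree-based ORAM, then the intervals \(I_j\) are pairwise disjoint and efficiently computable from j alone.

```python
def interval(j: int, leaf_bits: int) -> range:
    """I_j when access j consumes its own `leaf_bits` fresh tape bits."""
    return range(j * leaf_bits, (j + 1) * leaf_bits)

assert set(interval(2, 128)).isdisjoint(interval(3, 128))
```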

We argue in the full version of our paper that the Chung-Pass ORAM scheme [CP13], where the contents of the database are encrypted using a special encryption scheme, satisfies the above definition of strong localized randomness. We now give details on this special encryption scheme. The key generation samples a puncturable PRF key \(K \leftarrow \mathsf {PP.KeyGen}(1^{\lambda })\). If the \(\tau ^{th}\) step-circuit has to write a value \(\mathsf {wData}\) to a location L, it first samples \(r \leftarrow \{0,1\}^{\lambda }\) and computes \(c = (\tau \Vert r, \mathsf {PP.Eval}(K,\tau \Vert r) \oplus \mathsf {wData})\). It writes c to location L. The decryption algorithm uses K to first compute \(\mathsf {PP.Eval}(K,\tau \Vert r)\) and uses it to compute \(\mathsf {wData}\).
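A short sketch of this write-encryption is given below (HMAC stands in for \(\mathsf {PP.Eval}\), puncturing is not shown, and data blocks are assumed to fit in one PRF output; these simplifications are ours). The key point is that the pad is derived from the time step \(\tau \) of the write, not from the location L.

```python
import os, hmac, hashlib

def pp_eval(K: bytes, inp: bytes) -> bytes:
    """Stand-in for the puncturable PRF evaluation PP.Eval(K, inp)."""
    return hmac.new(K, inp, hashlib.sha256).digest()

def write_enc(K: bytes, tau: int, wdata: bytes):
    """Encrypt wData (<= 32 bytes here) under the time step tau."""
    r = os.urandom(16)                          # r <- {0,1}^lambda
    tag = tau.to_bytes(8, 'big') + r            # encodes tau || r
    pad = pp_eval(K, tag)[:len(wdata)]
    return (tag, bytes(a ^ b for a, b in zip(wdata, pad)))

def read_dec(K: bytes, c) -> bytes:
    tag, ct = c
    pad = pp_eval(K, tag)[:len(ct)]
    return bytes(a ^ b for a, b in zip(ct, pad))
```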

Remark 2

For the syntax of the ORAM scheme to be consistent with this special encryption scheme, we will use a puncturable PRF to generate the random tape of \(P^*\). This key will also implicitly be used to derive the key for this special encryption scheme.

3.5 Adaptive Garbled RAM

We now give the definition of adaptive garbled RAM.

Definition 5

An adaptive garbled RAM scheme \(\mathsf {GRAM}\) consists of the following PPT algorithms satisfying the correctness, efficiency and security properties described below.

  • \(\mathsf {GRAM.Memory}(1^{\lambda }, D)\): It is a PPT algorithm that takes the security parameter \(1^{\lambda }\) and a database \(D \in \{0,1\}^M\) as input and outputs a garbled database \(\widetilde{D}\) and a secret key SK.

  • \(\mathsf {GRAM.Program}(SK,i,P)\): It is a PPT algorithm that takes as input a secret key SK, a sequence number i, and a program P as input (represented as a sequence of CPU steps) and outputs a garbled program \(\widetilde{P}\).

  • \(\mathsf {GRAM.Input}(SK,i,x)\): It is a PPT algorithm that takes as input a secret key SK, a sequence number i and a string x as input and outputs the garbled input \(\widetilde{x}\).

  • \(\mathsf {GRAM.Eval}^{\widetilde{D}}(\mathsf {st},\widetilde{P},\widetilde{x})\): It is a RAM program with random read/write access to \(\widetilde{D}\). It takes the state information \(\mathsf {st}\), a garbled program \(\widetilde{P}\) and a garbled input \(\widetilde{x}\) as input and outputs a string y and an updated database \(\widetilde{D}'\).

Correctness. We say that a garbled RAM \(\mathsf {GRAM}\) is correct if for every database D, every \(t = \mathsf {poly}(\lambda )\) and every sequence of program/input pairs \(\{(P_1,x_1),\ldots ,(P_t,x_t)\}\) we have that

$$ \Pr [\mathsf {Expt}_{\mathsf {correctness}}(1^{\lambda },\mathsf {GRAM}) = 1] \le \mathsf {negl}(\lambda ) $$

where \(\mathsf {Expt}_{\mathsf {correctness}}\) is defined in Fig. 3.

Fig. 3. Correctness experiment for \(\mathsf {GRAM}\)

Adaptive Security. We say that \(\mathsf {GRAM}\) satisfies adaptive security if there exist (stateful) simulators \((\mathsf {SimD},\mathsf {SimP},\mathsf {SimIn})\) such that for all \(t = \mathsf {poly}(\lambda )\) and all polynomial time (stateful) adversaries \(\mathcal {A}\), we have that

$$ \left| \Pr [\mathsf {Expt}_{\mathsf {real}}(1^{\lambda },\mathsf {GRAM},\mathcal {A}) = 1] - \Pr [\mathsf {Expt}_{\mathsf {ideal}}(1^{\lambda },\mathsf {Sim},\mathcal {A}) = 1] \right| \le \mathsf {negl}(\lambda )$$

where \(\mathsf {Expt}_{\mathsf {real}},\mathsf {Expt}_{\mathsf {ideal}}\) are defined in Fig. 4.

Fig. 4. Adaptive security experiment for \(\mathsf {GRAM}\)

Efficiency. We require the following efficiency properties from a \(\mathsf {GRAM}\) scheme.

  • The running time of \(\mathsf {GRAM.Memory}\) should be bounded by \(M\cdot \mathsf {poly}(\log M) \cdot \mathsf {poly}(\lambda )\).

  • The running time of \(\mathsf {GRAM.Program}\) should be bounded by \(T\cdot \mathsf {poly}(\log M) \cdot \mathsf {poly}(\lambda )\) where T is the number of CPU steps in the description of the program P.

  • The running time of \(\mathsf {GRAM.Input}\) should be bounded by \(|x| \cdot \mathsf {poly}(\log M, \log T) \cdot \mathsf {poly}(\lambda )\).

  • The running time of \(\mathsf {GRAM.Eval}\) should be bounded by \(T\cdot \mathsf {poly}(\log M) \cdot \mathsf {poly}(\lambda )\) where T is the number of CPU steps in the description of the program P.

4 Adaptive Garbled RAM with Unprotected Memory Access

Towards our goal of constructing an adaptive garbled RAM, we first construct an intermediate primitive with weaker security guarantees. We call this primitive adaptive garbled RAM with unprotected memory access. Informally, a garbled RAM scheme has unprotected memory access if both the contents of the database and the accesses to the database are revealed in the clear to the adversary. We differ from the security definition given in [GHL+14] in three aspects. Firstly, we give an indistinguishability style definition for security, whereas [GHL+14] give a simulation style definition. The indistinguishability based definition makes it easier to obtain full-fledged adaptive security later. Secondly, and most importantly, we allow the adversary to adaptively choose the inputs based on the garbled program. Thirdly, we also require the garbled RAM scheme to satisfy a special property called equivocability. Informally, equivocability requires that the real garbling of a program P is indistinguishable from a simulated garbling where the simulator is not provided with the description of the step circuits for a certain number of time steps (this number is given by the equivocation parameter). Later, when the input is specified, the simulator is given the outputs of these step circuits and must come up with an appropriate garbled input.

We now give the formal definition of this primitive.

Definition 6

An adaptive garbled RAM scheme with unprotected memory access \(\mathsf {UGRAM}\) consists of the following PPT algorithms satisfying the correctness, efficiency and security properties.

  • \(\mathsf {UGRAM.Memory}(1^{\lambda },1^n, D)\): It is a PPT algorithm that takes the security parameter \(1^{\lambda }\), an equivocation parameter n and a database \(D \in \{\{0,1\}^N\}^M\) as input and outputs a garbled database \(\widetilde{D}\) and a secret key SK.

  • \(\mathsf {UGRAM.Program}(SK,i,P)\): It is a PPT algorithm that takes as input a secret key SK, a sequence number i, and a program P as input (represented as a sequence of CPU steps) and outputs a garbled program \(\widetilde{P}\).

  • \(\mathsf {UGRAM.Input}(SK,i,x)\): It is a PPT algorithm that takes as input a secret key SK, a sequence number i and a string x as input and outputs the garbled input \(\widetilde{x}\).

  • \(\mathsf {UGRAM.Eval}^{\widetilde{D}}(\mathsf {st},\widetilde{P},\widetilde{x})\): It is a RAM program with random read/write access to \(\widetilde{D}\). It takes the state information \(\mathsf {st}\), a garbled program \(\widetilde{P}\) and a garbled input \(\widetilde{x}\) as input and outputs a string y and an updated database \(\widetilde{D}'\).

Correctness. We say that a garbled RAM \(\mathsf {UGRAM}\) is correct if for every database D, every \(t = \mathsf {poly}(\lambda )\) and every sequence of program/input pairs \(\{(P_1,x_1),\ldots ,(P_t,x_t)\}\) we have that

$$ \Pr [\mathsf {Expt}_{\mathsf {correctness}}(1^{\lambda },\mathsf {UGRAM}) = 1] \le \mathsf {negl}(\lambda ) $$

where \(\mathsf {Expt}_{\mathsf {correctness}}\) is defined in Fig. 5.

Fig. 5. Correctness experiment for \(\mathsf {UGRAM}\)

Security. We require the following two properties to hold.

  • Equivocability. There exists a simulator \(\mathsf {Sim}\) such that for any non-uniform PPT stateful adversary \(\mathcal {A}\) and \(t = \mathsf {poly}(\lambda )\) we require that:

    $$ \left| \Pr [\mathsf {Expt}_{\mathsf {equiv}}(1^{\lambda },\mathcal {A},0) = 1] - \Pr [\mathsf {Expt}_{\mathsf {equiv}}(1^{\lambda },\mathcal {A},1) = 1] \right| \le \mathsf {negl}(\lambda ) $$

    where \(\mathsf {Expt}_{\mathsf {equiv}}(1^{\lambda },\mathcal {A},b)\) is described in Fig. 6.

  • Adaptive Security. For any non-uniform PPT stateful adversary \(\mathcal {A}\) and \(t = \mathsf {poly}(\lambda )\) we require that:

    $$ \left| \Pr [\mathsf {Expt}_{\mathsf {UGRAM}}(1^{\lambda },\mathcal {A},0) = 1] - \Pr [\mathsf {Expt}_{\mathsf {UGRAM}}(1^{\lambda },\mathcal {A},1) = 1] \right| \le \mathsf {negl}(\lambda ) $$

    where \(\mathsf {Expt}_{\mathsf {UGRAM}}(1^{\lambda },\mathcal {A},b)\) is described in Fig. 7.

Fig. 6. \(\mathsf {Expt}_{\mathsf {equiv}}(1^{\lambda },\mathcal {A},b)\)

Fig. 7. \(\mathsf {Expt}_{\mathsf {UGRAM}}(1^{\lambda },\mathcal {A},b)\)

Efficiency. We require the following efficiency properties from a \(\mathsf {UGRAM}\) scheme.

  • The running time of \(\mathsf {UGRAM.Memory}\) should be bounded by \(MN\cdot \mathsf {poly}(\log MN) \cdot \mathsf {poly}(\lambda )\).

  • The running time of \(\mathsf {UGRAM.Program}\) should be bounded by \(T\cdot \mathsf {poly}(\log MN) \cdot \mathsf {poly}(\lambda )\) where T is the number of CPU steps in the description of the program P.

  • The running time of \(\mathsf {UGRAM.Input}\) should be bounded by \(n \cdot |x| \cdot \mathsf {poly}(\log MN, \log T) \cdot \mathsf {poly}(\lambda )\).

  • The running time of \(\mathsf {UGRAM.Eval}\) should be bounded by \(T\cdot \mathsf {poly}(\log MN,\log T) \cdot \mathsf {poly}(\lambda )\) where T is the number of CPU steps in the description of the program P.

4.1 Construction

In this subsection, we give a construction of adaptive garbled RAM with unprotected memory access from updatable laconic oblivious transfer, somewhere equivocal encryption and a selectively secure garbling scheme for circuits, using the techniques developed in the construction of adaptively secure garbled circuits [GS18a]. Our main theorem is:

Theorem 4

Assuming the existence of updatable laconic oblivious transfer, somewhere equivocal encryption, a pseudorandom function and a selectively secure garbling scheme for circuits, there exists a construction of adaptive garbled RAM with unprotected memory access.

Construction. We give the formal description of the construction in Fig. 8. We use a somewhere equivocal encryption scheme with block length set to \(|\widetilde{\mathsf {SC}}_{\tau }|\), where \(\widetilde{\mathsf {SC}}_{\tau }\) denotes the garbled version of the step circuit \(\mathsf {SC}\) described in Fig. 9, message length set to T (the running time of the program P), and equivocation parameter set to \(t + \log T\), where t is the actual equivocation parameter of the \(\mathsf {UGRAM}\) scheme.

Fig. 8. Adaptive garbled RAM with unprotected memory access

Fig. 9. Description of the step circuit

Correctness. The correctness of the above construction follows from a simple inductive argument showing that for each step \(\tau \in [|P|]\), the \(\mathsf {state}\) and the database are updated correctly at the end of the execution of \(\widetilde{\mathsf {SC}}_{\tau }\). The base case is \(\tau = 0\). In order to prove the inductive step for a step \(\tau \), observe that if step \(\tau \) outputs a read, then the labels recovered in Step 4.(c).(ii) of \(\mathsf {SS}\text {-}\mathsf {EvalCkt}\) correspond to the data block in the requested location. Otherwise, the labels recovered in Step 4.(b).(ii) of \(\mathsf {SS}\text {-}\mathsf {EvalCkt}\) correspond to the updated value of the digest after the corresponding block is written to the database.

Efficiency. The efficiency of our construction follows directly from the efficiency of updatable laconic oblivious transfer and the parameters set for the somewhere equivocal encryption. In particular, the running time of \(\mathsf {UGRAM.Memory}\) is \(|D|\cdot \mathsf {poly}(\lambda )\), that of \(\mathsf {UGRAM.Program}\) is \(T \cdot \mathsf {poly}(\log MN,\lambda )\) and that of \(\mathsf {UGRAM.Input}\) is \(n |x| \cdot \mathsf {poly}(\log M,\log T,\lambda )\). The running time of \(\mathsf {UGRAM.Eval}\) is \(T \cdot \mathsf {poly}(\log M,\log T, \lambda )\).

Security. We prove the security of this construction in the full version of our paper.

5 Timed Encryption

In this section, we give the definition and construction of a timed encryption scheme. We will use a timed encryption scheme in the construction of adaptive garbled RAM in the next section.

A timed encryption scheme is a symmetric key encryption scheme with some special properties. In this encryption scheme, every message is encrypted with respect to a timestamp \(\mathsf {time}\). Additionally, there is a special algorithm called constrain that takes an encryption key K and a timestamp \(\mathsf {time}'\) as input and outputs a time constrained key \(K[\mathsf {time}']\). A time constrained key \(K[\mathsf {time}']\) can be used to decrypt any ciphertext that is encrypted with respect to a timestamp \(\mathsf {time}\le \mathsf {time}'\). For security, we require that knowledge of a time constrained key does not help an adversary distinguish between encryptions of two messages that are encrypted with respect to some future timestamp.

Definition 7

A timed encryption scheme is a tuple of algorithms \((\mathsf {TE.KeyGen},\mathsf {TE.Enc},\mathsf {TE.Dec},\mathsf {TE.Constrain})\) with the following syntax.

  • \(\mathsf {TE.KeyGen}(1^{\lambda }){:}\) It is a randomized algorithm that takes the security parameter \(1^{\lambda }\) and outputs a key K.

  • \(\mathsf {TE.Constrain}(K,\mathsf {time}){:}\) It is a deterministic algorithm that takes a key K and a timestamp \(\mathsf {time}\in [0,2^{\lambda }-1]\) and outputs a time-constrained key \(K[\mathsf {time}]\).

  • \(\mathsf {TE.Enc}(K,\mathsf {time},m){:}\) It is a randomized algorithm that takes a key K, a timestamp \(\mathsf {time}\) and a message m as input and outputs a ciphertext c or \(\bot \).

  • \(\mathsf {TE.Dec}(K,c){:}\) It is a deterministic algorithm that takes a key K and a ciphertext c as input and outputs a message m.

We require a timed encryption scheme to satisfy the following properties.

Correctness. We require that for all messages m and for all timestamps \(\mathsf {time}_1 \le \mathsf {time}_2 \):

$$ \Pr [\mathsf {TE.Dec}(K[\mathsf {time}_2],c) = m] = 1 $$

where \(K \leftarrow \mathsf {TE.KeyGen}(1^{\lambda })\), \(K[\mathsf {time}_2]:=\mathsf {TE.Constrain}(K,\mathsf {time}_2)\) and \(c \leftarrow \mathsf {TE.Enc}(K,\mathsf {time}_1,m)\).

Encrypting with Constrained Key. For any message m and timestamps \(\mathsf {time}_1 \le \mathsf {time}_2\), we require that:

$$ \{\mathsf {TE.Enc}(K,\mathsf {time}_1,m)\} \approx \{\mathsf {TE.Enc}(K[\mathsf {time}_2],\mathsf {time}_1,m)\} $$

where \(K \leftarrow \mathsf {TE.KeyGen}(1^{\lambda })\), \(K[\mathsf {time}_2] := \mathsf {TE.Constrain}(K,\mathsf {time}_2)\) and \(\approx \) denotes that the two distributions are identical.

Security. For any two messages \(m_0,m_1\) and timestamps \((\mathsf {time},\{\mathsf {time}_i\}_{i \in [t]})\) where \(\mathsf {time}_i < \mathsf {time}\) for all \(i \in [t]\), we require that:

$$ \{\{K[\mathsf {time}_i]\}_{i \in [t]},\mathsf {TE.Enc}(K,\mathsf {time},m_0)\} \overset{c}{\approx }\{\{K[\mathsf {time}_i]\}_{i \in [t]},\mathsf {TE.Enc}(K,\mathsf {time},m_1)\} $$

where \(K \leftarrow \mathsf {TE.KeyGen}(1^{\lambda })\) and \(K[\mathsf {time}_i] := \mathsf {TE.Constrain}(K,\mathsf {time}_i)\) for every \(i \in [t]\).

We prove the following theorem in the full version of our paper.

Theorem 5

Assuming the existence of one-way functions, there exists a construction of timed encryption.
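For concreteness, the following sketch assembles a timed encryption scheme from a range constrained PRF exactly as outlined in Sect. 2.1. It reuses ggm_eval, constrain_range and eval_constrained from the sketch there, and the hash-based one-time pad is illustrative only, not an authenticated cipher.

```python
import os, hashlib

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Illustrative keystream for the symmetric encryption layer."""
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return out[:n]

def te_keygen() -> bytes:
    return os.urandom(32)                      # a GGM root key

def te_constrain(K: bytes, time: int) -> dict:
    return constrain_range(K, time)            # decrypts timestamps <= time

def _derive(K, time: int) -> bytes:
    """sk = PRF(K, time), with either the full or a constrained key."""
    return ggm_eval(K, time) if isinstance(K, bytes) else eval_constrained(K, time)

def te_enc(K, time: int, m: bytes):
    sk, nonce = _derive(K, time), os.urandom(16)
    return (time, nonce, bytes(a ^ b for a, b in zip(m, _stream(sk, nonce, len(m)))))

def te_dec(K, c) -> bytes:
    time, nonce, ct = c
    sk = _derive(K, time)
    return bytes(a ^ b for a, b in zip(ct, _stream(sk, nonce, len(ct))))

# Correctness: K[time2] decrypts a ciphertext created at time1 <= time2.
K = te_keygen()
assert te_dec(te_constrain(K, 9), te_enc(K, 5, b'secret')) == b'secret'
```

Since eval_constrained agrees with ggm_eval on all timestamps up to the constraint, encrypting with a constrained key is distributed identically to encrypting with the unconstrained key, matching the "Encrypting with Constrained Key" property above.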

6 Construction of Adaptive Garbled RAM

In this section, we give a construction of adaptive garbled RAM. We make use of the following primitives.

  • A timed encryption scheme \((\mathsf {TE.KeyGen},\mathsf {TE.Enc},\mathsf {TE.Dec},\mathsf {TE.Constrain})\). Let N be the output length of \(\mathsf {TE.Enc}\) when encrypting single bit messages.

  • A puncturable pseudorandom function \((\mathsf {PP.KeyGen},\mathsf {PP.Eval},\mathsf {PP.Punc})\).

  • An oblivious RAM scheme \((\mathsf {OData},\mathsf {OProg})\) with strong localized randomness.

  • An adaptive garbled RAM scheme \(\mathsf {UGRAM}\) with unprotected memory access.

The formal description of our construction appears in Fig. 10.

Fig. 10. Construction of adaptive GRAM

Fig. 11. Description of the step circuit

Correctness. We give an informal argument for correctness. The only difference between \(\mathsf {UGRAM}\) and the construction we give in Fig. 10 is that we encrypt the database using a timed encryption scheme and encode it using an ORAM scheme. To argue the correctness of our construction, it is sufficient to argue that each step circuit \(\mathsf {SC}\) faithfully emulates the corresponding step circuit of \(P^*\). Let \(\mathsf {SC}^{i,\tau }\) be the step circuit that corresponds to the \(\tau ^{th}\) step of the \(i^{th}\) program \(P_i\). We observe that at any point in time, the \(L^{th}\) location of the database \(\widehat{D}\) is an encryption of the actual data bit with respect to timestamp \(\mathsf {time}:= (i' \Vert \tau ')\), where \(\mathsf {SC}^{i',\tau '}\) is the step that last wrote to the \(L^{th}\) location. It now follows from this invariant and the correctness of the timed encryption scheme that the constrained key \(K[i\Vert \tau ]\) hardwired in \(\mathsf {SC}^{i,\tau }\) can be used to decrypt the read block X, as the step that last modified this block has a timestamp that is less than \((i\Vert \tau )\).
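In code, the invariant reads as follows (a sketch reusing the te_* functions from Sect. 5, with DEPTH in the Sect. 2.1 sketch raised to 64 so the packed timestamps fit; the flat encoding ts(i, tau) is our own):

```python
def ts(i: int, tau: int) -> int:
    return (i << 32) | tau        # timestamps increase with (i, tau)

def step_write(K, db, loc, i, tau, block: bytes):
    db[loc] = te_enc(K, ts(i, tau), block)    # encrypt under write time

def step_read(K_constrained, db, loc) -> bytes:
    # SC^{i,tau} hardwires K_constrained = te_constrain(K, ts(i, tau));
    # decryption succeeds because the last writer's timestamp is smaller.
    return te_dec(K_constrained, db[loc])
```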

Efficiency. We note that setting the equivocation parameter \(n = \mathsf {poly}(\log MN)\), we obtain that the running time of \(\mathsf {GRAM.Input}\) is \(|x| \cdot \mathsf {poly}(\lambda ,\log MN)\). The rest of the efficiency criteria follow directly from the efficiency of the adaptive garbled RAM with unprotected memory access.

Security. We give the proof of security in the full version of our paper.