Abstract
The k-means clustering problem is one of the most explored problems in data mining. With the advent of protocols that have proven to be successful in performing single database clustering, the focus has shifted in recent years to the question of how to extend the single database protocols to a multiple database setting. To date, there have been numerous attempts to create specific multiparty k-means clustering protocols that protect the privacy of each database, but according to the standard cryptographic definitions of “privacy-protection”, so far all such attempts have fallen short of providing adequate privacy. In this paper, we describe a Two-Party k-Means Clustering Protocol that guarantees privacy against an honest-but-curious adversary, and is more efficient than utilizing a general multiparty “compiler” to achieve the same task. In particular, a main contribution of our result is a way to efficiently compute multiple iterations of k-means clustering without revealing the intermediate values. To achieve this, we describe a technique for performing two-party division securely and also introduce a novel technique allowing two parties to securely sample uniformly at random from an unknown domain size. The resulting Division Protocol and Random Value Protocol are of use to any protocol that requires the secure computation of a quotient or random sampling. Our techniques can be realized based on the existence of any semantically secure homomorphic encryption scheme. For concreteness, we describe our protocol based on the Paillier homomorphic encryption scheme (see Paillier in Advances in Cryptology, EUROCRYPT ’99 proceedings, LNCS 1592, pp 223–238, 1999). We will also demonstrate that our protocol is efficient in terms of communication, remaining competitive with existing protocols (such as Jagannathan and Wright in KDD ’05, pp 593–599, 2005) that fail to protect privacy.
1 Introduction
1.1 Background on k-Means Clustering
The k-means clustering problem can be described as follows: A database \({\mathcal {D}}\) holds information about n different objects, each object having d attributes. The information regarding each object is viewed as a coordinate in \({\mathbb {R}}^d\), and hence the objects are interpreted as data points living in d-dimensional Euclidean space. Informally, k-means clustering algorithms are comprised of two steps. First, k initial centers are chosen in some manner, either at random or using some other “seeding” procedure. The second step (known as the “Lloyd Step”) is iterative and does the following: Partition the n data points into k clusters based on which current cluster center they are closest to. Then reset the new cluster centers to be the center of mass (in Euclidean space) of each cluster. This process is either iterated a fixed number of times or until the new cluster centers are sufficiently close to the previous ones (based on a pre-determined measure of “sufficiently close”).
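Ignoring privacy entirely, the two steps above can be sketched in a few lines of Python; the function name `lloyd_kmeans`, the naive random seeding, and the fixed iteration count are our own illustrative choices, not part of any protocol in this paper:

```python
import random

def lloyd_kmeans(points, k, iterations=10, seed=0):
    """Plain (non-private) k-means on tuples in R^d, as described above."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # step 1: choose k initial centers
    for _ in range(iterations):              # step 2: the Lloyd Step
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest center
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):    # reset centers to cluster means;
            if cl:                           # note the division by |cluster| --
                d = len(cl[0])               # the step that is hard to do privately
                centers[j] = tuple(sum(p[i] for p in cl) / len(cl) for i in range(d))
    return centers

centers = sorted(lloyd_kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], 2))
assert centers == [(0.0, 0.5), (10.0, 10.5)]
```

The division in the center-update line is exactly the operation that must be performed obliviously in the two-party setting.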
The k-means clustering method is enormously popular among practitioners as an effective way to find a geometric partitioning of data points into k clusters, from which general trends or tendencies can be observed. In particular, k-means clustering is widely used in information retrieval, machine learning, and data mining research (see, e.g. [24] for further discussion about the enormous popularity of k-means clustering).
The question of finding efficient algorithms for solving the k-means clustering problem has been greatly explored and is not investigated in this paper. Rather, we wish to extend an existing algorithm (which solves the k-means problem for a single database without any concern for privacy) to an algorithm that works in the two-database setting (in accordance with multiparty computation literature, we refer to the databases as “parties”). In particular, if two parties each hold partial data describing the d attributes of n objects, then we would like to apply this k-means algorithm to the aggregate data (e.g. imagining a virtual database that contains all the data) in a way that protects the privacy of each party’s data. In this paper, we will work in the most general setting, where we assume the data is arbitrarily partitioned between the two databases. This means that there is no assumption on how the attributes of each data point are distributed among the parties (and in particular, this subsumes the cases of vertically and horizontally partitioned data).
1.2 Previous Work
The k-means clustering problem is one of the functions most studied in the more general class of data-mining problems. Data-mining problems have received much attention in recent years. Due to the sheer volume of inputs that are often involved in data-mining problems, generic multiparty computation (MPC) protocols become infeasible in terms of communication cost. This has led to constructions of function-specific multiparty protocols that attempt to handle a specific functionality in an efficient manner, while still providing privacy to the parties (see, e.g. [1, 2, 21]).
The problem of extending single database k-means clustering protocols to the multiparty setting has been explored by numerous authors, whose approaches have varied widely. The main challenge in designing such a protocol is to prevent intermediate values from being leaked during the Lloyd Step. In particular, each iteration of the Lloyd Step requires k new cluster centers to be found, a process that requires division (the new cluster centers are calculated using a weighted average, which in turn requires dividing by the number of data points in a given cluster). However, the divisors should remain unknown to the parties, as leaking intermediate cluster sizes may reveal excessive information. Additionally, many current protocols for solving the single database k-means clustering problem improve efficiency by choosing data points according to a weighted distribution, which will then serve as preliminary “guesses” to the cluster center (e.g. [5, 24]). Choosing data points in this manner will also likely involve division.
A subtle issue that may not be obvious at first glance is how to perform these divisions in light of current cryptographic tools. In particular, most encryption schemes describe a message space that is a finite group (or field or ring). This means that an algorithm that attempts to solve the multiparty k-means problem in the cryptographic setting (as opposed to the information-theoretic setting) will view the data points not as elements of Euclidean Space \({\mathbb {R}}^d\), but rather as elements in \({\mathbb {G}}^d\) (for some ring \({\mathbb {G}}\)) in order to share encryptions of these data points with the other party members. But this then complicates the notion of “division”, which we wish to mean “division in \({\mathbb {R}}\)” as opposed to “multiplication by the inverse.” (The latter interpretation not only fails to perform the desired task of finding an average, but additionally may not even exist if not all elements in the ring \({\mathbb {G}}\) have a multiplicative inverse).
Previous authors attempting to solve the multiparty k-means problem have incorporated various ideas to combat this obstacle. The “data perturbation” technique (e.g. [1, 2, 23]) avoids the issue altogether by addressing the multiparty k-means problem from an information-theoretic standpoint. These algorithms attempt to protect party members’ privacy by having each member first “perturb” their data (in some regulated manner), and then the perturbed data is made public to all members. Thus, the division (and all other computations) can be performed locally by each party member (on the perturbed data), and the division problem is completely avoided. Unfortunately, all current algorithms utilizing this method do not protect the privacy of the party members in the cryptographic definition of privacy protection. Indeed, these protocols provide some privacy guarantee in terms of hiding the exact values of the database entries, but do not make the more general guarantee that (with overwhelming probability) no information can be obtained about any party’s inputs (other than what follows from the output of the function, i.e. the final cluster centers).
Another solution to the division problem (see e.g. [28]) is to have each party member perform the division locally on their own data. The problem with this method is that it requires each party to know all intermediate cluster assignments (in order to know what they should divide by). The same problem is encountered in [27], which also requires each party to know intermediate cluster assignments. The extra information obtained by the parties in these two papers would not be available to the parties in the ideal model, and thus these solutions fail to provide complete privacy protection (as per Definition 1; see Sect. 2.3). A similar problem is encountered in [19], where they describe a way to privately perform division, but their protocol relies on the fact that both parties will learn the output value of the division (which is again more information than is revealed in the ideal model). Another approach, suggested by Jagannathan and Wright [18], is to interpret division as multiplication by the inverse. However, a simple example shows that this method does not satisfy correctness, i.e. does not correctly implement a k-means algorithm. (Consider, e.g. dividing 11 by 5 in \({\mathbb {Z}}_{21}\). One would expect to round this to 2, but \(11*5^{-1} = 11*17 = 19\)).
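The \({\mathbb {Z}}_{21}\) example above can be checked directly; the snippet below only verifies the arithmetic already stated in the text:

```python
# "Dividing" 11 by 5 in Z_21 via the modular inverse does not give the
# rounded quotient that averaging requires.
N, P, D = 21, 11, 5

inv = pow(D, -1, N)           # 5^{-1} mod 21 (requires Python 3.8+)
assert inv == 17              # since 5 * 17 = 85 = 1 (mod 21)
assert (P * inv) % N == 19    # "multiplication by the inverse" yields 19 ...
assert P // D == 2            # ... whereas the intended quotient is 2
```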
There are a number of works in the multiparty computation literature that focus on the problem of secure division. A number of these works (e.g. [3, 7, 20]) seek to implement fixed-point division, outputting an estimate of the quotient (with bounded error). While these protocols can be (and have been) extended to integer division (e.g. [9, 13]), the resulting protocols are more involved than the one presented here (e.g. [3, 13] implement a version of Newton–Raphson method, [7] invokes Goldschmidt’s Division Algorithm, and [9] utilize Taylor Polynomials), and often require additional assumptions on the inputs or setup that are not appropriate when used as a subprotocol of k-means clustering. Furthermore, the Division Protocol presented in Sect. 3.1 is comparable to the above-mentioned protocols in terms of communication complexity, so that (ignoring correctness/compatibility issues) swapping our protocol for any of the above protocols would not result in an (asymptotic) improvement of overall communication complexity of the k-means clustering protocol. There are also results [16, 29] for performing secure division if the denominator is known to both parties, but utilizing such a subprotocol in the context of k-means clustering will require leaking the size (number of data points) of each cluster during each iteration of the Lloyd Step.
One final approach encountered in the literature (see, e.g. [4, 10,11,12]) protects against leaking information about specific data in a different context. In this setting, the data is not distributed among many parties, but rather exists in a single database that is maintained by a trusted third party. The goal now is to have clients send requests to this third party for k-means clustering information on the data, and to ensure that the response from the server does not reveal too much information about the data. In the model we consider in this paper, these techniques cannot be applied since there is no central database or trusted third party.
To summarize, none of the existing “privacy-preserving” k-means clustering protocols provide cryptographically-acceptable security against an “honest-but-curious” adversary. We will present a formal notion of security in Sect. 2.3 (see, e.g. [15]). Informally, the security of a multiparty protocol is defined by comparing the real-life interaction between the parties to an “ideal” scenario where a trusted third party exists. In this ideal setting, the trusted third party receives the private inputs from each of the parties, runs a k-means clustering algorithm on the aggregate data, and returns as output the final cluster centers to each party. (Note: depending on a pre-determined arrangement between the parties, the third party may also give each party the additional information of which cluster each data point belongs to.) The goal of multiparty computation is to achieve in the “real” world (where no trusted third party is assumed to exist) the same level of data privacy protection that can be obtained in the “ideal” model.
One final obstacle in designing a perfectly secure k-means clustering protocol comes from the iterative nature of the Lloyd Step. In the ideal model, the individual parties do not learn any information regarding the number of iterations that were necessary to reach the stopping condition. In the body of this paper, our main protocol will reveal this information to the parties (it is our belief that in practice, this privacy breach is unlikely to reveal meaningful information about the other party’s database). However, we discuss more fully in “Appendix A” alternative methods of controlling the number of iterations without revealing this extra information.
1.3 Our Results
We describe in Sect. 5 of this paper the first protocol for two-party k-means clustering that is secure against an honest-but-curious adversary (as mentioned above, general MPC protocols could in theory be applied to k-means, but for large datasets with many attributes, such protocols require excessive communication and may be infeasible to use in practice; see Sect. 5.4 for a comparison). Moreover, we demonstrate that our protocol is performance-competitive with other protocols (which fail to protect privacy against an honest-but-curious adversary). Let k denote the number of clusters, \(\lambda \) the security parameter, n the number of data points, d the number of attributes of each data point, and \(O(\xi _s)\) the communication cost of (securely) finding the minimum of two numbers. The exact efficiency bounds that we achieve are as follows:
Communication Complexity Result
Our two-party secure k-means clustering protocol has a one-time communication cost of \(O(\lambda nd)\), followed by \(O(\xi _s kn) \le O(\lambda ^2 kn)\) for each iteration of the Lloyd Step.
A complete discussion on the bounds achieved above can be found in Sect. 5.4. Table 1 compares our protocol with other existing k-means clustering protocols.
Our protocol takes as a template the single-database protocol of Ostrovsky et al. [24], and extends it to the two-party setting. We chose the particular protocol of [24] because it has two advantages over conventional single-database protocols: First, it provides a provable guarantee as to the correctness of its output (assuming moderate conditions on the data); and second, it reduces the number of iterations required in the Lloyd Step. However, the techniques we use to extend the single-database protocol of [24] are general, and can likely be applied to any single-database protocol to achieve security in the two-party setting.
In order to extend the single database protocol of [24] to a two-party protocol, we follow the setup and some of the ideas discussed by Jagannathan and Wright in [18]. In that paper, the authors attempt to perform secure two-party k-means clustering, but (as they remark) fall short of perfect privacy due to leakage of information (including the number of data points in each cluster) that arises from an insecure division algorithm.
To solve the multiparty division problem, we define division in the ring \({\mathbb {Z}}_N\) in a natural way, namely as the quotient Q from the Division Algorithm in the integers: \(P=QD+R\). From this definition, we demonstrate how two parties can perform multiparty division in a secure manner. Additionally, we describe how two parties can select initial data points according to a weighted distribution. To accomplish this, we introduce a new protocol, the Random Value Protocol, which is described in Sect. 4. We note that the Random Value Protocol may be of independent interest as a subprotocol for other protocols that require random, oblivious sampling.
Our results utilize many existing tools and subprotocols developed in the multiparty computation literature. As such, the security guarantee of our result relies on cryptographic assumptions concerning the difficulty of inverting certain functions. In particular, we will assume the existence of a semantically secure homomorphic encryption scheme, and for ease of discussion, we use the homomorphic encryption scheme of Paillier [25].
1.4 Notation
In the following sections, we will adopt the convention of writing [a..b] to represent the integers from a to b (inclusive), i.e. \([a..b] = [a, b] \cap {\mathbb {Z}}\).
1.5 Overview
In the next section, we briefly introduce the cryptographic tools and methods of proving privacy that we will need to guarantee security in the honest-but-curious adversary model. We also include in Sect. 2.2 a complete list of the subprotocols that will be used in this paper. Because most of the subprotocols that we use are general and have been described in previous MPC papers, we provide in Sect. 2.2 only a list of these protocols (possible implementations are included in “Appendix C” for completeness). The exceptions are our two-party Division Protocol, described in Sect. 3, and our new Random Value Protocol, for which we provide full details and a proof of security in Sect. 4. Finally, in Sect. 5 we describe our secure 2-party k-means clustering protocol, which extends the single database (non-secure) k-means clustering protocol of [24].
2 Achieving Privacy
In multiparty computation (MPC) literature, devious behavior is modeled by the existence of an adversary who can corrupt one or more of the parties. In this paper, we will assume that the adversary is honest-but-curious, which means the adversary only learns the inputs/outputs of all of the corrupted parties, but the corrupted parties must engage in the protocol appropriately. We include in Sect. 2.3 a formal definition of what it means for a protocol to “protect privacy” in the honest-but-curious adversary model (see also, e.g. [15] for definitions of security against an honest-but-curious adversary).
In order to construct a private two-party k-means clustering protocol, we begin by presenting a secure Division Protocol (Sect. 3) and Random Value Protocol (Sect. 4) that will be used as subprotocols. The overall k-means clustering protocol will then utilize these subprotocols, as well as a handful of standard subprotocols for which there exist secure (privacy-preserving) instantiations. We utilize the fact that the composition of secure subprotocols results in a secure protocol [6], so that the overall privacy of our k-means clustering protocol reduces to the privacy of each subprotocol. Proof sketches for privacy of the Division Protocol and Random Value Protocol are included with the description of these protocols (Sects. 3 and 4). A list of all other subprotocols used is provided in Sect. 2.2; for all such protocols, we either provide a reference to an existing secure instantiation of the protocol, or, in cases where the protocol can be instantiated via a simple reduction to a secure subprotocol (whose security is already known), we sketch such a reduction in “Appendix C”.
In Sect. 2.3, we classify protocols that have a specified generic form, and prove that such protocols will be secure in the honest-but-curious adversary model. Privacy of our Division Protocol and Random Value Protocol will then follow because they have this generic form. In Sect. 2.1, we first introduce the cryptographic tools we will need to guarantee privacy. The casual reader may wish to skip the description of the cryptographic tools in Sect. 2.1 and read only the high-level arguments of security in the first paragraph of Sect. 2.3, omitting the formal definitions and proofs of privacy in the rest of that section.
2.1 Cryptographic Tools
We will utilize standard cryptographic tools to maintain privacy in our two-party k-means clustering protocol. It will be convenient to name our two participating parties, and we adopt the standard names of “Alice” and “Bob.” We will first utilize an additively homomorphic encryption scheme, e.g. Paillier [25]. Thus, for encryptions we assume a message space \({\mathbb {Z}}_N\), where \(N=pq\) is the product of two \(\lambda \)-bit primes and \(\lambda \) is the security parameter. In the protocols that follow, there is no public setup/dealer; rather, one of the parties will be responsible for choosing the modulus N (we use the convention that Alice plays this role), and will publish the corresponding public key (but keep the decryption key private). The opposite party (Bob) will be responsible for performing the requisite computations on encrypted data. The encryption scheme is a map \(E:{\mathbb {Z}}_N \times {\mathbb {H}} \rightarrow {\mathbb {G}}\), where \({\mathbb {H}}\) represents some group from which we obtain randomness, and \({\mathbb {G}}\) is some other group. For notational convenience, we will write \(E(m) \in {\mathbb {G}}\) rather than E(m, r). This encryption scheme is additively homomorphic, so that: \(E(m_1, r_1) + E(m_2, r_2) = E(m_1+m_2, r_1+r_2)\), where each addition refers to the appropriate group operation in \({\mathbb {G}}, {\mathbb {Z}}_N,\) or \({\mathbb {H}}\). (For Paillier, \({\mathbb {G}} ={\mathbb {Z}}^{\times }_{N^2}\) and thus the group operation is multiplication). Additionally, the encryption scheme allows a user to (efficiently) multiply by a constant, i.e. for \(c \in {\mathbb {Z}}_N\), anyone can compute: \(cE(m, r) = E(cm, r')\). (For Paillier, if (N, g) is the public key, then \(cE(m,r) := (g^mr^N)^c = g^{mc}r^{cN} = g^{mc}(r^c)^N = E(cm, r')\), where \(r' = r^c\)).
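The homomorphic properties just described can be made concrete with a toy Paillier implementation. Everything below is a sketch for illustration only: the primes 293 and 433 are tiny demo values we chose (a real deployment uses \(\lambda \)-bit primes and a vetted library), and the function names are ours:

```python
from math import gcd
from random import randrange

p, q = 293, 433                      # assumed tiny demo primes -- NOT secure
N = p * q                            # public modulus; message space is Z_N
N2 = N * N                           # ciphertexts live in Z*_{N^2}
g = N + 1                            # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(N)

def L(u):
    return (u - 1) // N

mu = pow(L(pow(g, lam, N2)), -1, N)  # precomputed decryption constant

def enc(m, r=None):
    if r is None:                    # fresh randomness per encryption
        r = randrange(1, N)
        while gcd(r, N) != 1:
            r = randrange(1, N)
    return (pow(g, m % N, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (L(pow(c, lam, N2)) * mu) % N

# Additive homomorphism: the group operation in G is multiplication mod N^2,
# and E(m1) * E(m2) = E(m1 + m2).
assert dec((enc(3) * enc(4)) % N2) == 7

# Multiplication by a constant: E(m)^c = E(c * m).
assert dec(pow(enc(3), 5, N2)) == 15
```

Note how, exactly as in the text, \(cE(m,r)\) is realized as \(E(m,r)^c = g^{mc}(r^c)^N\).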
2.2 Privacy Protecting Protocols
We list here the generic subprotocols that will be used by our two-party k-means clustering protocol. All of the below protocols can be readily implemented using only the Scalar Product Protocol, and we include possible implementations in “Appendix C”. The Scalar Product Protocol is a standard protocol that has been explored extensively by other authors; we will not include an implementation of this protocol in this paper, but refer the reader to a number of possible references.
Scalar Product Protocol (SPP) This protocol takes in \(\mathbf {x} \in {\mathbb {Z}}^t_N\) and \(\mathbf {y} \in {\mathbb {Z}}^t_N\), and returns (shares of) some pre-determined degree two function \(f(\mathbf {x}, \mathbf {y}) = \sum _{i=1}^t c_i\mathbf {x}_i \mathbf {y}_i\) for public constants \(c_i\), and where all arithmetic is modulo N. This encompasses degree-zero and degree-one terms as well, e.g. by taking \(\mathbf {x}_i\) and/or \(\mathbf {y}_i\) to be one, as appropriate. (See, e.g. [14], where they describe a protocol that achieves \(O(t\lambda )\) communication complexity, with \(\lambda \) the security parameter. Other implementations can be found in [22, 30, 32].)
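One way such a protocol can be realized from additively homomorphic encryption is sketched below. This is our own illustrative sketch, not the implementation of [14]: the toy Paillier parameters (primes 293 and 433) and the function name `spp` are assumptions, and the two parties' roles are collapsed into one function for readability:

```python
from math import gcd
from random import randrange

# Toy Paillier (tiny demo primes, illustration only).
p, q = 293, 433
N = p * q; N2 = N * N; g = N + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
L = lambda u: (u - 1) // N
mu = pow(L(pow(g, lam, N2)), -1, N)

def enc(m):
    r = randrange(1, N)
    while gcd(r, N) != 1:
        r = randrange(1, N)
    return (pow(g, m % N, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (L(pow(c, lam, N2)) * mu) % N

def spp(x, y, c):
    """Sketch of SPP: additive shares of f = sum_i c_i * x_i * y_i (mod N)."""
    cts = [enc(xi) for xi in x]                # Alice -> Bob: encryptions of x
    acc = enc(0)                               # Bob builds E(f) homomorphically
    for ct, yi, ci in zip(cts, y, c):
        acc = (acc * pow(ct, (ci * yi) % N, N2)) % N2
    r = randrange(N)                           # Bob blinds with random r ...
    blinded = (acc * enc(-r)) % N2             # ... and sends E(f - r) to Alice
    return dec(blinded), r                     # (Alice's share, Bob's share)

fA, fB = spp([2, 3], [4, 5], [1, 1])           # f = 2*4 + 3*5 = 23
assert (fA + fB) % N == 23
```

The blinding step in `spp` is the pattern described in Sect. 2.3: Alice only ever sees a uniformly random share, while Bob only ever sees ciphertexts.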
Find Minimum of 2 Numbers Protocol (FM2NP) Alice and Bob share two numbers. This protocol returns shares of the location of the smaller number (0 or 1).
Find Minimum of k Numbers Protocol (FMkNP) An extension of the above protocol, where this time as output they receive shares of the vector \((0, \dots , 1, \dots , 0)\), where the ‘1’ appears in the mth coordinate if the mth number is smallest.
Change Modulus Protocol Let \(N_1, N_2\) be two publicly known integers and let \(Q < \min (N_1, N_2)\). Suppose Alice and Bob share \(Q = Q^A + Q^B \in {\mathbb {Z}}_{N_1}\). Then as output, Alice and Bob should share Q in \({\mathbb {Z}}_{N_2}\), i.e. Alice gets \(\widetilde{Q}^A\) and Bob gets \(\widetilde{Q}^B\) such that \(Q = \widetilde{Q}^A +\widetilde{Q}^B \in {\mathbb {Z}}_{N_2}\).
Addition Modulo Unknown Value Protocol Let \(S = S^A + S^B (\)mod N) and \(T = T^A + T^B (\)mod N) be two values shared by Alice and Bob, and let \(Q \in [1+\max (S, T)..N-1]\) be arbitrary. Then as output, Alice and Bob share \(R := S + T (\)mod Q).
Nested Product Protocol (NPP) Alice and Bob share values \(\{x_i\}_{i=1}^m = \{x^A_i + x^B_i (\)mod \(N)\}\). This protocol returns shares (in \({\mathbb {Z}}_N\)) of the m nested products; that is, for each \(1 \le i \le m\), Alice and Bob share the value: \(y_i := \prod _{j=1}^i x_j\).
To Binary Protocol (TBP) Alice and Bob have shares of some value \(X \in {\mathbb {Z}}_N\). If \(X = x_\lambda \dots x_1 x_0\) is the binary representation of X, then this protocol returns shares of \(x_i\) for each \(0 \le i \le \lambda \). In other words, \(x_i = x^A_i + x^B_i\) (mod N).
Distance Protocol Alice and Bob share two points in \({\mathbb {Z}}^d_N\). As output, they share the distance squared between these points. Since the computation for distance (squared) can be expressed as a product of Alice and Bob’s inputs, the distance protocol can be instantiated via SPP. One such possible reduction of the Distance Protocol to a SPP can be found in [18], which has communication complexity \(O(d\lambda )\).
Compute Modulus Mask Protocol, Compute \(\mathbf {e}_i\) Protocol, and Choose \({\varvec{\mu }}_1\) Protocol These will be discussed when they arise in Sects. , 4.1, and 5.3.1.
2.3 Definition of Privacy in the Honest-But-Curious Model
We present first the high-level argument for how our protocols will protect each party’s data. We have one of the parties (Alice) choose the encryption key (i.e. we do not rely on a trusted setup to distribute public/secret key pairs), and encrypt all of her data using this key before sending it to the other party (Bob). Thus, Alice’s privacy will be guaranteed by the semantic security assumption of the encryption scheme. Meanwhile, Bob will also encrypt his data using Alice’s key, utilize the homomorphic properties of the encryption scheme to perform the requisite computations, and then blind all of the outputs he sends to Alice with randomness of his choosing, ensuring that Alice can learn nothing about his data.
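Why does blinding with random values hide Bob's data? For any fixed value \(f \in {\mathbb {Z}}_N\), the blinded value \(f - r\) (mod N) is uniformly distributed when r is chosen uniformly from \({\mathbb {Z}}_N\), and thus carries no information about f. For a toy modulus this can be checked exhaustively (the value N = 7 below is just for illustration):

```python
from collections import Counter

N = 7
for f in range(N):                                   # for every fixed secret f ...
    counts = Counter((f - r) % N for r in range(N))  # ... range over uniform r
    assert all(c == 1 for c in counts.values())      # blinded value is uniform
```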
We now make these notions precise by first providing a formal definition of privacy protection in the honest-but-curious adversary model, and a formal proof of privacy for the class of protocols that attempt to protect privacy in the above described manner (both the definition and technique of providing privacy in this manner are standard tools used in MPC literature, see, e.g. [15]).
Definition 1
Suppose that protocol X has Alice compute (and output) the function \(f^A(\mathbf {x}, \mathbf {y})\), and has Bob compute (and output) \(f^B(\mathbf {x}, \mathbf {y})\), where \((\mathbf {x}, \mathbf {y})\) denotes the inputs for Alice and Bob (respectively). Let VIEW\(^A(\mathbf {x},\mathbf {y})\) (resp. VIEW\(^B(\mathbf {x},\mathbf {y})\)) represent Alice’s (resp. Bob’s) view of the transcript. In other words, if \((\mathbf {x}, \mathbf {r}^A)\) (resp. \((\mathbf {y}, \mathbf {r}^B)\)) denotes Alice’s (resp. Bob’s) input and randomness, then:
$$\begin{aligned} \text {VIEW}^A(\mathbf {x},\mathbf {y}) = (\mathbf {x}, \mathbf {r}^A, m_1, \dots , m_t), \qquad \text {VIEW}^B(\mathbf {x},\mathbf {y}) = (\mathbf {y}, \mathbf {r}^B, m_1, \dots , m_t), \end{aligned}$$
where the \(\{ m_i \}\) denote the messages passed between the parties. Also let \(O^A(\mathbf {x}, \mathbf {y})\) and \(O^B(\mathbf {x}, \mathbf {y})\) denote Alice’s (resp. Bob’s) output. Then we say that protocol X protects privacy (or is secure) against an honest-but-curious adversary if there exist probabilistic polynomial time simulators \(S_1\) and \(S_2\) such that:
$$\begin{aligned} \{ S_1(\mathbf {x}, f^A(\mathbf {x},\mathbf {y})),\ f^B(\mathbf {x},\mathbf {y}) \}&\ {\mathop {\equiv }\limits ^{c}}\ \{ \text {VIEW}^A(\mathbf {x},\mathbf {y}),\ O^B(\mathbf {x},\mathbf {y}) \} \qquad (1) \\ \{ f^A(\mathbf {x},\mathbf {y}),\ S_2(\mathbf {y}, f^B(\mathbf {x},\mathbf {y})) \}&\ {\mathop {\equiv }\limits ^{c}}\ \{ O^A(\mathbf {x},\mathbf {y}),\ \text {VIEW}^B(\mathbf {x},\mathbf {y}) \} \qquad (2) \end{aligned}$$
where \({\mathop {\equiv }\limits ^{c}}\) denotes computational indistinguishability.
With the above definition of privacy protection, we now prove the basic lemma that will allow us to argue that our two-party k-means clustering protocol is secure against an honest-but-curious adversary.
Lemma 2.1
Suppose that Alice has run the key generation algorithm for a semantically secure homomorphic public-key encryption scheme, and has given her public-key to Bob. Further suppose that Alice and Bob run Protocol X, for which all messages passed from Alice to Bob are encrypted using this scheme, and all messages passed from Bob to Alice are uniformly distributed (in the range of the ciphertext) and are independent of Bob’s inputs. Then Protocol X is secure in the honest-but-curious adversary model.
Proof
We prove the privacy protecting nature of Protocol X in two separate cases, depending on which party the adversary has corrupted. To prove privacy, we show that for all PPT Adversaries, the view of the adversary based on Alice and Bob’s interaction is indistinguishable to the adversary’s view when the corrupted party interacts instead with a simulator. In other words, we show that there exist simulators \(S_1\) and \(S_2\) that satisfy conditions (1) and (2).
- Case 1:
Bob is Corrupted by Adversary We simulate Alice’s messages sent to Bob. For each encryption that Alice is supposed to send to Bob, we let the simulator \(S_2\) pick a random element from \({\mathbb {Z}}_N\), and send an encryption of this. Any adversary who can distinguish between interaction with Alice versus interaction with \(S_2\) can be used to break the security assumptions of E. Thus, no such PPT adversary exists, which means (2) holds.
- Case 2:
Alice is Corrupted by Adversary We simulate Bob’s messages sent to Alice. To do this, every time Bob is to send an encryption to Alice, the simulator picks a random element of \({\mathbb {Z}}_N\) and returns an encryption of this. Again, equation (1) holds due to the fact that Alice cannot distinguish the simulator’s encryption of a random number from Bob’s encryption of the correct computation that has been shifted by randomness of Bob’s choice.
\(\square \)
Since every semantically secure homomorphic encryption scheme available today has a finite message space (e.g. \({\mathbb {Z}}_N\)), when our k-means protocol requires the data points (or attributes of the data points) to be encrypted, we must restrict the possible data values to a finite range. Therefore, instead of viewing the data points as living in \({\mathbb {R}}^d\), we “discretize” Euclidean space and approximate it via the lattice \({\mathbb {Z}}_N^d\), for some large N. All of the results of this paper are consequently restricted to the model where the data points live in \({\mathbb {Z}}_N^d\), (both in the “real” and “ideal” setting) and any function performing k-means clustering in this model is restricted to computations in \({\mathbb {Z}}_N\). Note that restricting to this “discretized” model is completely natural; indeed due to memory constraints, calculations performed on computers are handled in this manner. As a consequence of working in the discretized space model, we also avoid privacy issues that arise from possible rounding errors (i.e. restricting input to be in \({\mathbb {Z}}_N^d\) avoids the necessity of approximating inputs in \({\mathbb {R}}\) by rounding up or down).
3 Private Division
As mentioned in Sect. 1.2, performing two-party division has been an obstacle to obtaining a secure two-party k-means clustering protocol. In this section, we discuss our methods for overcoming this obstacle. In particular, we make precise what we mean by division in the ring \({\mathbb {Z}}_N\), and show that this definition not only matches our intuition as to what division should be, but also allows us to perform division in a secure way (namely, so that the dividend and divisor remain hidden).
Let N be a positive integer (when secure division is used as a subprotocol, N may be, e.g. an RSA modulus), and let \(P, D \in {\mathbb {Z}}_N\). Then viewing P and D as integers, we may apply the Division Algorithm to find unique integers \(Q < N\) and \(0 \le R < D\) such that \(P = QD + R\). Viewing \(Q \in {\mathbb {Z}}_N\), we then define division (of P by D) to be the quotient Q. Note that this definition is the natural restriction of division in \({\mathbb {R}}\) to the integers, in that Q represents the actual quotient in \({\mathbb {R}}\) that has been rounded down to the nearest integer. Thus this definition coincides much more closely to real division (e.g. for purposes of finding averages) than other alternatives, such as defining division to be multiplication by the inverse.
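A minimal check that this definition agrees with real division rounded down (in Python, `divmod` returns exactly the pair \((Q, R)\) of the Division Algorithm):

```python
import math

# Division of P by D in Z_N: the unique Q with P = Q*D + R, 0 <= R < D,
# i.e. the real quotient rounded down to the nearest integer.
for P, D in [(11, 5), (100, 7), (9, 3)]:
    Q, R = divmod(P, D)
    assert P == Q * D + R and 0 <= R < D     # the Division Algorithm identity
    assert Q == math.floor(P / D)            # matches real division, rounded down
```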
In defining what it means for a division protocol to be secure (see Sect. 2.3), one compares the information that could be obtained in an ideal model (where a trusted third party exists) versus what could be obtained in the real world (where no such third party exists, and the proposed protocol is employed). In terms of defining the function that is to be evaluated (which performs the k-means clustering), we force the definition of division to match the above definition. In other words, when the functions \(f^A(\mathbf {x},\mathbf {y})\) and \(f^B(\mathbf {x},\mathbf {y})\) (see notation of Sect. 2.3) call for division to be performed, these divisions are defined to mean division in the ring \({\mathbb {Z}}_N\) as defined here. This way, when our protocol is run and division is performed in this way, it matches the computations that the functions \(f^A\) and \(f^B\) are performing.
With these definitions in place, it remains to implement a secure division subprotocol, where two parties (Alice and Bob) share a numerator and denominator \(P,\ D \in {\mathbb {Z}}_N\), and as output they receive shares of the quotient \(Q \in {\mathbb {Z}}_N\). We describe below a possible implementation, which has been reduced to the Scalar Product Protocol combined with the Find Minimum of 2 Numbers Protocol, and consequently its security follows from the security of those subprotocols.
Before we present the protocol, we mention that if the divisor D is publicly known, then it is likely faster to utilize equations (3) and (4) below, and have Alice and Bob compute the division locally, with the appropriate calls to FM2NP to handle carry-over:
where
We leave it as an exercise to verify that (3) and (4) compute the appropriate shares of the quotient Q.
3.1 Implementation of the Division Protocol
Intuitively, our protocol attempts to mimic a natural way of performing division. Namely, when dividing P by D, we want to find the biggest integer Q such that \(QD \le P\), but \((Q+1)D > P\). To find Q, we perform an exponential search: viewing Q in its binary representation, we first try to find the highest power of 2 that appears in the binary expansion of Q, and then work down. Namely, once the highest power of 2 (say \(2^{\alpha }\)) has been found such that \(2^{\alpha } \cdot D \le P\) but \(2^{\alpha +1} \cdot D > P\), we record a ‘1’ in position \(\alpha \) of Q, subtract \(2^{\alpha } \cdot D\) from P, and repeat the process on the difference. This approach is performed by the division protocol below, with appropriate modifications to allow two parties to keep their individual inputs private.
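The exponential search described above can be sketched in plaintext (non-private) form; this is purely for intuition, and the function name is ours:

```python
def divide_exponential_search(P, D, N):
    """Plaintext sketch of the exponential-search division: scan candidate
    quotient bits from high to low, keeping a bit whenever the corresponding
    multiple of D still fits under the running remainder."""
    lam = N.bit_length() - 1          # lambda = floor(log2 N)
    Q, remainder = 0, P
    for i in range(lam, -1, -1):      # highest power of two first
        if (1 << i) * D <= remainder:
            Q |= 1 << i               # bit i of the quotient is 1
            remainder -= (1 << i) * D
    return Q                          # remainder now equals P mod D
```

For example, `divide_exponential_search(52, 5, 64)` returns 10. The secure protocol below performs the same scan, but on additively shared values.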
Input Alice and Bob share \(P = P^A+P^B \ (\text{ mod } N)\) and \(D = D^A+D^B \ (\text{ mod } N)\).
Output If \(P = QD + R\) (\(0 \le R < D\)) is the unique expression guaranteed by the Division Algorithm, then this protocol outputs shares of \(Q = Q^A+Q^B \ (\text{ mod } N)\) to Alice and Bob.
Cost The communication in this protocol is dominated by \(1 + \lambda \) calls to the Find Minimum of 2 Numbers Protocol, where \(\lambda = \lfloor \log _2 N \rfloor \). Denoting the communication cost of FM2NP by \(\xi _s\), the implementation of this protocol therefore has communication \(O(\lambda \xi _s)\).
Protocol Description
- 1.
Alice and Bob run the Compute Modulus Mask Protocol to obtain shares of \(\mathbf {v} \in {\mathbb {Z}}_2^{1+\lambda }\), which is defined so that the ith-coordinate \(v_i\) of \(\mathbf {v}\) is ‘1’ if and only if \(2^i \cdot D < N\). Hence, \(\mathbf {v} = (1, 1, \dots , 1, 0, \dots , 0)\), with the last ‘1’ in the \(\lfloor \log _2 ((N - 1) / D)\rfloor \) coordinate (here, coordinates are 0-based, so that \(v_0\) denotes the first coordinate, and \(v_{\lambda }\) the last coordinate). Note: The mask \(\mathbf {v}\) is needed to account for the fact that both subprotocols being utilized perform arithmetic modulo N (to account for the fact that Alice and Bob share inputs in \({\mathbb {Z}}_N\)). Namely, the division protocol will need to determine if a given multiple \(2^i\) of D is less than or equal to some quantity C, where here “less than” means with respect to natural (integral) arithmetic. The protocol below will determine this by noting that if \(2^i \cdot D \le C\), then, when viewed as arithmetic modulo N, \(C - 2^i \cdot D\) will be less than C, because \(C - 2^i \cdot D\) did not “wrap-around;” i.e. \(C - 2^i \cdot D\) (mod N) = \(C - 2^i \cdot D\). On the other hand, if \(2^i \cdot D > C\), then \(C - 2^i \cdot D\) will “wrap-around”, and we want to detect this by finding that \(C - 2^i \cdot D\) (mod N) >C. This will always be true so long as we don’t “wrap-around” more than once, i.e. so long as \(-N < C - 2^i \cdot D\). The mask \(\mathbf {v}\) is used to ensure that the difference appearing in Step 2 below will “wrap-around” at most once, thus resulting in a true comparison of C versus \(2^i \cdot D\).
- 2.
For each \(0 \le i \le \lambda \), Alice and Bob run the FM2NP on \((P_i, \ P_i-v_{\lambda -i} \cdot 2^{\lambda -i} \cdot D)\), where \(P_i :=P_{i-1} - O_{i-1} \cdot 2^{\lambda -i+1} \cdot D\) with \(O_{i-1}\) denoting the output of FM2NP on the previous iteration. The iterative formula is initialized with \(P_0 :=P\).
- 3.
Notice that \(Q = \sum _{i=0}^\lambda O_i \cdot 2^{\lambda -i}\), which can be locally computed by Alice and Bob from their shares of \(O_i\) from each step.
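To make the mechanics of Steps 1–3 concrete, the following plaintext simulation mirrors the protocol's modulo-N computations; secret sharing and the secure FM2NP are mocked out (the comparison is done in the clear, whereas the real protocol performs it on shares), and all names are ours:

```python
def division_protocol_trace(P, D, N):
    """Simulate Steps 1-3 of the Division Protocol on plaintext values,
    returning the quotient Q and the FM2NP outputs O_0..O_lambda."""
    lam = N.bit_length() - 1                       # lambda = floor(log2 N)
    # Step 1: mask v, with v_i = 1 iff 2^i * D < N (integer arithmetic)
    v = [1 if (1 << i) * D < N else 0 for i in range(lam + 1)]
    Pi, O = P, []
    for i in range(lam + 1):                       # Step 2
        cand = (Pi - v[lam - i] * (1 << (lam - i)) * D) % N
        # FM2NP output: 1 iff the difference did not "wrap-around" mod N
        Oi = 1 if cand < Pi else 0
        if Oi:
            Pi = cand                              # P_{i+1} = P_i - 2^{lam-i} D
        O.append(Oi)
    # Step 3: local recombination Q = sum_i O_i * 2^{lam - i}
    return sum(Oi << (lam - i) for i, Oi in enumerate(O)), O
```

On the worked example of Sect. 3.2, `division_protocol_trace(52, 5, 64)` yields quotient 10 with `O = [0, 0, 0, 1, 0, 1, 0]`.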
3.2 Example of Division Protocol
In this section, we present an example to see the protocol at work. Let \(N=2^6 = 64\), so \(\lambda =6\). As inputs to the protocol, let \(P=52\) and \(D=5\). The Division Algorithm would write:
$$\begin{aligned} 52 = 10 \cdot 5 + 2, \end{aligned}$$(5)
so as output, our division protocol should output shares of 10.
- 1.
Step 1 of our protocol finds \(\mathbf {v}\), which for the example values above is:
$$\begin{aligned} \mathbf {v} = (1, 1, 1, 1, 0, 0, 0) \end{aligned}$$(6)
since \(2^i \cdot 5 < 64\) for \(i=0, 1, 2, 3\), but not for \(i= 4, 5, 6\).
- 2.
This step is repeated for \(0 \le i \le \lambda (=6)\):
- (a)
On the first iteration (\(i=0\)), FM2NP is run on \((52,\ (52-0*2^6*5)) = (52,\ 52)\), since \(P_0 = P = 52\) and \(v_{6} = 0\). By the remarks in “Appendix B.1”, this call will return shares of zero, i.e. \(O_0 = 0\).
- (b)
On the next iteration (\(i=1\)), FM2NP is run on \((52,\ (52-0*2^5*5)) = (52,\ 52)\), since \(P_1 = P_0 - O_0*2^{6}*5 = 52 - 0 = 52\) and \(v_{5} = 0\). By the remarks in “Appendix B.1”, this call will return shares of zero, i.e. \(O_1 =0\).
- (c)
On the next iteration (\(i=2\)), FM2NP is run on \((52,\ (52-0*2^4*5)) = (52,\ 52)\), since \(P_2 = P_1 - O_1*2^{5}*5 = 52 - 0 = 52\) (since \(O_1 = 0\)) and \(v_{4} =0\). By the remarks in “Appendix B.1”, this call will return shares of zero, i.e. \(O_2 = 0\).
- (d)
On the next iteration (\(i=3\)), FM2NP is run on \((52,\ (52-1*2^3*5)) = (52,\ 12)\), since \(P_3 = P_2 - O_2*2^{4}*5 = 52 - 0 = 52\) (since \(O_2 = 0\)) and \(v_{3} =1\). Since \(52 > 12\), this call will return shares of one, i.e. \(O_3= 1\).
- (e)
On the next iteration (\(i=4\)), FM2NP is run on \((12, (12-1*2^2*5)) = (12, -8) = (12, 56)\), since \(P_4 = P_3 -O_3*2^{3}*5 = 52 - 40 = 12\) (since \(O_3 = 1\)), \(v_{2} = 1\), and in \({\mathbb {Z}}_{64}\) we have that \(-8=56\). Since \(12 < 56\), this call will return shares of zero, i.e. \(O_4 = 0\).
- (f)
On the next iteration (\(i=5\)), FM2NP is run on \((12, (12-1*2^1*5)) = (12, 10)\), since \(P_5 = P_4 - O_4*2^{2}*5 = 12 - 0 = 12\) (since \(O_4 = 0\)) and \(v_{1} = 1\). Since \(12 > 10\), this call will return shares of one, i.e. \(O_5 = 1\).
- (g)
On the last iteration (\(i=6\)), FM2NP is run on \((2, (2-1*2^0*5)) = (2, -3) = (2, 61)\), since \(P_6 = P_5 - O_5*2^{1}*5 = 12 - 10 = 2\) (since \(O_5 = 1\)), and \(v_{0} = 1\) (by definition), and in \({\mathbb {Z}}_{64}\) we have that \(-3=61\). Since \(2 < 61\), this call will return shares of zero, i.e. \(O_6 = 0\).
Therefore, \(O = (0, 0, 0, 1, 0, 1, 0)\), which is the binary representation of 10, as desired.
3.3 Correctness of the Division Protocol
We provide a short proof sketch that the output of the Division Protocol in Sect. 3.1 is correct. Note that if the quotient Q has binary representation \(Q=q_{\lambda }\dots q_1 q_0\), then the division protocol outputs (shares of) the binary digits as \(q_i = O_{\lambda -i}\). In other words, the output \(O_i\) on iteration i corresponds to the binary digit \(q_{\lambda -i}\).
The proof proceeds with a double-induction on the size of the dividend P and the domain size \(\lambda \), arguing that for a fixed dividend P, the ith coordinate \(O_i\) (which corresponds to binary digit \(q_{\lambda -i}\) of the output) is computed correctly as long as the division protocol correctly handles all dividends less than P and that all higher-order binary digits were computed correctly.
Base Case \(i = 0\). Note that \(v_{\lambda }\) is ‘1’ when \(2^{\lambda } \cdot D < N\) (arithmetic in \({\mathbb {Z}}\)). If \(v_{\lambda } = 0\), then \(2^{\lambda } \cdot D \ge N\) and thus \(2^{\lambda } \cdot D > P\). Notice that FM2NP will return \(O_0 = 0\) for this case, as desired. On the other hand, if \(v_{\lambda } =1\), then \(2^{\lambda } \cdot D < N\), and thus \(-N < -2^{\lambda } \cdot D\), so \(P - 2^{\lambda } \cdot D\) (mod N) will “wrap-around” at most once. In particular, \(2^{\lambda } \cdot D>P\) if and only if \(P \le P - 2^{\lambda } \cdot D\) (mod N), as desired.
Induction Step If \(v_{\lambda - i} = 0\), then the argument is the same as the Base Case above. Otherwise \(v_{\lambda - i} = 1\), and we proceed based on whether any of the higher-order bits of the quotient Q were 1:
Case 1: All earlier outputs \(O_j\) were ‘0’ (i.e. \(\forall \, 0 \le j < i\): \(O_j = 0\)).
By the induction hypothesis (on i), outputs \(\{O_0, \dots , O_{i - 1}\}\) were all generated correctly, and since they were all zero, \(P < 2^{\lambda - i + 1} \cdot D\). Meanwhile, we are considering the case that \(v_{\lambda - i} = 1\), and so \(N > 2^{\lambda - i} \cdot D\), and thus \(P - 2^{\lambda - i} \cdot D\) (mod N) will “wrap-around” exactly once if and only if \(2^{\lambda -i} \cdot D > P\). Thus, \(O_i\) will equal ‘1’ if and only if \(P - 2^{\lambda -i} \cdot D\) is less than P, which will happen if and only if \(P - 2^{\lambda -i} \cdot D\) (mod N) does not wrap-around, which happens if and only if \(2^{\lambda -i} \cdot D \le P\), as desired.
Case 2: At least one earlier output was ‘1’. Let l denote the value whose higher-order bits are defined by the outputs \(\{O_0 \ O_1 \ \dots \ O_{i - 1}\}\), i.e. \(l = \sum _{j=0}^{i-1} 2^{\lambda - j} \cdot O_j\), and note that \(l >0\) since we are in the case that at least one of the outputs \(\{O_0, \dots , O_{i - 1}\}\) was ‘1’. By the induction hypothesis, the outputs \(\{O_0, \dots , O_{i - 1}\}\) were generated correctly (i.e. they represent the high-order bits of the quotient Q), and hence \(l \cdot D \le P\) but \((l + 2^{\lambda - i + 1}) \cdot D > P\) (all arithmetic in \({\mathbb {Z}}\)). On the ith iteration, \(P_i = P - l \cdot D\), which is less than P because \(l > 0\), and thus by induction (on the size of the dividend), the division protocol will correctly find the quotient of \(P_i\), i.e. all lower-order bits of Q will be correctly computed.\(\square \)
4 The Random Value Protocol (RVP)
In this section, we discuss how two parties can choose a value \(R \in {\mathbb {Z}}_Q\) uniformly at random, where \(Q \in {\mathbb {Z}}_N\) for a publicly known N but unknown Q (the two parties share Q (mod N)).
Definition 2
Let \(Q=Q^A + Q^B (\hbox {mod}\ N)\) be an arbitrary positive integer shared between Alice and Bob. A Random Value Protocol is a protocol run between Alice and Bob that outputs (shares of) a value \(R \in [0..Q-1]\), where R has been sampled uniformly at random from \([0..Q-1]\).
Before we describe the protocol, we provide motivation for why the problem is interesting. With a secure division protocol in hand (as in Sect. 3.1), a naive solution to the RVP is to have the parties choose (shares of) a random value \(R' \in {\mathbb {Z}}_N\), and then use the secure division protocol to compute \(R = R'\) (mod Q). The problem with this approach is that the resulting distribution will not represent a uniform distribution over \({\mathbb {Z}}_Q\), and its deviation from uniform may be statistically significant (in terms of satisfying (1) and (2)). In particular, if Q does not divide N, then the modulus \(\bar{N} \in [0..Q-1]\) of N in \({\mathbb {Z}}_Q\) is non-zero and \(R = R'\) (mod Q) will NOT be distributed uniformly in \([0..Q-1]\), as R will be slightly more likely to lie in \([0..\bar{N}-1]\) than in \([\bar{N}..Q-1]\). To quantify this, if \(N=DQ + \bar{N}\), then the probability that \(R \in [0..\bar{N}-1]\) is \(\bar{N}(D+1)/N\), while the probability that \(R \in [\bar{N}..Q-1]\) is \((Q-\bar{N})D/N\). To see that this may produce a non-negligible error in generating a uniformly distributed value for R in \([0..Q-1]\), consider as an example \(Q = 2N/7\). Then \(D = 3\) and \(\bar{N} = N/7\), so R lies in the first half of Q (i.e. in \([0..\bar{N}-1]\), since \(\bar{N} = Q/2\)) with probability \(4/7 =\bar{N}(D+1)/N\) versus a probability of 3/7 of lying in the second half of Q (i.e. \([Q/2..Q-1]\)).
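The bias of the naive approach can be tabulated exactly; the following sketch (our own illustration, not part of the protocol) computes the distribution of \(R = R' \bmod Q\) for uniform \(R' \in {\mathbb {Z}}_N\):

```python
from fractions import Fraction

def naive_rvp_distribution(N, Q):
    """Exact distribution Pr[R = r] of R = R' mod Q when R' is drawn
    uniformly from Z_N (the naive reduction described above)."""
    counts = [0] * Q
    for r_prime in range(N):
        counts[r_prime % Q] += 1
    return [Fraction(c, N) for c in counts]
```

With \(N = 70\) and \(Q = 20 = 2N/7\), the mass on the first half \([0..9]\) is 4/7, versus 3/7 on \([10..19]\), matching the computation above.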
Returning to the question of security, if the functions \(f^A\) and \(f^B\) in Definition 1 involve drawing a value R uniformly from \({\mathbb {Z}}_Q\) (for some unknown, shared value Q), then having Alice and Bob generate R as described in the naive RVP above will make it impossible to find simulators as in (1) and (2). We therefore need to find a way to sample uniformly from \({\mathbb {Z}}_Q\) without revealing any information about Q to either party.
We begin by defining precisely the notion that Alice and Bob will not have any knowledge about the random value R that is selected by a Random Value Protocol.
Definition 3
Let \(\hbox {VIEW}^A\) denote Alice’s view of an execution of a RVP. We say that Alice and Bob have chosen R obliviously with respect to Alice’s view if:
If (7) holds for both parties’ views, we say that the RVP samples R obliviously for both parties.
Notice that Definition 3 (and in particular (7)) is independent of any specific value of \(Q < N\). We will say that a RVP is secure if in addition to Definition 3, Q remains hidden from both parties (as in Definition 1).
4.1 Protocol Overview
The protocol first has Alice and Bob generate \(S \in {\mathbb {Z}}_Q\) such that S is chosen uniformly at random (in \({\mathbb {Z}}_Q\)), but Bob may have partial knowledge of its value (Alice, however, is oblivious to the value of S). The partial knowledge that Bob has about S does not reveal anything about Q (by contrast, knowing S exactly would already leak too much information about Q, since S is chosen uniformly from \([0..Q-1]\)). This is followed by the two parties forming \(T \in {\mathbb {Z}}_Q\) in an analogous manner but with their roles reversed, so that it is Alice who may have partial knowledge about T, and Bob who is oblivious. From these they will set \(R=S+T\) (mod Q).
We first present a brief high-level description of how they generate \(S \in {\mathbb {Z}}_Q\). Imagine the integers 0 through \(Q-1\) to be partitioned into sets whose sizes are powers of two, as determined by the binary representation of Q. For example, if \(Q = 37 = 100101\), then [0..36] is partitioned into the sets of size 1, 4, and 32: \(\{ 0 \}, [1..4], [5..36]\). For each set, a value is chosen uniformly at random (amongst all numbers in that set), so that if there are m sets, then m random values \(\{ S_1, \dots , S_m \}\) are chosen. Finally, S will be set to one of these m values, according to a probability that depends on the size of each set. More specifically, if the ith set has size \(2^{j}\), then we set S to be \(S_i\) with probability \(\frac{2^j}{Q}\). Continuing with the above example for \(Q = 37\), there are \(m = 3\) sets, and \(S_1\) is chosen uniformly at random from \(\{0\}\) (i.e. necessarily \(S_1 = 0\)), \(S_2\) is chosen uniformly from [1..4], and \(S_3\) uniformly from [5..36]. Then the final value S will be set to \(S_1\) with probability 1/37, or \(S_2\) with probability 4/37, or \(S_3\) with probability 32/37.
Claim The above described protocol samples values uniformly from \([0..Q-1]\).
Proof Sketch Of the Q values in \([0..Q-1]\), we want to argue that each is selected with probability 1/Q. Let \(Q=q_{\lambda }\dots q_1 q_0\) denote the binary representation of Q. Notice that the partitioning of the values \([0..Q-1]\) into the m sets \(\{S_1, S_2, \dots , S_m\}\) as described above is a complete and disjoint partitioning: every value in \([0..Q-1]\) lies in exactly one set \(S_i\) (this is based on the choice of sizes of the sets, in accordance to the binary representation of Q). Then for any value \(v \in [0..Q-1]\), the probability that v is selected by the above protocol is the probability that v is the chosen representative in its set, times the probability that its set is the one that is selected. If v’s set has size \(2^j\), then the former probability is \(1/2^j\), and the latter probability is \(2^j/Q\), and thus v gets selected with probability 1/Q, as desired. \(\square \)
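The partition-based sampler described above can be sketched as follows (helper names are ours; this is the plaintext scheme, before any privacy considerations):

```python
import random

def partition_blocks(Q):
    """The power-of-two intervals partitioning [0..Q-1]: one interval of
    size 2^i for each '1' digit q_i in the binary representation of Q."""
    blocks, low = [], 0
    for i in range(Q.bit_length()):
        if (Q >> i) & 1:
            blocks.append((low, low + (1 << i) - 1))
            low += 1 << i
    return blocks

def partition_sample(Q):
    """Sample uniformly from [0..Q-1]: pick a block with probability
    proportional to its size, then a uniform element inside it."""
    blocks = partition_blocks(Q)
    lo, hi = random.choices(blocks, weights=[b - a + 1 for a, b in blocks])[0]
    return random.randint(lo, hi)
```

For \(Q = 37\) the blocks are \(\{0\}\), [1..4], and [5..36], exactly as in the example above.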
While the above protocol is a relatively straightforward way of sampling uniformly at random from an integral range, it requires knowledge of (the binary representation of) Q, which ultimately we want to keep hidden. We make the following two modifications that make the above protocol slightly more complex, but that will be more amenable to a secure extension that keeps Q hidden:
- 1.
Translation For each set \(S_i = [a_i..b_i]\), rather than choosing an element randomly in \([a_i..b_i]\), it is equivalent to translate this interval to start at zero and choose a random value in \([0..(b_i - a_i)]\), and then add \(a_i\). Notice that the translation amount (\(a_i\)) is determined by the lower-order binary digits. For example, when \(Q = 37 = 100101\), the third set \(S_3\) (corresponding to the \(2^5\) digit) had the interval [5..36], so the translation amount (five) equals the binary number represented by the lower-order binary digits of Q.
- 2.
Selection Above, we described the relevant sets \(\{S_1, \dots , S_m\}\) based on the m non-zero binary digits of Q. More generally, for every binary digit \(q_i \in \{q_0, q_1, \dots , q_{\lambda }\}\), independent of whether it is ‘0’ or ‘1’, we can form a corresponding set \(S_i := [0..2^i -1]\) (notice the domain of this interval has been translated to zero, as per Translation above) and choose a value \(v_i\) uniformly in \([0..2^i - 1]\). These sets still correspond to the binary digits of Q, so the above protocol (which created only m sets for the m non-zero binary digits of Q) extends to the present scenario by insisting that we select a set \(S_i \in \{S_0, S_1, \dots , S_{\lambda }\}\) with probability:
$$\begin{aligned} \begin{array}{ll} 0 &{} \quad \text{ if } \ q_i = 0 \\ 2^i/Q &{} \quad \text{ if }\ q_i = 1 \end{array} \end{aligned}$$
We formalize the extended protocol described above.
(Insecure) Random Value Protocol Let Q be an arbitrary positive integer, and let \(Q=q_{\lambda }\dots q_1 q_0\) denote its binary representation. Sample a value from \([0..Q-1]\) uniformly at random by:
- 1.
For each \(0 \le i \le \lambda \), set \(S_i = [0..2^i -1]\), and then choose a value \(v_i \leftarrow S_i\) uniformly at random. Let \(\mathbf {v} = (v_0, \dots , v_{\lambda }) \in {\mathbb {Z}}^{1 + \lambda }\) denote the vector whose coordinates are the chosen values of \(\{v_i\}\).
- 2.
For each \(0 \le i \le \lambda \), set (the translation amount) \(a_i := q_{i -1} \dots q_1 q_0\) (define \(a_0 := 0\)). That is, \(a_i\) is the quantity described by the i lowest-order binary digits of Q. Let \(\mathbf {a} = (a_0, a_1, \dots , a_{\lambda }) \in {\mathbb {Z}}^{1 + \lambda }\) denote the vector whose coordinates are the values \(\{a_i\}\).
- 3.
Select a digit \(i \in [0..\lambda ]\) with probability:
$$\begin{aligned} \begin{array}{ll} 0 &{} \quad \text{ if } \ q_i = 0\\ 2^i/Q &{} \quad \text{ if }\ q_i = 1 \end{array} \end{aligned}$$(8)
Let \(\mathbf {e}_i \in {\mathbb {Z}}_2^{1+\lambda }\) denote the characteristic vector with the ‘1’ in the ith coordinate.
- 4.
Output value \(v_i + a_i\), which can be expressed as:
$$\begin{aligned} v_i + a_i = \mathbf {e}_i \cdot (\mathbf {v} + \mathbf {a}), \end{aligned}$$(9)
where \(\cdot \) denotes the inner-product (in \({\mathbb {Z}}^{1+\lambda }\)).
Notice that Steps 1–2 of the above protocol are extraneous: the digit i could have been selected (as in Step 3) first, and then Steps 1 and 2 could have been done just once for this value of i. However, it will be convenient to describe the protocol as above, as the secure version will follow this pattern.
Claim The above (Insecure) Random Value Protocol samples values uniformly from \([0..Q-1]\).
Proof Sketch Let \(v \in [0..Q-1]\). We want to argue v will be selected with probability 1/Q. Let \(j \in [0..\lambda ]\) denote the lowest index such that \(v < q_{j} \dots q_1 q_0\) (such a j necessarily exists because \(v < Q = q_{\lambda } \dots q_1 q_0\)), and notice that necessarily \(q_j = 1\) (otherwise minimality of choice of j is contradicted). Let \(a_j := q_{j-1} \dots q_1 q_0\) (define \(a_j = 0\) if \(j=0\)). Then it is straightforward to verify that v will be selected if and only if:
- A.
In Step 3, binary digit j was selected.
- B.
In Step 1, when \(i = j\), \(v_i = (v - a_j)\) was chosen.
- C.
In Step 2, when \(i = j\), \(a_i = a_j\) was the translation amount.
Since \(q_j = 1\), the probability of (A) is \(2^j/Q\) by (8). The probability of (B) is \(1/2^j\) (all that must be argued here is that \((v - a_j)\) lies in the interval \([0..2^j - 1]\), which is immediate by choice of j and definition of \(a_j\)). The probability of (C) is 1, by definition of \(a_j\). Thus, the probability of v being selected is \(2^j/Q \cdot 1/2^j \cdot 1 = 1/Q\), as desired. \(\square \)
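Steps 1–4 and the probability analysis of the claim can be sketched as follows (`exact_prob` reproduces the proof's counting argument; all names are ours):

```python
from fractions import Fraction
import random

def insecure_rvp(Q):
    """One draw from [0..Q-1] following Steps 1-4 of the (Insecure)
    Random Value Protocol."""
    lam = Q.bit_length() - 1                                 # lambda = floor(log2 Q)
    v = [random.randrange(1 << i) for i in range(lam + 1)]   # Step 1: v_i in [0..2^i-1]
    a = [Q & ((1 << i) - 1) for i in range(lam + 1)]         # Step 2: a_i = q_{i-1}..q_0
    ones = [i for i in range(lam + 1) if (Q >> i) & 1]       # Step 3: pick i with
    i = random.choices(ones, weights=[1 << j for j in ones])[0]  # prob 2^i/Q if q_i=1
    return v[i] + a[i]                                       # Step 4

def exact_prob(Q, w):
    """Exact Pr[insecure_rvp(Q) == w]: sum over the digit i chosen in
    Step 3 of (2^i / Q) * Pr[v_i = w - a_i]."""
    lam = Q.bit_length() - 1
    p = Fraction(0)
    for i in range(lam + 1):
        if (Q >> i) & 1 and 0 <= w - (Q & ((1 << i) - 1)) < (1 << i):
            p += Fraction(1 << i, Q) * Fraction(1, 1 << i)
    return p
```

As the proof sketch argues, `exact_prob(Q, w)` equals 1/Q for every \(w \in [0..Q-1]\): each w is reachable through exactly one digit.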
In terms of extending the Insecure Random Value Protocol to a secure version that hides Q, notice that the formation of the sets \(\{S_i\}\) in Step 1 can be done (mostly) independently of Q; namely, one only needs to know (an upper-bound for) \(\lfloor \log _2 Q \rfloor \). Since we will be utilizing the RVP in a setting where \(Q<N\) for known N, we can use \(\lfloor \log _2 N \rfloor \) as an upper-bound for \(\lambda \) (approximating \(\lambda \) this way will not affect correctness of the (Insecure) Random Value Protocol, only efficiency). Also, Step 2 can be done with a secure (scalar) multiplication protocol, since:
Also, Step 4 can be achieved with a secure Scalar Product Protocol (SPP), provided the parties have (shares of) \(\mathbf {v}\), \(\mathbf {a}\), and \(\mathbf {e}_i\).
Thus, the difficult part of extending the Insecure Random Value Protocol to a secure version (that hides Q) is Step 3: sampling an index \(i \in [0..\lambda ]\) according to the probabilities in (8). Indeed, such a sampling has a similar flavor to the original RVP itself (how to sample from a domain where the domain size is unknown), except that we’ve shifted the problem from an unknown domain size Q to an unknown sampling distribution; i.e. now the domain \([0..\lambda ]\) is known, but selection is no longer uniform but rather is according to a probability (0 or \(2^i/Q\)) that depends on Q. The ‘Q’ that appears in the denominator of (8) is independent of i, and can be thought of as a normalizing factor for the desired distribution. The challenge will be respecting the \(q_i\) values when deciding if an index i should be chosen with probability zero or probability proportional to \(2^i\).
To achieve Step 3 in a secure setting (where Q must be hidden), we first describe a Reordering Protocol (which is independent of Q, except for dependence on the publicly known \(\lambda \)) that reorders the integers \([0..\lambda ]\) according to some prudently chosen characteristic. This reordering can be viewed as allowing arbitrary normalization of a given distribution, which in particular will allow the “normalization by Q” that appears in (8). We first define the key characteristic of a Reordering Protocol, and then demonstrate how it can be used to sample index i from \([0..\lambda ]\) according to the distribution prescribed by (8).
Definition 4
A protocol that generates a reordering of the integers \([0..\lambda ]\) in a manner that satisfies the following property will be called a Reordering Protocol:
Reordering Property For any set of indices \({\mathcal {I}} \subseteq [0..\lambda ]\) and for any index \(i \in {\mathcal {I}}\), the probability that i appears in the reordered sequence before all other indices in \({\mathcal {I}}\) is given by:
$$\begin{aligned} \text{ Probability }\ i\ \text{ appears } \text{ first } \text{ among } \text{ all } \text{ indices } \text{ in }\ {\mathcal {I}} = \ 2^i \ / \ \sum _{j \in \ {\mathcal {I}}} 2^{j} \end{aligned}$$(10)
The intuition as to why a protocol that satisfies the Reordering Property is useful as a subprotocol to achieve Step 3 of the (Insecure) RVP is that it enables “normalization” by an arbitrary constant Q. In particular, the set of indices \({\mathcal {I}}\) will be taken to be the indices of the binary digits of Q that equal ‘1’. Then the Reordering Property guarantees that if we sample from the indices \([0..\lambda ]\) by using a Reordering Protocol, and then output the first index in the reordered sequence that appears in \({\mathcal {I}}\), then this is equivalent to sampling from \([0..\lambda ]\) as in (8).
The specific implementation of a Reordering Protocol is not important, so long as it obeys the Reordering Property. One way of producing such a reordering comes from the classic “choosing balls from a bag” scenario from basic Probability Theory, as follows. A bag is filled with balls each marked with an index in \([0..\lambda ]\). For each index \(i \in [0..\lambda ]\), there will be \(2^i\) balls placed in the bag; i.e. one ball with index 0, two balls with index 1, etc. Then a Reordering Protocol can be achieved by selecting a ball at random from the bag, and using its index as the first number in the reordered sequence. Then, all balls with that index are removed from the bag, and the procedure continues. For completeness, we formalize this procedure in an Example Reordering Protocol (“Appendix B”), where we also provide a proof that it indeed satisfies the Reordering Property.
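A sketch of this bag-based reordering, together with an exact check of the Reordering Property on small instances (drawing a ball with probability proportional to \(2^i\) among the indices still in the bag is equivalent to the procedure above; the verification helper is our own):

```python
from fractions import Fraction
import random

def reorder(lam):
    """Example Reordering Protocol: repeatedly draw an index with
    probability proportional to 2^i among those not yet drawn (the bag
    holds 2^i balls per index, and a drawn index's balls are removed)."""
    remaining, order = list(range(lam + 1)), []
    while remaining:
        i = random.choices(remaining, weights=[1 << j for j in remaining])[0]
        order.append(i)
        remaining.remove(i)
    return order

def first_in_I_prob(lam, I, i):
    """Exact Pr[i appears before every other member of I] under the
    weighted draws of reorder(), computed by recursion over the draws."""
    def rec(remaining):
        total = sum(1 << j for j in remaining)
        p = Fraction(0)
        for j in remaining:
            if j == i:
                p += Fraction(1 << j, total)          # i drawn: success
            elif j not in I:
                p += Fraction(1 << j, total) * rec([k for k in remaining if k != j])
            # j in I, j != i: some other member of I came first, contributes 0
        return p
    return rec(list(range(lam + 1)))
```

For instance, with \(\lambda = 3\) and \({\mathcal {I}} = \{0, 1, 3\}\), index 3 comes first among \({\mathcal {I}}\) with probability \(2^3/(2^0+2^1+2^3) = 8/11\), as (10) requires.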
We now describe how a Reordering Protocol can be used as a subprotocol to instantiate Step 3 of the (Insecure) Random Value Protocol.
(Insecure) RVP Step 3 Let Q be an arbitrary positive integer, let \(\lambda \ge \lfloor \log _2 Q \rfloor \), and let \(Q=q_{\lambda } \dots q_1 q_0\) be its binary representation. This protocol outputs a value \(i \in [0..\lambda ]\) as follows:
- A.
Run the Reordering Protocol to get a reordering of the integers in \([0..\lambda ]\). Let \(\tau : [0..\lambda ] \rightarrow [0..\lambda ]\) denote this reordering, and let \(\sigma = \tau ^{-1}\) denote the inverse mapping.
- B.
Let i denote the first value that appears in the reordering such that \(q_i = 1\). Output the characteristic vector \(\mathbf {e}_i = (0, \dots , 0, 1, 0, \dots , 0)\), whose ‘1’ is in the ith coordinate (where coordinate labeling in this vector is 0-based, so that, e.g. \(\mathbf {e}_0 = (1, 0, \dots , 0)\)).
For example, for \(Q=37 = 100101\), we have \(\lambda = 5\), and so the Reordering Protocol will be applied to the integers [0..5]. Suppose the Reordering Protocol outputs: \(\{4, 1, 5, 3, 2, 0\}\). Then the first value appearing is ‘4’, which is rejected because \(q_4 = 0\). The next value ‘1’ is similarly rejected (\(q_1 = 0\)), and hence \(i=5\) is selected in Step B of the (Insecure) RVP Step 3 as the first value appearing such that the corresponding binary digit of Q (\(q_5\)) is ‘1’.
Claim The (Insecure) RVP Step 3 will output \(\mathbf {e}_i\), where \(i \in [0..\lambda ]\) has probability given by (8).
Proof Sketch We utilize the Reordering Property of the Reordering Protocol called in Step A, applied to the set of indices \({\mathcal {I}} := \{i \in [0..\lambda ] \ | \ q_i = 1\}\). Let \(i \in [0..\lambda ]\) be an arbitrary index; we want to show that the (Insecure) RVP Step 3 will output \(\mathbf {e}_i\) with probability given by (8). If \(q_i = 0\), then i is selected with probability zero (Step B only selects i with \(q_i = 1\)), as required. If \(q_i = 1\), then i is selected in Step B if and only if i is the first index in \({\mathcal {I}}\) to appear in the reordering. By the Reordering Property, this occurs with probability \(2^i / \sum _{j \in {\mathcal {I}}} 2^j = 2^i / Q\), since the ‘1’ digits of Q satisfy \(\sum _{j \in {\mathcal {I}}} 2^j = Q\), as required. \(\square \)
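Steps A–B can be sketched in a single loop; this compact rendering (ours) stops as soon as a ‘1’ digit of Q is drawn rather than computing the full reordering, which does not change the output distribution:

```python
import random

def rvp_step3(Q, lam):
    """Select i in [0..lam] with probability 2^i/Q if q_i = 1, and 0
    otherwise: draw indices as in the Reordering Protocol and output
    the first one whose binary digit of Q is '1'."""
    remaining = list(range(lam + 1))
    while True:
        i = random.choices(remaining, weights=[1 << j for j in remaining])[0]
        if (Q >> i) & 1:
            return i                 # first reordered index with q_i = 1
        remaining.remove(i)          # q_i = 0: reject and keep drawing
```

For \(Q = 37\) (so \(\lambda = 5\)) the output is always one of \(\{0, 2, 5\}\), the positions of the ‘1’ digits of 100101.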
Notice that Step A of the (Insecure) RVP Step 3 can be done independently from Q (except for \(\lambda \ge \lfloor \log _2 Q \rfloor \), which as mentioned earlier can be taken to be \(\lambda =\lfloor \log _2 N \rfloor \) for a public value \(N > Q\)). Meanwhile, together with a trick to convert shares of \(\{q_i\}\) to shares of \(\{q_{\sigma (i)}\}\) such that \(\sigma \) is hidden from one party, a secure version of Step B can be obtained directly from a secure Scalar Product Protocol (SPP) and a Nested Product Protocol (NPP). The reduction of Step B to these two subprotocols (plus the reindexing trick) is due to the fact that we can express \(\mathbf {e}_i\) as:
See the Compute \(\mathbf {e}_i\) Protocol in “Appendix C” for details.
We are now ready to put together these ideas and provide a formal description of our (Secure) Random Value Protocol.
4.2 Description of the Protocol
As mentioned at the start of Sect. 4.1, the protocol first has Alice and Bob sample a value \(S \leftarrow {\mathbb {Z}}_Q\) uniformly at random, such that Q remains hidden to both parties and S is sampled obliviously with respect to Alice’s view (as in Definition 3). That is, neither party learns anything about Q, and Alice knows nothing about S (Bob may have partial information about S, but this information will not leak any information about Q). Next, the parties run the same protocol with their roles reversed, generating a uniform \(T \leftarrow {\mathbb {Z}}_Q\) such that neither party learns anything about Q and Bob knows nothing about T. Next, they run the Addition Modulo Unknown Value Protocol to obtain the final output \(R := S + T \ (\text{mod } Q)\). Thus, a secure RVP will follow from the following protocol, which generates S obliviously for one party’s view, and such that Q remains private (with respect to both parties’ views).
RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) Obliviously (with respect to Alice’s view)
Input Public parameters N and \(\lambda =\lfloor \log _2 N \rfloor \) are known to both parties; Alice and Bob additively share an unknown value \(Q = Q^A + Q^B < N\).
Output Alice and Bob have (shares of) a value \(S \in {\mathbb {Z}}_Q\) such that:
- 1.
S is chosen uniformly at random in \({\mathbb {Z}}_Q\).
- 2.
Alice is oblivious to the value S chosen (as in Definition 3); no requirements are made regarding Bob’s knowledge of S (Bob will have partial knowledge).
- 3.
Alice and Bob learn nothing about Q.
Cost This protocol will add \(O(\lambda ^2)\) to communication.
Protocol Description Note that this protocol follows the framework of the (Insecure) RVP presented in Sect. 4.1, with the appropriate modifications to ensure Q remains hidden.
- 0.
Alice and Bob run the To Binary Protocol to convert their shares of Q to shares of each bit \(\{q_i\}\) in the binary representation of Q: \(q_i = q^A_i + q^B_i \ (\text{mod } N)\).
- 1.
For each \(0 \le i \le \lambda \), Bob sets \(S_i = [0..2^i - 1]\), and chooses a value \(v_i \leftarrow S_i\) uniformly at random. Let \(\mathbf {v} = (v_0, \dots , v_{\lambda }) \in {\mathbb {Z}}^{1 + \lambda }\) denote the vector whose coordinates are the chosen values of \(\{v_i\}\).
- 2.
For each \(0 \le i \le \lambda \), Alice and Bob locally compute shares of (the translation amount) \(a_i := q_{i -1} \dots q_1 q_0\) (define \(a_0 := 0\)). Namely, the parties set \(a^A_0 = 0 =a^B_0\), and then locally compute for each \(1 \le i \le \lambda \):
$$\begin{aligned} \text{ Alice: } \quad a^A_i&= q^A_{i -1} \dots q^A_1 q^A_0 =\sum _{j=0}^{i-1}2^j \cdot q^A_j \\ \text{ Bob: } \quad a^B_i&= q^B_{i -1} \dots q^B_1 q^B_0 = \sum _{j=0}^{i-1}2^j \cdot q^B_j, \end{aligned}$$where all arithmetic above is computed modulo N. Note that \(a_i = a_i^A + a_i^B (\text{ mod } N)\). Let \(\mathbf {a} = (a_0, a_1, \dots , a_{\lambda }) \in {\mathbb {Z}}^{1 + \lambda }\) denote the vector whose coordinates are the values \(\{a_i\}\).
- 3.
Alice and Bob run the following subprotocols to obtain (shares of) the characteristic vector \(\mathbf {e}_i = (0, \dots , 0, 1, 0, \dots , 0) \in {\mathbb {Z}}_2^{1+\lambda }\) where the ‘1’ is in coordinate i with probability as in (8):
- A.
Bob runs a Reordering Protocol to get a reordering of the integers in \([0..\lambda ]\). Let \(\tau : [0..\lambda ] \rightarrow [0..\lambda ]\) denote this reordering, and let \(\sigma = \tau ^{-1}\) denote the inverse mapping.
- B.
Alice and Bob run the Compute \(\mathbf {e}_i\) Protocol to output (shares of) \(\mathbf {e}_i\).
- 4.
Alice and Bob run the Scalar Product Protocol to obtain (shares of) \(S := \mathbf {e}_i \cdot (\mathbf {v} + \mathbf {a})\), where “\(\cdot \)” denotes the inner-product (in \({\mathbb {Z}}^{1+\lambda }\)).
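Stripped of the secret sharing and encryption, the sampling logic of Steps 1–4 can be sketched in the clear as follows. The function names are hypothetical, and we take the probability referenced in (8) to be \(2^i/Q\) for each set bit i of Q, which is the choice that makes the output uniform; the helper `ranges_partition` checks the fact underlying uniformity, namely that the candidate output ranges exactly tile \([0, Q-1]\).

```python
import random

def sample_mod_q(Q, rng=random):
    """Insecure, in-the-clear sketch of Steps 1-4 (names hypothetical).

    Pick a set bit i of Q with probability 2^i / Q, then output a_i + v_i,
    where a_i is the value of Q's bits below position i and v_i is uniform
    in [0 .. 2^i - 1].
    """
    set_bits = [i for i in range(Q.bit_length()) if (Q >> i) & 1]
    i = rng.choices(set_bits, weights=[1 << b for b in set_bits])[0]
    a_i = Q % (1 << i)           # translation amount a_i = q_{i-1}...q_1 q_0
    v_i = rng.randrange(1 << i)  # v_i <- [0 .. 2^i - 1]
    return a_i + v_i

def ranges_partition(Q):
    """Why the output is uniform: the candidate ranges [a_i, a_i + 2^i - 1],
    one per set bit i of Q, exactly partition [0, Q-1]."""
    covered = []
    for i in range(Q.bit_length()):
        if (Q >> i) & 1:
            a_i = Q % (1 << i)
            covered.extend(range(a_i, a_i + (1 << i)))
    return sorted(covered) == list(range(Q))
```

In the protocol itself, of course, neither party ever sees Q, the bit \(q_i\), or the translation amount \(a_i\) in the clear; each quantity above is held as additive shares and combined via the secure subprotocols.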
Lemma 4.1
The above RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) satisfies the desired output properties 1–3:
Correctness \(S \leftarrow {\mathbb {Z}}_Q\) is chosen uniformly at random.
Obliviousness Alice is oblivious to the output value S, as in Definition 3.
Security Q is securely hidden from both parties, as in Definition 1.
Proof
Correctness follows from correctness of the (Insecure) Random Value Protocol (which was proven in Sect. 4.1). Obliviousness of Alice’s view (in the sense of Definition 3) follows from:
- 1.
Obliviousness of Step 0 follows from the security of the To Binary Protocol.
- 2.
Obliviousness of Step 1 is immediate, as Alice is not involved in this step.
- 3.
Obliviousness of Step 2 is immediate, since \(\{a^A_i\}\) can be computed from Alice’s shares of the binary digits \(\{q_i\}\) that she received from Step 0.
- 4.
Obliviousness of Step 3 follows from the security of the Compute \(\mathbf {e}_i\) Protocol.
- 5.
Obliviousness of Step 4 follows from the security of the Scalar Product Protocol.
Regarding security (that Q remains hidden from both parties), notice that while the choice of \(\{v_i\}\) in Step 1 gives Bob partial information about S, this step is independent of Q, and thus Bob can learn nothing about Q. Indeed, the only steps that pertain to Q are Steps 0, 2, 3B, and 4, and since each of these steps invokes a secure subprotocol, all knowledge of Q remains hidden. \(\square \)
4.3 Proof of Correctness, Obliviousness, and Security
Recall that the overall (secure) RVP proceeds by invoking the RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) twice (once to generate S obliviously from Alice’s view and then to generate T obliviously from Bob’s view), followed by a single call to the Addition Modulo Unknown Value Protocol to compute \(R = S + T \pmod {Q}\).
Theorem 4.2
The above described Random Value Protocol satisfies:
- 1.
Correctness \(R \leftarrow {\mathbb {Z}}_Q\) is chosen uniformly at random.
- 2.
Obliviousness Both parties are oblivious to the output value R, as in Definition 3.
- 3.
Security Q is securely hidden from both parties, as in Definition 1.
Proof
Correctness of the RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) (that S is sampled uniformly from \({\mathbb {Z}}_Q\)) was demonstrated in Lemma 4.1. Correctness of the RVP then follows from:
Observation If Y is any fixed number in \({\mathbb {Z}}_Q\) and X is a uniform random variable in \({\mathbb {Z}}_Q\), then the random variable \(Z := Y + X\) (mod Q) is uniformly distributed in \({\mathbb {Z}}_Q\).
The following observation demonstrates how obliviousness of the RVP follows from obliviousness of the RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) (which was proved in Lemma 4.1):
Observation If a party’s view includes knowledge of Y but no knowledge of X, then \(Z := Y + X\) (mod Q) is oblivious to that party.
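Both observations can be verified by exhaustive enumeration for a small modulus; the following check (function name hypothetical) computes the exact distribution of \(Z = Y + X \pmod {Q}\) for every fixed shift Y:

```python
from collections import Counter

def shifted_distribution(Y, Q):
    """Exact distribution of Z = Y + X (mod Q) when X is uniform over Z_Q."""
    return Counter((Y + X) % Q for X in range(Q))

# For every fixed shift Y, Z is still uniform over Z_Q, so a party who
# knows Y but nothing about X learns nothing about Z.
assert all(shifted_distribution(Y, 7) == Counter(range(7)) for Y in range(7))
```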
Based on composability of security [6], security follows from the security of the subprotocols used: the RVP Subprotocol for Generating \(S \leftarrow {\mathbb {Z}}_Q\) (security proved in Lemma 4.1), and the Addition Modulo Unknown Value Protocol, which itself consists of a secure SPP and FM2NP. \(\square \)
As an aside, we note that the above observations actually guarantee that this protocol chooses S obliviously and uniformly at random even if one of the parties is corrupted maliciously. The Random Value Protocol can therefore be used as a subprotocol in models allowing a malicious adversary, provided that the TBP, Compute \(\mathbf {e}_i\) Protocol, and SPP utilized by the RVP are all secure against a malicious adversary.
5 Two-Party k-Means Clustering Protocol
5.1 Notation and Preliminaries
Following the setup of [18], we assume that two parties, “Alice” and “Bob”, each hold (partial) data describing the d attributes of n objects (we assume Alice and Bob both know d and n). Their aggregate data comprises the (virtual) database \({\mathcal {D}}\), holding the complete information of each of the n objects. The goal is to design an efficient algorithm that allows Alice and Bob to perform k-means clustering on their aggregate data in a manner that protects their private data.
As mentioned in the Introduction, we are working in the model where our data points are viewed as living in \({\mathbb {Z}}^d_N\) for some large RSA modulus N chosen by Alice. Note that if Alice and Bob desire a lattice width of W and \(\mathtt {M}\) denotes the maximum Euclidean distance between points,Footnote 3 then Alice will pick N sufficiently large to guarantee that \(N \ge \frac{n^2 \mathtt {M}^2}{W^2}\) (this inequality guarantees that the sum of all data points does not exceed N). Because Alice chooses the RSA modulus, Bob will be performing the bulk of the computation (on the encrypted data points).
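As a concrete illustration of the bound \(N \ge \frac{n^2 \mathtt {M}^2}{W^2}\), the following sketch (function name and example values hypothetical) computes the minimum bit length this constraint imposes on N:

```python
def modulus_bits(n, M, W):
    """Bits needed for a modulus N satisfying N >= n^2 * M^2 / W^2.
    (Hypothetical helper; example values below are illustrative only.)"""
    bound = (n * n * M * M) // (W * W) + 1
    return bound.bit_length()

# e.g. a million points, maximum distance 10^4, lattice width 1:
bits = modulus_bits(10**6, 10**4, 1)  # bound is 10^20 + 1, i.e. 67 bits
```

Of course, the semantic security of the underlying encryption scheme imposes its own (typically much larger) lower bound on the size of the RSA modulus N.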
We allow the data points to be arbitrarily partitioned between Alice and Bob (see [18]). This means that there is no assumed pattern to how Alice and Bob hold attributes of different data points (in particular, this subsumes the cases of vertically and horizontally partitioned data). We only demand that between them, each of the d attributes of all n data points is known by either Alice or Bob, but not both. As discussed in [27], attributes of the data points that are measured in units significantly larger than others will dominate distance calculations. Alice and Bob may therefore wish to standardize the data before running a k-means clustering protocol on it. The manner in which this standardization is achieved depends on the nature of the data and we do not explore the possibilities here. Rather, we note that any such standardization can likely be achieved with the Scalar Product Protocol and a private Division Protocol (e.g. the one presented in Sect. 3.1). For a given data point \(\mathbf {D}_i \in {\mathcal {D}}\), we denote Alice’s share of its attributes by \(\mathbf {D}^A_i\), and Bob’s share by \(\mathbf {D}^B_i\).
5.2 Single Database k-Means Algorithms
The single database k-means clustering algorithm that we extend to the two-party setting was introduced by [24] and is summarized below. We chose this algorithm because under appropriate conditions on the distribution of the data, the algorithm is provably correct (as opposed to most other algorithms used in practice, which have no such provable guarantee of correctness). Additionally, the Initialization Phase (or “seeding process”) is done in an optimized manner, reducing the number of iterations required in the Lloyd Step. In general, the number of iterations required in the Lloyd Step depends on the nature of the data: number of data points, number of attributes/dimensions, distribution (values) of data points, etc., and hence there is no hard bound on the number of iterations that may be required. However, the analysis of [24] shows that if the data points enjoy certain “nice” properties, then the number of iterations is extremely small (i.e. with high probability, only two iterations are necessary). The number of iterations of the Lloyd Step has both communication as well as privacy implications; see “Appendix A” for a discussion.
The single database k-means clustering algorithm is as follows (see [24] for details):
Step I: Initialization This procedure chooses the cluster centers \({\varvec{\mu }}_1, \dots ,{\varvec{\mu }}_k\) according to (an equivalent version of) the protocol described in [24]:
- A.
Center of Gravity Compute the center of gravity of the n data points and denote this by \(\mathbf {C}\):
$$\begin{aligned} \mathbf {C} = \frac{\sum _{i=1}^n \mathbf {D}_i}{n} \end{aligned}$$(12)
- B.
Distance to Center of Gravity For each \(1 \le i \le n\), compute the distance (squared) between \(\mathbf {C}\) and \(\mathbf {D}_i\). Denote this as \(\widetilde{C}^0_i =\text{ Dist }^2(\mathbf {C}, \mathbf {D}_i)\).
- C.
Average Squared Distance Compute \(\bar{C} :=\frac{\sum _{i=1}^n \widetilde{C}^0_i}{n}\), the average (squared) distance.
- D.
Pick First Cluster Center Pick \({\varvec{\mu }}_1 = \mathbf {D}_i\) with probability:
$$\begin{aligned} \text{ Pr }[{\varvec{\mu }}_1 = \mathbf {D}_i] =\frac{\bar{C} + \widetilde{C}^0_i}{2n \bar{C}}. \end{aligned}$$(13)
- E.
Iterate to Pick the Remaining Cluster Centers Pick \({\varvec{\mu }}_2, \dots , {\varvec{\mu }}_k\) as follows: Suppose \({\varvec{\mu }}_1, \dots , {\varvec{\mu }}_{j-1}\) have already been chosen (initially \(j=2\)); then we pick \({\varvec{\mu }}_j\) by:
- 1.
For each \(1 \le i \le n\), calculate \(\widetilde{C}^{j-1}_i = \) Dist\(^2(\mathbf {D}_i, \ {\varvec{\mu }}_{j-1})\).
- 2.
For each \(1 \le i \le n\), let \(\widetilde{C}_i\) denote the minimum of \(\{ \widetilde{C}^{l}_i \}_{l=0}^{j-1}\).
- 3.
Update \(\bar{C}\) to be the average of \(\widetilde{C}_i\) (over all \(1 \le i \le n\)).
- 4.
Set \({\varvec{\mu }}_j = \mathbf {D}_i\) with probability:
$$\begin{aligned} \text{ Pr }[{\varvec{\mu }}_j = \mathbf {D}_i] = \frac{\widetilde{C}_i}{n \bar{C}}. \end{aligned}$$
Step II: Lloyd Step Repeat the following until \({\varvec{\nu }}_1, \dots , {\varvec{\nu }}_k\) is “sufficiently close” to \({\varvec{\mu }}_1, \dots , {\varvec{\mu }}_k\):
- A.
Finding the Closest Cluster Centers For each data point \(\mathbf {D}_i \in {\mathcal {D}}\), find the closest cluster center \({\varvec{\mu }}_{j} \in \{{\varvec{\mu }}_1, \dots , {\varvec{\mu }}_k \}\), and assign data point \(\mathbf {D}_i\) to cluster j.
- B.
Calculating the New Cluster Centers For each cluster j, calculate the new cluster center \({\varvec{\nu }}_j\) by finding the average position of all data points in cluster j. Share these new centers between Alice and Bob as \({\varvec{\nu }}^A_1, \dots \), \({\varvec{\nu }}^A_k\) and \({\varvec{\nu }}^B_1, \dots \), \({\varvec{\nu }}^B_k\), respectively.
- C.
Checking the Stopping Criterion Compare the old cluster centers to the new ones. If they are “close enough”, then the algorithm returns the final cluster centers to Alice and Bob. Otherwise, Step II is repeated after Reassigning New Cluster Centers.
- D.
Reassigning New Cluster Centers To reassign new cluster centers, set:
$$\begin{aligned}&{\varvec{\mu }}^A_1, \dots , {\varvec{\mu }}^A_k = {\varvec{\nu }}^A_1, \dots , {\varvec{\nu }}^A_k, \quad \text{ and } \\&{\varvec{\mu }}^B_1, \dots , {\varvec{\mu }}^B_k = {\varvec{\nu }}^B_1, \dots , {\varvec{\nu }}^B_k. \end{aligned}$$
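For reference, the single database algorithm above can be sketched in the clear as follows. This is a non-private illustration only (hypothetical function name, ordinary rational arithmetic instead of the \({\mathbb {Z}}_N\) encoding, and a fixed seed for reproducibility); the stopping criterion here is that the centers stop moving.

```python
import random

def kmeans_ors(points, k, eps=1e-9, rng=random.Random(0)):
    """In-the-clear sketch of Steps I and II above (no privacy)."""
    n, d = len(points), len(points[0])
    dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    # Step I.A-D: center of gravity, then first center chosen with
    # probability (Cbar + C~_i) / (2 n Cbar), i.e. weight Cbar + C~_i.
    C = [sum(p[t] for p in points) / n for t in range(d)]
    cost = [dist2(C, p) for p in points]
    Cbar = sum(cost) / n
    centers = [rng.choices(points, weights=[Cbar + c for c in cost])[0]]
    # Step I.E: remaining centers, weight C~_i = min squared distance so far.
    while len(centers) < k:
        cost = [min(c, dist2(centers[-1], p)) for c, p in zip(cost, points)]
        centers.append(rng.choices(points, weights=cost)[0])
    # Step II: Lloyd iterations until the centers stop moving.
    while True:
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: dist2(centers[j], p))
            clusters[j].append(p)
        new = [[sum(p[t] for p in cl) / len(cl) for t in range(d)] if cl
               else list(c) for cl, c in zip(clusters, centers)]
        if all(dist2(c, nu) <= eps for c, nu in zip(centers, new)):
            return new
        centers = new
```

The two-party protocol of Sect. 5.3 computes exactly these quantities, but with every intermediate value (distances, weights, sums, counts) held as additive shares rather than in the clear.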
5.3 Our Two-Party k-Means Clustering Protocol
We now extend the k-means algorithm of [24] to a two-party setting. Section 5.3.1 discusses how to implement Step I of the above algorithm (the Initialization), and Sect. 5.3.2 discusses how to implement Step II of the algorithm (the Lloyd Step). We discuss in “Appendix A” alternative approaches in the number of iterations allowed in the Lloyd Step, and why this question is an issue in terms of protecting privacy.
5.3.1 Step I: Initialization
We now describe how to extend Step I of the above algorithm to the two-party setting. In particular, we need to explain how to perform the computations from Step I in a secure way. As output, Alice should have shares of the cluster centers \({\varvec{\mu }}^A_1, \dots ,{\varvec{\mu }}^A_k\), and Bob should have \({\varvec{\mu }}^B_1, \dots ,{\varvec{\mu }}^B_k\), such that \({\varvec{\mu }}^A_i + {\varvec{\mu }}^B_i ={\varvec{\mu }}_i\), for each \(1 \le i \le k\). Below we follow Step I of the algorithm from Sect. 5.2 and describe how to privately implement each step. At the outset of the protocol, we have Alice encrypt her data points once and for all, and send them to Bob. This entails a one-time communication cost of \(O(nd\lambda )\), and without explicit mention we assume that all other subprotocols that require Bob to perform computations on Alice’s encrypted data points do not repeat this communication transfer.
- A.
Center of Gravity To implement Step A, Alice and Bob will need to compute and share:
$$\begin{aligned} \mathbf {C} = \frac{1}{n}\sum _{i=1}^n \mathbf {D}_i = \frac{1}{n} \left( \sum _{i=1}^n \mathbf {D}^A_i + \sum _{i=1}^n \mathbf {D}^B_i \right) \end{aligned}$$(12)
Since Bob has Alice’s encrypted data, Bob can locally compute (encryptions of) the above sums, and return a (randomized) share of the sum to Alice. To compute the final shares of \(\mathbf {C}\), they need to divide by n, where division is according to the division algorithm in \({\mathbb {Z}}_N\) (as per Sect. 3). One way to do this is to run a Private Division Protocol (e.g. the protocol presented in Sect. 3.1), but since the divisor n is publicly known, it may be cheaper (in terms of communication complexity) to just have each party perform the division locally, with a few calls to FM2NP, as in (3) and (4).
- B.
Distance to Center of Gravity Alice and Bob can run a secure Distance Protocol (see, e.g. [18]) on the encrypted data such that Bob obtains as output an encryption of \(\widetilde{C}^0_i\), where \(\widetilde{C}^0_i\) is the distance (squared) between \(\mathbf {C}\) and \(\mathbf {D}_i\). He randomizes this encryption and returns it to Alice, so that they share \(\widetilde{C}^0_i = \widetilde{C}^{A,0}_i + \widetilde{C}^{B,0}_i\) for each i.
- C.
Average Squared Distance Define the following sums:
$$\begin{aligned} P := \sum _{i=1}^n \widetilde{C}^{A,0}_i \quad \text{ and } \quad P' := \sum _{i=1}^n \widetilde{C}^{B,0}_i, \end{aligned}$$
which Alice and Bob can compute locally. They then compute the division of (\(P + P'\)) by n, which can be done via a Private Division Protocol (e.g. the protocol presented in Sect. 3.1) or, since the divisor n is public, they can compute this locally (with a few calls to FM2NP) using (3) and (4). As output, Alice and Bob will be sharing \(\bar{C}\) as desired.
- D.
Pick First Cluster Center Notice that picking a data point \(\mathbf {D}_i\) with probability \(\frac{\bar{C} + \widetilde{C}^0_i}{2n \bar{C}}\) is equivalent to picking a random number \(R \in [0..2n \bar{C}-1]\) and finding the first i such that \(R \le \sum _{j=1}^i (\bar{C} + \widetilde{C}^0_j)\). We use this observation to pick data points according to weighted probabilities as follows:
- 1.
Picking a RandomR In this step, Alice and Bob pick a random number in \([0..2n \bar{C}-1]\), where \(2n \bar{C} = 2n \bar{C}^{A} + 2n \bar{C}^{B}\). Alice and Bob run the Random Value Protocol (RVP) with \(Q := 2n \bar{C} = 2n \bar{C}^{A} + 2n \bar{C}^{B}\) to generate and share a random number \(R = R^A + R^B \in {\mathbb {Z}}_{2n \bar{C}}\).
- 2.
Alice and Bob will next compare their random number R with the sums \(\sum _{j=1}^i (\bar{C} + \widetilde{C}^0_j)\), and find the first i such that \(R \le \sum _{j=1}^i (\bar{C} +\widetilde{C}^0_j)\). They will then set \({\varvec{\mu }}_1 = \mathbf {D}_i\). The actual implementation of this can be found in the Choose \({\varvec{\mu }}_1\) Protocol in “Appendix C”.
- E.
Iterate to Pick the Remaining Cluster Centers
- 1.
This step is done analogously to Step I.B.
- 2.
This step outputs the minimum of \(\{ \widetilde{C}^l_i \}_{l=0}^{j-1}\). However, they do not have to take the minimum over all j numbers, since from the previous iteration of this step they already have \(\widetilde{C}_i = \min \{\widetilde{C}^l_i \}_{l=0}^{j-2}\). Thus, they only need to compute the minimum of two numbers; that is, they reset \(\widetilde{C}_i\) to be:
$$\begin{aligned} \widetilde{C}_i = \min \{ \widetilde{C}_i, \widetilde{C}^{j-1}_i \} . \end{aligned}$$
Therefore, Alice and Bob run the FM2NP on inputs \((\widetilde{C}^A_i, \widetilde{C}^{A,j-1}_i)\) and \((\widetilde{C}^B_i, \widetilde{C}^{B,j-1}_i)\) so that they share the location of (the new) \(\widetilde{C}_i\) (let \(L = L^A + L^B\) denote this location). They can then share the new \(\widetilde{C}_i = \min \{ \widetilde{C}_i, \widetilde{C}^{j-1}_i \}\) by running the SPP on inputs \(\mathbf {x} = (\widetilde{C}^A_i, \widetilde{C}^{A,j-1}_i, L^A)\) and \(\mathbf {y} =(\widetilde{C}^B_i, \widetilde{C}^{B,j-1}_i, L^B)\) and function \(f(\mathbf {x}, \mathbf {y}) = L\widetilde{C}^{j-1}_i + (1-L)\widetilde{C}_i\).
- 3.
This step is done analogously to Step I.C.
- 4.
This step is done analogously to Step I.D.
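The equivalence used in Step D above — selecting index i with probability proportional to its weight by drawing a single uniform R and scanning cumulative sums — can be checked exhaustively in the clear (hypothetical function name; in the protocol itself, the comparisons against R are performed on shares via FM2NP):

```python
from collections import Counter

def pick_by_threshold(weights, R):
    """Return the first index i with R < w_0 + ... + w_i. For integer
    weights and R drawn from [0, sum(weights)), this matches the
    'first i with R <= partial sum' rule up to the <= vs < convention."""
    total = 0
    for i, w in enumerate(weights):
        total += w
        if R < total:
            return i
    raise ValueError("R out of range")

# Exhaustive check: a uniform R in [0, sum(weights)) selects index i
# with probability weights[i] / sum(weights).
weights = [3, 1, 4]  # stand-ins for the values Cbar + C~_j
hits = Counter(pick_by_threshold(weights, R) for R in range(sum(weights)))
assert [hits[i] for i in range(len(weights))] == weights
```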
5.3.2 Step II: Lloyd Step
In this section, we discuss how to implement the Lloyd Step while maintaining privacy protection.
- A.
Finding the Closest Cluster Centers Alice and Bob repeat the following for each \(\mathbf {D}_i \in {\mathcal {D}}\):
- 1.
Find the Distance (squared) to Each Cluster Center Note that because finding the minimum of all distances is equivalent to finding the minimum of the distances squared, and because the latter is easier to compute (no square root), we will calculate the latter. Since Bob has (encryptions of) Alice’s shares of the data points and the cluster centers, Bob can run a secure Distance Protocol (see, e.g. [18]) to obtain for each cluster center j the (encrypted) distance \(X_{i,j}\) of data point \(\mathbf {D}_i\) to cluster center j. As usual, Bob randomizes each distance and returns them to Alice, so that for each j, Alice and Bob share the vector \(\mathbf {X}_i = (X_{i,1}, \dots X_{i,k})\).
- 2.
Alice and Bob run the Find Minimum of k Numbers Protocol (FMkNP) on \(\mathbf {X}^A_i\) and \(\mathbf {X}^B_i\) to obtain a share of (a vector representation of) the location of the closest cluster center to \(\mathbf {D}_i\):
$$\begin{aligned} \mathbf {C}_i := (0, \dots , 0, 1, 0, \dots , 0) \in {\mathbb {Z}}_2^k, \end{aligned}$$(14)
where the 1 appears in the jth coordinate if cluster center \({\varvec{\mu }}_j\) is closest to \(\mathbf {D}_i\). Note that in actuality, \(\mathbf {C}_i\) is shared between Alice and Bob:
$$\begin{aligned} \mathbf {C}_i = \mathbf {C}^A_i + \mathbf {C}^B_i, \end{aligned}$$
and Alice encrypts her share and sends this to Bob.
- B.
Calculating the New Cluster Centers The following will be done for each cluster \(1 \le j \le k\). We break the calculation into three steps: In Step 1, Bob will compute the sum of data points in cluster j, in Step 2 he will compute the total number of points in cluster j, and in Step 3 the result of Step 1 will be divided by the result of Step 2. To simplify the notation, by \(E(\mathbf {C}_i)\) we will mean \((E(\mathbf {C}_{i,1}), \dots , E(\mathbf {C}_{i,k}))\).
- 1.
Sum of Data Points in Cluster j In this step, Bob will compute the sum of all data points in cluster j. We denote this sum as:
$$\begin{aligned} \mathbf {S}_j := \sum _{i=1}^{n} C_{i,j} \cdot \mathbf {D}_i \in {\mathbb {Z}}^d_N, \qquad \text{ where } \ C_{i,j} = j\text{ th } \text{ coordinate } \text{ of } \mathbf {C}_{i} = {\left\{ \begin{array}{ll} 1 &{} \text{ if } \mathbf {D}_i \in \text{ cluster } j \\ 0 &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$
At the end of this step, Alice and Bob will share \(\mathbf {S}_j =\mathbf {S}^A_j + \mathbf {S}^B_j\) (here the addition is in \({\mathbb {Z}}^d_N\)). Recall from Step A above that for each data point \(\mathbf {D}_i\), Bob has \(E(\mathbf {C}^A_i)\) and \(\mathbf {C}^B_i\), where:
$$\begin{aligned} \mathbf {C}^A_i + \mathbf {C}^B_i = \mathbf {C}_i = (0, \dots , 0, 1, 0, \dots , 0). \end{aligned}$$
Utilizing the homomorphic and single multiplication properties of E, Bob can compute (an encryption of) \(\mathbf {S}_j\), returning a randomized share to Alice so that they share \(\mathbf {S}_j\) as desired.
- 2.
Number of Data Points in Cluster j Now Alice and Bob wish to share the total number of points in cluster j, denoted by \(T_j\). Notice that:
$$\begin{aligned} T_j = \sum _{i=1}^n C_{i,j}, \end{aligned}$$
i.e. \(T_j\) can be found by summing the jth coordinate of \(\mathbf {C}_i\) over all i. Bob can compute (an encryption of) \(T_j\) using his own shares of \(\mathbf {C}_i\) and Alice’s encrypted shares; after randomizing his computation, Alice and Bob share \(T_j = T^A_j +T^B_j\).
- 3.
Centroid of Data Points in Cluster j In this step Alice and Bob must divide \(\mathbf {S}^A_j + \mathbf {S}^B_j\) (from Step 1) by the total number of data points \(T_j\) in cluster j to obtain the new cluster center \({\varvec{\nu }}_j\):
$$\begin{aligned} {\varvec{\nu }}_j = \frac{\mathbf {S}^A_j + \mathbf {S}^B_j}{T^A_j + T^B_j} \end{aligned}$$(15)
Alice and Bob run a Private Division Protocol (e.g. the protocol presented in Sect. 3.1) d times (once for each of the d coordinates) on inputs \(P = \mathbf {S}^A_{j,l} +\mathbf {S}^B_{j,l}\) (the lth coordinate of \(\mathbf {S}_j\)) and divisor \(D = T^A_j + T^B_j\) (notice that necessarily \(D \in [1..n]\)).
- C.
Checking the Stopping Criterion Alice and Bob run a secure Distance Protocol (see, e.g. [18]) k times, where the lth invocation outputs shares of \(\Vert {\varvec{\mu }}_l - {\varvec{\nu }}_l \Vert ^2\). They can then add their shares together and run the FM2NP to compare these sums with \(\epsilon \), some agreed upon predetermined value. They then open their outputs from the FM2NP to determine if the stopping criterion has been met.
- D.
Reassigning New Cluster Centers The final step of our algorithm, replacing the old cluster centers with the new ones, is easily accomplished:
$$\begin{aligned}&\text{ Alice } \text{ sets: } ({\varvec{\mu }}^A_1, \dots , {\varvec{\mu }}^A_k) = ({\varvec{\nu }}^A_1, \dots , {\varvec{\nu }}^A_k), \text{ and } \\&\text{ Bob } \text{ sets: } ({\varvec{\mu }}^B_1, \dots , {\varvec{\mu }}^B_k) = ({\varvec{\nu }}^B_1, \dots , {\varvec{\nu }}^B_k). \end{aligned}$$
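In the clear, one iteration of Steps A–B computes exactly the following quantities; the protocol computes each of them on shares. The sketch below (hypothetical function name) assumes every cluster is nonempty so that \(T_j \ge 1\), and performs the Step B.3 division directly rather than via the Division Protocol:

```python
def lloyd_update(points, centers):
    """One Lloyd iteration in the clear, organized as the quantities the
    two-party protocol computes on shares (assumes no empty clusters)."""
    k, d, n = len(centers), len(points[0]), len(points)
    dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    # Step A: one-hot vector C_i marking the closest center to D_i.
    C = [[1 if j == min(range(k), key=lambda t: dist2(p, centers[t])) else 0
          for j in range(k)] for p in points]
    # Steps B.1 and B.2: S_j = sum_i C_{i,j} D_i  and  T_j = sum_i C_{i,j}.
    S = [[sum(C[i][j] * points[i][t] for i in range(n)) for t in range(d)]
         for j in range(k)]
    T = [sum(C[i][j] for i in range(n)) for j in range(k)]
    # Step B.3: nu_j = S_j / T_j (run per coordinate via the Division
    # Protocol in the actual two-party setting).
    return [[S[j][t] / T[j] for t in range(d)] for j in range(k)]
```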
5.4 Communication Analysis
Table 2 gives a succinct summary of the communication complexity of the two-party k-means clustering protocol presented in Sect. 5.3.
Thus, assuming \(n > \lambda d\), the overall communication complexity of the 2-Party k-means clustering protocol of Sect. 5.3 is:
Recall that k is the number of clusters, \(\lambda \) is the security parameter, n is the number of data points, d is the number of attributes of each data point, and m is the number of iterations in the Lloyd Step. The communication cost of our protocol matches the communication complexity of [18] while simultaneously enjoying the extra guarantee of security against an honest-but-curious adversary.
As mentioned in the Introduction, k-means clustering can also be performed securely by applying generic tools from multi-party computation, e.g. via Yao’s garbled circuit (see [31]). Let \(\xi _{(k)}\) denote the communication cost of a non-secure protocol that finds the minimum of k numbers (each of at most \(\lambda \) bits) that are shared between two parties. Notice that a circuit representation of the single database k-means clustering protocol of [24] has size at least:
The first term is necessary, e.g. to add together all the data points in each cluster during each iteration of the Lloyd Step, and the second term is necessary, e.g. to find the minimum of k numbers for each data point (when deciding which cluster the data point belongs to). Notice that any implementation of a protocol that finds the minimum of k (\(\lambda \)-bit) numbers will cost at least \(O(\lambda k)\). Using these observations and the fact that applying Yao’s garbled circuit techniques to a circuit of size |C| has communication complexity \(O(\lambda |C|)\), we have that the communication complexity of a generic solution is at least:
Notice that the second term of our protocol’s communication complexity in (16) matches that of the generic solution in (18), while our first term enjoys asymptotic advantage of a factor of mk over the first term of (18). Furthermore, if d is sufficiently large so that \(d \ge \lambda \), then the first term of Eq. (18) dominates, in which case our protocol has overall asymptotic advantage over a generic solution by a factor of \(\min (mk, d/\lambda )\).
Notice that our protocol consists entirely of the subprotocols listed in Sect. 2.2 together with secure Distance, Division, and Random Value protocols, and utilization of a homomorphic encryption scheme (e.g. Paillier). With the exception of the places in our k-means clustering protocol that rely on the homomorphic encryption scheme, all of the subprotocols could be invoked by applying Yao’s garbled circuit technique to the relevant circuit that represents the subprotocol’s functionality. Thus, in comparing our solution to a generic solution that applies Yao to the circuit that represents the overall (insecure) k-means clustering protocol, it will be useful to separate out the communication costs of transferring ciphertexts (of the homomorphic encryption scheme) versus the rest of the communication cost. As can be seen from Table 2, there are \((nd + n + kmn + 2kmd)\) ciphertexts exchanged in our k-means clustering protocol. Thus, with the assumption that \(d \ge \lambda \) so that the first terms of (16) and (18) dominate, our solution will outperform a generic solution so long as:
Letting \(l = \min (km, d)\), (19) reduces to comparing the communication cost of sending l ciphertexts versus performing Yao on a circuit of size \(l^2\). Asymptotically, since both Yao and homomorphic encryption (e.g. Paillier) add a factor of \(O(\lambda )\) to communication, our protocol enjoys a factor of \(l = \min (km, d)\) (asymptotic) communication advantage over Yao. An advantage will also be observed in practice (accounting for the fact that the constants ignored in the \(O(\lambda )\) asymptotic costs of Yao may be smaller than those for the homomorphic encryption) so long as the extra cost of the encryption scheme is less than l times the extra cost of employing Yao.
As a final point, we note that there are \(O(\lambda \log ^c \lambda )\)-sized circuits that can perform integer reciprocation (see [26]). Assuming these methods can be translated to perform division as defined in Sect. 3, we could apply Yao’s garbled circuit techniques locally (i.e. not for the entire k-means protocol, but only for division), in which case the second term in (16) will dominate the third as long as \(n \ge d \log ^c \lambda \) (instead of \(n \ge d\lambda \)).
6 Conclusion and Future Work
As mentioned in Sect. 2.3, the proof of security of the two-party k-means clustering protocol presented above follows from the fact that each of the subprotocols are secure. The only exception to this is in Step C of the Lloyd Step, where Alice and Bob must decide if their protocol has reached the termination condition. Although Alice and Bob remain oblivious to any actual values at this stage, they will gain the information of exactly how many iterations were required in the Lloyd Step. There are various ways of defining the model to handle this potential information leak and thus maintain perfect privacy protection (see “Appendix A”).
The focus of this paper was on performing k-Means clustering when the underlying data is divided among two parties. An interesting direction for further research is the extension to generic multiparty computation for \(n > 2\) parties. There are a number of techniques that can be used to extend secure two-party protocols to the \(n>2\) setting; see, e.g. discussion and references in [8], which describes a multiparty coin-flipping protocol.
Another extension that would be interesting to consider is that of preserving privacy in a malicious adversary model. One approach would be to augment the 2-party protocols presented in this paper with standard techniques (e.g. [15, 17]) to boost security from the honest-but-curious to the malicious adversary setting.
Finally, the focus of the presented k-means protocol was to minimize the (asymptotic) communication cost. An interesting open problem is to consider other costs (e.g. round complexity, computation, etc.), as well as optimizing actual (not just asymptotic) communication. One aspect of this follow-up work would likely involve finding suitable instantiations of the subprotocols listed in Sect. 2.2.
Notes
Although designed to be a secure k-means clustering protocol, [18] falls short of full security due to leakage of intermediate results, e.g. for each iteration of the Lloyd Step, the number of data points in each cluster is revealed.
Implicit in running the same protocol with the roles reversed is reliance upon a Change Modulus Protocol, which will allow the parties to translate their shares of Q (mod N) to shares of Q (mod \(N^A\)) or (mod \(N^B\)), where \(N^A\) (resp. \(N^B\)) is the public-key modulus of the underlying encryption scheme of the subprotocols that are used, in which Alice (resp. Bob) knows the private key.
Since data is split between Alice and Bob, the exact value for \(\mathtt {M}\) is not known by either party. There are various options for how this value can be computed exactly or estimated, and choice of which approach is most appropriate will depend on the nature of the data (e.g. are there natural known domains/bounds for each attribute? Has data been pre-normalized, or is normalization required anyway to ensure some attributes do not dominate the clustering protocol?) as well as privacy considerations (e.g. are players allowed to know the domains (or bounds) of each individual attribute?). If approximation of \(\mathtt {M}\) is not possible, the two parties can engage in a secure multiparty computation protocol to compute \(\mathtt {M}\), e.g. by applying Yao to the circuit that computes \(\mathtt {M}^2\). Since such a circuit has size O(nd), this cost can be absorbed by the \(O(\lambda nd)\) cost of communicating the nd encryptions that occurs at the outset of our protocol.
References
D. Agrawal, C. Aggarwal, On the design and quantification of privacy preserving data mining algorithms, in Proc. of the 20th ACM SIGMOD-SIGACT-SIGART Symp. on Principles of Database Systems (2001), pp. 247–255
R. Agrawal, R. Srikant, Privacy-preserving data mining, in Proc. of the 2000 ACM SIGMOD Int. Conf. on Management of Data (2000), pp. 439–450
J. Algesheimer, J. Camenisch, V. Shoup, Efficient computation modulo a shared secret with application to the generation of shared safe-prime products, in CRYPTO’02, LNCS 2442 (2002), pp. 417–432
A. Blum, C. Dwork, F. McSherry, K. Nissim, Practical privacy: the SuLQ framework, in 24th Symposium on Principles of Database Systems (2005), pp. 128–138
P. Bradley, U. Fayyad, Refining initial points for \(K\)-means clustering, in Proc. of the 15th International Conference on Machine Learning (1998), pp. 91–99
R. Canetti, Security and composition of multiparty cryptographic protocols. J. Cryptol. 13(1), 143–202 (2000)
O. Catrina, A. Saxena, Secure computation with fixed-point numbers, in 14th Financial Cryptography and Data Security (2010), pp. 35–50
M. Ciampi, R. Ostrovsky, L. Siniscalchi, I. Visconti, Delayed-input non-malleable zero knowledge and multi-party coin tossing in four rounds, in 15th Theory of Cryptography (TCC) (2017), pp. 711–742
M. Dahl, C. Ning, T. Toft, On secure two-party integer division, in 16th Financial Cryptography and Data Security (2012), pp. 164–178
C. Dwork, F. McSherry, K. Nissim, A. Smith, Calibrating noise to sensitivity in private data analysis, in Proc. of the 3rd Theory of Cryptography Conference (2006), pp. 265–284
I. Dinur, K. Nissim, Revealing information while preserving privacy, in Proc. of the 22nd ACM SIGMOD-SIGACT-SIGART Symp. on Principles of Database Systems (2003), pp. 202–210
C. Dwork, K. Nissim, Privacy-preserving datamining on vertically partitioned databases, in CRYPTO’04, LNCS 3152 (2004), pp. 528–544
S. From, T. Jakobsen, Secure multi-party computation on integers. Master’s thesis, Univ. of Aarhus, Denmark, BRICS, Dep. of Computer Science (2006)
B. Goethals, S. Laur, H. Lipmaa, T. Mielikäinen, On private scalar product computation for privacy-preserving data mining, in ICISC, LNCS 3506 (2004), pp. 104–120
O. Goldreich, The Foundations of Cryptography, Basic Applications (Cambridge University Press, Cambridge, 2004)
J. Guajardo, B. Mennink, B. Schoenmakers, Modulo reduction for paillier encryptions and application to secure statistical analysis, in 14th Financial Cryptography and Data Security (2010), pp. 375–382
Y. Ishai, E. Kushilevitz, R. Ostrovsky, A. Sahai, Zero-knowledge from secure multiparty computation, in ACM Symposium on Theory of Computing (2007)
G. Jagannathan, R. Wright, Privacy-preserving distributed \(k\)-means clustering over arbitrarily partitioned data, in KDD’05 (2005), pp. 593–599
S. Jha, L. Kruger, P. McDaniel, Privacy Preserving Clustering, in 10th European Symp. on Research in Computer Security (2005), pp. 397–417
E. Kiltz, G. Leander, J. Malone-Lee, Secure computation of the mean and related statistics, in TCC’05, LNCS 3378 (2005), pp. 283–302
Y. Lindell, B. Pinkas, Privacy preserving data mining, in CRYPTO’00, LNCS 1880 (2000), pp. 36–54
M. Naor, B. Pinkas, Oblivious polynomial evaluation. SIAM J. Comput. 35(5), 1254–1281 (2006)
S. Oliveira, O.R. Zaïane, Privacy preserving clustering by data transformation, in Proc. 18th Brazilian Symposium on Databases (2003), pp. 304–318
R. Ostrovsky, Y. Rabani, L. Schulman, C. Swamy, The effectiveness of Lloyd-type methods for the \(k\)-means problem, in FOCS (2006)
P. Paillier, Public key cryptosystems based on composite degree residuosity classes, in Advances in Cryptology EURO-CRYPT’99 Proceedings, LNCS 1592 (1999), pp. 223–238
J. Reif, S. Tate, Optimal size integer division circuits. SIAM J. Comput. 912–924 (1990)
C. Su, F. Bao, J. Zhou, T. Takagi, K. Sakurai, Privacy-preserving two-party \(K\)-means clustering via secure approximation, in 21st Inter. Conf. on Advanced Information Networking and Applications Workshops (2007), pp. 385–391
J. Vaidya, C. Clifton, Privacy-preserving \(k\)-means clustering over vertically partitioned data, in Proc. 9th ACM SIGKDD Inter. Conf. on Knowledge Discovery and Data Mining (2003), pp. 206–215
T. Veugen, Encrypted integer division and secure comparison. Int. J. Appl. Cryptogr. 3(2), 166–180 (2014)
R. Wright, Z. Yang, Privacy-preserving Bayesian network structure computation on distributed heterogeneous data, in Proc. of the 10th ACM SIGKDD Inter. Conf. on Knowledge Discovery and Data Mining (2004), pp. 713–718
A.C.C. Yao, How to generate and exchange secrets, in Proc. of the 27th IEEE Symp. on Foundations of Computer Science (1986), pp. 162–167
H. Zhu, F. Bao, Oblivious scalar-product protocols, in 11th Australasian Conference on Information Security and Privacy, LNCS 4058 (2006), pp. 313–323
Communicated by Damgard.
A preliminary version of this paper appeared in the Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 486–497, 2007. Paul Bunn: Research partially done while at UCLA and visiting IPAM, and supported in part by NSF VIGRE Grant DMS-0502315, NSF Cybertrust Grant No. 0430254, and by DARPA and SPAWAR under Contract N66001-15-C-4065. Rafail Ostrovsky: Partially done while visiting IPAM. Author supported in part by NSF Grant 1619348, BSF Grant 2015782, DARPA SafeWare subcontract to Galois Inc., DARPA SPAWAR Contract N66001-15-1C-4065, JP Morgan Faculty Research Award, OKAWA Foundation Research Award, IBM Faculty Research Award, Xerox Faculty Research Award, B. John Garrick Foundation Award, Teradata Research Award, and Lockheed-Martin Corporation Research Award. The views expressed are those of the authors and do not reflect position of the Department of Defense or the U.S. Government.
Appendices
A Alternative Stopping Criterion for Lloyd Step
It is possible that the iterative nature of the Lloyd Step reveals undesirable information to the two parties: namely, the number of iterations that are performed. We suggest three different approaches to handle this privacy concern:
Approach 1: Reveal Number of Iterations If Alice and Bob agree beforehand that this leak of information will not compromise the privacy of their data, they can choose to run our algorithm (as is) and reveal the number of iterations.
Approach 2: Set the Number of Iterations to be Proportional to n In general, the more data points, the more iterations are necessary to reach the stopping condition. Based on n, one could therefore approximate the expected number of iterations that should be necessary, and fix our protocol to perform this many iterations.
Approach 3: Fix the Number of Iterations to be Constant In [24], it is argued that if the data points enjoy certain “nice” properties, then the number of iterations is extremely small (i.e. with high probability, only 2 iterations are necessary). Thus, fixing the number of iterations to be some (small) constant will (with high probability) not result in a premature termination of the Lloyd Step (i.e. the stopping condition will likely have been reached).
Each approach has its pros and cons. Approach 1 guarantees the accuracy of the final output (as the stopping criterion has been met) in the minimal number of steps, but leaks information about how many iterations were performed. Approach 2 succeeds with high probability, but may unnecessarily affect communication complexity if the fixed number of iterations is higher than necessary. Approach 3 keeps communication minimal, but runs a higher risk of losing accuracy of the final output (i.e. if the stopping criterion hasn’t been reached after the fixed number of iterations has been completed). In the body of our paper, we assumed Approach 1, although it is trivial to modify our algorithm to instead implement Approach 2 or 3.
B Reordering Protocol
As mentioned in Sect. 4.1, this protocol can be thought of as selecting balls from a bag, where each ball is marked with an index \(i \in [0..\lambda ]\). In particular, the bag will initially contain \(2^i\) balls marked with index i for each \(i \in [0..\lambda ]\). Then reordering \([0..\lambda ]\) is achieved by selecting a ball from the bag at random, and outputting the corresponding index as the first number in the reordered sequence. Next, all balls with that index are removed from the bag, and the procedure is repeated to generate the second number in the sequence, and so on until the bag is empty. We give below a formal treatment of this procedure.
Example Reordering Protocol Let \(\lambda \) be an arbitrary positive integer. This protocol will reorder the digits \([0..\lambda ]\), or more formally, it will generate a permutation \(\tau : [0..\lambda ] \rightarrow [0..\lambda ]\).
It will be more convenient (both in the protocol description as well as proofs) to describe \(\tau \) by its inverse permutation \(\sigma := \tau ^{-1}\). The following procedure generates the permutation \(\sigma \) one element at a time.
- 1.
For \(0 \le i \le \lambda \):
- (a)
If \(i = 0\), define \(U := 2^{\lambda + 1} - 1\). Otherwise, update U by subtracting \(2^{\sigma (i-1)}\). Equivalently, if \(U = u_{\lambda } \dots u_1 u_0\) denotes the binary representation of U, then U is initialized (for \(i=0\)) so that all binary digits are ‘1’, and then for each subsequent iteration, U is updated by flipping the \(\sigma (i-1)\)th binary digit from ‘1’ to ‘0’.
- (b)
For each \(0 \le j \le \lambda \) with \(u_j = 1\), define the value \(a_j\) to be the number represented by the j lowest-order binary digits of U:
$$\begin{aligned} a_j = u_{j-1} \dots u_1 u_0 \end{aligned}$$These values are used to partition the interval [1..U] into \(1 + \lambda -i\) intervals based on the binary representation of U. Namely, for each non-zero binary digit \(0 \le j \le \lambda \) of U, the corresponding interval is \([a_j + 1..a_j + 2^j]\).
- (c)
Choose a number \(r \leftarrow [1..U]\) uniformly at random, and set \(\sigma (i)\) equal to the interval index that r falls in. Formally, set \(\sigma (i) = j\) if \(r \in [a_j + 1..a_j + 2^j]\).
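For concreteness, the procedure above can be rendered as a short (insecure, non-cryptographic) reference implementation; the function name `sample_reordering` and the use of Python's `random` module are our own illustrative choices, not part of the protocol:

```python
import random

def sample_reordering(lam, rng=random):
    """Sample sigma, a reordering of [0..lam], by the interval method:
    at each step, index j is chosen from the remaining indices with
    probability 2^j / (sum of 2^k over remaining indices k)."""
    sigma = []
    U = (1 << (lam + 1)) - 1              # i = 0: all binary digits of U are '1'
    for _ in range(lam + 1):
        r = rng.randint(1, U)             # r drawn uniformly from [1..U]
        for j in range(lam + 1):
            if (U >> j) & 1:              # only non-zero digits u_j define intervals
                a_j = U & ((1 << j) - 1)  # value of the j lowest-order digits of U
                if a_j + 1 <= r <= a_j + (1 << j):
                    sigma.append(j)       # sigma(i) = j: r fell in j's interval
                    U -= 1 << j           # flip digit sigma(i) from '1' to '0'
                    break
    return sigma
```

Since the intervals \([a_j + 1..a_j + 2^j]\) tile [1..U], each remaining index j is selected with probability \(2^j/U\), matching the "balls in a bag" description.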
Claim The Example Reordering Protocol satisfies the Reordering Property in Definition 4.
Proof Sketch Viewing the Example Reordering Protocol as a formalization of the “selecting balls from a bag” description (we leave the reader to verify that the formalization matches the intuition), we compute the probability that an index j appears first among an arbitrary set of indices \({\mathcal {I}}\). Let \({\mathcal {I}} \subseteq [0..\lambda ]\) and let \(j \in {\mathcal {I}}\) be arbitrary. We utilize the Law of Total Probability to write:
$$\begin{aligned} \text {Pr}[j \text { appears first among } {\mathcal {I}}] = \sum _{i=0}^{1 + \lambda - |{\mathcal {I}}|} \text {Pr}\big [\sigma (i) = j \ \big | \ i \text { is the first iteration with } \sigma (i) \in {\mathcal {I}}\big ] \cdot \text {Pr}\big [i \text { is the first iteration with } \sigma (i) \in {\mathcal {I}}\big ] \end{aligned}$$(20)
where the above sum stops at \(i = 1 + \lambda - |{\mathcal {I}}|\) because if we reach this iteration without having output any indices in \({\mathcal {I}}\), then all that is left in the bag at iteration \(i = 1 + \lambda - |{\mathcal {I}}|\) are balls with an index in \({\mathcal {I}}\). Notice that at any iteration i, the first probability on the RHS of (20) is independent of i, namely it is \(2^j/\sum _{k\in {\mathcal {I}}} 2^k\), since we are conditioning on a ball from \({\mathcal {I}}\) being selected, and there are \(2^k\) balls in the bag for every index k. Since this quantity is independent of i, it can be removed from the sum:
$$\begin{aligned} \text {Pr}[j \text { appears first among } {\mathcal {I}}] = \frac{2^j}{\sum _{k \in {\mathcal {I}}} 2^k} \cdot \sum _{i=0}^{1 + \lambda - |{\mathcal {I}}|} \text {Pr}\big [i \text { is the first iteration with } \sigma (i) \in {\mathcal {I}}\big ] = \frac{2^j}{\sum _{k \in {\mathcal {I}}} 2^k} \end{aligned}$$(21)
where the last equality is due to the fact that we are summing over the complete probability space, i.e. the sum of the probabilities that a ball from \({\mathcal {I}}\) is first selected in iteration i, as i ranges over \([0..(1 + \lambda - |{\mathcal {I}}|)]\), equals one. Notice that (21) matches the Reordering Property (10), as desired. \(\square \)
C Implementations of Protocols from Sect. 2.2
We describe here possible implementations of each of the (non-referenced) protocols listed in Sect. 2.2. We provide these implementations solely for completeness, and make no claim concerning their efficiency in relation to other existing protocols that perform the same tasks. Since we need each of these protocols to be secure against an honest-but-curious adversary, we need the communication in each subprotocol to be in the generic form of Lemma 2.1 or to utilize other protocols that are already known to be secure; and indeed this will be the case in each of the following.
1.1 C.1 Description of the Find Minimum of 2 Numbers Protocol
Input Let \(X = x_{\lambda } \dots x_1 x_0\) and \(Y = y_{\lambda } \dots y_1 y_0\) denote the binary representations of two values \(X, Y < N\). For each \(0 \le i \le \lambda \), Alice and Bob share \(x_i\) and \(y_i\) (mod N).
Output Alice and Bob should share 0 if \(X < Y\) or share 1 if \(X > Y\). If \(X=Y\), they will share either 0 or 1 depending on an agreed upon distribution, e.g. they can choose to always output ‘0’ in the case of equality, or always output ‘1’, or output ‘0’ with some fixed probability r.
Cost Communication cost of this protocol is \(O(\lambda ^2)\), where \(\lambda = \lfloor \text{ log } N \rfloor \).
Protocol Description This protocol performs a standard minimum comparison on the binary representations of the two numbers. In general, note that the following formula returns the location of the minimum of (X, Y): it returns 0 if \(X < Y\), 1 if \(X>Y\), and a value \(r \in \{0,1\}\) if \(X=Y\):
$$\begin{aligned} r \cdot \prod _{j=0}^{\lambda } \big (1 - (x_j \oplus y_j)\big ) \ + \ \sum _{i=0}^{\lambda } \left[ x_i \cdot (1 - y_i) \cdot \prod _{j=i+1}^{\lambda } \big (1 - (x_j \oplus y_j)\big ) \right] \end{aligned}$$
where \(\oplus \) signifies XOR, and the other operations are performed in \({\mathbb {Z}}_N\). Shares of the output can then be obtained by running the SPP many times, utilizing the fact that:
$$\begin{aligned} x \oplus y = x + y - 2 \cdot x \cdot y \end{aligned}$$
where addition on the left hand side is in \({\mathbb {Z}}_2\) and on the right hand side is in \({\mathbb {Z}}_N\).
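As a sanity check on the comparison logic (and not on the secure protocol itself), the following sketch evaluates the comparison in the clear by scanning for the highest-order bit where the two numbers differ; the function names and the bit ordering (index 0 = lowest-order bit) are our own conventions:

```python
def min_location(xbits, ybits, r=0):
    """Return 0 if X < Y, 1 if X > Y, and r if X == Y, given the binary
    digits of X and Y (index 0 = lowest-order digit). Equivalent to the
    bitwise comparison formula: the highest differing bit decides."""
    lam = len(xbits) - 1
    result = r                            # returned when all digits agree (X == Y)
    for i in range(lam, -1, -1):
        if xbits[i] != ybits[i]:          # highest-order differing digit
            result = 1 if xbits[i] == 1 else 0
            break
    return result

def to_bits(v, lam):
    """Binary digits of v, lowest-order first, padded to lam + 1 digits."""
    return [(v >> i) & 1 for i in range(lam + 1)]
```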
1.2 C.2 Description of the To Binary Protocol
Input Alice and Bob share \(X = X^A + X^B (\hbox {mod}\ N)\).
Output If \(X = x_{\lambda } \dots x_1 x_0\) is the binary representation for X, then Alice and Bob share each binary digit \(x_i = x^A_i + x^B_i\) (mod N).
Cost Communication cost of this protocol is \(O(\lambda ^2)\), where \(\lambda = \lfloor \text{ log } N \rfloor \).
Protocol Description Notice that there are two possibilities for how to compute X from the shares \(X^A\) and \(X^B\) (with arithmetic in \({\mathbb {Z}}\)):
$$\begin{aligned} X = X^A + X^B \qquad \text {or} \qquad X = X^A + X^B - N \end{aligned}$$
This protocol will find (shares of) the binary representation of both \(X^A + X^B\) and \(X^A + X^B - N\), and then invoke the FM2NP (combined with SPP) to select the proper case. Details are as follows: first, Alice and Bob will obtain (shares of) the binary representation of \(X^A + X^B\). In particular, if \(X^A := a_\lambda \dots a_1 a_0\), \(X^B := b_\lambda \dots b_1 b_0\), then the following formula generates the binary representation \(X^A + X^B = x_\lambda \dots x_1 x_0\):
$$\begin{aligned} x_\lambda \dots x_1 x_0 = (a_\lambda \dots a_1 a_0) \oplus (b_\lambda \dots b_1 b_0) \end{aligned}$$(24)
where the \(\oplus \) symbol above means standard addition in \({\mathbb {Z}}_2\) (i.e. performed base 2, with carry-over). The computation in (24) can be done following the standard (insecure) manner of computation: start on the right and add the bits via XOR, keeping track of carry-over. This can be readily extended to a secure protocol by invoking SPP.
Next, shares of the binary representation of \(2^{\lambda + 1} - N + X^A + X^B\) can be computed similarly, since, e.g. if \(2^{\lambda + 1} - N = d_\lambda \dots d_1 d_0\) is the binary representation of \(2^{\lambda + 1} - N\) (which is publicly known by both parties, since N and \(\lambda \) are public), then the binary representation of \(2^{\lambda + 1} - N + X^A + X^B = y_{\lambda + 1} y_\lambda \dots y_1 y_0\) can be computed via:
$$\begin{aligned} y_{\lambda + 1} y_\lambda \dots y_1 y_0 = (d_\lambda \dots d_1 d_0) \oplus (x_\lambda \dots x_1 x_0) \end{aligned}$$
After computing (shares of) the binary representation for \(2^{\lambda + 1} - N + X^A + X^B\), one of the parties will subtract ‘1’ from their share of the leading bit \(y_{\lambda + 1}\), so that the parties will share the leading bit \(\widehat{y}_{\lambda + 1} =y_{\lambda + 1} - 1\). Notice that in the case that \(X^A + X^B \ge N\), this will result in the two parties sharing (the binary representation of) \(X^A + X^B - N = \widehat{y}_{\lambda + 1} y_\lambda \dots y_1 y_0\).
Finally, the two parties can run the FM2NP on, e.g. \((X^A, N-X^B)\) to determine if \(X^A + X^B < N\) (the first input \(X^A\) is the minimum if and only if \(X^A + X^B < N\)); notice we use the indicated inputs to FM2NP so that we can use the version where the parties share the binary representations of the inputs (see Section C.1); i.e. Alice knows the binary digits of \(X^A\) (Bob can set his shares to 0) and Bob knows the binary digits of \(N-X^B\) (Alice can set her shares to 0). Alice and Bob can then run SPP to compute their final shares: for each \(0 \le i \le \lambda \), the ith output digit is shared as
$$\begin{aligned} c \cdot x_i + (1 - c) \cdot y_i \end{aligned}$$
where \(c \in \{0, 1\}\) is ‘1’ iff \(X^A + X^B < N\).
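In the clear, the case analysis above amounts to the following (insecure) reference computation; the function name is our own illustrative choice:

```python
def to_binary(XA, XB, N, lam):
    """Recover the binary digits of X = XA + XB (mod N), lowest-order first,
    via the two-case analysis: either X = XA + XB, or X = XA + XB - N
    (arithmetic in Z)."""
    c = 1 if XA + XB < N else 0   # c = 1 iff X^A + X^B < N (the FM2NP output)
    X = XA + XB if c else XA + XB - N
    assert X == (XA + XB) % N     # both cases reduce to X mod N
    return [(X >> i) & 1 for i in range(lam + 1)]
```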
1.3 C.3 Description of the Find Minimum of 2 Numbers Protocol
This protocol is similar to the protocol described in Section C.1, except that here Alice and Bob share X and Y, as opposed to sharing each binary digit.
Input Alice and Bob share two values \(X = X^A + X^B \ (\hbox {mod}\ N)\) and \(Y = Y^A + Y^B \ (\hbox {mod}\ N)\).
Output Alice and Bob should share 0 if \(X < Y\) or share 1 if \(X > Y\). If \(X=Y\), they will share either 0 or 1 depending on an agreed upon distribution, e.g. they can choose to always output ‘0’ in the case of equality, or always output ‘1’, or output ‘0’ with some fixed probability r.
Cost Communication cost of this protocol is \(O(\lambda ^2)\), where \(\lambda = \lfloor \text{ log } N \rfloor \).
Protocol Description This protocol simply has Alice and Bob utilize the To Binary Protocol twice (once for X and once for Y), and then proceeds with the Find Minimum of 2 Numbers Protocol of Section C.1.
1.4 C.4 Description of the Nested Product Protocol
Input Alice and Bob share a set of values: \(\{X_i = X^A_i + X^B_i \ (\hbox {mod}\ N)\}_{i=1}^m\).
Output For each \(1 \le i \le m\), Alice and Bob share: \(Y_i := \prod _{j=1}^i X_j\).
Cost Cost of \((m-1)\) calls to SPP applied to a two-term function (\(O(m \cdot \lambda )\), where \(\lambda = \log _2 N\)).
Protocol Description Notice Alice and Bob already share \(Y_1\), which equals \(X_1\). We describe how (shares of) term \(Y_{i+1}\) can be obtained from (shares of) \(Y_i\). Namely, let \(Y_i =Y^A_i + Y^B_i \ (\hbox {mod}\ N)\). Then Alice and Bob can compute (shares of) \(Y_{i+1} = Y_i \cdot X_{i+1}\) using the SPP applied to the degree-two function \((Y^A_i + Y^B_i)\cdot (X^A_{i+1}+X^B_{i+1}) =(Y^A_i \cdot X^A_{i+1}) + (Y^A_i \cdot X^B_{i+1}) + (Y^B_i \cdot X^A_{i+1}) + (Y^B_i \cdot X^B_{i+1})\) (all arithmetic modulo N).
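The recursion can be sketched as follows, with an insecure stand-in `spp` playing the role of the Scalar Product Protocol (it expands the degree-two function above in the clear and re-shares the result at random); all names are illustrative:

```python
import random

def spp(yA, yB, xA, xB, N, rng=random):
    """Insecure stand-in for SPP: compute (y^A + y^B)(x^A + x^B) mod N
    and return fresh random shares of the product."""
    prod = ((yA + yB) * (xA + xB)) % N
    rA = rng.randrange(N)
    return rA, (prod - rA) % N

def nested_products(sharesA, sharesB, N):
    """Given shares {X_i = X_i^A + X_i^B (mod N)}, return shares of the
    prefix products Y_i = X_1 * X_2 * ... * X_i (mod N)."""
    YA, YB = [sharesA[0]], [sharesB[0]]   # Y_1 = X_1 is already shared
    for i in range(1, len(sharesA)):
        a, b = spp(YA[-1], YB[-1], sharesA[i], sharesB[i], N)
        YA.append(a)
        YB.append(b)
    return YA, YB
```

As in the protocol, each step consumes one call to SPP, for \(m-1\) calls in total.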
1.5 C.5 Description of the Find Minimum of k Numbers Protocol
Input Alice and Bob share k values \(\{X_i =X^A_i + X^B_i \ (\hbox {mod}\ N)\}\).
Output Viewing the k values as a vector in \({\mathbb {Z}}_N^k\), Alice and Bob share the characteristic vector \(\mathbf {e}_i \in {\mathbb {Z}}_2^k\) with the ‘1’ in the ith position, where i is the location of \(\min (X_1, \dots , X_k)\).
Cost Communication cost of this protocol is \((k-1)\) times the cost of FM2NP plus \(O(k\lambda ^2)\).
Protocol Description This protocol can be obtained as a straightforward extension of (a variant of the) FM2NP. In particular, in addition to having the FM2NP output (shares of) the location of the minimum, have it also output (shares of) the value of the minimum. Notice that the value of the minimum can be obtained from the location by running the SPP, since:
$$\begin{aligned} \min (x, y) = (1 - L) \cdot x + L \cdot y \end{aligned}$$(27)
where \(L \in \{0, 1\}\) is the location of the minimum of (x, y). Since the function in (27) has a constant number of terms, the cost of employing SPP is \(O(\lambda )\), which can be absorbed in the \(O(\lambda ^2)\) cost of running the FM2NP.
Suppose Alice and Bob have k inputs (WLOG, assume \(k = 2^m\) is a power of 2). They pair off the k inputs into k/2 pairs, and run this alternate FM2NP k/2 times, obtaining (shares of) the location and minimum value of each pair: \((\mathbf {e}_{l_j}, z_j)\) for each \(1 \le j \le k/2\), where \(l_j \in \{0, 1\}\) denotes the location of the minimum of the jth pair, \(\mathbf {e}_{l_j}\) denotes the characteristic vector (in \({\mathbb {Z}}_2^2\)) with a ‘1’ in the \(l_j\)th position, and \(z_j\) denotes the minimum value within the jth pair of values. This procedure is then repeated by pairing up the k/2 minimums \(\{z_j\}\) into k/4 pairs and running FM2NP k/4 times, and so on. In the end, FM2NP will need to be run a total of \(k-1\) times.
Notice that (shares of) the final minimum value is a direct output of these \(k-1\) calls to FM2NP; if the minimum’s location is required (as per specification of the Output of the FMkNP above), then this location \(\mathbf {e}_i \in {\mathbb {Z}}_2^k\) can be obtained as follows. First, the minimum value is compared to each input, such that for each comparison, the parties will share ‘1’ if that value matches the minimum, and share ‘0’ otherwise. (This requires k calls to a secure Check Equality protocol, which compares two numbers (each shared between Alice and Bob) and returns a ‘1’ if the two numbers are equal, and ‘0’ otherwise. Such a protocol could be implemented with \(O(\lambda ^2)\) communication cost by mimicking the circuit representation for equality on two (binary) numbers; namely, invoking the To Binary Protocol to obtain (shares of) the binary representations of the two numbers, and then applying the SPP and NPP to ‘AND’ together the equality check on the binary digits.) The result of these k calls to a secure equality protocol is almost enough to yield \(\mathbf {e}_i\), except that multiple inputs may equal the minimum value. To control for this case, we can utilize the Nested Product Protocol (on k terms) to arrive at the final \(\mathbf {e}_i\). Namely, if \(u_j\) denotes the output of the equality protocol that compares the minimum with the jth input, then the jth coordinate of \(\mathbf {e}_i\) is given by:
$$\begin{aligned} \big (\mathbf {e}_i\big )_j = u_j \cdot \prod _{l=1}^{j-1} (1 - u_l) \end{aligned}$$
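As a plain (insecure) illustration of this last step, the following helper, whose name is our own, zeroes out all but the first ‘1’ among the equality bits:

```python
def first_match_vector(u):
    """Given equality bits u_1..u_k, keep only the first '1', computing
    u_j * prod_{l < j} (1 - u_l) for each coordinate j."""
    e, blocked = [], 0
    for uj in u:
        e.append(uj * (1 - blocked))  # 1 only if uj = 1 and no earlier match
        blocked = blocked | uj        # once a match is seen, all later terms are 0
    return e
```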
1.6 C.6 Description of the Change Modulus Protocol
Setup Let \(N_1, N_2 \in {\mathbb {Z}}\) be two positive integers, and let \(Q < \min (N_1, N_2)\) be an arbitrary non-negative integer smaller than both \(N_1\) and \(N_2\).
Input Alice and Bob share \(Q = Q^A + Q^B \ (\hbox {mod}\ N_1)\) modulo the first value \(N_1\).
Output Alice and Bob share \(Q= \widehat{Q}^A +\widehat{Q}^B \ (\hbox {mod}\ N_2)\) modulo the second value \(N_2\).
Cost Cost of FM2NP (\(O(\lambda ^2)\), where \(\lambda = \lfloor \log _2 N_1 \rfloor \)).
Protocol Description There are two cases for how Q relates to \(Q^A\) and \(Q^B\) (in terms of ordinary arithmetic in \({\mathbb {Z}}\)):
$$\begin{aligned} Q = Q^A + Q^B \qquad \text {or} \qquad Q = Q^A + Q^B - N_1 \end{aligned}$$
Alice and Bob compute their new shares of Q modulo \(N_2\) as:
$$\begin{aligned} \widehat{Q}^A = Q^A - b^A \cdot N_1 \ (\hbox {mod}\ N_2), \qquad \widehat{Q}^B = Q^B - b^B \cdot N_1 \ (\hbox {mod}\ N_2) \end{aligned}$$(28)
where \(b = b^A + b^B\) equals 0 if \(Q = Q^A + Q^B\) (the first case above), and equals 1 if \(Q = Q^A + Q^B - N_1\) (the second case).
Notice that (shares of) b can be computed via the FM2NP (on inputs \(N_1\) and Q), and then (28) can be computed locally by each party.
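In the clear, the conversion amounts to the following sketch (names illustrative; in the real protocol the case bit b is itself secret-shared, and here one party simply absorbs the whole correction):

```python
def change_modulus(QA, QB, N1, N2):
    """Convert shares of Q modulo N1 into shares of Q modulo N2,
    assuming Q < min(N1, N2)."""
    b = 0 if QA + QB < N1 else 1   # which of the two cases holds (in Z)
    hatA = (QA - b * N1) % N2      # subtract the correction b * N1
    hatB = QB % N2
    return hatA, hatB
```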
1.7 C.7 Description of the Addition Modulo Unknown Value Protocol
Setup Let \(Q < N\) be two positive integers.
Input Alice and Bob share \(Q = Q^A + Q^B \ (\hbox {mod} \ N)\) and also share \(X = X^A + X^B \ (\hbox {mod}\ N)\) and \(Y = Y^A + Y^B \ (\hbox {mod}\ N)\) with \(X, Y < Q\).
Output Alice and Bob share (modulo N) the sum \(S := X + Y \ (\text {mod } Q)\); i.e. \(S = S^A + S^B \ (\hbox {mod}\ N)\).
Cost Cost of FM2NP (\(O(\lambda ^2)\)) plus SPP applied to a two-term function (\(O(\lambda )\)), where \(\lambda = \lfloor \log _2 N \rfloor \).
Protocol Description Since \(X,Y <Q\), we can write (with arithmetic in \({\mathbb {Z}}\)):
$$\begin{aligned} X + Y \ (\text {mod } Q) = X + Y - b \cdot Q, \qquad \text {for some } b \in \{0, 1\} \end{aligned}$$
Alice and Bob compute shares of \(X+Y \ (\text{ mod } Q)\) as:
$$\begin{aligned} S^A = X^A + Y^A - C^A \ (\hbox {mod}\ N), \qquad S^B = X^B + Y^B - C^B \ (\hbox {mod}\ N) \end{aligned}$$(29)
where \(C = C^A+C^B\ (\text{ mod } N) := b \cdot Q\), and \(b = 0\) if \(X + Y < Q\) (with arithmetic in \({\mathbb {Z}}\)), and \(b = 1\) otherwise.
Notice that (shares of) b can be computed via the FM2NP (on inputs \(X+Y\) and Q), and (shares of) C can be obtained via the SPP applied to the degree-two function \(b\cdot Q = (b^A + b^B)\cdot (Q^A+Q^B) = (b^A \cdot Q^A) + (b^A \cdot Q^B) + (b^B \cdot Q^A) + (b^B \cdot Q^B)\) (all arithmetic modulo N). From shares of C, (29) can be computed locally by each party.
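Stripped of the cryptography, the computation is the following (insecure) sketch; the function name is ours, and the random splitting of C stands in for the SPP re-sharing:

```python
import random

def add_mod_unknown(XA, XB, YA, YB, QA, QB, N, rng=random):
    """Return shares (mod N) of (X + Y) mod Q, where X, Y < Q and
    Q = QA + QB (mod N) is itself secret-shared."""
    Q = (QA + QB) % N
    X, Y = (XA + XB) % N, (YA + YB) % N
    b = 0 if X + Y < Q else 1        # since X, Y < Q, the overflow bit is 0 or 1
    CA = rng.randrange(N)            # random re-sharing of C = b * Q
    CB = (b * Q - CA) % N
    return (XA + YA - CA) % N, (XB + YB - CB) % N
```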
1.8 C.8 Compute Modulus Mask Protocol
Input Alice and Bob share \(D = D^A + D^B \ (\hbox {mod}\ N)\).
Output Alice and Bob share \(\mathbf {v} =(v_{\lambda }, v_{\lambda - 1}, \dots , v_1, v_0) \in {\mathbb {Z}}_2^{\lambda +1}\), where \(\lambda = \lfloor \log _2 N \rfloor \) and the ith coordinate of \(\mathbf {v}\) obeys: \(v_i = 1\) if \(2^i D < N\) (with arithmetic in \({\mathbb {Z}}\)), and \(v_i = 0\) otherwise.
Cost Communication cost of this protocol is \(\lambda \) calls to FM2NP and \((\lambda - 1)\) calls to SPP.
Protocol Description
- 1.
Define \(O_0 = 1\). Repeat the following for \(1 \le i \le \lambda \): Alice and Bob run the FM2NP on \((N-2^{i-1}D, \ N-2^iD-1)\); let \(O_i \in \{0, 1\}\) denote the output. Note that this protocol will use the version of FM2NP that always outputs ‘0’ in the case of equality.
- 2.
Let \(\mathbf {O} = (O_{\lambda }, O_{\lambda -1}, \dots , O_1, O_0)\) denote the vector formed by the \(\lambda \) calls to FM2NP (plus the default \(O_0 = 1\) coordinate), and notice that \(\mathbf {O} = (?, \dots ?, 0, 1, \dots , 1)\), where the rightmost coordinates are (at least one) 1’s, preceded by (at least one) 0, which is preceded by the leading coordinates of \(\mathbf {O}\), which are unimportant. Note that the first ‘0’ (reading from right-to-left) occurs in the ith coordinate iff i is the first time \(2^iD \ge N\). Alice and Bob can modify this to share \(\mathbf {v}= (0, \dots , 0, 1, \dots 1)\) by running the SPP \((\lambda -1)\) times: namely, for \(2 \le i \le \lambda \), compute \({v}_{i} = {v}_{i-1} \cdot {O}_{i}\) (there is no need to compute \(v_0\) or \(v_1\), which can be directly set to equal \(O_0\) and \(O_1\), respectively).
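Per the characterization in Step 2 (the first ‘0’ sits at the first i with \(2^iD \ge N\), and the prefix products zero out everything above it), the mask this protocol computes is, in the clear, simply:

```python
def modulus_mask(D, N, lam):
    """Insecure reference for the Compute Modulus Mask output:
    v_i = 1 iff 2^i * D < N, so v has the shape (0, ..., 0, 1, ..., 1)
    reading from the high-order coordinate down."""
    return [1 if (D << i) < N else 0 for i in range(lam + 1)]
```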
1.9 C.9 Compute \(\mathbf {e}_i\) Protocol
Setup Let \(E: {\mathbb {G}}_1 \rightarrow {\mathbb {G}}_2\) be a public-key homomorphic encryption scheme admitting scalar multiplication (e.g. Paillier) for which Alice has the decryption key (and Bob does not). Let \(N = |{\mathbb {G}}_1|\) denote the size of the plaintext group, and let \(\lambda = \lfloor \log _2 N \rfloor \) be the security parameter.
Input Alice and Bob share \(Q = Q^A + Q^B (\hbox {mod}\ N)\), and writing its binary representation as \(Q=q_\lambda \dots q_1 q_0\), then for each \(0 \le i \le \lambda \), they also share \(q_i = q^A_i + q^B_i\) (mod N). Bob also has run a Reordering Protocol to get a reordering of the integers \([0..\lambda ]\), which is denoted \(\{\sigma (0), \ \sigma (1), \ \dots , \ \sigma (\lambda )\}\).
Output Alice and Bob share the unit vector \(\mathbf {e} = (0, \dots , 1, \dots , 0) \in {\mathbb {Z}}_2^{1 + \lambda }\), where the unique ‘1’ appears in coordinate i with probability:
$$\begin{aligned} \text {Pr}[\mathbf {e} = \mathbf {e}_i] = \frac{q_i \cdot 2^i}{\sum _{k=0}^{\lambda } q_k \cdot 2^k} \end{aligned}$$
Cost Communication cost of this protocol is \(O(\lambda ^2)\): there are \(2\lambda + 2\) ciphertexts (each of size \(O(\lambda )\)), an invocation of an NPP on \(O(\lambda )\) terms, and \(O(\lambda )\) invocations of an SPP.
Protocol Description Let \(\mathbf {e}_j = (0, \dots , 0, 1, 0, \dots , 0)\) denote the characteristic vector with a ‘1’ in the jth coordinate.
- 1.
Alice sends Bob \(\{E(q^A_0), \ E(q^A_1), \ \dots , \ E(q^A_\lambda )\}\).
- 2.
Bob picks \(1 + \lambda \) elements \(\{ Z_0, \ Z_1, \ \dots , \ Z_\lambda \} \leftarrow _R {\mathbb {G}}_1\) uniformly at random and (utilizing the homomorphic properties of E) returns to Alice \(\{E(q^A_{\sigma (0)} - Z_0), \ E(q^A_{\sigma (1)} - Z_1), \ \dots , \ E(q^A_{\sigma (\lambda )} - Z_{\lambda })\}\), who decrypts each term. Notice that Bob has rearranged the order in which he returns things to Alice based on his choice of \(\sigma \), but Alice doesn’t know the new order because Bob has blinded each term with randomness \(Z_i\). Thus, for each \(0 \le i \le \lambda \), Alice and Bob now share \(q_{\sigma (i)} = ((q^A_{\sigma (i)} - Z_i) + (q^B_{\sigma (i)} + Z_i)) \ (\hbox {mod}\ N)\), with the property that neither party knows any of the values \(\{q_{\sigma (i)} \}\), and that Alice knows nothing about \(\sigma \).
- 3.
Alice and Bob compute and output (shares of) the quantity:
$$\begin{aligned} \ q_{\sigma (0)} \cdot \mathbf {e}_{\sigma (0)} + \sum _{j=1}^{\lambda } \left[ q_{\sigma (j)} \cdot \left( \prod _{k=0}^{j-1} (1 - q_{\sigma (k)}) \right) \cdot \mathbf {e}_{\sigma (j)} \right] \end{aligned}$$(31)by utilizing a secure Nested Product Protocol (NPP) and a Scalar Product Protocol (SPP). Namely, the terms \(\prod _k (1 - q_{\sigma (k)})\) are computed and shared via NPP, the terms \(\{q_{\sigma (j)}\}\) were shared in Step (2), and Bob can construct the \(\{\mathbf {e}_{\sigma (j)}\}\) terms locally.
Proof of Correctness and Security Correctness follows from the fact that the output value in (31) is the same formula as appeared in the (Insecure) RVP Step 3 (see (11)), and Correctness of the (Insecure) RVP Step 3 protocol was proven in Sect. 4.1.
Security follows from the fact that all communication between Alice and Bob can be classified as one of:
- A.
Encryptions sent from Alice to Bob in Step 1.
- B.
The randomized ciphertexts sent from Bob to Alice in Step 2.
- C.
A secure subprotocol (NPP and SPP) in Step 3.
Security of communication in (A) follows from semantic security of the encryption scheme E. Security of communication in (B) follows from the homomorphic property of E and the fact that Bob chose uniform randomness to blind the returned values to Alice. Security of communication in (C) follows from the security of the subprotocols.
\(\square \)
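Ignoring the cryptography, the value the two parties jointly compute in (31) is just the characteristic vector of the first index (in \(\sigma \)-order) whose binary digit q equals 1; a plain-Python rendering (names ours):

```python
def compute_e(q_bits, sigma):
    """Return e = e_{sigma(j)} for the smallest j with q_{sigma(j)} = 1,
    matching the selection formula (31); the all-zero vector if no digit
    of q is 1."""
    e = [0] * len(q_bits)
    for j in sigma:          # walk the digits in sigma-order
        if q_bits[j] == 1:
            e[j] = 1         # first sigma-ordered '1' digit wins
            break
    return e
```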
1.10 C.10 Choose \({\varvec{\mu }}_1\) Protocol
Setup Let \(E: {\mathbb {G}}_1 \rightarrow {\mathbb {G}}_2\) be a public-key homomorphic encryption scheme admitting scalar multiplication (e.g. Paillier) for which Alice has the decryption key (and Bob does not). Let \(N = |{\mathbb {G}}_1|\) denote the size of the plaintext group, and let \(\lambda = \lfloor \log _2 N \rfloor \) be the security parameter.
Input See the setup/notation from Sect. 5.3 where this subprotocol is called. Namely, Alice and Bob have run the RVP, which has returned to them shares of a random \(R \in {\mathbb {Z}}_{2n \bar{C}}\). They also share \(\bar{C}\) and for each \(1 \le i \le n\), they share \(\widetilde{C}_i\).
Output Alice and Bob share \({\varvec{\mu }}_1 = \mathbf {D}_i\), where \(\mathbf {D}_i\) has been chosen with the correct probability.
Cost Communication cost of this protocol is a single call to FMnNP plus \(n+d\) ciphertexts of size \(O(\lambda )\).
Protocol Description
- 1.
Alice creates the vector \(\mathbf {Z}^A \in {\mathbb {Z}}^n_N\), defined as follows:
$$\begin{aligned} \mathbf {Z}^A&= (\bar{C}^{A} + \widetilde{C}^{A,0}_1 -R^A, \ \ 2\bar{C}^{A} + \widetilde{C}^{A,0}_1 + \widetilde{C}^{A,0}_2 -R^A, \ \dots , \\&\qquad n \bar{C}^{A} + \widetilde{C}^{A,0}_1 + \dots + \widetilde{C}^{A,0}_n -R^A). \end{aligned}$$Notice that the ith coordinate of \(\mathbf {Z}^A\) is: \(i \bar{C}^{A} -R^A + \sum _{j=1}^i \widetilde{C}^{A,0}_j\).
Bob does similarly to obtain \(\mathbf {Z}^B\).
- 2.
Alice and Bob run the FMnNP on the vector \(\mathbf {Z} \in {\mathbb {Z}}^n_N\), which will return the (shares of) \(\mathbf {L} = (L_1, L_2, \dots , L_n) \in \{0, 1\}^n\), with the unique ‘1’ in coordinate i where i is the first time that \(R \le \sum _{j=1}^i (\bar{C} + \widetilde{C}^0_j)\). Alice encrypts her share \(\mathbf {L}^A\) and sends this to Bob.
- 3.
Bob can now compute (an encryption of) the scalar product:
$$\begin{aligned} {\varvec{\mu }}_1&= \mathbf {L} \cdot (\mathbf {D}_1, \dots , \mathbf {D}_n) \nonumber \\&= \sum _{i=1}^n L_i \cdot \mathbf {D}_i \end{aligned}$$(32)More precisely, for each \(1 \le i \le n\), Bob will have to compute d products to evaluate \(L_i \cdot \mathbf {D}_i\), one for each dimension. After randomizing each product, he returns these values to Alice, so that they now share \({\varvec{\mu }}_1\).
Note that (32) could have been calculated by having Alice and Bob invoke the SPP with their shares of \(\mathbf {L}\) and each \(\mathbf {D}_i\). However, this would cost O(nd) invocations of a SPP, for an overall communication cost of \(O(\lambda nd)\), which exceeds the cost of the above-described protocol (\(O(\lambda ^2n + \lambda (n+d))\)) so long as \(\lambda < d\).
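Stripped of the sharing and encryption, Steps 1–2 select the first index i at which the running total \(\sum _{j=1}^i (\bar{C} + \widetilde{C}^0_j)\) reaches R, and Step 3 outputs the corresponding data point; a plain sketch, with names of our choosing:

```python
def choose_mu1(R, Cbar, Ctilde0, D):
    """Return D_i for the first i with R <= sum_{j<=i} (Cbar + Ctilde0_j),
    mirroring the comparison vector Z and the FMnNP selection."""
    total = 0
    for c0, Di in zip(Ctilde0, D):
        total += Cbar + c0
        if R <= total:
            return Di
    return D[-1]  # unreachable when R is sampled from [0..2n*Cbar); kept for safety
```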
Bunn, P., Ostrovsky, R. Oblivious Sampling with Applications to Two-Party k-Means Clustering. J Cryptol 33, 1362–1403 (2020). https://doi.org/10.1007/s00145-020-09349-w