
Topology-Hiding Communication from Minimal Assumptions

Journal of Cryptology

Abstract

Topology-hiding broadcast (THB) enables parties communicating over an incomplete network to broadcast messages while hiding the topology from within a given class of graphs. THB is a central tool underlying general topology-hiding secure computation (THC) (Moran et al. TCC’15). Although broadcast is a privacy-free task, it was recently shown that THB for certain graph classes necessitates computational assumptions, even in the semi-honest setting, and even given a single corrupted party. In this work, we investigate the minimal assumptions required for topology-hiding communication: both Broadcast or Anonymous Broadcast (where the broadcaster’s identity is hidden). We develop new techniques that yield a variety of necessary and sufficient conditions for the feasibility of THB/THAB in different cryptographic settings: information theoretic, given existence of key agreement, and given existence of oblivious transfer. Our results show that feasibility can depend on various properties of the graph class, such as connectivity, and highlight the role of different properties of topology when kept hidden, including direction, distance, and/or distance-of-neighbors to the broadcaster. An interesting corollary of our results is a dichotomy for THC with a public number of at least three parties, secure against one corruption: information-theoretic feasibility if all graphs are 2-connected; necessity and sufficiency of key agreement otherwise.

Notes

  1. Such protocols exist in the honest-majority setting assuming key agreement, and thus under this assumption, THB implies THC. In the information-theoretic setting, THC can be strictly stronger, as we will see.

  2. That is, the Quadratic Residuosity assumption, the Decisional Diffie–Hellman assumption, and the Learning With Errors assumption, respectively.

  3. The lower bound of [3] holds for 4-party 2-secure THB with respect to a small class of 4-node graphs, namely a square, and a square with any of its edges removed.

  4. To see that \(\textsf{THC}\Rightarrow \textsf{THAB}\), observe that semi-honest anonymous broadcast can be realized using a secure sum, where the broadcaster inputs the message to be broadcast anonymously, and all other parties input 0.

  5. LaVigne et al. [27] recently studied THC in a non-synchronous setting, demonstrating many barriers.

  6. A graph is k-connected if and only if every pair of nodes is connected by k vertex-disjoint paths.

  7. If the class of graphs contains a 2-path, then oblivious transfer is necessary for secure computation [24].

  8. Note that OT is strictly stronger than KA in terms of black-box reductions, since OT implies KA in a black-box way, but the converse does not hold [18].

  9. If the neighbor sends the message in the first round in which the party learns it, then its distance is one less than the party’s distance. If the neighbor sends only after the party has learned it, then its distance equals the party’s distance. If the neighbor does not send at all, then its distance is one more than the party’s distance.

  10. An infinitely often key agreement guarantees correctness and security for infinitely many \(\lambda \in {\mathbb {N}}\) (where \(\lambda \) stands for the security parameter).

  11. In particular, the “left/right” orientation can be deduced locally from each node’s neighbor set.

  12. An infinitely often OT protocol guarantees correctness and security for infinitely many \(\lambda \in {\mathbb {N}}\) (where \(\lambda \) stands for the security parameter).

  13. The result of [29] was limited to graphs of small diameter to allow an arbitrary number of corruptions. With a single corruption, the same construction can support all graphs.

  14. THB exists trivially for any graph class in which each party’s neighborhood uniquely identifies the graph topology.

  15. In fact, for this step we will only need for the subclass .

  16. The standard notation in the literature is st-orientation; to avoid confusion with the notation t that stands for the corruption threshold, we use \(\sigma \tau \)-orientation instead.

  17. In fact, the upper bound holds for a large body of graph classes, where only distance needs to be hidden.

  18. Our techniques can be extended to show that an \(\varepsilon \)-statistically 1-secure THB for in \(c\cdot \log \varepsilon ^{-1}\) rounds, where c is a constant, requires io-KA, but the gap between this and the upper bound remains exponential.

  19. Note that because the advantage of D is bounded from below by \(\lambda ^{-c_D}\), for each \(\lambda \) we can approximate \(p_D\) in polynomial time up to a \(\lambda ^{-c_D}/2\) factor.

  20. Note that our protocol is only infinitely often correlated ( \(A=B\)); hence, the security property will hold for all but finitely many \(\lambda \) for any efficient distinguisher.

  21. Recall that Hoeffding’s bound says that given \(X_1,\ldots ,X_n\) i.i.d. indicator random variables, for any \(\delta \ge 0\), \({\textrm{Pr}}\left[ |\sum X_i - {\mathbb {E}}[\sum X_i]| \ge \delta \right] \le 2\exp (-2\delta ^2/n)\).

  22. Lemma 2.8 implies that each \(X_1,\ldots ,X_{\lambda ^{2c_D}}\) (and similarly, \(Y_1,\ldots ,Y_{\lambda ^{2c_D}}\)) is indistinguishable from \(Z_1,\ldots ,Z_{\lambda ^{2c_D}}\) where each \(Z_i\sim ({{VIEW}}_3^{12345}{{VIEW}}_3^{12345})\).

  23. The claim itself cannot be invoked as the graphs are different, but the same proof works verbatim.

  24. Recall that although KA can be constructed from OT in a black-box way, OT cannot be constructed from KA in a black-box way [18].

  25. Note that the cyclic labeling implicitly defines an orientation of the path, allowing us to talk about “left” and “right”.

  26. To account for the case \(i\notin \textrm{Im}(\psi _{H,\tau })\), which can happen if H has fewer than n nodes, we set \({v_{H,i}}=0\), \(\mathcal {N}_{H}(v_{H,i}) =\{0\}\), and \(m_{{v_{H,i}}}=0\).

  27. Reliable message transmission refers to the concept of transmitting messages in an incomplete network such that the receiver is guaranteed to receive the message [14, 15]. In our setting, the challenge is to realize reliable message transmission in a topology-hiding manner. Note that as described in the following, reliable message transmission in particular implies broadcast in the semi-honest setting.

  28. The channel is a dead-end because used a KA protocol to set up a shared key with who they thought was , simulated by , but obliviously elected not to learn this key. Therefore, whatever information sends over this channel is lost.

  29. Recall that we are crucially dealing with semi-honest adversaries in this paper.

  30. See Ball et al. [3] for a more in-depth definition of leakage in the context of topology-hiding computation.

  31. We actually show a slightly restricted case for simplicity, but the result can easily be extended.

  32. In other words, \({\mathcal {G}} \) is a set of trees on some potential vertex set V such that for every \(v\in V\), \(d_G(v,1)\le d\) if \(v\in V(G)\) (for all \(G\in {\mathcal {G}} \)) and if \(\mathcal {N}_{G}(v) =\mathcal {N}_{H}(v) \) for \(H,G\in {\mathcal {G}} \), then there exists a (unique) \(u\in \mathcal {N}_{G}(v) =\mathcal {N}_{H}(v) \) that disconnects 1 and v in both G and H.

References

  1. A. Akavia and T. Moran. Topology-hiding computation beyond logarithmic diameter. In 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), part III (2017), pp. 609–637

  2. A. Akavia, R. LaVigne, and T. Moran. Topology-hiding computation on all graphs. In 37th Annual International Cryptology Conference (CRYPTO), part I (2017), pp. 447–467

  3. M. Ball, E. Boyle, T. Malkin, and T. Moran. Exploring the boundaries of topology-hiding computation. In 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), part III (2018), pp. 294–325

  4. M. Ball, E. Boyle, R. Cohen, T. Malkin, and T. Moran. Is information-theoretic topology-hiding computation possible? In Proceedings of the 17th Theory of Cryptography Conference(TCC), part I (2019), pp. 502–530

  5. M. Ball, E. Boyle, R. Cohen, L. Kohl, T. Malkin, P. Meyer, and T. Moran. Topology-hiding communication from minimal assumptions. In Proceedings of the 18th Theory of Cryptography Conference(TCC), part II (2020), pp. 473–501

  6. D. Beaver. Foundations of secure interactive computing. In 10th Annual International Cryptology Conference (CRYPTO) (1991), pp. 377–391

  7. D. Beaver. Precomputing oblivious transfer. In 14th Annual International Cryptology Conference (CRYPTO) (1995), pp. 97–109

  8. M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC) (1988), pp. 1–10

  9. R. Canetti. Security and composition of multiparty cryptographic protocols. Journal of Cryptology, 13(1):143–202, 2000.

  10. R. Canetti. Universally Composable Security: A New Paradigm for Cryptographic Protocols. In Proceedings of the 42nd Annual Symposium on Foundations of Computer Science (FOCS) (2001), pp. 136–145

  11. D. Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2):84–88, 1981.

  12. D. Chaum. The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of Cryptology, 1(1):65–75, 1988.

  13. D. Chaum, C. Crépeau, and I. Damgård. Multiparty unconditionally secure protocols (extended abstract). In Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC) (1988), pp. 11–19

  14. D. Dolev. The Byzantine generals strike again. Journal of Algorithms, 3(1):14–30, 1982.

  15. D. Dolev, C. Dwork, O. Waarts, and M. Yung. Perfectly secure message transmission. Journal of the ACM, 40(1):17–47, 1993.

  16. S. Even and R. E. Tarjan. Computing an st-numbering. Theoretical Computer Science, 2(3):339–344, 1976.

  17. S. Even and R. E. Tarjan. Corrigendum: Computing an st-numbering. Theoretical Computer Science, 4(1):123, 1977.

  18. Y. Gertner, S. Kannan, T. Malkin, O. Reingold, and M. Viswanathan. The relationship between public key encryption and oblivious transfer. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS) (2000), pp. 325–335

  19. O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing (STOC) (1987), pp. 218–229

  20. I. Haitner, K. Nissim, E. Omri, R. Shaltiel, and J. Silbak. Computational two-party correlation: A dichotomy for key-agreement protocols. In Proceedings of the 59th Annual Symposium on Foundations of Computer Science (FOCS) (2018), pp. 136–147

  21. M. Hirt, U. Maurer, D. Tschudi, and V. Zikas. Network-hiding communication and applications to multi-party protocols. In 36th Annual International Cryptology Conference (CRYPTO), part II (2016), pp. 335–365

  22. Y. Ishai, E. Kushilevitz, R. Ostrovsky, and A. Sahai. Cryptography from anonymity. In Proceedings of the 47th Annual Symposium on Foundations of Computer Science (FOCS) (2006), pp. 239–248

  23. J. Katz, U. Maurer, B. Tackmann, and V. Zikas. Universally composable synchronous computation. In Proceedings of the 10th Theory of Cryptography Conference(TCC) (2013), pp. 477–498

  24. J. Kilian. A general completeness theorem for two-party games. In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing (STOC) (1991), pp. 553–560

  25. L. Lamport, R. E. Shostak, and M. C. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382–401, 1982.

  26. R. LaVigne, C. L. Zhang, U. Maurer, T. Moran, M. Mularczyk, and D. Tschudi. Topology-hiding computation beyond semi-honest adversaries. In Proceedings of the 16th Theory of Cryptography Conference(TCC), part II (2018), pp. 3–35

  27. R. LaVigne, C. L. Zhang, U. Maurer, T. Moran, M. Mularczyk, and D. Tschudi. Topology-hiding computation for networks with unknown delays. In Proceedings of the 23rd International Conference on the Theory and Practice of Public-Key Cryptography (PKC), part II (2020), pp. 215–245

  28. Y. Lindell, E. Omri, and H. Zarosim. Completeness for symmetric two-party functionalities: Revisited. Journal of Cryptology, 31(3):671–697, 2018.

  29. T. Moran, I. Orlov, and S. Richelson. Topology-hiding computation. In Proceedings of the 12th Theory of Cryptography Conference(TCC), part I (2015), pp. 159–181

  30. M. C. Pease, R. E. Shostak, and L. Lamport. Reaching agreement in the presence of faults. Journal of the ACM, 27(2):228–234, 1980.

  31. T. Rabin and M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority (extended abstract). In Proceedings of the 30th Annual Symposium on Foundations of Computer Science (FOCS) (1989), pp. 73–85

  32. R. E. Tarjan. Two streamlined depth-first search algorithms. Fundamenta Informaticae, 9(1):85–94, 1986.

  33. A. C. Yao. Protocols for secure computations (extended abstract). In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS) (1982), pp. 160–164

Acknowledgements

We thank the anonymous reviewers of TCC 2020 for pointing out the connection between anonymous communication and key agreement in [22], as well as the anonymous JoC reviewer. Some of M. Ball’s work was done while the author was at Columbia University, supported in part by an IBM Research PhD Fellowship. M. Ball and T. Malkin’s work is supported in part by JPMorgan Chase & Co. as well as the US Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research under award number DE-SC-0001234. E. Boyle’s research is supported in part by ISF grant 1861/16, AFOSR Award FA9550-17-1-0069, and ERC Starting Grant 852952 (HSS). Some of R. Cohen’s work was done while the author was at Northeastern University, supported in part by NSF grant 1646671. L. Kohl is funded by NWO Gravitation project QSC; some of L. Kohl’s research was done while at Technion, supported by ERC Project NTSC (742754). P. Meyer’s research is supported in part by ISF grant 1861/16, AFOSR Award FA9550-17-1-0069, and ERC Starting Grant 852952 (HSS); some of his research was done while the author was a student at École Normale Supérieure de Lyon. Some of T. Moran’s work was done while the author was at Spacemesh and Northeastern University. Any views or opinions expressed herein are solely those of the authors listed and may differ from the views and opinions expressed by JPMorgan Chase & Co. or its affiliates. This material is not a product of the Research Department of J.P. Morgan Securities LLC. This material should not be construed as an individual recommendation for any particular client and is not intended as a recommendation of particular securities, financial instruments or strategies for a particular client. This material does not constitute a solicitation or offer in any jurisdiction.

Author information

Corresponding author

Correspondence to Pierre Meyer.

Additional information

Communicated by Amit Sahai.

A preliminary version of this work appeared at TCC 2020 [5].

Appendices

Appendix A: UC Framework

We present a highly informal overview of the UC framework and refer the reader to [10] for further details. The framework is based on the real/ideal paradigm for arguing about the security of a protocol.

The real model An execution of a protocol \(\pi \) in the real model consists of n ppt interactive Turing machines (ITMs) \(\textsf{P} _1,\ldots ,\textsf{P} _n\) representing the parties, along with two additional ITMs: an adversary \(\mathcal A\), describing the behavior of the corrupted parties, and an environment \(\mathcal Z\), representing the external network environment in which the protocol operates. The environment gives inputs to the honest parties, receives their outputs, and can communicate with the adversary at any point during the execution. It is known that security against the dummy adversary (which forwards every message it sees to the environment and acts according to the environment’s instructions) is sufficient to achieve security against arbitrary adversaries. Throughout, we consider synchronous protocols that proceed in rounds (this can be formally modeled using the functionality [10], or using the synchronous framework of [23]) and semi-honest (passive) security (where corrupted parties continue following the protocol, but reveal their internal state to the adversary). We consider both static corruptions (where \(\mathcal A\) chooses the corrupted parties at the onset of the protocol) and adaptive corruptions (where \(\mathcal A\) can dynamically corrupt parties based on information gathered during the computation), and will explicitly state in each section which type of corruption is considered. A t-adversary can corrupt up to t parties during the protocol.

The ideal model A computation in the ideal model consists of n dummy parties \(\tilde{\textsf{P}}_1,\ldots ,\tilde{\textsf{P}}_n\), an ideal-model adversary (simulator) \(\textsf {Sim}\), an environment \(\mathcal Z\), and an ideal functionality . As in the real model, the environment gives inputs to the honest (dummy) parties, receives their outputs, and can communicate with the ideal-model adversary at any point during the execution. The dummy parties act as channels between the environment and the ideal functionality, meaning that they send the inputs received from \(\mathcal{Z} \) to and vice versa. The ideal functionality defines the desired behavior of the computation. receives the inputs from the dummy parties, executes the desired computation, and sends the output to the parties. The ideal-model adversary does not see the communication between the parties and the ideal functionality; however, \(\textsf {Sim}\) can corrupt dummy parties (statically or dynamically) and may communicate with according to its specification.

Security definition We present the definition for static and semi-honest adversaries.

We say that a protocol \(\pi \) UC-realizes (with computational security) an ideal functionality in the presence of static semi-honest t-adversaries, if for any ppt static semi-honest t-adversary \(\mathcal{A} \) and any ppt environment \(\mathcal{Z} \), there exists a ppt ideal-model t-adversary \({\textsf {Sim}} \) such that the output distribution of \(\mathcal{Z} \) in the ideal-model computation of with \(\textsf {Sim}\) is computationally indistinguishable from its output distribution in the real-model execution of \(\pi \) with \(\mathcal A\).

We say that a protocol \(\pi \) UC-realizes (with information-theoretic security) an ideal functionality if the above holds even for computationally unbounded \(\mathcal A\), \(\mathcal Z\), and \(\textsf {Sim}\). In that case, the requirement is for the output distribution of \(\mathcal{Z} \) in the ideal-model computation to be statistically close to its output distribution in the real-model execution. If the environment’s outputs are identically distributed, we say that \(\pi \) UC-realizes with perfect security.

The hybrid model The -hybrid model is a combination of the real and ideal models: it extends the real model with an ideal functionality . The parties communicate with each other exactly as in the real model; however, they can also interact with as in the ideal model. An important property of the UC framework is that the ideal functionality in an -hybrid model can be replaced with a protocol that UC-realizes . The composition theorem of Canetti [10] states the following.

Theorem A.1

([10], informal) Let \(\rho \) be a protocol that UC-realizes in the presence of adaptive semi-honest t-adversaries, and let \(\pi \) be a protocol that UC-realizes in the -hybrid model in the presence of adaptive semi-honest t-adversaries. Then, for any ppt adaptive semi-honest t-adversary \(\mathcal{A} \) and any ppt environment \(\mathcal{Z} \), there exists a ppt adaptive semi-honest t-adversary \({\textsf {Sim}} \) in the -hybrid model such that the output distribution of \(\mathcal{Z} \) when interacting with the protocol \(\pi \) and \(\textsf {Sim}\) is computationally indistinguishable from its output distribution when interacting with the protocol \(\pi ^\rho \) (where every call to is replaced by an execution of \(\rho \)) and \(\mathcal A\) in the real model.

Appendix B: DC-Nets are Topology-Hiding: From THB to THAB

In this section, we show that t-THB implies t-THAB with respect to classes of n-node graphs that are \((t+1)\)-connected. Recall that in Sects. 5.1 and 6.2 we showed a separation between t-THB and t-THAB over non-\((t+1)\)-connected graphs.

Our starting point is the Dining-Cryptographers Network, introduced by Chaum [12] with the aim of transforming a broadcast primitive into an anonymous broadcast channel, secure in the semi-honest setting. The procedure works as follows on a (public, connected) incomplete network of point-to-point secure channels, where an anonymous party holds as input a bit \(b_{\textsf{BC}}\) to be broadcast and all other parties hold a dummy input bit set to zero. Each party starts by sending a random message to each of its neighbors and keeps the sum of all these outgoing messages. Each party then adds to this sum the messages it received in the previous round, one per neighbor. This new sum is now used as a one-time pad to mask the party’s input; we call this ciphertext the party’s randomized input. These randomized inputs sum to the broadcast bit. The key observation is that so long as a passive adversary cannot corrupt a vertex-cut of the graph, the adversary learns nothing (other than the value of \(b_{\textsf{BC}}\)) about the honest parties’ inputs, even if additionally given the list of randomized inputs. It is therefore safe for the parties to broadcast their randomized inputs to reconstruct the output \(b_{\textsf{BC}}\).

Correctness follows by inspection. Let us recall the high-level idea of why the broadcaster is anonymous. Consider any non-empty proper subset of the vertices \(S\subset V\) such that the union of their closed neighborhoods is connected. At the end of the input-randomization phase, the partial sum of the inputs in S (i.e., the indicator of the event “the broadcaster is in S”) is secret-shared among the randomized inputs of S and the shares the parties in S sent to the parties in \(V\setminus S\). Therefore, to learn anything about the partial sum of the inputs in S, the adversary has to corrupt the set Z of vertices at the frontier of S, i.e., the vertices in \(V\setminus S\) that are neighbors of S. Given any set Z of at most t corruptions, the adversary can isolate the set of players \(S=V\setminus Z\), but no other set, because the graph is \((t+1)\)-connected. For this specific set, the adversary learns nothing it should not, since it already knows whether the broadcaster is corrupted (i.e., in Z) or not (i.e., in S).

We make the simple observation that if the underlying broadcast primitive is topology-hiding, and if the number and identities of the parties participating in the protocol are publicly known (so that the parties know the order in which to broadcast the randomized inputs), then the anonymous broadcast protocol is topology-hiding. Indeed, the input-randomization phase is purely local and cannot leak the topology of the graph, while the reconstruction phase inherits the topology-hiding properties of the broadcast primitive used.

Recall that for \(n\in {\mathbb {N}}\) we denote by the class of connected graphs with exactly n nodes.

Theorem B.1

Let \(n\in {\mathbb {N}}\), and let be a class of \((t+1)\)-connected graphs. Then, the existence of a t-THB protocol with respect to \({\mathcal {G}} \) implies a t-THAB protocol with respect to \({\mathcal {G}} \).

Remark B.2

Note that Theorem B.1 can be strengthened in two ways. First, if a slight variation on the DC-net protocol is run on an arbitrary class of \((t+1)\)-connected graphs (where the set of players/vertices is not known a priori), then correctness and input privacy are still guaranteed, and the only leakageFootnote 30 about the topology is the set of players participating in the protocol; in fact, this can be strengthened so that only the number of participating players is leaked. Second, if the parties set their inputs arbitrarily (rather than all non-broadcasters setting theirs to 0), the DC-net protocol actually constructs a t-secure sum protocol from a broadcast primitive, which is also topology-hiding.

Appendix C: Statistically Secure, Round-Inefficient THB on Oriented-5-Path

In this appendix, we justify our remark that the class from Sect. 4.1 admits an unconditionally \(\varepsilon \)-statistically 1-secure \(1/\varepsilon \)-round topology-hiding broadcast protocol, via a more general lemma. In particular, we observe that the following delayed-flooding protocol is \(\varepsilon \)-statistically secure for any graph class for which the flooding protocol can be simulated given distance (i.e.,  only distance need be hidden).Footnote 31 The core idea is quite straightforward: If the usual flooding protocol only leaks distance (via the round in which the broadcast bit is received), then by simply having the broadcaster delay flooding for a random number of rounds, this leakage is diluted.

Fig. 33: \(\varepsilon \)-Delayed Flooding Protocol: a simple, inefficient, distance-hiding broadcast protocol

Lemma C.1

Let \(0<\varepsilon \le 1\). Let \({\mathcal {G}} \) be a class of graphs such that each node can always deduce a unique neighbor through which it is connected to the broadcaster, and let d be an upper bound on the distance of any node from the broadcaster.Footnote 32

Then, the \(\varepsilon \)-Delayed Flooding Protocol (defined in Fig. 33) is an \(\varepsilon \)-statistically 1-secure topology-hiding broadcast protocol with round complexity \(d(1+1/\varepsilon )\).

Proof

Let \(\Delta = d/\varepsilon \) be the upper bound on the random delay, and let \(R=d/\varepsilon +d\) be the upper bound on round complexity, as defined in Fig. 33.

We begin by observing that the \(\varepsilon \)-Delayed Flooding Protocol delivers the broadcast message to all nodes: for any choice of delay \(r\in [\Delta ]\), every node receives the message by round \(r+d\le R\). (Recall d is an upper bound on the distance from the broadcaster.) The protocol is thus, in fact, perfectly correct.

It remains to show \(\varepsilon \)-statistical security. Let v be the corrupt node with neighborhood \({\mathcal {N}(v)} \). By the properties of \({\mathcal {G}} \), there exists a fixed \(u\in {\mathcal {N}(v)} \) such that u is a bridge to the broadcaster in all graphs \(G\in {\mathcal {G}} \) with \(\mathcal {N}_{G}(v) ={\mathcal {N}(v)} \).

Because the protocol follows the same message pattern as the naïve flooding protocol and \({\mathcal {G}} \) consists of trees, we can represent the distribution of the view of v on any graph \(G\in {\mathcal {G}} \) as the tuple \(p(G)=(p_1(G),\ldots ,p_{R}(G),p_\bot (G))\), where \(p_i(G)\) is the probability that v receives the broadcast message in round i (for \(i\in [R]\)) and \(p_\bot (G)\) is the probability that v sees nothing.

Observe that if v is at distance \(d_v\) from the broadcaster in a graph \(G\in {\mathcal {G}} \), the view of v in G is simply the broadcast message arriving at round \(r_v=r+d_v\), where r is a random variable distributed uniformly over \([\Delta ]\). Thus, \(p_{d_v+1}(G)=\cdots =p_{d_v+\Delta }(G)=1/\Delta \), and \(p_i(G)=0\) for all other \(i\in [R]\).

The simulator \({\textsf {Sim}} \) works by sampling \(r\leftarrow [\Delta ]\) and sending the broadcast message m in round \(r+d/2\) from the neighbor u, specified above. We can write the distribution of the simulated view as the tuple \(\tilde{p}=(\tilde{p}_1,\ldots ,\tilde{p}_R,\tilde{p}_\bot )\) where \(\tilde{p}_1=\cdots =\tilde{p}_{d/2}=\tilde{p}_{R-d/2+1}=\cdots =\tilde{p}_{R}=0\) and \(\tilde{p}_{d/2+1}=\cdots =\tilde{p}_{R-d/2}=1/\Delta \).

Thus, we can explicitly compute the statistical difference as \(\frac{|d/2-d_v|}{\Delta }\le \frac{d/2}{\Delta } \le \varepsilon /2\). We can therefore deduce that under the simulation notion we have security \(\varepsilon /2\) and under the indistinguishability-based definition (Definition 2.4), we have statistical security \(\varepsilon \).

\(\square \)

About this article

Cite this article

Ball, M., Boyle, E., Cohen, R. et al. Topology-Hiding Communication from Minimal Assumptions. J Cryptol 36, 39 (2023). https://doi.org/10.1007/s00145-023-09473-3
