
Lower Bounds on Anonymous Whistleblowing

  • Conference paper

Theory of Cryptography (TCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14371)


Abstract

Anonymous transfer, recently introduced by Agrikola, Couteau and Maier [3] (TCC ’22), allows a sender to leak a message anonymously by participating in a public non-anonymous discussion in which everyone knows who said what. This opens up the intriguing possibility of using cryptography to ensure strong anonymity guarantees in a seemingly non-anonymous environment.

The work of [3] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees (where the adversary’s advantage in identifying the sender is negligible) against arbitrary polynomial-time adversaries. They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees (the adversary’s advantage in identifying the sender is inverse polynomial in the number of rounds) against fine-grained adversaries whose run-time is bounded by some fixed polynomial that exceeds the run-time of the honest users. This leaves a large gap between the lower bound and the upper bound, raising the intriguing possibility that one may be able to achieve weak anonymity against arbitrary polynomial-time adversaries, or strong anonymity against fine-grained adversaries.

In this work, we present improved lower bounds on anonymous transfer, that rule out both of the above possibilities:

  • We rule out the existence of anonymous transfer with any non-trivial anonymity guarantees against general polynomial-time adversaries.

  • Even if we restrict ourselves to fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we cannot achieve strong anonymity, or even quantitatively improve over the inverse polynomial anonymity guarantees (heuristically) achieved by [3].

Consequently, constructions of anonymous transfer can only provide security against fine-grained adversaries, and even in that case they achieve at most weak quantitative forms of anonymity.
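
To make the model concrete, here is a toy simulation (our illustration only: the strawman protocol, the bias parameter, and the counting statistics below are assumptions, not the construction of [3] or the attacks of this paper). It exhibits the tension that drives the lower bounds: any signal strong enough for the receiver to decode the leaked bit is also a statistic an observer can measure to identify the sender.

    import random

    # Toy simulation of 2-party anonymous transfer (illustration only).
    # One of two parties -- the sender -- tries to leak a bit through a
    # public discussion in which everyone knows who said what; the other
    # party's messages are pure noise.

    ROUNDS = 200   # hypothetical parameter
    BIAS = 0.6     # how often the sender's message equals the leaked bit

    def run(sender, bit):
        """Run the toy protocol; returns each party's message list."""
        msgs = [[], []]
        for _ in range(ROUNDS):
            for p in (0, 1):
                if p == sender and random.random() < BIAS:
                    msgs[p].append(bit)                   # correlated with leak
                else:
                    msgs[p].append(random.randint(0, 1))  # looks like noise
        return msgs

    def receiver_decode(msgs):
        """Majority vote over all messages recovers the bit w.h.p."""
        everything = msgs[0] + msgs[1]
        return int(2 * sum(everything) > len(everything))

    def adversary_guess(msgs, decoded):
        """Guess the party whose messages lean toward the decoded bit --
        the very statistic that makes decoding possible."""
        counts = [m.count(decoded) for m in msgs]
        return 0 if counts[0] >= counts[1] else 1

    TRIALS = 2000
    ok = caught = 0
    for _ in range(TRIALS):
        sender, bit = random.randint(0, 1), random.randint(0, 1)
        msgs = run(sender, bit)
        decoded = receiver_decode(msgs)
        ok += (decoded == bit)
        caught += (adversary_guess(msgs, decoded) == sender)
    print(f"receiver correctness:        {ok / TRIALS:.2f}")      # near 1.0
    print(f"adversary identifies sender: {caught / TRIALS:.2f}")  # well above 0.5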


Notes

  1. For concreteness, the public discussion could occur over Facebook or Twitter, and users need to be logged in with their true identity.

  2. This departs from our informal setting, where a real discussion occurred, while we now assume that “real discussions” are uniformly random. Various works, including [12, 15, 16], show how to embed uniform randomness into real discussions. Concretely, it suffices to (randomly) encode uniformly random messages into the distribution representing the (not necessarily uniform) communication pattern, in such a way that the random messages can be decoded (a minimal sketch of this idea appears right after these notes).

  3. In this work, we use the convention that an AT is stronger as \(\varepsilon ,\delta \) tend to 0; this is the opposite of the convention of [3], where an AT is stronger as \(\varepsilon ,\delta \) tend to 1.

  4. Anonymous transfer can also be defined with more than a single “dummy” party. For simplicity, we focus on the 2-party case in this overview, and will show how to extend the attacks to the N-party case subsequently.

  5. We consider here “silent” receivers who do not send any messages—this is similarly known to be sufficient for lower bounds [3].

  6. We remind the reader that [3] uses different conventions from ours for \(\varepsilon \) and \(\delta \). With our notation, an AT satisfies stronger properties as \(\varepsilon \) and \(\delta \) get smaller and closer to 0; ideally, both are negligible in the security parameter.

  7. In an AT, rounds are by default synchronous; for the sake of this general blueprint, any arbitrary sequentialization of the messages would be meaningful.

  8. More precisely, strategies are black-box in the AT algorithms, but need to consider full transcripts in a slightly non-black-box way (namely, by separating messages and considering random continuations).

  9. Technically, the quantities \(p_k\) when A is the neutral party and \(p_k\) when A is the bias inducer are not necessarily related. But without loss of generality, the strategies used by the bias inducer and the neutral party are independent of their identity as A or B, in which case the quantities \(p_{2k}\) are equal.

  10. One technically needs to be careful when handling cases where \(p_i=0\) for some i. We largely ignore this technicality in this overview. For concreteness, it will be enough to output a random guess if this happens, and to observe that, for games resulting from an AT, this happens with probability at most \(1-p_f\), and therefore does not affect our advantage too much. We refer to Sect. 4.3 for more details.

  11. Actually, the total progress is only guaranteed to be \(p_f/p_0\) in expectation, which induces several technical issues. We will assume the progress is always equal to \(p_f/p_0\) for the sake of this overview, and we refer to Sect. 4.2 for more details on the issues and a solution.

  12. This is done by considering all the messages sent by parties \(k\ne i,j\) as part of the CRS of the new 2-party protocol.

  13. Note that there is at most one such index. If no such index exists, our attack outputs, say, party 1.

  14. This corresponds to setting \(\delta =1-1/\alpha '\), conditioned on executions where the message can be correctly reconstructed. We refer to Sect. 5.3 for more details.

  15. The overhead arises from both the \(O(N^2)\) calls to the internal distinguisher, and the runtime of the internal distinguisher itself, which is \(\textrm{poly}(\alpha ')=\textrm{poly}(N)\cdot \alpha \).

  16. The work of [3] more generally considers a setting with N parties, namely a sender and \(N-1\) dummy parties. Our work focuses on the two-party case, but our main result extends to the N-party case: see Remark 4 and Sect. 5.3 for more details.

  17. This is without loss of generality; see Remark 2.

  18. This is without loss of generality; see Remark 3.

  19. We remind the reader that the quantity \(\delta \) in [3] corresponds to \(1-\delta \) for us. Additionally, in this version of the definition, we do not include the receiver in the count for the number of parties.

  20. Technically, we also condition the expectations \(X, Y\) on all the prior moves, but we omit this for ease of notation. See Remark 5.

  21. Recall that each expectation is implicitly conditioned on prior moves (Remark 5).

  22. This is a stronger statement in the sense that observers can test whether an execution is winning, and therefore can output an arbitrary bit \(p_{2c}\ne 1\).

  23. Technically, \(\bot \) can be replaced by any arbitrary output, e.g. 0; but considering this output separately is, in our eyes, conceptually cleaner.

  24. Looking ahead, this large sample complexity is a result of the techniques we use in our analysis, which require us to compute precise estimates for each game state.

  25. The choice of the specific output doesn’t matter for the sake of the analysis, as long as it has the same distribution whether A or B is the bias inducer.

  26. Again, \(\bot \) can be replaced by any arbitrary output, e.g. 0; but considering this output separately is, in our eyes, conceptually cleaner.

  27. Here, we implicitly take the convention that, because players make c moves, they have complexity at least c. This is informal, and there is a mismatch: we are comparing the sample complexity of \(C^*\) against the standard complexity of A and B. The translation to AT lower bounds will make this statement more precise.

  28. Looking ahead, doing so comes at a mild loss in the resulting anonymity \(\delta \). While this loss is mild when starting from Theorem 8, which yields the main result of the section, it is quite significant when starting from Theorem 9, in which case the anonymity guarantees we obtain are similar to those of [3]. We therefore focus on Theorem 8 in this section.
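
As mentioned in note 2, the following is a minimal rejection-sampling sketch of embedding a uniformly random bit into a message drawn from an arbitrary “discussion” distribution, in the spirit of provably secure steganography [12, 15, 16]. The word list, the nonce trick, and the hash-parity function are illustrative assumptions for this demo, not details from the paper; real constructions assume a samplable channel with sufficient entropy per message.

    import hashlib
    import random

    # Rejection-sampling sketch: embed a uniform bit into a message that
    # is distributed (close to) like an ordinary channel message.

    WORDS = ["ok", "sure", "lol", "brb", "nice", "agreed", "what?"]

    def sample_channel():
        """Stand-in for sampling a plausible next message in a discussion."""
        return f"{random.choice(WORDS)} #{random.randint(0, 9999)}"

    def parity(msg):
        """A public one-bit function of a message (here: hash parity)."""
        return hashlib.sha256(msg.encode()).digest()[0] & 1

    def encode(bit, max_tries=64):
        """Resample from the channel until the message's parity equals `bit`.
        Each draw matches with probability ~1/2, so this terminates quickly,
        and the output still looks like an ordinary channel message."""
        for _ in range(max_tries):
            msg = sample_channel()
            if parity(msg) == bit:
                return msg
        raise RuntimeError("channel has too little entropy for this bit")

    def decode(msg):
        return parity(msg)

    bit = random.randint(0, 1)
    msg = encode(bit)
    assert decode(msg) == bit
    print(f"embedded bit {bit} in ordinary-looking message: {msg!r}")

Applied message by message, the decoder recovers one uniform bit per message, which is what justifies modeling “real discussions” as uniformly random in the analysis.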

References

  1. Navalny, A.: Russia’s jailed vociferous Putin critic. British Broadcasting Corporation (2022). https://www.bbc.com/news/world-europe-16057045

  2. Abraham, I., Pinkas, B., Yanai, A.: Blinder - scalable, robust anonymous committed broadcast. In: Ligatti, J., Ou, X., Katz, J., Vigna, G. (eds.) ACM CCS 2020, pp. 1233–1252. ACM Press, November 2020. https://doi.org/10.1145/3372297.3417261

  3. Agrikola, T., Couteau, G., Maier, S.: Anonymous whistleblowing over authenticated channels. In: Kiltz, E., Vaikuntanathan, V. (eds.) TCC 2022, Part II. LNCS, vol. 13748, pp. 685–714. Springer, Heidelberg (2022). https://doi.org/10.1007/978-3-031-22365-5_24


  4. Andrews, S., Burrough, B., Ellison, S.: The Snowden saga. Vanity Fair (2014). https://archive.vanityfair.com/article/2014/5/the-snowden-saga

  5. Berret, C.: Guide to SecureDrop. Tow Center for Digital Journalism, Columbia Journalism Review (2016). https://www.cjr.org/tow_center_reports/guide_to_securedrop.php

  6. Chaum, D.: The dining cryptographers problem: unconditional sender and recipient untraceability. J. Cryptol. 1(1), 65–75 (1988). https://doi.org/10.1007/BF00206326


  7. Chaum, D.: Untraceable electronic mail, return addresses and digital pseudonyms. In: Gritzalis, D.A. (ed.) Secure Electronic Voting. Advances in Information Security, vol. 7, pp. 211–219. Springer, Boston (2003). https://doi.org/10.1007/978-1-4615-0239-5_14


  8. Cohn-Gordon, K., Cremers, C., Dowling, B., Garratt, L., Stebila, D.: A formal security analysis of the signal messaging protocol. J. Cryptol. 33(4), 1914–1983 (2020). https://doi.org/10.1007/s00145-020-09360-1


  9. Corrigan-Gibbs, H., Boneh, D., Mazières, D.: Riposte: an anonymous messaging system handling millions of users. In: 2015 IEEE Symposium on Security and Privacy, pp. 321–338 (2015). https://doi.org/10.1109/SP.2015.27

  10. Dingledine, R., Mathewson, N., Syverson, P.F.: Tor: The second-generation onion router. In: Blaze, M. (ed.) USENIX Security 2004, pp. 303–320. USENIX Association, August 2004


  11. Eskandarian, S., Corrigan-Gibbs, H., Zaharia, M., Boneh, D.: Express: lowering the cost of metadata-hiding communication with cryptographic privacy. In: Bailey, M., Greenstadt, R. (eds.) USENIX Security 2021, pp. 1775–1792. USENIX Association, August 2021


  12. Hopper, N.J., Langford, J., von Ahn, L.: Provably secure steganography. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 77–92. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45708-9_6


  13. Inzaurralde, B.: The cybersecurity 202: leak charges against treasury official show encrypted apps only as secure as you make them. The Washington Post (2018)


  14. Newman, Z., Servan-Schreiber, S., Devadas, S.: Spectrum: high-bandwidth anonymous broadcast with malicious security. Cryptology ePrint Archive, Report 2021/325 (2021). https://eprint.iacr.org/2021/325

  15. von Ahn, L., Hopper, N.J.: Public-key steganography. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 323–341. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24676-3_20


  16. von Ahn, L., Hopper, N.J., Langford, J.: Covert two-party computation. In: Gabow, H.N., Fagin, R. (eds.) 37th ACM STOC, pp. 513–522. ACM Press (2005). https://doi.org/10.1145/1060590.1060668


Acknowledgements

Daniel Wichs was supported in part by the National Science Foundation under NSF CNS-1750795, CNS-2055510, and the JP Morgan faculty research award.

Author information

Corresponding author

Correspondence to Willy Quach.


Copyright information

© 2023 International Association for Cryptologic Research

About this paper


Cite this paper

Quach, W., Tyner, L., Wichs, D. (2023). Lower Bounds on Anonymous Whistleblowing. In: Rothblum, G., Wee, H. (eds) Theory of Cryptography. TCC 2023. Lecture Notes in Computer Science, vol 14371. Springer, Cham. https://doi.org/10.1007/978-3-031-48621-0_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-48621-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48620-3

  • Online ISBN: 978-3-031-48621-0

  • eBook Packages: Computer Science (R0)
