
Protecting the Privacy of Voters: New Definitions of Ballot Secrecy for E-Voting

  • Conference paper
  • Selected Areas in Cryptography (SAC 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12804)


Abstract

Protecting the privacy of voters is a basic requirement of any electronic voting scheme, and formal definitions can be used to prove that a scheme satisfies privacy. In this work, we provide new game-based definitions of ballot secrecy for electronic voting schemes. First, we propose an intuitive definition in the honest model, i.e., a model in which all election officials are honest. Then, we show that this definition can be easily extended to the malicious ballot box setting and a setting that allows for a distributed tallier. In fact, to the best of our knowledge, we provide the first game-based definition of ballot secrecy that models both a malicious ballot box and a malicious subset of talliers. We demonstrate that our definitions of ballot secrecy are satisfiable, defining electronic voting scheme constructions which we prove satisfy our definitions. Finally, we revisit existing definitions, exploring their limitations and contextualising our contributions to the field.


Notes

  1. Though the focus of this paper is privacy, another basic property of e-voting schemes is verifiability, by which any interested party can check that the result of the election is computed correctly. For a full discussion of this notion, including formal definitions, the interested reader can consult [18].

  2. In particular, Benaloh is the sole author of [4] and is a co-author of [5].

  3. In fact, in [8], Bernhard et al. introduce mini-voting, a simple e-voting scheme in which voters encrypt their vote and a tallier decrypts each ciphertext and computes the result from the resulting plaintext votes. Like \(\varGamma _{\mathsf {mini}}\), mini-voting is not verifiable. However, Bernhard et al. prove that mini-voting satisfies a notion of ballot secrecy and subsequently build a generic, verifiable e-voting scheme with homomorphic tallying that also satisfies ballot secrecy. Helios is an instantiation of this generic e-voting scheme and, therefore, Helios can be shown to satisfy ballot secrecy if the underlying mini-voting construction is ballot secret.

  4. Our result function, similar to the result function in [7], determines how the result of the election is computed.

  5. We write that credential pairs are generated by a one-way function f.

  6. This can be extended to multi-candidate elections in the style of Helios [1, 28].

  7. As with \(\mathsf {BS}\), our balancing condition can be modified to model a no-revote policy.

  8. We note that, specifically, \(t=n\) is possible. That is, all n shares are required to reconstruct the private key.

  9. For full details of the review, the weaknesses found in [8, 9], and other ballot secrecy definitions, consult [7].

References

  1. Adida, B.: Helios: web-based open-audit voting. In: USENIX 2008, vol. 17, pp. 335–348 (2008)

  2. Belenios voting system. https://www.belenios.org/index.html

  3. Bellare, M., Desai, A., Pointcheval, D., Rogaway, P.: Relations among notions of security for public-key encryption schemes. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 26–45. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0055718

  4. Benaloh, J.: Verifiable secret-ballot elections. Ph.D. thesis (1987)

  5. Benaloh, J., Yung, M.: Distributing the power of a government to enhance the privacy of voters. In: PODC 1986, pp. 52–62 (1986)

  6. Bernhard, D., Cortier, V., Galindo, D., Pereira, O., Warinschi, B.: A comprehensive analysis of game-based ballot privacy definitions. ePrint Report 2015/255 (2015)

  7. Bernhard, D., Cortier, V., Galindo, D., Pereira, O., Warinschi, B.: SoK: a comprehensive analysis of game-based ballot privacy definitions. In: S&P 2015, pp. 499–516 (2015)

  8. Bernhard, D., Cortier, V., Pereira, O., Smyth, B., Warinschi, B.: Adapting Helios for provable ballot privacy. In: Atluri, V., Diaz, C. (eds.) ESORICS 2011. LNCS, vol. 6879, pp. 335–354. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23822-2_19

  9. Bernhard, D., Pereira, O., Warinschi, B.: How not to prove yourself: pitfalls of the Fiat-Shamir heuristic and applications to Helios. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 626–643. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34961-4_38

  10. Smyth, B., Bernhard, D.: Ballot secrecy and ballot independence coincide. In: Crampton, J., Jajodia, S., Mayes, K. (eds.) ESORICS 2013. LNCS, vol. 8134, pp. 463–480. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40203-6_26

  11. Bernhard, D., Smyth, B.: Ballot secrecy with malicious bulletin boards. ePrint Report 2014/822 (2014)

  12. Chaidos, P., Cortier, V., Fuchsbauer, G., Galindo, D.: BeleniosRF: a non-interactive receipt-free electronic voting scheme. In: CCS 2016, pp. 1614–1625, New York (2016)

  13. Chase, M., Lysyanskaya, A.: On signatures of knowledge. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 78–96. Springer, Heidelberg (2006). https://doi.org/10.1007/11818175_5

  14. Civitas voting system. www.cs.cornell.edu/projects/civitas/

  15. Clarkson, M.R., Chong, S., Myers, A.C.: Civitas: toward a secure voting system. In: S&P 2008, pp. 354–368. IEEE (2008)

  16. Cortier, V., Dragan, C.C., Dupressoir, F., Warinschi, B.: Machine-checked proofs for electronic voting: privacy and verifiability for Belenios. In: CSF 2018, pp. 298–312 (2018)

  17. Cortier, V., Galindo, D., Glondu, S., Izabachène, M.: Election verifiability for Helios under weaker trust assumptions. In: Kutyłowski, M., Vaidya, J. (eds.) ESORICS 2014. LNCS, vol. 8713, pp. 327–344. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11212-1_19

  18. Cortier, V., Galindo, D., Küsters, R., Müller, J., Truderung, T.: SoK: verifiability notions for e-voting protocols. In: S&P 2016, pp. 779–798 (2016)

  19. Cortier, V., Lallemand, J.: Voting: you can’t have privacy without individual verifiability. In: CCS 2018, pp. 53–66, New York (2018)

  20. Cortier, V., Lallemand, J., Warinschi, B.: Fifty shades of ballot privacy: privacy against a malicious board. ePrint Report 2020/127 (2020)

  21. Cortier, V., Smyth, B.: Attacking and fixing Helios: an analysis of ballot secrecy. In: CSF 2011, pp. 297–311 (2011)

  22. del Pino, R., Lyubashevsky, V., Neven, G., Seiler, G.: Practical quantum-safe voting from lattices. In: CCS 2017, pp. 1565–1581, New York (2017)

  23. Delaune, S., Kremer, S., Ryan, M.: Coercion-resistance and receipt-freeness in electronic voting. In: CSFW 2006, pp. 12–42 (2006)

  24. Desmedt, Y., Frankel, Y.: Threshold cryptosystems. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 307–315. Springer, New York (1990). https://doi.org/10.1007/0-387-34805-0_28

  25. i-voting. e-estonia.com/solutions/e-governance/i-voting/

  26. Fouque, P.-A., Poupard, G., Stern, J.: Sharing decryption in the context of voting or lotteries. In: Frankel, Y. (ed.) FC 2000. LNCS, vol. 1962, pp. 90–104. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45472-1_7

  27. Goldwasser, S., Micali, S.: Probabilistic encryption. J. Comput. Syst. Sci. 28(2), 270–299 (1984)

  28. Helios voting system. heliosvoting.org/

  29. Juels, A., Catalano, D., Jakobsson, M.: Coercion-resistant electronic elections. In: Chaum, D., et al. (eds.) Towards Trustworthy Elections. LNCS, vol. 6000, pp. 37–63. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12980-3_2

  30. Kiayias, A., Zacharias, T., Zhang, B.: End-to-end verifiable elections in the standard model. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 468–498. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46803-6_16

  31. Pereira, O.: Internet voting with Helios. In: Real-World Electronic Voting, pp. 277–308 (2016)

  32. Shoup, V., Gennaro, R.: Securing threshold cryptosystems against chosen ciphertext attack. In: Nyberg, K. (ed.) EUROCRYPT 1998. LNCS, vol. 1403, pp. 1–16. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0054113

  33. Smyth, B., Bernhard, D.: Ballot secrecy and ballot independence: definitions and relations. ePrint Report 2013/235 (2013)


Acknowledgement

This work is partly supported by the EPSRC and the UK government as part of the Centre for Doctoral Training in Cyber Security at Royal Holloway, University of London (EP/P009301/1).

Author information

Correspondence to Ashley Fraser.

Appendices

A Building Blocks for our Constructions

A.1 Public-Key Encryption

Definition 6

(PKE scheme) A public key encryption scheme \(\varPi \) is a tuple of PPT algorithms \((\varPi .\mathsf {Gen}, \varPi .\mathsf {Enc}, \varPi .\mathsf {Dec})\) such that

  • On input a security parameter, algorithm \(\varPi .\mathsf {Gen}\) outputs a key pair \((pk_\varPi ,sk_\varPi )\).

  • \(\varPi .\mathsf {Enc}(pk_\varPi ,m)\): On input public key \(pk_\varPi \) and message m, algorithm \(\varPi .\mathsf {Enc}\) outputs a ciphertext c.

  • \(\varPi .\mathsf {Dec}(sk_\varPi ,c)\): On input private key \(sk_\varPi \) and ciphertext c, algorithm \(\varPi .\mathsf {Dec}\) outputs a message m.
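
To make this syntax concrete, the following is a minimal Python sketch of the \((\varPi .\mathsf {Gen}, \varPi .\mathsf {Enc}, \varPi .\mathsf {Dec})\) interface, instantiated with textbook ElGamal over a toy group. The instantiation and parameters are illustrative assumptions only: the group is far too small to be secure, and textbook ElGamal does not satisfy the \(\mathsf {NM}\text {-}\mathsf {CPA}\) notion required below.

```python
import secrets

# Toy ElGamal parameters (illustrative only; far too small to be secure).
P = 23          # prime modulus
Q = 11          # prime order of the subgroup of squares mod P
G = 4           # generator of that subgroup

def gen():
    """Pi.Gen: output a key pair (pk, sk)."""
    sk = secrets.randbelow(Q - 1) + 1      # sk in [1, Q-1]
    return pow(G, sk, P), sk

def enc(pk, m):
    """Pi.Enc: encrypt a group element m under pk."""
    r = secrets.randbelow(Q - 1) + 1
    return (pow(G, r, P), (m * pow(pk, r, P)) % P)

def dec(sk, c):
    """Pi.Dec: decrypt ciphertext c = (c1, c2) with sk."""
    c1, c2 = c
    # m = c2 / c1^sk, division via the modular inverse of c1^sk.
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P

if __name__ == "__main__":
    pk, sk = gen()
    m = 9                     # a square mod 23, i.e. a valid group element
    assert dec(sk, enc(pk, m)) == m
```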

Definition 7

( \(\mathsf {NM}\text {-}\mathsf {CPA}\) ). A public key encryption scheme \(\varPi \) satisfies \(\mathsf {NM}\text {-}\mathsf {CPA}\) if, for any PPT adversary \(\mathcal {A}\), there exists a negligible function \(\mathsf {negl}\) such that \(\mathcal {A}\) wins the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment defined in Fig. 4 with probability at most \(\frac{1}{2}+\mathsf {negl}(\lambda )\), where \(\lambda \) denotes the security parameter.

Fig. 4. The \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment for public key encryption scheme \(\varPi \) and the \(t\text {-}\mathsf {NM}\text {-}\mathsf {CPA}\) experiment for (t, n)-threshold public key encryption scheme \(\varPhi \).
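
The experiment of Fig. 4 is not reproduced in this version. As an aid to the reader, the sketch below shows the usual structure of an indistinguishability-based \(\mathsf {NM}\text {-}\mathsf {CPA}\) game (a challenge encryption followed by one parallel decryption query). The adversary interface (choose, ciphertexts, guess) is an assumption made for illustration; the exact formulation in Fig. 4 may differ.

```python
import secrets

def nm_cpa_experiment(scheme, adversary):
    """Sketch of an indistinguishability-style NM-CPA experiment; returns 1 iff
    the adversary wins. `scheme` supplies gen/enc/dec; `adversary` supplies
    choose(pk) -> (m0, m1, state), ciphertexts(state, challenge) -> list, and
    guess(state, challenge, plaintexts) -> bit. These interfaces are
    illustrative assumptions, not the paper's notation."""
    pk, sk = scheme.gen()
    beta = secrets.randbelow(2)
    m0, m1, state = adversary.choose(pk)
    challenge = scheme.enc(pk, (m0, m1)[beta])
    # One parallel decryption query: a vector of ciphertexts that must not
    # contain the challenge ciphertext itself.
    queried = adversary.ciphertexts(state, challenge)
    if challenge in queried:
        return 0
    plaintexts = [scheme.dec(sk, c) for c in queried]
    return 1 if adversary.guess(state, challenge, plaintexts) == beta else 0
```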

A.2 Threshold Public Key Encryption

Definition 8

(Threshold PKE scheme) A (t, n)-threshold public key encryption scheme \(\varPhi \) is a tuple of PPT algorithms \((\varPhi .\mathsf {Gen}, \varPhi .\mathsf {Enc}, \varPhi .\mathsf {Dec}, \varPhi .\mathsf {Combine})\) such that

  • On input a security parameter, a threshold t and the number of private keys n, algorithm \(\varPhi .\mathsf {Gen}\) outputs a public key \(pk_\varPhi \) and n private keys \(sk_{\varPhi _1}, \dots , sk_{\varPhi _n}\).

  • \(\varPhi .\mathsf {Enc}(pk_\varPhi ,m)\): On input public key \(pk_\varPhi \) and message m, algorithm \(\varPhi .\mathsf {Enc}\) outputs a ciphertext c.

  • \(\varPhi .\mathsf {Dec}(pk_\varPhi ,i,sk_{\varPhi _i},c)\): On input public key \(pk_\varPhi \), an index \(1\le i\le n\), private key \(sk_{\varPhi _i}\) and ciphertext c, algorithm \(\varPhi .\mathsf {Dec}\) outputs a decryption share \(c_i\).

  • \(\varPhi .\mathsf {Combine}(pk_\varPhi ,c,c_1,\dots ,c_t)\): On input public key \(pk_\varPhi \), ciphertext c and t decryption shares \(c_1,\dots ,c_t\), algorithm \(\varPhi .\mathsf {Combine}\) outputs a message m.
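
For intuition, the following is a minimal Python sketch of this syntax, instantiated with (t, n)-threshold ElGamal: the private key is Shamir-shared and decryption shares are combined by Lagrange interpolation in the exponent. This instantiation is an illustrative assumption, not necessarily the one intended by the paper, and the toy parameters are insecure.

```python
import secrets

# Toy threshold ElGamal (illustrative only; parameters are insecure).
P, Q, G = 23, 11, 4   # prime modulus, prime subgroup order, subgroup generator

def gen(t, n):
    """Phi.Gen: output a public key and n private key shares (t needed to decrypt)."""
    coeffs = [secrets.randbelow(Q) for _ in range(t)]   # f(X) of degree t-1, f(0) = sk
    pk = pow(G, coeffs[0], P)
    shares = [sum(coef * pow(i, k, Q) for k, coef in enumerate(coeffs)) % Q
              for i in range(1, n + 1)]                 # share of party i is f(i)
    return pk, shares

def enc(pk, m):
    """Phi.Enc: standard ElGamal encryption of a group element m."""
    r = secrets.randbelow(Q - 1) + 1
    return (pow(G, r, P), (m * pow(pk, r, P)) % P)

def dec(pk, i, sk_i, c):
    """Phi.Dec: party i's decryption share c1^{f(i)}, tagged with its index."""
    return (i, pow(c[0], sk_i, P))

def combine(pk, c, dec_shares):
    """Phi.Combine: recover m from t decryption shares via interpolation in the exponent."""
    c1, c2 = c
    indices = [i for i, _ in dec_shares]
    c1_to_sk = 1
    for i, d_i in dec_shares:
        lam = 1
        for j in indices:
            if j != i:
                lam = lam * j * pow(j - i, -1, Q) % Q    # Lagrange coefficient at 0
        c1_to_sk = c1_to_sk * pow(d_i, lam, P) % P       # accumulate c1^{f(i) * lam}
    return c2 * pow(c1_to_sk, -1, P) % P                 # m = c2 / c1^{f(0)}

if __name__ == "__main__":
    pk, shares = gen(t=3, n=5)
    c = enc(pk, 9)
    parts = [dec(pk, i, shares[i - 1], c) for i in (1, 3, 5)]   # any 3 of the 5 parties
    assert combine(pk, c, parts) == 9
```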

Definition 9

( \(t\text {-}\mathsf {NM}\text {-}\mathsf {CPA}\) ). A (t, n)-threshold public key encryption scheme \(\varPhi \) satisfies \(t\text {-}\mathsf {NM}\text {-}\mathsf {CPA}\) if, for any PPT adversary \(\mathcal {A}\), there exists a negligible function \(\mathsf {negl}\) such that \(\mathcal {A}\) wins the \(t\text {-}\mathsf {NM}\text {-}\mathsf {CPA}\) experiment defined in Fig. 4 with probability at most \(\frac{1}{2}+\mathsf {negl}(\lambda )\), where \(\lambda \) denotes the security parameter.

A.3 Signature of Knowledge

Definition 10

(Signature of knowledge). A signature of knowledge \(\mathsf {SOK}\) is a tuple of algorithms \((\mathsf {SoK.Setup},\mathsf {SimSetup},\mathsf {SoK.Sign},\mathsf {SimSign},\mathsf {SoK.Verify})\) relative to a relation \(\mathcal {R}\) such that

  • \(\mathsf {SoK.Setup}\): on input a security parameter, algorithm \(\mathsf {SoK.Setup}\) outputs public parameters pp.

  • \(\mathsf {SimSetup}\): on input a security parameter, algorithm \(\mathsf {SimSetup}\) outputs public parameters pp and trapdoor \(\tau \).

  • \(\mathsf {SoK.Sign}(pp,s,w,m)\): on input pp, statement s, witness w and message m, algorithm \(\mathsf {SoK.Sign}\) outputs a signature \(\sigma \) if \((s,w)\in \mathcal {R}\).

  • \(\mathsf {SimSign}(pp,s,\tau ,m)\): on input pp, s, \(\tau \) and m, algorithm \(\mathsf {SimSign}\) outputs a signature \(\sigma \).

  • \(\mathsf {SoK.Verify}(pp,s,m,\sigma )\): on input pp, s, m and \(\sigma \), algorithm \(\mathsf {SoK.Verify}\) outputs 1 if the signature verifies, and 0 otherwise.
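
A simple concrete instance of this syntax is a Schnorr-style proof of knowledge of a discrete logarithm made non-interactive with the Fiat-Shamir transform, with the signed message bound into the hash. The sketch below illustrates only \(\mathsf {SoK.Sign}\) and \(\mathsf {SoK.Verify}\) for the relation \(\{(s,w):s=G^w \bmod P\}\); it assumes a toy group and a random-oracle-style hash, and omits the \(\mathsf {SoK.Setup}\), \(\mathsf {SimSetup}\), \(\mathsf {SimSign}\) and \(\mathsf {Extract}\) machinery that the definitions above require.

```python
import hashlib
import secrets

# Toy group parameters (illustrative assumption only; insecure sizes).
P, Q, G = 23, 11, 4

def _challenge(statement, commitment, message):
    """Fiat-Shamir challenge binding statement, commitment and message."""
    data = f"{statement}|{commitment}|{message}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sok_sign(statement, witness, message):
    """Output a signature proving knowledge of w with G^w = statement, on `message`."""
    k = secrets.randbelow(Q - 1) + 1
    commitment = pow(G, k, P)
    c = _challenge(statement, commitment, message)
    return (commitment, (k + c * witness) % Q)

def sok_verify(statement, message, sigma):
    """Return 1 if sigma verifies for (statement, message), else 0."""
    commitment, response = sigma
    c = _challenge(statement, commitment, message)
    return 1 if pow(G, response, P) == commitment * pow(statement, c, P) % P else 0

if __name__ == "__main__":
    w = 7
    s = pow(G, w, P)
    sig = sok_sign(s, w, "ballot-ciphertext-bytes")
    assert sok_verify(s, "ballot-ciphertext-bytes", sig) == 1
```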

Definition 11

(Extractability). Let \(\mathsf {Extract}\) be a PPT algorithm such that

  • \(\mathsf {Extract}(pp,\tau ,s,m,\sigma )\): on input public parameters pp, trapdoor \(\tau \), statement s, message m and signature \(\sigma \), algorithm \(\mathsf {Extract}\) outputs a witness w.

Then signature of knowledge \(\mathsf {SOK}\) satisfies extractability if, for all PPT adversaries \(\mathcal {A}\), there exists a negligible function \(\mathsf {negl}\) such that \(\mathcal {A}\) wins the extractability experiment defined in Fig. 5 with probability at most \(\mathsf {negl}(\lambda )\), where \(\lambda \) denotes the security parameter.

Fig. 5. The extractability experiment for signature of knowledge \(\mathsf {SOK}\).

B Ballot Secrecy of our Constructions

B.1 Proof of Theorem 1

Let \(\mathcal {A}\) be an adversary in the \(\mathsf {BS}\) experiment against \(\varGamma _{\mathsf {mini}}\), where \(\varGamma _{\mathsf {mini}}\) is the construction in Fig. 2. Assume that \(\mathcal {A}\) can output a bit \(\beta '\) such that \(\beta '=\beta \) and queries \(\mathcal {O}\mathsf {vote}\) and \(\mathcal {O}\mathsf {cast}\) such that \(V_0'=V_1'\); that is, \(\mathcal {A}\) succeeds in the \(\mathsf {BS}\) experiment with probability non-negligibly greater than \(\frac{1}{2}\). We note that, throughout the experiment, \(\mathcal {A}\) can only gain information about \(\beta \) through access to the bulletin board \(\mathcal {BB}\) and the election result r.

Let \(\mathsf {Forge}\) denote the event that \(\mathcal {A}\) submits a valid ballot \(b=(pk_{id},c,\sigma )\) to \(\mathcal {O}\mathsf {cast}\) where \((\cdot ,pk_{id},\cdot )\in \mathcal {Q}\mathsf {reg}\setminus \mathcal {Q}\mathsf {corrupt}\). We write that b is valid if \(\mathsf {Valid}(b,\mathcal {BB},pk,\mathcal {L})=1\), which requires, in particular, that \(\mathsf {SoK.Verify}(pp,(c,pk_\varPi ,pk_{id}),c,\sigma )=1\). Moreover, let \(\mathsf {Success}\) denote the event that the \(\mathsf {BS}\) experiment returns 1. Note that

\(\Pr [\mathsf {Success}]\le \Pr [\mathsf {Forge}]+\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\).

We show that \(\Pr [\mathsf {Forge}]\) is negligible and that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\) for some negligible function \(\mathsf {negl}\). The result of the theorem then follows.

First, we show that \(\Pr [\mathsf {Forge}]\) is negligible. In fact, we show that, if \(\mathcal {A}\) can submit a valid ballot to \(\mathcal {O}\mathsf {cast}\) on behalf of a registered, uncorrupted voter credential (that is, if event \(\mathsf {Forge}\) occurs), then \(\mathcal {A}\) can be used to construct an adversary \(\mathcal {B}\) in the extractability experiment, where \(\mathcal {B}\) plays the role of the challenger in the \(\mathsf {BS}\) experiment and \(\mathcal {C}\) is the challenger in the extractability experiment. In detail, we construct adversary \(\mathcal {B}\) as follows.

  1. \(\mathcal {B}\) obtains public parameters pp for signature of knowledge \(\mathsf {SOK}\) from \(\mathcal {C}\), runs \(\varPi .\mathsf {Gen}\) to obtain \((pk_\varPi ,sk_\varPi )\), and performs the setup of \(\varGamma _{\mathsf {mini}}\), providing \(\mathcal {A}\) with pk. \(\mathcal {B}\) additionally performs the initialisation steps of the \(\mathsf {BS}\) experiment and selects a bit \(\beta \leftarrow \{0,1\}\).

  2. \(\mathcal {B}\) answers queries to \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\) and \(\mathcal {O}\mathsf {board}\) as described in the \(\mathsf {BS}\) experiment. Moreover, \(\mathcal {B}\) computes the election result as described in algorithm \(\mathsf {Tally}\).

  3. For queries to \(\mathcal {O}\mathsf {vote}(pk_{id},v_0,v_1)\), \(\mathcal {B}\) computes \(c\leftarrow \varPi .\mathsf {Enc}(pk_\varPi ,v_\beta ;r)\) and queries \(\mathcal {O}_{pp,\tau }((c,pk_\varPi ,pk_{id}),(sk_{id},r),c)\) in the extractability experiment, receiving a signature of knowledge \(\sigma \). \(\mathcal {B}\) constructs ballot \(b\leftarrow (pk_{id},c,\sigma )\) and appends the ballot to \(\mathcal {BB}\).

  4. \(\mathcal {B}\) answers queries to \(\mathcal {O}\mathsf {cast}\) as described in the \(\mathsf {BS}\) experiment. By the assumption that event \(\mathsf {Forge}\) occurs, \(\mathsf {SoK.Verify}\) returns 1 for at least one tuple \((pk_{id},b)\) queried to \(\mathcal {O}\mathsf {cast}\) such that \(pk_{id}\in \mathcal {Q}\mathsf {reg}\setminus \mathcal {Q}\mathsf {corrupt}\). We denote this ballot by \((pk_{id}^*,c^*,\sigma ^*)\). Then, \(\mathcal {B}\) outputs \(((c^*,pk_\varPi ,pk_{id}^*),c^*,\sigma ^*)\) in the extractability experiment.

\(\mathcal {B}\) perfectly simulates the role of the challenger in the \(\mathsf {BS}\) experiment to \(\mathcal {A}\). In fact, \(\mathcal {B}\) trivially simulates oracles \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\) and \(\mathcal {O}\mathsf {board}\), and trivially computes the result of the election and returns r to \(\mathcal {A}\). Moreover, when \(\mathcal {A}\) queries \(\mathcal {O}\mathsf {vote}\), \(\mathcal {B}\) returns a ballot containing an encryption c that is identical to the encryption viewed by \(\mathcal {A}\) in the \(\mathsf {BS}\) experiment, and obtains a signature of knowledge from \(\mathcal {O}_{pp,\tau }\) in the extractability experiment that is identical to the signature viewed by \(\mathcal {A}\) in the \(\mathsf {BS}\) experiment. Therefore, \(\mathcal {B}\) perfectly simulates \(\mathcal {O}\mathsf {vote}\) to \(\mathcal {A}\). Furthermore, \(\mathcal {B}\) perfectly simulates \(\mathcal {O}\mathsf {cast}\) to \(\mathcal {A}\) and, if event \(\mathsf {Forge}\) occurs, \(\mathcal {A}\) successfully creates a valid signature of knowledge without witness \((sk_{id},r)\); therefore, \(\mathcal {B}\) can output this signature in the extractability experiment and succeeds. By assumption, the signature of knowledge \(\mathsf {SOK}\) satisfies extractability, and hence we conclude that \(\Pr [\mathsf {Forge}]\) is negligible.

We now show that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\). That is, we show that, if \(\mathcal {A}\) succeeds in the \(\mathsf {BS}\) experiment without event \(\mathsf {Forge}\) occurring, we can use \(\mathcal {A}\) to construct an adversary \(\mathcal {B}'\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment, where \(\mathcal {B}'\) plays the role of the challenger in the \(\mathsf {BS}\) experiment and \(\mathcal {C}\) is the challenger in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment against scheme \(\varPi \). In detail, we construct adversary \(\mathcal {B}'\) as follows.

  1. \(\mathcal {B}'\) obtains a public key \(pk_\varPi \) for public key encryption scheme \(\varPi \) from \(\mathcal {C}\), runs \(\mathsf {SimSetup}\) to obtain public parameters pp and trapdoor \(\tau \), and performs the setup of \(\varGamma _{\mathsf {mini}}\), providing \(\mathcal {A}\) with pk. \(\mathcal {B}'\) additionally performs the initialisation steps of the \(\mathsf {BS}\) experiment.

  2. \(\mathcal {B}'\) answers queries to oracles \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\), \(\mathcal {O}\mathsf {cast}\) and \(\mathcal {O}\mathsf {board}\) as described in the \(\mathsf {BS}\) experiment.

  3. For queries \(\mathcal {O}\mathsf {vote}(pk_{id},v_0,v_1)\), \(\mathcal {B}'\) queries \((m_0=v_0,m_1=v_1)\) to oracle \(\mathcal {O}\mathsf {Encrypt}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment and receives a ciphertext c of \(m_\beta \). \(\mathcal {B}'\) then computes \(\sigma \leftarrow \mathsf {SimSign}(pp,(c,pk_\varPi ,pk_{id}),\tau ,c)\) and appends ballot \(b=(pk_{id},c,\sigma )\) to \(\mathcal {BB}\).

  4. \(\mathcal {B}'\) computes the election result. Throughout, \(\mathcal {B}'\) keeps track of tuples \((pk_{id},b)\) queried by \(\mathcal {A}\) to \(\mathcal {O}\mathsf {cast}\) such that the query results in a ballot that will be included in the result. We denote by B the set of all such tuples \((pk_{id},b)\). \(\mathcal {B}'\) constructs a vector \(\mathbf {c}\) that consists of the ciphertext element of each ballot in B and submits \(\mathbf {c}\) to \(\mathcal {C}\). \(\mathcal {C}\) returns a vector of plaintexts \(\mathbf {m}\) (i.e., the plaintext votes encoded in ballots submitted to \(\mathcal {O}\mathsf {cast}\)) to \(\mathcal {B}'\). For each tuple \((pk_{id},b)\in B\), \(\mathcal {B}'\) replaces ballot b with the corresponding plaintext in vector \(\mathbf {m}\). \(\mathcal {B}'\) then computes the result r by computing the result function \(f(V_0\cup B)\).

  5. \(\mathcal {B}'\) returns the bit \(\beta '\) output by \(\mathcal {A}\).

We show that \(\mathcal {B}'\) perfectly simulates the role of the challenger in the \(\mathsf {BS}\) experiment to \(\mathcal {A}\). Trivially, \(\mathcal {B}'\) simulates oracles \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\), \(\mathcal {O}\mathsf {cast}\) and \(\mathcal {O}\mathsf {board}\) to \(\mathcal {A}\). Additionally, \(\mathcal {B}'\) answers queries to \(\mathcal {O}\mathsf {vote}\) by obtaining a ciphertext from \(\mathcal {O}\mathsf {Encrypt}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment that is identical to the encryption viewed by \(\mathcal {A}\) in the \(\mathsf {BS}\) experiment, and constructs a signature of knowledge that is identical to that viewed by \(\mathcal {A}\). Consequently, the ballot obtained by \(\mathcal {B}'\), and subsequently appended to the ballot box, is identical to the ballot computed by \(\mathcal {O}\mathsf {vote}\). Therefore, \(\mathcal {B}'\) perfectly simulates \(\mathcal {O}\mathsf {vote}\). Finally, \(\mathcal {B}'\) computes the result function for set \(V_0\) (and B). By the assumption that event \(\mathsf {Forge}\) does not occur, B cannot contain ballots meaningfully related to the votes of honest voters. Moreover, by the assumption that \(\mathcal {A}\) succeeds in the \(\mathsf {BS}\) experiment, \(V_0=V_1\) and, as such, \(\mathcal {B}'\) perfectly simulates algorithm \(\mathsf {Tally}\) to \(\mathcal {A}\). Moreover, whenever \(\mathcal {A}\) succeeds, the bit \(\beta '\) output by \(\mathcal {B}'\) is equal to the bit \(\beta \) chosen by \(\mathcal {C}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. In particular, if \(\mathcal {A}\) correctly guesses \(\beta '\) in the \(\mathsf {BS}\) experiment, \(\mathcal {A}\) correctly determines whether \(\mathcal {BB}\) contains ballots corresponding to left- or right-hand votes submitted via \(\mathcal {O}\mathsf {vote}\). As these ballots are created by calling \(\mathcal {O}\mathsf {Encrypt}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment, where the bit \(\beta \) is chosen by the \(\mathsf {NM}\text {-}\mathsf {CPA}\) challenger, \(\mathcal {B}'\) succeeds in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. However, by assumption, \(\varPi \) satisfies \(\mathsf {NM}\text {-}\mathsf {CPA}\) security and we conclude that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\).    \(\square \)

B.2 Proof of Theorem 2

The details of this result follow largely from the proof of Theorem 1. We let \(\mathcal {A}\) be an adversary in the \(\mathsf {mbbBS}\) experiment against \(\varGamma _{\mathsf {mini}}\), where \(\varGamma _{\mathsf {mini}}\) is the construction in Fig. 2. We assume that \(\mathcal {A}\) succeeds in the \(\mathsf {mbbBS}\) experiment with probability non-negligibly greater than \(\frac{1}{2}\); that is, \(\mathcal {A}\) outputs a bit \(\beta '\) such that \(\beta '=\beta \) and constructs ballot box \(\mathcal {BB}\) such that \(V_0=V_1\). Throughout the experiment, \(\mathcal {A}\) obtains information about \(\beta \) through ballots output by \(\mathcal {O}\mathsf {vote}\) and the election result r.

Let \(\mathsf {Forge}\) denote the event that \(\mathcal {A}\) posts a valid ballot \(b=(pk_{id},c,\sigma )\) to \(\mathcal {BB}\) where \(pk_{id}\in \mathcal {L}\setminus \mathsf {corr}\mathcal {L}\) and b is not the output of \(\mathcal {O}\mathsf {vote}\). We write that b is valid if \(\mathsf {SoK.Verify}(pp,(c,pk_\varPi ,pk_{id}),c,\sigma )=1\). Moreover, let \(\mathsf {Success}\) denote the event that the \(\mathsf {mbbBS}\) experiment returns 1. As before,

\(\Pr [\mathsf {Success}]\le \Pr [\mathsf {Forge}]+\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\).

We show that \(\Pr [\mathsf {Forge}]\) is negligible and that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\) for some negligible function \(\mathsf {negl}\). The result of the theorem then follows.

First, we show that \(\Pr [\mathsf {Forge}]\) is negligible. In fact, we show that, if \(\mathcal {A}\) can post a valid ballot to \(\mathcal {BB}\) for an honest voter (without calling \(\mathcal {O}\mathsf {vote}\)), then \(\mathcal {A}\) can be used to construct an adversary \(\mathcal {B}\) in the extractability experiment, where \(\mathcal {B}\) plays the role of the challenger in the \(\mathsf {mbbBS}\) experiment and \(\mathcal {C}\) is the challenger in the extractability experiment. The detailed construction of \(\mathcal {B}\) is very similar to the adversary \(\mathcal {B}\) described in the proof of Theorem 1, and we refer the reader to this proof for full details. We describe the following changes to the adversary \(\mathcal {B}\):

  1. In step 2, \(\mathcal {B}\) does not answer queries to oracles \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\) or \(\mathcal {O}\mathsf {board}\), as \(\mathcal {A}\) does not have access to these oracles in the \(\mathsf {mbbBS}\) experiment.

  2. In step 4, by the assumption that event \(\mathsf {Forge}\) occurs, \(\mathsf {SoK.Verify}\) returns 1 for at least one ballot, which we denote \(b^*=(pk_{id}^*,c^*,\sigma ^*)\), that appears on \(\mathcal {BB}\) such that \(pk_{id}^*\in \mathcal {L}\setminus \mathsf {corr}\mathcal {L}\) and \(b^*\) is not the output of \(\mathcal {O}\mathsf {vote}\). Then, \(\mathcal {B}\) outputs \(((c^*,pk_\varPi ,pk_{id}^*),c^*,\sigma ^*)\) in the extractability experiment.

\(\mathcal {B}\) perfectly simulates the role of the challenger in the \(\mathsf {mbbBS}\) experiment to \(\mathcal {A}\). As in the proof of Theorem 1, \(\mathcal {B}\) perfectly simulates \(\mathcal {O}\mathsf {vote}\) to \(\mathcal {A}\). Furthermore, if event \(\mathsf {Forge}\) occurs, \(\mathcal {A}\) successfully creates a valid signature without witness \((sk_{id},r)\); therefore, \(\mathcal {B}\) can output this signature in the extractability experiment and succeeds. By assumption, the signature of knowledge \(\mathsf {SOK}\) satisfies extractability, and hence we conclude that \(\Pr [\mathsf {Forge}]\) is negligible.

We now show that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\). That is, we show that, if \(\mathcal {A}\) succeeds in the \(\mathsf {mbbBS}\) experiment without event \(\mathsf {Forge}\) occurring, we can use \(\mathcal {A}\) to construct an adversary \(\mathcal {B}'\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment, where \(\mathcal {B}'\) plays the role of the challenger in the \(\mathsf {mbbBS}\) experiment and \(\mathcal {C}\) is the challenger in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. Again, the detailed construction of \(\mathcal {B}'\) is very similar to the adversary \(\mathcal {B}'\) described in the proof of Theorem 1, and we describe the following changes to \(\mathcal {B}'\):

  1. Step 2 is no longer required, as \(\mathcal {A}\) does not have access to oracles \(\mathcal {O}\mathsf {reg}\), \(\mathcal {O}\mathsf {corrupt}\) or \(\mathcal {O}\mathsf {board}\) in the \(\mathsf {mbbBS}\) experiment.

  2. For queries to \(\mathcal {O}\mathsf {vote}(pk_{id},v_0,v_1)\), rather than appending ballot b to \(\mathcal {BB}\), \(\mathcal {B}'\) outputs b to \(\mathcal {A}\).

  3. To compute the election result, \(\mathcal {B}'\) creates a set B that consists of tuples \((pk_{id},b)\) such that ballot b, submitted on behalf of voter credential \(pk_{id}\), appears on \(\mathcal {BB}\), is not an output of \(\mathcal {O}\mathsf {vote}\), and will be included in the result (i.e., is the last ballot cast for voter \(pk_{id}\)). \(\mathcal {B}'\) then constructs vector \(\mathbf {c}\) from set B and proceeds to compute the election result as described in Step 4 in the proof of Theorem 1.

\(\mathcal {B}'\) perfectly simulates the role of the challenger in the \(\mathsf {mbbBS}\) experiment to \(\mathcal {A}\). In particular, the ballot output by \(\mathcal {B}'\) following a query to \(\mathcal {O}\mathsf {vote}\) is identical to the ballot output by \(\mathcal {O}\mathsf {vote}\), because \(\mathcal {B}'\) obtains the ballot by querying \(\mathcal {O}\mathsf {Encrypt}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. Moreover, as in the proof of Theorem 1, \(\mathcal {B}'\) perfectly simulates algorithm \(\mathsf {Tally}\) to \(\mathcal {A}\), as we assume that \(V_0=V_1\) and, as such, the results computed for \(\beta =0\) and \(\beta =1\) are identical. \(\mathcal {B}'\) outputs the bit \(\beta '\) returned by \(\mathcal {A}\) in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. That is, if \(\mathcal {A}\) outputs \(\beta '=\beta \) in the \(\mathsf {mbbBS}\) experiment, then \(\mathcal {A}\) correctly determines whether \(\mathcal {O}\mathsf {vote}\) returns a ballot corresponding to the left- or right-hand vote. As the ballot is constructed by calling \(\mathcal {O}\mathsf {Encrypt}\), where \(\beta \) is chosen by the challenger \(\mathcal {C}\), \(\mathcal {B}'\) succeeds in the \(\mathsf {NM}\text {-}\mathsf {CPA}\) experiment. By assumption, \(\varPi \) satisfies \(\mathsf {NM}\text {-}\mathsf {CPA}\) security and we conclude that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\).    \(\square \)

B.3 Proof of Theorem 3

The details of this result follow largely from the proof of Theorem 2, and we focus on the changes to that proof. We let \(\mathcal {A}\) be an adversary in the \(\mathsf {dtBS}\) experiment against \(\varGamma _{\mathsf {mini}}'\), where \(\varGamma _{\mathsf {mini}}'\) is the construction described in Sect. 4.2. We assume that \(\mathcal {A}\) succeeds in the \(\mathsf {dtBS}\) experiment with probability non-negligibly greater than \(\frac{1}{2}\). We define events \(\mathsf {Forge}\) and \(\mathsf {Success}\) as in the proof of Theorem 2, and we show that \(\Pr [\mathsf {Forge}]\) is negligible and that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\) for some negligible function \(\mathsf {negl}\). The result of the theorem then follows.

First, we show that \(\Pr [\mathsf {Forge}]\) is negligible. This part of the proof is identical to the proof of Theorem 2, with the following exceptions. In step 1 of the description of adversary \(\mathcal {B}\), \(\mathcal {B}\) runs \(\varPhi .\mathsf {Gen}\) and provides \(\mathcal {A}\) with the \(t-1\) private keys of \(\mathcal {A}\)'s choice. Additionally, \(\mathcal {B}\) computes the result using the \(t-1\) private keys given to \(\mathcal {A}\) plus one other private key. As in the proof of Theorem 2, \(\mathcal {B}\) perfectly simulates the role of the challenger in the \(\mathsf {dtBS}\) experiment to \(\mathcal {A}\). In particular, \(\mathcal {B}\) generates the keys for \(\varPhi \) and provides \(\mathcal {A}\) with \(t-1\) private keys as expected. This concludes the first part of the proof.

We now show that \(\Pr [\mathsf {Success}\wedge \lnot \mathsf {Forge}]\le \frac{1}{2}+\mathsf {negl}(\lambda )\). We describe the following change to adversary \(\mathcal {B}'\). In step 1, \(\mathcal {B}'\) requests from \(\mathcal {C}\) the \(t-1\) private keys that \(\mathcal {A}\) requests, and \(\mathcal {B}'\) returns the private keys provided by \(\mathcal {C}\) to \(\mathcal {A}\). Specific to the proof of this result, \(\mathcal {B}'\) returns private keys to \(\mathcal {A}\) that are identical to the private keys output to \(\mathcal {A}\) in the \(\mathsf {dtBS}\) experiment. Therefore, the second part of the proof holds.    \(\square \)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Fraser, A., Quaglia, E.A. (2021). Protecting the Privacy of Voters: New Definitions of Ballot Secrecy for E-Voting. In: Dunkelman, O., Jacobson, Jr., M.J., O'Flynn, C. (eds) Selected Areas in Cryptography. SAC 2020. Lecture Notes in Computer Science, vol 12804. Springer, Cham. https://doi.org/10.1007/978-3-030-81652-0_26


  • DOI: https://doi.org/10.1007/978-3-030-81652-0_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81651-3

  • Online ISBN: 978-3-030-81652-0

