
Watermarking PRFs and PKE Against Quantum Adversaries

  • Research Article
  • Journal of Cryptology

Abstract

We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as pirate software that potentially removes an embedded message from classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input–output) behavior of classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries.

In this work, we define secure watermarking PRFs and PKE for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs and one watermarking PKE as follows.

  • We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit.

  • We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter).

  • We construct a publicly extractable watermarking PKE against quantum adversaries from standard PKE. The marking algorithm can directly generate a marked decryption circuit from a decryption key, and the extraction algorithm uses a public key of the PKE scheme for extraction.

We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notions of extraction-less watermarking PRFs and PKE as crucial building blocks; we achieve the results above by combining these tools with our quantum extraction technique.


Notes

  1. Precisely speaking, Aaronson et al. achieve copy detection schemes [5], which are essentially the same as secure software leasing schemes.

  2. Leased software must be a quantum state since classical bit strings can be easily copied.

  3. This definitional choice comes from the definition of traceable PRFs [26]. See Sects. 1.3 and 1.4 for the detail.

  4. In this paper, standard math font stands for classical algorithms, and calligraphic font stands for quantum algorithms.

  5. In the actual extraction process, we use an approximation of projective implementation introduced by Zhandry [63] since applying a projective implementation is inefficient. In this overview, we ignore this issue for simplicity.

  6. Their construction supports public marking in the random oracle model.

  7. A valid software must run on a legitimate platform. For example, a video game title of Xbox must run on Xbox.

  8. We did not provide the detail of our watermarking PKE in the proceedings version [33]. However, it is easy to apply our extraction technique to PKE as we see in Sect. 9.

  9. The superscript parts are gray colored.

  10. We use superscript b to denote that it is associated with the outcome b here.

  11. Even if we consider a weaker adversary model, the same issue appears in the quantum setting in the end. If we run a quantum circuit on an input and measure the output, the measurement could irreversibly alter the quantum state, and we lose the functionality of the original quantum state. That is, there is no guarantee that we can correctly check whether a tested quantum circuit is marked or not after we obtain a single valid input–output pair by running the circuit. However, as we explained above, we want to obtain information related to a target PRF for extraction. Thus, we need a public tag in the syntax in either case.

  12. In the watermarking setting, an extraction algorithm can take the description of a pirate circuit as input (corresponding to the software decoder model [63, Sect. 4.2]), unlike the black-box tracing model of traitor tracing. However, we use a pirate circuit in the black box way for our extraction algorithms. Thus, we follow the black box projection model by Zhandry [63].

  13. See Appendix B.4 for the details of the issue.

  14. In fact, \(\textsf{PRF}_\textsf{io}\) satisfies a stronger evaluation correctness than one written in Definition 4.1. The evaluation correctness holds even for any PRF key \(\textsf{prfk}\) and input \(x \in \textsf{Dom}\) like the statistical correctness by Cohen et al. [19].

  15. We use the term “message” for watermarking messages and “plaintext” for encryption messages in this work.

References

  1. A. Ambainis, M. Hamburg, D. Unruh, Quantum security proofs using semi-classical oracles, in A. Boldyreva, D. Micciancio, editors, CRYPTO 2019, Part II, Volume 11693 of LNCS (Springer, Heidelberg, 2019), pp. 269–295

  2. S. Agrawal, F. Kitagawa, R. Nishimaki, S. Yamada, T. Yamakawa, Public key encryption with secure key leasing, in C. Hazay, M. Stam, editors, EUROCRYPT 2023, Part I, Volume 14004 of LNCS (Springer, Heidelberg, 2023), pp. 581–610

  3. J. Alwen, S. Krenn, K. Pietrzak, D. Wichs, Learning with rounding, revisited—new reduction, properties and applications, in R. Canetti, J. A. Garay, editors, CRYPTO 2013, Part I, Volume 8042 of LNCS (Springer, Heidelberg, 2013), pp. 57–74

  4. P. Ananth, R. L. La Placa, Secure software leasing, in A. Canteaut, F.-X. Standaert, editors, EUROCRYPT 2021, Part II, Volume 12697 of LNCS (Springer, Heidelberg, 2021), pp. 501–530

  5. S. Aaronson, J. Liu, Q. Liu, M. Zhandry, R. Zhang, New approaches for quantum copy-protection, in T. Malkin, C. Peikert, editors, CRYPTO 2021, Part I, Volume 12825 of LNCS, Virtual Event (Springer, Heidelberg, 2021), pp. 526–555

  6. S. Agrawal, A. Pellet-Mary, Indistinguishability obfuscation without maps: attacks and fixes for noisy linear FE, in A. Canteaut, Y. Ishai, editors, EUROCRYPT 2020, Part I, Volume 12105 of LNCS (Springer, Heidelberg, 2020), pp. 110–140

  7. A. Ambainis, A. Rosmanis, D. Unruh. Quantum attacks on classical proof systems: the hardness of quantum rewinding, in 55th FOCS (IEEE Computer Society Press, 2014), pp. 474–483

  8. D. Boneh, Ö. Dagdelen, M. Fischlin, A. Lehmann, C. Schaffner, M. Zhandry, Random oracles in a quantum world, in D. H. Lee, X. Wang, editors, ASIACRYPT 2011, Volume 7073 of LNCS (Springer, Heidelberg, 2011), pp. 41–69

  9. B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. P. Vadhan, K. Yang, On the (im)possibility of obfuscating programs. J. ACM 59(2), 6:1–6:48 (2012)

  10. E. Boyle, S. Goldwasser, I. Ivan, Functional signatures and pseudorandom functions, in H. Krawczyk, editor, PKC 2014, Volume 8383 of LNCS (Springer, Heidelberg, 2014), pp. 501–519

  11. J. Bartusek, J. Guan, F. Ma, M. Zhandry. Return of GGH15: Provable security against zeroizing attacks, in A. Beimel, S. Dziembowski, editors, TCC 2018, Part II, Volume 11240 of LNCS (Springer, Heidelberg, 2018), pp. 544–574

  12. N. Bindel, M. Hamburg, K. Hövelmanns, A. Hülsing, E. Persichetti, Tighter proofs of CCA security in the quantum random oracle model, in D. Hofheinz, A. Rosen, editors, TCC 2019, Part II, Volume 11892 of LNCS (Springer, Heidelberg, 2019), pp. 61–90

  13. D. Boneh, K. Lewi, D. J. Wu, Constraining pseudorandom functions privately, in S. Fehr, editor, PKC 2017, Part II, Volume 10175 of LNCS (Springer, Heidelberg, 2017), pp. 494–524

  14. D. Boneh, A. Sahai, B. Waters, Fully collusion resistant traitor tracing with short ciphertexts and private keys, in S. Vaudenay, editor, EUROCRYPT 2006, Volume 4004 of LNCS (Springer, Heidelberg, 2006), pp. 573–592

  15. Z. Brakerski, R. Tsabary, V. Vaikuntanathan, H. Wee, Private constrained PRFs (and more) from LWE, in Y. Kalai, L. Reyzin, editors, TCC 2017, Part I, Volume 10677 of LNCS (Springer, Heidelberg, 2017), pp. 264–302

  16. D. Boneh, B. Waters, Constrained pseudorandom functions and their applications, in K. Sako, P. Sarkar, editors, ASIACRYPT 2013, Part II, Volume 8270 of LNCS (Springer, Heidelberg, 2013), pp. 280–300

  17. R. Canetti, Y. Chen, Constraint-hiding constrained PRFs for \(\text{NC}^{1}\) from LWE, in J.-S. Coron, J. B. Nielsen, editors, EUROCRYPT 2017, Part I, Volume 10210 of LNCS (Springer, Heidelberg, 2017), pp. 446–476

  18. B. Chor, A. Fiat, M. Naor, Tracing traitors, in Y. Desmedt, editor, CRYPTO’94, Volume 839 of LNCS (Springer, Heidelberg, 1994), pp. 257–270

  19. A. Cohen, J. Holmgren, R. Nishimaki, V. Vaikuntanathan, D. Wichs, Watermarking cryptographic capabilities. SIAM J. Comput. 47(6), 2157–2202 (2018)


  20. Y. Chen, M. Hhan, V. Vaikuntanathan, H. Wee, Matrix PRFs: constructions, attacks, and applications to obfuscation, in D. Hofheinz, A. Rosen, editors, TCC 2019, Part I, Volume 11891 of LNCS (Springer, Heidelberg, 2019), pp. 55–80

  21. A. Chiesa, F. Ma, N. Spooner, M. Zhandry, Post-quantum succinct arguments: breaking the quantum rewinding barrier, in N. Vishnoi, editor, FOCS 2021 (to appear) (IEEE, 2021)

  22. L. Devadas, W. Quach, V. Vaikuntanathan, H. Wee, D. Wichs, Succinct LWE sampling, random polynomials and obfuscation, in K. Nissim, B. Waters, editors, TCC 2021, LNCS (Springer, 2021)

  23. O. Goldreich, S. Goldwasser, S. Micali, How to construct random functions. J. ACM 33(4), 792–807 (1986)


  24. R. Goyal, S. Kim, N. Manohar, B. Waters, D. J. Wu, Watermarking public-key cryptographic primitives, in A. Boldyreva, D. Micciancio, editors, CRYPTO 2019, Part III, Volume 11694 of LNCS (Springer, Heidelberg, 2019), pp. 367–398

  25. R. Goyal, V. Koppula, B. Waters, New approaches to traitor tracing with embedded identities, in D. Hofheinz and A. Rosen, editors, TCC 2019, Part II, Volume 11892 of LNCS (Springer, Heidelberg, 2019), pp. 149–179

  26. R. Goyal, S. Kim, B. Waters, D. J. Wu, Beyond software watermarking: traitor-tracing for pseudorandom functions, in M. Tibouchi, H. Wang, editors, Asiacrypt 2021 (to appear), Lecture Notes in Computer Science (Springer, 2021)

  27. R. Gay, R. Pass, Indistinguishability obfuscation from circular security, in S. Khuller, V. V. Williams, editors, 53rd ACM STOC (ACM Press, 2021), pp. 736–749

  28. S. Gorbunov, V. Vaikuntanathan, H. Wee, Functional encryption with bounded collusions via multi-party computation, in R. Safavi-Naini, R. Canetti, editors, CRYPTO 2012, Volume 7417 of LNCS (Springer, Heidelberg, 2012), pp. 162–179

  29. J. Håstad, R. Impagliazzo, L. A. Levin, M. Luby, A pseudorandom generator from any one-way function. SIAM J. Comput. 28(4), 1364–1396 (1999)


  30. S. B. Hopkins, A. Jain, H. Lin, Counterexamples to new circular security assumptions underlying iO, in T. Malkin, C. Peikert, editors, CRYPTO 2021, Part II, Volume 12826 of LNCS, Virtual Event (Springer, Heidelberg, 2021), pp. 673–700

  31. N. Hopper, D. Molnar, D. Wagner, From weak to strong watermarking, in S. P. Vadhan, editor, TCC 2007, Volume 4392 of LNCS (Springer, Heidelberg, 2007), pp. 362–382

  32. C. Jordan, Essai sur la géométrie à \(n\) dimensions. Bull. Soc. Math. France 3, 103–174 (1875)


  33. F. Kitagawa, R. Nishimaki, Watermarking PRFs against quantum adversaries, in O. Dunkelman, S. Dziembowski, editors, EUROCRYPT 2022, Part III, Volume 13277 of LNCS (Springer, Heidelberg, 2022), pp. 488–518

  34. F. Kitagawa, R. Nishimaki, One-out-of-many unclonable cryptography: definitions, constructions, and more, in G. N. Rothblum, H. Wee, editors, Theory of Cryptography—21st International Conference, TCC 2023, Taipei, Taiwan, November 29–December 2, 2023, Proceedings, Part IV, Volume 14372 of Lecture Notes in Computer Science (Springer, 2023), pp. 246–275

  35. F. Kitagawa, R. Nishimaki, T. Yamakawa, Secure software leasing from standard assumptions, in K. Nissim, B. Waters, editors, TCC 2021, LNCS (Springer, 2021)

  36. A. Kiayias, S. Papadopoulos, N. Triandopoulos, T. Zacharias, Delegatable pseudorandom functions and applications, in A.-R. Sadeghi, V. D. Gligor, M. Yung, editors, ACM CCS 2013 (ACM Press, 2013), pp. 669–684

  37. S. Kim, D. J. Wu, Watermarking PRFs from lattices: Stronger security via extractable PRFs, in A. Boldyreva, D. Micciancio, editors, CRYPTO 2019, Part III, Volume 11694 of LNCS (Springer, Heidelberg, 2019), pp. 335–366

  38. S. Kim, D. J. Wu, Watermarking cryptographic functionalities from standard lattice assumptions. J. Cryptol. 34(3), 28 (2021)


  39. C. Marriott, J. Watrous, Quantum Arthur–Merlin games. Comput. Complex. 14(2), 122–152 (2005)


  40. M. Naor, Bit commitment using pseudorandomness. J. Cryptol. 4(2), 151–158 (1991)


  41. R. Nishimaki, How to watermark cryptographic functions, in T. Johansson, P. Q. Nguyen, editors, EUROCRYPT 2013, Volume 7881 of LNCS (Springer, Heidelberg, 2013), pp. 111–125

  42. R. Nishimaki, How to watermark cryptographic functions by bilinear maps. IEICE Trans. 102-A(1), 99–113 (2019)

  43. R. Nishimaki, Equipping public-key cryptographic primitives with watermarking (or: A hole is to watermark), in R. Pass, K. Pietrzak, editors, TCC 2020, Part I, Volume 12550 of LNCS, (Springer, Heidelberg, 2020), pp. 179–209

  44. D. Naccache, A. Shamir, J. P. Stern, How to copyright a function? in H. Imai, Y. Zheng, editors, PKC’99, Volume 1560 of LNCS (Springer, Heidelberg, 1999), pp. 188–196

  45. R. Nishimaki, D. Wichs, M. Zhandry, Anonymous traitor tracing: How to embed arbitrary information in a key, in M. Fischlin, J.-S. Coron, editors, EUROCRYPT 2016, Part II, Volume 9666 of LNCS (Springer, Heidelberg, 2016), pp. 388–419

  46. C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem: extended abstract, in M. Mitzenmacher, editor, 41st ACM STOC (ACM Press, 2009), pp. 333–342

  47. C. Peikert, S. Shiehian, Privately constraining and programming PRFs, the LWE way, in M. Abdalla, R. Dahab, editors, PKC 2018, Part II, Volume 10770 of LNCS (Springer, Heidelberg, 2018), pp. 675–701

  48. C. Peikert, B. Waters, Lossy trapdoor functions and their applications. SIAM J. Comput. 40(6), 1803–1844 (2011)


  49. W. Quach, D. Wichs, G. Zirdelis, Watermarking PRFs under standard assumptions: public marking and security with extraction queries, in A. Beimel, S. Dziembowski, editors, TCC 2018, Part II, Volume 11240 of LNCS (Springer, Heidelberg, 2018), pp. 669–698

  50. O. Regev, Witness-preserving amplification of QMA (lecture notes). https://cims.nyu.edu/~regev/teaching/quantum_fall_2005/ln/qma.pdf

  51. O. Regev, On lattices, learning with errors, random linear codes, and cryptography. J. ACM 56(6), 34:1–34:40 (2009)

  52. A. Sahai, B. Waters, How to use indistinguishability obfuscation: deniable encryption, and more. SIAM J. Comput., 50(3), 857–908, (2021)


  53. D. Unruh, Quantum proofs of knowledge, in D. Pointcheval, T. Johansson, editors, EUROCRYPT 2012, Volume 7237 of LNCS (Springer, Heidelberg, 2012), pp. 135–152

  54. J. Watrous, Zero-knowledge against quantum attacks. SIAM J. Comput., 39(1), 25–58, (2009)


  55. H. Wee, D. Wichs, Candidate obfuscation via oblivious LWE sampling, in A. Canteaut, F.-X. Standaert, editors, EUROCRYPT 2021, Part III, Volume 12698 of LNCS (Springer, Heidelberg, 2021), pp. 127–156

  56. R. Yang, M. H. Au, J. Lai, Q. Xu, Z. Yu, Collusion resistant watermarking schemes for cryptographic functionalities, in S. D. Galbraith, S. Moriai, editors, ASIACRYPT 2019, Part I, Volume 11921 of LNCS (Springer, Heidelberg, 2019), pp. 371–398

  57. R. Yang, M. H. Au, Z. Yu, Q. Xu, Collusion resistant watermarkable PRFs from standard assumptions, in D. Micciancio, T. Ristenpart, editors, CRYPTO 2020, Part I, Volume 12170 of LNCS, (Springer, Heidelberg, 2020), pp. 590–620

  58. M. Yoshida, T. Fujiwara, Toward digital watermarking for cryptographic data. IEICE Trans. 94-A(1), 270–272 (2011)

  59. R. Yang, Z. Yu, M. H. Au, W. Susilo, Public-key watermarking schemes for pseudorandom functions, in Y. Dodis, T. Shrimpton, editors, CRYPTO 2022, Part II, Volume 13508 of LNCS (Springer, Heidelberg, 2022), pp. 637–667

  60. M. Zhandry, How to construct quantum random functions, in 53rd FOCS (IEEE Computer Society Press, 2012), pp. 679–687

  61. M. Zhandry, Secure identity-based encryption in the quantum random oracle model, in R. Safavi-Naini, R. Canetti, editors, CRYPTO 2012, Volume 7417 of LNCS (Springer, Heidelberg, 2012), pp. 758–775

  62. M. Zhandry, How to record quantum queries, and applications to quantum indifferentiability, in A. Boldyreva, D. Micciancio, editors, CRYPTO 2019, Part II, Volume 11693 of LNCS (Springer, Heidelberg, 2019), pp. 239–268

  63. M. Zhandry, Schrödinger’s pirate: how to trace a quantum decoder, in R. Pass, K. Pietrzak, editors, TCC 2020, Part III, Volume 12552 of LNCS (Springer, Heidelberg, 2020), pp. 61–91

  64. M. Zhandry, Tracing quantum state distinguishers via backtracking, in H. Handschuh, A. Lysyanskaya, editors, Advances in Cryptology—CRYPTO 2023—43rd Annual International Cryptology Conference, CRYPTO 2023, Santa Barbara, CA, USA, August 20-24, 2023, Proceedings, Part V, Volume 14085 of Lecture Notes in Computer Science (Springer, 2023), pp. 3–36


Author information

Corresponding author: Ryo Nishimaki.

Additional information

Communicated by Jonathan Katz.


This paper was reviewed by Jiahui Liu and Zuoxia Yu.

A preliminary version of this work appeared in the proceedings of Eurocrypt 2022 [33]. This paper is the revised full version of it.

Appendices

Appendix A: Achieving QSIM-MDD from SIM-MDD

We prove Theorem 4.6, that is, we show that we can transform an extraction-less watermarking PRF satisfying SIM-MDD security with private simulation into one satisfying QSIM-MDD security with private simulation by using a QPRF. Before the proof, we introduce the semi-classical one-way-to-hiding (O2H) lemma.

1.1 Appendix A.1: Semi-classical One-Way to Hiding (O2H) Lemma

We recall a few lemmas.

Definition A.1

(Punctured oracle). Let \(F:X\rightarrow Y\) be any function, and \(S\subset X\) be a set. The oracle \(F\setminus S\) (“F punctured by S”) takes as input a value \(x\in X\). It first computes whether \(x\in S\) into an auxiliary register and measures it. Then it computes F(x) and returns the result. Let \(\texttt{Find}\) be the event that any of the measurements returns 1.
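For intuition, the following Python snippet emulates a punctured oracle for purely classical queries: membership in S is recorded as the \(\texttt{Find}\) flag, and F is then evaluated as usual. This is only an illustration of the bookkeeping; in the actual definition the membership bit is computed into an auxiliary register and measured, so superposition queries are disturbed, which a classical simulation cannot capture. All names are illustrative.

```python
class PuncturedOracle:
    """Classical-query illustration of the punctured oracle F \\ S."""

    def __init__(self, F, S):
        self.F = F          # underlying function F: X -> Y
        self.S = set(S)     # punctured set S
        self.find = False   # whether the Find event has occurred

    def query(self, x):
        if x in self.S:     # analogue of measuring the indicator of "x in S"
            self.find = True
        return self.F(x)    # then compute F(x) and return the result


# Toy usage: puncture a table-backed function at the point 1.
F = {0: "a", 1: "b", 2: "c"}.get
oracle = PuncturedOracle(F, S={1})
print(oracle.query(0), oracle.find)   # a False
print(oracle.query(1), oracle.find)   # b True  (Find has occurred)
```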

Lemma A.2

(Semi-classical O2H [1, Theorem 1]). Let \(G,H:X\rightarrow Y\) be random functions, z be a random value, and \(S\subseteq X\) be a random set such that \(G(x)=H(x)\) for every \(x\notin S\). The tuple (G, H, S, z) may have arbitrary joint distribution. Furthermore, let \(\mathcal {A}\) be a quantum oracle algorithm that makes at most q quantum queries. Let \(\texttt{Ev}\) be any classical event. Then we have

$$\begin{aligned} \left| \Pr [\texttt{Ev}:\mathcal {A}^{H}(z)]-\Pr [\texttt{Ev}:\mathcal {A}^{G}(z)]\right| \le 2\sqrt{(q+1)\cdot \Pr [\texttt{Find}:\mathcal {A}^{G\setminus S}(z)]}. \end{aligned}$$
Lemma A.3

(Search in semi-classical oracle [1, Theorem 2]). Let \(H:X\rightarrow Y\) be a random function, let z be a random value, and let \(S\subset X\) be a random set. (H, S, z) may have arbitrary joint distribution. Let \(\mathcal {A}\) be a quantum oracle algorithm. If for each \(x\in X\), \(\Pr [x\in S]\le \epsilon \) (conditioned on H and z), then we have

$$\begin{aligned} \Pr [\texttt{Find}:\mathcal {A}^{H\setminus S}(z)]\le 4q\epsilon , \end{aligned}$$

where q is the number of queries to H by \(\mathcal {A}\).

Note that the above lemma is originally introduced in [1], but we use a variant that is closer to Lemma 4 in [12].

1.2 Appendix A.2: Proof

Construction.   We start with the construction. Let \(\textsf{ELWMPRF}=(\textsf{Setup},\textsf{Gen},\textsf{Eval},\textsf{Mark},\textsf{Sim})\) be an extraction-less watermarking PRF scheme satisfying SIM-MDD security with private simulation. We also let the message space of \(\textsf{ELWMPRF}\) be \(\{0,1\}^{{\ell _{\textsf{m}}}}\). Let \(\textsf{PRF}\) be a QPRF with domain \(\{0,1\}^{\lambda }\) and range \(\mathcal {R}_{\textsf{Sim}}\), which is the randomness space of \(\textsf{Sim}\). We construct an extraction-less watermarking PRF scheme \(\textsf{QELWMPRF}=(\textsf{QEL}.\textsf{Setup},\textsf{QEL}.\textsf{Gen},\textsf{QEL}.\textsf{Eval},\textsf{QEL}.\textsf{Mark},\textsf{QEL}.\textsf{Sim})\) satisfying QSIM-MDD security with private simulation as follows. We use \(\textsf{Gen}\), \(\textsf{Eval}\), and \(\textsf{Mark}\) as \(\textsf{QEL}.\textsf{Gen}\), \(\textsf{QEL}.\textsf{Eval}\), and \(\textsf{QEL}.\textsf{Mark}\), respectively. The domain and range of \(\textsf{QELWMPRF}\) are the same as those of \(\textsf{ELWMPRF}\). The message space of \(\textsf{QELWMPRF}\) is \(\{0,1\}^{{\ell _{\textsf{m}}}}\). Also, we construct \(\textsf{QEL}.\textsf{Setup}\) and \(\textsf{QEL}.\textsf{Sim}\) as follows (a small code sketch of this wrapper is given after the description).

  • \(\textsf{QEL}.\textsf{Setup}(1^\lambda )\):

    • Generate \((\textsf{pp},\textsf{xk})\leftarrow \textsf{Setup}(1^\lambda )\).

    • Generate \(K\leftarrow \{0,1\}^\lambda \).

    • Output \((\textsf{pp},\textsf{qxk}:=(\textsf{xk},K))\).

  • \(\textsf{QEL}.\textsf{Sim}(\textsf{qxk},\tau , i;r)\):

    • Parse \((\textsf{xk},K)\leftarrow \textsf{qxk}\).

    • Output \((\gamma ,x,y) \leftarrow \textsf{Sim}(\textsf{xk},\tau ,i;\textsf{PRF}_K(r))\).
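The transformation only re-derives \(\textsf{Sim}\)'s randomness through the QPRF keyed by K. The following Python sketch mirrors this wrapper; HMAC-SHA256 stands in for \(\textsf{PRF}\), the underlying \(\textsf{Setup}\) and \(\textsf{Sim}\) are passed in as opaque callables, and all interfaces are illustrative rather than the paper's formal syntax.

```python
import hashlib
import hmac
import os

def prf(K: bytes, r: bytes) -> bytes:
    """Stand-in for the QPRF PRF_K: {0,1}^lambda -> R_Sim (HMAC-SHA256 here)."""
    return hmac.new(K, r, hashlib.sha256).digest()

def qel_setup(setup, lam: int = 32):
    """QEL.Setup(1^lambda): run Setup of ELWMPRF and additionally sample a PRF key K."""
    pp, xk = setup()
    K = os.urandom(lam)
    return pp, (xk, K)                      # qxk := (xk, K)

def qel_sim(sim, qxk, tau, i, r: bytes):
    """QEL.Sim(qxk, tau, i; r): parse qxk = (xk, K) and run Sim with randomness PRF_K(r)."""
    xk, K = qxk
    return sim(xk, tau, i, randomness=prf(K, r))

# Toy usage with placeholder Setup/Sim (purely illustrative).
toy_setup = lambda: ("pp", "xk")
toy_sim = lambda xk, tau, i, randomness: (0, randomness[:4], randomness[4:8])
pp, qxk = qel_setup(toy_setup)
print(qel_sim(toy_sim, qxk, tau="tag", i=1, r=os.urandom(32)))
```

As in the construction, fresh simulation randomness is obtained by sampling \(r\leftarrow \{0,1\}^{\lambda }\) and passing it to \(\textsf{QEL}.\textsf{Sim}\), which is exactly how the oracle \(O_{\texttt{sim}}\) behaves in the security analysis below.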

Security analysis.   Let \(i^*\in [{\ell _{\textsf{m}}}]\) and \(\mathcal {A}\) be any QPT adversary for QSIM-MDD security with private simulation making q queries in total to \(O_{\texttt{sim}}\) and \(O_{\texttt{api}}\). Let \(\texttt {SUC}_X\) be the event that the final output is 1 in Game X defined below. We prove that for any polynomial w, it holds that \(\left| \Pr [\texttt {SUC}_1]-1/2\right| \le 1/w(\lambda )+{\textsf{negl}}(\lambda )\). We prove it using hybrid games. We define a distribution \(\textsf{D}_{\tau ',i'}\) as

  • \(D_{\tau ',i'}\): Output \((\gamma ,x,y)\leftarrow \textsf{Sim}(\textsf{xk},\tau ',i')\).

  • Game 1: This is the original QSIM-MDD experiment with private simulation. Thus, we need to bound \(\left| \Pr [\texttt {SUC}_1]-1/2\right| \).

    • 1. The challenger generates \((\textsf{pp},\textsf{xk}) \leftarrow \textsf{Setup}(1^\lambda )\) and \(K\leftarrow \{0,1\}^\lambda \), and gives \(\textsf{pp}\) to \(\mathcal {A}\). \(\mathcal {A}\) sends \(\textsf{m}\in \{0,1\}^{{\ell _{\textsf{m}}}}\) to the challenger. The challenger generates \((\tau ,\textsf{prfk})\leftarrow \textsf{Gen}(\textsf{pp})\), computes \(\widetilde{C}\leftarrow \textsf{Mark}(\textsf{pp},\textsf{prfk},\textsf{m})\), and sends \(\widetilde{C}\) to \(\mathcal {A}\).

    • 2. \(\mathcal {A}\) can access the following oracles.

      • \(O_{\texttt{sim}}\): On input \(\tau '\) and \(i'\), it returns \(\textsf{Sim}(\textsf{xk},\tau ',i';\textsf{PRF}_K(r))\), where \(r\leftarrow \{0,1\}^\lambda \).

      • \(O_{\texttt{api}}\): On input \((\epsilon ,\delta ,\tau ',i')\) and a quantum state, it returns the result of applying the \((\epsilon ,\delta )\)-approximate projective implementation associated with \(\textsf{D}^{\textsf{PRF}}_{\tau ',i'}\) to the state, together with the post-measurement state, where \(\textsf{D}^{\textsf{PRF}}_{\tau ',i'}=\textsf{D}_{\tau ',i'}(\textsf{PRF}_K(\cdot ))\).

    • 3. The challenger generates \(\textsf{coin}\leftarrow \{0,1\}\). If \(\textsf{coin}=0\), the challenger samples \((\gamma ,x,y)\leftarrow D_{\texttt{real},i^*}\). If \(\textsf{coin}=1\), the challenger generates \((\gamma ,x,y)\leftarrow \textsf{Sim}(\textsf{xk},\tau ,i^*;\textsf{PRF}_K(r^*))\), where \(r^*\leftarrow \{0,1\}^\lambda \). The challenger sends \((\gamma ,x,y)\) to \(\mathcal {A}\).

    • 4. When \(\mathcal {A}\) terminates with output \(\textsf{coin}'\), the challenger outputs 1 if \(\textsf{coin}=\textsf{coin}'\) and 0 otherwise.

  • Game 2: This game is the same as Game 1 except that \(\textsf{PRF}_K\) is replaced with a quantum-accessible random function \(\textsf{R}\).

We have \(\left| \Pr [\texttt {SUC}_1]-\Pr [\texttt {SUC}_2]\right| ={\textsf{negl}}(\lambda )\) from the security of \(\textsf{PRF}\).

Game 3::

This game is the same as Game 2 except that \(\textsf{R}\) is replaced with

$$\begin{aligned} V(r)= {\left\{ \begin{array}{ll} v^* &{} (\text {if}~~ r=r^*)\\ \textsf{R}(r) &{} (\text {otherwise}), \end{array}\right. } \end{aligned}$$

where \(v^* \leftarrow \mathcal {R}_{\textsf{Sim}}\). We have \(\left| \Pr [\texttt {SUC}_2]-\Pr [\texttt {SUC}_3]\right| =0\).

Game 4::

This game is the same as Game 3 except for the following. When \(\mathcal {A}\) makes a query \(\tau '\) and \(i'\) to \(O_{\texttt{sim}}\), \(\textsf{Sim}(\textsf{xk},\tau ',i';\textsf{R}(r))\) is returned instead of \(\textsf{Sim}(\textsf{xk},\tau ',i';V(r))\). Also, when \(\mathcal {A}\) makes a query \((\epsilon ,\delta ,\tau ',i')\) to \(O_{\texttt{api}}\), the approximate projective implementation associated with \(\textsf{D}^{\textsf{R}}_{\tau ',i'}\) is applied instead of the one associated with \(\textsf{D}^{V}_{\tau ',i'}\), where \(\textsf{D}^{\textsf{R}}_{\tau ',i'}=\textsf{D}_{\tau ',i'}(\textsf{R}(\cdot ))\) and \(\textsf{D}^V_{\tau ',i'}=\textsf{D}_{\tau ',i'}(V(\cdot ))\).

By this change, V is now used only for generating the challenge tuple \((\gamma ,x,y)\leftarrow \textsf{Sim}(\textsf{xk},\tau ,i^*;V(r^*))=\textsf{Sim}(\textsf{xk},\tau ,i^*;v^*)\).

We have \(\left| \Pr [\texttt {SUC}_3]-\Pr [\texttt {SUC}_4]\right| =O(\sqrt{\frac{q^2}{2^\lambda }})\) from Lemmas A.2 and A.3.
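For completeness, here is a sketch of how this bound arises from the two lemmas (with the parameterization stated in Appendix A.1; the exact constants are irrelevant). The punctured set is \(S=\{r^*\}\), and \(r^*\) is uniform over \(\{0,1\}^{\lambda }\), so \(\Pr [x\in S]=2^{-\lambda }\) for every \(x\in \{0,1\}^{\lambda }\). Hence Lemma A.3 gives \(\Pr [\texttt{Find}]\le 4q\cdot 2^{-\lambda }\), and Lemma A.2 gives

$$\begin{aligned} \left| \Pr [\texttt {SUC}_3]-\Pr [\texttt {SUC}_4]\right| \le 2\sqrt{(q+1)\cdot \Pr [\texttt{Find}]}\le 2\sqrt{4q(q+1)\cdot 2^{-\lambda }}=O\!\left( \sqrt{\frac{q^2}{2^\lambda }}\right) . \end{aligned}$$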

Game 5::

This game is the same as Game 4 except that \(\textsf{R}\) is replaced with \(G\circ F\), where \(F:\{0,1\}^\lambda \rightarrow [s]\) and \(G:[s]\rightarrow \mathcal {R}_\textsf{Sim}\) are random functions and s is a polynomial of \(\lambda \) specified later.

Theorem A.4

(Small Range Distribution [60]). For any QPT adversary \(\mathcal {A}\) making q quantum queries to \(\textsf{R}\) or \(G\circ F\), we have \(\left| \Pr [\mathcal {A}^{\textsf{R}}(1^\lambda )=1]-\Pr [\mathcal {A}^{G\circ F}(1^\lambda )=1]\right| \le O(q^3/s)\).

By the above theorem, we have \(\left| \Pr [\texttt {SUC}_4]-\Pr [\texttt {SUC}_5]\right| =O(q^3/s)\).

We can simulate F using a 2q-wise independent function E by the following theorem.

Theorem A.5

[61]. For any QPT adversary \(\mathcal {A}\) making q quantum queries to F or E, we have \(\Pr [\mathcal {A}^{F}(1^\lambda )=1]=\Pr [\mathcal {A}^{E}(1^\lambda )=1]\).

We can efficiently simulate \(\textsf{D}_{\tau ',i'}(G(\cdot ))\) in Game 5 using s samples from \(\textsf{D}_{\tau ',i'}\) since \(\textsf{D}_{\tau ',i'}(G(\cdot ))\) can be interpreted as a table of s samples from \(\textsf{D}_{\tau ',i'}\). Then, from the SIM-MDD security with private simulation of \(\textsf{ELWMPRF}\), we have \(\left| \Pr [\texttt {SUC}_5]-1/2\right| ={\textsf{negl}}(\lambda )\). From the above, we also have \(\left| \Pr [\texttt {SUC}_1]-1/2\right| \le O(\sqrt{q^2/2^\lambda })+O(q^3/s)+\gamma (\lambda )\) for some negligible function \(\gamma \). Thus, by setting \(s=O(q^3\cdot w^2)\), we obtain \(\left| \Pr [\texttt {SUC}_1]-1/2\right| \le 1/w(\lambda )+{\textsf{negl}}(\lambda )\).
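In summary, combining the game hops via the triangle inequality with the bounds established above gives

$$\begin{aligned} \left| \Pr [\texttt {SUC}_1]-\tfrac{1}{2}\right|&\le \sum _{j=1}^{4}\left| \Pr [\texttt {SUC}_j]-\Pr [\texttt {SUC}_{j+1}]\right| +\left| \Pr [\texttt {SUC}_5]-\tfrac{1}{2}\right| \\&\le {\textsf{negl}}(\lambda )+0+O\!\left( \sqrt{\tfrac{q^2}{2^\lambda }}\right) +O\!\left( \tfrac{q^3}{s}\right) +{\textsf{negl}}(\lambda ), \end{aligned}$$

which is at most \(1/w(\lambda )+{\textsf{negl}}(\lambda )\) for \(s=O(q^3\cdot w^2)\) and sufficiently large \(\lambda \).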

Since w is an arbitrary polynomial, this means that \(\textsf{QELWMPRF}\) satisfies QSIM-MDD security with private simulation.

Remark A.6

It is easy to see that the extended weak pseudorandomness of \(\textsf{ELWMPRF}\) is preserved after we apply the transformation above since the evaluation algorithm is the same as that of \(\textsf{ELWMPRF}\) and extended weak pseudorandomness holds against adversaries that generate \(\textsf{pp}\). Thus, we omit a formal proof.

Appendix B: Puncturable Encryption with Strong Ciphertext Pseudorandomness

We prove Theorem 7.5 in this section.

1.1 Appendix B.1: Tools for PE

Definition B.1

(Statistically Injective PPRF). If a PPRF family \(\mathcal {F}= \{\textsf{F}_{K}: \{0,1\}^{\ell _1(\lambda )} \rightarrow \{0,1\}^{\ell _2(\lambda )} \mid K \in \{0,1\}^{\lambda }\}\) satisfies the following, we call it a statistically injective PPRF family with failure probability \(\epsilon (\cdot )\). With probability \(1-\epsilon (\lambda )\) over the random choice of \(K \leftarrow \textsf{PRF}.\textsf{Gen}(1^\lambda )\), for all \(x,x^\prime \in \{0,1\}^{\ell _1(\lambda )}\), if \(x\ne x^\prime \), then \(\textsf{F}_K(x)\ne \textsf{F}_K(x^\prime )\). If \(\epsilon (\cdot )\) is not specified, it is a negligible function.

Sahai and Waters show that we can convert any PPRF into a statistically injective PPRF [52].

Theorem B.2

[52]. If OWFs exist, then for all efficiently computable functions \(n(\lambda )\), \(m(\lambda )\), and \(e(\lambda )\) such that \(m(\lambda ) \ge 2n(\lambda ) + e(\lambda )\), there exists a statistically injective PPRF family with failure probability \(2^{-e(\lambda )}\) that maps \(n(\lambda )\) bits to \(m(\lambda )\) bits.
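The conversion in [52] can be sketched as follows, under the assumption that it XORs the PPRF output with a pairwise-independent hash, \(F'_{K,h}(x) := \textsf{F}_K(x)\oplus h(x)\), where h is drawn from a pairwise-independent family from \(n(\lambda )\) bits to \(m(\lambda )\) bits (puncturing is unaffected since h is public). With that form, the failure probability follows from a union bound:

$$\begin{aligned} \Pr _{h}\left[ \exists \, x\ne x^\prime ~\text {s.t.}~F'_{K,h}(x)=F'_{K,h}(x^\prime )\right] \le \sum _{x\ne x^\prime }\Pr _{h}\left[ h(x)\oplus h(x^\prime )=\textsf{F}_K(x)\oplus \textsf{F}_K(x^\prime )\right] \le 2^{2n(\lambda )}\cdot 2^{-m(\lambda )}\le 2^{-e(\lambda )}. \end{aligned}$$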

Definition B.3

An injective bit-commitment with setup consists of PPT algorithms \((\textsf{Gen},\textsf{Com})\).

\(\textsf{Gen}(1^\lambda )\)::

The key generation algorithm takes as input the security parameter \(1^\lambda \) and outputs a commitment key \(\textsf{ck}\).

\(\textsf{Com}_\textsf{ck}(b)\)::

The commitment algorithm takes as input \(\textsf{ck}\) and a bit b and outputs a commitment \(\textsf{com}\).

These satisfy the following properties.

Computationally Hiding::

For any QPT \(\mathcal {A}\), it holds that

$$\begin{aligned} \left| \Pr \left[ \mathcal {A}(\textsf{ck},\textsf{Com}_\textsf{ck}(0))=1\right] -\Pr \left[ \mathcal {A}(\textsf{ck},\textsf{Com}_\textsf{ck}(1))=1\right] \right| \le {\textsf{negl}}(\lambda ), \end{aligned}$$

where \(\textsf{ck}\leftarrow \textsf{Gen}(1^\lambda )\) and the probability is taken over the randomness of \(\textsf{Gen}\), \(\textsf{Com}\), and \(\mathcal {A}\).

Statistically Binding::

It holds that

$$\begin{aligned} \Pr \left[ \exists \, (r_0,r_1)~\text {s.t.}~\textsf{Com}_\textsf{ck}(0;r_0)=\textsf{Com}_\textsf{ck}(1;r_1) \;:\; \textsf{ck}\leftarrow \textsf{Gen}(1^\lambda )\right] \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Injective::

For every security parameter \(\lambda \), there is a bound \(\ell _r\) on the number of random bits used by \(\textsf{Com}\) such that, if \(\textsf{ck}\leftarrow \textsf{Gen}(1^\lambda )\), \(\textsf{Com}_\textsf{ck}(\cdot \,;\cdot )\) is an injective function on \(\{0,1\} \times \{0,1\}^{\ell _{r}}\) except with negligible probability.

Theorem B.4

If the QLWE assumption holds, there exists a secure injective bit-commitment with setup.

This theorem follows from the following theorems.

Theorem B.5

[40]. If there exist (injective) OWFs, there exist (injective) bit-commitments.

Theorem B.6

[3, 48, Adapted]. If the QLWE assumption holds, there exists a secure injective OWF with evaluation key generation algorithms.

Remark B.7

The injective OWFs achieved in Theorem B.6 need an evaluation key generation algorithm, unlike the standard definition of OWFs. However, OWFs with evaluation key generation algorithms are sufficient for proving Theorem B.4 by using Theorem B.5 since we use the commitment key generation algorithm \(\textsf{Gen}\) (i.e., setup) in Definition B.3. Note that there is no post-quantum secure injective OWF without an evaluation key generation algorithm so far.
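For intuition only, here is one textbook-style way to obtain an injective bit-commitment with setup from an injective OWF with an evaluation key generation algorithm; it is a sketch and not necessarily the construction behind Theorem B.5. Let \(\textsf{OWF}.\textsf{KeyGen}\) denote the (hypothetically named) evaluation key generation algorithm, \(f_{\textsf{ck}}\) the injective OWF, and \(\textsf{hc}\) a hardcore predicate for it:

$$\begin{aligned} \textsf{Gen}(1^\lambda ):\ \textsf{ck}\leftarrow \textsf{OWF}.\textsf{KeyGen}(1^\lambda ),\qquad \textsf{Com}_\textsf{ck}(b;r):=\bigl (f_{\textsf{ck}}(r),\ \textsf{hc}(r)\oplus b\bigr ). \end{aligned}$$

Injectivity of \(f_{\textsf{ck}}\) makes \(\textsf{Com}_\textsf{ck}\) injective on \(\{0,1\}\times \{0,1\}^{\ell _r}\) (and hence statistically binding), and computational hiding follows since \(\textsf{hc}(r)\) is computationally unpredictable given \(f_{\textsf{ck}}(r)\).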

1.2 Appendix B.2: PE Scheme Description

We review the puncturable encryption scheme by Cohen et al. [19]. We can see that Theorem 7.5 holds by inspecting their PE scheme. The scheme uses the following ingredients, and the length n of ciphertexts is 12 times the length \(\ell \) of plaintexts:

  • A length-doubling \(\textsf{PRG}:\{0,1\}^{\ell } \rightarrow \{0,1\}^{2\ell }\)

  • An injective PPRF (see Definition B.1) \(F: \{0,1\}^{3\ell } \rightarrow \{0,1\}^{9\ell }\).

  • A PPRF \(G: \{0,1\}^{9\ell } \rightarrow \{0,1\}^{\ell }\).

  • An injective bit-commitment with setup \((\textsf{Com}.\textsf{Gen},\textsf{Com})\) using randomness in \(\{0,1\}^{9 \ell }\). We only use this in our security proof.

Scheme. The scheme \(\textsf{PE}\) by Cohen et al. [19] is as follows.

\(\textsf{Gen}(1^\lambda )\)::

Sample functions F and G, generate \(\mathsf {pe.ek}\) as the obfuscated circuit \(i\mathcal {O}(E)\), where E is described in Fig. 12, and return \((\mathsf {pe.ek}, \mathsf {pe.dk}) {:}{=}(i\mathcal {O}(E),D)\), where \(\mathsf {pe.dk}\) is the (un-obfuscated) program D in Fig. 13.

\(\textsf{Puncture}(\mathsf {pe.dk}, c^*)\)::

Output \(\mathsf {pe.dk}_{\ne c^*}\), where \(\mathsf {pe.dk}_{\ne c^*}\) is the obfuscated circuit \(i\mathcal {O}(D_{\ne c^*})\) and \(D_{\ne c^*}\) is described in Fig. 14; that is, \(\mathsf {pe.dk}_{\ne c^*} {:}{=}i\mathcal {O}(D_{\ne c^*})\).

\(\textsf{Enc}(\mathsf {pe.ek}, m)\)::

Take \(m \in \{0,1\}^{\ell }\), sample \(s\leftarrow \{0,1\}^{\ell }\), and output \(c \leftarrow \mathsf {pe.ek}(m,s)\).

\(\textsf{Dec}(\mathsf {pe.dk}, c)\)::

Take \(c \in \{0,1\}^{12 \ell }\) and return \(m {:}{=}\mathsf {pe.dk}(c)\).

The size of the circuits is appropriately padded to be the maximum size of all modified circuits, which will appear in the security proof.
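To make the data flow concrete, here is a minimal Python sketch of the encryption/decryption structure of this scheme (\(c=\alpha \Vert \beta \Vert \gamma \) with \(\alpha =\textsf{PRG}(s)\), \(\beta =F(\alpha \Vert m)\), \(\gamma =G(\beta )\oplus m\), and decryption re-checking \(\beta \)). Hash-based stand-ins replace \(\textsf{PRG}\), F, and G, there is no obfuscation and no puncturing, and lengths are in bytes, so this only illustrates the encode/decode logic and none of the security properties of \(\textsf{PE}\).

```python
import hashlib
import os
import secrets

ELL = 16  # plaintext length in bytes (the paper measures lengths in bits)

def _h(tag: bytes, data: bytes, out_len: int) -> bytes:
    """Expand `data` to `out_len` bytes with SHA-256 in counter mode.
    Stand-in for PRG and the PPRFs F and G; not puncturable, illustration only."""
    out = b""
    ctr = 0
    while len(out) < out_len:
        out += hashlib.sha256(tag + ctr.to_bytes(4, "big") + data).digest()
        ctr += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc(m: bytes) -> bytes:
    """E(m, s): alpha = PRG(s), beta = F(alpha || m), gamma = G(beta) xor m."""
    assert len(m) == ELL
    s = secrets.token_bytes(ELL)
    alpha = _h(b"PRG", s, 2 * ELL)
    beta = _h(b"F", alpha + m, 9 * ELL)
    gamma = xor(_h(b"G", beta, ELL), m)
    return alpha + beta + gamma            # 12 * ELL bytes in total

def dec(c: bytes):
    """D(c): recover m = G(beta) xor gamma and accept only if beta = F(alpha || m)."""
    alpha, beta, gamma = c[:2 * ELL], c[2 * ELL:11 * ELL], c[11 * ELL:]
    m = xor(_h(b"G", beta, ELL), gamma)
    return m if _h(b"F", alpha + m, 9 * ELL) == beta else None

# Round trip.
msg = os.urandom(ELL)
assert dec(enc(msg)) == msg
```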

Fig. 12: Description of encryption circuit E

Fig. 13: Description of decryption circuit D

Fig. 14: Description of punctured decryption circuit \(D_{\ne c^*}\) at \(c^*\)

1.3 Appendix B.3: PE Security Proof

Cohen et al. [19] proved correctness, punctured correctness, and sparseness of \(\textsf{PE}\) above by using secure PRG \(\textsf{PRG}\), secure injective PPRF F, secure PPRF G, and secure IO \(i\mathcal {O}\). Thus, we complete the proof of Theorem 7.5 by combining Theorems B.2, B.4 and B.8, which we prove in this section.

Theorem B.8

If \(\textsf{PRG}\) is a secure PRG, F is a secure injective PPRF, G is a secure PPRF, \(\textsf{Com}\) is a secure injective bit-commitment with setup, and \(i\mathcal {O}\) is a secure IO, then \(\textsf{PE}\) is a secure PE that satisfies strong ciphertext pseudorandomness.

Proof of Theorem B.8

To prove that \(x_0 {:}{=}c^*\leftarrow \textsf{Enc}(\mathsf {pe.ek},m^*)\) is indistinguishable from \(x_1 {:}{=}r^*\leftarrow \{0,1\}^{12\ell }\), we define a sequence of hybrid games.

  • \(\textsf{Real}\): This is the same as the real game with \(b=0\). That is, for queried \(m^*\) the challenger does the following.

    • 1. Choose an injective PPRF \(F: \{0,1\}^{3\ell } \rightarrow \{0,1\}^{9\ell }\) and PPRF \(G: \{0,1\}^{9\ell } \rightarrow \{0,1\}^{\ell }\).

    • 2. Choose \(s\leftarrow \{0,1\}^{\ell }\) and compute \(\alpha _0 {:}{=}\textsf{PRG}(s)\), \(\beta _0 {:}{=}F(\alpha _0\Vert m^*)\), and \(\gamma _0 {:}{=} G(\beta _0) \oplus m^*\).

    • 3. Set \(x_0 {:}{=}\alpha _0\Vert \beta _0\Vert \gamma _0\) and compute \(\mathsf {pe.ek}{:}{=}i\mathcal {O}(E)\) and \(\mathsf {pe.dk}_{\ne x_0} {:}{=}i\mathcal {O}(D_{\ne x_0})\).

    • 4. Send \((x_0,\mathsf {pe.ek},\mathsf {pe.dk}_{\ne x_0})\) to the adversary.

  • \(\textsf{Hyb}_{1}\): This is the same as \(\textsf{Real}\) except that \(\alpha _0\) is uniformly random (Fig. 15).

  • \(\textsf{Hyb}_{2}\): This is the same as \(\textsf{Hyb}_{1}\) except that we use punctured \(F_{\ne \alpha _0\Vert m^*}\) and modified circuits \(E_{\ne \alpha _0\Vert m^*}\) and \(D_{\ne \alpha _0\Vert m^*}^2\) described in Figs. 16 and 17. Intuitively, these modified circuits are punctured at input \(\alpha _0\Vert m^*\) and use exceptional handling for this input.

  • \(\textsf{Hyb}_{3}\): This is the same as \(\textsf{Hyb}_{2}\) except that \(\beta _0 \leftarrow \{0,1\}^{9\ell }\).

  • \(\textsf{Hyb}_{4}\): This is the same as \(\textsf{Hyb}_{3}\) except that we use punctured \(G_{\ne \beta _0}\) and modified circuits \(E_{\ne \alpha _0\Vert m^*,\ne \beta _0}\) and \(D_{\ne \alpha _0 \Vert m^*,\ne \beta _0}^4\) described in Figs. 18 and 19. Intuitively, these modified circuits are punctured at input \(\beta _0\) and use \(F_{\ne \alpha _0\Vert m^*}\) and exceptional handling for \(\beta _0\).

  • \(\textsf{Hyb}_{5}=\textsf{Rand}_2\): This is the same as \(\textsf{Hyb}_{4}\) except that \(\gamma _0\) is uniformly random. Now, \(\alpha _0\), \(\beta _0\), \(\gamma _0\) are uniformly random and we rewrite them into \(\alpha _1\), \(\beta _1\), \(\gamma _1\), respectively. For ease of notation, we also denote this game by \(\textsf{Rand}_2\).

  • \(\textsf{Rand}_{1}\): This is the same as \(\textsf{Hyb}_{5}=\textsf{Rand}_2\) except that we use the un-punctured G, the circuit \(E_{\ne \alpha _1\Vert m^*,\ne \beta _1}\) reverts to \(E_{\ne \alpha _1\Vert m^*}\) described in Fig. 16, and we change the circuit \(D_{\ne \alpha _1 \Vert m^*,\ne \beta _1}^4\) into \(D_{\ne \alpha _1 \Vert m^*}^{\textsf{r}}\) described in Fig. 21.

  • \(\textsf{Rand}\): This is the same as the real game with \(b=1\). That is, for queried \(m^*\) the challenger does the following.

    • 1. Choose an injective PPRF \(F: \{0,1\}^{3\ell } \rightarrow \{0,1\}^{9\ell }\) and PPRF \(G: \{0,1\}^{9\ell } \rightarrow \{0,1\}^{\ell }\).

    • 2. Choose \(\alpha _1 \leftarrow \{0,1\}^{2\ell }\), \(\beta _1\leftarrow \{0,1\}^{9\ell }\), and \(\gamma _1\leftarrow \{0,1\}^{\ell }\).

    • 3. Set \(x_1 {:}{=}\alpha _1\Vert \beta _1\Vert \gamma _1\) and compute \(\mathsf {pe.ek}{:}{=}i\mathcal {O}(E)\) and \(\mathsf {pe.dk}_{\ne x_1} {:}{=}i\mathcal {O}(D_{\ne x_1})\).

    • 4. Send \((x_1,\mathsf {pe.ek},\mathsf {pe.dk}_{\ne x_1})\) to the adversary.

We describe an overview of these hybrid games in Fig. 15. If we prove that these hybrid games are mutually indistinguishable, we complete the proof of Theorem B.8. \(\square \)

Fig. 15: High-level overview of hybrid games from \(\textsf{Real}\) to \(\textsf{Rand}\). Recall that \(\textsf{Hyb}_{5}=\textsf{Rand}_2\). Transitions from \(\textsf{Rand}_2\) to \(\textsf{Rand}\) are basically the reverse transitions from \(\textsf{Real}\) to \(\textsf{Hyb}_{4}\), but there are subtle differences

We prove that those hybrid games in Fig. 15 are indistinguishable by Lemmata B.9, B.10, B.14, B.15, B.19, B.20 and B.25.

From \(\textsf{Real}\) to \(\textsf{Hyb}_{5}\). We first move from \(\textsf{Real}\) to \(\textsf{Hyb}_{5}\).

Lemma B.9

If \(\textsf{PRG}\) is a secure PRG, it holds that \(\left| \Pr [\textsf{Real}=1] - \Pr [\textsf{Hyb}_{1}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Lemma B.9

The randomness s for encryption is never used anywhere except \(\alpha _0 {:}{=}\textsf{PRG}(s)\). We can apply the PRG security and immediately obtain the lemma. \(\square \)

Lemma B.10

If \(i\mathcal {O}\) is a secure IO and F is a secure injective PPRF, it holds that

$$\begin{aligned} \left| \Pr [\textsf{Hyb}_{1}=1] - \Pr [\textsf{Hyb}_{2}=1]\right| \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Proof of Lemma B.10

We change E and \(D_{\ne x_0}\) into \(E_{\ne \alpha _0\Vert m^*}\) and \(D_{\ne \alpha _0\Vert m^*}^2\), respectively.

Fig. 16: Description of encryption circuit \(E_{\ne \alpha ^*\Vert m^*}\)

Fig. 17: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*}^2\)

We define a sequence of sub-hybrid games.

\(\textsf{Hyb}_{1}^{1}\)::

This is the same as \(\textsf{Hyb}_{1}\) except that we generate \(F_{\ne \alpha _0\Vert m^*}\) and set \(F^\prime {:}{=}F_{\ne \alpha _0\Vert m^*}\) and \(\mathsf {pe.ek}{:}{=}i\mathcal {O}(E_{\ne \alpha _0\Vert m^*})\) described in Fig. 16.

\(\textsf{Hyb}_{1}^{2}\)::

This is the same as \(\textsf{Hyb}_{1}^{1}\) except that we set \(\mathsf {pe.dk}_{\ne x_0} {:}{=}i\mathcal {O}(D_{\ne \alpha _0\Vert m^*}^2[F,G,\alpha _0,\beta _0,\gamma _0,m^*])\) described in Fig. 17. That is, we still use F, but modify the circuit.

Proposition B.11

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Hyb}_{1}=1] - \Pr [\textsf{Hyb}_{1}^{1}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.11

In these games, value \(\alpha _0 \leftarrow \{0,1\}^{2\ell }\) is not in the image of \(\textsf{PRG}\) except with negligible probability. The only difference between the two games is that \(F_{\ne \alpha _0\Vert m^*}\) is used in \(\textsf{Hyb}_{1}^{1}\). Thus, E and \(E_{\ne \alpha _0\Vert m^*}\) are functionally equivalent except with negligible probability. We can obtain the proposition by applying the IO security. \(\square \)

Proposition B.12

If \(i\mathcal {O}\) is a secure IO and F is injective, it holds that \(\left| \Pr [\textsf{Hyb}_{1}^{1}=1] - \Pr [\textsf{Hyb}_{1}^{2}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.12

We analyze the case where \((\alpha ,m) = (\alpha _0,m^*)\) since it is the only difference between \(D_{\ne x_0}\) and \(D_{\ne \alpha _0\Vert m^*}^2\).

  • If \(c = x_0\), \(D_{\ne \alpha _0\Vert m^*}^2\) outputs \(\bot \) by the first line of the description. Thus, the output of \(D_{\ne \alpha _0\Vert m^*}^2(x_0)\) is the same as that of \(D_{\ne x_0}(x_0)\).

  • If \(c \ne x_0\), it holds \((\beta _0,\gamma _0)\ne (\beta ,\gamma )\) in this case. However, it should be \(\beta _0 = \beta \) due to the injectivity of F and \(\beta _0 =F(\alpha _0\Vert m^*)\). Thus, both \(D_{\ne x_0}(c)\) and \(D_{\ne \alpha _0\Vert m^*}^2(c)\) output \(\bot \) in this case (\(D_{\ne x_0}(c)\) outputs \(\bot \) at the first line).

Therefore, \(D_{\ne x_0}\) and \(D_{\ne \alpha _0\Vert m^*}^2\) are functionally equivalent. We can obtain the proposition by applying the IO security. \(\square \)

Proposition B.13

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Hyb}_{1}^{2}=1] - \Pr [\textsf{Hyb}_{2}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.13

Due to the exceptional handling in the third item of \(D_{\ne \alpha _0\Vert m^*}^2\), \(F(\alpha \Vert m)\) is never computed for input \((\alpha _0,m^*)\). Thus, even if we use \(F_{\ne \alpha _0\Vert m^*}\) instead of F, circuits \(D_{\ne \alpha _0\Vert m^*}^2[F]\) and \(D_{\ne \alpha _0\Vert m^*}^2[F_{\ne \alpha _0\Vert m^*}]\) are functionally equivalent. We can obtain the proposition by the IO security. \(\square \)

We complete the proof of Lemma B.10. \(\square \)

Lemma B.14

If F is a secure injective PPRF, it holds that \(\left| \Pr [\textsf{Hyb}_{2}=1] - \Pr [\textsf{Hyb}_{3}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Lemma B.14

The difference between these two games is that \(\beta _0\) is \(F(\alpha _0\Vert m^*)\) or random. We can immediately obtain the lemma by applying punctured pseudorandomness of F since we use \(F_{\ne \alpha _0 \Vert m^*}\) in these games. \(\square \)

Lemma B.15

If \(i\mathcal {O}\) is a secure IO and F is a secure injective PPRF, it holds that

$$\begin{aligned} \left| \Pr [\textsf{Hyb}_{3}=1] - \Pr [\textsf{Hyb}_{4}=1]\right| \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Proof of Lemma B.15

We change \(E_{\ne \alpha ^*\Vert m^*}\) and \(D_{\ne \alpha ^*\Vert m^*}^2\) into \(E_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}\) and \(D_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}^4\), respectively.

Fig. 18: Description of encryption circuit \(E_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}\)

Fig. 19: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}^4\)

We define a sequence of sub-hybrid games.

\(\textsf{Hyb}_{3}^{1}\)::

This is the same as \(\textsf{Hyb}_{3}\) except that we use punctured \(G_{\ne \beta _0}\) and set \(\mathsf {pe.ek}{:}{=} i\mathcal {O}(E_{\ne \alpha _0\Vert m^*,\ne \beta _0}[F_{\ne \alpha _0\Vert m^*},G_{\ne \beta _0}])\).

\(\textsf{Hyb}_{3}^{2}\)::

This is the same as \(\textsf{Hyb}_{3}^{1}\) except that we still use G but set \(\mathsf {pe.dk}_{\ne x_0} {:}{=} i\mathcal {O}(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4[F_{\ne \alpha _0\Vert m^*},G])\).

Proposition B.16

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Hyb}_{3}=1] - \Pr [\textsf{Hyb}_{3}^{1}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.16

In these games \(\beta _0 \leftarrow \{0,1\}^{9\ell }\) is uniformly random. By the sparsity of F, \(\beta _0\) is not in the image of F except with negligible probability. Thus, \(E_{\ne \alpha _0\Vert m^*}\) and \(E_{\ne \alpha _0\Vert m^*, \ne \beta _0}\) are functionally equivalent except with negligible probability. We obtain the proposition by the IO security. \(\square \)

Proposition B.17

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Hyb}_{3}^{1}=1] - \Pr [\textsf{Hyb}_{3}^{2}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.17

The difference between \(D_{\ne \alpha _0\Vert m^*}^2\) and \(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4\) is that we replace “If \(c=x_0\), outputs \(\bot \).” with “If \(\beta = \beta _0\), outputs \(\bot \).” In these games, \(\beta _0 \leftarrow \{0,1\}^{9\ell }\) is not in the image of F except with negligible probability. Recall that \(c=x_0\) means \(c = \alpha _0\Vert \beta _0\Vert \gamma _0\). Thus, those two circuits may differ only when \(\beta = \beta _0\) but \((\alpha ,\gamma ) \ne (\alpha _0,\gamma _0)\). However, \(\beta = F^\prime (\alpha \Vert (G(\beta )\oplus \gamma ))\) cannot hold in this case due to the injectivity of F. Thus, \(D_{\ne \alpha _0\Vert m^*}^2\) and \(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4\) are functionally equivalent and we obtain the proposition by applying the IO security. \(\square \)

Proposition B.18

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Hyb}_{3}^{2}=1] - \Pr [\textsf{Hyb}_{4}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.18

The difference between these two games is that we use \(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4[F_{\ne \alpha _0\Vert m^*},G_{\ne \beta _0}]\) instead of \(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4[F_{\ne \alpha _0\Vert m^*},G]\). However, \(G_{\ne \beta _0}(\beta _0)\) is never computed by the first item of \(D_{\ne \alpha _0\Vert m^*,\ne \beta _0}^4\). We obtain the proposition by the IO security. \(\square \)

We complete the proof of Lemma B.15. \(\square \)

Lemma B.19

If G is a secure PPRF, it holds that \(\left| \Pr [\textsf{Hyb}_{4}=1] - \Pr [\textsf{Hyb}_{5}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Lemma B.19

The difference between these two games is that \(\gamma _0\) is \(G(\beta _0)\) or random. We can immediately obtain the lemma by applying punctured pseudorandomness of G since we use \(G_{\ne \beta _0}\) in these games. \(\square \)

In \(\textsf{Hyb}_{5}\), \(\alpha _0\), \(\beta _0\), and \(\gamma _0\) are uniformly random strings, and we rename them \(\alpha _1\), \(\beta _1\), and \(\gamma _1\), respectively.

From \(\textsf{Rand}\) to \(\textsf{Hyb}_{5}\). We next jump to \(\textsf{Rand}\) and move from \(\textsf{Rand}\) to \(\textsf{Rand}_2 = \textsf{Hyb}_{5}\), instead of directly moving from \(\textsf{Hyb}_{5}=\textsf{Rand}_2\) to \(\textsf{Rand}\), since the argument for \(\textsf{Real}\approx \textsf{Hyb}_{5}\) and that for \(\textsf{Rand}\approx \textsf{Rand}_{2}\) are almost symmetric (but not perfectly symmetric).

Lemma B.20

If \(i\mathcal {O}\) is a secure IO and F is a secure injective PPRF, it holds that

$$\begin{aligned} \left| \Pr [\textsf{Rand}=1] - \Pr [\textsf{Rand}_1 =1]\right| \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Proof of Lemma B.20

We change E and \(D_{\ne x_1}\) into \(E_{\ne \alpha _1\Vert m^*}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\), respectively.

We define a sequence of sub-hybrid games.

\(\textsf{rHyb}_{}^{1}\)::

This is the same as \(\textsf{Rand}\) except that we generate \(F_{\ne \alpha _1\Vert m^*}\) and set \(F^\prime {:}{=}F_{\ne \alpha _1\Vert m^*}\) and \(\mathsf {pe.ek}{:}{=}i\mathcal {O}(E_{\ne \alpha _1\Vert m^*})\) described in Fig. 16.

\(\textsf{rHyb}_{}^{2}\)::

This is the same as \(\textsf{rHyb}_{}^{1}\) except that we set \(\mathsf {pe.dk}_{\ne x_1} {:}{=}i\mathcal {O}(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}\text {-}2}[F,G])\) described in Fig. 20. That is, we still use F, but modify the circuit so that it outputs \(m^*\) for the input \(\alpha _1\Vert \hat{\beta }\Vert \hat{\gamma }\), where \(\hat{\beta } {:}{=}F(\alpha _1\Vert m^*)\) and \(\hat{\gamma } {:}{=}G(\hat{\beta })\oplus m^*\).

\(\textsf{rHyb}_{}^{3}\)::

This is the same as \(\textsf{rHyb}_{}^{2}\) except that we set \(\mathsf {pe.dk}_{\ne x_1} {:}{=}i\mathcal {O}(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}[F,G])\) described in Fig. 21. That is, we still use F, but the modified circuit outputs \(\bot \) for an input such that \((\alpha ,m)=(\alpha _1, m^*)\).

Fig. 20: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*}^{\textsf{r}\text {-}2}\)

Fig. 21: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*}^{\textsf{r}}\)

Proposition B.21

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{Rand}=1] - \Pr [\textsf{rHyb}_{}^{1}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.21

In these games, value \(\alpha _1 \leftarrow \{0,1\}^{2\ell }\) is not in the image of \(\textsf{PRG}\) except with negligible probability. Thus, E and \(E_{\ne \alpha _1\Vert m^*}\) are functionally equivalent except with negligible probability. We can obtain the proposition by applying the IO security. \(\square \)

Proposition B.22

If \(i\mathcal {O}\) is a secure IO and F is injective, it holds that \(\left| \Pr [\textsf{rHyb}_{}^{1}=1] - \Pr [\textsf{rHyb}_{}^{2}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.22

The difference between \(D_{\ne x_1}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}\text {-}2}\) is “If \(\alpha = \alpha ^*\) and \(\beta = \hat{\beta }\) and \(\gamma = \hat{\gamma }\), output \(m^*\).” Although \(\alpha _1\Vert \hat{\beta }\Vert \hat{\gamma }\) is a valid encryption, \(\hat{\beta } = F(\alpha _1\Vert m^*)\) is not equal to \(\beta _1\) except with negligible probability since \(\beta _1\) is uniformly random. Similarly, \(\hat{\gamma }\) is not equal to \(\gamma _1\) except with negligible probability. Thus, \(D_{\ne x_1}(\alpha _1\Vert \hat{\beta }\Vert \hat{\gamma })\) outputs \(m^*\). That is, \(D_{\ne x_1}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}\text {-}2}\) are functionally equivalent. We can obtain the proposition by applying the IO security. \(\square \)

Proposition B.23

If \(i\mathcal {O}\) is a secure IO and F is injective, it holds that \(\left| \Pr [\textsf{rHyb}_{}^{2}=1] - \Pr [\textsf{rHyb}_{}^{3}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.23

We analyze the case where \((\alpha ,m) = (\alpha _1,m^*)\). We can reach the fourth line of \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\) if \(c \ne x_1\). If \(c\ne x_1\) and \((\alpha ,m) = (\alpha _1,m^*)\), it holds that \((\beta ,\gamma )\ne (\beta _1,\gamma _1)\). However, it should be \(\beta _1 = \beta \) in this case due to the injectivity of F. That is, if \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}(c)\) outputs \(\bot \) at the fourth line, \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}\text {-}2}(c)\) also outputs \(\bot \) at the second line. Therefore, \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}\text {-}2}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\) are functionally equivalent. We can obtain the proposition by applying the IO security. \(\square \)

Proposition B.24

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{rHyb}_{}^{3}=1] - \Pr [\textsf{Rand}_1 =1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.24

Due to the exceptional handling in the fourth line of \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\), \(F(\alpha \Vert m)\) is never computed for input \((\alpha _1,m^*)\). Thus, even if we use \(F_{\ne \alpha _1\Vert m^*}\) instead of F, \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}[F]\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}[F_{\ne \alpha _1\Vert m^*}]\) are functionally equivalent. We can obtain the proposition by the IO security. \(\square \)

We complete the proof of Lemma B.20. \(\square \)

Lemma B.25

If \(i\mathcal {O}\) is a secure IO, F is a secure injective PPRF, and \((\textsf{Com}.\textsf{Gen},\textsf{Com})\) is a secure injective bit-commitment with setup, it holds that \(\left| \Pr [\textsf{Rand}_{1} =1] - \Pr [\textsf{Rand}_2 =1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Lemma B.25

We change \(E_{\ne \alpha ^*\Vert m^*}\) and \(D_{\ne \alpha ^*\Vert m^*}^{\textsf{r}}\) into \(E_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}\) and \(D_{\ne \alpha ^*\Vert m^*,\ne \beta ^*}^4\), respectively.

Fig. 22: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*}^{\textsf{com}}\)

Fig. 23: Description of punctured decryption circuit \(D_{\ne \alpha ^*\Vert m^*}^{\textsf {F}}\)

We define a sequence of sub-hybrid games.

\(\textsf{rHyb}_{1}^{1}\)::

This is the same as \(\textsf{Rand}_1\) except that we use \(\hat{\beta }\leftarrow \{0,1\}^{9\ell }\) instead of \(F(\alpha _1\Vert m^*)\).

\(\textsf{rHyb}_{1}^{2}\)::

This is the same as \(\textsf{rHyb}_{1}^{1}\) except that we use \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\) described in Fig. 22, where \(\textsf{ck}\leftarrow \textsf{Com}.\textsf{Gen}(1^\lambda )\) and \(\hat{z} = \textsf{Com}_{\textsf{ck}}(0;\hat{\beta })\) are hardwired, instead of \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\).

\(\textsf{rHyb}_{1}^{3}\)::

This is the same as \(\textsf{rHyb}_{1}^{2}\) except that we hard-code \(\hat{z}= \textsf{Com}_{\textsf{ck}}(1;\hat{\beta })\) into \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\) instead of \(\textsf{Com}_{\textsf{ck}}(0;\hat{\beta })\).

\(\textsf{rHyb}_{1}^{4}\)::

This is the same as \(\textsf{rHyb}_{1}^{3}\) except that we use \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) described in Fig. 23

\(\textsf{rHyb}_{1}^{5}\)::

This is the same as \(\textsf{rHyb}_{1}^{4}\) except that we use punctured \(G_{\ne \beta _1}\) and set \(\mathsf {pe.ek}{:}{=} i\mathcal {O}(E_{\ne \alpha _1\Vert m^*,\ne \beta _1}[F_{\ne \alpha _1\Vert m^*},G_{\ne \beta _1}])\).

\(\textsf{rHyb}_{1}^{6}\)::

This is the same as \(\textsf{rHyb}_{1}^{5}\) except that we still use G but set \(\mathsf {pe.dk}_{\ne x_1} {:}{=} i\mathcal {O}(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4[F_{\ne \alpha _1\Vert m^*},G])\).

Proposition B.26

If F is a secure PPRF, it holds that \(\left| \Pr [\textsf{Rand}_{1}=1] - \Pr [\textsf{rHyb}_{1}^{1}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.26

In these games, we use \(F_{\ne \alpha _1\Vert m^*}\) in \(E_{\ne \alpha _1\Vert m^*}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\). Thus, we can apply the punctured pseudorandomness and immediately obtain the proposition. \(\square \)

Proposition B.27

If \(i\mathcal {O}\) is a secure IO and \(\textsf{Com}_\textsf{ck}\) is injective, it holds that

$$\begin{aligned} \left| \Pr [\textsf{rHyb}_{1}^{1}=1] - \Pr [\textsf{rHyb}_{1}^{2}=1]\right| \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Proof of Proposition B.27

The difference between \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{r}}\) is whether we use “\(\textsf{Com}_{\textsf{ck}}(0;\beta )= \hat{z}\)” or “\(\beta =\hat{\beta }\)”, where \(\hat{z} = \textsf{Com}_{\textsf{ck}} (0;\hat{\beta })\) and \(\textsf{ck}\leftarrow \textsf{Com}.\textsf{Gen}(1^\lambda )\). Since \(\textsf{Com}\) is injective, these two conditions are equivalent. Therefore, those two circuits are functionally equivalent. We obtain the proposition by applying the IO security. \(\square \)

Proposition B.28

If \((\textsf{Com}.\textsf{Gen},\textsf{Com})\) is computationally hiding, it holds that

$$\begin{aligned} \left| \Pr [\textsf{rHyb}_{1}^{2}=1] - \Pr [\textsf{rHyb}_{1}^{3}=1]\right| \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Proof of Proposition B.28

The only difference between these two games is that \(\hat{z}=\textsf{Com}_{\textsf{ck}}(0;\hat{\beta })\) or \(\hat{z}=\textsf{Com}_{\textsf{ck}}(1;\hat{\beta })\). Note that \(\hat{\beta }\) is never used anywhere else. We can obtain the proposition by the hiding property of \(\textsf{Com}\). \(\square \)

Proposition B.29

If \(i\mathcal {O}\) is a secure IO and \((\textsf{Com}.\textsf{Gen},\textsf{Com})\) is statistically binding, it holds that \(\left| \Pr [\textsf{rHyb}_{1}^{3}=1] - \Pr [\textsf{rHyb}_{1}^{4}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.29

The difference between \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) and \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\) is that the first line of \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) is never executed. However, \(\hat{z} = \textsf{Com}_{\textsf{ck}}(1;\hat{\beta })\) is hardwired in \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\). Thus, the first line of \(D_{\ne \alpha _1\Vert m^*}^{\textsf{com}}\), in particular, the condition “\(\textsf{Com}_{\textsf{ck}}(0;\beta ) = \hat{z} = \textsf{Com}_{\textsf{ck}}(1;\hat{\beta })\)”, is also never true except with negligible probability due to the statistical binding property of \(\textsf{Com}\). That is, these two circuits are functionally equivalent except with negligible probability. We obtain the proposition by applying the IO security. \(\square \)

Proposition B.30

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{rHyb}_{1}^{4}=1] - \Pr [\textsf{rHyb}_{1}^{5}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.30

In these games \(\beta _1 \leftarrow \{0,1\}^{9\ell }\) is uniformly random. By the sparsity of F, \(\beta _1\) is not in the image of F except with negligible probability. Thus, \(E_{\ne \alpha _1\Vert m^*}\) and \(E_{\ne \alpha _1\Vert m^*, \ne \beta _1}\) are functionally equivalent except with negligible probability. We obtain the proposition by the IO security. \(\square \)

Proposition B.31

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{rHyb}_{1}^{5}=1] - \Pr [\textsf{rHyb}_{1}^{6}=1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.31

The difference between \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) in Fig. 23 and \(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4\) in Fig. 19 is that we replace “If \(c=x_1\), outputs \(\bot \).” with “If \(\beta = \beta _1\), outputs \(\bot \).” since the first line of \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) is never triggered. In these games, \(\beta _1 \leftarrow \{0,1\}^{9\ell }\) is not in the image of F except with negligible probability. Recall that \(c=x_1\) means \(c = \alpha _1\Vert \beta _1\Vert \gamma _1\). Thus, those two circuits may differ only when \(\beta = \beta _1\) but \((\alpha ,\gamma ) \ne (\alpha _1,\gamma _1)\). However, \(\beta = F^\prime (\alpha \Vert (G(\beta )\oplus \gamma ))\) cannot hold in this case due to the injectivity of F. Thus, \(D_{\ne \alpha _1\Vert m^*}^{\textsf {F}}\) and \(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4\) are functionally equivalent and we obtain the proposition by applying the IO security. \(\square \)

Proposition B.32

If \(i\mathcal {O}\) is a secure IO, it holds that \(\left| \Pr [\textsf{rHyb}_{1}^{6}=1] - \Pr [\textsf{Rand}_2 =1]\right| \le {\textsf{negl}}(\lambda )\).

Proof of Proposition B.32

The difference between these two games is that we use \(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4[F_{\ne \alpha _1\Vert m^*},G_{\ne \beta _1}]\) instead of \(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4[F_{\ne \alpha _1\Vert m^*},G]\). However, \(G_{\ne \beta _1}(\beta _1)\) is never computed by the first line of \(D_{\ne \alpha _1\Vert m^*,\ne \beta _1}^4\). We obtain the proposition by the IO security. \(\square \)

We complete the proof of Lemma B.25. \(\square \)

1.4 Appendix B.4: Original Ciphertext Pseudorandomness of PE

We describe the original ciphertext pseudorandomness of PE defined by Cohen et al. [19] in this section for reference.

Definition B.33

(Ciphertext Pseudorandomness). We define the following experiment \(\textsf{Expt}_{\mathcal {A}}^{\textsf{cpr}}(\lambda )\) for PE.

  1. \(\mathcal {A}\) sends a message \(m^*\in \{0,1\}^{{\ell _{\textsf{p}}}}\) to the challenger.

  2. The challenger does the following:

    • Generate \((\textsf{ek},\textsf{dk}) \leftarrow \textsf{Gen}(1^\lambda )\)

    • Compute encryption \(c^*\leftarrow \textsf{Enc}(\textsf{ek}, m^*)\).

    • Choose \(r^*\leftarrow \{0,1\}^{{\ell _{\textsf{ct}}}}\).

    • Generate the punctured key \(\textsf{dk}_{\notin \{c^*,r^*\}} \leftarrow \textsf{Puncture}(\textsf{dk}, \{c^*,r^*\})\)

    • Choose \(\textsf{coin}\leftarrow \{0,1\}\) and send the following to \(\mathcal {A}\):

      $$\begin{aligned} \begin{aligned} (c^*, r^*,\textsf{ek}, \textsf{dk}_{\notin \{c^*,r^*\}})&\text { if } \textsf{coin}=0 \\ (r^*,c^*, \textsf{ek}, \textsf{dk}_{\notin \{c^*,r^*\}})&\text { if } \textsf{coin}=1 \end{aligned} \end{aligned}$$
  3. \(\mathcal {A}\) outputs \(\textsf{coin}^*\) and the experiment outputs 1 if \(\textsf{coin}= \textsf{coin}^*\); otherwise 0.

We say that \(\textsf{PE}\) has ciphertext pseudorandomness if for every QPT adversary \(\mathcal {A}\), it holds that

$$\begin{aligned} \textsf{Adv}_{\mathcal {A}}^{\textsf{cpr}}(\lambda ){:}{=}2\cdot \Pr [\textsf{Expt}_{\mathcal {A}}^{\textsf{cpr}}(\lambda ) =1] -1 \le {\textsf{negl}}(\lambda ). \end{aligned}$$

Issue in the proof by Cohen et al. In the watermarking PRF by Cohen et al. [19], we use \(x_0 \leftarrow \textsf{PE}.\textsf{Enc}(\mathsf {pe.ek},a\Vert b\Vert c \Vert i)\) to extract an embedded message. They replace \(x_0 \leftarrow \textsf{PE}.\textsf{Enc}(\mathsf {pe.ek},a\Vert b\Vert c \Vert i)\) with \(x_1 \leftarrow \{0,1\}^{{\ell _{\textsf{ct}}}}\) in their proof of unremovability [19, Lemma 6.7]. Then, they use PRG security [19, Lemma 6.8] to replace \(\textsf{PRG}(c)\) with a uniformly random string since the information about c disappears from the PE ciphertext. However, there is a subtle issue here. The information about c remains in the punctured decryption key \(\textsf{dk}_{\notin \{x_0,x_1\}} \leftarrow \textsf{Puncture}(\mathsf {pe.dk},\{x_0,x_1\})\), which is punctured both at \(x_0\) and \(x_1\), since they use ciphertext pseudorandomness in Definition B.33 and need to use the punctured decryption key. Thus, we cannot apply PRG security even after we apply the ciphertext pseudorandomness in Definition B.33. This is the reason why we introduce the strong ciphertext pseudorandomness in Definition 7.3.


Cite this article

Kitagawa, F., Nishimaki, R. Watermarking PRFs and PKE Against Quantum Adversaries. J Cryptol 37, 22 (2024). https://doi.org/10.1007/s00145-024-09500-x
