
Computing Neural Networks with Homomorphic Encryption and Verifiable Computing

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12418)

Abstract

The widespread use of machine learning, and in particular of Artificial Neural Networks (ANN), raises multiple security and data privacy issues. Recent works propose to preserve data confidentiality during the inference process, offered as an outsourced service, by means of Homomorphic Encryption techniques. However, their setting assumes an honest-but-curious service provider, and none of them addresses the problem of result integrity. In this paper, we propose a practical framework for privacy-preserving predictions with Homomorphic Encryption (HE) and Verifiable Computing (VC). We design a partially encrypted Neural Network whose first layer consists of a quadratic function, and whose homomorphic evaluation is checked for integrity using a VC scheme that is a slight adaptation of the one of Fiore et al. [13]. Inspired by the neural network model proposed by Ryffel et al. [26], which combines adversarial training and functional encryption for partially encrypted machine learning, our solution can be deployed in different application contexts and provides additional security guarantees.

We validate our work on the MNIST handwritten digit recognition dataset, on which we achieve high accuracy (97.54%) and latency compatible with a practical deployment (on average, 3.8 s for the homomorphic evaluation together with the integrity proof preparation, and 0.021 s for the verification).
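
To make the model shape concrete, the following is a minimal plaintext sketch (not the authors' code) of a partially encrypted network of the kind described above: the first layer is restricted to a degree-2 polynomial, so that it can be evaluated under HE within a single multiplicative level and proven with a quadratic-circuit VC scheme, while the remaining layers run in the clear. The layer sizes and the squaring activation are illustrative assumptions.

```python
# Minimal plaintext sketch of a partially encrypted network: a quadratic first
# layer (the part that would be evaluated homomorphically and proven with VC)
# followed by a clear-domain tail. Layer sizes and the squaring activation are
# illustrative assumptions, not the paper's exact architecture.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(128, 784)), rng.normal(size=128)   # encrypted-domain layer
W2, b2 = rng.normal(size=(10, 128)), rng.normal(size=10)     # clear-domain layer

def quadratic_first_layer(x):
    # Degree-2 map: affine transform followed by coordinate-wise squaring,
    # i.e. a single multiplicative level under HE.
    z = W1 @ x + b1
    return z * z

def clear_tail(h):
    # Remaining layer(s) run on plaintext data.
    return W2 @ h + b2  # logits over the 10 MNIST classes

x = rng.random(784)  # a flattened 28x28 image
print(int(np.argmax(clear_tail(quadratic_first_layer(x)))))
```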


Notes

  1. This is due to the need to go beyond bilinear maps to achieve higher degrees in the underlying cryptographic primitives involved in both VC and FE.

References

  1. Backes, M., Fiore, D., et al.: Verifiable delegation of computation on outsourced data. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, pp. 863–874 (2013)

  2. Ball, M., Carmer, B., et al.: Garbled neural networks are practical. Cryptology ePrint Archive, Report 2019/338 (2019)

  3. Boemer, F., Costache, A., et al.: nGraph-HE2: a high-throughput framework for neural network inference on encrypted data. In: Proceedings of the 7th ACM Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC 2019), pp. 45–56 (2019)

  4. Boemer, F., Lao, Y., et al.: nGraph-HE: a graph compiler for deep learning on homomorphically encrypted data. CoRR (2018)

  5. Bourse, F., Minelli, M., Minihold, M., Paillier, P.: Fast homomorphic evaluation of deep discretized neural networks. In: Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018. LNCS, vol. 10993, pp. 483–512. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96878-0_17

  6. Brutzkus, A., Elisha, O., et al.: Low latency privacy preserving inference. In: Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97 (2019)

  7. Chabanne, H., de Wargny, A., et al.: Privacy-preserving classification on deep neural network. Cryptology ePrint Archive, Report 2017/035 (2017)

  8. Chabanne, H., Keuffer, J., et al.: Embedded proofs for verifiable neural networks. IACR Cryptology ePrint Archive, 2017:1038 (2017)

  9. Chabanne, H., Lescuyer, R., Milgram, J., Morel, C., Prouff, E.: Recognition over encrypted faces. In: Renault, É., Boumerdassi, S., Bouzefrane, S. (eds.) MSPN 2018. LNCS, vol. 11005. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-03101-5_16

  10. Chase, M., Chen, H., et al.: Security of homomorphic encryption. Technical report, HomomorphicEncryption.org, Redmond, WA, USA, July 2017

  11. Chou, E., Beal, J., et al.: Faster CryptoNets: leveraging sparsity for real-world encrypted inference. CoRR (2018)

  12. Fan, J., Vercauteren, F.: Somewhat practical fully homomorphic encryption. IACR Cryptology ePrint Archive, 2012:144 (2012)

  13. Fiore, D., Gennaro, R., et al.: Efficiently verifiable computation on encrypted data. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 844–855 (2014)

  14. Gennaro, R., Gentry, C., Parno, B.: Non-interactive verifiable computing: outsourcing computation to untrusted workers. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 465–482. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14623-7_25

  15. Ghodsi, Z., Gu, T., et al.: SafetyNets: verifiable execution of deep neural networks on an untrusted cloud. In: Advances in Neural Information Processing Systems, pp. 4672–4681 (2017)

  16. Gilad-Bachrach, R., Dowlin, N., et al.: CryptoNets: applying neural networks to encrypted data with high throughput and accuracy. In: International Conference on Machine Learning, pp. 201–210 (2016)

  17. Groth, J.: On the size of pairing-based non-interactive arguments. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016. LNCS, vol. 9666, pp. 305–326. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49896-5_11

  18. Hesamifard, E., Takabi, H., et al.: Deep neural networks classification over encrypted data. In: Proceedings of the Ninth ACM Conference on Data and Application Security and Privacy (CODASPY 2019), pp. 97–108 (2019)

  19. Izabachène, M., Sirdey, R., Zuber, M.: Practical fully homomorphic encryption for fully masked neural networks. In: Mu, Y., Deng, R.H., Huang, X. (eds.) CANS 2019. LNCS, vol. 11829, pp. 24–36. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31578-8_2

  20. Keuffer, J., Molva, R., Chabanne, H.: Efficient proof composition for verifiable computation. In: Lopez, J., Zhou, J., Soriano, M. (eds.) ESORICS 2018. LNCS, vol. 11098, pp. 152–171. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99073-6_8

  21. LeCun, Y., Cortes, C., et al.: MNIST handwritten digit database (2010). http://yann.lecun.com/exdb/mnist

  22. Lee, S., Ko, H., et al.: vCNN: verifiable convolutional neural network. IACR Cryptology ePrint Archive, 2020:584 (2020)

  23. Lund, C., Fortnow, L., et al.: Algebraic methods for interactive proof systems. J. ACM 39(4), 859–868 (1992)

  24. Parno, B., Howell, J., et al.: Pinocchio: nearly practical verifiable computation. In: 2013 IEEE Symposium on Security and Privacy, pp. 238–252. IEEE (2013)

  25. Rouhani, B.D., Riazi, M.S., et al.: DeepSecure: scalable provably-secure deep learning. CoRR (2017)

  26. Ryffel, T., Sans, E.D., et al.: Partially encrypted machine learning using functional encryption. arXiv preprint arXiv:1905.10214 (2019)

  27. Sans, E.D., Gay, R., et al.: Reading in the dark: classifying encrypted digits with functional encryption. IACR Cryptology ePrint Archive, 2018:206 (2018)

  28. Sanyal, A., Kusner, M., et al.: ICML, June 2018

  29. Microsoft SEAL (release 3.0). http://sealcrypto.org, October 2018

  30. Thaler, J.: Time-optimal interactive proofs for circuit evaluation. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8043, pp. 71–89. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_5

  31. Zhao, L., Wang, Q., et al.: VeriML: enabling integrity assurances and fair payments for machine learning as a service. arXiv preprint arXiv:1909.06961 (2019)

  32. Zuber, M., Carpov, S., et al.: Towards real-time hidden speaker recognition by means of fully homomorphic encryption. Cryptology ePrint Archive, Report 2019/976 (2019)

  33. Zuber, M., Fiore, D.: Hal: a library for homomorphic authentication (2016–2017). http://www.myurl.com


Author information


Corresponding author

Correspondence to Renaud Sirdey.


Appendices

A Properties of VC

Let us now summarize some general properties of \(\mathcal {VC}\) schemes; for more details, see [13, 14]:

Correctness: A \(\mathcal {VC}\) scheme is correct if, whenever the server honestly computes the function, the client running the verification algorithm accepts the returned output with high probability.

Security: A \(\mathcal {VC}\) scheme is secure if a malicious server cannot persuade the verification algorithm to accept an incorrect output.

Privacy: A \(\mathcal {VC}\) scheme is private when the public outputs of the problem generation algorithm ProbGen over two different inputs are indistinguishable.

Function Privacy: This requirement guarantees that the public key PK, sampled via \((PK, SK) \leftarrow \mathbf{KeyGen} (f, \lambda )\), does not leak information on the encoded function f, even after a polynomial amount of runs of \(\mathbf{ProbGen} _{SK}\) on adversarially chosen inputs.

Outsourceability: A \(\mathcal {VC}\) scheme can be outsourced if it allows efficient problem generation and efficient verification, i.e. the time of \(\left( \mathbf{ProbGen} _{SK}(x)+\mathbf{Verify} (\sigma _y) \right) \) is in o(T), where T is the time required to compute f(x).

Adaptive Security: A \(\mathcal {VC}\) scheme is adaptively secure if it remains secure when the adversary chooses f after having seen many encodings \(\sigma _x\) for adaptively chosen values x.

Such schemes allow computing \(\sigma _x\) independently of f, so that \(\sigma _x\) can be produced before f is chosen.
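
For reference, the properties above are stated over the usual four-algorithm VC interface of [14]. The schematic sketch below uses hypothetical Python signatures (not an actual library API) only to recall which algorithm each property constrains.

```python
# Schematic VC interface in the notation of Appendix A / [14]; the class and
# method names are hypothetical and only mirror the four algorithms.
from abc import ABC, abstractmethod
from typing import Any, Tuple

class VerifiableComputation(ABC):
    @abstractmethod
    def key_gen(self, f, security_param: int) -> Tuple[Any, Any]:
        """KeyGen(f, lambda) -> (PK, SK): one-time encoding of the function f."""

    @abstractmethod
    def prob_gen(self, sk, x) -> Tuple[Any, Any]:
        """ProbGen_SK(x) -> (sigma_x, tau_x): encode the input for the server
        and keep a verification token; privacy requires sigma_x to hide x."""

    @abstractmethod
    def compute(self, pk, sigma_x) -> Any:
        """Compute_PK(sigma_x) -> sigma_y: run by the (untrusted) server."""

    @abstractmethod
    def verify(self, sk, tau_x, sigma_y) -> Any:
        """Verify(tau_x, sigma_y): output y = f(x) or reject. Outsourceability
        asks that ProbGen plus Verify cost o(T), with T the cost of f(x)."""
```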

B Realization of PRF with Amortized Closed-Form Efficiency

Definition 2

A PRF (F.KG, F) is secure if, for every PPT adversary \(\mathcal {A}\), we have that:

\( \left| Pr[\mathcal {A}^{F_K(\cdot )}(\lambda , pp)=1]-Pr[\mathcal {A}^{\varPhi (\cdot )}(\lambda , pp)=1]\right| \le neg(\lambda ) \) where: \((K,pp)\leftarrow KG(\lambda )\) and \(\varPhi : \chi \rightarrow \mathcal {R}\) is a random function (i.e. it is not possible to distinguish between F and \(\varPhi \)).

Let \(f:\mathbb {F}_q^n \rightarrow \mathbb {F}_q\) be an arithmetic circuit of degree 2 and, without loss of generality, write \( f(x_1,\ldots ,x_n)=\sum _{i,j=1}^{n}{\zeta _{i,j}\, x_i x_j}+ \sum _{k=1}^{n}{\zeta _{k}\, x_k} \)

for some \(\zeta _{i,j},\zeta _k \in \mathbb {F}_q\). We define \(\hat{f}:(\mathbb {G}_1\times \mathbb {G}_2)^n\rightarrow \mathbb {G}_T\) as the compilation of f on group elements: \( \hat{f}(A_1,B_1,\ldots ,A_n,B_n)= \prod _{i,j=1}^{n}{e(A_i,B_j)^{\zeta _{i,j}}}\cdot \prod _{k=1}^{n}{e(A_k,h)^{\zeta _k}}\), so that \(\hat{f}(g^{x_1},h^{x_1},\ldots ,g^{x_n},h^{x_n})=e(g,h)^{f(x_1,\ldots ,x_n)}\).
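
The toy script below (an insecure stand-in in which group elements are tracked by their discrete logarithms, so that the pairing reduces to a multiplication mod q) only checks the bookkeeping of this compilation, namely that \(\hat{f}(g^{x_1},h^{x_1},\ldots ,g^{x_n},h^{x_n})=e(g,h)^{f(x_1,\ldots ,x_n)}\); the modulus, the inputs and the coefficients \(\zeta \) are arbitrary.

```python
# Toy, insecure illustration of the compilation f -> f_hat: group elements are
# represented only by their discrete logs (g^a is stored as a), so the "pairing"
# e(g^a, h^b) = e(g,h)^{ab} is just a multiplication mod q. This checks the
# bookkeeping of the formula; it is not a real bilinear group.
q = 2**61 - 1  # a prime, standing in for the group order

def pairing(a, b):            # e(g^a, h^b) -> exponent a*b of e(g,h)
    return (a * b) % q

def f(x, zeta2, zeta1):       # degree-2 arithmetic circuit over F_q
    n = len(x)
    quad = sum(zeta2[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    lin = sum(zeta1[k] * x[k] for k in range(n))
    return (quad + lin) % q

def f_hat(A, B, zeta2, zeta1):
    # prod_{i,j} e(A_i,B_j)^{zeta_ij} * prod_k e(A_k,h)^{zeta_k},
    # written additively in the exponent of e(g,h).
    n = len(A)
    acc = 0
    for i in range(n):
        for j in range(n):
            acc += zeta2[i][j] * pairing(A[i], B[j])
        acc += zeta1[i] * pairing(A[i], 1)       # h = h^1
    return acc % q

x = [3, 5, 7]
zeta2 = [[1, 2, 0], [0, 4, 1], [0, 0, 3]]
zeta1 = [6, 0, 2]
A = [xi % q for xi in x]      # A_i = g^{x_i}
B = [xi % q for xi in x]      # B_i = h^{x_i}
assert f_hat(A, B, zeta2, zeta1) == f(x, zeta2, zeta1)   # both equal f(x) in the exponent
```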

We now show a realization of a PRF with amortized closed-form efficiency for \(\mathbf{Comp} (R_1,S_1,V_1,\ldots , R_n,S_n,V_n,f)=\hat{f}(R_1,S_1,V_1,\ldots , R_n,S_n,V_n)\). It is the adaptation of the scheme of Backes et al. [1] to the asymmetric bilinear group setting.

  • F.KG( \(\lambda \) ) \(\rightarrow K=(K_1,K_2):\)

    First, generate bilinear group parameters \(bgpp=(q,g,h,e)\), where \(\mathbb {G}_1=\langle g\rangle \), \(\mathbb {G}_2=\langle h\rangle \), \(q=order(\mathbb {G}_i)\) for \(i=1,2\), and \(e:\mathbb {G}_1\times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) is a non-degenerate bilinear map (with \(\mathbb {G}_T=\langle e(g,h)\rangle \)). Choose two seeds \(K_1,K_2\) for a family of PRFs \(\mathbf{F} '_{K_{1,2}}:\{0,1\}^* \rightarrow \mathbb {F}_q^2\). Output \(K=(K_1,K_2)\). The parameters define \(F:\chi =\{0,1\}^*\times \{0,1\}^*\rightarrow \mathcal {R}^3\).

  • F\(_K(\varDelta ,\tau )\rightarrow (R,S,V)\):

    It generates \((u,v)\leftarrow F'_{K_1}(\tau )\) and \((a,b)\leftarrow F'_{K_2}(\varDelta )\). Finally, it computes \((R,S)=( g^{ua+vb},h^{ua+vb})\).

  • CFEval \(_{{\tau }}^{off}(K,f)\rightarrow w_f=\rho :\)

    For \(i=1,\ldots ,t\): compute \((u_i,v_i)=F'_{K_1}(\tau _i)\) and construct the linear map \(\rho _i(z_1,z_2)=u_i \cdot z_1 + v_i \cdot z_2\). Then run \(\rho \leftarrow f(\rho _1,\ldots ,\rho _t)\), i.e., \(\forall z_1,z_2 \in \mathbb {F}_q\): \( \rho (z_1,z_2)=f(\rho _1(z_1,z_2),\ldots ,\rho _t(z_1,z_2)). \)

  • CFEval \(_{\varDelta }^{on}(K,w_f)\rightarrow W:\)

    It generates \((a,b)\leftarrow F'_{K_{2}}(\varDelta )\) and computes \(W= e(g,h)^{w_f(a,b)}\). For the proof of this scheme, see Theorem 4 of [13].
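
As an illustration of the amortization, the toy sketch below (insecure: SHA-256 as a stand-in for the PRFs \(F'_{K_1}\) and \(F'_{K_2}\), group elements tracked by their exponents, and a simplified degree-2 circuit \(f(x)=\sum _i x_i^2+\sum _i x_i\)) folds the \(\tau _i\)-dependent work offline into the coefficients of the bivariate polynomial \(\rho \), so that the online cost for a dataset label \(\varDelta \) no longer depends on t.

```python
# Toy sketch of amortized closed-form efficiency (insecure; hash-based stand-ins
# for the PRFs, exponent-tracked group elements, simplified example circuit f).
import hashlib

q = 2**61 - 1

def prf(key: bytes, label: str):
    # Stand-in for F'_K: derive a pair in F_q^2 from (key, label).
    d = hashlib.sha256(key + label.encode()).digest()
    return int.from_bytes(d[:8], "big") % q, int.from_bytes(d[8:16], "big") % q

def f(x):
    # Example degree-2 circuit: sum_i x_i^2 + sum_i x_i (i.e. zeta_{i,i} = zeta_i = 1).
    return (sum(xi * xi for xi in x) + sum(x)) % q

def cfeval_off(uv):
    # CFEval^off: coefficients of rho(z1, z2) = f(u_1 z1 + v_1 z2, ..., u_t z1 + v_t z2),
    # keyed by the monomial degrees (deg_z1, deg_z2). Cost O(t), done once per tau-set.
    rho = {}
    for u, v in uv:
        for m, c in {(2, 0): u*u, (1, 1): 2*u*v, (0, 2): v*v, (1, 0): u, (0, 1): v}.items():
            rho[m] = (rho.get(m, 0) + c) % q
    return rho

def cfeval_on(rho, a, b):
    # CFEval^on: the exponent rho(a, b) of W = e(g,h)^{rho(a,b)}; cost independent of t.
    return sum(c * pow(a, m1, q) * pow(b, m2, q) for (m1, m2), c in rho.items()) % q

K1, K2, t = b"k1", b"k2", 1000
uv = [prf(K1, f"tau{i}") for i in range(t)]        # tau-dependent PRF halves
rho = cfeval_off(uv)                               # offline
a, b = prf(K2, "Delta")                            # Delta-dependent PRF half
# Consistency check: the closed form agrees with evaluating f on the exponents u_i*a + v_i*b.
assert cfeval_on(rho, a, b) == f([(u * a + v * b) % q for u, v in uv])
```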

C Realizations of Homomorphic Hash [13]

In the construction of \(\tilde{H}\), we use the bilinear group parameters \(bgpp=(q,g,h,e)\), where q is a prime of \(\lambda \) bits and \(\mathbb {F}_q =\mathbb {Z}/q\mathbb {Z}\). Let us define a function \(H_{\alpha ,\beta }(\mu ) \) as follows: for \(\mu \in \mathcal {D}=\{\mu \in \mathbb {Z}_q[x][y]: deg_x(\mu )=N, deg_y(\mu )=c\}\subset R_q[y]\), \(H_{\alpha ,\beta }(\mu ) \) first evaluates \(\mu \) at \(y=\alpha \) and then evaluates \(\mu (\alpha )\) at \(x=\beta \), i.e. \(H_{\alpha ,\beta }(\mu )=ev_\beta \circ ev_\alpha (\mu )\).
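
The toy snippet below (arbitrary toy parameters, a dictionary representation of \(\mu \), no degree bookkeeping) simply illustrates that \(H_{\alpha ,\beta }\), being a composition of evaluation maps, is a ring homomorphism: hashes of sums and products of polynomials equal the sums and products of the hashes mod q. This is the property that the group-encoded hash \(\tilde{H}\) defined below relies on.

```python
# Toy illustration of H_{alpha,beta}: mu in Z_q[x][y] is hashed by evaluating at
# y = alpha and then at x = beta. Evaluation is a ring homomorphism, so H is
# additively and multiplicatively homomorphic mod q.
q = 2**61 - 1

def H(mu, alpha, beta):
    # mu is a dict {(i, j): coeff} for the monomial x^i * y^j.
    return sum(c * pow(beta, i, q) * pow(alpha, j, q) for (i, j), c in mu.items()) % q

def poly_mul(mu1, mu2):
    out = {}
    for (i1, j1), c1 in mu1.items():
        for (i2, j2), c2 in mu2.items():
            k = (i1 + i2, j1 + j2)
            out[k] = (out.get(k, 0) + c1 * c2) % q
    return out

def poly_add(mu1, mu2):
    return {k: (mu1.get(k, 0) + mu2.get(k, 0)) % q for k in set(mu1) | set(mu2)}

alpha, beta = 123456789, 987654321
mu1 = {(0, 0): 5, (1, 0): 3, (0, 1): 7}     # 5 + 3x + 7y
mu2 = {(2, 0): 2, (1, 1): 4}                # 2x^2 + 4xy
assert H(poly_add(mu1, mu2), alpha, beta) == (H(mu1, alpha, beta) + H(mu2, alpha, beta)) % q
assert H(poly_mul(mu1, mu2), alpha, beta) == (H(mu1, alpha, beta) * H(mu2, alpha, beta)) % q
```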

The family of hash functions \((\tilde{H}.KeyGen,\tilde{H},\tilde{H}.Eval)\) with domain \(\mathcal {D}\) and range \(\mathbb {G}_1\times \mathbb {G}_2\) is defined as below:

  • \(\tilde{H}.KeyGen(\lambda )\rightarrow (K,\kappa =(\alpha ,\beta ))\):

    First, generate \(bgpp=(q,g,h,e)\). Next, sample a random \((\alpha ,\beta )\leftarrow (\mathbb {F}_q)^2\). Afterwards, for \(i=0,\ldots ,c\) and \(j=1,\ldots ,N\), compute \( g^{\alpha ^i\beta ^j}\) and \(h^{\alpha ^i\beta ^j}\) and include them in K. Output K and \(\kappa =(\alpha ,\beta )\).

  • \(\tilde{H}_\kappa (\mu )\):

    For \(\mu \in \mathcal {D}\), \(\tilde{H}_\kappa (\mu )\) is computed differently depending on its degree \(deg_y(\mu )\). If \(deg_y(\mu )\le 1\), then \(\tilde{H}_\kappa (\mu )=(T,U)=(g^{H_\kappa (\mu )},h^{H_\kappa (\mu )})\in \mathbb {G}_1\times \mathbb {G}_2\). If \(deg_y(\mu )=2\), then \(\tilde{H}_\kappa (\mu )=e(g,h)^{H_{\kappa }(\mu )}\in \mathbb {G}_T\).

  • \(\tilde{H}.Eval(f_g,\nu _1,\nu _2)\): it computes, in a homomorphic way, a function of degree 2 on the outputs of \(\tilde{H}\), taking as inputs \(\nu _1=(T_1,U_1)\) and \(\nu _2=(T_2,U_2)\in \mathbb {G}_1\times \mathbb {G}_2\) (respectively, \(\tilde{T}_1, \tilde{T}_2 \in \mathbb {G}_T\)).
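
To complete the picture, here is a toy version of the group-encoded hash under the same insecure exponent-tracking convention as above. The multiplication case of Eval shown here (pairing the \(\mathbb {G}_1\) part of one hash with the \(\mathbb {G}_2\) part of the other) is an assumed but natural instantiation in the spirit of [13], since the description above is truncated; by bilinearity and the homomorphism of H, it lands on the \(\mathbb {G}_T\) encoding of the hash of the product.

```python
# Toy sketch: group elements tracked by exponents (g^a stored as a), so the
# pairing e(g^a, h^b) = e(g,h)^{ab} is a*b mod q. H_tilde encodes H(mu) in
# (G_1, G_2) when deg_y(mu) <= 1 and in G_T when deg_y(mu) = 2. The eval_mul
# rule below is an assumed instantiation of the degree-2 case, not a quotation
# of the scheme.
q = 2**61 - 1

def H(mu, alpha, beta):                      # mu: {(i, j): coeff} for x^i * y^j
    return sum(c * pow(beta, i, q) * pow(alpha, j, q) for (i, j), c in mu.items()) % q

def H_tilde_deg1(mu, alpha, beta):           # (T, U) = (g^{H(mu)}, h^{H(mu)})
    t = H(mu, alpha, beta)
    return t, t

def eval_mul(nu1, nu2):                      # e(T_1, U_2) = e(g,h)^{H(mu1) * H(mu2)}
    (T1, _), (_, U2) = nu1, nu2
    return (T1 * U2) % q

def poly_mul(mu1, mu2):
    out = {}
    for (i1, j1), c1 in mu1.items():
        for (i2, j2), c2 in mu2.items():
            k = (i1 + i2, j1 + j2)
            out[k] = (out.get(k, 0) + c1 * c2) % q
    return out

alpha, beta = 111111, 222222
mu1 = {(0, 0): 9, (1, 0): 2, (0, 1): 5}      # degree 1 in y
mu2 = {(0, 0): 4, (2, 0): 1, (0, 1): 3}      # degree 1 in y
lhs = eval_mul(H_tilde_deg1(mu1, alpha, beta), H_tilde_deg1(mu2, alpha, beta))
assert lhs == H(poly_mul(mu1, mu2), alpha, beta)   # the G_T encoding of H(mu1 * mu2)
```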


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Madi, A., Sirdey, R., Stan, O. (2020). Computing Neural Networks with Homomorphic Encryption and Verifiable Computing. In: Zhou, J., et al. (eds.) Applied Cryptography and Network Security Workshops. ACNS 2020. Lecture Notes in Computer Science, vol. 12418. Springer, Cham. https://doi.org/10.1007/978-3-030-61638-0_17


  • DOI: https://doi.org/10.1007/978-3-030-61638-0_17

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61637-3

  • Online ISBN: 978-3-030-61638-0

  • eBook Packages: Computer Science, Computer Science (R0)
