
Privacy Leakage in Privacy-Preserving Neural Network Inference

  • Conference paper, Computer Security – ESORICS 2022 (ESORICS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13554)

Abstract

The community has seen many attempts to secure machine learning algorithms with multi-party computation and other cryptographic primitives. An interesting 3-party framework (SCSDF hereafter) for privacy-preserving neural network inference was presented at ESORICS 2020. SCSDF defines several protocols for non-linear activation functions, including ReLU, Sigmoid, etc. In particular, these protocols rely on a proposed protocol DReLU (computing the derivative of the ReLU function) as a building block. All protocols are claimed secure against a single semi-honest corruption and against a single malicious corruption. Unfortunately, this paper shows that serious privacy leakage of private inputs occurs during SCSDF executions, which completely undermines the framework's security. We first give a detailed cryptanalysis of SCSDF from the perspective of the real-ideal simulation paradigm and show that these claimed-secure protocols do not meet the underlying security model. We then examine particular steps in SCSDF and demonstrate that the signs of the input data are inevitably revealed to the (either semi-honest or malicious) third party responsible for assisting protocol executions. To exhibit such leakage more explicitly, we perform extensive experimental evaluations on the MNIST dataset, the CIFAR-10 dataset, and the CFD (Chicago Face Database), for both the ReLU and Sigmoid non-linear activation functions. All experiments succeed in disclosing the original private data of the data owner during the inference process. Potential countermeasures are recommended and demonstrated as well.


Notes

  1. Including secret sharing, garbled circuits (GC), oblivious transfer (OT), etc.

  2. Only one multiplication operation and one extra interaction are needed.

  3. M may be a convolutional layer or a fully connected layer.

  4. We cannot rule out the possibility that an adversary might perform advanced cryptanalysis on these leaked data (say, from an algebraic analysis perspective).

References

  1. AB-375 California consumer privacy act of 2018 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375

  2. Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing directive 95/46/EC (general data protection regulation) (2016)


  3. SB-1121 California consumer privacy act of 2018 (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121

  4. Agrawal, N., Shamsabadi, A.S., Kusner, M.J., Gascón, A.: QUOTIENT: two-party secure neural network training and prediction. In: Cavallaro, L., Kinder, J., Wang, X., Katz, J. (eds.) Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS 2019, London, UK, 11–15 November 2019, pp. 1231–1247. ACM (2019). https://doi.org/10.1145/3319535.3339819

  5. Araki, T., et al.: Generalizing the SPDZ compiler for other protocols. In: Lie, D., Mannan, M., Backes, M., Wang, X. (eds.) Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS 2018, Toronto, ON, Canada, 15–19 October 2018, pp. 880–895. ACM (2018). https://doi.org/10.1145/3243734.3243854

  6. Araki, T., Furukawa, J., Lindell, Y., Nof, A., Ohara, K.: High-throughput semi-honest secure three-party computation with an honest majority. In: Weippl, E.R., Katzenbeisser, S., Kruegel, C., Myers, A.C., Halevi, S. (eds.) Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016, pp. 805–817. ACM (2016). https://doi.org/10.1145/2976749.2978331

  7. Beaver, D.: Efficient multiparty protocols using circuit randomization. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 420–432. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_34


  8. Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In: Simon, J. (ed.) Proceedings of the 20th Annual ACM Symposium on Theory of Computing, 2–4 May 1988, Chicago, Illinois, USA, pp. 1–10. ACM (1988). https://doi.org/10.1145/62212.62213

  9. Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, 14–17 October 2001, Las Vegas, Nevada, USA, pp. 136–145. IEEE Computer Society (2001). https://doi.org/10.1109/SFCS.2001.959888

  10. Chaudhari, H., Choudhury, A., Patra, A., Suresh, A.: ASTRA: high throughput 3PC over rings with application to secure prediction. In: Sion, R., Papamanthou, C. (eds.) Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop, CCSW@CCS 2019, London, UK, 11 November 2019, pp. 81–92. ACM (2019). https://doi.org/10.1145/3338466.3358922

  11. Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K.E., Naehrig, M., Wernsing, J.: Cryptonets: applying neural networks to encrypted data with high throughput and accuracy. In: Balcan, M., Weinberger, K.Q. (eds.) Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, 19–24 June 2016. JMLR Workshop and Conference Proceedings, vol. 48, pp. 201–210. JMLR.org (2016)


  12. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016, pp. 770–778. IEEE Computer Society (2016). https://doi.org/10.1109/CVPR.2016.90

  13. Juvekar, C., Vaikuntanathan, V., Chandrakasan, A.P.: GAZELLE: a low latency framework for secure neural network inference. In: Enck, W., Felt, A.P. (eds.) 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, 15–17 August 2018, pp. 1651–1669. USENIX Association (2018). https://www.usenix.org/conference/usenixsecurity18/presentation/juvekar

  14. Koti, N., Pancholi, M., Patra, A., Suresh, A.: SWIFT: super-fast and robust privacy-preserving machine learning. In: Bailey, M., Greenstadt, R. (eds.) 30th USENIX Security Symposium, USENIX Security 2021, 11–13 August 2021, pp. 2651–2668. USENIX Association (2021). https://www.usenix.org/conference/usenixsecurity21/presentation/koti

  15. Lehmkuhl, R., Mishra, P., Srinivasan, A., Popa, R.A.: Muse: secure inference resilient to malicious clients. In: Bailey, M., Greenstadt, R. (eds.) 30th USENIX Security Symposium, USENIX Security 2021, 11–13 August 2021, pp. 2201–2218. USENIX Association (2021). https://www.usenix.org/conference/usenixsecurity21/presentation/lehmkuhl

  16. Liu, J., Juuti, M., Lu, Y., Asokan, N.: Oblivious neural network predictions via MiniONN transformations. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, 30 October–03 November 2017, pp. 619–631. ACM (2017). https://doi.org/10.1145/3133956.3134056

  17. Mishra, P., Lehmkuhl, R., Srinivasan, A., Zheng, W., Popa, R.A.: Delphi: a cryptographic inference system for neural networks. In: Zhang, B., Popa, R.A., Zaharia, M., Gu, G., Ji, S. (eds.) PPMLP 2020: Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, Virtual Event, USA, November 2020, pp. 27–30. ACM (2020). https://doi.org/10.1145/3411501.3419418

  18. Mohassel, P., Zhang, Y.: SecureML: a system for scalable privacy-preserving machine learning. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38 (2017). https://doi.org/10.1109/SP.2017.12

  19. Mohassel, P., Rindal, P.: ABY\(^3\): a mixed protocol framework for machine learning. In: Lie, D., Mannan, M., Backes, M., Wang, X. (eds.) Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS 2018, Toronto, ON, Canada, 15–19 October 2018, pp. 35–52. ACM (2018). https://doi.org/10.1145/3243734.3243760

  20. Patra, A., Suresh, A.: BLAZE: blazing fast privacy-preserving machine learning. CoRR abs/2005.09042 (2020). https://arxiv.org/abs/2005.09042

  21. Riazi, M.S., Samragh, M., Chen, H., Laine, K., Lauter, K.E., Koushanfar, F.: XONN: XNOR-based oblivious deep neural network inference. In: Heninger, N., Traynor, P. (eds.) 28th USENIX Security Symposium, USENIX Security 2019, Santa Clara, CA, USA, 14–16 August 2019, pp. 1501–1518. USENIX Association (2019). https://www.usenix.org/conference/usenixsecurity19/presentation/riazi

  22. Riazi, M.S., Weinert, C., Tkachenko, O., Songhori, E.M., Schneider, T., Koushanfar, F.: Chameleon: a hybrid secure computation framework for machine learning applications. In: Proceedings of the 2018 on Asia Conference on Computer and Communications Security, ASIACCS 2018, pp. 707–721. Association for Computing Machinery, New York (2018). https://doi.org/10.1145/3196494.3196522

  23. Shen, L., Chen, X., Shi, J., Dong, Y., Fang, B.: An efficient 3-party framework for privacy-preserving neural network inference. In: Chen, L., Li, N., Liang, K., Schneider, S. (eds.) ESORICS 2020. LNCS, vol. 12308, pp. 419–439. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58951-6_21


  24. Wagh, S., Gupta, D., Chandran, N.: SecureNN: 3-party secure computation for neural network training. Proc. Priv. Enhancing Technol. 2019(3), 26–49 (2019)


  25. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32(1), 4–24 (2021). https://doi.org/10.1109/TNNLS.2020.2978386


  26. Yao, A.C.: How to generate and exchange secrets (extended abstract). In: 27th Annual Symposium on Foundations of Computer Science, Toronto, Canada, 27–29 October 1986, pp. 162–167. IEEE Computer Society (1986). https://doi.org/10.1109/SFCS.1986.25


Acknowledgement

The work is supported by the National Natural Science Foundation of China (Grant No. 61971192), Shanghai Municipal Education Commission (2021-01-07-00-08-E00101), and Shanghai Trusted Industry Internet Software Collaborative Innovation Center.

Author information

Corresponding author: Xiangxue Li.

A Preliminary

A.1 Neural Network

A neural network usually executes in a layer-by-layer fashion. It mainly consists of two types of layers: linear layers and non-linear layers. The computation of a linear layer (either a fully connected layer or a convolutional layer) generally reduces to matrix multiplication. A non-linear layer applies one of several activation functions, e.g., ReLU or Sigmoid. The ReLU function is formulated as \(f(x)=\max (0,x)\) and Sigmoid as \(f(x)= \frac{1}{1+e^{-x}}\).
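As a concrete illustration (not part of the SCSDF protocols themselves), the two activation functions, together with the ReLU derivative that the DReLU building block computes, can be sketched in plain Python:

```python
import math

def relu(x: float) -> float:
    # ReLU: f(x) = max(0, x)
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Sigmoid: f(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

def drelu(x: float) -> int:
    # Derivative of ReLU: 1 for non-negative x, 0 otherwise.
    # (We adopt the convention drelu(0) = 1, matching MSB(x) = 0;
    # the choice at exactly x = 0 is a convention, not fixed here.)
    return 1 if x >= 0 else 0
```

Note that drelu reveals exactly the sign of its argument, which is why leaking its output (or an equivalent value) to a third party, as analyzed in this paper, is fatal to input privacy.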

A.2 Fixed-Point Number

Neural networks usually operate on floating-point numbers, which are not suitable for some cryptographic primitives. Thus MPC frameworks usually encode values as fixed-point numbers. A fixed-point value is represented as an l-bit integer in 2's complement representation, where the bottom d bits encode the fractional part (\(d<l\)) and the MSB encodes the sign of the number, i.e., \(\textrm{MSB}(x)=1\) if x is negative, and 0 otherwise.
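A minimal sketch of this encoding (the bit-width l and precision d below are illustrative choices, not values fixed by the paper):

```python
L_BITS, D_FRAC = 32, 13  # illustrative bit-width l and fractional precision d (d < l)
MOD = 1 << L_BITS

def encode(x: float) -> int:
    # Scale by 2^d and reduce into Z_{2^l}; negatives wrap into 2's complement form.
    return round(x * (1 << D_FRAC)) % MOD

def decode(v: int) -> float:
    # Undo 2's complement (top bit is the sign), then rescale by 2^-d.
    if v >= MOD // 2:
        v -= MOD
    return v / (1 << D_FRAC)

def msb(v: int) -> int:
    # MSB(x) = 1 iff the encoded number is negative.
    return v >> (L_BITS - 1)
```

For example, a negative value always encodes with its top bit set, so learning the MSB of an encoded input is exactly learning its sign.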

A.3 Additive Secret Sharing

In the framework SCSDF, all values are 2-out-of-2 secret shared between the server and the client. We say x is 2-out-of-2 secret shared in \(\mathbb {Z}_{2^l}\) between \(P_0\) and \(P_1\) if \([x]_0, [x]_1\in \mathbb {Z}_{2^l}\) such that \(x = [x]_0 +[x]_1 \bmod 2^l\) and \([x]_i\) is held by \(P_i\), \(i \in \{0,1\} \).

Sharing Protocol \(\pi _\textrm{Share}\). \(P_i\) generates a sharing of its input x by sampling \(r \in _R \mathbb {Z}_{2^l}\) and sending \(x-r\) to \(P_{1-i}\). \(P_i\) gets \([x]_i=r\) and \(P_{1-i}\) gets \([x]_{1-i}=x-r\).

Reconstruction Protocol \(\pi _\textrm{Rec}\). To reconstruct x, the parties exchange their shares; each party then computes \(x = [x]_0 +[x]_1\).

Addition Operations. Additive sharing is linear in the sense that given two shared values [x] and [y], the parties can compute \([z] =[x]+[y]\) locally, without interaction.
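The three operations above (\(\pi _\textrm{Share}\), \(\pi _\textrm{Rec}\), and local addition) admit a short sketch; the 32-bit ring size is an illustrative choice:

```python
import secrets

L_BITS = 32        # illustrative ring Z_{2^l}
MOD = 1 << L_BITS

def share(x: int) -> tuple[int, int]:
    # pi_Share: the dealer samples r uniformly in Z_{2^l}, keeps r as its own
    # share, and sends x - r (mod 2^l) to the other party.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(x0: int, x1: int) -> int:
    # pi_Rec: the parties exchange shares and add them modulo 2^l.
    return (x0 + x1) % MOD

def add_local(xi: int, yi: int) -> int:
    # Linearity: [z]_i = [x]_i + [y]_i requires no interaction.
    return (xi + yi) % MOD
```

Each individual share is uniformly distributed in \(\mathbb{Z}_{2^l}\), so on its own it reveals nothing about x; privacy is lost only if a party learns information correlated with both shares.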

Multiplication Operations \(\pi _\textrm{Multi}\). Given [x], [y], the goal of protocol \(\pi _\textrm{Multi}\) is to generate [z] where \(z = x \cdot y\). It can be performed using preassigned multiplication triples [7]. A multiplication triple is a tuple \((a,b,c) \in \mathbb {Z}_{2^l}^3\) such that \(a\cdot b = c\), of which the two parties hold secret sharings. \(P_i\) computes \([e]_i=[x]_i-[a]_i\) and \([f]_i = [y]_i- [b]_i\); the parties then call \(\pi _\textrm{Rec}\) to reconstruct e and f. Finally, \(P_i\) sets \([z]_i=-i \cdot e \cdot f + f \cdot [x]_i + e \cdot [y]_i + [c]_i\). The multiplication operation extends naturally to matrices.
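A minimal sketch of multiplication via Beaver triples [7]. The in-the-clear triple generation below is for illustration only; in an actual deployment the triple shares are preassigned (e.g., by a trusted dealer or an offline phase):

```python
import secrets

MOD = 1 << 32  # illustrative ring Z_{2^l}

def share(x: int) -> tuple[int, int]:
    # 2-out-of-2 additive sharing in Z_{2^l}.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Inputs: per-party shares of x and y, and of a triple (a, b, c) with c = a*b.
    # The parties open e = x - a and f = y - b; this is safe because a and b
    # act as one-time masks on x and y.
    e = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % MOD
    f = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % MOD
    # Each party P_i sets [z]_i = -i*e*f + f*[x]_i + e*[y]_i + [c]_i.
    return tuple((-i * e * f + f * x_sh[i] + e * y_sh[i] + c_sh[i]) % MOD
                 for i in range(2))
```

Correctness follows from \(-ef + fx + ey + c = xy\), since \(e = x-a\), \(f = y-b\), and \(c = ab\); the \(-ef\) term is charged to party \(P_1\) only, hence the factor \(-i\).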

A.4 Threat Model

Similar to [23], we consider an adversary who can corrupt only one of the three parties, in either the semi-honest model or the malicious model.

A party corrupted by a semi-honest adversary follows protocol steps but tries to learn additional information from received messages. Our security definition uses the real-ideal paradigm [9] which is a general method to prove protocol security. In the real world, the parties interact with each other according to the specification of a protocol \(\pi \). In the ideal world, the parties have access to a trusted third party (TTP) that implements an ideal functionality \(\mathcal {F}\). The executions in both worlds are coordinated by an environment \(\mathcal {Z}\), who chooses the inputs for the parties and plays the role of a distinguisher between the real and ideal executions. We say that \(\pi \) securely realizes the ideal functionality \(\mathcal {F}\) if for any adversary \(\mathcal {A}\) in the real world, there exists an adversary \(\mathrm Sim\) (called a simulator) in the ideal world, such that no \(\mathcal {Z}\) can distinguish an execution of the protocol \(\pi \) with the parties and \(\mathcal {A}\) from an execution of the ideal functionality \(\mathcal {F}\) with the parties and \(\mathrm Sim\). To be more formal, we give the following definition.

Definition 1

A protocol \(\pi \) securely realizes an ideal functionality \(\mathcal {F}\) if for any adversary \(\mathcal {A}\), there exists an adversary \(\mathrm Sim\) (called a simulator) such that, for any environment \(\mathcal {Z}\), the following holds:

$$\begin{aligned} \mathrm{REAL}_{\pi ,\mathcal {A},\mathcal {Z},\lambda } \overset{c}{\approx }\ \mathrm{IDEAL}_{\mathcal {F},\mathrm{Sim},\mathcal {Z},\lambda } \end{aligned}$$

where \(\overset{c}{\approx }\) denotes computational indistinguishability, \(\lambda \) denotes the security parameter, \(\mathrm{REAL}_{\pi ,\mathcal {A},\mathcal {Z},\lambda }\) represents the view of \(\mathcal {Z}\) in the real protocol execution with \(\mathcal {A}\) and the parties, and \(\mathrm{IDEAL}_{\mathcal {F},\mathrm{Sim},\mathcal {Z},\lambda }\) represents the view of \(\mathcal {Z}\) in the ideal execution with the functionality \(\mathcal {F}\), the simulator \(\mathrm{Sim}\), and the parties.

The environment’s view includes (without loss of generality) all messages that honest parties send to the adversary as well as the outputs of the honest parties.

Malicious Security: In the malicious security model, an adversary may deviate arbitrarily from the protocol specification. Araki et al. [6] formalize the notion of privacy against malicious adversaries in the client-server model using an indistinguishability-based argument: for any two inputs of the honest parties, the views of the adversary in the protocol executions are indistinguishable. This notion is weaker than full simulation-based malicious security because it does not guarantee protocol correctness in the presence of malicious behavior. It does, however, guarantee the privacy of the protocol. SCSDF [23] and SecureNN [24] consider the above malicious security model.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wei, M., Zhu, W., Cui, L., Li, X., Li, Q. (2022). Privacy Leakage in Privacy-Preserving Neural Network Inference. In: Atluri, V., Di Pietro, R., Jensen, C.D., Meng, W. (eds) Computer Security – ESORICS 2022. ESORICS 2022. Lecture Notes in Computer Science, vol 13554. Springer, Cham. https://doi.org/10.1007/978-3-031-17140-6_7

  • DOI: https://doi.org/10.1007/978-3-031-17140-6_7

  • Publisher: Springer, Cham

  • Print ISBN: 978-3-031-17139-0

  • Online ISBN: 978-3-031-17140-6