A Framework for Evaluating Client Privacy Leakages in Federated Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12308)

Abstract

Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to only share local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks that intrude on client privacy with respect to its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols. Our experiments additionally include some preliminary mitigation strategies to highlight the importance of providing a systematic attack evaluation framework towards an in-depth understanding of the various forms of client privacy leakage threats in federated learning and developing theoretical foundations for attack mitigation.
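
To make the gradient leakage threat concrete, below is a minimal sketch of a gradient-matching reconstruction attack in the spirit of deep leakage from gradients (Zhu et al., NeurIPS 2019): the adversary observes a client's shared gradient and iteratively optimizes a randomly initialized dummy input and label so that the gradients they induce match the observed ones. This is an illustrative example only, not the paper's evaluation framework; the PyTorch usage, toy model, input shapes, and L-BFGS settings are all assumptions.

```python
# Hedged sketch of a gradient-matching reconstruction attack.
# All names, shapes, and hyperparameters are illustrative assumptions,
# not the paper's exact attack configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy client model; a real setting would use the shared FL model architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
criterion = nn.CrossEntropyLoss()

# --- Client side: compute the local gradient that would be shared. ---
x_true = torch.rand(1, 3, 32, 32)          # private training example
y_true = torch.tensor([3])                 # private label
loss = criterion(model(x_true), y_true)
shared_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# --- Adversary side: reconstruct (x, y) by matching gradients. ---
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)   # soft label, optimized jointly
optimizer = torch.optim.LBFGS([x_dummy, y_dummy], lr=1.0)

def closure():
    optimizer.zero_grad()
    # Cross-entropy with a soft dummy label so the label can be optimized too.
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # L2 distance between dummy gradients and the observed client gradients.
    grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for step in range(50):
    mismatch = optimizer.step(closure)

print(f"final gradient mismatch: {mismatch.item():.6f}")
print(f"reconstruction error (MSE): {F.mse_loss(x_dummy.detach(), x_true).item():.6f}")
```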

Acknowledgements

The authors acknowledge the partial support from NSF CISE SaTC 1564097, NSF 2038029 and an IBM Faculty Award.

Author information

Corresponding author

Correspondence to Wenqi Wei.

7 Appendices

7.1 Proof of Theorem 1

Assumption 1

(Convexity). We say f(x) is convex if

$$\begin{aligned} f(\alpha x + (1-\alpha )x') \le \alpha f(x) + (1-\alpha ) f(x'), \end{aligned}$$
(3)

where \(x,x'\) are data points in \(\mathbb {R}^d\) and \(\alpha \in [0,1]\).

Lemma 1

If a convex f(x) is differentiable, we have:

$$\begin{aligned} f(x') -f(x) \ge \langle \nabla f(x), x'-x \rangle . \end{aligned}$$
(4)

Proof

Equation 3 can be rewritten as \(\frac{f(x'+\alpha (x-x'))-f(x')}{\alpha } \le f(x)-f(x').\) Letting \(\alpha \rightarrow 0\), the left-hand side tends to \(\langle \nabla f(x'), x-x' \rangle\), which gives Eq. 4 with the roles of x and \(x'\) exchanged. This completes the proof.

Assumption 2

(Lipschitz Smoothness). If the differentiable function f(x) has a Lipschitz continuous gradient with Lipschitz constant L, we have:

$$\begin{aligned} || \nabla f(x) - \nabla f(x')|| \le L||x-x'||. \end{aligned}$$
(5)

Lemma 2

If f(x) is Lipschitz-smooth and the iterates follow \(x^{t+1} = x^{t} - \frac{1}{L}\nabla f(x^{t})\), we have:

$$\begin{aligned} f(x^{t+1}) -f(x^{t}) \le - \frac{1}{2L}||\nabla f(x^t)||^2_2 \end{aligned}$$
(6)

Proof

Using the Taylor expansion of f(x) and the uniform bound on its Hessian implied by Lipschitz smoothness, we have

$$\begin{aligned} f(x') \le f(x) + \langle \nabla f(x), x'-x \rangle + \frac{L}{2}||x'-x||^2_2. \end{aligned}$$
(7)

By inserting \(x' = x - \frac{1}{L}\nabla f(x)\) into Eq. 7, we have:

$$\begin{aligned} f(x - \frac{1}{L}\nabla f(x)) -f(x)&\le - \frac{1}{L} \langle \nabla f(x), \nabla f(x) \rangle + \frac{L}{2}||\frac{1}{L} \nabla f(x)||^2_2 = - \frac{1}{2L}||\nabla f(x)||^2_2 \end{aligned}$$

Lemma 3

(Co-coercivity). A convex and Lipschitz-smooth f(x) satisfies:

$$\begin{aligned} \langle \nabla f(x') - \nabla f(x), x'-x \rangle \ge \frac{1}{L} || \nabla f(x') - \nabla f(x)||^2 \end{aligned}$$
(8)

Proof

Due to Eq. 5,

$$\begin{aligned} \langle \nabla f(x') - \nabla f(x), x'-x \rangle&\ge \langle \nabla f(x') - \nabla f(x), \frac{1}{L}(\nabla f(x') - \nabla f(x)) \rangle = \frac{1}{L} || \nabla f(x') - \nabla f(x)||^2 \end{aligned}$$

Then we can prove the attack convergence theorem: \(f(x^T)-f(x^*) \le \frac{2L||x^0-x^*||^2}{T}.\)

Proof

Let f(x) be convex and Lipschitz-smooth. It follows that

$$\begin{aligned} ||x^{t+1}-x^*||^2_2&= ||x^t-x^*-\frac{1}{L} \nabla f(x^t)||^2_2 \nonumber \\&= ||x^t-x^*||^2_2 - 2\frac{1}{L}\langle x^t-x^*, \nabla f(x^t) \rangle + \frac{1}{L^2}||\nabla f(x^t)||^2_2 \nonumber \\&\le ||x^t-x^*||^2_2 - \frac{1}{L^2}||\nabla f(x^t)||^2_2 \end{aligned}$$
(9)

Equation 9 holds due to Eq. 8 in Lemma 3, noting that \(\nabla f(x^*)=0\). Recalling Eq. 6 in Lemma 2, we have:

$$\begin{aligned} f(x^{t+1}) -f(x^*) \le f(x^{t}) -f(x^*) - \frac{1}{2L}||\nabla f(x^t)||^2_2. \end{aligned}$$
(10)

By applying convexity (Lemma 1), the Cauchy-Schwarz inequality, and the fact that \(||x^t-x^*||\) is non-increasing by Eq. 9,

$$\begin{aligned} f(x^t)-f(x^*)&\le \langle \nabla f(x^t), x^t-x^* \rangle \nonumber \\&\le ||\nabla f(x^t)||_2||x^t-x^*|| \nonumber \\&\le ||\nabla f(x^t)||_2||x^0-x^*||. \end{aligned}$$
(11)

Then we insert Eq. 11 into Eq. 10:

$$\begin{aligned}&f(x^{t+1}) -f(x^*) \le f(x^{t}) -f(x^*) - \frac{1}{2L}\frac{1}{||x^0-x^*||^2}( f(x^t)-f(x^*))^2 \nonumber \\&\Rightarrow \frac{1}{f(x^{t}) -f(x^*)} \le \frac{1}{f(x^{t+1}) -f(x^*)} - \beta \frac{f(x^t)-f(x^*)}{f(x^{t+1})-f(x^*)} \end{aligned}$$
(12)
$$\begin{aligned}&\Rightarrow \frac{1}{f(x^{t}) -f(x^*)} \le \frac{1}{f(x^{t+1}) -f(x^*)} - \beta \end{aligned}$$
(13)
$$\begin{aligned}&\Rightarrow \beta \le \frac{1}{f(x^{t+1}) -f(x^*)} - \frac{1}{f(x^{t}) -f(x^*)}, \end{aligned}$$
(14)

where \(\beta = \frac{1}{2L}\frac{1}{||x^0-x^*||^2}\). Equation 12 is obtained by dividing both sides by \((f(x^{t+1}) -f(x^*))(f(x^{t}) -f(x^*))\), and Eq. 13 uses \(f(x^{t+1}) -f(x^*) \le f(x^{t}) -f(x^*)\). Then, by summing over \(t=0,1,\ldots,T-1\) and telescoping, we have

$$\begin{aligned}&T\beta \le \frac{1}{f(x^{T}) -f(x^*)} - \frac{1}{f(x^{0}) -f(x^*)} \le \frac{1}{f(x^{T}) -f(x^*)} \end{aligned}$$
(15)
$$\begin{aligned}&\Rightarrow \frac{T}{2L}\frac{1}{||x^0-x^*||^2} \le \frac{1}{f(x^{T}) -f(x^*)} \end{aligned}$$
(16)
$$\begin{aligned}&\Rightarrow f(x^{T}) -f(x^*) \le \frac{2L||x^0-x^*||^2}{T}. \end{aligned}$$
(17)

This completes the proof.
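
As a quick numerical illustration of the bound \(f(x^T)-f(x^*) \le \frac{2L||x^0-x^*||^2}{T}\), the following sketch runs gradient descent with step size 1/L on a convex, Lipschitz-smooth quadratic and checks that the optimality gap stays below the bound for every T. The quadratic objective, dimension, and iteration budget are illustrative assumptions, not part of the paper.

```python
# Hedged numerical check of the convergence bound
#     f(x^T) - f(x^*) <= 2 L ||x^0 - x^*||^2 / T
# for gradient descent with step size 1/L on a convex, L-smooth quadratic.
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.standard_normal((d, d))
Q = A.T @ A + np.eye(d)             # symmetric positive definite Hessian
b = rng.standard_normal(d)

f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star = np.linalg.solve(Q, b)      # unique minimizer x^*
L = np.linalg.eigvalsh(Q).max()     # Lipschitz constant of the gradient

x = rng.standard_normal(d)          # x^0
x0 = x.copy()
for T in range(1, 201):
    x = x - grad(x) / L             # x^{t+1} = x^t - (1/L) grad f(x^t)
    gap = f(x) - f(x_star)
    bound = 2 * L * np.dot(x0 - x_star, x0 - x_star) / T
    assert gap <= bound + 1e-12, (T, gap, bound)

print("O(1/T) bound holds for all checked T")
```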

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Wei, W. et al. (2020). A Framework for Evaluating Client Privacy Leakages in Federated Learning. In: Chen, L., Li, N., Liang, K., Schneider, S. (eds) Computer Security – ESORICS 2020. ESORICS 2020. Lecture Notes in Computer Science, vol 12308. Springer, Cham. https://doi.org/10.1007/978-3-030-58951-6_27

  • DOI: https://doi.org/10.1007/978-3-030-58951-6_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58950-9

  • Online ISBN: 978-3-030-58951-6

  • eBook Packages: Computer Science, Computer Science (R0)
