
A privacy preserving federated learning scheme using homomorphic encryption and secret sharing


Abstract

The performance of machine learning models largely depends on the amount of training data. However, as privacy awareness grows, data sharing has become increasingly difficult. Federated learning alleviates this difficulty by enabling joint machine learning without centralizing data. Although it shares model parameters instead of raw data, privacy threats such as inference attacks remain because the parameters or updates are exposed. In this paper, we propose a privacy preserving scheme for federated learning that combines the homomorphism of secret sharing with homomorphic encryption. Our scheme ensures the confidentiality of local parameters, tolerates collusion among a bounded number of parties, tolerates the dropout of some clients, performs aggregation without sharing keys, and uses a simple interaction process. We use the automatic protocol verification tool ProVerif to verify its cryptographic functionality, analyze its theoretical complexity, and compare it with similar schemes. Experiments show that our scheme has lower running time than several comparable schemes.
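The abstract names additively homomorphic encryption as one of the scheme's building blocks. As a minimal, hedged sketch of that property alone (not the construction proposed in the paper), the toy Paillier-style example below shows that multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, which is what allows a server to aggregate encrypted values without decrypting them; the tiny primes and all names here are illustrative assumptions only.

```python
import random
from math import gcd

# Toy Paillier key generation with tiny fixed primes -- insecure, for illustration only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1                                     # a standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2), L(x) = (x - 1) / n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2 mod n.
c1, c2 = encrypt(7), encrypt(35)
assert decrypt((c1 * c2) % n2) == (7 + 35) % n
```

In federated learning the plaintexts would be encoded model parameters rather than small integers; this sketch only demonstrates the homomorphic property itself, not the paper's protocol.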



Funding

This work is supported by: the Fundamental Research Funds for the Central Universities (UESTC) (Grant No. ZYGX2020ZB025) and Sichuan Science and Technology Program (Grant No. 2021YFG0157).

Author information


Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by [ZS], [ZY], [AH], [FL] and [XD]. The first draft of the manuscript was written by [ZS] and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xuyang Ding.

Ethics declarations

Conflict of interest

We declare that we have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Correctness analysis

We illustrate why the server can successfully restore \(\sum _{m\in U}{k_m}\).

In round 2, the server S has \(\vert U'\vert \) (\(\vert U'\vert >t \)) shares of each \(k_m\) (\(m\in U\)), that is, \(y_{m,i}=f_m(x_i)\). S uses t of the \(\vert U'\vert \) shares \( y_{m,i}\ (i=1,2,\ldots ,t)\) of each \(k_m\) to construct:

$$\begin{aligned} F(x)=\sum \limits _{i=1}^t{\sum \limits _{m\in U}{y_{m,i}\frac{\prod \limits _{1\le j\le t, j\ne i}{(x-x_j)} }{\prod \limits _{1\le j\le t, j\ne i}{(x_i-x_j)}}}}\ \textrm{mod}\ q \end{aligned}$$
(A1)

Since any t of the \(\vert U'\vert \) shares \(y_{m,i}\ (i=1,2,\ldots ,t)\) can be selected to restore \( k_m \) according to the Lagrange interpolation formula:

$$\begin{aligned} \begin{aligned} k_m=&F_m(0)\\=&\sum \limits _{i=1}^t{y_{m,i}\frac{\prod \limits _{1\le j\le t, j\ne i}{x_j}}{\prod \limits _{1\le j\le t, j\ne i}{(x_j-x_i)}}}\ \textrm{mod}\ q \end{aligned} \end{aligned}$$
(A2)

it follows that:

$$\begin{aligned} \begin{aligned} F(0)=&\sum \limits _{i=1}^t{\sum \limits _{m\in U}{y_{m,i}\frac{\prod \limits _{1\le j\le t, j\ne i}{x_j}}{\prod \limits _{1\le j\le t, j\ne i}{(x_j-x_i)}}}}\ \textrm{mod}\ q\\ =&\sum \limits _{m\in U}{\sum \limits _{i=1}^t}{y_{m,i}\frac{\prod \limits _{1\le j\le t, j\ne i}{x_j}}{\prod \limits _{1\le j\le t, j\ne i}{(x_j-x_i)}}}\ \textrm{mod}\ q\\ =&\sum \limits _{m\in U}{F_m(0)}\ \textrm{mod}\ q\\ =&\sum \limits _{m\in U}{k_m}\ \textrm{mod}\ q\\ \end{aligned} \end{aligned}$$
(A3)

We can see that the server correctly restores \( \sum _{m\in U}{k_m} \) owing to the homomorphism of secret sharing.
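For a concrete check of this argument, the short Python sketch below (the modulus q, the threshold t, and the share and client counts are illustrative assumptions, not parameters from the paper) shares each client's key with Shamir's scheme, lets the server add the shares pointwise, and interpolates the aggregate shares at zero to recover \(\sum _{m\in U}{k_m}\ \textrm{mod}\ q\), matching Eqs. (A1)–(A3).

```python
import random

q = 2**61 - 1          # a large prime modulus (illustrative choice)
t, n_shares = 3, 5     # reconstruction threshold t and number of share holders

def share(secret, t, n):
    """Shamir (t, n) sharing: f(0) = secret; the shares are (x_i, f(x_i)) for x_i = 1..n."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, e, q) for e, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0: sum_i y_i * prod_{j != i} x_j / (x_j - x_i) mod q, as in Eq. (A2)."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * xj % q
                den = den * (xj - xi) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

# each client m holds a key k_m and distributes Shamir shares of it
keys = [random.randrange(q) for _ in range(4)]
all_shares = [share(k, t, n_shares) for k in keys]

# the server adds the received shares pointwise (homomorphism of secret sharing),
# then reconstructs from any t aggregated shares to obtain sum(k_m) mod q
agg = [(x, sum(s[i][1] for s in all_shares) % q)
       for i, (x, _) in enumerate(all_shares[0])]
assert reconstruct(agg[:t]) == sum(keys) % q
```

The reconstruct function implements Eq. (A2); adding the shares pointwise before interpolating corresponds to interchanging the two sums in Eq. (A3).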

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shi, Z., Yang, Z., Hassan, A. et al. A privacy preserving federated learning scheme using homomorphic encryption and secret sharing. Telecommun Syst 82, 419–433 (2023). https://doi.org/10.1007/s11235-022-00982-3

