Abstract
Centralized machine learning methods that rely on data collected from multiple sources raise serious privacy concerns. Federated learning (FL) has therefore emerged as a promising alternative: models are trained on local devices, the data never leave those devices, and only model weights are shared with a central server. However, keeping the training data local does not by itself guarantee privacy, since training data can be inferred from the shared model weights. To overcome this difficulty, we propose AddShare, an FL system that protects the privacy of the local model weights while still allowing the computation of a global model. AddShare uses additive secret sharing, a simple yet efficient technique for safeguarding sensitive information without compromising predictive accuracy, and integrates additional components to further reduce computational cost and strengthen privacy. Extensive experiments across multiple datasets yielded very promising results: AddShare does not adversely impact model accuracy compared to the widely used FedAvg algorithm, while it ensures the privacy of the local models and reduces computational cost by organizing clients into groups under a single aggregating server, a capability not available in other related solutions.
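To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how additive secret sharing lets a single server aggregate model weights without seeing any individual client's update: each client splits its weight vector into random additive shares, distributes them within its group, and each group member forwards only the sum of the shares it holds to the server. The function names, the fixed group of three clients, and the toy weight vectors are illustrative assumptions.

```python
import numpy as np

def make_shares(weights: np.ndarray, n_parties: int, rng: np.random.Generator):
    """Split a weight vector into n additive shares that sum to the original.

    The first n-1 shares are random noise; the last share is chosen so that
    all shares sum exactly to `weights`. Fewer than n shares reveal only noise.
    """
    shares = [rng.normal(size=weights.shape) for _ in range(n_parties - 1)]
    shares.append(weights - sum(shares))
    return shares

# --- Illustrative round with three clients in one group (hypothetical setup) ---
rng = np.random.default_rng(0)
local_weights = [rng.normal(size=4) for _ in range(3)]  # stand-ins for trained local models

# 1) Each client i splits its weights into 3 shares and sends share j to client j.
shares = [make_shares(w, n_parties=3, rng=rng) for w in local_weights]

# 2) Each client j sums the shares it received (one from every client, including itself)
#    and sends only that masked partial sum to the aggregating server.
partial_sums = [sum(shares[i][j] for i in range(3)) for j in range(3)]

# 3) The server adds the partial sums and averages; the result equals plain FedAvg,
#    yet the server never observes any individual client's weights.
global_weights = sum(partial_sums) / 3
assert np.allclose(global_weights, sum(local_weights) / 3)
print(global_weights)
```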
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Asare, B.A., Branco, P., Kiringa, I., Yeap, T. (2024). AddShare: A Privacy-Preserving Approach for Federated Learning. In: Katsikas, S., et al. Computer Security. ESORICS 2023 International Workshops. ESORICS 2023. Lecture Notes in Computer Science, vol 14398. Springer, Cham. https://doi.org/10.1007/978-3-031-54204-6_18
DOI: https://doi.org/10.1007/978-3-031-54204-6_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54203-9
Online ISBN: 978-3-031-54204-6