DOI: 10.1145/3477114.3488765
Research article, SOSP conference proceedings

Separation of Powers in Federated Learning (Poster Paper)

Published: 25 October 2021

Abstract

In federated learning (FL), model updates from mutually distrusting parties are aggregated in a centralized fusion server. This concentration of model updates simplifies FL's model-building process but can lead to unforeseen information leakage. The problem has become acute with recent FL attacks that reconstruct large fractions of training data from ostensibly "sanitized" model updates.
In this paper, we re-examine the current design of FL systems under the new security model of reconstruction attacks. To break down information concentration, we build TRUDA, a new cross-silo FL system that employs a trustworthy and decentralized aggregation architecture. Exploiting the unique computational properties of model-fusion algorithms, we disassemble all exchanged model updates at parameter granularity and re-stitch them into random partitions designated for multiple hardware-protected aggregators. Each aggregator thus has only a fragmentary, shuffled view of model updates and is oblivious to the model architecture. The security mechanisms deployed in TRUDA effectively mitigate training-data reconstruction attacks while preserving the accuracy of trained models and keeping performance overheads low.
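The partition-and-shuffle idea works because federated averaging is applied element-wise, so averaging commutes with any permutation and split of the parameter vector. The sketch below illustrates this property; it is a minimal illustration assuming plain FedAvg with equal party weights, and all names (`partition_updates`, `aggregate_and_restitch`) are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_updates(flat_updates, n_aggregators, rng):
    """Shuffle parameter indices and split them into random partitions.

    Each aggregator receives only its slice of every party's update,
    so it sees a fragmentary, shuffled view of the model.
    """
    n_params = flat_updates[0].size
    perm = rng.permutation(n_params)
    shards = np.array_split(perm, n_aggregators)
    return perm, [[u[s] for u in flat_updates] for s in shards]

def aggregate_and_restitch(perm, shard_views, n_params):
    """Each aggregator averages its fragment; results are re-stitched."""
    fused = np.empty(n_params)
    offset = 0
    for views in shard_views:
        part = np.mean(views, axis=0)  # element-wise FedAvg on one fragment
        fused[perm[offset:offset + part.size]] = part
        offset += part.size
    return fused

# Three parties, a 10-parameter model, two aggregators.
updates = [rng.normal(size=10) for _ in range(3)]
perm, shard_views = partition_updates(updates, n_aggregators=2, rng=rng)
fused = aggregate_and_restitch(perm, shard_views, n_params=10)

# Shuffled, partitioned aggregation matches plain centralized averaging.
assert np.allclose(fused, np.mean(updates, axis=0))
```

Because the averaged fragments land back at their original (permuted) indices, the fused model is bit-for-bit what a single centralized aggregator would have produced, while no single aggregator ever held a complete update.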



Published In

ResilientFL '21: Proceedings of the First Workshop on Systems Challenges in Reliable and Secure Federated Learning
October 2021
22 pages
ISBN:9781450387088
DOI:10.1145/3477114
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States
Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SOSP '21


Cited By

  • (2024) TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning. IEEE Transactions on Dependable and Secure Computing, pages 1-14. DOI: 10.1109/TDSC.2024.3350206
  • (2024) FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection. IEEE Access, 12:160396-160417. DOI: 10.1109/ACCESS.2024.3482988
  • (2024) Sort-then-insert: A space efficient and oblivious model aggregation algorithm for top-k sparsification in federated learning. Future Generation Computer Systems, 158:1-10. DOI: 10.1016/j.future.2024.04.022
  • (2023) Olive: Oblivious Federated Learning on Trusted Execution Environment against the Risk of Sparsification. Proceedings of the VLDB Endowment, 16(10):2404-2417. DOI: 10.14778/3603581.3603583
  • (2023) A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness and Privacy. Companion Proceedings of the ACM Web Conference 2023, pages 1167-1176. DOI: 10.1145/3543873.3587681
  • (2022) Private Parameter Aggregation for Federated Learning. Federated Learning, pages 313-336. DOI: 10.1007/978-3-030-96896-0_14
  • (2022) Protecting Against Data Leakage in Federated Learning: What Approach Should You Choose? Federated Learning, pages 281-312. DOI: 10.1007/978-3-030-96896-0_13
