
Enhancing Federated Learning Robustness Using Data-Agnostic Model Pruning

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13936)


Abstract

Federated learning enables multiple data owners with a common objective to participate in a machine learning task without sharing their raw data. In each round, clients train local models on their own data and then upload the model parameters to update the global model. Recent studies have shown that this multi-agent form of machine learning is prone to adversarial manipulation: Byzantine attackers masquerading as benign clients can stealthily disrupt or destroy the learning process. In this paper, we propose FLAP, a post-aggregation model pruning technique that enhances the Byzantine robustness of federated learning by effectively disabling the malicious and dormant components in the learned neural network models. Our technique is data-agnostic: it requires no client to submit its dataset or training output, and is therefore well aligned with the data locality of federated learning. FLAP is performed by the server right after aggregation, making it compatible with arbitrary aggregation algorithms and existing defensive techniques. Our empirical study demonstrates the effectiveness of FLAP under various settings: it reduces the error rate by up to 10.2% against state-of-the-art adversarial models, and increases the average accuracy by up to 22.1% across different adversarial settings, mitigating adversarial impacts while preserving learning fidelity.
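To make the workflow concrete, the sketch below illustrates the pipeline the abstract describes: the server first aggregates client updates (plain FedAvg here) and then prunes low-saliency units from the aggregated model, without touching any client data. This is a minimal sketch under stated assumptions; the L1-magnitude saliency, the pruning `rate`, and all function names are illustrative stand-ins, not FLAP's actual criterion, which is defined in the paper.

```python
# Minimal aggregate-then-prune sketch (NOT the paper's exact criterion).
# Model: a list of dense weight matrices; unit j of layer l owns column j of
# W[l] and row j of W[l + 1]. All names and parameters here are hypothetical.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: size-weighted average of per-layer weight matrices."""
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

def prune_post_aggregation(weights, rate=0.1):
    """Server-side, data-agnostic pruning applied right after aggregation."""
    pruned = [w.copy() for w in weights]
    for l in range(len(pruned) - 1):                  # never prune the output layer
        n_units = pruned[l].shape[1]
        saliency = np.abs(pruned[l + 1]).sum(axis=1)  # L1 norm of outgoing weights
        k = max(1, int(rate * n_units))               # floor of one unit, cf. note 3
        victims = np.argsort(saliency)[:k]            # least salient units
        pruned[l][:, victims] = 0.0                   # zero incoming weights
        pruned[l + 1][victims, :] = 0.0               # zero outgoing weights
    return pruned

# One example round: 5 equally sized clients training a 3-layer MLP.
rng = np.random.default_rng(0)
shapes = [(784, 128), (128, 64), (64, 10)]
clients = [[rng.normal(size=s) for s in shapes] for _ in range(5)]
global_model = prune_post_aggregation(fedavg(clients, [100] * 5), rate=0.1)
```

Zeroing both the incoming column and the outgoing row disables a unit without changing the model architecture, which is what keeps a post-aggregation step compatible with any upstream aggregation rule; the `max(1, ...)` floor mirrors note 3 below.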


Notes

  1. Our source code is hosted at https://github.com/mark-h-meng/flap.

  2. https://github.com/pps-lab/fl-analysis.

  3. One unit is pruned if the layer has fewer than 100 units.

  4. We assume perfect estimation, in which 20% of clients are excluded due to high loss values and another 20% are excluded due to low accuracy (see the sketch after these notes).
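For illustration, the "perfect estimation" of note 4 can be read as the hypothetical server-side filter sketched below, applied before aggregation; the function name, the quantile cut-offs, and the example values are assumptions for the sketch, not the evaluated defense's actual implementation.

```python
# Hypothetical reading of note 4: exclude the 20% of clients with the highest
# loss and the 20% with the lowest accuracy before aggregating.
import numpy as np

def filter_clients(losses, accuracies, frac=0.2):
    """Return the indices of clients kept for aggregation."""
    losses, accuracies = np.asarray(losses), np.asarray(accuracies)
    n = len(losses)
    k = int(frac * n)
    high_loss = set(np.argsort(losses)[n - k:])   # worst (highest) losses
    low_acc = set(np.argsort(accuracies)[:k])     # worst (lowest) accuracies
    return [i for i in range(n) if i not in high_loss | low_acc]

# With 5 clients, 20% rounds to one exclusion per criterion.
print(filter_clients([0.3, 2.1, 0.4, 0.5, 1.8], [0.9, 0.7, 0.88, 0.4, 0.45]))
# -> [0, 2, 4]: client 1 has the highest loss, client 3 the lowest accuracy.
```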


Acknowledgment

This work was supported by The University of Queensland under the NSRSG grant 4018264-617225, Cyber Research Seed Funding, the Global Strategy and Partnerships Seed Funding, and Agency for Science, Technology and Research (A*STAR) Singapore under the ACIS scholarship.

Author information

Corresponding author

Correspondence to Guangdong Bai.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Meng, M.H., Teo, S.G., Bai, G., Wang, K., Dong, J.S. (2023). Enhancing Federated Learning Robustness Using Data-Agnostic Model Pruning. In: Kashima, H., Ide, T., Peng, WC. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science (LNAI), vol 13936. Springer, Cham. https://doi.org/10.1007/978-3-031-33377-4_34


  • DOI: https://doi.org/10.1007/978-3-031-33377-4_34


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33376-7

  • Online ISBN: 978-3-031-33377-4

  • eBook Packages: Computer Science (R0)
