ABSTRACT
Federated learning enables participants to collaboratively build powerful machine learning models while exploiting privacy-protection mechanisms to safeguard their data. However, the security mechanisms of federated learning remain imperfect: models are vulnerable to poisoning attacks that inject malicious training samples, and model-substitution attacks can eliminate the learning effect of the final global model. Although numerous security protocols have been formulated to defend against such attacks, most are active defenses (e.g., distillation-based defense and regularization-based adversarial training) rather than passive ones. In the present study, a blockchain-based passive defense model is proposed to protect the global model. Specifically, when a participant updates its local model, the participation record, the user's fingerprint, and other key information are stored on the chain, so that malicious attacks can be traced and detected. To ensure data integrity and confidentiality, model data is encrypted before being updated via the blockchain. Rewards for active participation in federated learning are recorded, and violators are punished. Accordingly, a dynamic protection mechanism for federated learning is achieved.
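The abstract's core mechanism is that each local-model update leaves an on-chain record (participant identity plus a fingerprint of the update), so that any later tampering or substitution can be traced. A minimal sketch of that idea, under the assumption that the fingerprint is a SHA-256 hash of the (already encrypted) update and that the ledger is a simple hash-linked chain; the class and field names here are illustrative, not the paper's API:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class UpdateLedger:
    """Minimal append-only chain recording federated-learning model updates."""

    def __init__(self):
        genesis = {"index": 0, "prev_hash": "0" * 64,
                   "payload": "genesis", "hash": ""}
        genesis["hash"] = self._block_hash(genesis)
        self.chain = [genesis]

    def _block_hash(self, block: dict) -> str:
        # Hash every field except the stored hash itself.
        body = {k: v for k, v in block.items() if k != "hash"}
        return sha256(json.dumps(body, sort_keys=True).encode())

    def record_update(self, participant_id: str, model_bytes: bytes) -> dict:
        """Store the participant's id and a fingerprint of its model update."""
        block = {
            "index": len(self.chain),
            "prev_hash": self.chain[-1]["hash"],
            "payload": {"participant": participant_id,
                        "model_fingerprint": sha256(model_bytes)},
            "hash": "",
        }
        block["hash"] = self._block_hash(block)
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Detect tampering: each block must hash correctly and link to its parent."""
        for prev, cur in zip(self.chain, self.chain[1:]):
            if cur["prev_hash"] != prev["hash"]:
                return False
            if cur["hash"] != self._block_hash(cur):
                return False
        return True
```

Because every block commits to its predecessor's hash, rewriting any recorded update (e.g., a model-substitution attempt) invalidates the chain from that point onward, which is what makes the passive, after-the-fact tracing described above possible.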