ABSTRACT
This article examines poisoning attacks on federated learning. While recent studies actively explore this topic for classification tasks such as image recognition, few studies address it for regression models. In particular, this research investigates the impact of poisoning attacks on the performance of load forecasting, which has hardly been studied in academia. We implement two poisoning attacks in a federated learning setting and run experiments to evaluate their impact on the prediction accuracy of load forecasting. Based on the initial results, we plan to bring a few research questions to the audience for open discussion.
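As a rough illustration of this kind of experimental setup, the sketch below shows a minimal federated-averaging loop for a linear load-forecasting regressor in which one malicious client poisons its local data (here, by flipping the sign of its reported loads) before computing its update. The synthetic data, the linear model, the attack form, and all names are illustrative assumptions, not the implementation described in the article.

```python
# Minimal sketch (hypothetical): FedAvg for a linear load-forecasting model,
# with one client applying a simple data-poisoning attack on its targets.
import numpy as np

rng = np.random.default_rng(0)
d = 8
w_true = rng.normal(size=d)  # shared "true" relation between features and load

def make_client_data(n=200):
    # Synthetic features (e.g., hour/temperature encodings) and load targets.
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.01, epochs=5):
    # Plain gradient descent on mean-squared error for a linear regressor.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def poison(X, y, scale=-1.0):
    # Naive data-poisoning attack: flip/scale the reported load values.
    return X, scale * y

n_clients = 5
clients = [make_client_data() for _ in range(n_clients)]
malicious = {0}  # client 0 poisons its local data

w_global = np.zeros(d)
for _ in range(20):  # communication rounds
    updates = []
    for cid, (X, y) in enumerate(clients):
        if cid in malicious:
            X, y = poison(X, y)
        updates.append(local_update(w_global, X, y))
    w_global = np.mean(updates, axis=0)  # FedAvg aggregation

# Evaluate on clean hold-out data to observe the attack's impact on accuracy.
X_test, y_test = make_client_data()
mse = np.mean((X_test @ w_global - y_test) ** 2)
print(f"Test MSE with one poisoned client: {mse:.3f}")
```

Rerunning the loop with `malicious` empty gives a clean baseline, so the degradation in test error attributable to the poisoned client can be measured directly.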