Abstract:
In federated learning, workers periodically upload locally computed weights to a federated learning server (FL server). When Byzantine attacks are present in the system, compromised workers may upload incorrect weights to the FL server, i.e., the information received by the FL server is not always the true values computed by workers. Previously proposed score-based, median-based, and distance-based defense algorithms rely on the following assumptions, which are unrealistic in federated learning: (1) the dataset on each worker is independent and identically distributed (i.i.d.), and (2) the majority of all participating workers are honest. In federated learning, however, each worker may hold a non-i.i.d. private dataset, and malicious workers may form the majority in some iterations. In this paper, we focus on model-poisoning Byzantine attacks and propose a novel reference-dataset-based algorithm, along with a practical Two-Filter algorithm (ToFi), to defend against Byzantine attacks in federated learning. Our experiments highlight the effectiveness of our algorithm compared with previous algorithms in different settings.
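The abstract does not spell out ToFi's two filters, so the sketch below is only a rough illustration of the general reference-dataset idea it describes: the server scores each uploaded weight vector by its loss on a small trusted reference dataset and discards high-loss uploads before averaging. All names here (score_update, REF_X, REF_Y, tau) and the median-based threshold are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of reference-dataset filtering at an FL server
# (illustrative only; not the paper's exact ToFi procedure).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small reference dataset held by the server (linear model y = X @ w).
REF_X = rng.normal(size=(32, 5))
w_true = rng.normal(size=5)
REF_Y = REF_X @ w_true

def score_update(w: np.ndarray) -> float:
    """Mean-squared error of a worker's uploaded weights on the reference dataset."""
    return float(np.mean((REF_X @ w - REF_Y) ** 2))

# Simulated uploads: honest workers send noisy versions of the true weights,
# Byzantine workers send arbitrary (poisoned) weights.
honest = [w_true + 0.05 * rng.normal(size=5) for _ in range(4)]
byzantine = [10.0 * rng.normal(size=5) for _ in range(3)]
uploads = honest + byzantine

# Score every upload on the reference dataset and keep the low-loss ones.
scores = np.array([score_update(w) for w in uploads])
tau = np.median(scores)  # illustrative threshold; the paper's filters differ
kept = [w for w, s in zip(uploads, scores) if s <= tau]

aggregated = np.mean(kept, axis=0)
print(f"kept {len(kept)}/{len(uploads)} updates; "
      f"aggregate error vs w_true: {np.linalg.norm(aggregated - w_true):.3f}")
```

Note that this scoring step does not depend on counting votes across workers, which is why a reference-dataset defense can remain meaningful even when malicious workers outnumber honest ones in an iteration.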
Published in: IEEE Transactions on Network Science and Engineering (Volume: 10, Issue: 6, Nov.-Dec. 2023)