Abstract:
Federated Learning (FL) is vulnerable to various attacks, including poisoning and inference. However, existing offensive security evaluations of FL assume that attackers know the data distribution. In this paper, we present a novel attack in which an FL participant carries out inference and privacy-abuse attacks against FL by leveraging Generative Adversarial Networks (GANs). The attacker, impersonating a benign participant, uses a GAN to generate a dataset similar to that of the other participants and then covertly poisons the data. We demonstrate the attack and evaluate it on two datasets: an IoT network traffic dataset and MNIST. The results reveal that protection against such attacks is critically essential if FL is to be successfully used in IoT applications.
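The core of the attack described above is a malicious participant who generates data resembling other participants' data and injects mislabeled copies into its local training set before contributing a model update. The following minimal NumPy sketch illustrates that poisoning step only; the GAN itself is not shown (its output is simulated with random samples), and the label-flipping rule, array shapes, and variable names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for GAN output: in the attack, the malicious
# participant trains a GAN to mimic other participants' data; here we
# simply simulate 100 synthetic feature vectors.
synthetic_x = rng.normal(size=(100, 8))

# Covert poisoning: assign a deliberately wrong label to every
# GAN-generated sample (label flipping, used here as one plausible
# poisoning choice).
flipped_class = 1
poison_y = np.full(100, flipped_class)

# The attacker mixes the poisoned samples into its own local data
# before computing the model update it sends to the FL server.
local_x = rng.normal(size=(400, 8))
local_y = rng.integers(0, 2, size=400)
train_x = np.vstack([local_x, synthetic_x])
train_y = np.concatenate([local_y, poison_y])
```

Because the poisoned samples are a small fraction of an otherwise normal-looking local dataset, the resulting update is difficult to distinguish from a benign one, which is what makes the attack covert.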
Published in: IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
Date of Conference: 20 May 2024
Date Added to IEEE Xplore: 13 August 2024