Abstract:
Federated learning (FL) is a distributed deep learning framework that has become increasingly popular in recent years. Essentially, FL enables numerous participants and a parameter server to collaboratively train a deep learning model through shared gradients without revealing the private training data. Recent studies, however, have shown that a potential adversary (either the parameter server or a participant) can recover private training data from the shared gradients; such attacks are known as gradient leakage attacks (GLAs). In this study, we first present an overview of FL systems and outline the GLA philosophy. We classify the existing GLAs into two paradigms: optimization-based and analytics-based attacks. In particular, the optimization-based approach formulates the attack as an optimization problem, whereas the analytics-based approach formulates the attack as a problem of solving multiple linear equations. We present a comprehensive review of the state-of-the-art GLA algorithms, followed by a detailed comparison. Based on the observed shortcomings of the existing optimization-based and analytics-based methods, we devise a new generation-based GLA paradigm. We demonstrate the superiority of the proposed GLA in terms of data reconstruction performance and efficiency, thus posing a greater potential threat to federated learning protocols. Finally, we pinpoint a variety of promising future directions for GLA research.
Published in: IEEE Network (Volume: 38, Issue: 2, March 2024)
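The analytics-based paradigm mentioned in the abstract can be illustrated on the simplest possible case. The following is a minimal sketch (not the paper's algorithm) assuming a single fully connected layer y = Wx + b with a squared-error loss: because dL/dW = (dL/dy) xᵀ and dL/db = dL/dy, any row of the shared weight gradient divided by the matching bias-gradient entry recovers the private input x exactly. All names and shapes below are hypothetical.

```python
import numpy as np

# Hypothetical FL client: one fully connected layer y = W @ x + b.
rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private training input (to be leaked)
W = rng.normal(size=(3, 4))       # shared model weights
b = rng.normal(size=3)            # shared model bias
target = rng.normal(size=3)       # private label

# Forward pass and squared-error loss L = 0.5 * ||y - target||^2.
y = W @ x + b
g_y = y - target                  # dL/dy

# Gradients the client would share with the server in FL.
grad_W = np.outer(g_y, x)         # dL/dW = (dL/dy) x^T
grad_b = g_y                      # dL/db = dL/dy

# Analytics-based reconstruction: row i of dL/dW equals (dL/db)_i * x,
# so dividing by any nonzero bias-gradient entry solves for x directly.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x))  # → True: exact leakage
```

Optimization-based GLAs target the same quantity but instead minimize the distance between the gradients of a dummy input and the shared gradients, which works for deep nonlinear models where no closed-form solution like the one above exists.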