
Byzantine-Robust Stochastic Gradient Descent for Distributed Low-Rank Matrix Completion



Abstract:

To overcome the growing privacy concerns of centralized machine learning, federated learning has been proposed to enable collaborative training of a model with data stored locally on the owners' devices. However, adversarial attacks (e.g., Byzantine attacks in the worst case) still exist in federated learning systems, so the information shared by the data owners may be unreliable. Byzantine-robust aggregation methods, such as median, geometric median and Krum, have been found to perform well in eliminating the negative effects caused by Byzantine attacks. In this paper, we study the distributed low-rank matrix completion problem in a federated learning setting where some data owners are malicious. We combine the Byzantine-robust aggregation rules with stochastic gradient descent (SGD) to solve this problem. Numerical experiments on the Netflix dataset demonstrate that the proposed methods achieve performance comparable to SGD without attacks.
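The pipeline the abstract describes is straightforward to sketch: each data owner computes a stochastic gradient of its local matrix-factorization loss, and the server replaces the plain mean with a robust aggregation rule (coordinate-wise median, geometric median, or Krum). The following is a minimal NumPy sketch of that idea, not the authors' implementation; the problem sizes, learning rate, attack model (large Gaussian noise from the Byzantine workers), and all function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): Byzantine-robust aggregation rules
# combined with SGD for low-rank matrix completion, R ~= U @ V.T.
# All sizes, rates, and the Gaussian attack model are illustrative assumptions.
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of the stacked worker gradients."""
    return np.median(grads, axis=0)

def geometric_median(grads, iters=50, eps=1e-8):
    """Geometric median via Weiszfeld's iteratively reweighted averaging."""
    z = grads.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(grads - z, axis=1) + eps   # distances to current estimate
        w = 1.0 / d
        z = (w[:, None] * grads).sum(axis=0) / w.sum()
    return z

def krum(grads, f):
    """Krum: return the gradient with the smallest summed squared distance
    to its n - f - 2 nearest neighbours (tolerates f Byzantine workers)."""
    n = len(grads)
    d2 = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=2) ** 2
    scores = [np.sort(d2[i])[1:n - f - 1].sum() for i in range(n)]  # skip self (dist 0)
    return grads[int(np.argmin(scores))]

rng = np.random.default_rng(0)
m, n_cols, r = 30, 20, 3
R = rng.normal(size=(m, r)) @ rng.normal(size=(n_cols, r)).T  # ground-truth low-rank matrix
mask = rng.random((m, n_cols)) < 0.5                          # observed entries
obs = np.argwhere(mask)

n_workers, n_byz, lr = 10, 2, 0.05
shards = np.array_split(rng.permutation(obs), n_workers)      # each owner's private ratings

U = rng.normal(scale=0.1, size=(m, r))
V = rng.normal(scale=0.1, size=(n_cols, r))
for step in range(3000):
    grads = []
    for w, shard in enumerate(shards):
        batch = shard[rng.integers(len(shard), size=8)]       # local stochastic mini-batch
        gU, gV = np.zeros_like(U), np.zeros_like(V)
        for i, j in batch:
            err = U[i] @ V[j] - R[i, j]                       # residual on one rating
            gU[i] += err * V[j]
            gV[j] += err * U[i]
        g = np.concatenate([gU.ravel(), gV.ravel()]) / len(batch)
        if w < n_byz:                                         # Byzantine owner sends garbage
            g = rng.normal(scale=10.0, size=g.shape)
        grads.append(g)
    agg = krum(np.stack(grads), f=n_byz)                      # robust server-side aggregation
    U -= lr * agg[:m * r].reshape(m, r)
    V -= lr * agg[m * r:].reshape(n_cols, r)

rmse = np.sqrt(((U @ V.T - R)[mask] ** 2).mean())
print(f"RMSE on observed entries after training: {rmse:.3f}")
```

Krum is used in the update loop because it selects one worker's complete gradient, which copes well with the sparse per-worker gradients that matrix completion produces; coordinate_median or geometric_median can be substituted at the same aggregation step.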
Date of Conference: 02-05 June 2019
Date Added to IEEE Xplore: 08 July 2019
Conference Location: Minneapolis, MN, USA
