Stochastic ADMM for Byzantine-Robust Distributed Learning | IEEE Conference Publication | IEEE Xplore



Abstract:

In this paper, we aim at solving a distributed machine learning problem under Byzantine attacks. In the distributed system, a number of workers (termed Byzantine workers) could send arbitrary messages to the master and bias the learning process, due to data corruption, computation errors, or malicious attacks. Prior work has considered a total variation (TV) norm-penalized approximation formulation to handle Byzantine attacks, in which the TV norm penalty forces the regular workers' local variables to be close to each other while tolerating the outliers sent by the Byzantine workers. The stochastic subgradient method, which does not exploit the problem structure, has been shown to solve this TV norm-penalized approximation formulation. In this paper, we propose a stochastic alternating direction method of multipliers (ADMM) that exploits the special structure of the TV norm penalty. The stochastic ADMM iterates are further simplified so that the per-iteration communication and computation costs are the same as those of the stochastic subgradient method. Numerical experiments on the COVERTYPE and MNIST datasets demonstrate the resilience of the proposed stochastic ADMM to various Byzantine attacks.
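Why a TV norm penalty tolerates Byzantine outliers can be sketched with the subgradient view mentioned in the abstract: each worker, honest or Byzantine, enters the master's update only through a bounded sign(·) term. The sketch below is an illustration of this mechanism under assumed notation, not the paper's exact algorithm; the function name `master_tv_update` and the parameters `lam` (penalty weight) and `lr` (step size) are hypothetical.

```python
import numpy as np

def master_tv_update(x0, worker_vars, lam, lr):
    """One master subgradient step on the penalty sum_i lam * ||x0 - x_i||_1.

    Each worker contributes only sign(x0 - x_i), which lies in [-1, 1]
    per coordinate, so even an arbitrarily large Byzantine message can
    move the master by at most lam * lr per worker per step.
    """
    subgrad = lam * np.sum(np.sign(x0 - worker_vars), axis=0)
    return x0 - lr * subgrad

# Three honest workers near 1.0 and one Byzantine worker sending 1e9:
workers = np.array([[1.0], [1.1], [0.9], [1e9]])
x0 = np.array([0.0])
x0_new = master_tv_update(x0, workers, lam=0.1, lr=0.5)
# The Byzantine value 1e9 contributes the same bounded sign term as an
# honest worker, so x0_new = 0.2 (a step of lam * lr per worker).
```

A plain average of the workers' values would instead be dragged to roughly 2.5e8 by the single Byzantine message, which is the failure mode the TV penalty is designed to avoid.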
Date of Conference: 04-08 May 2020
Date Added to IEEE Xplore: 09 April 2020
Conference Location: Barcelona, Spain
