
Distributed subgradient methods for saddle-point problems



Abstract:

We present provably correct distributed subgradient methods for general min-max problems with agreement constraints on a subset of the arguments of both the convex and concave parts. Applications include separable constrained minimization problems in which each constraint is a sum of convex functions of the agents' local variables. The proposed algorithm then reduces to primal-dual updates using local subgradients and Laplacian averaging on local copies of the multipliers associated with the global constraints. The framework also encodes minimization problems with semidefinite constraints, which yields novel distributed strategies that are scalable whenever the order of the matrix inequalities is independent of the network size. For general convex-concave functions, our analysis establishes convergence of the running time-averages of the local estimates to a saddle point under periodic connectivity of the communication digraphs. Specifically, choosing the gradient step-sizes in a suitable way, we show that the evaluation error is proportional to 1/√t, where t is the iteration step.
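To make the primal-dual structure concrete, the following is a minimal illustrative sketch (not the authors' exact algorithm) of distributed saddle-point subgradient updates with Laplacian averaging of local multiplier copies. The problem instance, agent count, mixing weights, and step-size constant are all hypothetical: three agents minimize Σᵢ (xᵢ − aᵢ)² subject to the global coupling constraint Σᵢ xᵢ ≤ b, each agent keeping a local copy zᵢ of the multiplier and using step-sizes proportional to 1/√t as in the abstract.

```python
# Hypothetical example: distributed primal-dual subgradient sketch with
# consensus (Laplacian) averaging of local multiplier copies.
# Problem: minimize sum_i (x_i - a_i)^2  s.t.  sum_i x_i <= b,
# each agent i handling its local share x_i - b/n of the constraint.
import math

def run(a=(1.0, 2.0, 3.0), b=3.0, T=20000):
    n = len(a)
    # Metropolis weights for the path graph 1-2-3 (doubly stochastic)
    W = [[2/3, 1/3, 0.0],
         [1/3, 1/3, 1/3],
         [0.0, 1/3, 2/3]]
    x = [0.0] * n      # local primal variables
    z = [0.0] * n      # local copies of the global multiplier
    xbar = [0.0] * n   # running time-averages of the primal iterates
    for t in range(T):
        eta = 0.5 / math.sqrt(t + 1)  # step-size ~ 1/sqrt(t)
        # consensus averaging of the multiplier copies over the network
        z_mix = [sum(W[i][j] * z[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            # dual subgradient ascent on the local constraint share,
            # projected onto the nonnegative orthant
            z[i] = max(0.0, z_mix[i] + eta * (x[i] - b / n))
            # primal subgradient descent on the local Lagrangian
            x[i] -= eta * (2.0 * (x[i] - a[i]) + z[i])
            # update the running time-average (the quantity the
            # abstract's convergence guarantee is stated for)
            xbar[i] += (x[i] - xbar[i]) / (t + 1)
    return xbar
```

For this instance the constrained optimum is x = (0, 1, 2) with multiplier 2, and the running averages approach it; each agent only ever uses its own data and its neighbors' multiplier copies, which is the scalability point made in the abstract.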
Date of Conference: 15-18 December 2015
Date Added to IEEE Xplore: 11 February 2016
Conference Location: Osaka, Japan

