Edge-Based Stochastic Gradient Algorithm for Distributed Optimization


Abstract:

This paper investigates distributed optimization problems in which a group of networked nodes collaboratively minimizes the sum of all local objective functions. The local objective function of each node is modeled as the average of a finite set of subfunctions, a formulation motivated by machine learning problems whose large training sets are distributed across, and known privately to, individual computational nodes. An augmented Lagrangian (AL) stochastic gradient algorithm is presented to address the problem; it combines a factorization of the weighted Laplacian with a local unbiased stochastic averaging gradient method. At each iteration, each node evaluates the gradient of only one randomly selected subfunction and applies a variance-reduced stochastic averaging gradient technique to approximate the gradient of its local objective function. Strong convexity of each local subfunction and Lipschitz continuity of its gradient are shown to ensure a linear convergence rate of the proposed algorithm in expectation. Numerical experiments on a logistic regression problem corroborate the theoretical results.
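In this setting, each node i privately holds q_i subfunctions and its local objective is their average, f_i(x) = (1/q_i) * sum_j f_ij(x). The sketch below is one plausible reading of the abstract's unbiased stochastic averaging gradient estimator (a SAGA-style scheme), written in Python with a logistic-loss subfunction matching the reported experiments. All names here (LocalNode, saga_gradient, the step size) are illustrative assumptions, not the authors' code, and the edge-based AL updates built on the weighted-Laplacian factorization are not specified by the abstract, so they are omitted.

    import numpy as np

    class LocalNode:
        """One networked node holding q_i private training samples."""
        def __init__(self, A, b):
            self.A = A                          # (q_i, d) features, one row per subfunction
            self.b = b                          # (q_i,) labels in {-1, +1}
            self.q, d = A.shape
            self.table = np.zeros((self.q, d))  # last evaluated gradient of each subfunction
            self.table_avg = np.zeros(d)        # running average of the gradient table

        def subgrad(self, j, x):
            """Gradient of the j-th logistic-loss subfunction at x."""
            margin = self.b[j] * (self.A[j] @ x)
            return -self.b[j] * self.A[j] / (1.0 + np.exp(margin))

        def saga_gradient(self, x, rng):
            """Unbiased, variance-reduced estimate of the local gradient:
            only one randomly selected subfunction gradient is evaluated."""
            j = rng.integers(self.q)            # pick one subfunction uniformly at random
            g_new = self.subgrad(j, x)
            g_est = g_new - self.table[j] + self.table_avg
            self.table_avg += (g_new - self.table[j]) / self.q  # keep the average consistent
            self.table[j] = g_new               # store the fresh gradient
            return g_est

    # Hypothetical usage: a plain (non-edge-based) gradient step at a single node.
    rng = np.random.default_rng(0)
    node = LocalNode(rng.standard_normal((50, 5)), rng.choice([-1.0, 1.0], size=50))
    x = np.zeros(5)
    for _ in range(1000):
        x -= 0.1 * node.saga_gradient(x, rng)

The estimator trades memory for variance reduction: the stored gradient table costs O(q_i * d) per node, but its expectation equals the exact local gradient while only one subfunction gradient is computed per iteration, which is what enables the linear convergence rate claimed in the abstract.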
Published in: IEEE Transactions on Network Science and Engineering (Volume: 7, Issue: 3, 1 July-Sept. 2020)
Page(s): 1421 - 1430
Date of Publication: 05 August 2019

