Abstract:
This article studies the distributed nonconvex optimization problem with nonsmooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of local objective functions, each consisting of a differentiable (possibly nonconvex) cost function and a nonsmooth convex function. This article presents a distributed proximal gradient algorithm for this nonsmooth nonconvex optimization problem. Over time-varying multiagent networks, the proposed algorithm updates the local variable estimates with a constant step-size at the cost of multiple consensus steps per iteration, where the number of communication rounds increases over time. We prove that the generated local variables achieve consensus and converge to the set of critical points. Finally, we verify the effectiveness of the proposed algorithm through numerical simulations.
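To illustrate the kind of update the abstract describes, here is a minimal sketch of a distributed proximal gradient iteration with a constant step-size and a growing number of consensus (averaging) rounds per iteration. The local quadratic costs f_i, the l1 regularizer, the ring-network Metropolis weight matrix W, and the round schedule T(t) = t + 1 are all illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, lam, alpha = 5, 3, 0.1, 0.05

# Hypothetical local smooth costs f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.standard_normal((n_agents, dim, dim))
b = rng.standard_normal((n_agents, dim))

def grad_f(i, x):
    # Gradient of the i-th local smooth cost.
    return A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, tau):
    # Proximal operator of tau * lam * ||.||_1 (soft-thresholding),
    # standing in for the nonsmooth convex part of the objective.
    return np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)

# Doubly stochastic mixing matrix for a fixed ring network
# (Metropolis weights); the paper allows time-varying networks.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

x = rng.standard_normal((n_agents, dim))  # local variable estimates
for t in range(200):
    # Local gradient step with a constant step-size alpha.
    y = x - alpha * np.array([grad_f(i, x[i]) for i in range(n_agents)])
    # Multiple consensus rounds; their number grows with the iteration
    # index, matching "the number of communication rounds increases
    # over time" in the abstract (schedule t + 1 is an assumption).
    for _ in range(t + 1):
        y = W @ y
    # Proximal step for the nonsmooth convex regularizer.
    x = prox_l1(y, alpha)

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```

Running this sketch, the consensus error shrinks toward zero as the increasing number of averaging rounds drives the local estimates together, while the gradient and proximal steps push them toward a critical point of the regularized objective.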
Published in: IEEE Transactions on Control of Network Systems (Volume: 10, Issue: 2, June 2023)