Abstract:
This paper investigates a class of distributed optimization problems where the objective function is given by the sum of twice-differentiable convex functions and a convex non-differentiable part. The setting assumes a network of communicating agents in which each agent's objective is captured by a summand of the aggregate objective function, and agents cooperate through information exchange with their neighbors. We devise a second-order method by transforming the problem into a continuously differentiable form using proximal operators and truncating the Taylor expansion of the Hessian inverse so that a distributed implementation of the algorithm is possible. We prove global linear convergence (without backtracking) under standard strong convexity assumptions, and further demonstrate the effectiveness of our scheme through numerical simulations.
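The Hessian-inverse truncation mentioned in the abstract can be illustrated with a minimal sketch. Below, the splitting H = D - B (D the diagonal part an agent could hold locally, B the off-diagonal coupling) and the Neumann-series form of the Taylor expansion are illustrative assumptions, not necessarily the paper's exact construction:

```python
import numpy as np

def truncated_hessian_inverse(H, K):
    """Approximate H^{-1} by the truncated series
    sum_{k=0}^{K} (D^{-1} B)^k D^{-1}, where D = diag(H) and B = D - H.
    Keeping only K terms lets each agent use local information,
    at the cost of an approximation error that shrinks as K grows
    (assuming the spectral radius of D^{-1} B is below 1)."""
    D_inv = np.diag(1.0 / np.diag(H))
    B = np.diag(np.diag(H)) - H
    M = D_inv @ B
    approx = np.zeros_like(H)
    term = D_inv.copy()
    for _ in range(K + 1):
        approx += term          # add the k-th series term
        term = M @ term         # next term: (D^{-1} B)^{k+1} D^{-1}
    return approx

# Hypothetical example: a strongly convex, diagonally dominant Hessian,
# for which the series converges.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 20.0 * np.eye(5)
H_inv = np.linalg.inv(H)
err0 = np.linalg.norm(truncated_hessian_inverse(H, 0) - H_inv)
err3 = np.linalg.norm(truncated_hessian_inverse(H, 3) - H_inv)
```

More terms in the truncation give a better approximation of the Newton direction, so `err3` is smaller than `err0`; the truncation level trades communication rounds for curvature accuracy.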
Published in: 2020 American Control Conference (ACC)
Date of Conference: 01-03 July 2020
Date Added to IEEE Xplore: 27 July 2020