Fast cooperative distributed learning


Abstract:

We consider distributed optimization where N agents in a network cooperatively minimize the sum f = f_1 + ... + f_N of their individual convex costs f_i. To solve this problem, the existing literature proposes distributed gradient-like algorithms that are attractive due to their computationally simple iterations, but suffer from slow convergence (in the iteration number k) to a solution. We propose a distributed gradient-like algorithm that we build from the (centralized) Nesterov gradient method. For convex f_i's with Lipschitz continuous and bounded gradients, we show that our method converges at rate O(log k / k). The achieved rate significantly improves over the convergence rate of existing distributed gradient-like methods, while the proposed algorithm maintains the same communication cost per iteration and a very similar computational cost per iteration. We further show that the rate O(log k / k) still holds if the bounded-gradients assumption is replaced by a certain linear growth assumption. We illustrate the gains obtained by our method on two simulation examples: acoustic source localization and learning a linear classifier based on the l2-regularized logistic loss.
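To make the flavor of such a method concrete, the sketch below shows a distributed Nesterov-style gradient iteration of the kind the abstract describes: each agent mixes its neighbors' iterates through a doubly stochastic weight matrix W, takes a step along its own local gradient, and applies a Nesterov-type momentum correction. The specific step-size schedule alpha_k = c/(k+1), the momentum factor k/(k+3), and the toy quadratic example are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def distributed_nesterov(grads, W, x0, num_iters=1000, c=1.0):
    """Sketch of a distributed Nesterov-style gradient iteration (assumed details).

    grads : list of callables; grads[i](x) returns the gradient of agent i's
            local convex cost f_i at the point x.
    W     : (N, N) doubly stochastic weight matrix respecting the network topology.
    x0    : (N, d) array of initial iterates, one row per agent.
    c     : constant for the assumed diminishing step size alpha_k = c / (k + 1).
    """
    x = x0.copy()
    y = x0.copy()
    N = len(grads)
    for k in range(num_iters):
        alpha = c / (k + 1)  # diminishing step size (assumed schedule)
        # Each agent averages its neighbors' y-iterates, then takes a local gradient step.
        local_grads = np.stack([grads[i](y[i]) for i in range(N)])
        x_new = W @ y - alpha * local_grads
        # Nesterov-style momentum applied to each agent's own iterate.
        beta = k / (k + 3.0)
        y = x_new + beta * (x_new - x)
        x = x_new
    return x

if __name__ == "__main__":
    # Toy example: N agents minimizing the sum of quadratics f_i(x) = 0.5 * ||x - b_i||^2,
    # whose minimizer is the average of the b_i.
    rng = np.random.default_rng(0)
    N, d = 5, 2
    b = rng.normal(size=(N, d))
    grads = [lambda x, bi=b[i]: x - bi for i in range(N)]
    # Ring network with simple doubly stochastic weights.
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i + 1) % N] = 0.25
        W[i, (i - 1) % N] = 0.25
    x_final = distributed_nesterov(grads, W, np.zeros((N, d)))
    print("agents' iterates:\n", x_final)
    print("optimum (mean of b):", b.mean(axis=0))
```

In this sketch the per-iteration communication is a single exchange of y-iterates with neighbors (the W @ y step), and the extra computation over a plain distributed gradient method is only the momentum update, consistent with the cost comparison stated in the abstract.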
Date of Conference: 04-07 November 2012
Date Added to IEEE Xplore: 28 March 2013
Conference Location: Pacific Grove, CA, USA
