
Convergence of Distributed Gradient-Tracking-Based Optimization Algorithms with Random Graphs

Published in: Journal of Systems Science and Complexity

Abstract

This paper studies distributed convex optimization over a multi-agent system in which each agent owns only a local cost function that is convex with Lipschitz continuous gradients. The agents' goal is to cooperatively minimize the sum of the local cost functions. The underlying communication networks are modelled by a sequence of random and balanced digraphs, which are required neither to be spatially or temporally independent nor to follow any particular distribution. The authors use a distributed gradient-tracking-based optimization algorithm to solve the problem: each agent maintains an estimate of the optimal solution and an estimate of the average of all the local gradients, and updates both by combining a consensus step with a gradient-tracking step. The authors prove that the algorithm converges to the optimal solution at a geometric rate provided that the conditional graphs are uniformly strongly connected, the global cost function is strongly convex, and the step-sizes do not exceed certain upper bounds.
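The update pattern described in the abstract — a consensus step on each agent's solution estimate combined with a tracked estimate of the average gradient — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm (which allows random, time-varying digraphs): it assumes hypothetical quadratic local costs and a fixed ring graph with doubly stochastic weights.

```python
import numpy as np

# Minimal gradient-tracking sketch (not the paper's exact algorithm):
# n agents on a fixed ring graph, each with a hypothetical quadratic
# local cost f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of the sum
# of local costs is mean(b).
rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)          # parameters of the local costs (made up)

# Doubly stochastic weights for a ring: self-loop plus two neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(x):
    return x - b                # stacked local gradients of f_i

x = np.zeros(n)                 # estimates of the optimal solution
y = grad(x)                     # estimates of the average gradient
alpha = 0.05                    # step-size, small enough for stability

for _ in range(1000):
    x_next = W @ x - alpha * y            # consensus step + gradient correction
    y = W @ y + grad(x_next) - grad(x)    # track the average of local gradients
    x = x_next

# Each x[i] converges geometrically to the global minimizer mean(b),
# and each y[i] converges to the average gradient at the optimum (zero).
```

Because `W` is doubly stochastic and `y` is initialized at the local gradients, the sum of the `y` entries equals the sum of the local gradients at every iteration, which is what lets each agent "track" the global gradient using only neighbour information.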



Author information


Corresponding author

Correspondence to Tao Li.

Additional information

This research was supported by the Basic Research Project of Shanghai Science and Technology Commission under Grant No. 20JC1414000.

This paper was recommended for publication by Editor HU Xiaoming.


About this article


Cite this article

Wang, J., Fu, K., Gu, Y. et al. Convergence of Distributed Gradient-Tracking-Based Optimization Algorithms with Random Graphs. J Syst Sci Complex 34, 1438–1453 (2021). https://doi.org/10.1007/s11424-021-9355-5

