Automatica, Volume 134, December 2021, 109899
Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs

https://doi.org/10.1016/j.automatica.2021.109899

Abstract

In this paper, a class of optimization problems is investigated, where the objective function is the sum of N convex local functions and the constraints are N nonidentical closed convex sets. The aim is to solve the considered optimization problem in a distributed manner, so a sequence of time-varying unbalanced directed graphs is first introduced to describe the information exchange topologies. Then, a novel push-sum based constrained optimization algorithm (PSCOA) is developed, in which a new gradient descent-like method handles the involved closed convex set constraints. Furthermore, a rigorous convergence analysis is given under standard and common assumptions, and the developed distributed discrete-time algorithm is proved to have a convergence rate of O(ln t/√t) in the general case. Specially, a convergence rate of O(1/t) can be further obtained under the assumption that at least one objective function is strongly convex. Finally, simulation results are given to demonstrate the validity of the theoretical results.

Introduction

A great deal of tasks in large-scale networks, such as formation control of robot networks, source location in sensor networks, machine learning, and power grid dispatching, necessarily involve optimization problems whose objective functions take the form min_{x ∈ R^n} f(x) = ∑_{i=1}^N f_i(x), with N being the number of agents. Classical centralized optimization algorithms need each agent to compute the gradients of all functions f_i(x) in order to solve such problems, which is computationally heavy. On the other hand, when distributed optimization algorithms are employed, each agent only computes the gradient of its single f_i and exchanges local information with its neighbors to reach the global optimal solution, which greatly reduces the computational load. Thus, research on distributed optimization is of great practical importance.
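To make the consensus-plus-gradient template above concrete, here is a minimal sketch (not the paper's PSCOA) with hypothetical quadratic local costs f_i(x) = 0.5(x − c_i)², whose global minimizer is the mean of the targets c_i. Each agent mixes neighbors' states over a fixed doubly stochastic ring and then takes a local gradient step; the ring weights and step-size rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of distributed gradient descent: each agent i
# holds f_i(x) = 0.5 * (x - c_i)^2, so the minimizer of the sum is
# mean(c).  Agents mix over a doubly stochastic ring, then take a
# local gradient step with a diminishing step size.

def distributed_gradient(c, steps=2000):
    N = len(c)
    x = np.zeros(N)                      # one scalar state per agent
    W = np.zeros((N, N))                 # doubly stochastic ring weights
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i - 1) % N] = 0.25
        W[i, (i + 1) % N] = 0.25
    for t in range(1, steps + 1):
        alpha = 1.0 / t                  # diminishing step size
        x = W @ x - alpha * (x - c)      # mix, then local gradient step
    return x

c = np.array([1.0, 2.0, 3.0, 6.0])
x = distributed_gradient(c)
# all agents approach the global minimizer mean(c) = 3.0
```

Note that each agent only ever uses its own c_i and its ring neighbors' states, which is the point the paragraph makes about reduced per-agent computation.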

In the past several decades, increasing attention has been paid to distributed optimization and many excellent results have been reported, where different kinds of optimization problems were successfully solved by distributed algorithms, both discrete-time and continuous-time. A number of distributed continuous-time algorithms, whose analyses can benefit from Lyapunov stability theory, have been proposed to solve various optimization problems, including unconstrained optimization problems in Gharesifard and Cortés, 2014, Kia et al., 2015, Liang et al., 2019, Wang and Elia, 2011 and Zhu, Yu, Wen and Ren (2019), optimization problems with partial constraints in Lin et al., 2017, Liu and Wang, 2013, Liu and Wang, 2015, Liu, Yang and Wang, 2017 and Qiu, Yang, and Wang (2016), and optimization problems with general constraints in Yang, Liu, and Wang (2017) and Zhu, Yu, Wen, Chen and Ren (2019). Since this paper mainly concerns employing a distributed discrete-time algorithm to solve the considered optimization problem, together with the convergence rates of the designed algorithm in different cases, the following discussion focuses on existing results on distributed discrete-time algorithms.

  • On the one hand, in Nedić and Ozdaglar, 2009, Nedić et al., 2017, Qu and Li, 2018 and Yi and Hong (2014), the unconstrained optimization problems were considered and different kinds of distributed discrete-time algorithms were developed. Furthermore, the optimization problem with identical closed convex set constraint was solved in Nedić, Ozdaglar, and Parrilo (2010) and the optimization problem with identical general constraints was addressed in Zhu and Martínez (2012). Moreover, the problems with N nonidentical closed convex set constraints were researched in Lei et al., 2016, Lin et al., 2019 and Wu and Lu (2019) and the problem with both N nonidentical closed convex set constraints and N equality constraints was studied in Liu, Yang and Hong (2017). However, in the aforementioned results, the distributed discrete-time algorithms can only be designed over undirected topologies or balanced directed topologies. Then, the distributed discrete-time algorithms were constructed over static unbalanced topologies to respectively solve the unconstrained optimization problem in Xi, Mai, Xin, Abed, and Khan (2018) and the constrained optimization problem in Mai and Abed (2019). Additionally, when a sequence of time-varying unbalanced directed topologies was introduced, only the unconstrained optimization problems were addressed with distributed discrete-time algorithms developed under the push-sum framework in Nedić and Olshevsky, 2015, Nedić and Olshevsky, 2016 and under the push–pull framework in Saadatniaki, Xin, and Khan (2020). It can be noted from the existing results that designing the distributed discrete-time algorithms over a sequence of time-varying unbalanced directed topologies to solve the constrained optimization problems is still a very challenging issue.

  • On the other hand, the convergence rate of each designed distributed algorithm is of equal importance in evaluating that algorithm. Generally, the convergence rates of the designed distributed algorithms were shown to be O(ln t/√t) (see Mai and Abed, 2019, Nedić and Olshevsky, 2015, Nedić and Ozdaglar, 2009, Nedić et al., 2010). Furthermore, under the strong convexity assumption on all local objective functions, linear convergence rates can be achieved by the algorithms employing the gradient-tracking method in Nedić et al., 2017, Qu and Li, 2018 and Xi et al. (2018) and the alternating direction method of multipliers in Mota et al., 2013, Shi et al., 2014 and Wei and Ozdaglar (2012). However, those accelerated distributed algorithms are only effective for unconstrained optimization problems over balanced topologies or static unbalanced topologies. Moreover, when a sequence of time-varying unbalanced topologies was introduced, the convergence rate of O(ln t/t) was established in Nedić and Olshevsky (2016) under the strong convexity assumption on all local objective functions, and a linear convergence rate was obtained in Saadatniaki et al. (2020) under both the strong convexity and L-smoothness assumptions on all local objective functions. Additionally, in Wu and Lu (2019), where the optimization problem with N nonidentical closed convex set constraints was studied, a distributed discrete-time algorithm resorting to the Fenchel dual method was designed over time-varying undirected topologies. Although a convergence rate of O(1/t) was achieved therein for the dual optimality, only a convergence rate of O(1/√t) can be deduced for the primal optimality and feasibility. Thus, it can be concluded that obtaining better convergence rates for distributed discrete-time algorithms designed for constrained optimization problems over time-varying unbalanced topologies remains a very difficult issue in this area.
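The push-sum framework referenced in both points above can be illustrated with its core averaging primitive (this is only the consensus building block, not the paper's PSCOA). Each agent keeps a value x_i and a weight w_i, splits both equally among its out-neighbors, and estimates the network average by the ratio x_i/w_i; only column-stochastic mixing is needed, which is all an unbalanced directed graph can provide. The two alternating graphs below are assumptions for the demo.

```python
import numpy as np

# Minimal push-sum averaging sketch over time-varying unbalanced
# directed graphs.  Column-stochastic mixing biases the values, and
# the ratio x_i / w_i corrects that bias.

def push_sum(values, out_neighbors_seq, steps=60):
    N = len(values)
    x = np.array(values, dtype=float)
    w = np.ones(N)
    for t in range(steps):
        out = out_neighbors_seq[t % len(out_neighbors_seq)]
        A = np.zeros((N, N))
        for j in range(N):
            dests = out[j] + [j]               # j always keeps a share
            for i in dests:
                A[i, j] = 1.0 / len(dests)     # column-stochastic split
        x, w = A @ x, A @ w
    return x / w                                # ratio corrects the bias

# two alternating strongly connected digraphs with unequal out-degrees
# (hence unbalanced weight matrices); purely illustrative topologies
graphs = [{0: [1, 2], 1: [2], 2: [3], 3: [0]},
          {0: [3], 1: [0, 3], 2: [1], 3: [2]}]
z = push_sum([4.0, 8.0, 0.0, 4.0], graphs)
# every ratio converges to the average (4 + 8 + 0 + 4) / 4 = 4.0
```

The algorithms surveyed above add a gradient step on top of this primitive; the ratio trick is what removes the need for doubly stochastic (balanced) weights.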

In this paper, the aim is to solve the constrained optimization problem with a distributed discrete-time algorithm designed under the push-sum framework over time-varying unbalanced directed topologies. Moreover, this work aims to achieve a faster convergence rate for the designed distributed algorithm under the assumption that at least one local objective function is strongly convex. Correspondingly, the main contributions of this work consist of two parts, delineated as follows.

  • (1)

    First, the optimization problem with N nonidentical closed convex set constraints is successfully addressed by our push-sum based constrained optimization algorithm (PSCOA), designed over time-varying unbalanced directed topologies. As is well known, it is hard to employ the projection method to deal with closed convex set constraints under the push-sum framework. Therefore, a gradient descent-like method is newly used here to handle the involved constraints, and a number of new proof techniques are correspondingly applied in establishing the convergence property of the designed distributed algorithm. Moreover, the convergence rate of O(ln t/√t) is achieved for the designed PSCOA in the general case, which is standard and common compared to the existing results.

  • (2)

    Second, the convergence rate of O(1/t) is achieved for the designed PSCOA under the assumption that at least one local objective function is strongly convex. To obtain the convergence rate of O(ln t/t) for the distributed algorithm designed in Nedić and Olshevsky (2016), where only the unconstrained optimization problem was studied, not only the strong convexity of all local objective functions but also either the L-smoothness of all local objective functions or the uniform boundedness of the generated variables x_i(t) was assumed. In contrast, this restrictive condition, L-smoothness of all local objective functions or uniform boundedness of the generated variables x_i(t), is removed in this paper, and the faster convergence rate of O(1/t) is obtained assuming only that at least one local objective function is strongly convex, rather than that all local objective functions are strongly convex.

The remaining parts are organized as follows. First, some preliminaries including notations, graph theory, and useful lemmas are recalled in Section 2. Then, the problem setup is formulated, the algorithm development is shown, and the main theorems discussing the convergence property and the convergence rates in different cases are given in Section 3. In Section 4, several necessary preparatory works are developed and the detailed proofs of the main theorems are given. Finally, numerical simulations and a summary of the work are given in Sections 5 and 6, respectively. Besides, the proofs of intermediate results are provided in the Appendix.

Section snippets

Notations

In this paper, R^n denotes the set of n-dimensional real-valued vectors. For a given vector x ∈ R^n, x_i denotes its ith entry for i = 1, 2, …, n and ‖x‖ represents its 2-norm. Additionally, ∇f(x) stands for the gradient at x of a function f defined on R^n. Specially, for any given closed convex set Ω ⊆ R^n and any x ∈ R^n, let dist(x, Ω) = ‖x − P_Ω(x)‖ with P_Ω being the projection operator onto Ω. Moreover, R^{m×n} denotes the set of m×n real-valued matrices. For a given matrix A ∈ R^{n×n}, A_ij or [A]_ij
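The distance notation dist(x, Ω) = ‖x − P_Ω(x)‖ can be illustrated with a set whose projection has a closed form; the box Ω = [0, 1]^n chosen below is just the simplest such example, not one of the paper's constraint sets.

```python
import numpy as np

# Illustrating dist(x, Omega) = ||x - P_Omega(x)|| for the box
# Omega = [0, 1]^n, where the projection is componentwise clipping.

def project_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)           # componentwise projection P_Omega

def dist_box(x, lo=0.0, hi=1.0):
    return np.linalg.norm(x - project_box(x, lo, hi))

x = np.array([2.0, 0.5, -1.0])
p = project_box(x)                      # -> [1.0, 0.5, 0.0]
d = dist_box(x)                         # sqrt(1^2 + 0^2 + 1^2) = sqrt(2)
```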

Problem formulation

The optimization problem considered in this paper is formulated as

min_x f(x) = (1/N) ∑_{i=1}^N f_i(x)  s.t.  x ∈ X_i, i = 1, 2, …, N,

where x ∈ R^n, f_i: R^n → R are convex local objective functions, and X_i ⊆ R^n are closed convex sets. Moreover, assume that X = ∩_{i=1}^N X_i is nonempty. Let X* and f* denote the optimal solution set and the optimal objective function value, respectively, and assume that X* is nonempty and f* is finite. Additionally, we adopt the following assumptions for the considered optimization problem.

Assumption 1

For all i

Guidelines for proofs of main theorems

In this section, we will provide the complete proofs of the above three main theorems. It is well known that the convergence analyses of push-sum based algorithms resort to analyzing the evolution of the average state x̄(t). Thus, we need to introduce several necessary intermediate results, including Lemma 5, Lemma 6, Lemma 7, Lemma 8 and Proposition 1, to analyze the evolution of the average state x̄(t) in advance. That is, for any s ∈ X*, we need to characterize the

Computer simulations

In this section, we will show simulation results to verify the theoretical results developed above. Consider an optimization problem arising in machine learning (cf. Mai & Abed, 2019), whose objective function is defined on R^2 as

min_{x ∈ X} f(x) = ∑_{i=1}^N f_i(x), where f_i(x) = ln[1 + e^{a_i (b_i x_1 + x_2)}] + |x_1|, i = 1, …, N,

with b_i = 0.01 i being the feature parameters and a_i = (−1)^i being the corresponding labels.
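For checking, the simulation objective can be written out directly. Since the |x_1| term is nonsmooth, a subgradient (sign(x_1), taken as 0 at x_1 = 0) replaces the gradient there; the helper names below are our own, not from the paper.

```python
import numpy as np

# Local costs of the simulation example:
#   f_i(x) = ln(1 + exp(a_i * (b_i * x1 + x2))) + |x1|,
# with b_i = 0.01 * i and a_i = (-1)^i.

def f_i(x, i):
    a, b = (-1.0) ** i, 0.01 * i
    return np.log1p(np.exp(a * (b * x[0] + x[1]))) + abs(x[0])

def subgrad_f_i(x, i):
    a, b = (-1.0) ** i, 0.01 * i
    s = 1.0 / (1.0 + np.exp(-a * (b * x[0] + x[1])))  # sigmoid term
    # chain rule on the logistic part plus a subgradient of |x1|
    return np.array([a * b * s + np.sign(x[0]), a * s])

x = np.array([0.0, 0.0])
val = sum(f_i(x, i) for i in range(1, 9))   # N = 8; each f_i(0) = ln 2
```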

Case A: N = 8. First, let N = 8 and select the N closed convex set constraints as x

Conclusion

In this paper, the constrained optimization problem subject to N nonidentical closed convex set constraints has been solved over time-varying unbalanced directed topologies by a newly developed distributed discrete-time algorithm, PSCOA. Specifically, a new gradient descent-like method has been employed in the designed algorithm to deal with the involved closed convex set constraints, since it is hard to utilize the classical projection method under the push-sum framework. Moreover, the

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62073076, the General joint fund of the equipment advance research program of Ministry of Education, China under Grant No. 6141A020223, the Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, China under Grant No. BM2017002, the Natural Science Foundation of Jiangsu Province, China under Grant No. BK20200809, the Natural Science Foundation of the Jiangsu Higher Education


References (33)

  • Liu, Q., et al. A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints. IEEE Transactions on Neural Networks and Learning Systems (2013).

  • Liu, Q., et al. A second-order multi-agent network for bound-constrained distributed optimization. IEEE Transactions on Automatic Control (2015).

  • Liu, Q., et al. Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks. IEEE Transactions on Automatic Control (2017).

  • Liu, Q., et al. A collective neurodynamic approach to distributed constrained optimization. IEEE Transactions on Neural Networks and Learning Systems (2017).

  • Mota, J. F. C., et al. D-ADMM: A communication-efficient distributed algorithm for separable optimization. IEEE Transactions on Signal Processing (2013).

  • Nedić, A., et al. Distributed optimization over time-varying directed graphs. IEEE Transactions on Automatic Control (2015).

    Wenwu Yu received the B.Sc. degree in Information and Computing Science and M.Sc. degree in Applied Mathematics from the Department of Mathematics, Southeast University, Nanjing, China, in 2004 and 2007, respectively, and the Ph.D. degree from the Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China, in 2010. Currently, he is the Founding Director of Laboratory of Cooperative Control of Complex Systems and the Deputy Associate Director of Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, an Associate Dean in the School of Mathematics, and a Full Professor with the Endowed Chair Honor in Southeast University, China.

    Dr. Yu held several visiting positions in Australia, China, Germany, Italy, the Netherlands, and the USA. His research interests include multi-agent systems, complex networks and systems, disturbance control, distributed optimization, machine learning, game theory, cyberspace security, smart grids, intelligent transportation systems, big-data analysis, etc.

    Dr. Yu serves as an Editorial Board Member of several flagship journals, including IEEE Transactions on Circuits and Systems II, IEEE Transactions on Industrial Informatics, IEEE Transactions on Systems, Man, and Cybernetics: Systems, Science China Information Sciences, Science China Technological Sciences, etc. He was listed by Clarivate Analytics/Thomson Reuters as a Highly Cited Researcher in Engineering in 2014–2020. He has published about 100 IEEE Transactions journal papers with more than 20,000 citations. Moreover, Dr. Yu is also the recipient of the Second Prize of the State Natural Science Award of China in 2016.

    Hongzhe Liu received the B.Sc. degree and Ph.D. degree in Applied Mathematics from Southeast University, Nanjing, China, in 2016 and 2021. He is currently Postdoctoral Researcher with Southeast University. Between 2018 and 2019, he was a Visiting Fellow with the School of Computing, Engineering and Mathematics, Western Sydney University, Sydney, Australia for nine months. His research interests include distributed optimization, multi-agent systems, and complex networks.

    Wei Xing Zheng received the B.Sc. degree in Applied Mathematics in 1982, the M.Sc. degree in Electrical Engineering in 1984, and the Ph.D. degree in Electrical Engineering in 1989, all from Southeast University, Nanjing, China. He is currently a University Distinguished Professor at Western Sydney University, Sydney, Australia. Over the years he has also held various faculty/research/visiting positions at Southeast University, Nanjing, China; Imperial College of Science, Technology and Medicine, London, UK; University of Western Australia, Perth, Australia; Curtin University of Technology, Perth, Australia; Munich University of Technology, Munich, Germany; University of Virginia, Charlottesville, VA, USA; and University of California–Davis, Davis, CA, USA. His research interests are in the areas of systems and controls, signal processing, and communications.

    Prof. Zheng is a Fellow of IEEE. He received the 2017 Vice-Chancellor’s Award for Excellence in Research (Researcher of the Year) at Western Sydney University, Sydney, Australia. Previously, he served as an Associate Editor for IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, IEEE Transactions on Automatic Control, IEEE Signal Processing Letters, IEEE Transactions on Circuits and Systems-II: Express Briefs, and IEEE Transactions on Fuzzy Systems, and as a Guest Editor for IEEE Transactions on Circuits and Systems-I: Regular Papers. Currently, he is an Associate Editor for Automatica, IEEE Transactions on Cybernetics, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Control of Network Systems, IEEE Transactions on Circuits and Systems-I: Regular Papers, and other scholarly journals. He is also an Associate Editor of IEEE Control Systems Society’s Conference Editorial Board. He was the Publication Co-Chair of the 56th IEEE Conference on Decision and Control in Melbourne, Australia in December 2017. He is currently a Distinguished Lecturer of IEEE Control Systems Society.

    Yanan Zhu received the Ph.D. degree in Mathematics from Southeast University, Nanjing, China, in 2019. She is currently a lecturer with the School of Automation, Nanjing University of Information Science and Technology, Nanjing, China. During April 2017 to June 2017, she was a Research Assistant with the City University of Hong Kong, Hong Kong. During December 2017 to June 2018, she was a visiting scholar with the University of California, Riverside, USA. During October 2018 to December 2018 and July 2019 to December 2019, she was a visiting student with the RMIT University, Melbourne, Australia. Her research interests include multi-agent systems, distributed optimization and game theory.

    The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Vijay Gupta under the direction of Editor Christos G. Cassandras.
