
A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems

  • Original Article
  • Neural Computing and Applications

Abstract

This paper considers a category of nonsmooth distributed optimization problems on multi-agent systems, in which each agent keeps its local information private and all agents collectively minimize a sum of local cost functions. Taking the restrictions on communication among agents into consideration, a nonautonomous-differential-inclusion neurodynamic approach is proposed over a weighted topology graph. The convergence of the neural network is analyzed, and its state is shown to converge exponentially to an optimal solution of the distributed optimization problem under certain conditions. Since no additional conditions are required to guarantee convergence, the neural network is superior to distributed algorithms based on the penalty method, which need to estimate penalty parameters. Compared with some existing approaches, the neural network also has the advantage of possessing fewer state variables. Finally, illustrative examples and an application to distributed quantile regression are presented to verify the effectiveness of the proposed neural network.



Acknowledgements

This research was supported by the National Natural Science Foundation of China (61773136, 11871178).

Author information

Corresponding author

Correspondence to Sitian Qin.

Ethics declarations

Conflicts of interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and that we have no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled “A Nonautonomous-Differential-Inclusion Neurodynamic Approach for Nonsmooth Distributed Optimization on Multi-Agent Systems.”

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proposition 4

There exists a constant \(\sigma _0>0\) such that, for any \(\sigma >\sigma _0\), the minimum points of \(D_{\sigma }(\mathbf{x} )\) coincide with the optimal solutions of distributed optimization (2).

Proof

Let \(\varOmega _\sigma\) be the set of minimum points of \(D_{\sigma }(\mathbf{x} )\) and \({\bar{x}}=\frac{1}{n}\sum \nolimits _{i=1}^n x_i\). Then we have \(\sum \nolimits _{k=1}^n\parallel x_k-{\bar{x}}\parallel \le \frac{1}{n}\sum \nolimits _{k=1}^n\sum \nolimits _{l=1}^n\parallel x_k-x_l\parallel .\) According to Assumption 2, for any \(k,l\in \{1,2,\ldots ,n\}\), there must exist a path \(P_{kl}\) of edges in \({\mathcal {E}}\) connecting the two vertices. Noticing that the weighted graph \({\mathcal {G}}\) is undirected and connected, we have

$$\begin{aligned} a_{\min }\cdot \sum \limits _{k=1}^n \parallel x_k-{\bar{x}}\parallel&\le \frac{1}{2n}\sum \limits _{k=1}^n\sum \limits _{l=1}^n\sum \limits _{(i,j)\in P_{kl}} a_{ij}\parallel x_i-x_j\parallel \\&\le \frac{n}{2}\sum \limits _{i=1}^n \sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel x_i-x_j\parallel , \end{aligned}$$
(34)

where \(a_{\min }\) denotes the least edge weight in the weighted graph \({\mathcal {G}}\). Due to Assumption 1, it can be immediately obtained that

$$\begin{aligned} \sum _{i=1}^n f_i(x_i)-\sum _{i=1}^n f_i({\bar{x}})\ge - {\tilde{M}} \sum _{i=1}^n\parallel x_i -{\bar{x}}\parallel , \end{aligned}$$
(35)

where \({\tilde{M}}=\max \nolimits _{i\in {\mathcal {V}}}\{M_i\}\). Then, in light of (34) and (35), letting \(\sigma _0=n{\tilde{M}}/(2a_{\min })\), we obtain

$$\begin{aligned} D_{\sigma }(\mathbf{x} )&=\sum \limits _{i=1}^n f_i(x_i)+\sigma \sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel x_i-x_j\parallel \\&=\sum \limits _{i=1}^n f_i({\bar{x}})+\sum \limits _{i=1}^n f_i(x_i)-\sum \limits _{i=1}^n f_i({\bar{x}})+\sigma \sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel x_i-x_j\parallel \\&\ge \sum \limits _{i=1}^n f_i({\bar{x}})+(\sigma -\sigma _0)\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i} a_{ij}\parallel x_i-x_j\parallel . \end{aligned}$$
(36)

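For completeness, the last inequality in (36) is obtained by chaining (35) with (34):

$$\begin{aligned} \sum \limits _{i=1}^n f_i(x_i)-\sum \limits _{i=1}^n f_i({\bar{x}})\ge -{\tilde{M}}\sum \limits _{i=1}^n\parallel x_i-{\bar{x}}\parallel \ge -\frac{n{\tilde{M}}}{2a_{\min }}\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i}a_{ij}\parallel x_i-x_j\parallel =-\sigma _0\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i}a_{ij}\parallel x_i-x_j\parallel . \end{aligned}$$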

In what follows, we prove the equivalence between the minimum points of \(f(\mathbf{x} )\) and those of \(D_{\sigma }(\mathbf{x} )\) with the help of (36).

Suppose \(\mathbf{x}^*=\mathrm {col}\{x^*,x^*,\ldots ,x^*\}\) is an optimal solution to distributed optimization problem (2), and let \(\varOmega ^*\) denote the set of all such optimal solutions. By the definition of \(D_{\sigma }(\mathbf{x} )\) and (36), it follows that \(\mathbf{x}^*\) also minimizes \(D_{\sigma }(\mathbf{x} )\) on \({\mathbb {R}}^{mn}\). That is to say,

$$\begin{aligned} \varOmega ^*\subseteq \varOmega _\sigma . \end{aligned}$$
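Indeed, for any \(\mathbf{x} \in {\mathbb {R}}^{mn}\), inequality (36) with \(\sigma >\sigma _0\), together with the optimality of \(x^*\) for (2), gives

$$\begin{aligned} D_{\sigma }(\mathbf{x} )\ge \sum \limits _{i=1}^n f_i({\bar{x}})\ge \sum \limits _{i=1}^n f_i(x^*)=D_{\sigma }(\mathbf{x} ^*). \end{aligned}$$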

Let \(f_{\min }\) be the optimal value of distributed optimization (2), i.e., the minimum value of \(f(\mathbf{x} )\) on the consensus set \(\varOmega =\{\mathbf{x} \in {\mathbb {R}}^{mn}\,|\,x_i=x_j,\ i,j\in {\mathcal {V}}\}\). Since the penalty term vanishes on \(\varOmega\), \(f_{\min }\) is also the minimum value of \(D_{\sigma }(\mathbf{x} )\) on \(\varOmega\).

Suppose that \(\mathbf{y} ^*=\mathrm {col}\{y_1^*,y_2^*,\ldots ,y_n^*\}\) minimizes \(D_{\sigma }(\mathbf{x} )\) on \({\mathbb {R}}^{mn}\). Then, for any \(\mathbf{y} =\mathrm {col}\{y,y,\ldots ,y\}\in \varOmega\), it is apparent that

$$\begin{aligned} D_{\sigma }(\mathbf{y} )=\sum _{i=1}^nf_i(y)+\sigma \sum \limits _{i=1}^n \sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel y-y\parallel =\sum _{i=1}^nf_i(y). \end{aligned}$$
(37)

By the fact that \(\mathbf{y}^*\) minimizes \(D_{\sigma }(\mathbf{x} )\) and (36), we have

$$\begin{aligned} D_{\sigma }(\mathbf{y} ^*)&=\sum \limits _{i=1}^nf_i(y_i^*)+\sigma \sum \limits _{i=1}^n \sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel y_i^*-y_j^*\parallel \\&\ge \sum \limits _{i=1}^n f_i\left( \frac{1}{n}\sum \limits _{k=1}^ny_k^*\right) +(\sigma -\sigma _0)\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i}a_{ij} \parallel y_i^*-y_j^*\parallel . \end{aligned}$$
(38)

Since the vector \(\mathrm {col}\{\frac{1}{n}\sum \limits _{k=1}^ny_k^*,\frac{1}{n}\sum \limits _{k=1}^ny_k^*,\ldots ,\frac{1}{n}\sum \limits _{k=1}^ny_k^*\}\in \varOmega\), it follows that

$$\begin{aligned} \sum \limits _{i=1}^n f_i\left( \frac{1}{n}\sum \limits _{k=1}^ny_k^*\right) \ge f_{\min }. \end{aligned}$$
(39)

Combining (38) with (39), we have

$$\begin{aligned} D_{\sigma }(\mathbf{y} ^*)\ge f_{\min }+(\sigma -\sigma _0)\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i} a_{ij}\parallel y_i^*-y_j^*\parallel . \end{aligned}$$
(40)

If \(\mathbf{y}^*\notin \varOmega\), then since \({\mathcal {G}}\) is connected, there must exist an edge \((k,l)\in {\mathcal {E}}\) such that \(\Vert y_k^*-y_l^*\Vert > 0\). Hence, the condition \(\sigma >\sigma _0\) and (40) imply that

$$\begin{aligned} D_{\sigma }(\mathbf{y} ^*)> f_{\min }, \end{aligned}$$

which contradicts the fact that \(D_{\sigma }(\mathbf{y} ^*)=\min \nolimits _{\mathbf{x} \in {\mathbb {R}}^{mn}}D_{\sigma }(\mathbf{x} )\le f_{\min }\). Thus, we have \(\mathbf{y}^*\in \varOmega\), that is, \(y_i^*=y_j^*\) for all \(i,j\in \{1,2,\ldots ,n\}.\)

Write \(\mathbf{y} ^*=\mathrm {col}\{{\tilde{y}}^*,{\tilde{y}}^*,\ldots ,{\tilde{y}}^*\}\). Due to (37) and (38), it follows that

$$\begin{aligned} f(\mathbf{y} )=\sum \limits _{i=1}^nf_i(y)\ge \sum \limits _{i=1}^n f_i({\tilde{y}}^*)+(\sigma -\sigma _0)\sum \limits _{i=1}^n\sum \limits _{j\in {\mathcal {N}}_i} a_{ij} \parallel {\tilde{y}}^*-{\tilde{y}}^*\parallel =\sum \limits _{i=1}^n f_i({\tilde{y}}^*)=f(\mathbf{y} ^*). \end{aligned}$$

That is to say,

$$\begin{aligned} f(\mathbf{y} )\ge f(\mathbf{y} ^*), \end{aligned}$$

for any \(\mathbf{y} \in \varOmega\). Therefore, \(\mathbf{y} ^*\) is also an optimal solution to distributed optimization (2) and \(\varOmega _\sigma \subseteq \varOmega ^*\), which completes the proof. \(\square\)
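As a numerical illustration of Proposition 4 (not part of the original analysis), the following minimal Python sketch minimizes \(D_{\sigma }\) by subgradient descent on a hypothetical three-agent instance with \(f_i(x)=|x-b_i|\), so that each \(M_i=1\) and the consensus optimum of (2) is the median of the \(b_i\). All data, variable names, and the step-size rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy check of Proposition 4 (illustrative only): f_i(x) = |x - b_i|,
# so each M_i = 1 and the consensus optimum of (2) is the median of b.
b = np.array([0.0, 1.0, 4.0])
n = len(b)
# Symmetric weighted adjacency of an undirected, connected path graph 1-2-3.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
a_min = A[A > 0].min()              # least edge weight a_min
M_tilde = 1.0                       # common Lipschitz bound of the f_i
sigma0 = n * M_tilde / (2 * a_min)  # threshold from Proposition 4
sigma = sigma0 + 1.0                # any fixed sigma > sigma0 suffices

def subgrad_D(x):
    """A subgradient of D_sigma(x) = sum_i f_i(x_i) + sigma * sum_i sum_{j in N_i} a_ij ||x_i - x_j||."""
    g = np.sign(x - b)                       # subgradient of sum_i |x_i - b_i|
    diff = np.sign(x[:, None] - x[None, :])  # sign(x_i - x_j)
    # The ordered double sum visits each undirected edge twice, hence the factor 2.
    g += 2.0 * sigma * (A * diff).sum(axis=1)
    return g

x = np.array([3.0, -2.0, 5.0])      # arbitrary initial agent states
for k in range(1, 20001):
    x = x - subgrad_D(x) / k        # diminishing step size 1/k

print("sigma0 =", sigma0)
print("final states:", x)           # expected: all near 1.0, the median of b
```

Under these hypothetical data, the printed states should nearly agree (up to small subgradient-descent error), illustrating the exactness of the penalty: \(\sigma\) need not grow unboundedly, since any fixed \(\sigma >\sigma _0\) already makes the consensus-optimal points the global minimizers of \(D_{\sigma }\).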


About this article


Cite this article

Wen, X., Wang, Y. & Qin, S. A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems. Neural Comput & Applic 33, 13909–13920 (2021). https://doi.org/10.1007/s00521-021-06026-2

