
Event-Triggered Distributed Cooperative Learning Algorithms over Networks via Wavelet Approximation

Published in: Neural Processing Letters

Abstract

This paper investigates the problem of event-triggered distributed cooperative learning (DCL) over networks based on wavelet approximation theory, where each node has access only to local data generated by the same unknown pattern (map or function). All nodes cooperatively learn this unknown pattern by exchanging learned information with their neighboring nodes under an event-triggered strategy that removes unnecessary communications and thereby avoids wasting network resources. For this problem, two novel event-triggered continuous-time and discrete-time DCL algorithms are proposed that approximate the unknown pattern using wavelet basis functions. The proposed event-triggered DCL algorithms train the optimal weight coefficient matrix of the wavelet series. Moreover, the convergence of the proposed algorithms is established by the Lyapunov method, and Zeno behavior is excluded by showing that the sampling intervals are strictly positive. Illustrative examples are presented to show the efficiency and convergence of the proposed algorithms.

[Figs. 1–15: figures available in the published article]


References

  1. Predd JB, Kulkarni SR, Poor HV (2006) Distributed learning in wireless sensor networks. IEEE Signal Process Mag 23(4):56–69

  2. Georgopoulos L, Hasler M (2014) Distributed machine learning in networks by consensus. Neurocomputing 124(2):2–12

  3. Chen JS, Sayed AH (2012) Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans Signal Process 60(8):4289–4305

  4. Chen WS, Hua SY, Ge SS (2014) Consensus-based distributed cooperative learning control for a group of discrete-time nonlinear multi-agent systems using neural networks. Automatica 50(1):2254–2268

  5. Chen WS, Hua SY, Zhang HG (2015) Consensus-based distributed cooperative learning from closed-loop neural control systems. IEEE Trans Neural Netw Learn Syst 26(2):331–345

  6. Ren PF, Chen WS, Dai H, Zhang HG (2017) Distributed cooperative learning over networks via fuzzy logic systems: performance analysis and comparison. IEEE Trans Fuzzy Syst 26:2075–2088

  7. Xie J, Chen WS, Dai H (2017) Distributed cooperative learning algorithms using wavelet neural network. Neural Comput Appl. https://doi.org/10.1007/s00521-017-3134-1

  8. Lim C, Lee S, Choi JH, Chang JH (2014) Efficient implementation of statistical model-based voice activity detection using Taylor series approximation. IEICE Trans Fundam Electron Commun Comput Sci E97.A(3):865–868

  9. Sharapudinov II (2014) Approximation of functions in variable-exponent Lebesgue and Sobolev spaces by finite Fourier–Haar series. Rus Acad Sci Sb Math 205(205):145–160

  10. Yang C, Yi Z, Zuo L (2008) Function approximation based on twin support vector machines. In: IEEE conference on cybernetics and intelligent systems, pp 259–264

  11. Huang GB, Saratchandran P, Sundararajan N (2005) A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans Neural Netw 16(1):57–67

  12. Yang C, Jiang K, Li Z, He W, Su CY (2017) Neural control of bimanual robots with guaranteed global stability and motion precision. IEEE Trans Ind Inf 13(3):1162–1171

  13. Cui R, Yang C, Li Y, Sharma S (2017) Adaptive neural network control of AUVs with control input nonlinearities using reinforcement learning. IEEE Trans Syst Man Cybern Syst 47(6):1019–1029

  14. Wu S, Er MJ (2000) Dynamic fuzzy neural networks: a novel approach to function approximation. IEEE Trans Syst Man Cybern Part B Cybern 30(2):358–364

  15. Ferrari S, Stengel RF (2005) Smooth function approximation using neural networks. IEEE Trans Neural Netw 16(1):24–38

  16. Pavez E, Silva JF (2012) Analysis and design of wavelet-packet cepstral coefficients for automatic speech recognition. Speech Commun 54(6):814–835

  17. Yan R, Gao RX, Chen X (2014) Wavelets for fault diagnosis of rotary machines: a review with applications. Signal Process 96(5):1–15

  18. Siddiqi MH, Lee SW, Khan AM (2014) Weed image classification using wavelet transform, stepwise linear discriminant analysis, and support vector machines for an automatic spray control system. J Inf Sci Eng 30(4):1227–1244

  19. Zainuddin Z, Ong P (2016) Optimization of wavelet neural networks with the firefly algorithm for approximation problems. Neural Comput Appl 28:1–14

  20. Hou MZ, Han XL, Gan YX (2009) Constructive approximation to real function by wavelet neural networks. Neural Comput Appl 18(8):883–889

  21. Cao J, Lin Z, Huang GB (2011) Composite function wavelet neural networks with differential evolution and extreme learning machine. Neural Process Lett 33(3):251–265

  22. Cordova J, Yu W (2012) Two types of Haar wavelet neural networks for nonlinear system identification. Neural Process Lett 35(3):283–300

  23. Alexandridis AK, Zapranis AD (2013) Wavelet neural networks: a practical guide. Neural Netw 42:1–27

  24. Courroux S, Chevobbe S, Darouich M, Paindavoine M (2013) Use of wavelet for image processing in smart cameras with low hardware resources. J Syst Archit 59(10):826–832

  25. Chen S, Zhao HC, Zhang SN, Yang YX (2014) Study of ultra-wideband fuze signal processing method based on wavelet transform. IET Radar Sonar Navig 8(3):167–172

  26. Ganjefar S, Tofighi M (2015) Single-hidden-layer fuzzy recurrent wavelet neural network: applications to function approximation and system identification. Inf Sci 294:269–285

  27. Nejad HC, Farshad M, Khayat O, Rahatabad FN (2016) Performance verification of a fuzzy wavelet neural network in the first order partial derivative approximation of nonlinear functions. Neural Process Lett 43(1):219–230

  28. Sibel S, Ali MS, Vadivel R, Arik S (2017) Decentralized event-triggered synchronization of uncertain Markovian jumping neutral-type neural networks with mixed delays. Neural Netw 86:32–41

  29. Wang AJ, Dong T, Liao XF (2016) Event-triggered synchronization strategy for complex dynamical networks with the Markovian switching topologies. Neural Netw 74:52–57

  30. Han YJ, Lu WL, Chen TP (2015) Consensus analysis of networks with time-varying topology and event-triggered diffusions. Neural Netw 71:196–203

  31. Li HQ, Liao XF, Chen G, Hill DJ, Dong ZY, Huang TW (2015) Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Netw 66:1–10

  32. Mazo M, Tabuada P (2011) Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Trans Autom Control 56(10):2456–2461

  33. Hu SL, Yue D (2012) Event-triggered control design of linear networked systems with quantizations. ISA Trans 51:153–162

  34. Fan Y, Feng G, Wang Y, Song C (2013) Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica 49(2):671–675

  35. Seyboth GS, Dimarogonas DV, Johansson KH (2013) Event-based broadcasting for multi-agent average consensus. Automatica 49(1):245–252

  36. Aranda-Escolastico E, Guinaldo M, Gordillo F, Dormido S (2016) A novel approach to periodic event-triggered control: design and application to the inverted pendulum. ISA Trans 65:327–338

  37. Mahmoud MS, Sabih M, Elshafei M (2016) Event-triggered output feedback control for distributed networked systems. ISA Trans 60:294–302

  38. Zainuddin Z, Pauline O (2011) Modified wavelet neural network in function approximation and its application in prediction of time-series pollution data. Appl Soft Comput 11(8):4866–4874

  39. Cattani C (2012) Fractional calculus and Shannon wavelet. Math Probl Eng, Article ID 502812, 26 pp

  40. Bazaraa MS, Goode JJ (1973) On symmetric duality in nonlinear programming. Oper Res 21(1):1–9

  41. Lu J, Tang CY (2012) Zero-gradient-sum algorithms for distributed convex optimization: the continuous-time case. IEEE Trans Autom Control 57(9):2348–2354


Acknowledgements

The authors thank the reviewers and the editor for their valuable comments on this paper. This work was supported by the National Natural Science Foundation of China (Grant Numbers 61503292, 61673308 and 61673014), the Natural Science Foundation of Shaanxi Province (Grant Number 2018JM6079) and the Fundamental Research Funds for the Central Universities (Grant No. JB181305).

Author information

Correspondence to Weisheng Chen.


Appendices

Appendix

Proof of Theorem 1

Proof

(I) Consider the event-triggered DCL algorithm (8). The following Lyapunov function candidate is constructed:

$$\begin{aligned} {V}(t)=\frac{1}{2}\sum _{i=1}^{N}\Big ({\widetilde{W}}_i^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml}){\widetilde{W}}_i\Big ), \end{aligned}$$
(20)

where \({\widetilde{W}}_i=W^*-W_i\), \(V:\mathbf{R }^{ml}\rightarrow \mathbf{R }\). It is easy to verify that

$$\begin{aligned} {V}(t)\ge \frac{\,{\underline{\theta }}\,}{\,2\,}\sum _{i=1}^{N}\parallel W^*-W_i(t)\parallel ^2. \end{aligned}$$
(21)

In addition, the following inequality holds [41]:

$$\begin{aligned} {V}(t)\le \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,}W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(t). \end{aligned}$$
(22)

We are now in a position to establish the convergence of algorithm (8). Consider the Lyapunov function candidate (20). Then, along the solution of (8), we have

$$\begin{aligned} \frac{dV(t)}{dt}=&-\sum _{i=1}^{N}\dot{W_i}^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})^{\text {T}}(W^*-W_i)\nonumber \\ =&-W^{*T}\sum _{i=1}^{N}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\dot{W_i}(t)+\sum _{i=1}^{N}W_i^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\dot{W_i}(t)\nonumber \\ =&-\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(W(t)+e(t))\nonumber \\ =&-\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(t)-\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})e(t). \end{aligned}$$
(23)

Using Young's inequality leads to

$$\begin{aligned} -W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})e(t)\le&\frac{1}{2}W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})\frac{\epsilon }{2}({\mathcal {L}}\otimes I_{ml})^{\text {T}}W(t)+\frac{1}{2}e(t)^{\text {T}}\frac{2}{\epsilon }e(t)\nonumber \\ =\,&\frac{\epsilon }{4}W(t)^{\text {T}}(\mathcal {LL}^{\text {T}}\otimes I_{ml})W(t)+\frac{1}{\epsilon }e(t)^{\text {T}}e(t), \end{aligned}$$
(24)

where \(\epsilon >0\) is a constant. Substituting (24) into (23), together with the trigger function (10), yields

$$\begin{aligned} \frac{dV(t)}{dt}\le&-\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(W(t))+\frac{\gamma \epsilon }{4}W(t)^{\text {T}}(\mathcal {LL}^{\text {T}}\otimes I_{ml})W(t)+\frac{\gamma }{\epsilon }e(t)^{\text {T}}e(t)\nonumber \\ \le&-\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(t)+\frac{\gamma \epsilon \eta }{4}W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(t)+\frac{Nc\gamma }{\epsilon }e^{-\alpha t}\nonumber \\ =&-(1-\frac{\epsilon \eta }{4})\gamma W(t)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(t)+\frac{Nc\gamma }{\epsilon }e^{-\alpha t}, \end{aligned}$$
(25)

where \(\eta =\lambda _{max}({\mathcal {L}})\).
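As a quick numerical sanity check (not part of the paper's argument), the Young's-inequality step (24) can be verified on random data; the matrix and vectors below are arbitrary stand-ins for \({\mathcal {L}}\otimes I_{ml}\), \(W(t)\) and \(e(t)\):

```python
# Verify -W^T M e <= (eps/4) W^T M M^T W + (1/eps) e^T e for random data,
# the form of (24) with M standing in for the Kronecker product L (x) I.
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))          # stand-in for L (x) I_{ml}
W = rng.standard_normal(n)               # stand-in for W(t)
e = rng.standard_normal(n)               # stand-in for e(t)
eps = 0.7                                # any eps > 0 works

lhs = -W @ M @ e
rhs = eps / 4 * W @ M @ M.T @ W + (1 / eps) * e @ e
print(lhs <= rhs)  # True
```

The inequality holds for any choice of the data because it is the scalar Young's inequality applied to the vectors \(M^{\text {T}}W\) and \(e\).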

From inequality (22), one has

$$\begin{aligned} \frac{dV(t)}{dt}\le -(1-\frac{\epsilon \eta }{4})\frac{2\gamma \lambda _2}{{\overline{\varTheta }}}V(t)+\frac{Nc\gamma }{\epsilon }e^{-\alpha t} =-\kappa V(t)+\rho e^{-\alpha t}, \end{aligned}$$
(26)

where \(\kappa =(1-\frac{\epsilon \eta }{4})\frac{2\gamma \lambda _2}{{\overline{\varTheta }}}\), \(\rho =\frac{Nc\gamma }{\epsilon }\). Then, it follows from inequality (26) that

$$\begin{aligned} e^{\kappa t}\frac{dV(t)}{dt}+\kappa e^{\kappa t}V(t)\le \rho e^{(\kappa -\alpha ) t}. \end{aligned}$$
(27)

Integrating both sides of inequality (27) from 0 to t leads to

$$\begin{aligned} V(t)\le \left( V(0)-\frac{\rho }{\kappa -\alpha }\right) e^{-\kappa t}+\frac{\rho }{\kappa -\alpha }e^{-\alpha t} =(V(0)-\zeta )e^{-\kappa t}+\zeta e^{-\alpha t}, \end{aligned}$$
(28)

where \(\zeta =\frac{\rho }{\kappa -\alpha }=\frac{2Nc\gamma {\overline{\varTheta }}}{[(4-\epsilon \eta )\gamma \lambda _2-2\alpha {\overline{\varTheta }}]\epsilon }\).

This, together with inequality (21) leads to

$$\begin{aligned} \begin{aligned} \sum _{i=1}^{N}\parallel W^*-W_i(t)\parallel ^2\le&\frac{\,2\,}{{\underline{\theta }}}(V(0)-\zeta )e^{-\kappa t}+\frac{\,2\zeta \,}{{\underline{\theta }}}e^{-\alpha t}. \end{aligned} \end{aligned}$$
(29)
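The comparison argument from (26) to (28) can also be checked numerically. The sketch below (all constants are arbitrary example values satisfying \(0<\alpha <\kappa \), not taken from the paper) integrates the worst-case dynamics \(\dot{V}=-\kappa V+\rho e^{-\alpha t}\) with forward Euler and compares the result against the closed form in (28):

```python
# Integrate dV/dt = -kappa*V + rho*exp(-alpha*t) with forward Euler and
# compare against (V(0) - zeta)*e^{-kappa t} + zeta*e^{-alpha t}, zeta = rho/(kappa - alpha).
import math

kappa, alpha, rho, V0 = 2.0, 0.5, 1.0, 3.0   # example constants, 0 < alpha < kappa
zeta = rho / (kappa - alpha)

dt, T = 1e-4, 5.0
V, t = V0, 0.0
while t < T:
    V += dt * (-kappa * V + rho * math.exp(-alpha * t))
    t += dt

closed = (V0 - zeta) * math.exp(-kappa * T) + zeta * math.exp(-alpha * T)
print(abs(V - closed))  # small discretization error only
```

The discrepancy shrinks with the step size, confirming that (28) is the exact solution of the comparison equation.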

(II) To exclude Zeno behavior, we show that the inter-event times are lower bounded by a positive constant \(\tau _0\). First, we have \({\dot{e}}_i(t)=-{\dot{W}}_i(t)\) for \(t\in [t^i_{k_i},t^i_{k_i+1})\) from the error variable (9). Notice that \(e_i(t^i_{k_i})=0\) and \(e_i(t)=-\int _{t^i_{k_i}}^{t} {\dot{W}}_i(\tau ) d\tau \). It follows from the algorithm (8) that

$$\begin{aligned} \parallel e_i(t)\parallel \le \int _{t^i_{k_i}}^{t} \parallel {\dot{W}}_i(\tau ) \parallel d\tau \le \int _{t^i_{k_i}}^{t} \parallel {\dot{W}}(\tau ) \parallel d\tau . \end{aligned}$$
(30)

Substituting (11) into inequality (30), one has

$$\begin{aligned} \parallel e_i(t)\parallel&\le \int _{t^i_{k_i}}^{\text {T}} \parallel -\gamma (H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})(W(\tau )+e(\tau )) \parallel d\tau \nonumber \\&\le \frac{\gamma }{{\underline{\theta }}}\int _{t^i_{k_i}}^{\text {T}} \parallel ({\mathcal {L}}\otimes I_{ml})(W(\tau )+e(\tau )) \parallel d\tau , \end{aligned}$$
(31)

where \({\underline{\theta }}:=\min \limits _{i\in {\mathcal {V}}}\theta _i\), with \(\theta _i=\lambda _{min}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\). Since \(({\mathcal {L}}\otimes I_{ml})(W(t)+e(t)-\mathbf{1 }_N\otimes W^*)=({\mathcal {L}}\otimes I_{ml})(W(t)+e(t))\), (31) can be rewritten as

$$\begin{aligned} \parallel e_i(t)\parallel&\le \frac{\gamma }{{\underline{\theta }}}\int _{t^i_{k_i}}^{\text {T}} \parallel ({\mathcal {L}}\otimes I_{ml})(W(\tau )+e(\tau )-\mathbf{1 }_N\otimes W^*) \parallel d\tau \nonumber \\&\le \frac{\gamma \parallel {\mathcal {L}}\parallel }{{\underline{\theta }}}\int _{t^i_{k_i}}^{\text {T}}\big (\parallel W(\tau )-\mathbf{1 }_N\otimes W^*\parallel +\parallel e(\tau )\parallel \big )d\tau . \end{aligned}$$
(32)

From the inequality (29) and the trigger function (10), one has

$$\begin{aligned} \parallel W(t)-\mathbf{1 }_N\otimes W^*\parallel +\parallel e(t)\parallel&\le \sqrt{\frac{2V(W(0))}{{\underline{\theta }}}}e^{-\frac{\kappa t}{2}}+\sqrt{\frac{\,2\zeta \,}{{\underline{\theta }}}}e^{-\frac{\alpha t}{2}}+\sqrt{Nc} e^{-\frac{\alpha t}{2}}\nonumber \\&\le \sqrt{\frac{2V(W(0))}{{\underline{\theta }}}}e^{-\frac{\kappa t^i_{k_i}}{2}}+\Big (\sqrt{\frac{\,2\zeta \,}{{\underline{\theta }}}}+\sqrt{Nc} \Big )e^{-\frac{\alpha t^i_{k_i}}{2}}. \end{aligned}$$
(33)

Substituting inequality (33) into (32), one has

$$\begin{aligned} \parallel e_i(t)\parallel \le \Big (\mu _1e^{-\frac{\kappa t^i_{k_i}}{2}}+\mu _2e^{-\frac{\alpha t^i_{k_i}}{2}}\Big )(t-t^i_{k_i}), \end{aligned}$$
(34)

where \(\mu _1 = \frac{\gamma \parallel {\mathcal {L}}\parallel }{{\underline{\theta }}} \sqrt{\frac{2V(W(0))}{{\underline{\theta }}}}\), \(\mu _2 = \frac{\gamma \parallel {\mathcal {L}}\parallel }{{\underline{\theta }}}\Big (\sqrt{\frac{\,2\zeta \,}{{\underline{\theta }}}}+\sqrt{Nc} \Big )\).

The next event will not be triggered before \(\parallel e_i(t)\parallel =\sqrt{c} e^{-\frac{\alpha t}{2}}\). Thus, a lower bound on the inter-event intervals is given by the solution \(\tau _0=t-t^i_{k_i}\) of the equation

$$\begin{aligned} \Big (\mu _1e^{-\frac{\kappa t^i_{k_i}}{2}}+\mu _2e^{-\frac{\alpha t^i_{k_i}}{2}}\Big )\tau _0=\sqrt{c} e^{-\frac{\alpha (\tau _0+t^i_{k_i})}{2}} \Longleftrightarrow \Big (\mu _1e^{\frac{(\alpha -\kappa ) t^i_{k_i}}{2}}+\mu _2\Big )\tau _0=\sqrt{c} e^{-\frac{\alpha }{2} \tau _0}. \end{aligned}$$
(35)

Because of \(0<\alpha <\kappa \), it follows that \(\mu _2\le \mu _1e^{\frac{(\alpha -\kappa ) t^i_{k_i}}{2}}+\mu _2\le \mu _1+\mu _2\). For all \(t^i_{k_i}\ge 0\), the solutions \(\tau _0(t^i_{k_i})\) are greater than or equal to the \(\tau _0\) given by \((\mu _1+\mu _2)\tau _0=\sqrt{c} e^{-\frac{\alpha }{2} \tau _0}\), which is a strictly positive constant.

Since there is a positive lower bound \(\tau _0\) on the inter-event intervals, there are no accumulation points in the event sequences, so the Zeno behavior is excluded.
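The equation \((\mu _1+\mu _2)\tau _0=\sqrt{c}\,e^{-\frac{\alpha }{2}\tau _0}\) has a unique positive root because the left-hand side increases from 0 while the right-hand side decreases from \(\sqrt{c}>0\). A minimal sketch of solving it by bisection, with assumed example values for \(\mu _1\), \(\mu _2\), \(\alpha \) and \(c\) (these are illustrative, not from the paper):

```python
# Solve (mu1 + mu2)*tau = sqrt(c)*exp(-alpha/2 * tau) for the inter-event lower bound tau0.
import math

mu1, mu2, alpha, c = 1.5, 0.8, 0.5, 0.04  # assumed example constants

def f(tau):
    # f is monotonically increasing: LHS grows linearly, RHS decays.
    return (mu1 + mu2) * tau - math.sqrt(c) * math.exp(-alpha / 2 * tau)

lo, hi = 0.0, 1.0                 # f(0) = -sqrt(c) < 0 and f(1) > 0 here
for _ in range(60):               # bisection to machine precision
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

tau0 = (lo + hi) / 2
print(tau0 > 0)  # True: a strictly positive lower bound
```

Any positive root of this equation serves as the uniform lower bound on inter-event intervals that rules out Zeno behavior.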

The proof is completed. \(\square \)

Proof of Theorem 2

Proof

Consider the event-triggered discrete-time DCL algorithm (14); the discrete-time form of the Lyapunov function candidate (20) is given as follows:

$$\begin{aligned} {V}(k)=\frac{1}{2}\sum \limits _{i\in {\mathcal {V}}}\Big ((W^*-W_i(k))^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})(W^*-W_i(k))\Big ). \end{aligned}$$
(36)

The following two inequalities still hold in the discrete-time case:

$$\begin{aligned} {V}(k)&\ge \frac{\,{\underline{\theta }}\,}{\,2\,}\sum _{i=1}^{N}\parallel W^*-W_i(k)\parallel ^2, \end{aligned}$$
(37)
$$\begin{aligned} {V}(k)&\le \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,}W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k). \end{aligned}$$
(38)

We are now in a position to establish the convergence of algorithm (14) with the trigger function (15). Consider the Lyapunov function candidate (36), whose difference is given by

$$\begin{aligned} \bigtriangleup {V(k+1)}=&V(k+1)-V(k)\nonumber \\ =\,&\frac{1}{2}\sum _{i=1}^{N}\Big [\Big ((W^*-W_i(k+1))^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})(W^*-W_i(k+1))\Big )\nonumber \\&-\Big ((W^*-W_i(k))^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})(W^*-W_i(k))\Big )\Big ]\nonumber \\ =&-\frac{1}{2}\sum _{i=1}^{N}\Big (W_i(k)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k)\nonumber \\&-W_i(k+1)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\Big )\nonumber \\&-W^{*T}\sum _{i=1}^{N}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\big ({W_i}(k+1)-{W_i}(k)\big ). \end{aligned}$$
(39)

In the discrete-time case, \(\sum _{i=1}^{N}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\big ({W_i}(k+1)-{W_i}(k)\big )=\gamma \sum _{i=1}^{N}\sum \limits _{j\in {\mathcal {N}}_{i}}a_{ij}\Big ({\hat{W}}_j(k)-{\hat{W}}_i(k)\Big )=0\) according to the event-triggered discrete-time DCL algorithm (14). Then,
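The cancellation above relies on the consensus increments summing to zero across all nodes, which holds for symmetric edge weights \(a_{ij}=a_{ji}\) (an undirected graph). A small numerical sanity check with random data, assuming such symmetric weights:

```python
# Check that sum_i sum_j a_ij*(W_hat_j - W_hat_i) = 0 for a symmetric weight matrix,
# which is why the W*-term drops out of (39).
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.random((N, N))
A = (A + A.T) / 2                 # enforce a_ij = a_ji (undirected graph)
np.fill_diagonal(A, 0)            # no self-loops
W_hat = rng.random((N, 3))        # broadcast weight estimates, one row per node

total = sum(A[i, j] * (W_hat[j] - W_hat[i]) for i in range(N) for j in range(N))
print(np.allclose(total, 0))  # True
```

Swapping the summation indices \(i\) and \(j\) maps each term to its negative when \(a_{ij}=a_{ji}\), so the double sum vanishes identically.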

$$\begin{aligned} \bigtriangleup {V(k+1)}=&-\frac{1}{2}\sum _{i=1}^{N}\Big (W_i(k)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k)\nonumber \\&-W_i(k+1)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\Big ). \end{aligned}$$
(40)

By adding and subtracting the same term, we have

$$\begin{aligned} \bigtriangleup {V(k+1)} =&-\frac{1}{2}\sum _{i=1}^{N}\Big (W_i(k)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k)\nonumber \\&-W_i(k+1)^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\nonumber \\&+2\big ({W_i}(k+1)-{W_i}(k)\big )^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\nonumber \\&-2\big ({W_i}(k+1)-{W_i}(k)\big )^{\text {T}}\times (H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\Big ). \end{aligned}$$
(41)

Then, we get

$$\begin{aligned} \bigtriangleup {V(k+1)}=&-\frac{1}{2}\sum _{i=1}^{N}\Big (\big ({W_i}(k+1)-{W_i}(k)\big )^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\big ({W_i}(k+1)-{W_i}(k)\big )\Big )\nonumber \\&\,+\sum _{i=1}^{N}\big ({W_i}(k+1)-{W_i}(k)\big )^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\nonumber \\ \le&\sum _{i=1}^{N}\big ({W_i}(k+1)-{W_i}(k)\big )^{\text {T}}(H_i^{\text {T}}H_i+\sigma _iI_{ml})W_i(k+1)\nonumber \\ =\,&\big ({W}(k+1)-{W}(k)\big )^{\text {T}}(H^{\text {T}}H+\sigma \otimes I_{ml})W(k+1). \end{aligned}$$
(42)

Due to (16), we obtain

$$\begin{aligned}&\bigtriangleup {V(k+1)}=-\gamma \big (W(k)+e(k)\big )^{\text {T}}({\mathcal {L}}\otimes I_{ml})^{\text {T}}W(k+1)\nonumber \\&\quad =-\gamma W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k+1)-\gamma e(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k+1)\nonumber \\&\quad \le -\gamma W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})\Big [W(k)-\gamma (H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})\Big (W(k)+e(k)\Big )\Big ]\nonumber \\&\qquad -\gamma e(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})\Big [W(k)-\gamma (H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})\Big (W(k)+e(k)\Big )\Big ]\nonumber \\&\quad =-\gamma W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k)+\gamma ^2W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})W(k)\nonumber \\&\qquad +2\gamma ^2W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})e(k)-\gamma e(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k)\nonumber \\&\qquad +\gamma ^2e(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})e(k). \end{aligned}$$
(43)

According to Young's inequality, we have

$$\begin{aligned} W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})(H^{\text {T}}H+\sigma \otimes I_{ml})^{-1}({\mathcal {L}}\otimes I_{ml})e(k)&\le \frac{\epsilon \eta ^3}{2{\bar{\lambda }}^2}W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k)\nonumber \\&\quad +\frac{1}{2\epsilon }e(k)^{\text {T}}e(k), \end{aligned}$$
(44)

and

$$\begin{aligned} e(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k) \le \frac{\eta }{2\epsilon }W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})W(k)+\frac{\epsilon }{2}e(k)^{\text {T}}e(k), \end{aligned}$$
(45)

where \({\bar{\lambda }}=\lambda _{min}(H^{\text {T}}H+\sigma \otimes I_{ml})\), \(\eta =\lambda _{max}({\mathcal {L}})\), \(\epsilon >\eta /2\) is a constant.

Substituting inequality (44) and (45) into (43), one has

$$\begin{aligned} \bigtriangleup {V(k+1)}\le -\gamma \rho _1W(k)^{\text {T}}({\mathcal {L}}\otimes I_{ml})^{\text {T}}W(k)+\rho _2e(k)^{\text {T}}e(k), \end{aligned}$$
(46)

where \(\rho _1=1-\frac{\gamma \eta }{{\bar{\lambda }}}-\frac{\gamma \epsilon \eta ^3}{{\bar{\lambda }}^2}-\frac{\eta }{2\epsilon }\), \(\rho _2=\frac{\gamma ^2}{\epsilon }+\frac{\gamma \epsilon }{2}+\frac{\gamma ^2\eta ^2}{{\bar{\lambda }}}\).

Based on the conditions \(0<\gamma <\min \left\{ \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,},\frac{{\bar{\lambda }}^2(2\epsilon -\eta )}{2\epsilon \eta ({\bar{\lambda }}+\epsilon \eta ^2)}\right\} \), \(\epsilon >\eta /2\) and the trigger function (15), one gets \(\rho _1\in (0,1)\) and \(e(k)^{\text {T}}e(k)\le Nc\beta ^k\). Then, from inequality (38), one has

$$\begin{aligned} \bigtriangleup {V(k+1)}\le&-\frac{2\gamma \rho _1\lambda _2}{{\overline{\varTheta }}}V(k)+Nc\rho _2\beta ^k. \end{aligned}$$
(47)

Thus,

$$\begin{aligned} V(k+1)\le \varsigma V(k)+Nc\rho _2\beta ^k, \end{aligned}$$
(48)

where \(\varsigma =1-\frac{2\gamma \rho _1\lambda _2}{{\overline{\varTheta }}}\).

Therefore, based on the above results, if \(\gamma \) can be chosen such that \(0<\gamma <\min \left\{ \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,},\frac{{\bar{\lambda }}^2(2\epsilon -\eta )}{2\epsilon \eta ({\bar{\lambda }}+\epsilon \eta ^2)}\right\} \) and \(\epsilon >\eta /2\), then \(\varsigma \in (0,1)\). Furthermore, we have

$$\begin{aligned} V(k)\le&\varsigma V(k-1)+Nc\rho _2\beta ^{k-1}\nonumber \\ \le \,&\varsigma ^2 V(k-2)+\varsigma Nc\rho _2\beta ^{k-2}+Nc\rho _2\beta ^{k-1}\nonumber \\ \vdots \nonumber \\ \le \,&\varsigma ^k V(0)+ Nc\rho _2\Big (\varsigma ^{k-1}\beta ^{0}+\varsigma ^{k-2}\beta ^{1}+...+\varsigma ^{1}\beta ^{k-2}+\varsigma ^{0}\beta ^{k-1}\Big )\nonumber \\ =\,&\varsigma ^k V(0)+Nc\rho _2\frac{\varsigma ^k-\beta ^k}{\varsigma -\beta }\nonumber \\ =\,&\Big (V(0)-\varpi \Big )\varsigma ^k+\varpi \beta ^k, \end{aligned}$$
(49)

where \(\varpi =\frac{Nc\rho _2}{\beta -\varsigma }\). Then, according to the inequality (37), we get inequality (17) in Theorem 2.
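The unrolling of recursion (48) into the closed form in (49) can be verified numerically. Below is a minimal sketch with assumed example constants satisfying \(0<\varsigma<\beta <1\) (illustrative values, not from the paper):

```python
# Iterate V(k+1) = varsigma*V(k) + N*c*rho2*beta^k and compare with the
# closed form (V(0) - varpi)*varsigma^k + varpi*beta^k, varpi = N*c*rho2/(beta - varsigma).
varsigma, beta = 0.6, 0.9            # example values, 0 < varsigma < beta < 1
N, c, rho2, V0 = 4, 0.1, 0.5, 2.0    # example constants
varpi = N * c * rho2 / (beta - varsigma)

V, K = V0, 30
for k in range(K):
    V = varsigma * V + N * c * rho2 * beta ** k

closed = (V0 - varpi) * varsigma ** K + varpi * beta ** K
print(abs(V - closed) < 1e-9)  # True: the geometric sum telescopes exactly
```

The agreement is exact up to rounding because the partial geometric sum in (49) equals \((\varsigma ^k-\beta ^k)/(\varsigma -\beta )\) for any \(\varsigma \ne \beta \).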

The proof is completed. \(\square \)


Cite this article

Dai, H., Xie, J. & Chen, W. Event-Triggered Distributed Cooperative Learning Algorithms over Networks via Wavelet Approximation. Neural Process Lett 50, 669–700 (2019). https://doi.org/10.1007/s11063-019-10031-x

