Abstract
This paper investigates event-triggered distributed cooperative learning (DCL) over networks based on wavelet approximation theory, where each node has access only to local data produced by the same unknown pattern (map or function). All nodes cooperatively learn this unknown pattern by exchanging learned information with their neighboring nodes under an event-triggered strategy, which removes unnecessary communications and thus avoids wasting network resources. For this problem, two novel event-triggered DCL algorithms, one continuous-time and one discrete-time, are proposed to approximate the unknown pattern using wavelet basis functions. The proposed event-triggered DCL algorithms train the optimal weight coefficient matrix of the wavelet series. Moreover, the convergence of the proposed algorithms is established by the Lyapunov method, and Zeno behavior is excluded by a strictly positive lower bound on the sampling intervals. Illustrative examples are presented to show the efficiency and convergence of the proposed algorithms.
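The core idea summarized above, approximating an unknown pattern by a wavelet series whose weight coefficients are trained from data, can be illustrated by a minimal single-node sketch. The Mexican-hat mother wavelet, the target function, and all numeric values below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Minimal single-node sketch: fit the weights w of the truncated wavelet
# series  f(x) ~ sum_k w_k * psi((x - b_k)/a)  to local data by
# regularized least squares.  All choices below are illustrative only.

def mexican_hat(t):
    # Mexican-hat (Ricker) mother wavelet
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

rng = np.random.default_rng(0)
f = lambda x: np.sin(2.0 * np.pi * x)            # "unknown" pattern, for the demo
x = rng.uniform(0.0, 1.0, 200)                   # one node's local samples
y = f(x) + 0.01 * rng.standard_normal(x.size)    # noisy local measurements

centers = np.linspace(0.0, 1.0, 15)              # translations b_k
scale = 0.1                                      # dilation a
H = mexican_hat((x[:, None] - centers[None, :]) / scale)   # basis matrix

# Ridge-regularized normal equations, analogous to the H_i^T H_i + sigma_i I
# terms that appear in the convergence analysis
sigma = 1e-6
w = np.linalg.solve(H.T @ H + sigma * np.eye(centers.size), H.T @ y)

x_test = np.linspace(0.05, 0.95, 50)
H_test = mexican_hat((x_test[:, None] - centers[None, :]) / scale)
err = float(np.max(np.abs(H_test @ w - f(x_test))))
print(err)  # small uniform error away from the interval ends
```

In the distributed setting of the paper, each node would form such a local fit and then drive its weight vector toward its neighbors' estimates under the event-triggered rule.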
References
Predd JB, Kulkarni SR, Poor HV (2006) Distributed learning in wireless sensor networks. IEEE Signal Process Mag 23(4):56–69
Georgopoulos L, Hasler M (2014) Distributed machine learning in networks by consensus. Neurocomputing 124(2):2–12
Chen JS, Sayed AH (2012) Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans Signal Process 60(8):4289–4305
Chen WS, Hua SY, Ge SS (2014) Consensus-based distributed cooperative learning control for a group of discrete-time nonlinear multi-agent systems using neural networks. Automatica 50(1):2254–2268
Chen WS, Hua SY, Zhang HG (2015) Consensus-based distributed cooperative learning from closed-loop neural control systems. IEEE Trans Neural Netw Learn Syst 26(2):331–345
Ren PF, Chen WS, Dai H, Zhang HG (2017) Distributed cooperative learning over networks via fuzzy logic systems: performance analysis and comparison. IEEE Trans Fuzzy Syst 26:2075–2088
Xie J, Chen WS, Dai H (2017) Distributed cooperative learning algorithms using wavelet neural network. Neural Comput Appl. https://doi.org/10.1007/s00521-017-3134-1
Lim C, Lee S, Choi JH, Chang JH (2014) Efficient implementation of statistical model-based voice activity detection using Taylor series approximation. IEICE Trans Fundam Electron Commun Comput Sci E97.A(3):865–868
Sharapudinov II (2014) Approximation of functions in variable-exponent Lebesgue and Sobolev spaces by finite Fourier–Haar series. Rus Acad Sci Sb Math 205(205):145–160
Yang C, Yi Z, Zuo L (2008) Function approximation based on twin support vector machines. In: IEEE Conference on Cybernetics and Intelligent Systems, pp 259–264
Huang GB, Saratchandran P, Sundararajan N (2005) A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans Neural Netw 16(1):57–67
Yang C, Jiang K, Li Z, He W, Su CY (2017) Neural control of bimanual robots with guaranteed global stability and motion precision. IEEE Trans Ind Inf 13(3):1162–1171
Cui R, Yang C, Li Y, Sharma S (2017) Adaptive neural network control of AUVs with control input nonlinearities using reinforcement learning. IEEE Trans Syst Man Cybern Syst 47(6):1019–1029
Wu S, Er MJ (2000) Dynamic fuzzy neural networks: a novel approach to function approximation. IEEE Trans Syst Man Cybern B Cybern 30(2):358–364
Ferrari S, Stengel RF (2005) Smooth function approximation using neural networks. IEEE Trans Neural Netw 16(1):24–38
Pavez E, Silva JF (2012) Analysis and design of Wavelet-Packet Cepstral coefficients for automatic speech recognition. Speech Commun 54(6):814–835
Yan R, Gao RX, Chen X (2014) Wavelets for fault diagnosis of rotary machines: a review with applications. Signal Process 96(5):1–15
Siddiqi MH, Lee SW, Khan AM (2014) Weed image classification using wavelet transform, stepwise linear discriminant analysis, and support vector machines for an automatic spray control system. J Inf Sci Eng 30(4):1227–1244
Zainuddin Z, Ong P (2016) Optimization of wavelet neural networks with the firefly algorithm for approximation problems. Neural Comput Appl 28:1–14
Hou MZ, Han XL, Gan YX (2009) Constructive approximation to real function by wavelet neural networks. Neural Comput Appl 18(8):883–889
Cao J, Lin Z, Huang GB (2011) Composite function wavelet neural networks with differential evolution and extreme learning machine. Neural Process Lett 33(3):251–265
Cordova J, Yu W (2012) Two types of Haar wavelet neural networks for nonlinear system identification. Neural Process Lett 35(3):283–300
Alexandridis AK, Zapranis AD (2013) Wavelet neural networks: a practical guide. Neural Netw 42:1–27
Courroux S, Chevobbe S, Darouich M, Paindavoine M (2013) Use of wavelet for image processing in smart cameras with low hardware resources. J Syst Archit 59(10):826–832
Chen S, Zhao HC, Zhang SN, Yang YX (2014) Study of ultra-wideband fuze signal processing method based on wavelet transform. IET Radar Sonar Navig 8(3):167–172
Ganjefar S, Tofighi M (2015) Single-hidden-layer fuzzy recurrent wavelet neural network: applications to function approximation and system identification. Inf Sci 294:269–285
Nejad HC, Farshad M, Khayat O, Rahatabad FN (2016) Performance verification of a fuzzy wavelet neural network in the first order partial derivative approximation of nonlinear functions. Neural Process Lett 43(1):219–230
Sibel S, Ali MS, Vadivel R, Arik S (2017) Decentralized event-triggered synchronization of uncertain Markovian jumping neutral-type neural networks with mixed delays. Neural Netw 86:32–41
Wang AJ, Dong T, Liao XF (2016) Event-triggered synchronization strategy for complex dynamical networks with the Markovian switching topologies. Neural Netw 74:52–57
Han YJ, Lu WL, Chen TP (2015) Consensus analysis of networks with time-varying topology and event-triggered diffusions. Neural Netw 71:196–203
Li HQ, Liao XF, Chen G, Hill DJ, Dong ZY, Huang TW (2015) Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Netw 66:1–10
Mazo M, Tabuada P (2011) Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Trans Autom Control 56(10):2456–2461
Hu SL, Yue D (2012) Event-triggered control design of linear networked systems with quantizations. ISA Trans 51:153–162
Fan Y, Feng G, Wang Y, Song C (2013) Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica 49(2):671–675
Seyboth GS, Dimarogonas DV, Johansson KH (2013) Event-based broadcasting for multi-agent average consensus. Automatica 49(1):245–252
Aranda-Escolastico E, Guinaldo M, Gordillo F, Dormido S (2016) A novel approach to periodic event-triggered control: design and application to the inverted pendulum. ISA Trans 65:327–338
Mahmoud MS, Sabih M, Elshafei M (2016) Event-triggered output feedback control for distributed networked systems. ISA Trans 60:294–302
Zainuddin Z, Pauline O (2011) Modified wavelet neural network in function approximation and its application in prediction of time-series pollution data. Appl Soft Comput 11(8):4866–4874
Cattani C (2012) Fractional calculus and Shannon wavelet. Math Probl Eng. Article ID 502812, 26 pp
Bazaraa MS, Goode JJ (1973) On symmetric duality in nonlinear programming. Oper Res 21(1):1–9
Lu J, Tang CY (2012) Zero-gradient-sum algorithms for distributed convex optimization: the continuous-time case. IEEE Trans Autom Control 57(9):2348–2354
Acknowledgements
The authors thank the reviewers and the editor for their valuable comments on this paper. This work was supported by the National Natural Science Foundation of China (Grant Numbers 61503292, 61673308 and 61673014), the Natural Science Foundation of Shaanxi Province (Grant Number 2018JM6079) and the Fundamental Research Funds for the Central Universities (Grant No. JB181305).
Appendix
Proof of Theorem 1
Proof
(I) Consider the event-triggered DCL algorithm (8). The following Lyapunov function candidate is constructed:
where \({\widetilde{W}}_i=W^*-W_i\), \(V:\mathbf{R }^{ml}\rightarrow \mathbf{R }\). It is easy to verify that
In addition, the following inequality holds [41]:
We are now in a position to give the main result on the convergence of algorithm (8). Consider the Lyapunov function candidate (20). Then, along the solution of (8), we have
Using Young’s inequality leads to
where \(\epsilon >0\) is a constant. Substituting (24) into (23), together with the trigger function (10), yields
where \(\eta =\lambda _{max}({\mathcal {L}})\).
From inequality (22), one has
where \(\kappa =(1-\frac{\epsilon \eta }{4})\frac{2\gamma \lambda _2}{{\overline{\varTheta }}}\), \(\rho =\frac{Nc\gamma }{\epsilon }\). Then, it follows from inequality (26) that
Integrating both sides of inequality (27) from 0 to t leads to
where \(\zeta =\frac{\rho }{\kappa -\alpha }=\frac{2Nc\gamma {\overline{\varTheta }}}{[(4-\epsilon \eta )\gamma \lambda _2-2\alpha {\overline{\varTheta }}]\epsilon }\).
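As a sanity check, the closed form of \(\zeta \) stated above can be verified numerically against its definition \(\zeta =\rho /(\kappa -\alpha )\). The numbers below are arbitrary test values (chosen so that \(\kappa>\alpha >0\)), not quantities from the paper.

```python
# Numeric consistency check of the closed form of zeta:
#   zeta = rho / (kappa - alpha)
#        = 2*N*c*gamma*Theta / (((4 - eps*eta)*gamma*lam2 - 2*alpha*Theta) * eps),
# with kappa = (1 - eps*eta/4) * 2*gamma*lam2 / Theta and rho = N*c*gamma/eps.
# All values below are arbitrary test values satisfying kappa > alpha > 0.

N, c, gamma, eps, eta, lam2, Theta, alpha = 4, 0.2, 0.5, 0.1, 2.0, 1.5, 3.0, 0.05

kappa = (1.0 - eps * eta / 4.0) * 2.0 * gamma * lam2 / Theta
rho = N * c * gamma / eps
zeta_def = rho / (kappa - alpha)
zeta_closed = 2.0 * N * c * gamma * Theta / (
    ((4.0 - eps * eta) * gamma * lam2 - 2.0 * alpha * Theta) * eps
)
print(abs(zeta_def - zeta_closed))  # zero up to floating-point rounding
```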
This, together with inequality (21) leads to
(II) In order to exclude Zeno behavior, we show that the inter-event times are lower bounded by a positive constant \(\tau _0\). First, from the error variable (9), we have \({\dot{e}}_i(t)=-{\dot{W}}_i(t)\) for \(t\in [t^i_{k_i},t^i_{k_i+1})\). Notice that \(e_i(t^i_{k_i})=0\) and \(e_i(t)=-\int _{t^i_{k_i}}^{t} {\dot{W}}_i(\tau )\, d\tau \). It follows from the algorithm (8) that
Substituting (11) into inequality (30), one has
where \({\underline{\theta }}:=\min \limits _{i\in {\mathcal {V}}}\theta _i\), with \(\theta _i=\lambda _{min}(H_i^{\text {T}}H_i+\sigma _iI_{ml})\). Because \(({\mathcal {L}}\otimes I_{ml})(\mathbf{1 }_N\otimes W^*)=0\), we have \(({\mathcal {L}}\otimes I_{ml})(W(t)+e(t)-\mathbf{1 }_N\otimes W^*)=({\mathcal {L}}\otimes I_{ml})(W(t)+e(t))\), so (31) can be rewritten as
From the inequality (29) and the trigger function (10), one has
Substituting inequality (33) into (32), one has
where \(\mu _1 = \frac{\gamma \parallel {{\mathcal {L}}} \parallel }{{\underline{\theta }}} \sqrt{\frac{2V(W(0))}{{\underline{\theta }}}}\), \(\mu _2 = \frac{\gamma \parallel {\mathcal {L}}\parallel }{{\underline{\theta }}}\)\(\Big (\sqrt{\frac{\,2\zeta \,}{{\underline{\theta }}}}+\sqrt{Nc} \Big )\).
The next event will not be triggered before \(\parallel e_i(t)\parallel =\sqrt{c} e^{-\frac{\alpha t}{2}}\). Thus, a lower bound on the inter-event intervals is given by \(\tau _0=t-t^i_{k_i}\) that solves the equation
Because \(0<\alpha <\kappa \), it follows that \(\mu _2\le \mu _1e^{\frac{(\alpha -\kappa ) t^i_{k_i}}{2}}+\mu _2\le \mu _1+\mu _2\). For all \(t^i_{k_i}\ge 0\), the solutions \(\tau _0(t^i_{k_i})\) are greater than or equal to the constant \(\tau _0\) given by \((\mu _1+\mu _2)\tau _0=\sqrt{c} e^{-\frac{\alpha }{2} \tau _0}\), which is a strictly positive constant.
Since the inter-event intervals admit the positive lower bound \(\tau _0\), the event sequences have no accumulation points, and hence Zeno behavior is excluded.
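The implicit equation defining \(\tau _0\) can be solved numerically by bisection: the left side increases from zero while the right side decreases from \(\sqrt{c}>0\), so a unique strictly positive root exists. The constants below are illustrative assumptions in place of the \(\mu _1,\mu _2,c,\alpha \) of the proof.

```python
import math

# Solve (mu1 + mu2) * tau = sqrt(c) * exp(-alpha * tau / 2) for tau > 0.
# g is increasing in tau, negative at 0 and nonnegative at sqrt(c)/(mu1+mu2),
# so bisection on that bracket converges to the unique positive root.
def tau0(mu1, mu2, c, alpha, tol=1e-12):
    g = lambda t: (mu1 + mu2) * t - math.sqrt(c) * math.exp(-alpha * t / 2.0)
    lo, hi = 0.0, math.sqrt(c) / (mu1 + mu2)   # g(lo) < 0 <= g(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2.0

# Illustrative constants, not values from the paper
t0 = tau0(mu1=2.0, mu2=1.5, c=0.25, alpha=0.8)
print(t0)  # a strictly positive inter-event lower bound
```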
The proof is completed. \(\square \)
Proof of Theorem 2
Proof
Consider the event-triggered discrete-time DCL algorithm (14). The discrete form of the Lyapunov function candidate (20) is given as follows:
The following two inequalities still hold in the discrete-time case:
We are now in a position to give the main result on the convergence of algorithm (14) with the trigger function (15). Consider the Lyapunov function candidate (36), whose difference is given by
In the discrete-time case, \(\sum _{i=1}^{N}(H_i^{\text {T}}H_i+\sigma _iI_l)\big ({W_i}(k+1)-{W_i}(k)\big )=\gamma \sum _{i=1}^{N}\sum \limits _{j\in {\mathcal {N}}_{i}}a_{ij}\Big ({\hat{W}}_j(k)-{\hat{W}}_i(k)\Big )=0\) according to the event-triggered discrete-time DCL algorithm (14), where the second equality holds because \(a_{ij}=a_{ji}\), so the coupling terms cancel in pairs. Then,
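The cancellation used in this step relies only on the symmetry of the edge weights (as for an undirected graph): each pair \((i,j)\) contributes two exactly opposite terms. A quick numeric check, with arbitrary illustrative weights and node estimates, confirms it.

```python
import numpy as np

# Check: for symmetric weights a_ij = a_ji,
#   sum_i sum_{j in N_i} a_ij * (W_j - W_i) = 0.
# The graph weights and node estimates below are arbitrary.

rng = np.random.default_rng(1)
N, m = 5, 3
A = rng.random((N, N))
A = (A + A.T) / 2.0              # enforce a_ij = a_ji (undirected graph)
np.fill_diagonal(A, 0.0)         # no self-loops

W = rng.standard_normal((N, m))  # stacked node estimates W_i

total = sum(A[i, j] * (W[j] - W[i]) for i in range(N) for j in range(N))
print(np.abs(total).max())  # numerically zero
```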
Adding and subtracting terms, we have
Then, we get
Due to (16), we obtain
According to Young’s inequality, we have
and
where \({\bar{\lambda }}=\lambda _{min}(H^{\text {T}}H+\sigma \otimes I_{ml})\), \(\eta =\lambda _{max}({\mathcal {L}})\), \(\epsilon >\eta /2\) is a constant.
Substituting inequality (44) and (45) into (43), one has
where \(\rho _1=1-\frac{\gamma \eta }{{\bar{\lambda }}}-\frac{\gamma \epsilon \eta ^3}{{\bar{\lambda }}^2}-\frac{\eta }{2\epsilon }\), \(\rho _2=\frac{\gamma ^2}{\epsilon }+\frac{\gamma \epsilon }{2}+\frac{\gamma ^2\eta ^2}{{\bar{\lambda }}}\).
Based on the conditions \(0<\gamma <\min \left\{ \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,},\frac{{\bar{\lambda }}^2(2\epsilon -\eta )}{2\epsilon \eta ({\bar{\lambda }}+\epsilon \eta ^2)}\right\} \), \(\epsilon >\eta /2\) and the trigger function (15), one gets \(\rho _1\in (0,1)\) and \(e(k)^{\text {T}}e(k)\le Nc\beta ^k\). Then, from inequality (38), one has
Thus,
where \(\varsigma =1-\frac{2\gamma \rho _1\lambda _2}{{\overline{\varTheta }}}\).
Therefore, based on the above results, if \(\gamma \) can be chosen such that \(0<\gamma <\min \left\{ \frac{\,{\overline{\varTheta }}\,}{\,2\lambda _2\,},\frac{{\bar{\lambda }}^2(2\epsilon -\eta )}{2\epsilon \eta ({\bar{\lambda }}+\epsilon \eta ^2)}\right\} \) and \(\epsilon >\eta /2\), then \(\varsigma \in (0,1)\). Furthermore, we have
where \(\varpi =\frac{Nc\rho _2}{\beta -\varsigma }\). Then, according to the inequality (37), we get inequality (17) in Theorem 2.
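The final step is a linear recursion driven by a geometric input: if \(V(k+1)\le \varsigma V(k)+Nc\rho _2\beta ^k\) with \(0<\varsigma<\beta <1\) (as the positivity of \(\varpi \) requires), unrolling the recursion gives \(V(k)\le \varsigma ^kV(0)+\varpi \beta ^k\), so \(V(k)\rightarrow 0\) geometrically. This can be checked numerically; the constants below are illustrative assumptions.

```python
# Numeric check of the unrolled bound V(k) <= varsigma**k * V(0) + varpi * beta**k
# with varpi = N*c*rho2 / (beta - varsigma), driving V by the worst case
# (the recursion inequality taken with equality).  Constants are illustrative.

varsigma, beta = 0.6, 0.9
Ncrho2 = 0.3            # stands in for N*c*rho_2
V0 = 5.0

varpi = Ncrho2 / (beta - varsigma)

V = V0
ok = True
for k in range(60):
    bound = varsigma ** k * V0 + varpi * beta ** k
    ok = ok and (V <= bound + 1e-12)
    V = varsigma * V + Ncrho2 * beta ** k   # worst case: inequality as equality
print(ok, V)  # bound holds at every step and V decays toward zero
```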
The proof is completed. \(\square \)
Cite this article
Dai, H., Xie, J. & Chen, W. Event-Triggered Distributed Cooperative Learning Algorithms over Networks via Wavelet Approximation. Neural Process Lett 50, 669–700 (2019). https://doi.org/10.1007/s11063-019-10031-x