Abstract
This paper addresses global robust stability for uncertain neural networks with discrete and distributed delays. By using Lyapunov stability theory, the homeomorphic mapping theorem and matrix theory, improved sufficient robust stability conditions for uncertain neural networks with mixed delays are presented. The proposed conditions are easy to verify and have the advantage that they are expressed in terms of the network parameters only. Finally, two illustrative numerical examples are provided to demonstrate the validity of the results by comparison with existing ones.
1 Introduction
Neural networks (NNs) have attracted the attention of many researchers because of their wide applications in practical engineering, such as combinatorial optimization, moving image processing, and signal processing. Stability analysis of dynamical neural network models has been a topic of growing discussion, due to its important role in solving engineering problems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. In studying the stability of NNs, two problems are often encountered: parameter uncertainty and time delays. On the one hand, parameters are uncertain in practical engineering problems; on the other hand, time delays always occur due to the finite switching speed [17]. Therefore, it is necessary to study the stability of NNs with uncertain parameters and time delays.
In past years, many excellent results for dynamical neural networks have been reported; see [17,18,19,20] and the references therein. In [17,18,19,20], stability results were obtained in the form of LMIs. However, it is well known that LMI conditions are complicated and difficult to verify. At the same time, researchers have obtained simpler stability conditions in the form of matrix norm inequalities. Shao et al. derived stability conditions in the form of matrix norm inequalities for uncertain neural networks with discrete delays but without distributed delays [22,23,24,25,26,27]. Some researchers investigated the dissipativity of interval neural networks with discrete delay and discontinuous activations [28, 29]. It should be noted that neural networks have a spatial extent because of the presence of many parallel pathways with a variety of axon sizes and lengths, so there is a distribution of propagation delays over a certain period. In some practical engineering applications, distributed delays are introduced into dynamical systems, such as the feeding system and combustion chamber of a liquid mono-propellant rocket motor with pressure feeding, and filter design in signal processing. In recent years, there has been growing interest in the stability of neural networks with discrete and distributed delays [30,31,32,33,34,35]. To reduce conservatism, many approaches have been proposed, for instance, constructing novel Lyapunov–Krasovskii functionals, the free-weighting matrix method, the delay decomposition method, and the model transformation method. It is clear that the systems in [30,31,32,33,34,35] were not uncertain systems, and their stability results were given in the form of LMIs. It is well known that the presence of uncertainty increases the difficulty of the analysis. Up to now, few researchers have focused on uncertain neural networks with discrete and distributed delays via the homeomorphism mapping theorem.
There is thus still room to improve existing stability results using the homeomorphism mapping theorem.
Motivated by the above discussions, in this paper we study the robust stability problem for uncertain neural networks with mixed delays. The main contribution of this paper is summarized as follows: employing the homeomorphism mapping theorem, we derive novel robust stability conditions for uncertain neural networks with mixed delays. The resulting conditions, given in the form of matrix norm inequalities, are less conservative, as illustrated by the numerical examples below.
Notations In this paper, we use the following notations. \(R^{n}\) is the n-dimensional Euclidean space; \(R^{n\times m }\) is the set of \(n\times m\) real matrices; I is an identity matrix; \(\Vert \cdot \Vert\) represents a vector or a matrix norm; \(P>0\,(\ge 0)\) means P is a positive definite (nonnegative definite) matrix; the superscript '\(T\)' denotes the transpose of a vector or a matrix; \(|x|=(|x_{1}|,|x_{2}|,\ldots ,|x_{n}|)^{T}\), where \(x=(x_{1},x_{2},\ldots ,x_{n})^{T}\); \(|A|=(|a_{ij}|)_{n \times n}\), where \(A=(a_{ij})_{n \times n}\); \(\lambda _{m}(\cdot )\) denotes the minimum eigenvalue of a matrix.
2 Preliminaries
We consider the following uncertain neural network with discrete and distributed delays:
then, it can be written in the form:
where \(x(t)=(x_{1}(t),x_{2}(t),\ldots ,x_{n}(t))^{T}\) is the neuron state vector; \(C=diag(c_{1},c_{2},\ldots ,c_{n})>0\); \(A=(a_{ij})_{n\times n}\) is the interconnection weight matrix; \(B=(b_{ij})_{n\times n}\) and \(D=(d_{ij})_{n\times n}\) are the delayed interconnection weight matrices; \(f(x(t))=(f_{1}(x_{1}(t)),f_{2}(x_{2}(t)),\ldots ,f_{n}(x_{n}(t)))^{T}\in R^{n}\) represents the neuron activations; \(f(x(t-\tau ))=(f_{1}(x_{1}(t-\tau _{1})),f_{2}(x_{2}(t-\tau _{2})),\ldots ,f_{n}(x_{n}(t-\tau _{n})))^{T}\in R^{n}\); \(U=(u_{1},u_{2},\ldots ,u_{n})^{T}\) is a constant input vector; \(\tau =(\tau _{1},\tau _{2},\ldots ,\tau _{n})^{T}\) is the discrete delay; \(\sigma\) is the distributed delay.
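The displayed equations for the original system and its compact form (2) did not survive extraction; based on the parameter definitions above, the vector form (2) presumably reads:

```latex
\dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + D\int_{t-\sigma}^{t} f(x(s))\,ds + U,
```

where \(f(x(t-\tau ))\) is interpreted componentwise as defined above.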
The neuron activation functions \(f_{i}(x_{i})\) satisfy the following assumptions
where \(l_{i}(i=1,2,\ldots , n)\) are constant scalars.
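The assumption itself is missing from the extracted text; the standard Lipschitz-type condition consistent with the constants \(l_{i}\) and the matrix \(L=diag(l_{i})\) used later would be:

```latex
|f_i(u) - f_i(v)| \le l_i\,|u - v|, \qquad \forall\, u, v \in R,\; u \ne v,\; i = 1, 2, \ldots, n.
```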
The uncertain parameters in the system (2) satisfy the following assumptions:
where \(\underline{C}=diag(\underline{c}_{1},\underline{c}_{2},\ldots , \underline{c}_{n})\), \(\overline{C}=diag(\overline{c}_{1},\overline{c}_{2},\ldots ,\overline{c}_{n})\), \(\underline{A}=(\underline{a}_{ij})_{n\times n}\), \(\overline{A}=(\overline{a}_{ij})_{n\times n}\), \(\underline{B}=(\underline{b}_{ij})_{n\times n}\), \(\overline{B}=(\overline{b}_{ij})_{n\times n}\), \(\underline{D}=(\underline{d}_{ij})_{n\times n}\), \(\overline{D}=(\overline{d}_{ij})_{n\times n}\).
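The interval conditions (3) are not visible in the extracted text; from the bounds defined above, they presumably take the usual interval-matrix form:

```latex
C \in C_I = \{C = diag(c_i) : 0 < \underline{c}_i \le c_i \le \overline{c}_i\}, \qquad
A \in A_I = \{A = (a_{ij}) : \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}\},
```
```latex
B \in B_I = \{B = (b_{ij}) : \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}\}, \qquad
D \in D_I = \{D = (d_{ij}) : \underline{d}_{ij} \le d_{ij} \le \overline{d}_{ij}\}.
```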
Denote
We will use the following vector norms and a matrix norm in this paper:
where \(x=(x_{1},x_{2},\ldots ,x_{n})^{T}\) is a vector and \(A=(a_{ij})_{n \times n}\) is a real matrix.
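The norm definitions did not survive extraction; the vector norms \(\Vert \cdot \Vert _{1}, \Vert \cdot \Vert _{2}, \Vert \cdot \Vert _{\infty }\) used later in the proofs, together with the spectral matrix norm, are presumably the standard ones:

```latex
\Vert x \Vert_1 = \sum_{i=1}^{n} |x_i|, \quad
\Vert x \Vert_2 = \Big(\sum_{i=1}^{n} x_i^2\Big)^{1/2}, \quad
\Vert x \Vert_\infty = \max_{1 \le i \le n} |x_i|, \quad
\Vert A \Vert_2 = \big[\lambda_{\max}(A^T A)\big]^{1/2}.
```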
Some useful Lemmas for the main results are stated as follows.
Lemma 2.1
[24] The map \(H(x): R^{n}\rightarrow R^{n}\) is a homeomorphism if H(x) satisfies the following conditions:
(i) H(x) is injective, that is, \(H(x)\ne H(y)\) for all \(x\ne y\);
(ii) H(x) is proper, that is, \(\Vert H(x)\Vert \rightarrow + \infty\) as \(\Vert x\Vert \rightarrow + \infty\).
Lemma 2.2
[18] For any vectors \(x,y \in R^{n}\) and a positive definite matrix \(G\in R^{n\times n}\), the following inequality holds:
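The inequality of Lemma 2.2 is missing from the extracted text; the standard inequality of this type from [18] is:

```latex
2x^{T}y \le x^{T}Gx + y^{T}G^{-1}y.
```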
Lemma 2.3
[23] If A is a real matrix defined by \(A\in A_{I}=[\underline{A},\overline{A}]\), then, for \(x\in R^{n}\), there exist a positive diagonal matrix P and a nonnegative diagonal matrix \(\Gamma\) such that the following inequality holds:
Lemma 2.4
[36] For real matrices A, B defined by \(A\in A_{I}=[\underline{A},\overline{A}]\), \(B\in B_{I}=[\underline{B},\overline{B}]\), there exist positive constants \(h_{1},h_{2}\) such that
Lemma 2.5
[17] If B is a real matrix defined by \(B\in B_{I}=[\underline{B},\overline{B}]\), then, for any positive diagonal matrix \(P=diag(p_{1},p_{2},\ldots ,p_{n})>0\) and any two real vectors \(x\in R^{n}, y\in R^{n}\), the following inequality holds:
where \(p_{M}=\max \{p_{i}\}\), \(\rho\) is any positive constant, and \(R=diag(r_{i})\ge 0\) with \(r_{i}=\sum \nolimits _{k=1}^{n}\widehat{b}_{ki}\sum \nolimits _{j=1}^{n}\widehat{b}_{kj}\) and \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}\ (i,j=1,2,\ldots ,n)\).
3 Main results
3.1 Existence and uniqueness of equilibrium point
Theorem 3.1
For the neural network (2), suppose the coefficient matrices satisfy (3). The system (2) has a unique equilibrium point if there exist a positive diagonal matrix \(P=diag(p_{i})>0\), a nonnegative diagonal matrix \(\Gamma =diag(\nu _{i})\ge 0\) and two positive constants \(\rho , \mu\) such that the following inequality holds:
where \(L=diag(l_{i})>0\), \(R=diag(r_{i})>0\), \(Q=diag(q_{i})>0\) with \(r_{i}=\sum \limits _{k=1}^{n}\widehat{b}_{ki}\sum \limits _{j=1}^{n}\widehat{b}_{kj}\), \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}\), \(q_{i}=\sum \limits _{k=1}^{n}\widehat{d}_{ki}\sum \limits _{j=1}^{n}\widehat{d}_{kj}\), \(\widehat{d}_{ij}=\max \{|\underline{d}_{ij}|,|\overline{d}_{ij}|\}\).
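As a quick sanity check, the diagonal matrices \(R\) and \(Q\) in Theorem 3.1 can be computed directly from the interval bounds. The sketch below is illustrative (NumPy; the function names are ours, and the interval bounds are hypothetical, not taken from the paper's examples): it builds \(\widehat{B}=(\widehat{b}_{ij})\) and \(R=diag(r_{i})\) with \(r_{i}=\sum _{k}\widehat{b}_{ki}\sum _{j}\widehat{b}_{kj}\); applying the same routine to \(\underline{D},\overline{D}\) yields \(Q\).

```python
import numpy as np

def interval_abs_max(lower, upper):
    """Elementwise majorant: hat_b[i, j] = max(|lower[i, j]|, |upper[i, j]|)."""
    return np.maximum(np.abs(lower), np.abs(upper))

def diag_R(lower, upper):
    """Build diag(r_i) with r_i = sum_k hat_b[k, i] * (sum_j hat_b[k, j])."""
    hat = interval_abs_max(lower, upper)
    row_sums = hat.sum(axis=1)   # sum_j hat_b[k, j], one entry per row k
    r = hat.T @ row_sums         # r_i = sum_k hat_b[k, i] * row_sums[k]
    return np.diag(r)

# Hypothetical interval bounds for B (for illustration only).
B_lower = np.array([[-1.0, 0.0], [0.0, -2.0]])
B_upper = np.array([[ 1.0, 0.0], [0.0,  2.0]])
R = diag_R(B_lower, B_upper)
print(R)  # hat_B = [[1, 0], [0, 2]], so r = [1, 4]
```

The same call with the bounds of \(D\) gives \(Q\), so the scalar condition of the theorem can then be checked numerically.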
Proof
We can get the map associated with system (2):
For any \(x\ne y, x,y\in R^{n}\), we have:
The condition \(x\ne y, x,y\in R^{n}\) comprises two cases:
case (1): \(x\ne y\) and \(f(x)-f(y)=0\);
case (2): \(x\ne y\) and \(f(x)-f(y)\ne 0\).
For case (1), one gets
Obviously, \(H(x)\ne H(y)\) because of the positive diagonal matrix C.
For case (2), multiplying both sides of (12) by \(2(f(x)-f(y))^{T}P\) gives:
Then,
In the light of Lemma 2.3,
According to Lemma 2.4, one has
Substituting (14)–(17) into (13),
that is,
Since \(x\ne y\), \(f(x)-f(y)\ne 0\) and \(\Omega >0\),
This means that \(H(x)\ne H(y)\).
Hence, we can get that \(H(x)\ne H(y)\) for case (1) and case (2) (\(\forall x\ne y\)).
Letting \(y=0\) in (19),
It yields
where \(\lambda _{m}(\Omega )\) is the minimum eigenvalue of \(\Omega\).
It follows that
furthermore,
owing to \(\Vert f(x)-f(0)\Vert _{\infty }\le \Vert (f(x)-f(0))\Vert _{2}\).
It is clear that the inequalities \(\Vert H(x)-H(0)\Vert _{1}\le \Vert H(x)\Vert _{1}+\Vert H(0)\Vert _{1}\) and \(\Vert (f(x)-f(0))\Vert _{2}\ge \Vert f(x)\Vert _{2}-\Vert f(0)\Vert _{2}\) hold. Therefore, (24) can be written as
Because \(\Vert f(0)\Vert _{2},\Vert H(0)\Vert _{1},p_{M}\) are finite, it follows that \(\Vert H(x)\Vert _{1}\rightarrow \infty\) as \(\Vert f(x)\Vert _{2}\rightarrow \infty\). According to the above analysis, \(H(x):R^{n}\rightarrow R^{n}\) is a homeomorphism on \(R^{n}\). Thus, we conclude that there is a unique \(x^{*}\) such that \(H(x^{*})=0\), which means that the equilibrium point of system (2) is unique. \(\square\)
3.2 Stability analysis of equilibrium point
We shift the equilibrium point of system (2) to the origin by the transformation \(z(t)=x(t)-x^{*}\), where \(x(t)=(x_{1}(t),x_{2}(t),\ldots ,x_{n}(t))^{T}, x^{*}=(x^{*}_{1},x^{*}_{2},\ldots ,x^{*}_{n})^{T}\), and \(x^{*}\) is the equilibrium point of the neural network (2). Then, system (2) can be written in the form:
where \(g(z(t))=(g_{1}(z_{1}(t)),g_{2}(z_{2}(t)),\ldots ,g_{n}(z_{n}(t)))^{T}, g_{i}(z_{i}(t))=f_{i}(z_{i}(t)+x_{i}^{*})-f_{i}(x_{i}^{*})\), satisfying:
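The conditions on the transformed activations are missing from the extracted text; from the definition \(g_{i}(z_{i}(t))=f_{i}(z_{i}(t)+x_{i}^{*})-f_{i}(x_{i}^{*})\) and the Lipschitz constants \(l_{i}\), they presumably read:

```latex
|g_i(z_i)| \le l_i\,|z_i|, \qquad g_i(0) = 0, \qquad i = 1, 2, \ldots, n.
```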
Next, we study stability conditions for the transformed system (26), because the stability of the original system (2) is equivalent to that of system (26).
Theorem 3.2
For the neural network (26), suppose the coefficient matrices satisfy (3). The system (26) is globally asymptotically robustly stable if there exist a positive diagonal matrix \(P=diag(p_{i})\), a nonnegative diagonal matrix \(\Gamma =diag(\nu _{i})\ge 0\) and two positive constants \(\rho , \mu\) such that the following inequality holds:
where \(L=diag(l_{i})>0\), \(p_{M}=\max \{p_{i}\}\), \(R=diag(r_{i})>0\), \(Q=diag(q_{i})>0\) with \(r_{i}=\sum \limits _{k=1}^{n}\widehat{b}_{ki}\sum \limits _{j=1}^{n}\widehat{b}_{kj}\), \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}\), \(q_{i}=\sum \limits _{k=1}^{n}\widehat{d}_{ki}\sum \limits _{j=1}^{n}\widehat{d}_{kj}\), \(\widehat{d}_{ij}=\max \{|\underline{d}_{ij}|,|\overline{d}_{ij}|\}\).
Proof
We construct the following Lyapunov functional:
where
where \(l_{m}=\min \{l_{i}\}\), \(l_{M}=\max \{l_{i}\}\), and \(\alpha , \beta , \beta _{1}, \beta _{2}, \mu _{1}, \mu _{2}\) are positive constants to be determined later.
Computing the derivative of V(z(t)) along (26), we have
Letting \(S=C-\frac{1}{\alpha }\frac{1}{\sigma }\frac{\beta _{1}}{\mu _{1}}AA^{T} -\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}I\), then,
Setting \(\beta =h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2}\) and using Lemma 2.4, one gets
Furthermore, denote \(\mu \triangleq \frac{1}{ p_{M}}\frac{\mu _{1}}{\beta _{1}}+\frac{\mu _{2}}{\beta _{2}}\); then, since \(\beta _{1},\beta _{2}, \mu _{1}, \mu _{2}\) are arbitrary positive constants, we can choose appropriate \(\alpha , \beta _{1},\beta _{2}, \mu _{1}, \mu _{2}\) such that \(\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}+\frac{\beta _{2}}{\mu _{2}}=\frac{1}{\mu }\). Hence, (33) can be rewritten as follows:
Next, we analyse negative definiteness of \(\dot{V}(z(t))\) in three cases: case (1) \(z(t)\ne 0, g(z(t))\ne 0\); case (2) \(z(t)\ne 0, g(z(t))=0\); case (3) \(z(t)=0, g(z(t))=0\).
For case (1), setting
this choice guarantees that \(\dot{V}(z(t))\) is negative definite.
For case (2),
since \(\alpha , \beta , \beta _{1}, \mu _{1}\) are arbitrary positive constants, we can always choose them appropriately to guarantee that \(\dot{V}(z(t))\) is negative definite.
For case (3),
obviously, \(\dot{V}(z(t))<0\) for all \(g(z(t-\tau ))\ne 0\), and \(\dot{V}(z(t))=0\) if and only if \(z(t)=g(z(t))=g(z(t-\tau ))=0\); otherwise, \(\dot{V}(z(t))\) is negative definite.
In summary, according to the above analysis of cases (1), (2) and (3), we conclude that system (26), and hence system (2), is globally asymptotically robustly stable. \(\square\)
If the coefficient matrices C, A, B, D in system (2) are constant matrices, the next corollary is obtained directly.
Corollary 3.1
The neural network (2) with \(C=\underline{C}=\overline{C}, A=\underline{A}=\overline{A}, B=\underline{B}=\overline{B}, D=\underline{D}=\overline{D}\) is globally asymptotically stable if there exist a positive diagonal matrix \(P=diag(p_{i})\) and positive constants \(\rho , \mu\) such that the following condition holds:
where \(L=diag(l_{i})>0\), \(p_{M}=\max \{p_{i}\}\), \(R=diag(r_{i})>0\), \(Q=diag(q_{i})>0\) with \(r_{i}=\sum \nolimits _{k=1}^{n}|b_{ki}|\sum \nolimits _{j=1}^{n}|b_{kj}|\), \(q_{i}=\sum \nolimits _{k=1}^{n}|d_{ki}|\sum \nolimits _{j=1}^{n}|d_{kj}|\) (in the constant-matrix case \(\widehat{b}_{ij}=|b_{ij}|\) and \(\widehat{d}_{ij}=|d_{ij}|\)).
Remark 1
As is well known, stability conditions in terms of LMIs are rather complex. In contrast, the stability conditions in this paper have a simple form and are easy to verify, as the following numerical examples show.
Remark 2
From Theorems 3.1 and 3.2, we can see that condition (10) has the advantage that it not only guarantees the existence and uniqueness of the equilibrium point, but also guarantees the stability of system (2).
Remark 3
In [37], \(2l_{M}\Vert A^{*}\Vert _{2}+A_{*}P^{-1}A_{*}+PL^{2}\) is used to bound the coefficient matrix A, whereas in this paper \([P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}]\) is used, where \(\Gamma\) is a free matrix that can be chosen. The latter is probably a more accurate bound on A, which may reduce the conservatism of the results. The following numerical examples verify this fact.
Remark 4
In [21,22,23,24,25,26,27], the stability of uncertain neural networks with discrete delay was investigated, and conditions in the form of matrix norm inequalities were obtained. As is well known, the presence of distributed delays increases the difficulty of the analysis. Therefore, uncertain neural networks with discrete and distributed delays are studied in this paper, and stability results in terms of matrix norm inequalities are proposed.
Remark 5
It should be noted that if more accurate matrix norm inequalities are available to estimate the bounds of the coefficient matrices, there is still room to further improve the proposed results and reduce their conservatism.
4 Examples
Example 1
Consider the neural network with the following parameters [30,31,32,33,34,35, 37]:
\(L=diag(0.2,0.2,0.2)\).
Setting \(\rho =2, \mu =\frac{1}{2}, P=I\) in Corollary 3.1, we have
The upper bounds for the distributed delay \(\sigma\) computed by Corollary 3.1 and by the methods in [30,31,32,33,34,35, 37] are listed in Table 1. We can see that the stability result in this paper is less conservative than those in the literature.
Example 2
Consider the uncertain system (2) with the following parameters:
\(L=diag(1,1), \tau =1, \sigma =1\).
Using Theorem 3.2,
Letting \(\rho =\frac{1}{4}, \mu =\frac{1}{4}, P=I\) and \(\Upsilon = \left( \begin{array}{cc} 0.3 &{} 0 \\ 0 &{} 0.2\end{array}\right)\) in Theorem 3.1, then,
Hence, according to Theorem 3.2, the neural network (2) is globally asymptotically robustly stable.
The dynamical system behavior in Example 2 with parameters
and input \(U=[3,-2]^{T}\) is shown in Fig. 1, and with input \(U=[5,3]^{T}\) in Fig. 2.
5 Conclusions
In this work, we have proposed improved stability results for neural networks with discrete and distributed delays. By using the homeomorphic mapping theorem and matrix theory, and by choosing appropriate Lyapunov–Krasovskii functional candidates, novel stability conditions are derived. The obtained results are less conservative than those in [30,31,32,33,34,35, 37]. Finally, two numerical examples are given to show the advantage and effectiveness of the obtained results. The proposed results may also be useful for the further study of NNs with mixed delays, for example, stochastic neural networks and Markovian jumping neural networks. Furthermore, the method in this paper is expected to apply to other problems, such as the stability analysis of neutral neural networks and \(H_{\infty }\) control design for neural networks.
References
Gu K (2001) A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. Int J Control 74(5):967–976
Kwon OM, Park JH (2009) Improved delay-dependent stability criterion for neural networks with time-varying delays. Phys Lett A 373(5):529–535
Wu YQ, Lu RQ, Shi P, Su HY, Wu ZG (2017) Adaptive output synchronization of heterogeneous network with an uncertain leader. Automatica 76:183–192
Dong SL, Wu ZG, Shi P, Su HY, Lu RQ (2017) Reliable control of fuzzy systems with quantization and switched actuator failures. IEEE Trans Syst Man Cybern Syst. https://doi.org/10.1109/TSMC.2016.2636222
Cheng J, Park JH, Liu Y, Liu Z, Tang L (2017) Finite-time \(H_{\infty }\) fuzzy control of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions. Fuzzy Sets Syst 314:99–115
Cheng J, Park JH, Karimic HR, Zhao X (2017) Static output feedback control of nonhomogeneous Markovian jump systems with asynchronous time delays. Inf Sci 399:219–238
Wu ZG, Shi P, Shu Z, Su HY, Lu RQ (2017) Passivity-based asynchronous control for Markov jump systems. IEEE Trans Autom Control 62(4):2020–2025. https://doi.org/10.1109/TAC.2016.2593742
Lin H, Su HY, Shu Z, Wu ZG, Xu Y (2016) Optimal estimation in UDP-like networked control systems with intermittent inputs: stability analysis and suboptimal filter design. IEEE Trans Autom Control 61(7):1794–1809
Dong SL, Su HY, Shi P, Lu RQ, Wu ZG (2016) Filtering for discrete-time switched fuzzy systems with quantization. IEEE Trans Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2016.2612699
Shen H, Zhu Y, Zhang L, Park JH (2017) Extended dissipative state estimation for Markov jump neural networks with unreliable links. IEEE Trans Neural Netw Learn Syst 28(2):346–358
Wu YQ, Meng XY, Xie LH, Lu RQ, Su HY, Wu ZG (2017) An input-based triggering approach to leader-following problems. Automatica 75:221–228
Shen H, Su L, Park JH (2017) Reliable mixed \(H_{\infty }\) passive control for T-S fuzzy delayed systems based on a semi-Markov jump model approach. Fuzzy Sets Syst 314:79–98
Zhou C, Zhang WL, Yang XS, Xu C, Feng JW (2017) Finite-time synchronization of complex-valued neural networks with mixed delays and uncertain perturbations. Neural Process Lett 46(1):271–291. https://doi.org/10.1007/s11063-017-9590-x
Gong SM, Wu SXX, So AMC, Huang XX (2017) Distributionally robust collaborative beamforming in D2D relay networks with interference constraints. IEEE Trans Wirel Commun 16(8):5048–5060. https://doi.org/10.1109/TWC.2017.2705062
Feng JQ, Ma Q, Qin ST (2017) Exponential stability of periodic solution for impulsive memristor-based Cohen-Grossberg neural networks with mixed delays. Int J Pattern Recognit Intell. https://doi.org/10.1142/S0218001417500227
Yang XS, Feng ZG, Feng JW, Cao JD (2017) Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information. Neural Netw 85:157–164. https://doi.org/10.1016/j.neunet.2016.10.006
Ozcan N, Arik S (2014) New global robust stability condition for uncertain neural networks with time delays. Neurocomputing 142(1):267–274
Yu WW, Yao LL (2007) Global robust stability of neural networks with time varying delays. J Comput Appl Math 206:679–687
Lakshmanan S, Park JH, Jung HY, Kwon OM, Rakkiyappan R (2013) A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays. Neurocomputing 111(7):81–89
Tian JK, Zhong SM, Wang Y (2012) Improved exponential stability criteria for neural networks with time-varying delays. Neurocomputing 97(15):164–173
Shao JL, Huang TZ, Wang XP (2012) Further analysis on global exponential stability of neural networks with time-varying delays. Commun Nonlinear Sci Numer Simul 17(3):1117–1124
Shao JL, Huang TZ, Wang XP (2011) Improved global robust exponential stability criteria for interval neural networks with time-varying delays. Expert Syst Appl 38(12):15587–15593
Shao JL, Huang TZ (2009) A new result on global exponential robust stability of neural networks with time-varying delays. J Control Theory Appl 7(3):315–320
Faydasicok O, Arik S (2013) A new robust stability criterion for dynamical neural networks with multiple time delays. Neurocomputing 99(1):290–297
Faydasicok O, Arik S (2011) Further analysis of global robust stability of neural networks with multiple time delays. J Frankl Inst 349(3):813–825
Ozcan N, Arik S (2006) An analysis of global robust stability of neural networks with discrete time delays. Phys Lett A 359(5):445–450
Arik S (2002) Global asymptotic stability of a larger class of neural networks with constant time delay. Phys Lett A 311(6):504–511
Duan L, Huang LH, Fang XW (2017) Finite-time synchronization for recurrent neural networks with discontinuous activations and time-varying delays. Chaos 27(1):013101. https://doi.org/10.1063/1.4966177
Duan L, Huang LH, Guo ZY (2017) Global robust dissipativity of interval recurrent neural networks with time-varying delay and discontinuous activations. Chaos 26(7):073101. https://doi.org/10.1063/1.4945798
Shi L, Zhu H, Zhong SM, Hou LY (2013) Globally exponential stability for neural networks with time-varying delays. Appl Math Comput 219(21):10487–10498
Tian JK, Zhong SM (2011) New delay-dependent exponential stability criteria for neural networks with discrete and distributed time-varying delays. Neurocomputing 74(17):3365–3375
Song Q, Wang Z (2008) Neural networks with discrete and distributed time-varying delay: a general stability analysis. Chaos Solitons Fractals 37(5):1538–1547
Lien C, Chung L (2007) Global asymptotic stability for cellular neural networks with discrete and distributed time-varying delays. Chaos Solitons Fractals 34(4):1213–1219
Li T, Luo Q, Sun CY, Zhang BY (2009) Exponential stability of recurrent neural networks with time-varying discrete and distributed delays. Nonlinear Anal Real World Appl 10(4):2581–2589
Zhu X, Wang Y (2009) Delay-dependent exponential stability for neural networks with discrete and distributed time-varying delays. Phys Lett A 373(44):4066–4072
Cao JD, Huang DS, Qu YZ (2005) Global robust stability of delayed recurrent neural networks. Chaos Solitons Fractals 23(1):221–229
Chen H, Zhong SM, Shao JL (2015) Exponential stability criterion for interval neural networks with discrete and distributed delays. Appl Math Comput 250:121–130
Acknowledgements
This work was supported by the Natural Science Foundation of the Anhui Higher Education Institutions of China under Grants KJ2016A625 and KJ2016A555, and by the Program for Excellent Young Talents in Universities of Anhui Province under Grant gxyq2017158.
Chen, H., Kang, W. & Zhong, S. A new global robust stability condition for uncertain neural networks with discrete and distributed delays. Int. J. Mach. Learn. & Cyber. 10, 1025–1035 (2019). https://doi.org/10.1007/s13042-017-0779-0