Abstract
In this paper, we investigate the problem of dissipativity and passivity analysis for Markovian jump neural networks with two additive time-varying delays. By including suitable triple integral terms in the Lyapunov–Krasovskii functional, several sufficient conditions are derived for verifying the dissipativity of such neural networks. The relationship between each time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the time delay. As a result, improved delay-dependent dissipativity criteria for neural networks with two additive time-varying delay components are proposed. The criteria, which depend on the upper bounds of the leakage time-varying delay and its derivative, are given in terms of linear matrix inequalities, which can be solved efficiently via standard numerical software. Finally, three numerical examples are given to show the effectiveness of the proposed results.
1 Introduction
Neural networks (NNs) have been the focus of extensive research activity over the last few decades; such networks have found broad applications in areas like associative memory, pattern classification, reconstruction of moving images, signal processing, solving optimization problems, fault diagnosis and special problems of A/D converter design [1–4]. Before considering these applications, an essential preliminary task is to check whether the equilibrium points of the designed network are stable or unstable, because the applications of NNs mainly depend on the dynamical behavior of the equilibrium point. A primary concern is that time delays occur in many engineering systems, and their presence may cause undesirable dynamic behaviors such as oscillation and instability. There exist two types of stability criteria for delayed neural networks, namely delay-independent and delay-dependent criteria [5, 6]. The former does not utilize any information on the length of the delay, while the latter does. Delay-dependent conditions tend to be less conservative than delay-independent ones, especially for a neural network with a small time delay. In addition, a typical time delay called leakage delay may exist in the negative feedback terms of the system; these terms are variously known as forgetting or leakage terms. Such time delays in the leakage terms are difficult to handle and have rarely been considered in the literature. In [7], the authors discussed the existence and global exponential stability of almost periodic solutions for memristor-based neural networks with leakage and distributed time-varying delays. Recently, the authors of [8] studied the stochastic stability of bidirectional associative memory neural networks with leakage delays and impulse control.
Thus, much attention has been drawn to the study of neural networks with time delay (see, e.g., [9–12] and the references therein).
Generally, in practical situations, signals transmitted from one point to another may pass through several network segments, which can induce successive delays with different properties due to variable network transmission conditions. For instance, in a state-feedback networked control system, the physical plant, controller, sensor, and actuator are located at different places; hence, when signals are transmitted from one device to another, two additive time-varying delays occur: one from sensor to controller and the other from controller to actuator. Because of the network transmission conditions, the two delays are generally time-varying with different properties. It is therefore significant to consider the stability of NNs with two additive time-varying delay components. A great number of research results on additive time-delay systems exist in the recent literature (see [13–17] and the references therein).
It is known that systems with Markovian jump parameters are a set of systems with transitions among the models governed by a Markov chain taking values in a finite set; they behave like stochastic hybrid systems with two components in the state. The first refers to the mode, which is described by a continuous-time finite-state Markovian process, and the second refers to the state, which is represented by a system of differential equations. Due to the extensive applications of such models in manufacturing systems, power systems, communication systems and network-based control systems, many works on Markovian jump systems (MJSs) have recently been reported (see references [18–22]). In [23], the author discussed the exponential synchronization of Markovian jumping neural networks with partly unknown transition probabilities via stochastic sampled-data control. In [24], exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control are investigated. Above all, studies of the dissipativity criteria and the performance of Markovian jump systems with delays are of theoretical and practical importance.
It is well known that dissipativity theory has received considerable attention in mathematics and control theory because it generalizes bounded realness and positivity, an important property of physical systems closely related to the phenomenon of loss or dissipation of energy. The key idea of dissipativity is to generalize Lyapunov stability, and it has found widespread application in many areas such as electrical networks, nonlinear control systems, stability theory, system norm estimation, chaos and synchronization theory, and robust control. In recent years, the authors of [25] investigated the global dissipativity of delayed neural networks with impulses; furthermore, three types of impulses were treated in a uniform way by using ideas from dissipativity theory. Moreover, the global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays were discussed using M-matrix theory in [26].
Passivity analysis is an important concept of system theory; it plays an important role in both electrical networks and nonlinear control systems and provides a useful tool for analyzing the stability of systems. The importance of studying passivity theory lies in the fact that the passive properties of a system keep it internally stable (see references [27–29]). In [30], the passivity and passification of memristor-based recurrent neural networks with additive time-varying delays were recently studied. To the best of our knowledge, no related results have so far been established for the dissipativity and passivity of Markovian jump neural networks with additive time-varying delays. To fill this gap, we deal with the problem of dissipativity and passivity analysis for continuous-time neural networks with additive time-varying delays.
Moreover, in the existing literature [26, 31, 32], the authors discussed the dissipativity and passivity problem using only double integral terms such as \(\int _{-\tau }^{0}\int _{t+\beta }^tx^T(s)Sx(s)\mathrm {d}s\mathrm {d}\beta \). In our paper, by contrast, we construct double integral terms along with triple integral terms such as \(\frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{\theta }^{-d_{21}}\int _{t+\lambda }^{t} \dot{x}^T(s)S_1\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) in the Lyapunov–Krasovskii functional (LKF) to obtain less conservative results than the existing ones. In addition, several effective approaches have been proposed in order to derive less conservative dissipativity conditions for neural networks; to mention a few, one can refer to the free-weighting matrix approach, the delay decomposition approach and Jensen's inequality. Recently, a new approach called the second-order reciprocally convex combination was proposed in [33] to study the stability of systems with interval time-varying delays. Therefore, based on the observations above, it is significant to establish new integral inequality techniques, such as the second-order reciprocally convex combination technique, for handling the triple integral terms in order to obtain less conservative dissipativity criteria, which motivates the present study.
Based on the above discussion, in this paper the dissipativity and passivity of Markovian jump neural networks with two additive time-varying delays are analyzed. Different from the previous literature, the mixed time delays considered here comprise discrete, distributed and leakage time-varying delays. Based on the Lyapunov functional method, a novel dissipativity and passivity criterion is established in terms of linear matrix inequalities (LMIs), which can be solved efficiently by using optimization algorithms. Finally, three numerical examples are given to show that our results are less conservative than the existing ones.
The remainder of this paper is organized as follows. In Sect. 2, the problem of dissipativity and passivity of Markovian jumping neural networks with two additive time-varying delays is formulated and some preliminaries are presented. Section 3 presents the main results on dissipativity and passivity analysis. In Sect. 4, three illustrative examples are provided, and conclusions are given in Sect. 5.
Notations
Throughout this paper, the superscript T denotes the transposition and the notation \(X \ge Y\) (respectively, \(X>Y\)), where X and Y are symmetric matrices, means that \(X-Y\) is positive semi-definite (respectively, positive definite). \( {\mathbb {R}}^{n}\) and \( {\mathbb {R}}^{n \times n}\) denote n-dimensional Euclidean space and the set of all \(n \times n\) real matrices, respectively. diag\(\{\cdots \}\) stands for a block-diagonal matrix. The matrix \({0_{n,m}}\) denotes the null matrix of order \(n \times m\). The notation \(*\) always denotes the symmetric block in a symmetric matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions. Let \((\Omega , \mathfrak {F}, \mathcal {P})\) be the probability space, where \(\Omega \) is the sample space, \(\mathfrak {F}\) is the \(\sigma \)-algebra of events, and \(\mathcal {P}\) is the probability measure defined on \(\mathfrak {F}\); \(\mathbb {E}[\cdot ]\) stands for the corresponding expectation operator with respect to the given probability measure \(\mathcal {P}\).
2 Problem Description and Preliminaries
Let \(\{r(t), t \ge 0\}\) be a right-continuous Markov chain on the probability space \((\Omega ,\mathfrak {F},\mathcal {P})\) taking values in a finite state space \(\mathbb {S}=\{1,2,\ldots ,N\}\) with generator \(Q=(q_{ij})_{N\times N}\) given by
where \(\Delta t>0\) and \(\lim _{\Delta t \rightarrow 0}\frac{o(\Delta t)}{\Delta t}=0\), \(q_{ij}\ge 0\) is the transition rate from i to j if \(i\ne j\) while \(q_{ii}= -\sum _{j\ne i}q_{ij}\).
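The hold-and-jump behavior implied by the generator \(Q\) can be sketched numerically. The following is a minimal illustration, not part of the paper's method; the two-mode rate values in `Q` are hypothetical, chosen only so that each row sums to zero with nonnegative off-diagonal rates, matching \(q_{ii}=-\sum _{j\ne i}q_{ij}\).

```python
import numpy as np

# Hypothetical 2-mode generator Q = (q_ij): off-diagonal entries are
# nonnegative transition rates, and each row sums to zero.
Q = np.array([[-3.0, 3.0],
              [4.0, -4.0]])

def simulate_ctmc(Q, r0, T, rng):
    """Sample a right-continuous path of {r(t)} on [0, T] by the standard
    hold-and-jump construction: stay in mode i for an Exp(-q_ii)-distributed
    time, then jump to mode j with probability q_ij / (-q_ii)."""
    t, r, path = 0.0, r0, [(0.0, r0)]
    while True:
        rate = -Q[r, r]
        t += rng.exponential(1.0 / rate)   # holding time in mode r
        if t >= T:
            break
        probs = Q[r].clip(min=0.0)         # jump distribution over successors
        probs /= probs.sum()
        r = rng.choice(len(Q), p=probs)
        path.append((t, r))
    return path

rng = np.random.default_rng(0)
path = simulate_ctmc(Q, r0=0, T=10.0, rng=rng)
```

Each entry of `path` is a pair (jump time, new mode); the path is constant between jumps, mirroring the finite-state process \(r(t)\) that drives system (1).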
Consider the following neural networks with Markovian jumping parameters, leakage time-varying delay and two additive time-varying delay components:
The system (1) can be equivalently rewritten as
where \(x(t-\sigma (t))=[x_{1}(t-\sigma (t))x_{2}(t-\sigma (t))\ldots x_{n}(t-\sigma (t))]^{T}\in \mathbb {R}^{n}\) is the state vector associated with the n neurons and leakage time-varying delay \(\sigma (t)\), u(t) is the input, y(t) is the output. The diagonal matrix \(A(r(t)) = \text{ diag }\{a_{1}(r(t)),a_{2}(r(t)),\ldots ,a_{n}(r(t))\}\) has positive entries \(a_{i}(r(t))>0\) \((i=1,2,\ldots ,n)\). The matrices \(W_0(r(t))\), \(W_1(r(t))\) and \(W_2(r(t))\) are the interconnection matrices representing the weight coefficients of the neurons. \(g(x(t))=[g_1(x_1(t)) g_2(x_2(t)) \ldots g_n(x_n(t))]^T \in \mathbb {R}^n\) is the neuron activation function. For convenience, in the neural networks (1) each possible value of r(t) is denoted by \(i, i\in \mathbb {S}\) in the sequel. Then, we have \(A(r(t))=A_i, W_0(r(t))=W_{0i}, W_1(r(t))=W_{1i}, W_2(r(t))=W_{2i}.\)
In the neural network (1), the bounded functions \(\sigma (t)\), \(\rho (t)\), \(d_1(t)\) and \(d_2(t)\) represent respectively the leakage, distributed and two additive time-varying delays that are assumed to satisfy the following conditions:
where \(d_{12}\ge d_{11}\), \(d_{22}\ge d_{21}\), \(\varrho _1\), \(\varrho _2\), \(\sigma \), \(\sigma _{\mu }\) and \(\rho \) are known constants with \(d_{11}\) and \(d_{21}\) not equal to zero. Here, we denote \(d_1=d_{11}+d_{21}\), \(d_2=d_{12}+d_{22}\), \(\varrho =\varrho _1+\varrho _2\), \(h_1=d_{12}-d_{11}\), \(h_2=d_{22}-d_{21}\). We consider system (1) with the initial condition \(x(t)=\phi (t), \ t \in [-\bar{d}, 0]\), \(\bar{d}=\max [\sigma , d_1, d_2, \rho ]\), where \(\phi (t)\) is the given initial function.
Remark 2.1
In this paper, the values of \(\varrho _1\) and \(\varrho _2\) are assumed to be less than 1. When \(\varrho _1\) and \(\varrho _2\) are greater than or equal to 1, the fast time-varying delay case will cause problems with causality, minimality and inconsistency, as indicated in [10, 12]. So this restriction is a reasonable and necessary assumption for proving the main results.
Throughout this paper, we assume that the activation function satisfies the following assumption.
Assumption 2.1
The activation function g(u) is bounded and satisfies
for any \(\zeta _1, \zeta _2 \in \mathbb {R}, \zeta _1 \ne \zeta _2,\) where \(L_i>0\) for \(i=1,2,\cdots ,n.\)
Further, \(g_i(0)=0,\) \(i=1,2,\cdots ,n.\)
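The activation functions used later in Sect. 4 can be spot-checked against this assumption. The displayed inequality of Assumption 2.1 is not reproduced above, so the sketch below assumes the standard sector form \(0 \le (g_i(\zeta _1)-g_i(\zeta _2))/(\zeta _1-\zeta _2) \le L_i\); the function \(g_1(x)=0.4\tanh (x)\) and the bound \(L_1=0.4\) are taken from Example 4.1.

```python
import numpy as np

# g1 and L1 from Example 4.1; the sector form of Assumption 2.1 is assumed.
L1 = 0.4
g1 = lambda x: 0.4 * np.tanh(x)

# Sample pairs zeta1 != zeta2 and check the difference quotient lies in [0, L1],
# together with g1(0) = 0.
rng = np.random.default_rng(1)
z1 = rng.uniform(-5.0, 5.0, 1000)
z2 = rng.uniform(-5.0, 5.0, 1000)
mask = z1 != z2
slopes = (g1(z1[mask]) - g1(z2[mask])) / (z1[mask] - z2[mask])
ok = bool(np.all((slopes >= 0.0) & (slopes <= L1 + 1e-12)) and g1(0.0) == 0.0)
```

Since \(\tanh \) is increasing with derivative at most 1, every difference quotient of \(0.4\tanh (x)\) indeed falls in \((0, 0.4]\).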
We now introduce the following dissipativity and passivity definitions.
Definition 2.1
[31] System (1) is strictly \((\mathcal {Q}, \mathcal {R}, \mathcal {S})\)-dissipative for any \(t_p\ge 0,\) and some scalar \(\gamma > 0,\) if under zero initial state, the following condition is satisfied
where the quadratic energy supply function \(\mathcal {G}\) associated with system (1) is defined by
where \(\mathcal {Q}, \mathcal {R}\) and \(\mathcal {S}\) are real matrices of appropriate dimensions, with \(\mathcal {Q}\) and \(\mathcal {R}\) being symmetric matrices. Let \(L_2[0, \infty )\) be the space of square integrable functions on \([0, \infty )\). The notations \(\left\langle y, \mathcal {S}u\right\rangle _{t_p}\), \(\left\langle y, \mathcal {Q}y\right\rangle _{t_p}\) and \(\left\langle u, \mathcal {R}u\right\rangle _{t_p}\) represent \(\displaystyle \int _{0}^{t_p}y^T(t)\mathcal {S}u(t)\mathrm {d}t,\) \(\displaystyle \int _{0}^{t_p}y^T(t)\mathcal {Q}y(t)\mathrm {d}t\) and \(\displaystyle \int _{0}^{t_p}u^T(t)\mathcal {R}u(t)\mathrm {d}t,\) respectively.
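The supply function can be evaluated numerically on a time grid. The paper's display defining \(\mathcal {G}\) is not reproduced above, so the sketch below assumes the standard quadratic form \(\mathcal {G}(u,y,t_p)=\left\langle y, \mathcal {Q}y\right\rangle _{t_p}+2\left\langle y, \mathcal {S}u\right\rangle _{t_p}+\left\langle u, \mathcal {R}u\right\rangle _{t_p}\); the signals and weights are toy values for illustration only.

```python
import numpy as np

def supply(y, u, t, Qm, Sm, Rm):
    """Approximate G(u, y, t_p) on a time grid by the trapezoidal rule,
    assuming the standard form G = <y,Qy> + 2<y,Su> + <u,Ru>."""
    q = (np.einsum('ti,ij,tj->t', y, Qm, y)
         + 2.0 * np.einsum('ti,ij,tj->t', y, Sm, u)
         + np.einsum('ti,ij,tj->t', u, Rm, u))
    return np.sum(0.5 * (q[:-1] + q[1:]) * np.diff(t))

# Toy check with the passivity choice of Remark 3.2 (Q = 0, S = I, R = 2*gamma*I)
# and constant scalar signals y = u = 1 on [0, 1]: G = 2 + 2*gamma exactly.
gamma = 0.5
t = np.linspace(0.0, 1.0, 1001)
y = np.ones((t.size, 1))
u = np.ones((t.size, 1))
G = supply(y, u, t, np.zeros((1, 1)), np.eye(1), 2.0 * gamma * np.eye(1))
```

With these constant signals the integrand is the constant \(2 + 2\gamma \), so the trapezoidal approximation is exact.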
Definition 2.2
[32] The system (1) is said to be passive, if for all solutions of (1) with \(x(0)=0,\) there exists a scalar \(\gamma >0\) such that the inequality
is satisfied under the zero initial condition.
Definition 2.3
[33] Let \(\Omega _{1}, \Omega _{2}, \ldots , \Omega _{n}:\mathbb {R}^{m}\mapsto \mathbb {R}\) be a given finite number of functions that have positive values in an open subset \(\mathbf{D}\) of \(\mathbb {R}^{m}\). Then, a second-order reciprocally convex combination of these functions over \(\mathbf{D}\) is a function of the form
where the real numbers \(\alpha _i\) satisfy \(\alpha _i>0\) and \(\sum _{i}\alpha _i=1\).
To end this section, we introduce the following lemmas, which will play an important role in the proof of the main results.
Lemma 2.1
[34] (lower bound lemma). Let \(f_{1},f_{2},\ldots ,f_{N}:\mathbb {R}^m \rightarrow \mathbb {R}\) have positive values in an open subset \(\mathbf{D}\) of \(\mathbb {R}^m\). Then the reciprocally convex combination of \(f_{i}\) over \(\mathbf{D}\) satisfies
subject to
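A scalar numeric sanity check of this lower bound lemma can be carried out as follows; the specific values of \(r_1\), \(r_2\) and the coupling \(s\) are hypothetical, chosen so that the block matrix constraint of the lemma is satisfied.

```python
import numpy as np

# Scalar instance of the lower bound lemma [34]: for alpha in (0, 1) and
# r1, r2 > 0 with coupling s such that [[r1, s], [s, r2]] >= 0,
#   (1/alpha)*r1*x1^2 + (1/(1-alpha))*r2*x2^2 >= [x1 x2] [[r1, s],[s, r2]] [x1; x2].
r1, r2, s = 2.0, 3.0, 1.5            # s^2 = 2.25 <= r1*r2 = 6, so the block is PSD
M = np.array([[r1, s], [s, r2]])
assert np.all(np.linalg.eigvalsh(M) >= 0.0)

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    a = rng.uniform(0.01, 0.99)
    x1, x2 = rng.normal(size=2)
    lhs = r1 * x1**2 / a + r2 * x2**2 / (1.0 - a)
    rhs = np.array([x1, x2]) @ M @ np.array([x1, x2])
    ok = ok and bool(lhs >= rhs - 1e-9)
```

The gap \( \mathrm{lhs}-\mathrm{rhs} = \frac{1-a}{a}r_1x_1^2+\frac{a}{1-a}r_2x_2^2-2sx_1x_2 \) is nonnegative by the AM–GM inequality whenever \(s^2 \le r_1 r_2\), which is exactly what the PSD block constraint enforces.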
Lemma 2.2
[35] For any vectors \(x, y \in \mathbb {R}^{n}\) and matrix \(Q>0\), we have the following inequality
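The displayed inequality of Lemma 2.2 is not reproduced above; the numeric spot-check below assumes the standard form from [35], \(2x^Ty \le x^TQx + y^TQ^{-1}y\) for any \(x, y\) and \(Q>0\), which follows from \((Q^{1/2}x-Q^{-1/2}y)^T(Q^{1/2}x-Q^{-1/2}y)\ge 0\).

```python
import numpy as np

# Random positive definite Q (PD by construction) and random x, y.
rng = np.random.default_rng(3)
n = 4
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)
Qinv = np.linalg.inv(Q)

# Check 2 x^T y <= x^T Q x + y^T Q^{-1} y over many random samples.
ok = True
for _ in range(500):
    x, y = rng.normal(size=n), rng.normal(size=n)
    ok = ok and bool(2.0 * (x @ y) <= x @ Q @ x + y @ Qinv @ y + 1e-9)
```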
3 Main Results
In this section, we consider the dissipativity criteria for Markovian jump neural networks with additive time-varying delays. One of the main issues in dissipativity criteria is how to further reduce the possible conservatism induced by the introduction of the Lyapunov functional when dealing with time delays. By employing the idea of second order reciprocal convex combination technique, we solve the triple integral terms involved in the LKF candidate to find the dissipativity and passivity criteria for Markovian jump neural networks.
Theorem 3.1
The neural network (1) is dissipative in the sense of Definition 2.1 if there exist positive definite matrices \(P_i(i\in \mathbb {S})\), \(R_s (s=1,2,\cdots ,6)\), \(Q_1, Q_2\), Z, \(J_n(n=1,2)\), \(S_q(q=1,2)\), and M, any matrices \(K_f(f=1,2,\cdots ,6)\), \(Y_1, Y_2\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMIs hold for \(l=1,2\):
where
and the remaining terms of \(\phi _{p,q,i}\) are zero.
Proof
We construct the Lyapunov–Krasovskii functional as follows:
where
Setting \(\varsigma =(d_2(t)-d_{21})/h_2, \omega =(d_{22}-d_2(t))/h_2\) and applying Jensen's inequality to the weak infinitesimal generator \(\mathbb {L}\) of the random process \(\{x(t), r(t), t\ge 0\}\), we have
where
From Lemma 2.1, we can deduce that if there exist matrices \(K_1\) and \(K_2\) such that (10) holds, then the integral term in (18) and the second and sixth terms in (19) can be rearranged as
and
Note that if \(d_{2}(t)=d_{21}\) or \(d_{2}(t)=d_{22}\), we have
respectively. So inequalities (21) and (22) still hold.
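The Jensen-type integral inequality invoked in the derivation above can be checked numerically on a discretized signal. The sketch below is an illustration only: it verifies \(\big (\int _0^h x(s)\mathrm {d}s\big )^TR\big (\int _0^h x(s)\mathrm {d}s\big ) \le h\int _0^h x^T(s)Rx(s)\mathrm {d}s\) for a hypothetical signal and a random positive definite \(R\), using trapezoidal quadrature.

```python
import numpy as np

# Arbitrary test signal x(s) on [0, h] and a positive definite weight R.
rng = np.random.default_rng(4)
h, m, n = 1.7, 2000, 3
ds = h / (m - 1)
x = np.cumsum(rng.normal(size=(m, n)), axis=0) * 0.01
B = rng.normal(size=(n, n))
R = B @ B.T + np.eye(n)                           # PD by construction

ix = np.sum(0.5 * (x[:-1] + x[1:]), axis=0) * ds  # trapezoidal  ∫ x(s) ds
q = np.einsum('ti,ij,tj->t', x, R, x)             # pointwise x^T R x
lhs = ix @ R @ ix
rhs = h * np.sum(0.5 * (q[:-1] + q[1:])) * ds     # h * trapezoidal ∫ x^T R x ds
```

Because the trapezoidal rule is a weighted sum with positive weights summing to \(h\), the discrete version of the inequality holds exactly (it is Jensen's inequality for the quadratic form with weights \(w_i/h\)), not merely up to discretization error.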
Similarly, by applying Lemma 2.1 to (19), we can have the following inequalities:
and
It should be noted that when \(d_{2}(t)=d_{21}\) or \(d_{2}(t)=d_{22}\), we have
respectively. So the relations (23) and (24) still hold.
Furthermore, there exists a positive diagonal matrix U such that the following inequalities hold based on Assumption 2.1:
From condition (7) and the inequalities (14)–(25), it can be seen that
where
and
Thus (26) can be treated non-conservatively by the two corresponding boundary LMIs in (9): the first case is \(d_2(t)=d_{21}\) and the second is \(d_2(t)=d_{22}.\)
Supposing \(\Omega ^l<0,\) it is easy to obtain
Integrating (27) from 0 to \(t_p\), under zero initial conditions we obtain
for all \(t_p\ge 0\). Therefore, the Markovian jump neural network (1) is strictly \((\mathcal {Q}, \mathcal {R}, \mathcal {S})\)-dissipative in the sense of Definition 2.1. This completes the proof. \(\square \)
Remark 3.2
Substituting \(\mathcal {Q}=0, \mathcal {S}=I\) and \(\mathcal {R}=2\gamma I\) in Theorem 3.1 yields passivity conditions for the system (1), which are stated in the following corollary.
Corollary 3.3
The neural network (1) is passive in the sense of Definition 2.2 if there exist positive definite matrices \(P_i(i\in \mathbb {S})\), \(R_s (s=1,2,\cdots ,6)\), \(Q_1, Q_2\), Z, \(J_n(n=1,2)\), \(S_q(q=1,2)\), any matrices \(K_f(f=1,2,\cdots ,6)\), \(Y_1, Y_2\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMIs hold for \(l=1,2\):
where
except
The remaining coefficients are the same as in Theorem 3.1.
Proof
The proof is the same as that of Theorem 3.1 and hence is omitted. \(\square \)
Remark 3.4
In the absence of leakage and distributed delays, the system (1) without Markovian jump parameters reduces to the following neural network:
where \(d_1(t)\) and \(d_2(t)\) are assumed to satisfy \(0\le d_1(t) \le d_{12}\) with \( \dot{d}_1(t)\le \varrho _1<1\) and \(0\le d_2(t) \le d_{22}\) with \(\dot{d}_2(t)\le \varrho _2<1\). By using Theorem 3.1, one can obtain the passivity criterion for the above NNs (32) as in the following corollary.
Corollary 3.5
The neural network (32) is passive in the sense of Definition 2.2 if there exist positive definite matrices P, \(R_3, R_4, R_5, R_6\), \(Q_2\), \(J_n(n=1,2)\), \(S_q(q=1,2)\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMI holds:
where
Proof
By putting \(R_1=R_2=Q_1=Z=M=0\) in the LKF (12) and using arguments similar to those of Theorem 3.1, we can obtain the passivity results for the system (32).
Remark 3.6
When \(d_1(t)=0, d_2(t)=d(t)\) and \(d_{22}=d\), the system (32) reduces to the following form with a single delay
where d(t) satisfies \(0\le d(t) \le d, \dot{d}(t)\le \varrho <1\). The passivity criterion for the delayed neural network (34) can be derived as follows:
Corollary 3.7
The neural network (34) is passive in the sense of Definition 2.2 if there exist positive definite matrices P, \( R_5\), \(J_n(n=1,2)\), \(S_1\), diagonal matrices U and V, and a scalar \(\gamma >0\), such that the following LMI holds:
Proof
Consider the LKF (12) with \(R_1=R_2=R_3=R_4=R_6=Q_1=Q_2=S_2=0\). Now, based on Assumption 2.1, we can choose a diagonal matrix V such that the following inequality holds:
By adding (36) with (26) and proceeding as in the proof of Theorem 3.1, we can obtain passivity results for (34).
Remark 3.8
The authors of [33] proposed a second-order reciprocally convex approach to study the stability of systems with interval time-varying delays. Following this, in our paper, by utilizing Jensen's inequality, the double integral terms are partitioned into single integral terms so as to form a second-order reciprocally convex combination of positive functions involving the inverses of squared convex parameters.
Remark 3.9
Different from [31, 32], in this paper, two triple integral terms \(\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{\theta }^{-d_{21}} \int _{t+\lambda }^{t}\dot{x}^T(s) S_1\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) and \(\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{-d_{22}}^{\theta }\int _{t+\lambda }^{t}\dot{x}^T(s) S_2\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) are included in \(V_6(t,x(t),i)\), which play an important role in reducing the conservatism of our results. These two triple integral terms produce the double integral terms \(\displaystyle \frac{-h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) and \(\displaystyle \frac{-h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{t-d_{22}}^{t+\theta }\dot{x}^T(s)S_2 \dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) in \(\mathbb {L}V_6(t,x(t),i)\), respectively. These two double integral terms are further split into terms with three integral parts, namely \(\displaystyle \frac{-h_2^2}{2}(d_{22}-d_2(t))\int _{t-d_2(t)}^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s)\mathrm {d}s\) \(-\displaystyle \frac{h_2^2}{2}\int _{-d_2(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) \(-\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_2(t)}\int _{t+\theta }^{t-d_2(t)}\dot{x}^T(s) S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) and \(\displaystyle -\frac{h_{2}^{2}}{2}(d_{2}(t)-d_{21})\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s -\frac{h_{2}^{2}}{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s \mathrm {d}{\theta } -\frac{h_{2}^{2}}{2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)S_{2} \dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\), respectively.
Further, applying Jensen's inequality to these terms leads to less conservative dissipativity results, as can be seen from the numerical examples in Sect. 4.
Remark 3.10
To reduce the conservatism, the lower bound lemma is used to deal with the derivative of \(V_5(t,x(t),i)\); that is, by using the relations \(\frac{\omega }{\varsigma }=-1+\frac{1}{\varsigma }\) and \(\frac{\varsigma }{\omega }=-1+\frac{1}{\omega }\) in the following inequality, we can easily obtain inequality (22) via Lemma 2.1.
Remark 3.11
To obtain the inequalities (23) and (24), in this paper we use the relation \(\frac{1}{\varsigma ^2}=\frac{(\varsigma +\omega )^2}{\varsigma ^2}\) and \(\frac{1}{\omega ^2}=\frac{(\varsigma +\omega )^2}{\omega ^2}\) in the following inequalities:
The above approach is very effective in reducing the conservatism of the dissipativity criterion, which will be shown through numerical examples in the subsequent section.
Remark 3.12
The system with two additive time-varying delays has a strong application background in remote control and networked control. In this paper, \(d_1(t)\) is the time delay induced from sensor to controller and \(d_2(t)\) is the delay induced from controller to actuator. The stability analysis of such systems was earlier carried out by adding up all the successive delays into a single delay to develop a sufficient stability condition. In addition, in this paper, we handle both the lower and upper bounds of the additive delays (i.e. \( 0 \le d_{11}\le d_1(t) \le d_{12}, \ \ 0\le d_{21}\le d_2(t) \le d_{22}, |\dot{d}_1(t)|\le \varrho _1 <1, |\dot{d}_2(t)|\le \varrho _2 <1\)) when obtaining the dissipativity and passivity results for the system (1).
4 Numerical Examples
In this section, we give three numerical examples to show the validity of our developed theoretical results.
Example 4.1
Consider system (1) with the parameters
For this model, we take the nonlinear activation functions as \(g_1(x)=0.4\tanh (x)\) and \(g_2(x)=0.8\tanh (x)\). It is easy to see that these activation functions satisfy Assumption 2.1 with \(L=\text{ diag }\{0.4, 0.8\}\). By using the MATLAB LMI toolbox and taking \(d_{11}=0.3, d_{12}=0.8, d_{21}=0.4, d_{22}=2.9185, \varrho _1=0.2, \varrho _2=0.3, \sigma =0.03, \sigma _{\mu }=0.01\) and \(\rho =0.2\), we can solve the LMIs (9)–(11) in Theorem 3.1 and obtain the corresponding feasible solutions as follows (for space considerations, only some variables are listed here):
When we fix \(\varrho =0.5\) (i.e., \(\varrho _1=0.2, \varrho _2=0.3\)), \(\rho =0.2, \sigma _{\mu }=0.01, d_{11}=0.3, d_{21}=0.4, d_{12}=0.7,\) we can obtain the maximum allowable upper bound \(d_{22}\) for various \(\sigma \), as listed in Table 1. Moreover, by choosing \(\varrho =0.5\) (i.e., \(\varrho _1=0.2, \varrho _2=0.3\)), \(\sigma =0.03, \sigma _\mu =0.01, \rho =0.2, d_{11}=0.3, d_{21}=0.4, \) the maximum allowable upper bounds \(d_{22}\) are computed for various \(d_{12}\) and listed in Table 2.
Furthermore, Fig. 1 shows the state trajectories of variable x(t) with the initial condition \([-0.2,0.2]^T\) for the additive delays \(d_1(t)=0.5+0.2\sin (t)\) and \(d_2(t)=2.6185+0.3\cos (t)\) in the case of \(\sigma (t)=0.02+0.01\sin (t)\). The state trajectories of variable x(t) for \(\sigma (t)=0.02+0.01\sin (t), d_1(t)=0.2+0.1\sin (t)\) and \(d_2(t)=2.6762+0.3\cos (t)\) are depicted in Fig. 2. Figure 3 predicts the unstable behavior for the state trajectories of variable x(t) with the delay \(\sigma (t)=0.01+0.5\sin (t)\).
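The feasibility step above relies on the MATLAB LMI toolbox; the paper's full LMIs (9)–(11) are not reproduced here. As a minimal numpy-only sketch of the underlying idea, feasibility of a matrix inequality in a decision variable \(P>0\), one can construct \(P\) satisfying the Lyapunov inequality \(A^TP+PA<0\) for a hypothetical Hurwitz matrix \(A\) by solving the Lyapunov equation \(A^TP+PA=-I\) via vectorization, then verify the inequality a posteriori.

```python
import numpy as np

# Hypothetical Hurwitz matrix A (not one of the paper's system matrices).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]
I = np.eye(n)

# Row-major vectorization identities: vec(A^T P) = kron(A^T, I) vec(P),
#                                     vec(P A)   = kron(I, A^T) vec(P).
M = np.kron(A.T, I) + np.kron(I, A.T)
P = np.linalg.solve(M, -I.flatten()).reshape(n, n)

# Verify the feasibility certificate: P symmetric, P > 0, A^T P + P A < 0.
sym_ok = bool(np.allclose(P, P.T))
pd_ok = bool(np.all(np.linalg.eigvalsh(0.5 * (P + P.T)) > 0.0))
lmi_ok = bool(np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0.0))
```

For a general LMI such as (9)–(11), where no closed-form solution exists, one would instead hand the constraints to a semidefinite programming solver; the MATLAB LMI toolbox used in this example automates exactly that feasibility search.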
Example 4.2
Consider system (32) with two additive time-varying delay components as in [13–17] with the following matrices:
In this example, the activation functions are assumed to be \(g_1(x)=0.4\tanh (x)\) and \(g_2(x)=0.8\tanh (x)\). It is easy to check that they satisfy Assumption 2.1 with \(L=\text{ diag }\{0.4,0.8\}\). When \(\varrho =0.8 \ (\varrho _1=0.7, \varrho _2=0.1)\) and \(\varrho =0.9 \ (\varrho _1=0.7, \varrho _2=0.2)\), the corresponding upper bounds for \(d_{22}\) for various values of \(d_{21}\) are calculated by Corollary 3.5 and listed in Tables 3 and 4 in order to compare with the results obtained in [13–17]. Tables 3 and 4 show that the method proposed in this paper is considerably less conservative than the corresponding methods used in [13–17].
When u(t) = 0, one can obtain the state trajectories of the state x(t) for the delays \(d_1(t)=0.1+0.7\sin (t)\) and \(d_2(t)=2.7325+0.1\cos (t)\), with the initial value \([-0.2,\quad 0.2]^T\), as shown in Fig. 4. In addition, for \(d_1(t)=0.1+0.7\sin (t)\) and \(d_2(t)=2.1982+0.2\cos (t)\), with the initial value \([-0.2,\quad 0.2]^T\), the responses of the state trajectories are depicted in Fig. 5.
Remark 4.3
In [14], the double integral terms \(\displaystyle \int _{-\overline{d}_1}^{0}\int _{\beta }^{0}\dot{z}^T(t+\alpha )Z_1\dot{z}(t+\alpha ) \mathrm {d}\alpha \mathrm {d}\beta ,\) \(\displaystyle \int _{-\overline{d}}^{-\overline{d}_1}\int _{\beta }^{0}\dot{z}^T(t+\alpha ) Z_2\dot{z}(t+\alpha ) \mathrm {d}\alpha \mathrm {d}\beta \) and \(\displaystyle \int _{-\overline{d}}^{0}\int _{\beta }^{0}\dot{z}^T(t+\alpha )M\dot{z}(t+\alpha ) \mathrm {d}\alpha \mathrm {d}\beta \) are considered in the LKF and the Jensen inequality is employed to obtain the derived results. Although similar double integral terms are considered in our paper, a new kind of linear convex combination approach based on Lemma 2.1 is used, making use of positive functions weighted by the inverses of squared convex parameters, which leads to the improved results proposed in this paper. In [16] and [17], by taking the triple integral term \(\displaystyle \int _{-d}^0\int _{\theta }^{0}\int _{t+\lambda }^{t}\dot{z}^T(s)Z_5\dot{z}(s)\mathrm {d}s \mathrm {d}\lambda \mathrm {d}\theta \) in the LKF, the stability results are obtained without using Lemma 2.1. However, in our paper, based on the second-order reciprocally convex approach, Lemma 2.1 is used to handle several kinds of function combinations arising from the derivation of the triple integral terms considered in \(V_6(t, x(t), i)\). It should be pointed out that the results obtained in Theorem 3.1 of this paper by means of the second-order reciprocally convex combination approach are much better than those obtained in [16] and [17], as can be easily seen via numerical simulations.
Example 4.4
Consider the neural networks (34) as discussed in [36–39] with the following parameters:
The activation functions are assumed to satisfy Assumption 2.1 with \(g_1(x)=0.3\tanh (x), g_2(x)=0.8\tanh (x)\) and hence \(L=\text{ diag }\{0.3, 0.8\}\). By using Corollary 3.7 and the MATLAB LMI toolbox, the corresponding maximum allowable upper bounds of the time-varying delay d(t) are computed as given in Table 5 for different values of \(\varrho \). Further, the computed upper bounds are compared with the existing ones [36–39]. Figure 6 shows the state curves for the delay \(d(t)=5.7305+0.2\sin ^2(t)\) with \(\varrho =0.4\) and the initial condition \([-0.1,0.1]^T.\)
Remark 4.5
It should be noted that in [36] the free weighting matrix method is used to obtain the theoretical results. With this approach, no model transformations or bounding techniques for cross-terms are applied, although it may lead to computational complexity. Further, in [37] the integral terms \(-\int _{t-h}^tz^T(s)S_1z(s)\mathrm {d}s\) and \( -\int _{t-h}^t\dot{z}^T(s)S_2\dot{z}(s)\mathrm {d}s\) considered in the LKF are split into the integral terms \(-\int _{t-\tau (t)}^{t}z^T(s)S_1z(s)\mathrm {d}s\) and \(-\int _{t-h}^{t-\tau (t)}z^T(s)S_1z(s)\mathrm {d}s\), and \(-\int _{t-\tau (t)}^t\dot{z}^T(s)S_2\dot{z}(s)\mathrm {d}s\) and \(-\int _{t-h}^{t-\tau (t)}\dot{z}^T(s)S_2\dot{z}(s)\mathrm {d}s\), respectively. By using these integral terms and introducing the relationship among \(\tau (t)\), \(h-\tau (t)\) and h, stability results are derived for the neural networks in [37]; the obtained results are further shown to be less conservative than those of [36]. Different from [36] and [37], in [38] a new type of double integral term \(\frac{h}{2}\int _{-\frac{h}{2}}^0\int _{t+\theta }^t\dot{z}^T(s)Q_1\dot{z}(s)\mathrm {d}s\mathrm {d}\theta \) is taken into account in the LKF, which reduces to the single integral term \(-\frac{h}{2}\int _{t-\frac{h}{2}}^t\dot{z}^T(s)Q_1\dot{z}(s)\mathrm {d}s\) after taking the derivative; this single integral term is then bounded by a convex optimization approach in [38]. This approach produces less conservatism than [36] and [37]. On the other hand, in [39], triple and quadruple integrals are introduced to give better results than in [36–38]. Unlike [36–39], in our paper the integral term is used in \(V_5(t, x(t), i)\) in order to derive less conservative passivity results.
5 Conclusion
In this paper, dissipativity and passivity analysis are investigated for Markovian jump neural networks with two additive time-varying delays and a leakage time-varying delay, based on a second-order reciprocally convex approach. By combining this approach with an augmented Lyapunov–Krasovskii functional, a novel dissipativity criterion for the concerned system is proposed. By using variations of the lower bound lemma, several kinds of function combinations arising from the triple integral terms are handled in the derivation of the LMI conditions. Finally, three illustrative numerical examples are provided to show the effectiveness of the proposed method.
References
Gupta MM, Jin L, Homma N (2003) Static and dynamic neural networks. Wiley, New York
Sakthivel R, Vadivel P, Mathiyalagan K, Arunkumar A, Sivachitra M (2015) Design of state estimator for bidirectional associative memory neural networks with leakage delays. Inf Sci 296:263–274
Zhu Q, Cao J, Hayat T, Alsaadi F (2015) Robust stability of Markovian jump stochastic neural networks with time delays in the leakage terms. Neural Process Lett 41:1–27
Arunkumar A, Sakthivel R, Mathiyalagan K (2015) Robust reliable \(H_{\infty }\) control for stochastic neural networks with randomly occurring delays. Neurocomputing 149:1524–1534
Kwon OM, Park MJ, Park JH, Lee SM, Cha EJ (2014) Improved results on stability of linear systems with time-varying delays via Wirtinger-based integral inequality. J Frankl Inst 351:5386–5398
Wang X, Yu J, Li C, Wang H, Huang T, Huang J (2015) Robust stability of stochastic fuzzy delayed neural networks with impulsive time window. Neural Netw 67:84–91
Jiang P, Zeng Z, Chen J (2015) Almost periodic solutions for a memristor-based neural networks with leakage, time-varying and distributed delays. Neural Netw 68:34–45
Zhu Q, Rakkiyappan R, Chandrasekar A (2014) Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control. Neurocomputing 136:136–151
Samli R (2015) A new delay-independent condition for global robust stability of neural networks with time delays. Neural Netw 66:131–137
Zhang H, Yang F, Liu X, Zhang Q (2013) Stability analysis for neural networks with time-varying delay based on quadratic convex combination. IEEE Trans Neural Netw Learn Syst 24:513–521
Guo Z, Wang J, Yan Z (2014) Attractivity analysis of memristor-based cellular neural networks with time-varying delays. IEEE Trans Neural Netw Learn Syst 25:704–717
Verriest E (2011) Inconsistencies in systems with time-varying delays and their resolution. IMA J Math Control Inf 28:147–162
He Y, Liu GP, Rees D (2007) New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Trans Neural Netw 18:310–314
Zhao Y, Gao H, Mou S (2008) Asymptotic stability analysis of neural networks with successive time delay components. Neurocomputing 71:2848–2856
Shao H, Han Q (2011) New delay-dependent stability criteria for neural networks with two additive time-varying delay components. IEEE Trans Neural Netw 22:812–818
Tian J, Zhong S (2012) Improved delay-dependent stability criteria for neural networks with two additive time-varying delay components. Neurocomputing 77:114–119
Chen H (2013) Improved stability criteria for neural networks with two additive time-varying delay components. Circuits Syst Signal Process 32:1977–1990
Zheng CD, Zhang X, Wang Z (2015) Mode and delay-dependent stochastic stability conditions of fuzzy neural networks with Markovian jump parameters. Neural Process Lett. doi:10.1007/s11063-015-9413-x
Chen H, Wang J, Wang L (2014) New criteria on delay-dependent robust stability for uncertain Markovian stochastic delayed neural networks. Neural Process Lett. doi:10.1007/s11063-014-9356-7
Wu ZG, Shi P, Su H, Chu J (2011) Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays. IEEE Trans Neural Netw 22:1566–1575
Wu ZG, Shi P, Su H, Chu J (2014) Asynchronous \(L_2-L_{\infty }\) filtering for discrete-time stochastic Markov jump systems with randomly occurred sensor nonlinearities. Automatica 50:180–186
Wu ZG, Shi P, Su H, Chu J (2013) Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. IEEE Trans Cybern 43:1796–1806
Chandrasekar A, Rakkiyappan R, Rihan FA, Lakshmanan S (2014) Exponential synchronization of Markovian jumping neural networks with partly unknown transition probabilities via stochastic sampled-data control. Neurocomputing 133:385–398
Rakkiyappan R, Chandrasekar A, Park JH, Kwon OM (2014) Exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control. Nonlinear Anal 14:16–37
Wu SL, Li KL, Huang TZ (2011) Global dissipativity of delayed neural networks with impulses. J Frankl Inst 348:2270–2291
Guo Z, Wang J, Yan Z (2013) Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw 48:158–172
Chen Y, Li W, Bi W (2009) Improved results on passivity analysis of uncertain neural networks with time-varying discrete and distributed delay. Neural Process Lett 30:155–169
Balasubramaniam P, Nagamani G, Rakkiyappan R (2010) Global passivity analysis of interval neural networks with discrete and distributed delays of neutral type. Neural Process Lett 32:109–130
Xiao J, Zeng Z, Shen W (2015) Passivity analysis of delayed neural networks with discontinuous activations. Neural Process Lett 42:215–232
Rakkiyappan R, Chandrasekar A, Cao J (2014) Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays. IEEE Trans Neural Netw Learn Syst. doi:10.1109/TNNLS.2014.2365059
Sun Y, Cui BT (2008) Dissipativity analysis of neural networks with time-varying delays. Int J Autom Comput 5:290–295
Xu S, Zheng WX, Zou Y (2009) Passivity analysis of neural networks with time-varying delays. IEEE Trans Circuits Syst II 56:325–329
Lee W, Park P (2014) Second-order reciprocally convex approach to stability of systems with interval time-varying delays. Appl Math Comput 229:245–253
Park P, Ko JW, Jeong C (2011) Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47:235–238
Chen Y, Bi W, Li W, Wu Y (2010) Less conservative results of state estimation for neural networks with time-varying delay. Neurocomputing 73:1324–1331
Song Q (2008) Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach. Neurocomputing 71:2823–2830
Sun J, Liu GP, Chen J, Rees D (2009) Improved stability criteria for neural networks with time-varying delay. Phys Lett A 373:342–348
Tian J, Zhong S (2011) Improved delay-dependent stability criterion for neural networks with time-varying delay. Appl Math Comput 217:10278–10288
Shi K, Zhong S, Zhu H, Liu X, Zeng Y (2015) New delay-dependent stability criteria for neutral-type neural networks with mixed random time-varying delays. Neurocomputing. doi:10.1016/j.neucom.2015.05.035i
Acknowledgments
The work of authors was supported by UGC-BSR Research Start-Up-Grant, New Delhi, India, under the sanctioned No. F. 20-1/2012 (BSR)/20-5(13)/2012(BSR).
Nagamani, G., Radhika, T. Dissipativity and Passivity Analysis of Markovian Jump Neural Networks with Two Additive Time-Varying Delays. Neural Process Lett 44, 571–592 (2016). https://doi.org/10.1007/s11063-015-9482-x