1 Introduction

Neural networks (NNs) have been the focus of intensive research activities over the last few decades; such networks have found broad applications in areas such as associative memory, pattern classification, reconstruction of moving images, signal processing, solving optimization problems, fault diagnosis and special problems of A/D converter design [1–4]. Before considering these applications, a necessary preliminary task is to check whether the equilibrium points of the designed network are stable or unstable, because the applications of NNs depend mainly on the dynamical behavior of the equilibrium point. A foremost concern is that time delays occur in many engineering systems, and their existence may cause undesirable dynamic behaviors such as oscillation and instability. There exist two types of stability criteria for delayed neural networks, namely delay-independent and delay-dependent criteria [5, 6]. The former do not utilize any information on the length of the delay, while the latter do. Delay-dependent conditions tend to be less conservative than delay-independent ones, especially for a neural network with a small time delay. In addition, a typical time delay called leakage delay may exist in the negative feedback terms of the system; these terms are variously known as forgetting or leakage terms. Such time delays in the leakage terms are difficult to handle and have rarely been considered in the literature. In [7], the authors discussed the existence and global exponential stability of almost periodic solutions for memristor-based neural networks with leakage and distributed time-varying delays. Recently, the authors of [8] studied the stochastic stability of bidirectional associative memory neural networks with leakage delays and impulse control. Thus, much attention has been drawn to the study of neural networks with time delay (see, e.g., [9–12] and the references therein).

Generally, in practical situations, signals transmitted from one point to another may traverse several network segments, which can induce successive delays with different properties owing to the variable network transmission conditions. For instance, in a state-feedback networked control system the physical plant, controller, sensor, and actuator are located at different places; hence, when signals are transmitted from one device to another, two additive time-varying delays occur: one from sensor to controller and the other from controller to actuator. Because of the network transmission conditions, the two delays are generally time-varying with different properties. It is therefore of significance to consider the stability of NNs with two additive time-varying delay components. A great number of research results on additive time-delay systems exist in the recent literature (see [13–17] and the references therein).

It is known that systems with Markovian jump parameters are a class of systems whose transitions among the models are governed by a Markov chain taking values in a finite set; they behave like stochastic hybrid systems with two components in the state. The first component refers to the mode, described by a continuous-time finite-state Markov process, and the second refers to the state, represented by a system of differential equations. Owing to the extensive applications of such models in manufacturing systems, power systems, communication systems and network-based control systems, many works on Markovian jump systems (MJSs) have recently been reported (see [18–22]). In [23], the author discussed the exponential synchronization of Markovian jumping neural networks with partly unknown transition probabilities via stochastic sampled-data control. In [24], exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control were established. In view of this, studies of dissipativity criteria and performance for Markovian jump systems with delays are of both theoretical and practical importance.

It is well known that dissipativity theory has attracted considerable attention in mathematics and control theory because it encompasses bounded realness and positivity, properties of physical systems closely related to the phenomenon of loss or dissipation of energy. The key idea of dissipativity is to generalize Lyapunov stability; it has found widespread application in areas such as electrical networks, nonlinear control systems, stability theory, system norm estimation, chaos and synchronization theory, and robust control. In recent years, the authors of [25] investigated the global dissipativity of delayed neural networks with impulses; furthermore, three types of impulses were treated in a unified way by exploiting ideas from dissipativity theory. Moreover, the global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays were discussed using M-matrix theory in [26].

Passivity analysis is an important concept of system theory; it plays an important role in both electrical networks and nonlinear control systems and provides a useful tool for analyzing the stability of systems. The importance of studying passivity theory is that the passive properties of a system keep the system internally stable (see [27–29]). In [30], the passivity and passification of memristor-based recurrent neural networks with additive time-varying delays were recently studied. To the best of our knowledge, no related results have so far been established for the dissipativity and passivity of Markovian jump neural networks with additive time-varying delays. To fill this gap, we deal with the problem of dissipativity and passivity analysis for continuous-time neural networks with additive time-varying delays.

Moreover, in the existing literature [26, 31, 32], the authors discussed the dissipativity and passivity problem by using only double integral terms such as \(\int _{-\tau }^{0}\int _{t+\beta }^tx^T(s)Sx(s)\mathrm {d}s\mathrm {d}\beta \). In our paper, however, we construct double integral terms together with triple integral terms such as \(\frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{\theta }^{-d_{21}}\int _{t+\lambda }^{t} \dot{x}^T(s)S_1\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) in the Lyapunov–Krasovskii functional (LKF) in order to obtain less conservative results than the existing ones. In addition, several effective approaches have been proposed to derive less conservative dissipativity conditions for neural networks; to mention a few, one can refer to the free-weighting matrix approach, the delay decomposition approach and Jensen's inequality. Recently, a new approach called the second-order reciprocally convex combination was proposed in [33] to study the stability of systems with interval time-varying delays. Based on the above observations, it is significant to establish new integral inequality techniques, such as the second-order reciprocally convex combination technique, for handling the triple integral terms in order to obtain less conservative dissipativity criteria, which motivates the present study.

Based on the above discussion, in this paper the dissipativity and passivity analysis of Markovian jump neural networks with two additive time-varying delays is carried out. Different from the previous literature, the mixed time delays considered here comprise discrete, distributed and leakage time-varying delays. Based on the Lyapunov functional method, novel dissipativity and passivity criteria are established in terms of linear matrix inequalities (LMIs), which can be solved efficiently by optimization algorithms. Finally, three numerical examples demonstrate that our results are less conservative than the existing ones.

The remainder of this paper is organized as follows. In Sect. 2, the problem of dissipativity and passivity of Markovian jumping neural networks with two additive time-varying delays is formulated and some preliminaries are presented. Section 3 presents the main results on dissipativity and passivity analysis. In Sect. 4, three illustrative examples are provided, and conclusions are given in Sect. 5.

Notations

Throughout this paper, the superscript T denotes transposition, and the notation \(X \ge Y\) (respectively, \(X>Y\)), where X and Y are symmetric matrices, means that \(X-Y\) is positive semi-definite (respectively, positive definite). \( {\mathbb {R}}^{n}\) and \( {\mathbb {R}}^{n \times n}\) denote the n-dimensional Euclidean space and the set of all \(n \times n\) real matrices, respectively. diag\(\{\cdots \}\) stands for a block-diagonal matrix. The matrix \({0_{n,m}}\) denotes the null matrix of order \(n \times m\). The notation \(*\) always denotes the symmetric block in a symmetric matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions. Let \((\Omega , \mathfrak {F}, \mathcal {P})\) be the probability space, where \(\Omega \) is the sample space, \(\mathfrak {F}\) is the \(\sigma \)-algebra of events, and \(\mathcal {P}\) is the probability measure defined on \(\mathfrak {F}\); \(\mathbb {E}[\cdot ]\) stands for the expectation operator with respect to \(\mathcal {P}\).

2 Problem Description and Preliminaries

Let \(\{r(t), t \ge 0\}\) be a right-continuous Markov chain on the probability space \((\Omega ,\mathfrak {F},\mathcal {P})\) taking values in a finite state space \(\mathbb {S}=\{1,2,\ldots ,N\}\) with generator \(Q=(q_{ij})_{N\times N}\) given by

$$\begin{aligned} P\{r(t+\Delta t)=j|r(t)=i\}=\left\{ \begin{array}{l@{\quad }l} q_{ij}\Delta t+o(\Delta t), &{}i\ne j ,\\ 1+q_{ii}\Delta t+o(\Delta t),&{}i = j, \end{array}\right. \end{aligned}$$

where \(\Delta t>0\), \(\lim _{\Delta t \rightarrow 0}\frac{o(\Delta t)}{\Delta t}=0\), and \(q_{ij}\ge 0\) is the transition rate from mode i to mode j for \(i\ne j\), while \(q_{ii}= -\sum _{j\ne i}q_{ij}\).
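For instance, the two-mode generator used later in Example 4.1 is an admissible choice; each off-diagonal entry is nonnegative and each row sums to zero:

$$\begin{aligned} Q=\left[ \begin{array}{cc}-3 &{}\quad 3 \\ 2 &{}\quad -2 \end{array}\right] , \quad q_{12}=3, \ q_{21}=2, \ q_{11}=-3, \ q_{22}=-2. \end{aligned}$$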

Consider the following neural networks with Markovian jumping parameters, leakage time-varying delay and two additive time-varying delay components:

$$\begin{aligned} \dot{x}(t)&= -A(r(t))x(t-\sigma (t))+W_0(r(t))g(x(t)) +W_1(r(t))g(x(t-d_1(t)-d_2(t)))\nonumber \\&\quad +W_2(r(t))\int _{t-\rho (t)}^tg(x(s))\mathrm {d}s+u(t)\nonumber \\ y(t)&= g(x(t)). \end{aligned}$$
(1)

System (1) can be rewritten in the equivalent form

$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}t}\left[ x(t)-A(r(t))\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] \\&\quad =-A(r(t))x(t)-A(r(t))x(t-\sigma (t)) \dot{\sigma }(t)+W_{0}(r(t))g(x(t))\\&\qquad +W_{1}(r(t))g(x(t-d_1(t)-d_2(t)))+W_2(r(t))\int _{t-\rho (t)}^{t}g(x(s))\mathrm {d}s+u(t) \end{aligned}$$

where \(x(t-\sigma (t))=[x_{1}(t-\sigma (t)), x_{2}(t-\sigma (t)),\ldots , x_{n}(t-\sigma (t))]^{T}\in \mathbb {R}^{n}\) is the state vector of the n neurons delayed by the leakage time-varying delay \(\sigma (t)\), u(t) is the input, and y(t) is the output. The diagonal matrix \(A(r(t)) = \text{ diag }\{a_{1}(r(t)),a_{2}(r(t)),\ldots ,a_{n}(r(t))\}\) has positive entries \(a_{i}(r(t))>0\) \((i=1,2,\ldots ,n)\). The matrices \(W_0(r(t))\), \(W_1(r(t))\) and \(W_2(r(t))\) are the interconnection matrices representing the weight coefficients of the neurons, and \(g(x(t))=[g_1(x_1(t)), g_2(x_2(t)), \ldots , g_n(x_n(t))]^T \in \mathbb {R}^n\) is the neuron activation function. For convenience, each possible value of r(t) in the neural network (1) is denoted by \(i, i\in \mathbb {S}\) in the sequel. Then we write \(A(r(t))=A_i, W_0(r(t))=W_{0i}, W_1(r(t))=W_{1i}, W_2(r(t))=W_{2i}.\)

In the neural network (1), the bounded functions \(\sigma (t)\), \(\rho (t)\), \(d_1(t)\) and \(d_2(t)\) represent respectively the leakage, distributed and two additive time-varying delays that are assumed to satisfy the following conditions:

$$\begin{aligned} 0&\le \sigma (t) \le \sigma < \infty , \ \ \dot{\sigma }(t) \le \sigma _\mu <\infty , \ \ 0 \le \rho (t) \le \rho , \nonumber \\ 0&\le d_{11}\le d_1(t) \le d_{12}, \ \ 0\le d_{21}\le d_2(t) \le d_{22}, \ |\dot{d}_1(t)|\le \varrho _1 <1, \ |\dot{d}_2(t)|\le \varrho _2 <1, \end{aligned}$$
(2)

where \(d_{12}\ge d_{11}\), \(d_{22}\ge d_{21}\), and \(\varrho _1\), \(\varrho _2\), \(\sigma \), \(\sigma _{\mu }\) and \(\rho \) are known constants, with \(d_{11}\) and \(d_{21}\) not equal to zero. Here we denote \(d_1=d_{11}+d_{21}\), \(d_2=d_{12}+d_{22}\), \(\varrho =\varrho _1+\varrho _2\), \(h_1=d_{12}-d_{11}\), \(h_2=d_{22}-d_{21}\). We consider system (1) with the initial condition \(x(t)=\phi (t), \ t \in [-\bar{d}, 0]\), \(\bar{d}=\max \{\sigma , d_1, d_2, \rho \}\), where \(\phi (t)\) is a given initial function.
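As a concrete instance of this notation (using the delay bounds of Example 4.1 below, \(d_{11}=0.3\), \(d_{12}=0.8\), \(d_{21}=0.4\), \(d_{22}=2.9185\)), we have

$$\begin{aligned} d_1=0.3+0.4=0.7, \quad d_2=0.8+2.9185=3.7185, \quad h_1=0.5, \quad h_2=2.5185, \end{aligned}$$

and with \(\sigma =0.03\), \(\rho =0.2\) the initial interval is determined by \(\bar{d}=\max \{0.03, 0.7, 3.7185, 0.2\}=3.7185\).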

Remark 2.1

In this paper, the values of \(\varrho _1\) and \(\varrho _2\) are assumed to be less than 1. When \(\varrho _1\) and \(\varrho _2\) are greater than or equal to 1, the fast time-varying delay case causes problems with causality, minimality and consistency, as indicated in [10, 12]. This restriction is therefore a reasonable and necessary assumption for proving the main results.

Throughout this paper, we assume that the activation function satisfies the following assumption.

Assumption 2.1

The activation function g(u) is bounded and satisfies

$$\begin{aligned} 0\le \frac{g_i(\zeta _1)-g_i(\zeta _2)}{\zeta _1-\zeta _2}\le L_i, i=1,2,\cdots , n. \end{aligned}$$
(3)

for any \(\zeta _1, \zeta _2 \in \mathbb {R}, \zeta _1 \ne \zeta _2,\) where \(L_i>0\) for \(i=1,2,\cdots ,n.\)

Further, \(g_i(0)=0,\) \(i=1,2,\cdots ,n.\)
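For example, the activation functions \(g_i(x)=l_i\tanh (x)\) used in the numerical examples of Sect. 4 satisfy Assumption 2.1: by the mean value theorem and \(0\le \tanh '(\xi )\le 1\),

$$\begin{aligned} 0\le \frac{l_i\tanh (\zeta _1)-l_i\tanh (\zeta _2)}{\zeta _1-\zeta _2}=l_i\tanh '(\xi )\le l_i \end{aligned}$$

for some \(\xi \) between \(\zeta _1\) and \(\zeta _2\), so (3) holds with \(L_i=l_i\); clearly \(g_i\) is bounded and \(g_i(0)=0\).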

We now introduce the following dissipativity and passivity definitions.

Definition 2.1

[31] System (1) is strictly \((\mathcal {Q}, \mathcal {S}, \mathcal {R})\)-dissipative if, for some scalar \(\gamma > 0\) and under the zero initial state, the following condition is satisfied for any \(t_p\ge 0\):

$$\begin{aligned} {\mathbb {E}\{\mathcal {G}(u, y, t_p)\} \ge \mathbb {E}\{\gamma \left\langle u , u \right\rangle _{t_p}\},} \end{aligned}$$
(4)

where the quadratic energy supply function \(\mathcal {G}\) associated with system (1) is defined by

$$\begin{aligned} \mathcal {G}(u, y, t_p)=\left\langle y, \mathcal {Q}y\right\rangle _{t_p}+2\left\langle y, \mathcal {S}u\right\rangle _{t_p}+\left\langle u, \mathcal {R}u\right\rangle _{t_p}, \forall t_p>0, \end{aligned}$$
(5)

where \(\mathcal {Q}, \mathcal {R}\) and \(\mathcal {S}\) are real matrices of appropriate dimensions, with \(\mathcal {Q}\) and \(\mathcal {R}\) being symmetric matrices. Let \(L_2[0, \infty )\) be the space of square integrable functions on \([0, \infty )\). The notations \(\left\langle y, \mathcal {S}u\right\rangle _{t_p}\), \(\left\langle y, \mathcal {Q}y\right\rangle _{t_p}\) and \(\left\langle u, \mathcal {R}u\right\rangle _{t_p}\) represent \(\displaystyle \int _{0}^{t_p}y^T(t)\mathcal {S}u(t)\mathrm {d}t,\) \(\displaystyle \int _{0}^{t_p}y^T(t)\mathcal {Q}y(t)\mathrm {d}t\) and \(\displaystyle \int _{0}^{t_p}u^T(t)\mathcal {R}u(t)\mathrm {d}t,\) respectively.

Definition 2.2

[32] The system (1) is said to be passive, if for all solutions of (1) with \(x(0)=0,\) there exists a scalar \(\gamma >0\) such that the inequality

$$\begin{aligned} {2 \int ^{t_{p}}_{0} \mathbb {E}\{y^{T}(s)u(s)\}\mathrm {d}s\ge -\gamma \int ^{t_{p}}_{0} \mathbb {E}\{u^{T}(s)u(s)\}\mathrm {d}s} \end{aligned}$$
(6)

is satisfied under the zero initial condition.

Definition 2.3

[33] Let \(\Omega _{1}, \Omega _{2}, \ldots , \Omega _{N}:\mathbb {R}^{m}\mapsto \mathbb {R}\) be a given finite number of functions that have positive values in an open subset \(\mathbf{D}\) of \(\mathbb {R}^{m}\). Then, a second-order reciprocally convex combination of these functions over \(\mathbf{D}\) is a function of the form

$$\begin{aligned} \frac{1}{\alpha _{1}^2}\Omega _{1}+\frac{1}{\alpha _{2}^2}\Omega _{2}+\cdots +\frac{1}{\alpha _{N}^2}\Omega _{N}:\mathbf{D} \mapsto \mathbb {R}, \end{aligned}$$
(7)

where the real numbers \(\alpha _i\) satisfy \(\alpha _i>0\) and \(\sum _{i}\alpha _i=1\).
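For example, with \(N=2\) and the convex parameters \(\alpha _1=\varsigma \), \(\alpha _2=\omega =1-\varsigma \) used later in Sect. 3, (7) takes the form

$$\begin{aligned} \frac{1}{\varsigma ^2}\Omega _1+\frac{1}{\omega ^2}\Omega _2, \end{aligned}$$

which is exactly the combination appearing in the bounds on the double integral terms of \(\mathbb {L}V_6\) in (19).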

To end this section, we introduce the following lemmas, which will play an important role in the proof of the main results.

Lemma 2.1

[34] (lower bound lemma). Let \(f_{1},f_{2},\ldots ,f_{N}:\mathbb {R}^m \rightarrow \mathbb {R}\) have positive values in an open subset \(\mathbf{D}\) of \(\mathbb {R}^m\). Then the reciprocally convex combination of \(f_{i}\) over \(\mathbf{D}\) satisfies

$$\begin{aligned} \min _{\left\{ \alpha _{i}|\alpha _{i}>0,\sum _{i} {\alpha _{i}=1}\right\} } \sum _{i} {\frac{1}{\alpha _{i}}} f_{i}(t)=\sum _{i}{f_{i}(t)}+\max _{g_{i,j}(t)}\sum _{i \ne j}{g_{i,j}(t)} \end{aligned}$$

subject to

$$\begin{aligned} \left\{ {g_{i,j}:\mathbb {R}^m\mapsto \mathbb {R},\ g_{j,i}(t)\triangleq g_{i,j}(t),\left[ \begin{array}{cc}f_{i}(t) &{}\quad g_{i,j}(t)\\ g_{i,j}(t) &{}\quad f_{j}(t)\end{array}\right] \ge 0}\right\} . \end{aligned}$$
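In the simplest case \(N=2\) with weights \(\alpha \) and \(1-\alpha \), Lemma 2.1 yields the bound that is used repeatedly below: whenever \(\left[ \begin{array}{cc}f_1(t) &{}\quad g_{1,2}(t)\\ g_{1,2}(t) &{}\quad f_2(t)\end{array}\right] \ge 0\),

$$\begin{aligned} \frac{1}{\alpha }f_1(t)+\frac{1}{1-\alpha }f_2(t)\ge f_1(t)+f_2(t)+2g_{1,2}(t) \quad \text{ for } \text{ all } \alpha \in (0,1). \end{aligned}$$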

Lemma 2.2

[35] For any vectors \(x, y \in \mathbb {R}^{n}\) and matrix \(Q>0\), we have the following inequality

$$\begin{aligned} \pm 2x^Ty \le x^TQx+y^TQ^{-1}y. \end{aligned}$$
(8)
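Inequality (8) follows at once by completing the square, since \(0\le (Q^{1/2}x\mp Q^{-1/2}y)^T(Q^{1/2}x\mp Q^{-1/2}y)=x^TQx\mp 2x^Ty+y^TQ^{-1}y\). The proofs below also invoke the well-known Jensen integral inequality: for any matrix \(R=R^T>0\) and integrable function \(x(\cdot )\) on [a, b],

$$\begin{aligned} \left( \int _a^b x(s)\mathrm {d}s\right) ^TR\left( \int _a^b x(s)\mathrm {d}s\right) \le (b-a)\int _a^b x^T(s)Rx(s)\mathrm {d}s. \end{aligned}$$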

3 Main Results

In this section, we establish dissipativity criteria for Markovian jump neural networks with additive time-varying delays. One of the main issues in deriving such criteria is how to reduce the conservatism induced by the Lyapunov functional when dealing with time delays. By employing the second-order reciprocally convex combination technique, we bound the triple integral terms involved in the LKF candidate and thereby obtain the dissipativity and passivity criteria for Markovian jump neural networks.

Theorem 3.1

The neural network (1) is strictly \((\mathcal {Q}, \mathcal {S}, \mathcal {R})\)-dissipative in the sense of Definition 2.1 if there exist positive definite matrices \(P_i(i\in \mathbb {S})\), \(R_s (s=1,2,\cdots ,6)\), \(Q_1, Q_2\), Z, \(J_n(n=1,2)\), \(S_q(q=1,2)\), and M, any matrices \(K_f(f=1,2,\cdots ,6)\), \(Y_1, Y_2\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMIs hold for \(l=1,2\):

$$\begin{aligned}&\Omega ^l=\left[ \begin{array}{ccc} \widehat{\phi }_i^{(l)} &{}\quad P_iA_i\sqrt{\sigma _{\mu }} &{}\quad A_i^TP_iA_i\sqrt{\sigma _{\mu }}\\ * &{}\quad -Y_1 &{}\quad 0\\ * &{}\quad * &{}\quad -Y_2 \end{array}\right] <0 \end{aligned}$$
(9)
$$\begin{aligned}&\left[ \begin{array}{cc} J_{1} &{}\quad K_{1} \\ * &{}\quad J_{1} \\ \end{array} \right] \ge 0, \left[ \begin{array}{cc} J_{2}+\frac{h_2^2}{2}S_1 &{}\quad K_{2} \\ * &{}\quad J_{2}+\frac{h_2^2}{2}S_2 \\ \end{array} \right] \ge 0, \end{aligned}$$
(10)
$$\begin{aligned}&\left[ \begin{array}{cccc} 2S_{1} &{}\quad 0 &{}\quad K_{3} &{}\quad 0\\ * &{}\quad S_{1} &{}\quad 0 &{}\quad K_{4}\\ * &{}\quad * &{}\quad 2S_{1} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{1} \end{array} \right] >0, \left[ \begin{array}{cccc} 2S_{2} &{}\quad 0 &{}\quad K_{5} &{}\quad 0\\ * &{}\quad S_{2} &{}\quad 0 &{}\quad K_{6}\\ * &{}\quad * &{}\quad 2S_{2} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{2} \end{array} \right] >0, \end{aligned}$$
(11)

where

$$\begin{aligned} \widehat{\phi }_i^{(l)}=\,&\phi _i-\Sigma _{1l}^T\left[ \begin{array}{cc}S_1 &{}\quad K_3+K_4 \\ * &{}\quad S_1\end{array}\right] \Sigma _{1l}-\Sigma _{2l}^T\left[ \begin{array}{cc}S_2 &{}\quad K_5+K_6 \\ * &{}\quad S_2\end{array}\right] \Sigma _{2l},\\ \phi _i=\,&(\phi _{p,q,i})_{17\times 17},\\ \phi _{1,1,i}=&-P_iA_i-A_i^TP_i+\sum _{j=1}^{N}q_{ij}P_j+R_1+R_2+R_3+R_4+R_5\\&+d_{11}^2Q_1+d_{12}^2Q_2 +\sigma ^2Z+h_2^2J_1,\\ \phi _{1,3,i}=&P_iW_{0i}+LU, \ \phi _{1,10,i}=P_iW_{1i}, \ \ \phi _{1,11,i}=P_i, \ \phi _{1,16,i}=A_iP_iA_i^T-\sum _{j=1}^{N}q_{ij}P_jA_j, \\ \phi _{1,17,i}=&P_iW_{2i}, \ \ \phi _{2,2,i}=A_i^T\mathcal {C}A_i+Y_1\sigma _\mu +Y_2\sigma _\mu -R_1(1-\sigma _\mu ), \ \phi _{2,3,i}=-A_i^T\mathcal {C}W_{0i},\\ \phi _{2,10,i}=&-A_i^T\mathcal {C}W_{1i}, \ \phi _{2,11,i}=-A_i^T\mathcal {C}, \ \ \phi _{2,17,i}=-A_i^T\mathcal {C}W_{2i}, \\ \phi _{3,3,i}=&W_{0i}^T\mathcal {C}W_{0i}+R_6+\rho ^2M-U -\mathcal {Q}, \ \phi _{3,10,i}=W_{0i}^T\mathcal {C}W_{1i}, \ \phi _{3,11,i}=W_{0i}^T\mathcal {C}-\mathcal {S},\\ \phi _{3,16,i}=&-W_{0i}^TP_iA_{i}, \ \phi _{3,17,i}=W_{0i}^T\mathcal {C}W_{2i},\\ \phi _{4,4,i}=&-R_2, \ \ \phi _{5,5,i}=-R_3, \ \ \phi _{6,6,i}=-R_4(1-\varrho _1),\\ \phi _{7,7,i}=&-R_5(1-\varrho _2)-2J_2+K_2+K_2^T, \\ \phi _{7,8,i}=&J_2-K_2, \ \ \phi _{7,9,i}=-K_2+J_2, \ \ \phi _{8,8,i}=-J_2, \ \ \phi _{8,9,i}=K_2, \ \ \phi _{9,9,i}=-J_2, \\ \phi _{10,10,i}=&-R_6(1-\varrho _1-\varrho _2)+W_{1i}^T\mathcal {C}W_{1i}, \ \ \phi _{10,11,i}=W_{1i}^T\mathcal {C}, \ \ \phi _{10,16,i}=-W_{1i}^TP_iA_{i},\\ \phi _{10,17,i}=&W_{1i}^T\mathcal {C}W_{2i}, \ \ \phi _{11,11,i}=h^2_2J_2+\frac{h_2^4}{4}S_1+\frac{h_2^4}{4}S_2-(\mathcal {R}-\gamma I), \ \ \phi _{11,16,i}= -P_iA_i, \ \ \\ \phi _{11,17,i}=&W_{2i}^T\mathcal {C}, \ \phi _{12,12,i}=-Q_1, \ \ \phi _{13,13,i}=-Q_2, \ \ \phi _{14,14,i}=-J_1, \\ \phi _{14,15,i}=&-K_1, \ \phi _{15,15,i}=-J_1, \ \phi _{16,16,i}=-Z+\sum _{j=1}^Nq_{ij}A_j^TP_jA_j,\\ \phi _{16,17,i}=&-W_{2i}^TP_iA_i,\ \ \phi _{17,17,i}= W_{2i}^T\mathcal {C}W_{2i}-M, \\ \mathcal {C}=\,&h_2^2J_2+\frac{h_2^4}{4}S_1+\frac{h_2^4}{4}S_2, \ \ L=\text{ diag }\{l_1,l_2,\cdots ,l_n\}, \\ \Sigma _{11}=&\left[ \begin{array}{ccccc} 0_{n,7n} &{}\quad 0_{n,n} &{}\quad 0_{n,5n} &{}\quad -I_n &{}\quad 0_{n,3n} \\ 0_{n,6n} &{}\quad h_2I_n &{}\quad 0_{n,7n} &{}\quad -I_n &{}\quad 0_{n,2n}\end{array} \right] , \\ \Sigma _{12}=&\left[ \begin{array}{ccccc} 0_{n,7n} &{}\quad h_2I_n &{}\quad 0_{n,5n} &{}\quad -I_n &{}\quad 0_{n,3n} \\ 0_{n,6n} &{}\quad 0_{n,n} &{}\quad 0_{n,7n} &{}\quad -I_n &{}\quad 0_{n,2n}\end{array} \right] ,\\ \Sigma _{21}=&\left[ \begin{array}{ccccc} 0_{n,6n} &{}\quad 0_{n,n} &{}\quad 0_{n,6n} &{}\quad I_n &{}\quad 0_{n,3n} \\ 0_{n,8n} &{}\quad -h_2I_n &{}\quad 0_{n,5n} &{}\quad I_n &{}\quad 0_{n,2n}\end{array} \right] , \\ \Sigma _{22}=&\left[ \begin{array}{ccccc} 0_{n,6n} &{}\quad -h_2I_n &{}\quad 0_{n,6n} &{}\quad I_n &{}\quad 0_{n,3n} \\ 0_{n,8n} &{}\quad 0_{n,n} &{}\quad 0_{n,5n} &{}\quad I_n &{}\quad 0_{n,2n}\end{array} \right] . \end{aligned}$$

and the remaining terms of \(\phi _{p,q,i}\) are zero.

Proof

We construct the Lyapunov–Krasovskii functional as follows:

$$\begin{aligned} {V(t,x(t),i)=\sum _{k=1}^{7} V_{k}(t,x(t),i)}, \end{aligned}$$
(12)

where

$$\begin{aligned} V_{1}(t,x(t),i)=&\Big [x(t)-A_i\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s \Big ]^{T}P_i \Big [x(t)-A_i\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\Big ],\\ V_{2}(t,x(t),i)=&\int _{t-\sigma (t)}^{t}x^T(s)R_1x(s)\mathrm {d}s+\int _{t-d_{11}}^{t}x^T(s)R_2x(s)\mathrm {d}s +\int _{t-d_{12}}^{t}x^T(s)R_3x(s)\mathrm {d}s\\&+\int _{t-d_1(t)}^{t}x^T(s)R_4x(s)\mathrm {d}s+\int _{t-d_2(t)}^{t}x^T(s)R_5x(s)\mathrm {d}s\\&+\int _{t-d_1(t)-d_2(t)}^{t}g^T(x(s))R_6g(x(s))\mathrm {d}s,\\ V_{3}(t,x(t),i)=&\,d_{11}\int _{-d_{11}}^{0}\int _{t+\theta }^{t}x^T(s)Q_1x(s)\mathrm {d}s\mathrm {d}\theta +d_{12}\int _{-d_{12}}^{0}\int _{t+\theta }^{t}x^T(s)Q_2x(s)\mathrm {d}s\mathrm {d}\theta ,\\ V_4(t,x(t),i)=\,&\sigma \int _{-\sigma }^{0}\int _{t+\theta }^{t}x^T(s)Zx(s)\mathrm {d}s\mathrm {d}\theta ,\\ V_5(t,x(t),i)=\,&h_{2}\int _{-d_{22}}^{-d_{21}}\int _{t+\theta }^{t}x^T(s)J_1x(s)\mathrm {d}s\mathrm {d}\theta +h_{2}\int _{-d_{22}}^{-d_{21}}\int _{t+\theta }^{t}\dot{x}^T(s)J_2\dot{x}(s)\mathrm {d}s\mathrm {d}\theta ,\\ V_6(t,x(t),i)=\,&\frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{\theta }^{-d_{21}}\int _{t+\lambda }^{t} \dot{x}^T(s)S_1\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \\&+\frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{-d_{22}}^{\theta }\int _{t+\lambda }^{t}\dot{x}^T(s)S_2 \dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta ,\\ V_7(t,x(t),i)=\,&\rho \int _{-\rho }^{0}\int _{t+\theta }^{t}g^T(x(s))Mg(x(s))\mathrm {d}s\mathrm {d}\theta . \end{aligned}$$

Setting \(\varsigma =(d_2(t)-d_{21})/h_2\) and \(\omega =(d_{22}-d_2(t))/h_2\) (so that \(\varsigma +\omega =1\)), and computing the weak infinitesimal generator \(\mathbb {L}\) of the random process \(\{x(t), r(t), t\ge 0\}\) along the trajectories of (1) with the aid of Jensen's inequality, we have

$$\begin{aligned} {{\mathbb {L}V(t,x(t),i)=\sum _{k=1}^{7} \mathbb {L}V_{k}(t,x(t),i)}}, \end{aligned}$$
(13)

where

$$\begin{aligned}&\mathbb {L}{V}_1(t,x(t),i)=2\left[ x(t)-A_i\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] ^T P_i \left[ -A_ix(t)-A_ix(t-\sigma (t))\dot{\sigma }(t)\right. \nonumber \\&\quad \quad +W_{0i}g(x(t))\left. +W_{1i}g(x(t-d_1(t)-d_2(t)))+W_{2i}\int _{t-\rho (t)}^{t}g(x(s))\mathrm {d}s+u(t)\right] \nonumber \\&\quad \quad +\left[ x(t)-A_j\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] ^T\sum _{j=1}^{N}q_{ij}P_j \left[ x(t)-A_j\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] ,\nonumber \\&\quad \le -2x^T(t)P_iA_i x(t)+x^T(t)P_iA_iY_1^{-1}A_i^TP_i\sigma _{\mu } x(t) +x^T(t-\sigma (t))Y_1x(t-\sigma (t))\sigma _{\mu }\nonumber \\&\quad +2x^T(t)P_iW_{0i}g(x(t)) +2x^T(t)P_iW_{1i}g(x(t-d_1(t)-d_2(t)))\nonumber \\&\quad +2x^T(t)P_iW_{2i}\int _{t-\rho (t)}^tg(x(s))\mathrm {d}s +2x^T(t)P_iu(t) +2\int _{t-\sigma (t)}^{t}x^T(s)\mathrm {d}sA_i^TP_iA_i{x}(t)\nonumber \\&\quad +\int _{t-\sigma (t)}^{t}x^{T}(s)\mathrm {d}sA_i^TP_iA_iY_2^{-1}A_iP_iA_i^T\sigma _{\mu } \int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s \nonumber \\&\quad +x^T(t-\sigma (t))Y_2\sigma _{\mu }x(t-\sigma (t))-2\int _{t-\sigma (t)}^tx^T(s)\mathrm {d}sA_i^TP_iW_{0i}g(x(t)) \nonumber \\&\quad -2\int _{t-\sigma (t)}^tx^T(s) \mathrm {d}sA_i^TP_iW_{1i}g(x(t-d_1(t)-d_2(t)))\nonumber \\&\quad -2\int _{t-\sigma (t)}^tx^T(s)\mathrm {d}sA_i^TP_iW_{2i}\int _{t-\rho (t)}^tg(x(s))\mathrm {d}s -2\int _{t-\sigma (t)}^tx^T(s)\mathrm {d}sA_i^TP_iu(t)\nonumber \\&\quad +\left[ x(t)-A_j\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] ^T\sum _{j=1}^{N}q_{ij}P_j\left[ x(t) -A_j\int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s\right] , \end{aligned}$$
(14)
$$\begin{aligned} \mathbb {L}{V}_2(t,x(t),i)\le \,&x^{T}(t)(R_{1}+R_{2}+R_{3}+R_{4}+R_{5})x(t)+g^T(x(t))R_6g(x(t))\nonumber \\&-x^T(t-\sigma (t))R_1x(t-\sigma (t))(1-{\sigma _\mu })-x^T(t-d_{11})R_2x(t-d_{11})\nonumber \\&-x^T(t-d_{12})R_3x(t-d_{12})-x^T(t-d_{1}(t))R_4x(t-d_{1}(t))(1-\varrho _1)\nonumber \\&-x^T(t-d_{2}(t))R_5x(t-d_{2}(t))(1-\varrho _2)\nonumber \\&-g^{T}(x(t-d_{1}(t)-d_{2}(t)))R_6g(x(t-d_{1}(t)-d_{2}(t)))(1-\varrho _{1}-\varrho _{2}),\end{aligned}$$
(15)
$$\begin{aligned} \mathbb {L}{V}_{3}(t,x(t),i)=\,&x^{T}(t)[d_{11}^{2}Q_{1}+d_{12}^{2}Q_{2}]x(t) -d_{11}\int _{t-d_{11}}^{t}x^{T}(s)Q_{1}x(s)\mathrm {d}s\nonumber \\&-d_{12}\int _{t-d_{12}}^{t}x^{T}(s)Q_{2}x(s)\mathrm {d}s,\nonumber \\ \le \,&x^{T}(t)[d_{11}^{2}Q_{1}+d_{12}^{2}Q_{2}]x(t)-\int _{t-d_{11}}^{t}x^{T}(s)\mathrm {d}s Q_{1}\int _{t-d_{11}}^{t}x(s)\mathrm {d}s\nonumber \\&-\int _{t-d_{12}}^{t}x^{T}(s)\mathrm {d}s Q_{2}\int _{t-d_{12}}^{t}x(s)\mathrm {d}s,\end{aligned}$$
(16)
$$\begin{aligned} \mathbb {L}{V}_4(t,x(t),i)\le&\sigma ^2x^T(t)Zx(t)-\int _{t-\sigma (t)}^{t}x^T(s)\mathrm {d}sZ \int _{t-\sigma (t)}^{t}x(s)\mathrm {d}s,\end{aligned}$$
(17)
$$\begin{aligned} \mathbb {L}{V}_{5}(t,x(t),i)=\,&h_{2}^{2}x^{T}(t)J_{1}x(t)+h_{2}^{2}\dot{x}^{T}(t)J_{2}\dot{x}(t)-h_{2} \int _{t-d_{22}}^{t-d_{21}}x^{T}(s)J_{1}x(s)\mathrm {d}s\nonumber \\&-h_{2}\int _{t-d_{22}}^{t-d_{21}}\dot{x}^{T}(s)J_{2}\dot{x}(s) \mathrm {d}s,\nonumber \\ =\,&h_{2}^{2}x^{T}(t)J_{1}x(t)+h_{2}^{2}\dot{x}^{T}(t)J_{2}\dot{x}(t)-h_{2} \int _{t-d_{2}(t)}^{t-d_{21}}x^{T}(s)J_{1}x(s)\mathrm {d}s\nonumber \\&-h_{2} \int _{t-d_{22}}^{t-d_{2}(t)}x^{T}(s)J_{1}x(s)\mathrm {d}s -h_{2}\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)J_{2}\dot{x}(s)\mathrm {d}s\nonumber \\&-h_{2} \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)J_{2}\dot{x}(s)\mathrm {d}s,\nonumber \\ \le \,&h_{2}^{2}x^{T}(t)J_{1}x(t)+h_{2}^{2}\dot{x}^{T}(t)J_{2}\dot{x}(t) -\frac{1}{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}x^{T}(s)\mathrm {d}sJ_{1} \int _{t-d_{2}(t)}^{t-d_{21}}x(s)\mathrm {d}s\nonumber \\&-\frac{1}{\omega }\int _{t-d_{22}}^{t-d_{2}(t)}x^{T}(s)\mathrm {d}sJ_{1} \int _{t-d_{22}}^{t-d_{2}(t)}x(s)\mathrm {d}s -\frac{1}{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}sJ_{2} \nonumber \\&\times \int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s-\frac{1}{\omega } \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}sJ_{2}\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s, \end{aligned}$$
(18)
$$\begin{aligned}&\mathbb {L}{V}_{6}(t,x(t),i)=\frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{1}\dot{x}(t)-\frac{h_{2}^{2}}{2} \int _{-d_{22}}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^{T}(s)S_{1}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \nonumber \\&\qquad +\frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{2}\dot{x}(t) -\frac{h_{2}^{2}}{2}\int _{-d_{22}}^{-d_{21}} \int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta },\nonumber \\&\quad = \frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{1}\dot{x}(t)-\frac{h_{2}^{2}}{2}(d_{22}-d_{2}(t)) \int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)S_{1}\dot{x}(s)\mathrm {d}s\nonumber \\&\quad \quad -\frac{h_{2}^{2}}{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}} \dot{x}^{T}(s)S_{1}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }-\frac{h_{2}^{2}}{2}\int _{-d_{22}}^{-d_{2}(t)} \int _{t+\theta }^{t-d_{2}(t)}\dot{x}^{T}(s)S_{1}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \quad +\frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{2}\dot{x}(t) -\frac{h_{2}^{2}}{2}(d_{2}(t)-d_{21})\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s\nonumber \\&\quad \quad -\frac{h_{2}^{2}}{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } -\frac{h_{2}^{2}}{2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)S_{2} \dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \le \frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{1}\dot{x}(t)-\frac{h_{2}^{2}}{2}\frac{\omega }{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}sS_{1}\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s\nonumber \\&\quad \quad -\frac{1}{\varsigma ^2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{1}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \quad -\frac{1}{\omega ^2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{1}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \quad +\frac{h_{2}^{4}}{4}\dot{x}^{T}(t)S_{2}\dot{x}(t)-\frac{h_{2}^{2}}{2}\frac{\varsigma }{\omega }\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}sS_{2}\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\nonumber \\&\quad \quad -\frac{1}{\varsigma ^2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \quad -\frac{1}{\omega ^2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta },\end{aligned}$$
(19)
$$\begin{aligned}&\mathbb {L}{V}_7(t,x(t),i)\le \rho ^2g^T(x(t))Mg(x(t))-\int _{t-\rho (t)}^{t}g^T(x(s))\mathrm {d}sM\int _{t-\rho (t)}^tg(x(s))\mathrm {d}s. \end{aligned}$$
(20)

From Lemma 2.1, we can deduce that if there exist matrices \(K_1\) and \(K_2\) such that (10) holds, then the integral terms in (18), together with the second and sixth terms in (19), can be bounded as

$$\begin{aligned}&-\frac{1}{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}x^{T}(s)\mathrm {d}sJ_{1}\int _{t-d_{2}(t)}^{t-d_{21}}x(s)\mathrm {d}s-\frac{1}{\omega }\int _{t-d_{22}}^{t-d_{2}(t)}x^{T}(s)\mathrm {d}sJ_{1}\int _{t-d_{22}}^{t-d_{2}(t)}x(s)\mathrm {d}s\nonumber \\&\quad \le - \left[ \begin{array}{c} \int _{t-d_{2}(t)}^{t-d_{21}}x(s)\mathrm {d}s \\ \int _{t-d_{22}}^{t-d_{2}(t)}x(s)\mathrm {d}s \\ \end{array} \right] ^T \left[ \begin{array}{cc} J_{1} &{} K_{1} \\ * &{} J_{1} \\ \end{array} \right] \left[ \begin{array}{c} \int _{t-d_{2}(t)}^{t-d_{21}}x(s)\mathrm {d}s \\ \int _{t-d_{22}}^{t-d_{2}(t)}x(s)\mathrm {d}s \\ \end{array} \right] \end{aligned}$$
(21)

and

$$\begin{aligned}&-\frac{1}{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}sJ_{2}\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s-\frac{1}{\omega }\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}sJ_{2}\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s,\nonumber \\&-\frac{h_{2}^{2}}{2}\frac{\omega }{\varsigma }\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}sS_{1}\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s-\frac{h_{2}^{2}}{2}\frac{\varsigma }{\omega }\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}sS_{2}\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\nonumber \\&\quad \le - \left[ \begin{array}{c} \int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s \\ \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s \\ \end{array} \right] ^T \left[ \begin{array}{cc} J_{2} &{}\quad K_{2} \\ * &{}\quad J_{2} \\ \end{array} \right] \left[ \begin{array}{c} \int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s \\ \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s \\ \end{array} \right] . \end{aligned}$$
(22)

Note that if \(d_{2}(t)=d_{21}\) or \(d_{2}(t)=d_{22}\), we have

$$\begin{aligned} \int _{t-d_{2}(t)}^{t-d_{21}}x(s)\mathrm {d}s=\int _{t-d_{2}(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s=0\quad \text{ or }\quad \int _{t-d_{22}}^{t-d_{2}(t)}x(s)\mathrm {d}s =\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s =0, \end{aligned}$$

respectively. So inequalities (21) and (22) still hold.

Similarly, applying the second-order reciprocally convex combination approach (cf. Definition 2.3 and Remark 3.11) together with (11) to the remaining double integral terms in (19), we obtain the following inequalities:

$$\begin{aligned}&-\frac{1}{\varsigma ^2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{1} \int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&-\frac{1}{\omega ^2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}^{T}(s)\mathrm {d}s \mathrm {d}{\theta }S_{1}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s \mathrm {d}{\theta }\nonumber \\&\quad \le - \left[ \begin{array}{c} \int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ \end{array} \right] ^T \left[ \begin{array}{cc} S_{1} &{}\quad K_{3}+K_{4} \\ * &{}\quad S_{1} \\ \end{array} \right] \left[ \begin{array}{c} \int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \end{array} \right] \end{aligned}$$
(23)

and

$$\begin{aligned}&-\frac{1}{\varsigma ^2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\ {}&-\frac{1}{\omega ^2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)\mathrm {d}s\mathrm {d}{\theta }S_{2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\nonumber \\&\quad \le - \left[ \begin{array}{c} \int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ \int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\end{array} \right] ^T \left[ \begin{array}{cc} S_{2} &{}\quad K_{5}+K_{6} \\ * &{}\quad S_{2} \\ \end{array} \right] \left[ \begin{array}{c} \int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\end{array} \right] . \end{aligned}$$
(24)

It should be noted that when \(d_{2}(t)=d_{21}\) or \(d_{2}(t)=d_{22}\), we have

$$\begin{aligned}&\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } =\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }=0 \quad \text{ or }\\&\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } =\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }=0, \end{aligned}$$

respectively. So the relations (23) and (24) still hold.

Furthermore, based on Assumption 2.1, there exists a positive diagonal matrix U such that the following inequality holds:

$$\begin{aligned} -2g^T(x(t))Ug(x(t))+2x^T(t)ULg(x(t)) \ge 0. \end{aligned}$$
(25)

Combining the inequalities (14)–(25) with the LMIs (10) and (11), it can be seen that

$$\begin{aligned} \mathbb {E}\{\mathbb {L}{V}(t,x(t),i)\!-\!y^T(t)\mathcal {Q}y(t)\!-\!2y^T(t)\mathcal {S}u(t)-u^T(t)(\mathcal {R}-\gamma I)u(t)\}\le \mathbb {E}\{\psi ^T(t){\Omega }^{l}\psi (t)\}, \end{aligned}$$
(26)

where

$$\begin{aligned} \psi ^T(t)=&\bigg [ x^T(t) \ \ x^T(t-\sigma (t)) \ \ g^T(x(t)) \ \ x^T(t-d_{11}) \ \ x^T(t-d_{12}) \ \ x^T(t-d_1(t)) \ \ \\&x^T(t-d_2(t)) \ \ x^T(t-d_{21}) \ \ x^T(t-d_{22}) \ \ g^T(x(t-d_1(t)-d_2(t))) \ \ u^T(t) \ \ \\&\displaystyle \int _{t-d_{11}}^{t}x^T(s)\mathrm {d}s \ \ \displaystyle \int _{t-d_{12}}^{t}x^T(s)\mathrm {d}s \ \ \displaystyle \int _{t-d_{2}(t)}^{t-d_{21}}x^T(s)\mathrm {d}s \ \ \displaystyle \int _{t-d_{22}}^{t-d_{2}(t)}x^T(s)\mathrm {d}s \ \ \\&\displaystyle \int _{t-\sigma (t)}^{t}x^T(s)\mathrm {d}s \ \ \displaystyle \int _{t-\rho (t)}^{t}g^T(x(s))\mathrm {d}s\bigg ],\\ \widehat{\phi }_i^{(l)}=\,&\phi _i-\Sigma _{1l}^T(t)\left[ \begin{array}{cc}S_1 &{}\quad K_3+K_4 \\ * &{}\quad S_1\end{array}\right] \Sigma _{1l}(t)-\Sigma _{2l}^T(t)\left[ \begin{array}{cc}S_2 &{}\quad K_5+K_6 \\ * &{}\quad S_2\end{array}\right] \Sigma _{2l}(t), \end{aligned}$$

and

$$\begin{aligned} \Sigma _{1l}(t)=&\left[ \begin{array}{ccccc} 0_{n,7n} &{}\quad (d_2(t)-d_{21})I_n &{}\quad 0_{n,5n} &{}\quad -I_n &{}\quad 0_{n,3n} \\ 0_{n,6n} &{}\quad (d_{22}-d_2(t))I_n &{}\quad 0_{n,7n} &{}\quad -I_n &{}\quad 0_{n,2n}\end{array} \right] ,\\ \Sigma _{2l}(t)=&\left[ \begin{array}{ccccc} 0_{n,6n} &{}\quad -(d_2(t)-d_{21})I_n &{}\quad 0_{n,6n} &{}\quad I_n &{}\quad 0_{n,3n} \\ 0_{n,8n} &{}\quad -(d_{22}-d_2(t))I_n &{}\quad 0_{n,5n} &{}\quad I_n &{}\quad 0_{n,2n}\end{array} \right] . \end{aligned}$$

Thus (26) can be handled non-conservatively through the two corresponding boundary LMIs in (9): the case \(l=1\) corresponds to \(d_2(t)=d_{21}\) and the case \(l=2\) to \(d_2(t)=d_{22}.\)

Now suppose \(\Omega ^l<0\) for \(l=1,2\); then it is easy to get

$$\begin{aligned} \mathbb {E}\{y^T(t)\mathcal {Q}y(t)+2y^T(t)\mathcal {S}u(t)+u^T(t)\mathcal {R}u(t)\}\ge \mathbb {E}\{\mathbb {L}{V}(t, x(t), i)+\gamma u^T(t)u(t)\}. \end{aligned}$$
(27)

Integrating (27) from 0 to \(t_p\) and taking expectation, under zero initial conditions we obtain

$$\begin{aligned} \mathbb {E}\{ \mathcal {G}(u,y,t_p)\}\ge \mathbb {E}\{\gamma \langle u,u\rangle _{t_p}+{V}(t_p, x(t_p), i)-{V}(0, x(0), i)\} \ge \mathbb {E}\{\gamma \langle u,u\rangle _{t_p}\} \end{aligned}$$
(28)

for all \(t_p\ge 0\). Therefore, the Markovian jump neural network (1) is strictly \((\mathcal {Q}, \mathcal {S}, \mathcal {R})\)-dissipative in the sense of Definition 2.1. This completes the proof. \(\square \)

Remark 3.2

The passivity conditions for system (1) can be obtained by substituting \(\mathcal {Q}=0, \mathcal {S}=I\) and \(\mathcal {R}=2\gamma I\) in Theorem 3.1, which yields the following corollary.
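Indeed, with \(\mathcal {Q}=0, \mathcal {S}=I, \mathcal {R}=2\gamma I\), the supply rate (5) becomes

$$\begin{aligned} \mathcal {G}(u, y, t_p)=2\left\langle y, u\right\rangle _{t_p}+2\gamma \left\langle u, u\right\rangle _{t_p}, \end{aligned}$$

so the dissipation inequality (4) reduces to \(2\mathbb {E}\{\left\langle y, u\right\rangle _{t_p}\}\ge -\gamma \mathbb {E}\{\left\langle u, u\right\rangle _{t_p}\}\), which is exactly the passivity inequality (6).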

Corollary 3.3

The neural network (1) is passive in the sense of Definition 2.2 if there exist positive definite matrices \(P_i(i\in \mathbb {S})\), \(R_s (s=1,2,\cdots ,6)\), \(Q_1, Q_2\), Z, \(J_n(n=1,2)\), \(S_q(q=1,2)\), any matrices \(K_f(f=1,2,\cdots ,6)\), \(Y_1, Y_2\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMIs hold for \(l=1,2\):

$$\begin{aligned}&\tilde{\Omega }^{l}=\left[ \begin{array}{ccc} \tilde{\phi }_i^{(l)} &{} P_iA_i\sqrt{\sigma _{\mu }} &{} A_i^TP_iA_i\sqrt{\sigma _{\mu }}\\ * &{}\quad -Y_1 &{}\quad 0\\ * &{}\quad * &{}\quad -Y_2 \end{array}\right] <0 \end{aligned}$$
(29)
$$\begin{aligned}&\left[ \begin{array}{cc} J_{1} &{}\quad K_{1} \\ * &{}\quad J_{1} \\ \end{array} \right] \ge 0, \left[ \begin{array}{cc} J_{2} &{}\quad K_{2} \\ * &{}\quad J_{2} \\ \end{array} \right] \ge 0, \end{aligned}$$
(30)
$$\begin{aligned}&\left[ \begin{array}{cccc} 2S_{1} &{}\quad 0 &{}\quad K_{3} &{}\quad 0\\ * &{}\quad S_{1} &{}\quad 0 &{}\quad K_{4}\\ * &{}\quad * &{}\quad 2S_{1} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{1} \end{array} \right] >0,\quad \left[ \begin{array}{cccc} 2S_{2} &{}\quad 0 &{}\quad K_{5} &{}\quad 0\\ * &{}\quad S_{2} &{}\quad 0 &{}\quad K_{6}\\ * &{}\quad * &{}\quad 2S_{2} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{2} \end{array} \right] >0, \end{aligned}$$
(31)

where

$$\begin{aligned} \tilde{\phi }_i^{(l)}=(\tilde{\phi }_{p,q,i})_{17 \times 17}, \tilde{\phi }_{p,q,i}={\phi }_{p,q,i} \ \ \forall p,q=1,2,\cdots ,17, \end{aligned}$$

except

$$\begin{aligned} \tilde{\phi }_{3,3,i}=&W_{0i}^T\mathcal {C}W_{0i}+R_6+\rho ^2M-U, \quad \tilde{\phi }_{3,11,i}=W_{0i}^T\mathcal {C}-I, \\ \tilde{\phi }_{11,11,i}=&h_2^2J_2+\frac{h_2^4}{4}S_1+\frac{h_2^4}{4}S_2-\gamma I. \end{aligned}$$

The remaining coefficients are the same as in Theorem 3.1.

Proof

The proof is similar to that of Theorem 3.1 and is hence omitted. \(\square \)

Remark 3.4

In the absence of leakage and distributed delays, system (1) without Markovian jump parameters reduces to the following neural network:

$$\begin{aligned} \dot{x}(t)=&-Ax(t)+W_{0}g(x(t))+W_1g(x(t-d_1(t)-d_2(t)))+u(t),\\ y(t)=\,&g(x(t)),\nonumber \end{aligned}$$
(32)

where \(d_1(t)\) and \(d_2(t)\) are assumed to satisfy \(0\le d_1(t) \le d_{12}\) with \( \dot{d}_1(t)\le \varrho _1<1\) and \(0\le d_2(t) \le d_{22}\) with \(\dot{d}_2(t)\le \varrho _2<1\). By Theorem 3.1, one can obtain the passivity criterion for the above NN (32), as stated in the following corollary.

Corollary 3.5

The neural network (32) is passive in the sense of Definition 2.2 if there exist positive definite matrices P, \(R_3, R_4, R_5, R_6\), \(Q_2\), \(J_n(n=1,2)\), \(S_q(q=1,2)\), a diagonal matrix U and a scalar \(\gamma >0\), such that the following LMI holds:

$$\begin{aligned} \Xi =(\Xi _{ij})_{10 \times 10}<0, \end{aligned}$$
(33)

where

$$\begin{aligned} \Xi _{1,1}=&-PA-A^TP+R_3+R_4+R_5+d_{12}^2Q_2+d_{22}^2J_1-J_2+A^T\mathcal {C}A , \\ \Xi _{1,2}=&PW_0-A^T\mathcal {C}W_0+LU, \ \ \Xi _{1,3}=PW_1-A^T\mathcal {C}W_1, \ \ \Xi _{1,7}=J_2, \ \ \Xi _{1,8}= P-A^T\mathcal {C}, \\ \Xi _{2,2}=&W_0^T\mathcal {C}W_0+R_6-2U, \ \ \Xi _{2,3}=W_0^T\mathcal {C}W_1, \ \ \Xi _{2,8}=W_0^T\mathcal {C}-I, \ \ \\ \Xi _{3,3}=&-R_6(1-\varrho _1-\varrho _2) +W_1^T\mathcal {C}W_1, \ \ \Xi _{3,8}=W_1^T\mathcal {C}, \ \ \Xi _{4,4}=-R_3, \\ \Xi _{5,5}=&-R_4(1-\varrho _1), \ \ \Xi _{6,6}=-R_5(1-\varrho _2), \\ \Xi _{7,7}=&-J_2, \ \Xi _{8,8}=\mathcal {C}-\gamma I, \ \Xi _{9,9}=-Q_2, \ \ \Xi _{10,10}=-J_1,\ \mathcal {C}=d_{22}^2J_2+\frac{d_{22}^4}{4}S_1+\frac{d_{22}^4}{4}S_2. \end{aligned}$$

Proof

By setting \(R_1=R_2=Q_1=Z=M=0\) in the LKF (12) and using arguments similar to those in the proof of Theorem 3.1, we can obtain the passivity result for system (32). \(\square \)

Remark 3.6

When \(d_1(t)=0, d_2(t)=d(t)\) and \(d_{22}=d\), the system (32) reduces to the following form with a single delay

$$\begin{aligned} \dot{x}(t)=&-Ax(t)+W_0g(x(t))+W_1g(x(t-d(t)))+u(t),\\ y(t)=\,&g(x(t)),\nonumber \end{aligned}$$
(34)

where d(t) satisfies \(0\le d(t) \le d, \dot{d}(t)\le \varrho <1\). The passivity criterion for the delayed neural network (34) can be derived as follows:

Corollary 3.7

The neural network (34) is passive in the sense of Definition 2.2 if there exist positive definite matrices P, \( R_5\), \(J_n(n=1,2)\), \(S_1\), diagonal matrices U and V and a scalar \(\gamma >0\), such that the following LMI holds:

$$\begin{aligned} \Phi =&(\Phi _{ij})_{7 \times 7}<0,\\ \Phi _{1,1}=&-PA-A^TP+R_5+d^2J_1-J_1-d^2S_1+ A^T\mathcal {H}A, \ \Phi _{1,2}= PW_0-A^T\mathcal {H}W_0+UL,\nonumber \\ \Phi _{1,3}=&PW_1-A^T\mathcal {H}W_1, \ \Phi _{1,4}=dS_1, \ \Phi _{1,5}= P-A^T\mathcal {H}, \ \Phi _{1,7}=J_1, \ \Phi _{2,2}=W_0^T\mathcal {H}W_0-2U, \nonumber \\ \Phi _{2,3}=&W_0^T\mathcal {H}W_1, \ \Phi _{2,5}=W_0^T\mathcal {H}-I, \ \Phi _{3,3}=W_1^T\mathcal {H}W_1-2V, \ \Phi _{3,5}=W_1^T\mathcal {H}, \ \Phi _{3,6}=LV,\nonumber \\ \Phi _{4,4}=&-J_1-S_1, \ \Phi _{5,5}=\mathcal {H}-\gamma I, \ \Phi _{6,6}=-R_5(1-\varrho ), \ \Phi _{7,7}=-J_1, \ \mathcal {H}= d^2J_2+\frac{d^4}{4}S_1.\nonumber \end{aligned}$$
(35)

Proof

Consider the LKF (12) with \(R_1=R_2=R_3=R_4=R_6=Q_1=Q_2=S_2=0\). Based on Assumption 2.1, we can choose a diagonal matrix V such that the following inequality holds:

$$\begin{aligned} -2g^T(x(t-d(t)))Vg(x(t-d(t)))+2x^T(t-d(t))VLg(x(t-d(t)))\ge 0 \end{aligned}$$
(36)

Adding (36) to (26) and proceeding as in the proof of Theorem 3.1, we can obtain the passivity result for (34). \(\square \)

Remark 3.8

The authors of [33] proposed the second-order reciprocally convex approach to study the stability of systems with interval time-varying delays. Following this idea, in our paper the Jensen inequality is used to partition the double integral terms into single integral terms, so as to arrive at a second-order reciprocally convex combination of positive functions weighted by the inverses of squared convex parameters.

Remark 3.9

Different from [31, 32], in this paper two triple integral terms \(\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{\theta }^{-d_{21}} \int _{t+\lambda }^{t}\dot{x}^T(s) S_1\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) and \(\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{-d_{22}}^{\theta }\int _{t+\lambda }^{t}\dot{x}^T(s) S_2\dot{x}(s)\mathrm {d}s\mathrm {d}\lambda \mathrm {d}\theta \) are included in \(V_6(t,x(t),i)\), which play an important role in reducing the conservatism of the obtained results. These two triple integral terms produce the double integral terms \(\displaystyle \frac{-h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) and \(\displaystyle \frac{-h_2^2}{2}\int _{-d_{22}}^{-d_{21}}\int _{t-d_{22}}^{t+\theta }\dot{x}^T(s)S_2 \dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) in \(\mathbb {L}V_6(t,x(t),i)\), respectively. These double integral terms are further decomposed into terms with three integral parts, namely \(\displaystyle \frac{-h_2^2}{2}(d_{22}-d_2(t))\int _{t-d_2(t)}^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s)\mathrm {d}s\) \(-\displaystyle \frac{h_2^2}{2}\int _{-d_2(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}^T(s)S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) \(-\displaystyle \frac{h_2^2}{2}\int _{-d_{22}}^{-d_2(t)}\int _{t+\theta }^{t-d_2(t)}\dot{x}^T(s) S_1\dot{x}(s) \mathrm {d}s\mathrm {d}\theta \) and \(\displaystyle -\frac{h_{2}^{2}}{2}(d_{2}(t)-d_{21})\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s -\frac{h_{2}^{2}}{2}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}^{T}(s)S_{2}\dot{x}(s)\mathrm {d}s \mathrm {d}{\theta } -\frac{h_{2}^{2}}{2}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}^{T}(s)S_{2} \dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\), respectively, as in (19). Applying Jensen's inequality to these terms then leads to less conservative dissipativity results, as illustrated by the numerical examples in Sect. 4.

Remark 3.10

To reduce the conservatism, the lower bound lemma is used to deal with the derivative of \(V_5(t,x(t),i)\): noting that \(\varsigma +\omega =1\), so that \(\frac{\omega }{\varsigma }=-1+\frac{1}{\varsigma }\) and \(\frac{\varsigma }{\omega }=-1+\frac{1}{\omega }\), the inequality (22) follows from Lemma 2.1 applied to the following inequality.

$$\begin{aligned} 0\ge -\left[ \begin{array}{cc}\sqrt{\frac{\omega }{\varsigma }} \int _{t-d_2(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s \\ -\sqrt{\frac{\varsigma }{\omega }} \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\end{array}\right] ^T \left[ \begin{array}{cc}J_2+\frac{h_2^2}{2}S_1 &{}\quad K_2 \\ * &{}\quad J_2+\frac{h_2^2}{2}S_2\end{array}\right] \left[ \begin{array}{cc}\sqrt{\frac{\omega }{\varsigma }} \int _{t-d_2(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s \\ -\sqrt{\frac{\varsigma }{\omega }} \int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\end{array}\right] . \end{aligned}$$
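To see this explicitly, write \(a=\int _{t-d_2(t)}^{t-d_{21}}\dot{x}(s)\mathrm {d}s\) and \(b=\int _{t-d_{22}}^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\). Expanding the quadratic form above gives

$$\begin{aligned} -\frac{\omega }{\varsigma }a^T\Big (J_2+\frac{h_2^2}{2}S_1\Big )a-\frac{\varsigma }{\omega }b^T\Big (J_2+\frac{h_2^2}{2}S_2\Big )b\le -2a^TK_2b, \end{aligned}$$

and substituting \(\frac{1}{\varsigma }=1+\frac{\omega }{\varsigma }\) and \(\frac{1}{\omega }=1+\frac{\varsigma }{\omega }\) into the left-hand side of (22) yields the stated bound.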

Remark 3.11

To obtain the inequalities (23) and (24), we use the relations \(\frac{1}{\varsigma ^2}=\frac{(\varsigma +\omega )^2}{\varsigma ^2}\) and \(\frac{1}{\omega ^2}=\frac{(\varsigma +\omega )^2}{\omega ^2}\) (valid since \(\varsigma +\omega =1\)) in the following inequalities:

$$\begin{aligned}&0\ge -\left[ \begin{array}{cccc} \sqrt{\frac{\omega }{\varsigma }}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \frac{\omega }{\varsigma }\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ -\sqrt{\frac{\varsigma }{\omega }}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ -\frac{\varsigma }{\omega }\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \end{array} \right] ^T \left[ \begin{array}{cccc} 2S_{1} &{}\quad 0 &{}\quad K_{3} &{}\quad 0\\ * &{}\quad S_{1} &{}\quad 0 &{}\quad K_{4}\\ * &{}\quad * &{}\quad 2S_{1} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{1} \end{array} \right] \\&\quad \times \left[ \begin{array}{cccc} \sqrt{\frac{\omega }{\varsigma }}\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \frac{\omega }{\varsigma }\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ -\sqrt{\frac{\varsigma }{\omega }}\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ -\frac{\varsigma }{\omega }\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \end{array} \right] \\&\small 0\ge -\left[ \begin{array}{cccc} \sqrt{\frac{\omega }{\varsigma }}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \frac{\omega }{\varsigma }\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ -\sqrt{\frac{\varsigma }{\omega }}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ -\frac{\varsigma }{\omega }\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \end{array} \right] ^T \left[ \begin{array}{cccc} 2S_{2} &{}\quad 0 &{}\quad K_{5} &{}\quad 0\\ * &{}\quad S_{2} &{}\quad 0 &{}\quad K_{6}\\ * &{}\quad * &{}\quad 2S_{2} &{}\quad 0\\ * &{}\quad * &{}\quad * &{}\quad S_{2} \end{array} \right] \\&\quad \times \left[ \begin{array}{cccc} \sqrt{\frac{\omega }{\varsigma }}\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ \frac{\omega }{\varsigma }\int _{-d_{2}(t)}^{-d_{21}}\int _{t-d_{2}(t)}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \\ -\sqrt{\frac{\varsigma }{\omega }}\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\\ -\frac{\varsigma }{\omega }\int _{-d_{22}}^{-d_{2}(t)}\int _{t-d_{22}}^{t+\theta }\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta } \end{array} \right] . \end{aligned}$$
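The role of the squared convex parameters becomes transparent upon expansion. For the first inequality, write \(X=\int _{-d_{2}(t)}^{-d_{21}}\int _{t+\theta }^{t-d_{21}}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\) and \(Y=\int _{-d_{22}}^{-d_{2}(t)}\int _{t+\theta }^{t-d_{2}(t)}\dot{x}(s)\mathrm {d}s\mathrm {d}{\theta }\); expanding the quadratic form and using \(\frac{1}{\varsigma ^2}=1+2\frac{\omega }{\varsigma }+\frac{\omega ^2}{\varsigma ^2}\) gives

$$\begin{aligned} \Big (\frac{1}{\varsigma ^2}-1\Big )X^TS_1X+\Big (\frac{1}{\omega ^2}-1\Big )Y^TS_1Y\ge 2X^T(K_3+K_4)Y, \end{aligned}$$

which rearranges directly into (23); the same computation with \(S_2\), \(K_5\) and \(K_6\) gives (24).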

The above approach is very effective in reducing the conservatism of the dissipativity criterion, as will be demonstrated through numerical examples in the next section.

Remark 3.12

Systems with two additive time-varying delays have a strong application background in remote control and networked control. In this paper, \(d_1(t)\) is the time delay induced from sensor to controller and \(d_2(t)\) is the delay induced from controller to actuator. The stability analysis of such systems was earlier carried out by lumping all the successive delays into a single delay to develop a sufficient stability condition. In contrast, in this paper we handle both the lower and upper bounds of the additive delays (i.e. \( 0 \le d_{11}\le d_1(t) \le d_{12}, \ \ 0\le d_{21}\le d_2(t) \le d_{22}, |\dot{d}_1(t)|\le \varrho _1 <1, |\dot{d}_2(t)|\le \varrho _2 <1\)) when obtaining the dissipativity and passivity results for system (1).

4 Numerical Examples

In this section, three numerical examples are given to demonstrate the validity of the developed theoretical results.

Example 4.1

Consider system (1) with the parameters

$$\begin{aligned} A_1=&\left[ \begin{array}{cc} 2.3&{}\quad 0\\ 0 &{}\quad 0.9 \end{array} \right] , \ A_2\!=\! \left[ \begin{array}{cc} 0.9 &{}\quad 0\\ 0 &{}\quad 1.1 \end{array} \right] , \ W_{01}=\left[ \begin{array}{cc} 0.3 &{}\quad 0.2\\ 0.3 &{}\quad -0.2 \end{array} \right] , \ \ W_{02}=\left[ \begin{array}{cc} 0.3 &{}\quad 0.5\\ -0.2 &{}\quad 0.1 \end{array} \right] , \\ W_{11}=&\left[ \begin{array}{cc} 0.2 &{}\quad -0.3\\ 0.4 &{}\quad 0.2 \end{array} \right] , W_{12}=\left[ \begin{array}{cc} 0.3 &{}\quad 0.2\\ -0.3 &{}\quad 0.5 \end{array} \right] , W_{21}=\left[ \begin{array}{cc} 0.5 &{}\quad 0\\ 0 &{}\quad 0.5 \end{array} \right] , W_{22}=\left[ \begin{array}{cc} 0.5 &{}\quad 0\\ 0 &{}\quad 0.5 \end{array} \right] ,\\ Q=&\left[ \begin{array}{cc}-3 &{}\quad 3 \\ 2 &{}\quad -2 \end{array}\right] , \mathcal {Q}=\left[ \begin{array}{cc}7 &{}\quad 0 \\ 0 &{}\quad 7 \end{array}\right] , \mathcal {R}=\left[ \begin{array}{cc}12 &{}\quad 0 \\ 0 &{}\quad 12 \end{array}\right] , \mathcal {S}=\left[ \begin{array}{cc}0.1 &{}\quad -0.1 \\ -0.1 &{}\quad 0.5 \end{array}\right] . \end{aligned}$$

For this model, we take the nonlinear activation functions as \(g_1(x)=0.4\tanh (x)\) and \(g_2(x)=0.8\tanh (x)\). It is easy to see that these activation functions satisfy Assumption 2.1 with \(L=\text{ diag }\{0.4, 0.8\}\). Using the MATLAB LMI toolbox with \(d_{11}=0.3, d_{12}=0.8, d_{21}=0.4, d_{22}=2.9185, \varrho _1=0.2, \varrho _2=0.3, \sigma =0.03, \sigma _{\mu }=0.01\) and \(\rho =0.2\), we can solve the LMIs (9)–(11) in Theorem 3.1 and obtain the corresponding feasible solutions as follows (for space considerations, only some of the variables are listed):

$$\begin{aligned} P_1=&\left[ \begin{array}{cc} 1.6389 &{}\quad -0.0203 \\ -0.0203 &{}\quad 1.8653 \end{array}\right] , P_2=\left[ \begin{array}{cc}1.6325 &{}\quad -0.5896 \\ -0.5896 &{}\quad 1.4563 \end{array}\right] , R_1=\left[ \begin{array}{cc}0.8563 &{}\quad 0.0036\\ 0.0036 &{}\quad 0.8921 \end{array}\right] ,\\ R_6=&\left[ \begin{array}{cc}3.2946 &{}\quad -0.1099\\ -0.1099 &{}\quad 4.0803 \end{array}\right] ,Q_1\!=\!\left[ \begin{array}{cc} 0.7377 &{}\quad -0.0086\\ -0.0086 &{}\quad 0.9671 \end{array}\right] , Z_1\!=\!\left[ \begin{array}{cc}29.4440 &{} \quad 0.1304\\ 0.1304 &{}\quad 9.2254 \end{array}\right] , \\ J_1=&\left[ \begin{array}{cc} 0.0536 &{}\quad -0.0003\\ -0.0003 &{}\quad 0.0635 \end{array}\right] , S_1=\left[ \begin{array}{cc} 0.0064 &{}\quad 0.0002\\ 0.0002 &{}\quad 0.0369\end{array}\right] ,S_2=\left[ \begin{array}{cc} 0.0087&{}\quad 0.0001\\ 0.0001 &{}\quad 0.0928 \end{array}\right] ,\\ K_2=&\left[ \begin{array}{cc} -0.0070 &{}\quad -0.0005\\ -0.0005 &{}\quad -0.0246\end{array}\right] , Y_1=\left[ \begin{array}{cc}2.9645 &{}\quad -0.0035\\ -0.0036 &{}\quad 2.8524 \end{array}\right] ,\\ Y_2=&\left[ \begin{array}{cc}2.8963 &{}\quad -0.0076\\ -0.0076 &{}\quad 2.0463 \end{array}\right] , U=\left[ \begin{array}{cc}0.2463 &{}\quad 0\\ 0 &{}\quad 0.2764\end{array}\right] , \gamma = 13.5632. \end{aligned}$$
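Although the feasible solutions above were computed with the MATLAB LMI toolbox, the feasibility test itself is a standard semidefinite program. The following is a minimal illustrative sketch (not the authors' code) of the same workflow in Python with CVXPY, shown here for the delay-free Lyapunov LMI of the mode-1 drift matrix of (1); the full conditions (9)–(11) would be assembled from the \(17\times 17\) block matrix \(\widehat{\phi }_i^{(l)}\) in the same pattern. The strictness margin and the default solver are assumptions.

```python
# Minimal SDP feasibility sketch (illustrative only; NOT the paper's
# MATLAB LMI toolbox code).  It checks P > 0 and (-A)^T P + P (-A) < 0
# for the mode-1 matrix A_1 of Example 4.1, i.e. a delay-free Lyapunov
# LMI for the drift term of (1).  Theorem 3.1 would instead stack the
# 17x17 block matrix and the auxiliary LMIs (9)-(11) as constraints.
import cvxpy as cp
import numpy as np

A = np.array([[2.3, 0.0],
              [0.0, 0.9]])   # A_1 from Example 4.1
Ac = -A                      # drift matrix of (1) without delay terms
n = A.shape[0]
eps = 1e-6                   # strictness margin (an assumption)

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),                       # P positive definite
    Ac.T @ P + P @ Ac << -eps * np.eye(n),      # Lyapunov LMI
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility test
prob.solve()

print(prob.status)           # 'optimal' means the LMIs are feasible
if prob.status == cp.OPTIMAL:
    print(np.round(P.value, 4))
```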

When we fix \(\varrho =0.5\) (i.e. \(\varrho _1=0.2, \varrho _2=0.3\)), \(\rho =0.2, \sigma _{\mu }=0.01, d_{11}=0.3, d_{21}=0.4, d_{12}=0.7,\) we can obtain the maximum allowable upper bound \(d_{22}\) for various \(\sigma \), as listed in Table 1. Moreover, choosing \(\varrho =0.5\) (i.e. \(\varrho _1=0.2, \varrho _2=0.3\)), \(\sigma =0.03, \sigma _\mu =0.01, \rho =0.2, d_{11}=0.3, d_{21}=0.4,\) the maximum allowable upper bound \(d_{22}\) is computed for various \(d_{12}\) and listed in Table 2.

Furthermore, Fig. 1 shows the state trajectories of x(t) with the initial condition \([-0.2,0.2]^T\) for the additive delays \(d_1(t)=0.5+0.2\sin (t)\) and \(d_2(t)=2.6185+0.3\cos (t)\) in the case of \(\sigma (t)=0.02+0.01\sin (t)\). The state trajectories of x(t) for \(\sigma (t)=0.02+0.01\sin (t), d_1(t)=0.2+0.1\sin (t)\) and \(d_2(t)=2.6762+0.3\cos (t)\) are depicted in Fig. 2. Figure 3 shows the unstable behavior of the state trajectories of x(t) when the leakage delay is enlarged to \(\sigma (t)=0.01+0.5\sin (t)\).

Table 1 Maximum allowable upper bounds of \(d_{22}\) for different \(\sigma\) when \(d_{11}=0.3, d_{21}=0.4, d_{12}=0.7, \varrho=0.5, \sigma_{\mu}=0.01, \rho=0.2\) for Example 4.1
Table 2 Maximum allowable upper bounds of \(d_{22}\) for different \(d_{12}\) when \(d_{11}=0.3, d_{21}=0.4, \varrho=0.5, \sigma=0.03, \sigma_{\mu}=0.01, \rho=0.2\) for Example 4.1
Fig. 1

The state trajectories of system (1) with leakage delay \(\sigma(t)=0.02+0.01\sin(t)\) and additive time-varying delays \(d_1(t)=0.5+0.2\sin(t)\), \(d_2(t)=2.6185+0.3\cos(t)\)

Fig. 2

The state trajectories of system (1) with additive time-varying delays \(d_1(t)=0.2+0.1\sin(t)\), \(d_2(t)=2.6762+0.3\cos(t)\)

Fig. 3

Unstable behavior of system (1) with leakage delay \(\sigma(t)=0.01+0.5\sin(t)\)

Example 4.2

Consider system (32) with two additive time-varying delay components, as in [13–17], with the following matrices:

$$\begin{aligned} A=\left[\begin{array}{cc} 2 & 0\\ 0 & 2 \end{array}\right], \quad W_0=\left[\begin{array}{cc} 1 & 1\\ -1 & -1 \end{array}\right], \quad W_1=\left[\begin{array}{cc} 0.88 & 1\\ 1 & 1 \end{array}\right]. \end{aligned}$$
Table 3 Maximum allowable upper bounds of \(d_{22}\) for different \(d_{21}\) when \(\varrho_1=0.7\) and \(\varrho_2=0.1\) for Example 4.2
Table 4 Maximum allowable upper bounds of \(d_{22}\) for different \(d_{21}\) when \(\varrho_1=0.7\) and \(\varrho_2=0.2\) for Example 4.2

In this example, the activation functions are assumed to be \(g_1(x)=0.4\tanh(x)\) and \(g_2(x)=0.8\tanh(x)\). It is easy to check that they satisfy Assumption 2.1 with \(L=\text{diag}\{0.4,0.8\}\). When \(\varrho=0.8\) (\(\varrho_1=0.7, \varrho_2=0.1\)) and \(\varrho=0.9\) (\(\varrho_1=0.7, \varrho_2=0.2\)), the corresponding upper bounds of \(d_{22}\) for various values of \(d_{21}\) are calculated by Corollary 3.5 and listed in Tables 3 and 4, for comparison with the results obtained in [13–17]. Tables 3 and 4 show that the method proposed in this paper is considerably less conservative than the corresponding methods of [13–17].
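Assumption 2.1 can also be verified numerically: since the derivative of \(\tanh\) lies in (0, 1], the slope of \(g_i\) never exceeds its gain. A small sanity-check sketch (illustrative only, not part of the original example):

```python
# Sketch: numerically confirm the Lipschitz bounds of Assumption 2.1
# for g_1(x) = 0.4 tanh(x), g_2(x) = 0.8 tanh(x).
import numpy as np

L = np.array([0.4, 0.8])                    # claimed Lipschitz constants
g = [lambda x: 0.4 * np.tanh(x), lambda x: 0.8 * np.tanh(x)]

x = np.linspace(-10.0, 10.0, 2001)
for i in range(2):
    # max finite-difference slope over a fine grid; must not exceed L[i]
    slope = np.max(np.abs(np.diff(g[i](x)) / np.diff(x)))
    assert slope <= L[i] + 1e-9, (i, slope)
print("Assumption 2.1 holds with L = diag{0.4, 0.8} on the sampled grid")
```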

When \(u(t)=0\), the state trajectories of x(t) for the delays \(d_1(t)=0.1+0.7\sin(t)\) and \(d_2(t)=2.7325+0.1\cos(t)\), with the initial value \([-0.2, 0.2]^T\), are shown in Fig. 4. In addition, for \(d_1(t)=0.1+0.7\sin(t)\) and \(d_2(t)=2.1982+0.2\cos(t)\), with the same initial value \([-0.2, 0.2]^T\), the state trajectories are depicted in Fig. 5.
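Trajectories such as those in Figs. 4 and 5 can be reproduced by forward Euler integration with a history buffer. The sketch below assumes system (32) has the standard form \(\dot{x}(t) = -Ax(t) + W_0g(x(t)) + W_1g(x(t-d_1(t)-d_2(t)))\) with \(u(t)=0\); since (32) is not restated here, this form is an assumption and the right-hand side should be adapted if (32) differs.

```python
# Hedged sketch: Euler integration of a delayed NN, assuming system (32)
# has the form dx/dt = -A x + W0 g(x) + W1 g(x(t - d1(t) - d2(t))).
import numpy as np

A  = np.array([[2.0, 0.0], [0.0, 2.0]])
W0 = np.array([[1.0, 1.0], [-1.0, -1.0]])
W1 = np.array([[0.88, 1.0], [1.0, 1.0]])
g  = lambda x: np.array([0.4, 0.8]) * np.tanh(x)   # g_i = L_i tanh

d1 = lambda t: 0.1 + 0.7 * np.sin(t)
d2 = lambda t: 2.7325 + 0.1 * np.cos(t)            # Fig. 4 case

h, T = 1e-3, 30.0                  # step size and time horizon
n_hist = int(4.0 / h)              # buffer covering the max delay (< 4)
hist = np.tile([-0.2, 0.2], (n_hist, 1))   # constant initial function

traj = []
for k in range(int(T / h)):
    t = k * h
    lag = int(round((d1(t) + d2(t)) / h))          # delayed-sample index
    x_now, x_del = hist[-1], hist[-1 - lag]
    dx = -A @ x_now + W0 @ g(x_now) + W1 @ g(x_del)
    hist = np.vstack([hist[1:], x_now + h * dx])   # shift history buffer
    traj.append(x_now)
traj = np.asarray(traj)   # plot traj[:, 0], traj[:, 1] against time
```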

Fig. 4

The state trajectories of system (32) with additive time-varying delays \(d_1(t)=0.1+0.7\sin(t)\) and \(d_2(t)=2.7325+0.1\cos(t)\)

Fig. 5

The state trajectories of system (32) with additive time-varying delays \(d_1(t)=0.1+0.7\sin(t)\) and \(d_2(t)=2.1982+0.2\cos(t)\)

Remark 4.3

In [14], the double integral terms \(\displaystyle \int_{-\overline{d}_1}^{0}\int_{\beta}^{0}\dot{z}^T(t+\alpha)Z_1\dot{z}(t+\alpha)\,\mathrm{d}\alpha\,\mathrm{d}\beta\), \(\displaystyle \int_{-\overline{d}}^{-\overline{d}_1}\int_{\beta}^{0}\dot{z}^T(t+\alpha)Z_2\dot{z}(t+\alpha)\,\mathrm{d}\alpha\,\mathrm{d}\beta\) and \(\displaystyle \int_{-\overline{d}}^{0}\int_{\beta}^{0}\dot{z}^T(t+\alpha)M\dot{z}(t+\alpha)\,\mathrm{d}\alpha\,\mathrm{d}\beta\) are included in the LKF, and the Jensen inequality is employed to derive the results. Although similar double integral terms are considered in our paper, the linear convex combination approach of Lemma 2.1, which makes use of positive functions weighted by the inverses of squared convex parameters, is applied instead; this is what yields the improved results of this paper. In [16] and [17], the stability results are obtained by including the triple integral term \(\displaystyle \int_{-d}^{0}\int_{\theta}^{0}\int_{t+\lambda}^{t}\dot{z}^T(s)Z_5\dot{z}(s)\,\mathrm{d}s\,\mathrm{d}\lambda\,\mathrm{d}\theta\) in the LKF without using Lemma 2.1. In our paper, by contrast, Lemma 2.1, based on the second-order reciprocally convex approach, is used to handle several kinds of function combinations arising from the derivative of the triple integral terms considered in \(V_6(t, x(t), i)\). It should be pointed out that the results of Theorem 3.1, obtained via the second-order reciprocally convex combination approach, are considerably better than those of [16] and [17], as the numerical simulations readily show.
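For reference, the Jensen inequality invoked in [14] bounds an integral of a quadratic form by a quadratic form of the integral; in its single-integral version (the double integral terms above are handled by its iterated analogue), for any matrix \(Z=Z^T>0\) and \(h>0\),

$$\begin{aligned} -\int_{t-h}^{t}\dot{z}^T(s)Z\dot{z}(s)\,\mathrm{d}s \le -\frac{1}{h}\left(\int_{t-h}^{t}\dot{z}(s)\,\mathrm{d}s\right)^{T} Z\left(\int_{t-h}^{t}\dot{z}(s)\,\mathrm{d}s\right) = -\frac{1}{h}\big(z(t)-z(t-h)\big)^{T}Z\big(z(t)-z(t-h)\big). \end{aligned}$$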

Table 5 Maximum allowable upper bounds of d for different \(\varrho\) for Example 4.4
Fig. 6

The state trajectories of system (34) with time-varying delay \(d(t)=5.7305+0.2\sin^2(t)\)

Example 4.4

Consider the neural networks (34), as discussed in [36–39], with the following parameters:

$$\begin{aligned} A=\left[\begin{array}{cc} 1.5 & 0\\ 0 & 0.7 \end{array}\right], \quad W_0=\left[\begin{array}{cc} 0.0503 & 0.0454\\ 0.0987 & 0.2075 \end{array}\right], \quad W_1=\left[\begin{array}{cc} 0.2381 & 0.9320\\ 0.0388 & 0.5062 \end{array}\right]. \end{aligned}$$

The activation functions are assumed to satisfy Assumption 2.1 with \(g_1(x)=0.3\tanh(x)\), \(g_2(x)=0.8\tanh(x)\), and hence \(L=\text{diag}\{0.3, 0.8\}\). Using Corollary 3.7 with the MATLAB LMI Toolbox, the maximum allowable upper bounds of the time-varying delay d(t) are computed for different values of \(\varrho\), as given in Table 5. Furthermore, the computed upper bounds are compared with the existing ones of [36–39]. Figure 6 shows the state curves for the delay \(d(t)=5.7305+0.2\sin^2(t)\) with \(\varrho=0.4\) and the initial condition \([-0.1, 0.1]^T\).

Remark 4.5

It should be noted that in [36] the free-weighting-matrix method is used to obtain the theoretical results. With this approach, no model transformations or bounding techniques for cross-terms are applied, but it may lead to computational complexity. Further, in [37] the integral terms \(-\int_{t-h}^{t}z^T(s)S_1z(s)\,\mathrm{d}s\) and \(-\int_{t-h}^{t}\dot{z}^T(s)S_2\dot{z}(s)\,\mathrm{d}s\) in the LKF are split into the pairs \(-\int_{t-\tau(t)}^{t}z^T(s)S_1z(s)\,\mathrm{d}s\), \(-\int_{t-h}^{t-\tau(t)}z^T(s)S_1z(s)\,\mathrm{d}s\) and \(-\int_{t-\tau(t)}^{t}\dot{z}^T(s)S_2\dot{z}(s)\,\mathrm{d}s\), \(-\int_{t-h}^{t-\tau(t)}\dot{z}^T(s)S_2\dot{z}(s)\,\mathrm{d}s\), respectively. Using these integral terms and the relationship among \(\tau(t)\), \(h-\tau(t)\) and \(h\), stability results are derived for the neural networks in [37], and the resulting conditions are shown to be less conservative than those obtained in [36]. Different from [36] and [37], in [38] a new type of double integral term \(\frac{h}{2}\int_{-\frac{h}{2}}^{0}\int_{t+\theta}^{t}\dot{z}^T(s)Q_1\dot{z}(s)\,\mathrm{d}s\,\mathrm{d}\theta\) is included in the LKF, which reduces after differentiation to the single integral term \(-\frac{h}{2}\int_{t-\frac{h}{2}}^{t}\dot{z}^T(s)Q_1\dot{z}(s)\,\mathrm{d}s\); this single integral term is then bounded via a convex optimization approach in [38], which produces less conservatism than [36] and [37]. On the other hand, in [39] triple and quadruple integrals are introduced, giving better results than those of [36–38]. Unlike [36–39], in our paper the integral term in \(V_5(t, x(t), i)\) is used to derive the less conservative passivity results.

5 Conclusion

In this paper, dissipativity and passivity have been investigated for Markovian jump neural networks with two additive time-varying delays and a leakage time-varying delay, based on the second-order reciprocally convex approach. By combining this approach with an augmented Lyapunov–Krasovskii functional, a novel dissipativity criterion for the system under consideration has been established. Variations of the lower bound lemma are introduced to handle several kinds of function combinations arising from the triple integral terms in the derivation of the LMI conditions. Finally, three illustrative numerical examples are provided to show the effectiveness of the proposed method.