1 Introduction

Neural networks (NNs) have attracted the attention of many researchers because of their wide applications in practical engineering, such as combinatorial optimization, moving image processing, and signal processing. Stability analysis of dynamical neural network models has become an issue of growing interest, due to its important role in solving engineering problems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. In the process of studying the stability of NNs, two problems are often encountered: parameter uncertainty and time delays. On the one hand, parameters are uncertain in practical engineering problems; on the other hand, time delays inevitably occur due to the finite switching speed of amplifiers [17]. Therefore, it is necessary to study the stability of NNs with uncertain parameters and time delays.

In the past years, many excellent results on dynamical neural networks have been reported; see [17,18,19,20] and the references therein. In [17,18,19,20], the stability results were obtained in the form of LMIs. However, it is well known that LMI conditions can be complicated and difficult to verify. At the same time, researchers have derived simpler stability conditions in the form of matrix norm inequalities. Shao et al. derived stability conditions in the form of matrix norm inequalities for uncertain neural networks with discrete delays, but without distributed delays [22,23,24,25,26,27]. Some researchers investigated the dissipativity of interval neural networks with discrete delay and discontinuous activations [28, 29]. It should be noted that neural networks have a spatial extent because of the presence of many parallel pathways with a variety of axon sizes and lengths. Therefore, there will be a distribution of propagation delays over a certain period of time. In some practical engineering applications, distributed delays are introduced into dynamical systems, for example in the feeding system and combustion chamber of a liquid mono-propellant rocket motor with pressure feeding, and in filter design in signal processing. In recent years, there has been growing interest in the stability of neural networks with discrete and distributed delays [30,31,32,33,34,35]. In order to reduce conservatism, many approaches have been proposed, for instance, constructing novel Lyapunov–Krasovskii functionals, the free-weighting matrix method, the delay decomposition method, and the model transformation method. Note that the systems in [30,31,32,33,34,35] were not uncertain systems, and the stability results there were given in the form of LMIs. It is well known that the presence of uncertainty increases the difficulty of the analysis. Up to now, few researchers have focused on uncertain neural networks with discrete and distributed delays via the homeomorphism mapping theorem. Hence, there is still room to improve the existing stability results using the homeomorphism mapping theorem.

Motivated by the above discussions, in this paper we study the robust stability problem for uncertain neural networks with mixed delays. The main contribution of this paper is summarized as follows. Employing the homeomorphism mapping theorem, we derive novel robust stability conditions for uncertain neural networks with mixed delays. The results are expressed in the form of matrix norm inequalities and are less conservative than existing ones, as illustrated by the numerical examples given later.

Notations In this paper, we use the following notations. \(R^{n}\) is the n-dimensional Euclidean space; \(R^{n\times m }\) is the set of \(n\times m\) real matrices; I is the identity matrix; \(\Vert \cdot \Vert\) denotes a vector or matrix norm; \(P>0\ (\ge 0)\) means that P is a positive definite (nonnegative definite) matrix; the superscript '\(T\)' denotes the transpose of a vector or matrix; \(|x|=(|x_{1}|,|x_{2}|,\ldots ,|x_{n}|)^{T}\), where \(x=(x_{1},x_{2},\ldots ,x_{n})^{T}\); \(|A|=(|a_{ij}|)_{n \times n}\), where \(A=(a_{ij})_{n \times n}\); \(\lambda _{m}(\cdot )\) denotes the minimum eigenvalue of a matrix.

2 Preliminaries

We consider the following uncertain neural network with discrete and distributed delays:

$$\begin{aligned} \dot{x}_{i}(t)=-c_{i}x_{i}(t)+\sum _{j=1}^{n}a_{ij}f_{j}(x_{j}(t))+\sum _{j=1}^{n}b_{ij}f_{j}(x_{j}(t-\tau _{j})) +\sum _{j=1}^{n}d_{ij}\int _{t-\sigma }^{t}f_{j}(x_{j}(s))ds+u_{i}, i=1,2,\ldots ,n. \end{aligned}$$
(1)

which can be written in vector–matrix form as:

$$\begin{aligned} \dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau ))+D\int _{t-\sigma }^{t}f(x(s))ds+U, \end{aligned}$$
(2)

where \(x(t)=(x_{1}(t),x_{2}(t),\ldots ,x_{n}(t))^{T}\) is the neuron state vector; \(C=diag(c_{1},c_{2},\ldots ,c_{n})>0\); \(A=(a_{ij})_{n\times n}\) is the interconnection weight matrix, while \(B=(b_{ij})_{n\times n}\) and \(D=(d_{ij})_{n\times n}\) are the delayed interconnection weight matrices; \(f(x(t))=(f_{1}(x_{1}(t)),f_{2}(x_{2}(t)),\ldots ,f_{n}(x_{n}(t)))^{T}\in R^{n}\) represents the neuron activations; \(f(x(t-\tau ))=(f_{1}(x_{1}(t-\tau _{1})),f_{2}(x_{2}(t-\tau _{2})),\ldots ,f_{n}(x_{n}(t-\tau _{n})))^{T}\in R^{n}\); \(U=(u_{1},u_{2},\ldots ,u_{n})^{T}\) is a constant input vector; \(\tau =(\tau _{1},\tau _{2},\ldots ,\tau _{n})^{T}\) collects the discrete delays; \(\sigma\) is the distributed delay.

The neuron activation functions \(f_{i}(x_{i})\) satisfy the following assumption:

$$\begin{aligned} 0\le \frac{f_{i}(x_{i})-f_{i}(y_{i})}{x_{i}-y_{i}}\le l_{i}, \quad i=1,2,\ldots , n, \quad \forall x_{i},y_{i}\in R, \end{aligned}$$
(3)

where \(l_{i}\ (i=1,2,\ldots ,n)\) are known positive constants.
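
For intuition (an illustration we add; the original text does not fix a particular activation), a standard example satisfying the sector condition (3) is the piecewise-linear function \(f_{i}(x)=\frac{l_{i}}{2}(|x+1|-|x-1|)\). The short Python sketch below spot-checks (3) for this choice with \(l_{i}=0.2\):

```python
import numpy as np

def f(x, l=0.2):
    """Piecewise-linear activation: slope l on [-1, 1], slope 0 outside."""
    return 0.5 * l * (np.abs(x + 1.0) - np.abs(x - 1.0))

# numerical spot-check of the sector condition 0 <= (f(x)-f(y))/(x-y) <= l
rng = np.random.default_rng(0)
x, y = rng.uniform(-5, 5, 10000), rng.uniform(-5, 5, 10000)
keep = x != y
q = (f(x[keep]) - f(y[keep])) / (x[keep] - y[keep])
print(q.min() >= -1e-12, q.max() <= 0.2 + 1e-12)   # both True
```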

The uncertain parameters in the system (2) satisfy the following assumptions:

$$\begin{aligned} \begin{array}{ll} &{}C_{I}:=\{C=diag(c_i):0<\underline{C}\le C\le \overline{C},\text {i.e.},0<\underline{c}_i\le c_i\le \overline{c}_i,\ \quad \forall i=1,2,\ldots ,n\},\\ &{}A_{I}:=\{A=(a_{ij}):\underline{A}\le A\le \overline{A},\text {i.e.},\underline{a}_{ij}\le a_{ij}\le \overline{a}_{ij}, \quad i,j=1,2,\ldots ,n\},\\ &{}B_{I}:=\{B=(b_{ij}):\underline{B}\le B\le \overline{B},\text {i.e.},\underline{b}_{ij}\le b_{ij}\le \overline{b}_{ij}, \quad i,j=1,2,\ldots ,n\},\\ &{}D_{I}:=\{D=(d_{ij}):\underline{D}\le D\le \overline{D},\text {i.e.},\underline{d}_{ij}\le d_{ij}\le \overline{d}_{ij}, \quad i,j=1,2,\ldots ,n\}, \end{array} \end{aligned}$$
(4)

where \(\underline{C}=diag(\underline{c}_{1},\underline{c}_{2},\ldots ,\underline{c}_{n})\), \(\overline{C}=diag(\overline{c}_{1},\overline{c}_{2},\ldots ,\overline{c}_{n})\), \(\underline{A}=(\underline{a}_{ij})_{n\times n}\), \(\overline{A}=(\overline{a}_{ij})_{n\times n}\), \(\underline{B}=(\underline{b}_{ij})_{n\times n}\), \(\overline{B}=(\overline{b}_{ij})_{n\times n}\), \(\underline{D}=(\underline{d}_{ij})_{n\times n}\), \(\overline{D}=(\overline{d}_{ij})_{n\times n}\).

Denote

$$\begin{aligned} \begin{array}{ll} &{}A^{*}=\frac{1}{2}(\overline{A}+\underline{A}),\,A_{*}=\frac{1}{2}(\overline{A}-\underline{A}),\\ &{}B^{*}=\frac{1}{2}(\overline{B}+\underline{B}),\,B_{*}=\frac{1}{2}(\overline{B}-\underline{B}),\\ &{}D^{*}=\frac{1}{2}(\overline{D}+\underline{D}),\,D_{*}=\frac{1}{2}(\overline{D}-\underline{D}). \end{array} \end{aligned}$$
(5)
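
For later numerical use, the midpoint/radius decomposition (5) involves only elementary arithmetic; the sketch below (our own helper function, with the \(A\)-bounds of Example 2 below used as test data) illustrates it:

```python
import numpy as np

def midpoint_radius(M_low, M_up):
    """Return (M*, M_*) = ((up + low)/2, (up - low)/2) for an interval matrix [low, up]."""
    M_low, M_up = np.asarray(M_low, dtype=float), np.asarray(M_up, dtype=float)
    return 0.5 * (M_up + M_low), 0.5 * (M_up - M_low)

A_low = np.array([[0.10, 0.10], [0.00, 0.10]])   # \underline{A} of Example 2
A_up  = np.array([[0.50, 0.10], [0.01, 0.30]])   # \overline{A} of Example 2
A_star, A_sub = midpoint_radius(A_low, A_up)
print(A_star)   # [[0.3, 0.1], [0.005, 0.2]]
print(A_sub)    # [[0.2, 0.0], [0.005, 0.1]]
```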

We will use the following vector norms and a matrix norm in this paper:

$$\begin{aligned} \Vert x\Vert _{1}=\sum _{i=1}^n|x_{i}| , \Vert x\Vert _{2}=\left\{ \sum _{i=1}^n x_{i}^2\right\} ^{\frac{1}{2}} , \Vert x\Vert _{\infty }=\max \limits _{1\le i\le n}|x_{i}|, \Vert A\Vert _{2}=[\lambda _{\max }(A^{T}A)]^{\frac{1}{2}}, \end{aligned}$$

where \(x=(x_{1},x_{2},\ldots ,x_{n})^{T}\) is a vector and \(A=(a_{ij})_{n \times n}\) is a real matrix.

Some useful Lemmas for the main results are stated as follows.

Lemma 2.1

[24] The map \(H(x): R^{n}\rightarrow R^{n}\) is a homeomorphism if H(x) satisfies the following conditions:

  1. (i)

    H(x) is injective, that is, \(H(x)\ne H(y)\) for all \(x\ne y\);

  2. (ii)

    H(x) is proper, that is, \(\Vert H(x)\Vert \rightarrow + \infty\) as \(\Vert x\Vert \rightarrow + \infty\).

Lemma 2.2

[18] For any vectors \(x,y \in R^{n}\) and any positive definite matrix \(G\in R^{n\times n}\), the following inequality holds:

$$\begin{aligned} 2x^{T}y\le x^{T}Gx+y^{T}G^{-1}y. \end{aligned}$$
(6)

Lemma 2.3

[23] If A is a real matrix satisfying \(A\in A_{I}=[\underline{A},\overline{A}]\), then, for any \(x\in R^{n}\), any positive diagonal matrix P, and any nonnegative diagonal matrix \(\Gamma\), the following inequality holds:

$$\begin{aligned} x^{T}(PA+A^{T}P)x\le x^{T}[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}]x. \end{aligned}$$
(7)

Lemma 2.4

[36] For real matrices A, B satisfying \(A\in A_{I}=[\underline{A},\overline{A}], B\in B_{I}=[\underline{B},\overline{B}]\), there exist positive constants \(h_{1},h_{2}\) such that

$$\begin{aligned} \Vert A\Vert _{2}\le h_{1}, \Vert B\Vert _{2}\le h_{2}. \end{aligned}$$
(8)

Lemma 2.5

[17] If B is a real matrix defined by \(B\in B_{I}=[\underline{B},\overline{B}]\), then, for any positive diagonal matrix \(P=diag(p_{1},p_{2},\ldots ,p_{n})>0\) and for any two real vectors \(x\in R^{n}, y\in R^{n}\), the following inequality holds:

$$\begin{aligned} 2 x^{T}PBy\le \rho p_{M}x^{T}x+\frac{p_{M}}{\rho }y^{T}R y, \end{aligned}$$
(9)

where \(p_{M}=\max \{p_{i}\}\), \(\rho\) is any positive constant, and \(R=diag(r_{i})\ge 0\) with \(r_{i}=\sum \nolimits _{k=1}^{n}\widehat{b}_{ki}\sum \nolimits _{j=1}^{n}\widehat{b}_{kj}\) and \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}\ (i,j=1,2,\ldots ,n)\).
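
To make the double summation in Lemma 2.5 concrete, the following sketch (our own helper function, not from the source) assembles \(\widehat{B}\) and \(R=diag(r_{i})\) from the interval bounds; applied to the \(B\)-bounds of Example 2 below, it returns the \(R=diag(0.0225,0.0344)\) used there:

```python
import numpy as np

def R_matrix(B_low, B_up):
    """R = diag(r_i), r_i = sum_k bhat_ki * (sum_j bhat_kj), bhat_ij = max(|low_ij|, |up_ij|)."""
    Bhat = np.maximum(np.abs(B_low), np.abs(B_up))   # entry-wise \hat{b}_{ij}
    row_sums = Bhat.sum(axis=1)                      # sum_j bhat_kj, one value per row k
    return np.diag(Bhat.T @ row_sums)                # r_i = sum_k bhat_ki * row_sums[k]

B_low = np.array([[0.08, 0.10], [0.00, 0.08]])       # \underline{B} of Example 2
B_up  = np.array([[0.08, 0.12], [0.05, 0.08]])       # \overline{B} of Example 2
print(R_matrix(B_low, B_up))                          # diag(0.0225, 0.0344)
```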

3 Main results

3.1 Existence and uniqueness of equilibrium point

Theorem 3.1

For the neural network (2), suppose the activation functions satisfy (3) and the coefficient matrices satisfy (4). The system (2) has a unique equilibrium point if there exist a positive diagonal matrix \(P=diag(p_{i})>0\), a nonnegative diagonal matrix \(\Gamma =diag(\nu _{i})\ge 0\) and two positive constants \(\rho , \mu\) such that the following inequality holds:

$$\begin{aligned} \begin{array}{ll} \Omega &{}=2\underline{C}PL^{-1} -[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}I]\\ &{}\quad -p_{M}(\rho I+\frac{1}{\rho }R) -\sigma p_{M}(\mu I+\frac{1}{\mu }Q)>0, \end{array} \end{aligned}$$
(10)

where \(L=diag(l_{i})>0,\) \(p_{M}=\max \{p_{i}\},\) \(R=diag(r_{i})>0,\) \(Q=diag(q_{i})>0\) with \(r_{i}=\sum \limits _{k=1}^{n}\widehat{b}_{ki}\sum \limits _{j=1}^{n}\widehat{b}_{kj},\) \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\},\)

\(q_{i}=\sum \limits _{k=1}^{n}\widehat{d}_{ki}\sum \limits _{j=1}^{n}\widehat{d}_{kj},\) \(\widehat{d}_{ij}=\max \{|\underline{d}_{ij}|,|\overline{d}_{ij}|\}\).
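
Since condition (10) involves only elementary matrix arithmetic, it can be checked directly once \(P,\Gamma ,\rho ,\mu\) and the delay bound \(\sigma\) are chosen. The sketch below is our own illustrative implementation (the function name and interface are invented here); it returns the symmetrized \(\Omega\), whose smallest eigenvalue should be positive.

```python
import numpy as np

def omega(C_low, A_low, A_up, B_low, B_up, D_low, D_up, L, P, Gamma, rho, mu, sigma):
    """Assemble Omega of condition (10); a sketch with our own interface."""
    def diag_rq(M_low, M_up):                       # R (or Q) built as in Lemma 2.5
        H = np.maximum(np.abs(M_low), np.abs(M_up))
        return np.diag(H.T @ H.sum(axis=1))
    A_star, A_sub = 0.5 * (A_up + A_low), 0.5 * (A_up - A_low)
    R, Q = diag_rq(B_low, B_up), diag_rq(D_low, D_up)
    pM, n = P.diagonal().max(), P.shape[0]
    M1 = P @ (A_star - Gamma) + (A_star - Gamma).T @ P
    M2 = P @ (A_sub + Gamma) + (A_sub + Gamma).T @ P
    term_A = M1 + np.linalg.norm(M2, 2) * np.eye(n)
    Om = (2.0 * C_low @ P @ np.linalg.inv(L) - term_A
          - pM * (rho * np.eye(n) + R / rho)
          - sigma * pM * (mu * np.eye(n) + Q / mu))
    return 0.5 * (Om + Om.T)                        # symmetrize before the eigenvalue test

# condition (10) holds if np.linalg.eigvalsh(omega(...)).min() > 0
```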

Proof

We can get the map associated with system (2):

$$\begin{aligned} H(x)=-Cx+Af(x)+Bf(x)+\sigma Df(x)+U. \end{aligned}$$
(11)

For any \(x\ne y, x,y\in R^{n}\), we have:

$$\begin{aligned} H(x)-H(y)=-C(x-y)+A(f(x)-f(y))+B(f(x)-f(y))+\sigma D(f(x)-f(y)). \end{aligned}$$
(12)

The condition \(x\ne y\) (\(x,y\in R^{n}\)) covers two cases:

  1. case (1)

    when \(x\ne y\), \(f(x)-f(y)=0\);

  2. case (2)

    when \(x\ne y\), \(f(x)-f(y)\ne 0\).

For case (1), one gets

$$\begin{aligned} H(x)-H(y)=-C(x-y). \end{aligned}$$

Obviously, \(H(x)\ne H(y)\) because of the positive diagonal matrix C.

For case (2), multiplying both sides of (12) by \(2(f(x)-f(y))^{T}P\) yields:

$$\begin{aligned} \begin{array}{ll} 2(f(x)-f(y))^{T}P(H(x)-H(y))&{}=-2(f(x)-f(y))^{T}PC(x-y) +2(f(x)-f(y))^{T}PA(f(x)-f(y))\\ &{} \quad +2(f(x)-f(y))^{T}PB(f(x)-f(y)) +2\sigma (f(x)-f(y))^{T}P D(f(x)-f(y)). \end{array} \end{aligned}$$
(13)

From the assumption (3) and \(C\ge \underline{C}\), one obtains

$$\begin{aligned} \begin{array}{l} -2(f(x)-f(y))^{T}PC(x-y)\le -2(f(x)-f(y))^{T}\underline{C}PL^{-1}(f(x)-f(y)), \end{array} \end{aligned}$$
(14)

In the light of Lemma 2.3,

$$\begin{aligned} \begin{array}{ll} 2(f(x)-f(y))^{T}PA(f(x)-f(y))&{}=(f(x)-f(y))^{T}(PA+A^{T}P)(f(x)-f(y))\\ &{}\le (f(x)-f(y))^{T}[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P\\ &{}\quad +\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}](f(x)-f(y)). \end{array} \end{aligned}$$
(15)

According to Lemma 2.5, one has

$$\begin{aligned} \begin{array}{ll} 2(f(x)-f(y))^{T}PB(f(x)-f(y))&{}\le \rho p_{M}(f(x)-f(y))^{T}(f(x)-f(y))\\ &{}\quad +\frac{p_{M}}{\rho }(f(x)-f(y))^{T}R (f(x)-f(y)), \end{array} \end{aligned}$$
(16)
$$\begin{aligned} \begin{array}{ll} 2\sigma (f(x)-f(y))^{T}P D(f(x)-f(y))&{}\le \sigma \mu p_{M}(f(x)-f(y))^{T}(f(x)-f(y))\\ &{}\quad+\sigma \frac{p_{M}}{\mu }(f(x)-f(y))^{T}Q (f(x)-f(y)), \end{array} \end{aligned}$$
(17)

Substituting (14)–(17) into (13),

$$\begin{aligned} \begin{array}{ll} 2(f(x)-f(y))^{T}P(H(x)-H(y))&{}\le -2(f(x)-f(y))^{T}\underline{C}PL^{-1}(f(x)-f(y))\\ &{}\quad +(f(x)-f(y))^{T}[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P\\ &{}\quad +\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}](f(x)-f(y))\\ &{}\quad +\rho p_{M}(f(x)-f(y))^{T}(f(x)-f(y))\\ &{}\quad +\frac{p_{M}}{\rho }(f(x)-f(y))^{T}R (f(x)-f(y))\\ &{}\quad +\sigma \mu p_{M}(f(x)-f(y))^{T}(f(x)-f(y))\\ &{}\quad +\sigma \frac{p_{M}}{\mu }(f(x)-f(y))^{T}Q (f(x)-f(y)), \end{array} \end{aligned}$$
(18)

that is,

$$\begin{aligned} 2(f(x)-f(y))^{T}P(H(x)-H(y))\le -(f(x)-f(y))^{T}\Omega (f(x)-f(y)). \end{aligned}$$
(19)

Since \(f(x)-f(y)\ne 0\) in case (2) and \(\Omega >0\), it follows that

$$\begin{aligned} 2(f(x)-f(y))^{T}P(H(x)-H(y))< 0. \end{aligned}$$
(20)

This means that \(H(x)\ne H(y)\).

Hence, \(H(x)\ne H(y)\) in both case (1) and case (2), i.e., for all \(x\ne y\), so H(x) is injective.

Letting \(y=0\) in (19),

$$\begin{aligned} \begin{array}{l} 2(f(x)-f(0))^{T}P(H(x)-H(0))\le -(f(x)-f(0))^{T}\Omega (f(x)-f(0)) \le - \lambda _{m}(\Omega )\Vert (f(x)-f(0))\Vert _{2}^{2}. \end{array} \end{aligned}$$
(21)

It yields

$$\begin{aligned} \begin{array}{l} |2(f(x)-f(0))^{T}P(H(x)-H(0))|\ge \lambda _{m}(\Omega )\Vert (f(x)-f(0))\Vert _{2}^{2}, \end{array} \end{aligned}$$
(22)

where \(\lambda _{m}(\Omega )\) is the minimum eigenvalue of \(\Omega\).

It follows that

$$\begin{aligned} \begin{array}{l} 2p_{M}\Vert f(x)-f(0)\Vert _{\infty }\Vert H(x)-H(0)\Vert _{1} \ge \lambda _{m}(\Omega )\Vert (f(x)-f(0))\Vert _{2}^{2}, \end{array} \end{aligned}$$
(23)

furthermore,

$$\begin{aligned} \begin{array}{l} 2p_{M}\Vert H(x)-H(0)\Vert _{1} \ge \lambda _{m}(\Omega )\Vert (f(x)-f(0))\Vert _{2}, \end{array} \end{aligned}$$
(24)

owing to \(\Vert f(x)-f(0)\Vert _{\infty }\le \Vert (f(x)-f(0))\Vert _{2}\).

It is clear that the formulas \(\Vert H(x)-H(0)\Vert _{1}\le \Vert H(x)\Vert _{1}+\Vert H(0)\Vert _{1}\) and \(\Vert (f(x)-f(0))\Vert _{2}\ge \Vert f(x)\Vert _{2}-\Vert f(0)\Vert _{2}\) hold. Therefore, (24) can be written

$$\begin{aligned} \begin{array}{l} \Vert H(x)\Vert _{1} \ge \frac{[\lambda _{m}(\Omega )\Vert f(x)\Vert _{2}-\lambda _{m}(\Omega )\Vert f(0)\Vert _{2}-2p_{M}\Vert H(0)\Vert _{1}]}{2p_{M}}. \end{array} \end{aligned}$$
(25)

Because \(\Vert f(0)\Vert _{2},\Vert H(0)\Vert _{1},p_{M}\) are finite, it follows that \(\Vert H(x)\Vert _{1}\rightarrow \infty\) as \(\Vert f(x)\Vert _{2}\rightarrow \infty\). According to the above analysis, \(H(x):R^{n}\rightarrow R^{n}\) is a homeomorphism map on \(R^{n}\). Thus, we conclude that (2) has a unique \(x^{*}\) such that \(H(x^{*})=0\). It means that the equilibrium point for the system (2) is unique. \(\square\)

3.2 Stability analysis of equilibrium point

We shift the equilibrium point of system (2) to the origin by the transformation \(z(t)=x(t)-x^{*}\), where \(x(t)=(x_{1}(t),x_{2}(t),\ldots ,x_{n}(t))^{T}, x^{*}=(x^{*}_{1},x^{*}_{2},\ldots ,x^{*}_{n})^{T}\), and \(x^{*}\) is the equilibrium point of neural network (2). Then, system (2) can be written in the form:

$$\begin{aligned} \dot{z}(t)=-Cz(t)+Ag(z(t))+Bg(z(t-\tau ))+D\int _{t-\sigma }^{t}g(z(s))ds, \end{aligned}$$
(26)

where \(g(z(t))=(g_{1}(z_{1}(t)),g_{2}(z_{2}(t)),\ldots ,g_{n}(z_{n}(t)))^{T}, g_{i}(z_{i}(t))=f_{i}(z_{i}(t)+x_{i}^{*})-f_{i}(x_{i}^{*})\), satisfying:

$$\begin{aligned} 0\le \frac{g_{i}(z_{i}(t))}{z_{i}(t)}\le l_{i}, i=1,2,\ldots , n. \end{aligned}$$

Next, we study stability conditions for system (26), because the stability of the origin of system (26) is equivalent to the stability of the equilibrium point \(x^{*}\) of system (2).

Theorem 3.2

For the neural network (26), suppose the activation functions satisfy (3) and the coefficient matrices satisfy (4). The system (26) is globally asymptotically robustly stable if there exist a positive diagonal matrix \(P=diag(p_{i})\), a nonnegative diagonal matrix \(\Gamma =diag(\nu _{i})\ge 0\) and two positive constants \(\rho , \mu\) such that the following inequality holds:

$$\begin{aligned} \begin{array}{ll} \Omega =&{}2\underline{C}PL^{-1} -[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}I]\\ &{}-p_{M}(\rho I+\frac{1}{\rho }R) -\sigma p_{M}(\mu I+\frac{1}{\mu }Q)>0, \end{array} \end{aligned}$$
(27)

where\(L=diag(l_{i})>0,\)\(p_{M}=\max \{p_{i}\},\)\(R=diag(r_{i})>0\), \(Q=diag(q_{i})>0\)with

\(r_{i}=\sum \limits _{k=1}^{n}\widehat{b}_{ki}\sum \limits _{j=1}^{n}\widehat{b}_{kj},\) \(\widehat{b}_{ij}=\max \{|\underline{b}_{ij}|,|\overline{b}_{ij}|\},\)

\(q_{i}=\sum \limits _{k=1}^{n}\widehat{d}_{ki}\sum \limits _{j=1}^{n}\widehat{d}_{kj},\)\(\widehat{d}_{ij}=\max \{|\underline{d}_{ij}|,|\overline{d}_{ij}|\}\).

Proof

We construct the following Lyapunov functional:

$$\begin{aligned} V(z(t))=\sum \limits _{i=1}^{6}V_{i}(z(t)), \end{aligned}$$

where

$$\begin{aligned} \begin{array}{ll} &{}V_{1}(z(t))=z(t)^{T}z(t),\\ &{}V_{2}(z(t))=2\alpha \sum \limits _{i=1}^{n}p_{i}\int _{0}^{z_{i}(t)}g_{i}( s)ds,\\ &{}V_{3}(z(t))=\alpha \frac{p_{M}}{\rho }\int _{t-\tau }^{t}g^{T}( z(s))B^{T}B g( z(s))ds,\\ &{}V_{4}(z(t))=\beta \int _{t-\tau }^{t}g^{T}( z(s))g( z(s))ds,\\ &{}V_{5}(z(t))=\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}} \int _{-\sigma }^{0}\int _{t+\theta }^{t}g^{T}( z(s))D^{T}Dg( z(s))dsd\theta ,\\ &{}V_{6}(z(t))=\alpha p_{M}\frac{\beta _{2}}{\mu _{2}} \int _{-\sigma }^{0}\int _{t+\theta }^{t}g^{T}( z(s))D^{T}Dg( z(s))dsd\theta , \end{array} \end{aligned}$$
(28)

where \(l_{m}=min\{l_{i}\}, l_{M}=max\{l_{i}\},\) \(\alpha , \beta , \beta _{1}, \beta _{2}, \mu _{1}, \mu _{2}\) are any positive constants to be determined later.

Computing the derivative of V(z(t)) along (26), we have

$$\begin{aligned} \begin{aligned} \dot{V}_{1}(z(t))&=-2z(t)^{T}Cz(t)+2z^{T}(t)Ag(z(t))+2z^{T}(t)Bg(z(t-\tau )) +2z^{T}(t)D\int _{t-\sigma }^{t}g(z(s))ds\\&=[-z^{T}(t)Cz(t)+2z(t)^{T}Ag(z(t))]+[-z^{T}(t)Cz(t)+2z^{T}(t)Bg(z(t-\tau ))] +\int _{t-\sigma }^{t}2z^{T}(t)Dg(z(s))ds\\&\le [-z(t)^{T}Cz(t)+\frac{1}{\alpha }\frac{1}{\sigma }\frac{\beta _{1}}{\mu _{1}}z^{T}(t)AA^{T}z(t) +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t))] +\Vert B\Vert _{2}^{2}\Vert C^{-1}\Vert _{2}\Vert g(z(t-\tau ))\Vert _{2}^{2}\\&\quad +\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}\int _{t-\sigma }^{t}z^{T}(t)L^{2}z(t)ds +\alpha p_{M}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}L^{-2}Dg(z(s))ds\\&\le [-z(t)^{T}Cz(t)+\frac{1}{\alpha }\frac{1}{\sigma }\frac{\beta _{1}}{\mu _{1}}z^{T}(t)AA^{T}z(t) +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t))] +\Vert B\Vert _{2}^{2}\Vert C^{-1}\Vert _{2}\Vert g(z(t-\tau ))\Vert _{2}^{2}\\&\quad +\sigma \frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}z^{T}(t)L^{2}z(t) +\alpha p_{M}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}L^{-2}Dg(z(s))ds\\&= [-z(t)^{T}Cz(t)+\frac{1}{\alpha }\frac{1}{\sigma }\frac{\beta _{1}}{\mu _{1}}z^{T}(t)AA^{T}z(t) +\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}z^{T}(t)z(t)] +\Vert B\Vert _{2}^{2}\Vert C^{-1}\Vert _{2}\Vert g(z(t-\tau ))\Vert _{2}^{2}\\&\quad +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t)) +\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}Dg(z(s))ds. \end{aligned} \end{aligned}$$
(29)

Letting \(S=C-\frac{1}{\alpha }\frac{1}{\sigma }\frac{\beta _{1}}{\mu _{1}}AA^{T} -\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}I\), then,

$$\begin{aligned} \begin{aligned} \dot{V}_{1}(z(t))&\le -z(t)^{T}Sz(t) +\Vert B\Vert _{2}^{2}\Vert C^{-1}\Vert _{2}\Vert g(z(t-\tau ))\Vert _{2}^{2}\\&\quad +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t)) +\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}Dg(z(s))ds\\&\le \Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +\Vert B\Vert _{2}^{2}\Vert C^{-1}\Vert _{2}\Vert g(z(t-\tau ))\Vert _{2}^{2}\\&\quad +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t)) +\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}Dg(z(s))ds. \end{aligned} \end{aligned}$$
(30)
$$\begin{aligned} \begin{aligned} \dot{V}_{2}(z(t))=&-2\alpha g^{T}(z(t))PCz(t)+2\alpha g^{T}(z(t))PAg(z(t)) +2\alpha g^{T}(z(t))PBg(z(t-\tau ))\\&\quad +2\alpha g^{T}(z(t))PD\int _{t-\sigma }^{t}g(z(s))ds\\&\le -2\alpha g^{T}(z(t))PCL^{-1}g(z(t)) \\&\quad +\alpha g^{T}(z(t)) [P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}]g(z(t))\\&\quad +\alpha \rho p_{M}g^{T}(z(t))g(z(t)) +\alpha \frac{p_{M}}{\rho }g^{T}(z(t-\tau ))B^{T}B g(z(t-\tau ))\\&\quad +\alpha p_{M} \int _{t-\sigma }^{t}[\frac{\mu _{2}}{\beta _{2}}g^{T}(z(t))g(z(t)) +\frac{\beta _{2}}{\mu _{2}}g^{T}(z(s))D^{T}Dg(z(s))]ds\\&= -2\alpha g^{T}(z(t))P\underline{C}L^{-1}g(z(t)) \\&\quad +\alpha g^{T}(z(t)) [P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}]g(z(t))\\&\quad +\alpha \rho p_{M}g^{T}(z(t))g(z(t)) +\alpha \frac{p_{M}}{\rho }g^{T}(z(t-\tau ))B^{T}B g(z(t-\tau ))\\&\quad +\sigma \alpha p_{M} \frac{\mu _{2}}{\beta _{2}}g^{T}(z(t))g(z(t)) +\alpha p_{M}\frac{\beta _{2}}{\mu _{2}}\int _{t-\sigma }^{t}g^{T}(z(s))D^{T}Dg(z(s))ds, \end{aligned} \end{aligned}$$
(31)
$$\begin{aligned} \begin{array}{ll} &{}\dot{V}_{3}(z(t))=\alpha \frac{p_{M}}{\rho } g^{T}(z(t))B^{T}B g(z(t)) -\alpha \frac{p_{M}}{\rho } g^{T}(z(t-\tau ))B^{T}B g(z(t-\tau )),\\ &{}\dot{V}_{4}(z(t))=\beta g^{T}(z(t)) g(z(t)) -\beta g^{T}(z(t-\tau )) g(z(t-\tau )),\\ &{}\dot{V}_{5}(z(t))=\sigma \alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}} g^{T}(z(t))D^{T}D g(z(t)) -\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}\int _{t-\sigma }^{t} g^{T}(z(s))D^{T}D g(z(s))ds,\\ &{}\dot{V}_{6}(z(t))=\sigma \alpha p_{M}\frac{\beta _{2}}{\mu _{2}} g^{T}(z(t))D^{T}D g(z(t)) -\alpha p_{M}\frac{\beta _{2}}{\mu _{2}}\int _{t-\sigma }^{t} g^{T}(z(s))D^{T}D g(z(s))ds, \end{array} \end{aligned}$$
(32)

Setting \(\beta =h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2}\) and using Lemma 2.4, one gets

$$\begin{aligned} \begin{aligned} \dot{V}(z(t))&\le h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} g^{T}(z(t)) g(z(t))\\&\quad +\alpha g^{T}(z(t)) [P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}]g(z(t))\\&\quad -2\alpha g^{T}(z(t))P\underline{C}L^{-1}g(z(t)) +\alpha \sigma \frac{\mu _{1}}{\beta _{1}}g^{T}(z(t))g(z(t)) +\alpha \rho p_{M}g^{T}(z(t))g(z(t))\\&\quad +\alpha \frac{p_{M}}{\rho } g^{T}(z(t))B^{T}B g(z(t))+\sigma \alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}} g^{T}(z(t))D^{T}D g(z(t))\\&\quad +\sigma \alpha p_{M} \frac{\mu _{2}}{\beta _{2}}g^{T}(z(t))g(z(t)) +\sigma \alpha p_{M}\frac{\beta _{2}}{\mu _{2}} g^{T}(z(t))D^{T}D g(z(t)) \end{aligned} \end{aligned}$$
(33)

Furthermore, denote \(\mu \triangleq \frac{1}{ p_{M}}\frac{\mu _{1}}{\beta _{1}}+\frac{\mu _{2}}{\beta _{2}}\); since \(\beta _{1},\beta _{2}, \mu _{1}, \mu _{2}\) are arbitrary positive constants, we can choose appropriate \(\alpha , \beta _{1},\beta _{2}, \mu _{1}, \mu _{2}\) such that \(\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}+\frac{\beta _{2}}{\mu _{2}}=\frac{1}{\mu }\). Hence, (33) can be rewritten as follows:

$$\begin{aligned} \begin{aligned} \dot{V}(z(t))&\le h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} g^{T}(z(t)) g(z(t)) -\alpha g^{T}(z(t))\{2 \underline{C}P L^{-1}\\&\quad -[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}I]\\&\quad -p_{M}(\rho I+\frac{1}{\rho }R)-p_{M}\sigma ((\frac{1}{ p_{M}}\frac{\mu _{1}}{\beta _{1}}+\frac{\mu _{2}}{\beta _{2}})I +(\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}}+\frac{\beta _{2}}{\mu _{2}})Q)\}g(z(t))\\&\triangleq h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} g^{T}(z(t)) g(z(t)) -\alpha g^{T}(z(t))\{2 \underline{C}P L^{-1}\\&\quad -[P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}I]\\&\quad -p_{M}(\rho I+\frac{1}{\rho }R) -p_{M}\sigma (\mu I+\frac{1}{\mu }Q)\}g(z(t))\\&= h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} g^{T}(z(t)) g(z(t)) -\alpha g^{T}(z(t))\Omega g(z(t))\\&\le h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} \Vert g(z(t))\Vert _{2}^{2} -\alpha \lambda _{m}(\Omega ) \Vert g(z(t))\Vert _{2}^{2}, \end{aligned} \end{aligned}$$
(34)

Next, we analyse negative definiteness of \(\dot{V}(z(t))\) in three cases: case (1) \(z(t)\ne 0, g(z(t))\ne 0\); case (2) \(z(t)\ne 0, g(z(t))=0\); case (3) \(z(t)=0, g(z(t))=0\).

For case (1), setting

$$\begin{aligned} \begin{aligned} \alpha> \frac{h_{1}^{2}\Vert S\Vert _{2}\Vert z(t)\Vert _{2}^{2} +h_{2}^{2}\Vert \underline{C}^{-1}\Vert _{2} \Vert g(z(t))\Vert _{2}^{2}}{\lambda _{m}(\Omega ) \Vert g(z(t))\Vert _{2}^{2}}, \end{aligned} \end{aligned}$$
(35)

then \(\dot{V}(z(t))\) is guaranteed to be negative definite.

For case (2),

$$\begin{aligned} \begin{aligned} \dot{V}(z(t))&\le -2z^{T}(t)Cz(t)+2z^{T}(t)Bg(z(t-\tau )) +\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}z^{T}(t)z(t)\\&\quad -\alpha \frac{p_{M}}{\rho }g^{T}(z(t-\tau ))B^{T}Bg(z(t-\tau )) -\beta g^{T}(z(t-\tau ))B^{T}Bg(z(t-\tau ))\\&\le -2z^{T}(t)Cz(t)+2z^{T}(t)Bg(z(t-\tau )) +\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}z^{T}(t)z(t)\\&\quad -\beta g^{T}(z(t-\tau ))B^{T}Bg(z(t-\tau ))\\&\le -2z^{T}(t)Cz(t) +\frac{1}{\beta }z^{T}(t)z(t) +\sigma l_{M}^{2}\frac{1}{\alpha p_{M}}\frac{\mu _{1}}{\beta _{1}}z^{T}(t)z(t) \end{aligned} \end{aligned}$$
(36)

since \(\alpha , \beta , \beta _{1}, \mu _{1}\) are arbitrary positive constants, we can always choose them appropriately to guarantee that \(\dot{V}(z(t))\) is negative definite.

For case (3),

$$\begin{aligned} \begin{aligned} \dot{V}(z(t))&= -\alpha \frac{p_{M}}{\rho } g^{T}(z(t-\tau ))B^{T}B g(z(t-\tau )) -\beta g^{T}(z(t-\tau )) g(z(t-\tau ))\\&\quad -\alpha p_{M}\frac{1}{l_{m}^{2}}\frac{\beta _{1}}{\mu _{1}} \int _{t-\sigma }^{t} g^{T}(z(s))D^{T}D g(z(s))ds -\alpha p_{M}\frac{\beta _{2}}{\mu _{2}}\int _{t-\sigma }^{t} g^{T}(z(s))D^{T}D g(z(s))ds\\&\le -\alpha \frac{p_{M}}{\rho } g^{T}(z(t-\tau ))B^{T}B g(z(t-\tau )) -\beta g^{T}(z(t-\tau )) g(z(t-\tau )), \end{aligned} \end{aligned}$$
(37)

so \(\dot{V}(z(t))<0\) whenever \(g(z(t-\tau ))\ne 0\). Moreover, \(\dot{V}(z(t))=0\) if and only if \(z(t)=g(z(t))=g(z(t-\tau ))=0\); otherwise, \(\dot{V}(z(t))<0\).

In summary, according to the above analysis of cases (1), (2), and (3), we conclude that system (26), and hence system (2), is globally asymptotically robustly stable. \(\square\)

If the coefficient matrices C, A, B, D in system (2) are known constant matrices, the following corollary is obtained directly.

Corollary 3.1

The neural network (2) with \(C=\underline{C}=\overline{C},A=\underline{A}=\overline{A}, B=\underline{B}=\overline{B}, D=\underline{D}=\overline{D}\) is globally asymptotically stable if there exist a positive diagonal matrix \(P=diag(p_{i})\) and positive constants \(\rho , \mu\) such that the following condition holds:

$$\begin{aligned} \begin{array}{ll} \Omega =&2C PL^{-1} -[PA+A^{T}P] -p_{M}(\rho I+\frac{1}{\rho }R) -\sigma p_{M}(\mu I+\frac{1}{\mu }Q)>0, \end{array} \end{aligned}$$
(38)

where\(L=diag(l_{i})>0,\)\(p_{M}=max\{p_{i}\},\)\(R=diag(r_{i})>0\), \(Q=diag(q_{i})>0\)with\(r_{i}=\sum \nolimits _{k=1}^{n}b_{ki}\sum \nolimits _{j=1}^{n}b_{kj},\)\(q_{i}=\sum \nolimits _{k=1}^{n}d_{ki}\sum \nolimits _{j=1}^{n}d_{kj}.\)

Remark 1

As is well known, stability conditions expressed in terms of LMIs can be quite complex. In contrast, the stability conditions obtained in this paper have a simple form and are easy to verify. The following numerical examples show their validity.

Remark 2

From Theorems 3.1 and 3.2, we can see that condition (10) not only guarantees the existence and uniqueness of the equilibrium point, but also guarantees the stability of system (2).

Remark 3

In [37], \(2l_{M}\Vert A^{*}\Vert _{2}+A_{*}P^{-1}A_{*}+PL^{2}\) is used to estimate the coefficient matrix A, whereas in this paper \([P(A^{*}-\Gamma )+(A^{*}-\Gamma )^{T}P+\Vert P(A_{*}+\Gamma )+(A_{*}+\Gamma )^{T}P\Vert _{2}I]\) is used, where \(\Gamma\) is a free matrix that can be chosen. The latter is likely to provide a more accurate bound on A, which may reduce the conservatism of the stability criteria. The following numerical examples verify this fact.

Remark 4

References [21,22,23,24,25,26,27] investigated the stability of uncertain neural networks with discrete delay and obtained conditions in the form of matrix norm inequalities. As is well known, the presence of distributed delays increases the difficulty of the analysis. Therefore, uncertain neural networks with both discrete and distributed delays are studied in this paper, and stability results in terms of matrix norm inequalities are proposed.

Remark 5

It should be noted that if more accurate matrix norm inequalities become available for estimating the bounds of the coefficient matrices, the proposed results can be further improved to reduce their conservatism.

4 Examples

Example 1

Consider the neural network with the following parameters [30,31,32,33,34,35, 37]:

$$C= \left(\begin{array}{llll} 2.3 & 0 & 0 \\ 0 & 3.4 & 0 \\ 0 & 0 & 2.5 \end{array} \right),\quad A=\left(\begin{array}{llll} 0.9 & -1.5 & 0.1 \\ -1.2 & 0.1 & 0.2 \\ 0.2 & 0.3 & 0.8 \end{array} \right), \quad B=\left(\begin{array}{llll} 0.8 & 0.6 & 0.2 \\ 0.5 & 0.7 & 0.1 \\ 0.2 & 0.1 & 0.5 \end{array} \right), \quad D=\left(\begin{array}{llll} 0.3 & 0.2 & 0.1 \\ 0.1 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.2 \end{array} \right).$$

\(L=diag(0.2,0.2,0.2)\).

Setting \(\rho =2, \mu =\frac{1}{2}, P=I\) in Corollary 3.1, we have

$$\begin{aligned} \begin{array}{ll} \Omega =&{}2\times 5 \left( \begin{array}{lll} 2.3 &{} 0 &{} 0 \\ 0 &{} 3.4 &{} 0 \\ 0 &{} 0 &{} 2.5 \\ \end{array} \right) -2 \left( \begin{array}{lll} 0.9 &{} -1.5 &{} 0.1 \\ -1.2 &{} 0.1 &{} 0.2 \\ 0.2 &{} 0.3 &{} 0.8 \end{array} \right) -\left[2I+\frac{1}{2} \left( \begin{array}{lll} 2.53 &{} 0 &{} 0 \\ 0 &{} 1.95 &{} 0 \\ 0 &{} 0 &{} 0.85 \\ \end{array} \right)\right] \\ &{}-\sigma \left[\frac{1}{2}I+ \left( \begin{array}{lll} 0.26 &{} 0 &{} 0 \\ 0 &{} 0.24 &{} 0 \\ 0 &{} 0 &{} 0.18 \\ \end{array} \right)\right]. \end{array} \end{aligned}$$

The upper bounds on the distributed delay \(\sigma\) computed by Corollary 3.1 and by the methods in [30,31,32,33,34,35, 37] are listed in Table 1. We can see that the stability result in this paper is less conservative than those in the literature.

Table 1 Allowable upper bounds of \(\sigma\) for Example 1
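
As a quick sanity check (added by us, not part of the original example), the \(\Omega\) expression displayed above can be evaluated for a trial value of \(\sigma\) and tested for positive definiteness; the values below simply transcribe the matrices shown in the display, and the trial \(\sigma\) is hypothetical.

```python
import numpy as np

C = np.diag([2.3, 3.4, 2.5])
A = np.array([[0.9, -1.5, 0.1], [-1.2, 0.1, 0.2], [0.2, 0.3, 0.8]])
R_over_rho = 0.5 * np.diag([2.53, 1.95, 0.85])    # (1/rho) R with rho = 2, as displayed
Q_diag = np.diag([0.26, 0.24, 0.18])              # distributed-delay diagonal, as displayed
I3 = np.eye(3)

def omega_ex1(sigma):
    """Evaluate the Omega expression of Example 1 for a trial sigma."""
    return 2 * 5 * C - 2 * A - (2 * I3 + R_over_rho) - sigma * (0.5 * I3 + Q_diag)

sigma_trial = 1.0                                  # hypothetical value, not taken from Table 1
Om = 0.5 * (omega_ex1(sigma_trial) + omega_ex1(sigma_trial).T)   # symmetric part
print(np.linalg.eigvalsh(Om).min() > 0)            # True means Omega > 0 at this sigma
```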

Example 2

Consider the uncertain system (2) with the following parameters:

$$\begin{aligned} \begin{array}{ll} &{}\underline{C}= \left(\begin{array}{ll} 1 &{} 0 \\ 0 &{} 1\end{array}\right),\quad \overline{C}= \left(\begin{array}{ll}1.5 &{} 0 \\ 0 &{} 1\end{array}\right),\quad \underline{A}= \left(\begin{array}{ll} 0.1 &{} 0.1 \\ 0 &{} 0.1\end{array}\right),\quad \overline{A}= \left(\begin{array}{ll} 0.5 &{} 0.1 \\ 0.01 &{} 0.3\end{array}\right),\\ &{}\underline{B}= \left(\begin{array}{ll} 0.08 &{} 0.1 \\ 0 &{} 0.08\end{array}\right),\quad \overline{B}= \left(\begin{array}{ll} 0.08 &{} 0.12 \\ 0.05 &{} 0.08\end{array}\right),\quad \underline{D}= \left(\begin{array}{ll} 0.1 &{} 0.1 \\ 0 &{} 0.1\end{array}\right),\quad \overline{D}= \left(\begin{array}{ll} 0.1 &{} 0.16 \\ 0.05 &{} 0.1\end{array}\right). \end{array} \end{aligned}$$

\(L=diag(1,1), \tau =1, \sigma =1\).

Using Theorem 3.2,

$$\begin{aligned} A^{*}= \left(\begin{array}{ll} 0.3 &{} 0.1 \\ 0.005 &{} 0.2\end{array}\right),\quad A_{*}= \left(\begin{array}{ll}0.2 &{} 0 \\ 0.005 &{} 0.1\end{array}\right),\quad R= \left(\begin{array}{ll}0.0225 &{} 0 \\ 0 &{} 0.0344\end{array}\right),\quad Q= \left(\begin{array}{ll}0.0325 &{} 0 \\ 0 &{} 0.0566\end{array}\right),\quad \end{aligned}$$

Letting \(\rho =\frac{1}{4}, \mu =\frac{1}{4}, P=I\) and \(\begin{aligned}\Gamma = \left( \begin{array}{ll} 0.3 & 0 \\ 0 & 0.2\end{array}\right) \end{aligned}\) in Theorem 3.1, we obtain

$$\begin{aligned} \begin{array}{ll} \Omega &=2 \left(\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right) - \left[\left(\begin{array}{ll} 0 & 0.105 \\ 0.105 & 0\end{array}\right) +\left\Vert \left(\begin{array}{ll} 1 & 0.005 \\ 0.005 & 0.6\end{array}\right) \right\Vert _{2}I\right] -\left[\frac{1}{4}I+ 4 \left(\begin{array}{ll} 0.0225 & 0 \\ 0 & 0.0344 \end{array}\right)\right] \\ &\quad-\left[\frac{1}{4}I+ 4 \left(\begin{array}{ll} 0.0325 & 0 \\ 0 & 0.0566 \end{array}\right)\right] = \left(\begin{array}{ll} 0.279 & -0.105 \\ -0.105 & 0.135\end{array}\right)>0. \end{array} \end{aligned}$$

Hence, according to Theorem 3.2, the neural network (2) is globally asymptotically robust stable.
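
For completeness, the positive definiteness of the final \(2\times 2\) matrix above can be confirmed numerically (a convenience check we add; the entries are those displayed in the example):

```python
import numpy as np

Omega = np.array([[0.279, -0.105],
                  [-0.105, 0.135]])
eigs = np.linalg.eigvalsh(Omega)
print(eigs, bool(np.all(eigs > 0)))   # both eigenvalues are positive, so Omega > 0
```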

The dynamical system behavior in Example 2 with parameters

$$\begin{aligned} C=\left(\begin{array}{ll} 1 &{} 0 \\ 0 &{} 1\end{array}\right),\quad A=\left (\begin{array}{ll}0.5 &{} 0.1 \\ 0.01 &{} 0.3\end{array}\right),\quad B=\left(\begin{array}{ll} 0.08 &{} 0.12 \\ 0.05 &{} 0.08\end{array}\right),\quad D= \left(\begin{array}{ll} 0.1 &{} 0.16 \\ 0.05 &{} 0.1\end{array}\right). \end{aligned}$$

and constant input vector \(U=[3,-2]^{T}\) is shown in Fig. 1, and the behavior with \(U=[5,3]^{T}\) is shown in Fig. 2.

Fig. 1 The response of the dynamical system with \(U=[3,-2]^{T}\)

Fig. 2 The response of the dynamical system with \(U=[5,3]^{T}\)

5 Conclusions

In this work, we have proposed improved stability results for neural networks with discrete and distributed delays. By using the homeomorphism mapping theorem, some matrix inequality techniques, and appropriately chosen Lyapunov–Krasovskii functional candidates, novel stability conditions are derived. The obtained results are less conservative than those in [30,31,32,33,34,35, 37]. Finally, two numerical examples are given to show the advantage and effectiveness of the obtained results. Meanwhile, the proposed results may be used for the further study of NNs with mixed delays, for example, stochastic neural networks and Markovian jumping neural networks. Furthermore, it is expected that the method in this paper may be applied to other problems, such as the stability analysis of neutral neural networks and \(H_{\infty }\) control design for neural networks.