Global dissipativity of high-order Hopfield bidirectional associative memory neural networks with mixed delays

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

In this paper, the problem of the global dissipativity of high-order Hopfield bidirectional associative memory neural networks with time-varying coefficients and distributed delays is discussed. By using the Lyapunov–Krasovskii functional method, inequality techniques and linear matrix inequalities, a novel set of sufficient conditions for the global dissipativity and global exponential dissipativity of the addressed system is developed. Furthermore, estimates of the positive invariant set, the globally attractive set and the globally exponentially attractive set are obtained. Finally, two examples with numerical simulations are provided to support the feasibility of the theoretical findings.


Figs. 1–6. Source of one figure: Johansson [20]

References

  1. Alimi AM, Aouiti C, Chérif F, Dridi F, M’hamdi MS (2018) Dynamics and oscillations of generalized high-order Hopfield neural networks with mixed delays. Neurocomputing 321:274–295

  2. Aouiti C (2018) Oscillation of impulsive neutral delay generalized high-order Hopfield neural networks. Neural Comput Appl 29:477–495

  3. Aouiti C (2016) Neutral impulsive shunting inhibitory cellular neural networks with time-varying coefficients and leakage delays. Cogn Neurodyn 10(6):573–591

  4. Aouiti C, Coirault P, Miaadi F, Moulay E (2017) Finite time boundedness of neutral high-order Hopfield neural networks with time delay in the leakage term and mixed time delays. Neurocomputing 260:378–392

  5. Aouiti C, Dridi F (2018) Piecewise asymptotically almost automorphic solutions for impulsive non-autonomous high-order Hopfield neural networks with mixed delays. Neural Comput Appl 31:5527–5545

  6. Aouiti C, Dridi F: \((\mu ,\nu )\)-Pseudo-almost automorphic solutions for high-order Hopfield bidirectional associative memory neural networks. Neural Comput Appl 1–22

  7. Aouiti C, Gharbia IB, Cao J, M’hamdi MS, Alsaedi A (2018) Existence and global exponential stability of pseudo almost periodic solution for neutral delay BAM neural networks with time-varying delay in leakage terms. Chaos, Solitons Fractals 107:111–127

  8. Aouiti C, Miaadi F (2018) Finite-time stabilization of neutral Hopfield neural networks with mixed delays. Neural Process Lett 48(3):1645–1669

  9. Aouiti C, Miaadi F (2018) Pullback attractor for neutral Hopfield neural networks with time delay in the leakage term and mixed time delays. Neural Comput Appl 1–10

  10. Aouiti C, M’hamdi MS, Chérif F (2016) The existence and the stability of weighted pseudo almost periodic solution of high-order Hopfield neural network. In: International conference on artificial neural networks, Springer International Publishing, Berlin, pp 478–485

  11. Aouiti C, M’hamdi MS, Touati A (2017) Pseudo almost automorphic solutions of recurrent neural networks with time-varying coefficients and mixed delays. Neural Process Lett 45(1):121–140

  12. Aouiti C, M’hamdi MS, Chérif F (2017) New results for impulsive recurrent neural networks with time-varying coefficients and mixed delays. Neural Process Lett 46(2):487–506

  13. Aouiti C, M’hamdi MS, Cao J, Alsaedi A (2017) Piecewise pseudo almost periodic solution for impulsive generalised high-order Hopfield neural networks with leakage delays. Neural Process Lett 45(2):615–648

  14. Cao J, Liang J, Lam J (2004) Exponential stability of high-order bidirectional associative memory neural networks with time delays. Physica D 199(3–4):425–436

  15. Coban R (2013) A context layered locally recurrent neural network for dynamic system identification. Eng Appl Artif Intell 26(1):241–250

  16. Coban R, Aksu IO (2018) Neuro-controller design by using the multifeedback layer neural network and the particle swarm optimization. Tehnički vjesnik 25(2):437–444

  17. Coban R, Can B (2009) An expert trajectory design for control of nuclear research reactors. Expert Syst Appl 36(9):11502–11508

  18. Fan Y, Huang X, Wang Z, Li Y (2018) Global dissipativity and quasi-synchronization of asynchronous updating fractional-order memristor-based neural networks via interval matrix method. J Franklin Inst 355(13):5998–6025

  19. Huang T, Li C, Duan S, Starzyk JA (2012) Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Trans Neural Netw Learn Syst 23(6):866–875

  20. Johansson KH (2000) The quadruple-tank process: A multivariable laboratory process with an adjustable zero. IEEE Trans Control Syst Technol 8(3):456–465

  21. Kosko B (1987) Adaptive bidirectional associative memories. Appl Opt 26(23):4947–4960

  22. Kosko B (1988) Bidirectional associative memories. IEEE Trans Syst Man Cybernet 18(1):49–60

  23. Lee TH, Park JH, Kwon OM, Lee SM (2013) Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Netw 46:99–108

  24. Li H, Li C, Zhang W, Xu J (2018) Global dissipativity of inertial neural networks with proportional delay via new generalized halanay inequalities. Neural Process Lett 48(3):1543–1561

  25. Li N, Cao J (2018) Global dissipativity analysis of quaternion-valued memristor-based neural networks with proportional delay. Neurocomputing 321:103–113

  26. Liao X, Wang J (2003) Global dissipativity of continuous-time recurrent neural networks with time delay. Phys Rev E 68(1):016118

  27. Maharajan C, Raja R, Cao J, Rajchakit G, Tu Z, Alsaedi A (2018) LMI-based results on exponential stability of BAM-type neural networks with leakage and both time-varying delays: a non-fragile state estimation approach. Appl Math Comput 326:33–55

  28. Maharajan C, Raja R, Cao J, Rajchakit G, Alsaedi A (2018) Impulsive Cohen–Grossberg BAM neural networks with mixed time-delays: an exponential stability analysis issue. Neurocomputing 275:2588–2602

  29. Maharajan C, Raja R, Cao J, Rajchakit G (2018) Novel global robust exponential stability criterion for uncertain inertial-type BAM neural networks with discrete and distributed time-varying delays via Lagrange sense. J Frankl Inst 355:4727–4754

  30. M’hamdi MS, Aouiti C, Touati A, Alimi AM, Snasel V (2016) Weighted pseudo almost-periodic solutions of shunting inhibitory cellular neural networks with mixed delays. Acta Math Sci 36(6):1662–1682

  31. Manivannan R, Mahendrakumar G, Samidurai R, Cao J, Alsaedi A (2017) Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. J Frankl Inst 354(11):4353–4376

  32. Manivannan R, Samidurai R, Cao J, Alsaedi A, Alsaadi FE (2018) Design of extended dissipativity state estimation for generalized neural networks with mixed time-varying delay signals. Inf Sci 424:175–203

  33. Marcus CM, Westervelt RM (1989) Stability of analog neural networks with delay. Phys Rev A 39(1):347

  34. Pu Z, Rao R (2018) Exponential stability criterion of high-order BAM neural networks with delays and impulse via fixed point approach. Neurocomputing 292:63–71

  35. Qiu J (2010) Dynamics of high-order Hopfield neural networks with time delays. Neurocomputing 73(4–6):820–826

  36. Rajchakit G, Saravanakumar R, Ahn CK, Karimi HR (2017) Improved exponential convergence result for generalized neural networks including interval time-varying delayed signals. Neural Netw 86:10–17

  37. Rajivganthi C, Rihan FA, Lakshmanan S (2019) Dissipativity analysis of complex-valued BAM neural networks with time delay. Neural Comput Appl 31(1):127–137

  38. Samidurai R, Manivannan R, Ahn CK, Karimi HR (2018) New criteria for stability of generalized neural networks including Markov jump parameters and additive time delays. IEEE Trans Syst Man Cybernet Syst 48(4):485–499

  39. Song Q, Zhao Z (2005) Global dissipativity of neural networks with both variable and unbounded delays. Chaos, Solitons Fractals 25(2):393–401

  40. Sowmiya C, Raja R, Cao J, Li X, Rajchakit G (2018) Discrete-time stochastic impulsive BAM neural networks with leakage and mixed time delays: an exponential stability problem. J Franklin Inst 355(10):4404–4435

  41. Tu Z, Cao J, Alsaedi A, Alsaadi F (2017) Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw 88:125–133

  42. Tu Z, Wang L, Zha Z, Jian J (2013) Global dissipativity of a class of BAM neural networks with time-varying and unbound delays. Commun Nonlinear Sci Numer Simul 18(9):2562–2570

  43. Wang L, Zhang L, Ding X (2015) Global dissipativity of a class of BAM neural networks with both time-varying and continuously distributed delays. Neurocomputing 152:250–260

  44. Willems JC (1972) Dissipative dynamical systems part I: general theory. Arch Ration Mech Anal 45(5):321–351

  45. Willems JC (1972) Dissipative dynamical systems part II: linear systems with quadratic supply rates. Arch Ration Mech Anal 45(5):352–393

  46. Zhang B, Xu S, Li Y, Chu Y (2007) On global exponential stability of high-order neural networks with time-varying delays. Phys Lett A 366(1–2):69–78

  47. Zhang G, Zeng Z, Hu J (2018) New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays. Neural Netw 97:183–191

Author information

Corresponding author

Correspondence to Chaouki Aouiti.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Suppose that Assumption 2 is satisfied and that system (1) possesses an equilibrium point \((x^{*},\;y^{*})=(x_{1}^{*},\;x_{2}^{*},\ldots ,x_{n}^{*};\;y_{1}^{*},\; y_{2}^{*},\ldots ,y_{m}^{*})^{T},\) which then satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{x}}_{i}^{*}(t) = -a_{i}x_{i}^{*}(t)\\ \quad +\,\displaystyle \sum _{j=1}^{m}b_{ij}{\tilde{f}}_{j}(y_{j}^{*}(t))\\ \quad +\, \displaystyle \sum _{j=1}^{m}c_{ij}{\tilde{f}}_{j}(y_{j}^{*}(t-\tau (t)))\\ \quad +\, \displaystyle \sum _{j=1}^{m}\sum _{k=1}^{m}W_{ijk}{\tilde{f}}_{k}(y_{k}^{*}(t)){\tilde{f}}_{j}(y_{j}^{*}(t))\\ \quad +\, \displaystyle \sum _{j=1}^{m}\sum _{k=1}^{m}T_{ijk}{\tilde{f}}_{k}(y_{k}^{*}(t-\tau (t))){\tilde{f}}_{j}(y_{j}^{*}(t-\tau (t)))\\ \quad +\, \displaystyle \sum _{j=1}^{m}p_{ij}\displaystyle \int _{-\infty }^{t}K_{ij}(t-s){\tilde{f}}_{j}(y_{j}^{*}(s))\text {d}s=0,\\ {\dot{y}}_{j}^{*}(t) = -a^{1}_{j}y_{j}^{*}(t)\\ \quad +\,\displaystyle \sum _{i=1}^{n}b^{1}_{ji}{\tilde{g}}_{i}(x_{i}^{*}(t))\\ \quad +\, \displaystyle \sum _{i=1}^{n}c^{1}_{ji}{\tilde{g}}_{i}(x_{i}^{*}(t-\sigma (t)))\\ \quad +\, \displaystyle \sum _{i=1}^{n}\sum _{k=1}^{n}W^{1}_{jik}{\tilde{g}}_{k}(x_{k}^{*}(t)) {\tilde{g}}_{i}(x_{i}^{*}(t))\\ \quad +\, \displaystyle \sum _{i=1}^{n}\sum _{k=1}^{n}T^{1}_{jik}{\tilde{g}}_{k}(x_{k}^{*}(t-\sigma (t))) {\tilde{g}}_{i}(x_{i}^{*}(t-\sigma (t)))\\ \quad +\, \displaystyle \sum _{i=1}^{n}p^{1}_{ji}\displaystyle \int _{-\infty }^{t}K_{ji}(t-s){\tilde{g}}_{i}(x_{i}^{*}(s))\text {d}s=0. \end{array} \right. \end{aligned}$$

Let \(x_{i}(t)-x^{*}_{i}\) and \(y_{j}(t)-y^{*}_{j}\) be denoted again by \(x_{i}(t)\) and \(y_{j}(t),\) for \(i=1,\;2,\ldots ,n,\;j=1,\;2,\ldots ,m,\) and set \(f_{j}(y_{j}(t))={\tilde{f}}_{j}(y_{j}(t)+y^{*}_{j})-{\tilde{f}}_{j}(y^{*}_{j}),\) \(g_{i}(x_{i}(t))={\tilde{g}}_{i}(x_{i}(t)+x^{*}_{i})-{\tilde{g}}_{i}(x^{*}_{i}),\) so that \(f_{j}(0)=0\) and \(g_{i}(0)=0\) for \(i=1,2,\ldots ,n,\;j=1,2,\ldots ,m.\)

Hence, we have:

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{x}}_{i}(t) = -\,a_{i}x_{i}(t)+\displaystyle \sum _{j=1}^{m}b_{ij}f_{j}(y_{j}(t))\\ \quad +\,\displaystyle \sum _{j=1}^{m}c_{ij}f_{j}(y_{j}(t-\tau (t)))\\ \quad +\,\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}W_{ijk}f_{k}(y_{k}(t)) f_{j}(y_{j}(t))\\ \quad +\,\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}T_{ijk}f_{k}(y_{k}(t-\tau (t)))f_{j}(y_{j}(t-\tau (t)))\\ \quad +\,\displaystyle \sum _{j=1}^{m}p_{ij}\displaystyle \int _{-\infty }^{t}K_{ij}(t-s)f_{j}(y_{j}(s))\text {d}s,\\ {\dot{y}}_{j}(t) = -a^{1}_{j}y_{j}(t)+\displaystyle \sum _{i=1}^{n}b^{1}_{ji}g_{i}(x_{i}(t))\\ \quad +\,\displaystyle \sum _{i=1}^{n}c^{1}_{ji}g_{i}(x_{i}(t-\sigma (t)))\\ \quad +\,\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}W^{1}_{jik}g_{k}(x_{k}(t)) g_{i}(x_{i}(t))\\ \quad +\,\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}T^{1}_{jik}g_{k}(x_{k}(t-\sigma (t)))g_{i}(x_{i}(t-\sigma (t)))\\ \quad +\,\displaystyle \sum _{i=1}^{n}p^{1}_{ji}\displaystyle \int _{-\infty }^{t}K_{ji}(t-s)g_{i}(x_{i}(s))\text {d}s \end{array} \right. \end{aligned}$$

Note that, under Assumption 3, the transformed model is equivalent to the original one. The system can then be described as:

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{x}}_{i}(t) = -\,a_{i}x_{i}(t)+\displaystyle \sum _{j=1}^{m}b_{ij}f_{j}(y_{j}(t))\\ \quad +\, \displaystyle \sum _{j=1}^{m}c_{ij}f_{j}(y_{j}(t-\tau (t)))\\ \quad +\, \displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}W_{ijk}\big [\big ({\tilde{f}}_{j}(y_{j}(t))-{\tilde{f}}_{j}(y_{j}^{*})\big ){\tilde{f}}_{k}(y_{k}(t))\\ \quad +\, {\tilde{f}}_{j}(y_{j}^{*})\big ({\tilde{f}}_{k}(y_{k}(t))-{\tilde{f}}_{k}(y_{k}^{*})\big )\big ]\\ \quad +\, \displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}T_{ijk}\big [\big ({\tilde{f}}_{j}(y_{j}(t-\tau _{j}(t)))-{\tilde{f}}_{j}(y_{j}^{*})\big ){\tilde{f}}_{k}(y_{k}(t-\tau _{k}(t)))\\ \quad +\, {\tilde{f}}_{j}(y_{j}^{*})\big ({\tilde{f}}_{k}(y_{k}(t-\tau _{k}(t)))-{\tilde{f}}_{k}(y_{k}^{*})\big )\big ]\\ \quad +\, \displaystyle \sum _{j=1}^{m}p_{ij}\displaystyle \int _{-\infty }^{t}K_{ij}(t-s)f_{j}(y_{j}(s))\text {d}s,\\ {\dot{y}}_{j}(t) = -a^{1}_{j}y_{j}(t)+\displaystyle \sum _{i=1}^{n}b^{1}_{ji}g_{i}(x_{i}(t))\\ \quad +\, \displaystyle \sum _{i=1}^{n}c^{1}_{ji}g_{i}(x_{i}(t-\sigma (t)))\\ \quad +\, \displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}W^{1}_{jik}\big [\big ({\tilde{g}}_{i}(x_{i}(t))-{\tilde{g}}_{i}(x_{i}^{*})\big ){\tilde{g}}_{k}(x_{k}(t))\\ \quad +\, {\tilde{g}}_{i}(x_{i}^{*})\big ({\tilde{g}}_{k}(x_{k}(t))-{\tilde{g}}_{k}(x_{k}^{*})\big )\big ]\\ \quad +\, \displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}T^{1}_{jik}\big [\big ({\tilde{g}}_{i}(x_{i}(t-\sigma _{i}(t)))-{\tilde{g}}_{i}(x_{i}^{*})\big ){\tilde{g}}_{k}(x_{k}(t-\sigma _{k}(t)))\\ \quad +\, {\tilde{g}}_{i}(x_{i}^{*})\big ({\tilde{g}}_{k}(x_{k}(t-\sigma _{k}(t)))-{\tilde{g}}_{k}(x_{k}^{*})\big )\big ]\\ \quad +\, \displaystyle \sum _{i=1}^{n}p^{1}_{ji}\displaystyle \int _{-\infty }^{t}K_{ji}(t-s)g_{i}(x_{i}(s))\text {d}s. \end{array} \right. \\ \left\{ \begin{array}{ll} {\dot{x}}_{i}(t) = -\,a_{i}x_{i}(t)+\displaystyle \sum _{j=1}^{m}\big [b_{ij}+\displaystyle \sum _{k=1}^{m}\big (W_{ijk}{\tilde{f}}_{k}(y_{k}(t))+W_{ikj}{\tilde{f}}_{k}(y_{k}^{*})\big )\big ]f_{j}(y_{j}(t))\\ \quad +\, \displaystyle \sum _{j=1}^{m}\big [c_{ij}+\displaystyle \sum _{k=1}^{m}\big (T_{ijk}{\tilde{f}}_{k}(y_{k}(t-\tau _{k}(t)))+T_{ikj}{\tilde{f}}_{k}(y_{k}^{*})\big )\big ]f_{j}(y_{j}(t-\tau _{j}(t)))\\ \quad +\, \displaystyle \sum _{j=1}^{m}p_{ij}\displaystyle \int _{-\infty }^{t}K_{ij}(t-s)f_{j}(y_{j}(s))\text {d}s,\\ {\dot{y}}_{j}(t) = -a^{1}_{j}y_{j}(t)+\displaystyle \sum _{i=1}^{n}\big [b^{1}_{ji}+\displaystyle \sum _{k=1}^{n}\big (W^{1}_{jik}{\tilde{g}}_{k}(x_{k}(t))+W^{1}_{jki}{\tilde{g}}_{k}(x_{k}^{*})\big )\big ]g_{i}(x_{i}(t))\\ \quad +\, \displaystyle \sum _{i=1}^{n}\big [c^{1}_{ji}+\displaystyle \sum _{k=1}^{n}\big (T^{1}_{jik}{\tilde{g}}_{k}(x_{k}(t-\sigma _{k}(t)))+T^{1}_{jki}{\tilde{g}}_{k}(x_{k}^{*})\big )\big ]g_{i}(x_{i}(t-\sigma _{i}(t)))\\ \quad +\, \displaystyle \sum _{i=1}^{n}p^{1}_{ji}\displaystyle \int _{-\infty }^{t}K_{ji}(t-s)g_{i}(x_{i}(s))\text {d}s, \end{array} \right. \\ \left\{ \begin{array}{ll} {\dot{x}}_{i}(t) = -\,a_{i}x_{i}(t)+\displaystyle \sum _{j=1}^{m}\big [b_{ij}+\displaystyle \sum _{k=1}^{m}(W_{ijk}+W_{ikj})\xi _{ijk}(y_{k}(t))\big ]f_{j}(y_{j}(t))\\ \quad +\, \displaystyle \sum _{j=1}^{m}\big [c_{ij}+\displaystyle \sum _{k=1}^{m}(T_{ijk}+T_{ikj})\zeta _{ijk}(y_{k}(t-\tau _{k}(t)))\big ]f_{j}(y_{j}(t-\tau _{j}(t)))\\ \quad +\, \displaystyle \sum _{j=1}^{m}p_{ij}\displaystyle \int _{-\infty }^{t}K_{ij}(t-s)f_{j}(y_{j}(s))\text {d}s,\\ {\dot{y}}_{j}(t) = -a^{1}_{j}y_{j}(t)+\displaystyle \sum _{i=1}^{n}\big [b^{1}_{ji}+\displaystyle \sum _{k=1}^{n}(W^{1}_{jik}+W^{1}_{jki})\xi ^{1}_{jik}(x_{k}(t))\big ]g_{i}(x_{i}(t))\\ \quad +\, \displaystyle \sum _{i=1}^{n}\big [c^{1}_{ji}+\displaystyle \sum _{k=1}^{n}(T^{1}_{jik}+T^{1}_{jki})\zeta ^{1}_{jik}(x_{k}(t-\sigma _{k}(t)))\big ]g_{i}(x_{i}(t-\sigma _{i}(t)))\\ \quad +\, \displaystyle \sum _{i=1}^{n}p^{1}_{ji}\displaystyle \int _{-\infty }^{t}K_{ji}(t-s)g_{i}(x_{i}(s))\text {d}s \end{array} \right. \end{aligned}$$
(9)

where

  • \(\xi _{ijk}(y_{k}(t))=\big (W_{ijk}{\tilde{f}}_{k}(y_{k}(t))+W_{ikj}{\tilde{f}}_{k}(y_{k}^{*})\big )/(W_{ijk}+W_{ikj}),\) which lies between \({\tilde{f}}_{k}(y_{k}(t))\) and \({\tilde{f}}_{k}(y_{k}^{*}),\)

  • \(\xi ^{1}_{jik}(x_{k}(t))=\big (W^{1}_{jik}{\tilde{g}}_{k}(x_{k}(t))+W^{1}_{jki}{\tilde{g}}_{k}(x_{k}^{*})\big )/(W^{1}_{jik}+W^{1}_{jki}),\) which lies between \({\tilde{g}}_{k}(x_{k}(t))\) and \({\tilde{g}}_{k}(x_{k}^{*}),\)

  • \(\zeta _{ijk}(y_{k}(t-\tau _{k}(t)))=\big (T_{ijk}{\tilde{f}}_{k}(y_{k}(t-\tau _{k}(t)))+T_{ikj}{\tilde{f}}_{k}(y_{k}^{*})\big )/(T_{ijk}+T_{ikj}),\) which lies between \({\tilde{f}}_{k}(y_{k}(t-\tau _{k}(t)))\) and \({\tilde{f}}_{k}(y_{k}^{*}),\)

  • \(\zeta ^{1}_{jik}(x_{k}(t-\sigma _{k}(t)))=\big (T^{1}_{jik}{\tilde{g}}_{k}(x_{k}(t-\sigma _{k}(t)))+T^{1}_{jki}{\tilde{g}}_{k}(x_{k}^{*})\big )/(T^{1}_{jik}+T^{1}_{jki}),\) which lies between \({\tilde{g}}_{k}(x_{k}(t-\sigma _{k}(t)))\) and \({\tilde{g}}_{k}(x_{k}^{*}).\)

If we denote,

  • \(x(.)=[x_{1}(\cdot ),\;x_{2}(\cdot ),\ldots , x_{n}(\cdot )]^{T},\)

  • \(y(.)=[y_{1}(\cdot ),\;y_{2}(\cdot ),\ldots , y_{m}(\cdot )]^{T},\)

  • \(f(y(\cdot ))=[f_{1}(y_{1}(\cdot )),\;f_{2}(y_{2}(\cdot )),\ldots , f_{m}(y_{m}(\cdot ))]^{T},\)

  • \(g(x(\cdot ))=[g_{1}(x_{1}(\cdot )),\;g_{2}(x_{2}(\cdot )),\ldots ,\;g_{n}(x_{n}(\cdot ))]^{T},\)

  • \(f(y(t-\tau (t)))=[f_{1}(y_{1}(t-\tau _{1}(t))),\ldots , f_{m}(y_{m}(t-\tau _{m}(t)))]^{T},\)

  • \(g(x(t-\sigma (t)))=[g_{1}(x_{1}(t-\sigma _{1}(t))),\ldots , g_{n}(x_{n}(t-\sigma _{n}(t)))]^{T},\)

  • \(A=diag\{a_{1},a_{2},\ldots ,a_{n}\},\)\(A^{1}=diag\{a^{1}_{1}, a^{1}_{2},\ldots ,\;a^{1}_{m}\},\)

  • \(B=(b_{ij})_{n\times m},\)\(B^{1}=(b^{1}_{ji})_{m\times n},\)\(C=(c_{ij})_{n\times m},\;C^{1}=(c^{1}_{ji})_{m\times n},\)

  • \(P=(p_{ij})_{n\times m},\)\(P^{1}=(p^{1}_{ji})_{m\times n},\)\(W_{i}=(W_{ijk})_{m\times m},\)

  • \(W^{1}_{j}=(W^{1}_{jik})_{n\times n},\)\(W=(W_{1}+W_{1}^{T},\;W_{2}+W_{2}^{T},\ldots ,\;W_{n}+W_{n}^{T}),\)

  • \(W^{1}=(W^{1}_{1}+(W^{1}_{1})^{T},\;W^{1}_{2}+(W^{1}_{2})^{T},\ldots ,\;W^{1}_{m}+(W^{1}_{m})^{T}),\; T_{i}=(T_{ijk})_{m\times m},\)

  • \(T^{1}_{j}=(T^{1}_{jik})_{n\times n},\;T=(T_{1}+T_{1}^{T},\;T_{2}+T_{2}^{T},\ldots , T_{n}+T_{n}^{T}),\)

  • \(T^{1}=(T^{1}_{1}+(T^{1}_{1})^{T},\;T^{1}_{2}+(T^{1}_{2})^{T},\ldots , T^{1}_{m}+(T^{1}_{m})^{T}),\)

  • \(\xi =(\xi _{1},\;\xi _{2},\ldots ,\;\xi _{n})^{T},\;\xi ^{1}=(\xi ^{1}_{1},\;\xi ^{1}_{2},\ldots ,\;\xi ^{1}_{m})^{T},\)

  • \(\varLambda =diag\{\xi _{1},\;\xi _{2},\ldots , \xi _{n}\},\)\(\varLambda ^{1}=diag\{\xi ^{1}_{1},\;\xi ^{1}_{2},\ldots ,\;\xi ^{1}_{m}\},\)

  • \(\zeta =(\zeta _{1},\;\zeta _{2},\ldots ,\zeta _{n})^{T},\)\(\zeta ^{1}=(\zeta ^{1}_{1},\;\zeta ^{1}_{2},\ldots ,\zeta ^{1}_{m})^{T},\)

  • \(\varGamma =diag\{\zeta _{1},\;\zeta _{2},\ldots ,\zeta _{n}\},\)\(\varGamma ^{1}=diag\{\zeta ^{1}_{1},\;\zeta ^{1}_{2},\ldots ,\;\zeta ^{1}_{m}\},\)

  • \(U(\cdot )=\big (u_{1}(\cdot ),\;u_{2}(\cdot ),\ldots ,u_{n}(\cdot )\big )^{T},\;{\check{V}}(\cdot )=\big (v_{1}(\cdot ),\;v_{2}(\cdot ),\ldots ,v_{m}(\cdot )\big )^{T},\)

then the system (9) can be rewritten in the following vector–matrix form:

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{x}}(t) = -\,Ax(t)+Bf(y(t))+Cf(y(t-\tau (t)))\\ \quad + P\int _{-\infty }^{t}K(t-s)f(y(s))\text {d}s+\varLambda ^{T}Wf(y(t))\\ \quad + \varGamma ^{T}Tf(y(t-\tau (t)))+U(t),\\ {\dot{y}}(t) = -A^{1}y(t)+B^{1}g(x(t))+C^{1}g(x(t-\sigma (t)))\\ \quad + P^{1}\displaystyle \int _{-\infty }^{t}K(t-s)g(x(s))\text {d}s+(\varLambda ^{1})^{T}W^{1}g(x(t))\\ \quad + (\varGamma ^{1})^{T}T^{1}g(x(t-\sigma (t)))+{\check{V}}(t), \end{array} \right. \end{aligned}$$
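The paper's examples are supported by numerical simulations. Purely as an illustration of how the vector–matrix form above can be integrated in practice, the following is a minimal forward-Euler sketch with one discrete delay per layer, an exponential kernel \(K(s)=e^{-s}\) truncated at a finite history, and hypothetical dimensions and parameter values; it is not the authors' code, and the state-dependent factors \(\varLambda ^{T}W\) and \(\varGamma ^{T}T\) are frozen to constant matrices for simplicity.

```python
import numpy as np

# Minimal forward-Euler integration of the vector-matrix BAM form above.
# All dimensions, parameter values and the kernel K(s) = exp(-s) are
# hypothetical placeholders, not the data of the paper's examples.

n, m = 2, 2                      # dimensions of x and y
h, T_end = 0.01, 20.0            # step size and horizon
steps = int(T_end / h)
hist = int(10.0 / h)             # truncation length for the distributed delay

rng = np.random.default_rng(0)
A,  A1  = np.diag([3.0, 2.5]), np.diag([2.8, 3.2])
B,  B1  = 0.2 * rng.standard_normal((n, m)), 0.2 * rng.standard_normal((m, n))
C,  C1  = 0.1 * rng.standard_normal((n, m)), 0.1 * rng.standard_normal((m, n))
P,  P1  = 0.1 * rng.standard_normal((n, m)), 0.1 * rng.standard_normal((m, n))
LW, LW1 = 0.05 * rng.standard_normal((n, m)), 0.05 * rng.standard_normal((m, n))  # stand-ins for Lambda^T W, (Lambda^1)^T W^1
GT, GT1 = 0.05 * rng.standard_normal((n, m)), 0.05 * rng.standard_normal((m, n))  # stand-ins for Gamma^T T, (Gamma^1)^T T^1

f = g = np.tanh                                   # bounded activations
tau   = lambda t: 0.5 * (1.0 + 0.5 * np.sin(t))   # bounded time-varying delays
sigma = lambda t: 0.4 * (1.0 + 0.5 * np.cos(t))
U   = lambda t: 0.3 * np.ones(n)                  # external inputs U(t), V(t)
Vin = lambda t: 0.2 * np.ones(m)

K = np.exp(-h * np.arange(hist)) * h              # quadrature weights for K(s) = e^{-s}

x_hist = np.tile(np.full(n, 0.5), (hist, 1))      # constant initial history
y_hist = np.tile(np.full(m, -0.5), (hist, 1))

for k in range(steps):
    t = k * h
    x, y = x_hist[-1], y_hist[-1]
    y_del = y_hist[-1 - min(int(tau(t) / h), hist - 1)]     # y(t - tau(t))
    x_del = x_hist[-1 - min(int(sigma(t) / h), hist - 1)]   # x(t - sigma(t))
    conv_f = K @ f(y_hist[::-1])                  # ~ int_0^inf K(s) f(y(t - s)) ds
    conv_g = K @ g(x_hist[::-1])
    dx = -A @ x + (B + LW) @ f(y) + (C + GT) @ f(y_del) + P @ conv_f + U(t)
    dy = -A1 @ y + (B1 + LW1) @ g(x) + (C1 + GT1) @ g(x_del) + P1 @ conv_g + Vin(t)
    x_hist = np.vstack([x_hist[1:], x + h * dx])  # slide the stored history window
    y_hist = np.vstack([y_hist[1:], y + h * dy])

print("final |x|^2 + |y|^2:", np.sum(x_hist[-1] ** 2) + np.sum(y_hist[-1] ** 2))
```

Monitoring \(\sum _{i}|x_{i}(t)|^{2}+\sum _{j}|y_{j}(t)|^{2}\) along such a trajectory gives a quick empirical check that it eventually enters a bounded set of the kind estimated in Theorems 1–3.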

Appendix 2 (Proof of Theorem 1)

Proof

Consider the radially unbounded and positive definite Lyapunov function:

$$\begin{aligned} V(t)= \frac{1}{2}\displaystyle \sum _{i=1}^{n}|x_{i}(t)|^{2}+\frac{1}{2}\displaystyle \sum _{j=1}^{m}|y_{j}(t)|^{2},\\ \quad D^{+}V(t)\le \displaystyle \sum _{i=1}^{n}|x_{i}(t)|\bigg [-a_{i}|x_{i}(t)|+\displaystyle \sum _{j=1}^{m}|b_{ij}||{\tilde{f}}_{j}(y_{j}(t))|\\ \quad +\displaystyle \sum _{j=1}^{m}|c_{ij}||{\tilde{f}}_{j}(y_{j}(t-\tau (t)))|\\ \quad +\displaystyle \sum _{j=1}^{m}|p_{ij}|\int _{-\infty }^{t}|K_{ij}(t-s)||{\tilde{f}}_{j}(y_{j}(s))|\text {d}s\\ \quad +\displaystyle \sum _{j=1}^{m} \displaystyle \sum _{k=1}^{m}|W_{ijk}||{\tilde{f}}_{k}(y_{k}(t))||{\tilde{f}}_{j}(y_{j}(t))|\\ \quad +\displaystyle \sum _{j=1}^{m} \displaystyle \sum _{k=1}^{m}|T_{ijk}||{\tilde{f}}_{k}(y_{k}(t-\tau (t)))||{\tilde{f}}_{j}(y_{j}(t-\tau (t)))|\\ \quad +|u_{i}(t)|\bigg ]+\displaystyle \sum _{j=1}^{m}|y_{j}(t)|\bigg [-a_{j}^{1}|y_{j}(t)|\\ \quad +\displaystyle \sum _{i=1}^{n}|b_{ji}^{1}||{\tilde{g}}_{i}(x_{i}(t))|+\displaystyle \sum _{i=1}^{n}|c_{ji}^{1}||{\tilde{g}}_{i}(x_{i}(t-\sigma (t)))|\\ \quad +\displaystyle \sum _{i=1}^{n}|p_{ji}^{1}|\int _{-\infty }^{t}|K_{ji}(t-s)||{\tilde{g}}_{i}(x_{i}(s))|\text {d}s\\ \quad +\displaystyle \sum _{i=1}^{n} \displaystyle \sum _{k=1}^{n}|W_{jik}^{1}||{\tilde{g}}_{k}(x_{k}(t))||{\tilde{g}}_{i}(x_{i}(t))|\\ \quad +\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}|T_{jik}^{1}||{\tilde{g}}_{k}(x_{k}(t-\sigma (t)))||{\tilde{g}}_{i}(x_{i}(t-\sigma (t)))|+|v_{j}(t)|\bigg ]\\ \quad \le \sum _{i=1}^{n}|x_{i}(t)|\bigg [-a_{i}|x_{i}(t)|\\ \quad +\bigg (\displaystyle \sum _{j=1}^{m}|b_{ij}|+\displaystyle \sum _{j=1}^{m}|c_{ij}|+\displaystyle \sum _{j=1}^{m}|p_{ij}|\\ \quad +\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|W_{ijk}|+\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|T_{ijk}|\bigg )L_{j}^{f}+|u_{i}(t)|\bigg ]\\ \quad +\displaystyle \sum _{j=1}^{m}|y_{j}(t)|\bigg [-a_{j}^{1}|y_{j}(t)|\\ \quad +\bigg (\displaystyle \sum _{i=1}^{n}|b_{ji}^{1}|+\displaystyle \sum _{i=1}^{n}|c_{ji}^{1}|+\displaystyle \sum _{i=1}^{n}|p_{ji}^{1}|\\ \quad +\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}|W_{jik}^{1}|+\displaystyle \sum _{i=1}^{n} \displaystyle \sum _{k=1}^{n}|T_{jik}^{1}|\bigg )L^{g}_{i}+|v_{j}(t)|\bigg ]\\ \quad = -\sum _{i=1}^{n}a_{i}|x_{i}(t)|^{2}+\sum _{i=1}^{n}|x_i(t)|\varGamma _{i}\\ \quad -\sum _{j=1}^{m}a^{1}_{j}|y_{j}(t)|^{2}+\sum _{j=1}^{m}|y_j(t)|{\tilde{\varGamma }}_{j}\\ \quad \le -\sum _{i=1}^{n}a_{i}|x_{i}(t)|^{2}+\frac{1}{2}\sum _{i=1}^{n}|x_i(t)|^{2}\\ \quad +\frac{1}{2}\sum _{i=1}^{n}\varGamma _{i}^{2}-\sum _{j=1}^{m}a^{1}_{j}|y_{j}(t)|^{2}\\ \quad +\frac{1}{2}\sum _{j=1}^{m}|y_j(t)|^{2}+\frac{1}{2}\sum _{j=1}^{m}{\tilde{\varGamma }}_{j}^{2}\\ \quad = \sum _{i=1}^{n}(\frac{1}{2}-a_{i})|x_{i}(t)|^{2}+\sum _{j=1}^{m}(\frac{1}{2}-a_{j}^{1})|y_j(t)|^{2}\\ \quad +\frac{1}{2}\bigg (\sum _{i=1}^{n}\varGamma _{i}^{2}+\sum _{j=1}^{m}{\tilde{\varGamma }}_j^{2}\bigg )\\ \quad \le -\delta \bigg [\sum _{i=1}^{n}|x_{i}(t)|^{2}+\sum _{j=1}^{m}|y_j(t)|^{2}\bigg ]+\frac{1}{2}\bigg (\sum _{i=1}^{n}\varGamma _{i}^{2}+\sum _{j=1}^{m}{\tilde{\varGamma }}_j^{2}\bigg ). \end{aligned}$$

Hence \(D^{+}V(t)<0\) whenever \((x^{T}(t),\;y^T(t))^{T}\in {\mathbb {R}}^{m+n}\backslash \varUpsilon _{1},\) that is, whenever \((x^{T}(t),\;y^T(t))^{T}\notin \varUpsilon _{1}.\) This implies that for every \((\varphi ^{T},\;\phi ^{T})^{T}\in \varUpsilon _{1}\) and \(t\ge t_{0},\) \(\big (x^{T}(t,\;t_{0},\;\varphi ),\;y^{T}(t,\;t_{0},\;\phi )\big )^{T}\in \varUpsilon _{1}\) holds, while for \((\varphi ^{T},\;\phi ^{T})^{T}\notin \varUpsilon _{1}\) there exists \(T>0\) such that

\(\big (x^{T}(t,\;t_{0},\;\varphi ),\;y^{T}(t,\;t_{0},\;\phi )\big )^{T}\in \varUpsilon _{1}\) holds for all \(t>t_{0}+T\). From Definition 1, it is concluded that the neural network model (1) is a dissipative system and that \(\varUpsilon _{1}\) is a positive invariant and globally attractive set of (1). \(\square\)

Appendix 3 (Proof of Theorem 2)

Proof

Consider the following Lyapunov functional:

$$\begin{aligned} V(t)=e^{\alpha t}\sum _{i=1}^{n}|x_{i}(t)|+e^{\alpha t}\sum _{j=1}^{m}|y_{j}(t)| \end{aligned}$$

Calculating the upper right-hand derivative of \(V(\cdot )\) along the positive half trajectory of system (1), we have

$$\begin{aligned} D^{+}V(t)&\le e^{\alpha t}\sum _{i=1}^{n}\bigg [(\alpha -a_{i})|x_{i}(t)|+\displaystyle \sum _{j=1}^{m}|b_{ij}||{\tilde{f}}_{j}(y_{j}(t))| \\&+\displaystyle \sum _{j=1}^{m}|c_{ij}||{\tilde{f}}_{j}(y_{j}(t-\tau (t)))| \\&+\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|W_{ijk}||{\tilde{f}}_{k}(y_{k}(t))||{\tilde{f}}_{j}(y_{j}(t))| \\&+\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|T_{ijk}||{\tilde{f}}_{k}(y_{k}(t-\tau (t)))||{\tilde{f}}_{j}(y_{j}(t-\tau (t)))| \\&+\displaystyle \sum _{j=1}^{m}|p_{ij}|\int _{-\infty }^{t}|K_{ij}(t-s)||{\tilde{f}}_{j}(y_{j}(s))|\text {d}s+|u_{i}(t)|\bigg ] \\&+e^{\alpha t}\sum _{j=1}^{m}\bigg [(\alpha -a_{j}^{1})|y_{j}(t)|+\displaystyle \sum _{i=1}^{n}|b_{ji}^{1}||{\tilde{g}}_{i}(x_{i}(t))| \\&+\displaystyle \sum _{i=1}^{n}|c_{ji}^{1}||{\tilde{g}}_{i}(x_{i}(t-\sigma (t)))| \\&+\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}|W_{jik}^{1}||{\tilde{g}}_{k}(x_{k}(t))||{\tilde{g}}_{i}(x_{i}(t))| \\&+\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}|T_{jik}^{1}||{\tilde{g}}_{k}(x_{k}(t-\sigma (t)))||{\tilde{g}}_{i}(x_{i}(t-\sigma (t)))| \\&+\displaystyle \sum _{i=1}^{n}|p_{ji}^{1}|\int _{-\infty }^{t}|K_{ji}(t-s)||{\tilde{g}}_{i}(x_{i}(s))|\text {d}s+|v_{j}(t)|\bigg ] \\&\le e^{\alpha t}\sum _{i=1}^{n}\bigg [(\alpha -a_{i})|x_{i}(t)|+\bigg (\displaystyle \sum _{j=1}^{m}|b_{ij}| \\&+\displaystyle \sum _{j=1}^{m}|c_{ij}|+\displaystyle \sum _{j=1}^{m}|p_{ij}| \\&+\big \{\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|W_{ijk}|+\displaystyle \sum _{j=1}^{m}\displaystyle \sum _{k=1}^{m}|T_{ijk}|\big \}l_{j}^{f}\bigg )l_{j}^{f}+|u_{i}(t)|\bigg ] \\&+e^{\alpha t}\sum _{j=1}^{m}\bigg [(\alpha -a_{j}^{1})|y_{j}(t)| \\&+\bigg (\displaystyle \sum _{i=1}^{n}|b_{ji}^{1}|+\displaystyle \sum _{i=1}^{n}|c_{ji}^{1}|+\displaystyle \sum _{i=1}^{n}|p_{ji}^{1}| +\big \{\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{k=1}^{n}|W_{jik}^{1}| \\&+\displaystyle \sum _{i=1}^{n} \displaystyle \sum _{k=1}^{n}|T_{jik}^{1}|\big \}l^{g}_{i}\bigg )l^{g}_{i}+|v_{j}(t)|\bigg ]<0 \end{aligned}$$
(10)

when \((x^{T}(t),\;y^{T}(t))^{T}\in {\mathbb {R}}^{m+n}{\setminus} {\tilde{\varUpsilon }}_{2}\). Integrating both sides of inequality (10) from 0 to \(t>0\) gives \(V(t)\le V(0)\), and it follows that:

$$\begin{aligned} |x(t)|+|y(t)|<e^{-\alpha t}(|x(0)|+|y(0)|). \end{aligned}$$
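For readability, this step can be spelled out with the definition of \(V\) above, writing \(|x(t)|=\sum _{i=1}^{n}|x_{i}(t)|\) and \(|y(t)|=\sum _{j=1}^{m}|y_{j}(t)|\) (a routine intermediate line, added here for completeness):

$$\begin{aligned} e^{\alpha t}\big (|x(t)|+|y(t)|\big )=V(t)\le V(0)=|x(0)|+|y(0)|, \quad \text {so that}\quad |x(t)|+|y(t)|\le e^{-\alpha t}\big (|x(0)|+|y(0)|\big ). \end{aligned}$$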

Thus, it is concluded that \({\tilde{\varUpsilon }}_{2}\) is a globally exponentially attractive set and also the neural network model (1) is a globally exponentially dissipative system. \(\square\)

Appendix 4 (Proof of Theorem 3)

Proof

We choose the following Lyapunov–Krasovskii functional:

$$\begin{aligned} V(t)=V_{1}(t)+V_{2}(t) \end{aligned}$$

where:

$$\begin{aligned} V_{1}(t)&= 2\sum _{i=1}^{n}\int _{0}^{x_{i}(t)}g_{i}(s)\text {d}s+\sum _{i=1}^{n}\int _{t-\sigma _{i}(t)}^{t}g_{i}^{2}(x_{i}(s))\text {d}s\\&+\sum _{j=1}^{m}\sum _{i=1}^{n}|p_{ji}|\int _{0}^{+\infty }K_{ji}(s)\int _{t-s}^{t}g_{i}^{2}(x_{i}(s_{1}))\text {d}s_{1}\text {d}s\\&+\frac{\varepsilon _{4}}{1-\sigma }\int _{t-\sigma (t)}^{t}g^{T}(x(s))T^{1}(T^{1})^{T}g(x(s))\text {d}s\\ V_{2}(t)&= 2\sum _{j=1}^{m}\int _{0}^{y_{j}(t)}f_{j}(s)\text {d}s+\sum _{j=1}^{m}\int _{t-\tau _{j}(t)}^{t}f_{j}^{2}(y_{j}(s))\text {d}s\\&+\sum _{i=1}^{n}\sum _{j=1}^{m}|p^{1}_{ij}|\int _{0}^{+\infty }K_{ij}(s)\int _{t-s}^{t}f_{j}^{2}(y_{j}(s_{1}))\text {d}s_{1}\text {d}s\\&+\frac{\varepsilon _{2}}{1-\tau }\int _{t-\tau (t)}^{t}f^{T}(y(s))TT^{T}f(y(s))\text {d}s \end{aligned}$$
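For the reader's convenience, the role of the double-integral terms can be made explicit: differentiating them (a standard computation, not written out in the text) produces exactly the kernel differences that appear in the bounds on \(D^{+}V_{1}(t)\) and \(D^{+}V_{2}(t)\) below, for instance

$$\begin{aligned} \frac{\text {d}}{\text {d}t}\int _{0}^{+\infty }K_{ji}(s)\int _{t-s}^{t}g_{i}^{2}(x_{i}(s_{1}))\text {d}s_{1}\text {d}s =\int _{0}^{+\infty }K_{ji}(s)\big [g_{i}^{2}(x_{i}(t))-g_{i}^{2}(x_{i}(t-s))\big ]\text {d}s. \end{aligned}$$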

Calculating the time derivative of \(V(\cdot )\) along any trajectory of system (1) yields:

$$\begin{aligned} D^{+}V_{1}(t)\le & 2g^{T}(x(t))\big [-Ax(t)+(B+\varLambda ^{T}W)f(y(t))\\&+(C+\varGamma ^{T}T)f(y(t-\tau (t)))\\&+P\displaystyle \int _{-\infty }^{t}K(t-s)f(y(s))\text {d}s+U(t)\big ]\\&+g^{2}(x(t))-(1-\sigma )g^{2}(x(t-\sigma (t)))\\&+\displaystyle \sum _{j=1}^{m}\sum _{i=1}^{n}|p_{ji}|\displaystyle \int _{0}^{+\infty }K_{ji}(s)[g_{i}^{2}(x_{i}(t))-g_{i}^{2}(x_{i}(t-s))]\text {d}s\\&+\frac{\varepsilon _{4}}{1-\sigma }g^{T}(x(t))(T^{1})^{T}T^{1}g(x(t))\\&-\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\\le & -2\frac{A}{L^{g}}g^{2}(x(t))+2g^{T}(x(t))(B+\varLambda ^{T}W)f(y(t))\\&+2g^{T}(x(t))(C+\varGamma ^{T}T)f(y(t-\tau (t)))\\&+2g^{T}(x(t))P\displaystyle \int _{-\infty }^{t}K(t-s)f(y(s))\text {d}s\\&+2|g(x(t))||U(t)|+g^{2}(x(t))\\&-(1-\sigma )g^{2}(x(t-\sigma (t)))\\&+\displaystyle \sum _{j=1}^{m}\sum _{i=1}^{n}|p_{ji}|\displaystyle \int _{0}^{+\infty }K_{ji}(s)[g_{i}^{2}(y_{i}(t))\\&-g_{i}^{2}(y_{i}(t-s))]\text {d}s+\frac{\varepsilon _{4}}{1-\sigma }g^{T}(x(t))T^{1}(T^{1})^{T}g(x(t))\\&-\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\ D^{+}V_{2}(t)\le & 2f(y(t))\bigg [-A^{1}y(t)+(B^{1}+(\varLambda ^{1})^{T}W^{1})g(x(t))\\&+(C^{1}+(\varGamma ^{1})^{T}T^{1})g(x(t-\sigma (t)))\\&+P^{1}\displaystyle \int _{-\infty }^{t}K(t-s)g(x(s))\text {d}s+{\check{V}}(t)\bigg ]\\&+f^{2}(y(t))-(1-\tau )f^{2}(y(t-\tau (t)))\\&+\sum _{i=1}^{n}\sum _{j=1}^{m}|p^{1}_{ij}|\int _{0}^{+\infty }K_{ij}(s)[f_{j}^{2}(y_{j}(t))-f_{j}^{2}(y_{j}(t-s))]\text {d}s\\&+\frac{\varepsilon _{2}}{1-\tau }f^{T}(y(t))TT^{T}f(y(t))\\&-\varepsilon _{2}f^{T}(y(t-\tau (t)))TT^{T}f(y(t-\tau (t)))\\\le & -2\frac{A^{1}}{L^{f}}f^{2}(y(t))+2f^{T}(y(t))(B^{1}+(\varLambda ^{1})^{T}W^{1})g(x(t))\\&+2f^{T}(y(t))(C^{1}+(\varGamma ^{1})^{T}T^{1})g(x(t-\sigma (t)))\\&+2f^{T}(y(t))P^{1}\displaystyle \int _{-\infty }^{t}K(t-s)g(x(s))\text {d}s\\&+2|f(y(t))||{\check{V}}(t)|+f^{2}(y(t))-(1-\tau )f^{2}(y(t-\tau (t)))\\&+\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{m}|p^{1}_{ij}|\displaystyle \int _{0}^{+\infty }K_{ij}(s)[f_{j}^{2}(y_{j}(t))-f_{j}^{2}(y_{j}(t-s))]\text {d}s\\&+\frac{\varepsilon _{2}}{1-\tau }f^{T}(y(t))TT^{T}f(y(t))\\&-\varepsilon _{2}f^{T}(y(t-\tau (t)))TT^{T}f(y(t-\tau (t))) \end{aligned}$$

Applying Lemma 1 to the above inequality, we obtain:

$$\begin{aligned}&2g^{T}(x(t))(B+B^{1})f(y(t))\le g^{T}(x(t))(B+B^{1})Q^{-1}(B+B^{1})^{T}g(x(t))\\&\quad +f^{T}(y(t))Qf(y(t))\\&\quad 2g^{T}(x(t))Cf(y(t-\tau (t)))\le \frac{1}{1-\tau }g^{T}(x(t))CC^{T}g(x(t))\\&\quad +(1-\tau )f^{T}(y(t-\tau (t)))f(y(t-\tau (t)))\\&\quad 2f^{T}(y(t))C^{1}g(x(t-\sigma (t)))\le \frac{1}{1-\sigma }f^{T}(y(t))C^{1}(C^{1})^{T}f(y(t))\\&\quad +(1-\sigma )g^{T}(x(t-\sigma (t)))g(x(t-\sigma (t))) \end{aligned}$$
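Lemmas 1 and 2 are stated in the main text and are not reproduced here; the bounds above and below are consistent with the standard matrix and scalar forms of Young's inequality, recalled as a reminder (under the assumption that this is indeed their content): for vectors \(a,\;b\) of matching dimension, any symmetric positive definite matrix \(Q\) and any scalar \(\varepsilon >0\),

$$\begin{aligned} 2a^{T}b\le a^{T}Qa+b^{T}Q^{-1}b,\qquad 2a^{T}b\le \varepsilon ^{-1}a^{T}a+\varepsilon b^{T}b. \end{aligned}$$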

It can be verified that (for more details see [46]):

$$\begin{aligned}&\varGamma ^{T}\varGamma \le \zeta ^{T} \zeta I=\displaystyle \sum _{i=1}^{n}(\zeta _{i})^{2}I\le \displaystyle \sum _{i=1}^{n}\delta _{0i}^{2}I=\delta _{0}I,\\&\varLambda ^{T}\varLambda \le \xi ^{T} \xi I=\displaystyle \sum _{i=1}^{n}(\xi _{i})^{2}I\le \displaystyle \sum _{i=1}^{n}\delta _{1i}^{2}I=\delta _{1}I,\\&(\varGamma ^{1})^{T}\varGamma ^{1}\le (\zeta ^{1})^{T}\zeta ^{1} I=\displaystyle \sum _{i=1}^{n}(\zeta ^{1}_{i})^{2}I\le \displaystyle \sum _{i=1}^{n}\delta _{2i}^{2}I=\delta _{2}I,\\&(\varLambda ^{1})^{T}\varLambda ^{1}\le (\xi ^{1})^{T}\xi ^{1} I=\displaystyle \sum _{i=1}^{n}(\xi ^{1}_{i})^{2}I\le \displaystyle \sum _{i=1}^{n}\delta _{3i}^{2}I=\delta _{3}I \end{aligned}$$

Therefore, by Lemma 2, we have:

$$\begin{aligned}&2g^{T}(x(t))\varLambda ^{T}Wf(y(t))\le \varepsilon _{1}^{-1}g^{T}(x(t))\varLambda ^{T}\varLambda g(x(t))\\&\qquad +\varepsilon _{1}f^{T}(y(t))TT^{T}f(y(t))\\&\quad \le \delta _{1}\varepsilon _{1}^{-1}g^{T}(x(t))g(x(t))\\&\qquad +\varepsilon _{1}f^{T}(y(t))TT^{T}f(y(t))\\&2g^{T}(x(t))\varGamma ^{T}Tf(y(t-\tau (t)))\le \varepsilon _{2}^{-1}g^{T}(x(t))\varGamma ^{T}\varGamma g(x(t))\\&\qquad +\varepsilon _{2}f^{T}(y(t-\tau (t)))TT^{T}f(y(t-\tau (t)))\\&\quad \le \delta _{0}\varepsilon _{2}^{-1}g^{T}(x(t))g(x(t))\\&\qquad +\varepsilon _{2}f^{T}(y(t-\tau (t)))TT^{T}f(y(t-\tau (t)))\\&2f^{T}(y(t))(\varLambda ^{1})^{T}W^{1}g(x(t))\le \varepsilon _{3}^{-1}f^{T}(y(t))(\varLambda ^{1})^{T}\varLambda ^{1} f^{T}(y(t))\\&\qquad +\varepsilon _{3}g^{T}(x(t))W^{1}(W^{1})^{T}g(x(t))\\&\quad \le \delta _{3}\varepsilon _{3}^{-1}f^{T}(y(t))f^{T}(y(t))\\&\qquad +\varepsilon _{3}g^{T}(x(t))W^{1}(W^{1})^{T}g(x(t))\\&2f^{T}(y(t))(\varGamma ^{1})^{T}T^{1})g(x(t-\sigma (t)))\le \varepsilon _{4}^{-1}f^{T}(y(t))(\varGamma ^{1})^{T}\varGamma ^{1} f^{T}(y(t))\\&\qquad +\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\&\quad \le \delta _{2}\varepsilon _{4}^{-1}f^{T}(y(t))f^{T}(y(t))\\&\qquad +\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\&D^{+}V(t)\le -2\frac{A}{L^{g}}g^{2}(x(t))\\&\qquad +g^{T}(x(t))(B+B^{1})Q^{-1}(B+B^{1})^{T}g(x(t))\\&\qquad +f^{T}(y(t))Qf(y(t))+\delta _{0}\varepsilon _{2}^{-1}g^{T}(x(t))g(x(t))\\&\qquad +\varepsilon _{2}f^{T}(y(t-\tau (t)))T^{T}Tf(y(t-\tau (t)))\\&\qquad +\frac{1}{1-\tau }g^{T}(x(t))CC^{T}g(x(t))\\&\qquad +(1-\tau )f^{T}(y(t-\tau (t)))f(y(t-\tau (t)))\\&\qquad +\delta _{1}\varepsilon _{1}^{-1}g^{T}(x(t))g(x(t))+\varepsilon _{1}f^{T}(y(t))TT^{T}f(y(t))\\&\qquad +2g^{T}(x(t))P\displaystyle \int _{-\infty }^{t}K(t-s)f(y(s))\text {d}s\\&\qquad +2|g(x(t))||U(t)|+g^{2}(x(t))-(1-\sigma )g^{2}(x(t-\sigma (t)))\\&\qquad +\displaystyle \sum _{j=1}^{m}\sum _{i=1}^{n}|p_{ji}|\displaystyle \int _{0}^{+\infty }K_{ji}(s)[g_{i}^{2}(x_{i}(t))-g_{i}^{2}(x_{i}(t-s))]\text {d}s\\&\qquad -2\frac{A^{1}}{L^{f}}f^{2}(y(t))+\delta _{3}\varepsilon _{3}^{-1}f^{T}(y(t))f^{T}(y(t))\\&\qquad +\varepsilon _{3}g^{T}(x(t))W^{1}(W^{1})^{T}g(x(t))\\&\qquad +\frac{1}{1-\sigma }f^{T}(y(t))C^{1}(C^{1})^{T}f(y(t))\\&\qquad +(1-\sigma )g^{T}(x(t-\sigma (t)))g(x(t-\sigma (t)))\\&\qquad +\delta _{2}\varepsilon _{4}^{-1}f^{T}(y(t))f(y(t))\\&\qquad +\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\&\qquad +2f^{T}(y(t))P^{1}\displaystyle \int _{-\infty }^{t}K(t-s)g(x(s))\text {d}s\\&\qquad +\frac{\varepsilon _{4}}{1-\sigma }g^{T}(x(t))T^{1}(T^{1})^{T}g(x(t))\\&\qquad -\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\&\qquad +2|f(y(t))||{\check{V}}(t)|+f^{2}(y(t))-(1-\tau )f^{2}(y(t-\tau (t)))\\&\qquad +\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{m}|p^{1}_{ij}|\displaystyle \int _{0}^{+\infty }K_{ij}(s)[f_{j}^{2}(y_{j}(t))-f_{j}^{2}(y_{j}(t-s))]\text {d}s\\&\qquad +\frac{\varepsilon _{4}}{1-\sigma }g^{T}(x(t))T^{1}(T^{1})^{T}g(x(t))\\&\qquad -\varepsilon _{4}g^{T}(x(t-\sigma (t)))T^{1}(T^{1})^{T}g(x(t-\sigma (t)))\\&\quad =-\,2\frac{A}{L^{g}}g^{2}(x(t))+g^{T}(x(t))\bigg [I+(B+B^{1})Q^{-1}(B+B^{1})^{T}\\&\qquad +\delta _{0}\varepsilon _{2}^{-1}+\frac{1}{1-\tau }CC^{T}+\varepsilon _{3}W^{1}(W^{1})^{T}\\&\qquad +\delta _{1}\varepsilon _{1}^{-1}+\frac{\varepsilon _{4}}{1-\sigma }T^{1}(T^{1})^{T}\bigg ]g(x(t))\\&\qquad +2g^{T}(x(t))P\displaystyle \int _{-\infty }^{t}K(t-s)f(y(s))\text {d}s+2|g(x(t))||U(t)|\\&\qquad 
+\sum _{j=1}^{m}\sum _{i=1}^{n}|p_{ji}|\int _{0}^{+\infty }K_{ji}(s)[g_{i}^{2}(x_{i}(t))-g_{i}^{2}(x_{i}(t-s))]\text {d}s\\&\qquad -2\frac{A^{1}}{L^{f}}f^{2}(y(t))+f^{T}(y(t))\bigg [Q+I+\varepsilon _{1}TT^{T}\\&\qquad +\delta _{3}\varepsilon _{3}^{-1}C^{1}(C^{1})^{T}+\frac{1}{1-\sigma }C^{1}(C^{1})^{T}\\&\qquad +\delta _{2}\varepsilon _{4}^{-1}+\frac{\varepsilon _{2}}{1-\tau }TT^{T}\bigg ]f(y(t))\\&\qquad +2f^{T}(y(t))P^{1}\displaystyle \int _{-\infty }^{t}K(t-s)g(x(s))\text {d}s+2|f(y(t))||{\check{V}}(t)|\\&\qquad +\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{m}|p^{1}_{ij}|\displaystyle \int _{0}^{+\infty }K_{ij}(s)[f_{j}^{2}(y_{j}(t))-f_{j}^{2}(y_{j}(t-s))]\text {d}s \end{aligned}$$

By the inequality \(2ab\le a^{2}+b^{2}\) for any \(a,\;b\in {\mathbb {R}},\) we obtain:

$$\begin{aligned}&D^{+}V(t)=D^{+}V_{1}(t)+D^{+}V_{2}(t)\\&\quad \le -2\frac{A(\xi _{1}+\xi _{2})}{L^{g}}g^{2}(x(t))+g^{T}(x(t))\bigg [(1+\Vert P\Vert _{1}\\&\qquad +\Vert P^{1}\Vert _{\infty })I+(B+B^{1})Q^{-1}(B+B^{1})^{T}+\delta _{0}\varepsilon _{2}^{-1}\\&\qquad +\frac{1}{1-\tau }CC^{T}+\varepsilon _{3}W^{1}(W^{1})^{T}\\&\qquad +\delta _{1}\varepsilon _{1}^{-1}+\frac{\varepsilon _{4}}{1-\sigma }T^{1}(T^{1})^{T}\bigg ]g(x(t))+2|g(x(t))||U(t)|\\&\qquad -2\frac{A^{1}(\zeta _{1}+\zeta _{2})}{L^{f}}f^{2}(y(t))+f^{T}(y(t))\bigg [Q+(1+\Vert P^{1}\Vert _{1}\\&\qquad +\Vert P\Vert _{\infty })I+\varepsilon _{1}TT^{T}+\delta _{3}\varepsilon _{3}^{-1}C^{1}(C^{1})^{T}\\&\qquad +\frac{1}{1-\sigma }C^{1}(C^{1})^{T}+\delta _{2}\varepsilon _{4}^{-1}\\&\qquad +\frac{\varepsilon _{2}}{1-\tau }TT^{T}\bigg ]f(y(t))+2|f(y(t))||{\check{V}}(t)|\\&\quad \le g^{T}(x(t))\chi g(x(t))-2\frac{A\xi _{1}}{L^{g}}g(x(t))(g(x(t))\\&\qquad -\frac{L^{g}|u_i(t)|}{A\xi _{1}})+f^{T}(y(t)){\bar{\chi }}f(y(t))\\&\qquad -2\frac{A^{1}\zeta _{1}}{L^{f}}f(y(t))(f(y(t))-\frac{L^{f}|{\check{V}}(t)|}{\zeta _{1}A^{1}}). \end{aligned}$$

where

$$\begin{aligned}&\chi =N+(1+\Vert P\Vert _{1}+\Vert P^{1}\Vert _{\infty })I\\&\qquad +(B+B^{1})Q^{-1}(B+B^{1})^{T}+\delta _{0}\varepsilon _{2}^{-1}+\frac{1}{1-\tau }CC^{T}\\&\qquad +\varepsilon _{3}W^{1}(W^{1})^{T}+\delta _{1}\varepsilon _{1}^{-1}+\frac{\varepsilon _{4}}{1-\sigma }T^{1}(T^{1})^{T},\\&{\bar{\chi }}={\bar{N}}+Q+(1+\Vert P^{1}\Vert _{1}+\Vert P\Vert _{\infty })I+\varepsilon _{1}TT^{T}\\&\qquad +\delta _{3}\varepsilon _{3}^{-1}C^{1}(C^{1})^{T}+\frac{1}{1-\sigma }C^{1}(C^{1})^{T}\\&\qquad +\delta _{2}\varepsilon _{4}^{-1}+\frac{\varepsilon _{2}}{1-\tau }TT^{T} \end{aligned}$$

Using Lemma 3 and Eq. (4), we get:

$$\begin{aligned} \chi<0,\;{\bar{\chi }}<0 \end{aligned}$$
(11)
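As a quick numerical sanity check of conditions such as (11), one can test negative definiteness of a candidate matrix through the largest eigenvalue of its symmetric part. The sketch below is illustrative only: the two matrices are hypothetical stand-ins, not the \(\chi\) and \({\bar{\chi }}\) assembled from the paper's example data.

```python
import numpy as np

def is_negative_definite(M: np.ndarray, tol: float = 1e-9) -> bool:
    """Return True if the symmetric part of M has all eigenvalues below -tol."""
    S = 0.5 * (M + M.T)                  # definiteness is a property of the symmetric part
    return float(np.max(np.linalg.eigvalsh(S))) < -tol

# Hypothetical stand-ins for chi and bar-chi.
chi_candidate    = np.array([[-4.0, 0.5], [0.5, -3.0]])
barchi_candidate = np.array([[-2.5, 0.3], [0.3, -5.0]])

print(is_negative_definite(chi_candidate), is_negative_definite(barchi_candidate))
```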

In accordance with Eqs. (4) and (11), we obtain:

$$\begin{aligned}&D^{+}V(t)\le -2\frac{A\xi _{1}}{L^{g}}g^{T}(x(t))\bigg (g(x(t))-\frac{L^{g}|U(t)|}{A\xi _{1}}\bigg ) \\&\quad -2\frac{A^{1}\zeta _{1}}{L^{f}}f^{T}(y(t))\bigg (f(y(t))-\frac{L^{f}|{\check{V}}(t)|}{A^{1}\zeta _{1}}\bigg )<0. \end{aligned}$$
(12)

When \((x^{T}(t),\;y^{T}(t))\in {\mathbb {R}}^{n+m}\setminus \varUpsilon _{3}\), inequality (12) implies that the neural network model (1) is a dissipative system and that \(\varUpsilon _{3}\) is a positive invariant and globally attractive set of (1). \(\square\)

About this article

Cite this article

Aouiti, C., Sakthivel, R. & Touati, F. Global dissipativity of high-order Hopfield bidirectional associative memory neural networks with mixed delays. Neural Comput & Applic 32, 10183–10197 (2020). https://doi.org/10.1007/s00521-019-04552-8
