
Neurocomputing

Volume 267, 6 December 2017, Pages 85-94

Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays

https://doi.org/10.1016/j.neucom.2017.04.059

Abstract

This paper investigates the global dissipativity and global exponential dissipativity of neural networks with both interval time-varying delays and interval distributed time-varying delays. By constructing a set of appropriate Lyapunov–Krasovskii functionals and employing the Newton–Leibniz formulation and the free weighting matrix method, some dissipativity criteria that depend on the upper and lower bounds of the time-varying delays are derived in terms of linear matrix inequalities (LMIs), which can be easily verified via the LMI toolbox. Moreover, a positive invariant and globally attractive set is derived via the established LMIs. Finally, two numerical examples and their simulations are provided to demonstrate the effectiveness of the proposed criteria.

Introduction

During the past decades, neural networks have received considerable attention owing to their fruitful applications in a variety of areas such as signal processing, automatic control engineering, associative memories, parallel computation, combinatorial optimization, and pattern recognition [1], [2], [3]. In the study of neural networks, time delays are frequently encountered as a result of the inherent communication time between neurons and the finite switching speed of amplifiers [4], [5], [6]. Besides, in hardware implementation, time delays usually cause oscillation, instability, divergence, chaos, or other poor performance of neural networks [7], [8]. Therefore, the study of the dynamic behaviors of delayed neural networks has received considerable attention in recent years [9], [10], [11]. As for delay conditions, both delay-independent and delay-dependent conditions have been developed. Delay-dependent conditions are usually less conservative than delay-independent ones, especially for systems with small delays, since the former take advantage of additional information about the time delays. Therefore, in recent years, much attention has been paid to delay-dependent conditions for stability, dissipativity, synchronization control, and periodic attractors of neural networks, and many interesting results have been proposed, especially based on the Lyapunov–Razumikhin method, Lyapunov–Krasovskii functionals, and the linear matrix inequality (LMI) approach; see [12], [13], [14], [15], [16], [17] for instances.

The theory of dissipativity in dynamical systems has drawn many researchers' attention since it was introduced in the 1970s by Willems in terms of an inequality involving a storage function and a supply rate [18], and it has found applications in many areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [19], [20], [21]. In fact, the notion of dissipativity is a generalization of Lyapunov stability, which focuses only on the stability of equilibrium points. Nevertheless, the orbits of a neural network do not always approach a single equilibrium point, and in some situations an equilibrium point does not even exist. Basically, the aim of dissipativity analysis is to find globally attractive sets. Once an attractive set is found, we only need to focus on the dynamical properties inside it, since there is no equilibrium, periodic solution, or chaotic attractor outside the attractive set [22]. Dissipativity theory provides a fundamental framework for the analysis and design of control systems using input–output descriptions based on system-energy considerations, and it performs well for neural networks. Moreover, dissipativity theory establishes important connections between physics, system theory, and control engineering [23]. In recent years, various interesting results have been obtained on the dissipativity of delayed neural networks [24], [25], [26], [27]. In particular, the dissipativity of neural networks with constant delays was studied in [27], where some sufficient conditions for global dissipativity were derived. By introducing a triple-summable term in the Lyapunov functional and applying stochastic analysis techniques, the problem of dissipativity and passivity analysis for uncertain discrete-time stochastic Markovian jump neural networks with additive time-varying delays was investigated in [24].
By using the framework of Filippov solutions, differential inclusion theory, an appropriate Lyapunov–Krasovskii functional, and the linear matrix inequality (LMI) technique, Ref. [26] considered the dissipativity of memristor-based complex-valued neural networks with time-varying delays. Note that in many practical applications of neural networks [28], the time delay is usually time-varying and belongs to an interval whose lower bound is not restricted to be zero. Hence, it is necessary to investigate systems with interval time-varying delays [29], [30], [31], [32]. In [29], based on a free-matrix-based integral inequality, the stability of recurrent neural networks with interval time-varying delay was studied, where the information on the activation function and the lower bound of the delay are both fully considered. In [30], the global robust point dissipativity of an uncertain neural network model with interval time-varying delays was investigated via Lyapunov theory and inequality techniques. Very recently, Park and co-workers [31] studied the stability and dissipativity of static neural networks with interval time-varying delay by fully utilizing the information on the neuron activation function and employing a Wirtinger-based inequality, proposed in [33], that can be used to estimate the derivative of the Lyapunov–Krasovskii functional. On the other hand, as pointed out in [34], [35], [36], neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and hence there is a distribution of propagation delays over a period of time. In Ref. [34], a neural circuit with distributed delays was designed that solves a general problem of recognizing patterns in time-dependent signals. In Refs. [35], [36], the global stability, periodic solutions, and convergence of neural networks with distributed delays were investigated.
Dynamic properties of neural networks with distributed delays, such as stability, passivity, and almost periodic solutions, have also been explored. However, there is relatively little work on the dissipativity of neural networks with both interval time-varying delays and interval distributed time-varying delays. These observations motivate the present study.

In the present paper, the dissipativity problems are investigated for neural networks with interval time-varying delay and interval distributed time-varying delay. The upper and lower bounds of both the interval time-varying delay and the distributed time-varying delay are extensively considered. By constructing a set of appropriate Lyapunov–Krasovskii functionals and employing the Newton–Leibniz formulation and the free weighting matrix method, some LMI-based sufficient conditions are derived that guarantee the global dissipativity and global exponential dissipativity of the addressed neural networks, including the case of parameter uncertainties; these conditions can be easily verified via the LMI toolbox. Meanwhile, a positive invariant and globally attractive set of the addressed neural networks is obtained via the LMIs. We do not impose any restriction on the derivative of the time-varying delays. In other words, the results developed in this paper can be applied to time-varying delays that are not differentiable, such as piecewise delays, fuzzy delays, and stochastic delays, and in this sense they improve on the results in [29], [30]. The rest of this paper is organized as follows. In Section 2, some notations, definitions, and well-known technical lemmas are given. Section 3 presents the global dissipativity and global exponential dissipativity criteria. Two numerical examples and their computer simulations are provided in Section 4 to demonstrate the effectiveness of the proposed criteria. Finally, the paper is concluded in Section 5.

Section snippets

Preliminaries

Consider the following neural network model with both interval time-varying delay and interval distributed time-varying delay:

ẋ(t) = −Cx(t) + Af(x(t)) + Bf(x(t − h(t))) + D ∫_{t−σ₂(t)}^{t−σ₁(t)} f(x(s)) ds + u(t),  t ∈ ℝ⁺,
x(t) = φ(t),  t ∈ [−τ_max, 0],    (1)

where x(t) = (x₁(t), x₂(t), …, xₙ(t))ᵀ ∈ ℝⁿ is the state vector of the network at time t, n corresponds to the number of neurons, C = diag(c₁, c₂, …, cₙ) > 0 is a positive diagonal matrix; A = (aᵢⱼ)ₙ×ₙ, B = (bᵢⱼ)ₙ×ₙ, and D = (dᵢⱼ)ₙ×ₙ represent the connection weight matrices; f(x(t)) = (f₁(x₁(t)), f₂(x₂(t)), …, fₙ(xₙ(t)))ᵀ denotes the neuron activation function.
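Model (1) can be explored numerically with a simple forward-Euler scheme. The sketch below is illustrative only: the delay functions h, σ₁, σ₂, the tanh activation, the constant input u, and the constant initial function φ are all assumptions chosen for demonstration (the connection matrices reuse those of Example 1), not quantities specified at this point in the paper.

```python
import numpy as np

# Connection matrices taken from Example 1 of this paper
C = np.diag([5.0, 5.0])
A = np.array([[0.2, 0.1], [0.5, 0.1]])
B = np.array([[0.5, 0.0], [0.3, 0.2]])
D = np.array([[0.15, 0.1], [0.0, 0.3]])

f = lambda x: np.tanh(x)               # assumed bounded activation
u = lambda t: np.array([0.1, 0.1])     # assumed constant external input

# Assumed interval delays: h(t) in [0.4, 0.6], distributed delay over
# [t - sigma2, t - sigma1] with sigma1 = 0.1, sigma2 = 0.4
h = lambda t: 0.5 + 0.1 * np.sin(t)
sigma1, sigma2 = 0.1, 0.4

dt, T, tau_max = 0.001, 5.0, 1.0
hist = int(tau_max / dt)               # history buffer for phi on [-tau_max, 0]
n_steps = int(T / dt)
x = np.zeros((n_steps + hist, 2))
x[:hist] = 0.5                         # constant initial function phi

for k in range(hist, n_steps + hist - 1):
    t = (k - hist) * dt
    xd = x[k - int(h(t) / dt)]                         # delayed state x(t - h(t))
    lo, hi = k - int(sigma2 / dt), k - int(sigma1 / dt)
    integral = f(x[lo:hi]).sum(axis=0) * dt            # Riemann sum of distributed term
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ f(x[k]) + B @ f(xd)
                            + D @ integral + u(t))

print(np.all(np.isfinite(x)))  # trajectory stays bounded for these parameters
```

With C strongly diagonally dominant and bounded activations, the trajectory settles into a bounded region, which is exactly the qualitative behavior that the dissipativity analysis certifies analytically.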

Main results

In this section, we shall investigate the global dissipativity and global exponential dissipativity of neural network (1) by constructing suitable Lyapunov–Krasovskii functionals. For convenience of presentation, in the following we denote

F₁ = diag(F₁⁻F₁⁺, F₂⁻F₂⁺, …, Fₙ⁻Fₙ⁺)  and  F₂ = diag((F₁⁻ + F₁⁺)/2, (F₂⁻ + F₂⁺)/2, …, (Fₙ⁻ + Fₙ⁺)/2).

Theorem 1

Assume that (H1)–(H3) hold. If there exist eight n × n symmetric positive definite matrices P > 0, Q₁ > 0, Q₂ > 0, R₁ > 0, R₂ > 0, U > 0, L > 0, S₂ > 0, two n × n positive …
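The full LMIs of Theorem 1 are truncated in this snippet, so the sketch below only illustrates the underlying verification workflow on a much simpler Lyapunov-type inequality: certify A₀ᵀP + PA₀ < 0 by solving the Lyapunov equation A₀ᵀP + PA₀ = −Q for an assumed matrix A₀ and checking that P is symmetric positive definite. In the paper itself this role is played by the LMI Toolbox in MATLAB; everything here (A₀, Q, the numpy-based solve) is an assumption for demonstration.

```python
import numpy as np

# Assumed test matrix: diagonal part mirrors -C from Example 1
A0 = np.array([[-5.0, 0.2],
               [0.5, -5.0]])
Q = np.eye(2)  # any Q > 0 works for this feasibility check

# Row-major vectorized Lyapunov solve:
# (A0^T ⊗ I + I ⊗ A0^T) vec(P) = vec(-Q)
n = A0.shape[0]
M = np.kron(A0.T, np.eye(n)) + np.kron(np.eye(n), A0.T)
P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)

# P > 0 certifies that A0^T P + P A0 = -Q < 0 holds, i.e. the LMI is feasible
eig_min = np.linalg.eigvalsh((P + P.T) / 2).min()
print(eig_min > 0)  # True for this Hurwitz A0
```

A solver-based tool (MATLAB's `feasp`, or a semidefinite programming package) generalizes this check to the multi-matrix LMIs of Theorem 1, where P, Q₁, Q₂, R₁, R₂, U, L, S₂ are all decision variables.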

Illustrative examples

In this section, two numerical examples are given to demonstrate the effectiveness and applicability of the proposed dissipativity results.

Example 1

Consider system (1) with Γᵤ = 1, f₁(s) = arctan(0.5s) − 0.3 sin s, f₂(s) = arctan(0.3s) − 0.5 sin s, and

C = [5 0; 0 5],  A = [0.2 0.1; 0.5 0.1],  B = [0.5 0; 0.3 0.2],  D = [0.15 0.1; 0 0.3].

Then F₁⁻ = −0.3, F₁⁺ = 0.8, F₂⁻ = −0.5, and F₂⁺ = 0.8, i.e.,

F₁ = [−0.24 0; 0 −0.40],  F₂ = [0.25 0; 0 0.15].

By using the LMI Toolbox in MATLAB, the admissible upper bound h₂ of the time delay for different lower bounds h₁ is given in Table 1.
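The matrices F₁ and F₂ follow directly from the activation bounds via the definitions in the main results section, F₁ = diag(Fᵢ⁻Fᵢ⁺) and F₂ = diag((Fᵢ⁻ + Fᵢ⁺)/2). A minimal numerical check using the bounds of Example 1:

```python
import numpy as np

# Sector bounds from Example 1: F1^- = -0.3, F1^+ = 0.8; F2^- = -0.5, F2^+ = 0.8
Fm = np.array([-0.3, -0.5])   # lower bounds F_i^-
Fp = np.array([0.8, 0.8])     # upper bounds F_i^+

F1 = np.diag(Fm * Fp)         # diag(F_i^- F_i^+)
F2 = np.diag((Fm + Fp) / 2)   # diag((F_i^- + F_i^+) / 2)

print(F1)  # diag(-0.24, -0.40)
print(F2)  # diag( 0.25,  0.15)
```

These are exactly the F₁ and F₂ matrices stated above, ready to be substituted into the LMIs of Theorem 1.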

Conclusion

In the present paper, we have presented new sufficient conditions for the global dissipativity and global exponential dissipativity of neural networks with mixed interval time-varying delays by constructing a set of Lyapunov–Krasovskii functionals and employing the Newton–Leibniz formulation and the free weighting matrix method. We did not impose any restriction on the derivative of the time-varying delays, which gives the obtained results wider applicability and less conservatism. Two numerical examples have been provided to demonstrate the effectiveness of the proposed criteria.



Xiaodi Li was born in Shandong province, China. He received the B.S. and M.S. degrees from Shandong Normal University, Jinan, China, in 2005 and 2008, respectively, and the Ph.D. degree from Xiamen University, Xiamen, China, in 2011, all in applied mathematics. He is currently a professor with the Department of Mathematics, Shandong Normal University. He has authored or coauthored more than 70 research papers. He is an academic editor of International Journal of Applied Physics and Mathematics (SG), Applications and Applied Mathematics (USA), and British Journal of Mathematics & Computer Science (UK). His current research interests include stability theory, delay systems, impulsive control theory, artificial neural networks, and applied mathematics.

Xiaoxiao Lv was born in Shandong Province, China, in 1991. She is currently a graduate student in control theory at the School of Mathematics and Statistics, Shandong Normal University, Shandong, China. Her research interests include neural networks, stability, and impulsive control theory.

This work was jointly supported by National Natural Science Foundation of China (11301308, 61673247) and the Outstanding Youth Foundation of Shandong Province (ZR2016JL024). The paper has not been presented at any conference.
