Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays
Introduction
During the past decades, neural networks have received considerable attention owing to their fruitful applications in a variety of areas such as signal processing, automatic control engineering, associative memories, parallel computation, combinatorial optimization, and pattern recognition [1], [2], [3]. In the process of investigating neural networks, time delays are frequently encountered as a result of the inherent communication time between neurons and the finite switching speed of amplifiers [4], [5], [6]. Besides, in hardware implementation, time delays usually cause oscillation, instability, divergence, chaos, or other undesirable behavior of neural networks [7], [8]. Therefore, the study of dynamic behaviors of delayed neural networks has received considerable attention in recent years [9], [10], [11]. Both delay-independent and delay-dependent conditions have been developed. Delay-dependent conditions are usually less conservative than delay-independent ones, especially for systems with small delays, since the former exploit additional information about the time delays. Therefore, in recent years, much attention has been paid to delay-dependent conditions for stability, dissipativity, synchronization control, and periodic attractors of neural networks, and many interesting results have been proposed, especially based on the Lyapunov–Razumikhin method, Lyapunov–Krasovskii functionals, and the linear matrix inequality (LMI) approach; see [12], [13], [14], [15], [16], [17] for instances.
The theory of dissipativity in dynamical systems has drawn many researchers’ attention since it was introduced in the 1970s by Willems in terms of an inequality involving a storage function and a supply rate [18], and it has found applications in many areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [19], [20], [21]. In fact, the notion of dissipativity is a generalization of Lyapunov stability, which focuses only on the stability of equilibrium points. Nevertheless, the orbits of a neural network do not always approach a single equilibrium point, and in some situations an equilibrium point does not even exist. Basically, the aim of dissipativity analysis is to find globally attractive sets. Once an attractive set is found, one only needs to study the dynamical properties inside it, since there is no equilibrium, periodic solution, or chaotic attractor outside the attractive set [22]. Dissipativity theory provides a fundamental framework for the analysis and design of control systems using input–output descriptions based on system-energy considerations, and it has proved effective in the study of neural networks. Moreover, dissipativity theory establishes important connections between physics, system theory, and control engineering [23]. In recent years, various interesting results have been obtained on the dissipativity of delayed neural networks [24], [25], [26], [27]. In particular, Ref. [27] studied the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. By introducing a triple-summable term in the Lyapunov functional and applying a stochastic analysis technique, Ref. [24] investigated the dissipativity and passivity of uncertain discrete-time stochastic Markovian jump neural networks with additive time-varying delays.
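In Willems’ formulation, a system with input u and output y is dissipative with respect to a supply rate s if there exists a nonnegative storage function V such that, along all trajectories, the stored energy never increases by more than the energy supplied:

```latex
V\bigl(x(T)\bigr) - V\bigl(x(0)\bigr) \;\le\; \int_{0}^{T} s\bigl(u(t), y(t)\bigr)\,dt, \qquad \forall\, T \ge 0 .
```

Common choices of the supply rate, such as the quadratic form s(u, y) = y^T Q y + 2 y^T S u + u^T R u, recover passivity and H-infinity performance as special cases.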
By using the framework of Filippov solutions, differential inclusion theory, an appropriate Lyapunov–Krasovskii functional, and the linear matrix inequality (LMI) technique, Ref. [26] considered the dissipativity of memristor-based complex-valued neural networks with time-varying delays. Note that in many practical applications of neural networks [28], the time delay is usually time-varying and belongs to an interval whose lower bound is not restricted to be zero. Hence, it is necessary to investigate systems with interval time-varying delays [29], [30], [31], [32]. In [29], based on a free-matrix-based integral inequality, the stability of recurrent neural networks with interval time-varying delay was studied, where the information on the activation function and the lower bound of the delay are both fully considered. In [30], the global robust point dissipativity of an uncertain neural network model with interval time-varying delays was investigated via Lyapunov theory and inequality techniques. Very recently, Park and co-workers [31] studied the stability and dissipativity of static neural networks with interval time-varying delay by fully utilizing the information on the neuron activation function and employing a Wirtinger-based inequality, proposed in [33], that can be used to estimate the derivative of the Lyapunov–Krasovskii functional. On the other hand, as pointed out in [34], [35], [36], neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and hence there is a distribution of propagation delays over a period of time. In Ref. [34], a neural circuit with distributed delays was designed to solve the general problem of recognizing patterns in a time-dependent signal. In Refs. [35], [36], global stability, periodic solutions, and convergence were investigated for neural networks with distributed delays.
Dynamic properties of neural networks with distributed delays, such as stability, passivity, and almost periodic solutions, have also been explored. However, there is relatively little work on the dissipativity of neural networks with both interval time-varying delay and interval distributed time-varying delay. These observations motivate the present study.
In the present paper, the dissipativity problem is investigated for neural networks with interval time-varying delay and distributed time-varying delay. Both the upper and lower bounds of the interval time-varying delay and of the distributed time-varying delay are taken into account. By constructing a set of appropriate Lyapunov–Krasovskii functionals and employing the Newton–Leibniz formula and the free weighting matrix method, some LMI-based sufficient conditions are derived that guarantee the global dissipativity and global exponential dissipativity of the addressed neural networks, including the case of parameter uncertainties; these conditions can be easily verified via the LMI toolbox. Meanwhile, the positively invariant and globally attractive sets of the addressed neural networks are obtained via the LMIs. We do not impose any restriction on the derivative of the time-varying delays. In other words, the developed results can be applied to time-varying delays that are not differentiable, such as piecewise delays, fuzzy delays, and stochastic delays, and in this sense they improve on the results in [29], [30]. The rest of this paper is organized as follows. In Section 2, some notations, definitions, and well-known technical lemmas are given. Section 3 presents the global dissipativity and global exponential dissipativity criteria. Two numerical examples and their computer simulations are provided in Section 4 to demonstrate the effectiveness of the proposed criteria. Finally, the paper is concluded in Section 5.
Section snippets
Preliminaries
Consider the following neural network model with both interval time-varying delay and interval distributed time-varying delay: \dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-h(t))) + D\int_{t-d(t)}^{t} f(x(s))\,ds + u(t), where x(t) = (x_1(t), \ldots, x_n(t))^T \in \mathbb{R}^n is the state vector of the network at time t, n corresponds to the number of neurons, C is a positive diagonal matrix; and A, B, D represent the connection weight matrices;
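As an illustration, a mixed-delay model of this standard form (a state equation with a discrete delayed term and a distributed-delay integral term) can be simulated with a simple forward-Euler scheme. All matrices, delays, input, and activation below are hypothetical placeholder values chosen for the sketch, not the parameters of the paper’s examples:

```python
import numpy as np

# Forward-Euler sketch of a delayed neural network of the assumed form:
#   x'(t) = -C x(t) + A f(x(t)) + B f(x(t - h)) + D * integral_{t-d}^{t} f(x(s)) ds + u
# Constant delays are used here for simplicity; the paper treats interval
# time-varying (possibly non-differentiable) delays.

n, dt, T = 2, 0.01, 20.0
C = np.diag([1.5, 1.8])                  # positive diagonal self-feedback matrix
A = np.array([[0.2, -0.1], [0.1, 0.3]])  # instantaneous connection weights
B = np.array([[-0.3, 0.2], [0.1, -0.2]]) # discrete-delay connection weights
D = np.array([[0.1, 0.0], [0.0, 0.1]])   # distributed-delay connection weights
u = np.array([0.5, -0.5])                # constant external input
h, d = 0.5, 0.3                          # discrete and distributed delays
f = np.tanh                              # bounded activation function

steps = int(T / dt)
hist = int(h / dt)
dist = int(d / dt)
x = np.zeros((steps + 1, n))
x[0] = [2.0, -3.0]                       # initial state (constant initial history)

for k in range(steps):
    xk = x[k]
    x_h = x[max(k - hist, 0)]            # delayed state x(t - h)
    seg = x[max(k - dist, 0):k + 1]      # samples over the distributed window
    integral = f(seg).sum(axis=0) * dt   # rectangle rule for the integral term
    dx = -C @ xk + A @ f(xk) + B @ f(x_h) + D @ integral + u
    x[k + 1] = xk + dt * dx

# With bounded activations, trajectories enter a bounded (attractive) set.
print(np.abs(x[-1]).max())
```

Because f is bounded and C is positive diagonal, every trajectory eventually enters a ball whose radius is independent of the initial state, which is exactly the globally attractive set that dissipativity analysis seeks to certify.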
Main results
In this section, we shall investigate the global dissipativity and global exponential dissipativity of neural network (1) by constructing suitable Lyapunov–Krasovskii functionals. For the convenience of presentation, in the following, we denote
Theorem 1 Assume that– hold. If there exist eight n × n symmetric positive definite matrices P > 0, Q1 > 0, Q2 > 0, R1 > 0, R2 > 0, U > 0, L > 0, S2 > 0, two n × n positive
Illustrative examples
In this section, two numerical examples are given to demonstrate the effectiveness and applicability of the proposed dissipativity criteria.
Example 1 Consider the system (1) with and
Then and i.e.,
By using the LMI Toolbox in MATLAB, the admissible upper bound h2 of the time delay for different lower bounds h1 is given in Table 1.
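Feasibility results such as those in Table 1 are produced by an LMI solver. As a minimal numpy-only sketch of the underlying test (the paper uses MATLAB’s LMI Toolbox, and Theorem 1’s LMIs involve many more variables), one can certify a candidate matrix P against the elementary Lyapunov LMI A^T P + P A < 0 by checking eigenvalue signs; the matrices A and P below are hypothetical:

```python
import numpy as np

# A symmetric matrix M is negative definite iff all eigenvalues are negative.
# LMI solvers search for feasible variables; here we merely verify a guess.

def is_neg_def(M, tol=1e-9):
    M = 0.5 * (M + M.T)                      # symmetrize for eigvalsh
    return bool(np.linalg.eigvalsh(M).max() < -tol)

def is_pos_def(M, tol=1e-9):
    return is_neg_def(-M, tol)

A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])                  # hypothetical Hurwitz system matrix
P = np.eye(2)                                # candidate Lyapunov matrix

# Feasibility of the pair of LMIs: P > 0 and A^T P + P A < 0.
feasible = is_pos_def(P) and is_neg_def(A.T @ P + P @ A)
print(feasible)  # → True
```

A bisection over the delay bound h2, calling such a feasibility check at each step, is the usual way tables of admissible upper bounds are generated.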
Conclusion
In the present paper, we have presented new sufficient conditions for the global dissipativity and global exponential dissipativity of neural networks with mixed interval time-varying delays by constructing a set of Lyapunov–Krasovskii functionals and employing the Newton–Leibniz formula and the free weighting matrix method. We did not impose any restriction on the derivative of the time-varying delays, which makes the obtained results more widely applicable and less conservative. Two numerical examples have been provided to demonstrate the effectiveness of the theoretical results.
References (42)
- et al., Finite-time synchronization of coupled discontinuous neural networks with mixed delays and nonidentical perturbations, J. Franklin Inst. (2015)
- et al., Delay-interval-dependent stability of recurrent neural networks with time-varying delay, Neurocomputing (2009)
- et al., Control of a novel class of fractional-order chaotic systems via adaptive sliding mode control approach, Appl. Math. Modell. (2013)
- et al., On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A (2005)
- et al., Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays, Neural Networks (2015)
- et al., Dissipativity analysis of neural networks with time-varying delays, Neurocomputing (2015)
- et al., Delay-dependent stability for uncertain cellular neural networks with discrete and distributed time-varying delays, J. Franklin Inst. (2008)
- et al., Finite-time boundedness of state estimation for neural networks with time-varying delays, Neurocomputing (2014)
- et al., Positive invariant and global exponential attractive sets of neural networks with time-varying delays, Neurocomputing (2008)
- et al., Dissipativity and passivity analysis for uncertain discrete-time stochastic Markovian jump neural networks with additive time-varying delays, Neurocomputing (2016)
- Dissipativity analysis of memristor-based complex-valued neural networks with time-varying delays, Inf. Sci.
- Global dissipativity of stochastic neural networks with time delay, J. Franklin Inst.
- Stability analysis of recurrent neural networks with interval time-varying delay via free-matrix-based integral inequality, Neurocomputing
- Stability and dissipativity analysis of static neural networks with interval time-varying delay, J. Franklin Inst.
- Impulsive controller design for exponential synchronization of chaotic neural networks with mixed delays, Commun. Nonlinear Sci. Numer. Simul.
- Wirtinger-based integral inequality: application to time-delay systems, Automatica
- Exponential stability for stochastic BAM networks with discrete and distributed delays, Appl. Math. Comput.
- Stability analysis of Cohen–Grossberg neural network with both time-varying and continuously distributed delays, J. Comput. Appl. Math.
- Existence and global stability analysis of equilibrium of fuzzy cellular neural networks with time delay in the leakage term under impulsive perturbations, J. Franklin Inst.
- Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A
- Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks, Neural Networks
Xiaodi Li was born in Shandong province, China. He received the B.S. and M.S. degrees from Shandong Normal University, Jinan, China, in 2005 and 2008, respectively, and the Ph.D. degree from Xiamen University, Xiamen, China, in 2011, all in applied mathematics. He is currently a professor with the Department of Mathematics, Shandong Normal University. He has authored or coauthored more than 70 research papers. He is an academic editor of International Journal of Applied Physics and Mathematics (SG), Applications and Applied Mathematics (USA), and British Journal of Mathematics & Computer Science (UK). His current research interests include stability theory, delay systems, impulsive control theory, artificial neural networks, and applied mathematics.
Xiaoxiao Lv was born in Shandong Province, China, in 1991. She is currently a graduate student in control theory at the School of Mathematics and Statistics, Shandong Normal University, Shandong, China. Her research interests include neural networks, stability, and impulsive control theory.