Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays
Introduction
Over the past decades, recurrent neural networks have found extensive applications in image processing, pattern recognition, optimization solvers, fixed-point computation, and other engineering areas [3], [15], [26]. It is noted that the stability of neural networks is a prerequisite for these applications. Therefore, the stability study of recurrent neural networks is of great importance and has received considerable attention; see, for example, [1], [22], [16] and the references therein.
On the other hand, time delays are frequently encountered in neural networks due to the finite switching speed of amplifiers and the inherent communication time of neurons. Moreover, the existence of time delays is often a source of instability for neural networks. For these reasons, the stability analysis problem for recurrent neural networks with time delays has been studied extensively in recent years. For example, delay-independent stability results have been reported in [2], [4], [6], [5], [24], [28], while delay-dependent stability results can be found in [12], [18], [19], [25], [32]. When parameter uncertainties, such as norm-bounded uncertainties and polytopic uncertainties, appear in delayed neural networks, delay-independent and delay-dependent robust stability conditions have been proposed in [30], [29], [31]. Generally speaking, delay-dependent stability results are less conservative than delay-independent ones, especially when the time delays are small.
It should be pointed out that all of the above-mentioned references are concerned with continuous-time neural networks. For discrete-time neural networks with time delays, the stability analysis problem has been addressed in [7], [11], [14], [17], [21]. It is worth noting that a delay-dependent condition guaranteeing the global exponential stability of a class of discrete-time recurrent neural networks with time-varying delays has been proposed in [14], where a linear matrix inequality (LMI) approach is developed under a weak assumption on the activation functions. The result obtained in [14] has been improved in [21] by using a technique similar to that in [8]. We note, however, that two summation terms involving the lower and upper bounds of the time-varying delay are ignored in the Lyapunov functionals employed in [14], [21]. Ignoring these terms may introduce conservatism to some extent. Therefore, it is important and necessary to further improve the stability results reported in [14], [21]. It is also worth mentioning that parameter uncertainties are not taken into account in [14], [21].
In this paper, under a weak assumption on the activation functions and by using a Lyapunov functional that differs from those in [14], [21], we present a new delay-dependent sufficient condition guaranteeing the global exponential stability of discrete-time recurrent neural networks with time-varying delays. It is shown that this condition is less conservative than those in [14], [21]. Based on the proposed stability result, we also give a delay-dependent robust exponential stability criterion for discrete-time recurrent neural networks with time-varying delays and norm-bounded parameter uncertainties. Finally, some numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria.
Notation: Throughout this paper, for real symmetric matrices X and Y, the notation X ≥ Y (respectively, X > Y) means that the matrix X − Y is positive semi-definite (respectively, positive definite). I denotes an identity matrix of appropriate dimension. The superscript 'T' represents the transpose. The notation '∗' is used as an ellipsis for terms that are induced by symmetry. We use λmin(·) and λmax(·) to denote the minimum and maximum eigenvalues of a real symmetric matrix, respectively. The notation ‖·‖ denotes the Euclidean vector norm, that is, ‖x‖ = (xᵀx)^(1/2) when x is a vector. For integers a and b satisfying a ≤ b, [a, b] denotes the discrete interval given by {a, a+1, …, b}. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.
New stability condition
Consider a discrete-time recurrent neural network with time-varying delays described by (1). In (1), x_i(k) is the state of the ith neuron at time k; f_i(·) denotes the activation function of the ith neuron; u_i denotes the external input on the ith neuron; and d(k) represents the transmission delay, which satisfies known lower and upper bounds.
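As a concrete illustration of this class of systems, the following sketch simulates a small two-neuron instance of the standard delayed model x(k+1) = A x(k) + W0 f(x(k)) + W1 f(x(k − d(k))) with tanh activations and a delay varying between known bounds. All matrices and bounds here are hypothetical choices, not the parameters used in the paper; they are picked to be contractive so the trajectory visibly decays.

```python
import numpy as np

# Hypothetical instance of the standard delayed discrete-time RNN model
#   x(k+1) = A x(k) + W0 f(x(k)) + W1 f(x(k - d(k)))
# (zero external input); all matrices below are illustrative.
A = np.diag([0.4, 0.3])            # state-decay matrix
W0 = np.array([[0.1, -0.05],
               [0.05, 0.1]])       # connection weights (current state)
W1 = np.array([[-0.1, 0.05],
               [0.05, -0.1]])      # connection weights (delayed state)
f = np.tanh                        # activation, Lipschitz constant 1

d_min, d_max = 2, 5                # assumed delay bounds
rng = np.random.default_rng(0)

def simulate(steps=400):
    # initial history buffer long enough for the maximal delay
    x = [rng.standard_normal(2) for _ in range(d_max + 1)]
    for k in range(steps):
        # time-varying delay cycling through [d_min, d_max]
        d = d_min + (k % (d_max - d_min + 1))
        x.append(A @ x[-1] + W0 @ f(x[-1]) + W1 @ f(x[-1 - d]))
    return np.array(x)

traj = simulate()
print(np.linalg.norm(traj[-1]))    # near zero: the state decays toward the origin
```

Because ‖A‖ + ‖W0‖ + ‖W1‖ < 1 here, the state contracts regardless of how d(k) moves within its bounds; the interesting (and harder) cases treated by the paper's LMI criteria are those where such a crude norm argument fails.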
Robust stability condition
Consider the system in (1) with norm-bounded parameter uncertainties; that is, the system matrices are subject to additive perturbations as in (2), where the scaling matrices, including E, are known constant matrices of appropriate dimensions, and F is an unknown time-invariant matrix satisfying FᵀF ≤ I. For simplicity, we introduce compact notation for the resulting uncertain system matrices.
Before proceeding, we present the following lemma. Lemma 2 (Petersen [20], Zhang and Xu [27]). Given matrices Ω, Γ, and Ξ of appropriate dimensions with Ω symmetric, the inequality Ω + ΓFΞ + ΞᵀFᵀΓᵀ < 0 holds for all F satisfying FᵀF ≤ I if and only if there exists a scalar ε > 0 such that Ω + εΓΓᵀ + ε⁻¹ΞᵀΞ < 0.
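A quick numerical spot-check of Lemma 2 can be run as follows. The matrices Omega, Gamma, Xi and the scalar eps are arbitrary illustrative choices; the check confirms that when the ε-scaled inequality holds, the uncertain expression stays negative definite for many sampled admissible F.

```python
import numpy as np

# Spot-check of the norm-bounded uncertainty lemma (Petersen-type):
# if Omega + eps*Gamma@Gamma.T + (1/eps)*Xi.T@Xi < 0 for some eps > 0,
# then Omega + Gamma@F@Xi + Xi.T@F.T@Gamma.T < 0 for every F with F.T@F <= I.
# All matrices here are illustrative.
rng = np.random.default_rng(1)
n = 3
Omega = -10.0 * np.eye(n)                # symmetric, comfortably negative definite
Gamma = 0.5 * rng.standard_normal((n, n))
Xi = 0.5 * rng.standard_normal((n, n))
eps = 1.0

def max_eig(M):
    # largest eigenvalue of the symmetric part of M
    return np.max(np.linalg.eigvalsh((M + M.T) / 2))

# Sufficient inequality from the lemma
lmi = Omega + eps * Gamma @ Gamma.T + (1 / eps) * Xi.T @ Xi

# Check the uncertain inequality for many admissible F (spectral norm <= 1)
worst = -np.inf
for _ in range(200):
    F = rng.standard_normal((n, n))
    F /= max(1.0, np.linalg.norm(F, 2))  # enforce F.T F <= I
    M = Omega + Gamma @ F @ Xi + Xi.T @ F.T @ Gamma.T
    worst = max(worst, max_eig(M))

print(max_eig(lmi), worst)               # both negative
```

The completion-of-squares argument behind the lemma guarantees that the uncertain matrix is bounded above (in the Loewner order) by the ε-scaled one, which is exactly what the sampled `worst` value reflects.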
Numerical examples
Example 1. Consider a delayed discrete-time recurrent neural network in (1) with the given parameters. The activation functions satisfy Assumption 1 with the given bounds. For this example, if the lower bound of the time-varying delay is fixed at its first value, then by [21] the admissible upper bound of the delay is found to be 11; the same result is obtained by Theorem 1 in this paper. For the second value of the lower delay bound, however, the upper bound obtained by Theorem 1 is larger than that obtained by [21].
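The admissible delay bounds reported in examples of this kind can also be probed heuristically by simulation: increase the maximal delay and check whether trajectories from random initial histories still die out. The sketch below does this for a hypothetical two-neuron network; its parameters are not those of Example 1, and such a probe cannot replace the guaranteed LMI bounds of Theorem 1.

```python
import numpy as np

# Heuristic, simulation-based probe of delay robustness (illustrative only).
A = np.diag([0.5, 0.6])                  # state-decay matrix
W = np.array([[0.2, -0.1],
              [0.1,  0.2]])              # delayed connection weights
f = np.tanh
rng = np.random.default_rng(2)

def decays(d_max, steps=800, tol=1e-3):
    """Run x(k+1) = A x(k) + W f(x(k - d(k))) with d(k) cycling through
    [1, d_max], starting from a random initial history, and report
    whether the state has died out after `steps` iterations."""
    x = [rng.standard_normal(2) for _ in range(d_max + 1)]
    for k in range(steps):
        d = 1 + (k % d_max)              # time-varying delay in [1, d_max]
        x.append(A @ x[-1] + W @ f(x[-1 - d]))
    return np.linalg.norm(x[-1]) < tol

print([decays(d) for d in (2, 5, 10)])   # all True for these contractive weights
```

For the weights chosen here the network is contractive for any delay, so every probe succeeds; in genuinely delay-dependent cases the probe starts failing as d_max grows, which is the regime the paper's criteria are designed to certify.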
Conclusions
In this paper, we have studied the stability analysis problem for discrete-time recurrent neural networks with time-varying delays. In terms of an LMI, a less conservative delay-dependent exponential stability condition has been proposed. We have also presented a robust stability condition for discrete-time recurrent neural networks with both time-varying delays and parameter uncertainties. The reduced conservatism of the proposed stability criteria has been demonstrated by some numerical examples.
Acknowledgement
This work is supported by the National Science Foundation for Distinguished Young Scholars of P.R. China under Grant 60625303, and the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20060288021.
Baoyong Zhang was born in Shandong Province, China in 1981. He received the B.Sc. degree in mathematics in 2003 and the M.Sc. degree in control theory in 2006, both from the Qufu Normal University, Qufu, China. He is now a PhD candidate at the School of Automation, Nanjing University of Science and Technology, Nanjing, China. His current research interests include dynamics analysis of neural networks, and robust control and filtering for time-delay systems, stochastic systems and fuzzy systems.
References (32)
- Global asymptotic stability of a larger class of neural networks with constant time delay, Phys. Lett. A (2003)
- A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach, Chaos Solitons Fractals (2005)
- Global exponential stability for discrete-time neural networks with variable delays, Phys. Lett. A (2006)
- Delay-range-dependent stability for systems with time-varying delay, Automatica (2007)
- Discrete-time bidirectional associative memory neural networks with variable delays, Phys. Lett. A (2005)
- Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks (2002)
- Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis, Phys. Lett. A (2007)
- Exponential stability in Hopfield-type neural networks with impulses, Chaos Solitons Fractals (2007)
- Exponential stability of continuous-time and discrete-time cellular neural networks with delays, Appl. Math. Comput. (2003)
- On global stability criterion for neural networks with discrete and distributed delays, Chaos Solitons Fractals (2006)
- A new stability analysis of delayed cellular neural networks, Appl. Math. Comput.
- A stabilization algorithm for a class of uncertain linear systems, Syst. Control Lett.
- A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays, Phys. Lett. A
- Global exponential stability of high order Hopfield type neural networks, Appl. Math. Comput.
- Robust control for uncertain discrete-time systems with time-varying delays via exponential output feedback controllers, Syst. Control Lett.
- New results on global exponential stability of recurrent neural networks with time-varying delays, Phys. Lett. A
Shengyuan Xu received his B.Sc. degree from the Hangzhou Normal University, China in 1990, M.Sc. degree from the Qufu Normal University, China in 1996, and Ph.D. degree from the Nanjing University of Science and Technology, China in 1999. From 1999 to 2000 he was a Research Associate in the Department of Mechanical Engineering at the University of Hong Kong, Hong Kong. From December 2000 to November 2001, and December 2001 to September 2002, he was a Postdoctoral Researcher in CESAME at the Universitè catholique de Louvain, Belgium, and the Department of Electrical and Computer Engineering at the University of Alberta, Canada, respectively. From September 2002 to September 2003, and September 2003 to September 2004, he was a William Mong Young Researcher and an Honorary Associate Professor, respectively, both in the Department of Mechanical Engineering at the University of Hong Kong, Hong Kong. Since November 2002, he has joined the Department of Automation at the Nanjing University of Science and Technology as a Professor.
Dr. Xu was a recipient of the National Excellent Doctoral Dissertation Award in the year 2002 from the Ministry of Education of China. In the year 2006, he obtained a grant from the National Science Foundation for Distinguished Young Scholars of PR China. He is a member of the Editorial Boards of the Multidimensional Systems and Signal Processing, and the Circuits Systems and Signal Processing. His current research interests include robust filtering and control, singular systems, time-delay systems, neural networks, and multidimensional systems and nonlinear systems.
Yun Zou was born in 1962. He received the B.S. degree in mathematics from Northwestern University in 1983 and the M.S. and Ph.D. degree in control theory and control engineering from Nanjing University of Science and Technology in 1987 and in 1990, respectively. Now he is a Professor in the School of Automation at the Nanjing University of Science and Technology, Nanjing, China. His research interests include differential-algebraic equation system, two dimensional systems and neural networks.