Elsevier

Neurocomputing

Volume 72, Issues 1–3, December 2008, Pages 321-330

Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays

https://doi.org/10.1016/j.neucom.2008.01.006

Abstract

This paper is concerned with the problem of stability analysis for a class of discrete-time recurrent neural networks with time-varying delays. Under a weak assumption on the activation functions and using a new Lyapunov functional, a delay-dependent condition guaranteeing the global exponential stability of the concerned neural network is obtained in terms of a linear matrix inequality. It is shown that this stability condition is less conservative than some previous ones in the literature. When norm-bounded parameter uncertainties appear in a delayed discrete-time recurrent neural network, a delay-dependent robust exponential stability criterion is also presented. Numerical examples are provided to demonstrate the effectiveness of the proposed method.

Introduction

Over the past decades, recurrent neural networks have found extensive applications in image processing, pattern recognition, optimization solvers, fixed-point computation, and other engineering areas [3], [15], [26]. It is noted that the stability of neural networks is a prerequisite for these applications. Therefore, the stability of recurrent neural networks is of great importance and has received considerable attention; see, for example, [1], [22], [16] and the references therein.

On the other hand, time delays are frequently encountered in neural networks due to the finite switching speed of amplifiers and the inherent communication time of neurons. Moreover, the existence of time delays is often a source of instability for neural networks. For these reasons, the stability analysis problem for recurrent neural networks with time delays has been studied extensively. For example, delay-independent stability results have been reported in [2], [4], [6], [5], [24], [28], while delay-dependent stability results can be found in [12], [18], [19], [25], [32]. When parameter uncertainties, such as norm-bounded uncertainties and polytopic uncertainties, appear in delayed neural networks, delay-independent and delay-dependent robust stability conditions have been proposed in [30], [29], [31]. Generally speaking, delay-dependent stability results are less conservative than delay-independent ones, especially when the time delays are small.

It should be pointed out that all of the above-mentioned references are concerned with continuous-time neural networks. For discrete-time neural networks with time delays, the stability analysis problem has been addressed in [7], [11], [14], [17], [21]. It is worth noting that a delay-dependent condition guaranteeing the global exponential stability of a class of discrete-time recurrent neural networks with time-varying delays has been proposed in [14], where the linear matrix inequality (LMI) approach is developed and a weak assumption on the activation functions is considered. The result obtained in [14] has been improved in [21] by using a technique similar to that in [8]. We note that the following two terms $$\sum_{i=k-\tau_m}^{k-1} y(i)^{T} Q_2\, y(i) \quad\text{and}\quad \sum_{j=-\tau_M}^{-\tau_m-1}\ \sum_{i=k+j}^{k-1} \eta(i)^{T} R\, \eta(i)$$ (where $\tau_m>0$ and $\tau_M>0$ are the lower and upper bounds of the time-varying delay $\tau(k)$, respectively) are ignored in the Lyapunov functionals employed in [14], [21]. Ignoring these terms may introduce some conservatism. Therefore, it is important to further improve the stability results reported in [14], [21]. It is also worth mentioning that parameter uncertainties are not taken into account in [14], [21].

In this paper, under a weak assumption on the activation functions and by using a Lyapunov functional that is different from those in [14], [21], we present a new delay-dependent sufficient condition guaranteeing the global exponential stability of discrete-time recurrent neural networks with time-varying delays. It is shown that this condition is less conservative than those in [14], [21]. Based on the proposed stability result, we also give a delay-dependent robust exponential stability criterion for discrete-time recurrent neural networks with time-varying delays and norm-bounded parameter uncertainties. Some numerical examples are provided finally to demonstrate the effectiveness of the proposed stability criteria.

Notation: Throughout this paper, for real symmetric matrices X and Y, the notation $X \geq Y$ (respectively, $X > Y$) means that the matrix $X - Y$ is positive semi-definite (respectively, positive definite). $I$ denotes an identity matrix of appropriate dimension. The superscript ‘T’ represents the transpose. The notation ‘*’ is used as an ellipsis for terms that are induced by symmetry. We use $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ to denote the minimum and maximum eigenvalues of a real symmetric matrix, respectively. For a vector $x$, $|x|$ denotes the Euclidean norm $|x| = \sqrt{\sum_{i=1}^{n} x_i^2}$. For integers $a$ and $b$ satisfying $a < b$, $\mathbb{N}[a,b]$ denotes the discrete interval given by $\mathbb{N}[a,b] = \{a, a+1, \ldots, b-1, b\}$. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations.


New stability condition

Consider a discrete-time recurrent neural network with time-varying delays described by $$x(k+1) = Cx(k) + Af(x(k)) + Bf(x(k-\tau(k))) + J, \qquad (1)$$ where $J = [J_1\ J_2\ \cdots\ J_n]^{T}$, $x(k) = [x_1(k)\ x_2(k)\ \cdots\ x_n(k)]^{T}$, $f(x(k)) = [f_1(x_1(k))\ f_2(x_2(k))\ \cdots\ f_n(x_n(k))]^{T}$, and $f(x(k-\tau(k))) = [f_1(x_1(k-\tau(k)))\ f_2(x_2(k-\tau(k)))\ \cdots\ f_n(x_n(k-\tau(k)))]^{T}$. In (1), $x_i(k)$ is the state of the $i$th neuron at time $k$; $f_i(x_i(k))$ denotes the activation function of the $i$th neuron at time $k$; $J_i$ denotes the external input on the $i$th neuron; $\tau(k)$ represents the transmission delay, which satisfies $0 < \tau_m \leq \tau(k) \leq \tau_M$.
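To make the model concrete, the recursion in (1) can be simulated directly. The sketch below uses hypothetical parameter values and a tanh activation, all chosen purely for illustration (none are taken from the paper's examples); the delay $\tau(k)$ is drawn at random from $[\tau_m, \tau_M]$ at each step.

```python
import numpy as np

# Hypothetical parameters for illustration only (not from the paper).
n = 2
C = np.diag([0.5, 0.6])            # state feedback matrix
A = 0.1 * np.eye(n)                # connection weight matrix
B = np.array([[0.05, 0.0],
              [0.02, 0.05]])       # delayed connection weight matrix
J = np.zeros(n)                    # external input
tau_m, tau_M = 2, 5                # delay bounds

def f(x):
    # tanh lies in the sector [0, I], matching a typical choice of activation
    return np.tanh(x)

rng = np.random.default_rng(0)
K = 1000
# constant initial condition on the delay interval N[-tau_M, 0]
hist = [np.array([1.0, -1.0])] * (tau_M + 1)

for k in range(K):
    tau = rng.integers(tau_m, tau_M + 1)   # time-varying delay in [tau_m, tau_M]
    x = hist[-1]
    x_delayed = hist[-1 - tau]
    hist.append(C @ x + A @ f(x) + B @ f(x_delayed) + J)

print(np.linalg.norm(hist[-1]))  # decays toward zero for these stable parameters
```

For these small weight matrices the recursion is a contraction regardless of the realized delays, so the state norm decays geometrically; for parameters closer to the stability boundary, feasibility of the paper's LMI condition is what certifies convergence.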

Robust stability condition

Consider the system in (1) with norm-bounded parameter uncertainties; that is, $$x(k+1) = (C_0 + EFG_C)x(k) + (A_0 + EFG_A)f(x(k)) + (B_0 + EFG_B)f(x(k-\tau(k))) + J,$$ where $C_0$, $A_0$, $B_0$, $E$, $G_C$, $G_A$ and $G_B$ are known constant matrices of appropriate dimensions, and $F$ is an unknown time-invariant matrix satisfying $F^{T}F \leq I$. For simplicity, we denote $C = C_0 + EFG_C$, $A = A_0 + EFG_A$, $B = B_0 + EFG_B$.

Before proceeding, we present the following lemma.

Lemma 2

Petersen [20], Zhang and Xu [27]

Given matrices $W$, $X$ and $Y$ of appropriate dimensions with $W$ symmetric, the inequality $$W + XFY + Y^{T}F^{T}X^{T} < 0$$ holds for all $F$ satisfying $F^{T}F \leq I$ if and only if there exists a scalar $\varepsilon > 0$ such that $$W + \varepsilon XX^{T} + \varepsilon^{-1} Y^{T}Y < 0.$$
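The sufficiency direction of Lemma 2 can be checked numerically: construct $W$ so that $W + \varepsilon XX^{T} + \varepsilon^{-1}Y^{T}Y < 0$ holds for $\varepsilon = 1$, then sample admissible uncertainties $F$ with $F^{T}F \leq I$ and verify that the perturbed matrix stays negative definite. The matrices below are randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 3, 2, 2

# Assumed instance: pick X, Y, then choose W so the sufficient condition
# holds with eps = 1, i.e. W + eps*X@X.T + (1/eps)*Y.T@Y = -I < 0.
X = 0.1 * rng.standard_normal((n, p))
Y = 0.1 * rng.standard_normal((q, n))
eps = 1.0
W = -(np.eye(n) + eps * X @ X.T + Y.T @ Y / eps)

def max_eig(M):
    # largest eigenvalue of the symmetric part of M
    return np.linalg.eigvalsh((M + M.T) / 2).max()

# Sample uncertainties with F^T F <= I and check the perturbed inequality.
for _ in range(100):
    F = rng.standard_normal((p, q))
    F /= max(1.0, np.linalg.svd(F, compute_uv=False).max())  # spectral norm <= 1
    M = W + X @ F @ Y + Y.T @ F.T @ X.T
    assert max_eig(M) < 0  # negative definite, as the lemma guarantees
print("all sampled uncertainties keep the matrix negative definite")
```

The check passes because $XFY + Y^{T}F^{T}X^{T} \leq \varepsilon XX^{T} + \varepsilon^{-1}Y^{T}F^{T}FY \leq \varepsilon XX^{T} + \varepsilon^{-1}Y^{T}Y$, so every perturbed matrix is bounded above by $-I$.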

Numerical examples

Example 1

Consider a delayed discrete-time recurrent neural network in (1) with parameters given by $$C = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.9 \end{bmatrix}, \quad A = \begin{bmatrix} 0.001 & 0 \\ 0 & 0.005 \end{bmatrix}, \quad B = \begin{bmatrix} -0.1 & 0.01 \\ -0.2 & -0.1 \end{bmatrix}.$$ The activation functions satisfy Assumption 1 with $$\Gamma = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$ For this example, if we assume $\tau_m = 2$, then by [21] the upper bound of the time-varying delay is found to be $\tau_M = 11$; the same result is obtained by Theorem 1 in this paper. However, if we assume $\tau_m = 4$, by [21] the upper bound is still $\tau_M = 11$, while by Theorem 1 in this paper we obtain $\tau_M = 12$.
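As a sanity check, Example 1 can be simulated with the parameters above. The tanh activation is an assumed choice satisfying Assumption 1 with $\Gamma = 0$ and $\Sigma = I$, $J$ is taken as zero so the origin is the equilibrium, and the delay is drawn randomly from the certified interval $[4, 12]$; the state decays toward the origin, consistent with the exponential stability guaranteed by Theorem 1.

```python
import numpy as np

# Parameters of Example 1.
C = np.diag([0.8, 0.9])
A = np.diag([0.001, 0.005])
B = np.array([[-0.1, 0.01],
              [-0.2, -0.1]])

def f(x):
    # Assumed activation; tanh satisfies Assumption 1 with Gamma = 0, Sigma = I.
    return np.tanh(x)

tau_m, tau_M = 4, 12   # delay bounds certified by Theorem 1 for this example
rng = np.random.default_rng(42)

# Constant initial condition on the delay interval, then iterate (1) with J = 0.
hist = [np.array([2.0, -3.0])] * (tau_M + 1)
for k in range(10000):
    tau = rng.integers(tau_m, tau_M + 1)
    hist.append(C @ hist[-1] + A @ f(hist[-1]) + B @ f(hist[-1 - tau]))

print(np.linalg.norm(hist[-1]))  # decays toward zero
```

A simulation of course only samples one delay sequence; the LMI condition of Theorem 1 certifies exponential stability for every admissible delay sequence in $[\tau_m, \tau_M]$.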

Conclusions

In this paper, we have studied the problem of stability analysis for discrete-time recurrent neural networks with time-varying delays. In terms of an LMI, a less conservative delay-dependent exponential stability condition has been proposed. We have also presented a robust stability condition for discrete-time recurrent neural networks with both time-varying delays and parameter uncertainties. The reduced conservatism of the proposed stability criteria has been demonstrated by numerical examples.

Acknowledgement

This work is supported by the National Science Foundation for Distinguished Young Scholars of P.R. China under Grant 60625303, and the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20060288021.

Baoyong Zhang was born in Shandong Province, China in 1981. He received the B.Sc. degree in mathematics in 2003 and the M.Sc. degree in control theory in 2006, both from the Qufu Normal University, Qufu, China. He is now a PhD candidate at the School of Automation, Nanjing University of Science and Technology, Nanjing, China. His current research interests include dynamics analysis of neural networks, and robust control and filtering for time-delay systems, stochastic systems and fuzzy systems.



Shengyuan Xu received his B.Sc. degree from the Hangzhou Normal University, China in 1990, M.Sc. degree from the Qufu Normal University, China in 1996, and Ph.D. degree from the Nanjing University of Science and Technology, China in 1999. From 1999 to 2000 he was a Research Associate in the Department of Mechanical Engineering at the University of Hong Kong, Hong Kong. From December 2000 to November 2001, and December 2001 to September 2002, he was a Postdoctoral Researcher in CESAME at the Université catholique de Louvain, Belgium, and the Department of Electrical and Computer Engineering at the University of Alberta, Canada, respectively. From September 2002 to September 2003, and September 2003 to September 2004, he was a William Mong Young Researcher and an Honorary Associate Professor, respectively, both in the Department of Mechanical Engineering at the University of Hong Kong, Hong Kong. In November 2002, he joined the Department of Automation at the Nanjing University of Science and Technology as a Professor.

Dr. Xu was a recipient of the National Excellent Doctoral Dissertation Award in 2002 from the Ministry of Education of China. In 2006, he obtained a grant from the National Science Foundation for Distinguished Young Scholars of PR China. He is a member of the Editorial Boards of Multidimensional Systems and Signal Processing and Circuits, Systems and Signal Processing. His current research interests include robust filtering and control, singular systems, time-delay systems, neural networks, multidimensional systems and nonlinear systems.

Yun Zou was born in 1962. He received the B.S. degree in mathematics from Northwestern University in 1983 and the M.S. and Ph.D. degrees in control theory and control engineering from Nanjing University of Science and Technology in 1987 and 1990, respectively. He is now a Professor in the School of Automation at the Nanjing University of Science and Technology, Nanjing, China. His research interests include differential-algebraic equation systems, two-dimensional systems and neural networks.
