Global exponential stability of cellular neural networks with variable delays

https://doi.org/10.1016/j.amc.2006.06.046

Abstract

For cellular neural networks with time-varying delays, the problems of determining exponential stability and estimating the exponential convergence rate are investigated by employing a Lyapunov–Krasovskii functional and the linear matrix inequality (LMI) technique. A novel stability criterion, which captures the delay-dependent property, is derived. Two examples are given to demonstrate the effectiveness of the obtained results.

Introduction

Time delays are commonly encountered in biological and artificial neural networks [1], [2], [3], [4], [5], [6], and their existence is frequently a source of oscillation and instability [7], [8], [9], [10]. Therefore, the stability analysis of delayed neural networks has been a topic of both theoretical and practical importance. The stability issue has also gained increasing attention because of its essential role in signal processing, image processing, pattern classification, associative memories, fixed-point computation, and so on. Recently, using various analysis methods, many criteria for the global stability of neural networks with constant or time-varying delays have been presented [11], [12], [13], [14], [15], [16].

As is well known, fast convergence of a system is essential for real-time computation, and the exponential convergence rate is generally used to measure the speed of neural computation. Thus, global exponential stability of neural networks, with or without delays, has also been investigated in recent years [17], [18], [19], [20].
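For reference, the notion used throughout can be stated as follows; this is the standard definition, with $y^{*}$ denoting the equilibrium point, $\phi$ the initial function, and $h$ an upper bound on the delay (symbols introduced here only for illustration). The network is said to be globally exponentially stable with convergence rate $k > 0$ if there exists a constant $M \geq 1$ such that

\[
  \|y(t) - y^{*}\| \leq M e^{-kt} \sup_{-h \leq s \leq 0} \|\phi(s) - y^{*}\|, \qquad t \geq 0,
\]

for every initial function $\phi$ defined on $[-h, 0]$.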

In this paper, we study the exponential stability of, and estimate the exponential convergence rate for, neural networks with time-varying delays. Lyapunov–Krasovskii functionals and the linear matrix inequality (LMI) approach are combined to investigate the problem, and a novel delay-dependent criterion is presented in terms of an LMI. The advantage of the proposed approach is that the resulting stability criterion can be checked efficiently via existing numerical convex optimization algorithms, such as the interior-point algorithms for solving LMIs [21].
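To make the last point concrete, here is a minimal sketch of checking an LMI-based stability condition of this general kind with an off-the-shelf convex solver (CVXPY). The particular LMI below is a simplified, delay-independent condition obtained from a Lyapunov–Krasovskii functional together with an S-procedure bound on Lipschitz activations; it is not the delay-dependent criterion derived in this paper, and the numerical data A, W, W1, L are assumptions chosen for illustration.

```python
# Minimal sketch (NOT the paper's criterion): feasibility of a simplified
# delay-independent LMI for  y' = -A y + W f(y) + W1 f(y(t - h(t))),
# assuming activations whose Lipschitz constants form the diagonal of L.
import numpy as np
import cvxpy as cp

n = 2
A  = np.diag([2.0, 2.0])                  # positive diagonal self-feedback (assumed data)
W  = np.array([[0.3, -0.2], [0.1, 0.4]])  # connection weights (assumed data)
W1 = np.array([[0.2,  0.1], [-0.1, 0.2]]) # delayed connection weights (assumed data)
L  = np.eye(n)                            # Lipschitz constants of the activations

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
Q = cp.Variable((n, n), symmetric=True)   # weight of the integral term
alpha = cp.Variable(nonneg=True)          # S-procedure multiplier

# Block matrix from V = x'Px + \int_{t-h}^{t} g(x(s))' Q g(x(s)) ds along the
# error dynamics, plus the bound g(x)'g(x) <= x'L'Lx:  V' < 0 if Omega < 0.
Omega = cp.bmat([
    [-A.T @ P - P @ A + alpha * (L.T @ L), P @ W,                 P @ W1],
    [W.T @ P,                              Q - alpha * np.eye(n), np.zeros((n, n))],
    [W1.T @ P,                             np.zeros((n, n)),      -Q],
])

eps = 1e-6  # small margin to enforce strict definiteness numerically
constraints = [P >> eps * np.eye(n), Q >> eps * np.eye(n),
               Omega << -eps * np.eye(3 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```

If the solver reports feasibility, the matrices P and Q certify stability for the assumed data; the criterion derived in this paper additionally carries delay-dependent terms and yields an estimate of the exponential convergence rate.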

Throughout the paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n \times m}$ is the set of all $n \times m$ real matrices. $I$ denotes the identity matrix with appropriate dimensions, and $\|x\|$ denotes the Euclidean norm of a vector $x$. $\lambda_M(\cdot)$ and $\lambda_m(\cdot)$ denote the largest and smallest eigenvalues of a given matrix, respectively. $\star$ denotes the elements below the main diagonal of a symmetric block matrix, and $\mathrm{diag}\{\cdot\}$ denotes a block diagonal matrix. For symmetric matrices $X$ and $Y$, the notation $X > Y$ (respectively, $X \geq Y$) means that the matrix $X - Y$ is positive definite (respectively, positive semidefinite).

Main results

Consider a continuous neural network with time-varying delays described by the following state equations:

\[
  \dot{y}_i(t) = -a_i y_i(t) + \sum_{j=1}^{n} w_{ij} f_j(y_j(t)) + \sum_{j=1}^{n} w_{ij}^{1} f_j(y_j(t - h(t))) + b_i, \qquad i = 1, 2, \ldots, n,
\]

or equivalently

\[
  \dot{y}(t) = -A y(t) + W f(y(t)) + W^{1} f(y(t - h(t))) + b,
\]

where $y(t) = [y_1(t), \ldots, y_n(t)]^{T} \in \mathbb{R}^n$ is the neuron state vector, $f(y(t)) = [f_1(y_1(t)), \ldots, f_n(y_n(t))]^{T} \in \mathbb{R}^n$ is the vector of activation functions, $f(y(t - h(t))) = [f_1(y_1(t - h(t))), \ldots, f_n(y_n(t - h(t)))]^{T} \in \mathbb{R}^n$, $b = [b_1, \ldots, b_n]^{T}$ is a constant input vector, $A = \mathrm{diag}(a_i)$ is a positive diagonal matrix, and $W = (w_{ij})_{n \times n}$ and $W^{1} = (w_{ij}^{1})_{n \times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively.
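As a quick illustration of the model, the following sketch integrates the state equation with a fixed-step Euler scheme and a history buffer for the delayed term. The weights, input, activation $f = \tanh$, and delay $h(t) = 0.5 + 0.3 \sin t$ are hypothetical placeholders, not the numerical examples of the paper.

```python
# Fixed-step Euler simulation of y'(t) = -A y + W f(y) + W1 f(y(t - h(t))) + b,
# with a bounded time-varying delay handled through a history buffer.
import numpy as np

A  = np.diag([2.0, 2.0])                  # assumed data
W  = np.array([[0.3, -0.2], [0.1, 0.4]])
W1 = np.array([[0.2,  0.1], [-0.1, 0.2]])
b  = np.array([0.1, -0.2])
f  = np.tanh                              # assumed activation

dt, T  = 1e-3, 20.0
h_max  = 0.8                              # upper bound on h(t)
steps  = int(T / dt)
buf    = int(h_max / dt) + 1              # history long enough for y(t - h(t))

y = np.zeros((steps + buf, 2))
y[:buf] = np.array([0.5, -0.5])           # constant initial function on [-h_max, 0]

for k in range(buf, steps + buf):
    t  = (k - buf) * dt
    h  = 0.5 + 0.3 * np.sin(t)            # time-varying delay, 0.2 <= h(t) <= 0.8
    kd = k - int(round(h / dt))           # index of the delayed state y(t - h(t))
    y[k] = y[k - 1] + dt * (-A @ y[k - 1] + W @ f(y[k - 1]) + W1 @ f(y[kd]) + b)

print("state near t = T:", y[-1])
```

Under stability conditions of the kind discussed above, trajectories from any initial function converge exponentially to the unique equilibrium, which can be observed by varying the initial values in the sketch.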

Concluding remarks

In this paper, the problems of exponential stability and exponential convergence rate estimation for neural networks with time-varying delays have been studied. The exponential stability criterion obtained in this paper, which depends on the size of the time delay, is derived within the Lyapunov–Krasovskii functional and LMI framework. Numerical examples have shown that the novel criterion is less conservative than those reported in the literature.
