Neurocomputing
Volume 73, Issues 4–6, January 2010, Pages 986–990

On pth moment exponential stability of stochastic Cohen–Grossberg neural networks with time-varying delays

https://doi.org/10.1016/j.neucom.2009.08.019

Abstract

With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and a Hardy-type inequality, a set of novel sufficient conditions for pth moment exponential stability of non-autonomous stochastic Cohen–Grossberg neural networks is given, which modifies and generalizes some corresponding published results.

Introduction

In real nervous systems, there are various stochastic perturbations to the networks, and it is important to understand how these perturbations affect the networks. In particular, it is critical to know whether the networks remain stable under such perturbations [1]. Recently, a great number of results on stochastic neural networks with discrete delays have been reported in the literature; see e.g. [2], [3], [4], [5], [6], [7], [8], [9], [10]. pth moment exponential stability analysis is one of the most challenging problems in the field of stochastic neural networks [11]. In [11], the author applied a Razumikhin-type theorem to study the pth moment exponential stability of the following system and obtained some interesting results:
$$dx(t)=-\alpha(x(t))\big[\beta(x(t))-A\,g(x(t))-B\,g(x(t-\tau(t)))\big]\,dt+\sigma(t,x(t),x(t-\tau(t)))\,dw(t),\qquad(1)$$
where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}\in\mathbb{R}^n$ is the neuron state vector; $\alpha(x(t))=\operatorname{diag}(\alpha_1(x_1(t)),\ldots,\alpha_n(x_n(t)))$; $\beta(x(t))=(\beta_1(x_1(t)),\ldots,\beta_n(x_n(t)))^{T}$; $A=(a_{ij})_{n\times n}$ and $B=(b_{ij})_{n\times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively; $g(x(t))=(g_1(x_1(t)),\ldots,g_n(x_n(t)))^{T}$ is the vector of activation functions; $g(x(t-\tau(t)))=(g_1(x_1(t-\tau(t))),\ldots,g_n(x_n(t-\tau(t))))^{T}$ with $0\le\tau(t)\le\tau$; $\sigma(t)=(\sigma_{ij}(t))_{n\times n}$ is the diffusion coefficient matrix; and $w(t)=(w_1(t),\ldots,w_n(t))^{T}$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},P)$ with the natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e. $\mathcal{F}_t=\sigma\{w(s):0\le s\le t\}$).
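To make the model concrete, here is a minimal Euler–Maruyama sketch of system (1); every choice below (the functions $\alpha$, $\beta$, $g$, $\sigma$, the weights and the delay) is a hypothetical illustration, not taken from [11].

```python
import numpy as np

# Euler-Maruyama sketch of system (1):
#   dx = -alpha(x)[beta(x) - A g(x) - B g(x(t - tau))] dt + sigma(.) dw
# All parameter values are illustrative, not from [11].
rng = np.random.default_rng(0)
n, dt, T, tau = 2, 1e-3, 5.0, 0.1          # constant delay, 0 <= tau(t) <= tau
steps, lag = int(T / dt), int(tau / dt)

A = np.array([[0.2, -0.1], [0.1, 0.3]])    # connection weights (hypothetical)
B = np.array([[0.1, 0.0], [-0.2, 0.1]])    # delayed connection weights

alpha = lambda x: 1.0 + 0.5 / (1.0 + x**2)         # bounded, as in (H1)
beta  = lambda x: 2.0 * x                          # behaved functions beta_i
g     = np.tanh                                    # activations, Lipschitz (G_i = 1)
sigma = lambda x, xd: 0.1 * np.sqrt(x**2 + xd**2)  # diagonal diffusion, per (H4)

x = np.zeros((steps + 1, n))
x[0] = [0.5, -0.3]                         # initial segment held constant on [-tau, 0]
for k in range(steps):
    xd = x[max(k - lag, 0)]                # delayed state x(t - tau)
    drift = -alpha(x[k]) * (beta(x[k]) - A @ g(x[k]) - B @ g(xd))
    dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    x[k + 1] = x[k] + drift * dt + sigma(x[k], xd) * dw

print("single-path proxy for E|x(T)|^2:", np.sum(x[-1] ** 2))
```

In order to obtain the pth moment exponential stability of system (1), the following conditions were established in [11].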

  • (H1)

For each $i\in\{1,\ldots,n\}$, there exist positive constants $\underline{\alpha}_i,\overline{\alpha}_i$ such that $\underline{\alpha}_i\le\alpha_i(x_i(t))\le\overline{\alpha}_i$.

  • (H2)

For each $i\in\{1,\ldots,n\}$, there exists a positive constant $\gamma_i$ such that $x_i^{p-1}(t)\,\beta_i(x_i(t))\ge\gamma_i x_i^{p}(t)$.

  • (H3)

For each $i\in\{1,\ldots,n\}$, there exists a positive constant $G_i$ such that $|g_i(x)-g_i(y)|\le G_i|x-y|$ for all $x,y\in\mathbb{R}$.

  • (H4)

For each $i,j\in\{1,\ldots,n\}$, there are non-negative constants $c_{ij}^{0}$, $c_{ij}^{1}$ such that
$$\sigma_{ij}^{2}(t,x_i(t),x_j(t-\tau(t)))\le c_{ij}^{0}\,x_i^{2}(t)+c_{ij}^{1}\,x_j^{2}(t-\tau(t))$$
(see the sketch after this list).
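As a quick numerical sanity check, with the hypothetical choices $g_i=\tanh$ for (H3) and a linear diffusion $\sigma_{ij}(t,u,v)=0.3u+0.4v$ for (H4):

```python
import numpy as np

rng = np.random.default_rng(1)

# (H3): |tanh(x) - tanh(y)| <= G|x - y| with G = 1
x, y = rng.normal(size=10_000), rng.normal(size=10_000)
assert np.all(np.abs(np.tanh(x) - np.tanh(y)) <= np.abs(x - y) + 1e-12)

# (H4) for the illustrative choice sigma(t, u, v) = 0.3u + 0.4v:
# (a + b)^2 <= 2a^2 + 2b^2, so c0 = 2*0.3^2 and c1 = 2*0.4^2 work.
u, v = rng.normal(size=10_000), rng.normal(size=10_000)
assert np.all((0.3 * u + 0.4 * v) ** 2 <= 0.18 * u**2 + 0.32 * v**2 + 1e-12)
print("(H3) and (H4) hold at all sampled points")
```

The following main theorem was obtained in [11].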

Theorem A (Zhu [11])

Under assumptions (H1)–(H4), if there exists a positive diagonal matrix $Q=\operatorname{diag}(q_1,\ldots,q_n)$ such that
$$\lambda_1>\lambda_2,\qquad(2)$$
where
$$\lambda_1=\min_{1\le i\le n}\Big\{p\underline{\alpha}_i\gamma_i-(p-1)\sum_{j=1}^{n}\overline{\alpha}_i|a_{ij}|G_j-\frac{1}{q_i}\sum_{j=1}^{n}\overline{\alpha}_j q_j|a_{ji}|G_i-(p-1)\sum_{j=1}^{n}\overline{\alpha}_i|b_{ij}|G_j-\frac{p(p-1)}{2}\max_{1\le k\le n}\{q_k\}\frac{1}{q_i}\sum_{j=1}^{n}c_{ij}^{0}-(p-1)(p-2)\max_{1\le k\le n}\{q_k\}\frac{1}{q_i}\sum_{j=1}^{n}c_{ij}^{1}\Big\}\qquad(3)$$
and
$$\lambda_2=\max_{1\le i\le n}\Big\{\frac{1}{q_i}\sum_{j=1}^{n}|b_{ji}|\overline{\alpha}_j q_j G_i+(p-1)\max_{1\le k\le n}\{q_k\}\frac{1}{q_i}\sum_{j=1}^{n}c_{ji}^{1}\Big\},$$
then for all $\xi\in L_{\mathcal{F}_0}^{p}([-\tau,0];\mathbb{R}^n)$, the trivial solution of system (1) is pth moment exponentially stable.
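The criterion $\lambda_1>\lambda_2$ is mechanical to evaluate. The sketch below does so for $p=2$ (for which the coefficient questioned below vanishes anyway), $Q=I$, and hypothetical two-neuron data; none of the numbers come from [11].

```python
import numpy as np

# Evaluate the Theorem A criterion lambda1 > lambda2 for p = 2, Q = I
# and hypothetical two-neuron data.
p = 2
al_lo = np.array([1.0, 1.0])              # alpha_underbar_i, from (H1)
al_hi = np.array([1.2, 1.2])              # alpha_overbar_i, from (H1)
gam   = np.array([3.0, 3.5])              # gamma_i
G     = np.array([1.0, 1.0])              # Lipschitz constants, from (H3)
A  = np.array([[0.2, 0.1], [0.1, 0.3]])   # |a_ij|
B  = np.array([[0.1, 0.0], [0.2, 0.1]])   # |b_ij|
c0 = np.full((2, 2), 0.05)                # c_ij^0, from (H4)
c1 = np.full((2, 2), 0.05)                # c_ij^1, from (H4)
q  = np.ones(2)
qmax = q.max()

lam1 = np.min(
    p * al_lo * gam
    - (p - 1) * (al_hi[:, None] * A * G[None, :]).sum(axis=1)
    - (1 / q) * (al_hi * q * A.T * G[:, None]).sum(axis=1)
    - (p - 1) * (al_hi[:, None] * B * G[None, :]).sum(axis=1)
    - (p * (p - 1) / 2) * qmax * (1 / q) * c0.sum(axis=1)
    - ((p - 1) * (p - 2) / 2) * qmax * (1 / q) * c1.sum(axis=1)  # corrected coeff.
)
lam2 = np.max(
    (1 / q) * (B.T * al_hi * q).sum(axis=1) * G
    + (p - 1) * qmax * (1 / q) * c1.T.sum(axis=1)
)
print(f"lambda1 = {lam1:.3f}, lambda2 = {lam2:.3f}, criterion met: {lam1 > lam2}")
```

For these illustrative numbers the criterion holds ($\lambda_1\approx 5.06>\lambda_2\approx 0.46$).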

However, the main result in [11] has a defect when $p=2k+1$, $k\in\mathbb{Z}_+$, and $x(t)<0$: from the constructed Lyapunov function, one finds that the term $x^{p/2}(t)$ is not well defined. To remedy this imperfection, the Lyapunov function should be replaced with $V(t,x(t))=\sum_{i=1}^{n}q_i|x_i(t)|^{p}$. Noticing that $\partial|x_i(t)|^{p}/\partial x_i=p|x_i|^{p-1}\operatorname{sgn}\{x_i\}=p|x_i|^{p-2}x_i$, we have $(\partial|x_i(t)|^{p}/\partial x_i)\,\beta_i(x_i(t))=p|x_i|^{p-2}x_i\beta_i(x_i(t))$, so it is easy to see that the assumed condition (H2) is not appropriate; it should be revised as (H2′) below. On the other hand, there is an error in (3): in $\lambda_1$, the coefficient $(p-1)(p-2)$ should be replaced with $(p-1)(p-2)/2$.
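Before stating the revision, note that the derivative identity just used, and the failure of $x^{p/2}$ for negative arguments when $p$ is odd, can be checked symbolically; a minimal SymPy sketch with the illustrative choice $p=3$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = 3  # an odd exponent, for which x^p < 0 when x < 0

# d|x|^p/dx = p|x|^(p-1) sgn(x) = p|x|^(p-2) x, checked at sample points
dV = sp.diff(sp.Abs(x) ** p, x)
claim = p * sp.Abs(x) ** (p - 2) * x
for v in [-2, -0.5, 0.7, 3]:
    assert sp.simplify(dV.subs(x, v) - claim.subs(x, v)) == 0

# x^(p/2) is not real-valued for x < 0 and odd p:
print(sp.sqrt(x ** p).subs(x, -1))  # prints I (the imaginary unit)
```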

(H2′) For each $i\in\{1,2,\ldots,n\}$, there exists a positive constant $\gamma_i>0$ such that
$$x_i(t)\,\beta_i(x_i(t))\ge\gamma_i x_i^{2}(t).\qquad(4)$$
In this paper, we generalize system (1) and further study the following non-autonomous stochastic functional differential equations:
$$dx_i(t)=-\alpha_i(x_i(t))\Big[\beta_i(x_i(t))-\sum_{j=1}^{n}a_{ij}(t)g_j(x_j(t))-\sum_{j=1}^{n}b_{ij}(t)g_j(x_j(t-\tau_j(t)))\Big]dt+\sum_{j=1}^{n}\sigma_{ij}(t,x_i(t),x_j(t-\tau_j(t)))\,dw_j(t).\qquad(5)$$
We will improve and generalize the corresponding results published in [11]. We assume that the following generalized conditions are satisfied:

  • (H2″)

There exist positive functions $\gamma_j(t)$ such that $x_j(t)\,\beta_j(x_j(t))\ge\gamma_j(t)\,x_j^{2}(t)$.

  • (H4′)

There are non-negative functions $c_{ij}^{0}(t)$, $c_{ij}^{1}(t)$ such that, for $t,u,v\in\mathbb{R}$, $\sigma_{ij}^{2}(t,u,v)\le c_{ij}^{0}(t)\,u^{2}+c_{ij}^{1}(t)\,v^{2}$.

Remark 1.1

Obviously, choosing $\gamma_j(t)=\gamma_j>0$, $c_{ij}^{0}(t)=c_{ij}^{0}>0$, $c_{ij}^{1}(t)=c_{ij}^{1}>0$, one sees that conditions (H2″) and (H4′) generalize conditions (H2′) and (H4); the significance of this generalization is that it may provide engineers with a wider selection of parameters in practice. For simplicity, and to avoid lengthy statements, some symbols and notations with the same meanings as those in [11] are employed in the following derivations.


Main results

In this section, we circumvent the problems of Theorem A identified above and improve it. In order to prove the main result, we first present two lemmas.

Lemma 2.1

Hardy inequality, Cao and Liang [12]

Assume that there exist constants $a_k\ge 0$, $p_k>0$ $(k=1,\ldots,m+1)$. Then the following inequality holds:
$$\Big(\prod_{k=1}^{m+1}a_k^{p_k}\Big)^{1/S_{m+1}}\le\Big(\frac{1}{S_{m+1}}\sum_{k=1}^{m+1}p_k a_k^{r}\Big)^{1/r},\qquad(6)$$
where $r>0$ and $S_{m+1}=\sum_{k=1}^{m+1}p_k$. In (6), if we let $p_{m+1}=1$ and $r=S_{m+1}=\sum_{k=1}^{m}p_k+1$, we get
$$\prod_{k=1}^{m}a_k^{p_k}\,a_{m+1}\le\frac{1}{r}\sum_{k=1}^{m}p_k a_k^{r}+\frac{1}{r}a_{m+1}^{r};\qquad(7)$$
if we let $p_{m+1}=2$ and $r=S_{m+1}=\sum_{k=1}^{m}p_k+2$, we get
$$\prod_{k=1}^{m}a_k^{p_k}\,a_{m+1}^{2}\le\frac{1}{r}\sum_{k=1}^{m}p_k a_k^{r}+\frac{2}{r}a_{m+1}^{r}.\qquad(8)$$
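A quick numerical spot check of inequality (6), sampling random $a_k\ge 0$, $p_k>0$ and $r>0$ (purely illustrative):

```python
import numpy as np

# Sample random data and verify the weighted geometric/power-mean
# inequality (6): (prod a_k^p_k)^(1/S) <= ((1/S) sum p_k a_k^r)^(1/r).
rng = np.random.default_rng(2)
for _ in range(1000):
    a = rng.uniform(0.0, 5.0, 5)   # a_k >= 0, here m + 1 = 5
    p = rng.uniform(0.1, 3.0, 5)   # p_k > 0
    r = rng.uniform(0.5, 4.0)      # r > 0
    S = p.sum()
    lhs = np.prod(a ** p) ** (1.0 / S)
    rhs = (np.sum(p * a ** r) / S) ** (1.0 / r)
    assert lhs <= rhs + 1e-9
print("inequality (6) held on all sampled points")
```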

Lemma 2.2

Generalized Halanay inequality, Tian [13]

For two …

An illustrative example

In this section, we give a numerical example with simulations to illustrate the stability criteria presented in the previous section.

Example 3.1

Consider the following stochastic recurrent neural network with time-varying connection matrices and delays:
$$dx(t)=\Big[-\begin{pmatrix}15&0\\0&15\end{pmatrix}\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix}+\begin{pmatrix}-2&4\\3&0.5\end{pmatrix}\begin{pmatrix}\tanh(x_1(t))\\\tanh(x_2(t))\end{pmatrix}+\begin{pmatrix}4&1\\0.5&2\end{pmatrix}\begin{pmatrix}\tanh(x_1(t-\tau_1(t)))\\\tanh(x_2(t-\tau_2(t)))\end{pmatrix}\Big]dt+\sigma(t,x(t),x(t-\tau(t)))\,dw(t),\quad t\ge 0,$$
where $\tau_i(t)$ is a bounded positive function and $\sigma:\mathbb{R}_+\times\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}^{2\times 2}$ satisfies
$$\operatorname{trace}[\sigma^{T}(t,x,y)\sigma(t,x,y)]\le x_1^{2}+x_2^{2}+y_1^{2}+y_2^{2}.$$
In the example, let p
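For intuition, the sketch below simulates one Euler–Maruyama path of this network with the matrices as recovered above; the constant delay $\tau_i(t)\equiv 0.1$ and the concrete diffusion are hypothetical choices made to satisfy the trace bound, not values from the paper.

```python
import numpy as np

# One Euler-Maruyama path of the example network. The delay and the
# diffusion are hypothetical choices consistent with the trace bound.
rng = np.random.default_rng(3)
dt, T = 1e-3, 2.0
steps, lag = int(T / dt), int(0.1 / dt)

D = np.diag([15.0, 15.0])
A = np.array([[-2.0, 4.0], [3.0, 0.5]])
B = np.array([[4.0, 1.0], [0.5, 2.0]])

def sigma(x, y):
    # trace(sigma^T sigma) = 0.25[(x1+y1)^2 + (x2+y2)^2]
    #                     <= x1^2 + x2^2 + y1^2 + y2^2
    return 0.5 * np.diag(x + y)

x = np.zeros((steps + 1, 2))
x[0] = [1.0, -1.0]
for k in range(steps):
    xd = x[max(k - lag, 0)]                     # delayed state
    drift = -D @ x[k] + A @ np.tanh(x[k]) + B @ np.tanh(xd)
    dw = rng.normal(0.0, np.sqrt(dt), 2)
    x[k + 1] = x[k] + drift * dt + sigma(x[k], xd) @ dw

print("|x(T)| =", np.linalg.norm(x[-1]))  # the path typically settles near 0
```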

Conclusion

In summary, with the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and a Hardy-type inequality, this paper has given a set of novel sufficient conditions for pth moment exponential stability of non-autonomous stochastic Cohen–Grossberg neural networks, which modifies and generalizes the corresponding results in [11]. The significance of this paper is that it offers a wider selection of network parameters in order to achieve the necessary convergence in


References (13)



Chuangxia Huang received the B.S. degree in Mathematics in 1999 from National University of Defense Technology, Changsha, China. From September 2002, he began to pursue his M.S. degree in Applied Mathematics at Hunan University, Changsha, China, and from April 2004, he pursued his Ph.D. degree in Applied Mathematics in advance at Hunan University. He received the Ph.D. degree in June 2006. He is currently an Associate Professor of Changsha University of Science and Technology, Changsha, China, and a Visiting Scholar at the Department of Mathematics, Southeast University, Nanjing, China. He is the author of more than 20 journal papers. His research interests include dynamics of neural networks, and stability theory of functional differential equations.

Jinde Cao received the B.S. degree from Anhui Normal University, Wuhu, China, the M.S. degree from Yunnan University, Kunming, China, and the Ph.D. degree from Sichuan University, Chengdu, China, all in Mathematics/Applied Mathematics, in 1986, 1989, and 1998, respectively. From March 1989 to May 2000, he was with the Yunnan University. In May 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From July 2001 to June 2002, he was a Postdoctoral Research Fellow at the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. In 2006, 2007 and 2008, he was a Visiting Research Fellow and a Visiting Professor at the School of Information Systems, Computing and Mathematics, Brunel University, UK.

Currently, he is a TePin Professor and Doctoral Advisor at the Southeast University, prior to which he was a Professor at Yunnan University from 1996 to 2000. He is the author or coauthor of more than 150 journal papers and five edited books. His research interests include nonlinear systems, neural networks, complex systems and complex networks, stability theory, and applied mathematics.

Dr. Cao is a senior member of IEEE, a Reviewer of Mathematical Reviews and Zentralblatt-Math, and an Associate Editor of the IEEE Transactions on Neural Networks, the Journal of the Franklin Institute, Mathematics and Computers in Simulation, and Neurocomputing.

This work was jointly supported by the China Scholarship Foundation under Grant no. 2008106456, the Foundation of the Chinese Society for Electrical Engineering, and the Hunan Provincial Natural Science Foundation of China under Grant no. 07JJ4001.
