Elsevier

Neurocomputing

Volume 149, Part C, 3 February 2015, Pages 1280-1285

Stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays

https://doi.org/10.1016/j.neucom.2014.09.001

Abstract

In this paper, the global stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays (MJRNNs) is considered. By applying the Lyapunov functional method and some inequality techniques, a novel linear matrix inequality (LMI) based criterion is obtained, yielding several sufficient conditions that guarantee the asymptotic stability of the delayed networks. Finally, numerical examples are given to demonstrate the correctness of the theoretical results.

Introduction

A recurrent neural network naturally involves dynamic elements in the form of feedback connections used as internal memories. Unlike a feedforward neural network, whose output is a function of its current inputs only and is limited to static mapping, a recurrent neural network performs dynamic mapping. Most existing recurrent neural networks are obtained by adding trainable temporal elements to feedforward neural networks so that the output becomes sensitive to input history [1]. Like feedforward neural networks, these networks function as black boxes, and the meaning of each weight in the nodes is not known. They play an important role in applications such as pattern classification, associative memories and optimization (see [1], [2], [3] and the references therein). Thus, research on the stability problem and the relaxed stability problem for recurrent neural networks has become a very active area in the past few years [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14].

Hybrid systems driven by continuous-time Markov chains have been used to model many practical systems that may experience abrupt changes in their structure and parameters. When a neural network incorporates abrupt changes in its structure, the Markovian jump linear system is very appropriate for describing its dynamics [15], [16], [17], [18]. Recently, systems with Markovian jumps have been attracting increasing research attention. This class of systems is hybrid, with two components in the state: the first refers to the mode, described by a continuous-time finite-state Markovian process, and the second refers to the state, represented by a system of differential equations. Markovian jump systems have the advantage of modeling dynamic systems subject to abrupt variations in their structures, such as component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, and operation at different points of a nonlinear plant [19]. Accordingly, there has been growing interest in the study of neural networks with Markovian jumping parameters [19], [20], [21], [22], [23], [24], [25].

Neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and hence there is a distribution of propagation delays over a period of time. It is worth noting that although signal propagation is sometimes instantaneous and can be modeled with discrete delays, it may also be distributed over a certain time period, so distributed delays should be incorporated in the model. In other words, it is often the case that the neural network model possesses both discrete and distributed delays [26], [27], [28]. Thus, in recent years, researchers [29], [30], [31], [32], [33] have focused on the stability of Hopfield neural networks, cellular neural networks and recurrent neural networks with distributed delays.

Inspired by the aforementioned works, we study the global stability problem for a class of recurrent neural networks with discrete and distributed time-varying delays and Markovian jumping parameters. The stability analysis for these networks is carried out using the Lyapunov functional technique. Global stability conditions for the MJRNN are given in terms of LMIs, which can be easily solved by the Matlab LMI toolbox [34]. The main advantage of LMI-based approaches is that the LMI stability conditions can be solved numerically using effective interior-point algorithms [35]. Numerical examples are provided to demonstrate the effectiveness and applicability of the proposed stability results.
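Verifying a candidate LMI solution ultimately reduces to matrix definiteness tests. The following pure-Python sketch (an illustration added here, not part of the paper or of any LMI solver) tests positive definiteness via Cholesky factorization; to certify a condition such as Ω < 0, one would apply it to −Ω.

```python
import math

def is_positive_definite(M, eps=1e-12):
    """Attempt a Cholesky factorization of symmetric M; it succeeds iff M > 0."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s           # pivot must stay positive
                if d <= eps:
                    return False
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return True
```

For example, [[2, 1], [1, 2]] passes the test, while the indefinite matrix [[1, 2], [2, 1]] (eigenvalues 3 and −1) fails it.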

Notations: Throughout this paper, for symmetric matrices X and Y, the notation X ≥ Y means that X − Y is positive semidefinite; M^T denotes the transpose of the matrix M; I is the identity matrix of appropriate dimension; (Ω, F, P) is a probability space, with sample space Ω, σ-algebra F of subsets of the sample space, and probability measure P. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symbol "⁎" denotes a block that is readily inferred by symmetry. {ϱ_t, t ≥ 0} is a homogeneous, finite-state Markovian process with right-continuous trajectories, taking values in the finite set S = {1, 2, …, s}, defined on the probability space (Ω, F, P) with initial mode ϱ_0. Π = [π_ij], i, j ∈ S, denotes the transition rate matrix with transition probability

Pr(ϱ_{t+Δt} = j | ϱ_t = i) = π_ij Δt + o(Δt) if i ≠ j, and 1 + π_ii Δt + o(Δt) if i = j,

where Δt > 0, lim_{Δt→0} o(Δt)/Δt = 0, and π_ij is the transition rate from mode i to mode j, satisfying π_ij ≥ 0 for i ≠ j with π_ii = −Σ_{j=1, j≠i}^{s} π_ij, i, j ∈ S. The mathematical expectation operator with respect to the given probability measure P is denoted by E{·}.
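The transition-rate description above can be simulated directly. The sketch below (pure Python, with a hypothetical two-mode rate matrix; the function names are ours) advances the chain in small Euler steps, jumping from mode i to mode j ≠ i with probability π_ij Δt:

```python
import random

# Hypothetical two-mode transition rate matrix Pi = [pi_ij]; the off-diagonal
# rates are nonnegative and each row sums to zero (pi_ii = -sum_{j != i} pi_ij).
PI = [[-3.0,  3.0],
      [ 4.0, -4.0]]

def step_mode(mode, dt, rng):
    """One Euler step of length dt: jump to j != mode with probability pi_ij * dt."""
    u = rng.random()
    acc = 0.0
    for j, rate in enumerate(PI[mode]):
        if j == mode:
            continue
        acc += rate * dt
        if u < acc:
            return j
    return mode

def simulate_modes(T=10.0, dt=1e-3, seed=0):
    """Return the fraction of time spent in each mode over [0, T]."""
    rng = random.Random(seed)
    mode, occupancy = 0, [0.0, 0.0]
    steps = int(T / dt)
    for _ in range(steps):
        occupancy[mode] += dt
        mode = step_mode(mode, dt, rng)
    return [o / (steps * dt) for o in occupancy]
```

For these example rates, the chain spends roughly 4/7 of its time in the first mode and 3/7 in the second over a long horizon.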


System description and preliminaries

Consider the following Markovian jumping recurrent neural network with discrete and distributed time-varying delays:

u̇_i(t) = −a_i(ϱ_t) u_i(t) + Σ_{j=1}^{n} w_ij(ϱ_t) F_j(u_j(t)) + Σ_{j=1}^{n} h_ij(ϱ_t) F_j(u_j(t − τ_j(t))) + Σ_{j=1}^{n} c_ij(ϱ_t) ∫_{t−ρ_j(t)}^{t} F_j(u_j(s)) ds + I_i,  i = 1, 2, …, n,

in which u_i(t) is the activation of the ith neuron. The positive constant a_i(ϱ_t) denotes the rate with which cell i resets its potential to the resting state when isolated from the other cells and inputs. w_ij(ϱ_t), h_ij(ϱ_t) and c_ij(ϱ_t) are the connection weights.
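To make the model concrete, the following pure-Python sketch integrates a two-neuron, two-mode instance of the system by forward Euler, with a history buffer supplying the discrete delay τ and a rectangle-rule sum approximating the distributed-delay integral. All numerical values are hypothetical (not parameters from the paper), chosen only to give a stable trajectory:

```python
import math, random

# Hypothetical two-neuron, two-mode instance of the delayed MJRNN model.
A = [[1.0, 1.0], [1.2, 0.8]]                # a_i(mode): positive decay rates
W = [[[0.2, -0.1], [0.1, 0.2]],             # w_ij(mode): instantaneous weights
     [[0.1, 0.2], [-0.2, 0.1]]]
H = [[[0.1, 0.0], [0.0, 0.1]],              # h_ij(mode): discrete-delay weights
     [[0.05, 0.1], [0.1, 0.05]]]
C = [[[0.05, 0.0], [0.0, 0.05]],            # c_ij(mode): distributed-delay weights
     [[0.02, 0.0], [0.0, 0.02]]]
I = [0.0, 0.0]                              # external inputs I_i
F = math.tanh                               # activation F_j

def simulate_mjrnn(T=20.0, dt=0.01, tau=0.5, rho=0.3, seed=1):
    rng = random.Random(seed)
    d_tau, d_rho = int(tau / dt), int(rho / dt)
    hist = [[0.5, -0.5]] * (max(d_tau, d_rho) + 1)   # constant initial history
    mode = 0
    for _ in range(int(T / dt)):
        u, u_del = hist[-1], hist[-1 - d_tau]
        # rectangle-rule approximation of the integral over [t - rho, t]
        integ = [dt * sum(F(h[j]) for h in hist[-d_rho:]) for j in range(2)]
        new = []
        for i in range(2):
            du = (-A[mode][i] * u[i]
                  + sum(W[mode][i][j] * F(u[j]) for j in range(2))
                  + sum(H[mode][i][j] * F(u_del[j]) for j in range(2))
                  + sum(C[mode][i][j] * integ[j] for j in range(2)) + I[i])
            new.append(u[i] + dt * du)
        hist.append(new)
        if rng.random() < 2.0 * dt:          # Markovian mode switch, rate 2
            mode = 1 - mode
    return hist[-1]
```

With these parameters the state decays toward the origin, in line with the kind of asymptotic stability studied below.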

Global stability results

In this section, some sufficient conditions for the global stability of system (4) are obtained.

Theorem 3.1

Given scalars τ̄ > 0, ρ̄ > 0, d₁ > 0 and d₂ > 0, the system (4) is globally asymptotically stable if there exist symmetric positive definite matrices P_i > 0, Q > 0, R > 0, S > 0, T > 0, U > 0, W > 0, symmetric positive definite matrices N_l, M_l, O_l (l = 1, 2, …, 7) and scalars a > 0, b > 0 such that a feasible solution exists for the LMIs

Ω_i = [ Ω_11 Ω_12 Ω_13 Ω_14 Ω_15 Ω_16 Ω_17 τ̄N_1 ρ̄M_1 ;
        ⁎  Ω_22 Ω_23 Ω_24 Ω_25 Ω_26 Ω_27 τ̄N_2 ρ̄M_2 ;
        ⁎  ⁎  Ω_33 Ω_34 Ω_35 Ω_36 Ω_37 τ̄N_3 ρ̄M_3 ;
        ⁎  ⁎  ⁎  Ω_44 Ω_45 Ω_46 Ω_47 τ̄N_4 ρ̄M_4 ;
        ⁎  ⁎  ⁎  ⁎  Ω…

Numerical examples

Example 1

Consider the MJRNN with two modes (s = 2). The system is of the following form:

ẋ(t) = −A_i x(t) + W_i f(x(t)) + H_i f(x(t − τ(t))) + C_i ∫_{t−ρ(t)}^{t} f(x(s)) ds,

with the following parameters:

A1 = [1 0; 0 1], W1 = [0.4 0.3; 0.1 0.3], H1 = [0.6 0.2; 0.4 0.7], C1 = [0.1 0; 0 0.1],
A2 = [5 0; 0 3], W2 = [0.5 0.1; 0.4 0.3], H2 = [0.2 0.4; 0.3 0.6], C2 = [0.9 0; 0 0.8].

By using the Matlab LMI toolbox, we solve the LMI (7) with τ̄ = 0.3, ρ̄ = 1.3, d₁ = d₂ = 1.5, L = I; the feasible solutions are

P1 = [25.8811 2.4065; 2.4065 28.5303], P2 = [33.7081 0.5924; 0.5924 20.4865], Q = [0.0054 0.0021; 0.0021 …
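As a quick sanity check on the reported solution (an illustration added here; it does not verify the full LMI (7)), a symmetric 2×2 matrix is positive definite exactly when its trace and determinant are both positive, and the reported P1 and P2 pass this test:

```python
# Sylvester's criterion for a symmetric 2x2 matrix M = [[a, b], [b, c]]:
# M > 0 iff trace and determinant are both positive.
def is_pd_2x2(M):
    a, b, c = M[0][0], M[0][1], M[1][1]
    return a + c > 0 and a * c - b * b > 0

# Lyapunov matrices reported for Example 1.
P1 = [[25.8811, 2.4065], [2.4065, 28.5303]]
P2 = [[33.7081, 0.5924], [0.5924, 20.4865]]
```

Both matrices are strongly diagonally dominant with positive diagonals, so the check succeeds, as required for a valid Lyapunov matrix.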

Conclusion

In this paper, we have performed the stability analysis for a class of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays. Some new stability criteria have been presented that guarantee the MJRNNs are asymptotically stable. A linear matrix inequality (LMI) approach has been used to solve the underlying problem. The applicability of the derived results has been demonstrated through numerical examples, and the results are compared with some existing results.


References (37)


M. Syed Ali graduated from the Department of Mathematics of Gobi Arts and Science College affiliated to Bharathiar University, Coimbatore in 2002. He received his post graduation in Mathematics from Sri Ramakrishna Mission Vidyalaya College of Arts and Science affiliated to Bharathiar University, Coimbatore, Tamilnadu, India, in 2005. He was awarded Master of Philosophy in 2006 in the field of Mathematics with specialized area of Numerical Analysis from Gandhigram Rural University Gandhigram, India. He was conferred with Doctor of Philosophy in 2010 in the field of Mathematics specialized in the area of Fuzzy neural networks in Gandhigram Rural University, Gandhigram, India. He was selected as a Post Doctoral Fellow in the year 2010 for promoting his research in the field of Mathematics at Bharathidasan University, Trichy, Tamilnadu and also worked there from November 2010 to February 2011. Since 2011 he is working as an Assistant Professor in Department of Mathematics, Thiruvalluvar University, Vellore, Tamilnadu, India. He has published 25 research papers in the various SCI journals holding impact factors. He has also published research articles in national journals and international conference proceedings. He also serves as a reviewer for few SCI journals. His research interests include Stochastic Differential Equations, Dynamical systems, Fuzzy Neural Networks and Cryptography.

The work was supported by NBHM Project Grant no. 2/48(10)/2011-RD-II/865.
