Stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays☆
Introduction
A recurrent neural network naturally involves dynamic elements in the form of feedback connections used as internal memories. Unlike a feedforward neural network, whose output is a function of its current inputs only and is therefore limited to static mapping, a recurrent neural network performs dynamic mapping. Most existing recurrent neural networks are obtained by adding trainable temporal elements to feedforward neural networks so that the output becomes sensitive to input history [1]. Like feedforward neural networks, these networks function as black boxes, and the meaning of each weight in their nodes is not known. They play an important role in applications such as pattern classification, associative memories and optimization (see [1], [2], [3] and the references therein). Thus, research on the stability problem and the relaxed stability problem for recurrent neural networks has become a very active area in the past few years [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14].
Hybrid systems driven by continuous-time Markov chains have been used to model many practical systems that may experience abrupt changes in their structure and parameters. When a neural network incorporates abrupt changes in its structure, a Markovian jump linear system is very appropriate to describe its dynamics [15], [16], [17], [18]. Recently, systems with Markovian jumps have been attracting increasing research attention. This class of systems is hybrid, with two components in the state: the first is the mode, described by a continuous-time finite-state Markovian process, and the second is the state, represented by a system of differential equations. Markovian jump systems have the advantage of modeling dynamic systems subject to abrupt variations in their structure, such as component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, and operation at different points of a nonlinear plant [19]. Recently, there has been growing interest in the study of neural networks with Markovian jumping parameters [19], [20], [21], [22], [23], [24], [25].
Neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths; hence there is a distribution of propagation delays over a period of time. It is worth noting that although signal propagation is sometimes instantaneous and can be modeled with discrete delays, it may also be distributed over a certain time period, so that distributed delays should be incorporated in the model. In other words, it is often the case that a neural network model possesses both discrete and distributed delays [26], [27], [28]. Thus, in recent years, researchers [29], [30], [31], [32], [33] have focused on the stability of Hopfield neural networks, cellular neural networks and recurrent neural networks with distributed delays.
Inspired by the aforementioned works, we study the global stability problem for a class of recurrent neural networks with discrete and distributed time-varying delays and Markovian jumping parameters. The stability analysis for Markovian jumping recurrent neural networks (MJRNNs) with discrete and distributed time-varying delays is carried out using the Lyapunov functional technique. Global stability conditions for the MJRNN are given in terms of linear matrix inequalities (LMIs), which can be easily solved by the Matlab LMI toolbox [34]. The main advantage of LMI-based approaches is that the LMI stability conditions can be solved numerically by effective interior-point algorithms [35]. Numerical examples are provided to demonstrate the effectiveness and applicability of the proposed stability results.
Notations: Throughout this paper, for symmetric matrices $X$ and $Y$, the notation $X \geq Y$ means that $X - Y$ is positive semidefinite; $M^{T}$ denotes the transpose of the matrix $M$; $I$ is the identity matrix with appropriate dimension; $(\Omega, \mathcal{F}, P)$ is a probability space with sample space $\Omega$, $\sigma$-algebra $\mathcal{F}$ of subsets of the sample space, and probability measure $P$. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symbol “⁎” denotes a block that is readily inferred by symmetry. $\{\varrho(t),\, t \geq 0\}$ is a homogeneous, finite-state Markovian process with right-continuous trajectories, taking values in the finite set $S = \{1, 2, \ldots, s\}$ on the given probability space, with initial mode $\varrho_0$. Its transition rate matrix $\Pi = (\pi_{ij})_{s \times s}$ is defined by the transition probabilities
$$P\{\varrho(t+\Delta)=j \mid \varrho(t)=i\} = \begin{cases} \pi_{ij}\Delta + o(\Delta), & i \neq j,\\ 1 + \pi_{ii}\Delta + o(\Delta), & i = j, \end{cases}$$
where $\Delta > 0$ and $\lim_{\Delta \to 0} o(\Delta)/\Delta = 0$; $\pi_{ij} \geq 0$ is the transition rate from mode $i$ to mode $j$ for $i \neq j$, with $\pi_{ii} = -\sum_{j \neq i} \pi_{ij}$. The mathematical expectation operator with respect to the given probability measure $P$ is denoted by $\mathbb{E}\{\cdot\}$.
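The mode process described above can be illustrated with a short simulation. The two-mode transition-rate matrix below is hypothetical (chosen only so that off-diagonal entries are nonnegative and each row sums to zero); it is not taken from the paper:

```python
import numpy as np

def simulate_modes(Pi, t_end, mode0=0, seed=0):
    """Simulate a continuous-time Markov chain with generator Pi.

    Sojourn times in mode i are exponential with rate -Pi[i, i];
    the next mode j is drawn with probability Pi[i, j] / (-Pi[i, i]).
    Returns arrays of jump times and the mode after each jump.
    """
    rng = np.random.default_rng(seed)
    t, mode = 0.0, mode0
    times, modes = [0.0], [mode]
    while t < t_end:
        rate = -Pi[mode, mode]                 # holding rate in current mode
        t += rng.exponential(1.0 / rate)       # exponential sojourn time
        probs = Pi[mode].copy()
        probs[mode] = 0.0                      # jump must leave the mode
        mode = rng.choice(len(probs), p=probs / rate)
        times.append(min(t, t_end))            # clip final jump at horizon
        modes.append(mode)
    return np.array(times), np.array(modes)

# Hypothetical two-mode generator: rows sum to zero.
Pi = np.array([[-3.0,  3.0],
               [ 4.0, -4.0]])
times, modes = simulate_modes(Pi, t_end=10.0)
```

Plugging the resulting mode trajectory into a mode-dependent system is exactly how a Markovian jump model switches its parameters at random instants.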
Section snippets
System description and preliminaries
Consider the following Markovian jumping recurrent neural network with discrete and distributed time-varying delays, in which the state of the ith neuron denotes its activation. The positive constant denotes the rate with which cell i resets its potential to the resting state when isolated from the other cells and inputs; the remaining mode-dependent matrices are the connection
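The displayed state equation is truncated in this snippet. A typical form for a recurrent neural network with Markovian jumping parameters, a discrete delay $\tau(t)$ and a distributed delay $\sigma(t)$ is sketched below; the symbols $C$, $A$, $B$, $D$, $f$ and $J$ are illustrative labels, not necessarily the paper's exact notation:

```latex
\dot{x}(t) = -C(\varrho(t))\,x(t)
           + A(\varrho(t))\,f\big(x(t)\big)
           + B(\varrho(t))\,f\big(x(t-\tau(t))\big)
           + D(\varrho(t))\int_{t-\sigma(t)}^{t} f\big(x(s)\big)\,ds + J,
```

where each coefficient matrix switches with the Markov mode $\varrho(t)$, the $B$ term carries the discrete delay and the integral term carries the distributed delay.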
Global stability results
In this section, sufficient conditions for the global stability of system (4) are obtained. Theorem 3.1. Given scalars, the system (4) is globally asymptotically stable if there exist symmetric positive definite matrices, symmetric positive definite matrices Nl, Ml, and scalars such that a feasible solution exists for the LMIs
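For orientation, a representative Lyapunov–Krasovskii functional for a mode-dependent system with both discrete and distributed delays combines a mode-dependent quadratic term with integral terms over both delay windows. The form below is a standard sketch, not necessarily the exact functional used in the paper:

```latex
V(x_t, i) = x^{T}(t)\,P_i\,x(t)
          + \int_{t-\tau(t)}^{t} x^{T}(s)\,Q\,x(s)\,ds
          + \int_{-\sigma}^{0}\!\int_{t+\theta}^{t} f^{T}\big(x(s)\big)\,R\,f\big(x(s)\big)\,ds\,d\theta .
```

Differentiating such a functional along the trajectories and bounding the cross terms is what produces the LMI conditions of the theorem.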
Numerical examples
Example 1. Consider the MJRNN with two modes (s=2) of the following form, with the parameters given below. By using the Matlab LMI toolbox, we solve the LMI (7) with the given scalar parameters and L=I; the feasible solutions are
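The paper solves its LMIs with the Matlab LMI toolbox, and the snippet omits the actual mode matrices. As a minimal stand-in, the matrix A below is hypothetical, and this single-mode, delay-free Lyapunov feasibility check is far simpler than the paper's LMI (7); it only illustrates how such a condition is certified numerically in Python with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical single-mode, delay-free linearization x' = A x
# (not the paper's example matrices).
A = np.array([[-2.0, 0.5],
              [ 0.3, -1.5]])

# Solve the Lyapunov equation A^T P + P A = -I for P.
# A is Hurwitz (asymptotically stable) iff P is positive definite.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# Certify P > 0 via the eigenvalues of its symmetric part.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
assert np.all(eigs > 0)  # P > 0 certifies asymptotic stability
```

The full mode-dependent, delayed LMIs of the paper add one such matrix variable per mode plus delay-related terms, but the feasibility logic — find positive definite matrices satisfying matrix inequalities — is the same.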
Conclusion
In this paper, we have performed the stability analysis for a class of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays. Some new stability criteria have been presented to guarantee that the MJRNNs are asymptotically stable. A linear matrix inequality (LMI) approach has been used to solve the underlying problem. The applicability of the derived results has been demonstrated through numerical examples, and the results are compared with some existing results.
References (37)
- et al., Recurrent radial basis function networks for adaptive noise cancellation, Neural Netw. (1995)
- et al., Exponential stability of uncertain stochastic fuzzy BAM neural networks with time-varying delays, Neurocomputing (2009)
- et al., Existence and global stability analysis of equilibrium of fuzzy cellular neural networks with time delay in the leakage term under impulsive perturbations, J. Frankl. Inst. (2011)
- et al., Stability properties for Hopfield neural networks with delays and impulsive perturbations, Nonlinear Anal. Real World Appl. (2009)
- Further note on global exponential stability of uncertain cellular neural networks with variable delays, Appl. Math. Comput. (2007)
- et al., New approaches on stability criteria for neural networks with interval time-varying delays, Appl. Math. Comput. (2012)
- et al., New results for global stability of a class of neutral-type neural systems with time delays, Appl. Math. Comput. (2009)
- et al., Improved delay-dependent stability criterion for neural networks with time-varying delay, Appl. Math. Comput. (2011)
- et al., An LMI approach to stability analysis of stochastic high-order Markovian jumping neural networks with mixed time delays, Nonlinear Anal. Hybrid Syst. (2008)
- et al., Stochastic stability of discrete-time uncertain recurrent neural networks with Markovian jumping and time-varying delays, Math. Comput. Modell. (2011)
- Delay-dependent stability analysis for continuous time BAM neural networks with Markovian jumping parameters, Neural Netw.
- Robust exponential stability of Markovian jumping neural networks with mode-dependent delay, Commun. Nonlinear Sci. Numer. Simul.
- Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters, Commun. Nonlinear Sci. Numer. Simul.
- Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameters and mixed time delays, Neurocomputing
- A mode-dependent stability criterion for delayed discrete-time stochastic neural networks with Markovian jumping parameters, Neurocomputing
- Delay-dependent H∞ filtering for stochastic systems with Markovian switching and mixed mode-dependent delays, Nonlinear Anal. Hybrid Syst.
- Global asymptotic stability for cellular neural networks with discrete and distributed time varying delays, Chaos Solitons Fractals
- Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Netw.
M. Syed Ali graduated from the Department of Mathematics of Gobi Arts and Science College affiliated to Bharathiar University, Coimbatore in 2002. He received his post graduation in Mathematics from Sri Ramakrishna Mission Vidyalaya College of Arts and Science affiliated to Bharathiar University, Coimbatore, Tamilnadu, India, in 2005. He was awarded Master of Philosophy in 2006 in the field of Mathematics with specialized area of Numerical Analysis from Gandhigram Rural University Gandhigram, India. He was conferred with Doctor of Philosophy in 2010 in the field of Mathematics specialized in the area of Fuzzy neural networks in Gandhigram Rural University, Gandhigram, India. He was selected as a Post Doctoral Fellow in the year 2010 for promoting his research in the field of Mathematics at Bharathidasan University, Trichy, Tamilnadu and also worked there from November 2010 to February 2011. Since 2011 he is working as an Assistant Professor in Department of Mathematics, Thiruvalluvar University, Vellore, Tamilnadu, India. He has published 25 research papers in the various SCI journals holding impact factors. He has also published research articles in national journals and international conference proceedings. He also serves as a reviewer for few SCI journals. His research interests include Stochastic Differential Equations, Dynamical systems, Fuzzy Neural Networks and Cryptography.
☆ The work was supported by NBHM Project Grant no. 2/48(10)/2011-RD-II/865.