Elsevier

Neurocomputing

Volume 74, Issue 4, January 2011, Pages 638-645
Global exponential stability in Lagrange sense for neutral type recurrent neural networks

https://doi.org/10.1016/j.neucom.2010.10.001

Abstract

In this paper, the global exponential stability in the Lagrange sense of continuous neutral-type recurrent neural networks (NRNNs) with multiple time delays is studied. Three different types of activation functions are considered: general bounded activation functions and two types of sigmoid activation functions. By constructing appropriate Lyapunov functions, some easily verifiable criteria for the ultimate boundedness and global exponential attractivity of NRNNs are obtained. These results can be applied to monostable and multistable neural networks, as well as to chaos control and chaos synchronization.

Introduction

Recurrent neural networks (RNNs) have found many applications since the pioneering work of Hopfield [11]. In employing RNNs to solve optimization, control, or signal processing problems, one of the most desirable properties of RNNs is global stability in the Lyapunov sense. From a dynamical systems point of view, globally stable networks in the Lyapunov sense are monostable systems, which have a unique equilibrium attracting all trajectories asymptotically. A large body of research now exists on the global asymptotic stability of RNNs; we refer to [1], [5], [6], [13], [14], [15], [16], [19], [20], [21], [22], [23], [24], [26], [29], [36], [38], [39] and the references therein for detailed mathematical analysis of the global convergence of various neural network models. In many other applications, however, monostable neural networks have been found computationally restrictive, and multistable dynamics are essential for the neural computations desired. For example, in a winner-take-all network [31], [34], depending on the external input (or the initial value), only the neuron with the strongest input (or highest initial value) should remain active; this is possible only if there are multiple equilibria, some of them unstable. When a neural network is used as an associative memory or for pattern recognition, the existence of many equilibria is likewise necessary [4], [7], [11], [27]. In these applications, the neural networks are no longer globally stable, and more appropriate notions of stability are needed to deal with multistable systems.

Motivated by Yi and Tan [36], who take the view that boundedness, attractivity, and complete convergence are three basic properties of a multistable network, we study the first two properties in this paper for NRNNs. More specifically, we generalize the conventional notion of stability in the Lyapunov sense and study global exponential stability (GES) in the Lagrange sense for NRNNs with time delays. Our work extends previous results obtained by Liao and Wang [17], [18], [20]. By constructing appropriate Lyapunov-like functions, we provide easily verifiable criteria for the boundedness of the networks and the existence of globally exponentially attractive (GEA) sets. Three different types of neuron activation functions are considered. Because we make no assumptions about the number of equilibria, our results can be used to analyze both monostable and multistable networks. Once a network is proved to be globally exponentially stable in the Lagrange sense, one needs only to study its dynamics inside the (compact) attractive set, where stable and unstable equilibria, periodic orbits, or even chaotic attractors may coexist. These richer dynamics are essential for the networks to possess the multiple stable patterns that are often important in practical applications [2], [3], [8], [9], [25], [40], [41], [42]. In the special case where the compact attractive set is a single point, the network is globally stable in the Lyapunov sense and the attractive set is the unique equilibrium point.

It is worth mentioning that, unlike Lyapunov stability, Lagrange stability refers to the stability of the system as a whole rather than the stability of individual equilibrium points. The boundedness of solutions together with the existence of globally attractive sets leads to a total-system concept of stability: (asymptotic) Lagrange stability. Our paper extends this concept to neural networks and regards it as a suitable framework for assessing the stability of multistable neural networks and their mathematical models.

We note that Lagrange stability has long been studied in theory and applications of dynamical systems. In [12], [37], LaSalle and Yoshizawa apply Lyapunov functions to study Lagrange stability. In [30], Rekasius considers asymptotic stability in Lagrange sense for nonlinear feedback control systems. In [32], Lagrange stability is discussed by Thornton and Mulholland as a useful concept for determining the stability of ecological systems. More recently, Passino and Burgess [28] adapt the concept of Lagrange stability to investigate discrete event systems, and Hassibi et al. [10] study the Lagrange stability of hybrid dynamical systems. See also [33], [35] for recent results on Lagrange stability for pendulum-like systems.

In this paper, we further extend and generalize the results of our previous works [17], [18], [20] from RNNs to neutral-type RNNs. The paper is organized as follows. In Section 2, we define the notion of GES in the Lagrange sense and give a lemma that will be used in the proofs of all results in the paper. Section 3 presents the main results on GES for NRNNs with three different types of activation functions. An application is given in Section 4. Finally, we give an example to illustrate the application of our results in Section 5.


Preliminaries

Consider the following NRNN with multiple time delays:
$$c_i\frac{dx_i(t)}{dt}=-d_ix_i(t)+\sum_{j=1}^{n}\tilde a_{ij}f_j(x_j(t))+\sum_{j=1}^{n}b_{ij}g_j\big(x_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}c_{ij}h_j\big(\dot x_j(t-\tau_{ij}(t))\big)+I_i(t),$$
$$x_i(t)=\varphi_i(t),\qquad \dot x_i(t)=\psi_i(t),\qquad t\in[-\tau,0],$$
where $i=1,2,\dots,n$; $c_i>0$ and $d_i>0$ are constants; $x_i(t)$ is the state variable of the $i$-th neuron; $I_i(t)\in C(\mathbb{R},\mathbb{R})$ is a variable input (bias); $\tilde a_{ij}$, $b_{ij}$ and $c_{ij}$ are connection weights from neuron $i$ to neuron $j$; $\tau_{ij}(t)$ are variable time delays; $f_i\in C(\mathbb{R},\mathbb{R})$ is the $i$-th neuron activation function; $g_i$, $h_i$ are bounded
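As a concrete illustration of a system of this form, the sketch below integrates a small NRNN with a forward-Euler scheme, keeping a stored derivative history to evaluate the neutral term. All parameter values, the single constant delay, the constant pre-history, and the zero initial derivative history are assumptions made for this sketch, not data from the paper.

```python
import numpy as np

# Illustrative forward-Euler integration of a 2-neuron instance of the NRNN
# model above. Every parameter value here is an assumed, hypothetical choice.
n = 2
c = np.array([1.0, 1.0])                  # c_i > 0
d = np.array([2.0, 3.0])                  # d_i > 0
A = np.array([[0.5, -0.3], [0.2, 0.4]])   # \tilde{a}_ij
B = np.array([[0.1, 0.2], [-0.1, 0.3]])   # b_ij
C = np.array([[0.05, 0.0], [0.0, 0.05]])  # c_ij (neutral-term weights)
I = np.array([0.2, -0.1])                 # constant inputs I_i
f = np.tanh                               # bounded, x f(x) > 0 for x != 0
g = lambda x: 2.0 * x / (1.0 + x**2)      # bounded, |g| <= 1
h = np.arctan                             # bounded, |h| <= pi/2

tau, dt, T = 0.5, 0.001, 10.0             # one constant delay for simplicity
steps, lag = int(T / dt), int(tau / dt)
x = np.zeros((steps + 1, n))
dx = np.zeros((steps + 1, n))             # derivative history for the neutral term
x[0] = np.array([1.0, -2.0])              # initial state

for k in range(steps):
    kd = max(k - lag, 0)                  # index of the delayed time t - tau
    rhs = (-d * x[k] + A @ f(x[k]) + B @ g(x[kd]) + C @ h(dx[kd]) + I) / c
    dx[k + 1] = rhs
    x[k + 1] = x[k] + dt * rhs

print(np.abs(x).max())                    # the trajectory remains bounded
```

With bounded activations and $d_i>0$, the linear term $-d_ix_i$ dominates for large $|x_i|$, so the simulated trajectory stays in a bounded set, consistent with Lagrange stability.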

$f(\cdot),\,g(\cdot),\,h(\cdot)\in B$

 

Theorem 1

If the activation functions $f(\cdot)$, $g(\cdot)$, $h(\cdot)$ are all bounded, i.e., $|f_i(\cdot)|\le k_i$, $|g_i(\cdot)|\le l_i$, $|h_i(\cdot)|\le\bar l_i$, where $k_i$, $l_i$, $\bar l_i$ are all positive constants, and $x_if_i(x_i)>0$ for $x_i\ne 0$, $f_i(0)=0$, $i=1,2,\dots,n$, then NRNN (1) is globally exponentially stable in the Lagrange sense and $\Omega_1$, $\Omega_2$, $\Omega_3$ are all GES sets; the set $\Omega=\Omega_1\cap\Omega_2\cap\Omega_3$ is a tighter GES set for NRNN (1), where

$$M_i=\frac{1}{2}\sum_{j=1}^{n}\left(|\tilde a_{ij}|k_j+|b_{ij}|l_j+|c_{ij}|\bar l_j\right)+|I_i|,$$
$$\Omega_1=\left\{x\;\middle|\;\sum_{i=1}^{n}c_ix_i^2(t)\le\frac{\sum_{i=1}^{n}M_i^2/\varepsilon_i}{\min_{1\le i\le n}\,(d_i-\varepsilon_i)/c_i},\;0<\varepsilon_i<d_i\right\},$$
$$\Omega_2=\left\{x\;\middle|\;|x_i|\le\frac{2M_i}{d_i},\;i=1,2,\dots,n\right\},\qquad
\Omega_3=\left\{x\;\middle|\;\sum_{i=1}^{n}c_i|x_i|\le\frac{\sum_{i=1}^{n}2M_i}{\min_{1\le i\le n}\,d_i/c_i}\right\}.$$
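Under the boundedness assumptions of Theorem 1, the half-widths $2M_i/d_i$ defining $\Omega_2$ are easy to evaluate numerically. The sketch below does this for a hypothetical two-neuron network, using the activation bounds $k_i=l_i=1$ and $\bar l_i=\pi/2$ (appropriate for $\tanh$, $2x/(1+x^2)$, and $\arctan$); all weights and inputs are assumed values, not data from the paper.

```python
import numpy as np

# Evaluate M_i and the Omega_2 bound |x_i| <= 2 M_i / d_i for an assumed,
# hypothetical 2-neuron NRNN.
d = np.array([2.0, 3.0])                  # d_i > 0
A = np.array([[0.5, -0.3], [0.2, 0.4]])   # \tilde{a}_ij
B = np.array([[0.1, 0.2], [-0.1, 0.3]])   # b_ij
C = np.array([[0.05, 0.0], [0.0, 0.05]])  # c_ij
I_abs = np.array([0.2, 0.1])              # sup_t |I_i(t)|

k = np.array([1.0, 1.0])                  # |f_i| <= k_i   (tanh)
l = np.array([1.0, 1.0])                  # |g_i| <= l_i   (2x/(1+x^2))
lbar = np.full(2, np.pi / 2)              # |h_i| <= pi/2  (arctan)

M = 0.5 * (np.abs(A) @ k + np.abs(B) @ l + np.abs(C) @ lbar) + I_abs
radius = 2 * M / d                        # per-coordinate half-width of Omega_2
print(radius)
```

For these assumed parameters each coordinate of the attractive box is well under 1, illustrating how small delays and weights yield a tight ultimate bound.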

An application

The equilibrium points, periodic solutions and almost periodic solutions of Eqs. (1), (2) or (3) are all positive invariant sets. Therefore, as a direct application of the results obtained in the previous sections, we have the following theorem.

Theorem 5

Let $\Omega$ be a GES and positive invariant set of Eq. (1), (2) or (3). Then outside $\Omega$ there are no bounded positive invariant sets that do not intersect $\Omega$.

Proof

Suppose, for contradiction, that $Q$ is a positive invariant set outside the set $\Omega$. Thus, $\inf_{\bar X\in\Omega,\,X\in Q}\|X-\bar X\|$

Example

 

Example 1

Consider a 3-dimensional NRNN

$$\begin{pmatrix}c_1\dot x_1(t)\\ c_2\dot x_2(t)\\ c_3\dot x_3(t)\end{pmatrix}
=-\begin{pmatrix}d_1x_1(t)\\ d_2x_2(t)\\ d_3x_3(t)\end{pmatrix}
+\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{12}&a_{22}&a_{23}\\ a_{13}&a_{23}&a_{33}\end{pmatrix}
\begin{pmatrix}f(x_1(t))\\ f(x_2(t))\\ f(x_3(t))\end{pmatrix}
+\begin{pmatrix}b_{11}&b_{12}&b_{13}\\ b_{21}&b_{22}&b_{23}\\ b_{31}&b_{32}&b_{33}\end{pmatrix}
\begin{pmatrix}g(x_1(t-\tau_1(t)))\\ g(x_2(t-\tau_2(t)))\\ g(x_3(t-\tau_3(t)))\end{pmatrix}
+\begin{pmatrix}c_{11}&c_{12}&c_{13}\\ c_{21}&c_{22}&c_{23}\\ c_{31}&c_{32}&c_{33}\end{pmatrix}
\begin{pmatrix}h(\dot x_1(t-\tilde\tau_1(t)))\\ h(\dot x_2(t-\tilde\tau_2(t)))\\ h(\dot x_3(t-\tilde\tau_3(t)))\end{pmatrix}
+\begin{pmatrix}I_1(t)\\ I_2(t)\\ I_3(t)\end{pmatrix},$$
where $f(x)=\dfrac{e^x-e^{-x}}{e^x+e^{-x}}$, $g(x)=\dfrac{2x}{1+x^2}$, $h(x)=\arctan x$; $a_{ij}$, $b_{ij}$, $c_{ij}$ are all constants, $0<\tau_i\le T$, and $a_{ii}>0$. Take
$$P=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&a_{33}\end{pmatrix};$$
then
$$\bar A=\begin{pmatrix}\bar a_{11}&\bar a_{12}&\bar a_{13}\\ \bar a_{12}&\bar a_{22}&\bar a_{23}\\ \bar a_{13}&\bar a_{23}&0\end{pmatrix},\qquad \bar A+\bar A^{T}\le 0.$$
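The matrix condition $\bar A+\bar A^{T}\le 0$ (negative semidefiniteness of the symmetric part) can be verified numerically by checking that its largest eigenvalue is nonpositive. The matrix below is a hypothetical stand-in chosen to satisfy the condition, not the paper's data; note that negative semidefiniteness forces the off-diagonal entries of any zero-diagonal row to vanish, hence the zero third row and column in this stand-in.

```python
import numpy as np

# Hypothetical \bar{A} chosen so that \bar{A} + \bar{A}^T is negative
# semidefinite (illustrative values only).
A_bar = np.array([[-1.0,  0.4, 0.0],
                  [ 0.4, -0.8, 0.0],
                  [ 0.0,  0.0, 0.0]])
S = A_bar + A_bar.T
eigs = np.linalg.eigvalsh(S)       # eigenvalues in ascending order
print(eigs.max() <= 1e-9)          # nonpositive up to round-off
```

The same two-line check (symmetrize, then inspect the largest eigenvalue from `eigvalsh`) applies to any candidate connection matrix.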

According to Theorem

Acknowledgement

This work was supported by the Natural Science Foundation of China under Grants 60874110 and 60974021.

Qi Luo received his Ph.D. degree in systems engineering from South China University of Technology, Guangzhou, China, in 2004. His current research interests include stability of stochastic systems and automatic control.

He is a Professor and Ph.D. advisor in the College of Information and Control, Nanjing University of Information Science and Technology, China.

References (42)

  • C.Y. Cheng et al.

    Multistability in recurrent neural networks

    SIAM J. Appl. Math.

    (2006)
  • L.O. Chua et al.

    Cellular neural networks: theory

    IEEE Trans. Circuits Syst.

    (1988)
  • M. Forti et al.

    New conditions for global stability of neural networks with application to linear and quadratic programming problems

    IEEE Trans. Circuits Syst. I

    (1995)
  • J. Foss et al.

    Multistability and delayed recurrent loops

    Phys. Rev. Lett.

    (1996)
  • R. Hahnloser

On the piecewise analysis of linear threshold neurons

    Neural Networks

    (1998)
  • R. Hahnloser et al.

    Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit

    Nature

    (2000)
  • A. Hassibi, S.P. Boyd, J.P. How, A class of Lyapunov functionals for analyzing hybrid dynamical systems, in:...
  • J.J. Hopfield

    Neurons with graded response have collective computational properties like those of two-state neurons

    Proc. Natl. Acad. Sci.

    (1984)
  • J. LaSalle

    Some extensions of Lyapunov's second method

    IRE Trans. Circuit Theory

    (1960)
  • X.X. Liao

    Stability of Hopfield-type neural networks

    Sci. China Ser. A

    (1993)
  • X.X. Liao

Mathematical theory of cellular neural networks (I)

    Sci. China Ser. A

    (1994)

    Zhigang Zeng received his B.S. degree in mathematics from Hubei Normal University, Huangshi, China, in 1993, his M.S. degree in ecological mathematics from Hubei University, Hubei, China, in 1996, and his Ph.D. degree in systems analysis and integration from the Huazhong University of Science and Technology, Wuhan, China, in 2003.

    He is a Professor in the Department of Control Science and Engineering, Huazhong University of Science and Technology, China. His current research interests include neural networks and stability analysis of dynamic systems.

    Xiaoxin Liao received his B.S. degree in mathematics from Wuhan University, China, in 1963. His current research interests include stability of dynamical systems, neural networks, chaotic control, synchronization, and automatic control.

    He is a Professor and Ph.D. advisor in the Department of Control Science and Engineering, Huazhong University of Science and Technology, China. He serves on the editorial committee of Applied Mathematics.
