
Neurocomputing

Volume 118, 22 October 2013, Pages 322-328

Letters
Global asymptotic stability of delayed neural networks with discontinuous neuron activations

https://doi.org/10.1016/j.neucom.2013.02.021

Abstract

In this paper, we investigate a class of delayed neural networks with discontinuous activations, which are not assumed to be bounded or nondecreasing. Conditions for the existence of an equilibrium point are established by means of the Leray–Schauder theorem for set-valued maps. Then, the existence of solutions is proved based on the viability theorem. Furthermore, the global asymptotic stability of the networks is studied using Lyapunov–Krasovskii stability theory. The global asymptotic stability results are stated in terms of linear matrix inequalities. The obtained results extend previous works on the global stability of delayed neural networks with discontinuous activations.

Introduction

In the past decade, neural networks have found extensive applications in optimization, classification, solving nonlinear algebraic equations, signal and image processing, pattern recognition, automatic control, associative memories and other areas [1], [2], [3], [4]. Therefore, a great deal of attention has been paid to the study of neural networks recently, and various issues for neural networks have been investigated [5], [6], [7]. It is well known that implementations of neural networks depend heavily on the dynamical behaviors of the networks. However, time delays often occur in the process of information storage and transmission, and they may induce undesirable dynamical behaviors such as oscillation and instability. Hence, it is necessary to study the dynamical behaviors of delayed neural networks, and the stability of delayed neural networks has received considerable interest [8], [9], [10].

All of the literature cited above [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], as well as other articles such as [11], [12], [13], [14], is based on the assumption that the activation functions are continuous or even Lipschitz continuous. Nevertheless, it is well known that neural networks with discontinuous activations may be better suited for the design and application of artificial neural networks. For example, in the classical Hopfield neural networks [15], the standard assumption is that the activations are used in the high-gain limit, where they closely approach a discontinuous hard-comparator function. Therefore, the dynamical behavior of discontinuous neural networks has attracted much attention and many results have been presented [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27]. To our knowledge, Forti and Nistri [16] were the first to study the global stability of a neural network modeled by a differential equation with discontinuous activation functions, based on Lyapunov diagonally stable matrices and the construction of suitable Lyapunov functions, and they proposed several stability conditions. Afterwards, a more general delayed neural network model with discontinuous activation functions was proposed in [17], where the global exponential stability and global convergence of the system with unbounded activation functions were analyzed. Since the results of many articles rely on the assumption that the activation functions are nondecreasing, Wang, Huang and Guo [23] discussed the dynamical behaviors of neural networks with discontinuous activations without assuming boundedness or monotonicity of the neuron activations, although the networks they considered are without delay.

Motivated by the above discussions, we discuss the global asymptotic stability of delayed neural networks with discontinuous activations, where the activation functions are not assumed to be bounded and/or nondecreasing. Some sufficient conditions for the global stability of delayed neural networks are established, which extend previous works on delayed neural networks with discontinuous activations.

The structure of this paper is outlined as follows. Because the model involves differential equations with discontinuous right-hand sides, Section 2 introduces the definitions and lemmas that we need. We discuss the existence of equilibrium points and solutions in Section 3. In Section 4, sufficient conditions for global asymptotic stability are established. Some numerical examples are given in Section 5 to show the effectiveness of the proposed results. Finally, we conclude the paper in Section 6.


Model descriptions and preliminaries

First of all, some mathematical notation used throughout this paper is presented. $\mathbb{R}$ denotes the set of real numbers. For $x\in\mathbb{R}^n$, $x^T$ denotes its transpose. The vector norm is defined as $\|x\|=\sqrt{x^Tx}$. $\mathbb{R}^{n\times m}$ denotes the set of $n\times m$ real matrices. For $A\in\mathbb{R}^{n\times n}$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximum and minimum eigenvalues of $A$, and the spectral norm used is $\|A\|=(\lambda_{\max}(A^TA))^{1/2}$.

In this paper, we consider a delayed neural network model with discontinuous neuron activations described by the following equation: $\dot{x}(t)=$ …
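
The right-hand side of the equation is truncated in this snippet. Judging from the quantities that appear later (the matrices $C$, $A$, $B$, the delay $\tau$ and the activation $g$), system (1) is presumably of the standard delayed Hopfield-type form sketched below; this is a hedged reconstruction rather than a quotation from the paper, and the constant input term $I$ is an assumption:
$\dot{x}(t) = -Cx(t) + A\,g(x(t)) + B\,g(x(t-\tau)) + I, \qquad C=\mathrm{diag}(c_1,\dots,c_n),\ c_i>0,$
understood in the Filippov sense, i.e. with $g$ replaced at points of discontinuity by its set-valued regularization $K[g(\cdot)]$.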

Existence of equilibrium and viability

In this part, we first prove the existence of an equilibrium of system (1) by means of an equilibrium theorem. We begin with some necessary definitions concerning equilibrium theorems for set-valued maps.

Definition 8

$K$ is a convex subset of $\mathbb{R}^n$. The tangent cone $T_K(x)$ to $K$ at $x\in K$ is defined as $T_K(x)=\overline{\bigcup_{h>0}\frac{K-x}{h}}$, where the overline denotes the closure of the union.
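
As a simple illustration (ours, not from the paper): for $K=[0,\infty)\subset\mathbb{R}$ one has $T_K(0)=\overline{\bigcup_{h>0}\frac{[0,\infty)}{h}}=[0,\infty)$, whereas at any interior point $x>0$ the sets $(K-x)/h=[-x/h,\infty)$ sweep out all of $\mathbb{R}$ as $h\to 0^{+}$, so $T_K(x)=\mathbb{R}$.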

Lemma 2

Aubin and Frankowska [29]

Suppose $\Omega$ is a convex subset of $\mathbb{R}^n$ and the nonempty convex set-valued map $F:[0,1]\times\Omega\to\mathbb{R}^n$ is upper semicontinuous. If the set-valued map $x\mapsto F(0,x)$ satisfies the tangential …

Global asymptotic stability and convergence

In this section, we discuss the global asymptotic stability. The corresponding results are presented below.

In view of Theorem 1, suppose that $x^*$ is an equilibrium point of (1) and $\gamma^*$ is the corresponding OEP. Let $y(t)=x(t)-x^*$; then the neural network (1) can be transformed as follows: $\dot{y}(t)=-Cy(t)+Af(y(t))+Bf(y(t-\tau))$, where $f(y)=(f_1(y_1),\dots,f_n(y_n))^T$ and $f_i(y_i)=g_i(y_i+x_i^*)-\gamma_i^*$, $\gamma^*\in K[g(x^*)]$. Obviously, there exists $\bar{\gamma}(t)\in K[f(y(t))]$ such that $\dot{y}(t)=-Cy(t)+A\bar{\gamma}(t)+B\bar{\gamma}(t-\tau)$. Then, under the …
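
For readability, the change of variables can be spelled out (this derivation is ours and assumes the presumed form of system (1) sketched in Section 2): since $x^*$ is an equilibrium, $0=-Cx^*+A\gamma^*+B\gamma^*+I$ for some $\gamma^*\in K[g(x^*)]$. Substituting $x(t)=y(t)+x^*$ into the model and subtracting this equilibrium relation gives
$\dot{y}(t)\in -Cy(t)+A\bigl(K[g(y(t)+x^*)]-\gamma^*\bigr)+B\bigl(K[g(y(t-\tau)+x^*)]-\gamma^*\bigr)=-Cy(t)+A\,K[f(y(t))]+B\,K[f(y(t-\tau))],$
which is exactly the transformed inclusion above, with $f_i(y_i)=g_i(y_i+x_i^*)-\gamma_i^*$.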

Example

Example 1

Consider a two-dimensional delayed neural network (1) with the following data: $A=\begin{pmatrix}1.5 & 0.05\\ 0.05 & 1.5\end{pmatrix}$, $B=\begin{pmatrix}0.8 & 0.1\\ 0.1 & 0.8\end{pmatrix}$, $g_i(s)=\begin{cases}s+1, & s>0,\\ s-1, & s\le 0,\end{cases}\ i=1,2$, $C=\mathrm{diag}(1.5,1.5)$, $\tau=5$. It is clear that the activation function $g(x)=(g_1(x_1),g_2(x_2))^T$ is discontinuous and unbounded. Now take $L_i=1$ and $P=\mathrm{diag}(1,1)$; then $P_i=1$, $\bar{\lambda}=1.4$, $\|C^{-1}(A+B)\|=0.2278$, so we have $L_iP_i\,\bar{c}^{2}\,\|C^{-1}(A+B)\|^{2}\,\bar{\lambda}\,\underline{c}=0.2441<\frac{1}{4}$. Thus, Assumptions 1–4 hold. Using the Matlab LMI and Control Toolbox, we can obtain the …
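
A short numerical sketch of this example is given below (ours, written in Python/NumPy). It assumes the matrices exactly as transcribed above, where minus signs may have been lost in extraction, the reconstructed form of the scalar condition, and $\bar{c}=\underline{c}=1.5$ taken from $C$; consequently, the printed numbers need not reproduce the paper's values of 0.2278 and 0.2441.

import numpy as np

# Data of Example 1 as transcribed (entry signs are an assumption: minus signs
# may have been lost when the matrices were extracted).
A = np.array([[1.5, 0.05], [0.05, 1.5]])
B = np.array([[0.8, 0.1], [0.1, 0.8]])
C = np.diag([1.5, 1.5])
tau = 5.0
L = np.array([1.0, 1.0])      # growth constants L_i = 1
P = np.diag([1.0, 1.0])       # P = diag(1, 1)
lam_bar = 1.4                 # \bar{\lambda} as reported in the example
c_bar = c_under = 1.5         # assumption: \bar{c} = \underline{c} = 1.5 from C

def g(s):
    # Discontinuous, unbounded activation: g_i(s) = s + 1 if s > 0, else s - 1.
    return np.where(s > 0, s + 1.0, s - 1.0)

# Reconstructed scalar condition: L_i P_i cbar^2 ||C^{-1}(A+B)||^2 lam_bar c_under < 1/4.
norm_CAB = np.linalg.norm(np.linalg.inv(C) @ (A + B), 2)   # spectral norm
cond = L[0] * P[0, 0] * c_bar**2 * norm_CAB**2 * lam_bar * c_under
print(f"||C^-1(A+B)|| = {norm_CAB:.4f}, condition value = {cond:.4f} (must be < 0.25)")

# Forward Euler integration of x'(t) = -C x(t) + A g(x(t)) + B g(x(t - tau))
# with a constant initial history; with the transcribed signs the trajectory
# need not converge as reported in the paper.
h = 0.001
history = [np.array([0.5, -0.3])] * (int(tau / h) + 1)
x = history[-1].copy()
for _ in range(int(50.0 / h)):
    x = x + h * (-C @ x + A @ g(x) + B @ g(history[0]))
    history.append(x.copy())
    history.pop(0)
print("state after 50 time units:", x)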

Conclusion

This paper studies the global asymptotic stability of neural networks with discontinuous activation functions, where the activations may be unbounded and/or nonmonotonic functions. Such activation functions arise widely in practice. The generalized Lyapunov–Krasovskii stability theory has been applied to prove the results, based on Filippov theory. The criteria are described in the form of LMIs. Some numerical examples are presented to show the effectiveness of the results.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grants 60974021 and 61125303, the 973 Program of China under Grant 2011CB710606, and the Fund for Distinguished Young Scholars of Hubei Province under Grant 2010CDA081.


References (33)

  • Z.W. Cai et al., On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions, Neural Networks (2012)
  • J.F. Wang et al., Dynamical behavior of delayed Hopfield neural networks with discontinuous activations, Appl. Math. Model. (2009)
  • E. Kaslika et al., Impulsive hybrid discrete-time Hopfield neural networks with delays and multistability analysis, Neural Networks (2011)
  • Y. Shen et al., Almost sure exponential stability of recurrent neural networks with Markovian switching, IEEE Trans. Neural Networks (2009)
  • Y.R. Liu et al., Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays, IEEE Trans. Neural Networks (2009)
  • Z.W. Liu et al., Novel stability analysis for recurrent neural networks with multiple delays via line integral-type L–K functional, IEEE Trans. Neural Networks (2010)

    Jian Xiao received his BS degree from Changchun University, Changchun, China, and his MS degree from Guangxi University, Nanning, China, in 2007 and 2010, respectively. He is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His current research interests include stability theory, artificial neural networks and switching control.

    Zhigang Zeng received his BS degree from Hubei Normal University, Huangshi, China, and his MS degree from Hubei University, Wuhan, China, in 1993 and 1996, respectively, and his PhD degree from Huazhong University of Science and Technology, Wuhan, China, in 2003.

    He is a professor and PhD advisor in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His current research interests include neural networks, switched systems, computational intelligence, stability analysis of dynamic systems, pattern recognition and associative memories.

    Wenwen Shen received her BS degree from Wuhan University of Technology, Wuhan, China, in 2009. She is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. Her current research interests include stability theory, artificial neural networks and switching control.
