Global asymptotic stability of delayed neural networks with discontinuous neuron activations
Introduction
In the past decade, neural networks have found extensive applications in optimization, classification, solving nonlinear algebraic equations, signal and image processing, pattern recognition, automatic control, associative memories and other areas [1], [2], [3], [4]. Therefore, a great deal of attention has been paid to the study of neural networks recently, and various issues for neural networks have been investigated [5], [6], [7]. It is well known that implementations of neural networks depend heavily on the dynamical behaviors of the networks. However, time delays often occur in the processing of information storage and transmission, and they may induce undesirable dynamical behaviors of the networks, for example oscillation and instability. Hence, it is necessary to study the dynamical behaviors of delayed neural networks, and the stability of delayed neural networks has received considerable interest [8], [9], [10].
All the above works [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], and some other articles such as [11], [12], [13], [14], assume that the activation functions are continuous or even Lipschitz continuous. Nevertheless, it is well known that neural networks with discontinuous activations may be better suited for designing and applying artificial neural networks. For example, in the classical Hopfield neural networks [15], the standard assumption is that the activations are used in the high-gain limit, where they closely approach a discontinuous hard-comparator function. Therefore, the dynamical behavior of discontinuous neural networks has attracted much attention and many results have been presented [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27]. To our knowledge, Forti and Nistri [16] were the first to study the global stability of a neural network modeled by a differential equation with discontinuous activation functions, based on a Lyapunov diagonally stable matrix and the construction of a suitable Lyapunov function, and some stability conditions were proposed. Afterwards, a more general delayed neural network model with discontinuous activation functions was proposed in [17], where the author analyzed global exponential stability and global convergence of the system with unbounded activation functions. Since the results in many articles rest on the assumption that the activation functions are nondecreasing, Wang, Huang and Guo [23] discussed the dynamical behaviors of neural networks with discontinuous activations without assuming boundedness or monotonicity of the neuron activations, although the networks they considered are without delay.
Motivated by the above discussions, we discuss the global asymptotic stability of delayed neural networks with discontinuous activations, where the activation functions are not assumed to be bounded and/or nondecreasing. Some sufficient conditions for the global stability of delayed neural networks are established, which extend previous works on delayed neural networks with discontinuous activations.
The structure of this paper is outlined as follows. Because we consider differential equations with discontinuous right-hand sides, in Section 2 we introduce the definitions and lemmas that we need. We discuss the existence of equilibrium points and solutions in Section 3. In Section 4, sufficient conditions for global asymptotic stability are established. Some numerical examples are given in Section 5 to show the effectiveness of the proposed results. Finally, we conclude the paper in Section 6.
Section snippets
Model descriptions and preliminaries
First of all, some mathematical notation used throughout this paper is presented. $\mathbb{R}$ denotes the set of real numbers and $\mathbb{R}^n$ the $n$-dimensional real vector space. For $x \in \mathbb{R}^n$, $x^T$ denotes its transpose, and the vector norm is defined as $\|x\| = \sqrt{x^T x}$. $\mathbb{R}^{n \times n}$ is the set of $n \times n$ real matrices. For $A \in \mathbb{R}^{n \times n}$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximum and the minimum eigenvalues of $A$, and the spectral norm used is $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$.
In this paper, we consider a delayed neural network model with discontinuous neuron activations, described by the following equation:
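The model equation itself did not survive extraction. As a hedged sketch only: delayed neural network models with discontinuous activations of the kind studied here commonly take the form

```latex
\dot{x}(t) = -Dx(t) + Af\bigl(x(t)\bigr) + Bf\bigl(x(t-\tau)\bigr) + u, \qquad t \ge 0,
```

where $D = \mathrm{diag}(d_1,\dots,d_n)$ with $d_i > 0$ is the self-inhibition matrix, $A$ and $B$ are the connection and delayed connection weight matrices, $\tau > 0$ is the transmission delay, $u$ is a constant external input, and $f$ is the (possibly discontinuous, unbounded, nonmonotonic) activation; whether Eq. (1) matches this form exactly cannot be confirmed from the snippet.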
Existence of equilibrium and viability
In this part, we will first prove the existence of an equilibrium of system (1) by an equilibrium theorem. We begin with some necessary definitions concerning equilibrium theorems for set-valued maps. Definition 8: Let K be a convex subset of $\mathbb{R}^n$. The tangent cone $T_K(x)$ to K at $x \in K$ is defined as the closure of the union set … Lemma 2: Suppose K is a convex subset of $\mathbb{R}^n$ and a nonempty convex set-valued map is upper semicontinuous. If the set-valued map satisfies the tangential condition … (Aubin and Frankowska [29])
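The displayed formula for the tangent cone was lost in extraction. For the record, the standard definition for a convex set $K \subseteq \mathbb{R}^n$ (in the sense of Aubin and Frankowska [29]), which is presumably the "closure of the union set" referred to in the snippet, is

```latex
T_K(x) \;=\; \overline{\bigcup_{h>0} \frac{K - x}{h}},
```

i.e., the closure of the union over $h > 0$ of the scaled difference sets $(K - x)/h$.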
Global asymptotic stability and convergence
In this section, we will discuss global asymptotic stability. The corresponding results are presented below.
Note that, in view of Theorem 1, suppose that x⁎ is an equilibrium point of (1) and that γ⁎ is the corresponding OEP. Letting y(t) = x(t) − x⁎, the neural network (1) can be transformed as follows: … Then, under the
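The transformed system did not survive extraction. As a hedged reconstruction, assuming the model takes the standard form $\dot{x} = -Dx + Af(x(t)) + Bf(x(t-\tau)) + u$: with $x^{*}$ an equilibrium and $\gamma^{*}$ the corresponding output equilibrium point (OEP), the translation $y(t) = x(t) - x^{*}$ gives

```latex
\dot{y}(t) = -Dy(t) + A\varphi\bigl(y(t)\bigr) + B\varphi\bigl(y(t-\tau)\bigr),
\qquad \varphi(y) = f(y + x^{*}) - \gamma^{*},
```

so that the shifted activation satisfies $0 \in K[\varphi](0)$ (in the Filippov set-valued sense), and the global asymptotic stability of $x^{*}$ for (1) reduces to that of the origin for the transformed system.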
Example
Example 1: Consider a two-dimensional delayed neural network (1) described by the following: … It is clear that the activation function is discontinuous and unbounded. Now take … ; then … , so we have … Thus, Assumption 1, Assumption 2, Assumption 3 and Assumption 4 hold. Using the Matlab LMI Control Toolbox, we can verify that the
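The concrete matrices and activation of Example 1 did not survive extraction, so the following is only an illustrative sketch, not the paper's example: a forward-Euler simulation of a two-dimensional delayed network $\dot{x} = -Dx + Ag(x(t)) + Bg(x(t-\tau))$ with the discontinuous, unbounded activation $g(s) = s + \mathrm{sgn}(s)$. All parameter values are hypothetical, chosen so that the self-inhibition dominates the connection weights; the trajectory then settles at the equilibrium obtained by solving the fixed-point equation on the positive orthant, where $g(x) = x + (1,1)^T$.

```python
def sgn(s):
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

def g(x):  # discontinuous, unbounded activation g(s) = s + sgn(s)
    return [xi + sgn(xi) for xi in x]

def matvec(M, v):  # 2x2 matrix-vector product
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

D = 5.0                                  # self-inhibition (hypothetical)
A = [[0.2, -0.1], [0.1, 0.2]]            # instantaneous weights (hypothetical)
B = [[0.1,  0.0], [0.0, 0.1]]            # delayed weights (hypothetical)
tau, dt = 0.1, 0.001
lag = int(tau / dt)

# On the positive orthant g(x) = x + (1,1)^T, so the equilibrium solves
# (D*I - A - B) x* = (A + B)(1,1)^T; solve the 2x2 system by Cramer's rule.
S = [[D - A[0][0] - B[0][0], -A[0][1] - B[0][1]],
     [-A[1][0] - B[1][0], D - A[1][1] - B[1][1]]]
rhs = matvec([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)],
             [1.0, 1.0])
det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
xstar = [(rhs[0]*S[1][1] - S[0][1]*rhs[1]) / det,
         (S[0][0]*rhs[1] - rhs[0]*S[1][0]) / det]

hist = [[1.0, 1.0] for _ in range(lag + 1)]  # constant initial history
x = hist[-1][:]
for _ in range(6000):                        # integrate up to t = 6
    xd = hist.pop(0)                         # delayed state x(t - tau)
    a, b = matvec(A, g(x)), matvec(B, g(xd))
    x = [x[i] + dt * (-D * x[i] + a[i] + b[i]) for i in range(2)]
    hist.append(x[:])

print(xstar, x)  # the trajectory settles at the computed equilibrium
```

In a real verification one would instead feed the stability LMI of the theorem to an SDP solver, as the paper does with Matlab; this sketch only illustrates the convergence behavior on hypothetical data.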
Conclusion
This paper studies the global asymptotic stability of neural networks with discontinuous activation functions, where the activations may be unbounded and/or nonmonotonic functions; such activation functions are common in practice. Generalized Lyapunov–Krasovskii stability theory has been applied to prove the results, based on Filippov theory. The criteria are expressed in the form of LMIs. Some numerical examples are provided to show the effectiveness of the results.
Acknowledgments
The work is supported by the Natural Science Foundation of China under Grants 60974021 and 61125303, the 973 Program of China under Grant 2011CB710606 and the Fund for Distinguished Young Scholars of Hubei Province under Grant 2010CDA081.
References (33)
- et al., A feedback neural network for solving convex constraint optimization problems, Appl. Math. Comput. (2008)
- et al., Global exponential stability in Lagrange sense for recurrent neural networks with time delays, Nonlinear Anal. Real World Appl. (2008)
- et al., Novel stability criterions of a new fuzzy cellular neural networks with time-varying delays, Neurocomputing (2009)
- et al., Complete stability of cellular neural networks with unbounded time-varying delays, Neural Networks (2012)
- et al., Stability analysis of static recurrent neural networks using delay-partitioning and projection, Neural Networks (2009)
- et al., Stability of delayed neural networks with time-varying impulses, Neural Networks (2012)
- et al., Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays, Neural Networks (2012)
- et al., Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations, Physica D (2006)
- et al., Dynamical behaviors of Cohen–Grossberg neural networks with discontinuous activation functions, Neural Networks (2005)
- et al., Global asymptotic stability of neural networks with discontinuous activations, Neural Networks (2009)
- On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions, Neural Networks
- Dynamical behavior of delayed Hopfield neural networks with discontinuous activations, Appl. Math. Model.
- Impulsive hybrid discrete-time Hopfield neural networks with delays and multistability analysis, Neural Networks
- Almost sure exponential stability of recurrent neural networks with Markovian switching, IEEE Trans. Neural Networks
- Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays, IEEE Trans. Neural Networks
- Novel stability analysis for recurrent neural networks with multiple delays via line integral-type L–K functional, IEEE Trans. Neural Networks
Jian Xiao received his BS degree from Changchun University, Changchun, China, and his MS degree from Guangxi University, Nanning, China, in 2007 and 2010, respectively. He is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His current research interests include stability theory, artificial neural networks and switching control.
Zhigang Zeng received his BS degree from Hubei Normal University, Huangshi, China, and his MS degree from Hubei University, Wuhan, China, in 1993 and 1996, respectively, and his PhD degree from Huazhong University of Science and Technology, Wuhan, China, in 2003.
He is a professor and PhD advisor in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His current research interests include neural networks, switched systems, computational intelligence, stability analysis of dynamic systems, pattern recognition and associative memories.
Wenwen Shen received her BS degree from Wuhan University of Technology, Wuhan, China, in 2009. She is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. Her current research interests include stability theory, artificial neural networks and switching control.