Multistability of recurrent neural networks with time-varying delays and nonincreasing activation function☆
Introduction
The dynamic behavior of equilibrium points has long been a central topic of Lyapunov theory, and it also lies at the heart of multistability. The study of multistability began with the analysis of multiple attractors, which is closely related to the dynamic behavior of equilibrium points. In recent years, with the development of multistability theory and neural networks, many efforts have been devoted to applications such as signal processing, associative memories, image processing, pattern recognition, and optimization problems (see [1], [2], [3], [4], [5], [6]). These applications rely heavily on the dynamical properties of the underlying neural network systems, so the dynamical behavior of neural networks deserves further in-depth study.
To establish the existence of an equilibrium in a selected subset, the existing literature usually adopts one of two methods, based respectively on the Banach fixed-point theorem and the Brouwer fixed-point theorem. The Banach fixed-point method imposes a strong constraint on the subregion: if the selected region is too large, the contraction condition may fail; if it is too small, the self-mapping condition may fail. With the Brouwer fixed-point method, on the other hand, the exact number of fixed points remains unclear. Therefore, to make up for these deficiencies, it is necessary to combine the two methods.
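The contraction-mapping route can be sketched numerically. The following is a minimal illustration with hypothetical parameters (the matrices D and A, the input u, and the tanh activation are our own choices, not the network studied in this paper): equilibria of x' = -Dx + Af(x) + u satisfy x = D^{-1}(Af(x) + u), and Picard iteration of that map converges on a region where ||D^{-1}A|| times the activation's Lipschitz constant is below 1.

```python
import numpy as np

# Hypothetical 2-neuron network: equilibria of x' = -D x + A f(x) + u
# satisfy x = D^{-1}(A f(x) + u). Picard iteration converges when the
# map is a contraction on the chosen subset (here ||D^{-1}A|| = 0.3 and
# tanh is 1-Lipschitz, so the contraction factor is 0.3 < 1).
D = np.diag([1.0, 1.0])
A = np.array([[0.2, 0.1], [0.1, 0.2]])   # small gains -> contraction
u = np.array([0.5, -0.5])
f = np.tanh                               # smooth saturated activation

x = np.zeros(2)
for _ in range(200):
    x = np.linalg.solve(D, A @ f(x) + u)  # x_{k+1} = D^{-1}(A f(x_k) + u)

residual = np.linalg.norm(-D @ x + A @ f(x) + u)
print(residual)  # essentially zero: x is the unique equilibrium here
```

With larger gains the contraction condition fails and the iteration need not converge, which is exactly the constraint on the subregion discussed above.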
Based on the Banach fixed-point theorem, it was shown in [7] that cellular neural networks can have 3^n memory patterns, 2^n of which are locally exponentially stable. To increase storage capacity, a class of discontinuous activation functions was introduced in [8], and the coexistence of locally stable equilibrium points was derived. The multistability of recurrent neural networks (RNNs) with activation functions symmetric about the origin in the phase plane was investigated in [9], where new multistability criteria were proposed. In [10], [11], [12], [13], [14], further multistability properties of neural networks were investigated, and sufficient conditions ensuring multistability were proposed.
Based on the Brouwer fixed-point theorem, the coexistence of multiple equilibrium points was investigated in [15] via the geometrical configuration of Fermi activation functions. To broaden the scope of application, a class of nonsmooth activation functions was introduced into high-order neural networks in [16], where the coexistence of 3^n equilibrium points and the local stability of 2^n of them were established. In addition, it was shown in [17] that neural networks with discontinuous non-monotonic piecewise linear activation functions can have at least 5^n equilibrium points, 3^n of which are locally stable while the others are unstable. For more results, see [18], [19], [20].
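The geometric counting behind results such as 3^n equilibria can be illustrated in one dimension: with a saturated piecewise-linear activation, the scalar equilibrium equation can have up to three roots per neuron, and n decoupled neurons then yield 3^n combinations. A small sketch with hypothetical parameters (the slope, gain, and input below are our own choices):

```python
import numpy as np

# One neuron x' = -x + a*f(x) + u with a saturated piecewise-linear
# activation f (slope 4 near the origin, constant outside). The line
# x - u can cross a*f(x) up to three times, giving three equilibria
# per neuron and hence 3^n for n independent neurons.
def f(x):
    return np.clip(4.0 * x, -1.0, 1.0)

a, u = 1.0, 0.05
g = lambda x: -x + a * f(x) + u           # equilibria are zeros of g
xs = np.linspace(-3.0, 3.0, 100000)
vals = g(xs)
roots = np.count_nonzero(vals[:-1] * vals[1:] < 0)  # sign changes
print(roots)  # 3 crossings -> 3 equilibria for this neuron
```

Two of the three crossings lie in the saturated regions and one in the linear region, matching the typical stable/stable/unstable pattern.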
In addition, neural networks with concave-convex characteristics were investigated in [21], [22], where a method based on another type of fixed-point theorem was given. In [23], the multistability of neural networks was addressed via a method based on dynamical analysis, which has been applied to study the convergence of equilibria. Of course, dynamical analysis combined with fixed-point methods can also handle the coexistence of multiple stable states, as in [24], [25], [26], [27], [28], [29]. The methods of [30], [31] are also very valuable for exploring multistability.
As far as we know, the type of activation function plays a crucial role in the multistability analysis of neural networks. In the above-mentioned and most other existing works, the activation functions applied to multistability analysis, whether nondecreasing or nonmonotonic, were mainly piecewise constant activation functions, sigmoidal activation functions, and nondecreasing saturated activation functions. For example, the multistability of neural networks with nondecreasing saturated activation functions was addressed in [10], where the activation function was defined in the following form:
Evolving from the above continuous activation functions, in this paper we introduce a general class of activation functions, defined as follows:
It is obvious that the proposed function is nonincreasing and odd. For suitable parameter values, function (2) coincides with function (1); that is, the two functions share the same mathematical form. One might therefore surmise that a method similar to that of [10] could be adopted. However, more conditions are needed to ensure the existence and uniqueness of an equilibrium of a given recurrent neural network in each subset, and the methods in the existing literature are apparently unable to achieve this goal. To further explore existence and uniqueness conditions for multiple equilibria, it is necessary to provide a new method.
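The conversion idea developed later in the paper can be previewed with a hypothetical nonincreasing odd activation (f(r) = -tanh(r) is our illustrative stand-in, not the function (2) defined above): writing f = -g with g nondecreasing turns the network with weight matrix A into an equivalent one with weight matrix -A, so both networks have exactly the same vector field and hence the same equilibria.

```python
import numpy as np

# Conversion sketch (hypothetical f): a nonincreasing odd activation
# f(r) = -tanh(r) satisfies f(r) = -g(r) for the nondecreasing odd
# g(r) = tanh(r), so A f(x) = (-A) g(x) and the RNN with (A, f) has
# the same dynamics as the RNN with (-A, g).
A = np.array([[0.3, -0.1], [0.2, 0.4]])
u = np.array([0.1, -0.2])
f = lambda x: -np.tanh(x)
g = np.tanh

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
lhs = -x + A @ f(x) + u          # vector field with nonincreasing f
rhs = -x + (-A) @ g(x) + u       # converted field with nondecreasing g
print(np.allclose(lhs, rhs))     # True: identical dynamics
```

This is why the number of equilibria is preserved under the conversion.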
Inspired by the above discussion, this paper is devoted to investigating the dynamical behaviors of recurrent neural networks, including the total number of equilibria, their locations, and their local stability. The main contributions of this paper are as follows:
- •
This paper provides a method for dealing with the multistability of RNNs with a nonincreasing activation function, in which the nonincreasing function is converted into a nondecreasing one; the number of equilibria is the same for both.
- •
A rigorous mathematical analysis of the dynamical behaviors of the equilibrium points is presented for RNNs with time-varying delays and a nonincreasing activation function. It is proved that such RNNs can have multiple equilibrium points, some of which are locally exponentially stable.
- •
Concerning the coexistence and local stability of multiple equilibrium points of RNNs, the effect of the division of the state space is revealed, leading to an important insight: many different conditions can ensure the coexistence of multiple equilibria.
The remaining sections are arranged as follows. Section 2 describes the system and preliminaries. Section 3 derives sufficient conditions for the equilibrium points of RNNs with a nondecreasing activation function. Section 4 extends these results to RNNs with a nonincreasing activation function. Numerical examples are presented in Section 5 to verify the effectiveness of our results. Finally, conclusions are drawn in Section 6.
Notations
Let $\mathcal{C}([-\tau,0],\mathbb{R}^n)$ be the Banach space of continuous functions mapping $[-\tau,0]$ into $\mathbb{R}^n$, with the norm defined by $\|\varphi\| = \sup_{-\tau \le s \le 0}|\varphi(s)|$, where $|\cdot|$ denotes a vector norm on $\mathbb{R}^n$.
For the given integer and the given constant , there exist , , , such that
Main results
In this part, we mainly consider a nondecreasing activation function. Obviously, the function f(r) is a nondecreasing odd function on R. We will explore new multistability conditions for the general delayed neural network system, and establish the corresponding new criteria and conclusions. Theorem 1 Suppose that the stated conditions hold; then the RNN has at least one equilibrium point in Λ, and the RNN has at least the corresponding number of equilibrium points
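A region-by-region search in the spirit of this result can be sketched for a single neuron (a hypothetical high-gain model, not the system treated in this paper): seeding one fixed-point iteration in each of the three subsets produced by the activation's geometry locates 3^1 = 3 equilibria, one per subset.

```python
import numpy as np

# Hypothetical scalar model x' = -x + 2*tanh(2*x): its equilibria are
# fixed points of x <- 2*tanh(2*x). One iteration is seeded in each of
# the three subsets (negative saturated, central, positive saturated).
seeds = [-3.0, 0.0, 3.0]
eqs = []
for x in seeds:
    for _ in range(100):
        x = 2.0 * np.tanh(2.0 * x)
    eqs.append(x)

print(np.round(eqs, 4))  # three distinct equilibria, one per subset
```

The outer two fixed points attract the iteration while the origin is a repelling fixed point of the dynamics, mirroring the typical stable/unstable split among the equilibria.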
Extension to RNNs
We now consider a nonincreasing activation function. Obviously, the function f(r) is a nonincreasing odd function on R. Owing to the symmetric structure of the function f(r), a lemma, a theorem, and a corollary similar to those above are presented as follows, and the similar proofs are omitted. Lemma 4 For the given integer, if the stated condition holds, then the conclusion follows. Theorem 5 Suppose that for
Numerical examples
In this section, two examples are provided to verify the effectiveness of the results obtained in the previous sections, where ϵ is chosen small enough that the required condition holds. Example 1 Consider RNN (5) with activation function (6)
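A multistability experiment of this kind can be reproduced in a few lines (a hypothetical scalar model with delayed self-feedback, not the parameters of Example 1): forward-Euler integration of a delay differential equation shows different constant initial histories settling to different stable equilibria.

```python
import numpy as np

# Hypothetical delayed neuron x'(t) = -x(t) + 2*tanh(2*x(t - tau)),
# integrated by forward Euler with the delay handled via a history
# buffer; constant initial histories of opposite sign settle to the
# two symmetric stable equilibria.
tau, dt, T = 1.0, 0.01, 40.0
d = int(tau / dt)                      # delay measured in steps

def simulate(x0):
    hist = [x0] * (d + 1)              # constant history on [-tau, 0]
    for _ in range(int(T / dt)):
        x = hist[-1]
        hist.append(x + dt * (-x + 2.0 * np.tanh(2.0 * hist[-1 - d])))
    return hist[-1]

print(simulate(0.5), simulate(-0.5))   # two different stable equilibria
```

Running the same simulation from a grid of initial histories would trace out the basins of attraction, which is how such multistability results are typically visualized.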
Conclusions
In this paper, the multistability of RNNs with time-varying delays has been investigated. By combining two kinds of fixed-point methods, new conditions have been proposed to ensure that some equilibrium points are exponentially stable. Since memory modes are often designed as stable equilibria (or attractors), the multistability of recurrent neural networks can be used to design associative memories and the corresponding domains of attraction. The method taken in this paper is general and less conservative compared with existing approaches.
Fanghai Zhang received the B.S. degree in Mathematics from Fuyang Normal University, Fuyang, China in 2012. He is currently pursuing the Ph.D. degree with the School of Automation, Huazhong University of Science and Technology, Wuhan, China. His current research interests include artificial neural networks, stability analysis of dynamical systems, switching control, and associative memories.
References (31)
- et al., Some multistability properties of bidirectional associative memory recurrent neural networks with unsaturating piecewise linear transfer functions, Neurocomputing (2009).
- et al., Event-triggered state estimation for discrete-time stochastic genetic regulatory networks with Markovian jumping parameters and time-varying delays, Neurocomputing (2016).
- et al., Global robust stability analysis of uncertain neural networks with time varying delays, Neurocomputing (2015).
- et al., Stability of bidirectional associative memory neural networks with Markov switching via ergodic method and the law of large numbers, Neurocomputing (2015).
- et al., Memory pattern analysis of cellular neural networks, Phys. Lett. A (2005).
- et al., Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions, Neurocomputing (2012).
- et al., Multistability and multiperiodicity of delayed Cohen–Grossberg neural networks with a general class of activation functions, Physica D: Nonlinear Phenom. (2008).
- et al., Dynamical stability analysis of multiple equilibrium points in time-varying delayed recurrent neural networks with discontinuous activation functions, Neurocomputing (2012).
- et al., Multistability and complete convergence analysis on high-order neural networks with a class of nonsmooth activation functions, Neurocomputing (2015).
- et al., Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays, Neural Netw. (2015).
- Multiple μ-stability of neural networks with unbounded time-varying delays, Neural Netw.
- Multistability and convergence in delayed neural networks, Physica D: Nonlinear Phenom.
- Multistability of competitive neural networks with time-varying and distributed delays, Nonlinear Anal. Real World Appl.
- Necessary and sufficient condition for multistability of neural networks evolving on a closed hypercube, Neural Netw.
- Coexistence and local stability of multiple equilibria in neural networks with piecewise linear nondecreasing activation functions, Neural Netw.
Zhigang Zeng received the Ph.D. degree in systems analysis and integration from Huazhong University of Science and Technology, Wuhan, China, in 2003. He is currently a Professor with the School of Automation, Huazhong University of Science and Technology, Wuhan, China, and also with the Key Laboratory of Image Processing and Intelligent Control of the Education Ministry of China, Wuhan, China. He has been an Associate Editor of the IEEE Transactions on Neural Networks (2010–2011), IEEE Transactions on Cybernetics (since 2014), IEEE Transactions on Fuzzy Systems (since 2016), and a member of the Editorial Board of Neural Networks (since 2012), Cognitive Computation (since 2010), Applied Soft Computing (since 2013). He has published over 100 international journal papers. His current research interests include theory of functional differential equations and differential equations with discontinuous right-hand sides, and their applications to dynamics of neural networks, memristive systems, and control systems.
☆ This work was supported by the Key Program of National Natural Science Foundation of China under Grant 61134012, the Doctoral Program of Higher Education of China under Grant 20130142130012, the Science and Technology Support Program of Hubei Province under Grant 2015BHE013, and the Program for Science and Technology in Wuhan of China under Grant 2014010101010004.