Elsevier

Neurocomputing

Volume 216, 5 December 2016, Pages 135-142

Multistability of recurrent neural networks with time-varying delays and nonincreasing activation function

https://doi.org/10.1016/j.neucom.2016.07.032

Abstract

In this paper, we are concerned with a class of recurrent neural networks (RNNs) with a nonincreasing activation function. First, based on a fixed point theorem, it is shown that, under some conditions, such an n-dimensional neural network with a nondecreasing activation function can have at least (4k+3)^n equilibrium points. Then, it is proven that, under some further conditions, there exist exactly (4k+3)^n equilibria, among which (2k+2)^n are locally stable. Moreover, by analyzing RNNs with a nondecreasing activation function, we obtain the same number of equilibria for RNNs with a nonincreasing activation function. Finally, two simulation examples are given to show the effectiveness of the obtained results.

Introduction

The dynamic behavior of equilibrium points has long been a central topic in Lyapunov theory, and it is also at the heart of multistability analysis. The study of multistability originated in the analysis of multiple attractors, which is closely related to the dynamic behavior of equilibrium points. In recent years, with the development of multistability theory and neural networks, many efforts have been devoted to applications such as signal processing, associative memories, image processing, pattern recognition, and optimization problems (see [1], [2], [3], [4], [5], [6]). Such applications rely heavily on the dynamical properties of neural network systems, so further in-depth study of the dynamic behavior of neural networks remains necessary.

To establish the existence of equilibria in selected subsets, the existing literature usually adopts one of two methods, based respectively on the Banach fixed point theorem and the Brouwer fixed point theorem. The Banach fixed point method imposes a strong constraint on the chosen subregion: if the selected region is too large, the map may fail to be a contraction; if it is too small, the self-mapping condition may fail. With the Brouwer fixed point method, the exact number of fixed points remains unclear. Therefore, to make up for these deficiencies, it is natural to combine the two methods.

Based on the Banach fixed point theorem, it was shown in [7] that cellular neural networks can have 3^n memory patterns, of which 2^n are locally exponentially stable. In order to increase storage capacity, a class of discontinuous activation functions was introduced in [8], and the coexistence of (4k−1)^n locally stable equilibrium points was derived. The multistability of recurrent neural networks (RNNs) with activation functions symmetric about the origin in the phase plane was also investigated in [9], where new multistability criteria were proposed. In [10], [11], [12], [13], [14], further multistability properties of neural networks were investigated, and sufficient conditions ensuring multistability were proposed.

Based on the Brouwer fixed point theorem, the coexistence of multiple equilibrium points was investigated in [15] via the geometrical configuration of Fermi activation functions. To expand the scope of application, a class of nonsmooth activation functions was introduced for high-order neural networks in [16], and the coexistence of 3^n equilibrium points, with 2^n of them locally stable, was established. In addition, it was shown in [17] that neural networks with discontinuous, nonmonotonic, piecewise linear activation functions can have at least 5^n equilibrium points, 3^n of which are locally stable while the others are unstable. For more references, see [18], [19], [20].

In addition, neural networks with concave-convex characteristics were investigated in [21], [22], where a method based on another type of fixed point theorem was given. In [23], the multistability of neural networks was addressed via a method based on dynamical analysis, which was applied to study the convergence of equilibria. Dynamical analysis combined with fixed point methods can also handle the coexistence of multiple stable equilibria, as in [24], [25], [26], [27], [28], [29]. The methods of [30], [31] are also valuable for exploring multistability.

As far as we know, the type of activation function plays a crucial role in the multistability analysis of neural networks. In the above-mentioned works, as in most existing works, the activation functions applied to multistability analysis, including nondecreasing and nonmonotonic ones, have mainly been piecewise constant activation functions, sigmoidal activation functions, and nondecreasing saturated activation functions. For example, [10] addressed multistability for neural networks with a nondecreasing saturated activation function, defined as follows:

f(r) = { 4k−3,        r ∈ [4k−3, +∞),
         2r−(4k−3),   r ∈ [4k−5, 4k−3),
         ⋮
         2r−5,        r ∈ [3, 5),
         1,           r ∈ [1, 3),
         r,           r ∈ (−1, 1),
         −1,          r ∈ (−3, −1],
         2r+5,        r ∈ (−5, −3],
         ⋮
         2r+(4k−3),   r ∈ (3−4k, 5−4k],
         3−4k,        r ∈ (−∞, 3−4k].    (1)

Motivated by the evolution of the above continuous activation function, in this paper we introduce a general class of activation functions defined as follows:

g(r) = Σ_{s=1}^{N} ((m_{s−1} − m_s)/2) (|r + E_s| − |r − E_s|),    (2)

where N = 2k+1, m_0 = 1, E_s = 3s−2, m_s = 1 + (−1)^{s+1}, and s = 1, 2, …, 2k+1.
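As a numerical sanity check (our own illustration, not part of the paper), the sum form of g can be evaluated directly. With k = 1 the breakpoints sit at |r| = 1, 4, 7, and g descends through plateaus at −1 and −7:

```python
# Illustrative sketch (not from the paper): evaluate
#   g(r) = sum_{s=1}^{N} ((m_{s-1} - m_s)/2) * (|r + E_s| - |r - E_s|)
# with N = 2k+1, m_0 = 1, E_s = 3s - 2, m_s = 1 + (-1)**(s+1).

def g(r, k):
    N = 2 * k + 1
    m = [1] + [1 + (-1) ** (s + 1) for s in range(1, N + 1)]
    E = [3 * s - 2 for s in range(1, N + 1)]
    return sum((m[s - 1] - m[s]) / 2.0 * (abs(r + E[s - 1]) - abs(r - E[s - 1]))
               for s in range(1, N + 1))

# For k = 1 the breakpoints are at |r| = 1, 4, 7 (E_s = 3s - 2):
assert g(0, 1) == 0
assert g(2, 1) == g(3.5, 1) == -1      # plateau on [1, 4]
assert g(5, 1) == -3                   # slope -2 segment on (4, 7)
assert g(8, 1) == g(100, 1) == -7      # saturation beyond 7
assert all(g(r, 1) == -g(-r, 1) for r in (0.5, 2, 5, 10))  # odd symmetry
```

The ±E_s breakpoints split the real line into 2(2k+1)+1 = 4k+3 linear pieces, which is exactly the per-coordinate count that appears in the equilibrium results below.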

It is obvious that g(r) is a nonincreasing odd function. When N = 2k−1, m_0 = 1, E_s = 2s−1, m_s = 1 + (−1)^s, and s = 1, 2, …, 2k−1 in function (2), function (2) coincides with function (1) for all r ∈ R; that is, both functions share the same mathematical form. One might surmise that a method similar to that of [10] can be adopted. However, more conditions are needed to ensure the existence and uniqueness of an equilibrium of a given recurrent neural network in each subset, and the methods in the existing literature are apparently unable to achieve this goal. In order to further explore existence and uniqueness conditions for multiple equilibria, it is necessary to develop a new method.
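To make the correspondence concrete, here is a small check (again our own illustration, shown for k = 2) that the substituted parameters turn the absolute-value sum into the piecewise function (1):

```python
# Illustrative check (k = 2): with N = 2k-1, m_0 = 1, E_s = 2s-1 and
# m_s = 1 + (-1)**s, the absolute-value sum reproduces function (1):
# f(r) = r on (-1, 1), f = 1 on [1, 3), f = 2r - 5 on [3, 5),
# and f saturates at 4k - 3 = 5.

def f(r, k):
    N = 2 * k - 1
    m = [1] + [1 + (-1) ** s for s in range(1, N + 1)]
    E = [2 * s - 1 for s in range(1, N + 1)]
    return sum((m[s - 1] - m[s]) / 2.0 * (abs(r + E[s - 1]) - abs(r - E[s - 1]))
               for s in range(1, N + 1))

assert f(0.5, 2) == 0.5            # central slope-1 segment
assert f(1, 2) == f(2.5, 2) == 1   # plateau on [1, 3)
assert f(4, 2) == 2 * 4 - 5        # slope-2 segment on [3, 5)
assert f(6, 2) == f(100, 2) == 5   # saturation at 4k - 3
assert f(-4, 2) == -f(4, 2)        # odd, as required
```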

Inspired by the above discussion, this paper is devoted to investigating the dynamical behaviors of recurrent neural networks, including the total number of equilibria, their locations, and their local stability. The main contributions of this paper are as follows:

  • This paper provides a method for dealing with the multistability of RNNs with a nonincreasing activation function, in which the nonincreasing function is converted into a nondecreasing one. The number of equilibria is the same in both cases.

  • Rigorous mathematical analysis of the dynamical behaviors of the equilibrium points is presented for RNNs with time-varying delays and a nonincreasing activation function. It is proven that such RNNs can have (4k+3)^n equilibrium points, (2k+2)^n of which are locally exponentially stable.

  • For the coexistence and local stability of multiple equilibrium points of RNNs, the effect of the division of the state space is revealed, leading to an important insight: there are many different conditions that can ensure the coexistence of multiple equilibria.

The remaining sections are arranged as follows. Section 2 describes the system and preliminaries. Section 3 derives sufficient conditions on the equilibrium points of RNNs with a nondecreasing activation function. Section 4 extends these to sufficient conditions for RNNs with a nonincreasing activation function. Numerical examples are presented to verify the effectiveness of our results in Section 5. Finally, conclusions are drawn in Section 6.


Notations

Let C([t_0−τ, t_0], D) be the Banach space of functions mapping [t_0−τ, t_0] into D ⊆ R^n with the norm ‖ϕ‖ = max_{1≤i≤n} { sup_{r∈[t_0−τ, t_0]} |ϕ_i(r)| }, where ϕ(s) = (ϕ_1(s), ϕ_2(s), …, ϕ_n(s))^T ∈ C([t_0−τ, t_0], D). Denote ‖x‖ = max_{1≤i≤n} |x_i| as the vector norm of the vector x = (x_1, x_2, …, x_n)^T.

For a given integer k ≥ 1 and given constants 0 < E_s ∈ R, s = 1, 2, …, 2k+1, there exist Z_{ji}^−, Z_{ji}^+ ∈ R, j = 1, 2, …, 4k+3, i ∈ {1, 2, …, n}, such that

Z_{1i}^− < Z_{1i}^+ < −E_{2k+1} < Z_{2i}^− < Z_{2i}^+ < −E_{2k} < ⋯ < −E_1 < Z_{2k+2,i}^− < Z_{2k+2,i}^+ < E_1 < Z_{2k+3,i}^− < Z_{2k+3,i}^+ < E_2 < ⋯ < E_{2k} < Z_{4k+2,i}^− < Z_{4k+2,i}^+ < E_{2k+1} < Z_{4k+3,i}^− < Z_{4k+3,i}^+.

Main results

In this part, we mainly consider the activation function f(r) = −g(r), r ∈ R. Obviously, f(r) is a nondecreasing odd function on R. We will explore new multistability conditions for the general delayed neural network system, and establish new criteria and conclusions.

Theorem 1

Suppose that, for i = 1, 2, …, n,

Q_i(a_ij, b_ij, u_i) < 3(a_ii + b_ii − 1)(1 − k) + (1/2){3 − |(a_ii + b_ii − 1)(4 − 6k) + 3|}

holds. Then for each Λ ∈ Ω_2, RNN (5) has at least one equilibrium point in Λ, and hence RNN (5) has at least (4k+3)^n equilibrium points.
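A scalar (n = 1) illustration may help fix ideas; the gain of 2 below is a hypothetical choice of ours, not a parameter taken from the paper. With k = 1 and the nondecreasing activation f(r) = −g(r), the equilibrium equation x = 2 f(x) has exactly 4k + 3 = 7 solutions:

```python
# Scalar sketch (n = 1, k = 1; the gain 2 is our own illustrative choice):
# count the solutions of the equilibrium equation x = 2*f(x), f = -g.

def g(r, k=1):
    N = 2 * k + 1
    m = [1] + [1 + (-1) ** (s + 1) for s in range(1, N + 1)]
    E = [3 * s - 2 for s in range(1, N + 1)]
    return sum((m[s - 1] - m[s]) / 2.0 * (abs(r + E[s - 1]) - abs(r - E[s - 1]))
               for s in range(1, N + 1))

def h(x):                       # equilibrium residual of x' = -x + 2*f(x)
    return -x + 2 * (-g(x))

# the seven equilibria: 0, +-2 (plateau), +-14/3 (slope segment), +-14
for x in (0.0, 2.0, 14.0 / 3, 14.0):
    assert abs(h(x)) < 1e-9 and abs(h(-x)) < 1e-9

# independent count via sign changes of h on a fine grid
grid = [i / 97 + 0.0005 for i in range(-2000, 2001)]
changes = sum((h(a) > 0) != (h(b) > 0) for a, b in zip(grid, grid[1:]))
assert changes == 7
```

All seven crossings of h are transversal here, which is the scalar shadow of the local stability/instability split discussed for the n-dimensional case.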

Extension to RNNs

We now consider the activation function f(r) = g(r), r ∈ R. Obviously, f(r) is a nonincreasing odd function on R. Due to the symmetric structure of f(r), lemmas, theorems, and corollaries analogous to those above can be stated as follows, and the similar proofs are omitted. Let

F_i^+(x_i) = −x_i + γ̄_i g(x_i),
G_{3i}^+ = {x_i | F_i^+(x_i) + c_i = 0, |c_i| ≤ β_i, x_i ∈ R},

where γ̄_i < −1.

Lemma 4

For a given integer k ≥ 1, if 0 ≤ β_i < 3(γ̄_i + 1)(1 − k) + (1/2){3 − |(γ̄_i + 1)(4 − 6k) + 3|}, then Card(G_{3i}^+) = 4k+3.

Theorem 5

Suppose that θ_i ≥ 0, α_i ≥ 0, η_i ≥ 0 for i = 1, 2, …, n

Numerical examples

In this section, two examples are provided to verify the effectiveness of the results obtained in the previous sections. We choose

D_i1 = {[−1/ϵ, −(6k+1)−ϵ], [−6q+8+ϵ, −6q+11−ϵ], [(6k+1)+ϵ, 1/ϵ], [6q−11+ϵ, 6q−8−ϵ], q = 2, 3, …, k+1},
D_i2 = {[−1+ϵ, 1−ϵ]},
D_i3 = {[6q−8+ϵ, 6q−5−ϵ], [−6q+5+ϵ, −6q+8−ϵ], q = 2, 3, …, k+1},

and then

Ω_1 = {∏_{i=1}^n l_{j(i)}^i | l_{j(i)}^i ∈ D_i1, j(i) = 1, 3, 5, …, or 4k+3},
Ω_2 = {∏_{i=1}^n l_{j(i)}^i | l_{j(i)}^i ∈ D_i1 ∪ D_i2 ∪ D_i3, j(i) = 1, 2, …, 4k+3},

where ϵ is small enough that 0 < ϵ ≤ 1/(6k+1).
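The interval bookkeeping can be sketched numerically; the endpoint formulas below follow our reading of the example and are illustrative only. For a given k and small ϵ, D_i1 contributes 2k+2 intervals, D_i2 one, and D_i3 2k, i.e. 4k+3 pairwise disjoint intervals per coordinate, so Ω_2 contains (4k+3)^n product regions and Ω_1 contains (2k+2)^n:

```python
# Counting sketch (endpoints follow our reading of the example; illustrative):
# build the interval families D_i1, D_i2, D_i3 and check that they give
# 4k + 3 pairwise disjoint intervals on each coordinate axis.

def regions(k, eps):
    d1 = [(-1 / eps, -(6 * k + 1) - eps), ((6 * k + 1) + eps, 1 / eps)]
    d3 = []
    for q in range(2, k + 2):
        d1 += [(6 * q - 11 + eps, 6 * q - 8 - eps),
               (-6 * q + 8 + eps, -6 * q + 11 - eps)]
        d3 += [(6 * q - 8 + eps, 6 * q - 5 - eps),
               (-6 * q + 5 + eps, -6 * q + 8 - eps)]
    d2 = [(-1 + eps, 1 - eps)]
    return d1, d2, d3

k, eps, n = 2, 0.05, 3                 # eps satisfies 0 < eps <= 1/(6k+1)
d1, d2, d3 = regions(k, eps)
assert len(d1) == 2 * k + 2 and len(d2) == 1 and len(d3) == 2 * k
intervals = sorted(d1 + d2 + d3)
assert len(intervals) == 4 * k + 3
# each interval must end before the next one begins (pairwise disjoint)
assert all(lo2 > hi1 for (_, hi1), (lo2, _) in zip(intervals, intervals[1:]))
assert (4 * k + 3) ** n == 11 ** 3     # product regions in Omega_2 for n = 3
```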

Example 1

Consider RNN (5) with activation function (6): ẋ_1 = −x_1(t) + ⋯

Conclusions

In this paper, multistability has been investigated for RNNs with time-varying delays. By combining two kinds of fixed point methods, new conditions have been proposed to ensure that some equilibrium points are exponentially stable. Since memory patterns are often designed as stable equilibria (attractors), the multistability of recurrent neural networks can be used to design associative memories and the corresponding domains of attraction. The method taken in this paper is general and less conservative compared

Fanghai Zhang received the B.S. degree in Mathematics from Fuyang Normal University, Fuyang, China in 2012. He is currently pursuing the Ph.D. degree with the School of Automation, Huazhong University of Science and Technology, Wuhan, China. His current research interests include artificial neural networks, stability analysis of dynamical systems, switching control, and associative memories.

References (31)



Zhigang Zeng received the Ph.D. degree in systems analysis and integration from Huazhong University of Science and Technology, Wuhan, China, in 2003. He is currently a Professor with the School of Automation, Huazhong University of Science and Technology, Wuhan, China, and also with the Key Laboratory of Image Processing and Intelligent Control of the Education Ministry of China, Wuhan, China. He has been an Associate Editor of the IEEE Transactions on Neural Networks (2010–2011), IEEE Transactions on Cybernetics (since 2014), IEEE Transactions on Fuzzy Systems (since 2016), and a member of the Editorial Board of Neural Networks (since 2012), Cognitive Computation (since 2010), Applied Soft Computing (since 2013). He has published over 100 international journal papers. His current research interests include theory of functional differential equations and differential equations with discontinuous right-hand sides, and their applications to dynamics of neural networks, memristive systems, and control systems.

This work was supported by the Key Program of National Natural Science Foundation of China under Grant 61134012, the Doctoral Program of Higher Education of China under Grant 20130142130012, the Science and Technology Support Program of Hubei Province under Grant 2015BHE013, the Program for Science and Technology in Wuhan of China under Grant 2014010101010004.
