
Neurocomputing

Volume 73, Issues 1–3, December 2009, Pages 350-356

Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation

https://doi.org/10.1016/j.neucom.2009.08.004

Abstract

In this paper, an adaptive feedback controller is designed to achieve complete synchronization of coupled delayed competitive neural networks with different time scales and stochastic perturbation. A LaSalle-type invariance principle for stochastic differential delay equations is employed to establish the global almost sure asymptotic stability of the error dynamical system. An example with numerical simulation is given to demonstrate the effectiveness of the theoretical results.

Introduction

A laterally inhibited neural network with a deterministic signal Hebbian learning law, which can model the dynamics of cortical cognitive maps with unsupervised synaptic modifications, was proposed and its global asymptotic stability investigated in Meyer-Bäse et al. [25], [27]. The model has two types of state variables: the short-term memory (STM) variables, describing the fast neural activity, and the long-term memory (LTM) variables, describing the slow unsupervised synaptic modifications. Thus there are two time scales in these neural networks: one corresponds to the fast changes of the neural network states, and the other to the slow changes of the synapses under external stimuli. The network can be mathematically expressed as

STM: ε ẋ_j(t) = −a_j x_j(t) + Σ_{i=1}^{N} D_{ij} f(x_i(t)) + B_j Σ_{i=1}^{P} m_{ij}(t) y_i,  j = 1, 2, …, N,  (1)

LTM: ṁ_{ij}(t) = −m_{ij}(t) + y_i f(x_j(t)),  i = 1, 2, …, P, j = 1, 2, …, N,  (2)

where x_j(t) is the current activity level of the j-th neuron, f(x_j(t)) is the output of the neuron, m_{ij}(t) is the synaptic efficiency, y_i is the constant external stimulus, D_{ij} represents the connection weight between the i-th neuron and the j-th neuron, a_j > 0 is the time constant of the neuron, B_j is the strength of the external stimulus, and ε > 0 is the time scale of the STM state [27].
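As an illustration, the STM/LTM system (1)–(2) can be integrated with a simple Euler scheme. The sketch below uses hypothetical parameter values (N = 2 neurons, P = 1 stimulus component, and the weights shown), chosen only for demonstration and not taken from the paper:

```python
import numpy as np

# Hypothetical parameters for illustration (not from the paper)
N, P = 2, 1
eps = 0.1                      # STM time scale (fast dynamics)
a = np.array([1.0, 1.0])       # neuron time constants a_j > 0
D = np.array([[1.0, -0.5],
              [0.3,  1.2]])    # connection weights D_ij
B = np.array([1.0, 0.8])       # stimulus strengths B_j
y = np.array([0.5])            # constant external stimulus y_i
f = np.tanh                    # neuron output function

def step(x, m, dt):
    """One Euler step of the STM/LTM system (1)-(2)."""
    # STM: eps * dx_j/dt = -a_j x_j + sum_i D_ij f(x_i) + B_j sum_i m_ij y_i
    dx = (-a * x + D.T @ f(x) + B * (m.T @ y)) / eps
    # LTM: dm_ij/dt = -m_ij + y_i f(x_j)
    dm = -m + np.outer(y, f(x))
    return x + dt * dx, m + dt * dm

x = np.array([0.1, -0.2])      # initial neural activity
m = np.zeros((P, N))           # initial synaptic efficiencies
for _ in range(5000):
    x, m = step(x, m, dt=1e-3)
```

The separation of time scales is visible in the factor 1/ε: the smaller ε is, the faster the STM states settle relative to the slowly drifting LTM synapses.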

Competitive neural networks with different time scales are extensions of Grossberg's shunting network [11] and Amari's model of primitive neuronal competition [1]. For neural network models that do not consider synaptic dynamics, stability has been extensively analyzed. Cohen and Grossberg [9] found a Lyapunov functional for such a neural network and derived sufficient conditions ensuring absolute stability. For competitive neural networks with two time scales, stability was studied in Meyer-Bäse et al. [25], [26], [27]. In Meyer-Bäse et al. [25], the theory of singular perturbations was employed to derive a condition for global asymptotic stability of the neural networks (1), (2). In Meyer-Bäse et al. [27], the theory of flow invariance was used to prove the boundedness of solutions of the neural network, and a sufficient stability condition was derived by the Lyapunov method. Meyer-Bäse et al. [26] presented a condition for the uniqueness and global exponential stability of the neurosynaptic system, also based on the theory of flow invariance. A special case of these neural networks was given in Lemmon and Kumar [16].

In all the references mentioned above, only the instantaneous feedback from other neurons is considered for each neuron. However, in both real neural systems and circuit implementations of neural network models, time delays always exist due to signal transmission between neurons and finite switching speed. The effects of delays on the dynamics of neural network models must therefore be analyzed in order to model real neural systems more accurately. The stability of neural network models with delays has been extensively and thoroughly investigated in the literature, for example, delayed Hopfield-type neural networks [6], [20], [33], delayed cellular neural networks [2], [3], [14], delayed Cohen–Grossberg neural networks [5], [15], and delayed bidirectional associative memory neural networks [4], [10]. However, all these models consider only the neural activation dynamics; the synaptic dynamics is not involved. Hence, in Lu et al. [21], the authors introduced delay into the competitive neural networks (1), (2) with different time scales, taking both neural activation and synaptic dynamics into consideration. In practice, the delays are often different, and to our knowledge such neural networks with multiple delays have been reported in only a few works [22], [28], [29].

In the last decade, much attention has been devoted to chaotic neural networks, and it has been found that synchronization of coupled neural networks has potential applications in secure communication, parallel recognition, etc. [7], [8], [34], [35]. Therefore, the investigation of synchronization of delayed neural networks is of practical importance. In Lou and Cui [19], the exponential synchronization problem for a class of competitive neural networks was studied using Lyapunov functions and the LMI method.

However, a real system is usually affected by external perturbations, which in many cases are highly uncertain and hence may be treated as random; as Haykin [12] pointed out, in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. Therefore, the effect of noise should be taken into account when studying the synchronization of chaotic systems. Recently, some stochastic synchronization results have been proposed. In Lin and He [18], complete synchronization of noise-perturbed Chua's circuits is investigated, and sufficient conditions for complete synchronization of coupled Chua's circuits with stochastic perturbation are established by means of the so-called LaSalle-type invariance principle for stochastic functional differential equations. In Pakdaman and Mestivier [30], the authors study noise-induced synchronization in a neuronal oscillator using random dynamical system theory. In Sun et al. [32], the authors deal with the exponential synchronization problem for a class of stochastically perturbed chaotic delayed neural networks. In Sun and Cao [31], the adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation is considered in detail. In Li and Cao [17], an adaptive feedback controller is designed to achieve complete synchronization of coupled delayed neural networks with stochastic perturbation. However, to the best of our knowledge, the synchronization of delayed competitive neural networks with different time scales and stochastic perturbation has seldom been considered.

Inspired by the above discussions, in this paper an adaptive feedback controller is proposed for the complete synchronization of coupled delayed competitive neural networks with different time scales and stochastic perturbation, based on a LaSalle-type invariance principle for stochastic differential delay equations. The results obtained in this paper show that complete synchronization between the coupled networks can be achieved almost surely even though they are subject to stochastic perturbation.

The remainder of this paper is organized as follows. In Section 2, the complete synchronization problem is formulated and some assumptions and a lemma are given. In Section 3, sufficient conditions ensuring the almost sure asymptotic stability of the error dynamical system are derived. In Section 4, an example with numerical simulation is given to illustrate the validity of the proposed coupling scheme. Conclusions are drawn in Section 5.


Problem formulation and preliminaries

Throughout this paper, I denotes the identity matrix of compatible dimension. For any matrix A, A^T denotes the transpose of A. For two real symmetric matrices X and Y, λ_max(X) and λ_min(X) denote the largest and smallest eigenvalues of X, respectively, and the notation X ⩾ Y (or X > Y) means that X − Y is positive semi-definite (or positive definite).

In this paper, we consider the following competitive neural networks with different time scales and delays:

STM: ε ẋ_i(t) = −a_i x_i(t) + Σ_{k=1}^{N} D_{ik} f_k(x_k(t)) + Σ_{k=1}^{N} D^τ_{ik} f_k(x_k(t − τ_k)) + B_i S_i(t), i = 1, 2, …, N,

LTM: Ṡ_i(t) = −c_i S_i(t) + f_i(x_i(t)), i = 1, 2, …, N,

where S_i(t) denotes the synaptic dynamic variable and τ_k ⩾ 0 is the transmission delay.

Theoretical analysis of adaptive synchronization

In this section, employing the adaptive control method and a LaSalle-type invariance principle for stochastic differential delay equations, we prove that complete synchronization between the systems (9), (10) can be reached for almost every initial data.

Theorem 1

Under assumptions (H1) and (H2), the two coupled delayed competitive neural networks (9), (10) can be synchronized for almost every initial data, if there exist positive definite diagonal matrices P = diag(p_1, p_2, …, p_n) and Q = diag(q_1, q_2, …, q_n),
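Although the theorem statement is truncated in this excerpt, the adaptive feedback scheme typical of this line of work (e.g. Li and Cao [17]) updates a feedback gain online from the synchronization error. The sketch below is a generic scalar version of that idea, u = −k e with k̇ = γ e²; the gain law and all names here are assumptions for illustration, not the paper's exact controller:

```python
def adaptive_controller_step(e, k, gamma, dt):
    """One Euler step of a generic adaptive feedback law (assumed form).

    e     : synchronization error e(t) = y(t) - x(t)
    k     : current adaptive feedback gain k(t)
    gamma : positive adaptation rate
    Returns the control input u(t) and the updated gain.
    """
    u = -k * e                        # feedback control u = -k e
    k_next = k + dt * gamma * e ** 2  # gain adaptation k' = gamma e^2
    return u, k_next
```

Driving a deliberately unstable scalar error system e′ = e + u with this law, the gain grows until it dominates the instability, after which the error decays toward zero while the gain settles to a finite value.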

An example

We consider a two-neuron competitive neural network as follows:

STM: ẋ(t) = −(1/ε) A x(t) + (1/ε) D f(x(t)) + (1/ε) D^τ f(x(t − τ)) + (1/ε) B S(t),  (30)

LTM: Ṡ(t) = −C S(t) + f(x(t)),

where ε = 1, A = [[1, 0], [0, 2]], D = [[2.5, −0.15], [−0.1, 3.5]], D^τ = [[−2, −0.3], [−0.5, −2]], B = [[1.6, 0], [0, −0.4]], and the activation functions are f_k(x) = tanh(x) (k = 1, 2). Clearly, f_k (k = 1, 2) satisfies assumption (H1) with l_k = 1 (k = 1, 2), so L = I_{2×2}; the delays are τ_k = 1 (k = 1, 2).
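The drive system (30) can be simulated with a delayed Euler scheme using the matrices given above. The matrix C of the LTM equation is not shown in this excerpt, so C = I is assumed below purely for illustration:

```python
import numpy as np

# Parameters of the example system (30); C = I is an assumption,
# since its value does not appear in this excerpt.
eps = 1.0
A  = np.array([[1.0, 0.0], [0.0, 2.0]])
D  = np.array([[2.5, -0.15], [-0.1, 3.5]])
Dt = np.array([[-2.0, -0.3], [-0.5, -2.0]])   # D^tau
B  = np.array([[1.6, 0.0], [0.0, -0.4]])
C  = np.eye(2)                                # assumed, not from the paper
tau, dt = 1.0, 0.01
f = np.tanh

n_hist = int(round(tau / dt))                 # delay length in steps
x_hist = [np.array([0.1, -0.2])] * (n_hist + 1)  # constant history on [-tau, 0]
S = np.zeros(2)

for _ in range(4000):                         # integrate to t = 40
    x, x_tau = x_hist[-1], x_hist[0]          # x(t) and x(t - tau)
    dx = (-A @ x + D @ f(x) + Dt @ f(x_tau) + B @ S) / eps
    dS = -C @ S + f(x)
    x_hist = x_hist[1:] + [x + dt * dx]       # shift the delay buffer
    S = S + dt * dS
```

Because tanh is bounded and A is positive definite, the trajectories remain bounded; this drive trajectory is what the response system below is steered to follow.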

For the drive system (30), we construct the corresponding response system as follows: STM: dy(t) = −(1/ε) A y(t) + (1/ε) D f(y(t))

Conclusions

In this paper, an adaptive feedback controller is proposed for the complete synchronization of coupled delayed competitive neural networks subject to stochastic perturbation. By using a LaSalle-type invariance principle for stochastic differential delay equations, we prove the global almost sure asymptotic stability of the error dynamical system, that is, complete synchronization can be achieved almost surely. We consider the effects of stochastic perturbation and time scale

Acknowledgement

The author would like to thank the editor and the anonymous reviewers for their helpful comments and suggestions.




Haibo Gu was born in Shanxi Province, China, in 1982. He graduated from the Department of Mathematics of Shihezi University, Shihezi, China, in 2004, and received the M.S. degree from the College of Mathematics and System Sciences, Xinjiang University, Urumqi, China, in 2007.

He is a teacher in the College of Mathematics, Physics and Information Science at Xinjiang Normal University, Xinjiang, China. His present research interests include nonlinear systems, mathematical biology, neural networks, and complex networks.

This work was supported by the National Natural Science Foundation of P.R. China (60764003), the Scientific Research Program of the Higher Education Institution of Xinjiang (XJEDU2006I05), and the College of Mathematics, Physics and Information Science of Xinjiang Normal University.
