Mean square exponential stability for discrete-time stochastic switched static neural networks with randomly occurring nonlinearities and stochastic delay
Introduction
Neural networks have been extensively studied, largely because of their applications in image processing, pattern recognition, combinatorial optimization, fixed-point computation, and so on (see [1], [2], [3], [4], [5] and the references therein). According to whether the neuron states (the external states of neurons) or the local field states (the internal states of neurons) are taken as the basic variables, neural networks can be classified as static neural networks or local field neural networks [6], [7]. Static neural networks have been widely adopted to solve various optimization problems, for example the linear variational inequality problem, which contains linear and convex quadratic programming problems and linear complementarity problems as special cases [8], [9]. Examples such as the oculomotor integrator and the head-direction system show that static neural networks and local field neural networks are not always equivalent [10]. In the past few years, the stability analysis problem for the latter has been extensively studied [11]; the former, however, has received little attention.
Switched systems, an important special class of hybrid dynamical systems, consist of a finite number of subsystems described by differential or difference equations together with a switching signal that orchestrates the switching between these subsystems. They have attracted considerable attention in the last decade because of their importance in both theory and practice. To describe switching phenomena in neural networks, so-called switched neural networks have been proposed, and their stability has been investigated in the continuous-time setting [12], [13]. However, it is essential to formulate discrete-time analogues of continuous-time systems when one wants to simulate or compute continuous-time neural networks after obtaining their dynamical characteristics [14], [15]. Thus, research on the stability properties of discrete-time neural networks is necessary [16], [17], [18], [19], [20]. Some initial stability results have recently appeared, for example in Wu et al. [21], Hou et al. [22] and Arunkumar et al. [23], for discrete-time switched neural networks. On the other hand, in real nervous systems synaptic transmission is a noisy process caused by random fluctuations in the release of neurotransmitters and other probabilistic factors [24], [25], [26], [27], [28]. It is also known that a neural network can be stabilized or destabilized by certain stochastic inputs, so stochastic perturbations should be considered when investigating the stability of discrete-time neural networks.
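To make the switched-system idea concrete, the following minimal sketch simulates a discrete-time switched linear system x(k+1) = A_σ(k) x(k) with two subsystems and an arbitrary switching signal. The 2×2 matrices and the random switching rule are illustrative assumptions for this sketch, not taken from the paper.

```python
import random

# Two hypothetical Schur-stable subsystem matrices (assumptions for
# illustration only; the paper's subsystems are different).
A = [
    [[0.5, 0.1], [0.0, 0.4]],   # subsystem 0
    [[0.3, -0.2], [0.1, 0.6]],  # subsystem 1
]

def matvec(M, v):
    """Multiply a 2x2 matrix (list of lists) by a length-2 vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def simulate(x0, switching_signal):
    """Evolve x(k+1) = A[sigma(k)] x(k) along the switching signal."""
    x = list(x0)
    for sigma in switching_signal:
        x = matvec(A[sigma], x)
    return x

random.seed(0)
signal = [random.randint(0, 1) for _ in range(50)]  # arbitrary switching
x_final = simulate([1.0, -1.0], signal)
```

Because both subsystems here are Schur stable with small norms, the state contracts for every switching sequence; in general, stability of each subsystem alone does not guarantee stability under arbitrary switching, which is why switched systems need dedicated analysis.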
Time delays are unavoidably encountered in the implementation of neural networks, and they may induce undesirable dynamic behaviors such as oscillation, instability or other poor performance. It is worth pointing out that investigations of discrete-time stochastic neural networks have so far considered only deterministic time delays [29]. In fact, the time delays in some neural networks often exist in a stochastic fashion [30], [31], and their probabilistic characteristics, such as Bernoulli, Poisson or normal distributions, can often be obtained by statistical methods. It frequently occurs in real systems that some values of the time delay are very large, but the probability of the delay taking such large values is very small. In this case, if only the variation range of the time delay is used to derive the criteria, the results may be rather conservative. Hence, discrete-time neural networks with stochastic delays should be considered.
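The situation described above — a delay that is usually small but occasionally large — can be sketched with a two-point (Bernoulli-type) delay distribution. The scalar recursion, the coefficients, and the probabilities below are assumptions made up for illustration; they are not the paper's model.

```python
import random

def sample_delay(p0, tau_small, tau_large):
    """Draw a delay: tau_small with probability p0, else tau_large."""
    return tau_small if random.random() < p0 else tau_large

def simulate(steps, p0=0.9, tau_small=1, tau_large=8):
    """Run x(k+1) = 0.5 x(k) + 0.2 x(k - tau(k)) with a random delay."""
    tau_max = tau_large
    history = [1.0] * (tau_max + 1)       # constant initial sequence
    for _ in range(steps):
        tau = sample_delay(p0, tau_small, tau_large)
        x_next = 0.5 * history[-1] + 0.2 * history[-1 - tau]
        history.append(x_next)
    return history

random.seed(1)
traj = simulate(200)
```

Since 0.5 + 0.2 < 1, this particular recursion contracts for every realization of the delay; a delay-distribution-dependent criterion can exploit the fact that the large delay occurs only 10% of the time, whereas a criterion using only the range [1, 8] must cover the worst case at every step.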
Nonlinearities are ubiquitous in practice, and it is well known that a large class of nonlinearities can be interpreted as additive nonlinear exogenous disturbances caused by environmental circumstances. In today's networked environment, such nonlinear disturbances may themselves be subject to random abrupt changes, resulting, for example, from random failures and repairs of components, changes in the interconnections of subsystems, sudden environment changes, or modification of the operating point of a linearized model of a nonlinear system [32]. In other words, the nonlinear disturbances may occur in a random way according to a certain probability law; such disturbances are called randomly occurring nonlinearities (RONs). When modelling RONs, the Bernoulli distribution has proven to be a flexible yet effective model and has been frequently employed. Recently, the concept of RONs has been introduced to model randomly occurring nonlinear functions in Liang et al. [33] for complex networks, in Shen et al. [32] and Wang et al. [34] for sensor networks, and in Hu et al. [35] for discrete stochastic systems. However, RONs have not yet been taken into account when modelling discrete-time neural networks.
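The Bernoulli RON model amounts to gating a nonlinear term by a 0/1 random variable ρ(k) with P{ρ(k) = 1} = ρ̄. The scalar dynamics, the nonlinearity f, and the value of ρ̄ in this sketch are illustrative assumptions, not the paper's equations.

```python
import math
import random

def f(x):
    """A sector-bounded nonlinear disturbance (here a tanh term)."""
    return 0.1 * math.tanh(x)

def step(x, rho_bar=0.3):
    """One step of x(k+1) = 0.6 x(k) + rho(k) f(x(k)), where rho(k)
    is a Bernoulli indicator: the nonlinearity occurs only when
    rho(k) = 1, which happens with probability rho_bar."""
    rho = 1 if random.random() < rho_bar else 0
    return 0.6 * x + rho * f(x), rho

random.seed(2)
x, occurrences = 1.0, 0
for _ in range(100):
    x, rho = step(x)
    occurrences += rho          # count the steps where f actually fired
```

Over 100 steps the nonlinearity fires roughly ρ̄ · 100 ≈ 30 times; stability analysis with RONs must hold in the mean-square sense over all such random activation patterns, not just for the nominal (ρ ≡ 0 or ρ ≡ 1) dynamics.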
Motivated by the above discussion, a discrete-time stochastic switched static neural networks (DSSSNNs) model with both RONs and stochastic delays is proposed, and the stability problem for it is addressed. To the authors' knowledge, this study is the first attempt to tackle the stability analysis problem for such a class of neural networks. In Section 2, the DSSSNNs model is formulated and some preliminaries are presented. A mean square exponential stability condition for the proposed model is given in Section 3. In Section 4, a numerical example is provided to illustrate the effectiveness of the theoretical results. Finally, conclusions are drawn in Section 5.
Terminology: Z+ stands for the set of nonnegative integers; R denotes the set of real numbers, R^n the n-dimensional Euclidean space, and R^{n×m} the set of all n×m real matrices. A^T stands for the transpose of a matrix A and λ_max(A) means the largest eigenvalue of A. I denotes the identity matrix. [a, b] is the set of all integers between a and b. For symmetric matrices X and Y, the notation X > Y (X ≥ Y) means that the matrix X − Y is positive definite (nonnegative definite). diag{·} denotes the block diagonal matrix and the symbol * represents the elements below the main diagonal of a symmetric matrix. Moreover, let (Ω, F, P) be a complete probability space with a filtration containing all P-null sets and being right continuous. E{·} denotes the mathematical expectation operator with respect to the given probability measure P. Denote by L²_{F_0} the family of all F_0-measurable R^n-valued random variables with finite norm. In the sequel, if not explicitly stated, matrices are assumed to have compatible dimensions.
Model description and preliminaries
We propose the following model to represent n-neuron discrete-time stochastic static neural networks with RONs and time-varying delay, where is the neural state vector at time k, is the state feedback coefficient matrix with ; are known real constant matrices, is the neuron
Main results
For presentation convenience, in the following, we denote
Firstly, we consider the ith subsystem. Theorem 1. Under Assumptions 1-4, the neural networks (6) are globally exponentially stable if there exist matrices and three scalars such that the following linear matrix inequalities hold:
An illustrative example
One numerical example is presented to illustrate the effectiveness of the theoretical results. Let l = 2. Example. Consider the DSSSNNs (6) with the following parameters: The sector-bounded nonlinear functions are assumed to be, and the time-varying
Conclusions
In this paper, we have investigated the mean square exponential stability problem for a class of discrete-time stochastic switched static neural networks. The subsystems under investigation involve randomly occurring nonlinearities, stochastic disturbances, and stochastic delay. An effective linear matrix inequality approach has been proposed to derive the stability criterion. A numerical example has been given to show the effectiveness of the proposed method.
Acknowledgments
This work was jointly supported by the National Natural Science Foundation of China under Grant 10901073, 61272530, 11072059 and 11202084, the Postdoctoral Science Funds of China (2011M500082) and the Postdoctoral Fund of Jiangsu Province (1101076C), the Specialized Research Fund for the Doctoral Program of Higher Education under Grant no. 20110092110017, the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2012741, and the Deanship of Scientific Research (DSR), King
References (38)
- et al., Image processing with neural networks—a review, Pattern Recognition (2002)
- et al., Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays, Neural Networks (2004)
- et al., A review of Hopfield neural networks for solving mathematical programming problems, Eur. J. Oper. Res. (2009)
- et al., Stability analysis of static recurrent neural networks using delay-partitioning and projection, Neural Networks (2009)
- et al., State estimation for static neural networks with time-varying delay, Neural Networks (2010)
- et al., Robust stability analysis of switched Hopfield neural networks with time-varying delay under uncertainty, Phys. Lett. A (2005)
- et al., Dynamics of a class of discrete-time neural networks and their continuous-time counterparts, Math. Comput. Simul. (2000)
- et al., Exponential stability of continuous-time and discrete-time cellular neural networks with delays, Math. Comput. Simul. (2003)
- et al., Exponential stability criteria for discrete-time recurrent neural networks with time-varying delay, Nonlinear Anal. Real World Appl. (2010)
- et al., Improved exponential stability criteria for discrete-time neural networks with time-varying delay, Neurocomputing (2010)
- A delay-partitioning approach to the stability analysis of discrete-time systems, Automatica
- Delay-dependent exponential stability analysis for discrete-time switched neural networks with time-varying delay, Neurocomputing
- Robust exponential stability analysis of discrete-time switched Hopfield neural networks with time delay, Nonlinear Anal. Hybrid Syst.
- Robust stability criteria for discrete-time switched neural networks with various activation functions, Appl. Math. Comput.
- Robust stability of discrete-time stochastic neural networks with time-varying delays, Neurocomputing
- Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay, Neurocomputing
- Stability analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing
- Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay, Neural Networks
- Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays, Neurocomputing
Manfeng Hu received both the B.S. and M.S. degrees from Xuzhou Normal University, Xuzhou, China, and the Ph.D. degree from Jiangnan University, Wuxi, China, in 1998, 2001, and 2008, respectively. He was a Visiting Scholar at the Department of Mathematics, Michigan State University, USA from June 2008 to June 2009. Currently, he is a Postdoctoral Research Fellow at the School of Automation, Southeast University, Nanjing, China. Since July 2008, he has been Associate Professor at Jiangnan University. His current research interests include nonlinear dynamics, dynamics of complex networks and system biology.
Jinde Cao (M’07–SM’07) received the B.S. degree from Anhui Normal University, Wuhu, China, the M.S. degree from Yunnan University, Kunming, China, and the Ph.D. degree from Sichuan University, Chengdu, China, all in mathematics/applied mathematics, in 1986, 1989, and 1998, respectively. From March 1989 to May 2000, he was with the Yunnan University. In May 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From July 2001 to June 2002, he was a Postdoctoral Research Fellow at the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. In the period from 2006 to 2008, he was a Visiting Research Fellow and a Visiting Professor at the School of Information Systems, Computing and Mathematics, Brunel University, UK. Currently, he is a Professor and Doctoral Advisor at the Southeast University, prior to which he was a Professor at Yunnan University from 1996 to 2000. He is the author or coauthor of more than 300 journal papers and five edited books. His research interests include nonlinear systems, neural networks, complex systems and complex networks, stability theory, and applied mathematics. Dr. Cao was an Associate Editor of the IEEE Transactions on Neural Networks and Neurocomputing. He is an Associate Editor of the Abstract and Applied Analysis, Differential Equations and Dynamical Systems, Discrete Dynamics in Nature and Society, International Journal of Differential Equations, Journal of Applied Mathematics, Journal of the Franklin Institute, Mathematics and Computers in Simulation, and Neural Networks. Dr. Cao is a Reviewer of Mathematical Reviews and Zentralblatt-Math.
Aihua Hu received the B.S. degree in Information and Computing Science from the Jiangnan University, Wuxi, China, in 2003, and the M.S. degree and Ph.D. degree in Control Theory and Engineering from the Jiangnan University, Wuxi, China, in 2006 and 2010, respectively. Currently, she is an Associate Professor and Graduate Student Advisor at the Jiangnan University. She is the author or coauthor of about 20 journal papers. Her research interests include nonlinear systems, stochastic systems and complex networks.