Elsevier

Neurocomputing

Volume 186, 19 April 2016, Pages 1-7

New delay-interval-dependent stability criteria for static neural networks with time-varying delays

https://doi.org/10.1016/j.neucom.2015.12.063

Abstract

This paper introduces an effective approach to studying the stability of static neural networks with interval time-varying delay, using a delay-partitioning approach and a tighter integral inequality lemma. By decomposing the delay interval into multiple equidistant subintervals and multiple nonuniform subintervals, suitable Lyapunov–Krasovskii functionals are constructed on these intervals. A set of novel sufficient conditions is obtained to guarantee the asymptotic stability of the considered system. These conditions are expressed in the framework of linear matrix inequalities and depend on both the lower and upper bounds of the time-varying delay. It is shown, by comparison with existing approaches, that the delay-partitioning approach can largely reduce the conservatism of the stability results. Finally, three examples are given to show the effectiveness of the theoretical results.

Introduction

Various classes of neural networks have been active research topics in the past few years, due to their practical importance and successful applications in many areas such as aerospace, data mining, signal filtering, parallel computing, robotics and telecommunications; see, e.g., [1], [2]. This has attracted many researchers, including mathematicians, physicists, computer scientists and biologists. These applications depend heavily on the dynamic behavior of the equilibrium point of the neural network. That is, stability is one of the main properties of neural networks and a crucial feature in their design.

It is well known that time delays are unavoidably encountered in the implementation of neural networks, due to the finite switching speed of neurons and amplifiers. The issue of stability analysis of neural networks with time delays has therefore attracted many researchers, and a large number of stability results have been reported in the literature [3], [4], [5], [6]. The obtained results can be classified into two types: delay-dependent criteria [7], [8], [9], [10], [11], [12], [13], [14], [15] and delay-independent criteria [16], [17]. Generally speaking, delay-dependent stability criteria are less conservative than delay-independent ones, especially when the size of the delay is small. Hence, pursuing delay-dependent stability criteria is of much theoretical and practical value.

Depending on the modeling approach, neural networks can be modeled either as static neural network models or as local field neural network models [18], [19]. A local field neural network and a static neural network can be transformed equivalently into each other under some assumptions, but these assumptions cannot always be satisfied in applications [20]. That is, local field neural network models and static neural network models are not always equivalent. Thus, it is necessary and important to study them separately.

In [21], global exponential stability criteria were obtained for static recurrent neural networks to ensure the existence and uniqueness of the equilibrium, based on the nonlinear measure. The authors in [22] investigated the problem for static neural networks with constant delay using a delay-partitioning approach and Finsler's lemma. Li et al. [23] employed a unified approach to the stability analysis of generalized static neural networks with time-varying delays and linear fractional uncertainties, utilizing some novel transformations and a discretized scheme. In [24], stability criteria were derived for both delay-independent and delay-dependent conditions using an augmented Lyapunov functional, which realizes the decoupling of the Lyapunov function matrix and the coefficient matrix of the neural network. The stability and dissipativity problems of static neural networks with time-varying delay were investigated in [25]. Sun et al. [26] presented stability criteria for a class of static neural networks by constructing a new augmented Lyapunov functional which fully uses the information about the lower bound of the delay and contains some new double and triple integral terms. Nevertheless, the results obtained in [24], [25], [26] are based on simple Lyapunov–Krasovskii functionals and are still conservative. Therefore, there is much room for further investigation, which motivates this work.

In this paper, our research efforts are focused on developing a new approach to analyze the stability of static neural networks with interval time-varying delays. In order to obtain less conservative sufficient conditions, we first decompose the delay interval $[-h_2,0]$ into $[-h_2,-h_1]$ and $[-h_1,0]$. Second, we decompose the interval $[-h_1,0]$ into $m$ equidistant subintervals, choosing different weighting matrices on each, that is, $[-h_1,0]=\bigcup_{i=1}^{m}\left[-\frac{ih_1}{m},\,-\frac{(i-1)h_1}{m}\right]$. Last, we decompose the interval $[-h_2,-h_1]$ into $r$ nonuniform subintervals, again with different weighting matrices, that is, $[-h_2,-h_1]=\bigcup_{j=1}^{r}\left[-h_1-jq,\,-h_1-(j-1)q\right]$ with $q=\frac{h_2-h_1}{r}$. The innovation of the method lies in the employment of a tighter integral inequality and the construction of an appropriate Lyapunov functional. Finally, numerical examples are given to illustrate the merits of the proposed methods.
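The two-level partitioning scheme above can be sketched numerically as follows. This is an illustration only; the function name is ours, and exact rational arithmetic is used so the subinterval endpoints match the formulas without rounding.

```python
from fractions import Fraction

def partition_delay_interval(h1, h2, m, r):
    """Split [-h2, 0] as in the text: [-h1, 0] into m equidistant
    subintervals, and [-h2, -h1] into r subintervals of width q = (h2-h1)/r."""
    h1, h2 = Fraction(h1), Fraction(h2)
    # [-h1, 0] = union over i of [-i*h1/m, -(i-1)*h1/m]
    inner = [(-Fraction(i, m) * h1, -Fraction(i - 1, m) * h1)
             for i in range(1, m + 1)]
    q = (h2 - h1) / r
    # [-h2, -h1] = union over j of [-h1 - j*q, -h1 - (j-1)*q]
    outer = [(-h1 - j * q, -h1 - (j - 1) * q) for j in range(1, r + 1)]
    return inner, outer

inner, outer = partition_delay_interval(1, 3, 2, 2)
print(inner)  # subintervals covering [-1, 0]
print(outer)  # subintervals covering [-3, -1]
```

For h1 = 1, h2 = 3, m = r = 2 the subintervals tile $[-3,0]$ exactly, which is the property the Lyapunov–Krasovskii construction relies on.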

Notations. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices, respectively. The notation $X\ge 0$ (respectively, $X>0$), where $X$ is a symmetric matrix, means that $X$ is positive semidefinite (respectively, positive definite). The superscript $T$ denotes the transpose of a matrix. The notation “⁎” is used as an ellipsis for terms that are induced by symmetry. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

Section snippets

Problem formulation

Consider the following static neural network with interval time-varying delay: $\dot{u}(t)=-Au(t)+g(Wu(t-d(t))+J)$, where $u(t)=[u_1(t),u_2(t),\ldots,u_n(t)]^T$ denotes the state vector, $A=\mathrm{diag}(a_1,a_2,\ldots,a_n)$ with $a_i>0$, $i=1,2,\ldots,n$, $g(Wu(\cdot))=[g_1(W_1u(\cdot)),g_2(W_2u(\cdot)),\ldots,g_n(W_nu(\cdot))]^T$ is the activation function, $W=[W_1^T,W_2^T,\ldots,W_n^T]^T$ is the delayed connection weight matrix, and $J=[j_1,j_2,\ldots,j_n]^T$ is a constant input. $d(t)$ is the time-varying delay and satisfies $0\le h_1\le d(t)\le h_2$ and $\dot{d}(t)\le\mu$, where $h_1,h_2$ are known positive scalars, and $\mu$ is a
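Assumption (H1) is truncated in the snippet above; in this literature it typically states that each activation function $g_i$ satisfies a sector condition $0\le\frac{g_i(x)-g_i(y)}{x-y}\le l_i$ for $x\ne y$. The following sketch checks that condition numerically for tanh (a representative activation consistent with $L=\mathrm{diag}\{1,1\}$ in Example 1); the exact form of (H1) in the full paper is an assumption here.

```python
import math
import random

def satisfies_sector(g, l, samples=1000, lo=-5.0, hi=5.0, seed=0):
    """Numerically check the sector condition 0 <= (g(x)-g(y))/(x-y) <= l
    on random sample pairs (a spot check, not a proof)."""
    rng = random.Random(seed)
    for _ in range(samples):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        if x == y:
            continue
        slope = (g(x) - g(y)) / (x - y)
        if not (0.0 <= slope <= l + 1e-12):
            return False
    return True

# tanh is strictly increasing with derivative in (0, 1], so its difference
# quotients lie in (0, 1) and the check with l = 1 passes.
print(satisfies_sector(math.tanh, 1.0))
```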

Main results

In this section, we derive a new delay-dependent criterion for the asymptotic stability of system (5) using the Lyapunov functional method combined with a linear matrix inequality approach.

Theorem 3.1

Assume that Assumption (H1) holds. Then, for given scalars $h_1,h_2$ and $\mu$, the neural network described by (5) is asymptotically stable for any time-varying delay $d(t)$ satisfying (2), (3) if there exist matrices $P$, $Q_i$, $R_i$ $(i=1,2,\ldots,m)$, $W_j$, $S_j$ $(j=1,2,\ldots,r)$, $Z_i$ $(i=1,2,3)$ and positive diagonal matrices $K$, $T$ such that the

Numerical examples

In this section, we analyze some numerical examples to show the effectiveness of the proposed methods.

Example 1

Consider the static neural network (5) with the following parameters: $A=\mathrm{diag}\{7.0214,\,7.4367\}$, $W=\begin{bmatrix}6.4993 & 12.0275\\ 0.6867 & 5.6614\end{bmatrix}$, $L=\mathrm{diag}\{l_1,l_2\}=\mathrm{diag}\{1,1\}$, $B=0$. By solving LMI (13) in Theorem 3.1, the obtained upper bounds of $h_2$ for different values of $h_1$ and $\mu$ are listed in Table 1. It is easy to see that our proposed stability criterion gives a much less conservative result than the one in [24].
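The model of Example 1 can be simulated directly with a forward-Euler scheme. This is a sketch under stated assumptions, not the paper's LMI test: the activation is taken to be tanh (consistent with $L=\mathrm{diag}\{1,1\}$), the sign pattern of $W$ is taken as printed, and a constant delay with constant initial history is used. Because tanh is bounded by 1 and $A$ has positive diagonal entries, each $|u_i(t)|$ stays below $|u_i(0)|+1/a_i$ regardless of the delay.

```python
import math

# Parameters from Example 1 (entries of W as printed; signs are an assumption)
A = [7.0214, 7.4367]                        # diagonal of A
W = [[6.4993, 12.0275], [0.6867, 5.6614]]   # delayed connection weight matrix
J = [0.0, 0.0]                              # constant input
g = math.tanh                               # assumed activation (L = diag{1,1})

def simulate(u0, d=0.1, h=1e-3, T=5.0):
    """Forward-Euler simulation of u'(t) = -A u(t) + g(W u(t-d) + J),
    with constant initial history u(s) = u0 for s <= 0."""
    steps = int(T / h)
    lag = int(d / h)
    hist = [list(u0)]                       # trajectory; also the delay buffer
    for k in range(steps):
        u = hist[-1]
        ud = hist[max(0, k - lag)]          # delayed state u(t - d)
        du = [-A[i] * u[i]
              + g(sum(W[i][j] * ud[j] for j in range(2)) + J[i])
              for i in range(2)]
        hist.append([u[i] + h * du[i] for i in range(2)])
    return hist

traj = simulate([0.5, -0.5])
# Boundedness: |u_i(t)| <= |u_i(0)| + 1/a_i < 0.65 for this initial state.
print(max(abs(x) for u in traj for x in u))
```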

Conclusion

In this paper, the stability problem for a class of static neural networks with interval time-varying delay is investigated using a delay-partitioning method. Improved delay-range-dependent stability conditions in terms of LMIs are derived through the delay-partitioning method and the new integral inequality lemma. The stability criteria derived from this method are less conservative than some existing ones. Finally, numerical examples show the effectiveness of our main results. By utilizing the

Acknowledgments

The work of the fourth author was supported by the Department of Science and Technology (DST), Government of India, New Delhi, through research project grant No. SB/EMEQ-181/2013. Quanxin Zhu's work was jointly supported by the National Natural Science Foundation of China (61374080), the Alexander von Humboldt Foundation of Germany (Fellowship CHN/1163390), and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.




S. Senthilraj graduated from the Department of Mathematics of Kaamadhenu Arts and Science College affiliated to Bharathiar University, Coimbatore in 2008. He received his post graduation in Applied Mathematics from PSG College of Technology affiliated to Anna University, Chennai, Tamilnadu, India, in 2010. He was a recipient of University rank holder award in under graduation. He is currently pursuing Ph.D. degree in Thiruvalluvar University, Vellore, Tamilnadu, India. His research interests include Neural Networks, stochastic, impulsive and neutral systems.

R. Raja received the M.Sc., M.Phil., and Ph.D. degrees in Mathematics from Periyar University, Salem, India, in 2005, 2006 and 2011, respectively. He served as a Guest faculty at Periyar University, India after the completion of his doctoral studies. He was the recipient of Sir. C.V. Raman Budding Innovator Award in the year 2010 from Periyar University, India. He is currently working as an Assistant Professor in Ramanujan Centre for Higher Mathematics, Alagappa University, Karaikudi, India. He obtained a grant from the UGC for distinguished Young Scientist Award of India in the year 2013. His research interests include fractional differential equations, neural networks, genetic regulatory networks, robust nonlinear control, stochastic systems, stability analysis of dynamical systems, synchronization and chaos theory. He has authored and coauthored for more than 20 publications in these research areas. He was a member of the Editorial Board for the special issues in Mathematical Problems in Engineering and also he served as a reviewer for more than 20 journals.

Quanxin Zhu received the Ph.D. degree from Sun Yat-sen (Zhongshan) University, Guangzhou, China, in 2005. From July 2005 to May 2009, he was with the South China Normal University. From May 2009 to August 2012, he was with the Ningbo University. He is currently a Professor of Nanjing Normal University. Zhu is an Associate Editor of Mathematical Problems in Engineering, the Journal of Applied Mathematics, and the Transnational Journal of Mathematical Analysis and Applications, and he is a reviewer of Mathematical Reviews and Zentralblatt-Math. Also, Zhu is the Lead Guest Editor of “Recent Developments on the Stability and Control of Stochastic Systems” in Mathematical Problems in Engineering. Zhu is a reviewer of more than 40 other journals and he is the author or coauthor of more than 85 journal papers. His research interests include random processes, stochastic control, stochastic differential equations, stochastic partial differential equations, stochastic stability, nonlinear systems, Markovian jump systems and stochastic complex networks.

R. Samidurai graduated from the Department of Mathematics of Thiruvalluvar Govt. Arts College affiliated to Periyar University, Salem in 2003. He received his post graduation in Mathematics from Bharathiar University, Coimbatore, India, in 2005. He received Master of Philosophy and Doctor of Philosophy from Department of Mathematics, Periyar University, in 2007 and 2010, respectively. Now he is working as an Assistant Professor in Department of Mathematics, Thiruvalluvar University, Vellore, Tamilnadu, India. He has published a number of papers in these areas. His research interests are in the field of time-delay systems, neural networks and nonlinear systems.

Zhangsong Yao received his Master's degree from Nanjing Normal University, Jiangsu, China, in 2006. He is currently working in the School of Information Engineering, Nanjing Xiaozhuang University, Jiangsu, China. He has published over 10 international journal papers. His current interests include financial mathematics, nonlinear science and application, neural networks and random processes.
