
Information Sciences

Volumes 346–347, 10 June 2016, Pages 412–423

L∞ performance of single and interconnected neural networks with time-varying delay

https://doi.org/10.1016/j.ins.2016.02.004

Abstract

This paper is concerned with the L∞ performance analysis problem for neural networks with time-varying delay. First, a condition is proposed for the L∞ performance of single neural networks with time-varying delay and persistent bounded input, based on the Wirtinger-type inequality together with the reciprocal convex approach. Then, sufficient conditions are established to ensure the L∞ performance of interconnected neural networks with time-varying delay. Numerical examples are provided to show the effectiveness of the presented results.

Introduction

Neural networks are a significant class of nonlinear dynamical systems [14], [16]. In the past few decades, they have received considerable attention because of their extensive and important applications in fields such as computer vision, voice analysis, pattern recognition, and DNA microarray analysis [7], [12]. On the other hand, neural networks are usually implemented using analog, digital, or very large-scale integrated (VLSI) circuits, and time delay inevitably occurs in such electronic implementations because of the finite switching speed of electronic components in information processing and signal transmission. Time delay frequently results in oscillation, poor performance, and instability of neural networks [20], [41]. Recently, several research results have been presented concerning the asymptotic or exponential stability analysis of neural networks with time delay [1], [3], [5], [10], [11], [17], [18], [19], [22], [26], [35], [36], [37], [40].

Engineers and scientists are typically more concerned about the peak amplitude than the total energy of a disturbance input signal. Over the past three decades, the L∞-gain optimal control problem, first formulated and solved by Vidyasagar [29], [30], has been investigated extensively to deal with persistent bounded disturbance input signals by minimizing the peak amplitude of the state variable or tracking error in time-domain specifications. Solutions to L∞-gain control and filtering problems have been proposed in [8], [15], [28]. Recently, L∞ performance analysis of a class of Hopfield neural networks was performed in [27]. However, that result relies on the strong assumption that the considered networks are single neural networks without time delay. This raises the following research questions: Can we obtain a condition for the L∞ performance of neural networks with both time delay and persistent bounded input? Furthermore, what is a condition for the L∞ performance of interconnected neural networks with time delay? As far as the authors are aware, no results on the L∞ performance analysis of single and interconnected neural networks with time delay have been published thus far. This paper therefore presents a first attempt to provide solutions to these problems.
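For reference, the peak-to-peak setting behind these questions can be summarized as follows; the notation is generic and may differ slightly from the paper's own definitions. For a persistent bounded input u(t) ∈ R^m, its L∞ norm is

‖u‖_∞ = sup_{t ≥ 0} max_{1 ≤ i ≤ m} |u_i(t)|,

and, under zero initial conditions, a system with input u and output z is said to have L∞ performance level γ > 0 if

‖z‖_∞ ≤ γ ‖u‖_∞ for every nonzero u with ‖u‖_∞ < ∞.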

In this paper, we propose novel results on the L∞ performance analysis of neural networks with time-varying delay. Based on the Wirtinger-type inequality together with the reciprocal convex approach, a novel sufficient condition guaranteeing the L∞ performance of single neural networks with time-varying delay is established. Using this result, a novel condition is then proposed to ensure the L∞ performance of interconnected neural networks with time-varying delay. The addressed problems are cast as convex optimization problems in terms of linear matrix inequalities (LMIs), which can be solved efficiently with existing convex optimization algorithms [9].
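Conditions of this type are checked numerically as semidefinite programs. Purely as an illustration of that workflow, and not of the paper's actual delay-dependent LMIs, the Python sketch below uses CVXPY to test feasibility of a generic Lyapunov LMI A^T P + P A < 0 with P > 0; the placeholder matrix A, the tolerance eps, and the solver defaults are assumptions made here.

# Schematic LMI feasibility check (not the paper's actual condition):
# find P = P^T > 0 such that A^T P + P A < 0 for a placeholder matrix A.
import numpy as np
import cvxpy as cp

A = np.array([[-2.5, 0.0],
              [0.0, -3.74]])          # arbitrary placeholder system matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()                                     # any SDP-capable solver bundled with CVXPY

print("LMI feasible:", problem.status == cp.OPTIMAL)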

This paper is organized as follows. In Section 2, a novel condition for the L∞ performance of single neural networks is proposed. In Section 3, the L∞ performance of interconnected neural networks is analyzed. In Section 4, numerical examples are provided. Finally, conclusions are presented in Section 5.

Notations: In this paper, ⋆ denotes an entry that can be deduced from the symmetry of the matrix, and diag{ · } represents a block-diagonal matrix with the indicated matrices on its main diagonal. The superscript T represents matrix transposition, and R^m denotes the m-dimensional real Euclidean space. The notation P > 0 (P ≥ 0) indicates that P is a symmetric and positive definite (semi-definite) matrix. For two symmetric matrices A and B, A > B (A ≥ B) means A − B > 0 (A − B ≥ 0).


L∞ performance analysis of single neural networks

Consider the following neural network with time-varying delay:

ẋ(t) = −A x(t) + W_1 ϕ(x(t)) + W_2 ϕ(x(t − τ(t))) + G u(t),
z(t) = C x(t),    (1)

where x(t) = [x_1(t), …, x_n(t)]^T ∈ R^n is the state vector, A = diag{a_1, …, a_n} ∈ R^{n×n} (a_k > 0, k = 1, …, n) is the self-feedback matrix, W_i ∈ R^{n×n} (i = 1, 2) are the connection weight matrices, ϕ(x(t)) = [ϕ_1(x_1(t)), …, ϕ_n(x_n(t))]^T ∈ R^n is the neuron activation function, u(t) ∈ R^m is the input vector, z(t) = [z_1(t), …, z_p(t)]^T ∈ R^p is the output vector, and C ∈ R^{p×n} and G ∈ R^{n×m} are given constant matrices. The …
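The model description is cut off at this point in the snippet. For orientation, the neuron activation functions in this line of work are commonly assumed to be sector-bounded; a standard form of this assumption (stated here for completeness, not quoted from the paper) is

0 ≤ (ϕ_k(a) − ϕ_k(b)) / (a − b) ≤ l_k for all a ≠ b, with ϕ_k(0) = 0, k = 1, …, n,

which is satisfied, for instance, by ϕ_k(s) = tanh(s) with l_k = 1, the activation used in Example 1.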

L∞ performance analysis of interconnected neural networks

In this section, we investigate an interesting problem: the L∞ performance analysis problem for the feedback interconnection of neural networks with time-varying delay. The results can serve as a useful tool for examining the robustness of large-scale neural networks represented by a collection of interconnected neural networks. Consider the feedback interconnection of two neural networks with L∞ performance, where the two neural networks, NN_A and NN_B, are given by

NN_k:  ẋ_k(t) = −A_k x_k(t) + W_{1k} ϕ(x_k(t)) …
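The definition of NN_A and NN_B is cut off in this snippet. As orientation only, and not necessarily the exact coupling used in the paper, a feedback interconnection of two such networks is typically closed by feeding each network's output to the other's input,

u_A(t) = z_B(t) + w_A(t),    u_B(t) = z_A(t) + w_B(t),

where w_A and w_B are the external persistent bounded inputs; in results of this type, boundedness of the closed loop is then usually concluded from a small-gain-type condition on the individual L∞ performance levels, e.g. γ_A γ_B < 1.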

Numerical examples

In this section, two examples are provided to show the usefulness of the results presented in the previous sections. The first example demonstrates the L∞ performance of a single neural network, and the second illustrates the L∞ performance of interconnected neural networks.

Example 1

Consider the delayed neural network (1) with the following parameters:

A = [2.5  0;  0  3.74],   G = [1  0;  0  1],   C = [0.2  0;  0  0.2],
W_1 = [0.72  0.38;  0.95  0.1],   W_2 = [0.51  0.324;  1.13  0.44],
τ(t) = 0.25(1 + cos(2t)),   ϕ_i(s) = tanh(s),  i = 1, 2, …
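As a rough companion to this example, the following Python sketch integrates the delayed network (1) with a forward-Euler scheme and a history buffer for the time-varying delay. The matrix entries are transcribed from the reconstruction above, and the test input, step size, and horizon are arbitrary choices made here; all values should be checked against the published paper before reuse.

# Euler simulation sketch of the delayed neural network of Example 1.
import numpy as np

A  = np.diag([2.5, 3.74])
G  = np.eye(2)
C  = 0.2 * np.eye(2)
W1 = np.array([[0.72, 0.38], [0.95, 0.10]])
W2 = np.array([[0.51, 0.324], [1.13, 0.44]])

phi = np.tanh                                    # activation from the example
tau = lambda t: 0.25 * (1.0 + np.cos(2.0 * t))   # time-varying delay, 0 <= tau(t) <= 0.5

h, T = 1e-3, 20.0                                # step size and horizon (arbitrary)
N = int(T / h)
x = np.zeros((N + 1, 2))                         # zero initial condition and history

def u(t):
    # arbitrary persistent bounded test input, |u_i(t)| <= 0.5
    return np.array([0.5 * np.sin(t), 0.5 * np.cos(t)])

for k in range(N):
    t = k * h
    kd = max(0, k - int(round(tau(t) / h)))      # index of the delayed state x(t - tau(t))
    dx = -A @ x[k] + W1 @ phi(x[k]) + W2 @ phi(x[kd]) + G @ u(t)
    x[k + 1] = x[k] + h * dx

z = x @ C.T                                      # output z(t) = C x(t)
print("peak |z| over the horizon:", np.max(np.abs(z)))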

Conclusion

This paper has examined the L∞ performance analysis problems of neural networks with both time-varying delay and persistent bounded input. The Wirtinger-type inequality and the reciprocal convex approach were utilized to establish a novel sufficient delay-dependent LMI condition for the L∞ performance of single neural networks with time-varying delay. Based on this condition, a novel sufficient LMI condition was also proposed to guarantee the L∞ performance of interconnected neural …

Acknowledgement

This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1006101) and partially by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20154030200610).

References (41)

  • Y. Wei et al., A new design of H∞ filtering for continuous-time Markovian jump systems with time-varying delay and partially accessible mode information, Signal Process. (2013)
  • L. Wu et al., Passivity-based sliding mode control of uncertain singular time-delay systems, Automatica (2009)
  • Q. Zhu et al., Controllability and observability of multi-rate networked control systems with both time delay and packet dropout, Int. J. Innov. Comput. Inf. Control (2015)
  • C.K. Ahn, l2–l∞ elimination of overflow oscillations in 2-D digital filters described by Roesser model with external interference, IEEE Trans. Circuits Syst. II (2013)
  • C.K. Ahn, l2–l∞ suppression of limit cycles in interfered two-dimensional digital filters: A Fornasini–Marchesini model case, IEEE Trans. Circuits Syst. II (2014)
  • C.K. Ahn et al., Two-dimensional dissipative control and filtering for Roesser model, IEEE Trans. Autom. Control (2015)
  • P. Arena et al., Cellular neural networks for real-time DNA microarray analysis, IEEE Eng. Med. Biol. Mag. (2002)
  • F. Blanchini et al., Persistent disturbance rejection via static-state feedback, IEEE Trans. Autom. Control (1995)
  • S. Boyd et al., Linear Matrix Inequalities in System and Control Theory (1994)
  • X.H. Chang et al., On sampled-data fuzzy control design approach for T-S model-based fuzzy systems by using discretization approach, Inf. Sci. (2015)