
Neurocomputing

Volume 363, 21 October 2019, Pages 171-181

Design and analysis of new complex zeroing neural network for a set of dynamic complex linear equations

https://doi.org/10.1016/j.neucom.2019.07.044

Abstract

This paper proposes a new complex zeroing neural network (NCZNN) to solve a set of dynamic complex linear equations, extending the design idea of the real-valued zeroing neural network. Different from the previous complex ZNN (CZNN) model, which cannot process nonlinear activation functions and thus only converges in infinite time, a nonlinear sign-bi-power (SBP) activation function is explored to enable the proposed NCZNN model to converge within finite time in the complex domain in two different ways: one is to simultaneously activate the real part and the imaginary part of a complex number, and the other is to activate the modulus of a complex number. In addition, detailed theoretical analyses of the NCZNN model are provided for these two processing ways, and the corresponding upper bounds on the convergence time are analytically calculated. Two numerical experiments are conducted using the NCZNN model and the CZNN model to solve a set of dynamic complex linear equations; comparative results further show that the NCZNN model has better convergence performance than the CZNN model. Finally, the proposed method is applied to the motion tracking of a mobile manipulator, and simulation results verify the feasibility of our method in robotic applications.

Introduction

Solving linear equations is considered a fundamental mathematical issue [1], widely applied in engineering and scientific computation, such as robotic kinematics [2], mathematics [3], optical flow [4] and electromagnetic analysis [5]. However, serial processing algorithms with time complexity O(n³) have proven unable to handle large-scale online applications well [6]. For instance, when applied to solving a set of dynamic linear equations, the related iterative numerical algorithms must complete their computation within a sampling interval [7], [8], [9], [10]. If the sampling rate is too high, these algorithms fail because they cannot finish within a sampling period [11], [12]. Recent studies [13], [14], [15], [16], [17], [18] have shown that the parallel and distributed-storage characteristics of neural networks are very efficient and powerful for solving complicated problems, such as solving dynamic linear equations online. In [19], linear equations are solved by a typical gradient-based neural network (GNN), which uses the norm of the error vector as the computation criterion and converges to zero over time along the direction of negative gradient descent of the error norm. However, the GNN cannot solve time-varying problems [20], [21] because it lacks velocity compensation for the time-varying coefficients, which greatly limits its application range in real-time processing. Significantly, a special type of neural network (called the zeroing neural network, ZNN) was successfully proposed to address time-varying and time-invariant problems online [22], [23], [24], [25], [26], [27]. Therefore, we can conclude that the ZNN has better performance than the GNN for time-varying problems in the real domain.
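The ZNN design idea just described can be made concrete on a scalar dynamic equation. The following sketch is our own illustration (not code from the paper), assuming the standard ZNN design formula Ė(t) = −γE(t) with a linear activation; the coefficient functions d(t), p(t) and the gain γ are arbitrary choices:

```python
import numpy as np

# ZNN design idea on a scalar dynamic linear equation d(t) * z(t) = p(t):
# define the error e(t) = d(t) * z(t) - p(t), impose e_dot = -gamma * e,
# and solve for z_dot:  d * z_dot = -d_dot * z - gamma * e + p_dot.
gamma = 10.0                             # illustrative convergence gain
d = lambda t: 2.0 + np.sin(t)            # illustrative coefficient, never zero
d_dot = lambda t: np.cos(t)
p = lambda t: np.cos(2.0 * t)
p_dot = lambda t: -2.0 * np.sin(2.0 * t)

dt, T = 1e-4, 5.0
z = 0.0                                  # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    e = d(t) * z - p(t)
    z = z + dt * (-d_dot(t) * z - gamma * e + p_dot(t)) / d(t)

# after transients decay, z(t) tracks the theoretical solution p(t) / d(t)
residual = abs(d(T) * z - p(T))
```

Because the derivative (velocity) terms d_dot and p_dot are fed forward, the residual stays near zero even though d(t) and p(t) keep changing; a GNN, lacking this compensation, exhibits a lagging steady-state error on such problems.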

Remarkably, while many real-valued neural networks have been designed to solve sets of real-valued linear equations, neural-network methods for solving complex linear equations have rarely been studied, let alone methods that solve dynamic complex-valued linear equations with complex-valued neural networks. Besides, complex-valued neural networks exhibit better performance than real-valued neural networks in signal processing and image processing [28], [29]. At present, some complex-valued ZNN models have been designed to directly solve time-varying problems in the complex domain [30]. However, these complex-valued ZNN models cannot process nonlinear activation functions. Inspired by this point, in this work we investigate a fully new type of complex ZNN (NCZNN) model to solve time-varying complex linear equations, and consider using nonlinear activation functions to achieve fast, finite-time convergence. As mentioned in [12], a ZNN model using nonlinear activation functions to solve real-valued time-varying linear equations converges to the theoretical solution faster than one using a linear activation function, which is more conducive to online processing. This suggests using a nonlinear activation function rather than a linear one to improve the convergence speed of the neural network. Besides, as shown in [31], [32], neural networks can still be globally convergent when nonlinear activation functions are used.

In this work, the novelty of the NCZNN model lies in using a complex-valued neural network model rather than a simple real-valued one. Besides, the nonlinear SBP activation function, instead of a linear activation function, is used to activate the NCZNN model. When dealing with complex-valued problems, previous studies [33], [34] often convert a complex number into real numbers and then process them separately. Instead, in this paper, the complex-valued problem is processed directly in the complex domain using the two proposed processing ways: one is to simultaneously activate the real part and the imaginary part of a complex number, and the other is to activate the modulus of a complex number. Compared with the previously proposed CZNN model, the SBP nonlinear activation function embedded in the NCZNN model accelerates the convergence speed, ensures global convergence, and enhances real-time processing ability. Furthermore, detailed theoretical analyses of the NCZNN model are provided for these two processing ways, with the convergence upper bounds analytically calculated. The main contributions of this article are summarized below.

  • This article mainly proposes the NCZNN model to solve a set of dynamic complex-valued linear equations, rather than simply real-valued linear equations.

  • Two different processing ways have been used to realize the nonlinearity of the NCZNN model via the SBP activation function, which makes the NCZNN model converge to the theoretical solution within finite time.

  • The computational ability of the NCZNN model is guaranteed by detailed analysis results, and the upper bounds of the finite convergence time are also derived according to different processing ways.

  • Two simulation examples are given to demonstrate that the NCZNN model can converge to the theoretical solution of linear equations in finite time, and is much better than the CZNN model. In addition, the proposed method is successfully applied to the motion tracking of a mobile manipulator.

Problem and model formulation

In mathematics, the dynamic complex linear equations can be described as follows: D(t)z(t) = p(t), or equivalently D(t)z(t) − p(t) = 0, where D(t) ∈ C^{n×n} represents a smoothly time-varying complex matrix, z(t) ∈ C^n denotes the unknown vector to be solved for, and p(t) ∈ C^n is a smoothly time-varying complex vector.
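For reference, the problem statement can be sketched numerically. The coefficients D(t) and p(t) below are illustrative choices of our own, not taken from the paper; at any fixed instant t, the theoretical solution is z*(t) = D(t)⁻¹p(t):

```python
import numpy as np

# Dynamic complex linear equations: D(t) z(t) = p(t), with D(t) an n x n
# smoothly time-varying complex matrix and z(t), p(t) complex n-vectors.
def D(t):  # illustrative coefficient matrix (not from the paper)
    return np.array([[np.exp(1j * t), 1.0],
                     [0.5j, np.exp(-1j * t)]])

def p(t):  # illustrative right-hand side (not from the paper)
    return np.array([np.exp(1j * t), 1.0 + 0.0j])

t = 0.7
z_star = np.linalg.solve(D(t), p(t))             # snapshot solution at time t
residual = np.linalg.norm(D(t) @ z_star - p(t))  # ~ machine epsilon
```

Repeating such a solve at every sampling instant is exactly the O(n³)-per-step serial approach the introduction argues against; the neural-network models instead evolve z(t) continuously so that it tracks the moving solution.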

Before introducing the NCZNN model for solving the above dynamic complex linear equations, the design process of the CZNN model (i.e., the model without a nonlinear activation function) is reviewed below.

  • Step 1:

    The error function E(t) is defined as E(t) = D(t)z(t) − p(t).

NCZNN model

Obviously, from the design formula (3), the nonlinear form of the CZNN model (4) can be obtained by adding a nonlinear activation function Φ(·) as below: D(t)ż(t) = −Ḋ(t)z(t) − γΦ(D(t)z(t) − p(t)) + ṗ(t), where Φ(·) denotes a nonlinear complex-valued activation-function array. Generally speaking, there are two ways to process the complex-valued activation function in the complex domain: one is to activate the real and imaginary parts of the complex input simultaneously, and the other is to activate the modulus of the complex input.
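A minimal numerical sketch of the NCZNN dynamics follows. The SBP function here uses a commonly cited form, φ(x) = sgn(x)(|x|^r + |x|^{1/r}) with 0 < r < 1; this form, the gain γ, and the coefficient functions below are our own assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

def sbp(x, r=0.5):
    """Real-valued sign-bi-power function in a commonly used form,
    phi(x) = sgn(x) * (|x|**r + |x|**(1/r)), 0 < r < 1 (assumed form)."""
    return np.sign(x) * (np.abs(x) ** r + np.abs(x) ** (1.0 / r))

def phi_type1(e, r=0.5):
    """Type I: activate the real and imaginary parts separately."""
    return sbp(e.real, r) + 1j * sbp(e.imag, r)

def phi_type2(e, r=0.5, eps=1e-12):
    """Type II: activate the modulus, preserving the phase."""
    m = np.abs(e)
    return sbp(m, r) * e / (m + eps)

def ncznn_step(z, t, dt, D, D_dot, p, p_dot, phi, gamma=5.0):
    """One explicit-Euler step of the NCZNN dynamics
    D(t) z_dot = -D_dot(t) z - gamma * Phi(D(t) z - p(t)) + p_dot(t)."""
    e = D(t) @ z - p(t)
    z_dot = np.linalg.solve(D(t), -D_dot(t) @ z - gamma * phi(e) + p_dot(t))
    return z + dt * z_dot

# illustrative coefficients of our own choosing (not the paper's examples)
D = lambda t: np.array([[2.0 + 0.5j * np.sin(t), 0.3],
                        [0.0, 2.0 - 0.5j * np.cos(t)]])
D_dot = lambda t: np.array([[0.5j * np.cos(t), 0.0],
                            [0.0, 0.5j * np.sin(t)]])
p = lambda t: np.array([np.exp(1j * t), np.cos(t) + 0j])
p_dot = lambda t: np.array([1j * np.exp(1j * t), -np.sin(t) + 0j])

z, dt = np.zeros(2, dtype=complex), 1e-3
for k in range(int(3.0 / dt)):
    z = ncznn_step(z, k * dt, dt, D, D_dot, p, p_dot, phi_type1)
residual = np.linalg.norm(D(3.0) @ z - p(3.0))  # near zero after convergence
```

Here phi_type1 and phi_type2 correspond to the Type I and Type II processing ways; swapping one for the other changes only how the complex error is activated, not the rest of the dynamics.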

Theoretical results

It is worth pointing out that CZNN model (4) can converge to the expected solution of a set of dynamic complex linear equations without using nonlinear activation functions. However, how to use a complex-valued nonlinear activation function to obtain the expected solution with the NCZNN model (5) has not yet been well studied. In this section, detailed proofs for NCZNN model (5) under the above two processing ways are given to establish its superior properties.

Comparison verification

In this section, two examples illustrate the superiority of NCZNN model (5) over CZNN model (4). In addition, the Type I and Type II processing ways are both applied to activate NCZNN model (5) with the SBP activation function.

Example 1

The time-varying matrix D(t) and the vector p(t) of the first example are given as follows: D(t) = [exp(i5t), cos(5t) + i sin(5t); exp(i5t), 0] and p(t) = [exp(i5t); exp(i10t)], where the semicolon separates rows.
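Since cos(5t) + i sin(5t) = exp(i5t) by Euler's formula, the Example 1 system admits the closed-form solution z₁(t) = exp(i5t), z₂(t) = 1 − exp(i5t). A quick NumPy check of our own, reading the matrix row-wise from the snippet:

```python
import numpy as np

def D(t):
    # Example 1 coefficient matrix, read row-wise from the snippet
    return np.array([[np.exp(5j * t), np.cos(5 * t) + 1j * np.sin(5 * t)],
                     [np.exp(5j * t), 0.0]])

def p(t):
    return np.array([np.exp(5j * t), np.exp(10j * t)])

# Row 2 gives z1(t) = exp(i5t); substituting into row 1 (whose two entries
# are both exp(i5t), by Euler's formula) gives z2(t) = 1 - exp(i5t).
t = 0.3
z_closed = np.array([np.exp(5j * t), 1.0 - np.exp(5j * t)])
residual = np.linalg.norm(D(t) @ z_closed - p(t))  # ~ machine epsilon
```

This closed-form solution is the theoretical trajectory that the neural-state vector of the NCZNN model should track in Fig. 1.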

When the Type I processing way is applied, as shown in Fig. 1(a), the neural-state vector z(t) of

Application to mobile manipulator

In order to verify the feasibility of the proposed method, a mobile manipulator is used as a simulation test platform, with the control variables computed by NCZNN model (5). It is worth noting that the mathematical model, the geometric figure and the related parameters of the mobile manipulator are given in [37], [38], [39], [40], [41], [42], [43], and thus are not presented again. In the simulation, the expected tracking path of the mobile manipulator is an ellipse with the major and minor axes being

Conclusion

This paper presents a new complex zeroing neural network (NCZNN) to solve a set of dynamic complex linear equations. In addition, two different types of processing ways have been adopted to enable the NCZNN model to use the sign-bi-power (SBP) activation function. Theoretical analyses of the NCZNN model have been provided to guarantee the superior finite-time convergence, with upper bounds derived according to two different types of processing ways. Two numerical simulations have been done in

Declaration of Competing Interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Acknowledgments

This work is supported by NSFC under grants 61866013, 61503152, 61473259, and 61563017; and the Natural Science Foundation of Hunan Province of China under grants 2019JJ50478, 18A289, 2016JJ2101, 2018TP1018, 2018RS3065, and 17A173.

References (43)

  • Q. Yi et al., Nonlinearly activated complex-valued gradient neural network for complex matrix inversion, Proceedings of the 2018 IEEE International Conference on Intelligent Control and Information Processing (2018)

  • H. Khalil, Nonlinear Systems (2000)

  • L. Xiao et al., Co-design of finite-time convergence and noise suppression: a unified neural model for time-varying linear equations with robotic applications, IEEE Trans. Syst. Man Cybern. Syst. (2018)

  • S. Nakata, Parallel meshfree computation for parabolic equations on graphics hardware, Int. J. Comput. Math. (2011)

  • K. Chen, Implicit dynamic system for online simultaneous linear equations solving, Electron. Lett. (2013)

  • T. Mifune et al., Folded preconditioner: a new class of preconditioners for Krylov subspace methods to solve redundancy-reduced linear systems of equations, IEEE Trans. Magn. (2009)

  • J.H. Mathews et al., Numerical Methods Using MATLAB (2004)

  • Z. Zhang et al., Convergence and robustness analysis of the exponential-type varying gain recurrent neural network for solving matrix-type linear time-varying equation, IEEE Access (2018)

  • Z. Zhang et al., A varying-gain recurrent neural network and its application to solving online time-varying matrix equation, IEEE Access (2018)

  • L. Xiao et al., A finite-time convergent dynamic system for solving online simultaneous linear equations, Int. J. Comput. Math. (2017)

  • S. Li et al., Nonlinearly activated neural network for solving time-varying complex Sylvester equation, IEEE Trans. Cybern. (2014)

    Lin Xiao received the B.S. degree in Electronic Information Science and Technology from Hengyang Normal University, Hengyang, China, in 2009, and the Ph.D. degree in Communication and Information Systems from Sun Yat-sen University, Guangzhou, China, in 2014. He is currently a Professor with the College of Information Science and Engineering, Hunan Normal University, Changsha, China. He has authored over 90 papers in international conferences and journals, such as the IEEE-TNNLS, the IEEE-TCYB, the IEEE-TII and the IEEE-TSMCA. His main research interests include neural networks, robotics, and intelligent information processing.

    Qian Yi received the B.S. degree in Communication Engineering from Jishou University, Jishou, China, in 2018. She is currently a postgraduate with the College of Information Science and Engineering, Jishou University, Jishou, China. Her main research interests include neural networks, and robotics.

    Jianhua Dai received the B.Sc., M.Eng. and Ph.D. degrees from Wuhan University, Wuhan, China, in 1998, 2000 and 2003, respectively. Prof. Dai is currently the Director of Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing and the Dean of the College of Information Science and Engineering, Hunan Normal University, Changsha, China. He has published over 100 research papers in refereed journals and conferences. His current research interests include artificial intelligence, machine learning, intelligent information processing, evolutionary computation and soft computing.

    Kenli Li received the Ph.D. degree in computer science from the Huazhong University of Science and Technology, Wuhan, China, in 2003. He was a Visiting Scholar with the University of Illinois at Urbana-Champaign, Champaign, IL, USA, from 2004 to 2005. He is currently a Full Professor of Computer Science and Technology with Hunan University, Changsha, China, and also the Deputy Director of the National Supercomputing Center, Changsha. He has authored over 150 papers in international conferences and journals, such as the IEEE-TC, the IEEE-TPDS, and the IEEE-TSP. His current research interests include parallel computing, cloud computing, big data computing, and neural computing.

    Zeshan Hu received the B.S. degree in Computer Science and Technology from Fuzhou University, Fuzhou, China, in 2018. He is currently a postgraduate with the College of Computer Science and Electronic Engineering, Hunan University, Changsha, China. His main research interests include neural networks and robotics.
