Neurocomputing

Volume 418, 22 December 2020, Pages 221-231

Improved recurrent neural networks for solving Moore-Penrose inverse of real-time full-rank matrix

https://doi.org/10.1016/j.neucom.2020.08.026

Abstract

Recently, motivated by the Zhang neural network (ZNN) models, Lv et al. presented two novel neural network (NNN) models for solving the Moore-Penrose inverse of a time-invariant full-rank matrix. The NNN models were established by introducing two new matrix factors into the ZNN models, which gives them higher convergence rates than those of the ZNN models. In this paper we extend the NNN models to more general cases by introducing a “regularization” parameter and a power parameter into these two matrix factors. The newly proposed models are named improved recurrent neural network (IMRNN) models, since their convergence performance can be much better than that of the NNN models under appropriate choices of the introduced parameters. This convergence property is theoretically analyzed in detail. Some numerical experiments are also performed to validate the theoretical results, including numerical comparisons with the existing gradient neural network (GNN), ZNN and NNN models. In particular, the proposed IMRNN models are successfully applied to the inverse kinematic control of a three-link redundant robot manipulator, where their superiority over the GNN, ZNN and NNN models is also demonstrated.

Introduction

It is known that the Moore-Penrose inverse (i.e., pseudoinverse) plays a key role in science and engineering, e.g., in system modeling and design across a variety of applications involving robotics [1], signal processing [2], image restoration [3] and pattern recognition [4]. Therefore, how to efficiently compute the Moore-Penrose inverse of a matrix has become a critical problem in real-time applications. Although there exist many numerical methods for computing the Moore-Penrose inverse, such as singular value decomposition (SVD) [5], Greville’s recursive method [6] and the Newton iteration method [7], their serial computational schemes are inefficient when applied to large-scale and real-time problems [8], [9]. In fact, the computational costs of these methods increase significantly with the matrix size, since a complexity of $O(n^3)$ is usually required, which makes the solving process time-consuming. However, real-time matrix Moore-Penrose inversion problems do arise in practice; for example, the inverse kinematic problem for online control of a redundant robot manipulator requires the Moore-Penrose inverse to be computed online [10], [11] (see also Section 6 of this paper). Obviously, the existing serial algorithms are not suitable for such online computation. For this reason, parallel computing methods become the natural choice for reducing computational costs and increasing computational efficiency, especially for the online solution of the Moore-Penrose inverse.
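As a point of reference for these serial schemes, the following minimal Python/NumPy sketch computes the pseudoinverse through the SVD route mentioned above; the helper name and tolerance are illustrative choices, and NumPy's built-in np.linalg.pinv follows the same idea.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose inverse via the SVD: A = U diag(s) V^T  =>  A^+ = V diag(1/s) U^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)    # the O(n^3)-type step for an n-by-n matrix
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)   # invert only non-negligible singular values
    return Vt.T @ (s_inv[:, None] * U.T)

A = np.random.rand(4, 6)                                # a full row-rank example (with probability 1)
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))  # agrees with NumPy's built-in pseudoinverse
```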

Due to its parallel nature and convenience of hardware implementation, the recurrent neural network (RNN), which originated from the Hopfield neural network, has been deemed a powerful tool for solving online matrix inversion problems [12], [13], [7], [14], [15], [16]. Over the past decades, many recurrent neural network models for computing the Moore-Penrose inverse of a full-rank matrix have been proposed and investigated [17], [18], [11], [19]. As a special recurrent neural network, the gradient neural network (GNN) is intrinsically designed for solving static problems, and it has become an effective tool for solving the Moore-Penrose inverse of a constant matrix [18], [17], [20]. Nevertheless, many methods, including GNN models, designed for static problems may not be effective for solving time-varying ones [10], [9]. To overcome this shortcoming, the Zhang neural network (ZNN), as another special case of the RNN, has been proposed for solving both time-invariant and time-varying problems [21], [22], [23]. Meanwhile, ZNN models have been successfully applied to matrix Moore-Penrose inversion [10], [24], [25], [26]. Recently, based on the GNN and ZNN models, Lv et al. [8] presented two more efficient RNN models, named novel neural network (NNN) models. Through detailed analyses and comparisons, they proved theoretically and numerically that the NNN models possess better convergence performance than the GNN and ZNN models for real-time matrix Moore-Penrose inverse solving [8]. As a continuation of their work, in this paper we further extend the NNN models to more general cases, named improved recurrent neural network (IMRNN) models, which can outperform the existing GNN, ZNN and NNN models under certain conditions. The main contributions and novelties of this paper are highlighted as follows.

  • (1) Two IMRNN models can be utilized to efficiently solve the Moore-Penrose inverse of a real-time full-rank matrix, and they globally converge to the exact Moore-Penrose inverse from an arbitrary initial state with some special monotonically-increasing odd activation functions.

  • (2) The convergence rate of the proposed IMRNN models can be much faster than that of the GNN, ZNN and NNN models through an appropriate choice of the parameters λ and k involved in the models.

  • (3) The influence of the involved parameters on the convergence rate of the IMRNN models is theoretically analyzed in detail, and some numerical experiments are performed to further substantiate our theoretical results.

  • (4) The application of our IMRNN model to the inverse kinematic control problem of a redundant robot manipulator also indicates its superiority over the known GNN, ZNN and NNN models.

The rest of this paper is organized as follows. In Section 2, we recall some basic notions and useful results on the Moore-Penrose inverse. In Section 3, the existing GNN, ZNN and NNN models for computing the left or right Moore-Penrose inverse of a full-rank matrix are briefly described. Two improved recurrent neural network (i.e., IMRNN-L and IMRNN-R) models are proposed in Section 4, including theoretical analyses of their superior convergence performance. In Section 5, some numerical experiments are performed to validate the superiority of the IMRNN models. Finally, as a practical application and test of effectiveness, our IMRNN models are applied to the kinematic control of a redundant robot manipulator in Section 6.

Section snippets

Moore-Penrose inverse: notions, formulae and necessary properties

In this section, we briefly state the definition of the Moore-Penrose inverse of a matrix and some formulae for full-rank matrices. Some necessary properties related to the Moore-Penrose inverse are also mentioned.

Definition 2.1

[27], [11] Let $A \in \mathbb{R}^{m\times n}$. Then there exists a unique matrix $X \in \mathbb{R}^{n\times m}$ satisfying the following four matrix equations
$$AXA = A,\qquad XAX = X,\qquad (AX)^{T} = AX,\qquad (XA)^{T} = XA,$$
and it is called the Moore-Penrose inverse of $A$, denoted by $A^{\dagger}$, where the superscript $T$ denotes the transpose of a matrix or vector.
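As a quick numerical illustration of Definition 2.1, the following Python check verifies the four Penrose equations for a random full column-rank matrix and, for that full-rank case, the closed form $(A^{T}A)^{-1}A^{T}$ of the Moore-Penrose inverse; the example matrix is of course arbitrary.

```python
import numpy as np

A = np.random.rand(5, 3)                     # full column-rank matrix (rank 3 with probability 1)
X = np.linalg.pinv(A)                        # Moore-Penrose inverse A^+

# The four Penrose equations of Definition 2.1.
print(np.allclose(A @ X @ A, A))             # A X A = A
print(np.allclose(X @ A @ X, X))             # X A X = X
print(np.allclose((A @ X).T, A @ X))         # (A X)^T = A X
print(np.allclose((X @ A).T, X @ A))         # (X A)^T = X A

# For full column-rank A, the Moore-Penrose inverse has the closed form (A^T A)^{-1} A^T.
print(np.allclose(X, np.linalg.inv(A.T @ A) @ A.T))
```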

In particular, if a given

Some existing neural network models

Before describing our improved recurrent neural networks, in this section we first recall some existing neural network models for solving the Moore-Penrose inverse of a full-rank matrix.
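Only a snippet of this section is available here; for orientation, the standard design recipes behind the GNN and ZNN families for the left Moore-Penrose inverse of a full column-rank matrix $A$ can be sketched as follows (a minimal sketch of the general principles, not necessarily the paper's exact equations).

```latex
% GNN: gradient descent on a scalar, norm-based energy function
E(X) = \tfrac{1}{2}\,\| A^{T}A\,X - A^{T} \|_{F}^{2},
\qquad
\dot{X}(t) = -\gamma\,\frac{\partial E}{\partial X}
           = -\gamma\, A^{T}A \,\bigl( A^{T}A\,X(t) - A^{T} \bigr);

% ZNN: zero a matrix-valued error function by imposing \dot{E}(t) = -\gamma\,\mathcal{F}(E(t))
E(t) = A^{T}A\,X(t) - A^{T},
\qquad
A^{T}A\,\dot{X}(t) = -\gamma\,\mathcal{F}\bigl( A^{T}A\,X(t) - A^{T} \bigr).
```

According to the abstract, the NNN and IMRNN models keep this ZNN structure but insert an additional matrix factor in front of the activation term, which is what accelerates convergence.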

Improved recurrent neural network (IMRNN) models

Based on the NNN models, in this section we propose two improved recurrent neural networks, i.e., the IMRNN-L and IMRNN-R models, for solving the Moore-Penrose inverses of full-rank matrices.

The IMRNN-L model for the left Moore-Penrose inverse can be formulated as
$$A^{T}A\,\dot{X}(t) = -\gamma\bigl((A^{T}A)^{2} + \lambda I\bigr)^{k}\,\mathcal{F}\bigl(A^{T}A\,X(t) - A^{T}\bigr),$$
while the IMRNN-R model for the right Moore-Penrose inverse can be written as
$$\dot{X}(t)\,AA^{T} = -\gamma\,\mathcal{F}\bigl(X(t)\,AA^{T} - A^{T}\bigr)\bigl((AA^{T})^{2} + \lambda I\bigr)^{k},$$
where the design parameters $\gamma,\lambda,k$ satisfy $\gamma>0$, $\lambda\geqslant 1$, $k\geqslant 1$ is an integer, and $\mathcal{F}(\cdot):\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{m\times n}$
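To make the dynamic concrete, the following Python sketch integrates the IMRNN-L model with a forward-Euler scheme for a constant full column-rank matrix. The linear activation $\mathcal{F}(E)=E$, the values of γ, λ, k and the step size are illustrative assumptions only, not the paper's settings.

```python
import numpy as np

def imrnn_l(A, gamma=10.0, lam=1.0, k=2, dt=1e-5, T=2.0):
    """Forward-Euler simulation of the IMRNN-L dynamic with linear activation F(E) = E."""
    m, n = A.shape
    AtA = A.T @ A                                          # invertible for a full column-rank A
    M = np.linalg.matrix_power(AtA @ AtA + lam * np.eye(n), k)
    W = np.linalg.solve(AtA, -gamma * M)                   # constant matrix: Xdot = W (A^T A X - A^T)
    X = np.zeros((n, m))                                   # arbitrary initial state
    for _ in range(int(round(T / dt))):
        X += dt * (W @ (AtA @ X - A.T))                    # Euler step of A^T A Xdot = -gamma M F(E)
    return X

A = np.random.rand(4, 3)                                   # constant full column-rank matrix
print(np.linalg.norm(imrnn_l(A) - np.linalg.pinv(A)))      # residual should be small
```

With the linear activation, the error matrix $E(t) = A^{T}A\,X(t) - A^{T}$ obeys $\dot{E} = -\gamma\bigl((A^{T}A)^{2}+\lambda I\bigr)^{k}E$, so every mode decays at a rate of at least $\gamma\lambda^{k}$; this is the intuition behind the claim that larger γ, λ and k accelerate convergence.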

Illustrative examples

In this section, we perform some numerical experiments on our proposed IMRNN models (16) and (17). However, the dynamic equations of the above neural network models are all described in matrix form, which cannot be directly simulated numerically. Thus, we first transform the matrix-form differential equations into vector-form differential equations via the Kronecker product and vectorization techniques [30], [31]. For example, the IMRNN-L model (16) can be transformed to the vector-form
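The transformation rests on the Kronecker identity $\mathrm{vec}(PXQ) = (Q^{T}\otimes P)\,\mathrm{vec}(X)$; the small NumPy check below illustrates the special case used for the coefficient matrix of the IMRNN-L model (the matrix sizes are arbitrary).

```python
import numpy as np

# The identity vec(P X) = (I ⊗ P) vec(X) (column-major vec) turns the matrix-form
# dynamic A^T A Xdot = ... into a standard vector-form ODE in x = vec(X).
m, n = 4, 3
A = np.random.rand(m, n)
P = A.T @ A                                  # left coefficient of the IMRNN-L model (n x n)
X = np.random.rand(n, m)                     # state matrix (n x m)

vec = lambda B: B.flatten(order="F")         # column-major vectorization

lhs = vec(P @ X)                             # vec of the matrix product
rhs = np.kron(np.eye(m), P) @ vec(X)         # (I_m ⊗ P) vec(X)
print(np.allclose(lhs, rhs))                 # True: the two forms coincide
```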

Application to redundant-manipulator kinematic control

In this section, we apply the proposed IMRNN models to the minimum velocity norm (MVN) scheme for redundant-manipulator kinematic control with a three-link planar robot manipulator. The kinematic structure of such a robot manipulator is shown in Fig. 7.

Consider a three-link redundant robot manipulator for which the relationship between the end-effector position vector $r = [r_x; r_y] \in \mathbb{R}^{2}$ and the joint angle vector $\theta = [\theta_1, \theta_2, \theta_3]^{T} \in \mathbb{R}^{3}$ is embodied through the following forward kinematic equation
$$r = \varphi(\theta),$$

where the
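The snippet above is cut off, but the structure is standard for a three-link planar arm. Under the assumption of unit link lengths (an illustrative choice, not necessarily the paper's values), the forward kinematics $\varphi(\theta)$, its Jacobian $J(\theta)=\partial\varphi/\partial\theta$, and the pseudoinverse-based velocity-level resolution $\dot{\theta}=J^{\dagger}(\theta)\dot{r}$, which is where the real-time right Moore-Penrose inverse enters, can be sketched as follows.

```python
import numpy as np

# Forward kinematics of a three-link planar arm with assumed unit link lengths.
L1 = L2 = L3 = 1.0

def phi(theta):
    """End-effector position r = [rx, ry] for joint angles theta = [t1, t2, t3]."""
    t1, t12, t123 = theta[0], theta[0] + theta[1], theta[0] + theta[1] + theta[2]
    rx = L1 * np.cos(t1) + L2 * np.cos(t12) + L3 * np.cos(t123)
    ry = L1 * np.sin(t1) + L2 * np.sin(t12) + L3 * np.sin(t123)
    return np.array([rx, ry])

def jacobian(theta):
    """J(theta) = d phi / d theta, a 2 x 3 full row-rank matrix away from singularities."""
    t1, t12, t123 = theta[0], theta[0] + theta[1], theta[0] + theta[1] + theta[2]
    s1, s12, s123 = np.sin(t1), np.sin(t12), np.sin(t123)
    c1, c12, c123 = np.cos(t1), np.cos(t12), np.cos(t123)
    return np.array([
        [-L1 * s1 - L2 * s12 - L3 * s123, -L2 * s12 - L3 * s123, -L3 * s123],
        [ L1 * c1 + L2 * c12 + L3 * c123,  L2 * c12 + L3 * c123,  L3 * c123],
    ])

# Velocity-level resolution: the minimum-norm joint velocity for a desired
# end-effector velocity rdot is theta_dot = J^+ rdot, i.e. it uses the right
# Moore-Penrose inverse of the full row-rank Jacobian.
theta = np.array([0.3, 0.5, 0.2])
rdot = np.array([0.1, -0.05])
theta_dot = np.linalg.pinv(jacobian(theta)) @ rdot
print(theta_dot)
```

In the kinematic-control experiment, the IMRNN-R model would supply $J^{\dagger}(\theta(t))$ online instead of calling a serial routine such as np.linalg.pinv at every control step.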

Conclusions

In this paper we extend the known NNN models to more general ones by introducing two parameters λ and k; increasing the values of γ, λ and k can greatly improve the convergence performance of the NNN models when computing the Moore-Penrose inverse of a full-rank matrix. The newly proposed models are named improved recurrent neural network (IMRNN) models, i.e., the IMRNN-L and IMRNN-R models for solving the left and right Moore-Penrose inverses, respectively. We analyze

CRediT authorship contribution statement

Wenqi Wu: Conceptualization, Methodology, Software, Writing - original draft. Bing Zheng: Supervision, Project administration, Resources, Validation, Formal analysis.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors thank the anonymous referees for their constructive suggestions and comments, which greatly improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant No. 11571004).

Wenqi Wu received her B.S. degree from the School of Mathematics and Statistics, Heze University, Heze, China, in 2018. She is now pursuing her M.S. degree at the School of Mathematics and Statistics, Lanzhou University, Lanzhou, China. Her research interests include neural networks, generalized inverses of matrices, and numerical linear algebra.

References (37)

  • A.J.V.D. Veen et al.

    A subspace approach to blind space-time signal processing for wireless communications

    IEEE Trans. Signal Process.

    (1997)
  • S. Chountasis, V. N. Katsikis, D. Pappas, Digital image reconstruction in the spectral domain utilizing the...
  • K.M. Olson, G.A. Ybarra, Performance comparison of neural network and statistical pattern recognition approaches to...
  • R.E. Hartwig

    Singular value decomposition and the Moore-Penrose inverse of bordered matrices

    SIAM J. Appl. Math.

    (1976)
  • J. Zhou et al.

    Variants of the Greville formula with applications to exact recursive least squares

    SIAM J. Matrix Anal. Appl.

    (2002)
  • X. Lv et al.

    Improved recurrent neural networks for online solution of Moore-Penrose inverse applied to redundant manipulator kinematic control

    Asian J. Control

    (2018)
  • Y. Zhang et al.

    New discrete-solution model for solving future different-level linear inequality and equality with robot manipulator control

    IEEE Trans. Ind. Inf.

    (2019)
  • Y. Zhang et al.

    Zhang neural network solving for time-varying full-rank matrix Moore-Penrose inverse

    Computing

    (2011)
Bing Zheng was born in 1963. He received the M.S. degree in fundamental mathematics from the School of Mathematical Sciences, Anhui University, China, in 1989, and the Ph.D. degree in computational mathematics from the College of Sciences, Shanghai University, China, in 2003. He is currently a full professor at the School of Mathematics and Statistics, Lanzhou University, China. His research interests include numerical linear algebra and its applications, multilinear algebra, optimization theory, total least squares problems, and matrix perturbation analysis. He has published more than 80 papers in scientific journals.
