An iterative algorithm for the least Frobenius norm least squares solution of a class of generalized coupled Sylvester-transpose linear matrix equations

https://doi.org/10.1016/j.amc.2018.01.020

Abstract

An iterative algorithm for a class of generalized coupled Sylvester-transpose matrix equations is presented. We prove that if the system is consistent, a solution can be obtained within finitely many iterative steps in the absence of round-off errors for any initial matrices; if the system is inconsistent, the least squares solution can be obtained within finitely many iterative steps in the absence of round-off errors. Furthermore, we provide a method for choosing the initial matrices so as to obtain the least Frobenius norm least squares solution of the problem. Finally, numerical examples are presented to demonstrate that the algorithm is efficient.

Introduction

Matrix equations appear frequently in many areas of applied mathematics and play important roles in many applications, such as control theory and system theory [19], [20]. For example, in the stability analysis of linear jump systems with Markovian transitions, one meets the typical coupled Lyapunov matrix equations $A_i^T P_i + P_i A_i + Q_i + \sum_{j=1}^{n} \pi_{ij} P_j = 0$, $i = 1, 2, \ldots, n$, where the $Q_i$ are positive definite matrices, the $\pi_{ij}$ are known transition probabilities and the $P_j$ are the unknown matrices [6], [43]. The second order linear system $A_2 \ddot{x} + A_1 \dot{x} + A_0 x + B_0 u = 0$ has wide applications in vibration and structural analysis, robotics control and spacecraft control [39], [57]. Many publications have studied how to solve different types of matrix equations [14], [15]. Traditionally, a linear matrix equation can be converted into an equivalent linear system by using the Kronecker product. Solving that equivalent system, however, involves the inversion of the associated large matrix, which is computationally difficult because excessive computer memory is required. As the sizes of the matrices involved increase, iterative methods have therefore replaced direct methods as the main strategy for solving matrix equations [5], [18], [19].
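To make the memory issue concrete, a single equation such as $AXB = C$ can be vectorized with the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$. The sketch below (Python with numpy, purely illustrative and not taken from the paper) solves a small instance this way; the comment notes why the approach cannot scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n))
C = A @ X_true @ B

# Kronecker identity: vec(A X B) = (B^T kron A) vec(X),
# with vec() taken as column stacking (Fortran order).
K = np.kron(B.T, A)                          # n^2 x n^2 coefficient matrix
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((n, n), order="F")
assert np.allclose(X, X_true)

# The Kronecker matrix has n^4 entries: already for n = 1000 it would
# hold 10^12 doubles (~8 TB), which is why iterative methods that never
# form K are preferred at scale.
```

This direct solve is only feasible for toy sizes; the memory comment above is the whole point of the iterative approach the paper develops.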

Several iterative algorithms based on the conjugate gradient method have been proposed for solving (coupled) linear matrix equations [4], [9], [10], [11], [12], [13], [15], [16], [35], [36], [37], [48], [50], [51], [54]. Bai [1] established the Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester matrix equations. Beik and Salkuyeh [5] derived global Krylov subspace methods for solving general coupled matrix equations. Deng et al. [17] constructed orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations. Zhou et al. [56] obtained the solutions of a family of matrix equations by using Kronecker matrix polynomials. Ding et al. [18] constructed an iterative method for finding the solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle. The generalized conjugate direction algorithm for solving the general coupled matrix equations over symmetric matrices was derived by Hajarian [27]. Matrix forms of the CGS, GPBiCG, QMRCGSTAB, BiCOR, Bi-CGSTAB, CORS, BiCG, Bi-CR and CGLS algorithms were given to solve linear matrix equations [25], [28], [30], [31], [32], [33], [34].

The Kalman–Yakubovich-transpose matrix equation $X - AX^TB = C$, the generalized Yakubovich-transpose matrix equation $X - AX^TB = CY$, the nonhomogeneous Yakubovich-transpose matrix equation $X - AX^TB = CY + R$ and the Sylvester-transpose matrix equation $AX + X^TB = C$ play very important roles in many fields [3], [55]. For example, the general Lyapunov-transpose and the Kalman–Yakubovich-transpose matrix equations appear in Luenberger-type observer design [47], pole/eigenstructure assignment design [40] and robust fault detection [7]. The generalized Yakubovich-transpose matrix equation is encountered in second order or higher order linear systems [21], [46]. The Sylvester-transpose matrix equation is related to eigenstructure assignment [23], observer design [8], control of systems with input constraints [22], and fault detection [24]. In [42], [53], the linear matrix equation $\sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{s} C_j X^T D_j = E$ (1.1), where $A_i$, $B_i$, $C_j$, $D_j$ ($i = 1, \ldots, r$, $j = 1, \ldots, s$) and $E$ are known constant matrices of appropriate dimensions and $X$ is the matrix to be determined, was considered. The special case of Eq. (1.1) given by $\sum_{i=1}^{k} (A_i X B_i + C_i X^T D_i) = E$ (1.2) was considered by Hajarian, who established matrix iterative methods [29] and the QMRCGSTAB algorithm [34] for solving Eq. (1.2). The special case $AXB + CX^TD = E$ of Eq. (1.1) was considered by Wang et al. [49]. The best approximate solution of the matrix equation $AXB + CXD = E$ was studied in [41]. In [52], gradient-based iterative algorithms were established for $AXB + CX^TD = F$, which is also a special case of Eq. (1.1). A still more special case of Eq. (1.1), namely the matrix equation $AX + X^TC = B$, was investigated by Piao et al. [44]: using the Moore–Penrose generalized inverse, they obtained necessary and sufficient conditions for the existence of a solution, together with expressions for the solutions.

Moreover, the generalized coupled Sylvester-transpose matrix equations $\sum_{\eta=1}^{p} (A_{i\eta} X_\eta B_{i\eta} + C_{i\eta} X_\eta^T D_{i\eta}) = F_i$, $i = 1, 2, \ldots, N$ (1.3), were considered by Song et al. [45], who obtained the least Frobenius norm solution group and the optimal approximation solution group of system (1.3). Beik and Salkuyeh [4] also considered the coupled Sylvester-transpose Eq. (1.3) over generalized centro-symmetric matrices. As a special case of Eq. (1.3), Dehghan and Hajarian studied the generalized centro-symmetric and least squares generalized centro-symmetric solutions of the matrix equation $AYB + CY^TD = E$ [15]. Baksalary and Kala [2] studied the matrix equation $AXB + CYD = E$. Hajarian [26] established a new finite algorithm for solving the generalized nonhomogeneous Yakubovich-transpose matrix equation $AXB + CX^TD + EYF = R$.

In [54], Xie et al. considered the following generalized coupled Sylvester-transpose linear matrix equations: $AXB + CY^TD = S_1$, $EX^TF + GYH = S_2$ (1.4), where $A, E \in \mathbb{R}^{p \times n}$, $C, G \in \mathbb{R}^{p \times m}$, $B, F \in \mathbb{R}^{n \times q}$, $D, H \in \mathbb{R}^{m \times q}$ and $S_1, S_2 \in \mathbb{R}^{p \times q}$ are given constant matrices, and $X \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{m \times m}$ are the unknown matrices to be determined. This kind of matrix equation can be used in future work in control and system theory. Xie et al. [54] proved that, when system (1.4) is consistent, the solution can be obtained within finitely many iteration steps in the absence of round-off errors for any given initial reflexive or anti-reflexive matrix. However, when system (1.4) is inconsistent, how to obtain the least squares solution and the least Frobenius norm least squares solution remained open.

In this paper, these problems are tackled in a new way. Inspired by the previous works, we propose a modified conjugate gradient method to solve system (1.4). We consider two cases. When system (1.4) is consistent, we verify that a solution (X*, Y*) can be obtained within finitely many iteration steps in the absence of round-off errors for any initial matrices. When system (1.4) is inconsistent, we prove that the least squares solution of system (1.4) can be obtained within finitely many iteration steps in the absence of round-off errors; moreover, the least Frobenius norm least squares solution can be derived by choosing a special type of initial value.
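At small scale, the least Frobenius norm least squares solution of system (1.4) can be checked against a dense baseline: form the vectorized Kronecker system explicitly and apply the Moore–Penrose pseudoinverse. The Python sketch below is only that baseline, not the paper's matrix-form algorithm; the `commutation` helper, which realizes $\mathrm{vec}(X^T) = P\,\mathrm{vec}(X)$, is our own illustrative construction.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, m, q = 4, 3, 3, 4
A, E = rng.standard_normal((p, n)), rng.standard_normal((p, n))
C, G = rng.standard_normal((p, m)), rng.standard_normal((p, m))
B, F = rng.standard_normal((n, q)), rng.standard_normal((n, q))
D, H = rng.standard_normal((m, q)), rng.standard_normal((m, q))
S1 = rng.standard_normal((p, q))   # random right-hand sides: the system
S2 = rng.standard_normal((p, q))   # is generically inconsistent

def commutation(r, c):
    """Permutation P with vec(X.T) = P @ vec(X) for X in R^{r x c}."""
    P = np.zeros((r * c, r * c))
    for i in range(r):
        for j in range(c):
            P[i * c + j, j * r + i] = 1.0
    return P

Kn, Km = commutation(n, n), commutation(m, m)

# Stack (1.4) into one linear system in (vec(X), vec(Y)):
#   vec(AXB + C Y^T D)   = (B^T kron A) vec(X) + (D^T kron C) Km vec(Y)
#   vec(E X^T F + G Y H) = (F^T kron E) Kn vec(X) + (H^T kron G) vec(Y)
M = np.block([[np.kron(B.T, A),      np.kron(D.T, C) @ Km],
              [np.kron(F.T, E) @ Kn, np.kron(H.T, G)]])
b = np.concatenate([S1.flatten(order="F"), S2.flatten(order="F")])

z = np.linalg.pinv(M) @ b          # minimum-norm least squares solution
X = z[:n * n].reshape((n, n), order="F")
Y = z[n * n:].reshape((m, m), order="F")

# At a least squares solution the normal equations hold: M^T (M z - b) = 0
assert np.allclose(M.T @ (M @ z - b), 0)
```

The point of the paper's algorithm is precisely to reach this solution without ever assembling the $2pq \times (n^2 + m^2)$ matrix `M`.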

For convenience, we use the following notation throughout this paper. Let $\mathbb{R}^{m \times n}$ be the set of all real $m \times n$ matrices; we abbreviate $\mathbb{R}^{n \times 1}$ as $\mathbb{R}^n$. For $A \in \mathbb{R}^{m \times n}$, we write $A^T$, $\|A\|_F$, $\mathrm{tr}(A)$ and $A^{-1}$ for the transpose, Frobenius norm, trace and inverse of $A$, respectively. For matrices $A = (a_{ij})$ and $B$, $A \otimes B$ denotes the Kronecker product, defined as $A \otimes B = (a_{ij} B)$. For a matrix $X = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^{m \times n}$, $\mathrm{vec}(X)$ denotes the vec operator, defined as $\mathrm{vec}(X) = (x_1^T, x_2^T, \ldots, x_n^T)^T \in \mathbb{R}^{mn}$.
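The vec and Kronecker conventions above can be sanity-checked numerically. The snippet below (numpy, illustrative only) confirms the column-stacking definition of vec and the block structure $A \otimes B = (a_{ij} B)$.

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])                 # columns x1 = (1, 3)^T, x2 = (2, 4)^T
vecX = X.flatten(order="F")            # vec(X) stacks the columns
assert (vecX == np.array([1, 3, 2, 4])).all()

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)
K = np.kron(A, B)                      # block (i, j) of A kron B is a_ij * B
assert (K[0:2, 2:4] == 2 * B).all()    # block (1, 2) is a_12 * B = 2B
assert (K[2:4, 0:2] == 3 * B).all()    # block (2, 1) is a_21 * B = 3B
```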

The rest of this paper is organized as follows. In Section 2, we construct the modified conjugate gradient algorithm for solving the system (1.4) and prove that if the system is consistent, a solution (X*, Y*) can be obtained within finite iteration steps in the absence of round-off errors for any initial matrices; if the system is inconsistent, the least squares solution can be obtained within finite iteration steps in the absence of round-off errors. Furthermore, we provide a method for choosing the initial matrices to obtain the least Frobenius norm least squares solution of the problem. In Section 3, we present some numerical experiments. Finally, we give our conclusions in Section 4.


The iterative method for solving Eq. (1.4)

First, we give the definition of the inner product. On the space $\mathbb{R}^{m \times n}$ over the field $\mathbb{R}$, the inner product can be defined as $\langle A, B \rangle = \mathrm{tr}(B^T A)$ (2.1). The norm of a matrix generated by this inner product is denoted by $\|\cdot\|$. Then, for $A \in \mathbb{R}^{m \times n}$, we have $\|A\|^2 = \langle A, A \rangle = \mathrm{tr}(A^T A) = \|A\|_F^2$. In addition, from the definition of the inner product and the properties of the matrix trace, we have the following results: $\langle A, B \rangle = \mathrm{tr}(B^T A) = \mathrm{tr}(A B^T) = \mathrm{tr}(B A^T) = \mathrm{tr}(A^T B) = \langle B, A \rangle$, $\langle A, BXC \rangle = \mathrm{tr}[(BXC)^T A] = \mathrm{tr}(C^T X^T B^T A) = \mathrm{tr}(X^T B^T A C^T) = \langle B^T A C^T, X \rangle$, and $\langle A$, …
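These trace identities are easy to verify numerically. The following Python check (with assumed random test matrices, for illustration only) confirms each equality used above.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

inner = np.trace(B.T @ A)                       # <A, B> = tr(B^T A)
assert np.isclose(inner, np.sum(A * B))         # = Frobenius inner product
assert np.isclose(inner, np.trace(A @ B.T))     # cyclic: tr(B^T A) = tr(A B^T)
assert np.isclose(inner, np.trace(A.T @ B))     # transpose invariance of trace
assert np.isclose(np.trace(A.T @ A), np.linalg.norm(A, "fro") ** 2)

# <A, BXC> = <B^T A C^T, X>
Bm = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
Cm = rng.standard_normal((5, 4))
lhs = np.trace((Bm @ X @ Cm).T @ A)
rhs = np.trace(X.T @ (Bm.T @ A @ Cm.T))
assert np.isclose(lhs, rhs)
```

This last identity is the workhorse of conjugate-gradient-type derivations: it moves the coefficient matrices off the unknown so that residuals can be paired directly with search directions.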

Numerical experiments

In this section, we report some numerical results to support Algorithm 2.1. All of the tests were implemented in MATLAB R2015a. In view of the influence of round-off errors, we regard a matrix $T$ as the zero matrix if $\langle T, T \rangle < 10^{-11}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product defined by (2.1).

Example 3.1

For this example, we use the MATLAB expression 10*(rand(m,n)-0.5) to generate the test matrices. The call 10*(rand(m,n)-0.5) produces an m × n matrix whose entries are pseudo-random values drawn from the …
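For readers working outside MATLAB, the same test matrices can be produced in Python. The snippet below is an assumed numpy translation of 10*(rand(m,n)-0.5) (not the paper's code), with entries uniform on (−5, 5), together with the zero-matrix criterion used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 3
# numpy analogue of MATLAB's 10*(rand(m,n) - 0.5): uniform on (-5, 5)
T = 10 * (rng.random((m, n)) - 0.5)
assert T.shape == (m, n)
assert (np.abs(T) < 5).all()

# Stopping criterion from Section 3: treat T as zero when <T, T> < 1e-11
is_zero = np.trace(T.T @ T) < 1e-11
assert not is_zero          # a random matrix is essentially never "zero"
```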

Conclusions

An iterative algorithm is proposed to obtain the solution of the generalized coupled Sylvester-transpose matrix equations. When the system is consistent, we verify that a solution (X*, Y*) can be obtained within finitely many iteration steps in the absence of round-off errors for any initial matrices. When the system is inconsistent, we prove that the least squares solution can be obtained within finitely many iteration steps in the absence of round-off errors. Furthermore, we show that the least Frobenius …

References (57)

  • M. Hajarian

    Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose matrix equations

    J. Frankl. Inst.

    (2016)
  • M. Hajarian

    Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations

    J. Frankl. Inst.

    (2013)
  • M. Hajarian

    Matrix form of the CGS method for solving general coupled matrix equations

    Appl. Math. Lett.

    (2014)
  • M. Hajarian

    Developing BICOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations

    Appl. Math. Model.

    (2015)
  • M. Hajarian

    The generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations

    Appl. Math. Lett.

    (2013)
  • N. Huang et al.

    The modified conjugate gradient methods for solving a class of the generalized coupled Sylvester-transpose matrix equations

    Comput. Math. Appl.

    (2014)
  • N. Huang et al.

    Modified conjugate gradient method for obtaining the minimum-norm solution of the generalized coupled Sylvester-conjugate matrix equations

    Appl. Math. Model.

    (2016)
  • Z.Y. Li et al.

    Least squares solution with the minimum-norm to general matrix equations via iteration

    Appl. Math. Comput.

    (2010)
  • F. Piao et al.

    The solution to the matrix equation $AX + X^TC = B$

    J. Frankl. Inst.

    (2007)
  • Q.W. Wang et al.

    Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra

    Linear Algebra Appl.

    (2002)
  • M. Wang et al.

    Iterative algorithms for solving the matrix equation $AXB + CX^TD = E$

    Appl. Math. Comput.

    (2007)
  • A.G. Wu et al.

    Finite iterative solutions to coupled Sylvester-conjugate matrix equations

    Appl. Math. Model.

    (2011)
  • L. Xie et al.

    Gradient based and least squares based iterative algorithms for matrix equations $AXB + CX^TD = F$

    Appl. Math. Comput.

    (2010)
  • L. Xie et al.

    Gradient based iterative solutions for general linear matrix equations

    Comput. Math. Appl.

    (2009)
  • Y.J. Xie et al.

    Iterative method to solve the generalized coupled Sylvester-transpose linear matrix equations over reflexive or anti-reflexive matrices

    Comput. Math. Appl.

    (2014)
  • B. Zhou et al.

    Solutions to a family of matrix equations by using the Kronecker matrix polynomials

    Appl. Math. Comput.

    (2009)
  • B. Zhou et al.

    On the generalized Sylvester mapping and matrix equations

    Syst. Control Lett.

    (2008)
  • Z.Z. Bai

    On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations

    J. Comput. Math.

    (2011)

    This research is supported by National Science Foundation of China (41725017, 41590864) and National Basic Research Program of China under grant number 2014CB845906. It is also partially supported by the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (XDB18010202).
