Research article

The least-squares solutions of the matrix equation A^*XB+B^*X^*A=D and its optimal approximation

  • Received: 09 September 2021 Accepted: 25 November 2021 Published: 06 December 2021
  • MSC : 15A09, 15A24

  • In this paper, the least-squares solutions to the linear matrix equation A^*XB + B^*X^*A = D are discussed. By using the canonical correlation decomposition (CCD) of a pair of matrices, the general representation of the least-squares solutions to the matrix equation is derived. Moreover, the expression of the solution to the corresponding weighted optimal approximation problem is obtained.

    Citation: Huiting Zhang, Yuying Yuan, Sisi Li, Yongxin Yuan. The least-squares solutions of the matrix equation A^*XB+B^*X^*A=D and its optimal approximation[J]. AIMS Mathematics, 2022, 7(3): 3680-3691. doi: 10.3934/math.2022203




    Throughout this paper, the complex m×n matrix space is denoted by C^{m×n} and the set of all n×n unitary matrices is denoted by UC^{n×n}. The conjugate transpose and the Frobenius norm of a complex matrix A are denoted by A^* and ‖A‖, respectively. A matrix A is Hermitian (skew-Hermitian) if A = A^* (A = -A^*). The identity matrix of size n is represented by In. For matrices A = (αij) ∈ C^{m×n} and B = (βij) ∈ C^{m×n}, A ∗ B denotes the Hadamard product of A and B, that is, A ∗ B = (αij βij) ∈ C^{m×n}.

    It is known that the matrix equation

    A^*XB + B^*X^*A = D (1.1)

    plays an important role in automatic control. In 1991, Yasuda and Skelton [1] studied the problem of assigning controllability and observability Gramians in feedback control by means of (1.1). In 1994, Fujioka and Hara [2] considered (1.1) in the context of the state covariance assignment problem with measurement noise. Owing to these important applications, there has been increasing interest in solving (1.1). In 1980, Baksalary and Kala [3] established the solvability conditions and the representation of the general solution of the matrix equation AXB+CYD=E. In 1987, Chu [4] considered the compatibility of AXB+CYD=E by using the generalized singular value decomposition (GSVD), and provided the least norm solution when a solution exists. After that, Chu [5] provided the solvability conditions of (1.1) by using the GSVD. In addition, iterative methods [6,7,8,9] have also been used to solve such matrix equations. There is no doubt that the study of the least-squares solutions of this kind of matrix equation is also significant and interesting. In 1998, Xu et al. [10] provided the least-squares Hermitian (skew-Hermitian) solutions of AXA^H+CYC^H=F by using the canonical correlation decomposition (CCD). In 2006, Liao et al. [11] considered the least-squares solution with the minimum norm of AXB^H+CYD^H=E by the CCD and the GSVD. Yuan et al. [12,13] considered the least-squares solutions with some constraints of the matrix equation AXB+CXD=E. Besides, other scholars [15,16] have also studied the least-squares problems of AXB+CXD=E. However, the least-squares solutions of (1.1) seem to be rarely considered in the literature. Recently, Yuan [17] proposed the least-squares solutions to the matrix equation A^*XB - B^*X^*A = D by applying the canonical correlation decomposition of [A, B]. Subsequently, Yuan [18,19] proposed the minimum norm solution of (1.1) by taking advantage of the generalized singular value decomposition of the matrix pair [A, B], and provided the least-squares solution with the minimum norm of (1.1) by using the normal equation and singular value decompositions. Motivated by the work above, a natural question is whether the least-squares solutions of (1.1) can be derived in a similar way. The answer is affirmative. In this paper, we consider the least-squares solutions of (1.1) and the associated weighted optimal approximation problem by utilizing the canonical correlation decomposition of a pair of matrices. The two problems can be mathematically formulated as follows.

    Problem I. Given A ∈ C^{n×m}, B ∈ C^{p×m}, D ∈ C^{m×m} with D = D^*. Find X ∈ C^{n×p} such that

    Φ1 = ‖A^*XB + B^*X^*A - D‖ = min. (1.2)

    Problem II. Given F ∈ C^{n×p}, find X̂ ∈ SE such that

    ‖F - X̂‖_W = min_{X ∈ SE} ‖F - X‖_W, (1.3)

    where ‖·‖_W is the weighted norm defined below and SE is the solution set of Problem I.

    By using the canonical correlation decomposition, the explicit expression of the least-squares solutions to Problem I is derived. Also, the expression of the solution to Problem II, the corresponding optimal approximation problem in a weighted Frobenius norm, is deduced. Further, numerical examples are provided to verify the correctness of our results.
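    For small problems, a least-squares solution of (1.2) can also be cross-checked numerically without the CCD by vectorizing the equation. The sketch below is a minimal numpy illustration for the real case (where A^* = A^T); the function names are ours, not from the paper. It stacks the two terms into one linear system via Kronecker products and a commutation matrix, then solves with `lstsq`:

```python
import numpy as np

def commutation(n, p):
    # K with K @ vecF(X) = vecF(X.T) for X of shape (n, p),
    # where vecF stacks columns (Fortran order).
    K = np.zeros((n * p, n * p))
    for i in range(n):
        for j in range(p):
            K[j + i * p, i + j * n] = 1.0
    return K

def lsq_cross_check(A, B, D):
    # Least-squares X for A.T @ X @ B + B.T @ X.T @ A = D (real data).
    # vecF(A.T X B) = kron(B.T, A.T) vecF(X); the second term picks up
    # the commutation matrix because it involves X.T.
    n, _ = A.shape
    p, _ = B.shape
    M = np.kron(B.T, A.T) + np.kron(A.T, B.T) @ commutation(n, p)
    x, *_ = np.linalg.lstsq(M, D.flatten(order="F"), rcond=None)
    return x.reshape((n, p), order="F")

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((3, 5))
X0 = rng.standard_normal((4, 3))
D = A.T @ X0 @ B + B.T @ X0.T @ A        # consistent right-hand side
X = lsq_cross_check(A, B, D)
print(np.linalg.norm(A.T @ X @ B + B.T @ X.T @ A - D))  # ~0
```

    This brute-force approach costs O((np)^2 m^2) memory and is only a sanity check; the complex case would additionally require splitting into real and imaginary parts because the second term is conjugate-linear in X.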

    In order to solve Problem I, the following lemmas are needed.

    Lemma 2.1. [10] Let J1 ∈ C^{m×n}, J2 ∈ C^{n×m}, CA = diag(α1, α2, …, αm), SA = diag(β1, β2, …, βm) with αi > 0, βi > 0 and αi^2 + βi^2 = 1 (i = 1, 2, …, m). Then the following minimization problem with respect to X ∈ C^{m×n}:

    ϕ1 = ‖CA X - J1‖^2 + ‖X^* SA - J2‖^2 = min

    holds if and only if X can be expressed as X = CA J1 + SA J2^*.
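    Lemma 2.1 can be sanity-checked numerically: at X = CA J1 + SA J2^*, the cross terms vanish and, since αi^2 + βi^2 = 1, one even has the exact identity ϕ1(X + E) = ϕ1(X) + ‖E‖^2 for every perturbation E. A minimal numpy sketch with randomly generated data (all names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
alpha = rng.uniform(0.1, 0.9, m)
beta = np.sqrt(1.0 - alpha**2)          # so that alpha_i^2 + beta_i^2 = 1
CA, SA = np.diag(alpha), np.diag(beta)
J1 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
J2 = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

def phi1(X):
    return (np.linalg.norm(CA @ X - J1) ** 2
            + np.linalg.norm(X.conj().T @ SA - J2) ** 2)

X_opt = CA @ J1 + SA @ J2.conj().T      # minimizer claimed by Lemma 2.1

for _ in range(100):
    E = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    # exact identity at the minimizer: phi1(X_opt + E) = phi1(X_opt) + ||E||^2
    assert np.isclose(phi1(X_opt + E), phi1(X_opt) + np.linalg.norm(E) ** 2)
```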

    Lemma 2.2. Let J1, J2 ∈ C^{m×m} with J1 = J1^*, CA = diag(α1, α2, …, αm), SA = diag(β1, β2, …, βm) with αi > 0, βi > 0 and αi^2 + βi^2 = 1 (i = 1, 2, …, m). Then the following minimization problem with respect to X ∈ C^{m×m}:

    ϕ2 = ‖CA X + X^* CA - J1‖^2 + ‖SA X - J2^*‖^2 + ‖X^* SA - J2‖^2 = min

    holds if and only if X satisfies

    X + CA X^* CA = CA J1 + SA J2^*. (2.1)

    Proof. For X = (xij) ∈ C^{m×m} and Jl = (J^{(l)}_{ij}) ∈ C^{m×m} (l = 1, 2), we have

    ϕ2 = Σ_{i,j} ( |αi xij + x̄ji αj - J^{(1)}_{ij}|^2 + 2|βi xij - J̄^{(2)}_{ji}|^2 ).

    Clearly, ϕ2 is a continuously differentiable function of the 2m^2 real variables Re(xij), Im(xij) (i, j = 1, 2, …, m). The terms of ϕ2 involving xij are

    Ω = |αi xij + x̄ji αj - J^{(1)}_{ij}|^2 + |αj xji + x̄ij αi - J^{(1)}_{ji}|^2 + 2|βi xij - J̄^{(2)}_{ji}|^2.

    By the first-order necessary condition for a minimum, together with J^{(1)}_{ij} = J̄^{(1)}_{ji}, we obtain

    xij + αi x̄ji αj = αi J^{(1)}_{ij} + βi J̄^{(2)}_{ji},  (i, j = 1, 2, …, m). (2.2)

    Then the equation (2.1) follows from (2.2).

    Lemma 2.3. Let J ∈ C^{m×m}, CA = diag(α1, α2, …, αm) with 0 < αi < 1 (i = 1, 2, …, m). Then the following equation with respect to X ∈ C^{m×m}:

    X + CA X^* CA = J (2.3)

    holds if and only if X can be expressed as

    X = K ∗ (J - CA J^* CA), (2.4)

    where K = (kij) ∈ C^{m×m}, kij = 1/(1 - (αi αj)^2) (i, j = 1, 2, …, m).

    Proof. For X = (xij) ∈ C^{m×m} and J = (Jij) ∈ C^{m×m}, (2.3) can be equivalently written as

    xij + αi x̄ji αj = Jij,  xji + αj x̄ij αi = Jji, (i, j = 1, 2, …, m). (2.5)

    By (2.5), we can get

    xij = (Jij - αi J̄ji αj) / (1 - αi^2 αj^2), (i, j = 1, 2, …, m). (2.6)

    Then the equation (2.4) follows from (2.6).
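    The closed form (2.4) is easy to verify numerically. A minimal numpy sketch (∗ is the Hadamard product, implemented by elementwise multiplication; all data randomly generated):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
alpha = rng.uniform(0.1, 0.9, m)        # 0 < alpha_i < 1
CA = np.diag(alpha)
J = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

K = 1.0 / (1.0 - np.outer(alpha, alpha) ** 2)   # k_ij = 1/(1 - (a_i a_j)^2)
X = K * (J - CA @ J.conj().T @ CA)              # Hadamard product, Eq. (2.4)

# X solves Eq. (2.3): X + CA X^* CA = J
assert np.allclose(X + CA @ X.conj().T @ CA, J)
```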

    Assume that the canonical correlation decomposition (CCD) [20] of the matrix pair [A, B] is

    A^* = Q[ΣA, 0]EA^{-1},     B^* = Q[ΣB, 0]EB^{-1}, (2.7)

    where EA ∈ C^{n×n} and EB ∈ C^{p×p} are nonsingular matrices, and

    ΣA = [ I   0   0
           0   CA  0
           0   0   0
           0   0   0
           0   SA  0
           0   0   I ]

    with row blocks of sizes q, s, h-q-s, m-h-s-t, s, t and column blocks of sizes q, s, t;

    ΣB = [ I   0   0
           0   I   0
           0   0   I
           0   0   0
           0   0   0
           0   0   0 ]

    with the same row partition and column blocks of sizes q, s, h-q-s;

    CA = diag(α1, α2, …, αs),  1 > α1 ≥ α2 ≥ ⋯ ≥ αs > 0,
    SA = diag(β1, β2, …, βs),  0 < β1 ≤ β2 ≤ ⋯ ≤ βs < 1

    with

    αi^2 + βi^2 = 1, (i = 1, 2, …, s),

    q = rank(A) + rank(B) - rank([A^*, B^*]), g = rank(A) = q + s + t, h = rank(B), and Q = [Q1, Q2, Q3, Q4, Q5, Q6] ∈ UC^{m×m} with the partition of Q compatible with the row blocks of ΣA and ΣB.

    According to (2.7), (1.2) can be equivalently written as

    Φ1 = ‖[ΣA, 0]EA^{-1}X(EB^{-1})^*[ΣB, 0]^* + [ΣB, 0]EB^{-1}X^*(EA^{-1})^*[ΣA, 0]^* - Q^*DQ‖. (2.8)

    Partition the matrices EA^{-1}X(EB^{-1})^* and Q^*DQ into the following forms:

    EA^{-1}X(EB^{-1})^* = [ X11  X12  X13  X14
                            X21  X22  X23  X24
                            X31  X32  X33  X34
                            X41  X42  X43  X44 ], (2.9)

    with row blocks of sizes q, s, t, n-g and column blocks of sizes q, s, h-q-s, p-h, and

    Q^*DQ = [ D11    D12    D13    D14    D15    D16
              D12^*  D22    D23    D24    D25    D26
              D13^*  D23^*  D33    D34    D35    D36
              D14^*  D24^*  D34^*  D44    D45    D46
              D15^*  D25^*  D35^*  D45^*  D55    D56
              D16^*  D26^*  D36^*  D46^*  D56^*  D66 ], (2.10)

    with row and column blocks of sizes q, s, u, v, s, t, where u = h-q-s, v = m-h-s-t. Inserting (2.9) and (2.10) into (2.8), we have the following.

    Apparently, Φ1=min if and only if

    ‖X11 + X11^* - D11‖ = min,  ‖X13 - D13‖^2 + ‖X13^* - D13^*‖^2 = min,
    ‖X31 - D16^*‖^2 + ‖X31^* - D16‖^2 = min,  ‖X32 - D26^*‖^2 + ‖X32^* - D26‖^2 = min,
    ‖X33 - D36^*‖^2 + ‖X33^* - D36‖^2 = min,
    ‖CA X21 + X12^* - D12^*‖^2 + ‖SA X21 - D15^*‖^2 = min, (2.11)
    ‖CA X23 - D23‖^2 + ‖SA X23 - D35^*‖^2 = min, (2.12)
    ‖CA X22 + X22^* CA - D22‖^2 + ‖SA X22 - D25^*‖^2 + ‖X22^* SA - D25‖^2 = min. (2.13)

    By (2.11), we have

    X11 = (1/2)D11 + N,  X13 = D13,  X31 = D16^*,  X32 = D26^*,  X33 = D36^*,
    X12 = D12 - D15 SA^{-1}CA,  X21 = SA^{-1}D15^*, (2.14)

    where N ∈ C^{q×q} is an arbitrary skew-Hermitian matrix. According to (2.12) and Lemma 2.1, we have

    X23 = CA D23 + SA D35^*. (2.15)

    By (2.13) and Lemmas 2.2 and 2.3, we have

    X22 = K ∗ (J - CA J^* CA), (2.16)

    where K = (kij) ∈ C^{s×s}, kij = 1/(1 - (αi αj)^2) (i, j = 1, 2, …, s), and J = CA D22 + SA D25^*. Inserting (2.14)–(2.16) into (2.9), we can get the following result.

    Theorem 2.1. Suppose that A ∈ C^{n×m}, B ∈ C^{p×m}, D ∈ C^{m×m} with D = D^*. Let the canonical correlation decomposition of the matrix pair [A, B] be given by (2.7), and let the partitions of the matrices EA^{-1}X(EB^{-1})^* and Q^*DQ be given by (2.9) and (2.10), respectively. Then the general solution of Problem I can be expressed as

    X = EA [ (1/2)D11 + N   D12 - D15 SA^{-1}CA   D13                  X14
             SA^{-1}D15^*   X22                   CA D23 + SA D35^*    X24
             D16^*          D26^*                 D36^*                X34
             X41            X42                   X43                  X44 ] EB^*,

    where X22 is given by (2.16), Xi4, X4j (i = 1, 2, 3, 4; j = 1, 2, 3) are arbitrary matrices and N ∈ C^{q×q} is an arbitrary skew-Hermitian matrix.
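    As a numerical sanity check on the X22 block: with randomly generated stand-ins for the blocks of Q^*DQ (D22 Hermitian, D25 arbitrary — hypothetical data, not taken from the paper), the matrix X22 = K ∗ (J - CA J^* CA) with J = CA D22 + SA D25^* is never beaten by a perturbed candidate in the functional (2.13):

```python
import numpy as np

rng = np.random.default_rng(3)
s = 4
alpha = rng.uniform(0.1, 0.9, s)
beta = np.sqrt(1.0 - alpha**2)
CA, SA = np.diag(alpha), np.diag(beta)
G = rng.standard_normal((s, s)) + 1j * rng.standard_normal((s, s))
D22 = G + G.conj().T                    # Hermitian block
D25 = rng.standard_normal((s, s)) + 1j * rng.standard_normal((s, s))

def phi2(X):                            # the functional minimized in (2.13)
    return (np.linalg.norm(CA @ X + X.conj().T @ CA - D22) ** 2
            + np.linalg.norm(SA @ X - D25.conj().T) ** 2
            + np.linalg.norm(X.conj().T @ SA - D25) ** 2)

J = CA @ D22 + SA @ D25.conj().T
K = 1.0 / (1.0 - np.outer(alpha, alpha) ** 2)
X22 = K * (J - CA @ J.conj().T @ CA)    # Eq. (2.16)

for _ in range(100):
    E = rng.standard_normal((s, s)) + 1j * rng.standard_normal((s, s))
    assert phi2(X22 + 1e-3 * E) >= phi2(X22) - 1e-12
```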

    Clearly, (1.1) with D = D^* is solvable if and only if Φ1 = min = 0, that is,

    X11 + X11^* - D11 = 0,  X13 - D13 = 0,  X31 - D16^* = 0,  X32 - D26^* = 0,  X33 - D36^* = 0,
    CA X21 + X12^* - D12^* = 0,  SA X21 - D15^* = 0,
    CA X23 - D23 = 0,  SA X23 - D35^* = 0,
    CA X22 + X22^* CA - D22 = 0,  SA X22 - D25^* = 0, (2.17)

    together with Di4 = 0, D4j = 0 (i = 1, 2, 3, 4; j = 5, 6), D33 = 0, D55 = 0, D56 = 0, D66 = 0. By (2.17), we obtain

    X11 = (1/2)D11 + N, (2.18)
    X13 = D13,  X31 = D16^*,  X32 = D26^*,  X33 = D36^*,
    X12 = D12 - D15 SA^{-1}CA,  X21 = SA^{-1}D15^*,
    X23 = CA^{-1}D23 = SA^{-1}D35^*,  with CA^{-1}D23 - SA^{-1}D35^* = 0,
    X22 = SA^{-1}D25^*,  with D22 = D25 SA^{-1}CA + CA SA^{-1}D25^*, (2.19)

    where N ∈ C^{q×q} is an arbitrary skew-Hermitian matrix. Inserting (2.18) and (2.19) into (2.9), we have the following result.

    Theorem 2.2. Suppose that A ∈ C^{n×m}, B ∈ C^{p×m}, D ∈ C^{m×m} with D = D^*. Let the canonical correlation decomposition (CCD) of the matrix pair [A, B] be given by (2.7), and let the partitions of the matrices EA^{-1}X(EB^{-1})^* and Q^*DQ be given by (2.9) and (2.10), respectively. Then (1.1) has a solution if and only if

    D = D^*,  DQ4 = 0,  D33 = 0,  D55 = 0,  D56 = 0,  D66 = 0,
    CA^{-1}D23 - SA^{-1}D35^* = 0,  D22 = D25 SA^{-1}CA + CA SA^{-1}D25^*. (2.20)

    In this case, the general solution of (1.1) can be expressed as

    X = EA [ (1/2)D11 + N   D12 - D15 SA^{-1}CA   D13          X14
             SA^{-1}D15^*   SA^{-1}D25^*          CA^{-1}D23   X24
             D16^*          D26^*                 D36^*        X34
             X41            X42                   X43          X44 ] EB^*, (2.21)

    where Xi4, X4j (i = 1, 2, 3, 4; j = 1, 2, 3) are arbitrary matrices and N ∈ C^{q×q} is an arbitrary skew-Hermitian matrix.

    Remark 2.1. Note that (1.1) has a unique solution X if and only if

    n - g = 0,   p - h = 0,   q = 0,

    which is equivalent to

    rank([A^*, B^*]) = n + p.

    In this case, the unique solution of (1.1) can be expressed as

    X = EA [ SA^{-1}D25^*   CA^{-1}D23
             D26^*          D36^* ] EB^*.

    Based on Theorem 2.2, we can formulate the following Algorithm 2.1 to solve (1.1).

    Algorithm 2.1.

    1) Input matrices A, B and D with D = D^*.

    2) Compute the canonical correlation decomposition of the matrix pair [A,B] by (2.7).

    3) Compute D11,D12,D13,D15,D16,D22,D23,D25,D26,D35 and D36 by (2.10), respectively.

    4) If the conditions (2.20) are satisfied, go to 5); otherwise, Problem I has no solution, and stop.

    5) Randomly choose a skew-Hermitian matrix N ∈ C^{q×q} and compute X11 by (2.18).

    6) Compute X12,X13,X21,X22,X23,X31,X32,X33 by (2.19), respectively.

    7) Randomly choose Xi4,X4j,(i=1,2,3,4;j=1,2,3) and compute X by (2.21).
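    Once a candidate X has been assembled in step 7, its quality can be checked via the residual of (1.1). A small helper (our naming; a sketch assuming numpy arrays, checked here on randomly generated consistent data rather than the paper's test matrices):

```python
import numpy as np

def residual(A, B, D, X):
    # Frobenius residual ||A* X B + B* X* A - D|| of Eq. (1.1)
    R = A.conj().T @ X @ B + B.conj().T @ X.conj().T @ A - D
    return np.linalg.norm(R)

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 10)) + 1j * rng.standard_normal((7, 10))
B = rng.standard_normal((6, 10)) + 1j * rng.standard_normal((6, 10))
X0 = rng.standard_normal((7, 6)) + 1j * rng.standard_normal((7, 6))
D = A.conj().T @ X0 @ B + B.conj().T @ X0.conj().T @ A   # consistent, D = D*
print(residual(A, B, D, X0))   # ~0 up to rounding
```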

    Example 2.1. Let m = 10, n = 7, p = 6, and let A ∈ C^{7×10}, B ∈ C^{6×10} and D ∈ C^{10×10} with D = D^* be given test matrices (their numerical entries are omitted here).

    It is easy to verify that the solvability conditions (2.20) are satisfied: the entries of Q4^*D, D55, D56 and D66 are of order 10^{-13} or smaller, |D33| = 1.3831×10^{-14}, CA^{-1}D23 and SA^{-1}D35^* agree to the digits computed (entries of magnitude 35.9983 and 22.9230), and D22 = D25 SA^{-1}CA + CA SA^{-1}D25^* holds to the same accuracy.

    According to Algorithm 2.1 above, choosing N = 0 and Xi4 = 0, X4j = 0 (i = 1, 2, 3, 4; j = 1, 2, 3), we obtain a feasible solution X ∈ C^{7×6} of (1.1) (entries omitted) with the corresponding residual estimated by

    ‖A^*XB + B^*X^*A - D‖ = 8.8662×10^{-13}.

    Suppose that the CCD of the matrix pair [A, B] is of the form given by (2.7). For any X ∈ C^{n×p}, we define a weighted norm as follows [21,22,23]:

    ‖X‖_W := ‖EA^{-1}X(EB^{-1})^*‖. (3.1)
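    A direct numpy transcription of (3.1) (the function name is ours; EA and EB are the nonsingular factors from the CCD (2.7)):

```python
import numpy as np

def weighted_norm(X, EA, EB):
    # ||X||_W = || EA^{-1} X (EB^{-1})^* ||_F, with EA, EB nonsingular
    Y = np.linalg.solve(EA, X)                 # EA^{-1} X
    Z = Y @ np.linalg.inv(EB).conj().T         # ... times (EB^{-1})^*
    return np.linalg.norm(Z)

# with EA = I_n and EB = I_p the weighted norm reduces to the Frobenius norm
rng = np.random.default_rng(5)
X = rng.standard_normal((7, 6))
print(np.isclose(weighted_norm(X, np.eye(7), np.eye(6)), np.linalg.norm(X)))  # True
```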

    Write

    EA^{-1}F(EB^{-1})^* = [ F11  F12  F13  F14
                            F21  F22  F23  F24
                            F31  F32  F33  F34
                            F41  F42  F43  F44 ], (3.2)

    with row blocks of sizes q, s, t, n-g and column blocks of sizes q, s, h-q-s, p-h.

    Therefore, for any X ∈ SE, (1.3) can be written as

    ‖F - X‖_W = ‖EA^{-1}F(EB^{-1})^* - [ (1/2)D11 + N   D12 - D15 SA^{-1}CA   D13          X14
                                         SA^{-1}D15^*   SA^{-1}D25^*          CA^{-1}D23   X24
                                         D16^*          D26^*                 D36^*        X34
                                         X41            X42                   X43          X44 ] ‖,

    where EA^{-1}F(EB^{-1})^* = (Fij) is partitioned as in (3.2).

    Obviously, ‖F - X‖_W = min if and only if

    ‖M - N‖ = min, (3.3)
    ‖Fi4 - Xi4‖ = min, (i = 1, 2, 3, 4), (3.4)
    ‖F4j - X4j‖ = min, (j = 1, 2, 3), (3.5)

    where M = F11 - (1/2)D11. Note that N ∈ C^{q×q} is a skew-Hermitian matrix, which implies that the relation (3.3) is equivalent to

    ‖N - M‖^2 = ‖N - (1/2)(M - M^*)‖^2 + ‖(1/2)(M + M^*)‖^2,

    therefore, we have

    N = (1/2)(M - M^*),  X11 = (1/2)(D11 + M - M^*), (3.6)

    where M = F11 - (1/2)D11. Apparently, by (3.4) and (3.5), we have

    Xi4 = Fi4, (i = 1, 2, 3, 4),  X4j = F4j, (j = 1, 2, 3). (3.7)
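    The step from (3.3) to (3.6) is the standard fact that (1/2)(M - M^*) is the nearest skew-Hermitian matrix to M in the Frobenius norm, which can be checked by sampling (a minimal sketch on random data):

```python
import numpy as np

rng = np.random.default_rng(6)
q = 5
M = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
N_star = 0.5 * (M - M.conj().T)        # skew-Hermitian part of M, as in (3.6)

for _ in range(200):
    G = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
    N = G - G.conj().T                 # an arbitrary skew-Hermitian matrix
    assert np.linalg.norm(N_star - M) <= np.linalg.norm(N - M) + 1e-12
```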

    Summing up the discussions above, we have the following result.

    Theorem 3.1. Given F ∈ C^{n×p}, partition EA^{-1}F(EB^{-1})^* as in (3.2) and let M = F11 - (1/2)D11. Then the unique solution of Problem II can be expressed as

    X̂ = EA [ (1/2)(D11 + M - M^*)   D12 - D15 SA^{-1}CA   D13          F14
              SA^{-1}D15^*           SA^{-1}D25^*          CA^{-1}D23   F24
              D16^*                  D26^*                 D36^*        F34
              F41                    F42                   F43          F44 ] EB^*. (3.8)

    Based on Theorem 3.1, we can formulate the following Algorithm 3.1 to solve Problem II.

    Algorithm 3.1.

    1) Input matrices A, B, F and D with D = D^*.

    2) Compute the canonical correlation decomposition of the matrix pair [A,B] by (2.7).

    3) Compute D11,D12,D13,D15,D16,D22,D23,D25,D26,D35 and D36 by (2.10), respectively.

    4) If the conditions (2.20) are satisfied, go to 5); otherwise, Problem II has no solution, and stop.

    5) Compute F11,Fi4,F4j (i=1,2,3,4;j=1,2,3) by (3.2).

    6) Set M = F11 - (1/2)D11 and compute X11 by (3.6).

    7) Compute Xi4 and X4j (i = 1, 2, 3, 4; j = 1, 2, 3) by (3.7) and compute X̂ by (3.8).

    Example 3.1. Let m = 10, n = 7, p = 6, let the matrices A, B, D be the same as those of Example 2.1, and let F ∈ C^{7×6} be a given test matrix (entries omitted). As verified in Example 2.1, the solvability conditions are satisfied. According to Algorithm 3.1 above, we obtain the unique solution X̂ of Problem II (entries omitted) with the corresponding residual estimated by

    ‖A^*X̂B + B^*X̂^*A - D‖ = 9.0698×10^{-13}.

    In this paper, we have obtained the expression of the least-squares solutions to Problem I and the unique solution X̂ of Problem II by using the CCD of the matrix pair [A, B]. Two numerical examples verify the correctness of our results.

    All authors declare that there is no conflict of interest in this paper.



    [1] K. Yasuda, R. E. Skelton, Assigning controllability and observability Gramians in feedback control, J. Guid. Control Dynam., 14 (1991), 878–885. doi: 10.2514/3.20727
    [2] H. Fujioka, S. Hara, State covariance assignment problem with measurement noise: A unified approach based on a symmetric matrix equation, Linear Algebra Appl., 203-204 (1994), 579–605. doi: 10.1016/0024-3795(94)90215-1
    [3] J. K. Baksalary, R. Kala, The matrix equation AXB+CYD=E, Linear Algebra Appl., 30 (1980), 141–147. doi: 10.1016/0024-3795(80)90189-5
    [4] K. W. E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations, Linear Algebra Appl., 88-89 (1987), 83–98. doi: 10.1016/0024-3795(87)90104-2
    [5] K. W. E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl., 119 (1989), 35–50. doi: 10.1016/0024-3795(89)90067-0
    [6] M. Dehghan, M. Hajarian, Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A1X1B1+A2X2B2=C, Math. Comput. Model., 49 (2009), 1937–1959. doi: 10.1016/j.mcm.2008.12.014
    [7] H. Zhou, An iterative method for symmetric solutions of the matrix equation AXB+CXD=F (in Chinese), Math. Numer. Sin., 32 (2010), 413–422.
    [8] H. Zhou, An iterative algorithm for solutions of matrix equation AXB+CXD=F over linear subspace (in Chinese), Acta Math. Appl. Sin., 39 (2016), 610–619.
    [9] Z. Peng, Y. Peng, An efficient iterative method for solving the matrix equation AXB+CYD=E, Numer. Linear Algebra Appl., 13 (2006), 473–485. doi: 10.1002/nla.470
    [10] G. Xu, M. Wei, D. Zheng, On solutions of matrix equation AXB+CYD=F, Linear Algebra Appl., 279 (1998), 93–109. doi: 10.1016/S0024-3795(97)10099-4
    [11] A. P. Liao, Z. Z. Bai, Y. Lei, Best approximate solution of matrix equation AXB+CYD=E, SIAM J. Matrix Anal. Appl., 27 (2006), 675–688. doi: 10.1137/040615791
    [12] S. Yuan, A. Liao, Least squares Hermitian solution of the complex matrix equation AXB+CXD=E with the least norm, J. Franklin Inst., 351 (2014), 4978–4997. doi: 10.1016/j.jfranklin.2014.08.003
    [13] S. Yuan, Q. Wang, Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E, Electron. J. Linear Algebra, 23 (2012), 257–274.
    [14] F. Zhang, M. Wei, Y. Li, J. Zhao, Special least squares solutions of the quaternion matrix equation AXB+CXD=E, Comput. Math. Appl., 72 (2016), 1426–1435. doi: 10.1016/j.amc.2015.08.046
    [15] F. Zhang, M. Wei, Y. Li, J. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Franklin Inst., 355 (2018), 1296–1310. doi: 10.1016/j.jfranklin.2017.12.023
    [16] S. F. Yuan, Least squares pure imaginary solution and real solution of the quaternion matrix equation AXB+CXD=E with the least norm, J. Appl. Math., 4 (2014), 1–9. doi: 10.1155/2014/857081
    [17] Y. Yuan, The least-square solutions of a matrix equation (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 4 (2001), 324–329.
    [18] Y. Yuan, The minimum norm solutions of two classes of matrix equations (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 2 (2002), 127–134.
    [19] Y. Yuan, H. Dai, The least squares solution with the minimum norm of matrix equation A^*XB+B^*X^*A=D (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 27 (2005), 232–238.
    [20] G. H. Golub, H. Zha, Perturbation analysis of the canonical correlations of matrix pairs, Linear Algebra Appl., 210 (1994), 3–28. doi: 10.1016/0024-3795(94)90463-4
    [21] G. Yao, A. Liao, Least-squares solution of AXB=D over Hermitian matrices X and weighted optimal approximation, Math. Theory Appl., 23 (2003), 77–81.
    [22] M. Wang, M. Wei, On weighted least-squares skew-Hermite solution of matrix equation AXB=E (in Chinese), J. East China Normal Univ. (Nat. Sci.), 1 (2004), 22–28.
    [23] M. Wang, M. Wei, On weighted least-squares Hermite solution of matrix equation AXB=E (in Chinese), Math. Numer. Sin., 26 (2004), 129–136.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
