In this paper, the least-squares solutions to the linear matrix equation A∗XB+B∗X∗A=D are discussed. By using the canonical correlation decomposition (CCD) of a pair of matrices, the general representation of the least-squares solutions to the matrix equation is derived. Moreover, the expression of the solution to the corresponding weighted optimal approximation problem is obtained.
Citation: Huiting Zhang, Yuying Yuan, Sisi Li, Yongxin Yuan. The least-squares solutions of the matrix equation A∗XB+B∗X∗A=D and its optimal approximation[J]. AIMS Mathematics, 2022, 7(3): 3680-3691. doi: 10.3934/math.2022203
Throughout this paper, the set of complex $m\times n$ matrices is denoted by $C^{m\times n}$ and the set of all $n\times n$ unitary matrices by $UC^{n\times n}$. The conjugate transpose and the Frobenius norm of a complex matrix $A$ are denoted by $A^* \triangleq \bar{A}^\top$ and $\|A\|$, respectively. A matrix $A$ is Hermitian (skew-Hermitian) if $A=A^*$ ($A=-A^*$). The identity matrix of size $n$ is denoted by $I_n$. For matrices $A=(\alpha_{ij})\in C^{m\times n}$ and $B=(\beta_{ij})\in C^{m\times n}$, $A\ast B$ denotes the Hadamard (entrywise) product of $A$ and $B$, that is, $A\ast B=(\alpha_{ij}\beta_{ij})\in C^{m\times n}$.
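Since the symbol $\ast$ serves double duty in this notation (as a superscript it is the conjugate transpose, as a binary operation it is the Hadamard product), the distinction can be illustrated numerically. The matrices below are arbitrary illustrative choices, not data from the paper:

```python
import numpy as np

# Two small complex matrices chosen only for illustration.
A = np.array([[1 + 2j, 3j], [4, 5 - 1j]])
B = np.array([[2, 1 - 1j], [0, 3 + 3j]])

conj_transpose = A.conj().T   # the superscript A*: conjugate of A, transposed
hadamard = A * B              # the binary A * B: entrywise (Hadamard) product

print(conj_transpose)
print(hadamard)
```

Note that NumPy's `*` on arrays is already the Hadamard product, while the paper's superscript $A^*$ corresponds to `A.conj().T`.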
It is known that the matrix equation
$$ A^*XB + B^*X^*A = D \qquad (1.1) $$
plays an important role in automatic control. In 1991, Yasuda and Skelton [1] studied assigning controllability and observability Gramians in feedback control via (1.1). In 1994, Fujioka and Hara [2] considered (1.1) in the context of the state covariance assignment problem with measurement noise. Owing to these applications, there has been increasing interest in solving (1.1). In 1980, Baksalary and Kala [3] established solvability conditions and a representation of the general solution of the matrix equation $AXB+CYD=E$. In 1987, Chu [4] studied the consistency of $AXB+CYD=E$ using the generalized singular value decomposition (GSVD) and provided the least norm solution when a solution exists. Later, Chu [5] gave solvability conditions for (1.1) by means of the GSVD. Iterative methods [6,7,8,9] have also been used to solve such matrix equations. The least-squares solutions of this kind of matrix equation are likewise significant and interesting. In 1998, Xu et al. [10] provided the least-squares Hermitian (skew-Hermitian) solutions of $AXA^H+CYC^H=F$ using the canonical correlation decomposition (CCD). In 2006, Liao et al. [11] considered the minimum norm least-squares solution of $AXB^H+CYD^H=E$ via the CCD and GSVD. Yuan et al. [12,13] considered constrained least-squares solutions of the matrix equation $AXB+CXD=E$, and other authors [15,16] have studied least-squares problems for $AXB+CXD=E$ as well. However, the least-squares solutions of (1.1) have rarely been considered in the literature. Recently, Yuan [17] derived the least-squares solutions of the matrix equation $A^\top XB-B^\top X^\top A=D$ by applying the canonical correlation decomposition of $[A^\top, B^\top]$.
Subsequently, Yuan [18,19] obtained the minimum norm solution of (1.1) by taking advantage of the generalized singular value decomposition of the matrix pair $[A^*, B^*]$, and gave the minimum norm least-squares solution of (1.1) using the normal equation and singular value decompositions. Motivated by this work, a natural question arises: can the least-squares solutions of (1.1) be derived in a similar way? The answer is affirmative. In this paper, we consider the least-squares solutions of (1.1) and the associated weighted optimal approximation problem by utilizing the canonical correlation decomposition of a pair of matrices. These problems can be formulated as follows.
Problem I. Given $A\in C^{n\times m}$, $B\in C^{p\times m}$, $D\in C^{m\times m}$ with $D=D^*$, find $X\in C^{n\times p}$ such that
$$ \Phi_1 = \|A^*XB + B^*X^*A - D\| = \min. \qquad (1.2) $$
Problem II. Given $F\in C^{n\times p}$, find $\hat{X}\in S_E$ such that
$$ \|F-\hat{X}\|_W = \min_{X\in S_E}\|F-X\|_W, \qquad (1.3) $$
where $\|\cdot\|_W$ is the weighted norm defined in (3.1) below and $S_E$ is the solution set of Problem I.
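The objective (1.2) is straightforward to evaluate numerically. The following sketch, with illustrative dimensions and random data (not those of the later examples), defines $\Phi_1$ and confirms that it vanishes when $D$ is constructed from a known $X_0$:

```python
import numpy as np

# Illustrative dimensions and random complex data; m, n, p here are
# arbitrary choices, not the values used in the paper's examples.
rng = np.random.default_rng(0)
m, n, p = 6, 4, 3
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))

def phi1(X, A, B, D):
    """Frobenius-norm residual of A* X B + B* X* A - D, i.e., (1.2)."""
    R = A.conj().T @ X @ B + B.conj().T @ X.conj().T @ A
    return np.linalg.norm(R - D)

# Sanity check: build D from some X0; then Phi_1(X0) = 0 and D = D*.
X0 = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
D = A.conj().T @ X0 @ B + B.conj().T @ X0.conj().T @ A
print(phi1(X0, A, B, D))               # ~0 up to rounding
print(np.linalg.norm(D - D.conj().T))  # D is Hermitian by construction
```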
By using the canonical correlation decomposition, the explicit expression of the least-squares solutions to Problem I is derived. Also, the expression of the corresponding optimal approximation solution under a weighted Frobenius norm sense to Problem II is deduced. Further, numerical examples are provided to verify the correctness of our results.
In order to solve Problem I, the following lemmas are needed.
Lemma 2.1. [10] Let $J_1\in C^{m\times n}$, $J_2\in C^{n\times m}$, $C_A=\mathrm{diag}(\alpha_1,\alpha_2,\cdots,\alpha_m)$, $S_A=\mathrm{diag}(\beta_1,\beta_2,\cdots,\beta_m)$ with $\alpha_i>0$, $\beta_i>0$ and $\alpha_i^2+\beta_i^2=1$ $(i=1,2,\cdots,m)$. Then the minimization problem
$$ \phi_1 = \|C_AX-J_1\|^2 + \|S_AX-J_2^*\|^2 = \min $$
over $X\in C^{m\times n}$ is attained if and only if $X$ can be expressed as $X = C_AJ_1 + S_AJ_2^*$.
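Lemma 2.1 can be spot-checked numerically. In the sketch below, the diagonal entries are generated so that $\alpha_i^2+\beta_i^2=1$, and the claimed minimizer is compared against randomly perturbed candidates; all data are illustrative:

```python
import numpy as np

# Random data satisfying the lemma's hypotheses: alpha_i = cos(theta_i),
# beta_i = sin(theta_i) with theta_i in (0, pi/2) gives alpha^2 + beta^2 = 1.
rng = np.random.default_rng(1)
m, n = 4, 3
theta = rng.uniform(0.1, np.pi / 2 - 0.1, m)
CA, SA = np.diag(np.cos(theta)), np.diag(np.sin(theta))
J1 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
J2 = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

def obj(X):
    """phi_1 of Lemma 2.1."""
    return np.linalg.norm(CA @ X - J1) ** 2 + np.linalg.norm(SA @ X - J2.conj().T) ** 2

# The claimed minimizer; since phi_1 is convex, no perturbation can beat it.
X_opt = CA @ J1 + SA @ J2.conj().T
for _ in range(5):
    X_pert = X_opt + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
    assert obj(X_opt) <= obj(X_pert) + 1e-12
print("Lemma 2.1 minimizer confirmed on random data")
```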
Lemma 2.2. Let $J_1, J_2\in C^{m\times m}$ with $J_1=J_1^*$, and let $C_A=\mathrm{diag}(\alpha_1,\alpha_2,\cdots,\alpha_m)$, $S_A=\mathrm{diag}(\beta_1,\beta_2,\cdots,\beta_m)$ with $\alpha_i>0$, $\beta_i>0$ and $\alpha_i^2+\beta_i^2=1$ $(i=1,2,\cdots,m)$. Then the minimization problem
$$ \phi_2 = \|C_AX+X^*C_A-J_1\|^2 + \|S_AX-J_2^*\|^2 + \|X^*S_A-J_2\|^2 = \min $$
over $X\in C^{m\times m}$ is attained if and only if $X$ satisfies
$$ X + C_AX^*C_A = C_AJ_1 + S_AJ_2^*. \qquad (2.1) $$
Proof. For $X=(x_{ij})\in C^{m\times m}$ and $J_l=(J^{(l)}_{ij})\in C^{m\times m}$ $(l=1,2)$, we have
$$ \phi_2 = \sum_{i,j}\left(|\alpha_ix_{ij}+\bar{x}_{ji}\alpha_j-J^{(1)}_{ij}|^2 + 2|\beta_ix_{ij}-\bar{J}^{(2)}_{ji}|^2\right). $$
Clearly, $\phi_2$ is a continuously differentiable function of the $2m^2$ real variables $\mathrm{Re}(x_{ij})$, $\mathrm{Im}(x_{ij})$ $(i,j=1,2,\cdots,m)$. The part of $\phi_2$ that depends on $x_{ij}$ is
$$ \Omega = |\alpha_ix_{ij}+\bar{x}_{ji}\alpha_j-J^{(1)}_{ij}|^2 + |\alpha_jx_{ji}+\bar{x}_{ij}\alpha_i-J^{(1)}_{ji}|^2 + 2|\beta_ix_{ij}-\bar{J}^{(2)}_{ji}|^2. $$
By the first-order necessary condition for a minimum, together with $J^{(1)}_{ji}=\bar{J}^{(1)}_{ij}$ (since $J_1=J_1^*$), we obtain
$$ x_{ij} + \alpha_i\bar{x}_{ji}\alpha_j = \alpha_iJ^{(1)}_{ij} + \beta_i\bar{J}^{(2)}_{ji}, \ (i,j=1,2,\cdots,m). \qquad (2.2) $$
Then Eq (2.1) follows from (2.2).
Lemma 2.3. Let $J\in C^{m\times m}$ and $C_A=\mathrm{diag}(\alpha_1,\alpha_2,\cdots,\alpha_m)$ with $0<\alpha_i<1$ $(i=1,2,\cdots,m)$. Then the equation
$$ X + C_AX^*C_A = J \qquad (2.3) $$
holds for $X\in C^{m\times m}$ if and only if $X$ can be expressed as
$$ X = K \ast (J - C_AJ^*C_A), \qquad (2.4) $$
where $\ast$ denotes the Hadamard product and $K=(k_{ij})\in C^{m\times m}$ with $k_{ij}=\frac{1}{1-(\alpha_i\alpha_j)^2}$ $(i,j=1,2,\cdots,m)$.
Proof. For $X=(x_{ij})\in C^{m\times m}$ and $J=(J_{ij})\in C^{m\times m}$, (2.3) can be equivalently written as
$$ x_{ij} + \alpha_i\bar{x}_{ji}\alpha_j = J_{ij}, \quad x_{ji} + \alpha_j\bar{x}_{ij}\alpha_i = J_{ji}, \ (i,j=1,2,\cdots,m). \qquad (2.5) $$
By (2.5), we get
$$ x_{ij} = \frac{J_{ij}-\alpha_i\bar{J}_{ji}\alpha_j}{1-\alpha_i^2\alpha_j^2}, \ (i,j=1,2,\cdots,m). \qquad (2.6) $$
Then Eq (2.4) follows from (2.6).
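Lemmas 2.2 and 2.3 can likewise be verified on random data. The sketch below builds $X=K\ast(J-C_AJ^*C_A)$ with $J=C_AJ_1+S_AJ_2^*$, checks Eq (2.3), and confirms that no perturbation improves $\phi_2$; all data are illustrative, and $\ast$ is implemented as NumPy's elementwise product:

```python
import numpy as np

# Random data satisfying the hypotheses: 0 < alpha_i < 1, alpha^2 + beta^2 = 1,
# and J1 Hermitian.
rng = np.random.default_rng(2)
m = 4
alpha = rng.uniform(0.1, 0.9, m)
beta = np.sqrt(1 - alpha ** 2)
CA, SA = np.diag(alpha), np.diag(beta)
J1 = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
J1 = J1 + J1.conj().T                      # enforce J1 = J1*
J2 = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# Lemma 2.3 formula applied to J = C_A J1 + S_A J2* (the right side of (2.1)).
J = CA @ J1 + SA @ J2.conj().T
K = 1.0 / (1.0 - np.outer(alpha, alpha) ** 2)   # k_ij = 1 / (1 - (a_i a_j)^2)
X = K * (J - CA @ J.conj().T @ CA)              # Hadamard product K * (...)

# Lemma 2.3: X solves X + C_A X* C_A = J.
assert np.linalg.norm(X + CA @ X.conj().T @ CA - J) < 1e-10

# Lemma 2.2: phi_2 is convex, so its stationary point X is a global minimizer.
def phi2(Y):
    return (np.linalg.norm(CA @ Y + Y.conj().T @ CA - J1) ** 2
            + np.linalg.norm(SA @ Y - J2.conj().T) ** 2
            + np.linalg.norm(Y.conj().T @ SA - J2) ** 2)

for _ in range(5):
    P = 0.1 * (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
    assert phi2(X) <= phi2(X + P) + 1e-12
print("Lemmas 2.2 and 2.3 confirmed on random data")
```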
Assume that the canonical correlation decomposition (CCD) [20] of the matrix pair [A∗,B∗] is
$$ A^* = Q[\Sigma_A, 0]E_A^{-1}, \quad B^* = Q[\Sigma_B, 0]E_B^{-1}, \qquad (2.7) $$
where $E_A\in C^{n\times n}$ and $E_B\in C^{p\times p}$ are nonsingular matrices,
$$ \Sigma_A = \begin{bmatrix} I & 0 & 0 \\ 0 & C_A & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & S_A & 0 \\ 0 & 0 & I \end{bmatrix}, \qquad \Sigma_B = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, $$
where, in both matrices, the row blocks have sizes $q$, $s$, $h-q-s$, $m-h-s-t$, $s$, $t$; the column blocks of $\Sigma_A$ have sizes $q$, $s$, $t$, and those of $\Sigma_B$ have sizes $q$, $s$, $h-q-s$. Moreover,
$$ C_A = \mathrm{diag}(\alpha_1,\alpha_2,\cdots,\alpha_s), \quad 1>\alpha_1\geq\alpha_2\geq\cdots\geq\alpha_s>0, $$
$$ S_A = \mathrm{diag}(\beta_1,\beta_2,\cdots,\beta_s), \quad 0<\beta_1\leq\beta_2\leq\cdots\leq\beta_s<1, $$
with $\alpha_i^2+\beta_i^2=1$ $(i=1,2,\cdots,s)$, $q=\mathrm{rank}(A)+\mathrm{rank}(B)-\mathrm{rank}([A^*,B^*])$, $g=\mathrm{rank}(A)=q+s+t$, $h=\mathrm{rank}(B)$, and $Q=[Q_1,Q_2,Q_3,Q_4,Q_5,Q_6]\in UC^{m\times m}$ with the partition of $Q$ compatible with those of $\Sigma_A$ and $\Sigma_B$.
According to (2.7), (1.2) can be equivalently written as
$$ \Phi_1 = \|[\Sigma_A,0]E_A^{-1}X(E_B^{-1})^*[\Sigma_B,0]^* + [\Sigma_B,0]E_B^{-1}X^*(E_A^{-1})^*[\Sigma_A,0]^* - Q^*DQ\|. \qquad (2.8) $$
Partition the matrices $E_A^{-1}X(E_B^{-1})^*$ and $Q^*DQ$ as
$$ E_A^{-1}X(E_B^{-1})^* = \begin{bmatrix} X_{11} & X_{12} & X_{13} & X_{14} \\ X_{21} & X_{22} & X_{23} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{bmatrix}, \qquad (2.9) $$
with row blocks of sizes $q$, $s$, $t$, $n-g$ and column blocks of sizes $q$, $s$, $h-q-s$, $p-h$, and
$$ Q^*DQ = \begin{bmatrix} D_{11} & D_{12} & D_{13} & D_{14} & D_{15} & D_{16} \\ D_{12}^* & D_{22} & D_{23} & D_{24} & D_{25} & D_{26} \\ D_{13}^* & D_{23}^* & D_{33} & D_{34} & D_{35} & D_{36} \\ D_{14}^* & D_{24}^* & D_{34}^* & D_{44} & D_{45} & D_{46} \\ D_{15}^* & D_{25}^* & D_{35}^* & D_{45}^* & D_{55} & D_{56} \\ D_{16}^* & D_{26}^* & D_{36}^* & D_{46}^* & D_{56}^* & D_{66} \end{bmatrix}, \qquad (2.10) $$
with row and column blocks of sizes $q$, $s$, $u$, $v$, $s$, $t$,
where u=h−q−s,v=m−h−s−t. Inserting (2.9) and (2.10) into (2.8), we have
(The blockwise expansion of $\Phi_1$ is omitted here; it decomposes into independent minimization problems over the blocks $X_{ij}$, together with terms that do not depend on $X$.)
Apparently, $\Phi_1$ attains its minimum if and only if
$$ \|X_{11}+X_{11}^*-D_{11}\|=\min, \quad \|X_{13}-D_{13}\|^2+\|X_{13}^*-D_{13}^*\|^2=\min, $$
$$ \|X_{31}-D_{16}^*\|^2+\|X_{31}^*-D_{16}\|^2=\min, \quad \|X_{32}-D_{26}^*\|^2+\|X_{32}^*-D_{26}\|^2=\min, $$
$$ \|X_{33}-D_{36}^*\|^2+\|X_{33}^*-D_{36}\|^2=\min, \quad \|C_AX_{21}+X_{12}^*-D_{12}^*\|^2+\|S_AX_{21}-D_{15}^*\|^2=\min, \qquad (2.11) $$
$$ \|C_AX_{23}-D_{23}\|^2+\|S_AX_{23}-D_{35}^*\|^2=\min, \qquad (2.12) $$
$$ \|C_AX_{22}+X_{22}^*C_A-D_{22}\|^2+\|S_AX_{22}-D_{25}^*\|^2+\|X_{22}^*S_A-D_{25}\|^2=\min. \qquad (2.13) $$
By (2.11), we have
$$ X_{11}=\tfrac{1}{2}D_{11}+N, \quad X_{13}=D_{13}, \quad X_{31}=D_{16}^*, \quad X_{32}=D_{26}^*, \quad X_{33}=D_{36}^*, $$
$$ X_{12}=D_{12}-D_{15}S_A^{-1}C_A, \quad X_{21}=S_A^{-1}D_{15}^*, \qquad (2.14) $$
where N∈Cq×q is some skew-Hermitian matrix. According to (2.12) and Lemma 2.1, we have
$$ X_{23}=C_AD_{23}+S_AD_{35}^*. \qquad (2.15) $$
By (2.13) and Lemmas 2.2 and 2.3, we have
$$ X_{22}=K\ast(J-C_AJ^*C_A), \qquad (2.16) $$
where $K=(k_{ij})\in C^{s\times s}$, $k_{ij}=\frac{1}{1-(\alpha_i\alpha_j)^2}$ $(i,j=1,2,\cdots,s)$, and $J=C_AD_{22}+S_AD_{25}^*$. Inserting (2.14)–(2.16) into (2.9), we obtain the following result.
Theorem 2.1. Suppose that A∈Cn×m,B∈Cp×m,D∈Cm×m with D=D∗. Let the canonical correlation decomposition of the matrix pair [A∗,B∗] be given by (2.7), the partition of the matrices E−1AX(E−1B)∗,Q∗DQ be given by (2.9) and (2.10), respectively. Then, the general solution of Problem I can be expressed as
$$ X = E_A\begin{bmatrix} \tfrac{1}{2}D_{11}+N & D_{12}-D_{15}S_A^{-1}C_A & D_{13} & X_{14} \\ S_A^{-1}D_{15}^* & X_{22} & C_AD_{23}+S_AD_{35}^* & X_{24} \\ D_{16}^* & D_{26}^* & D_{36}^* & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{bmatrix}E_B^*, $$
where $X_{22}$ is given by (2.16), $X_{i4}$, $X_{4j}$ $(i=1,2,3,4;\ j=1,2,3)$ are arbitrary matrices and $N\in C^{q\times q}$ is an arbitrary skew-Hermitian matrix.
Clearly, (1.1) with $D=D^*$ is solvable if and only if $\Phi_1=\min=0$, that is,
$$ X_{11}+X_{11}^*-D_{11}=0, \quad X_{13}-D_{13}=0, \quad X_{31}-D_{16}^*=0, \quad X_{32}-D_{26}^*=0, \quad X_{33}-D_{36}^*=0, $$
$$ C_AX_{21}+X_{12}^*-D_{12}^*=0, \quad S_AX_{21}-D_{15}^*=0, \quad C_AX_{23}-D_{23}=0, \quad S_AX_{23}-D_{35}^*=0, $$
$$ C_AX_{22}+X_{22}^*C_A-D_{22}=0, \quad S_AX_{22}-D_{25}^*=0, \qquad (2.17) $$
and $D_{i4}=0$, $D_{4j}=0$ $(i=1,2,3,4;\ j=5,6)$, $D_{33}=0$, $D_{55}=0$, $D_{56}=0$, $D_{66}=0$. By (2.17), we obtain
$$ X_{11}=\tfrac{1}{2}D_{11}+N, \qquad (2.18) $$
$$ X_{13}=D_{13}, \quad X_{31}=D_{16}^*, \quad X_{32}=D_{26}^*, \quad X_{33}=D_{36}^*, \quad X_{12}=D_{12}-D_{15}S_A^{-1}C_A, \quad X_{21}=S_A^{-1}D_{15}^*, $$
$$ X_{23}=C_A^{-1}D_{23}=S_A^{-1}D_{35}^*, \quad C_A^{-1}D_{23}-S_A^{-1}D_{35}^*=0, \quad X_{22}=S_A^{-1}D_{25}^*, \quad D_{22}=D_{25}S_A^{-1}C_A+C_AS_A^{-1}D_{25}^*, \qquad (2.19) $$
where N∈Cq×q is some skew-Hermitian matrix. Inserting (2.18) and (2.19) into (2.9), we have the following result.
Theorem 2.2. Suppose that $A\in C^{n\times m}$, $B\in C^{p\times m}$, $D\in C^{m\times m}$ with $D=D^*$. Let the canonical correlation decomposition (CCD) of the matrix pair $[A^*,B^*]$ be given by (2.7), and the partitions of the matrices $E_A^{-1}X(E_B^{-1})^*$ and $Q^*DQ$ be given by (2.9) and (2.10), respectively. Then (1.1) has a solution if and only if
$$ D=D^*, \quad DQ_4=0, \quad D_{33}=0, \quad D_{55}=0, \quad D_{56}=0, \quad D_{66}=0, $$
$$ C_A^{-1}D_{23}-S_A^{-1}D_{35}^*=0, \quad D_{22}=D_{25}S_A^{-1}C_A+C_AS_A^{-1}D_{25}^*. \qquad (2.20) $$
In this case, the general solution of (1.1) can be expressed as
$$ X = E_A\begin{bmatrix} \tfrac{1}{2}D_{11}+N & D_{12}-D_{15}S_A^{-1}C_A & D_{13} & X_{14} \\ S_A^{-1}D_{15}^* & S_A^{-1}D_{25}^* & C_A^{-1}D_{23} & X_{24} \\ D_{16}^* & D_{26}^* & D_{36}^* & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{bmatrix}E_B^*, \qquad (2.21) $$
where $X_{i4}$, $X_{4j}$ $(i=1,2,3,4;\ j=1,2,3)$ are arbitrary matrices and $N\in C^{q\times q}$ is an arbitrary skew-Hermitian matrix.
Remark 2.1. Note that (1.1) has a unique solution X if and only if
$$ n-g=0, \quad p-h=0, \quad q=0, $$
which is equivalent to
$$ \mathrm{rank}([A^*, B^*])=n+p. $$
In this case, the unique solution of (1.1) can be expressed as
$$ X = E_A\begin{bmatrix} S_A^{-1}D_{25}^* & C_A^{-1}D_{23} \\ D_{26}^* & D_{36}^* \end{bmatrix}E_B^*. $$
Based on Theorem 2.2, we can formulate the following Algorithm 2.1 to solve (1.1).
Algorithm 2.1.
1) Input matrices A,B and D with D=D∗.
2) Compute the canonical correlation decomposition of the matrix pair [A∗,B∗] by (2.7).
3) Compute D11,D12,D13,D15,D16,D22,D23,D25,D26,D35 and D36 by (2.10), respectively.
4) If the conditions (2.20) are satisfied, go to 5); otherwise, Problem I has no solution, and stop.
5) Randomly choose a skew-Hermitian matrix $N\in C^{q\times q}$ and compute $X_{11}$ by (2.18).
6) Compute X12,X13,X21,X22,X23,X31,X32,X33 by (2.19), respectively.
7) Randomly choose Xi4,X4j,(i=1,2,3,4;j=1,2,3) and compute X by (2.21).
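Computing the CCD in step 2) is beyond a short sketch, but step 7) invites a simple residual check. The helper below measures $\|A^*XB+B^*X^*A-D\|$ for a candidate $X$; the consistent real test instance is constructed purely for illustration and is not Example 2.1:

```python
import numpy as np

# Build a consistent instance of (1.1): choose X_true and define D from it.
# Dimensions and data are illustrative, not the paper's example.
rng = np.random.default_rng(3)
n, p, m = 3, 2, 5
A = rng.standard_normal((n, m))
B = rng.standard_normal((p, m))
X_true = rng.standard_normal((n, p))
D = A.T @ X_true @ B + B.T @ X_true.T @ A   # real case: A* reduces to A^T

def residual(X, A, B, D):
    """Frobenius norm of A* X B + B* X* A - D (step 7's sanity check)."""
    return np.linalg.norm(A.conj().T @ X @ B + B.conj().T @ X.conj().T @ A - D)

print(residual(X_true, A, B, D))   # ~0: X_true solves (1.1) by construction
```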
Example 2.1. Let m=10,n=7,p=6. Suppose that the matrices A,B and D are given by
(The numerical entries of $A$, $B$ and $D$ are given in the original paper and are not reproduced here.)
It is easy to verify that the solvability conditions (2.20) are satisfied:
$$ Q_4^*D = 10^{-13}\times[-0.1776,\ 0.0355,\ -0.1243,\ 0.1421,\ 0.0711,\ 0.0910,\ 0.6395,\ 0.1599,\ 0.2665,\ 0], $$
$$ D_{33}=1.3831\times10^{-14}, \quad D_{55}=10^{-13}\times\begin{bmatrix} 0.0581 & -0.4387 \\ -0.3434 & -0.0124 \end{bmatrix}, $$
$$ D_{56}=10^{-13}\times\begin{bmatrix} -0.2143 & 0.0623 \\ -0.2106 & 0.0813 \end{bmatrix}, \quad D_{66}=10^{-13}\times\begin{bmatrix} -0.2288 & -0.0117 \\ 0.0105 & 0.2715 \end{bmatrix}, $$
$$ C_A^{-1}D_{23}=S_A^{-1}D_{35}^*=\begin{bmatrix} 35.9983 \\ -22.9230 \end{bmatrix}, \quad D_{22}=D_{25}S_A^{-1}C_A+C_AS_A^{-1}D_{25}^*=\begin{bmatrix} 3.5122 & 14.7861 \\ 14.7861 & 10.6574 \end{bmatrix}. $$
According to Algorithm 2.1, if we choose $N=0$, $X_{i4}=0$, $X_{4j}=0$ $(i=1,2,3,4;\ j=1,2,3)$, then we obtain a feasible solution $X$ of (1.1) as follows:
$$ X=\begin{bmatrix} 14.297 & -29.492 & -3.0526 & 10.77 & -2.5732 & -16.157 \\ 12.674 & -5.1615 & 4.4051 & 21.794 & -7.8158 & -20.305 \\ 6.4513 & 19.863 & 4.3724 & 5.6231 & 11.727 & -0.98557 \\ -2.9642 & 10.036 & 3.9324 & 3.2979 & -1.334 & -10.045 \\ 13.035 & -28.008 & -0.76611 & 3.0924 & -7.9222 & -13.129 \\ 3.5708 & -20.56 & 0.29646 & -13.416 & -2.922 & -13.579 \\ 18.501 & -21.876 & 2.6595 & 15.754 & -6.8772 & -25.22 \end{bmatrix} $$
with the corresponding residual estimated by
$$ \|A^*XB+B^*X^*A-D\| = 8.8662\times10^{-13}. $$
Suppose that the CCD of the matrix pair [A∗,B∗] is of the form given by (2.7). For any X∈Cn×p, we define a weighted norm as follows [21,22,23] :
$$ \|X\|_W \triangleq \|E_A^{-1}X(E_B^{-1})^*\|. \qquad (3.1) $$
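The weighted norm (3.1) can be sketched directly; the invertible factors $E_A$, $E_B$ below are arbitrary stand-ins for the CCD factors of (2.7), not computed from an actual CCD:

```python
import numpy as np

# Illustrative invertible weight matrices (stand-ins for E_A, E_B of (2.7)).
rng = np.random.default_rng(4)
n, p = 4, 3
EA = rng.standard_normal((n, n)) + np.eye(n) * n   # well-conditioned stand-in
EB = rng.standard_normal((p, p)) + np.eye(p) * p

def weighted_norm(X, EA, EB):
    """||X||_W = || E_A^{-1} X (E_B^{-1})* || in the Frobenius norm, as in (3.1)."""
    return np.linalg.norm(np.linalg.solve(EA, X) @ np.linalg.inv(EB).conj().T)

X = rng.standard_normal((n, p))
# With E_A = I and E_B = I, the weighted norm reduces to the Frobenius norm.
assert np.isclose(weighted_norm(X, np.eye(n), np.eye(p)), np.linalg.norm(X))
print(weighted_norm(X, EA, EB))
```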
Write
$$ E_A^{-1}F(E_B^{-1})^* = \begin{bmatrix} F_{11} & F_{12} & F_{13} & F_{14} \\ F_{21} & F_{22} & F_{23} & F_{24} \\ F_{31} & F_{32} & F_{33} & F_{34} \\ F_{41} & F_{42} & F_{43} & F_{44} \end{bmatrix}, \qquad (3.2) $$
where the row blocks have sizes $q$, $s$, $t$, $n-g$ and the column blocks have sizes $q$, $s$, $h-q-s$, $p-h$.
Therefore, for any X∈SE, (1.3) can be written as
$$ \|F-X\|_W = \left\| \begin{bmatrix} F_{11} & F_{12} & F_{13} & F_{14} \\ F_{21} & F_{22} & F_{23} & F_{24} \\ F_{31} & F_{32} & F_{33} & F_{34} \\ F_{41} & F_{42} & F_{43} & F_{44} \end{bmatrix} - \begin{bmatrix} \tfrac{1}{2}D_{11}+N & D_{12}-D_{15}S_A^{-1}C_A & D_{13} & X_{14} \\ S_A^{-1}D_{15}^* & S_A^{-1}D_{25}^* & C_A^{-1}D_{23} & X_{24} \\ D_{16}^* & D_{26}^* & D_{36}^* & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{bmatrix} \right\|. $$
Obviously, $\|F-X\|_W=\min$ if and only if
$$ \|M-N\|=\min, \qquad (3.3) $$
$$ \|F_{i4}-X_{i4}\|=\min, \ (i=1,2,3,4), \qquad (3.4) $$
$$ \|F_{4j}-X_{4j}\|=\min, \ (j=1,2,3), \qquad (3.5) $$
where $M=F_{11}-\tfrac{1}{2}D_{11}$. Note that $N\in C^{q\times q}$ is a skew-Hermitian matrix, which implies that (3.3) is equivalent to
$$ \|N-M\|^2 = \|N-\tfrac{1}{2}(M-M^*)\|^2 + \|\tfrac{1}{2}(M+M^*)\|^2; $$
therefore, we have
$$ N=\tfrac{1}{2}(M-M^*), \quad X_{11}=\tfrac{1}{2}(D_{11}+M-M^*), \qquad (3.6) $$
Apparently, by (3.4) and (3.5), we have
$$ X_{i4}=F_{i4}, \ (i=1,2,3,4), \quad X_{4j}=F_{4j}, \ (j=1,2,3). \qquad (3.7) $$
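The step from (3.3) to (3.6) is the projection of $M$ onto the skew-Hermitian matrices, since the Hermitian and skew-Hermitian parts are orthogonal in the Frobenius inner product. A minimal numerical check with random illustrative data:

```python
import numpy as np

# Random complex matrix M of illustrative size q x q.
rng = np.random.default_rng(5)
q = 4
M = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))

# The claimed minimizer (3.6): the skew-Hermitian part of M.
N_opt = 0.5 * (M - M.conj().T)
assert np.linalg.norm(N_opt + N_opt.conj().T) < 1e-12   # N_opt is skew-Hermitian

# Any other skew-Hermitian candidate is at least as far from M.
for _ in range(5):
    S = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
    N = 0.5 * (S - S.conj().T)                           # random skew-Hermitian
    assert np.linalg.norm(N_opt - M) <= np.linalg.norm(N - M) + 1e-12
print("skew-Hermitian projection confirmed")
```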
Summing up the discussions above, we have the following result.
Theorem 3.1. Given $F\in C^{n\times p}$, partition $E_A^{-1}F(E_B^{-1})^*$ as in (3.2) and let $M=F_{11}-\tfrac{1}{2}D_{11}$. Then the unique solution of Problem II can be expressed as
$$ \hat{X} = E_A\begin{bmatrix} \tfrac{1}{2}(D_{11}+M-M^*) & D_{12}-D_{15}S_A^{-1}C_A & D_{13} & F_{14} \\ S_A^{-1}D_{15}^* & S_A^{-1}D_{25}^* & C_A^{-1}D_{23} & F_{24} \\ D_{16}^* & D_{26}^* & D_{36}^* & F_{34} \\ F_{41} & F_{42} & F_{43} & F_{44} \end{bmatrix}E_B^*. \qquad (3.8) $$
Based on Theorem 3.1, we can formulate the following Algorithm 3.1 to solve Problem II.
Algorithm 3.1.
1) Input matrices A,B,F and D with D=D∗.
2) Compute the canonical correlation decomposition of the matrix pair [A∗,B∗] by (2.7).
3) Compute D11,D12,D13,D15,D16,D22,D23,D25,D26,D35 and D36 by (2.10), respectively.
4) If the conditions (2.20) are satisfied, go to 5); otherwise, Problem II has no solution, and stop.
5) Compute F11,Fi4,F4j (i=1,2,3,4;j=1,2,3) by (3.2).
6) Set M=F11−12D11 and compute X11 by (3.6).
7) Compute Xi4 and X4j(i=1,2,3,4;j=1,2,3) by (3.7) and compute ˆX by (3.8).
Example 3.1. Let m=10,n=7,p=6. Suppose that the matrices A,B,D and F are given by
$$ F=\begin{bmatrix} 0.9754 & -9.5717 & -9.5949 & -7.4313 & 0.46171 & -4.3874 \\ -2.785 & -4.8538 & 6.5574 & 3.9223 & -0.97132 & -3.8156 \\ 5.4688 & -8.0028 & -0.35712 & -6.5548 & -8.2346 & -7.6552 \\ -9.5751 & 1.4189 & -8.4913 & -1.7119 & -6.9483 & -7.952 \\ -9.6489 & -4.2176 & -9.3399 & 7.0605 & 3.171 & 1.8687 \\ 1.5761 & -9.1574 & -6.7874 & -0.31833 & -9.5022 & 4.8976 \\ -9.7059 & 7.9221 & 7.5774 & -2.7692 & -0.34446 & -4.4559 \end{bmatrix}, $$
and the matrices $A$, $B$, $D$ are the same as those of Example 2.1. The solvability conditions are satisfied, as verified in Example 2.1. According to Algorithm 3.1, we obtain the unique solution $\hat{X}$ of Problem II as follows:
$$ \hat{X}=\begin{bmatrix} 22.453 & -27.405 & 2.902 & -33.42 & -13.843 & -20.599 \\ 20.879 & -12.679 & 20.666 & 1.2317 & -30.38 & -8.4501 \\ -12.383 & 21.633 & -5.6235 & 7.9966 & 11.409 & -0.83169 \\ -12.675 & -12.705 & -10.916 & -1.1556 & -23.496 & -1.0355 \\ 30.483 & -25.893 & 8.7596 & -21.928 & -10.622 & -17.599 \\ 9.0127 & -16.712 & 5.5371 & -46.447 & -9.6452 & -17.623 \\ 2.651 & -20.553 & 26.12 & -26.865 & -28.117 & -18.774 \end{bmatrix} $$
with the corresponding residual estimated by
$$ \|A^*\hat{X}B+B^*\hat{X}^*A-D\| = 9.0698\times10^{-13}. $$
In this paper, we have obtained the expression of the least-squares solutions to Problem I and the unique solution ˆX of Problem II by using the CCD of the matrix pair [A∗,B∗]. Two numerical examples verify the correctness of our results.
All authors declare that there is no conflict of interest in this paper.
[1] K. Yasuda, R. E. Skelton, Assigning controllability and observability Gramians in feedback control, J. Guid. Control Dynam., 14 (1991), 878–885. doi: 10.2514/3.20727
[2] H. Fujioka, S. Hara, State covariance assignment problem with measurement noise: A unified approach based on a symmetric matrix equation, Linear Algebra Appl., 203-204 (1994), 579–605. doi: 10.1016/0024-3795(94)90215-1
[3] J. K. Baksalary, R. Kala, The matrix equation AXB+CYD=E, Linear Algebra Appl., 30 (1980), 141–147. doi: 10.1016/0024-3795(80)90189-5
[4] K. W. E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations, Linear Algebra Appl., 88-89 (1987), 83–98. doi: 10.1016/0024-3795(87)90104-2
[5] K. W. E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl., 119 (1989), 35–50. doi: 10.1016/0024-3795(89)90067-0
[6] M. Dehghan, M. Hajarian, Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A1X1B1+A2X2B2=C, Math. Comput. Model., 49 (2009), 1937–1959. doi: 10.1016/j.mcm.2008.12.014
[7] H. Zhou, An iterative method for symmetric solutions of the matrix equation AXB+CXD=F (in Chinese), Math. Numer. Sin., 32 (2010), 413–422.
[8] H. Zhou, An iterative algorithm for solutions of matrix equation AXB+CXD=F over linear subspace (in Chinese), Acta Math. Appl. Sin., 39 (2016), 610–619.
[9] Z. Peng, Y. Peng, An efficient iterative method for solving the matrix equation AXB+CYD=E, Numer. Linear Algebra Appl., 13 (2006), 473–485. doi: 10.1002/nla.470
[10] G. Xu, M. Wei, D. Zheng, On solutions of matrix equation AXB+CYD=F, Linear Algebra Appl., 279 (1998), 93–109. doi: 10.1016/S0024-3795(97)10099-4
[11] A. P. Liao, Z. Z. Bai, Y. Lei, Best approximate solution of matrix equation AXB+CYD=E, SIAM J. Matrix Anal. Appl., 27 (2006), 675–688. doi: 10.1137/040615791
[12] S. Yuan, A. Liao, Least squares Hermitian solution of the complex matrix equation AXB+CXD=E with the least norm, J. Franklin Inst., 351 (2014), 4978–4997. doi: 10.1016/j.jfranklin.2014.08.003
[13] S. Yuan, Q. Wang, Two special kinds of least squares solutions for the quaternion matrix equation AXB+CXD=E, Electron. J. Linear Algebra, 23 (2012), 257–274.
[14] F. Zhang, M. Wei, Y. Li, J. Zhao, Special least squares solutions of the quaternion matrix equation AXB+CXD=E, Comput. Math. Appl., 72 (2016), 1426–1435. doi: 10.1016/j.amc.2015.08.046
[15] F. Zhang, M. Wei, Y. Li, J. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Franklin Inst., 355 (2018), 1296–1310. doi: 10.1016/j.jfranklin.2017.12.023
[16] S. F. Yuan, Least squares pure imaginary solution and real solution of the quaternion matrix equation AXB+CXD=E with the least norm, J. Appl. Math., 4 (2014), 1–9. doi: 10.1155/2014/857081
[17] Y. Yuan, The least-square solutions of a matrix equation (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 4 (2001), 324–329.
[18] Y. Yuan, The minimum norm solutions of two classes of matrix equations (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 2 (2002), 127–134.
[19] Y. Yuan, H. Dai, The least squares solution with the minimum norm of matrix equation A⊤XB+B⊤X⊤A=D (in Chinese), Numerical Mathematics–A Journal of Chinese Universities, 27 (2005), 232–238.
[20] G. H. Golub, H. Zha, Perturbation analysis of the canonical correlations of matrix pairs, Linear Algebra Appl., 210 (1994), 3–28. doi: 10.1016/0024-3795(94)90463-4
[21] G. Yao, A. Liao, Least-squares solution of AXB=D over Hermitian matrices X and weighted optimal approximation, Math. Theory Appl., 23 (2003), 77–81.
[22] M. Wang, M. Wei, On weighted least-squares skew-Hermite solution of matrix equation AXB=E (in Chinese), J. East China Normal Univ. (Nat. Sci.), 1 (2004), 22–28.
[23] M. Wang, M. Wei, On weighted least-squares Hermite solution of matrix equation AXB=E (in Chinese), Math. Numer. Sin., 26 (2004), 129–136.