Research article

Least squares solutions of matrix equation AXB=C under semi-tensor product

  • Received: 24 January 2024 Revised: 22 March 2024 Accepted: 12 April 2024 Published: 19 April 2024
  • This paper studies the least-squares solutions of the matrix equation $AXB=C$ under the semi-tensor product. According to the definition of the semi-tensor product, the equation is first transformed into an ordinary matrix equation. Then, the least-squares solutions of the matrix-vector equation and the matrix equation are investigated by applying matrix differentiation. Finally, the specific forms of the least-squares solutions are given.

    Citation: Jin Wang. Least squares solutions of matrix equation AXB=C under semi-tensor product[J]. Electronic Research Archive, 2024, 32(5): 2976-2993. doi: 10.3934/era.2024136




Matrix equations of the form $AXB=C$ are important research topics in linear algebra. They are widely used in engineering and theoretical studies, such as image and signal processing, photogrammetry, and surface fitting in computer-aided geometric design [1,2]. In addition, equation-solving problems also arise in the numerical solution of differential equations, signal processing, cybernetics, optimization models, solid mechanics, structural dynamics, and so on [3,4,5,6,7]. So far, there is an abundance of research results on the solutions of the matrix equation $AXB=C$, including their existence [8], uniqueness [9], numerical computation [10], and structure [11,12,13,14,15,16]. Moreover, [17] discusses the Hermitian and skew-Hermitian splitting iterative method for solving the equation, and the authors of [18] provided Jacobi and Gauss-Seidel type iterative methods to solve it.

However, in practical applications, ordinary matrix multiplication can no longer meet all needs. In 2001, Cheng and Zhao constructed the semi-tensor product, which frees the multiplication of two matrices from the usual dimension restriction [19,20]. Since then, the semi-tensor product has been widely studied and discussed. It has been applied to problems such as the permutation of high-dimensional data, the algebraization of nonlinear systems, and robust stability control of power systems [22], and it also provides a new research tool for the study of Boolean networks [23], game theory [24], graph coloring [25], fuzzy control [26], and other fields [27]. Many of these problems can be reduced to solving linear or matrix equations under the semi-tensor product. Yao et al. studied the solution of the equation $AX=B$ under the semi-tensor product (the STP equation) in [28]. After that, the authors of [29,30,31] studied the solvability of the STP equations $AX^{2}=B$, $A^{l}X=B$, and $AX-XB=C$, respectively.

To date, the STP equation $AXB=C$ has also appeared in many studies using the matrix semi-tensor product method. For example, in the study of multi-agent distributed cooperative control over finite fields, the authors of [32] transformed nonlinear dynamic equations over finite fields into the form of the STP equation $Z(t)=\tilde{L}\ltimes Z(t+1)$, where $\tilde{L}=\hat{L}QM$ and $Q$ is the control matrix. Thus, if we want to obtain the right control matrix to realize consensus, we need to solve an STP equation of the form $AXB=C$. Recently, Ji et al. studied the solutions of the STP equation $AXB=C$, gave necessary and sufficient conditions for the equation to have a solution, and formulated specific solution steps in [33]. Nevertheless, the condition under which the STP equation $AXB=C$ has a solution is very restrictive. On the one hand, the parameter matrix $C$ needs to have a specific form; in particular, it should be a block Toeplitz matrix, and even if $C$ meets certain conditions, the equation may still have no solution. This brings difficulties in practical applications. On the other hand, measured data usually contain errors, which will cause the parameter matrix $C$ of the equation $AXB=C$ to miss the required specific form; the equation then has no exact solutions.

Therefore, this paper studies the approximate solutions of the STP equation $AXB=C$. The main contributions of this paper are as follows: (1) The least-squares (LS) solutions of the STP equation $AXB=C$ are discussed for the first time. Compared with the existing solvability results, the LS formulation is more general and greatly relaxes the requirements on the form of the matrices. (2) On the basis of the Moore-Penrose generalized inverse and matrix differentiation, the specific forms of the LS solutions are derived for both the matrix-vector equation and the matrix equation.

    The paper is organized as follows. First, we study the LS solution problem of the matrix-vector STP equation AXB=C, together with a specific form of the LS solutions, where X is an unknown vector. Then, we study the LS solution problem when X is an unknown matrix and give the concrete form of the LS solutions. In addition, several simple numerical examples are given for each case to verify the feasibility of the theoretical results.

    This study applies the following notations.

$\mathbb{R}$: the real number field;

$\mathbb{R}^{n}$: the set of $n$-dimensional vectors over $\mathbb{R}$;

$\mathbb{R}^{m\times n}$: the set of $m\times n$ matrices over $\mathbb{R}$;

$A^{T}$: the transpose of matrix $A$;

$\|A\|$: the Frobenius norm of matrix $A$;

$\mathrm{tr}(A)$: the trace of matrix $A$;

$A^{+}$: the Moore-Penrose generalized inverse of matrix $A$;

$\mathrm{lcm}\{m,n\}$: the least common multiple of positive integers $m$ and $n$;

$\gcd\{m,n\}$: the greatest common divisor of positive integers $m$ and $n$;

$\frac{a}{b}$: the quotient of $a$ by $b$, where $b$ divides $a$;

$a\mid b$: $b$ is divisible by $a$;

$\frac{\partial f(x)}{\partial x}$: the derivative of $f(x)$ with respect to $x$.

Let $A=[a_{ij}]\in\mathbb{R}^{m\times n}$ and $B=[b_{ij}]\in\mathbb{R}^{p\times q}$. We give the following definitions:

Definition 2.1. [34] The Kronecker product $A\otimes B$ is defined as follows:

$$A\otimes B=\begin{bmatrix}a_{11}B&a_{12}B&\cdots&a_{1n}B\\a_{21}B&a_{22}B&\cdots&a_{2n}B\\\vdots&\vdots&&\vdots\\a_{m1}B&a_{m2}B&\cdots&a_{mn}B\end{bmatrix}\in\mathbb{R}^{mp\times nq}.\tag{2.1}$$

Definition 2.2. [20] The left semi-tensor product $A\ltimes B$ is defined as follows:

$$A\ltimes B=(A\otimes I_{t/n})(B\otimes I_{t/p})\in\mathbb{R}^{(mt/n)\times(qt/p)},\tag{2.2}$$

where $t=\mathrm{lcm}\{n,p\}$.
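As a quick numerical illustration (not from the paper), the left semi-tensor product of Definition 2.2 can be sketched in a few lines of NumPy; `stp` is an illustrative name of ours.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product A x B = (A kron I_{t/n})(B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When n = p, the semi-tensor product reduces to the ordinary matrix product.
A = np.array([[1., 2.], [0., 1.]])
B = np.array([[1., 0.], [3., 1.]])
assert np.allclose(stp(A, B), A @ B)

# A 2x4 matrix times a 2x1 vector: t = lcm(4, 2) = 4, and the result is 2x2.
M = np.array([[1., 0., 1., 1.], [0., 1., 0., 0.]])
x = np.array([[2.], [1.]])
assert stp(M, x).shape == (2, 2)
```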

Definition 2.3. [21] For a matrix $A\in\mathbb{R}^{m\times n}$, the $mn$-dimensional column vector $V_c(A)$ is defined as follows:

$$V_c(A)=[a_{11}\ \cdots\ a_{m1}\ \ a_{12}\ \cdots\ a_{m2}\ \ \cdots\ \ a_{1n}\ \cdots\ a_{mn}]^{T}.\tag{2.3}$$
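In code, the $V_c$ operator of Definition 2.3 is simply column-major flattening (a small sketch, assuming NumPy; `vc` is our name):

```python
import numpy as np

def vc(A):
    """V_c of Definition 2.3: stack the columns of A into one long vector."""
    return A.flatten(order='F')

A = np.array([[1, 4], [2, 5], [3, 6]])
assert (vc(A) == np.array([1, 2, 3, 4, 5, 6])).all()
```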

Proposition 2.1. [33,34] When $A,B$ are two real-valued matrices and $X$ is an unknown variable matrix, we have the following results about matrix differentiation:

$$\frac{\partial\,\mathrm{tr}(AX)}{\partial X}=A^{T},\qquad\frac{\partial\,\mathrm{tr}(X^{T}A)}{\partial X}=A,\qquad\frac{\partial\,\mathrm{tr}(X^{T}AX)}{\partial X}=(A+A^{T})X.$$
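The three differentiation rules of Proposition 2.1 can be checked numerically with central finite differences (an illustrative sanity check of ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

def num_grad(f, X, eps=1e-6):
    """Entrywise central-difference gradient of a scalar function f at X."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

assert np.allclose(num_grad(lambda Y: np.trace(A @ Y), X), A.T, atol=1e-5)
assert np.allclose(num_grad(lambda Y: np.trace(Y.T @ A), X), A, atol=1e-5)
assert np.allclose(num_grad(lambda Y: np.trace(Y.T @ A @ Y), X), (A + A.T) @ X, atol=1e-5)
```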

In this subsection, we will consider the following matrix-vector STP equation:

$$A\ltimes X\ltimes B=C,\tag{2.4}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are given matrices, and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved.

    With regard to the requirements of the dimensionality of the matrices in the STP equation (2.4), we have the following properties:

Proposition 2.2. [33] For matrix-vector STP equation (2.4),

1) when $m=h$, the necessary conditions for (2.4) to have vector solutions of size $p$ are that $\frac{k}{l}$ and $\frac{n}{r}$ are positive integers, $\frac{k}{l}\mid\frac{n}{r}$, and $p=\frac{nl}{rk}$;

2) when $m\neq h$, the necessary conditions for (2.4) to have vector solutions of size $p$ are that $\frac{h}{m}$ and $\frac{k}{l}$ are positive integers and that $\beta=\gcd\{\frac{h}{m},r\}$, $\gcd\{\frac{k}{l},\beta\}=1$, $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, and $p=\frac{nhl}{mrk}$ hold.

    Remark: When Proposition 2.2 is satisfied, matrices A,B, and C are said to be compatible, and the sizes of X are called permissible sizes.

Example 2.1 Consider the matrix-vector STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

$$A=[1\ 2\ 0\ 1],\qquad B=\begin{bmatrix}0\\1\end{bmatrix},\qquad C=\begin{bmatrix}1&1&1\\0&2&0\end{bmatrix}.$$

It is easy to see that $m=1$, $n=4$, $r=2$, $l=1$, $h=2$, and $k=3$. Although $m\mid h$, $l\mid k$, $\beta=\gcd\{\frac{h}{m},r\}$, $\gcd\{\frac{k}{l},\beta\}=1$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $\frac{nhl}{mrk}=\frac{4}{3}$ is not a positive integer. So, $A$, $B$, and $C$ are not compatible. At this time, matrix-vector STP equation (2.4) has no solution.
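The dimension bookkeeping of Proposition 2.2 is easy to mechanize; the following sketch (the function name is ours) returns the permissible size $p$, or `None` when the matrices are incompatible:

```python
from math import gcd

def vector_solution_size(m, n, r, l, h, k):
    """Permissible size p of the vector X in Proposition 2.2, or None if
    A, B, and C are not compatible."""
    if m == h:
        if k % l or n % r or (n // r) % (k // l):
            return None
        return (n * l) // (r * k)
    if h % m or k % l:
        return None
    beta = gcd(h // m, r)
    if gcd(k // l, beta) != 1 or gcd(h // m, k // l) != 1:
        return None
    num, den = n * h * l, m * r * k
    return num // den if num % den == 0 else None

# Example 2.1: m=1, n=4, r=2, l=1, h=2, k=3 -> nhl/(mrk) = 4/3 is not an integer.
assert vector_solution_size(1, 4, 2, 1, 2, 3) is None
```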

For the case that $m=h$, let $X=[x_1\ x_2\ \cdots\ x_p]^{T}\in\mathbb{R}^{p}$, let $A_s$ be the $s$-th column of $A$, and let $\check A_1,\check A_2,\ldots,\check A_p\in\mathbb{R}^{m\times\frac{n}{p}}=\mathbb{R}^{m\times\frac{rk}{l}}$ be the $p$ equal blocks of the matrix $A$, i.e., $A=[\check A_1\ \check A_2\ \cdots\ \check A_p]$, and

$$\check A_i=[A_{(i-1)\frac{rk}{l}+1}\ A_{(i-1)\frac{rk}{l}+2}\ \cdots\ A_{i\frac{rk}{l}}],\quad i=1,\ldots,p.$$

Let $t_1=\mathrm{lcm}\{n,p\}$ and $t_2=\mathrm{lcm}\{\frac{t_1}{p},r\}$; comparing the dimensions, we can get that $t_1=n$ and $t_2=\frac{rk}{l}$. Then

$$\begin{aligned}A\ltimes X\ltimes B&=(A\otimes I_{t_1/n})(X\otimes I_{t_1/p})\ltimes B=[\check A_1\ \check A_2\ \cdots\ \check A_p]\ltimes\begin{bmatrix}x_1\\x_2\\\vdots\\x_p\end{bmatrix}\ltimes B\\&=(x_1\check A_1+x_2\check A_2+\cdots+x_p\check A_p)\ltimes B=x_1\check A_1\ltimes B+x_2\check A_2\ltimes B+\cdots+x_p\check A_p\ltimes B\\&=x_1(\check A_1\otimes I_{\frac{t_2l}{rk}})(B\otimes I_{t_2/r})+\cdots+x_p(\check A_p\otimes I_{\frac{t_2l}{rk}})(B\otimes I_{t_2/r})\\&=x_1\check A_1(B\otimes I_{k/l})+x_2\check A_2(B\otimes I_{k/l})+\cdots+x_p\check A_p(B\otimes I_{k/l})=C\in\mathbb{R}^{m\times k}.\end{aligned}$$

Denote

$$\check B_i=\check A_i(B\otimes I_{k/l})=[A_{(i-1)\frac{rk}{l}+1}\ A_{(i-1)\frac{rk}{l}+2}\ \cdots\ A_{i\frac{rk}{l}}](B\otimes I_{k/l})=\sum_{j=1}^{r}\big[A_{(i-1)\frac{rk}{l}+(j-1)\frac{k}{l}+1}\ \cdots\ A_{(i-1)\frac{rk}{l}+j\frac{k}{l}}\big](B_j\otimes I_{k/l})\in\mathbb{R}^{m\times k},\quad i=1,\ldots,p,$$

where $B_j$ denotes the $j$-th row of $B$.

    It is easy to see that when the matrices A and C have the same row dimension, the STP equation (2.4) has a better representation.

Proposition 2.3. Matrix-vector STP equation (2.4), given $m=h$, can be rewritten as follows:

$$x_1\check B_1+x_2\check B_2+\cdots+x_p\check B_p=C.\tag{2.5}$$

Obviously, it can also take the column form

$$[\check B_{1,j}\ \check B_{2,j}\ \cdots\ \check B_{p,j}]X=C_j,\quad j=1,\ldots,k,$$

where $\check B_{i,j}$ is the $j$-th column of $\check B_i$ and $C_j$ is the $j$-th column of $C$.

At the same time, applying the $V_c$ operator to both sides of (2.5) yields

$$x_1V_c(\check B_1)+x_2V_c(\check B_2)+\cdots+x_pV_c(\check B_p)=[V_c(\check B_1)\ V_c(\check B_2)\ \cdots\ V_c(\check B_p)]X=V_c(C).$$

    We get the following proposition.

Proposition 2.4. When $m=h$, matrix-vector STP equation (2.4) is equivalent to the following linear equation under the traditional matrix product:

$$\bar BX=V_c(C),$$

where

$$\bar B=[V_c(\check B_1)\ V_c(\check B_2)\ \cdots\ V_c(\check B_p)]=\begin{bmatrix}\check B_{1,1}&\check B_{2,1}&\cdots&\check B_{p,1}\\\check B_{1,2}&\check B_{2,2}&\cdots&\check B_{p,2}\\\vdots&\vdots&&\vdots\\\check B_{1,k}&\check B_{2,k}&\cdots&\check B_{p,k}\end{bmatrix}.\tag{2.6}$$
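Propositions 2.3 and 2.4 can be checked numerically. The sketch below (assuming NumPy; `stp` and `bbar` are our own names) builds $\bar B$ from the blocks $\check B_i$ and verifies $\bar BX=V_c(A\ltimes X\ltimes B)$ on random compatible data:

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product of Definition 2.2."""
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

def bbar(A, B, k, p):
    """Assemble B-bar = [Vc(B1) ... Vc(Bp)] with Bi = Ai (B kron I_{k/l})."""
    l = B.shape[1]
    w = A.shape[1] // p                      # block width rk/l
    BI = np.kron(B, np.eye(k // l))
    return np.column_stack(
        [(A[:, i*w:(i+1)*w] @ BI).flatten(order='F') for i in range(p)])

# Compatible sizes: m = h = 3, n = 4, r = 2, l = 3, k = 3, hence p = nl/(rk) = 2.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 3))
X = rng.standard_normal((2, 1))
lhs = stp(stp(A, X), B).flatten(order='F')   # Vc(A x X x B)
assert np.allclose(bbar(A, B, k=3, p=2) @ X[:, 0], lhs)
```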

In this subsection, we will consider the following matrix STP equation:

$$A\ltimes X\ltimes B=C,\tag{2.7}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are given matrices, and $X\in\mathbb{R}^{p\times q}$ is the matrix that needs to be solved.

    For matrix STP equation (2.7), the dimensionality of its matrices has the following requirements:

Proposition 2.5. [33] For matrix STP equation (2.7),

1) when $m=h$, the necessary conditions for (2.7) to have a matrix solution of size $p\times q$ are that $\frac{k}{l}$ and $\frac{n}{r}$ are positive integers and $p=\frac{n}{\alpha}$, $q=\frac{rk}{l\alpha}$, where $\alpha$ is a common factor of $n$ and $\frac{rk}{l}$;

2) when $m\neq h$, the necessary conditions for (2.7) to have a matrix solution of size $p\times q$ are that $\frac{h}{m}$ and $\frac{k}{l}$ are positive integers, $\gcd\{\frac{h/m}{\beta},\frac{\alpha}{\beta}\}=1$, $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $\beta\mid r$, $p=\frac{nh}{m\alpha}$, and $q=\frac{rk}{l\alpha}$, where $\alpha$ is a common factor of $\frac{nh}{m}$ and $\frac{rk}{l}$, and $\beta=\gcd\{\frac{h}{m},\alpha\}$.

    Remark: When Proposition 2.5 is satisfied, matrices A,B, and C are said to be compatible, and the sizes of X are called permissible sizes.

Example 2.2 Consider the matrix STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

$$A=\begin{bmatrix}1&0&1&1\\0&1&0&0\end{bmatrix},\qquad B=\begin{bmatrix}2\\1\end{bmatrix},\qquad C=\begin{bmatrix}3&1&5\\0&2&0\end{bmatrix}.$$

We see that $m=2$, $n=4$, $r=2$, $l=1$, $h=2$, and $k=3$, so $A$, $B$, and $C$ are compatible. At this time, matrix STP equation (2.7) may have a solution $X\in\mathbb{R}^{2\times 3}$ or $\mathbb{R}^{4\times 6}$. (In fact, by Corollary 4.1 of [33], this equation has no solution.)

When $m=h$, let $A_s$ be the $s$-th column of $A$ and denote by $\check A_1,\check A_2,\ldots,\check A_p\in\mathbb{R}^{m\times\alpha}$ the $p$ equal-size blocks of $A$, i.e., $A=[\check A_1\ \check A_2\ \cdots\ \check A_p]$, where

$$\check A_i=[A_{(i-1)\alpha+1}\ A_{(i-1)\alpha+2}\ \cdots\ A_{i\alpha}],\quad i=1,\ldots,p.$$

Denoting

$$\bar A=[V_c(\check A_1),V_c(\check A_2),\ldots,V_c(\check A_p)]=\begin{bmatrix}A_1&A_{\alpha+1}&\cdots&A_{(p-1)\alpha+1}\\A_2&A_{\alpha+2}&\cdots&A_{(p-1)\alpha+2}\\\vdots&\vdots&&\vdots\\A_{\alpha}&A_{2\alpha}&\cdots&A_{p\alpha}\end{bmatrix},$$

we have the following proposition.

Proposition 2.6. [33] When $m=h$, STP equation (2.7) can be rewritten as follows:

$$(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)=V_c(C).\tag{2.8}$$
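Proposition 2.6 admits the same kind of numerical check (a sketch with our own names, using the dimensions of Example 2.2 and random entries):

```python
import numpy as np
from math import lcm

def stp(A, B):
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

vc = lambda M: M.flatten(order='F')          # the Vc operator of Definition 2.3

# m = h = 2, n = 4, r = 2, l = 1, k = 3, alpha = 2, so X is p x q = 2 x 3.
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((2, 1))
X = rng.standard_normal((2, 3))
m, l, k, alpha, q = 2, 1, 3, 2, 3

Abar = np.column_stack([vc(A[:, i*alpha:(i+1)*alpha]) for i in range(A.shape[1] // alpha)])
M = np.kron(B.T, np.eye(k * m // l)) @ np.kron(np.eye(q), Abar)
assert np.allclose(M @ vc(X), vc(stp(stp(A, X), B)))      # Eq (2.8)
```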

In this subsection we will consider the LS solutions of the following matrix-vector STP equation:

$$A\ltimes X\ltimes B=C,\tag{3.1}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{m\times k}$ are given matrices, and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved. By Proposition 2.2, we know that when $\frac{k}{l},\frac{n}{r}\in\mathbb{Z}^{+}$ and $\frac{k}{l}\mid\frac{n}{r}$, all matrices are compatible. At this time, matrix-vector STP equation (3.1) may have solutions in $\mathbb{R}^{\frac{nl}{rk}}$.

Now, assuming that these conditions hold, we want to find the LS solutions of matrix-vector STP equation (3.1) on $\mathbb{R}^{\frac{nl}{rk}}$; that is, given $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$, we want to find $X^{*}\in\mathbb{R}^{\frac{nl}{rk}}$ such that

$$\|A\ltimes X^{*}\ltimes B-C\|^2=\min_{X\in\mathbb{R}^{nl/(rk)}}\|A\ltimes X\ltimes B-C\|^2.\tag{3.2}$$

According to Proposition 2.3, matrix-vector equation (2.4) under the condition that $m=h$ can be rewritten in the column form

$$[\check B_{1,j}\ \check B_{2,j}\ \cdots\ \check B_{p,j}]X=C_j,\quad j=1,\ldots,k.$$

So, we have

$$\begin{aligned}\|A\ltimes X\ltimes B-C\|^2&=\sum_{j=1}^{k}\big\|[\check B_{1,j}\ \cdots\ \check B_{p,j}]X-C_j\big\|^2\\&=\sum_{j=1}^{k}\mathrm{tr}\Big[\big([\check B_{1,j}\ \cdots\ \check B_{p,j}]X-C_j\big)^{T}\big([\check B_{1,j}\ \cdots\ \check B_{p,j}]X-C_j\big)\Big]\\&=\sum_{j=1}^{k}\mathrm{tr}\Big(X^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]X-X^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}C_j\\&\qquad\qquad-C_j^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]X+C_j^{T}C_j\Big).\end{aligned}$$

Since $\|A\ltimes X\ltimes B-C\|^2$ is a smooth function of the entries of $X$, $X$ is the minimum point if and only if $X$ satisfies the following equation:

$$\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^2=0.$$

Then, we derive the following:

$$\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^2=\sum_{j=1}^{k}\Big(2[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]X-2[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}C_j\Big).$$

Setting

$$\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^2=0,$$

we have

$$\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]X=\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}C_j.\tag{3.3}$$

Hence, the minimum point of (3.2) is given by

$$X=\Big(\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}]\Big)^{+}\Big(\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}C_j\Big),$$

and it is also the LS solution of (3.1).

    Meanwhile, we can draw the following result:

Theorem 3.1. If $\check B_1,\check B_2,\ldots,\check B_p$ are linearly independent, then $\bar B$ of (2.6) has full column rank and the LS solution of matrix-vector STP equation (3.1) is given by

$$X=(\bar B^{T}\bar B)^{-1}\bar B^{T}V_c(C);$$

if $\check B_1,\check B_2,\ldots,\check B_p$ are linearly dependent, then $\bar B$ is not full rank and the LS solution of matrix-vector STP equation (3.1) is given by

$$X=(\bar B^{T}\bar B)^{+}\bar B^{T}V_c(C).$$

Proof. According to Proposition 2.4, (3.1) is equivalent to the following system of linear equations under the traditional matrix product:

$$\bar BX=V_c(C).\tag{3.4}$$

Therefore, we only need to study the LS solutions of (3.4). From the standard conclusion in linear algebra, the LS solutions of (3.4) must satisfy the normal equation

$$\bar B^{T}\bar BX=\bar B^{T}V_c(C).\tag{3.5}$$

When $\bar B$ has full column rank, $\bar B^{T}\bar B$ is invertible and the LS solution of (3.4) is given by

$$X=(\bar B^{T}\bar B)^{-1}\bar B^{T}V_c(C);$$

when $\bar B$ is not full rank, $\bar B^{T}\bar B$ is singular and the LS solution of (3.4) is given by

$$X=(\bar B^{T}\bar B)^{+}\bar B^{T}V_c(C).$$

Comparing (3.3) and (3.5), we can see that

$$\bar B^{T}\bar B=\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}[\check B_{1,j}\ \cdots\ \check B_{p,j}],\qquad\bar B^{T}V_c(C)=\sum_{j=1}^{k}[\check B_{1,j}\ \cdots\ \check B_{p,j}]^{T}C_j,$$

and

$$\|\bar BX-V_c(C)\|^2=\sum_{j=1}^{k}\big\|[\check B_{1,j}\ \cdots\ \check B_{p,j}]X-C_j\big\|^2.$$

    Therefore, the two equations are the same, and the LS solution obtained via the two methods is consistent. Obviously, the second method is easier to employ. Below, we only use the second method to find the LS solutions.
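In code, Theorem 3.1 amounts to solving the normal equation of $\bar BX=V_c(C)$. A minimal sketch (our naming, assuming NumPy), cross-checked against NumPy's built-in LS solver:

```python
import numpy as np

def ls_solution(Bbar, vcC):
    """LS solution of Bbar X = Vc(C): use the inverse when Bbar has full
    column rank, and the Moore-Penrose inverse otherwise."""
    G, rhs = Bbar.T @ Bbar, Bbar.T @ vcC
    if np.linalg.matrix_rank(Bbar) == Bbar.shape[1]:
        return np.linalg.solve(G, rhs)
    return np.linalg.pinv(G) @ rhs

rng = np.random.default_rng(3)
Bbar = rng.standard_normal((9, 2))           # generically full column rank
v = rng.standard_normal(9)
x_ref, *_ = np.linalg.lstsq(Bbar, v, rcond=None)
assert np.allclose(ls_solution(Bbar, v), x_ref)
```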

Example 3.1 Now, we shall explore the LS solution of the matrix-vector STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

$$A=\begin{bmatrix}1&0&1&1\\0&1&0&0\\1&0&0&1\end{bmatrix},\qquad B=\begin{bmatrix}2&1&2\\0&0&1\end{bmatrix},\qquad C=\begin{bmatrix}1&1&1\\0&2&0\\1&1&0\end{bmatrix}.$$

    By Example 2.1(1), it follows that the matrix-vector STP equation AXB=C has no exact solution. Then, we can investigate the LS solutions of this equation.

First, because $A$, $B$, and $C$ are compatible, the matrix-vector equation may have LS solutions on $\mathbb{R}^2$. Second, dividing $A$ into two blocks, we have

$$\check A_1=\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix},\quad\check A_2=\begin{bmatrix}1&1\\0&0\\0&1\end{bmatrix},\quad\check B_1=\check A_1(B\otimes I_1)=\begin{bmatrix}2&1&2\\0&0&1\\2&1&2\end{bmatrix},\quad\check B_2=\check A_2(B\otimes I_1)=\begin{bmatrix}2&1&3\\0&0&0\\0&0&1\end{bmatrix}.$$

Then, we can get

$$\bar B=\begin{bmatrix}2&2\\0&0\\2&0\\1&1\\0&0\\1&0\\2&3\\1&0\\2&1\end{bmatrix},\qquad V_c(C)=\begin{bmatrix}1\\0\\1\\1\\2\\1\\1\\0\\0\end{bmatrix}.$$

Because $\bar B$ is full rank, the LS solution of this matrix-vector STP equation is given by

$$X=(\bar B^{T}\bar B)^{-1}\bar B^{T}V_c(C)=\begin{bmatrix}0.2963\\0.0741\end{bmatrix}.$$

In this subsection we will explore the LS solutions of the following matrix-vector STP equation:

$$A\ltimes X\ltimes B=C,\tag{3.6}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{h\times k}$ are given matrices and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved. By Proposition 2.2, we know that when $m\mid h$, $l\mid k$, $\frac{nhl}{mrk}\in\mathbb{Z}^{+}$, $\beta=\gcd\{\frac{h}{m},r\}$, $\gcd\{\frac{k}{l},\beta\}=1$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, then $A$, $B$, and $C$ are compatible. At this time, STP equation (3.6) may have a solution belonging to $\mathbb{R}^{\frac{nhl}{mrk}}$.

In what follows, we assume that matrix-vector STP equation (3.6) always satisfies the compatibility conditions, and we will find the LS solutions of (3.6) on $\mathbb{R}^{\frac{nhl}{mrk}}$. Since $\frac{h}{m}$ is a factor of the dimension $\frac{nhl}{mrk}$ of $X$, it is easy to obtain the matrix-vector STP equation

$$A\ltimes X\ltimes B=(A\otimes I_{h/m})\ltimes X\ltimes B,$$

according to the multiplication rules of semi-tensor products. Let $A'=A\otimes I_{h/m}$; then matrix-vector STP equation (3.6) is transformed into the case of $m=h$, and, from the conclusions of the previous subsection, one can easily obtain the LS solution of (3.6).

    Below, we give an algorithm for finding the LS solutions of matrix-vector STP equation (3.6):

Step one: Check whether $A$, $B$, and $C$ are compatible, that is, whether $m\mid h$ and $l\mid k$ hold and whether $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$. If not, the equation has no solution.

Step two: Let $X\in\mathbb{R}^{p}$ with $p=\frac{nhl}{mrk}$, and let $A'=A\otimes I_{h/m}\in\mathbb{R}^{h\times\frac{nh}{m}}$. Take $\check A_1,\check A_2,\ldots,\check A_p\in\mathbb{R}^{h\times\frac{nh}{mp}}=\mathbb{R}^{h\times\frac{rk}{l}}$ to be the $p$ equal blocks of the matrix $A'$:

$$\check A_i=[A'_{(i-1)\frac{rk}{l}+1}\ A'_{(i-1)\frac{rk}{l}+2}\ \cdots\ A'_{i\frac{rk}{l}}],\quad i=1,\ldots,p,$$

where $A'_s$ is the $s$-th column of $A'$. Let

$$\check B_1,\check B_2,\ldots,\check B_p\in\mathbb{R}^{h\times k},$$

where

$$\check B_i=\check A_i(B\otimes I_{k/l})=[A'_{(i-1)\frac{rk}{l}+1}\ A'_{(i-1)\frac{rk}{l}+2}\ \cdots\ A'_{i\frac{rk}{l}}](B\otimes I_{k/l}),\quad i=1,\ldots,p.$$

Step three: Let

$$\bar B=\begin{bmatrix}\check B_{1,1}&\check B_{2,1}&\cdots&\check B_{p,1}\\\check B_{1,2}&\check B_{2,2}&\cdots&\check B_{p,2}\\\vdots&\vdots&&\vdots\\\check B_{1,k}&\check B_{2,k}&\cdots&\check B_{p,k}\end{bmatrix},$$

and calculate $V_c(C)$.

Step four: Solve the equation $\bar B^{T}\bar BX=\bar B^{T}V_c(C)$. If $\bar B$ is full rank, then $\bar B^{T}\bar B$ is invertible and the LS solution of matrix-vector STP equation (3.6) is given by

$$X=(\bar B^{T}\bar B)^{-1}\bar B^{T}V_c(C);$$

if $\bar B$ is not full rank, the LS solution of matrix-vector STP equation (3.6) is given by

$$X=(\bar B^{T}\bar B)^{+}\bar B^{T}V_c(C).$$
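The four steps can be collected into a single routine (a sketch with our own naming, assuming NumPy; it presumes the compatibility check of step one has already passed). It is exercised here on the data of Example 3.2 below:

```python
import numpy as np

def ls_vector_stp(A, B, C, p):
    """Steps two-four: form A' = A kron I_{h/m}, build B-bar block by block,
    and solve the normal equation with the Moore-Penrose inverse."""
    m, n = A.shape
    r, l = B.shape
    h, k = C.shape
    Ap = np.kron(A, np.eye(h // m))          # A' in R^{h x (nh/m)}
    w = Ap.shape[1] // p                     # block width rk/l
    BI = np.kron(B, np.eye(k // l))
    Bbar = np.column_stack(
        [(Ap[:, i*w:(i+1)*w] @ BI).flatten(order='F') for i in range(p)])
    v = C.flatten(order='F')                 # Vc(C)
    return np.linalg.pinv(Bbar.T @ Bbar) @ (Bbar.T @ v)

# Data of Example 3.2 below; the LS solution is [0.75, 0, 0.5].
A = np.array([[1., 0., 1., 1.]])
B = np.array([[2.], [0.]])
C = np.array([[1., 1.], [0., 2.], [1., 0.]])
assert np.allclose(ls_vector_stp(A, B, C, p=3), [0.75, 0., 0.5])
```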

    Example 3.2 Now, we shall explore the LS solutions of the matrix-vector STP equation AXB=C with the following coefficients:

$$A=[1\ 0\ 1\ 1],\qquad B=\begin{bmatrix}2\\0\end{bmatrix},\qquad C=\begin{bmatrix}1&1\\0&2\\1&0\end{bmatrix}.$$

    According to Example 2.1(2), we know that this matrix-vector STP equation has no exact solution. Then, we can investigate the LS solutions of this STP equation.

Step one: $m\mid h$, $l\mid k$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$ hold, so $A$, $B$, and $C$ are compatible; we proceed to the second step.

Step two: The matrix-vector STP equation may have an LS solution $X\in\mathbb{R}^{3}$, and

$$A'=A\otimes I_3=\begin{bmatrix}1&0&0&0&0&0&1&0&0&1&0&0\\0&1&0&0&0&0&0&1&0&0&1&0\\0&0&1&0&0&0&0&0&1&0&0&1\end{bmatrix}.$$

Let

$$\check A_1=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix},\quad\check A_2=\begin{bmatrix}0&0&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix},\quad\check A_3=\begin{bmatrix}0&1&0&0\\0&0&1&0\\1&0&0&1\end{bmatrix}$$

be the three equal blocks of the matrix $A'$. We have

$$\check B_1=\check A_1(B\otimes I_2)=\begin{bmatrix}2&0\\0&2\\0&0\end{bmatrix},\quad\check B_2=\check A_2(B\otimes I_2)=\begin{bmatrix}0&0\\0&0\\0&0\end{bmatrix},\quad\check B_3=\check A_3(B\otimes I_2)=\begin{bmatrix}0&2\\0&0\\2&0\end{bmatrix}.$$

Step three: Let

$$\bar B=\begin{bmatrix}2&0&0\\0&0&0\\0&0&2\\0&0&2\\2&0&0\\0&0&0\end{bmatrix},\qquad V_c(C)=\begin{bmatrix}1\\0\\1\\1\\2\\0\end{bmatrix}.$$

Step four: Because $\bar B$ is not full rank, the LS solution of this matrix-vector STP equation is given by

$$X=(\bar B^{T}\bar B)^{+}\bar B^{T}V_c(C)=\begin{bmatrix}0.7500\\0\\0.5000\end{bmatrix}.$$
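The numbers above can be verified mechanically; the following check (ours, assuming NumPy) confirms that $X=[0.75\ 0\ 0.5]^{T}$ satisfies the normal equation $\bar B^{T}\bar BX=\bar B^{T}V_c(C)$:

```python
import numpy as np

Bbar = np.array([[2., 0., 0.],
                 [0., 0., 0.],
                 [0., 0., 2.],
                 [0., 0., 2.],
                 [2., 0., 0.],
                 [0., 0., 0.]])
v = np.array([1., 0., 1., 1., 2., 0.])     # Vc(C)
X = np.array([0.75, 0., 0.5])
assert np.allclose(Bbar.T @ Bbar @ X, Bbar.T @ v)
```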

In this subsection we will explore the LS solutions of the following matrix STP equation:

$$A\ltimes X\ltimes B=C,\tag{4.1}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$ are given matrices and $X\in\mathbb{R}^{p\times q}$ is the matrix that needs to be solved. By Proposition 2.5, when $l\mid k$ and $r\mid n$, all matrices are compatible. At this time, matrix STP equation (4.1) may have solutions in $\mathbb{R}^{\frac{n}{\alpha}\times\frac{rk}{l\alpha}}$, where $\alpha$ is a common factor of $n$ and $\frac{rk}{l}$.

Now, we assume that $l\mid k$ and $r\mid n$, and we want to find the LS solutions of matrix STP equation (4.1) on $\mathbb{R}^{\frac{n}{\alpha}\times\frac{rk}{l\alpha}}$; the problem is as follows: given $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$, find $X^{*}\in\mathbb{R}^{p\times q}$ such that

$$\|A\ltimes X^{*}\ltimes B-C\|^2=\min_{X\in\mathbb{R}^{p\times q}}\|A\ltimes X\ltimes B-C\|^2.\tag{4.2}$$

By Proposition 2.6, matrix STP equation (4.1) can be rewritten as

$$(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)=V_c(C).\tag{4.3}$$

So, finding the LS solution of (4.1) is equivalent to finding $X\in\mathbb{R}^{p\times q}$ such that

$$\|(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)-V_c(C)\|^2=\min_{X\in\mathbb{R}^{p\times q}}\|(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)-V_c(C)\|^2.\tag{4.4}$$

    Then, we have the following theorem.

Theorem 4.1. When $\mathcal{B}\mathcal{A}=(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)$ is full rank and invertible, the LS solution of matrix STP equation (4.1) is given by

$$V_c(X)=(\mathcal{B}\mathcal{A})^{-1}V_c(C);$$

when $\mathcal{B}\mathcal{A}$ is not full rank, the LS solution of matrix STP equation (4.1) is given by

$$V_c(X)=(\mathcal{B}\mathcal{A})^{+}V_c(C).$$

Proof. Let $\mathcal{B}=B^{T}\otimes I_{km/l}$, $\mathcal{A}=I_q\otimes\bar A$, $\mathcal{X}=V_c(X)$, and $\mathcal{C}=V_c(C)$; then (4.4) can be rewritten as

$$\begin{aligned}\|(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)-V_c(C)\|^2&=\|\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C}\|^2=\mathrm{tr}\big[(\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C})^{T}(\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C})\big]\\&=\mathrm{tr}\big[(\mathcal{X}^{T}\mathcal{A}^{T}\mathcal{B}^{T}-\mathcal{C}^{T})(\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C})\big]\\&=\mathrm{tr}\big[\mathcal{X}^{T}\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{X}^{T}\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{C}-\mathcal{C}^{T}\mathcal{B}\mathcal{A}\mathcal{X}+\mathcal{C}^{T}\mathcal{C}\big].\end{aligned}$$

Because $\|A\ltimes X\ltimes B-C\|^2$ is a smooth function of the entries of $\mathcal{X}$, $\mathcal{X}$ is a minimum point if and only if

$$\frac{d}{d\mathcal{X}}\|\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C}\|^2=0.$$

Given that

$$\frac{d}{d\mathcal{X}}\|\mathcal{B}\mathcal{A}\mathcal{X}-\mathcal{C}\|^2=2\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{B}\mathcal{A}\mathcal{X}-2\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{C},$$

setting the derivative to zero, we have

$$\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{B}\mathcal{A}\mathcal{X}=\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{C}.$$

Thus, the minimum point of (4.2) is also the LS solution of matrix STP equation (4.1). That is to say, $\|A\ltimes X\ltimes B-C\|^2$ is smallest if and only if $\|(B^{T}\otimes I_{km/l})(I_q\otimes\bar A)V_c(X)-V_c(C)\|^2$ attains its minimum value, and the statement follows.
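Theorem 4.1 can be sketched numerically (our naming, assuming NumPy). The round trip below builds $C$ from a known $X$ at Example 2.2's sizes and recovers it, since the coefficient matrix is invertible there for generic data:

```python
import numpy as np
from math import lcm

def stp(A, B):
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

def ls_matrix_stp(A, B, C, alpha):
    """m = h case of Theorem 4.1: vectorize via (B^T kron I_{km/l})(I_q kron A-bar),
    apply the Moore-Penrose inverse, and undo Vc."""
    m, n = A.shape
    r, l = B.shape
    k = C.shape[1]
    p, q = n // alpha, (r * k) // (l * alpha)
    Abar = np.column_stack(
        [A[:, i*alpha:(i+1)*alpha].flatten(order='F') for i in range(p)])
    M = np.kron(B.T, np.eye(k * m // l)) @ np.kron(np.eye(q), Abar)
    x = np.linalg.pinv(M) @ C.flatten(order='F')
    return x.reshape((p, q), order='F')      # undo Vc

# Round trip: build C from a known X, then recover it.
rng = np.random.default_rng(4)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((2, 1))
X0 = rng.standard_normal((2, 3))
C = stp(stp(A, X0), B)
assert np.allclose(ls_matrix_stp(A, B, C, alpha=2), X0)
```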

Now, we shall examine the relationship between the LS solutions of different compatible sizes. Let $p_1\times q_1$ and $p_2\times q_2$ be two different compatible sizes with $s=\frac{p_2}{p_1}=\frac{q_2}{q_1}\in\mathbb{Z}$ and $s>1$. If $X\in\mathbb{R}^{p_1\times q_1}$, then $X\otimes I_{s}\in\mathbb{R}^{p_2\times q_2}$ and $A\ltimes(X\otimes I_{s})\ltimes B=A\ltimes X\ltimes B$, so we get the following formula:

$$\min_{X\in\mathbb{R}^{p_2\times q_2}}\|A\ltimes X\ltimes B-C\|^2\le\min_{X\in\mathbb{R}^{p_1\times q_1}}\|A\ltimes X\ltimes B-C\|^2.$$

Therefore, if we consider (4.1) to take the LS solutions among all compatible sizes, then it should be the LS solutions of the equation on $\mathbb{R}^{n\times\frac{rk}{l}}$.
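The embedding argument can be checked directly: replacing $X$ by $X\otimes I_s$ leaves $A\ltimes X\ltimes B$ unchanged (a small NumPy check of ours):

```python
import numpy as np
from math import lcm

def stp(A, B):
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((2, 1))
X = rng.standard_normal((2, 3))              # size 2x3 (alpha = 2)
X_big = np.kron(X, np.eye(2))                # size 4x6 (alpha = 1)
assert np.allclose(stp(stp(A, X), B), stp(stp(A, X_big), B))
```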

    Example 4.1 Now, we shall explore the LS solutions of matrix STP equation AXB=C with the following coefficients:

$$A=\begin{bmatrix}1&0&1&1\\0&1&0&0\end{bmatrix},\qquad B=\begin{bmatrix}2\\1\end{bmatrix},\qquad C=\begin{bmatrix}3&1&5\\0&2&0\end{bmatrix}.$$

    Example 2.2(1) shows that the matrix STP equation AXB=C has no exact solution. Now, we can investigate the LS solutions of this equation.

    First, given that A,B, and C are compatible, the matrix STP equation may have LS solutions on R2×3 or R4×6.

(1) The case that $\alpha=2$, $X\in\mathbb{R}^{2\times 3}$:

Let

$$\check A_1=[A_1\ A_2]=\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad\check A_2=[A_3\ A_4]=\begin{bmatrix}1&1\\0&0\end{bmatrix}.$$

Then, we have

$$\bar A=\begin{bmatrix}1&1\\0&0\\0&1\\1&0\end{bmatrix},\qquad\mathcal{A}=I_3\otimes\bar A=\begin{bmatrix}\bar A&0&0\\0&\bar A&0\\0&0&\bar A\end{bmatrix}\in\mathbb{R}^{12\times 6}.$$

Let

$$\mathcal{B}=B^{T}\otimes I_6=[2I_6\ \ I_6]\in\mathbb{R}^{6\times 12},\qquad\mathcal{C}=V_c(C)=[3\ 0\ 1\ 2\ 5\ 0]^{T}.$$

Because $\mathcal{B}\mathcal{A}$ is full rank, the LS solution of this matrix STP equation satisfies

$$V_c(X)=(\mathcal{B}\mathcal{A})^{-1}\mathcal{C}=[0\ \ 1.1667\ \ 1.0000\ \ 0.6667\ \ 0\ \ 2.6667]^{T},\quad\text{then}\quad X=\begin{bmatrix}0&1.0000&0\\1.1667&0.6667&2.6667\end{bmatrix}.$$

(2) The case that $\alpha=1$, $X\in\mathbb{R}^{4\times 6}$:

Let

$$\bar A=A=\begin{bmatrix}1&0&1&1\\0&1&0&0\end{bmatrix},\qquad\mathcal{A}=I_6\otimes\bar A\in\mathbb{R}^{12\times 24}$$

(a block-diagonal matrix with six copies of $\bar A$).
Let

$$\mathcal{B}=B^{T}\otimes I_6=[2I_6\ \ I_6]\in\mathbb{R}^{6\times 12},\qquad\mathcal{C}=V_c(C)=[3\ 0\ 1\ 2\ 5\ 0]^{T}.$$

Because $\mathcal{B}\mathcal{A}$ is not invertible, the LS solution of this matrix STP equation satisfies

$$V_c(X)=(\mathcal{B}\mathcal{A})^{+}\mathcal{C}=[0.4615\ 0.1538\ 0.7692\ 0\ 0.3077\ 0\ \ 0.2308\ 0.0769\ 0.3846\ 0\ 0.1538\ 0\ \ 0.4615\ 0.1538\ 0.7692\ 0\ 0.3077\ 0\ \ 0.4615\ 0.1538\ 0.7692\ 0\ 0.3077\ 0]^{T},$$

and reshaping by $V_c^{-1}$ gives the corresponding LS solution $X\in\mathbb{R}^{4\times 6}$.

This section focuses on the LS solutions of the following matrix STP equation:

$$A\ltimes X\ltimes B=C,\tag{4.5}$$

where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{h\times k}$ are given matrices and $X\in\mathbb{R}^{p\times q}$ is the matrix that needs to be solved. By Proposition 2.5, when $m\mid h$, $l\mid k$, $\gcd\{\frac{h/m}{\beta},\frac{\alpha}{\beta}\}=1$, $\gcd\{\beta,\frac{k}{l}\}=1$, and $\beta\mid r$, where $\beta=\gcd\{\frac{h}{m},\alpha\}$, all matrices are compatible. At this time, matrix STP equation (4.5) may have solutions in $\mathbb{R}^{\frac{nh}{m\alpha}\times\frac{rk}{l\alpha}}$, where $\alpha$ is a common factor of $\frac{nh}{m}$ and $\frac{rk}{l}$.

Now, we assume that matrix STP equation (4.5) always satisfies the compatibility conditions. Since $\frac{h}{m}$ is a factor of the row dimension $\frac{nh}{m\alpha}$ of $X$, it is easy to obtain the matrix STP equation

$$A\ltimes X\ltimes B=(A\otimes I_{h/m})\ltimes X\ltimes B,$$

according to the multiplication rules of the STP. Let $A'=A\otimes I_{h/m}$; then (4.5) is transformed into the case of $m=h$, and we can easily obtain the LS solution of matrix STP equation (4.5).

    Below, we give an algorithm for finding the LS solutions of matrix STP equation (4.5):

Step one: Check whether $m\mid h$ and $l\mid k$ hold. If not, the equation has no solution.

Step two: Find all values of $\alpha$ that satisfy $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $\gcd\{\frac{h/m}{\beta},\frac{\alpha}{\beta}\}=1$, $\gcd\{\beta,\frac{k}{l}\}=1$, and $\beta\mid r$, where $\beta=\gcd\{\frac{h}{m},\alpha\}$; correspondingly, find all compatible sizes $p\times q$ and perform the following steps for each compatible size.

Step three: Let $A'=A\otimes I_{h/m}\in\mathbb{R}^{h\times\frac{nh}{m}}$. We have

$$\bar A=[V_c(\check A_1),V_c(\check A_2),\ldots,V_c(\check A_p)]=\begin{bmatrix}A'_1&A'_{\alpha+1}&\cdots&A'_{(p-1)\alpha+1}\\A'_2&A'_{\alpha+2}&\cdots&A'_{(p-1)\alpha+2}\\\vdots&\vdots&&\vdots\\A'_{\alpha}&A'_{2\alpha}&\cdots&A'_{p\alpha}\end{bmatrix},$$

where $\check A_1,\check A_2,\ldots,\check A_p\in\mathbb{R}^{h\times\alpha}$ are the $p$ equal-size blocks of $A'$ and $A'_i$ is the $i$-th column of $A'$. Let $\mathcal{B}=B^{T}\otimes I_{kh/l}$, $\mathcal{A}=I_q\otimes\bar A$, $\mathcal{X}=V_c(X)$, and $\mathcal{C}=V_c(C)$.

Step four: Solve the equation $\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{B}\mathcal{A}\mathcal{X}=\mathcal{A}^{T}\mathcal{B}^{T}\mathcal{C}$. If $\mathcal{B}\mathcal{A}$ is full rank and invertible, the LS solution of matrix STP equation (4.5) is given by

$$V_c(X)=(\mathcal{B}\mathcal{A})^{-1}\mathcal{C};$$

if $\mathcal{B}\mathcal{A}$ is not full rank, the LS solution of matrix STP equation (4.5) is given by

$$V_c(X)=(\mathcal{B}\mathcal{A})^{+}\mathcal{C}.$$
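Steps three and four can be mechanized as below (a sketch, our own naming, assuming NumPy); rather than asserting a specific solution, the final check confirms LS optimality through the normal equations:

```python
import numpy as np

def ls_matrix_stp_general(A, B, C, alpha):
    """m != h case: replace A by A' = A kron I_{h/m}, then proceed as for m = h."""
    m, n = A.shape
    r, l = B.shape
    h, k = C.shape
    Ap = np.kron(A, np.eye(h // m))          # now row dimensions match
    n2 = Ap.shape[1]                         # nh/m
    p, q = n2 // alpha, (r * k) // (l * alpha)
    Abar = np.column_stack(
        [Ap[:, i*alpha:(i+1)*alpha].flatten(order='F') for i in range(p)])
    M = np.kron(B.T, np.eye(k * h // l)) @ np.kron(np.eye(q), Abar)
    x = np.linalg.pinv(M) @ C.flatten(order='F')
    return x.reshape((p, q), order='F'), M

# Example 4.2 sizes: A in R^{1x2}, B in R^{4x3}, C in R^{2x3}, alpha = 2 -> X in R^{2x2}.
rng = np.random.default_rng(5)
A = rng.standard_normal((1, 2))
B = rng.standard_normal((4, 3))
C = rng.standard_normal((2, 3))
X, M = ls_matrix_stp_general(A, B, C, alpha=2)
# LS optimality: the residual is orthogonal to the range of M (normal equations).
assert np.allclose(M.T @ (M @ X.flatten(order='F') - C.flatten(order='F')), 0)
```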

    Example 4.2 Now, we shall explore the LS solutions of matrix STP equation AXB=C with the following coefficients:

$$A=[1\ 0],\qquad B=\begin{bmatrix}1&0&1\\0&1&0\\1&0&1\\0&0&1\end{bmatrix},\qquad C=\begin{bmatrix}3&1&5\\0&2&0\end{bmatrix}.$$

    According to Example 2.2(2), matrix STP equation AXB=C has no exact solution. Now, we can investigate the LS solutions of this equation:

Step one: $m\mid h$ and $l\mid k$ hold.

Step two: $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $\gcd\{\frac{h/m}{\beta},\frac{\alpha}{\beta}\}=1$, $\gcd\{\beta,\frac{k}{l}\}=1$, and $\beta\mid r$ hold for $\beta=\gcd\{\frac{h}{m},\alpha\}$ and $\alpha\in\{1,2\}$. The matrix STP equation $AXB=C$ may have a solution $X\in\mathbb{R}^{2\times 2}$ or $\mathbb{R}^{4\times 4}$.

Step three: (1) The case that $\alpha=2$:

Let

$$A'=A\otimes I_2=\begin{bmatrix}1&0&0&0\\0&1&0&0\end{bmatrix}.$$

Then, we have

$$\bar A=[V_c(\check A_1),V_c(\check A_2)]=\begin{bmatrix}1&0\\0&0\\0&0\\1&0\end{bmatrix}.$$

Let

$$\mathcal{B}=B^{T}\otimes I_2=\begin{bmatrix}1&0&0&0&1&0&0&0\\0&1&0&0&0&1&0&0\\0&0&1&0&0&0&0&0\\0&0&0&1&0&0&0&0\\1&0&0&0&1&0&1&0\\0&1&0&0&0&1&0&1\end{bmatrix},\qquad\mathcal{A}=I_2\otimes\bar A\in\mathbb{R}^{8\times 4},\qquad\mathcal{C}=V_c(C)=\begin{bmatrix}3\\0\\1\\2\\5\\0\end{bmatrix}.$$

(2) The case that $\alpha=1$:

Here $A'=A\otimes I_2$ is as above, and

$$\bar A=[V_c(\check A_1),V_c(\check A_2),V_c(\check A_3),V_c(\check A_4)]=\begin{bmatrix}1&0&0&0\\0&1&0&0\end{bmatrix},$$

with $\mathcal{B}=B^{T}\otimes I_2$ as in case (1), $\mathcal{A}=I_4\otimes\bar A\in\mathbb{R}^{8\times 16}$ (a block-diagonal matrix with four copies of $\bar A$), and $\mathcal{C}=V_c(C)$ as in case (1).

Step four: Because $\mathcal{B}\mathcal{A}$ is not full rank, the LS solution of this matrix STP equation is obtained via the Moore-Penrose inverse as follows:

(1) The case that $\alpha=2$:

$$V_c(X)=(\mathcal{B}\mathcal{A})^{+}\mathcal{C}=[1.0000\ \ 0\ \ 1.0000\ \ 0]^{T},\quad\text{then}\quad X=\begin{bmatrix}1.0000&1.0000\\0&0\end{bmatrix}.$$

(2) The case that $\alpha=1$:

$$V_c(X)=(\mathcal{B}\mathcal{A})^{+}\mathcal{C}=[1.5000\ 0.5000\ 5.0000\ 0\ \ 1.5000\ 0.5000\ 1.0000\ 1.0000\ \ 0\ 0\ 0\ 0\ \ 0\ 0\ 0\ 0]^{T},\quad\text{then}\quad X=\begin{bmatrix}1.5000&1.5000&0&0\\0.5000&0.5000&0&0\\5.0000&1.0000&0&0\\0&1.0000&0&0\end{bmatrix}.$$

In this paper, we applied the semi-tensor product to the matrix equation $AXB=C$ and studied the LS solutions of the equation under the semi-tensor product. By applying the definition of the semi-tensor product, the equation was transformed into a matrix equation under the ordinary matrix product, which was then combined with the Moore-Penrose generalized inverse and matrix differentiation. The specific forms of the LS solutions were derived for both the matrix-vector equation and the matrix equation. Investigating the solutions of Sylvester equations under the semi-tensor product, as well as the corresponding LS solution problems, will be future research work.

No artificial intelligence tools were used in the creation of this article.

The work was supported in part by the National Natural Science Foundation (NNSF) of China under Grant 12301573 and in part by the Natural Science Foundation of Shandong under Grant ZR2022QA095.

    No potential conflict of interest was reported by the author.



    [1] H. Lin, T. Maekawa, C. Deng, Survey on geometric iterative methods and their applications, Comput. Aided Des., 95 (2018), 40–51. https://doi.org/10.1016/j.cad.2017.10.002 doi: 10.1016/j.cad.2017.10.002
    [2] M. Liu, B. Li, Q. Guo, C. Zhu, P. Hu, Y. Shao, Progressive iterative approximation for regularized least square bivariate B-spline surface fitting, J. Comput. Appl. Math., 327 (2018), 175–187. https://doi.org/10.1016/j.cam.2017.06.013 doi: 10.1016/j.cam.2017.06.013
    [3] Z. Tian, Y. Wang, N. C. Wu, Z. Liu, On the parameterized two-step iteration method for solving the matrix equation AXB=C, Appl. Math. Comput., 464 (2024), 128401. https://doi.org/10.1016/j.amc.2023.128401 doi: 10.1016/j.amc.2023.128401
    [4] N. C. Wu, C. Z. Liu, Q. Zuo, On the Kaczmarz methods based on relaxed greedy selection for solving matrix equation AXB=C, J. Comput. Appl. Math., 413 (2022), 114374. https://doi.org/10.1016/j.cam.2022.114374 doi: 10.1016/j.cam.2022.114374
    [5] Z. Tian, X. Li, Y. Dong, Z. Liu, Some relaxed iteration methods for solving matrix equation AXB=C, Appl. Math. Comput., 403 (2021), 126189. https://doi.org/10.1016/j.amc.2021.126189 doi: 10.1016/j.amc.2021.126189
    [6] F. Chen, T. Li, Two-step AOR iteration method for the linear matrix equation AXB=C, Comput. Appl. Math., 40 (2021), 89. https://doi.org/10.1007/s40314-021-01472-z doi: 10.1007/s40314-021-01472-z
    [7] Z. Liu, Z. Li, C. Ferreira, Y. Zhang, Stationary splitting iterative methods for the matrix equation AXB=C, Appl. Math. Comput., 378 (2020), 125195. https://doi.org/10.1016/j.amc.2020.125195 doi: 10.1016/j.amc.2020.125195
    [8] Y. Xu, Linear Algebra and Matrix Theory, Beijing, Higher Education Press, 1992.
    [9] Y. Ding, About matrix equations AXB=C, Math. Bull., 2 (1997), 43–45.
    [10] Q. Li, Numeric Analysis, Tsinghua University Press, Beijing, 2008.
    [11] Z. Peng, An iterative method for the least squares symmetric solution of the linear matrix equation AXB=C, Appl. Math. Comput., 170 (2005), 711–723. https://doi.org/10.1016/j.amc.2004.12.032 doi: 10.1016/j.amc.2004.12.032
    [12] Y. X. Peng, X. Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB=C, Appl. Math. Comput., 3 (2005), 763–777. https://doi.org/10.1016/j.amc.2003.11.030 doi: 10.1016/j.amc.2003.11.030
    [13] Y. Yuan, H. Dai, Generalized reflexive solutions of the matrix equation AXB=C and an associated optimal approximation problem, Math. Appl., 6 (2008), 1643–1649. https://doi.org/10.1016/j.camwa.2008.03.015 doi: 10.1016/j.camwa.2008.03.015
    [14] G. X. Huang, F. Ying, K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C, J. Comput. Appl. Math., 212 (2008), 231–244. https://doi.org/10.1016/j.cam.2006.12.005 doi: 10.1016/j.cam.2006.12.005
    [15] Y. Zhang, An iterative method for the bisymmetric least-squares solutions and the optimal approximation of the matrix equation AXB=C, Chin. J. Eng. Math., 4 (2009), 753–756. https://doi.org/10.3969/j.issn.1005-3085.2009.04.023 doi: 10.3969/j.issn.1005-3085.2009.04.023
    [16] Z. Peng, New matrix iterative methods for constraint solutions of the matrix equation AXB=C, J. Comput. Appl. Math., 3 (2010), 726–735. https://doi.org/10.1016/j.cam.2010.07.001 doi: 10.1016/j.cam.2010.07.001
    [17] X. Wang, Y. Li, L. Dai, On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix AXB=C, Comput. Math. Appl. Int. J., 65 (2013), 657–664. https://doi.org/10.1016/j.camwa.2012.11.010 doi: 10.1016/j.camwa.2012.11.010
    [18] T. Xu, M. Tian, Z. Liu, T. Xu, The Jacobi and Gauss-Seidel-type iteration methods for the matrix equation AXB=C, Appl. Math. Comput., 292 (2017), 63–75. https://doi.org/10.1016/j.amc.2016.07.026 doi: 10.1016/j.amc.2016.07.026
    [19] D. Cheng, Semi-tensor product of matrices and its application to Morgen's problem, Sci. China Inf. Sci., 44 (2001), 195–212. https://doi.org/10.1007/BF02714570 doi: 10.1007/BF02714570
    [20] D. Cheng, Y. Zhao, Semi-tensor product of matrices–-A convenient new tool, Sci. China Inf. Sci., 56 (2011), 2664–2674. https://doi.org/10.1360/972011-1262 doi: 10.1360/972011-1262
    [21] M. Ramadan, A. Bayoumi, Explicit and iterative methods for solving the matrix equation AV+BW=EVF+C, Asian J. Control, 13 (2015), 1070–1080. https://doi.org/10.1002/asjc.965 doi: 10.1002/asjc.965
    [22] H. Li, G. Zhao, M. Meng, J. Feng, A survey on applications of semi-tensor product method in engineering, Sci. China Inf. Sci., 61 (2018), 28–44. https://doi.org/10.1007/s11432-017-9238-1 doi: 10.1007/s11432-017-9238-1
    [23] J. E. Feng, J. Yao, P. Cui, Singular Boolean network: Semi-tensor product approach, Sci. China Inf. Sci., 56 (2013), 1–14. https://doi.org/10.1007/s11432-012-4666-8 doi: 10.1007/s11432-012-4666-8
    [24] Y. Yu, J. Feng, J. Pan, Ordinal potential game and its application in agent wireless networks, Control Decis., 32 (2017), 393–402. https://doi.org/10.13195/j.kzyjc.2016.0183 doi: 10.13195/j.kzyjc.2016.0183
    [25] M. Xu, Y. Wang, A. Wei, Robust graph coloring based on the matrix semi-tensor product with application to examination time tabling, Control Theory Technol., 2 (2014), 187–197. https://doi.org/10.1007/s11768-014-0153-7 doi: 10.1007/s11768-014-0153-7
    [26] H. Fan, J. Feng, M, Meng, B. Wang, General decomposition of fuzzy relations: Semi-tensor product approach, Fuzzy Sets Syst., 384 (2020), 75–90. https://doi.org/10.1016/j.fss.2018.12.012 doi: 10.1016/j.fss.2018.12.012
    [27] Y. Yan, D. Cheng, J. E. Feng, H. Li, J. Yue, Survey onapplications of algebraic statespace theory of logicalsystems to finite statemachines, Sci. China Inf. Sci., 66 (2023), 111201. https://doi.org/10.1007/s11432-022-3538-4 doi: 10.1007/s11432-022-3538-4
    [28] J. Yao, J. Feng, M. Meng, On solutions of the matrix equation AX=B with respect to semitensor product, J. Franklin Inst., 353 (2016), 1109–1131. https://doi.org/10.1016/j.jfranklin.2015.04.004 doi: 10.1016/j.jfranklin.2015.04.004
    [29] J. Wang, J. Feng, H. Huang, Solvability of the matrix equation AX2=B with semi-tensor product, Electorn. Res. Arch., 29 (2020), 2249–2267. https://doi.org/10.3934/era.2020114 doi: 10.3934/era.2020114
    [30] J. Wang, On Solutions of the matrix equation AlX=B with respect to MM-2 semi-tensor product, J. Math., 2021 (2021), 651434. https://doi.org/10.1155/2021/6651434 doi: 10.1155/2021/6651434
    [31] N. Wang, Solvability of the sylvester equation AXXB=C under left semi-tensor product, Math. Modell. Control, 2 (2022), 81–89. http://dx.doi.org/10.3934/mmc.2022010 doi: 10.3934/mmc.2022010
    [32] Y. Li, H. Li, X. Ding, G. Zhao, Leader-follower consensus of multiagent systems with time delays over finite fields, IEEE Trans. Cybern., 49 (2018), 3203–3208. https://doi.org/10.1109/TCYB.2018.2839892 doi: 10.1109/TCYB.2018.2839892
    [33] Z. Ji, J. Li, X. Zhou, F. Duan, T. Li, On solutions of matrix equation AXB=C under semi-tensor product, Linear Multilinear Algebra, (2019), 1650881. https://doi.org/10.1080/03081087.2019.1650881
    [34] R. Horn, C. Johnson, Topicsin Matrix Analysis, Cambridge University Press, New York, 1991.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)