
In this paper, we investigate the mixed solution of the reduced biquaternion matrix equation $\sum_{i=1}^{n}A_iX_iB_i=E$ with sub-matrix constraints. With the help of the LC-representation and the properties of the vector operator based on the semi-tensor product of reduced biquaternion matrices, the reduced biquaternion matrix equation (1.1) can be transformed into a system of linear equations. A systematic method, the GH-representation, is proposed to decrease the number of variables of a special unknown reduced biquaternion matrix and is applied to solve the least squares problem of the linear equations. Meanwhile, we give necessary and sufficient conditions for the compatibility of the reduced biquaternion matrix equation (1.1) under sub-matrix constraints. Numerical examples are given to demonstrate the results. The method proposed in this paper is applied to color image restoration.
Citation: Yimeng Xi, Zhihong Liu, Ying Li, Ruyu Tao, Tao Wang. On the mixed solution of reduced biquaternion matrix equation $\sum_{i=1}^{n}A_iX_iB_i=E$ with sub-matrix constraints and its application[J]. AIMS Mathematics, 2023, 8(11): 27901-27923. doi: 10.3934/math.20231427
Linear matrix equations play an important role in many fields, and many researchers have turned their attention to the solution of real or complex linear matrix equations [1,2,3,4]. Since W. R. Hamilton proposed quaternions and applied them to various aspects of physics, many models based on quaternion matrix equations have emerged. Meanwhile, with the application of quaternions and quaternion matrix equations in fields such as stability theory, cybernetics, quantum mechanics and color images [5,6,7,8,9,10,11], the related theoretical research has become more meaningful [12,13,14,15,16,17,18]. However, the non-commutativity of quaternion multiplication makes quaternions difficult to work with in many of these fields.
Reduced biquaternions, whose multiplication is commutative, were proposed by Schütte and Wenzel [19] in 1990. A reduced biquaternion is represented as

\begin{equation*} a = a_r+a_i\mathbf{i}+a_j\mathbf{j}+a_k\mathbf{k}, \end{equation*} |

where $\mathbf{i}^2 = \mathbf{k}^2 = -1$, $\mathbf{j}^2 = 1$, $\mathbf{ij} = \mathbf{ji} = \mathbf{k}$, $\mathbf{jk} = \mathbf{kj} = \mathbf{i}$, $\mathbf{ki} = \mathbf{ik} = -\mathbf{j}$ and $a_r, a_i, a_j, a_k\in\mathbb{R}$. A reduced biquaternion matrix $A\in\mathbb{H}_R^{m\times n}$ can be expressed as

\begin{equation*} A = A_r+A_i\mathbf{i}+A_j\mathbf{j}+A_k\mathbf{k} = A_1+A_2\mathbf{j}, \end{equation*} |

where $A_r$, $A_i$, $A_j$, $A_k$ are real matrices, $A_1 = A_r+A_i\mathbf{i}$, $A_2 = A_j+A_k\mathbf{i}$, and the Frobenius norm of $A$ is defined as

\begin{equation*} \|A\| = \sqrt{\|A_r\|^2+\|A_i\|^2+\|A_j\|^2+\|A_k\|^2}. \end{equation*} |
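Because the multiplication rules above are commutative, a reduced biquaternion $a = a_1+a_2\mathbf{j}$ (with complex components $a_1 = a_r+a_i\mathbf{i}$, $a_2 = a_j+a_k\mathbf{i}$) multiplies componentwise as $(a_1+a_2\mathbf{j})(b_1+b_2\mathbf{j}) = (a_1b_1+a_2b_2)+(a_1b_2+a_2b_1)\mathbf{j}$. The following Python sketch is our own illustration (not code from the paper); it encodes this complex-pair form and checks the defining relations:

```python
# Reduced biquaternion a = a1 + a2*j stored as a pair of Python complex numbers,
# following the decomposition a = (a_r + a_i*i) + (a_j + a_k*i)*j.
def rb_mul(a, b):
    a1, a2 = a
    b1, b2 = b
    # j^2 = 1 and every product commutes, so:
    return (a1 * b1 + a2 * b2, a1 * b2 + a2 * b1)

one = (1, 0)
i = (1j, 0)   # imaginary unit i
j = (0, 1)    # imaginary unit j
k = (0, 1j)   # k = ij

# Defining relations: i^2 = k^2 = -1, j^2 = 1, ij = k, jk = i, ki = -j
assert rb_mul(i, i) == (-1, 0) and rb_mul(k, k) == (-1, 0)
assert rb_mul(j, j) == (1, 0)
assert rb_mul(i, j) == k and rb_mul(j, k) == i and rb_mul(k, i) == (0, -1)
# Commutativity, unlike Hamilton quaternions:
assert rb_mul(i, j) == rb_mul(j, i)
```

This complex-pair view of $a = A_1+A_2\mathbf{j}$ is the same decomposition used for matrices above.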
As soon as the reduced biquaternions were proposed, they were applied to digital filters. Ueda and Takahashi [20] proved in 1993 that a first-order digital filter with reduced biquaternion coefficients can realize any real-coefficient digital filter of order less than four. With the in-depth study of reduced biquaternions, Pei et al. [21,22] studied the Fourier transform, the eigenvalues and the singular value decomposition of reduced biquaternion matrices, which were used in signal and image processing. In addition, the study of reduced biquaternion matrix equations has been a hot topic in recent years. Yuan et al. [23] obtained the Hermitian solution of the reduced biquaternion equation $(AXB, CXD) = (E, G)$ using the complex representation method, which transforms a problem over the reduced biquaternions into one over the complex field. Hidayet Hüda Kösal [24] obtained several special least squares solutions of the reduced biquaternion matrix equation $AX = B$ by using the $e_1$-$e_2$ representation, and successfully applied the least squares pure imaginary solutions to color image restoration. Chen et al. [25] presented the general solution and necessary and sufficient conditions for the existence of an $\eta$-(anti-)Hermitian solution to a constrained Sylvester-type generalized commutative quaternion matrix equation. From the above we can see that the study of matrix equations over the reduced biquaternions is very meaningful work. In this paper, we study a reduced biquaternion matrix equation with sub-matrix constraints.
Sub-matrix constraint problems originally arose from a practical subsystem expansion problem, so researchers have shown great interest in studying such problems under different sub-matrix constraints. For example, Gong et al. [26] discussed an anti-symmetric solution of $AXA^T = B$ for $X$ with a leading principal sub-matrix constraint. Zhao et al. [27] gave some necessary and sufficient conditions for the solvability of the matrix equation $AX = B$ with a bisymmetric central principal sub-matrix constraint. Li et al. [28] proposed an efficient algorithm to study the symmetric solution of the matrix equation $AXB+CYD = E$ with a special sub-matrix constraint. However, as far as we know, the sub-matrix problem for the reduced biquaternion matrix equation
\begin{equation} \sum\limits_{i = 1}^{n}A_iX_iB_i = E \end{equation} | (1.1)
has not been considered yet. In this paper, we will discuss the mixed solution of (1.1) with sub-matrix constraints.
Definition 1.1. If $n-q$ is even, $A = (a_{ij})\in\mathbb{H}_R^{n\times n}$, and

\begin{equation*} A_c(q) = (a_{ij})_{\frac{n-q}{2}+1\le i, j\le n-\frac{n-q}{2}}, \end{equation*} |

then $A_c(q)$ is called the $q$-order central principal matrix of $A$. Clearly, $A$ has only even order central principal matrices when $n$ is even, and only odd order central principal matrices when $n$ is odd.
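As a numerical illustration of Definition 1.1 (a sketch of ours, not part of the paper), the $q$-order central principal matrix simply crops $\frac{n-q}{2}$ rows and columns from each side:

```python
import numpy as np

def central_principal(A, q):
    """Return A_c(q) of Definition 1.1, assuming n - q is even."""
    n = A.shape[0]
    assert (n - q) % 2 == 0, "n - q must be even"
    s = (n - q) // 2          # rows/columns trimmed on each side
    return A[s:n - s, s:n - s]

A = np.arange(16).reshape(4, 4)
print(central_principal(A, 2))  # the middle 2x2 block of the 4x4 matrix
```

For an even $n$ only even $q$ keeps $n-q$ even, and likewise odd $n$ only admits odd $q$, which is the parity remark above.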
Definition 1.2. Suppose $A = (a_{ij})\in\mathbb{H}_R^{n\times n}$.

1) The matrix $A$ is Hermitian if $a_{ij} = \overline{a_{ji}}$; the set of reduced biquaternion Hermitian matrices is denoted by $\mathbb{SH}_R^{n\times n}$.

2) The matrix $A$ is centro-symmetric if $a_{ij} = a_{n-i+1, n-j+1}$; the set of reduced biquaternion centro-symmetric matrices is denoted by $\mathbb{CH}_R^{n\times n}$.

3) The matrix $A$ is Bi-hermitian if $a_{ij} = a_{n-i+1, n-j+1} = \overline{a_{ji}}$; the set of reduced biquaternion Bi-hermitian matrices is denoted by $\mathbb{BH}_R^{n\times n}$.
For a given set of matrices $\{X_{t_i}\}$, $(i = 1, 2, \cdots, n)$, where $X_{t_i}\in\mathbb{SH}_R^{t_i\times t_i}$, $i = 1, 2, \cdots, s$; $X_{t_i}\in\mathbb{CH}_R^{t_i\times t_i}$, $i = s+1, s+2, \cdots, m$; $X_{t_i}\in\mathbb{BH}_R^{t_i\times t_i}$, $i = m+1, m+2, \cdots, n$, suppose

\begin{equation*} \begin{split} \theta_1& = \left\{X\bigg|X\in\mathbb{SH}_R^{n\times n}, \ and\ X([1:t_i]) = X_{t_i}\right\}, \ (i = 1, 2, \cdots, s), \\ \theta_2& = \left\{X\bigg|X\in\mathbb{CH}_R^{n\times n}, \ and\ X_c(t_i) = X_{t_i}\right\}, \ (i = s+1, s+2, \cdots, m), \\ \theta_3& = \left\{X\bigg|X\in\mathbb{BH}_R^{n\times n}, \ and\ X_c(t_i) = X_{t_i}\right\}, \ (i = m+1, m+2, \cdots, n). \end{split} \end{equation*} |
Problem 1. Given $A_i\in\mathbb{H}_R^{m\times n}$, $B_i\in\mathbb{H}_R^{n\times q}$, $E\in\mathbb{H}_R^{m\times q}$, and $\{X_{t_i}\}$, $(i = 1, 2, \cdots, n)$, find a matrix group $(X_1, X_2, \cdots, X_n)$ satisfying

\begin{equation*} \left\|\sum\limits_{i = 1}^{n}A_iX_iB_i-E\right\| = \min, \end{equation*} |

and denote the set of such matrix groups by

\begin{equation*} S_Q = \left\{(X_1, X_2, \cdots, X_n)\bigg|\left\|\sum\limits_{i = 1}^{n}A_iX_iB_i-E\right\| = \min\right\}, \end{equation*} |

where $X_i\in\theta_1$, $i = 1, 2, \cdots, s$; $X_i\in\theta_2$, $i = s+1, s+2, \cdots, m$; $X_i\in\theta_3$, $i = m+1, m+2, \cdots, n$. Find $(X^Q_1, X^Q_2, \cdots, X^Q_n)\in S_Q$ such that

\begin{equation*} \left\|(X^Q_1, X^Q_2, \cdots, X^Q_n)\right\| = \min\limits_{(X_1, X_2, \cdots, X_n)\in S_Q}\left\|(X_1, X_2, \cdots, X_n)\right\|. \end{equation*} |

$(X^Q_1, X^Q_2, \cdots, X^Q_n)$ is called the minimal norm least squares mixed solution of (1.1). If $\min = 0$, $(X^Q_1, X^Q_2, \cdots, X^Q_n)$ is called the minimal norm mixed solution of (1.1).
Our main tool is the semi-tensor product (STP) of matrices, which is a generalization of the conventional matrix product. With the help of the STP of matrices, many meaningful problems have been resolved and scholars have obtained many constructive results [29,30,31,32]. Recently, the STP has been applied to the study of matrix equations [33,34,35]. However, the limitation of the expanded dimension leads to high computational complexity. This paper aims at providing an improved method based on the STP to reduce the computational complexity, as well as extending this method to solve reduced biquaternion matrix equations.
The main contributions of this paper include: (i) The algebraic expression of the isomorphism relation between complex matrices and reduced biquaternion matrices is defined by using the STP, which is called the LC-representation of a reduced biquaternion matrix. At the same time, the necessary and sufficient conditions for computable algebraic expressions are given by using the structure matrix of the reduced biquaternion product; (ii) A new method to reduce the number of variables of an unknown reduced biquaternion matrix with special structure is proposed, which is called the GH-representation. Compared with the H-representation method, the GH-representation method proposed in this paper is suitable for more special matrix forms. Meanwhile, compared with the method of element simplification in [36], the GH-representation method is more systematic.
Notations: $\mathbb{R}/\mathbb{H}_R$ represent the set of real numbers/reduced biquaternions. $\mathbb{R}^t$ represents the set of all real column vectors of order $t$. $\mathbb{R}^{m\times n}/\mathbb{H}_R^{m\times n}$ represent the sets of all $m\times n$ real matrices/reduced biquaternion matrices, respectively. $A^T$, $A^H$ and $A^{\dagger}$ represent the transpose, the conjugate transpose and the Moore-Penrose (MP) inverse of a matrix $A$, respectively. $\|\cdot\|$ represents the Frobenius norm of a matrix.
The rest of this paper is organized as follows: Section 2 provides the definition and properties of the STP on reduced biquaternions. The main results of this paper are contained in Section 3, in which we define the LC-representation of the reduced biquaternion matrix, and then the vector operator on reduced biquaternion matrices is proposed. The general expression of the least squares mixed solution of Problem 1 and the necessary and sufficient conditions for compatibility are also given in this section. Section 4 provides the corresponding algorithm for Problem 1, and two numerical examples are given to illustrate the effectiveness of the algorithm. Section 5 applies the proposed algorithm to color image restoration. Finally, we make some concluding remarks in Section 6.
In this section we give some necessary preliminaries that will be used throughout this paper, and we introduce some definitions and properties of STP on reduced biquaternion [37].
Definition 2.1. Let $A = (a_{ij})\in\mathbb{H}_R^{m\times n}$ and $B = (b_{ij})\in\mathbb{H}_R^{p\times q}$. Then the Kronecker product of $A$ and $B$ is defined to be the following block matrix:

\begin{equation*} A\otimes B = \begin{pmatrix}a_{11}B & \cdots & a_{1n}B\\ \vdots & \ddots & \vdots\\ a_{m1}B & \cdots & a_{mn}B\end{pmatrix}. \end{equation*} |
Lemma 2.2. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{n\times p}$, $C\in\mathbb{H}_R^{n\times s}$, and $D\in\mathbb{H}_R^{p\times t}$, then

1) $(A\otimes B)\otimes C = A\otimes(B\otimes C)$.

2) $(A\otimes B)(C\otimes D) = AC\otimes BD$.
Definition 2.3. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{p\times q}$ and $t = \mathrm{lcm}(n, p)$ be the least common multiple of $n$ and $p$. Then the left STP of $A$ and $B$, denoted by $A\ltimes B$, is defined as $A\ltimes B = (A\otimes I_{t/n})(B\otimes I_{t/p})$.
Definition 2.4. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{p\times q}$ and $t = \mathrm{lcm}(n, p)$ be the least common multiple of $n$ and $p$. Then the right STP of $A$ and $B$, denoted by $A\rtimes B$, is defined as $A\rtimes B = (I_{t/n}\otimes A)(I_{t/p}\otimes B)$.
Example 2.5. Let $A = \begin{pmatrix}2+\mathbf{i} & -1+\mathbf{j} & 1+\mathbf{k} & 2+\mathbf{i}+\mathbf{j}\end{pmatrix}$, $B = \begin{pmatrix}\mathbf{i} & \mathbf{j}\end{pmatrix}^T$, then

\begin{equation*} \begin{split} A\ltimes B& = A(B\otimes I_2) = A\begin{pmatrix}\mathbf{i} & 0\\ 0 & \mathbf{i}\\ \mathbf{j} & 0\\ 0 & \mathbf{j}\end{pmatrix} = \begin{pmatrix}-1+3\mathbf{i}+\mathbf{j} & 1-\mathbf{i}+2\mathbf{j}+2\mathbf{k}\end{pmatrix}\\ & = \begin{pmatrix}2+\mathbf{i} & -1+\mathbf{j}\end{pmatrix}\mathbf{i}+\begin{pmatrix}1+\mathbf{k} & 2+\mathbf{i}+\mathbf{j}\end{pmatrix}\mathbf{j}, \end{split} \end{equation*} |

\begin{equation*} \begin{split} A\rtimes B& = A(I_2\otimes B) = A\begin{pmatrix}\mathbf{i} & 0\\ \mathbf{j} & 0\\ 0 & \mathbf{i}\\ 0 & \mathbf{j}\end{pmatrix} = \begin{pmatrix}2\mathbf{i}-\mathbf{j} & 1+\mathbf{i}+\mathbf{j}+\mathbf{k}\end{pmatrix}\\ &\ne\begin{pmatrix}2+\mathbf{i} & -1+\mathbf{j}\end{pmatrix}\mathbf{i}+\begin{pmatrix}1+\mathbf{k} & 2+\mathbf{i}+\mathbf{j}\end{pmatrix}\mathbf{j}. \end{split} \end{equation*} |
If $n = p$, the STP of matrices reduces to the common matrix product, and the STP retains most of the properties of the common matrix product. From Example 2.5, we can see that one difference between the left STP and the right STP is that the right STP does not satisfy the block product law. This difference makes the left STP more useful, so in what follows we mainly discuss properties of the left STP.
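Both products in Definitions 2.3 and 2.4 are easy to realize numerically. The sketch below is our own illustration (not code from the paper); it stores reduced biquaternion matrices as complex pairs $A = A_1+A_2\mathbf{j}$ and reproduces Example 2.5:

```python
import numpy as np
from math import lcm

def rb_mul(A, B):
    # (A1 + A2 j)(B1 + B2 j) = (A1 B1 + A2 B2) + (A1 B2 + A2 B1) j, since j^2 = 1
    return (A[0] @ B[0] + A[1] @ B[1], A[0] @ B[1] + A[1] @ B[0])

def stp_left(A, B):
    # A (x) I_{t/n} times B (x) I_{t/p}, applied to both complex components
    t = lcm(A[0].shape[1], B[0].shape[0])
    r, s = t // A[0].shape[1], t // B[0].shape[0]
    Ae = (np.kron(A[0], np.eye(r)), np.kron(A[1], np.eye(r)))
    Be = (np.kron(B[0], np.eye(s)), np.kron(B[1], np.eye(s)))
    return rb_mul(Ae, Be)

def stp_right(A, B):
    t = lcm(A[0].shape[1], B[0].shape[0])
    r, s = t // A[0].shape[1], t // B[0].shape[0]
    Ae = (np.kron(np.eye(r), A[0]), np.kron(np.eye(r), A[1]))
    Be = (np.kron(np.eye(s), B[0]), np.kron(np.eye(s), B[1]))
    return rb_mul(Ae, Be)

# Example 2.5: A = (2+i, -1+j, 1+k, 2+i+j), B = (i, j)^T
A = (np.array([[2 + 1j, -1, 1, 2 + 1j]]), np.array([[0, 1, 1j, 1]]))
B = (np.array([[1j], [0]]), np.array([[0], [1]]))
L = stp_left(A, B)    # expect (-1+3i+j, 1-i+2j+2k)
R = stp_right(A, B)   # expect (2i-j, 1+i+j+k)
```

Here `L[0]` carries the $1, \mathbf{i}$ components and `L[1]` the $\mathbf{j}, \mathbf{k}$ components, matching the values computed in Example 2.5.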
Proposition 2.6. Assume the dimensions of the matrices involved in (1) and (2) meet the dimension requirements such that $\ltimes$ is well defined, then we have

(1) (Distributive Law)

\begin{equation*} \begin{cases}F\ltimes(aG\pm bH) = aF\ltimes G\pm bF\ltimes H, \\ (aG\pm bH)\ltimes F = aG\ltimes F\pm bH\ltimes F, \end{cases}\ \ a, b\in\mathbb{H}_R. \end{equation*} |

(2) (Associative Law)

\begin{equation*} (F\ltimes G)\ltimes H = F\ltimes(G\ltimes H). \end{equation*} |
Definition 2.7. For $A\in\mathbb{H}_R^{m\times n}$, let $a_t = (a_{1t}, a_{2t}, \cdots, a_{mt})$, $t = 1, 2, \cdots, n$, denote the columns of $A$ and $a^p = (a_{p1}, a_{p2}, \cdots, a_{pn})$, $p = 1, 2, \cdots, m$, denote the rows of $A$. We define

\begin{equation*} V_c(A) = (a_1, a_2, \cdots, a_n)^T\in\mathbb{H}_R^{mn\times 1}, \ \ V_r(A) = (a^1, a^2, \cdots, a^m)^T\in\mathbb{H}_R^{mn\times 1}, \end{equation*} |

and $V_r(A) = V_c(A^T)$.
Definition 2.8. A swap matrix $W_{[m, n]}$ is an $mn\times mn$ matrix defined as follows: its rows and columns are labeled by the double index $(i, j)$, the columns are arranged by the ordered multi-index $Id(i, j; m, n)$, and the rows are arranged by the ordered multi-index $Id(j, i; n, m)$. The element at position $[(I, J), (i, j)]$ is

\begin{equation} W_{[m, n]}^{(I, J)(i, j)} = \delta_{i, j}^{I, J} = \begin{cases}1, & I = i \ and \ J = j, \\ 0, & otherwise. \end{cases} \end{equation} | (2.1)
Next, we illustrate the construction of swap matrix through a simple example.
Example 2.9. Let $m = 3$, $n = 2$. The swap matrix $W_{[m, n]}$ can be constructed as follows: using the double index $(i, j)$ to label its columns and rows, the columns of $W_{[m, n]}$ are labeled by $Id(i, j; 3, 2)$, i.e., $(11, 12, 21, 22, 31, 32)$, and the rows of $W_{[m, n]}$ are labeled by $Id(j, i; 2, 3)$, i.e., $(11, 21, 31, 12, 22, 32)$. According to (2.1), we have

\begin{equation*} W_{[3, 2]} = \begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\end{pmatrix}, \end{equation*} |

where the rows are labeled $(11, 21, 31, 12, 22, 32)$ from top to bottom and the columns are labeled $(11, 12, 21, 22, 31, 32)$ from left to right.
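The entrywise rule (2.1) translates directly into code. The following numpy sketch (our own illustration) assembles the swap matrix and checks that it converts the row-stacking vectorization into the column-stacking one, which is the content of Proposition 2.10(2) below:

```python
import numpy as np

def swap_matrix(m, n):
    # Column (i, j) is ordered by Id(i, j; m, n); row (j, i) by Id(j, i; n, m).
    # Zero-based: column index i*n + j, row index j*m + i.
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            W[j * m + i, i * n + j] = 1.0
    return W

# Reproduce W_{[3,2]} from Example 2.9 and verify W_{[m,n]} V_r(A) = V_c(A)
W = swap_matrix(3, 2)
A = np.arange(6).reshape(3, 2)
Vr = A.flatten()            # row-stacking vectorization V_r(A)
Vc = A.flatten(order="F")   # column-stacking vectorization V_c(A)
assert np.array_equal(W @ Vr, Vc)
```

Since a swap matrix is a permutation matrix, $W_{[n, m]} = W_{[m, n]}^T$ undoes the conversion.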
The following are some useful pseudo-commutative properties of the swap matrix.
Proposition 2.10. Let $A\in\mathbb{H}_R^{m\times n}$, then

(1) $W_{[m, q]}\ltimes A\ltimes W_{[q, n]} = I_q\otimes A$,

(2) $W_{[m, n]}\ltimes V_r(A) = V_c(A)$, $W_{[n, m]}\ltimes V_c(A) = V_r(A)$.
As a kind of cross-dimensional matrix theory with far-reaching significance, the extended STP proposed above not only enriches reduced biquaternion matrix theory, but also provides a new method for solving reduced biquaternion matrix equations. Two classic conclusions on real matrix equations are stated as follows.
Lemma 2.11. [38] The least squares solutions of the linear system of equations $Ax = b$, with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$, can be represented as

\begin{equation*} x = A^{\dagger}b+(I-A^{\dagger}A)y, \end{equation*} |

where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm least squares solution of the linear system of equations $Ax = b$ is $A^{\dagger}b$.
Lemma 2.12. [38] The linear system of equations $Ax = b$, with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$, has a solution $x\in\mathbb{R}^n$ if, and only if,

\begin{equation*} AA^{\dagger}b = b. \end{equation*} |

In that case, it has the general solution

\begin{equation*} x = A^{\dagger}b+(I-A^{\dagger}A)y, \end{equation*} |

where $y\in\mathbb{R}^n$ is an arbitrary vector. The minimal norm solution of the linear system of equations $Ax = b$ is $A^{\dagger}b$.
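Both lemmas are directly computable with the Moore-Penrose inverse. A small numpy check (our own sketch, with a deliberately rank-deficient coefficient matrix) of the general solution and the minimal norm property of $A^{\dagger}b$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4, rank <= 3
b = rng.standard_normal(5)

x0 = np.linalg.pinv(A) @ b                                     # A^+ b
# General least squares solution: x0 + (I - A^+ A) y for arbitrary y
y = rng.standard_normal(4)
x1 = x0 + (np.eye(4) - np.linalg.pinv(A) @ A) @ y

# Both have the same (minimal) residual, but x0 has the smallest norm
r0 = np.linalg.norm(A @ x0 - b)
r1 = np.linalg.norm(A @ x1 - b)
assert np.isclose(r0, r1)
assert np.linalg.norm(x0) <= np.linalg.norm(x1) + 1e-12
```

The same two facts are what Theorem 3.10 below invokes after the reduced biquaternion problem has been rewritten as a real linear system.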
Definition 3.1. For $A = A_1+A_2\mathbf{j}\in\mathbb{H}_R^{m\times n}$, $A_s\in\mathbb{C}^{m\times n}\ (s = 1, 2)$, denote

\begin{equation*} \vec{A} = \begin{pmatrix}A_1\\ A_2\end{pmatrix}, \ \ E_2 = \begin{pmatrix}\pm1 & 0\\ 0 & \pm1\end{pmatrix}, \ \ M = \begin{pmatrix}1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0\end{pmatrix}. \end{equation*} |

Suppose there is a mapping $\varphi(A) = M\ltimes(I_2\otimes(E_2\ltimes\vec{A}))$ of $\mathbb{H}_R^{m\times n}$ into $\mathbb{C}^{2m\times 2n}$, and denote $\varphi^c(A) = \varphi(A)\ltimes\delta_2^1$ for $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{n\times p}$. If $\varphi$ satisfies

1) $\varphi(AB) = \varphi(A)\varphi(B)$,

2) $\varphi^c(AB) = \varphi(A)\varphi^c(B)$,

then $\varphi$ is called the LC-representation of the reduced biquaternion matrix.
The computable equivalent conditions of 1) and 2) in Definition 3.1 can be obtained by using the left STP.
Proposition 3.2. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{n\times p}$, then $\varphi$ is the LC-representation of the reduced biquaternion matrix if

1) $(M\otimes I_m)(I_2\otimes(E_2\ltimes\overrightarrow{AB})) = (M\otimes I_m)(M\otimes(E_2\ltimes\vec{A}))(I_2\otimes(E_2\ltimes\vec{B}))$,

2) $(M\otimes I_m)(\delta_2^1\otimes(E_2\ltimes\overrightarrow{AB})) = (M\otimes I_m)(M\otimes(E_2\ltimes\vec{A}))(\delta_2^1\otimes(E_2\ltimes\vec{B}))$.
Proof. The proof is straightforward. Here we only prove 2). By the LC-representation of the reduced biquaternion matrix, we know $\varphi^c(AB) = \varphi(A)\varphi^c(B)$ holds if, and only if,

\begin{equation*} M\ltimes(I_2\otimes(E_2\ltimes\overrightarrow{AB}))\ltimes\delta_2^1 = M\ltimes(I_2\otimes(E_2\ltimes\vec{A}))\left(M\ltimes(I_2\otimes(E_2\ltimes\vec{B}))\ltimes\delta_2^1\right), \end{equation*} |

which is equivalent to

\begin{equation*} \begin{split} (M\otimes I_m)(\delta_2^1\otimes(E_2\ltimes\overrightarrow{AB}))& = M\ltimes(I_2\otimes(E_2\ltimes\vec{A}))\ltimes M\ltimes(I_2\otimes(E_2\ltimes\vec{B}))\ltimes\delta_2^1\\ & = (M\otimes I_m)(M\otimes(E_2\ltimes\vec{A}))(\delta_2^1\otimes(E_2\ltimes\vec{B})). \end{split} \end{equation*} |
Example 3.3. Let $E_2 = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$. It is easy to compute

\begin{equation*} \varphi_1(A) = M\ltimes(I_2\otimes(E_2\ltimes\vec{A})) = \begin{pmatrix}A_1 & A_2\\ A_2 & A_1\end{pmatrix}. \end{equation*} |

If $E_2 = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$, we obtain

\begin{equation*} \varphi_2(A) = M\ltimes(I_2\otimes(E_2\ltimes\vec{A})) = \begin{pmatrix}A_1 & -A_2\\ -A_2 & A_1\end{pmatrix}. \end{equation*} |

Moreover, substituting $\varphi_1(A)$ and $\varphi_2(A)$ into Proposition 3.2 for inspection, we find that both meet the requirements, so $\varphi_1(A)$ and $\varphi_2(A)$ are both LC-representations of the reduced biquaternion matrix.
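The multiplicativity $\varphi_1(AB) = \varphi_1(A)\varphi_1(B)$ can also be spot-checked numerically. In the sketch below (our own illustration, with reduced biquaternion matrices stored as complex pairs $A = A_1+A_2\mathbf{j}$), $\varphi_1$ is just a $2\times 2$ block arrangement:

```python
import numpy as np

def rb_mul(A, B):
    # (A1 + A2 j)(B1 + B2 j) with j^2 = 1 and commutative multiplication
    return (A[0] @ B[0] + A[1] @ B[1], A[0] @ B[1] + A[1] @ B[0])

def phi1(A):
    # phi_1(A) = [[A1, A2], [A2, A1]], a complex 2m x 2n matrix
    A1, A2 = A
    return np.block([[A1, A2], [A2, A1]])

rng = np.random.default_rng(1)
rand = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, B = (rand(), rand()), (rand(), rand())

# phi1 maps the reduced biquaternion product to an ordinary complex product
assert np.allclose(phi1(rb_mul(A, B)), phi1(A) @ phi1(B))
```

The block identity is immediate to verify by hand: $\varphi_1(A)\varphi_1(B)$ has blocks $A_1B_1+A_2B_2$ and $A_1B_2+A_2B_1$, exactly the components of $AB$.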
It is not difficult to see that the product of reduced biquaternion matrices is more complicated to compute than that of complex matrices. Therefore, it is very meaningful to find the above isomorphic relationship between reduced biquaternion matrices and complex matrices, which realizes an equivalent transformation of the problem. Moreover, the above isomorphism is more general than the conclusion in [23].
Using the STP of reduced biquaternion matrices, we can obtain some new properties of vector operators over reduced biquaternions.
Proposition 3.4. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{n\times p}$, then

\begin{equation} V_r(AB) = A\ltimes V_r(B), \end{equation} | (3.1)

\begin{equation} V_c(AB) = A\rtimes V_c(B). \end{equation} | (3.2)
Proof. It can be seen from Example 2.5 that the left STP realizes the multiplication of block matrices. By means of this property, and since $t = \mathrm{lcm}(n, np) = np$, we have

\begin{equation*} \begin{split} A\ltimes V_r(B)& = (A\otimes I_p)V_r(B) = \begin{pmatrix}a_{11}\begin{pmatrix}b_{11}\\ \vdots\\ b_{1p}\end{pmatrix}+a_{12}\begin{pmatrix}b_{21}\\ \vdots\\ b_{2p}\end{pmatrix}+\cdots+a_{1n}\begin{pmatrix}b_{n1}\\ \vdots\\ b_{np}\end{pmatrix}\\ \vdots\\ a_{m1}\begin{pmatrix}b_{11}\\ \vdots\\ b_{1p}\end{pmatrix}+a_{m2}\begin{pmatrix}b_{21}\\ \vdots\\ b_{2p}\end{pmatrix}+\cdots+a_{mn}\begin{pmatrix}b_{n1}\\ \vdots\\ b_{np}\end{pmatrix}\end{pmatrix}\\ & = \begin{pmatrix}\mathrm{Row}_1(A)\mathrm{Col}_1(B)\\ \vdots\\ \mathrm{Row}_1(A)\mathrm{Col}_p(B)\\ \vdots\\ \mathrm{Row}_m(A)\mathrm{Col}_1(B)\\ \vdots\\ \mathrm{Row}_m(A)\mathrm{Col}_p(B)\end{pmatrix} = V_r(AB), \end{split} \end{equation*} |

so (3.1) holds. From Proposition 2.10, we get

\begin{equation*} V_c(AB) = W_{[m, p]}\ltimes V_r(AB) = W_{[m, p]}\ltimes A\ltimes V_r(B) = W_{[m, p]}\ltimes A\ltimes W_{[p, n]}\ltimes V_c(B) = (I_p\otimes A)\ltimes V_c(B) = A\rtimes V_c(B). \end{equation*} |
After straightforward calculation, it is not difficult to draw the following conclusions.
Proposition 3.5. Let $A\in\mathbb{H}_R^{m\times n}$, $B\in\mathbb{H}_R^{n\times p}$, then

\begin{equation} V_r(AB) = B^T\rtimes V_r(A), \end{equation} | (3.3)

\begin{equation} V_c(AB) = B^T\ltimes V_c(A). \end{equation} | (3.4)
Proposition 3.6. Let $A\in\mathbb{H}_R^{m\times n}$, $X\in\mathbb{H}_R^{n\times n}$, $B\in\mathbb{H}_R^{n\times p}$, then

\begin{equation} V_c(AXB) = (B^T\otimes A)V_c(X). \end{equation} | (3.5)

Proof. Using Propositions 3.4 and 3.5, we get

\begin{equation*} V_c(AXB) = B^T\ltimes V_c(AX) = B^T\ltimes A\rtimes V_c(X) = (B^T\otimes I_m)(I_n\otimes A)V_c(X) = (B^T\otimes A)V_c(X). \end{equation*} |
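Proposition 3.6 is the reduced biquaternion analogue of the classical identity $\mathrm{vec}(AXB) = (B^T\otimes A)\mathrm{vec}(X)$ for the column-stacking operator. A quick numerical check over the reals (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 4))

vc = lambda M: M.flatten(order="F")   # column-stacking vectorization V_c
assert np.allclose(vc(A @ X @ B), np.kron(B.T, A) @ vc(X))
```

It is this identity that turns the matrix equation (1.1) into a linear system in $V_c(X_i)$.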
Using the LC-representation and the vector operator over the reduced biquaternions, we can transform a reduced biquaternion matrix equation into a system of linear equations over the complex field. In this subsection, we discuss Problem 1 using the above methods. According to the special structure of the solution in Problem 1, we propose a systematic method to simplify the calculation.
Definition 3.7. [36] Consider a $p$-dimensional real matrix subspace $\mathbb{X}\subset\mathbb{R}^{n\times n}$. Assume $e_1, e_2, \ldots, e_p$ form a basis of $\mathbb{X}$, which means that for any $X\in\mathbb{X}$ we have $X = x_1e_1+x_2e_2+\cdots+x_pe_p$. Define $H = [V_c(e_1), V_c(e_2), \ldots, V_c(e_p)]$ and express $\Psi(X) = V_c(X)$ in the form

\begin{equation*} \Psi(X) = V_c(X) = H\widetilde{X}, \end{equation*} |

where $\widetilde{X} = (x_1, x_2, \ldots, x_p)^T$. Then $H\widetilde{X}$ is called an $\mathcal{H}$-representation of $\Psi(X)$, and $H$ is called an $\mathcal{H}$-representation matrix of $\Psi(X)$.
Remark 3.8. The main advantage of the $\mathcal{H}$-representation is the ability to transform a matrix-valued equation into a standard vector-valued equation with independent coordinates, allowing well-known results in linear system theory to be applied in our study. However, some reduced biquaternion matrices with special structures cannot be represented by the $\mathcal{H}$-representation in a way that achieves the purpose of variable reduction. This is our motivation for giving the $\mathcal{GH}$-representation.
Definition 3.9. Consider a reduced biquaternion matrix subspace $\mathbb{X}\subset\mathbb{H}_R^{n\times n}$. For each $X = X_1+X_2\mathbf{i}+X_3\mathbf{j}+X_4\mathbf{k}\in\mathbb{X}$, denote $\chi(X) = [X_1\ X_2\ X_3\ X_4]$ and express

\begin{equation*} \Phi(X) = V_c(\chi(X)) = H\widehat{X}. \end{equation*} |

Then $H\widehat{X}$ is called a $\mathcal{GH}$-representation of $\Phi(X)$ and $H$ is called a $\mathcal{GH}$-representation matrix of $\Phi(X)$, where

\begin{equation*} H = \begin{pmatrix}H_{X_1} & O & O & O\\ O & H_{X_2} & O & O\\ O & O & H_{X_3} & O\\ O & O & O & H_{X_4}\end{pmatrix}, \ \ \widehat{X} = \begin{pmatrix}\widetilde{X}_1\\ \widetilde{X}_2\\ \widetilde{X}_3\\ \widetilde{X}_4\end{pmatrix}, \end{equation*} |

and $H_{X_s}$ represents the $\mathcal{H}$-representation matrix of $X_s$, $(s = 1, 2, 3, 4)$.
In this paper we consider the Hermitian matrices, centro-symmetric matrices and Bi-hermitian matrices over the reduced biquaternions. From Definition 3.9, we know that the $\mathcal{GH}$-representation matrix can be constructed from some corresponding real matrices, so we are interested in the $\mathcal{H}$-representation of the related real matrices.
We can see from Definition 1.2 that when $X = X_1+X_2\mathbf{i}+X_3\mathbf{j}+X_4\mathbf{k}$ is Hermitian, $X_1$ is symmetric and $X_2, X_3, X_4$ are anti-symmetric. Denote by $\mathbb{S}_n$ the set of symmetric matrices and by $\mathbb{S}_n^-$ the set of anti-symmetric matrices. For $\mathbb{X} = \mathbb{S}_n$, we select a standard basis throughout this paper as

\begin{equation} \left\{E_{11}, E_{21}, \cdots, E_{n1}, E_{22}, \cdots, E_{n2}, \cdots, E_{nn}\right\} = \left\{E_{ij}, \ 1\le j\le i\le n\right\}, \end{equation} | (3.6)

where $E_{ij} = (e_{lk})_{n\times n}$ with $e_{ij} = e_{ji} = 1$ and the other entries being zero. Similarly, for $\mathbb{X} = \mathbb{S}_n^-$, we select a standard basis as

\begin{equation} \left\{\widetilde{E}_{21}, \widetilde{E}_{31}, \cdots, \widetilde{E}_{n1}, \widetilde{E}_{32}, \cdots, \widetilde{E}_{n2}, \cdots, \widetilde{E}_{n, n-1}\right\} = \left\{\widetilde{E}_{ij}, \ 1\le j < i\le n\right\}, \end{equation} | (3.7)

where $\widetilde{E}_{ij} = (\widetilde{e}_{lk})_{n\times n}$ with $\widetilde{e}_{ij} = -\widetilde{e}_{ji} = 1$ and the other entries being zero. After the above bases are determined, for $\mathbb{X} = \mathbb{S}_n/\mathbb{S}_n^-$ we have

\begin{equation*} \begin{split} \widetilde{X}_{\mathbb{S}_n}& = (x_{11}, x_{21}, \cdots, x_{n1}, x_{22}, \cdots, x_{n2}, \cdots, x_{nn})^T, \\ \widetilde{X}_{\mathbb{S}_n^-}& = (x_{21}, \cdots, x_{n1}, x_{32}, \cdots, x_{n2}, \cdots, x_{n, n-1})^T. \end{split} \end{equation*} |

Note that $\Psi(X_{\mathbb{S}_n/\mathbb{S}_n^-})$ is a column vector formed by all elements of $X_{\mathbb{S}_n}/X_{\mathbb{S}_n^-}$, while $\widetilde{X}_{\mathbb{S}_n}$ and $\widetilde{X}_{\mathbb{S}_n^-}$ are column vectors formed by the distinct nonzero elements of $X_{\mathbb{S}_n}$ and $X_{\mathbb{S}_n^-}$, respectively. We denote the $\mathcal{H}$-representation matrix corresponding to $\mathbb{X} = \mathbb{S}_n$ by $H_n$, and $H_{-n}$ refers to the $\mathcal{H}$-representation matrix corresponding to $\mathbb{X} = \mathbb{S}_n^-$.
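With the basis (3.6), the $\mathcal{H}$-representation matrix $H_n$ has the vectors $V_c(E_{ij})$ as its columns and reduces a symmetric unknown from $n^2$ to $n(n+1)/2$ variables. The following sketch (our own illustration) builds $H_n$ and verifies $V_c(X) = H_n\widetilde{X}$:

```python
import numpy as np

def h_sym(n):
    # Columns are V_c(E_ij) for the basis {E_ij, 1 <= j <= i <= n} of (3.6)
    cols = []
    for j in range(n):
        for i in range(j, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            cols.append(E.flatten(order="F"))
    return np.column_stack(cols)

n = 4
rng = np.random.default_rng(3)
X = rng.standard_normal((n, n))
X = X + X.T                                   # symmetric test matrix
# x_tilde lists the distinct entries x_ij, 1 <= j <= i <= n, in basis order
x_tilde = np.array([X[i, j] for j in range(n) for i in range(j, n)])
assert h_sym(n).shape == (n * n, n * (n + 1) // 2)
assert np.allclose(h_sym(n) @ x_tilde, X.flatten(order="F"))
```

$H_{-n}$ is built the same way from the basis (3.7), with $n(n-1)/2$ columns.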
Similarly, we use the above ideas to consider two other classes of special matrices. For $\mathbb{X} = \mathbb{C}_s^{n\times n}$, we can select the following standard bases:

\begin{equation} \left\{F_{11}, F_{21}, \cdots, F_{n1}, F_{12}, F_{22}, \cdots, F_{n2}, \cdots, F_{n, \frac{n}{2}}\right\} = \left\{F_{ij}, 1\le i\le n, 1\le j\le \frac{n}{2}\right\}, \ n\ is\ even, \end{equation} | (3.8)

\begin{equation} \left\{F_{11}, F_{21}, \cdots, F_{n1}, F_{12}, \cdots, F_{n, \frac{n-1}{2}}\right\}\cup\left\{F_{1, \frac{n+1}{2}}, \cdots, F_{\frac{n+1}{2}, \frac{n+1}{2}}\right\} = \left\{F_{ij}, 1\le i\le n, 1\le j\le \frac{n-1}{2}\right\}\cup\left\{F_{ij}, 1\le i\le \frac{n+1}{2}, j = \frac{n+1}{2}\right\}, \ n\ is\ odd, \end{equation} | (3.9)

where $F_{ij} = (f_{lk})_{n\times n}$ with $f_{ij} = f_{n-i+1, n-j+1} = 1$ and the other entries being zero. After the basis is determined, we have:

\begin{equation*} \widetilde{X}_{C_s^e} = \left(x_{11}, x_{21}, \cdots, x_{n1}, x_{12}, x_{22}, \cdots, x_{n2}, \cdots, x_{n, \frac{n}{2}}\right)^T, \ n\ is\ even, \end{equation*} |

\begin{equation*} \widetilde{X}_{C_s^o} = \left(x_{11}, x_{21}, \cdots, x_{n1}, x_{12}, x_{22}, \cdots, x_{n2}, \cdots, x_{1, \frac{n-1}{2}}, \cdots, x_{n, \frac{n-1}{2}}, \cdots, x_{\frac{n+1}{2}, \frac{n+1}{2}}\right)^T, \ n\ is\ odd. \end{equation*} |

We denote the $\mathcal{H}$-representation matrices corresponding to $\mathbb{X} = \mathbb{C}_s^{n\times n}$ by $H_{C_s^e}$ ($n$ even) and $H_{C_s^o}$ ($n$ odd).
For the Bi-hermitian matrices, we have a similar discussion to the Hermitian case. The components of a Bi-hermitian matrix fall into two kinds of sets: one is the set of matrices corresponding to the real part, denoted by $\mathbb{BR}_n$, and the other is the set of matrices corresponding to the imaginary parts, denoted by $\mathbb{BI}_n$. For $\mathbb{X} = \mathbb{BR}_n$, we can select a standard basis as

\begin{equation} \left\{B_{11}, B_{21}, \cdots, B_{n1}, B_{22}, \cdots, B_{n-1, 2}, \cdots, B_{\frac{n}{2}, \frac{n}{2}}, B_{\frac{n}{2}+1, \frac{n}{2}}\right\} = \left\{B_{ij}, 1\le j\le \frac{n}{2}, j\le i\le n-j+1\right\}, \ n\ is\ even, \end{equation} | (3.10)

\begin{equation} \left\{B_{11}, B_{21}, \cdots, B_{n1}, B_{22}, \cdots, B_{n-1, 2}, \cdots, B_{\frac{n+1}{2}, \frac{n+1}{2}}\right\} = \left\{B_{ij}, 1\le j\le \frac{n+1}{2}, j\le i\le n-j+1\right\}, \ n\ is\ odd, \end{equation} | (3.11)
where B_{ij} = (b_{lk})_{n\times n} with b_{ij} = b_{n-i+1, n-j+1} = b_{ji} = 1 and the other entries are zero. Based on the above basis, we have
\begin{equation*} X_{\mathbb{BR}} = \left\{x_{11}, x_{21}, \cdots, x_{n1}, x_{22}, \cdots, x_{n-1, 2}, \cdots, x_{\frac{n}{2}, \frac{n}{2}}, x_{\frac{n}{2}+1, \frac{n}{2}}\right\}, \ n\ is\ even, \end{equation*} |
\begin{equation*} X_{\mathbb{BR}} = \left\{x_{11}, x_{21}, \cdots, x_{n1}, x_{22}, \cdots, x_{n-1, 2}, \cdots, x_{\frac{n+1}{2}, \frac{n+1}{2}}\right\}, \ n\ is\ odd. \end{equation*} |
For \mathbb{X} = \mathbb{BI}_n , we can select the following standard basis
\begin{equation} \left\{\widetilde{B}_{21}, \widetilde{B}_{31}, \cdots, \widetilde{B}_{n-1, 1}, \widetilde{B}_{32}, \cdots, \widetilde{B}_{n-2, 2}, \cdots, \widetilde{B}_{\frac{n}{2}, \frac{n}{2}-1}, \widetilde{B}_{\frac{n}{2}+1, \frac{n}{2}-1}\right\}, \ n\ is\ even, \end{equation} | (3.12) |
\begin{equation} \left\{\widetilde{B}_{21}, \cdots, \widetilde{B}_{n-1, 1}, \widetilde{B}_{32}, \cdots, \widetilde{B}_{n-2, 2}, \cdots, \widetilde{B}_{\frac{n+1}{2}, \frac{n-1}{2}}\right\}, \ n\ is\ odd, \end{equation} | (3.13) |
where \widetilde{B}_{ij} = (\widetilde{b}_{lk})_{n\times n} with \widetilde{b}_{ij} = \widetilde{b}_{n-i+1, n-j+1} = -\widetilde{b}_{ji} = 1 and the other entries are zero. Based on the above basis, we have
\begin{equation*} X_{\mathbb{BI}} = \left\{x_{21}, x_{31}, \cdots, x_{n-1, 1}, x_{32}, \cdots, x_{n-2, 2}, \cdots, x_{\frac{n}{2}+1, \frac{n}{2}-1}\right\}, \ n\ is\ even, \end{equation*} |
\begin{equation*} X_{\mathbb{BI}} = \left\{x_{21}, \cdots, x_{n-1, 1}, x_{32}, \cdots, x_{n-2, 2}, \cdots, x_{\frac{n+1}{2}, \frac{n-1}{2}}\right\}, \ n\ is\ odd. \end{equation*} |
We denote the \mathcal{H} -representation matrix corresponding to \mathbb{X} = \mathbb{BR}_n by H_{BR^e} / H_{BR^o} and denote the \mathcal{H} -representation matrix corresponding to \mathbb{X} = \mathbb{BI}_n by H_{BI^e} / H_{BI^o} . Based on our earlier discussion, we now turn our attention to Problem 1. The following notation is necessary to derive a solution to Problem 1.
\begin{equation} \begin{split} \acute{\theta}_1& = \left\{X\bigg|X\in\mathbb{SH}_R^{n\times n}, \ and\ X\left([1:t_i]\right) = 0_{t_i\times t_i}\right\}\\ \acute{\theta}_2& = \left\{X\bigg|X\in\mathbb{CH}_R^{n\times n}, \ and\ X_c\left(t_i\right) = 0_{t_i\times t_i}\right\}\\ \acute{\theta}_3& = \left\{X\bigg|X\in\mathbb{BH}_R^{n\times n}, \ and\ X_c\left(t_i\right) = 0_{t_i\times t_i}\right\} \end{split} \end{equation} | (3.14) |
\begin{equation} \begin{split} &\widehat{X}_i = \begin{pmatrix}X_{t_i}\ &\ 0\ &\ 0\\ 0\ &\ 0\ &\ 0\\0\ &\ 0\ &\ 0\end{pmatrix}\in\mathbb{SH}_R^{n\times n}, \ where\ \widehat{X}_i([1:t_i]) = X_{t_i}, \ \left(i = 1, 2, \cdots, s\right), \\ &\widehat{X}_i = \begin{pmatrix}0\ &\ 0\ &\ 0\\ 0\ &\ X_{t_i}\ &\ 0\\0\ &\ 0\ &\ 0\end{pmatrix}\in \mathbb{CH}_R^{n\times n}, \ where\ \widehat{X}_{i_c}(t_i) = X_{t_i}, \ \left(i = s+1, s+2, \cdots, m\right), \\ &\widehat{X}_i = \begin{pmatrix}0\ &\ 0\ &\ 0\\ 0\ &\ X_{t_i}\ &\ 0\\0\ &\ 0\ &\ 0\ \end{pmatrix}\in \mathbb{BH}_R^{n\times n}, \ where\ \widehat{X}_{i_c}(t_i) = X_{t_i}, \ \left(i = m+1, m+2, \cdots, n\right). \end{split} \end{equation} | (3.15)
Then, the subspace \theta_1, \ \theta_2 and \theta_3 can be written as
\begin{equation*} \begin{split} \theta_1& = \acute{\theta}_1+\widehat{X}_i, \ i = 1, 2, \cdots, s, \\ \theta_2& = \acute{\theta}_2+\widehat{X}_i, \ i = s+1, s+2, \cdots, m, \\ \theta_3& = \acute{\theta}_3+\widehat{X}_i, \ i = m+1, m+2, \cdots, n, \end{split} \end{equation*} |
and Problem 1 is converted into the following problem.
Find a matrix group \left(\ddot{X}_1, \ddot{X}_2, \cdots, \ddot{X}_n\right) such that
\begin{equation} \sum\limits_{i = 1}^nA_i\ddot{X}_iB_i = \widehat{E}, \end{equation} | (3.16) |
where \ddot{X}_i\in\acute{\theta}_1, \ (i = 1, 2, \cdots, s) ; \ddot{X}_i\in\acute{\theta}_2, \ (i = s+1, s+2, \cdots, m) ; \ddot{X}_i\in\acute{\theta}_3, \ (i = m+1, m+2, \cdots, n) , \widehat{E} = E-\sum\limits_{i = 1}^nA_i\widehat{X}_iB_i .
The solution of Problem 1 is expressed as
\begin{equation} X_i = \ddot{X}_i+\widehat{X}_i, \ \ \ i = 1, 2, \cdots, n. \end{equation} | (3.17) |
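The splitting (3.15)-(3.17) just embeds the known sub-matrix in the appropriate position and solves for a complementary part whose corresponding sub-block is zero. A sketch of ours (over the reals, for illustration only) of the two embeddings used above:

```python
import numpy as np

def embed_leading(Xt, n):
    # \widehat{X} with X_t as leading principal sub-matrix, zeros elsewhere
    Xh = np.zeros((n, n))
    t = Xt.shape[0]
    Xh[:t, :t] = Xt
    return Xh

def embed_central(Xt, n):
    # \widehat{X} with X_t as central principal sub-matrix (n - t even)
    t = Xt.shape[0]
    s = (n - t) // 2
    Xh = np.zeros((n, n))
    Xh[s:n - s, s:n - s] = Xt
    return Xh

Xt = np.ones((2, 2))
print(embed_leading(Xt, 4))   # known block in the top-left corner
print(embed_central(Xt, 4))   # known block in the center
```

The right-hand side is then corrected to $\widehat{E} = E-\sum_{i=1}^{n}A_i\widehat{X}_iB_i$ and the final solution is recovered as $X_i = \ddot{X}_i+\widehat{X}_i$.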
We first state the following notations for Problem 1. For A_i\in \mathbb{H}_R^{m\times n} , B_i\in \mathbb{H}_R^{n\times q} , let
\gamma_i = \varphi(\left(B_i^T\otimes A_i\right)), \ \zeta = \begin{pmatrix}I_{n^2}\ &\ {{\bf i}}*I_{n^2}\ &\ O\ &\ O\\ O\ &\ O\ &\ I_{n^2}\ &\ {{\bf i}}*I_{n^2}\end{pmatrix}, |
U = \begin{pmatrix} \mathrm{Re}(\gamma_1\zeta)\mathrm{H}_1&\cdots &\mathrm{Re}(\gamma_s\zeta)\mathrm{H}_1& \mathrm{Re}(\gamma_{s+1}\zeta)\mathrm{H}_2 &\cdots&\mathrm{Re}(\gamma_{m}\zeta)\mathrm{H}_2& \mathrm{Re}(\gamma_{m+1}\zeta)\mathrm{H}_3&\cdots &\mathrm{Re}(\gamma_{n}\zeta)\mathrm{H}_3\\ \mathrm{Im}(\gamma_1\zeta)\mathrm{H}_1&\cdots &\mathrm{Im}(\gamma_s\zeta)\mathrm{H}_1& \mathrm{Im}(\gamma_{s+1}\zeta)\mathrm{H}_2 &\cdots&\mathrm{Im}(\gamma_{m}\zeta)\mathrm{H}_2& \mathrm{Im}(\gamma_{m+1}\zeta)\mathrm{H}_3&\cdots &\mathrm{Im}(\gamma_{n}\zeta)\mathrm{H}_3 \end{pmatrix},
\mathrm{H}_1 = \begin{pmatrix}\widehat{H}_n&O&O& O\\ O&\widehat{H}_{-n}&O&O\\ O&O&\widehat{H}_{-n}&O \\ O&O&O&\widehat{H}_{-n}\end{pmatrix}, |
\mathrm{H}_2 = \begin{pmatrix}\widehat{H}_{C_s^e/C_s^o}&O &O&O\\ O&\widehat{H}_{C_s^e/C_s^o} &O&O\\ O&O&\widehat{H}_{C_s^e/C_s^o} &O\\ O&O&O&\widehat{H}_{C_s^e/C_s^o}\ \end{pmatrix}, |
\mathrm{H}_3 = \left(\begin{matrix}\widehat{H}_{BR^e/BR^o}&O &O&O\\ 0&\widehat{H}_{BI^e/BI^o}&O&O\\ O&O& \widehat{H}_{BI^e/BI^o}&O\\ O&O&O&\widehat{H}_{BI^e/BI^o} \end{matrix}\right). |
Theorem 3.10. Suppose A_i\in \mathbb{H}_R^{m\times n} , B_i\in \mathbb{H}_R^{n\times q} , E\in \mathbb{H}_R^{m\times q} , then the set S_Q of Problem 1 can be expressed as
\begin{equation} S_Q = \left\{\left(\ddot{X_1}, \ddot{X_2}, \cdots, \ddot{X_n}\right)\bigg| \begin{pmatrix}\widehat{\ddot{X}_1} \\ \widehat{\ddot{X}_2}\\ \vdots \\ \widehat{\ddot{X}_n}\end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}+(I-U^{\dagger}U)y\right\}, \end{equation} | (3.18) |
where \ddot{X}_i\in\acute{\theta}_1, \ (i = 1, 2, \cdots, s) ; \ddot{X}_i\in\acute{\theta}_2, \ (i = s+1, s+2, \cdots, m) ; \ddot{X}_i\in\acute{\theta}_3, \ (i = m+1, m+2, \cdots, n) , and y is an arbitrary real vector of appropriate order. Furthermore, the minimal norm least squares constrained mixed solution \left(\ddot{X}^Q_1, \ddot{X}^Q_2, \cdots, \ddot{X}^Q_n\right)\in S_Q satisfies
\begin{equation} \begin{pmatrix}\widehat{\ddot{X}_1^Q} \\ \widehat{\ddot{X}_2^Q}\\ \vdots \\ \widehat{\ddot{X}_n^Q}\end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}. \end{equation} | (3.19)
Proof. In order to facilitate our description of Problem 1, \varphi is designated as \varphi_1 in Example 3.3, and
\begin{equation*} \begin{split} &\left\|\sum\limits_{i = 1}^nA_i\ddot{X}_iB_i-\widehat{E} \right\| = \left\|\sum\limits_{i = 1}^n\left(B_i^T\otimes A_i\right)V_c(\ddot{X}_i)-V_c(\widehat{E})\right\|\\ & = \left\|\sum\limits_{i = 1}^n\varphi(\left(B_i^T\otimes A_i\right))\varphi^c(V_c(\ddot{X}_i))-\varphi^c(V_c(\widehat{E})) \right\|\\ & = \left\|\sum\limits_{i = 1}^n\varphi(\left(B_i^T\otimes A_i\right))\begin{pmatrix}I_{n^2}\ &\ {{\bf i}}*I_{n^2}\ &\ O\ &\ O\\ O\ &\ O\ &\ I_{n^2}\ &\ {{\bf i}}*I_{n^2}\end{pmatrix}V_c(\chi(\ddot{X}_i))-\varphi^c(V_c(\widehat{E}))\right\|\\ & = \left\|\sum\limits_{i = 1}^n\gamma_i\zeta V_c(\chi(\ddot{X}_i))-\varphi^c(V_c(\widehat{E}))\right\|\\ & = \left\|\sum\limits_{i = 1}^n(\mathrm{Re}(\gamma_i\zeta)+\mathrm{Im}(\gamma_i\zeta){{\bf i}}) V_c(\chi(\ddot{X}_i))-(\mathrm{Re}(\varphi^c(V_c(\widehat{E}))) +\mathrm{Im}(\varphi^c(V_c(\widehat{E}))){{\bf i}})\right\|\\ & = \left\|\begin{pmatrix} \sum\limits_{i = 1}^n\mathrm{Re}(\gamma_i\zeta)V_c(\chi(\ddot{X}_i))-\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \sum\limits_{i = 1}^n\mathrm{Im}(\gamma_i\zeta))V_c(\chi(\ddot{X}_i))-\mathrm{Im}(\varphi^c(V_c(\widehat{E}))) \end{pmatrix}\right\|. \end{split} \end{equation*} |
Using the \mathcal{GH} -representation matrix of a special matrix, we can further simplify the above expression. As \ddot{X_i} has certain constraints, we only need to remove the constrained part of the bases of the \mathcal{GH} -representation matrix of \ddot{X_i} , i = 1, 2, \cdots, n . Denote
V_c(\chi(\ddot{X}_i)) = \begin{pmatrix}\widehat{H}_n & 0 & 0 & 0\\ 0 & \widehat{H}_{-n} & 0 & 0\\0 & 0 & \widehat{H}_{-n} & 0 \\0 & 0 & 0 & \widehat{H}_{-n}\end{pmatrix} \begin{pmatrix}\widetilde{\ddot{X}_i^1} \\ \widetilde{\ddot{X}_i^2}\\ \widetilde{\ddot{X}_i^3} \\ \widetilde{\ddot{X}_i^4}\end{pmatrix} = \mathrm{H}_1\widehat{\ddot{X}_i}, \ \ \ i = 1, 2, \cdots, s.
V_c(\chi(\ddot{X}_i)) = \begin{pmatrix}\widehat{H}_{C_s^e/C_s^o} & 0 & 0 & 0\\ 0 & \widehat{H}_{C_s^e/C_s^o} & 0 & 0\\0 & 0 & \widehat{H}_{C_s^e/C_s^o} & 0\\0 & 0 & 0 & \widehat{H}_{C_s^e/C_s^o}\end{pmatrix} \begin{pmatrix}\widetilde{\ddot{X}_i^1} \\ \widetilde{\ddot{X}_i^2}\\ \widetilde{\ddot{X}_i^3} \\ \widetilde{\ddot{X}_i^4}\end{pmatrix} = \mathrm{H}_2\widehat{\ddot{X}_i}, \ i = s+1, s+2, \cdots, m.
V_c(\chi(\ddot{X}_i)) = \begin{pmatrix}\widehat{H}_{BR^e/BR^o} & 0 & 0 & 0\\ 0 & \widehat{H}_{BI^e/BI^o} & 0 & 0\\0 & 0 & \widehat{H}_{BI^e/BI^o} & 0\\0 & 0 & 0 & \widehat{H}_{BI^e/BI^o} \end{pmatrix} \begin{pmatrix}\widetilde{\ddot{X}_i^1} \\ \widetilde{\ddot{X}_i^2}\\ \widetilde{\ddot{X}_i^3} \\ \widetilde{\ddot{X}_i^4}\end{pmatrix} = \mathrm{H}_3\widehat{\ddot{X}_i}, \ i = m+1, m+2, \cdots, n.
Further, we can get
= \left\|\begin{pmatrix} \sum\limits_{i = 1}^s\mathrm{Re}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) +\sum\limits_{i = s+1}^m\mathrm{Re}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) +\sum\limits_{i = m+1}^n\mathrm{Re}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) -\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \sum\limits_{i = 1}^s\mathrm{Im}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) +\sum\limits_{i = s+1}^m\mathrm{Im}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) +\sum\limits_{i = m+1}^n\mathrm{Im}(\gamma_i\zeta) V_c(\chi(\ddot{X}_i)) -\mathrm{Im}(\varphi^c(V_c(\widehat{E}))) \end{pmatrix}\right\|
\begin{array}{l} = \left\|\begin{pmatrix}\sum\limits_{i = 1}^s\mathrm{Re}(\gamma_i\zeta)\mathrm{H}_1 \widehat{\ddot{X}_i} +\sum\limits_{i = s+1}^m\mathrm{Re}(\gamma_i\zeta)\mathrm{H}_2 \widehat{\ddot{X}_i} +\sum\limits_{i = m+1}^n\mathrm{Re}(\gamma_i\zeta)\mathrm{H}_3 \widehat{\ddot{X}_i} -\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \sum\limits_{i = 1}^s\mathrm{Im}(\gamma_i\zeta)\mathrm{H}_1 \widehat{\ddot{X}_i} +\sum\limits_{i = s+1}^m\mathrm{Im}(\gamma_i\zeta)\mathrm{H}_2 \widehat{\ddot{X}_i} +\sum\limits_{i = m+1}^n\mathrm{Im}(\gamma_i\zeta)\mathrm{H}_3 \widehat{\ddot{X}_i} -\mathrm{Im}(\varphi^c(V_c(\widehat{E}))) \end{pmatrix}\right\| \\ = \left\|U\begin{pmatrix}\widehat{\ddot{X}_1} \\ \widehat{\ddot{X}_2}\\ \vdots \\ \widehat{\ddot{X}_n}\end{pmatrix}-\begin{pmatrix}\mathrm{Re} (\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\| . \end{array}
Thus,
\left\|\sum\limits_{i = 1}^nA_i\ddot{X}_iB_i-\widehat{E}\right\| = \min,
if, and only if,
\left\|U\begin{pmatrix}\widehat{\ddot{X}_1} \\ \widehat{\ddot{X}_2}\\ \vdots \\ \widehat{\ddot{X}_n}\end{pmatrix}- \begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\| = \min.
For the real matrix equation
U \begin{pmatrix}\widehat{\ddot{X}_1} \\ \widehat{\ddot{X}_2}\\ \vdots \\ \widehat{\ddot{X}_n}\end{pmatrix} = \begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix},
according to Lemma 2.11, its least squares mixed solutions can be represented as
\begin{equation*} \label{cxe1} \begin{pmatrix} \widehat{\ddot{X}_1}\\ \widehat{\ddot{X}_2}\\ \vdots\\ \widehat{\ddot{X}_n} \end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}+(I-U^{\dagger}U)y, \end{equation*} |
where y is an arbitrary real vector of appropriate order, and the minimal norm least squares mixed solution \left(X^Q_1, X^Q_2, \cdots, X^Q_n\right)\in S_Q of (1.1) satisfies
\begin{equation*} \label{cxe2} \begin{pmatrix} \widehat{\ddot{X}_1^Q}\\ \widehat{\ddot{X}_2^Q}\\ \vdots\\ \widehat{\ddot{X}_n^Q} \end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix} .\ \end{equation*} |
Therefore, (3.18) and (3.19) can be obtained.
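The two formulas above can be illustrated on a small hypothetical real system, with the Moore-Penrose pseudoinverse U^{\dagger} computed by NumPy's `pinv`; taking y = 0 gives the minimal norm member of the solution family (a sketch, not the paper's full algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rank-deficient coefficient matrix U and right-hand side b
# (b plays the role of the stacked Re/Im vector in the theorem).
U = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))  # rank 3
b = rng.standard_normal(8)

U_pinv = np.linalg.pinv(U)
x_min = U_pinv @ b                               # minimal norm least squares solution
y = rng.standard_normal(5)                       # arbitrary real vector
x_gen = x_min + (np.eye(5) - U_pinv @ U) @ y     # general least squares solution

# Every member of the family attains the same (minimal) residual ...
assert np.isclose(np.linalg.norm(U @ x_min - b), np.linalg.norm(U @ x_gen - b))
# ... and x_min has the smallest Euclidean norm among them.
assert np.linalg.norm(x_min) <= np.linalg.norm(x_gen) + 1e-12
```

The term (I-U^{\dagger}U)y ranges over the null space of U , which is why it changes the solution but not the residual.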
Corollary 3.11. Suppose A_i\in \mathbb{H}_R^{m\times n} , B_i\in \mathbb{H}_R^{n\times p} , C\in\mathbb{H}_R^{m\times p} , i = 1, 2, \cdots, n . Then (1.1) has a solution satisfying (3.16) if, and only if,
\begin{equation} \left(UU^{\dagger}-I\right)\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix} = 0. \end{equation} (3.20)
The set S_L of the general solution is
\begin{equation*} S_L = \left\{\left(\ddot{X_1}, \ddot{X_2}, \cdots, \ddot{X_n}\right)\bigg|\begin{pmatrix} \widehat{\ddot{X}_1}\\ \widehat{\ddot{X}_2}\\ \vdots\\ \widehat{\ddot{X}_n} \end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}+(I-U^{\dagger}U)y\right\}, \end{equation*} |
where y is an arbitrary real vector of appropriate order and the minimal norm solution \left(\ddot{X}^L_1, \ddot{X}^L_2, \cdots, \ddot{X}^L_n\right)\in S_L satisfies
\begin{equation} \begin{pmatrix} \widehat{\ddot{X}_1^L}\\ \widehat{\ddot{X}_2^L}\\ \vdots\\ \widehat{\ddot{X}_n^L} \end{pmatrix} = U^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}, \end{equation} (3.21)
Here U and \begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix} are given in Theorem 3.10.
Proof. Since
\begin{array}{rl} \left\|\sum\limits_{i = 1}^nA_iX_iB_i-C\right\| & = \left\|U\begin{pmatrix} \widehat{\ddot{X}_1}\\ \widehat{\ddot{X}_2}\\ \vdots\\ \widehat{\ddot{X}_n} \end{pmatrix}- \begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\| = \left\|UU^{\dagger}U \begin{pmatrix} \widehat{\ddot{X}_1}\\ \widehat{\ddot{X}_2}\\ \vdots\\ \widehat{\ddot{X}_n} \end{pmatrix}- \begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\|\\ & = \left\|UU^{\dagger}\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}-\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\|\\ & = \left\|\left(UU^{\dagger}-I\right)\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\|, \end{array}
then
\begin{align*} &\left\|\sum\limits_{i = 1}^nA_iX_iB_i-C\right\| = 0 \Longleftrightarrow \left\|\left(UU^{\dagger}-I\right)\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix}\right\| = 0 \\ &\Longleftrightarrow \left(UU^{\dagger}-I\right)\begin{pmatrix}\mathrm{Re}(\varphi^c(V_c(\widehat{E})))\\ \mathrm{Im}(\varphi^c(V_c(\widehat{E})))\end{pmatrix} = {0} \end{align*} |
that is, (3.20) holds. Moreover, the expressions of the general solution and the minimal norm mixed solution can be obtained by Lemma 2.12.
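Condition (3.20) is easy to check numerically; a sketch with a hypothetical rank-deficient U , testing one right-hand side inside the column space of U and one outside it:

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2
U_pinv = np.linalg.pinv(U)

def consistent(b, tol=1e-10):
    # Solvability test (3.20): (U U^+ - I) b = 0 up to round-off.
    return np.linalg.norm((U @ U_pinv - np.eye(U.shape[0])) @ b) < tol

b_ok = U @ rng.standard_normal(4)      # lies in the column space of U
b_bad = b_ok + rng.standard_normal(6)  # generic perturbation leaves the column space

assert consistent(b_ok)
assert not consistent(b_bad)
```

Geometrically, UU^{\dagger} is the orthogonal projector onto the column space of U , so (3.20) says exactly that the right-hand side lies in that column space.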
A real vector representation of reduced biquaternion matrices based on STP was proposed in [34] and used to solve for the anti-Hermitian solution of the reduced biquaternion matrix equation \sum\limits_{i = 1}^nA_iXB_i = C . In this section, we take the Hermitian solution of the reduced biquaternion matrix equation AXB = C as an example to illustrate the advantages of our algorithm.
Example 4.1. For A_l\in\mathbb{H}_R^{m\times n} , B_l\in\mathbb{H}_R^{n\times p} , l = 1, 2, 3 , X_1\in\theta_1 , X_2\in\theta_2 , X_3\in\theta_3 , let m = n = p = 5K , t_i = 3K , K = 1:10 . Then, we compute
E = A_1X_1B_1+A_2X_2B_2+A_3X_3B_3.
By Algorithm 1, we obtain the computed solution [X_1^{*}, X_2^{*}, X_3^{*}] . Denote the error between the computed solution and the exact solution as \varepsilon = \log_{10}\left\|[X_1, X_2, X_3]-[X_1^{*}, X_2^{*}, X_3^{*}]\right\| ; \varepsilon is recorded in Figure 1.
Algorithm 1 Calculate the minimal norm least squares mixed solution of reduced biquaternion matrix equation (1.1).
Require: A_i\in\mathbb{H}_R^{m\times n}, \ B_i\in\mathbb{H}_R^{n\times p}, \ C\in\mathbb{H}_R^{m\times p} , \widehat{H}_n/\widehat{H}_{-n} , \widehat{H}_{C_s^e}/\widehat{H}_{C_s^o} , \widehat{H}_{BR^e/BR^o}/\widehat{H}_{BI^e/BI^o} , i = 1, 2, \cdots, n ;
Ensure: \left(\widehat{\ddot{X}_1}, \widehat{\ddot{X}_2}, \cdots, \widehat{\ddot{X}_n}\right) ;
1: Fix the form of \psi satisfying Proposition 3.2;
2: Calculate the matrices \gamma, \ \zeta, \ \mathrm{H}_1, \ \mathrm{H}_2, \ \mathrm{H}_3, \ U , where the form of \zeta depends on the choice of \varphi ;
3: Calculate the set S_Q of Problem 1 according to (3.18);
4: Calculate the minimal norm least squares mixed solution \left(X_1, X_2, \cdots, X_n\right) that satisfies (3.19);
5: return \left(\widehat{\ddot{X}_1}, \widehat{\ddot{X}_2}, \cdots, \widehat{\ddot{X}_n}\right) ;
Algorithm 2 Calculate the minimal norm Hermitian solution of reduced biquaternion matrix equation AXB = C .
Require: A\in\mathbb{H}_R^{m\times n}, \ B\in\mathbb{H}_R^{n\times p}, \ C\in\mathbb{H}_R^{m\times p} ; H_n/H_{-n} ;
Ensure: \varphi^c(V_c(X)) ;
1: Fix the form of \psi satisfying Proposition 3.2 and calculate \zeta ;
2: Calculate the \mathcal{GH} -representation matrix of Hermitian matrices, denoted by \mathrm{H}_h ;
3: Calculate V = B^T\otimes A ;
4: Calculate the minimal norm Hermitian solution X\in\mathbb{SH}_R^{n\times n} satisfying \varphi^c(V_c(X)) = \mathrm{H}_h(\varphi(V)\mathrm{H}_h)^{\dagger}\varphi^c(V_c(C)) ;
5: return \varphi^c(V_c(X)) ;
Example 4.2. For A\in\mathbb{H}_R^{m\times n} , B\in\mathbb{H}_R^{n\times p} , X\in\mathbb{SH}_R^{n\times n} , let m = n = p = 2K , K = 1:9 . Then, we compute
\begin{equation} C = AXB. \end{equation} (4.1)
For coefficient matrices of the reduced biquaternion equation (4.1) with different orders, we compute the unique solution X_{\varsigma} by the method in this paper and by the method in [34]. Denote \xi = \log_{10}\left\|X-X_{\varsigma}\right\| , and record \xi and the CPU times of the two methods, respectively. Detailed results are shown in Figure 2.
Remark 4.3. We make some explanations for the above examples.
1) It can be seen from Figure 1 that the order of magnitude of \varepsilon is below -9 , which verifies the effectiveness of our algorithm.
2) As seen from Figure 2, when calculating the Hermitian solution of the reduced biquaternion matrix equation (4.1) by the two methods, the errors between the obtained solution and the exact solution are very small. Compared with the method in [34], the method in this paper has an absolute advantage in calculation time. Moreover, as K increases, the memory occupied by the method in [34] grows relatively large, which makes it infeasible for reduced biquaternion matrix equations of large dimension. The advantage of our proposed algorithm is thus clear.
In the process of image acquisition, an image is often affected by external conditions and the surrounding environment, resulting in degraded image quality. For example, underwater images are severely affected by the particular physical and chemical characteristics of underwater conditions. It is well known that the first encounters with digital image restoration in the engineering community were in the area of astronomical imaging. With the progress of society, color image restoration technology has been applied in many fields.
Image restoration is the process of removing and minimizing degradations in an observed image. A linear discrete model of image restoration is the matrix-vector equation
g = Kf+n,
where g is an observed image, f is the true or ideal image, n is additive noise, and K is a matrix that represents the blurring phenomena. The methods used in image restoration aim to construct an approximation to f given g , K and, in some cases, statistical information about the noise. However, in most cases, the noise n is unknown. We wish to find f' such that
\left\|n\right\| = \left\|Kf'-g\right\| = \min\left\|Kf-g\right\|.
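A noise-free sketch of this least squares restoration on a hypothetical one-dimensional signal, with a simple neighbour-averaging matrix standing in for the blur operator K (real blurs, such as the motion blur used later, are more ill-conditioned):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
f_true = rng.random(n)                 # hypothetical true image (vectorised)

# Illustrative circulant blur: average each pixel with its two neighbours.
I = np.eye(n)
K = (I + np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1)) / 3.0

g = K @ f_true                         # observed image (no additive noise here)
f_prime = np.linalg.pinv(K) @ g        # least squares restoration f'

assert np.allclose(f_prime, f_true, atol=1e-8)
```

With additive noise, f' minimises the residual \left\|Kf-g\right\| but no longer equals the true image exactly.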
In [21], Pei proposed to encode the three channel components of a color image on the three imaginary parts of a pure reduced biquaternion. That is,
q(x, y) = r(x, y){{\bf i}}+g(x, y){{\bf j}}+b(x, y){{\bf k}},
where r(x, y) , g(x, y) and b(x, y) are the red, green, and blue values of the pixel (x, y) , respectively. Thus, a color image with m rows and n columns can be represented by a pure imaginary reduced biquaternion matrix
Q = (q_{ij})_{m\times n} = R{{\bf i}}+G{{\bf j}}+B{{\bf k}}, \ q_{ij}\in\mathbb{H}_R.
Since then, the reduced biquaternion representation of a color image has attracted great attention. Many researchers have applied reduced biquaternion matrices to problems of color image processing [24,34,39,40] owing to their ability to treat the three color channels holistically without losing color information. The effectiveness of the proposed method is tested by a practical example.
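As a side illustration of why the commutative structure helps, the sketch below implements reduced biquaternion multiplication (assuming the usual rules {{\bf i}}^2 = {{\bf k}}^2 = -1 , {{\bf j}}^2 = 1 , {{\bf k}} = {{\bf i}}{{\bf j}} ) and checks it against the decomposition into two complex components via the idempotents (1\pm{{\bf j}})/2 , which is the idea behind the \mathcal{L_C} -representation:

```python
import numpy as np

def rbq_mul(p, q):
    # Direct product of reduced biquaternions p = (a, b, c, d) ~ a+bi+cj+dk,
    # using i^2 = k^2 = -1, j^2 = 1, ij = ji = k, jk = kj = i, ki = ik = -j.
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f + c*g - d*h,
            a*f + b*e + c*h + d*g,
            a*g + c*e - b*h - d*f,
            a*h + d*e + b*g + c*f)

def to_complex_pair(p):
    # Decomposition p = z1*(1+j)/2 + z2*(1-j)/2 with complex z1, z2.
    a, b, c, d = p
    return complex(a + c, b + d), complex(a - c, b - d)

def from_complex_pair(z1, z2):
    s, t = (z1 + z2) / 2, (z1 - z2) / 2
    return (s.real, s.imag, t.real, t.imag)

rng = np.random.default_rng(4)
p, q = tuple(rng.random(4)), tuple(rng.random(4))

# Multiplication is componentwise in the complex-pair representation.
p1, p2 = to_complex_pair(p)
q1, q2 = to_complex_pair(q)
assert np.allclose(rbq_mul(p, q), from_complex_pair(p1 * q1, p2 * q2))
```

Because the algebra is commutative, multiplication reduces to independent complex multiplications, which is what makes the complex-matrix representation of reduced biquaternion matrices tractable.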
Example 5.1. Two color images are given in Figures 3 and 4. M = (R, G, B) is the image matrix with special structure. M can be represented as the pure imaginary matrix M = R{{\bf i}}+G{{\bf j}}+B{{\bf k}} . By vectorization, we get m = (m_r, m_g, m_b) , where m_r = vec(R) , m_g = vec(G) and m_b = vec(B) . Using LEN = 15 , THETA = 30 and PSF = fspecial('motion', LEN, THETA) , we disturb the image R and get the image d_R . Clearly, K = d_rm_r^{\dagger} , where d_r = vec(d_R) . For convenience, we disturb the images G, B using the same matrix K . Thus, M becomes a blurred image matrix d = Km , that is, d = (d_r, d_g, d_b) = K(m_r, m_g, m_b) . By computation, we obtain the restored images.
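The construction K = d_rm_r^{\dagger} above can be sketched as follows (with placeholder channel vectors; for a nonzero column vector m_r , the pseudoinverse is m_r^T/\left\|m_r\right\|^2 ):

```python
import numpy as np

rng = np.random.default_rng(5)
m_r = rng.random(12)   # vectorised original red channel (placeholder data)
d_r = rng.random(12)   # vectorised blurred red channel (placeholder data)

# K = d_r m_r^{\dagger}; for a column vector, m_r^{\dagger} = m_r^T / ||m_r||^2.
K = np.outer(d_r, m_r) / (m_r @ m_r)

# By construction, K maps the original channel exactly to the blurred one.
assert np.allclose(K @ m_r, d_r)
```

This gives a blur matrix consistent with the observed pair (m_r, d_r) , which is then reused for the G and B channels as described above.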
We denote \vartheta R , \vartheta G and \vartheta B as the differences between the computed and original R , G and B channels, respectively. All the information is contained in Table 1.
| | \vartheta R | \vartheta G | \vartheta B |
| Figure 3 | 2.1128e^{-10} | 1.7678e^{-11} | 8.1071e^{-12} |
| Figure 4 | 4.1478e^{-10} | 1.8661e^{-10} | 2.0285e^{-10} |
In this paper, with the help of STP, some new properties of the reduced biquaternion vector operator were proposed, and the \mathcal{L_C} -representation, a class of algebraic expressions of the isomorphism between the set of reduced biquaternion matrices and the set of complex matrices, was given. Making use of the vector operator, the \mathcal{L_C} -representation and the \mathcal{GH} -representation of special reduced biquaternion matrices, we solved for the mixed solution of the reduced biquaternion matrix equation \sum\limits_{i = 1}^nA_iX_iB_i = E with sub-matrix constraints. Both the comparison with other methods and the application to color image restoration demonstrated the effectiveness of our proposed method.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work is supported by the National Natural Science Foundation of China (62176112) and the Natural Science Foundation of Shandong Province (ZR2020MA053 and ZR2022MA030) and Discipline with Strong Characteristic of Liaocheng University Intelligent Science and Technology (319462208).
All authors declare no conflicts of interest in this paper.
[1] Z. Al-Zhour, Some new linear representations of matrix quaternions with some applications, J. King Saud Univ. Sci., 31 (2019), 42–47. https://doi.org/10.1016/j.jksus.2017.05.017
[2] Z. Al-Zhour, The general solutions of singular and non-singular matrix fractional time-varying descriptor systems with constant coefficient matrices in Caputo sense, Alex. Eng. J., 55 (2016), 1675–1681. https://doi.org/10.1016/j.aej.2016.02.024
[3] A. El-Ajou, Z. Al-Zhour, A vector series solution for a class of hyperbolic system of Caputo time-fractional partial differential equations with variable coefficients, Front. Phys., 9 (2021), 525250. https://doi.org/10.3389/fphy.2021.525250
[4] A. Sarhan, A. Burqan, R. Saadeh, Z. Al-Zhour, Analytical solutions of the nonlinear time-fractional coupled Boussinesq-Burger equations using Laplace residual power series technique, Fractal Fract., 6 (2022), 631. https://doi.org/10.3390/fractalfract6110631
[5] N. Le Bihan, J. Mars, Singular value decomposition of quaternion matrices: A new tool for vector-sensor signal processing, Signal Process., 84 (2004), 1177–1199. https://doi.org/10.1016/j.sigpro.2004.04.001
[6] S. De Leo, G. Scolarici, Right eigenvalue equation in quaternionic quantum mechanics, J. Phys. A: Math. Gen., 33 (2000), 2971–2995. https://doi.org/10.1088/0305-4470/33/15/306
[7] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, Special least squares solutions of the quaternion matrix equation AX = B with applications, Appl. Math. Comput., 270 (2015), 425–433. https://doi.org/10.1016/j.amc.2015.08.046
[8] S. F. Yuan, Q. W. Wang, X. F. Duan, On solutions of the quaternion matrix equation AX = B and their applications in color image restoration, Appl. Math. Comput., 221 (2013), 10–20. https://doi.org/10.1016/j.amc.2013.05.069
[9] L. Fortuna, G. Muscato, M. G. Xibilia, A comparison between HMLP and HRBF for attitude control, IEEE T. Neural Networ., 12 (2001), 318–328. https://doi.org/10.1109/72.914526
[10] T. Li, Q. W. Wang, Structure preserving quaternion full orthogonalization method with applications, Numer. Linear Algebr., 30 (2023), e2495. https://doi.org/10.1002/nla.2495
[11] A. Ben-Israel, T. N. E. Greville, Generalized inverses: Theory and applications, New York: Springer, 2003. https://doi.org/10.1007/b97366
[12] Q. W. Wang, H. S. Zhang, S. W. Yu, On solutions to the quaternion matrix equation AXB+CYD = E, Electron. J. Linear Al., 17 (2008), 343–358. https://doi.org/10.13001/1081-3810.1268
[13] X. L. Xu, Q. W. Wang, The consistency and the general common solution to some quaternion matrix equations, Ann. Funct. Anal., 14 (2023), 53. https://doi.org/10.1007/s43034-023-00276-y
[14] S. F. Yuan, Q. W. Wang, Y. B. Yu, Y. Tian, On Hermitian solutions of the split quaternion matrix equation AXB+CXD = E, Adv. Appl. Clifford Algebras, 27 (2017), 3235–3258. https://doi.org/10.1007/s00006-017-0806-y
[15] C. Q. Song, G. L. Chen, On solutions of matrix equation XF-AX = C and XF-A\widetilde{X} = C over quaternion field, J. Appl. Math. Comput., 37 (2011), 57–68. https://doi.org/10.1007/s12190-010-0420-9
[16] A. P. Liao, Z. Z. Bai, Least-squares solution of AXB = D over symmetric positive semidefinite matrices X, J. Comput. Math., 21 (2003), 175–182.
[17] A. P. Liao, Z. Z. Bai, Y. Lei, Best approximate solution of matrix equation AXB+CYD = E, SIAM J. Matrix Anal. Appl., 27 (2005), 675–688. https://doi.org/10.1137/040615791
[18] B. Y. Ren, Q. W. Wang, X. Y. Chen, The \eta-anti-Hermitian solution to a system of constrained matrix equations over the generalized Segre quaternion algebra, Symmetry, 15 (2023), 592. https://doi.org/10.3390/sym15030592
[19] H. D. Schütte, J. Wenzel, Hypercomplex numbers in digital signal processing, 1990 IEEE International Symposium on Circuits and Systems (ISCAS), 2 (1990), 1557–1560. https://doi.org/10.1109/ISCAS.1990.112431
[20] K. Ueda, S. I. Takahashi, Digital filters with hypercomplex coefficients, Electronics and Communications in Japan (Part III: Fundamental Electronic Science), 76 (1993), 85–98. https://doi.org/10.1002/ecjc.4430760909
[21] S. C. Pei, J. H. Chang, J. J. Ding, Commutative reduced biquaternions and their Fourier transform for signal and image processing applications, IEEE T. Signal Proces., 52 (2004), 2012–2031. https://doi.org/10.1109/TSP.2004.828901
[22] S. C. Pei, J. H. Chang, J. J. Ding, M. Y. Chen, Eigenvalues and singular value decompositions of reduced biquaternion matrices, IEEE T. Circ. Syst., 55 (2008), 2673–2685. https://doi.org/10.1109/TCSI.2008.920068
[23] S. F. Yuan, Y. Tian, M. Z. Li, On Hermitian solutions of the reduced biquaternion matrix equation (AXB, CXD) = (E, G), Linear Multilinear A., 68 (2020), 1355–1373. https://doi.org/10.1080/03081087.2018.1543383
[24] H. H. Kösal, Least-squares solutions of the reduced biquaternion matrix equation AX = B and their applications in colour image restoration, J. Mod. Optic., 66 (2019), 1802–1810. https://doi.org/10.1080/09500340.2019.1676474
[25] X. Y. Chen, Q. W. Wang, The \eta-(anti-)Hermitian solution to a constrained Sylvester-type generalized commutative quaternion matrix equation, Banach J. Math. Anal., 17 (2023), 40. https://doi.org/10.1007/s43037-023-00262-5
[26] L. Gong, X. Hu, L. Zhang, The expansion problem of anti-symmetric matrix under a linear constraint and the optimal approximation, J. Comput. Appl. Math., 197 (2006), 44–52. https://doi.org/10.1016/j.cam.2005.10.021
[27] L. Zhao, X. Hu, L. Zhang, Least squares solutions to AX = B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation, Linear Algebra Appl., 428 (2008), 871–880. https://doi.org/10.1016/j.laa.2007.08.019
[28] J. F. Li, X. Y. Hu, L. Zhang, The submatrix constraint problem of matrix equation AXB+CYD = E, Appl. Math. Comput., 215 (2009), 2578–2590. https://doi.org/10.1016/j.amc.2009.08.051
[29] D. Z. Cheng, H. S. Qi, Z. Q. Li, Analysis and control of Boolean networks: A semi-tensor product approach, London: Springer, 2011. https://doi.org/10.1007/978-0-85729-097-7
[30] D. Cheng, H. Qi, Z. Li, J. B. Liu, Stability and stabilization of Boolean networks, Int. J. Robust Nonlin., 21 (2011), 134–156. https://doi.org/10.1002/rnc.1581
[31] J. Q. Lu, H. T. Li, Y. Liu, F. F. Li, Survey on semi-tensor product method with its applications in logical networks and other finite-valued systems, IET Control Theory Appl., 11 (2017), 2040–2047. https://doi.org/10.1049/iet-cta.2016.1659
[32] D. Z. Cheng, H. S. Qi, Controllability and observability of Boolean control networks, Automatica, 45 (2009), 1659–1667. https://doi.org/10.1016/j.automatica.2009.03.006
[33] W. Ding, Y. Li, D. Wang, A real method for solving quaternion matrix equation X-A\widehat{X}B = C based on semi-tensor product of matrices, Adv. Appl. Clifford Algebras, 31 (2021), 78. https://doi.org/10.1007/s00006-021-01180-1
[34] W. Ding, Y. Li, A. L. Wei, Z. H. Liu, Solving reduced biquaternion matrices equation \sum\limits_{i = 1}^nA_iXB_i = C with special structure based on semi-tensor product of matrices, AIMS Mathematics, 7 (2022), 3258–3276. https://doi.org/10.3934/math.2022181
[35] D. Wang, Y. Li, W. Ding, Several kinds of special least squares solutions to quaternion matrix equation AXB = C, J. Appl. Math. Comput., 68 (2022), 1881–1899. https://doi.org/10.1007/s12190-021-01591-0
[36] W. H. Zhang, B. S. Chen, \mathcal{H}-representation and applications to generalized Lyapunov equations and linear stochastic systems, IEEE T. Automat. Contr., 57 (2012), 3009–3022. https://doi.org/10.1109/TAC.2012.2197074
[37] D. Z. Cheng, From dimension-free matrix theory to cross-dimensional dynamic systems, Academic Press, 2019. https://doi.org/10.1016/C2018-0-02653-5
[38] G. H. Golub, C. F. Van Loan, Matrix computations, 4th Edn., Baltimore: The Johns Hopkins University Press, 2013.
[39] S. Gai, G. W. Yang, M. H. Wan, L. Wang, Denoising color images by reduced quaternion matrix singular value decomposition, Multidim. Syst. Sign. Process., 26 (2015), 307–320. https://doi.org/10.1007/s11045-013-0268-x
[40] S. Gai, M. H. Wan, L. Wang, C. H. Yang, Reduced quaternion matrix for color texture classification, Neural Comput. Appl., 25 (2014), 945–954. https://doi.org/10.1007/s00521-014-1578-0