This paper identifies some new characterizations and representations of the Minkowski inverse in Minkowski space. First, a few representations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses are given in order to represent the Minkowski inverse. Second, some well-known characterizations of the Moore-Penrose inverse are extended to the Minkowski inverse. Third, using the Hartwig-Spindelböck decomposition, we present a representation of the Minkowski inverse; based on this result, an interesting characterization of the Minkowski inverse is shown by means of a rank equation. Finally, we obtain several new representations of the Minkowski inverse in a more general form, by which the Minkowski inverse of a class of block matrices is given.
Citation: Jiale Gao, Kezheng Zuo, Qingwen Wang, Jiabao Wu. Further characterizations and representations of the Minkowski inverse in Minkowski space[J]. AIMS Mathematics, 2023, 8(10): 23403-23426. doi: 10.3934/math.20231189
In order to easily test that a Mueller matrix maps the forward light cone into itself when studying polarized light, Renardy [1] explored singular value decomposition in Minkowski space. Subsequently, Meenakshi [2] defined the Minkowski inverse in Minkowski space and gave a condition for a Mueller matrix to have a singular value decomposition in terms of its Minkowski inverse. Since this article came out, the generalized inverses in Minkowski space have attracted considerable attention. Zekraoui et al. [3] derived some new algebraic and topological properties of the Minkowski inverse. Meenakshi [4] introduced the concept of a range symmetric matrix in Minkowski space, which was further studied by various scholars [5,6,7]. The Minkowski inverse has been widely used in many applications, such as the anti-reflexive solutions of matrix equations [8] and matrix partial orderings [9,10]. The weighted Minkowski inverse defined in [11] is a generalization of the Minkowski inverse, and many of its properties, representations and approximations were established in [11,12,13]. Recently, Wang et al. introduced the m-core inverse [14], the m-core-EP inverse [15] and the m-WG inverse [16] in Minkowski space, which are viewed as generalizations of the core inverse, the core-EP inverse and the weak group inverse, respectively.
It is well known that the Moore-Penrose inverse [17] not only plays an irreplaceable role in solving linear matrix equations, but it is also a generally accepted tool in statistics, studies of extreme-value problems and other scientific disciplines. Moreover, this inverse pervades a great number of mathematical fields: C∗-algebras, rings, Hilbert spaces, Banach spaces, categories, tensors and the quaternion skew field. The algebraic properties, characterizations, representations, perturbation theory and iterative computations of the Moore-Penrose inverse have been extensively investigated. For more details on the study of the Moore-Penrose inverse, refer to [18,19,20,21,22,23].
Although the Minkowski inverse in Minkowski space can be regarded as an extension of the Moore-Penrose inverse, there are many differences between these two classes of generalized inverses, especially in terms of their existence conditions (see [2,3,12]). So, it is natural to ask what interesting results for the Minkowski inverse can be drawn by considering some known conclusions of the Moore-Penrose inverse.
Mainly inspired by [24,25,26,27], we summarize the main topics of this work as below:
● A few characterizations and representations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses are shown.
● We apply the solvability of matrix equations, the nonsingularity of matrices, the existence of projectors and the index of matrices to characterize the existence of the Minkowski inverse, which extends some classic characterizations of the Moore-Penrose inverse in Cm×n with the usual Hermitian adjoint and in a ring with involution. We also give various representations of the Minkowski inverse in these different settings.
● Using the Hartwig-Spindelböck decomposition, we present a new representation of the Minkowski inverse. Based on this result, an interesting characterization of the Minkowski inverse is presented through the use of a rank equation.
● Motivated by the Zlobec formula of the Moore-Penrose inverse, we give a more general representation of the Minkowski inverse and apply it to compute the Minkowski inverse of a class of block matrices.
This paper is organized as follows. Section 2 presents the notations and terminology. In Section 3, some necessary lemmas are given. We devote Section 4 to the characterizations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses. Some classic properties of the Moore-Penrose inverse are extended to the case of the Minkowski inverse in Section 5. In Section 6, we further extend several characterizations of the Moore-Penrose inverse in a ring to the Minkowski inverse. We characterize the Minkowski inverse by using a rank equation in Section 7. Section 8 focuses on showing a few new representations of the Minkowski inverse.
Throughout this paper, we adopt the following notations and terminology. Let Cn, Cm×n and Cm×nr be the sets of all complex n-dimensional vectors, complex m×n matrices and complex m×n matrices with rank r, respectively. The symbols A∗, R(A), N(A) and rank(A) stand for the conjugate transpose, range, null space and rank of A∈Cm×n, respectively. The index of A∈Cn×n, denoted by Ind(A), is the smallest nonnegative integer t satisfying rank(At+1)=rank(At). And, A0=In for A∈Cn×n, where In is the identity matrix in Cn×n. We denote the dimension and the orthogonal complementary subspace of a subspace L⊆Cn by dim(L) and L⊥, respectively. By PS,T, we denote the projector onto S along T, where two subspaces S,T⊆Cn satisfy that the direct sum of S and T is Cn, i.e., S⊕T=Cn. In particular, PS=PS,S⊥.
The Moore-Penrose inverse [17] of A∈Cm×n is the unique matrix X∈Cn×m verifying
AXA=A,XAX=X,(AX)∗=AX,(XA)∗=XA, |
and it is denoted by A†. The group inverse [28] of A∈Cn×n is the unique matrix X∈Cn×n satisfying
AXA=A,XAX=X,AX=XA, |
and it is denoted by A#. For A∈Cm×n, if there is a matrix X∈Cn×m satisfying
XAX=X,R(X)=T,N(X)=S, |
where T⊆Cn and S⊆Cm are two subspaces, then X is unique and is denoted by A(2)T,S [19,23]. Particularly, if AA(2)T,SA=A, we denote A(1,2)T,S=A(2)T,S.
Additionally, let G be the Minkowski metric tensor [1,2] defined by Gu=(u0,−u1,−u2,…,−un−1), where u∈Cn is indexed as u=(u0,u1,…,un−1). The Minkowski metric tensor G is represented by the nonsingular matrix G=diag(1,−In−1), which is also called the Minkowski metric matrix [2]. Evidently, G=G∗ and G2=In. The Minkowski inner product [1,2] of two elements x and y in Cn is defined by (x,y)=<x,Gy>, where <⋅,⋅> is the conventional Euclidean inner product. The complex linear space Cn equipped with the Minkowski inner product is called the Minkowski space. Notice that the Minkowski space is also an indefinite inner product space [29,30]. The Minkowski adjoint of A∈Cm×n is A∼=GA∗F, where G and F are the Minkowski metric matrices of orders n and m, respectively. For A∈Cm×n and B∈Cn×p, it is easy to verify that (A∼)∼=A and (AB)∼=B∼A∼.
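These objects are easy to experiment with numerically. The following minimal NumPy sketch (our own; the helper names mink and madj are not from the paper) builds the Minkowski metric matrix and the Minkowski adjoint and checks the identities (A∼)∼=A and (AB)∼=B∼A∼ on random matrices.

```python
import numpy as np

def mink(k):
    # Minkowski metric matrix of order k: diag(1, -1, ..., -1)
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    # Minkowski adjoint M~ = G M* F, with G of order n and F of order m for M in C^{m x n}
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))

assert np.allclose(madj(madj(A)), A)                 # (A~)~ = A
assert np.allclose(madj(A @ B), madj(B) @ madj(A))   # (AB)~ = B~ A~
```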
Definition 2.1. [2,30] Let A∈Cm×n.
(1) If there exists X∈Cn×m such that
(1)AXA=A,(2)XAX=X,(3m)(AX)∼=AX,(4m)(XA)∼=XA, |
then X is called the Minkowski inverse of A and is denoted by Am.
(2) If X∈Cn×m satisfies equations (i),(j),...,(k) from among equations (1)–(4m), then X is called a {i,j,...,k}-inverse of A and is denoted by A(i,j,...,k). The set of all {i,j,...,k}-inverses of A is denoted by A{i,j,...,k}.
This section begins with recalling existence conditions and some basic properties of the Minkowski inverse, which will be useful in the later discussion.
Lemma 3.1 (Theorem 1, [2] or Theorem 3, [31]). Let A∈Cm×n. Then, Am exists if and only if rank(AA∼)=rank(A∼A)=rank(A).
Lemma 3.2 (Theorem 8, [3]). Let A∈Cm×nr and r>0, and let A=BC be a full-rank factorization of A, where B∈Cm×rr and C∈Cr×nr. If Am exists, then Am=C∼(CC∼)−1(B∼B)−1B∼.
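As a numerical illustration of Lemmas 3.1 and 3.2, the sketch below (ours, assuming NumPy; the random test data and helper names are not from the paper) builds a rank-deficient A from a full-rank factorization A=BC, checks the existence condition of Lemma 3.1, evaluates the representation of Lemma 3.2 and verifies the four equations of Definition 2.1.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(1)
m, n, r = 5, 4, 2
B = rng.standard_normal((m, r))    # full column rank (generically)
C = rng.standard_normal((r, n))    # full row rank (generically)
A = B @ C                          # rank r, with full-rank factorization A = BC

# Existence condition of Lemma 3.1 (it holds generically for random data)
rk = np.linalg.matrix_rank
assert rk(A @ madj(A)) == rk(madj(A) @ A) == rk(A) == r

# Representation of Lemma 3.2
Am = madj(C) @ np.linalg.inv(C @ madj(C)) @ np.linalg.inv(madj(B) @ B) @ madj(B)

# The four defining equations of the Minkowski inverse (Definition 2.1)
assert np.allclose(A @ Am @ A, A)
assert np.allclose(Am @ A @ Am, Am)
assert np.allclose(madj(A @ Am), A @ Am)
assert np.allclose(madj(Am @ A), Am @ A)
```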
Lemma 3.3 (Theorem 9, [29]). Let A∈Cm×n with rank(AA∼)=rank(A∼A)=rank(A). Then, the following holds:
(1) R(Am)=R(A∼) and N(Am)=N(A∼);
(2) AAm=PR(A),N(A∼);
(3) AmA=PR(A∼),N(A).
Remark 3.4. Under the hypotheses of Lemma 3.3, we immediately have
Am=A(1,2)R(A∼),N(A∼). | (3.1) |
Furthermore, we recall an important application of {1}-inverses to solve matrix equations.
Lemma 3.5 (Theorem 1.2.5, [23]). Let A∈Cm×n, B∈Cp×q and D∈Cm×q. Then, there is a solution X∈Cn×p to the matrix equation AXB=D if and only if, for some A(1)∈A{1} and B(1)∈B{1}, AA(1)DB(1)B=D; in this case, the general solution is
X=A(1)DB(1)+(In−A(1)A)Y+Z(Ip−BB(1)), |
where A(1)∈A{1} and B(1)∈B{1} are fixed but arbitrary, and Y∈Cn×p and Z∈Cn×p are arbitrary.
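Lemma 3.5 can be checked numerically as follows; this is only a sketch, with NumPy assumed and the Moore-Penrose inverse (np.linalg.pinv) used as one particular {1}-inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 4, 3, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
D = A @ rng.standard_normal((n, p)) @ B      # consistent right-hand side, so AXB = D is solvable

A1 = np.linalg.pinv(A)                       # a particular {1}-inverse of A
B1 = np.linalg.pinv(B)                       # a particular {1}-inverse of B
assert np.allclose(A @ A1 @ D @ B1 @ B, D)   # solvability test of Lemma 3.5

# General solution X = A1 D B1 + (I_n - A1 A) Y + Z (I_p - B B1)
Y = rng.standard_normal((n, p))
Z = rng.standard_normal((n, p))
X = A1 @ D @ B1 + (np.eye(n) - A1 @ A) @ Y + Z @ (np.eye(p) - B @ B1)
assert np.allclose(A @ X @ B, D)
```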
Two significant results for A(2)T,S are reviewed in order to show the existence conditions of the Minkowski inverse in Section 5, and to represent the Minkowski inverse in Section 8, respectively.
Lemma 3.6 (Theorem 2.1, [32]). Let A∈Cm×nr, and let two subspaces T⊆Cn and S⊆Cm be such that dim(T)≤r and dim(S)=m−dim(T). Suppose that H∈Cn×m is such that R(H)=T and N(H)=S. If A(2)T,S exists, then Ind(AH)=Ind(HA)=1. Further, we have that A(2)T,S=(HA)#H=H(AH)#.
Lemma 3.7 (Urquhart formula, [33]). Let A∈Cm×n, U∈Cn×p, V∈Cq×m and
X=U(VAU)(1)V, |
where (VAU)(1)∈(VAU){1}. Then, X=A(1,2)R(U),N(V) if and only if rank(VAU)=rank(U)=rank(V)=rank(A).
The following three auxiliary lemmas are critical to conclude the results in Section 7.
Lemma 3.8 (Hartwig-Spindelböck decomposition, [34]). Let A∈Cn×nr. Then, A can be represented in the form
A=U(ΣKΣL00)U∗, | (3.2) |
where U∈Cn×n is unitary, Σ=diag(σ1,σ2,...,σr) is the diagonal matrix of singular values of A, σi>0 (i=1,2,...,r) and K∈Cr×r and L∈Cr×(n−r) satisfy
KK∗+LL∗=Ir. | (3.3) |
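The Hartwig-Spindelböck factors can be obtained from a full singular value decomposition: take U from the SVD of A, let Σ collect the positive singular values and set (K L) equal to the first r rows of V∗U. The sketch below (our own construction, assuming NumPy) builds such factors and checks (3.2) and (3.3).

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 5, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # square matrix of rank r

U, s, Vh = np.linalg.svd(A)          # full SVD: A = U diag(s) Vh
Sigma = np.diag(s[:r])               # positive singular values of A
KL = Vh[:r, :] @ U                   # (K L) = V_1^* U, an r x n matrix
K, L = KL[:, :r], KL[:, r:]

top = np.hstack([Sigma @ K, Sigma @ L])
M = np.vstack([top, np.zeros((n - r, n))])
assert np.allclose(A, U @ M @ U.conj().T)                        # decomposition (3.2)
assert np.allclose(K @ K.conj().T + L @ L.conj().T, np.eye(r))   # condition (3.3)
```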
Lemma 3.9 (Theorem 1, [24]). Let A∈Cm×n, B∈Cm×m and C∈Cn×n. Then, there exists a solution X∈Cn×m to the rank equation
rank((A B; C X))=rank(A) | (3.4)
if and only if R(B)⊆R(A) and R(C∗)⊆R(A∗), and in which case,
X=CA†B. | (3.5) |
Lemma 3.10 (Theorem 1, [35]). Let A∈Cm×nr and B∈Cl×hr1. Assume that A=P(Ir 0; 0 0)Q and B=P1(Ir1 0; 0 0)Q1, where P∈Cm×m, Q∈Cn×n, P1∈Cl×l and Q1∈Ch×h are nonsingular. If r=r1, then the general solution of the matrix equation XAY=B is given by
X=P1(X1 X2; 0 X4)P−1,Y=Q−1(X−11 0; Y3 Y4)Q1, |
where X1∈Cr×r is an arbitrary nonsingular matrix and X2∈Cr×(m−r), X4∈C(l−r)×(m−r), Y3∈C(n−r)×r and Y4∈C(n−r)×(h−r) are arbitrary.
An interesting conclusion proved by Kamaraj and Sivakumar in [29,Theorem 4] is that, if A∈Cm×n is such that Am exists, then
Am=A(1,4m)AA(1,3m), | (4.1) |
where A(1,3m)∈A{1,3m} and A(1,4m)∈A{1,4m}. This result shows the importance of A(1,3m) and A(1,4m) to represent the Minkowski inverse Am. Moreover, Petrović and Stanimirović [30] have investigated the representations and computations of {2,3∼}- and {2,4∼}-inverses in an indefinite inner product space, which are generalizations of {2,3m}- and {2,4m}-inverses in Minkowski space. Motivated by the above work, we consider the characterizations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses in this section. Before starting, an auxiliary lemma is given as follows.
Lemma 4.1. Let A∈Cn×s and B∈Ct×n. Then,
(PR(A),N(B))∼=PR(B∼),N(A∼). |
Proof. Write Q=(PR(A),N(B))∼=GP(N(B))⊥,(R(A))⊥G. Then, Q2=Q,
R(Q)=G(N(B))⊥=R(GB∗)=R(B∼),
(N(Q))⊥=(N(P(N(B))⊥,(R(A))⊥G))⊥=R(GPR(A),N(B))=R(GA)=R((A∼)∗), |
which implies that N(Q)=(R((A∼)∗))⊥=N(A∼). Thus, Q=PR(B∼),N(A∼).
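Lemma 4.1 can also be verified numerically. In the sketch below (assuming NumPy; the oblique projectors are formed with the standard formula P_{R(A),N(B)}=A(BA)−1B, which is valid when BA is nonsingular and is used here only for the illustration), we check that (P_{R(A),N(B)})∼ coincides with P_{R(B∼),N(A∼)}.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(4)
n, s = 5, 2
A = rng.standard_normal((n, s))      # R(A) has dimension s (generically)
B = rng.standard_normal((s, n))      # N(B) has dimension n - s (generically)

P = A @ np.linalg.inv(B @ A) @ B                            # projector onto R(A) along N(B)
Q = madj(B) @ np.linalg.inv(madj(A) @ madj(B)) @ madj(A)    # projector onto R(B~) along N(A~)
assert np.allclose(madj(P), Q)                              # Lemma 4.1
```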
In the following theorems, we prove the equivalence of the existence of {1,3m}- and {1,2,3m}-inverses, and show some of their characterizations.
Theorem 4.2. Let A∈Cm×n. Then, there exists X∈A{1,3m} if and only if there exists Y∈A{1,2,3m}.
Proof. The 'if' part is obvious. Conversely, if there exists X∈A{1,3m}, then
A=AXA=(AX)∼A=X∼A∼A, |
which implies that rank(A)=rank(A∼A). Using [2,Theorem 2], we have that there exists Y∈A{1,2,3m}.
Theorem 4.3. Let A∈Cm×n and X∈Cn×m. Then, the following statements are equivalent:
(1) X∈A{1,3m};
(2) A∼AX=A∼;
(3) AX=PR(A),N(A∼).
In this case,
A{1,3m}={A(1,3m)+(In−A(1,3m)A)Y ∣ Y∈Cn×m}, | (4.2)
where A(1,3m)∈A{1,3m} is fixed but arbitrary.
Proof. (1) ⇒ (2). Since X∈A{1,3m}, it follows that A∼AX=A∼(AX)∼=(AXA)∼=A∼.
(2) ⇒ (3). Since (AX)∼A=A from A∼AX=A∼, we have that AX=(AX)∼AX, implying that (AX)∼=AX. Thus, AX=(AX)∼AX=(AX)2, that is, AX is a projector. Again, by (AX)∼=AX, we have that AXA=A, which, together with A∼AX=A∼, shows that R(AX)=R(A) and N(AX)=N(A∼). Hence, AX=PR(A),N(A∼).
(3) ⇒ (1). Clearly, AXA=PR(A),N(A∼)A=A. Applying Lemma 4.1 to AX=PR(A),N(A∼), we see that (AX)∼=PR(A),N(A∼)=AX.
In this case, we have that A{1,3m}={Z∈Cn×m ∣ AZ=AA(1,3m)}, where A(1,3m) is a fixed but arbitrary {1,3m}-inverse of A. Thus, applying Lemma 3.5 to AZ=AA(1,3m), we have (4.2) directly.
Theorem 4.4. Let A∈Cm×nr with rank(A∼A)=rank(A)>0, and let a full-rank factorization of A be A=BC, where B∈Cm×rr and C∈Cr×nr. Then,
A{1,2,3m}={C−1R(B∼B)−1B∼ ∣ C−1R is an arbitrary right inverse of C}. |
Proof. We can easily verify that [C−1R(B∼B)−1B∼]∈A{1,2,3m}. Conversely, let H∈A{1,2,3m}. Using the fact that
A{1,2}={C−1RB−1L ∣ B−1L is an arbitrary left inverse of B, C−1R is an arbitrary right inverse of C}, |
we have that H=C−1RB−1L for some B−1L∈Cr×mr and C−1R∈Cn×rr. Moreover, it follows from H∈A{3m} that
(AH)∼=AH⇔(BB−1L)∼=BB−1L⇔B−1L=(B∼B)−1B∼. |
Hence, every H∈A{1,2,3m} must be of the form C−1R(B∼B)−1B∼. This completes the proof.
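The representation in Theorem 4.4 is easy to test numerically. In the sketch below (ours, assuming NumPy), an arbitrary right inverse of C is generated as pinv(C)+(In−pinv(C)C)W with a random W, and the resulting matrix is checked against equations (1), (2) and (3m) of Definition 2.1.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(5)
m, n, r = 5, 4, 2
B = rng.standard_normal((m, r))
C = rng.standard_normal((r, n))
A = B @ C                                         # full-rank factorization A = BC

W = rng.standard_normal((n, r))
C_right = np.linalg.pinv(C) + (np.eye(n) - np.linalg.pinv(C) @ C) @ W   # an arbitrary right inverse of C
assert np.allclose(C @ C_right, np.eye(r))

H = C_right @ np.linalg.inv(madj(B) @ B) @ madj(B)   # element of A{1,2,3m} per Theorem 4.4
assert np.allclose(A @ H @ A, A)                     # (1)
assert np.allclose(H @ A @ H, H)                     # (2)
assert np.allclose(madj(A @ H), A @ H)               # (3m)
```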
Using the obvious fact that X∼∈A∼{1,3m} if and only if X∈A{1,4m}, where A∈Cm×n, we have the following results, which show that {1,4m}- and {1,2,4m}-inverses have properties similar to those of {1,3m}- and {1,2,3m}-inverses.
Theorem 4.5. Let A∈Cm×n. Then, there exists X∈A{1,4m} if and only if there exists Y∈A{1,2,4m}.
Theorem 4.6. Let A∈Cm×n and X∈Cn×m. Then, the following statements are equivalent:
(1) X∈A{1,4m};
(2) XAA∼=A∼;
(3) XA=PR(A∼),N(A).
In this case,
A{1,4m}={A(1,4m)+Z(Im−AA(1,4m)) ∣ Z∈Cn×m}, |
where A(1,4m)∈A{1,4m} is fixed but arbitrary.
Theorem 4.7. Let A∈Cm×nr with rank(AA∼)=r>0, and let a full-rank factorization of A be A=BC, where B∈Cm×rr and C∈Cr×nr. Then,
A{1,2,4m}={C∼(CC∼)−1B−1L ∣ B−1L is an arbitrary left inverse of B}. |
Remark 4.8. If A∈Cm×n and X∈A{1,2}, we derive [2,Theorems 3 and 4] directly by Theorems 4.3 and 4.6, respectively.
Based on Lemma 3.1, we start by proposing several different existence conditions of Am in the following theorem.
Theorem 5.1. Let A∈Cm×n. Then, the following statements are equivalent:
(1) Am exists;
(2) rank(A∼AA∼)=rank(A);
(3) AR(A∼)⊕N(A∼)=Cm.
Proof. (1) ⇒ (2). If Am exists, then rank(AA∼)=rank(A∼A)=rank(A) by Lemma 3.1. Since R(A)∩N(A∼)={0} from rank(A∼A)=rank(A), it follows from rank(AA∼)=rank(A) that
rank(A∼AA∼)=rank(AA∼)−dim(R(AA∼)∩N(A∼))=rank(A)−dim(R(A)∩N(A∼))=rank(A). |
(2) ⇒ (3). From
rank(A)=rank(A∼AA∼)=rank(AA∼)−dim(R(AA∼)∩N(A∼)) |
and rank(A)≥rank(AA∼), we have that R(AA∼)∩N(A∼)={0} and rank(AA∼)=rank(A), which imply that AR(A∼)⊕N(A∼)=Cm.
(3) ⇒ (1). It follows from AR(A∼)⊕N(A∼)=Cm that R(AA∼)∩N(A∼)={0} and rank(AA∼)=rank(A). Thus, R(A)∩N(A∼)={0}, that is, rank(A∼A)=rank(A). Hence, Am exists by Lemma 3.1.
Subsequently, if Am exists for A∈Cm×n, applying Lemma 3.6 to (3.1) in Remark 3.4, we directly obtain a new expression of the Minkowski inverse, Am=(A∼A)#A∼=A∼(AA∼)# and
Ind(AA∼)=Ind(A∼A)=1. | (5.1) |
However, for a matrix A∈Cm×n satisfying (5.1), Am does not necessarily exist, as will be shown in the following example.
Example 5.2. Let
A=(10110010101100000000). |
It can be verified that rank(A∼AA∼)=rank((A∼A)2)=rank(A∼A)=rank((AA∼)2)=rank(AA∼)=1 and rank(A)=2. Obviously, Ind(A∼A)=Ind(AA∼)=1, but rank(A∼AA∼)≠rank(A), implying that Am does not exist.
In the next theorem, we present some necessary and sufficient conditions for the converse implication.
Theorem 5.3. Let A∈Cm×n. Then, the following statements are equivalent:
(1) Am exists;
(2) Ind(A∼A)=1 and N(A∼A)⊆N(A);
(3) Ind(AA∼)=1 and R(A)⊆R(AA∼).
Proof. (1) ⇔ (2). The 'only if' part is obvious by Lemmas 3.1 and 3.6. Conversely, since rank(A)=rank(A∼A) from N(A∼A)⊆N(A), it follows from Ind(A∼A)=1 that R(A∼)∩N(A)=R(A∼A)∩N(A∼A)={0}, which implies that rank(AA∼)=rank(A). Hence, Am exists directly by Lemma 3.1.
(1) ⇔ (3). Its proof is similar to that of (1) ⇔ (2).
As we all know, Moore [36], Penrose [17] and Desoer and Whalen [37] defined the Moore-Penrose inverse from different perspectives, respectively. Next, we review these definitions in the following lemma and extend this result to the Minkowski inverse.
Lemma 5.4 (Desoer-Whalen's and Moore's definitions, [36,37]). Let A∈Cm×n and X∈Cn×m. Then, the following statements are equivalent:
(1) X=A†;
(2) XAa=a for a∈R(A∗), and Xb=0 for b∈N(A∗);
(3) AX=PR(A), XA=PR(X).
There is an interesting example showing that, for some matrices A∈Cm×n and X∈Cn×m, X need not equal Am even though AX=PR(A),N(A∼) and XA=PR(X),N(A).
Example 5.5. Let us consider the matrices
A=(1110101010110010000000000),X=(0−0.20.40000.40.20010−10000.6−0.2000−0.20.400). |
By calculation, we have that rank(A∼AA∼)=rank(A)=3 and
Am=(01−2000010010−10001−1000−1200). |
Evidently, X≠Am. However, we can check that X∈A{1,2} and N(X)=N(A∼), which imply that AX=PR(A),N(A∼) and XA=PR(X),N(A).
Theorem 5.6. Let A∈Cm×n and X∈Cn×m. Then, the following statements are equivalent:
(1) X=Am;
(2) XAa=a for a∈R(A∼), and Xb=0 for b∈N(A∼);
(3) AX=PR(A),N(A∼), XA=PR(X),N(A) and R(X)⊆R(A∼).
Proof. (1) ⇒ (2). It is obvious by Lemma 3.3.
(2) ⇒ (3). It follows from XAa=a for a∈R(A∼) that XAA∼=A∼, which shows that rank(A∼)≤rank(X) and R(A∼)⊆R(X). And, from Xb=0 for b∈N(A∼), we have that N(A∼)⊆N(X), implying that rank(X)≤rank(A∼). Thus, rank(X)=rank(A∼), R(X)=R(A∼) and N(X)=N(A∼). Hence, again, by XAa=a for a∈R(A∼)=R(X), we have that X∈A{1,2}, which implies that the item (3) holds.
(3) ⇒ (1). Clearly, AXA=PR(A),N(A∼)A=A and XAX=PR(X),N(A)X=X, i.e., X∈A{1,2}. Then, from AX=PR(A),N(A∼) and R(X)⊆R(A∼), we have that N(X)=N(A∼) and R(X)=R(A∼). Hence, in view of (3.1) in Remark 3.4, we see that X=Am.
A classic characterization of the Moore-Penrose inverse proposed by Bjerhammar [38,39] is extended to the Minkowski inverse in the following theorem.
Theorem 5.7. Let A∈Cm×n with rank(A∼AA∼)=rank(A), and let X∈Cn×m. Then, the following statements are equivalent:
(1) X=Am;
(2) There exist B∈Cm×m and C∈Cn×n such that AXA=A,X=A∼B,X=CA∼.
Moreover,
B=(A∼)(1)Am+(Im−(A∼)(1)A∼)Y,C=Am(A∼)(1)+Z(In−A∼(A∼)(1)), |
where Y∈Cm×m and Z∈Cn×n are arbitrary and (A∼)(1)∈(A∼){1}.
Proof. It is easily obtained based on Remark 3.4 and Lemma 3.5.
Corollary 5.8. Let A∈Cm×n with rank(A∼AA∼)=rank(A), and let X∈Cn×m. Then, the following statements are equivalent:
(1) X=Am;
(2) There exists D∈Cm×n such that AXA=A,X=A∼DA∼.
In this case,
D=(A∼)(1)Am(A∼)(1)+(Im−(A∼)(1)A∼)Y+Z(In−A∼(A∼)(1)), |
where Y,Z∈Cm×n are arbitrary and (A∼)(1)∈(A∼){1}.
Proof. It is a direct corollary of Theorem 5.7.
As it has been stated in Section 1, a great deal of mathematical effort [18,26,27] has been devoted to the study of the Moore-Penrose inverse in a ring with involution. It is observed that Cm×n is not a ring or even a semigroup for matrix multiplication (unless m=n). However, we note two interesting facts. One is that an involution [26] a↦a∗ in a ring R is a map from R to R such that (a∗)∗=a, (a+b)∗=a∗+b∗ and (ab)∗=b∗a∗ for all a,b∈R; the other one is that the Minkowski adjoint A∼ has similar properties, that is, (A∼)∼=A, (A+C)∼=A∼+C∼ and (AB)∼=B∼A∼, where A,C∈Cm×n and B∈Cn×l. Based on the above considerations, the purpose of this section is to extend some characterizations of the Moore-Penrose inverse in rings, mainly mentioned in [26,27], to the Minkowski inverse. Inspired by [26,Theorem 3.12,Corollary 3.17], we give the following two results in the first part of this section.
Theorem 6.1. Let A∈Cm×n. Then, the following statements are equivalent:
(1) Am exists;
(2) There exists X∈Cm×m such that A=XAA∼A;
(3) There exists Y∈Cn×n such that A=AA∼AY.
In this case, Am=(XA)∼=(AY)∼.
Proof. Note that there exists X∈Cm×m such that A=XAA∼A, which is equivalent to N(AA∼A)⊆N(A). This assertion is also equivalent to rank(A)=rank(AA∼A). Then, the equivalence of (1) and (2) is obvious by the item (2) in Theorem 5.1. And, the proof of the equivalence of (1) and (3) can be completed by using a method analogous to that used above.
Moreover, if Am exists, we first claim that (XA)∼∈A{1,3m,4m}. In fact, using A=XAA∼A, we infer that
(A(XA)∼)∼=XAA∼=XA(XAA∼A)∼=XAA∼AA∼X∼=A(XA)∼,
A(XA)∼A=(A(XA)∼)∼A=XAA∼A=A,
((XA)∼A)∼=(A∼X∼A)∼=((XAA∼A)∼X∼A)∼=(A∼AA∼(X∼)2A)∼=(A∼XAA∼AA∼(X∼)2A)∼=(A∼XXAA∼AA∼AA∼(X∼)2A)∼=(A∼(X)2(AA∼)3(X∼)2A)∼=A∼(X)2(AA∼)3(X∼)2A=(XA)∼A, |
which imply that (XA)∼∈A{1,3m,4m}. Finally, according to (4.1), we obtain that
Am=(XA)∼A(XA)∼=((XA)∼A)∼(XA)∼=A∼XAA∼X∼=(A(XA)∼A)∼X∼=(XA)∼. |
Using the same method as in the above proof, we can carry out the proof of (AY)∼∈A{1,3m,4m} and Am=(AY)∼.
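Theorem 6.1 can be illustrated numerically. The particular choice X=A(AA∼A)† used below is our own ad-hoc construction, not taken from the paper; it satisfies A=XAA∼A whenever Am exists, since rank(AA∼A)=rank(A) in that case. The sketch (assuming NumPy) checks this equation and the resulting identity Am=(XA)∼, with the reference value of Am computed via Lemma 3.2.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(6)
m, n, r = 5, 4, 2
B = rng.standard_normal((m, r))
C = rng.standard_normal((r, n))
A = B @ C                                  # Am exists generically for such A

# Ad-hoc choice of X with A = X A A~ A (valid when Am exists)
X = A @ np.linalg.pinv(A @ madj(A) @ A)
assert np.allclose(X @ A @ madj(A) @ A, A)

# Reference value of Am via the full-rank factorization formula (Lemma 3.2)
Am = madj(C) @ np.linalg.inv(C @ madj(C)) @ np.linalg.inv(madj(B) @ B) @ madj(B)
assert np.allclose(madj(X @ A), Am)        # Am = (XA)~, as in Theorem 6.1
```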
A well-known result is given directly in the following lemma, which will be useful in the proof of the next theorem.
Lemma 6.2. Let A∈Cm×n and B∈Cn×m. Then, Im−AB is nonsingular if and only if In−BA is nonsingular, and in which case, (Im−AB)−1=Im+A(In−BA)−1B.
Theorem 6.3. Let A∈Cm×n and A(1)∈A{1}. Then, the following statements are equivalent:
(1) Am exists;
(2) A∼A+In−A(1)A is nonsingular;
(3) AA∼+Im−AA(1) is nonsingular.
In this case,
Am=(A(A∼A+In−A(1)A)−1)∼=((AA∼+Im−AA(1))−1A)∼. |
Proof. Denote B=A∼A+In−A(1)A and C=AA∼+Im−AA(1).
(1) ⇒ (2). If Am exists, using items (1) and (2) in Theorem 6.1, we have that A=XAA∼A for some X∈Cm×m. It can be easily verified that
(A(1)XA+In−A(1)A)(A(1)AA∼A+In−A(1)A)=In, |
which shows the nonsingularity of D:=A(1)AA∼A+In−A(1)A. And, D can be rewritten as D=In−A(1)A(In−A∼A). Thus, by Lemma 6.2, it is easy to see that B is nonsingular.
(2) ⇒ (1). Since B is nonsingular, from AB=AA∼A, we have that A=AA∼AB−1. Therefore, Am exists by items (1) and (3) in Theorem 6.1.
(3) ⇔ (2). Since B and C can be rewritten as B=In−(A(1)−A∼)A and C=Im−A(A(1)−A∼), from Lemma 6.2, we have the equivalence of (3) and (2) immediately.
In this case, from items (1) and (2) in Lemma 3.3, we infer that
B∼Am=(A∼A+In−A(1)A)∼Am=A∼AAm+Am−A∼(A∼)(1)Am=A∼, |
which, together with the item (2), gives Am=(AB−1)∼. Analogously, we can derive that Am=(C−1A)∼. This completes the proof.
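A numerical check of Theorem 6.3 (a sketch only, assuming NumPy): the Moore-Penrose inverse serves as the {1}-inverse A(1), the matrix A∼A+In−A(1)A is confirmed to be nonsingular, and the resulting value of (A(A∼A+In−A(1)A)−1)∼ is compared with the Minkowski inverse obtained from Lemma 3.2.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(7)
m, n, r = 5, 4, 2
Bf = rng.standard_normal((m, r))
Cf = rng.standard_normal((r, n))
A = Bf @ Cf                                           # Am exists generically for such A

A1 = np.linalg.pinv(A)                                # a particular {1}-inverse of A
Bmat = madj(A) @ A + np.eye(n) - A1 @ A
assert np.linalg.matrix_rank(Bmat) == n               # nonsingular, item (2) of Theorem 6.3

Am_thm = madj(A @ np.linalg.inv(Bmat))                # Am = (A(A~A + I_n - A1 A)^{-1})~
Am_ref = madj(Cf) @ np.linalg.inv(Cf @ madj(Cf)) @ np.linalg.inv(madj(Bf) @ Bf) @ madj(Bf)
assert np.allclose(Am_thm, Am_ref)
```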
The Sylvester matrix equation [40] has numerous applications in neural networks, robust control, graph theory and other areas of system and control theory. Motivated by [27,Theorem 2.3], in the following theorem, we use the solvability of a certain Sylvester matrix equation to characterize the existence of the Minkowski inverse, and we apply its solutions to represent the Minkowski inverse.
Theorem 6.4. Let A∈Cm×n. Then, the following statements are equivalent:
(1) Am exists;
(2) rank(AA∼)=rank(A∼), and there exist X∈Cm×m and a projector Y∈Cm×m such that
XAA∼−YX=Im, | (6.1) |
AA∼X=XAA∼ and AA∼Y=0.
In this case,
Am=A∼X. | (6.2) |
Proof. (1) ⇒ (2). If Am exists, it is clear from Lemma 3.1 that rank(AA∼)=rank(A∼). Let Q=AA∼+Im−AAm. By items (1) and (2) in Lemma 3.3, it is easy to verify that Q((A∼)mAm+Im−AAm)=Im, showing the nonsingularity of Q. And, AAmQ=QAAm=AA∼. Denote Y=Im−AAm. Clearly, Y2=Y and AA∼Y=YAA∼=0. Let X=AAmQ−1−Y. Hence,
XAA∼=(AAmQ−1−Y)AAmQ=AAmQ−1QAAm=AAm,
AA∼X=QAAm(AAmQ−1−Y)=QAAmQ−1=AAmQQ−1=AAm,
−YX=−Y(AAmQ−1−Y)=−YAAmQ−1+Y=Y. |
Evidently, XAA∼=AA∼X and XAA∼−YX=Im.
(2) ⇒ (1). Premultiplying (6.1) by AA∼, we have that AA∼XAA∼−AA∼YX=AA∼, which, together with AA∼X=XAA∼ and AA∼Y=0, yields that AA∼AA∼X=AA∼, or equivalently, R(AA∼X−Im)⊆N(AA∼). Since N(AA∼)=N(A∼) from rank(AA∼)=rank(A∼), we get that A∼=A∼AA∼X, i.e.,
A=X∼AA∼A. | (6.3) |
Consequently, Am exists according to items (1) and (2) in Theorem 6.1.
Finally, if Am exists, applying Theorem 6.1 to (6.3), we have (6.2) directly.
Remark 6.5. Let A∈Cm×n. Using the easy result that Am exists if and only if (A∼)m exists, by Theorem 6.4, we conclude that the following statements are equivalent:
(1) Am exists;
(2) rank(A∼A)=rank(A), and there exist X∈Cn×n and a projector Y∈Cn×n such that XA∼A−YX=In, A∼AX=XA∼A and A∼AY=0.
In this case, Am=(AX)∼.
It is well known that the Hartwig-Spindelböck decomposition is an effective and basic tool for finding representations of various generalized inverses and matrix classes (see [34,41]). A new condition for the existence of the Minkowski inverse is given by the Hartwig-Spindelböck decomposition in this section. Under this condition, we present a new representation of the Minkowski inverse. We first introduce the following notations used in the section.
For A∈Cn×n given by (3.2) in Lemma 3.8, let
U∗GU=(G1G2G3G4), |
where G1∈Cr×r, G2∈Cr×(n−r), G3∈C(n−r)×r and G4∈C(n−r)×(n−r), and let
Δ=(KL)U∗GU(K∗L∗). |
Theorem 7.1. Let A be given in (3.2). Then, the following holds true:
(1) rank(A)=rank(AA∼) if and only if Δ is nonsingular.
(2) rank(A)=rank(A∼A) if and only if G1 is nonsingular.
(3) If Δ and G1 are nonsingular, then
Am=GU(K∗(G1ΣΔ)−10L∗(G1ΣΔ)−10)U∗G | (7.1) |
=U((G1K∗+G2L∗)(ΣΔ)−1(G1K∗+G2L∗)(G1ΣΔ)−1G2(G3K∗+G4L∗)(ΣΔ)−1(G3K∗+G4L∗)(G1ΣΔ)−1G2)U∗. | (7.2) |
Proof. (1). Using the Hartwig-Spindelböck decomposition, we have
rank(A)=rank(AA∼)⇔rank(A)=rank((ΣKΣL00)U∗GU((ΣK)∗0(ΣL)∗0))⇔rank(A)=rank((KL)U∗GU(K∗L∗)), |
which is equivalent to stating that Δ is nonsingular.
(2). Since (ΣKΣL) is of full row rank by (3.3), again, using the Hartwig-Spindelböck decomposition we derive that
rank(A)=rank(A∼A)⇔rank(A)=rank(((ΣK)∗0(ΣL)∗0)(G1G2G3G4)(ΣKΣL00))⇔rank(A)=rank((ΣKΣL)∗G1(ΣKΣL))⇔rank(A)=rank(G1), |
which is equivalent to stating that G1 is nonsingular.
(3). Note that A given in (3.2) can be rewritten as
A=U(Σ0)(KL)U∗, | (7.3) |
where B:=U(Σ0) and C:=(KL)U∗ are of full column rank and full row rank, respectively. If Δ and G1 are nonsingular, by items (1) and (2) and Lemma 3.1, we see that Am exists. Therefore, applying Lemma 3.2 to (7.3) yields that
Am=C∼(CC∼)−1(B∼B)−1B∼=GU(K∗L∗)G((KL)U∗GU(K∗L∗)G)−1(G(Σ0)U∗GU(Σ0))−1G(Σ0)U∗G=GU(K∗L∗)Δ−1(ΣG1Σ)−1(Σ0)U∗G=GU(K∗(G1ΣΔ)−10L∗(G1ΣΔ)−10)U∗G=U(G1G2G3G4)(K∗(G1ΣΔ)−10L∗(G1ΣΔ)−10)(G1G2G3G4)U∗=U((G1K∗+G2L∗)(ΣΔ)−1(G1K∗+G2L∗)(G1ΣΔ)−1G2(G3K∗+G4L∗)(ΣΔ)−1(G3K∗+G4L∗)(G1ΣΔ)−1G2)U∗, |
which completes the proof of this theorem.
Example 7.2. In order to illustrate Theorem 7.1, let us consider the matrix A given in Example 5.5. Then, the Hartwig-Spindelböck decomposition of A is
A=U(ΣKΣL00)U∗, |
where
U=(−0.730560.27137−0.6266100−0.27429−0.95698−0.09465400−0.625340.102720.77356000000100010),Σ=(2.6350001.26850000.66897),K=(0.718990.423930.16652−0.223190.541740.0241880.40383−0.11142−0.86962),L=(−0.51457−0.104090.29491−0.754420.21966−0.14149). |
And, we have that
G1=(0.06743−0.39650.91556−0.3965−0.85272−0.340090.91556−0.34009−0.21471),Δ=(−0.47044−0.3035−0.22606−0.3035−0.826060.12956−0.226060.12956−0.9035). |
Thus, it is easy to check that rank(G1)=rank(Δ)=3. Moreover, Am, calculated by (7.1) or (7.2), is the same as that in Example 5.5, so it is omitted.
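As a further, randomly generated illustration of Theorem 7.1 (our own sketch, assuming NumPy), the snippet below constructs Hartwig-Spindelböck factors of a singular square matrix from its SVD, forms G1 and Δ, evaluates formula (7.1) and compares the outcome with the Minkowski inverse given by Lemma 3.2.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(8)
n, r = 5, 3
Bf = rng.standard_normal((n, r))
Cf = rng.standard_normal((r, n))
A = Bf @ Cf                                   # singular square matrix of rank r

# Hartwig-Spindelbock factors from a full SVD (any factors satisfying (3.2)-(3.3) will do)
U, s, Vh = np.linalg.svd(A)
Sigma = np.diag(s[:r])
KL = Vh[:r, :] @ U
K, L = KL[:, :r], KL[:, r:]

G = mink(n)
UGU = U.conj().T @ G @ U
G1 = UGU[:r, :r]
Delta = KL @ UGU @ KL.conj().T
assert np.linalg.matrix_rank(G1) == r and np.linalg.matrix_rank(Delta) == r

middle = np.linalg.inv(G1 @ Sigma @ Delta)
M = np.block([[K.conj().T @ middle, np.zeros((r, n - r))],
              [L.conj().T @ middle, np.zeros((n - r, n - r))]])
Am_71 = G @ U @ M @ U.conj().T @ G            # formula (7.1)

Am_ref = madj(Cf) @ np.linalg.inv(Cf @ madj(Cf)) @ np.linalg.inv(madj(Bf) @ Bf) @ madj(Bf)
assert np.allclose(Am_71, Am_ref)
```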
Groß [24] considered an interesting problem regarding the characterizations of B and C when X=A† is assumed to be the unique solution of (3.4) in Lemma 3.9. This issue was later revisited in [42] and [43] for the Drazin inverse and the core inverse, respectively. Subsequently, we apply Theorem 7.1 to provide another characterization of the Minkowski inverse.
Theorem 7.3. Let A be given in (3.2) with rank(A∼AA∼)=rank(A), and let X∈Cn×n. Then, X=Am is the unique solution of the rank equation (3.4) if and only if
B=U(B1B200)U∗GandC=GUTU∗, | (7.4) |
where
T=(J1ΣKJ1ΣLJ3ΣKJ3ΣL),B1=(ΣKΣL)[T(1)(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1)+(In−T(1)T)Y1],B2=(ΣKΣL)(In−T(1)T)Y2, |
J1∈Cr×r and J3∈C(n−r)×r satisfy N(T∗)⊆N((KL)), Y1∈Cn×r and Y2∈Cn×(n−r) are arbitrary, and T(1)∈T{1}.
Proof. We first prove the 'only if' part. If X=Am is the unique solution of (3.4), from Lemma 3.9, we have that B=AH and C=JA for some H,J∈Cn×n. Put
(H1H2H3H4)=U∗HGU,(J1J2J3J4)=U∗GJU, |
where H1,J1∈Cr×r, H2,J2∈Cr×(n−r), H3,J3∈C(n−r)×r and H4,J4∈C(n−r)×(n−r). Thus,
B=AH=U(ΣKH1+ΣLH3ΣKH2+ΣLH400)U∗G, | (7.5) |
C=JA=GU(J1ΣKJ1ΣLJ3ΣKJ3ΣL)U∗. | (7.6) |
Note that [41,Formula (1.4)] has shown that
A†=U(K∗Σ−10L∗Σ−10)U∗. | (7.7) |
Then, inserting (7.5)–(7.7) in (3.5) gives
X=GU(J1ΣKH1+J1ΣLH3J1ΣKH2+J1ΣLH4J3ΣKH1+J3ΣLH3J3ΣKH2+J3ΣLH4)U∗G. | (7.8) |
By a comparison of (7.1) in Theorem 7.1 with (7.8), we see that
X=Am⇔{J1ΣKH1+J1ΣLH3=K∗(G1ΣΔ)−1,J3ΣKH1+J3ΣLH3=L∗(G1ΣΔ)−1,J1ΣKH2+J1ΣLH4=0,J3ΣKH2+J3ΣLH4=0,
which can be rewritten as
T(H1H3)=(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1),T(H2H4)=(00), | (7.9) |
where T=(J1ΣKJ1ΣLJ3ΣKJ3ΣL). Applying Lemma 3.5 to (7.9), we conclude that J1∈Cr×r and J3∈C(n−r)×r satisfy
TT(1)(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1)=(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1)⇔R((K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1))⊆R(T)⇔R((K∗L∗))⊆R(T)⇔N(T∗)⊆N((KL)), |
and
(H1H3)=T(1)(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1)+(In−T(1)T)Y1, | (7.10) |
(H2H4)=(In−T(1)T)Y2, | (7.11) |
where Y1∈Cn×r and Y2∈Cn×(n−r) are arbitrary, and T(1)∈T{1}. Hence, premultiplying (7.10) and (7.11) by (ΣKΣL), from (7.5) and (7.6), we infer that (7.4) holds. Conversely, the 'if' part is easy and is therefore omitted.
Notice that, in the proof of Theorem 7.3, the first equation in (7.9) can be replaced by
(J1J3)(ΣKΣL)(H1H3)=(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1), | (7.12) |
which is a second-order matrix equation. Then, by applying Lemma 3.10 to (7.12), different characterizations of B and C given by (7.4) are shown in the next theorem.
Theorem 7.4. Let A be given in (3.2) with rank(A∼AA∼)=rank(A), and let X∈Cn×n. Then, X=Am is the unique solution of the rank equation (3.4) if and only if
B=AU(Q−1(X−11Y3)W[In−(ˆC(ΣKΣL))(1)ˆC(ΣKΣL)]Z)U∗G, | (7.13)
C=GUˆC(ΣKΣL)U∗, | (7.14)
where ˆC=S(X10)P−1, (ˆC(ΣKΣL))(1)∈(ˆC(ΣKΣL)){1}, X1∈Cr×r is an arbitrary nonsingular matrix, Y3∈C(n−r)×r and Z∈Cn×(n−r) are arbitrary and P,W∈Cr×r and Q,S∈Cn×n are all nonsingular matrices such that
P(Ir0)Q=(ΣKΣL),S(Ir0)W=(K∗(G1ΣΔ)−1L∗(G1ΣΔ)−1). | (7.15) |
Proof. For convenience, we use the same notations as in the proof of Theorem 7.3. First, one can clearly see the existence of nonsingular matrices P,W∈Cr×r and Q,S∈Cn×n satisfying (7.15). To prove the 'only if' part, applying Lemma 3.10 to (7.12), we have that
(J1J3)=S(X10)P−1,(H1H3)=Q−1(X−11Y3)W, | (7.16) |
where X1∈Cr×r is an arbitrary nonsingular matrix and Y3∈C(n−r)×r is arbitrary. Note that the second equation in (7.9) can be rewritten as
(J1J3)(ΣKΣL)(H2H4)=(00). | (7.17) |
Then, substituting the first equation in (7.16) to (7.17), again, by Lemma 3.5, we obtain that
(H2H4)=[In−(ˆC(ΣKΣL))(1)ˆC(ΣKΣL)]Z, | (7.18)
where ˆC=S(X10)P−1 and Z∈Cn×(n−r) is arbitrary. Therefore, applying (7.16) and (7.18) to (7.5) and (7.6), we infer that (7.13) and (7.14) hold. Conversely, the 'if' part is easy.
There is also much interest in characterizing the generalized inverse by using a specific rank equation (see [32,42,44]). At the end of this section, we turn our attention on this consideration.
Theorem 7.5. Let A∈Cm×nr with rank(A∼AA∼)=rank(A). Then, there exist a unique matrix X∈Cn×n such that
AX=0,X∼=X,X2=X,rank(X)=n−r, | (7.19) |
a unique matrix Y∈Cm×m such that
YA=0,Y∼=Y,Y2=Y,rank(Y)=m−r | (7.20) |
and a unique matrix Z∈Cn×m such that
rank((A Im−Y; In−X Z))=rank(A) | (7.21)
Furthermore, X=In−AmA, Y=Im−AAm and Z=Am.
Proof. From AX=0 and X∼=X, we have that R(X)⊆N(A) and R(A∼)⊆N(X), which, together with rank(X)=n−r, show that R(X)=N(A) and N(X)=R(A∼). Hence, by X2=X and the item (3) in Lemma 3.3, it follows that the unique solution of (7.19) is X=PN(A),R(A∼)=In−AmA. Analogously, we can have that Y=Im−AAm is the unique matrix satisfying (7.20). Next, it is clear that R(Im−Y)=R(A) and R(In−X∗)=R(A∗). Thus, applying Lemma 3.9, we have that Z=(In−X)A†(Im−Y)=AmAA†AAm=Am is the unique matrix such that (7.21).
Zlobec [25] established an explicit form of the Moore-Penrose inverse, also known as the Zlobec formula, that is, A†=A∗(A∗AA∗)(1)A∗, where A∈Cm×n and (A∗AA∗)(1)∈(A∗AA∗){1}. In this section, we first present a more general representation of the Minkowski inverse that is similar to the Zlobec formula.
Theorem 8.1. Let A∈Cm×n be such that rank(AA∼)=rank(A∼A)=rank(A). Then,
Am=(A∼A)kA∼[(A∼A)k+l+1A∼](1)(A∼A)lA∼, |
where k and l are arbitrary nonnegative integers, and [(A∼A)k+l+1A∼](1)∈[(A∼A)k+l+1A∼]{1}.
Proof. First, we use induction on an arbitrary positive integer s to prove that rank((A∼A)s)=rank(A). Clearly, rank(A∼A)=rank(A). Suppose that rank((A∼A)s)=rank(A). Since R(A∼)∩N(A)={0} from rank(AA∼)=rank(A), we infer that
rank((A∼A)s+1)=rank(A∼A(A∼A)s)=rank((A∼A)s)−dim(R((A∼A)s)∩N(A∼A))=rank(A)−dim(R(A∼)∩N(A))=rank(A), |
which completes the induction. Hence, for an arbitrary nonnegative integer k, we have that
rank(A)=rank((A∼A)k+1)≤rank((A∼A)kA∼)≤rank(A), |
which implies that rank((A∼A)kA∼)=rank(A). Thus,
rank((A∼A)k+l+1A∼)=rank((A∼A)kA∼)=rank((A∼A)lA∼)=rank(A), |
where l is an arbitrary nonnegative integer. Therefore, by Lemma 3.7 and (3.1) in Remark 3.4, it follows that
(A∼A)kA∼[(A∼A)k+l+1A∼](1)(A∼A)lA∼=A(1,2)R((A∼A)kA∼),N((A∼A)lA∼)=A(1,2)R(A∼),N(A∼)=Am, |
where [(A∼A)k+l+1A∼](1)∈[(A∼A)k+l+1A∼]{1}. This now completes the proof.
Under the hypotheses of Theorem 8.1, when k=l=0, we directly give an explicit expression of the Minkowski inverse in the following corollary. It is worth mentioning that this result can also be obtained by applying Lemma 3.7 to (3.1) in Remark 3.4.
Corollary 8.2. Let A∈Cm×n be such that rank(AA∼)=rank(A∼A)=rank(A). Then,
Am=A∼(A∼AA∼)(1)A∼, | (8.1) |
where (A∼AA∼)(1)∈(A∼AA∼){1}.
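The Zlobec-type formula (8.1) is straightforward to test numerically; in the sketch below (assuming NumPy), np.linalg.pinv supplies one particular {1}-inverse of A∼AA∼, and the result is checked against the four defining equations of the Minkowski inverse.

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(9)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # Am exists generically

As = madj(A)
Am = As @ np.linalg.pinv(As @ A @ As) @ As     # formula (8.1) with a particular {1}-inverse

assert np.allclose(A @ Am @ A, A)
assert np.allclose(Am @ A @ Am, Am)
assert np.allclose(madj(A @ Am), A @ Am)
assert np.allclose(madj(Am @ A), Am @ A)
```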
Another corollary of Theorem 8.1 given below shows a different representation of the Minkowski inverse.
Corollary 8.3. Let A∈Cm×n be such that rank(AA∼)=rank(A∼A)=rank(A). Then,
Am=(A∼A)kA∼[(AA∼)k+1](1)A[(A∼A)l+1](1)(A∼A)lA∼, |
where k and l are arbitrary nonnegative integers, [(AA∼)k+1](1)∈[(AA∼)k+1]{1} and [(A∼A)l+1](1)∈[(A∼A)l+1]{1}.
Proof. Using Theorem 8.1 and Lemma 3.3, we have that
Am=(A∼A)kA∼[(A∼A)k+l+1A∼](1)(A∼A)lA∼=AmA(A∼A)kA∼[(A∼A)k+l+1A∼](1)(A∼A)lA∼AAm=AmA(A∼A)kA∼[A(A∼A)kA∼](1)A(A∼A)kA∼[(A∼A)k+l+1A∼](1)(A∼A)lA∼A[(A∼A)lA∼A](1)(A∼A)lA∼AAm=AmA(A∼A)kA∼[A(A∼A)kA∼](1)A[(A∼A)lA∼A](1)(A∼A)lA∼AAm=(A∼A)kA∼[(AA∼)k+1](1)A[(A∼A)l+1](1)(A∼A)lA∼, |
which completes the proof.
Remark 8.4. Under the hypotheses of Corollary 8.3, when k=l=0, we immediately obtain [29,Theorem 5], that is,
Am=A∼(AA∼)(1)A(A∼A)(1)A∼, |
where (AA∼)(1)∈(AA∼){1} and (A∼A)(1)∈(A∼A){1}.
This section concludes by deriving the Minkowski inverse of a class of block matrices with the help of Corollary 8.2, which extends [25,Corollary 1] to the Minkowski inverse.
Theorem 8.5. Let A∈Cm×nr be such that rank(AA∼)=rank(A∼A)=rank(A) and
A=(A1A2A3A4), | (8.2) |
where A1∈Cr×r is nonsingular, A2∈Cr×(n−r), A3∈C(m−r)×r and A4∈C(m−r)×(n−r). In what follows, (A1 A2) denotes the first block row of A, while (A1 A3) denotes its first block column, obtained by stacking A1 on A3. Then,
Am=(A1A2)∼[(A1A3)∼A(A1A2)∼]−1(A1A3)∼. | (8.3) |
Proof. Let
T1=(A1A2)A∼(A1A3). | (8.4) |
Since A1∈Cr×r is nonsingular, we have
rank(A)=rank((Ir0−A3A−11Im−r)(A1A2A3A4)(Ir−A−11A20In−r))=rank(A100A4−A3A−11A2), | (8.5) |
which, together with rank(A)=rank(A1), gives A4=A3A−11A2. Then, it can be easily verified that
T1=(A1A2)(A1A2)∼(A∼1)−1(A1A3)∼(A1A3). | (8.6) |
It is sufficient to prove that T1 is nonsingular, i.e., that rank(T1)=r. In fact, since N(A)=N((A1A2)) from the nonsingularity of A1, we have that R(A∼)=R((A1A2)∼). Then, since R(A∼)∩N(A)={0} from rank(AA∼)=rank(A), we infer that
rank((A1A2)(A1A2)∼)=rank((A1A2))−dim(R((A1A2)∼)∩N((A1A2)))=rank(A1)−dim(R(A∼)∩N(A))=r. | (8.7) |
Analogously, we can obtain that
rank((A1A3)∼(A1A3))=r. | (8.8) |
Considering (8.6)–(8.8), it is clear that rank(T1)=r. Then, using the item (2) in Theorem 5.1, we get that rank(T1)=rank(A∼AA∼). Denote
(B1B2B3B4)=A∼AA∼, |
where B1∈Cr×r, B2∈Cr×(m−r), B3∈C(n−r)×r and B4∈C(n−r)×(m−r). In view of (8.4), we see that B1=T∼1. Thus, by the same method of (8.5), we have that B4=B3(T∼1)−1B2. Then, it is easy to prove that
((T∼1)−1000)∈(A∼AA∼){1}. | (8.9) |
Substituting (8.9) and (8.2) into (8.1) in Corollary 8.2, we obtain (8.3) by direct calculation. This completes the proof.
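Theorem 8.5 can be checked numerically as well. In the sketch below (ours, assuming NumPy; the existence condition of Lemma 3.1 holds generically for random data and is asserted explicitly), the block A4 is forced to equal A3A1−1A2 so that rank(A)=r, the first block row (A1 A2) and the first block column (A1 A3) are formed with hstack and vstack, and formula (8.3) is compared with the value given by (8.1).

```python
import numpy as np

def mink(k):
    return np.diag([1.0] + [-1.0] * (k - 1))

def madj(M):
    m, n = M.shape
    return mink(n) @ M.conj().T @ mink(m)

rng = np.random.default_rng(10)
m, n, r = 5, 4, 2
A1 = rng.standard_normal((r, r))            # nonsingular (generically)
A2 = rng.standard_normal((r, n - r))
A3 = rng.standard_normal((m - r, r))
A4 = A3 @ np.linalg.solve(A1, A2)           # forced by rank(A) = r
A = np.block([[A1, A2], [A3, A4]])

As = madj(A)
rk = np.linalg.matrix_rank
assert rk(A) == rk(A @ As) == rk(As @ A) == r     # existence condition of Lemma 3.1

row = np.hstack([A1, A2])                   # first block row (A1 A2), r x n
col = np.vstack([A1, A3])                   # first block column (A1 A3), m x r

Am_85 = madj(row) @ np.linalg.inv(madj(col) @ A @ madj(row)) @ madj(col)   # formula (8.3)
Am_ref = As @ np.linalg.pinv(As @ A @ As) @ As                             # formula (8.1)
assert np.allclose(Am_85, Am_ref)
```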
This paper shows some different characterizations and representations of the Minkowski inverse in Minkowski space, mainly by extending some known results of the Moore-Penrose inverse to the Minkowski inverse. In addition, we are convinced that the study of generalized inverses in Minkowski space will maintain its popularity for years to come. Several possible directions for further research can be described as follows:
(1) It is difficult but interesting to explore the representation of the Minkowski inverse by using the core-EP decomposition [45].
(2) A function f from Cm×n into Cn×m, written as f(A)=As for A∈Cm×n, is involutory if (As)s=A and (BC)s=CsBs, where A∈Cm×n, B∈Cm×p and C∈Cp×n. Wong [31] introduced the Moore-Penrose inverse A≈ of A relative to an involutory function f, defined by the conditions AA≈A=A, A≈AA≈=A≈, (AA≈)s=AA≈ and (A≈A)s=A≈A. Clearly, the Minkowski adjoint A∼ and the Minkowski inverse Am are particular cases of the involutory function f of A and the Moore-Penrose inverse A≈ of A relative to f, respectively. A worthwhile research direction is to generalize the results on the Minkowski inverse to the Moore-Penrose inverse of A relative to an involutory function.
(3) As we know, the study of the Minkowski inverse originates from the simplification of polarized light problems [1]. It is a meaningful research topic to find new applications of the Minkowski inverse in the study of the polarization of light by using its existing mathematical results.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
This work was supported by the National Natural Science Foundation of China under grant number 11961076. The authors would like to thank the anonymous referees for their precious suggestions and comments, which improved the presentation of the paper distinctly.
All authors declare no conflict of interest that may affect the publication of this paper.
[1] M. Renardy, Singular value decomposition in Minkowski space, Linear Algebra Appl., 236 (1996), 53–58. http://dx.doi.org/10.1016/0024-3795(94)00124-3
[2] A. R. Meenakshi, Generalized inverses of matrices in Minkowski space, Proc. Nat. Semin. Algebra Appl., 57 (2000), 1–14.
[3] H. Zekraoui, Z. Al-Zhour, C. Özel, Some new algebraic and topological properties of the Minkowski inverse in the Minkowski space, Sci. World J., 2013 (2013), 765732. http://dx.doi.org/10.1155/2013/765732
[4] A. R. Meenakshi, Range symmetric matrices in Minkowski space, Bull. Malays. Math. Sci. Soc., 23 (2000), 45–52.
[5] K. Bharathi, Product of k-EP block matrices in Minkowski space, Intern. J. Fuzzy Math. Arch., 5 (2014), 29–38.
[6] M. S. Lone, D. Krishnaswamy, m-Projections involving Minkowski inverse and range symmetric property in Minkowski space, J. Linear Topol. Algebra, 5 (2016), 215–228.
[7] A. R. Meenakshi, D. Krishnaswamy, Product of range symmetric block matrices in Minkowski space, Bull. Malays. Math. Sci. Soc., 29 (2006), 59–68.
[8] D. Krishnaswamy, G. Punithavalli, The anti-reflexive solutions of the matrix equation AXB=C in Minkowski space M, Int. J. Recent Res. Appl. Stud., 15 (2013), 221–227.
[9] D. Krishnaswamy, M. S. Lone, Partial ordering of range symmetric matrices and M-projectors with respect to Minkowski adjoint in Minkowski space, Adv. Linear Algebra Matrix Theor., 6 (2016), 132–145. http://dx.doi.org/10.4236/alamt.2016.64013
[10] G. Punithavalli, Matrix partial orderings and the reverse order law for the Minkowski inverse in M, AIP Conf. Proc., 2177 (2019), 020073. http://doi.org/10.1063/1.5135248
[11] A. Kılıçman, Z. A. Zhour, The representation and approximation for the weighted Minkowski inverse in Minkowski space, Math. Comput. Model., 47 (2008), 363–371. http://dx.doi.org/10.1016/j.mcm.2007.03.031
[12] Z. Al-Zhour, Extension and generalization properties of the weighted Minkowski inverse in a Minkowski space for an arbitrary matrix, Comput. Math. Appl., 70 (2015), 954–961. http://dx.doi.org/10.1016/j.camwa.2015.06.015
[13] X. Liu, Y. Qin, Iterative methods for computing the weighted Minkowski inverses of matrices in Minkowski space, World Acad. Sci. Eng. Technol., 75 (2011), 1083–1085.
[14] H. Wang, N. Li, X. Liu, The m-core inverse and its applications, Linear Multilinear Algebra, 69 (2019), 2491–2509. http://dx.doi.org/10.1080/03081087.2019.1680597
[15] H. Wang, H. Wu, X. Liu, The m-core-EP inverse in Minkowski space, B. Iran. Math. Soc., 48 (2021), 2577–2601. http://dx.doi.org/10.1007/s41980-021-00619-2
[16] H. Wu, H. Wang, H. Jin, The m-WG inverse in Minkowski space, Filomat, 36 (2022), 1125–1141. http://dx.doi.org/10.2298/FIL2204125W
[17] R. Penrose, A generalized inverse for matrices, Math. Proc. Cambridge Philos. Soc., 51 (1955), 406–413. http://dx.doi.org/10.1017/S0305004100030401
[18] K. P. S. Bhaskara-Rao, The Theory of Generalized Inverses over Commutative Rings, London: Taylor and Francis, 2002. http://doi.org/10.4324/9780203218877
[19] A. Ben-Israel, T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd edition, New York: Springer, 2003. http://doi.org/10.1007/b97366
[20] S. L. Campbell, C. D. Meyer, Generalized Inverses of Linear Transformations, Philadelphia: Society for Industrial and Applied Mathematics, 2009. http://doi.org/10.1137/1.9780898719048
[21] D. S. Cvetković-Ilić, Y. Wei, Algebraic Properties of Generalized Inverses, Singapore: Springer, 2017. http://doi.org/10.1007/978-981-10-6349-7
[22] M. Z. Nashed, Generalized Inverses and Applications, New York: Academic Press, 1976. http://doi.org/10.1016/C2013-0-11227-5
[23] G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, 2nd edition, Beijing: Science Press, 2018. http://doi.org/10.1007/978-981-13-0146-9
[24] J. Groß, Solution to a rank equation, Linear Algebra Appl., 289 (1999), 127–130. http://doi.org/10.1016/S0024-3795(97)10001-5
[25] S. Zlobec, An explicit form of the Moore-Penrose inverse of an arbitrary complex matrix, SIAM Rev., 12 (1970), 132–134. http://doi.org/10.1137/1012014
[26] H. Zhu, J. Chen, P. Patrício, X. Mary, Centralizer's applications to the inverse along an element, Appl. Math. Comput., 315 (2017), 27–33. http://doi.org/10.1016/j.amc.2017.07.046
[27] H. Zhu, L. Wu, Q. Wang, Suitable elements, ∗-clean elements and Sylvester equations in rings with involution, Commun. Algebra, 50 (2022), 1535–1543. http://doi.org/10.1080/00927872.2021.1985129
[28] I. Erdelyi, On the matrix equation Ax=λBx, J. Math. Anal. Appl., 17 (1967), 119–132. http://doi.org/10.1016/0022-247X(67)90169-2
[29] K. Kamaraj, K. C. Sivakumar, Moore-Penrose inverse in an indefinite inner product space, J. Appl. Math. Comput., 19 (2005), 297–310. http://doi.org/10.1007/BF02935806
[30] M. Z. Petrović, P. S. Stanimirović, Representations and computations of {2,3∼} and {2,4∼}-inverses in indefinite inner product spaces, Appl. Math. Comput., 254 (2015), 157–171. http://doi.org/10.1016/j.amc.2014.12.100
[31] E. T. Wong, Involutory functions and Moore-Penrose inverses of matrices in an arbitrary field, Linear Algebra Appl., 48 (1982), 283–291. http://doi.org/10.1016/0024-3795(82)90114-8
[32] Y. Wei, A characterization and representation of the generalized inverse A(2)T,S and its applications, Linear Algebra Appl., 280 (1998), 87–96. http://doi.org/10.1016/S0024-3795(98)00008-1
[33] N. S. Urquhart, Computation of generalized inverse matrices which satisfy specified conditions, SIAM Rev., 10 (1968), 216–218. http://doi.org/10.1137/1010035
[34] R. E. Hartwig, K. Spindelböck, Matrices for which A∗ and A† commute, Linear Multilinear Algebra, 14 (1983), 241–256. http://doi.org/10.1080/03081088308817561
[35] Z. Liao, Solution to a second order matrix equation over a skew field (in Chinese), J. Math. Technol., 15 (1999), 72–74.
[36] E. H. Moore, On the reciprocal of the general algebraic matrix, Bull. Amer. Math. Soc., 26 (1920), 394–395.
[37] C. A. Desoer, B. H. Whalen, A note on pseudoinverses, J. Soc. Indust. Appl. Math., 11 (1963), 442–447. http://doi.org/10.1137/0111031
[38] A. Bjerhammar, Rectangular reciprocal matrices, with special reference to geodetic calculations, Bull. Géodésique, 20 (1951), 188–220. http://doi.org/10.1007/BF02526278
[39] A. Bjerhammar, A generalized matrix algebra, Trans. Roy. Inst. Tech., 1958, 124.
[40] J. J. Sylvester, Sur l'équation en matrices px=xq, Comptes Rendus de l'Académie des Sciences, 99 (1884), 67–71.
[41] O. M. Baksalary, G. P. H. Styan, G. Trenkler, On a matrix decomposition of Hartwig and Spindelböck, Linear Algebra Appl., 430 (2009), 2798–2812. http://doi.org/10.1016/j.laa.2009.01.015
[42] B. Zheng, R. B. Bapat, Characterization of generalized inverses by a rank equation, Appl. Math. Comput., 151 (2004), 53–67. http://doi.org/10.1016/S0096-3003(03)00322-9
[43] H. Wang, X. Liu, Characterizations of the core inverse and the core partial ordering, Linear Multilinear Algebra, 63 (2015), 1829–1836. http://doi.org/10.1080/03081087.2014.975702
[44] M. Fiedler, T. L. Markham, A characterization of the Moore-Penrose inverse, Linear Algebra Appl., 179 (1993), 129–133. http://doi.org/10.1016/0024-3795(93)90325-I
[45] H. Wang, Core-EP decomposition and its applications, Linear Algebra Appl., 508 (2016), 289–300. http://doi.org/10.1016/j.laa.2016.08.008