Research article

Further characterizations and representations of the Minkowski inverse in Minkowski space

  • Received: 23 May 2023 Revised: 25 June 2023 Accepted: 11 July 2023 Published: 25 July 2023
  • MSC : 15A09, 15A03, 15A24

  • This paper serves to identify some new characterizations and representations of the Minkowski inverse in Minkowski space. First, a few representations of $\{1,3^{\mathrm{m}}\}$-, $\{1,2,3^{\mathrm{m}}\}$-, $\{1,4^{\mathrm{m}}\}$- and $\{1,2,4^{\mathrm{m}}\}$-inverses are given in order to represent the Minkowski inverse. Second, some famous characterizations of the Moore-Penrose inverse are extended to the Minkowski inverse. Third, using the Hartwig-Spindelböck decomposition, we present a representation of the Minkowski inverse, and, based on this result, an interesting characterization of the Minkowski inverse is shown by a rank equation. Finally, we obtain several new representations of the Minkowski inverse in a more general form, by which the Minkowski inverse of a class of block matrices is given.

    Citation: Jiale Gao, Kezheng Zuo, Qingwen Wang, Jiabao Wu. Further characterizations and representations of the Minkowski inverse in Minkowski space[J]. AIMS Mathematics, 2023, 8(10): 23403-23426. doi: 10.3934/math.20231189




    In order to easily test that a Mueller matrix maps the forward light cone into itself when studying polarized light, Renardy [1] explored singular value decomposition in Minkowski space. Subsequently, Meenakshi [2] defined the Minkowski inverse in Minkowski space and gave a condition for a Mueller matrix to have a singular value decomposition in terms of its Minkowski inverse. Since this article came out, the generalized inverses in Minkowski space have attracted considerable attention. Zekraoui et al. [3] derived some new algebraic and topological properties of the Minkowski inverse. Meenakshi [4] introduced the concept of a range symmetric matrix in Minkowski space, which was further studied by various scholars [5,6,7]. The Minkowski inverse has been widely used in many applications, such as the anti-reflexive solutions of matrix equations [8] and matrix partial orderings [9,10]. The weighted Minkowski inverse defined in [11] is a generalization of the Minkowski inverse, and many of its properties, representations and approximations were established in [11,12,13]. Recently, Wang et al. introduced the m-core inverse [14], the m-core-EP inverse [15] and the m-WG inverse [16] in Minkowski space, which are viewed as generalizations of the core inverse, the core-EP inverse and the weak group inverse, respectively.

    It is well known that the Moore-Penrose inverse [17] not only plays an irreplaceable role in solving linear matrix equations, but it is also a generally accepted tool in statistics, studies of extreme-value problems and other scientific disciplines. Moreover, this inverse pervades a great number of mathematical fields: C*-algebras, rings, Hilbert spaces, Banach spaces, categories, tensors and the quaternion skew field. The algebraic properties, characterizations, representations, perturbation theory and iterative computations of the Moore-Penrose inverse have been extensively investigated. For more details on the study of the Moore-Penrose inverse, refer to [18,19,20,21,22,23].

    Although the Minkowski inverse in Minkowski space can be regarded as an extension of the Moore-Penrose inverse, there are many differences between these two classes of generalized inverses, especially in terms of their existence conditions (see [2,3,12]). So, it is natural to ask what interesting results for the Minkowski inverse can be drawn by considering some known conclusions of the Moore-Penrose inverse.

    Mainly inspired by [24,25,26,27], we summarize the main topics of this work as below:

    ● A few characterizations and representations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses are shown.

    ● We apply the solvability of matrix equations, the nonsingularity of matrices, the existence of projectors and the index of matrices to characterize the existence of the Minkowski inverse, which extends some classic characterizations of the Moore-Penrose inverse in $\mathbb{C}^{m\times n}$ with the usual Hermitian adjoint and in a ring with involution. We also show various representations of the Minkowski inverse in different cases.

    ● Using the Hartwig-Spindelböck decomposition, we present a new representation of the Minkowski inverse. Based on this result, an interesting characterization of the Minkowski inverse is presented through the use of a rank equation.

    ● Motivated by the Zlobec formula of the Moore-Penrose inverse, we give a more general representation of the Minkowski inverse and apply it to compute the Minkowski inverse of a class of block matrices.

    This paper is organized as follows. Section 2 presents the notations and terminology. In Section 3, some necessary lemmas are given. We devote Section 4 to the characterizations of {1,3m}-, {1,2,3m}-, {1,4m}- and {1,2,4m}-inverses. Some classic properties of the Moore-Penrose inverse are extended to the case of the Minkowski inverse in Section 5. In Section 6, we further extend several characterizations of the Moore-Penrose inverse in a ring to the Minkowski inverse. We characterize the Minkowski inverse by using a rank equation in Section 7. Section 8 focuses on showing a few new representations of the Minkowski inverse.

    Throughout this paper, we adopt the following notations and terminology. Let $\mathbb{C}^n$, $\mathbb{C}^{m\times n}$ and $\mathbb{C}^{m\times n}_r$ be the sets of all complex $n$-dimensional vectors, complex $m\times n$ matrices and complex $m\times n$ matrices with rank $r$, respectively. The symbols $A^*$, $R(A)$, $N(A)$ and $\operatorname{rank}(A)$ stand for the conjugate transpose, range, null space and rank of $A\in\mathbb{C}^{m\times n}$, respectively. The index of $A\in\mathbb{C}^{n\times n}$, denoted by $\operatorname{Ind}(A)$, is the smallest nonnegative integer $t$ satisfying $\operatorname{rank}(A^{t+1})=\operatorname{rank}(A^{t})$. And, $A^0=I_n$ for $A\in\mathbb{C}^{n\times n}$, where $I_n$ is the identity matrix in $\mathbb{C}^{n\times n}$. We denote the dimension and the orthogonal complement of a subspace $L\subseteq\mathbb{C}^n$ by $\dim(L)$ and $L^{\perp}$, respectively. By $P_{S,T}$ we denote the projector onto $S$ along $T$, where the subspaces $S,T\subseteq\mathbb{C}^n$ satisfy $S\oplus T=\mathbb{C}^n$. In particular, $P_S=P_{S,S^{\perp}}$.

    The Moore-Penrose inverse [17] of $A\in\mathbb{C}^{m\times n}$ is the unique matrix $X\in\mathbb{C}^{n\times m}$ verifying

    $$AXA=A,\quad XAX=X,\quad (AX)^*=AX,\quad (XA)^*=XA,$$

    and it is denoted by $A^{\dagger}$. The group inverse [28] of $A\in\mathbb{C}^{n\times n}$ is the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying

    $$AXA=A,\quad XAX=X,\quad AX=XA,$$

    and it is denoted by $A^{\#}$. For $A\in\mathbb{C}^{m\times n}$, if there is a matrix $X\in\mathbb{C}^{n\times m}$ satisfying

    $$XAX=X,\quad R(X)=T,\quad N(X)=S,$$

    where $T\subseteq\mathbb{C}^n$ and $S\subseteq\mathbb{C}^m$ are two subspaces, then $X$ is unique and is denoted by $A^{(2)}_{T,S}$ [19,23]. Particularly, if $AA^{(2)}_{T,S}A=A$, we denote $A^{(1,2)}_{T,S}=A^{(2)}_{T,S}$.

    Additionally, let $\mathcal{G}$ be the Minkowski metric tensor [1,2] defined by $\mathcal{G}u=(u_0,-u_1,-u_2,\ldots,-u_{n-1})$, where $u\in\mathbb{C}^n$ is indexed as $u=(u_0,u_1,\ldots,u_{n-1})$. The Minkowski metric tensor $\mathcal{G}$ can be determined by the nonsingular matrix $G=\begin{pmatrix}1&0\\0&-I_{n-1}\end{pmatrix}$, which is also called the Minkowski metric matrix [2]. Evidently, $G^*=G$ and $G^2=I_n$. The Minkowski inner product [1,2] of two elements $x$ and $y$ in $\mathbb{C}^n$ is defined by $(x,y)=\langle x,Gy\rangle$, where $\langle\cdot,\cdot\rangle$ is the conventional Euclidean inner product. The complex linear space $\mathbb{C}^n$ endowed with the Minkowski inner product is called the Minkowski space. Notice that the Minkowski space is also an indefinite inner product space [29,30]. The Minkowski adjoint of $A\in\mathbb{C}^{m\times n}$ is $A^{\sim}=GA^*F$, where $G$ and $F$ are the Minkowski metric matrices of orders $n$ and $m$, respectively. For $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times p}$, it is easy to verify that $(A^{\sim})^{\sim}=A$ and $(AB)^{\sim}=B^{\sim}A^{\sim}$.
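    The objects just introduced are easy to experiment with numerically. The following is a minimal sketch (ours, not from the paper) of the Minkowski metric matrix and the Minkowski adjoint in NumPy; the function names `mink_metric` and `mink_adjoint` are our own, and the random test merely illustrates the identities $(A^{\sim})^{\sim}=A$ and $(AB)^{\sim}=B^{\sim}A^{\sim}$ stated above.

```python
import numpy as np

def mink_metric(n):
    """Minkowski metric matrix G = diag(1, -1, ..., -1) of order n."""
    return np.diag([1.0] + [-1.0] * (n - 1))

def mink_adjoint(A):
    """Minkowski adjoint A~ = G A* F of an m x n matrix A."""
    m, n = A.shape
    return mink_metric(n) @ A.conj().T @ mink_metric(m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 5))
    B = rng.standard_normal((5, 3))
    # (A~)~ = A and (AB)~ = B~ A~
    assert np.allclose(mink_adjoint(mink_adjoint(A)), A)
    assert np.allclose(mink_adjoint(A @ B), mink_adjoint(B) @ mink_adjoint(A))
```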

    Definition 2.1. [2,30] Let $A\in\mathbb{C}^{m\times n}$.

    (1) If there exists $X\in\mathbb{C}^{n\times m}$ such that

    $$(1)\ AXA=A,\quad (2)\ XAX=X,\quad (3^{\mathrm{m}})\ (AX)^{\sim}=AX,\quad (4^{\mathrm{m}})\ (XA)^{\sim}=XA,$$

    then $X$ is called the Minkowski inverse of $A$ and is denoted by $A^{\mathrm{m}}$.

    (2) If $X\in\mathbb{C}^{n\times m}$ satisfies equations $(i),(j),\ldots,(k)$ from among equations $(1)$–$(4^{\mathrm{m}})$, then $X$ is called an $\{i,j,\ldots,k\}$-inverse of $A$ and is denoted by $A^{(i,j,\ldots,k)}$. The set of all $\{i,j,\ldots,k\}$-inverses of $A$ is denoted by $A\{i,j,\ldots,k\}$.

    This section begins with recalling existence conditions and some basic properties of the Minkowski inverse, which will be useful in the later discussion.

    Lemma 3.1 (Theorem 1, [2] or Theorem 3, [31]). Let $A\in\mathbb{C}^{m\times n}$. Then, $A^{\mathrm{m}}$ exists if and only if $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$.

    Lemma 3.2 (Theorem 8, [3]). Let $A\in\mathbb{C}^{m\times n}_r$ with $r>0$, and let $A=BC$ be a full-rank factorization of $A$, where $B\in\mathbb{C}^{m\times r}_r$ and $C\in\mathbb{C}^{r\times n}_r$. If $A^{\mathrm{m}}$ exists, then $A^{\mathrm{m}}=C^{\sim}(CC^{\sim})^{-1}(B^{\sim}B)^{-1}B^{\sim}$.
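    As an illustration, the following sketch (ours, not from the paper) computes $A^{\mathrm{m}}$ through the formula of Lemma 3.2, using a full-rank factorization obtained from the SVD (our choice), and then checks the four equations of Definition 2.1. It assumes the test matrix satisfies the existence condition of Lemma 3.1, which holds with probability one for a random matrix, and reuses `mink_adjoint` from the sketch above.

```python
import numpy as np
# assumes mink_adjoint from the previous sketch is available

def full_rank_factorization(A, tol=1e-12):
    """A = B C with B of full column rank and C of full row rank (via SVD)."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vh[:r, :]

def minkowski_inverse(A):
    """Minkowski inverse via Lemma 3.2: A^m = C~ (C C~)^{-1} (B~ B)^{-1} B~."""
    B, C = full_rank_factorization(A)
    Bt, Ct = mink_adjoint(B), mink_adjoint(C)
    return Ct @ np.linalg.inv(C @ Ct) @ np.linalg.inv(Bt @ B) @ Bt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank 3
    X = minkowski_inverse(A)
    # the four equations of Definition 2.1
    assert np.allclose(A @ X @ A, A)
    assert np.allclose(X @ A @ X, X)
    assert np.allclose(mink_adjoint(A @ X), A @ X)
    assert np.allclose(mink_adjoint(X @ A), X @ A)
```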

    Lemma 3.3 (Theorem 9, [29]). Let $A\in\mathbb{C}^{m\times n}$ with $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Then, the following holds:

    (1) $R(A^{\mathrm{m}})=R(A^{\sim})$ and $N(A^{\mathrm{m}})=N(A^{\sim})$;

    (2) $AA^{\mathrm{m}}=P_{R(A),N(A^{\sim})}$;

    (3) $A^{\mathrm{m}}A=P_{R(A^{\sim}),N(A)}$.

    Remark 3.4. Under the conditions of the hypotheses of Lemma 3.3, we immediately have

    $$A^{\mathrm{m}}=A^{(1,2)}_{R(A^{\sim}),N(A^{\sim})}. \tag{3.1}$$

    Furthermore, we recall an important application of {1}-inverses to solve matrix equations.

    Lemma 3.5 (Theorem 1.2.5, [23]). Let $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{p\times q}$ and $D\in\mathbb{C}^{m\times q}$. Then, there is a solution $X\in\mathbb{C}^{n\times p}$ to the matrix equation $AXB=D$ if and only if, for some $A^{(1)}\in A\{1\}$ and $B^{(1)}\in B\{1\}$, $AA^{(1)}DB^{(1)}B=D$; in this case, the general solution is

    $$X=A^{(1)}DB^{(1)}+(I_n-A^{(1)}A)Y+Z(I_p-BB^{(1)}),$$

    where $A^{(1)}\in A\{1\}$ and $B^{(1)}\in B\{1\}$ are fixed but arbitrary, and $Y,Z\in\mathbb{C}^{n\times p}$ are arbitrary.
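    A quick numerical sanity check of Lemma 3.5 can be set up as follows (our sketch; it uses the Moore-Penrose inverse as a convenient $\{1\}$-inverse, and the right-hand side $D=AWB$ is constructed to be consistent by design).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((3, 5))
D = A @ rng.standard_normal((6, 3)) @ B          # consistent by construction

A1, B1 = np.linalg.pinv(A), np.linalg.pinv(B)    # particular {1}-inverses
assert np.allclose(A @ A1 @ D @ B1 @ B, D)       # solvability test of Lemma 3.5

# general solution X = A1 D B1 + (I - A1 A) Y + Z (I - B B1)
Y = rng.standard_normal((6, 3))
Z = rng.standard_normal((6, 3))
X = A1 @ D @ B1 + (np.eye(6) - A1 @ A) @ Y + Z @ (np.eye(3) - B @ B1)
assert np.allclose(A @ X @ B, D)
```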

    Two significant results for A(2)T,S are reviewed in order to show the existence conditions of the Minkowski inverse in Section 5, and to represent the Minkowski inverse in Section 8, respectively.

    Lemma 3.6 (Theorem 2.1, [32]). Let $A\in\mathbb{C}^{m\times n}_r$, and let two subspaces $T\subseteq\mathbb{C}^n$ and $S\subseteq\mathbb{C}^m$ be such that $\dim(T)\le r$ and $\dim(S)=m-\dim(T)$. Suppose that $H\in\mathbb{C}^{n\times m}$ is such that $R(H)=T$ and $N(H)=S$. If $A^{(2)}_{T,S}$ exists, then $\operatorname{Ind}(AH)=\operatorname{Ind}(HA)=1$. Further, we have that $A^{(2)}_{T,S}=(HA)^{\#}H=H(AH)^{\#}$.

    Lemma 3.7 (Urquhart formula, [33]). Let $A\in\mathbb{C}^{m\times n}$, $U\in\mathbb{C}^{n\times p}$, $V\in\mathbb{C}^{q\times m}$ and

    $$X=U(VAU)^{(1)}V,$$

    where $(VAU)^{(1)}\in(VAU)\{1\}$. Then, $X=A^{(1,2)}_{R(U),N(V)}$ if and only if $\operatorname{rank}(VAU)=\operatorname{rank}(U)=\operatorname{rank}(V)=\operatorname{rank}(A)$.

    The following three auxiliary lemmas are critical to conclude the results in Section 7.

    Lemma 3.8 (Hartwig-Spindelböck decomposition, [34]). Let $A\in\mathbb{C}^{n\times n}_r$. Then, $A$ can be represented in the form

    $$A=U\begin{pmatrix}\Sigma K&\Sigma L\\0&0\end{pmatrix}U^*, \tag{3.2}$$

    where $U\in\mathbb{C}^{n\times n}$ is unitary, $\Sigma=\operatorname{diag}(\sigma_1,\sigma_2,\ldots,\sigma_r)$ is the diagonal matrix of singular values of $A$, $\sigma_i>0$ $(i=1,2,\ldots,r)$, and $K\in\mathbb{C}^{r\times r}$ and $L\in\mathbb{C}^{r\times(n-r)}$ satisfy

    $$KK^*+LL^*=I_r. \tag{3.3}$$

    Lemma 3.9 (Theorem 1, [24]). Let $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{m\times m}$ and $C\in\mathbb{C}^{n\times n}$. Then, there exists a solution $X\in\mathbb{C}^{n\times m}$ to the rank equation

    $$\operatorname{rank}\begin{pmatrix}A&B\\C&X\end{pmatrix}=\operatorname{rank}(A) \tag{3.4}$$

    if and only if $R(B)\subseteq R(A)$ and $R(C^*)\subseteq R(A^*)$, in which case,

    $$X=CA^{\dagger}B. \tag{3.5}$$

    Lemma 3.10 (Theorem 1, [35]). Let $A\in\mathbb{C}^{m\times n}_r$ and $B\in\mathbb{C}^{l\times h}_{r_1}$. Assume that $A=P\begin{pmatrix}I_r&0\\0&0\end{pmatrix}Q$ and $B=P_1\begin{pmatrix}I_{r_1}&0\\0&0\end{pmatrix}Q_1$, where $P\in\mathbb{C}^{m\times m}$, $Q\in\mathbb{C}^{n\times n}$, $P_1\in\mathbb{C}^{l\times l}$ and $Q_1\in\mathbb{C}^{h\times h}$ are nonsingular. If $r=r_1$, then the general solution of the matrix equation $XAY=B$ is given by

    $$X=P_1\begin{pmatrix}X_1&X_2\\0&X_4\end{pmatrix}P^{-1},\qquad Y=Q^{-1}\begin{pmatrix}X_1^{-1}&0\\Y_3&Y_4\end{pmatrix}Q_1,$$

    where $X_1\in\mathbb{C}^{r\times r}$ is an arbitrary nonsingular matrix and $X_2\in\mathbb{C}^{r\times(m-r)}$, $X_4\in\mathbb{C}^{(l-r)\times(m-r)}$, $Y_3\in\mathbb{C}^{(n-r)\times r}$ and $Y_4\in\mathbb{C}^{(n-r)\times(h-r)}$ are arbitrary.

    An interesting conclusion proved by Kamaraj and Sivakumar in [29, Theorem 4] is that, if $A\in\mathbb{C}^{m\times n}$ is such that $A^{\mathrm{m}}$ exists, then

    $$A^{\mathrm{m}}=A^{(1,4^{\mathrm{m}})}AA^{(1,3^{\mathrm{m}})}, \tag{4.1}$$

    where $A^{(1,3^{\mathrm{m}})}\in A\{1,3^{\mathrm{m}}\}$ and $A^{(1,4^{\mathrm{m}})}\in A\{1,4^{\mathrm{m}}\}$. This result shows the importance of $A^{(1,3^{\mathrm{m}})}$ and $A^{(1,4^{\mathrm{m}})}$ for representing the Minkowski inverse $A^{\mathrm{m}}$. Moreover, Petrović and Stanimirović [30] have investigated the representations and computations of $\{2,3\}$- and $\{2,4\}$-inverses in an indefinite inner product space, which are generalizations of $\{2,3^{\mathrm{m}}\}$- and $\{2,4^{\mathrm{m}}\}$-inverses in Minkowski space. Motivated by the above work, we consider the characterizations of $\{1,3^{\mathrm{m}}\}$-, $\{1,2,3^{\mathrm{m}}\}$-, $\{1,4^{\mathrm{m}}\}$- and $\{1,2,4^{\mathrm{m}}\}$-inverses in this section. Before starting, an auxiliary lemma is given as follows.

    Lemma 4.1. Let $A\in\mathbb{C}^{n\times s}$ and $B\in\mathbb{C}^{t\times n}$. Then,

    $$(P_{R(A),N(B)})^{\sim}=P_{R(B^{\sim}),N(A^{\sim})}.$$

    Proof. Write $Q=(P_{R(A),N(B)})^{\sim}=GP_{N(B)^{\perp},R(A)^{\perp}}G$. Then, $Q^2=Q$,

    $$R(Q)=G(N(B)^{\perp})=R(GB^*)=R(B^{\sim}),\qquad (N(Q))^{\perp}=\big(N(P_{N(B)^{\perp},R(A)^{\perp}}G)\big)^{\perp}=R(GP_{R(A),N(B)})=R(GA)=R((A^{\sim})^*),$$

    which implies that $N(Q)=\big(R((A^{\sim})^*)\big)^{\perp}=N(A^{\sim})$. Thus, $Q=P_{R(B^{\sim}),N(A^{\sim})}$.

    In the following theorems, we prove the equivalence of the existence of {1,3m}- and {1,2,3m}-inverses, and show some of their characterizations.

    Theorem 4.2. Let $A\in\mathbb{C}^{m\times n}$. Then, there exists $X\in A\{1,3^{\mathrm{m}}\}$ if and only if there exists $Y\in A\{1,2,3^{\mathrm{m}}\}$.

    Proof. The 'if' part is obvious. Conversely, if there exists $X\in A\{1,3^{\mathrm{m}}\}$, then

    $$A=AXA=(AX)^{\sim}A=X^{\sim}A^{\sim}A,$$

    which implies that $\operatorname{rank}(A)=\operatorname{rank}(A^{\sim}A)$. Using [2, Theorem 2], we have that there exists $Y\in A\{1,2,3^{\mathrm{m}}\}$.

    Theorem 4.3. Let $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X\in A\{1,3^{\mathrm{m}}\}$;

    (2) $A^{\sim}AX=A^{\sim}$;

    (3) $AX=P_{R(A),N(A^{\sim})}$.

    In this case,

    $$A\{1,3^{\mathrm{m}}\}=\{A^{(1,3^{\mathrm{m}})}+(I_n-A^{(1,3^{\mathrm{m}})}A)Y\mid Y\in\mathbb{C}^{n\times m}\}, \tag{4.2}$$

    where $A^{(1,3^{\mathrm{m}})}\in A\{1,3^{\mathrm{m}}\}$ is fixed but arbitrary.

    Proof. (1) $\Rightarrow$ (2). Since $X\in A\{1,3^{\mathrm{m}}\}$, it follows that $A^{\sim}AX=A^{\sim}(AX)^{\sim}=(AXA)^{\sim}=A^{\sim}$.

    (2) $\Rightarrow$ (3). Since $(AX)^{\sim}A=A$ from $A^{\sim}AX=A^{\sim}$, we have that $AX=(AX)^{\sim}AX$, implying that $(AX)^{\sim}=AX$. Thus, $AX=(AX)^{\sim}AX=(AX)^2$, that is, $AX$ is a projector. Again, by $(AX)^{\sim}=AX$, we have that $AXA=A$, which, together with $A^{\sim}AX=A^{\sim}$, shows that $R(AX)=R(A)$ and $N(AX)=N(A^{\sim})$. Hence, $AX=P_{R(A),N(A^{\sim})}$.

    (3) $\Rightarrow$ (1). Clearly, $AXA=P_{R(A),N(A^{\sim})}A=A$. Applying Lemma 4.1 to $AX=P_{R(A),N(A^{\sim})}$, we see that $(AX)^{\sim}=P_{R(A),N(A^{\sim})}=AX$.

    In this case, we have that $A\{1,3^{\mathrm{m}}\}=\{Z\in\mathbb{C}^{n\times m}\mid AZ=AA^{(1,3^{\mathrm{m}})}\}$, where $A^{(1,3^{\mathrm{m}})}$ is a fixed but arbitrary $\{1,3^{\mathrm{m}}\}$-inverse of $A$. Thus, applying Lemma 3.5 to $AZ=AA^{(1,3^{\mathrm{m}})}$, we have (4.2) directly.

    Theorem 4.4. Let $A\in\mathbb{C}^{m\times n}_r$ with $\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)>0$, and let a full-rank factorization of $A$ be $A=BC$, where $B\in\mathbb{C}^{m\times r}_r$ and $C\in\mathbb{C}^{r\times n}_r$. Then,

    $$A\{1,2,3^{\mathrm{m}}\}=\{C_R^{-1}(B^{\sim}B)^{-1}B^{\sim}\mid C_R^{-1}\ \text{is an arbitrary right inverse of}\ C\}.$$

    Proof. We can easily verify that $C_R^{-1}(B^{\sim}B)^{-1}B^{\sim}\in A\{1,2,3^{\mathrm{m}}\}$. Conversely, let $H\in A\{1,2,3^{\mathrm{m}}\}$. Using the fact that

    $$A\{1,2\}=\{C_R^{-1}B_L^{-1}\mid B_L^{-1}\ \text{is an arbitrary left inverse of}\ B,\ C_R^{-1}\ \text{is an arbitrary right inverse of}\ C\},$$

    we have that $H=C_R^{-1}B_L^{-1}$ for some $B_L^{-1}\in\mathbb{C}^{r\times m}_r$ and $C_R^{-1}\in\mathbb{C}^{n\times r}_r$. Moreover, it follows from $H\in A\{3^{\mathrm{m}}\}$ that

    $$(AH)^{\sim}=AH\ \Longrightarrow\ (BB_L^{-1})^{\sim}=BB_L^{-1}\ \Longrightarrow\ B_L^{-1}=(B^{\sim}B)^{-1}B^{\sim}.$$

    Hence, every $H\in A\{1,2,3^{\mathrm{m}}\}$ must be of the form $C_R^{-1}(B^{\sim}B)^{-1}B^{\sim}$. This completes the proof.

    Using the obvious fact that $X\in A\{1,3^{\mathrm{m}}\}$ if and only if $X^{\sim}\in A^{\sim}\{1,4^{\mathrm{m}}\}$, where $A\in\mathbb{C}^{m\times n}$, we have the following results, which show that $\{1,4^{\mathrm{m}}\}$- and $\{1,2,4^{\mathrm{m}}\}$-inverses have properties similar to those of $\{1,3^{\mathrm{m}}\}$- and $\{1,2,3^{\mathrm{m}}\}$-inverses.

    Theorem 4.5. Let $A\in\mathbb{C}^{m\times n}$. Then, there exists $X\in A\{1,4^{\mathrm{m}}\}$ if and only if there exists $Y\in A\{1,2,4^{\mathrm{m}}\}$.

    Theorem 4.6. Let $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X\in A\{1,4^{\mathrm{m}}\}$;

    (2) $XAA^{\sim}=A^{\sim}$;

    (3) $XA=P_{R(A^{\sim}),N(A)}$.

    In this case,

    $$A\{1,4^{\mathrm{m}}\}=\{A^{(1,4^{\mathrm{m}})}+Z(I_m-AA^{(1,4^{\mathrm{m}})})\mid Z\in\mathbb{C}^{n\times m}\},$$

    where $A^{(1,4^{\mathrm{m}})}\in A\{1,4^{\mathrm{m}}\}$ is fixed but arbitrary.

    Theorem 4.7. Let $A\in\mathbb{C}^{m\times n}_r$ with $\operatorname{rank}(AA^{\sim})=r>0$, and let a full-rank factorization of $A$ be $A=BC$, where $B\in\mathbb{C}^{m\times r}_r$ and $C\in\mathbb{C}^{r\times n}_r$. Then,

    $$A\{1,2,4^{\mathrm{m}}\}=\{C^{\sim}(CC^{\sim})^{-1}B_L^{-1}\mid B_L^{-1}\ \text{is an arbitrary left inverse of}\ B\}.$$

    Remark 4.8. If $A\in\mathbb{C}^{m\times n}$ and $X\in A\{1,2\}$, we derive [2, Theorems 3 and 4] directly by Theorems 4.3 and 4.6, respectively.

    Based on Lemma 3.1, we start by proposing several different existence conditions of $A^{\mathrm{m}}$ in the following theorem.

    Theorem 5.1. Let $A\in\mathbb{C}^{m\times n}$. Then, the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(A)$;

    (3) $AR(A^{\sim})\oplus N(A^{\sim})=\mathbb{C}^m$.

    Proof. (1) $\Rightarrow$ (2). If $A^{\mathrm{m}}$ exists, then $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$ by Lemma 3.1. Since $R(A)\cap N(A^{\sim})=\{0\}$ from $\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$, it follows from $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$ that

    $$\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(AA^{\sim})-\dim(R(AA^{\sim})\cap N(A^{\sim}))=\operatorname{rank}(A)-\dim(R(A)\cap N(A^{\sim}))=\operatorname{rank}(A).$$

    (2) $\Rightarrow$ (3). From

    $$\operatorname{rank}(A)=\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(AA^{\sim})-\dim(R(AA^{\sim})\cap N(A^{\sim}))$$

    and $\operatorname{rank}(A)\ge\operatorname{rank}(AA^{\sim})$, we have that $R(AA^{\sim})\cap N(A^{\sim})=\{0\}$ and $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$, which imply that $AR(A^{\sim})\oplus N(A^{\sim})=\mathbb{C}^m$.

    (3) $\Rightarrow$ (1). It follows from $AR(A^{\sim})\oplus N(A^{\sim})=\mathbb{C}^m$ that $R(AA^{\sim})\cap N(A^{\sim})=\{0\}$ and $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$. Thus, $R(A)\cap N(A^{\sim})=\{0\}$, that is, $\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Hence, $A^{\mathrm{m}}$ exists by Lemma 3.1.

    Subsequently, if $A^{\mathrm{m}}$ exists for $A\in\mathbb{C}^{m\times n}$, applying Lemma 3.6 to (3.1) in Remark 3.4, we directly obtain a new expression of the Minkowski inverse, $A^{\mathrm{m}}=(A^{\sim}A)^{\#}A^{\sim}=A^{\sim}(AA^{\sim})^{\#}$, and

    $$\operatorname{Ind}(AA^{\sim})=\operatorname{Ind}(A^{\sim}A)=1. \tag{5.1}$$

    However, for a matrix $A\in\mathbb{C}^{m\times n}$ satisfying (5.1), $A^{\mathrm{m}}$ does not necessarily exist, as will be shown in the following example.

    Example 5.2. Let

    $$A=\begin{pmatrix}1&0&1&1\\0&0&1&0\\1&0&1&1\\0&0&0&0\\0&0&0&0\end{pmatrix}.$$

    It can be verified that $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}((AA^{\sim})^2)=\operatorname{rank}(AA^{\sim})=\operatorname{rank}((A^{\sim}A)^2)=\operatorname{rank}(A^{\sim}A)=1$ and $\operatorname{rank}(A)=2$. Obviously, $\operatorname{Ind}(AA^{\sim})=\operatorname{Ind}(A^{\sim}A)=1$, but $\operatorname{rank}(A^{\sim}AA^{\sim})\ne\operatorname{rank}(A)$, implying that $A^{\mathrm{m}}$ does not exist.
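    The rank and index claims of Example 5.2 are easy to confirm numerically; the sketch below (ours) does so with the helper `mink_adjoint` defined earlier.

```python
import numpy as np
from numpy.linalg import matrix_rank as rk
# assumes mink_adjoint from the earlier sketch is available

A = np.array([[1., 0., 1., 1.],
              [0., 0., 1., 0.],
              [1., 0., 1., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
At = mink_adjoint(A)

print(rk(At @ A @ At), rk(A @ At), rk((A @ At) @ (A @ At)),
      rk(At @ A), rk((At @ A) @ (At @ A)), rk(A))
# expected: 1 1 1 1 1 2, so Ind(A A~) = Ind(A~ A) = 1,
# while rank(A~ A A~) != rank(A); hence A^m does not exist (Lemma 3.1 / Theorem 5.1)
```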

    In the next theorem, we present some necessary and sufficient conditions for the converse implication.

    Theorem 5.3. Let $A\in\mathbb{C}^{m\times n}$. Then, the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) $\operatorname{Ind}(A^{\sim}A)=1$ and $N(A^{\sim}A)\subseteq N(A)$;

    (3) $\operatorname{Ind}(AA^{\sim})=1$ and $R(A)\subseteq R(AA^{\sim})$.

    Proof. (1) $\Leftrightarrow$ (2). The 'only if' part is obvious by Lemmas 3.1 and 3.6. Conversely, since $\operatorname{rank}(A)=\operatorname{rank}(A^{\sim}A)$ from $N(A^{\sim}A)\subseteq N(A)$, it follows from $\operatorname{Ind}(A^{\sim}A)=1$ that $R(A^{\sim})\cap N(A)=R(A^{\sim}A)\cap N(A^{\sim}A)=\{0\}$, which implies that $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$. Hence, $A^{\mathrm{m}}$ exists directly by Lemma 3.1.

    (1) $\Leftrightarrow$ (3). Its proof is similar to that of (1) $\Leftrightarrow$ (2).

    As we all know, Moore [36], Penrose [17] and Desoer and Whalen [37] defined the Moore-Penrose inverse from different perspectives, respectively. Next, we review these definitions in the following lemma and extend this result to the Minkowski inverse.

    Lemma 5.4 (Desoer-Whalen's and Moore's definitions, [36,37]). Let $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X=A^{\dagger}$;

    (2) $XAa=a$ for $a\in R(A^*)$, and $Xb=0$ for $b\in N(A^*)$;

    (3) $AX=P_{R(A)}$ and $XA=P_{R(X)}$.

    There is an interesting example showing that, for some matrices $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$, $X$ need not equal $A^{\mathrm{m}}$ even though $AX=P_{R(A),N(A^{\sim})}$ and $XA=P_{R(X),N(A)}$.

    Example 5.5. Let us consider the matrices

    A=(1110101010110010000000000),X=(00.20.40000.40.2001010000.60.20000.20.400).

    By calculation, we have that $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(A)=3$ and

    Am=(0120000100101000110001200).

    Evidently, $X\ne A^{\mathrm{m}}$. However, we can check that $X\in A\{1,2\}$ and $N(X)=N(A^{\sim})$, which imply that $AX=P_{R(A),N(A^{\sim})}$ and $XA=P_{R(X),N(A)}$.

    Theorem 5.6. Let $A\in\mathbb{C}^{m\times n}$ and $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X=A^{\mathrm{m}}$;

    (2) $XAa=a$ for $a\in R(A^{\sim})$, and $Xb=0$ for $b\in N(A^{\sim})$;

    (3) $AX=P_{R(A),N(A^{\sim})}$, $XA=P_{R(X),N(A)}$ and $R(X)\subseteq R(A^{\sim})$.

    Proof. (1) $\Rightarrow$ (2). It is obvious by Lemma 3.3.

    (2) $\Rightarrow$ (3). It follows from $XAa=a$ for $a\in R(A^{\sim})$ that $XAA^{\sim}=A^{\sim}$, which shows that $\operatorname{rank}(A)\le\operatorname{rank}(X)$ and $R(A^{\sim})\subseteq R(X)$. And, from $Xb=0$ for $b\in N(A^{\sim})$, we have that $N(A^{\sim})\subseteq N(X)$, implying that $\operatorname{rank}(X)\le\operatorname{rank}(A)$. Thus, $\operatorname{rank}(X)=\operatorname{rank}(A)$, $R(X)=R(A^{\sim})$ and $N(X)=N(A^{\sim})$. Hence, again, by $XAa=a$ for $a\in R(A^{\sim})=R(X)$, we have that $X\in A\{1,2\}$, which implies that item (3) holds.

    (3) $\Rightarrow$ (1). Clearly, $AXA=P_{R(A),N(A^{\sim})}A=A$ and $XAX=P_{R(X),N(A)}X=X$, i.e., $X\in A\{1,2\}$. Then, from $AX=P_{R(A),N(A^{\sim})}$ and $R(X)\subseteq R(A^{\sim})$, we have that $N(X)=N(A^{\sim})$ and $R(X)=R(A^{\sim})$. Hence, in view of (3.1) in Remark 3.4, we see that $X=A^{\mathrm{m}}$.

    A classic characterization of the Moore-Penrose inverse proposed by Bjerhammar [38,39] is extended to the Minkowski inverse in the following theorem.

    Theorem 5.7. Let $A\in\mathbb{C}^{m\times n}$ with $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(A)$, and let $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X=A^{\mathrm{m}}$;

    (2) There exist $B\in\mathbb{C}^{m\times m}$ and $C\in\mathbb{C}^{n\times n}$ such that $AXA=A$, $X=A^{\sim}B$ and $X=CA^{\sim}$.

    Moreover,

    $$B=(A^{\sim})^{(1)}A^{\mathrm{m}}+(I_m-(A^{\sim})^{(1)}A^{\sim})Y,\qquad C=A^{\mathrm{m}}(A^{\sim})^{(1)}+Z(I_n-A^{\sim}(A^{\sim})^{(1)}),$$

    where $Y\in\mathbb{C}^{m\times m}$ and $Z\in\mathbb{C}^{n\times n}$ are arbitrary and $(A^{\sim})^{(1)}\in A^{\sim}\{1\}$.

    Proof. It is easily obtained based on Remark 3.4 and Lemma 3.5.

    Corollary 5.8. Let $A\in\mathbb{C}^{m\times n}$ with $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(A)$, and let $X\in\mathbb{C}^{n\times m}$. Then, the following statements are equivalent:

    (1) $X=A^{\mathrm{m}}$;

    (2) There exists $D\in\mathbb{C}^{m\times n}$ such that $AXA=A$ and $X=A^{\sim}DA^{\sim}$.

    In this case,

    $$D=(A^{\sim})^{(1)}A^{\mathrm{m}}(A^{\sim})^{(1)}+(I_m-(A^{\sim})^{(1)}A^{\sim})Y+Z(I_n-A^{\sim}(A^{\sim})^{(1)}),$$

    where $Y,Z\in\mathbb{C}^{m\times n}$ are arbitrary and $(A^{\sim})^{(1)}\in A^{\sim}\{1\}$.

    Proof. It is a direct corollary of Theorem 5.7.

    As it has been stated in Section 1, a great deal of mathematical effort [18,26,27] has been devoted to the study of the Moore-Penrose inverse in a ring with involution. It is observed that $\mathbb{C}^{m\times n}$ is not a ring, or even a semigroup, under matrix multiplication (unless $m=n$). However, we note two interesting facts. One is that an involution [26] $a\mapsto a^{*}$ in a ring $\mathcal{R}$ is a map from $\mathcal{R}$ to $\mathcal{R}$ such that $(a^{*})^{*}=a$, $(a+b)^{*}=a^{*}+b^{*}$ and $(ab)^{*}=b^{*}a^{*}$ for all $a,b\in\mathcal{R}$; the other is that the Minkowski adjoint $A^{\sim}$ has similar properties, that is, $(A^{\sim})^{\sim}=A$, $(A+C)^{\sim}=A^{\sim}+C^{\sim}$ and $(AB)^{\sim}=B^{\sim}A^{\sim}$, where $A,C\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times l}$. Based on the above considerations, the purpose of this section is to extend some characterizations of the Moore-Penrose inverse in rings, mainly those mentioned in [26,27], to the Minkowski inverse. Inspired by [26, Theorem 3.12 and Corollary 3.17], we give the following two results in the first part of this section.

    Theorem 6.1. Let $A\in\mathbb{C}^{m\times n}$. Then, the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) There exists $X\in\mathbb{C}^{m\times m}$ such that $A=XAA^{\sim}A$;

    (3) There exists $Y\in\mathbb{C}^{n\times n}$ such that $A=AA^{\sim}AY$.

    In this case, $A^{\mathrm{m}}=(XA)^{\sim}=(AY)^{\sim}$.

    Proof. Note that there exists XCm×m such that A=XAAA, which is equivalent to N(AAA)N(A). This assertion is also equivalent to rank(A)=rank(AAA). Then, the equivalence of (1) and (2) is obvious by the item (2) in Theorem 5.1. And, the proof of the equivalence of (1) and (3) can be completed by using a method analogous to that used above.

    Moreover, if Am exists, we first claim that (XA)A{1,3m,4m}. In fact, using A=XAAA, we infer that

    (A(XA))=XAA=XA(XAAA)=XAAAAX=A(XA),A(XA)A=(A(XA))A=XAAA=A,((XA)A)=(AXA)=((XAAA)XA)=(AAA(X)2A)=(AXAAAA(X)2A)=(AXXAAAAAA(X)2A)=(A(X)2(AA)3(X)2A)=A(X)2(AA)3(X)2A=(XA)A,

    which imply that (XA)A{1,3m,4m}. Finally, according to (4.1), we obtain that

    Am=(XA)A(XA)=((XA)A)(XA)=AXAAX=(A(XA)A)X=(XA).

    Using the same method as in the above proof, we can carry out the proof of (AY)A{1,3m,4m} and Am=(AY).

    A well-known result is given directly in the following lemma, which will be useful in the proof of the next theorem.

    Lemma 6.2. Let $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times m}$. Then, $I_m-AB$ is nonsingular if and only if $I_n-BA$ is nonsingular, in which case, $(I_m-AB)^{-1}=I_m+A(I_n-BA)^{-1}B$.

    Theorem 6.3. Let $A\in\mathbb{C}^{m\times n}$ and $A^{(1)}\in A\{1\}$. Then, the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) $A^{\sim}A+I_n-A^{(1)}A$ is nonsingular;

    (3) $AA^{\sim}+I_m-AA^{(1)}$ is nonsingular.

    In this case,

    $$A^{\mathrm{m}}=\big(A(A^{\sim}A+I_n-A^{(1)}A)^{-1}\big)^{\sim}=\big((AA^{\sim}+I_m-AA^{(1)})^{-1}A\big)^{\sim}.$$

    Proof. Denote B=AA+InA(1)A and C=AA+ImAA(1).

    (1) (2). If Am exists, using items (1) and (2) in Theorem 6.1, we have that A=XAAA for some XCm×m. It can be easily verified that

    (A(1)XA+InA(1)A)(A(1)AAA+InA(1)A)=In,

    which shows the nonsingularity of D:=A(1)AAA+InA(1)A. And, D can be rewritten as D=InA(1)A(InAA). Thus, by Lemma 6.2, it is easy to see that B is nonsingular.

    (2) (1). Since B is nonsingular, from AB=AAA, we have that A=AAAB1. Therefore, Am exists by items (1) and (3) in Theorem 6.1.

    (3) (2). Since B and C can be rewritten as B=In(A(1)A)A and C=ImA(A(1)A), from Lemma 6.2, we have the equivalence of (3) and (2) immediately.

    In this case, from items (1) and (2) in Lemma 3.3, we infer that

    BAm=(AA+InA(1)A)Am=AAAm+AmA(A)(1)Am=A,

    which, together with the item (2), gives Am=(AB1). Analogously, we can derive that Am=(C1A). This completes the proof.
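    A numerical check of the representation in Theorem 6.3 can be written as follows (our sketch; it uses the Moore-Penrose inverse as the $\{1\}$-inverse $A^{(1)}$ and a random matrix, for which $A^{\mathrm{m}}$ exists with probability one), reusing `mink_adjoint` and `minkowski_inverse` from the earlier sketches.

```python
import numpy as np
# assumes mink_adjoint and minkowski_inverse from the earlier sketches

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # rank 3
m, n = A.shape
A1 = np.linalg.pinv(A)                                          # a {1}-inverse of A
At = mink_adjoint(A)

B = At @ A + np.eye(n) - A1 @ A          # Theorem 6.3 (2)
C = A @ At + np.eye(m) - A @ A1          # Theorem 6.3 (3)
X1 = mink_adjoint(A @ np.linalg.inv(B))
X2 = mink_adjoint(np.linalg.inv(C) @ A)

assert np.allclose(X1, X2)
assert np.allclose(X1, minkowski_inverse(A))   # agrees with Lemma 3.2's formula
```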

    The Sylvester matrix equation [40] has numerous applications in neural networks, robust control, graph theory and other areas of system and control theory. Motivated by [27, Theorem 2.3], in the following theorem we use the solvability of a certain Sylvester matrix equation to characterize the existence of the Minkowski inverse, and we apply its solutions to represent the Minkowski inverse.

    Theorem 6.4. Let $A\in\mathbb{C}^{m\times n}$. Then, the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$, and there exist $X\in\mathbb{C}^{m\times m}$ and a projector $Y\in\mathbb{C}^{m\times m}$ such that

    $$XAA^{\sim}-YX=I_m, \tag{6.1}$$

    $AA^{\sim}X=XAA^{\sim}$ and $AA^{\sim}Y=0$.

    In this case,

    $$A^{\mathrm{m}}=A^{\sim}X. \tag{6.2}$$

    Proof. (1) (2). If Am exists, it is clear by Lemma 3.1 to see that rank(AA)=rank(A). Let Q=AA+ImAAm. By items (1) and (2) in Lemma 3.3, it is easy to verify that Q((A)mAm+ImAAm)=Im, showing the nonsingularity of Q. And, AAmQ=QAAm=AA. Denote Y=ImAAm. Clearly, Y2=Y and AAY=YAA=0. Let X=AAmQ1Y. Hence,

    XAA=(AAmQ1Y)AAmQ=AAmQ1QAAm=AAm,AAX=QAAm(AAmQ1Y)=QAAmQ1=AAmQQ1=AAm,YX=Y(AAmQ1Y)=YAAmQ1+Y=Y.

    Evidently, XAA=AAX and XAAYX=Im.

    (2) (1). Premultiplying (6.1) by AA, we have that AAXAAAAYX=AA, which, together with AAX=XAA and AAY=0, yields that AAAAX=AA if and only if R(AAXIm)N(AA). Since N(AA)=N(A) from rank(AA)=rank(A), we get that A=AAAX, i.e.,

    A=XAAA. (6.3)

    Consequently, Am exists according to items (1) and (2) in Theorem 6.1.

    Finally, if Am exists, applying Theorem 6.1 to (6.3), we have (6.2) directly.

    Remark 6.5. Let $A\in\mathbb{C}^{m\times n}$. Using the easy fact that $A^{\mathrm{m}}$ exists if and only if $(A^{\sim})^{\mathrm{m}}$ exists, we conclude by Theorem 6.4 that the following statements are equivalent:

    (1) $A^{\mathrm{m}}$ exists;

    (2) $\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$, and there exist $X\in\mathbb{C}^{n\times n}$ and a projector $Y\in\mathbb{C}^{n\times n}$ such that $XA^{\sim}A-YX=I_n$, $A^{\sim}AX=XA^{\sim}A$ and $A^{\sim}AY=0$.

    In this case, $A^{\mathrm{m}}=(AX)^{\sim}$.

    It is well known that the Hartwig-Spindelböck decomposition is an effective and basic tool for finding representations of various generalized inverses and matrix classes (see [34,41]). A new condition for the existence of the Minkowski inverse is given by the Hartwig-Spindelböck decomposition in this section. Under this condition, we present a new representation of the Minkowski inverse. We first introduce the following notations used in the section.

    For $A\in\mathbb{C}^{n\times n}$ given by (3.2) in Lemma 3.8, let

    $$U^*GU=\begin{pmatrix}G_1&G_2\\G_3&G_4\end{pmatrix},$$

    where $G_1\in\mathbb{C}^{r\times r}$, $G_2\in\mathbb{C}^{r\times(n-r)}$, $G_3\in\mathbb{C}^{(n-r)\times r}$ and $G_4\in\mathbb{C}^{(n-r)\times(n-r)}$, and let

    $$\Delta=\begin{pmatrix}K&L\end{pmatrix}U^*GU\begin{pmatrix}K&L\end{pmatrix}^*.$$

    Theorem 7.1. Let $A$ be given by (3.2). Then, the following holds:

    (1) $\operatorname{rank}(A)=\operatorname{rank}(AA^{\sim})$ if and only if $\Delta$ is nonsingular.

    (2) $\operatorname{rank}(A)=\operatorname{rank}(A^{\sim}A)$ if and only if $G_1$ is nonsingular.

    (3) If $\Delta$ and $G_1$ are nonsingular, then

    $$A^{\mathrm{m}}=GU\begin{pmatrix}K^*(G_1\Sigma\Delta)^{-1}&0\\L^*(G_1\Sigma\Delta)^{-1}&0\end{pmatrix}U^*G \tag{7.1}$$
    $$\phantom{A^{\mathrm{m}}}=U\begin{pmatrix}(G_1K^*+G_2L^*)(\Sigma\Delta)^{-1}&(G_1K^*+G_2L^*)(G_1\Sigma\Delta)^{-1}G_2\\(G_3K^*+G_4L^*)(\Sigma\Delta)^{-1}&(G_3K^*+G_4L^*)(G_1\Sigma\Delta)^{-1}G_2\end{pmatrix}U^*. \tag{7.2}$$

    Proof. (1). Using the Hartwig-Spindelböck decomposition, we have

    $$\operatorname{rank}(A)=\operatorname{rank}(AA^{\sim})\iff \operatorname{rank}(A)=\operatorname{rank}\left(\begin{pmatrix}\Sigma K&\Sigma L\\0&0\end{pmatrix}U^*GU\begin{pmatrix}(\Sigma K)^*&0\\(\Sigma L)^*&0\end{pmatrix}\right)\iff \operatorname{rank}(A)=\operatorname{rank}\big((K\ L)\,U^*GU\,(K\ L)^*\big),$$

    which is equivalent to stating that $\Delta$ is nonsingular.

    (2). Since $(\Sigma K\ \ \Sigma L)$ is of full row rank by (3.3), again using the Hartwig-Spindelböck decomposition we derive that

    $$\operatorname{rank}(A)=\operatorname{rank}(A^{\sim}A)\iff \operatorname{rank}(A)=\operatorname{rank}\left(\begin{pmatrix}(\Sigma K)^*&0\\(\Sigma L)^*&0\end{pmatrix}\begin{pmatrix}G_1&G_2\\G_3&G_4\end{pmatrix}\begin{pmatrix}\Sigma K&\Sigma L\\0&0\end{pmatrix}\right)\iff \operatorname{rank}(A)=\operatorname{rank}\big((\Sigma K\ \ \Sigma L)^*G_1(\Sigma K\ \ \Sigma L)\big)\iff \operatorname{rank}(A)=\operatorname{rank}(G_1),$$

    which is equivalent to stating that $G_1$ is nonsingular.

    (3). Note that $A$ given by (3.2) can be rewritten as

    $$A=U\begin{pmatrix}\Sigma\\0\end{pmatrix}(K\ \ L)U^*, \tag{7.3}$$

    where $B:=U\begin{pmatrix}\Sigma\\0\end{pmatrix}$ and $C:=(K\ \ L)U^*$ are of full column rank and full row rank, respectively. If $\Delta$ and $G_1$ are nonsingular, by items (1) and (2) and Lemma 3.1, we see that $A^{\mathrm{m}}$ exists. Therefore, applying Lemma 3.2 to (7.3) yields that

    $$\begin{aligned}A^{\mathrm{m}}&=C^{\sim}(CC^{\sim})^{-1}(B^{\sim}B)^{-1}B^{\sim}\\&=GU(K\ L)^*G\big((K\ L)U^*GU(K\ L)^*G\big)^{-1}\Big(G(\Sigma\ \ 0)U^*GU\begin{pmatrix}\Sigma\\0\end{pmatrix}\Big)^{-1}G(\Sigma\ \ 0)U^*G\\&=GU(K\ L)^*\Delta^{-1}(\Sigma G_1\Sigma)^{-1}(\Sigma\ \ 0)U^*G\\&=GU\begin{pmatrix}K^*(G_1\Sigma\Delta)^{-1}&0\\L^*(G_1\Sigma\Delta)^{-1}&0\end{pmatrix}U^*G\\&=U\begin{pmatrix}G_1&G_2\\G_3&G_4\end{pmatrix}\begin{pmatrix}K^*(G_1\Sigma\Delta)^{-1}&0\\L^*(G_1\Sigma\Delta)^{-1}&0\end{pmatrix}\begin{pmatrix}G_1&G_2\\G_3&G_4\end{pmatrix}U^*\\&=U\begin{pmatrix}(G_1K^*+G_2L^*)(\Sigma\Delta)^{-1}&(G_1K^*+G_2L^*)(G_1\Sigma\Delta)^{-1}G_2\\(G_3K^*+G_4L^*)(\Sigma\Delta)^{-1}&(G_3K^*+G_4L^*)(G_1\Sigma\Delta)^{-1}G_2\end{pmatrix}U^*,\end{aligned}$$

    which completes the proof of this theorem.

    Example 7.2. In order to illustrate Theorem 7.1, let us consider the matrix A given in Example 5.5. Then, the Hartwig-Spindelböck decomposition of A is

    $$A=U\begin{pmatrix}\Sigma K&\Sigma L\\0&0\end{pmatrix}U^*,$$

    where

    U=(0.730560.271370.62661000.274290.956980.094654000.625340.102720.77356000000100010),Σ=(2.6350001.26850000.66897),K=(0.718990.423930.166520.223190.541740.0241880.403830.111420.86962),L=(0.514570.104090.294910.754420.219660.14149).

    And, we have that

    G1=(0.067430.39650.915560.39650.852720.340090.915560.340090.21471),Δ=(0.470440.30350.226060.30350.826060.129560.226060.129560.9035).

    Thus, it is easy to check that $\operatorname{rank}(G_1)=\operatorname{rank}(\Delta)=3$. Moreover, $A^{\mathrm{m}}$, calculated by (7.1) or (7.2), is the same as that in Example 5.5, so it is omitted.
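    A check like the one in Example 7.2 can be automated. The sketch below (ours; it uses a random square matrix rather than the specific matrix of Example 5.5, and assumes the rank conditions of Theorem 7.1 hold, which they do almost surely here) builds the Hartwig-Spindelböck decomposition from an SVD, forms $G_1$ and $\Delta$, and compares the representation (7.1) with the full-rank-factorization formula of Lemma 3.2.

```python
import numpy as np
# assumes mink_adjoint, mink_metric and minkowski_inverse from the earlier sketches

rng = np.random.default_rng(4)
n, r = 5, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank r, n x n

# Hartwig-Spindelboeck decomposition A = U [[Sigma K, Sigma L],[0, 0]] U*
W, s, Vh = np.linalg.svd(A)
U, Sigma = W, np.diag(s[:r])
KL = Vh[:r, :] @ U                        # (K  L) = V_1^* U, an r x n block
K, L = KL[:, :r], KL[:, r:]
HS = np.block([[Sigma @ K, Sigma @ L],
               [np.zeros((n - r, r)), np.zeros((n - r, n - r))]])
assert np.allclose(A, U @ HS @ U.conj().T)

G = mink_metric(n)
UGU = U.conj().T @ G @ U
G1 = UGU[:r, :r]
Delta = KL @ UGU @ KL.conj().T

# representation (7.1)
M = np.block([[K.conj().T @ np.linalg.inv(G1 @ Sigma @ Delta), np.zeros((r, n - r))],
              [L.conj().T @ np.linalg.inv(G1 @ Sigma @ Delta), np.zeros((n - r, n - r))]])
Am = G @ U @ M @ U.conj().T @ G

assert np.allclose(Am, minkowski_inverse(A))
```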

    Groß [24] considered an interesting problem regarding the characterizations of $B$ and $C$ when $X=A^{\dagger}$ is assumed to be the unique solution of (3.4) in Lemma 3.9. This issue was revisited in [42] and [43] for the Drazin inverse and the core inverse, respectively. Subsequently, we apply Theorem 7.1 to provide another characterization of the Minkowski inverse.

    Theorem 7.3. Let A be given in (3.2) with rank(AAA)=rank(A), and let XCn×n. Then, X=Am is the unique solution of the rank equation (3.4) if and only if

    B=U(B1B200)UGandC=GUTU, (7.4)

    where

    T=(J1ΣKJ1ΣLJ3ΣKJ3ΣL),B1=(ΣKΣL)[T(1)(K(G1ΣΔ)1L(G1ΣΔ)1)+(InT(1)T)Y1],B2=(ΣKΣL)(InT(1)T)Y2,

    J1Cr×r and J3C(nr)×r satisfy N(T)N((KL)), Y1Cn×r and Y2Cn×(nr) are arbitrary, and T(1)T{1}.

    Proof. We first prove the 'only if' part. If X=Am is the unique solution of (3.4), from Lemma 3.9, we have that B=AH and C=JA for some H,JCn×n. Put

    (H1H2H3H4)=UHGU,(J1J2J3J4)=UGJU,

    where H1,J1Cr×r, H2,J2Cr×(nr), H3,J3C(nr)×r and H4,J4C(nr)×(nr). Thus,

    B=AH=U(ΣKH1+ΣLH3ΣKH2+ΣLH400)UG, (7.5)
    C=JA=GU(J1ΣKJ1ΣLJ3ΣKJ3ΣL)U. (7.6)

    Note that [41,Formula (1.4)] has shown that

    A=U(KΣ10LΣ10)U. (7.7)

    Then, inserting (7.5)–(7.7) in (3.5) gives

    X=GU(J1ΣKH1+J1ΣLH3J1ΣKH2+J1ΣLH4J3ΣKH1+J3ΣLH3J3ΣKH2+J3ΣLH4)UG. (7.8)

    By a comparison of (7.1) in Theorem 7.1 with (7.8), we see that

    X=Am{J3ΣKH1+J3ΣLH3=L(G1ΣΔ)1,J3ΣKH1+J3ΣLH3=L(G1ΣΔ)1,J1ΣKH2+J1ΣLH4=0,J3ΣKH2+J3ΣLH4=0,

    which can be rewritten as

    T(H1H3)=(K(G1ΣΔ)1L(G1ΣΔ)1),T(H2H4)=(00), (7.9)

    where T=(J1ΣKJ1ΣLJ3ΣKJ3ΣL). Applying Lemma 3.5 to (7.9), we conclude that J1Cr×r and J3C(nr)×r satisfy

    TT(1)(K(G1ΣΔ)1L(G1ΣΔ)1)=(K(G1ΣΔ)1L(G1ΣΔ)1)R((K(G1ΣΔ)1L(G1ΣΔ)1))R(T)R((KL))R(T)N(T)N((KL)),

    and

    (H1H3)=T(1)(K(G1ΣΔ)1L(G1ΣΔ)1)+(InT(1)T)Y1, (7.10)
    (H2H4)=(InT(1)T)Y2, (7.11)

    where Y1Cn×r and Y2Cn×(nr) are arbitrary, and T(1)T{1}. Hence, premultiplying (7.10) and (7.11) by (ΣKΣL), from (7.5) and (7.6), we infer that (7.4) holds. Conversely, the 'if' part is easy and is therefore omitted.

    Notice that, in the proof of Theorem 7.3, the first equation in (7.9) can be replaced by

    (J1J3)(ΣKΣL)(H1H3)=(K(G1ΣΔ)1L(G1ΣΔ)1), (7.12)

    which is a second-order matrix equation. Then, by applying Lemma 3.10 to (7.12), different characterizations of B and C given by (7.4) are shown in the next theorem.

    Theorem 7.4. Let A be given in (3.2) with rank(AAA)=rank(A), and let XCn×n. Then, X=Am is the unique solution of the rank equation (3.4) if and only if

    B=AU(Q1(X11Y3)W[In(ˆC(ΣLΣL))(1)ˆC(ΣLΣL)]Z)UG, (7.13)
    C=GUˆC(ΣLΣL)U, (7.14)

    where ˆC=S(X10)P1, (ˆC(ΣLΣL))(1)(ˆC(ΣLΣL)){1}, X1Cr×r is an arbitrary nonsingular matrix, Y3C(nr)×r and ZCn×(nr) are arbitrary and P,WCr×r and Q,SCn×n are all nonsingular matrices such that

    P(Ir0)Q=(ΣKΣL),S(Ir0)W=(K(G1ΣΔ)1L(G1ΣΔ)1). (7.15)

    Proof. For convenience, we use the same notations as in the proof of Theorem 7.3. First, one can clearly see the existence of nonsingular matrices P,WCr×r and Q,SCn×n satisfying (7.15). To prove the 'only if' part, applying Lemma 3.10 to (7.12), we have that

    (J1J3)=S(X10)P1,(H1H3)=Q1(X11Y3)W, (7.16)

    where X1Cr×r is an arbitrary nonsingular matrix and Y3C(nr)×r is arbitrary. Note that the second equation in (7.9) can be rewritten as

    (J1J3)(ΣKΣL)(H2H4)=(00). (7.17)

    Then, substituting the first equation in (7.16) to (7.17), again, by Lemma 3.5, we obtain that

    (H2H4)=[In(ˆC(ΣLΣL))(1)ˆC(ΣLΣL)]Z, (7.18)

    where ˆC=S(X10)P1 and ZCn×(nr) is arbitrary. Therefore, applying (7.16) and (7.18) to (7.5) and (7.6), we infer that (7.13) and (7.14) hold. Conversely, the 'if' part is easy.

    There is also much interest in characterizing generalized inverses by means of a specific rank equation (see [32,42,44]). At the end of this section, we turn our attention to this consideration.

    Theorem 7.5. Let $A\in\mathbb{C}^{m\times n}_r$ with $\operatorname{rank}(A^{\sim}AA^{\sim})=\operatorname{rank}(A)$. Then, there exist a unique matrix $X\in\mathbb{C}^{n\times n}$ such that

    $$AX=0,\quad X^{\sim}=X,\quad X^2=X,\quad \operatorname{rank}(X)=n-r, \tag{7.19}$$

    a unique matrix $Y\in\mathbb{C}^{m\times m}$ such that

    $$YA=0,\quad Y^{\sim}=Y,\quad Y^2=Y,\quad \operatorname{rank}(Y)=m-r, \tag{7.20}$$

    and a unique matrix $Z\in\mathbb{C}^{n\times m}$ such that

    $$\operatorname{rank}\begin{pmatrix}A&I_m-Y\\I_n-X&Z\end{pmatrix}=\operatorname{rank}(A). \tag{7.21}$$

    Furthermore, $X=I_n-A^{\mathrm{m}}A$, $Y=I_m-AA^{\mathrm{m}}$ and $Z=A^{\mathrm{m}}$.

    Proof. From $AX=0$ and $X^{\sim}=X$, we have that $R(X)\subseteq N(A)$ and $R(A^{\sim})\subseteq N(X)$, which, together with $\operatorname{rank}(X)=n-r$, show that $R(X)=N(A)$ and $N(X)=R(A^{\sim})$. Hence, by $X^2=X$ and item (3) in Lemma 3.3, it follows that the unique solution of (7.19) is $X=P_{N(A),R(A^{\sim})}=I_n-A^{\mathrm{m}}A$. Analogously, we can show that $Y=I_m-AA^{\mathrm{m}}$ is the unique matrix satisfying (7.20). Next, it is clear that $R(I_m-Y)=R(A)$ and $R((I_n-X)^*)=R(A^*)$. Thus, applying Lemma 3.9, we have that $Z=(I_n-X)A^{\dagger}(I_m-Y)=A^{\mathrm{m}}AA^{\dagger}AA^{\mathrm{m}}=A^{\mathrm{m}}$ is the unique matrix satisfying (7.21).

    Zlobec [25] established an explicit form of the Moore-Penrose inverse, also known as the Zlobec formula, namely, $A^{\dagger}=A^*(A^*AA^*)^{(1)}A^*$, where $A\in\mathbb{C}^{m\times n}$ and $(A^*AA^*)^{(1)}\in(A^*AA^*)\{1\}$. In this section, we first present a more general representation of the Minkowski inverse that is similar to the Zlobec formula.

    Theorem 8.1. Let $A\in\mathbb{C}^{m\times n}$ be such that $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Then,

    $$A^{\mathrm{m}}=(A^{\sim}A)^{k}A^{\sim}\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]^{(1)}(A^{\sim}A)^{l}A^{\sim},$$

    where $k$ and $l$ are arbitrary nonnegative integers, and $\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]^{(1)}\in\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]\{1\}$.

    Proof. First, we use induction on an arbitrary positive integer $s$ to prove that $\operatorname{rank}((A^{\sim}A)^{s})=\operatorname{rank}(A)$. Clearly, $\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Suppose that $\operatorname{rank}((A^{\sim}A)^{s})=\operatorname{rank}(A)$. Since $R(A^{\sim})\cap N(A)=\{0\}$ from $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$, we infer that

    $$\operatorname{rank}((A^{\sim}A)^{s+1})=\operatorname{rank}(A^{\sim}A(A^{\sim}A)^{s})=\operatorname{rank}((A^{\sim}A)^{s})-\dim(R((A^{\sim}A)^{s})\cap N(A^{\sim}A))=\operatorname{rank}(A)-\dim(R(A^{\sim})\cap N(A))=\operatorname{rank}(A),$$

    which completes the induction. Hence, for an arbitrary nonnegative integer $k$, we have that

    $$\operatorname{rank}(A)=\operatorname{rank}((A^{\sim}A)^{k+1})\le\operatorname{rank}((A^{\sim}A)^{k}A^{\sim})\le\operatorname{rank}(A),$$

    which implies that $\operatorname{rank}((A^{\sim}A)^{k}A^{\sim})=\operatorname{rank}(A)$. Thus,

    $$\operatorname{rank}((A^{\sim}A)^{k+l+1}A^{\sim})=\operatorname{rank}((A^{\sim}A)^{k}A^{\sim})=\operatorname{rank}((A^{\sim}A)^{l}A^{\sim})=\operatorname{rank}(A),$$

    where $l$ is an arbitrary nonnegative integer. Therefore, by Lemma 3.7 and (3.1) in Remark 3.4, it follows that

    $$(A^{\sim}A)^{k}A^{\sim}\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]^{(1)}(A^{\sim}A)^{l}A^{\sim}=A^{(1,2)}_{R((A^{\sim}A)^{k}A^{\sim}),\,N((A^{\sim}A)^{l}A^{\sim})}=A^{(1,2)}_{R(A^{\sim}),N(A^{\sim})}=A^{\mathrm{m}},$$

    where $\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]^{(1)}\in\big[(A^{\sim}A)^{k+l+1}A^{\sim}\big]\{1\}$. This now completes the proof.

    Under the conditions of the hypotheses of Theorem 8.1, when k=l=0, we directly give an explicit expression of the Minkowski inverse in the following corollary. It is worth mentioning that this result can also be obtained by applying Lemma 3.7 to (3.1) in Remark 3.4.

    Corollary 8.2. Let $A\in\mathbb{C}^{m\times n}$ be such that $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Then,

    $$A^{\mathrm{m}}=A^{\sim}(A^{\sim}AA^{\sim})^{(1)}A^{\sim}, \tag{8.1}$$

    where $(A^{\sim}AA^{\sim})^{(1)}\in(A^{\sim}AA^{\sim})\{1\}$.
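    The Zlobec-type formula (8.1) is straightforward to test numerically; in the sketch below (ours), the Moore-Penrose inverse of $A^{\sim}AA^{\sim}$ serves as the required $\{1\}$-inverse, and the result is compared with the full-rank-factorization formula of Lemma 3.2.

```python
import numpy as np
# assumes mink_adjoint and minkowski_inverse from the earlier sketches

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # rank 3
At = mink_adjoint(A)

W = At @ A @ At                                  # A~ A A~
Am = At @ np.linalg.pinv(W) @ At                 # formula (8.1) with a particular {1}-inverse

assert np.allclose(Am, minkowski_inverse(A))
```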

    Another corollary of Theorem 8.1, given below, shows a different representation of the Minkowski inverse.

    Corollary 8.3. Let $A\in\mathbb{C}^{m\times n}$ be such that $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$. Then,

    $$A^{\mathrm{m}}=(A^{\sim}A)^{k}A^{\sim}\big[(AA^{\sim})^{k+1}\big]^{(1)}A\big[(A^{\sim}A)^{l+1}\big]^{(1)}(A^{\sim}A)^{l}A^{\sim},$$

    where $k$ and $l$ are arbitrary nonnegative integers, $\big[(AA^{\sim})^{k+1}\big]^{(1)}\in\big[(AA^{\sim})^{k+1}\big]\{1\}$ and $\big[(A^{\sim}A)^{l+1}\big]^{(1)}\in\big[(A^{\sim}A)^{l+1}\big]\{1\}$.

    Proof. Using Theorem 8.1 and Lemma 3.3, we have that

    Am=(AA)kA[(AA)k+l+1A](1)(AA)lA=AmA(AA)kA[(AA)k+l+1A](1)(AA)lAAAm=AmA(AA)kA[A(AA)kA](1)A(AA)kA[(AA)k+l+1A](1)(AA)lAA[(AA)lAA](1)(AA)lAAAm=AmA(AA)kA[A(AA)kA](1)A[(AA)lAA](1)(AA)lAAAm=(AA)kA[(AA)k+1](1)A[(AA)l+1](1)(AA)lA,

    which completes the proof.

    Remark 8.4. Under the conditions of the hypotheses of Corollary 8.3, taking $k=l=0$ we immediately obtain [29, Theorem 5], that is,

    $$A^{\mathrm{m}}=A^{\sim}(AA^{\sim})^{(1)}A(A^{\sim}A)^{(1)}A^{\sim},$$

    where $(AA^{\sim})^{(1)}\in(AA^{\sim})\{1\}$ and $(A^{\sim}A)^{(1)}\in(A^{\sim}A)\{1\}$.

    This section concludes with showing the Minkowski inverse of a class of block matrices by using Corollary 8.2, which extends [25,Corollary 1] to the Minkowski inverse.

    Theorem 8.5. Let $A\in\mathbb{C}^{m\times n}_r$ be such that $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A^{\sim}A)=\operatorname{rank}(A)$ and

    $$A=\begin{pmatrix}A_1&A_2\\A_3&A_4\end{pmatrix}, \tag{8.2}$$

    where $A_1\in\mathbb{C}^{r\times r}$ is nonsingular, $A_2\in\mathbb{C}^{r\times(n-r)}$, $A_3\in\mathbb{C}^{(m-r)\times r}$ and $A_4\in\mathbb{C}^{(m-r)\times(n-r)}$. Then,

    $$A^{\mathrm{m}}=\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}\left[\begin{pmatrix}A_1\\A_3\end{pmatrix}^{\sim}A\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}\right]^{-1}\begin{pmatrix}A_1\\A_3\end{pmatrix}^{\sim}. \tag{8.3}$$

    Proof. Let

    $$T_1=\begin{pmatrix}A_1&A_2\end{pmatrix}A^{\sim}\begin{pmatrix}A_1\\A_3\end{pmatrix}. \tag{8.4}$$

    Since $A_1\in\mathbb{C}^{r\times r}$ is nonsingular, we have

    $$\operatorname{rank}(A)=\operatorname{rank}\left(\begin{pmatrix}I_r&0\\-A_3A_1^{-1}&I_{m-r}\end{pmatrix}\begin{pmatrix}A_1&A_2\\A_3&A_4\end{pmatrix}\begin{pmatrix}I_r&-A_1^{-1}A_2\\0&I_{n-r}\end{pmatrix}\right)=\operatorname{rank}\begin{pmatrix}A_1&0\\0&A_4-A_3A_1^{-1}A_2\end{pmatrix}, \tag{8.5}$$

    which, together with $\operatorname{rank}(A)=\operatorname{rank}(A_1)$, gives $A_4=A_3A_1^{-1}A_2$. Then, it can be easily verified that

    $$T_1=\begin{pmatrix}A_1&A_2\end{pmatrix}\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}(A_1^{\sim})^{-1}\begin{pmatrix}A_1\\A_3\end{pmatrix}^{\sim}\begin{pmatrix}A_1\\A_3\end{pmatrix}. \tag{8.6}$$

    It is sufficient to prove that $T_1$ is nonsingular, i.e., that $\operatorname{rank}(T_1)=r$. In fact, since $N(A)=N\big(\begin{pmatrix}A_1&A_2\end{pmatrix}\big)$ by the nonsingularity of $A_1$, we have that $R(A^{\sim})=R\big(\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}\big)$. Then, since $R(A^{\sim})\cap N(A)=\{0\}$ from $\operatorname{rank}(AA^{\sim})=\operatorname{rank}(A)$, we infer that

    $$\operatorname{rank}\Big(\begin{pmatrix}A_1&A_2\end{pmatrix}\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}\Big)=\operatorname{rank}\big(\begin{pmatrix}A_1&A_2\end{pmatrix}^{\sim}\big)-\dim\big(R(A^{\sim})\cap N(A)\big)=\operatorname{rank}(A_1)=r. \tag{8.7}$$

    Analogously, we can obtain that

    $$\operatorname{rank}\Big(\begin{pmatrix}A_1\\A_3\end{pmatrix}^{\sim}\begin{pmatrix}A_1\\A_3\end{pmatrix}\Big)=r. \tag{8.8}$$

    Considering (8.6)–(8.8), it is clear that $\operatorname{rank}(T_1)=r$. Then, using item (2) in Theorem 5.1, we get that $\operatorname{rank}(T_1)=\operatorname{rank}(A^{\sim}AA^{\sim})$. Denote

    $$\begin{pmatrix}B_1&B_2\\B_3&B_4\end{pmatrix}=A^{\sim}AA^{\sim},$$

    where $B_1\in\mathbb{C}^{r\times r}$, $B_2\in\mathbb{C}^{r\times(m-r)}$, $B_3\in\mathbb{C}^{(n-r)\times r}$ and $B_4\in\mathbb{C}^{(n-r)\times(m-r)}$. In view of (8.4), we see that $B_1=T_1^{\sim}$. Thus, by the same method as in (8.5), we have that $B_4=B_3(T_1^{\sim})^{-1}B_2$. Then, it is easy to prove that

    $$\begin{pmatrix}(T_1^{\sim})^{-1}&0\\0&0\end{pmatrix}\in(A^{\sim}AA^{\sim})\{1\}. \tag{8.9}$$

    Substituting (8.9) and (8.2) into (8.1) in Corollary 8.2, we have (8.3) by direct calculation. This completes the proof.
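    For a concrete check of (8.3), the sketch below (ours) builds a random block matrix with $A_1$ nonsingular and $A_4=A_3A_1^{-1}A_2$, so that $\operatorname{rank}(A)=r$, and compares the block formula with the full-rank-factorization formula of Lemma 3.2 (the rank conditions of Theorem 8.5 hold almost surely for such a random choice).

```python
import numpy as np
# assumes mink_adjoint and minkowski_inverse from the earlier sketches

rng = np.random.default_rng(6)
r, m, n = 2, 5, 4
A1 = rng.standard_normal((r, r))
A2 = rng.standard_normal((r, n - r))
A3 = rng.standard_normal((m - r, r))
A4 = A3 @ np.linalg.inv(A1) @ A2                 # forces rank(A) = r
A = np.block([[A1, A2], [A3, A4]])

R = np.hstack([A1, A2])                          # (A1  A2), the first r rows of A
S = np.vstack([A1, A3])                          # (A1; A3), the first r columns of A
Rt, St = mink_adjoint(R), mink_adjoint(S)

Am = Rt @ np.linalg.inv(St @ A @ Rt) @ St        # formula (8.3)
assert np.allclose(Am, minkowski_inverse(A))
```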

    This paper shows some different characterizations and representations of the Minkowski inverse in Minkowski space, mainly by extending some known results of the Moore-Penrose inverse to the Minkowski inverse. In addition, we are convinced that the study of generalized inverses in Minkowski space will maintain its popularity for years to come. Several possible directions for further research can be described as follows:

    (1) It is difficult but interesting to explore the representation of the Minkowski inverse by using the core-EP decomposition [45].

    (2) A function $f$ from $\mathbb{C}^{m\times n}$ into $\mathbb{C}^{n\times m}$, written as $f(A)=A^{s}$ for $A\in\mathbb{C}^{m\times n}$, is involutory if $(A^{s})^{s}=A$ and $(BC)^{s}=C^{s}B^{s}$, where $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{m\times p}$ and $C\in\mathbb{C}^{p\times n}$. Wong [31] introduced the Moore-Penrose inverse of $A$ relative to an involutory function $f$ as the matrix $X$ satisfying $AXA=A$, $XAX=X$, $(AX)^{s}=AX$ and $(XA)^{s}=XA$. Clearly, the Minkowski adjoint $A^{\sim}$ and the Minkowski inverse $A^{\mathrm{m}}$ are particular cases of an involutory function of $A$ and of the Moore-Penrose inverse of $A$ relative to $f$, respectively. A worthwhile research direction is to generalize the results on the Minkowski inverse to the Moore-Penrose inverse of $A$ relative to an involutory function.

    (3) As we know, the study of the Minkowski inverse originates from the simplification of polarized light problems [1]. It is a meaningful research topic to find out new applications of the Minkowski inverse in the study on the polarization of light by using its existing mathematical results.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China under grant number 11961076. The authors would like to thank the anonymous referees for their precious suggestions and comments, which improved the presentation of the paper distinctly.

    All authors declare no conflict of interest that may affect the publication of this paper.



    [1] M. Renardy, Singular value decomposition in Minkowski space, Linear Algebra Appl., 236 (1996), 53–58. http://dx.doi.org/10.1016/0024-3795(94)00124-3 doi: 10.1016/0024-3795(94)00124-3
    [2] A. R. Meenakshi, Generalized inverses of matrices in Minkowski space, Proc. Nat. Semin. Algebra Appl., 57 (2000), 1–14.
    [3] H. Zekraoui, Z. Al-Zhour, C. Özel, Some new algebraic and topological properties of the Minkowski inverse in the Minkowski space, Sci. World J., 2013 (2013), 765732. http://dx.doi.org/10.1155/2013/765732 doi: 10.1155/2013/765732
    [4] A. R. Meenakshi, Range symmetric matrices in Minkowski space, Bull. Malays. Math. Sci. Soc., 23 (2000), 45–52.
    [5] K. Bharathi, Product of k-EP block matrices in Minkowski space, Intern. J. Fuzzy Math. Arch., 5 (2014), 29–38.
    [6] M. S. Lone, D. Krishnaswamy, m-Projections involving Minkowski inverse and range symmetric property in Minkowski space, J. Linear Topol. Algebra, 5 (2016), 215–228.
    [7] A. R. Meenakshi, D. Krishnaswamy, Product of range symmetric block matrices in Minkowski space, Bull. Malays. Math. Sci. Soc., 29 (2006), 59–68.
    [8] D. Krishnaswamy, G. Punithavalli, The anti-reflexive solutions of the matrix equation AXB=C in Minkowski space M, Int. J. Recent Res. Appl. Stud., 15 (2013), 221–227.
    [9] D. Krishnaswamy, M. S. Lone, Partial ordering of range symmetric matrices and M-projectors with respect to Minkowski adjoint in Minkowski space, Adv. Linear Algebra Matrix Theor., 6 (2016), 132–145. http://dx.doi.org/10.4236/alamt.2016.64013 doi: 10.4236/alamt.2016.64013
    [10] G. Punithavalli, Matrix partial orderings and the reverse order law for the Minkowski inverse in M, AIP Conf. Proc., 2177 (2019), 020073. http://doi.org/10.1063/1.5135248 doi: 10.1063/1.5135248
    [11] A. Kılıçman, Z. A. Zhour, The representation and approximation for the weighted Minkowski inverse in Minkowski space, Math. Comput. Model., 47 (2008), 363–371. http://dx.doi.org/10.1016/j.mcm.2007.03.031 doi: 10.1016/j.mcm.2007.03.031
    [12] Z. Al-Zhour, Extension and generalization properties of the weighted Minkowski inverse in a Minkowski space for an arbitrary matrix, Comput. Math. Appl., 70 (2015), 954–961. http://dx.doi.org/10.1016/j.camwa.2015.06.015 doi: 10.1016/j.camwa.2015.06.015
    [13] X. Liu, Y. Qin, Iterative methods for computing the weighted Minkowski inverses of matrices in Minkowski space, World Acad. Sci. Eng. Technol., 75 (2011), 1083–1085.
    [14] H. Wang, N. Li, X. Liu, The m-core inverse and its applications, Linear Multilinear Algebra, 69 (2019), 2491–2509. http://dx.doi.org/10.1080/03081087.2019.1680597 doi: 10.1080/03081087.2019.1680597
    [15] H. Wang, H. Wu, X. Liu, The m-core-EP inverse in Minkowski space, B. Iran. Math. Soc., 48 (2021), 2577–2601. http://dx.doi.org/10.1007/s41980-021-00619-2 doi: 10.1007/s41980-021-00619-2
    [16] H. Wu, H. Wang, H. Jin, The m-WG inverse in Minkowski space, Filomat, 36 (2022), 1125–1141. http://dx.doi.org/10.2298/FIL2204125W doi: 10.2298/FIL2204125W
    [17] R. Penrose, A generalized inverse for matrices, Math. Proc. Cambridge Philos. Soc., 51 (1955), 406–413. http://dx.doi.org/10.1017/S0305004100030401 doi: 10.1017/S0305004100030401
    [18] K. P. S. Bhaskara-Rao, The Theory of Generalized Inverses over Commutative Rings, London: Taylor and Francis, 2002. http://doi.org/10.4324/9780203218877
    [19] A. Ben-Israel, T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd edition, New York: Springer, 2003. http://doi.org/10.1007/b97366
    [20] S. L. Campbell, C. D. Meyer, Generalized Inverses of Linear Transformations, Philadelphia: Society for Industrial and Applied Mathematics, 2009. http://doi.org/10.1137/1.9780898719048
    [21] D. S. Cvetković-Ilić, Y. Wei, Algebraic Properties of Generalized Inverses, Singapore: Springer, 2017. http://doi.org/10.1007/978-981-10-6349-7
    [22] M. Z. Nashed, Generalized Inverses and Applications, New York: Academic Press, 1976. http://doi.org/10.1016/C2013-0-11227-5
    [23] G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, 2nd edition, Beijing: Science Press, 2018. http://doi.org/10.1007/978-981-13-0146-9
    [24] J. Groß, Solution to a rank equation, Linear Algebra Appl., 289 (1999), 127–130. http://doi.org/10.1016/S0024-3795(97)10001-5 doi: 10.1016/S0024-3795(97)10001-5
    [25] S. Zlobec, An explicit form of the Moore-Penrose inverse of an arbitrary complex matrix, SIAM Rev., 12 (1970), 132–134. http://doi.org/10.1137/1012014 doi: 10.1137/1012014
    [26] H. Zhu, J. Chen, P. Patrício, X. Mary, Centralizer's applications to the inverse along an element, Appl. Math. Comput., 315 (2017), 27–33. http://doi.org/10.1016/j.amc.2017.07.046 doi: 10.1016/j.amc.2017.07.046
    [27] H. Zhu, L. Wu, Q. Wang, Suitable elements, *-clean elements and Sylvester equations in rings with involution, Commun. Algebra, 50 (2022), 1535–1543. http://doi.org/10.1080/00927872.2021.1985129 doi: 10.1080/00927872.2021.1985129
    [28] I. Erdelyi, On the matrix equation Ax=λBx, J. Math. Anal. Appl., 17 (1967), 119–132. http://doi.org/10.1016/0022-247X(67)90169-2 doi: 10.1016/0022-247X(67)90169-2
    [29] K. Kamaraj, K. C. Sivakumar, Moore-Penrose inverse in an indefinite inner product space, J. Appl. Math. Comput., 19 (2005), 297–310. http://doi.org/10.1007/BF02935806 doi: 10.1007/BF02935806
    [30] M. Z. Petrović, P. S. Stanimirović, Representations and computations of {2,3} and {2,4}-inverses in indefinite inner product spaces, Appl. Math. Comput., 254 (2015), 157–171. http://doi.org/10.1016/j.amc.2014.12.100 doi: 10.1016/j.amc.2014.12.100
    [31] E. T. Wong, Involutory functions and Moore-Penrose inverses of matrices in an arbitrary field, Linear Algebra Appl., 48 (1982), 283–291. http://doi.org/10.1016/0024-3795(82)90114-8 doi: 10.1016/0024-3795(82)90114-8
    [32] Y. Wei, A characterization and representation of the generalized inverse A(2)T,S and its applications, Linear Algebra Appl., 280 (1998), 87–96. http://doi.org/10.1016/S0024-3795(98)00008-1 doi: 10.1016/S0024-3795(98)00008-1
    [33] N. S. Urquhart, Computation of generalized inverse matrices which satisfy specified conditions, SIAM Rev., 10 (1968), 216–218. http://doi.org/10.1137/1010035 doi: 10.1137/1010035
    [34] R. E. Hartwig, K. Spindelböck, Matrices for which A* and A† commute, Linear Multilinear Algebra, 14 (1983), 241–256. http://doi.org/10.1080/03081088308817561 doi: 10.1080/03081088308817561
    [35] Z. Liao, Solution to a second order matrix equation over a skew field, in Chinese, J. Math. Technol., 15 (1999), 72–74.
    [36] E. H. Moore, On the reciprocal of the general algebraic matrix, Bull. Amer. Math. Soc., 26 (1920), 394–395.
    [37] C. A. Desoer, B. H. Whalen, A note on pseudoinverses, J. Soc. Indust. Appl. Math., 11 (1963), 442–447. http://doi.org/10.1137/0111031 doi: 10.1137/0111031
    [38] A. Bjerhammar, Rectangular reciprocal matrices, with special reference to geodetic calculations, Bull. Géodésique, 20 (1951), 188–220. http://doi.org/10.1007/BF02526278 doi: 10.1007/BF02526278
    [39] A. Bjerhammar, A generalized matrix algebra, Trans. Roy. Inst. Tech., 1958,124.
    [40] J. J. Sylvester, Sur l'équation en matrices px=xq, Comptes Rendus de l'Académie des Sciences, 99 (1884), 67–71.
    [41] O. M. Baksalary, G. P. H. Styan, G. Trenkler, On a matrix decomposition of Hartwig and Spindelböck, Linear Algebra Appl., 430 (2009), 2798–2812. http://doi.org/10.1016/j.laa.2009.01.015 doi: 10.1016/j.laa.2009.01.015
    [42] B. Zheng, R. B. Bapat, Characterization of generalized inverses by a rank equation, Appl. Math. Comput., 151 (2004), 53–67. http://doi.org/10.1016/S0096-3003(03)00322-9 doi: 10.1016/S0096-3003(03)00322-9
    [43] H. Wang, X. Liu, Characterizations of the core inverse and the core partial ordering, Linear Multilinear Algebra, 63 (2015), 1829–1836. http://doi.org/10.1080/03081087.2014.975702 doi: 10.1080/03081087.2014.975702
    [44] M. Fiedler, T. L. Markham, A characterization of the Moore-Penrose inverse, Linear Algebra Appl., 179 (1993), 129–133. http://doi.org/10.1016/0024-3795(93)90325-I doi: 10.1016/0024-3795(93)90325-I
    [45] H. Wang, Core-EP decomposition and its applications, Linear Algebra Appl., 508 (2016), 289–300. http://doi.org/10.1016/j.laa.2016.08.008 doi: 10.1016/j.laa.2016.08.008
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)