Research article

Revisiting of the BT-inverse of matrices

  • Received: 14 October 2020 Accepted: 21 December 2020 Published: 28 December 2020
  • MSC : 15A09

  • In this paper, we discuss various characterizations of the BT-inverse of a square matrix introduced by Baksalary and Trenkler [On a generalized core inverse, Appl. Math. Comput., 236 (2014), 450-457]. Although the BT-inverse is defined by an explicit expression, we present several necessary and sufficient conditions for a matrix to be the BT-inverse. We then give a canonical form of the BT-inverse and investigate, by means of the Core-EP decomposition, the relationships between the BT-inverse and other generalized inverses. Some properties of the BT-inverse related to certain classes of special matrices are also identified by the Core-EP decomposition. Furthermore, new representations of the BT-inverse are given via maximal classes of matrices.

    Citation: Wanlin Jiang, Kezheng Zuo. Revisiting of the BT-inverse of matrices[J]. AIMS Mathematics, 2021, 6(3): 2607-2622. doi: 10.3934/math.2021158




    Many generalized inverses recalled below, such as A†, AD, A⊕, A†⊕, AD,† and A(B,C), can all be characterized by several equations, respectively, while no such system of equations is known to define A◊. Our main aim is to develop necessary and sufficient conditions, expressed by equations, for a matrix to be the BT-inverse, and to derive some properties of the BT-inverse.

    Throughout this paper, we denote the set of m×n complex matrices by Cm×n. We denote the identity matrix of order n by In, and the range space, the null space, the conjugate transpose and the rank of a matrix A∈Cm×n by R(A), N(A), A* and r(A), respectively. The index of A∈Cn×n, denoted by Ind(A), is the smallest nonnegative integer k such that r(A^k)=r(A^{k+1}). PL,M stands for the projector (idempotent) onto the space L along M. For A∈Cm×n, PA represents the orthogonal projection onto R(A), i.e. PA=PR(A)=AA†.

    For the readers' convenience, we first recall the definitions of some generalized inverses. For A∈Cm×n, the Moore-Penrose inverse A† of A is the unique matrix X∈Cn×m satisfying the following four Penrose equations [1]:

    (1) AXA=A,   (2) XAX=X,   (3) (AX)*=AX,   (4) (XA)*=XA.
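The four Penrose equations can be verified numerically; the sketch below uses NumPy's pinv (an assumed dependency, not part of the paper) on a randomly chosen real rectangular matrix, where the conjugate transpose reduces to the ordinary transpose.

```python
import numpy as np

# Verify the four Penrose equations for X = A^+ on a random 4x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)       # (1) AXA = A
assert np.allclose(X @ A @ X, X)       # (2) XAX = X
assert np.allclose((A @ X).T, A @ X)   # (3) (AX)* = AX (real case)
assert np.allclose((X @ A).T, X @ A)   # (4) (XA)* = XA (real case)
```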

    A matrix X∈Cn×m that satisfies condition (1) above is called an inner inverse of A and is denoted by A(1). A matrix X∈Cn×m that satisfies condition (2) above is called an outer inverse of A and is denoted by A(2). A matrix X∈Cn×m that satisfies conditions (1) and (3) above is denoted by A(1,3). The symbols A{1} and A{1,3} stand for the sets of all A(1) and of all A(1,3), respectively. Let A∈Cm×n be of rank r, and let T, S be subspaces of Cn and Cm of dimensions t (≤r) and m−t, respectively. Then there exists a matrix X satisfying X=XAX, R(X)=T and N(X)=S if and only if AT⊕S=Cm, and in this case X is unique and is denoted by A(2)T,S.

    The Drazin inverse of A∈Cn×n with Ind(A)=k, denoted by AD [2], is the unique matrix X∈Cn×n satisfying:

    XAX=X,   AX=XA,   XA^{k+1}=A^k.

    In particular, if Ind(A)=1, then the Drazin inverse of A is called the group inverse of A and is denoted by A#.
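The three defining equations of the Drazin inverse can be checked numerically. The sketch below uses the classical representation AD = A^k (A^{2k+1})† A^k, valid for any k ≥ Ind(A); this formula is a known result quoted here for illustration, not a statement taken from the paper.

```python
import numpy as np

def drazin(A, k):
    # Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, for k >= Ind(A).
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# A has index 2: its nilpotent block [[0,1],[0,0]] squares to zero.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = drazin(A, k=2)

# The three defining equations of the Drazin inverse:
assert np.allclose(X @ A @ X, X)
assert np.allclose(A @ X, X @ A)
assert np.allclose(X @ np.linalg.matrix_power(A, 3), np.linalg.matrix_power(A, 2))
```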

    Baksalary and Trenkler [3] introduced the core inverse on the set CCMn (CCMn={A | A∈Cn×n, r(A)=r(A²)}): the core inverse of A∈CCMn is defined to be the unique matrix X∈Cn×n such that

    AX=PA,   R(X)⊆R(A),

    and is denoted by A⊕ (see [3,4,5,6]).

    Moreover, three kinds of generalizations of the core inverse were given for n×n complex matrices of arbitrary index, called the core-EP inverse, the DMP-inverse and the BT-inverse, respectively.

    Firstly, for A∈Cn×n with Ind(A)=k, the unique matrix X∈Cn×n satisfying:

    XAX=X,   R(X)=R(X*)=R(A^k),

    is called the core-EP inverse of A, written A†⊕ (see [7,8,9,10]). Moreover, it is known that A†⊕=(A^{k+1}(A^k)†)† (see [7, Theorem 2.7]).
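The representation A†⊕=(A^{k+1}(A^k)†)† is directly computable; a minimal NumPy sketch on an index-1 matrix (for which the core-EP inverse coincides with the core inverse) is given below.

```python
import numpy as np

def core_ep(A, k):
    # Core-EP inverse via the representation (A^(k+1) (A^k)^+)^+, k = Ind(A).
    Ak = np.linalg.matrix_power(A, k)
    return np.linalg.pinv(np.linalg.matrix_power(A, k + 1) @ np.linalg.pinv(Ak))

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])            # Ind(A) = 1
X = core_ep(A, k=1)
assert np.allclose(X, [[1.0, 0.0], [0.0, 0.0]])
assert np.allclose(X @ A @ X, X)      # X is an outer inverse of A
```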

    Secondly, the DMP-inverse of A∈Cn×n with Ind(A)=k, written AD,† [11,12], is defined as the unique matrix X∈Cn×n satisfying:

    XAX=X,   XA=ADA,   A^kX=A^kA†.

    Moreover, it was proved that AD,†=ADAA†. Also, the dual DMP inverse of A was introduced in [12], namely A†,D=A†AAD.

    Thirdly, the BT-inverse of A∈Cn×n, denoted by A◊ [13], is defined as

    A◊=(A²A†)†=(APA)†.
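Since the BT-inverse is given by an explicit expression, it can be computed with a single pseudoinverse call; the sketch below (NumPy assumed) checks that the two forms of the definition agree on a small singular matrix.

```python
import numpy as np

def bt_inverse(A):
    # BT-inverse A^◊ = (A P_A)^+ = (A^2 A^+)^+, with P_A = A A^+.
    P_A = A @ np.linalg.pinv(A)
    return np.linalg.pinv(A @ P_A)

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
X = bt_inverse(A)
# Both expressions in the definition agree:
assert np.allclose(X, np.linalg.pinv(A @ A @ np.linalg.pinv(A)))
assert np.allclose(X @ A @ X, X)      # A^◊ is an outer inverse of A
```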

    In recent years, some new generalized inverses have been introduced. The (B,C)-inverse of A∈Cm×n, denoted by A(B,C) [14,15], is the unique matrix X∈Cn×m satisfying:

    XAB=B,   CAX=C,   R(X)=R(B),   N(X)=N(C),

    where B,C∈Cn×m.

    In [16], Wang and Chen introduced a new generalized inverse of A∈Cn×n called the weak group inverse, denoted by AⓌ. It is defined as the unique matrix X∈Cn×n satisfying:

    AX²=X,   AX=A†⊕A.

    Moreover, it is proved that AⓌ=(A†⊕)²A.

    While the authors in [13] introduced the BT-inverse through the expression A◊=(APA)†, characterizations of when a matrix equals A◊ have seldom been given. In this paper, we concentrate on necessary and sufficient conditions for a matrix to be A◊ and characterize the relationships between A◊ and other generalized inverses. The paper is organized as follows. In Section 2, some indispensable matrix classes and lemmas are given. In Section 3, several characterizations of A◊ are presented. In Section 4, we first derive a canonical form of A◊ by the Core-EP decomposition and verify its validity in Example 1. By the canonical form of A◊ and the Core-EP decomposition, we obtain the relationships between A◊ and other generalized inverses, as well as some properties of A◊ when A or A◊ belongs to certain special matrix classes. In Section 5, we extend the representation A◊=(APA)† to a more general one via maximal classes of matrices.

    For convenience, some matrix classes will be given as follows.

    The symbols CCMn, CPn, COPn and CEPn will stand for the subsets of Cn×n consisting of core matrices, projectors (idempotent matrices), orthogonal projectors (Hermitian idempotent matrices) and EP (range-Hermitian) matrices, respectively, i.e.,

    CCMn={A∈Cn×n | r(A²)=r(A)},
    CPn={A∈Cn×n | A²=A},
    COPn={A∈Cn×n | A²=A=A*},
    CEPn={A∈Cn×n | AA†=A†A}={A∈Cn×n | R(A)=R(A*)}.

    In order to present some characterizations and properties of A◊, we need to introduce the following lemmas.

    Lemma 2.1. [17] Let A∈Cn×n with r(A)=r. Then A can be written as

    A = U[ΣK  ΣL; 0  0]U*, (2.1)

    where U∈Cn×n is unitary, Σ=diag(σ1,σ2,…,σr) is the diagonal matrix of singular values of A, σi>0 (i=1,2,…,r), and K∈Cr×r, L∈Cr×(n−r) satisfy

    KK*+LL*=Ir. (2.2)

    Moreover, from (2.1) it follows that

    A† = U[K*Σ^{-1}  0; L*Σ^{-1}  0]U*,   PA = AA† = U[Ir  0; 0  0]U*. (2.3)

    By [12,13], we obtain that

    AD = U[(ΣK)D  ((ΣK)D)²ΣL; 0  0]U*, (2.4)
    A◊ = U[(ΣK)†  0; 0  0]U* (2.5)

    and, for A∈CCMn (so that ΣK is nonsingular),

    A⊕ = U[(ΣK)^{-1}  0; 0  0]U*. (2.6)

    The lemma below gives the Core-EP decomposition introduced by Wang, which plays an important role in this paper.

    Lemma 2.2. [9] Let A∈Cn×n with Ind(A)=k. Then there exists a unitary matrix U∈Cn×n such that

    A = A1+A2 = U[T  S; 0  N]U*, (2.7)
    A1 = U[T  S; 0  0]U*,   A2 = U[0  0; 0  N]U*,

    where T∈Ct×t is nonsingular with t=r(T)=r(A^k) and N is nilpotent of index k.

    Lemma 2.3. [18, Lemma 6] Let A∈Cn×n be of the form (2.7), where N is not necessarily nilpotent. Then

    A† = U[T*Δ  −T*ΔSN†; (In−t−N†N)S*Δ  N†−(In−t−N†N)S*ΔSN†]U*, (2.8)

    where Δ=(TT*+S(In−t−N†N)S*)^{-1} and t=r(A^k).

    From (2.7) and (2.8), a straightforward computation shows that

    AA† = U[It  0; 0  NN†]U*, (2.9)
    A†A = U[T*ΔT  T*ΔS(In−t−N†N); (In−t−N†N)S*ΔT  N†N+(In−t−N†N)S*ΔS(In−t−N†N)]U*. (2.10)

    Lemma 2.4. [13, Theorem 1] Let A∈Cn×n. Then

    AA◊ = P_{APA},   A◊A = P_{R(PAA*),N((APA)*A)}. (2.11)
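The first identity of (2.11) can be spot-checked numerically: A·A◊ is the orthogonal projector onto R(APA), hence Hermitian and idempotent. A sketch with NumPy on a random rank-deficient real matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 8))   # r(A) = 5

P_A = A @ np.linalg.pinv(A)
X = np.linalg.pinv(A @ P_A)        # BT-inverse A^◊ = (A P_A)^+

M = A @ X
assert np.allclose(M, M.T)         # Hermitian (symmetric in the real case)
assert np.allclose(M @ M, M)       # idempotent
assert np.allclose(X @ A @ X, X)   # A^◊ is an outer inverse of A
```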

    It is well known that some generalized inverses, such as the MP-inverse, the Drazin inverse and the DMP-inverse, can be presented as outer inverses with prescribed range and null space. We will prove that the same holds for the BT-inverse. In the following theorem, we also give further characterizations of the BT-inverse based on the fact that A◊AA◊=A◊.

    Theorem 3.1. Let A,X∈Cn×n. Then the following conditions are equivalent:

    (a) X=A◊;

    (b) XAX=X, R(X)=R(PAA*) and N(X)=N(PAA*), i.e., X=A(2)R(PAA*),N(PAA*);

    (c) XAX=X, AX=A(APA)† and XA=(APA)†A;

    (d) XAX=X, AX=P_{APA} and XA=(APA)†A.

    Proof. (a)⇒(b). From the definition of the BT-inverse and Lemma 2.4, we derive that

    A(APA)† = AA◊ = P_{APA}, (3.1)

    and moreover

    (APA)†A(APA)† = (APA)†APA(APA)†. (3.2)

    From the definition of the BT-inverse and (3.2), it follows that

    A◊AA◊ = (APA)†A(APA)† = (APA)†APA(APA)† = (APA)† = A◊,
    R(A◊) = R((APA)†) = R((APA)*) = R(PAA*),
    N(A◊) = N((APA)†) = N((APA)*) = N(PAA*).

    (b)⇒(c). From [19, Remark 3.1], we have that A(2)R(PAA*),N(PAA*) exists. It is easy to check that A◊=A(2)R((APA)*),N((APA)*)=A(2)R(PAA*),N(PAA*). Since X=A(2)R(PAA*),N(PAA*) and such a matrix is unique, we obtain X=A◊. The rest of the proof is then trivial.

    (c)⇒(d). Since AX=A(APA)†, by (3.1) we obtain AX=APA(APA)†=P_{APA}.

    (d)⇒(a). By the assumptions, we conclude that

    X=XAX=XAPA(APA)†=(APA)†APA(APA)†=(APA)†=A◊.

    In the following theorem, we present a connection between the (B,C)-inverse and the BT-inverse, showing that the BT-inverse of a matrix A∈Cn×n is its (PAA*,PAA*)-inverse.

    Theorem 3.2. Let A∈Cn×n. Then A◊=A(PAA*,PAA*).

    Proof. From the definition of the BT-inverse and (3.1), it follows that

    A◊APAA* = (APA)†APA(APA)* = (APA)* ,
    PAA*AA◊ = (APA)*A(APA)† = (APA)*APA(APA)† = (APA)*,
    R(A◊) = R(PAA*),   N(A◊) = N(PAA*).

    Hence A◊=A(PAA*,PAA*).

    According to the facts that R(A◊)=R(PAA*) and N(A◊)=N(PAA*), several further characterizations of the BT-inverse can be given as follows.

    Theorem 3.3. Let A,X∈Cn×n. Then the following conditions are equivalent:

    (a) X=A◊;

    (b) AX=A(APA)†, R(X)=R(PAA*);

    (c) AX=P_{APA}, R(X)=R(PAA*);

    (d) PAX=(APA)†, R(X)=R(PAA*);

    (e) A*X=A*(APA)†, R(X)=R(PAA*);

    (f) XA=(APA)†A, N(X)=N(PAA*);

    (g) XA=P_{R(PAA*),N((APA)*A)}, N(X)=N(PAA*).

    Proof. That (a) implies each of (b), (c), (d), (e), (f) and (g) can be checked directly by Theorem 3.1, the definition of the BT-inverse and Lemma 2.4.

    (b)⇒(a). By R(X)=R(PAA*)=R((APA)†), we have X=(APA)†T for some T∈Cn×n. By (3.2), then

    X=(APA)†T=(APA)†APA(APA)†T=(APA)†AX=(APA)†A(APA)†=(APA)†APA(APA)†=(APA)†=A◊.

    (c)⇒(b). Since AX=P_{APA}, by (3.1) we obtain AX=P_{APA}=APA(APA)†=A(APA)†.

    (d)⇒(a). By R(X)=R(PAA*), we get X=(APA)†T for some T∈Cn×n. By (3.2), then

    X=(APA)†T=(APA)†APA(APA)†T=(APA)†APAX=(APA)†A(APA)†=(APA)†APA(APA)†=(APA)†=A◊.

    (e)⇒(d). Premultiplying A*X=A*(APA)† by (A†)*, we obtain PAX=PA(APA)†=(APA)†.

    (f)⇒(a). By N(X)=N(PAA*), we obtain X=K(APA)† for some K∈Cn×n. By (3.2), then

    X=K(APA)†=K(APA)†APA(APA)†=XA(APA)†=(APA)†A(APA)†=(APA)†=A◊.

    (g)⇒(a). Since XA=P_{R(PAA*),N((APA)*A)}=P_{R((APA)†),N((APA)*A)}, we get XA(APA)†=(APA)†. By N(X)=N(PAA*), we have X=K(APA)† for some K∈Cn×n. Then

    X=K(APA)†=K(APA)†APA(APA)†=XA(APA)†=(APA)†=A◊.

    Remark 3.4. Notice that the condition R(X)=R(PAA*) in items (b), (c), (d) and (e) of Theorem 3.3 can be replaced by R(X)⊆R(PAA*). Also, the condition N(X)=N(PAA*) in items (f), (g) of Theorem 3.3 can be replaced by N(PAA*)⊆N(X).

    Theorem 3.5. Let A,X∈Cn×n. Then the following conditions are equivalent:

    (a) X=A◊;

    (b) r(X)=r(A²), XA(APA)†=(APA)† and AX=A(APA)†;

    (c) r(X)=r(A²), (APA)†AX=(APA)† and XA=(APA)†A;

    (d) r(X)=r(A²), XA(APA)†=(APA)† and AX=P_{APA};

    (e) r(X)=r(A²), XA(APA)†=(APA)† and PAX=(APA)†;

    (f) r(X)=r(A²), XA(APA)†=(APA)† and A*X=A*(APA)†;

    (g) r(X)=r(A²), (APA)†AX=(APA)† and XA=P_{R(PAA*),N((APA)*A)}.

    Proof. (a)⇒(b). For X=A◊, we get r(A◊)=r(APA). Since R(A²)=R(APA·A)⊆R(APA)⊆R(A²), we get R(APA)=R(A²), and hence r(A◊)=r(APA)=r(A²). From the definition of the BT-inverse and the latter half of (2.11), we derive that A◊A(APA)†=(APA)† and AA◊=A(APA)†.

    That (a) implies the remaining items (c), (d), (e), (f) and (g) can be proved similarly.

    (b)⇒(a). Combining r(X)=r(A²)=r(APA) with XA(APA)†=(APA)†, we obtain R(X)=R(PAA*). Hence it follows from (b) of Theorem 3.3 that X=A◊.

    (c)⇒(a). From r(X)=r(A²)=r(APA) and (APA)†AX=(APA)†, we get N(X)=N(PAA*). Hence we get X=A◊ by (f) of Theorem 3.3.

    The proofs of (d)⇒(a), (e)⇒(a) and (f)⇒(a) are analogous to that of (b)⇒(a). Also, (g)⇒(a) follows similarly to (c)⇒(a).

    In this section, we first give the canonical form of the BT-inverse by using the Core-EP decomposition. Then some properties of the BT-inverse are derived by utilizing its definition and canonical form.

    Theorem 4.1. Let A∈Cn×n be of the form (2.7). Then

    A◊ = U[T*Λ  −T*ΛS(NPN)†; (In−t−(NPN)†NPN)PNS*Λ  (NPN)†−(In−t−(NPN)†NPN)PNS*ΛS(NPN)†]U*, (4.1)

    where PN=NN† and Λ=[TT*+SPN(In−t−(NPN)†NPN)PNS*]^{-1}.

    Proof. By (2.9), PA=AA†=U[It  0; 0  PN]U*, and hence

    APA = U[T  S; 0  N][It  0; 0  PN]U* = U[T  SPN; 0  NPN]U*.

    Applying Lemma 2.3 (in which N need not be nilpotent) to the blocks T, SPN and NPN gives

    A◊ = (APA)† = U[T*Λ  −T*ΛSPN(NPN)†; (In−t−(NPN)†NPN)PNS*Λ  (NPN)†−(In−t−(NPN)†NPN)PNS*ΛSPN(NPN)†]U*,

    where Λ=[TT*+SPN(In−t−(NPN)†NPN)PNS*]^{-1}. Since R((NPN)†)=R((NPN)*)⊆R(PN), we have PN(NPN)†=(NPN)†, so SPN(NPN)†=S(NPN)†, which yields (4.1).

    Next, we verify the correctness of the expression (4.1) numerically.

    Example 1. Consider a 10×10 matrix A whose numerically generated entries (listed in the published version, together with the computed factor U, and omitted here) admit the form (2.7) with a nonsingular 8×8 block T, an 8×2 block S and the nilpotent block

    N = [0  1; 0  0],

    so that t=r(A^k)=8 and Ind(A)=2. Let r1=(APA)† be computed directly from the definition, and let A◊ be computed from the representation (4.1). With ∥·∥ denoting the Frobenius norm, it turns out that

    ∥A◊−r1∥ = 3.5313×10^{-14},

    which confirms the validity of the representation (4.1).
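A check in the same spirit, but independent of the explicit decomposition, compares the two defining expressions of the BT-inverse on a random singular matrix (a sketch with NumPy; the matrix below is random, not the one from Example 1):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((10, 8)) @ rng.standard_normal((8, 10))   # r(A) = 8

Ap = np.linalg.pinv(A)
r1 = np.linalg.pinv(A @ (A @ Ap))     # (A P_A)^+
r2 = np.linalg.pinv((A @ A) @ Ap)     # (A^2 A^+)^+
assert np.allclose(r1, r2)            # the two expressions coincide
assert np.allclose(r1 @ A @ r1, r1)   # and yield an outer inverse of A
```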

    Lemma 4.2. [20] Let A∈Cn×n be written as in (2.7). Then

    AD = U[T^{-1}  (T^{k+1})^{-1}T̃; 0  0]U*, (4.2)

    where T̃ = ∑_{j=0}^{k−1} T^j S N^{k−1−j}.

    In [13], necessary and sufficient conditions for A◊=A† were given by using the Hartwig-Spindelböck decomposition of Lemma 2.1. We will prove that the conditions A◊=AD, A◊=A†,D and A◊=AⓌ are equivalent by utilizing the Core-EP decomposition as follows.

    Theorem 4.3. Let A∈Cn×n be decomposed as in (2.7). Then the following statements are equivalent:

    (a) S=0 and N²=0;

    (b) A◊=AD;

    (c) A²∈CEPn;

    (d) A◊=A†,D;

    (e) A◊=AⓌ.

    Proof. (a)⟺(b). Since A◊=(A²A†)†, the equality A◊=AD holds if and only if A²A†=(AD)†. By Lemma 2.3 and (4.2),

    A²A† = APA = U[T  SPN; 0  NPN]U*,   (AD)† = (U[T^{-1}  (T^{k+1})^{-1}T̃; 0  0]U*)†,

    and a block comparison shows that A²A†=(AD)† if and only if T̃=0, SPN=0 and NPN=0, that is, if and only if S=0 and N²=0.

    (a)⟺(c). From (2.7) and (2.8), we can calculate that

    A² = U[T²  TS+SN; 0  N²]U*,
    (A²)† = U[(T²)*Θ  −(T²)*Θ(TS+SN)(N²)†; (In−t−(N²)†N²)(TS+SN)*Θ  (N²)†−(In−t−(N²)†N²)(TS+SN)*Θ(TS+SN)(N²)†]U*,

    where Θ=(T²(T²)*+(TS+SN)(In−t−(N²)†N²)(TS+SN)*)^{-1}. Then it follows that

    A²∈CEPn ⟺ A²(A²)†=(A²)†A² ⟺ (TS+SN)=(TS+SN)(N²)†N² and (N²)†N²=N²(N²)†.

    Since N² is nilpotent, the second condition forces N²=0, and then the first condition gives TS+SN=0; multiplying on the right by N yields TSN=0, hence SN=0, TS=0 and therefore S=0. Thus A²∈CEPn ⟺ S=0 and N²=0.

    (d)⇒(a). From A◊=A†,D we get AA◊=AA†AAD=AAD. By (2.1), (2.4) and (2.5), AA◊=AAD is equivalent to

    U[ΣK  ΣL; 0  0][(ΣK)†  0; 0  0]U* = U[ΣK  ΣL; 0  0][(ΣK)D  ((ΣK)D)²ΣL; 0  0]U*.

    Thus ΣK(ΣK)†=ΣK(ΣK)D. Postmultiplying by ΣK gives ΣK=(ΣK)²(ΣK)D, which implies Ind(ΣK)≤1 and therefore Ind(A)≤2.

    Now let A be of the form (2.7). Since Ind(A)≤2, we obtain N²=0, and hence NPN=0. Representations (4.1) and (4.2) directly lead to

    AA◊ = U[T  S; 0  N][T*Λ  0; PNS*Λ  0]U* = U[It  0; 0  0]U*,   AAD = U[T  S; 0  N][T^{-1}  (T^{k+1})^{-1}T̃; 0  0]U* = U[It  (T^k)^{-1}T̃; 0  0]U*,

    where Λ=(TT*+SPNS*)^{-1}. Hence AA◊=AAD gives T̃=0, which together with N²=0 implies S=0.

    (a)⇒(d). It can be checked directly.

    (a)⟺(e). From AⓌ=(A†⊕)²A and Lemma 2.2 it follows that AⓌ=U[T^{-1}  T^{-2}S; 0  0]U*. If S=0 and N²=0, then SPN=0 and NPN=0, so (4.1) reduces to A◊=U[T^{-1}  0; 0  0]U*=AⓌ. Conversely, comparing the corresponding blocks of A◊ in (4.1) with those of AⓌ yields S=0 and N²=0.

    In [7], it is shown that A◊=A†⊕ is equivalent to A◊=AD,† by using the Hartwig-Spindelböck decomposition. Now we verify the equivalence of A◊=A†⊕ and A◊=AD,† by the Core-EP decomposition.

    Theorem 4.4. Let A∈Cn×n be decomposed as in (2.7). Then the following statements are equivalent:

    (a) A◊=A†⊕;

    (b) SN=0 and N²=0;

    (c) A◊=AD,†.

    Proof. (a)⟺(b). According to [9, Corollary 3.3], we have

    A^k(A^k)† = U[It  0; 0  0]U*.

    Since A◊=A†⊕ is equivalent to A²A†=(A†⊕)†=A^{k+1}(A^k)†, and by (2.9)

    A²A† = U[T  SPN; 0  NPN]U*,   A^{k+1}(A^k)† = A·A^k(A^k)† = U[T  0; 0  0]U*,

    it follows that A◊=A†⊕ if and only if SPN=0 and NPN=0, that is, SN=0 and N²=0.

    (b)⟺(c). From the definitions of A◊ and AD,† together with (4.2), by using Lemma 2.3 it follows that

    A◊=AD,† ⟺ A²A†=(AD,†)† ⟺ U[T  SPN; 0  NPN]U* = (U[T^{-1}  (T^{k+1})^{-1}T̃PN; 0  0]U*)† ⟺ SPN=0, NPN=0, T̃PN=0 ⟺ SN=0, N²=0,

    where T̃ = ∑_{j=0}^{k−1} T^j S N^{k−1−j}.

    Remark 4.5. If A of the form (2.7) is nilpotent, then A=UNU*, and condition (a) of Theorem 4.3 and condition (b) of Theorem 4.4 both reduce to N²=0. In other words, if A is nilpotent, then the conditions A◊=AD, A◊=A†⊕, A◊=AD,†, A◊=A†,D and A◊=AⓌ are all equivalent.

    In [13, Theorem 4], equivalent conditions for A∈CEPn were given. We will now give necessary and sufficient conditions for A◊ to belong to certain special matrix classes by using the Core-EP decomposition.

    Theorem 4.6. Let A∈Cn×n be of the form (2.7). Then

    (a) A◊∈CCMn ⟺ N²=0;

    (b) A◊∈CPn ⟺ N²=0 and T*=TT*+SPNS*;

    (c) A◊∈COPn ⟺ T=It, SN=0 and N²=0 (equivalently, A²=A1, where A1 is as in Lemma 2.2).

    Proof. (a). From the definition of the BT-inverse, it follows that

    A◊∈CCMn ⟺ (A²A†)†∈CCMn ⟺ A²A†∈CCMn.

    By (2.7) and (2.9), we obtain that

    A²A† = U[T  SPN; 0  NPN]U*.

    Since T is nonsingular, it follows that A◊∈CCMn ⟺ N²N†=0 ⟺ N²=0, which establishes point (a) of the theorem.

    (b). For A◊∈CPn⊆CCMn, we have N²=0 and hence NPN=0. From (4.1), we now have

    A◊ = U[T*Λ  0; PNS*Λ  0]U*,

    where Λ=(TT*+SPNS*)^{-1}. Since A◊∈CPn, the invertible block T*Λ is idempotent, so T*Λ=It, hence T*=Λ^{-1}=TT*+SPNS*. The sufficiency of the condition in (b) can be checked directly, therefore point (b) of the theorem holds.

    (c). It can be directly checked by the Core-EP decomposition that A²=A1 is equivalent to T=It, SN=0 and N²=0. For A◊∈COPn⊆CPn, we have N²=0 and T*=Λ^{-1} by (b). From (4.1), we then have

    A◊ = U[It  0; PNS*Λ  0]U*.

    Since A◊∈COPn is Hermitian, we get PNS*Λ=0, i.e., SPN=0, which implies SN=0 and then T*=TT*, i.e., T=It. The sufficiency of the condition in (c) can be checked directly, therefore point (c) of the theorem holds.

    Remark 4.7. If A of the form (2.7) is nilpotent, so that A=UNU*, then each of the conditions A◊∈CCMn, A◊∈CPn and A◊∈COPn is equivalent to A²=0 (equivalently, N²=0).

    From [13], it is known that AA◊=A◊A and (A◊)*=(A*)◊ are both satisfied when A∈CEPn, but we cannot conclude A∈CEPn when only AA◊=A◊A or (A◊)*=(A*)◊ holds. The following theorem establishes an equivalence.

    Theorem 4.8. Let A∈Cn×n be written as in (2.1). Then the following statements are equivalent:

    (a) A∈CEPn;

    (b) AA◊=A◊A and A∈CCMn;

    (c) (A◊)*=(A*)◊ and A∈CCMn;

    (d) ((A◊)*)^m=((A*)◊)^m for some m≥2 and A∈CCMn.

    Proof. That (a) implies items (b), (c) and (d) can be checked directly from the definition of A◊.

    (b)⇒(a). For A∈CCMn, the matrix K in (2.1) is nonsingular. By (2.5) and (2.6), we get A◊=A⊕, and then AA◊=A◊A reads AA⊕=A⊕A. Hence it follows that A∈CEPn by [3, Theorem 3].

    (c)⇒(a). This follows similarly to (b)⇒(a).

    (d)⇒(a). It is known that A∈CEPn is equivalent to L=0 in (2.1). Combining (2.3) and (2.5) with ((A◊)*)^m=((A*)◊)^m leads to L=0, which means A∈CEPn.

    Finally, we study representations of the BT-inverse. In [4], for A∈CCMn, based on the representations A⊕=A#AA† and A⊕=(A²A†)†, the author gave new representations by maximal matrix classes, such as A⊕=XAY or A⊕=(A²Z)†, where R(XA)⊆R(A), Y∈A{1,3} and Z∈A{1,3}. Similarly, the author in [21] gave representations of A†⊕ and AD,† by maximal classes. Now we derive representations of the BT-inverse by maximal classes. We first state an important lemma.

    Lemma 5.1. [22] Let A,B,C∈Cn×n. Then the matrix equation AXB=C is consistent if and only if, for some A(1)∈A{1} and B(1)∈B{1},

    AA(1)CB(1)B=C,

    in which case the general solution is

    X=A(1)CB(1)+Z−A(1)AZBB(1),

    for arbitrary Z∈Cn×n.
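Lemma 5.1 is easy to exercise numerically with the choice A(1)=A†, B(1)=B†: if AA†CB†B=C, then X=A†CB†+Z−A†AZBB† solves AXB=C for every Z. A NumPy sketch with a randomly chosen Z:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
C = A @ rng.standard_normal((3, 3)) @ B      # guarantees consistency
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
assert np.allclose(A @ Ap @ C @ Bp @ B, C)   # consistency test of the lemma

Z = rng.standard_normal((3, 3))
X = Ap @ C @ Bp + Z - Ap @ A @ Z @ B @ Bp    # general solution for this Z
assert np.allclose(A @ X @ B, C)
```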

    Theorem 5.2. Let A∈Cn×n of rank r be of the form (2.1). Then the following conditions are equivalent:

    (a) A◊=(A²X)†;

    (b) A²X=APA;

    (c) X=P_{(A²)*}A†+(In−P_{(A²)*})Z, for arbitrary Z∈Cn×n;

    (d) X can be expressed as

    X = U[P*R^{-1}ΣK+(Ir−P*R^{-1}P)Z1−P*R^{-1}QZ3  (Ir−P*R^{-1}P)Z2−P*R^{-1}QZ4; Q*R^{-1}ΣK−Q*R^{-1}PZ1+(In−r−Q*R^{-1}Q)Z3  −Q*R^{-1}PZ2+(In−r−Q*R^{-1}Q)Z4]U*,

    where R=PP*+QQ*, P=(ΣK)² and Q=ΣKΣL, for arbitrary Z1,Z2,Z3,Z4.

    Proof. (a)⇒(b). Since A◊=(APA)†=(A²X)†, taking Moore-Penrose inverses of both sides gives A²X=APA.

    (b)⇒(c). It is evident that P_{(A²)*}A† satisfies the equation

    A²X=APA. (5.1)

    Applying Lemma 5.1 to this equation, the general solution of (5.1) is given by

    X=P_{(A²)*}A†+(In−P_{(A²)*})Z,

    for arbitrary Z∈Cn×n.

    (c)⟺(d). From (2.1), it follows that

    A² = U[(ΣK)²  ΣKΣL; 0  0]U* = U[P  Q; 0  0]U*, (5.2)

    and applying [23, Lemma 1] to (5.2), we obtain that

    (A²)† = U[P*R^{-1}  0; Q*R^{-1}  0]U*,

    where R=PP*+QQ*, P=(ΣK)² and Q=ΣKΣL. Next, partitioning accordingly

    Z = U[Z1  Z2; Z3  Z4]U*,

    a straightforward computation shows that X=P_{(A²)*}A†+(In−P_{(A²)*})Z is equivalent to

    X = U[P*R^{-1}ΣK+(Ir−P*R^{-1}P)Z1−P*R^{-1}QZ3  (Ir−P*R^{-1}P)Z2−P*R^{-1}QZ4; Q*R^{-1}ΣK−Q*R^{-1}PZ1+(In−r−Q*R^{-1}Q)Z3  −Q*R^{-1}PZ2+(In−r−Q*R^{-1}Q)Z4]U*, (5.3)

    for arbitrary Z1,Z2,Z3,Z4.

    (c)⇒(a). By a direct calculation, we have A²X=A²A†=APA. Therefore

    (A²X)†=(APA)†=A◊.

    Theorem 5.3. Let A∈Cn×n be of the form (2.1), and let X,Y∈APA{1}. Then the following conditions are equivalent:

    (a) A◊=XAPAY;

    (b) XAPA=P_{(APA)*} and APAY=A(APA)†;

    (c) X=(APA)†+Z(In−P_{APA}) and Y=(APA)†+(In−P_{(APA)*})W, for arbitrary Z,W∈Cn×n;

    (d) X and Y can be expressed as

    X = U[(ΣK)†+Z1(Ir−ΣK(ΣK)†)  Z2; Z3(Ir−ΣK(ΣK)†)  Z4]U*,

    for arbitrary Z1,Z2,Z3,Z4, and

    Y = U[(ΣK)†+(Ir−(ΣK)†ΣK)W1  (Ir−(ΣK)†ΣK)W2; W3  W4]U*,

    for arbitrary W1,W2,W3,W4.

    Proof. (a)⇒(b). Postmultiplying A◊=XAPAY by APA and using Y∈APA{1}, it follows that XAPA=A◊APA=(APA)†APA=P_{(APA)*}. Premultiplying A◊=XAPAY by APA and using X∈APA{1}, it follows that APAY=APAA◊=APA(APA)†=AA◊=A(APA)†.

    (b)⇒(c). Applying Lemma 5.1 to the two equations XAPA=(APA)†APA and APAY=A(APA)† respectively, the general solutions are given by X=(APA)†+Z(In−P_{APA}) for arbitrary Z∈Cn×n and Y=(APA)†+(In−P_{(APA)*})W for arbitrary W∈Cn×n.

    (c)⟺(d). Assuming that A has the form given in (2.1), we have APA=U[ΣK  0; 0  0]U* and

    In−P_{APA} = U[Ir−ΣK(ΣK)†  0; 0  In−r]U*,
    In−P_{(APA)*} = U[Ir−(ΣK)†ΣK  0; 0  In−r]U*.

    Next, partitioning accordingly

    Z = U[Z1  Z2; Z3  Z4]U*,   W = U[W1  W2; W3  W4]U*,

    a straightforward computation shows that X=(APA)†+Z(In−P_{APA}) is equivalent to

    X = U[(ΣK)†+Z1(Ir−ΣK(ΣK)†)  Z2; Z3(Ir−ΣK(ΣK)†)  Z4]U*, (5.4)

    for arbitrary Z1,Z2,Z3,Z4, and that Y=(APA)†+(In−P_{(APA)*})W is equivalent to

    Y = U[(ΣK)†+(Ir−(ΣK)†ΣK)W1  (Ir−(ΣK)†ΣK)W2; W3  W4]U*, (5.5)

    for arbitrary W1,W2,W3,W4.

    (d)⇒(a). According to (5.4) and (5.5), we have APAY=U[ΣK(ΣK)†  0; 0  0]U*, and a straightforward computation shows that

    XAPAY = U[(ΣK)†+Z1(Ir−ΣK(ΣK)†)  Z2; Z3(Ir−ΣK(ΣK)†)  Z4][ΣK(ΣK)†  0; 0  0]U* = U[(ΣK)†  0; 0  0]U* = A◊.
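The maximal-class representation of Theorem 5.2 can be spot-checked numerically: for X=P_{(A²)*}A†+(In−P_{(A²)*})Z, the identity A◊=(A²X)† should hold for every Z. A NumPy sketch with one randomly chosen Z:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 6))   # singular A
Ap = np.linalg.pinv(A)
bt = np.linalg.pinv(A @ A @ Ap)             # A^◊ = (A^2 A^+)^+

P2 = np.linalg.pinv(A @ A) @ (A @ A)        # projector P_{(A^2)^*}
Z = rng.standard_normal((6, 6))
X = P2 @ Ap + (np.eye(6) - P2) @ Z          # a member of the maximal class
assert np.allclose(np.linalg.pinv(A @ A @ X), bt)
```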

    In this work, various characterizations of the BT-inverse of a square matrix have been developed, and several necessary and sufficient conditions for a matrix to be the BT-inverse have been derived. The Core-EP decomposition proved to be an efficient tool for investigating the relationships between the BT-inverse and other generalized inverses. Finally, the defining expression of the BT-inverse has been extended to more general representations via maximal classes of matrices.

    The authors are thankful to the four anonymous referees for their careful reading, detailed corrections, insightful comments and pertinent suggestions on the first version of the paper, which distinctly enhanced the presentation of the results. This research was supported by the Natural Science Foundation of China under Grant 11961076.

    The authors declare no conflict of interest.



    [1] R. Penrose, A generalized inverse for matrices, Math. Proc. Cambridge Philos. Soc., 51 (1955), 406-413. doi: 10.1017/S0305004100030401
    [2] M. P. Drazin, Pseudo-inverses in associative rings and semigroups, Am. Math. Mon., 65 (1958), 506-514. doi: 10.1080/00029890.1958.11991949
    [3] O. M. Baksalary, G. Trenkler, Core inverse of matrices, Linear Multilinear Algebra, 58 (2010), 681-697.
    [4] H. Kurata, Some theorems on the core inverse of matrices and the core partial ordering, Appl. Math. Comput., 316 (2018), 43-51.
    [5] G. Luo, K. Zuo, L. Zhou, Revisitation of core inverse, Wuhan Univ. J. Nat. Sci., 20 (2015), 381-385.
    [6] D. S. Rakić, N. Dinčić, D. S. Djordjević, Core inverse and core partial order of Hilbert space operators, Appl. Math. Comput., 244 (2014), 283-302.
    [7] D. E. Ferreyra, F. E. Levis, N. Thome, Revisiting the core-EP inverse and its extension to rectangular matrices, Quaest. Math., 41 (2018), 1-17. doi: 10.2989/16073606.2017.1368732
    [8] K. M. Prasad, K. S. Mohana, Core-EP inverse, Linear Multilinear Algebra, 62 (2014), 792-802.
    [9] H. Wang, Core-EP decomposition and its applications, Linear Algebra Appl., 508 (2016), 289-300. doi: 10.1016/j.laa.2016.08.008
    [10] K. Zuo, Y. Cheng, The new revisitation of core EP inverse of matrices, Filomat, 33 (2019), 3061-3072. doi: 10.2298/FIL1910061Z
    [11] K. Zuo, D. Cvetković-Ilić, Y. Cheng, Different characterizations of DMP-inverse of matrices, Linear Multilinear Algebra, (2020), 1-8.
    [12] S. B. Malik, N. Thome, On a new generalized inverse for matrices of an arbitrary index, Appl. Math. Comput., 226 (2014), 575-580.
    [13] O. M. Baksalary, G. Trenkler, On a generalized core inverse, Appl. Math. Comput., 236 (2014), 450-457.
    [14] J. Benitez, E. Boasso, H. Jin, On one-sided (B, C)-inverse of arbitrary matrices, Electron. J. Linear Algebra, 32 (2017), 391-422.
    [15] M. P. Drazin, A class of outer generalized inverses, Linear Algebra Appl., 436 (2012), 1909-1923. doi: 10.1016/j.laa.2011.09.004
    [16] H. Wang, J. Chen, Weak group inverse, Open Math., 16 (2018), 1218-1232.
    [17] R. E. Hartwig, K. Spindelböck, Matrices for which A* and A† commute, Linear Multilinear Algebra, 14 (1983), 241-256. doi: 10.1080/03081088308817561
    [18] C. Y. Deng, H. K. Du, Representation of the Moore-Penrose inverse of 2×2 block operator valued matrices, J. Korean Math. Soc., 46 (2009), 1139-1150. doi: 10.4134/JKMS.2009.46.6.1139
    [19] D. E. Ferreyra, F. E. Levis, N. Thome, Characterizations of k-commutative equalities for some outer generalized inverses, Linear Multilinear Algebra, 68 (2020), 177-192. doi: 10.1080/03081087.2018.1500994
    [20] X. Wang, C. Deng, Properties of m-EP operators, Linear Multilinear Algebra, 65 (2017), 1349-1361. doi: 10.1080/03081087.2016.1235131
    [21] D. E. Ferreyra, F. E. Levis, N. Thome, Maximal classes of matrices determining generalized inverses, Appl. Math. Comput., 333 (2018), 42-52.
    [22] A. Ben-Israel, T. N. E. Greville, Generalized inverses: Theory and applications, 2 Eds., Springer-Verlag, New York, 2003.
    [23] C. H. Hung, T. L. Markham, The Moore-Penrose inverse of a partitioned matrix M=[A B; C D], Linear Algebra Appl., 11 (1975), 73-86. doi: 10.1016/0024-3795(75)90118-4
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)