Research article

Further characterizations of the weak core inverse of matrices and the weak core matrix

  • Received: 22 September 2021 Accepted: 16 November 2021 Published: 06 December 2021
  • MSC : 15A09

  • The present paper is devoted to characterizing the weak core inverse and the weak core matrix using the core-EP decomposition. Some new characterizations of the weak core inverse are presented by using its range space, null space and matrix equations. Additionally, we give several new representations and properties of the weak core inverse. Finally, we consider several equivalent conditions for a matrix to be a weak core matrix.

    Citation: Zhimei Fu, Kezheng Zuo, Yang Chen. Further characterizations of the weak core inverse of matrices and the weak core matrix[J]. AIMS Mathematics, 2022, 7(3): 3630-3647. doi: 10.3934/math.2022200




    The weak core inverse was introduced in [1] where the authors presented some characterizations and properties. In [2], the authors introduced an extension of the weak core inverse. Continuing previous research about the weak core inverse, our purpose is to present new characterizations and representations of the weak core inverse. Additionally, we also give several equivalent conditions for a matrix to be a weak core matrix.

Let C^{m×n} be the set of all m×n complex matrices and let Z^+ denote the set of all positive integers. The symbols R(A), N(A), A^*, r(A) and I_n denote the range space, null space, conjugate transpose and rank of A∈C^{m×n}, and the identity matrix of order n, respectively. Ind(A) denotes the index of A∈C^{n×n}. Let C^{n×n}_k be the set of all n×n complex matrices with index k. The symbol dim(S) represents the dimension of a subspace S⊆C^n, and P_L stands for the orthogonal projection onto the subspace L. P_A and P_{A^*} denote the orthogonal projections onto R(A) and R(A^*), respectively, i.e., P_A = AA^† and P_{A^*} = A^†A.

We will now introduce the definitions of several generalized inverses that will be used throughout the paper. The Moore-Penrose inverse of A∈C^{m×n}, denoted by A^†, is defined as the unique matrix X∈C^{n×m} satisfying [3]:

    (1) AXA = A,  (2) XAX = X,  (3) (AX)^* = AX,  (4) (XA)^* = XA.
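As a quick numerical illustration (our addition, not part of the paper), `numpy.linalg.pinv` computes A^† and the four Penrose equations can be checked directly; the matrix below is an arbitrary example:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [2., 4., 1.]])       # an arbitrary rank-2 rectangular matrix

X = np.linalg.pinv(A)              # Moore-Penrose inverse A^†

assert np.allclose(A @ X @ A, A)             # (1) AXA = A
assert np.allclose(X @ A @ X, X)             # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)  # (3) (AX)^* = AX
assert np.allclose((X @ A).conj().T, X @ A)  # (4) (XA)^* = XA
```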

In particular, X is an outer inverse of A, denoted by A^{(2)}, if XAX = X. For any matrix A∈C^{m×n} with r(A) = r, let T⊆C^n and S⊆C^m be two subspaces such that dim(T) = t ≤ r and dim(S) = m−t. Then A has an outer inverse X satisfying R(X) = T and N(X) = S if and only if AT ⊕ S = C^m, in which case X is unique and is denoted by A^{(2)}_{T,S} [4].

The Drazin inverse of A∈C^{n×n}_k, denoted by A^D, is the unique matrix X∈C^{n×n} satisfying [5]: XAX = X, AX = XA, XA^{k+1} = A^k.

For any matrix A∈C^{n×n}_1, a new generalized inverse, called the core inverse [6], was introduced. Two further generalizations of the core inverse for A∈C^{n×n}_k, namely the core-EP inverse [7] and the DMP inverse [8], were also introduced.

In 2018, Wang and Chen [9] defined the weak group inverse of A∈C^{n×n}_k, denoted by A^Ⓦ, as the unique matrix X∈C^{n×n} such that [9]: AX² = X, AX = A^{\textcircled{†}}A, where A^{\textcircled{†}} denotes the core-EP inverse of A [7]. Moreover, it was verified that A^Ⓦ = (A^{\textcircled{†}})²A.

Recently, Ferreyra et al. introduced a new generalization of the core inverse, called the weak core inverse of A∈C^{n×n}_k and denoted by A^{Ⓦ,†} (or, in short, the WC inverse). It is defined as the unique matrix X∈C^{n×n} satisfying [1]:

    XAX = X,   AX = CA^†,   XA = A^DC,

    where C = AA^ⓌA. Moreover, it is proved that A^{Ⓦ,†} = A^DCA^† = A^ⓌAA^†.

    The structure of this paper is as follows: In Section 2, we give some preliminaries which will be made use of later in this paper. In Section 3, we discuss some characterizations of the WC inverse based on its range space, null space and matrix equations. In Section 4, several new representations of the WC inverse are proposed. Section 5 is devoted to deriving some properties of the WC inverse by the core-EP decomposition. Moreover, in Section 6, we present several equivalent conditions for a matrix to be a weak core matrix.

For convenience, we will use the following notation: CCM_n, CEP_n, CP_n and COP_n will denote the subsets of C^{n×n} consisting of core matrices, EP matrices, idempotent matrices and Hermitian idempotent matrices, respectively, i.e.,

    CCM_n = {A | A∈C^{n×n}, r(A²) = r(A)};

    CEP_n = {A | A∈C^{n×n}, R(A) = R(A^*)};

    CP_n = {A | A∈C^{n×n}, A² = A};

    COP_n = {A | A∈C^{n×n}, A² = A = A^*}.

    Before giving characterizations of the WC inverse, we first present the following auxiliary lemmas which will be repeatedly used throughout this paper.

Lemma 2.1. [10] Let A∈C^{n×n}_k. Then A can be represented as

    A = U[T  S;  0  N]U^*, (2.1)

    where T∈C^{t×t} is nonsingular with t = r(T) = r(A^k), N is nilpotent of index k, and U∈C^{n×n} is unitary.

    Moreover, the representation of A given by (2.1) is unique [10, Theorem 2.4]. In that case, we have that

    A^k = U[T^k  T̃;  0  0]U^*, (2.2)

    where T̃ = Σ_{j=0}^{k−1} T^j S N^{k−1−j}.
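The decomposition (2.1) can also be computed numerically. The sketch below is our illustration (not from [10]): it builds the unitary U from the left singular vectors of A^k, since the first t = r(A^k) of them span R(A^k), which is an invariant subspace of A, so U^*AU is block upper triangular with nonsingular T and nilpotent N:

```python
import numpy as np

def core_ep_blocks(A, k, tol=1e-10):
    """Return U, T, S, N with A = U [[T, S], [0, N]] U^*, as in (2.1)."""
    # The first t = r(A^k) left singular vectors of A^k form an orthonormal
    # basis of R(A^k), an A-invariant subspace; the remaining singular
    # vectors span its orthogonal complement, giving a suitable unitary U.
    U, s, _ = np.linalg.svd(np.linalg.matrix_power(A, k))
    t = int(np.sum(s > tol))
    M = U.conj().T @ A @ U        # block upper triangular up to rounding
    return U, M[:t, :t], M[:t, t:], M[t:, t:]

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])   # a matrix with Ind(A) = 2
U, T, S, N = core_ep_blocks(A, k=2)
t = T.shape[0]

B = np.zeros_like(A)
B[:t, :t], B[:t, t:], B[t:, t:] = T, S, N
assert np.allclose(U @ B @ U.conj().T, A)            # A = U [[T,S],[0,N]] U^*
assert abs(np.linalg.det(T)) > 1e-8                  # T is nonsingular
assert np.allclose(np.linalg.matrix_power(N, 2), 0)  # N is nilpotent
```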

Lemma 2.2. [1,9,10,11,12] Let A∈C^{n×n}_k be given by (2.1). Then:

    A^† = U[T^*Δ  −T^*ΔSN^†;  (I_{n−t}−N^†N)S^*Δ  N^†−(I_{n−t}−N^†N)S^*ΔSN^†]U^*, (2.3)
    A^D = U[T^{−1}  (T^{k+1})^{−1}T̃;  0  0]U^*, (2.4)
    A^{\textcircled{†}} = U[T^{−1}  0;  0  0]U^*, (2.5)
    A^{D,†} = U[T^{−1}  (T^{k+1})^{−1}T̃NN^†;  0  0]U^*, (2.6)
    A^{†,D} = U[T^*Δ  T^*ΔT^{−k}T̃;  (I_{n−t}−N^†N)S^*Δ  (I_{n−t}−N^†N)S^*ΔT^{−k}T̃]U^*, (2.7)
    A^Ⓦ = U[T^{−1}  T^{−2}S;  0  0]U^*, (2.8)
    A^{Ⓦ,†} = U[T^{−1}  T^{−2}SNN^†;  0  0]U^*, (2.9)

    where T̃ = Σ_{j=0}^{k−1} T^j S N^{k−1−j} and Δ = [TT^* + S(I_{n−t}−N^†N)S^*]^{−1}.

    The matrices T̃ and Δ will often be used throughout this paper.
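A small numerical sanity check of the block formulas (2.8) and (2.9) (our addition), taking U = I_4 and arbitrarily chosen blocks T, S, N: it verifies the identity A^{Ⓦ,†} = A^ⓌAA^† and the defining relations of the WC inverse from Section 1:

```python
import numpy as np

# Blocks of a core-EP decomposition (2.1), taking U = I_4 for simplicity.
T = np.array([[2., 2.], [3., 4.]])   # nonsingular
S = np.array([[1., 0.], [2., 0.]])
N = np.array([[0., 1.], [0., 0.]])   # nilpotent of index 2
Z = np.zeros((2, 2))

A = np.block([[T, S], [Z, N]])
Ti = np.linalg.inv(T)
Nd = np.linalg.pinv(N)

Aw  = np.block([[Ti, Ti @ Ti @ S], [Z, Z]])           # (2.8), weak group inverse
Awd = np.block([[Ti, Ti @ Ti @ S @ N @ Nd], [Z, Z]])  # (2.9), WC inverse
Ad  = np.block([[Ti, np.linalg.matrix_power(Ti, 3) @ (T @ S + S @ N)], [Z, Z]])  # (2.4) with k = 2

C = A @ Aw @ A
assert np.allclose(Aw @ A @ np.linalg.pinv(A), Awd)   # A^{Ⓦ,†} = A^Ⓦ A A^†
assert np.allclose(Awd @ A @ Awd, Awd)                # XAX = X
assert np.allclose(A @ Awd, C @ np.linalg.pinv(A))    # AX = CA^†
assert np.allclose(Awd @ A, Ad @ C)                   # XA = A^D C
```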

Lemma 2.3. [1] Let A∈C^{n×n}_k. Then

    (a) A^{Ⓦ,†} = A^{(2)}_{R(A^k), N((A^k)^*A²A^†)};

    (b) AA^{Ⓦ,†} = P_{R(A^k), N((A^k)^*A²A^†)};

    (c) A^{Ⓦ,†}A = P_{R(A^k), N((A^k)^*A²)}.

Lemma 2.4. Let A∈C^{n×n}_k and C = AA^ⓌA. The following statements hold:

    (a) [1] A^{Ⓦ,†} = A^DCA^† = A^ⓌAA^†;

    (b) [1] r(A^{Ⓦ,†}) = r(A^k);

    (c) [1] CA^†C = C;

    (d) [9] A^ⓌA^{k+1} = A^k;

    (e) C^k = A^kA^ⓌA.

    Proof. Item (e) can be directly verified by (2.1), (2.2) and (2.8).

Applying existing results for the outer inverse with prescribed range R(X) = R(A^k) and null space N(X) = N((A^k)^*A²A^†), some new characterizations of the WC inverse are obtained in the next result.

Theorem 3.1. Let A∈C^{n×n}_k and C = AA^ⓌA. The following statements are equivalent:

    (a) X = A^{Ⓦ,†};

    (b) N(X) = N((A^k)^*A²A^†) and XAA^† = A^ⓌAA^†;

    (c) N(X) = N((A^k)^*A²A^†) and X(A^†)^* = A^Ⓦ(A^†)^*;

    (d) R(X) = R(A^k) and A^*AX = A^*CA^†;

    (e) R(X) = R(A^k) and AX = CA^†;

    (f) R(X) = R(A^k) and A^kX = C^kA^†;

    (g) R(X) = R(A^k), N(X) = N((A^k)^*A²A^†) and XAA^Ⓦ = A^Ⓦ;

    (h) R(X) = R(A^k), N(X) = N((A^k)^*A²A^†) and XA^{k+1} = A^k.

Proof. (a)⇒(b). By the definition of A^{Ⓦ,†}, we have that XAA^† = A^DCA^† = A^DAA^ⓌAA^† = A^ⓌAA^†. Hence, by (a) of Lemma 2.3, we obtain that (b) holds.

    (b)⇒(c). Postmultiplying XAA^† = A^ⓌAA^† by (A^†)^* and using AA^†(A^†)^* = (A^†)^*, we obtain that X(A^†)^* = A^Ⓦ(A^†)^*.

    (c)⇒(d). From N(X) = N((A^k)^*A²A^†), we have that N(AA^†) ⊆ N((A^k)^*A²A^†) = N(X), which leads to X = XAA^†. Since AA^† = (A^†)^*A^*, we thus get that X = XAA^† = A^ⓌAA^† = A^{Ⓦ,†} by X(A^†)^* = A^Ⓦ(A^†)^*. Hence, by the definition of A^{Ⓦ,†} and (a) of Lemma 2.3, (d) holds.

    (d)⇒(e). Evident (premultiply A^*AX = A^*CA^† by (A^†)^* and use AA^†C = C).

    (e)⇒(f). Since C = AA^ⓌA and C^k = A^kA^ⓌA, premultiplying AX = CA^† by A^{k−1}, we have that A^kX = C^kA^†.

    (f)⇒(g). From (2.2) and R(X) = R(A^k), we can set X = U[X₁  X₂;  0  0]U^*, where X₁∈C^{t×t}, X₂∈C^{t×(n−t)} and t = r(A^k). Furthermore, it follows from A^kX = C^kA^† and (2.9) that X = A^{Ⓦ,†}. Therefore, by the definition of A^{Ⓦ,†} and (a) of Lemma 2.3, (g) holds.

    (g)⇒(h). It follows from A^ⓌA^{k+1} = A^k and XAA^Ⓦ = A^Ⓦ that XA^{k+1} = XAA^ⓌA^{k+1} = A^ⓌA^{k+1} = A^k.

    (h)⇒(a). By R(X) = R(A^k) and XA^{k+1} = A^k, we get that XAX = X. Hence, by (a) of Lemma 2.3, we get that X = A^{Ⓦ,†}.

Now we will consider other characterizations of the WC inverse based on the fact that A^{Ⓦ,†}AA^{Ⓦ,†} = A^{Ⓦ,†}.

Theorem 3.2. Let A∈C^{n×n}_k and C = AA^ⓌA. The following statements are equivalent:

    (a) X = A^{Ⓦ,†};

    (b) XAX = X, R(X) = R(A^k) and N(X) = N((A^k)^*A²A^†);

    (c) XAX = X, R(X) = R(A^k) and AX = CA^†;

    (d) XAX = X, AX = CA^† and XA^k = A^ⓌA^k;

    (e) XAX = X, X(A^†)^* = A^Ⓦ(A^†)^* and A^kX = C^kA^†;

    (f) XAX = X, X(A^†)^* = A^Ⓦ(A^†)^* and N(X) = N((A^k)^*A²A^†).

Proof. (a)⇒(b). The proof follows from (a) of Lemma 2.3.

    (b)⇒(c). By the definition of A^{Ⓦ,†} and (b) of Lemma 2.3, we get that AX∈CP_n, R(AX) = AR(X) = R(A^{k+1}) = R(A^k) = R(AA^{Ⓦ,†}) = R(CA^†) and N(AX) = N(X) = N((A^k)^*A²A^†) = N(AA^{Ⓦ,†}) = N(CA^†). On the other hand, item (c) of Lemma 2.4 implies CA^†∈CP_n; hence AX = CA^†, as both are projectors with the same range and null space.

    (c)⇒(d). By item (c) of Lemma 2.3, we obtain that R(X) = R(A^k) = R(A^{Ⓦ,†}A). So we get that A^{Ⓦ,†}AX = X, which implies that XA^k = A^{Ⓦ,†}AXA^k = A^{Ⓦ,†}CA^†A^k = A^{Ⓦ,†}AA^ⓌAA^†A^k = A^ⓌA^k.

    (d)⇒(e). By the hypotheses and AA^Ⓦ = A^k(A^Ⓦ)^k, we can infer that X = XAX = XCA^† = XAA^ⓌAA^† = XA^k(A^Ⓦ)^kAA^† = A^ⓌA^k(A^Ⓦ)^kAA^† = A^ⓌAA^† = A^{Ⓦ,†}. Hence, by A^{Ⓦ,†} = A^ⓌAA^† and A^kA^ⓌA = C^k, we obtain that (e) holds.

    (e)⇒(f). Since XAX = X and X(A^†)^* = A^Ⓦ(A^†)^*, we have that XA = X(A^†)^*A^*A = A^Ⓦ(A^†)^*A^*A = A^ⓌA, so R(X) = R(XA) = R(A^ⓌA) = R(A^k). We now obtain that X = A^{Ⓦ,†} by (f) of Theorem 3.1. Hence (f) holds by (a) of Lemma 2.3.

    (f)⇒(a). It follows from XAX = X that N(AX) = N(X). By the hypotheses and (a) of Lemma 2.3, we now obtain that X = XAX = A^ⓌAX = A^ⓌAA^†AX = A^{Ⓦ,†}P_{R(AX), N(AX)} = A^{Ⓦ,†}.

Notice the fact that XA^{k+1} = A^k if X = A^{Ⓦ,†}. Therefore, we will characterize the WC inverse in terms of the equation XA^{k+1} = A^k.

Theorem 3.3. Let A∈C^{n×n}_k and C = AA^ⓌA. The following statements are equivalent:

    (a) X = A^{Ⓦ,†};

    (b) XA^{k+1} = A^k, A^*AX = A^*CA^† and r(X) = r(A^k);

    (c) XA^{k+1} = A^k, AX = CA^† and r(X) = r(A^k);

    (d) XA^{k+1} = A^k, A^kX = C^kA^† and r(X) = r(A^k).

Proof. (a)⇒(b). Since A^{Ⓦ,†} = A^ⓌAA^†, we can show that XA^{k+1} = A^k and A^*AX = A^*CA^†. Then, by (b) of Lemma 2.4, we get that (b) holds.

    (b)⇒(c). Obvious (premultiply A^*AX = A^*CA^† by (A^†)^* and use AA^†C = C).

    (c)⇒(d). Premultiplying AX = CA^† by A^{k−1}, we have that A^kX = C^kA^†, since A^{k−1}C = A^kA^ⓌA = C^k.

    (d)⇒(a). It follows from XA^{k+1} = A^k and r(X) = r(A^k) that R(X) = R(A^k). Hence, we obtain that X = A^{Ⓦ,†} from (f) of Theorem 3.1.

    In the following example, we show that the condition r(X)=r(Ak) in Theorem 3.3 is necessary.

Example 3.4. Let

    A = [1  0  0;  0  0  3;  0  0  0],   X = [1  0  0;  0  0  2;  0  0  0].

    Then Ind(A) = 2,

    A^† = [1  0  0;  0  0  0;  0  1/3  0],  C = [1  0  0;  0  0  0;  0  0  0]  and  A^{Ⓦ,†} = [1  0  0;  0  0  0;  0  0  0].

    It can be directly verified that XA³ = A², A^*AX = A^*CA^† and r(X) ≠ r(A²), but X ≠ A^{Ⓦ,†}. The other cases follow similarly.
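The claims of Example 3.4 can be checked numerically (our addition):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 0., 3.],
              [0., 0., 0.]])
X = np.array([[1., 0., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])
C = np.array([[1., 0., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
Wc = C.copy()                      # here A^{Ⓦ,†} happens to equal C

Ap = np.linalg.pinv(A)
A2 = A @ A

assert np.allclose(X @ A2 @ A, A2)                    # XA^3 = A^2
assert np.allclose(A.T @ A @ X, A.T @ C @ Ap)         # A^*AX = A^*CA^†
assert np.linalg.matrix_rank(X) != np.linalg.matrix_rank(A2)
assert not np.allclose(X, Wc)                         # but X ≠ A^{Ⓦ,†}
```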

By Lemma 2.3, it is clear that AX = P_{R(A^k), N((A^k)^*A²A^†)} and XA = P_{R(A^k), N((A^k)^*A²)} if X = A^{Ⓦ,†}. However, the converse is invalid, as shown in the next example:

Example 3.5. Let A, X be the same as in Example 3.4. Then

    AX = [1  0  0;  0  0  0;  0  0  0],  XA = [1  0  0;  0  0  0;  0  0  0]  and  A^{Ⓦ,†} = [1  0  0;  0  0  0;  0  0  0].

    It can be directly verified that AX = P_{R(A²), N((A²)^*A²A^†)} and XA = P_{R(A²), N((A²)^*A²)}, but X ≠ A^{Ⓦ,†}.

    In the next result, we will present some new equivalent conditions for the converse implication:

Theorem 3.6. Let A∈C^{n×n}_k and X∈C^{n×n}. The following statements are equivalent:

    (a) X = A^{Ⓦ,†};

    (b) AX = P_{R(A^k), N((A^k)^*A²A^†)}, XA = P_{R(A^k), N((A^k)^*A²)} and r(X) = r(A^k);

    (c) AX = P_{R(A^k), N((A^k)^*A²A^†)}, XA = P_{R(A^k), N((A^k)^*A²)} and XAX = X;

    (d) AX = P_{R(A^k), N((A^k)^*A²A^†)}, XA = P_{R(A^k), N((A^k)^*A²)} and AX² = X.

Proof. (a)⇒(b). The proof follows from (b) and (c) of Lemma 2.3 and (b) of Lemma 2.4.

    (b)⇒(c). By R(XA) = R(A^k) and r(X) = r(A^k), we obtain that R(X) = R(XA) = R(A^k); since XA is a projector onto R(A^k), we further derive that XAX = X.

    (c)⇒(d). By the hypotheses and (a) of Lemma 2.3, we have that X = A^{Ⓦ,†}. Therefore, by (2.9), AX² = X can be directly verified.

    (d)⇒(a). From AX² = X, we have that X = AX² = A²X³ = ⋯ = A^kX^{k+1}, which implies R(X) ⊆ R(A^k). Combining this with R(A^k) = R(XA) ⊆ R(X), we get that R(X) = R(A^k). From (2.2), we now set X = U[X₁  X₂;  0  0]U^*, where X₁∈C^{t×t}, X₂∈C^{t×(n−t)} and t = r(A^k). On the other hand, it follows from N(AX) = N((A^k)^*A²A^†) that (A^k)^*A²A^† = (A^k)^*A²A^†AX = (A^k)^*A²X, which yields X₁ = T^{−1} and X₂ = T^{−2}SNN^†. Therefore, by (2.9), we obtain that X = A^{Ⓦ,†}.

In [1], the authors introduced the definition of A^{Ⓦ,†} with an algebraic approach. In the next result, we consider a characterization of A^{Ⓦ,†} from a geometrical point of view.

Theorem 3.7. Let A∈C^{n×n}_k. Then:

    (a) A^{Ⓦ,†} is the unique matrix X that satisfies:

    AX = P_{R(A^k), N((A^k)^*A²A^†)},   R(X) ⊆ R(A^k). (3.1)

    (b) A^{Ⓦ,†} is the unique matrix X that satisfies:

    XA = P_{R(A^k), N((A^k)^*A²)},   N(A^†) ⊆ N(X). (3.2)

Proof. (a) Since R(A^D) = R(A^k), this is a consequence of [2, Corollary 3.2] and properties of the Drazin and Moore-Penrose inverses.

    (b) By item (c) of Lemma 2.3, A^{Ⓦ,†} satisfies XA = P_{R(A^k), N((A^k)^*A²)}. Additionally, since A^{Ⓦ,†} = A^ⓌAA^†, we derive that N(A^†) ⊆ N(A^ⓌAA^†) = N(X). It now remains to prove that X is unique.

    Assume that X₁, X₂ satisfy (3.2). Then X₁A = X₂A, N(A^†) ⊆ N(X₁) and N(A^†) ⊆ N(X₂). From X₁A = X₂A we get that X₁−X₂ vanishes on R(A), and from N(A^†) = N(A^*) = R(A)^⊥ ⊆ N(X₁−X₂) it also vanishes on R(A)^⊥. Since R(A) ⊕ R(A)^⊥ = C^n, we conclude X₁ = X₂.

    Remark 3.8. In Theorem 3.7, R(X) ⊆ R(A^k) in (3.1) can be replaced by R(X) = R(A^k). However, if we replace N(A^†) ⊆ N(X) with N(A^†) = N(X) in (3.2), item (b) of Theorem 3.7 does not hold.

Characterizations of some generalized inverses via block matrices have been investigated in [13,14,15,16,17]. In [18, Theorem 3.2], the authors presented a characterization of the WC inverse using block matrices. Next we give another proof of it by using characterizations of projectors.

Theorem 3.9. Let A∈C^{n×n}_k and r(A^k) = t. Then there exist a unique matrix P such that

    P² = P,  PA^k = 0,  (A^k)^*A²P = 0,  r(P) = n−t, (3.3)

    a unique matrix Q such that

    Q² = Q,  QA^k = 0,  (A^k)^*A²A^†Q = 0,  r(Q) = n−t, (3.4)

    and a unique matrix X such that

    r([A  I−Q;  I−P  X]) = r(A). (3.5)

    Furthermore, X is the WC inverse A^{Ⓦ,†} of A, and

    P = P_{N((A^k)^*A²), R(A^k)},  Q = P_{N((A^k)^*A²A^†), R(A^k)}. (3.6)

Proof. It is not difficult to prove that

    the conditions in (3.3) hold ⇔ (I−P)² = I−P, (I−P)A^k = A^k, (A^k)^*A²(I−P) = (A^k)^*A², r(P) = n−t ⇔ I−P = P_{R(A^k), N((A^k)^*A²)} ⇔ P = P_{N((A^k)^*A²), R(A^k)}.

    Similarly, we can show that (3.4) has the unique solution Q = P_{N((A^k)^*A²A^†), R(A^k)}.

    Furthermore, comparing (3.6) with items (b) and (c) of Lemma 2.3 immediately leads to the conclusion that

    r([A  I−Q;  I−P  X]) = r([A  AA^{Ⓦ,†};  A^{Ⓦ,†}A  X]) = r(A) + r(X − A^{Ⓦ,†}).

    By (3.5), we obtain that X = A^{Ⓦ,†}.

In [19], Drazin introduced the (b,c)-inverse in semigroups. In [20], Benítez et al. investigated the (B,C)-inverse of A∈C^{m×n}, defined as the unique matrix X∈C^{n×m} satisfying [20]:

    CAX = C,   XAB = B,   R(X) = R(B),   N(X) = N(C),

    where B, C∈C^{n×m}. In the next result, we will show that the WC inverse is a special (B,C)-inverse.

Theorem 4.1. Let A∈C^{n×n}_k. Then

    A^{Ⓦ,†} = A^{(A^k, (A^k)^*A²A^†)}.

    Proof. According to Lemma 2.3, we get that

    R(A^{Ⓦ,†}) = R(A^k),   N(A^{Ⓦ,†}) = N((A^k)^*A²A^†).

    Observe that A^{Ⓦ,†}AA^k = A^ⓌA^{k+1} = A^k and (A^k)^*A²A^†AA^{Ⓦ,†} = (A^k)^*A²A^{Ⓦ,†} = (A^k)^*A²A^†. Thus, we obtain A^{Ⓦ,†} = A^{(A^k, (A^k)^*A²A^†)}.

In [21], the authors introduced the Bott-Duffin inverse of A∈C^{n×n}: when AP_L + P_{L^⊥} is nonsingular, A^{(−1)}_{(L)} = P_L(AP_L + P_{L^⊥})^{−1} = P_L(AP_L + I − P_L)^{−1}. In [22], the authors expressed the weak group inverse in terms of a special Bott-Duffin inverse. Next we show that the WC inverse of A can be obtained from the Bott-Duffin inverse of A² with respect to R(A^k).

Theorem 4.2. Let A∈C^{n×n}_k be given by (2.1). Then

    A^{Ⓦ,†} = (A²)^{(−1)}_{(R(A^k))}AP_A = (P_{A^k}A²P_{A^k})^†AP_A. (4.1)

    Proof. It follows from (2.3) and (2.2) that

    P_A = U[I_t  0;  0  NN^†]U^*, (4.2)
    P_{A^k} = U[I_t  0;  0  0]U^*. (4.3)

    We now obtain that

    (A²)^{(−1)}_{(R(A^k))}AP_A = P_{A^k}(A²P_{A^k} + I − P_{A^k})^{−1}AP_A = U[I_t  0;  0  0]([T²  0;  0  I_{n−t}])^{−1}[T  S;  0  N][I_t  0;  0  NN^†]U^* = U[T^{−1}  T^{−2}SNN^†;  0  0]U^* = A^{Ⓦ,†}.

    Similarly, by a direct calculation, we can derive that A^{Ⓦ,†} = (P_{A^k}A²P_{A^k})^†AP_A.
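Formula (4.1) gives a direct numerical recipe for the WC inverse using only Moore-Penrose inverses and orthogonal projectors. The sketch below is our illustration (the helper name `wc_inverse` is ours), tested on the matrix of Example 4.5:

```python
import numpy as np

def wc_inverse(A, k):
    """WC inverse via (4.1): A^{Ⓦ,†} = (P A^2 P)^† A P_A with P = P_{A^k}."""
    Ak = np.linalg.matrix_power(A, k)
    P = Ak @ np.linalg.pinv(Ak)          # orthogonal projector onto R(A^k)
    PA = A @ np.linalg.pinv(A)           # P_A, orthogonal projector onto R(A)
    return np.linalg.pinv(P @ A @ A @ P) @ A @ PA

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])         # Ind(A) = 2

W = wc_inverse(A, k=2)
expected = np.array([[ 2., -1., -0.5, 0.],
                     [-1.5, 1.,  0.5, 0.],
                     [ 0.,  0.,  0.,  0.],
                     [ 0.,  0.,  0.,  0.]])
assert np.allclose(W, expected)
```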

Working with the projectors P = P_{N((A^k)^*A²), R(A^k)} and Q = P_{N((A^k)^*A²A^†), R(A^k)} of Theorem 3.9, we will consider other representations of A^{Ⓦ,†} in the next theorem.

Theorem 4.3. Let A∈C^{n×n}_k, P = P_{N((A^k)^*A²), R(A^k)} and Q = P_{N((A^k)^*A²A^†), R(A^k)}. Then for any scalars a, b ≠ 0, we have

    A^{Ⓦ,†} = (A + aP)^{−1}(I − Q) = (I − P)(A + bQ)^{−1}. (4.4)

    Proof. From items (b) and (c) of Lemma 2.3, it is not difficult to conclude that

    (A + aP)A^{Ⓦ,†} = I − Q.

    Now we only need to show the invertibility of A + aP. Assume that α = U[α₁; α₂]∈C^n satisfies (A + aP)α = 0, i.e., Aα = −aPα, where α₁∈C^t, α₂∈C^{n−t}. It now follows from item (c) of Lemma 2.3 and (3.6) that

    [T  S;  0  N][α₁; α₂] = −a[0  −T^{−1}S−T^{−2}SN;  0  I_{n−t}][α₁; α₂],

    implying α₂ = 0 and then α₁ = 0, since a ≠ 0, N is nilpotent and T is nonsingular. Thus A + aP is nonsingular.

    Analogously, we can prove that A + bQ is invertible and A^{Ⓦ,†} = (I − P)(A + bQ)^{−1}.
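Theorem 4.3 can also be checked numerically. In the sketch below (our illustration), P and Q are recovered via Theorem 3.9 as P = I − A^{Ⓦ,†}A and Q = I − AA^{Ⓦ,†}, with the WC inverse itself computed via (4.1):

```python
import numpy as np

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])   # Ind(A) = 2
n, k = 4, 2
I = np.eye(n)

# WC inverse via (4.1): (P_{A^k} A^2 P_{A^k})^† A P_A.
Ak = np.linalg.matrix_power(A, k)
Pk = Ak @ np.linalg.pinv(Ak)
W = np.linalg.pinv(Pk @ A @ A @ Pk) @ A @ A @ np.linalg.pinv(A)

# Projectors of Theorem 3.9: I - P = A^{Ⓦ,†}A and I - Q = AA^{Ⓦ,†}.
P = I - W @ A
Q = I - A @ W

# (4.4) holds for any nonzero a, b; here a = -6 and b = 1/5 as in Example 4.5.
assert np.allclose(np.linalg.inv(A - 6 * P) @ (I - Q), W)
assert np.allclose((I - P) @ np.linalg.inv(A + 0.2 * Q), W)
```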

Limit expressions for some generalized inverses of matrices have been given in [14,15,16,17,23,24]. Similarly, the WC inverse can also be characterized as a limit, as shown in the next result:

Theorem 4.4. Let A∈C^{n×n}_k. Then:

    (a) A^{Ⓦ,†} = lim_{λ→0} A^k(λI_n + (A^k)^*A^{k+2})^{−1}(A^k)^*A²A^*(λI_n + AA^*)^{−1};

    (b) A^{Ⓦ,†} = lim_{λ→0} A^k(A^k)^*A(λI_n + A^{k+1}(A^k)^*A)^{−1}AA^*(λI_n + AA^*)^{−1};

    (c) A^{Ⓦ,†} = lim_{λ→0} (λI_n + A^k(A^k)^*A²)^{−1}A^k(A^k)^*A²A^*(λI_n + AA^*)^{−1};

    (d) A^{Ⓦ,†} = lim_{λ→0} A^k(A^k)^*A²A^*(λI_n + AA^*)^{−1}(λI_n + A^{k+1}(A^k)^*A²A^*(λI_n + AA^*)^{−1})^{−1}.

    Proof. According to condition (a) of Lemma 2.3, it is not hard to show that

    A^{Ⓦ,†} = A^{(2)}_{R(A^k), N((A^k)^*A²A^†)} = A^{(2)}_{R(A^k(A^k)^*A²A^†), N(A^k(A^k)^*A²A^†)}.

    Thus, by [25, Theorem 2.1] and the limit representation A^† = lim_{λ→0} A^*(λI_n + AA^*)^{−1}, we have the following results:

    (a) Let X = A^k and Y = (A^k)^*A²A^†. Then YAX = (A^k)^*A^{k+2}, and we have

    A^{Ⓦ,†} = lim_{λ→0} A^k(λI_n + (A^k)^*A^{k+2})^{−1}(A^k)^*A²A^*(λI_n + AA^*)^{−1}.

    (b) Let X = A^k(A^k)^*A and Y = AA^†. Then YAX = A^{k+1}(A^k)^*A, and we have

    A^{Ⓦ,†} = lim_{λ→0} A^k(A^k)^*A(λI_n + A^{k+1}(A^k)^*A)^{−1}AA^*(λI_n + AA^*)^{−1}.

    (c) Let X = I_n and Y = A^k(A^k)^*A²A^†. Then YAX = A^k(A^k)^*A², and we have

    A^{Ⓦ,†} = lim_{λ→0} (λI_n + A^k(A^k)^*A²)^{−1}A^k(A^k)^*A²A^*(λI_n + AA^*)^{−1}.

    (d) Let X = A^k(A^k)^*A²A^† and Y = I_n. Then YAX = A^{k+1}(A^k)^*A²A^†, and we have

    A^{Ⓦ,†} = lim_{λ→0} A^k(A^k)^*A²A^*(λI_n + AA^*)^{−1}(λI_n + A^{k+1}(A^k)^*A²A^*(λI_n + AA^*)^{−1})^{−1}.

We end this section with three examples of computing the WC inverse of a matrix using different expressions from Theorems 4.2–4.4.

    Example 4.5. Let

    A = [2  2  1  0;  3  4  2  0;  0  0  0  1;  0  0  0  0]. (4.5)

    Then Ind(A) = 2 and the weak core inverse of A is

    A^{Ⓦ,†} = A²(A⁴)^†A²A^† = [2  −1  −1/2  0;  −3/2  1  1/2  0;  0  0  0  0;  0  0  0  0].

    Firstly, we use the expression (4.1) to compute the WC inverse of A. Then

    (A²P_{A²} + I − P_{A²})^{−1} = [11/2  −3  0  0;  −9/2  5/2  0  0;  0  0  1  0;  0  0  0  1]  and  (P_{A²}A²P_{A²})^† = [11/2  −3  0  0;  −9/2  5/2  0  0;  0  0  0  0;  0  0  0  0].

    After simplification, it follows that (A²)^{(−1)}_{(R(A²))}AP_A = A^{Ⓦ,†} and (P_{A²}A²P_{A²})^†AP_A = A^{Ⓦ,†}.

    Secondly, using the expression (4.4) with a = −6 and b = 1/5, we obtain

    (A − 6P)^{−1} = [2  −1  −1/2  −19/12;  −3/2  1  7/12  97/72;  0  0  −1/6  −1/36;  0  0  0  −1/6]  and  (A + (1/5)Q)^{−1} = [2  −1  −1/2  5/2;  −3/2  1  −2  10;  0  0  5  −25;  0  0  0  5].

    Therefore, it can be directly verified that (A − 6P)^{−1}(I − Q) = A^{Ⓦ,†} and (I − P)(A + (1/5)Q)^{−1} = A^{Ⓦ,†}.

    Finally, we use the limit expression of item (a) in Theorem 4.4. Let B = A²(λI_n + (A²)^*A⁴)^{−1}(A²)^*A²A^*(λI_n + AA^*)^{−1}. Then

    B = [2(30321λ²+5361λ+580)/λ₁  2(54629λ²+6699λ−290)/λ₁  (1305λ−58)/λ₂  0;  (110503λ²+19405λ−870)/λ₁  4(49773λ²+6090λ+145)/λ₁  (2378λ+58)/λ₂  0;  0  0  0  0;  0  0  0  0],

    where λ₁ = λ⁴+38734λ³+1470569λ²+197888λ+580 and λ₂ = λ³+38697λ²+38812λ+116.

    After simplification, it follows that

    lim_{λ→0} B = A^{Ⓦ,†}.

    The other cases in Theorem 4.4 can be verified similarly.

    In this section, we discuss some properties of the WC inverse and consider the connection between the WC inverse and other known classes of matrices.

Lemma 5.1. Let A∈C^{n×n}_k be given by (2.1). Then:

    (a) A∈CEP_n ⇔ S = 0 and N = 0;

    (b) A∈CP_n ⇔ T = I_t and N = 0;

    (c) A∈COP_n ⇔ T = I_t, S = 0 and N = 0.

    Proof. (a) The claim can be easily verified from (2.1) and (2.3).

    (b) By (2.1), we obtain that A∈CP_n is equivalent to

    U[T²  TS+SN;  0  N²]U^* = U[T  S;  0  N]U^*,

    which is further equivalent to T² = T, TS+SN = S and N² = N. Hence, by the nonsingularity of T and N^k = 0, we can conclude that A∈CP_n if and only if T = I_t and N = 0.

    (c) Since COP_n ⊆ CP_n, it is a direct consequence of item (b) and (2.1).

Theorem 5.2. Let A∈C^{n×n}_k be given by (2.1). The following statements hold:

    (a) A^{Ⓦ,†} = 0 ⇔ A is nilpotent;

    (b) A^{Ⓦ,†} = A ⇔ A∈CEP_n and A³ = A;

    (c) A^{Ⓦ,†} = A^* ⇔ A∈CEP_n and AA^* = P_{A^k};

    (d) A^{Ⓦ,†} = P_A ⇔ A∈CP_n;

    (e) A^{Ⓦ,†} = P_{A^*} ⇔ A∈COP_n.

    Proof. (a) By (2.1) and (2.9), we directly get that

    A^{Ⓦ,†} = 0 ⇔ r(A^k) = t = 0 ⇔ A is nilpotent.

    (b) It follows from (2.1), (2.9) and (a) of Lemma 5.1 that

    A^{Ⓦ,†} = A ⇔ U[T^{−1}  T^{−2}SNN^†;  0  0]U^* = U[T  S;  0  N]U^* ⇔ S = 0, N = 0 and T³ = T ⇔ A∈CEP_n and A³ = A.

    (c) By (2.1), (2.9) and (a) of Lemma 5.1, we have that

    A^{Ⓦ,†} = A^* ⇔ U[T^{−1}  T^{−2}SNN^†;  0  0]U^* = U[T^*  0;  S^*  N^*]U^* ⇔ S = 0, N = 0 and TT^* = I_t ⇔ A∈CEP_n and AA^* = P_{A^k}.

    (d) From (2.9), (4.2) and (b) of Lemma 5.1, we obtain that

    A^{Ⓦ,†} = P_A ⇔ U[T^{−1}  T^{−2}SNN^†;  0  0]U^* = U[I_t  0;  0  NN^†]U^* ⇔ T = I_t, N = 0 ⇔ A∈CP_n.

    (e) It follows from (2.1) and (2.3) that

    P_{A^*} = A^†A = U[T^*ΔT  T^*ΔS(I_{n−t}−N^†N);  (I_{n−t}−N^†N)S^*ΔT  N^†N+(I_{n−t}−N^†N)S^*ΔS(I_{n−t}−N^†N)]U^*. (5.1)

    By (2.9) and (5.1), we now get that A^{Ⓦ,†} = P_{A^*} is equivalent to

    U[T^{−1}  T^{−2}SNN^†;  0  0]U^* = U[T^*ΔT  T^*ΔS(I_{n−t}−N^†N);  (I_{n−t}−N^†N)S^*ΔT  N^†N+(I_{n−t}−N^†N)S^*ΔS(I_{n−t}−N^†N)]U^*,

    which is further equivalent to T^{−1} = T^*ΔT, (I_{n−t}−N^†N)S^*ΔT = 0 and N^†N+(I_{n−t}−N^†N)S^*ΔS(I_{n−t}−N^†N) = 0. Hence, by the nonsingularity of T and (c) of Lemma 5.1, we can conclude that A^{Ⓦ,†} = P_{A^*} if and only if A∈COP_n.

From Lemma 2.3, we know that both AA^{Ⓦ,†} and A^{Ⓦ,†}A are oblique projectors. The next theorem further discusses other characterizations of AA^{Ⓦ,†} and A^{Ⓦ,†}A.

Theorem 5.3. Let A∈C^{n×n}_k be given by (2.1). The following statements hold:

    (a) AA^{Ⓦ,†} = P_A ⇔ A∈CCM_n;       (b) AA^{Ⓦ,†} = P_{A^*} ⇔ A∈CEP_n;

    (c) A^{Ⓦ,†}A = P_A ⇔ A∈CEP_n;       (d) A^{Ⓦ,†}A = P_{A^*} ⇔ A∈CEP_n.

    Proof. It follows from (2.1) and (2.9) that

    AA^{Ⓦ,†} = U[I_t  T^{−1}SNN^†;  0  0]U^*, (5.2)
    A^{Ⓦ,†}A = U[I_t  T^{−1}S+T^{−2}SN;  0  0]U^*. (5.3)

    (a) By (4.2) and (5.2), the result can be directly verified.

    (b) By (5.1) and (5.2), we can show that AA^{Ⓦ,†} = P_{A^*} if and only if (I_{n−t}−N^†N)S^*ΔT = 0 and N^†N+(I_{n−t}−N^†N)S^*ΔS(I_{n−t}−N^†N) = 0, which is further equivalent to S = 0 and N = 0, i.e., A∈CEP_n.

    (c) It follows from (4.2) and (5.3) that A^{Ⓦ,†}A = P_A is equivalent to A∈CEP_n.

    (d) From (5.1) and (5.3), the proof is similar to that of (b).

Recall from [6] that the core inverse of a matrix is always EP. The next theorem shows that this is not the case with the WC inverse.

Theorem 5.4. Let A∈C^{n×n}_k be given by (2.1) and let t∈Z^+. The following statements are equivalent:

    (a) A^{Ⓦ,†}∈CEP_n;       (b) SN = 0;

    (c) A^{Ⓦ,†}A = A^{\textcircled{†}}A;       (d) A^{Ⓦ,†} = A^{\textcircled{†}};

    (e) A^{Ⓦ,†}A^t = A^tA^Ⓦ.

    Proof. (a)⇔(b). A^{Ⓦ,†}∈CEP_n is equivalent to R(A^{Ⓦ,†}) = R((A^{Ⓦ,†})^*). Using (2.9), we have that A^{Ⓦ,†}∈CEP_n if and only if SN = 0.

    (c)⇔(b). By (2.5) and (5.3), it can be directly verified that A^{Ⓦ,†}A = A^{\textcircled{†}}A if and only if SN = 0.

    (d)⇔(b). By (2.5) and (2.9), it follows that A^{Ⓦ,†} = A^{\textcircled{†}} if and only if SNN^† = 0, which is equivalent to SN = 0.

    (e)⇔(b). From (2.8) and (2.9), it follows that

    A^{Ⓦ,†}A^t = A^tA^Ⓦ ⇔ U[T^{t−1}  T^{t−2}S+T^{−2}T_tN;  0  0]U^* = U[T^{t−1}  T^{t−2}S;  0  0]U^* ⇔ T^{−2}T_tN = 0 ⇔ SN = 0,

    where T_t = Σ_{j=0}^{t−1} T^jSN^{t−1−j}.

In [26], the authors introduced the notion of a weak group matrix: A∈CWG_n if A^Ⓦ = A^{#}, which is equivalent to SN = 0. Therefore, we have the following remark:

    Remark 5.5. It is worth noting that conditions (a) and (c)–(e) in Theorem 5.4 are equivalent to A∈CWG_n.

The next theorems provide some equivalent conditions for A^{Ⓦ,†}∈CP_n and A^{Ⓦ,†}∈COP_n.

Theorem 5.6. Let A∈C^{n×n}_k be given by (2.1). The following statements are equivalent:

    (a) A^{Ⓦ,†}∈CP_n;            (b) T = I_t;

    (c) AA^{Ⓦ,†} = A^{Ⓦ,†};           (d) A^{Ⓦ,†}A^k = A^k;

    (e) A^k(A^{Ⓦ,†})^k = A^{Ⓦ,†};         (f) A(A^{Ⓦ,†})^k = (A^{Ⓦ,†})^k.

    Proof. (a)⇔(b). From (2.9), it is not hard to prove that A^{Ⓦ,†}∈CP_n is equivalent to T = I_t.

    (c)⇔(b). From (2.9) and (5.2), it follows that AA^{Ⓦ,†} = A^{Ⓦ,†} if and only if T = I_t.

    (d)⇔(b). By (2.2) and (2.9), it is easy to verify that A^{Ⓦ,†}A^k = A^k if and only if T = I_t.

    The proofs of (e)⇔(b) and (f)⇔(b) are similar to that of (d)⇔(b).

Theorem 5.7. Let A∈C^{n×n}_k be given by (2.1). The following statements are equivalent:

    (a) A^{Ⓦ,†}∈COP_n;            (b) T = I_t and SN = 0;

    (c) AA^{Ⓦ,†} = (A^{Ⓦ,†})^*;         (d) A^{Ⓦ,†}A = A^Ⓦ;

    (e) (A^{Ⓦ,†})^kA^k = A^Ⓦ;         (f) (A^{Ⓦ,†})^kA = (A^Ⓦ)^k.

    Proof. (a)⇔(b). From (2.9) and Theorem 5.6, we can show that A^{Ⓦ,†}∈COP_n is equivalent to T = I_t and SN = 0.

    (c)⇔(b). By (2.9) and (5.2), it follows from AA^{Ⓦ,†} = (A^{Ⓦ,†})^* that

    [I_t  T^{−1}SNN^†;  0  0] = [(T^{−1})^*  0;  (T^{−2}SNN^†)^*  0].

    Hence, we get that AA^{Ⓦ,†} = (A^{Ⓦ,†})^* is equivalent to T = I_t and SN = 0.

    The proofs of (d)⇔(b), (e)⇔(b) and (f)⇔(b) are similar to that of (c)⇔(b).

Corollary 5.8. Let A∈C^{n×n}_k. Then A^{Ⓦ,†}∈COP_n if and only if A^{Ⓦ,†}∈CP_n ∩ CEP_n.

    Proof. It is a direct consequence of Theorems 5.4, 5.6 and 5.7.

    Working with Theorem 5.6 and Theorem 5.7, we have the following corollary.

Corollary 5.9. Let A∈C^{n×n}_k and let l∈N with l ≥ k. The following statements hold:

    (a) A∈CP_n ⇔ A^{Ⓦ,†}∈CP_n and A^l = A;

    (b) A∈COP_n ⇔ A^{Ⓦ,†}∈CP_n and A^l = A^*.

    Proof. (a) The result can be easily derived from Lemma 5.1 and Theorem 5.6.

    (b) From Lemma 5.1 and Theorem 5.7, we can show that (b) holds.

Ferreyra et al. [1] introduced the weak core matrix. The set of all n×n weak core matrices is denoted by CWC_n, that is,

    CWC_n = {A | A∈C^{n×n}, A^{Ⓦ,†} = A^{D,†}}.

    In this section, we discuss some equivalent conditions for a matrix A to satisfy A∈CWC_n, using the core-EP decomposition. For convenience, we first introduce a necessary lemma.

    Lemma 6.1. [1] Let A∈C^{n×n}_k be given by (2.1). Then the following statements are equivalent:

    (a) A∈CWC_n;

    (b) SN² = 0;

    (c) A^ⓌA = A^DA.

Theorem 6.2. Let A∈C^{n×n}_k be given by (2.1) and let t∈Z^+. The following statements are equivalent:

    (a) A∈CWC_n;

    (b) A^ⓌA = A^DA;

    (c) A^tA^ⓌA = A^tA^DA;

    (d) A^tA^{Ⓦ,†} = A^tA^{D,†};

    (e) A^kA^{Ⓦ,†} = A^kA^†;

    (f) A^kA^ⓌA = A^k.

    Proof. (a)⇒(b). It is a direct consequence of condition (c) of Lemma 6.1.

    (b)⇒(c). Evident.

    (c)⇒(a). By the hypothesis, it follows from (2.4) and (2.8) that

    [T^t  T^{t−1}S+T^{t−2}SN;  0  0] = [T^t  T^{t−1}S+T^{t−k−1}T̃N;  0  0],

    which implies T^{t−k−1}(T^{k−2}SN²+⋯+TSN^{k−1}) = 0. We now obtain that SN² = 0 since T is invertible. By Lemma 6.1, we obtain that A∈CWC_n.

    (a)⇔(d). It follows from (2.6), (2.9) and Lemma 6.1 that

    A^tA^{Ⓦ,†} = A^tA^{D,†} ⇔ U[T^{t−1}  T^{t−2}SNN^†;  0  0]U^* = U[T^{t−1}  T^{t−k−1}T̃NN^†;  0  0]U^* ⇔ T^{t−2}SNN^† = T^{t−k−1}T̃NN^† ⇔ SN² = 0 ⇔ A∈CWC_n.

    (a)⇒(e). From the definition of the weak core matrix and A^kA^DA = A^k, we have that A^kA^{Ⓦ,†} = A^kA^{D,†} = A^kA^DAA^† = A^kA^†.

    (e)⇒(f). Evident.

    (f)⇒(a). If A^kA^ⓌA = A^k, by (2.2) and (2.8), we can conclude that SN² = 0. Hence item (a) holds.

Corollary 6.3. Let A∈C^{n×n}_k be given by (2.1). Then A∈CWC_n if and only if A^k = C^k, where C = AA^ⓌA.

    Proof. Since A^kA^ⓌA = C^k by item (e) of Lemma 2.4, the result is a direct consequence of item (f) of Theorem 6.2.

Theorem 6.4. Let A∈C^{n×n}_k be given by (2.1) and let t∈Z^+. The following statements are equivalent:

    (a) A∈CWC_n;

    (b) A(A^Ⓦ)^tA = (A^Ⓦ)^tA²;

    (c) (A^Ⓦ)^tA = (A^Ⓦ)^{t+1}A²;

    (d) A(A^Ⓦ)^tA commutes with (A^Ⓦ)^tA²;

    (e) (A^Ⓦ)^tA commutes with (A^Ⓦ)^{t+1}A².

    Proof. By (2.1) and (2.8), we get that

    A(A^Ⓦ)^tA = U[T^{2−t}  T^{1−t}S+T^{−t}SN;  0  0]U^*, (6.1)
    (A^Ⓦ)^tA² = U[T^{2−t}  T^{1−t}S+T^{−t}SN+T^{−t−1}SN²;  0  0]U^*. (6.2)

    (a)⇔(b). By (6.1), (6.2) and Lemma 6.1, we get that A(A^Ⓦ)^tA = (A^Ⓦ)^tA² if and only if A∈CWC_n.

    (a)⇔(c). Similar to the part (a)⇔(b).

    (a)⇔(d). It follows from (6.1), (6.2) and Lemma 6.1 that

    A(A^Ⓦ)^tA·(A^Ⓦ)^tA² − (A^Ⓦ)^tA²·A(A^Ⓦ)^tA = U[0  T^{1−2t}SN²;  0  0]U^*,

    which implies that A(A^Ⓦ)^tA commutes with (A^Ⓦ)^tA² if and only if A∈CWC_n.

    (a)⇔(e). It is analogous to that of the part (a)⇔(d).

    Corollary 6.5. Let ACn×nk and tZ+. The following statements are equivalent:

    (a) ACWCn;

    Proof. Since , Corollary 6.5 can be directly verified.

In this paper, new characterizations and properties of the WC inverse have been derived by using its range space, null space and matrix equations. Several representations of the WC inverse have also been given. Finally, we have presented various characterizations of the weak core matrix.

    According to the current research background, more characterizations and applications of the WC inverse are worthy of further study, for instance:

    1) Characterizing the WC inverse by maximal classes of matrices, full rank decomposition, integral expressions and so on;

    2) New iterative algorithms and splitting methods for computing the WC inverse;

    3) Using the WC inverse to solve appropriately constrained systems of linear equations;

    4) Investigating the WC inverse of tensors.

This work was supported by the Natural Science Foundation of China under Grant 11961076. The authors are thankful to the two anonymous referees for their careful reading, detailed corrections and pertinent suggestions on the first version of the paper, which distinctly enhanced the presentation of the results.

    All authors read and approved the final manuscript. The authors declare no conflict of interest.



[1] D. E. Ferreyra, F. E. Levis, A. N. Priori, N. Thome, The weak core inverse, Aequat. Math., 95 (2021), 351–373. doi: 10.1007/s00010-020-00752-z.
    [2] D. Mosić, J. Marovt, Weighted weak core inverse of operators, Linear Multilinear A., 2021, 1–23. doi: 10.1080/03081087.2021.1902462.
    [3] R. Penrose, A generalized inverse for matrices, Math. Proc. Cambridge Philos. Soc., 51 (1955), 406–413.
    [4] A. Ben-Israel, T. N. E. Greville, Generalized inverses: Theory and applications, New York: Springer-Verlag, 2003.
    [5] M. P. Drazin, Pseudo-inverses in associative rings and semigroups, Am. Math. Mon., 65 (1958), 506–514. doi: 10.1080/00029890.1958.11991949.
    [6] O. M. Baksalary, G. Trenkler, Core inverse of matrices, Linear Multilinear A., 58 (2010), 681–697. doi: 10.1080/03081080902778222.
    [7] K. M. Prasad, K. S. Mohana, Core-EP inverse, Linear Multilinear A., 62 (2014), 792–802. doi: 10.1080/03081087.2013.791690.
    [8] S. B. Malik, N. Thome, On a new generalized inverse for matrices of an arbitrary index, Appl. Math. Comput., 226 (2014), 575–580. doi: 10.1016/j.amc.2013.10.060.
    [9] H. X. Wang, J. L. Chen, Weak group inverse, Open Math., 16 (2018), 1218–1232. doi: 10.1515/math-2018-0100.
    [10] H. X. Wang, Core-EP decomposition and its applications, Linear Algebra Appl., 508 (2016), 289–300. doi: 10.1016/j.laa.2016.08.008.
    [11] C. Y. Deng, H. K. Du, Representation of the Moore-Penrose inverse of 2×2 block operator valued matrices, J. Korean Math. Soc., 46 (2009), 1139–1150. doi: 10.4134/JKMS.2009.46.6.1139.
    [12] D. E. Ferreyra, F. E. Levis, N. Thome, Characterizations of k-commutative equalities for some outer generalized inverses, Linear Multilinear A., 68 (2020), 177–192. doi: 10.1080/03081087.2018.1500994.
    [13] H. F. Ma, A characterization and perturbation bounds for the weighted core-EP inverse, Quaest. Math., 43 (2020), 869–879. doi: 10.2989/16073606.2019.1584773.
    [14] H. F. Ma, Characterizations and representations for the CMP inverse and its application, Linear Multilinear A., 2021, 1–16. doi: 10.1080/03081087.2021.1907275.
    [15] H. F. Ma, T. T. Li, Characterizations and representations of the core inverse and its applications, Linear Multilinear A., 69 (2021), 93–103. doi: 10.1080/03081087.2019.1588847.
    [16] H. F. Ma, X. S. Gao, P. S. Stanimirović, Characterizations, iterative method, sign pattern and perturbation analysis for the DMP inverse with its applications, Appl. Math. Comput., 378 (2020), 125196. doi: 10.1016/j.amc.2020.125196.
    [17] D. Mosić, P. S. Stanimirović, Representations for the weak group inverse, Appl. Math. Comput., 397 (2021), 125957. doi: 10.1016/j.amc.2021.125957.
    [18] D. Mosić, P. S. Stanimirović, Expressions and properties of weak core inverse, Appl. Math. Comput., 415 (2022), 126704. doi: 10.1016/j.amc.2021.126704.
    [19] M. P. Drazin, A class of outer generalized inverses, Linear Algebra Appl., 436 (2012), 1909–1923. doi: 10.1016/j.laa.2011.09.004.
    [20] J. Benítez, E. Boasso, H. W. Jin, On one-sided (B, C)-inverses of arbitrary matrices, Electron. J. Linear Algebra, 32 (2017), 391–422. doi: 10.13001/1081-3810.3487.
    [21] R. Bott, R. J. Duffin, On the algebra of networks, Trans. Amer. Math. Soc., 74 (1953), 99–109.
    [22] H. Yan, H. X. Wang, K. Z. Zuo, Y. Chen, Further characterizations of the weak group inverse of matrices and the weak group matrix, AIMS Mathematics, 6 (2021), 9322–9341. doi: 10.3934/math.2021542.
    [23] C. D. Meyer, Limits and the index of a square matrix, SIAM J. Appl. Math., 26 (1974), 469–478. doi: 10.1137/0126044.
    [24] D. Mosić, I. I. Kyrchei, P. S. Stanimirović, Representations and properties for the MPCEP inverse, J. Appl. Math. Comput., 67 (2021), 101–130. doi: 10.1007/s12190-020-01481-x.
    [25] Y. X. Yuan, K. Z. Zuo, Compute lim_{λ→0} X(λI_p+YAX)^{−1}Y by the product singular value decomposition, Linear Multilinear A., 64 (2016), 269–278. doi: 10.1080/03081087.2015.1034641.
    [26] H. X. Wang, X. J. Liu, The weak group matrix, Aequat. Math., 93 (2019), 1261–1273. doi: 10.1007/s00010-019-00639-8.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)