Research article

On the nonlinear matrix equation $X^s+A^HF(X)A=Q$

  • Received: 15 February 2023 Revised: 21 April 2023 Accepted: 11 May 2023 Published: 30 May 2023
  • MSC : 15A24, 65H10

  • Nonlinear matrix equations often arise in control theory, statistics, dynamic programming, ladder networks, and so on, so they have a wide background of applications. In this paper, the nonlinear matrix equation $X^s+A^HF(X)A=Q$ is discussed, where the operator $F$ is defined on the set of all $n\times n$ positive semi-definite matrices and $Q$ is a positive definite matrix. Sufficient conditions for the existence and uniqueness of a positive semi-definite solution are derived based on some fixed point theorems. It is shown that under suitable conditions an iteration method converges to a positive semi-definite solution. Moreover, we consider the perturbation analysis for the solution of this class of nonlinear matrix equations, and obtain a perturbation bound of the solution. Finally, we give several examples to show how this works in particular cases, and some numerical results to illustrate the soundness of the results we have obtained.

Citation: Yajun Xie, Changfeng Ma, Qingqing Zheng. On the nonlinear matrix equation $X^s+A^HF(X)A=Q$[J]. AIMS Mathematics, 2023, 8(8): 18392-18407. doi: 10.3934/math.2023935




Let $P(n)$ denote the set of $n\times n$ positive semi-definite matrices, and let $M(n)$ denote the set of all $n\times n$ matrices. In this paper we consider the following class of nonlinear matrix equations:

$X^s+A^HF(X)A=Q, \qquad (1.1)$

where the operator $F(\cdot):P(n)\to M(n)$ is continuous (in several results $F$ is further assumed to map into $P(n)$, i.e., to transform positive semi-definite matrices into positive semi-definite ones, or into $-P(n)$), $s\ge 1$, $A,Q\in M(n)$, and $Q$ is a positive definite matrix. Note that $\bar{X}$ is a solution of Eq (1.1) if and only if it is a fixed point of the map

$G(X)=(Q-A^HF(X)A)^{1/s}, \qquad (1.2)$

so the map $G(\cdot)$ plays an important role throughout this paper.
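
For concreteness, the following minimal Python/NumPy sketch (not part of the original paper) evaluates the map $G$ of (1.2); the helper `psd_power` and the callable `F` are assumptions introduced here only for illustration.

```python
import numpy as np

def psd_power(M, p):
    """p-th power of a Hermitian positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None) ** p) @ V.conj().T

def G(X, A, Q, F, s):
    """Fixed-point map (1.2): G(X) = (Q - A^H F(X) A)^(1/s).
    Assumes Q - A^H F(X) A is positive semi-definite, so its s-th root exists."""
    return psd_power(Q - A.conj().T @ F(X) @ A, 1.0 / s)
```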

Several authors have considered such nonlinear matrix equation problems. In [1], Qingchun Li and Panpan Liu considered Eq (1.1) in the case that $s\ge1$ and the map $F(\cdot)$ also maps into $P(n)$. The authors of [2] discussed an iteration method for Eq (1.1) in the case $s=1$ under the condition $A^HF(Q)A<Q$, i.e., $G(Q)>O$. Moreover, when $s=1$, the perturbation theory for Eq (1.1) can be found in [3]. Other authors discussed this equation for particular choices of the map $F(\cdot)$. For example, Beatrice Meini [4] established and proved necessary and sufficient conditions for the existence of a positive definite solution of the equation with $F(X)=\pm X^{-1}$, where $Q$ is a positive definite matrix. In [5,6], the case $F(X)=\pm X^{-2}$ is discussed, and the case $F(X)=\pm X^{-m}$, $m\in\{3,4,\ldots\}$, is treated in [7,8]. Xuefeng Duan and Anping Liao [9] considered the situation where $A^HF(X)A=-\sum_{i=1}^{m}A_i^HX^{\delta_i}A_i$ $(0<|\delta_i|<1)$; when the $A_i$ $(i=1,2,\ldots,m)$ considered in [9] are all equal to each other, they proved that the nonlinear matrix equation always has a unique positive definite solution, and a multi-step stationary iterative method was proposed to compute this unique positive definite solution. In addition, Xuefeng Duan and Anping Liao settled a conjecture proposed in [10], obtaining the conclusion that the nonlinear matrix equation $X^s-A^HX^{-t}A=I$ does not always have a unique positive definite solution unless certain conditions are satisfied.

Our main contributions in this paper include the following aspects. Firstly, we give the interval in which a solution of Eq (1.1) must lie in the case that $F$ maps into $P(n)$ or into $-P(n)$, respectively. Compared with previous results, the results we have obtained give better estimates for the solution $\bar{X}$ in certain cases. Secondly, we give a perturbation analysis of the solution of Eq (1.1) when $F:P(n)\to M(n)$, where the range of $F$ is larger than in previous papers. Thirdly, we extend and complement the results in [1].

The paper is organized as follows. In Section 2, we derive some necessary conditions and some sufficient conditions for the existence of a solution of the nonlinear Eq (1.1) based on Schauder's fixed point theorem, and we also discuss the region in which the solution lies whenever the equation is solvable. Section 3 discusses the uniqueness of a solution. In Section 4, we analyze the perturbation theory of the solution of Eq (1.1), and perturbation bounds for the positive semi-definite solutions of these equations are given. Finally, Section 5 illustrates the correctness of the results we have obtained with some numerical experiments and contains some examples concerning the matrix Eq (1.1) as well as the results of the preceding sections.

The following notation will be used throughout the rest of this paper. For $A\in M(n)$, $\lambda_1(A)$ and $\lambda_n(A)$ stand for the maximal and minimal eigenvalues of the matrix $A$, respectively. $A^H$ is the conjugate transpose of the matrix $A$, and $A^{-H}$ is the inverse of $A^H$. $I$ is the identity matrix, and $O$ is the zero matrix. $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the $l_2$ norm and the Frobenius norm, respectively. By $A\ge O$ ($A>O$) we denote that the matrix $A$ is positive semi-definite (positive definite). As alternative notation for $A-B\ge O$ ($A-B>O$) we write $A\ge B$ ($A>B$); this induces a partial ordering on the Hermitian matrices. When we say that a Hermitian matrix is the smallest (largest) in some set, this is always meant with respect to the partial ordering induced in this way. Further, the sets $[A,B]$ and $(A,B)$ are defined by

$[A,B]=\{X \mid A\le X\le B\},\qquad (A,B)=\{X \mid A<X<B\},$

whereas $L_{A,B}$ denotes the line segment joining $A$ and $B$, i.e., $L_{A,B}=\{tA+(1-t)B \mid t\in[0,1]\}$. $G(G(X))$ is denoted by $G^2(X)$, and the $j$th iterate of $G$ on $X$ is denoted by $G^j(X)$. For $B=(b_1,b_2,\ldots,b_n)=(b_{ij})$ and a matrix $C$, $B\otimes C=(b_{ij}C)$ is the Kronecker product, and $\mathrm{vec}(A)$ is the vector defined by $\mathrm{vec}(A)=(a_1^T,a_2^T,\ldots,a_n^T)^T$, where $a_i$ is the $i$th column of $A$. In order to develop the paper, we need the identities

$\mathrm{vec}(AXB)=(B^T\otimes A)\,\mathrm{vec}(X)\quad\text{and}\quad\|\mathrm{vec}(X)\|_2=\|X\|_F,$

where $A$, $X$ and $B$ are $n\times n$ complex matrices.

In this section, we discuss sufficient conditions for the existence of a positive definite or positive semi-definite solution of Eq (1.1) based on Schauder's fixed point theorem [11], and some properties of solutions are given.

Lemma 2.1. (See Parodi [12].) If $A>B>O$ (or $A\ge B>O$), then $A^\alpha>B^\alpha$ (or $A^\alpha\ge B^\alpha>O$) for all $\alpha\in(0,1]$, and $A^\alpha<B^\alpha$ (or $O<A^\alpha\le B^\alpha$) for all $\alpha\in[-1,0)$.

Theorem 2.1. Let $F:P(n)\to M(n)$ be continuous. If Eq (1.1) is solvable and $\bar{X}$ is a positive semi-definite solution of Eq (1.1), then the following results hold true.

(i) If $F:P(n)\to P(n)$, then $O\le A^HF(\bar{X})A\le Q$ and $O\le\bar{X}\le Q^{1/s}$;

(ii) If $F:P(n)\to -P(n)$, then $\bar{X}\ge Q^{1/s}$.

Proof. (i) Because $\bar{X}\in P(n)$, we obtain $F(\bar{X})\ge O$, which implies $A^HF(\bar{X})A\ge O$. Noting that $\bar{X}\ge O$ and $\bar{X}^s+A^HF(\bar{X})A=Q$, we have $A^HF(\bar{X})A\le Q$ and $\bar{X}^s\le Q$. From Lemma 2.1, we have $\bar{X}\le Q^{1/s}$. This proves statement (i).

(ii) Because $F$ maps into $-P(n)$, we know that $F(\bar{X})\le O$, which implies $A^HF(\bar{X})A\le O$. So we have

$\bar{X}^s=Q-A^HF(\bar{X})A\ge Q.$

According to Lemma 2.1, we obtain $\bar{X}\ge Q^{1/s}$, and this proves the second part of the theorem.

Remark 2.1. A positive semi-definite solution of the nonlinear matrix Eq (1.1) does not always exist. If a solution does exist, then according to Theorem 2.1 we can obtain the interval in which the solution lies, in the case that $F$ maps into $P(n)$ or into $-P(n)$, respectively.

    Sufficient conditions for the existence of a positive semi-definite solution are derived in the following discussion.

Theorem 2.2. Let $F:P(n)\to P(n)$ be continuous on $[O,Q^{1/s}]$. If the inequality $A^HF(X)A\le Q$ is satisfied for all $X\in[O,Q^{1/s}]$, then Eq (1.1) has a positive semi-definite solution in $[O,Q^{1/s}]$.

Proof. From $F:P(n)\to P(n)$, we obtain $F(X)\ge O$, which implies that for all $X\in[O,Q^{1/s}]$ we must have

$O\le G(X)=(Q-A^HF(X)A)^{1/s}\le Q^{1/s}.$

So we know that $G$ maps $[O,Q^{1/s}]$ into $[O,Q^{1/s}]$. Moreover, because of the continuity of the map $F$, we obtain that $G$ is continuous. Obviously, the interval $[O,Q^{1/s}]$ is a compact convex set, so according to Schauder's fixed point theorem, Eq (1.1) has a positive semi-definite solution in $[O,Q^{1/s}]$.

Theorem 2.3. Let $F:P(n)\to -P(n)$ be continuous on $[Q^{1/s},+\infty)$. If there exists $B\ge Q$ such that

$Q-B\le A^HF(X)A\le O, \qquad (2.1)$

for all $X\in[Q^{1/s},B^{1/s}]$, then Eq (1.1) has a positive definite solution in $[Q^{1/s},B^{1/s}]$. Moreover, if inequality (2.1) is satisfied for every $X\ge Q^{1/s}$, then all solutions of Eq (1.1) are in $[Q^{1/s},B^{1/s}]$.

Proof. From $F:P(n)\to -P(n)$, we obtain $F(X)\le O$ for all $X\in[Q^{1/s},B^{1/s}]$. It follows that $G(X)=(Q-A^HF(X)A)^{1/s}\ge Q^{1/s}$. Noting that $Q-B\le A^HF(X)A\le O$, we obtain $Q-A^HF(X)A\le B$, i.e., $G(X)\le B^{1/s}$. So $G(X)\in[Q^{1/s},B^{1/s}]$, which implies that $G$ maps $[Q^{1/s},B^{1/s}]$ into $[Q^{1/s},B^{1/s}]$. Also, $[Q^{1/s},B^{1/s}]$ is a compact convex set, so according to Schauder's fixed point theorem, Eq (1.1) has a fixed point in $[Q^{1/s},B^{1/s}]$. This fixed point is a positive definite solution of Eq (1.1).

Moreover, if inequality (2.1) is satisfied for every $X\ge Q^{1/s}$ and $\hat{X}$ is an arbitrary solution of Eq (1.1), then

$Q\le Q-A^HF(\hat{X})A\le B.$

From Lemma 2.1, we have

$Q^{1/s}\le\hat{X}=(Q-A^HF(\hat{X})A)^{1/s}\le B^{1/s}.$

This completes the proof.

An operator $F$ is monotone if and only if $F(X)\ge F(Y)$ for all $X\ge Y$, and the operator $F$ is anti-monotone if and only if $F(X)\le F(Y)$ for all $X\ge Y$. More specific characterizations of such functions can be found in [13]. In the rest of this section we discuss maps $F$ which are either monotone or anti-monotone.

Theorem 2.4. Let $F:P(n)\to P(n)$ be continuous, monotone and invertible on $[O,Q^{1/s}]$, and let the matrix $A$ be invertible. If Eq (1.1) has a positive semi-definite solution $\bar{X}$, then $\bar{X}\in[\max\{O,G(Q^{1/s})\},\min\{Q^{1/s},F^{-1}(A^{-H}QA^{-1})\}]$.

Proof. From Theorem 2.1, we obtain $\bar{X}\in[O,Q^{1/s}]$ and $A^HF(\bar{X})A\in[O,Q]$, i.e., $A^HF(\bar{X})A\le Q$. So we have $F(\bar{X})\le A^{-H}QA^{-1}$. Then we obtain $\bar{X}\le F^{-1}(A^{-H}QA^{-1})$, which implies $\bar{X}\le\min\{Q^{1/s},F^{-1}(A^{-H}QA^{-1})\}$. Moreover, combining $\bar{X}\le Q^{1/s}$ with the monotonicity of $F$, we obtain $F(\bar{X})\le F(Q^{1/s})$, so $A^HF(\bar{X})A\le A^HF(Q^{1/s})A$, which implies that $\bar{X}=(Q-A^HF(\bar{X})A)^{1/s}\ge(Q-A^HF(Q^{1/s})A)^{1/s}$; then we have $\bar{X}\ge\max\{O,(Q-A^HF(Q^{1/s})A)^{1/s}\}$.

This completes the proof.

Remark 2.2. From Theorem 2.4, we see that if $Q>A^HF(Q^{1/s})A$, i.e., $G(Q^{1/s})>O$, then the solution $\bar{X}$ of Eq (1.1) must lie in $[G(Q^{1/s}),Q^{1/s}]$. On the contrary, if $G(Q^{1/s})<O$, then the solution $\bar{X}$ must lie in $[O,F^{-1}(A^{-H}QA^{-1})]$. Compared with Theorem 2.2, we can narrow the region containing the solution when $G(Q^{1/s})\ne O$ is satisfied, but in the case $G(Q^{1/s})=O$ the region cannot be narrowed.

Next we give a sufficient condition for the existence of a positive definite solution of Eq (1.1) when $F:P(n)\to P(n)$ is continuous, monotone and invertible on $[O,Q^{1/s}]$.

Theorem 2.5. Let $F:P(n)\to P(n)$ be continuous, monotone and invertible on $[O,Q^{1/s}]$. If $G(Q^{1/s})>O$, then Eq (1.1) has a positive definite solution in $[G(Q^{1/s}),Q^{1/s}]$. If $G(Q^{1/s})<O$, $F(Q)<Q$, $A$ is invertible, and $F^{-1}(A^{-H}QA^{-1})\ge G(X)$ is satisfied for every $X\in[O,F^{-1}(A^{-H}QA^{-1})]$, then Eq (1.1) has a positive semi-definite solution in $[O,F^{-1}(A^{-H}QA^{-1})]$.

Proof. Obviously, the map $G$ is continuous and $G(X)\le Q^{1/s}$. Because $F$ is monotone on $[G(Q^{1/s}),Q^{1/s}]$, we obtain $F(X)\le F(Q^{1/s})$ when $X\in[G(Q^{1/s}),Q^{1/s}]$. This implies $Q-A^HF(X)A\ge Q-A^HF(Q^{1/s})A$, i.e., $G(X)\ge(Q-A^HF(Q^{1/s})A)^{1/s}=G(Q^{1/s})$, so $G$ maps $[G(Q^{1/s}),Q^{1/s}]$ into $[G(Q^{1/s}),Q^{1/s}]$. According to Schauder's fixed point theorem, $G$ has a fixed point, which is a positive definite solution of Eq (1.1) in $[G(Q^{1/s}),Q^{1/s}]$. This proves the first part of the theorem. Furthermore, if $F^{-1}(A^{-H}QA^{-1})\ge G(X)$ is satisfied for every $X\in[O,F^{-1}(A^{-H}QA^{-1})]$, then we have $A^HF(X)A\le F(Q)$ for $O\le X\le F^{-1}(A^{-H}QA^{-1})$ because of the monotonicity of $F$. This implies $Q-A^HF(X)A\ge Q-F(Q)$, so we can obtain $G(X)\ge(Q-F(Q))^{1/s}\ge O$, which indicates that $G$ maps $[O,F^{-1}(A^{-H}QA^{-1})]$ into $[O,F^{-1}(A^{-H}QA^{-1})]$. Then $G$ has a fixed point, which is a positive semi-definite solution of Eq (1.1) in $[O,F^{-1}(A^{-H}QA^{-1})]$. The proof is completed.

Remark 2.3. The assumption in Theorem 2.5 that $F$ be continuous on $[O,Q^{1/s}]$ is very important for guaranteeing the existence of a solution of Eq (1.1). That is, if the map $F$ is not continuous, then Eq (1.1) may have no solution in the interval. To show this, we give a simple example in the following.

Example 2.1. Consider the scalar case: take $A=1$, $Q=b>1$, and let $F$ be the piecewise constant function

$F(X)=\begin{cases}c, & X>b/2,\\ d, & X\le b/2,\end{cases}$

where $c\ge b/2$ and $d<b/2$. Clearly there is no solution to Eq (1.1). This accounts for the fact that the assumption that $F$ be continuous plays an important role in guaranteeing the existence of a solution of Eq (1.1).
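
A quick scalar check of Example 2.1 (a sketch with the assumed values $s=1$, $b=4$, $c=2$, $d=1$): on each branch $F$ is constant, so any solution would have to be $X=b-F$, and neither candidate satisfies its branch condition.

```python
b, c, d = 4.0, 2.0, 1.0   # assumed instance of Example 2.1 with s = 1
x_low  = b - d            # candidate on the branch X <= b/2: equals 3, but 3 > b/2
x_high = b - c            # candidate on the branch X >  b/2: equals 2, but 2 is not > b/2
print(x_low <= b / 2, x_high > b / 2)   # False False -> Eq (1.1) has no solution here
```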

For the case that $F:P(n)\to P(n)$ is continuous and monotone, if the nonlinear matrix Eq (1.1) has a positive definite solution, then the region containing the solution was also given in Theorem 2 of [1], where the conclusion is proved through mathematical induction. There the map $F$ differs somewhat from the one considered in the theorems above, and one needs to work out the positive solutions of two equations, respectively. Also, in [1] some conclusions were given for the case that the map $F$ is anti-monotone; here we give the region containing the solution when Eq (1.1) is solvable.

Theorem 2.6. Let $F:P(n)\to P(n)$ be continuous, anti-monotone and invertible. If Eq (1.1) has a positive definite solution $\bar{X}$, then $\bar{X}\in(F^{-1}(A^{-H}QA^{-1}),G(Q^{1/s})]$. Moreover, if $F^{-1}(A^{-H}QA^{-1})\le G(X)$ for all $X\in[F^{-1}(A^{-H}QA^{-1}),G(Q^{1/s})]$, then Eq (1.1) has a positive definite solution.

Proof. See Theorem 1 of [1].

In the previous section, some conditions were derived for the existence of a solution of Eq (1.1), together with some properties of the solution, but nothing was said about uniqueness. Here we apply Banach's fixed point theorem [14] to deduce our conclusions.

Lemma 3.1. For any positive integer $s$ and $X,Y\in M(n)$, we always have

$X^s-Y^s=\sum_{i=0}^{s-1}X^i(X-Y)Y^{s-1-i}.$

Proof. The conclusion can be proved by mathematical induction; we omit the details here.

Lemma 3.2. (cf. Theorem I.1.8 in [15]) Let $F:U\to M(n)$ ($U\subseteq M(n)$ open) be differentiable at every point of $U$. Then

$\|F(X)-F(Y)\|\le\sup_{Z\in L_{X,Y}}\|DF(Z)\|\,\|X-Y\|$

for all $X,Y\in U$.

Lemma 3.3. (Theorem X.38 in [13]) If the operators $X,Y$ satisfy $X\ge aI$ and $Y\ge aI$ for some positive number $a$, and $0<r<1$, then $\|X^r-Y^r\|\le ra^{r-1}\|X-Y\|$.

In the following discussion $\phi(n)$ will denote a closed and bounded interval in $P(n)$, i.e., $\phi(n)$ will be of the form $[B,C]=\{X \mid B\le X\le C\}$ with $B,C\in P(n)$. Let $S_{\phi(n)}$ be the smallest positive value such that $\sup_{Z\in\phi(n)}\|D(F(Z))\|=\max_{Z\in\phi(n)}\|D(F(Z))\|\le S_{\phi(n)}$ holds.

Theorem 3.1. Let $s$ be a positive integer, $G(Q^{1/s})>O$, let $F:P(n)\to P(n)$ be continuous, monotone and invertible, and let $\|A^H\|_F\|A\|_FS_{\phi(n)}<s\tilde{\lambda}^{s-1}$. If Eq (1.1) has a solution $\bar{X}$ in $\phi(n)$, then $\bar{X}$ is the unique solution in $\phi(n)$, and it is positive definite. Here $\phi(n)=[G(Q^{1/s}),Q^{1/s}]$ and $\tilde{\lambda}=\lambda_n(G(Q^{1/s}))$.

Proof. Assume that $\tilde{X}$ is a solution of Eq (1.1) with $\tilde{X}\ne\bar{X}$. From Remark 2.2, we obtain $\tilde{X},\bar{X}\in[G(Q^{1/s}),Q^{1/s}]$, so we have $\bar{X}\ge\tilde{\lambda}I$, $\tilde{X}\ge\tilde{\lambda}I$. According to Lemma 3.2, we can obtain

$\|\bar{X}^s-\tilde{X}^s\|_F=\|A^HF(\tilde{X})A-A^HF(\bar{X})A\|_F=\|A^H(F(\tilde{X})-F(\bar{X}))A\|_F\le\|A^H\|_F\|A\|_F\|F(\bar{X})-F(\tilde{X})\|_F\le\|A^H\|_F\|A\|_FS_{\phi(n)}\|\bar{X}-\tilde{X}\|_F.$

Also, from Lemma 3.1, we have

$\|\bar{X}^s-\tilde{X}^s\|_F=\Big\|\sum_{i=0}^{s-1}\bar{X}^i(\bar{X}-\tilde{X})\tilde{X}^{s-1-i}\Big\|_F=\Big\|\Big(\sum_{i=0}^{s-1}(\tilde{X}^{s-1-i})^T\otimes\bar{X}^i\Big)\mathrm{vec}(\bar{X}-\tilde{X})\Big\|_2\ge s\tilde{\lambda}^{s-1}\|\bar{X}-\tilde{X}\|_F.$

Combining the above two inequalities, we have $s\tilde{\lambda}^{s-1}\|\bar{X}-\tilde{X}\|_F\le\|A^H\|_F\|A\|_FS_{\phi(n)}\|\bar{X}-\tilde{X}\|_F$. Because $G(Q^{1/s})>O$, $\tilde{\lambda}>0$ is guaranteed, so we have $\|\bar{X}-\tilde{X}\|_F\le\frac{\|A^H\|_F\|A\|_FS_{\phi(n)}}{s\tilde{\lambda}^{s-1}}\|\bar{X}-\tilde{X}\|_F<\|\bar{X}-\tilde{X}\|_F$.

This is a contradiction, so $\tilde{X}=\bar{X}$.

The proof is completed.

When $s$ is not a positive integer, we can still give a sufficient condition for the existence of a unique solution of Eq (1.1).

Theorem 3.2. Let $H(Y)=Q-A^HF(Y^{1/s})A$, let $F:P(n)\to P(n)$ be continuous and monotone, and let $a=\frac{1}{s}S_{[H(Q),Q]}\|A\|^2\lambda_n^{\frac{1}{s}-1}(H(Q))<1$. Then the following results hold.

(i) Equation (1.1) has one and only one positive semi-definite solution $\bar{X}$ in $[(H(Q))^{1/s},Q^{1/s}]=[G(Q^{1/s}),Q^{1/s}]$.

(ii) Consider the following iterative method:

$Y_0\in[H(Q),Q],\qquad Y_{k+1}=Q-A^HF(Y_k^{1/s})A=H(Y_k),\qquad k=0,1,2,\ldots. \qquad (3.1)$

Then the sequence $\{Y_k\}_{k=0}^{\infty}$ in (3.1) converges to the unique solution $\bar{Y}$ of the equation $Y+A^HF(Y^{1/s})A=Q$, and $\bar{X}=\bar{Y}^{1/s}$; moreover, we have $\|Y_{k+1}-Y_k\|<a\|Y_k-Y_{k-1}\|$.

Proof. (i) Because $F$ maps into $P(n)$, we have $H(Y)\le Q$ for all $Y\in[H(Q),Q]$. Noting that $F$ is monotone, we have $F(Y^{1/s})\le F(Q^{1/s})$, which implies $H(Y)\ge Q-A^HF(Q^{1/s})A=H(Q)$. So $H$ maps $[H(Q),Q]$ into $[H(Q),Q]$. Moreover, for $Y_1,Y_2\in[H(Q),Q]$, we have $Y_1\ge\lambda_n(H(Q))I$, $Y_2\ge\lambda_n(H(Q))I$; combining Lemmas 3.1–3.3, we can obtain

$\|H(Y_1)-H(Y_2)\|=\|A^HF(Y_2^{1/s})A-A^HF(Y_1^{1/s})A\|=\|A^H(F(Y_2^{1/s})-F(Y_1^{1/s}))A\|\le\|A\|^2\max_{Z\in L_{Y_1^{1/s},Y_2^{1/s}}}\|DF(Z)\|\,\|Y_1^{1/s}-Y_2^{1/s}\|\le\frac{1}{s}S_{[H(Q),Q]}\|A\|^2\lambda_n^{\frac{1}{s}-1}(H(Q))\|Y_1-Y_2\|.$

Since $a<1$, we obtain that $H$ is a contraction map on $[H(Q),Q]$. Because $[H(Q),Q]$ is a closed subset of $P(n)$, it is a complete metric space. It then follows from Banach's fixed point theorem that the map $H$ has a unique fixed point $\bar{Y}$ in $[H(Q),Q]$, i.e., $H(\bar{Y})=\bar{Y}$. Let $\bar{X}=\bar{Y}^{1/s}\in[(H(Q))^{1/s},Q^{1/s}]=[G(Q^{1/s}),Q^{1/s}]$; then the matrix $\bar{X}$ is the unique solution of Eq (1.1) in this interval. This proves the first part of the theorem.

(ii) From the proof of (i), $H$ is a contraction map which maps $[H(Q),Q]$ into $[H(Q),Q]$, and for all $Y_1,Y_2\in[H(Q),Q]$ we have $\|H(Y_1)-H(Y_2)\|<a\|Y_1-Y_2\|$. So the sequence $\{Y_k\}_{k=0}^{\infty}$ in (3.1) converges to the unique solution $\bar{Y}$ of the equation $Y+A^HF(Y^{1/s})A=Q$, and $\bar{X}=\bar{Y}^{1/s}$ is the unique solution of Eq (1.1); also we have $\|Y_{k+1}-Y_k\|<a\|Y_k-Y_{k-1}\|$. The proof is completed.
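
The iteration (3.1) is straightforward to implement. The following Python/NumPy sketch is not from the paper: the starting value $Y_0=Q$, the stopping tolerance, and the helper `psd_power` are assumptions made here for illustration.

```python
import numpy as np

def psd_power(M, p):
    """p-th power of a Hermitian positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None) ** p) @ V.conj().T

def iterate_3_1(A, Q, F, s, tol=1e-12, max_iter=500):
    """Iteration (3.1): Y_{k+1} = Q - A^H F(Y_k^{1/s}) A, started from Y_0 = Q.
    Returns X = Y^{1/s}, an approximate solution of X^s + A^H F(X) A = Q."""
    Y = Q.copy()
    for _ in range(max_iter):
        Y_new = Q - A.conj().T @ F(psd_power(Y, 1.0 / s)) @ A
        if np.linalg.norm(Y_new - Y, 'fro') <= tol * np.linalg.norm(Q, 'fro'):
            return psd_power(Y_new, 1.0 / s)
        Y = Y_new
    return psd_power(Y, 1.0 / s)

# Illustrative data (assumed, not from the paper): F(X) = X^{1/2} maps P(n) into P(n) and is monotone.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A, Q, s = 0.1 * B, np.eye(4) + B @ B.T, 3
F = lambda X: psd_power(X, 0.5)
X = iterate_3_1(A, Q, F, s)
print(np.linalg.norm(psd_power(X, s) + A.conj().T @ F(X) @ A - Q, 'fro'))  # small residual
```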

Corollary 3.1. If the conditions of Theorem 3.2 are satisfied, then we have

$\|Y_k-\bar{Y}\|\le\frac{a^k}{1-a}\|Y_1-Y_0\|,\qquad \|Y_k-\bar{Y}\|\le\frac{a}{1-a}\|Y_k-Y_{k-1}\|.$

Proof. From the proof of Theorem 3.2, we obtain that $\|Y_{k+1}-Y_k\|<a\|Y_k-Y_{k-1}\|$. So we have

$\|Y_{k+p}-Y_k\|\le\sum_{i=1}^{p}\|Y_{k+i}-Y_{k+i-1}\|\le(a^{p-1}+\cdots+a+1)\|Y_{k+1}-Y_k\|\le\frac{a^k}{1-a}\|Y_1-Y_0\|. \qquad (3.2)$

Letting $p\to\infty$, we get $\|Y_k-\bar{Y}\|\le\frac{a^k}{1-a}\|Y_1-Y_0\|$, and from the second inequality in (3.2) we can obtain $\|Y_k-\bar{Y}\|\le\frac{a}{1-a}\|Y_k-Y_{k-1}\|$. This implies that the error of the approximate solution can be estimated by the difference between the iterates $Y_k$ and $Y_{k+1}$.

The case that $F:P(n)\to P(n)$ is continuous, anti-monotone and invertible was discussed in Theorem 3 of [1]; the conclusion there is similar to the theorem we have obtained above.

The next corollary describes the number of iterations needed to ensure that $\|Y_k-\bar{Y}\|\le\varepsilon$.

Corollary 3.2. If $Y_0=Q$ and $\varepsilon$ is a convergence tolerance, then the number $k$ of iterations to be taken is at most

$k=\left[\frac{\ln\varepsilon+\ln(1-a)-\ln\|A^HF(Q^{1/s})A\|}{\ln a}\right]+1.$

Proof. Let $Y_0=Q$. From Corollary 3.1, we have $\|Y_k-\bar{Y}\|\le\frac{a^k}{1-a}\|Y_1-Y_0\|=\frac{a^k}{1-a}\|A^HF(Q^{1/s})A\|$. To ensure that $\|Y_k-\bar{Y}\|\le\varepsilon$, we just need $\frac{a^k}{1-a}\|A^HF(Q^{1/s})A\|\le\varepsilon$; then we obtain

$k\ge\frac{\ln\varepsilon+\ln(1-a)-\ln\|A^HF(Q^{1/s})A\|}{\ln a}.$

This implies that the number $k$ of iterations to be taken is at most

$\left[\frac{\ln\varepsilon+\ln(1-a)-\ln\|A^HF(Q^{1/s})A\|}{\ln a}\right]+1.$
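
A small sketch of this iteration count, using the bound of Corollary 3.2 as reconstructed above; the contraction factor $a$ and the norm $\|A^HF(Q^{1/s})A\|$ are assumed to be already known.

```python
import numpy as np

def max_iterations(eps, a, M_norm):
    """Upper bound of Corollary 3.2 on the number of iterations of (3.1) needed to
    reach ||Y_k - Y|| <= eps when started from Y_0 = Q; M_norm = ||A^H F(Q^(1/s)) A||."""
    return int(np.floor((np.log(eps) + np.log(1 - a) - np.log(M_norm)) / np.log(a))) + 1

print(max_iterations(1e-8, 0.5, 1.0))   # 28 iterations for these assumed values
```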

Theorem 3.3. If $F:P(n)\to P(n)$ is monotone and continuous and $G(Q^{1/s})>O$, then the following results hold true.

(i) $G$ is anti-monotone on $P(n)$ and maps $[G(Q^{1/s}),Q^{1/s}]$ into $[G(Q^{1/s}),Q^{1/s}]$, and for any positive definite matrix $X$ for which $G(X)$ is positive definite we have $G(Q^{1/s})\le G^2(X)\le Q^{1/s}$.

(ii) There always exists either a periodic orbit of period 2 of the map $G$ or a fixed point of $G$. The sequence of matrices $\{G^{2j}(Q^{1/s})\}_{j=0}^{\infty}$ is a decreasing sequence of positive definite matrices converging to a positive definite matrix $X_+$, the sequence of matrices $\{G^{2j+1}(Q^{1/s})\}_{j=0}^{\infty}$ is an increasing sequence of positive definite matrices converging to a positive definite matrix $X_-$, and the matrices $X_+$, $X_-$ form either a periodic orbit of $G$ of period 2, or $X_+=X_-$, in which case it is a fixed point of $G$, and hence a solution of Eq (1.1).

(iii) Moreover, $G$ maps the set $[X_-,X_+]$ into itself, and any periodic orbit of $G$ is contained in this set. In particular, any solution of Eq (1.1) lies between $X_-$ and $X_+$, and if $X_-=X_+$, then there is a unique positive definite solution.

(iv) In the case where $X_-=X_+$ this matrix is the global attractor for the map $G$ in the following sense: for any positive definite $X$ for which $G(X)$ is positive definite as well, we have $\lim_{j\to\infty}G^j(X)=X_+$.

(v) In the case where $X_-\ne X_+$ the following holds: if $X\le X_-$, then the orbit of $X$ under $G$ converges to the periodic orbit $X_-$, $X_+$ in the sense that $\lim_{j\to\infty}G^{2j-1}(X)=X_+$ and $\lim_{j\to\infty}G^{2j}(X)=X_-$. If $X\ge X_+$ and $G(X)$ is positive definite, then the orbit of $X$ under $G$ converges to the periodic orbit $X_-$, $X_+$ in the sense that $\lim_{j\to\infty}G^{2j-1}(X)=X_-$ and $\lim_{j\to\infty}G^{2j}(X)=X_+$.

Proof. (i) Noting that $F$ is monotone on $P(n)$, for all $X,Y\in P(n)$ with $X\ge Y$ we have $F(X)\ge F(Y)$, which implies $G(X)=(Q-A^HF(X)A)^{1/s}\le(Q-A^HF(Y)A)^{1/s}=G(Y)$, so $G$ is anti-monotone. Because $F$ maps into $P(n)$, we can obtain $G(X)=(Q-A^HF(X)A)^{1/s}\le Q^{1/s}$; also, from the monotonicity of $F$, we have $F(X)\le F(Q^{1/s})$ for all $X\in[G(Q^{1/s}),Q^{1/s}]$, which implies that $Q-A^HF(X)A\ge Q-A^HF(Q^{1/s})A$, i.e., $G(X)\ge G(Q^{1/s})$. For any positive definite matrix $X$ for which $G(X)$ is positive definite, we have $G(X)\le Q^{1/s}$ and $G^2(X)\le Q^{1/s}$. Combining $G(X)\le Q^{1/s}$ with the anti-monotonicity of $G$, we then have $G^2(X)\ge G(Q^{1/s})$.

The proofs of (ii)–(v) are similar to the proof of Theorem 2.2 in [2] and are omitted in this paper.

Theorem 3.4. If $F:P(n)\to -P(n)$ is monotone and continuous, then the sequence of matrices $\{G^{2j}(Q^{1/s})\}_{j=1}^{\infty}$ is an increasing sequence of positive definite matrices converging to a positive definite matrix $\tilde{X}_-$, and the sequence of matrices $\{G^{2j+1}(Q^{1/s})\}_{j=1}^{\infty}$ is a decreasing sequence of positive definite matrices converging to a positive definite matrix $\tilde{X}_+$. If $\tilde{X}_-=\tilde{X}_+=\tilde{X}$, then $\tilde{X}$ is the unique positive definite solution of Eq (1.1).

Proof. Because $F$ maps into $-P(n)$, we have $G(X)=(Q-A^HF(X)A)^{1/s}\ge Q^{1/s}$, which implies $G(Q^{1/s})\ge Q^{1/s}>O$ and $G^2(Q^{1/s})\ge Q^{1/s}$. As in the proof of Theorem 3.3, we obtain that $G$ is anti-monotone and $G^2$ is monotone. So we have $G^3(Q^{1/s})\le G(Q^{1/s})$; then, applying $G^2$ repeatedly, we see that the monotonicity of $G^2$ on this set implies that the sequence $\{G^{2j+1}(Q^{1/s})\}_{j=1}^{\infty}$ is a decreasing sequence of positive definite matrices that is bounded below by the positive definite matrix $Q^{1/s}$. Hence it converges to a positive definite matrix $\tilde{X}_+$. Also, applying the monotonicity of $G^2$ to the inequality $G^2(Q^{1/s})\ge Q^{1/s}$, we obtain that the sequence $\{G^{2j}(Q^{1/s})\}_{j=1}^{\infty}$ is an increasing sequence of positive definite matrices. Noting that $G$ is anti-monotone and $G(Q^{1/s})\ge Q^{1/s}$, we have $G^2(Q^{1/s})\le G(Q^{1/s})$. Combining the monotonicity of $G^2$ with the fact, proved above, that $\{G^{2j+1}(Q^{1/s})\}_{j=1}^{\infty}$ is decreasing, we can obtain $G^{2j}(Q^{1/s})\le G^{2j-1}(Q^{1/s})\le G^{2j-3}(Q^{1/s})\le\cdots\le G(Q^{1/s})$, which implies that the sequence $\{G^{2j}(Q^{1/s})\}_{j=1}^{\infty}$ is bounded above by the positive definite matrix $G(Q^{1/s})$. So it converges to a positive definite matrix $\tilde{X}_-$. Obviously, when $\tilde{X}_-=\tilde{X}_+=\tilde{X}$, $\tilde{X}$ is a positive definite solution of Eq (1.1). In the following we prove uniqueness.

Assume that $\bar{X}$ is a positive definite solution of Eq (1.1); then we have $\bar{X}=G(\bar{X})\ge Q^{1/s}$. Since $G^2$ is monotone, applying $G^2$ repeatedly gives $\bar{X}=G^{2j}(\bar{X})\ge G^{2j}(Q^{1/s})$. Hence, letting $j\to\infty$, we see that $\bar{X}\ge\tilde{X}_-=\tilde{X}$. Moreover, from $\bar{X}=G(\bar{X})$, $\bar{X}\ge Q^{1/s}$ and the anti-monotonicity of $G$, we have $\bar{X}=G(\bar{X})\le G(Q^{1/s})$. Then, applying $G^2$ repeatedly, we see that the monotonicity of $G^2$ implies that $\bar{X}\le G^{2j+1}(Q^{1/s})$. So we obtain $\bar{X}\le\tilde{X}_+=\tilde{X}$. Hence we obtain $\tilde{X}=\bar{X}$, i.e., $\tilde{X}$ is the unique positive definite solution of Eq (1.1). This completes the proof.

In this section we consider the perturbation analysis of the solution of Eq (1.1). The perturbed equation is as follows:

$X^s+\tilde{A}^HF(X)\tilde{A}=\tilde{Q}, \qquad (4.1)$

where $\tilde{Q}$ is positive definite, $\tilde{A}$ and $\tilde{Q}$ are small perturbations of $A$ and $Q$, respectively, and $s$ is a positive integer. Denote $\Delta A=\tilde{A}-A$, $\Delta Q=\tilde{Q}-Q$, $\Delta X=\tilde{X}-\bar{X}$. The following theorems give us upper bounds for $\|\bar{X}-\tilde{X}\|_F$, where $\bar{X}$ is a solution of Eq (1.1) and $\tilde{X}$ is a solution of Eq (4.1).

Theorem 4.1. Let $F:P(n)\to M(n)$ be differentiable with $\|F(\cdot)\|_F\le c$, and let $\bar{X}$, $\tilde{X}$ be positive semi-definite solutions of Eq (1.1) and its perturbed Eq (4.1), respectively, with $\bar{X}\in[B_1,C_1]$, $\tilde{X}\in[B_2,C_2]$. Then we have $\|\Delta X\|_F\le\frac{d}{s\lambda^{s-1}-l}$ and $\frac{\|\Delta X\|_F}{\|\bar{X}\|_F}\le\frac{d}{\lambda(s\lambda^{s-1}-l)}$ whenever $l<s\lambda^{s-1}$ is satisfied. Here $\hat{\lambda}=\max\{\|X\|_F\mid X\in\Omega\}$, $\lambda=\min\{\lambda_n(B_1),\lambda_n(B_2)\}$, $d=c\hat{\lambda}(\|\tilde{A}^H\|_F+\|A\|_F)\|\Delta A\|_F+\|\Delta Q\|_F$, $l=r_\Omega\|A\|_F\|\tilde{A}^H\|_F$, $r_\Omega=\max_{Z\in\Omega}\|DF(Z)\|_F$, and $\Omega=\{\alpha X+(1-\alpha)Y\mid\alpha\in[0,1],\,X\in[B_1,C_1],\,Y\in[B_2,C_2]\}$.

Proof. Because $\bar{X}$, $\tilde{X}$ are positive semi-definite solutions of Eq (1.1) and its perturbed Eq (4.1), we have

$\bar{X}^s-\tilde{X}^s=\tilde{A}^HF(\tilde{X})\tilde{A}-\tilde{Q}-A^HF(\bar{X})A+Q=\tilde{A}^HF(\tilde{X})(\tilde{A}-A)+\tilde{A}^H(F(\tilde{X})-F(\bar{X}))A+(\tilde{A}^H-A^H)F(\bar{X})A-\Delta Q. \qquad (4.2)$

Noting that $\bar{X}\in[B_1,C_1]$, $\tilde{X}\in[B_2,C_2]$, $\hat{\lambda}=\max\{\|X\|_F\mid X\in\Omega\}$ and $\lambda=\min\{\lambda_n(B_1),\lambda_n(B_2)\}$, we can obtain $\bar{X}\ge B_1\ge\lambda_n(B_1)I\ge\lambda I$, $\tilde{X}\ge B_2\ge\lambda_n(B_2)I\ge\lambda I$, and $\|\bar{X}\|_F\le\hat{\lambda}$, $\|\tilde{X}\|_F\le\hat{\lambda}$. Combining Lemma 3.2 and substituting $r_\Omega=\max_{Z\in\Omega}\|DF(Z)\|_F$ and $d=c\hat{\lambda}(\|\tilde{A}^H\|_F+\|A\|_F)\|\Delta A\|_F+\|\Delta Q\|_F$ into Eq (4.2), we get

$\|\bar{X}^s-\tilde{X}^s\|_F=\|\tilde{A}^HF(\tilde{X})(\tilde{A}-A)+\tilde{A}^H(F(\tilde{X})-F(\bar{X}))A+(\tilde{A}^H-A^H)F(\bar{X})A-\Delta Q\|_F\le\|\tilde{A}^H\|_F\|F(\cdot)\|_F\|\tilde{X}\|_F\|\Delta A\|_F+\|\tilde{A}^H\|_F\max_{Z\in L_{\bar{X},\tilde{X}}}\|DF(Z)\|_F\|\tilde{X}-\bar{X}\|_F\|A\|_F+\|\Delta A^H\|_F\|F(\cdot)\|_F\|\bar{X}\|_F\|A\|_F+\|\Delta Q\|_F\le c\hat{\lambda}(\|\tilde{A}^H\|_F+\|A\|_F)\|\Delta A\|_F+\|\Delta Q\|_F+r_\Omega\|A\|_F\|\tilde{A}^H\|_F\|\bar{X}-\tilde{X}\|_F. \qquad (4.3)$

From Lemma 3.1, we have

$\|\bar{X}^s-\tilde{X}^s\|_F=\|\mathrm{vec}(\bar{X}^s-\tilde{X}^s)\|_2=\Big\|\mathrm{vec}\Big(\sum_{i=0}^{s-1}\bar{X}^i(\bar{X}-\tilde{X})\tilde{X}^{s-1-i}\Big)\Big\|_2=\Big\|\Big(\sum_{i=0}^{s-1}(\tilde{X}^{s-1-i})^T\otimes\bar{X}^i\Big)\mathrm{vec}(\bar{X}-\tilde{X})\Big\|_2\ge s\lambda^{s-1}\|\bar{X}-\tilde{X}\|_F. \qquad (4.4)$

Combining (4.3) and (4.4), we can obtain

$s\lambda^{s-1}\|\bar{X}-\tilde{X}\|_F\le c\hat{\lambda}(\|\tilde{A}^H\|_F+\|A\|_F)\|\Delta A\|_F+\|\Delta Q\|_F+r_\Omega\|A\|_F\|\tilde{A}^H\|_F\|\bar{X}-\tilde{X}\|_F. \qquad (4.5)$

Substituting $d=c\hat{\lambda}(\|\tilde{A}^H\|_F+\|A\|_F)\|\Delta A\|_F+\|\Delta Q\|_F$ and $l=r_\Omega\|A\|_F\|\tilde{A}^H\|_F$ into (4.5), we can obtain

$\|\Delta X\|_F\le\frac{d}{s\lambda^{s-1}-l}.$

Moreover, we can get

$\frac{\|\Delta X\|_F}{\|\bar{X}\|_F}\le\frac{d}{\lambda(s\lambda^{s-1}-l)}.$

This completes the proof.

In fact, if we do not take the higher order quantity, i.e., $\|\Delta A\|_F^2$, into consideration, then we can obtain a simpler error bound for $\|\bar{X}-\tilde{X}\|_F$, which does not require calculating the value of $\|\tilde{A}\|_F$.

Theorem 4.2. Under conditions similar to those of Theorem 4.1, if we do not take the higher order quantity $\|\Delta A\|_F^2$ into consideration, then we have the following absolute error bound

$\|\Delta X\|_F=\|\bar{X}-\tilde{X}\|_F\le\frac{2c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F}{s\lambda^{s-1}-r_\Omega\|A\|_F^2}$

and the relative error bound

$\frac{\|\Delta X\|_F}{\|\bar{X}\|_F}\le\frac{2c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F}{\lambda(s\lambda^{s-1}-r_\Omega\|A\|_F^2)}.$

Proof. Because $\bar{X}$, $\tilde{X}$ are positive semi-definite solutions of Eq (1.1) and its perturbed Eq (4.1), we have

$\bar{X}^s-\tilde{X}^s=\tilde{A}^HF(\tilde{X})\tilde{A}-A^HF(\bar{X})A+Q-\tilde{Q}=A^H(F(\tilde{X})-F(\bar{X}))A+A^HF(\tilde{X})(\tilde{A}-A)+(\tilde{A}^H-A^H)F(\tilde{X})A+(\tilde{A}^H-A^H)F(\tilde{X})(\tilde{A}-A)+Q-\tilde{Q}. \qquad (4.6)$

So we can obtain

$\|\bar{X}^s-\tilde{X}^s\|_F=\|A^H(F(\tilde{X})-F(\bar{X}))A+A^HF(\tilde{X})(\tilde{A}-A)+(\tilde{A}^H-A^H)F(\tilde{X})A+(\tilde{A}^H-A^H)F(\tilde{X})(\tilde{A}-A)+Q-\tilde{Q}\|_F\le\|A\|_F^2\|F(\tilde{X})-F(\bar{X})\|_F+\|A^H\|_F\|\Delta A\|_F\|F(\tilde{X})\|_F+\|\Delta A\|_F\|A\|_F\|F(\tilde{X})\|_F+\|\Delta A\|_F^2\|F(\tilde{X})\|_F+\|\Delta Q\|_F\le r_\Omega\|A\|_F^2\|\bar{X}-\tilde{X}\|_F+c\hat{\lambda}\|A^H\|_F\|\Delta A\|_F+c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F, \qquad (4.7)$

where the higher order term $\|\Delta A\|_F^2\|F(\tilde{X})\|_F$ has been neglected in the last step.

Similarly to Theorem 4.1, we have

$\|\bar{X}^s-\tilde{X}^s\|_F\ge s\lambda^{s-1}\|\bar{X}-\tilde{X}\|_F. \qquad (4.8)$

Combining (4.7) and (4.8), we obtain

$s\lambda^{s-1}\|\bar{X}-\tilde{X}\|_F\le r_\Omega\|A\|_F^2\|\bar{X}-\tilde{X}\|_F+c\hat{\lambda}\|A^H\|_F\|\Delta A\|_F+c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F.$

So we have the absolute error bound

$\|\Delta X\|_F=\|\bar{X}-\tilde{X}\|_F\le\frac{2c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F}{s\lambda^{s-1}-r_\Omega\|A\|_F^2},$

and we can also get the relative error bound

$\frac{\|\Delta X\|_F}{\|\bar{X}\|_F}\le\frac{2c\hat{\lambda}\|A\|_F\|\Delta A\|_F+\|\Delta Q\|_F}{\lambda(s\lambda^{s-1}-r_\Omega\|A\|_F^2)}.$

The proof is completed.
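
The bound of Theorem 4.2 is easy to evaluate once the constants are known. The following small sketch (an illustration, not the paper's code) assumes the constants $c$, $\hat{\lambda}$, $\lambda$ and $r_\Omega$ for the given map $F$ have already been estimated.

```python
import numpy as np

def perturbation_bounds(A, dA, dQ, c, lam_hat, lam, r_omega, s):
    """Absolute and relative error bounds of Theorem 4.2 (first order in ||dA||_F).
    Requires the denominator s*lam^(s-1) - r_omega*||A||_F^2 to be positive."""
    nA, ndA, ndQ = (np.linalg.norm(M, 'fro') for M in (A, dA, dQ))
    denom = s * lam ** (s - 1) - r_omega * nA ** 2
    abs_bound = (2 * c * lam_hat * nA * ndA + ndQ) / denom
    return abs_bound, abs_bound / lam
```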

So far we have considered the general nonlinear matrix Eq (1.1) and obtained general conditions for the existence of a positive definite or positive semi-definite solution for this class of equations. Now we give several examples to show how this works in particular cases, together with some numerical results that illustrate the soundness of the results obtained above.

Example 5.1. Take $F(X)=X^{\alpha}$ ($\alpha\in(0,1]$), $Q=I$, and let $A$ be an arbitrary square matrix with $\|A\|_2<1$. Then Eq (1.1) has a positive definite solution in $[(I-A^HA)^{1/s},I]$.

Proof. Because $\|A\|_2<1$, we have $\lambda_1(A^HA)<1$, which implies that $A^HA<I$. So we have $A^HF(Q^{1/s})A=A^HIA=A^HA<I=Q$, i.e., $G(Q^{1/s})=(Q-A^HF(Q^{1/s})A)^{1/s}>O$. Then all conditions of Theorem 2.5 are satisfied, and from the conclusion of Theorem 2.5 we obtain that the equation $X^s+A^HX^{\alpha}A=I$ has a positive definite solution in $[(I-A^HA)^{1/s},I]$.

Example 5.2. Take $F(X)=-\sum_{i=1}^{m}X^{-\delta_i}$ ($X\ge I$, $m\ge1$, $\delta_i\in(0,1]$), $Q=I$, and let $A$ be an arbitrary square matrix. Then Eq (1.1) has a positive definite solution in the interval $[I,(I+mA^HA)^{1/s}]$; in particular, all solutions of the equation $X^s-\sum_{i=1}^{m}A^HX^{-\delta_i}A=I$ are in $[I,(I+mA^HA)^{1/s}]$.

Proof. From Lemma 2.1, we obtain that $X^{\delta_i}$ is monotone for $i=1,2,\ldots,m$, so $X^{-\delta_i}$ is anti-monotone for $i=1,2,\ldots,m$. This implies that $X^{-\delta_i}\le I$ ($i=1,2,\ldots,m$) when $X\ge I$, i.e., $F(X)\ge-mI$, and then $A^HF(X)A\ge-mA^HA$. Let $B=I+mA^HA$; then Eq (2.1) of Theorem 2.3 holds for all $X\in[Q^{1/s},B^{1/s}]=[I,(I+mA^HA)^{1/s}]$. So all conditions of Theorem 2.3 are satisfied. According to Theorem 2.3, the equation $X^s-\sum_{i=1}^{m}A^HX^{-\delta_i}A=I$ has a positive definite solution in $[I,(I+mA^HA)^{1/s}]$, and, more specifically, all solutions of the above equation are in $[I,(I+mA^HA)^{1/s}]$.

Example 5.3. Let $F(X)=X^{-t}$, $t\in(0,1]$. If Eq (1.1) has a positive definite solution $\bar{X}$, then from the conclusion of Theorem 2.6 we have $\bar{X}\in(F^{-1}(A^{-H}QA^{-1}),G(Q^{1/s}))$. Compared with the conclusion of Theorem 2.1 in [7], namely that $\bar{X}\in((AQ^{-1}A^H)^{1/t},Q^{1/s})$, the conclusion of our paper, i.e., the result of Theorem 2.6, gives a better estimate for the solution $\bar{X}$.

Proof. Noting that $F$ maps into $P(n)$, we obtain $G(Q^{1/s})\le Q^{1/s}$. From $F(X)=X^{-t}$, we have $F^{-1}(A^{-H}QA^{-1})=(AQ^{-1}A^H)^{1/t}$. So $\bar{X}\in(F^{-1}(A^{-H}QA^{-1}),G(Q^{1/s}))\subseteq((AQ^{-1}A^H)^{1/t},Q^{1/s})$. The proof is completed.

The above example (i.e., Example 5.3) shows that the conclusions of our paper may give better estimates for the solution of Eq (1.1) to some extent.

An application of Theorem 2.4 is given in the following example to discuss the properties of the positive definite solutions of Eq (1.1).

Example 5.4. For the matrix Eq (1.1), we choose $s=5$ and $F(X)=X^{0.5}$; then $F$ is monotone by Lemma 2.1. Let

$A=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix},\qquad X=\begin{pmatrix}7.15&3.02&0.11\\3.02&6.02&2.01\\0.11&2.01&6.50\end{pmatrix}$

and $Q=X^s+A^HF(X)A=X^5+X^{0.5}$, i.e.,

$Q=\begin{pmatrix}5.1837&4.7204&2.1658\\4.7204&4.7578&2.7761\\2.1658&2.7761&2.4251\end{pmatrix}$

with every element multiplied by $10^4$.

Obviously, $X$ is a positive definite matrix, that is, $X>O$. By using Matlab 7.0 we can obtain

$\lambda_1(Q)=1.1099\times10^5,\qquad\lambda_2(Q)=0.1248\times10^5,\qquad\lambda_3(Q)=0.0019\times10^5.$

So we have $Q>O$. Now let us check whether $G(Q^{1/5})=(Q-Q^{1/10})^{1/5}>O$ or not. By calculation, we obtain

$G(Q^{1/5})=\begin{pmatrix}7.1500&3.0200&0.1100\\3.0200&6.0200&2.0100\\0.1100&2.0100&6.5000\end{pmatrix}$

and we also calculate the eigenvalues of $G(Q^{1/5})$, obtaining

$\lambda_1(G(Q^{1/5}))=10.2107,\qquad\lambda_2(G(Q^{1/5}))=6.5952,\qquad\lambda_3(G(Q^{1/5}))=2.8641.$

This implies that $G(Q^{1/5})>O$. Equation (1.1) with the above $A$ and $Q$ has at least one positive definite solution, since $Q=X^s+A^HF(X)A=X^5+X^{0.5}$. So, according to Theorem 2.4, all positive semi-definite solutions of Eq (1.1) must be in $[G(Q^{1/5}),Q^{1/5}]$, that is, all positive semi-definite solutions $\bar{X}$ of the equation $X^5+X^{0.5}=Q$ satisfy

$\begin{pmatrix}7.1500&3.0200&0.1100\\3.0200&6.0200&2.0100\\0.1100&2.0100&6.5000\end{pmatrix}\le\bar{X}\le\begin{pmatrix}7.1515&3.0180&0.1109\\3.0180&6.0229&2.0085\\0.1109&2.0085&6.5010\end{pmatrix}.$
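
The quantities in Example 5.4 can be reproduced numerically. The sketch below (Python/NumPy rather than the Matlab code used in the paper) builds $Q=X^5+X^{0.5}$ and checks the positivity of $Q$ and of $G(Q^{1/5})=(Q-Q^{1/10})^{1/5}$; only the qualitative checks are asserted, not the displayed digits.

```python
import numpy as np

def psd_power(M, p):
    """p-th power of a Hermitian positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None) ** p) @ V.conj().T

X = np.array([[7.15, 3.02, 0.11],
              [3.02, 6.02, 2.01],
              [0.11, 2.01, 6.50]])
Q = psd_power(X, 5) + psd_power(X, 0.5)             # Q = X^5 + X^0.5 (here A = I)
G_Q = psd_power(Q - psd_power(Q, 0.1), 0.2)         # G(Q^{1/5}) = (Q - Q^{1/10})^{1/5}

print(np.linalg.eigvalsh(Q))                        # all positive: Q > O
print(np.linalg.eigvalsh(G_Q))                      # all positive: G(Q^{1/5}) > O
print(np.linalg.eigvalsh(psd_power(Q, 0.2) - G_Q))  # all nonnegative: G(Q^{1/5}) <= Q^{1/5}
```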

In the remainder of this section, we report some numerical results. These numerical results illustrate the correctness of Theorem 3.4. The numerical experiments were carried out using MATLAB 2010a on a ZWX-PC with an Intel i3 processor (2.10 GHz) and 4.0 GB RAM, in double precision; the rounding unit is approximately $1.11\times10^{-16}$. In the example we take $s=1$, $Q=I$, $F(X)=-X^{-2}$; then the assumptions of Theorem 3.4 are satisfied. In Table 1, $k$ denotes the number of iterations, $\varepsilon_k$ denotes $\|G^{2k}(Q^{1/s})-G^{2k+1}(Q^{1/s})\|$, and $X_k$, $Y_k$ denote the iterates $G^{2k}(Q^{1/s})$, $G^{2k+1}(Q^{1/s})$ at the $k$th step, respectively. $\tilde{X}_-$, $\tilde{X}_+$ are taken to be the final iterates of $G^{2k}(Q^{1/s})$, $G^{2k+1}(Q^{1/s})$ after $\varepsilon_k<10^{-8}$ is satisfied. The algorithm used to compute $G^k(Q^{1/s})$ is shown in the appendix in Section 7.

Table 1.  Error analysis for $\|X_k-Y_k\|$.
k                  0         1         2            3
$\varepsilon_k$    0.4098    0.0023    1.8243e-006  8.9540e-011


After the calculation, we can also give $X_k$ and $Y_k$, $k=1,2,3$, as follows, where digits below the computer's rounding error are omitted.

$X_1=\begin{pmatrix}1.0533&0.0698&0.0318&0.0657\\0.0698&1.1242&0.0536&0.1185\\0.0318&0.0536&1.0269&0.0603\\0.0657&0.1185&0.0603&1.1388\end{pmatrix},\qquad Y_1=\begin{pmatrix}1.0538&0.0704&0.0321&0.0662\\0.0704&1.1249&0.0540&0.1191\\0.0321&0.0540&1.0271&0.0606\\0.0662&0.1191&0.0606&1.1393\end{pmatrix},$
$X_2=\begin{pmatrix}1.0529&0.0704&0.0321&0.0659\\0.0704&1.1249&0.0540&0.1191\\0.0321&0.0540&1.0257&0.0606\\0.0659&0.1191&0.0606&1.1390\end{pmatrix},\qquad Y_2=\begin{pmatrix}1.0531&0.0704&0.0321&0.0661\\0.0704&1.1250&0.0540&0.1191\\0.0321&0.0540&1.0269&0.0606\\0.0661&0.1191&0.0606&1.1393\end{pmatrix},$
$X_3=\begin{pmatrix}1.0538&0.0704&0.0321&0.0662\\0.0704&1.1249&0.0540&0.1191\\0.0321&0.0540&1.0271&0.0606\\0.0662&0.1191&0.0606&1.1393\end{pmatrix},\qquad Y_3=\begin{pmatrix}1.0538&0.0704&0.0321&0.0662\\0.0704&1.1249&0.0540&0.1191\\0.0321&0.0540&1.0271&0.0606\\0.0662&0.1191&0.0606&1.1393\end{pmatrix}.$

Obviously, $X_3\ge X_2\ge X_1\ge X_0$, $Y_3\le Y_2\le Y_1\le Y_0$, and $X_3\approx Y_3$. In fact, from the numerical test we can see that the sequence $\{X_k\}_{k\ge1}$ is increasing and converges to $X_3$, while, on the contrary, the sequence $\{Y_k\}_{k\ge1}$ is decreasing and converges to $Y_3$. The numerical results obtained here are consistent with the statement of Theorem 3.4. Moreover, the matrices $\tilde{X}_-$, $\tilde{X}_+$ and $\tilde{X}$ referred to in Theorem 3.4 can also be obtained, namely $\tilde{X}_-=X_3=G^6(Q^{1/s})$, $\tilde{X}_+=Y_3=G^7(Q^{1/s})$, and $\tilde{X}=\tilde{X}_-=\tilde{X}_+=X_3$. This implies that the equation $X-A^HX^{-2}A=I$ has a unique solution, namely $\tilde{X}$. This result is true, and it can be verified according to [5].
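
The alternating iterates used in this experiment can be reproduced with the following sketch. The setting $s=1$, $Q=I$, $F(X)=-X^{-2}$ (so that $G(X)=I+A^HX^{-2}A$) is from the text; the matrix $A$ used for Table 1 is not reproduced in the paper's text, so a small arbitrary $A$ is assumed here purely for illustration.

```python
import numpy as np

def inv_square(M):
    """M^{-2} for a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w ** -2.0) @ V.conj().T

n = 4
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((n, n))      # assumed; not the paper's A
I = np.eye(n)
G = lambda X: I + A.conj().T @ inv_square(X) @ A   # G(X) = (Q - A^H F(X) A)^{1/s} with s = 1

Z, iterates = I, [I]                       # start from Q^{1/s} = I
for _ in range(8):
    Z = G(Z)
    iterates.append(Z)

X_k = iterates[0::2]                       # even iterates G^{2k}(I): increasing
Y_k = iterates[1::2]                       # odd iterates  G^{2k+1}(I): decreasing
print([np.linalg.norm(x - y, 'fro') for x, y in zip(X_k, Y_k)])   # analogue of eps_k in Table 1
```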

In this paper, we discuss a general nonlinear matrix equation, including the existence and properties of solutions, the uniqueness of a solution, an iterative method for solving the equation, and a perturbation analysis of the solution. These results are more general than those of [5,16,17,18,19,20,21]; the iterative procedures there may be better for the special cases under consideration there, namely $F(X)=X^{-1}$, $F(X)=\pm X^{-2}$ or $F(X)=X^{-t}$ ($t=1,2,3,\ldots$), but they are not readily applied to the very general case we consider here. In recent years, many authors have been dedicated to finding good methods for solving the nonlinear matrix equation with $s=1$, $F(X)=X^{-1}$, which is a special case of the discrete algebraic Riccati equation studied in [22,23]. For example, the structure-preserving doubling algorithm [21,22,24,25,26], the cyclic reduction algorithm [4] and the Latouche-Ramaswami algorithm [27] have been proposed; these methods may have a better rate of convergence in some cases (the critical case not included). Whether these methods can be applied to our general case or not is an open problem and remains to be further researched.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

The authors are very much indebted to the anonymous referees for their constructive and valuable comments and suggestions, which greatly improved the original manuscript of this paper. This work was supported by the Fujian Natural Science Foundation (Grant Nos. 2022J01378 and 2023J01131186) and Key Reform in Education (Grant No. FBJG20200310).

    The authors declare that they have no conflict of interest.



    [1] S. M. El-Sayed, A. C. M. Ran, On an iteration method for solving a class of nonlinear matrix equations, SIAM J. Matrix Anal., 23 (2002), 632–645. https://doi.org/10.1137/S0895479899345571 doi: 10.1137/S0895479899345571
    [2] Q. Li, P. P. Liu, Positive definite solutions of a kind of nonlinear matrix equations, J. Inform. Comput. Sci., 7 (2010), 1527–1533.
    [3] A. C. M. Ran, M. C. B. Reurings, On the nonlinear matrix equation $X+A^HF(X)A=Q$: solution and perturbation theory, Linear Algebra Appl., 346 (2002), 15–26. https://doi.org/10.1016/S0024-3795(01)00508-0 doi: 10.1016/S0024-3795(01)00508-0
    [4] B. Meini, Efficient computation of the extreme solutions of $X+A^HX^{-1}A=Q$ and $X-A^HX^{-1}A=Q$, Math. Comp., 71 (2002), 1189–1204.
    [5] I. G. Ivanov, V. I. Hassanov, B. V. Minchev, On matrix equations $X\pm A^HX^{-2}A=I$, Linear Algebra Appl., 326 (2001), 27–44.
    [6] I. G. Ivanov, S. M. El-Sayed, Properties of positive definite solutions of the equation $X+A^HX^{-2}A=I$, Linear Algebra Appl., 297 (1998), 303–316. https://doi.org/10.1016/S0024-3795(98)00023-8 doi: 10.1016/S0024-3795(98)00023-8
    [7] Y. T. Yang, The iterative method for solving nonlinear matrix equation $X^s+A^HX^{-t}A=Q$, Appl. Math. Comput., 188 (2007), 46–53. https://doi.org/10.1016/j.amc.2006.09.085 doi: 10.1016/j.amc.2006.09.085
    [8] X. G. Liu, H. Gao, On the positive definite solutions of the matrix equation $X^s\pm A^TX^{-t}A=I_n$, Linear Algebra Appl., 368 (2003), 83–97. https://doi.org/10.1016/S0024-3795(02)00661-4 doi: 10.1016/S0024-3795(02)00661-4
    [9] X. F. Duan, A. Liao, B. Tang, On the nonlinear matrix equation $X-\sum_{i=1}^{m}A_i^HX^{\delta_i}A_i=Q$, Linear Algebra Appl., 429 (2008), 110–121. https://doi.org/10.1016/j.laa.2008.02.014 doi: 10.1016/j.laa.2008.02.014
    [10] X. F. Duan, A. Liao, On the existence of Hermitian positive definite solutions of the matrix equation $X^s+A^HX^{-t}A=Q$, Linear Algebra Appl., 429 (2008), 673–687. https://doi.org/10.1016/j.laa.2008.03.019 doi: 10.1016/j.laa.2008.03.019
    [11] W. Kulpa, The Schauder fixed point theorem, Acta Univ. Carol., Math. Phys., 38 (1997), 39–44.
    [12] M. Parodi, La Localisation Des Valeurs Caracterisiques Des Matrices Etses Applications, Paris: Gauthier-Villars, 1959.
    [13] R. Bhatia, Matrix Analysis, New York: Springer-Verlag, 1997. https://doi.org/10.1007/978-1-4612-0653-8
    [14] V. I. Istratescu, Fixed Point Theorem, Mathematics and Its Applications, Dordrecht: Reidel, 1981.
    [15] A. Ambrosetti, G. Prodi, A Primer of Nonlinear Analysis, Cambridge Studies in Advanced Mathematics, Cambridge: Cambridge University Press, 1993.
    [16] J. C. Engwerda, A. C. M. Ran, A. L. Rijkeboer, Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation $X+A^HX^{-1}A=Q$, Linear Algebra Appl., 186 (1993), 255–275. https://doi.org/10.1016/0024-3795(93)90295-Y doi: 10.1016/0024-3795(93)90295-Y
    [17] A. Ferrante, B. C. Levy, Hermitian solutions of the equation $X=Q+NX^{-1}N^H$, Linear Algebra Appl., 247 (1996), 359–373. https://doi.org/10.1016/0024-3795(95)00121-2 doi: 10.1016/0024-3795(95)00121-2
    [18] C. H. Guo, P. Lancaster, Iterative solution of two matrix equations, Math. Comput., 68 (1999), 1589–1603.
    [19] S. F. Xu, On the maximal solution of the matrix equation $X+A^TX^{-1}A=I$, Acta Sci. Natur. Univ. Pekinensis, 36 (2000), 29–38.
    [20] X. Zhan, J. Xie, On the matrix equation $X+A^TX^{-1}A=I$, Linear Algebra Appl., 247 (1996), 337–345. https://doi.org/10.1016/0024-3795(95)00120-4 doi: 10.1016/0024-3795(95)00120-4
    [21] C. H. Guo, Y. C. Kuo, W. W. Lin, Numerical solution of nonlinear matrix equations arising from Green's function calculations in nano research, J. Comput. Appl. Math., 236 (2012), 4166–4180. https://doi.org/10.1016/j.cam.2012.05.012 doi: 10.1016/j.cam.2012.05.012
    [22] E. K. W. Chu, H. Y. Fan, W. W. Lin, C. S. Wang, A structure-preserving doubling algorithm for periodic discrete-time algebraic Riccati equations, Int. J. Control, 77 (2004), 767–788. https://doi.org/10.1080/00207170410001714988 doi: 10.1080/00207170410001714988
    [23] C. H. Guo, Newton's method for discrete algebraic Riccati equations when the closed-loop matrix has eigenvalues on the unit circle, SIAM J. Matrix Anal. Appl., 20 (1998), 279–294. https://doi.org/10.1137/S0895479897322999 doi: 10.1137/S0895479897322999
    [24] P. C. Y. Weng, E. K. W. Chu, Y. C. Kuo, W. W. Lin, Solving large-scale nonlinear matrix equations by doubling, Linear Algebra Appl., 439 (2013), 914–932. https://doi.org/10.1016/j.laa.2012.08.008 doi: 10.1016/j.laa.2012.08.008
    [25] C. Y. Chiang, E. K. W. Chu, C. H. Guo, T. M. Huang, W. W. Lin, S. F. Xu, Convergence analysis of the doubling algorithm for several nonlinear matrix equations in the critical case, SIAM J. Matrix Anal. Appl., 227–247. https://doi.org/10.1137/080717304
    [26] C. H. Guo, W. W. Lin, The matrix equation $X+A^TX^{-1}A=Q$ and its application in nano research, SIAM J. Sci. Comput., 32 (2010), 3020–3038. https://doi.org/10.1137/090758209 doi: 10.1137/090758209
    [27] G. Latouche, V. Ramaswami, A logarithmic reduction algorithm for quasi-birth-death processes, J. Appl. Probab., 30 (1993), 650–674. https://doi.org/10.2307/3214773 doi: 10.2307/3214773
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)