Research article

A numerically stable high-order Chebyshev-Halley type multipoint iterative method for calculating matrix sign function

  • A new eighth-order Chebyshev-Halley type iteration is proposed for solving nonlinear equations and the matrix sign function. Basins of attraction show that several special cases of the new method are globally convergent. It is analytically proven that the new method is asymptotically stable and has convergence order eight. The effectiveness of the theoretical results is illustrated by numerical experiments, in which the new method is applied to a random matrix, the Wilson matrix and a continuous-time algebraic Riccati equation. Numerical results show that, compared with some well-known methods, the new method reaches the accuracy requirement in the least computing time and with the fewest iterations.

    Citation: Xiaofeng Wang, Ying Cao. A numerically stable high-order Chebyshev-Halley type multipoint iterative method for calculating matrix sign function[J]. AIMS Mathematics, 2023, 8(5): 12456-12471. doi: 10.3934/math.2023625




    The Chebyshev-Halley method is a popular iterative method for finding the simple roots of the nonlinear equation $f(x)=0$. The Chebyshev-Halley method was first provided by Hernández and Salanova [1]. Gutiérrez and Hernández [2] provided a modification of the Chebyshev-Halley type iterative methods in Banach spaces. The base for constructing the family is the third-order method

    $x_{k+1}=x_k-\left(1+\dfrac{1}{2}\,\dfrac{L(x_k)}{1-aL(x_k)}\right)\dfrac{f(x_k)}{f'(x_k)},$ (1.1)

    where $a\in\mathbb{R}$ and $L(x_k)=\dfrac{f''(x_k)\,f(x_k)}{f'(x_k)^2}$.
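As a concrete illustration of (1.1), here is a minimal Python sketch; the test function $f(x)=x^2-2$, the starting point $x_0=1.5$ and the choice $a=1/2$ (Halley's case) are illustrative assumptions, not data from the paper.

```python
# Chebyshev-Halley family (1.1) applied to f(x) = x^2 - 2;
# the function, starting point and parameter value are illustrative choices.
def chebyshev_halley(f, df, d2f, x0, a, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        L = d2f(x) * fx / dfx ** 2        # L(x_k) = f''(x_k) f(x_k) / f'(x_k)^2
        x = x - (1.0 + 0.5 * L / (1.0 - a * L)) * fx / dfx
    return x

root = chebyshev_halley(lambda x: x * x - 2.0,
                        lambda x: 2.0 * x,
                        lambda x: 2.0,
                        x0=1.5, a=0.5)    # a = 1/2 recovers Halley's method
```

With $a=0$ the same code gives Chebyshev's method and with $a=1$ the super-Halley method, which is the point of keeping $a$ as a free parameter.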

    In 2008, Osada [3] gave two modifications of the family of Chebyshev-Halley methods for analytic functions. With the development of the theory of iteration processes, Kim et al. [4] proposed several new families of Chebyshev-Halley type methods based on weight functions. Ivanov [5] established convergence theorems of the Chebyshev-Halley iteration family for multiple polynomial zeros. In this paper, we propose a new Chebyshev-Halley type method for computing the matrix sign function.

    In addition, the matrix sign function has significant applications. For example, it can be used as a tool for solving the algebraic Riccati equation [6,7,8], Lyapunov matrix equations [9], generalized algebraic Bernoulli equations [10] and the separation problem of matrix eigenvalues [11]. It is also a valuable tool for computing the matrix square root, matrix pth roots and the polar decomposition [12]. Due to the applicability of the matrix sign function, stable iterative schemes with global convergence have become viable choices for computing it. Basins of attraction can help us to obtain iterative schemes with global convergence; for example, Soleymani et al. [13,14] proposed some high-order iterative methods for the matrix sign function with applications to stochastic differential equations. Other globally convergent high-order iterative methods have been proposed for the matrix sign function; see [15,16,17,18,19] and the references therein.

    The sign function of a matrix generalizes the scalar sign function, which, for any $z\in\mathbb{C}$ not lying on the imaginary axis, is given by

    $\mathrm{sign}(z)=\begin{cases}1,&\text{if }\mathrm{Re}(z)>0,\\-1,&\text{if }\mathrm{Re}(z)<0.\end{cases}$ (1.2)

    Indeed, this function for the matrix case was introduced by Roberts [19] for solving the problem of model reduction and the algebraic Riccati equation.

    We suppose that the matrix $A\in\mathbb{C}^{n\times n}$ has no eigenvalues on the imaginary axis, and let $A=TJ_AT^{-1}$ be a Jordan canonical form arranged so that

    $J_A=\begin{pmatrix}J_1&0\\0&J_2\end{pmatrix},$ (1.3)

    where the eigenvalues of $J_1\in\mathbb{C}^{q\times q}$ and $J_2\in\mathbb{C}^{(n-q)\times(n-q)}$ lie in the open left half-plane and the open right half-plane, respectively. Then, the matrix sign function is given by

    $S=\mathrm{sign}(A)=T\begin{pmatrix}-I_q&0\\0&I_{n-q}\end{pmatrix}T^{-1}.$ (1.4)

    Actually, for any positive integer $p$, the matrix $p$-sector function [20] can be defined by

    $\mathrm{sect}_p(A)=A\,(A^p)^{-1/p},$ (1.5)

    and the matrix sign function $S=A\,(A^2)^{-1/2}$ is recovered in the case $p=2$. In fact, $S$ has various properties, as shown in [21].

    The well-known iteration for the computation of $S$ is Newton's method (NM):

    $X_{k+1}=\frac{1}{2}\left(X_k+X_k^{-1}\right).$ (1.6)
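A minimal NumPy sketch of iteration (1.6); the $2\times 2$ test matrix is an arbitrary choice whose eigenvalues lie off the imaginary axis, not an example from the paper.

```python
import numpy as np

# Newton's method (1.6) for the matrix sign: X_{k+1} = (X_k + X_k^{-1}) / 2.
def sign_newton(A, tol=1e-12, max_iter=100):
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[4.0, 1.0], [2.0, -3.0]])   # real eigenvalues of both signs
S = sign_newton(A)
```

The computed $S$ is involutory ($S^2=I$) and commutes with $A$, which are the defining properties used throughout the paper.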

    Iteration (1.6) is by no means the only rational matrix method for computing $S$: a lot of other iterations have been derived to obtain a higher order of convergence and to accelerate convergence. Notably, Kenney and Laub established the Padé family of iterations in [22]. For non-pure-imaginary $z\in\mathbb{C}$, we have the following characterization:

    $\mathrm{sign}(z)=z\,(z^2)^{-1/2}=z\,\bigl(1-(1-z^2)\bigr)^{-1/2}=z\,(1-\xi)^{-1/2},$ (1.7)

    where $\xi=1-z^2$; hence the task of approximating $\mathrm{sign}(z)$ leads to that of approximating

    $f(\xi)=(1-\xi)^{-1/2},$ (1.8)

    where $\xi$ is less than 1 in magnitude. Let the $(l,n)$ Padé approximant to $f(\xi)$ be

    $\frac{P_{l,n}(\xi)}{Q_{l,n}(\xi)},$ (1.9)

    with $l+n\ge 1$. More precisely, Kenney and Laub set up the rational iterations of the form

    $x_{k+1}=g_r(x_k)=x_k\,\frac{P_{l,n}(1-x_k^2)}{Q_{l,n}(1-x_k^2)},$ (1.10)

    where $P_{l,n}/Q_{l,n}$ denotes the $(l,n)$ Padé approximant to the function $(1-\xi)^{-1/2}$. Obviously, for any suitable $l$ and $n$, the matrix versions of the iterations are given by

    $X_{k+1}=X_k\,P_{l,n}(I-X_k^2)\,Q_{l,n}(I-X_k^2)^{-1}=:g_r(X_k),\qquad X_0=A,$ (1.11)

    where $r=l+n+1$. It has been proved that the iterations with $l=n$ and $l=n-1$ are globally convergent, while those with $l\ge n+1$ converge only locally; the convergence rate is $l+n+1$ in every case. Similarly, we can give the reciprocal Padé iterations as follows:

    $X_{k+1}=Q_{l,n}(I-X_k^2)\,\bigl[X_k\,P_{l,n}(I-X_k^2)\bigr]^{-1}=:g_r(X_k)^{-1}.$ (1.12)

    Table 1 gives the principal Padé iterations and their reciprocals for the orders $4\le r\le 10$. PM$r$ denotes the principal Padé iteration, while RPM$r$ denotes its reciprocal Padé iteration.

    Table 1.  Principal Padé iterations and their reciprocals for the order $4\le r\le 10$.
    Method (l,n) Expression
    PM4 (1,2) $4X_k(X_k^2+I)(X_k^4+6X_k^2+I)^{-1}$
    PM5 (2,2) $X_k(X_k^4+10X_k^2+5I)(5X_k^4+10X_k^2+I)^{-1}$
    PM6 (2,3) $2X_k(3X_k^4+10X_k^2+3I)(X_k^6+15X_k^4+15X_k^2+I)^{-1}$
    PM7 (3,3) $X_k(X_k^6+21X_k^4+35X_k^2+7I)(7X_k^6+35X_k^4+21X_k^2+I)^{-1}$
    PM8 (3,4) $8X_k(X_k^6+7X_k^4+7X_k^2+I)(X_k^8+28X_k^6+70X_k^4+28X_k^2+I)^{-1}$
    PM9 (4,4) $X_k(X_k^8+36X_k^6+126X_k^4+84X_k^2+9I)(9X_k^8+84X_k^6+126X_k^4+36X_k^2+I)^{-1}$
    PM10 (4,5) $2X_k(5X_k^8+60X_k^6+126X_k^4+60X_k^2+5I)(X_k^{10}+45X_k^8+210X_k^6+210X_k^4+45X_k^2+I)^{-1}$
    RPM4 (1,2) $(X_k^4+6X_k^2+I)\bigl(4X_k(X_k^2+I)\bigr)^{-1}$
    RPM5 (2,2) $(5X_k^4+10X_k^2+I)\bigl(X_k(X_k^4+10X_k^2+5I)\bigr)^{-1}$
    RPM6 (2,3) $(X_k^6+15X_k^4+15X_k^2+I)\bigl(2X_k(3X_k^4+10X_k^2+3I)\bigr)^{-1}$
    RPM7 (3,3) $(7X_k^6+35X_k^4+21X_k^2+I)\bigl(X_k(X_k^6+21X_k^4+35X_k^2+7I)\bigr)^{-1}$
    RPM8 (3,4) $(X_k^8+28X_k^6+70X_k^4+28X_k^2+I)\bigl(8X_k(X_k^6+7X_k^4+7X_k^2+I)\bigr)^{-1}$
    RPM9 (4,4) $(9X_k^8+84X_k^6+126X_k^4+36X_k^2+I)\bigl(X_k(X_k^8+36X_k^6+126X_k^4+84X_k^2+9I)\bigr)^{-1}$
    RPM10 (4,5) $(X_k^{10}+45X_k^8+210X_k^6+210X_k^4+45X_k^2+I)\bigl(2X_k(5X_k^8+60X_k^6+126X_k^4+60X_k^2+5I)\bigr)^{-1}$

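To make Table 1 concrete, here is a sketch of PM4 in matrix form with NumPy; the test matrix is an arbitrary illustrative choice, not data from the paper.

```python
import numpy as np

# Principal Padé iteration PM4 from Table 1:
# X_{k+1} = 4 X_k (X_k^2 + I) (X_k^4 + 6 X_k^2 + I)^{-1}.
def sign_pm4(A, tol=1e-12, max_iter=50):
    I = np.eye(A.shape[0])
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        X2 = X @ X
        X_new = 4.0 * X @ (X2 + I) @ np.linalg.inv(X2 @ X2 + 6.0 * X2 + I)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[4.0, 1.0], [2.0, -3.0]])   # eigenvalues off the imaginary axis
S = sign_pm4(A)
```

Since PM4 belongs to the globally convergent case $l=n-1$, the iteration needs no special starting guess beyond $X_0=A$.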

    In particular, we study the cases $l=n-1$ and $l=n$, which we call the principal Padé iterations [23]. They satisfy the equation

    $g_r(x)=x\,\frac{P_{l,n}(1-x^2)}{Q_{l,n}(1-x^2)}=\frac{(1+x)^r-(1-x)^r}{(1+x)^r+(1-x)^r}.$ (1.13)

    Note that $g_r(x)=p_r(x)/q_r(x)$, where $p_r(x)$ and $q_r(x)$ are the odd and even parts of $(1+x)^r$, respectively. On the other hand, lots of known iterative methods are contained in the Padé family or its reciprocal. The Newton-Schultz iteration (NSM) is retrieved in the case of $l=1$ and $n=0$:

    $X_{k+1}=\frac{1}{2}X_k\left(3I-X_k^2\right).$ (1.14)

    Choosing $l=1$ and $n=1$ yields Halley's method (HM):

    $X_{k+1}=\bigl[I+3X_k^2\bigr]\bigl[X_k(3I+X_k^2)\bigr]^{-1}.$ (1.15)

    After this brief introduction, we are interested in constructing more efficient methods for the matrix sign function, focusing on some intriguing properties: higher order convergence, global convergence, stability and efficiency. In Section 2, we propose a variant of the Chebyshev-Halley family with a free parameter, whose convergence order is eight. Fractal (basin of attraction) results show that some special cases of the new family are globally convergent, and some special cases of the new family are not in the Padé family; in this way, new iterative methods for the matrix sign function are obtained. We theoretically prove that the new family is convergent and stable in Sections 3 and 4, respectively. In Section 5, we compare our method with existing methods by numerical experiments: the proposed method is applied to a random matrix, the Wilson matrix and a continuous-time algebraic Riccati equation, and the numerical results show its effectiveness. Finally, Section 6 concludes the findings.

    Here, we consider the nonlinear matrix equation

    $X^2-I=0,$ (2.1)

    in which $I$ is the identity matrix. Moreover, the sign $S$ is a solution of (2.1).

    In fact, before deriving a new iterative method to solve the matrix equation $X^2-I=0$, we should discuss two important requirements for the new matrix iteration: first, it must not lie in the Padé family or its reciprocal; second, it must be globally convergent.

    Now, we propose a new iterative method of Chebyshev-Halley type with a free parameter:

    $\begin{cases}y_k=x_k-\left(1+\dfrac{1}{2}\,\dfrac{L(x_k)}{1-aL(x_k)}\right)\dfrac{f(x_k)}{f'(x_k)},\\[1mm] z_k=y_k-\dfrac{f(y_k)}{f[y_k,x_k]},\\[1mm] x_{k+1}=z_k-\dfrac{f(z_k)}{2f[z_k,y_k]-f'(y_k)},\end{cases}$ (2.2)

    where $a\in\mathbb{R}$, noting that

    $L(x_k)=\frac{f''(x_k)f(x_k)}{f'(x_k)^2},\qquad f[y_k,x_k]=\frac{f(y_k)-f(x_k)}{y_k-x_k},\qquad f[z_k,y_k]=\frac{f(z_k)-f(y_k)}{z_k-y_k}.$ (2.3)

    By applying (2.2) with (2.3) to Eq (2.1), we obtain the iteration in reciprocal form as follows:

    $X_{k+1}=X_k\bigl[(2-16a+24a^2)I+(-40+128a+32a^2)X_k^2+(140+224a-112a^2)X_k^4+(344-256a+32a^2)X_k^6+(66-80a+24a^2)X_k^8\bigr]\times\bigl[(1-2a)^2I+(-11+4a+52a^2)X_k^2+(-14+280a-56a^2)X_k^4+(322-56a-56a^2)X_k^6+(205-212a+52a^2)X_k^8+(9-12a+4a^2)X_k^{10}\bigr]^{-1}.$ (2.4)

    Method (2.4) can be used to calculate the sign function. Note that the iterates $X_k$ ($k\ge 0$) are rational functions of $A$ and, hence, like $A$, commute with $S$.
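The family can be sketched directly from its coefficient polynomials in $a$. The polynomials used below are a reconstruction consistent with the printed special cases (for instance, at $a=3/4$ they are one quarter of the integer coefficients of M8 in (2.13)); the test matrix is an arbitrary illustrative choice.

```python
import numpy as np

# One step of the family (2.4); coefficient polynomials in a reconstructed
# from the printed special cases M1-M8 (a hedged sketch, not the authors' code).
def family_step(X, a):
    I = np.eye(X.shape[0])
    c = [2 - 16*a + 24*a**2, -40 + 128*a + 32*a**2, 140 + 224*a - 112*a**2,
         344 - 256*a + 32*a**2, 66 - 80*a + 24*a**2]
    d = [(1 - 2*a)**2, -11 + 4*a + 52*a**2, -14 + 280*a - 56*a**2,
         322 - 56*a - 56*a**2, 205 - 212*a + 52*a**2, (3 - 2*a)**2]
    P = [I]                                 # P[j] = X^(2j)
    for _ in range(5):
        P.append(P[-1] @ X @ X)
    num = X @ sum(cj * Pj for cj, Pj in zip(c, P))
    den = sum(dj * Pj for dj, Pj in zip(d, P))
    return num @ np.linalg.inv(den)

def sign_family(A, a, tol=1e-10, max_iter=60):
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        X_new = family_step(X, a)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[4.0, 1.0], [2.0, -3.0]])
S = sign_family(A, a=0.75)                  # a = 3/4 gives method M8
```

Only parameter values with global convergence (such as $a=3/4$) are safe starting from an arbitrary $X_0=A$; locally convergent members may fail for matrices whose eigenvalues start far from $\pm 1$.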

    Theorem 1. Let $\alpha$ be a simple root of $f(x)=0$, where $f$ is sufficiently differentiable in a neighborhood of $\alpha$. If the initial point $x_0$ is sufficiently close to $\alpha$, then the order of convergence of (2.2) is eight for any value of the parameter $a$, with the following error equation:

    $\varepsilon_{k+1}=c_2^3\bigl(2(-1+a)c_2^2+c_3\bigr)^2\varepsilon_k^8+O(\varepsilon_k^9),$ (2.5)

    where $c_j=\dfrac{f^{(j)}(\alpha)}{j!\,f'(\alpha)}$, $j\ge 2$, and $\varepsilon_k=x_k-\alpha$.

    Proof. The result follows from Taylor series expansions and symbolic computation in Mathematica; the details are omitted.
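A hedged sketch of one step of (2.2) with the divided differences (2.3); the guards against degenerate denominators are a floating-point safeguard added here, and the test problem $f(x)=x^2-5$, the starting point and $a=3/4$ are illustrative choices.

```python
# One step of scheme (2.2) with divided differences (2.3) on f(x) = x^2 - 5;
# the denominator guards are a floating-point safeguard added in this sketch.
def step_22(f, df, d2f, x, a):
    fx, dfx = f(x), df(x)
    L = d2f(x) * fx / dfx ** 2
    y = x - (1.0 + 0.5 * L / (1.0 - a * L)) * fx / dfx
    fy = f(y)
    if y == x or fy == fx:                  # divided difference would degenerate
        return y
    z = y - fy * (y - x) / (fy - fx)        # z_k = y_k - f(y_k)/f[y_k, x_k]
    fz = f(z)
    if z == y:
        return z
    return z - fz / (2.0 * (fz - fy) / (z - y) - df(y))

f = lambda x: x * x - 5.0
x = 2.0
for _ in range(4):
    if abs(f(x)) < 1e-13:
        break
    x = step_22(f, lambda t: 2.0 * t, lambda t: 2.0, x, a=0.75)
```

Because the order is eight, one or two steps from $x_0=2$ already reach machine precision, which is why the residual check is needed before each further step.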

    Now, some different cases of the family (2.4) are given below.

    If $a=-1$, we have (M1):

    $X_{k+1}=\bigl(42X_k-136X_k^3-196X_k^5+632X_k^7+170X_k^9\bigr)\bigl[9I+37X_k^2-350X_k^4+322X_k^6+469X_k^8+25X_k^{10}\bigr]^{-1}.$ (2.6)

    If $a=-2$, we have (M2):

    $X_{k+1}=\bigl(130X_k-168X_k^3-756X_k^5+984X_k^7+322X_k^9\bigr)\bigl[25I+189X_k^2-798X_k^4+210X_k^6+837X_k^8+49X_k^{10}\bigr]^{-1}.$ (2.7)

    If $a=1$, we have (M3):

    $X_{k+1}=\bigl(10X_k+120X_k^3+252X_k^5+120X_k^7+10X_k^9\bigr)\bigl[I+45X_k^2+210X_k^4+210X_k^6+45X_k^8+X_k^{10}\bigr]^{-1}.$ (2.8)

    If $a=2$, we have (M4):

    $X_{k+1}=\bigl(66X_k+344X_k^3+140X_k^5-40X_k^7+2X_k^9\bigr)\bigl[9I+205X_k^2+322X_k^4-14X_k^6-11X_k^8+X_k^{10}\bigr]^{-1}.$ (2.9)

    If $a=-1/2$, we have (M5):

    $X_{k+1}=\bigl(4X_k-24X_k^3+120X_k^7+28X_k^9\bigr)\bigl[I-42X_k^4+84X_k^6+81X_k^8+4X_k^{10}\bigr]^{-1}.$ (2.10)

    If $a=1/2$, we have (M6):

    $X_{k+1}=\bigl(8X_k+56X_k^3+56X_k^5+8X_k^7\bigr)\bigl[I+28X_k^2+70X_k^4+28X_k^6+X_k^8\bigr]^{-1}.$ (2.11)

    If $a=0$, we have (M7):

    $X_{k+1}=\bigl(2X_k-40X_k^3+140X_k^5+344X_k^7+66X_k^9\bigr)\bigl[I-11X_k^2-14X_k^4+322X_k^6+205X_k^8+9X_k^{10}\bigr]^{-1}.$ (2.12)

    If $a=3/4$, we have (M8):

    $X_{k+1}=\bigl(14X_k+296X_k^3+980X_k^5+680X_k^7+78X_k^9\bigr)\bigl[I+85X_k^2+658X_k^4+994X_k^6+301X_k^8+9X_k^{10}\bigr]^{-1}.$ (2.13)

    One might want to have an efficient criterion for selecting values of the free parameter $a$ to compute the matrix sign function. We remark that the proposed methods should satisfy two conditions, that is, to possess global convergence and to not lie in the general Padé family of iterations (1.10).

    Now, we can observe the convergence behavior of the members of the family (2.4) by drawing the basins of attraction (for more information see [15]). For this purpose, the basins of attraction for some values of the parameter $a$ applied to the scalar equation $x^2-1=0$ are presented. Since the Newton-Schultz iteration (1.14) has only local convergence, we neglect to draw its basin.

    Here, we take the domain $\Gamma=[-2,2]\times[-2,2]\subset\mathbb{C}$. Each starting point $z_0\in\Gamma$ is allocated a color according to the simple zero to which the new scheme (2.4) converges; the point is painted black when the method diverges. The stopping criterion is $|f(x_k)|\le 10^{-3}$ in our programs, and we set the maximum number of iterations to 30. Note that the roots are plotted as two white points.
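The basin computation just described can be sketched without plotting; the scalar map below is M8 ($a=3/4$) from (2.13), and the grid resolution is an illustrative stand-in for the figure's resolution.

```python
import numpy as np

# Basin-of-attraction bookkeeping for x^2 - 1 = 0 under the scalar M8 map
# (a = 3/4); grid resolution is illustrative and plotting is omitted.
def m8_scalar(x):
    num = x * (14 + 296 * x**2 + 980 * x**4 + 680 * x**6 + 78 * x**8)
    den = 1 + 85 * x**2 + 658 * x**4 + 994 * x**6 + 301 * x**8 + 9 * x**10
    return num / den

def classify(z0, max_iter=30, tol=1e-3):
    z = z0
    for _ in range(max_iter):
        z = m8_scalar(z)
        if abs(z * z - 1.0) < tol:          # stopping rule |f(x_k)| <= 10^-3
            return 1 if z.real > 0 else -1
    return 0                                # would be painted black

grid = [complex(re, im)
        for re in np.linspace(-2.0, 2.0, 41)
        for im in np.linspace(-2.0, 2.0, 41)
        if re != 0.0]                       # skip the imaginary axis
labels = [classify(z) for z in grid]
```

If M8 is globally convergent, no grid point off the imaginary axis should remain unclassified, and by symmetry the two basins should be equally large.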

    First of all, in Figure 1, the basins of attraction of NM and HM are shown. We also provide the basins of attraction for $a=\pm 1$, $a=\pm 2$, $a=\pm 1/2$, $a=0$ and $a=3/4$ in Figures 2-5, respectively. We can see that methods M1, M2, M4, M5 and M7 have local convergence. Methods M3, M6 and M8, corresponding to $a=1$, $a=1/2$ and $a=3/4$, exhibit global convergence. But M3 and M6 are members of the Padé family, so they are not new iterative schemes, while M8 does not belong to the Padé family. So, method M8 is a new iterative scheme with global convergence. Indeed, our main aim in constructing iterative schemes for the matrix sign is to reach as high a convergence order as possible while also minimizing computational cost. Taking into account the difficulty of finding a general family of matrix sign iterations, we should consider other essential properties of iterations for calculating the sign of a matrix. The following part is the convergence analysis.

    Figure 1.  The basins of attraction for (1.6) (left) and (1.15) (right), for the polynomial $x^2-1=0$ (shaded by the number of iterations).
    Figure 2.  Basins of attraction for $a=-1$ (left) and $a=-2$ (right), for the polynomial $x^2-1=0$.
    Figure 3.  Basins of attraction for $a=1$ (left) and $a=2$ (right), for the polynomial $x^2-1=0$.
    Figure 4.  Basins of attraction for $a=-1/2$ (left) and $a=1/2$ (right), for the polynomial $x^2-1=0$.
    Figure 5.  Basins of attraction for $a=0$ (left) and $a=3/4$ (right), for the polynomial $x^2-1=0$.

    Here, we present a precise convergence analysis of the method (2.4) under some assumptions. The following theorem provides insight into the analysis.

    Theorem 2. Let $A\in\mathbb{C}^{n\times n}$ have no pure imaginary eigenvalues. Then, the iterates $\{X_k\}_{k=0}^{\infty}$ of (2.4), choosing $X_0=A$, converge to the matrix sign $S$.

    Proof. Let $A$ have a Jordan canonical form arranged as

    $T^{-1}AT=\Lambda=\begin{pmatrix}C&0\\0&N\end{pmatrix},$ (3.1)

    where $T$ is a nonsingular matrix and $C$, $N$ are the square Jordan blocks corresponding to the eigenvalues lying in $\mathbb{C}^-$ and $\mathbb{C}^+$, respectively. Denote by $\lambda_1,\dots,\lambda_q$ and $\lambda_{q+1},\dots,\lambda_n$ the values lying on the main diagonals of the blocks $C$ and $N$, respectively.

    Of course, recall that

    $\mathrm{sign}(A)=T\begin{pmatrix}-I_q&0\\0&I_{n-q}\end{pmatrix}T^{-1}.$ (3.2)

    Thus,

    $\mathrm{sign}(\Lambda)=\mathrm{sign}(T^{-1}AT)=T^{-1}\,\mathrm{sign}(A)\,T=\mathrm{diag}\bigl(\mathrm{sign}(\lambda_1),\dots,\mathrm{sign}(\lambda_q),\mathrm{sign}(\lambda_{q+1}),\dots,\mathrm{sign}(\lambda_n)\bigr).$ (3.3)

    Furthermore, defining $D_k=T^{-1}X_kT$, it follows from (2.4) that

    $D_{k+1}=\bigl[(2-16a+24a^2)D_k+(-40+128a+32a^2)D_k^3+(140+224a-112a^2)D_k^5+(344-256a+32a^2)D_k^7+(66-80a+24a^2)D_k^9\bigr]\times\bigl[(1-4a+4a^2)I+(-11+4a+52a^2)D_k^2+(-14+280a-56a^2)D_k^4+(322-56a-56a^2)D_k^6+(205-212a+52a^2)D_k^8+(9-12a+4a^2)D_k^{10}\bigr]^{-1}.$ (3.4)

    Notice that if $D_0$ is a diagonal matrix, then, by induction, all successive $D_k$ are diagonal matrices as well.

    From Eq (3.4), we can see that $\{D_k\}$ converges to $\mathrm{sign}(\Lambda)$. By re-arranging (3.4) as $n$ uncoupled scalar iterations to solve $x^2-1=0$, we can derive the following equation:

    $d_{k+1}^i=\bigl[(2-16a+24a^2)d_k^i+(-40+128a+32a^2)(d_k^i)^3+(140+224a-112a^2)(d_k^i)^5+(344-256a+32a^2)(d_k^i)^7+(66-80a+24a^2)(d_k^i)^9\bigr]\bigl[(1-4a+4a^2)+(-11+4a+52a^2)(d_k^i)^2+(-14+280a-56a^2)(d_k^i)^4+(322-56a-56a^2)(d_k^i)^6+(205-212a+52a^2)(d_k^i)^8+(9-12a+4a^2)(d_k^i)^{10}\bigr]^{-1},$ (3.5)

    where $d_k^i=(D_k)_{i,i}$ and $1\le i\le n$. Similarly, for all $1\le i\le n$, application of (3.4) and (3.5) leads to the convergence of $\{d_k^i\}$ to $\mathrm{sign}(\lambda_i)$.

    Since the eigenvalues of $A$ are not pure imaginary, we have $\mathrm{sign}(\lambda_i)=s_i=\pm 1$ from (3.5). Therefore, it follows that

    $\frac{d_{k+1}^i-1}{d_{k+1}^i+1}=-\frac{(d_k^i-1)^8\bigl(1+3d_k^i-2a(1+d_k^i)\bigr)^2}{(d_k^i+1)^8\bigl(1-3d_k^i-2a(1-d_k^i)\bigr)^2}.$ (3.6)

    Since $|d_0^i|=|\lambda_i|>0$ and $\bigl|\frac{d_0^i-1}{d_0^i+1}\bigr|<1$ when $\mathrm{Re}(\lambda_i)>0$ (the reciprocal ratio being less than one when $\mathrm{Re}(\lambda_i)<0$), we have $\lim_{k\to\infty}\bigl|\frac{d_{k+1}^i-1}{d_{k+1}^i+1}\bigr|=0$ and $\lim_{k\to\infty}|d_k^i|=1=|\mathrm{sign}(\lambda_i)|$. This shows that $\{d_k^i\}$ is convergent. Then, we obtain $\lim_{k\to\infty}D_k=\mathrm{sign}(\Lambda)$. Recalling $D_k=T^{-1}X_kT$, we have

    $\lim_{k\to\infty}X_k=T\Bigl(\lim_{k\to\infty}D_k\Bigr)T^{-1}=T\,\mathrm{sign}(\Lambda)\,T^{-1}=\mathrm{sign}(A).$ (3.7)

    Finally, this finishes the proof.
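The scalar relation (3.6) can be checked numerically; the sketch below does so for $a=0$, i.e., the scalar version of M7 in (2.12), at a few arbitrary sample points off the imaginary axis (the closed form on the right is the reading of (3.6) used here).

```python
# Numerical check of the scalar relation (3.6) for a = 0 (method M7);
# the sample points are arbitrary values off the imaginary axis.
def m7(d):
    num = 2*d - 40*d**3 + 140*d**5 + 344*d**7 + 66*d**9
    den = 1 - 11*d**2 - 14*d**4 + 322*d**6 + 205*d**8 + 9*d**10
    return num / den

def cayley(d):
    return (d - 1.0) / (d + 1.0)

vals = [0.7, 2.5, 1.3 + 0.4j, -1.8 + 0.2j]
lhs = [cayley(m7(d)) for d in vals]
# at a = 0 the a-dependent factors of (3.6) reduce to (1+3d)^2 and (1-3d)^2
rhs = [-cayley(d) ** 8 * ((1 + 3*d) / (1 - 3*d)) ** 2 for d in vals]
```

Since the magnitude of the Cayley ratio is raised to the eighth power at every step, its decay (for $\mathrm{Re}\,d>0$) or growth (for $\mathrm{Re}\,d<0$) is what drives $d_k^i$ to $\pm 1$.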

    Clearly, the $X_k$ are rational functions of $A$ and therefore, similar to $A$, commute with $S$. Note that $S^{-1}=S$, $S^2=I$, $S^{2j+1}=S$ and $S^{2j}=I$ for $j\ge 1$. Hence, it is easy to show that the method (2.4) satisfies the following error inequality:

    $\|X_{k+1}-S\|\le\|B_k^{-1}\|\,\bigl\|I+3X_k-2a(I+X_k)\bigr\|^2\,\|X_k-S\|^8,$ (3.8)

    where $B_k=(1-2a)^2I+(-11+4a+52a^2)X_k^2+(-14+280a-56a^2)X_k^4+(322-56a-56a^2)X_k^6+(205-212a+52a^2)X_k^8+(9-12a+4a^2)X_k^{10}$. The inequality (3.8) reveals the eighth order of convergence.

    This section analyzes the stability of (2.4) for finding $S$ in a neighborhood of the solution. We consider how a small perturbation at the $k$th iterate is amplified or damped along the iterates.

    Theorem 3. Let $A\in\mathbb{C}^{n\times n}$ have no pure imaginary eigenvalues. The sequence $\{X_k\}_{k=0}^{\infty}$ generated by (2.4) is asymptotically stable.

    Proof. If $X_0$ is a function of $A$, then the iterates of (2.4) are all functions of $A$ and thus commute with $A$. Let $\Delta_k$ be the numerical perturbation introduced at the $k$th iterate of (2.4). Then, we have

    $\tilde{X}_k=X_k+\Delta_k.$ (4.1)

    Here, we formally use the approximation $(\Delta_k)^i\approx 0$ for $i\ge 2$, since $\Delta_k$ is small and its higher powers can be neglected. In this case, we get

    $\tilde{X}_{k+1}=\bigl[(2-16a+24a^2)\tilde{X}_k+(-40+128a+32a^2)\tilde{X}_k^3+(140+224a-112a^2)\tilde{X}_k^5+(344-256a+32a^2)\tilde{X}_k^7+(66-80a+24a^2)\tilde{X}_k^9\bigr]\times\bigl[(1-4a+4a^2)I+(-11+4a+52a^2)\tilde{X}_k^2+(-14+280a-56a^2)\tilde{X}_k^4+(322-56a-56a^2)\tilde{X}_k^6+(205-212a+52a^2)\tilde{X}_k^8+(9-12a+4a^2)\tilde{X}_k^{10}\bigr]^{-1},$ (4.2)

    where each power $\tilde{X}_k^j=(X_k+\Delta_k)^j$ is expanded to first order in $\Delta_k$.

    By using the identity, for a nonsingular matrix $B$ and a small-norm matrix $C$ [2],

    $(B+C)^{-1}\approx B^{-1}-B^{-1}CB^{-1},$ (4.3)

    and using $X_k\approx S$ with $S^2=I$, Eq (4.2) can be written as

    $\tilde{X}_{k+1}\approx\bigl[512S+(2048-512a)\Delta_k+(1536-512a)S\Delta_kS\bigr]\bigl[512I+(1792-512a)S\Delta_k+(1792-512a)\Delta_kS\bigr]^{-1}\approx\bigl[S+(4-a)\Delta_k+(3-a)S\Delta_kS\bigr]\bigl[I-\bigl(\tfrac{7}{2}-a\bigr)S\Delta_k-\bigl(\tfrac{7}{2}-a\bigr)\Delta_kS\bigr]\approx S+\frac{1}{2}\Delta_k-\frac{1}{2}S\Delta_kS.$ (4.4)

    Further, applying $\Delta_{k+1}=\tilde{X}_{k+1}-X_{k+1}=\tilde{X}_{k+1}-S$, it is easy to show that

    $\Delta_{k+1}\approx\frac{1}{2}\Delta_k-\frac{1}{2}S\Delta_kS.$ (4.5)

    Since the map $\Delta\mapsto\frac{1}{2}(\Delta-S\Delta S)$ is idempotent, the perturbation at the $(k+1)$st iteration is bounded; that is, we have

    $\|\Delta_{k+1}\|\le\frac{1}{2}\bigl\|\Delta_0-S\Delta_0S\bigr\|.$ (4.6)

    To summarize, the sequence $\{X_k\}_{k=0}^{\infty}$ generated by (2.4) is stable. The proof is ended.
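The first-order recursion (4.5) can be probed numerically: the map $\Delta\mapsto\frac12(\Delta-S\Delta S)$ is idempotent, so repeated application does not amplify a perturbation. $S$ and $\Delta_0$ below are arbitrary illustrative choices with $S^2=I$.

```python
import numpy as np

# First-order perturbation map E -> (E - S E S)/2 from the stability proof;
# S and E0 are arbitrary illustrative choices with S^2 = I.
S = np.diag([1.0, -1.0, 1.0])
E0 = np.array([[0.30, -0.10, 0.20],
               [0.05, 0.40, -0.20],
               [0.10, 0.00, -0.30]])

def pert(E):
    return 0.5 * (E - S @ E @ S)

E1 = pert(E0)    # perturbation after one step
E2 = pert(E1)    # applying the map again changes nothing (idempotency)
```

Idempotency is exactly why the bound (4.6) is uniform in $k$: every later perturbation equals the once-projected initial one to first order.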

    This section describes some numerical experiments that demonstrate the effectiveness of the proposed method. We compare only iterative methods with global convergence; in the following tests, the compared schemes are NM, HM, PM4, M3, M6 and M8.

    For a fair comparison, all computations are performed on the same laptop, equipped with a 7th-generation Intel Core i5 CPU. The stopping criterion $\|X_k^2-I\|_2\le 10^{-4}$ is used in Tables 2 and 3. We have examined each method on 9 randomly generated matrices of sizes $5\times5$, $10\times10$, $20\times20$, $50\times50$, $100\times100$, $150\times150$, $200\times200$, $250\times250$ and $300\times300$; the numerical results are given in Tables 2 and 3, and their averages are recorded in the last line.
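A small harness in the spirit of Table 2, comparing Newton (1.6) with M8 from (2.13) on one $20\times 20$ matrix with a prescribed spectrum; the seed, spectrum and tolerance are illustrative choices, not the paper's test data.

```python
import numpy as np

# Iteration counts for Newton (1.6) vs. M8 (2.13) on a 20x20 test matrix
# with prescribed eigenvalues of both signs; all choices are illustrative.
rng = np.random.default_rng(0)
eigs = np.concatenate([np.linspace(0.5, 3.0, 10), -np.linspace(0.5, 3.0, 10)])
Qm = rng.standard_normal((20, 20))
A = Qm @ np.diag(eigs) @ np.linalg.inv(Qm)
I = np.eye(20)

def count_iters(step, A, tol=1e-4, max_iter=100):
    X = np.array(A)
    for k in range(1, max_iter + 1):
        X = step(X)
        if np.linalg.norm(X @ X - I, 2) <= tol:   # stopping rule of the text
            return k
    return max_iter

def newton_step(X):
    return 0.5 * (X + np.linalg.inv(X))

def m8_step(X):
    P = [I]
    for _ in range(5):
        P.append(P[-1] @ X @ X)                    # P[j] = X^(2j)
    num = X @ (14*P[0] + 296*P[1] + 980*P[2] + 680*P[3] + 78*P[4])
    den = P[0] + 85*P[1] + 658*P[2] + 994*P[3] + 301*P[4] + 9*P[5]
    return num @ np.linalg.inv(den)

n_nm = count_iters(newton_step, A)
n_m8 = count_iters(m8_step, A)
```

As in Table 2, the eighth-order method should meet the tolerance in noticeably fewer iterations than the quadratically convergent Newton iteration, though each M8 step is more expensive.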

    Table 2.  Comparisons of number of iterations.
    Matrix NM HM PM4 M3 M6 M8
    A5×5 11 8 6 4 5 3
    A10×10 11 9 7 4 5 4
    A20×20 14 8 7 5 5 4
    A50×50 16 11 8 5 6 5
    A100×100 18 12 9 6 6 5
    A150×150 21 12 10 6 7 6
    A200×200 23 13 10 7 6 6
    A250×250 22 13 11 7 7 7
    A300×300 23 15 12 8 7 7
    Mean 17.7 11.2 8.9 5.8 6 5.2

    Table 3.  Comparisons of elapsed time (s).
    Matrix NM HM PM4 M3 M6 M8
    A5×5 0.00448 0.00636 0.00432 0.00446 0.00435 0.00434
    A10×10 0.00390 0.00406 0.00142 0.00496 0.00487 0.00492
    A20×20 0.01830 0.02399 0.00457 0.01247 0.01757 0.01204
    A50×50 0.04240 0.04327 0.06968 0.03576 0.08218 0.02653
    A100×100 0.22346 0.19254 0.19842 0.09127 0.21662 0.09236
    A150×150 1.31892 1.24589 0.35491 0.28424 0.40908 0.24713
    A200×200 1.34231 1.53269 1.23860 0.54604 0.70944 0.52773
    A250×250 1.46769 1.57901 1.59231 1.21704 1.27971 1.12810
    A300×300 2.04023 1.91356 1.64894 1.82464 1.67087 1.66469
    Mean 0.71797 0.72682 0.56813 0.44676 0.48830 0.41198


    We study the behavior of the different methods for computing the sign of the well-known Wilson matrix:

    $A=\begin{pmatrix}10&7&8&7\\7&5&6&5\\8&6&10&9\\7&5&9&10\end{pmatrix},$ (5.1)

    where we use the stopping criterion $\|X_k^2-I\|_2\le 10^{-5}$ in Figure 6. We can see that the results show a stable behavior of the proposed iterative method for finding $S$.
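The Wilson matrix is symmetric positive definite, so its sign is the identity; the sketch below runs M8 from (2.13) on it as a sanity check. The initial scaling $X_0=A/\|A\|_2$, which leaves the sign unchanged, is a standard safeguard added here (it keeps the powers $X_k^{10}$ well conditioned for this badly conditioned matrix) and is not taken from the text.

```python
import numpy as np

# M8 (2.13) applied to the Wilson matrix (5.1); since A is symmetric positive
# definite, sign(A) = I. Initial norm scaling is a safeguard added here.
A = np.array([[10.0, 7.0, 8.0, 7.0],
              [7.0, 5.0, 6.0, 5.0],
              [8.0, 6.0, 10.0, 9.0],
              [7.0, 5.0, 9.0, 10.0]])
I = np.eye(4)

def m8_step(X):
    P = [I]
    for _ in range(5):
        P.append(P[-1] @ X @ X)                    # P[j] = X^(2j)
    num = X @ (14*P[0] + 296*P[1] + 980*P[2] + 680*P[3] + 78*P[4])
    den = P[0] + 85*P[1] + 658*P[2] + 994*P[3] + 301*P[4] + 9*P[5]
    return num @ np.linalg.inv(den)

X = A / np.linalg.norm(A, 2)    # sign(cA) = sign(A) for any c > 0
iters = 0
while np.linalg.norm(X @ X - I, 2) > 1e-5 and iters < 30:
    X = m8_step(X)
    iters += 1
```

The iteration drives every (positive) eigenvalue of the scaled Wilson matrix to 1, so the limit is the identity despite the condition number of about $3\times 10^3$.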

    Figure 6.  Convergence history of different methods in solving the Wilson matrix.

    Now, we investigate the problem of solving the algebraic Riccati equation (ARE) with matrix sign functions. Consider the continuous-time and discrete-time algebraic Riccati equations of the forms

    $XA+A^TX+Q-XBR^{-1}B^TX=0,$ (5.2)
    $A^TXA-X-A^TXB(R+B^TXB)^{-1}B^TXA+Q=0,$ (5.3)

    where $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times n}$, the matrix $Q=Q^T\in\mathbb{R}^{n\times n}$ is positive semi-definite, $R=R^T\in\mathbb{R}^{n\times n}$ is positive definite, and $X\in\mathbb{R}^{n\times n}$ is the unknown matrix. The solutions of the ARE are closely related to symplectic matrices and Hamiltonian matrices [8].

    First of all, we introduce the continuous-time algebraic Riccati equation. The stabilizing solution of the ARE (5.2) is a real matrix $X$ for which all eigenvalues of $A-BR^{-1}B^TX$ have negative real parts and, therefore,

    $\begin{pmatrix}A&-BR^{-1}B^T\\-Q&-A^T\end{pmatrix}\begin{pmatrix}I&0\\X&I\end{pmatrix}=\begin{pmatrix}I&0\\X&I\end{pmatrix}\begin{pmatrix}A-BR^{-1}B^TX&-BR^{-1}B^T\\0&-A^T+XBR^{-1}B^T\end{pmatrix}.$ (5.4)

    Moreover,

    $H=\begin{pmatrix}A&-BR^{-1}B^T\\-Q&-A^T\end{pmatrix}$ (5.5)

    is a Hamiltonian matrix. Thus, we have

    $L=\begin{pmatrix}L_{11}&L_{12}\\L_{21}&L_{22}\end{pmatrix}=\mathrm{sign}(H)=\begin{pmatrix}I&0\\X&I\end{pmatrix}\begin{pmatrix}-I&K\\0&I\end{pmatrix}\begin{pmatrix}I&0\\X&I\end{pmatrix}^{-1},$ (5.6)

    for a suitable matrix K.

    Now, $X$ satisfies

    $\begin{pmatrix}L_{11}&L_{12}\\L_{21}&L_{22}\end{pmatrix}\begin{pmatrix}I\\X\end{pmatrix}=-\begin{pmatrix}I\\X\end{pmatrix},$ (5.7)

    and, thus,

    $\begin{pmatrix}L_{12}\\L_{22}\end{pmatrix}X+\begin{pmatrix}L_{11}\\L_{21}\end{pmatrix}+\begin{pmatrix}I\\X\end{pmatrix}=0,$ (5.8)

    which implies

    $\begin{pmatrix}L_{12}\\L_{22}+I\end{pmatrix}X=-\begin{pmatrix}L_{11}+I\\L_{21}\end{pmatrix}.$ (5.9)

    Once the sign of $H$ is calculated, we can solve the overdetermined system (5.9) by a standard least-squares algorithm to get the required solution. Note that the matrix $H$ cannot have eigenvalues on the imaginary axis.
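The procedure (5.4)-(5.9) can be sketched in a few lines of NumPy; the data $A$, $B$, $Q$, $R$ below are small illustrative choices (not the matrices of (5.11) and (5.12)), and $\mathrm{sign}(H)$ is computed with Newton's iteration (1.6).

```python
import numpy as np

# Solving the continuous-time ARE (5.2) via sign(H), steps (5.4)-(5.9);
# the data A, B, Q, R are illustrative, not the example of Section 5.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)

G = B @ np.linalg.inv(R) @ B.T
H = np.block([[A, -G], [-Q, -A.T]])        # Hamiltonian matrix (5.5)

L = H.copy()                               # sign(H) by Newton's iteration (1.6)
for _ in range(60):
    L = 0.5 * (L + np.linalg.inv(L))

n = 2
L11, L12 = L[:n, :n], L[:n, n:]
L21, L22 = L[n:, :n], L[n:, n:]
# overdetermined system (5.9): [L12; L22 + I] X = -[L11 + I; L21]
lhs = np.vstack([L12, L22 + np.eye(n)])
rhs = -np.vstack([L11 + np.eye(n), L21])
X, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
```

Any globally convergent sign iteration (M8 included) could replace the Newton loop; the least-squares step then recovers the stabilizing solution of (5.2).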

    For the algebraic Riccati Eq (5.3), we have the symplectic matrix $M$ as follows:

    $M=\begin{pmatrix}A+BR^{-1}B^TA^{-T}Q&-BR^{-1}B^TA^{-T}\\-A^{-T}Q&A^{-T}\end{pmatrix}.$ (5.10)

    The details are not explained here; see [8] for more information.

    In this test, we use the stopping criterion $\|X_k^2-I\|\le 10^{-8}$. We apply the proposed scheme M8 to solve the ARE with the following matrices:

    A=182(7123028),B=(1001), (5.11)
    Q=141(474666645),R=(1073731). (5.12)

    We get

    M=12(53031382012037278283015027127). (5.13)

    By using method M8 (keeping 4 decimal places), it can be seen that

    sign(˜M)=(1.000000036.056918.98890.004919.978588.067435.96261.008835.943936.054117.98950.004418.9801). (5.14)

    The resulting matrix, which is the solution of (5.3), would be

    $X=\begin{pmatrix}0.1251&0.2500\\0.2504&1.6117\end{pmatrix}.$ (5.15)

    From the approximate solution (5.15), we see that method M8 is effective for solving the algebraic Riccati equation. Table 4 shows that the speed of solving the Riccati equation is further improved by using the M8 method. Tables 2 and 3 imply that, in terms of the number of iterations and the computational CPU time, M8 has the best performance in general. The results affirm the theoretical analysis, and the proposed method presents consistent convergence behavior in the numerical experiments.

    Table 4.  Results of comparisons for the algebraic Riccati equation.
    Method NM HM PM4 M8
    Iterations 6 5 4 2
    Time(s) 0.011512 0.011809 0.011409 0.010774
    Residual 2.2446e-009 7.4238e-011 7.7876e-011 7.2760e-011


    In this paper, we propose a new family of Chebyshev-Halley type iterative methods (2.4) with eighth-order convergence. We theoretically prove that the new method (2.4) is convergent and asymptotically stable. Method M8 is the special case of (2.4) with parameter $a=3/4$, which does not belong to the Padé family; thus, M8 is a new iterative scheme for solving the matrix sign function. Basins of attraction in Figures 1-5 show the convergence behaviors of the different methods; from Figure 5, we see that method M8 is globally convergent. Method M8 is applied to a random matrix, the Wilson matrix and a continuous-time algebraic Riccati equation. Numerical results show that method M8 costs less computing time and requires fewer iterations, which means that it has good convergence behavior.

    This research was supported by the National Natural Science Foundation of China (No. 61976027), the Natural Science Foundation of Liaoning Province (Nos. 2022-MS-371, 2019-ZD-0502), Educational Commission Foundation of Liaoning Province of China (Nos. LJKMZ20221492, LJKMZ20221498) and the Key Project of Bohai University (No. 0522xn078).

    The authors declare no conflict of interest.



    [1] M. Hernández, M. Salanova, A family of Chebyshev-Halley type methods, Int. J. Comput. Math., 47 (1993), 59–63. http://dx.doi.org/10.1080/00207169308804162 doi: 10.1080/00207169308804162
    [2] J. Gutiérrez, M. Hernández, A family of Chebyshev-Halley type methods in Banach spaces, Bull. Austral. Math. Soc., 55 (1997), 113–130. http://dx.doi.org/10.1017/S0004972700030586 doi: 10.1017/S0004972700030586
    [3] N. Osada, Chebyshev-Halley methods for analytic functions, J. Comput. Appl. Math., 216 (2008), 585–599. http://dx.doi.org/10.1016/j.cam.2007.06.020 doi: 10.1016/j.cam.2007.06.020
    [4] Y. Kim, R. Behl, S. Motsa, Higher-order efficient class of Chebyshev-Halley type methods, Appl. Math. Comput., 273 (2016), 1148–1159. http://dx.doi.org/10.1016/j.amc.2015.09.013 doi: 10.1016/j.amc.2015.09.013
    [5] S. Ivanov, Unified convergence analysis of Chebyshev-Halley methods for multiple polynomial zeros, Mathematics, 10 (2022), 135. http://dx.doi.org/10.3390/math10010135 doi: 10.3390/math10010135
    [6] Z. Bai, J. Demmel, Using the matrix sign function to compute invariant subspaces, SIAM J. Matrix Anal. Appl., 19 (1998), 205–225. http://dx.doi.org/10.1137/S0895479896297719 doi: 10.1137/S0895479896297719
    [7] R. Byers, C. He, V. Mehrmann, The matrix sign function method and the computation of invariant subspaces, SIAM J. Matrix Anal. Appl., 18 (1997), 615–632. http://dx.doi.org/10.1137/S0895479894277454 doi: 10.1137/S0895479894277454
    [8] C. Kenney, A. Laub, P. Papadopoulos, Matrix-sign algorithms for Riccati equations, IMA J. Math. Control I., 9 (1992), 331–344. http://dx.doi.org/10.1093/imamci/9.4.331 doi: 10.1093/imamci/9.4.331
    [9] N. Higham, Functions of matrices: theory and computation, Philadelphia: Society for Industrial and Applied Mathematics, 2008.
    [10] A. Norris, A. Shuvalov, A. Kutsenko, The matrix sign function for solving surface wave problems in homogeneous and laterally periodic elastic half-spaces, Wave Motion, 50 (2013), 1239–1250. http://dx.doi.org/10.1016/j.wavemoti.2013.03.010 doi: 10.1016/j.wavemoti.2013.03.010
    [11] J. van den Eshof, A. Frommer, T. Lippert, K. Schilling, H. van der Vorst, Numerical methods for the QCD overlap operator. I. Sign-function and error bounds, Comput. Phys. Commun., 146 (2002), 203–224. http://dx.doi.org/10.1016/S0010-4655(02)00455-1 doi: 10.1016/S0010-4655(02)00455-1
    [12] P. Benner, E. Quintana-Ortí, Solving stable generalized Lyapunov equations with the matrix sign function, Numerical Algorithms, 20 (1999), 75–100. http://dx.doi.org/10.1023/A:1019191431273 doi: 10.1023/A:1019191431273
    [13] F. Soleymani, P. Stanimirović, S. Shateyi, F. Khaksar Haghani, Approximating the matrix sign function using a novel iterative method, Abst. Appl. Anal., 2014 (2014), 105301. http://dx.doi.org/10.1155/2014/105301 doi: 10.1155/2014/105301
    [14] A. Soheili, F. Toutounian, F. Soleymani, A fast convergent numerical method for matrix sign function with application in SDEs, J. Comput. Appl. Math., 282 (2015), 167–178. http://dx.doi.org/10.1016/j.cam.2014.12.041 doi: 10.1016/j.cam.2014.12.041
    [15] A. Cordero, F. Soleymani, J. Torregrosa, M. Zaka Ullah, Numerically stable improved Chebyshev-Halley type schemes for matrix sign function, J. Comput. Appl. Math., 318 (2017), 189–198. http://dx.doi.org/10.1016/j.cam.2016.10.025 doi: 10.1016/j.cam.2016.10.025
    [16] X. Wang, W. Li, Stability analysis of simple root seeker for nonlinear equation, Axioms, 12 (2023), 215. http://dx.doi.org/10.3390/axioms12020215 doi: 10.3390/axioms12020215
    [17] X. Wang, X. Chen, Derivative-free Kurchatov-type accelerating iterative method for solving nonlinear systems: dynamics and applications, Fractal Fract., 6 (2022), 59. http://dx.doi.org/10.3390/fractalfract6020059 doi: 10.3390/fractalfract6020059
    [18] D. Jung, C. Chun, X. Wang, Construction of stable and globally convergent schemes for the matrix sign function, Linear Algebra Appl., 580 (2019), 14–36. http://dx.doi.org/10.1016/j.laa.2019.06.019 doi: 10.1016/j.laa.2019.06.019
    [19] J. Roberts, Linear model reduction and solution of the algebraic Riccati equation by use of the sign function, Int. J. Control, 32 (1980), 677–687. http://dx.doi.org/10.1080/00207178008922881 doi: 10.1080/00207178008922881
    [20] L. Shieh, Y. Tsay, C. Wang, Matrix sector functions and their applications to systems theory, IEE Proceedings D, 131 (1984), 171–181. http://dx.doi.org/10.1049/ip-d.1984.0029 doi: 10.1049/ip-d.1984.0029
    [21] B. Iannazzo, Numerical solution of certain nonlinear matrix equations, Ph. D. Thesis, Università di Pisa, 2007.
    [22] C. Kenney, A. Laub, Rational iterative methods for the matrix sign function, SIAM Matrix Anal. Appl., 12 (1991), 273–291. http://dx.doi.org/10.1137/0612020 doi: 10.1137/0612020
    [23] M. Misrikhanov, V. Ryabchenko, Matrix sign function in the problems of analysis and design of the linear systems, Autom. Remote Control, 69 (2008), 198–222. http://dx.doi.org/10.1134/S0005117908020033 doi: 10.1134/S0005117908020033
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
