Research article

Modified Newton-EHS method for solving nonlinear problems with complex symmetric Jacobian matrices

  • Received: 09 May 2023 Revised: 08 July 2023 Accepted: 27 July 2023 Published: 14 August 2023
  • MSC : 65F10, 65F50, 65H10

  • This manuscript is devoted to the study of numerical methods for a class of nonlinear problems. Instead of the standard Newton method, we suggest an efficient nonlinear solver, referred to as the modified Newton-EHS method, where "EHS" stands for Euler-extrapolated Hermitian-skew-Hermitian splitting. We construct this modified Newton-EHS method by using a modified Newton method as the outer iteration and the EHS method as the inner iteration. Furthermore, we derive the local and semilocal convergence properties of the proposed method under the Hölder condition. Finally, in order to show the feasibility and validity of our new method, we compare it with some other iterative methods in two numerical examples.

    Citation: Lv Zhang, Qingbiao Wu. Modified Newton-EHS method for solving nonlinear problems with complex symmetric Jacobian matrices[J]. AIMS Mathematics, 2023, 8(10): 24233-24253. doi: 10.3934/math.20231236




This manuscript is devoted to the study of numerical methods for large sparse nonlinear systems of the form

$$F(u)=0, \qquad (1.1)$$

where $F:D\subset\mathbb{C}^n\rightarrow\mathbb{C}^n$ is a continuously differentiable function. We assume that the Jacobian matrix $F'(u)$ of the function $F$ can be split as follows:

$$F'(u)=W(u)+iT(u), \qquad (1.2)$$

where $W(u)$ and $T(u)$ are both real symmetric matrices, and $i=\sqrt{-1}$ stands for the imaginary unit. Such nonlinear systems arise in applications in physics, scientific computing and engineering [1,2,3].

To solve systems of the form (1.1), one of the most commonly used methods is Newton's method, i.e.,

$$F'(u_k)s_k=-F(u_k),\quad k=0,1,2,\dots,\quad\text{with } u_{k+1}:=u_k+s_k. \qquad (1.3)$$

However, when the problem size becomes large, each step of the traditional Newton method becomes expensive. It is well known that the inexact Newton method [4] overcomes this difficulty and improves the iteration efficiency: in each step, only an inexact solution of Newton's equation (1.3) needs to be obtained. In this sense, we trade a bit of precision for better efficiency. Inexact Newton methods have been widely used in recent decades.

Algorithm 1.1. Inexact Newton method

1. Set an initial guess $u_0$.

2. Set some $\eta_k\in[0,1)$. For $k=0,1,\dots$, solve

$$F'(u_k)s_k=-F(u_k)$$

approximately, i.e., find some $s_k$ which satisfies

$$\|F(u_k)+F'(u_k)s_k\|\le\eta_k\|F(u_k)\|. \qquad (1.4)$$

3. Set $u_{k+1}=u_k+s_k$.
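To make the inner-outer structure concrete, here is a minimal Python sketch of Algorithm 1.1 (our own illustration, not the authors' code): the inner loop runs a simple stationary iteration until the residual test (1.4) holds. The function names and the gradient-type inner step are assumptions made only for this sketch; any convergent linear iteration can take its place.

```python
import numpy as np

def inexact_newton(F, dF, u0, eta=0.1, tol=1e-10, max_outer=50, max_inner=200):
    """Algorithm 1.1: solve F(u) = 0 with an inexact Newton outer iteration.

    F  : callable returning the residual vector F(u)
    dF : callable returning the Jacobian matrix F'(u)
    The inner solver is a gradient (Richardson-on-normal-equations) step,
    chosen only to illustrate the inner/outer structure.
    """
    u = u0.astype(complex)
    norm_F0 = np.linalg.norm(F(u0))
    for _ in range(max_outer):
        Fu = F(u)
        if np.linalg.norm(Fu) <= tol * norm_F0:
            break
        J = dF(u)
        # inner iteration: approximately solve J s = -Fu
        alpha = 1.0 / np.linalg.norm(J, 2) ** 2   # alpha < 2/sigma_max^2 ensures convergence
        s = np.zeros_like(u)
        for _ in range(max_inner):
            r = Fu + J @ s                        # residual appearing in (1.4)
            if np.linalg.norm(r) <= eta * np.linalg.norm(Fu):
                break
            s = s - alpha * (J.conj().T @ r)      # gradient step on the normal equations
        u = u + s
    return u
```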

Inexact Newton methods can be viewed as inner-outer iterative methods. The outer part is Newton's method, which is employed to generate the sequence $\{u_k\}$, and the inner iterations are linear iterative methods used inside Newton's method to solve the Newton equations approximately. This kind of inner-outer iteration scheme greatly improves the computational efficiency of the traditional Newton method. In the past few decades, a number of linear iterations, such as the classical splitting methods [5,6,7] and the modern Krylov subspace methods [9], have been utilized inside inexact Newton methods [10,11,12]. For some other recent research papers in the area, see [13,14,15].

In [16], Darvishi and Barati constructed a modified Newton method, which requires only one more evaluation of $F$ per step than Newton's method, while its R-order of convergence is at least three. Compared with Newton's method, the modified Newton method improves the convergence speed and the convergence order. It has also been used as the mainstream outer iteration of the previously mentioned iterative schemes in recent years; for examples, see [17,18,19,20,21,22,23]. The modified Newton method is as follows:

$$\begin{cases}v_k=u_k-F'(u_k)^{-1}F(u_k),\\ u_{k+1}=v_k-F'(u_k)^{-1}F(v_k),\end{cases}\qquad k=0,1,2,\dots. \qquad (1.5)$$
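The point of (1.5) is that both half-steps share the same Jacobian $F'(u_k)$, so one factorization serves two solves. A minimal direct-solver sketch in Python (our own illustration; the paper later replaces the exact solves with EHS inner iterations):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def modified_newton(F, dF, u0, tol=1e-10, max_iter=50):
    """Modified Newton method (1.5): one Jacobian factorization per outer
    step, reused for both the v_k and u_{k+1} solves."""
    u = u0.astype(complex)
    norm_F0 = np.linalg.norm(F(u0))
    for _ in range(max_iter):
        if np.linalg.norm(F(u)) <= tol * norm_F0:
            break
        lu, piv = lu_factor(dF(u))          # factor F'(u_k) once
        v = u - lu_solve((lu, piv), F(u))   # first half-step
        u = v - lu_solve((lu, piv), F(v))   # second half-step, same factorization
    return u
```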

In this manuscript, we use the Euler-extrapolated Hermitian and skew-Hermitian splitting (EHS) method as the inner solver of the modified Newton method to establish our new method. The rest of this manuscript is structured as follows. In Section 2, we review the EHS method and build the modified Newton-EHS (MN-EHS) method. In Sections 3 and 4, we analyze the local and semilocal convergence properties of the MN-EHS method under the Hölder condition, respectively. In Section 5, two numerical examples compare our method with some other methods from the earlier literature to reveal the computational efficiency of our new iteration scheme. Finally, a short summary is given in Section 6.

In [24], Li and Ma proposed the EHS method, which was constructed to solve complex symmetric linear systems of the form $Au=b$ efficiently. They proposed the Euler-extrapolated technique in that paper, and it serves as an efficient solver for large sparse linear systems. In this part, the EHS method [24] is briefly reviewed.

Let us consider complex symmetric linear systems of the form

$$Au=(W+iT)u=b,\qquad W,T\in\mathbb{R}^{n\times n},$$

where $W,T$ are both symmetric positive semi-definite matrices, $b\in\mathbb{C}^n$ is a known vector and $i=\sqrt{-1}$ is the imaginary unit.

The EHS iterative method can be simply represented by the following formula:

$$\big(\cos(\theta)W+\sin(\theta)T\big)u_{k+1}=i\big(\sin(\theta)W-\cos(\theta)T\big)u_k+e^{-i\theta}b,\qquad k=0,1,2,\dots,$$

where $\theta\in[0,\frac{\pi}{2}]$.

Notice that $e^{-i\theta}=\cos(\theta)-i\sin(\theta)$ according to Euler's formula, and $u_0\in\mathbb{C}^n$ is the initial guess.

Remark 2.1. There are some restrictions on the selection of the parameter $\theta$; see the convergence theorems (Theorems 3.1 and 3.2) in [24]. For more studies on Euler-extrapolated techniques, see [25,26,27].
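The fixed-point form above translates directly into code. Below is a dense-matrix sketch of the EHS iteration in Python (our own illustration; the function name and stopping rule are assumptions, and $\theta$ must satisfy the convergence conditions of [24] per Remark 2.1). In practice one would factor the real matrix $\cos(\theta)W+\sin(\theta)T$ once and reuse it every sweep.

```python
import numpy as np

def ehs_solve(W, T, b, theta, u0=None, tol=1e-12, max_iter=500):
    """EHS iteration for (W + iT)u = b:
    (cos(t)W + sin(t)T) u_{k+1} = i(sin(t)W - cos(t)T) u_k + e^{-it} b."""
    c, s = np.cos(theta), np.sin(theta)
    P = c * W + s * T                       # real coefficient matrix
    Q = 1j * (s * W - c * T)
    rhs_shift = np.exp(-1j * theta) * b
    u = np.zeros_like(b, dtype=complex) if u0 is None else u0.astype(complex)
    A = W + 1j * T
    for _ in range(max_iter):
        u = np.linalg.solve(P, Q @ u + rhs_shift)
        if np.linalg.norm(A @ u - b) <= tol * np.linalg.norm(b):
            break
    return u
```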

Now, we intend to establish the MN-EHS method. Suppose that

$$F'(u)=W(u)+iT(u),$$

where $W(u),T(u)\in\mathbb{R}^{n\times n}$ can be calculated by

$$W(u)=\frac{1}{2}\left(F'(u)+F'(u)^*\right)\qquad\text{and}\qquad T(u)=-\frac{i}{2}\left(F'(u)-F'(u)^*\right),$$

where $F'(u)^*$ denotes the conjugate transpose of $F'(u)$.

Algorithm 2.1. MN-EHS method

1. Let the initial guess $u_0$ be given. Set a nonnegative parameter $\theta\in[0,\frac{\pi}{2}]$, a positive constant $tol$ and two positive integer sequences $\{l_k\}_{k=0}^{\infty}$, $\{m_k\}_{k=0}^{\infty}$.

2. For $k=0,1,\dots$, until $\|F(u_k)\|\le tol\cdot\|F(u_0)\|$, do the following:

2.1. Set $d_{k,0}=h_{k,0}:=0$.

2.2. For $l=0,1,\dots,l_k-1$, apply the EHS method to the first linear subsystem in (1.5):

$$\big(\cos(\theta)W(u_k)+\sin(\theta)T(u_k)\big)d_{k,l+1}=i\big(\sin(\theta)W(u_k)-\cos(\theta)T(u_k)\big)d_{k,l}-e^{-i\theta}F(u_k);$$

obtain $d_{k,l_k}$ such that

$$\|F(u_k)+F'(u_k)d_{k,l_k}\|\le\eta_k\|F(u_k)\|,\quad\text{for some }\eta_k\in[0,1). \qquad (2.1)$$

2.3. Set $v_k=u_k+d_{k,l_k}$.

2.4. Compute $F(v_k)$.

2.5. For $m=0,1,2,\dots,m_k-1$, apply the EHS method to the second linear subsystem in (1.5):

$$\big(\cos(\theta)W(u_k)+\sin(\theta)T(u_k)\big)h_{k,m+1}=i\big(\sin(\theta)W(u_k)-\cos(\theta)T(u_k)\big)h_{k,m}-e^{-i\theta}F(v_k);$$

obtain $h_{k,m_k}$ such that

$$\|F(v_k)+F'(u_k)h_{k,m_k}\|\le\tilde{\eta}_k\|F(v_k)\|,\quad\text{for some }\tilde{\eta}_k\in[0,1). \qquad (2.2)$$

2.6. Set $u_{k+1}=v_k+h_{k,m_k}$.

3. End.
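A compact Python sketch of Algorithm 2.1 follows, assembling the pieces above (our own illustrative code, not the authors' implementation). Instead of prescribed step counts $l_k, m_k$, each inner EHS loop here simply runs until the residual tests (2.1)/(2.2) with tolerance $\eta$ hold; the splitting $W(u), T(u)$ is extracted from $F'(u)$ exactly as described before the algorithm.

```python
import numpy as np

def mn_ehs(F, dF, u0, theta, eta=0.1, tol=1e-10, max_outer=50, max_inner=200):
    """Sketch of Algorithm 2.1 (MN-EHS), dense matrices for simplicity."""
    def ehs_correction(J, rhs):
        # Approximately solve J d = -rhs by EHS, with the splitting
        # J = W + iT, W = (J + J*)/2, T = -i(J - J*)/2.
        W = (J + J.conj().T).real / 2
        T = ((J - J.conj().T) / 2j).real
        c, s = np.cos(theta), np.sin(theta)
        P = c * W + s * T
        Q = 1j * (s * W - c * T)
        d = np.zeros_like(rhs, dtype=complex)
        for _ in range(max_inner):
            if np.linalg.norm(rhs + J @ d) <= eta * np.linalg.norm(rhs):
                break                               # stopping tests (2.1)/(2.2)
            d = np.linalg.solve(P, Q @ d - np.exp(-1j * theta) * rhs)
        return d
    u = u0.astype(complex)
    norm_F0 = np.linalg.norm(F(u0))
    for _ in range(max_outer):
        if np.linalg.norm(F(u)) <= tol * norm_F0:
            break
        J = dF(u)
        v = u + ehs_correction(J, F(u))             # steps 2.2-2.3
        u = v + ehs_correction(J, F(v))             # steps 2.5-2.6
    return u
```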

Remark 2.2. For the needs of the subsequent study and derivation, we give the expressions of $d_{k,l_k}$ and $h_{k,m_k}$:

$$d_{k,l_k}=-\sum_{j=0}^{l_k-1}M(\theta;u_k)^jN(\theta;u_k)e^{-i\theta}F(u_k),\qquad h_{k,m_k}=-\sum_{j=0}^{m_k-1}M(\theta;u_k)^jN(\theta;u_k)e^{-i\theta}F(v_k),$$

where

$$M(\theta;u)=i\big(\cos(\theta)W(u)+\sin(\theta)T(u)\big)^{-1}\big(\sin(\theta)W(u)-\cos(\theta)T(u)\big),\qquad N(\theta;u)=\big(\cos(\theta)W(u)+\sin(\theta)T(u)\big)^{-1}.$$

After a straightforward derivation, we have

$$v_k=u_k-\sum_{j=0}^{l_k-1}M(\theta;u_k)^jN(\theta;u_k)e^{-i\theta}F(u_k),\qquad u_{k+1}=v_k-\sum_{j=0}^{m_k-1}M(\theta;u_k)^jN(\theta;u_k)e^{-i\theta}F(v_k). \qquad (2.3)$$

Define

$$B(\theta;u):=e^{i\theta}\big(\cos(\theta)W(u)+\sin(\theta)T(u)\big),\qquad C(\theta;u):=ie^{i\theta}\big(\sin(\theta)W(u)-\cos(\theta)T(u)\big). \qquad (2.4)$$

Then, the Jacobian matrix $F'(u)=B(\theta;u)-C(\theta;u)$, and

$$M(\theta;u)=B(\theta;u)^{-1}C(\theta;u),\qquad N(\theta;u)=e^{i\theta}B(\theta;u)^{-1}. \qquad (2.5)$$

Since $\sum_{j=0}^{l-1}M(\theta;u)^jN(\theta;u)e^{-i\theta}=\sum_{j=0}^{l-1}M(\theta;u)^jB(\theta;u)^{-1}=(I-M(\theta;u)^l)F'(u)^{-1}$, we can equivalently represent the MN-EHS method as follows:

$$v_k=u_k-\big(I-M(\theta;u_k)^{l_k}\big)F'(u_k)^{-1}F(u_k),\qquad u_{k+1}=v_k-\big(I-M(\theta;u_k)^{m_k}\big)F'(u_k)^{-1}F(v_k). \qquad (2.6)$$
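The identity behind (2.6) is easy to check numerically. The snippet below (our own test harness with randomly generated symmetric positive definite $W, T$) verifies $\sum_{j=0}^{l-1}M^jNe^{-i\theta}=(I-M^l)F'^{-1}$ up to rounding error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, theta = 8, 5, 0.7
# random symmetric positive definite W, T, so cos(t)W + sin(t)T is invertible
W = rng.standard_normal((n, n)); W = W @ W.T + n * np.eye(n)
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)
c, s = np.cos(theta), np.sin(theta)
J = W + 1j * T                                     # plays the role of F'(u)
P_inv = np.linalg.inv(c * W + s * T)
M = 1j * P_inv @ (s * W - c * T)                   # M(theta; u)
N = P_inv                                          # N(theta; u)
lhs = sum(np.linalg.matrix_power(M, j) @ N * np.exp(-1j * theta) for j in range(l))
rhs = (np.eye(n) - np.linalg.matrix_power(M, l)) @ np.linalg.inv(J)
print(np.allclose(lhs, rhs))                       # expect: True
```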

In this part, our main task is to derive the convergence properties of the proposed method. Let us begin with the local convergence: in this section, we analyze the local convergence property under the Hölder continuity condition, similarly to [28]. First of all, we state, without proof, the following Banach lemma.

Lemma 3.1. (Banach Lemma) Let $A,B\in\mathbb{C}^{n\times n}$ satisfy $\|I-BA\|<1$; then, the matrices $A,B$ are nonsingular. Moreover,

$$\|A^{-1}\|\le\frac{\|B\|}{1-\|I-BA\|},\qquad\|B^{-1}\|\le\frac{\|A\|}{1-\|I-BA\|},$$

and

$$\|A^{-1}-B\|\le\frac{\|B\|\,\|I-BA\|}{1-\|I-BA\|},\qquad\|A-B^{-1}\|\le\frac{\|A\|\,\|I-BA\|}{1-\|I-BA\|}.$$

Suppose that $F:D\subset\mathbb{C}^n\rightarrow\mathbb{C}^n$ is a G-differentiable function on $N_0\subset D$, where $N_0$ is a convex neighborhood of the point $u_*$ which satisfies $F(u_*)=0$. Its Jacobian matrix $F'(u)$ is continuous, positive definite and complex symmetric. For any $u\in D$, suppose that $F'(u)=W(u)+iT(u)$ is the splitting of the Jacobian matrix $F'(u)$. $N(u_*,r)$ denotes an open ball centered at $u_*$ with radius $r>0$.

Assumption 3.1. For arbitrary $u\in N(u_*,r)\subset N_0$, assume that the following conditions hold.

(1) (The Bounded Condition) There are positive constants $\beta$ and $\gamma$ such that

$$\max\{\|W(u_*)\|,\|T(u_*)\|\}\le\beta\qquad\text{and}\qquad\|F'(u_*)^{-1}\|\le\gamma.$$

(2) (The Hölder Condition) For some $p\in(0,1]$, there exist nonnegative constants $H_w$ and $H_t$ such that

$$\|W(u)-W(u_*)\|\le H_w\|u-u_*\|^p,\qquad\|T(u)-T(u_*)\|\le H_t\|u-u_*\|^p.$$

Lemma 3.2. Under Assumption 3.1, for any $u,v\in N(u_*,r)$, if $r\in\left(0,\left(\frac{1}{\gamma H}\right)^{1/p}\right)$, then $F'(u)^{-1}$ exists, and the following inequalities hold:

$$\|F'(u)-F'(u_*)\|\le H\|u-u_*\|^p,$$
$$\|F'(u)^{-1}\|\le S(u),$$
$$\|F(v)\|\le\frac{H}{p+1}\|v-u_*\|^{p+1}+2\beta\|v-u_*\|,$$
$$\|v-u_*-F'(u)^{-1}F(v)\|\le S(u)\left(\frac{H}{p+1}\|v-u_*\|^p+H\|u-u_*\|^p\right)\|v-u_*\|,$$

where $S(u):=\dfrac{\gamma}{1-\gamma H\|u-u_*\|^p}$ and $H:=H_w+H_t$.

Proof. According to the Hölder condition,

$$\|F'(u)-F'(u_*)\|=\|W(u)+iT(u)-W(u_*)-iT(u_*)\|\le\|W(u)-W(u_*)\|+\|T(u)-T(u_*)\|\le(H_w+H_t)\|u-u_*\|^p=H\|u-u_*\|^p.$$

Since $r\in\left(0,\left(\frac{1}{\gamma H}\right)^{1/p}\right)$, we have

$$\|F'(u_*)^{-1}(F'(u)-F'(u_*))\|\le\|F'(u_*)^{-1}\|\,\|F'(u)-F'(u_*)\|\le\gamma H\|u-u_*\|^p<1.$$

Then, according to $\|F'(u_*)^{-1}\|\le\gamma$ and Lemma 3.1, $F'(u)^{-1}$ exists and

$$\|F'(u)^{-1}\|\le\frac{\|F'(u_*)^{-1}\|}{1-\|F'(u_*)^{-1}(F'(u)-F'(u_*))\|}\le\frac{\gamma}{1-\gamma H\|u-u_*\|^p}=S(u).$$

By

$$\|F'(u_*)\|=\|W(u_*)+iT(u_*)\|\le\|W(u_*)\|+\|T(u_*)\|\le2\beta,$$
$$F(v)=F(v)-F(u_*)-F'(u_*)(v-u_*)+F'(u_*)(v-u_*)=\int_0^1\big(F'(u_*+t(v-u_*))-F'(u_*)\big)dt\,(v-u_*)+F'(u_*)(v-u_*),$$

we get

$$\|F(v)\|\le\|v-u_*\|\int_0^1\|F'(u_*+t(v-u_*))-F'(u_*)\|dt+\|F'(u_*)(v-u_*)\|\le\|v-u_*\|\int_0^1Ht^p\|v-u_*\|^pdt+2\beta\|v-u_*\|=\frac{H}{p+1}\|v-u_*\|^{p+1}+2\beta\|v-u_*\|.$$

As for the last inequality, since

$$v-u_*-F'(u)^{-1}F(v)=-F'(u)^{-1}\big(F(v)-F(u_*)-F'(u)(v-u_*)\big)=-F'(u)^{-1}\int_0^1\big(F'(u_*+t(v-u_*))-F'(u_*)\big)dt\,(v-u_*)-F'(u)^{-1}\big(F'(u_*)-F'(u)\big)(v-u_*),$$

it follows that

$$\|v-u_*-F'(u)^{-1}F(v)\|\le\|F'(u)^{-1}\|\left(\int_0^1\|F'(u_*+t(v-u_*))-F'(u_*)\|dt+\|F'(u)-F'(u_*)\|\right)\|v-u_*\|\le S(u)\left(\frac{H}{p+1}\|v-u_*\|^p+H\|u-u_*\|^p\right)\|v-u_*\|.$$

In the remainder of this article, we use the symbol $\lceil\cdot\rceil$ to represent the smallest integer that is no less than the corresponding real number.

Theorem 3.1. Under the conditions of Assumption 3.1 and Lemma 3.2, let $r\in(0,r_0)$, where $r_0=\min\{r_1,r_2,r_3\}$ and

$$r_1=\left(\frac{1}{2\gamma H}\right)^{1/p},\qquad r_2=\left(\frac{\tau\chi}{2\gamma H(2+\tau\chi)}\right)^{1/p},\qquad r_3=\left(\frac{(1+p)\big(1-2\beta\gamma[(\tau+1)\chi]^{\nu}\big)}{2(2+p)\gamma H}\right)^{1/p}.$$

The constant $\nu=\min\{l_*,m_*\}$ satisfies $\nu>-\frac{\ln(2\beta\gamma)}{\ln((\tau+1)\chi)}$, where $l_*=\liminf_{k\to\infty}l_k$, $m_*=\liminf_{k\to\infty}m_k$, $\tau\in\left(0,\frac{1-\chi}{\chi}\right)$ is a prescribed positive constant and $\chi\equiv\chi(\theta;u_*)=\|M(\theta;u_*)\|<1$.

Then, for any initial guess $u_0\in N(u_*,r)$ and any positive integer sequences $\{l_k\}_{k=0}^{\infty}$ and $\{m_k\}_{k=0}^{\infty}$ satisfying the above condition, the iteration sequence $\{u_k\}_{k=0}^{\infty}$ generated by the MN-EHS method converges to the exact solution $u_*$. In addition, we have

$$\limsup_{k\to\infty}\|u_k-u_*\|^{\frac{1}{k}}\le g(r_0^p;\nu)^2,$$

where

$$g(t^p;\nu):=\frac{\gamma}{1-\gamma Ht^p}\left(\frac{3+p}{1+p}Ht^p+2\beta[(\tau+1)\chi]^{\nu}\right).$$

Proof. Since $\|M(\theta;u_*)\|=\chi<1$,

$$\|B(\theta;u_*)^{-1}\|=\|(I-M(\theta;u_*))F'(u_*)^{-1}\|\le(1+\|M(\theta;u_*)\|)\|F'(u_*)^{-1}\|\le2\gamma.$$

Then,

$$B(\theta;u)-B(\theta;u_*)=e^{i\theta}\big(\cos(\theta)(W(u)-W(u_*))+\sin(\theta)(T(u)-T(u_*))\big),$$
$$C(\theta;u)-C(\theta;u_*)=ie^{i\theta}\big(\sin(\theta)(W(u)-W(u_*))-\cos(\theta)(T(u)-T(u_*))\big).$$

According to the Hölder condition, we have

$$\|B(\theta;u)-B(\theta;u_*)\|\le H\|u-u_*\|^p,\qquad\|C(\theta;u)-C(\theta;u_*)\|\le H\|u-u_*\|^p.$$

Based on Lemma 3.1,

$$\|B(\theta;u)^{-1}\|\le\frac{\|B(\theta;u_*)^{-1}\|}{1-\|B(\theta;u_*)^{-1}\|\,\|B(\theta;u)-B(\theta;u_*)\|}\le\frac{2\gamma}{1-2\gamma H\|u-u_*\|^p}.$$

Since

$$M(\theta;u)-M(\theta;u_*)=B(\theta;u)^{-1}C(\theta;u)-B(\theta;u_*)^{-1}C(\theta;u_*)=B(\theta;u)^{-1}\big((C(\theta;u)-C(\theta;u_*))-(B(\theta;u)-B(\theta;u_*))M(\theta;u_*)\big),$$

then

$$\|M(\theta;u)-M(\theta;u_*)\|\le\|B(\theta;u)^{-1}\|\big[\|C(\theta;u)-C(\theta;u_*)\|+\|B(\theta;u)-B(\theta;u_*)\|\,\|M(\theta;u_*)\|\big]\le\frac{2\gamma}{1-2\gamma H\|u-u_*\|^p}\cdot2H\|u-u_*\|^p=\frac{4\gamma H\|u-u_*\|^p}{1-2\gamma H\|u-u_*\|^p}.$$

Here, $r<r_2$ implies that $\dfrac{4\gamma H\|u-u_*\|^p}{1-2\gamma H\|u-u_*\|^p}<\tau\chi$.

Hence,

$$\|M(\theta;u)\|\le\|M(\theta;u)-M(\theta;u_*)\|+\|M(\theta;u_*)\|\le\frac{4\gamma H\|u-u_*\|^p}{1-2\gamma H\|u-u_*\|^p}+\chi\le(\tau+1)\chi.$$

Furthermore, we have

$$\begin{aligned}\|v_k-u_*\|&=\big\|u_k-u_*-\big(I-M(\theta;u_k)^{l_k}\big)F'(u_k)^{-1}F(u_k)\big\|\\&\le\|u_k-u_*-F'(u_k)^{-1}F(u_k)\|+\|M(\theta;u_k)^{l_k}F'(u_k)^{-1}F(u_k)\|\\&\le S(u_k)\left(\frac{H}{p+1}\|u_k-u_*\|^p+H\|u_k-u_*\|^p\right)\|u_k-u_*\|+[(\tau+1)\chi]^{l_k}S(u_k)\left(\frac{H}{p+1}\|u_k-u_*\|^{p+1}+2\beta\|u_k-u_*\|\right)\\&\le S(u_k)\left(\frac{3+p}{1+p}H\|u_k-u_*\|^p+2\beta[(\tau+1)\chi]^{l_k}\right)\|u_k-u_*\|\\&=g(\|u_k-u_*\|^p;l_k)\|u_k-u_*\|<g(r_0^p;\nu)\|u_k-u_*\|<\|u_k-u_*\|,\end{aligned}$$

and, similarly, we get

$$\begin{aligned}\|u_{k+1}-u_*\|&=\big\|v_k-u_*-\big(I-M(\theta;u_k)^{m_k}\big)F'(u_k)^{-1}F(v_k)\big\|\\&\le\|v_k-u_*-F'(u_k)^{-1}F(v_k)\|+\|M(\theta;u_k)^{m_k}F'(u_k)^{-1}F(v_k)\|\\&\le S(u_k)\left(\frac{H}{p+1}\|v_k-u_*\|^p+H\|u_k-u_*\|^p\right)\|v_k-u_*\|+[(\tau+1)\chi]^{m_k}S(u_k)\left(\frac{H}{p+1}\|v_k-u_*\|^{p+1}+2\beta\|v_k-u_*\|\right)\\&\le S(u_k)\left(\frac{3+p}{1+p}H\|u_k-u_*\|^p+2\beta[(\tau+1)\chi]^{m_k}\right)\|v_k-u_*\|\\&\le g(\|u_k-u_*\|^p;m_k)\|v_k-u_*\|\le g(r_0^p;\nu)^2\|u_k-u_*\|<\|u_k-u_*\|.\end{aligned}$$

Then, by induction, we can prove that $\{u_k\}_{k=0}^{\infty}\subset N(u_*,r)$. First, when $k=0$, $\|u_0-u_*\|<r<r_0$ and

$$\|u_1-u_*\|<g(\|u_0-u_*\|^p;\nu)^2\|u_0-u_*\|<\|u_0-u_*\|<r;$$

then, $u_1\in N(u_*,r)$ since $u_0\in N(u_*,r)$.

Now, suppose that $u_n\in N(u_*,r)$ for $k=n$; then, we have

$$\|u_{n+1}-u_*\|<g(r_0^p;\nu)^2\|u_n-u_*\|<g(r_0^p;\nu)^{2(n+1)}\|u_0-u_*\|<r,$$

which implies that $u_{n+1}\in N(u_*,r)$ for $k=n+1$. Moreover, $\|u_n-u_*\|\le g(r_0^p;\nu)^{2n}\|u_0-u_*\|\to0$ as $n\to\infty$, which also yields the asserted bound on $\limsup_{k\to\infty}\|u_k-u_*\|^{1/k}$.

The convergence property discussed in the previous section is local: the iteration converges provided that the initial guess is located in an open ball around the exact solution. In practical computation, it is hoped that the existence of a solution of the nonlinear system (1.1), as well as the convergence of the iteration, can be ascertained directly from conditions on the initial guess. Here, we put forward the semilocal convergence theorem of the MN-EHS method.

Assumption 4.1. For any $u_0\in N_0$, assume that the following conditions hold.

(1) (The Bounded Condition) There are positive constants $\beta$, $\gamma$ and $\delta$ such that

$$\max\{\|W(u_0)\|,\|T(u_0)\|\}\le\beta,\qquad\|F'(u_0)^{-1}\|\le\gamma\qquad\text{and}\qquad\|F(u_0)\|\le\delta. \qquad (4.1)$$

(2) (The Hölder Condition) There are two nonnegative constants $H_w$ and $H_t$ such that, for any $u,v\in N(u_0,r)\subset N_0$, the following inequalities are satisfied:

$$\|W(u)-W(v)\|\le H_w\|u-v\|^p, \qquad (4.2)$$
$$\|T(u)-T(v)\|\le H_t\|u-v\|^p. \qquad (4.3)$$

Lemma 4.1. Under Assumption 4.1, for any $u,v\in N(u_0,r)$, if $r\in\left(0,\left(\frac{1}{\gamma H}\right)^{1/p}\right)$, then $F'(u)^{-1}$ exists, and the following inequalities hold:

$$\|F'(u)-F'(v)\|\le H\|u-v\|^p,$$
$$\|F'(u)\|\le H\|u-u_0\|^p+2\beta,$$
$$\|F(u)-F(v)-F'(v)(u-v)\|\le\frac{H}{p+1}\|u-v\|^{p+1},$$
$$\|F'(u)^{-1}\|\le\frac{\gamma}{1-\gamma H\|u-u_0\|^p},$$

where $H:=H_w+H_t$.

Proof. The proof is omitted, since it is similar to that of Lemma 3.2.

Before giving the semilocal convergence theorem, we need some preparations; we construct two scalar sequences and give some lemmas.

First, we give two important functions:

$$\lambda(t)=\frac{a}{p+1}t^{p+1}-bt+c, \qquad (4.4)$$
$$\omega(t)=dt^p-1,\qquad t\in\left[0,\left(\tfrac{1}{d}\right)^{1/p}\right], \qquad (4.5)$$

where $a,b,c$ and $d$ are positive constants which satisfy

$$a>bd,\qquad 0<b\le1,\qquad d>0,\qquad\frac{p+1}{p}\cdot\frac{c}{b}<\left(\frac{b}{a}\right)^{1/p}.$$

Define the sequences $\{t_k\}$ and $\{s_k\}$ by

$$\begin{cases}t_0=0,\quad s_0=c,\\[2pt] s_k=t_k-\dfrac{\lambda(t_k)}{\omega(t_k)},\\[2pt] t_{k+1}=s_k-\dfrac{\lambda(s_k)}{\omega(t_k)}.\end{cases} \qquad (4.6)$$

Lemma 4.2. $\lambda(t)$ is decreasing in $\left[0,\left(\frac{b}{a}\right)^{1/p}\right)$ but increasing in $\left[\left(\frac{b}{a}\right)^{1/p},+\infty\right)$. Moreover, if

$$\frac{p+1}{p}\cdot\frac{c}{b}<\left(\frac{b}{a}\right)^{1/p},$$

then $\lambda(t)=0$ has two solutions $t_*$ and $t_{**}$ in $(0,+\infty)$, which satisfy

$$0<t_*<\frac{p+1}{p}\cdot\frac{c}{b}<t_{**}.$$

Proof. See Lemma 2.1 in [18].

Lemma 4.3. Suppose that the sequences $\{t_k\},\{s_k\}$ are generated by the formula (4.6), and $t_*$ is the smaller positive solution of $\lambda(t)=0$. Then, the sequences $\{t_k\}$ and $\{s_k\}$ increase, converge to $t_*$ and satisfy the following inequalities:

$$0\le t_k\le s_k\le t_{k+1}<t_*.$$

Proof. See Lemma 2.2 in [18].
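As a quick illustration of Lemma 4.3, the following snippet (our own check, with arbitrarily chosen constants satisfying the constraints after (4.5)) iterates (4.6) and watches $t_k$ and $s_k$ climb monotonically toward the smaller root $t_*$ of $\lambda$:

```python
import numpy as np

# constants chosen (arbitrarily) to satisfy: a > bd, 0 < b <= 1, and
# (p+1)/p * c/b < (b/a)^(1/p)
p, a, b, c, d = 1.0, 2.0, 1.0, 0.05, 1.0
lam = lambda t: a / (p + 1) * t ** (p + 1) - b * t + c
omega = lambda t: d * t ** p - 1

t = 0.0
for k in range(8):
    s = t - lam(t) / omega(t)            # s_k in (4.6)
    t_next = s - lam(s) / omega(t)       # t_{k+1} in (4.6)
    print(f"k={k}: t_k={t:.6f}  s_k={s:.6f}  t_(k+1)={t_next:.6f}")
    t = t_next
# for p = 1, lambda is quadratic, so the smaller root is available in closed form:
print("t_* =", (b - np.sqrt(b * b - 2 * a * c)) / a)
```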

The following theorem is the semilocal convergence theorem. Take $a=(1+\eta)H\gamma$, $b=1-\eta$, $c=(1+\eta)\gamma\delta$ and $d=H\gamma$ in (4.4) and (4.5), where $\eta\in[0,1)$ is a uniform upper bound of the inner tolerances $\eta_k$ and $\tilde{\eta}_k$.

Theorem 4.1. Set $r=\min\{r_1,r_2\}$ with

$$r_1=\left(\frac{(1+p)\big(1-2\beta\gamma[(\tau+1)\chi]^{\nu}\big)}{2(2+p)\gamma H}\right)^{1/p},\qquad r_2=\frac{1+p}{p}\cdot\frac{c}{b},$$

where the constant $\nu=\min\{l_*,m_*\}$ satisfies $\nu>-\frac{\ln(2\beta\gamma)}{\ln((\tau+1)\chi)}$; also, $l_*=\liminf_{k\to\infty}l_k$ and $m_*=\liminf_{k\to\infty}m_k$. $\tau\in\left(0,\frac{1-\chi}{\chi}\right)$ is a prescribed positive constant and $\chi\equiv\chi(\theta;u_0)=\|M(\theta;u_0)\|<1$.

Under the assumptions of Lemma 4.1, if

$$\left(\frac{(1+\eta)\gamma}{1-\eta}\right)^{\frac{1+p}{p}}H^{\frac{1}{p}}\delta<\frac{p}{1+p},$$

then the iteration sequence $\{u_k\}_{k=0}^{\infty}$ generated by the MN-EHS method is well defined and converges to $u_*$, which satisfies $F(u_*)=0$.

Proof. The following formulas hold and can be proved by induction:

$$\begin{cases}\|u_k-u_0\|\le t_k-t_0,\\[2pt]\|F(u_k)\|\le\dfrac{1}{(1+\eta)\gamma}\lambda(t_k),\\[2pt]\|v_k-u_k\|\le s_k-t_k,\\[2pt]\|F(v_k)\|\le\dfrac{1}{(1+\eta)\gamma}\lambda(s_k),\\[2pt]\|u_{k+1}-v_k\|\le t_{k+1}-s_k.\end{cases} \qquad (4.7)$$

We have

$$\|u_0-u_0\|=0=t_0-t_0,$$
$$\|F(u_0)\|\le\delta=\frac{c}{(1+\eta)\gamma}=\frac{\lambda(t_0)}{(1+\eta)\gamma},$$
$$\|v_0-u_0\|=\big\|\big(I-M(\theta;u_0)^{l_0}\big)F'(u_0)^{-1}F(u_0)\big\|\le(1+\chi^{l_0})\gamma\delta\le(1+\eta)\gamma\delta=s_0-t_0,$$
$$\begin{aligned}\|F(v_0)\|&\le\|F(v_0)-F(u_0)-F'(u_0)(v_0-u_0)\|+\|F(u_0)+F'(u_0)(v_0-u_0)\|\\&\le\frac{H}{p+1}\|v_0-u_0\|^{1+p}+\eta\|F(u_0)\|\le\frac{H}{p+1}s_0^{1+p}+\eta\delta\\&\le\frac{1}{(1+\eta)\gamma}\left(\frac{a}{p+1}s_0^{p+1}+\eta(\eta+1)\gamma\delta\right)=\frac{1}{(\eta+1)\gamma}\left(\frac{a}{p+1}s_0^{p+1}+(1-b)s_0\right)\\&=\frac{1}{(\eta+1)\gamma}\left(\frac{a}{p+1}s_0^{p+1}-bs_0+c\right)=\frac{\lambda(s_0)}{(\eta+1)\gamma},\end{aligned}$$
$$\|u_1-v_0\|\le\big\|\big(I-M(\theta;u_0)^{m_0}\big)F'(u_0)^{-1}F(v_0)\big\|\le(1+\chi^{m_0})\gamma\cdot\frac{1}{(\eta+1)\gamma}\lambda(s_0)\le-\frac{\lambda(s_0)}{\omega(t_0)}=t_1-s_0.$$

Here, we use an inequality introduced by Shen and Li in [29]: for any $k\ge1$, we have

$$\frac{H}{p+1}(t_k-t_{k-1})^{p+1}\le\frac{H}{p+1}t_k^{p+1}-\frac{H}{p+1}t_{k-1}^{p+1}-Ht_{k-1}^p(t_k-t_{k-1}),$$

since

$$t^{1+p}+(1+p)t\le(1+t)^{1+p}-1,\qquad t\ge0,$$

applied with $t=(t_k-t_{k-1})/t_{k-1}$ and multiplied by $\frac{H}{p+1}t_{k-1}^{1+p}$.

Now, by induction, for any $k$,

$$\|u_k-u_0\|\le\|u_k-v_{k-1}\|+\|v_{k-1}-u_{k-1}\|+\|u_{k-1}-u_0\|\le(t_k-s_{k-1})+(s_{k-1}-t_{k-1})+(t_{k-1}-t_0)=t_k-t_0<t_*<r.$$

Since $u_{k-1},v_{k-1}\in N(u_0,r)$, we have

$$\begin{aligned}\|F(u_k)\|&\le\|F(u_k)-F(v_{k-1})-F'(v_{k-1})(u_k-v_{k-1})\|+\|F(v_{k-1})+F'(v_{k-1})(u_k-v_{k-1})\|\\&\le\frac{H}{1+p}\|u_k-v_{k-1}\|^{1+p}+\eta\|F(v_{k-1})\|\le\frac{H}{1+p}(t_k-s_{k-1})^{1+p}+\frac{\eta}{(1+\eta)\gamma}\lambda(s_{k-1})\\&\le\frac{H}{1+p}t_k^{1+p}-\frac{H}{1+p}s_{k-1}^{1+p}-Hs_{k-1}^p(t_k-s_{k-1})+\frac{\eta}{(1+\eta)\gamma}\lambda(s_{k-1})\\&\le\frac{H}{1+p}t_k^{1+p}-\frac{H}{1+p}s_{k-1}^{1+p}-Ht_{k-1}^p(t_k-s_{k-1})+\frac{\eta}{(1+\eta)\gamma}\lambda(s_{k-1})\\&=\frac{1}{(1+\eta)\gamma}\left[\frac{a}{1+p}t_k^{1+p}-\frac{a}{1+p}s_{k-1}^{1+p}-H(1+\eta)\gamma t_{k-1}^p(t_k-s_{k-1})+\eta\lambda(s_{k-1})\right]\\&=\frac{1}{(1+\eta)\gamma}\left[\lambda(t_k)-\lambda(s_{k-1})+b(t_k-s_{k-1})-H\gamma(1+\eta)t_{k-1}^p(t_k-s_{k-1})+\eta\lambda(s_{k-1})\right]\\&=\frac{1}{(1+\eta)\gamma}\lambda(t_k)+\frac{1}{(1+\eta)\gamma}\left[-\frac{b\lambda(s_{k-1})}{\omega(t_{k-1})}+\frac{H\gamma(1+\eta)t_{k-1}^p\lambda(s_{k-1})}{\omega(t_{k-1})}+(\eta-1)\lambda(s_{k-1})\right]\\&=\frac{1}{(1+\eta)\gamma}\lambda(t_k)-\frac{1}{(1+\eta)\gamma}\cdot\frac{2\eta H\gamma t_{k-1}^p}{1-H\gamma t_{k-1}^p}\lambda(s_{k-1})\le\frac{1}{(1+\eta)\gamma}\lambda(t_k).\end{aligned}$$

Hence,

$$\|v_k-u_k\|\le\big\|\big(I-M(\theta;u_k)^{l_k}\big)F'(u_k)^{-1}F(u_k)\big\|\le\big(1+((1+\tau)\chi)^{l_k}\big)\|F'(u_k)^{-1}\|\,\|F(u_k)\|\le(1+\eta)\cdot\frac{\gamma}{1-\gamma Ht_k^p}\cdot\frac{\lambda(t_k)}{(1+\eta)\gamma}=-\frac{\lambda(t_k)}{\omega(t_k)}=s_k-t_k.$$

Similarly, we have

$$\begin{aligned}\|F(v_k)\|&\le\|F(v_k)-F(u_k)-F'(u_k)(v_k-u_k)\|+\|F(u_k)+F'(u_k)(v_k-u_k)\|\\&\le\frac{H}{1+p}\|v_k-u_k\|^{1+p}+\eta\|F(u_k)\|\le\frac{H}{1+p}(s_k-t_k)^{1+p}+\frac{\eta}{(1+\eta)\gamma}\lambda(t_k)\\&\le\frac{H}{1+p}s_k^{1+p}-\frac{H}{1+p}t_k^{1+p}-Ht_k^p(s_k-t_k)+\frac{\eta}{(1+\eta)\gamma}\lambda(t_k)\\&\le\frac{1}{(1+\eta)\gamma}\left[\lambda(s_k)-\lambda(t_k)+b(s_k-t_k)-H\gamma(1+\eta)t_k^p(s_k-t_k)+\eta\lambda(t_k)\right]\\&=\frac{\lambda(s_k)}{(1+\eta)\gamma}+\frac{1}{(1+\eta)\gamma}\left[-\frac{b\lambda(t_k)}{\omega(t_k)}+\frac{H\gamma(1+\eta)t_k^p\lambda(t_k)}{\omega(t_k)}+(\eta-1)\lambda(t_k)\right]\\&=\frac{\lambda(s_k)}{(1+\eta)\gamma}-\frac{1}{(1+\eta)\gamma}\cdot\frac{2\eta H\gamma t_k^p}{1-H\gamma t_k^p}\lambda(t_k)\le\frac{\lambda(s_k)}{(1+\eta)\gamma}.\end{aligned}$$

Consequently,

$$\|u_{k+1}-v_k\|\le\big\|\big(I-M(\theta;u_k)^{m_k}\big)F'(u_k)^{-1}F(v_k)\big\|\le(1+\eta)\cdot\frac{\gamma}{1-\gamma Ht_k^p}\cdot\frac{\lambda(s_k)}{(1+\eta)\gamma}=-\frac{\lambda(s_k)}{\omega(t_k)}=t_{k+1}-s_k.$$

Now, the formulas given by (4.7) have been proved by induction. Since the sequences $\{t_k\},\{s_k\}$ converge to $t_*$ and

$$\|u_{k+1}-u_0\|\le\|u_{k+1}-v_k\|+\|v_k-u_k\|+\|u_k-u_0\|\le(t_{k+1}-s_k)+(s_k-t_k)+(t_k-t_0)=t_{k+1}-t_0<t_*<r,$$

the sequence $\{u_k\}$ also converges; denote its limit by $u_*$. Letting $k\to\infty$ in $\|F(u_k)\|\le\frac{\lambda(t_k)}{(1+\eta)\gamma}$ and using $\lambda(t_*)=0$, we obtain $F(u_*)=0$.

Next, we demonstrate the validity of our new method via numerical examples. We chose two methods from previous papers, namely the modified Newton-PMHSS (MN-PMHSS) method [19] and the modified Newton-GSOR (MN-GSOR) method [23], for comparison with the MN-EHS method. In our computations, the CPU running time, denoted by "CPU time", was recorded with the "tic-toc" command.

All of the numerical results in the following examples were produced on a laptop with an AMD Ryzen 7 4800H 2.90 GHz processor and 16.00 GB RAM, using MATLAB version R2017b. The number of outer iteration steps is denoted by "Outer IT", and the number of inner iteration steps by "Inner IT". A thorny problem is the selection of the parameters in the iterations; in this study, we used the experimentally optimal parameters, i.e., a parameter was chosen when it minimized the corresponding iteration steps and errors. All of the important data are listed in the tables.

Example 5.1. What follows is a partial differential equation problem which can be converted to a nonlinear system:

$$\begin{cases}u_t-(\alpha_1+i\beta_1)(u_{xx}+u_{yy})+\kappa u=-(\alpha_2+i\beta_2)u^{4/3},&\text{in }(0,1]\times\Omega,\\ u(0,x,y)=u_0(x,y),&\text{in }\Omega,\\ u(t,x,y)=0,&\text{on }(0,1]\times\partial\Omega,\end{cases}$$

where $\Omega=(0,1)\times(0,1)$ and $\partial\Omega$ is the boundary of $\Omega$. The constant $\kappa$ measures the magnitude of the reaction term and is set as positive. We set the parameter values $\alpha_1=\alpha_2=1$ and $\beta_1=\beta_2=2$. This problem can be converted to a nonlinear system, as discussed in this manuscript, by using the central finite-difference scheme. The grid was set as equidistant, with step sizes $\Delta t=h=1/(N+1)$.

Here is the form of this nonlinear system:

$$F(u)=Mu+(\alpha_2+i\beta_2)h\Delta t\,\Psi(u)=0, \qquad (5.1)$$

where

$$M=h(1+\kappa\Delta t)I_n+(\alpha_1+i\beta_1)\frac{\Delta t}{h}(A_N\otimes I_N+I_N\otimes A_N),\qquad\Psi(u)=\left(u_1^{4/3},u_2^{4/3},\dots,u_n^{4/3}\right)^T,\qquad n=N\times N.$$

Notice that $A_N=\mathrm{tridiag}(-1,2,-1)$ is a tridiagonal matrix and $\otimes$ is the Kronecker product symbol.

It is obvious that $u_*=0$ is a solution of (5.1), and the Jacobian matrix of $F(u)$ can be easily worked out:

$$F'(u)=M+\frac{4}{3}(\alpha_2+i\beta_2)h\Delta t\,\mathrm{diag}\left(u_1^{1/3},u_2^{1/3},\dots,u_n^{1/3}\right).$$

In our experiment, the initial guess was chosen to be $u_0=\mathbf{1}$, the vector of all ones. The stopping criterion for the outer iteration was taken as

$$\frac{\|F(u_k)\|_2}{\|F(u_0)\|_2}\le10^{-10},$$

and $\eta_k=\tilde{\eta}_k=\eta=0.1$ is the tolerance of the inner iterations. Table 1 gives the optimal values of $\alpha$ or $\theta$ for the three methods.
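For reproducibility, here is a minimal sketch of how (5.1) and its Jacobian can be assembled with sparse Kronecker products (our own illustrative Python code; the function name is an assumption, and the fractional powers $u^{4/3}$, $u^{1/3}$ are evaluated with the principal branch for complex arguments):

```python
import numpy as np
import scipy.sparse as sp

def build_example_5_1(N, kappa, a1=1.0, b1=2.0, a2=1.0, b2=2.0):
    """Assemble F and F' for Eq. (5.1) on an N-by-N interior grid."""
    h = dt = 1.0 / (N + 1)
    n = N * N
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N), format="csr")  # A_N
    I = sp.identity(N, format="csr")
    M = (h * (1 + kappa * dt) * sp.identity(n)
         + (a1 + 1j * b1) * dt / h * (sp.kron(A, I) + sp.kron(I, A))).tocsr()
    coef = (a2 + 1j * b2) * h * dt
    F = lambda u: M @ u + coef * np.power(u.astype(complex), 4.0 / 3.0)
    dF = lambda u: M + coef * (4.0 / 3.0) * sp.diags(np.power(u.astype(complex), 1.0 / 3.0))
    return F, dF

# usage: F, dF = build_example_5_1(30, kappa=1.0); for the dense mn_ehs sketch
# above, convert the Jacobian with dF(u).toarray().
```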

Table 1. The optimal values of α or θ for the three methods.

              MN-PMHSS                MN-GSOR                 MN-EHS
    N     κ=1    κ=10   κ=100    κ=1    κ=10   κ=100    κ=1    κ=10   κ=100
    30    1.35   1.29   0.84     0.60   0.62   0.59     0.91   0.89   0.68
    60    1.23   1.18   0.84     0.59   0.60   0.58     0.80   0.78   0.67
    90    1.12   1.08   0.79     0.60   0.60   0.57     0.75   0.76   0.66

Tables 2, 3 and 4 show the experimental data for N=30, 60 and 90, comparing our MN-EHS method with the MN-PMHSS and MN-GSOR methods. To show how the parameter was chosen, Figure 1 plots how the inner iteration steps of the MN-EHS method change as the parameter varies. We employed the parameters minimizing the number of inner iteration steps as the optimal parameters; when the number of inner iteration steps was the same for different parameters, the one giving a smaller error was chosen.

Table 2. Experimental results for Example 5.1 when η=0.1, N=30.

    κ     Method     Residual        CPU time (s)   Outer IT   Inner IT
    1     MN-PMHSS   2.8097×10⁻¹²    0.1051         5          40
          MN-GSOR    8.9701×10⁻¹¹    0.0720         4          26
          MN-EHS     7.5151×10⁻¹²    0.0505         4          16
    10    MN-PMHSS   3.0000×10⁻¹²    0.1041         5          40
          MN-GSOR    5.8729×10⁻¹¹    0.0739         4          26
          MN-EHS     4.0716×10⁻¹¹    0.0574         4          18
    100   MN-PMHSS   2.4047×10⁻¹¹    0.1040         5          40
          MN-GSOR    4.0082×10⁻¹²    0.0897         5          30
          MN-EHS     1.0118×10⁻¹¹    0.0709         5          30
Table 3. Experimental results for Example 5.1 when η=0.1, N=60.

    κ     Method     Residual        CPU time (s)   Outer IT   Inner IT
    1     MN-PMHSS   3.6010×10⁻¹²    0.8060         5          40
          MN-GSOR    1.4244×10⁻¹¹    0.6079         4          29
          MN-EHS     2.6932×10⁻¹¹    0.5807         5          21
    10    MN-PMHSS   4.1899×10⁻¹²    0.7873         5          40
          MN-GSOR    1.2750×10⁻¹¹    0.5923         4          28
          MN-EHS     4.4046×10⁻¹²    0.5631         4          24
    100   MN-PMHSS   3.0525×10⁻¹¹    0.7938         5          40
          MN-GSOR    9.0779×10⁻¹²    0.7393         5          30
          MN-EHS     6.5092×10⁻¹¹    0.6728         5          30
Table 4. Experimental results for Example 5.1 when η=0.1, N=90.

    κ     Method     Residual        CPU time (s)   Outer IT   Inner IT
    1     MN-PMHSS   5.3181×10⁻¹²    4.2446         5          40
          MN-GSOR    4.7277×10⁻¹¹    3.1208         4          27
          MN-EHS     5.2068×10⁻¹¹    2.7840         4          24
    10    MN-PMHSS   6.1901×10⁻¹²    4.2938         5          40
          MN-GSOR    4.5188×10⁻¹¹    3.1155         4          27
          MN-EHS     6.8388×10⁻¹¹    2.7904         4          26
    100   MN-PMHSS   4.4007×10⁻¹¹    4.4003         5          40
          MN-GSOR    1.2694×10⁻¹¹    3.8304         5          30
          MN-EHS     4.5328×10⁻¹¹    2.9485         4          32

Figure 1. Inner iteration steps versus the parameter value (MN-EHS method).

According to the results in Tables 2, 3 and 4, the CPU time of the MN-EHS method is typically shorter, and its number of iterations smaller, than those of the MN-PMHSS and MN-GSOR methods as the constant κ and the problem size vary. This indicates that the MN-EHS method is more effective than the other two in this example.

Finally, the iteration steps of the MN-EHS method as the problem size varies are shown in Figures 2, 3 and 4. Broadly, the number of outer iteration steps exhibits almost no change, while the number of inner iteration steps increases with the problem size, though not sharply.

    Figure 2.  Steps of iterations versus N when κ=1 (MN-EHS method).
    Figure 3.  Steps of iterations versus N when κ=10 (MN-EHS method).
    Figure 4.  Steps of iterations versus N when κ=100 (MN-EHS method).

Example 5.2. Consider the nonlinear Helmholtz equation

$$-\Delta u+\sigma_1u+i\sigma_2u=-e^u, \qquad (5.2)$$

where $\sigma_1$ and $\sigma_2$ are real coefficients. Notice that the solution of this equation should satisfy the Dirichlet boundary condition on $D=[0,1]\times[0,1]$. After discretization with mesh size $h=1/(N+1)$, the nonlinear system has the form

$$F(x)=Mx+\Phi(x)=0,$$

where

$$M=(K+\sigma_1I)+i\sigma_2I,\qquad\Phi(x)=\left(e^{x_1},e^{x_2},\dots,e^{x_n}\right)^T,$$

with

$$K=I\otimes B_N+B_N\otimes I,$$

and $B_N=\frac{1}{h^2}\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{N\times N}$ is a tridiagonal matrix.

In this numerical experiment, we applied $\sigma_1=10^3$ and $\sigma_2=10^4$. The initial guess was taken as $x_0=\mathbf{0}$, the zero vector. The tolerance of the inner iterations was set as $\eta_k=\tilde{\eta}_k=\eta=0.1$, i.e., the same as in the first example. Slightly different from the first example, the stopping criterion for the outer iterations was set as

$$\frac{\|F(x_k)\|}{\|F(x_0)\|}\le10^{-6}.$$
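Under the same caveats as for Example 5.1 (our own illustrative code, not the authors' implementation), the assembly of this system is a short variation on the previous sketch:

```python
import numpy as np
import scipy.sparse as sp

def build_example_5_2(N, sigma1=1e3, sigma2=1e4):
    """Assemble F and F' for Example 5.2 on an N-by-N interior grid."""
    h = 1.0 / (N + 1)
    n = N * N
    B = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N), format="csr") / h**2  # B_N
    I = sp.identity(N, format="csr")
    K = sp.kron(I, B) + sp.kron(B, I)
    M = (K + sigma1 * sp.identity(n) + 1j * sigma2 * sp.identity(n)).tocsr()
    F = lambda x: M @ x + np.exp(x)
    dF = lambda x: M + sp.diags(np.exp(x))
    return F, dF
```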

Table 5 lists the experimental parameters we applied.

Table 5. The optimal values of α or θ for the three methods on Example 5.2.

    N    MN-PMHSS   MN-GSOR   MN-EHS
    30   1.86       0.18      1.26
    60   1.87       0.18      0.97
    90   1.85       0.18      0.87

Table 6, which displays the numerical results for N=30, 60 and 90, shows that the MN-EHS method still outperformed the other two methods in this example.

Table 6. Experimental results for Example 5.2.

    N    Method     Residual       CPU time (s)   Outer IT   Inner IT
    30   MN-PMHSS   9.2568×10⁻⁷    0.0584         3          30
         MN-GSOR    9.2682×10⁻⁸    0.1321         3          82
         MN-EHS     9.6867×10⁻⁹    0.0316         3          12
    60   MN-PMHSS   9.1223×10⁻⁷    0.3841         3          30
         MN-GSOR    1.0197×10⁻⁷    0.8784         3          82
         MN-EHS     4.3828×10⁻⁷    0.2859         3          24
    90   MN-PMHSS   9.0837×10⁻⁷    2.2419         3          30
         MN-GSOR    1.0356×10⁻⁷    4.8795         3          82
         MN-EHS     4.3284×10⁻⁷    1.8615         3          41

The main aim of this article was to present an iterative method for solving large sparse nonlinear systems with complex symmetric Jacobian matrices. Finding solutions of this type of nonlinear system is very important in practical applications across a large number of scientific computations. This paper presented the construction of the new MN-EHS method and gave derivations of its convergence properties. Two academic test examples arising from differential equations were given. Comparing the MN-EHS method with some methods from the existing literature, in the form of tables and data, the results indicate that our proposed method performs better than the existing methods on these types of problems.

    This work was supported by the National Natural Science Foundation of China (Grant no. 12271479, Research on some efficient and fast algorithms for complex nonlinear problems).

    The authors declare that they have no conflict of interest.



[1] C. Sulem, P. L. Sulem, The Nonlinear Schrödinger Equation: Self-focusing and Wave Collapse, Springer, New York, 1999.
    [2] L. S. Aranson, L. Kramer, The world of the complex Ginzburg-Landau equation, Rev. Mod. Phys., 74 (2002), 99–143.
[3] W. C. Rheinboldt, Methods for Solving Systems of Nonlinear Equations, SIAM, Philadelphia, 1998.
[4] R. S. Dembo, S. C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal., 19 (1982), 400–408. https://doi.org/10.1137/0719025 doi: 10.1137/0719025
    [5] Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603–626. https://doi.org/10.1137/S0895479801395458 doi: 10.1137/S0895479801395458
    [6] Z. Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algor., 56 (2011), 297–317. https://doi.org/10.1007/s11075-010-9441-6 doi: 10.1007/s11075-010-9441-6
[7] X. L. Cui, S. L. Wu, A single step HSS method for non-Hermitian positive definite linear systems, Appl. Math. Lett., 44 (2015), 26–29. https://doi.org/10.1016/j.aml.2014.12.013 doi: 10.1016/j.aml.2014.12.013
    [8] Z. G. Huang, L. G. Wang, Z. Xu, J. J. Cui, An efficient two-step iterative method for solving a class of complex symmetric linear systems, Comput. Math. Appl., 75 (2018), 2473–2498. https://doi.org/10.1016/j.camwa.2017.12.026 doi: 10.1016/j.camwa.2017.12.026
    [9] H. A. van der Vorst, Krylov subspace iteration, Comput. Sci. Eng., 2 (2000), 32–37. https://doi.org/10.1109/5992.814655 doi: 10.1109/5992.814655
    [10] S. Bellavia, M. Macconi, B. Morini, A globally convergent Newton-GMRES subspace method for systems of nonlinear equations, SIAM J. Sci. Comput., 23 (2001), 940–960. https://doi.org/10.1137/S1064827599363976 doi: 10.1137/S1064827599363976
    [11] Z. Z. Bai, X.-P. Guo, On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices, J. Comput. Math., 28 (2010), 235–260. https://doi.org/10.4208/jcm.2009.10-m2836 doi: 10.4208/jcm.2009.10-m2836
    [12] A. L. Yang, Y. J. Wu, Newton-MHSS methods for solving systems of nonlinear equations with complex symmetric Jacobian matrices, Numer. Alg. Con. Opt., 2 (2012), 839–853. https://doi.org/10.3934/naco.2012.2.839 doi: 10.3934/naco.2012.2.839
    [13] M. Aristizabal, J. L. Hernández-Estrada, M. Garcia, H. Millwater, Solution and sensitivity analysis of nonlinear equations using a hypercomplex-variable Newton-Raphson method, Appl. Math. Comput., 451 (2023), 127981. https://doi.org/10.1016/j.amc.2023.127981 doi: 10.1016/j.amc.2023.127981
    [14] A. M. Awwal, P. Kumam, A. B. Abubakar, A modified conjugate gradient method for monotone nonlinear equations with convex constraints, Appl. Numer. Math., 145 (2019), 507–520. https://doi.org/10.1016/j.apnum.2019.05.012 doi: 10.1016/j.apnum.2019.05.012
[15] A. B. Abubakar, P. Kumam, A. H. Ibrahim, J. Rilwan, Derivative-free HS-DY-type method for solving nonlinear equations and image restoration, Heliyon, 6 (2020), e05400. https://doi.org/10.1016/j.heliyon.2020.e05400 doi: 10.1016/j.heliyon.2020.e05400
    [16] M. T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput., 187 (2007), 630–635. https://doi.org/10.1016/j.amc.2006.08.080 doi: 10.1016/j.amc.2006.08.080
    [17] Q. B. Wu, M. H. Chen, Convergence analysis of modified Newton-HSS method for solving systems of nonlinear equations, Numer. Algor., 64 (2013), 659–683. https://doi.org/10.1007/s11075-012-9684-5 doi: 10.1007/s11075-012-9684-5
[18] M. H. Chen, Q. B. Wu, R. F. Lin, Semilocal convergence analysis for the modified Newton-HSS method under the Hölder condition, Numer. Algor., 72 (2016), 667–685. https://doi.org/10.1007/s11075-015-0061-z doi: 10.1007/s11075-015-0061-z
    [19] H. X. Zhong, G. L. Chen, X. P. Guo, On preconditioned modified Newton-MHSS method for systems of nonlinear equations with complex symmetric Jacobian matrices, Numer. Algor., 69 (2015), 553–567. https://doi.org/10.1007/s11075-014-9912-2 doi: 10.1007/s11075-014-9912-2
    [20] M. H. Chen, Q. B. Wu, On modified Newton–DGPMHSS method for solving nonlinear systems with complex symmetric Jacobian matrices, Comput. Math. Appl., 76 (2018), 45–57. https://doi.org/10.1016/j.camwa.2018.04.003 doi: 10.1016/j.camwa.2018.04.003
    [21] F. Xie, Q. B. Wu, P. F. Dai, Modified Newton-SHSS method for a class of systems of nonlinear equations, Comput. Appl. Math., 38 (2019), 19–37. https://doi.org/10.1007/s40314-019-0793-9 doi: 10.1007/s40314-019-0793-9
    [22] F. Xie, R. F. Lin, Q. B. Wu, Modified Newton-DSS method for solving a class of systems of nonlinear equations with complex symmetric Jacobian matrices, Numer. Algor., 85 (2020), 951–975. https://doi.org/10.1007/s11075-019-00847-y doi: 10.1007/s11075-019-00847-y
    [23] X. Qi, H. T. Qu, X. Y. Xiao, Modified Newton-GSOR method for solving complex nonlinear systems with symmetric Jacobian matrices, Comput. Appl. Math., 39 (2020), 165–182. https://doi.org/10.1007/s40314-020-01204-9 doi: 10.1007/s40314-020-01204-9
    [24] C. L. Li, C. F. Ma, On Euler-extrapolated Hermitian/skew-Hermitian splitting method for complex symmetric linear systems, Appl. Math. Lett., 86 (2018), 42–48. https://doi.org/10.1016/j.aml.2018.06.016 doi: 10.1016/j.aml.2018.06.016
    [25] X. Xie, H. B. Li, On preconditioned Euler-extrapolated single-step Hermitian and skew-Hermitian splitting method for complex symmetric linear systems, Jpn. J. Ind. Appl. Math., 38 (2021), 503–518. https://doi.org/10.1007/s13160-020-00447-7 doi: 10.1007/s13160-020-00447-7
    [26] C. L. Li, C. F. Ma, The inexact Euler-extrapolated block preconditioners for a class of complex linear systems, Appl. Math. Lett., 104 (2020), 106229. https://doi.org/10.1016/j.aml.2020.106229 doi: 10.1016/j.aml.2020.106229
    [27] C. L. Li, C. F. Ma, On Euler Preconditioned SHSS iterative method for a class of complex symmetric linear systems, ESAIM-Math. Model. Num., 53 (2019), 1607–1627. https://doi.org/10.1051/m2an/2019029 doi: 10.1051/m2an/2019029
    [28] M. H. Chen, R. F. Lin, Q. B. Wu, Convergence analysis of the modified Newton-HSS method under the Hölder continuous condition, J. Comput. Appl. Math., 264 (2014), 115–130. https://doi.org/10.1016/j.cam.2013.12.047 doi: 10.1016/j.cam.2013.12.047
    [29] W. P. Shen, C. Li, Convergence criterion of inexact methods for operators with Hölder continuous derivatives, Taiwanese J. Math., 12 (2008), 1865–1882. http://doi.org/10.11650/twjm/1500405093 doi: 10.11650/twjm/1500405093
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).