
Zero-stability of waveform relaxation methods for ordinary differential equations


Zero-stability is a basic property of numerical methods for ordinary differential equations (ODEs). Although waveform relaxation (WR) is an important class of numerical methods for ODEs, little work has been done on its zero-stability. In this paper we present a definition of zero-stability of WR methods and prove that several classes of WR methods are zero-stable under Lipschitz conditions. Some numerical examples are also given to illustrate the effectiveness of the developed results.

    Citation: Zhencheng Fan. Zero-stability of waveform relaxation methods for ordinary differential equations[J]. Electronic Research Archive, 2022, 30(3): 1126-1141. doi: 10.3934/era.2022060




Waveform relaxation (WR) is an iterative method for systems of ordinary differential equations (ODEs). It was first introduced by Lelarasmee et al. [1] for the time-domain analysis of large-scale nonlinear dynamical systems. Many studies have been devoted to WR methods, and most of them focus on convergence (see [2,3,4,5,6,7,8,9]). Zero-stability is a basic property of numerical methods for ODEs [10]. However, to the best of our knowledge, there exists so far little work on the zero-stability of WR methods.

We first present two examples which show that tiny perturbations may lead to huge changes in WR approximate solutions, and which thereby confirm that it is necessary to study the zero-stability of WR methods. For this purpose it is enough to consider a simple method and to suppose that only the initial values have tiny perturbations.

    Example 1. Consider the WR method

$x^{(k+1)}_{n+2} - (1+\alpha)x^{(k+1)}_{n+1} + \alpha x^{(k+1)}_{n} = -(1-\alpha)h\bigl(x^{(k+1)}_{n} + x^{(k)}_{n}\bigr)$ (1.1)

for numerically solving the initial value problem

$\dot{x}(t) = -2x(t),\ t\in[0,T];\qquad x(0) = x_0,$

which is obtained by applying the two-step method

$x_{n+2} - (1+\alpha)x_{n+1} + \alpha x_{n} = -2(1-\alpha)h\,x_{n},$

which is consistent for any $\alpha$ and zero-stable for $-1<\alpha<1$, to the iterative scheme

$\dot{x}^{(k+1)}(t) = -x^{(k+1)}(t) - x^{(k)}(t).$

    Suppose that

$N$ is a positive integer and $h = T/N$, (1.2)

$x^{(0)}_n = x_0,\ n=0,1,\ldots,N;\qquad x^{(k)}_0 = x_0,\ k=0,1,\ldots,$ (1.3)

and

$x^{(k+1)}_1 = x^{(k+1)}_0 - h\bigl(x^{(k+1)}_0 + x^{(k)}_0\bigr),\quad k=0,1,\ldots.$ (1.4)

We can obtain the approximate solution $\{\{x^{(k)}_n,\ n=0,1,\ldots,N\},\ k=0,1,\ldots\}$ by using Eqs (1.1)–(1.4). Suppose that tiny perturbations, denoted by $\delta$, cause the initial values to become

$\tilde{x}^{(0)}_n = x_0+\delta,\ n=0,1,\ldots,N;\qquad \tilde{x}^{(k)}_0 = x_0+\delta,\ k=0,1,\ldots,$ (1.5)

and that the resulting perturbed solution of Eq (1.1) is denoted by $\{\tilde{x}^{(k)}_n\}$. Let $\delta=1$ for simplicity and denote $\tilde{x}^{(k)}_n - x^{(k)}_n$ by $e^{(k)}_n$. Then we have

$e^{(k+1)}_{n+2} - (1+\alpha)e^{(k+1)}_{n+1} + \alpha e^{(k+1)}_{n} = -(1-\alpha)h\bigl(e^{(k+1)}_{n} + e^{(k)}_{n}\bigr)\quad (n=0,1,\ldots,N-2,\ k=0,1,\ldots),$
$e^{(k+1)}_0 = 1,\qquad e^{(k+1)}_1 = (1-2h)\,e^{(k+1)}_0\quad (k=0,1,\ldots),\qquad e^{(0)}_n = 1\quad (n=0,1,\ldots,N).$ (1.6)

    By virtue of Eq (1.6) the following numerical results are obtained.

The data in Tables 1 and 2 show that the errors in the approximate solution of Eq (1.1) caused by perturbations of the initial values are controllable for $\alpha=0.5$, but become unbounded for $\alpha=2$ as $h$ tends to zero.

Table 1.  Results obtained by (1.6) with $\alpha=0.5$.
N               10        100       250       500       750       1000
$e^{(5)}_N$     0.10621   0.1319    0.13343   0.13388   0.13402   0.13409
$e^{(10)}_N$    0.10621   0.13238   0.13422   0.13478   0.13497   0.13506
$e^{(100)}_N$   0.10621   0.13238   0.13422   0.13478   0.13497   0.13506

Table 2.  Results obtained by (1.6) with $\alpha=2$.
N               10        100          250          500          1000
$e^{(5)}_N$     268.74    6.2863e+028  3.7892e+073  3.4922e+148  5.7695e+298
$e^{(10)}_N$    268.74    6.2867e+028  3.7897e+073  3.4927e+148  5.7704e+298
$e^{(100)}_N$   268.74    6.2867e+028  3.7897e+073  3.4927e+148  5.7704e+298


In this numerical example, taking $\delta=1$ instead of a tiny perturbation may seem unreasonable. The main reason for this choice is that we are usually interested in the ratio $e^{(k)}_N/\delta$ rather than in $\delta$ itself. In fact we have also performed other experiments with smaller $\delta$, in which similar ratios were obtained for different $\delta$.
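The recursion (1.6) is simple enough to iterate directly. The following sketch (not the author's code; it assumes $T=1$, which is consistent with the limiting value $e^{-2}\approx 0.135$ seen in Table 1) reproduces the qualitative behaviour of Tables 1 and 2: bounded errors for $\alpha=0.5$ and blow-up for $\alpha=2$ as $h\to 0$.

```python
def error_recursion(alpha, N, K, T=1.0):
    """Iterate the error recursion (1.6) for K waveform sweeps and return e^(K)_N."""
    h = T / N
    e_prev = [1.0] * (N + 1)                 # e^(0)_n = 1 for all n
    for _ in range(K):
        e = [0.0] * (N + 1)
        e[0] = 1.0                           # e^(k+1)_0 = 1
        e[1] = (1.0 - 2.0 * h) * e[0]        # e^(k+1)_1 = (1 - 2h) e^(k+1)_0
        for n in range(N - 1):
            # e^(k+1)_{n+2} = (1+alpha) e^(k+1)_{n+1} - alpha e^(k+1)_n
            #                 - (1-alpha) h (e^(k+1)_n + e^(k)_n)
            e[n + 2] = ((1 + alpha) * e[n + 1] - alpha * e[n]
                        - (1 - alpha) * h * (e[n] + e_prev[n]))
        e_prev = e
    return e_prev[N]

for N in (10, 100, 1000):
    print(N, error_recursion(0.5, N, K=10), error_recursion(2.0, N, K=10))
```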

    Example 2. Consider the one-step WR method

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + h\bigl(500(x^{(k+1)}_{n})^{1.001} + 500(x^{(k)}_{n})^{1.001}\bigr),\quad n=0,1,\ldots,\tfrac{2}{h};\qquad x^{(k+1)}_0 = 0;\ x^{(0)}_n = 0\ \text{for all}\ n,$ (1.7)

where $2/h$ is an integer. Suppose that the initial value $0$ becomes $\delta$ under the influence of perturbations and that the solution $x^{(k)}_n$ correspondingly becomes $\tilde{x}^{(k)}_n$, which satisfies

$\tilde{x}^{(k+1)}_{n+1} = \tilde{x}^{(k+1)}_{n} + h\bigl(500(\tilde{x}^{(k+1)}_{n})^{1.001} + 500(\tilde{x}^{(k)}_{n})^{1.001}\bigr),\quad n=0,1,\ldots,\tfrac{2}{h};\qquad \tilde{x}^{(k+1)}_0 = \delta;\ \tilde{x}^{(0)}_n = \delta\ \text{for all}\ n.$ (1.8)

Clearly the numerical solution generated by Eq (1.7) is identically zero, and Eq (1.8) can be regarded as a WR method for solving the following initial value problem

$\dot{x}(t) = 1000(x(t))^{1.001},\ t\in[0,2];\qquad x(0)=\delta.$ (1.9)

It is easy to derive that $t_0 = 1/\delta^{0.001}$ is the blow-up time of the solution of Eq (1.9) and that $t_0\in[0,2]$ whenever $\delta > 10^{-301}$. Consequently, $\sup_{0<t<2}x(t) = +\infty$ for any $\delta > 10^{-301}$. If the "2" in Eqs (1.8) and (1.9) is replaced by $t_0-\varepsilon$, where $\varepsilon>0$ is small enough, then Eq (1.8) converges to Eq (1.9), that is,

$\lim_{h\to 0,\,k\to\infty}\ \sup_{0\le n\le (t_0-\varepsilon)/h}\bigl|\tilde{x}^{(k)}_n - x(nh)\bigr| = 0,$

see [4]. Thus

$\lim_{h\to 0,\,k\to\infty}\ \sup_{0\le n\le 2/h}\bigl|\tilde{x}^{(k)}_n\bigr| = +\infty\quad \text{for any}\ \delta > 10^{-301},$

that is, the change in the solution of Eq (1.8) caused by the tiny change in the initial values is unbounded as $h$ tends to zero.
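A small computation illustrates this. The sketch below (an illustration, not the author's code; the value $\delta = 10^{-4}$ and the cap used to detect blow-up are assumptions) iterates the perturbed scheme (1.8): since the blow-up time of Eq (1.9) is $t_0 = 1/\delta^{0.001}\approx 1.009 < 2$, the perturbed iterates exceed any fixed bound once $h$ is small enough, while the unperturbed scheme (1.7) stays identically zero.

```python
def perturbed_wr(delta, h, sweeps, cap=1e100):
    """Iterate the perturbed one-step WR scheme (1.8) on [0, 2].

    Returns the largest iterate seen, or inf once the values pass `cap`
    (a practical stand-in for numerical blow-up)."""
    N = int(round(2.0 / h))                  # 2/h is assumed to be an integer
    x_prev = [delta] * (N + 1)               # x~^(0)_n = delta for all n
    for _ in range(sweeps):
        x = [delta] * (N + 1)                # x~^(k+1)_0 = delta
        for n in range(N):
            x[n + 1] = x[n] + h * (500.0 * x[n] ** 1.001
                                   + 500.0 * x_prev[n] ** 1.001)
            if x[n + 1] > cap:
                return float("inf")
        x_prev = x
    return max(x_prev)

delta = 1e-4
for h in (0.1, 0.01, 0.001):
    print(h, perturbed_wr(delta, h, sweeps=20))
```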

The above two examples show that tiny perturbations can lead to huge changes in WR approximate solutions, although the methods used in the examples are too simple to be of practical use.

A numerical method is said to be zero-stable if the change in its solution caused by tiny perturbations remains controllable as $h$ tends to zero (see [10]). In this paper we explore which conditions guarantee the zero-stability of WR methods.

Consider the initial value problem

$\dot{x}(t) = f(t,x(t)),\ t\in[0,T];\qquad x(0)=x_0\quad (x(t)\in\mathbb{R}^d).$

Taking a splitting function $F$ with $F(t,x,x)=f(t,x)$, we can construct the iterative scheme

$\dot{x}^{(k+1)}(t) = F\bigl(t, x^{(k+1)}(t), x^{(k)}(t)\bigr),\ t\in[0,T],\ k=0,1,\ldots;\qquad x^{(0)}(t)\equiv x_0,\quad x^{(k+1)}(0)=x_0.$ (2.1)

Using one-step methods with variable step-size to discretize Eq (2.1), one arrives at

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + h_n\Phi\bigl(t_n, h_n, x^{(k+1)}_{n+1}, x^{(k+1)}_{n}, x^{(k)}_{n+1}, x^{(k)}_{n}\bigr);\qquad x^{(k+1)}_0 = x_0,$ (2.2)

where $n=0,1,\ldots,N-1$, $k=0,1,\ldots$, $\sum_{n=0}^{N-1}h_n = T$ and $x^{(0)}_n = x_0$ for all $n$. Applying a class of multi-step methods with fixed step-size to Eq (2.1), we get

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + h\Psi\bigl(t_n, h, x^{(k+1)}_{n+1}, x^{(k+1)}_{n}, \ldots, x^{(k+1)}_{n-p}, x^{(k)}_{n+1}, x^{(k)}_{n}, \ldots, x^{(k)}_{n-p}\bigr),$ (2.3)

where $n=p,p+1,\ldots,N-1$, $k=0,1,\ldots$, $h=T/N$, and the initial values other than $x^{(0)}_n\ (n=0,1,\ldots,N)$ and $x^{(k)}_0\ (k=0,1,\ldots)$ are generated by a suitable one-step method. Here we take the initial values as

$x^{(k)}_n = c^{(k)}_n,\quad n=0,1,\ldots,p,\ k=0,1,\ldots\qquad (c^{(k)}_n = x_0\ \text{if}\ nk=0).$ (2.4)

A classical example of Eq (2.3) is the Adams-type linear multi-step methods:

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + h\sum_{j=0}^{p+1}\beta_j F\bigl(t_{n+1-j}, x^{(k+1)}_{n+1-j}, x^{(k)}_{n+1-j}\bigr),\quad n=p,p+1,\ldots,N-1,$

where $\beta_j$, $j=0,1,\ldots,p+1$, are constants. Linear multi-step methods generally use a fixed step-size, and therefore we consider Eq (2.3) only with a fixed step-size.
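To make the schemes above concrete, the sketch below (illustrative only; the explicit Euler choice $\Phi = F(t_n, x^{(k+1)}_n, x^{(k)}_n)$ and the Jacobi-type splitting $F(t,u,v)=A_1u+A_2v$ are assumptions, not the only possibilities covered by Eq (2.2)) implements the sweep structure of Eq (2.2) with variable step-sizes.

```python
import numpy as np

def wr_one_step(F, x0, t_grid, num_sweeps):
    """WR scheme (2.2) with the explicit Euler choice of Phi.

    t_grid is the mesh t_0 < t_1 < ... < t_N (variable step-sizes allowed);
    returns the iterate x^(num_sweeps) at all mesh points."""
    N = len(t_grid) - 1
    x_old = np.tile(np.asarray(x0, dtype=float), (N + 1, 1))  # x^(0)_n = x0
    for _ in range(num_sweeps):
        x_new = np.empty_like(x_old)
        x_new[0] = x0                                         # x^(k+1)_0 = x0
        for n in range(N):
            h_n = t_grid[n + 1] - t_grid[n]
            x_new[n + 1] = x_new[n] + h_n * F(t_grid[n], x_new[n], x_old[n])
        x_old = x_new
    return x_old

# Example splitting for f(t, x) = A x: F(t, u, v) = A1 u + A2 v with A1 the
# diagonal part of A, so that F(t, x, x) = f(t, x) as required.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
A1 = np.diag(np.diag(A))
A2 = A - A1
F = lambda t, u, v: A1 @ u + A2 @ v
sol = wr_one_step(F, x0=[1.0, 0.0], t_grid=np.linspace(0.0, 1.0, 101), num_sweeps=8)
print(sol[-1])
```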

Let ${}_ix^{(k)}$ denote the $i$-th component block of $x^{(k)}$, that is,

$x^{(k)} = \bigl(({}_1x^{(k)})^{T}, ({}_2x^{(k)})^{T}, \ldots, ({}_mx^{(k)})^{T}\bigr)^{T},$

where ${}_ix^{(k)}\in\mathbb{R}^{d_i}$, $x^{(k)}\in\mathbb{R}^{d}$ and $d_1+d_2+\cdots+d_m=d$. Then Eq (2.1) can be rewritten as

${}_i\dot{x}^{(k+1)} = F_i\bigl(t, {}_1x^{(k+1)}, {}_2x^{(k+1)}, \ldots, {}_mx^{(k+1)}, {}_1x^{(k)}, {}_2x^{(k)}, \ldots, {}_mx^{(k)}\bigr),$

with $i=1,2,\ldots,m$; that is, the large system Eq (2.1) is divided into $m$ subsystems. When taking

$F_i(t,\cdot) = f_i\bigl(t, {}_1x^{(k)}, \ldots, {}_{i-1}x^{(k)}, {}_ix^{(k+1)}, {}_{i+1}x^{(k)}, \ldots, {}_mx^{(k)}\bigr),\quad i=1,2,\ldots,m,$

one arrives at the Gauss-Jacobi WR method, which is used frequently in actual computation; it consists of $m$ independent subsystems and is hence parallel in nature (see [2,6,9]). In Eq (2.2) a single mesh $0=t_0<t_1<\cdots<t_N=T$ is applied to all subsystems. However, the subsystems of Gauss-Jacobi WR methods are independent and may have quite different behaviors, and consequently it is better to use different meshes for subsystems with different behaviors. Applying the mesh

$0 = {}_it_0 < {}_it_1 < \cdots < {}_it_{N_i} = T,\qquad {}_ih_j = {}_it_{j+1} - {}_it_j,\ j=0,1,\ldots,N_i-1,$

to the $i$-th subsystem yields the following multi-rate WR method

${}_ix^{(k+1)}_{n+1} = {}_ix^{(k+1)}_{n} + {}_ih_n\,\Phi_i\bigl({}_it_n, {}_ih_n, {}_1y^{(k)}({}_it_{n+1}), \ldots, {}_{i-1}y^{(k)}({}_it_{n+1}), {}_ix^{(k+1)}_{n+1}, {}_{i+1}y^{(k)}({}_it_{n+1}), \ldots, {}_my^{(k)}({}_it_{n+1}),$
$\qquad\qquad {}_1y^{(k)}({}_it_n), \ldots, {}_{i-1}y^{(k)}({}_it_n), {}_ix^{(k+1)}_{n}, {}_{i+1}y^{(k)}({}_it_n), \ldots, {}_my^{(k)}({}_it_n)\bigr),\quad n=0,1,\ldots,N_i-1,$ (2.5)

where $i=1,2,\ldots,m$, $k=0,1,\ldots$. Here the initial values are ${}_ix^{(k+1)}_0 = {}_ix^{(0)}_n = {}_ix_0$ for all $k$ and $n$, with $({}_1x_0^{T}, {}_2x_0^{T}, \ldots, {}_mx_0^{T})^{T} = x_0$, and ${}_jy^{(k)}(t)$ is the interpolation function satisfying ${}_jy^{(k)}({}_jt_n) = {}_jx^{(k)}_n$ for $n=0,1,\ldots,N_j$.
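The defining ingredient of Eq (2.5) is that subsystem $i$ sees the previous iterate of subsystem $j$ only through the interpolant ${}_jy^{(k)}(t)$ built on subsystem $j$'s own mesh. The sketch below (piecewise-linear interpolation is an assumption here; the analysis later only requires a bound of the form (3.9)) shows how such an interpolant can be constructed and then sampled on another subsystem's mesh.

```python
import numpy as np

def make_interpolant(t_mesh_j, x_vals_j):
    """Return _j y^(k)(t): an interpolant of the nodal values _j x^(k)_n."""
    t_mesh_j = np.asarray(t_mesh_j, dtype=float)
    x_vals_j = np.asarray(x_vals_j, dtype=float)     # shape (N_j + 1, d_j)

    def y(t):
        # piecewise-linear interpolation, component by component
        return np.array([np.interp(t, t_mesh_j, x_vals_j[:, c])
                         for c in range(x_vals_j.shape[1])])
    return y

# Usage: subsystem 2 lives on a coarse mesh, subsystem 1 samples it on a fine one.
t2 = np.linspace(0.0, 1.0, 6)                        # mesh {_2 t_n} of subsystem 2
x2 = np.sin(t2).reshape(-1, 1)                       # its previous iterate (d_2 = 1)
y2 = make_interpolant(t2, x2)
t1 = np.linspace(0.0, 1.0, 21)                       # finer mesh of subsystem 1
print([float(y2(t)[0]) for t in t1[:3]])
```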

Because of the presence of errors in actual computation, it is necessary to consider the following perturbed systems of Eqs (2.2), (2.3) and (2.5), generated by the perturbations $\{\delta^{(k)}_n,\ n=0,1,\ldots,N,\ k=0,1,\ldots\}$:

$\tilde{x}^{(k+1)}_{n+1} = \tilde{x}^{(k+1)}_{n} + h_n\Phi\bigl(t_n, h_n, \tilde{x}^{(k+1)}_{n+1}, \tilde{x}^{(k+1)}_{n}, \tilde{x}^{(k)}_{n+1}, \tilde{x}^{(k)}_{n}\bigr) + h_n\delta^{(k+1)}_{n+1},\quad n=0,1,\ldots,N-1;$
$\tilde{x}^{(k+1)}_0 = x_0 + \delta^{(k+1)}_0,\qquad \tilde{x}^{(0)}_n = x_0 + \delta^{(0)}_n\ (n=0,1,\ldots,N),$ (2.6)

$\tilde{x}^{(k+1)}_{n+1} = \tilde{x}^{(k+1)}_{n} + h\Psi\bigl(t_n, h, \tilde{x}^{(k+1)}_{n+1}, \tilde{x}^{(k+1)}_{n}, \ldots, \tilde{x}^{(k+1)}_{n-p}, \tilde{x}^{(k)}_{n+1}, \tilde{x}^{(k)}_{n}, \ldots, \tilde{x}^{(k)}_{n-p}\bigr) + h\delta^{(k+1)}_{n+1},\quad n=p,p+1,\ldots,N-1;$
$\tilde{x}^{(k+1)}_n = c^{(k+1)}_n + \delta^{(k+1)}_n,\ n=0,1,\ldots,p,\qquad \tilde{x}^{(0)}_n = x_0 + \delta^{(0)}_n\ (n=0,1,\ldots,N),$ (2.7)

and

${}_i\tilde{x}^{(k+1)}_{n+1} = {}_i\tilde{x}^{(k+1)}_{n} + {}_ih_n\,\Phi_i\bigl({}_it_n, {}_ih_n, {}_1\tilde{y}^{(k)}({}_it_{n+1}), \ldots, {}_{i-1}\tilde{y}^{(k)}({}_it_{n+1}), {}_i\tilde{x}^{(k+1)}_{n+1}, {}_{i+1}\tilde{y}^{(k)}({}_it_{n+1}), \ldots, {}_m\tilde{y}^{(k)}({}_it_{n+1}),$
$\qquad\qquad {}_1\tilde{y}^{(k)}({}_it_n), \ldots, {}_{i-1}\tilde{y}^{(k)}({}_it_n), {}_i\tilde{x}^{(k+1)}_{n}, {}_{i+1}\tilde{y}^{(k)}({}_it_n), \ldots, {}_m\tilde{y}^{(k)}({}_it_n)\bigr) + {}_ih_n\,{}_i\delta^{(k+1)}_{n+1},\quad n=0,1,\ldots,N_i-1;$
${}_i\tilde{x}^{(k+1)}_0 = {}_ix_0 + {}_i\delta^{(k+1)}_0,\qquad {}_i\tilde{x}^{(0)}_n = {}_ix_0 + {}_i\delta^{(0)}_n\ (n=0,1,\ldots,N_i),$ (2.8)

where $i=1,2,\ldots,m$ and ${}_j\tilde{y}^{(k)}(t)$ is the interpolation function satisfying ${}_j\tilde{y}^{(k)}({}_jt_n) = {}_j\tilde{x}^{(k)}_n$ for $n=0,1,\ldots,N_j$, $j=1,2,\ldots,m$.

Definition 2.1. (Page 32 of [10]) Let $\{\delta^{(k)}_n,\ n=0,1,\ldots,N,\ k=0,1,\ldots\}$ be any perturbation of Eq (2.2) or of Eqs (2.3) and (2.4), and let $\{\tilde{x}^{(k)}_n,\ n=0,1,\ldots,N,\ k=0,1,\ldots\}$ be the solution of the resulting perturbed system Eq (2.6) or Eq (2.7). If there exist constants $C$ and $h_0$ such that, for all $h\in(0,h_0]$ (with $h=\max_n h_n$ for Eq (2.2)),

$\max_{0\le k,\,0\le n\le N}\bigl\|\tilde{x}^{(k)}_n - x^{(k)}_n\bigr\| \le C\varepsilon$

whenever

$\max_{0\le k,\,0\le n\le N}\bigl\|\delta^{(k)}_n\bigr\| \le \varepsilon,$

we say that Eq (2.2), or Eqs (2.3) and (2.4), is zero-stable.

Here and hereafter $\|\cdot\|$ denotes any norm defined on $\mathbb{R}^d$.

Definition 2.2. Let $\{{}_i\delta^{(k)}_n,\ i=1,2,\ldots,m,\ n=0,1,\ldots,N_i,\ k=0,1,\ldots\}$ be any perturbation of Eq (2.5), and let $\{{}_i\tilde{x}^{(k)}_n,\ i=1,2,\ldots,m,\ n=0,1,\ldots,N_i,\ k=0,1,\ldots\}$ be the solution of the resulting perturbed system Eq (2.8). If there exist constants $h_0$ and $C_k$ (depending on $k$) such that

$\max_{1\le i\le m,\,0\le l\le k}\ \max_{0\le n\le N_i}\bigl\|{}_i\tilde{x}^{(l)}_n - {}_ix^{(l)}_n\bigr\| \le C_k\varepsilon$

for all $h = \max_{1\le i\le m}\max_{0\le n\le N_i-1}{}_ih_n \in (0,h_0]$, whenever

$\max_{1\le i\le m,\,0\le l\le k}\ \max_{0\le n\le N_i}\bigl\|{}_i\delta^{(l)}_n\bigr\| \le \varepsilon,$

we say that Eq (2.5) is weakly zero-stable.

The following lemmas are useful for studying the zero-stability of the WR methods mentioned above.

Lemma 2.3. Let $a, b$ and $T$ be nonnegative constants, and let $N$ be a positive integer. Suppose that the sets $\{h_n>0,\ n=0,1,\ldots,N-1\}$ and $\{u_n>0,\ n=0,1,\ldots,N\}$ satisfy

$u_{n+1} \le (1+ah_n)u_n + bh_n,\quad n=0,1,\ldots,N-1.$ (2.9)

Then

$\max_{0\le n\le N}u_n \le e^{aT}u_0 + \frac{b}{a}\bigl(e^{aT}-1\bigr)$

provided that $\sum_{n=0}^{N-1}h_n = T$.

Proof. By using Eq (2.9) repeatedly we have

$u_{n+1} \le (1+ah_n)(1+ah_{n-1})\cdots(1+ah_0)u_0 + (1+ah_n)(1+ah_{n-1})\cdots(1+ah_1)bh_0 + (1+ah_n)(1+ah_{n-1})\cdots(1+ah_2)bh_1 + \cdots + (1+ah_n)bh_{n-1} + bh_n,$

where $n=0,1,\ldots,N-1$. Let $t_0=0$ and $t_{n+1}=t_n+h_n$, $n=0,1,\ldots,N-1$; then $t_N=T$. Noting that $1+x<e^{x}$ for $x>0$, we can easily derive from the above inequality

$u_{n+1} \le e^{a(t_{n+1}-t_0)}u_0 + e^{a(t_{n+1}-t_1)}b(t_1-t_0) + e^{a(t_{n+1}-t_2)}b(t_2-t_1) + \cdots + e^{a(t_{n+1}-t_n)}b(t_n-t_{n-1}) + e^{a(t_{n+1}-t_{n+1})}b(t_{n+1}-t_n)$
$\le e^{a(T-t_0)}u_0 + e^{aT}b\bigl(e^{-at_1}(t_1-t_0) + \cdots + e^{-at_n}(t_n-t_{n-1}) + \cdots + e^{-at_N}(t_N-t_{N-1})\bigr)$

for $n=0,1,\ldots,N-1$. This together with the monotonicity of the function $e^{-at}$ implies

$u_{n+1} \le e^{a(T-t_0)}u_0 + e^{aT}b\int_{t_0}^{T}e^{-at}\,dt = e^{aT}u_0 + \frac{b}{a}\bigl(e^{aT}-1\bigr)$

for all $n=0,1,\ldots,N-1$.
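A quick sanity check of the bound (not part of the paper; the constants $a$, $b$, $T$ and the random step-sizes below are arbitrary choices) iterates Eq (2.9) with equality and compares the result against $e^{aT}u_0 + \tfrac{b}{a}(e^{aT}-1)$.

```python
import math
import random

random.seed(0)
a, b, T, N, u0 = 0.7, 0.3, 2.0, 1000, 0.5

# positive step-sizes h_n that sum exactly to T
w = [random.random() + 0.1 for _ in range(N)]
h = [T * wi / sum(w) for wi in w]

u, u_max = u0, u0
for hn in h:
    u = (1.0 + a * hn) * u + b * hn        # equality: the sharpest case of (2.9)
    u_max = max(u_max, u)

bound = math.exp(a * T) * u0 + (b / a) * (math.exp(a * T) - 1.0)
print(u_max <= bound, round(u_max, 4), round(bound, 4))   # expected: True
```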

Lemma 2.4. Let $a, b, c, d$ and $T$ be nonnegative constants, let $N$ be a positive integer and let $h_n$, $n=0,1,\ldots,N-1$, be positive real numbers.

Suppose that the sequence of positive numbers $\{e^{(k)}_n>0,\ n=0,1,\ldots,N,\ k=0,1,\ldots\}$ satisfies

$e^{(k+1)}_{n+1} \le (1+ah_n)e^{(k+1)}_{n} + bh_n e^{(k)}_{n} + ch_n e^{(k)}_{n+1} + dh_n,\quad n=0,1,\ldots,N-1.$ (2.10)

Then for any $0 < h = \max_{0\le n\le N-1}h_n < \frac{1}{2c}$,

$\max_{0\le k,\,0\le n\le N}e^{(k)}_n \le e^{2(a+b+c)T}\max\Bigl\{\max_{0\le k}e^{(k)}_0,\ \max_{0\le n\le N}e^{(0)}_n\Bigr\} + \frac{d}{a+b+c}\bigl(e^{2(a+b+c)T}-1\bigr)$ (2.11)

provided that $\sum_{n=0}^{N-1}h_n = T$.

Proof. By Eq (2.10) we have

$\max_{0\le k}e^{(k+1)}_{n+1} \le (1+ah_n)\max_{0\le k}e^{(k+1)}_{n} + bh_n\max_{0\le k}e^{(k)}_{n} + ch_n\max_{0\le k}e^{(k)}_{n+1} + dh_n$ (2.12)

for $n=0,1,\ldots,N-1$. Let $\varepsilon = \max_{0\le n\le N}e^{(0)}_n$ and $u_n = \max\{\max_{0\le k}e^{(k)}_n,\ \varepsilon\}$, $n=0,1,\ldots,N$. Clearly $\max\{\max_{0\le k}e^{(k+1)}_n,\ \varepsilon\} = u_n$. Hence by Eq (2.12) we get

$u_{n+1} \le (1+ah_n)u_n + bh_n u_n + ch_n u_{n+1} + dh_n,\quad \text{and hence}\quad u_{n+1} \le \frac{1+ah_n+bh_n}{1-ch_n}u_n + \frac{dh_n}{1-ch_n},\quad n=0,1,\ldots,N-1.$

Suppose that $0<h<\frac{1}{2c}$. Consequently $\frac{1}{1-ch_n}<2$, and the above inequality thus yields

$u_{n+1} \le \bigl(1+2(a+b+c)h_n\bigr)u_n + 2dh_n,\quad n=0,1,\ldots,N-1.$

This together with Lemma 2.3 yields

$\max_{0\le n\le N}u_n \le e^{2(a+b+c)T}u_0 + \frac{d}{a+b+c}\bigl(e^{2(a+b+c)T}-1\bigr).$

Thus Eq (2.11) holds true. The proof is complete.

Lemma 2.5. Let $a_j, b_j\ (j=0,1,\ldots,p)$, $c$, $d$ and $T$ be nonnegative constants. Let $N$ be a positive integer and $h=T/N$. Suppose that the sequence of positive numbers $\{e^{(k)}_n>0,\ n=0,1,\ldots,N,\ k=0,1,\ldots\}$ satisfies

$e^{(k+1)}_{n+1} \le e^{(k+1)}_{n} + h\sum_{j=0}^{p}\bigl(a_j e^{(k+1)}_{n-j} + b_j e^{(k)}_{n-j}\bigr) + ch\,e^{(k)}_{n+1} + dh,\quad n=p,p+1,\ldots,N-1.$ (2.13)

Then for any $0<h<\frac{1}{2c}$,

$\max_{0\le k,\,p+1\le n\le N}e^{(k)}_n \le e^{2(A+c)T}\varepsilon + \frac{d}{A+c}\bigl(e^{2(A+c)T}-1\bigr),$

where $\varepsilon = \max\bigl\{\max_{0\le n\le N}e^{(0)}_n,\ \max_{0\le k,\,0\le n\le p}e^{(k)}_n\bigr\}$ and $A = \sum_{j=0}^{p}(a_j+b_j)$.

Proof. Note that $e^{(k)}_n \le \varepsilon$ for all $n\le p$ and all $k\ge 0$. By using Eq (2.13) repeatedly we have

$e^{(k+1)}_{p+1} \le \bigl(1+ch+\cdots+(ch)^{k}\bigr)(1+Ah)\varepsilon + (ch)^{k+1}\varepsilon + \bigl(1+ch+\cdots+(ch)^{k}\bigr)dh,\quad k=0,1,\ldots.$ (2.14)

Let $0<h<\frac{1}{2c}$. With the notation $\alpha_h = \frac{1+Ah}{1-ch}$ and $\beta_h = \frac{dh}{1-ch}$ we can derive from Eq (2.14)

$e^{(k)}_{p+1} \le \alpha_h\varepsilon + \beta_h,\quad k=0,1,\ldots.$

Noting that $\varepsilon \le \alpha_h\varepsilon + \beta_h$, and therefore $e^{(k)}_n \le \alpha_h\varepsilon+\beta_h$ for all $n\le p+1$ and all $k\ge 0$, we have, by repeating the above process,

$e^{(k)}_{p+2} \le \alpha_h(\alpha_h\varepsilon+\beta_h)+\beta_h = (\alpha_h)^2\varepsilon + \beta_h(1+\alpha_h),\quad k=0,1,\ldots.$

Consequently, we get that

$e^{(k)}_{p+m} \le (\alpha_h)^{m}\varepsilon + \beta_h\bigl(1+\alpha_h+(\alpha_h)^2+\cdots+(\alpha_h)^{m-1}\bigr),\quad k=0,1,\ldots,$

holds true for $m=1,2,\ldots,N-p$. Note that $\alpha_h\ge 1$. Thus

$e^{(k)}_{n} \le (\alpha_h)^{N}\varepsilon + \beta_h\bigl(1+\alpha_h+(\alpha_h)^2+\cdots+(\alpha_h)^{N-1}\bigr)$

for $k=0,1,\ldots$, $n=0,1,\ldots,N$. This together with $0<ch<1/2$ yields

$e^{(k)}_n \le (\alpha_h)^{N}\varepsilon + \frac{(\alpha_h)^{N}-1}{\alpha_h-1}\beta_h \le \bigl(1+2(A+c)h\bigr)^{T/h}\varepsilon + \frac{d}{A+c}\Bigl(\bigl(1+2(A+c)h\bigr)^{T/h}-1\Bigr) \le e^{2(A+c)T}\varepsilon + \frac{d}{A+c}\bigl(e^{2(A+c)T}-1\bigr)$

for $k=0,1,\ldots$, $n=0,1,\ldots,N$. The proof is complete.

Theorem 3.1. Suppose that there exist constants $L_1$, $L_2$, $L_3$ and $L_4$ such that

$\bigl\|\Phi(t,h,x_1,x_2,x_3,x_4) - \Phi(t,h,y_1,y_2,y_3,y_4)\bigr\| \le \sum_{i=1}^{4}L_i\|x_i-y_i\|$ (3.1)

for any $t,h\in\mathbb{R}$ and $x_i,y_i\in\mathbb{R}^d$, $i=1,2,3,4$. Then the WR method Eq (2.2) is zero-stable.

Proof. Let $\varepsilon^{(k)}_n = \|\tilde{x}^{(k)}_n - x^{(k)}_n\|$, and for any $\varepsilon>0$ let the perturbations $\{\delta^{(k)}_n,\ k=0,1,\ldots,\ n=0,1,\ldots,N\}$ in Eq (2.6) satisfy $\max_{0\le k,\,0\le n\le N}\|\delta^{(k)}_n\| < \varepsilon$. Then by virtue of Eqs (2.2) and (2.6) and condition (3.1) we have

$\varepsilon^{(k+1)}_{n+1} \le \varepsilon^{(k+1)}_{n} + h_n\bigl(L_1\varepsilon^{(k+1)}_{n+1} + L_2\varepsilon^{(k+1)}_{n} + L_3\varepsilon^{(k)}_{n+1} + L_4\varepsilon^{(k)}_{n}\bigr) + h_n\varepsilon$ (3.2)

for $n=0,1,\ldots,N-1$, $k=0,1,\ldots$, and

$\varepsilon^{(k+1)}_0 \le \varepsilon,\ k=0,1,\ldots;\qquad \varepsilon^{(0)}_n \le \varepsilon,\ n=0,1,\ldots,N.$ (3.3)

Let $h_n < \frac{1}{2L_1}$. Then $\frac{1}{1-h_nL_1}<2$, and we derive from Eq (3.2)

$\varepsilon^{(k+1)}_{n+1} \le \bigl(1+2h_n(L_1+L_2)\bigr)\varepsilon^{(k+1)}_{n} + 2h_nL_3\varepsilon^{(k)}_{n+1} + 2h_nL_4\varepsilon^{(k)}_{n} + 2h_n\varepsilon$ (3.4)

for $n=0,1,\ldots,N-1$, $k=0,1,\ldots$. Consequently, by Eqs (3.3), (3.4) and Lemma 2.4 we have

$\varepsilon^{(k)}_n \le e^{4(L_1+L_2+L_3+L_4)T}\varepsilon + \frac{\varepsilon}{L_1+L_2+L_3+L_4}\bigl(e^{4(L_1+L_2+L_3+L_4)T}-1\bigr)$

for any $k=0,1,\ldots$, $n=0,1,\ldots,N$, if $0<h<\min\bigl\{\frac{1}{2L_1},\frac{1}{4L_3}\bigr\}$. The proof is therefore complete.

Theorem 3.2. Suppose that there exist constants $L_i$, $i=1,2,\ldots,2p+4$, such that

$\bigl\|\Psi(t,h,x_1,x_2,\ldots,x_{2p+4}) - \Psi(t,h,y_1,y_2,\ldots,y_{2p+4})\bigr\| \le \sum_{i=1}^{2p+4}L_i\|x_i-y_i\|$ (3.5)

for any $t,h\in\mathbb{R}$ and $x_i,y_i\in\mathbb{R}^d$, $i=1,2,\ldots,2p+4$. Then the WR method Eqs (2.3) and (2.4) is zero-stable.

Proof. Let $\varepsilon^{(k)}_n$ denote $\|\tilde{x}^{(k)}_n - x^{(k)}_n\|$ and let the perturbations $\{\delta^{(k)}_n,\ k=0,1,\ldots,\ n=0,1,\ldots,N\}$ in Eq (2.7) satisfy $\max_{0\le k,\,0\le n\le N}\|\delta^{(k)}_n\| < \varepsilon$, where $\varepsilon$ is any positive number. Then by virtue of Eqs (2.3), (2.4), (2.7) and condition (3.5) we have

$\varepsilon^{(k+1)}_{n+1} \le \varepsilon^{(k+1)}_{n} + h\bigl(L_1\varepsilon^{(k+1)}_{n+1} + L_2\varepsilon^{(k+1)}_{n} + \cdots + L_{p+2}\varepsilon^{(k+1)}_{n-p} + L_{p+3}\varepsilon^{(k)}_{n+1} + L_{p+4}\varepsilon^{(k)}_{n} + \cdots + L_{2p+4}\varepsilon^{(k)}_{n-p}\bigr) + h\varepsilon,\quad n=p,p+1,\ldots,N-1,$
$\varepsilon^{(k+1)}_n \le \varepsilon,\ n=0,1,\ldots,p;\qquad \varepsilon^{(0)}_n \le \varepsilon,\ n=0,1,\ldots,N,$ (3.6)

where $k=0,1,\ldots$. Let $h<\frac{1}{2L_1}$. Consequently $\frac{1}{1-hL_1}<2$, and we derive from Eq (3.6)

$\varepsilon^{(k+1)}_{n+1} \le \bigl(1+2h(L_1+L_2)\bigr)\varepsilon^{(k+1)}_{n} + 2h\bigl(L_3\varepsilon^{(k+1)}_{n-1} + \cdots + L_{p+2}\varepsilon^{(k+1)}_{n-p} + L_{p+3}\varepsilon^{(k)}_{n+1} + L_{p+4}\varepsilon^{(k)}_{n} + \cdots + L_{2p+4}\varepsilon^{(k)}_{n-p}\bigr) + 2h\varepsilon,\quad n=p,p+1,\ldots,N-1,$
$\varepsilon^{(k+1)}_n \le \varepsilon,\ n=0,1,\ldots,p;\qquad \varepsilon^{(0)}_n \le \varepsilon,\ n=0,1,\ldots,N,$ (3.7)

for $k=0,1,\ldots$. By Eq (3.7) and Lemma 2.5 we have

$\varepsilon^{(k)}_n \le e^{4(L_1+L_2+\cdots+L_{2p+4})T}\varepsilon + \frac{\varepsilon}{L_1+L_2+\cdots+L_{2p+4}}\bigl(e^{4(L_1+L_2+\cdots+L_{2p+4})T}-1\bigr)$

for any $k=0,1,\ldots$, $n=0,1,\ldots,N$, if $0<h<\min\bigl\{\frac{1}{2L_1},\frac{1}{4L_{p+3}}\bigr\}$. The proof is complete.

Theorem 3.3. For Eqs (2.5) and (2.8) let $\Phi = (\Phi_1^T, \Phi_2^T, \ldots, \Phi_m^T)^T$ satisfy the Lipschitz condition: there exist constants $L_1$ and $L_2$ such that

$\bigl\|\Phi(t,h,x_1,x_2) - \Phi(t,h,y_1,y_2)\bigr\| \le \sum_{i=1}^{2}L_i\|x_i-y_i\|\quad \text{for any}\ x_1,x_2,y_1,y_2\in\mathbb{R}^d,$ (3.8)

and let the interpolation functions ${}_j\tilde{y}^{(k)}(t)$ and ${}_jy^{(k)}(t)$ satisfy

$\sup_{0\le t\le T}\bigl\|{}_j\tilde{y}^{(k)}(t) - {}_jy^{(k)}(t)\bigr\| \le C\max_{0\le n\le N_j}\bigl\|{}_j\tilde{x}^{(k)}_n - {}_jx^{(k)}_n\bigr\|,\quad j=1,2,\ldots,m,$ (3.9)

where $C$ is a positive constant. Then the multi-rate WR method Eq (2.5) is weakly zero-stable.

Proof. Define $\|x\|_a = \max_{1\le i\le m}\|{}_ix\|$ and $\|x\|_b = \sum_{i=1}^{m}\|{}_ix\|$ for $x = ({}_1x^T, {}_2x^T, \ldots, {}_mx^T)^T\in\mathbb{R}^d$, where ${}_ix\in\mathbb{R}^{d_i}$, $i=1,2,\ldots,m$, and $d_1+d_2+\cdots+d_m=d$. It is easy to prove that $\|\cdot\|_a$ and $\|\cdot\|_b$ are norms. Consequently, there exist constants $C_1$ and $C_2$ such that

$C_1\|x\|_a \le \|x\| \le C_2\|x\|_b\quad \text{for any}\ x\in\mathbb{R}^d.$ (3.10)

Eqs (3.8) and (3.10) thus yield

$C_1\bigl\|\Phi(t,h,x_1,x_2)-\Phi(t,h,y_1,y_2)\bigr\|_a \le \bigl\|\Phi(t,h,x_1,x_2)-\Phi(t,h,y_1,y_2)\bigr\| \le \sum_{j=1}^{2}L_j\|x_j-y_j\| \le \sum_{j=1}^{2}L_jC_2\|x_j-y_j\|_b = \sum_{j=1}^{2}L_jC_2\sum_{i=1}^{m}\|{}_ix_j - {}_iy_j\|$

for any $x_1,x_2,y_1,y_2\in\mathbb{R}^d$, and therefore

$\bigl\|\Phi_i(t,h,x_1,x_2)-\Phi_i(t,h,y_1,y_2)\bigr\| \le \sum_{j=1}^{2}\frac{L_jC_2}{C_1}\sum_{l=1}^{m}\|{}_lx_j - {}_ly_j\|.$ (3.11)

With the notation ${}_ie^{(k)}_n = \|{}_i\tilde{x}^{(k)}_n - {}_ix^{(k)}_n\|$ we can derive from Eqs (2.5), (2.8), (3.9) and (3.11)

${}_ie^{(k+1)}_{n+1} \le {}_ie^{(k+1)}_{n} + {}_ih_n\frac{L_1C_2C}{C_1}\Bigl(\max_{0\le j\le N_1}{}_1e^{(k)}_j + \cdots + \max_{0\le j\le N_{i-1}}{}_{i-1}e^{(k)}_j + {}_ie^{(k+1)}_{n+1} + \max_{0\le j\le N_{i+1}}{}_{i+1}e^{(k)}_j + \cdots + \max_{0\le j\le N_m}{}_me^{(k)}_j\Bigr)$
$\qquad + {}_ih_n\frac{L_2C_2C}{C_1}\Bigl(\max_{0\le j\le N_1}{}_1e^{(k)}_j + \cdots + \max_{0\le j\le N_{i-1}}{}_{i-1}e^{(k)}_j + {}_ie^{(k+1)}_{n} + \max_{0\le j\le N_{i+1}}{}_{i+1}e^{(k)}_j + \cdots + \max_{0\le j\le N_m}{}_me^{(k)}_j\Bigr) + {}_ih_n\bigl\|{}_i\delta^{(k+1)}_{n+1}\bigr\|.$ (3.12)

Let $h = \max_{1\le i\le m}\max_{0\le n\le N_i-1}{}_ih_n < \frac{C_1}{2L_1C_2C}$. Consequently ${}_ih_n\frac{L_1C_2C}{C_1} < \frac{1}{2} < 1 - {}_ih_n\frac{L_1C_2C}{C_1}$, which together with Eq (3.12) implies that

${}_ie^{(k+1)}_{n+1} \le (1+a\,{}_ih_n)\,{}_ie^{(k+1)}_{n} + \Bigl(a\sum_{l=1}^{m}\max_{0\le j\le N_l}{}_le^{(k)}_j + 2\bigl\|{}_i\delta^{(k+1)}_{n+1}\bigr\|\Bigr){}_ih_n \le (1+a\,{}_ih_n)\,{}_ie^{(k+1)}_{n} + \Bigl(a\sum_{i=1}^{m}\max_{0\le j\le N_i}{}_ie^{(k)}_j + 2\max_{1\le i\le m}\max_{0\le j\le N_i}\bigl\|{}_i\delta^{(k+1)}_j\bigr\|\Bigr){}_ih_n$ (3.13)

with $a = \frac{2(L_1+L_2)C_2C}{C_1}$. By using Eq (3.13) with $k=0$ and Lemma 2.3 we get

${}_ie^{(1)}_{n} \le e^{aT}\,{}_ie^{(1)}_{0} + \frac{1}{a}\bigl(e^{aT}-1\bigr)\Bigl(a\sum_{i=1}^{m}\max_{0\le j\le N_i}{}_ie^{(0)}_j + 2\max_{1\le i\le m}\max_{0\le j\le N_i}\bigl\|{}_i\delta^{(1)}_j\bigr\|\Bigr).$

Thus

$\max_{1\le i\le m}\max_{0\le n\le N_i}{}_ie^{(1)}_{n} \le a_1\max\Bigl(\max_{1\le i\le m}{}_ie^{(1)}_{0},\ \sum_{i=1}^{m}\max_{0\le j\le N_i}{}_ie^{(0)}_j,\ \max_{1\le i\le m}\max_{0\le j\le N_i}\bigl\|{}_i\delta^{(1)}_j\bigr\|\Bigr)$

with $a_1 = e^{aT} + \frac{1}{a}\bigl(e^{aT}-1\bigr)(a+2)$. Similarly, using the above inequality, Eq (3.13) with $k=1$ and Lemma 2.3, we have

$\max_{1\le i\le m}\max_{0\le n\le N_i}{}_ie^{(2)}_{n} \le a_2\max\Bigl(\max_{1\le i\le m,\,1\le l\le 2}{}_ie^{(l)}_{0},\ \sum_{i=1}^{m}\max_{0\le j\le N_i}{}_ie^{(0)}_j,\ \max_{1\le i\le m,\,1\le l\le 2}\max_{0\le j\le N_i}\bigl\|{}_i\delta^{(l)}_j\bigr\|\Bigr)$

with $a_2 = e^{aT} + \bigl(e^{aT}-1\bigr)\bigl(ma_1 + \frac{2}{a}\bigr)$. Repeating the above process yields that for any $k$ there exists a constant $a_k$ such that

$\max_{1\le i\le m,\,0\le l\le k}\ \max_{0\le n\le N_i}\bigl\|{}_i\tilde{x}^{(l)}_n - {}_ix^{(l)}_n\bigr\| \le a_k\max\Bigl(\max_{1\le i\le m,\,0\le l\le k}\bigl\|{}_i\delta^{(l)}_0\bigr\|,\ \sum_{i=1}^{m}\max_{0\le n\le N_i}\bigl\|{}_i\delta^{(0)}_n\bigr\|,\ \max_{1\le i\le m,\,0\le l\le k}\max_{0\le n\le N_i}\bigl\|{}_i\delta^{(l)}_n\bigr\|\Bigr)$

for any mesh $\{{}_it_n,\ n=0,1,\ldots,N_i,\ i=1,2,\ldots,m\}$. The proof is therefore complete.

Remark 1. In the above discussion we assume that the perturbation at each step is $\delta^{(k)}_n h$, which depends on the step-size $h$. This assumption is necessary for the proofs of the theorems mentioned above. However, it seems more reasonable to regard $\delta^{(k)}_n$, a small number, as the perturbation when the step-size is very small. In this case the tiny perturbation at each step may bring about a huge change in the numerical solution. We will explore this in the next section by means of some computer experiments.

Remark 2. Note that the convergence of the WR methods is not used in the proofs of Theorems 3.1–3.3. Hence a divergent WR method may be zero-stable; in other words, zero-stability does not imply convergence.

    In this section some numerical experiments are performed to verify the theorems obtained in Section 3.

    Consider the equation

$\dot{x} = Ax = \begin{pmatrix} T_1 & I & 0 \\ I & T_2 & I \\ 0 & I & T_3 \end{pmatrix}x,\quad t\in[0,1],\qquad x(0)=x_0,$ (4.1)

where $x\in\mathbb{R}^{15}$, $T_i$ is the $5\times 5$ tridiagonal matrix with $2i$ on the diagonal and $1$ on the sub- and superdiagonals, and $I$ is the $5\times 5$ identity matrix. A special case of such a system, obtained by discretizing the heat equation in two variables, has been examined by Burrage et al. [11].

Let $A_1 = \mathrm{diag}(T_1, T_2, T_3)$ denote the block-diagonal part of $A$ and let $A_2 = A - A_1$. First, choose the following WR methods for solving Eq (4.1):

(i) The Euler method with variable step-size:

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + h_n\bigl(A_1x^{(k+1)}_{n} + A_2x^{(k)}_{n}\bigr),\quad n=0,1,\ldots,N-1,\ k=0,1,\ldots,$
$x^{(k+1)}_0 = x_0\ \text{for all}\ k,\qquad x^{(0)}_n = x_0\ \text{for all}\ n.$

(ii) A two-step method with fixed step-size:

$x^{(k+1)}_{n+1} = x^{(k+1)}_{n} + \frac{h}{2}\Bigl[3\bigl(A_1x^{(k+1)}_{n} + A_2x^{(k)}_{n}\bigr) - \bigl(A_1x^{(k+1)}_{n-1} + A_2x^{(k)}_{n-1}\bigr)\Bigr],\quad n=1,2,\ldots,N-1,\ k=0,1,\ldots,$
$x^{(k+1)}_0 = x_0\ \text{for all}\ k,\qquad x^{(0)}_n = x_0\ \text{for all}\ n,$

where $x^{(k+1)}_1$ is obtained by a suitable one-step method. Here let

$x^{(k+1)}_1 = x_0 + hAx_0 + \frac{h^2}{2}A^2x_0.$

(iii) A multi-rate Jacobi method based on the Euler method:

$\left.\begin{array}{l} {}_ix^{(k+1)}_{n+1} = {}_ix^{(k+1)}_{n} + {}_ih_n\bigl(T_i\,{}_ix^{(k+1)}_{n+1} + {}_{i-1}x^{(k)}_{n+1} + {}_{i+1}x^{(k)}_{n+1}\bigr),\quad n=0,1,\ldots,N_i-1,\\ {}_ix^{(k+1)}_0 = {}_ix^{(0)}_n = {}_ix_0,\qquad {}_0x^{(k)}_n = {}_4x^{(k)}_n = (0,0,0,0,0)^{T}\ \text{for all}\ k\ \text{and}\ n \end{array}\right\}\quad i=1,2,3.$

Second, assume that the initial values and the approximate solution at each step have tiny perturbations as in Eqs (2.6)–(2.8), and let $\delta^{(k)}_n \equiv \Delta = \delta(1,1,\ldots,1)^{T}\in\mathbb{R}^{15}$ and ${}_i\delta^{(k)}_n \equiv \Delta_p = \delta(1,1,1,1,1)^{T}\in\mathbb{R}^{5}$. Let $\tilde{x}^{(k)}_n$ denote the resulting perturbed solution and let $u^{(k)}_n$ denote $\tilde{x}^{(k)}_n - x^{(k)}_n$, which yields

$u^{(k+1)}_{n+1} = u^{(k+1)}_{n} + h_n\bigl(A_1u^{(k+1)}_{n} + A_2u^{(k)}_{n}\bigr) + h_n\Delta,\quad n=0,1,\ldots,N-1,\ k=0,1,\ldots,$
$u^{(k+1)}_0 = \Delta\ \text{for all}\ k,\qquad u^{(0)}_n = \Delta\ \text{for all}\ n,$ (4.2)

$u^{(k+1)}_{n+1} = u^{(k+1)}_{n} + \frac{h}{2}\Bigl[3\bigl(A_1u^{(k+1)}_{n} + A_2u^{(k)}_{n}\bigr) - \bigl(A_1u^{(k+1)}_{n-1} + A_2u^{(k)}_{n-1}\bigr)\Bigr] + \frac{h}{2}\Delta,\quad n=1,2,\ldots,N-1,\ k=0,1,\ldots,$
$u^{(k+1)}_0 = \Delta,\quad u^{(k+1)}_1 = \Bigl(I + hA + \frac{h^2}{2}A^2\Bigr)\Delta\ \text{for all}\ k,\qquad u^{(0)}_n = \Delta\ \text{for all}\ n,$ (4.3)

and

$\left.\begin{array}{l} {}_iu^{(k+1)}_{n+1} = {}_iu^{(k+1)}_{n} + {}_ih_n\bigl(T_i\,{}_iu^{(k+1)}_{n+1} + {}_{i-1}u^{(k)}_{n+1} + {}_{i+1}u^{(k)}_{n+1} + \Delta_p\bigr),\quad n=0,1,\ldots,N_i-1,\\ {}_iu^{(k+1)}_0 = {}_iu^{(0)}_n = \Delta_p,\qquad {}_0u^{(k)}_n = {}_4u^{(k)}_n = (0,0,0,0,0)^{T}\ \text{for all}\ k\ \text{and}\ n \end{array}\right\}\quad i=1,2,3.$ (4.4)

Lastly, we draw conclusions by analysing the data generated by Eqs (4.2)–(4.4).

We choose $A$, $A_1$, $A_2$ and $\delta^{(k)}_n$ as above to guarantee that $u^{(k)}_n$ increases with $k$ and $n$, which is the worst case of error propagation.

In our experiments the step-sizes $h_n$ in Eq (4.2) and ${}_ih_n$ in Eq (4.4) are taken as

$h_n = \min\bigl\{10h,\ \max\{0.1h,\ |\xi_n|h\}\bigr\}$

and

${}_ih_n = \min\bigl\{10^{i}h,\ \max\{10^{i-2}h,\ 10^{i-1}|\xi_n|h\}\bigr\},$

where $h>0$ and $\{\xi_0,\xi_1,\ldots\}$ is a sequence of independent and identically distributed random variables with $\xi_i\sim N(0,1)$.
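The following sketch (not the author's code) reproduces the structure of the experiment for Eq (4.2): it builds $A$, $A_1$ and $A_2$ as above (the sign of the diagonal of $T_i$ follows the display in Eq (4.1) and is an assumption here), draws the random step-sizes $h_n$, and records $\varepsilon = \max_{l,n}\|u^{(l)}_n\|$ for the scaled perturbation $h_n\Delta$ and, for comparison, for the constant perturbation $\Delta$; the value $\delta=10^{-4}$ is the one used in Tables 3 and 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_A():
    """Block-tridiagonal A of Eq (4.1) with T_i = tridiag(1, 2i, 1) (sign assumed)."""
    A = np.zeros((15, 15))
    for i in range(3):
        Ti = (2.0 * (i + 1) * np.eye(5)
              + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1))
        A[5*i:5*i+5, 5*i:5*i+5] = Ti
        if i < 2:                                   # off-diagonal coupling blocks I
            A[5*i:5*i+5, 5*i+5:5*i+10] = np.eye(5)
            A[5*i+5:5*i+10, 5*i:5*i+5] = np.eye(5)
    return A

def experiment(h, sweeps, delta=1e-4, T=1.0, scaled=True):
    """Iterate recursion (4.2); scaled=False replaces h_n*Delta by Delta."""
    A = build_A()
    A1 = np.zeros_like(A)
    for i in range(3):
        A1[5*i:5*i+5, 5*i:5*i+5] = A[5*i:5*i+5, 5*i:5*i+5]
    A2 = A - A1
    Delta = delta * np.ones(15)

    # random step-sizes h_n = min{10h, max{0.1h, |xi_n| h}}, trimmed to sum to T
    hs = []
    while sum(hs) < T:
        hs.append(min(10*h, max(0.1*h, abs(rng.standard_normal()) * h)))
    hs[-1] -= sum(hs) - T

    N = len(hs)
    u_old = np.tile(Delta, (N + 1, 1))              # u^(0)_n = Delta
    eps = float(np.max(np.abs(u_old)))
    for _ in range(sweeps):
        u = np.empty_like(u_old)
        u[0] = Delta                                # u^(k+1)_0 = Delta
        for n, hn in enumerate(hs):
            pert = hn * Delta if scaled else Delta
            u[n + 1] = u[n] + hn * (A1 @ u[n] + A2 @ u_old[n]) + pert
        eps = max(eps, float(np.max(np.abs(u))))
        u_old = u
    return eps

for h in (0.1, 0.001):
    print(h, experiment(h, sweeps=10, scaled=True), experiment(h, sweeps=10, scaled=False))
```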

The data in Tables 3, 5 and 7 clearly show that the errors caused by the perturbations $h_n\Delta$ ($h\Delta/2$) in Eq (4.2) (Eq (4.3)) are controllable as $k$ and $n$ tend to infinity, which supports the theorems developed in Section 3. However, the data in Tables 4 and 6 show that the errors increase with $k$ and $1/h$ when the perturbations $h_n\Delta$ ($h\Delta/2$) are replaced by the constant perturbations $\Delta$ ($\Delta/2$).

Table 3.  The computed results of Eq (4.2) with $\delta=0.0001$ ($\varepsilon = \max_{0\le l\le k}\max_{0\le n\le N}\|u^{(l)}_n\|$).
h    0.1                       0.001                     0.00001
k    5            10           10           15           10           15
ε    3.1483e-04   3.1483e-04   3.1605e-04   3.1605e-04   3.1606e-04   3.1606e-04

Table 4.  The computed results of Eq (4.2) with $\delta=0.0001$ ($\varepsilon = \max_{0\le l\le k}\max_{0\le n\le N}\|u^{(l)}_n\|$) for the case of replacing the perturbation $h_n\Delta$ with the constant perturbation $\Delta$.
h    0.1                0.001              0.00001               0.000001
k    5        10        10       15        10         15         10         15
ε    0.0018   0.0018    0.1824   0.1824    17.6723    17.6723    176.8079   176.8079

Table 5.  The computed results of Eq (4.3) with $\delta=0.0001$ ($\varepsilon = \max_{0\le l\le k}\max_{0\le n\le N}\|u^{(l)}_n\|$).
h    0.1                       0.001                     0.00001
k    10           15           10           15           10           15
ε    3.1568e-04   3.1568e-04   3.1606e-04   3.1606e-04   3.1606e-04   3.1606e-04

Table 6.  The computed results of Eq (4.3) with $\delta=0.0001$ ($\varepsilon = \max_{0\le l\le k}\max_{0\le n\le N}\|u^{(l)}_n\|$) for the case of replacing the perturbation $h\Delta/2$ with $\Delta/2$.
h    0.1               0.001             0.00001                0.000001
k    10       15       10       15       10          15         10          15
ε    0.1782   0.1782   0.1435   0.1435   14.1724     14.1724    141.7252    141.7252

Table 7.  The computed results of Eq (4.4) with $\delta=0.0001$ ($\varepsilon = \max_{0\le l\le k}\max_{0\le n\le N}\|u^{(l)}_n\|$).
h    0.1                       0.001                     0.00001
k    5            10           10           15           10           15
ε    3.1085e-04   3.1085e-04   3.1567e-04   3.1567e-04   3.1606e-04   3.1606e-04


It is well known that a linear multi-step method with fixed step-size is convergent if, and only if, it is both consistent and zero-stable. In this paper we have only presented sufficient conditions for the zero-stability of some special WR methods. In the future we will explore the relationship between convergence and zero-stability for WR methods based on general linear multi-step methods. Moreover, we shall apply our methods to the numerical study of some inverse problems [12,13,14].

This research is supported by the Natural Science Foundation of Fujian Province, China (grant number 2021J011031).

    The author declares that he has no conflicts of interest regarding the publication of this paper.



[1] E. Lelarasmee, A. E. Ruehli, A. L. Sangiovanni-Vincentelli, The waveform relaxation method for time-domain analysis of large scale integrated circuits, IEEE Trans. CAD IC Syst., 1 (1982), 131–145. https://doi.org/10.1109/TCAD.1982.1270004
[2] A. Bellen, Z. Jackiewicz, M. Zennaro, Contractivity of waveform relaxation Runge-Kutta iterations and related limit methods for dissipative systems in the maximum norm, SIAM J. Numer. Anal., 31 (1994), 499–523. https://doi.org/10.1137/0731027
[3] D. Conte, R. D'Ambrosio, B. Paternoster, GPU-acceleration of waveform relaxation methods for large differential systems, Numer. Algorithms, 71 (2016), 293–310. https://doi.org/10.1007/s11075-015-9993-6
[4] Z. C. Fan, Convergence of discrete time waveform relaxation methods, Numer. Algorithms, 80 (2019), 469–483. https://doi.org/10.1007/s11075-018-0493-3
[5] Z. Hassanzadeh, D. K. Salkuyeh, Two-stage waveform relaxation method for the initial value problems with non-constant coefficients, Comput. Appl. Math., 33 (2014), 641–654. https://doi.org/10.1007/s40314-013-0086-7
[6] K. J. in't Hout, On the convergence of waveform relaxation methods for stiff nonlinear ordinary differential equations, Appl. Numer. Math., 18 (1995), 175–190. https://doi.org/10.1016/0168-9274(95)00052-V
[7] J. Janssen, S. Vandewalle, On SOR waveform relaxation methods, SIAM J. Numer. Anal., 34 (1997), 2456–2481. https://doi.org/10.1137/S0036142995294292
[8] Y. L. Jiang, Windowing waveform relaxation of initial value problems, Acta Math. Appl. Sin., 22 (2006), 575–588. https://doi.org/10.1007/s10255-006-0331-6
[9] J. Sand, K. Burrage, A Jacobi waveform relaxation method for ODEs, SIAM J. Sci. Comput., 20 (1998), 534–552. https://doi.org/10.1137/S1064827596306562
[10] J. D. Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Ltd., Chichester, 1991.
[11] K. Burrage, Z. Jackiewicz, S. P. Norsett, R. A. Renaut, Preconditioning waveform relaxation iterations for differential systems, BIT Numer. Math., 36 (1996), 54–76. https://doi.org/10.1007/BF01740544
[12] E. Blåsten, H. Liu, Recovering piecewise constant refractive indices by a single far-field pattern, Inverse Probl., 36 (2020). https://doi.org/10.1088/1361-6420/ab958f
[13] Y. T. Chow, Y. Deng, Y. He, H. Liu, X. Wang, Surface-localized transmission eigenstates, super-resolution imaging, and pseudo surface plasmon modes, SIAM J. Imaging Sci., 14 (2021), 946–975. https://doi.org/10.1137/20M1388498
[14] W. Yin, W. Yang, H. Liu, A neural network scheme for recovering scattering obstacles with limited phaseless far-field data, J. Comput. Phys., 417 (2020), 109594. https://doi.org/10.1016/j.jcp.2020.109594
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).