Research article

Convergence and dynamical study of a new sixth order convergence iterative scheme for solving nonlinear systems

  • Received: 09 December 2022 Revised: 06 March 2023 Accepted: 12 March 2023 Published: 30 March 2023
  • MSC : 65H10, 37C25

  • A novel family of iterative schemes to estimate the solutions of nonlinear systems is presented. It is based on the Ermakov-Kalitkin procedure, which widens the set of converging initial estimations. This class is designed by means of a weight function technique, obtaining 6th-order convergence. The qualitative properties of the proposed class are analyzed by means of vectorial real dynamics. Using these tools, the most stable members of the family are selected, and also the chaotical elements are avoided. Some test vectorial functions are used in order to illustrate the performance and efficiency of the designed schemes.

    Citation: Raudys R. Capdevila, Alicia Cordero, Juan R. Torregrosa. Convergence and dynamical study of a new sixth order convergence iterative scheme for solving nonlinear systems[J]. AIMS Mathematics, 2023, 8(6): 12751-12777. doi: 10.3934/math.2023642




    Real-world and engineering problems have long required solving nonlinear equations or systems of nonlinear equations. These solutions usually can only be estimated, as their exact values cannot in general be calculated. This is the area where iterative methods show their power: obtaining approximations of the solutions of these kinds of problems in the most efficient way.

    Therefore, our main goal is to solve the equation f(x)=0, where f:D⊆R→R is a scalar function, or the vectorial equation F(x)=0, where F:D⊆R^n→R^n, n>1, is defined on a convex set D. From a seed x^(0)∈R^n (x_0∈R if n=1), the most employed iterative scheme for estimating the zeroes of these kinds of functions is Newton's scheme, with iterative expression

    x^(k+1) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)),  k=0,1,…,

    where F'(x^(k)) is the Jacobian matrix of F evaluated at x^(k).
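As a minimal sketch of how this scheme is used in practice (assuming NumPy; the test system and its Jacobian are illustrative choices, not taken from the paper):

```python
import numpy as np

def newton(F, Fprime, x0, tol=1e-10, max_iter=50):
    """Newton's scheme: x^(k+1) = x^(k) - [F'(x^(k))]^(-1) F(x^(k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system F'(x) d = F(x) instead of inverting.
        d = np.linalg.solve(Fprime(x), F(x))
        x = x - d
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Illustrative system with root (1, 1): F(x) = (x1^2 - 1, x2^2 - 1).
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
Fp = lambda x: np.array([[2.0*x[0], 0.0], [0.0, 2.0*x[1]]])
root = newton(F, Fp, [2.0, 3.0])
```

Solving the linear system at each step, rather than forming the inverse Jacobian explicitly, is the standard numerically sound choice.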

    Some classical results about the extension of scalar one-point iterative methods to systems can be found in the texts of Traub [29], Ortega and Rheinboldt [22] or Ostrowski [23]. A good and more recent review can also be found in [15]. Multipoint methods arrived with the need to improve the efficiency of the schemes without using second-order derivatives. Gerlach in [18] and Weerakoon and Fernando in [31] used quadrature formulas for this aim, and this approach was generalized by Frontini and Sormani in [17], Cordero and Torregrosa in [12,13], Ozban in [24] and Wang et al. in [30].

    Other techniques have been developed to design vectorial methods, such as pseudocomposition [14], still directly related to quadrature formulas. However, Artidiello et al., in [2], developed the matrix weight function technique to obtain higher-order schemes. Other authors have also used these and other procedures to construct multidimensional methods, as in [3,4,20,25,26,27,32,34], but the number of such schemes in the literature is still small compared with their scalar counterparts. Moreover, it is known that the higher the order of convergence, the smaller the set of converging initial estimations.

    To assure the convergence of Newton's procedure, a seed close to the solution is necessary; but, in general, the modeling of most technical and scientific problems does not satisfy this condition. To enlarge the set of converging initial estimations, a modification of the classical Newton scheme was designed for the scalar case f(x)=0 in [16,33] as

    x_(k+1) = x_k − γ_k f(x_k)/f'(x_k),  k=0,1,…,

    where γ_k is a sequence of real numbers depending on the previous and current iterates.

    Let us remark that, for values of γ_k held fixed along the iterative process, the order of convergence is only linear. Nevertheless, if γ_k is calculated at each iteration from the available information, quadratic convergence is retained, and the set of converging initial estimations is wider (see [5]).
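A scalar sketch of such a damped iteration, with γ_k rebuilt at every step from the available information, might read as follows (the damping formula anticipates the Ermakov-Kalitkin choice discussed next; the cubic test equation is an illustrative assumption):

```python
def damped_newton(f, fprime, x0, tol=1e-12, max_iter=200):
    """x_(k+1) = x_k - gamma_k f(x_k)/f'(x_k), with gamma_k rebuilt
    from the available information at every iteration."""
    x = float(x0)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = fx / fprime(x)              # full Newton correction
        # Damping of Ermakov-Kalitkin type: close to 1 near the
        # solution, small when the full step would overshoot.
        gamma = fx**2 / (fx**2 + f(x - step)**2)
        x = x - gamma * step
    return x

# Illustrative equation x^3 - 2 = 0 from a distant seed.
root = damped_newton(lambda t: t**3 - 2.0, lambda t: 3.0*t**2, 10.0)
```

From a seed this far from the root, the damping keeps the early steps conservative while the final steps recover the full Newton correction.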

    One of the possibilities for γk is due to Kalitkin and Ermakov (see [16]),

    γ_k = ‖f(x_k)‖^2 / [ ‖f(x_k)‖^2 + ‖f(x_k − f(x_k)/f'(x_k))‖^2 ],  k=0,1,…,

    where ‖·‖ denotes any norm. In [5], Budzko et al. designed a two-step scheme, and its multidimensional extension was also proposed. In it, the predictor step is of damped Newton type (γ_k=α), and the corrector is of Kalitkin-Ermakov type. In the scalar case,

    γ_k^eq = ‖f(x_k)‖^2 / [ b‖f(x_k)‖^2 + c‖f(x_k − f(x_k)/f'(x_k))‖^2 ],  k=0,1,…

    For systems, the coefficient γ_k^eq becomes

    Γ^(k)_sys(α,b,c) = [ bI + c α^2 ( (1/α)I − [F'(x^(k))]^(−1)[x^(k),y^(k);F] )^2 ]^(−1),

    with I the identity matrix and α, b, c free parameters.

    In this paper, a novel class (PMKE) for solving nonlinear systems is proposed and studied. This class has a frozen Jacobian and sixth-order convergence. With the aim of widening the regions of starting points that assure convergence, this family has been designed with the classical Newton procedure as the first step. Moreover, the second step has a damping factor of Kalitkin-Ermakov type, and the third step is based on a weight function procedure. For this class, the damping coefficient in the second step for systems takes the form Γ^(k)_sys(1,1,1) = Γ^(k).

    As we construct a whole class of iterative procedures, depending on the weight function selected and also on a parameter, we apply techniques from discrete dynamical systems in order to select the members with better stability properties. We understand these improvements in terms of the width of the sets of initial guesses able to converge to the roots of nonlinear problems. In [11], the authors developed a procedure to carry out this kind of analysis by means of multidimensional real dynamics. From that moment on, different authors have used this technique (or complex dynamics, if they return to the scalar case) in order to study the qualitative behavior of the iterative schemes designed; see, for example, [1,6,7,8,9,10,19], among others. These techniques are also useful in solving nonlinear partial differential equations, as well as stochastic differential equations. In the numerical section, the proposed methods are applied to an equation of heat conduction for a non-homogeneous medium.

    The paper is organized as follows: In Section 2, after some basic definitions, we introduce the new three-step iterative scheme, and its convergence study is presented in Section 3. Next, the dynamical analysis of the proposed method is carried out in Section 4. Then, some numerical experiments are shown, and some conclusions are stated, in Sections 5 and 6, respectively.

    For the sake of completeness, we first introduce some main definitions about iterative processes in numerical analysis. After that, we define some concepts and properties of discrete dynamical systems, which are necessary to follow the results obtained in Section 4.

    Let us consider a sequence {x^(k)}_(k≥0) in R^n converging to the root ξ of F(x)=0. Then, the convergence has order p≥1 if there exist N>0 (0<N<1 when p=1) and k_0 satisfying

    ‖e^(k+1)‖ ≤ N ‖e^(k)‖^p, ∀k≥k_0, with e^(k) = x^(k) − ξ,

    or

    ‖x^(k+1) − ξ‖ ≤ N ‖x^(k) − ξ‖^p, ∀k≥k_0.

    Assuming that the iterates x^(k−2), x^(k−1), x^(k) and x^(k+1) are close to ξ, the rate ρ was introduced in [12] as the Approximated Computational Order of Convergence (ACOC),

    p ≈ ρ = ln( ‖x^(k+1) − x^(k)‖ / ‖x^(k) − x^(k−1)‖ ) / ln( ‖x^(k) − x^(k−1)‖ / ‖x^(k−1) − x^(k−2)‖ ). (2.1)
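In code, ρ can be estimated from the stored iterates; the sketch below assumes NumPy and uses a classical Newton sequence only as an illustration:

```python
import numpy as np

def acoc(xs):
    """ACOC (2.1) from the last four iterates x^(k-2), ..., x^(k+1)."""
    x0, x1, x2, x3 = (np.atleast_1d(np.asarray(v, dtype=float)) for v in xs[-4:])
    num = np.log(np.linalg.norm(x3 - x2) / np.linalg.norm(x2 - x1))
    den = np.log(np.linalg.norm(x2 - x1) / np.linalg.norm(x1 - x0))
    return num / den

# Newton iterates for x^2 - 2 = 0: the ACOC should be close to 2.
seq = [1.5]
for _ in range(3):
    t = seq[-1]
    seq.append(t - (t*t - 2.0) / (2.0*t))
p = acoc(seq)
```

Note that the differences must stay well above machine precision; once the iterates agree to full double precision, the estimate degrades.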

    On the other hand, as F is a sufficiently Fréchet differentiable function, we can consider ξ+h ∈ R^n in a neighborhood of ξ. By using Taylor expansions, and F'(ξ) being non-singular,

    F(ξ+h) = F'(ξ)[ h + Σ_{q=2}^{p−1} C_q h^q ] + O(h^p). (2.2)

    In this expression, C_q = (1/q!) [F'(ξ)]^(−1) F^((q))(ξ), q≥2. Let us remark that C_q h^q ∈ R^n, since [F'(ξ)]^(−1) ∈ L(R^n) and F^((q))(ξ) ∈ L(R^n×⋯×R^n, R^n). Then, we can express F' as

    F'(ξ+h) = F'(ξ)[ I + Σ_{q=2}^{p−1} q C_q h^(q−1) ] + O(h^(p−1)), (2.3)

    where q C_q h^(q−1) ∈ L(R^n).

    Moreover, following the notation defined by Artidiello et al. in [2], if X = R^(n×n) is the Banach space of real n×n matrices, a matrix function H:X→X can be defined such that its Fréchet derivatives satisfy

    (a) H'(u)(v) = H_1 uv, where H':X→L(X) and H_1∈R,

    (b) H''(u,v)(w) = H_2 uvw, where H'':X×X→L(X) and H_2∈R.

    Let G be a vectorial rational operator associated with an iterative scheme applied on a polynomial function p:D⊆R^n→R^n. The orbit of a point x∈R^n is defined as the set of its successive images under G, {x, G(x), …, G^m(x), …}.

    If x* is a point such that G(x*)=x*, then it is called a fixed point of G. This point is attracting if all the eigenvalues λ_j of G'(x*) satisfy |λ_j|<1; it is unstable if at least one eigenvalue satisfies |λ_(j0)|>1; and it is repelling if all the λ_j satisfy |λ_j|>1. On the other hand, the basin of attraction B(x*) of a fixed point x* is defined as the set of its pre-images of any order,

    B(x*) = { x∈R^n : G^m(x) → x*, m→∞ }.

    If x_c is a point such that G'(x_c)=0, then it is called a critical point. Those critical points different from the roots of F(x)=0 are called free critical points. Each basin of attraction of a periodic point contains at least one critical point, so the study of those points and their orbits is crucial.
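Basins of attraction can be explored numerically. The following sketch labels a seed by the fixed point its orbit approaches; the map G chosen here is the component-wise Newton operator for p(x)={x_1^2−1, x_2^2−1}, an illustrative assumption rather than the operator studied later in the paper:

```python
import numpy as np

def basin_label(G, fixed_points, x0, max_iter=40, tol=1e-6):
    """Index of the fixed point that the orbit of x0 approaches, or -1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = G(x)
        for i, fp in enumerate(fixed_points):
            if np.linalg.norm(x - fp) < tol:
                return i
    return -1

# Newton's operator for p(x) = (x1^2 - 1, x2^2 - 1) acts component-wise:
# each coordinate follows the scalar Newton map t -> (t + 1/t)/2.
G = lambda x: (x + 1.0 / x) / 2.0
roots = [np.array([1.0, 1.0]), np.array([1.0, -1.0]),
         np.array([-1.0, 1.0]), np.array([-1.0, -1.0])]
```

For instance, a seed with both components positive lands in the basin of (1,1), while a seed with a negative first component lands in a basin of a root with negative first component. Sweeping a grid of seeds with this labeling is how dynamical planes such as those in Section 4 are usually drawn.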

    All these notions are widely used in the convergence demonstrations and the dynamical study of the following sections.

    Let us consider the three-step iterative method PMKE:

    y^(k) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)),
    z^(k) = y^(k) − Γ^(k) [F'(x^(k))]^(−1) F(y^(k)),
    x^(k+1) = z^(k) − H(τ^(k)) [F'(x^(k))]^(−1) F(z^(k)),  k≥0, (3.1)

    where Γ^(k) = [I + (I − [F'(x^(k))]^(−1)[y^(k),x^(k);F])^2]^(−1) and τ^(k) = I + [F'(x^(k))]^(−1)[z^(k),y^(k);F] Γ^(k). In the following result, we state the conditions to be satisfied in order to get fifth- and sixth-order convergence of class (3.1).

    Theorem 3.1. Let us consider a sufficiently Fréchet differentiable function F:D⊆R^n→R^n defined in an open neighborhood D of its zero ξ, and let H:R^(n×n)→R^(n×n) be a sufficiently differentiable matrix function. Let us also assume that F'(ξ) is non-singular and that x^(0) is a seed close enough to ξ. Then, the sequence {x^(k)}_(k≥0) obtained from (3.1) converges to ξ with order 5 if H_0=(2+H_2)I, H_1=−(1+2H_2) and |H_2|<∞, where I is the identity matrix and H_0=H(I). In this case, the error equation is

    e^(k+1) = 8(1+H_2) C_2^4 (e^(k))^5 + O((e^(k))^6).

    Moreover, if H_2=−1, the order of convergence is six, and the error equation is

    e^(k+1) = (24C_2^5 − 4C_3C_2^3) (e^(k))^6 + O((e^(k))^7),

    where C_q = (1/q!) [F'(ξ)]^(−1) F^((q))(ξ), q=2,3,…

    Proof. By means of the Taylor series of F around ξ, we get the expansions of F(x^(k)), F'(x^(k)), F''(x^(k)) and F'''(x^(k)). Indeed,

    [F'(x^(k))]^(−1) = [ I + X_2 e^(k) + X_3 (e^(k))^2 + X_4 (e^(k))^3 + X_5 (e^(k))^4 + X_6 (e^(k))^5 ] [F'(ξ)]^(−1) + O((e^(k))^6), (3.2)

    where X_2 = −2C_2, X_3 = 4C_2^2 − 3C_3, X_4 = −4C_4 + 6C_2C_3 + 6C_3C_2 − 8C_2^3, X_5 = −5C_5 + 8C_2C_4 − 12C_2^2C_3 + 9C_3^2 + 8C_4C_2 − 12C_2C_3C_2 + 16C_2^4 − 12C_3C_2^2, and

    X_6 = −32C_2^5 + 24C_2^3C_3 − 18C_2C_3^2 − 16C_2^2C_4 + 12C_3C_4 + 10C_2C_5 − 6C_6 + 24C_2^2C_3C_2 − 18C_3^2C_2 − 16C_2C_4C_2 + 10C_5C_2 + 24C_2C_3C_2^2 − 16C_4C_2^2 + 24C_3C_2^3 + 12C_4C_3 − 18C_3C_2C_3.

    These values of X_i, i=2,3,…,6, have been obtained by imposing [F'(x^(k))]^(−1) F'(x^(k)) = I and solving the resulting linear system. Then,

    [F(x(k))]1F(x(k))=e(k)C2e(k)2+2(C22C3)e(k)3+(4C2C3+3C3C24C323C4)e(k)4+(4C5+6C2C48C22C3+6C23+4C4C26C2C3C2+8C426C3C22)e(k)5+(16C52+16C32C312C2C2312C22C4+9C3C4+8C2C55C6+12C22C3C29C23C28C2C4C2+5C5C2+12C2C3C228C4C22+12C3C32+8C4C312C3C2C3)e(k)6+O(e(k)7),

    and

    y(k)ξ=C2e(k)22(C22C3)e(k)3(4C2C3+3C3C24C323C4)e(k)4(4C5+6C2C48C22C3+6C23+4C4C26C2C3C2+8C426C3C22)e(k)5(16C52+16C32C312C2C2312C22C4+9C3C4+8C2C55C6+12C22C3C29C23C28C2C4C2+5C5C2+12C2C3C228C4C22+12C3C32+8C4C312C3C2C3)e(k)6+O(e(k)7). (3.3)

    Therefore,

    F(y(k))=F(ξ)[C2e(k)2+2(C3C22)e(k)3+(3C4+5C323C3C24C2C3)e(k)4+(4C56C2C4+10C22C36C234C4C2+8C2C3C212C42+6C3C22)e(k)5+(28C5227C32C3+16C2C23+15C22C49C3C48C2C5+5C616C22C3C2+9C23C2+11C2C4C25C5C219C2C3C22+8C4C2211C3C328C4C3+12C3C2C3)e(k)6]+O(e(k)7). (3.4)

    Now, we can express the last product of the second step as

    [F(x(k))]1F(y(k))=C2e(k)2+(2C34C22)e(k)3+(13C328C2C3+3C46C3C2)e(k)4+(38C42+26C22C312C2312C2C4+4C5+20C2C3C2+18C3C2208C4C20)e(k)5+(104C5279C32C3+40C2C23+39C22C418C3C416C2C5+5C656C22C3C2+27C23C2+27C2C4C210C5C255C2C3C22+24C4C2250C3C3216C4C3+36C3C2C3)e(k)6+O(e(k)7). (3.5)

    Using the Hermite-Genocchi formula (see [22]) for the operator of first-order divided differences, [x+h,x;F] = ∫_0^1 F'(x+th) dt = F'(x) + (1/2)F''(x)h + (1/6)F'''(x)h^2 + O(h^3), (x,h)∈R^n×R^n, we can obtain the operator [y^(k),x^(k);F] related to the second step of the iterative scheme (3.1):

    [y(k),x(k);F]=F(ξ)[1+C2e(k)+(C22+C3)e(k)2+(2C32+2C2C3+2C4+C3C2)e(k)3+(4C424C22C3+2C23+3C2C4+5C53C2C3C22C4C2C3C22)e(k)4]+O(e(k)5), (3.6)

    and

    [F(x(k))]1[y(k),x(k);F]=IC2e(k)+(3C222C3)e(k)2+(8C32+6C2C32C4+4C3C2)e(k)3+(20C4216C22C3+8C23+7C2C411C2C3C2+2C4C210C3C22)e(k)4+(48C52+40C32C322C2C2320C22C4+9C3C4+4C2C5+5C6+28C22C3C213C23C28C2C4C25C5C2+26C2C3C224C4C22+24C3C32+4C4C320C3C2C3)e(k)5+(112C6296C42C3+56C22C2326C33+52C32C427C2C3C4+2C2416C22C5+4C3C55C2C6+14C756C32C3C2+35C2C23C2+24C22C4C27C3C4C2+5C2C5C219C6C264C22C3C22+29C23C22+16C2C4C22+15C5C22+8C4C3260C2C3C3256C3C4216C2C4C310C5C3+52C2C3C2C38C4C2C3+48C3C22C324C3C2C412C32C3C22C4C3C2+32C3C2C3C2)e(k)6+O(e(k)7).

    The Taylor expansion around the solution of the term B = I + (I − [F'(x^(k))]^(−1)[x^(k),y^(k);F])^2 appearing in the second step of (3.1) can be expressed as

    B=I+C22e(k)2+(6C32+2C2C3+2C3C2)e(k)3+(25C4212C22C3+4C23+2C2C4+2C4C210C2C3C2)e(k)4+O(e(k)5). (3.7)

    Let us now find the inverse of B, which can be written as B^(−1) = Γ^(k) = I + K_1 e^(k) + K_2 (e^(k))^2 + K_3 (e^(k))^3 + K_4 (e^(k))^4 + O((e^(k))^5). Imposing the condition B B^(−1) = B^(−1) B = I, we can find the coefficients:

    K1=0,K2=C22,K3=2(3C32C2C3C3C2),K4=C3225C42+12C22C34C232C2C4+10C2C3C2+10C3C222C4C2.

    Therefore,

    Γ(k)=IC22e(k)2+2(3C32C2C3C3C2)e(k)3+(C3225C42+12C22C34C232C2C4+10C2C3C210C3C22+2C4C2)e(k)4+O(e(k)5), (3.8)

    and with the help of expressions (3.3), (3.5) and (3.8), we obtain the error in the second step:

    z(k)ξ=(y(k)ξ)[I+(I[F(x(k))]1[x(k),y(k);F])2]1F(x(k))1F(y(k))=2C22e(k)3+(8C32+4C2C3+3C3C2)e(k)4+(20C4214C22C3+6C23+6C2C412C2C3C2+4C4C212C3C22)e(k)5+(C42+42C5239C32C3+32C2C23+34C22C418C3C4+5C616C2C532C22C3C2+23C23C2+25C2C4C210C5C245C2C3C22+24C4C2250C3C3216C4C3+36C3C2C3)e(k)6+O(e(k)7). (3.9)

    To get the error equation, we first obtain the first-order divided difference [z^(k),y^(k);F], knowing that

    h(k)=z(k)y(k)=C2e(k)2+(4C222C3)e(k)3+O(e(k)4), (3.10)

    and

    F(z(k))=F(ξ)[(z(k)ξ)+C2(z(k)ξ)2]+O((z(k)ξ)3)=F(ξ)[2C22e(k)3+(8C32+4C2C3+3C3C2)e(k)4+(20C4214C22C3+6C23+6C2C4+4C4C212C2C3C212C3C22)e(k)5+(22C52C42+23C32C320C2C23+9C3C422C22C4+20C22C3C2+8C2C514C23C217C2C4C2+33C2C3C22+5C5C2+38C3C3216C4C22+8C4C324C3C2C3)e(k)6]+O(e(k)7). (3.11)

    Using (3.10) and the Hermite-Genocchi formula, we get

    [z(k),y(k);F]=F(ξ)[I+C22e(k)2+(3C2C44C423C2C3C20+4C3C220)e(k)4+(12C528C32C36C2C23+2C2C23+4C2C5+8C2C3C2220C3C32+8C3C2C3)e(k)5+(10C62C52+27C42C3+4C3312C32C4+5C2C69C2C3C4+2C32C3C2+25C2C23C2+3C3C4C29C22C4C22C23C2211C22C3C2228C2C3C322C2C4C22+76C3C422C4C32+16C2C3C2C349C3C22C3+12C3C2C426C3C2C3C2)e(k)6]+O(e(k)7). (3.12)

    By using (3.2), (3.8) and (3.12), the variable τ(k) can be expanded as

    τ(k)=I+[F(x(k))]1[z(k),y(k);F][I+(I[F(x(k))]1[y(k),x(k);F])2]1=2C2e(k)+(4C22+3C3)e(k)2+(2C326C2C3+4C44C3C2)e(k)3+(C32+26C439C2C4+5C5+C2C3C26C4C22C3C22)e(k)4+(8C42152C522C22C3+58C32C34C2C23+5C22C48C3C4+6C614C2C5+37C22C3C22C2C3C2+3C2C4C26C23C2+37C2C3C2210C5C2+8C4C22)+52C3C328C4C310C3C2C3)e(k)5+(40C52+554C62+16C32C3295C42C34C2C23+82C22C2321C33+59C32C42C22C4+4C2C3C412C24+24C22C515C3C5+7C722C2C6+14C22C3C2185C32C3C2+42C2C23C22C2C4C2+37C4C2C22+2C3C4C2+21C2C5C217C6C2+10C2C3C22195C22C3C22+61C23C22+22C2C4C22+3C3C32+25C5C22+20C4C32222C2C3C32276C3C422C2C4C315C5C3+86C2C3C2C3+8C4C2C3+125C3C22C37C3C2C4+4C4C3C2+81C3C2C3C2)e(k)6+O(e(k)7). (3.13)

    Then, employing (3.9), (3.11)–(3.13) and the Taylor expansion of the weight function H(τ^(k)) = H_0 + H_1(τ^(k)−I) + H_2(τ^(k)−I)^2 + O((τ^(k)−I)^3), the error equation takes the form

    e(k+1)=(2C222(H0+H1+H2)C22)e(k)3+(8C32+4C3C2+3C3C22(2C2H14C2H2)C22(H0+H1+H2)(12C32+4C3C2+3C3C2))e(k)4+(20C4216C22C3+6C23+6C2C412C2C3C2+4C4C210C3C222(H1(4C223C3)+H2(12C226C3))C22(2H1C24H2C2)(12C32+4C2C3+3C3C2)(H0+H1+H2)(44C4224C22C3+6C23+6C2C418C2C3C2+4C4C216C3C22))e(k)5+(C4226C52+43C32C324C2C2324C22C4+9C3C4+8C2C5+26C22C3C214C23C217C2C4C2+5C5C2+25C2C3C2214C4C22+20C3C32+8C4C320C3C2C32(H1(2C32+6C2C34C4+4C3C2)+H2(20C32+18C2C38C4+14C3C2))C22(H2(12C226C3)+H1(4C223C3))(12C32+4C2C3+3C3C2)
    (H0+H1+H2)(C42110C52+91C32C336C2C2336C22C4+9C3C4+8C2C5+62C22C3C223C23C225C2C4C2+5C5C2+57C2C3C2222C4C22+56C3C32+8C4C332C3C2C3))e(k)6+O(e(k)7). (3.14)

    Fixing H_0=(2+H_2)I and H_1=−(1+2H_2), the error equation becomes

    e^(k+1) = 8(1+H_2) C_2^4 (e^(k))^5 + O((e^(k))^6).

    Finally, making H_2=−1, we obtain

    e^(k+1) = (24C_2^5 − 4C_3C_2^3) (e^(k))^6 + O((e^(k))^7),

    and the proof is complete.
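For reference, one iteration of scheme (3.1) can be sketched as follows. The divided difference operator is built column by column, and the weight function is taken as H(τ)=I+(τ−I)−(τ−I)^2, i.e., H_0=I, H_1=1, H_2=−1; this choice, as well as the sign conventions read off (3.1), should be treated as assumptions of the sketch rather than a definitive implementation:

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference [y, x; F], column by column."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        a = np.concatenate([y[:j + 1], x[j + 1:]])
        b = np.concatenate([y[:j], x[j:]])
        d = y[j] - x[j]
        if d == 0.0:                       # coincident arguments:
            h = 1e-8                       # fall back to a forward difference
            e = np.zeros(n); e[j] = h
            M[:, j] = (F(b + e) - F(b)) / h
        else:
            M[:, j] = (F(a) - F(b)) / d
    return M

def pmke_step(F, Fprime, x):
    """One PMKE iteration with frozen Jacobian, following (3.1)."""
    n = len(x)
    I = np.eye(n)
    Jinv = np.linalg.inv(Fprime(x))        # frozen, reused in all three steps
    y = x - Jinv @ F(x)
    A = I - Jinv @ divided_difference(F, y, x)
    Gamma = np.linalg.inv(I + A @ A)       # Kalitkin-Ermakov type damping
    z = y - Gamma @ (Jinv @ F(y))
    tau = I + Jinv @ divided_difference(F, z, y) @ Gamma
    H = I + (tau - I) - (tau - I) @ (tau - I)
    return z - H @ (Jinv @ F(z))

# Illustrative system with root (1, 1).
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
Fp = lambda x: np.array([[2.0*x[0], 0.0], [0.0, 2.0*x[1]]])
x = np.array([2.0, 3.0])
for _ in range(3):
    x = pmke_step(F, Fp, x)
```

Only one Jacobian evaluation and one factorization per iteration are needed; the divided differences reuse plain function evaluations.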

    Once the conditions that the weight function must meet have been obtained, there are several ways to define the matrix weight function H, and each of them generates a different iterative family.

    Family 1. The weight function defined by the polynomial

    H(τ)=β(τ2I)2τ+3I,

    where β∈R satisfies the hypotheses of Theorem 3.1 and τ^(k) = I + [F'(x^(k))]^(−1)[z^(k),y^(k);F] [I + (I − [F'(x^(k))]^(−1)[y^(k),x^(k);F])^2]^(−1), generates a new parametric class of fifth order for β≠1 and a single sixth-order scheme for β=1,

    y^(k) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)),
    z^(k) = y^(k) − [I + (I − [F'(x^(k))]^(−1)[x^(k),y^(k);F])^2]^(−1) [F'(x^(k))]^(−1) F(y^(k)),
    x^(k+1) = z^(k) − H(τ^(k)) [F'(x^(k))]^(−1) F(z^(k)),  k≥0. (3.15)

    This family is denoted by PM(β)KEp, depending on the parameter β.

    Family 2. The weight function defined by

    H(τ)=(2+β)τ1,

    where β∈R also satisfies the hypotheses of Theorem 3.1, generates a new class of fifth and sixth order for β≠1 and β=1, respectively.

    y^(k) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)),
    z^(k) = y^(k) − [I + (I − [F'(x^(k))]^(−1)[x^(k),y^(k);F])^2]^(−1) [F'(x^(k))]^(−1) F(y^(k)),
    x^(k+1) = z^(k) − H(τ^(k)) [F'(x^(k))]^(−1) F(z^(k)),  k≥0, (3.16)

    with τ^(k) = I + [F'(x^(k))]^(−1)[z^(k),y^(k);F] [I + (I − [F'(x^(k))]^(−1)[y^(k),x^(k);F])^2]^(−1). In the following, we will use Family 1 (PM(β)KEp) for all the numerical studies. The algorithm of this class is presented below.

    Algorithm:

    ● Step 1: Knowing the nonlinear system to be solved, defined by the function F and its associated Jacobian matrix F', choose an initial estimation x^(0) and a small constant tol as the required accuracy; let k=0, and start the iteration process.

    ● Step 2: Apply iterative expression (3.15) to get the next iterate, x^(k+1).

    ● Step 3: If ‖F(x^(k+1))‖ < tol, we stop and output x^(k+1) as the solution. Otherwise, set k=k+1, go to Step 2 and continue the iteration process.
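The three steps above can be condensed into a small driver routine; the stand-in step below is a plain Newton update on an illustrative scalar equation (any member of (3.15) can be substituted for it):

```python
import numpy as np

def solve(F, step, x0, tol=1e-10, max_iter=100):
    """Steps 1-3: iterate `step` until ||F(x)|| < tol or max_iter is hit."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if np.linalg.norm(F(x)) < tol:
            return x, k                    # solution and iterations used
        x = step(x)
    return x, max_iter

# Stand-in step: classical Newton for F(x) = x^2 - 2 in one dimension.
F = lambda x: x * x - 2.0
newton_step = lambda x: x - (x * x - 2.0) / (2.0 * x)
sol, iters = solve(F, newton_step, [1.5])
```

Returning the iteration count alongside the solution makes it easy to compare members of the family in the numerical experiments.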

    In this section, we study the performance of the vectorial rational function obtained by applying the iterative class PM(β)KEp to p(x)={x_1^2−1, x_2^2−1}, for x=(x_1,x_2)∈R^2. Because of the dependence of this family on the parameter β, it is important to study the existence of fixed points different from the roots of the system p(x)=0, mainly those with attracting character, as well as other attracting structures that can be interesting from the dynamical point of view. Through this analysis, those members whose only behavior is convergence to the roots will be identified.

    So, by applying PM(β)KEp to p(x), we get its rational vectorial operator G(x,β)={g_1(x,β), g_2(x,β)}, whose j-th coordinate is

    gj(x,β)=18xj3(17xj42xj2+1)6(β(xj21)3(169xj651xj4+11xj21)(409xj8+84xj6+14xj4+4xj2+1)2+2(17xj42xj2+1)2(1+16xj2135xj4+944xj64474xj8+21008xj1067422xj12+204304xj14335157xj16+374304xj18+68757xj20)). (4.1)

    Taking into account the last result, it is possible to formulate the following theorem about the stability of the fixed points of G(x,β).

    Theorem 4.1. The pairs (1,1), (1,−1), (−1,1) and (−1,−1) are superattracting fixed points of the rational function G(x,β), and they are the zeroes of p(x). In addition, denoting by B the set of strange fixed points of G(x,β), it is composed of the pairs (l_i,l_j), (±1,l_j) and (l_i,±1), for i,j≤6, whose components different from ±1 are real zeroes of

    l(t,β)=(153359006+28270489β)xj26+(18994402653459981β)xj24+(157699748+23572126β)xj22+(80245084+748834β)xj20+(34849386+406419β)xj18+(11271950+333817β)xj16+(3284120+129684β)xj14+(7588246964β)xj12+(159762+5039β)xj10+(26838+565β)xj8+(397218β)xj6+(44414β)xj4+(38+5β)xj22β.

    Hence, the number of strange fixed points in B depends on β:

    i) If β∈(−∞,−61.956], B is composed of thirty-six repulsive and twenty-four saddle strange fixed points.

    ii) If β∈(−61.956,−5.4247), B has the same composition as in the previous item.

    iii) If β∈(−5.4247,−4.8788), B is composed of sixteen repulsive and sixteen saddle strange fixed points.

    iv) If β∈[−4.8788,−2), the composition of B is the same as in the previous item.

    v) For β∈[−2,−0.9328), there exist sixty strange fixed points, whose character depends on β. Regarding their stability, two different situations can be found:

    a) If β∈[−2,−0.9578), B has the same composition as in the first item.

    b) When β∈(−0.9578,−0.9329), B has twelve attracting, sixteen repulsive and thirty-two saddle strange fixed points.

    vi) If β∈(−0.9329,+∞), B has four repulsive and eight saddle strange fixed points.

    Proof. G(x,β) is symmetric with respect to its component functions. Considering that the fixed points are the solutions of the equation g_j(x,β)=x_j, j=1,2, we get

    (x_j^2 − 1) l(x_j,β) = 0,  j=1,2, (4.2)

    where

    l(xj,β)=β2+(+38+38+5β)xj2+(44414β)xj4+(397218β)xj6+(26838+565β)xj8+(159762+5039β)xj10+(7588246964β)xj12+(+3284120+129684β)xj14+(11271950+333817β)xj16+(34849386+406419β)xj18+(80245084+748834β)xj20+(157699748+23572126β)xj22+(18994402653459981β)xj24+(153359006+28270489β)xj26.

    Considering (4.2), it is observed that x_j=±1 are solutions of this equation; therefore, (1,1), (1,−1), (−1,1) and (−1,−1) are fixed points of the rational operator G(x,β) and, at the same time, roots of the polynomial p(x). In order to study the stability of the rational operator, the diagonal matrix form of the associated Jacobian G'(x,β) is computed:

    G'(x,β) = diag( J_1(x_1,β), J_2(x_2,β) ),

    where

    J_j(x_j,β) = (x_j−1)^2 (x_j+1)^2 r(x_j) / [ 8 x_j^4 (17x_j^4 − 2x_j^2 + 1)^7 ],  j=1,2, (4.3)

    and

    r(xj)=(480598313β675606282)xj28+(1615581902β+3201012552)xj26+(1326760599β1571407002)xj24+(869157440288459620β)xj22+(74267881β295949922)xj20+(17885090β+109500040)xj18+(5710273β34047410)xj16+(212808β+10036672)xj14+(148099β2525438)xj12+(52220054734β)xj10+(5971β90670)xj8+(1884β+11776)xj6+(533β1302)xj4+(30β+88)xj23β6.

    It is straightforward that J_j(±1,β)=0, j=1,2, for any β; therefore, the zeroes of p(x) are superattracting fixed points, given that the eigenvalues of G'((±1,±1),β) are null.

    The fixed points can be calculated through l(t,β) by means of the substitution s=t^2. A 13th-degree polynomial is then obtained,

    L(s,β)=(153359006+28270489β)s13+(18994402653459981β)s12+(157699748+23572126β)s11+(80245084+748834β)s10+(34849386+406419β)s9+(11271950+333817β)s8+(3284120+129684β)s7+(7588246964β)s6+(159762+5039β)s5+(26838+565β)s4+(397218β)s3+(44414β)s2+(38+5β)s2β.

    The operator G(x,β) has real fixed points whose components, symbolized as l_i, are obtained as ±√L_i from the real and positive roots L_i of L(s). Therefore, the number of components of B is related to the number of real and positive roots of L(s), as well as to their combinations with ±1. Then,

    i) Three real and positive roots L_1, L_2, L_3 are found if β∈(−∞,−61.956], and the values {+√L_1, +√L_2, +√L_3, −√L_1, −√L_2, −√L_3} are denoted by l_i, i∈{1,2,3,4,5,6}. The set of all strange fixed points is obtained by combining the l_i, i=1,2,…,6, in pairs with themselves and with 1 and −1. In the first case, there are 36 combined pairs (l_i,l_k); in the second, there are 12+12=24 pairs of the kinds (l_i,±1) and (±1,l_i). The information about the stability of these fixed points of G(x,β) can be inferred from the analysis of the absolute values of the eigenvalues of the matrix G'((l_i,l_k),β), i,k∈{1,2,3,4,5,6}; these functions of β are called the stability functions of the respective fixed points. Due to the nature of the polynomial system, the eigenvalues satisfy λ((l_i,l_k),β)=J_1((l_i,l_k),β)=J_2((l_i,l_k),β) for i,k∈{1,2,3,4,5,6}; however, if any of the components of the fixed point is ±1, the corresponding eigenvalue is always null.

    Figure 1 shows the values of |λ| associated with the Jacobian matrix, evaluated at the points (l_i,l_k), i,k∈{1,2,3,4,5,6}. For all of them, the corresponding absolute values of λ are greater than one, so the strange fixed points (l_i,l_k) are repulsive in this interval. Meanwhile, the fixed points (l_i,±1) and (±1,l_i), for i∈{1,2,3,4,5,6}, are saddle points, as one of the eigenvalues is zero and the other is greater than one.

    Figure 1.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,2,3,4,5,6} and β∈(−∞,−61.956].

    ii) The number of fixed points for β∈(−61.956,−5.4247) is the same as in the previous case, due to the existence of three real and positive roots. The information about the qualitative behavior of the fixed points can be extracted from the stability functions represented in Figure 2. For all the pairs (l_i,l_k), the corresponding absolute values of λ are greater than one, and the pairs (l_i,±1) and (±1,l_i) are saddle points, for i∈{1,2,3,4,5,6}.

    Figure 2.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,2,3,4,5,6} and β∈(−61.956,−5.4247).

    iii) If β∈(−5.4247,−4.8788), the polynomial L(s) has two real positive roots L_1 and L_2, and the elements {+√L_1, +√L_2, −√L_1, −√L_2} are denoted by l_i, i∈{1,2,3,4}. The analysis of the stability functions associated with the fixed points, in Figure 3, shows that the pairs (l_i,l_k), i,k∈{1,2,3,4}, are repulsive, because their corresponding |λ| are greater than one, while (l_i,±1) and (±1,l_i) are saddle fixed points.

    Figure 3.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,2,3,4} and β∈(−5.4247,−4.8788).

    iv) For β∈[−4.8788,−2), the same structure appears, and the same reasoning as in the previous case applies; see Figure 4.

    Figure 4.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,2,3,4} and β∈[−4.8788,−2).

    v) As in the first item, in this case L(s) has three real positive roots, and the elements l_i are denoted in the same way, i∈{1,2,3,4,5,6}. For some values of β there are attracting fixed points, and such values can be calculated by solving the equation |λ_j((l_i,l_k),β)|=1, for i,k∈{1,2,3,4,5,6} and j=1,2. As a result, we have found that the values β_1≈−0.9578 and β_2≈−0.9328 satisfy this equation, related to the strange fixed points (l_i,l_k) with i,k∈{2,5}; see Figure 5c, d.

    Figure 5.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,2,3,4,5,6} and β∈[−2,−0.9328).

    a) For β∈[−2,β_1), all thirty-six pairs (l_i,l_j) are repulsive fixed points, as |λ_j((l_i,l_k),β)|>1 for i,k∈{1,2,3,4,5,6} and j=1,2. On the other hand, the pairs of the kinds (l_i,±1) and (±1,l_i) are saddle fixed points, twenty-four in total, for i∈{1,2,3,4,5,6}. The stability of any of these fixed points can be inferred from Figure 5.

    b) There are no hyperbolic points for the values β=β_1,β_2. When β∈(β_1,β_2), the fixed points (l_i,±1), (±1,l_k) and (l_i,l_k), for i,k∈{2,5}, correspond to Jacobian matrices whose eigenvalues satisfy |λ_j|<1 or |λ_j|=0, j=1,2, so they are attracting, and they are twelve in number. On the other hand, the forty-eight pairs (l_2,l_i), (l_i,l_2), (l_5,l_i), (l_i,l_5), (±1,l_i) and (l_i,±1), for i∈{1,3,4,6}, are saddle fixed points, due to the fact that one of the eigenvalues satisfies |λ_j((l_i,l_k),β)|>1; see Figure 5.

    vi) Finally, when β∈(−0.9328,+∞), the polynomial L(s) has one real positive root L_1, and the elements {+√L_1, −√L_1} are denoted by l_i, i∈{1,4}. The four fixed points (l_i,l_k), i,k∈{1,4}, are repulsive, as the absolute values |λ_j((l_i,l_k),β)| are greater than one. On the other hand, the pairs (l_1,±1), (±1,l_1), (l_4,±1) and (±1,l_4) (eight in total) are saddle fixed points, due to the fact that one eigenvalue is null and the other satisfies |λ_j((l_i,l_k),β)|>1 (see Figure 6).

    Figure 6.  Stability functions |λ((l_i,l_k),β)| for i,k∈{1,4} and β∈(−0.9328,+∞).

    The stability of the strange fixed points is now established, and it has been shown that attracting strange fixed points only appear for β∈(−0.9578,−0.9329). However, in addition to the latter, there are other attracting structures whose values of the parameter β should also be avoided in order to achieve a stable performance of the iterative methods. These structures must be sought in basins of attraction in which the free critical points, those different from the roots of p(x)=0, are also found. Regarding the latter, we can make the following statement about the location of the critical points.

    Theorem 4.2. Let K denote the set of free real critical points of G(x,β), related to iterative Family 1, and let the values {α_1≈0.46806, α_2≈6.28814, α_3≈66.1728} be the collection of real roots of a 23rd-degree polynomial.

    Then, K is the collection of the pairs (q_i,q_j), (q_i,±1) and (±1,q_j), for i,j≤10, whose components different from ±1 are the real zeroes of the polynomial

    r(t)=(480598313β675606282)t28+(1615581902β+3201012552)t26+(1326760599β1571407002)t24+(869157440288459620β)t22+(74267881β295949922)t20+(17885090β+109500040)t18+(5710273β34047410)t16+(212808β+10036672)t14+(148099β2525438)t12+(52220054734β)t10+(5971β90670)t8+(1884β+11776)t6+(533β1302)t4+(30β+88)t23β6.

    Then, K is made up as follows:

    i) If β∈(−∞,−2], K contains 12 free critical points.

    ii) If β∈(−2,−0.5], K contains 32 free critical points.

    iii) If β∈(−0.5,α_1], K contains 100 free critical points.

    iv) If β∈(α_1,1.40576), K contains 32 free critical points.

    v) If β∈(1.40576,α_2), β∈[α_2,α_3) or β∈(α_3,+∞), K contains 12 free critical points.

Proof. It is known from Theorem 4.1 that the eigenvalues of the Jacobian matrix G′(x,β) are λj(x,β) = Jj(x,β), for j ∈ {1,2}, that is,

λj(x,β) = ( (xj − 1)^2 (xj + 1)^2 r(xj) ) / ( 8xj^14 (17xj^14 − 2xj^2 + 1)^7 ),  j = 1,2, (4.4)

    where

r(t) = (480598313β − 675606282)t^28 + (1615581902β + 3201012552)t^26 + (1326760599β − 1571407002)t^24 + (869157440 − 288459620β)t^22 + (74267881β − 295949922)t^20 + (17885090β + 109500040)t^18 + (5710273β − 34047410)t^16 + (212808β + 10036672)t^14 + (148099β − 2525438)t^12 + (522200 − 54734β)t^10 + (5971β − 90670)t^8 + (1884β + 11776)t^6 + (533β − 1302)t^4 + (30β + 88)t^2 − 3β − 6. (4.5)

On the other hand, the critical points are the solutions of the equation λj(x,β) = 0, for j ∈ {1,2}, which are found by substituting s = t^2 in r(t). Its real positive roots, denoted by Qj, provide the entries of the free critical points, as qk = ±√Qj. The number of positive roots depends on the value of the parameter β:

(i) When β ∈ (−∞,−2], r(s) has one real positive root, denoted by Q1. Then, K is composed of (q1,q1), (q1,q5), (q5,q1) and (q5,q5), where q1,5 = ±√Q1, together with the eight critical points of the forms (qj,±1) and (±1,qj), j ∈ {1,5}.

(ii) If β ∈ (−2,−0.5], r(s) has exactly two real positive roots, Q1 and Q2. Therefore, K = {(qi,qj), (qi,±1), (±1,qj) : i,j ∈ {1,2,5,6}}, holding 32 different free critical points.

(iii) For β ∈ (−0.5,α1], there exist four real positive roots of the polynomial r(s): Q1, Q2, Q3 and Q4. So, K contains the elements of the form (qi,qj), with i,j ∈ {1,2,…,8}, where q1,5 = ±√Q1, q2,6 = ±√Q2, q3,7 = ±√Q3 and q4,8 = ±√Q4. In addition, the mixed points (±1,qj) and (qi,±1), with i,j ∈ {1,2,…,8}, also belong to K. The total number of critical points is 100.

(iv) When β ∈ (α1,1.40576), r(s) has two real positive roots, and the composition of K is the same as in (ii).

(v) If β ∈ (1.40576,α2), β ∈ [α2,α3) or β ∈ (α3,+∞), r(s) has one real positive root (different in each case), and the composition of K is the same as in (i).

Theorems 4.1 and 4.2 give us information about the stability of the iterative class acting on a given test polynomial. In general, the elements of the sets of strange fixed points and free critical points should be avoided as initial estimates.

However, from the dynamical point of view, the analysis of the long-term behavior of the free critical points can reveal other strange attracting structures, periodic orbits or even chaotic behavior.

As a first approach to the study of the critical points, we use the tool named the parameter line, first used in [21].

This tool consists of a mesh of 500×500 points in a specific interval of the β parameter. Each parameter-dependent free critical point, used as an initial estimate, is colored red if after at most 200 iterations it converges to some root of the polynomial p(x), and black in any other case. The error tolerance is 10^−3. Each point is shaded by a factor in [0,1]: the red is brighter when the number of iterations needed to converge is lower.
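The procedure behind each point of the parameter line can be sketched as follows. Since the rational operator of the family is not reproduced here, a damped Newton operator on p(x) = (x1² − 1, x2² − 1) plays the role of the one-parameter family, and the seed stands in for a parameter-dependent critical point (all names are assumptions of the sketch):

```python
import numpy as np

def damped_newton(x, beta):
    # Stand-in one-parameter operator: damped Newton on p(x) = x_i^2 - 1.
    return x - beta * (x**2 - 1) / (2 * x)

def converges(G, x0, beta, roots, tol=1e-3, maxiter=200):
    """Red point of the parameter line: the seed reaches some root of p(x)."""
    x = np.array(x0, dtype=float)
    for k in range(maxiter):
        x = G(x, beta)
        if any(np.linalg.norm(x - r) < tol for r in roots):
            return True, k + 1      # converged: red, brighter if k is small
    return False, maxiter           # no convergence: black

roots = [np.array(r) for r in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
for beta in (1.0, 2.0):
    ok, iters = converges(damped_newton, (0.5, 0.5), beta, roots)
    print(beta, ok)
# beta = 1.0 converges (red point); beta = 2.0 falls into the 2-cycle
# 0.5 <-> 2 and is painted black.
```

Sweeping β over a fine mesh and stacking the resulting colors produces the parameter line of Figure 7.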

Figure 7 shows the limit behavior of the free critical points for β ∈ (−2,−0.5]. In Figure 7a, all the free critical points (qi,qj), with i,j ∈ {1,5}, converge to one of the roots of p(x); moreover, in Figure 7b, unstable behavior is found for the critical points (qi,qj) with i,j ∈ {2,6}, (q1,qm) for m ∈ {2,6} and (q2,qn) for n ∈ {1,5}, around the values β ≈ −1.75 and β ≈ −0.95.

Figure 7.  Parameter lines related with critical points for β ∈ (−2,−0.5].

We continue the study of the unstable behavior in the two dark zones by means of bifurcation diagrams. They are calculated using each free critical point of the rational operator as a seed, for β ∈ (−2,−0.5], in a mesh of 3000 subintervals. In this interval the critical points are real, and we observe their behavior in the last 100 of 1000 iterations. In Figure 8, detailed bifurcation diagrams of the critical point (q2,q6) are shown; they are related to the dark zones of no convergence in the parameter lines.
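The construction of such a bifurcation diagram (iterate the seed 1000 times, keep the last 100 values for each β of the mesh) can be sketched as below, again with the damped Newton operator on p(x) = x² − 1 as a stand-in family; for it, the root x = 1 loses its attracting character at β = 2, where its multiplier 1 − β leaves the unit disk:

```python
import numpy as np

def g(x, beta):
    # Stand-in scalar family: damped Newton on p(x) = x^2 - 1.
    return x - beta * (x**2 - 1) / (2 * x)

def tail_orbit(beta, x0=0.5, n_iter=1000, n_keep=100):
    """Last n_keep iterates of the seed x0 under g(., beta)."""
    x = x0
    orbit = []
    for k in range(n_iter):
        x = g(x, beta)
        if k >= n_iter - n_keep:
            orbit.append(x)
    return np.array(orbit)

# A coarser mesh than the 3000 subintervals of the text keeps the sketch cheap;
# plotting betas against the tails gives the bifurcation diagram.
betas = np.linspace(1.5, 2.5, 300)
tails = [tail_orbit(b) for b in betas]
# For beta < 2 the tail collapses onto the root x = 1; past beta = 2 the
# root becomes repelling and the tail spreads over periodic or chaotic sets.
```
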

Figure 8.  Bifurcation diagrams for critical point (q2,q6) with β ∈ (−2,−0.5].

Because the polynomial p(x) has separated variables, the coordinate functions of the rational operator are symmetric. Taking this into account, in the bifurcation diagrams the x1 and x2 dimensions are plotted in blue and red, respectively. For the detailed views, only positive coordinates were chosen.

Observing the details of the bifurcation diagrams given in Figure 8b,e, we can find patterns of bifurcations, such as period-doubling cascades, periodic orbits and chaotic behavior, for β ≈ −1.76, −1.762 and −1.765 in the first case and for β ≈ −0.96, −0.97 and −0.986 in the second case, respectively. In the following, we find and show some of these pathological attracting structures previously observed in the bifurcation diagrams.

To get a full view of the dynamical behavior of the rational operator G(x,β), we plot dynamical planes. These graphics are constructed by iterating each point of a mesh with step 0.01 and painting it according to the root it converges to, with an error tolerance of 10^−3 and a maximum of 50 iterations as stopping criteria. The brighter the color, the smaller the number of iterations needed to converge. If there is no convergence when the iterations are exhausted, the point is colored black. The roots of the polynomial are represented with circles.
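The dynamical-plane construction can be sketched as follows, once more with componentwise Newton on p(x) = (x1² − 1, x2² − 1) as a stand-in operator (a coarser 0.1-step mesh is used to keep the sketch cheap; for this map, only the axes x1 = 0 and x2 = 0 fail to converge):

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 41)          # 0.1-step mesh on [-2, 2]^2
X1, X2 = np.meshgrid(xs, xs)
U, V = X1.copy(), X2.copy()
with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(50):                  # at most 50 iterations per point
        U = U - (U**2 - 1) / (2 * U)     # componentwise Newton step
        V = V - (V**2 - 1) / (2 * V)

tol = 1e-3
conv = ((np.minimum(np.abs(U - 1), np.abs(U + 1)) < tol)
        & (np.minimum(np.abs(V - 1), np.abs(V + 1)) < tol))
# conv marks the points lying in some basin of attraction; the rest
# (here, the coordinate axes, mapped to NaN) would be painted black.
print(conv.mean())   # fraction of the mesh inside a basin of attraction
```

Coloring each converged point by the index of its root, with brightness decreasing in the number of iterations, yields planes like those of Figures 9 and 10.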

Related to the first dark zone of the parameter line (see Figure 7b), in Figure 9a we can see that the number of connected components in the basins of attraction of the roots for β = −1.7665 is infinite. Green and white diamonds represent the repulsive and saddle strange fixed points, respectively; all of them lie in the Julia set. On the other hand, the free critical points are represented with yellow squares. Some of them lie in the basins of attraction of the roots of p(x), and the rest are located in the black zone. Observing the bifurcation diagram in Figure 8b and knowing the position of the critical point (q2,q6), we can explore the regions around the points (0.48,±0.48) to find attracting elements. In Figure 9a, we can observe an attracting structure (see the small white regions around the points (0.48,±0.48)). Any initial estimation in those zones tends to densely fill these same symmetrical regions. The details of these small regions can be observed in Figure 9b.

    Figure 9.  Dynamical planes with pathological behavior.

On the other hand, for the value β = −1.7621 (see the bifurcation diagram in Figure 8b), we find periodic orbits around the critical point (q2,q6). One of them, with period 8, is shown in Figure 9c, and its elements are

    {(0.4968,0.4968),(0.4756,0.4756),(0.4938,0.4938),(0.4829,0.4829),(0.4966,0.4966),(0.4763,0.4763),(0.4945,0.4945),(0.4814,0.4814)}.

    Its details can be seen in Figure 9d.

Finally, by studying the second dark zone of the parameter line in Figure 7b, it is important to note in Figure 8d,e the same color of the bifurcation patterns. This means that the x1 and x2 coordinates keep the same sign in any attracting structure; that is, each of those attracting elements must be confined in a small area where the signs of the coordinates of its points do not change. In Figure 10a, for β = −0.986, this behavior can be seen in a small area around the critical point (q2,q6) and the coordinates (0.48,0.48), where the iterates of any initial estimate tend to densely fill this area. Details are shown in Figure 10b. Meanwhile, observing the bifurcation diagram in Figure 8e for β = −0.97, we find periodic orbits of period four; one example can be seen in Figure 10c,d, whose components are (0.4589,0.4589), (0.4891,0.4891), (0.4543,0.4543), (0.5010,0.5010).

    Figure 10.  Dynamical planes with unstable values of the parameter.

In what follows, we show the behavior of PM(β)KEp (described by (3.15)) on some real-life examples, in comparison with Newton's method. The β values used (β = 1,2) provide convergence of orders 6 and 5, respectively, in accordance with the results of Theorem 3.1. To develop the numerical tests, we use MATLAB software with variable precision arithmetic and 2000 digits of mantissa. The experiments have been performed on a computer with an Intel(R) Core(TM) i7-1065G7 CPU and 16 GB of RAM. The stopping criterion used is ‖F(x^(k+1))‖ < 10^−200 or ‖x^(k+1) − x^(k)‖ < 10^−200. The first-order divided difference operator used has the entries given by the expression (see [22])

[d,c;F]_ij = ( f_i(d_1, …, d_{j−1}, d_j, c_{j+1}, …, c_n) − f_i(d_1, …, d_{j−1}, c_j, c_{j+1}, …, c_n) ) / (d_j − c_j),  1 ≤ i,j ≤ n,

where f_i, i = 1,2,…,n, are the coordinate functions of F. For each system, one table of results is displayed, where x^(0) is the initial approximation, k is the number of iterations needed to converge ("nc" appears in the table if the method does not converge), the values of the residuals ‖F(x^(k+1))‖ or ‖x^(k+1) − x^(k)‖ at the last iterate are shown ("-" if there is no convergence), and the ACOC [12] is

ACOC = ln( ‖x^(k+1) − x^(k)‖ / ‖x^(k) − x^(k−1)‖ ) / ln( ‖x^(k) − x^(k−1)‖ / ‖x^(k−1) − x^(k−2)‖ ).
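Both the divided difference operator and the ACOC are straightforward to compute from the iterates; a minimal NumPy sketch (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def divided_difference(F, d, c):
    """First-order divided difference [d, c; F]: column j uses f_i at the
    mixed arguments (d_1,...,d_j, c_{j+1},...,c_n) and (d_1,...,d_{j-1}, c_j,...,c_n)."""
    d = np.asarray(d, float)
    c = np.asarray(c, float)
    n = d.size
    M = np.zeros((n, n))
    for j in range(n):
        hi = np.concatenate((d[:j + 1], c[j + 1:]))  # d up to index j, then c
        lo = np.concatenate((d[:j], c[j:]))          # d up to j-1, then c
        M[:, j] = (F(hi) - F(lo)) / (d[j] - c[j])
    return M

def acoc(iterates):
    """Approximated computational order of convergence from the last iterates."""
    e = [np.linalg.norm(iterates[k + 1] - iterates[k])
         for k in range(len(iterates) - 1)]
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])
```

By construction, the columns of [d, c; F] telescope, so the operator satisfies the secant equation M @ (d − c) = F(d) − F(c).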

In relation to the ACOC, it is necessary to determine whether the root has been reached (‖F(x^(k+1))‖ < 10^−200), whether there is only a very slow convergence with no significant difference between the two last iterates (‖x^(k+1) − x^(k)‖ < 10^−200 but ‖F(x^(k+1))‖ > 10^−200), or whether both criteria are satisfied. The nonlinear test systems used are the following:

Example 1. In this example, with the use of PM(1)KEp, we study the heat conduction equation [28] for a non-homogeneous medium: ∂/∂x (k(u) ∂u/∂x) + F(x,t) = cρ ∂u/∂t, with F(x,t) the thermal source density and k, c and ρ the quantitative material properties of the medium: the thermal conductivity coefficient, the specific heat capacity and the density, respectively.

For the sake of simplicity, we take c and ρ as constants and the thermal conductivity coefficient as a linear function of temperature, k(u) = k0 + k1 u with k0 = 1. The thermal sources are distributed equispaced throughout the space-time grid, and their signs have been chosen randomly. After a few transformations and adding some boundary and initial conditions, we state the problem to be studied:

α u_x^2 + (1 + αu) u_xx + f = u_t,  α = k1/(cρ),  f = F/(cρ),
u(x0,t) = 0,  u(xf,t) = sin^2(t),  u(x,0) = sech^2(7x),

where f = f(x,t) is obtained from F(x,t) (the thermal source density) after the transformations leading to the definition of the resulting boundary problem. By applying an implicit finite difference scheme, we transform the heat conduction equation into a family of nonlinear systems to be solved. The estimated solution at each instant tk is obtained from the approximation calculated at tk−1. The space-time mesh is constructed taking the spatial step h = 8/nx and the time step l = Tmax/nt, with nx, nt and Tmax the number of subintervals for the variables x and t and the final instant of the numerical study, respectively.

Taking x0 = −4 and xf = 4, we get the mesh domain [x0,xf]×[0,Tmax] with (nx+1)×(nt+1) equispaced nodes (xi,tk). Denoting by ui,k the estimate of the solution at (xi,tk) and using the approximations of the derivatives

u_t ≈ (u(x,t) − u(x,t−l))/l,  u_x ≈ (u(x+h,t) − u(x,t))/h  and  u_xx ≈ (u(x+h,t) − 2u(x,t) + u(x−h,t))/h^2,

    we construct the nonlinear system with α=1,

l u_{i−1,k} − (h^2 + l(2 + α(u_{i+1,k} − u_{i−1,k}))) u_{i,k} − lα u_{i,k}^2 + l(u_{i+1,k} + α u_{i+1,k}^2) + h^2 l f = −h^2 u_{i,k−1}.
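The discretization above can be sketched as follows, for α = 1 and a null thermal source (f = 0), with the boundary and initial data of the statement; the inner nonlinear system of each time step is solved here with plain Newton and a finite-difference Jacobian (the paper uses the PM(β)KEp family instead, and all names below are assumptions of the sketch):

```python
import numpy as np

alpha, x0, xf, Tmax, nx, nt = 1.0, -4.0, 4.0, 0.5, 20, 10
h, l = (xf - x0) / nx, Tmax / nt
xs = np.linspace(x0, xf, nx + 1)

def residual(v, u_prev, t):
    """Discrete residual at the interior nodes for the unknowns v."""
    u = np.empty(nx + 1)
    u[0], u[-1] = 0.0, np.sin(t) ** 2           # boundary conditions
    u[1:-1] = v
    ux = (u[2:] - u[1:-1]) / h                  # forward difference in space
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h ** 2
    ut = (u[1:-1] - u_prev[1:-1]) / l           # backward difference in time
    return alpha * ux ** 2 + (1 + alpha * u[1:-1]) * uxx - ut

def newton_step(u_prev, t, tol=1e-12, eps=1e-8):
    """Solve the nonlinear system of one time step, seeded at t_{k-1}."""
    v = u_prev[1:-1].copy()
    for _ in range(50):
        Fv = residual(v, u_prev, t)
        if np.linalg.norm(Fv) < tol:
            break
        J = np.zeros((nx - 1, nx - 1))
        for j in range(nx - 1):                 # finite-difference Jacobian
            vp = v.copy()
            vp[j] += eps
            J[:, j] = (residual(vp, u_prev, t) - Fv) / eps
        v -= np.linalg.solve(J, Fv)
    u = np.empty(nx + 1)
    u[0], u[-1] = 0.0, np.sin(t) ** 2
    u[1:-1] = v
    return u

u = 1 / np.cosh(7 * xs) ** 2                    # u(x, 0) = sech^2(7x)
for k in range(1, nt + 1):                      # march over the time grid
    u_old = u
    u = newton_step(u_old, k * l)
print(float(u.max()))
```

Each inner solve takes only a couple of Newton iterations, which matches the low mean iteration counts reported in Table 1.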

The results for different discretizations and thermal sources are shown in Table 1. The mean number of iterations needed to solve the nonlinear system at each time instant is denoted by "iter", and ‖F(x^(k+1))‖ is the residual norm of the last system solved. As the number of iterations needed for each system is very low, the ACOC cannot be calculated.

    Table 1.  Numerical results for the heat conduction equation.
Tmax nx nt iter ‖F(x^(k+1))‖ f | Tmax nx nt iter ‖F(x^(k+1))‖ f
0.5 20 10 1.9091 4.6694×10^−31 0 | 0.5 20 10 1.9091 1.4974×10^−26 2
0.5 20 20 1.4762 6.9684×10^−8 0 | 0.5 20 20 1.9048 4.9125×10^−28 2
1.0 20 10 1.9091 3.1137×10^−25 0 | 1.0 20 10 1.9091 3.0552×10^−20 2
1.0 40 20 1.9524 7.2903×10^−32 0 | 1.0 40 20 1.9524 2.9880×10^−26 2
5.0 20 10 2.1818 3.3624×10^−10 0 | 5.0 20 10 2.1818 2.3446×10^−11 2
5.0 40 20 2.1905 1.3758×10^−12 0 | 5.0 40 20 2.1905 1.3758×10^−12 2
0.5 20 10 1.9091 1.5948×10^−18 5 | 0.5 20 10 1.9091 5.4491×10^−13 10
0.5 20 20 1.9048 3.4320×10^−22 5 | 0.5 20 20 1.9048 5.6519×10^−17 10
1.0 20 10 1.9091 9.9130×10^−17 5 | 1.0 20 10 1.9091 6.5624×10^−13 10
1.0 40 20 1.9524 2.8640×10^−21 5 | 1.0 40 20 1.9524 8.2734×10^−18 10
5.0 20 10 2.1818 1.1617×10^−11 5 | 5.0 20 10 nc nc 10
5.0 40 20 2.2381 1.0439×10^−19 5 | 5.0 40 20 nc nc 10


Figure 11 shows the approximate solutions of the heat equation for different thermal sources. Note the influence of f(x,t) in Figure 11b–d, compared with Figure 11a.

    Figure 11.  Approximated solutions of heat equation for different thermal sources.

Table 1 and Figure 11 show that the scheme is robust enough to reproduce effectively the behavior of the model for different values of the thermal source density.

Example 2. This system has variable size, and its solution is ξ ≈ (0.5149, …, 0.5149)^T. It is defined by

x_i − cos(2x_i − ∑_{j=1}^{4} x_j) = 0,  i = 1,2,…,n,

with n = 5, and the initial estimation is x^(0) = (0.75, …, 0.75)^T. The results appear in Table 2.

    Table 2.  Numerical results of the examined methods for the first nonlinear system.
x^(0) Method(β) k ‖x^(k+1) − x^(k)‖ ‖F(x^(k+1))‖ ACOC
PM(1)KEp 3 53.9112×10^−36 334.579×10^−210 5.7320
(0.75, …, 0.75) PM(2)KEp 4 6.2324×10^−141 0.0 5
N 11 157.9286×10^−162 11.4870×10^−321 2


There, it can be observed that PM(β)KEp converges to the solution in fewer iterations than Newton's method, as we might expect. In the case of PM(2)KEp, the convergence occurs with a null residual ‖F(x^(k+1))‖ (at the fixed machine precision of 2000 digits) or an almost null one for the parameter values involved. In addition, the ACOC is in good accordance with the theoretical order, as we can expect from Theorem 3.1.
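The system of Example 2 is easy to cross-check with plain Newton and a finite-difference Jacobian: for an all-equal vector with entries s, the equations reduce to s = cos(2s), whose root is s ≈ 0.5149 (names below are illustrative; this is a sketch, not the paper's MATLAB code):

```python
import numpy as np

n = 5

def F(x):
    # x_i - cos(2 x_i - sum_{j=1}^{4} x_j) = 0
    return x - np.cos(2 * x - np.sum(x[:4]))

def jac(x, eps=1e-8):
    """Finite-difference Jacobian of F at x."""
    J = np.zeros((n, n))
    Fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - Fx) / eps
    return J

x = np.full(n, 0.75)                 # the seed of Table 2
for _ in range(30):
    x = x - np.linalg.solve(jac(x), F(x))
print(x[0])   # approximately 0.5149
```
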

Example 3. The next system, of size n = 10, is defined as

arctan(x_i) + (1/2)(∑_{j=1}^{n} x_j^2 − x_i^2) = 0,  i = 1,2,…,n,

whose solution is ξ ≈ (0.1758, …, 0.1758)^T, and the seed used is x^(0) = (1.0, …, 1.0)^T. Results are given in Table 3. The PM(β)KEp family converges with null residuals ‖F(x^(k+1))‖ in fewer iterations than Newton's method, and the ACOC is in total agreement with the theoretical order.

    Table 3.  Numerical results of the examined methods for the second nonlinear system.
x^(0) Method(β) k ‖x^(k+1) − x^(k)‖ ‖F(x^(k+1))‖ ACOC
PM(1)KEp 5 20.4411×10^−144 0.0 6
(1.0, ..., 1.0) PM(2)KEp 5 250.020×10^−111 0.0 5
N 12 168.3834×10^−108 163.4591×10^−213 2


Example 4. Finally, we test the PM(β)KEp family with a nonlinear system of variable size, described as

x_i + (∑_{j=1}^{n} x_j) x_i e^{x_i} = 0,  i = 1,2,…,n,

with n = 20. The initial estimation is x^(0) = (0.25, …, 0.25)^T, and the solution is ξ = (0.0, …, 0.0)^T. The results are given in Table 4. They are similar to those obtained in Example 2: fewer iterations than Newton's method, null (at the fixed machine precision of 2000 digits) or almost null residuals ‖F(x^(k+1))‖, and a computational approximation order of convergence in good accordance with the theoretical one.

    Table 4.  Numerical results of the examined methods for the third nonlinear system.
x^(0) Method(β) k ‖x^(k+1) − x^(k)‖ ‖F(x^(k+1))‖ ρ
PM(1)KEp 3 16.2213×10^−51 5.8210×10^−306 6.0643
(0.25, ..., 0.25) PM(2)KEp 4 36.0237×10^−192 0.0 5.0781
N 11 174.2412×10^−201 0.0 2

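The system of Example 4 can also be checked with plain Newton, under the reading x_i + (∑_{j=1}^{n} x_j) x_i e^{x_i} = 0 (an assumption of this sketch about the flattened formula): x = 0 is clearly a solution, and the iteration from x^(0) = (0.25, …, 0.25)^T reaches it, consistently with Table 4:

```python
import numpy as np

n = 20

def F(x):
    # Assumed reading: x_i + (sum_j x_j) x_i e^{x_i} = 0, so F(0) = 0.
    return x + np.sum(x) * x * np.exp(x)

def jac(x, eps=1e-8):
    """Finite-difference Jacobian of F at x."""
    J = np.zeros((n, n))
    Fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - Fx) / eps
    return J

x = np.full(n, 0.25)                 # the seed of Table 4
for _ in range(20):
    x = x - np.linalg.solve(jac(x), F(x))
print(np.linalg.norm(x))             # essentially zero
```

On the symmetric subspace the iteration reduces to scalar Newton on g(s) = s + n s² e^s, which needs about ten steps to reach machine precision, in line with the k = 11 reported for Newton's method.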

In this manuscript, a class of iterative schemes with sixth-order convergence has been designed. Its construction uses the Ermakov-Kalitkin approach, which widens the set of converging initial guesses. The stability analysis of the resulting class has identified very stable members, which have been used in the numerical section to solve real-life and academic problems.

This research was partially supported by Grant PGC2018-095896-B-C22, funded by MCIN/AEI/10.13039/ "ERDF A way of making Europe", European Union.

The authors declare that there is no conflict of interest regarding the publication of this manuscript.



    [1] R. H. Al-Obaidi, M. T. Darvishi, A comparative study on qualification criteria of nonlinear solvers with introducing some new ones, J. Math., 2022 (2022), 1–20. https://doi.org/10.1155/2022/4327913 doi: 10.1155/2022/4327913
    [2] S. Artidiello, A. Cordero, J. R. Torregrosa, M. P. Vassileva, Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure, Appl. Math. Comput., 268 (2015), 1064–1071. https://doi.org/10.1016/j.amc.2015.07.024 doi: 10.1016/j.amc.2015.07.024
    [3] R. Behl, I. K. Argyros, F. O. Mallawi, Some high-order convergent iterative procedures for nonlinear systems with local convergence, Mathematics, 9 (2021), 1–13. https://doi.org/10.3390/math9121375 doi: 10.3390/math9121375
    [4] R. Behl, S. Bhalla, A. A. Magreñán, S. Kumar, An efficient high order iterative scheme for large nonlinear systems with dynamics, J. Comput. Appl. Math., 404 (2022), 113249. https://doi.org/10.1016/j.cam.2020.113249 doi: 10.1016/j.cam.2020.113249
    [5] D. A. Budzko, A. Cordero, J. R. Torregrosa, New family of iterative methods based on the Ermakov-Kalitkin scheme for solving nonlinear systems of equations, Comput. Math. Math. Phys., 55 (2015), 1947–1959. https://doi.org/10.1134/S0965542515120040 doi: 10.1134/S0965542515120040
    [6] B. Campos, P. Vindel, Dynamics of subfamilies of Ostrowski-Chun methods, Math. Comput. Simul., 181 (2021), 57–81. https://doi.org/10.1016/j.matcom.2020.09.018 doi: 10.1016/j.matcom.2020.09.018
    [7] A. Cordero, I. Giménez-Palacios, J. R. Torregrosa, Avoiding strange attractors in efficient parametric families of iterative methods for solving nonlinear problems, Appl. Numer. Math., 137 (2019), 1–18. https://doi.org/10.1016/j.apnum.2018.12.006 doi: 10.1016/j.apnum.2018.12.006
    [8] A. Cordero, J. M. Gutiérrez, A. A. Magreñán, J. R. Torregrosa, Stability analysis of a parametric family of iterative methods for solving nonlinear models, Appl. Math. Comput., 285 (2016), 26–40. https://doi.org/10.1016/j.amc.2016.03.021 doi: 10.1016/j.amc.2016.03.021
    [9] A. Cordero, C. Jordán, E. Sanabria-Codesal, J. R. Torregrosa, Design, convergence and stability of a fourth-order class of iterative methods for solving nonlinear vectorial problems, Fractal Fract., 5 (2021), 1–16. https://doi.org/10.3390/fractalfract5030125 doi: 10.3390/fractalfract5030125
    [10] A. Cordero, J. G. Maimó, J. R. Torregrosa, M. P. Vassileva, Solving nonlinear problems by Ostrowski-Chun type parametric families, J. Math. Chem., 53 (2015), 430–449. https://doi.org/10.1007/s10910-014-0432-z doi: 10.1007/s10910-014-0432-z
    [11] A. Cordero, F. Soleymani, J. R. Torregrosa, Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput., 244 (2014), 398–412. https://doi.org/10.1016/j.amc.2014.07.010
    [12] A. Cordero, J. R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput., 190 (2007), 686–698. https://doi.org/10.1016/j.amc.2007.01.062 doi: 10.1016/j.amc.2007.01.062
    [13] A. Cordero, J. R. Torregrosa, On interpolation variants of Newton's method for functions of several variables, J. Comput. Appl. Math., 234 (2010), 34–43. https://doi.org/10.1016/j.cam.2009.12.002 doi: 10.1016/j.cam.2009.12.002
    [14] A. Cordero, J. R. Torregrosa, M. P. Vassileva, Pseudo-composition: a technique to design predictor-corrector methods for systems of nonlinear equations, Appl. Math. Comput., 218 (2012), 11496–11504. https://doi.org/10.1016/j.amc.2012.04.081 doi: 10.1016/j.amc.2012.04.081
    [15] A. Cordero, J. R. Torregrosa, M. P. Vassileva, Design, analysis, and applications of iterative methods for solving nonlinear systems, In: Nonlinear systems: design, analysis, estimation and control, Rijeka, Croatia: IntechOpen, 2016.
    [16] V. V. Ermakov, N. N. Kalitkin, The optimal step and regularization for Newton's method, Comput. Math. Math. Phys., 21 (1981), 235–242. https://doi.org/10.1016/0041-5553(81)90022-7 doi: 10.1016/0041-5553(81)90022-7
    [17] M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput., 149 (2004), 771–782. https://doi.org/10.1016/S0096-3003(03)00178-4 doi: 10.1016/S0096-3003(03)00178-4
    [18] J. Gerlach, Accelerated convergence in Newton's method, SIAM Rev., 36 (1994), 272–276. https://doi.org/10.1137/1036057 doi: 10.1137/1036057
    [19] Y. H. Geum, Y. I. Kim, Á. A. Magreñán, A biparametric extension of King's fourth-order methods and their dynamics, Appl. Math. Comput., 282 (2016), 254–275. https://doi.org/10.1016/j.amc.2016.02.020 doi: 10.1016/j.amc.2016.02.020
    [20] S. P. He, H. Y. Fang, M. G. Zhang, F. Liu, Z. T. Ding, Adaptive optimal control for a class of nonlinear systems: the online policy iteration approach, IEEE Trans. Neural Netw. Learn. Syst., 31 (2020), 549–558. https://doi.org/10.1109/TNNLS.2019.2905715 doi: 10.1109/TNNLS.2019.2905715
    [21] M. Kansal, A. Cordero, S. Bhalla, J. R. Torregrosa, New fourth- and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis, Numer. Algorithms, 87 (2021), 1017–1060. https://doi.org/10.1007/s11075-020-00997-4 doi: 10.1007/s11075-020-00997-4
    [22] J. M. Ortega, W. C. Rheinboldt, Iterative solution of nonlinear equations in several variables, Academic Press, 1970.
    [23] A. M. Ostrowski, Solutions of equations and systems of equations, Academic Press, 1966.
    [24] A. Y. Özban, Some new variants of Newton's method, Appl. Math. Lett., 17 (2004), 677–682. https://doi.org/10.1016/S0893-9659(04)90104-8 doi: 10.1016/S0893-9659(04)90104-8
    [25] R. Sharma, J. R. Sharma, N. Kalra, A modifed Newton-Özban composition for solving nonlinear systems, Int. J. Comput. Methods, 17 (2020), 1950047. https://doi.org/10.1142/S0219876219500476 doi: 10.1142/S0219876219500476
    [26] H. Singh, J. R. Sharma, S. Kumar, A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models, Numer. Algorithms, 2022. https://doi.org/10.1007/s11075-022-01412-w
    [27] O. S. Solaiman, I. Hashim, An iterative scheme of arbitrary odd order and its basins of attraction for nonlinear systems, Comput. Mater. Contin., 66 (2021), 1427–1444. https://doi.org/10.32604/cmc.2020.012610 doi: 10.32604/cmc.2020.012610
    [28] A. Tijonov, A. Samarsky, Ecuaciones de la Física Matemática, Moscow: MIR, 1972.
    [29] J. F. Traub, Iterative methods for the solution of equations, Englewood Cliffs: Prentice-Hall, 1964.
    [30] X. F. Wang, T. Zhang, W. Y. Qian, M. Y. Teng, Seventh-order derivative-free iterative method for solving nonlinear systems, Numer. Algorithms, 70 (2015), 545–558. https://doi.org/10.1007/s11075-015-9960-2 doi: 10.1007/s11075-015-9960-2
    [31] S. Weerakoon, T. G. I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 13 (2000), 87–93. https://doi.org/10.1016/S0893-9659(00)00100-2 doi: 10.1016/S0893-9659(00)00100-2
    [32] X. Y. Xiao, H. W. Yin, Accelerating the convergence speed of iterative methods for solving nonlinear systems, Appl. Math. Comput., 333 (2018), 8–19. https://doi.org/10.1016/j.amc.2018.03.108 doi: 10.1016/j.amc.2018.03.108
    [33] T. Zhanlav, I. V. Puzynin, The convergence of iterations based on a continuous analogue of Newton's method, Comput. Math. Math. Phys., 32 (1992), 729–737.
    [34] T. Zhanlav, C. Chun, K. Otgondorj, V. Ulziibayar, High-order iterations for systems of nonlinear equations, Int. J. Comput. Math., 97 (2020), 1704–1724. https://doi.org/10.1080/00207160.2019.1652739 doi: 10.1080/00207160.2019.1652739
  • This article has been cited by:

    1. Chein-Shan Liu, Essam R. El-Zahar, Chih-Wen Chang, Optimal Combination of the Splitting–Linearizing Method to SSOR and SAOR for Solving the System of Nonlinear Equations, 2024, 12, 2227-7390, 1808, 10.3390/math12121808
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
