Research article

On the convergence of a new fourth-order method for finding a zero of a derivative

  • Received: 04 January 2024 Revised: 05 February 2024 Accepted: 19 February 2024 Published: 15 March 2024
  • MSC : 65B99, 65H05

  • Based on Wang's method, a new fourth-order method for finding a zero of a derivative is presented. Under the hypotheses that the third- and fourth-order derivatives of the nonlinear function are bounded, the local convergence of the new method is studied, and the error estimate, the order of convergence, and the uniqueness of the solution are discussed. In particular, Herzberger's matrix method is used to show that the convergence order of the new method is four. Numerical illustrations comparing the new method with Wang's method and another method of the same order show that the new method attains its theoretical order of convergence and higher accuracy.

    Citation: Dongdong Ruan, Xiaofeng Wang. On the convergence of a new fourth-order method for finding a zero of a derivative[J]. AIMS Mathematics, 2024, 9(4): 10353-10362. doi: 10.3934/math.2024506




    Some of the topics in geometric function theory are based on q-calculus operator and differential subordinations. Ismail et al. defined the class of q-starlike functions in 1990 [1], presenting the first uses of q-calculus in geometric function theory. Several authors focused on the q-analogue of the Ruscheweyh differential operators established in [2] and the q-analogue of the Sălăgean differential operators defined in [3]. Examples include the investigation of differential subordinations using a specific q-Ruscheweyh type derivative operator in [4].

    In what follows, we recall the main concepts used in this research.

    We denote by $H$ the class of analytic functions in the open unit disc $U:=\{\xi\in\mathbb{C}:|\xi|<1\}$. Also, $H[a,n]$ denotes the subclass of $H$ containing the functions $f\in H$ of the form

    $$f(\xi)=a+a_{n}\xi^{n}+a_{n+1}\xi^{n+1}+\cdots,\qquad \xi\in U.$$

    Another well-known subclass of $H$ is the class $A(n)$, which consists of functions $f\in H$ of the form

    $$f(\xi)=\xi+\sum_{\kappa=n+1}^{\infty}a_{\kappa}\xi^{\kappa},\qquad \xi\in U, \tag{1.1}$$

    with $n\in\mathbb{N}=\{1,2,\ldots\}$, and $A=A(1)$.

    The subclass $K$, defined by

    $$K=\left\{f\in A:\ \mathrm{Re}\left(\frac{\xi f''(\xi)}{f'(\xi)}+1\right)>0,\ f(0)=0,\ f'(0)=1,\ \xi\in U\right\},$$

    is the class of convex functions in the unit disc $U$.

    For two functions $f,L\in A(n)$, with $f$ given by (1.1) and $L$ given by

    $$L(\xi)=\xi+\sum_{\kappa=n+1}^{\infty}b_{\kappa}\xi^{\kappa},\qquad \xi\in U,$$

    the well-known convolution (Hadamard) product $*$ is defined by

    $$(f*L)(\xi):=\xi+\sum_{\kappa=n+1}^{\infty}a_{\kappa}b_{\kappa}\xi^{\kappa},\qquad \xi\in U.$$
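    As an illustration, the Hadamard product acts on coefficient sequences term by term. The following small Python sketch (not from the paper; the helper names are ours) multiplies the coefficient tails of two normalized series:

```python
# Illustrative sketch: the Hadamard (convolution) product of two
# normalized series xi + sum a_k xi^k multiplies coefficients termwise.

def hadamard(a, b):
    """Coefficient-wise product of two coefficient tails.

    a[k], b[k] hold the coefficients of xi^(k+2), i.e. the tails of
    normalized series starting at xi^2.
    """
    return [x * y for x, y in zip(a, b)]

def evaluate(coeffs, xi):
    """Evaluate xi + sum_{k>=2} c_k xi^k for a coefficient tail."""
    return xi + sum(c * xi ** (k + 2) for k, c in enumerate(coeffs))

a = [3.0, 0.5]          # f(xi) = xi + 3 xi^2 + 0.5 xi^3
b = [2.0, 4.0]          # L(xi) = xi + 2 xi^2 + 4 xi^3
c = hadamard(a, b)      # (f * L)(xi) = xi + 6 xi^2 + 2 xi^3
```

    Note that the identity element for $*$ on $A$ is $\xi/(1-\xi)$, whose coefficients are all 1, so convolution with it leaves every coefficient unchanged.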

    In particular [5,6], Jackson's q-difference operator $d_{q}\colon A\to A$ is defined by

    $$d_{q}f(\xi):=\begin{cases}\dfrac{f(\xi)-f(q\xi)}{(1-q)\xi}, & \xi\neq 0,\\[4pt] f'(0), & \xi=0,\end{cases}\qquad 0<q<1. \tag{1.2}$$

    For $\kappa\in\mathbb{N}=\{1,2,3,\ldots\}$, we have

    $$d_{q}\left\{\sum_{\kappa=1}^{\infty}a_{\kappa}\xi^{\kappa}\right\}=\sum_{\kappa=1}^{\infty}[\kappa]_{q}a_{\kappa}\xi^{\kappa-1}, \tag{1.3}$$

    where

    $$[\kappa]_{q}=\frac{1-q^{\kappa}}{1-q}=1+\sum_{n=1}^{\kappa-1}q^{n},\qquad \lim_{q\to 1^{-}}[\kappa]_{q}=\kappa,\qquad [\kappa]_{q}!=\begin{cases}\prod_{n=1}^{\kappa}[n]_{q}, & \kappa\in\mathbb{N},\\ 1, & \kappa=0.\end{cases} \tag{1.4}$$
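    The q-bracket and q-factorial in (1.4) are easy to compute numerically. The sketch below (an illustration, assuming integer $\kappa$; the function names are ours) checks that $[\kappa]_q$ equals the geometric sum $1+q+\cdots+q^{\kappa-1}$ and that $[\kappa]_q\to\kappa$ as $q\to1^-$:

```python
# Numeric sketch of the q-bracket and q-factorial from (1.4).

def q_bracket(kappa, q):
    """[kappa]_q = (1 - q**kappa) / (1 - q) for 0 < q < 1."""
    return (1 - q ** kappa) / (1 - q)

def q_factorial(kappa, q):
    """[kappa]_q! = prod_{n=1}^{kappa} [n]_q, with [0]_q! = 1."""
    result = 1.0
    for n in range(1, kappa + 1):
        result *= q_bracket(n, q)
    return result

q = 0.5
# [kappa]_q equals the geometric sum 1 + q + ... + q^(kappa-1):
assert abs(q_bracket(4, q) - sum(q ** n for n in range(4))) < 1e-12
# As q -> 1^-, [kappa]_q -> kappa:
assert abs(q_bracket(4, 0.999999) - 4) < 1e-4
```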

    In [7], Aouf and Madian investigated the q-analogue Cătaş operator $I_{q}^{s}(\lambda,\ell)\colon A\to A$ ($s\in\mathbb{N}_{0}$, $\ell,\lambda\geq 0$, $0<q<1$) given by

    $$I_{q}^{s}(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_{q}+\lambda([\kappa+\ell]_{q}-[1+\ell]_{q})}{[1+\ell]_{q}}\right)^{s}a_{\kappa}\xi^{\kappa}.$$

    Also, the q-Ruscheweyh operator $R_{q}^{\mu}f(\xi)$ was investigated in 2014 by Aldweby and Darus [8]:

    $$R_{q}^{\mu}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa},\qquad \mu\geq 0,\ 0<q<1,$$

    where $[a]_{q}$ and $[a]_{q}!$ are defined in (1.4).

    Let

    $$f_{q,\lambda,\ell}^{s}(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_{q}+\lambda([\kappa+\ell]_{q}-[1+\ell]_{q})}{[1+\ell]_{q}}\right)^{s}\xi^{\kappa}.$$

    Now we define a new function $f_{q,\lambda,\ell}^{s,\mu}(\xi)$ in terms of the Hadamard product (or convolution) such that

    $$f_{q,\lambda,\ell}^{s}(\xi)*f_{q,\lambda,\ell}^{s,\mu}(\xi)=\xi+\sum_{\kappa=2}^{\infty}\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}\xi^{\kappa}.$$

    Next, motivated by the q-Ruscheweyh operator and the q-Cătaş operator, we introduce the operator $I_{q,\mu}^{s}(\lambda,\ell)\colon A\to A$ defined by

    $$I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=f_{q,\lambda,\ell}^{s,\mu}(\xi)*f(\xi), \tag{1.5}$$

    where $s\in\mathbb{N}_{0}$, $\ell,\lambda,\mu\geq 0$, and $0<q<1$. For $f\in A$, (1.5) yields

    $$I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa}, \tag{1.6}$$

    where

    $$\psi_{q}^{s}(\kappa,\lambda,\ell)=\left(\frac{[1+\ell]_{q}}{[1+\ell]_{q}+\lambda([\kappa+\ell]_{q}-[1+\ell]_{q})}\right)^{s}.$$
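    To make the shape of the coefficients in (1.6) concrete, here is an illustrative numeric sketch (the function names are ours, and $\mu$ is assumed to be a nonnegative integer so the q-factorials are defined); for $s=0$ and $\mu=0$ the operator reduces to the identity, so every coefficient multiplier equals 1:

```python
# Sketch of the coefficient multiplier in (1.6): psi_q^s(kappa, lambda, ell)
# times the q-factorial ratio. Assumes mu is a nonnegative integer.

def q_bracket(k, q):
    return (1 - q ** k) / (1 - q)

def q_factorial(k, q):
    out = 1.0
    for n in range(1, k + 1):
        out *= q_bracket(n, q)
    return out

def psi(s, kappa, lam, ell, q):
    """psi_q^s(kappa, lambda, ell) from (1.6)."""
    top = q_bracket(1 + ell, q)
    bottom = top + lam * (q_bracket(kappa + ell, q) - top)
    return (top / bottom) ** s

def coefficient(s, kappa, lam, ell, mu, q):
    """Multiplier of a_kappa xi^kappa under I_{q,mu}^s(lambda, ell)."""
    ratio = q_factorial(kappa + mu - 1, q) / (q_factorial(mu, q) * q_factorial(kappa - 1, q))
    return psi(s, kappa, lam, ell, q) * ratio

# s = 0 and mu = 0 leave the coefficients of f unchanged:
assert abs(coefficient(0, 5, 2.0, 1, 0, 0.5) - 1.0) < 1e-12
```

    Similarly, setting $\lambda=0$ makes $\psi_q^s$ identically 1, which matches the special cases listed below.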

    We observe that:

    (i) If $s=0$ and $q\to 1^{-}$, we get the Ruscheweyh differential operator $R^{\mu}f(\xi)$ [9], investigated by numerous authors [10,11,12].

    (ii) If we set $q\to 1^{-}$, we obtain the operator $I^{m}(\lambda,\ell,\mu)f(\xi)$ presented by Aouf and El-Ashwah [13].

    (iii) If we set $\mu=0$ and $q\to 1^{-}$, we obtain the operator $J_{p}^{m}(\lambda,\ell)f(\xi)$ presented by El-Ashwah and Aouf (with $p=1$) [14].

    (iv) If $\mu=0$, $\ell=\lambda=1$, and $q\to 1^{-}$, we get the operator $I^{\alpha}f(\xi)$ investigated by Jung et al. [15].

    (v) If $\mu=0$, $\lambda=1$, $\ell=0$, and $q\to 1^{-}$, we obtain the operator $I^{s}f(\xi)$ presented by Sălăgean [16].

    (vi) If we set $\mu=0$ and $\lambda=1$, we obtain the operator $I_{q,\ell}^{s}f(\xi)$ presented by Shah and Noor [17].

    (vii) If we set $\mu=0$, $\lambda=1$, and $q\to 1^{-}$, we obtain the q-Srivastava-Attiya operator $J_{q}^{s}$; see [18,19].

    (viii) $I_{q,0}^{1}(1,0)f(\xi)=\int_{0}^{\xi}\frac{f(t)}{t}\,d_{q}t$ (the q-Alexander operator [17]).

    (ix) $I_{q,0}^{1}(1,\rho)f(\xi)=\frac{[1+\rho]_{q}}{\xi^{\rho}}\int_{0}^{\xi}t^{\rho-1}f(t)\,d_{q}t$ (the q-Bernardi operator [20]).

    (x) $I_{q,0}^{1}(1,1)f(\xi)=\frac{[2]_{q}}{\xi}\int_{0}^{\xi}f(t)\,d_{q}t$ (the q-Libera operator [20]).

    Moreover, we have

    (i) $I_{q,\mu}^{s}(1,0)f(\xi)=I_{q,\mu}^{s}f(\xi)$:

    $$f\in A:\quad I_{q,\mu}^{s}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{1}{[\kappa]_{q}}\right)^{s}\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa},\qquad s\in\mathbb{N}_{0},\ \mu\geq 0,\ 0<q<1,\ \xi\in U.$$

    (ii) $I_{q,\mu}^{s}(1,\ell)f(\xi)=I_{q,\mu}^{s,\ell}f(\xi)$:

    $$f\in A:\quad I_{q,\mu}^{s,\ell}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_{q}}{[\kappa+\ell]_{q}}\right)^{s}\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa},\qquad s\in\mathbb{N}_{0},\ \ell>0,\ \mu\geq 0,\ 0<q<1,\ \xi\in U.$$

    (iii) $I_{q,\mu}^{s}(\lambda,0)f(\xi)=I_{q,\mu}^{s,\lambda}f(\xi)$:

    $$f\in A:\quad I_{q,\mu}^{s,\lambda}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{1}{1+\lambda([\kappa]_{q}-1)}\right)^{s}\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa},\qquad s\in\mathbb{N}_{0},\ \lambda>0,\ \mu\geq 0,\ 0<q<1,\ \xi\in U.$$

    Since the investigation of q-difference equations using function theory tools explores various properties, this direction has been considered in many works. Thus, several authors have recently used q-calculus based linear extended operators to investigate the theories of differential subordination and superordination (see [21,22,23,24,25,26,27,28,29,30,31,32]). Applied problems involving q-difference equations and q-analogues of mathematical physics are also studied extensively: dynamical systems, the q-oscillator, q-classical and quantum models, q-analogues of heat and wave equations, and the sampling theory of signal analysis [33,34].

    We denote by Φ the class of analytic univalent functions φ(ξ), which are convex functions with φ(0)=1 and  Reφ(ξ)>0 in U.

    The differential subordination theory, studied by Miller and Mocanu [35], is based on the following definitions:

    f is subordinate to L in U, written f≺L, if there exists an analytic function ϖ with ϖ(0)=0 and |ϖ(ξ)|<1 for all ξ∈U such that f(ξ)=L(ϖ(ξ)). Moreover, if L is univalent in U, we have:

    f(ξ)≺L(ξ) ⟺ f(0)=L(0) and f(U)⊂L(U).

    Let Φ(r,s,t;ξ): C³×U→C and let h be univalent in U. An analytic function λ in U that satisfies the differential subordination

    $$\Phi(\lambda(\xi),\xi\lambda'(\xi),\xi^{2}\lambda''(\xi);\xi)\prec h(\xi) \tag{1.7}$$

    is called a solution of (1.7). We call V a dominant of the solutions of the differential subordination (1.7) if λ(ξ)≺V(ξ) for all λ satisfying (1.7). A dominant ˜V is called the best dominant of (1.7) if ˜V(ξ)≺V(ξ) for all dominants V.

    The following definitions characterize the theory of differential superordination that Miller and Mocanu introduced in 2003 [36]:

    f is superordinate to L, written L≺f, if there exists an analytic function ϖ with ϖ(0)=0 and |ϖ(ξ)|<1 for all ξ∈U such that L(ξ)=f(ϖ(ξ)). For a univalent function f, we have

    L(ξ)≺f(ξ) ⟺ f(0)=L(0) and L(U)⊂f(U).

    Let Φ(r,s;ξ): C²×U→C and let h be analytic in U. If λ and Φ(λ(ξ),ξλ′(ξ);ξ) are univalent in U and satisfy the differential superordination

    $$h(\xi)\prec\Phi(\lambda(\xi),\xi\lambda'(\xi);\xi), \tag{1.8}$$

    then λ is called a solution of the differential superordination (1.8). We call a function V a subordinant of the solutions of (1.8) if V(ξ)≺λ(ξ) for all λ satisfying (1.8). A subordinant ˜V is called the best subordinant of (1.8) if V(ξ)≺˜V(ξ) for all subordinants V.

    Let Q denote the collection of functions χ that are analytic and injective on $\overline{U}\setminus E(\chi)$, with χ′(ς)≠0 for ς∈∂U∖E(χ), where

    $$E(\chi)=\left\{\varsigma\in\partial U:\ \lim_{\xi\to\varsigma}\chi(\xi)=\infty\right\}.$$

    Also, Q(a) is the subclass of Q with χ(0)=a.

    The proofs of our main results in the upcoming sections rely on the following lemmas:

    Lemma 1.1. (Miller and Mocanu [35]). Suppose g is convex in U, and

    $$h(\xi)=g(\xi)+n\gamma\,\xi g'(\xi),$$

    with ξ∈U, n a positive integer, and γ>0. If

    $$p(\xi)=g(0)+p_{n}\xi^{n}+p_{n+1}\xi^{n+1}+\cdots,\qquad \xi\in U,$$

    is analytic in U and

    $$p(\xi)+\gamma\,\xi p'(\xi)\prec h(\xi),\qquad \xi\in U,$$

    holds, then

    $$p(\xi)\prec g(\xi)$$

    holds as well.

    Lemma 1.2. (Hallenbeck and Ruscheweyh [37]; see also Miller and Mocanu [38, Th. 3.1.b, p. 71]). Let h be convex with h(0)=a, and let $\gamma\in\mathbb{C}\setminus\{0\}$ with Re(γ)≥0. If p∈H[a,n] and

    $$p(\xi)+\frac{\xi p'(\xi)}{\gamma}\prec h(\xi),\qquad \xi\in U,$$

    holds, then

    $$p(\xi)\prec g(\xi)\prec h(\xi),\qquad \xi\in U,$$

    holds for

    $$g(\xi)=\frac{\gamma}{n\,\xi^{\gamma/n}}\int_{0}^{\xi}h(t)\,t^{(\gamma/n)-1}\,dt,\qquad \xi\in U.$$

    Lemma 1.3. (Miller and Mocanu [35]) Let h be convex with h(0)=a, and let $\gamma\in\mathbb{C}$ with Re(γ)≥0. If $p\in Q\cap H[a,n]$, $p(\xi)+\frac{\xi p'(\xi)}{\gamma}$ is univalent in U, and

    $$h(\xi)\prec p(\xi)+\frac{\xi p'(\xi)}{\gamma},\qquad \xi\in U,$$

    holds, then

    $$g(\xi)\prec p(\xi),\qquad \xi\in U,$$

    holds as well, where $g(\xi)=\frac{\gamma}{n\,\xi^{\gamma/n}}\int_{0}^{\xi}h(t)\,t^{(\gamma/n)-1}\,dt$, ξ∈U, is the best subordinant.

    Lemma 1.4. (Miller and Mocanu [35]) Let g be convex in U, and

    $$h(\xi)=g(\xi)+\frac{\xi g'(\xi)}{\gamma},\qquad \xi\in U,$$

    with $\gamma\in\mathbb{C}$, Re(γ)≥0. If $p\in Q\cap H[a,n]$, $p(\xi)+\frac{\xi p'(\xi)}{\gamma}$ is univalent in U, and

    $$g(\xi)+\frac{\xi g'(\xi)}{\gamma}\prec p(\xi)+\frac{\xi p'(\xi)}{\gamma},\qquad \xi\in U,$$

    holds, then

    $$g(\xi)\prec p(\xi),\qquad \xi\in U,$$

    holds as well, where $g(\xi)=\frac{\gamma}{n\,\xi^{\gamma/n}}\int_{0}^{\xi}h(t)\,t^{(\gamma/n)-1}\,dt$, ξ∈U, is the best subordinant.

    For complex parameters $\acute{a}$, $\varrho$, and $\acute{c}$ with $\acute{c}\notin\mathbb{Z}_{0}^{-}:=\{0,-1,-2,\ldots\}$, consider the Gaussian hypergeometric function

    $${}_{2}F_{1}(\acute{a},\varrho;\acute{c};\xi)=1+\frac{\acute{a}\,\varrho}{\acute{c}}\cdot\frac{\xi}{1!}+\frac{\acute{a}(\acute{a}+1)\varrho(\varrho+1)}{\acute{c}(\acute{c}+1)}\cdot\frac{\xi^{2}}{2!}+\cdots.$$

    For ξ∈U, the series converges absolutely to an analytic function in U (see [39, Chapter 14]).

    Lemma 1.5. [39] For complex parameters $\acute{a}$, $\varrho$, and $\acute{c}$ ($\acute{c}\notin\mathbb{Z}_{0}^{-}$):

    $$\int_{0}^{1}t^{\varrho-1}(1-t)^{\acute{c}-\varrho-1}(1-\xi t)^{-\acute{a}}\,dt=\frac{\Gamma(\varrho)\Gamma(\acute{c}-\varrho)}{\Gamma(\acute{c})}\,{}_{2}F_{1}(\acute{a},\varrho;\acute{c};\xi)\qquad(\mathrm{Re}(\acute{c})>\mathrm{Re}(\varrho)>0);$$
    $${}_{2}F_{1}(\acute{a},\varrho;\acute{c};\xi)={}_{2}F_{1}(\varrho,\acute{a};\acute{c};\xi);$$
    $${}_{2}F_{1}(\acute{a},\varrho;\acute{c};\xi)=(1-\xi)^{-\acute{a}}\,{}_{2}F_{1}\!\left(\acute{a},\acute{c}-\varrho;\acute{c};\frac{\xi}{\xi-1}\right);$$
    $${}_{2}F_{1}\!\left(1,1;2;\frac{\acute{a}\xi}{\acute{a}\xi+1}\right)=\frac{(1+\acute{a}\xi)\ln(1+\acute{a}\xi)}{\acute{a}\xi};$$
    $${}_{2}F_{1}\!\left(1,1;3;\frac{\acute{a}\xi}{\acute{a}\xi+1}\right)=\frac{2(1+\acute{a}\xi)}{\acute{a}\xi}\left(1-\frac{\ln(1+\acute{a}\xi)}{\acute{a}\xi}\right).$$
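    The fourth identity of Lemma 1.5 can be checked numerically from the defining series. The sketch below is illustrative only; `hyp2f1` here is a truncated partial sum of the Gaussian series, valid for |x|<1, and the sample value of $\acute{a}\xi$ is arbitrary:

```python
import math

# Truncated partial sum of the Gaussian hypergeometric series, |x| < 1.
def hyp2f1(a, b, c, x, terms=200):
    total, coeff = 0.0, 1.0  # coeff holds (a)_n (b)_n / ((c)_n n!) x^n
    for n in range(terms):
        total += coeff
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

a_xi = 0.7  # stands for the product a*xi, chosen inside the radius
lhs = hyp2f1(1, 1, 2, a_xi / (a_xi + 1))
rhs = (1 + a_xi) * math.log(1 + a_xi) / a_xi
assert abs(lhs - rhs) < 1e-12  # identity (iv) of Lemma 1.5
```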

    In this paper, a q-multiplier-Ruscheweyh operator is used to introduce a new convex subclass of normalized analytic functions in the open unit disc U, which is then examined in more detail by means of differential subordination and superordination theory.

    The q-multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)f(\xi)$ given in (1.6) is applied to define the new class of normalized analytic functions in the open unit disc U.

    Definition 2.1. Let α∈[0,1). The class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ consists of the functions f∈A satisfying

    $$\mathrm{Re}\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'>\alpha,\qquad \xi\in U. \tag{2.1}$$

    We use the following notation:

    (i) $S_{q,\mu}^{s}(\lambda,\ell;0)=S_{q,\mu}^{s}(\lambda,\ell)$.

    (ii) $S_{q,0}^{0}(\lambda,\ell;\alpha)=S(\alpha)$ (Re f′(ξ)>α); see Ding et al. [40].

    (iii) $S_{q,0}^{0}(\lambda,\ell;0)=S$ (Re f′(ξ)>0); see MacGregor [41].

    The first result concerning the class Ssq,μ(λ,;α) establishes its convexity.

    Theorem 2.1. The class Ssq,μ(λ,;α) is closed under convex combination.

    Proof. Consider

    $$f_{j}(\xi)=\xi+\sum_{\kappa=2}^{\infty}a_{j\kappa}\xi^{\kappa},\qquad \xi\in U,\ j=1,2,$$

    be in the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$. It suffices to demonstrate that

    $$f(\xi)=\eta f_{1}(\xi)+(1-\eta)f_{2}(\xi)$$

    belongs to the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, where 0≤η≤1.

    f is given by

    $$f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\right)\xi^{\kappa},\qquad \xi\in U,$$

    and

    $$I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}\left(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\right)\xi^{\kappa}. \tag{2.2}$$

    Differentiating (2.2), we have

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'=1+\sum_{\kappa=2}^{\infty}\kappa\,\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}\left(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\right)\xi^{\kappa-1}.$$

    Hence

    $$\mathrm{Re}\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'=1+\mathrm{Re}\left(\eta\sum_{\kappa=2}^{\infty}\kappa\,\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{1\kappa}\xi^{\kappa-1}\right)+\mathrm{Re}\left((1-\eta)\sum_{\kappa=2}^{\infty}\kappa\,\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{2\kappa}\xi^{\kappa-1}\right). \tag{2.3}$$

    Taking into account that $f_{1},f_{2}\in S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, we can write

    $$\mathrm{Re}\left(\eta\sum_{\kappa=2}^{\infty}\kappa\,\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{j\kappa}\xi^{\kappa-1}\right)>\eta(\alpha-1),\qquad j=1,2. \tag{2.4}$$

    Using relation (2.4), we get from relation (2.3):

    $$\mathrm{Re}\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'>1+\eta(\alpha-1)+(1-\eta)(\alpha-1)=\alpha.$$

    This demonstrates that the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ is convex.

    Next, we study differential subordinations involving the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, the q-multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$, and convex functions.

    Theorem 2.2. Let g be a convex function, and define

    $$h(\xi)=g(\xi)+\frac{\xi g'(\xi)}{a+2},\qquad a>0,\ \xi\in U. \tag{2.5}$$

    For $f\in S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, consider

    $$F(\xi)=\frac{a+2}{\xi^{a+1}}\int_{0}^{\xi}t^{a}f(t)\,dt,\qquad \xi\in U, \tag{2.6}$$

    then the differential subordination

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'\prec h(\xi), \tag{2.7}$$

    implies the differential subordination

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\prec g(\xi),$$

    where g is the best dominant.

    Proof. We can write (2.6) as

    $$\xi^{a+1}F(\xi)=(a+2)\int_{0}^{\xi}t^{a}f(t)\,dt,\qquad \xi\in U,$$

    and differentiating it, we get

    $$\xi F'(\xi)+(a+1)F(\xi)=(a+2)f(\xi)$$

    and

    $$\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'+(a+1)I_{q,\mu}^{s}(\lambda,\ell)F(\xi)=(a+2)I_{q,\mu}^{s}(\lambda,\ell)f(\xi),\qquad \xi\in U.$$

    Differentiating the last relation, we obtain

    $$\frac{\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)''}{a+2}+\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'=\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)',\qquad \xi\in U,$$

    and (2.7) can be written as

    $$\frac{\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)''}{a+2}+\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\prec\frac{\xi g'(\xi)}{a+2}+g(\xi). \tag{2.8}$$

    Denoting

    $$p(\xi)=\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\in H[1,1], \tag{2.9}$$

    differential subordination (2.8) takes the form

    $$\frac{\xi p'(\xi)}{a+2}+p(\xi)\prec\frac{\xi g'(\xi)}{a+2}+g(\xi).$$

    Through Lemma 1.1, we find

    $$p(\xi)\prec g(\xi),$$

    that is,

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\prec g(\xi),$$

    where g is the best dominant.

    Theorem 2.3. Denoting

    $$I_{a}(f)(\xi)=\frac{a+2}{\xi^{a+1}}\int_{0}^{\xi}t^{a}f(t)\,dt,\qquad a>0, \tag{2.10}$$

    then

    $$I_{a}\left[S_{q,\mu}^{s}(\lambda,\ell;\alpha)\right]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^{*}), \tag{2.11}$$

    where

    $$\alpha^{*}=(2\alpha-1)-(\alpha-1)\,{}_{2}F_{1}\!\left(1,1;a+3;\frac{1}{2}\right). \tag{2.12}$$

    Proof. Using Theorem 2.2 with $h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}$ and proceeding exactly as in the proof of Theorem 2.2, we find that

    $$\frac{\xi p'(\xi)}{a+2}+p(\xi)\prec h(\xi),$$

    holds, with p defined by (2.9).

    Through Lemma 1.2, we find

    $$p(\xi)\prec g(\xi)\prec h(\xi),$$

    that is,

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\prec g(\xi)\prec h(\xi),$$

    where

    $$g(\xi)=\frac{a+2}{\xi^{a+2}}\int_{0}^{\xi}t^{a+1}\,\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)-\frac{2(a+2)(\alpha-1)}{\xi^{a+2}}\int_{0}^{\xi}\frac{t^{a+1}}{1-t}\,dt.$$

    By using Lemma 1.5, we get

    $$g(\xi)=(2\alpha-1)-2(\alpha-1)(1-\xi)^{-1}\,{}_{2}F_{1}\!\left(1,1;a+3;\frac{\xi}{\xi-1}\right).$$

    Since g is a convex function and g(U) is symmetric with respect to the real axis, we have

    $$\mathrm{Re}\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\geq\min_{|\xi|=1}\mathrm{Re}\,g(\xi)=\mathrm{Re}\,g(-1)=\alpha^{*}=(2\alpha-1)-(\alpha-1)\,{}_{2}F_{1}\!\left(1,1;a+3;\frac{1}{2}\right).$$

    If we put α=0 in Theorem 2.3, we obtain:

    Corollary 2.1. Let

    $$I_{a}(f)(\xi)=\frac{a+2}{\xi^{a+1}}\int_{0}^{\xi}t^{a}f(t)\,dt,\qquad a>0,$$

    then

    $$I_{a}\left[S_{q,\mu}^{s}(\lambda,\ell)\right]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^{*}),$$

    where

    $$\alpha^{*}=-1+{}_{2}F_{1}\!\left(1,1;a+3;\frac{1}{2}\right).$$

    Example 2.1. If a=0 in Corollary 2.1, we get:

    $$I_{0}(f)(\xi)=\frac{2}{\xi}\int_{0}^{\xi}f(t)\,dt,$$

    then

    $$I_{0}\left[S_{q,\mu}^{s}(\lambda,\ell)\right]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^{*}),$$

    where

    $$\alpha^{*}=-1+{}_{2}F_{1}\!\left(1,1;3;\frac{1}{2}\right)=3-4\ln 2.$$
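    As a numeric sanity check of Example 2.1 (an illustration, not part of the paper), the value $\alpha^{*}=-1+{}_{2}F_{1}(1,1;3;\tfrac{1}{2})$ can be computed from the truncated hypergeometric series and compared with $3-4\ln 2\approx 0.2274$:

```python
import math

# Truncated partial sum of the Gaussian hypergeometric series, |x| < 1.
def hyp2f1(a, b, c, x, terms=200):
    total, coeff = 0.0, 1.0  # coeff holds (a)_n (b)_n / ((c)_n n!) x^n
    for n in range(terms):
        total += coeff
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

alpha_star = -1 + hyp2f1(1, 1, 3, 0.5)
assert abs(alpha_star - (3 - 4 * math.log(2))) < 1e-12
```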

    Theorem 2.4. Let g be a convex function with g(0)=1, and define

    $$h(\xi)=g(\xi)+\xi g'(\xi),\qquad \xi\in U.$$

    If f∈A verifies

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'\prec h(\xi),\qquad \xi\in U, \tag{2.13}$$

    then the sharp differential subordination

    $$\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi),\qquad \xi\in U, \tag{2.14}$$

    holds.

    Proof. Considering

    $$p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa}}{\xi}=1+p_{1}\xi+p_{2}\xi^{2}+\cdots,\qquad \xi\in U,$$

    clearly p∈H[1,1], thus we can write

    $$\xi p(\xi)=I_{q,\mu}^{s}(\lambda,\ell)f(\xi),$$

    and differentiating it, we obtain

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'=\xi p'(\xi)+p(\xi).$$

    Subordination (2.13) takes the form

    $$\xi p'(\xi)+p(\xi)\prec h(\xi)=\xi g'(\xi)+g(\xi). \tag{2.15}$$

    Lemma 1.1 allows us to conclude p(ξ)≺g(ξ), so (2.14) holds.

    Theorem 2.5. Let $h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}$ with α∈[0,1). If f∈A verifies

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'\prec h(\xi),\qquad \xi\in U, \tag{2.16}$$

    then we obtain the subordination

    $$\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi),\qquad \xi\in U,$$

    where the convex function $g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi)$ is the best dominant.

    Proof. Let

    $$p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=1+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa-1}\in H[1,1],\qquad \xi\in U.$$

    By differentiating, we get

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'=\xi p'(\xi)+p(\xi),$$

    and differential subordination (2.16) becomes

    $$\xi p'(\xi)+p(\xi)\prec h(\xi).$$

    Lemma 1.2 allows us to obtain

    $$p(\xi)\prec g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}h(t)\,dt,$$

    then

    $$\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi),$$

    where g is the best dominant.

    If we put α=0 in Theorem 2.5, we have:

    Corollary 2.2. Let $h(\xi)=\frac{1+\xi}{1-\xi}$. If f∈A verifies

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'\prec h(\xi),\qquad \xi\in U,$$

    then we obtain the subordination

    $$\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi)=-1-\frac{2}{\xi}\ln(1-\xi),\qquad \xi\in U,$$

    where the convex function g is the best dominant.

    Example 2.2. From Corollary 2.2, if

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'\prec h(\xi),\qquad \xi\in U,$$

    we obtain

    $$\mathrm{Re}\,\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\geq\min_{|\xi|=1}\mathrm{Re}\,g(\xi)=\mathrm{Re}\,g(-1)=-1+2\ln 2.$$

    Theorem 2.6. Let g be a convex function with g(0)=1. We define h(ξ)=ξg′(ξ)+g(ξ), ξ∈U. If f∈A verifies

    $$\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'\prec h(\xi),\qquad \xi\in U, \tag{2.17}$$

    then

    $$\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\prec g(\xi),\qquad \xi\in U, \tag{2.18}$$

    holds.

    Proof. For

    $$p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s+1}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa}}{\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa}},$$

    differentiating gives

    $$p'(\xi)=\frac{\left(I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)\right)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}-p(\xi)\,\frac{\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},$$

    so that

    $$\xi p'(\xi)+p(\xi)=\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'.$$

    Differential subordination (2.17) then takes the form (2.15), and Lemma 1.1 allows us to conclude p(ξ)≺g(ξ), so (2.18) holds.

    This section examines differential superordinations with respect to a first-order derivative of a q-multiplier-Ruscheweyh operator Isq,μ(λ,). For every differential superordination under investigation, we provide the best subordinant.

    Theorem 3.1. Consider f∈A, a convex function h in U with h(0)=1, and F(ξ) defined by (2.6). Assume that $\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'$ is univalent in U and $\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\in Q\cap H[1,1]$. If

    $$h(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)',\qquad \xi\in U, \tag{3.1}$$

    holds, then

    $$g(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)',\qquad \xi\in U,$$

    with $g(\xi)=\frac{a+2}{\xi^{a+2}}\int_{0}^{\xi}t^{a+1}h(t)\,dt$ the best subordinant.

    Proof. Differentiating (2.6), we get ξF′(ξ)+(a+1)F(ξ)=(a+2)f(ξ), which can be expressed as

    $$\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'+(a+1)I_{q,\mu}^{s}(\lambda,\ell)F(\xi)=(a+2)I_{q,\mu}^{s}(\lambda,\ell)f(\xi),$$

    and, after differentiating again, takes the form

    $$\frac{\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)''}{a+2}+\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'=\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'.$$

    Using the last relation, (3.1) can be expressed as

    $$h(\xi)\prec\frac{\xi\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)''}{a+2}+\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'. \tag{3.2}$$

    Define

    $$p(\xi)=\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)',\qquad \xi\in U, \tag{3.3}$$

    and putting (3.3) in (3.2), we obtain $h(\xi)\prec\frac{\xi p'(\xi)}{a+2}+p(\xi)$, ξ∈U. Using Lemma 1.3 with n=1 and γ=a+2, it results that g(ξ)≺p(ξ), that is, $g(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'$, with the convex function $g(\xi)=\frac{a+2}{\xi^{a+2}}\int_{0}^{\xi}t^{a+1}h(t)\,dt$ the best subordinant.

    Theorem 3.2. Let f∈A, $F(\xi)=\frac{a+2}{\xi^{a+1}}\int_{0}^{\xi}t^{a}f(t)\,dt$, and $h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}$, where Re a>−2 and α∈[0,1). Suppose that $\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'$ is univalent in U, $\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'\in Q\cap H[1,1]$, and

    $$h(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)',\qquad \xi\in U, \tag{3.4}$$

    then

    $$g(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)',\qquad \xi\in U,$$

    is satisfied for the convex function $g(\xi)=(2\alpha-1)-2(\alpha-1)(1-\xi)^{-1}\,{}_{2}F_{1}\!\left(1,1;a+3;\frac{\xi}{\xi-1}\right)$ as the best subordinant.

    Proof. Let $p(\xi)=\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)'$. As in the proof of Theorem 3.1, we can express (3.4) as

    $$h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\frac{\xi p'(\xi)}{a+2}+p(\xi).$$

    By using Lemma 1.4, we obtain g(ξ)≺p(ξ), with

    $$g(\xi)=\frac{a+2}{\xi^{a+2}}\int_{0}^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,t^{a+1}\,dt=(2\alpha-1)-2(\alpha-1)(1-\xi)^{-1}\,{}_{2}F_{1}\!\left(1,1;a+3;\frac{\xi}{\xi-1}\right)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\right)',$$

    where g is convex and the best subordinant.

    Theorem 3.3. Let f∈A and let h be a convex function with h(0)=1. Assume that $\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'$ is univalent and $\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\in Q\cap H[1,1]$. If

    $$h(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)',\qquad \xi\in U, \tag{3.5}$$

    holds, then

    $$g(\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi},\qquad \xi\in U,$$

    is satisfied for the convex function $g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}h(t)\,dt$, the best subordinant.

    Proof. Denoting

    $$p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_{q}^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_{q}!}{[\mu]_{q}!\,[\kappa-1]_{q}!}a_{\kappa}\xi^{\kappa}}{\xi}\in H[1,1],$$

    we can write $I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi p(\xi)$ and, differentiating it, we have

    $$\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'=\xi p'(\xi)+p(\xi).$$

    With this notation, differential superordination (3.5) becomes

    $$h(\xi)\prec\xi p'(\xi)+p(\xi).$$

    Using Lemma 1.3, we obtain

    $$g(\xi)\prec p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\quad\text{for}\quad g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}h(t)\,dt,$$

    which is convex and the best subordinant.

    Theorem 3.4. Suppose that $h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}$ with α∈[0,1). For f∈A, assume that $\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'$ is univalent and $\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\in Q\cap H[1,1]$. If

    $$h(\xi)\prec\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)',\qquad \xi\in U, \tag{3.6}$$

    holds, then

    $$g(\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi},\qquad \xi\in U,$$

    where

    $$g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi).$$

    Proof. Following the proof of Theorem 3.3 with $p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}$, superordination (3.6) takes the form

    $$h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\xi p'(\xi)+p(\xi).$$

    By using Lemma 1.3, we obtain g(ξ)≺p(ξ), with

    $$g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi},$$

    where g is convex and the best subordinant.

    Theorem 3.5. Let h be a convex function with h(0)=1. For f∈A, assume that $\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'$ is univalent in U and $\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\in Q\cap H[1,1]$. If

    $$h(\xi)\prec\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)',\qquad \xi\in U, \tag{3.7}$$

    holds, then

    $$g(\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},\qquad \xi\in U,$$

    where the convex function $g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}h(t)\,dt$ is the best subordinant.

    Proof. Let

    $$p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}.$$

    Differentiating, we can write

    $$p'(\xi)=\frac{\left(I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)\right)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}-p(\xi)\,\frac{\left(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\right)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},$$

    which gives $\xi p'(\xi)+p(\xi)=\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'$.

    Differential superordination (3.7) becomes h(ξ)≺ξp′(ξ)+p(ξ). Applying Lemma 1.3, we obtain $g(\xi)\prec p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}$, with the convex function $g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}h(t)\,dt$ the best subordinant.

    Theorem 3.6. Assume that $h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}$ with α∈[0,1). For f∈A, suppose that $\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'$ is univalent and $\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\in Q\cap H[1,1]$. If

    $$h(\xi)\prec\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)',\qquad \xi\in U, \tag{3.8}$$

    holds, then

    $$g(\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},\qquad \xi\in U,$$

    where

    $$g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi).$$

    Proof. Using $p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}$, differential superordination (3.8) takes the form

    $$h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\xi p'(\xi)+p(\xi).$$

    By using Lemma 1.3, we get g(ξ)≺p(ξ), with

    $$g(\xi)=\frac{1}{\xi}\int_{0}^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},$$

    where g is convex and the best subordinant.

    The novel findings of this study concern the new class of normalized analytic functions $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ given in Definition 2.1. To introduce some subclasses of univalent functions, we develop the q-analogue multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$ using the notion of a q-difference operator; the q-Ruscheweyh operator and the q-Cătaş operator are also used to introduce and study distinct subclasses. In Section 2, these subclasses are examined in more detail using the methods of differential subordination theory. In Section 3, we derive differential superordinations involving the q-analogue multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$ and its derivatives of first and second order, and for every differential superordination investigated, the best subordinant is provided.

    The authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by "Decembrie 1918" University of Alba Iulia, through the scientific research funds.

    The authors declare that they have no conflicts of interest.



    [1] X. Wang, Y. Yang, Y. Qin, Semilocal convergence analysis of an eighth order iterative method for solving nonlinear systems, AIMS Math., 8 (2023), 22371–22384. http://dx.doi.org/10.3934/math.20231141 doi: 10.3934/math.20231141
    [2] A. A. Samarskii, E.S. Nikolaev, The mathematical theory of iterative methods, Birkhäuser Basel, 1989. http://dx.doi.org/10.1007/978-3-0348-9142-4_1
    [3] I. K. Argyros, S. George, Enlarging the convergence ball of the method of parabola for finding zero of derivatives, Appl. Math. Comput., 256 (2015), 68–74. http://dx.doi.org/10.1016/j.amc.2015.01.030 doi: 10.1016/j.amc.2015.01.030
    [4] X. Wang, D. Ruan, Convergence ball of a new fourth-order method for finding a zero of the derivative, AIMS Math., 9 (2024), 6073–6087. https://dx.doi.org/10.3934/math.2024297 doi: 10.3934/math.2024297
    [5] J. M. Ortega, W. C. Rheinboldt, Iterative solution of nonlinear equations in several variables, New York: Academic Press, 1970. http://dx.doi.org/10.1137/1.9780898719468
    [6] L. B. Rall, A note on the convergence of Newton's method, SIAM J. Numer. Anal., 11 (1974), 34–36. http://dx.doi.org/10.1137/0711004 doi: 10.1137/0711004
    [7] Z. D. Huang, The convergence ball of Newton's method and the uniqueness ball of equations under Hölder-type continuous derivatives, Comput. Math. Appl., 5 (2004), 247–251. http://dx.doi.org/10.1016/s0898-1221(04)90021-1 doi: 10.1016/s0898-1221(04)90021-1
    [8] I. K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton's method, J. Compl., 28 (2012), 364–387. http://dx.doi.org/10.1016/j.jco.2011.12.003 doi: 10.1016/j.jco.2011.12.003
    [9] X. Wang, X. Chen, W. Li, Dynamical behavior analysis of an eighth-order Sharma's method, Int. J. Biomath., 2023, 2350068. https://dx.doi.org/10.1142/S1793524523500687
    [10] X. Wang, J. Xu, Conformable vector Traub's method for solving nonlinear systems, Numer. Algor., 2024. https://dx.doi.org/10.1007/s11075-024-01762-7
    [11] X. H. Wang, C. Li, Convergence of Newton's method and uniqueness of the solution of equations in Banach spaces II, Acta Math. Sinica, 19 (2003), 405–412. http://dx.doi.org/10.1007/s10114-002-0238-y
    [12] Q. B. Wu, H. M. Ren, Convergence ball of a modified secant method for finding zero of derivatives, Appl. Math. Comput., 174 (2006), 24–33. http://dx.doi.org/10.1016/j.amc.2005.05.007 doi: 10.1016/j.amc.2005.05.007
    [13] Q. B. Wu, H. M. Ren, W. H. Bi, Convergence ball and error analysis of Müller's method, Appl. Math. Comput., 184 (2007), 464–470. http://dx.doi.org/10.1016/j.amc.2006.05.167 doi: 10.1016/j.amc.2006.05.167
    [14] X. H. Wang, C. Li, On the convergent iteration method of order two for finding zeros of the derivative, Math. Numer. Sin., 23 (2001), 121–128. http://dx.doi.org/10.3321/j.issn:0254-7791.2001.01.013. doi: 10.3321/j.issn:0254-7791.2001.01.013
    [15] H. Ren, I. K. Argyros, On the complexity of extending the convergence ball of Wang's method for finding a zero of a derivative, J. Complex., 64 (2021), 101526. http://dx.doi.org/10.1016/j.jco.2020.101526 doi: 10.1016/j.jco.2020.101526
    [16] L. B. Rall, A note on the convergence of Newton's method, SIAM J. Numer. Anal., 11 (1974), 34–36. http://dx.doi.org/10.1137/0711004 doi: 10.1137/0711004
    [17] J. Stoer, R. Bulirsch, Introduction to numerical analysis, New York: Springer-Verlag, 1980. http://dx.doi.org/10.1017/CBO9780511801181
    [18] M. S. Petković, B. Neta, L. D. Petković, J. Džunić, Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput., 226 (2014), 635–660. http://dx.doi.org/10.1016/j.amc.2013.10.072
    [19] X. Wang, T. Zhang, W. Qian, M. Teng, Seventh-order derivative-free iterative method for solving nonlinear systems, Numer. Algor., 70 (2015), 545–558. https://dx.doi.org/10.1007/s11075-015-9960-2 doi: 10.1007/s11075-015-9960-2
  • This article has been cited by:

    1. Ekram E. Ali, Rabha M. El-Ashwah, Abeer M. Albalahi, Rabab Sidaoui, Marwa Ennaceur, Miguel Vivas-Cortez, Convolution Results with Subclasses of p-Valent Meromorphic Function Connected with q-Difference Operator, 2024, 12, 2227-7390, 3548, 10.3390/math12223548
    2. Ekram E. Ali, Rabha M. El-Ashwah, Abeer M. Albalahi, Application of Fuzzy Subordinations and Superordinations for an Analytic Function Connected with q-Difference Operator, 2025, 14, 2075-1680, 138, 10.3390/axioms14020138
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
