Research article

A faster fixed point iterative algorithm and its application to optimization problems

  • Received: 10 May 2024 Revised: 17 July 2024 Accepted: 25 July 2024 Published: 08 August 2024
  • MSC : 47H05, 47H09, 47H10

• In this paper, we studied the AA-iterative algorithm for finding fixed points of the class of nonlinear generalized (α,β)-nonexpansive mappings. First, we proved weak convergence and then several strong convergence results for the scheme in the setting of uniformly convex Banach spaces. We gave a few numerical examples of generalized (α,β)-nonexpansive mappings to illustrate the major outcomes. One example was constructed on a subset of the real line, while the other was set in a two-dimensional space equipped with the taxicab norm. We used both examples in our numerical computations to show that our iterative algorithm converges faster than other fixed point algorithms in the literature. Some 2D and 3D graphs were obtained that support our results and claims graphically. As applications of our major results, we solved a class of fractional differential equations, a 2D Volterra integral equation, and a convex minimization problem. Our findings improve and extend the corresponding results of the current literature.

    Citation: Hamza Bashir, Junaid Ahmad, Walid Emam, Zhenhua Ma, Muhammad Arshad. A faster fixed point iterative algorithm and its application to optimization problems[J]. AIMS Mathematics, 2024, 9(9): 23724-23751. doi: 10.3934/math.20241153




Mathematicians are always interested in finding solutions of nonlinear problems, but some problems cannot be handled by analytical methods; to solve such problems, fixed point theory plays a key role [1]. In this area, there are two major directions of research: one is to establish the existence of fixed points for broad classes of mappings and conditions, and the other is to define iterative processes to find those fixed points.

A prominent result of fixed point theory is due to the famous founder of functional analysis, Banach [2]. This result provides the existence of a fixed point (FP) for a special class of nonlinear operators called contractions and comes with an iterative approach to approximate the FP, called the Picard iterative algorithm. By retaining the convergence property and weakening the contraction hypothesis, researchers have attempted to generalize Banach's contraction principle.

Nonexpansive mappings are generalizations of contraction mappings. A selfmap η on a nonempty subset E of a Banach space B (from now onward, B will represent a Banach space) is called nonexpansive if ‖η(x) − η(y)‖ ≤ ‖x − y‖ holds for all x, y ∈ E. If there is at least one FP for η, then we denote and define the set of all FPs of η by F(η) = {x : η(x) = x}. If η is nonexpansive, F(η) is nonempty, and the inequality ‖η(x) − p‖ ≤ ‖x − p‖ holds for all x ∈ E and p ∈ F(η), we regard such a selfmap as quasi-nonexpansive on E [3]. A nonexpansive selfmap η of E has a FP when the domain of the map is convex, bounded, and closed in a reflexive Banach space B [4]. The existence of FPs for nonexpansive maps was also studied by Browder [5] and Göhde [6], who obtained results similar to the result in [4].

Suzuki [7] introduced a weaker notion than nonexpansiveness known as Condition (C). A selfmap η on a subset E of B is said to satisfy Condition (C) if, for any pair of points x, y ∈ E, whenever (1/2)‖x − η(x)‖ ≤ ‖x − y‖, then ‖η(x) − η(y)‖ ≤ ‖x − y‖ holds. The class of mappings introduced by Suzuki [7] was a significant advancement because these mappings effectively extend the concept of nonexpansive nonlinear operators in a novel and straightforward manner. Mappings satisfying Condition (C) are also called Suzuki generalized nonexpansive mappings. Suzuki mappings with fixed points are quasi-nonexpansive, and hence these mappings are not more general than the class of quasi-nonexpansive maps. To show that the class of Suzuki mappings properly includes the class of nonexpansive maps, some examples are constructed in [7,8] and other papers.

Aoyama and Kohsaka [9] introduced the class of α-nonexpansive maps in 2011, which are described as follows: Let E be a subset of B. A selfmap η on E is said to be α-nonexpansive if an α ∈ [0,1) can be found such that the following inequality holds for each pair x, y ∈ E:

‖η(x) − η(y)‖² ≤ α‖x − η(y)‖² + α‖y − η(x)‖² + (1 − 2α)‖x − y‖².

Clearly, for α = 0, we again have a nonexpansive mapping, i.e., every nonexpansive mapping is α-nonexpansive, but the converse is not valid. For α > 0, an example in [9] reveals the possible discontinuity of α-nonexpansive mappings and shows that the class of α-nonexpansive mappings is larger than the class of nonexpansive mappings. For more details, see [10].

    In 2017, Pant and Shukla [11] studied a new type of extension of nonlinear nonexpansive maps in Banach spaces and suggested the concept of generalized α-nonexpansive maps: The selfmap η will be called a generalized α-nonexpansive whenever one has

(1/2)‖x − η(x)‖ ≤ ‖x − y‖

    then,

‖η(x) − η(y)‖ ≤ α‖x − η(y)‖ + α‖y − η(x)‖ + (1 − 2α)‖x − y‖,

for any choice of x, y ∈ E and a fixed real number α with 0 ≤ α < 1.

The class of generalized α-nonexpansive mappings contains nonexpansive mappings; however, the converse is not true [11]. It is evident that for α = 0, the generalized α-nonexpansive mapping again becomes a Suzuki nonexpansive mapping. In the context of Banach spaces, numerous mathematicians have attempted to approximate FPs of generalized α-nonexpansive mappings [12].

In 2019, Pant and Pandey [13] studied another extension of nonlinear nonexpansive maps in Banach spaces and suggested the concept of β-Reich-Suzuki nonexpansive maps: The selfmap η is called β-Reich-Suzuki nonexpansive whenever one has

(1/2)‖x − η(x)‖ ≤ ‖x − y‖

    then,

‖η(x) − η(y)‖ ≤ β‖x − η(x)‖ + β‖y − η(y)‖ + (1 − 2β)‖x − y‖,

for any choice of x, y ∈ E and a fixed real number β with 0 ≤ β < 1.

Clearly, for β = 0, the β-Reich-Suzuki nonexpansive mapping becomes a Suzuki nonexpansive mapping, i.e., every Suzuki map can be regarded as a Reich-Suzuki map, but the converse is not true [13].

    In 2020, Ullah et al. [14] enriched the study of nonexpansive mappings with a novel class of generalized mappings: The selfmap η will be called a generalized (α,β)-nonexpansive provided that

(1/2)‖x − η(x)‖ ≤ ‖x − y‖

    then,

‖η(x) − η(y)‖ ≤ α‖x − η(y)‖ + α‖y − η(x)‖ + β‖x − η(x)‖ + β‖y − η(y)‖ + (1 − 2α − 2β)‖x − y‖,

for any choice of x, y ∈ E and fixed real numbers α, β with 0 ≤ α, β < 1 and α + β < 1.

The authors in [14] compared this new class of maps with the previously existing classes of mappings given above. Further studies of these mappings were carried out in [14,15], which explored their importance.
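Because the defining inequality is only required at pairs (x, y) satisfying the premise (1/2)‖x − η(x)‖ ≤ ‖x − y‖, it can be convenient to probe a candidate map numerically before attempting a proof. The following Python sketch is our own illustration (the helper name, the sample grid, and the demo map η(x) = x/2 are assumptions, not taken from [14]); it searches sampled pairs on the real line for a violation of the generalized (α,β)-nonexpansive inequality.

```python
import itertools

def violates_alpha_beta(eta, alpha, beta, points):
    """Search sampled point pairs for a violation of the generalized
    (alpha, beta)-nonexpansive inequality.  Returns the first violating
    pair (x, y, lhs, rhs), or None if no violation is found on the sample."""
    for x, y in itertools.product(points, repeat=2):
        # The inequality is only required when (1/2)|x - eta(x)| <= |x - y|.
        if 0.5 * abs(x - eta(x)) <= abs(x - y):
            lhs = abs(eta(x) - eta(y))
            rhs = (alpha * abs(x - eta(y)) + alpha * abs(y - eta(x))
                   + beta * abs(x - eta(x)) + beta * abs(y - eta(y))
                   + (1 - 2 * alpha - 2 * beta) * abs(x - y))
            if lhs > rhs + 1e-12:            # small tolerance for rounding
                return (x, y, lhs, rhs)
    return None

# Hypothetical demo map: eta(x) = x/2 is nonexpansive, so no violation is
# expected even for alpha = beta = 0 (i.e., Condition (C)).
pts = [k / 10 for k in range(0, 51)]
print(violates_alpha_beta(lambda x: x / 2, 0.0, 0.0, pts))   # -> None
```

Of course, such a search can only disprove the property on the chosen sample; establishing it requires a case analysis as in Example 4.1 below.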

Numerous iterative algorithms for numerical solutions have been studied by various authors, with their applications extending to a wide range of problems in the applied sciences [16,17]. Banach's result [2] shows that for any contraction on a complete space, the fixed point is precisely the limit of the Picard algorithm. The iterative sequence {x_n} produced by the Picard iteration is:

x_1 ∈ E,
x_{n+1} = η(x_n), for n ∈ ℕ.  (1.1)

The Picard algorithm generates the above sequence, which converges to the fixed point of a contraction mapping but, in general, not to a fixed point of a nonexpansive mapping.

    Mann [18] presented an iterative approach to approximate FPs for nonexpansive mappings. For an appropriate sequence {σn} in (0,1), the sequence {xn} obtained by Mann is:

x_1 ∈ E,
x_{n+1} = (1 − σ_n)x_n + σ_nη(x_n), for n ∈ ℕ.  (1.2)

If η is a pseudo-contractive mapping, the Mann algorithm may fail to converge to the FP of η. Ishikawa [19] overcame this problem by defining a two-step iterative algorithm. For two appropriate sequences {σ_n} and {λ_n} in (0,1), the sequence {x_n} obtained by the Ishikawa algorithm is given as:

x_1 ∈ E,
x_{n+1} = (1 − σ_n)x_n + σ_nη(y_n),
y_n = (1 − λ_n)x_n + λ_nη(x_n), for n ∈ ℕ.  (1.3)

Noor [20], the pioneer of three-step iterative algorithms, introduced a three-step iteration process in 2000, which is faster than Ishikawa's two-step iterative algorithm. For an arbitrary x_1 in E and three sequences of real numbers {σ_n}, {λ_n}, and {ξ_n} in (0,1), the sequence {x_n} obtained by this algorithm is given as:

x_{n+1} = (1 − σ_n)x_n + σ_nη(y_n),
y_n = (1 − λ_n)x_n + λ_nη(z_n),
z_n = (1 − ξ_n)x_n + ξ_nη(x_n), n ∈ ℕ.  (1.4)

In 2007, Agarwal et al. [21] introduced the following iterative algorithm:

x_1 ∈ E,
x_{n+1} = (1 − σ_n)η(x_n) + σ_nη(y_n),
y_n = (1 − λ_n)x_n + λ_nη(x_n), n ∈ ℕ,  (1.5)

where {σ_n} and {λ_n} are sequences in (0,1).

Abbas and Nazir [22] came up with another three-step iterative algorithm in 2014. For an arbitrary x_1 in E and three sequences of real numbers {σ_n}, {λ_n}, and {ξ_n} in (0,1), the sequence {x_n} obtained by them is given as:

x_{n+1} = (1 − σ_n)η(y_n) + σ_nη(z_n),
y_n = (1 − λ_n)η(x_n) + λ_nη(z_n),
z_n = (1 − ξ_n)x_n + ξ_nη(x_n), n ∈ ℕ.  (1.6)

Thakur et al. [8] proposed another three-step iterative algorithm in 2016. For an arbitrary x_1 in E and three sequences of real numbers {σ_n}, {λ_n}, and {ξ_n} in (0,1), the sequence {x_n} obtained by Thakur et al. is given as:

x_{n+1} = (1 − σ_n)η(z_n) + σ_nη(y_n),
y_n = (1 − λ_n)z_n + λ_nη(z_n),
z_n = (1 − ξ_n)x_n + ξ_nη(x_n), n ∈ ℕ.  (1.7)

Ullah and Arshad [23] introduced a new iterative algorithm in 2018 known as the M-iteration. For an arbitrary x_1 in E and an appropriate sequence of real numbers {σ_n} in (0,1), the sequence {x_n} obtained by the M-iterative algorithm is:

x_{n+1} = η(y_n),
y_n = η(z_n),
z_n = (1 − σ_n)x_n + σ_nη(x_n), n ∈ ℕ.  (1.8)

Ullah et al. [24] proposed a new iterative algorithm in 2022 known as the KF-iteration. For an arbitrary x_1 in E and two appropriate sequences of real numbers {σ_n} and {λ_n} in (0,1), the sequence {x_n} obtained by the KF-iterative algorithm is:

x_{n+1} = η((1 − σ_n)η(x_n) + σ_nη(y_n)),
y_n = η(z_n),
z_n = η((1 − λ_n)x_n + λ_nη(x_n)), n ∈ ℕ.  (1.9)

In 2022, Abbas et al. [25] proposed the AA-Iterative algorithm. For the class of contractive mappings and enhanced contractive mappings, the AA-Iterative algorithm converges faster than the approaches outlined above. For an arbitrary x_1 in E and three sequences of real numbers {σ_n}, {λ_n}, and {ξ_n} in (0,1), the sequence {x_n} obtained with the help of the AA-iterative algorithm is given as:

x_{n+1} = η(y_n),
y_n = η((1 − σ_n)η(h_n) + σ_nη(z_n)),
z_n = η((1 − λ_n)h_n + λ_nη(h_n)),
h_n = (1 − ξ_n)x_n + ξ_nη(x_n), n ∈ ℕ.  (1.10)
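Spelling (1.10) out in code fixes the order in which the four auxiliary points are computed at each step. The Python sketch below is a minimal transcription under the assumption of constant parameter values; the function name and interface are our own.

```python
def aa_iteration(eta, x1, sigma, lam, xi, n_steps):
    """Direct transcription of the AA-iterative algorithm (1.10).

    `eta` is the selfmap, `x1` the initial guess, and sigma, lam, xi are
    constant parameters in (0, 1) standing in for the sequences sigma_n,
    lambda_n, xi_n.  Returns the list [x_1, x_2, ..., x_{n_steps+1}].
    """
    xs = [x1]
    x = x1
    for _ in range(n_steps):
        h = (1 - xi) * x + xi * eta(x)                       # h_n
        z = eta((1 - lam) * h + lam * eta(h))                # z_n
        y = eta((1 - sigma) * eta(h) + sigma * eta(z))       # y_n
        x = eta(y)                                           # x_{n+1}
        xs.append(x)
    return xs

# Usage with the mapping of Example 4.1 below (eta(x) = 5x/6 for x >= 2,
# else 0), sigma_n = 0.50, lambda_n = 0.65, xi_n = 0.85, x_1 = 10:
print(aa_iteration(lambda x: 0.0 if x < 2 else 5 * x / 6,
                   10.0, 0.50, 0.65, 0.85, 4))
# first iterates: approximately 10.0, 4.3291, 1.8741, 0.0, 0.0 (cf. Table 1)
```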

In 2019, Ali et al. [26] proved convergence results for Suzuki-type generalized nonexpansive mappings. Approximating the FP of a more general nonexpansive mapping as quickly as possible is of great interest to mathematicians due to the theoretical and practical applications of fixed point theory to nonlinear equations. Motivated by [14,15,25,26,27], we prove some weak and strong convergence theorems using the AA-Iterative algorithm (1.10) for the class of generalized (α,β)-nonexpansive mappings in a uniformly convex Banach space B.

    The following concepts are needed in the main outcome.

Definition 2.1. [28] Suppose B is a normed space satisfying the following condition: for each selected ε in the interval (0,2], one has a real number δ > 0 such that, for all v, s ∈ B satisfying ‖v‖ ≤ 1, ‖s‖ ≤ 1, and ‖v − s‖ ≥ ε, it follows that

‖(v + s)/2‖ ≤ 1 − δ.

If a normed space B satisfies the above condition, we call it a uniformly convex normed space.

Definition 2.2. [29] For a given normed space B, we say that B has the Opial property whenever, for any sequence {s_n} in B that converges weakly to v, it is the case that

lim sup_{n→∞}‖s_n − v‖ < lim sup_{n→∞}‖s_n − u‖, for any choice of u ∈ B,

where v ≠ u.

Definition 2.3. [30] Take a point s in a normed space B and assume that {s_n} is a bounded sequence of points of B. Consider the functional

Υ(s, {s_n}) = lim sup_{n→∞}‖s − s_n‖.

Then the asymptotic radius of {s_n} relative to a subset E of B is denoted by Υ(E, {s_n}) and reads as follows:

Υ(E, {s_n}) = inf{Υ(s, {s_n}) : s ∈ E}.

Similarly, the asymptotic center of {s_n} relative to E is denoted by A(E, {s_n}) and reads as follows:

A(E, {s_n}) = {s ∈ E : Υ(s, {s_n}) = Υ(E, {s_n})}.

When B is a uniformly convex Banach space, the asymptotic center A(E, {s_n}) consists of exactly one point.

Definition 2.4. [29] Let E be a closed convex subset of a Banach space B. A mapping η: E → B is said to be demiclosed if, for every sequence {x_n} in E that converges weakly to some x_0 ∈ E and such that {η(x_n)} converges strongly to y_0 ∈ B, it follows that η(x_0) = y_0.

Definition 2.5. [31] Assume that E is a subset of a normed space B. A map η: E → E is said to satisfy condition (I) whenever there exists a non-decreasing function Υ: [0,∞) → [0,∞) with Υ(0) = 0 and Υ(t) > 0 for all t > 0 such that

d(s, η(s)) ≥ Υ(d(s, F(η))), for all s ∈ E,

where d(s, F(η)) = inf{d(s, p) : p ∈ F(η)}.

    The following proposition is due to Ullah et al. [14].

Proposition 2.6. Let η be a selfmap on a nonempty closed convex subset E of B. Then the following hold:

a. If η is Suzuki nonexpansive (i.e., satisfies Condition (C)), then η is a generalized (α,β)-nonexpansive mapping on E.

b. If η is a generalized α-nonexpansive mapping, then η is a generalized (α,β)-nonexpansive mapping on E.

c. If η is a β-Reich-Suzuki nonexpansive mapping, then η is a generalized (α,β)-nonexpansive mapping on E.

Lemma 2.7. [14] Any generalized (α,β)-nonexpansive selfmap η with a nonempty fixed point set on a subset E of a normed space B is a quasi-nonexpansive map on E.

Lemma 2.8. [14] The fixed point set of a generalized (α,β)-nonexpansive selfmap η is always closed in the setting of a Banach space.

Lemma 2.9. [14] Any generalized (α,β)-nonexpansive selfmap η on a subset E of a normed space B satisfies

‖v − η(s)‖ ≤ ((3 + α + β)/(1 − α − β))‖v − η(v)‖ + ‖v − s‖,

for any choice of v, s ∈ E.

Lemma 2.10. [14] Let η be a generalized (α,β)-nonexpansive map defined on a nonempty closed convex subset E of a Banach space B having the Opial property. If a sequence {s_n} in E converges weakly to s and lim_{n→∞}‖η(s_n) − s_n‖ = 0, then (I − η) is demiclosed at zero, i.e., η(s) = s.

Lemma 2.11. [32] (Property of uniform convexity) Let B be a uniformly convex Banach space and let {ζ_t} be a sequence of real numbers with 0 < a ≤ ζ_t ≤ b < 1 for all t ∈ ℕ. Suppose {x_t} and {y_t} are two sequences in B satisfying lim sup_{t→∞}‖x_t‖ ≤ ρ, lim sup_{t→∞}‖y_t‖ ≤ ρ, and lim_{t→∞}‖ζ_t x_t + (1 − ζ_t)y_t‖ = ρ for some ρ ≥ 0. Then lim_{t→∞}‖x_t − y_t‖ = 0.

In recent years, different iterative methods have been used for constructing fixed points of nonlinear maps. This section proposes new fixed point results for the faster iterative scheme (1.10) under mild conditions in a Banach space setting. Throughout the main results, B denotes a uniformly convex Banach space. We start the section with the following result.

Lemma 3.1. Let η be a selfmap on a nonempty closed convex subset E of B. If η is a generalized (α,β)-nonexpansive map with F(η) nonempty, then, for the sequence of iterations in (1.10), the limit lim_{n→∞}‖x_n − p‖ exists for every point p in the set F(η).

Proof. Take any point p ∈ F(η) and let s ∈ E. According to Lemma 2.7, η is quasi-nonexpansive on the set E, that is,

‖η(s) − p‖ ≤ ‖s − p‖.

    Therefore, keeping the above fact in mind, it follows from (1.10) that

‖h_n − p‖ = ‖(1 − ξ_n)x_n + ξ_nη(x_n) − p‖ ≤ (1 − ξ_n)‖x_n − p‖ + ξ_n‖η(x_n) − p‖.  (3.1)

From the generalized (α,β)-nonexpansiveness of η and the triangle inequality, one has

‖η(x_n) − p‖ = ‖η(x_n) − η(p)‖
≤ α‖p − η(x_n)‖ + α‖x_n − η(p)‖ + β‖p − η(p)‖ + β‖x_n − η(x_n)‖ + (1 − 2α − 2β)‖x_n − p‖
= α‖p − η(x_n)‖ + α‖x_n − p‖ + β‖x_n − η(x_n)‖ + (1 − 2α − 2β)‖x_n − p‖
≤ α‖p − η(x_n)‖ + α‖x_n − p‖ + β(‖x_n − p‖ + ‖p − η(x_n)‖) + (1 − 2α − 2β)‖x_n − p‖
= (α + β)‖η(x_n) − p‖ + (1 − α − β)‖x_n − p‖,

which, after rearranging, gives ‖η(x_n) − p‖ ≤ ‖x_n − p‖.  (3.2)

    Keeping in mind (3.2) and (3.1), one concludes that

‖h_n − p‖ ≤ (1 − ξ_n)‖x_n − p‖ + ξ_n‖x_n − p‖ = ‖x_n − p‖.  (3.3)

If c_n = (1 − λ_n)h_n + λ_nη(h_n), we have

‖z_n − p‖ = ‖η(c_n) − p‖.  (3.4)

    Now,

‖η(c_n) − η(p)‖ ≤ α‖p − η(c_n)‖ + α‖c_n − p‖ + β‖p − η(p)‖ + β‖c_n − η(c_n)‖ + (1 − 2α − 2β)‖c_n − p‖ ≤ ‖c_n − p‖.  (3.5)

Now, by using c_n = (1 − λ_n)h_n + λ_nη(h_n), we have

‖c_n − p‖ = ‖(1 − λ_n)h_n + λ_nη(h_n) − p‖ ≤ (1 − λ_n)‖h_n − p‖ + λ_n‖η(h_n) − p‖.  (3.6)

    Now,

‖η(h_n) − p‖ ≤ α‖p − η(h_n)‖ + α‖h_n − p‖ + β‖p − η(p)‖ + β‖h_n − η(h_n)‖ + (1 − 2α − 2β)‖h_n − p‖ ≤ ‖h_n − p‖.  (3.7)

    Now, by using (3.3) and (3.7) in (3.6), we have

‖c_n − p‖ ≤ ‖x_n − p‖.  (3.8)

    It follows from (3.4), (3.5), and (3.8)

‖z_n − p‖ ≤ ‖x_n − p‖.  (3.9)

Now, by taking d_n = (1 − σ_n)η(h_n) + σ_nη(z_n), we have

‖y_n − p‖ = ‖η(d_n) − p‖ = ‖η(d_n) − η(p)‖ ≤ α‖p − η(d_n)‖ + α‖d_n − η(p)‖ + β‖p − η(p)‖ + β‖d_n − η(d_n)‖ + (1 − 2α − 2β)‖d_n − p‖ ≤ ‖d_n − p‖.  (3.10)

    Now,

‖d_n − p‖ = ‖(1 − σ_n)η(h_n) + σ_nη(z_n) − p‖ ≤ (1 − σ_n)‖η(h_n) − p‖ + σ_n‖η(z_n) − p‖.  (3.11)

    Thus,

‖η(z_n) − p‖ = ‖η(z_n) − η(p)‖ ≤ α‖p − η(z_n)‖ + α‖z_n − η(p)‖ + β‖p − η(p)‖ + β‖z_n − η(z_n)‖ + (1 − 2α − 2β)‖z_n − p‖ ≤ ‖z_n − p‖.  (3.12)

    Hence, by using (3.7), (3.12) in (3.11), we have

‖d_n − p‖ ≤ (1 − σ_n)‖h_n − p‖ + σ_n‖z_n − p‖.  (3.13)

    By (3.3), (3.9), and (3.13), we obtain

‖d_n − p‖ ≤ ‖x_n − p‖.  (3.14)

It follows from (3.10) and (3.14) that

‖y_n − p‖ ≤ ‖x_n − p‖.  (3.15)

    Now,

‖x_{n+1} − p‖ = ‖η(y_n) − p‖ = ‖η(y_n) − η(p)‖ ≤ α‖p − η(y_n)‖ + α‖y_n − η(p)‖ + β‖p − η(p)‖ + β‖y_n − η(y_n)‖ + (1 − 2α − 2β)‖y_n − p‖ ≤ ‖y_n − p‖.  (3.16)

Using (3.15) in (3.16), one has

‖x_{n+1} − p‖ ≤ ‖x_n − p‖.

Consequently, the sequence {‖x_n − p‖} is non-increasing and bounded below for every p ∈ F(η). From basic analysis, we conclude that lim_{n→∞}‖x_n − p‖ exists.

Lemma 3.2. Let η be a generalized (α,β)-nonexpansive selfmap on a nonempty closed convex subset E of B, and let {x_n} be the sequence generated by (1.10). Then F(η) is nonempty if and only if {x_n} is bounded and lim_{n→∞}‖η(x_n) − x_n‖ = 0.

Proof. If F(η) contains at least one element, then we can take a point p ∈ F(η). It immediately follows from Lemma 3.1 that lim_{n→∞}‖x_n − p‖ exists and the sequence of iterates {x_n} is bounded in E. Thus, we can assume that

lim_{n→∞}‖x_n − p‖ = κ.  (3.17)

From (3.3), (3.9), and (3.15), we have

lim sup_{n→∞}‖h_n − p‖ ≤ lim sup_{n→∞}‖x_n − p‖ = κ,  (3.18)
lim sup_{n→∞}‖z_n − p‖ ≤ lim sup_{n→∞}‖x_n − p‖ = κ,  (3.19)
lim sup_{n→∞}‖y_n − p‖ ≤ lim sup_{n→∞}‖x_n − p‖ = κ.  (3.20)

It follows from (3.2) that

‖η(x_n) − p‖ = ‖η(x_n) − η(p)‖ ≤ ‖x_n − p‖, so that lim sup_{n→∞}‖η(x_n) − p‖ ≤ κ.  (3.21)

    So,

‖x_{n+1} − p‖ = ‖η(y_n) − η(p)‖ ≤ ‖y_n − p‖.  (3.22)

Taking the limit inferior in (3.22), we have

κ ≤ lim inf_{n→∞}‖y_n − p‖.  (3.23)

    It follows from (3.20) and (3.23), that

lim inf_{n→∞}‖y_n − p‖ = κ.  (3.24)

    Regarding (3.22), (3.10), (3.11), and (3.12), one has

‖x_{n+1} − p‖ ≤ ‖y_n − p‖ ≤ ‖η(z_n) − p‖ ≤ ‖z_n − p‖.  (3.25)

Taking the limit inferior in (3.25), we have

κ ≤ lim inf_{n→∞}‖z_n − p‖.  (3.26)

By (3.19) and (3.26), we obtain

lim inf_{n→∞}‖z_n − p‖ = κ.

    From (3.25), we have

‖x_{n+1} − p‖ ≤ ‖η(z_n) − p‖ ≤ ‖z_n − p‖ ≤ ‖h_n − p‖.  (3.27)

Taking the limit inferior in (3.27), we have

κ ≤ lim inf_{n→∞}‖h_n − p‖.  (3.28)

By (3.18) and (3.28), we obtain

lim inf_{n→∞}‖h_n − p‖ = κ.

Moreover,

κ ≤ lim_{n→∞}‖h_n − p‖ = lim_{n→∞}‖(1 − ξ_n)x_n + ξ_nη(x_n) − p‖ ≤ lim_{n→∞}((1 − ξ_n)‖x_n − p‖ + ξ_n‖η(x_n) − p‖) ≤ lim_{n→∞}((1 − ξ_n)‖x_n − p‖ + ξ_n‖x_n − p‖) = lim_{n→∞}‖x_n − p‖ = κ.

    Hence,

lim_{n→∞}‖(1 − ξ_n)(x_n − p) + ξ_n(η(x_n) − p)‖ = κ.  (3.29)

    By (3.17), (3.21), (3.29), and Lemma 2.11, we have

lim_{n→∞}‖x_n − η(x_n)‖ = 0.

Conversely, suppose that {x_n} is bounded and lim_{n→∞}‖x_n − η(x_n)‖ = 0. Let p ∈ A(E, {x_n}). Then, by Lemma 2.9, we have

Υ(η(p), {x_n}) = lim sup_{n→∞}‖x_n − η(p)‖ ≤ lim sup_{n→∞}(((3 + α + β)/(1 − α − β))‖x_n − η(x_n)‖ + ‖x_n − p‖) = ((3 + α + β)/(1 − α − β)) lim sup_{n→∞}‖x_n − η(x_n)‖ + lim sup_{n→∞}‖x_n − p‖ = lim sup_{n→∞}‖x_n − p‖ = Υ(p, {x_n}).

This indicates that η(p) ∈ A(E, {x_n}). Since B is a uniformly convex Banach space, A(E, {x_n}) is a singleton. Thus, we obtain η(p) = p, that is, F(η) is nonempty.

Theorem 3.3. Let η be a selfmap on a nonempty closed convex subset E of B, where B satisfies the Opial property. If η is a generalized (α,β)-nonexpansive map with F(η) nonempty, then the sequence {x_n} generated by (1.10) converges weakly to a FP of η.

Proof. Consider any point p ∈ F(η); it follows from Lemma 3.1 that lim_{n→∞}‖x_n − p‖ exists. We need to establish that {x_n} admits exactly one weak subsequential limit in F(η). To show this, assume that κ1 and κ2 are two weak limits of the subsequences {x_{n_i}} and {x_{n_j}} of {x_n}, respectively. By Lemma 3.2, lim_{n→∞}‖x_n − η(x_n)‖ = 0. Hence, by Lemma 2.10, (I − η) is demiclosed at zero, that is, (I − η)κ1 = 0, and so η(κ1) = κ1. By the same steps, η(κ2) = κ2.

Furthermore, suppose that κ1 ≠ κ2. Using the Opial property, we have

lim_{n→∞}‖x_n − κ1‖ = lim_{i→∞}‖x_{n_i} − κ1‖ < lim_{i→∞}‖x_{n_i} − κ2‖ = lim_{n→∞}‖x_n − κ2‖ = lim_{j→∞}‖x_{n_j} − κ2‖ < lim_{j→∞}‖x_{n_j} − κ1‖ = lim_{n→∞}‖x_n − κ1‖.  (3.30)

This is a contradiction, so κ1 = κ2. Hence, {x_n} converges weakly to a point p ∈ F(η).

Theorem 3.4. Let η be a selfmap on a nonempty closed convex subset E of B. If η is a generalized (α,β)-nonexpansive map with F(η) nonempty, then the sequence {x_n} generated by (1.10) converges strongly to a FP of η if and only if lim inf_{n→∞} d(x_n, F(η)) = 0.

Proof. Notice that if {x_n} converges strongly to some FP p of η, then it follows that lim inf_{n→∞} d(x_n, F(η)) = 0, which proves the necessity.

On the other hand, suppose that lim inf_{n→∞} d(x_n, F(η)) = 0; we want to prove that {x_n} converges to a FP of η. By Lemma 3.1, lim_{n→∞}‖x_n − p‖ exists for every FP p of η, so the sequence {d(x_n, F(η))} is non-increasing and its limit exists. Hence, from the given condition, it follows that lim_{n→∞} d(x_n, F(η)) = 0.

We now establish that {x_n} is a Cauchy sequence in the closed set E. Since lim_{n→∞} d(x_n, F(η)) = 0, for every ε > 0 there is a number n_0 ∈ ℕ such that d(x_n, F(η)) < ε/2 for all n ≥ n_0. It follows that

inf{‖x_{n_0} − p‖ : p ∈ F(η)} < ε/2.

Hence, there exists p ∈ F(η) such that ‖x_{n_0} − p‖ < ε/2. Thus, for any choice of m, n ≥ n_0, one has

‖x_{m+n} − x_n‖ ≤ ‖x_{m+n} − p‖ + ‖x_n − p‖ ≤ ‖x_{n_0} − p‖ + ‖x_{n_0} − p‖ = 2‖x_{n_0} − p‖ < ε.

The last estimate shows that {x_n} is a Cauchy sequence in E. Since E is closed, there exists an element q ∈ E such that lim_{n→∞} x_n = q. Now, lim_{n→∞} d(x_n, F(η)) = 0 gives d(q, F(η)) = 0, and since F(η) is closed (Lemma 2.8), q ∈ F(η). This completes the proof.

Theorem 3.5. Let η be a selfmap on a nonempty closed convex subset E of B. If η is a generalized (α,β)-nonexpansive map with F(η) nonempty, then the sequence {x_n} generated by (1.10) converges strongly to a FP of η provided that E is compact in B.

Proof. In view of Lemma 3.2, we have

lim_{n→∞}‖x_n − η(x_n)‖ = 0.

Since E is compact, one has a subsequence {x_{n_i}} of {x_n} such that x_{n_i} → p for some p ∈ E. Then, by Lemma 2.9, we obtain

‖x_{n_i} − η(p)‖ ≤ ((3 + α + β)/(1 − α − β))‖x_{n_i} − η(x_{n_i})‖ + ‖x_{n_i} − p‖, for all i ≥ 1.

By applying the limit, we obtain x_{n_i} → η(p) as i → ∞. This shows that η(p) = p, that is, p ∈ F(η). In addition, lim_{n→∞}‖x_n − p‖ exists by Lemma 3.1. Thus, x_n → p as n → ∞.

    The next result is established using the condition (Ⅰ).

Theorem 3.6. Let η be a selfmap on a nonempty closed convex subset E of B. If η is a generalized (α,β)-nonexpansive map with F(η) nonempty, then the sequence {x_n} generated by (1.10) converges strongly to a FP of η provided that η satisfies condition (Ⅰ).

Proof. As shown in the proof of Lemma 3.2,

lim_{n→∞}‖x_n − η(x_n)‖ = 0.

From Condition (Ⅰ) and the above limit, we get

0 ≤ lim_{n→∞} Υ(d(x_n, F(η))) ≤ lim_{n→∞}‖x_n − η(x_n)‖ = 0,
so lim_{n→∞} Υ(d(x_n, F(η))) = 0.

Since Υ: [0,∞) → [0,∞) is increasing with Υ(0) = 0 and Υ(t) > 0 for all t > 0, we have

lim_{n→∞} d(x_n, F(η)) = 0.

One can now conclude from Theorem 3.4 that the sequence {x_n} converges strongly to a FP of η.

We now construct numerical examples to test our scheme on the considered class of mappings. One example is simple and constructed in a one-dimensional space, while the other is two-dimensional. Graphical representations and numerical comparisons clearly show the faster convergence of our main scheme.

Example 4.1. Define η: [0,∞) → [0,∞) by

η(x) = 0 for x ∈ [0,2), and η(x) = 5x/6 for x ∈ [2,∞).

Here, η does not possess Condition (C); however, η is a generalized (α,β)-nonexpansive mapping. Let x = 5/2 and y = 3/2; then η(x) = 25/12. So,

(1/2)|x − η(x)| = (1/2)|5/2 − 25/12| = (1/2)|5/12| = 5/24.

Moreover, |x − y| = |5/2 − 3/2| = 1, so (1/2)|x − η(x)| ≤ |x − y|.

However, |η(x) − η(y)| = |25/12 − 0| = 25/12 > 1 = |x − y|, that is, |η(x) − η(y)| > |x − y|.

Hence, η does not possess Condition (C).

Now, take α = 5/11 and β = 1/22. Clearly, α + β = 1/2 < 1. The following cases arise.

Case 1: If x, y ∈ [0,2), then η(x) = η(y) = 0 and

(5/11)|x − η(y)| + (5/11)|y − η(x)| + (1/22)|x − η(x)| + (1/22)|y − η(y)| ≥ 0 = |η(x) − η(y)|.

Case 2: If y ∈ [0,2) and x ∈ [2,∞), then we have

(5/11)|x − η(y)| + (5/11)|y − η(x)| + (1/22)|x − η(x)| + (1/22)|y − η(y)|
= (5/11)|x| + (5/11)|y − 5x/6| + (1/22)|x − 5x/6| + (1/22)|y|
= (5/11)|x| + (5/11)|y − 5x/6| + (1/22)|x/6| + (1/22)|y|
≥ (5/11)|11x/6| = (5/6)|x| = |η(x) − η(y)|.

Case 3: If x, y ∈ [2,∞), then we have

(5/11)|x − η(y)| + (5/11)|y − η(x)| + (1/22)|x − η(x)| + (1/22)|y − η(y)|
= (5/11)|x − 5y/6| + (5/11)|y − 5x/6| + (1/22)|x − 5x/6| + (1/22)|y − 5y/6|
= (5/11)|x − 5y/6| + (5/11)|y − 5x/6| + (1/22)|x/6| + (1/22)|y/6|
≥ (5/11)|11x/6 − 11y/6| + (1/132)|x − y|
≥ (5/6)|x − y| = |η(x) − η(y)|.

Hence, η is a generalized (5/11, 1/22)-nonexpansive mapping. However, for x = 5/2 and y = 3/2, η is neither a generalized 5/11-nonexpansive map nor a 1/22-Reich-Suzuki type map.

Now, we draw graphs and tables to show that the sequence {x_n} of the AA-Iterative Algorithm (1.10) moves faster to the FP of Example 4.1 than the Mann iteration (1.2), Ishikawa iteration (1.3), S iteration (1.5), Thakur iteration (1.7), and M-iteration (1.8). Taking σ_n = 0.50, λ_n = 0.65, and ξ_n = 0.85 and the initial guess 10.0, the observations are provided in Table 1 and Figure 1, which show that the AA-Iterative Algorithm (1.10) is faster than the schemes mentioned above.

    Table 1.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Ishikawa Mann
    1 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
    2 4.329059 6.365741 6.568287 7.881944 8.715278 9.166667
    3 1.8740757 4.052266 4.314239 6.212505 7.595607 8.402778
    4 0.000000 2.579567 2.833716 4.896662 6.619782 7.702546
    5 0.000000 0.000000 1.861266 3.859522 5.769324 7.060667
    6 0.000000 0.000000 0.000000 3.042053 5.028126 6.472278
    7 0.000000 0.000000 0.000000 2.397730 4.382152 5.932922
    8 0.000000 0.000000 0.000000 1.889877 3.819167 5.438512
    9 0.000000 0.000000 0.000000 0.000000 3.328510 4.985302
    10 0.000000 0.000000 0.000000 0.000000 2.900889 4.569861

    Figure 1.  Behaviors of various iterative processes using Example 4.1.
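For reproducibility, the following Python sketch re-runs the experiment behind Table 1 for several of the schemes listed above (AA, M, S, Ishikawa, and Mann; the Thakur column is omitted from this sketch, and all helper names are ours). With σ_n = 0.50, λ_n = 0.65, ξ_n = 0.85, and x_1 = 10.0, the printed values agree with the corresponding columns of Table 1 up to rounding.

```python
def eta(x):
    """The mapping of Example 4.1."""
    return 0.0 if x < 2 else 5.0 * x / 6.0

def mann(x, s):
    return (1 - s) * x + s * eta(x)

def ishikawa(x, s, l):
    y = (1 - l) * x + l * eta(x)
    return (1 - s) * x + s * eta(y)

def s_scheme(x, s, l):                    # Agarwal et al. scheme (1.5)
    y = (1 - l) * x + l * eta(x)
    return (1 - s) * eta(x) + s * eta(y)

def m_scheme(x, s):                       # M-iteration (1.8)
    z = (1 - s) * x + s * eta(x)
    return eta(eta(z))

def aa(x, s, l, xi):                      # AA-iteration (1.10)
    h = (1 - xi) * x + xi * eta(x)
    z = eta((1 - l) * h + l * eta(h))
    y = eta((1 - s) * eta(h) + s * eta(z))
    return eta(y)

s_, l_, xi_ = 0.50, 0.65, 0.85            # sigma_n, lambda_n, xi_n for Table 1
vals = {"AA": 10.0, "M": 10.0, "S": 10.0, "Ishikawa": 10.0, "Mann": 10.0}
print(1, vals)
for n in range(2, 11):
    vals = {"AA":       aa(vals["AA"], s_, l_, xi_),
            "M":        m_scheme(vals["M"], s_),
            "S":        s_scheme(vals["S"], s_, l_),
            "Ishikawa": ishikawa(vals["Ishikawa"], s_, l_),
            "Mann":     mann(vals["Mann"], s_)}
    print(n, {k: f"{v:.6f}" for k, v in vals.items()})
```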

    Now, assuming σn=0.60,λn=0.43, and ξn=0.67 and by taking the initial guess 26.0, the observations are provided in Table 2 and Figure 2.

    Table 2.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Ishikawa Mann
    1 26.00000 26.00000 26.00000 26.00000 26.00000 26.00000
    2 11.55056 16.25000 17.27917 20.73500 22.46833 23.40000
    3 5.131364 10.15625 11.48345 16.53616 22.46833 21.06000
    4 2.279620 6.347656 7.631707 13.18759 16.77899 18.95400
    5 0.000000 3.967285 5.071905 10.51710 14.49985 17.05860
    6 0.000000 2.479553 3.370704 8.387389 12.53028 15.35274
    7 0.000000 0.000000 2.240113 6.688943 10.82825 13.81747
    8 0.000000 0.000000 0.000000 5.334432 9.357416 12.43572
    9 0.000000 0.000000 0.000000 4.254210 8.086367 11.19215
    10 0.000000 0.000000 0.000000 3.392732 6.987969 10.07293

    Figure 2.  Behaviors of various iterative processes using Example 4.1.

    Now, assuming σn=0.89,λn=0.74, and ξn=0.17 and by taking the initial guess 13.3, the observations are provided in Table 3 and Figure 3.

    Table 3.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Ishikawa Mann
    1 13.30000 13.30000 13.30000 13.30000 13.30000 13.30000
    2 5.080648 7.866088 8.222294 9.866753 10.11059 11.32717
    3 1.940826 4.652281 5.083167 7.319760 7.686011 9.646970
    4 0.000000 2.751523 3.142503 5.430245 5.842863 8.216003
    5 0.000000 0.000000 1.942751 4.028488 4.441712 6.997296
    6 0.000000 0.000000 0.000000 2.988579 3.376565 5.959364
    7 0.000000 0.000000 0.000000 2.217110 2.566846 5.075391
    8 0.000000 0.000000 0.000000 0.203231 1.951302 4.322542
    9 0.000000 0.000000 0.000000 0.000000 0.214643 3.681365
    10 0.000000 0.000000 0.000000 0.000000 0.023610 3.135296

    Figure 3.  Behaviors of various iterative processes using Example 4.1.

    Now, assuming σn=0.71,λn=0.1, and ξn=0.3 and by taking the initial guess 7.1, the observations are provided in Table 4 and Figure 4.

    Table 4.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Ishikawa Mann
    1 7.100000 7.100000 7.100000 7.100000 7.100000 7.100000
    2 3.402968 4.347106 4.872211 5.846653 6.189819 6.259833
    3 0.000000 2.661596 3.343442 4.814556 5.396319 5.519086
    4 0.000000 0.000000 2.294360 3.964653 4.704541 4.865994
    5 0.000000 0.000000 0.000000 3.264782 4.101445 4.290185
    6 0.000000 0.000000 0.000000 2.688457 3.575662 3.782513
    7 0.000000 0.000000 0.000000 2.213870 3.117282 3.334916
    8 0.000000 0.000000 0.000000 1.823060 2.717664 2.940284
    9 0.000000 0.000000 0.000000 0.000000 2.369275 2.592351
    10 0.000000 0.000000 0.000000 0.000000 2.065547 2.285589

    Figure 4.  Behaviors of various iterative processes using Example 4.1.

    Now, assuming σn=0.791,λn=0.545, and ξn=0.023 and by taking the initial guess 6.853, the observations are provided in Table 5 and Figure 5.

    Table 5.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Ishikawa Mann
    1 6.853000 6.853000 6.853000 6.853000 6.853000 6.853000
    2 3.193283 4.131629 4.417096 5.300515 5.539228 5.949546
    3 0.000000 2.490933 2.847035 4.099731 4.477315 5.165198
    4 0.000000 0.000000 1.835054 3.170974 3.618980 4.484252
    5 0.000000 0.000000 0.000000 2.452618 2.925194 3.893078
    6 0.000000 0.000000 0.000000 1.897000 2.364412 3.379841
    7 0.000000 0.000000 0.000000 0.000000 1.911136 2.934265
    8 0.000000 0.000000 0.000000 0.000000 0.399427 2.547431
    9 0.000000 0.000000 0.000000 0.000000 0.083480 2.211595
    10 0.000000 0.000000 0.000000 0.000000 0.017447 1.920033

    Figure 5.  Behaviors of various iterative processes using Example 4.1.

Example 4.2. Let E = [0,1]. Consider a mapping η: E×E → E×E defined by

η(x, y) = (x/2, y/4), for any (x, y) ∈ E×E.

We equip E×E with the taxicab norm. Then η is a generalized (α,β)-nonexpansive mapping.

For (x1, y1) and (x2, y2) in E×E, suppose that (1/2)‖(x1, y1) − η(x1, y1)‖ ≤ ‖(x1, y1) − (x2, y2)‖. For α = 1/2 and β = 1/4, we have

(1/2)‖(x1, y1) − η(x2, y2)‖ + (1/2)‖(x2, y2) − η(x1, y1)‖ + (1/4)‖(x1, y1) − η(x1, y1)‖ + (1/4)‖(x2, y2) − η(x2, y2)‖
= (1/2)‖(x1, y1) − (x2/2, y2/4)‖ + (1/2)‖(x2, y2) − (x1/2, y1/4)‖ + (1/4)‖(x1, y1) − (x1/2, y1/4)‖ + (1/4)‖(x2, y2) − (x2/2, y2/4)‖
= (1/2)‖((2x1 − x2)/2, (4y1 − y2)/4)‖ + (1/2)‖((2x2 − x1)/2, (4y2 − y1)/4)‖ + (1/4)‖(x1/2, 3y1/4)‖ + (1/4)‖(x2/2, 3y2/4)‖
≥ (1/4){‖((6x1 − 6x2)/2, (10y1 − 10y2)/4)‖ + ‖((x1 − x2)/2, (3y1 − 3y2)/4)‖}
= (1/4){|6x1 − 6x2|/2 + |10y1 − 10y2|/4 + |x1 − x2|/2 + |3y1 − 3y2|/4}
≥ (1/4){|5x1 − 5x2|/2 + |7y1 − 7y2|/4 + |x1 − x2|/2 + |3y1 − 3y2|/4}
= (1/4){‖((5x1 − 5x2)/2, (7y1 − 7y2)/4)‖ + ‖((x1 − x2)/2, (3y1 − 3y2)/4)‖}
≥ (1/4)‖((4x1 − 4x2)/2, (4y1 − 4y2)/4)‖
= (1/4){|4x1 − 4x2|/2 + |4y1 − 4y2|/4}
= |x1 − x2|/2 + |y1 − y2|/4
= ‖((x1 − x2)/2, (y1 − y2)/4)‖
= ‖η(x1, y1) − η(x2, y2)‖.

Now, we draw graphs and tables to show that the sequence {x_n} of the AA-Iterative Algorithm (1.10) moves faster to the FP of Example 4.2 than the Mann iteration (1.2), S iteration (1.5), Thakur iteration (1.7), and M-iteration (1.8). Taking σ_n = 0.34, λ_n = 0.68, and ξ_n = 0.19 and the initial guess (0.8, 0.8), the observations are provided in Table 6 and Figure 6, which show that the AA-Iterative Algorithm (1.10) is faster than the schemes mentioned above.

    Table 6.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Mann
    1 (0.8000, 0.8000) (0.8000, 0.8000) (0.8000, 0.8000) (0.8000, 0.8000) (0.8000, 0.8000)
    2 (0.0699, 0.0075) (0.1660, 0.0372) (0.1769, 0.0413) (0.3538, 0.1653) (0.6639, 0.5959)
    3 (0.0061, 0.0001) (0.0344, 0.0017) (0.0391, 0.0021) (0.1564, 0.0341) (0.5511, 0.4440)
    4 (0.0005, 0.0000) (0.0071, 0.0001) (0.0087, 0.0001) (0.0692, 0.0071) (0.4574, 0.3307)
    5 (0.0000, 0.0000) (0.0015, 0.0000) (0.0019, 0.0000) (0.0306, 0.0014) (0.3797, 0.2464)
    6 (0.0000, 0.0000) (0.0002, 0.0000) (0.0004, 0.0000) (0.0135, 0.0003) (0.3151, 0.1836)
    7 (0.0000, 0.0000) (0.0000, 0.0000) (0.0001, 0.0000) (0.0060, 0.0001) (0.2616, 0.1368)
    8 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0026, 0.0000) (0.2171, 0.1019)
    9 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0012, 0.0000) (0.1801, 0.0759)
    10 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0005, 0.0000) (0.1496, 0.0566)

    Figure 6.  Behaviors of various iterative processes using Example 4.2.
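An analogous check can be run for Example 4.2. The Python sketch below (helper names ours) iterates (1.10) for the two-dimensional mapping with the taxicab norm, using σ_n = 0.34, λ_n = 0.68, ξ_n = 0.19 and initial guess (0.8, 0.8); its iterates agree with the AA column of Table 6 up to rounding, and the printed ℓ1 distance to the fixed point (0,0) decreases accordingly.

```python
def eta2(p):
    """The mapping of Example 4.2 on [0,1] x [0,1]."""
    x, y = p
    return (x / 2.0, y / 4.0)

def combine(a, p, b, q):
    """Return a*p + b*q for points p, q in the plane."""
    return (a * p[0] + b * q[0], a * p[1] + b * q[1])

def taxicab(p, q):
    """Taxicab (l1) distance used in Example 4.2."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def aa_step(p, s, l, xi):
    """One step of the AA-iteration (1.10) for the map eta2."""
    h = combine(1 - xi, p, xi, eta2(p))
    z = eta2(combine(1 - l, h, l, eta2(h)))
    y = eta2(combine(1 - s, eta2(h), s, eta2(z)))
    return eta2(y)

s_, l_, xi_ = 0.34, 0.68, 0.19            # parameters used for Table 6
p = (0.8, 0.8)
for n in range(1, 11):
    print(n, (round(p[0], 4), round(p[1], 4)),
          "l1 distance to (0,0):", round(taxicab(p, (0.0, 0.0)), 6))
    p = aa_step(p, s_, l_, xi_)
```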

    Now, assuming σn=0.91,λn=0.55, and ξn=0.73 and by taking the initial guess (0.36,0.64), the observations are provided in Table 7 and Figure 7.

    Table 7.  Convergence comparison of different algorithms with the AA-Iterative algorithm.
    n AA M Thakur S Mann
    1 (0.3600, 0.6400) (0.3600, 0.6400) (0.3600, 0.6400) (0.3600, 0.6400) (0.3600, 0.6400)
    2 (0.0120, 0.0010) (0.0490, 0.0127) (0.0675, 0.0249) (0.1350, 0.0999) (0.1962, 0.2032)
    3 (0.0004, 0.0000) (0.0067, 0.0002) (0.01265, 0.0009) (0.0505, 0.0156) (0.1069, 0.0645)
    4 (0.0000, 0.0000) (0.0009, 0.0000) (0.0024, 0.0000) (0.0190, 0.0024) (0.0583, 0.0205)
    5 (0.0000, 0.0000) (0.0001, 0.0000) (0.0004, 0.0000) (0.0071, 0.0003) (0.0318, 0.0065)
    6 (0.0000, 0.0000) (0.0000, 0.0000) (0.0001, 0.0000) (0.0027, 0.0001) (0.0173, 0.0021)
    7 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0010, 0.0000) (0.0094, 0.0007)
    8 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0004, 0.0000) (0.0051, 0.0002)
    9 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0001, 0.0000) (0.0028, 0.0000)
    10 (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0000, 0.0000) (0.0015, 0.0000)

    Figure 7.  Behaviors of various iterative processes using Example 4.2.

In this part, we apply our findings to fractional differential equations, 2D Volterra integral equations, and convex minimization problems.

Fractional differential equations (FDEs), unlike traditional integer-order differential equations, involve derivatives of non-integer order, offering a more accurate description of processes exhibiting memory and long-range dependencies. FDEs provide a powerful mathematical framework for modeling complex phenomena in various scientific disciplines and are becoming an active field of interest. In recent years, many authors have shown that FDEs are an appropriate tool for nonlinear problems of mathematical modeling and engineering (see, e.g., [33] and others). On the other hand, approximate and exact solutions of FDEs are comparatively more difficult to obtain than those of ordinary differential equations. The purpose of this work is to treat FDEs using fixed points of the class of generalized nonexpansive mappings via our AA-iterative algorithm.

To achieve our main objective, we consider a general class of FDEs as follows:

D^ζ h(υ) + φ(υ, h(υ)) = 0, 0 ≤ υ ≤ 1; h(0) = h(1) = 0.  (5.1)

Here, 1 < ζ ≤ 2, 0 ≤ υ ≤ 1, D^ζ denotes the well-known Caputo fractional derivative of order ζ, and φ is an appropriate function on [0,1]×ℝ.

Assume that S is the set of solutions of Problem (5.1). To establish the main result of our paper, we need to express the solution as a fixed point of a suitable mapping. To do this, we need the Green's function associated with (5.1):

G(υ, ν) = (1/Γ(ζ))[υ(1 − ν)^(ζ−1) − (υ − ν)^(ζ−1)], for 0 ≤ ν ≤ υ ≤ 1, and G(υ, ν) = υ(1 − ν)^(ζ−1)/Γ(ζ), for 0 ≤ υ ≤ ν ≤ 1.

We can now obtain the major result of this section: the following theorem proves the convergence of the AA-iteration approach for the given problem.

Theorem 5.1. Assume that the Banach space B is C[0,1], and let η: C[0,1] → C[0,1] be the mapping defined by

η(h(υ)) = ∫₀¹ G(υ, ν)φ(ν, h(ν))dν, for each h ∈ C[0,1].

    If

|φ(ν, h(ν)) − φ(ν, g(ν))| ≤ α|h(ν) − η(g(ν))| + α|g(ν) − η(h(ν))| + β|h(ν) − η(h(ν))| + β|g(ν) − η(g(ν))| + (1 − 2α − 2β)|h(ν) − g(ν)|,

where α, β ∈ [0,1) with α + β < 1, then the AA-Iterative Algorithm (1.10) converges to a point of the solution set S of (5.1), provided that lim inf_{n→∞} d(x_n, S) = 0.

Proof. A function h ∈ C[0,1] solves (5.1) if and only if it solves the integral equation

h(υ) = ∫₀¹ G(υ, ν)φ(ν, h(ν))dν.

The aim is to prove that the selfmap η defined above is a generalized (α,β)-nonexpansive mapping. Selecting any h, g ∈ C[0,1] and 0 ≤ υ ≤ 1, we see that

|η(h(υ)) − η(g(υ))| = |∫₀¹ G(υ, ν)φ(ν, h(ν))dν − ∫₀¹ G(υ, ν)φ(ν, g(ν))dν|
= |∫₀¹ G(υ, ν)[φ(ν, h(ν)) − φ(ν, g(ν))]dν|
≤ ∫₀¹ G(υ, ν)|φ(ν, h(ν)) − φ(ν, g(ν))|dν
≤ ∫₀¹ G(υ, ν)(α|h(ν) − η(g(ν))| + α|g(ν) − η(h(ν))| + β|h(ν) − η(h(ν))| + β|g(ν) − η(g(ν))| + (1 − 2α − 2β)|h(ν) − g(ν)|)dν
≤ (α‖h − η(g)‖ + α‖g − η(h)‖ + β‖h − η(h)‖ + β‖g − η(g)‖ + (1 − 2α − 2β)‖h − g‖)(∫₀¹ G(υ, ν)dν)
≤ α‖h − η(g)‖ + α‖g − η(h)‖ + β‖h − η(h)‖ + β‖g − η(g)‖ + (1 − 2α − 2β)‖h − g‖.

Taking the supremum over υ ∈ [0,1] yields ‖η(h) − η(g)‖ ≤ α‖h − η(g)‖ + α‖g − η(h)‖ + β‖h − η(h)‖ + β‖g − η(g)‖ + (1 − 2α − 2β)‖h − g‖.

Hence, η is a generalized (α,β)-nonexpansive mapping. By Theorem 3.4, the sequence obtained by the AA-iterative algorithm (1.10) converges to the FP of η, that is, to a solution of the given equation.
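As a purely illustrative numerical companion to Theorem 5.1 (not taken from the paper), the Python sketch below discretizes the integral operator η built from the Green's function above and runs the AA-iteration (1.10) on the grid values. The fractional order ζ = 1.5, the nonlinearity φ, the grid size, and the constant parameters are all assumptions chosen only for this demonstration.

```python
import numpy as np
from math import gamma

# Assumed problem data (illustration only): Caputo order zeta = 1.5 and a
# Lipschitz nonlinearity phi(nu, h) = (nu + cos(h)) / 8.
zeta = 1.5
def phi(nu, h):
    return (nu + np.cos(h)) / 8.0

def greens(u, v, z=zeta):
    """Green's function of problem (5.1)."""
    if v <= u:
        return (u * (1 - v) ** (z - 1) - (u - v) ** (z - 1)) / gamma(z)
    return u * (1 - v) ** (z - 1) / gamma(z)

N = 201
nodes = np.linspace(0.0, 1.0, N)
G = np.array([[greens(u, v) for v in nodes] for u in nodes])
w = np.full(N, 1.0 / (N - 1))                 # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

def eta(h):
    """Discretized operator: (eta h)(u) = int_0^1 G(u, v) phi(v, h(v)) dv."""
    return (G * phi(nodes, h)[None, :]) @ w

# AA-iteration (1.10) with constant parameters (assumed values).
sigma, lam, xi = 0.5, 0.5, 0.5
h = np.zeros(N)                               # initial guess h_1 = 0
for _ in range(30):
    hn = (1 - xi) * h + xi * eta(h)
    z = eta((1 - lam) * hn + lam * eta(hn))
    y = eta((1 - sigma) * eta(hn) + sigma * eta(z))
    h = eta(y)

print("sup-norm residual ||h - eta(h)||:", np.max(np.abs(h - eta(h))))
```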

Now, we solve 2D Volterra integral equations in the setting of generalized (α,β)-nonexpansive mappings. We use the AA-Iterative Algorithm, instead of other iterative algorithms, to approximate the solution of the following 2D Volterra integral equation:

h(r, ξ) = κ(r, ξ) + ∫₀^r ∫₀^ξ Λ₁(λ, v, h(λ, v))dλ dv + δ∫₀^r Λ₂(ξ, v, h(r, v))dv + γ∫₀^ξ Λ₃(r, λ, h(ξ, λ))dλ  (5.2)

for all r, ξ, λ, v ∈ [0,1], where h ∈ M×M, κ: [0,1]×[0,1] → ℝ², Λ_i (i = 1, 2, 3): [0,1]×[0,1]×ℝ² → ℝ², δ, γ ≥ 0, and M = C[0,1] is a Banach space with the maximum norm

‖τ − u‖ = max_{ω∈[0,1]}|τ(ω) − u(ω)|, for τ, u ∈ C[0,1].

    We are now in a position to present a new application of the algorithm we have studied. This result is obtained under some mild conditions, which are as follows:

Theorem 5.2. Consider Ω as a closed convex subset of M and let η: Ω → Ω be the map defined by

η(h(r, ξ)) = κ(r, ξ) + ∫₀^r ∫₀^ξ Λ₁(λ, v, h(λ, v))dλ dv + δ∫₀^r Λ₂(ξ, v, h(r, v))dv + γ∫₀^ξ Λ₃(r, λ, h(ξ, λ))dλ.

Assume that the following assertions are true:

(A1) the function κ: [0,1]×[0,1] → ℝ² is continuous;

(A2) the functions Λ_i (i = 1, 2, 3): [0,1]×[0,1]×ℝ² → ℝ² are continuous and there exist constants ℓ₁, ℓ₂, ℓ₃ > 0 such that, for all τ₁, τ₂ ∈ ℝ²,

|Λ₁(λ, v, τ₁(λ, v)) − Λ₁(λ, v, τ₂(λ, v))| ≤ ℓ₁|τ₁ − τ₂|,
|Λ₂(λ, v, τ₁(λ, v)) − Λ₂(λ, v, τ₂(λ, v))| ≤ ℓ₂|τ₁ − τ₂|,
|Λ₃(λ, v, τ₁(λ, v)) − Λ₃(λ, v, τ₂(λ, v))| ≤ ℓ₃|τ₁ − τ₂|;

(A3) for δ, γ ≥ 0, ℓ₁ + δℓ₂ + γℓ₃ ≤ ϱ, where ϱ ∈ (0,1).

Then the AA-Iterative Algorithm (1.10) converges to a point of the solution set S of (5.2), provided that lim inf_{n→∞} d(x_n, S) = 0.

Proof. Let h, g ∈ M×M. Then

‖h − η(g)‖ = max_{ω∈[0,1]}|h(r, ξ)(ω) − η(g(r, ξ))(ω)|
= max_{ω∈[0,1]}|h(r, ξ)(ω) − κ(r, ξ)(ω) − ∫₀^r ∫₀^ξ Λ₁(λ, v, g(λ, v))dλ dv − δ∫₀^r Λ₂(ξ, v, g(r, v))dv − γ∫₀^ξ Λ₃(r, λ, g(ξ, λ))dλ|
≤ max_{ω∈[0,1]}{|h(r, ξ)(ω) − κ(r, ξ)(ω) − ∫₀^r ∫₀^ξ Λ₁(λ, v, h(λ, v))dλ dv − δ∫₀^r Λ₂(ξ, v, h(r, v))dv − γ∫₀^ξ Λ₃(r, λ, h(ξ, λ))dλ|
+ |∫₀^r ∫₀^ξ Λ₁(λ, v, h(λ, v))dλ dv − ∫₀^r ∫₀^ξ Λ₁(λ, v, g(λ, v))dλ dv|
+ δ|∫₀^r Λ₂(ξ, v, h(r, v))dv − ∫₀^r Λ₂(ξ, v, g(r, v))dv|
+ γ|∫₀^ξ Λ₃(r, λ, h(ξ, λ))dλ − ∫₀^ξ Λ₃(r, λ, g(ξ, λ))dλ|}
≤ max_{ω∈[0,1]}|h(r, ξ)(ω) − η(h(r, ξ))(ω)| + ℓ₁ max_{ω∈[0,1]}∫₀^r ∫₀^ξ |h(λ, v) − g(λ, v)|dλ dv + δℓ₂ max_{ω∈[0,1]}∫₀^r |h(λ, v) − g(λ, v)|dv + γℓ₃ max_{ω∈[0,1]}∫₀^ξ |h(λ, v) − g(λ, v)|dλ,

    which implies that

‖h − η(g)‖ ≤ ‖h − η(h)‖ + (ℓ₁ + δℓ₂ + γℓ₃) max_{ω∈[0,1]}|h(λ, v) − g(λ, v)| ≤ ‖h − η(h)‖ + ϱ‖h − g‖ ≤ ‖h − η(h)‖ + ‖h − g‖.

Hence, η satisfies ‖x − η(y)‖ ≤ ((3 + α + β)/(1 − α − β))‖x − η(x)‖ + ‖x − y‖ with the constant equal to 1, which is the inequality of Lemma 2.9 for generalized (α,β)-nonexpansive mappings. As all the conditions of Theorem 3.4 are satisfied, the AA-iteration converges to the solution.
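A rough numerical illustration of Theorem 5.2 can be set up in the same spirit. In the Python sketch below, h is taken scalar-valued for simplicity, and κ, Λ₁, Λ₂, Λ₃, δ, γ, the grid size, and the parameter values are all assumptions; the kernels are chosen so that ℓ₁ + δℓ₂ + γℓ₃ < 1, in line with (A3).

```python
import numpy as np

# Illustrative data for Eq. (5.2): scalar-valued h and the simple kernels
# below are assumptions made only for this sketch.
N = 21
t = np.linspace(0.0, 1.0, N)                  # shared grid for r, xi in [0,1]
dx = t[1] - t[0]
delta, gamma_ = 0.5, 0.5
kappa = np.outer(t, t)                        # kappa(r, xi) = r * xi  (assumed)
L1 = lambda h: h / 4.0                        # Lambda_1(., ., h) = h / 4
L2 = lambda h: h / 5.0                        # Lambda_2(., ., h) = h / 5
L3 = lambda h: h / 6.0                        # Lambda_3(., ., h) = h / 6

def trap1(y):
    """Trapezoidal rule for a 1D sample array on the uniform grid."""
    return 0.0 if y.size < 2 else float(np.sum(y[1:] + y[:-1]) * dx / 2.0)

def trap2(a):
    """Trapezoidal rule over both axes of a 2D sample array."""
    if a.shape[0] < 2 or a.shape[1] < 2:
        return 0.0
    rows = np.sum(a[:, 1:] + a[:, :-1], axis=1) * dx / 2.0
    return float(np.sum(rows[1:] + rows[:-1]) * dx / 2.0)

def eta(h):
    """Discretized Volterra operator of Eq. (5.2) on the grid (r_i, xi_j)."""
    out = np.empty_like(h)
    for i in range(N):                        # r  = t[i]
        for j in range(N):                    # xi = t[j]
            d2 = trap2(L1(h[:i + 1, :j + 1]))        # double integral term
            d1 = trap1(L2(h[i, :i + 1]))             # int_0^r  ... h(r, v) dv
            d0 = trap1(L3(h[j, :j + 1]))             # int_0^xi ... h(xi, lam) dlam
            out[i, j] = kappa[i, j] + d2 + delta * d1 + gamma_ * d0
    return out

# AA-iteration (1.10) with constant parameters (assumed values).
sigma, lam, xi_ = 0.6, 0.6, 0.6
h = np.zeros((N, N))
for _ in range(12):
    hn = (1 - xi_) * h + xi_ * eta(h)
    z = eta((1 - lam) * hn + lam * eta(hn))
    y = eta((1 - sigma) * eta(hn) + sigma * eta(z))
    h = eta(y)

print("max-norm residual:", np.max(np.abs(h - eta(h))))
```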

In this section, we are concerned with finding a solution of the convex minimization problem using the AA-Iterative algorithm (1.10). Assume g: C → ℝ is a convex mapping, where C is a closed convex subset of a real Hilbert space H. Consider the convex minimization problem

min_{x∈C} g(x).  (5.3)

Assume that P_C: H → C is the projection map and that g is Fréchet differentiable, with ∇g denoting the gradient of g. It is obvious that ẏ ∈ C solves (5.3) if it solves the variational inequality

⟨∇g(ẏ), x − ẏ⟩ ≥ 0, for all x ∈ C,  (5.4)

that is, ẏ ∈ Ω(C, ∇g). Here, Ω(C, A) = {y ∈ C : ⟨Ay, x − y⟩ ≥ 0 for all x ∈ C} and A: H → H is a nonlinear operator. Moreover, ẏ solves (5.3) if ẏ = P_C(ẏ − γ∇g(ẏ)), γ > 0. To solve (5.3), the gradient projection algorithm is used, defined by

x_{n+1} = P_C(x_n − γ∇g(x_n)),

where x₁ ∈ C and γ is a suitable step size.

Lemma 5.3. Let η be a generalized (α,β)-nonexpansive mapping. If ẏ ∈ F(η) ∩ Ω(C, ∇g), then η = P_C(I − γ∇g) for the identity mapping I.

Proof. Since ẏ ∈ F(η) ∩ Ω(C, ∇g), we have ẏ ∈ F(η) and ẏ ∈ Ω(C, ∇g). This implies that

ẏ ∈ F(η) ⟹ η(ẏ) = ẏ,  (5.5)

    and

ẏ ∈ Ω(C, ∇g) ⟹ ẏ = P_C(ẏ − γ∇g(ẏ)) = P_C(I − γ∇g)ẏ for the identity mapping I.  (5.6)

It follows from (5.5) and (5.6) that

η(ẏ) = ẏ = P_C(I − γ∇g)ẏ for the identity mapping I.

Hence, η = P_C(I − γ∇g) for the identity mapping I.

For an arbitrary x₁ in C and three sequences of real numbers {σ_n}, {λ_n}, and {ξ_n} in (0,1), the sequence {x_n} obtained by the following algorithm converges to the solution of the convex minimization problem (5.3):

x_{n+1} = P_C(I − γ∇g)(y_n),
y_n = P_C(I − γ∇g)((1 − σ_n)P_C(I − γ∇g)(h_n) + σ_n P_C(I − γ∇g)(z_n)),
z_n = P_C(I − γ∇g)((1 − λ_n)h_n + λ_n P_C(I − γ∇g)(h_n)),
h_n = (1 − ξ_n)x_n + ξ_n P_C(I − γ∇g)(x_n), n ∈ ℕ.  (5.7)
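A minimal numerical sketch of scheme (5.7) is given below for an assumed problem: a small least-squares objective over the box C = [0,1]^d, so that P_C is a coordinate-wise clip. The data, the step size γ = 1/L, and the constant parameter values are ours; this is an illustration of (5.7), not a reference implementation.

```python
import numpy as np

# Illustrative convex problem (assumed): minimize g(x) = 0.5*||A x - b||^2
# over the box C = [0, 1]^d.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)

grad = lambda x: A.T @ (A @ x - b)            # gradient of g
P_C = lambda x: np.clip(x, 0.0, 1.0)          # projection onto C = [0,1]^d
L = np.linalg.norm(A.T @ A, 2)                # Lipschitz constant of grad g
gamma = 1.0 / L                               # assumed step size
T = lambda x: P_C(x - gamma * grad(x))        # T = P_C(I - gamma * grad g)

sigma, lam, xi = 0.7, 0.7, 0.7                # assumed constant parameters
x = np.zeros(d)
for n in range(200):
    h = (1 - xi) * x + xi * T(x)              # h_n
    z = T((1 - lam) * h + lam * T(h))         # z_n
    y = T((1 - sigma) * T(h) + sigma * T(z))  # y_n
    x_next = T(y)                             # x_{n+1}
    if np.linalg.norm(x_next - x) < 1e-12:
        x = x_next
        break
    x = x_next

print("approximate minimizer:", np.round(x, 6))
print("objective value:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```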

    Theorem 5.4. Suppose that the convex minimization problem (5.3) has a solution, then the sequence obtained by algorithm (5.7) converges weakly to the solution of (5.3).

Proof. By Lemma 5.3, η = P_C(I − γ∇g) for the identity mapping I; the conclusion then follows from Theorem 3.3.

Theorem 5.5. Suppose that the convex minimization problem (5.3) has a solution. Then the sequence obtained by algorithm (5.7) converges strongly to the solution of (5.3) if lim inf_{n→∞} d(x_n, Ω) = 0, where d(x_n, Ω) = inf{‖x_n − p‖ : p ∈ Ω}.

    Proof. The proof follows from Theorem 3.4.

In this study, we used the AA-iterative algorithm to approximate FPs of generalized (α,β)-nonexpansive mappings. We proved weak and strong convergence results for generalized (α,β)-nonexpansive mappings in uniformly convex Banach spaces. We showed that the AA-iterative algorithm for generalized (α,β)-nonexpansive mappings converges more quickly than other existing algorithms, as demonstrated by numerical examples. We also proved, in the setting of generalized (α,β)-nonexpansive mappings, that the AA-iterative scheme can be used to solve fractional differential equations, a 2D Volterra integral equation, and a convex minimization problem.

    In the future, we will utilize the AA-iterative algorithm and the results presented in this paper to find optimal solutions for machine learning problems. We also aim to extend our study to the setting of multi-valued mappings. Since we used Hilbert and Banach spaces, which are linear spaces, we will also try to extend the study to the setting of nonlinear CAT(0) and hyperbolic spaces.

    Hamza Bashir: Writing–original draft; Junaid Ahmad: Investigation; Walid Emam: Software; Muhammad Arshad: Supervision; Zhenhua Ma: Writing–review & editing. All authors agree to publish this version.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.


The study was funded by Researchers Supporting Project number (RSPD2024R749), King Saud University, Riyadh, Saudi Arabia. The work in this paper is also supported by HEC, Pakistan under the NRPU research project No 15548 entitled "Fixed Point Based Machine Learning Optimization Algorithms and Real-World Applications".

The authors declare that they have no competing interests.



    [1] L. M. Guo, Y. Wang, H. M. Liu, C. Li, J. B. Zhao, H. L. Chu, On iterative positive solutions for a class of singular infinite-point p-Laplacian fractional differential equations with singular source terms, J. Appl. Anal. Comput., 13 (2023), 2827–2842. https://doi.org/10.11948/20230008 doi: 10.11948/20230008
    [2] S. Banach, Sur les operations dans les ensembles abstraits et leur application aux equations integrales, Fund. Math., 3 (1922), 133–181. https://doi.org/10.4064/fm-3-1-133-181 doi: 10.4064/fm-3-1-133-181
    [3] Y. Yu, T. C. Yin, Strong convergence theorems for a nonmonotone equilibrium problem and a quasi-variational inclusion problem, J. Nonlinear Convex Anal., 25 (2024), 503–512.
    [4] W. A. Kirk, A fixed point theorem for mappings which do not increase distance, Amer. Math. Mon., 72 (1965), 1004–1006. https://doi.org/10.2307/2313345 doi: 10.2307/2313345
    [5] F. E. Browder, Nonexpansive nonlinear operators in a Banach space, Proc. Natl. Acad. Sci., 54 (1965), 1041–1044. https://doi.org/10.1073/pnas.54.4.1041 doi: 10.1073/pnas.54.4.1041
[6] D. Göhde, Zum Prinzip der kontraktiven Abbildung, Math. Nachr., 30 (1965), 251–258. https://doi.org/10.1002/mana.19650300312 doi: 10.1002/mana.19650300312
    [7] T. Suzuki, Fixed point theorems and convergence theorems for some generalized non-expansive mapping, J. Math. Anal. Appl., 340 (2008), 1088–1095. https://doi.org/10.1016/j.jmaa.2007.09.023 doi: 10.1016/j.jmaa.2007.09.023
    [8] B. S. Thakur, D. Thakur, M. Postolache, A new iterative scheme for numerical reckoning fixed points of Suzuki's generalized nonexpansive mappings, Appl. Math. Comput., 275 (2016), 147–155. https://doi.org/10.1016/j.amc.2015.11.065 doi: 10.1016/j.amc.2015.11.065
[9] K. Aoyama, F. Kohsaka, Fixed point theorem for α-nonexpansive mappings in Banach spaces, Nonlinear Anal. Theor., 74 (2011), 4387–4391. https://doi.org/10.1016/j.na.2011.03.057 doi: 10.1016/j.na.2011.03.057
    [10] H. Piri, B. Daraby, S. Rahrovi, M. Ghasemi, Approximating fixed points of generalized α-nonexpansive mappings in Banach spaces by new faster iteration process, Numer. Algor., 81 (2019), 1129–1148. https://doi.org/10.1007/s11075-018-0588-x doi: 10.1007/s11075-018-0588-x
    [11] R. Pant, R. Shukla, Approximating fixed points of generalized α-nonexpansive mappings in Banach spaces, Numer. Funct. Anal. Opt., 38 (2017), 248–266. https://doi.org/10.1080/01630563.2016.1276075 doi: 10.1080/01630563.2016.1276075
    [12] R. Shukla, R. Pant, M. De la Sen, Generalized α-nonexpansive mappings in Banach spaces, Fixed Point Theory and Appl., 2017, (2016), 4. https://doi.org/10.1186/s13663-017-0597-9 doi: 10.1186/s13663-017-0597-9
    [13] R. Pant, R. Pandey, Existence and convergence results for a class of nonexpansive type mappings in hyperbolic spaces, Appl. Gen. Topol., 20 (2019), 281–295. https://doi.org/10.4995/agt.2019.11057 doi: 10.4995/agt.2019.11057
    [14] K. Ullah, J. Ahmad, M. De La Sen, On generalized nonexpansive maps in Banach spaces, Computation, 8 (2020), 61. https://doi.org/10.3390/computation8030061 doi: 10.3390/computation8030061
    [15] F. Ahmad, K. Ullah, J. Ahmad, H. Bilal, On an efficient iterative scheme for a class of generalized nonexpansive operators in Banach spaces, Asian-Eur. J. Math., 16 (2023), 2350172. https://doi.org/10.1142/S1793557123501723 doi: 10.1142/S1793557123501723
    [16] M. Yang, J. Sun, Z. Fu, Z. Wang, The singular convergence of a Chemotaxis-Fluid system modeling coral fertilization, Acta. Math. Sci., 43, (2023), 492–504. https://doi.org/10.1007/s10473-023-0202-8 doi: 10.1007/s10473-023-0202-8
    [17] S. Shi, Z. Zhai, L. Zhang, Characterizations of the viscosity solution of a nonlocal and nonlinear equation induced by the fractional p-Laplace and the fractional p-convexity, Adv. Calc. Var., 17 (2024), 195–207. https://doi.org/10.1515/acv-2021-0110 doi: 10.1515/acv-2021-0110
    [18] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3 doi: 10.1090/S0002-9939-1953-0054846-3
    [19] S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc., 44 (1974), 147–150. https://doi.org/10.1090/S0002-9939-1974-0336469-5 doi: 10.1090/S0002-9939-1974-0336469-5
    [20] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. https://doi.org/10.1006/jmaa.2000.7042 doi: 10.1006/jmaa.2000.7042
[21] R. P. Agarwal, D. O'Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal., 8 (2007), 61–79.
    [22] M. Abbas, T. Nazir, A new faster iteration process applied to constrained minimization and feasibility problems, Mat. Vestn., 66 (2014), 223–234.
    [23] N. Hussain, K. Ullah, M. Arshad, Fixed point approximation of Suzuki generalized nonexpansive mappings via new faster iteration process, J. Nonlinear Convex Anal., 19 (2018), 1383–1393.
    [24] K. Ullah, J. Ahmad, F. M. Khan, Numerical reckoning fixed points via new faster iteration process, Appl. Gen. Topol., 23 (2022), 213–223. https://doi.org/10.4995/agt.2022.11902 doi: 10.4995/agt.2022.11902
    [25] M. Abbas, M. W. Asghar, M. De la Sen, Approximation of the solution of delay fractional differential equation using AA-Iterative Scheme, Mathematics, 10 (2022), 273. https://doi.org/10.3390/math10020273 doi: 10.3390/math10020273
    [26] J. Ali, F. Ali, P. Kumar, Approximation of fixed points for Suzuki's generalized nonexpansive mappings, Mathematics, 7 (2019), 522. https://doi.org/10.3390/math7060522 doi: 10.3390/math7060522
    [27] Z. Ma, H. Bashir, A. A. Alshejari, J. Ahmad, M. Arshad, An algorithm for nonlinear problems based on fixed point methodologies with applications, Int. J. Anal. Appl., 22 (2024), 77. https://doi.org/10.28924/2291-8639-22-2024-77 doi: 10.28924/2291-8639-22-2024-77
    [28] J. A. Clarkson, Uniformly convex spaces, Trans. Amer. Math. Soc., 40 (1936), 396–414. https://doi.org/10.1090/S0002-9947-1936-1501880-4 doi: 10.1090/S0002-9947-1936-1501880-4
    [29] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597. https://doi.org/10.1090/S0002-9904-1967-11761-0 doi: 10.1090/S0002-9904-1967-11761-0
    [30] W. Takahashi, Nonlinear functional analysis, Yokohoma: Yokohoma Publishers, 2000.
    [31] H. F. Senter, W. G. Dotson, Approximating fixed points of nonexpansive mappings, Proc. Amer. Math. Soc., 44 (1974), 375–380. https://doi.org/10.1090/S0002-9939-1974-0346608-8 doi: 10.1090/S0002-9939-1974-0346608-8
    [32] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, Bull. Aust. Math. Soc., 43 (1991), 153–159. https://doi.org/10.1017/S0004972700028884 doi: 10.1017/S0004972700028884
[33] E. Karapınar, T. Abdeljawad, F. Jarad, Applying new fixed point theorems on fractional and ordinary differential equations, Adv. Differ. Equ., 2019 (2019), 421. https://doi.org/10.1186/s13662-019-2354-3 doi: 10.1186/s13662-019-2354-3
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
