Research article

Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems

  • This paper presents and examines a newly improved iterative technique for solving the equilibrium problem of a pseudomonotone operator and the fixed point problem of a nonexpansive mapping within a real Hilbert space framework. The technique relies on two modified mildly inertial steps and the subgradient extragradient approach, and it can be viewed as an advancement over the previously known inertial subgradient extragradient approach. Under common assumptions, the weak convergence of the algorithm is established. Finally, to confirm the efficiency and benefit of the proposed algorithm, we present a few numerical experiments.

    Citation: Francis Akutsah, Akindele Adebayo Mebawondu, Austine Efut Ofem, Reny George, Hossam A. Nabwey, Ojen Kumar Narain. Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems[J]. AIMS Mathematics, 2024, 9(7): 17276-17290. doi: 10.3934/math.2024839




    In [22], Muu and Oettli considered the equilibrium problem (EP) as a generalization of several problems in nonlinear analysis, including convex minimization, variational inequalities, Nash equilibrium problems, fixed point problems, and saddle point problems [8,22]. The EP has applications in many mathematical models from various fields of applied science and engineering, such as economics, physics, image restoration, finance, ecology, network elasticity, transportation, and optimization. Let $C$ be a nonempty, convex, and closed subset of a real Hilbert space $H$ and $g:C\times C\to\mathbb{R}$ be a bifunction such that $g(x,x)=0$ for all $x\in C$. The EP, as defined by Muu and Oettli [22], is formulated as follows: Find $x\in C$ such that

    \[ g(x,w)\ge 0,\quad \forall w\in C. \tag{1.1} \]

    The solution set of the EP (1.1) is denoted by $EP(g)$. Because many real-life problems are modeled with a pseudomonotone bifunction, several authors have considered solving EP (1.1) with a pseudomonotone bifunction $g$; see, for example, [15,16,17] and the references therein. On the other hand, the theory of fixed points is an important concept in nonlinear analysis, and many problems in applied sciences and engineering can be formulated as fixed point problems. A point $x\in C$ is called a fixed point of a self mapping $T:C\to C$ if $Tx=x$. In this article, the set of all fixed points of $T$ is denoted by $F(T)=\{x\in C: Tx=x\}$. Finding common solutions of EPs and fixed point problems (FPP) is important because the constraints of some mathematical models can be expressed as an EP and an FPP. Such models arise in several practical problems, including network resource allocation, signal processing, and image restoration [17].

    In [32], Tada and Takahashi introduced the following hybrid method for approximating a common solution of the monotone EP and the FPP of nonexpansive mappings in real Hilbert spaces:

    \[
    \begin{cases}
    x_0\in C_0=Q_0=C,\\
    z_m\in C\ \text{such that}\ g(z_m,w)+\frac{1}{\lambda_m}\langle w-z_m,\,z_m-x_m\rangle\ge 0,\ \forall w\in C,\\
    w_m=\alpha_m x_m+(1-\alpha_m)Tz_m,\\
    C_m=\{u\in C:\|w_m-u\|\le\|x_m-u\|\},\\
    Q_m=\{u\in C:\langle x_0-x_m,\,u-x_m\rangle\le 0\},\\
    x_{m+1}=P_{C_m\cap Q_m}x_0.
    \end{cases}\tag{1.2}
    \]

    It is worth noting that method (1.2) requires solving a strongly monotone regularized EP for the point $z_m$: Find $z_m\in C$ such that

    \[ g(z_m,w)+\frac{1}{\lambda_m}\langle w-z_m,\,z_m-x_m\rangle\ge 0,\quad \forall w\in C. \tag{1.3} \]

    It is difficult to approximate the solutions of EP (1.1) when the assumption on the bifunction is weakened from monotone to pseudomonotone [17]. In 2008, Quoc et al. [28] considered a new method known as the extragradient method (EM). Their results extend and generalize the results of Korpelevich [18] and Antipin [2] to the case of an EP involving a pseudomonotone bifunction. In 2013, Anh [1] considered an iterative method for finding a common solution of an EP with a pseudomonotone bifunction and the FPP of nonexpansive mappings. The major drawback of the methods in [28] and [1] is that one must solve two strongly convex optimization problems over the feasible set $C$ in each iteration. In order to improve the extragradient method, Hieu [15] followed the results of Censor et al. [9,10] and introduced a Halpern-type subgradient extragradient method, in which the second minimization problem is taken over a half-space. It is noticed that the Halpern-type method depends on the Lipschitz-type constants of the bifunction, and these constants are difficult to determine. Recently, Yang and Liu [33] introduced a modified Halpern-type method which does not require prior knowledge of the Lipschitz-type constants; their method has a non-increasing step-size. In real Hilbert spaces, the authors proved a strong convergence theorem for approximating a common solution of a pseudomonotone EP and the FPP of nonexpansive mappings.

    In order to speed up the process of solving the smooth convex minimization problem, Polyak originally presented and examined the idea of inertial extrapolation in [27] in 1964. Since then, researchers have employed this technique to accelerate the convergence rate of many iterative processes, and the inertial extrapolation approach has been refined, extended, and generalized by numerous authors; see [3,4,5,6,7] and the references therein. Relaxation techniques have also proven effective for improving the rate of convergence in this field of study; see [23,24,25]. It is well known that when inertial and relaxation techniques are combined, the resulting scheme performs better and converges faster than when either approach is used alone; see [19,20,26]. Very recently, Ceng et al. [11] introduced and studied a mildly inertial algorithm with a linesearch process for finding a common solution of the variational inequality problem and the common fixed point problem of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings by using a subgradient approach. For more details on the mildly inertial concept, see [11,31] and the references therein.

    This naturally leads to the following question: Is it feasible to introduce a new step-size rule in conjunction with a new double inertial extrapolation to solve a pseudomonotone equilibrium problem and a fixed point problem in the framework of a Hilbert space?

    Motivated by the above results, in this article we propose a new self-adaptive inertial subgradient extragradient method which does not rely on prior knowledge of the Lipschitz-type constants of the bifunction. The suggested method is used to approximate a common solution of a pseudomonotone EP and the FPP of a nonexpansive mapping in a real Hilbert space. Our method includes two modified mildly inertial terms which improve its speed of convergence. Under some mild conditions on the control parameters, we prove the weak convergence of the suggested method. Furthermore, we present some numerical examples to demonstrate the computational advantage of our method over some well-known methods in the literature.

    The remaining part of this article is arranged as follows: In Section 2, we give some useful results and definitions for this study. In Section 3, we present the suggested method and the conditions necessary for obtaining our main result. In Section 4, we establish the weak convergence result of the suggested method. In Section 5, we present numerical experiments to show the efficiency of our method, and in Section 6, we give the conclusion of our study.

    In this section, we begin by recalling some known and useful results which are needed in the sequel. Let $H$ be a real Hilbert space. We denote strong and weak convergence by "$\to$" and "$\rightharpoonup$", respectively. For any $x,y\in H$ and $\alpha\in[0,1]$, it is well known that

    \[ \|x-y\|^2=\|x\|^2-2\langle x,y\rangle+\|y\|^2. \tag{2.1} \]
    \[ \|x+y\|^2=\|x\|^2+2\langle x,y\rangle+\|y\|^2. \tag{2.2} \]
    \[ \|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle. \tag{2.3} \]
    \[ \|\alpha x+(1-\alpha)y\|^2=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2. \tag{2.4} \]
    \[ \|\alpha x+\beta y+\gamma z\|^2=\alpha\|x\|^2+\beta\|y\|^2+\gamma\|z\|^2-\alpha\beta\|x-y\|^2-\alpha\gamma\|x-z\|^2-\gamma\beta\|y-z\|^2, \tag{2.5} \]
    where $\alpha,\beta,\gamma\in[0,1]$ with $\alpha+\beta+\gamma=1$.
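Since identities (2.1)–(2.5) carry the whole convergence analysis, a quick numerical sanity check can be reassuring. The sketch below is ours (not from the paper): it verifies (2.1), (2.4), and (2.5) on random vectors in $\mathbb{R}^5$ using NumPy, taking $\alpha+\beta+\gamma=1$ for (2.5).

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 5))
a, b = 0.3, 0.5
c = 1.0 - a - b                      # alpha + beta + gamma = 1 for (2.5)

sq = lambda v: float(np.dot(v, v))   # squared norm ||v||^2

# (2.1): ||x - y||^2 = ||x||^2 - 2<x,y> + ||y||^2
assert np.isclose(sq(x - y), sq(x) - 2 * np.dot(x, y) + sq(y))

# (2.4): ||a x + (1-a) y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x-y||^2
assert np.isclose(sq(a * x + (1 - a) * y),
                  a * sq(x) + (1 - a) * sq(y) - a * (1 - a) * sq(x - y))

# (2.5): three-point convex-combination identity
lhs = sq(a * x + b * y + c * z)
rhs = (a * sq(x) + b * sq(y) + c * sq(z)
       - a * b * sq(x - y) - a * c * sq(x - z) - c * b * sq(y - z))
assert np.isclose(lhs, rhs)
print("identities (2.1), (2.4), (2.5) verified")
```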

    Definition 2.1. [8,12,14] Let $g:C\times C\to\mathbb{R}$ be a mapping. Then $g$ is said to be

    (a) Strongly monotone on $C$ if there exists a constant $\tau>0$ such that

    \[ g(x,y)+g(y,x)\le-\tau\|x-y\|^2, \tag{2.6} \]

    for all $x,y\in C$;

    (b) Monotone on $C$ if

    \[ g(x,y)+g(y,x)\le 0, \tag{2.7} \]

    for all $x,y\in C$;

    (c) Strongly pseudomonotone on $C$ if there exists a constant $\gamma>0$ such that

    \[ g(x,y)\ge 0\ \Longrightarrow\ g(y,x)\le-\gamma\|x-y\|^2,\quad\forall x,y\in C; \]

    (d) Pseudomonotone on $C$ if

    \[ g(x,y)\ge 0\ \Longrightarrow\ g(y,x)\le 0,\quad\forall x,y\in C; \]

    (e) Satisfying a Lipschitz-like condition if there exist two positive constants $L_1,L_2$ such that

    \[ g(x,y)+g(y,z)\ge g(x,z)-L_1\|x-y\|^2-L_2\|y-z\|^2,\quad\forall x,y,z\in C. \tag{2.8} \]
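To make the definitions concrete, consider the model bifunction $g(x,y)=\langle x,\,y-x\rangle$ (our illustrative choice, not one from the paper). It is monotone since $g(x,y)+g(y,x)=-\|x-y\|^2\le 0$, and Young's inequality gives the Lipschitz-like condition (2.8) with $L_1=L_2=\tfrac12$. The sketch below checks both properties on random samples:

```python
import numpy as np

def g(x, y):
    # model bifunction g(x, y) = <x, y - x>; monotone because
    # g(x, y) + g(y, x) = -||x - y||^2 <= 0
    return float(np.dot(x, y - x))

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 4))
    # monotonicity (2.7)
    assert g(x, y) + g(y, x) <= 1e-12
    # Lipschitz-like condition (2.8) with L1 = L2 = 1/2:
    # g(x,y) + g(y,z) - g(x,z) = -<y - x, y - z> >= -0.5||x-y||^2 - 0.5||y-z||^2
    lhs = g(x, y) + g(y, z) - g(x, z)
    rhs = -0.5 * np.dot(x - y, x - y) - 0.5 * np.dot(y - z, y - z)
    assert lhs >= rhs - 1e-12
print("monotonicity and Lipschitz-type checks passed")
```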

    Let $C$ be a nonempty, closed, and convex subset of $H$. For any $u\in H$, there exists a unique point $P_Cu\in C$ such that

    \[ \|u-P_Cu\|\le\|u-y\|,\quad\forall y\in C. \]

    The operator $P_C$ is called the metric projection of $H$ onto $C$. It is well known that $P_C$ is a nonexpansive mapping and that $P_C$ satisfies

    \[ \langle x-y,\,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2, \tag{2.9} \]

    for all $x,y\in H$. Furthermore, $P_C$ is characterized by the following properties:

    \[ \|x-y\|^2\ge\|x-P_Cx\|^2+\|y-P_Cx\|^2 \]

    and

    \[ \langle x-P_Cx,\,y-P_Cx\rangle\le 0, \tag{2.10} \]

    for all $x\in H$ and $y\in C$. A subset $C$ of $H$ is called proximal if for each $x\in H$, there exists $y\in C$ such that

    \[ \|x-y\|=d(x,C). \]

    The Hausdorff metric on $H$ is defined by

    \[ H(A,B):=\max\Big\{\sup_{x\in A}d(x,B),\ \sup_{y\in B}d(y,A)\Big\}, \]

    for all subsets $A$ and $B$ of $H$.

    The normal cone $N_C$ to $C$ at a point $x\in C$ is defined by $N_C(x)=\{z\in H:\langle z,\,x-y\rangle\ge 0,\ \forall y\in C\}$.
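For the two feasible sets that appear later (a closed ball in Example 5.1 and the half-space $C_m$ in Algorithm 3.1), the metric projection has a closed form. The sketch below (ours, under those standard formulas) computes both projections and checks the characterization (2.10) against sampled points of $C$:

```python
import numpy as np

def proj_ball(x, r=1.0):
    # metric projection onto C = {x : ||x|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(x, a, b):
    # metric projection onto {w : <a, w> <= b}
    viol = float(np.dot(a, x)) - b
    return x if viol <= 0 else x - (viol / float(np.dot(a, a))) * a

rng = np.random.default_rng(2)
x = 3.0 * rng.standard_normal(5)       # a point outside the unit ball (w.h.p.)
px = proj_ball(x)
# characterization (2.10): <x - P_C x, y - P_C x> <= 0 for every y in C
for _ in range(200):
    y = proj_ball(rng.standard_normal(5))   # sample a point of C
    assert np.dot(x - px, y - px) <= 1e-10
print("projection characterization (2.10) verified")
```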

    Lemma 2.1. [13] Let $\{\delta_n\}$ and $\{\omega_n\}$ be sequences of positive real numbers such that

    \[ \delta_{n+1}\le(1+\omega_n)\delta_n+\omega_n\delta_{n-1}. \]

    Then the following holds:

    \[ \delta_{n+1}\le M\cdot\prod_{i=1}^{n}(1+2\omega_i), \]

    where $M=\max\{\delta_1,\delta_2\}$. Moreover, if $\sum_{n=1}^{\infty}\omega_n<\infty$, then $\{\delta_n\}$ is bounded.
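A small numerical illustration of Lemma 2.1 (ours, with the worst case of equality in the recurrence and the summable weights $\omega_n=1/n^2$) shows the product bound holding and the sequence staying bounded:

```python
# Lemma 2.1 illustration: take the recurrence with equality (worst case)
# and summable weights w_n = 1/n^2, then track the bound M * prod(1 + 2 w_i).
delta = [1.0, 2.0]               # delta_1, delta_2
M = max(delta)
bound_factor = 1.0
for n in range(2, 200):
    w = 1.0 / n**2
    delta.append((1 + w) * delta[-1] + w * delta[-2])
    bound_factor *= (1 + 2 * w)
    assert delta[-1] <= M * bound_factor + 1e-9   # the lemma's product bound
print(f"last term {delta[-1]:.4f}, bound {M * bound_factor:.4f}")
```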

    Lemma 2.2. [21] Let $C$ be a nonempty subset of $H$ and $\{x_m\}$ be a sequence in $H$ such that the following conditions hold:

    (1) $\lim_{m\to\infty}\|x_m-x\|$ exists for every $x\in C$;

    (2) every weak cluster point of $\{x_m\}$ lies in $C$.

    Then $\{x_m\}$ converges weakly to some point in $C$.

    Lemma 2.3. [30] Let $\{\delta_m\}$, $\{\beta_m\}$, and $\{\omega_m\}$ be sequences of positive real numbers such that

    \[ \delta_{m+1}\le(1+\omega_m)\delta_m+\beta_m. \]

    If $\sum_{m=1}^{\infty}\omega_m<\infty$ and $\sum_{m=1}^{\infty}\beta_m<\infty$, then $\lim_{m\to\infty}\delta_m$ exists.

    Lemma 2.4. [12] Let $C$ be a convex subset of a real Hilbert space $H$ and $\phi:C\to\mathbb{R}$ be a subdifferentiable function on $C$. Then $x^*$ is a solution of the convex problem $\min\{\phi(x):x\in C\}$ if and only if $0\in\partial\phi(x^*)+N_C(x^*)$, where $\partial\phi(x^*)$ denotes the subdifferential of $\phi$ at $x^*$ and $N_C(x^*)$ is the normal cone of $C$ at $x^*$.

    Assumption 3.1. (Condition A) Suppose that $C$ is a nonempty, closed, convex subset of a real Hilbert space $H$. Let $g:C\times C\to\mathbb{R}$ satisfy the following conditions:

    (1) $g$ is pseudomonotone on $C$, $g(x,x)=0$ for all $x\in C$, and $g$ satisfies the Lipschitz-type condition (2.8) with positive constants $c_1,c_2$;

    (2) $g(\cdot,x)$ is sequentially weakly upper semicontinuous on $C$ for each fixed $x\in C$;

    (3) $g(x,\cdot)$ is convex, lower semicontinuous, and subdifferentiable on $C$ for each fixed $x\in C$;

    (4) $\{T_m\}$ is a sequence of nonexpansive mappings;

    (5) $S:C\to C$ is a nonexpansive mapping;

    (6) The solution set $\Omega=EP(g)\cap F(S)\neq\emptyset$.

    Next, we present the proposed modified mildly inertial subgradient extragradient algorithm (Algorithm 3.1) and prove its weak convergence results.

    Algorithm 3.1. Modified mildly inertial subgradient extragradient method.
    Step 0. Let $x_0,x_1\in H$, $\tau_1>0$, $\lambda\in(1,\infty)$, $\mu\in(0,2\lambda)$, $\{\gamma_m\},\{\theta_m\}\subset(0,\infty)$ with $\sum_{m=1}^{\infty}\gamma_m<\infty$ and $\sum_{m=1}^{\infty}\theta_m<\infty$, and $\beta_m\in(0,1)$. For all $m\in\mathbb{N}$, given $x_{m-1}$ and $x_m$, compute $x_{m+1}$ as follows:
    Step 1. Compute
    \[ w_m=x_m-\gamma_m(T_mx_{m-1}-T_mx_m), \]
    \[ z_m=w_m-\theta_m(T_mx_{m-1}-T_mw_m), \]
    \[ y_m=\arg\min_{u\in C}\Big\{\tau_m g(z_m,u)+\frac{1}{2}\|u-z_m\|^2\Big\}. \]
    If $y_m=z_m$, then stop and $z_m$ is a solution. Otherwise, go to Step 2.
    Step 2. Select $\psi_m\in\partial_2 g(z_m,\cdot)(y_m)$ and $\mu_m\in N_C(y_m)$ such that
    \[ \mu_m=z_m-\tau_m\psi_m-y_m, \]
    and construct the half-space
    \[ C_m=\{w\in H:\langle z_m-\tau_m\psi_m-y_m,\,w-y_m\rangle\le 0\}. \]
    Compute
    \[ u_m=\arg\min_{u\in C_m}\Big\{\tau_m g(y_m,u)+\frac{1}{2}\|u-z_m\|^2\Big\}, \]
    \[ x_{m+1}=(1-\beta_m)u_m+\beta_m Su_m, \]
    \[ \tau_{m+1}=\begin{cases}\min\Big\{\tau_m,\ \dfrac{\mu\big[\|y_m-z_m\|^2+\|u_m-y_m\|^2\big]}{4\lambda\big(g(z_m,u_m)-g(z_m,y_m)-g(y_m,u_m)\big)}\Big\}, & \text{if } g(z_m,u_m)-g(z_m,y_m)-g(y_m,u_m)>0,\\[2mm]\tau_m, & \text{otherwise.}\end{cases}\tag{3.1} \]

    Remark 3.1. The above iterative method is quite different from the usual double inertial methods in the literature. The role of $T_m$ is justified in our numerical experiments.

    Lemma 4.1. Let $\{\tau_m\}$ be the sequence generated by Algorithm 3.1. Then

    \[ \lim_{m\to\infty}\tau_m\ge\min\Big\{\frac{\mu}{4\lambda\max\{c_1,c_2\}},\ \tau_1\Big\}. \]

    Proof. Using (3.1) in Algorithm 3.1 and (2.8), we have

    \[ \frac{\mu\big[\|y_m-z_m\|^2+\|u_m-y_m\|^2\big]}{4\lambda\big(g(z_m,u_m)-g(z_m,y_m)-g(y_m,u_m)\big)}\ge\frac{\mu\big[\|y_m-z_m\|^2+\|u_m-y_m\|^2\big]}{4\lambda\big[c_1\|z_m-y_m\|^2+c_2\|y_m-u_m\|^2\big]}\ge\frac{\mu}{4\lambda\max\{c_1,c_2\}}. \tag{4.1} \]

    Thus, the sequence $\{\tau_m\}$ is nonincreasing and bounded below by $\min\big\{\frac{\mu}{4\lambda\max\{c_1,c_2\}},\tau_1\big\}$. It then follows that the limit exists and

    \[ \lim_{m\to\infty}\tau_m\ge\min\Big\{\frac{\mu}{4\lambda\max\{c_1,c_2\}},\ \tau_1\Big\}. \]

    Theorem 4.1. Let $\{x_m\}$ be the sequence generated by Algorithm 3.1 and suppose that Assumption 3.1 holds. Then $\{x_m\}$ converges weakly to a point $p\in\Omega$.

    Proof. Let $p\in\Omega$. Using Algorithm 3.1, Lemma 2.4, and the definition of $u_m$, we have

    \[ 0\in\partial_2\Big(\tau_m g(y_m,\cdot)+\frac{1}{2}\|\cdot-z_m\|^2\Big)(u_m)+N_{C_m}(u_m). \tag{4.2} \]

    Thus, there exist $v_m\in\partial_2 g(y_m,\cdot)(u_m)$ and $\chi\in N_{C_m}(u_m)$ such that

    \[ \chi=z_m-\tau_mv_m-u_m. \tag{4.3} \]

    Since $\chi\in N_{C_m}(u_m)$, we have $\langle\chi,\,x-u_m\rangle\le 0$ for all $x\in C_m$, that is,

    \[ \tau_m\langle v_m,\,x-u_m\rangle\ge\langle z_m-u_m,\,x-u_m\rangle,\quad\forall x\in C_m, \tag{4.4} \]

    and since $p\in EP(g)\subset C\subset C_m$, we have

    \[ \tau_m\langle v_m,\,p-u_m\rangle\ge\langle z_m-u_m,\,p-u_m\rangle. \tag{4.5} \]

    In addition, since $v_m\in\partial_2 g(y_m,\cdot)(u_m)$, we obtain

    \[ \tau_m\big(g(y_m,p)-g(y_m,u_m)\big)\ge\tau_m\langle v_m,\,p-u_m\rangle. \tag{4.6} \]

    Combining (4.5) and (4.6), we have

    \[ \tau_m\big(g(y_m,p)-g(y_m,u_m)\big)\ge\langle z_m-u_m,\,p-u_m\rangle. \tag{4.7} \]

    Since $p\in\Omega$, we have $g(p,y_m)\ge 0$; thus, using the fact that $g$ is pseudomonotone, we get $g(y_m,p)\le 0$. Hence,

    \[ -2\tau_m g(y_m,u_m)\ge 2\langle z_m-u_m,\,p-u_m\rangle. \tag{4.8} \]

    Using the fact that $u_m\in C_m$, we obtain

    \[ \langle z_m-\tau_m\psi_m-y_m,\,u_m-y_m\rangle\le 0, \]

    so

    \[ \tau_m\langle\psi_m,\,u_m-y_m\rangle\ge\langle z_m-y_m,\,u_m-y_m\rangle. \]

    Since $\psi_m\in\partial_2 g(z_m,\cdot)(y_m)$, by the definition of the subdifferential we obtain

    \[ g(z_m,y)-g(z_m,y_m)\ge\langle\psi_m,\,y-y_m\rangle,\quad\forall y\in H; \]

    taking $y=u_m$, it then follows that

    \[ 2\tau_m\big(g(z_m,u_m)-g(z_m,y_m)\big)\ge 2\langle z_m-y_m,\,u_m-y_m\rangle. \tag{4.9} \]

    Adding (4.8) and (4.9), we have

    \[ 2\tau_m\big(g(z_m,u_m)-g(z_m,y_m)-g(y_m,u_m)\big)\ge 2\langle z_m-y_m,\,u_m-y_m\rangle+2\langle z_m-u_m,\,p-u_m\rangle=\|z_m-y_m\|^2+\|u_m-y_m\|^2+\|u_m-p\|^2-\|p-z_m\|^2, \tag{4.10} \]

    which implies that

    \[ \|u_m-p\|^2\le\|z_m-p\|^2-\|z_m-y_m\|^2-\|u_m-y_m\|^2+2\tau_m\big(g(z_m,u_m)-g(z_m,y_m)-g(y_m,u_m)\big). \tag{4.11} \]

    Thus, using (3.1), we have

    \[ \|u_m-p\|^2\le\|z_m-p\|^2-\|z_m-y_m\|^2-\|u_m-y_m\|^2+\frac{\mu\tau_m}{2\lambda\tau_{m+1}}\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]. \tag{4.12} \]

    Clearly, $\lim_{m\to\infty}\frac{\mu\tau_m}{2\lambda\tau_{m+1}}=\frac{\mu}{2\lambda}<1$. Thus, for all sufficiently large $m$, we have

    \[ \|u_m-p\|^2\le\|z_m-p\|^2-\Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]. \tag{4.13} \]

    In addition, using Algorithm 3.1 and the nonexpansiveness of $T_m$, we have

    \begin{align*}
    \|z_m-p\|&=\|w_m-\theta_m(T_mx_{m-1}-T_mw_m)-p\|\\
    &\le\|w_m-p\|+\theta_m\|T_mx_{m-1}-T_mw_m\|\\
    &\le\|w_m-p\|+\theta_m\|x_{m-1}-w_m\|\\
    &=\|x_m-\gamma_m(T_mx_{m-1}-T_mx_m)-p\|+\theta_m\|x_{m-1}-\big(x_m-\gamma_m(T_mx_{m-1}-T_mx_m)\big)\|\\
    &\le\|x_m-p\|+\gamma_m\|x_{m-1}-x_m\|+\theta_m\|x_{m-1}-x_m\|+\theta_m\gamma_m\|x_{m-1}-x_m\|\\
    &=\|x_m-p\|+(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|. \tag{4.14}
    \end{align*}

    Using Algorithm 3.1, (2.4), (4.13), and the fact that $S$ is nonexpansive with $Sp=p$, we have

    \begin{align*}
    \|x_{m+1}-p\|^2&=\|(1-\beta_m)u_m+\beta_mSu_m-p\|^2\\
    &=(1-\beta_m)\|u_m-p\|^2+\beta_m\|Su_m-p\|^2-\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\
    &\le\|u_m-p\|^2-\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\
    &\le\|z_m-p\|^2-\Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]-\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\
    &\le\|z_m-p\|^2. \tag{4.15}
    \end{align*}

    From (4.14) and (4.15), we have

    \begin{align*}
    \|x_{m+1}-p\|&\le\|z_m-p\|\le\|x_m-p\|+(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|\\
    &\le\|x_m-p\|+\omega_m\big[\|x_{m-1}-p\|+\|x_m-p\|\big]\\
    &=(1+\omega_m)\|x_m-p\|+\omega_m\|x_{m-1}-p\|, \tag{4.16}
    \end{align*}

    where $\omega_m=\gamma_m+\theta_m+\theta_m\gamma_m$. Using Lemma 2.1, we have that $\{x_m\}$ is bounded; consequently, the sequences $\{w_m\}$, $\{z_m\}$, and $\{u_m\}$ are bounded. It then follows that $\sum_{m=1}^{\infty}\omega_m\|x_{m-1}-p\|<\infty$. Using (4.16) and Lemma 2.3, we have that $\lim_{m\to\infty}\|x_m-p\|$ exists. As such, from (4.16) we obtain

    \[ \lim_{m\to\infty}(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|=0. \tag{4.17} \]

    Thus, we obtain

    \[ \lim_{m\to\infty}\|x_{m-1}-x_m\|=0. \tag{4.18} \]

    From (4.14), we have that

    \[ \|z_m-p\|^2\le\|x_m-p\|^2+2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|\|x_m-p\|+(\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_m\|^2. \tag{4.19} \]

    Thus, using (4.15) and (4.19), we have

    \begin{align*}
    \|x_{m+1}-p\|^2&\le\|z_m-p\|^2-\Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]-\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\
    &\le\|x_m-p\|^2+2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|\|x_m-p\|+(\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_m\|^2\\
    &\quad-\Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]-\beta_m(1-\beta_m)\|Su_m-u_m\|^2, \tag{4.20}
    \end{align*}

    which implies that

    \begin{align*}
    \Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2&+\|u_m-y_m\|^2\big]+\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\
    &\le\|x_m-p\|^2-\|x_{m+1}-p\|^2+2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|\|x_m-p\|+(\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_m\|^2. \tag{4.21}
    \end{align*}

    Using (4.18) and the existence of $\lim_{m\to\infty}\|x_m-p\|$, we have

    \[ \lim_{m\to\infty}\Big[\Big(1-\frac{\mu}{2\lambda}\Big)\big[\|z_m-y_m\|^2+\|u_m-y_m\|^2\big]+\beta_m(1-\beta_m)\|Su_m-u_m\|^2\Big]=0, \]

    which implies that

    \[ \lim_{m\to\infty}\|z_m-y_m\|=0,\quad\lim_{m\to\infty}\|u_m-y_m\|=0\quad\text{and}\quad\lim_{m\to\infty}\|Su_m-u_m\|=0. \tag{4.22} \]

    Furthermore, from Algorithm 3.1 and the nonexpansiveness of $T_m$, we have

    \begin{align*}
    \|z_m-x_m\|&\le\|w_m-x_m\|+\theta_m\|T_mx_{m-1}-T_mw_m\|\\
    &\le\|w_m-x_m\|+\theta_m\|x_{m-1}-w_m\|\\
    &=\gamma_m\|T_mx_{m-1}-T_mx_m\|+\theta_m\|x_{m-1}-x_m+\gamma_m(T_mx_{m-1}-T_mx_m)\|\\
    &\le\gamma_m\|x_{m-1}-x_m\|+\theta_m\|x_{m-1}-x_m\|+\theta_m\gamma_m\|x_{m-1}-x_m\|\\
    &=(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_m\|. \tag{4.23}
    \end{align*}

    Using (4.18), we have

    \[ \lim_{m\to\infty}\|z_m-x_m\|=0=\lim_{m\to\infty}\|w_m-x_m\|. \tag{4.24} \]

    Using (4.22) and (4.24), we have

    \[ \lim_{m\to\infty}\|x_m-u_m\|\le\lim_{m\to\infty}\|x_m-z_m\|+\lim_{m\to\infty}\|z_m-y_m\|+\lim_{m\to\infty}\|y_m-u_m\|=0. \tag{4.25} \]

    We now establish that the sequence $\{x_m\}$ converges weakly to a point of $\Omega$. Since $\{x_m\}$ is bounded, it has a weakly convergent subsequence; suppose $\{x_{m_k}\}$ is a subsequence of $\{x_m\}$ such that $x_{m_k}\rightharpoonup x^*$. Furthermore, using (4.25), the corresponding subsequence $\{u_{m_k}\}$ also converges weakly to $x^*$, and using (4.22) and the demiclosedness of $I-S$, we have that $x^*\in F(S)$. Using (4.9) and the subdifferentiability of $g$, we have

    \[ \tau_{m_k}\big(g(z_{m_k},x)-g(z_{m_k},y_{m_k})\big)\ge\langle z_{m_k}-y_{m_k},\,x-y_{m_k}\rangle,\quad\forall x\in C; \tag{4.26} \]

    taking the limit as $k\to\infty$ and using Assumption 3.1, together with the fact that $\lim_{k\to\infty}\tau_{m_k}=\tau>0$, we obtain $g(x^*,x)\ge 0$ for all $x\in C$. Hence, $x^*\in\Omega$. Using Lemma 2.2, we conclude that $\{x_m\}$ converges weakly to a point $p\in\Omega$.

    In this section, some numerical examples are given to validate our main result. We compare the numerical behavior of our Algorithm 3.1 (shortly, Alg. 3.1) with Algorithm 3.1 in [33] (shortly, YL Alg. 3.1) and Algorithm 4.1 in [15] (shortly, H Alg. 4.1). All numerical simulations were performed in MATLAB R2020b on a desktop PC with an Intel® Core™ i7-3540M CPU @ 3.00GHz × 4 and 400.00GB of memory.

    Example 5.1. Let $H=\ell_2(\mathbb{R})=\{x=(x_1,x_2,\ldots,x_k,\ldots): x_k\in\mathbb{R}\ \text{and}\ \sum_{k=1}^{\infty}|x_k|^2<\infty\}$ with inner product $\langle\cdot,\cdot\rangle:\ell_2\times\ell_2\to\mathbb{R}$ and norm $\|\cdot\|:\ell_2\to\mathbb{R}$ defined by $\langle x,y\rangle=\sum_{k=1}^{\infty}x_ky_k$ and $\|x\|=\big(\sum_{k=1}^{\infty}|x_k|^2\big)^{\frac12}$, where $x=\{x_k\}_{k=1}^{\infty}$, $y=\{y_k\}_{k=1}^{\infty}$. Now, let $C=\{x\in H:\|x\|\le 1\}$. The bifunction $g:C\times C\to\mathbb{R}$ is defined by $g(x,y)=(3-\|x\|)\langle x,\,y-x\rangle$ for all $x,y\in C$. As in [29], one can easily verify that $g$ is pseudomonotone but not monotone, and that $g$ fulfills the Lipschitz-type condition with constants $c_1=c_2=\frac{5}{2}$. Let $T_m:\ell_2\to\ell_2$ be defined by $T_mx=\frac{x}{5m}$ for all $m\in\mathbb{N}$, $x\in C$, and define $S:\ell_2\to\ell_2$ by $Sx=\frac{x}{4}$, where $x=(x_1,x_2,\ldots,x_k,\ldots)$, $x_k\in\mathbb{R}$. Now, we consider the following parameters for the various methods:

    For Alg. 3.1: $\lambda=2.4$, $\mu=0.7$, $\gamma_m=\theta_m=\frac{1}{10m^2+1}$, $\beta_m=\frac{1}{5m}$.

    For YL Alg. 3.1: $\alpha_m=\frac{1}{1000(m+2)}$, $\beta_m=0.8$, $\lambda_0=\frac{1}{4}$, $\mu=0.7$.

    For H Alg. 4.1: $\lambda=2.4$, $\alpha_m=\frac{1}{1000(m+2)}$, and $\beta_m=0.8$.

    Next, we consider the following initial values:

    Case A: $x_0=(0,1,0,\ldots,0,\ldots)$, $x_1=(1,1,1,0,\ldots,0,\ldots)$;

    Case B: $x_0=(1,1,0,\ldots,0,\ldots)$, $x_1=(0,1,1,0,\ldots,0,\ldots)$;

    Case C: $x_0=(1,0,1,\ldots,0,\ldots)$, $x_1=(1,0,1,0,\ldots,0,\ldots)$;

    Case D: $x_0=(1,0,\ldots,0,\ldots)$, $x_1=(1,1,0,\ldots,0,\ldots)$.
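To make the workflow of Algorithm 3.1 concrete, the sketch below (ours, not the authors' MATLAB code) runs it on a 5-coordinate truncation of Example 5.1 with the Case A data. Since $g(z,\cdot)$ is affine, both argmin subproblems reduce to explicit projections onto the unit ball and the half-space $C_m$. We read the definition of $T_m$ as $T_mx=\frac{x}{5m}$ and assume $\tau_1=1$; both are assumptions on details not fully legible in the source.

```python
import numpy as np

def proj_C(x):                       # metric projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def g(x, y):                         # g(x, y) = (3 - ||x||) <x, y - x>
    return (3.0 - np.linalg.norm(x)) * float(np.dot(x, y - x))

def grad2_g(x):                      # gradient of the affine map u -> g(x, u)
    return (3.0 - np.linalg.norm(x)) * x

lam, mu, tau = 2.4, 0.7, 1.0         # lambda, mu as in Alg. 3.1; tau_1 = 1 assumed
x_prev = np.array([0.0, 1.0, 0.0, 0.0, 0.0])     # x_0 of Case A
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0])          # x_1 of Case A
for m in range(1, 1000):
    gamma = theta = 1.0 / (10 * m**2 + 1)
    beta = 1.0 / (5 * m)
    Tm = lambda v: v / (5 * m)                   # assumed reading of T_m
    w = x - gamma * (Tm(x_prev) - Tm(x))         # first mildly inertial step
    z = w - theta * (Tm(x_prev) - Tm(w))         # second mildly inertial step
    y = proj_C(z - tau * grad2_g(z))             # y_m: argmin over C
    if np.linalg.norm(y - z) < 1e-12:            # stopping rule of Step 1
        break
    a = z - tau * grad2_g(z) - y                 # normal vector of half-space C_m
    v = z - tau * grad2_g(y)                     # unconstrained argmin for u_m
    viol = float(np.dot(a, v - y))
    u = v if viol <= 0 else v - (viol / float(np.dot(a, a))) * a
    x_next = (1 - beta) * u + beta * (u / 4.0)   # x_{m+1} = (1 - b)u + b Su
    denom = g(z, u) - g(z, y) - g(y, u)          # self-adaptive rule (3.1)
    if denom > 0:
        num = mu * (np.linalg.norm(y - z)**2 + np.linalg.norm(u - y)**2)
        tau = min(tau, num / (4 * lam * denom))
    done = np.linalg.norm(x_next - x) < 1e-4     # TOL_m
    x_prev, x = x, x_next
    if done:
        break
print("final ||x_m|| =", np.linalg.norm(x))      # Omega = {0} for this example
```

The iterates contract toward $0$, the unique point of $\Omega$ here, and the step-size $\tau_m$ settles at a positive value, in line with Lemma 4.1.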

    For the numerical computation, we used the stopping criterion $\mathrm{TOL}_m=\|x_{m+1}-x_m\|<10^{-4}$, and we obtained the following Table 1 and Figure 1:

    Table 1.  Results of the numerical simulations for various cases.

    | Case | Alg. 3.1 Iter | Alg. 3.1 CPU (s) | Alg. 3.1 (without $T_m$) Iter | Alg. 3.1 (without $T_m$) CPU (s) | YL Alg. 3.1 Iter | YL Alg. 3.1 CPU (s) | H Alg. 4.1 Iter | H Alg. 4.1 CPU (s) |
    |---|---|---|---|---|---|---|---|---|
    | Case A | 21 | 0.0069 | 49 | 0.0125 | 61 | 0.0219 | 82 | 0.0236 |
    | Case B | 35 | 0.0090 | 45 | 0.0098 | 78 | 0.0221 | 82 | 0.0236 |
    | Case C | 40 | 0.0101 | 50 | 0.0105 | 80 | 0.0225 | 85 | 0.0237 |
    | Case D | 41 | 0.0101 | 55 | 0.0112 | 85 | 0.0237 | 85 | 0.0237 |

    Figure 1.  Example 1, Case A (top left); Case B (top right); Case C (bottom left); Case D (bottom right).

    Example 5.2. Let $H=L^2([0,1])$ with inner product $\langle x,y\rangle=\int_0^1x(t)y(t)\,dt$ and induced norm $\|x\|=\big(\int_0^1x^2(t)\,dt\big)^{\frac12}$ for all $x,y\in L^2([0,1])$. We define the set $C$ by $C=\{x\in H:\int_0^1(s^2+1)x(s)\,ds\le 1\}$, and the bifunction $g:C\times C\to\mathbb{R}$ is defined by $g(x,y)=\langle Bx,\,y-x\rangle$, where $Bx(s)=\max\{0,x(s)\}$, $s\in[0,1]$, for all $x\in H$. Now, let the mapping $T_m:C\to C$ be defined by $T_mx=\frac{x}{2m}$ for each $m\in\mathbb{N}$, and define $S:C\to C$ by $Sx=\frac{x}{3}$ for each $x\in C$. Now, we consider the following initial values:

    Case I: $x_0=s^3+1$, $x_1=\sin(3s)$;

    Case II: $x_0=\frac{\sin(4s)}{30}$, $x_1=\frac{\cos(3s)}{3}$;

    Case III: $x_0=\frac{\exp(3s)}{80}$, $x_1=\frac{\exp(4s)}{80}$;

    Case IV: $x_0=s^3+s^2$, $x_1=s^2+1$.
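Because the constraint $\int_0^1(s^2+1)x(s)\,ds\le 1$ defines a half-space of $L^2([0,1])$, its metric projection has a closed form, $P_Cx=x-\frac{\max\{0,\langle a,x\rangle-1\}}{\|a\|^2}a$ with $a(s)=s^2+1$. The sketch below (ours, a discretized illustration rather than the authors' implementation) evaluates this projection on a uniform grid with trapezoidal quadrature, using the Case I initial function:

```python
import numpy as np

N = 2001
s, h = np.linspace(0.0, 1.0, N, retstep=True)
wts = np.full(N, h)
wts[0] = wts[-1] = h / 2.0            # trapezoidal quadrature weights

def inner(x, y):
    return float(np.sum(wts * x * y)) # approximates the L2 inner product

a = s**2 + 1.0                        # constraint functional a(s) = s^2 + 1

def proj_C(x):
    # C = {x : <a, x> <= 1} is a half-space, so P_C is explicit
    viol = inner(a, x) - 1.0
    return x if viol <= 0 else x - (viol / inner(a, a)) * a

x0 = s**3 + 1.0                       # initial function of Case I
px = proj_C(x0)
# exact value: ∫(s^2+1)(s^3+1) ds = 1/6 + 1/4 + 1/3 + 1 = 1.75 > 1, so x0 is infeasible
print(round(inner(a, x0), 4), "->", round(inner(a, px), 4))
```

After projection, the constraint value drops from $1.75$ to exactly $1$, i.e. $P_Cx_0$ lies on the boundary of $C$, as expected for an infeasible point and an affine constraint.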

    For the numerical computation, we use the same parameters as in Example 5.1. We consider the stopping criterion $\mathrm{TOL}_m=\|x_{m+1}-x_m\|<10^{-5}$, and the following Table 2 and Figure 2 are obtained.

    Table 2.  Results of the numerical simulations for various cases.

    | Case | Alg. 3.1 Iter | Alg. 3.1 CPU (s) | Alg. 3.1 (without $T_m$) Iter | Alg. 3.1 (without $T_m$) CPU (s) | YL Alg. 3.1 Iter | YL Alg. 3.1 CPU (s) | H Alg. 4.1 Iter | H Alg. 4.1 CPU (s) |
    |---|---|---|---|---|---|---|---|---|
    | Case I | 25 | 0.0070 | 30 | 0.0092 | 50 | 0.0105 | 80 | 0.0226 |
    | Case II | 38 | 0.0098 | 49 | 0.0102 | 79 | 0.0220 | 84 | 0.0224 |
    | Case III | 40 | 0.0120 | 50 | 0.0105 | 83 | 0.0225 | 84 | 0.0225 |
    | Case IV | 41 | 0.0121 | 56 | 0.0123 | 85 | 0.0224 | 90 | 0.0230 |

    Figure 2.  Example 2, Case I (top left); Case II (top right); Case III (bottom left); Case IV (bottom right).

    Remark 5.1. From Tables 1 and 2 and Figures 1 and 2, it is evident that our new method outperforms the compared methods.

    In this work, we have introduced a modified subgradient extragradient iterative algorithm for approximating a common solution of an EP with a pseudomonotone bifunction and the FPP of nonexpansive mappings in a real Hilbert space. The proposed method employs a self-adaptive step-size and does not rely on prior knowledge of the Lipschitz-type constants of the pseudomonotone bifunction. Under some mild conditions, we proved the weak convergence of the new method. We have shown that the suggested method, which includes two modified mildly inertial steps, outperforms several well-known methods in the existing literature.

    Francis Akutsah: Conceptualization, Writing-original draft; Akindele Adebayo Mebawondu: Conceptualization, Software, Writing-original draft; Austine Efut Ofem: Methodology, Software, Writing-original draft; Reny George: Validation, Funding acquisition, Formal analysis; Hossam A. Nabwey: Validation, Formal analysis; Ojen Kumar Narain: Supervision. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this work through the project number (PSAU/2024/01/78917).

    The authors declare no conflict of interest.



    [1] P. Anh, A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems, Bull. Malays. Math. Sci. Soc., 36 (2013), 107–116.
    [2] A. Antipin, On a method for convex programs using a symmetrical modification of the Lagrange function, Ekon. Mat. Metody, 12 (1976), 1164–1173.
    [3] F. Akutsah, A. Mebawondu, H. Abass, O. Narain, A self adaptive method for solving a class of bilevel variational inequalities with split variational inequality and composed fixed point problem constraints in Hilbert spaces, Numer. Algebra Control, 13 (2023), 117–138. http://dx.doi.org/10.3934/naco.2021046 doi: 10.3934/naco.2021046
    [4] F. Akutsah, A. Mebawondu, G. Ugwunnadi, O. Narain, Inertial extrapolation method with regularization for solving monotone bilevel variation inequalities and fixed point problems in real Hilbert space, J. Nonlinear Funct. Anal., 2022 (2022), 5. http://dx.doi.org/10.23952/jnfa.2022.5 doi: 10.23952/jnfa.2022.5
    [5] F. Akutsah, A. Mebawondu, G. Ugwunnadi, P. Pillay, O. Narain, Inertial extrapolation method with regularization for solving a new class of bilevel problem in real Hilbert spaces, SeMA, 80 (2023), 503–524. http://dx.doi.org/10.1007/s40324-022-00293-2 doi: 10.1007/s40324-022-00293-2
    [6] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set Valued Anal., 9 (2001), 3–11. http://dx.doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [7] P. Anh, Strong convergence theorems for nonexpansive mappings Ky Fan inequalities, J. Optim. Theory Appl., 154 (2012), 303–320. http://dx.doi.org/10.1007/s10957-012-0005-x doi: 10.1007/s10957-012-0005-x
    [8] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Mathematics Student, 63 (1994), 123–145.
    [9] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 148 (2011), 318–335. http://dx.doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [10] Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for variational inequality problems in Euclidean space, Optimization, 61 (2012), 1119–1132. http://dx.doi.org/10.1080/02331934.2010.539689 doi: 10.1080/02331934.2010.539689
    [11] L. Ceng, X. Qin, Y. Shehu, J. Yao, Mildly inertial subgradient extragradient method for variational inequalities involving an asymptotically nonexpansive and finitely many nonexpansive mappings, Mathematics, 7 (2019), 881. http://dx.doi.org/10.3390/math7100881 doi: 10.3390/math7100881
    [12] P. Daniele, F. Giannessi, A. Maugeri, Equilibrium problems and variational model, New York: Springer, 2003. http://dx.doi.org/10.1007/978-1-4613-0239-1
    [13] A. Hanjing, S. Suantai, A fast image restoration algorithm based on a fixed point and optimization method, Mathematics, 8 (2020), 378. http://dx.doi.org/10.3390/math8030378 doi: 10.3390/math8030378
    [14] Z. He, The split equilibrium problem and its convergence algorithms, J. Inequal. Appl., 2012 (2012), 162. http://dx.doi.org/10.1186/1029-242X-2012-162 doi: 10.1186/1029-242X-2012-162
    [15] D. Hieu, Halpern subgradient extragradient method extended to equilibrium problems, RACSAM, 111 (2017), 823–840. http://dx.doi.org/10.1007/s13398-016-0328-9 doi: 10.1007/s13398-016-0328-9
    [16] D. Hieu, Hybrid projection methods for equilibrium problems with non-Lipschitz type bifunctions, Math. Method. Appl. Sci., 40 (2017), 4065–4079. http://dx.doi.org/10.1002/mma.4286 doi: 10.1002/mma.4286
    [17] L. Jolaoso, M. Aphane, A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems, Fixed Point Theory Appl., 2020 (2020), 9. http://dx.doi.org/10.1186/s13663-020-00676-y doi: 10.1186/s13663-020-00676-y
    [18] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matekon, 12 (1976), 747–756.
    [19] M. Lukumon, A. Mebawondu, A. Ofem, C. Agbonkhese, F. Akutsah, O. Narain, An efficient iterative method for solving quasimonotone bilevel split variational inequality problem, Adv. Fixed Point Theory, 13 (2023), 26. http://dx.doi.org/10.28919/afpt/8269 doi: 10.28919/afpt/8269
    [20] A. Mebawondu, A. Ofem, F. Akutsah, C. Agbonkhese, F. Kasali, O. Narain, A new double inertial subgradient extragradient algorithm for solving split pseudomonotone equilibrium problems and fixed point problems, Ann. Univ. Ferrara, in press. http://dx.doi.org/10.1007/s11565-024-00496-7
    [21] A. Moudafi, E. Al-Shemas, Simultaneous iterative methods for split equality problem, Trans. Math. Program. Appl., 1 (2013), 1–11.
    [22] L. Muu, W. Oettli, Convergence of an adaptive penalty scheme for finding constrained equilibria, Nonlinear Anal., 18 (1992), 1159–1166. http://dx.doi.org/10.1016/0362-546X(92)90159-C doi: 10.1016/0362-546X(92)90159-C
    [23] A. Ofem, A. Mebawondu, G. Ugwunnadi, H. Isik, O. Narain, A modified subgradient extragradient algorithm-type for solving quasimonotone variational inequality problems with applications, J. Inequal. Appl., 2023 (2023), 73. http://dx.doi.org/10.1186/s13660-023-02981-7 doi: 10.1186/s13660-023-02981-7
    [24] A. Ofem, A. Mebawondu, G. Ugwunnadi, P. Cholamjiak, O. Narain, Relaxed Tseng splitting method with double inertial steps for solving monotone inclusions and fixed point problems, Numer. Algor., in press. http://dx.doi.org/10.1007/s11075-023-01674-y
    [25] A. Ofem, A. Mebawondu, C. Agbonkhese, G. Ugwunnadi, O. Narain, Alternated inertial relaxed tseng method for solving fixed point and quasi-monotone variational inequality problems, Nonlinear Functional Analysis and Applications, 29 (2024), 131–164. http://dx.doi.org/10.22771/nfaa.2024.29.01.10 doi: 10.22771/nfaa.2024.29.01.10
    [26] A. Ofem, J. Abuchu, G. Ugwunnadi, H. Nabwey, A. Adamu, O. Narain, Double inertial steps extragadient-type methods for solving optimal control and image restoration problems, AIMS Mathematics, 9 (2024), 12870–12905. http://dx.doi.org/10.3934/math.2024629 doi: 10.3934/math.2024629
    [27] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comp. Math. Math. Phys., 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [28] D. Quoc Tran, M. Le Dung, V. Nguyen, Extragradient algorithms extended to equilibrium problems, Optimization, 57 (2008), 749–776. http://dx.doi.org/10.1080/02331930601122876 doi: 10.1080/02331930601122876
    [29] H. Rehman, P. Kumam, P. Cho, Y. Suleiman, W. Kumam, Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Method. Softw., 36 (2021), 82–113. http://dx.doi.org/10.1080/10556788.2020.1734805 doi: 10.1080/10556788.2020.1734805
    [30] K. Tan, H. Xu, Approximating fixed points of nonexpansive mappings by the ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301–308. http://dx.doi.org/10.1006/jmaa.1993.1309 doi: 10.1006/jmaa.1993.1309
    [31] D. Thong, D. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. http://dx.doi.org/10.1007/s11075-018-0527-x doi: 10.1007/s11075-018-0527-x
    [32] A. Tada, W. Takahashi, Strong convergence theorem for an equilibrium problem and a nonexpansive mapping, In: Nonlinear analysis and convex analysis, Yokohama: Yokohama Publishers, 2006,609–617.
    [33] J. Yang, H. Liu, The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space, Optim. Lett., 14 (2020), 1803–1816. http://dx.doi.org/10.1007/s11590-019-01474-1 doi: 10.1007/s11590-019-01474-1
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)