
Modified inertial subgradient extragradient algorithms for generalized equilibria systems with constraints of variational inequalities and fixed points

    In this research, we studied modified inertial composite subgradient extragradient implicit rules for finding solutions of a system of generalized equilibrium problems with a common fixed-point problem and pseudomonotone variational inequality constraints. The suggested methods consisted of an inertial iterative algorithm, a hybrid deepest-descent technique, and a subgradient extragradient method. We proved that the constructed algorithms converge to a solution of the considered problem, which also solves a certain hierarchical variational inequality.

    Citation: Lu-Chuan Ceng, Shih-Hsin Chen, Yeong-Cheng Liou, Tzu-Chien Yin. Modified inertial subgradient extragradient algorithms for generalized equilibria systems with constraints of variational inequalities and fixed points[J]. AIMS Mathematics, 2024, 9(6): 13819-13842. doi: 10.3934/math.2024672




    Throughout, assume that H is a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖, and that C ⊂ H is a closed and convex set. Let S : C → H be an operator and Θ : C×C → R a bifunction. We use Fix(S) to denote the fixed-point set of S.

    Recall that the equilibrium problem (EP) is to find an equilibrium point, that is, a point of the set

    EP(Θ) = {x ∈ C : Θ(x, y) ≥ 0, ∀y ∈ C}.

    Under the theory framework of equilibrium problems, there exists a unified way for exploring a broad number of problems originating in structural analysis, transportation, physics, optimization, finance, and economics [1,8,10,17,20,24,25,26,35]. In order to find an element in EP(Θ), one needs to make the hypotheses below:

    (H1) Θ(v, v) = 0, ∀v ∈ C;

    (H2) Θ(w, v) + Θ(v, w) ≤ 0, ∀v, w ∈ C;

    (H3) lim_{λ→0⁺} Θ((1 − λ)v + λu, w) ≤ Θ(v, w), ∀u, v, w ∈ C;

    (H4) For every v ∈ C, Θ(v, ·) is convex and lower semicontinuous (l.s.c.).

    In order to solve the equilibrium problem, Blum and Oettli [1] obtained in 1994 the following valuable lemma:

    Lemma 1.1. [1] Assume that Θ : C×C → R fulfills the hypotheses (H1)–(H4). For x ∈ H and λ > 0, let T^Θ_λ : H → C be the operator formulated below:

    T^Θ_λ(x) := {y ∈ C : Θ(y, z) + (1/λ)⟨z − y, y − x⟩ ≥ 0, ∀z ∈ C}.

    Then, (i) T^Θ_λ is single-valued and satisfies ‖T^Θ_λ v − T^Θ_λ w‖² ≤ ⟨T^Θ_λ v − T^Θ_λ w, v − w⟩, ∀v, w ∈ H; and (ii) Fix(T^Θ_λ) = EP(Θ), and EP(Θ) is convex and closed.

    In particular, in the case of Θ(x, y) = ⟨Ax, y − x⟩, ∀x, y ∈ C, the EP reduces to the classical variational inequality problem (VIP) of seeking x* ∈ C such that

    ⟨Ax*, y − x*⟩ ≥ 0, ∀y ∈ C.

    The solution set of the VIP is denoted by VI(C, A).

    An effective approach for solving both the EP and the VIP is Korpelevich's extragradient algorithm [15]. The extragradient technique has been adapted and applied extensively; see, e.g., the modified extragradient method [11,29,34], the subgradient extragradient method [3,13,16,28,31,32], the relaxed extragradient method [7], Tseng-type methods [22,23,33], inertial extragradient methods [14,27], and so on.

    In 2010, Ceng and Yao [6] investigated the system of generalized equilibrium problems (SGEP) of finding (x*, y*) ∈ C×C satisfying

    { Θ1(x*, u) + ⟨B1 y*, u − x*⟩ + (1/α1)⟨x* − y*, u − x*⟩ ≥ 0, ∀u ∈ C,
    { Θ2(y*, v) + ⟨B2 x*, v − y*⟩ + (1/α2)⟨y* − x*, v − y*⟩ ≥ 0, ∀v ∈ C, (1.1)

    where B1, B2 : H → H are two nonlinear operators, Θ1, Θ2 : C×C → R are two bifunctions, and α1, α2 > 0 are two constants.

    If Θ1 = Θ2 = 0, then the SGEP comes down to the generalized variational inequality system considered in [5]: find (x*, y*) ∈ C×C satisfying

    { ⟨α1 B1 y* + x* − y*, u − x*⟩ ≥ 0, ∀u ∈ C,
    { ⟨α2 B2 x* + y* − x*, v − y*⟩ ≥ 0, ∀v ∈ C,

    with constants α1, α2 > 0.

    To solve problem (1.1), the authors in [6] used a fixed-point technique: in fact, the SGEP (1.1) can be transformed into a fixed-point problem.

    Lemma 1.2. [6] Suppose that the bifunctions Θ1, Θ2 : C×C → R satisfy the hypotheses (H1)–(H4) and that B1, B2 : H → H are ρ-ism and σ-ism, respectively. Then, (u*, v*) ∈ C×C is a solution of SGEP (1.1) if and only if u* ∈ Fix(G), where G := T^{Θ1}_{α1}(I − α1 B1)T^{Θ2}_{α2}(I − α2 B2) and v* = T^{Θ2}_{α2}(I − α2 B2)u*, in which α1 ∈ (0, 2ρ) and α2 ∈ (0, 2σ).

    On the other hand, in 2018, Cai, Shehu, and Iyiola [2] proposed the modified viscosity implicit rule for solving the EP and a fixed-point problem: for x_1 ∈ C, let {x_k} be the sequence constructed below:

    { u_k = σ_k x_k + (1 − σ_k)y_k,
    { v_k = P_C(u_k − α2 B2 u_k),
    { y_k = P_C(v_k − α1 B1 v_k),
    { x_{k+1} = P_C[ρ_k f(x_k) + (I − ρ_k αF)S_k y_k], ∀k ≥ 1.

    Under suitable conditions, Cai, Shehu, and Iyiola [2] proved that x_k → u* ∈ Fix(S) ∩ Fix(G), which solves the hierarchical variational inequality (HVI):

    ⟨(αF − f)u*, v − u*⟩ ≥ 0, ∀v ∈ Fix(S) ∩ Fix(G).

    Moreover, Ceng and Shang [4] suggested an algorithm for solving the common fixed-point problem (CFPP) of finitely many nonexpansive mappings {S_r}_{r=1}^N and an asymptotically nonexpansive mapping S, together with the VIP.

    Algorithm 1.1. [4] Let x_1, x_0 ∈ H be arbitrary. Let γ > 0, ℓ ∈ (0, 1), ν ∈ (0, 1), and let x_k be known. Calculate x_{k+1} via the following iterative steps:

    Step 1. Set p_k = S_k x_k + ε_k(S_k x_k − S_k x_{k−1}) and calculate y_k = P_C(p_k − ζ_k A p_k), where ζ_k is the largest ζ ∈ {γ, γℓ, γℓ², ...} fulfilling ζ‖A p_k − A y_k‖ ≤ ν‖p_k − y_k‖.

    Step 2. Compute z_k = P_{C_k}(p_k − ζ_k A y_k), where C_k := {y ∈ H : ⟨p_k − ζ_k A p_k − y_k, y − y_k⟩ ≤ 0}.

    Step 3. Compute x_{k+1} = ρ_k f(x_k) + σ_k x_k + ((1 − σ_k)I − ρ_k αF)S_k z_k.

    Let k := k + 1 and return to Step 1.
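    The stepsize search in Step 1 is an Armijo-type backtracking and can be sketched numerically as follows. This is an illustrative sketch, not the authors' code: `A` and `proj_C` are caller-supplied callables, and all names and defaults are placeholders.

```python
import numpy as np

def armijo_stepsize(A, proj_C, p, gamma=1.0, ell=0.5, nu=0.5, max_backtracks=50):
    """Backtracking search for the largest zeta in {gamma, gamma*ell, ...}
    satisfying zeta*||A(p) - A(y)|| <= nu*||p - y||, where
    y = P_C(p - zeta*A(p)), as in Step 1 of Algorithm 1.1."""
    Ap = A(p)
    zeta = gamma
    for _ in range(max_backtracks):
        y = proj_C(p - zeta * Ap)
        if zeta * np.linalg.norm(Ap - A(y)) <= nu * np.linalg.norm(p - y):
            break
        zeta *= ell
    return zeta, y
```

    For an L-Lipschitz operator A, the backtracking terminates with an accepted stepsize satisfying min{γ, νℓ/L} ≤ ζ ≤ γ.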

    Motivated and inspired by the above works, the main purpose of this article is to design two modified inertial composite subgradient extragradient implicit rules for solving the SGEP with VIP and CFPP constraints. The suggested algorithms consist of the subgradient extragradient rule, an inertial iteration approach, and a hybrid deepest-descent technique. We prove that the proposed algorithms converge to a solution of the SGEP with the VIP and CFPP constraints, which also solves a certain HVI.

    Let C be a nonempty, convex, and closed subset of a real Hilbert space H. For all v, w ∈ C, an operator T : C → H is called

    ● asymptotically nonexpansive if there exists a sequence {ϖ_m}_{m=1}^∞ ⊂ [0, +∞) satisfying ϖ_m → 0 (m → ∞) and

    ‖T^m v − T^m w‖ ≤ ϖ_m‖v − w‖ + ‖v − w‖, ∀m ≥ 1.

    In particular, in the case of ϖ_m = 0, ∀m ≥ 1, T is known as nonexpansive.

    ● α-Lipschitzian if there exists α > 0 such that ‖Tv − Tw‖ ≤ α‖v − w‖;

    ● monotone if ⟨Tv − Tw, v − w⟩ ≥ 0;

    ● strongly monotone if there is ρ > 0 such that ⟨Tv − Tw, v − w⟩ ≥ ρ‖v − w‖²;

    ● pseudomonotone if ⟨Tv, w − v⟩ ≥ 0 ⇒ ⟨Tw, w − v⟩ ≥ 0;

    ● σ-inverse strongly monotone (σ-ism) if there is σ > 0 such that ⟨Tv − Tw, v − w⟩ ≥ σ‖Tv − Tw‖²;

    ● sequentially weakly continuous if, for every {v_l} ⊂ C, v_l ⇀ v implies T v_l ⇀ T v.

    Recall that the metric (or nearest-point) projection from H onto C is the mapping P_C : H → C which assigns to each point x ∈ H the unique point P_C(x) ∈ C satisfying the property

    ‖x − P_C(x)‖ = inf_{y∈C}‖x − y‖.

    The following results are well-known ([12]):

    (a) ‖P_C(y) − P_C(z)‖² ≤ ⟨y − z, P_C(y) − P_C(z)⟩, ∀y, z ∈ H;

    (b) z = P_C(y) ⇔ ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ H, ∀x ∈ C;

    (c) ‖y − z‖² ≥ ‖z − P_C(y)‖² + ‖y − P_C(y)‖², ∀y ∈ H, ∀z ∈ C;

    (d) ‖y − z‖² = ‖y‖² − 2⟨y − z, z⟩ − ‖z‖², ∀y, z ∈ H;

    (e) ‖ty + (1 − t)x‖² = t‖y‖² + (1 − t)‖x‖² − t(1 − t)‖y − x‖², ∀x, y ∈ H, ∀t ∈ [0, 1].
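    For intuition, when C is a box the projection P_C is just componentwise clipping, and the characterization (b) can be checked numerically. A minimal sketch (names are illustrative; the corner check suffices because x ↦ ⟨x − z, y − z⟩ is affine on the box):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box C = [lo, hi]^n (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Property (b): z = P_C(y) iff <x - z, y - z> <= 0 for every x in C.
y = np.array([2.0, -3.0, 0.5])
z = proj_box(y, -1.0, 1.0)                       # componentwise nearest point
corners = [np.array([sx, sy, sz]) for sx in (-1, 1)
           for sy in (-1, 1) for sz in (-1, 1)]  # extreme points of C
assert all(np.dot(x - z, y - z) <= 1e-12 for x in corners)
```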

    Lemma 2.1. [6] Suppose B : H → H is η-ism. Then,

    ‖(I − αB)y − (I − αB)z‖² ≤ ‖y − z‖² − α(2η − α)‖By − Bz‖², ∀y, z ∈ H, ∀α ≥ 0.

    In particular, when 0 ≤ α ≤ 2η, we have that I − αB is nonexpansive.

    Lemma 2.2. [6] Let B1 : H → H be ρ-ism and B2 : H → H be σ-ism. Suppose that the bifunctions Θ1, Θ2 : C×C → R satisfy the hypotheses (H1)–(H4). Then, G := T^{Θ1}_{α1}(I − α1 B1)T^{Θ2}_{α2}(I − α2 B2) is nonexpansive whenever 0 < α1 ≤ 2ρ and 0 < α2 ≤ 2σ.

    In particular, if Θ1 = Θ2 = 0, then, by Lemma 1.1, we deduce that T^{Θ1}_{α1} = T^{Θ2}_{α2} = P_C.

    Corollary 2.1. [5] Let B1 : H → H be ρ-ism and B2 : H → H be σ-ism. Define the operator G : H → C by G := P_C(I − α1 B1)P_C(I − α2 B2). Then G is nonexpansive whenever 0 < α1 ≤ 2ρ and 0 < α2 ≤ 2σ.

    Lemma 2.3. [9] If the operator A : C → H is continuous and pseudomonotone, then v ∈ VI(C, A) if and only if ⟨Aw, w − v⟩ ≥ 0, ∀w ∈ C.

    Lemma 2.4. [30] Suppose {a_l} ⊂ [0, ∞) satisfies a_{l+1} ≤ (1 − ω_l)a_l + ω_l ν_l, ∀l ≥ 1, where {ω_l} and {ν_l} satisfy: (i) {ω_l} ⊂ [0, 1]; (ii) ∑_{l=1}^∞ ω_l = ∞; and (iii) lim sup_{l→∞} ν_l ≤ 0 or ∑_{l=1}^∞ |ω_l ν_l| < ∞. Then, we have lim_{l→∞} a_l = 0.

    Lemma 2.5. [18] Let X be a Banach space with a weakly continuous duality mapping. Let C be a nonempty closed convex subset of X and T : C → C an asymptotically nonexpansive mapping with Fix(T) ≠ ∅. Then I − T is demiclosed at zero.

    Lemma 2.6. [19] Suppose that the real sequence {Γ_m} does not decrease at infinity, in the sense that there exists a subsequence {Γ_{m_k}} of {Γ_m} such that Γ_{m_k} < Γ_{m_k + 1}, ∀k ≥ 1. Define the integer sequence {ϕ(m)}_{m≥m0} by

    ϕ(m) = max{k ≤ m : Γ_k < Γ_{k+1}}.

    Then,

    (i) ϕ(m0) ≤ ϕ(m0 + 1) ≤ ⋯ and ϕ(m) → ∞ as m → ∞;

    (ii) For all m ≥ m0, Γ_{ϕ(m)} ≤ Γ_{ϕ(m)+1} and Γ_m ≤ Γ_{ϕ(m)+1}.

    Lemma 2.7. [30] Let λ ∈ (0, 1], let S : C → C be a nonexpansive operator, and let F : C → H be a κ-Lipschitzian and η-strongly monotone operator. Set S^λ v := (I − λαF)Sv, ∀v ∈ C. If 0 < α < 2η/κ², then ‖S^λ v − S^λ w‖ ≤ (1 − λτ)‖v − w‖, ∀v, w ∈ C, where τ = 1 − √(1 − α(2η − ακ²)) ∈ (0, 1].

    Let the operator S_r be nonexpansive on H for all r = 1, ..., N, and let S : H → H be a ϖ_n-asymptotically nonexpansive operator. Let A : H → H be an L-Lipschitz pseudomonotone operator satisfying ‖Ax‖ ≤ lim inf_{n→∞}‖Ax_n‖ whenever x_n ⇀ x. Let Θ1, Θ2 : C×C → R be two bifunctions fulfilling the hypotheses (H1)–(H4). Let B1 : H → H be ρ-ism and B2 : H → H be σ-ism. Let f : H → H be δ-contractive and F : H → H be κ-Lipschitzian and η-strongly monotone with δ < τ := 1 − √(1 − α(2η − ακ²)) for α ∈ (0, 2η/κ²). Suppose that the sequences {ε_n} ⊂ [0, 1], {ξ_n} ⊂ (0, 1], and {ρ_n}, {σ_n} ⊂ (0, 1) satisfy

    (i) lim_{n→∞} ρ_n = 0 and ∑_{n=1}^∞ ρ_n = ∞;

    (ii) lim_{n→∞} ϖ_n/ρ_n = 0 and sup_{n≥1} ε_n/ρ_n < ∞;

    (iii) 0 < lim inf_{n→∞} σ_n ≤ lim sup_{n→∞} σ_n < 1;

    (iv) lim sup_{n→∞} ξ_n < 1.

    Let γ > 0, ν ∈ (0, 1), ℓ ∈ (0, 1), α1 ∈ (0, 2ρ), and α2 ∈ (0, 2σ) be five constants. Set S_0 := S and G := T^{Θ1}_{α1}(I − α1 B1)T^{Θ2}_{α2}(I − α2 B2). Suppose that Δ := ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) ≠ ∅.

    Algorithm 3.1. Let x_1, x_0 ∈ H be arbitrary. Let x_n be known and compute x_{n+1} below:

    Step 1. Set q_n = S_n x_n + ε_n(S_n x_n − S_n x_{n−1}) and calculate

    { p_n = ξ_n q_n + (1 − ξ_n)u_n,
    { v_n = T^{Θ2}_{α2}(p_n − α2 B2 p_n),
    { u_n = T^{Θ1}_{α1}(v_n − α1 B1 v_n).

    Step 2. Compute y_n = P_C(p_n − ζ_n A p_n), where ζ_n is the largest ζ ∈ {γ, γℓ, γℓ², ...} such that

    ζ‖A p_n − A y_n‖ ≤ ν‖p_n − y_n‖. (3.1)

    Step 3. Compute t_n = σ_n x_n + (1 − σ_n)z_n with z_n = P_{C_n}(p_n − ζ_n A y_n) and

    C_n := {y ∈ H : ⟨p_n − ζ_n A p_n − y_n, y − y_n⟩ ≤ 0}.

    Step 4. Compute

    x_{n+1} = ρ_n f(x_n) + (I − ρ_n αF)S_n t_n, (3.2)

    where S_n is constructed as in Algorithm 1.1. Let n := n + 1 and return to Step 1.
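    A single iteration of Algorithm 3.1 can be sketched numerically as follows. This is an illustrative sketch rather than the authors' implementation: all operators are caller-supplied callables, the parameter defaults are placeholders, the implicit relation p_n = ξ_n q_n + (1 − ξ_n)G p_n is approximated by a fixed-point loop (justified since G is nonexpansive by Lemma 2.2), and the projection onto the half-space C_n is computed in closed form.

```python
import numpy as np

def algorithm_3_1_step(x_prev, x, S_n, A, B1, B2, T1, T2, proj_C, f, F,
                       eps_n=0.1, xi_n=0.5, sigma_n=0.5, rho_n=0.1,
                       alpha=1.0, alpha1=0.3, alpha2=0.3,
                       gamma=1.0, ell=0.5, nu=0.5):
    """One iteration of Algorithm 3.1 (illustrative sketch; all operator
    arguments are user-supplied callables, defaults are placeholders)."""
    # Step 1: inertial term, then the implicit relation p = xi*q + (1-xi)*G(p)
    # with G = T1 o (I - alpha1*B1) o T2 o (I - alpha2*B2), approximated by
    # fixed-point iteration (the map is a contraction since G is nonexpansive).
    q = S_n(x) + eps_n * (S_n(x) - S_n(x_prev))
    def G(p):
        v = T2(p - alpha2 * B2(p))
        return T1(v - alpha1 * B1(v))
    p = q
    for _ in range(100):
        p = xi_n * q + (1.0 - xi_n) * G(p)
    # Step 2: Armijo-type backtracking (3.1) for the largest zeta in
    # {gamma, gamma*ell, ...} with zeta*||A(p)-A(y)|| <= nu*||p-y||.
    Ap, zeta = A(p), gamma
    for _ in range(60):
        y = proj_C(p - zeta * Ap)
        if zeta * np.linalg.norm(Ap - A(y)) <= nu * np.linalg.norm(p - y):
            break
        zeta *= ell
    # Step 3: project p - zeta*A(y) onto the half-space
    # C_n = {w : <p - zeta*A(p) - y, w - y> <= 0} (closed-form projection).
    d = (p - zeta * Ap) - y
    w = p - zeta * A(y)
    z = w - (max(np.dot(d, w - y), 0.0) / max(np.dot(d, d), 1e-16)) * d
    t = sigma_n * x + (1.0 - sigma_n) * z
    # Step 4: hybrid deepest-descent / viscosity update.
    St = S_n(t)
    return rho_n * f(x) + St - rho_n * alpha * F(St)
```

    For example, with H = R, C = [−1, 1], A = I, B1 = B2 = 0, T1 = T2 = P_C, and S_n = I, repeated calls drive the iterate toward the solution 0 of the corresponding VIP.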

    Lemma 3.1. [21] min{γ, νℓ/L} ≤ ζ_n ≤ γ.

    Lemma 3.2. Let p ∈ Δ and q = T^{Θ2}_{α2}(p − α2 B2 p). Then,

    ‖z_n − p‖² ≤ ‖q_n − p‖² − (1 − ξ_n)[‖q_n − p_n‖² − α1(α1 − 2ρ)‖B1 v_n − B1 q‖² − α2(α2 − 2σ)‖B2 p_n − B2 p‖²] − (1 − ν)[‖y_n − z_n‖² + ‖y_n − p_n‖²],

    where v_n = T^{Θ2}_{α2}(p_n − α2 B2 p_n).

    Proof. According to Lemma 2.2, G is nonexpansive, and since ξ_n ∈ (0, 1], the mapping p ↦ ξ_n q_n + (1 − ξ_n)G p is a contraction; hence there exists a unique point p_n ∈ H satisfying p_n = ξ_n q_n + (1 − ξ_n)G p_n. Since p ∈ C_n, we have

    ‖z_n − p‖² ≤ ⟨p_n − ζ_n A y_n − p, z_n − p⟩ = (1/2)(‖p_n − p‖² − ‖z_n − p_n‖² + ‖z_n − p‖²) − ζ_n⟨A y_n, z_n − p⟩,

    which implies that

    ‖z_n − p‖² ≤ ‖p_n − p‖² − ‖z_n − p_n‖² − 2ζ_n⟨A y_n, z_n − p⟩.

    Noting that z_n = P_{C_n}(p_n − ζ_n A y_n), we have ⟨p_n − ζ_n A p_n − y_n, z_n − y_n⟩ ≤ 0. Owing to the pseudomonotonicity of A, by (3.1), we get

    ‖z_n − p‖² ≤ ‖p_n − p‖² − ‖z_n − p_n‖² − 2ζ_n⟨A y_n, y_n − p⟩ − 2ζ_n⟨A y_n, z_n − y_n⟩ ≤ ‖p_n − p‖² − ‖z_n − p_n‖² − 2ζ_n⟨A y_n, z_n − y_n⟩ = ‖p_n − p‖² − ‖z_n − y_n‖² − ‖y_n − p_n‖² + 2⟨p_n − ζ_n A p_n − y_n, z_n − y_n⟩ + 2ζ_n⟨A p_n − A y_n, z_n − y_n⟩ ≤ ‖p_n − p‖² − ‖z_n − y_n‖² − ‖y_n − p_n‖² + 2ν‖p_n − y_n‖‖z_n − y_n‖ ≤ ‖p_n − p‖² − ‖z_n − y_n‖² − ‖y_n − p_n‖² + ν(‖p_n − y_n‖² + ‖z_n − y_n‖²) = ‖p_n − p‖² − (1 − ν)[‖y_n − p_n‖² + ‖y_n − z_n‖²]. (3.3)

    Observe that u_n = T^{Θ1}_{α1}(v_n − α1 B1 v_n), v_n = T^{Θ2}_{α2}(p_n − α2 B2 p_n), and q = T^{Θ2}_{α2}(p − α2 B2 p). Then u_n = G p_n. Applying Lemma 2.1, we get

    ‖u_n − p‖² ≤ ‖v_n − q‖² + α1(α1 − 2ρ)‖B1 v_n − B1 q‖²

    and

    ‖v_n − q‖² ≤ ‖p_n − p‖² + α2(α2 − 2σ)‖B2 p_n − B2 p‖².

    Then,

    ‖u_n − p‖² ≤ ‖p_n − p‖² + α1(α1 − 2ρ)‖B1 v_n − B1 q‖² + α2(α2 − 2σ)‖B2 p_n − B2 p‖².

    Besides, thanks to p_n = ξ_n q_n + (1 − ξ_n)u_n, we get ‖p_n − p‖² ≤ ξ_n⟨q_n − p, p_n − p⟩ + (1 − ξ_n)‖p_n − p‖², which results in ‖p_n − p‖² ≤ ⟨q_n − p, p_n − p⟩ = (1/2)[‖q_n − p‖² + ‖p_n − p‖² − ‖q_n − p_n‖²]. So,

    ‖p_n − p‖² ≤ ‖q_n − p‖² − ‖q_n − p_n‖². (3.4)

    Then,

    ‖p_n − p‖² ≤ ξ_n‖q_n − p‖² + (1 − ξ_n)‖u_n − p‖² ≤ ξ_n‖q_n − p‖² + (1 − ξ_n)[‖p_n − p‖² + α1(α1 − 2ρ)‖B1 v_n − B1 q‖² + α2(α2 − 2σ)‖B2 p_n − B2 p‖²] ≤ ξ_n‖q_n − p‖² + (1 − ξ_n)[‖q_n − p‖² − ‖q_n − p_n‖² + α1(α1 − 2ρ)‖B1 v_n − B1 q‖² + α2(α2 − 2σ)‖B2 p_n − B2 p‖²] = ‖q_n − p‖² − (1 − ξ_n)[‖q_n − p_n‖² − α1(α1 − 2ρ)‖B1 v_n − B1 q‖² − α2(α2 − 2σ)‖B2 p_n − B2 p‖²],

    which, together with (3.3), yields

    ‖z_n − p‖² ≤ ‖p_n − p‖² − (1 − ν)[‖y_n − p_n‖² + ‖y_n − z_n‖²] ≤ ‖q_n − p‖² − (1 − ξ_n)[‖q_n − p_n‖² − α1(α1 − 2ρ)‖B1 v_n − B1 q‖² − α2(α2 − 2σ)‖B2 p_n − B2 p‖²] − (1 − ν)[‖y_n − p_n‖² + ‖y_n − z_n‖²].

    This ensures that the conclusion holds.

    Lemma 3.3. Assume that

    (i) the sequences {p_n}, {q_n}, {y_n}, and {z_n} are bounded;

    (ii) lim_{n→∞}(x_{n+1} − x_n) = lim_{n→∞}(q_n − z_n) = lim_{n→∞}(x_n − y_n) = lim_{n→∞}(S_{n+1}x_n − S_n x_n) = 0.

    Then ω_w(x_n) ⊂ Δ, where ω_w(x_n) = {z ∈ H : there is some {x_{n_i}} ⊂ {x_n} such that x_{n_i} ⇀ z}.

    Proof. Take an arbitrary fixed z ∈ ω_w({x_n}). Then, there is a subsequence {n_i} ⊂ {n} such that x_{n_i} ⇀ z and hence y_{n_i} ⇀ z. Next, we show z ∈ Δ. Using Lemma 3.2, we deduce

    (1 − ξ_n)[‖q_n − p_n‖² − α1(α1 − 2ρ)‖B1 v_n − B1 q‖² − α2(α2 − 2σ)‖B2 p_n − B2 p‖²] + (1 − ν)[‖y_n − p_n‖² + ‖y_n − z_n‖²] ≤ ‖q_n − p‖² − ‖z_n − p‖² ≤ ‖q_n − z_n‖(‖q_n − p‖ + ‖z_n − p‖).

    Because ‖q_n − z_n‖ → 0, ν ∈ (0, 1), α1 ∈ (0, 2ρ), α2 ∈ (0, 2σ), and 0 < lim inf_{n→∞}(1 − ξ_n), we deduce that

    lim_{n→∞}‖B2 p_n − B2 p‖ = lim_{n→∞}‖B1 v_n − B1 q‖ = 0, (3.5)

    and

    lim_{n→∞}‖y_n − z_n‖ = lim_{n→∞}‖q_n − p_n‖ = lim_{n→∞}‖y_n − p_n‖ = 0.

    Hence,

    ‖x_n − q_n‖ ≤ ‖x_n − y_n‖ + ‖y_n − z_n‖ + ‖z_n − q_n‖ → 0 (n → ∞),
    ‖x_n − p_n‖ ≤ ‖x_n − q_n‖ + ‖q_n − p_n‖ → 0 (n → ∞),
    ‖p_n − z_n‖ ≤ ‖p_n − y_n‖ + ‖y_n − z_n‖ → 0 (n → ∞),

    and

    ‖x_n − z_n‖ ≤ ‖x_n − q_n‖ + ‖q_n − p_n‖ + ‖p_n − z_n‖ → 0 (n → ∞).

    Note that

    ‖q_n − S_n x_n‖ = ε_n‖S_n x_n − S_n x_{n−1}‖ ≤ (1 + ϖ_n)ε_n‖x_n − x_{n−1}‖ → 0.

    Therefore,

    ‖x_n − S_n x_n‖ ≤ ‖x_n − q_n‖ + ‖q_n − S_n x_n‖ → 0 (n → ∞).

    Note that

    ‖x_n − S x_n‖ ≤ ‖x_n − S_n x_n‖ + ‖S_n x_n − S_{n+1} x_n‖ + ‖S_{n+1} x_n − S x_n‖ ≤ ‖x_n − S_n x_n‖ + ‖S_n x_n − S_{n+1} x_n‖ + (1 + ϖ_1)‖S_n x_n − x_n‖ = (2 + ϖ_1)‖x_n − S_n x_n‖ + ‖S_n x_n − S_{n+1} x_n‖ → 0. (3.6)

    Observe that

    ‖u_n − p‖² ≤ ⟨v_n − q, u_n − p⟩ + α1⟨B1 q − B1 v_n, u_n − p⟩ ≤ (1/2)[‖v_n − q‖² − ‖v_n − u_n + p − q‖² + ‖u_n − p‖²] + α1‖B1 q − B1 v_n‖‖u_n − p‖,

    which arrives at

    ‖u_n − p‖² ≤ ‖v_n − q‖² − ‖v_n − u_n + p − q‖² + 2α1‖B1 q − B1 v_n‖‖u_n − p‖.

    Similarly, we get

    ‖v_n − q‖² ≤ ‖p_n − p‖² − ‖p_n − v_n + q − p‖² + 2α2‖B2 p − B2 p_n‖‖v_n − q‖.

    Combining the last two inequalities, we deduce that

    ‖u_n − p‖² ≤ ‖p_n − p‖² − ‖p_n − v_n + q − p‖² − ‖v_n − u_n + p − q‖² + 2α1‖B1 q − B1 v_n‖‖u_n − p‖ + 2α2‖B2 p − B2 p_n‖‖v_n − q‖.

    Hence,

    ‖z_n − p‖² ≤ ‖p_n − p‖² ≤ ξ_n‖q_n − p‖² + (1 − ξ_n)‖u_n − p‖² ≤ ξ_n‖q_n − p‖² + (1 − ξ_n)[‖q_n − p‖² − ‖p_n − v_n + q − p‖² − ‖v_n − u_n + p − q‖² + 2α1‖B1 q − B1 v_n‖‖u_n − p‖ + 2α2‖B2 p − B2 p_n‖‖v_n − q‖] ≤ ‖q_n − p‖² − (1 − ξ_n)[‖p_n − v_n + q − p‖² + ‖v_n − u_n + p − q‖²] + 2α2‖B2 p − B2 p_n‖‖v_n − q‖ + 2α1‖B1 q − B1 v_n‖‖u_n − p‖.

    This immediately implies that

    (1 − ξ_n)[‖p_n − v_n + q − p‖² + ‖v_n − u_n + p − q‖²] ≤ ‖q_n − p‖² − ‖z_n − p‖² + 2α2‖B2 p − B2 p_n‖‖v_n − q‖ + 2α1‖B1 q − B1 v_n‖‖u_n − p‖ ≤ ‖q_n − z_n‖(‖q_n − p‖ + ‖z_n − p‖) + 2α2‖B2 p − B2 p_n‖‖v_n − q‖ + 2α1‖B1 q − B1 v_n‖‖u_n − p‖.

    Since ‖q_n − z_n‖ → 0 and 0 < lim inf_{n→∞}(1 − ξ_n), from (3.5) and the boundedness of {u_n}, {v_n}, {q_n}, and {z_n} we get that

    lim_{n→∞}‖p_n − v_n + q − p‖ = lim_{n→∞}‖v_n − u_n + p − q‖ = 0,

    which hence yields

    ‖p_n − G p_n‖ = ‖p_n − u_n‖ ≤ ‖p_n − v_n + q − p‖ + ‖v_n − u_n + p − q‖ → 0 (n → ∞).

    This immediately implies that

    ‖x_n − G x_n‖ ≤ ‖x_n − p_n‖ + ‖p_n − G p_n‖ + ‖G p_n − G x_n‖ ≤ 2‖x_n − p_n‖ + ‖p_n − G p_n‖ → 0 (n → ∞). (3.7)

    Next, we prove lim_{n→∞}‖x_n − S_n x_n‖ = 0. In fact, we have

    ‖t_n − x_n‖ ≤ ‖x_n − z_n‖ → 0,

    and

    ‖S_n t_n − x_n‖ = ‖x_{n+1} − x_n − ρ_n(f(x_n) − αF S_n t_n)‖ ≤ ‖x_{n+1} − x_n‖ + ρ_n(‖f(x_n)‖ + α‖F S_n t_n‖) → 0.

    Hence,

    ‖x_n − S_n x_n‖ ≤ ‖x_n − S_n t_n‖ + ‖S_n x_n − S_n t_n‖ ≤ ‖x_n − S_n t_n‖ + ‖x_n − t_n‖ → 0.

    Now, we show ‖x_n − S_r x_n‖ → 0, ∀r ∈ {1, ..., N}. For 1 ≤ l ≤ N, it holds that

    ‖x_n − S_{n+l} x_n‖ ≤ ‖x_n − x_{n+l}‖ + ‖S_{n+l} x_{n+l} − S_{n+l} x_n‖ + ‖x_{n+l} − S_{n+l} x_{n+l}‖ ≤ 2‖x_n − x_{n+l}‖ + ‖x_{n+l} − S_{n+l} x_{n+l}‖.

    This, together with the assumptions, implies that ‖x_n − S_{n+l} x_n‖ → 0, 1 ≤ l ≤ N. So,

    lim_{n→∞}‖x_n − S_r x_n‖ = 0, 1 ≤ r ≤ N. (3.8)

    Now, we show z ∈ VI(C, A). If Az = 0, then z ∈ VI(C, A) trivially. Next, we suppose that Az ≠ 0. By the condition on A, we conclude that 0 < ‖Az‖ ≤ lim inf_{i→∞}‖A y_{n_i}‖ because y_{n_i} ⇀ z. Observe that y_n = P_C(p_n − ζ_n A p_n). It follows that ⟨p_n − ζ_n A p_n − y_n, y − y_n⟩ ≤ 0, ∀y ∈ C. Therefore,

    (1/ζ_n)⟨p_n − y_n, y − y_n⟩ + ⟨A p_n, y_n − p_n⟩ ≤ ⟨A p_n, y − p_n⟩, ∀y ∈ C. (3.9)

    Since {A p_n} and {y_n} are bounded, from (3.9) and Lemma 3.1, we obtain lim inf_{i→∞}⟨A p_{n_i}, y − p_{n_i}⟩ ≥ 0, ∀y ∈ C. Meanwhile, ⟨A y_n, y − y_n⟩ = ⟨A y_n − A p_n, y − p_n⟩ + ⟨A p_n, y − p_n⟩ + ⟨A y_n, p_n − y_n⟩. Using ‖p_n − y_n‖ → 0 and the uniform continuity of A, we get ‖A p_n − A y_n‖ → 0, which hence attains lim inf_{i→∞}⟨A y_{n_i}, y − y_{n_i}⟩ ≥ 0, ∀y ∈ C.

    To attain z ∈ VI(C, A), let {λ_i} ⊂ (0, 1) be a sequence such that λ_i ↓ 0 as i → ∞. For every i ≥ 1, let k_i be the smallest positive integer satisfying

    ⟨A y_{n_j}, y − y_{n_j}⟩ + λ_i ≥ 0, ∀j ≥ k_i. (3.10)

    Put υ_{k_i} = A y_{k_i}/‖A y_{k_i}‖², and hence get ⟨A y_{k_i}, υ_{k_i}⟩ = 1, ∀i ≥ 1. So, from (3.10), one gets ⟨A y_{k_i}, y + λ_i υ_{k_i} − y_{k_i}⟩ ≥ 0, ∀i ≥ 1. Since A is pseudomonotone, we have ⟨A(y + λ_i υ_{k_i}), y + λ_i υ_{k_i} − y_{k_i}⟩ ≥ 0, ∀i ≥ 1. So,

    ⟨A y, y − y_{k_i}⟩ ≥ ⟨A y − A(y + λ_i υ_{k_i}), y + λ_i υ_{k_i} − y_{k_i}⟩ − λ_i⟨A y, υ_{k_i}⟩, ∀i ≥ 1. (3.11)

    Let us show that lim_{i→∞} λ_i υ_{k_i} = 0. In fact, note that {y_{k_i}} ⊂ {y_{n_i}} and λ_i → 0 as i → ∞. So it follows that 0 ≤ lim sup_{i→∞} λ_i‖υ_{k_i}‖ = lim sup_{i→∞} λ_i/‖A y_{k_i}‖ ≤ lim sup_{i→∞} λ_i / lim inf_{i→∞}‖A y_{n_i}‖ = 0. Therefore, one gets λ_i υ_{k_i} → 0 as i → ∞. Thus, letting i → ∞ in (3.11), we deduce that ⟨A y, y − z⟩ = lim inf_{i→∞}⟨A y, y − y_{k_i}⟩ ≥ 0, ∀y ∈ C. We apply Lemma 2.3 to conclude z ∈ VI(C, A).

    Finally, we prove z ∈ Δ. Note that x_{n_i} ⇀ z and ‖x_{n_i} − S_r x_{n_i}‖ → 0, r ∈ {1, ..., N} (due to (3.8)). Since I − S_r (1 ≤ r ≤ N) is demiclosed at zero by Lemma 2.5, we attain z ∈ ∩_{r=1}^N Fix(S_r). By (3.6) and (3.7), we have ‖x_{n_i} − S x_{n_i}‖ → 0 and ‖x_{n_i} − G x_{n_i}‖ → 0, respectively. Since I − S and I − G are also demiclosed at zero, we have z ∈ Fix(S) ∩ Fix(G). Therefore, z ∈ ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) = Δ.

    Theorem 3.1. We have the following equivalence:

    x_n → u* ∈ Δ ⇔ ‖S_{n+1} x_n − S_n x_n‖ → 0 and ‖x_{n+1} − x_n‖ → 0,

    where u* ∈ Δ solves the HVI: ⟨(αF − f)u*, p − u*⟩ ≥ 0, ∀p ∈ Δ.

    Proof. According to the conditions, we may assume that ϖ_n ≤ ρ_n(τ − δ)/2 and {σ_n} ⊂ [a, b] ⊂ (0, 1) for all n ≥ 1. For all x, y ∈ H, by Lemma 2.7, we obtain

    ‖P_Δ(I − αF + f)(x) − P_Δ(I − αF + f)(y)‖ ≤ [1 − (τ − δ)]‖x − y‖,

    which implies that P_Δ(I − αF + f) is contractive. Set u* = P_Δ(I − αF + f)(u*). Therefore, there is a unique solution u* ∈ Δ = ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) of the HVI:

    ⟨(αF − f)u*, p − u*⟩ ≥ 0, ∀p ∈ Δ. (3.12)

    If x_n → u* ∈ Δ, then we know that u* = S u* and

    ‖S_n x_n − S_{n+1} x_n‖ ≤ ‖S_n x_n − u*‖ + ‖u* − S_{n+1} x_n‖ ≤ (1 + ϖ_n)‖x_n − u*‖ + (1 + ϖ_{n+1})‖u* − x_n‖ = (2 + ϖ_n + ϖ_{n+1})‖x_n − u*‖ → 0.

    Note that

    ‖x_{n+1} − x_n‖ ≤ ‖u* − x_{n+1}‖ + ‖x_n − u*‖ → 0.

    Now, we prove the sufficiency.

    Step 1. Let p ∈ Δ. Then Gp = p, p = P_C(p − ζ_n A p), S_n p = p, ∀n ≥ 0, and the inequalities (3.3) and (3.4) hold, i.e.,

    ‖z_n − p‖² ≤ ‖p_n − p‖² − (1 − ν)‖y_n − p_n‖² − (1 − ν)‖y_n − z_n‖², (3.13)

    and

    ‖p_n − p‖² ≤ ‖q_n − p‖² − ‖q_n − p_n‖². (3.14)

    Combining (3.13) and (3.14) guarantees that

    ‖z_n − p‖ ≤ ‖p_n − p‖ ≤ ‖q_n − p‖. (3.15)

    Observe that

    ‖q_n − p‖ ≤ ‖S_n x_n − p‖ + ε_n‖S_n x_n − S_n x_{n−1}‖ ≤ (1 + ϖ_n)[‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖] = (1 + ϖ_n)[‖x_n − p‖ + ρ_n·(ε_n/ρ_n)‖x_n − x_{n−1}‖]. (3.16)

    Since sup_{n≥1}(ε_n/ρ_n)‖x_n − x_{n−1}‖ < ∞, there is a constant M1 > 0 satisfying

    (ε_n/ρ_n)‖x_n − x_{n−1}‖ ≤ M1. (3.17)

    Combining (3.15)–(3.17), we get

    ‖z_n − p‖ ≤ ‖p_n − p‖ ≤ ‖q_n − p‖ ≤ (1 + ϖ_n)[‖x_n − p‖ + ρ_n M1], ∀n ≥ 1. (3.18)

    Also, it is readily known that

    ‖t_n − p‖ ≤ σ_n‖x_n − p‖ + (1 − σ_n)‖z_n − p‖ ≤ (1 + ϖ_n)[‖x_n − p‖ + ρ_n M1]. (3.19)

    Thus, using (3.19) and ϖ_n ≤ ρ_n(τ − δ)/2, ∀n ≥ 1, from Lemma 2.7, we receive

    ‖x_{n+1} − p‖ = ‖ρ_n f(x_n) + (I − ρ_n αF)S_n t_n − p‖ = ‖ρ_n(f(x_n) − f(p)) + (I − ρ_n αF)S_n t_n − (I − ρ_n αF)p + ρ_n(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + (1 − ρ_n τ)‖t_n − p‖ + ρ_n‖(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + (1 − ρ_n τ)(1 + ϖ_n)[‖x_n − p‖ + ρ_n M1] + ρ_n‖(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + [(1 − ρ_n τ) + ϖ_n]‖x_n − p‖ + 2ρ_n M1 + ρ_n‖(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + [(1 − ρ_n τ) + ρ_n(τ − δ)/2]‖x_n − p‖ + 2ρ_n M1 + ρ_n‖(f − αF)p‖ = [1 − ρ_n(τ − δ)/2]‖x_n − p‖ + (ρ_n(τ − δ)/2)·(2(2M1 + ‖(f − αF)p‖)/(τ − δ)).

    Hence,

    ‖x_n − p‖ ≤ max{‖x_1 − p‖, 2(2M1 + ‖(f − αF)p‖)/(τ − δ)}, ∀n ≥ 1.

    We deduce that the sequences {xn}, {pn},{qn},{yn},{zn},{tn},{f(xn)}, {Sntn}, and {Snxn} are bounded.

    Step 2. Observe that t_n − p = σ_n(x_n − p) + (1 − σ_n)(z_n − p) and

    x_{n+1} − p = ρ_n(f(x_n) − f(p)) + (I − ρ_n αF)S_n t_n − (I − ρ_n αF)p + ρ_n(f − αF)p.

    Utilizing Lemma 2.7, we attain

    ‖x_{n+1} − p‖² ≤ ‖ρ_n(f(x_n) − f(p)) + (I − ρ_n αF)S_n t_n − (I − ρ_n αF)p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [ρ_n δ‖x_n − p‖ + (1 − ρ_n τ)‖t_n − p‖]² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)‖t_n − p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ = ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[σ_n‖x_n − p‖² + (1 − σ_n)‖z_n − p‖² − σ_n(1 − σ_n)‖x_n − z_n‖²] + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[σ_n‖x_n − p‖² + (1 − σ_n)‖z_n − p‖²] − (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² + ρ_n M2, (3.20)

    where M2 > 0 is a constant such that sup_{n≥1} 2‖(f − αF)p‖‖x_{n+1} − p‖ ≤ M2. By (3.20) and Lemma 3.2, we have

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ){σ_n‖x_n − p‖² + (1 − σ_n)[‖q_n − p‖² − (1 − ξ_n)‖q_n − p_n‖² − (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)]} − (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² + ρ_n M2. (3.21)

    Taking (3.18) into account, we obtain

    ‖q_n − p‖² ≤ (1 + ϖ_n)²(‖x_n − p‖ + ρ_n M1)² = (‖x_n − p‖ + ρ_n M1)² + ϖ_n(2 + ϖ_n)(‖x_n − p‖ + ρ_n M1)² = ‖x_n − p‖² + ρ_n{M1(2‖x_n − p‖ + ρ_n M1) + (ϖ_n(2 + ϖ_n)/ρ_n)(‖x_n − p‖ + ρ_n M1)²} ≤ ‖x_n − p‖² + ρ_n M3, (3.22)

    where M3 > 0 is a constant such that sup_{n≥1}{M1(2‖x_n − p‖ + ρ_n M1) + (ϖ_n(2 + ϖ_n)/ρ_n)(‖x_n − p‖ + ρ_n M1)²} ≤ M3. Based on (3.21) and (3.22), we get

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ){σ_n‖x_n − p‖² + (1 − σ_n)[‖x_n − p‖² + ρ_n M3 − (1 − ξ_n)‖q_n − p_n‖² − (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)]} − (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² + ρ_n M2 ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² − (1 − ρ_n τ)(1 − σ_n)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] − (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² + ρ_n M3 + ρ_n M2 ≤ ‖x_n − p‖² − (1 − ρ_n τ)(1 − σ_n)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] − (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² + ρ_n M4,

    where M4 := M3 + M2. This immediately implies that

    (1 − ρ_n τ)(1 − σ_n)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] + (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + ρ_n M4. (3.23)

    Step 3. Note that

    ‖q_n − p‖² ≤ (1 + ϖ_n)²(‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖)² = (1 + ϖ_n)²‖x_n − p‖² + (1 + ϖ_n)²ε_n‖x_n − x_{n−1}‖(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖) = (1 + ϖ_n(2 + ϖ_n))‖x_n − p‖² + (1 + ϖ_n)²ε_n‖x_n − x_{n−1}‖(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖). (3.24)

    Combining (3.18), (3.20), and (3.24), we receive

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[σ_n‖x_n − p‖² + (1 − σ_n)‖z_n − p‖²] + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)](1 + ϖ_n)²(‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖)² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)]{(1 + ϖ_n(2 + ϖ_n))‖x_n − p‖² + (1 + ϖ_n)²ε_n‖x_n − x_{n−1}‖(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖)} + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² + ε_n‖x_n − x_{n−1}‖(1 + ϖ_n)²(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖) + ϖ_n(2 + ϖ_n)‖x_n − p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² + (ε_n‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n))M + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ = [1 − ρ_n(τ − δ)]‖x_n − p‖² + ρ_n(τ − δ)[2⟨(f − αF)p, x_{n+1} − p⟩/(τ − δ) + (M/(τ − δ))((ε_n/ρ_n)‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n)/ρ_n)], (3.25)

    where M > 0 is a constant such that sup_{n≥1}{‖x_n − p‖, ε_n‖x_n − x_{n−1}‖, ‖x_n − p‖²} ≤ M.

    Step 4. Taking p = u*, by (3.25), we have

    ‖x_{n+1} − u*‖² ≤ [1 − ρ_n(τ − δ)]‖x_n − u*‖² + ρ_n(τ − δ)[2⟨(f − αF)u*, x_{n+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_n/ρ_n)‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n)/ρ_n)]. (3.26)

    Set Γ_n = ‖x_n − u*‖².

    Case 1. There is an integer n0 ≥ 1 such that {Γ_n} is nonincreasing. In this case, lim_{n→∞} Γ_n exists and is finite. From (3.23), we have

    (1 − ρ_n τ)(1 − b)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] + (1 − ρ_n τ)a(1 − b)‖x_n − z_n‖² ≤ (1 − ρ_n τ)(1 − σ_n)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] + (1 − ρ_n τ)σ_n(1 − σ_n)‖x_n − z_n‖² ≤ ‖x_n − u*‖² − ‖x_{n+1} − u*‖² + ρ_n M4 = Γ_n − Γ_{n+1} + ρ_n M4.

    Noticing 0 < lim inf_{n→∞}(1 − ξ_n), ρ_n → 0, and Γ_n − Γ_{n+1} → 0, for ν ∈ (0, 1) one has lim_{n→∞}‖q_n − p_n‖ = lim_{n→∞}‖y_n − z_n‖ = 0 and lim_{n→∞}‖y_n − p_n‖ = lim_{n→∞}‖x_n − z_n‖ = 0. Thus, we get

    ‖x_n − y_n‖ ≤ ‖x_n − z_n‖ + ‖y_n − z_n‖ → 0, (3.27)

    and

    ‖q_n − z_n‖ ≤ ‖q_n − p_n‖ + ‖p_n − y_n‖ + ‖y_n − z_n‖ → 0 (n → ∞). (3.28)

    Since {x_n} is bounded, there is a subsequence {x_{n_k}} of {x_n} satisfying x_{n_k} ⇀ x̃ and

    lim sup_{n→∞}⟨(f − αF)u*, x_n − u*⟩ = lim_{k→∞}⟨(f − αF)u*, x_{n_k} − u*⟩. (3.29)

    In the light of (3.29), one gets

    lim sup_{n→∞}⟨(f − αF)u*, x_n − u*⟩ = lim_{k→∞}⟨(f − αF)u*, x_{n_k} − u*⟩ = ⟨(f − αF)u*, x̃ − u*⟩. (3.30)

    Since x_{n+1} − x_n → 0, y_n − x_n → 0, z_n − q_n → 0, and S_{n+1}x_n − S_n x_n → 0, applying Lemma 3.3, we conclude that x̃ ∈ ω_w({x_n}) ⊂ Δ. Combining (3.12) and (3.30), we get

    lim sup_{n→∞}⟨(f − αF)u*, x_n − u*⟩ = ⟨(f − αF)u*, x̃ − u*⟩ ≤ 0. (3.31)

    Since ‖x_n − x_{n+1}‖ → 0, we have

    lim sup_{n→∞}⟨(f − αF)u*, x_{n+1} − u*⟩ = lim sup_{n→∞}[⟨(f − αF)u*, x_{n+1} − x_n⟩ + ⟨(f − αF)u*, x_n − u*⟩] ≤ lim sup_{n→∞}[‖(f − αF)u*‖‖x_{n+1} − x_n‖ + ⟨(f − αF)u*, x_n − u*⟩] ≤ 0.

    Note that

    lim sup_{n→∞}[2⟨(f − αF)u*, x_{n+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_n/ρ_n)‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n)/ρ_n)] ≤ 0.

    According to (3.26) and Lemma 2.4, we deduce that lim_{n→∞}‖x_n − u*‖² = 0.

    Case 2. Suppose that there exists a subsequence {Γ_{n_k}} of {Γ_n} such that Γ_{n_k} < Γ_{n_k + 1}, ∀k ∈ N, where N denotes the set of all positive integers. Let ϕ : N → N be the mapping defined by

    ϕ(n) := max{k ≤ n : Γ_k < Γ_{k+1}}.

    Based on Lemma 2.6, we have

    Γ_{ϕ(n)} ≤ Γ_{ϕ(n)+1} and Γ_n ≤ Γ_{ϕ(n)+1}.

    Putting p = u*, from (3.23), we have

    (1 − ρ_{ϕ(n)} τ)(1 − b)[(1 − ξ_{ϕ(n)})‖q_{ϕ(n)} − p_{ϕ(n)}‖² + (1 − ν)(‖y_{ϕ(n)} − z_{ϕ(n)}‖² + ‖y_{ϕ(n)} − p_{ϕ(n)}‖²)] + (1 − ρ_{ϕ(n)} τ)a(1 − b)‖x_{ϕ(n)} − z_{ϕ(n)}‖² ≤ Γ_{ϕ(n)} − Γ_{ϕ(n)+1} + ρ_{ϕ(n)} M4, (3.32)

    which immediately yields lim_{n→∞}‖q_{ϕ(n)} − p_{ϕ(n)}‖ = lim_{n→∞}‖y_{ϕ(n)} − z_{ϕ(n)}‖ = 0 and lim_{n→∞}‖y_{ϕ(n)} − p_{ϕ(n)}‖ = lim_{n→∞}‖x_{ϕ(n)} − z_{ϕ(n)}‖ = 0. Therefore,

    lim_{n→∞}‖x_{ϕ(n)} − y_{ϕ(n)}‖ = lim_{n→∞}‖q_{ϕ(n)} − z_{ϕ(n)}‖ = 0, (3.33)

    and

    lim sup_{n→∞}⟨(f − αF)u*, x_{ϕ(n)+1} − u*⟩ ≤ 0. (3.34)

    At the same time, by (3.26), we know that

    ρ_{ϕ(n)}(τ − δ)Γ_{ϕ(n)} ≤ Γ_{ϕ(n)} − Γ_{ϕ(n)+1} + ρ_{ϕ(n)}(τ − δ)[2⟨(f − αF)u*, x_{ϕ(n)+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_{ϕ(n)}/ρ_{ϕ(n)})‖x_{ϕ(n)} − x_{ϕ(n)−1}‖·3(1 + ϖ_{ϕ(n)})² + ϖ_{ϕ(n)}(2 + ϖ_{ϕ(n)})/ρ_{ϕ(n)})] ≤ ρ_{ϕ(n)}(τ − δ)[2⟨(f − αF)u*, x_{ϕ(n)+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_{ϕ(n)}/ρ_{ϕ(n)})‖x_{ϕ(n)} − x_{ϕ(n)−1}‖·3(1 + ϖ_{ϕ(n)})² + ϖ_{ϕ(n)}(2 + ϖ_{ϕ(n)})/ρ_{ϕ(n)})],

    which hence arrives at

    lim sup_{n→∞} Γ_{ϕ(n)} ≤ lim sup_{n→∞}[2⟨(f − αF)u*, x_{ϕ(n)+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_{ϕ(n)}/ρ_{ϕ(n)})‖x_{ϕ(n)} − x_{ϕ(n)−1}‖·3(1 + ϖ_{ϕ(n)})² + ϖ_{ϕ(n)}(2 + ϖ_{ϕ(n)})/ρ_{ϕ(n)})] ≤ 0.

    Thus, lim_{n→∞}‖x_{ϕ(n)} − u*‖² = 0. Also, note that

    ‖x_{ϕ(n)+1} − u*‖² − ‖x_{ϕ(n)} − u*‖² = 2⟨x_{ϕ(n)+1} − x_{ϕ(n)}, x_{ϕ(n)} − u*⟩ + ‖x_{ϕ(n)+1} − x_{ϕ(n)}‖² ≤ 2‖x_{ϕ(n)+1} − x_{ϕ(n)}‖‖x_{ϕ(n)} − u*‖ + ‖x_{ϕ(n)+1} − x_{ϕ(n)}‖². (3.35)

    Since Γ_n ≤ Γ_{ϕ(n)+1}, we have

    ‖x_n − u*‖² ≤ ‖x_{ϕ(n)+1} − u*‖² ≤ ‖x_{ϕ(n)} − u*‖² + 2‖x_{ϕ(n)+1} − x_{ϕ(n)}‖‖x_{ϕ(n)} − u*‖ + ‖x_{ϕ(n)+1} − x_{ϕ(n)}‖² → 0 (n → ∞).

    So, x_n → u*.

    According to Theorem 3.1, we have the following corollary.

    Corollary 3.1. Suppose that S : C → C is a nonexpansive mapping. For two initial points x_1, x_0 ∈ H, let the sequence {x_n} be defined by

    { q_n = S x_n + ε_n(S x_n − S x_{n−1}),
    { p_n = ξ_n q_n + (1 − ξ_n)u_n,
    { v_n = T^{Θ2}_{α2}(p_n − α2 B2 p_n),
    { u_n = T^{Θ1}_{α1}(v_n − α1 B1 v_n),
    { y_n = P_C(p_n − ζ_n A p_n),
    { z_n = P_{C_n}(p_n − ζ_n A y_n),
    { t_n = σ_n x_n + (1 − σ_n)z_n,
    { x_{n+1} = ρ_n f(x_n) + (I − ρ_n αF)S_n t_n, ∀n ≥ 1, (3.36)

    where C_n and ζ_n have the same form as in Algorithm 3.1. Then, x_n → u* ∈ Δ ⇔ x_{n+1} − x_n → 0, where u* ∈ Δ is the unique solution of the HVI: ⟨(αF − f)u*, p − u*⟩ ≥ 0, ∀p ∈ Δ.

    Next, we put forth another modified inertial composite subgradient extragradient implicit rule with a line-search process.

    Algorithm 3.2. Let x_1, x_0 ∈ H be arbitrary. Let x_n be given. Compute x_{n+1} via the following iterative steps:

    Step 1. Set q_n = S_n x_n + ε_n(S_n x_n − S_n x_{n−1}) and calculate

    { p_n = ξ_n q_n + (1 − ξ_n)u_n,
    { v_n = T^{Θ2}_{α2}(p_n − α2 B2 p_n),
    { u_n = T^{Θ1}_{α1}(v_n − α1 B1 v_n).

    Step 2. Compute y_n = P_C(p_n − ζ_n A p_n), with ζ_n being chosen as the largest ζ ∈ {γ, γℓ, γℓ², ...} such that

    ζ‖A p_n − A y_n‖ ≤ ν‖p_n − y_n‖.

    Step 3. Compute t_n = σ_n z_n + (1 − σ_n)S_n t_n with z_n = P_{C_n}(p_n − ζ_n A y_n) and

    C_n := {y ∈ H : ⟨p_n − ζ_n A p_n − y_n, y − y_n⟩ ≤ 0}.

    Step 4. Compute

    x_{n+1} = ρ_n f(x_n) + (I − ρ_n αF)S_n t_n,

    where S_n is constructed as in Algorithm 1.1. Set n := n + 1 and go to Step 1.
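    Step 3 above is implicit: t_n appears on both sides of t_n = σ_n z_n + (1 − σ_n)S_n t_n. Since S_n is nonexpansive and σ_n ∈ (0, 1), the map t ↦ σ_n z_n + (1 − σ_n)S_n t is a (1 − σ_n)-contraction, so t_n exists, is unique, and can be computed by Banach iteration. A minimal sketch, with illustrative names:

```python
import numpy as np

def implicit_step(z, S, sigma, tol=1e-12, max_iter=10_000):
    """Solve t = sigma*z + (1 - sigma)*S(t) by fixed-point (Banach) iteration.
    For nonexpansive S and sigma in (0, 1), the map is a (1-sigma)-contraction,
    so the iteration converges to the unique implicit point t_n of Step 3."""
    t = np.asarray(z, dtype=float)
    for _ in range(max_iter):
        t_new = sigma * z + (1.0 - sigma) * S(t)
        if np.linalg.norm(t_new - t) <= tol:
            return t_new
        t = t_new
    return t
```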

    Theorem 3.2. Let the sequence {x_n} be generated by Algorithm 3.2. Then,

    x_n → u* ∈ Δ ⇔ ‖S_{n+1} x_n − S_n x_n‖ → 0 and ‖x_{n+1} − x_n‖ → 0,

    where u* ∈ Δ is the unique solution of the HVI: ⟨(αF − f)u*, p − u*⟩ ≥ 0, ∀p ∈ Δ.

    Proof. The necessity is obvious. Next, we prove the sufficiency.

    Note that

    ‖t_n − p‖ ≤ σ_n‖z_n − p‖ + (1 − σ_n)‖S_n t_n − p‖ ≤ σ_n‖z_n − p‖ + (1 − σ_n)‖t_n − p‖.

    This, together with (3.18), ensures that

    ‖t_n − p‖ ≤ ‖z_n − p‖ ≤ ‖p_n − p‖ ≤ ‖q_n − p‖ ≤ (1 + ϖ_n)[‖x_n − p‖ + ρ_n M1], ∀n ≥ 1. (3.37)

    By (3.37) and Lemma 2.7, we have

    ‖x_{n+1} − p‖ = ‖ρ_n(f(x_n) − f(p)) + (I − ρ_n αF)S_n t_n − (I − ρ_n αF)p + ρ_n(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + (1 − ρ_n τ)‖t_n − p‖ + ρ_n‖(f − αF)p‖ ≤ ρ_n δ‖x_n − p‖ + (1 − ρ_n τ)(1 + ϖ_n)[‖x_n − p‖ + ρ_n M1] + ρ_n‖(f − αF)p‖ ≤ [1 − ρ_n(τ − δ)/2]‖x_n − p‖ + (ρ_n(τ − δ)/2)·(2(2M1 + ‖(f − αF)p‖)/(τ − δ)).

    It follows that ‖x_n − p‖ ≤ max{‖x_1 − p‖, 2(2M1 + ‖(f − αF)p‖)/(τ − δ)}, ∀n ≥ 1. Therefore, {x_n}, {p_n}, {q_n}, {y_n}, {z_n}, {t_n}, {f(x_n)}, {S_n t_n}, and {S_n x_n} are bounded.

    According to Lemma 2.7, we get

    ‖x_{n+1} − p‖² ≤ ‖ρ_n(f(x_n) − f(p)) + (I − ρ_n αF)S_n t_n − (I − ρ_n αF)p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)‖t_n − p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ = ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[σ_n‖z_n − p‖² + (1 − σ_n)‖S_n t_n − p‖² − σ_n(1 − σ_n)‖z_n − S_n t_n‖²] + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[σ_n‖z_n − p‖² + (1 − σ_n)‖t_n − p‖²] − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M2, (3.38)

    where M2 > 0 is a constant such that sup_{n≥1} 2‖(f − αF)p‖‖x_{n+1} − p‖ ≤ M2. Using Lemma 3.2, from (3.37) and (3.38), we have

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)‖z_n − p‖² − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M2 ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ){‖q_n − p‖² − (1 − ξ_n)‖q_n − p_n‖² − (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)} − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M2. (3.39)

    Also, using the same inferences as those for (3.22) of Theorem 3.1, we have

    ‖q_n − p‖² ≤ ‖x_n − p‖² + ρ_n M3, (3.40)

    where sup_{n≥1}{M1(2‖x_n − p‖ + ρ_n M1) + (ϖ_n(2 + ϖ_n)/ρ_n)(‖x_n − p‖ + ρ_n M1)²} ≤ M3 for some constant M3 > 0. By (3.39) and (3.40), we attain

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ){‖x_n − p‖² + ρ_n M3 − (1 − ξ_n)‖q_n − p_n‖² − (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)} − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M2 ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² − (1 − ρ_n τ)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M3 + ρ_n M2 ≤ ‖x_n − p‖² − (1 − ρ_n τ)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] − (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² + ρ_n M4,

    where M4 := M3 + M2. Hence, we attain the assertion.

    By the same arguments as those for (3.24), we have

    ‖q_n − p‖² ≤ (1 + ϖ_n(2 + ϖ_n))‖x_n − p‖² + (1 + ϖ_n)²ε_n‖x_n − x_{n−1}‖(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖). (3.41)

    By (3.37), (3.38), and (3.41), we obtain

    ‖x_{n+1} − p‖² ≤ ρ_n δ‖x_n − p‖² + (1 − ρ_n τ)[(1 − σ_n)‖t_n − p‖² + σ_n‖z_n − p‖²] + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)](1 + ϖ_n)²(‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖)² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² + ε_n‖x_n − x_{n−1}‖(1 + ϖ_n)²(2‖x_n − p‖ + ε_n‖x_n − x_{n−1}‖) + ϖ_n(2 + ϖ_n)‖x_n − p‖² + 2ρ_n⟨(f − αF)p, x_{n+1} − p⟩ ≤ [1 − ρ_n(τ − δ)]‖x_n − p‖² + ρ_n(τ − δ)[2⟨(f − αF)p, x_{n+1} − p⟩/(τ − δ) + (M/(τ − δ))((ε_n/ρ_n)‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n)/ρ_n)], (3.42)

    where sup_{n≥1}{‖x_n − p‖, ε_n‖x_n − x_{n−1}‖, ‖x_n − p‖²} ≤ M for some constant M > 0.

    Setting p = u*, by (3.42), we have

    ‖x_{n+1} − u*‖² ≤ [1 − ρ_n(τ − δ)]‖x_n − u*‖² + ρ_n(τ − δ)[2⟨(f − αF)u*, x_{n+1} − u*⟩/(τ − δ) + (M/(τ − δ))((ε_n/ρ_n)‖x_n − x_{n−1}‖·3(1 + ϖ_n)² + ϖ_n(2 + ϖ_n)/ρ_n)].

    Set Γ_n = ‖x_n − u*‖².

    Case 1. Assume that {Γ_n} is nonincreasing for n ≥ n0. Then, lim_{n→∞} Γ_n exists and is finite. Choosing p = u*, from (3.38), we have

    (1 − ρ_n τ)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] + (1 − ρ_n τ)a(1 − b)‖z_n − S_n t_n‖² ≤ (1 − ρ_n τ)[(1 − ξ_n)‖q_n − p_n‖² + (1 − ν)(‖y_n − z_n‖² + ‖y_n − p_n‖²)] + (1 − ρ_n τ)σ_n(1 − σ_n)‖z_n − S_n t_n‖² ≤ ‖x_n − u*‖² − ‖x_{n+1} − u*‖² + ρ_n M4 = Γ_n − Γ_{n+1} + ρ_n M4.

    Since Γ_n − Γ_{n+1} → 0 and ρ_n → 0, for ν ∈ (0, 1) one has lim_{n→∞}‖q_n − p_n‖ = lim_{n→∞}‖y_n − z_n‖ = 0 and lim_{n→∞}‖y_n − p_n‖ = lim_{n→∞}‖z_n − S_n t_n‖ = 0. Observe that

    ‖z_n − x_n‖ ≤ ‖z_n − S_n t_n‖ + ‖S_n t_n − x_n‖ = ‖z_n − S_n t_n‖ + ‖x_{n+1} − x_n − ρ_n(f(x_n) − αF S_n t_n)‖ ≤ ‖z_n − S_n t_n‖ + ‖x_{n+1} − x_n‖ + ρ_n(‖f(x_n)‖ + α‖F S_n t_n‖) → 0 (n → ∞).

    By similar arguments to those in Theorem 3.1, we deduce lim_{n→∞}‖x_n − u*‖² = 0.

    Case 2. Assume that there exists a subsequence {Γ_{n_k}} of {Γ_n} such that Γ_{n_k} < Γ_{n_k + 1}, ∀k ∈ N. Let ϕ : N → N be the mapping defined by

    ϕ(n) = max{k ≤ n : Γ_k < Γ_{k+1}}.

    By Lemma 2.6, we have

    Γ_n ≤ Γ_{ϕ(n)+1} and Γ_{ϕ(n)} ≤ Γ_{ϕ(n)+1}.

    Set p = u*. Then,

    (1 − ρ_{ϕ(n)} τ)[(1 − ξ_{ϕ(n)})‖q_{ϕ(n)} − p_{ϕ(n)}‖² + (1 − ν)(‖y_{ϕ(n)} − z_{ϕ(n)}‖² + ‖y_{ϕ(n)} − p_{ϕ(n)}‖²)] + (1 − ρ_{ϕ(n)} τ)a(1 − b)‖z_{ϕ(n)} − S_{ϕ(n)} t_{ϕ(n)}‖² ≤ Γ_{ϕ(n)} − Γ_{ϕ(n)+1} + ρ_{ϕ(n)} M4,

    which immediately yields lim_{n→∞}‖q_{ϕ(n)} − p_{ϕ(n)}‖ = lim_{n→∞}‖y_{ϕ(n)} − z_{ϕ(n)}‖ = 0 and lim_{n→∞}‖y_{ϕ(n)} − p_{ϕ(n)}‖ = lim_{n→∞}‖z_{ϕ(n)} − S_{ϕ(n)} t_{ϕ(n)}‖ = 0. Therefore, lim_{n→∞}‖z_{ϕ(n)} − x_{ϕ(n)}‖ = 0. Finally, using similar arguments to those in Theorem 3.1, we get the conclusion.

    Remark 3.1. Compared with the corresponding results in Cai, Shehu, and Iyiola [2], Ceng and Shang [4], and Thong and Hieu [28], our results improve and extend them in the following aspects:

    (i) The problem of finding an element of Fix(S) ∩ Fix(G) (with G = P_C(I − μ1 B1)P_C(I − μ2 B2)) in [2] is extended to our problem of finding an element of ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A), where G = T^{Θ1}_{μ1}(I − μ1 B1)T^{Θ2}_{μ2}(I − μ2 B2) and S is an asymptotically nonexpansive mapping. The modified viscosity implicit rule for finding an element of Fix(S) ∩ Fix(G) in [2] is extended to our modified inertial composite subgradient extragradient implicit rules with line-search process, which are built on the subgradient extragradient rule with line-search process, the inertial iteration approach, the viscosity approximation method, and the hybrid deepest-descent technique.

    (ii) The problem of finding an element of Fix(S) ∩ VI(C, A) with a quasi-nonexpansive mapping S in [4] is extended to our problem of finding an element of ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) with an asymptotically nonexpansive mapping S. The inertial subgradient extragradient method with line-search process for finding an element of Fix(S) ∩ VI(C, A) in [28] is extended to our modified inertial composite subgradient extragradient implicit rules with line-search process, which are built on the subgradient extragradient rule with line-search process, the inertial iteration approach, the viscosity approximation method, and the hybrid deepest-descent technique.

    (iii) The problem of finding an element of Ω = ∩_{r=0}^N Fix(S_r) ∩ VI(C, A) is extended to our problem of finding an element of Δ = ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) with G = T^{Θ1}_{μ1}(I − μ1 B1)T^{Θ2}_{μ2}(I − μ2 B2). The hybrid inertial subgradient extragradient method with line-search process in [4] is extended to our modified inertial composite subgradient extragradient implicit rules with line-search process.

    In this section, we give an example to show the feasibility of our algorithms. Put Θ1 = Θ2 = 0, α = 2, α1 = α2 = 1/3, γ = 1, ν = ℓ = 1/2, σ_n = ξ_n = 2/3, and ε_n = ρ_n = 1/(3(n+1)), for all n ≥ 0. Now, we construct an example with Δ = ∩_{r=0}^N Fix(S_r) ∩ Fix(G) ∩ VI(C, A) ≠ ∅, where S_0 := S and G = T^{Θ1}_{α1}(I − α1 B1)T^{Θ2}_{α2}(I − α2 B2) = P_C(I − α1 B1)P_C(I − α2 B2); here A : H → H is a pseudomonotone and Lipschitz continuous mapping, B1, B2 : H → H are two inverse-strongly monotone mappings, S : H → H is asymptotically nonexpansive, and each S_r : H → H is nonexpansive for r = 1, ..., N.

    Let H = R with inner product ⟨a, b⟩ = ab and induced norm ‖·‖ = |·|. Set C = [−2, 4], and let the starting point x_1 be arbitrarily chosen in C. Let f(x) = F(x) = (1/2)x, ∀x ∈ H, so that

    δ = 1/2 < τ = 1 − √(1 − α(2η − ακ²)) = 1 − √(1 − 2(2·(1/2) − 2·(1/2)²)) = 1.

    Let B1x = B2x := Bx = x − (1/2)sin x, ∀x ∈ C. Let the operators A, S, S_r : H → H be defined by

    A x := 1/(1 + |sin x|) − 1/(1 + |x|), S x := (3/4)sin x, S_r x := S_1 x = sin x (r = 1, ..., N), ∀x ∈ H.

    We have the following assertions:

    (ⅰ) A is 2-Lipschitz continuous, in fact, for each x,yH, we have

    |AxAy|||y||x|(1+y|)(1+|x|)|+||siny||sinx|(1+|siny|)(1+|sinx|)||xy|(1+|x|)(1+|y|)+|sinxsiny|(1+|sinx|)(1+|siny|)|xy|+|sinxsiny|2|xy|.

    (ⅱ) $A$ is pseudomonotone. In fact, for each $x,y\in H$, if

    $$\langle Ax,y-x\rangle=\left(\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}\right)(y-x)\ge0,$$

    then

    $$\langle Ay,y-x\rangle=\left(\frac{1}{1+|\sin y|}-\frac{1}{1+|y|}\right)(y-x)\ge0.$$
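    Assertions (ⅰ) and (ⅱ) can be spot-checked numerically. The following Python sketch (our own illustration, not part of the proof; the sampling range and tolerance are arbitrary choices) tests the $2$-Lipschitz bound and the pseudomonotonicity implication on random pairs:

```python
import math
import random

def A(x):
    # A(x) = 1/(1+|sin x|) - 1/(1+|x|), the pseudomonotone cost operator
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # (i) Lipschitz continuity: |Ax - Ay| <= 2 |x - y|
    assert abs(A(x) - A(y)) <= 2.0 * abs(x - y) + 1e-12
    # (ii) pseudomonotonicity: <Ax, y-x> >= 0 implies <Ay, y-x> >= 0
    if A(x) * (y - x) >= 0:
        assert A(y) * (y - x) >= -1e-12
print("checks passed")
```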

    (ⅲ) $B$ is $\frac29$-inverse-strongly monotone. In fact, since $B$ is $\frac12$-strongly monotone and $\frac32$-Lipschitz continuous, we know that $B$ is $\frac29$-inverse-strongly monotone with $\rho=\sigma=\frac29$.
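    Assertion (ⅲ) can likewise be illustrated on random samples. The sketch below (an informal check, with our own sampling range) verifies the inverse-strong monotonicity inequality $\langle Bx-By,x-y\rangle\ge\frac29|Bx-By|^2$:

```python
import math
import random

def B(x):
    # B(x) = x - (1/2) sin x: (1/2)-strongly monotone, (3/2)-Lipschitz
    return x - 0.5 * math.sin(x)

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    d = B(x) - B(y)
    # (2/9)-inverse-strong monotonicity: <Bx - By, x - y> >= (2/9)|Bx - By|^2
    assert d * (x - y) >= (2.0 / 9.0) * d * d - 1e-12
print("inverse-strong monotonicity verified on samples")
```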

    Moreover, it is easy to check that $S$ is asymptotically nonexpansive with $\varpi_n=(\frac34)^n$, $\forall n\ge1$, such that $\|S^{n+1}x_n-S^nx_n\|\to0$ as $n\to\infty$. In fact, note that

    $$\|S^nx-S^ny\|\le\frac34\|S^{n-1}x-S^{n-1}y\|\le\cdots\le\left(\frac34\right)^n\|x-y\|\le(1+\varpi_n)\|x-y\|,$$

    and

    $$\|S^{n+1}x_n-S^nx_n\|\le\left(\frac34\right)^{n-1}\|S^2x_n-Sx_n\|=\left(\frac34\right)^{n-1}\left|\frac34\sin(Sx_n)-\frac34\sin x_n\right|\le2\left(\frac34\right)^n\to0.$$
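    The two bounds above can be confirmed numerically for the $n$-fold composition $S^n$. The following sketch (an informal check with arbitrary sampling choices) tests $\|S^nx-S^ny\|\le(\frac34)^n\|x-y\|$ and $\|S^{n+1}x-S^nx\|\le2(\frac34)^n$:

```python
import math
import random

def S(x):
    # asymptotically nonexpansive mapping S(x) = (3/4) sin x
    return 0.75 * math.sin(x)

def S_pow(x, n):
    # n-fold composition S^n x
    for _ in range(n):
        x = S(x)
    return x

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    n = random.randint(1, 20)
    assert abs(S_pow(x, n) - S_pow(y, n)) <= 0.75**n * abs(x - y) + 1e-12
    assert abs(S_pow(x, n + 1) - S_pow(x, n)) <= 2 * 0.75**n + 1e-12
print("bounds verified")
```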

    It is obvious that $\mathrm{Fix}(S)=\{0\}$ and

    $$\lim_{n\to\infty}\frac{\varpi_n}{\rho_n}=\lim_{n\to\infty}\frac{(3/4)^n}{1/(3(n+1))}=0.$$

    Accordingly, $\Delta=\mathrm{Fix}(S)\cap\mathrm{Fix}(S_1)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)=\{0\}$. In this case, noticing $G=P_C(I-\alpha_1B_1)P_C(I-\alpha_2B_2)=[P_C(I-\frac13B)]^2$, we rewrite Algorithm 3.1 as follows:

    $$\begin{cases}q_n=S^nx_n+\frac{1}{3(n+1)}(S^nx_n-S^nx_{n-1}),\\p_n=\frac23q_n+\frac13u_n,\\v_n=P_C(p_n-\frac13Bp_n),\\u_n=P_C(v_n-\frac13Bv_n),\\y_n=P_C(p_n-\zeta_nAp_n),\\z_n=P_{C_n}(p_n-\zeta_nAy_n),\\t_n=\frac23x_n+\frac13z_n,\\x_{n+1}=\frac{1}{3(n+1)}\cdot\frac12x_n+\left(1-\frac{1}{3(n+1)}\right)S_1t_n,\end{cases}\quad n\ge1,$$

    where $C_n$ and $\zeta_n$ are chosen as in Algorithm 3.1. Then, $x_n\to0\in\Delta$.
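    As a feasibility check, the rewritten iteration can be simulated directly. The Python sketch below is a hypothetical one-dimensional instantiation, under stated assumptions: it takes $C=[-2,4]$, uses the standard subgradient extragradient half-space $C_n=\{u:\langle p_n-\zeta_nAp_n-y_n,u-y_n\rangle\le0\}$ and an Armijo-type rule $\zeta_n=\gamma\ell^{j_n}$ with $\gamma=1$, $\ell=\nu=\frac12$ as stand-ins for the choices in Algorithm 3.1 (which is not reproduced here), and resolves the implicit step $p_n=\frac23q_n+\frac13u_n$ by an inner fixed-point loop (the map is a $\frac13$-contraction, so this converges):

```python
import math

C_LO, C_HI = -2.0, 4.0                 # assumed feasible set C = [-2, 4]

def P_C(x):                            # metric projection onto C
    return min(max(x, C_LO), C_HI)

def A(x):                              # pseudomonotone, 2-Lipschitz operator
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

def B(x):                              # (2/9)-inverse-strongly monotone operator
    return x - 0.5 * math.sin(x)

def S(x):                              # asymptotically nonexpansive mapping
    return 0.75 * math.sin(x)

def S_pow(x, n):                       # n-fold composition S^n x
    for _ in range(n):
        x = S(x)
    return x

def G(x):                              # G = [P_C(I - (1/3)B)]^2
    y = P_C(x - B(x) / 3.0)
    return P_C(y - B(y) / 3.0)

def solve_p(q):                        # implicit step: p = (2/3)q + (1/3)G(p)
    p = q
    for _ in range(60):                # fixed-point loop of a (1/3)-contraction
        p = (2.0 / 3.0) * q + G(p) / 3.0
    return p

def line_search(p, gamma=1.0, ell=0.5, nu=0.5):
    zeta = gamma                       # largest zeta = gamma * ell^j with
    while True:                        # zeta |A(p) - A(y)| <= nu |p - y|
        y = P_C(p - zeta * A(p))
        if zeta * abs(A(p) - A(y)) <= nu * abs(p - y) or p == y:
            return zeta, y
        zeta *= ell

def iterate(x1=1.0, steps=60):
    x_prev, x = x1, x1
    for n in range(1, steps + 1):
        eps = 1.0 / (3.0 * (n + 1))
        q = S_pow(x, n) + eps * (S_pow(x, n) - S_pow(x_prev, n))
        p = solve_p(q)
        zeta, y = line_search(p)
        w = p - zeta * A(y)
        c = p - zeta * A(p) - y                # C_n = {u : c (u - y) <= 0}
        z = y if c * (w - y) > 0 else w        # projection onto the half-space C_n
        t = (2.0 / 3.0) * x + z / 3.0
        x_prev, x = x, eps * (x / 2.0) + (1.0 - eps) * math.sin(t)
    return x

print(abs(iterate()))                  # the iterates approach 0, the unique point of Delta
```

    Running the sketch from different starting points in $C$ drives $|x_n|$ toward $0$, in line with the claimed convergence $x_n\to0\in\Delta$.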

    In particular, since $Sx:=\frac34\sin x$ is also nonexpansive, we consider the modified version of Algorithm 3.1, that is,

    $$\begin{cases}q_n=Sx_n+\frac{1}{3(n+1)}(Sx_n-Sx_{n-1}),\\p_n=\frac23q_n+\frac13u_n,\\v_n=P_C(p_n-\frac13Bp_n),\\u_n=P_C(v_n-\frac13Bv_n),\\y_n=P_C(p_n-\zeta_nAp_n),\\z_n=P_{C_n}(p_n-\zeta_nAy_n),\\t_n=\frac23x_n+\frac13z_n,\\x_{n+1}=\frac{1}{3(n+1)}\cdot\frac12x_n+\left(1-\frac{1}{3(n+1)}\right)S_1t_n,\end{cases}\quad n\ge1,$$

    where $C_n$ and $\zeta_n$ are chosen as above. Then, $x_n\to0\in\Delta$.

    In a real Hilbert space, we have put forward two modified inertial composite subgradient extragradient implicit rules with line-search process for solving a system of generalized equilibrium problems with constraints of a pseudomonotone variational inequality problem and a common fixed-point problem of finitely many nonexpansive mappings and an asymptotically nonexpansive mapping. Without requiring sequential weak continuity of the cost operator $A$ or knowledge of its Lipschitz constant, we have demonstrated the strong convergence of the proposed algorithms to a solution of the studied problem. In addition, an illustrative example was provided to demonstrate the feasibility of our proposed algorithms.

    In the end, it is worth mentioning that part of our future research is aimed at obtaining strong convergence results for modifications of our proposed rules with a Nesterov inertial extrapolation step and adaptive stepsizes.

    The authors declare that they have not used artificial intelligence (AI) tools in the creation of this article.

    Yeong-Cheng Liou is partially supported by a grant from the Kaohsiung Medical University Research Foundation (KMU-M113001). Lu-Chuan Ceng is partially supported by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), the Innovation Program of Shanghai Municipal Education Commission (15ZZ068) and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

    The authors declare that there are no conflicts of interest.



    [1] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63 (1994), 123–145.
    [2] G. Cai, Y. Shehu, O. Iyiola, Strong convergence results for variational inequalities and fixed point problems using modified viscosity implicit rules, Numer. Algor., 77 (2018), 535–558. http://dx.doi.org/10.1007/s11075-017-0327-8 doi: 10.1007/s11075-017-0327-8
    [3] L. Ceng, A. Petrusel, X. Qin, J. Yao, Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints, Optimization, 70 (2021), 1337–1358. http://dx.doi.org/10.1080/02331934.2020.1858832 doi: 10.1080/02331934.2020.1858832
    [4] L. Ceng, M. Shang, Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings, Optimization, 70 (2021), 715–740. http://dx.doi.org/10.1080/02331934.2019.1647203 doi: 10.1080/02331934.2019.1647203
    [5] L. Ceng, C. Wang, J. Yao, Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities, Math. Meth. Oper. Res., 67 (2008), 375–390. http://dx.doi.org/10.1007/s00186-007-0207-4 doi: 10.1007/s00186-007-0207-4
    [6] L. Ceng, J. Yao, A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem, Nonlinear Anal.-Theor., 72 (2010), 1922–1937. http://dx.doi.org/10.1016/j.na.2009.09.033 doi: 10.1016/j.na.2009.09.033
    [7] J. Chen, S. Liu, X. Chang, Extragradient method and golden ratio method for equilibrium problems on Hadamard manifolds, Int. J. Comput. Math., 98 (2021), 1699–1712. http://dx.doi.org/10.1080/00207160.2020.1846728 doi: 10.1080/00207160.2020.1846728
    [8] P. Combettes, S. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal., 6 (2005), 117–136.
    [9] R. Cottle, J. Yao, Pseudomonotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281–295. http://dx.doi.org/10.1007/BF00941468 doi: 10.1007/BF00941468
    [10] L. Deng, R. Hu, Y. Fang, Projection extragradient algorithms for solving nonmonotone and non-Lipschitzian equilibrium problems in Hilbert spaces, Numer. Algor., 86 (2021), 191–221. http://dx.doi.org/10.1007/s11075-020-00885-x doi: 10.1007/s11075-020-00885-x
    [11] S. Denisov, V. Semenov, L. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. http://dx.doi.org/10.1007/s10559-015-9768-z doi: 10.1007/s10559-015-9768-z
    [12] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, New York: Marcel Dekker, 1983.
    [13] L. He, Y. Cui, L. Ceng, T. Zhao, D. Wang, H. Hu, Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule, J. Inequal. Appl., 2021 (2021), 146. http://dx.doi.org/10.1186/s13660-021-02683-y doi: 10.1186/s13660-021-02683-y
    [14] L. Jolaoso, Y. Shehu, J. Yao, Inertial extragradient type method for mixed variational inequalities without monotonicity, Math. Comput. Simulat., 192 (2022), 353–369. http://dx.doi.org/10.1016/j.matcom.2021.09.010 doi: 10.1016/j.matcom.2021.09.010
    [15] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [16] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 163 (2014), 399–412. http://dx.doi.org/10.1007/s10957-013-0494-2 doi: 10.1007/s10957-013-0494-2
    [17] X. Li, Z. Liu, Sensitivity analysis of optimal control problems described by differential hemivariational inequalities, SIAM J. Control Optim., 56 (2018), 3569–3597. http://dx.doi.org/10.1137/17M1162275 doi: 10.1137/17M1162275
    [18] T. Lim, H. Xu, Fixed point theorems for asymptotically nonexpansive mappings, Nonlinear Anal.-Theor., 22 (1994), 1345–1355. http://dx.doi.org/10.1016/0362-546X(94)90116-3 doi: 10.1016/0362-546X(94)90116-3
    [19] P. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. http://dx.doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [20] A. Moudafi, M. Théra, Proximal and dynamical approaches to equilibrium problems, In: Ill-posed variational problems and regularization techniques, Berlin: Springer, 1999. http://dx.doi.org/10.1007/978-3-642-45780-7_12
    [21] X. Qin, A. Petrusel, B. Tan, J. Yao, Efficient extragradient methods for bilevel pseudomonotone variational inequalities with non-Lipschitz operators and their applications, Fixed Point Theor., 25 (2024), 309–332. http://dx.doi.org/10.24193/fpt-ro.2024.1.19 doi: 10.24193/fpt-ro.2024.1.19
    [22] Y. Shehu, Q. Dong, D. Jiang, Single projection method for pseudo-monotone variational inequality in Hilbert spaces, Optimization, 68 (2019), 385–409. http://dx.doi.org/10.1080/02331934.2018.1522636 doi: 10.1080/02331934.2018.1522636
    [23] Y. Shehu, O. Iyiola, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. http://dx.doi.org/10.1007/s11075-016-0253-1 doi: 10.1007/s11075-016-0253-1
    [24] Y. Song, O. Bazighifan, Two regularization methods for the variational inequality problem over the set of solutions of the generalized mixed equilibrium problem, Mathematics, 10 (2022), 2981. http://dx.doi.org/10.3390/math10162981 doi: 10.3390/math10162981
    [25] Y. Song, Y. Pei, A new viscosity semi-implicit midpoint rule for strict pseudo-contractions and (α,β)-generalized hybrid mappings, Optimization, 70 (2021), 2635–2653. http://dx.doi.org/10.1080/02331934.2020.1789640 doi: 10.1080/02331934.2020.1789640
    [26] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris, 258 (1964), 4413–4416.
    [27] B. Tan, S. Li, Modified inertial projection and contraction algorithms with non-monotonic step sizes for solving variational inequalities and their applications, Optimization, 73 (2024), 793–832. http://dx.doi.org/10.1080/02331934.2022.2123705 doi: 10.1080/02331934.2022.2123705
    [28] D. Thong, D. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. http://dx.doi.org/10.1007/s11075-018-0527-x doi: 10.1007/s11075-018-0527-x
    [29] P. Vuong, Y. Shehu, Convergence of an extragradient-type method for variational inequality with applications to optimal control problems, Numer. Algor., 81 (2019), 269–291. http://dx.doi.org/10.1007/s11075-018-0547-6 doi: 10.1007/s11075-018-0547-6
    [30] H. Xu, T. Kim, Convergence of hybrid steepest-descent methods for variational inequalities, J. Optim. Theory Appl., 119 (2003), 185–201. http://dx.doi.org/10.1023/B:JOTA.0000005048.79379.b6 doi: 10.1023/B:JOTA.0000005048.79379.b6
    [31] J. Yang, H. Liu, Z. Liu, Modified subgradient extragradient algorithms for solving monotone variational inequalities, Optimization, 67 (2018), 2247–2258. http://dx.doi.org/10.1080/02331934.2018.1523404 doi: 10.1080/02331934.2018.1523404
    [32] Y. Yao, O. Iyiola, Y. Shehu, Subgradient extragradient method with double inertial steps for variational inequalities, J. Sci. Comput., 90 (2022), 71. http://dx.doi.org/10.1007/s10915-021-01751-1 doi: 10.1007/s10915-021-01751-1
    [33] Y. Yu, T. Yin, Weak convergence of a self-adaptive Tseng-type algorithm for solving variational inclusion problems, U.P.B. Sci. Bull., Series A, 85 (2023), 51–58.
    [34] Y. Yu, T. Yin, Strong convergence theorems for a nonmonotone equilibrium problem and a quasi-variational inclusion problem, J. Nonlinear Convex Anal., 25 (2024), 503–512.
    [35] Z. Jing, Z. Liu, E. Vilches, C. Wen, J. Yao, Optimal control of an evolution hemivariational inequality involving history-dependent operators, Commun. Nonlinear Sci., 103 (2021), 105992. http://dx.doi.org/10.1016/j.cnsns.2021.105992 doi: 10.1016/j.cnsns.2021.105992
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)