Research article

Modified subgradient extragradient algorithms for systems of generalized equilibria with constraints

  • In this paper, we introduce the modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding a common solution of a system of generalized equilibrium problems, a pseudomonotone variational inequality problem and a fixed-point problem of an asymptotically nonexpansive mapping in a real Hilbert space. The proposed algorithms are based on the subgradient extragradient rule with linear-search process, Mann implicit iteration approach, and hybrid deepest-descent technique. Under mild restrictions, we demonstrate the strong convergence of the proposed algorithms to a common solution of the investigated problems, which is a unique solution of a certain hierarchical variational inequality defined on their common solution set.

    Citation: Lu-Chuan Ceng, Li-Jun Zhu, Tzu-Chien Yin. Modified subgradient extragradient algorithms for systems of generalized equilibria with constraints[J]. AIMS Mathematics, 2023, 8(2): 2961-2994. doi: 10.3934/math.2023154




    Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C\subset H$ be a nonempty closed convex set, and let $P_C$ be the metric (nearest point) projection from $H$ onto $C$. Let $S:C\to H$ be a nonlinear mapping. Let $\mathrm{Fix}(S)$ and $\mathbf{R}$ denote the fixed-point set of $S$ and the set of real numbers, respectively. We denote by $\to$ and $\rightharpoonup$ the strong convergence and weak convergence in $H$, respectively. A mapping $S:C\to C$ is referred to as being asymptotically nonexpansive if there exists a sequence $\{\theta_r\}_{r=1}^{\infty}\subset[0,+\infty)$ with $\lim_{r\to\infty}\theta_r=0$ such that $\|S^ru-S^rv\|\le(1+\theta_r)\|u-v\|$ for all $r\ge1$ and $u,v\in C$. In particular, if $\theta_r=0$ for all $r\ge1$, then $S$ is known as being nonexpansive.

    Let $\Theta:C\times C\to\mathbf{R}$ be a bifunction. The equilibrium problem (EP) for $\Theta$ is to determine its equilibrium points, that is, the set $\mathrm{EP}(\Theta)=\{u\in C:\Theta(u,v)\ge0,\ \forall v\in C\}$. Under the theory framework of equilibrium problems, there is a unified way to investigate a wide range of problems arising in physics, optimization, structural analysis, transportation, finance and economics. In order to find an element of $\mathrm{EP}(\Theta)$, one assumes that the following hold:

    (H1) $\Theta(u,u)=0$ for all $u\in C$;

    (H2) $\Theta$ is monotone, that is, $\Theta(u,v)+\Theta(v,u)\le0$ for all $u,v\in C$;

    (H3) $\lim_{\lambda\to0^+}\Theta((1-\lambda)u+\lambda w,\,v)\le\Theta(u,v)$ for all $u,v,w\in C$;

    (H4) $v\mapsto\Theta(u,v)$ is convex and lower semicontinuous (l.s.c.) for each $u\in C$.

    In 1994, Blum and Oettli [34] gave the following lemma, which plays a key role in solving the equilibrium problems.

    Lemma 1.1 ([34]). Let $\Theta:C\times C\to\mathbf{R}$ satisfy the hypotheses (H1)–(H4). For any $u\in C$ and $\lambda>0$, let $T^{\Theta}_{\lambda}:H\to C$ be the mapping formulated below:

    $$T^{\Theta}_{\lambda}(u):=\Big\{w\in C:\Theta(w,v)+\frac{1}{\lambda}\langle v-w,\,w-u\rangle\ge0,\ \forall v\in C\Big\}.$$

    Then $T^{\Theta}_{\lambda}$ is well defined and the following hold: (i) $T^{\Theta}_{\lambda}$ is single-valued and firmly nonexpansive, that is, $\|T^{\Theta}_{\lambda}u-T^{\Theta}_{\lambda}v\|^2\le\langle T^{\Theta}_{\lambda}u-T^{\Theta}_{\lambda}v,\,u-v\rangle$ for all $u,v\in H$; and (ii) $\mathrm{Fix}(T^{\Theta}_{\lambda})=\mathrm{EP}(\Theta)$, and $\mathrm{EP}(\Theta)$ is convex and closed.
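    To make Lemma 1.1 concrete, consider the illustrative bifunction $\Theta(u,v)=v^2-u^2$ on $C=H=\mathbf{R}$ (a choice made here for illustration only, not taken from the paper). A direct computation gives $T^{\Theta}_{\lambda}(u)=\frac{u}{1+2\lambda}$: with $w=\frac{u}{1+2\lambda}$ one has $w-u=-2\lambda w$, so $\Theta(w,v)+\frac{1}{\lambda}(v-w)(w-u)=(v-w)^2\ge0$ for every $v$. The sketch below numerically checks this defining inequality and the firm nonexpansivity asserted in (i):

```python
# Resolvent of the illustrative bifunction Theta(u, v) = v^2 - u^2 on C = R,
# for which T_lam(u) = u / (1 + 2*lam) satisfies
#   Theta(w, v) + (1/lam)*(v - w)*(w - u) = (v - w)^2 >= 0  for all v.

def theta(u, v):
    return v * v - u * u

def resolvent(u, lam):
    return u / (1.0 + 2.0 * lam)

lam = 0.7
# (i) the defining inequality holds on a grid of test points v
for u in (-2.0, -0.3, 0.0, 1.5):
    w = resolvent(u, lam)
    for k in range(-20, 21):
        v = k * 0.25
        assert theta(w, v) + (v - w) * (w - u) / lam >= -1e-12

# (ii) firm nonexpansivity: |Tu - Tv|^2 <= (Tu - Tv)*(u - v)
for u, v in ((3.0, -1.0), (0.5, 0.25), (-2.0, 2.0)):
    d = resolvent(u, lam) - resolvent(v, lam)
    assert d * d <= d * (u - v) + 1e-12
```

    Note also that $\mathrm{Fix}(T^{\Theta}_{\lambda})=\{0\}=\mathrm{EP}(\Theta)$ in this toy case, consistent with (ii).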

    It is worth pointing out that the variational inequality problem (VIP) is a special case of the EP. In particular, if $\Theta(u,v)=\langle Au,\,v-u\rangle$ for all $u,v\in C$, then the EP reduces to the classical VIP of finding $u\in C$ s.t. $\langle Au,\,v-u\rangle\ge0$, $\forall v\in C$, where $A$ is a self-mapping on $H$. The solution set of the VIP is denoted by $\mathrm{VI}(C,A)$. It is well known that one of the most popular techniques for solving the VIP is the extragradient one put forth by Korpelevich [26] in 1976, that is, for any starting point $u_0\in C$, let $\{u_p\}$ be the sequence constructed below

    $$\begin{cases}v_p=P_C(u_p-\mu Au_p),\\ u_{p+1}=P_C(u_p-\mu Av_p),\end{cases}\quad\forall p\ge0,$$

    where $\mu\in(0,\frac{1}{L})$ and $L$ is the Lipschitz constant of $A$. Whenever $\mathrm{VI}(C,A)\neq\emptyset$, the sequence $\{u_p\}$ converges weakly to a point in $\mathrm{VI}(C,A)$. Till now, the vast literature on the Korpelevich extragradient technique shows that many authors have paid great attention to it and enhanced it in different manners; see e.g., [1,2,3,4,5,6,7,9,10,12,13,14,15,16,17,18,20,21,22,23,24,25,27,28,29,30,31,36,37,38,39,40,41] and references therein.
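    For intuition, here is a minimal numerical sketch of the extragradient scheme above on a toy VIP; the skew operator $A(u)=(u_2,-u_1)$ (monotone, $1$-Lipschitz) and the box $C=[0,1]^2$ are illustrative choices, not data from the paper:

```python
# Korpelevich's extragradient method on a toy VIP:
#   A(u) = (u2, -u1)  (monotone skew operator, Lipschitz constant L = 1),
#   C = [0, 1] x [0, 1],  step mu in (0, 1/L).

def proj_box(u, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^2."""
    return tuple(min(max(c, lo), hi) for c in u)

def A(u):
    return (u[1], -u[0])

def extragradient(x0, mu=0.5, iters=200):
    x = x0
    for _ in range(iters):
        Ax = A(x)
        y = proj_box(tuple(x[i] - mu * Ax[i] for i in range(2)))  # v_p
        Ay = A(y)
        x = proj_box(tuple(x[i] - mu * Ay[i] for i in range(2)))  # u_{p+1}
    return x

x = extragradient((0.9, 0.7))
# at any solution of VI(C, A), the residual x - P_C(x - mu*A(x)) vanishes
Ax = A(x)
p = proj_box(tuple(x[i] - 0.5 * Ax[i] for i in range(2)))
residual = sum((x[i] - p[i]) ** 2 for i in range(2)) ** 0.5
```

    In this example the iterates reach a point of $\mathrm{VI}(C,A)$ on the boundary of the box after only a few steps, and the fixed-point residual drops to zero.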

    Let $\Theta_1,\Theta_2:C\times C\to\mathbf{R}$ be two bifunctions and let $B_1,B_2:C\to H$ be two nonlinear mappings. In 2010, Ceng and Yao [35] considered the following problem of finding $(u^*,v^*)\in C\times C$ such that

    $$\begin{cases}\Theta_1(u^*,u)+\langle B_1v^*,\,u-u^*\rangle+\frac{1}{\mu_1}\langle u^*-v^*,\,u-u^*\rangle\ge0,&\forall u\in C,\\ \Theta_2(v^*,v)+\langle B_2u^*,\,v-v^*\rangle+\frac{1}{\mu_2}\langle v^*-u^*,\,v-v^*\rangle\ge0,&\forall v\in C,\end{cases}\tag{1.1}$$

    with $\mu_1,\mu_2>0$, which is called a system of generalized equilibrium problems (SGEP). In particular, if $\Theta_1=\Theta_2=0$, then the SGEP reduces to the following general system of variational inequalities (GSVI) considered in [6]: Find $(u^*,v^*)\in C\times C$ such that

    $$\begin{cases}\langle\mu_1B_1v^*+u^*-v^*,\,u-u^*\rangle\ge0,&\forall u\in C,\\ \langle\mu_2B_2u^*+v^*-u^*,\,v-v^*\rangle\ge0,&\forall v\in C.\end{cases}\tag{1.2}$$

    Note that SGEP (1.1) can be transformed into a fixed-point problem.

    Lemma 1.2 ([35]). Let $\Theta_1,\Theta_2:C\times C\to\mathbf{R}$ be two bifunctions satisfying the hypotheses (H1)–(H4) and let the mappings $B_1,B_2:C\to H$ be $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively. Let $\mu_1\in(0,2\alpha)$ and $\mu_2\in(0,2\beta)$. Then, for given $u^*,v^*\in C$, $(u^*,v^*)$ is a solution of SGEP (1.1) if and only if $u^*\in\mathrm{Fix}(G)$, where $\mathrm{Fix}(G)$ is the fixed-point set of the mapping $G:=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)$, and $v^*=T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)u^*$.

    On the other hand, suppose that the mappings $B_1,B_2:C\to H$ are $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively. Let $f:C\to C$ be contractive with constant $\delta\in[0,1)$ and let $F:C\to H$ be $\kappa$-Lipschitzian and $\eta$-strongly monotone with constants $\kappa,\eta>0$ such that $\delta<\zeta:=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}\in(0,1]$ for $\mu\in(0,\frac{2\eta}{\kappa^2})$. Let $T:C\to C$ be an asymptotically nonexpansive mapping with a sequence $\{\theta_p\}$ such that $\Omega:=\mathrm{Fix}(T)\cap\mathrm{Fix}(G)\neq\emptyset$, where $\mathrm{Fix}(G)$ is the fixed-point set of the mapping $G:=P_C(I-\mu_1B_1)P_C(I-\mu_2B_2)$ for $\mu_1\in(0,2\alpha)$ and $\mu_2\in(0,2\beta)$. Recently, Cai, Shehu and Iyiola [13] proposed the modified viscosity implicit rule for finding an element of $\Omega$, that is, for any starting point $x_1\in C$, let $\{x_p\}$ be the sequence constructed below

    $$\begin{cases}u_p=\beta_px_p+(1-\beta_p)y_p,\\ v_p=P_C(u_p-\mu_2B_2u_p),\\ y_p=P_C(v_p-\mu_1B_1v_p),\\ x_{p+1}=P_C[\alpha_pf(x_p)+(I-\alpha_p\mu F)T^py_p],\end{cases}\quad\forall p\ge1,\tag{1.3}$$

    where $\{\alpha_p\},\{\beta_p\}\subset(0,1]$ are such that (i) $\sum_{p=1}^{\infty}|\alpha_{p+1}-\alpha_p|<\infty$ and $\sum_{p=1}^{\infty}\alpha_p=\infty$; (ii) $\lim_{p\to\infty}\alpha_p=0$ and $\lim_{p\to\infty}\frac{\theta_p}{\alpha_p}=0$; (iii) $0<\varepsilon\le\beta_p\le1$ and $\sum_{p=1}^{\infty}|\beta_{p+1}-\beta_p|<\infty$; and (iv) $\sum_{p=1}^{\infty}\|T^{p+1}y_p-T^py_p\|<\infty$. It was proved in [13] that the sequence $\{x_p\}$ converges strongly to an element $u^*\in\Omega$, which is a unique solution of the hierarchical variational inequality (HVI): $\langle(\mu F-f)u^*,\,u-u^*\rangle\ge0$, $\forall u\in\Omega$. Very recently, Reich et al. [29] suggested the modified projection-type method for solving the pseudomonotone VIP with a uniformly continuous mapping $A$. Let $\{\alpha_p\}\subset(0,1)$ and let $f:C\to C$ be contractive with constant $\delta\in[0,1)$. Given any initial $x_1\in C$.

    Algorithm 1.3 ([29]). Initial step: Let $\nu>0$, $\ell\in(0,1)$, $\lambda\in(0,\frac{1}{\nu})$.

    Iterations: Given the current iterate $x_p$, calculate $x_{p+1}$ as follows:

    Step 1. Compute $y_p=P_C(x_p-\lambda Ax_p)$ and $R_\lambda(x_p):=x_p-y_p$. If $R_\lambda(x_p)=0$, then stop; $x_p$ is a solution of $\mathrm{VI}(C,A)$. Otherwise,

    Step 2. Compute $w_p=x_p-\tau_pR_\lambda(x_p)$, where $\tau_p:=\ell^{j_p}$ and $j_p$ is the smallest nonnegative integer $j$ s.t. $\langle Ax_p-A(x_p-\ell^jR_\lambda(x_p)),\,R_\lambda(x_p)\rangle\le\frac{\nu}{2}\|R_\lambda(x_p)\|^2$.

    Step 3. Compute $x_{p+1}=\alpha_pf(x_p)+(1-\alpha_p)P_{C_p}(x_p)$, where $C_p:=\{x\in C:h_p(x)\le0\}$ and $h_p(x)=\langle Aw_p,\,x-x_p\rangle+\frac{\tau_p}{2\lambda}\|R_\lambda(x_p)\|^2$. Again set $p:=p+1$ and go to Step 1.
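    The Armijo-type search in Step 2 can be sketched as follows; the operator $A$ below is an illustrative uniformly continuous map, and all numerical values are assumptions chosen only for the demonstration:

```python
# Armijo-type line search from Step 2: tau = ell^j with j the smallest
# nonnegative integer satisfying
#   <A(x) - A(x - ell^j * R), R> <= (nu/2) * ||R||^2.

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def armijo_tau(A, x, R, nu, ell, max_j=60):
    rr = dot(R, R)
    for j in range(max_j + 1):
        tau = ell ** j
        z = tuple(xi - tau * ri for xi, ri in zip(x, R))
        if dot(tuple(a - b for a, b in zip(A(x), A(z))), R) <= 0.5 * nu * rr:
            return tau
    raise RuntimeError("line search did not terminate")

def A(u):  # illustrative nonlinear, uniformly continuous operator
    return (u[0] ** 3 + u[1], u[1] - u[0])

x = (1.2, -0.4)
lam = 0.9                                      # lam in (0, 1/nu) with nu = 0.5
y = tuple(xi - lam * ai for xi, ai in zip(x, A(x)))   # P_C = I here (C = R^2)
R = tuple(xi - yi for xi, yi in zip(x, y))            # R_lam(x) = x - y
tau = armijo_tau(A, x, R, nu=0.5, ell=0.5)
```

    The search halves the trial step until the local change of $A$ along the residual direction is dominated by $\frac{\nu}{2}\|R_\lambda(x_p)\|^2$; for this particular data it stops at $\tau=0.0625$.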

    It was proven in [29] that, under mild conditions, $\{x_p\}$ converges strongly to an element of $\mathrm{VI}(C,A)$. Throughout this paper, in a real Hilbert space $H$, the acronyms SGEP, VIP, HVI and FPP stand for a system of generalized equilibrium problems, a pseudomonotone variational inequality problem, a hierarchical variational inequality and a fixed-point problem of an asymptotically nonexpansive mapping, respectively. We introduce the modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding a common solution of the SGEP, VIP and FPP. The proposed algorithms are based on the subgradient extragradient rule with linear-search process, the Mann implicit iteration approach, and the hybrid deepest-descent technique. Under mild restrictions, we demonstrate the strong convergence of the proposed algorithms to a common solution of the SGEP, VIP and FPP, which is a unique solution of a certain HVI defined on their common solution set. In addition, a numerical example is provided to illustrate the feasibility and implementability of our suggested rules.

    The rest of this article is organized as follows: In Section 2, we present some concepts and basic tools for further use. Section 3 treats the convergence analysis of the suggested algorithms. Last, Section 4 applies our main results to solve the SGEP, VIP and FPP in an illustrative example. Our results improve and extend the ones associated with very recent literature, e.g., [13,17,29].

    Let $H$ be a real Hilbert space and let $C\subset H$ be a convex and closed set. Given a sequence $\{u_k\}\subset H$, we denote by $u_k\to u$ (resp., $u_k\rightharpoonup u$) the strong (resp., weak) convergence of $\{u_k\}$ to $u$. For all $u,v\in C$, an operator $\Psi:C\to H$ is referred to as being

    (a) $L$-Lipschitzian (or $L$-Lipschitz continuous) if $\exists L>0$ s.t. $\|\Psi u-\Psi v\|\le L\|u-v\|$;

    (b) pseudomonotone if $\langle\Psi u,\,v-u\rangle\ge0\ \Rightarrow\ \langle\Psi v,\,v-u\rangle\ge0$;

    (c) monotone if $\langle\Psi u-\Psi v,\,u-v\rangle\ge0$;

    (d) $\alpha$-strongly monotone if $\exists\alpha>0$ s.t. $\langle\Psi u-\Psi v,\,u-v\rangle\ge\alpha\|u-v\|^2$;

    (e) $\beta$-inverse-strongly monotone if $\exists\beta>0$ s.t. $\langle\Psi u-\Psi v,\,u-v\rangle\ge\beta\|\Psi u-\Psi v\|^2$;

    (f) sequentially weakly continuous if for each $\{v_k\}\subset C$, the relation holds: $v_k\rightharpoonup v\ \Rightarrow\ \Psi v_k\rightharpoonup\Psi v$.

    It is clear that each monotone mapping is pseudomonotone, but the converse is not true. Also, for each $v\in H$, there exists a unique nearest point $P_C(v)\in C$ s.t. $\|v-P_C(v)\|\le\|v-w\|$ for all $w\in C$; $P_C$ is called a nearest point (or metric) projection of $H$ onto $C$. The following conclusions hold (see [19]):

    (a) $\langle v-w,\,P_C(v)-P_C(w)\rangle\ge\|P_C(v)-P_C(w)\|^2$, $\forall v,w\in H$;

    (b) $w=P_C(v)\ \Leftrightarrow\ \langle v-w,\,u-w\rangle\le0$, $\forall u\in C$, for any $v\in H$ and $w\in C$;

    (c) $\|v-w\|^2\ge\|v-P_C(v)\|^2+\|w-P_C(v)\|^2$, $\forall v\in H,\ w\in C$;

    (d) $\|v-w\|^2=\|v\|^2-\|w\|^2-2\langle v-w,\,w\rangle$, $\forall v,w\in H$;

    (e) $\|sv+(1-s)w\|^2=s\|v\|^2+(1-s)\|w\|^2-s(1-s)\|v-w\|^2$, $\forall v,w\in H,\ s\in[0,1]$.

    The following inequality is an immediate consequence of the subdifferential inequality of the function $\frac{1}{2}\|\cdot\|^2$:

    $$\|u+v\|^2\le\|u\|^2+2\langle v,\,u+v\rangle,\quad\forall u,v\in H.$$
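    The identities (d), (e) and the subdifferential inequality above are easy to spot-check numerically in $\mathbf{R}^3$ (a sanity check, not a proof):

```python
# Numerical spot-check of Hilbert-space identities (d), (e) and the
# subdifferential inequality ||u+v||^2 <= ||u||^2 + 2<v, u+v> in R^3.
import random

random.seed(7)

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm2(a): return dot(a, a)

for _ in range(100):
    v = tuple(random.uniform(-5, 5) for _ in range(3))
    w = tuple(random.uniform(-5, 5) for _ in range(3))
    s = random.random()
    # (d): ||v - w||^2 = ||v||^2 - ||w||^2 - 2<v - w, w>
    assert abs(norm2(sub(v, w)) - (norm2(v) - norm2(w) - 2 * dot(sub(v, w), w))) < 1e-9
    # (e): ||s*v + (1-s)*w||^2 = s||v||^2 + (1-s)||w||^2 - s(1-s)||v - w||^2
    lhs = norm2(tuple(s * a + (1 - s) * b for a, b in zip(v, w)))
    rhs = s * norm2(v) + (1 - s) * norm2(w) - s * (1 - s) * norm2(sub(v, w))
    assert abs(lhs - rhs) < 1e-9
    # subdifferential inequality
    upv = tuple(a + b for a, b in zip(v, w))
    assert norm2(upv) <= norm2(v) + 2 * dot(w, upv) + 1e-9
```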

    The following lemmas will be used for demonstrating our main results in the sequel.

    Lemma 2.1. Let the mapping $B:C\to H$ be $\gamma$-inverse-strongly monotone. Then, for a given $\lambda\ge0$,

    $$\|(I-\lambda B)u-(I-\lambda B)v\|^2\le\|u-v\|^2-\lambda(2\gamma-\lambda)\|Bu-Bv\|^2.$$

    In particular, if $0\le\lambda\le2\gamma$, then $I-\lambda B$ is nonexpansive.

    Using Lemma 1.1 and Lemma 2.1, we immediately derive the following lemma.

    Lemma 2.2 ([35]). Let $\Theta_1,\Theta_2:C\times C\to\mathbf{R}$ be two bifunctions satisfying the hypotheses (H1)–(H4), and let the mappings $B_1,B_2:C\to H$ be $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively. Let the mapping $G:C\to C$ be defined as $G:=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)$. Then $G:C\to C$ is a nonexpansive mapping provided $0<\mu_1\le2\alpha$ and $0<\mu_2\le2\beta$.

    In particular, if $\Theta_1=\Theta_2=0$, using Lemma 1.1 we deduce that $T^{\Theta_1}_{\mu_1}=T^{\Theta_2}_{\mu_2}=P_C$. Thus, from Lemma 2.2 we obtain the corollary below.

    Corollary 2.3 ([6]). Let the mappings $B_1,B_2:C\to H$ be $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively. Let the mapping $G:C\to C$ be defined as $G:=P_C(I-\mu_1B_1)P_C(I-\mu_2B_2)$. If $0<\mu_1\le2\alpha$ and $0<\mu_2\le2\beta$, then $G:C\to C$ is nonexpansive.

    Lemma 2.4 ([6]). Let $A:C\to H$ be pseudomonotone and continuous. Then $u\in C$ is a solution to the VIP $\langle Au,\,\upsilon-u\rangle\ge0$, $\forall\upsilon\in C$, if and only if $\langle A\upsilon,\,\upsilon-u\rangle\ge0$, $\forall\upsilon\in C$.

    Lemma 2.5 ([8]). Let $\{a_l\}$ be a sequence of nonnegative numbers satisfying the conditions: $a_{l+1}\le(1-\lambda_l)a_l+\lambda_l\gamma_l$, $\forall l\ge1$, where $\{\lambda_l\}$ and $\{\gamma_l\}$ are sequences of real numbers such that (i) $\{\lambda_l\}\subset[0,1]$ and $\sum_{l=1}^{\infty}\lambda_l=\infty$, and (ii) $\limsup_{l\to\infty}\gamma_l\le0$ or $\sum_{l=1}^{\infty}|\lambda_l\gamma_l|<\infty$. Then $\lim_{l\to\infty}a_l=0$.
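    A quick numerical illustration of Lemma 2.5, with illustrative sequences chosen here (not taken from the paper): $\lambda_l=\frac{1}{l+1}$, so that $\sum_l\lambda_l=\infty$, and $\gamma_l=\frac{1}{l}$, so that $\limsup_l\gamma_l\le0$ holds with equality; iterating the extreme case $a_{l+1}=(1-\lambda_l)a_l+\lambda_l\gamma_l$ drives $a_l$ to zero:

```python
# Illustration of Lemma 2.5: a_{l+1} = (1 - lam_l)*a_l + lam_l*gam_l with
# lam_l = 1/(l+1) (sum = infinity) and gam_l = 1/l -> 0.  Telescoping gives
# n*a_n = a_1 + H_{n-1}, so a_n ~ (1 + log n)/n -> 0.
a = 1.0
for l in range(1, 200000):
    lam = 1.0 / (l + 1)
    gam = 1.0 / l
    a = (1.0 - lam) * a + lam * gam
```

    After $2\times10^5$ iterations, $a_l$ is of order $10^{-5}$, consistent with the lemma's conclusion $a_l\to0$.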

    Later on, we will make use of the following lemmas to demonstrate our main results.

    Lemma 2.6 ([32]). Let $H_1$ and $H_2$ be two real Hilbert spaces. Suppose that $A:H_1\to H_2$ is uniformly continuous on bounded subsets of $H_1$ and $M$ is a bounded subset of $H_1$. Then $A(M)$ is bounded.

    Lemma 2.7 ([33]). Let $h$ be a real-valued function on $H$ and define $K:=\{x\in C:h(x)\le0\}$. If $K$ is nonempty and $h$ is Lipschitz continuous on $C$ with modulus $\theta>0$, then $\mathrm{dist}(x,K)\ge\theta^{-1}\max\{h(x),0\}$, $\forall x\in C$, where $\mathrm{dist}(x,K)$ denotes the distance of $x$ to $K$.

    Lemma 2.8 ([11]). Let $X$ be a Banach space which admits a weakly continuous duality mapping, $C$ be a nonempty closed convex subset of $X$, and $T:C\to C$ be an asymptotically nonexpansive mapping with $\mathrm{Fix}(T)\neq\emptyset$. Then $I-T$ is demiclosed at zero, i.e., if $\{u_k\}$ is a sequence in $C$ such that $u_k\rightharpoonup u\in C$ and $(I-T)u_k\to0$, then $(I-T)u=0$, where $I$ is the identity mapping of $X$.

    The following lemmas are very crucial to the convergence analysis of the proposed algorithms.

    Lemma 2.9 ([30]). Let $\{\Lambda_m\}$ be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence $\{\Lambda_{m_k}\}\subset\{\Lambda_m\}$ s.t. $\Lambda_{m_k}<\Lambda_{m_k+1}$ for all $k\ge1$. Let the sequence $\{\phi(m)\}_{m\ge m_0}$ of integers be formulated below:

    $$\phi(m)=\max\{k\le m:\Lambda_k<\Lambda_{k+1}\},$$

    with integer $m_0\ge1$ satisfying $\{k\le m_0:\Lambda_k<\Lambda_{k+1}\}\neq\emptyset$. Then there hold the statements below:

    (i) $\phi(m_0)\le\phi(m_0+1)\le\cdots$ and $\phi(m)\to\infty$;

    (ii) $\Lambda_{\phi(m)}\le\Lambda_{\phi(m)+1}$ and $\Lambda_m\le\Lambda_{\phi(m)+1}$, $\forall m\ge m_0$.

    Lemma 2.10 ([8]). Let $\lambda\in(0,1]$ and let $S:C\to C$ be a nonexpansive mapping. Let $S^{\lambda}:C\to H$ be the mapping formulated by $S^{\lambda}u:=(I-\lambda\mu F)Su$, $\forall u\in C$, with $F:C\to H$ being $\kappa$-Lipschitzian and $\eta$-strongly monotone. Then $S^{\lambda}$ is a contraction provided $0<\mu<\frac{2\eta}{\kappa^2}$, i.e., $\|S^{\lambda}u-S^{\lambda}v\|\le(1-\lambda\tau)\|u-v\|$, $\forall u,v\in C$, where $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}\in(0,1]$.
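    Lemma 2.10 can be checked numerically in a simple instance (all choices below are illustrative assumptions): take $S=P_C$ for a box, which is nonexpansive, and $F=I$, so $\eta=\kappa=1$; with $\mu=1\in(0,\frac{2\eta}{\kappa^2})$ one gets $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}=1$, and the contraction factor $1-\lambda\tau$ is attained exactly:

```python
# Check of Lemma 2.10: S^lam := (I - lam*mu*F) o S with S = P_C (box
# projection, nonexpansive) and F = I (eta = kappa = 1), mu = 1, so tau = 1
# and ||S^lam u - S^lam v|| <= (1 - lam*tau)||u - v||.
import random

random.seed(1)

def proj_box(u):
    return tuple(min(max(c, -1.0), 1.0) for c in u)

mu, tau, lam = 1.0, 1.0, 0.3

def S_lam(u):
    return tuple((1.0 - lam * mu) * c for c in proj_box(u))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

worst = 0.0
for _ in range(200):
    u = tuple(random.uniform(-3, 3) for _ in range(2))
    v = tuple(random.uniform(-3, 3) for _ in range(2))
    if dist(u, v) > 1e-9:
        worst = max(worst, dist(S_lam(u), S_lam(v)) / dist(u, v))
assert worst <= (1.0 - lam * tau) + 1e-12
```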

    In this section, let the feasible set C be a nonempty closed convex subset of a real Hilbert space H, and assume always that the following conditions hold:

    (1) $S:C\to C$ is an asymptotically nonexpansive mapping with a sequence $\{\theta_n\}$, and $A:H\to H$ is pseudomonotone and uniformly continuous on $C$, s.t. $\|Az\|\le\liminf_{n\to\infty}\|Au_n\|$ for each $\{u_n\}\subset C$ with $u_n\rightharpoonup z$.

    (2) $\Theta_1,\Theta_2:C\times C\to\mathbf{R}$ are two bifunctions satisfying the hypotheses (H1)–(H4), and $B_1,B_2:C\to H$ are $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively.

    (3) $\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)\neq\emptyset$, where $G:=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)$ for $\mu_1\in(0,2\alpha)$ and $\mu_2\in(0,2\beta)$.

    (4) $f:C\to H$ is a contraction with constant $\delta\in[0,1)$, and $F:C\to H$ is $\eta$-strongly monotone and $\kappa$-Lipschitzian such that $\delta<\tau:=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}$ for $\mu\in(0,\frac{2\eta}{\kappa^2})$.

    (5) $\{\sigma_n\},\{\alpha_n\}\subset(0,1]$ and $\{\beta_n\}\subset[0,1]$ are three real number sequences satisfying

    (i) $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\lim_{n\to\infty}\alpha_n=0$ and $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}=0$;

    (ii) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$;

    (iii) $\limsup_{n\to\infty}\sigma_n<1$.

    Algorithm 3.1. Initial step: Given $\nu>0$, $\ell\in(0,1)$, $\lambda\in(0,\frac{1}{\nu})$. Let $x_1\in C$ be arbitrary.

    Iterations: Given the current iterate $x_n$, calculate $x_{n+1}$ below:

    Step 1. Calculate $w_n=\sigma_nx_n+(1-\sigma_n)u_n$ with

    $$v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n),\qquad u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n).$$

    Step 2. Calculate $y_n=P_C(w_n-\lambda Aw_n)$ and $R_\lambda(w_n):=w_n-y_n$.

    Step 3. Calculate $t_n=w_n-\tau_nR_\lambda(w_n)$, where $\tau_n:=\ell^{j_n}$ and $j_n$ is the smallest nonnegative integer $j$ satisfying

    $$\langle Aw_n-A(w_n-\ell^jR_\lambda(w_n)),\,w_n-y_n\rangle\le\frac{\nu}{2}\|R_\lambda(w_n)\|^2.\tag{3.1}$$

    Step 4. Compute $z_n=P_{C_n}(w_n)$ and $x_{n+1}=\beta_nx_n+(1-\beta_n)P_C[\alpha_nf(x_n)+(I-\alpha_n\mu F)S^nz_n]$, where $C_n:=\{u\in C:h_n(u)\le0\}$ and

    $$h_n(u)=\langle At_n,\,u-w_n\rangle+\frac{\tau_n}{2\lambda}\|R_\lambda(w_n)\|^2.\tag{3.2}$$

    Again put $n:=n+1$ and return to Step 1.
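    For implementability, here is a drastically simplified numerical sketch of Algorithm 3.1 under illustrative assumptions (not the general setting of the paper): $C=H=\mathbf{R}^2$, $\Theta_1=\Theta_2=0$ and $B_1=B_2=0$ (so the resolvents reduce to the identity and the implicit Step 1 collapses to $w_n=x_n$), $S=I$ (so $\theta_n=0$), $A(u)=(u_2,-u_1)$ (pseudomonotone, uniformly continuous), $f(x)=0.1x$, and $F=I$ with $\mu=1$, so $\tau=1>\delta=0.1$. Here $\Omega=\{0\}$ and the iterates should converge strongly to $x^*=0$:

```python
# Simplified sketch of Algorithm 3.1 (illustrative degenerate setting:
# C = H = R^2, Theta_i = 0, B_i = 0, S = I, A(u) = (u2, -u1), f = 0.1*I,
# F = I, mu = 1).  Omega = {0}; the iterates should tend to x* = 0.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def A(u):
    return (u[1], -u[0])

nu, ell, lam = 0.5, 0.5, 0.5          # nu > 0, ell in (0,1), lam in (0, 1/nu)
x = (1.0, 2.0)
for n in range(1, 1001):
    alpha, beta = 1.0 / (n + 1), 0.5
    w = x                              # Step 1 collapses: u_n = G w_n = w_n
    R = tuple(lam * c for c in A(w))   # R_lam(w) = w - (w - lam*A(w))
    rr = dot(R, R)
    # Step 3: Armijo line search (3.1)
    tau = 1.0
    while True:
        t = tuple(wi - tau * ri for wi, ri in zip(w, R))
        d = (A(w)[0] - A(t)[0], A(w)[1] - A(t)[1])
        if dot(d, R) <= 0.5 * nu * rr:
            break
        tau *= ell
    # Step 4: z_n = projection of w_n onto C_n = {u : h_n(u) <= 0}, where
    # h_n(u) = <A t_n, u - w_n> + (tau_n/(2*lam))*||R||^2 (half-space here)
    At = A(t)
    hw = (tau / (2.0 * lam)) * rr      # h_n(w_n) >= 0
    aa = dot(At, At)
    z = w if aa < 1e-16 else tuple(wi - (hw / aa) * ai for wi, ai in zip(w, At))
    # x_{n+1} = beta*x_n + (1 - beta)*[alpha*f(x_n) + (1 - alpha*mu*F) S^n z_n]
    inner = tuple(alpha * 0.1 * xi + (1.0 - alpha) * zi for xi, zi in zip(x, z))
    x = tuple(beta * xi + (1.0 - beta) * ci for xi, ci in zip(x, inner))

err = dot(x, x) ** 0.5
```

    Even in this degenerate instance, the half-space cut $C_n$ shrinks the iterate geometrically, which matches the strong convergence asserted by Theorem 3.6 below on its common solution set.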

    Lemma 3.2. The Armijo-type search rule (3.1) is well defined, and the following relation holds: $\frac{1}{\lambda}\|R_\lambda(w_n)\|^2\le\langle R_\lambda(w_n),\,Aw_n\rangle$.

    Proof. Since $\ell\in(0,1)$ and $A$ is uniformly continuous on $C$, it is clear that $\lim_{j\to\infty}\langle Aw_n-A(w_n-\ell^jR_\lambda(w_n)),\,R_\lambda(w_n)\rangle=0$. If $R_\lambda(w_n)=0$, one gets $j_n=0$. Otherwise, from $R_\lambda(w_n)\neq0$ it follows that there exists an integer $j_n\ge0$ fulfilling (3.1). It is readily known that the firm nonexpansivity of $P_C$ implies $\langle u-P_Cv,\,u-v\rangle\ge\|u-P_Cv\|^2$, $\forall u\in C,\ v\in H$. Setting $v=w_n-\lambda Aw_n$ and $u=w_n$, one has $\lambda\langle w_n-P_C(w_n-\lambda Aw_n),\,Aw_n\rangle\ge\|w_n-P_C(w_n-\lambda Aw_n)\|^2$. Hence the relation holds.

    Lemma 3.3. Suppose that $h_n$ is the function formulated in (3.2). Then $h_n(\upsilon)\le0$ for all $\upsilon\in\Omega$. In addition, when $R_\lambda(w_n)\neq0$, one has $h_n(w_n)>0$.

    Proof. It suffices to show the former claim of Lemma 3.3 because the latter claim is clear. In fact, pick an arbitrary $\upsilon\in\Omega$. By Lemma 2.4 one gets $\langle At_n,\,t_n-\upsilon\rangle\ge0$. Thus, one has

    $$\begin{aligned}h_n(\upsilon)&=\langle At_n,\,\upsilon-w_n\rangle+\frac{\tau_n}{2\lambda}\|R_\lambda(w_n)\|^2=\langle At_n,\,t_n-w_n\rangle+\langle At_n,\,\upsilon-t_n\rangle+\frac{\tau_n}{2\lambda}\|R_\lambda(w_n)\|^2\\&\le-\tau_n\langle At_n,\,R_\lambda(w_n)\rangle+\frac{\tau_n}{2\lambda}\|R_\lambda(w_n)\|^2.\end{aligned}\tag{3.3}$$

    Meanwhile, from (3.1) it follows that $\frac{\nu}{2}\|R_\lambda(w_n)\|^2\ge\langle Aw_n-At_n,\,R_\lambda(w_n)\rangle$. So, from Lemma 3.2 one gets

    $$\langle At_n,\,R_\lambda(w_n)\rangle\ge-\frac{\nu}{2}\|R_\lambda(w_n)\|^2+\langle R_\lambda(w_n),\,Aw_n\rangle\ge\Big(-\frac{\nu}{2}+\frac{1}{\lambda}\Big)\|R_\lambda(w_n)\|^2.\tag{3.4}$$

    This, along with (3.3), arrives at

    $$h_n(\upsilon)\le-\frac{\tau_n}{2}\Big(\frac{1}{\lambda}-\nu\Big)\|R_\lambda(w_n)\|^2.\tag{3.5}$$

    Therefore, we derive the desired result.

    Lemma 3.4. Let $\{w_n\},\{x_n\},\{y_n\},\{z_n\}$ be the bounded sequences constructed in Algorithm 3.1. Assume that $\|x_n-x_{n+1}\|\to0$, $\|x_n-Gw_n\|\to0$, $\|w_n-y_n\|\to0$ and $\|x_n-z_n\|\to0$. If $\|S^nx_n-S^{n+1}x_n\|\to0$ and there exists $\{x_{n_k}\}\subset\{x_n\}$ such that $x_{n_k}\rightharpoonup z\in C$, then $z\in\Omega$.

    Proof. From Algorithm 3.1 we get $w_n-x_n=(1-\sigma_n)(u_n-x_n)$, $\forall n\ge1$, and hence $\|w_n-x_n\|=(1-\sigma_n)\|u_n-x_n\|\le\|u_n-x_n\|$. Since $u_n=Gw_n$, utilizing the assumption $\|u_n-x_n\|=\|x_n-Gw_n\|\to0$, we have

    $$\lim_{n\to\infty}\|w_n-x_n\|=0.\tag{3.6}$$

    Putting $q_n:=\alpha_nf(x_n)+(I-\alpha_n\mu F)S^nz_n$, by Algorithm 3.1 we know that $x_{n+1}=\beta_nx_n+(1-\beta_n)P_C(q_n)$ and $q_n-S^nz_n=\alpha_nf(x_n)-\alpha_n\mu FS^nz_n$. Hence one gets

    $$\begin{aligned}\|x_n-S^nz_n\|&\le\|x_n-x_{n+1}\|+\|x_{n+1}-S^nz_n\|\le\|x_n-x_{n+1}\|+\beta_n\|x_n-S^nz_n\|+(1-\beta_n)\|q_n-S^nz_n\|\\&\le\|x_n-x_{n+1}\|+\beta_n\|x_n-S^nz_n\|+\alpha_n\|f(x_n)\|+\alpha_n\mu\|FS^nz_n\|.\end{aligned}$$

    This immediately ensures that

    $$(1-\beta_n)\|x_n-S^nz_n\|\le\|x_n-x_{n+1}\|+\alpha_n\|f(x_n)\|+\alpha_n\mu\|FS^nz_n\|.$$

    Since $\|x_n-x_{n+1}\|\to0$, $\alpha_n\to0$ and $\liminf_{n\to\infty}(1-\beta_n)>0$, by the boundedness of $\{x_n\},\{z_n\}$ we obtain

    $$\lim_{n\to\infty}\|x_n-S^nz_n\|=0.$$

    We claim that $\lim_{n\to\infty}\|x_n-Sx_n\|=0$. In fact, using the asymptotical nonexpansivity of $S$, one deduces that

    $$\begin{aligned}\|x_n-Sx_n\|&\le\|x_n-S^nz_n\|+\|S^nz_n-S^nx_n\|+\|S^nx_n-S^{n+1}x_n\|+\|S^{n+1}x_n-S^{n+1}z_n\|+\|S^{n+1}z_n-Sx_n\|\\&\le\|x_n-S^nz_n\|+(1+\theta_n)\|z_n-x_n\|+\|S^nx_n-S^{n+1}x_n\|+(1+\theta_{n+1})\|x_n-z_n\|+(1+\theta_1)\|S^nz_n-x_n\|\\&=(2+\theta_1)\|x_n-S^nz_n\|+(2+\theta_n+\theta_{n+1})\|z_n-x_n\|+\|S^nx_n-S^{n+1}x_n\|.\end{aligned}$$

    Since $\|x_n-z_n\|\to0$, $\|x_n-S^nz_n\|\to0$ and $\|S^nx_n-S^{n+1}x_n\|\to0$, we obtain

    $$\lim_{n\to\infty}\|x_n-Sx_n\|=0.\tag{3.7}$$

    Also, let us show that $\lim_{n\to\infty}\|x_n-Gx_n\|=0$. In fact, by Lemma 2.2 we know that $G:C\to C$ is nonexpansive for $\mu_1\in(0,2\alpha)$ and $\mu_2\in(0,2\beta)$. Again from Algorithm 3.1, we have $u_n=Gw_n$. Since

    $$\|Gx_n-x_n\|\le\|Gx_n-Gw_n\|+\|Gw_n-x_n\|\le\|x_n-w_n\|+\|u_n-x_n\|,$$

    noticing $\|u_n-x_n\|\to0$ and $\|x_n-w_n\|\to0$ (due to (3.6)), we obtain

    $$\lim_{n\to\infty}\|Gx_n-x_n\|=0.\tag{3.8}$$

    Next, let us show $z\in\mathrm{VI}(C,A)$. Indeed, noticing $\|x_n-w_n\|\to0$ and $x_{n_k}\rightharpoonup z$, we know that $w_{n_k}\rightharpoonup z$. Since $C$ is convex and closed, from $\{w_n\}\subset C$ and $w_{n_k}\rightharpoonup z$ we get $z\in C$. In what follows, we consider two cases. In the case of $Az=0$, it is clear that $z\in\mathrm{VI}(C,A)$ because $\langle Az,\,y-z\rangle=0$ for all $y\in C$. In the case of $Az\neq0$, it follows from $\|w_n-x_n\|\to0$ and $x_{n_k}\rightharpoonup z$ that $w_{n_k}\rightharpoonup z$ as $k\to\infty$. Utilizing the assumption on $A$, instead of the sequentially weak continuity of $A$, we get $0<\|Az\|\le\liminf_{k\to\infty}\|Aw_{n_k}\|$. So, we may assume that $Aw_{n_k}\neq0$ for all $k\ge1$. On the other hand, from $y_n=P_C(w_n-\lambda Aw_n)$ one has $\langle w_n-\lambda Aw_n-y_n,\,x-y_n\rangle\le0$, $\forall x\in C$, and hence

    $$\frac{1}{\lambda}\langle w_n-y_n,\,x-y_n\rangle+\langle Aw_n,\,y_n-w_n\rangle\le\langle Aw_n,\,x-w_n\rangle,\quad\forall x\in C.\tag{3.9}$$

    In the light of the uniform continuity of $A$ on $C$, one knows that $\{Aw_n\}$ is bounded (due to Lemma 2.6). Note that $\{y_n\}$ is bounded as well. Thus, from (3.9) we get $\liminf_{k\to\infty}\langle Aw_{n_k},\,x-w_{n_k}\rangle\ge0$, $\forall x\in C$.

    To show that $z\in\mathrm{VI}(C,A)$, we now choose a decreasing sequence $\{\gamma_k\}\subset(0,1)$ satisfying $\gamma_k\to0$ as $k\to\infty$. For each $k\ge1$, we denote by $l_k$ the smallest positive integer such that

    $$\langle Aw_{n_j},\,x-w_{n_j}\rangle+\gamma_k\ge0,\quad\forall j\ge l_k.\tag{3.10}$$

    Because $\{\gamma_k\}$ is decreasing, it is readily known that $\{l_k\}$ is increasing. Note that $Aw_{l_k}\neq0$ for all $k\ge1$ (due to $\{Aw_{l_k}\}\subset\{Aw_{n_k}\}$). Then, putting $\upsilon_{l_k}=\frac{Aw_{l_k}}{\|Aw_{l_k}\|^2}$, one gets $\langle Aw_{l_k},\,\upsilon_{l_k}\rangle=1$ for all $k\ge1$. So, using (3.10) one has $\langle Aw_{l_k},\,x+\gamma_k\upsilon_{l_k}-w_{l_k}\rangle\ge0$ for all $k\ge1$. Again from the pseudomonotonicity of $A$ one has $\langle A(x+\gamma_k\upsilon_{l_k}),\,x+\gamma_k\upsilon_{l_k}-w_{l_k}\rangle\ge0$ for all $k\ge1$. This immediately arrives at

    $$\langle Ax,\,x-w_{l_k}\rangle\ge\langle Ax-A(x+\gamma_k\upsilon_{l_k}),\,x+\gamma_k\upsilon_{l_k}-w_{l_k}\rangle-\gamma_k\langle Ax,\,\upsilon_{l_k}\rangle,\quad\forall k\ge1.\tag{3.11}$$

    We claim that $\lim_{k\to\infty}\gamma_k\upsilon_{l_k}=0$. In fact, from $x_{n_k}\rightharpoonup z\in C$ and $\|w_n-x_n\|\to0$, we obtain $w_{n_k}\rightharpoonup z$. Note that $\{w_{l_k}\}\subset\{w_{n_k}\}$ and $\gamma_k\to0$ as $k\to\infty$. So it follows that

    $$0\le\limsup_{k\to\infty}\gamma_k\|\upsilon_{l_k}\|=\limsup_{k\to\infty}\frac{\gamma_k}{\|Aw_{l_k}\|}\le\frac{\limsup_{k\to\infty}\gamma_k}{\liminf_{k\to\infty}\|Aw_{n_k}\|}=0.$$

    Hence one gets $\gamma_k\upsilon_{l_k}\to0$ as $k\to\infty$. Thus, letting $k\to\infty$, we deduce that the right-hand side of (3.11) tends to zero by the uniform continuity of $A$, the boundedness of $\{w_{l_k}\},\{\upsilon_{l_k}\}$ and the limit $\lim_{k\to\infty}\gamma_k\upsilon_{l_k}=0$. Therefore, $\langle Ax,\,x-z\rangle=\liminf_{k\to\infty}\langle Ax,\,x-w_{l_k}\rangle\ge0$, $\forall x\in C$. Using Lemma 2.4, one has $z\in\mathrm{VI}(C,A)$.

    Last, we claim that $z\in\Omega$. In fact, (3.7) yields $\|x_{n_k}-Sx_{n_k}\|\to0$. By Lemma 2.8 one knows that $I-S$ is demiclosed at zero. So, from $x_{n_k}\rightharpoonup z$ it follows that $(I-S)z=0$, i.e., $z\in\mathrm{Fix}(S)$. Besides, let us claim that $z\in\mathrm{Fix}(G)$. Actually, by Lemma 2.8 we deduce that $I-G$ is demiclosed at zero. Thus, from (3.8) and $x_{n_k}\rightharpoonup z$ one has $(I-G)z=0$, i.e., $z\in\mathrm{Fix}(G)$. Accordingly, $z\in\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)=\Omega$. This completes the proof.

    Lemma 3.5. Let $\{w_n\}$ be the sequence constructed in Algorithm 3.1. Then

    $$\lim_{n\to\infty}\tau_n\|R_\lambda(w_n)\|^2=0\ \Longrightarrow\ \lim_{n\to\infty}\|R_\lambda(w_n)\|=0.\tag{3.12}$$

    Proof. We claim that $\limsup_{n\to\infty}\|R_\lambda(w_n)\|=0$. Conversely, suppose that $\limsup_{n\to\infty}\|R_\lambda(w_n)\|=d>0$. Then there exists $\{n_p\}\subset\{n\}$ s.t. $\lim_{p\to\infty}\|R_\lambda(w_{n_p})\|=d>0$. Note that $\lim_{p\to\infty}\tau_{n_p}\|R_\lambda(w_{n_p})\|^2=0$. First, if $\liminf_{p\to\infty}\tau_{n_p}>0$, we may assume that there exists $\xi>0$ s.t. $\tau_{n_p}\ge\xi>0$ for all $p\ge1$. So it follows that

    $$\|R_\lambda(w_{n_p})\|^2=\frac{1}{\tau_{n_p}}\cdot\tau_{n_p}\|R_\lambda(w_{n_p})\|^2\le\frac{1}{\xi}\cdot\tau_{n_p}\|R_\lambda(w_{n_p})\|^2,\tag{3.13}$$

    which immediately leads to

    $$0<d^2=\lim_{p\to\infty}\|R_\lambda(w_{n_p})\|^2\le\lim_{p\to\infty}\Big\{\frac{1}{\xi}\tau_{n_p}\|R_\lambda(w_{n_p})\|^2\Big\}=0.$$

    So this reaches a contradiction.

    If $\liminf_{p\to\infty}\tau_{n_p}=0$, there exists a subsequence of $\{\tau_{n_p}\}$, still denoted by $\{\tau_{n_p}\}$, s.t. $\lim_{p\to\infty}\tau_{n_p}=0$. We now set

    $$\upsilon_{n_p}:=\frac{\tau_{n_p}}{\ell}y_{n_p}+\Big(1-\frac{\tau_{n_p}}{\ell}\Big)w_{n_p}=w_{n_p}-\frac{\tau_{n_p}}{\ell}(w_{n_p}-y_{n_p}).$$

    Then, from $\lim_{p\to\infty}\tau_{n_p}\|R_\lambda(w_{n_p})\|^2=0$ we infer that

    $$\lim_{p\to\infty}\|\upsilon_{n_p}-w_{n_p}\|^2=\lim_{p\to\infty}\frac{1}{\ell^2}\tau_{n_p}\cdot\tau_{n_p}\|R_\lambda(w_{n_p})\|^2=0.\tag{3.14}$$

    Using the stepsize rule (3.1), which fails at the exponent $j_{n_p}-1$, one gets $\langle Aw_{n_p}-A\upsilon_{n_p},\,R_\lambda(w_{n_p})\rangle>\frac{\nu}{2}\|R_\lambda(w_{n_p})\|^2$. Since $A$ is uniformly continuous on bounded subsets of $C$, (3.14) guarantees that

    $$\lim_{p\to\infty}\|Aw_{n_p}-A\upsilon_{n_p}\|=0,\tag{3.15}$$

    which hence attains $\lim_{p\to\infty}\|R_\lambda(w_{n_p})\|=0$. So this reaches a contradiction. Therefore, $\|R_\lambda(w_n)\|\to0$ as $n\to\infty$. This completes the proof.

    Theorem 3.6. Suppose that $\{x_n\}$ is the sequence constructed in Algorithm 3.1. Then $x_n\to x^*\in\Omega$ provided $\|S^nx_n-S^{n+1}x_n\|\to0$, with $x^*\in\Omega$ being the unique solution to the HVI

    $$\langle(\mu F-f)x^*,\,y-x^*\rangle\ge0,\quad\forall y\in\Omega.$$

    Proof. First of all, noticing $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$ and $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}=0$, we may assume, without loss of generality, that $\{\beta_n\}\subset[a,b]\subset(0,1)$ and $\theta_n\le\frac{\alpha_n(\tau-\delta)}{2}$ for all $n\ge1$. We claim that $P_\Omega(I-\mu F+f):C\to C$ is a contractive map. In fact, using Lemma 2.10, one has

    $$\|P_\Omega(I-\mu F+f)u-P_\Omega(I-\mu F+f)v\|\le[1-(\tau-\delta)]\|u-v\|,\quad\forall u,v\in C.$$

    This ensures that $P_\Omega(I-\mu F+f)$ is contractive. Banach's Contraction Mapping Principle guarantees that it has a unique fixed point in $C$, say $x^*\in C$, s.t. $x^*=P_\Omega(I-\mu F+f)x^*$. That is, $x^*\in\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$ is the unique solution of the HVI

    $$\langle(\mu F-f)x^*,\,y-x^*\rangle\ge0,\quad\forall y\in\Omega.\tag{3.16}$$

    Next we demonstrate the conclusion of the theorem. To this end, we divide the remainder of the proof into several aspects.

    Aspect 1. We assert that $\{x_n\}$ is bounded. Indeed, for $x^*\in\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$ we have $Sx^*=x^*$, $Gx^*=x^*$ and $P_C(x^*-\lambda Ax^*)=x^*$. We observe that

    $$\|z_n-x^*\|^2=\|P_{C_n}(w_n)-x^*\|^2\le\|w_n-x^*\|^2-\|w_n-P_{C_n}(w_n)\|^2=\|w_n-x^*\|^2-\mathrm{dist}^2(w_n,C_n),\tag{3.17}$$

    which hence leads to

    $$\|z_n-x^*\|\le\|w_n-x^*\|,\quad\forall n\ge1.\tag{3.18}$$

    Using Lemma 2.2, one knows that $G=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)$ is nonexpansive for $\mu_1\in(0,2\alpha)$ and $\mu_2\in(0,2\beta)$. Thus, by the definition of $w_n$, one gets

    $$\|w_n-x^*\|\le\sigma_n\|x_n-x^*\|+(1-\sigma_n)\|Gw_n-x^*\|\le\sigma_n\|x_n-x^*\|+(1-\sigma_n)\|w_n-x^*\|,$$

    which immediately yields

    $$\|w_n-x^*\|\le\|x_n-x^*\|,\quad\forall n\ge1.$$

    This, together with (3.18), yields

    $$\|z_n-x^*\|\le\|w_n-x^*\|\le\|x_n-x^*\|,\quad\forall n\ge1.\tag{3.19}$$

    Thus, using (3.19), from Lemma 2.10 we obtain

    $$\begin{aligned}\|x_{n+1}-x^*\|&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\|\alpha_nf(x_n)+(I-\alpha_n\mu F)S^nz_n-x^*\|\\&=\beta_n\|x_n-x^*\|+(1-\beta_n)\|\alpha_n(f(x_n)-f(x^*))+(I-\alpha_n\mu F)S^nz_n-(I-\alpha_n\mu F)x^*+\alpha_n(f-\mu F)x^*\|\\&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|+(1-\alpha_n\tau)(1+\theta_n)\|z_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\}\\&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\{[\alpha_n\delta+(1-\alpha_n\tau)+\theta_n]\|x_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\}\\&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\Big\{\Big[1-\alpha_n(\tau-\delta)+\frac{\alpha_n(\tau-\delta)}{2}\Big]\|x_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\Big\}\\&=\beta_n\|x_n-x^*\|+(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau-\delta)}{2}\Big]\|x_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\Big\}\\&=\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|+\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\cdot\frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\\&\le\max\Big\{\|x_n-x^*\|,\ \frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\Big\}.\end{aligned}$$

    By induction, we get

    $$\|x_n-x^*\|\le\max\Big\{\|x_1-x^*\|,\ \frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\Big\},\quad\forall n\ge1.$$

    Thus, $\{x_n\}$ is bounded, and so are the sequences $\{w_n\},\{y_n\},\{z_n\},\{f(x_n)\},\{At_n\},\{Gw_n\},\{S^nz_n\}$.

    Thus, {xn} is bounded, and so are the sequences {wn},{yn},{zn},{f(xn)},{Atn},{Gwn},{Snzn}.

    Aspect 2. We assert that

    $$(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|q_n-P_C(q_n)\|^2\Big\}+\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1,$$

    for some $M_1>0$. In fact, noticing $z_n=P_{C_n}(w_n)$ and $w_n=\sigma_nx_n+(1-\sigma_n)u_n$, we obtain

    $$\|z_n-x^*\|^2\le\|w_n-x^*\|^2-\|w_n-z_n\|^2\le\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2-\|w_n-z_n\|^2.$$

    Since $x_{n+1}=\beta_nx_n+(1-\beta_n)P_C(q_n)$, where $q_n=\alpha_nf(x_n)+(I-\alpha_n\mu F)S^nz_n$, by Lemma 2.10 and the convexity of the function $h(s)=s^2$, $s\in\mathbf{R}$, we deduce that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&=\beta_n\|x_n-x^*\|^2+(1-\beta_n)\|P_C(q_n)-x^*\|^2-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\|q_n-x^*\|^2-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&=\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\|\alpha_n(f(x_n)-f(x^*))+(I-\alpha_n\mu F)S^nz_n-(I-\alpha_n\mu F)x^*+\alpha_n(f-\mu F)x^*\|^2-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\|\alpha_n(f(x_n)-f(x^*))+(I-\alpha_n\mu F)S^nz_n-(I-\alpha_n\mu F)x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{[\alpha_n\delta\|x_n-x^*\|+(1-\alpha_n\tau)(1+\theta_n)\|z_n-x^*\|]^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{[\alpha_n\delta\|x_n-x^*\|+[(1-\alpha_n\tau)+\theta_n]\|z_n-x^*\|]^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n]\|z_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2-\|w_n-z_n\|^2]+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle-\|q_n-P_C(q_n)\|^2\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\end{aligned}\tag{3.20}$$

    (due to $\alpha_n\delta+(1-\alpha_n\tau)+\theta_n\le1-\alpha_n(\tau-\delta)+\frac{\alpha_n(\tau-\delta)}{2}=1-\frac{\alpha_n(\tau-\delta)}{2}\le1$), which, together with $u_n=Gw_n$ and (3.19), ensures that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\\&\quad+\Big[(1-\alpha_n\tau)+\frac{\alpha_n(\tau-\delta)}{2}\Big][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|x_n-x^*\|^2-\|w_n-z_n\|^2]-\|q_n-P_C(q_n)\|^2\Big\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|^2-(1-\beta_n)\Big\{\|q_n-P_C(q_n)\|^2+\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2\Big\}\\&\quad-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2+2\alpha_n(1-\beta_n)\langle(f-\mu F)x^*,\,q_n-x^*\rangle\\&\le\|x_n-x^*\|^2-(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|q_n-P_C(q_n)\|^2\Big\}-\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2+\alpha_nM_1,\end{aligned}\tag{3.21}$$

    where $\sup_{n\ge1}2\|(f-\mu F)x^*\|\|q_n-x^*\|\le M_1$ for some $M_1>0$. This attains the desired assertion.

    Aspect 3. We assert that

    $$(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1.$$

    In fact, we claim that for some $\bar L>0$,

    $$\|z_n-x^*\|^2\le\|w_n-x^*\|^2-\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2.\tag{3.22}$$

    Noticing the boundedness of $\{At_n\}$, one knows that there exists $\bar L>0$ s.t. $\|At_n\|\le\bar L$ for all $n\ge1$. This implies that

    $$|h_n(u)-h_n(v)|=|\langle At_n,\,u-v\rangle|\le\|At_n\|\|u-v\|\le\bar L\|u-v\|,\quad\forall u,v\in C,$$

    which hence guarantees that $h_n(\cdot)$ is $\bar L$-Lipschitz continuous on $C$. By Lemmas 2.7 and 3.3, we have

    $$\mathrm{dist}(w_n,C_n)\ge\frac{1}{\bar L}h_n(w_n)=\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2.\tag{3.23}$$

    Combining (3.17) and (3.23) yields (3.22). From (3.20), (3.19) and (3.22) it follows that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n]\|z_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n]\Big[\|w_n-x^*\|^2-\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\Big]+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\Big\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+\Big[(1-\alpha_n\tau)+\frac{\alpha_n(\tau-\delta)}{2}\Big]\Big[\|x_n-x^*\|^2-\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\Big]+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\Big\}\\&=\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|^2-(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2+2\alpha_n(1-\beta_n)\langle(f-\mu F)x^*,\,q_n-x^*\rangle\\&\le\|x_n-x^*\|^2-(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2+\alpha_nM_1.\end{aligned}$$

    This hence leads to

    $$(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1.$$

    Aspect 4. We assert that

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,\,q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big]\tag{3.24}$$

    for some $M>0$. In fact, from Lemma 2.10 and (3.19), one obtains

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\|q_n-x^*\|^2\\&=\beta_n\|x_n-x^*\|^2+(1-\beta_n)\|\alpha_n(f(x_n)-f(x^*))+\alpha_n(f-\mu F)x^*+(I-\alpha_n\mu F)S^nz_n-(I-\alpha_n\mu F)x^*\|^2\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\|\alpha_n(f(x_n)-f(x^*))+(I-\alpha_n\mu F)S^nz_n-(I-\alpha_n\mu F)x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{[\alpha_n\delta\|x_n-x^*\|+(1-\alpha_n\tau)(1+\theta_n)\|z_n-x^*\|]^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{[\alpha_n\delta\|x_n-x^*\|+(1-\alpha_n\tau+\theta_n)\|z_n-x^*\|]^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+(1-\alpha_n\tau+\theta_n)\|z_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+(1-\beta_n)\{\theta_n\|x_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,\,q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big],\end{aligned}$$

    where $\sup_{n\ge1}\|x_n-x^*\|^2\le M$ for some $M>0$.

    Aspect 5. We assert that $x_n\to x^*\in\Omega$, which is the unique solution of the HVI (3.16).

    In fact, from (3.24), we have

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,\,q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big].\tag{3.25}$$

    Setting $\Lambda_n=\|x_n-x^*\|^2$, we demonstrate the convergence of $\{\Lambda_n\}$ to zero by considering the following two situations.

    Situation 1. There exists an integer $n_0\ge1$ s.t. $\{\Lambda_n\}$ is nonincreasing. It is clear that the limit $\lim_{n\to\infty}\Lambda_n=k<+\infty$ exists and $\lim_{n\to\infty}(\Lambda_n-\Lambda_{n+1})=0$. From Aspect 2 and $\{\beta_n\}\subset[a,b]\subset(0,1)$ we obtain

    $$\begin{aligned}(1-b)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|q_n-P_C(q_n)\|^2\Big\}+a(1-b)\|x_n-P_C(q_n)\|^2&\le(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|q_n-P_C(q_n)\|^2\Big\}+\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\\&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1.\end{aligned}$$

    Thanks to the facts that $\alpha_n\to0$ and $\Lambda_n-\Lambda_{n+1}\to0$, from $\frac{\tau+\delta}{2}\in(0,1)$ one deduces that

    $$\lim_{n\to\infty}\|w_n-z_n\|=\lim_{n\to\infty}\|q_n-P_C(q_n)\|=\lim_{n\to\infty}\|x_n-P_C(q_n)\|=0.\tag{3.26}$$

    Hence it is readily known that

    $$\|x_n-q_n\|\le\|x_n-P_C(q_n)\|+\|P_C(q_n)-q_n\|\to0\quad(n\to\infty),$$
    $$\|x_{n+1}-x_n\|=(1-\beta_n)\|P_C(q_n)-x_n\|\le\|x_n-P_C(q_n)\|\to0\quad(n\to\infty),$$

    and

    $$\|S^nz_n-x_n\|=\|q_n-x_n-\alpha_n(f(x_n)-\mu FS^nz_n)\|\le\|q_n-x_n\|+\alpha_n(\|f(x_n)\|+\mu\|FS^nz_n\|)\to0\quad(n\to\infty).$$

    Next, we show that $\|x_n-u_n\|\to0$ as $n\to\infty$. Indeed, note that $y^*=T^{\Theta_2}_{\mu_2}(x^*-\mu_2B_2x^*)$, $v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n)$ and $u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n)$. Then $u_n=Gw_n$. By Lemma 2.1 we have

    $$\|v_n-y^*\|^2\le\|w_n-x^*\|^2-\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2$$

    and

    $$\|u_n-x^*\|^2\le\|v_n-y^*\|^2-\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2.$$

    Combining the last two inequalities, from (3.19) we obtain

    $$\|u_n-x^*\|^2\le\|x_n-x^*\|^2-\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2-\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2.$$

    This, together with (3.20), implies that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2]+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+\Big[(1-\alpha_n\tau)+\frac{\alpha_n(\tau-\delta)}{2}\Big][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)(\|x_n-x^*\|^2-\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2-\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2)]+\alpha_nM_1\Big\}\\&\le\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2+\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2\}+\alpha_nM_1\\&\le\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2+\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2\}+\alpha_nM_1,\end{aligned}$$

    which immediately arrives at

    $$(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2+\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2\}\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1.$$

    Since $\beta_n\le b<1$, $\mu_1\in(0,2\alpha)$, $\mu_2\in(0,2\beta)$, $\lim_{n\to\infty}\alpha_n=0$ and $\limsup_{n\to\infty}\sigma_n<1$, we obtain from $\Lambda_n-\Lambda_{n+1}\to0$ that

    $$\lim_{n\to\infty}\|B_2w_n-B_2x^*\|=0\quad\text{and}\quad\lim_{n\to\infty}\|B_1v_n-B_1y^*\|=0.\tag{3.27}$$

    On the other hand, by Lemma 1.1 one has

    $$\begin{aligned}\|u_n-x^*\|^2&\le\langle v_n-y^*,\,u_n-x^*\rangle+\mu_1\langle B_1y^*-B_1v_n,\,u_n-x^*\rangle\\&\le\frac{1}{2}[\|v_n-y^*\|^2+\|u_n-x^*\|^2-\|v_n-u_n+x^*-y^*\|^2]+\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|,\end{aligned}$$

    which hence leads to

    $$\|u_n-x^*\|^2\le\|v_n-y^*\|^2-\|v_n-u_n+x^*-y^*\|^2+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|.$$

    Similarly, one gets

    $$\|v_n-y^*\|^2\le\|w_n-x^*\|^2-\|w_n-v_n+y^*-x^*\|^2+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|.$$

    Combining the last two inequalities, from (3.19) we get

    $$\|u_n-x^*\|^2\le\|x_n-x^*\|^2-\|w_n-v_n+y^*-x^*\|^2-\|v_n-u_n+x^*-y^*\|^2+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|.$$

    This, together with (3.20), ensures that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2]+2\alpha_n\langle(f-\mu F)x^*,\,q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+\Big[(1-\alpha_n\tau)+\frac{\alpha_n(\tau-\delta)}{2}\Big][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)(\|x_n-x^*\|^2-\|w_n-v_n+y^*-x^*\|^2-\|v_n-u_n+x^*-y^*\|^2\\&\quad+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|)]+\alpha_nM_1\Big\}\\&\le\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\|w_n-v_n+y^*-x^*\|^2+\|v_n-u_n+x^*-y^*\|^2\}\\&\quad+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|+\alpha_nM_1\\&\le\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\|w_n-v_n+y^*-x^*\|^2+\|v_n-u_n+x^*-y^*\|^2\}\\&\quad+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|+\alpha_nM_1,\end{aligned}$$

    which immediately leads to

    $$\begin{aligned}(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\|w_n-v_n+y^*-x^*\|^2+\|v_n-u_n+x^*-y^*\|^2\}&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|+\alpha_nM_1\\&=\Lambda_n-\Lambda_{n+1}+2\mu_1\|B_1y^*-B_1v_n\|\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\|v_n-y^*\|+\alpha_nM_1.\end{aligned}$$

    Since $\beta_n\le b<1$, $\lim_{n\to\infty}\alpha_n=0$, $\limsup_{n\to\infty}\sigma_n<1$ and $\lim_{n\to\infty}(\Lambda_n-\Lambda_{n+1})=0$, we deduce from (3.27) and the boundedness of $\{u_n\},\{v_n\}$ that

    $$\lim_{n\to\infty}\|w_n-v_n+y^*-x^*\|=0\quad\text{and}\quad\lim_{n\to\infty}\|v_n-u_n+x^*-y^*\|=0.$$

    Therefore,

    $$\|w_n-Gw_n\|=\|w_n-u_n\|\le\|w_n-v_n+y^*-x^*\|+\|v_n-u_n+x^*-y^*\|\to0\quad(n\to\infty).\tag{3.28}$$

    Noticing wn=σnxn+(1σn)un, we get

    wnx2=σnxnx,wnx+(1σn)unx,wnxσnxnx,wnx+(1σn)wnx2,

    which immediately yields

    wnx2xnx,wnx12[xnx2+wnx2xnwn2].

    So it follows that

    wnx2xnx2xnwn2,

    which together with (3.20), leads to

    xn+1x2βnxnx2+(1βn){αnδxnx2+[(1αnτ)+θn]×[σnxnx2+(1σn)wnx2]+2αn(fμF)x,qnx}βnxnx2+(1βn){αnδxnx2+[(1αnτ)+αn(τδ)2]×[σnxnx2+(1σn)(xnx2xnwn2)]+αnM1}[1αn(1βn)(τδ)2]xnx2(1βn)(1σn)[1αn(τ+δ)2]×xnwn2+αnM1xnx2(1βn)(1σn)[1αn(τ+δ)2]xnwn2+αnM1.

    This hence arrives at

    (1βn)(1σn)[1αn(τ+δ)2]xnwn2xnx2xn+1x2+αnM1=ΛnΛn+1+αnM1.

    Since βnb<1,limnαn=0, lim supnσn<1 and lim supn(ΛnΛn+1)=0, we deduce from τ+δ2(0,1) that

    limnxnwn=0.

    So it follows from (3.26) and (3.28) that

    xnznxnwn+wnzn0(n), (3.29)

    and

    $$\|x_n-Gw_n\|\le\|x_n-w_n\|+\|w_n-Gw_n\|\to0\ \ (n\to\infty).\tag{3.30}$$

    Meanwhile, from Aspect 3 we obtain

    $$(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1.$$

    Noticing $\beta_n\le b<1$, $\alpha_n\to0$ and $\Lambda_n-\Lambda_{n+1}\to0$, one gets

    $$\lim_{n\to\infty}\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2=0,$$

    which together with Lemma 3.5, yields

    $$\lim_{n\to\infty}\|w_n-y_n\|=0.\tag{3.31}$$

    By the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_p}\}\subset\{x_n\}$ such that

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\lim_{p\to\infty}\langle(f-\mu F)x^*,x_{n_p}-x^*\rangle.\tag{3.32}$$

    Since $H$ is reflexive and $\{x_n\}$ is bounded, we may assume that $x_{n_p}\rightharpoonup\bar x$. Thus, from (3.32) one has

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\lim_{p\to\infty}\langle(f-\mu F)x^*,x_{n_p}-x^*\rangle=\langle(f-\mu F)x^*,\bar x-x^*\rangle.\tag{3.33}$$

    Since $\|x_n-x_{n+1}\|\to0$, $\|x_n-Gw_n\|\to0$, $\|w_n-y_n\|\to0$, $\|x_n-z_n\|\to0$ and $x_{n_p}\rightharpoonup\bar x$, by Lemma 3.4 we infer that $\bar x\in\Omega$. Thus, using (3.16) and (3.33) one has

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\langle(f-\mu F)x^*,\bar x-x^*\rangle\le0,\tag{3.34}$$

    which together with (3.26), arrives at

    $$\begin{aligned}\limsup_{n\to\infty}\langle(f-\mu F)x^*,q_n-x^*\rangle&=\limsup_{n\to\infty}\big[\langle(f-\mu F)x^*,q_n-P_C(q_n)+P_C(q_n)-x_n\rangle+\langle(f-\mu F)x^*,x_n-x^*\rangle\big]\\&\le\limsup_{n\to\infty}\big[\|(f-\mu F)x^*\|\,(\|q_n-P_C(q_n)\|+\|P_C(q_n)-x_n\|)+\langle(f-\mu F)x^*,x_n-x^*\rangle\big]\le0.\end{aligned}\tag{3.35}$$

    Note that $\{\alpha_n(1-\beta_n)(\tau-\delta)\}\subset[0,1]$, $\sum_{n=1}^\infty\alpha_n(1-\beta_n)(\tau-\delta)=\infty$, and

    $$\limsup_{n\to\infty}\Big[\frac{2\langle(f-\mu F)x^*,q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big]\le0.$$

    Consequently, applying Lemma 2.5 to (3.25), one has $\lim_{n\to\infty}\|x_n-x^*\|^2=0$.

    Situation 2. There exists a subsequence $\{\Lambda_{n_p}\}\subset\{\Lambda_n\}$ such that $\Lambda_{n_p}<\Lambda_{n_p+1}$ for all $p\in\mathcal{N}$, where $\mathcal{N}$ denotes the set of all natural numbers. Let $\phi:\mathcal{N}\to\mathcal{N}$ be formulated as

    $$\phi(n):=\max\{p\le n:\Lambda_p<\Lambda_{p+1}\}.$$

    Using Lemma 2.9, we have

    $$\Lambda_{\phi(n)}\le\Lambda_{\phi(n)+1}\quad\text{and}\quad\Lambda_n\le\Lambda_{\phi(n)+1}.$$

    By Aspect 2 one gets

    $$\begin{aligned}&(1-b)\Big\{\Big[1-\frac{\alpha_{\phi(n)}(\tau+\delta)}{2}\Big]\|w_{\phi(n)}-z_{\phi(n)}\|^2+\|q_{\phi(n)}-P_C(q_{\phi(n)})\|^2\Big\}+a(1-b)\|x_{\phi(n)}-P_C(q_{\phi(n)})\|^2\\&\le(1-\beta_{\phi(n)})\Big\{\Big[1-\frac{\alpha_{\phi(n)}(\tau+\delta)}{2}\Big]\|w_{\phi(n)}-z_{\phi(n)}\|^2+\|q_{\phi(n)}-P_C(q_{\phi(n)})\|^2\Big\}+\beta_{\phi(n)}(1-\beta_{\phi(n)})\|x_{\phi(n)}-P_C(q_{\phi(n)})\|^2\\&\le\|x_{\phi(n)}-x^*\|^2-\|x_{\phi(n)+1}-x^*\|^2+\alpha_{\phi(n)}M_1=\Lambda_{\phi(n)}-\Lambda_{\phi(n)+1}+\alpha_{\phi(n)}M_1,\end{aligned}\tag{3.36}$$

    which immediately ensures that

    $$\lim_{n\to\infty}\|w_{\phi(n)}-z_{\phi(n)}\|=\lim_{n\to\infty}\|q_{\phi(n)}-P_C(q_{\phi(n)})\|=\lim_{n\to\infty}\|x_{\phi(n)}-P_C(q_{\phi(n)})\|=0.$$

    By Aspect 3 we have

    $$(1-\beta_{\phi(n)})\Big[1-\frac{\alpha_{\phi(n)}(\tau+\delta)}{2}\Big]\Big[\frac{\tau_{\phi(n)}}{2\lambda\bar L}\|R_\lambda(w_{\phi(n)})\|^2\Big]^2\le\|x_{\phi(n)}-x^*\|^2-\|x_{\phi(n)+1}-x^*\|^2+\alpha_{\phi(n)}M_1=\Lambda_{\phi(n)}-\Lambda_{\phi(n)+1}+\alpha_{\phi(n)}M_1,$$

    which hence leads to

    $$\lim_{n\to\infty}\Big[\frac{\tau_{\phi(n)}}{2\lambda\bar L}\|R_\lambda(w_{\phi(n)})\|^2\Big]^2=0.$$

    Using arguments similar to those in Situation 1, we infer that $\lim_{n\to\infty}\|x_{\phi(n)+1}-x_{\phi(n)}\|=0$,

    $$\lim_{n\to\infty}\|x_{\phi(n)}-Gw_{\phi(n)}\|=\lim_{n\to\infty}\|w_{\phi(n)}-y_{\phi(n)}\|=\lim_{n\to\infty}\|x_{\phi(n)}-z_{\phi(n)}\|=0,$$

    and

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,q_{\phi(n)}-x^*\rangle\le0.\tag{3.37}$$

    On the other hand, from (3.25) we obtain

    $$\begin{aligned}\alpha_{\phi(n)}(1-\beta_{\phi(n)})(\tau-\delta)\Lambda_{\phi(n)}&\le\Lambda_{\phi(n)}-\Lambda_{\phi(n)+1}+\alpha_{\phi(n)}(1-\beta_{\phi(n)})(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,q_{\phi(n)}-x^*\rangle}{\tau-\delta}+\frac{\theta_{\phi(n)}}{\alpha_{\phi(n)}}\cdot\frac{M}{\tau-\delta}\Big]\\&\le\alpha_{\phi(n)}(1-\beta_{\phi(n)})(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,q_{\phi(n)}-x^*\rangle}{\tau-\delta}+\frac{\theta_{\phi(n)}}{\alpha_{\phi(n)}}\cdot\frac{M}{\tau-\delta}\Big],\end{aligned}$$

    which immediately yields

    $$\limsup_{n\to\infty}\Lambda_{\phi(n)}\le\limsup_{n\to\infty}\Big[\frac{2\langle(f-\mu F)x^*,q_{\phi(n)}-x^*\rangle}{\tau-\delta}+\frac{\theta_{\phi(n)}}{\alpha_{\phi(n)}}\cdot\frac{M}{\tau-\delta}\Big]\le0.$$

    Thus, $\lim_{n\to\infty}\|x_{\phi(n)}-x^*\|^2=0$. In addition, observe that

    $$\begin{aligned}\|x_{\phi(n)+1}-x^*\|^2-\|x_{\phi(n)}-x^*\|^2&=2\langle x_{\phi(n)+1}-x_{\phi(n)},x_{\phi(n)}-x^*\rangle+\|x_{\phi(n)+1}-x_{\phi(n)}\|^2\\&\le2\|x_{\phi(n)+1}-x_{\phi(n)}\|\,\|x_{\phi(n)}-x^*\|+\|x_{\phi(n)+1}-x_{\phi(n)}\|^2.\end{aligned}\tag{3.38}$$

    Thanks to $\Lambda_n\le\Lambda_{\phi(n)+1}$, one gets

    $$\|x_n-x^*\|^2\le\|x_{\phi(n)}-x^*\|^2+2\|x_{\phi(n)+1}-x_{\phi(n)}\|\,\|x_{\phi(n)}-x^*\|+\|x_{\phi(n)+1}-x_{\phi(n)}\|^2\to0\ \ (n\to\infty).$$

    That is, $x_n\to x^*$ as $n\to\infty$. This completes the proof.

    Theorem 3.7. If $S:C\to C$ is nonexpansive and $\{x_n\}$ is the sequence constructed in the modified version of Algorithm 3.1, that is, for any initial $x_1\in C$,

    $$\begin{cases}w_n=\sigma_nx_n+(1-\sigma_n)u_n,\\ v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n),\\ u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n),\\ y_n=P_C(w_n-\lambda Aw_n),\\ t_n=(1-\tau_n)w_n+\tau_ny_n,\\ z_n=P_{C_n}(w_n),\\ x_{n+1}=\beta_nx_n+(1-\beta_n)P_C[\alpha_nf(x_n)+(I-\alpha_n\mu F)Sz_n],\end{cases}\quad\forall n\ge1,\tag{3.39}$$

    where for each $n\ge1$, $C_n$ and $\tau_n$ are picked as in Algorithm 3.1, then $x_n\to x^*\in\Omega$, with $x^*\in\Omega$ being the unique solution to the HVI: $\langle(\mu F-f)x^*,y-x^*\rangle\ge0$, $\forall y\in\Omega$.

    Proof. We divide the proof of the theorem into several aspects.

    Aspect 1. We assert the boundedness of {xn}. Indeed, using the same reasonings as in Aspect 1 of the proof of Theorem 3.6, one derives the desired assertion.

    Aspect 2. We assert that

    $$(1-\beta_n)\big\{(1-\alpha_n\tau)\|w_n-z_n\|^2+\|q_n-P_C(q_n)\|^2\big\}+\beta_n(1-\beta_n)\|x_n-P_C(q_n)\|^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1,$$

    for some $M_1>0$. In fact, putting $\theta_n=0$ in (3.20), we get

    xn+1x2βnxnx2+(1βn){αnδxnx2+(1αnτ)[σnxnx2+(1σn)unx2wnzn2]+2αn(fμF)x,qnxqnPC(qn)2}βn(1βn)xnPC(qn)2[1αn(1βn)(τδ)]xnx2(1βn){(1αnτ)wnzn2+qnPC(qn)2}+αnM1βn(1βn)xnPC(qn)2xnx2(1βn){(1αnτ)wnzn2+qnPC(qn)2}+αnM1βn(1βn)xnPC(qn)2,

    where $\sup_{n\ge1}2\|(f-\mu F)x^*\|\,\|q_n-x^*\|\le M_1$ for some $M_1>0$. This proves the desired assertion.

    Aspect 3. We assert that

    $$(1-\beta_n)(1-\alpha_n\tau)\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1.$$

    Indeed, using the same reasonings as in Aspect 3 of the proof of Theorem 3.6, one deduces the desired assertion.

    Aspect 4. We assert that

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\cdot\frac{2\langle(f-\mu F)x^*,q_n-x^*\rangle}{\tau-\delta}.\tag{3.40}$$

    Indeed, using the same reasonings as in Aspect 4 of the proof of Theorem 3.6, one obtains the desired assertion.

    Aspect 5. We assert that $\{x_n\}$ converges strongly to the unique solution $x^*\in\Omega$ of the HVI (3.16). Indeed, using the same reasonings as in Aspect 5 of the proof of Theorem 3.6, one gets the desired assertion.

    Next, we put forth another modification of the Mann-like subgradient-like extragradient implicit rule with linear-search process.

    Algorithm 3.8. Initial step: Given $\nu>0$, $\ell\in(0,1)$, $\lambda\in(0,\frac{1}{\nu})$. Let $x_1\in C$ be arbitrary.

    Iterations: Given the current iterate xn, calculate xn+1 below:

    Step 1. Calculate $w_n=\sigma_nx_n+(1-\sigma_n)u_n$ with

    $$v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n),\qquad u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n).$$

    Step 2. Calculate $y_n=P_C(w_n-\lambda Aw_n)$ and $R_\lambda(w_n):=w_n-y_n$.

    Step 3. Calculate $t_n=w_n-\tau_nR_\lambda(w_n)$, where $\tau_n:=\ell^{j_n}$ and $j_n$ is the smallest nonnegative integer $j$ satisfying

    $$\langle Aw_n-A(w_n-\ell^jR_\lambda(w_n)),w_n-y_n\rangle\le\frac{\nu}{2}\|R_\lambda(w_n)\|^2.$$

    Step 4. Compute $z_n=P_{C_n}(w_n)$ and $x_{n+1}=\beta_nw_n+(1-\beta_n)P_C[\alpha_nf(z_n)+(I-\alpha_n\mu F)S^nz_n]$, where $C_n:=\{u\in C:h_n(u)\le0\}$ and

    $$h_n(u)=\langle At_n,u-w_n\rangle+\frac{\tau_n}{2\lambda}\|R_\lambda(w_n)\|^2.$$

    Again set $n:=n+1$ and return to Step 1.
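    The backtracking rule in Step 3 can be sketched in code. This is our own illustrative sketch on $H=\mathbb{R}$ (the names `armijo_tau` and `A` are ours, not the paper's), writing the acceptance test exactly as in Step 3 with the line-search ratio $\ell$ and parameter $\nu$ from the Initial step.

    ```python
    import math

    def armijo_tau(A, w, y, nu=1.0, ell=0.5, max_halvings=60):
        """Return tau_n = ell**j_n, with j_n the smallest nonnegative j
        accepted by the Step 3 test on the real line."""
        R = w - y          # R_lambda(w_n) = w_n - y_n
        tau = 1.0          # start from j = 0
        for _ in range(max_halvings):
            # accept once <A(w) - A(w - tau*R), w - y> <= (nu/2)*||R||^2
            if (A(w) - A(w - tau * R)) * R <= (nu / 2.0) * R * R:
                return tau
            tau *= ell
        return tau

    # For an L-Lipschitz A the test holds whenever tau <= nu/(2L), so the
    # loop always terminates; e.g. A = sin has L = 1.
    tau = armijo_tau(math.sin, 1.0, 0.2)
    ```

    The finite termination is exactly the reason uniform continuity (or Lipschitz continuity) of $A$ suffices for the linear-search process to be well defined.
    
    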

    It is worth mentioning that (3.16)–(3.19) and Lemmas 3.2–3.5 remain true for Algorithm 3.8.

    Theorem 3.9. Suppose that $\{x_n\}$ is the sequence constructed in Algorithm 3.8. Then $x_n\to x^*\in\Omega$ provided $\|S^nx_n-S^{n+1}x_n\|\to0$, with $x^*\in\Omega$ being the unique solution to the HVI: $\langle(\mu F-f)x^*,y-x^*\rangle\ge0$, $\forall y\in\Omega$.

    Proof. In what follows, under the assumption $\|S^nx_n-S^{n+1}x_n\|\to0$, we divide the proof into several aspects.

    Aspect 1. We assert that $\{x_n\}$ is bounded. Indeed, for $x^*\in\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$ we have $Sx^*=x^*$, $Gx^*=x^*$ and $P_C(x^*-\lambda Ax^*)=x^*$. Using (3.19), from Lemma 2.10 we obtain

    $$\begin{aligned}\|x_{n+1}-x^*\|&\le\beta_n\|w_n-x^*\|+(1-\beta_n)\{\alpha_n\delta\|z_n-x^*\|+(1-\alpha_n\tau)(1+\theta_n)\|z_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\}\\&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\{[\alpha_n\delta+(1-\alpha_n\tau)+\theta_n]\|x_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\}\\&\le\beta_n\|x_n-x^*\|+(1-\beta_n)\Big\{\Big[1-\alpha_n(\tau-\delta)+\frac{\alpha_n(\tau-\delta)}{2}\Big]\|x_n-x^*\|+\alpha_n\|(f-\mu F)x^*\|\Big\}\\&=\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|+\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\cdot\frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\\&\le\max\Big\{\|x_n-x^*\|,\frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\Big\}.\end{aligned}$$

    By induction, we get $\|x_n-x^*\|\le\max\{\|x_1-x^*\|,\frac{2\|(f-\mu F)x^*\|}{\tau-\delta}\}$, $\forall n\ge1$. Thus, $\{x_n\}$ is bounded, and so are the sequences $\{w_n\}$, $\{y_n\}$, $\{z_n\}$, $\{f(z_n)\}$, $\{At_n\}$, $\{Gw_n\}$, $\{S^nz_n\}$.

    Aspect 2. We assert that

    $$(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|\bar q_n-P_C(\bar q_n)\|^2\Big\}+\beta_n(1-\beta_n)\|w_n-P_C(\bar q_n)\|^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1,$$

    for some M1>0. In fact, it is clear that

    $$\|z_n-x^*\|^2\le\|w_n-x^*\|^2-\|w_n-z_n\|^2\le\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2-\|w_n-z_n\|^2.$$

    Since $x_{n+1}=\beta_nw_n+(1-\beta_n)P_C(\bar q_n)$, where $\bar q_n=\alpha_nf(z_n)+(I-\alpha_n\mu F)S^nz_n$, using Lemma 2.10 and the convexity of the function $h(s)=s^2$, $s\in\mathbb{R}$, from (3.19) we obtain that

    xn+1x2=βnwnx2+(1βn)PC(ˉqn)x2βn(1βn)wnPC(ˉqn)2βnwnx2+(1βn){ˉqnx2ˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2=βnwnx2+(1βn){αn(f(zn)f(x))+(IαnμF)Snzn(IαnμF)x+αn(fμF)x2ˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2βnwnx2+(1βn){αn(f(zn)f(x))+(IαnμF)Snzn(IαnμF)x2+2αn(fμF)x,ˉqnxˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2βnwnx2+(1βn){[αnf(zn)f(x)+(IαnμF)Snzn(IαnμF)x]2+2αn(fμF)x,ˉqnxˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2βnxnx2+(1βn){αnδxnx2+[(1αnτ)+θn]znx2+2αn(fμF)x,ˉqnxˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2βnxnx2+(1βn){αnδxnx2+[(1αnτ)+θn][σnxnx2+(1σn)unx2wnzn2]+2αn(fμF)x,ˉqnxˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2 (3.41)

    (due to $\alpha_n\delta+(1-\alpha_n\tau)+\theta_n\le1-\alpha_n(\tau-\delta)+\frac{\alpha_n(\tau-\delta)}{2}=1-\frac{\alpha_n(\tau-\delta)}{2}\le1$), which together with $u_n=Gw_n$, guarantees that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+\Big[(1-\alpha_n\tau)+\frac{\alpha_n(\tau-\delta)}{2}\Big]\big[\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|x_n-x^*\|^2-\|w_n-z_n\|^2\big]\\&\quad+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle-\|\bar q_n-P_C(\bar q_n)\|^2\Big\}-\beta_n(1-\beta_n)\|w_n-P_C(\bar q_n)\|^2\\&\le\|x_n-x^*\|^2-(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|\bar q_n-P_C(\bar q_n)\|^2\Big\}-\beta_n(1-\beta_n)\|w_n-P_C(\bar q_n)\|^2+\alpha_nM_1,\end{aligned}\tag{3.42}$$

    where $\sup_{n\ge1}2\|(f-\mu F)x^*\|\,\|\bar q_n-x^*\|\le M_1$ for some $M_1>0$. This proves the desired assertion.

    Aspect 3. We assert that

    $$(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1.$$

    In fact, using arguments similar to those for (3.22) in the proof of Theorem 3.6, we can deduce that for some $\bar L>0$,

    $$\|z_n-x^*\|^2\le\|w_n-x^*\|^2-\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2.\tag{3.43}$$

    From (3.41), (3.19) and (3.43) it follows that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n]\|z_n-x^*\|^2+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\}\\&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\Big\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n]\Big[\|w_n-x^*\|^2-\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\Big]+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\Big\}\\&\le\Big[1-\frac{\alpha_n(1-\beta_n)(\tau-\delta)}{2}\Big]\|x_n-x^*\|^2-(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2+2\alpha_n(1-\beta_n)\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\\&\le\|x_n-x^*\|^2-(1-\beta_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2+\alpha_nM_1,\end{aligned}$$

    which hence yields the desired assertion.

    Aspect 4. We assert that

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,\bar q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big]\tag{3.44}$$

    for some M>0. In fact, from Lemma 2.10 and (3.19), one obtains

    xn+1x2βnwnx2+(1βn)ˉqnx2=βnwnx2+(1βn)αn(f(zn)f(x))+(IαnμF)Snzn(IαnμF)x+αn(fμF)x2βnwnx2+(1βn){αn(f(zn)f(x))+(IαnμF)Snzn(IαnμF)x2+2αn(fμF)x,ˉqnx}βnwnx2+(1βn){[αnδznx+(1αnτ)(1+θn)znx]2+2αn(fμF)x,ˉqnx}βnxnx2+(1βn){αnδxnx2+(1αnτ+θn)znx2+2αn(fμF)x,ˉqnx}[1αn(1βn)(τδ)]xnx2+αn(1βn)(τδ)×[2(fμF)x,ˉqnxτδ+θnαnMτδ],

    where $\sup_{n\ge1}\|x_n-x^*\|^2\le M$ for some $M>0$.

    Aspect 5. We assert that $x_n\to x^*\in\Omega$, which is the unique solution of the HVI (3.16).

    In fact, from (3.44), we have

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\Big[\frac{2\langle(f-\mu F)x^*,\bar q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big].\tag{3.45}$$

    Setting $\Lambda_n=\|x_n-x^*\|^2$, we demonstrate the convergence of $\{\Lambda_n\}$ to zero in the following two situations.

    Situation 1. There exists an integer $n_0\ge1$ such that $\{\Lambda_n\}$ is nonincreasing. It is clear that $\lim_{n\to\infty}\Lambda_n=k<+\infty$ and $\lim_{n\to\infty}(\Lambda_n-\Lambda_{n+1})=0$. From Aspect 2 and $\{\beta_n\}\subset[a,b]\subset(0,1)$ we obtain

    $$\begin{aligned}&(1-b)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|\bar q_n-P_C(\bar q_n)\|^2\Big\}+a(1-b)\|w_n-P_C(\bar q_n)\|^2\\&\le(1-\beta_n)\Big\{\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|w_n-z_n\|^2+\|\bar q_n-P_C(\bar q_n)\|^2\Big\}+\beta_n(1-\beta_n)\|w_n-P_C(\bar q_n)\|^2\\&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1.\end{aligned}$$

    Owing to the facts that $\alpha_n\to0$ and $\Lambda_n-\Lambda_{n+1}\to0$, from $\frac{\tau+\delta}{2}\in(0,1)$ one deduces that

    $$\lim_{n\to\infty}\|w_n-z_n\|=\lim_{n\to\infty}\|\bar q_n-P_C(\bar q_n)\|=\lim_{n\to\infty}\|w_n-P_C(\bar q_n)\|=0.\tag{3.46}$$

    Hence it is readily known that

    $$\|w_n-\bar q_n\|\le\|w_n-P_C(\bar q_n)\|+\|P_C(\bar q_n)-\bar q_n\|\to0\ \ (n\to\infty),\qquad\|x_{n+1}-w_n\|=(1-\beta_n)\|P_C(\bar q_n)-w_n\|\le\|\bar q_n-w_n\|\to0\ \ (n\to\infty),\tag{3.47}$$

    and

    $$\|S^nz_n-w_n\|=\|\bar q_n-w_n-\alpha_n(f(z_n)-\mu FS^nz_n)\|\le\|\bar q_n-w_n\|+\alpha_n(\|f(z_n)\|+\|\mu FS^nz_n\|)\to0\ \ (n\to\infty).$$

    Next, we show that $\|x_n-u_n\|\to0$ and $\|x_n-x_{n+1}\|\to0$ as $n\to\infty$. Indeed, note that $y^*=T^{\Theta_2}_{\mu_2}(x^*-\mu_2B_2x^*)$, $v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n)$ and $u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n)$. Then $u_n=Gw_n$. Using Lemma 2.1, from (3.19) we have

    $$\|u_n-x^*\|^2\le\|x_n-x^*\|^2-\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2-\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2.$$

    This together with (3.41), implies that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2]+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\}\\&\le\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2+\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2\}+\alpha_nM_1,\end{aligned}$$

    which immediately arrives at

    $$\begin{aligned}&(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\mu_2(2\beta-\mu_2)\|B_2w_n-B_2x^*\|^2+\mu_1(2\alpha-\mu_1)\|B_1v_n-B_1y^*\|^2\}\\&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1.\end{aligned}$$

    This hence ensures that

    $$\lim_{n\to\infty}\|B_2w_n-B_2x^*\|=0\quad\text{and}\quad\lim_{n\to\infty}\|B_1v_n-B_1y^*\|=0.$$

    On the other hand, using Lemma 1.1, from (3.19) we get

    $$\|u_n-x^*\|^2\le\|x_n-x^*\|^2-\|w_n-v_n+y^*-x^*\|^2-\|v_n-u_n+x^*-y^*\|^2+2\mu_1\|B_1y^*-B_1v_n\|\,\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\,\|v_n-y^*\|.$$

    This together with (3.41), implies that

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|u_n-x^*\|^2]+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\}\\&\le\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\|w_n-v_n+y^*-x^*\|^2+\|v_n-u_n+x^*-y^*\|^2\}\\&\quad+2\mu_1\|B_1y^*-B_1v_n\|\,\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\,\|v_n-y^*\|+\alpha_nM_1,\end{aligned}$$

    which immediately leads to

    $$\begin{aligned}&(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\{\|w_n-v_n+y^*-x^*\|^2+\|v_n-u_n+x^*-y^*\|^2\}\\&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+2\mu_1\|B_1y^*-B_1v_n\|\,\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\,\|v_n-y^*\|+\alpha_nM_1\\&=\Lambda_n-\Lambda_{n+1}+2\mu_1\|B_1y^*-B_1v_n\|\,\|u_n-x^*\|+2\mu_2\|B_2x^*-B_2w_n\|\,\|v_n-y^*\|+\alpha_nM_1.\end{aligned}$$

    This hence ensures that

    $$\lim_{n\to\infty}\|w_n-v_n+y^*-x^*\|=0\quad\text{and}\quad\lim_{n\to\infty}\|v_n-u_n+x^*-y^*\|=0.$$

    Therefore,

    $$\|w_n-Gw_n\|=\|w_n-u_n\|\le\|w_n-v_n+y^*-x^*\|+\|v_n-u_n+x^*-y^*\|\to0\ \ (n\to\infty).\tag{3.48}$$

    Noticing $w_n=\sigma_nx_n+(1-\sigma_n)u_n$, we get

    $$\begin{aligned}\|w_n-x^*\|^2&=\sigma_n\langle x_n-x^*,w_n-x^*\rangle+(1-\sigma_n)\langle u_n-x^*,w_n-x^*\rangle\\&\le\sigma_n\langle x_n-x^*,w_n-x^*\rangle+(1-\sigma_n)\|w_n-x^*\|^2\\&=\frac{\sigma_n}{2}\big[\|x_n-x^*\|^2+\|w_n-x^*\|^2-\|x_n-w_n\|^2\big]+(1-\sigma_n)\|w_n-x^*\|^2,\end{aligned}$$

    which immediately yields

    $$\|w_n-x^*\|^2\le\|x_n-x^*\|^2-\|x_n-w_n\|^2.$$

    This, together with (3.41), yields

    $$\begin{aligned}\|x_{n+1}-x^*\|^2&\le\beta_n\|x_n-x^*\|^2+(1-\beta_n)\{\alpha_n\delta\|x_n-x^*\|^2+[(1-\alpha_n\tau)+\theta_n][\sigma_n\|x_n-x^*\|^2+(1-\sigma_n)\|w_n-x^*\|^2]+2\alpha_n\langle(f-\mu F)x^*,\bar q_n-x^*\rangle\}\\&\le\|x_n-x^*\|^2-(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|x_n-w_n\|^2+\alpha_nM_1.\end{aligned}$$

    So it follows that

    $$(1-\beta_n)(1-\sigma_n)\Big[1-\frac{\alpha_n(\tau+\delta)}{2}\Big]\|x_n-w_n\|^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1=\Lambda_n-\Lambda_{n+1}+\alpha_nM_1,$$

    which immediately yields

    $$\lim_{n\to\infty}\|x_n-w_n\|=0.$$

    So it follows from (3.46)–(3.48) that

    $$\|x_n-z_n\|\le\|x_n-w_n\|+\|w_n-z_n\|\to0\ \ (n\to\infty),\tag{3.49}$$
    $$\|x_n-x_{n+1}\|\le\|x_n-w_n\|+\|w_n-x_{n+1}\|\to0\ \ (n\to\infty),\tag{3.50}$$

    and

    $$\|x_n-Gw_n\|\le\|x_n-w_n\|+\|w_n-Gw_n\|\to0\ \ (n\to\infty).\tag{3.51}$$

    Also, using arguments similar to those for (3.31) in the proof of Theorem 3.6, we can obtain that

    $$\lim_{n\to\infty}\|w_n-y_n\|=0.\tag{3.52}$$

    By the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_p}\}\subset\{x_n\}$ such that

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\lim_{p\to\infty}\langle(f-\mu F)x^*,x_{n_p}-x^*\rangle.\tag{3.53}$$

    Since $H$ is reflexive and $\{x_n\}$ is bounded, we may assume that $x_{n_p}\rightharpoonup\bar x$. Thus, from (3.53) one has

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\lim_{p\to\infty}\langle(f-\mu F)x^*,x_{n_p}-x^*\rangle=\langle(f-\mu F)x^*,\bar x-x^*\rangle.\tag{3.54}$$

    Since $\|x_n-x_{n+1}\|\to0$, $\|x_n-Gw_n\|\to0$, $\|w_n-y_n\|\to0$, $\|x_n-z_n\|\to0$ and $x_{n_p}\rightharpoonup\bar x$, by Lemma 3.4 we infer that $\bar x\in\Omega$. Thus, using (3.16) and (3.54) one has

    $$\limsup_{n\to\infty}\langle(f-\mu F)x^*,x_n-x^*\rangle=\langle(f-\mu F)x^*,\bar x-x^*\rangle\le0,\tag{3.55}$$

    which together with (3.46), arrives at

    $$\begin{aligned}\limsup_{n\to\infty}\langle(f-\mu F)x^*,\bar q_n-x^*\rangle&=\limsup_{n\to\infty}\big[\langle(f-\mu F)x^*,\bar q_n-P_C(\bar q_n)+P_C(\bar q_n)-w_n+w_n-x_n\rangle+\langle(f-\mu F)x^*,x_n-x^*\rangle\big]\\&\le\limsup_{n\to\infty}\big[\|(f-\mu F)x^*\|\,(\|\bar q_n-P_C(\bar q_n)\|+\|P_C(\bar q_n)-w_n\|+\|w_n-x_n\|)+\langle(f-\mu F)x^*,x_n-x^*\rangle\big]\le0.\end{aligned}\tag{3.56}$$

    Note that $\{\alpha_n(1-\beta_n)(\tau-\delta)\}\subset[0,1]$, $\sum_{n=1}^\infty\alpha_n(1-\beta_n)(\tau-\delta)=\infty$, and

    $$\limsup_{n\to\infty}\Big[\frac{2\langle(f-\mu F)x^*,\bar q_n-x^*\rangle}{\tau-\delta}+\frac{\theta_n}{\alpha_n}\cdot\frac{M}{\tau-\delta}\Big]\le0.$$

    Consequently, applying Lemma 2.5 to (3.45), one has $\lim_{n\to\infty}\|x_n-x^*\|^2=0$.

    Situation 2. There exists a subsequence $\{\Lambda_{n_p}\}\subset\{\Lambda_n\}$ such that $\Lambda_{n_p}<\Lambda_{n_p+1}$ for all $p\in\mathcal{N}$, where $\mathcal{N}$ denotes the set of all natural numbers. Let $\phi:\mathcal{N}\to\mathcal{N}$ be formulated as

    $$\phi(n):=\max\{p\le n:\Lambda_p<\Lambda_{p+1}\}.$$

    Using Lemma 2.9, we get

    $$\Lambda_{\phi(n)}\le\Lambda_{\phi(n)+1}\quad\text{and}\quad\Lambda_n\le\Lambda_{\phi(n)+1}.$$

    In the remainder of the proof, using the same reasonings as in Situation 2 of Aspect 5 in the proof of Theorem 3.6, we obtain the desired assertion.

    Theorem 3.10. If $S:C\to C$ is nonexpansive and $\{x_n\}$ is the sequence constructed in the modified version of Algorithm 3.8, that is, for any initial $x_1\in C$,

    $$\begin{cases}w_n=\sigma_nx_n+(1-\sigma_n)u_n,\\ v_n=T^{\Theta_2}_{\mu_2}(w_n-\mu_2B_2w_n),\\ u_n=T^{\Theta_1}_{\mu_1}(v_n-\mu_1B_1v_n),\\ y_n=P_C(w_n-\lambda Aw_n),\\ t_n=(1-\tau_n)w_n+\tau_ny_n,\\ z_n=P_{C_n}(w_n),\\ x_{n+1}=\beta_nw_n+(1-\beta_n)P_C[\alpha_nf(z_n)+(I-\alpha_n\mu F)Sz_n],\end{cases}\quad\forall n\ge1,\tag{3.57}$$

    where for each $n\ge1$, $C_n$ and $\tau_n$ are picked as in Algorithm 3.8, then $x_n\to x^*\in\Omega$, with $x^*\in\Omega$ being the unique solution to the HVI: $\langle(\mu F-f)x^*,y-x^*\rangle\ge0$, $\forall y\in\Omega$.

    Proof. We divide the proof of the theorem into several aspects.

    Aspect 1. We assert the boundedness of {xn}. Indeed, using the same reasonings as in Aspect 1 of the proof of Theorem 3.9, one derives the desired assertion.

    Aspect 2. We assert that

    $$(1-\beta_n)\big\{(1-\alpha_n\tau)\|w_n-z_n\|^2+\|\bar q_n-P_C(\bar q_n)\|^2\big\}+\beta_n(1-\beta_n)\|w_n-P_C(\bar q_n)\|^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1,$$

    for some $M_1>0$. In fact, putting $\theta_n=0$ in (3.41), we get

    xn+1x2βnxnx2+(1βn){αnδxnx2+(1αnτ)[σnxnx2+(1σn)unx2wnzn2]+2αn(fμF)x,ˉqnxˉqnPC(ˉqn)2}βn(1βn)wnPC(ˉqn)2[1αn(1βn)(τδ)]xnx2(1βn){(1αnτ)wnzn2+ˉqnPC(ˉqn)2}+αnM1βn(1βn)wnPC(ˉqn)2xnx2(1βn){(1αnτ)wnzn2+ˉqnPC(ˉqn)2}+αnM1βn(1βn)wnPC(ˉqn)2,

    where $\sup_{n\ge1}2\|(f-\mu F)x^*\|\,\|\bar q_n-x^*\|\le M_1$ for some $M_1>0$. This proves the desired assertion.

    Aspect 3. We assert that

    $$(1-\beta_n)(1-\alpha_n\tau)\Big[\frac{\tau_n}{2\lambda\bar L}\|R_\lambda(w_n)\|^2\Big]^2\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\alpha_nM_1.$$

    Indeed, using the same reasonings as in Aspect 3 of the proof of Theorem 3.9, one deduces the desired assertion.

    Aspect 4. We assert that

    $$\|x_{n+1}-x^*\|^2\le[1-\alpha_n(1-\beta_n)(\tau-\delta)]\|x_n-x^*\|^2+\alpha_n(1-\beta_n)(\tau-\delta)\cdot\frac{2\langle(f-\mu F)x^*,\bar q_n-x^*\rangle}{\tau-\delta}.\tag{3.58}$$

    Indeed, using the same reasonings as in Aspect 4 of the proof of Theorem 3.9, one obtains the desired assertion.

    Aspect 5. We assert that {xn} converges strongly to the unique solution xΩ of the HVI (3.16). Indeed, using the same reasonings as in Aspect 5 of the proof of Theorem 3.9, one gets the desired assertion.

    Remark 3.11. Compared with the corresponding results in Cai, Shehu and Iyiola [13], Thong and Hieu [17] and Reich et al. [29], our results improve and extend them in the following aspects.

    (ⅰ) The problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)$ (with $G=P_C(I-\mu_1B_1)P_C(I-\mu_2B_2)$) in [13] is extended to our problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$, where $G=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)$ and $S$ is an asymptotically nonexpansive mapping. The modified viscosity implicit rule for finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)$ in [13] is extended to develop our modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$, which are based on the subgradient extragradient rule with linear-search process, the Mann implicit iteration approach, and the hybrid deepest-descent technique.

    (ⅱ) The problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)$ with a quasi-nonexpansive mapping $S$ in [17] is extended to our problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$ with an asymptotically nonexpansive mapping $S$. The inertial subgradient extragradient method with linear-search process for finding an element of $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)$ in [17] is extended to develop our modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$, which are based on the subgradient extragradient rule with linear-search process, the Mann implicit iteration approach, and the hybrid deepest-descent technique.

    (ⅲ) The problem of finding an element of $\mathrm{VI}(C,A)$ with a pseudomonotone, uniformly continuous mapping $A$ is extended to our problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$ with both an asymptotically nonexpansive mapping $S$ and a nonexpansive mapping $G$. The modified projection-type method with linear-search process in [29] is extended to develop our modified Mann-like subgradient-like extragradient implicit rule with linear-search process; e.g., the original projection step $y_n=P_C(x_n-\lambda Ax_n)$ in [29] is developed into the modified Mann-like implicit projection step $w_n=\sigma_nx_n+(1-\sigma_n)Gw_n$ and $y_n=P_C(w_n-\lambda Aw_n)$; meanwhile, the original viscosity step $x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)P_{C_n}(x_n)$ is developed into the composite viscosity iterative step $x_{n+1}=P_C[\alpha_nf(x_n)+(I-\alpha_n\mu F)S^nP_{C_n}(w_n)]$.

    In what follows, we provide an illustrative example to show the feasibility and implementability of the suggested rules. Put $\Theta_1=\Theta_2=0$, $\mu=2$, $\mu_1=\mu_2=\frac13$, $\nu=1$, $\lambda=\ell=\frac12$, $\sigma_n=\beta_n=\frac23$ and $\alpha_n=\frac{1}{3(n+1)}$. We first provide an example of two inverse-strongly monotone mappings $B_1,B_2:C\to H$, a Lipschitz continuous and pseudomonotone mapping $A$ and an asymptotically nonexpansive mapping $S$ with $\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$, where $G:=T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)=P_C(I-\mu_1B_1)P_C(I-\mu_2B_2)$. We set $H=\mathbb{R}$ and use $\langle a,b\rangle=ab$ and $\|\cdot\|=|\cdot|$ to denote its inner product and induced norm, respectively. Moreover, we put $C=[-2,3]$. The starting point $x_1$ is arbitrarily chosen in $C$. Let $f(x)=F(x)=\frac12x$, $\forall x\in C$, with

    $$\delta=\frac12<\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}=1-\sqrt{1-2\Big(2\cdot\frac12-2\cdot\Big(\frac12\Big)^2\Big)}=1.$$

    We let $B_1x=B_2x:=Bx=x-\frac12\sin x$, $\forall x\in C$. Let $A:H\to H$ and $S:C\to C$ be formulated as $Ax:=\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}$ and $Sx:=\frac56\sin x$. We now assert that $B$ is $\frac29$-inverse-strongly monotone. In fact, since $B$ is $\frac12$-strongly monotone and $\frac32$-Lipschitz continuous, we know that $B$ is $\frac29$-inverse-strongly monotone with $\alpha=\beta=\frac29$. Let us assert that $A$ is pseudomonotone and Lipschitz continuous. In fact, for each $a,b\in H$ one has

    $$\|Aa-Ab\|\le\Big|\frac{|b|-|a|}{(1+|b|)(1+|a|)}\Big|+\Big|\frac{|\sin b|-|\sin a|}{(1+|\sin b|)(1+|\sin a|)}\Big|\le\frac{\|a-b\|}{(1+|a|)(1+|b|)}+\frac{\|\sin a-\sin b\|}{(1+|\sin a|)(1+|\sin b|)}\le\|a-b\|+\|\sin a-\sin b\|\le2\|a-b\|.$$

    This means that $A$ is Lipschitz continuous with $L=2$. Next, we assert that $A$ is pseudomonotone. For each $a,b\in H$, it is readily known that

    $$\langle Aa,b-a\rangle=\Big(\frac{1}{1+|\sin a|}-\frac{1}{1+|a|}\Big)(b-a)\ge0\ \Longrightarrow\ \langle Ab,b-a\rangle=\Big(\frac{1}{1+|\sin b|}-\frac{1}{1+|b|}\Big)(b-a)\ge0.$$
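    The two asserted properties of $A$ can be spot-checked numerically. The following is our own sanity check on randomly sampled pairs (it supports, but of course does not prove, the claims); all names here are ours.

    ```python
    import math
    import random

    # A(x) = 1/(1+|sin x|) - 1/(1+|x|), as defined in the example.
    A = lambda x: 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

    random.seed(0)
    lipschitz_ok = True   # check |Aa - Ab| <= 2|a - b|
    pseudo_ok = True      # check <Aa, b-a> > 0  implies  <Ab, b-a> >= 0
    for _ in range(20000):
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        if abs(A(a) - A(b)) > 2.0 * abs(a - b) + 1e-12:
            lipschitz_ok = False
        if A(a) * (b - a) > 0.0 and A(b) * (b - a) < -1e-12:
            pseudo_ok = False
    ```

    The key structural fact behind the implication is that $|\sin x|\le|x|$ forces $Ax\ge0$ for every real $x$, so a strictly positive $\langle Aa,b-a\rangle$ forces $b\ge a$, and then $\langle Ab,b-a\rangle\ge0$ follows.
    
    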

    Moreover, it is easy to check that $S$ is asymptotically nonexpansive with $\theta_n=(\frac56)^n$, $\forall n\ge1$, such that $\|S^{n+1}x_n-S^nx_n\|\to0$ as $n\to\infty$. In fact, note that

    $$\|S^na-S^nb\|\le\frac56\|S^{n-1}a-S^{n-1}b\|\le\cdots\le\Big(\frac56\Big)^n\|a-b\|\le(1+\theta_n)\|a-b\|,$$

    and

    $$\|S^{n+1}x_n-S^nx_n\|\le\Big(\frac56\Big)^{n-1}\|S^2x_n-Sx_n\|=\Big(\frac56\Big)^{n-1}\Big\|\frac56\sin(Sx_n)-\frac56\sin x_n\Big\|\le2\Big(\frac56\Big)^n\to0\ \ (n\to\infty).$$
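    The displayed bound on $\|S^{n+1}x-S^nx\|$ is easy to verify numerically as well; the sketch below (our own, with hypothetical helper names) checks it over a few points of $C$ and a range of $n$.

    ```python
    import math

    # S(x) = (5/6) sin x, as in the example.
    S = lambda x: (5.0 / 6.0) * math.sin(x)

    def S_pow(n, x):
        """n-fold composition S^n(x)."""
        for _ in range(n):
            x = S(x)
        return x

    # Check ||S^{n+1}x - S^n x|| <= 2*(5/6)**n at sample points of C = [-2, 3].
    bound_ok = all(
        abs(S_pow(n + 1, x) - S_pow(n, x)) <= 2.0 * (5.0 / 6.0) ** n + 1e-12
        for n in range(1, 15)
        for x in (-2.0, -0.7, 0.3, 1.5, 3.0)
    )
    ```
    
    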

    It is obvious that $\mathrm{Fix}(S)=\{0\}$ and

    $$\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}=\lim_{n\to\infty}\frac{(5/6)^n}{1/(3(n+1))}=0.$$

    Accordingly, $\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)=\{0\}$. In this case, noticing

    $$G=P_C(I-\mu_1B_1)P_C(I-\mu_2B_2)=\Big[P_C\Big(I-\frac13B\Big)\Big]^2,$$

    we rewrite Algorithm 3.1 as follows:

    $$\begin{cases}w_n=\frac23x_n+\frac13u_n,\\ v_n=P_C\big(I-\frac13B\big)w_n,\\ u_n=P_C\big(I-\frac13B\big)v_n,\\ y_n=P_C\big(w_n-\frac12Aw_n\big),\\ t_n=(1-\tau_n)w_n+\tau_ny_n,\\ z_n=P_{C_n}(w_n),\\ x_{n+1}=\frac23x_n+\frac13P_C\big[\frac{1}{3(n+1)}\cdot\frac12x_n+\big(1-\frac{1}{3(n+1)}\big)S^nz_n\big],\end{cases}\quad\forall n\ge1,\tag{4.1}$$

    where for each $n\ge1$, $C_n$ and $\tau_n$ are chosen as in Algorithm 3.1. Then, by Theorem 3.6, we know that $\{x_n\}$ converges to $0\in\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$.

    In particular, since $Sx:=\frac56\sin x$ is also nonexpansive, we consider the modified version of Algorithm 3.1, that is,

    $$\begin{cases}w_n=\frac23x_n+\frac13u_n,\\ v_n=P_C\big(I-\frac13B\big)w_n,\\ u_n=P_C\big(I-\frac13B\big)v_n,\\ y_n=P_C\big(w_n-\frac12Aw_n\big),\\ t_n=(1-\tau_n)w_n+\tau_ny_n,\\ z_n=P_{C_n}(w_n),\\ x_{n+1}=\frac23x_n+\frac13P_C\big[\frac{1}{3(n+1)}\cdot\frac12x_n+\big(1-\frac{1}{3(n+1)}\big)Sz_n\big],\end{cases}\quad\forall n\ge1,\tag{4.2}$$

    where for each $n\ge1$, $C_n$ and $\tau_n$ are chosen as above. Then, by Theorem 3.7, we know that $\{x_n\}$ converges to $0\in\Omega=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{VI}(C,A)$.
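    Scheme (4.2) can also be run directly on this one-dimensional example. The following is our own runnable sketch (function names are ours), assuming $C=[-2,3]$ so that every projection reduces to interval clipping; the implicit relation $w_n=\sigma_nx_n+(1-\sigma_n)Gw_n$ is resolved by an inner fixed-point loop, which converges because the map is a $(1-\sigma_n)$-contraction.

    ```python
    import math

    LO, HI = -2.0, 3.0          # C = [-2, 3]

    def proj_C(x):
        """Metric projection onto the interval C."""
        return min(max(x, LO), HI)

    B = lambda x: x - 0.5 * math.sin(x)                              # B1 = B2 = B
    A = lambda x: 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))
    S = lambda x: (5.0 / 6.0) * math.sin(x)

    def G(w):
        # G = P_C(I - (1/3)B) composed with itself
        v = proj_C(w - B(w) / 3.0)
        return proj_C(v - B(v) / 3.0)

    def implicit_w(xn, sigma=2.0 / 3.0, iters=60):
        # solve w = sigma*x_n + (1 - sigma)*G(w) by fixed-point iteration
        w = xn
        for _ in range(iters):
            w = sigma * xn + (1.0 - sigma) * G(w)
        return w

    def step(xn, n, lam=0.5, ell=0.5, nu=1.0):
        alpha = 1.0 / (3.0 * (n + 1))
        w = implicit_w(xn)
        y = proj_C(w - lam * A(w))
        R = w - y                                    # R_lambda(w_n)
        tau = 1.0                                    # Armijo-type search
        while (A(w) - A(w - tau * R)) * R > (nu / 2.0) * R * R:
            tau *= ell
        t = w - tau * R
        # z_n = P_{C_n}(w_n), C_n = {u in C : A(t)(u-w) + (tau/(2 lam))R^2 <= 0};
        # in 1-D, projecting w onto that half-space lands on its boundary point.
        a, c = A(t), (tau / (2.0 * lam)) * R * R
        z = w if abs(a) < 1e-14 else proj_C(w - c / a)
        inner = alpha * 0.5 * xn + (1.0 - alpha) * S(z)  # alpha*f(x_n)+(1-alpha)Sz_n
        return (2.0 / 3.0) * xn + (1.0 / 3.0) * proj_C(inner)

    x = 2.5                       # arbitrary starting point in C
    for n in range(1, 301):
        x = step(x, n)
    # after the loop, x should approximate the common solution 0
    ```

    Note that $\mu F=I$ here (since $\mu=2$ and $F=\frac12I$), which is why the viscosity part collapses to $\alpha_n\frac12x_n+(1-\alpha_n)Sz_n$ in the last step.
    
    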

    In this paper, we introduce the modified Mann-like subgradient-like extragradient implicit rules with linear-search process for finding a common solution of the SGEP, VIP and FPP. The proposed algorithms are based on the subgradient extragradient rule with linear-search process, the Mann implicit iteration approach, and the hybrid deepest-descent technique. Under mild restrictions, we demonstrate the strong convergence of the suggested algorithms to a common solution of the SGEP, VIP and FPP, which is the unique solution of a certain HVI defined on their common solution set. In addition, an illustrative example is provided to show the feasibility and implementability of our proposed rules.

    This research was supported by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), the Innovation Program of Shanghai Municipal Education Commission (15ZZ068) and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). Li-Jun Zhu was supported by the National Natural Science Foundation of China [grant number 11861003], the Natural Science Foundation of Ningxia Province [grant number NZ17015], the Major Research Projects of Ningxia [grant number 2021BEG03049] and the Major Scientific and Technological Innovation Projects of Yinchuan [grant numbers 2022RKX03 and NXYLXK2017B09].

    The authors declare no conflicts of interest.



    [1] Y. Yao, Y. C. Liou, S. M. Kang, Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method, Comput. Math. Appl., 59 (2010), 3472–3480. https://doi.org/10.1016/j.camwa.2010.03.036 doi: 10.1016/j.camwa.2010.03.036
    [2] L. O. Jolaoso, Y. Shehu, J. C. Yao, Inertial extragradient type method for mixed variational inequalities without monotonicity, Math. Comput. Simul., 192 (2022), 353–369. https://doi.org/10.1016/j.matcom.2021.09.010 doi: 10.1016/j.matcom.2021.09.010
    [3] Q. L. Dong, L. Liu, Y. Yao, Self-adaptive projection and contraction methods with alternated inertial terms for solving the split feasibility problem, J. Nonlinear Convex Anal., 23 (2022), 591–605.
    [4] L. C. Ceng, A. Petrusel, X. Qin, J. C. Yao, Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints, Optimization, 70 (2021), 1337–1358. https://doi.org/10.1080/02331934.2020.1858832 doi: 10.1080/02331934.2020.1858832
    [5] Y. Yao, M. Postolache, J. C. Yao, Strong convergence of an extragradient algorithm for variational inequality and fixed point problems, U.P.B. Sci. Bull., Ser. A, 82 (2020), 3–12.
    [6] L. C. Ceng, C. Y. Wang, J. C. Yao, Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities, Math. Methods Oper. Res., 67 (2008), 375–390. https://doi.org/10.1007/s00186-007-0207-4 doi: 10.1007/s00186-007-0207-4
    [7] Y. Yao, N. Shahzad, J. C. Yao, Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems, Carpathian J. Math., 37 (2021), 541–550. https://doi.org/10.37193/CJM.2021.03.15 doi: 10.37193/CJM.2021.03.15
    [8] H. K. Xu, T. H. Kim, Convergence of hybrid steepest-descent methods for variational inequalities, J. Optim. Theory Appl., 119 (2003), 185–201. https://doi.org/10.1023/B:JOTA.0000005048.79379.b6 doi: 10.1023/B:JOTA.0000005048.79379.b6
    [9] L. He, Y. L. Cui, L. C. Ceng, Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule, J. Inequal. Appl., 2021 (2021), 146. https://doi.org/10.1186/s13660-021-02683-y doi: 10.1186/s13660-021-02683-y
    [10] L. M. Deng, R. Hu, Y. P. Fang, Projection extragradient algorithms for solving nonmonotone and non-Lipschitzian equilibrium problems in Hilbert spaces, Numer. Algor., 86 (2021), 191–221. https://doi.org/10.1007/s11075-020-00885-x doi: 10.1007/s11075-020-00885-x
    [11] J. Balooee, M. Postolache, Y. Yao, System of generalized nonlinear variational-like inequalities and nearly asymptotically nonexpansive mappings: graph convergence and fixed point problems, Ann. Funct. Anal., 13 (2022), 68. https://doi.org/10.1007/s43034-022-00212-6 doi: 10.1007/s43034-022-00212-6
    [12] S. V. Denisov, V. V. Semenov, L. M. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. https://doi.org/10.1007/s10559-015-9768-z doi: 10.1007/s10559-015-9768-z
    [13] G. Cai, Y. Shehu, O. S. Iyiola, Strong convergence results for variational inequalities and fixed point problems using modified viscosity implicit rules, Numer. Algor., 77 (2018), 535–558. https://doi.org/10.1007/s11075-017-0327-8 doi: 10.1007/s11075-017-0327-8
    [14] J. Yang, H. Liu, Z. Liu, Modified subgradient extragradient algorithms for solving monotone variational inequalities, Optimization, 67 (2018), 2247–2258. https://doi.org/10.1080/02331934.2018.1523404 doi: 10.1080/02331934.2018.1523404
    [15] P. T. Vuong, On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities, J. Optim. Theory Appl., 176 (2018), 399–409. https://doi.org/10.1007/s10957-017-1214-0 doi: 10.1007/s10957-017-1214-0
    [16] Y. Yao, H. Li, M. Postolache, Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions, Optimization, 71 (2022), 2451–2469. https://doi.org/10.1080/02331934.2020.1857757 doi: 10.1080/02331934.2020.1857757
    [17] D. V. Thong, D. V. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. https://doi.org/10.1007/s11075-018-0527-x doi: 10.1007/s11075-018-0527-x
    [18] Y. Shehu, O. S. Iyiola, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. https://doi.org/10.1007/s11075-016-0253-1 doi: 10.1007/s11075-016-0253-1
    [19] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, Marcel Dekker, New York, 1984.
    [20] D. V. Thong, Q. L. Dong, L. L. Liu, N. A. Triet, N. P. Lan, Two new inertial subgradient extragradient methods with variable step sizes for solving pseudomonotone variational inequality problems in Hilbert spaces, J. Comput. Appl. Math., 245 (2021), 1–23.
    [21] P. T. Vuong, Y. Shehu, Convergence of an extragradient-type method for variational inequality with applications to optimal control problems, Numer. Algor., 81 (2019), 269–291. https://doi.org/10.1007/s11075-018-0547-6 doi: 10.1007/s11075-018-0547-6
    [22] Y. Shehu, Q. L. Dong, D. Jiang, Single projection method for pseudo-monotone variational inequality in Hilbert spaces, Optimization, 68 (2019), 385–409. https://doi.org/10.1080/02331934.2018.1522636 doi: 10.1080/02331934.2018.1522636
    [23] D. V. Thong, D. V. Hieu, Modified subgradient extragradient method for variational inequality problems, Numer. Algor., 79 (2018), 597–610. https://doi.org/10.1007/s11075-017-0452-4 doi: 10.1007/s11075-017-0452-4
    [24] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 163 (2014), 399–412. https://doi.org/10.1007/s10957-013-0494-2 doi: 10.1007/s10957-013-0494-2
    [25] Y. Yao, Y. Shehu, X. H. Li, Q. L. Dong, A method with inertial extrapolation step for split monotone inclusion problems, Optimization, 70 (2021), 741–761. https://doi.org/10.1080/02331934.2020.1857754 doi: 10.1080/02331934.2020.1857754
    [26] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [27] L. C. Ceng, M. J. Shang, Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings, Optimization, 70 (2021), 715–740. https://doi.org/10.1080/02331934.2019.1647203 doi: 10.1080/02331934.2019.1647203
    [28] Y. Yao, O. S. Iyiola, Y. Shehu, Subgradient extragradient method with double inertial steps for variational inequalities, J. Sci. Comput., 90 (2022), 71. https://doi.org/10.1007/s10915-021-01751-1 doi: 10.1007/s10915-021-01751-1
    [29] S. Reich, D. V. Thong, Q. L. Dong, X. H. Li, V. T. Dung, New algorithms and convergence theorems for solving variational inequalities with non-Lipschitz mappings, Numer. Algor., 87 (2021), 527–549. https://doi.org/10.1007/s11075-020-00977-8 doi: 10.1007/s11075-020-00977-8
    [30] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [31] C. Zhang, Z. Zhu, Y. Yao, Q. Liu, Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities, Optimization, 68 (2019), 2293–2312. https://doi.org/10.1080/02331934.2019.1647199 doi: 10.1080/02331934.2019.1647199
    [32] A. N. Iusem, M. Nasri, Korpelevich's method for variational inequality problems in Banach spaces, J. Global Optim., 50 (2011), 59–76. https://doi.org/10.1007/s10898-010-9613-x doi: 10.1007/s10898-010-9613-x
    [33] Y. R. He, A new double projection algorithm for variational inequalities, J. Comput. Appl. Math., 185 (2006), 166–173. https://doi.org/10.1016/j.cam.2005.01.031 doi: 10.1016/j.cam.2005.01.031
    [34] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63 (1994), 123–145.
    [35] L. C. Ceng, J. C. Yao, A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem, Nonlinear Anal., 72 (2010), 1922–1937. https://doi.org/10.1016/j.na.2009.09.033 doi: 10.1016/j.na.2009.09.033
    [36] Y. Yao, M. Postolache, J. C. Yao, An iterative algorithm for solving the generalized variational inequalities and fixed points problems, Mathematics, 7 (2019), 61. https://doi.org/10.3390/math7010061 doi: 10.3390/math7010061
    [37] X. Zhao, Y. Yao, Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems, Optimization, 69 (2020), 1987–2002. https://doi.org/10.1080/02331934.2019.1711087 doi: 10.1080/02331934.2019.1711087
    [38] M. Farid, Two algorithms for solving mixed equilibrium problems and fixed point problems in Hilbert spaces, Ann. Univ. Ferrara., 68 (2022), 237. https://doi.org/10.1007/s11565-021-00380-8 doi: 10.1007/s11565-021-00380-8
    [39] M. Alansari, R. Ali, M. Farid, Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space, J. Inequal. Appl., 2020 (2020), 42. https://doi.org/10.1186/s13660-020-02313-z doi: 10.1186/s13660-020-02313-z
    [40] M. Farid, W. Cholamjiak, R. Ali, K. R. Kazmi, A new shrinking projection algorithm for a generalized mixed variational-like inequality problem and asymptotically quasi-ϕ-nonexpansive mapping in a Banach space, RACSAM, 115 (2021), 114. https://doi.org/10.1007/s13398-021-01049-9 doi: 10.1007/s13398-021-01049-9
    [41] M. Farid, R. Ali, W. Cholamjiak, An inertial iterative algorithm to find common solution of a split generalized equilibrium and a variational inequality problem in Hilbert spaces, J. Math., 2021 (2021), 3653807. https://doi.org/10.1155/2021/3653807 doi: 10.1155/2021/3653807
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)