
Inertial subgradient extragradient with projection method for solving variational inequality and fixed point problems

  • In this paper, we introduce a new modified inertial Mann-type method that combines the subgradient extragradient method with the projection contraction method for solving quasimonotone variational inequality problems and fixed point problems in real Hilbert spaces. We establish strong convergence of the proposed method under some mild conditions without knowledge of the operator norm. Finally, we give numerical experiments to illustrate the efficiency of the method over the existing one in the literature.

    Citation: Rose Maluleka, Godwin Chidi Ugwunnadi, Maggie Aphane. Inertial subgradient extragradient with projection method for solving variational inequality and fixed point problems[J]. AIMS Mathematics, 2023, 8(12): 30102-30119. doi: 10.3934/math.20231539




    Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, convex and closed subset of $H$ and $A:H\to H$ a nonlinear operator. The classical variational inequality problem (VIP), first introduced by Stampacchia [20], is defined as

    Find a point $x^*\in C$ such that $\langle Ax^*, y-x^*\rangle\ge 0,\quad\forall y\in C.$ (1.1)

    We denote the solution set of VIP (1.1) by $VI(C,A)$. Several problems that arise from different areas of pure and applied sciences can be studied in a general and unified framework of variational inequality problems. In view of this, the theory of variational inequalities has become an important tool in physics, control theory, engineering, economics, management science and mathematical programming (refer to [2,3,10,13,18]). One of the most important and difficult tasks is developing efficient methods for solving variational inequality problems. Over the years, several iterative methods have been proposed for this purpose (see [4,5,9,12,21,25,37]). The simplest classical iterative method for solving the VIP in a real Hilbert space is the gradient-projection method, defined as follows: starting from an initial point $x_0\in C$, update $x_{n+1}$ by the formula

    $x_{n+1}=P_C(x_n-\lambda Ax_n),$ (1.2)

    where $\lambda>0$ is a suitable stepsize and $P_C$ is the metric projection onto the convex and closed subset $C$ of $H$. This method is based on the fact that a point $x^*\in C$ is a solution of VIP (1.1) if and only if $x^*=P_C(x^*-\lambda Ax^*)$. The gradient-projection method (1.2) is easy to implement, since it requires only one function evaluation and one projection onto the closed convex set $C$ per iteration. However, its convergence requires a rather strong hypothesis: the operator $A$ must be strongly monotone (or inverse strongly monotone; see [37]). To avoid the strong monotonicity hypothesis on $A$, Korpelevich [12] in 1976 proposed the extragradient method with two projections in Euclidean space, as follows:

    $\begin{cases}y_n=P_C(x_n-\lambda Ax_n),\\ x_{n+1}=P_C(x_n-\lambda Ay_n),\end{cases}$ (1.3)

    where $A:C\to\mathbb{R}^n$ is monotone and $L$-Lipschitz continuous with $L>0$ and $\lambda\in(0,1/L)$. If the solution set $VI(C,A)$ in (1.3) is nonempty, then the iterative sequence $\{x_n\}$ generated by Algorithm (1.3) converges weakly to a point in $VI(C,A)$. Although the conditions imposed on $A$ in the extragradient method (1.3) are weaker than those of the gradient-projection method (1.2), algorithm (1.3) requires two projections onto the closed convex set $C$ per iteration, which may affect its efficiency when $C$ is a general closed convex set. There are some methods to overcome this drawback. In 2011, Censor et al. [5] introduced the subgradient extragradient method in a real Hilbert space $H$, in which the second projection onto $C$ in (1.3) is replaced by a projection onto a specific constructible half-space. Their algorithm is defined as

    $\begin{cases}y_n=P_C(x_n-\lambda Ax_n),\\ T_n=\{z\in H:\langle x_n-\lambda Ax_n-y_n,\,z-y_n\rangle\le 0\},\\ x_{n+1}=P_{T_n}(x_n-\lambda Ay_n),\end{cases}$ (1.4)

    where $A$ is monotone and $L$-Lipschitz continuous with $L>0$ and the stepsize $\lambda\in(0,1/L)$ is fixed. Also, He [9] and Sun [21] independently studied the projection and contraction method (PCM), proposed as

    $\begin{cases}x_1\in H,\\ y_n=P_C(x_n-\lambda Ax_n),\\ d(x_n,y_n)=(x_n-y_n)-\lambda(Ax_n-Ay_n),\\ x_{n+1}=x_n-\gamma\eta_n d(x_n,y_n),\quad n\ge 1,\end{cases}$ (1.5)

    where $\gamma\in(0,2)$, $\lambda\in(0,1/L)$ and $\eta_n:=\frac{\langle x_n-y_n,\,d(x_n,y_n)\rangle}{\|d(x_n,y_n)\|^2}$. He [9] established that the sequence $\{x_n\}$ generated by (1.5) converges weakly to a solution of VIP (1.1). Since PCM requires only one projection onto the feasible set $C$, it reduces the computational cost per iteration. Researchers have improved PCM in many different ways; see, for instance, [33,34,35]. The major drawback of the projection and contraction method (1.5) is that the stepsize $\lambda$ requires knowledge of an upper bound of the Lipschitz constant $L$: a large value of $L$ forces very small stepsizes $\lambda$, which may slow the convergence of Algorithm (1.5). To overcome this difficulty in PCM (1.5), a self-adaptive method that does not require the Lipschitz constant of the mapping in advance is needed.
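    The behavior of (1.2), (1.3) and (1.5) can be seen on small toy problems. The following sketch (our own illustration, not from the paper) implements all three in plain Python: a one-dimensional VIP with $A(x)=x-2$ on $C=[0,5]$ (solution $x^*=2$) for (1.2) and (1.5), and the monotone rotation field $A(x_1,x_2)=(x_2,-x_1)$ on $C=\mathbb{R}^2$ (solution $(0,0)$) for (1.3), a standard case where plain gradient projection fails but the extragradient method converges.

```python
# Toy illustrations (assumed operators: A(x) = x - 2 on C = [0,5] for
# (1.2)/(1.5), and the rotation field A(x1,x2) = (x2,-x1) on C = R^2 for (1.3)).

def proj_interval(x, lo=0.0, hi=5.0):
    """Metric projection P_C onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

def gradient_projection(x0, lam=0.5, n_iters=60):
    """(1.2): x_{n+1} = P_C(x_n - lam*A(x_n)) with the strongly monotone A(x) = x - 2."""
    x = x0
    for _ in range(n_iters):
        x = proj_interval(x - lam * (x - 2.0))
    return x

def extragradient(p0, lam=0.5, n_iters=100):
    """(1.3) with A(x1,x2) = (x2,-x1) and C = R^2 (so P_C = I).
    A is monotone and 1-Lipschitz but not strongly monotone; plain gradient
    projection spirals outward here, while the extragradient step converges."""
    A = lambda p: (p[1], -p[0])
    x = p0
    for _ in range(n_iters):
        a = A(x)
        y = (x[0] - lam * a[0], x[1] - lam * a[1])   # y_n = x_n - lam*A(x_n)
        a = A(y)
        x = (x[0] - lam * a[0], x[1] - lam * a[1])   # x_{n+1} = x_n - lam*A(y_n)
    return x

def pcm(x0, lam=0.5, gamma=1.5, n_iters=80):
    """(1.5): one projection per iteration, gamma in (0,2), lam in (0,1/L)."""
    A = lambda t: t - 2.0
    x = x0
    for _ in range(n_iters):
        y = proj_interval(x - lam * A(x))
        d = (x - y) - lam * (A(x) - A(y))       # d(x_n, y_n)
        if d == 0.0:                            # x_n already solves the VIP
            return x
        eta = (x - y) * d / d ** 2              # eta_n
        x = x - gamma * eta * d                 # x_{n+1} = x_n - gamma*eta_n*d_n
    return x
```

    For instance, `gradient_projection(0.0)` and `pcm(5.0)` both return values close to the solution $2$, while `extragradient((1.0, 1.0))` drives the iterate to the origin.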

    On the other hand, to speed up the convergence rate of algorithms, Polyak [14] studied the heavy ball method, an inertial extrapolation process for minimizing a smooth convex function. Since then, many authors have introduced this technique into different methods for solving VIPs (see [16,17,27,28,41] for more details). In 2021, Tan et al. [30], building on the modified extragradient and projection contraction method of Thong and Hieu [31], studied an inertial modified extragradient projection and contraction method combined with the hybrid steepest descent method, using an Armijo-type line search to update the stepsize at each iteration, as follows:

    $\begin{cases}x_0,x_1\in H,\\ w_n=x_n+\theta_n(x_n-x_{n-1}),\\ y_n=P_C(w_n-\lambda_n Aw_n),\\ z_n=P_{T_n}(w_n-\theta\lambda_n\eta_n Ay_n),\\ T_n=\{x\in H:\langle w_n-\lambda_n Aw_n-y_n,\,x-y_n\rangle\le 0\},\\ d(w_n,y_n)=(w_n-y_n)-\lambda_n(Aw_n-Ay_n),\\ \eta_n:=\frac{\langle w_n-y_n,\,d(w_n,y_n)\rangle}{\|d(w_n,y_n)\|^2},\\ x_{n+1}=z_n-\phi_n\gamma Sz_n,\quad n\ge 1,\end{cases}$ (1.6)

    where $\lambda_n$ is chosen to be the largest $\lambda\in\{\delta,\delta\xi,\delta\xi^2,\ldots\}$ satisfying the line-search criterion, $\delta,\xi\in(0,1)$, $A$ is Lipschitz continuous and pseudomonotone, $S$ is Lipschitz continuous and $\alpha$-strongly monotone, and $\{\phi_n\}$ is a control sequence in $(0,1)$ satisfying certain conditions. Under suitable conditions on the parameters, they proved strong convergence of the sequence generated by (1.6). We note that Algorithm (1.6), as proposed by Tan et al. [30], uses an Armijo-type line-search criterion to update the stepsize at each iteration. Such a line search requires many additional computations and reduces the computational efficiency of the method. Recently, many methods with a simple stepsize rule have been proposed for solving the VIP (see [29,32,36]).

    Inspired and motivated by the above-mentioned results, in this paper we introduce an inertial modified extragradient and contraction projection method with self-adaptive stepsize for finding a common solution of the quasimonotone variational inequality problem and a common fixed point of an infinite family of demimetric mappings in real Hilbert spaces. In this regard, we highlight the following contributions of our proposed algorithm (method):

    (a) the operator A involved is quasimonotone instead of monotone or pseudomonotone;

    (b) the proposed algorithm embeds an inertial term, which helps increase the convergence speed of the iterative sequence;

    (c) the method uses a new non-monotonic step size so that it can work without knowing the Lipschitz constant of the mapping;

    (d) the projection onto the feasible set needs to be evaluated only once in each iteration;

    (e) we establish that the sequence generated by our proposed method converges strongly to a common fixed point of the infinite family of demimetric mappings, which is also a solution of the variational inequality problem for quasimonotone operators;

    (f) we demonstrate the effectiveness of our proposed method by providing numerical examples and comparing it with related methods in the literature.

    Throughout this section, the symbols $\to$ and $\rightharpoonup$ represent strong and weak convergence, respectively.

    Let $C$ be a closed and convex subset of a real Hilbert space $H$. The metric projection $P_C:H\to C$ assigns to each $x\in H$ the unique point $z=P_C(x)$ such that

    $\|x-z\|=\inf_{y\in C}\|x-y\|.$

    From this definition, it is easy to show that $P_C$ has the following characteristic properties; see [7] for more details.

    Lemma 2.1. (Goebel and Reich [7]) Let $x\in H$ and $z\in C$ be any point. Then we have

    (ⅰ) $z=P_C(x)$ if and only if the following relation holds:

    $\langle x-z,\,y-z\rangle\le 0,\quad\forall y\in C.$ (2.1)

    (ⅱ) For all $x,y\in H$, we have

    $\langle P_C(x)-P_C(y),\,x-y\rangle\ge\|P_C(x)-P_C(y)\|^2.$

    (ⅲ) For $x\in H$ and $y\in C$,

    $\|y-P_C(x)\|^2+\|x-P_C(x)\|^2\le\|x-y\|^2.$

    We also need the following nonlinear operators, which are introduced below.

    Definition 2.1. Let the fixed point set of a mapping $T:C\to H$ be denoted by $F(T):=\{x\in C: Tx=x\}$. The mapping $T$ is called

    (1) $L$-Lipschitz continuous with $L>0$ if, for all $x,y\in C$,

    $\|Tx-Ty\|\le L\|x-y\|.$

    If $L=1$, then $T$ is called a nonexpansive mapping.

    (2) quasi-nonexpansive if $\|Tx-y\|\le\|x-y\|$ for all $x\in C$, $y\in F(T)$;

    (3) $(\alpha,\beta)$-generalized hybrid [11] if there exist $\alpha,\beta\in\mathbb{R}$ such that

    $\alpha\|Tx-Ty\|^2+(1-\alpha)\|x-Ty\|^2\le\beta\|Tx-y\|^2+(1-\beta)\|x-y\|^2,\quad\forall x,y\in C;$

    (4) $\tau$-demicontractive if $F(T)\ne\emptyset$ and there exists $\tau\in[0,1)$ such that

    $\|Tx-y\|^2\le\|x-y\|^2+\tau\|x-Tx\|^2$ for all $x\in C$, $y\in F(T)$;

    (5) $\tau$-demimetric [22] if $F(T)\ne\emptyset$ and there exists $\tau\in(-\infty,1)$ such that, for any $x\in C$ and $y\in F(T)$, we have

    $\langle x-y,\,x-Tx\rangle\ge\frac{1-\tau}{2}\|x-Tx\|^2.$

    Observe that a $(1,0)$-generalized hybrid mapping is nonexpansive, and every generalized hybrid mapping with a nonempty fixed point set is quasi-nonexpansive. Also, the class of $\tau$-demicontractive mappings covers the nonexpansive and quasi-nonexpansive ones. The class of $\tau$-demimetric mappings includes the $\tau$-demicontractive and generalized hybrid mappings as special cases.
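    As a quick sanity check of the demimetric inequality (our own toy illustration, not from the paper), the mapping $T(x)=-x/2$ on the real line has $F(T)=\{0\}$, and the inequality $\langle x-y,\,x-Tx\rangle\ge\frac{1-\tau}{2}\|x-Tx\|^2$ can be verified numerically with $\tau=-1/3$: here $\langle x-0,\,x-Tx\rangle=\tfrac{3}{2}x^2$ and $\frac{1-\tau}{2}\|x-Tx\|^2=\tfrac{2}{3}\cdot\tfrac{9}{4}x^2=\tfrac{3}{2}x^2$, so equality holds and $\tau=-1/3$ is sharp.

```python
# Hedged toy check: T(x) = -x/2 has F(T) = {0}; the demimetric inequality
# <x - y, x - Tx> >= ((1 - tau)/2)*|x - Tx|^2 holds with tau = -1/3,
# since 1.5*x^2 >= (2/3)*(2.25)*x^2 = 1.5*x^2 (with equality).

def satisfies_demimetric(T, tau, points, y=0.0):
    """Check the demimetric inequality at the fixed point y over sample points."""
    return all(
        (x - y) * (x - T(x)) >= ((1 - tau) / 2) * (x - T(x)) ** 2 - 1e-12
        for x in points
    )

T = lambda x: -x / 2
sample = [-3.0, -1.0, -0.1, 0.0, 0.5, 2.0, 10.0]
```

    Here `satisfies_demimetric(T, -1/3, sample)` returns `True`, while a smaller value such as $\tau=-1/2$ fails, illustrating that the demimetric constant is specific to the mapping.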

    The following result is important and crucial in the proof of our main result.

    Lemma 2.2. ([23, Lemma 2.2]) Let $H$ be a Hilbert space and let $C$ be a nonempty, closed, and convex subset of $H$. Let $k\in(-\infty,1)$ and let $T$ be a $k$-demimetric mapping of $C$ into $H$ such that $F(T)\ne\emptyset$. Let $\lambda$ be a real number with $0<\lambda\le 1-k$ and define $S=(1-\lambda)I+\lambda T$. Then

    (ⅰ) F(T)=F(S),

    (ⅱ) F(T) is closed and convex,

    (ⅲ) S is a quasi-nonexpansive mapping of C into H.

    We also apply the following results to establish our main result.

    Lemma 2.3. ([19, Lemma 3.3]) Let $H$ be a Hilbert space and $C$ a nonempty, closed, and convex subset of $H$. Assume that $\{T_i\}_{i=1}^\infty:C\to H$ is an infinite family of $\tau_i$-demimetric mappings with $\sup\{\tau_i:i\in\mathbb{N}\}<1$ such that $\bigcap_{i=1}^\infty F(T_i)\ne\emptyset$. Assume that $\{\eta_i\}_{i=1}^\infty$ is a positive sequence such that $\sum_{i=1}^\infty\eta_i=1$. Then $\sum_{i=1}^\infty\eta_iT_i:C\to H$ is a $\tau$-demimetric mapping with $\tau=\sup\{\tau_i:i\in\mathbb{N}\}$ and $F(\sum_{i=1}^\infty\eta_iT_i)=\bigcap_{i=1}^\infty F(T_i)$.

    The so-called demiclosedness principle plays an important role in our argument.

    Lemma 2.4. ([6]) Let $T:C\to H$ be a nonexpansive mapping. Then $T$ is demiclosed on $C$, in the sense that if $\{x_n\}$ converges weakly to $x\in C$ and $\{x_n-Tx_n\}$ converges strongly to $0$, then $x\in F(T)$.

    Lemma 2.5. ([24]) Let $H$ be a real Hilbert space. Then, for all $x,y\in H$ and $\alpha\in\mathbb{R}$, the following hold:

    (1) $\|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle$;

    (2) $\|\alpha x+(1-\alpha)y\|^2=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2$.

    Next, we present some concepts of monotonicity of an operator.

    Definition 2.2. Let $A:H\to H$ be a mapping and let $x,y\in H$. Then $A$ is said to be

    (a) η-strongly monotone, if there exists η>0 such that

    $\langle Ax-Ay,\,x-y\rangle\ge\eta\|x-y\|^2;$

    (b) monotone, if

    $\langle Ax-Ay,\,x-y\rangle\ge 0;$

    (c) pseudomonotone, if

    $\langle Ay,\,x-y\rangle\ge 0\ \Longrightarrow\ \langle Ax,\,x-y\rangle\ge 0;$

    (d) quasimonotone, if

    $\langle Ay,\,x-y\rangle>0\ \Longrightarrow\ \langle Ax,\,x-y\rangle\ge 0.$

    It is easy to see that (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (d), but the converses are not true in general.

    Lemma 2.6. ([8,40]) Let $C$ be a nonempty, closed, and convex subset of a Hilbert space $H$ and $F:H\to H$ an $L$-Lipschitzian and quasimonotone operator. Let $y\in C$. If, for some $x\in C$, we have $\langle F(y),\,x-y\rangle\ge 0$, then at least one of the following must hold:

    $\langle F(x),\,x-y\rangle\ge 0\quad\text{or}\quad\langle F(y),\,z-y\rangle\le 0,\ \forall z\in C.$

    The following result is useful when proving strong convergence of our iterative sequence.

    Lemma 2.7. ([15]) Let $\{a_n\}$ be a sequence of nonnegative real numbers and $\{\alpha_n\}$ a sequence of real numbers in $(0,1)$ with the condition

    $\sum_{n=1}^\infty\alpha_n=\infty,$

    and let $\{b_n\}$ be a sequence of real numbers. Assume that

    $a_{n+1}\le(1-\alpha_n)a_n+\alpha_nb_n,\quad n\ge 1.$

    If $\limsup_{k\to\infty}b_{n_k}\le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying the condition

    $\liminf_{k\to\infty}(a_{n_k+1}-a_{n_k})\ge 0,$

    then $\lim_{n\to\infty}a_n=0$.

    We begin this section with the following assumptions for the convergence analysis of our method:

    Assumption 3.1. Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Suppose the following conditions are satisfied:

    (C1) $A:H\to H$ is quasimonotone and $L$-Lipschitz continuous with $L>0$.

    (C2) $A:H\to H$ is sequentially weakly continuous, i.e., for each sequence $\{x_n\}\subset C$ converging weakly to $x$, $\{A(x_n)\}$ converges weakly to $A(x)$.

    (C3) $\{T_i\}_{i=1}^\infty:H\to H$ is an infinite family of $\tau_i$-demimetric mappings, demiclosed at zero, with $\tau_i\in(-\infty,1)$ for each $i\ge 1$ and $\tau=\sup\{\tau_i:i\ge 1\}<1$. Letting $\Psi:=\sum_{i=1}^\infty\gamma_iT_i$ with $\sum_{i=1}^\infty\gamma_i=1$, by Lemma 2.3, $\Psi$ is a $\tau$-demimetric mapping, and $G:=(1-\zeta)I+\zeta\Psi$ with $\zeta\in(0,1-\tau]$.

    (C4) $\{\mu_n\}$ is a positive sequence with $\mu_n=o(\alpha_n)$ and $\{\beta_n\}\subset(a,1-\alpha_n)$ for some $a>0$, where $\{\alpha_n\}\subset(0,1)$ satisfies $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^\infty\alpha_n=\infty$.

    (C5) The solution set $\Gamma:=VI(C,A)\cap F(\Psi)$ is assumed to be nonempty.

    Now, using the inertial extrapolation term, we introduce a modified inertial Mann-type subgradient extragradient method with projection and contraction iterative techniques for solving variational inequality and common fixed point problems:

    Algorithm 3.1. Initialization: Choose $\theta>0$, $\lambda_1>0$, $\mu\in(0,1)$, $\rho\in(0,2)$. Let $x_0,x_1\in H$ be taken arbitrarily.

    Iterative Steps: Calculate xn+1 as follows:

    Step 1. Given the iterates $x_{n-1}$ and $x_n$ for each $n\ge 1$ and $\theta>0$, choose $\theta_n$ such that $0\le\theta_n\le\bar\theta_n$, where

    $\bar\theta_n=\begin{cases}\min\big\{\theta,\frac{\mu_n}{\|x_n-x_{n-1}\|}\big\}, & \text{if } x_n\ne x_{n-1},\\ \theta, & \text{otherwise}.\end{cases}$ (3.1)

    Step 2. Compute

    $\begin{cases}y_n=x_n+\theta_n(x_n-x_{n-1}),\\ w_n=P_C(y_n-\lambda_nAy_n),\end{cases}$ (3.2)

    where

    $\lambda_{n+1}=\begin{cases}\min\big\{\frac{\mu\|y_n-w_n\|}{\|Ay_n-Aw_n\|},\lambda_n\big\}, & \text{if } Ay_n\ne Aw_n,\\ \lambda_n, & \text{otherwise}.\end{cases}$ (3.3)

    Step 3. Compute

    $\begin{cases}v_n=P_{T_n}(y_n-\rho\lambda_n\eta_nAw_n),\\ u_n=(1-\alpha_n)v_n,\\ x_{n+1}=(1-\beta_n)u_n+\beta_nGu_n,\quad n\in\mathbb{N},\end{cases}$ (3.4)

    where $T_n:=\{z\in H:\langle y_n-\lambda_nAy_n-w_n,\,z-w_n\rangle\le 0\}$, $\eta_n:=\frac{\langle y_n-w_n,\,d_n\rangle}{\|d_n\|^2}$ and $d_n=y_n-w_n-\lambda_n(Ay_n-Aw_n)$. Set $n:=n+1$ and return to Step 1.
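    To make the steps concrete, the following one-dimensional sketch (our own illustration under simplifying assumptions, not the paper's experiment) runs Steps 1–3 with $C=[0,5]$, $A(x)=x-2$ (monotone, hence quasimonotone, with $L=1$), $G$ the identity (so $\Gamma=VI(C,A)=\{2\}$), $\alpha_n=\frac{1}{n+1}$ and $\mu_n=\frac{1}{(n+1)^2}$. In one dimension the half-space projection $P_{T_n}$ reduces to one-sided clipping at $w_n$.

```python
# One-dimensional sketch of Algorithm 3.1 (toy setting: C = [0,5],
# A(x) = x - 2, G = identity, so Gamma = VI(C, A) = {2}).

def proj_C(x, lo=0.0, hi=5.0):
    return min(max(x, lo), hi)

def A(x):
    return x - 2.0

def algorithm31(x0, x1, n_iters=4000, theta=0.5, lam=1.5, mu=0.5, rho=1.0):
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        alpha_n = 1.0 / (n + 1)            # alpha_n -> 0, sum alpha_n = inf
        mu_n = 1.0 / (n + 1) ** 2          # mu_n = o(alpha_n)
        # Step 1: inertial parameter via (3.1)
        theta_n = theta if x == x_prev else min(theta, mu_n / abs(x - x_prev))
        # Step 2: extrapolation and projection via (3.2)
        y = x + theta_n * (x - x_prev)
        w = proj_C(y - lam * A(y))
        # stepsize update via (3.3)
        lam_next = lam
        if A(y) != A(w):
            lam_next = min(mu * abs(y - w) / abs(A(y) - A(w)), lam)
        # Step 3: projection-contraction step via (3.4)
        d = (y - w) - lam * (A(y) - A(w))
        if d == 0.0:                       # y solves the VI (cf. Lemma 3.2)
            v = y
        else:
            eta = (y - w) * d / d ** 2     # eta_n = <y_n - w_n, d_n>/||d_n||^2
            p = y - rho * lam * eta * A(w)
            c = (y - lam * A(y)) - w       # T_n = {z : c*(z - w) <= 0}
            v = min(p, w) if c > 0 else (max(p, w) if c < 0 else p)
        u = (1.0 - alpha_n) * v            # u_n = (1 - alpha_n)*v_n
        x_prev, x = x, u                   # G = I, so x_{n+1} = u_n
        lam = lam_next
    return x
```

    Starting from $x_0=0$, $x_1=1$, the iterates approach the solution $2$; the $(1-\alpha_n)$ factor in $u_n$ acts as a vanishing Halpern-type anchor at the origin, which is what produces strong rather than weak convergence.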

    Remark 3.1. From (C4) of Assumption 3.1, we have $\mu_n=o(\alpha_n)$, i.e., $\lim_{n\to\infty}\frac{\mu_n}{\alpha_n}=0$. Also, from (3.1), $\theta_n\le\bar\theta_n\le\frac{\mu_n}{\|x_n-x_{n-1}\|}$ for all $n\ge 1$ with $x_n\ne x_{n-1}$, so it is easy to see that

    $\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|\le\frac{\mu_n}{\alpha_n}\to 0\quad\text{as}\quad n\to\infty.$ (3.5)

    The following two lemmas, essentially proved in [25,38] and [26], respectively, are vital in the proof of our main result. For the sake of completeness, we give their proofs.

    Lemma 3.1. (see Yang et al. [38] and Tan et al. [25]) The sequence {λn} in Algorithm 3.1 generated by (3.3) is nonincreasing and the limit exists.

    Proof. It is straightforward to see that the sequence $\{\lambda_n\}$ is monotone nonincreasing. Using the fact that $A$ is Lipschitz continuous, when $Ay_n\ne Aw_n$ we have

    $\frac{\mu\|y_n-w_n\|}{\|Ay_n-Aw_n\|}\ge\frac{\mu\|y_n-w_n\|}{L\|y_n-w_n\|}=\frac{\mu}{L},$

    thus $\{\lambda_n\}$ is bounded from below by $\min\{\frac{\mu}{L},\lambda_1\}$. Since the sequence $\{\lambda_n\}$ is monotone nonincreasing and bounded from below, $\lim_{n\to\infty}\lambda_n$ exists. Denoting $\lambda=\lim_{n\to\infty}\lambda_n$, we get $\lambda>0$.
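    Lemma 3.1 can be checked numerically (a hedged toy illustration, not part of the paper): for the Lipschitz operator $A(x)=3x$ (so $L=3$), the stepsizes produced by rule (3.3) are nonincreasing and never fall below $\min\{\mu/L,\lambda_1\}$, regardless of which iterate pairs $(y_n,w_n)$ occur.

```python
# Toy check of Lemma 3.1: for the L-Lipschitz operator A(x) = 3x (L = 3),
# the stepsizes generated by rule (3.3) are nonincreasing and bounded
# below by min{mu/L, lambda_1}.

def stepsize_sequence(pairs, lam1=2.0, mu=0.5):
    """pairs: iterate pairs (y_n, w_n); returns the lambda_n sequence of (3.3)."""
    A = lambda t: 3.0 * t                  # Lipschitz with constant L = 3
    lams = [lam1]
    for y, w in pairs:
        lam = lams[-1]
        if A(y) != A(w):
            lam = min(mu * abs(y - w) / abs(A(y) - A(w)), lam)
        lams.append(lam)                   # lambda_{n+1} <= lambda_n by construction
    return lams
```

    Since $|A(y)-A(w)|=3|y-w|$ here, every candidate ratio equals $\mu/3=\mu/L$, so the sequence drops from $\lambda_1$ to $\mu/L$ and then stays there, matching the lower bound in the lemma.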

    Lemma 3.2. (see [26, Lemma 3.2]) If $y_n=w_n$ or $d_n=0$ in Algorithm 3.1, then $y_n\in VI(C,A)$.

    Proof. Using (3.2) and (3.3) in Algorithm 3.1, we get

    $\begin{aligned}\Big(1-\frac{\lambda_n\mu}{\lambda_{n+1}}\Big)\|y_n-w_n\| &= \|y_n-w_n\|-\frac{\lambda_n\mu}{\lambda_{n+1}}\|y_n-w_n\|\\ &\le \|y_n-w_n\|-\lambda_n\|Ay_n-Aw_n\|\\ &\le \|y_n-w_n-\lambda_n(Ay_n-Aw_n)\|=\|d_n\|\\ &\le \|y_n-w_n\|+\lambda_n\|Ay_n-Aw_n\|\\ &\le \|y_n-w_n\|+\frac{\lambda_n\mu}{\lambda_{n+1}}\|y_n-w_n\|=\Big(1+\frac{\lambda_n\mu}{\lambda_{n+1}}\Big)\|y_n-w_n\|.\end{aligned}$

    Therefore $\big(1-\frac{\lambda_n\mu}{\lambda_{n+1}}\big)\|y_n-w_n\|\le\|d_n\|\le\big(1+\frac{\lambda_n\mu}{\lambda_{n+1}}\big)\|y_n-w_n\|$. Since the limit of $\{\lambda_n\}$ exists, $\lambda_n/\lambda_{n+1}\to 1$, so the coefficients tend to $1-\mu>0$ and $1+\mu$; thus $y_n=w_n$ if and only if $d_n=0$. Therefore, if $y_n=w_n$ or $d_n=0$, we get $y_n=P_C(y_n-\lambda_nAy_n)$, which implies $y_n\in VI(C,A)$.

    Lemma 3.3. Let {xn} be the sequence generated by Algorithm 3.1 under Assumption 3.1. Then, {xn} is bounded.

    Proof. Let $x^*\in\Gamma\subset C$. Since $w_n\in C$ from (3.2), we have

    $\langle Ax^*,\,w_n-x^*\rangle\ge 0,$ (3.6)

    and since $A$ is quasimonotone, by Lemma 2.6 we have

    $\langle Aw_n,\,w_n-x^*\rangle\ge 0,$ (3.7)

    thus

    $\langle Aw_n,\,v_n-x^*\rangle=\langle Aw_n,\,v_n-w_n\rangle+\langle Aw_n,\,w_n-x^*\rangle\ge\langle Aw_n,\,v_n-w_n\rangle.$ (3.8)

    By the definition of $T_n$ in Algorithm 3.1, we have $v_n\in T_n$, which implies

    $\langle y_n-\lambda_nAy_n-w_n,\,v_n-w_n\rangle\le 0,$

    hence

    $\langle d_n,\,v_n-w_n\rangle=\langle y_n-w_n-\lambda_n(Ay_n-Aw_n),\,v_n-w_n\rangle=\langle y_n-\lambda_nAy_n-w_n,\,v_n-w_n\rangle+\lambda_n\langle Aw_n,\,v_n-w_n\rangle\le\lambda_n\langle Aw_n,\,v_n-w_n\rangle.$ (3.9)

    Combining (3.8) and (3.9), we get

    $\langle d_n,\,v_n-y_n\rangle+\langle d_n,\,y_n-w_n\rangle=\langle d_n,\,v_n-w_n\rangle\le\lambda_n\langle Aw_n,\,v_n-x^*\rangle.$ (3.10)

    Using (3.4), (3.10) and the fact that the projection operator is firmly nonexpansive, by Lemma 2.1(ⅱ), we obtain

    $\begin{aligned}2\|v_n-x^*\|^2 &\le 2\langle y_n-x^*-\rho\lambda_n\eta_nAw_n,\,v_n-x^*\rangle\\ &= \|v_n-x^*\|^2+\|y_n-x^*-\rho\lambda_n\eta_nAw_n\|^2-\|v_n-y_n+\rho\lambda_n\eta_nAw_n\|^2\\ &= \|v_n-x^*\|^2+\|y_n-x^*\|^2-\|v_n-y_n\|^2-2\rho\lambda_n\eta_n\langle Aw_n,\,y_n-x^*\rangle-2\rho\lambda_n\eta_n\langle Aw_n,\,v_n-y_n\rangle\\ &= \|v_n-x^*\|^2+\|y_n-x^*\|^2-\|v_n-y_n\|^2-2\rho\lambda_n\eta_n\langle Aw_n,\,v_n-x^*\rangle\\ &\le \|v_n-x^*\|^2+\|y_n-x^*\|^2-\|v_n-y_n\|^2-2\rho\eta_n\langle d_n,\,v_n-y_n\rangle-2\rho\eta_n\langle d_n,\,y_n-w_n\rangle\\ &= \|v_n-x^*\|^2+\|y_n-x^*\|^2-\|v_n-y_n\|^2+2\rho\eta_n\langle d_n,\,y_n-v_n\rangle-2\rho\eta_n^2\|d_n\|^2\\ &= \|v_n-x^*\|^2+\|y_n-x^*\|^2-\|v_n-y_n\|^2+\rho^2\eta_n^2\|d_n\|^2-2\rho\eta_n^2\|d_n\|^2+\|v_n-y_n\|^2-\|y_n-v_n-\rho\eta_nd_n\|^2,\end{aligned}$

    therefore

    $\|v_n-x^*\|^2\le\|y_n-x^*\|^2+\rho^2\eta_n^2\|d_n\|^2-2\rho\eta_n^2\|d_n\|^2-\|y_n-v_n-\rho\eta_nd_n\|^2.$ (3.11)

    Moreover, we have

    $\langle d_n,\,y_n-w_n\rangle=\|y_n-w_n\|^2-\lambda_n\langle Ay_n-Aw_n,\,y_n-w_n\rangle\ge\|y_n-w_n\|^2-\lambda_n\|Ay_n-Aw_n\|\|y_n-w_n\|\ge\|y_n-w_n\|^2-\frac{\lambda_n\mu}{\lambda_{n+1}}\|y_n-w_n\|^2=\Big(1-\frac{\lambda_n\mu}{\lambda_{n+1}}\Big)\|y_n-w_n\|^2.$

    Since $\|d_n\|\le\big(1+\frac{\lambda_n\mu}{\lambda_{n+1}}\big)\|y_n-w_n\|$, then

    $\eta_n^2\|d_n\|^2=\frac{\langle d_n,\,y_n-w_n\rangle^2}{\|d_n\|^2}\ge\Big(\frac{\lambda_{n+1}-\lambda_n\mu}{\lambda_{n+1}+\lambda_n\mu}\Big)^2\|y_n-w_n\|^2.$ (3.12)

    Combining (3.11) and (3.12), we get

    $\|v_n-x^*\|^2\le\|y_n-x^*\|^2-\|y_n-v_n-\rho\eta_nd_n\|^2-\rho(2-\rho)\Big(\frac{\lambda_{n+1}-\lambda_n\mu}{\lambda_{n+1}+\lambda_n\mu}\Big)^2\|y_n-w_n\|^2.$ (3.13)

    Thus

    $\|v_n-x^*\|\le\|y_n-x^*\|.$ (3.14)

    Also, from the definition of $y_n$, we have

    $\|y_n-x^*\|=\|x_n-x^*+\theta_n(x_n-x_{n-1})\|\le\|x_n-x^*\|+\alpha_n\Big(\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|\Big).$

    Since the sequence $\big\{\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|\big\}$ converges to zero, there exists $K>0$ such that $\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|\le K$ for all $n\ge 1$. Thus

    $\|y_n-x^*\|\le\|x_n-x^*\|+\alpha_nK,$ (3.15)

    it follows from (3.14) that

    $\|v_n-x^*\|\le\|x_n-x^*\|+\alpha_nK.$ (3.16)

    Also,

    $\|u_n-x^*\|=\|(1-\alpha_n)(v_n-x^*)-\alpha_nx^*\|\le(1-\alpha_n)\|v_n-x^*\|+\alpha_n\|x^*\|\le(1-\alpha_n)\|x_n-x^*\|+\alpha_n[\|x^*\|+K].$ (3.17)

    From (C3), since $\Psi:=\sum_{i=1}^\infty\gamma_iT_i$ is $\tau$-demimetric, by Lemma 2.2 the mapping $G:=(1-\zeta)I+\zeta\Psi$ is quasi-nonexpansive, and thus from (3.4) and (3.17) we get

    $\begin{aligned}\|x_{n+1}-x^*\| &\le (1-\beta_n)\|u_n-x^*\|+\beta_n\|Gu_n-x^*\|\\ &\le (1-\beta_n)\|u_n-x^*\|+\beta_n\|u_n-x^*\|=\|u_n-x^*\|\\ &\le (1-\alpha_n)\|x_n-x^*\|+\alpha_n[\|x^*\|+K]\\ &\le \max\{\|x_n-x^*\|,\ \|x^*\|+K\}\le\cdots\le\max\{\|x_1-x^*\|,\ \|x^*\|+K\}.\end{aligned}$

    Therefore, by induction we have

    $\|x_n-x^*\|\le\max\{\|x_1-x^*\|,\ \|x^*\|+K\},\quad\forall n\ge 1.$

    Hence $\{x_n\}$ is bounded. It follows that $\{u_n\}$, $\{v_n\}$, $\{w_n\}$ and $\{y_n\}$ are also bounded.

    Lemma 3.4. Let $\{x_n\}$ be the sequence generated by Algorithm 3.1 under Assumption 3.1. Then, for any $x^*\in\Gamma$, there exists $K_1>0$ such that

    $\|x_{n+1}-x^*\|^2\le(1-\alpha_n)\|x_n-x^*\|^2+\alpha_n\Big(2(1-\alpha_n)\langle x^*,\,x^*-v_n\rangle+\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|K_1+\alpha_n\|x^*\|^2\Big).$

    Proof. Let $x^*\in\Gamma$. Then, with Lemma 2.5, (3.4) and (3.13), we get

    $\begin{aligned}\|x_{n+1}-x^*\|^2 &= \|(1-\beta_n)(u_n-x^*)+\beta_n(Gu_n-x^*)\|^2\\ &= (1-\beta_n)\|u_n-x^*\|^2+\beta_n\|Gu_n-x^*\|^2-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &\le \|u_n-x^*\|^2-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &= \|(1-\alpha_n)(v_n-x^*)-\alpha_nx^*\|^2-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &= (1-\alpha_n)^2\|v_n-x^*\|^2+\alpha_n^2\|x^*\|^2+2\alpha_n(1-\alpha_n)\langle x^*,\,x^*-v_n\rangle-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &\le (1-\alpha_n)\|y_n-x^*\|^2+\alpha_n\big(2(1-\alpha_n)\langle x^*,\,x^*-v_n\rangle+\alpha_n\|x^*\|^2\big)\\ &\quad-(1-\alpha_n)\|y_n-v_n-\rho\eta_nd_n\|^2-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &\quad-(1-\alpha_n)\rho(2-\rho)\Big(\frac{\lambda_{n+1}-\lambda_n\mu}{\lambda_{n+1}+\lambda_n\mu}\Big)^2\|y_n-w_n\|^2.\end{aligned}$ (3.18)

    Since

    $\begin{aligned}\|y_n-x^*\|^2 &= \|x_n-x^*+\theta_n(x_n-x_{n-1})\|^2\\ &= \|x_n-x^*\|^2+2\theta_n\langle x_n-x^*,\,x_n-x_{n-1}\rangle+\theta_n^2\|x_n-x_{n-1}\|^2\\ &\le \|x_n-x^*\|^2+\theta_n\|x_n-x_{n-1}\|\big[2\|x_n-x^*\|+\theta_n\|x_n-x_{n-1}\|\big]\\ &\le \|x_n-x^*\|^2+\theta_n\|x_n-x_{n-1}\|K_1,\end{aligned}$ (3.19)

    for some $K_1>0$. Combining (3.18) and (3.19), we get

    $\begin{aligned}\|x_{n+1}-x^*\|^2 &\le (1-\alpha_n)\|x_n-x^*\|^2+\alpha_n\Big(2(1-\alpha_n)\langle x^*,\,x^*-v_n\rangle+\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|K_1+\alpha_n\|x^*\|^2\Big)\\ &\quad-(1-\alpha_n)\|y_n-v_n-\rho\eta_nd_n\|^2-\beta_n(1-\beta_n)\|Gu_n-u_n\|^2\\ &\quad-(1-\alpha_n)\rho(2-\rho)\Big(\frac{\lambda_{n+1}-\lambda_n\mu}{\lambda_{n+1}+\lambda_n\mu}\Big)^2\|y_n-w_n\|^2\\ &\le (1-\alpha_n)\|x_n-x^*\|^2+\alpha_n\Big(2(1-\alpha_n)\langle x^*,\,x^*-v_n\rangle+\frac{\theta_n}{\alpha_n}\|x_n-x_{n-1}\|K_1+\alpha_n\|x^*\|^2\Big).\end{aligned}$ (3.20)

    The following lemma was obtained by Yotkaew et al. [39].

    Lemma 3.5. (see Lemma 3.6 in [39]) Let $\{w_n\}$ and $\{y_n\}$ be sequences generated by Algorithm 3.1 under conditions (C1)–(C4) of Assumption 3.1. Suppose there exist subsequences $\{w_{n_k}\}$ of $\{w_n\}$ and $\{y_{n_k}\}$ of $\{y_n\}$ such that $\{y_{n_k}\}$ converges weakly to $z\in H$ and $\lim_{k\to\infty}\|w_{n_k}-y_{n_k}\|=0$. Then $z\in VI(C,A)$.

    Based on the analysis presented above and Lemma 3.5, we demonstrate that Algorithm 3.1 converges under assumptions (C1)–(C5).

    Theorem 3.1. Suppose Assumption 3.1 holds. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to a point $z\in\Gamma$.

    Proof. Let $x^*\in\Gamma$. Then, by Lemmas 2.7 and 3.4, we only need to show that

    $\limsup_{k\to\infty}\langle x^*,\,x^*-v_{n_k}\rangle\le 0$

    for every subsequence $\{\|x_{n_k}-x^*\|\}$ of $\{\|x_n-x^*\|\}$ satisfying

    $\liminf_{k\to\infty}\big(\|x_{n_k+1}-x^*\|-\|x_{n_k}-x^*\|\big)\ge 0.$

    Now, let $\{\|x_{n_k}-x^*\|\}$ be such a subsequence. Then

    $\liminf_{k\to\infty}\big(\|x_{n_k+1}-x^*\|^2-\|x_{n_k}-x^*\|^2\big)=\liminf_{k\to\infty}\big[\big(\|x_{n_k+1}-x^*\|+\|x_{n_k}-x^*\|\big)\big(\|x_{n_k+1}-x^*\|-\|x_{n_k}-x^*\|\big)\big]\ge 0.$ (3.21)

    It follows from (3.20), (3.21), the existence of $\lim_{n\to\infty}\lambda_n$ and $\lim_{n\to\infty}\alpha_n=0$ that, by letting

    $\Delta_n:=(1-\alpha_n)\|y_n-v_n-\rho\eta_nd_n\|^2+\beta_n(1-\beta_n)\|Gu_n-u_n\|^2+(1-\alpha_n)\rho(2-\rho)\Big(\frac{\lambda_{n+1}-\lambda_n\mu}{\lambda_{n+1}+\lambda_n\mu}\Big)^2\|y_n-w_n\|^2,$

    then

    $\begin{aligned}\limsup_{k\to\infty}\Delta_{n_k} &\le \limsup_{k\to\infty}\big(\|x_{n_k}-x^*\|^2-\|x_{n_k+1}-x^*\|^2\big)\\ &\quad+\limsup_{k\to\infty}\alpha_{n_k}\Big(2(1-\alpha_{n_k})\langle x^*,\,x^*-v_{n_k}\rangle+\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k}-x_{n_k-1}\|K_1+\alpha_{n_k}\|x^*\|^2\Big)\\ &= -\liminf_{k\to\infty}\big(\|x_{n_k+1}-x^*\|^2-\|x_{n_k}-x^*\|^2\big)\le 0.\end{aligned}$

    Thus

    $\lim_{k\to\infty}\Delta_{n_k}=0,$

    this implies that

    $\lim_{k\to\infty}\|y_{n_k}-w_{n_k}\|=\lim_{k\to\infty}\|y_{n_k}-v_{n_k}-\rho\eta_{n_k}d_{n_k}\|=\lim_{k\to\infty}\|Gu_{n_k}-u_{n_k}\|=0.$

    Also, by the definition of $y_n$, we get that

    $\|y_{n_k}-x_{n_k}\|=\theta_{n_k}\|x_{n_k}-x_{n_k-1}\|=\alpha_{n_k}\Big(\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k}-x_{n_k-1}\|\Big)\to 0$ (3.22)

    as $k\to\infty$; it follows from (3.22) that

    $\|w_{n_k}-x_{n_k}\|\le\|w_{n_k}-y_{n_k}\|+\|y_{n_k}-x_{n_k}\|\to 0$ (3.23)

    as $k\to\infty$. Furthermore, we know that

    $\|y_n-v_n\|\le\|y_n-v_n-\rho\eta_nd_n\|+\rho\eta_n\|d_n\|=\|y_n-v_n-\rho\eta_nd_n\|+\rho\frac{\langle y_n-w_n,\,d_n\rangle}{\|d_n\|}\le\|y_n-v_n-\rho\eta_nd_n\|+\rho\|y_n-w_n\|,$

    thus

    $\|y_{n_k}-v_{n_k}\|\le\|y_{n_k}-v_{n_k}-\rho\eta_{n_k}d_{n_k}\|+\rho\|y_{n_k}-w_{n_k}\|,$

    and with this and the limits above, we get

    $\lim_{k\to\infty}\|y_{n_k}-v_{n_k}\|=0.$ (3.24)

    From the definition of $u_n$ and (C4), we have

    $\|u_{n_k}-v_{n_k}\|=\alpha_{n_k}\|v_{n_k}\|\to 0\quad\text{as}\quad k\to\infty.$ (3.25)

    Now, combining (3.22) and (3.24), we get

    $\|x_{n_k}-v_{n_k}\|\le\|x_{n_k}-y_{n_k}\|+\|y_{n_k}-v_{n_k}\|\to 0$ (3.26)

    as $k\to\infty$; also, with (3.25) and (3.26), we get

    $\|x_{n_k}-u_{n_k}\|\le\|x_{n_k}-v_{n_k}\|+\|v_{n_k}-u_{n_k}\|\to 0$ (3.27)

    as $k\to\infty$. Moreover, from (C3) and the definition of $G$, we have

    $\lim_{k\to\infty}\|\Psi u_{n_k}-u_{n_k}\|=\frac{1}{\zeta}\lim_{k\to\infty}\|Gu_{n_k}-u_{n_k}\|=0.$ (3.28)

    Furthermore, since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_s}}\}$ of $\{x_{n_k}\}$ that converges weakly to some $z\in H$ as $s\to\infty$. Then, from (3.27), $\{u_{n_{k_s}}\}$ also converges weakly to $z\in H$, and since, by (C3), $\Psi$ is demimetric and demiclosed at zero, (3.28) gives $z\in F(\Psi):=F(\sum_{i=1}^\infty\gamma_iT_i)$. Thus, by Lemma 2.3, we conclude that $z\in\bigcap_{i=1}^\infty F(T_i)$. On the other hand, from (3.22), $\{y_{n_{k_s}}\}$ also converges weakly to $z\in H$, and since $\lim_{k\to\infty}\|w_{n_k}-y_{n_k}\|=0$, Lemma 3.5 yields $z\in VI(C,A)$. Therefore, $z\in\Gamma$. Finally, we show that $\{x_n\}$ converges strongly to $z\in\Gamma$. Using Lemma 3.4 with $x^*=z$, we have

    $\|x_{n_k+1}-z\|^2\le(1-\alpha_{n_k})\|x_{n_k}-z\|^2+\alpha_{n_k}\Big(2(1-\alpha_{n_k})\langle z,\,z-v_{n_k}\rangle+\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k}-x_{n_k-1}\|K_1+\alpha_{n_k}\|z\|^2\Big).$ (3.29)

    Since $\{x_{n_k}\}$ converges weakly to $z$ as $k\to\infty$, we have $\lim_{k\to\infty}\langle z,\,z-x_{n_k}\rangle=0$, and hence, by (3.26), $\lim_{k\to\infty}\langle z,\,z-v_{n_k}\rangle=0$, implying that $2(1-\alpha_{n_k})\langle z,\,z-v_{n_k}\rangle+\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k}-x_{n_k-1}\|K_1+\alpha_{n_k}\|z\|^2\to 0$ as $k\to\infty$. It then follows from (3.29) and Lemma 2.7 that $\lim_{n\to\infty}\|x_n-z\|=0$. Hence $x_n\to z\in\Gamma$. This completes the proof.

    In this section, we provide computational experiments in support of the convergence analysis of the proposed algorithm. We also compare our method with existing methods in the literature.

    Example 4.1. Let $H=L^2([0,1])$ with norm

    $\|x\|:=\Big(\int_0^1|x(t)|^2\,dt\Big)^{1/2},\quad\text{for all }x\in L^2([0,1]),$

    and inner product

    $\langle x,y\rangle:=\int_0^1x(t)y(t)\,dt,\quad\text{for all }x,y\in L^2([0,1]).$

    Let the operator $A:H\to H$ be defined by $A(x(t))=\frac{x(t)}{1+e^{x(t)}}$. Then $A$ is quasimonotone (see [1]), and we set $S$ and $G$ to the identity mapping on $H$ in (1.6) and Algorithm 3.1, respectively. We choose $\lambda_1=3.1$, $\theta=0.5$, $\rho=0.5$, $\mu=0.5$, $\alpha_n=\frac{1}{n+1}$ and $\mu_n=\frac{1}{n^{2.1}}$. We terminate the iteration when $E_n=\|x_{n+1}-x_n\|\le\epsilon$, where $\epsilon=10^{-3}$. The numerical results are listed in Table 1. We also illustrate the strong convergence of the proposed Algorithm 3.1 in comparison with (1.6), called IMSEM, and its unaccelerated version, MSEM, in Figure 1.

    Table 1.  Numerical results for Example 4.1.

                      Algorithm 3.1    IMSEM     MSEM
    Case A   Iter.            12          18       25
             Sec.         0.0076      0.0163   0.0732
    Case B   Iter.            12          18       25
             Sec.         0.0211      0.0531   0.0835
    Case C   Iter.            12          21       25
             Sec.         0.0164      0.0323   0.0683
    Case D   Iter.            13          18       25
             Sec.         0.0212      0.0490   0.1202

    Figure 1.  The graph of En against the number of iterations for Example 4.1; Top Left: Case A; Top Right: Case B; Bottom Left: Case C; Bottom Right: Case D.

    (Case A) $x_0=e^{2t}+1$ and $x_1=t^3+\cos t$;

    (Case B) $x_0=\sin(6t)+\cos(5t)$ and $x_1=11t^2+2t+1$;

    (Case C) $x_0=t^4+3t+9$ and $x_1=5t^4+1$;

    (Case D) $x_0=\ln(2t+1)+5t^2$ and $x_1=t^8+1$.

    The paper presents a modified inertial Mann-type method that combines the subgradient extragradient method with the projection contraction method to solve the Lipschitz continuous quasimonotone variational inequality problem and fixed point problems in real Hilbert spaces. Under certain mild conditions imposed on the parameters, we have proven the strong convergence of the algorithm without requiring prior knowledge of the Lipschitz constant of the operator. Furthermore, we have demonstrated the efficiency of the proposed algorithm by illustrating its convergence and comparing it with previously known algorithms.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, for supporting this research work.

    The authors declare that they have no competing interests.



    [1] T. Alakoya, O. Mewomo, Y. Shehu, Strong convergence results for quasimonotone variational inequalities, Math. Meth. Oper. Res., 95 (2022), 249–279. http://dx.doi.org/10.1007/s00186-022-00780-2 doi: 10.1007/s00186-022-00780-2
    [2] C. Baiocchi, A. Capelo, Variational and quasivariational inequalities: applications to free boundary problems, New York: Wiley, 1984.
    [3] F. Facchinei, J. Pang, Finite-dimensional variational inequalities and complementarity problems, New York: Springer, 2003. http://dx.doi.org/10.1007/b97543
    [4] R. Boţ, E. Csetnek, A. Heinrich, C. Hendrich, On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems, Math. Program., 150 (2015), 251–279. http://dx.doi.org/10.1007/s10107-014-0766-0 doi: 10.1007/s10107-014-0766-0
    [5] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. http://dx.doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [6] K. Goebel, W. Kirk, Topics in metric fixed point theory, Cambridge: Cambridge University Press, 1990. http://dx.doi.org/10.1017/CBO9780511526152
    [7] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, New York: Marcel Dekker, 1983.
    [8] N. Hadjisavvas, S. Schaible, Quasimonotone variational inequalities in Banach spaces, J. Optim. Theory Appl., 90 (1996), 95–111. http://dx.doi.org/10.1007/BF02192248 doi: 10.1007/BF02192248
    [9] B. He, A class of projection and contraction methods for monotone variational inequalities, Appl. Math. Optim., 35 (1997), 69–76. http://dx.doi.org/10.1007/BF02683320 doi: 10.1007/BF02683320
    [10] D. Kinderlehrer, G. Stampacchia, An introduction to variational inequalities and their applications, Philadelphia: Society for Industrial and Applied Mathematics, 1980.
    [11] P. Kocourek, W. Takahashi, J. Yao, Fixed points and weak convergence theorems for generalized hybrid mappings in Hilbert spaces, Taiwanese J. Math., 14 (2010), 2497–2511. http://dx.doi.org/10.11650/twjm/1500406086 doi: 10.11650/twjm/1500406086
    [12] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [13] I. Konnov, Combined relaxation methods for variational inequalities, Berlin: Springer, 2001. http://dx.doi.org/10.1007/978-3-642-56886-2
    [14] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Computational Mathematics and Mathematical Physics, 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [15] S. Saejung, P. Yotkaew, Approximation of zeros of inverse strongly monotone operators in Banach spaces, Nonlinear Anal.-Theor., 75 (2012), 742–750. http://dx.doi.org/10.1016/j.na.2011.09.005 doi: 10.1016/j.na.2011.09.005
    [16] Y. Shehu, A. Gibali, New inertial relaxed method for solving split feasibilities, Optim. Lett., 15 (2021), 2109–2126. http://dx.doi.org/10.1007/s11590-020-01603-1 doi: 10.1007/s11590-020-01603-1
    [17] Y. Shehu, O. Iyiola, Projection methods with alternating inertial steps for variational inequalities: weak and linear convergence, Appl. Numer. Math., 157 (2020), 315–337. http://dx.doi.org/10.1016/j.apnum.2020.06.009 doi: 10.1016/j.apnum.2020.06.009
    [18] M. Solodov, P. Tseng, Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim., 34 (1996), 1814–1830. http://dx.doi.org/10.1137/S0363012994268655 doi: 10.1137/S0363012994268655
    [19] Y. Song, Iterative methods for fixed point problems and generalized split feasibility problems in Banach spaces, J. Nonlinear Sci. Appl., 11 (2018), 198–217. http://dx.doi.org/10.22436/jnsa.011.02.03 doi: 10.22436/jnsa.011.02.03
    [20] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 258 (1964), 4413–4416.
    [21] D. Sun, A class of iterative methods for solving nonlinear projection equations, J. Optim. Theory Appl., 91 (1996), 123–140. http://dx.doi.org/10.1007/BF02192286 doi: 10.1007/BF02192286
    [22] W. Takahashi, A general iterative method for split common fixed point problems in Hilbert spaces and applications, Pure Appl. Funct. Anal., 3 (2018), 349–369.
    [23] W. Takahashi, C. Wen, J. Yao, The shrinking projection method for a finite family of demimetric mapping with variational inequality problems in a Hilbert space, Fixed Point Theor., 19 (2018), 407–420. http://dx.doi.org/10.24193/fpt-ro.2018.1.32 doi: 10.24193/fpt-ro.2018.1.32
    [24] W. Takahashi, Introduction to nonlinear and convex analysis, Yokohama: Yokohama Publishers, 2009.
    [25] B. Tan, J. Fan, S. Li, Self-adaptive inertial extragradient algorithms for solving variational inequality problems, Comp. Appl. Math., 40 (2021), 19. http://dx.doi.org/10.1007/s40314-020-01393-3 doi: 10.1007/s40314-020-01393-3
    [26] B. Tan, X. Qin, J. Yao, Strong convergence of inertial projection and contraction methods for pseudomonotone variational inequalities with applications to optimal control problems, J. Glob. Optim., 82 (2022), 523–557. http://dx.doi.org/10.1007/s10898-021-01095-y doi: 10.1007/s10898-021-01095-y
    [27] B. Tan, S. Li, S. Cho, Inertial projection and contraction methods for pseudomonotone variational inequalities with non-Lipschitz operators and applications, Appl. Anal., 102 (2023), 1199–1221. http://dx.doi.org/10.1080/00036811.2021.1979219 doi: 10.1080/00036811.2021.1979219
    [28] B. Tan, S. Li, Adaptive inertial subgradient extragradient methods for finding minimum-norm solutions of pseudomonotone variational inequalities, J. Ind. Manag. Optim., 19 (2023), 7640–7659. http://dx.doi.org/10.3934/jimo.2023012 doi: 10.3934/jimo.2023012
    [29] B. Tan, L. Liu, X. Qin, Self adaptive inertial extragradient algorithms for solving bilevel pseudomonotone variational inequality problems, Japan J. Indust. Appl. Math., 38 (2021), 519–543. http://dx.doi.org/10.1007/s13160-020-00450-y doi: 10.1007/s13160-020-00450-y
    [30] B. Tan, X. Qin, J. Yao, Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems, Numer. Algor., 88 (2021), 1757–1786. http://dx.doi.org/10.1007/s11075-021-01093-x doi: 10.1007/s11075-021-01093-x
    [31] D. Thong, D. Hieu, A strong convergence of modified subgradient extragradient method for solving bilevel pseudomonotone variational inequality problems, Optimization, 69 (2020), 1313–1334. http://dx.doi.org/10.1080/02331934.2019.1686503 doi: 10.1080/02331934.2019.1686503
    [32] D. Thong, X. Li, Q. Dong, Y. Cho, T. Rassias, A projection and contraction method with adaptive step sizes for solving bilevel pseudomonotone variational inequality problems, Optimization, 71 (2022), 2073–2076. http://dx.doi.org/10.1080/02331934.2020.1849206 doi: 10.1080/02331934.2020.1849206
    [33] D. Thong, N. Vinh, Y. Cho, New strong convergence theorem of the inertial projection and contraction method for variational inequality problems, Numer. Algor., 84 (2020), 285–305. http://dx.doi.org/10.1007/s11075-019-00755-1 doi: 10.1007/s11075-019-00755-1
    [34] D. Thong, D. Hieu, Modified subgradient extragradient algorithms for variational inequalities problems and fixed point algorithms, Optimization, 67 (2018), 83–102. http://dx.doi.org/10.1080/02331934.2017.1377199 doi: 10.1080/02331934.2017.1377199
    [35] M. Tian, B. Jiang, Inertial hybrid algorithm for variational inequality problems in Hilbert spaces, J. Inequal. Appl., 2020 (2020), 12. http://dx.doi.org/10.1186/s13660-020-2286-1 doi: 10.1186/s13660-020-2286-1
    [36] G. Ugwunnadi, M. Harbau, L. Haruna, V. Darvish, J. Yao, Inertial extrapolation method for solving split common fixed point problem and zeros of monotone operators in Hilbert spaces, J. Nonlinear Convex Anal., 23 (2022), 769–791.
    [37] N. Xiu, J. Zhang, Some recent advances in projection-type methods for variational inequalities, J. Comput. Appl. Math., 152 (2003), 559–585. http://dx.doi.org/10.1016/S0377-0427(02)00730-6 doi: 10.1016/S0377-0427(02)00730-6
    [38] J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algor., 80 (2019), 741–752. http://dx.doi.org/10.1007/s11075-018-0504-4 doi: 10.1007/s11075-018-0504-4
    [39] P. Yotkaew, H. Ur Rehman, B. Panyanak, N. Pakkaranang, Halpern subgradient extragradient algorithm for solving quasimonotone variational inequality problems, Carpathian J. Math., 38 (2022), 249–262. http://dx.doi.org/10.37193/CJM.2022.01.20 doi: 10.37193/CJM.2022.01.20
    [40] L. Zheng, A double projection algorithm for quasimonotone variational inequalities in Banach spaces, J. Inequal. Appl., 2018 (2018), 256. http://dx.doi.org/10.1186/s13660-018-1852-2 doi: 10.1186/s13660-018-1852-2
    [41] Z. Zhou, B. Tan, S. Cho, Alternated inertial subgradient extragradient methods for solving variational inequalities, J. Nonlinear Convex Anal., 23 (2022), 2593–2604.
  • This article has been cited by:

    1. Austine Efut Ofem, Jacob Ashiwere Abuchu, Godwin Chidi Ugwunnadi, Hossam A. Nabwey, Abubakar Adamu, Ojen Kumar Narain, Double inertial steps extragradient-type methods for solving optimal control and image restoration problems, AIMS Mathematics, 9 (2024), 12870. http://dx.doi.org/10.3934/math.2024629
    2. Uzoamaka Azuka Ezeafulukwe, Besheng George Akuchu, Godwin Chidi Ugwunnadi, Maggie Aphane, A method with double inertial type and golden rule line search for solving variational inequalities, Mathematics, 12 (2024), 2203. http://dx.doi.org/10.3390/math12142203
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)