1. Introduction
Let H be a real Hilbert space with inner product ⟨⋅,⋅⟩ and induced norm ||⋅||. Let C be a nonempty, closed and convex subset of H and A:H→H a nonlinear operator. The classical variational inequality problem (VIP), first introduced by Stampacchia [20], is defined as: find x∗∈C such that
⟨Ax∗, x − x∗⟩ ≥ 0 for all x∈C. (1.1)
We denote the solution set of the VIP (1.1) by VI(C,A). Several problems arising in different areas of pure and applied science can be studied in the general and unified framework of variational inequality problems. In view of this, the theory of variational inequalities has become an important tool in physics, control theory, engineering, economics, management science and mathematical programming (see [2,3,10,13,18]). One of the most important and difficult problems is developing an efficient method for solving the variational inequality problem. Over the years, several iterative methods have been proposed for solving variational inequality problems (see [4,5,9,12,21,25,37]). The simplest classical iterative method for solving the VIP in a real Hilbert space is the gradient-projection method, defined as follows: starting from an initial point x0∈C, update xn+1 by the formula
xn+1 = PC(xn − λAxn), n ≥ 0, (1.2)
where λ>0 is a suitable stepsize and PC is the metric projection onto the closed and convex subset C of H. This method is based on the fact that a point x∗∈C solves the VIP (1.1) if and only if x∗=PC(x∗−λAx∗). Although the gradient-projection method (1.2) is easy to implement, since it requires only one function evaluation and one projection onto the closed convex set C per iteration, its convergence requires a rather strong hypothesis, namely that the operator A is strongly monotone (or inverse strongly monotone; see [37]). To avoid the strong monotonicity hypothesis on A, Korpelevich [12] in 1976 proposed the extragradient method with double projections in Euclidean space, as follows:
yn = PC(xn − λAxn),
xn+1 = PC(xn − λAyn), (1.3)
where A:C→Rn is monotone and L-Lipschitz continuous with L>0 and λ∈(0,1/L). If the solution set VI(C,A) in (1.3) is nonempty, then the sequence {xn} generated by Algorithm (1.3) converges weakly to a point in VI(C,A). Even though the conditions imposed on A in the extragradient method (1.3) are weaker than those of the gradient-projection method (1.2), Algorithm (1.3) must calculate two projections onto the closed convex set C per iteration, which may affect its efficiency when C is a general closed convex set. Several methods have been proposed to overcome this drawback. In 2011, Censor et al. [5] introduced the subgradient extragradient method in a real Hilbert space H, in which the second projection onto C in (1.3) is replaced by a projection onto a specific, easily constructed half-space. Their algorithm is defined as
yn = PC(xn − λAxn),
Tn = {x∈H : ⟨xn − λAxn − yn, x − yn⟩ ≤ 0},
xn+1 = PTn(xn − λAyn), (1.4)
where A is monotone and L-Lipschitz continuous with L>0 and the stepsize λ∈(0,1/L) is fixed. Also, He [9] and Sun [21] independently studied the projection and contraction method (PCM), proposed as
yn = PC(xn − λAxn),
d(xn, yn) = xn − yn − λ(Axn − Ayn),
xn+1 = xn − γηn d(xn, yn), (1.5)
where γ∈(0,2), λ∈(0,1/L) and ηn := ⟨xn − yn, d(xn,yn)⟩/||d(xn,yn)||². He [9] established that the sequence {xn} generated by (1.5) converges weakly to a solution of the VIP (1.1). Since the PCM requires only one projection onto the feasible set C, it reduces the computational cost per iteration. Several researchers have improved the PCM in different ways; see, for instance, [33,34,35]. The major drawback of the projection and contraction method (1.5) is that the stepsize λ requires knowledge of an upper bound on the Lipschitz constant L. A large value of L can force very small stepsizes λ, which may lead to slow convergence of Algorithm (1.5). To overcome this difficulty, a self-adaptive method that does not require prior knowledge of the Lipschitz constant of the mapping is needed.
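To make the projection and contraction mechanism concrete, the following sketch applies iteration (1.5) to a small affine VIP on the box C=[0,1]²; the operator A, the feasible set, and all parameter values are illustrative choices of ours, not taken from the cited papers.

```python
# Sketch of the projection and contraction method (1.5) for the VIP with
# A(x) = M x + q on the box C = [0,1]^2.  M, q, lam and gamma below are
# illustrative choices (A is strongly monotone, Lipschitz with L = sqrt(5)).

def proj_box(x):                      # P_C for C = [0,1]^2
    return [min(1.0, max(0.0, t)) for t in x]

def A(x):                             # monotone, Lipschitz affine operator
    M = [[2.0, 1.0], [-1.0, 2.0]]
    q = [-1.0, -1.0]
    return [M[i][0]*x[0] + M[i][1]*x[1] + q[i] for i in range(2)]

def pcm_step(x, lam=0.2, gamma=1.5):  # lam in (0, 1/L), gamma in (0, 2)
    Ax = A(x)
    y  = proj_box([x[i] - lam*Ax[i] for i in range(2)])
    Ay = A(y)
    d  = [x[i] - y[i] - lam*(Ax[i] - Ay[i]) for i in range(2)]
    dd = sum(t*t for t in d)
    if dd == 0.0:                     # d = 0 means y already solves the VIP
        return y
    eta = sum((x[i] - y[i])*d[i] for i in range(2)) / dd
    return [x[i] - gamma*eta*d[i] for i in range(2)]

x = [0.9, 0.1]
for _ in range(200):
    x = pcm_step(x)
```

For this choice the unique solution is the interior zero of A, namely (0.2, 0.6), and the iterates approach it; note that only one projection onto C is evaluated per iteration.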
On the other hand, to speed up the convergence of algorithms, Polyak [14] studied the heavy ball method, an inertial extrapolation process for minimizing a smooth convex function. Since then, many authors have incorporated this technique into different methods for solving VIPs (see [16,17,27,28,41] for more details). In 2021, Tan et al. [30], building on the modified extragradient projection and contraction method proposed by Thong and Hieu [31], studied an inertial modified extragradient projection and contraction method combined with the hybrid steepest descent method, using an Armijo-type line search to update the stepsize in each iteration, as follows:
where λn is chosen as the largest λ∈{δ, δξ, δξ2, ⋯} satisfying the line search criterion, δ,ξ∈(0,1), A is pseudomonotone and Lipschitz continuous, S is α-strongly monotone and Lipschitz continuous, and {ϕn} is a control sequence in (0,1) satisfying suitable conditions. Under suitable conditions on the parameters, they proved strong convergence of the sequence generated by (1.6). We note that Algorithm (1.6), as proposed by Tan et al. [30], uses an Armijo-type line search criterion to update the stepsize in each iteration. Such a line search requires many additional computations and can reduce the computational efficiency of the method. Recently, many methods with a simple stepsize rule have been proposed for solving the VIP (see [29,32,36]).
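To illustrate the overhead of an Armijo-type rule of this kind, the sketch below backtracks over {δ, δξ, δξ², …} until a trial criterion of the form λ||Ay − Aw|| ≤ μ||y − w||, with trial point w = PC(y − λAy), is satisfied. The criterion, the operator A, and all parameter values here are illustrative assumptions of ours, not the exact rule of Tan et al. [30].

```python
import math

# Backtracking Armijo-type stepsize: try lam = delta, delta*xi, delta*xi^2, ...
# and accept the first (largest) lam passing the trial test.  Every rejected
# trial costs an extra evaluation of A and a projection -- the overhead that
# simple self-adaptive stepsize rules avoid.
def armijo_stepsize(A, proj, y, delta=1.0, xi=0.5, mu=0.9, max_backtracks=50):
    Ay = A(y)
    lam, w = delta, list(y)
    for _ in range(max_backtracks):
        w = proj([y[i] - lam*Ay[i] for i in range(len(y))])   # trial point
        Aw = A(w)                       # extra operator evaluation per trial
        lhs = lam*math.sqrt(sum((a - b)**2 for a, b in zip(Ay, Aw)))
        rhs = mu*math.sqrt(sum((a - b)**2 for a, b in zip(y, w)))
        if lhs <= rhs:
            break                       # largest power of xi passing the test
        lam *= xi
    return lam, w

A = lambda x: [10.0*t for t in x]       # 10-Lipschitz linear operator
proj = lambda v: v                      # C = H here, so P_C is the identity
lam, w = armijo_stepsize(A, proj, [1.0, 1.0])
```

For this 10-Lipschitz operator the test forces λ ≤ μ/L = 0.09, so the backtracking stops at λ = 0.0625 after five trials, each costing an operator evaluation.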
Inspired and motivated by the results mentioned above, in this paper we introduce an inertial modified extragradient and contraction projection method with a self-adaptive stepsize for finding a common solution of the quasimonotone variational inequality problem and a common fixed point of an infinite family of demimetric mappings in real Hilbert spaces. In this regard, we highlight the following contributions of our proposed method:
(a) the operator A involved is quasimonotone instead of monotone or pseudomonotone;
(b) the proposed algorithm embeds inertial terms, which help increase the convergence speed of the iterative sequence;
(c) the method uses a new non-monotonic step size so that it can work without knowing the Lipschitz constant of the mapping;
(d) the projection onto the feasible set needs to be evaluated only once in each iteration;
(e) we establish that the sequence generated by our proposed method converges strongly to a common fixed point of the infinite family of demimetric mappings that is also a solution of the variational inequality problem for quasimonotone operators;
(f) we demonstrate the effectiveness of our proposed method by providing numerical examples and comparing it with related methods in the literature.
2. Preliminaries
Throughout this section, the symbols → and ⇀ denote strong and weak convergence, respectively.
Let C be a closed and convex subset of a real Hilbert space H. The metric projection from H onto C is the mapping PC:H→C that assigns to each x∈H the unique point z=PC(x)∈C such that
||x − z|| = inf{||x − y|| : y∈C}.
From this definition, it is easy to show that PC has the following characteristic properties; see [7] for more details.
Lemma 2.1. (Goebel and Reich [7]) Let x∈H and z∈C. Then we have
(ⅰ) z=PC(x) if and only if the following relation holds:
⟨x − z, y − z⟩ ≤ 0 for all y∈C;
(ⅱ) For all x,y∈H, we have
⟨PC(x) − PC(y), x − y⟩ ≥ ||PC(x) − PC(y)||²;
(ⅲ) For x∈H and y∈C,
||y − PC(x)||² + ||x − PC(x)||² ≤ ||x − y||².
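As a quick numerical sanity check of the characterization in Lemma 2.1(ⅰ), for the box C=[0,1]² (an illustrative feasible set) the projection z=PC(x) satisfies ⟨x − z, y − z⟩ ≤ 0 for every y∈C; the snippet below verifies this on a grid of points of C.

```python
# Check the variational characterization of P_C on C = [0,1]^2:
# z = P_C(x) satisfies <x - z, y - z> <= 0 for all y in C.
# The point x and the grid over C are illustrative choices.

def proj_box(x):                        # componentwise clipping is P_C here
    return [min(1.0, max(0.0, t)) for t in x]

x = [1.7, -0.4]
z = proj_box(x)                         # projects onto the corner (1, 0)

violations, steps = 0, 11
for i in range(steps):
    for j in range(steps):
        y = [i/(steps - 1), j/(steps - 1)]   # y ranges over a grid in C
        inner = sum((x[k] - z[k])*(y[k] - z[k]) for k in range(2))
        if inner > 1e-12:
            violations += 1
```

No grid point violates the inequality, in agreement with the lemma.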
We also need the following nonlinear operators, which are introduced below.
Definition 2.1. Let the fixed point set of a mapping T:C→H be denoted by F(T):={x∈C:Tx=x}. The mapping T is called
(1) L-Lipschitz continuous with L>0 if
||Tx − Ty|| ≤ L||x − y|| for all x,y∈C.
If L=1, then T is called a nonexpansive mapping.
(2) quasi-nonexpansive if ‖Tx−y‖≤‖x−y‖ for all x∈C and y∈F(T);
(3) (α,β)-generalized hybrid [11] if there exist α,β∈R such that
α||Tx − Ty||² + (1−α)||x − Ty||² ≤ β||Tx − y||² + (1−β)||x − y||² for all x,y∈C;
(4) τ-demicontractive if F(T)≠∅ and there exists τ∈[0,1) such that
||Tx − y||² ≤ ||x − y||² + τ||x − Tx||² for all x∈C and y∈F(T);
(5) τ-demimetric [22] if F(T)≠∅ and there exists τ∈(−∞,1) such that for any x∈C and y∈F(T), we have
⟨x − y, x − Tx⟩ ≥ ((1−τ)/2)||x − Tx||².
Observe that a (1,0)-generalized hybrid mapping is nonexpansive, and every generalized hybrid mapping with a nonempty fixed point set is quasi-nonexpansive. Also, the class of τ-demicontractive mappings contains the nonexpansive and quasi-nonexpansive mappings. The class of τ-demimetric mappings includes the τ-demicontractive and generalized hybrid mappings as special cases.
The following result is important and crucial in the proof of our main result.
Lemma 2.2. ([23, Lemma 2.2]) Let H be a Hilbert space and let C be a nonempty, closed and convex subset of H. Let k∈(−∞,1) and let T be a k-demimetric mapping of C into H such that F(T)≠∅. Let λ be a real number with 0<λ≤1−k and define S=(1−λ)I+λT. Then
(ⅰ) F(T)=F(S),
(ⅱ) F(T) is closed and convex,
(ⅲ) S is a quasi-nonexpansive mapping of C into H.
We also apply the following results to establish our main result.
Lemma 2.3. ([19, Lemma 3.3]) Let H be a Hilbert space and C be a nonempty, closed and convex subset of H. Assume that {Ti}∞i=1:C→H is an infinite family of τi-demimetric mappings with sup{τi:i∈N}<1 such that ⋂∞i=1F(Ti)≠∅. Assume that {ηi}∞i=1 is a positive sequence such that ∑∞i=1ηi=1. Then ∑∞i=1ηiTi:C→H is a τ-demimetric mapping with τ=sup{τi:i∈N} and F(∑∞i=1ηiTi)=⋂∞i=1F(Ti).
The so-called demiclosedness principle plays an important role in our argument.
Lemma 2.4. ([6]) Let T:C→H be a nonexpansive mapping. Then T is demiclosed on C, in the sense that if {xn} converges weakly to x∈C and {xn−Txn} converges strongly to 0, then x∈F(T).
Lemma 2.5. ([24]) Let H be a real Hilbert space. Then, for all x,y∈H and α∈R, the following hold.
(1) ‖x+y‖² ≤ ‖x‖² + 2⟨y, x+y⟩;
(2) ‖αx+(1−α)y‖² = α‖x‖² + (1−α)‖y‖² − α(1−α)‖x−y‖².
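These identities are elementary but used repeatedly in the convergence analysis; the following snippet double-checks them numerically in R³ (a finite-dimensional Hilbert space), with arbitrary test vectors of ours.

```python
# Numerical check of the two inequalities/identities of Lemma 2.5 in R^3.
# The vectors x, y and the scalar alpha are arbitrary illustrative choices.

def dot(u, v):  return sum(a*b for a, b in zip(u, v))
def nrm2(u):    return dot(u, u)            # squared norm ||u||^2

x, y, alpha = [1.0, -2.0, 0.5], [3.0, 0.25, -1.0], 0.3

# (1)  ||x + y||^2 <= ||x||^2 + 2<y, x + y>
#      (the right side exceeds the left by exactly ||y||^2 >= 0)
xy = [a + b for a, b in zip(x, y)]
lhs1, rhs1 = nrm2(xy), nrm2(x) + 2*dot(y, xy)

# (2)  ||ax + (1-a)y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x - y||^2
conv = [alpha*a + (1 - alpha)*b for a, b in zip(x, y)]
diff = [a - b for a, b in zip(x, y)]
lhs2 = nrm2(conv)
rhs2 = alpha*nrm2(x) + (1 - alpha)*nrm2(y) - alpha*(1 - alpha)*nrm2(diff)
```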
Next, we present some concepts of monotonicity of an operator.
Definition 2.2. Let A:H→H be a mapping and let x,y∈H. Then, A is said to be
(a) η-strongly monotone if there exists η>0 such that
⟨Ax − Ay, x − y⟩ ≥ η||x − y||²;
(b) monotone if
⟨Ax − Ay, x − y⟩ ≥ 0;
(c) pseudomonotone if
⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0;
(d) quasimonotone if
⟨Ay, x − y⟩ > 0 implies ⟨Ax, x − y⟩ ≥ 0.
It is easy to see that (a)⟹(b)⟹(c)⟹(d), but the converses do not hold in general.
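To see that the converses indeed fail, consider A(x)=x² on H=R: it is quasimonotone but neither monotone nor pseudomonotone. The snippet below verifies the defining inequalities at illustrative witness points and on a grid.

```python
# A(x) = x^2 on R is quasimonotone but neither monotone nor pseudomonotone.
# The witness points and the grid below are illustrative choices.

A = lambda t: t*t

# not monotone:  (A(x) - A(y))*(x - y) < 0 for x = -1, y = 0
not_monotone = (A(-1.0) - A(0.0)) * (-1.0 - 0.0) < 0

# not pseudomonotone:  A(y)*(x - y) >= 0 but A(x)*(x - y) < 0 for y = 0, x = -1
not_pseudo = A(0.0) * (-1.0 - 0.0) >= 0 and A(-1.0) * (-1.0 - 0.0) < 0

# quasimonotone:  whenever A(y)*(x - y) > 0 we must have A(x)*(x - y) >= 0.
# Since A(y) >= 0, the premise forces x > y, and then A(x)*(x - y) >= 0;
# here we confirm this on a grid over [-2, 2] x [-2, 2].
quasi_ok = all(
    A(x)*(x - y) >= 0
    for x in [i/10 - 2 for i in range(41)]
    for y in [j/10 - 2 for j in range(41)]
    if A(y)*(x - y) > 0
)
```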
Lemma 2.6. ([8,40]) Let C be a nonempty, closed and convex subset of a Hilbert space H and let F:H→H be an L-Lipschitz continuous and quasimonotone operator. Let y∈C. If, for some x∗∈C, we have ⟨F(y), x∗−y⟩ ≥ 0, then at least one of the following must hold:
The following result is useful when proving strong convergence of our iterative sequence.
Lemma 2.7. ([15]) Let {an} be a sequence of nonnegative real numbers, {αn} be a sequence of real numbers in (0,1) with
∑∞n=1 αn = ∞,
and {bn} be a sequence of real numbers. Assume that
an+1 ≤ (1−αn)an + αnbn for all n ≥ 1.
If lim supk→∞ bnk ≤ 0 for every subsequence {ank} of {an} satisfying
lim infk→∞ (ank+1 − ank) ≥ 0,
then limn→∞an=0.
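The mechanism behind Lemma 2.7 can be observed on a toy recursion of the form an+1 = (1−αn)an + αnbn with ∑αn = ∞ and bn → 0; the concrete choices αn = 1/(n+1) and bn = 1/n below are illustrative, not from [15].

```python
# Toy illustration of the recursion covered by Lemma 2.7:
# a_{n+1} = (1 - alpha_n) a_n + alpha_n b_n is driven to 0 when
# sum alpha_n diverges and b_n -> 0, regardless of the starting value.

a = 5.0                        # arbitrary nonnegative starting value a_1
for n in range(1, 200001):
    alpha = 1.0/(n + 1)        # alpha_n in (0,1), sum alpha_n = infinity
    b = 1.0/n                  # b_n -> 0, so limsup b_n <= 0
    a = (1 - alpha)*a + alpha*b
```

For these choices one can check that n·an grows only like a harmonic sum, so an ~ (log n)/n → 0.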
3. Main results
We begin this section with the following assumptions for the convergence analysis of our method:
Assumption 3.1. Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Suppose the following conditions are satisfied:
(C1) A:H→H is quasimonotone and L-Lipschitz continuous with L>0.
(C2) A:H→H is sequentially weakly continuous, i.e., for each sequence {xn}⊂C converging weakly to x, the sequence {A(xn)} converges weakly to A(x).
(C3) {Ti}∞i=1:H→H is an infinite family of τi-demimetric mappings, demiclosed at zero, with τi∈(−∞,1) for each i≥1 and τ=sup{τi:i≥1}<1. Letting Ψ:=∑∞i=1γiTi, where ∑∞i=1γi=1, Lemma 2.3 implies that Ψ is a τ-demimetric mapping; we set G:=(1−ζ)I+ζΨ with ζ∈(0,1−τ].
(C4) {μn} is a positive sequence with μn=o(αn), {βn}⊂(a,1−αn) for some a>0, where {αn}⊂(0,1) satisfies limn→∞αn=0 and ∑∞n=1αn=∞.
(C5) The solution set Γ := VI(C,A)∩F(Ψ) is assumed to be nonempty.
Now, using the inertial extrapolation term, we introduce a modified inertial Mann-type subgradient extragradient method with projection and contraction iterative techniques for solving variational inequality and common fixed point problems:
Algorithm 3.1. Initialization: Choose θ>0, λ1>0, μ∈(0,1), ρ∈(0,2). Let x0,x1∈H be chosen arbitrarily.
Iterative Steps: Calculate xn+1 as follows:
Step 1. Given the iterates xn−1 and xn for each n≥1 and θ>0, choose θn such that 0≤θn≤¯θn, where
¯θn = min{θ, μn/||xn − xn−1||} if xn ≠ xn−1, and ¯θn = θ otherwise. (3.1)
Step 2. Compute
where
Step 3. Compute
where Tn:={z∈H : ⟨yn − λnAyn − wn, z − wn⟩ ≤ 0}, ηn := ⟨yn − wn, dn⟩/||dn||² and dn = yn − wn − λn(Ayn − Awn). Set n:=n+1 and return to Step 1.
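For readers implementing Step 2, a common self-adaptive update in the spirit of Yang et al. [38], which requires no knowledge of L, is λn+1 = min{μ||yn − wn||/||Ayn − Awn||, λn} when Ayn ≠ Awn, and λn+1 = λn otherwise; the sketch below implements this assumed form (the exact formula (3.3) may differ).

```python
import math

def update_lambda(lam, mu, y, w, Ay, Aw):
    """One self-adaptive stepsize update of Yang-et-al. type (assumed form)."""
    num = mu*math.sqrt(sum((a - b)**2 for a, b in zip(y, w)))
    den = math.sqrt(sum((a - b)**2 for a, b in zip(Ay, Aw)))
    if den == 0.0:
        return lam                 # A(y_n) = A(w_n): keep the old stepsize
    return min(num/den, lam)       # nonincreasing, bounded below by mu/L

# For a 10-Lipschitz operator the rule settles at mu/10 = mu/L without
# being told L; the operator and points below are illustrative choices.
A = lambda x: [10.0*t for t in x]
lam, mu = 3.1, 0.5
y, w = [1.0, 2.0], [0.5, 1.0]
lam = update_lambda(lam, mu, y, w, A(y), A(w))
```

Because the ratio μ||yn − wn||/||Ayn − Awn|| is always at least μ/L for an L-Lipschitz operator, the generated stepsizes stay bounded away from zero, as Lemma 3.1 below records.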
Remark 3.1. From (C4) of Assumption 3.1, we have μn=o(αn), i.e., limn→∞ μn/αn = 0. Also, from (3.1), θn ≤ ¯θn ≤ μn/||xn − xn−1|| for all n≥1 with xn ≠ xn−1, so it is easy to see that
limn→∞ (θn/αn)||xn − xn−1|| = 0.
The following two lemmas, which were essentially proved in [25,38] and [26], respectively, are vital in the proof of our main result. For the sake of completeness, we include their proofs.
Lemma 3.1. (see Yang et al. [38] and Tan et al. [25]) The sequence {λn} in Algorithm 3.1 generated by (3.3) is nonincreasing and the limit exists.
Proof. It is straightforward to see that the sequence {λn} is nonincreasing. Using the fact that A is Lipschitz continuous, when Ayn ≠ Awn we have
μ||yn − wn||/||Ayn − Awn|| ≥ μ||yn − wn||/(L||yn − wn||) = μ/L;
thus, {λn} is bounded from below by min{μ/L, λ1}. Since the sequence {λn} is nonincreasing and bounded from below, limn→∞λn exists. So, denoting λ=limn→∞λn, we get λ>0.
□
Lemma 3.2. (see [26, Lemma 3.2]) If yn=wn or dn=0 in Algorithm 3.1, then yn∈VI(C,A).
Proof. Using (3.2) and (3.3) in Algorithm 3.1, we get
Therefore (1 − (λn/λn+1)μ)||yn − wn|| ≤ ||dn|| ≤ (1 + (λn/λn+1)μ)||yn − wn||. Since the limit of {λn} exists and λn/λn+1 → 1, it follows that yn = wn if and only if dn = 0. Therefore, if yn = wn or dn = 0, we get yn = PC(yn − λnAyn), which implies yn∈VI(C,A). □
Lemma 3.3. Let {xn} be the sequence generated by Algorithm 3.1 under Assumption 3.1. Then, {xn} is bounded.
Proof. Let x∗∈Γ⊂C. Since, from (3.2), wn∈C, then
and since A is quasimonotone on H, by Lemma 2.6, we have
thus
By definition of Tn in Algorithm 3.1, we get vn∈Tn, which implies
hence
Combining (3.8) and (3.9), we get
Using (3.4), (3.10) and the fact that the projection operator is firmly nonexpansive, by Lemma 2.1(ⅱ), we obtain
therefore
Moreover, we have
Since ||dn|| ≤ (1 + (λn/λn+1)μ)||yn − wn||, then
Combining (3.11) and (3.12), we get
Thus
Also, from the definition of (yn), we have
Since the sequence {(θn/αn)||xn − xn−1||} converges to zero, there exists K>0 such that (θn/αn)||xn − xn−1|| ≤ K for all n≥1. Thus
it follows from (3.14) that
Also,
From (C3), since Ψ:=∑∞i=1γiTi is τ-demimetric, by Lemma 2.2, G:=(1−ζ)I+ζΨ is a quasi-nonexpansive mapping, and thus from (3.4) and (3.17), we get
Therefore, by induction we have
Hence {xn} is bounded. It follows that {un},{vn}, {wn} and {yn} are also bounded. □
Lemma 3.4. Let {xn} be a sequence generated by Algorithm 3.1 such that Assumption 3.1 holds. Then, for any x∗∈Γ, the following inequality holds
Proof. Let x∗∈Γ, then with Lemma 2.5, (3.4) and (3.13), we get
Since
for some K1>0. Combining (3.18) and (3.19), we get
□
We also need the following lemma, obtained by Yotkaew et al. [39].
Lemma 3.5. (see Lemma 3.6 in [39]) Let {wn} and {yn} be the sequences generated by Algorithm 3.1 under conditions (C1)–(C4) in Assumption 3.1. Suppose there exist subsequences {wnk} of {wn} and {ynk} of {yn} such that {ynk} converges weakly to z∈H and limk→∞||wnk − ynk|| = 0. Then z∈VI(C,A).
Based on the analysis presented above and Lemma 3.5, we demonstrate that Algorithm 3.1 converges under assumptions (C1)–(C5).
Theorem 3.1. Suppose Assumption 3.1 holds. Then, the sequence {xn} generated by Algorithm 3.1 converges strongly to a point z∈Γ.
Proof. Let x∗∈Γ. Then, by Lemmas 2.7 and 3.4, we only need to show that
for every subsequence {‖xnk−x∗‖} of {‖xn−x∗‖} satisfying
Now, let {‖xnk−x∗‖} be a subsequence of {‖xn−x∗‖} such that
holds. Then
It follows from (3.20) and (3.21) and the fact that the limit of λn exists and limn→∞αn=0, by letting
then
Thus
this implies that
Also, by definition of (yn), we get that
as k→∞, it follows from (3.22) that
as k→∞. Furthermore, we know that
thus
with this and (3.22), we get
From the definition of (un), and (C4), we have
Now, combining (3.22) and (3.24), we get
as k→∞, also with (3.25) and (3.26), we get
as k→∞. Moreover from (C3) and (3.22), we have
Furthermore, since {xnk} is bounded, there then exists a subsequence {xnks} of {xnk} such that {xnks} converges weakly to z in H as s→∞. Then, from (3.27), we get that {unks} also converges weakly to z∈H and, from (C3), Ψ is demimetric and demiclosed at zero; then, by (3.28), we get z∈F(Ψ):=F(∑∞i=1γiTi). Thus, by Lemma 2.3, we conclude that z∈∩∞i=1F(Ti). On the other hand, from (3.22), we know that {ynks} converges weakly to z∈H, and thus, with (3.22), we conclude by Lemma 3.5 that z∈VI(C,A). Therefore, z∈Γ. Finally, we show that {xn} converges strongly to z∈Γ. Using Lemma 3.4, we have
Since {xnk} converges weakly to z as k→∞, we have limk→∞⟨xnk − z, z⟩ = 0, implying that −2(1−αnk)⟨vnk − z, z⟩ + θnkαnk||xnk − xnk−1|| + αnk||z||² → 0 as k→∞. It then follows from (3.29) and Lemma 2.7 that limn→∞||xn − z|| = 0. Hence, xn → z∈Γ. This completes the proof. □
4. Numerical illustrations
In this section, we provide computational experiments in support of the convergence analysis of the proposed algorithm. We also compare our method with existing methods in the literature.
Example 4.1. Let H=L2([0,1]) with norm
||x|| = (∫_0^1 |x(t)|² dt)^{1/2} for all x∈H,
and inner product
⟨x, y⟩ = ∫_0^1 x(t)y(t) dt for all x,y∈H.
Let the function A:H→H be defined by A(x)(t) = x(t)/(1 + e^{||x||}). Then A is quasimonotone (see [1]), and we set S and G to be the identity mapping on H in (1.6) and Algorithm 3.1, respectively. We choose λ1=3.1, θ=0.5, ρ=0.5, μ=0.5, αn=1/(n+1) and μn=1/n^{2.1}. We terminate the iterations when En=||xn+1 − xn|| ≤ ϵ, where ϵ=10^{−3}. The numerical results are listed in Table 1. Also, we illustrate the strong convergence of the proposed Algorithm 3.1 in comparison with the convergence of (1.6), called IMSEM, and its unaccelerated version, MSEM, in Figure 1, for the following choices of starting points:
(Case A) x0 = e^{−2t} + 1 and x1 = t³ + cos t;
(Case B) x0 = sin(−6t) + cos(−5t) and x1 = 11t² + 2t + 1;
(Case C) x0 = t⁴ + 3t + 9 and x1 = 5t⁴ + 1;
(Case D) x0 = ln(−2t+1) + 5t² and x1 = t⁸ + 1.
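The L²([0,1]) quantities in Example 4.1 can be approximated on a grid. The sketch below uses a trapezoidal rule for the norm and inner product, and takes A(x)(t) = x(t)/(1 + e^{||x||}) as one reading of the operator above together with one reading of Case A's starting point; the grid size N and both readings are our assumptions, not the paper's exact setup.

```python
import math

# Discretized L^2([0,1]) toolkit for experiments like Example 4.1 (a sketch).
N = 1000
ts = [i/N for i in range(N + 1)]               # uniform grid on [0, 1]

def inner(x, y):                               # trapezoidal <x, y>_{L^2}
    vals = [a*b for a, b in zip(x, y)]
    return (sum(vals) - 0.5*(vals[0] + vals[-1]))/N

def norm(x):
    return math.sqrt(inner(x, x))

def A(x):                                      # assumed form of the operator
    c = 1.0 + math.exp(norm(x))
    return [v/c for v in x]

x0 = [math.exp(-2*t) + 1 for t in ts]          # one reading of Case A's x0
n0 = norm(x0)                                  # approx. ||x0||_{L^2} = 1.4526
```

Since 1 + e^{||x||} > 1, the operator shrinks every function, so ||A(x0)|| < ||x0||; the stopping criterion En ≤ ϵ would be evaluated with the same discrete norm.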
5. Conclusions
This paper presented a modified inertial Mann-type method that combines the subgradient extragradient method with the projection and contraction method to solve Lipschitz continuous quasimonotone variational inequality problems and fixed point problems in real Hilbert spaces. Under certain mild conditions on the parameters, we proved strong convergence of the algorithm without requiring prior knowledge of the Lipschitz constant of the operator. Furthermore, we demonstrated the efficiency of the proposed algorithm by illustrating its convergence and comparing it with previously known algorithms.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The authors are grateful to the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, for supporting this research work.
Conflict of interest
The authors declare that they have no competing interests.