
In this paper, we propose a modified projection and contraction method for solving a variational inequality problem in Hilbert spaces. In the existing projection and contraction methods for solving the variational inequality problem, the sequence {βn} is computed in a similar self-adaptive manner, whereas the sequence {βn} in our method is a sequence of numbers in (0, 1) given in advance; this is the main difference between our method and the existing projection and contraction methods. A line search is used to deal with the unknown Lipschitz constant of the mapping. The strong convergence of the proposed method is proved under certain conditions. Finally, some numerical examples are presented to illustrate the effectiveness of our method and to compare the computational results with some related methods in the literature. The numerical results show that our method has a clear competitive advantage over the related methods.
Citation: Limei Xue, Jianmin Song, Shenghua Wang. A modified projection and contraction method for solving a variational inequality problem in Hilbert spaces[J]. AIMS Mathematics, 2025, 10(3): 6128-6143. doi: 10.3934/math.2025279
Let H be a real Hilbert space, C be a nonempty, closed, and convex subset of H, and F:H→H be a nonlinear mapping. The variational inequality problem (VIP in short) is to find a point x∗∈C such that
⟨F(x∗), y − x∗⟩ ≥ 0, ∀y ∈ C.  (1.1)
Denote the set of solutions of the VIP (1.1) by S. The VIP has many applications in economics, optimization, mechanics, and so on; see [1,2].
In recent decades, many methods have been proposed for solving the various classes of VIPs. Korpelevich [3] proposed the famous extragradient method and proved the weak convergence of the method under certain conditions. It is well known that the extragradient method needs to compute two projections onto the feasible set C at each iteration, which increases the computational load, especially when C has a complex structure. To overcome this drawback, some authors proposed improved methods in which only one projection is computed at each iteration, such as Tseng's method [4], the subgradient extragradient method [5], and the projection and contraction method [6]. In line with the content of this paper, we focus on the projection and contraction method. In [6], He proposed a projection and contraction method for solving the monotone VIP (1.1) as follows: x0∈H, and
yn = PC(xn − τF(xn)),
d(xn, yn) = xn − yn − τ(F(xn) − F(yn)),
xn+1 = xn − γβn d(xn, yn), n ≥ 1,  (1.2)
where F is a monotone and L-Lipschitz continuous mapping, τ ∈ (0, 1/L), γ ∈ (0, 2), and
βn = ⟨xn − yn, d(xn, yn)⟩ / ‖d(xn, yn)‖² if d(xn, yn) ≠ 0, and βn = 0 otherwise.  (1.3)
The author proved that under certain conditions the sequence {xn} generated by (1.2) converges weakly to a solution of the VIP (1.1).
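To make the scheme (1.2)–(1.3) concrete, the following sketch (ours, not from the paper) applies it to a toy monotone VIP in R² with F(x) = x − b, so L = 1, and C = [0, 1]², for which the solution is PC(b); the helper names and parameter values are illustrative only.

```python
import numpy as np

def he_pc(F, proj, x0, tau, gamma, max_iter=10000, tol=1e-12):
    """Projection and contraction method (1.2) with the self-adaptive beta_n of (1.3)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj(x - tau * F(x))
        d = x - y - tau * (F(x) - F(y))      # d(x_n, y_n)
        nd2 = float(np.dot(d, d))
        if nd2 < tol:                        # d(x_n, y_n) ~ 0: y_n solves the VIP
            return y
        beta = np.dot(x - y, d) / nd2        # beta_n from (1.3)
        x = x - gamma * beta * d             # contraction step
    return x

b = np.array([2.0, -1.0])
F = lambda v: v - b                          # monotone and 1-Lipschitz
proj = lambda v: np.clip(v, 0.0, 1.0)        # C = [0, 1]^2, so the solution is P_C(b) = (1, 0)
sol = he_pc(F, proj, np.array([0.5, 0.5]), tau=0.5, gamma=1.5)
```

For this linear F the error is roughly halved at each step, so the iterates settle on (1, 0) quickly.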
Since He's result, many modified versions of the projection and contraction method (1.2) have been introduced. For example, Dong et al. [7] proposed the following inertial projection and contraction method for solving the monotone VIP (1.1): x0∈H, and
wn = xn + αn(xn − xn−1),
yn = PC(wn − τF(wn)),
d(wn, yn) = wn − yn − τ(F(wn) − F(yn)),
xn+1 = wn − γβn d(wn, yn), n ≥ 1,  (1.4)
where F is a monotone and L-Lipschitz continuous mapping, τ ∈ (0, 1/L), γ ∈ (0, 2), {αn} ⊂ [0, α] with α < 1, and
βn = ⟨wn − yn, d(wn, yn)⟩ / ‖d(wn, yn)‖² if d(wn, yn) ≠ 0, and βn = 0 otherwise.  (1.5)
The authors proved that under certain conditions the sequence {xn} generated by (1.4) converges weakly to a solution of the VIP (1.1).
In 2023, Zhang and Chu [8] proposed a new extrapolation projection and contraction method by combining the golden ratio technique [9] for solving the VIP (1.1) as follows: x0,y1∈H, and
yn = ((ϕ − 1)/ϕ)xn + (1/ϕ)yn−1,
ȳn = PC(yn − λnF(yn)),
xn+1 = yn − γβn d(yn, ȳn), n ≥ 1,  (1.6)
where F is a pseudomonotone and L-Lipschitz continuous mapping, μ∈(0,1), ϕ∈(1,∞), γ∈(0,2),
βn = ⟨yn − ȳn, d(yn, ȳn)⟩ / ‖d(yn, ȳn)‖² if d(yn, ȳn) ≠ 0, and βn = 0 if ‖d(yn, ȳn)‖ = 0,  (1.7)
with
d(yn, ȳn) = yn − ȳn − λn(F(yn) − F(ȳn))
and
λn+1 = min{ μ‖yn − ȳn‖ / ‖F(yn) − F(ȳn)‖, ξnλn + τn } if F(yn) ≠ F(ȳn), and λn+1 = ξnλn + τn otherwise,  (1.8)
where {ξn} ⊂ [1, ∞) with ∑_{n=1}^∞ (ξn − 1) < ∞ and {τn} ⊂ (0, ∞) with ∑_{n=1}^∞ τn < ∞. The authors proved that, under certain conditions, the sequence {xn} generated by (1.6) converges weakly to a solution of the VIP (1.1). Note that the sequence {λn} in (1.8) is permitted to increase. Similar definitions of λn can be found in [10,11].
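The update rule (1.8) is easy to state in code. The sketch below is a minimal illustration (the function name and the sample numbers are ours): when F(yn) ≠ F(ȳn), the new step size is the minimum of a Lipschitz-style bound and the relaxed previous value ξnλn + τn.

```python
import numpy as np

def next_lambda(lam, y, y_bar, Fy, Fy_bar, mu, xi, tau):
    """One update of the step size lambda_{n+1} as in (1.8).

    Because xi >= 1 and tau > 0, the bound xi*lam + tau can exceed lam,
    so the step size is allowed to increase between iterations.
    """
    denom = np.linalg.norm(Fy - Fy_bar)
    if denom > 0:                                # F(y_n) != F(ybar_n)
        return min(mu * np.linalg.norm(y - y_bar) / denom, xi * lam + tau)
    return xi * lam + tau

# with mu = 0.5, ||y - ybar|| = 1 and ||F(y) - F(ybar)|| = 2, the first candidate
# is 0.5 * 1 / 2 = 0.25, smaller than xi*lam + tau = 1.25 * 1 + 0.25 = 1.5
lam_next = next_lambda(1.0, np.array([1.0, 0.0]), np.zeros(2),
                       np.array([2.0, 0.0]), np.zeros(2), mu=0.5, xi=1.25, tau=0.25)
```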
For more projection and contraction methods and golden ratio methods for solving VIPs, interested readers may refer to [12,13,14].
In (1.3), (1.5), and (1.7), the sequence {βn} is computed in a self-adaptive manner involving d(⋅,⋅), and the constant γ is chosen from (0, 2). In fact, besides the projection and contraction methods mentioned above, in almost all versions of the projection and contraction method, the sequence {βn} is computed in a similar self-adaptive manner. A natural question is whether the restriction on γ can be relaxed or the constant omitted, and whether {βn} can be computed by other means or replaced by a sequence of numbers given in advance at the initialization of the method. It seems that no authors have considered this question. In addition, note that yn in (1.6) can be written as yn = (1 − 1/ϕ)xn + (1/ϕ)yn−1, which is a convex combination of xn and yn−1. Replacing ϕ with a sequence {ϕn} in (1, ∞), yn becomes (1 − 1/ϕn)xn + (1/ϕn)yn−1. In this case, if ϕn ≡ ϕ, then yn reduces to yn in (1.6), so yn in (1.6) is a special case. Based on the above idea, in this paper we propose a modified version of the method (1.6) in which the constant γ is omitted and the sequence {βn} is replaced by a sequence of numbers in (0, 1). The main differences between our method and (1.6) are as follows:
(1) the constant ϕ in (1.6) is replaced by a sequence {ϕn};
(2) the constant γ in (0, 2) is not needed, and the sequence {βn} in (1.7) is replaced by a sequence of numbers in (0, 1), which is the main novelty of our method.
We prove that, under certain conditions, the proposed method converges strongly to a solution of the VIP (1.1). Numerical examples are presented to illustrate the effectiveness of our method. Although our method is a slight modification of the method (1.6), the numerical results show that it has a faster convergence rate than the method (1.6) and other related methods.
In this section, let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Denote the weak convergence by the symbol ⇀. For any x,y∈H, it holds that
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.  (2.1)
Definition 2.1. [15] A mapping F:H→H is said to be
(i) Pseudomonotone on C if
∀x,y∈C,⟨F(x),y−x⟩≥0⇒⟨F(y),y−x⟩≥0; |
(ii) Pseudomonotone with respect to a subset B⊂C on C if
∀x∈B,y∈C,⟨F(x),y−x⟩≥0⇒⟨F(y),y−x⟩≥0; |
(iii) L-Lipschitz continuous on H if there exists a constant L>0 such that
∀x,y∈H,‖F(x)−F(y)‖≤L‖x−y‖; |
(iv) Sequentially weakly continuous if for each sequence {xn}, one has
xn⇀x⇒F(xn)⇀F(x). |
Let C be a nonempty, closed, and convex subset of H. For any x∈H, there exists a unique z∈C such that
‖z − x‖ = min_{y∈C} ‖y − x‖.
Denote the element z by PC(x). The mapping PC:H→C is called the metric projection from H onto C.
Lemma 2.1. [15] PC has the following properties:
z=PC(x)⇔⟨x−z,z−y⟩≥0, ∀y∈C. |
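For a Euclidean ball, PC has a closed form, which gives a quick numerical check of the characterization in Lemma 2.1; the helper `proj_ball` below is ours, and the sampled points y are drawn from the unit ball.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection P_C(x) for C a closed Euclidean ball."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d

x = np.array([3.0, 4.0])
z = proj_ball(x, np.zeros(2), 1.0)           # dist = 5, so z = x/5 = (0.6, 0.8)

# check the characterization of Lemma 2.1: <x - z, z - y> >= 0 for all y in C
rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    y = rng.normal(size=2)
    y /= max(1.0, np.linalg.norm(y))         # a point of the unit ball
    ok &= (np.dot(x - z, z - y) >= -1e-12)
```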
Lemma 2.2. [16] Assume that {an} ⊂ [0, ∞), {bn} ⊂ (0, 1), and {cn} ⊂ (0, ∞) satisfy
an+1 ≤ (1 − bn)an + cn, ∀n ∈ N.
If ∑_{n=1}^∞ bn = ∞ and lim sup_{n→∞} cn/bn ≤ 0 hold, then lim_{n→∞} an = 0.
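A small numerical illustration of Lemma 2.2 (the particular choices of bn and cn below are ours): with bn = 1/(n+1), so ∑ bn = ∞, and cn = 1/(n+1)², so cn/bn = 1/(n+1) → 0, the recursion drives an to 0.

```python
# iterate the worst case a_{n+1} = (1 - b_n) a_n + c_n of the lemma's inequality
a = 1.0
for n in range(1, 200001):
    b_n = 1.0 / (n + 1)            # sum of b_n diverges
    c_n = 1.0 / (n + 1) ** 2       # c_n / b_n = 1/(n+1) -> 0
    a = (1.0 - b_n) * a + c_n
# after 200000 steps, a has decayed to a small positive number
```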
Lemma 2.3. [17] Let {an} be a sequence of non-negative real numbers for which there exists a subsequence {anj} of {an} such that anj < anj+1 for all j ∈ N. Then there exists a nondecreasing sequence {mk} ⊂ N such that limk→∞ mk = ∞ and the following properties are satisfied for all (sufficiently large) k ∈ N:
amk ≤ amk+1 and ak ≤ amk+1.
In this section, let C be a nonempty, closed, and convex subset of a real Hilbert space H and let F:H→H be a mapping. Consider the following conditions:
(A1) S≠∅.
(A2) ⟨F(y),y−z⟩≥0 for all z∈S and y∈C.
(A3) F is L-Lipschitz continuous.
(A4) For any sequence {un} ⊂ C with un ⇀ u, one has ⟨F(u), u − y⟩ ≤ lim sup_{n→∞} ⟨F(un), un − y⟩ for all y ∈ C.
Remark 3.1. Condition (A2), used for example in [18], is weaker than the pseudomonotonicity assumption. In fact, from Conditions (A1) and (A2), it follows that F is pseudomonotone with respect to S on C. In addition, it needs to be noted that Condition (A4) is not related to the condition that F is sequentially weakly continuous on C.
For solving the VIP (1.1), we propose the projection and contraction method as follows.
Algorithm 3.1 |
Initialization. Choose the initial points y0, x1 ∈ H, the parameters τ ∈ (0, 1), μ ∈ (0, 1), γ ∈ (0, ∞), the sequences {ϕn} ⊂ [ϕ, ∞) with ϕ > 1, {βn} ⊂ [β, 1) with β > 0, and {ψn} ⊂ (0, 1) satisfying |
lim_{n→∞} ψn = 0 and ∑_{n=1}^∞ ψn = ∞. |
Set n=1. |
Step 1. Compute wn = (1 − ψn)yn with |
yn = ((ϕn − 1)/ϕn)xn + (1/ϕn)yn−1. |
Step 2. Compute |
ˉyn=PC(wn−λnF(wn)), |
where λn is chosen to be the largest λ ∈ {γτ^l : l = 0, 1, 2, ⋯} such that |
λ‖F(wn) − F(ȳn)‖ ≤ μ‖wn − ȳn‖.  (3.1) |
If wn = ȳn or F(ȳn) = 0, then stop: ȳn ∈ S. Otherwise, go to Step 3. |
Step 3. Compute |
xn+1=wn−βnd(wn,ˉyn), |
where |
d(wn,ˉyn)=wn−ˉyn−λn(F(wn)−F(ˉyn)). |
Step 4. Set n=n+1 and go to Step 1. |
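The steps above can be sketched compactly. The Python code below is our own illustration, not the authors' MATLAB implementation: it runs Algorithm 3.1 on a toy monotone VIP with F(x) = x − b and C = [0, 1]², whose unique solution is (1, 0); the parameter choices ϕn = (1+√5)/2 + 1/n, ψn = 1/(n+1), and the constant βn = 0.5 ∈ (0, 1) satisfy the initialization requirements but are otherwise arbitrary.

```python
import numpy as np

GOLDEN = (1 + 5 ** 0.5) / 2

def algorithm_3_1(F, proj, x1, y0, tau=0.5, mu=0.5, gamma=1.0,
                  max_iter=20000, tol=1e-10):
    """Sketch of Algorithm 3.1 with illustrative parameter choices."""
    x, y_prev = np.asarray(x1, dtype=float), np.asarray(y0, dtype=float)
    for n in range(1, max_iter + 1):
        phi = GOLDEN + 1.0 / n                        # phi_n
        y = (phi - 1) / phi * x + y_prev / phi        # Step 1
        w = (1 - 1.0 / (n + 1)) * y                   # w_n = (1 - psi_n) y_n
        lam = gamma                                   # Armijo line search (3.1)
        while True:
            y_bar = proj(w - lam * F(w))
            if lam * np.linalg.norm(F(w) - F(y_bar)) <= mu * np.linalg.norm(w - y_bar):
                break
            lam *= tau
        if np.linalg.norm(w - y_bar) < tol or np.linalg.norm(F(y_bar)) < tol:
            return y_bar                              # stopping criterion of Step 2
        d = w - y_bar - lam * (F(w) - F(y_bar))       # Step 3
        x = w - 0.5 * d                               # beta_n = 0.5, given in advance
        y_prev = y
    return x

b = np.array([2.0, -1.0])
F = lambda v: v - b                                   # monotone, 1-Lipschitz
proj = lambda v: np.clip(v, 0.0, 1.0)                 # C = [0, 1]^2, S = {(1, 0)}
x_sol = algorithm_3_1(F, proj, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

Strong convergence here is of Halpern type (driven by ψn → 0), so the last iterates approach the solution at a rate governed by ψn rather than geometrically.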
Remark 3.2. In Algorithm 3.1, the definition of wn has the same form as the ones in [19,20,21,22]. Clearly, if ϕn ≡ ϕ with ϕ > 1, then yn reduces to the one in (1.6). In particular, if ϕn = (1 + √5)/2, then ϕn is the golden ratio. In the existing projection and contraction methods, {βn} is computed in a self-adaptive manner like (1.3), (1.5), and (1.7), and its value strictly depends on the result of the self-adaptive formula. However, in our method, {βn} is a sequence of numbers in (0, 1) given in advance. Since no other restriction is imposed on {βn}, we have greater freedom to choose or adjust {βn} in advance. This is the main difference between our method and the existing projection and contraction methods.
The following remark shows that the stopping criterion in Step 2 can work well.
Remark 3.3. By the definition of ˉyn and Lemma 2.1, we have
⟨ˉyn−(wn−λnF(wn)),y−ˉyn⟩≥0, ∀y∈C. |
If ȳn = wn for some n ∈ N, then
λn⟨F(ˉyn),y−ˉyn⟩≥0, ∀y∈C. |
Since λn>0, we have
⟨F(ˉyn),y−ˉyn⟩≥0, ∀y∈C. |
It follows that ˉyn∈S. In addition, it is clear that F(ˉyn)=0 leads to ˉyn∈S.
In the rest of this section, to show the convergence of Algorithm 3.1, we assume that {xn}, {wn}, {yn}, and {ȳn} are infinite sequences.
Lemma 3.1. [23] Assume that (A3) holds. Then the Armijo line search rule (3.1) is well defined, and min{γ, μτ/L} ≤ λn ≤ γ for all n ∈ N.
Lemma 3.2. Assume that (A1)–(A3) hold. Then the sequences {xn}, {yn}, {ˉyn} and {wn} are bounded.
Proof. For each z∈S and n∈N, by the definition of ˉyn and Lemma 2.1, it follows that
⟨ˉyn−z,wn−ˉyn−λnF(wn)⟩≥0. | (3.2) |
By (A2) it holds that
⟨ˉyn−z,F(ˉyn)⟩≥0, |
which, together with λn>0, leads to that
λn⟨ˉyn−z,F(ˉyn)⟩≥0. | (3.3) |
Combining (3.2) with (3.3), we have
⟨ˉyn−z,d(wn,ˉyn)⟩=⟨ˉyn−z,wn−ˉyn−λn(F(wn)−F(ˉyn))⟩≥0. |
So
⟨wn−z,d(wn,ˉyn)⟩≥⟨wn−ˉyn,d(wn,ˉyn)⟩. |
Multiplying both sides of the above inequality by βn, we get
βn⟨wn−z,d(wn,ˉyn)⟩≥βn⟨wn−ˉyn,d(wn,ˉyn)⟩. | (3.4) |
Substituting (3.4) into
‖xn+1 − z‖² = ‖wn − z − βn d(wn, ȳn)‖² = ‖wn − z‖² − 2βn⟨wn − z, d(wn, ȳn)⟩ + βn²‖d(wn, ȳn)‖²,
we have
‖xn+1 − z‖² ≤ ‖wn − z‖² − 2βn⟨wn − ȳn, d(wn, ȳn)⟩ + βn²‖d(wn, ȳn)‖²
≤ ‖wn − z‖² − 2βn⟨wn − ȳn, d(wn, ȳn)⟩ + βn‖d(wn, ȳn)‖²
= ‖wn − z‖² − βn‖wn − ȳn‖² + βnλn²‖F(wn) − F(ȳn)‖².
Furthermore, by (3.1) we get
‖xn+1 − z‖² ≤ ‖wn − z‖² − βn‖wn − ȳn‖² + βnμ²‖wn − ȳn‖² = ‖wn − z‖² − βn(1 − μ²)‖wn − ȳn‖².  (3.5)
On the other hand, by the definition of yn+1 we have
‖xn+1 − z‖² = ‖(ϕn+1/(ϕn+1 − 1))yn+1 − (1/(ϕn+1 − 1))yn − z‖²
= (ϕn+1/(ϕn+1 − 1))‖yn+1 − z‖² − (1/(ϕn+1 − 1))‖yn − z‖² + (ϕn+1/(ϕn+1 − 1)²)‖yn+1 − yn‖²
= (ϕn+1/(ϕn+1 − 1))‖yn+1 − z‖² − (1/(ϕn+1 − 1))‖yn − z‖² + (1/ϕn+1)‖xn+1 − yn‖².  (3.6)
Substituting (3.6) into (3.5), we obtain
(ϕn+1/(ϕn+1 − 1))‖yn+1 − z‖² − (1/(ϕn+1 − 1))‖yn − z‖² + (1/ϕn+1)‖xn+1 − yn‖² ≤ ‖wn − z‖² − βn(1 − μ²)‖wn − ȳn‖²
= ‖(1 − ψn)(yn − z) + ψn(−z)‖² − βn(1 − μ²)‖wn − ȳn‖²
≤ (1 − ψn)‖yn − z‖² + ψn‖z‖² − βn(1 − μ²)‖wn − ȳn‖².  (3.7)
It follows that
‖yn+1 − z‖² ≤ [1 − ψn(1 − 1/ϕn+1)]‖yn − z‖² + ψn(1 − 1/ϕn+1)‖z‖² − βn(1 − μ²)(1 − 1/ϕn+1)‖wn − ȳn‖²
≤ [1 − ψn(1 − 1/ϕn+1)]‖yn − z‖² + ψn(1 − 1/ϕn+1)‖z‖²
≤ max{‖yn − z‖², ‖z‖²} ≤ ⋯ ≤ max{‖y1 − z‖², ‖z‖²}, ∀n ∈ N.  (3.8)
Hence {yn} is bounded. The boundedness of {wn} follows from ‖wn‖ = (1 − ψn)‖yn‖ ≤ ‖yn‖, which implies that {PCwn} is also bounded. In addition, {F(wn)} is bounded because of (A3). By the definition of ȳn, we have
‖ˉyn‖≤‖ˉyn−PCwn‖+‖PCwn‖=‖PC(wn−λnF(wn))−PCwn‖+‖PCwn‖≤λn‖F(wn)‖+‖PCwn‖, ∀n∈N. |
From Lemma 3.1 it follows that {ˉyn} is bounded. Finally, by (3.5) and the boundedness of {wn}, we obtain the boundedness of {xn}. The proof is complete.
In the following lemma, we denote the set of weak cluster points of {wn} by ωw(wn).
Lemma 3.3. Assume that (A1)–(A3) hold. If limn→∞‖wn−ˉyn‖=0, then ωw(wn)⊂S.
Proof. Since {wn} is bounded, ωw(wn) ≠ ∅. Choose w̄ ∈ ωw(wn) arbitrarily and let {wnk} be a subsequence of {wn} such that wnk ⇀ w̄ as k → ∞. From limk→∞ ‖wnk − ȳnk‖ = 0 it follows that ȳnk ⇀ w̄, and so w̄ ∈ C. For each y ∈ C, from Lemma 2.1 and the definition of ȳn, we have
⟨ˉynk−(wnk−λnkF(wnk)),ˉynk−y⟩≤0, |
or equivalently,
⟨F(wnk), wnk − y⟩ ≤ ⟨F(wnk), wnk − ȳnk⟩ + (1/λnk)⟨wnk − ȳnk, ȳnk − y⟩.  (3.9)
Taking lim supk→∞ in (3.9), by limk→∞‖wnk−ˉynk‖=0 and Lemma 3.1, we get
lim supk→∞⟨F(wnk),wnk−y⟩≤0. | (3.10) |
Since F is Lipschitz continuous and limk→∞‖wnk−ˉynk‖=0, we have
limk→∞‖F(wnk)−F(ˉynk)‖=0. | (3.11) |
Note that
⟨F(ˉynk),ˉynk−y⟩=⟨F(ˉynk)−F(wnk),wnk−y⟩+⟨F(wnk),wnk−y⟩+⟨F(ˉynk),ˉynk−wnk⟩. | (3.12) |
Taking lim supk→∞ in (3.12) and using (3.11) and (A4) we get
⟨F(ˉw),ˉw−y⟩≤0, |
which together with ˉw∈C and the arbitrariness of y∈C leads to ˉw∈S. The proof is complete.
We are now in a position to give the main result on Algorithm 3.1.
Theorem 3.1. Assume that (A1)–(A4) hold. The sequence {xn} generated by Algorithm 3.1 converges strongly to x∗=PS(0).
Proof. For each n∈N, from the definition of {wn} and (2.1) it follows that
‖wn − x∗‖² = ‖(1 − ψn)yn − x∗‖² = ‖(1 − ψn)(yn − x∗) + ψn(−x∗)‖² ≤ (1 − ψn)‖yn − x∗‖² + 2ψn⟨−x∗, wn − x∗⟩.  (3.13)
Replacing z with x∗ in (3.5) and (3.6), and using (3.13), we get
(ϕn+1/(ϕn+1 − 1))‖yn+1 − x∗‖² − (1/(ϕn+1 − 1))‖yn − x∗‖² + (1/ϕn+1)‖xn+1 − yn‖² = ‖xn+1 − x∗‖² ≤ ‖wn − x∗‖² ≤ (1 − ψn)‖yn − x∗‖² + 2ψn⟨−x∗, wn − x∗⟩.
It follows that
‖yn+1 − x∗‖² ≤ [1 − ψn(1 − 1/ϕn+1)]‖yn − x∗‖² − ((ϕn+1 − 1)/ϕn+1²)‖xn+1 − yn‖² + 2ψn(1 − 1/ϕn+1)⟨−x∗, wn − x∗⟩
≤ [1 − ψn(1 − 1/ϕn+1)]‖yn − x∗‖² + 2ψn(1 − 1/ϕn+1)⟨−x∗, wn − x∗⟩.  (3.14)
Next, we consider the following two cases to prove limn→∞‖yn − x∗‖ = 0.
Case 1. Suppose there exists N∈N such that {‖yn−x∗‖²} is monotonically nonincreasing for all n>N. Since {wn} is bounded, there exists a subsequence {wnk} such that wnk⇀w̄. Without loss of generality, we may assume that
lim supn→∞⟨−x∗,wn−x∗⟩=limk→∞⟨−x∗,wnk−x∗⟩=⟨−x∗,ˉw−x∗⟩. | (3.15) |
Hence {‖yn−x∗‖2} is convergent. From (3.8) with z=x∗ we have
β(1 − μ²)(1 − 1/ϕn+1)‖wn − ȳn‖² ≤ βn(1 − μ²)(1 − 1/ϕn+1)‖wn − ȳn‖²
≤ [1 − ψn(1 − 1/ϕn+1)]‖yn − x∗‖² − ‖yn+1 − x∗‖² + ψn(1 − 1/ϕn+1)‖x∗‖².  (3.16)
Since ϕn≥ϕ>1 and limn→∞ψn=0, letting n→∞ in (3.16), we get
limn→∞‖wn−ˉyn‖=0. | (3.17) |
By Lemma 3.3 and (3.17) we have ˉw∈S. From Lemma 2.1 it follows that
⟨−x∗,ˉw−x∗⟩≤0, |
which together with (3.15) leads to
lim supn→∞⟨−x∗,wn−x∗⟩≤0. | (3.18) |
By the hypothesis on {ϕn} and {ψn}, we have
∑_{n=1}^∞ ψn(1 − 1/ϕn+1) = ∞.  (3.19)
Applying Lemma 2.2 with an = ‖yn − x∗‖², bn = ψn(1 − 1/ϕn+1), and cn = 2ψn(1 − 1/ϕn+1)⟨−x∗, wn − x∗⟩ to (3.14), and using (3.18) and (3.19), we obtain limn→∞‖yn − x∗‖ = 0.
Case 2. Suppose that there exists a subsequence {mk}⊂N with mk→∞ such that
‖ymk−x∗‖≤‖ymk+1−x∗‖, ∀k∈N. | (3.20) |
From Lemma 2.3 it follows that
‖yk−x∗‖≤‖ymk+1−x∗‖, ∀k∈N. | (3.21) |
By replacing n in (3.14) with mk and using (3.20) we get
‖ymk+1 − x∗‖² ≤ [1 − ψmk(1 − 1/ϕmk+1)]‖ymk − x∗‖² + 2ψmk(1 − 1/ϕmk+1)⟨−x∗, wmk − x∗⟩
≤ [1 − ψmk(1 − 1/ϕmk+1)]‖ymk+1 − x∗‖² + 2ψmk(1 − 1/ϕmk+1)⟨−x∗, wmk − x∗⟩,
which implies that
‖ymk+1−x∗‖2≤2⟨−x∗,wmk−x∗⟩. |
By a similar process of showing (3.18), we can get
lim supk→∞⟨−x∗,wmk−x∗⟩≤0. |
Hence
lim supk→∞‖ymk+1−x∗‖2≤2lim supk→∞⟨−x∗,wmk−x∗⟩≤0. | (3.22) |
Combining (3.21) and (3.22), we have
lim supk→∞‖yk−x∗‖2≤lim supk→∞‖ymk+1−x∗‖2=0. |
It follows that limk→∞‖yk−x∗‖2=0. By the above two cases, we obtain limn→∞‖yn−x∗‖=0. Since limn→∞ψn=0, one has
‖wn − x∗‖ = ‖(1 − ψn)yn − x∗‖ ≤ ‖yn − x∗‖ + ψn‖yn‖ → 0 as n → ∞.  (3.23)
Finally, by (3.5) and (3.23) we have
‖xn+1 − x∗‖ ≤ ‖wn − x∗‖ → 0 as n → ∞.
The proof is complete.
In this section, we provide numerical examples to test the convergence of Algorithm 3.1 and compare the numerical results with some related methods. The codes are executed in MATLAB 2016a on a PC (Surface Pro 5) with an Intel(R) Core(TM) i5-7300U CPU running at 2.60 GHz and 8.00 GB of RAM.
We first present the following example to test the effectiveness of Algorithm 3.1 for solving the non-monotone VIP (1.1).
Example 4.1. [24] Let H=Rm, C=[0,π]×[0,π]×[0,1]×⋯×[0,1], F:H→H be a mapping defined by
F(x)=(x2+cos(x2),x1+sin(x1),x3,⋯,xm), ∀x=(x1,⋯,xm)∈H. |
It follows that S = {0}, Conditions (A1)–(A4) hold, and F is not pseudomonotone; see [24].
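As a quick sanity check (ours, with an arbitrary dimension m = 5), one can verify numerically that x∗ = 0 satisfies the VIP (1.1) for this F: since F(0) = (1, 0, …, 0), we have ⟨F(0), y − 0⟩ = y1 ≥ 0 for every y ∈ C.

```python
import numpy as np

m = 5                                            # dimension (any m >= 2 works)

def F(x):
    """The mapping of Example 4.1."""
    out = x.copy()
    out[0] = x[1] + np.cos(x[1])
    out[1] = x[0] + np.sin(x[0])
    return out

def sample_C(rng):
    """A random point of C = [0, pi] x [0, pi] x [0, 1] x ... x [0, 1]."""
    y = rng.uniform(0.0, 1.0, size=m)
    y[:2] *= np.pi
    return y

# x* = 0 is a solution: F(0) = (1, 0, ..., 0), so <F(0), y - 0> = y_1 >= 0 on C
rng = np.random.default_rng(1)
in_S = all(np.dot(F(np.zeros(m)), sample_C(rng)) >= 0.0 for _ in range(1000))
```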
We choose ϕn = (1 + √5)/2 + 1/n, ψn = 10/(10 + n), βn = 1/10 + n/(100 + 10n), and use ‖xn‖ ≤ 10⁻⁶ as the stopping criterion of Algorithm 3.1. The computed results for {xn} by Algorithm 3.1 with different initial points y0, x1, different dimensions m, and different parameters γ, τ, and μ are shown in Figure 1. Table 1 gives the CPU time (in seconds) of Algorithm 3.1 for this example. From the curves in Figure 1, we see that the sequence {xn} converges to 0.
(γ, μ, τ) | m = 1000 | m = 2000 | m = 5000 | m = 10000
(2, 0.1, 0.2) | 0.0546 | 0.0718 | 0.2739 | 0.3488
(7, 0.5, 0.7) | 0.0853 | 0.0743 | 0.2751 | 0.2165
(10, 0.6, 0.9) | 0.0989 | 0.0758 | 0.2687 | 0.3210
Next, we use the following example to compare the numerical results of our Algorithm 3.1 with the algorithm (1.2) (denoted by Algorithm H), the algorithm (1.6) (denoted by Algorithm Z), and Algorithm 3.1 in [13] (denoted by Algorithm T).
Example 4.2. [19,25] Let F:Rm→Rm be a mapping defined by F(x)=Mx+q with
M=NNT+B+D, |
where N is an m×m matrix, B is an m×m skew-symmetric matrix, and D is an m×m diagonal matrix with its diagonal entries being nonnegative (hence M is positive semidefinite). The feasible set is
C={x∈Rm:Ex≤f}, |
where E is a k×m matrix and f∈Rk is a vector. It follows that F is monotone and Lipschitz continuous with constant L = ‖M‖. All entries of N and B are generated randomly and uniformly in [−2, 2], those of E and f in [0, 1], and those of D in (0, 2). Conditions (A1)–(A4) are satisfied.
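The construction of M can be reproduced in a few lines, together with a numerical check of monotonicity; the sketch below is ours (for instance, building the skew-symmetric part as A − Aᵀ is just one convenient recipe, and the projection onto {x : Ex ≤ f}, which requires a quadratic programming subroutine, is omitted).

```python
import numpy as np

rng = np.random.default_rng(2025)
m = 10
N = rng.uniform(-2.0, 2.0, size=(m, m))
A = rng.uniform(-2.0, 2.0, size=(m, m))
B = A - A.T                                  # one way to build a skew-symmetric matrix
D = np.diag(rng.uniform(0.0, 2.0, size=m))   # positive diagonal entries
M = N @ N.T + B + D                          # positive semidefinite plus skew part
q = np.zeros(m)
F = lambda x: M @ x + q

L = np.linalg.norm(M, 2)                     # Lipschitz constant of F

# monotonicity: <F(x) - F(y), x - y> = u^T M u with u = x - y, and
# u^T M u = ||N^T u||^2 + u^T D u >= 0 since the skew part contributes 0
mono_ok = True
for _ in range(200):
    u = rng.normal(size=m)
    mono_ok &= (u @ M @ u >= -1e-9)
```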
We choose the initial points y0=x1=(1,⋯,1)T for Algorithm 3.1 and Algorithm Z, and x0=(1,⋯,1)T for Algorithm H, and x0=x1=(1,⋯,1)T for Algorithm T. The parameters and control sequences for Algorithm 3.1, Algorithm H and Algorithm Z are as follows:
Algorithm 3.1: γ = 2, τ = 0.3, μ = 0.2, ψn = 100/(100 + n), ϕn = (√5 + 1)/2 + 1/n, βn = 1/10 + 10/(10 + n);
Algorithm H: τ = 7/(10L), γ = 3/2;
Algorithm Z: ϕ = (√5 + 1)/2, λ1 = 1/20, γ = 3/2, μ = 3/5, ξn = 1 + 1/n², τn = 1/n²;
Algorithm T: ϕ = 0.6, β = 0.8, γ = 2, δ = 0.4, l = 0.5, τ = 1.5, ϵn = 100/(1 + n)², ξn = 1/(1 + n), σn = 0.5(1 − ξn).
We consider the following two cases for this example.
Case 1. q = 0. In this case, S = {0}. We use ‖xn‖ < 10⁻³ or the maximum number of iterations n = 1000 as the common stopping criterion of these algorithms. The numerical results are shown in Figure 2, and the CPU time (in seconds) is given in Table 2.
 | Algorithm 3.1 | Algorithm H | Algorithm Z | Algorithm T
 | CPU time | CPU time | CPU time | CPU time
(k, m) = (5, 10) | 16.1047 | 55.6034 | 88.2832 | 68.8139
(k, m) = (10, 30) | 30.9358 | 197.4414 | 297.8164 | 189.2787
(k, m) = (30, 50) | 33.6395 | 250.7686 | 236.5039 | 78.8249
(k, m) = (50, 100) | 66.9299 | 210.3711 | 182.2526 | 274.8092
Case 2. All entries of q are generated randomly and uniformly in (0, 2). In this case, the solution of the VIP (1.1) is not known. We use ‖xn − xn−1‖ < 10⁻³ as the common stopping criterion of these algorithms. The numerical results are shown in Figure 3, and the CPU time (in seconds) is given in Table 3.
 | Algorithm 3.1 | Algorithm H | Algorithm Z | Algorithm T
 | CPU time | CPU time | CPU time | CPU time
(k, m) = (5, 10) | 2.2174 | 21.4457 | 34.7764 | 23.5107
(k, m) = (10, 30) | 21.7007 | 50.0565 | 141.1718 | 136.3548
(k, m) = (30, 50) | 49.0367 | 59.7578 | 58.7833 | 58.1909
(k, m) = (50, 100) | 36.1365 | 108.7671 | 188.5018 | 123.2509
From Figures 2 and 3 and Tables 2 and 3, we see that our Algorithm 3.1 needs fewer iterations and less CPU time than the other three algorithms, which shows that our method has a clear competitive advantage over them for this numerical example.
In this paper, we proposed a modified projection and contraction method for solving a nonmonotone variational inequality problem in a Hilbert space. Without prior knowledge of the Lipschitz constant of the mapping, we use a line search rule to update the step size in our algorithm. The main novelty of our method is that we use a sequence of numbers in (0, 1) to replace the sequence {βn} that, in other projection and contraction methods, is computed in a self-adaptive manner involving d(⋅,⋅). Under certain conditions, we prove the strong convergence of the proposed method. Numerical examples are given to illustrate the effectiveness of our method, and the numerical results show that it has a clear competitive advantage over the other related algorithms.
Limei Xue: Methodology, computing numerical examples, writing-original draft; Jianmin Song: Formal analysis, writing-original draft; Shenghua Wang: Design of algorithm, proof of conclusions. All authors have read and approved the final version of the manuscript for publication.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
All authors declare no conflicts of interest in this paper.
[1] S. Migórski, S. D. Zeng, Hyperbolic hemivariational inequalities controlled by evolution equations with application to adhesive contact model, Nonlinear Anal. Real, 43 (2018), 121–143. https://doi.org/10.1016/j.nonrwa.2018.02.008
[2] S. D. Zeng, S. Migórski, Noncoercive hyperbolic variational inequalities with applications to contact mechanics, J. Math. Anal. Appl., 455 (2017), 619–637. https://doi.org/10.1016/j.jmaa.2017.05.072
[3] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[4] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806
[5] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3
[6] B. S. He, A class of projection and contraction methods for monotone variational inequalities, Appl. Math. Optim., 35 (1997), 69–76. https://doi.org/10.1007/BF02683320
[7] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. https://doi.org/10.1007/s10898-017-0506-0
[8] C. J. Zhang, Z. Y. Chu, New extrapolation projection contraction algorithms based on the golden ratio for pseudo-monotone variational inequalities, AIMS Math., 8 (2023), 23291–23312. https://doi.org/10.3934/math.20231184
[9] Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184 (2020), 383–410. https://doi.org/10.1007/s10107-019-01416-w
[10] V. T. Duong, S. Reich, Y. Shehu, O. S. Iyiola, Novel projection methods for solving variational inequality problems and applications, Numer. Algor., 93 (2023), 1105–1135. https://doi.org/10.1007/s11075-022-01457-x
[11] O. K. Oyewole, H. A. Abass, O. J. Ogunsola, An improved subgradient extragradient self-adaptive algorithm based on the golden ratio technique for variational inequality problems in Banach spaces, J. Comput. Appl. Math., 460 (2025), 116420. https://doi.org/10.1016/j.cam.2024.116420
[12] B. Tan, S. X. Li, S. Y. Cho, Inertial projection and contraction methods for pseudomonotone variational inequalities with non-Lipschitz operators and applications, Appl. Anal., 102 (2023), 1199–1221. https://doi.org/10.1080/00036811.2021.1979219
[13] B. Tan, X. L. Qin, Modified inertial projection and contraction algorithms for solving variational inequality problems with non-Lipschitz continuous operators, Anal. Math. Phys., 12 (2022), 26. https://doi.org/10.1007/s13324-021-00638-6
[14] Z. Y. Chu, C. J. Zhang, R-linear convergence analysis of two golden ratio projection algorithms for strongly pseudo-monotone variational inequalities, Optim. Eruditorum, 1 (2024), 45–55. https://doi.org/10.69829/oper-024-0101-ta04
[15] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Berlin: Springer, 2011.
[16] H. K. Xu, Another control condition in an iterative method for nonexpansive mappings, B. Aust. Math. Soc., 65 (2002), 109–113. https://doi.org/10.1017/S0004972700020116
[17] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z
[18] M. V. Solodov, B. F. Svaiter, A new projection method for monotone variational inequalities, SIAM J. Control Optim., 37 (1999), 765–776. https://doi.org/10.1137/S0363012997317475
[19] S. Noinakorn, N. Wairojjana, N. Pakkaranang, A novel accelerated extragradient algorithm to solve pseudomonotone variational inequalities, Arab. J. Math., 12 (2023), 201–218. https://doi.org/10.1007/s40065-022-00400-1
[20] B. Tan, P. Sunthrayuth, P. Cholamjiak, Y. J. Cho, Modified inertial extragradient methods for finding minimum-norm solution of the variational inequality problem with applications to optimal control problem, Int. J. Comput. Math., 100 (2023), 525–545. https://doi.org/10.1080/00207160.2022.2137672
[21] C. Izuchukwu, Y. Shehu, J. C. Yao, Convergence results for proximal point algorithm with inertial and correction terms, Appl. Anal., 2024 (2024), 1–21. https://doi.org/10.1080/00036811.2024.2432527
[22] C. Izuchukwu, Y. Shehu, J. C. Yao, New strong convergence analysis for variational inequalities and fixed point problems, Optimization, 2024 (2024), 1–22. https://doi.org/10.1080/02331934.2024.2424446
[23] B. Tan, X. L. Qin, J. C. Yao, Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems, Numer. Algor., 88 (2021), 1757–1786. https://doi.org/10.1007/s11075-021-01093-x
[24] Z. Q. Zhu, K. Y. Zheng, S. H. Wang, A new double inertial subgradient extragradient method for solving a non-monotone variational inequality problem in Hilbert space, AIMS Math., 9 (2024), 20956–20975. https://doi.org/10.3934/math.20241020
[25] D. V. Thong, D. V. Hieu, Inertial extragradient algorithms for strongly pseudomontone variational inequalities, J. Comput. Appl. Math., 341 (2018), 80–98. https://doi.org/10.1016/j.cam.2018.03.019