Research article

A modified projection and contraction method for solving a variational inequality problem in Hilbert spaces

In this paper, we propose a modified projection and contraction method for solving a variational inequality problem in Hilbert spaces. In the existing projection and contraction methods for solving the variational inequality problem, the sequence $\{\beta_n\}$ is computed in a similar self-adaptive manner, whereas the sequence $\{\beta_n\}$ in our method is a sequence of numbers in $(0,1)$ given in advance; this is the main difference between our method and the existing projection and contraction methods. A line search is used to deal with the unknown Lipschitz constant of the mapping. The strong convergence of the proposed method is proved under certain conditions. Finally, some numerical examples are presented to illustrate the effectiveness of our method and to compare the computational results with some related methods in the literature. The numerical results show that our method has an obvious competitive advantage over the related methods.

    Citation: Limei Xue, Jianmin Song, Shenghua Wang. A modified projection and contraction method for solving a variational inequality problem in Hilbert spaces[J]. AIMS Mathematics, 2025, 10(3): 6128-6143. doi: 10.3934/math.2025279




Let $H$ be a real Hilbert space, $C$ be a nonempty, closed, and convex subset of $H$, and $F:H\to H$ be a nonlinear mapping. The variational inequality problem (VIP in short) is to find a point $x^*\in C$ such that

$$\langle F(x^*), y - x^*\rangle \ge 0,\quad \forall y\in C. \tag{1.1}$$

Denote the set of solutions of VIP (1.1) by $S$. The VIP has many applications in economics, optimization, mechanics, and so on; see [1,2].
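A standard equivalent formulation is that $x^*$ solves (1.1) if and only if $x^*=P_C(x^*-\lambda F(x^*))$ for any $\lambda>0$, where $P_C$ is the metric projection onto $C$. The following NumPy sketch (the function names and the toy problem are our own illustration, not from the paper) uses this natural-residual characterization as a numerical solution check:

```python
import numpy as np

def vip_residual(F, proj_C, x, lam=1.0):
    """Natural residual r(x) = ||x - P_C(x - lam * F(x))||.
    It vanishes exactly at the solutions of VIP (1.1)."""
    return np.linalg.norm(x - proj_C(x - lam * F(x)))

# Toy problem (illustrative): F(x) = x - p over the closed unit ball;
# the unique solution is the projection of p onto the ball.
p = np.array([3.0, 4.0])
F = lambda x: x - p
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = p / np.linalg.norm(p)              # (0.6, 0.8)
print(vip_residual(F, proj_ball, x_star))   # → 0.0 at the solution
```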

In recent decades, many methods have been proposed for solving various classes of VIPs. Korpelevich [3] proposed the famous extragradient method and proved its weak convergence under certain conditions. It is well known that the extragradient method needs to compute two projections onto the feasible set $C$ at each iteration, which increases the computational load, especially when $C$ has a complex structure. To overcome this drawback, some authors proposed improved methods in which only one projection is computed at each iteration, such as Tseng's method [4], the subgradient extragradient method [5], and the projection and contraction method [6]. In line with the content of this paper, we focus on the projection and contraction method. In [6], He proposed a projection and contraction method for solving the monotone VIP (1.1) as follows: $x_0\in H$, and

$$\begin{cases} y_n = P_C(x_n - \tau F(x_n)),\\ d(x_n,y_n) = x_n - y_n - \tau(F(x_n)-F(y_n)),\\ x_{n+1} = x_n - \gamma\beta_n d(x_n,y_n),\quad n\ge 1, \end{cases}\tag{1.2}$$

where $F$ is a monotone and $L$-Lipschitz continuous mapping, $\tau\in(0,\frac{1}{L})$, $\gamma\in(0,2)$, and

$$\beta_n=\begin{cases}\dfrac{\langle x_n-y_n,\, d(x_n,y_n)\rangle}{\|d(x_n,y_n)\|^2}, & \text{if } d(x_n,y_n)\neq 0,\\[1mm] 0, & \text{otherwise.}\end{cases}\tag{1.3}$$

    The author proved that under certain conditions the sequence {xn} generated by (1.2) converges weakly to a solution of the VIP (1.1).
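To make the scheme (1.2)–(1.3) concrete, here is a NumPy sketch applied to a small monotone affine VIP over a box; the operator, the feasible set, and all parameter values below are our own illustrative choices, not from [6]:

```python
import numpy as np

def he_projection_contraction(F, proj_C, x0, tau, gamma=1.9, tol=1e-8, max_iter=5000):
    """Projection and contraction method (1.2)-(1.3): monotone, L-Lipschitz F,
    step tau in (0, 1/L), relaxation gamma in (0, 2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - tau * F(x))
        if np.linalg.norm(x - y) < tol:                 # y_n ≈ x_n: solution found
            return y
        d = x - y - tau * (F(x) - F(y))                 # d(x_n, y_n)
        beta = np.dot(x - y, d) / np.dot(d, d)          # self-adaptive beta_n in (1.3)
        x = x - gamma * beta * d
    return x

# Toy monotone VIP: F(x) = Ax with A + A^T = 2I (so F is monotone), C the unit box.
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
F = lambda x: A @ x
proj_box = lambda x: np.clip(x, -1.0, 1.0)
L = np.linalg.norm(A, 2)
sol = he_projection_contraction(F, proj_box, np.array([0.7, -0.4]), tau=0.9 / L)
```

Since $A$ is invertible and $0$ lies in the interior of $C$, the unique solution of this toy problem is $x^*=0$, so `sol` should be (numerically) the zero vector.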

Following He's result, many modified versions of the projection and contraction method (1.2) have been introduced. For example, Dong et al. [7] proposed the following inertial projection and contraction method for solving the monotone VIP (1.1): $x_0, x_1\in H$, and

$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}),\\ y_n = P_C(w_n - \tau F(w_n)),\\ d(w_n,y_n) = w_n - y_n - \tau(F(w_n)-F(y_n)),\\ x_{n+1} = w_n - \gamma\beta_n d(w_n,y_n),\quad n\ge 1, \end{cases}\tag{1.4}$$

where $F$ is a monotone and $L$-Lipschitz continuous mapping, $\tau\in(0,\frac{1}{L})$, $\gamma\in(0,2)$, $\{\alpha_n\}\subset[0,\alpha]$ with $\alpha<1$, and

$$\beta_n=\begin{cases}\dfrac{\langle w_n-y_n,\, d(w_n,y_n)\rangle}{\|d(w_n,y_n)\|^2}, & \text{if } d(w_n,y_n)\neq 0,\\[1mm] 0, & \text{otherwise.}\end{cases}\tag{1.5}$$

    The authors proved that under certain conditions the sequence {xn} generated by (1.4) converges weakly to a solution of the VIP (1.1).

In 2023, Zhang and Chu [8] proposed a new extrapolation projection and contraction method, combining it with the golden ratio technique [9], for solving the VIP (1.1) as follows: $x_1, y_0\in H$, and

$$\begin{cases} y_n = \dfrac{\phi-1}{\phi}x_n + \dfrac{1}{\phi}y_{n-1},\\ \bar y_n = P_C(y_n - \lambda_n F(y_n)),\\ x_{n+1} = y_n - \gamma\beta_n d(y_n,\bar y_n),\quad n\ge 1, \end{cases}\tag{1.6}$$

where $F$ is a pseudomonotone and $L$-Lipschitz continuous mapping, $\mu\in(0,1)$, $\phi\in(1,\infty)$, $\gamma\in(0,2)$,

$$\beta_n=\begin{cases}\dfrac{\langle y_n-\bar y_n,\, d(y_n,\bar y_n)\rangle}{\|d(y_n,\bar y_n)\|^2}, & \text{if } d(y_n,\bar y_n)\neq 0,\\[1mm] 0, & \text{if } d(y_n,\bar y_n)=0,\end{cases}\tag{1.7}$$

    with

$$d(y_n,\bar y_n)=y_n-\bar y_n-\lambda_n(F(y_n)-F(\bar y_n))$$

    and

$$\lambda_{n+1}=\begin{cases}\min\left\{\dfrac{\mu\|y_n-\bar y_n\|}{\|F(y_n)-F(\bar y_n)\|},\ \xi_n\lambda_n+\tau_n\right\}, & \text{if } F(y_n)\neq F(\bar y_n),\\[1mm] \xi_n\lambda_n+\tau_n, & \text{otherwise,}\end{cases}\tag{1.8}$$

where $\{\xi_n\}\subset[1,\infty)$ with $\sum_{n=1}^\infty(\xi_n-1)<\infty$ and $\{\tau_n\}\subset(0,\infty)$ with $\sum_{n=1}^\infty\tau_n<\infty$. The authors proved that, under certain conditions, the sequence $\{x_n\}$ generated by (1.6) converges weakly to a solution of the VIP (1.1). Note that the sequence $\{\lambda_n\}$ in (1.8) is permitted to increase. Similar definitions of $\lambda_n$ can be found in [10,11].
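The update (1.8) caps the step by the local curvature estimate $\mu\|y_n-\bar y_n\|/\|F(y_n)-F(\bar y_n)\|$ while allowing summably small growth through $\xi_n\lambda_n+\tau_n$. A minimal sketch, with illustrative inputs of our own choosing:

```python
import numpy as np

def next_lambda(lam, y, y_bar, Fy, Fy_bar, mu, xi, tau):
    """Step-size update (1.8): the new step may exceed the old one,
    since xi >= 1 and tau > 0, but the total growth is summably small."""
    grow = xi * lam + tau
    denom = np.linalg.norm(Fy - Fy_bar)
    if denom > 0:
        return min(mu * np.linalg.norm(y - y_bar) / denom, grow)
    return grow

# Illustrative call: mu = 0.6, xi = 1.25, tau = 0.25 (not the paper's settings).
lam = next_lambda(1.0, np.array([1.0, 0.0]), np.zeros(2),
                  np.array([2.0, 0.0]), np.zeros(2), mu=0.6, xi=1.25, tau=0.25)
print(lam)   # → min(0.6 * 1 / 2, 1.5) = 0.3
```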

For more projection and contraction methods and golden ratio methods for solving VIPs, interested readers may refer to [12,13,14].

In (1.3), (1.5), and (1.7), the sequence $\{\beta_n\}$ is computed in a self-adaptive manner involving $d(\cdot,\cdot)$, and the constant $\gamma$ is chosen from $(0,2)$. In fact, besides the projection and contraction methods mentioned above, in almost all versions of projection and contraction methods, the sequence $\{\beta_n\}$ is computed in a similar self-adaptive manner. A natural question is whether the range of $\gamma$ can be relaxed or the constant omitted, and whether $\{\beta_n\}$ can be computed by other means or replaced by a sequence of numbers given in advance at the initialization of the method. It seems that no authors have considered this question. In addition, observe that $y_n$ in (1.6) can be written as $y_n=(1-\frac{1}{\phi})x_n+\frac{1}{\phi}y_{n-1}$, which is a convex combination of $x_n$ and $y_{n-1}$. Replacing $\phi$ with a sequence $\{\phi_n\}\subset(1,\infty)$, $y_n$ becomes $(1-\frac{1}{\phi_n})x_n+\frac{1}{\phi_n}y_{n-1}$. In this case, if $\phi_n\equiv\phi$, then $y_n$ reduces to $y_n$ in (1.6), so $y_n$ in (1.6) is a special case. Based on this idea, in this paper we propose a modified version of the method (1.6) in which the constant $\gamma$ is omitted and the sequence $\{\beta_n\}$ is replaced by a sequence of numbers in $(0,1)$. The main differences between our method and (1.6) are as follows:

    (1) the constant ϕ in (1.6) is replaced by a sequence {ϕn};

(2) the constant $\gamma\in(0,2)$ is not needed, and the sequence $\{\beta_n\}$ in (1.7) is replaced by a sequence of numbers in $(0,1)$, which is the main novelty of our method.

We prove that, under certain conditions, the proposed method converges strongly to a solution of the VIP (1.1). Numerical examples are presented to illustrate the effectiveness of our method. Although our method is a slight modification of the method (1.6), the numerical results show that it has a faster convergence rate than the method (1.6) and other related methods.

In this section, let $H$ be a real Hilbert space and $C$ be a nonempty, closed, and convex subset of $H$. Denote weak convergence by the symbol $\rightharpoonup$. For any $x,y\in H$, it holds that

$$\|x+y\|^2 \le \|x\|^2 + 2\langle y, x+y\rangle. \tag{2.1}$$

Definition 2.1. [15] A mapping $F:H\to H$ is said to be

(i) pseudomonotone on $C$ if

$$\forall x,y\in C,\quad \langle F(x), y-x\rangle \ge 0 \implies \langle F(y), y-x\rangle \ge 0;$$

(ii) pseudomonotone with respect to a subset $B\subset C$ on $C$ if

$$\forall x\in B,\ \forall y\in C,\quad \langle F(x), y-x\rangle \ge 0 \implies \langle F(y), y-x\rangle \ge 0;$$

(iii) $L$-Lipschitz continuous on $H$ if there exists a constant $L>0$ such that

$$\forall x,y\in H,\quad \|F(x)-F(y)\| \le L\|x-y\|;$$

(iv) sequentially weakly continuous if for each sequence $\{x_n\}$, one has

$$x_n \rightharpoonup x \implies F(x_n) \rightharpoonup F(x).$$

Let $C$ be a nonempty, closed, and convex subset of $H$. For any $x\in H$, there exists a unique $z\in C$ such that

$$\|z-x\| = \min_{y\in C}\|y-x\|.$$

Denote the element $z$ by $P_C(x)$. The mapping $P_C:H\to C$ is called the metric projection from $H$ onto $C$.
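For simple feasible sets the metric projection has a closed form, which is what makes projection-type methods practical. A NumPy sketch of three standard cases (our own illustration; the paper's experiments use a box and a polyhedron):

```python
import numpy as np

def proj_box(x, lo, hi):
    """P_C for a box C = [lo_1,hi_1] x ... x [lo_m,hi_m]: componentwise clipping."""
    return np.clip(x, lo, hi)

def proj_ball(x, r=1.0):
    """P_C for the closed ball of radius r centered at the origin: radial scaling."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(x, a, b):
    """P_C for the halfspace C = {y : <a, y> <= b}: shift along the normal a."""
    viol = float(a @ x) - b
    return x if viol <= 0 else x - (viol / float(a @ a)) * a
```

Each formula can be verified against the characterization in Lemma 2.1: $z=P_C(x)$ exactly when $\langle x-z, z-y\rangle\ge 0$ for every $y\in C$.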

Lemma 2.1. [15] $P_C$ has the following property:

$$z = P_C(x) \iff \langle x-z, z-y\rangle \ge 0,\quad \forall y\in C.$$

Lemma 2.2. [16] Assume that $\{a_n\}\subset[0,\infty)$, $\{b_n\}\subset(0,1)$, and $\{c_n\}$ is a sequence of real numbers satisfying

$$a_{n+1} \le (1-b_n)a_n + c_n,\quad \forall n\in\mathbb{N}.$$

If $\sum_{n=1}^\infty b_n=\infty$ and $\limsup_{n\to\infty} c_n/b_n \le 0$ hold, then $\lim_{n\to\infty} a_n=0$.

Lemma 2.3. [17] Let $\{a_n\}$ be a sequence of non-negative real numbers for which there exists a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ such that $a_{n_j} < a_{n_j+1}$ for all $j\in\mathbb{N}$. Then there exists a nondecreasing sequence $\{m_k\}\subset\mathbb{N}$ such that $\lim_{k\to\infty}m_k=\infty$ and the following properties are satisfied for all (sufficiently large) $k\in\mathbb{N}$:

$$a_{m_k} \le a_{m_k+1} \quad\text{and}\quad a_k \le a_{m_k+1}.$$

In this section, let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$ and $F:H\to H$ be a mapping. Consider the following conditions:

(A1) $S\neq\emptyset$.

(A2) $\langle F(y), y-z\rangle \ge 0$ for all $z\in S$ and $y\in C$.

(A3) $F$ is $L$-Lipschitz continuous.

(A4) For any sequence $\{u_n\}\subset C$ with $u_n\rightharpoonup u$, one has $\langle F(u), u-y\rangle \le \limsup_{n\to\infty}\langle F(u_n), u_n-y\rangle$ for all $y\in C$.

Remark 3.1. Condition (A2), used for example in [18], is weaker than the pseudomonotonicity assumption. In fact, from Conditions (A1) and (A2) it follows that $F$ is pseudomonotone with respect to $S$ on $C$. In addition, it should be noted that Condition (A4) is not related to the condition that $F$ is sequentially weakly continuous on $C$.

    For solving the VIP (1.1), we propose the projection and contraction method as follows.

Algorithm 3.1.
Initialization. Choose the initial points $y_0, x_1\in H$, the parameters $\tau\in(0,1)$, $\mu\in(0,1)$, $\gamma\in(0,\infty)$, the sequences $\{\phi_n\}\subset[\phi,\infty)$ with $\phi>1$, $\{\beta_n\}\subset[\beta,1)$ with $\beta>0$, and $\{\psi_n\}_{n=1}^\infty\subset(0,1)$ satisfying
$$\lim_{n\to\infty}\psi_n=0 \quad\text{and}\quad \sum_{n=1}^\infty\psi_n=\infty.$$
Set $n=1$.
Step 1. Compute $w_n=(1-\psi_n)y_n$ with
$$y_n=\frac{\phi_n-1}{\phi_n}x_n+\frac{1}{\phi_n}y_{n-1}.$$
Step 2. Compute
$$\bar y_n=P_C(w_n-\lambda_n F(w_n)),$$
where $\lambda_n$ is chosen to be the largest $\lambda\in\{\gamma\tau^l: l=0,1,2,\ldots\}$ such that
$$\lambda\|F(w_n)-F(\bar y_n)\|\le\mu\|w_n-\bar y_n\|.\tag{3.1}$$
If $w_n=\bar y_n$ or $F(\bar y_n)=0$, then stop and $\bar y_n\in S$. Otherwise, go to Step 3.
Step 3. Compute
$$x_{n+1}=w_n-\beta_n d(w_n,\bar y_n),$$
where
$$d(w_n,\bar y_n)=w_n-\bar y_n-\lambda_n(F(w_n)-F(\bar y_n)).$$
Step 4. Set $n=n+1$ and go to Step 1.

Remark 3.2. In Algorithm 3.1, the definition of $w_n$ has the same form as the ones in [19,20,21,22]. Clearly, if $\phi_n\equiv\phi$ with $\phi>1$, then $y_n$ reduces to the one in (1.6). In particular, if $\phi_n=\frac{1+\sqrt5}{2}$, then $\phi_n$ is the golden ratio. In the existing projection and contraction methods, $\{\beta_n\}$ is computed in a self-adaptive manner like (1.3), (1.5), and (1.7), and its value strictly depends on the result of that self-adaptive formula. However, in our method, $\{\beta_n\}$ is a sequence of numbers in $(0,1)$ given in advance. Since there is no other restriction imposed on $\{\beta_n\}$, we have greater freedom to choose or adjust $\{\beta_n\}$ in advance. This is the main difference between our method and the existing projection and contraction methods.
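The steps of Algorithm 3.1 can be sketched in NumPy as follows. This is only an illustrative implementation: the default values of $\tau$, $\mu$, $\gamma$, $\phi_n$, $\beta_n$, $\psi_n$ and the toy problem at the end are our own choices, not the paper's test settings.

```python
import numpy as np

def algorithm_3_1(F, proj_C, y0, x1, tau=0.5, mu=0.4, gamma=2.0,
                  phi=lambda n: (1 + 5 ** 0.5) / 2 + 1.0 / n,
                  beta=lambda n: 0.5, psi=lambda n: 1.0 / (n + 1),
                  tol=1e-8, max_iter=5000):
    """Sketch of Algorithm 3.1 with a preselected beta_n in (0, 1)."""
    y_prev, x = np.asarray(y0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        phi_n = phi(n)
        y = (phi_n - 1) / phi_n * x + y_prev / phi_n       # Step 1: extrapolation term
        w = (1 - psi(n)) * y                               # Halpern-type damping
        lam = gamma                                        # Step 2: Armijo search (3.1)
        y_bar = proj_C(w - lam * F(w))
        while lam * np.linalg.norm(F(w) - F(y_bar)) > mu * np.linalg.norm(w - y_bar):
            lam *= tau
            y_bar = proj_C(w - lam * F(w))
        if np.linalg.norm(w - y_bar) < tol:                # stopping criterion
            return y_bar
        d = w - y_bar - lam * (F(w) - F(y_bar))            # Step 3
        x = w - beta(n) * d
        y_prev = y
    return x

# Toy monotone VIP: F(x) = Ax with A + A^T = 2I, C the unit box; solution 0 = P_S(0).
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
sol = algorithm_3_1(lambda x: A @ x, lambda x: np.clip(x, -1.0, 1.0),
                    y0=np.array([0.5, 0.5]), x1=np.array([0.8, -0.6]))
```

Note how $\beta_n$ enters only as a preselected number (here the constant $1/2$), with no $d(\cdot,\cdot)$-based formula to evaluate.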

    The following remark shows that the stopping criterion in Step 2 can work well.

Remark 3.3. By the definition of $\bar y_n$ and Lemma 2.1, we have

$$\langle \bar y_n-(w_n-\lambda_n F(w_n)),\, y-\bar y_n\rangle \ge 0,\quad \forall y\in C.$$

If $\bar y_n=w_n$ for some $n\in\mathbb{N}$, then it holds that

$$\lambda_n\langle F(\bar y_n), y-\bar y_n\rangle \ge 0,\quad \forall y\in C.$$

Since $\lambda_n>0$, we have

$$\langle F(\bar y_n), y-\bar y_n\rangle \ge 0,\quad \forall y\in C.$$

It follows that $\bar y_n\in S$. In addition, it is clear that $F(\bar y_n)=0$ leads to $\bar y_n\in S$.

In the rest of this section, to establish the convergence of Algorithm 3.1, we assume that $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, and $\{\bar y_n\}$ are infinite sequences.

Lemma 3.1. [23] Assume that (A3) holds. Then the Armijo line search rule (3.1) is well defined, and $\min\{\gamma, \frac{\mu\tau}{L}\} \le \lambda_n \le \gamma$ for all $n\in\mathbb{N}$.

Lemma 3.2. Assume that (A1)–(A3) hold. Then the sequences $\{x_n\}$, $\{y_n\}$, $\{\bar y_n\}$, and $\{w_n\}$ are bounded.

Proof. For each $z\in S$ and $n\in\mathbb{N}$, by the definition of $\bar y_n$ and Lemma 2.1, it follows that

$$\langle \bar y_n-z,\, w_n-\bar y_n-\lambda_n F(w_n)\rangle \ge 0. \tag{3.2}$$

By (A2) it holds that

$$\langle \bar y_n-z, F(\bar y_n)\rangle \ge 0,$$

which, together with $\lambda_n>0$, leads to

$$\lambda_n\langle \bar y_n-z, F(\bar y_n)\rangle \ge 0. \tag{3.3}$$

Combining (3.2) with (3.3), we have

$$\langle \bar y_n-z,\, d(w_n,\bar y_n)\rangle = \langle \bar y_n-z,\, w_n-\bar y_n-\lambda_n(F(w_n)-F(\bar y_n))\rangle \ge 0.$$

So

$$\langle w_n-z,\, d(w_n,\bar y_n)\rangle \ge \langle w_n-\bar y_n,\, d(w_n,\bar y_n)\rangle.$$

Multiplying both sides of the above inequality by $\beta_n$, we get

$$\beta_n\langle w_n-z,\, d(w_n,\bar y_n)\rangle \ge \beta_n\langle w_n-\bar y_n,\, d(w_n,\bar y_n)\rangle. \tag{3.4}$$

Substituting (3.4) into

$$\|x_{n+1}-z\|^2 = \|w_n-z-\beta_n d(w_n,\bar y_n)\|^2 = \|w_n-z\|^2 - 2\beta_n\langle w_n-z, d(w_n,\bar y_n)\rangle + \beta_n^2\|d(w_n,\bar y_n)\|^2,$$

we have

$$\begin{aligned} \|x_{n+1}-z\|^2 &\le \|w_n-z\|^2 - 2\beta_n\langle w_n-\bar y_n, d(w_n,\bar y_n)\rangle + \beta_n^2\|d(w_n,\bar y_n)\|^2\\ &\le \|w_n-z\|^2 - 2\beta_n\langle w_n-\bar y_n, d(w_n,\bar y_n)\rangle + \beta_n\|d(w_n,\bar y_n)\|^2\\ &= \|w_n-z\|^2 - \beta_n\|w_n-\bar y_n\|^2 + \beta_n\lambda_n^2\|F(w_n)-F(\bar y_n)\|^2. \end{aligned}$$

Furthermore, by (3.1) we get

$$\|x_{n+1}-z\|^2 \le \|w_n-z\|^2 - \beta_n\|w_n-\bar y_n\|^2 + \beta_n\mu^2\|w_n-\bar y_n\|^2 = \|w_n-z\|^2 - \beta_n(1-\mu^2)\|w_n-\bar y_n\|^2. \tag{3.5}$$

On the other hand, by the definition of $y_{n+1}$ we have

$$\begin{aligned} \|x_{n+1}-z\|^2 &= \left\|\frac{\phi_{n+1}}{\phi_{n+1}-1}(y_{n+1}-z) - \frac{1}{\phi_{n+1}-1}(y_n-z)\right\|^2\\ &= \frac{\phi_{n+1}}{\phi_{n+1}-1}\|y_{n+1}-z\|^2 - \frac{1}{\phi_{n+1}-1}\|y_n-z\|^2 + \frac{\phi_{n+1}}{(\phi_{n+1}-1)^2}\|y_{n+1}-y_n\|^2\\ &= \frac{\phi_{n+1}}{\phi_{n+1}-1}\|y_{n+1}-z\|^2 - \frac{1}{\phi_{n+1}-1}\|y_n-z\|^2 + \frac{1}{\phi_{n+1}}\|x_{n+1}-y_n\|^2. \end{aligned}\tag{3.6}$$

Substituting (3.6) into (3.5), we obtain

$$\begin{aligned} &\frac{\phi_{n+1}}{\phi_{n+1}-1}\|y_{n+1}-z\|^2 - \frac{1}{\phi_{n+1}-1}\|y_n-z\|^2 + \frac{1}{\phi_{n+1}}\|x_{n+1}-y_n\|^2\\ &\quad\le \|w_n-z\|^2 - \beta_n(1-\mu^2)\|w_n-\bar y_n\|^2\\ &\quad= \|(1-\psi_n)(y_n-z)+\psi_n(-z)\|^2 - \beta_n(1-\mu^2)\|w_n-\bar y_n\|^2\\ &\quad\le (1-\psi_n)\|y_n-z\|^2 + \psi_n\|z\|^2 - \beta_n(1-\mu^2)\|w_n-\bar y_n\|^2. \end{aligned}\tag{3.7}$$

It follows that

$$\begin{aligned} \|y_{n+1}-z\|^2 &\le \left[1-\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\right]\|y_n-z\|^2 + \psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\|z\|^2 - \beta_n(1-\mu^2)\Big(1-\frac{1}{\phi_{n+1}}\Big)\|w_n-\bar y_n\|^2\\ &\le \left[1-\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\right]\|y_n-z\|^2 + \psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\|z\|^2\\ &\le \max\{\|y_n-z\|^2, \|z\|^2\} \le \cdots \le \max\{\|y_1-z\|^2, \|z\|^2\},\quad \forall n\in\mathbb{N}. \end{aligned}\tag{3.8}$$

Hence $\{y_n\}$ is bounded. The boundedness of $\{w_n\}$ follows from $\|w_n\|=(1-\psi_n)\|y_n\|\le\|y_n\|$, which also implies that $\{P_C w_n\}$ is bounded. In addition, $\{F(w_n)\}$ is bounded because of (A3). By the definition of $\bar y_n$ and the nonexpansiveness of $P_C$, we have

$$\|\bar y_n\| \le \|\bar y_n-P_C w_n\| + \|P_C w_n\| = \|P_C(w_n-\lambda_n F(w_n))-P_C w_n\| + \|P_C w_n\| \le \lambda_n\|F(w_n)\| + \|P_C w_n\|,\quad \forall n\in\mathbb{N}.$$

From Lemma 3.1 it follows that $\{\bar y_n\}$ is bounded. Finally, by (3.5) and the boundedness of $\{w_n\}$, we obtain the boundedness of $\{x_n\}$. The proof is complete.

    In the following lemma, we denote the set of weak cluster points of {wn} by ωw(wn).

Lemma 3.3. Assume that (A1)–(A3) hold. If $\lim_{n\to\infty}\|w_n-\bar y_n\|=0$, then $\omega_w(w_n)\subset S$.

Proof. Since $\{w_n\}$ is bounded, $\omega_w(w_n)\neq\emptyset$. Choose $\bar w\in\omega_w(w_n)$ arbitrarily and let $\{w_{n_k}\}$ be a subsequence of $\{w_n\}$ such that $w_{n_k}\rightharpoonup\bar w$ as $k\to\infty$. From $\lim_{k\to\infty}\|w_{n_k}-\bar y_{n_k}\|=0$ it follows that $\bar y_{n_k}\rightharpoonup\bar w$, and so $\bar w\in C$. For each $y\in C$, from Lemma 2.1 and the definition of $\bar y_n$ we have

$$\langle \bar y_{n_k}-(w_{n_k}-\lambda_{n_k}F(w_{n_k})),\, \bar y_{n_k}-y\rangle \le 0,$$

or equivalently,

$$\langle F(w_{n_k}), w_{n_k}-y\rangle \le \langle F(w_{n_k}), w_{n_k}-\bar y_{n_k}\rangle + \frac{1}{\lambda_{n_k}}\langle w_{n_k}-\bar y_{n_k}, \bar y_{n_k}-y\rangle. \tag{3.9}$$

Taking $\limsup_{k\to\infty}$ in (3.9), by $\lim_{k\to\infty}\|w_{n_k}-\bar y_{n_k}\|=0$ and Lemma 3.1, we get

$$\limsup_{k\to\infty}\langle F(w_{n_k}), w_{n_k}-y\rangle \le 0. \tag{3.10}$$

Since $F$ is Lipschitz continuous and $\lim_{k\to\infty}\|w_{n_k}-\bar y_{n_k}\|=0$, we have

$$\lim_{k\to\infty}\|F(w_{n_k})-F(\bar y_{n_k})\|=0. \tag{3.11}$$

Note that

$$\langle F(\bar y_{n_k}), \bar y_{n_k}-y\rangle = \langle F(\bar y_{n_k})-F(w_{n_k}), w_{n_k}-y\rangle + \langle F(w_{n_k}), w_{n_k}-y\rangle + \langle F(\bar y_{n_k}), \bar y_{n_k}-w_{n_k}\rangle. \tag{3.12}$$

Taking $\limsup_{k\to\infty}$ in (3.12) and using (3.10), (3.11), and (A4), we get

$$\langle F(\bar w), \bar w-y\rangle \le 0,$$

which, together with $\bar w\in C$ and the arbitrariness of $y\in C$, leads to $\bar w\in S$. The proof is complete.

We are now in a position to give the main result on Algorithm 3.1.

Theorem 3.1. Assume that (A1)–(A4) hold. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to $x^*=P_S(0)$.

Proof. For each $n\in\mathbb{N}$, from the definition of $\{w_n\}$ and (2.1) it follows that

$$\|w_n-x^*\|^2 = \|(1-\psi_n)(y_n-x^*)+\psi_n(-x^*)\|^2 \le (1-\psi_n)\|y_n-x^*\|^2 + 2\psi_n\langle -x^*, w_n-x^*\rangle. \tag{3.13}$$

Replacing $z$ in (3.5) and (3.6) with $x^*$ and using (3.13), we get

$$\frac{\phi_{n+1}}{\phi_{n+1}-1}\|y_{n+1}-x^*\|^2 - \frac{1}{\phi_{n+1}-1}\|y_n-x^*\|^2 + \frac{1}{\phi_{n+1}}\|x_{n+1}-y_n\|^2 = \|x_{n+1}-x^*\|^2 \le \|w_n-x^*\|^2 \le (1-\psi_n)\|y_n-x^*\|^2 + 2\psi_n\langle -x^*, w_n-x^*\rangle.$$

It follows that

$$\begin{aligned} \|y_{n+1}-x^*\|^2 &\le \left[1-\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\right]\|y_n-x^*\|^2 - \frac{\phi_{n+1}-1}{\phi_{n+1}^2}\|x_{n+1}-y_n\|^2 + 2\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\langle -x^*, w_n-x^*\rangle\\ &\le \left[1-\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\right]\|y_n-x^*\|^2 + 2\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\langle -x^*, w_n-x^*\rangle. \end{aligned}\tag{3.14}$$

Next, we divide the proof into the following two cases to show that $\lim_{n\to\infty}\|y_n-x^*\|=0$.

Case 1. Suppose there exists $N\in\mathbb{N}$ such that $\{\|y_n-x^*\|^2\}$ is monotonically nonincreasing for all $n>N$. Since $\{w_n\}$ is bounded, there exists a subsequence $\{w_{n_k}\}$ such that $w_{n_k}\rightharpoonup\bar w$. Without loss of generality, we may assume that

$$\limsup_{n\to\infty}\langle -x^*, w_n-x^*\rangle = \lim_{k\to\infty}\langle -x^*, w_{n_k}-x^*\rangle = \langle -x^*, \bar w-x^*\rangle. \tag{3.15}$$

Hence $\{\|y_n-x^*\|^2\}$ is convergent. From (3.8) with $z=x^*$ we have

$$\beta(1-\mu^2)\Big(1-\frac{1}{\phi_{n+1}}\Big)\|w_n-\bar y_n\|^2 \le \beta_n(1-\mu^2)\Big(1-\frac{1}{\phi_{n+1}}\Big)\|w_n-\bar y_n\|^2 \le \left[1-\psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\right]\|y_n-x^*\|^2 - \|y_{n+1}-x^*\|^2 + \psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)\|x^*\|^2. \tag{3.16}$$

Since $\phi_n\ge\phi>1$ and $\lim_{n\to\infty}\psi_n=0$, letting $n\to\infty$ in (3.16), we get

$$\lim_{n\to\infty}\|w_n-\bar y_n\|=0. \tag{3.17}$$

By Lemma 3.3 and (3.17), we have $\bar w\in S$. From Lemma 2.1 it follows that

$$\langle -x^*, \bar w-x^*\rangle \le 0,$$

which, together with (3.15), leads to

$$\limsup_{n\to\infty}\langle -x^*, w_n-x^*\rangle \le 0. \tag{3.18}$$

By the hypotheses on $\{\phi_n\}$ and $\{\psi_n\}$, we have

$$\sum_{n=1}^\infty \psi_n\Big(1-\frac{1}{\phi_{n+1}}\Big)=\infty. \tag{3.19}$$

Applying Lemma 2.2 with $a_n=\|y_n-x^*\|^2$, $b_n=\psi_n(1-\frac{1}{\phi_{n+1}})$, and $c_n=2\psi_n(1-\frac{1}{\phi_{n+1}})\langle -x^*, w_n-x^*\rangle$ to (3.14), and using (3.18) and (3.19), we obtain $\lim_{n\to\infty}\|y_n-x^*\|=0$.

Case 2. Suppose that there exists a subsequence $\{m_k\}\subset\mathbb{N}$ with $m_k\to\infty$ such that

$$\|y_{m_k}-x^*\| \le \|y_{m_k+1}-x^*\|,\quad \forall k\in\mathbb{N}. \tag{3.20}$$

From Lemma 2.3 it follows that

$$\|y_k-x^*\| \le \|y_{m_k+1}-x^*\|,\quad \forall k\in\mathbb{N}. \tag{3.21}$$

Replacing $n$ in (3.14) with $m_k$ and using (3.20), we get

$$\begin{aligned} \|y_{m_k+1}-x^*\|^2 &\le \left[1-\psi_{m_k}\Big(1-\frac{1}{\phi_{m_k+1}}\Big)\right]\|y_{m_k}-x^*\|^2 + 2\psi_{m_k}\Big(1-\frac{1}{\phi_{m_k+1}}\Big)\langle -x^*, w_{m_k}-x^*\rangle\\ &\le \left[1-\psi_{m_k}\Big(1-\frac{1}{\phi_{m_k+1}}\Big)\right]\|y_{m_k+1}-x^*\|^2 + 2\psi_{m_k}\Big(1-\frac{1}{\phi_{m_k+1}}\Big)\langle -x^*, w_{m_k}-x^*\rangle, \end{aligned}$$

which implies that

$$\|y_{m_k+1}-x^*\|^2 \le 2\langle -x^*, w_{m_k}-x^*\rangle.$$

By an argument similar to the one showing (3.18), we can get

$$\limsup_{k\to\infty}\langle -x^*, w_{m_k}-x^*\rangle \le 0.$$

Hence

$$\limsup_{k\to\infty}\|y_{m_k+1}-x^*\|^2 \le 2\limsup_{k\to\infty}\langle -x^*, w_{m_k}-x^*\rangle \le 0. \tag{3.22}$$

Combining (3.21) and (3.22), we have

$$\limsup_{k\to\infty}\|y_k-x^*\|^2 \le \limsup_{k\to\infty}\|y_{m_k+1}-x^*\|^2 = 0.$$

It follows that $\lim_{k\to\infty}\|y_k-x^*\|^2=0$. By the above two cases, we obtain $\lim_{n\to\infty}\|y_n-x^*\|=0$. Since $\lim_{n\to\infty}\psi_n=0$, one has

$$\|w_n-x^*\| = \|(1-\psi_n)y_n-x^*\| \le \|y_n-x^*\| + \psi_n\|y_n\| \to 0 \quad\text{as } n\to\infty. \tag{3.23}$$

Finally, by (3.5) and (3.23) we have

$$\|x_{n+1}-x^*\| \le \|w_n-x^*\| \to 0 \quad\text{as } n\to\infty.$$

    The proof is complete.

In this section, we provide numerical examples to test the convergence of Algorithm 3.1 and compare the numerical results with some related methods. The codes are executed in MATLAB 2016a on a PC (Surface Pro 5) with an Intel(R) Core(TM) i5-7300U CPU at 2.60 GHz and 8.00 GB of RAM.

    We first present the following example to test the effectiveness of Algorithm 3.1 for solving the non-monotone VIP (1.1).

Example 4.1. [24] Let $H=\mathbb{R}^m$, $C=[0,\pi]\times[0,\pi]\times[0,1]\times\cdots\times[0,1]$, and $F:H\to H$ be the mapping defined by

$$F(x)=(x_2+\cos(x_2),\ x_1+\sin(x_1),\ x_3,\ldots,x_m),\quad x=(x_1,\ldots,x_m)\in H.$$

It follows that $S=\{0\}$, Conditions (A1)–(A4) hold, and $F$ is not pseudomonotone; see [24].
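Since $F(0)=(1,0,\ldots,0)$ and the first coordinate of every $y\in C$ is nonnegative, one can see directly that $0\in S$: $\langle F(0), y-0\rangle = y_1 \ge 0$. The following NumPy sketch (with a small illustrative dimension $m=5$) sets up the example and checks this solution property numerically on random points of $C$:

```python
import numpy as np

m = 5
lo = np.zeros(m)
hi = np.array([np.pi, np.pi] + [1.0] * (m - 2))    # C = [0,pi]^2 x [0,1]^(m-2)

def F(x):
    """The mapping of Example 4.1 (only the first two coordinates are nonlinear)."""
    out = x.copy()
    out[0] = x[1] + np.cos(x[1])
    out[1] = x[0] + np.sin(x[0])
    return out

proj_C = lambda x: np.clip(x, lo, hi)

# x* = 0 solves (1.1): F(0) = (1, 0, ..., 0), so <F(0), y - 0> = y_1 >= 0 on C.
samples = np.random.default_rng(0).uniform(lo, hi, size=(1000, m))
ok = bool((samples @ F(np.zeros(m)) >= 0).all())
```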

We choose $\phi_n=\frac{1+\sqrt5}{2}+\frac1n$, $\psi_n=\frac{10}{10+n}$, $\beta_n=\frac{1}{10}+\frac{n}{100+10n}$, and use $\|x_n\|\le 10^{-6}$ as the stopping criterion of Algorithm 3.1. The results computed for $\{x_n\}$ by Algorithm 3.1 with different initial points $y_0, x_1$, different dimensions $m$, and different parameters $\gamma$, $\tau$, and $\mu$ are shown in Figure 1. Table 1 gives the CPU time (in seconds) of Algorithm 3.1 for this example. From the curves in Figure 1, we see that the sequence $\{x_n\}$ converges to $0$.

    Figure 1.  Numerical results of Algorithm 3.1 for Example 4.1.
    Table 1.  CPU time of Algorithm 3.1 for Example 4.1.
    (γ,μ,τ) m=1000 m=2000 m=5000 m=10000
    (2,0.1,0.2) 0.0546 0.0718 0.2739 0.3488
    (7,0.5,0.7) 0.0853 0.0743 0.2751 0.2165
    (10,0.6,0.9) 0.0989 0.0758 0.2687 0.3210


Next, we use the following example to compare the numerical results of our Algorithm 3.1 with the algorithm (1.2) (denoted by Algorithm H), the algorithm (1.6) (denoted by Algorithm Z), and Algorithm 3.1 in [13] (denoted by Algorithm T).

Example 4.2. [19,25] Let $F:\mathbb{R}^m\to\mathbb{R}^m$ be the mapping defined by $F(x)=Mx+q$ with

$$M=NN^T+B+D,$$

where $N$ is an $m\times m$ matrix, $B$ is an $m\times m$ skew-symmetric matrix, and $D$ is an $m\times m$ diagonal matrix with nonnegative diagonal entries (hence $M$ is positive semidefinite). The feasible set is

$$C=\{x\in\mathbb{R}^m: Ex\le f\},$$

where $E$ is a $k\times m$ matrix and $f\in\mathbb{R}^k$ is a vector. It follows that $F$ is monotone and Lipschitz continuous with constant $L=\|M\|$. All entries of $N$ and $B$ are generated randomly and uniformly in $[-2,2]$, those of $E$ and $f$ in $[0,1]$, and those of $D$ in $(0,2)$. Conditions (A1)–(A4) are satisfied.
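The structure $M=NN^T+B+D$ guarantees monotonicity of $F$ because $x^TBx=0$ for the skew part and the other two summands are positive semidefinite. A NumPy sketch of the data generation (the function name and seeds are our own; the original experiments were run in MATLAB):

```python
import numpy as np

def make_problem(m, k, seed=0):
    """Random data for Example 4.2. M = N N^T + B + D is positive semidefinite:
    N N^T >= 0, D has nonnegative diagonal, and the skew part B gives x^T B x = 0."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-2, 2, (m, m))
    S = rng.uniform(-2, 2, (m, m))
    B = S - S.T                            # skew-symmetric by construction
    D = np.diag(rng.uniform(0, 2, m))
    M = N @ N.T + B + D
    E = rng.uniform(0, 1, (k, m))
    f = rng.uniform(0, 1, k)
    return M, E, f

M, E, f = make_problem(m=10, k=5)
L = np.linalg.norm(M, 2)                   # Lipschitz constant of F(x) = Mx + q
x = np.random.default_rng(1).standard_normal(10)
```

For any vector `x`, the quadratic form `x @ M @ x` equals $\|N^Tx\|^2 + x^TDx \ge 0$, which is exactly the monotonicity of $F$.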

We choose the initial points $y_0=x_1=(1,\ldots,1)^T$ for Algorithm 3.1 and Algorithm Z, $x_0=(1,\ldots,1)^T$ for Algorithm H, and $x_0=x_1=(1,\ldots,1)^T$ for Algorithm T. The parameters and control sequences for the algorithms are as follows:

Algorithm 3.1: $\gamma=2$, $\tau=0.3$, $\mu=0.2$, $\psi_n=\frac{100}{100+n}$, $\phi_n=\frac{\sqrt5+1}{2}+\frac1n$, $\beta_n=\frac{1}{10}+\frac{1}{10+n}$;

Algorithm H: $\tau=\frac{7}{10L}$, $\gamma=\frac32$;

Algorithm Z: $\phi=\frac{\sqrt5+1}{2}$, $\lambda_1=\frac{1}{20}$, $\gamma=\frac32$, $\mu=\frac35$, $\xi_n=1+\frac{1}{n^2}$, $\tau_n=\frac{1}{n^2}$;

Algorithm T: $\phi=0.6$, $\beta=0.8$, $\gamma=2$, $\delta=0.4$, $l=0.5$, $\tau=1.5$, $\epsilon_n=\frac{100}{(1+n)^2}$, $\xi_n=\frac{1}{1+n}$, $\sigma_n=0.5(1-\xi_n)$.

    We consider the following two cases for this example.

Case 1. $q=0$. In this case, $S=\{0\}$. We use $\|x_n\|<10^{-3}$ or a maximum of $n=1000$ iterations as the common stopping criterion of these algorithms. The numerical results are shown in Figure 2, and the CPU time (in seconds) is given in Table 2.

    Figure 2.  Numerical results of used algorithms for Example 4.2 with Case 1.
    Table 2.  CPU time of used algorithms for Example 4.2 with Case 1.
    Algorithm 3.1 Algorithm H Algorithm Z Algorithm T
    CPU time CPU time CPU time CPU time
    (k,m)=(5,10) 16.1047 55.6034 88.2832 68.8139
    (k,m)=(10,30) 30.9358 197.4414 297.8164 189.2787
    (k,m)=(30,50) 33.6395 250.7686 236.5039 78.8249
    (k,m)=(50,100) 66.9299 210.3711 182.2526 274.8092


Case 2. All entries of $q$ are generated randomly and uniformly in $(0,2)$. In this case, the solution of VIP (1.1) is not known. We use $\|x_n-x_{n-1}\|<10^{-3}$ as the common stopping criterion of these algorithms. The numerical results are shown in Figure 3, and the CPU time (in seconds) is given in Table 3.

    Figure 3.  Numerical results of used algorithms for Example 4.2 with Case 2.
    Table 3.  CPU time of used algorithms for Example 4.2 with Case 2.
    Algorithm 3.1 Algorithm H Algorithm Z Algorithm T
    CPU time CPU time CPU time CPU time
    (k,m)=(5,10) 2.2174 21.4457 34.7764 23.5107
    (k,m)=(10,30) 21.7007 50.0565 141.1718 136.3548
    (k,m)=(30,50) 49.0367 59.7578 58.7833 58.1909
    (k,m)=(50,100) 36.1365 108.7671 188.5018 123.2509


From Figures 2 and 3 and Tables 2 and 3, we see that our Algorithm 3.1 requires fewer iterations and less CPU time than the other three algorithms, which shows that our method has an obvious competitive advantage over them for this numerical example.

In this paper, we proposed a modified projection and contraction method for solving a nonmonotone variational inequality problem in a Hilbert space. Without prior knowledge of the Lipschitz constant of the mapping, we use a line search rule to update the step size in our algorithm. The main novelty of our method lies in using a sequence of numbers in $(0,1)$ to replace the sequence $\{\beta_n\}$, which is computed in a self-adaptive manner involving $d(\cdot,\cdot)$ in other projection and contraction methods. Under certain conditions, we prove the strong convergence of the proposed method. Numerical examples are given to illustrate the effectiveness of our method, and the numerical results show that it has an obvious competitive advantage over the other related algorithms.

    Limei Xue : Methodology, computing numerical examples, writing-original draft; Jianmin Song: Formal analysis, writing-original draft; Shenghua Wang: Design of algorithm, proof of conclusions. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    All authors declare no conflicts of interest in this paper.



[1] S. Migórski, S. D. Zeng, Hyperbolic hemivariational inequalities controlled by evolution equations with application to adhesive contact model, Nonlinear Anal. Real, 43 (2018), 121–143. https://doi.org/10.1016/j.nonrwa.2018.02.008
[2] S. D. Zeng, S. Migórski, Noncoercive hyperbolic variational inequalities with applications to contact mechanics, J. Math. Anal. Appl., 455 (2017), 619–637. https://doi.org/10.1016/j.jmaa.2017.05.072
[3] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[4] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806
[5] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3
[6] B. S. He, A class of projection and contraction methods for monotone variational inequalities, Appl. Math. Optim., 35 (1997), 69–76. https://doi.org/10.1007/BF02683320
[7] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. https://doi.org/10.1007/s10898-017-0506-0
[8] C. J. Zhang, Z. Y. Chu, New extrapolation projection contraction algorithms based on the golden ratio for pseudo-monotone variational inequalities, AIMS Math., 8 (2023), 23291–23312. https://doi.org/10.3934/math.20231184
[9] Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184 (2020), 383–410. https://doi.org/10.1007/s10107-019-01416-w
[10] V. T. Duong, S. Reich, Y. Shehu, O. S. Iyiola, Novel projection methods for solving variational inequality problems and applications, Numer. Algor., 93 (2023), 1105–1135. https://doi.org/10.1007/s11075-022-01457-x
[11] O. K. Oyewole, H. A. Abass, O. J. Ogunsola, An improved subgradient extragradient self-adaptive algorithm based on the golden ratio technique for variational inequality problems in Banach spaces, J. Comput. Appl. Math., 460 (2025), 116420. https://doi.org/10.1016/j.cam.2024.116420
[12] B. Tan, S. X. Li, S. Y. Cho, Inertial projection and contraction methods for pseudomonotone variational inequalities with non-Lipschitz operators and applications, Appl. Anal., 102 (2023), 1199–1221. https://doi.org/10.1080/00036811.2021.1979219
[13] B. Tan, X. L. Qin, Modified inertial projection and contraction algorithms for solving variational inequality problems with non-Lipschitz continuous operators, Anal. Math. Phys., 12 (2022), 26. https://doi.org/10.1007/s13324-021-00638-6
[14] Z. Y. Chu, C. J. Zhang, R-linear convergence analysis of two golden ratio projection algorithms for strongly pseudo-monotone variational inequalities, Optim. Eruditorum, 1 (2024), 45–55. https://doi.org/10.69829/oper-024-0101-ta04
[15] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Berlin: Springer, 2011.
[16] H. K. Xu, Another control condition in an iterative method for nonexpansive mappings, B. Aust. Math. Soc., 65 (2002), 109–113. https://doi.org/10.1017/S0004972700020116
[17] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z
[18] M. V. Solodov, B. F. Svaiter, A new projection method for monotone variational inequalities, SIAM J. Control Optim., 37 (1999), 765–776. https://doi.org/10.1137/S0363012997317475
[19] S. Noinakorn, N. Wairojjana, N. Pakkaranang, A novel accelerated extragradient algorithm to solve pseudomonotone variational inequalities, Arab. J. Math., 12 (2023), 201–218. https://doi.org/10.1007/s40065-022-00400-1
[20] B. Tan, P. Sunthrayuth, P. Cholamjiak, Y. J. Cho, Modified inertial extragradient methods for finding minimum-norm solution of the variational inequality problem with applications to optimal control problem, Int. J. Comput. Math., 100 (2023), 525–545. https://doi.org/10.1080/00207160.2022.2137672
[21] C. Izuchukwu, Y. Shehu, J. C. Yao, Convergence results for proximal point algorithm with inertial and correction terms, Appl. Anal., 2024 (2024), 1–21. https://doi.org/10.1080/00036811.2024.2432527
[22] C. Izuchukwu, Y. Shehu, J. C. Yao, New strong convergence analysis for variational inequalities and fixed point problems, Optimization, 2024 (2024), 1–22. https://doi.org/10.1080/02331934.2024.2424446
[23] B. Tan, X. L. Qin, J. C. Yao, Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems, Numer. Algor., 88 (2021), 1757–1786. https://doi.org/10.1007/s11075-021-01093-x
[24] Z. Q. Zhu, K. Y. Zheng, S. H. Wang, A new double inertial subgradient extragradient method for solving a non-monotone variational inequality problem in Hilbert space, AIMS Math., 9 (2024), 20956–20975. https://doi.org/10.3934/math.20241020
[25] D. V. Thong, D. V. Hieu, Inertial extragradient algorithms for strongly pseudomontone variational inequalities, J. Comput. Appl. Math., 341 (2018), 80–98. https://doi.org/10.1016/j.cam.2018.03.019
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
