Research article

A strong convergence theorem for solving pseudo-monotone variational inequalities and fixed point problems using subgradient extragradient method in Banach spaces

  • In this paper, we introduce an algorithm for solving the variational inequality problem with a Lipschitz continuous and pseudomonotone mapping in Banach spaces. We modify the subgradient extragradient method with a new and simple iterative step size, and strong convergence to a common solution of the variational inequality and fixed point problems is established without knowledge of the Lipschitz constant. Finally, a numerical experiment is given in support of our results.

    Citation: Fei Ma, Jun Yang, Min Yin. A strong convergence theorem for solving pseudo-monotone variational inequalities and fixed point problems using subgradient extragradient method in Banach spaces[J]. AIMS Mathematics, 2022, 7(4): 5015-5028. doi: 10.3934/math.2022279




In 1959, A. Signorini [1] proposed an interesting contact problem, which became well known as the Signorini problem. Since then, many researchers have studied this problem, and it was reformulated as the variational inequality problem (VI for short) [2]. A key step for the solution of the VI was taken by Hartman and Stampacchia [3] in 1966, which established the VI as an important tool for studying optimization theory, engineering mechanics, economics and the applied sciences in a unified and general framework (see [4,5]).

Under appropriate conditions, there are two general approaches for solving the VI problem: projection methods and regularization methods. Many projection-type algorithms for solving the VI problem can be found in [6,7,8,9,10,11]. The gradient method is the simplest: only one projection onto the feasible set is performed, but strong monotonicity is required to obtain convergence. To avoid the strong monotonicity hypothesis, Korpelevich [6] proposed a decisive algorithm for solving variational inequalities in Euclidean space, called the extragradient method. In 2011, the subgradient extragradient method was introduced by Censor et al. [7] for solving variational inequalities in real Hilbert spaces. Very recently, Liu [11] proposed an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities.

It is natural to consider algorithms for solving variational inequalities in the setting of Banach or Hilbert spaces. Several results have been obtained for various iterative algorithms that find a common element of the fixed point set and the solution set of the variational inequality problem in Hilbert or Banach spaces (see [12,13,14,15,16,17,18,19,20,21,22,23,24]). In particular, the fixed point technique was introduced by Browder [12] in 1967. Later, Liu and Kong [19] provided an algorithm for finding a common element of the fixed point set and the solution set of a variational inequality in Banach spaces. Recently, Ceng [24] introduced two subgradient extragradient methods for solving pseudomonotone variational inequalities and fixed point problems.

Motivated by the works mentioned above, in the present paper we extend the subgradient extragradient algorithm proposed in [22] to find a common solution of variational inequalities and fixed point problems in Banach spaces. It is worth stressing that our algorithm has a simple structure and that its convergence does not require knowledge of the Lipschitz constant of the mapping.

    The paper is organized as follows. In Section 2, we present some preliminaries that will be needed in the sequel. In Section 3, we propose an algorithm and analyze its convergence. Finally, in Section 4 we present a numerical example and comparison.

Assume that X is a real Banach space with dual space X*, let ∥·∥ and ∥·∥* denote the norms of X and X* respectively, let ⟨x*, x⟩ denote the duality pairing between X* and X, and write x_n → x (x_n ⇀ x) when a sequence {x_n} converges to x strongly (weakly). Let C be a nonempty closed convex subset of X, and let F : C → X* be a continuous mapping. Consider the following variational inequality (for short, VI(F, C)), which consists in finding a point x* ∈ C such that

⟨F(x*), y − x*⟩ ≥ 0,  ∀ y ∈ C. (2.1)

    Let S be the solution set of (2.1).

Definition 2.1. A mapping F : C → X* is said to be:

(A1) monotone, if ⟨F(x) − F(y), x − y⟩ ≥ 0, ∀ x, y ∈ C;

(A2) pseudomonotone, if ⟨F(y), x − y⟩ ≥ 0 implies ⟨F(x), x − y⟩ ≥ 0, ∀ x, y ∈ C;

(A3) Lipschitz continuous, if there exists L > 0 such that ∥F(x) − F(y)∥* ≤ L∥x − y∥, ∀ x, y ∈ C.

Recall that a point x ∈ C is called a fixed point of an operator T : C → C if Tx = x. We denote the set of fixed points of T by F(T). It is well known that in a real Hilbert space, x* solves VI(F, C) if and only if x* solves the fixed point equation x* = P_C(x* − λF(x*)), where λ is an arbitrary positive constant. Therefore, fixed point algorithms can be used to solve VI(F, C). The mapping T : C → C is called nonexpansive if

∥T(x) − T(y)∥ ≤ ∥x − y∥,  ∀ x, y ∈ C.
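In a real Hilbert space the fixed-point characterization above can be iterated directly. The following is a minimal numerical sketch (our illustration, not the paper's method), assuming a hypothetical monotone affine mapping F(x) = Ax + b and the Euclidean unit ball as C, for which the metric projection P_C is explicit:

```python
import numpy as np

# Assumed toy problem: F(x) = A x + b with A positive definite (so F is
# strongly monotone), and C the Euclidean unit ball in R^2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([-3.0, 0.0])

def F(x):
    return A @ x + b

def project_ball(x):
    # Metric projection onto C = {x : ||x|| <= 1}.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Picard iteration of x -> P_C(x - lam * F(x)); it converges here because
# F is strongly monotone and Lipschitz and lam is small enough.
lam, x = 0.1, np.zeros(2)
for _ in range(500):
    x = project_ball(x - lam * F(x))

# A solution of VI(F, C) is exactly a fixed point of the projected map.
residual = np.linalg.norm(x - project_ball(x - lam * F(x)))
```

For this data the iterates approach x* = (1, 0), the boundary point of the ball at which ⟨F(x*), y − x*⟩ ≥ 0 holds for all y ∈ C.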

The normalized duality mapping J_X (usually written J) of X into 2^{X*} is defined by

J(x) = {x* ∈ X* | ⟨x*, x⟩ = ∥x∥² = ∥x*∥*²}

for all x ∈ X. Let q ∈ (0, 2]. The generalized duality mapping J_q : X → 2^{X*} is defined (for the definitions and properties, see [15]) by

J_q(x) = {j_q(x) ∈ X* | ⟨j_q(x), x⟩ = ∥x∥ ∥j_q(x)∥*,  ∥j_q(x)∥* = ∥x∥^{q−1}}

for all x ∈ X. More details can be found in [25].

Let U = {x ∈ X : ∥x∥ = 1}. The norm of X is called Gâteaux differentiable if for each x, y ∈ U the limit

lim_{t→0} (∥x + ty∥ − ∥x∥)/t (2.2)

exists. In this case, the space X is also called smooth. It is well known that if X is a smooth, strictly convex and reflexive Banach space, then J is a single-valued bijection, and furthermore its inverse J^{−1} coincides with the duality mapping on X*. X is said to be uniformly smooth if the limit (2.2) is attained uniformly for x, y ∈ U. It is strictly convex if ∥(x + y)/2∥ < 1 for all x, y ∈ U with x ≠ y. The modulus of convexity δ_X is defined by

δ_X(ε) = inf{1 − ∥(x + y)/2∥ | x, y ∈ B_X, ∥x − y∥ ≥ ε},

for all ε ∈ [0, 2], where B_X is the closed unit ball of X. A Banach space X is called uniformly convex if δ_X(ε) > 0 for all ε ∈ (0, 2]. A Banach space X is uniformly convex if and only if for any two sequences {x_n}, {y_n} ⊂ X,

lim_{n→∞} ∥x_n∥ = lim_{n→∞} ∥y_n∥ = 1  and  lim_{n→∞} ∥x_n + y_n∥ = 2  imply  lim_{n→∞} ∥x_n − y_n∥ = 0.

Moreover, X is called 2-uniformly convex if there exists c > 0 such that δ_X(ε) ≥ cε² for all ε ∈ [0, 2]. Obviously, every 2-uniformly convex Banach space is uniformly convex.
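As a concrete instance, in a Hilbert space the parallelogram law gives the exact modulus δ(ε) = 1 − (1 − ε²/4)^{1/2}, and since δ(ε) ≥ ε²/8 on [0, 2], every Hilbert space is 2-uniformly convex with constant c = 1/8. A short numerical check of this bound (our sketch, not part of the paper):

```python
import math

def delta_hilbert(eps):
    # Exact modulus of convexity of a Hilbert space, from the
    # parallelogram law: delta(eps) = 1 - sqrt(1 - eps^2/4).
    return 1.0 - math.sqrt(1.0 - eps**2 / 4.0)

# Verify delta(eps) >= c * eps^2 with c = 1/8 on a grid of eps in (0, 2].
grid = [k / 100 for k in range(1, 201)]
ok = all(delta_hilbert(e) >= e**2 / 8 for e in grid)
```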

Alber [25] introduced the functional V : X* × X → R defined by

V(x, y) = ∥x∥*² − 2⟨x, y⟩ + ∥y∥²,  x ∈ X*, y ∈ X. (2.3)

The operator P_C : X* → C ⊂ X is called the generalized projection operator if it assigns to an arbitrary point x ∈ X* the solution x̃ of the minimization problem

V(x, x̃) = inf_{y∈C} V(x, y),

and x̃ = P_C x ∈ C ⊂ X is called the generalized projection of the point x. For more results about P_C we refer to [25]. The next lemma describes the properties of P_C.

Lemma 2.1. [25] Let C be a nonempty closed convex set in X, let x ∈ X*, and set x̃ = P_C x. Then

(1) ⟨J x̃ − x, y − x̃⟩ ≥ 0,  ∀ y ∈ C;

(2) V(J x̃, y) ≤ V(x, y) − V(x, x̃),  ∀ y ∈ C;

(3) V(x, z) + 2⟨y, J^{−1}x − z⟩ ≤ V(x + y, z),  ∀ z ∈ X, y ∈ X*.

    By the definition of V, it is easy to check the following lemma.

Lemma 2.2. For any x, y, z ∈ X and α ∈ (0, 1),

(1) (∥x∥ − ∥y∥)² ≤ V(Jx, y) ≤ (∥x∥ + ∥y∥)²;

(2) V(αJx + (1 − α)Jy, z) ≤ αV(Jx, z) + (1 − α)V(Jy, z);

(3) V(Jx, z) = V(Jx, y) + V(Jy, z) + 2⟨Jx − Jy, y − z⟩;

(4) V(Jx, y) ≤ ∥x∥ ∥Jx − Jy∥* + ∥y∥ ∥x − y∥.
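In the Hilbert-space case J is the identity and V(x, y) = ∥x − y∥², so the items of Lemma 2.2 reduce to elementary vector identities; a quick numerical sanity check of items (2) and (3) in this special case (our sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x, y):
    # Hilbert-space specialization of Alber's functional (J = identity):
    # V(x, y) = ||x||^2 - 2<x, y> + ||y||^2 = ||x - y||^2.
    return np.dot(x, x) - 2 * np.dot(x, y) + np.dot(y, y)

x, y, z = rng.standard_normal((3, 4))

# Item (3): V(x, z) = V(x, y) + V(y, z) + 2<x - y, y - z>.
gap3 = abs(V(x, z) - (V(x, y) + V(y, z) + 2 * np.dot(x - y, y - z)))

# Item (2): convexity of V in its first argument; gap2 should be <= 0.
a = 0.3
gap2 = V(a * x + (1 - a) * y, z) - (a * V(x, z) + (1 - a) * V(y, z))
```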

The following lemma is proved in [26].

Lemma 2.3. [26] Let X be a real 2-uniformly convex Banach space. Then there exists τ ≥ 1 such that for all x, y ∈ X,

(1/τ)∥x − y∥² ≤ φ(x, y),

where φ(x, y) := V(Jx, y). The minimum of all such constants τ is denoted by τ_X (also written τ) and is called the 2-uniform convexity constant of X.

The following lemma will be useful in our subsequent convergence analysis.

Lemma 2.4. [27] Let {a_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {a_{n_j}} of {a_n} which satisfies a_{n_j} < a_{n_j+1} for all j ∈ N. Define the sequence {τ(n)}_{n≥n_0} of integers by

τ(n) = max{k ≤ n : a_k < a_{k+1}},

where n_0 ∈ N is such that {k ≤ n_0 : a_k < a_{k+1}} is nonempty. Then the following hold:

(1) τ(n) ≤ τ(n+1), and τ(n) → ∞;

(2) a_{τ(n)} ≤ a_{τ(n)+1} and a_n ≤ a_{τ(n)+1}.
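The index τ(n) of Lemma 2.4 is easy to compute for a concrete sequence; the toy sequence below is our illustration, not from the paper:

```python
def tau(a, n):
    # tau(n) = max{k <= n : a_k < a_{k+1}} (requires a[n + 1] to exist).
    ks = [k for k in range(n + 1) if a[k] < a[k + 1]]
    return max(ks) if ks else None

a = [3, 1, 4, 1, 5, 2, 6, 0]   # a_k < a_{k+1} at k = 1, 3, 5
```

For this sequence τ(6) = 5, and the properties of Lemma 2.4 can be inspected directly: τ(·) is nondecreasing and a_{τ(n)} < a_{τ(n)+1}.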

In this section, we introduce a new subgradient extragradient algorithm for solving pseudomonotone variational inequality and fixed point problems in Banach spaces. First, we make the following assumptions.

    Assumption 3.1:

    (a) X is a real 2-uniformly convex Banach space and C is its nonempty closed convex subset.

(b) F : X → X* is pseudomonotone on C and L-Lipschitz continuous on X, and T is a nonexpansive mapping of C into itself such that S ∩ F(T) ≠ ∅.

(c) The mapping F is sequentially weakly continuous, i.e., for each sequence {x_n} ⊂ C: if x_n ⇀ x, then F(x_n) ⇀ F(x).

Our algorithm takes the following form:

    Algorithm 3.1:

(Step 0)    Take λ_0 > 0, x_0 ∈ X, μ ∈ (0, 1). Choose a nonnegative real sequence {θ_n} such that Σ_{n=0}^∞ θ_n < ∞.

    (Step 1)    Given the current iterate xn, compute

y_n = P_C(Jx_n − λ_n F(x_n)).

If x_n = y_n and Tx_n = x_n, then stop: x_n is a solution. Otherwise, go to Step 2.

(Step 2)    Construct T_n = {x ∈ X | ⟨Jx_n − λ_n F(x_n) − Jy_n, x − y_n⟩ ≤ 0} and compute

z_n = P_{T_n}(Jx_n − λ_n F(y_n)),  t_n = J^{−1}(α_n Jx_0 + (1 − α_n)Jz_n),
x_{n+1} = J^{−1}(β_n Jz_n + (1 − β_n)J(T t_n)).

    (Step 3)    Compute

λ_{n+1} = min{ μ(∥x_n − y_n∥² + ∥z_n − y_n∥²) / (2⟨F(x_n) − F(y_n), z_n − y_n⟩),  λ_n + θ_n },  if ⟨F(x_n) − F(y_n), z_n − y_n⟩ > 0;
λ_{n+1} = λ_n + θ_n,  otherwise.

    Set n:=n+1 and return to step 1.
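To make the steps concrete, here is a hedged sketch of Algorithm 3.1 specialized to a Euclidean space, where J is the identity and the generalized projection reduces to the metric projection. The test problem (a rotation field F on the unit ball with the contraction T(x) = x/2, whose unique common solution is 0) and all parameter choices are our illustrative assumptions, not the paper's experiment:

```python
import numpy as np

def project_ball(x, r=1.0):
    # P_C for C = {x : ||x|| <= r}.
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

def project_halfspace(w, a, c):
    # P_{T_n} for the halfspace T_n = {u : <a, u> <= c}.
    viol = np.dot(a, w) - c
    return w if viol <= 0 else w - viol * a / np.dot(a, a)

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # rotation: monotone, 1-Lipschitz
F = lambda x: A @ x
T = lambda x: 0.5 * x                      # nonexpansive, Fix(T) = {0}

x0 = np.array([0.8, 0.3])
x, lam, mu = x0.copy(), 0.7, 0.9           # lambda_0 = 0.7, mu = 0.9, theta_n = 0
for n in range(1, 300):
    alpha, beta = 1.0 / (100 * n), 1.0 / (2 * n + 1)
    y = project_ball(x - lam * F(x))                        # Step 1
    a = x - lam * F(x) - y                                  # T_n = {u : <a, u - y> <= 0}
    z = project_halfspace(x - lam * F(y), a, np.dot(a, y))  # Step 2
    t = alpha * x0 + (1 - alpha) * z
    denom = 2 * np.dot(F(x) - F(y), z - y)                  # Step 3: step-size update
    if denom > 0:
        lam = min(mu * (np.dot(x - y, x - y) + np.dot(z - y, z - y)) / denom, lam)
    x = beta * z + (1 - beta) * T(t)
```

Consistent with Lemma 3.1(2), the step size never drops below min{μ/L, λ_0} = 0.7 here, and the iterates approach the common solution 0.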

    We prove the strong convergence theorem for Algorithm 3.1. Firstly, we give the following lemma, which plays a crucial role in the proof of the main theorem.

Lemma 3.1. Assume that {x_n}, {y_n}, {λ_n} are the sequences generated by Algorithm 3.1 and Assumption 3.1 holds. Then

(1) If x_n = y_n and Tx_n = x_n for some n ∈ N, then x_n ∈ S ∩ F(T);

(2) lim_{n→∞} λ_n = λ ∈ [min{μ/L, λ_0}, λ_0 + θ], where θ = Σ_{n=0}^∞ θ_n.

Proof. (1) If x_n = y_n, then by Algorithm 3.1 we have x_n = P_C(Jx_n − λ_n F(x_n)), and thus x_n ∈ C. By Lemma 2.1(1),

⟨Jx_n − (Jx_n − λ_n F(x_n)), x − x_n⟩ ≥ 0,  ∀ x ∈ C.

Therefore,

λ_n⟨F(x_n), x − x_n⟩ ≥ 0,  ∀ x ∈ C.

Since λ_n > 0, we have x_n ∈ S. Combining this with Tx_n = x_n, we obtain x_n ∈ S ∩ F(T).

(2) Since F is Lipschitz continuous with constant L > 0, in the case ⟨F(x_n) − F(y_n), z_n − y_n⟩ > 0 we get

μ(∥x_n − y_n∥² + ∥z_n − y_n∥²) / (2⟨F(x_n) − F(y_n), z_n − y_n⟩) ≥ 2μ∥x_n − y_n∥∥z_n − y_n∥ / (2∥F(x_n) − F(y_n)∥∥z_n − y_n∥) ≥ μ∥x_n − y_n∥ / (L∥x_n − y_n∥) = μ/L.

Thus {λ_n} is bounded above by λ_0 + θ and below by min{μ/L, λ_0}. Similarly to the proof of Lemma 3.1 in [21], we obtain

lim_{n→∞} λ_n = λ ∈ [min{μ/L, λ_0}, λ_0 + θ].

The proof is complete.

Theorem 3.1. Assume that Assumption 3.1 holds, the sequence {α_n} ⊂ (0, 1) satisfies Σ_{n=0}^∞ α_n = ∞ and lim_{n→∞} α_n = 0, and β_n ∈ (0, 1). Let {x_n} be the sequence generated by Algorithm 3.1. Then {x_n} converges strongly to the solution x* = P_{S∩F(T)} Jx_0.

    Proof. We divide the proof into two steps.

    Step 1. The sequences {xn}, {yn}, {zn}, and {tn} generated by Algorithm 3.1 are bounded.

To see this, take u ∈ S ∩ F(T). Noting that y_n ∈ C, we have ⟨F(u), y_n − u⟩ ≥ 0 for all n ∈ N. Since F is pseudomonotone, ⟨F(y_n), y_n − u⟩ ≥ 0 for all n ∈ N. Then

0 ≤ ⟨F(y_n), y_n − u + z_n − z_n⟩ = ⟨F(y_n), y_n − z_n⟩ − ⟨F(y_n), u − z_n⟩.

This implies that

⟨F(y_n), y_n − z_n⟩ ≥ ⟨F(y_n), u − z_n⟩,  ∀ n ∈ N. (3.1)

By the definition of T_n, we know ⟨Jx_n − λ_n F(x_n) − Jy_n, z_n − y_n⟩ ≤ 0. Then

⟨Jx_n − λ_n F(y_n) − Jy_n, z_n − y_n⟩ = ⟨Jx_n − λ_n F(x_n) − Jy_n, z_n − y_n⟩ + λ_n⟨F(x_n) − F(y_n), z_n − y_n⟩ ≤ λ_n⟨F(x_n) − F(y_n), z_n − y_n⟩. (3.2)

By Lemma 2.1(2), the definition of λ_{n+1}, and (3.1), (3.2), we obtain

V(Jz_n, u) = V(JP_{T_n}(Jx_n − λ_n F(y_n)), u)
≤ V(Jx_n − λ_n F(y_n), u) − V(Jx_n − λ_n F(y_n), z_n)
= −2⟨Jx_n − λ_n F(y_n), u⟩ + ∥u∥² + 2⟨Jx_n − λ_n F(y_n), z_n⟩ − ∥z_n∥²
= V(Jx_n, u) − V(Jx_n, z_n) + 2λ_n⟨F(y_n), u − z_n⟩
≤ V(Jx_n, u) − V(Jx_n, z_n) + 2λ_n⟨F(y_n), y_n − z_n⟩
= V(Jx_n, u) − V(Jx_n, y_n) − V(Jy_n, z_n) + 2⟨Jx_n − λ_n F(y_n) − Jy_n, z_n − y_n⟩
≤ V(Jx_n, u) − V(Jx_n, y_n) − V(Jy_n, z_n) + 2λ_n⟨F(x_n) − F(y_n), z_n − y_n⟩
≤ V(Jx_n, u) − V(Jx_n, y_n) − V(Jy_n, z_n) + (λ_nμ/λ_{n+1})(∥x_n − y_n∥² + ∥z_n − y_n∥²). (3.3)

From Lemma 3.1(2), we obtain lim_{n→∞} λ_nμ/λ_{n+1} = μ ∈ (0, 1). Hence there exists a positive integer N_0 such that 0 < λ_nμ/λ_{n+1} < 1 for all n > N_0. Combining Lemma 2.3 and (3.3), and letting τ be the 2-uniform convexity constant of X, for n > N_0 we have

V(Jz_n, u) ≤ V(Jx_n, u) − V(Jx_n, y_n) − V(Jy_n, z_n) + (λ_nμ/λ_{n+1})(∥x_n − y_n∥² + ∥z_n − y_n∥²)
≤ V(Jx_n, u) − (1 − μτ)(V(Jx_n, y_n) + V(Jy_n, z_n))
≤ V(Jx_n, u).

Then, by Lemma 2.2(2) and the definition of x_{n+1}, we obtain for every n > N_0,

V(Jx_{n+1}, u) = V(β_n Jz_n + (1 − β_n)J(T t_n), u)
≤ β_n V(Jz_n, u) + (1 − β_n)V(J(T t_n), u)
≤ β_n V(Jz_n, u) + (1 − β_n)V(Jt_n, u)
= β_n V(Jz_n, u) + (1 − β_n)V(α_n Jx_0 + (1 − α_n)Jz_n, u)
≤ β_n V(Jz_n, u) + (1 − β_n)(α_n V(Jx_0, u) + (1 − α_n)V(Jz_n, u))
= (β_n + (1 − β_n)(1 − α_n))V(Jz_n, u) + (1 − β_n)α_n V(Jx_0, u)
≤ (1 − (1 − β_n)α_n)V(Jx_n, u) + (1 − β_n)α_n V(Jx_0, u)
≤ max{V(Jx_0, u), V(Jx_n, u)} ≤ ⋯ ≤ max{V(Jx_0, u), V(Jx_{N_0}, u)}.

Thus {V(Jx_n, u)} is bounded. Combining this with V(Jx_n, u) ≥ (1/τ)∥x_n − u∥², we get that {x_n} is bounded. Furthermore, from (3.3), the sequences {y_n}, {z_n} and {t_n} are bounded.

Step 2. {x_n} converges strongly to x* = P_{S∩F(T)} Jx_0.

Let x* = P_{S∩F(T)} Jx_0. From Lemma 2.1(1), we can obtain

⟨Jx_0 − Jx*, z − x*⟩ ≤ 0,  ∀ z ∈ S ∩ F(T).

From Step 1, we know that there exists N_0 ≥ 0 such that V(Jz_n, x*) ≤ V(Jx_n, x*) for all n ≥ N_0, and that the sequences {x_n}, {y_n}, {z_n} and {t_n} are bounded. Moreover, by Lemma 2.1(3) and Lemma 2.2, for every n ≥ N_0,

V(Jx_{n+1}, x*) = V(β_n Jz_n + (1 − β_n)J(T t_n), x*)
≤ β_n V(Jz_n, x*) + (1 − β_n)V(J(T t_n), x*)
≤ β_n V(Jz_n, x*) + (1 − β_n)V(α_n Jx_0 + (1 − α_n)Jz_n, x*)
≤ β_n V(Jz_n, x*) + (1 − β_n)(2α_n⟨Jx_0 − Jx*, t_n − x*⟩ + (1 − α_n)V(Jz_n, x*))
= (β_n + (1 − β_n)(1 − α_n))V(Jz_n, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩. (3.4)

By (3.3), (3.4), Lemma 2.3 and Lemma 3.1(2), we can obtain, for every n ≥ N_0,

V(Jx_{n+1}, x*) ≤ (β_n + (1 − β_n)(1 − α_n))V(Jz_n, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩
≤ (β_n + (1 − β_n)(1 − α_n))(V(Jx_n, x*) − (1 − λ_nμ/λ_{n+1})(V(Jx_n, y_n) + V(Jy_n, z_n))) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩
≤ V(Jx_n, x*) − (1 − μτ)(V(Jx_n, y_n) + V(Jy_n, z_n)) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩. (3.5)

    Two cases arise:

Case 1. Following Lemma 2.5 in [28], set a_n = φ(x_n, x*) = V(Jx_n, x*). By the proof of Step 1, there exists N_1 ∈ N (N_1 ≥ N_0) such that {φ(x_n, x*)}_{n=N_1}^∞ is nonincreasing. Then {a_n} converges. Using this in (3.5), for n > N_1 ≥ N_0, we have

(1 − μτ)(V(Jx_n, y_n) + V(Jy_n, z_n)) ≤ V(Jx_n, x*) − V(Jx_{n+1}, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩.

Since ⟨Jx_0 − Jx*, t_n − x*⟩ is bounded and {a_n} converges, we have, as n → ∞,

V(Jx_n, x*) − V(Jx_{n+1}, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩ → 0.

Hence V(Jx_n, y_n) → 0 and V(Jy_n, z_n) → 0, and by Lemma 2.3, as n → ∞,

∥x_n − y_n∥² → 0  and  ∥y_n − z_n∥² → 0. (3.6)

Thus, as n → ∞,

∥Jx_{n+1} − Jz_n∥ = ∥β_n Jz_n + (1 − β_n)J(T t_n) − Jz_n∥ = (1 − β_n)∥J(T t_n) − Jz_n∥ ≤ (1 − β_n)(∥J(T t_n) − Jt_n∥ + ∥Jt_n − Jz_n∥), where ∥Jt_n − Jz_n∥ = α_n∥Jx_0 − Jz_n∥ ≤ α_n M_1 → 0, (3.7)

for some M_1 > 0. By (3.7), we also see that ∥J(T t_n) − Jz_n∥ → 0 and ∥Jt_n − Jz_n∥ → 0. From ∥J(T t_n) − Jt_n∥ ≤ ∥J(T t_n) − Jz_n∥ + ∥Jz_n − Jt_n∥, we have ∥J(T t_n) − Jt_n∥ → 0.

Since J^{−1} is also uniformly norm-to-norm continuous on bounded subsets of X*, we have ∥x_{n+1} − z_n∥ → 0. Therefore, as n → ∞,

∥T t_n − t_n∥ → 0. (3.8)

Thus, as n → ∞,

∥x_{n+1} − x_n∥ ≤ ∥x_{n+1} − z_n∥ + ∥z_n − y_n∥ + ∥y_n − x_n∥ → 0,

and

∥x_n − t_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − z_n∥ + ∥z_n − t_n∥ → 0. (3.9)

Since {x_n} is bounded, there exists a subsequence {x_{n_k}} converging weakly to some z_0 ∈ X, i.e., x_{n_k} ⇀ z_0. By (3.9), {t_{n_k}} also converges weakly to z_0. It follows from (3.8) and the nonexpansiveness of T that z_0 ∈ F(T).

    Now, we show that z0S.

Since {x_{n_k}} converges weakly to z_0 and {x_{n_k}} can be chosen to realize the upper limit, we have

lim sup_{n→∞}⟨Jx_0 − Jx*, x_n − x*⟩ = lim_{k→∞}⟨Jx_0 − Jx*, x_{n_k} − x*⟩ = ⟨Jx_0 − Jx*, z_0 − x*⟩. (3.10)

Since ∥x_n − y_n∥ → 0, we know that y_{n_k} ⇀ z_0 and z_0 ∈ C. Since y_{n_k} = P_C(Jx_{n_k} − λ_{n_k}F(x_{n_k})), by Lemma 2.1(1) we have, for all z ∈ C, ⟨Jx_{n_k} − λ_{n_k}F(x_{n_k}) − Jy_{n_k}, z − y_{n_k}⟩ ≤ 0. This implies that

⟨Jx_{n_k} − Jy_{n_k}, z − y_{n_k}⟩ ≤ λ_{n_k}⟨F(x_{n_k}), z − y_{n_k}⟩.

Therefore, we have, for all z ∈ C,

(1/λ_{n_k})⟨Jx_{n_k} − Jy_{n_k}, z − y_{n_k}⟩ + ⟨F(x_{n_k}), y_{n_k} − x_{n_k}⟩ ≤ ⟨F(x_{n_k}), z − x_{n_k}⟩.

Fixing z ∈ C, using (3.6), the boundedness of {x_{n_k}}, and the fact that λ_{n_k} ≥ min{μ/L, λ_0} > 0, we obtain

lim inf_{k→∞}⟨F(x_{n_k}), z − x_{n_k}⟩ ≥ 0.

Choose a decreasing sequence {ε_k} of positive numbers with lim_{k→∞} ε_k = 0. By the definition of the lower limit, for each ε_k there exists a smallest positive integer M_k such that

⟨F(x_{n_j}), z − x_{n_j}⟩ + ε_k ≥ 0  for all j ≥ M_k. (3.11)

Clearly, as {ε_k} is decreasing, {M_k} is increasing.

If there exists a subsequence {x_{n_{k_i}}} of {x_{n_k}} such that F(x_{n_{k_i}}) = 0 for every i, then

⟨F(z_0), z − z_0⟩ = lim_{i→∞}⟨F(x_{n_{k_i}}), z − x_{n_{k_i}}⟩ = 0.

This means z_0 ∈ S.

Otherwise, there exists a positive integer N_2 ∈ N such that F(x_{n_{k_i}}) ≠ 0 for all n_{k_i} ≥ N_2. Let u_{n_{k_i}} = F(x_{n_{k_i}})/∥F(x_{n_{k_i}})∥². Then ⟨F(x_{n_{k_i}}), u_{n_{k_i}}⟩ = 1 for each n_{k_i} ≥ N_2. Thus, from (3.11), we have, for all n_{k_i} ≥ N_2,

⟨F(x_{n_{k_i}}), z + ε_k u_{n_{k_i}} − x_{n_{k_i}}⟩ ≥ 0. (3.12)

Since F is pseudomonotone, we have from (3.12) that ⟨F(z + ε_k u_{n_{k_i}}), z + ε_k u_{n_{k_i}} − x_{n_{k_i}}⟩ ≥ 0. This implies that

⟨F(z), z − x_{n_{k_i}}⟩ ≥ ⟨F(z) − F(z + ε_k u_{n_{k_i}}), z + ε_k u_{n_{k_i}} − x_{n_{k_i}}⟩ − ε_k⟨F(z), u_{n_{k_i}}⟩. (3.13)

Since {x_{n_k}} converges weakly to z_0 ∈ C and F is sequentially weakly continuous on C, F(x_{n_k}) converges weakly to F(z_0). If F(z_0) = 0, then z_0 ∈ S. Now assume F(z_0) ≠ 0. Combining ∥F(z_0)∥ ≤ lim inf_{k→∞}∥F(x_{n_k})∥ with lim_{k→∞} ε_k = 0, we see that the right-hand side of (3.13) tends to zero. Thus we obtain, for all z ∈ C,

⟨F(z), z − z_0⟩ = lim_{i→∞}⟨F(z), z − x_{n_{k_i}}⟩ ≥ 0.

By Lemma 3.1 in [29], we then have z_0 ∈ S. Combining Lemma 2.1(1) and (3.10), we obtain

lim sup_{n→∞}⟨Jx_0 − Jx*, x_n − x*⟩ ≤ 0.

By Lemma 2.1(3) and (3.6), we have, for all n > max{N_1, N_2},

V(Jx_{n+1}, x*) ≤ (β_n + (1 − β_n)(1 − α_n))V(Jx_n, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩
= (1 − (1 − β_n)α_n)V(Jx_n, x*) + 2(1 − β_n)α_n⟨Jx_0 − Jx*, t_n − x*⟩.

It then follows from [14, Lemma 3.3] and [28, Lemma 2.5] that lim_{n→∞} φ(x_n, x*) = 0, which means

lim_{n→∞} x_n = x*.

Case 2. Assume that there exists a subsequence {x_{m_j}} of {x_n} such that φ(x_{m_j}, x*) < φ(x_{m_j+1}, x*) for all j ∈ N. Then, by Lemma 2.4, there exists a nondecreasing sequence {m_k} ⊂ N such that lim_{k→∞} m_k = ∞ and for every k ∈ N:

φ(x_{m_k}, x*) ≤ φ(x_{m_k+1}, x*)  and  φ(x_k, x*) ≤ φ(x_{m_k+1}, x*). (3.14)

By (3.5) and Lemma 3.1(2), we have

(1 − μτ)(φ(x_{m_k}, y_{m_k}) + φ(y_{m_k}, z_{m_k})) ≤ V(Jx_{m_k}, x*) − V(Jx_{m_k+1}, x*) + 2(1 − β_{m_k})α_{m_k}⟨Jx_0 − Jx*, t_{m_k} − x*⟩. (3.15)

Since {x_n} is bounded, there exists a subsequence of {x_{m_k}} which converges weakly to some z_0 ∈ X. Using the same argument as in Case 1 and combining (3.15), we obtain

lim_{k→∞}∥x_{m_k} − y_{m_k}∥ = 0,  lim_{k→∞}∥z_{m_k} − y_{m_k}∥ = 0,  lim_{k→∞}∥x_{m_k+1} − x_{m_k}∥ = 0.

Similarly, we can obtain

lim sup_{k→∞}⟨Jx_0 − Jx*, t_{m_k+1} − x*⟩ = lim sup_{k→∞}⟨Jx_0 − Jx*, t_{m_k} − x*⟩ ≤ 0.

It follows from (3.15) and the proof of Case 1 that, for all m_k ≥ N_0,

φ(x_{m_k+1}, x*) ≤ (1 − α_{m_k}(1 − β_{m_k}))φ(x_{m_k}, x*) + 2(1 − β_{m_k})α_{m_k}⟨Jx_0 − Jx*, t_{m_k} − x*⟩
≤ (1 − α_{m_k}(1 − β_{m_k}))φ(x_{m_k+1}, x*) + 2(1 − β_{m_k})α_{m_k}⟨Jx_0 − Jx*, t_{m_k} − x*⟩.

Since 0 < α_n, β_n < 1, this implies that for all m_k ≥ N_1 we have

φ(x_{m_k}, x*) ≤ φ(x_{m_k+1}, x*) ≤ 2⟨Jx_0 − Jx*, t_{m_k} − x*⟩.

Then

lim sup_{k→∞} φ(x_{m_k}, x*) ≤ lim sup_{k→∞} 2⟨Jx_0 − Jx*, t_{m_k} − x*⟩ ≤ 0,

so lim sup_{k→∞} φ(x_{m_k}, x*) = 0, which means lim_{k→∞}∥x_{m_k} − x*∥² = 0. Since φ(x_k, x*) ≤ φ(x_{m_k+1}, x*), we have lim_{k→∞}∥x_k − x*∥ = 0, and therefore x_k → x*. The proof is complete.

In this section, we give a numerical experiment to demonstrate the convergence and efficiency of the proposed algorithm. We compare Algorithm 3.1 with the strongly convergent Halpern subgradient extragradient method (HSEGM) proposed in [30, Theorem 4.2].

Example 4.1. Let H = L²([0, 1]) with norm ∥x∥ = (∫_0^1 |x(t)|² dt)^{1/2} and inner product ⟨x, y⟩ = ∫_0^1 x(t)y(t) dt for x, y ∈ H. The operator F : H → H is of the form

Fx(t) = max(0, x(t)),  t ∈ [0, 1],

for all x ∈ H. Clearly, F is Lipschitz continuous and monotone (hence also pseudomonotone). The feasible set is C = {x ∈ H : ∥x∥ ≤ 1}. Observe that 0 ∈ S, so S ≠ ∅. Let T : H → H be defined by

Tx(t) = ∫_0^t x(s) ds,  t ∈ [0, 1].

Clearly, 0 ∈ F(T), so F(T) ≠ ∅. Since

|Tx(t) − Ty(t)|² = |∫_0^t (x(s) − y(s)) ds|² ≤ ∫_0^1 |x(s) − y(s)|² ds = ∥x − y∥²,

integrating over t ∈ [0, 1] gives

∥Tx − Ty∥² = ∫_0^1 |Tx(t) − Ty(t)|² dt ≤ ∥x − y∥²,

which means that T is nonexpansive.

Hence, the solution of the problem is x* = 0. To terminate the algorithms, we use the stopping condition ∥x_n − x*∥ ≤ ε with ε = 10⁻³ for all the algorithms. We take α_n = 1/(100n), β_n = 1/(2n+1), θ_n = 0, λ_0 = 0.7 and μ = 0.9 for Algorithm 3.1. For Theorem 4.2 in [30], we take α_n = 1/(100n), β_n = 1/(2n+1) and τ = 0.7. The numerical results are shown in Table 1 and Figure 1.

    Table 1.  Comparison between the Algorithm 3.1 and Theorem 4.2 in [30].
x0   Algorithm 3.1   Theorem 4.2 in [30]
    iter. iter. iter. iter.
    case 1 14t2e4t 4 8.58 4 8.27
    case 2 1120(1t2) 4 8.34 4 8.06
    case 3 1100sin(t) 4 8.34 4 8.14

    Figure 1.  Comparison between the Algorithm 3.1 and Theorem 4.2 in [30] with case 2.
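A hedged finite-dimensional sketch of Example 4.1: we discretize L²([0, 1]) on a uniform grid, so functions become vectors and the L² norm becomes a scaled Euclidean norm. For brevity the loop below runs only the classical extragradient step (dropping the Halpern and fixed-point terms of Algorithm 3.1), and the starting point is our illustrative choice:

```python
import numpy as np

m = 200
t, h = np.linspace(0.0, 1.0, m, endpoint=False), 1.0 / m

l2 = lambda x: np.sqrt(h * np.dot(x, x))   # discrete L2([0,1]) norm
F = lambda x: np.maximum(0.0, x)           # Fx(t) = max(0, x(t))
T = lambda x: h * np.cumsum(x)             # Tx(t) = integral of x over [0, t]

def project_C(x):
    # Projection onto C = {x in L2 : ||x|| <= 1}.
    n = l2(x)
    return x if n <= 1.0 else x / n

x = np.sin(t) / 100                        # illustrative starting point
lam = 0.7
for _ in range(200):
    y = project_C(x - lam * F(x))          # extragradient: predictor step
    x = project_C(x - lam * F(y))          # extragradient: corrector step
```

Since the unique common solution is x* = 0, the discrete norm ∥x_n∥ shrinks below the stopping threshold ε = 10⁻³ within a few hundred iterations.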

Remark 4.1. From the numerical results of the example presented above, we conclude that our algorithm is consistent, stable, effective and easy to implement. The example shows that Algorithm 3.1 converges slightly more slowly than the method of Theorem 4.2 in [30], but it has some advantages. First, it requires no information about the Lipschitz constant of the cost operator F. Second, the step size in Algorithm 3.1 is variable. Third, it can be applied to solve fixed point problems for nonexpansive mappings in a real 2-uniformly convex Banach space, and furthermore, it is more useful in infinite-dimensional spaces.

In this paper, we established a strong convergence result for finding a common solution of pseudomonotone variational inequalities and fixed point problems in 2-uniformly convex Banach spaces. Our algorithm is based on the subgradient extragradient method with a new step size, and its convergence is established without knowledge of the Lipschitz constant of the mapping. Finally, numerical experiments are given to illustrate the convergence of our algorithm and to compare it with other known methods.

This work was supported by the "Qinglan talents" Program of Xianyang Normal University of China (No. XSYQL201801), the Educational Science Foundation of Shaanxi of China (No. 18JK0830), and the Scientific Research Plan Projects of Xianyang Normal University of China (No. 14XSYK003).

    The authors declare no conflicts of interest.



[1] A. Signorini, Questioni di elasticità non linearizzata e semilinearizzata, Rend. Mat. Appl., 18 (1959), 95–139.
    [2] G. Stampacchia, Formes bilinaires coercitives sur les ensembles convexes, C. R. Acad. Sci., 258 (1964), 4413–4416.
    [3] P. Hartman, G. Stampacchia, On some non-linear elliptic differential-functional equations, Acta Math., 115 (1966), 271–310. https://doi.org/10.1007/BF02392210 doi: 10.1007/BF02392210
    [4] J. P. Aubin, I. Ekeland, Applied nonlinear analysis, New York: Wiley, 1984.
    [5] C. Baiocchi, A. Capelo, Variational and quasivariational inequalities, applications to free boundary problems, New York: Wiley, 1984.
[6] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Matematicheskie Metody, 12 (1976), 747–756.
    [7] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [8] J. Yang, H. W. Liu, Z. X. Liu, Modified subgradient extragradient algorithms for solving monotone variational inequalities, Optimization, 67 (2018), 2247–2258. https://doi.org/10.1080/02331934.2018.1523404 doi: 10.1080/02331934.2018.1523404
    [9] J. Yang, H. W. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms, 80 (2019), 741–752. https://doi.org/10.1007/s11075-018-0504-4 doi: 10.1007/s11075-018-0504-4
    [10] L. Liu, B. Tan, S. Y. Cho, On the resolution of variational inequality problems with a double-hierarchical structure, J. Nonlinear Convex Anal., 21 (2020), 377–386.
    [11] L. Liu, S. Y. Cho, J. C. Yao, Convergence analysis of an inertial Tsengs extragradient algorithm for solving pseudomonotone variational inequalities and applications, J. Nonlinear Var. Anal., 5 (2021), 627–644.
    [12] F. E. Browder, Nonlinear mappings of nonexpansive and accretive-type in Banach spaces, Bull. Amer. Math Soc., 73 (1967), 875–882.
    [13] V. T. Duong, V. H. Dang, Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems, Optimization, 67 (2018), 83–102. https://doi.org/10.1080/02331934.2017.1377199 doi: 10.1080/02331934.2017.1377199
    [14] G. Cai, A. Gibali, O. S. Iyiola, Y. Shehu, A new double-projection method for solving variational inequalities in Banach spaces, J. Optim. Theory Appl., 178 (2018), 219–239. https://doi.org/10.1007/s10957-018-1228-2 doi: 10.1007/s10957-018-1228-2
    [15] S. Chang, C. F. Wen, J. C. Yao, Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces, Optimization, 67 (2018), 1183–1196. https://doi.org/10.1080/02331934.2018.1470176 doi: 10.1080/02331934.2018.1470176
    [16] Y. Yao, M. Postolache, J. C. Yao, Iterative algorithms for pseudomonotone variational inequalities and fixed point problems of pseudocontractive operators, Mathematics, 7 (2019), 1189. https://doi.org/10.3390/math7121189 doi: 10.3390/math7121189
    [17] G. Cai, S. Yekini, O. S. Iyiola, Strong convergence theorems for fixed point problems for strict pseudo-contractions and variational inequalities for inverse-strongly accretive mappings in uniformly smooth Banach spaces, J. Fixed Point Theory Appl., 21 (2019), 41. https://doi.org/10.1007/s11784-019-0677-z doi: 10.1007/s11784-019-0677-z
    [18] L. C. Ceng, A. Petrusel, J. C. Yao, On Mann viscosity subgradient extragradient algorithms for fixed point problems of finitely many strict pseudocontractions and variational inequalities, Mathematics, 7 (2019), 925. https://doi.org/10.3390/math7100925 doi: 10.3390/math7100925
    [19] Y. Liu, H. Kong, Strong convergence theorems for relatively nonexpansive mappings and Lipschitz-continuous monotone mapping in Banach spaces, Indian J. Pure Appl. Math., 50 (2019), 1049–1065. https://doi.org/10.1007/s13226-019-0373-0 doi: 10.1007/s13226-019-0373-0
    [20] S. S. Chang, C. F. Wen, J. C. Yao, Zero point problem of accretive operators in Banach spaces, Bull. Malays. Math. Sci. Soc., 42 (2019), 105–118. https://doi.org/10.1007/s40840-017-0470-3 doi: 10.1007/s40840-017-0470-3
    [21] H. W. Liu, J. Yang, Weak convergence of iterative methods for solving quasimonotone variational inequalities, Comput. Optim. Appl., 77 (2020), 491–508. https://doi.org/10.1007/s10589-020-00217-8 doi: 10.1007/s10589-020-00217-8
    [22] F. Ma, A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces, J. Inequal. Appl., 2020 (2020), 26. https://doi.org/10.1186/s13660-020-2295-0 doi: 10.1186/s13660-020-2295-0
    [23] J. Yang, H. W. Liu, The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space, Optim. Lett., 14 (2020), 1803–1816. https://doi.org/10.1007/s11590-019-01474-1 doi: 10.1007/s11590-019-01474-1
    [24] L. C. Ceng, A. Petrusel, X. Qin, J. C. Yao, Pseudomonotone variational inequalities and fixed point, Fixed Point Theory, 22 (2021), 543–558. https://doi.org/10.24193/fpt-ro.2021.2.36 doi: 10.24193/fpt-ro.2021.2.36
    [25] Y. I. Alber, Metric and generalized projection operator in Banach spaces: Properties and applications, In: Theory and applications of nonlinear operators of accretive and monotone type, Lecture Notes in Pure and Applied Mathematics, New York: Dekker, 1996.
    [26] K. Aoyama, F. Kohsaka, Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings, Fixed Point Theory Appl., 2014 (2014), 95. https://doi.org/10.1186/1687-1812-2014-95 doi: 10.1186/1687-1812-2014-95
    [27] P. E. Mainge, The viscosity approximation process for quasi-nonexpansive mapping in Hilbert space, Comput. Math. Appl., 59 (2010), 74–79. https://doi.org/10.1016/j.camwa.2009.09.003 doi: 10.1016/j.camwa.2009.09.003
    [28] H. K. Xu, Iterative algorithm for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [29] N. Hadjisavvas, S. Schaible, Quasimonotone variational inequalities in Banach spaces, J. Optim. Theory Appl., 90 (1996), 95–111. https://doi.org/10.1007/BF02192248 doi: 10.1007/BF02192248
[30] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 163 (2014), 399–412. https://doi.org/10.1007/s10957-013-0494-2
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)