In this paper, we consider the problem of finding common elements of the generalized split variational inclusion and the paramonotone equilibrium problem in real Hilbert spaces. Based on the self-adaptive technique, we introduce a self-adaptive viscosity-type inertial algorithm to solve the problem under consideration, where the inertial term is used to accelerate the convergence rate of the method. Under a generalized monotonicity assumption on the related mappings, the strong convergence of the iterative algorithm is established. The results presented here improve and generalize many results in this area.
Throughout the paper, unless otherwise stated, let C and Q be nonempty closed convex subsets of real Hilbert spaces H1 and H2, and let PC and PQ be the orthogonal projections onto C and Q, respectively. Let B:H1→2H1 and D:H2→2H2 be two maximal monotone mappings, and let A:H1→H2 be a bounded linear operator with adjoint A∗. Consider the following problem: Find
x∗∈Csuch thatAx∗∈Q, | (1.1) |
which is called the split feasibility problem (SFP). It was first introduced by Censor and Elfving [1] in finite-dimensional Hilbert spaces to model inverse problems arising in medical image reconstruction. Since then, the SFP has received much attention for its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics. For details, the reader may refer to [1,2,3,4,5] and the references therein. The SFP can be solved by projection algorithms, but each iteration then requires computing the inverse of a matrix or the largest eigenvalue of a matrix, and in practice computing a matrix inverse is time-consuming and difficult. To overcome the disadvantage of matrix inversion, in 2002, Byrne [6] presented the following CQ algorithm:
xn+1=PC(xn−γnAT(I−PQ)Axn), |
where A is the matrix operator, AT is the transpose of A, and γn∈(0,2/L) with L the largest eigenvalue of the matrix ATA.
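For intuition, the CQ iteration is easy to implement whenever PC and PQ have closed forms. The following Python sketch is our own minimal illustration (the matrix A, the box sets, and the fixed step size are assumptions made for the example, not data from [6]):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n),
    with a fixed gamma in (0, 2/L), L the largest eigenvalue of A^T A."""
    L = np.linalg.norm(A, 2) ** 2      # spectral norm squared = lambda_max(A^T A)
    gamma = 1.0 / L                    # any fixed value in (0, 2/L)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        residual = A @ x - proj_Q(A @ x)          # (I - P_Q) A x
        x = proj_C(x - gamma * A.T @ residual)
    return x

# Toy instance: C = [-1, 1]^2 and Q = [0, 2]^2 are boxes, so the
# projections are coordinate-wise clips.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
proj_C = lambda u: np.clip(u, -1.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.0, 2.0)
x = cq_algorithm(A, proj_C, proj_Q, x0=np.array([5.0, -5.0]))
print(x, A @ x)    # x should lie in C with A x approximately in Q
```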
In 2011, Moudafi [7] first introduced the following problem: Find x∗∈H1such that
0∈B(x∗)and0∈D(Ax∗), | (1.2) |
which is called the split variational inclusion problem (for short, denoted by SVIP). It is clear that the SVIP includes the SFP as a special case. We denote the solution set of the SVIP by SVIP(B,D):={x∗∈H1|0∈B(x∗),0∈D(Ax∗)}. The SVIP is at the core of modeling many inverse problems arising from phase retrieval and other real-world problems, for instance, in sensor networks, computerized tomography, and data compression [8,9]. In recent years, there has been tremendous interest in solving the SVIP, and many researchers have constructed a large number of methods to solve this problem [10,11,12,13,14,15,16].
In 2014, Yang and Zhao [17] introduced the following problem: Find x∗∈H1 such that
x∗∈∩∞i=1B−1i(0)andAx∗∈∩∞i=1D−1i(0), | (1.3) |
which is called the generalized split variational inclusion problem (for short, denoted by GSVIP1), where for each i∈N, Bi:H1→2H1 and Di:H2→2H2 are two families of maximal monotone mappings. To solve the GSVIP1, they introduced the following algorithm:
xn+1=anxn+bnf(xn)+∞∑i=1cn,iJBiβn,i(I−γn,iA∗(I−JDiβn,i)A)xn,n≥0, |
where for each i∈N, the sequences {an},{bn},{cn,i}⊂(0,1) with an+bn+∑∞i=1cn,i=1, {βn,i}⊂(0,∞), {γn,i}⊂(0,2/(‖A‖2+1)), and f is a k-contraction mapping of H1; the strong convergence of the above algorithm was proved under mild assumptions.
Ogbuisi et al. [18] introduced a new inertial algorithm to solve the following problem: Find x∗∈H1 such that
x∗∈∩si=1B−1i(0)andAx∗∈∩tj=1D−1j(0), | (1.4) |
which is also called the generalized split variational inclusion problem (for convenience, denoted by GSVIP2), where for s,t∈N, Bi:H1→2H1 (i=1,⋯,s) and Dj:H2→2H2 (j=1,⋯,t) are two finite families of maximal monotone mappings. We denote the solution set of the GSVIP2 by GSVIP2(Bi,Dj):={x∗∈H1|x∗∈∩si=1B−1i(0) and Ax∗∈∩tj=1D−1j(0)}. The following algorithm is introduced to solve the GSVIP2: Choose any initial values u0,u1∈H1 and λ>0. Assume un−1 and un are known. Compute
xn=un+θn(un−un−1), zn=JλBinxn, yn=A∗(I−JλDjn)Axn, n≥1,
where in∈{i|max1≤i≤s‖xn−JλBixn‖}, jn∈{j|max1≤j≤t‖Axn−JλDjAxn‖}. If ‖xn+yn−zn‖=0, then stop (xn is the desired solution); otherwise, continue to compute
un+1=(1−αn)un+αn[xn−τn(xn+yn−zn)],
where αn∈(0,1), θn∈[0,1], $\tau _n = \gamma _n\frac{{{{\left\| {{x_n} - {z_n}} \right\|}^2} + {{\left\| {{y_n}} \right\|}^2}}}{{2{{\left\| {{x_n} + {y_n} - {z_n}} \right\|}^2}}}$, γn>0, and they showed that the sequence generated by the above algorithm converges weakly to a solution of their problem.
In addition, the equilibrium problem (for short, EP) was first proposed by Nikaido and Isoda [19] in 1955, which is described as: Find u∗∈C such that
f(u∗,v)≥0,∀v∈C, | (1.5) |
where H is a real Hilbert space, C is a nonempty closed convex subset of H, and f:C×C→R is a bifunction. We denote the solution set of the EP by EP(f):={u∗∈C|f(u∗,v)≥0,∀v∈C}. After the publication of the paper by Blum and Oettli [20] in 1994, the EP attracted wide attention, and many scholars published a large number of articles on the problem. The EP includes some important problems, such as optimization problems, saddle point problems, variational inequalities, and Nash equilibrium problems, as special cases.
For solving the monotone EP, Korpelevich [21] first extended the extragradient (double projection) method for the saddle point problem to the monotone EP, and many algorithms [22,23,24,25,26] have since been developed for solving the EP. Santos and Scheimberg [27] proposed an inexact projection subgradient method to solve the EP involving paramonotone bifunctions in a finite-dimensional space. It is noted that this algorithm needs only one projection per iteration, and its weak convergence was proved under mild assumptions.
In 2016, Yen et al. [28] studied the SFP involving a paramonotone equilibrium problem and a convex optimization problem, which is formulated as: Find u∗∈C such that
f(u∗,v)≥0,∀v∈Candg(Au∗)≤g(y),∀y∈H2, | (1.6) |
where g is a proper lower semicontinuous convex function on H2. They introduced the following algorithm:
{yn=PC(xn−αnηn),zn=PC(yn−μnA∗(I−proxλg)Ayn),xn+1=anxn+(1−an)zn, |
for each xn∈C, ηn∈∂εn2f(xn,xn), and αn=βn/γn, where γn=max{δn,‖ηn‖} and
$\mu _n = \begin{cases} 0, & \text{if } \nabla h\left( {{y_n}} \right) = 0, \\ \frac{{{\rho _n}h\left( {{y_n}} \right)}}{{{{\left\| {\nabla h\left( {{y_n}} \right)} \right\|}^2}}}, & \text{if } \nabla h\left( {{y_n}} \right) \ne 0, \end{cases}$
and the selection of the sequences {αn}, {δn}, {βn}, {εn}, and {ρn} is described in Algorithm 3.1 of [28]. Moreover, they proved the strong convergence of the algorithm under mild assumptions.
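Recall that prox_{λg}(x) = argmin_y { g(y) + (1/2λ)‖y−x‖² }; for simple g it is available in closed form, which is what makes the scheme above implementable. A small sketch follows (our own illustration; the quadratic g is chosen only because its prox is explicit):

```python
import numpy as np

def prox_quadratic(x, b, lam):
    """prox_{lam*g}(x) for g(y) = 0.5 * ||y - b||^2:
    minimizing 0.5*||y - b||^2 + (1/(2*lam))*||y - x||^2 gives
    the closed form y = (lam * b + x) / (1 + lam)."""
    return (lam * np.asarray(b) + np.asarray(x)) / (1.0 + lam)

b = np.array([1.0, -2.0])
print(prox_quadratic(np.zeros(2), b, lam=0.5))   # pulled 1/3 of the way to b
```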
The problems of finding common solutions of the set of fixed points of nonlinear mappings and the set of solutions of optimization problems with its related problems have been considered by some authors (for instance, see [29,30,31,32,33] and the references therein). The motivation for studying such a common solution problem lies in its potential application to mathematical models whose constraints can be expressed as fixed point problems and optimization problems. This arises in practical problems, such as signal processing, network resource allocation, and image recovery (see, for instance, [34,35] and the references therein).
Tan, Qin, and Yao [36] proposed four self-adaptive inertial algorithms with strong convergence to solve the split variational inclusion problem in real Hilbert spaces. Izuchukwu et al. [37] first proposed and studied several strongly convergent versions of the forward-reflected-backward splitting method of Malitsky and Tam for finding a zero of the sum of two monotone operators in a real Hilbert space, which required only one forward evaluation of the single-valued operator and one backward evaluation of the set-valued operator at each iteration. They also developed inertial versions of their methods with strong convergence when the set-valued operator is maximal monotone and the single-valued operator is Lipschitz continuous and monotone. Moreover, they discussed some examples from image restoration and optimal control regarding the implementation of their methods in comparison with known related methods in the literature. Zhang and Wang [38] suggested a new inertial iterative algorithm for split null point and common fixed point problems. In [39], the authors focused on an inertial-viscosity approximation method for solving a split generalized equilibrium problem and a common fixed point problem in real Hilbert spaces; their algorithm was designed so that, under mild conditions, its strong convergence does not require the norm of the bounded linear operator underlying the split equilibrium problem. In [40], the authors studied the split variational inclusion and fixed point problems using Bregman weak relatively nonexpansive mappings in p-uniformly convex smooth Banach spaces; they introduced an inertial shrinking projection self-adaptive iterative scheme for the problem and proved a strong convergence theorem.
Su et al. [41] constructed a multi-step inertial asynchronous sequential algorithm for common fixed point problems. Zheng et al. [42] considered a new fixed-time stability of a neural network to solve split convex feasibility problems.
Motivated and inspired by the above research works, we consider the problem of finding a common element of the solution sets of the paramonotone equilibrium problem and the GSVIP2: Find u∗∈C such that
f(u∗,u)≥0,∀u∈Cand0∈∩si=1Bi(u∗),0∈∩tj=1Dj(Au∗), | (1.7) |
where s,t,Bi,Dj,f:C×C→R are as mentioned above. We denote the set of solutions of Problem (1.7) by
Γ:={x∗∈C|f(x∗,x)≥0 ∀x∈C, 0∈∩si=1Bi(x∗), 0∈∩tj=1Dj(Ax∗)}=GSVIP2(Bi,Dj)∩EP(f).
It is easy to see that if Bi=0, Dj=0, then Problem (1.7) reduces to the EP (1.5); if f=0, then Problem (1.7) reduces to the GSVIP2 (1.4); and if s=1, t=1, then Problem (1.7) becomes the following problem: Find x∗∈C such that
f(x∗,y)≥0,∀y∈Cand0∈B(x∗),0∈D(Ax∗), | (1.8) |
If, for s,t∈N, i∈{1,2,⋅⋅⋅,s}, j∈{1,2,⋅⋅⋅,t}, we take Bi=NCi and Dj=NQj in Problem (1.7), where NCi and NQj are the normal cones of the nonempty, closed, and convex subsets Ci⊆H1 and Qj⊆H2, respectively, then we obtain the following multiple-sets split feasibility problem and paramonotone equilibrium problem: Find u∗∈C such that
f(u∗,u)≥0,∀u∈Candu∗∈∩si=1Ci,Au∗∈∩tj=1Qj. | (1.9) |
Thus, it can be seen that Problem (1.7) considered in this paper is more general and contains many known and new mathematical models of common element problems, such as Problems (1.1)–(1.6), (1.8), and (1.9), as special cases. We are committed to establishing the strong convergence of a self-adaptive viscosity-type inertial algorithm for the common solutions of Problem (1.7). The advantages of the suggested iterative algorithm are that: (1) the algorithm is self-adaptive, and the inertial term can speed up its convergence; (2) the strong convergence analysis does not require a prior estimate of the norm of the bounded linear operator; (3) the strong convergence of the iterative algorithm is established under the weak assumption of paramonotonicity of the related mappings. Our results improve and generalize many known results in the literature [18,27].
In this section, we give some basic concepts, properties, and notations that will be used in the sequel. Let C be a nonempty closed and convex subset of a real Hilbert space H equipped with the inner product ⟨⋅,⋅⟩ and the induced norm ‖⋅‖. For each u,v∈H and a∈R, we have the following facts:
ⅰ)‖u+v‖2=‖u‖2+2⟨u,v⟩+‖v‖2;
ⅱ)‖u+v‖2≤‖u‖2+2⟨v,u+v⟩;
ⅲ)‖au+(1−a)v‖2=a‖u‖2+(1−a)‖v‖2−a(1−a)‖u−v‖2.
Definition 2.1 [18] (1) F:H→H is nonexpansive, if ‖Fu−Fv‖≤‖u−v‖,∀u,v∈H.
(2) F:H→H is firmly nonexpansive, if ⟨Fu−Fv,u−v⟩≥‖Fu−Fv‖2,∀u,v∈H.
Definition 2.2 [18] A mapping F:C→C is said to be demiclosed if, for any sequence {un}⊂C that converges weakly to u such that {Fun} converges strongly to v, we have F(u)=v.
Lemma 2.1 [43] Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let F:C→C be a nonexpansive mapping. Then, I−F is demiclosed at 0.
Lemma 2.2 [44] Let B be a maximal monotone mapping on a Hilbert space H. For any r>0, define the resolvent JBr=(I+rB)−1. Then the following hold:
(1) JBr is a single-valued and firmly nonexpansive mapping.
(2) D(JBr)=H, and Fix(JBr)=B−1(0), where Fix(JBr) stands for the fixed point set of JBr.
Lemma 2.3 [18] Let B:H→2H be a maximal monotone mapping, then the associated resolvent JBr for some r>0 has the following characterization:
⟨u−JBr(u),u−v⟩≥‖u−JBr(u)‖2,∀u∈H,v∈Fix(JBr). |
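When B is a linear (matrix) monotone operator, evaluating the resolvent amounts to solving a linear system; this is how the resolvents in the numerical examples below can be computed. A minimal sketch (our own illustration, not from the cited works):

```python
import numpy as np

def resolvent(B, r, x):
    """J_r^B(x) = (I + r B)^{-1} x for a monotone matrix operator B."""
    n = B.shape[0]
    return np.linalg.solve(np.eye(n) + r * B, np.asarray(x, dtype=float))

B = np.array([[5.0, 0.0], [0.0, 2.0]])   # positive definite, hence monotone
x = np.array([1.0, -1.0])
y = resolvent(B, 1.0, x)
# Firm nonexpansiveness spot check against J_1^B(0) = 0:
# <J x - J 0, x - 0> >= ||J x - J 0||^2 should hold.
print(y, np.dot(y, x) >= np.dot(y, y))   # -> [ 0.1667 -0.3333 ] True
```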
Lemma 2.4 [45] Let {υn} and {δn} be nonnegative sequences of real numbers satisfying υn+1≤υn+δn with ∞∑n=1δn<+∞. Then, the sequence {υn} is convergent.
Lemma 2.5 [46] Let H be a real Hilbert space, {an} be a sequence of real numbers such that 0<a<an<b<1 for all n≥1, and {bn},{dn} be sequences in H such that, for some c>0, limsupn→∞‖bn‖≤c, limsupn→∞‖dn‖≤c, and limsupn→∞‖anbn+(1−an)dn‖=c. Then limn→∞‖bn−dn‖=0.
In this section, in order to prove the convergence of the algorithm, the following conditions are assumed:
(A1) Let H1 and H2 be two real Hilbert spaces, C be a nonempty closed convex subset of H1,A:H1→H2 be a linear and bounded operator.
(A2) For s,t∈N,i∈{1,2,⋅⋅⋅,s},j∈{1,2,⋅⋅⋅,t},Bi:H1→2H1,Dj:H2→2H2 are two families of maximal monotone mappings.
(A3) The bifunction f:C×C→R satisfies the following:
(B1) For each u∈C,f(u,u)=0, and f(u,⋅) is lower semicontinuous and convex on C, f(⋅,u) is upper semicontinuous and convex on C.
(B2) ∂λ2f(u,u) is nonempty for any λ>0 and u∈C, and it is bounded on any bounded subset of C, where ∂λ2f(u,u) denotes the λ-subdifferential of the convex function f(u,⋅) at u, that is,
∂λ2f(u,u):={η∈H:⟨η,v−u⟩+f(u,u)≤f(u,v)+λ,∀v∈C}. |
(B3) f is pseudomonotone on C with respect to every solution of the EP, that is, f(u,u∗)≤0 for all u∈C and u∗∈EP(f). Moreover, f satisfies the following condition, which is called the paramonotonicity property: u∗∈EP(f), v∈C, f(u∗,v)=f(v,u∗)=0 ⇒ v∈EP(f).
(A4) Γ≠∅.
Now, we introduce a self-adaptive viscosity-type inertial algorithm to solve Problem (1.7), which is described as follows.
Algorithm 3.1. Initialization. Pick u0,u1∈H1, r>0, and θ∈[0,1); for any n∈N, let the sequences {ρn},{an},{βn},{λn},{δn},{εn}⊂[0,∞) satisfy the following conditions:
ρn>ρ>0,0<a<an<b<1,βn>0,λn>0, |
$\sum\limits_{n = 1}^\infty {{\varepsilon _n}} < + \infty, \quad \mathop {\lim }\limits_{n \to \infty } {a_n} = \frac{1}{2}, \quad \sum\limits_{n = 1}^\infty {\frac{{{\beta _n}}}{{{\rho _n}}}} = + \infty, \quad \sum\limits_{n = 1}^\infty {\beta _n^2} < + \infty,$
$\sum\limits_{n = 1}^\infty {\frac{{{\beta _n}{\lambda _n}}}{{{\rho _n}}}} < + \infty, \quad 0 < \mathop {\lim \inf }\limits_{n \to \infty } {\delta _n} \le \mathop {\lim \sup }\limits_{n \to \infty } {\delta _n} < 4.$
Step 1. Assume un−1 and un are known. Choose αn such that 0<αn≤ˉαn, where
$\bar \alpha _n = \begin{cases} \min \left\{ {\theta, \frac{{{\varepsilon _n}}}{{\left\| {{u_n} - {u_{n - 1}}} \right\|}}} \right\}, & \text{if } u_n \ne u_{n-1}, \\ \theta, & \text{otherwise}. \end{cases}$
Compute
xn=un+αn(un−un−1). | (3.1) |
Step 2. Compute
yn=JBinrxn, | (3.2) |
zn=A∗(I−JDjnr)Axn,∀n≥1, | (3.3) |
where
in∈{i|max1≤i≤s‖xn−JBirxn‖},jn∈{j|max1≤j≤t‖Axn−JDjrAxn‖}, | (3.4) |
if
‖xn+zn−yn‖=0, | (3.5) |
then stop; otherwise, continue and compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)), | (3.6) |
where
$\xi _n = \delta _n\frac{{{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_r^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}}}{{2{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}, \quad \delta _n > 0.$
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βn/γn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). | (3.7) |
Step 4. Compute
un+1=anun+(1−an)wn. | (3.8) |
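To make the scheme concrete, the following Python sketch implements Algorithm 3.1 for the single-pair case s=t=1 with matrix-defined maximal monotone operators, so each resolvent is a linear solve. All concrete choices (the subgradient selection eta, the exact subdifferential, i.e., λn=0, and the parameter sequences εn=1/n², βn=1/n, ρn=1, δn=2, an→1/2) are our own and merely satisfy the conditions above; this is a minimal sketch, not the authors' code. It is reused in Example 3.1 below.

```python
import numpy as np

def alg31(u0, u1, B, D, A, proj_C, eta, n_iter=2000, r=1.0, theta=0.5):
    """Sketch of Algorithm 3.1 with s = t = 1 and matrix operators B, D."""
    JB = lambda x: np.linalg.solve(np.eye(len(x)) + r * B, x)   # J_r^B
    JD = lambda y: np.linalg.solve(np.eye(len(y)) + r * D, y)   # J_r^D
    u_prev, u = np.asarray(u0, float), np.asarray(u1, float)
    for n in range(1, n_iter + 1):
        eps_n = 1.0 / n ** 2                       # summable epsilon_n
        diff = np.linalg.norm(u - u_prev)
        alpha = theta if diff == 0 else min(theta, eps_n / diff)
        x = u + alpha * (u - u_prev)               # inertial step (3.1)
        y = JB(x)                                  # (3.2)
        res = A @ x - JD(A @ x)                    # (I - J_r^D) A x
        z = A.T @ res                              # (3.3)
        w = x + z - y
        if np.linalg.norm(w) < 1e-12:              # stopping rule (3.5)
            return x
        delta_n = 2.0                              # constant delta_n in (0, 4)
        xi = delta_n * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(res) ** 2) \
             / (2.0 * np.linalg.norm(w) ** 2)      # self-adaptive step size
        v = proj_C(x - xi * w)                     # (3.6)
        g = eta(v)                                 # eta_n in subdiff. of f(v, .)
        beta_n, rho_n = 1.0 / n, 1.0
        tau = beta_n / max(rho_n, np.linalg.norm(g))
        wn = proj_C(v - tau * g)                   # (3.7)
        a_n = 0.5 + 1.0 / (4.0 * (n + 1))          # a_n -> 1/2
        u_prev, u = u, a_n * u + (1.0 - a_n) * wn  # (3.8)
    return u
```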
Several algorithms can be deduced from our Algorithm 3.1 as follows. If αn=0 in Algorithm 3.1, we obtain the following self-adaptive viscosity-type method for solving Problem (1.7):
Algorithm 3.2. Choose any initial value u1∈H1.
Step 1. Compute
yn=JBinrun, |
zn=A∗(I−JDjnr)Aun,∀n≥1, |
where
in∈{i|max1≤i≤s‖un−JBirun‖},jn∈{j|max1≤j≤t‖Aun−JDjrAun‖}, |
if ‖un+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(un−ξn(un+zn−yn)).
Step 2. Take ηn∈∂λn2f(vn,vn) and define τn=βnγn with γn=max{ρn,‖ηn‖}. Compute
un+1=PC(vn−τnηn), |
where $\xi _n = \delta _n\frac{{{{\left\| {{u_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_r^{{D_{{j_n}}}}} \right)A{u_n}} \right\|}^2}}}{{2{{\left\| {{u_n} - {y_n} + {z_n}} \right\|}^2}}}, \delta _n > 0$, and θ,r,{ρn},{an},{βn},{λn},{εn} are updated by Algorithm 3.1.
If s=1, t=1, then Problem (1.7) reduces to Problem (1.8), and we consider the following Algorithm 3.3, corresponding to Algorithm 3.1, for computing the solution of Problem (1.8):
Algorithm 3.3. Choose any initial value u0,u1∈H1.
Step 1. Assume un−1 and un are known. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=JBrxn, |
zn=A∗(I−JDr)Axn,∀n≥1, |
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)). |
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βnγn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). |
Step 4. Compute
un+1=anun+(1−an)wn, |
where θ,r,{ρn},{an},{βn},{λn},{εn},{αn},{ξn} are updated by Algorithm 3.1.
If Bi=NCi(i=1,2,⋯,s), Dj=NQj(j=1,2,⋯,t), then Problem (1.7) reduces to Problem (1.9), and Algorithm 3.1 reduces to the following method:
Algorithm 3.4. Choose any initial value u0,u1∈H1.
Step 1. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=PCinxn, |
zn=A∗(I−PQjn)Axn, |
where
in∈{i|max1≤i≤s‖xn−PCixn‖},jn∈{j|max1≤j≤t‖Axn−PQjAxn‖},
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute vn as follows
vn=PC(xn−ξn(xn+zn−yn)).
Step 3. Take ηn∈∂λn2f(vn,vn) and define τn=βnγn with γn=max{ρn,‖ηn‖}. Compute
wn=PC(vn−τnηn). |
Step 4. Compute
un+1=anun+(1−an)wn, |
where $\xi _n = \delta _n\frac{{{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - {P_{{Q_{{j_n}}}}}} \right)A{x_n}} \right\|}^2}}}{{2{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}, \delta _n > 0$, and θ,{ρn},{an},{βn},{λn},{εn},{αn} are updated by Algorithm 3.1.
If f=0, then Problem (1.7) reduces to Problem (1.4), and we consider the following inertial Algorithm 3.5, corresponding to Algorithm 3.1, for computing the solution of the GSVIP2 (1.4):
Algorithm 3.5. Choose any initial value u0,u1∈H1.
Step 1. Assume un−1 and un are known. Compute
xn=un+αn(un−un−1). |
Step 2. Compute
yn=JBinrxn, |
zn=A∗(I−JDjnr)Axn,∀n≥1, |
where
in∈{i|max1≤i≤s‖xn−JBirxn‖},jn∈{j|max1≤j≤t‖Axn−JDjrAxn‖}, |
if ‖xn+zn−yn‖=0, then stop; otherwise, continue to compute {v_n} as follows
{v_n} = {x_n} - {\xi _n}\left( {{x_n} + {z_n} - {y_n}} \right). |
Step 3. Compute
{u_{n + 1}} = {a_n}{u_n} + \left( {1 - {a_n}} \right){v_n}, |
where \theta, r, \left\{ {{a_n}} \right\}, \left\{ {{\alpha _n}} \right\}, \left\{ {{\xi _n}} \right\} are updated by Algorithm 3.1.
If {B_i} = 0, {D_j} = 0, Problem (1.7) reduces to Problem (1.5) and hence we consider the following inertial Algorithm 3.6 corresponding to Algorithm 3.1 for computing the solution of EP(1.5):
Algorithm 3.6. Initialization. Choose any initial value {u_0}, {u_1} \in {H_1}.
Step 1. Compute
{x_n} = {u_n} + {\alpha _n}\left( {{u_n} - {u_{n - 1}}} \right). |
Step 2. Take ${\eta _n} \in \partial _2^{{\lambda _n}}f\left({{x_n}, {x_n}} \right)$ and define ${\tau _n} = \frac{{{\beta _n}}}{{{\gamma _n}}}$ with ${\gamma _n} = \max \left\{ {{\rho _n}, \left\| {{\eta _n}} \right\|} \right\}$. Compute
{w_n} = {P_C}\left( {{x_n} - {\tau _n}{\eta _n}} \right).
Step 3. Compute
{u_{n + 1}} = {a_n}{u_n} + \left( {1 - {a_n}} \right){w_n}, |
where \left\{ {{\alpha _n}} \right\}, \left\{ {{\rho _n}} \right\}, \left\{ {{a_n}} \right\}, \left\{ {{\beta _n}} \right\}, \left\{ {{\lambda _n}} \right\} are updated by Algorithm 3.1.
In order to obtain our major results, we also need the following lemmas.
Lemma 3.1. [27] For any n \ge 1 , the following inequalities hold:
(1) {\tau _n}\left\| {{\eta _n}} \right\| \le {\beta _n}. (2) \left\| {{w_n} - {v_n}} \right\| \le {\beta _n}.
Lemma 3.2. [18] The equality (3.5) holds if and only if ${x_n}$ is a solution of $GSVIP2\left({{B_i}, {D_j}} \right)$.
Theorem 3.1. Suppose Assumptions (A1–A4) hold. Then the sequence \left\{ {{u_n}} \right\} generated by Algorithm 3.1 strongly converges to a solution of Problem (1.7).
Proof. We divide the proof into the following several steps.
Step 1. The sequence \left\{ {{{\left\| {{u_n} - {u^ * }} \right\|}^2}} \right\} is convergent for all {u^ * } \in \Gamma; consequently, the sequence \left\{ {{u_n}} \right\} is bounded.
Indeed, for {u^ * } \in \Gamma , we have
\langle {{x_n} + {z_n} - {y_n}, {x_n} - {u^ * }} \rangle \ge {\left\| {{x_n} - {y_n}} \right\|^2} + {\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|^2}. | (3.9) |
From (3.9), we get
\begin{array}{l} {\; \; \; \; }{\left\| {{v_n} - {u^ * }} \right\|^2}\\ \le {\left\| {{x_n} - {\xi _n}\left( {{x_n} + {z_n} - {y_n}} \right) - {u^ * }} \right\|^2}\\ = {\left\| {{x_n} - {u^ * }} \right\|^2} - 2{\xi _n}\langle {{x_n} - {u^ * }, {x_n} + {z_n} - {y_n}} \rangle + {\xi _n}^2{\left\| {{x_n} + {z_n} - {y_n}} \right\|^2}\\ \le {\left\| {{x_n} - {u^ * }} \right\|^2} - 2{\xi _n}\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right) + {\xi _n}^2{\left\| {{x_n} + {z_n} - {y_n}} \right\|^2}\\ \le {\left\| {{x_n} - {u^ * }} \right\|^2} - 2{\delta _n}\frac{{{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}}}{{2{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)\\ {\; \; \; \; }+ \delta _n^2\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{4{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^4}}}{\left\| {{x_n} + {z_n} - {y_n}} \right\|^2}\\ = {\left\| {{x_n} - {u^ * }} \right\|^2} - {\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}. \end{array} | (3.10) |
From $0 < \mathop {\lim \inf }\limits_{n \to \infty } {\delta _n} \le \mathop {\lim \sup }\limits_{n \to \infty } {\delta _n} < 4$, we have $\left({1 - \frac{{{\delta _n}}}{4}} \right) > 0$ and
{\left\| {{v_n} - {u^ * }} \right\|^2} \le {\left\| {{x_n} - {u^ * }} \right\|^2}. | (3.11) |
It follows from the definition of ${x_n}$ that
\begin{array}{l} {\; \; \; \; }{\left\| {{x_n} - {u^ * }} \right\|^2} \\ = {\left\| {{u_n} + {\alpha _n}\left( {{u_n} - {u_{n - 1}}} \right) - {u^ * }} \right\|^2}\\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + 2{\alpha _n}\langle {{u_n} - {u^ * }, {u_n} - {u_{n - 1}}} \rangle + \alpha _n^2{\left\| {{u_n} - {u_{n - 1}}} \right\|^2}\\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}\left( {{{\left\| {{u_n} - {u^ * }} \right\|}^2} + {{\left\| {{u_n} - {u_{n - 1}}} \right\|}^2} - {{\left\| {{u_{n - 1}} - {u^ * }} \right\|}^2}} \right) + \alpha _n^2{\left\| {{u_n} - {u_{n - 1}}} \right\|^2}\\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}\left( {{{\left\| {{u_n} - {u^ * }} \right\|}^2} - {{\left\| {{u_{n - 1}} - {u^ * }} \right\|}^2}} \right) + {\alpha _n}\left( {1 + {\alpha _n}} \right){\left\| {{u_n} - {u_{n - 1}}} \right\|^2}\\ \le {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}\left( {{{\left\| {{u_n} - {u^ * }} \right\|}^2} - {{\left\| {{u_{n - 1}} - {u^ * }} \right\|}^2}} \right) + 2{\alpha _n}{\left\| {{u_n} - {u_{n - 1}}} \right\|^2}\\ \le {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}\left( {\left\| {{u_n} - {u^ * }} \right\| + \left\| {{u_{n - 1}} - {u^ * }} \right\|} \right)\left\| {{u_n} - {u_{n - 1}}} \right\| + 2{\alpha _n}{\left\| {{u_n} - {u_{n - 1}}} \right\|^2}\\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}\left( {\left\| {{u_n} - {u^ * }} \right\| + \left\| {{u_{n - 1}} - {u^ * }} \right\| + 2\left\| {{u_n} - {u_{n - 1}}} \right\|} \right)\left\| {{u_n} - {u_{n - 1}}} \right\|\\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\|, \end{array} | (3.12) |
where {c_1} = \left\| {{u_n} - {u^ * }} \right\| + \left\| {{u_{n - 1}} - {u^ * }} \right\| + 2\left\| {{u_n} - {u_{n - 1}}} \right\|. By (3.11) and (3.12), we have
{\left\| {{v_n} - {u^ * }} \right\|^2} \le {\left\| {{u_n} - {u^ * }} \right\|^2} + {\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\|. | (3.13) |
Noting that
{\left\| {{w_n} - {u^ * }} \right\|^2} = {\left\| {{w_n} - {v_n} + {v_n} - {u^ * }} \right\|^2} \le {\left\| {{v_n} - {u^ * }} \right\|^2} + 2\langle {{v_n} - {w_n}, {u^ * } - {w_n}} \rangle . | (3.14) |
By the definition of ${w_n}$ and the projection property, we have $\langle {{w_n} - {v_n} + {\tau _n}{\eta _n}, {u^ * } - {w_n}} \rangle \ge 0$, so $\langle {{\tau _n}{\eta _n}, {u^ * } - {w_n}} \rangle \ge \langle {{v_n} - {w_n}, {u^ * } - {w_n}} \rangle$. It then follows from (3.14) that
\begin{array}{l} {\left\| {{w_n} - {u^ * }} \right\|^2} \le {\left\| {{v_n} - {u^ * }} \right\|^2} + 2\langle {{\tau _n}{\eta _n}, {u^ * } - {w_n}} \rangle \\ {\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; } = {\left\| {{v_n} - {u^ * }} \right\|^2} + 2\langle {{\tau _n}{\eta _n}, {u^ * } - {v_n}} \rangle + 2\langle {{\tau _n}{\eta _n}, {v_n} - {w_n}} \rangle \end{array}. | (3.15) |
It follows from {\eta _n} \in \partial _2^{{\lambda _n}}f\left({{v_n}, {v_n}} \right) that f\left({{v_n}, {u^ * }} \right) - f\left({{v_n}, {v_n}} \right) \ge \langle {{\eta _n}, {u^ * } - {v_n}} \rangle - {\lambda _n}. Then
f\left( {{v_n}, {u^ * }} \right) + {\lambda _n} \ge \langle {{\eta _n}, {u^ * } - {v_n}} \rangle. | (3.16) |
Moreover, by Lemma 3.1, we get
\langle {{\tau _n}{\eta _n}, {v_n} - {w_n}} \rangle \le {\tau _n}\left\| {{\eta _n}} \right\|\left\| {{v_n} - {w_n}} \right\| \le \beta _n^2. | (3.17) |
By (3.15)–(3.17), we have
{\left\| {{w_n} - {u^ * }} \right\|^2} \le {\left\| {{v_n} - {u^ * }} \right\|^2} + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right) + 2{\tau _n}{\lambda _n} + 2\beta _n^2. | (3.18) |
Combining (3.11), we get
\begin{equation} {\left\| {{w_n} - {u^ * }} \right\|^2} \le {\left\| {{x_n} - {u^ * }} \right\|^2} + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right) + 2{\tau _n}{\lambda _n} + 2\beta _n^2. \end{equation} | (3.19) |
Since ${u^ * } \in \Gamma$, we have ${u^ * } \in EP\left(f \right)$. Since $f$ is pseudomonotone on $C$ with respect to every solution of $EP(f)$, we have $f\left({{v_n}, {u^ * }} \right) \le 0$. By the definition of ${u_{n + 1}}$, we obtain
\begin{equation} {\left\| {{u_{n + 1}} - {u^ * }} \right\|^2} = {\left\| {{a_n}{u_n} + \left( {1 - {a_n}} \right){w_n} - {u^ * }} \right\|^2} \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\left\| {{w_n} - {u^ * }} \right\|^2}. \end{equation} | (3.20) |
By the definition of {y_n} and (3.12), we get
{\left\| {{y_n} - {u^*}} \right\|^2} = {\left\| {J_r^{{B_{{i_n}}}}{x_n} - {u^*}} \right\|^2} \le {\left\| {{x_n} - {u^*}} \right\|^2} \le {\left\| {{u_n} - {u^*}} \right\|^2} + {\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\|. |
It follows from (3.12), (3.19), and (3.20) that
\begin{array}{l} \;\;\;\; \left\| {{u_{n + 1}} - {u^ * }} \right\|^2 \\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left( {{{\left\| {{x_n} - {u^ * }} \right\|}^2} + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right) + 2{\tau _n}{\lambda _n} + 2\beta _n^2} \right) \\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left( {{{\left\| {{u_n} - {u^ * }} \right\|}^2} + {\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right)} \right.\left. { + 2{\tau _n}{\lambda _n} + 2\beta _n^2} \right) \\ = {\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left[ {{\alpha _n}} \right.{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right) + 2{\tau _n}{\lambda _n} + \left. {2\beta _n^2} \right] \end{array} | (3.21) |
\le {\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + 2\left( {1 - {a_n}} \right){\tau _n}{\lambda _n} + 2\left( {1 - {a_n}} \right)\beta _n^2
= {\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + {\Lambda _n}, | (3.22)
where ${\Lambda _n} = 2\left({1 - {a_n}} \right)\left({{\tau _n}{\lambda _n} + \beta _n^2} \right)$. Since ${\tau _n} = \frac{{{\beta _n}}}{{{\gamma _n}}}$ with ${\gamma _n} = \max \left\{ {{\rho _n}, \left\| {{\eta _n}} \right\|} \right\} \ge {\rho _n}$, we have
\sum\limits_{n = 1}^\infty {{\tau _n}{\lambda _n}} = \sum\limits_{n = 1}^\infty {\frac{{{\beta _n}}}{{{\gamma _n}}}{\lambda _n}} \le \sum\limits_{n = 1}^\infty {\frac{{{\beta _n}}}{{{\rho _n}}}{\lambda _n}} < + \infty . |
Noting $\sum\limits_{n = 1}^\infty {\beta _n^2} < + \infty$ and $0 < a < {a_n} < b < 1$, we have $\sum\limits_{n = 1}^\infty {{\Lambda _n}} \le 2\left({1 - a} \right)\sum\limits_{n = 1}^\infty {\left({{\tau _n}{\lambda _n} + \beta _n^2} \right)} < + \infty$. By the choice of ${\alpha _n}$ in Step 1, we have ${\alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\| \le {\bar \alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\| \le {\varepsilon _n}$; noting $\sum\limits_{n = 1}^\infty {{\varepsilon _n}} < \infty$, we get
\sum\limits_{n = 1}^\infty {{\alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\|} < + \infty . |
From Lemma 2.4 and (3.22), we can see that \left\{ {{{\left\| {{u_n} - {u^ * }} \right\|}^2}} \right\} is convergent for all {u^ * } \in \Gamma. Hence \left\{ {{u_n}} \right\} is bounded, consequently, so are the sequences \left\{ {{x_n}} \right\}, \left\{ {{y_n}} \right\}, \left\{ {{v_n}} \right\} and \left\{ {{w_n}} \right\}.
Step 2. For any ${u^ * } \in \Gamma$, $\mathop {\lim \sup }\limits_{n \to \infty } f\left({{v_n}, {u^ * }} \right) = 0$. Indeed, from (3.21), we have
- 2\left( {1 - {a_n}} \right){\tau _n}f\left( {{v_n}, {u^ * }} \right) \le {\left\| {{u_n} - {u^ * }} \right\|^2} - {\left\| {{u_{n + 1}} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + {\Lambda _n}. | (3.23) |
Consequently, $\sum\limits_{n = 1}^\infty { - 2\left({1 - {a_n}} \right){\tau _n}f\left({{v_n}, {u^ * }} \right)} < + \infty$. It follows from Assumption (B2) and the boundedness of $\left\{ {{v_n}} \right\}$ that $\left\{ {\left\| {{\eta _n}} \right\|} \right\}$ is bounded. Thus, there is a constant $L > \rho$ such that $\left\| {{\eta _n}} \right\| \le L$ for each $n \ge 1$; then $\frac{{{\gamma _n}}}{{{\rho _n}}} = \max \left\{ {1, \frac{{\left\| {{\eta _n}} \right\|}}{{{\rho _n}}}} \right\} \le \frac{L}{\rho }$, so ${\tau _n} = \frac{{{\beta _n}}}{{{\gamma _n}}} \ge \frac{\rho }{L}\frac{{{\beta _n}}}{{{\rho _n}}}$. Since ${u^ * } \in \Gamma$, it follows from the pseudomonotonicity of $f$ that $- f\left({{v_n}, {u^ * }} \right) \ge 0$; combining this with $0 < a < {a_n} < b < 1$, we have $\sum\limits_{n = 1}^\infty {\left({1 - b} \right)\frac{{{\beta _n}}}{{{\rho _n}}}\left[{- f\left({{v_n}, {u^ * }} \right)} \right]} < + \infty$. Since $\sum\limits_{n = 1}^\infty {\frac{{{\beta _n}}}{{{\rho _n}}}} = + \infty$, it follows that $\mathop {\lim \sup }\limits_{n \to \infty } f\left({{v_n}, {u^ * }} \right) = 0$.
Step 3. For any ${u^ * } \in \Gamma$, let $\left\{ {{v_{{n_k}}}} \right\}$ be a subsequence of $\left\{ {{v_n}} \right\}$ such that $\mathop {\lim \sup }\limits_{n \to \infty } f\left({{v_n}, {u^ * }} \right) = \mathop {\lim }\limits_{k \to \infty } f\left({{v_{{n_k}}}, {u^ * }} \right)$, and let ${v^ * }$ be a weak cluster point of $\left\{ {{v_{{n_k}}}} \right\}$; then ${v^ * } \in EP\left(f \right)$. Indeed, if ${v_{{n_k}}} \rightharpoonup {v^ * }\left({k \to \infty } \right)$, since $f\left({ \cdot, {u^ * }} \right)$ is upper semicontinuous, by Step 2 we have $f\left({{v^ * }, {u^ * }} \right) \ge \mathop {\lim \sup }\limits_{k \to \infty } f\left({{v_{{n_k}}}, {u^ * }} \right) = 0$. Since ${u^ * } \in \Gamma$ and $f$ is pseudomonotone, we have $f\left({{v^ * }, {u^ * }} \right) \le 0$, and so $f\left({{v^ * }, {u^ * }} \right) = 0$. Again, by the pseudomonotonicity of $f$, $f\left({{u^ * }, {v^ * }} \right) \le 0$; since $f\left({{u^ * }, {v^ * }} \right) \ge 0$, we obtain $f\left({{u^ * }, {v^ * }} \right) = 0$. Then, $f\left({{v^ * }, {u^ * }} \right) = f\left({{u^ * }, {v^ * }} \right) = 0$. Thus, by the paramonotonicity of $f$, we have ${v^ * } \in EP\left(f \right)$.
Step 4. Every weak cluster point $\bar u$ of the sequence $\left\{ {{u_n}} \right\}$ belongs to $GSVIP2\left({{B_i}, {D_j}} \right)$. Let $\left\{ {{u_{{n_k}}}} \right\}$ be a subsequence of $\left\{ {{u_n}} \right\}$ such that ${u_{{n_k}}} \rightharpoonup \bar u$. It is easy to see that $\sum\limits_{n = 1}^\infty {\left\| {{x_n} - {u_n}} \right\|} = \sum\limits_{n = 1}^\infty {{\alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\|} < \infty$, implying that
\mathop {\lim }\limits_{n \to \infty } \left\| {{x_n} - {u_n}} \right\| = 0. | (3.24) |
Therefore ${x_{{n_k}}} \rightharpoonup \bar u$, where $\left\{ {{x_{{n_k}}}} \right\}$ is the corresponding subsequence of $\left\{ {{x_n}} \right\}$. It follows from (3.10), (3.12), (3.18), and (3.20) that
\begin{array}{l} {\; \; \; \; }{\left\| {{u_{n + 1}} - {u^ * }} \right\|^2}\\ = {\left\| {{a_n}{u_n} + \left( {1 - {a_n}} \right){w_n} - {u^ * }} \right\|^2}\\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\left\| {{w_n} - {u^ * }} \right\|^2}\\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left( {{{\left\| {{v_n} - {u^ * }} \right\|}^2} + 2{\tau _n}f\left( {{v_n}, {u^ * }} \right) + 2{\tau _n}{\lambda _n} + 2\beta _n^2} \right)\\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\left\| {{v_n} - {u^ * }} \right\|^2} + {\Lambda _n}\\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left( {{{\left\| {{x_n} - {u^ * }} \right\|}^2}} \right.\\ {\; \; \; \; }\left. { - {\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}} \right) + {\Lambda _n}\\ \le {a_n}{\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right)\left( {{{\left\| {{u_n} - {u^ * }} \right\|}^2} + {\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\|} \right.\\ {\; \; \; \; }\left. { - {\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}} \right) + {\Lambda _n}\\ \end{array} |
\begin{array}{l} = {\left\| {{u_n} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\|\\ {\; \; \; \; }- \left( {1 - {a_n}} \right){\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}} + {\Lambda _n}. \end{array} | (3.25) |
Implying that
\begin{array}{c} \left( {1 - {a_n}} \right){\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}\\ {\; \; \; \; }\le {\left\| {{u_n} - {u^ * }} \right\|^2} - {\left\| {{u_{n + 1}} - {u^ * }} \right\|^2} + \left( {1 - {a_n}} \right){\alpha _n}{c_1}\left\| {{u_n} - {u_{n - 1}}} \right\| + {\Lambda _n}. \end{array} | (3.26)
Observe that
\begin{array}{c} \left( {1 - b} \right)\sum\limits_{n = 1}^\infty {{\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}}} \\ \le {\left\| {{u_0} - {u^ * }} \right\|^2} + \left( {1 - a} \right){c_1}\sum\limits_{n = 1}^\infty {{\alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\|} + \sum\limits_{n = 1}^\infty {{\Lambda _n}} < \infty . \end{array}
Thus
\mathop {\lim }\limits_{n \to \infty } {\delta _n}\left( {1 - \frac{{{\delta _n}}}{4}} \right)\frac{{{{\left( {{{\left\| {{x_n} - {y_n}} \right\|}^2} + {{\left\| {\left( {I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\|}^2}} \right)}^2}}}{{{{\left\| {{x_n} - {y_n} + {z_n}} \right\|}^2}}} = 0. |
Since $\left\{ {{x_n} + {z_n} - {y_n}} \right\}$ is bounded, then $\mathop {\lim }\limits_{n \to \infty } \left\| {{x_n} - {y_n}} \right\| = 0$ and $\mathop {\lim }\limits_{n \to \infty } \left\| {\left({I - J_\lambda ^{{D_{{j_n}}}}} \right)A{x_n}} \right\| = 0$. Thus $\mathop {\lim }\limits_{n \to \infty } \left\| {\left({I - J_r^{{B_{{i_n}}}}} \right){x_n}} \right\| = 0$ and $\mathop {\lim }\limits_{n \to \infty } \left\| {\left({I - J_r^{{D_{{j_n}}}}} \right)A{x_n}} \right\| = 0$. Note that $J_r^{{B_{{i_n}}}}$ and $J_r^{{D_{{j_n}}}}$ are nonexpansive, so by Lemma 2.1, $\left({I - J_r^{{B_{{i_n}}}}} \right)$ and $\left({I - J_r^{{D_{{j_n}}}}} \right)$ are demiclosed at 0. Thus, it follows from ${x_{{n_k}}} \rightharpoonup \bar u$ that $\left({I - J_r^{{B_i}}} \right)\bar u = 0$, and $\left({I - J_r^{{D_j}}} \right)A\bar u = 0$ due to the linearity of $A$. That is, $\bar u \in \cap _{i = 1}^sB_i^{ - 1}\left(0 \right)$ and $A\bar u \in \cap _{j = 1}^tD_j^{ - 1}\left(0 \right)$. So $\bar u \in GSVIP2\left({{B_i}, {D_j}} \right)$. Noting that by Step 1, we may assume that $\mathop {\lim }\limits_{n \to \infty } \left\| {{u_n} - \bar u} \right\| = c < + \infty$. From (3.11) and Lemma 3.1(2), we have
\begin{align} \left\| {{w_n} - \bar u} \right\| & \le \left\| {{w_n} - {v_n}} \right\| + \left\| {{v_n} - \bar u} \right\|\\ &\le {\beta _n} + \left\| {{x_n} - \bar u} \right\|\\ & = \left\| {{u_n} + {\alpha _n}\left( {{u_n} - {u_{n - 1}}} \right) - \bar u} \right\| + {\beta _n}\\ &\le \left\| {{u_n} - \bar u} \right\| + \left| {{\alpha _n}} \right|\left\| {{u_n} - {u_{n - 1}}} \right\| + {\beta _n}. \end{align} |
This means that
\mathop {\lim \sup }\limits_{n \to \infty } \left\| {{w_n} - \bar u} \right\| \le \mathop {\lim \sup }\limits_{n \to \infty } \left( {\left\| {{u_n} - \bar u} \right\| + {\alpha _n}\left\| {{u_n} - {u_{n - 1}}} \right\| + {\beta _n}} \right) = c. |
Since $\mathop {\lim }\limits_{n \to \infty } \left\| {{a_n}\left({{u_n} - \bar u} \right) + \left({1 - {a_n}} \right)\left({{w_n} - \bar u} \right)} \right\| = \mathop {\lim }\limits_{n \to \infty } \left\| {{u_{n + 1}} - \bar u} \right\| = c$, by Lemma 2.5, we have
\mathop {\lim }\limits_{n \to \infty } \left\| {{w_n} - {u_n}} \right\| = 0. | (3.27) |
It follows from Lemma 3.1(2) and $\sum\limits_{n = 1}^\infty {\beta _n^2} < + \infty$ (so that ${\beta _n} \to 0$) that $\mathop {\lim }\limits_{n \to \infty } \left\| {{v_n} - {w_n}} \right\| = 0$; then $\mathop {\lim }\limits_{n \to \infty } \left\| {{v_n} - {u_n}} \right\| = 0$. Since $\bar u$ is a weak cluster point of the sequence $\left\{ {{u_n}} \right\}$, it is easy to see that $\bar u$ is also a weak cluster point of the sequence $\left\{ {{v_n}} \right\}$; thus $\bar u \in EP\left(f \right)$, and then $\bar u \in \Gamma$.
Step 5. Finally, we show that the sequence $\left\{ {{u_n}} \right\}$ converges strongly to $\bar u \in \Gamma$. Indeed, combining (3.27) and the fact that $\bar{u}$ is a weak cluster point of the sequence $\left\{ {{u_n}} \right\}$, we see that $\bar u$ is also a weak cluster point of the sequence $\left\{ {{w_n}} \right\}$. Supposing ${w_{{n_k}}} \rightharpoonup \bar u$, we get
\begin{array}{l} {\left\| {{u_{{n_k} + 1}} - {P_\Gamma }\left( {{u_{{n_k} + 1}}} \right)} \right\|^2} \le {\left\| {{u_{{n_k} + 1}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}\\ {\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; } = {\left\| {{a_{{n_k}}}{u_{{n_k}}} + \left( {1 - {a_{{n_k}}}} \right){u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}\\ {\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; }\le {a_{{n_k}}}{\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \left( {1 - {a_{{n_k}}}} \right){\left\| {{w_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}. \end{array} | (3.28) |
Observe that
\begin{array}{l} {\; \; \; \; }{\left\| {{w_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} = {\left\| {{w_{{n_k}}} - {u_{{n_k}}} + {u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}\\ {\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; } = {\left\| {{w_{{n_k}}} - {u_{{n_k}}}} \right\|^2} - {\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} - 2\langle {{w_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right), {P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle . \end{array} | (3.29) |
By (3.28) and (3.29), we have
\begin{array}{l} {\; \; \; \; } {\left\| {{u_{{n_k} + 1}} - {P_\Gamma }\left( {{u_{{n_k} + 1}}} \right)} \right\|^2}\\ \le {a_{{n_k}}}{\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \left( {1 - {a_{{n_k}}}} \right)\left( {{{\left\| {{w_{{n_k}}} - {u_{{n_k}}}} \right\|}^2} - {{\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|}^2}} \right.\\ {\; \; \; \; }\left. { - 2\langle {{w_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right), {P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle } \right)\\ = \left( {2{a_{{n_k}}} - 1} \right){\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \left( {1 - {a_{{n_k}}}} \right){\left\| {{w_{{n_k}}} - {u_{{n_k}}}} \right\|^2}\\ {\; \; \; \; }- 2\left( {1 - {a_{{n_k}}}} \right)\langle {{w_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right), {P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle \\ = \left( {2{a_{{n_k}}} - 1} \right){\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \left( {1 - {a_{{n_k}}}} \right){\left\| {{w_{{n_k}}} - {u_{{n_k}}}} \right\|^2}\\ {\; \; \; \; }- 2\left( {1 - {a_{{n_k}}}} \right)\langle {{w_{{n_k}}} - \bar u, {P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle \\ {\; \; \; \; }- 2\left( {1 - {a_{{n_k}}}} \right)\langle {\bar u - {P_\Gamma }\left( {{u_{{n_k}}}} \right), {P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle . \end{array} | (3.30) |
Since $\bar u \in \Gamma$, we have $\langle {\bar u - {P_\Gamma }\left({{u_{{n_k}}}} \right), {P_\Gamma }\left({{u_{{n_k}}}} \right) - {u_{{n_k}}}} \rangle \ge 0$. Also, observe that the sequence $\left\{ {{u_{{n_k}}}} \right\}$ is bounded, and then so is $\left\{ {{u_{{n_k}}} - {P_\Gamma }\left({{u_{{n_k}}}} \right)} \right\}$. It follows from $\mathop {\lim }\limits_{k \to \infty } \left\| {{w_{{n_k}}} - {u_{{n_k}}}} \right\| = 0$, $\mathop {\lim }\limits_{k \to \infty } {a_{{n_k}}} = \frac{1}{2}$, and (3.30) that
\mathop {\lim }\limits_{k \to \infty } \left\| {{u_{{n_k} + 1}} - {P_\Gamma }\left( {{u_{{n_k} + 1}}} \right)} \right\| = 0. | (3.31)
Next, we show that \left\{ {{P_\Gamma }\left({{u_{{n_k}}}} \right)} \right\} is a Cauchy sequence. Indeed, for any m > k, we obtain
\begin{array}{l} {\; \; \; \; } {\left\| {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} = {\left\| {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {u_{{n_m}}} + {u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}\\ = 4{\left\| {\left( {\frac{1}{2}\left( {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {u_{{n_m}}}} \right) + \frac{1}{2}\left( {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right)} \right)} \right\|^2}\\ = 2{\left\| {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {u_{{n_m}}}} \right\|^2} + 2{\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}\\ {\; \; \; \; }- 4{\left\| {{u_{{n_m}}} - \frac{1}{2}\left( {{P_\Gamma }\left( {{u_{{n_m}}}} \right) + {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right)} \right\|^2}\\ \le 2{\left\| {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {u_{{n_m}}}} \right\|^2} + 2{\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} - 4{\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_m}}}} \right)} \right\|^2}\\ = 2{\left\| {{P_\Gamma }\left( {{u_{{n_k}}}} \right) - {u_{{n_m}}}} \right\|^2} - 2{\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_m}}}} \right)} \right\|^2}. \end{array} | (3.32) |
Set {u^ * } = {P_\Gamma }\left({{u_{{n_k}}}} \right) in (3.22), we have
\begin{align} {\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2}& \le {\left\| {{u_{{n_m} - 1}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \left( {1 - {a_{{n_m} - 1}}} \right){\alpha _{{n_m} - 1}}{c_1}\left\| {{u_{{n_m} - 1}} - {u_{{n_m} - 2}}} \right\| + {\Lambda _{{n_m} - 1}}\\ & \vdots \\ & \le {\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + \sum\limits_{i = {n_k}}^{{n_m} - 1} {\left( {1 - {a_i}} \right)} {\alpha _i}{c_1}\left\| {{u_i} - {u_{i - 1}}} \right\| + \sum\limits_{i = {n_k}}^{{n_m} - 1} {{\Lambda _i}} . \end{align} | (3.33) |
From (3.32) and (3.33) that
\begin{array}{c} {\left\| {{P_\Gamma }\left( {{u_{{n_m}}}} \right) - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} \le 2{\left\| {{u_{{n_k}}} - {P_\Gamma }\left( {{u_{{n_k}}}} \right)} \right\|^2} + 2\sum\limits_{i = {n_k}}^{{n_m} - 1} {\left( {1 - {a_i}} \right)} {\alpha _i}{c_1}\left\| {{u_i} - {u_{i - 1}}} \right\|\\ {\; \; \; \; \; \; \; \; \; \; \; }+ 2\sum\limits_{i = {n_k}}^{{n_m} - 1} {{\Lambda _i}} - 2{\left\| {{u_{{n_m}}} - {P_\Gamma }\left( {{u_{{n_m}}}} \right)} \right\|^2}. \end{array} |
It follows from (3.31) and the facts that $\mathop {\lim }\limits_{k \to \infty } \sum\limits_{i = {n_k}}^{{n_m} - 1} {{\Lambda _i}} = 0$ and $\mathop {\lim }\limits_{k \to \infty } \sum\limits_{i = {n_k}}^{{n_m} - 1} {\left({1 - {a_i}} \right)} {\alpha _i}{c_1}\left\| {{u_i} - {u_{i - 1}}} \right\| = 0$ that $\left\{ {{P_\Gamma }\left({{u_{{n_k}}}} \right)} \right\}$ is a Cauchy sequence. Hence $\left\{ {{P_\Gamma }\left({{u_{{n_k}}}} \right)} \right\}$ strongly converges to some $u \in \Gamma$. Noting $\mathop {\lim }\limits_{k \to \infty } \left\| {{u_{{n_k} + 1}} - {P_\Gamma }\left({{u_{{n_k} + 1}}} \right)} \right\| = 0$, we know that $\left\{ {{u_{{n_k}}}} \right\}$ also strongly converges to $u \in \Gamma$. Since $\left\{ {\left\| {{u_n} - u} \right\|} \right\}$ is convergent by Step 1, the whole sequence satisfies $\mathop {\lim }\limits_{n \to \infty } {u_n} = u = \bar u$, which completes the proof.
As the consequences of Theorem 3.1 with suitable choices of {B_i}, {D_j}(i \in \left\{ {1, 2, \cdot \cdot \cdot, s} \right\}, j \in \left\{ {1, 2, \cdot \cdot \cdot, t} \right\}) and f , we derive several interesting corollaries as follows.
Corollary 3.1. Suppose Assumptions (A1–A4) hold with $s = 1, t = 1$ in (A2). Then, the sequence $\left\{ {{u_n}} \right\}$ generated by Algorithm 3.3 strongly converges to a solution of Problem (1.8).
Corollary 3.2. Suppose Assumptions (A1–A4) hold. Then the sequence $\left\{ {{u_n}} \right\}$ generated by Algorithm 3.4 strongly converges to a solution of Problem (1.9).
Corollary 3.3. Suppose Assumptions (A1–A4) hold. Then the sequence $\left\{ {{u_n}} \right\}$ generated by Algorithm 3.5 strongly converges to a solution of Problem (1.4).
Corollary 3.4. Suppose Assumptions (A1–A4) hold. Then the sequence $\left\{ {{u_n}} \right\}$ generated by Algorithm 3.6 strongly converges to a solution of Problem (1.5).
Remark 3.1. (ⅰ) Suppose {\alpha _n} = 0 in Algorithm 3.1, then Algorithm 3.1 reduces to self-adaptive viscosity-type Algorithm 3.2 for solving Problem (1.7).
(ⅱ) Suppose s = 1, t = 1 , then Algorithm 3.1 reduces to Algorithm 3.3 for solving Problem (1.8).
(ⅲ) Suppose ${B_i} = {N_{{C_i}}}(i = 1, 2, \cdots, s), {D_j} = {N_{{Q_j}}}(j = 1, 2, \cdots, t)$ in Problem (1.7), then Algorithm 3.1 reduces to Algorithm 3.4 for solving Problem (1.9).
(ⅳ) Suppose f = 0 in Problem (1.7), then Problem (1.7) reduces to Problem (1.4) studied by Ogbuisi et al. in [18] and Algorithm 3.1 reduces to Algorithm 3.5 for solving Problem (1.4). So our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [18].
(ⅴ) Suppose {B_i} = 0, {D_j} = 0 in Problem (1.7), then Problem (1.7) reduces to Problem (1.5) studied by Santos et al. in [27] and Algorithm 3.1 reduces to Algorithm 3.6 for solving Problem (1.5). So our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [27].
Finally, we give two examples to illustrate the validity of our considered common solution Problem (1.7). In both examples, we take $s = 1, t = 1$ in Problem (1.7).
Example 3.1. Let {H_1} = {H_2} = {R^2}, C = \left\{ {u \in {R^2}\left| { - 10{e_1} \le u \le 10{e_1}} \right.} \right\}, {e_1} = \left({1, 1} \right). We define the operators B:{H_1} \to {2^{{H_1}}}, D:{H_2} \to {2^{{H_2}}}, A:{H_1}\to{H_2} by
B\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 5&0\\ 0&2 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right], D\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 3&0\\ 0&6 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right], A\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 1&2\\ 3&4 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right], |
respectively. Define the bifunction $f\left({u, v} \right) = u_1^5\left({{v_1} - {u_1}} \right) + u_2^3\left({{v_2} - {u_2}} \right), \forall u, v \in C$. Let us observe that (A1–A4) hold and $A$ is a bounded linear mapping. In addition, ${u^ * } = \left({0, 0} \right)$ is the unique solution of $SVIP\left({B, D} \right)$. Furthermore, $EP\left(f \right)$ has the unique solution ${u^ * } = \left({0, 0} \right)$: indeed, $f\left({v, {u^ * }} \right) = - v_1^6 - v_2^4 \le 0$ for all $v \in C$, and $f\left({{u^ * }, \bar u} \right) = 0 = f\left({\bar u, {u^ * }} \right) = - \bar u_1^6 - \bar u_2^4$ implies $\bar u = \left({0, 0} \right) \in EP\left(f \right)$. Hence, $\Gamma = SVIP\left({B, D} \right) \cap EP\left(f \right) = \left\{ {\left({0, 0} \right)} \right\}$.
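Assuming the hypothetical alg31 helper from the sketch of Algorithm 3.1 above, the data of Example 3.1 can be run directly. Since $f(u,\cdot)$ is linear in $v$, its exact partial subgradient at $v = u$ is $(u_1^5, u_2^3)$, so the selection below is our own but exact (i.e., $\lambda_n = 0$):

```python
import numpy as np

B = np.diag([5.0, 2.0])
D = np.diag([3.0, 6.0])
A = np.array([[1.0, 2.0], [3.0, 4.0]])
proj_C = lambda u: np.clip(u, -10.0, 10.0)   # C = {-10 e_1 <= u <= 10 e_1}

# f(u, v) = u1^5 (v1 - u1) + u2^3 (v2 - u2) is linear in v, so the
# gradient of f(u, .) at v = u is (u1^5, u2^3).
eta = lambda v: np.array([v[0] ** 5, v[1] ** 3])

u = alg31(np.array([3.0, -2.0]), np.array([2.5, -1.5]), B, D, A, proj_C, eta)
print(u)    # expected to approach the unique common solution (0, 0)
```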
Example 3.2. Let ${H_1} = {R^2}, {H_2} = {R^3}, C = \left\{ {u \in R_ + ^2\left| {{u_1} + {u_2} = 1} \right.} \right\} \subset {H_1}$. We define ${B_1}:{R^2} \to {R^2}$, ${B_2}:{R^3} \to {R^3}$ by
{B_1}\left[ \begin{array}{l} u\\ v \end{array} \right] = \left[ {\begin{array}{*{20}{c}} 2&{ - 4}\\ { - 4}&2 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} u\\ v \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} 1\\ 1 \end{array}} \right], {B_2}\left[ {\begin{array}{*{20}{c}} u\\ v\\ w \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 2&0&0\\ 0&2&0\\ 0&0&2 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} u\\ v\\ w \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} 2\\ 2\\ 2 \end{array}} \right], |
respectively. Let $A = \left[{\begin{array}{*{20}{c}} 2 & { - 4}\\ { - 4} & 2\\ 2 & { - 4} \end{array}} \right]$. We consider the equilibrium problem with the bifunction $f\left({u, v} \right) = 2\left| {{v_1}} \right| - \left| {{u_1}} \right| + 2v_2^2 - u_2^2, \forall u, v \in C$. Suppose (A1–A4) hold; the optimal point of $EP(f)$ is ${u^ * } = \left({\frac{1}{2}, \frac{1}{2}} \right)$, and the partial subdifferential of $f$ is given by
{\partial _2}f\left( {u, u} \right) = \begin{cases} \left( {2, 4{u_2}} \right), & \text{if } u_1 > 0, \\ \left( {\left[ { - 2, 2} \right], 4{u_2}} \right), & \text{if } u_1 = 0, \\ \left( { - 2, 4{u_2}} \right), & \text{if } u_1 < 0. \end{cases}
Furthermore, we aim to find ${u^ * } = \left({u_1^ *, u_2^ * } \right) \in {R^2}$ such that ${B_1}\left({{u^ * }} \right) = \left({0, 0} \right)$ and ${B_2}\left({A{u^ * }} \right) = \left({0, 0, 0} \right)$. It is easy to see that ${u^ * } = \left({\frac{1}{2}, \frac{1}{2}} \right) \in SVIP\left({{B_1}, {B_2}} \right)$. Hence, $\Gamma = SVIP\left({{B_1}, {B_2}} \right) \cap EP\left(f \right) = \left\{ {\left({\frac{1}{2}, \frac{1}{2}} \right)} \right\}$.
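For completeness, a selection from the partial subdifferential displayed above can be coded directly (our own illustration; at $u_1 = 0$ any point of $[-2, 2]$ may be chosen, and we pick 0):

```python
import numpy as np

def eta_ex32(u):
    """A selection from the partial subdifferential of
    f(u, v) = 2|v1| - |u1| + 2 v2^2 - u2^2 at v = u (Example 3.2)."""
    g1 = 2.0 if u[0] > 0 else (-2.0 if u[0] < 0 else 0.0)
    return np.array([g1, 4.0 * u[1]])

print(eta_ex32(np.array([0.5, 0.5])))   # -> [2. 2.], matching (2, 4*u2)
```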
In this paper, a new inertial-type algorithm was introduced to approximate the common solutions of the generalized split variational inclusion problem and the paramonotone equilibrium problem in real Hilbert spaces. The algorithm is self-adaptive, the inertial term speeds up its convergence, and the strong convergence analysis does not require a prior estimate of the norm of the bounded linear operator. Under generalized monotonicity assumptions on the related mappings, we proved the strong convergence of our iterative algorithm. The results presented here improve and generalize many known results in [18,27].
It should be noted that the way of choosing the inertial parameter ${\alpha _n}$ in our Algorithm 3.1 is known as the on-line rule. As part of our future work, following the method in [47], we will consider the strong convergence of our proposed algorithm under conditions on the iterative parameters that do not require the on-line rule assumption.
Yali Zhao: Supervision, Conceptualization, Writing-review & editing; Qixin Dong: Writing-review & editing, Project administration; Xiaoqing Huang: Writing-original draft, Formal analysis. All authors have read and agreed to the published version of the manuscript.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported by Liaoning Provincial Department of Education under project No.LJKMZ20221491.
The authors would like to thank the reviewers for their valuable comments and suggestions, which have helped to improve the quality of this paper.
The authors declare that they have no competing interests.
[1] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algor., 8 (1994), 221–239. https://doi.org/10.1007/BF02142692
[2] A. Anderson, Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm, Ultrason. Imaging, 6 (1984), 81–94. https://doi.org/10.1016/0161-7346(84)90008-7
[3] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. https://doi.org/10.1088/0031-9155/51/10/001
[4] N. Buong, Iterative algorithms for the multiple-sets split feasibility problem, Numer. Algor., 76 (2017), 783–798. https://doi.org/10.1007/s11075-017-0282-4
[5] H. Stark, Image recovery: Theory and applications, New York: Academic Press, 1987.
[6] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Probl., 18 (2002), 441–453. https://doi.org/10.1088/0266-5611/18/2/310
[7] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl., 150 (2011), 275–283. https://doi.org/10.1007/s10957-011-9814-6
[8] H. H. Bauschke, J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev., 38 (1996), 367–426. https://doi.org/10.1137/S0036144593251710
[9] P. L. Combettes, The convex feasibility problem in image recovery, Adv. Imag. Elect. Phys., 95 (1996), 155–270. https://doi.org/10.1016/S1076-5670(08)70157-5
[10] C. Byrne, Y. Censor, A. Gibali, S. Reich, Weak and strong convergence of algorithms for the split common null point problem, J. Nonlinear Convex Anal., 13 (2012), 759–775.
[11] K. R. Kazmi, S. H. Rizvi, An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping, Optim. Lett., 8 (2014), 1113–1124. https://doi.org/10.1007/s11590-013-0629-2
[12] C. S. Chuang, Algorithms with new parameter conditions for split variational inclusion problems in Hilbert spaces with application to split feasibility problem, Optimization, 65 (2016), 859–876. https://doi.org/10.1080/02331934.2015.1072715
[13] K. Sitthithakerngkiet, J. Deepho, P. Kumam, A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems, Appl. Math. Comput., 250 (2015), 986–1001. https://doi.org/10.1016/j.amc.2014.10.130
[14] L. C. Ceng, E. Kobis, X. P. Zhao, On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints, Optimization, 69 (2020), 1961–1986. https://doi.org/10.1080/02331934.2019.1703978
[15] Y. Shehu, F. U. Ogbuisi, An iterative method for solving split monotone variational inclusion and fixed point problems, RACSAM, 110 (2016), 503–518. https://doi.org/10.1007/s13398-015-0245-3
[16] L. C. Ceng, M. J. Shang, Generalized Mann viscosity implicit rules for solving systems of variational inequalities with constraints of variational inclusions and fixed point problems, Mathematics, 7 (2019), 933. https://doi.org/10.3390/math7100933
[17] L. Yang, F. H. Zhao, General split variational inclusion problem in Hilbert spaces, Abstr. Appl. Anal., 2014 (2014), 816035. https://doi.org/10.1155/2014/816035
[18] P. Chuasuk, F. Ogbuisi, Y. Shehu, P. Cholamjiak, New inertial method for generalized split variational inclusion problems, J. Ind. Manag. Optim., 17 (2021), 3357–3371. https://doi.org/10.3934/jimo.2020123
[19] H. Nikaido, K. Isoda, Note on non-cooperative convex games, Pac. J. Math., 5 (1955), 807–815. https://doi.org/10.2140/pjm.1955.5.807
[20] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63 (1994), 123–145.
[21] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[22] A. von Heusinger, C. Kanzow, Relaxation methods for generalized Nash equilibrium problems with inexact line search, J. Optim. Theory Appl., 143 (2009), 159–183. https://doi.org/10.1007/s10957-009-9553-0
![]() |
[23] |
H. Iiduka, I. Yamada, A subgradient-type method for the equilibrium problem problem over the fixed point set and its applications, Optimization, 58 (2009), 251–261. https://doi.org/10.1080/02331930701762829 doi: 10.1080/02331930701762829
![]() |
[24] |
L. D. Muu, T. D. Quoc, Regularization algorithms for solving monotone Ky Fan inequalities with application to a nash-cournot equilibrium model, J. Optim. Theory Appl., 142 (2009), 185–204. https://doi.org/10.1007/s10957-009-9529-0 doi: 10.1007/s10957-009-9529-0
![]() |
[25] |
T. T. V. Nguyen, J. J. Strodiot, V. H. Nguyen, The interior proximal extragradient method for solving equilibrium problems, J. Glob. Optim., 44 (2009), 175–192. https://doi.org/10.1007/s10898-008-9311-0 doi: 10.1007/s10898-008-9311-0
![]() |
[26] |
T. T. Nguyen, J. J. Strodiot, V. H. Nguyen, A bundle method for solving equilibrium problems, Math. Program., 116 (2009), 529–552. https://doi.org/10.1007/s10107-007-0112-x doi: 10.1007/s10107-007-0112-x
![]() |
[27] |
P. Santos, S. Scheimberg, An inexact subgradient algorithm for equilibrium problems, Comput. Appl. Math., 30 (2011), 91–107. http://dx.doi.org/10.1590/S1807-03022011000100005 doi: 10.1590/S1807-03022011000100005
![]() |
[28] |
L. H. Yen, L. D. Muu, N. T. T. Huyen, An algorithm for a class of split feasibility problems: Application to a model in electricity production, Math. Meth. Oper. Res., 84 (2016), 549–565. https://doi.org/10.1007/s00186-016-0553-1 doi: 10.1007/s00186-016-0553-1
![]() |
[29] |
T. O. Alakoya, O. T. Mewomo, Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems, Comp. Appl. Math., 41 (2022), 39. https://doi.org/10.1007/s40314-021-01749-3 doi: 10.1007/s40314-021-01749-3
![]() |
[30] |
E. C. Godwin, C. Izuchukwu, O. T. Mewomo, Image restoration using a modified relaxed inertial method for generalized split feasibility problems, Mathematical Methods in the Applied Sciences, 46 (2022), 5521–5544. https://doi.org/10.1002/mma.8849 doi: 10.1002/mma.8849
![]() |
[31] |
S. H. Khan, T. O. Alakoya, O. T. Mewomo, Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces, Math. Comput. Appl., 25 (2020), 54. https://doi.org/10.3390/mca25030054 doi: 10.3390/mca25030054
![]() |
[32] |
N. Onjai-uea, W. Phuengrattana, On solving split mixed equilibrium problems and fixed point problems of hybrid-type multivalued mappings in Hilbert spaces, J. Inequal. Appl., 2017 (2017), 137. https://doi.org/10.1186/s13660-017-1416-x doi: 10.1186/s13660-017-1416-x
![]() |
[33] |
H. Iiduka, I. Yamada, A subgradient-type method for the equilibrium problem over the fixed point set and its applications, Optimization, 58 (2009), 251–261. https://doi.org/10.1080/02331930701762829 doi: 10.1080/02331930701762829
![]() |
[34] |
H. Iiduka, Fixed point optimization algorithm and its application to network bandwidth allocation, J. Comput. Appl. Math., 236 (2012), 1733–1742. https://doi.org/10.1016/j.cam.2011.10.004 doi: 10.1016/j.cam.2011.10.004
![]() |
[35] | C. Q. Luo, H. Ji, Y. Li, Utility-based muliti-service bandwidth allocation in the 4G heterogeneous wireless networks, In: 2009 IEEE Wireless Communication and Networking Conference, 2009, 1–5. http://doi.org/10.1109/WCNC.2009.4918017 |
[36] |
B. Tan, X. L. Qin, J. C. Yao, Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications, J. Sci. Comput., 87 (2021), 20. https://doi.org/10.1007/s10915-021-01428-9 doi: 10.1007/s10915-021-01428-9
![]() |
[37] |
C. Izuchukwu, S. Reich, Y. Shehu, A. Taiwo, Strong convergence of forward-reflected-backward splitting methods for solving monotone inclusions with applications to image restoration and optimal control, J. Sci. Comput., 94 (2023), 73. https://doi.org/10.1007/s10915-023-02132-6 doi: 10.1007/s10915-023-02132-6
![]() |
[38] |
Y. Q. Zhang, Y. Q. Wang, A new inertial iterative algorithm for split null point and common fixed point problems, J. Nonlinear Funct. Anal., 2023 (2023), 1–19. https://doi.org/10.23952/jnfa.2023.36 doi: 10.23952/jnfa.2023.36
![]() |
[39] |
J. L. Zheng, R. L. Gan, X. X. Ju, X. Q. Ou, A new fixed-time stability of neural network to solve split convex feasibility problems, J. Inequal. Appl., 2023 (2023), 1–21. https://doi.org/10.1186/s13660-023-03046-5 doi: 10.1186/s13660-023-03046-5
![]() |
[40] |
M. Aphane, L. O. Jolaoso, K. O. Aremu, O. K. Oyewole, An inertial-viscosity algorithm for solving split generalized equilibrium problem and a system of demimetric mappings in Hilbert spaces, Rend. Circ. Mat. Palermo Ser., 72 (2023), 1599–1628. https://doi.org/10.1007/s12215-022-00761-8 doi: 10.1007/s12215-022-00761-8
![]() |
[41] |
F. Su, L. L. Liu, X. H. Li, Q. L. Dong, A multi-step inertial asynchronous sequential algorithm for common fixed point problems, J. Nonlinear Var. Anal., 8 (2024), 473–484. https://doi.org/10.23952/jnva.8.2024.3.08 doi: 10.23952/jnva.8.2024.3.08
![]() |
[42] |
M. D. Ngwepe, L. O. Jolaoso, M. Aphane, U. O. Adiele, An inertial shrinking projection self-adaptive algorithm for solving split variational inclusion problems and fixed point problems in Banach spaces, Demonstr. Math., 57 (2024), 20230127. https://doi.org/10.1515/dema-2023-0127 doi: 10.1515/dema-2023-0127
![]() |
[43] | K. Goebel, W. A. Krik, Topics in metric fixed point theory, Cambridge: Cambridge University Press, 1990. https://doi.org/10.1017/CBO9780511526152 |
[44] | W. Takahashi, Nonlinear functional analysis. Fixed point theory and its application, Yokohama: Yokohama Publishers, 2000. |
[45] | B. T. Polyak, Introduction to optimization. Translations series in mathematics and engineering, New York: Cambridge University Press, 1987. |
[46] |
P. N. Anh, L. D. Muu, A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems, Optim. Lett., 8 (2014), 727–738. https://doi.org/10.1007/s11590-013-0612-y doi: 10.1007/s11590-013-0612-y
![]() |
[47] | C. Izuchukwu, Y. Shehu, J. C. Yao, New strong convergence analysis for variational inequalities and fixed point problems, Optimization, 2024, 1–22. https://doi.org/10.1080/02331934.2024.2424446 |