In this paper, we consider the variational inequality problem and the split common fixed point problem. Considering the common fixed points of an infinite family of nonexpansive mappings, instead of just the fixed point of one nonexpansive mapping, we generalize the results of Tian and Jiang. By removing a projection operator, we improve the efficiency of our algorithm. Finally, we propose a very simple modification to the extragradient method, which gives our algorithm strong convergence properties. We also provide some numerical examples to illustrate our main results.
Citation: Wenlong Sun, Gang Lu, Yuanfeng Jin, Zufeng Peng. Strong convergence theorems for split variational inequality problems in Hilbert spaces[J]. AIMS Mathematics, 2023, 8(11): 27291-27308. doi: 10.3934/math.20231396
The variational inequality problem (VIP) was introduced by Stampacchia [1] and provided a very useful tool for researching a large variety of interesting problems arising in physics, economics, finance, elasticity, optimization, network analysis, medical images, water resources, and structural analysis, see for example ([2,3,4,5,6,7,8,9,10,11,12,13,14,15]) and references therein.
Let H be a real Hilbert space with inner product ⟨⋅,⋅⟩ and norm ‖⋅‖, respectively. Let C be a nonempty closed convex subset of H. Let B:C⟶H be an operator.
In this article, our study is related to a classical variational inequality problem (VIP) which aims to find an element x†∈C such that
⟨Bx†,x−x†⟩≥0, ∀x∈C. | (1.1) |
It is well known that x♯∈VI(B,C) if and only if x♯=PC(x♯−ζBx♯), where ζ>0; in other words, the VIP is equivalent to a fixed point problem (see [16]). Supposing that B is η-strongly monotone and L-Lipschitz continuous and that 0<ζ<2η/L2, the following sequence {xn} of Picard iterates:
xn+1=PC(xn−ζBxn), | (1.2) |
converges strongly to a point x†∈VI(B,C), due to the fact that PC(I−ζB) is a contraction on C. However, in general, algorithm (1.2) fails when B is merely monotone and L-Lipschitz continuous (see [17]). In [7], Korpelevich put forward an extragradient method, which provided an important idea for solving monotone variational inequalities:
yn=PC(xn−λfxn),xn+1=PC(xn−λfyn), | (1.3) |
where f is monotone and L-Lipschitz continuous on the finite-dimensional Euclidean space Rn, and λ∈(0,1/L).
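As an illustrative aside (not from the paper), the extragradient iteration (1.3) can be sketched in Python; the operator f(x)=(x2,−x1), the set C (the unit ball), the step size, and the iteration count are all hypothetical choices. For this skew, hence monotone and 1-Lipschitz, operator the unique solution of the VI over the unit ball is the origin:

```python
import numpy as np

def P_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r centered at the origin."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else r * x / nrm

def extragradient(f, x0, lam, n_iter=500):
    """Korpelevich's extragradient method (1.3) with C = the unit ball."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = P_ball(x - lam * f(x))     # predictor step: y_n = P_C(x_n - lam f(x_n))
        x = P_ball(x - lam * f(y))     # corrector step: x_{n+1} = P_C(x_n - lam f(y_n))
    return x

# f is skew (hence monotone) with Lipschitz constant L = 1; take lam in (0, 1/L).
f = lambda x: np.array([x[1], -x[0]])
sol = extragradient(f, np.array([1.0, 1.0]), lam=0.5)
```

The iterates are Fejér monotone with respect to the solution set, and in this two-dimensional instance they contract geometrically toward the origin.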
Another motivation for this article is the split common fixed point problem, which aims to find a point u∈H1 such that
u∈Fix(T) and Au∈Fix(S). | (1.4) |
The split common fixed point problem can be regarded as a generalization of the split feasibility problem. Recall that the split feasibility problem is to find a point satisfying
u∈C and Au∈Q, | (1.5) |
where C and Q are two nonempty closed convex subsets of real Hilbert spaces H1 and H2, respectively, and A:H1→H2 is a bounded linear operator. Inverse problems in various disciplines can be expressed as the split feasibility problem or the split common fixed point problem. Problem (1.4) was first introduced by Censor and Segal [18]. Note that solving (1.4) can be translated into solving the fixed point equation:
u=T(u−τA∗(I−S)Au), τ>0. |
Whereafter, Censor and Segal proposed an algorithm for directed operators. Since then, there has been growing interest in the split common fixed point problem (see [19,20,21,22]).
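The fixed point reformulation above can be illustrated numerically. In the hypothetical sketch below (all data are illustrative choices, not from [18]), the mapping acting on H1 and the one acting on H2 are both metric projections onto unit balls, and A=2I with 0<τ<1/‖A‖2, so the solution set of (1.4) is {u:‖u‖≤1/2}:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r (a nonexpansive map)."""
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

# Hypothetical data: both fixed-point mappings are projections onto unit balls
# (so their fixed point sets are the balls themselves), and A = 2I.
A = 2.0 * np.eye(2)
U1 = proj_ball        # mapping on H1 = R^2
U2 = proj_ball        # mapping on H2 = R^2

tau = 0.2             # 0 < tau < 1/||A||^2 = 1/4
u = np.array([3.0, 4.0])
for _ in range(300):
    Au = A @ u
    u = U1(u - tau * A.T @ (Au - U2(Au)))   # u = U1(u - tau A*(I - U2)Au)
```

After the loop, u lies in the unit ball and Au is (numerically) in the unit ball as well, i.e. u solves this instance of (1.4).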
Censor et al. [23] first proposed split variational inequality problems by combining the variational inequality problem and the split feasibility problem. Very recently, in 2017, Tian and Jiang [24] considered the following split variational inequality problem: finding an element u such that
u∈VI(A,C) and Bu∈Fix(T), | (1.6) |
where T:H2→H2 is nonexpansive, B:H1→H2 is a bounded linear operator with adjoint B∗, and A:C→H1 is a monotone and L-Lipschitz continuous mapping. They presented the following iteration method, combining the extragradient method with the CQ algorithm, for solving (1.6):
Algorithm 1.1. Choose an arbitrary initial value x1∈C. Assume xn has been constructed. Compute
yn=PC(xn−τnA∗(I−T)Axn),zn=PC(yn−ςnFyn),xn+1=PC(yn−ςnFzn). | (1.7) |
They proved that the iterative sequence {xn} defined by Eq (1.7) converges weakly to an element z∈Γ, where Γ is the solution set of problem (1.6). However, Algorithm 1.1 fails, in general, to converge strongly in infinite-dimensional Hilbert spaces. We also note that Algorithm 1.1 involves three metric projections per iteration, which might seriously affect the efficiency of the method.
Motivated and inspired by the above works, in the present paper we consider variational inequality problems and split common fixed point problems, namely finding an element ˆx such that
ˆx∈VI(A,C) and Bˆx∈⋂∞n=1Fix(Tn), | (1.8) |
where {Tn}∞n=1:H2→H2 is an infinite family of nonexpansive mappings, B:H1→H2 is a bounded linear operator with adjoint B∗, and A:H1→H1 is a monotone and L-Lipschitz continuous mapping. In contrast to Tian and Jiang [24], we consider the common fixed points of an infinite family of nonexpansive mappings instead of the fixed points of a single nonexpansive mapping. We also improve the efficiency of the algorithm by removing the projection operator from the first step of each iteration, which might otherwise affect the efficiency of the method to a certain extent. Finally, we present a very simple modification of the extragradient method which gives our algorithm strong convergence; strong convergence theorems are generally more convenient to use.
This paper is organized as follows: In Section 2, we give some definitions and key lemmas which are used in this paper. Section 3 consists of our algorithms and provides the strong convergence theorems. In Section 4, numerical examples are provided for illustration. Finally, this paper is concluded in Section 5.
Let H be a real Hilbert space with inner product ⟨⋅,⋅⟩ and norm ‖⋅‖, respectively. Let C be a nonempty closed convex subset of H. Let T:C⟶C be an operator. We use Fix(T) to denote the set of fixed points of T, that is, Fix(T)={x†|x†=Tx†,x†∈C}. First, we give some definitions and lemmas related to the involved operators.
Definition 2.1. An operator T:C⟶C is said to be nonexpansive if ‖Tu−Tv‖≤‖u−v‖ for all u,v∈C.
Definition 2.2. An operator A:C⟶H is said to be monotone if ⟨Ax−Ay,x−y⟩≥0 for all x,y∈C.
A monotone operator R:H→2H is called maximal monotone if its graph is not properly contained in the graph of any other monotone operator.
Definition 2.3. An operator T:C⟶H is said to be L-Lipschitzian if there exists L>0 such that ‖Tx−Ty‖≤L‖x−y‖ for all x,y∈C.
Usually, the convergence of fixed point algorithms requires some additional smoothness properties of the mapping T such as demi-closedness.
Definition 2.4. An operator T is said to be demiclosed if, for any sequence {un} that converges weakly to u∗ with Tun⟶w, we have Tu∗=w.
Recall that the (nearest point or metric) projection from H onto C, denoted by PC, assigns to each u∈H, the unique point PCu∈C with the property:
‖u−PCu‖=inf{‖u−v‖:v∈C}. |
The metric projection PC of H onto C is characterized by
⟨u−PCu,v−PCu⟩≤0, or equivalently, ‖u−v‖2≥‖u−PCu‖2+‖v−PCu‖2 | (2.1) |
for all u∈H,v∈C. It is well known that the metric projection PC:H→C is firmly nonexpansive, that is,
⟨u−v,PCu−PCv⟩≥‖PCu−PCv‖2, or equivalently, ‖PCu−PCv‖2≤‖u−v‖2−‖(I−PC)u−(I−PC)v‖2 | (2.2) |
for all u,v∈H. More information on the metric projection can be found, for example, in Section 3 of the book by Goebel et al. (see [25]).
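As a quick numerical sanity check (illustrative only; the ball C and the random points are arbitrary choices, not from the paper), the projection onto the closed unit ball satisfies the characterization (2.1) and the firm nonexpansiveness inequality (2.2):

```python
import numpy as np

def P_C(x):
    """Metric projection onto the closed unit ball C."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    u, v = rng.normal(size=3), rng.normal(size=3)
    pu, pv = P_C(u), P_C(v)
    w = P_C(rng.normal(size=3))                  # an arbitrary point of C
    # (2.1): <u - P_C u, w - P_C u> <= 0 for every w in C
    ok &= np.dot(u - pu, w - pu) <= 1e-10
    # (2.2): <u - v, P_C u - P_C v> >= ||P_C u - P_C v||^2
    ok &= np.dot(u - v, pu - pv) >= np.linalg.norm(pu - pv) ** 2 - 1e-10
```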
For all u,v∈H, the following conclusions hold:
‖tu+(1−t)v‖2=t‖u‖2+(1−t)‖v‖2−t(1−t)‖u−v‖2, t∈[0,1], | (2.3) |
‖u+v‖2=‖u‖2+2⟨u,v⟩+‖v‖2 | (2.4) |
and
‖u+v‖2≤‖u‖2+2⟨v,u+v⟩. | (2.5) |
Let {Tn}∞n=1:H→H be an infinite family of nonexpansive mappings and λ1,λ2,... be real numbers such that 0≤λi≤1 for each i∈N. For any n∈N, define a mapping Wn of H into H as follows:
Un,n+1=I,
Un,n=λnTnUn,n+1+(1−λn)I,
Un,n−1=λn−1Tn−1Un,n+(1−λn−1)I,
…
Un,k=λkTkUn,k+1+(1−λk)I,
Un,k−1=λk−1Tk−1Un,k+(1−λk−1)I,
…
Un,2=λ2T2Un,3+(1−λ2)I,
Wn=Un,1=λ1T1Un,2+(1−λ1)I. | (2.6) |
Such a mapping Wn is called the W-mapping generated by T1,T2,...,Tn and λ1,λ2,...,λn. We have the following crucial Lemma concerning Wn:
Lemma 2.1. [26] Let H be a real Hilbert space. Let {Tn}∞n=1:H→H be an infinite family of nonexpansive mappings such that ⋂∞n=1Fix(Tn)≠∅. Let λ1,λ2,... be real numbers such that 0≤λi≤b<1 for each i≥1. Then we have the following:
(1) For any x∈H and k≥1, the limit limn→∞Un,kx exists;
(2) Fix(W)=⋂∞n=1Fix(Tn), where Wx=limn→∞Wnx=limn→∞Un,1x, ∀x∈C;
(3) For any bounded sequence {xn}⊂H, limn→∞Wxn=limn→∞Wnxn.
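The inside-out recursion (2.6) is straightforward to implement. The sketch below is illustrative only: the family Tkx=(1−1/(k+2))x and the weights λk=1/(k+1) are hypothetical choices (each Tk is nonexpansive with Fix(Tk)={0}), and the sketch checks that the resulting Wn is nonexpansive and fixes the origin:

```python
import numpy as np

def make_W(T_list, lam_list):
    """The W-mapping generated by T_1,...,T_n and lambda_1,...,lambda_n via (2.6)."""
    def W(x):
        x = np.asarray(x, dtype=float)
        u = x                                    # U_{n,n+1} x = x
        for T, lam in zip(reversed(T_list), reversed(lam_list)):
            # U_{n,k} x = lam_k T_k(U_{n,k+1} x) + (1 - lam_k) x
            u = lam * T(u) + (1.0 - lam) * x
        return u                                 # W_n x = U_{n,1} x
    return W

# Hypothetical family: T_k x = (1 - 1/(k+2)) x, nonexpansive with Fix(T_k) = {0}.
n = 5
Ts = [lambda x, k=k: (1.0 - 1.0 / (k + 2)) * x for k in range(1, n + 1)]
lams = [1.0 / (k + 1) for k in range(1, n + 1)]  # 0 <= lam_k <= b < 1
W = make_W(Ts, lams)

u, v = np.array([1.0, -2.0]), np.array([0.5, 3.0])
nonexp = np.linalg.norm(W(u) - W(v)) <= np.linalg.norm(u - v) + 1e-12
```

Note that the identity term (1−λk)I in (2.6) is applied to the original argument x, not to the inner value Un,k+1x; this is what the loop body encodes.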
Lemma 2.2. [27] Assume that {αn} is a sequence of nonnegative real numbers such that
αn+1≤(1−γn)αn+δn, n∈N, |
where {γn} is a sequence in (0,1) and {δn} is a sequence such that
(1) ∑∞n=1γn=∞;
(2) lim supn→∞δnγn≤0 or ∑∞n=1|δn|<∞. Then limn→∞αn=0.
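A small numerical illustration of Lemma 2.2 (not part of the paper; the choices γn=1/n, so that ∑γn=∞, and δn=1/n2, so that δn/γn→0, are arbitrary admissible sequences):

```python
# Illustration of Lemma 2.2: alpha_{n+1} <= (1 - gamma_n) alpha_n + delta_n.
alpha = 1.0
for n in range(1, 100001):
    gamma = 1.0 / n            # sum of gamma_n diverges (condition (1))
    delta = 1.0 / n ** 2       # delta_n / gamma_n = 1/n -> 0 (condition (2))
    alpha = (1.0 - gamma) * alpha + delta
# alpha is now close to 0; the decay here behaves roughly like (log n)/n
```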
Lemma 2.3. [28] Let {ϖn} be a sequence of real numbers. Assume there exists a subsequence {ϖnk} of {ϖn} such that ϖnk≤ϖnk+1 for all k≥0. For every n≥N0, define an integer sequence {τ(n)} by:
τ(n)=max{i≤n:ϖi≤ϖi+1}.
Then, τ(n)→∞ as n→∞ and for all n≥N0, we have max{ϖτ(n),ϖn}≤ϖτ(n)+1.
In this section, we introduce our algorithm and prove its strong convergence. Some assumptions on the underlying spaces and involved operators are listed below.
(R1) H1 and H2 are two real Hilbert spaces and C⊂H1 is a nonempty closed convex subset.
(R2) B:H1→H2 is a bounded linear operator with its adjoint B∗.
(R3) A:H1→H1 is a monotone and L-Lipschitz continuous mapping.
(R4) Ω={ˆx|ˆx∈VI(A,C) and Bˆx∈⋂∞n=1Fix(Tn)}, where Ω is the set of solutions of the problem (1.8).
Next, we present the following iterative algorithm to find a point ˆx∈Ω.
Algorithm 3.1. Choose an arbitrary initial value x1∈H1. Assume xn has been constructed. Compute
yn=xn−τnB∗(I−Wn)Bxn,zn=PC(yn−ςnAyn),xn+1=PC((1−αn)(yn−ςnAzn)), | (3.1) |
where {αn} is a sequence in (0,1), ςn is a sequence in (0,1/L), and τn is a sequence in (0,1/‖B‖2).
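To make the scheme concrete, here is a hedged Python sketch of Algorithm 3.1 on a small hypothetical instance (all data are illustrative choices, not from the paper): C is the unit ball of R2, A is a rotation (monotone, L=1), B=2I, and every Wn is the projection onto the unit ball, so Ω={θ} and the minimum-norm solution is the zero vector:

```python
import numpy as np

def P_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else r * x / nrm

# Hypothetical instance: C = unit ball of R^2, A = rotation (monotone, L = 1),
# B = 2I (so ||B||^2 = 4), W_n = P_ball for every n.  Then Omega = {0}.
A_mat = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = 2.0 * np.eye(2)
A = lambda x: A_mat @ x
W = P_ball

x = np.array([1.0, 1.0])
for n in range(1, 1001):
    alpha = 1.0 / (n + 1)          # (C1): alpha_n -> 0 and sum alpha_n = infinity
    sigma, tau = 0.5, 0.2          # (C2): sigma_n < 1/L; (C3): tau_n < 1/||B||^2
    Bx = B @ x
    y = x - tau * B.T @ (Bx - W(Bx))                 # y_n = x_n - tau_n B*(I - W_n)B x_n
    z = P_ball(y - sigma * A(y))                     # z_n = P_C(y_n - sigma_n A y_n)
    x = P_ball((1.0 - alpha) * (y - sigma * A(z)))   # x_{n+1}
```

The iterates approach the minimum-norm solution θ; note that, unlike (1.7), the first step uses no projection.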
Theorem 3.1. If Ω≠∅ and the following conditions are satisfied:
(C1) limn→∞αn=0 and ∑∞n=0αn=∞;
(C2) 0<lim infn→∞ςn≤lim supn→∞ςn<1/L;
(C3) 0<lim infn→∞τn≤lim supn→∞τn<1/‖B‖2.
Then, the iterative sequence {xn} defined by Eq (3.1) strongly converges to the minimum-norm solution ˆx(=PΩθ).
Proof. Set z=PΩθ. We can obtain that
‖yn−z‖2=‖xn−z−τnB∗(I−Wn)Bxn‖2=‖xn−z‖2−2τn⟨xn−z,B∗(I−Wn)Bxn⟩+‖τnB∗(I−Wn)Bxn‖2=‖xn−z‖2−2τn⟨Bxn−Bz,(I−Wn)Bxn⟩+‖τnB∗(I−Wn)Bxn‖2≤‖xn−z‖2−τn‖(I−Wn)Bxn‖2+τ2n‖B‖2⋅‖(I−Wn)Bxn‖2≤‖xn−z‖2−τn(1−τn‖B‖2)‖(I−Wn)Bxn‖2≤‖xn−z‖2. | (3.2) |
It follows from (2.1) that
‖xn+1−z‖2=‖PC((1−αn)(yn−ςnAzn))−z‖2≤‖(1−αn)(yn−ςnAzn)−z‖2−‖(1−αn)(yn−ςnAzn)−xn+1‖2≤‖(1−αn)(yn−ςnAzn−z)+αn(−z)‖2−‖(1−αn)(yn−ςnAzn−xn+1)+αn(−xn+1)‖2≤(1−αn)‖yn−ςnAzn−z‖2+αn‖−z‖2−(1−αn)αn‖yn−ςnAzn‖2−((1−αn)‖yn−ςnAzn−xn+1‖2+αn‖−xn+1‖2−(1−αn)αn‖yn−ςnAzn‖2)=(1−αn)(‖yn−ςnAzn−z‖2−‖yn−ςnAzn−xn+1‖2)+αn(‖z‖2−‖xn+1‖2). | (3.3) |
We also observe that
‖yn−ςnAzn−z‖2−‖yn−ςnAzn−xn+1‖2=‖yn−z‖2−‖yn−xn+1‖2+2ςn⟨Azn,z−xn+1⟩=‖yn−z‖2−‖yn−xn+1‖2+2ςn⟨Azn,z−zn⟩+2ςn⟨Azn,zn−xn+1⟩=‖yn−z‖2−‖yn−xn+1‖2+2ςn⟨Azn−Az,z−zn⟩+2ςn⟨Az,z−zn⟩+2ςn⟨Azn,zn−xn+1⟩≥‖yn−z‖2−‖yn−xn+1‖2+2ςn⟨Azn,zn−xn+1⟩=‖yn−z‖2−‖yn−zn‖2−‖zn−xn+1‖2+2⟨yn−ςnAzn−zn,xn+1−zn⟩. | (3.4) |
On the other hand, we have that
⟨yn−ςnAzn−zn,xn+1−zn⟩=⟨yn−ςnAyn−zn,xn+1−zn⟩+ςn⟨Ayn−Azn,xn+1−zn⟩≤ςn⟨Ayn−Azn,xn+1−zn⟩≤ςn‖Ayn−Azn‖×‖xn+1−zn‖≤ςnL‖yn−zn‖×‖xn+1−zn‖. | (3.5) |
Hence, we can derive that
‖xn+1−z‖2=(1−αn)(‖yn−ςnAzn−z‖2−‖yn−ςnAzn−xn+1‖2)+αn(‖z‖2−‖xn+1‖2),(by(3.4))≤(1−αn)(‖yn−z‖2−‖yn−zn‖2−‖zn−xn+1‖2+2⟨yn−ςnAzn−zn,xn+1−zn⟩)+αn(‖z‖2−‖xn+1‖2),(by(3.5))≤(1−αn)(‖yn−z‖2−‖yn−zn‖2−‖zn−xn+1‖2+2ςnL‖yn−zn‖×‖xn+1−zn‖)+αn(‖z‖2−‖xn+1‖2)≤(1−αn)(‖yn−z‖2−‖yn−zn‖2−‖zn−xn+1‖2+ς2nL2‖yn−zn‖2+‖xn+1−zn‖2)+αn(‖z‖2−‖xn+1‖2)≤(1−αn)(‖yn−z‖2+(ς2nL2−1)‖yn−zn‖2)+αn(‖z‖2−‖xn+1‖2),(by(3.2))≤(1−αn)(‖xn−z‖2+(ς2nL2−1)‖yn−zn‖2)+αn(‖z‖2−‖xn+1‖2). | (3.6) |
Owing to the assumption (C2), it follows from (3.6) that
‖xn+1−z‖2≤(1−αn)‖yn−z‖2+αn(‖z‖2−‖xn+1‖2),(by(3.2))≤(1−αn)‖xn−z‖2+αn(‖z‖2−‖xn+1‖2)≤(1−αn)‖xn−z‖2+αn‖z‖2≤max{‖xn−z‖2,‖z‖2} | (3.7) |
and so
‖xn−z‖2≤max{‖x1−z‖2,‖z‖2}, | (3.8) |
which implies that the sequence {xn} is bounded. In view of (3.2) and (3.7), we obtain that
τn(1−τn‖B‖2)‖(I−Wn)Bxn‖2≤‖xn−z‖2−‖yn−z‖2≤‖xn−z‖2−‖xn+1−z‖2+αn(‖z‖2−‖xn+1‖2−‖yn−z‖2). | (3.9) |
CASE I. Suppose that there exists m>0 such that the sequence {‖xn−z‖} is decreasing when n≥m. Then, limn→∞‖xn−z‖ exists. Consequently, according to the assumptions (C1) and (C3), we deduce that
limn→∞‖(I−Wn)Bxn‖=0. | (3.10) |
In virtue of the boundedness of the sequence {Bxn} and Lemma 2.1, we get that
limn→∞‖WBxn−WnBxn‖=0. | (3.11) |
This together with (3.10) implies that
limn→∞‖(I−W)Bxn‖=0. | (3.12) |
It follows from (3.6) that
(1−αn)(1−ς2nL2)‖yn−zn‖2≤(1−αn)‖xn−z‖2−‖xn+1−z‖2+αn(‖z‖2−‖xn+1‖2)≤‖xn−z‖2−‖xn+1−z‖2+αn(‖z‖2−‖xn+1‖2−‖xn−z‖2). | (3.13) |
Thanks to the boundedness of the sequence {xn}, we derive that
limn→∞‖yn−zn‖=0. | (3.14) |
In view of (3.1), we can also get that
limn→∞‖yn−xn‖=limn→∞‖τnB∗(I−Wn)Bxn‖=0(by(3.10)). | (3.15) |
Combining (3.14) and (3.15), we obtain that
limn→∞‖zn−xn‖=0. | (3.16) |
On the other hand, we get that
‖xn+1−zn‖=‖PC((1−αn)(yn−ςnAzn))−PC(yn−ςnAyn)‖≤‖(1−αn)(yn−ςnAzn)−(yn−ςnAyn)‖≤‖(yn−ςnAzn)−(yn−ςnAyn)‖+αn‖yn−ςnAzn‖≤‖ςnAzn−ςnAyn‖+αn‖yn−ςnAzn‖≤ςn‖Azn−Ayn‖+αn‖yn−ςnAzn‖≤ςnL‖zn−yn‖+αn‖yn−ςnAzn‖. | (3.17) |
Hence, by (3.14), it turns out that
limn→∞‖xn+1−zn‖=0 | (3.18) |
and consequently, according to (3.16), we have that
limn→∞‖xn+1−xn‖=0. | (3.19) |
Next, we can take a subsequence {ni} such that
lim supn→∞(‖z‖2−‖xn+1‖2)=limi→∞(‖z‖2−‖xni+1‖2). | (3.20) |
By the boundedness of the sequence {xni+1}, we may assume that xni+1⇀x†. Then Bxni+1⇀Bx†, and from (3.12) together with the demiclosedness of I−W at zero (W is nonexpansive; see Corollary 4.28 in [29]), we derive that Bx†=WBx†, that is, Bx†∈Fix(W)=⋂∞n=1Fix(Tn).
Now, we show that x†∈VI(A,C). Let
R(v)=Av+NC(v) if v∈C, and R(v)=∅ if v∉C, | (3.21) |
where NC(v) is the normal cone to C at v. According to Reference [30], we can easily derive that R is maximal monotone. Let (v,w)∈G(R). Since w−Av∈NC(v) and xn∈C, we have that
⟨v−xn,w−Av⟩≥0. |
Noting that, due to v∈C, we get
⟨v−xn+1,xn+1−(1−αn)(yn−ςnAzn)⟩≥0.
It follows that
⟨v−xn+1,(xn+1−yn)/ςn+Azn+(αn/ςn)(yn−ςnAzn)⟩≥0.
Thus, we can deduce that
⟨v−xni+1,w⟩≥⟨v−xni+1,Av⟩≥−⟨v−xni+1,(xni+1−yni)/ςni+Azni+(αni/ςni)(yni−ςniAzni)⟩+⟨v−xni+1,Av⟩=⟨v−xni+1,Av−Azni⟩−⟨v−xni+1,(xni+1−yni)/ςni⟩−⟨v−xni+1,(αni/ςni)(yni−ςniAzni)⟩=⟨v−xni+1,Av−Axni+1⟩+⟨v−xni+1,Axni+1−Azni⟩−⟨v−xni+1,(xni+1−yni)/ςni⟩−⟨v−xni+1,(αni/ςni)(yni−ςniAzni)⟩≥⟨v−xni+1,Axni+1−Azni⟩−⟨v−xni+1,(xni+1−yni)/ςni⟩−⟨v−xni+1,(αni/ςni)(yni−ςniAzni)⟩. | (3.22) |
As i→∞, we obtain that
⟨v−x†,w⟩≥0. |
By the maximal monotonicity of R, we derive that x†∈R−10. Hence, x†∈VI(A,C). Therefore, x†∈Ω. Since the norm of the Hilbert space H1 is weakly lower semicontinuous(see Lemma 2.42 in [29]), we have the following inequality:
‖x†‖≤lim infi→∞‖xni+1‖ |
and therefore
−‖x†‖≥lim supi→∞(−‖xni+1‖). |
From (3.7), we observe that
‖xn+1−z‖2≤(1−αn)‖xn−z‖2+αn(‖z‖2−‖xn+1‖2). | (3.23) |
Thanks to z=PΩθ and x†∈Ω, we can deduce that
lim supn→∞(‖z‖2−‖xn+1‖2)=limi→∞(‖z‖2−‖xni+1‖2)=‖z‖2+limi→∞(−‖xni+1‖2)≤‖z‖2−‖x†‖2≤0.
Applying Lemma 2.2 to (3.23), we derive that limn→∞‖xn−z‖=0, which implies that the sequence {xn} converges strongly to z.
CASE II. For any n0, there exists an integer m≥n0 such that ‖xm−z‖≤‖xm+1−z‖. In this case, set ϖn=‖xn−z‖. For n≥n0, define the integer sequence {τ(n)} by
τ(n)=max{l∈N|n0≤l≤n,ϖl≤ϖl+1}. |
It is easy to show that τ(n) is a non-decreasing sequence such that
limn→∞τ(n)=+∞ |
and
ϖτ(n)≤ϖτ(n)+1. |
This together with (3.9) implies that
limn→∞‖(I−Wτ(n))Bxτ(n)‖2=0. | (3.24) |
Employing techniques similar to CASE I, we have
lim supn→∞(‖z‖2−‖xτ(n)+1‖2)≤0 | (3.25) |
and
ϖ2τ(n)+1≤(1−ατ(n))ϖ2τ(n)+ατ(n)(‖z‖2−‖xτ(n)+1‖2). | (3.26) |
Since ϖτ(n)≤ϖτ(n)+1, we have
ϖ2τ(n)≤‖z‖2−‖xτ(n)+1‖2. | (3.27) |
By (3.25), we obtain that
lim supn→∞ϖτ(n)≤0 |
and so
limn→∞ϖτ(n)=0. | (3.28) |
By Eq (3.26), we also obtain
lim supn→∞ϖτ(n)+1≤lim supn→∞ϖτ(n). |
In the light of the last inequality and Eq (3.28), we derive that
limn→∞ϖτ(n)+1=0. |
Applying Lemma 2.3, we obtain
ϖn≤ϖτ(n)+1. |
Therefore, we get that ϖn→0, that is, xn→z. This completes the proof.
Algorithm 3.2. Choose an arbitrary initial value x1∈C. Assume xn has been constructed. Compute
yn=xn−τnB∗(I−T)Bxn,zn=PC(yn−ςnAyn),xn+1=PC((1−αn)(yn−ςnAzn)), | (3.29) |
where {αn} is a sequence in (0,1), ςn is a sequence in (0,1/L), and τn is a sequence in (0,1/‖B‖2).
Theorem 3.2. If ˆΩ≠∅ and the following conditions are satisfied:
(C1) limn→∞αn=0 and ∑∞n=0αn=∞;
(C2) 0<lim infn→∞ςn≤lim supn→∞ςn<1/L;
(C3) 0<lim infn→∞τn≤lim supn→∞τn<1/‖B‖2.
Then, the iterative sequence {xn} defined by Eq (3.29) strongly converges to the minimum-norm solution ˆx(=PˆΩθ), where
ˆΩ={ˆx|ˆx∈VI(A,C) and Bˆx∈Fix(T)}≠∅. |
Algorithm 3.3. Choose an arbitrary initial value x1∈C. Assume xn has been constructed. Compute
zn=PC(xn−ςnAxn),xn+1=PC((1−αn)(xn−ςnAzn)), | (3.30) |
where {αn} is a sequence in (0,1) and ςn is a sequence in (0,1/L).
Theorem 3.3. If ˆΩ≠∅ and the following conditions are satisfied:
(C1) limn→∞αn=0 and ∑∞n=0αn=∞;
(C2) 0<lim infn→∞ςn≤lim supn→∞ςn<1/L.
Then, the iterative sequence {xn} defined by Eq (3.30) strongly converges to the minimum-norm solution ˆx(=PˆΩθ), where ˆΩ={ˆx|ˆx∈VI(A,C)}≠∅.
In this section, we present some numerical examples to illustrate our main results. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz (2.40 GHz) and 8.00 GB of RAM. In all examples, the y-axis shows the value of ‖xn+1−xn‖ while the x-axis indicates the number of iterations.
Example 4.1. Let H1=H2=Rn. The feasible set is defined as:
C:={x∈Rn:‖x‖≤1}. |
Let A:Rn→Rn be the linear operator defined by:
Ax:=Gx
for all x∈Rn, where G=(gij)1≤i,j≤n is the matrix in Rn×n whose entries are given by:
gij={−1, if j=n+1−i and j>i,1, if j=n+1−i and j<i,0, otherwise. | (4.1) |
It is obvious that A is ‖G‖-Lipschitz continuous. A direct calculation shows that ⟨Ax,x⟩=⟨Gx,x⟩=0 for all x∈Rn (G is skew-symmetric), so A is monotone. Let B be a randomly generated matrix in Rn×n.
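The structure of G is easy to verify numerically. This Python sketch (illustrative; the dimension n=6 and the random test vector are arbitrary choices) constructs G from (4.1) and checks that it is skew-symmetric, hence ⟨Gx,x⟩=0, and that ‖G‖=1 for this instance:

```python
import numpy as np

def make_G(n):
    """The matrix G = (g_ij) of (4.1): +-1 on the anti-diagonal, 0 elsewhere."""
    G = np.zeros((n, n))
    for i in range(1, n + 1):
        j = n + 1 - i
        if j > i:
            G[i - 1, j - 1] = -1.0
        elif j < i:
            G[i - 1, j - 1] = 1.0
    return G

G = make_G(6)
x = np.random.default_rng(1).normal(size=6)
skew = np.allclose(G.T, -G)            # G is skew-symmetric,
mono = abs(x @ G @ x) < 1e-12          # hence <Gx, x> = 0 and A is monotone
```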
Since Algorithm 3.1 and the algorithm of Tian and Jiang [24] address different problems, in order to compare the two algorithms we make a very small modification to the one in [24] so that it can also solve problem (1.8). The modified algorithm can be written as follows:
Algorithm 4.1. Choose an arbitrary initial value x1∈C. Assume xn has been constructed. Compute
yn=PC(xn−τnB∗(I−Wn)Bxn),zn=PC(yn−ςnAyn),xn+1=PC((1−αn)(yn−ςnAzn)), | (4.2) |
According to the proof of Theorem 3.1, we can easily verify that this modified algorithm also works for solving (1.8). The control parameters in both algorithms are ςn=1/(2‖G‖), τn=1/(2‖B‖2), α1=1/2, αn=1/n (for all n≥2), λn=1/(n+1) and x1=(1,⋯,1)T, and the infinite family of nonexpansive mappings {Tk}∞k=1:Rn→Rn is defined by:
Tkx:=Mkx, |
for all x∈Rn, where {Mk} is a sequence of diagonal matrices in Rn×n:
Mk=diag(1−1/(k+2), …, 1−1/(k+2), 1−1/(k+3)). | (4.3) |
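The family {Mk} can be checked directly. In this sketch (illustrative; the dimension n=4 and the range of k are arbitrary choices), every Mk has diagonal entries strictly between 0 and 1, so each Tkx=Mkx is nonexpansive and its only fixed point is the origin, giving ⋂∞k=1Fix(Tk)={0}:

```python
import numpy as np

def make_M(k, n):
    """Diagonal matrix M_k of (4.3): 1 - 1/(k+2) on the diagonal, 1 - 1/(k+3) in the last slot."""
    d = np.full(n, 1.0 - 1.0 / (k + 2))
    d[-1] = 1.0 - 1.0 / (k + 3)
    return np.diag(d)

# Spectral norm of each M_k is strictly below 1, so T_k x = M_k x is
# nonexpansive (in fact a contraction) with Fix(T_k) = {0}.
nonexp = all(np.linalg.norm(make_M(k, 4), 2) < 1.0 for k in range(1, 8))
```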
The numerical results of Example 4.1 are reported in Table 1 and Figures 1–4, using the stopping criterion ‖xn+1−xn‖≤10^{-10}.
Table 1. Numerical results of Example 4.1.
n | No. of Iter. (Alg. 3.1) | No. of Iter. (Alg. 4.1) | Time (Alg. 3.1) | Time (Alg. 4.1)
2 | 120 | 154 | 0.288s | 1.133s
10 | 156 | 201 | 0.691s | 2.516s
50 | 157 | 202 | 4.641s | 13.853s
100 | 157 | 203 | 12.333s | 53.145s
Example 4.2. Let H1=H2=L2([0,1]) with the inner product:
⟨x,y⟩=∫_0^1 x(t)y(t)dt
and the induced norm:
‖x‖:=(∫_0^1 x2(t)dt)^{1/2}.
The feasible set is defined as:
C:={x∈L2([0,1]):‖x‖≤1}.
The mapping A:L2([0,1])→L2([0,1]) is defined by:
Ax(t):=(1+t)max{0,x(t)}=(1+t)(x(t)+|x(t)|)/2, x∈L2([0,1]).
It is easy to see that
⟨Ax−Ay,x−y⟩=∫_0^1(Ax(t)−Ay(t))(x(t)−y(t))dt=∫_0^1(1+t)((x(t)−y(t))+(|x(t)|−|y(t)|))/2·(x(t)−y(t))dt=∫_0^1(1/2)(1+t)((x(t)−y(t))2+(|x(t)|−|y(t)|)(x(t)−y(t)))dt≥0 | (4.4) |
and
‖Ax−Ay‖2=∫_0^1(Ax(t)−Ay(t))2dt=∫_0^1(1+t)2((x(t)−y(t))+(|x(t)|−|y(t)|))2/4 dt≤∫_0^1(1+t)2(x(t)−y(t))2dt≤4‖x−y‖2. | (4.5) |
Therefore, the operator A is monotone and 2-Lipschitz continuous. Let Wn=I (the identity mapping). The control parameters for Algorithm 4.1 and Algorithm 3.1 are ςn=1/4, α1=1/2, αn=1/n (for all n≥2), λn=1/(n+1) and x1=8t2. It can easily be seen that {xn} strongly converges to the zero vector θ(∈L2([0,1])). The numerical results of Example 4.2 are reported in Table 2 and Figure 5, using the stopping criterion ‖xn+1−xn‖≤ε=0.01.
Table 2. Numerical results of Example 4.2.
ε | No. of Iter. (Alg. 3.1) | No. of Iter. (Alg. 4.1) | Time (Alg. 3.1) | Time (Alg. 4.1)
0.01 | 8 | 13 | 0.678s | 79.280s
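As an illustrative cross-check of (4.4) and (4.5) (not part of the paper; the grid size and random samples are arbitrary choices), one can discretize A on a uniform grid and test monotonicity and the 2-Lipschitz bound numerically:

```python
import numpy as np

# Uniform-grid discretization of A x(t) = (1+t) max{0, x(t)} on [0, 1].
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
A_op = lambda x: (1.0 + t) * np.maximum(0.0, x)
ip = lambda u, v: float(np.sum(u * v) * dt)       # discretized L^2 inner product

rng = np.random.default_rng(2)
mono, lips = True, True
for _ in range(50):
    x, y = rng.normal(size=t.size), rng.normal(size=t.size)
    d, Ad = x - y, A_op(x) - A_op(y)
    mono &= ip(Ad, d) >= -1e-12                   # monotonicity, as in (4.4)
    lips &= ip(Ad, Ad) <= 4.0 * ip(d, d) + 1e-12  # 2-Lipschitz bound, as in (4.5)
```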
Remark 4.1. The numerical results of Examples 4.1 and 4.2 show that Algorithm 3.1 outperforms Algorithm 4.1 both in CPU time and in the number of iterations. Algorithm 3.1 is more effective in both finite and infinite dimensional spaces, and especially in settings involving expensive projection computations; see Tables 1 and 2 and Figures 1–5. In Example 4.1, we observe that the number of iterations stabilizes while the CPU time increases as n grows.
In the present paper, we consider variational inequality problems and split common fixed point problems. We construct an iterative algorithm for solving (1.8), which can be regarded as a modification and generalization of Algorithm 1.1 with fewer metric projection operators. Under some mild restrictions, we establish the strong convergence of the presented algorithm. We also give some numerical examples to illustrate our main results. Notably, in our article, A is assumed to be a monotone and L-Lipschitz continuous mapping. A natural question arises: how can this assumption be weakened?
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by National Natural Science Foundation of China under Grant (No. 62103289), National Natural Science Foundation of China (No. 11761074), Project of Jilin Science and Technology Development for Leading Talent of Science and Technology Innovation in Middle and Young and Team Project (No. 20200301053RQ), and Natural Science Foundation of Jilin Province (No. 2020122336JC).
The authors declare no conflict of interest.
[1] | G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris, 258 (1964), 4413–4416. |
[2] | B. F. Svaiter, A class of Fejér convergent algorithms, approximate resolvents and the hybrid proximal-extragradient method, J. Optim. Theory Appl., 162 (2014), 133–153. https://doi.org/10.1007/s10957-013-0449-7 |
[3] | H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150 (2011), 360–378. https://doi.org/10.1007/s10957-011-9837-z |
[4] | L. J. Zhu, Y. H. Yao, Algorithms for approximating solutions of split variational inclusion and fixed point problems, Mathematics, 11 (2023), 641. https://doi.org/10.3390/math11030641 |
[5] | H. Zegeye, N. Shahzad, Y. Yao, Minimum-norm solution of variational inequality and fixed point problem in Banach spaces, Optimization, 64 (2015), 453–471. https://doi.org/10.1080/02331934.2013.764522 |
[6] | W. L. Sun, G. Lu, Y. F. Jin, C. Park, A unified framework for solving generalized variational inequalities, J. Math. Inequal., 16 (2022), 189–210. http://doi.org/10.7153/jmi-2022-16-15 |
[7] | G. M. Korpelevich, The extragradient method for finding saddle points and for other problems, Matecon, 12 (1976), 747–756. |
[8] | H. Rehman, P. Kumam, Y. J. Cho, P. Yordsorn, Weak convergence of explicit extragradient algorithms for solving equilibrium problems, J. Inequal. Appl., 2019 (2019), 282. |
[9] | H. Rehman, P. Kumam, Y. J. Cho, Y. I. Suleiman, W. Kumam, Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Method. Softw., 36 (2021), 82–113. https://doi.org/10.1080/10556788.2020.1734805 |
[10] | H. Rehman, P. Kumam, A. B. Abubakar, Y. J. Cho, The extragradient algorithm with inertial effects extended to equilibrium problems, Comput. Appl. Math., 39 (2020), 100. https://doi.org/10.1007/s40314-020-1093-0 |
[11] | H. Rehman, P. Kumam, M. Shutaywi, N. A. Alreshidi, W. Kumam, Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models, Energies, 13 (2020), 3292. https://doi.org/10.3390/en13123292 |
[12] | H. Rehman, P. Kumam, Q. L. Dong, Y. Peng, W. Deebani, A new Popov's subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space, Optimization, 70 (2021), 2675–2710. https://doi.org/10.1080/02331934.2020.1797026 |
[13] | H. Iiduka, W. Takahashi, M. Toyoda, Approximation of solutions of variational inequalities for monotone mappings, Panamer. Math. J., 14 (2004), 49–61. |
[14] | Y. Censor, A. Gibali, S. Reich, S. Sabach, Common solutions to variational inequalities, Set-Valued Var. Anal., 20 (2012), 229–247. https://doi.org/10.1007/s11228-011-0192-x |
[15] | Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 61 (2012), 1119–1132. https://doi.org/10.1080/02331934.2010.539689 |
[16] | W. Takahashi, Introduction to nonlinear and convex analysis, Yokohama: Yokohama Publishers, 2009. |
[17] | H. Zhou, Y. Zhou, G. Feng, Iterative methods for solving a class of monotone variational inequality problems with applications, J. Inequal. Appl., 2015 (2015), 68. https://doi.org/10.1186/s13660-015-0590-y |
[18] | Y. Censor, A. Segal, The split common fixed point problem for directed operators, J. Convex Anal., 16 (2009), 587–600. |
[19] | Q. H. Ansari, A. Rehan, C. F. Wen, Implicit and explicit algorithms for split common fixed point problems, J. Nonlinear Convex Anal., 17 (2016), 1381–1397. |
[20] | O. A. Boikanyo, A strongly convergent algorithm for the split common fixed point problem, Appl. Math. Comput., 265 (2015), 844–853. https://doi.org/10.1016/j.amc.2015.05.130 |
[21] | Q. L. Dong, L. Liu, Y. Yao, Self-adaptive projection and contraction methods with alternated inertial terms for solving the split feasibility problem, J. Nonlinear Convex Anal., 23 (2022), 591–605. |
[22] | W. L. Sun, G. Lu, Y. F. Jin, C. Park, Self-adaptive algorithms for the split problem of the quasi-pseudocontractive operators in Hilbert spaces, AIMS Mathematics, 7 (2022), 8715–8732. http://doi.org/10.3934/math.2022487 |
[23] | Y. Censor, A. Gibali, S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms, 59 (2012), 301–323. http://doi.org/10.1007/s11075-011-9490-5 |
[24] | M. Tian, B. N. Jiang, Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space, J. Inequal. Appl., 2017 (2017), 123. http://doi.org/10.1186/s13660-017-1397-9 |
[25] | K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, Marcel Dekker, 1984. |
[26] | K. Shimoji, W. Takahashi, Strong convergence to common fixed points of infinite nonexpansive mappings and applications, Taiwanese J. Math., 5 (2001), 387–404. http://doi.org/10.11650/twjm/1500407345 |
[27] | H. K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 |
[28] | P. E. Maingé, Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 325 (2007), 469–479. https://doi.org/10.1016/j.jmaa.2005.12.066 |
[29] | H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011. https://doi.org/10.1007/978-1-4419-9467-7 |
[30] | L. J. Zhang, J. M. Chen, Z. B. Hou, Viscosity approximation methods for nonexpansive mappings and generalized variational inequalities, Acta Math. Sin., 53 (2010), 691–698. |