1. School of Mathematics and Information Science, Xianyang Normal University, Xianyang 712000, Shaanxi, China
2. School of Science, University of Phayao, Phayao 56000, Thailand
3. Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Thanyaburi, Pathumthani 12110, Thailand
Received: 28 December 2020
Accepted: 25 February 2021
Published: 01 March 2021
Citation: Jun Yang, Prasit Cholamjiak, Pongsakorn Sunthrayuth. Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces[J]. AIMS Mathematics, 2021, 6(5): 4873-4900. doi: 10.3934/math.2021286
Abstract
In this work, we introduce two modified Tseng's splitting algorithms with a new non-monotone adaptive step size for solving monotone inclusion problem in the framework of Banach spaces. Under some mild assumptions, we establish the weak and strong convergence results of the proposed algorithms. Moreover, we also apply our results to variational inequality problem, convex minimization problem and signal recovery, and provide several numerical experiments including comparisons with other related algorithms.
1.
Introduction
Let E be a real Banach space with its dual space E∗. In this paper, we study the so-called monotone inclusion problem:
find z∈E such that 0∈(A+B)z,
(1.1)
where A:E→E∗ is a single-valued mapping and B:E→2E∗ is a multi-valued mapping. The set of solutions of the problem (1.1) is denoted by (A+B)−10:={x∈E:0∈(A+B)x}. This problem has drawn much attention since it lies at the core of many mathematical problems, such as variational inequalities, split feasibility problems and minimization problems, with applications in machine learning, statistical regression, image processing and signal recovery (see [17,33,44]).
A classical method for solving the problem (1.1) in a Hilbert space H is the forward-backward splitting algorithm (FBSA) [15,29], which generates an iterative sequence {xn} by the following scheme:
{x1∈H,xn+1=JBλ(I−λA)xn,∀n≥1,
(1.2)
where JBλ:=(I+λB)−1 is the resolvent operator of an operator B. Here, I denotes the identity operator on H. It was proved that the sequence generated by (1.2) converges weakly to an element in (A+B)−10 under the assumption of the α-cocoercivity of the operator A, that is,
⟨Ax−Ay,x−y⟩≥α‖Ax−Ay‖2,∀x,y∈H
and λ is chosen in (0,2α). In fact, FBSA includes, as special cases, the proximal point algorithm (when A=0) [11,20,34] and the gradient method [18].
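To fix ideas, the iteration (1.2) can be sketched numerically in ℝⁿ (a Hilbert space, so no duality mapping is needed). The data below are hypothetical, not from the paper: we take A=∇(½‖x−b‖²)=x−b, which is 1-cocoercive, and B=∂(λreg‖·‖₁), whose resolvent is the soft-thresholding operator.

```python
import numpy as np

def soft_threshold(v, tau):
    # resolvent (I + tau*∂||.||_1)^{-1}, i.e. the proximal map of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fbsa(b, lam_reg, step, n_iter=100):
    """FBSA (1.2) for 0 ∈ (A + B)x with A x = x - b (1-cocoercive)
    and B = ∂(lam_reg * ||.||_1); step must lie in (0, 2α) = (0, 2)."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = soft_threshold(x - step * (x - b), step * lam_reg)
    return x

b = np.array([3.0, -0.2, 1.0])
# minimizer of ½||x-b||² + 0.5||x||₁ is soft_threshold(b, 0.5) = (2.5, 0, 0.5)
print(fbsa(b, lam_reg=0.5, step=1.0))
```

With step=1 the iteration reaches the minimizer in one step here; for general cocoercive A, only weak convergence within (0,2α) is guaranteed.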
In order to obtain a strong convergence result, Takahashi et al. [41] introduced the following algorithm:
{x1,u∈H,xn+1=αnu+(1−αn)JBλn(xn−λnAxn),∀n≥1,
(1.3)
where A is an α-cocoercive mapping on H. It was shown that if {λn}⊂(0,∞) and {αn}⊂(0,1) satisfy the following assumptions:
0<a≤λn≤b<2α,  ∑∞n=1|λn+1−λn|<∞,
limn→∞αn=0,  ∑∞n=1αn=∞  and  ∑∞n=1|αn+1−αn|<∞,
then the sequence {xn} defined by (1.3) converges strongly to an element in (A+B)−10.
In 2016, Cholamjiak [12] introduced the following FBSA in a uniformly convex and q-uniformly smooth Banach space E:
{x1,u∈E,xn+1=αnu+βnxn+γnJBλn(xn−λnAxn),∀n≥1,
(1.4)
where JBλn:=(I+λnB)−1 is the resolvent operator of an m-accretive operator B and A is an α-cocoercive mapping. He proved that the sequence generated by (1.4) converges strongly to a solution of the problem (1.1) under the following assumptions:
{αn},{βn},{γn}⊂(0,1) with αn+βn+γn=1,
limn→∞αn=0,  ∑∞n=1αn=∞  and  lim infn→∞γn>0,
0<lim infn→∞λn≤lim supn→∞λn<(qα/κq)^{1/(q−1)},
where κq is the q-uniform smoothness coefficient of E.
In recent years, the FBSA for solving the monotone inclusion problem (1.1), when A is α-cocoercive, was studied and modified by many authors in various settings (see, e.g., [1,9,10,13,26,27,32,37,38,46]). It is important to remark that the α-cocoercivity of the operator A is a strong assumption. To relax this assumption, Tseng [45] introduced the following so-called Tseng's splitting method:
{x1∈H,yn=JBλn(xn−λnAxn),xn+1=yn−λn(Ayn−Axn),∀n≥1,
(1.5)
where A is monotone and L-Lipschitz continuous with L>0. It was proved that the sequence {xn} generated by (1.5) converges weakly to an element in (A+B)−10 provided the step size λn is chosen in (0,1/L). It is worth noting that Tseng's splitting method requires knowledge of the Lipschitz constant of the mapping. Unfortunately, Lipschitz constants are often unknown or difficult to approximate.
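The role of the correction step in (1.5) can be seen on a small example where cocoercivity genuinely fails. The sketch below is an illustrative choice, not from the paper: B=0 (so the resolvent JBλn is the identity) and the rotation operator Ax=(x2,−x1), which is monotone and 1-Lipschitz but satisfies ⟨x,Ax⟩=0, hence is not α-cocoercive for any α>0. The plain forward step x−λAx increases the norm, while Tseng's iteration with λ∈(0,1/L) converges to the unique zero.

```python
import numpy as np

# Rotation operator: monotone, 1-Lipschitz, but <x, Ax> = 0, hence not cocoercive.
A = lambda x: np.array([x[1], -x[0]])

def tseng(x, lam=0.5, n_iter=100):
    """Tseng's splitting (1.5) with B = 0 (resolvent = identity)."""
    for _ in range(n_iter):
        y = x - lam * A(x)            # forward-backward step
        x = y - lam * (A(y) - A(x))   # Tseng's correction step
    return x

x = tseng(np.array([1.0, 1.0]))
print(np.linalg.norm(x))  # decays toward 0: the iterates approach the unique zero of A
```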
Very recently, Shehu [37] extended Tseng's result to Banach spaces. He proposed the following iterative process for approximating a solution of the problem (1.1) in a 2-uniformly convex Banach space E which is also uniformly smooth:
{x1∈E, yn=JBλnJ−1(Jxn−λnAxn), xn+1=J−1(Jyn−λn(Ayn−Axn)), ∀n≥1,
(1.6)
where A:E→E∗ is monotone and L-Lipschitz continuous, JBλn:=(J+λnB)−1J is the resolvent of B and J is the duality mapping from E into E∗. He obtained a weak convergence theorem for the problem (1.1) provided the step size λn is chosen in (0,1/(√(2μκ)L)), where μ is the 2-uniform convexity constant of E and κ is the 2-uniform smoothness constant of E∗. At the same time, he also proposed a variant of (1.6) with a linesearch for solving the problem (1.1). Note that any algorithm with a linesearch requires an inner loop with some stopping criterion over the iterations.
In this paper, motivated by Shehu [37], we propose two modifications of Tseng's splitting method with non-monotone adaptive step sizes for solving the problem (1.1) in the framework of Banach spaces. The step sizes of our methods require neither prior knowledge of the Lipschitz constant of the operator nor any linesearch procedure. The remainder of this paper is organized as follows: We recall some definitions and lemmas in Section 2. Our methods are presented and analyzed in Section 3. Theoretical applications to the variational inequality problem and the convex minimization problem are considered in Section 4 and finally, in Section 5, we provide some numerical experiments to illustrate the behaviour of our methods.
2.
Preliminaries
Let R and N be the set of real numbers and the set of positive integers, respectively. Let E be a real Banach space with its dual space E∗. We denote by ⟨x,f⟩ the value of a functional f∈E∗ at x∈E, that is, ⟨x,f⟩=f(x). For a sequence {xn} in E, the strong convergence and the weak convergence of {xn} to x∈E are denoted by xn→x and xn⇀x, respectively. Let SE={x∈E:‖x‖=1}. The space E is said to be smooth if the limit
limt→0‖x+ty‖−‖x‖t
(2.1)
exists for all x,y∈SE. The space E is said to be uniformly smooth if the limit (2.1) converges uniformly in x,y∈SE. It is said to be strictly convex if ‖(x+y)/2‖<1 whenever x,y∈SE and x≠y. The space E is said to be uniformly convex if and only if δE(ϵ)>0 for all ϵ∈(0,2], where δE is the modulus of convexity of E defined by
δE(ϵ)=inf{1−‖x+y‖2:x,y∈SE,‖x−y‖≥ϵ}
for all ϵ∈[0,2]. Let p≥2. The space E is said to be p-uniformly convex if there is a c>0 such that δE(ϵ)≥cϵp for all ϵ∈(0,2]. Let 1<q≤2. The space E is said to be q-uniformly smooth if there exists a κ>0 such that ρE(t)≤κtq for all t>0, where ρE is the modulus of smoothness of E defined by
ρE(t)=sup{‖x+ty‖+‖x−ty‖2−1:x,y∈SE}
for all t≥0. Let 1<q≤2<p<∞ with 1/p+1/q=1. It is observed that every p-uniformly convex (q-uniformly smooth) space is a uniformly convex (uniformly smooth) space. It is known that E is p-uniformly convex (q-uniformly smooth) if and only if its dual E∗ is q-uniformly smooth (p-uniformly convex) (see [2]). If E is uniformly convex then E is reflexive and strictly convex, and if E is uniformly smooth then E is reflexive and smooth (see [14]). Moreover, we know that for every p>1, Lp and ℓp are min{p,2}-uniformly smooth and max{p,2}-uniformly convex, while Hilbert space is 2-uniformly smooth and 2-uniformly convex (see [4,23,47] for more details).
Definition 2.1. The normalized duality mapping J:E→2E∗ is defined by
Jx={f∈E∗:⟨x,f⟩=‖x‖2=‖f‖2},∀x∈E,
where ⟨⋅,⋅⟩ denotes the duality pairing between E and E∗.
If E is a Hilbert space, then J=I is the identity mapping on E. It is known that E is smooth if and only if J is single-valued from E into E∗ and if E is a reflexive, smooth and strictly convex, then J−1 is single-valued, one-to-one, surjective and it is the duality mapping from E∗ into E. Moreover, if E is uniformly smooth, then J is norm-to-norm uniformly continuous on bounded subsets of E (see [2,14] for more details). A duality mapping J from a smooth Banach space E into E∗ is said to be weakly sequentially continuous if for any sequence {xn}⊂E such that xn⇀x implies that Jxn⇀∗Jx.
Lemma 2.2. [39] Let E be a smooth Banach space and J be the duality mapping on E. Then ⟨x−y,Jx−Jy⟩≥0 for all x,y∈E. Further, if E is strictly convex and ⟨x−y,Jx−Jy⟩=0, then x=y.
Definition 2.3. A mapping A:E→E∗ is said to be:
∙α-cocoercive if there exists a constant α>0 such that ⟨x−y,Ax−Ay⟩≥α‖Ax−Ay‖2 for all x,y∈E;
∙monotone if ⟨x−y,Ax−Ay⟩≥0 for all x,y∈E;
∙L-Lipschitz continuous if there exists a constant L>0 such that ‖Ax−Ay‖≤L‖x−y‖ for all x,y∈E;
∙hemicontinuous if for each x,y∈E, the mapping f:[0,1]→E∗ defined by f(t)=A(tx+(1−t)y) is continuous with respect to the weak∗ topology of E∗.
Remark 2.4. It is easy to see that if A is cocoercive, then A is monotone and Lipschitz continuous but the converse is not true in general.
The next lemma can be found in [49] (see also [47]).
Lemma 2.5. (i) Let E be a 2-uniformly smooth Banach space. Then there exists a constant κ>0 such that
‖x−y‖2≤‖x‖2−2⟨y,Jx⟩+κ‖y‖2,∀x,y∈E.
(ii) Let E be a 2-uniformly convex Banach space. Then there exists a constant c>0 such that
‖x−y‖2≥‖x‖2−2⟨y,Jx⟩+c‖y‖2,∀x,y∈E.
Remark 2.6. It is well-known that κ=c=1 whenever E is a Hilbert space. Hence these inequalities reduce to the following well-known polarization identity:
‖x−y‖2=‖x‖2−2⟨x,y⟩+‖y‖2.
Moreover, we refer to [49] for the exact values of constants κ and c.
Next, we recall the following Lyapunov function which was introduced in [3]:
Definition 2.7. Let E be a smooth Banach space. The Lyapunov functional ϕ:E×E→R is defined by
ϕ(x,y)=‖x‖2−2⟨x,Jy⟩+‖y‖2,∀x,y∈E.
(2.2)
If E is a Hilbert space, then ϕ(x,y)=‖x−y‖2 for all x,y∈E. In addition, the Lyapunov function ϕ has the following properties:
Lemma 2.8. [6] Let E be a 2-uniformly convex Banach space. Then there exists a constant c>0 such that
c‖x−y‖2≤ϕ(x,y),
where c is a constant in Lemma 2.5 (ii).
We make use of the following functional V:E×E∗→R studied in [3]:
V(x,x∗)=‖x‖2−2⟨x,x∗⟩+‖x∗‖2,∀x∈E,x∗∈E∗.
(2.6)
Obviously, V(x,x∗)=ϕ(x,J−1x∗) for all x∈E and x∗∈E∗.
Lemma 2.9. [3] Let E be a reflexive, strictly convex and smooth Banach space. Then the following statement holds:
V(x,x∗)+2⟨J−1x∗−x,y∗⟩≤V(x,x∗+y∗),∀x∈E,x∗,y∗∈E∗.
Let E be a reflexive, strictly convex and smooth Banach space. Let C be a closed and convex subset of E. Then for any x∈E, there exists a unique element z∈C such that
ϕ(z,x)=miny∈Cϕ(y,x).
Such a mapping ΠC:E→C defined by z=ΠC(x) is called the generalized projection of E onto C. If E is a Hilbert space, then ΠC is coincident with the metric projection denoted by PC.
Lemma 2.10. [3] Let E be a reflexive, strictly convex and smooth Banach space and C be a closed and convex subset of E. Let x∈E and z∈C. Then the following statements hold:
(i)z=ΠC(x) if and only if ⟨y−z,Jx−Jz⟩≤0, ∀y∈C.
(ii)ϕ(y,ΠC(x))+ϕ(ΠC(x),x)≤ϕ(y,x), ∀y∈C.
Lemma 2.11. [25] Let C be a closed and convex subset of a smooth and uniformly convex Banach space E. Let {xn} be a sequence in E such that ϕ(p,xn+1)≤ϕ(p,xn) for all p∈C and n≥1. Then the sequence {ΠC(xn)} converges strongly to some element x∗∈C.
Let B:E→2E∗ be a multi-valued mapping. The effective domain of B is denoted by D(B)={x∈E:Bx≠∅} and the range of B is also denoted by R(B)=⋃{Bx:x∈D(B)}. The set of zeros of B is denoted by B−10={x∈D(B):0∈Bx}. A multi-valued mapping B is said to be monotone if
⟨x−y,u−v⟩≥0,∀x,y∈D(B),u∈Bxandv∈By.
A monotone operator B on E is said to be maximal if its graph G(B)={(x,y)∈E×E∗:x∈D(B),y∈Bx} is not properly contained in the graph of any other monotone operator on E. In other words, the maximality of B is equivalent to R(J+λB)=E∗ for all λ>0 (see [5, Theorem 1.2]). It is known that if B is maximal monotone, then B−10 is closed and convex (see [39]).
For a maximal monotone operator B, we define the resolvent of B by JBλ(x)=(J+λB)−1Jx for x∈E and λ>0. It is also known that B−10=F(JBλ).
Lemma 2.12. [5] Let E be a reflexive Banach space. Let A:E→E∗ be a monotone, hemicontinuous and bounded operator and B:E→2E∗ be a maximal monotone operator. Then A+B is maximal monotone.
Lemma 2.13. ([48]) Assume that {an} is a sequence of nonnegative real numbers such that
an+1≤(1−γn)an+γnδn,∀n≥1,
where {γn} is a sequence in (0,1) and {δn} is a sequence of real numbers such that
(i)∑∞n=1γn=∞;
(ii)lim supn→∞δn≤0 or∑∞n=1|γnδn|<∞.
Then limn→∞an=0.
Lemma 2.14. ([30]) Let {Γn} be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence {Γni} of {Γn} which satisfies Γni<Γni+1 for all i∈N. Define the sequence {σ(n)} of integers as follows:
σ(n)=max{k≤n:Γk<Γk+1},
for all n≥n0 (for some n0 large enough). Then {σ(n)}n≥n0 is a non-decreasing sequence such that limn→∞σ(n)=∞, and it holds that
Γσ(n)≤Γσ(n)+1  and  Γn≤Γσ(n)+1.
Lemma 2.15. ([42]) Assume that {λn} and {θn} are two sequences of nonnegative real numbers such that
λn+1≤λn+θn,∀n≥1.
If ∑∞n=1θn<∞, then limn→∞λn exists.
3.
Main results
In this section, we introduce two modified Tseng's splitting algorithms for solving the monotone inclusion problem in Banach spaces. In order to prove the convergence results of these algorithms, we need to make the following assumptions:
Assumption 3.1. (A1) The Banach space E is real, 2-uniformly convex and uniformly smooth.
(A2) The mapping A:E→E∗ is monotone and L-Lipschitz continuous, and B:E→2E∗ is maximal monotone.
(A3) The solution set of the problem (1.1) is nonempty, that is, (A+B)−10≠∅.
Algorithm 1: Tseng type splitting algorithm for monotone inclusion problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let x1∈E be arbitrary. Set n=1.
Step 1. Compute yn=JBλnJ−1(Jxn−λnAxn). (3.1) If xn=yn, then stop and xn is a solution of the problem (1.1). Otherwise, go to Step 2.
Step 2. Compute xn+1=J−1(Jyn−λn(Ayn−Axn)), (3.2) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λn+1=λn+θn otherwise. (3.3)
Set n:=n+1 and go to Step 1.
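In ℝⁿ the duality mapping J is the identity and c=κ=1, so μ∈(0,1); under these simplifications Algorithm 1 can be sketched as below. The LASSO-type data D, b and the ℓ₁ weight 0.1 are hypothetical illustrations, and θn≡0 is taken for simplicity (so the step sequence is non-increasing).

```python
import numpy as np

def soft(v, tau):
    # resolvent of B = ∂(tau * ||.||_1), i.e. soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def tseng_adaptive(A, prox, x, lam=1.0, mu=0.9, n_iter=2000):
    """Algorithm 1 specialised to R^n (J = I, c = kappa = 1, theta_n = 0)."""
    y = x
    for _ in range(n_iter):
        Ax = A(x)
        y = prox(x - lam * Ax, lam)        # step (3.1)
        Ay = A(y)
        x_next = y - lam * (Ay - Ax)       # step (3.2)
        den = np.linalg.norm(Ax - Ay)
        if den > 0:                        # step-size rule (3.3) with theta_n = 0
            lam = min(mu * np.linalg.norm(x - y) / den, lam)
        x = x_next
    return x, y

# hypothetical data: A = grad(0.5*||Dx - b||^2), B = ∂(0.1*||.||_1)
D = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
A = lambda x: D.T @ (D @ x - b)
prox = lambda v, lam: soft(v, 0.1 * lam)
x, y = tseng_adaptive(A, prox, np.zeros(2))
print(np.linalg.norm(x - y))  # the residual ||x_n - y_n|| is driven toward 0
```

Note that the update keeps λn bounded below by min{μ/L, λ1}, exactly as Lemma 3.2 (ii) asserts, without ever computing L.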
Lemma 3.2. Assume that Assumption 3.1 holds. Let {xn}, {yn} and {λn} be the sequences generated by Algorithm 1. Then the following statements hold:
(i) If xn=yn for all n∈N, then xn∈(A+B)−10.
(ii) limn→∞λn=λ∈[min{μ/L,λ1},λ1+θ], where θ=∑∞n=1θn. Moreover,
‖Axn−Ayn‖≤(μ/λn+1)‖xn−yn‖, ∀n≥1.
Proof.(i) If xn=yn, then xn=JBλnJ−1(Jxn−λnAxn). It follows that xn=(J+λnB)−1J∘J−1(J−λnA)xn, that is, Jxn−λnAxn∈Jxn+λnBxn, which implies that 0∈(A+B)xn. Hence xn∈(A+B)−10.
(ii) In the case Axn−Ayn≠0, using the Lipschitz continuity of A, we have
μ‖xn−yn‖/‖Axn−Ayn‖ ≥ μ‖xn−yn‖/(L‖xn−yn‖) = μ/L.
From (3.3) and mathematical induction, the sequence {λn} is bounded above by λ1+θ and below by min{μ/L,λ1}. From Lemma 2.15, limn→∞λn exists; we denote λ=limn→∞λn. It is obvious that λ∈[min{μ/L,λ1},λ1+θ]. By the definition of λn, we have
From the definition of yn, we note that Jxn−λnAxn∈Jyn+λnByn. Since B is maximal monotone, there exists vn∈Byn such that Jxn−λnAxn=Jyn+λnvn, and hence
vn=1λn(Jxn−Jyn−λnAxn).
(3.9)
Note that 0∈(A+B)z and Ayn+vn∈(A+B)yn. Since A+B is maximal monotone by Lemma 2.12, it is in particular monotone, and hence
⟨yn−z,Ayn+vn⟩≥0.
(3.10)
Substituting (3.9) into (3.10), we have
1λn⟨yn−z,Jxn−Jyn−λnAxn+λnAyn⟩≥0.
Hence
⟨yn−z,Jxn−Jyn−λn(Axn−Ayn)⟩≥0.
(3.11)
Combining (3.8) and (3.11), the lemma is proved.
Theorem 3.4. Assume that Assumption 3.1 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 1 converges weakly to an element in (A+B)−10.
Proof. Since limn→∞λn exists and μ∈(0,√(c/κ)), it follows that limn→∞(1−(κμ²/c)(λn/λn+1)²)=1−κμ²/c>0. Thus there exists n0∈N such that
1−(κμ²/c)(λn/λn+1)²>0, ∀n≥n0.
(3.12)
Combining (3.5) and (3.12), we have
ϕ(z,xn+1)≤ϕ(z,xn),∀n≥n0.
This shows that limn→∞ϕ(z,xn) exists and hence {ϕ(z,xn)} is bounded. Applying Lemma 2.8, {xn} is also bounded. From (3.5), we have
(1−(κμ²/c)(λn/λn+1)²)ϕ(yn,xn)≤ϕ(z,xn)−ϕ(z,xn+1).
(3.13)
Thus we have
limn→∞ϕ(yn,xn)=0.
Applying Lemma 2.8, we also have
limn→∞‖xn−yn‖=0.
(3.14)
Since J is norm-to-norm uniformly continuous on bounded subsets of E, we have
limn→∞‖Jxn−Jyn‖=0.
(3.15)
Using the fact that A is Lipschitz continuous, we have
limn→∞‖Axn−Ayn‖=0.
(3.16)
By the boundedness of {xn}, there exists a subsequence {xnk} of {xn} such that xnk⇀x∗∈E. From (3.14), we have ynk⇀x∗. We will show that x∗∈(A+B)−10. Let (v,w)∈G(A+B), we have w−Av∈Bv. From the definition of ynk, we note that
Since limk→∞λnk=λ>0 and ynk⇀x∗, it follows from (3.15) and (3.16) that
⟨v−x∗,w⟩≥0.
Since A+B is maximal monotone, we get 0∈(A+B)x∗, that is, x∗∈(A+B)−10. Note that (A+B)−10 is closed and convex. Put un=Π(A+B)−10(xn). It follows from Lemma 2.11 that there exists x∗∈(A+B)−10 such that un→x∗. Finally, we show that xn⇀x∗. Let {xnk} be a subsequence of {xn} such that xnk⇀ˆx∈(A+B)−10. Then we have
⟨ˆx−unk,Jxnk−Junk⟩≤0
for all k∈N. Since un→x∗, xnk⇀ˆx and J is weakly sequentially continuous, we have
⟨ˆx−x∗,Jˆx−Jx∗⟩≤0.
By the strict monotonicity of J, we obtain x∗=ˆx. In summary, we have shown that every subsequence of {xn} has a further subsequence which converges weakly to x∗. We conclude that xn⇀x∗=limn→∞Π(A+B)−10(xn). This completes the proof.
Algorithm 2: Halpern-Tseng type splitting algorithm for monotone inclusion problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let u,x1∈E be arbitrary. Set n=1.
Step 1. Compute yn=JBλnJ−1(Jxn−λnAxn). (3.17) If xn=yn, then stop and xn is a solution of the problem (1.1). Otherwise, go to Step 2.
Step 2. Compute zn=J−1(Jyn−λn(Ayn−Axn)). (3.18)
Step 3. Compute xn+1=J−1(αnJu+(1−αn)Jzn), (3.19) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λn+1=λn+θn otherwise. (3.20)
Set n:=n+1 and go to Step 1.
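The effect of the Halpern anchoring step (3.19) can be sketched in ℝ² (where J=I). The choices below are illustrative, not from the paper: B=0, the monotone non-cocoercive rotation Ax=(x2,−x1), a fixed step λ=0.5<1/L for brevity (the adaptive rule (3.20) is independent of the anchoring), and αn=1/(n+2).

```python
import numpy as np

A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz, not cocoercive

def halpern_tseng(u, x, lam=0.5, n_iter=2000):
    """Sketch of Algorithm 2 in R^2 with B = 0, fixed step, alpha_n = 1/(n+2)."""
    for n in range(n_iter):
        y = x - lam * A(x)                 # (3.17)
        z = y - lam * (A(y) - A(x))        # (3.18)
        alpha = 1.0 / (n + 2)
        x = alpha * u + (1 - alpha) * z    # (3.19): anchoring toward u
    return x

x = halpern_tseng(u=np.array([1.0, 0.0]), x=np.array([5.0, -3.0]))
print(np.linalg.norm(x))  # small: strong convergence to the unique zero of A
```

The anchor term αnJu is what upgrades the weak convergence of Algorithm 1 to strong convergence, at the price of the conditions on {αn} in Theorem 3.5.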
Theorem 3.5. Assume that Assumption 3.1 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 2 converges strongly to x∗∈(A+B)−10.
Proof. We will show that {xn} is bounded. Let z∈(A+B)−10. Using the same arguments as in the proof of Lemma 3.3, we can show that
ϕ(z,zn)≤ϕ(z,xn)−(1−(κμ²/c)(λn/λn+1)²)ϕ(yn,xn).
(3.21)
Since limn→∞λn exists and μ∈(0,√(c/κ)), it follows that limn→∞(1−(κμ²/c)(λn/λn+1)²)=1−κμ²/c>0. Thus there exists n0∈N such that
Now, we will divide the rest of the proof into two cases.
Case 1. Suppose that there exists N∈N such that ϕ(x∗,xn+1)≤ϕ(x∗,xn) for all n≥N. Hence limn→∞ϕ(x∗,xn) exists. By our assumptions, we have from (3.24) that
limn→∞ϕ(yn,xn)=0
and hence
limn→∞‖xn−yn‖=0.
(3.25)
Since J is norm-to-norm uniformly continuous on bounded subsets of E, we have
limn→∞‖Jxn−Jyn‖=0.
(3.26)
Using the fact that A is Lipschitz continuous, we have
Combining this with (3.29) and (3.30), we can conclude by Lemma 2.13 that ϕ(x∗,xn)→0. Therefore xn→x∗.
Case 2. Suppose that there exists a subsequence {Γni} of {Γn} such that Γni<Γni+1 for all i∈N. In this case, we define σ:N→N by
σ(n)=max{k≤n:Γk<Γk+1}
for all n≥n0 (for some n0 large enough). From Lemma 2.14, we have σ(n) is non-decreasing such that limn→∞σ(n)=∞ and the following inequalities hold for all n≥n0:
Γσ(n)<Γσ(n)+1  and  Γn≤Γσ(n)+1.
(3.31)
Put Γn=ϕ(x∗,xn) for all n∈N. As proved in Case 1, we obtain
Hence lim supn→∞ϕ(x∗,xn)=0 and so limn→∞ϕ(x∗,xn)=0. Therefore xn→x∗. This completes the proof.
4.
Theoretical applications
4.1. The case of variational inequality problem
Let C be a nonempty, closed and convex subset of E. Let A:C→E∗ be a mapping. The variational inequality problem is to find x∗∈C such that
⟨y−x∗,Ax∗⟩≥0,∀y∈C.
(4.1)
The set of solutions of the problem (4.1) is denoted by VI(C,A). In particular, if A is a continuous and monotone mapping, then VI(C,A) is closed and convex (see [7,24]). Recall that the indicator function of C is given by
iC(x)={0,ifx∈C,∞,ifx∉C.
It is known that iC is a proper, convex and lower semicontinuous function and its subdifferential ∂iC is maximal monotone (see [35]). From [2], we know that
∂iC(v)=NC(v)={u∈E∗:⟨y−v,u⟩≤0,∀y∈C},
where NC is the normal cone for C at a point v. Thus we can define the resolvent of ∂iC for λ>0 by
J∂iCλ(x)=(J+λ∂iC)−1Jx,∀x∈E.
As shown in [40], for any x∈E and z∈C, z=J∂iCλ(x)⟺z=ΠC(x), where ΠC is the generalized projection from E onto C.
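In a Hilbert space (where J=I) the identity z=J∂iCλ(x)⟺z=ΠC(x) says that the resolvent of ∂iC is the metric projection onto C, for every λ>0. A minimal sketch for the hypothetical box C=[−1,1]ⁿ, whose projection is a componentwise clip:

```python
import numpy as np

def resolvent_indicator_box(x, lo=-1.0, hi=1.0, lam=1.0):
    """Resolvent (I + lam*∂i_C)^{-1} for the box C = [lo, hi]^n in R^n.
    lam is accepted only to mirror J_lambda^{∂i_C}; the result does not depend on it."""
    return np.clip(x, lo, hi)

x = np.array([2.0, -3.0, 0.25])
print(resolvent_indicator_box(x, lam=1.0))    # components clipped to [-1, 1]
print(resolvent_indicator_box(x, lam=100.0))  # identical: independent of lam
```

This is why Algorithms 3 and 4 below can simply replace the resolvent JBλn of Algorithms 1 and 2 by the generalized projection ΠC.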
Lemma 4.1. [36] Let C be a nonempty, closed convex subset of a Banach space E. Let A:C→E∗ be a monotone and hemicontinuous operator and T:E→2E∗ be an operator defined as follows:
Tv={Av+NC(v)ifv∈C,∅ifv∉C.
Then T is maximal monotone and T−10=VI(C,A).
If we set B=∂iC, then we obtain the following results regarding the problem (4.1).
Assumption 4.2.(A1) The feasible set C is a nonempty, closed and convex subset of a real 2-uniformly convex and uniformly smooth Banach space E.
(A2) The mapping A:E→E∗ is monotone and L-Lipschitz continuous.
(A3) The solution set of the problem (4.1) is nonempty, that is, VI(C,A)≠∅.
Algorithm 3: Tseng type splitting algorithm for variational inequality problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let x1∈C be arbitrary. Set n=1.
Step 1. Compute yn=ΠCJ−1(Jxn−λnAxn). (4.2)
Step 2. Compute xn+1=J−1(Jyn−λn(Ayn−Axn)), (4.3) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λn+1=λn+θn otherwise. (4.4)
Set n:=n+1 and go to Step 1.
Theorem 4.3. Assume that Assumption 4.2 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 3 converges weakly to an element in VI(C,A).
Algorithm 4: Halpern-Tseng type splitting algorithm for variational inequality problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let u,x1∈C be arbitrary. Set n=1.
Step 1. Compute yn=ΠCJ−1(Jxn−λnAxn). (4.5)
Step 2. Compute zn=J−1(Jyn−λn(Ayn−Axn)). (4.6)
Step 3. Compute xn+1=J−1(αnJu+(1−αn)Jzn), (4.7) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λn+1=λn+θn otherwise. (4.8)
Set n:=n+1 and go to Step 1.
Theorem 4.4. Assume that Assumption 4.2 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 4 converges strongly to x∗∈VI(C,A).
4.2. The case of convex minimization problem
Let f:E→R be a convex function and g:E→R be a convex, lower semicontinuous and non-smooth function. We consider the following convex minimization problem:
minx∈Ef(x)+g(x).
(4.9)
By Fermat's rule, we know that the above problem is equivalent to the problem of finding x∈E such that
0∈∇f(x)+∂g(x),
(4.10)
where ∇f is the gradient of f and ∂g is the subdifferential of g. In this situation, we assume that f is convex and differentiable and that its gradient ∇f is L-Lipschitz continuous. Then ∇f is cocoercive with constant 1/L (see [31, Theorem 3.13]); in particular, ∇f is monotone and Lipschitz continuous. Moreover, ∂g is maximal monotone (see [35, Theorem A]). From this point of view, we set A=∇f and B=∂g and obtain the following results regarding the problem (4.9).
Assumption 4.5. (A1) The space E is a real 2-uniformly convex and uniformly smooth Banach space.
(A2) The function f:E→R is convex and differentiable with L-Lipschitz continuous gradient ∇f, and g:E→R is convex and lower semicontinuous such that f+g attains a minimizer.
Algorithm 5: Tseng type splitting algorithm for convex minimization problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let x1∈E be arbitrary. Set n=1.
Step 1. Compute yn=J∂gλnJ−1(Jxn−λn∇f(xn)). (4.11)
Step 2. Compute xn+1=J−1(Jyn−λn(∇f(yn)−∇f(xn))), (4.12) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖∇f(yn)−∇f(xn)‖, λn+θn} if ∇f(yn)−∇f(xn)≠0, and λn+1=λn+θn otherwise. (4.13)
Set n:=n+1 and go to Step 1.
Theorem 4.6. Assume that Assumption 4.5 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 5 converges weakly to a minimizer of f+g.
Algorithm 6: Halpern-Tseng type splitting algorithm for convex minimization problem
Step 0. Given λ1>0 and μ∈(0,√(c/κ)). Choose a nonnegative real sequence {θn} such that ∑∞n=1θn<∞. Let u,x1∈E be arbitrary. Set n=1.
Step 1. Compute yn=J∂gλnJ−1(Jxn−λn∇f(xn)). (4.14)
Step 2. Compute zn=J−1(Jyn−λn(∇f(yn)−∇f(xn))). (4.15)
Step 3. Compute xn+1=J−1(αnJu+(1−αn)Jzn), (4.16) where the step sizes are adaptively updated by λn+1=min{μ‖xn−yn‖/‖∇f(yn)−∇f(xn)‖, λn+θn} if ∇f(yn)−∇f(xn)≠0, and λn+1=λn+θn otherwise. (4.17)
Set n:=n+1 and go to Step 1.
Theorem 4.7. Assume that Assumption 4.5 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 6 converges strongly to a minimizer of f+g.
5.
Numerical experiments
In this section, we provide some numerical experiments to illustrate the behaviour of our methods and compare them with some existing methods.
Example 5.1. We consider the HpHard problem which is taken from [22]. Let A:Rm→Rm be an operator defined by Ax=Mx+q with q∈Rm and
M=NNT+S+D,
where N is an m×m matrix, S is an m×m skew-symmetric matrix and D is an m×m positive definite diagonal matrix. The feasible set is C=Rm+. It is clear that A is monotone and Lipschitz continuous with L=‖M‖. In these experiments, we compare our Algorithm 3 and Algorithm 4 with the extragradient method (EGM) proposed in [28] and the subgradient extragradient method (SEGM) proposed in [8]. The parameters are chosen as follows:
∙Algorithm 3: λ1=0.4/‖M‖ and μ=0.9;
∙Algorithm 4: λ1=0.4/‖M‖, μ=0.9, αn=1/(10000(n+2)) and u=x1;
∙EGM and SEGM: λ=0.4/‖M‖.
All entries of N and S are generated randomly in (−5,5), those of D in (0,0.3), and those of q uniformly in (−500,0). For every m, we generated two random samples with different choices of M and q. We perform the numerical experiments with three different cases of m (m=100,500,1000). We take the starting point x1=(1,1,1,…,1)T∈Rm and use the stopping criterion ‖xn−yn‖≤ε=10−6. The numerical results are reported in Table 1.
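The HpHard data above can be generated as in the following sketch (the random seed and m=100 are illustrative); monotonicity of A is immediate since ⟨Ax−Ay,x−y⟩=(x−y)ᵀM(x−y)=(x−y)ᵀ(NNᵀ+D)(x−y)≥0, the skew-symmetric part S contributing nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
N = rng.uniform(-5, 5, (m, m))
K = rng.uniform(-5, 5, (m, m))
S = K - K.T                              # skew-symmetric (one common construction)
Dg = np.diag(rng.uniform(0.0, 0.3, m))   # positive definite diagonal part
M = N @ N.T + S + Dg
q = rng.uniform(-500, 0, m)
A = lambda x: M @ x + q                  # monotone, Lipschitz with L = ||M||

# sanity check of monotonicity on a random pair
x, y = rng.standard_normal(m), rng.standard_normal(m)
print((A(x) - A(y)) @ (x - y) >= 0)      # True
```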
Example 5.2. We consider the problem (4.1) in L2([0,2π]) with the inner product ⟨x,y⟩=∫2π0x(t)y(t)dt and the norm ‖x‖=(∫2π0x2(t)dt)1/2 for all x,y∈L2([0,2π]). Let A:L2([0,2π])→L2([0,2π]) be an operator defined by
(Ax)(t)=12max{0,x(t)}
for all x∈L2([0,2π]) and t∈[0,2π]. It can easily be verified that A is monotone and Lipschitz continuous with L=1 (see [50,51]). The feasible set is C={x∈L2([0,2π]):∫2π0(t2+1)x(t)dt≤1}. Observe that 0∈VI(C,A), so VI(C,A)≠∅. In this numerical experiment, the parameters αn, λ1 and μ are the same as in Example 5.1. We perform numerical experiments with three different cases of starting point x1 and use the stopping criterion ‖xn−yn‖≤ε=10−3. The numerical results are reported in Table 2.
where x=(w1,w2,w3)T∈R3. Let f(x)=2‖x‖22+(−1,2,5)x+1 and g(x)=‖x‖1. Thus we have ∇f(x)=4x+(−1,2,5)T. It is easy to check that f is a convex and differentiable function and its gradient ∇f is Lipschitz continuous with L=4. Moreover, g is a convex and lower semicontinuous function but not differentiable on R3. From [21], we know that
for λ>0. In this experiment, we compare our Algorithm 5 and Algorithm 6 with Algorithm (1.4) of Cholamjiak [12]. The parameters are chosen as follows:
∙Algorithm 5: λ1=0.1 and μ=0.9;
∙Algorithm 6: λ1=0.1, μ=0.9, αn=1/(10000(n+1)) and u=x1;
∙ Algorithm (1.4): all parameters αn, λn, δn, rn and en are the same as Example 4.2 in [12], and u=x1.
We perform the numerical experiments with four different cases of starting point x1 and use stopping criterion ‖xn+1−xn‖≤ε=10−12. The numerical results are reported in Table 3.
Example 5.4. In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:
y=Dx+ε,
(5.1)
where x∈RN is a vector with m nonzero components to be recovered, y∈RM is the observed data contaminated by noise ε, and D:RN→RM (M<N) is a bounded linear operator. Solving (5.1) can be cast as the LASSO problem:
minx∈RN12‖Dx−y‖22+λ‖x‖1,
(5.2)
where λ>0. Following [19], we define Ax:=∇(12‖Dx−y‖22)=DT(Dx−y) and Bx:=∂(λ‖x‖1). It is known that A is ‖D‖2-Lipschitz continuous and monotone. Moreover, B is maximal monotone (see [35]).
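The splitting used here can be set up and sanity-checked as in the sketch below; the sizes M=32, N=64, the support of the sparse vector and the noiseless observation are hypothetical and serve only to verify the Lipschitz bound ‖Ax−Ay‖≤‖D‖²‖x−y‖.

```python
import numpy as np

rng = np.random.default_rng(1)
M_, N_ = 32, 64
D = rng.standard_normal((M_, N_))
x_true = np.zeros(N_)
x_true[[3, 17, 40]] = [1.5, -2.0, 0.7]   # m = 3 nonzero components
y = D @ x_true                            # noiseless observation for simplicity

A = lambda x: D.T @ (D @ x - y)           # A = grad(0.5*||Dx - y||^2)
L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant ||D||^2

u, v = rng.standard_normal(N_), rng.standard_normal(N_)
lhs = np.linalg.norm(A(u) - A(v))
rhs = L * np.linalg.norm(u - v)
print(lhs <= rhs + 1e-9)                  # True: the bound L = ||D||^2 holds
```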
In this experiment, the sparse vector x∈RN is generated from the uniform distribution on [−2,2] with m nonzero elements. The matrix D∈RM×N is generated from a normal distribution with mean zero and unit variance. The observation y is corrupted by white Gaussian noise with signal-to-noise ratio (SNR) = 40. The restoration accuracy is measured by the mean squared error (MSE), and the iterations are stopped when
En=(1/N)‖xn−x‖22<10−4,
(5.3)
where xn is an estimated signal of x.
We compare our proposed Algorithm 1 and Algorithm 2 with the forward-backward splitting algorithm (FBSA) (1.2), Tseng's splitting algorithm (TSA) (1.5) and the contraction forward-backward splitting algorithm (CFBSA) proposed in [43, Algorithm 3.1]. The parameters are chosen as follows:
∙ Algorithm 1: θn=0, λ1=0.0013 and μ=0.5;
∙ Algorithm 2: θn=0, λ1=0.0013, μ=0.5, αn=1/(200n+5) and u=(1,1,…,1)T;
∙ CFBSA: αn=1/(200n+5), μ=0.5, δ=0.5, l=0.5, γ=0.45 and f(x)=x/5;
∙ TSA: λn=0.2/‖D‖2;
∙ FBSA: λ=2×10−5.
The starting points x1 of all methods are randomly chosen in R^N. We perform the numerical test with the following three cases:
Case 1: N=512, M=256 and m=20;
Case 2: N=1024, M=512 and m=30;
Case 3: N=2048, M=1024 and m=60.
The numerical results for all tests are reported in Figures 1-6.
Figure 1. Comparison of recovered signal by using different algorithms in Case 1.
In this paper, we have proposed Tseng's splitting algorithms with non-monotone adaptive step sizes for finding zeros of the sum of two monotone operators in the framework of Banach spaces. Under suitable conditions, we proved weak and strong convergence of the algorithms without knowledge of the Lipschitz constant of the mapping. Some applications of the obtained results were presented. Finally, several numerical experiments were performed to illustrate the convergence of our algorithms and to compare them with several known methods.
Acknowledgments
P. Cholamjiak was supported by Thailand Science Research and Innovation under the project IRN62W0007 and P. Sunthrayuth was supported by RMUTT Research Grant for New Scholar under Grant NSF62D0602.
Conflict of interest
The authors declare no conflict of interest.
References
[1]
H. A. Abass, C. Izuchukwu, O. T. Mewomo, Q. L. Dong, Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach space, Fixed Point Theory, 21 (2020), 397-412. doi: 10.24193/fpt-ro.2020.2.28
[2]
R. P. Agarwal, D. O'Regan, D. R. Sahu, Fixed Point Theory for Lipschitzian-type Mappings with Applications, New York: Springer, 2009.
[3]
Y. I. Alber, Metric and generalized projection operators in Banach spaces: properties and applications, In: A. G. Kartsatos, Theory and Applications of Nonlinear Operator of Accretive and Monotone Type, New York: Marcel Dekker, (1996), 15-50.
[4]
K. Ball, E. A. Carlen, E. H. Lieb, Sharp uniform convexity and smoothness inequalities for trace norms, Inventiones Math., 115 (1994), 463-482. doi: 10.1007/BF01231769
[5]
V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Netherlands: Springer, 1976.
[6]
T. Bonesky, K. S. Kazimierski, P. Maass, F. Schöpfer, T. Schuster, Minimization of Tikhonov Functionals in Banach Spaces, Abstr. Appl. Anal., 2008 (2008), 192679.
[7]
F. Browder, Nonlinear monotone operators and convex sets in Banach spaces, Bull. Am. Math. Soc., 71 (1965), 780-785. doi: 10.1090/S0002-9904-1965-11391-X
[8]
Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318-335. doi: 10.1007/s10957-010-9757-3
[9]
S. S. Chang, C. F. Wen, J. C. Yao, Generalized viscosity implicit rules for solving quasi-inclusion problems of accretive operators in Banach spaces, Optimization, 66 (2017), 1105-1117. doi: 10.1080/02331934.2017.1325888
[10]
S. S. Chang, C. F. Wen, J. C. Yao, A generalized forward-backward splitting method for solving a system of quasi variational inclusions in Banach spaces, RACSAM, 113 (2019), 729-747. doi: 10.1007/s13398-018-0511-2
[11]
G. H. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421-444. doi: 10.1137/S1052623495290179
[12]
P. Cholamjiak, A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces, Numer. Algorithms, 71 (2016), 915-932. doi: 10.1007/s11075-015-0030-6
[13]
P. Cholamjiak, N. Pholasa, S. Suantai, P. Sunthrayuth, The generalized viscosity explicit rules for solving variational inclusion problems in Banach spaces, Optimization, 2020. Available from: https://doi.org/10.1080/02331934.2020.1789131.
[14]
I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Dordrecht: Kluwer Academic, 1990.
[15]
P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200. doi: 10.1137/050626090
[16]
I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57 (2004), 1413-1457. doi: 10.1002/cpa.20042
[17]
J. Duchi, Y. Singer, Efficient online and batch learning using forward-backward splitting, J. Mach. Learn. Res., 10 (2009), 2899-2934.
[18]
J. C. Dunn, Convexity, monotonicity, and gradient processes in Hilbert space, J. Math. Anal. Appl., 53 (1976), 145-158. doi: 10.1016/0022-247X(76)90152-9
[19]
A. Gibali, D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 49. doi: 10.1007/s10092-018-0292-1
[20]
O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403-419. doi: 10.1137/0329022
[21]
E. T. Hale, W. Yin, Y. Zhang, A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing, CAAM Technical Report TR07-07, 2007.
[22]
P. T. Harker, J. S. Pang, A damped-Newton method for the linear complementarity problem, In: G. Allgower, K. Georg, Computational Solution of Nonlinear Systems of Equations, AMS Lectures on Applied Mathematics, 26 (1990), 265-284.
[23]
O. Hanner, On the uniform convexity of Lp and ℓp, Arkiv Matematik, 3 (1956), 239-244. doi: 10.1007/BF02589410
[24]
P. Hartman, G. Stampacchia, On some non linear elliptic differential functional equations, Acta Math., 115 (1966), 271-310. doi: 10.1007/BF02392210
[25]
H. Iiduka, W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, J. Math. Anal. Appl., 339 (2008), 668-679. doi: 10.1016/j.jmaa.2007.07.019
[26]
C. Izuchukwu, C. C. Okeke, F. O. Isiogugu, Viscosity iterative technique for split variational inclusion problem and fixed point problem between Hilbert space and Banach space, J. Fixed Point Theory Appl., 20 (2018), 1-25. doi: 10.1007/s11784-018-0489-6
[27]
C. C. Okeke, C. Izuchukwu, Strong convergence theorem for split feasibility problem and variational inclusion problem in real Banach spaces, Rend. Circolo Mat. Palermo, 2020. Available from: https://doi.org/10.1007/s12215-020-00508-3.
[28]
G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Mat. Metody, 12 (1976), 747-756.
[29]
P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964-979. doi: 10.1137/0716071
[30]
P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899-912. doi: 10.1007/s11228-008-0102-z
[31]
J. Peypouquet, Convex Optimization in Normed Spaces: Theory, Methods and Examples, Springer Briefs in Optimization, 2015.
[32]
N. Pholasa, P. Cholamjiak, The regularization method for solving variational inclusion problems, Thai J. Math., 14 (2016), 369-381.
[33]
H. Raguet, J. Fadili, G. Peyre, A generalized forward-backward splitting, SIAM J. Imaging Sci., 6 (2013), 1199-1226. doi: 10.1137/120872802
[34]
R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877-898. doi: 10.1137/0314056
[35]
R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209-216. doi: 10.2140/pjm.1970.33.209
[36]
R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc., 149 (1970), 75-88. doi: 10.1090/S0002-9947-1970-0282272-5
[37]
Y. Shehu, Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces, Results Math., 74 (2019), 138. doi: 10.1007/s00025-019-1061-4
[38]
Y. Shehu, G. Cai, Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications, RACSAM, 112 (2018), 71. doi: 10.1007/s13398-016-0366-3
[39]
W. Takahashi, Nonlinear Functional Analysis, Yokohama: Yokohama Publishers, 2000.
[40]
S. Takahashi, W. Takahashi, Split common null point problem and shrinking projection method for generalized resolvents in two Banach spaces, J. Nonlinear Convex Anal., 17 (2016), 2171-2182.
[41]
W. Takahashi, N. C. Wong, J. C. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwan. J. Math., 16 (2012), 1151-1172. doi: 10.11650/twjm/1500406684
[42]
K. K. Tan, H. K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301-308. doi: 10.1006/jmaa.1993.1309
[43]
D. V. Thong, P. Cholamjiak, Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions, Comput. Appl. Math., 38 (2019), 94. doi: 10.1007/s40314-019-0855-z
P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431-446. doi: 10.1137/S0363012998338806
[46]
Y. Wang, F. Wang, Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces, Optimization, 67 (2018), 493-505. doi: 10.1080/02331934.2017.1411485
[47]
H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127-1138. doi: 10.1016/0362-546X(91)90200-K
[48]
H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240-256. doi: 10.1112/S0024610702003332
[49]
Z. B. Xu, G. F. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J. Math. Anal. Appl., 157 (1991), 189-210. doi: 10.1016/0022-247X(91)90144-O
[50]
J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms, 80 (2019), 741-752. doi: 10.1007/s11075-018-0504-4
[51]
J. Yang, H. Liu, G. Li, Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities, Numer. Algorithms, 84 (2020), 389-405. doi: 10.1007/s11075-019-00759-x
This article has been cited by:
1. Kunrada Kankam, Prasit Cholamjiak, Watcharaporn Cholamjiak, A modified parallel monotone hybrid algorithm for a finite family of $\mathcal{G}$-nonexpansive mappings apply to a novel signal recovery, 2022, 5: 393. doi: 10.53006/rna.1122092
Jun Yang, Prasit Cholamjiak, Pongsakorn Sunthrayuth, Weak and strong convergence results for solving monotone variational inequalities in reflexive Banach spaces, 2022: 1. doi: 10.1080/02331934.2022.2069568
4. Shaotao Hu, Yuanheng Wang, Liya Liu, Qiao-Li Dong, An inertial self-adaptive iterative algorithm for finding the common solutions to split feasibility and fixed point problems in specific Banach spaces, 2023, 424: 115010. doi: 10.1016/j.cam.2022.115010
5. Simeon Reich, Truong Minh Tuyen, Pongsakorn Sunthrayuth, Prasit Cholamjiak, Two New Inertial Algorithms for Solving Variational Inequalities in Reflexive Banach Spaces, 2021, 42: 1954. doi: 10.1080/01630563.2021.2006692
6. V. V. Semenov, S. V. Denisov, G. V. Sandrakov, O. S. Kharkov, Convergence of the Operator Extrapolation Method for Variational Inequalities in Banach Spaces, 2022, 58: 740. doi: 10.1007/s10559-022-00507-5
7. V. V. Semenov, S. V. Denisov, Convergence of the Method of Extrapolation from the Past for Variational Inequalities in Uniformly Convex Banach Spaces, 2022, 58: 564. doi: 10.1007/s10559-022-00490-x
8. Bing Tan, Pongsakorn Sunthrayuth, Prasit Cholamjiak, Yeol Je Cho, Modified inertial extragradient methods for finding minimum-norm solution of the variational inequality problem with applications to optimal control problem, 2023, 100: 525. doi: 10.1080/00207160.2022.2137672
9. Yana Vedel, Vladimir Semenov, Sergey Denisov, 2021, Chapter 4, ISBN 978-3-030-92710-3: 50. doi: 10.1007/978-3-030-92711-0_4
V. V. Semenov, O. S. Kharkov, Strong Convergence of the Regularized Operator Extrapolation Algorithm for Variational Inequalities, 2024: 64. doi: 10.34229/KCA2522-9664.24.3.6
12. V. V. Semenov, O. S. Kharkov, Strong Convergence of the Regularized Operator Extrapolation Algorithm for Variational Inequalities, 2024, 60: 392. doi: 10.1007/s10559-024-00680-9
13. Zhong-bao Wang, Pongsakorn Sunthrayuth, Ratthaprom Promkam, Abubakar Adamu, Three novel inertial subgradient extragradient methods for quasi-monotone variational inequalities in Banach spaces, 2024, 43. doi: 10.1007/s40314-024-02929-7
14. Austine Efut Ofem, Akindele Adebayo Mebawondu, Godwin Chidi Ugwunnadi, Hüseyin Işık, Ojen Kumar Narain, A modified subgradient extragradient algorithm-type for solving quasimonotone variational inequality problems with applications, 2023, 2023. doi: 10.1186/s13660-023-02981-7
15. Austine Efut Ofem, Akindele Adebayo Mebawondu, Godwin Chidi Ugwunnadi, Prasit Cholamjiak, Ojen Kumar Narain, Relaxed Tseng splitting method with double inertial steps for solving monotone inclusions and fixed point problems, 2024, 96: 1465. doi: 10.1007/s11075-023-01674-y
16. V. V. Semenov, O. S. Kharkov, The Regularized Operator Extrapolation Algorithm, 2023: 15. doi: 10.17721/2706-9699.2023.1.02
Algorithm 1: Tseng type splitting algorithm for monotone inclusion problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let x1∈E be arbitrary. Set n=1.
Step 1. Compute yn = J_{λn}^B J^{−1}(Jxn − λn Axn). (3.1)
If xn=yn, then stop: xn is a solution of problem (1.1). Otherwise, go to Step 2.
Step 2. Compute x_{n+1} = J^{−1}(Jyn − λn(Ayn − Axn)), (3.2)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λ_{n+1} = λn+θn otherwise. (3.3)
Set n:=n+1 and go to Step 1.

Algorithm 2: Halpern-Tseng type splitting algorithm for monotone inclusion problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let u,x1∈E be arbitrary. Set n=1.
Step 1. Compute yn = J_{λn}^B J^{−1}(Jxn − λn Axn). (3.17)
If xn=yn, then stop: xn is a solution of problem (1.1). Otherwise, go to Step 2.
Step 2. Compute zn = J^{−1}(Jyn − λn(Ayn − Axn)). (3.18)
Step 3. Compute x_{n+1} = J^{−1}(αn Ju + (1−αn)Jzn), (3.19)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λ_{n+1} = λn+θn otherwise. (3.20)
Set n:=n+1 and go to Step 1.

Algorithm 3: Tseng type splitting algorithm for variational inequality problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let x1∈C be arbitrary. Set n=1.
Step 1. Compute yn = ΠC J^{−1}(Jxn − λn Axn). (4.2)
Step 2. Compute x_{n+1} = J^{−1}(Jyn − λn(Ayn − Axn)), (4.3)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λ_{n+1} = λn+θn otherwise. (4.4)
Set n:=n+1 and go to Step 1.

Algorithm 4: Halpern-Tseng type splitting algorithm for variational inequality problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let u,x1∈C be arbitrary. Set n=1.
Step 1. Compute yn = ΠC J^{−1}(Jxn − λn Axn). (4.5)
Step 2. Compute zn = J^{−1}(Jyn − λn(Ayn − Axn)). (4.6)
Step 3. Compute x_{n+1} = J^{−1}(αn Ju + (1−αn)Jzn), (4.7)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖Axn−Ayn‖, λn+θn} if Axn−Ayn≠0, and λ_{n+1} = λn+θn otherwise. (4.8)
Set n:=n+1 and go to Step 1.

Algorithm 5: Tseng type splitting algorithm for convex minimization problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let x1∈E be arbitrary. Set n=1.
Step 1. Compute yn = J_{λn}^{∂g} J^{−1}(Jxn − λn ∇f(xn)). (4.11)
Step 2. Compute x_{n+1} = J^{−1}(Jyn − λn(∇f(yn) − ∇f(xn))), (4.12)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖∇f(yn)−∇f(xn)‖, λn+θn} if ∇f(yn)−∇f(xn)≠0, and λ_{n+1} = λn+θn otherwise. (4.13)
Set n:=n+1 and go to Step 1.

Algorithm 6: Halpern-Tseng type splitting algorithm for convex minimization problem
Step 0. Given λ1>0 and μ∈(0,√(cκ)), choose a nonnegative real sequence {θn} such that ∑_{n=1}^∞ θn<∞. Let u,x1∈E be arbitrary. Set n=1.
Step 1. Compute yn = J_{λn}^{∂g} J^{−1}(Jxn − λn ∇f(xn)). (4.14)
Step 2. Compute zn = J^{−1}(Jyn − λn(∇f(yn) − ∇f(xn))). (4.15)
Step 3. Compute x_{n+1} = J^{−1}(αn Ju + (1−αn)Jzn), (4.16)
where the step sizes are adaptively updated by
λ_{n+1} = min{μ‖xn−yn‖/‖∇f(yn)−∇f(xn)‖, λn+θn} if ∇f(yn)−∇f(xn)≠0, and λ_{n+1} = λn+θn otherwise. (4.17)
Set n:=n+1 and go to Step 1.
| m | Algorithm 3 (θn=0) | Algorithm 3 (θn=100/n^1.1) | Algorithm 4 (θn=0) | Algorithm 4 (θn=100/n^1.1) | EGM | SEGM |
| --- | --- | --- | --- | --- | --- | --- |
| 100 | 2454 / 0.02 | 1162 / 0.01 | 35112 / 1.31 | 25204 / 0.65 | 2454 / 0.03 | 2454 / 0.04 |
|  | 1920 / 0.04 | 917 / 0.02 | 35072 / 1.48 | 25203 / 0.66 | 1920 / 0.03 | 1920 / 0.05 |
| 500 | 2275 / 0.95 | 1104 / 0.29 | 35010 / 7.28 | 25201 / 5.12 | 2275 / 0.50 | 2275 / 0.65 |
|  | 2291 / 0.93 | 1107 / 0.43 | 34989 / 7.20 | 25198 / 5.06 | 2291 / 0.47 | 2291 / 0.59 |
| 1000 | 2027 / 8.08 | 996 / 4.25 | 34993 / 113.2 | 25200 / 78.2 | 2027 / 7.83 | 2027 / 7.96 |
|  | 2017 / 7.80 | 987 / 3.87 | 35003 / 109.8 | 25200 / 78.0 | 2017 / 7.01 | 2017 / 7.16 |

Each cell shows the number of iterations / CPU time.
| x1 | Algorithm 3 (θn=0) | Algorithm 3 (θn=0.001/(1.01)^n) | Algorithm 4 (θn=0) | Algorithm 4 (θn=0.001/(1.01)^n) |
| --- | --- | --- | --- | --- |
| (1/100)sin(t) | 7 / 9.9 | 7 / 8.9 | 7 / 9.8 | 7 / 10.1 |
| (1/3)t²e^{−4t} | 5 / 0.4 | 5 / 0.3 | 5 / 0.3 | 5 / 0.3 |
| (1/70)(1−t²) | 6 / 3.2 | 6 / 2.5 | 6 / 2.7 | 6 / 2.7 |

Each cell shows the number of iterations / CPU time.
Table 3.

| x1 | Algorithm 5 (θn=0) | Algorithm 5 (θn=100/n^1.1) | Algorithm 6 (θn=0) | Algorithm 6 (θn=100/n^1.1) | Algorithm (1.4) |
| --- | --- | --- | --- | --- | --- |
| (1,2,4)ᵀ | 101 / 0.003 | 284 / 0.003 | 27818 / 0.10 | 25263 / 0.08 | 263957 / 0.33 |
| (1,−7,3)ᵀ | 103 / 0.002 | 288 / 0.003 | 27809 / 0.12 | 25264 / 0.08 | 314417 / 0.38 |
| (−100,100,50)ᵀ | 111 / 0.004 | 315 / 0.004 | 27802 / 0.11 | 25252 / 0.09 | 1313442 / 1.58 |
| (−1000,−5000,−800)ᵀ | 127 / 0.005 | 356 / 0.01 | 27787 / 0.11 | 25241 / 0.07 | 8004199 / 9.4 |

Each cell shows the number of iterations / CPU time.
Figure 1. Comparison of recovered signal by using different algorithms in Case 1
Figure 2. The plotting of MSE versus number of iterations in Case 1
Figure 3. Comparison of recovered signal by using different algorithms in Case 2
Figure 4. The plotting of MSE versus number of iterations in Case 2
Figure 5. Comparison of recovered signal by using different algorithms in Case 3
Figure 6. The plotting of MSE versus number of iterations in Case 3