1.
Introduction
Let E be a real Banach space with its dual space E∗. In this paper, we study the so-called monotone inclusion problem: find x∈E such that

0 ∈ (A+B)x, (1.1)

where A:E→E∗ is a single-valued mapping and B:E→2E∗ is a multi-valued mapping. The set of solutions of the problem (1.1) is denoted by (A+B)−10:={x∈E:0∈(A+B)x}. This problem has drawn much attention since it lies at the core of many mathematical problems, such as variational inequalities, split feasibility problems and minimization problems, with applications in machine learning, statistical regression, image processing and signal recovery (see [17,33,44]).
A classical method for solving the problem (1.1) in a Hilbert space H is the forward-backward splitting algorithm (FBSA) [15,29], which generates an iterative sequence {xn} by

xn+1 = JBλ(xn − λAxn), (1.2)

where JBλ:=(I+λB)−1 is the resolvent operator of an operator B. Here, I denotes the identity operator on H. It was proved that the sequence generated by (1.2) converges weakly to an element in (A+B)−10 under the assumption of the α-cocoercivity of the operator A, that is,

⟨x−y, Ax−Ay⟩ ≥ α‖Ax−Ay‖2 for all x,y∈H,

and λ is chosen in (0,2α). In fact, FBSA includes, as special cases, the proximal point algorithm (when A=0) [11,20,34] and the gradient method [18].
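To make the iteration concrete, here is a minimal sketch of (1.2) in Rm; the function names, the generic resolvent argument and the stopping rule are our own illustration, not part of the original method description.

    import numpy as np

    def fbsa(A, resolvent_B, x0, lam, max_iter=1000, tol=1e-8):
        """Forward-backward splitting (1.2): x_{n+1} = J_lam^B(x_n - lam*A(x_n)).
        For an alpha-cocoercive A, lam should lie in (0, 2*alpha)."""
        x = x0
        for _ in range(max_iter):
            x_new = resolvent_B(x - lam * A(x), lam)  # forward step, then backward step
            if np.linalg.norm(x_new - x) < tol:       # illustrative stopping rule
                break
            x = x_new
        return x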
In order to obtain a strong convergence result, Takahashi et al. [41] introduced the following algorithm:
where A is an α-cocoercive mapping on H. It was shown that if {λn}⊂(0,∞) and {αn}⊂(0,1) satisfy the following assumptions:
then the sequence {xn} defined by (1.3) converges strongly to an element in (A+B)−10.
In 2016, Cholamjiak [12] introduced the following FBSA in a uniformly convex and q-uniformly smooth Banach space E:
where JBλn:=(I+λnB)−1 is the resolvent operator of an m-accretive operator B and A is an α-cocoercive mapping. He proved that the sequence generated by (1.4) converges strongly to a solution of the problem (1.1) under the following assumptions:
where κq is the q-uniform smoothness coefficient of E.
In recent years, the FBSA for solving the monotone inclusion problem (1.1), when A is α-cocoercive, was studied and modified by many authors in various settings (see, e.g., [1,9,10,13,26,27,32,37,38,46]). It is important to remark that the α-cocoercivity of the operator A is a strong assumption. To relax this assumption, Tseng [45] introduced the following so-called Tseng's splitting method:

yn = JBλn(xn − λnAxn),
xn+1 = yn − λn(Ayn − Axn), (1.5)

where A is monotone and L-Lipschitz continuous with L>0. It was proved that the sequence {xn} generated by (1.5) converges weakly to an element in (A+B)−10 provided the step size λn is chosen in (0,1/L). It is worth noting that Tseng's splitting method requires knowledge of the Lipschitz constant of the mapping. Unfortunately, Lipschitz constants are often unknown or difficult to approximate.
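For comparison with (1.2), a minimal sketch of Tseng's iteration (1.5) in Rm, again with illustrative names and stopping rule; the correction step is what removes the cocoercivity requirement, at the price of a second evaluation of A per iteration.

    import numpy as np

    def tseng(A, resolvent_B, x0, lam, max_iter=1000, tol=1e-8):
        """Tseng's splitting (1.5); requires lam in (0, 1/L) for an
        L-Lipschitz continuous monotone A."""
        x = x0
        for _ in range(max_iter):
            Ax = A(x)
            y = resolvent_B(x - lam * Ax, lam)  # forward-backward step
            x_new = y - lam * (A(y) - Ax)       # Tseng's correction step
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x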
Very recently, Shehu [37] extended Tseng's result to Banach spaces. He proposed the following iterative process for approximating a solution of the problem (1.1) in a 2-uniformly convex Banach space E which is also uniformly smooth:

yn = JBλn J−1(Jxn − λnAxn),
xn+1 = J−1(Jyn − λn(Ayn − Axn)), (1.6)

where A:E→E∗ is monotone and L-Lipschitz continuous, JBλn:=(J+λnB)−1J is the resolvent of B and J is the duality mapping from E into E∗. He obtained a weak convergence theorem for the problem (1.1) provided the step size λn is chosen in (0, 1/(√(2μκ)L)), where μ is the 2-uniform convexity constant of E and κ is the 2-uniform smoothness constant of E∗. At the same time, he also proposed a variant of (1.6) with a linesearch for solving the problem (1.1). It is known that any algorithm with a linesearch needs an inner loop with some stopping criterion at every iteration.
In this paper, motivated by Shehu [37], we propose two modifications of Tseng's splitting method with non-monotone adaptive step sizes for solving the problem (1.1) in the framework of Banach spaces. The step sizes of our methods do not require prior knowledge of the Lipschitz constant of the operator, and no linesearch procedure is needed. The remainder of this paper is organized as follows: We recall some definitions and lemmas in Section 2. Our methods are presented and analyzed in Section 3. Theoretical applications to the variational inequality problem and the convex minimization problem are considered in Section 4 and finally, in Section 5, we provide some numerical experiments to illustrate the behaviour of our methods.
2.
Preliminaries
Let R and N be the set of real numbers and the set of positive integers, respectively. Let E be a real Banach space with its dual space E∗. We denote by ⟨x,f⟩ the value of a functional f in E∗ at x in E, that is, ⟨x,f⟩=f(x). For a sequence {xn} in E, the strong convergence and the weak convergence of {xn} to x∈E are denoted by xn→x and xn⇀x, respectively. Let SE={x∈E:‖x‖=1}. The space E is said to be smooth if the limit

lim_{t→0} (‖x+ty‖−‖x‖)/t (2.1)
exists for all x,y∈SE. The space E is said to be uniformly smooth if the limit (2.1) converges uniformly in x,y∈SE. It is said to be strictly convex if ‖(x+y)/2‖<1 whenever x,y∈SE and x≠y. The space E is said to be uniformly convex if and only if δE(ϵ)>0 for all ϵ∈(0,2], where δE is the modulus of convexity of E defined by

δE(ϵ) = inf{1 − ‖(x+y)/2‖ : x,y∈SE, ‖x−y‖≥ϵ}

for all ϵ∈[0,2]. Let p≥2. The space E is said to be p-uniformly convex if there is a c>0 such that δE(ϵ)≥cϵp for all ϵ∈(0,2]. Let 1<q≤2. The space E is said to be q-uniformly smooth if there exists a κ>0 such that ρE(t)≤κtq for all t>0, where ρE is the modulus of smoothness of E defined by

ρE(t) = sup{(‖x+ty‖+‖x−ty‖)/2 − 1 : x,y∈SE}
for all t≥0. Let 1<q≤2<p<∞ with 1/p+1/q=1. It is observed that every p-uniformly convex (q-uniformly smooth) space is a uniformly convex (uniformly smooth) space. It is known that E is p-uniformly convex (q-uniformly smooth) if and only if its dual E∗ is q-uniformly smooth (p-uniformly convex) (see [2]). If E is uniformly convex, then E is reflexive and strictly convex, and if E is uniformly smooth, then E is reflexive and smooth (see [14]). Moreover, for every p>1, Lp and ℓp are min{p,2}-uniformly smooth and max{p,2}-uniformly convex, while every Hilbert space is 2-uniformly smooth and 2-uniformly convex (see [4,23,47] for more details).
Definition 2.1. The normalized duality mapping J:E→2E∗ is defined by

J(x) = {f∈E∗ : ⟨x,f⟩ = ‖x‖2 = ‖f‖2},
where ⟨⋅,⋅⟩ denotes the duality pairing between E and E∗.
If E is a Hilbert space, then J=I is the identity mapping on E. It is known that E is smooth if and only if J is single-valued from E into E∗, and if E is reflexive, smooth and strictly convex, then J−1 is single-valued, one-to-one, surjective and coincides with the duality mapping from E∗ into E. Moreover, if E is uniformly smooth, then J is norm-to-norm uniformly continuous on bounded subsets of E (see [2,14] for more details). A duality mapping J from a smooth Banach space E into E∗ is said to be weakly sequentially continuous if xn⇀x implies Jxn⇀∗Jx.
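As a concrete illustration (our addition, not taken from the original text): on ℓp with 1<p<∞, the normalized duality mapping is single-valued and, for x≠0, is given coordinatewise by

(Jx)_i = |x_i|^{p−2} x_i / ‖x‖_p^{p−2}, i∈N,

with J0=0; one checks directly that ⟨x,Jx⟩=‖x‖2 and ‖Jx‖_q=‖x‖_p, where 1/p+1/q=1. In a Hilbert space this formula collapses to Jx=x.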
Lemma 2.2. [39] Let E be a smooth Banach space and J be the duality mapping on E. Then ⟨x−y,Jx−Jy⟩≥0 for all x,y∈E. Further, if E is strictly convex and ⟨x−y,Jx−Jy⟩=0, then x=y.
Definition 2.3. A mapping A:E→E∗ is said to be:
∙ α-cocoercive if there exists a constant α>0 such that ⟨x−y,Ax−Ay⟩≥α‖Ax−Ay‖2 for all x,y∈E;
∙ monotone if ⟨x−y,Ax−Ay⟩≥0 for all x,y∈E;
∙ L-Lipschitz continuous if there exists a constant L>0 such that ‖Ax−Ay‖≤L‖x−y‖ for all x,y∈E;
∙ hemicontinuous if for each x,y∈E, the mapping f:[0,1]→E∗ defined by f(t)=A(tx+(1−t)y) is continuous with respect to the weak∗ topology of E∗.
Remark 2.4. It is easy to see that if A is α-cocoercive, then A is monotone and (1/α)-Lipschitz continuous, but the converse is not true in general.
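A standard counterexample (our addition) is the rotation A:R2→R2 given by A(w1,w2)=(−w2,w1): it is monotone since ⟨x−y, Ax−Ay⟩=0, and 1-Lipschitz continuous since ‖Ax−Ay‖=‖x−y‖, but it is not α-cocoercive for any α>0, because α‖Ax−Ay‖2=α‖x−y‖2>0=⟨x−y, Ax−Ay⟩ whenever x≠y.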
The next lemma can be found in [49] (see also [47]).
Lemma 2.5. (i) Let E be a 2-uniformly smooth Banach space. Then there exists a constant κ>0 such that
(ii) Let E be a 2-uniformly convex Banach space. Then there exists a constant c>0 such that
Remark 2.6. It is well-known that κ=c=1 whenever E is a Hilbert space. Hence these inequalities reduce to the following well-known polarization identity:

‖x±y‖2 = ‖x‖2 ± 2⟨x,y⟩ + ‖y‖2 for all x,y∈H.
Moreover, we refer to [49] for the exact values of constants κ and c.
Next, we recall the following Lyapunov function which was introduced in [3]:
Definition 2.7. Let E be a smooth Banach space. The Lyapunov functional ϕ:E×E→R is defined by

ϕ(x,y) = ‖x‖2 − 2⟨x,Jy⟩ + ‖y‖2 for all x,y∈E.

If E is a Hilbert space, then ϕ(x,y)=‖x−y‖2 for all x,y∈E. In addition, the Lyapunov function ϕ has the following properties:

(‖x‖−‖y‖)2 ≤ ϕ(x,y) ≤ (‖x‖+‖y‖)2,
ϕ(x,y) = ϕ(x,z) + ϕ(z,y) + 2⟨z−x, Jy−Jz⟩,

for all x,y,z∈E.
Lemma 2.8. [6] Let E be a 2-uniformly convex Banach space. Then there exists a constant c>0 such that

c‖x−y‖2 ≤ ϕ(x,y) for all x,y∈E,

where c is the constant in Lemma 2.5 (ii).
We make use of the following functional V:E×E∗→R studied in [3]:

V(x,x∗) = ‖x‖2 − 2⟨x,x∗⟩ + ‖x∗‖2 for all x∈E and x∗∈E∗.
Obviously, V(x,x∗)=ϕ(x,J−1x∗) for all x∈E and x∗∈E∗.
Lemma 2.9. [3] Let E be a reflexive, strictly convex and smooth Banach space. Then the following statement holds:

V(x,x∗) + 2⟨J−1x∗ − x, y∗⟩ ≤ V(x, x∗ + y∗) for all x∈E and x∗,y∗∈E∗.
Let E be a reflexive, strictly convex and smooth Banach space and let C be a closed and convex subset of E. Then for any x∈E, there exists a unique element z∈C such that

ϕ(z,x) = min{ϕ(y,x) : y∈C}.
Such a mapping ΠC:E→C defined by z=ΠC(x) is called the generalized projection of E onto C. If E is a Hilbert space, then ΠC coincides with the metric projection PC.
Lemma 2.10. [3] Let E be a reflexive, strictly convex and smooth Banach space and C be a closed and convex subset of E. Let x∈E and z∈C. Then the following statements hold:
(i) z=ΠC(x) if and only if ⟨y−z,Jx−Jz⟩≤0, ∀y∈C.
(ii) ϕ(y,ΠC(x))+ϕ(ΠC(x),x)≤ϕ(y,x), ∀y∈C.
Lemma 2.11. [25] Let C be a closed and convex subset of a smooth and uniformly convex Banach space E. Let {xn} be a sequence in E such that ϕ(p,xn+1)≤ϕ(p,xn) for all p∈C and n≥1. Then the sequence {ΠC(xn)} converges strongly to some element x∗∈C.
Let B:E→2E∗ be a multi-valued mapping. The effective domain of B is denoted by D(B)={x∈E:Bx≠∅} and the range of B is denoted by R(B)=⋃{Bx:x∈D(B)}. The set of zeros of B is denoted by B−10={x∈D(B):0∈Bx}. A multi-valued mapping B is said to be monotone if

⟨x−y, u−v⟩ ≥ 0 whenever x,y∈D(B), u∈Bx and v∈By.
A monotone operator B on E is said to be maximal if its graph G(B)={(x,y)∈E×E∗:x∈D(B),y∈Bx} is not properly contained in the graph of any other monotone operator on E. In other words, the maximality of B is equivalent to R(J+λB)=E∗ for all λ>0 (see [5, Theorem 1.2]). It is known that if B is maximal monotone, then B−10 is closed and convex (see [39]).
For a maximal monotone operator B, we define the resolvent of B by JBλ(x):=(J+λB)−1Jx for x∈E and λ>0. It is also known that B−10=F(JBλ), where F(JBλ) denotes the set of fixed points of JBλ.
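In a Hilbert space, where J=I, the resolvent reduces to the classical (I+λB)−1. A one-line numerical check for a linear maximal monotone B (our toy example):

    import numpy as np

    B = np.array([[2.0, 0.0], [0.0, 3.0]])       # linear monotone (positive semidefinite) operator
    lam, x = 0.5, np.array([1.0, 1.0])
    z = np.linalg.solve(np.eye(2) + lam * B, x)  # J_lam^B(x) = (I + lam*B)^{-1} x
    # z satisfies the resolvent inclusion x - z ∈ lam*Bz (here exactly x - z = lam*(B @ z))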
Lemma 2.12. [5] Let E be a reflexive Banach space. Let A:E→E∗ be a monotone, hemicontinuous and bounded operator and B:E→2E∗ be a maximal monotone operator. Then A+B is maximal monotone.
Lemma 2.13. ([48]) Assume that {an} is a sequence of nonnegative real numbers such that

an+1 ≤ (1−γn)an + γnδn, n≥1,

where {γn} is a sequence in (0,1) and {δn} is a sequence of real numbers such that
(i) ∑∞n=1γn=∞;
(ii) lim supn→∞δn≤0 or ∑∞n=1|γnδn|<∞.
Then limn→∞an=0.
Lemma 2.14. ([30]) Let {Γn} be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence {Γni} of {Γn} which satisfies Γni<Γni+1 for all i∈N. Define the sequence {σ(n)} of integers by

σ(n) = max{k≤n : Γk<Γk+1}

for all n≥n0 (for some n0 large enough). Then {σ(n)}n≥n0 is a non-decreasing sequence such that limn→∞σ(n)=∞, and it holds that Γσ(n)≤Γσ(n)+1 and Γn≤Γσ(n)+1 for all n≥n0.
Lemma 2.15. ([42]) Assume that {λn} and {θn} are two nonnegative real sequences such that

λn+1 ≤ λn + θn for all n∈N.
If ∑∞n=1θn<∞, then limn→∞λn exists.
3.
Main results
In this section, we introduce two modified Tseng's splitting algorithms for solving the monotone inclusion problem in Banach spaces. In order to prove the convergence of these algorithms, we need to make the following assumptions:
Assumption 3.1. (A1) The space E is a real 2-uniformly convex and uniformly smooth Banach space.
(A2) The mapping A:E→E∗ is monotone and L-Lipschitz continuous, and B:E→2E∗ is maximal monotone.
(A3) The solution set of the problem (1.1) is nonempty, that is, (A+B)−10≠∅.
Lemma 3.2. Assume that Assumption 3.1 holds. Let {xn}, {yn} and {λn} be sequences generated by Algorithm 1. Then the following statements hold:
(i) If xn=yn for all n∈N, then xn∈(A+B)−10.
(ii) limn→∞λn=λ∈[min{μ/L, λ1}, λ1+θ], where θ=∑∞n=1θn. Moreover
Proof. (i) If xn=yn, then xn=JBλnJ−1(Jxn−λnAxn). It follows that xn=(J+λnB)−1J∘J−1(J−λnA)xn, that is, Jxn−λnAxn∈Jxn+λnBxn, which implies that 0∈(A+B)xn. Hence xn∈(A+B)−10.
(ii) In the case Axn−Ayn≠0, using the Lipschitz continuity of A, we have
From (3.3) and mathematical induction, the sequence {λn} has upper bound λ1+θ and lower bound min{μ/L, λ1}. From Lemma 2.15, limn→∞λn exists and we denote λ=limn→∞λn. It is obvious that λ∈[min{μ/L, λ1}, λ1+θ]. By the definition of λn, we have
This implies that
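For orientation, the non-monotone step-size rule producing these bounds can be sketched as follows; this is our reading, consistent with the growth bound λn+1≤λn+θn of Lemma 2.15 and the lower bound min{μ/L, λ1} of part (ii), with illustrative names.

    import numpy as np

    def next_step_size(lam, theta, mu, x, y, Ax, Ay):
        """Non-monotone adaptive step size: shrink toward mu*||x-y||/||Ax-Ay||
        when A varies fast, while allowing growth by the summable increment theta."""
        d = np.linalg.norm(Ax - Ay)
        if d > 0:
            return min(mu * np.linalg.norm(x - y) / d, lam + theta)
        return lam + theta

By the Lipschitz continuity of A, μ‖x−y‖/‖Ax−Ay‖ ≥ μ/L whenever Ax≠Ay, which is exactly where the lower bound min{μ/L, λ1} comes from.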
Lemma 3.3. Assume that Assumption 3.1 holds. Let {xn} be the sequence generated by Algorithm 1. Then, for all z∈(A+B)−10,
where c and κ are constants in Lemma 2.5.
Proof. Let z∈(A+B)−10. From Lemma 2.5 (i) and (2.5), we have
Combining (3.4) and (3.6), we have
By Lemma 2.8, we have
Now, we will show that
From the definition of yn, we note that Jxn−λnAxn∈Jyn+λnByn. Since B is maximal monotone, there exists vn∈Byn such that Jxn−λnAxn=Jyn+λnvn, so that
Since 0∈(A+B)z and Ayn+vn∈(A+B)yn, and since A+B is maximal monotone by Lemma 2.12, we have
Substituting (3.9) into (3.10), we have
Hence
Combining (3.8) and (3.11), we obtain the assertion of the lemma. This completes the proof.
Theorem 3.4. Assume that Assumption 3.1 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 1 converges weakly to an element in (A+B)−10.
Proof. Since limn→∞λn exists and μ∈(0, √(c/κ)), it follows that limn→∞(1 − (κμ2/c)(λ2n/λ2n+1)) = 1 − κμ2/c > 0. Thus there exists n0∈N such that
Combining (3.5) and (3.12), we have
This shows that limn→∞ϕ(z,xn) exists and hence {ϕ(z,xn)} is bounded. Applying Lemma 2.8, we also have that {xn} is bounded. From (3.5), we have
Thus we have
Applying Lemma 2.8, we also have
Since J is norm-to-norm uniformly continuous on bounded subsets of E, we have
Using the fact that A is Lipschitz continuous, we have
By the boundedness of {xn}, there exists a subsequence {xnk} of {xn} such that xnk⇀x∗∈E. From (3.14), we have ynk⇀x∗. We will show that x∗∈(A+B)−10. Let (v,w)∈G(A+B), we have w−Av∈Bv. From the definition of ynk, we note that
which implies that
By the maximal monotonicity of B, we have
and by the monotonicity of A, we have
Since limk→∞λnk=λ>0 and ynk⇀x∗, it follows from (3.15) and (3.16) that
By the maximal monotonicity of A+B, we get 0∈(A+B)x∗, that is, x∗∈(A+B)−10. Note that (A+B)−10 is closed and convex. Put un=Π(A+B)−10(xn). It follows from Lemma 2.11 that there exists x∗∈(A+B)−10 such that un→x∗. Finally, we show that xn⇀x∗. Let {xnk} be a subsequence of {xn} such that xnk⇀ˆx∈(A+B)−10. Then we have
for all k∈N. Since un→x∗, xnk⇀ˆx and J is weakly sequentially continuous, we have
By the strict monotonicity of J, we obtain x∗=ˆx. In summary, we have shown that every subsequence of {xn} has a further subsequence which converges weakly to x∗. We conclude that xn⇀x∗=limn→∞Π(A+B)−10(xn). This completes the proof.
Theorem 3.5. Assume that Assumption 3.1 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 2 converges strongly to x∗∈(A+B)−10.
Proof. We will show that {xn} is bounded. Let z∈(A+B)−10. Using the same arguments as in the proof of Lemma 3.3, we can show that
Since limn→∞λn exists and μ∈(0, √(c/κ)), it follows that limn→∞(1 − (κμ2/c)(λ2n/λ2n+1)) = 1 − κμ2/c > 0. Thus there exists n0∈N such that
Combining (3.21) and (3.22), we have
By (2.4), we have
This implies that {ϕ(z,xn)} is bounded. Applying Lemma 2.8, we also have {xn} is bounded.
Let x∗=Π(A+B)−10(u). From (3.21), we have
This implies that
where K=supn∈N{|ϕ(x∗,u)−ϕ(x∗,xn)|}.
Now, we will divide the rest of the proof into two cases.
Case 1. Suppose that there exists N∈N such that ϕ(x∗,xn+1)≤ϕ(x∗,xn) for all n≥N. Hence limn→∞ϕ(x∗,xn) exists. By our assumptions, we have from (3.24) that
and hence
Since J is norm-to-norm uniformly continuous on bounded subsets of E, we have
Using the fact that A is Lipschitz continuous, we have
Then from (3.18), we have
Moreover from (3.26) and (3.27), we obtain
Since J−1 is norm-to-norm uniformly continuous on bounded subset of E∗, we have
By the boundedness of {xn}, there exists a subsequence {xnk} of {xn} such that xnk⇀ˆx∈E and
where x∗=Π(A+B)−10(u). By a similar argument to that of Theorem 3.4, we can show that ˆx∈(A+B)−10. Thus we have
From (3.28), we also have
Finally, we show that xn→x∗. From Lemma 2.9, we have
This, together with (3.29) and (3.30), allows us to conclude by Lemma 2.13 that ϕ(x∗,xn)→0. Therefore xn→x∗.
Case 2. Suppose that there exists a subsequence {Γni} of {Γn} such that Γni<Γni+1 for all i∈N. In this case, we define σ:N→N by
for all n≥n0 (for some n0 large enough). From Lemma 2.14, we have σ(n) is non-decreasing such that limn→∞σ(n)=∞ and the following inequalities hold for all n≥n0:
Put Γn=ϕ(x∗,xn) for all n∈N. As proved in Case 1, we obtain
where K=supn∈N{|ϕ(x∗,u)−ϕ(x∗,xσ(n))|}. By our assumptions, we have
and hence
Using the same arguments as in the proof of Case 1, we can show that
and
From (3.30) and (3.31), we have
This implies that
Hence lim supn→∞ϕ(x∗,xn)=0 and so limn→∞ϕ(x∗,xn)=0. Therefore xn→x∗. This completes the proof.
4.
Theoretical applications
4.1. The case of variational inequality problem
Let C be a nonempty, closed and convex subset of E. Let A:C→E∗ be a mapping. The variational inequality problem is to find x∗∈C such that

⟨y−x∗, Ax∗⟩ ≥ 0 for all y∈C. (4.1)
The set of solutions of the problem (4.1) is denoted by VI(C,A). In particular, if A is a continuous and monotone mapping, then VI(C,A) is closed and convex (see [7,24]). Recall that the indicator function of C is given by

iC(x) = 0 if x∈C, and iC(x) = +∞ otherwise.

It is known that iC is a proper, convex and lower semicontinuous function and its subdifferential ∂iC is maximal monotone (see [35]). From [2], we know that

∂iC(v) = NC(v) := {x∗∈E∗ : ⟨y−v, x∗⟩ ≤ 0 for all y∈C}, v∈C,

where NC is the normal cone of C at a point v. Thus we can define the resolvent of ∂iC for λ>0 by

J∂iCλ(x) := (J + λ∂iC)−1Jx for all x∈E.
As shown in [40], for any x∈E and z∈C, z=J∂iCλ(x)⟺z=ΠC(x), where ΠC is the generalized projection from E onto C.
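In a Hilbert space (J=I), this fact reduces to the familiar statement that the resolvent of ∂iC is the metric projection PC. A small sketch for C=Rm+, the feasible set used in Example 5.1 below (names are ours):

    import numpy as np

    def resolvent_indicator(x, lam):
        """Resolvent of B = ∂i_C for C = R^m_+ in a Hilbert space:
        (I + lam*∂i_C)^{-1} x = P_C(x), independent of lam > 0."""
        return np.maximum(x, 0.0)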
Lemma 4.1. [36] Let C be a nonempty, closed and convex subset of a Banach space E. Let A:C→E∗ be a monotone and hemicontinuous operator and let T:E→2E∗ be the operator defined by

Tv = Av + NC(v) if v∈C, and Tv = ∅ if v∉C.
Then T is maximal monotone and T−10=VI(C,A).
If we set B=∂iC, then we obtain the following results regarding the problem (4.1).
Assumption 4.2. (A1) The feasible set C is a nonempty, closed and convex subset of a real 2-uniformly convex and uniformly smooth Banach space E.
(A2) The mapping A:E→E∗ is monotone and L-Lipschitz continuous.
(A3) The solution set of the problem (4.1) is nonempty, that is, VI(C,A)≠∅.
Theorem 4.3. Assume that Assumption 4.2 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 3 converges weakly to an element in VI(C,A).
Theorem 4.4. Assume that Assumption 4.2 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 4 converges strongly to x∗∈VI(C,A).
4.2. The case of convex minimization problem
Let f:E→R be a convex function and g:E→R be a convex, lower semicontinuous and non-smooth function. We consider the following convex minimization problem:

min_{x∈E} f(x) + g(x). (4.9)

By Fermat's rule, the problem (4.9) is equivalent to the problem of finding x∈E such that

0 ∈ ∇f(x) + ∂g(x),

where ∇f is the gradient of f and ∂g is the subdifferential of g. In this situation, we assume that f is convex and differentiable and that its gradient ∇f is L-Lipschitz continuous. Then ∇f is cocoercive with constant 1/L (see [31, Theorem 3.13]), which implies that ∇f is monotone and Lipschitz continuous. Moreover, ∂g is maximal monotone (see [35, Theorem A]). From this point of view, we set A=∇f and B=∂g, and obtain the following results regarding the problem (4.9).
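As an illustration (our own toy instance, not one of the paper's experiments), the pieces A=∇f and B=∂g plug directly into the Tseng iteration sketched in Section 1; here f(x)=(1/2)‖x−b‖2, so ∇f(x)=x−b and L=1, and g=‖·‖1, whose resolvent in a Hilbert space is soft-thresholding.

    import numpy as np

    b = np.array([3.0, -0.2, 0.05])

    def grad_f(x):
        return x - b                                  # gradient of 0.5*||x - b||^2

    def prox_g(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # resolvent of ∂||.||_1

    x, lam = np.zeros(3), 0.9                         # step size in (0, 1/L) = (0, 1)
    for _ in range(200):
        y = prox_g(x - lam * grad_f(x), lam)
        x = y - lam * (grad_f(y) - grad_f(x))         # Tseng's correction step
    # x approaches the minimizer sign(b)*max(|b| - 1, 0) = (2, 0, 0)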
Assumption 4.5. (A1) The space E is a real 2-uniformly convex and uniformly smooth Banach space.
(A2) The function f:E→R is convex and differentiable with L-Lipschitz continuous gradient ∇f, and g:E→R is convex and lower semicontinuous such that f+g attains a minimizer.
Theorem 4.6. Assume that Assumption 4.5 holds. Suppose, in addition, that J is weakly sequentially continuous on E. Then the sequence {xn} generated by Algorithm 5 converges weakly to a minimizer of f+g.
Theorem 4.7. Assume that Assumption 4.5 holds. If {αn}⊂(0,1) with limn→∞αn=0 and ∑∞n=1αn=∞, then the sequence {xn} generated by Algorithm 6 converges strongly to a minimizer of f+g.
5.
Numerical experiments
In this section, we provide some numerical experiments to illustrate the behaviour of our methods and compare them with some existing methods.
Example 5.1. We consider the HpHard problem which is taken from [22]. Let A:Rm→Rm be an operator defined by Ax=Mx+q with q∈Rm and

M = NNT + S + D,

where N is an m×m matrix, S is an m×m skew-symmetric matrix and D is an m×m positive definite diagonal matrix. The feasible set is C=Rm+. It is clear that A is monotone and Lipschitz continuous with L=‖M‖. In these experiments, we compare our Algorithm 3 and Algorithm 4 with the extragradient method (EGM) proposed in [28] and the subgradient extragradient method (SEGM) proposed in [8]. The parameters are chosen as follows:
∙ Algorithm 3: λ1=0.4/‖M‖ and μ=0.9;
∙ Algorithm 4: λ1=0.4/‖M‖, μ=0.9, αn=1/(10000(n+2)) and u=x1;
∙ EGM and SEGM: λ=0.4/‖M‖.
All entries of N and S are generated randomly in (−5,5), those of D in (0,0.3), and those of q uniformly from (−500,0). For every m, we have generated two random samples with different choices of M and q. We perform the numerical experiments with three different cases of m (m=100,500,1000). We take the starting point x1=(1,1,1,…,1)T∈Rm and use the stopping criterion ‖xn−yn‖≤ε=10−6. The numerical results are reported in Table 1.
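For reproducibility, a sketch of how such a test instance can be generated and attacked with Tseng-type steps at a fixed step size (our illustration; the compared methods and random samples are as described above):

    import numpy as np

    m, rng = 100, np.random.default_rng(0)
    N = rng.uniform(-5, 5, (m, m))
    S = rng.uniform(-5, 5, (m, m)); S = S - S.T       # one way to draw a skew-symmetric S
    D = np.diag(rng.uniform(0, 0.3, m))               # positive definite diagonal
    M = N @ N.T + S + D
    q = rng.uniform(-500, 0, m)

    A = lambda x: M @ x + q                           # monotone, Lipschitz with L = ||M||
    proj_C = lambda x: np.maximum(x, 0.0)             # projection onto C = R^m_+
    x, lam = np.ones(m), 0.4 / np.linalg.norm(M, 2)

    for _ in range(100000):                           # Tseng iteration with fixed step
        y = proj_C(x - lam * A(x))
        if np.linalg.norm(x - y) <= 1e-6:
            break
        x = y - lam * (A(y) - A(x))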
Example 5.2. We consider the problem (4.1) in L2([0,2π]) with the inner product ⟨x,y⟩=∫_0^{2π} x(t)y(t)dt and the norm ‖x‖=(∫_0^{2π} x(t)^2 dt)^{1/2} for all x,y∈L2([0,2π]). Let A:L2([0,2π])→L2([0,2π]) be an operator defined by
for all x∈L2([0,2π]) and t∈[0,2π]. It can be easily verified that A is monotone and Lipschitz continuous with L=1 (see [50,51]). The feasible set is C={x∈L2([0,2π]) : ∫_0^{2π} (t^2+1)x(t)dt ≤ 1}. Observe that 0∈VI(C,A) and so VI(C,A)≠∅. In this numerical experiment, we take all parameters αn, λn and μ the same as in Example 5.1. We perform numerical experiments with three different cases of starting point x1 and use the stopping criterion ‖xn−yn‖≤ε=10−3. The numerical results are reported in Table 2.
Example 5.3. Consider the minimization problem:

min_{x∈R3} 2‖x‖22 + (−1,2,5)x + 1 + ‖x‖1,

where x=(w1,w2,w3)T∈R3. Let f(x)=2‖x‖22+(−1,2,5)x+1 and g(x)=‖x‖1. Thus we have ∇f(x)=4x+(−1,2,5)T. It is easy to check that f is a convex and differentiable function and its gradient ∇f is Lipschitz continuous with L=4. Moreover, g is a convex and lower semicontinuous function but not differentiable on R3. From [21], we know that

(I+λ∂g)−1(x) = (sign(w1)max{|w1|−λ,0}, sign(w2)max{|w2|−λ,0}, sign(w3)max{|w3|−λ,0})T

for λ>0. In these experiments, we compare our Algorithm 5 and Algorithm 6 with Algorithm (1.4) of Cholamjiak [12]. The parameters are chosen as follows:
∙ Algorithm 5: λ1=0.1 and μ=0.9;
∙ Algorithm 6: λ1=0.1, μ=0.9, αn=1/(10000(n+1)) and u=x1;
∙ Algorithm (1.4): all parameters αn, λn, δn, rn and en are the same as in Example 4.2 of [12], and u=x1.
We perform the numerical experiments with four different cases of starting point x1 and use stopping criterion ‖xn+1−xn‖≤ε=10−12. The numerical results are reported in Table 3.
Example 5.4. In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:

y = Dx + ε, (5.1)

where x∈RN is a vector with m nonzero components to be recovered, y∈RM is the observed or measured data corrupted by noise ε, and D:RN→RM (M<N) is a bounded linear operator. It is known that solving (5.1) can be recast as the LASSO problem:

min_{x∈RN} (1/2)‖Dx−y‖22 + λ‖x‖1,

where λ>0. Following [19], we define Ax:=∇((1/2)‖Dx−y‖22)=DT(Dx−y) and Bx:=∂(λ‖x‖1). It is known that A is ‖D‖2-Lipschitz continuous and monotone. Moreover, B is maximal monotone (see [35]).
In this experiment, the sparse vector x∈RN is generated from the uniform distribution on the interval [−2,2] with m nonzero elements. The matrix D∈RM×N is generated from a normal distribution with mean zero and variance one. The observation y is corrupted by white Gaussian noise with signal-to-noise ratio SNR=40. The restoration accuracy is measured by the mean squared error (MSE):

MSE = (1/N)‖xn − x‖22,
where xn is an estimated signal of x.
We compare our proposed Algorithm 1 and Algorithm 2 with the forward-backward splitting algorithm (FBSA) (1.2), Tseng's splitting algorithm (TSA) (1.5) and the contraction forward-backward splitting algorithm (CFBSA) proposed in [43, Algorithm 3.1]. The parameters are chosen as follows:
∙ Algorithm 1: θn=0, λ1=0.0013 and μ=0.5;
∙ Algorithm 2: θn=0, λ1=0.0013, μ=0.5, αn=1/(200n+5) and u=(1,1,…,1)T;
∙ CFBSA: αn=1/(200n+5), μ=0.5, δ=0.5, l=0.5, γ=0.45 and f(x)=x/5;
∙ TSA: λn=0.2/‖D‖2;
∙ FBSA: λ=2×10−5.
The starting points x1 of all methods are randomly chosen in RN. We perform the numerical test with the following three cases:
Case 1: N=512, M=256 and m=20;
Case 2: N=1024, M=512 and m=30;
Case 3: N=2048, M=1024 and m=60.
The numerical results for all tests are reported in Figures 1–6.
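For reference, a sketch (ours) of the Case 1 setup solved with Tseng-type steps and the soft-thresholding resolvent; the regularization weight and the noiseless observation are simplifying assumptions, not the paper's exact configuration.

    import numpy as np

    rng = np.random.default_rng(1)
    N_dim, M, m = 512, 256, 20                        # sizes of Case 1
    x_true = np.zeros(N_dim)
    x_true[rng.choice(N_dim, m, replace=False)] = rng.uniform(-2, 2, m)
    D = rng.standard_normal((M, N_dim))
    y = D @ x_true                                    # noiseless sketch; the text adds noise
    reg = 0.01                                        # regularization weight (assumed)

    A = lambda x: D.T @ (D @ x - y)                   # gradient of 0.5*||Dx - y||_2^2
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    x, lam = rng.standard_normal(N_dim), 0.2 / np.linalg.norm(D, 2) ** 2
    for _ in range(1000):
        yv = soft(x - lam * A(x), lam * reg)          # resolvent of B at the forward point
        x = yv - lam * (A(yv) - A(x))
    print(np.mean((x - x_true) ** 2))                 # MSE against the true signal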
6.
Conclusions
In this paper, we have proposed two Tseng-type splitting algorithms with non-monotone adaptive step sizes for finding zeros of the sum of two monotone operators in the framework of Banach spaces. Under suitable conditions, we proved weak and strong convergence of the algorithms without knowledge of the Lipschitz constant of the mapping. Some applications of the obtained results were presented. Finally, several numerical experiments were performed to illustrate the convergence of our algorithms in comparison with several known methods.
Acknowledgments
P. Cholamjiak was supported by Thailand Science Research and Innovation under the project IRN62W0007 and P. Sunthrayuth was supported by RMUTT Research Grant for New Scholar under Grant NSF62D0602.
Conflict of interest
The authors declare no conflict of interest.