1. Introduction
Let the triplet (H, ⟨⋅,⋅⟩, ‖⋅‖) denote a real Hilbert space with inner product ⟨⋅,⋅⟩ and induced norm ‖⋅‖. The classical monotone inclusion problem aims to find ŝ∈H such that

0 ∈ (A + B)ŝ, (1.1)
where A⊆H×H is a multi-valued operator and B is a single-valued operator on H. In the context of monotone operator theory, (1.1) has been widely studied as a model for various problems in signal processing, subgradient algorithms, image recovery, variational inequalities and evolution equations; see [17,19,37] and the references cited therein.
In order to study problem (1.1), one can employ effective iterative algorithms. The elegant forward-backward (FB) iterative algorithm [34,35] is prominent among them. However, the FB iterative algorithm exhibits only weak convergence, and only under rather strong assumptions on the operators A and B [44]. Recently, Gibali and Thong [23] considered a modified variant of Tseng's splitting method to obtain strong convergence results in Hilbert spaces.
The fixed point problem (FPP) is another important framework for studying a variety of problems arising in various branches of science [17,24,25]. In 2017, Takahashi [38] proposed and analyzed a new unifying class of nonlinear operators, namely the class of η-demimetric operators in Hilbert spaces, as follows:
Let C be a nonempty subset of a real Hilbert space H. An operator W:C→C is said to be η-demimetric [38], where η∈(−∞,1), if Fix(W)≠∅ and

⟨s − t, (Id − W)s⟩ ≥ ((1−η)/2)‖(Id − W)s‖², for all s∈C and t∈Fix(W),

where Id indicates the identity operator and Fix(W) = {t∈C : t = Wt} denotes the set of all fixed points of the operator W. Note that

‖Ws − t‖² ≤ ‖s − t‖² + η‖s − Ws‖², for all s∈C and t∈Fix(W),
is an equivalent representation of an η-demimetric operator. The class of η-demimetric operators has been studied extensively in various instances of the FPP in Hilbert spaces; see [39,40,42]. On the other hand, Baillon [13] established the following nonlinear ergodic theorem for nonexpansive operators:
Theorem 1.1. [13] Let C be a nonempty, closed and convex subset of a real Hilbert space H and let W:C→C be a nonexpansive operator such that Fix(W)≠∅. Then, for all s∈C, the Cesàro means

Wₙs = (1/(n+1)) ∑_{i=0}^{n} Wⁱs

converge weakly to a fixed point of W.
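To make Theorem 1.1 concrete, here is a small self-contained Python sketch (our own illustration, not from the paper): for a plane rotation W, which is nonexpansive with Fix(W) = {(0,0)}, the iterates Wⁱs merely circle the origin, while their Cesàro means converge to the fixed point.

```python
import math

def W(p, theta=1.0):
    """A rotation of the plane by angle theta: nonexpansive, Fix(W) = {(0, 0)}."""
    x, y = p
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

def cesaro_mean(s, n):
    """Cesaro mean W_n s = (1/(n+1)) * sum_{i=0}^{n} W^i s."""
    total_x = total_y = 0.0
    p = s
    for _ in range(n + 1):
        total_x += p[0]
        total_y += p[1]
        p = W(p)
    return (total_x / (n + 1), total_y / (n + 1))

# The iterates W^i s do not converge, but their Cesaro means approach the
# fixed point (0, 0), as Theorem 1.1 predicts (here even strongly, since
# the underlying space is finite-dimensional).
mean = cesaro_mean((1.0, 0.0), 20000)
```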
Since then, the classical Cesàro means method has been considered for various classes of nonlinear operators; see [18,28,29] and the references cited therein. Note that the Cesàro means method fails to converge strongly for the class of nonexpansive operators [22]. In order to establish strong convergence results, one has to impose additional conditions on the algorithm. In 1967, Halpern [27] introduced and analyzed an iterative algorithm which converges strongly to the nearest fixed point of a nonexpansive operator. It is remarked that the Halpern iterative algorithm coincides with the Cesàro means method for linear operators. In 2000, Moudafi [33] proposed and analyzed the strongly convergent viscosity iterative algorithm, obtained by utilizing a strict contraction instead of the anchor point in the Halpern iterative algorithm. In order to generalize the classical Cesàro means method to an infinite family of η-demimetric operators, we first consider the following auxiliary operator:
where 0 ≤ βₘ ≤ 1 and T′ₘs = γs + (1−γ)Tₘs for all s∈C, with Tₘ an η-demimetric operator and 0 < γ < 1−η. It is well known, in the context of the operator Wₙ, that each T′ₘ is nonexpansive and that the limit limₙ→∞ Q_{n,m} exists. Moreover,
It follows from [41] that
Moreover, to enhance the speed of convergence of the proposed iterative algorithm, we also utilize the inertial extrapolation technique essentially due to Polyak [36], see also [1,2,3,4,5,6,7,8,9,10,11,31].
The rest of the paper is organized as follows: We present relevant preliminary concepts and results in Section 2. We show the convergence analysis of the proposed iterative algorithm in Section 3 and present a numerical experiment demonstrating the viability of the algorithm in Section 4. Section 5 includes applications, among them an image deblurring experiment.
2. Preliminaries
We start this section with the mathematical preliminary concepts required in the sequel. Throughout the paper, (H, ⟨⋅,⋅⟩, ‖⋅‖) denotes a real Hilbert space with inner product ⟨⋅,⋅⟩ and induced norm ‖⋅‖. For a nonempty closed convex subset C of the Hilbert space H, P_C denotes the associated metric projection operator, which is firmly nonexpansive and satisfies ⟨s − P_C s, P_C s − t⟩ ≥ 0 for all s∈H and t∈C. Recall that a set-valued operator A:H→2^H is said to be monotone if ⟨s−t, u−v⟩ ≥ 0 for all s,t∈H, u∈As and v∈At. Moreover, A is said to be maximal monotone if there is no proper monotone extension of A. For a monotone operator A, the associated resolvent operator J_m^A of index m>0 is defined as

J_m^A := (Id + mA)⁻¹,
where (⋅)⁻¹ indicates the inverse operator. Note that the resolvent operator J_m^A is well defined everywhere on the Hilbert space H. Further, J_m^A is single-valued and firmly nonexpansive. Furthermore, s∈A⁻¹(0) if and only if s∈Fix(J_m^A).
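As a concrete illustration of the resolvent (our own sketch, not from the paper), consider the scalar maximal monotone operator As = cs with c > 0. The resolvent has the closed form J_m^A s = s/(1 + mc), and one can check numerically both the firm nonexpansiveness inequality and the equivalence s∈A⁻¹(0) ⇔ s∈Fix(J_m^A):

```python
def resolvent(A_coeff, m, s):
    """Resolvent J_m^A = (Id + m A)^{-1} for the linear operator A s = A_coeff * s.
    Here (Id + m A) s = (1 + m * A_coeff) * s, so inverting is a division."""
    return s / (1.0 + m * A_coeff)

m = 0.5
c = 4.0          # A s = 4 s, maximal monotone on R
s_zero = 0.0     # the unique zero of A

# s in A^{-1}(0) iff s is a fixed point of J_m^A:
fp = resolvent(c, m, s_zero)

# Firm nonexpansiveness |Js1 - Js2|^2 <= <Js1 - Js2, s1 - s2>, sampled:
s1, s2 = 3.0, -2.0
lhs = (resolvent(c, m, s1) - resolvent(c, m, s2)) ** 2
rhs = (resolvent(c, m, s1) - resolvent(c, m, s2)) * (s1 - s2)
```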
The rest of this section is organized with the celebrated results required in the sequel. The following lemma is a special case of Lemma 2.4 in [30].
Lemma 2.1. Let μ, ν, ξ̄∈H and let α, β, γ∈ℝ with α + β + γ = 1. Then we have
(i) ‖μ + ν‖² ≤ ‖μ‖² + 2⟨ν, μ + ν⟩;
(ii) ‖αμ + βν + γξ̄‖² = α‖μ‖² + β‖ν‖² + γ‖ξ̄‖² − αβ‖μ − ν‖² − αγ‖μ − ξ̄‖² − βγ‖ν − ξ̄‖².
Lemma 2.2. [12] Let W:C→C be an operator defined on a nonempty closed convex subset C of a real Hilbert space H and let (pₙ) be a sequence in C. If pₙ ⇀ p and (Id − W)pₙ → 0, then p∈Fix(W).
Lemma 2.3. [38] Let C be a nonempty, closed and convex subset of a Hilbert space H and let W:C→H be an η-demimetric operator with η∈(−∞,1). Then Fix(W) is closed and convex.
Lemma 2.4. [42] Let C be a nonempty, closed and convex subset of a Hilbert space H and let W:C→H be an η-demimetric operator with η∈(−∞,1) and Fix(W)≠∅. Let γ be a real number with 0<γ<1−η and set T′=(1−γ)Id+γW, then T′ is a quasi-nonexpansive operator of C into H.
Lemma 2.5. [15] Let C be a nonempty bounded closed convex subset of a uniformly convex Banach space and let W:C→C be a nonexpansive operator. For each s∈C, consider the Cesàro means Wₙs = (1/(n+1)) ∑_{i=0}^{n} Wⁱs. Then lim_{n→∞} sup_{s∈C} ‖Wₙs − W(Wₙs)‖ = 0.
Lemma 2.6. [14] Let A⊆H×H be a maximal monotone operator and let B be a Lipschitz continuous and monotone operator on H. Then A+B is a maximal monotone operator.
Lemma 2.7. [23] Let A⊆H×H be a maximal monotone operator and let B be an operator on H. Define S_μ := (Id + μA)⁻¹(Id − μB), μ > 0. Then we have Fix(S_μ) = (A+B)⁻¹(0) for all μ > 0.
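Lemma 2.7 can be checked numerically for scalar linear operators (an illustration of ours, not from [23]): for As = 4s and Bs = 3s, the operator S_μ has the closed form S_μs = (1 − μ·3)s/(1 + μ·4), its unique fixed point is 0, and its iterates converge to the unique zero of A + B:

```python
def S(mu, s, a=4.0, b=3.0):
    """Forward-backward operator S_mu = (Id + mu*A)^{-1}(Id - mu*B)
    for the scalar linear operators A s = a*s and B s = b*s."""
    forward = s - mu * b * s            # (Id - mu*B) s
    return forward / (1.0 + mu * a)     # apply the resolvent of A

# For mu = 0.1 the map is a contraction; iterating from any start point
# converges to 0, the unique zero of (A + B) s = 7 s, which matches
# Fix(S_mu) = (A + B)^{-1}(0) from Lemma 2.7.
s = 5.0
for _ in range(200):
    s = S(0.1, s)
```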
Lemma 2.8. [46] Let (bₙ) be a sequence of nonnegative real numbers such that there exists n₀∈ℕ with

b_{n+1} ≤ (1 − ψₙ)bₙ + ψₙcₙ + dₙ, for all n ≥ n₀,

where (ψₙ)⊂(0,1) and the sequences (cₙ) and (dₙ) satisfy the following conditions:
(I) ∑_{n=1}^{∞} ψₙ = ∞;
(II) lim sup_{n→∞} cₙ ≤ 0;
(III) dₙ ≥ 0 for all n and ∑_{n=1}^{∞} dₙ < ∞.
Then limₙ→∞ bₙ = 0.
Lemma 2.9. [32] Let (qₙ) be a sequence of nonnegative real numbers. Suppose that there is a subsequence (q_{n_j}) of (qₙ) such that q_{n_j} < q_{n_j+1} for all j∈ℕ. Then there exists a nondecreasing sequence (εₖ) in ℕ such that limₖ→∞ εₖ = ∞ and, for all sufficiently large k∈ℕ,

q_{εₖ} ≤ q_{εₖ+1} and qₖ ≤ q_{εₖ+1}.

In fact, εₖ is the largest number n in the set {1,2,⋯,k} such that qₙ < q_{n+1}.
3. Main results
In this section, we prove the following strong convergence result.
Theorem 3.1. Let A⊆H×H be a maximal monotone operator and let B be a monotone and ρ-Lipschitz operator, for some ρ>0, on a real Hilbert space H. Let Wₙ be the W-operator and let h be a λ-contraction on H with λ∈[0,1). Assume that Γ = (A+B)⁻¹(0) ∩ Fix(W) ≠ ∅, μ₁ > 0, σ∈(0,1), {ξ̄ₙ}⊂[0,1) and {αₙ}, {βₙ} are sequences in (0,1). For given p₀, p₁∈H, let the iterative sequence {pₙ} be generated by
Assume that the following step size rule

μ_{n+1} = min{ σ‖uₙ − vₙ‖/‖Buₙ − Bvₙ‖, μₙ } if Buₙ ≠ Bvₙ, and μ_{n+1} = μₙ otherwise,

and conditions:
(C1) ∑_{n=1}^{∞} ξ̄ₙ‖pₙ − pₙ₋₁‖ < ∞;
(C2) limₙ→∞ αₙ = 0 and ∑_{n=1}^{∞} αₙ = ∞, and for each n∈ℕ, 0 < a* < lim infₙ→∞ βₙ ≤ lim supₙ→∞ βₙ < b* < 1−αₙ, where a*, b* are positive real numbers,
hold. Then the sequence {pn} generated by (3.1) converges strongly to an element in Γ.
The following results from [23] are crucial for the analysis of our main result.
Lemma 3.1. [23] The sequence (μₙ) generated by (3.1) is nonincreasing and bounded below by min{μ₁, σ/ρ}.
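Although the display of the step size rule is not reproduced above, Lemma 3.1 is the signature property of the standard self-adaptive Tseng rule of [23], which we sketch here under that assumption; the function name and the scalar setting are ours:

```python
def next_step_size(mu_n, sigma, u_n, v_n, Bu, Bv):
    """Self-adaptive Tseng step size: shrink mu only when the local
    Lipschitz estimate |Bu - Bv| / |u - v| demands it, so that (mu_n)
    is nonincreasing and bounded below by min(mu_1, sigma / rho)."""
    denom = abs(Bu - Bv)
    if denom > 0.0:
        return min(sigma * abs(u_n - v_n) / denom, mu_n)
    return mu_n

# With B s = 3 s (so rho = 3) the local estimate is exactly 3, hence the
# step size settles at min(mu_1, sigma / 3) and never drops below it.
mu, sigma = 7.45, 0.785
B = lambda s: 3.0 * s
for n in range(50):
    u, v = 1.0 / (n + 1), -1.0 / (n + 2)
    mu = next_step_size(mu, sigma, u, v, B(u), B(v))
```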
Lemma 3.2. [23] Assume that conditions (C1) and (C2) hold and let (sₙ) be any sequence generated by (3.1). Then we have
and
Lemma 3.3. Assume that Conditions (C1) and (C2) hold and suppose that
Let (pₙ) and (uₙ) be two sequences generated by (3.1). If a subsequence (p_{n_t}) of (pₙ) converges weakly to some p*∈H, then p*∈Γ.
Proof. Let p*∈H be such that p_{n_t} ⇀ p*. Then p*∈(A+B)⁻¹(0) follows from [23, Lemma 7]. Since limₙ→∞ ‖pₙ − sₙ‖ = 0 and p_{n_t} ⇀ p*, we have s_{n_t} ⇀ p*. Since
therefore, utilizing Lemma 2.2, we get p∗∈Fix(Wi) and hence p∗∈Γ. □
Now we are able to prove the main result of this section.
Proof of Theorem 3.1. For simplicity, the proof is divided into the following three steps:
Step 1. Show that the sequence (pn) is bounded.
Let p̄∈Γ; then for each n∈ℕ we have
Set Wₙ = (1/(N+1)) ∑_{i=0}^{N} Wᵢ and utilizing Lemma 2.4 we have
It follows from the above estimate that Wₙ is a nonexpansive operator. Moreover, for any p̄∈Γ, we have Wₙp̄ = (1/n) ∑_{i=0}^{n−1} Wᵢp̄ = p̄. Since limₙ→∞ (1 − σ²μₙ²/μ²_{n+1}) = 1 − σ² > 0, for each n ≥ n₀, where n₀∈ℕ, we have that
From (3.2) and (3.5), we obtain
Further, from (C2) and (3.6), we have
Thus, for all n ≥ n₀, ‖p_{n+1} − p̄‖ ≤ max{‖p_{n₀} − p̄‖, ‖h(p̄) − p̄‖/(1−λ)}. This implies that (pₙ) is bounded.
Step 2. Compute the following two estimates:
Utilizing Lemma 2.1(ⅱ), we obtain
Now utilizing (3.2) in the above estimate, we get
Simplifying the above estimate, we have the desired estimate (3.7).
Next, by using (C2) and setting jₙ = (1−βₙ)pₙ + βₙWₙsₙ, we get
and
Utilizing (3.9), (3.10), Lemma 2.1(ⅰ) and (ⅱ), the desired estimate (3.8) follows from the following calculation:
Step 3. Show that limₙ→∞ ‖pₙ − p̄‖ = 0.
We consider the two possible cases for the sequence (‖pₙ − p̄‖).
Case A. For all n ≥ n₀, with n₀∈ℕ, ‖p_{n+1} − p̄‖² ≤ ‖pₙ − p̄‖². This implies that limₙ→∞ ‖pₙ − p̄‖ exists. Since limₙ→∞ (1 − σ²μₙ²/μ²_{n+1}) = 1 − σ² > 0, by using (C2) and (3.7) we have
From (3.3), we get
By the definition of (un) and (C1), we have
By using the triangle inequality, we obtain the following estimates:
By using Lemma 2.5, we have
Note that for all n∈N, we get
From (3.11) and (C2), the estimate (3.15) implies that
Similarly, from (3.13), (3.16) and the following triangle inequality, we have
Since (pn) is bounded, then there exists a subsequence (pnt) of (pn) with pnt⇀p∗∈H. Now utilizing Lemma 3.3 we have p∗∈Γ.
By making use of the estimate (3.16), we get
From the estimate (3.17) and Lemma 2.8, we get limₙ→∞ ‖pₙ − p̄‖ = 0.
Case B. There exists a subsequence (‖p_{n_k} − p̄‖²) of (‖pₙ − p̄‖²) such that ‖p_{n_k} − p̄‖ < ‖p_{n_k+1} − p̄‖ for all k∈ℕ.
It follows from Lemma 2.9 that there exists a nondecreasing sequence (bₘ) in ℕ such that limₘ→∞ bₘ = ∞ and the inequality ‖p_{bₘ} − p̄‖² ≤ ‖p_{bₘ+1} − p̄‖² holds for all m∈ℕ. In a similar fashion, from (3.7) we obtain
Since limn→∞αn=0, so we get
Similarly from Case A, we have
Using (3.8) for n≥max{n∗,n0}, we have the following estimate:
The above estimate yields that
Therefore, lim sup_{m→∞} ‖p_{bₘ} − p̄‖² ≤ 0, and hence pₙ → p̄∈Γ. This completes the proof. □
We now propose a variant of the iterative algorithm (3.1) embedded with the Halpern iterative algorithm [27].
Theorem 3.2. Let A⊆H×H be a maximal monotone operator and let B be a monotone and ρ-Lipschitz operator, for some ρ>0, on a real Hilbert space H. Let Wₙ be the W-operator and let h be a λ-contraction on H with λ∈[0,1). Assume that Γ = (A+B)⁻¹(0) ∩ Fix(W) ≠ ∅, μ₁ > 0, σ∈(0,1), {ξ̄ₙ}⊂[0,1) and {αₙ}, {βₙ} are sequences in (0,1). For given q, p₀, p₁∈H, let the iterative sequences {pₙ}, {uₙ}, {vₙ} and {wₙ} be generated by
Assume that the following step size rule
and conditions
(C1) ∑_{n=1}^{∞} ξ̄ₙ‖pₙ − pₙ₋₁‖ < ∞;
(C2) limₙ→∞ αₙβₙ = 0, 1−αₙ−βₙ = 0 and ∑_{n=1}^{∞} αₙβₙ = ∞;
(C3) for each n∈ℕ, 0 < a* < lim infₙ→∞ βₙ ≤ lim supₙ→∞ βₙ < b* < 1−αₙ, where a*, b* are positive real numbers, hold.
Then the sequence {pn} generated by (3.19) converges strongly to a point in Γ.
Remark 3.1. In order to obtain the desired result for the iteration (3.19), we have to assume a stopping criterion for (3.19) of the form n > n_max for some sufficiently large number n_max.
Proof. Observe that for each n≥1, arguing similarly as in the proof of Theorem 3.1 (Steps 1–3), we deduce that Γ is well defined, closed and bounded. Furthermore, the sequence (pn) is bounded and
Let p_{n+1} = αₙq + (1−αₙ−βₙ)pₙ + βₙWₙsₙ. An easy calculation along with (3.20), (C2) and (C3) implies that
The above estimate infers that
The rest of the proof of Theorem 3.2 is similar to the proof of Theorem 3.1 and is therefore omitted.□
The following remark elaborates on how to realize condition (C1) in a computer-assisted iterative algorithm.
Remark 3.2. We remark that condition (C1) can easily be enforced in a computer-assisted iterative algorithm, since the value of ‖pₙ − pₙ₋₁‖ is known before choosing ξ̄ₙ, so that one may take 0 ≤ ξ̄ₙ ≤ ξ̂ₙ with

ξ̂ₙ = min{ Θₙ/‖pₙ − pₙ₋₁‖, ξ̄ } if pₙ ≠ pₙ₋₁, and ξ̂ₙ = ξ̄ otherwise.

Here {Θₙ} denotes a sequence of positive numbers with ∑_{n=1}^{∞} Θₙ < ∞ and ξ̄∈[0,1).
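Remark 3.2 translates directly into code; the following is a minimal sketch (our notation) of the admissible inertial weight:

```python
def inertial_weight(theta_n, xi_bar, p_n, p_prev):
    """Choose xi_n <= xi_hat_n = min(theta_n / |p_n - p_prev|, xi_bar)
    (and xi_bar when p_n = p_prev).  Then
    xi_n * |p_n - p_prev| <= theta_n, so summability of (theta_n)
    makes condition (C1) hold automatically."""
    gap = abs(p_n - p_prev)
    if gap > 0.0:
        return min(theta_n / gap, xi_bar)
    return xi_bar

xi = inertial_weight(theta_n=0.5, xi_bar=0.9, p_n=2.0, p_prev=0.0)
# The weighted gap is then at most theta_n:
bound_ok = xi * 2.0 <= 0.5 + 1e-12
```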
4. Example and numerical results
In this section, we present a numerical experiment illustrating the viability of the iterative algorithm (3.1).
Example 4.1. Let H = ℝ with the inner product ⟨s,t⟩ = st for all s,t∈ℝ and induced norm |s| = √⟨s,s⟩. Let the operators h, A, B:ℝ→ℝ be defined as h(s) = s/8, As = 4s and Bs = 3s for all s∈ℝ. Observe that h is a contraction with constant λ = 1/8, B is a monotone and ρ-Lipschitz operator with ρ = 3, and A is a maximal monotone operator such that (A+B)⁻¹(0) = {0}. Let the sequence of operators Tᵢ:ℝ→ℝ be defined by
Note that (Tᵢ) is an infinite family of (3−i²)/(3+i)²-demimetric operators with ⋂_{i=1}^{∞} Fix(Tᵢ) = {0} = Fix(W). Hence Γ = (A+B)⁻¹(0) ∩ Fix(W) = {0}. In order to compute the numerical values of (pₙ), we choose Θ = 0.5, αₙ = 1/(n+1), βₙ = n/(2(n+1)), μ₁ = 7.45 and σ = 0.785.
We now provide a numerical comparison between the accelerated Tseng-type splitting method defined in Theorem 3.1 (i.e., Theorem 3.1 with ξ̄ₙ ≠ 0), the standard Tseng-type splitting method (i.e., Theorem 3.1 with ξ̄ₙ = 0), Algorithm 1 of [31] and Theorem 2 of [23]. The stopping criterion is defined as Eₙ = ‖vₙ − uₙ‖ < 10⁻⁵. Table 1 summarizes the comparison of these algorithms for the following choices of initial inputs:
Choice 1. p0=4, p1=4.5.
Choice 2. p0=5, p1=−3.
Choice 3. p0=−1.3, p1=−4.7.
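Since the display of (3.1) is not reproduced above, the following Python sketch of Example 4.1 rests on assumptions: it combines inertial extrapolation, Tseng's forward-backward-forward step with the self-adaptive step size, and a viscosity-type convex combination in the form suggested by Theorem 3.2, and it takes Wₙ = Id for brevity (so that Γ = {0} still holds). It is an illustration, not the paper's exact algorithm (3.1).

```python
# Data from Example 4.1; the family (T_i) and the exact form of (3.1)
# are not reproduced here, so W_n = Id is a simplification of ours.
def h(s): return s / 8.0                      # 1/8-contraction
def B(s): return 3.0 * s                      # monotone, 3-Lipschitz
def JA(s, mu): return s / (1.0 + 4.0 * mu)    # resolvent of A s = 4 s

sigma, mu = 0.785, 7.45
p_prev, p = 4.0, 4.5                          # Choice 1
for n in range(1, 301):
    alpha, beta = 1.0 / (n + 1), n / (2.0 * (n + 1))
    # assumed summable inertial rule in the spirit of Remark 3.2:
    xi = min(0.5, 1.0 / (n * n * (abs(p - p_prev) + 1e-12)))
    u = p + xi * (p - p_prev)                 # inertial extrapolation
    v = JA(u - mu * B(u), mu)                 # forward-backward step
    w = v + mu * (B(u) - B(v))                # Tseng correction
    if B(u) != B(v):                          # self-adaptive step size
        mu = min(sigma * abs(u - v) / abs(B(u) - B(v)), mu)
    p_prev, p = p, alpha * h(p) + (1.0 - alpha - beta) * p + beta * w
```

In this scalar setting the iterates approach 0, the unique point of Γ, consistent with the strong convergence asserted by Theorem 3.1.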
The error plots of Eₙ for ξ̄ₙ ≠ 0 and ξ̄ₙ = 0 for each choice in Table 1 are shown in Figure 1.
We can see from Table 1 and Figure 1 that Theorem 3.1 with ξ̄ₙ ≠ 0 performs better than Theorem 3.1 with ξ̄ₙ = 0, Algorithm 1 of [31] and Theorem 2 of [23].
5. Applications
In this section, we demonstrate some theoretical as well as applied instances of the main result in Section 3.
5.1. Split feasibility problem
The classical split feasibility problem (SFP), essentially due to Censor and Elfving [16], aims to find ŝ∈ω := C ∩ ℏ⁻¹(Q) = {t̄∈C : ℏt̄∈Q}, where C⊂H₁ and Q⊂H₂ are nonempty, closed and convex subsets of H₁ and H₂, respectively, and ℏ:H₁→H₂ is a bounded linear operator. In order to derive a result for the SFP from Theorem 3.1, we recall the indicator function of a nonempty, closed and convex subset C of H₁:

Φ_C(s) = 0 if s∈C, and Φ_C(s) = +∞ otherwise.
It is well known that the subdifferential ∂Φ_C associated with Φ_C is a maximal monotone operator. Recall also that ∂Φ_C(μ) = N(μ,C), where N(μ,C) is the normal cone of C at μ. Utilizing this fact, we conclude that the resolvent operator of ∂Φ_C is the metric projection operator of H₁ onto C. Setting B(x̄) = ℏ*(Id − P_Q)ℏx̄, where P_Q is the metric projection onto Q, and A(x̄) = ∂Φ_C(x̄), the SFP has the inclusion structure defined in (1.1). Since B is ρ-Lipschitz continuous with ρ = ‖ℏ‖² and A is maximal monotone (see [12]), we arrive at the following variant of Theorem 3.1:
Theorem 5.1. Assume that Γ=ω∩Fix(W)≠∅. For given p0,p1∈H1, let the iterative sequence (pn) be generated by
Assume that the following step size rule:
and conditions (C1) and (C2) hold. Then the sequence (pn) generated by (5.1) converges strongly to an element in Γ.
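The operator B(x̄) = ℏ*(Id − P_Q)ℏx̄ behind Theorem 5.1 can be illustrated in a scalar setting (our choices: intervals for C and Q, a scalar map for ℏ); even a plain projected forward-backward iteration already locates a point of ω = C ∩ ℏ⁻¹(Q):

```python
def clamp(t, lo, hi):
    """Metric projection of a real number onto the interval [lo, hi]."""
    return min(max(t, lo), hi)

def B(x, hbar=2.0, Q=(1.0, 3.0)):
    """B x = hbar^T (Id - P_Q) hbar x: zero exactly when hbar*x lies in Q."""
    y = hbar * x
    return hbar * (y - clamp(y, *Q))

# Projected forward-backward iteration x <- P_C(x - mu * B x) on C = [0, 2],
# with step size mu below 2 / ||hbar||^2 = 0.5.
x, mu, C = 2.0, 0.2, (0.0, 2.0)
for _ in range(200):
    x = clamp(x - mu * B(x), *C)
```

Here ω = [0.5, 1.5]; started at x = 2, the iterates decrease to the feasible point 1.5, where B vanishes.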
5.2. Convex minimization problems
Let f:H→ℝ∪{+∞} and g:H→ℝ∪{+∞} be two convex, proper and lower semicontinuous functions such that f is differentiable with ρ-Lipschitz continuous gradient and the proximal map of g can be evaluated. We consider the following convex minimization problem of finding x̄∈H such that

f(x̄) + g(x̄) = min_{s∈H} { f(s) + g(s) }. (5.2)
In view of Fermat's rule, problem (5.2) is equivalent to the following problem of finding x̄∈H such that

0 ∈ ∇f(x̄) + ∂g(x̄), (5.3)

where the subdifferential ∂g is a maximal monotone operator and the gradient ∇f is ρ-Lipschitz continuous [12,37]. Assume that ω, the set of solutions of problem (5.3), is nonempty. In Theorem 3.1, set B := ∇f and A := ∂g. Then we obtain the following result.
Theorem 5.2. Let f:H→ℝ∪{+∞} and g:H→ℝ∪{+∞} be two proper, convex and lower semicontinuous functions on a real Hilbert space H. Assume that Γ = ω ∩ ⋂_{i=1}^{∞} Fix(Wᵢ) ≠ ∅ and (ξ̄ₙ) is a bounded real sequence. For given p₀, p₁∈H, let the iterative sequence (pₙ) be generated by
Assume that the following step size rule
and the conditions (C1) and (C2) hold. Then the sequence (pn) generated by (5.4) converges strongly to an element in Γ.
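The forward-backward structure behind Theorem 5.2, with B = ∇f and A = ∂g, can be sketched for the scalar lasso-type objective ½(ax − b)² + k|x| (an example of ours); the resolvent of ∂(k|⋅|) is the closed-form soft-thresholding map:

```python
def soft_threshold(t, tau):
    """Proximal map of tau * |.|: the resolvent of the subdifferential
    of the l1 term, available in closed form."""
    if t > tau:
        return t - tau
    if t < -tau:
        return t + tau
    return 0.0

# Forward-backward for min_x 0.5*(a*x - b)^2 + k*|x|:
# forward (explicit) step on grad f, backward (proximal) step on g.
a, b, k = 2.0, 4.0, 0.5
x, mu = 0.0, 0.2                  # mu < 2 / a^2, the Lipschitz bound
for _ in range(500):
    grad = a * (a * x - b)
    x = soft_threshold(x - mu * grad, mu * k)
```

For these data the minimizer is x = 1.875, which the iteration reaches at a geometric rate.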
5.3. Application to image processing problems
Let h∈ℝ^{m×n} be a blurring operator, z∈ℝⁿ the original image and w∈ℝᵐ the blurred and noisy (observed) image, with v the additive noise in ℝᵐ. The following structure is known as an image recovery problem:

w = hz + v.
For solving this problem, we make use of the model of Tibshirani [43], which is known as the LASSO problem:

min_{z∈ℝⁿ} (1/2)‖hz − w‖₂² + k‖z‖₁, (5.5)
where k>0 is a regularization parameter. Problem (5.5) cannot be used to solve the image deblurring problem directly, as the image is sparse only under some gradient transformation. In order to reconstruct images from their noisy, blurry and/or incomplete measurements, Guo et al. [26] proposed a novel regularization model for reproducing high-quality images using fewer measurements than state-of-the-art methods. We therefore use the following model:
The Richardson iteration, often called the Landweber method [20,21,45], is generally used as an iterative regularization method to solve (5.6). This method is defined as follows:

z_{n+1} = zₙ + ρhᵀ(w − hzₙ),

where the step size ρ is constant. To ensure convergence, the step size must satisfy 0 < ρ < 2/ε²_max, where ε_max is the largest singular value of h. We set k = 0.7875, μ = 0.001, ξ̄ₙ = 1/(100n+1)², αₙ = 1/(2n) and βₙ = 1/(88n+1). The quality of the restored images is analyzed via the signal-to-noise ratio (SNR), defined as SNR = 20 log₁₀(‖z‖₂/‖z − zₙ‖₂), where z and zₙ are the original and estimated images at iteration n, respectively. We compare the performance of the algorithms Theorem 5.1 with ξ̄ₙ ≠ 0, Theorem 5.1 with ξ̄ₙ = 0, Algorithm 1 of [31] and Theorem 2 of Gibali and Thong [23] on the test images (Mona Lisa and Cameraman) via an image restoration experiment with a motion blur operator.
It can be observed from Figures 3 and 5 that larger SNR values indicate better restored images. We can see from Table 2, and the corresponding test images in Figures 2 and 4, that the inertial variant of the iterative algorithm in Theorem 5.1 (i.e., ξ̄ₙ ≠ 0) performs better than the non-inertial variant (i.e., ξ̄ₙ = 0), Algorithm 1 of [31] and Theorem 2 of Gibali and Thong [23].
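The Landweber update described above can be sketched as follows (scalar data of our choosing; for a matrix h one would use the transpose hᵀ and the largest singular value of h):

```python
def landweber(h, w, rho, z0, iters):
    """Richardson/Landweber iteration z <- z + rho * h^T (w - h z);
    it converges for 0 < rho < 2 / eps_max^2, where eps_max is the
    largest singular value of h (here simply |h|)."""
    z = z0
    for _ in range(iters):
        z = z + rho * h * (w - h * z)
    return z

# eps_max = 2, so any rho in (0, 0.5) works; the iterates approach the
# least-squares solution z = w / h = 3 of the scalar model w = h z.
z = landweber(h=2.0, w=6.0, rho=0.3, z0=0.0, iters=100)
```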
6. Conclusions
In this paper, we have devised an accelerated visco-Cesàro means Tseng-type splitting method for computing a common solution of a monotone inclusion problem and the FPP associated with an infinite family of η-demimetric operators in Hilbert spaces. We have included an appropriate numerical example to illustrate the viability of the iterative algorithm. We have also included some theoretical as well as applied instances of the main result in Section 3 that can provide important future research directions in these theories.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgment
Funding was received from the Thailand Science Research and Innovation (TSRI) and the Fundamental Fund of Rajamangala University of Technology Rattanakosin under contract No. FRB6620/2566. Moreover, this research received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation [grant number B39G660025].
The authors would like to thank the Associate Editor and the anonymous referees for their valuable comments and suggestions. The author Yasir Arfat was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut's University of Technology Thonburi, Thailand (Grant No. 16/2562). The corresponding author Supak Phiangsungnoen acknowledges the financial support provided by the Institute of Research and Development, Rajamangala University of Technology Rattanakosin (Fundamental Fund Project).
Conflict of interest
The authors declare that they have no competing interests.