1. Introduction
Let H be a real Hilbert space with inner product ⟨⋅,⋅⟩ and induced norm ‖⋅‖=√⟨⋅,⋅⟩. Our aim is to address the monotone inclusion problem: find x∈H such that
0∈Ax+Bx,     (1.1)
where A:H→H is a single-valued mapping and B:H→2H is a multi-valued mapping. The set of all solutions of problem (1.1) is denoted by (A+B)−1(0). Many problems of interest can be cast in the form of the monotone inclusion problem (1.1), such as convex minimization problems, variational inequalities, equilibrium problems, and image processing problems. One of the most renowned algorithms for approximating a solution of problem (1.1) is the forward-backward algorithm (FB); see, e.g., [1,2,3]. This algorithm was initially introduced by Brezis and Lions [4] and generates a sequence (xk)k≥1 according to the recurrence relation:
xk+1=JBλ(xk−λAxk), k≥1,     (1.2)
where JBλ=(Id+λB)−1 represents the resolvent of the operator B, λ>0, and Id denotes the identity mapping. This method has been widely studied and applied in various fields such as optimization and signal processing. Later, Moudafi [5] introduced the viscosity approximation method, which combines the forward-backward splitting algorithm with contraction mappings, in order to obtain strong convergence. The viscosity approach guarantees strong convergence of the iterative sequences, which is particularly useful in applications such as fixed-point problems.
The forward-backward method includes both the proximal point algorithm and the gradient method as special cases, as evidenced in references such as [6,7,8,9,10,11,12].
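To make the forward-backward step concrete, the following Python sketch applies it to a small l1-regularized least-squares problem, where the resolvent of B=∂‖⋅‖1 reduces to componentwise soft-thresholding; the data, dimensions, and step size below are illustrative choices, not taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        # Resolvent of B = d||.||_1 with parameter t: componentwise soft-thresholding.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Illustrative data: A = gradient of (1/2)||Kx - b||^2, B = subdifferential of ||.||_1.
    rng = np.random.default_rng(0)
    K = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)

    lam = 1.0 / np.linalg.norm(K, 2) ** 2          # step size below 2/||K||^2
    x = np.zeros(5)
    for k in range(200):
        forward = x - lam * K.T @ (K @ x - b)      # forward (explicit) step: x - lam*A(x)
        x_new = soft_threshold(forward, lam)       # backward (implicit) step via the resolvent of B
        if np.linalg.norm(x_new - x) <= 1e-8:
            break
        x = x_new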
On the other hand, the heavy ball method (or inertial method), introduced by Polyak in 1964 [13], was an early example of incorporating inertia into optimization algorithms. Its discretized form has inspired various optimization algorithms that use momentum to accelerate convergence.
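As a brief illustration of the discretized heavy-ball idea, the sketch below augments a plain gradient step with the momentum term θ(xk−xk−1) on a toy quadratic; the objective and parameter values are illustrative.

    import numpy as np

    def heavy_ball(grad, x0, step=0.1, theta=0.5, iters=100):
        # Polyak's heavy-ball update: x_{k+1} = x_k - step*grad(x_k) + theta*(x_k - x_{k-1}).
        x_prev, x = x0.copy(), x0.copy()
        for _ in range(iters):
            x_next = x - step * grad(x) + theta * (x - x_prev)
            x_prev, x = x, x_next
        return x

    # Toy quadratic f(x) = 0.5*x^T Q x with gradient Q x.
    Q = np.diag([1.0, 10.0])
    x_star = heavy_ball(lambda x: Q @ x, np.array([5.0, 5.0]))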
In 2001, Alvarez and Attouch [14] introduced a new algorithm based on the inertial method outlined in [13]. This method is expressed as follows:
They established that the sequence (xk)k≥1 generated by algorithm (1.3) converges weakly to a zero point of the operator B under the conditions that (θk)k≥1⊆[0,1] and (λk)k≥1 is non-decreasing, with the constraint
∑k≥1θk‖xk−xk−1‖2<+∞.     (1.4)
See [15,16] for other types of conditions on θk, which no longer rely on the iterates.
Moudafi and Oliny [17] proposed an iterative method that incorporates the concept of the inertial method to address problem (1.1). They also proved weak convergence of the iterates under the following conditions:
(i) The condition (1.4) holds.
(ii)λk<2/L with L the Lipschitz constant of A.
Their algorithm is defined by
where A:H→H and B:H→2H.
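A single iteration of such an inertial forward-backward scheme can be sketched as follows; the extrapolation point and the evaluation of A at xk mirror the structure of Moudafi and Oliny's scheme (1.5) as described above, while the resolvent and the parameters are kept abstract. This is an illustrative template, not a verbatim transcription of their update.

    def inertial_fb_step(x_k, x_km1, A, resolvent_B, lam_k, theta_k):
        # Inertial extrapolation followed by a forward-backward step
        # (in the spirit of the Moudafi-Oliny scheme; details may differ).
        y_k = x_k + theta_k * (x_k - x_km1)              # inertial (extrapolation) step
        return resolvent_B(y_k - lam_k * A(x_k), lam_k)  # forward step on A, backward step via the resolvent of B

    # Example usage with A(x) = x and B = 0 (so the resolvent is the identity):
    x_next = inertial_fb_step(1.0, 0.8, lambda x: x, lambda v, lam: v, lam_k=0.5, theta_k=0.3)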
We now review several methods proposed in recent studies for tackling monotone inclusion problems, each with its own approach and contributions. Cholamjiak et al. [18] extended the inertial forward-backward splitting method to Banach spaces, with applications in compressed sensing that showcase the method's efficacy in real-world scenarios. Shehu et al. [19] introduced inertial terms into iterative methods for nonexpansive mappings, demonstrating the advantages of inertial techniques in improving convergence rates and handling compressed sensing problems. In 2020, Artsawang and Ungchittrakool [20] developed an inertial Mann-type algorithm for nonexpansive mappings aimed at solving monotone inclusion and image restoration problems, showcasing its applicability in diverse domains. Their approach utilizes the Mann-type algorithm, as demonstrated in references such as [21,22,23,24]. On the other hand, image restoration problems typically require the recovery of a high-quality image from a degraded or noisy version, and monotone inclusion methods provide powerful frameworks for addressing such problems, as can be seen in [25,26,27,28,29,30,31].
Kitkuan et al. [32] recently introduced a viscosity approximation algorithm using the inertial forward-backward approach to solve problem (1.1). Their algorithm is formulated as follows:
where A:H→H represents a μ-inverse strongly monotone operator with μ>0, B:H→2H is a maximal monotone operator, and f:H→H is a contraction with contraction constant c∈(0,1). They also proved the strong convergence of their proposed method under certain appropriate conditions imposed on the parameters.
In 2020, Kitkuan et al. [33] introduced a novel method that combines the Halpern-type method and the forward-backward splitting method to solve the monotone inclusion problem (1.1). Their method is described as:
where JBλk=(Id+λkB)−1 denotes the resolvent of B, and αk,βk,γk∈(0,1). Strong convergence results are obtained under certain appropriate conditions.
By drawing inspiration from the inertial viscosity forward-backward splitting algorithms pioneered by Kitkuan et al. [32,33], we propose the following algorithm:
where (θk)k≥1⊆[0,θ] with θ∈[0,1) and (αk)k≥1,(βk)k≥1 and (γk)k≥1 are sequences in [0,1].
Remark 1.1. If αk=1 in Algorithm 1, we recover the inertial viscosity forward-backward splitting algorithm (1.6).
If θk=0 and f(xk)=u in Algorithm 1, we recover the generalized Halpern-type forward-backward splitting method (1.7).
The structure of this work is as follows. In Section 2, we revisit and compile essential definitions and properties crucial to this study. In Section 3, we prove convergence results for our proposed method addressing problem (1.1). After that, Section 4 assesses the proposed method's performance through numerical experiments. Finally, we conclude this work by offering some closing remarks in Section 5.
2. Preliminaries
An operator T:H→H is nonexpansive if ‖Tx−Ty‖≤‖x−y‖ for all x,y∈H. We denote the set of all fixed points of the operator T as Fix(T):={x∈H:Tx=x}.
Let C be a nonempty closed convex subset of H. The metric projection of H onto C, denoted by projC:H→C, is defined by projC(x)=argminc∈C‖x−c‖ for all x∈H. It is known that
⟨x−projC(x),y−projC(x)⟩≤0
for all x∈H and y∈C.
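For example, the projection onto a box C=[lo,hi]^n is componentwise clipping, and the well-known variational characterization ⟨x−projC(x),y−projC(x)⟩≤0 for y∈C can be verified numerically; the set and the sample points below are illustrative.

    import numpy as np

    def proj_box(x, lo=-1.0, hi=1.0):
        # Metric projection onto the box C = [lo, hi]^n (componentwise clipping).
        return np.clip(x, lo, hi)

    rng = np.random.default_rng(1)
    x = 3.0 * rng.standard_normal(4)      # arbitrary point in H = R^4
    y = rng.uniform(-1.0, 1.0, size=4)    # arbitrary point in C
    p = proj_box(x)
    assert np.dot(x - p, y - p) <= 1e-12  # characterization of the metric projection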
A mapping K:H→H is called monotone if ⟨Kx−Ky,x−y⟩≥0 for all x,y∈H, and it is said to be β-inverse strongly monotone with β>0 if
⟨Kx−Ky,x−y⟩≥β‖Kx−Ky‖2
for all x,y∈H.
Let L:H→2H be a set-valued operator. We denote by gra(L):={(x,u)∈H×H:u∈Lx} the graph of L. The operator L is called monotone if
⟨x−y,u−v⟩≥0
for all (x,u),(y,v)∈gra(L). It is classified as maximal monotone if there exists no proper monotone extension of its graph.
The resolvent of L with parameter λ≥0 is the operator JLλ:H→2H defined by JLλ:=(Id+λL)−1, where Id is the identity operator on H. If L is maximally monotone, then JLλ is a single-valued operator.
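As a simple instance, when L is the linear monotone operator induced by a positive semidefinite matrix M, evaluating the resolvent amounts to solving the linear system (Id+λM)z=x; the matrix and the inputs below are illustrative.

    import numpy as np

    def resolvent_linear(M, x, lam):
        # Resolvent J_lam^L = (Id + lam*L)^{-1} for the linear monotone operator L(z) = M z,
        # obtained by solving (I + lam*M) z = x.
        n = M.shape[0]
        return np.linalg.solve(np.eye(n) + lam * M, x)

    M = np.array([[2.0, 0.0], [0.0, 0.5]])   # positive semidefinite, hence monotone
    z = resolvent_linear(M, np.array([1.0, -3.0]), lam=0.25)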
Next, we present several results in real Hilbert spaces that will prove to be valuable in our convergence analysis.
Lemma 2.1. [34] Let H be a real Hilbert space. Then, the following conditions are satisfied:
(i)‖x−y‖2=‖x‖2−‖y‖2−2⟨x−y,y⟩ for all x,y∈H.
(ii)‖x+y‖2≤‖x‖2+2⟨y,x+y⟩ for all x,y∈H.
(iii)‖τx+(1−τ)y‖2=τ‖x‖2+(1−τ)‖y‖2−τ(1−τ)‖x−y‖2 for all τ∈[0,1] and x,y∈H.
In order to show the convergence results, we also require the following tools.
Lemma 2.2. [35, Lemma 2.5] Let (Sk)k≥1 be a sequence of nonnegative real numbers satisfying the following inequalities
Sk+1≤(1−ρk)Sk+ρkσk, k≥1,
Sk+1≤Sk−ηk+πk, k≥1,
where (ρk)k≥1 forms a sequence within (0,1), (ηk)k≥1 constitutes a sequence of nonnegative real numbers, and both (σk)k≥1 and (πk)k≥1 are real sequences, satisfying the conditions:
(i)∑k≥1ρk=∞.
(ii)limk→∞πk=0.
(iii)limi→∞ηki=0 implies lim supi→∞σki≤0 for any subsequence (ηki)i≥1 of (ηk)k≥1.
Then the sequence (Sk)k≥1 converges to 0.
For convenience, the following notation will be used:
ΓA,Bλ:=JBλ(Id−λA), λ>0.
Lemma 2.3. [36] Let A be a μ-inverse strongly monotone operator from a real Hilbert space H into itself and B:H→2H a maximal monotone operator. Then, the following inequalities hold.
for all x,y∈Bλ:={z∈H:‖z‖≤λ}.
Lemma 2.4. [36] Let A be a μ-inverse strongly monotone operator from a real Hilbert space H into itself and B:H→2H a maximal monotone operator. Then, the following conditions hold.
(i) For λ>0, Fix(ΓA,Bλ)=(A+B)−1(0).
(ii) For 0<δ≤λ and x∈H, ‖x−ΓA,Bδx‖≤2‖x−ΓA,Bλx‖.
Theorem 2.5. [37] Let C be a nonempty closed convex subset of a real Hilbert space H, and let T:C→C be a nonexpansive mapping with Fix(T)≠∅. For every u∈C and t∈(0,1), the contraction C∋x↦tu+(1−t)Tx has a unique fixed point xt∈C, and xt converges strongly to a fixed point of T as t tends to zero.
3. Main results
In this section, we present the convergence analysis of our main results.
Theorem 3.1. Let A:H→H be a μ-inverse strongly monotone operator on a real Hilbert space H with μ>0, and let B:H→2H be a maximal monotone operator such that (A+B)−1(0)≠∅. Let f:H→H be a contraction mapping with constant c∈(0,1). Let (xk)k≥1 be generated by Algorithm 1. Assume that the following conditions hold:
(i)limk→∞γk=0 and ∑k≥1γk=+∞.
(ii)limk→∞θkγk‖xk−xk−1‖=0.
(iii)0<lim infk→+∞λk≤lim supk→+∞λk<2μ.
(iv)lim infk→+∞(1−αk)(1−βk)>0.
Then, the sequence (xk)k≥1 converges strongly to ¯x:=proj(A+B)−1(0)(f(¯x)).
Proof. Let Γk=JBλk(Id−λkA). By Lemma 2.3, Γk is a nonexpansive mapping for each k∈N. By Lemma 2.4, we obtain that (A+B)−1(0)=Fix(Γk).
We first show that (xk)k≥1 is bounded. Since f is a contraction mapping and proj(A+B)−1(0)(⋅) is nonexpansive, the composition proj(A+B)−1(0)(f(⋅)) is a contraction mapping. Hence, there exists a unique point ¯x∈(A+B)−1(0) such that ¯x=proj(A+B)−1(0)(f(¯x)). Thus ¯x∈Fix(Γk). It follows that
and
On the other hand, we consider
Combining (3.1)–(3.3), we obtain that
Since limk→∞θkγk‖xk−xk−1‖=0, there exists M>0 such that
From (3.4), we can obtain that
It follows that
Therefore, (xk)k≥1 is bounded, and hence (wk)k≥1, (zk)k≥1 and (yk)k≥1 are also bounded. Using the condition (2.1) in Lemma 2.1 and the definitions of (zk)k≥1 and (yk)k≥1, we get that
and
Now, we consider the terms ‖Γkwk−¯x‖2 and ‖Γkzk−¯x‖2. Using Lemma 2.3, we have
and
Substituting (3.8) into (3.6), we have
Substituting (3.9) into (3.7), we have
Combining (3.10) and (3.11), we deduce that
From (3.12), we obtain
Then (3.13) reduces to the following:
For each k∈N, we set
Sk=‖xk+1−¯x‖2,
ρk=γk(1−c2), πk=ρkσk,
σk=2(1−c2)⟨f(¯x)−¯x,xk+1−¯x⟩+2(1−γk)θkγk(1−c2)⟨xk−xk−1,wk−¯x⟩ and
As a result, inequality (3.14) reduces to the following:
By the condition (i), we get that ∑k≥1ρk=∞ and limk→∞πk=0. In order to complete the proof, by applying Lemma 2.2, it is sufficient to show that limi→∞ηki=0 implies lim supi→∞σki≤0 for any subsequence (ηki)i≥1 of (ηk)k≥1.
Let (ηki)i≥1 be a subsequence of (ηk)k≥1 such that limi→∞ηki=0. Therefore, by the assumptions of Lemma 2.2, we can conclude that
This implies that
From (3.1), we have
On the other hand, we get
From (3.15) and (3.16), we obtain that
Given that lim infk→+∞λk>0, we can find a positive real number λ>0 such that λk≥λ for all k∈N. In particular, this implies that λki≥λ for all i∈N. By Lemma 2.4 (ii), one has
From (3.20), we deduce that
Let
By Theorem 2.5, zt converges strongly to the unique fixed point ¯x=proj(A+B)−1(0)(f(¯x)) as t→0. Consequently, we can conclude that
The inequality (3.23) reduces to the following:
Combining (3.19) and (3.24), we get that
where M0=supi∈N,t∈(0,1)‖zt−wki‖. Letting i→+∞ in (3.25), we obtain that
Let us consider,
From (3.27), one has
Next, we claim that limi→+∞‖xki+1−xki‖=0. By Algorithm 1, we have the following estimates:
From (3.29), using the boundedness of (xk)k≥1, the conditions of Theorem 3.1, and (3.17) and (3.19), we obtain that
Combining (3.30) and (3.28), we infer that
Hence, lim supi→∞σki≤0. By Lemma 2.2, we conclude that limk→∞Sk=0, that is, xk→¯x as k→∞. This completes the proof. □
Remark 3.2. The condition (ii) in Theorem 3.1 is satisfied when we set θk such that 0≤θk≤¯θk, where
and (εk)k≥1 is a positive sequence such that limk→∞εkγk=0.
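Under the assumptions of Remark 3.2, one commonly used rule of this type caps θk by εk/‖xk−xk−1‖ whenever consecutive iterates differ; the sketch below implements this standard variant with the illustrative choice εk=1/k², while the exact formula displayed in Remark 3.2 remains the authoritative one.

    import numpy as np

    def inertial_parameter(k, x_k, x_km1, theta=0.9):
        # theta_k = min(theta, eps_k/||x_k - x_{k-1}||) if x_k != x_{k-1}, and theta otherwise,
        # with eps_k = 1/k^2 so that eps_k/gamma_k -> 0 when, e.g., gamma_k = 1/(100k+1).
        eps_k = 1.0 / k**2
        diff = np.linalg.norm(x_k - x_km1)
        return theta if diff == 0 else min(theta, eps_k / diff)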
4. Applications
In this section, we present practical applications of the proposed method, focusing on convex minimization problems and image restoration problems.
4.1. Convex minimization problems
Consider a convex and differentiable function h:H→R and a convex, lower-semicontinuous function g:H→R. We consider the following convex minimization problem: find ¯x∈H such that
h(¯x)+g(¯x)=minx∈H{h(x)+g(x)}.     (4.1)
By Fermat's rule, problem (4.1) can be rewritten as the following problem: find ¯x∈H such that
0∈∇h(¯x)+∂g(¯x),     (4.2)
where ∇h is the gradient of h and ∂g is the subdifferential of g.
Remark 4.1. [38] If a function K:H→H is (1/L)-Lipschitz continuous, then K is L-inverse strongly monotone.
Remark 4.2. [39] If a function P:H→R is convex and lower-semicontinuous, then ∂P is maximal monotone.
By applying Theorem 3.1 with A=∇h and B=∂g, we obtain the following result.
Theorem 4.3. Let H be a real Hilbert space. Let h:H→R be a convex differentiable function with a (1/L)-Lipschitz continuous gradient ∇h, and let g:H→R be a convex lower-semicontinuous function such that (∇h+∂g)−1(0)≠∅. Let f:H→H be a contraction mapping with constant c∈(0,1). Let (xk)k≥1 be generated by x0,x1∈H
Assume that the following conditions hold:
(i)limk→∞γk=0 and ∑k≥1γk=+∞.
(ii)limk→∞θkγk‖xk−xk−1‖=0.
(iii)0<lim infk→+∞λk≤lim supk→+∞λk<2L.
(iv)lim infk→+∞(1−αk)(1−βk)>0.
Then, the sequence (xk)k≥1 converges strongly to ¯x:=proj(∇h+∂g)−1(0)(f(¯x)).
Next, we present some comparisons among three algorithms: Our proposed algorithm, Kitkuan et al.'s algorithm (2019) (1.6), as presented in [32], and Tan's algorithm (2024), as described in [31, Algorithm 1.3].
Example 4.4. Let K∈Rl×s and b∈Rl with l>s. Let g:Rs→R be defined by g(x)=‖x‖1 for all x∈Rs, and let h:Rs→R be defined by h(x)=(1/2)‖Kx−b‖₂² for all x∈Rs. We seek a solution of the following minimization problem:
minx∈Rs{(1/2)‖Kx−b‖₂²+‖x‖1}.     (4.3)
With this setting, we obtain that ∇h(x)=KT(Kx−b) for each x=(x1,x2,...,xs)∈Rs, and ∇h is ‖K‖²-Lipschitz continuous, where KT denotes the transpose of K.
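The gradient formula and its Lipschitz constant can be checked numerically for a random instance: the squared spectral norm of K gives ‖K‖², and a central finite difference reproduces the directional derivative of h; all data below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    K = rng.standard_normal((30, 8))
    b = rng.standard_normal(30)

    h = lambda x: 0.5 * np.linalg.norm(K @ x - b) ** 2
    grad_h = lambda x: K.T @ (K @ x - b)

    lip = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of grad h is ||K||^2
    x, d, eps = rng.standard_normal(8), rng.standard_normal(8), 1e-6
    fd = (h(x + eps * d) - h(x - eps * d)) / (2 * eps)   # directional derivative by finite differences
    assert abs(fd - grad_h(x) @ d) < 1e-4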
To begin, we randomly select vectors x0,x1∈Rs, along with b∈Rl and the matrix K∈Rl×s. Subsequently, we set f(x)=x/6 for all x∈Rs and choose the parameters in this example as follows: αk=1/(100k+1), βk=1/(k+1), γk=1/(100k+1), λk=1/(‖K‖²+1) and
We compare our proposed algorithm with Kitkuan et al.'s algorithm (2019) (1.6), as presented in [32], and Tan's algorithm (2024), as described in [31, Algorithm 1.3]. For Tan's algorithm (2024), we choose the following parameter values: ζk=θk, δ=1.5, φ=1/20, and χk=λk. We evaluate all three algorithms and record the number of iterations (k) and the CPU time (in seconds) using the stopping criterion ‖xk−xk−1‖≤10⁻⁶.
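The experimental setup of Example 4.4 can be reproduced along the following lines. Since the displayed updates of the compared methods are not repeated here, the loop below uses a generic inertial viscosity forward-backward template (extrapolation, forward-backward step, convex combination with f) as a stand-in for the iteration, together with the stated data generation, step size, contraction f(x)=x/6, and stopping criterion; it is a sketch of the experiment, not the exact Algorithm 1.

    import time
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    l, s = 500, 100
    rng = np.random.default_rng(2024)
    K = rng.standard_normal((l, s))
    b = rng.standard_normal(l)
    lam = 1.0 / (np.linalg.norm(K, 2) ** 2 + 1.0)    # lambda_k = 1/(||K||^2 + 1)
    f = lambda x: x / 6.0                            # contraction f(x) = x/6

    x_prev, x = rng.standard_normal(s), rng.standard_normal(s)
    start, k = time.time(), 1
    while True:
        gamma = 1.0 / (100 * k + 1)
        theta = min(0.9, (1.0 / k**2) / max(np.linalg.norm(x - x_prev), 1e-12))
        w = x + theta * (x - x_prev)                             # inertial extrapolation
        fb = soft_threshold(w - lam * K.T @ (K @ w - b), lam)    # forward-backward step
        x_next = gamma * f(x) + (1.0 - gamma) * fb               # viscosity combination (generic template)
        if np.linalg.norm(x_next - x) <= 1e-6 or k >= 50000:
            break
        x_prev, x, k = x, x_next, k + 1
    print(f"iterations: {k}, CPU time: {time.time() - start:.3f} s")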
Table 1 shows the performance of the three algorithms in solving problem (4.3) for different sizes of the matrix K. Our algorithm consistently reaches the stopping tolerance in the shortest CPU time across all cases. Moreover, it requires fewer iterations than Kitkuan et al.'s algorithm (2019) and Tan's algorithm (2024) for every size of K.
4.2. Image restoration problems
In this subsection, we showcase the efficacy of the proposed algorithm by employing it to solve image restoration problems, specifically deblurring and denoising. The image restoration problem can be formulated as the inversion of the following degradation model:
y=Hx+w,     (4.5)
where y, H, x, and w denote the degraded image, the degradation (blurring) operator, the original image, and the noise, respectively.
The reconstructed image is approximated by solving the regularized least-squares minimization problem:
minx{(1/2)‖Hx−y‖₂²+μϕ(x)},     (4.6)
where μ>0 is the regularization parameter and ϕ(⋅) represents the regularizer. The l1 norm is commonly employed as the regularization functional to remove noise in restoration problems [40]. With ϕ=‖⋅‖1, problem (4.6) can be reformulated as follows:
minx{(1/2)‖Hx−y‖₂²+μ‖x‖1},     (4.7)
where y denotes the degraded image and H represents a bounded linear (blurring) operator. Problem (4.7) can be cast in the form of problem (1.1) by setting B=∂‖⋅‖1, μ=0.001, and A=∇L(⋅), where L(x)=(1/2)‖Hx−y‖₂². With this choice, A(x)=∇L(x)=HT(Hx−y). First, we degrade the test images by adding random noise and applying different types of blurring: the Gaussian blur (size 20 by 20 with standard deviation 20), the average blur (size 10 by 10), and the motion blur (linear motion of a camera by 20 pixels with an angle of 40 degrees). Next, we solve problem (4.7) using our algorithm from Theorem 4.3 with f(x)=x/2 for all x∈Rs, αk=1/(k+1), βk=1/(k+1), γk=1/(100k+1), λk=0.7, and θk defined as in (4.4).
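As a small, self-contained analogue of this setup, the sketch below builds an explicit one-dimensional blur matrix H (so that the adjoint HT is exact), forms a degraded signal following the degradation model, and runs plain forward-backward steps on the l1-regularized least-squares model; the kernel size, noise level, and parameter values are illustrative and do not reproduce the two-dimensional experiments reported here.

    import numpy as np

    def blur_matrix(n, radius=2):
        # Explicit banded averaging (blur) matrix H acting on 1-D signals of length n.
        H = np.zeros((n, n))
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            H[i, lo:hi] = 1.0 / (hi - lo)
        return H

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(3)
    n = 200
    x_true = np.zeros(n); x_true[40:60] = 1.0; x_true[120:130] = -0.5   # piecewise-constant signal
    H = blur_matrix(n)
    y = H @ x_true + 0.01 * rng.standard_normal(n)                      # degradation model y = Hx + w

    mu, lam = 0.001, 0.7
    x = np.zeros(n)
    for k in range(500):
        x = soft_threshold(x - lam * H.T @ (H @ x - y), lam * mu)       # forward-backward step for the l1 model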
The comparisons of the performance among our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024) are presented next. For Kitkuan et al.'s algorithm (2019) (1.6), presented in [32], we set f(x)=x/2 for all x∈Rs, γk=1/(100k+1), λk=0.7, and θk defined as in (4.4). For Tan's algorithm (2024), presented in [31, Algorithm 1.3], we choose the following parameter values: ζk=θk, δ=1.5, φ=1/20, and χk=λk. The quality of the reconstructed image is evaluated using the signal-to-noise ratio (SNR) formula:
where x represents the original image, while xk stands for the image restored at iteration k.
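The SNR display itself is not reproduced above; a commonly used definition, assumed here, measures the ratio in decibels as 20·log10(‖x‖2/‖x−xk‖2), which can be computed as follows.

    import numpy as np

    def snr(x, x_k):
        # Signal-to-noise ratio in dB between the original image x and the restoration x_k
        # (common definition; assumed here, see the displayed formula in the article).
        return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_k))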
The effectiveness of image restoration using our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024) is depicted in Figures 1–3.
The comparisons among our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024) in image restoration problems are illustrated in Figure 4.
The experiments were carried out using MATLAB 9.13 (R2022b) on a MacBook Pro 14-inch 2021 model equipped with an Apple M1 Pro processor and 16 GB of memory.
5. Conclusions
We introduce a novel generalized viscosity forward-backward splitting scheme with inertial terms for addressing the monotone inclusion problem, and we prove its strong convergence under suitable conditions on the parameters. Furthermore, we apply these results to approximate solutions of convex minimization problems and present a numerical example comparing our proposed algorithm with existing methods. Finally, we demonstrate the efficacy of our method in solving image restoration problems.
Author Contributions
Kasamsuk Ungchittrakool: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing, Visualization; Natthaphon Artsawang: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing, Visualization, Project administration, Funding acquisition. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This research received financial support from the Faculty of Science, Naresuan University, under grant number R2567E_SCI003.
Conflict of interest
The authors declare no conflict of interest.