
Areerat Arunchai, Thidaporn Seangwattana, Kanokwan Sitthithakerngkiet, Kamonrat Sombut, Image restoration by using a modified proximal point algorithm, AIMS Mathematics, 2023, 8(4): 9557–9575. doi: 10.3934/math.2023482
In this paper, we establish a modified proximal point algorithm for solving the common problem between convex constrained minimization and modified variational inclusion problems. The proposed algorithm is based on the proximal point algorithm in [19], combined with the iterative method of [21]. A strong convergence theorem is proved under suitable conditions in Hilbert spaces, and the algorithm is applied to image restoration problems.
Convex optimization, a branch of mathematical optimization, studies the problem of minimizing convex functions over convex sets; it has numerous applications in a variety of fields, including image processing, automatic control systems, data analysis, and finance. The convex minimization problem is to determine a point $x^*$ in a closed convex subset $K$ of a real Hilbert space $H$ such that
$$h(x^*) = \min_{y\in K} h(y), \tag{1.1}$$
where $h:K\to\mathbb{R}$ is a convex function and $\operatorname*{argmin}_{y\in K}h(y)$ denotes the set of minimizers of $h$ on $K$. Note that if $h$ is differentiable on $K$, then $x^*$ solves (1.1) if and only if it solves the variational inequality problem (see [1]), that is, $x^*\in K$ satisfies $\langle h'(x^*), x - x^*\rangle \ge 0$ for all $x\in K$. In 1970 and 1976, Martinet [2] and Rockafellar [3], respectively, presented a tool for finding solutions of (1.1): the proximal point algorithm (PPA), which is given by
$$\begin{cases} x_1 \in H,\\ x_{n+1} = \operatorname*{argmin}_{y\in H}\Big[h(y) + \dfrac{1}{2\lambda_n}\|x_n - y\|^2\Big], \end{cases}$$
where $\lambda_n > 0$ for every $n \ge 1$. They also demonstrated that this algorithm yields a sequence $\{x_n\}$ that converges weakly to a minimizer of $h$. Since then, a growing number of researchers have investigated solutions of the convex minimization problem (see [4,5,6,7,8,9]).
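For readers who wish to experiment numerically, the following is a minimal sketch of the PPA, assuming a strongly convex quadratic objective so that each proximal subproblem has a closed-form solution; the names `ppa_quadratic`, `A`, `b` and `lam` are illustrative and not from the paper.

```python
import numpy as np

# Minimal sketch of the proximal point algorithm (PPA) for
# h(x) = 0.5 * x^T A x - b^T x with A positive definite.
# For this h the proximal subproblem has the closed form
#   x_{n+1} = (A + (1/lam) I)^{-1} (b + (1/lam) x_n).

def ppa_quadratic(A, b, x0, lam=1.0, iters=200):
    n = len(b)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # x <- argmin_y [ h(y) + (1/(2*lam)) * ||x - y||^2 ]
        x = np.linalg.solve(A + np.eye(n) / lam, b + x / lam)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_ppa = ppa_quadratic(A, b, np.zeros(2))
print(x_ppa, np.linalg.solve(A, b))  # PPA iterate vs. exact minimizer
```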
Another interesting problem is the variational inclusion problem (VIP): find $x\in H$ such that
$$0 \in Fx + Gx, \tag{1.2}$$
where $F$ is a single-valued operator on $H$ and $G:H\to 2^H$ is a multi-valued mapping. We denote by $(F+G)^{-1}(0)$ the set of solutions of (1.2). When $F\equiv 0$, (1.2) becomes the monotone inclusion problem (see [3]), which is a generalization of the variational inequality problem (for more information, see [10]). It is well known that problem (1.2) provides a general and convenient framework for the unified study of optimal solutions in many optimization-related areas, such as mathematical programming, variational inequalities, and optimal control. There are many methods for solving the VIP (see [11,12,13,14]). The forward-backward splitting method is one of the most well known (see [12,13]); it is given by
$$x_{n+1} = (I + \lambda_n G)^{-1}(I - \lambda_n F)x_n, \tag{1.3}$$
where $\lambda_n > 0$ and $D(G) \subset D(F)$, $F$ is inverse strongly monotone, and $G$ is maximal monotone. The scheme (1.3) yields a sequence $\{x_n\}$ that converges only weakly to a solution of the VIP. Researchers have therefore focused on modifications for finding zero points of sums of monotone operators (for more detail, see [4,6,12,15,16,17]). In 2016, Boikanyo [18] introduced the viscosity approximation forward-backward splitting technique, a development of the proximal point methodology for approximating a zero point of the sum of a coercive operator $F$ and a maximal monotone operator $G$.
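As a hedged illustration of scheme (1.3), the sketch below takes $F = \nabla f$ with $f(x) = \tfrac{1}{2}\|x - p\|^2$ (which is 1-inverse strongly monotone) and $G$ the subdifferential of the indicator function of a box, whose resolvent is the coordinatewise projection; all names are our own illustrative assumptions, not from the paper.

```python
import numpy as np

# Forward-backward iteration (1.3): x_{n+1} = (I + lam*G)^{-1}(I - lam*F)x_n.
# Here F(x) = x - p (gradient of 0.5*||x - p||^2) and the resolvent of G,
# the subdifferential of the indicator of [lo, hi]^d, is np.clip.

def forward_backward(p, lo, hi, lam=0.5, iters=100):
    x = np.zeros_like(p)
    for _ in range(iters):
        y = x - lam * (x - p)   # forward (gradient) step
        x = np.clip(y, lo, hi)  # backward (resolvent) step
    return x

print(forward_backward(np.array([2.0, -3.0, 0.4]), -1.0, 1.0))
# -> approximately [1., -1., 0.4], the projection of p onto the box
```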
Recently, Sow [19] established an algorithm based on the methods of [3] and [18] for determining a point in the common solution set of a convex optimization problem and a VIP, that is, $\operatorname*{argmin}_{u\in K}h(u) \cap (F+G)^{-1}(0)$, where $F:K\to H$ is $\alpha$-inverse strongly monotone, $G$ is a maximal monotone operator on $H$, $h:K\to(-\infty,+\infty]$ is convex and lower semi-continuous, and $T:K\to K$ is a $b$-contraction mapping. The method is given by
$$\begin{cases} x_0 \in K,\\ u_n = \operatorname*{argmin}_{u\in K}\Big[h(u) + \dfrac{1}{2\lambda_n}\|u - x_n\|^2\Big],\\ x_{n+1} = \alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}(u_n - \theta_n F u_n), \end{cases}$$
where $J^G_{\theta_n} = (I + \theta_n G)^{-1}$. Moreover, the sequence $\{x_n\}$ created by this method converges strongly to a common solution.
In 2014, the modified variational inclusion problem (MVIP) was first proposed by Khuangsatung and Kangtunyakarn [20]. The problem is to determine $x\in H$ such that
$$0 \in \sum_{i=1}^{N} a_i F_i x + Gx, \tag{1.4}$$
where $a_i \in (0,1)$ for $i=1,2,\ldots,N$ with $\sum_{i=1}^{N} a_i = 1$, $F_i:H\to H$ is $\alpha_i$-inverse strongly monotone, and $G:H\to 2^H$ is a maximal monotone mapping. Problem (1.4) reduces to problem (1.2) if $F_i \equiv F$ for all $i=1,2,\ldots,N$. Moreover, they proved a strong convergence theorem for finding a common element of the set of fixed points of a $\kappa$-strictly pseudononspreading mapping, the set of solutions of a finite family of variational inclusion problems, and the set of solutions of a finite family of equilibrium problems in a Hilbert space. Afterwards, Khuangsatung and Kangtunyakarn [21] introduced an iterative method for finding a common fixed point of a finite family of nonexpansive mappings $T_i$ and a common solution of a finite family of variational inclusion problems in Hilbert spaces under the condition $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n}\delta_i = 1$. Their method is given by the following:
$$\begin{cases} z^i_n = b_n x_n + (1-b_n)T_i x_n, \quad \forall n \ge 1,\\ x_{n+1} = \alpha_n f(x_n) + \beta_n J_{M,\lambda}\Big(I - \lambda\sum_{i=1}^{N}\delta_i F_i\Big)x_n + \gamma_n\sum_{i=1}^{N} a_i z^i_n. \end{cases}$$
Furthermore, they demonstrated that the generated sequence $\{x_n\}$ converges strongly to a common element of the solution sets of these problems.
Motivated by the ideas of [19,20,21], we establish a modified proximal point algorithm for solving a common problem between the convex constrained optimization problem and the modified variational inclusion problem by combining the algorithms of [19,21] and using the condition $\sum_{i=1}^{N}\delta_i = 1$. Under appropriate conditions, a strong convergence theorem is proved in Hilbert spaces. Finally, the proposed algorithm is applied to image restoration problems. Image restoration is an important problem in image processing: during data collection, images usually suffer degradation, which can include blurring, information loss due to sampling, quantization effects, and various noise sources. The goal of image restoration is to estimate the original image from the degraded data, so the proposed algorithm can be used to solve image restoration problems in which the images have suffered a variety of blurring operations. We also compare image quality using the signal-to-noise ratio (SNR). Numerical experiments show that the proposed algorithm performs better than Khuangsatung and Kangtunyakarn's method when applied to image restoration.
This work is organized as follows: basic lemmas and definitions are compiled in Section 2; our algorithm is presented in detail in Section 3; numerical experiments are discussed in Section 4; and the conclusion is given in the last section.
In this section, we collect the definitions and lemmas that are used in the main results. Let $F$ be a single-valued nonlinear mapping of $K$ into $H$. $F$ is called $\alpha$-inverse strongly monotone if there exists $\alpha > 0$ such that $\langle Fx - Fy, x - y\rangle \ge \alpha\|Fx - Fy\|^2$ for every $x,y \in K$. Obviously, every $\alpha$-inverse strongly monotone mapping is monotone and Lipschitz continuous. Throughout this paper, we assume that $G:H\to 2^H$, $h:K\to(-\infty,+\infty]$ and $T:K\to K$.
Lemma 1. [22] Assume that $F$ is an $\alpha$-inverse strongly monotone mapping on $H$. Then $I - \theta F$ is nonexpansive for every $\theta \in [0, 2\alpha]$.
Lemma 2. [23] Let $T:H\to(-\infty,+\infty]$ be a proper, convex, lower semicontinuous function. Then, for every $x,y\in H$ and $\lambda > 0$, the inequality

$$T(y) \ge \frac{1}{2\lambda}\|J^T_\lambda x - y\|^2 - \frac{1}{2\lambda}\|x - y\|^2 + \frac{1}{2\lambda}\|x - J^T_\lambda x\|^2 + T(J^T_\lambda x) \tag{2.1}$$

holds.
The resolvent operator $J^G_\lambda$ is defined by $J^G_\lambda x = (I + \lambda G)^{-1}(x)$ for every $x\in H$, where $\lambda > 0$ and $G$ is maximal monotone. The operator $J^G_\lambda$ is single-valued, nonexpansive, and 1-inverse strongly monotone. Obviously, a point solves the VIP if and only if it is a fixed point of the operator $J^G_\lambda(I - \lambda F)$, for every $\lambda > 0$ (see [24]).
The Moreau–Yosida resolvent of a function $f$ is defined by

$$J^f_\lambda x = \operatorname*{argmin}_{u\in H}\Big[f(u) + \frac{1}{2\lambda}\|x - u\|^2\Big],$$

for every $x\in H$ and $\lambda > 0$. The set of minimizers of $f$ coincides with the set of fixed points of the resolvent associated with $f$, as seen in [4]. Moreover, the resolvent $J^f_\lambda$ is nonexpansive.
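For concreteness, this resolvent admits a closed form in simple cases. The following standard worked example (not taken from this paper, though the $\ell_1$ term reappears in Section 4) evaluates it for $f(u) = \kappa\|u\|_1$ on $\mathbb{R}^d$, where the minimization separates across coordinates and yields soft-thresholding:

$$J^{f}_{\lambda}x = \operatorname*{argmin}_{u\in\mathbb{R}^d}\Big[\kappa\|u\|_1 + \frac{1}{2\lambda}\|x-u\|^2\Big], \qquad \big(J^{f}_{\lambda}x\big)_i = \operatorname{sign}(x_i)\,\max\{|x_i| - \lambda\kappa,\ 0\}.$$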
Lemma 3. [25] Suppose that $T$ is a proper, convex, lower semicontinuous function. For every $\mu > 0$ and $r > 0$, the resolvent identity

$$J^T_r x = J^T_\mu\Big(\frac{\mu}{r}x + \Big(1 - \frac{\mu}{r}\Big)J^T_r x\Big)$$

holds.
When discussing the convergence of fixed point iterative algorithms, the demiclosedness of a nonlinear operator $\Gamma$ is often used.
Lemma 4. [26] Suppose that $\Gamma$ is nonexpansive and $\mathrm{Fix}(\Gamma)\neq\emptyset$. Then $I - \Gamma$ is demiclosed; that is, if $\{x_n\}$ converges weakly to $x$ and $\{(I-\Gamma)x_n\}$ converges strongly to $y$, then $(I-\Gamma)x = y$.
Lemma 5. [11] Suppose that $F$ is a monotone, Lipschitz continuous mapping and $G$ is a maximal monotone mapping on $H$. Then the mapping $G + F$ is maximal monotone.
Lemma 6. [27] The following statements hold:

(i) $\|u\|^2 - \|v\|^2 - 2\langle u - v, v\rangle \ge \|u - v\|^2$ for all $u,v\in H$;

(ii) $\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v\rangle$ for all $u,v\in H$;

(iii) $\|\alpha u + \beta v\|^2 = \alpha\|u\|^2 + \beta\|v\|^2 - \alpha\beta\|u - v\|^2$ for all $u,v\in H$ and all $\alpha,\beta\in[0,1]$ with $\alpha + \beta = 1$.
Lemma 7. [28] Suppose that $\{\beta_n\}$ is a sequence of nonnegative real numbers satisfying $\beta_{n+1} \le (1-\epsilon_n)\beta_n + \gamma_n$ for every $n \ge 0$, where $\{\epsilon_n\}\subset(0,1)$ and $\{\gamma_n\}\subset\mathbb{R}$ satisfy

(i) $\sum_{n=0}^{\infty}\epsilon_n = \infty$,

(ii) $\limsup_{n\to\infty}\gamma_n/\epsilon_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n| < \infty$.

Then $\lim_{n\to\infty}\beta_n = 0$.
In order to find a common element of the set of minimizers of a convex function and the set of solutions of the MVIP, we propose and analyze a proximal iterative technique employing inverse strongly monotone and maximal monotone mappings. For the purposes of this investigation, the following assumptions are made.
Assumption
(A1) $T$ is a $b$-contraction mapping on $K$ and $h:K\to(-\infty,+\infty]$ is a convex, proper, lower semicontinuous function.

(A2) $F_i:K\to H$ is $\alpha_i$-inverse strongly monotone for every $i$, and $G:K\to 2^H$ is a maximal monotone operator.

(A3) $\Omega := \operatorname*{argmin}_{u\in K}h(u) \cap \big(\sum_{i=1}^{N}\delta_i F_i + G\big)^{-1}(0) \neq \emptyset$.
Algorithm 1. Choose $x_0\in K$ and sequences $\{\alpha_n\},\{\lambda_n\},\{\theta_n\}$ in $(0,1)$ with $\lambda_n \ge \lambda > 0$.
Step 1. Put un as
$$u_n = \operatorname*{argmin}_{u\in K}\Big[h(u) + \frac{1}{2\lambda_n}\|u - x_n\|^2\Big].$$
Step 2. Compute
$$x_{n+1} = \alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}\Big(u_n - \theta_n\sum_{i=1}^{N}\delta_i F_i u_n\Big). \tag{3.1}$$
Set n = n + 1, and go back to Step 1.
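The following is a minimal computational sketch of Algorithm 1, with each operator passed in as a callable: `prox_h(x, lam)` plays the role of the Moreau–Yosida resolvent $J^h_{\lambda_n}$, `resolvent_G(v, theta)` that of $J^G_{\theta_n}$, `T` is the $b$-contraction, and `Fs`/`deltas` encode the finite family $\{F_i\}$ with weights $\{\delta_i\}$. The parameter choices $\alpha_n = 1/(2n+2)$, $\theta_n = 0.1$, $\lambda_n = 0.5$ follow the experiments in Section 4; the function names are our own illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def algorithm1(x0, prox_h, resolvent_G, T, Fs, deltas, iters=100):
    x = np.asarray(x0, dtype=float)
    for n in range(1, iters + 1):
        alpha_n, lam_n, theta_n = 1.0 / (2 * n + 2), 0.5, 0.1
        # Step 1: u_n = argmin_u [ h(u) + (1/(2*lam_n)) * ||u - x_n||^2 ]
        u = prox_h(x, lam_n)
        # Step 2: x_{n+1} = alpha_n * T(x_n)
        #   + (1 - alpha_n) * J^G_{theta_n}(u_n - theta_n * sum_i delta_i * F_i(u_n))
        v = u - theta_n * sum(d * F(u) for d, F in zip(deltas, Fs))
        x = alpha_n * T(x) + (1 - alpha_n) * resolvent_G(v, theta_n)
    return x
```

For instance, with $h\equiv 0$ (so `prox_h(x, lam)` returns `x`), $T(x) = x/100$ as in Section 4, and the operators of the image restoration setting of Section 4, this sketch should reproduce iteration (3.1).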
Lemma 8. Algorithm 1 generates a bounded sequence {xn}.
Proof. Since $T$ is a $b$-contraction, the operator $I - T$ is strongly monotone, and $\Omega$ is closed and convex; hence the variational inequality (3.3) below has exactly one solution in $\Omega$, which we denote by $z$. Since $z\in\Omega$, we have $h(z) \le h(u)$ for every $u\in K$, so

$$h(z) + \frac{1}{2\lambda_n}\|z - z\|^2 \le h(u) + \frac{1}{2\lambda_n}\|u - z\|^2.$$
From the definition of the Moreau–Yosida resolvent, we obtain $J^h_{\lambda_n}z = z$. This implies that

$$\|u_n - z\| = \|J^h_{\lambda_n}x_n - z\| = \|J^h_{\lambda_n}x_n - J^h_{\lambda_n}z\| \le \|x_n - z\|.$$
By (3.1), Lemma 1 and $z = J^G_{\theta_n}\big(I - \theta_n\sum_{i=1}^{N}\delta_i F_i\big)z$, we see that
$$\Big\|J^G_{\theta_n}\Big(I - \theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\| \le \|u_n - z\| \le \|x_n - z\|, \tag{3.2}$$
for every $n \ge 0$. We conclude that

$$\begin{aligned}
\|x_{n+1}-z\| &= \Big\|\alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|\\
&= \Big\|\alpha_n\big(T(x_n)-T(z)\big) + (1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big) + \alpha_n\big(T(z)-z\big)\Big\|\\
&\le \alpha_n\|T(x_n)-T(z)\| + (1-\alpha_n)\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\| + \alpha_n\|T(z)-z\|\\
&\le \alpha_n b\|x_n-z\| + (1-\alpha_n)\|x_n-z\| + \alpha_n\|T(z)-z\|\\
&= \big(1-\alpha_n(1-b)\big)\|x_n-z\| + \alpha_n\|T(z)-z\|\\
&\le \max\Big\{\|x_n-z\|,\ \frac{\|T(z)-z\|}{1-b}\Big\}.
\end{aligned}$$
By induction on $n$, we deduce that

$$\|x_n - z\| \le \max\Big\{\|x_0 - z\|,\ \frac{\|T(z) - z\|}{1-b}\Big\}, \quad n \ge 1.$$
Hence {xn} is bounded.
Theorem 1. Assume that

(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$,

(ii) $\theta_n \in [a,b] \subset (0, \min\{1, 2\alpha\})$, where $\alpha := \min_{1\le i\le N}\alpha_i$, and

(iii) $\sum_{i=1}^{N}\delta_i = 1$.

Then Algorithm 1 generates a sequence $\{x_n\}$ that converges strongly to $z\in\Omega$, which is the unique solution of the variational inequality

$$\langle z - T(z),\ z - q\rangle \le 0, \quad \forall q \in \Omega. \tag{3.3}$$
Proof. We consider two cases.
Case 1. Assume that there exists $n_0\in\mathbb{N}$ such that $\{\|x_n - z\|\}$ is decreasing for every $n \ge n_0$. Since $\{\|x_n - z\|\}$ is then monotone and bounded, it is convergent. Thus

$$\lim_{n\to\infty}\big(\|x_n - z\|^2 - \|x_{n+1} - z\|^2\big) = 0. \tag{3.4}$$
By Lemma 2 and h(z)≤h(un),
$$\|x_n - z\|^2 - \|u_n - z\|^2 \ge \|x_n - u_n\|^2. \tag{3.5}$$
By (3.1), (3.5) and the properties of $\|\cdot\|^2$, we obtain
$$\begin{aligned}
\|x_{n+1}-z\|^2 &= \Big\|\alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|u_n-z\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\big(\|x_n-z\|^2 - \|x_n-u_n\|^2\big).
\end{aligned}$$
That is,
$$(1-\alpha_n)\|x_n - u_n\|^2 \le \|x_n - z\|^2 - \alpha_n\|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_n\|T(x_n) - z\|^2.$$
Thus, $(1-\alpha_n)\|x_n - u_n\|^2 \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_n\|T(x_n) - z\|^2$. From (3.4) and $\alpha_n\to 0$, it follows that $\lim_{n\to\infty}\|x_n - u_n\|^2 = 0$. Using (3.1) and Lemma 1,
$$\begin{aligned}
\|x_{n+1}-z\|^2 &= \Big\|\alpha_n\big(T(x_n)-z\big) + (1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big)\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\Big\|J^G_{\theta_n}\Big(u_n - \theta_n\sum_{i=1}^{N}\delta_i F_i u_n\Big) - J^G_{\theta_n}\Big(z - \theta_n\sum_{i=1}^{N}\delta_i F_i z\Big)\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\Big\|(u_n - z) - \theta_n\Big(\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big)\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|u_n-z\|^2 - (1-\alpha_n)\theta_n(2\alpha-\theta_n)\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|u_n-z\|^2 - (1-\alpha_n)a(2\alpha-b)\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2,
\end{aligned}$$

where we used that $\sum_{i=1}^{N}\delta_i F_i$ is $\alpha$-inverse strongly monotone and $\theta_n(2\alpha - \theta_n) \ge a(2\alpha - b)$ for $\theta_n \in [a,b]$.
Therefore,
$$\begin{aligned}
(1-\alpha_n)a(2\alpha-b)\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2 &\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|u_n-z\|^2 - \|x_{n+1}-z\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|x_n-z\|^2 - \|x_{n+1}-z\|^2\\
&\le \alpha_n\big(\|T(x_n)-z\|^2 - \|x_n-z\|^2\big) + \|x_n-z\|^2 - \|x_{n+1}-z\|^2.
\end{aligned}$$
Since $\{\alpha_n\}$ converges to 0, (3.4) holds and $\{x_n\}$ is bounded, we obtain
$$\lim_{n\to\infty}\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2 = 0. \tag{3.6}$$
Since (3.1) holds and $J^G_{\theta_n}$ is 1-inverse strongly monotone, we get
$$\begin{aligned}
\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2 &= \Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)z\Big\|^2\\
&\le \Big\langle\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - \Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)z,\ J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\rangle\\
&= \frac{1}{2}\Big[\Big\|\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - \Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)z\Big\|^2 + \Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2\\
&\quad - \Big\|\Big(u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big) - \theta_n\Big(\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big)\Big\|^2\Big]\\
&\le \frac{1}{2}\Big[\|u_n-z\|^2 + \Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2 - \Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2\\
&\quad - \theta_n^2\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2 + 2\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|\Big].
\end{aligned}$$
This implies

$$\frac{1}{2}\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2 \le \frac{1}{2}\Big[\|u_n-z\|^2 - \Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 - \theta_n^2\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|^2 + 2\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|\Big].$$

Therefore

$$\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2 \le \|u_n-z\|^2 - \Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 + 2\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|.$$
By the definition of $x_{n+1}$,

$$\begin{aligned}
\|x_{n+1}-z\|^2 &= \Big\|\alpha_n\big(T(x_n)-z\big) + (1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big)\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2\\
&\le \alpha_n\|T(x_n)-z\|^2 + (1-\alpha_n)\|u_n-z\|^2 - (1-\alpha_n)\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2\\
&\quad + 2(1-\alpha_n)\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|.
\end{aligned}$$
Consequently,

$$\begin{aligned}
\|x_{n+1}-z\|^2 &\le \alpha_n\|T(x_n)-z\|^2 + \|x_n-z\|^2 - \Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 + \alpha_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2\\
&\quad + 2(1-\alpha_n)\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|.
\end{aligned}$$
Hence

$$\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 \le \alpha_n\|T(x_n)-z\|^2 + \|x_n-z\|^2 - \|x_{n+1}-z\|^2 + \alpha_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 + 2(1-\alpha_n)\theta_n\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|\,\Big\|\sum_{i=1}^{N}\delta_i F_i u_n - \sum_{i=1}^{N}\delta_i F_i z\Big\|.$$
Since $\alpha_n\to 0$ as $n\to\infty$, and using (3.4) and (3.6), we have
$$\lim_{n\to\infty}\Big\|u_n - J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n\Big\|^2 = 0. \tag{3.7}$$
We now demonstrate that $\limsup_{n\to\infty}\langle z - T(z),\ z - x_n\rangle \le 0$. Because $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ that converges weakly to some $x^*\in K$ and satisfies $\limsup_{n\to\infty}\langle z - T(z),\ z - x_n\rangle = \lim_{k\to\infty}\langle z - T(z),\ z - x_{n_k}\rangle$. From Lemma 3 and (3.1), we see that
$$\begin{aligned}
\|x_n - J^h_\lambda x_n\| &\le \|x_n - u_n\| + \|u_n - J^h_\lambda x_n\| = \|u_n - x_n\| + \|J^h_{\lambda_n}x_n - J^h_\lambda x_n\|\\
&= \|u_n - x_n\| + \Big\|J^h_\lambda\Big(\frac{\lambda}{\lambda_n}x_n + \Big(1-\frac{\lambda}{\lambda_n}\Big)J^h_{\lambda_n}x_n\Big) - J^h_\lambda x_n\Big\|\\
&\le \|u_n - x_n\| + \Big(1-\frac{\lambda}{\lambda_n}\Big)\|J^h_{\lambda_n}x_n - x_n\|\\
&= \Big(2-\frac{\lambda}{\lambda_n}\Big)\|u_n - x_n\|.
\end{aligned}$$
Hence,
$$\lim_{n\to\infty}\|x_n - J^h_\lambda x_n\| = 0. \tag{3.8}$$
By (3.8), Lemma 4 and the nonexpansiveness of $J^h_\lambda$, we have $x^*\in\mathrm{Fix}(J^h_\lambda) = \operatorname*{argmin}_{u\in K}h(u)$. Now we show that $x^*\in\big(\sum_{i=1}^{N}\delta_i F_i + G\big)^{-1}(0)$. By Lemma 5, $\sum_{i=1}^{N}\delta_i F_i + G$ is maximal monotone. Let $(v,u)$ belong to its graph, that is, $u - \sum_{i=1}^{N}\delta_i F_i v \in G(v)$. Set $z_n := J^G_{\theta_n}\big(u_n - \theta_n\sum_{i=1}^{N}\delta_i F_i u_n\big)$. Since $z_{n_k} = J^G_{\theta_{n_k}}\big(u_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k}\big)$, we have $u_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k} \in (I + \theta_{n_k}G)z_{n_k}$, i.e., $\frac{1}{\theta_{n_k}}\big(u_{n_k} - z_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k}\big) \in G(z_{n_k})$. By the monotonicity of $G$, we obtain
$$\Big\langle v - z_{n_k},\ u - \sum_{i=1}^{N}\delta_i F_i v - \frac{1}{\theta_{n_k}}\Big(u_{n_k} - z_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k}\Big)\Big\rangle \ge 0.$$
Then

$$\langle v - z_{n_k},\ u\rangle - \Big\langle v - z_{n_k},\ \sum_{i=1}^{N}\delta_i F_i v + \frac{1}{\theta_{n_k}}\Big(u_{n_k} - z_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k}\Big)\Big\rangle \ge 0.$$
Hence

$$\begin{aligned}
\langle v - z_{n_k},\ u\rangle &\ge \Big\langle v - z_{n_k},\ \sum_{i=1}^{N}\delta_i F_i v + \frac{1}{\theta_{n_k}}\Big(u_{n_k} - z_{n_k} - \theta_{n_k}\sum_{i=1}^{N}\delta_i F_i u_{n_k}\Big)\Big\rangle\\
&= \Big\langle v - z_{n_k},\ \sum_{i=1}^{N}\delta_i F_i v - \sum_{i=1}^{N}\delta_i F_i z_{n_k}\Big\rangle + \Big\langle v - z_{n_k},\ \sum_{i=1}^{N}\delta_i F_i z_{n_k} - \sum_{i=1}^{N}\delta_i F_i u_{n_k}\Big\rangle + \Big\langle v - z_{n_k},\ \frac{1}{\theta_{n_k}}(u_{n_k} - z_{n_k})\Big\rangle\\
&\ge \Big\langle v - z_{n_k},\ \sum_{i=1}^{N}\delta_i F_i z_{n_k} - \sum_{i=1}^{N}\delta_i F_i u_{n_k}\Big\rangle + \Big\langle v - z_{n_k},\ \frac{1}{\theta_{n_k}}(u_{n_k} - z_{n_k})\Big\rangle,
\end{aligned}$$

where the last inequality uses the monotonicity of $\sum_{i=1}^{N}\delta_i F_i$.
From $\|z_n - u_n\|\to 0$ (by (3.7)), $\big\|\sum_{i=1}^{N}\delta_i F_i z_n - \sum_{i=1}^{N}\delta_i F_i u_n\big\|\to 0$ and $z_{n_k}\rightharpoonup x^*$, we obtain

$$0 \le \langle v - x^*,\ u\rangle = \lim_{k\to\infty}\langle v - z_{n_k},\ u\rangle.$$

By the maximal monotonicity of $\sum_{i=1}^{N}\delta_i F_i + G$, it follows that $x^*\in\big(\sum_{i=1}^{N}\delta_i F_i + G\big)^{-1}(0)$. Therefore, $x^*\in\big(\sum_{i=1}^{N}\delta_i F_i + G\big)^{-1}(0) \cap \operatorname*{argmin}_{u\in K}h(u) = \Omega$. Since $z$ solves (3.3), we get
$$\limsup_{n\to\infty}\langle z - T(z),\ z - x_n\rangle = \lim_{k\to\infty}\langle z - T(z),\ z - x_{n_k}\rangle = \langle z - T(z),\ z - x^*\rangle \le 0.$$
Finally, we show that $x_n\to z$. By Lemma 6 and (3.1),
$$\begin{aligned}
\|x_{n+1}-z\|^2 &= \Big\|\alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2\\
&= \Big\|\alpha_n\big(T(x_n)-T(z)\big) + (1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big) + \alpha_n\big(T(z)-z\big)\Big\|^2\\
&\le \Big\|\alpha_n\big(T(x_n)-T(z)\big) + (1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big)\Big\|^2 + 2\alpha_n\langle T(z)-z,\ x_{n+1}-z\rangle.
\end{aligned}$$
Consequently,

$$\begin{aligned}
\|x_{n+1}-z\|^2 &\le \alpha_n\|T(x_n)-T(z)\|^2 + (1-\alpha_n)\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_i F_i\Big)u_n - z\Big\|^2 + 2\alpha_n\langle z-T(z),\ z-x_{n+1}\rangle\\
&\le \alpha_n b\|x_n-z\|^2 + (1-\alpha_n)\|x_n-z\|^2 + 2\alpha_n\langle z-T(z),\ z-x_{n+1}\rangle\\
&= \big(1-\alpha_n(1-b)\big)\|x_n-z\|^2 + 2\alpha_n\langle z-T(z),\ z-x_{n+1}\rangle.
\end{aligned}$$
By Lemma 7, applied with $\epsilon_n = \alpha_n(1-b)$ and $\gamma_n = 2\alpha_n\langle z-T(z),\ z-x_{n+1}\rangle$, we conclude that $x_n\to z$.
Case 2. Suppose that $\{\|x_n - z\|\}$ is not eventually decreasing. Put

$$\Upsilon_n = \|x_n - z\|^2,$$

and let $n_0$ be such that $\Upsilon_{n_0} \le \Upsilon_{n_0+1}$. Define the mapping $\zeta:\{n\in\mathbb{N} : n\ge n_0\}\to\mathbb{N}$ by

$$\zeta(n) = \max\{k\in\mathbb{N} : k\le n,\ \Upsilon_k \le \Upsilon_{k+1}\}.$$
Then $\zeta(n)\to\infty$ and $\Upsilon_{\zeta(n)} \le \Upsilon_{\zeta(n)+1}$ for all $n\ge n_0$. As in Case 1, one can show that
$$\limsup_{n\to\infty}\langle z - T(z),\ z - x_{\zeta(n)}\rangle \le 0$$

and that $\{x_{\zeta(n)}\}_{n\ge n_0}$ is bounded. For every $n\ge n_0$,
$$\begin{aligned}
0 &\le \|x_{\zeta(n)+1}-z\|^2 - \|x_{\zeta(n)}-z\|^2\\
&\le \alpha_{\zeta(n)}\big[-(1-b)\|x_{\zeta(n)}-z\|^2 + 2\langle z-T(z),\ z-x_{\zeta(n)+1}\rangle\big]\\
&= -\alpha_{\zeta(n)}(1-b)\|x_{\zeta(n)}-z\|^2 + 2\alpha_{\zeta(n)}\langle z-T(z),\ z-x_{\zeta(n)+1}\rangle.
\end{aligned}$$
Thus $2\alpha_{\zeta(n)}\langle z-T(z),\ z-x_{\zeta(n)+1}\rangle \ge \alpha_{\zeta(n)}(1-b)\|x_{\zeta(n)}-z\|^2$. Therefore

$$\|x_{\zeta(n)}-z\|^2 \le \frac{2}{1-b}\langle z-T(z),\ z-x_{\zeta(n)+1}\rangle.$$

It follows that $\lim_{n\to\infty}\|x_{\zeta(n)}-z\|^2 = 0$, and hence $\lim_{n\to\infty}\Upsilon_{\zeta(n)} = \lim_{n\to\infty}\Upsilon_{\zeta(n)+1} = 0$. By the definition of $\zeta(n)$, we have $\Upsilon_{\zeta(n)} \le \Upsilon_{\zeta(n)+1}$, and $\Upsilon_j > \Upsilon_{j+1}$ for $\zeta(n)+1 \le j \le n$ whenever $\zeta(n) < n$; in particular, $\max\{\Upsilon_{\zeta(n)}, \Upsilon_n\} \le \Upsilon_{\zeta(n)+1}$ for every $n\ge n_0$. Thus $0 \le \lim_{n\to\infty}\Upsilon_n \le \lim_{n\to\infty}\Upsilon_{\zeta(n)+1} = 0$. Hence $\lim_{n\to\infty}\Upsilon_n = \lim_{n\to\infty}\|x_n - z\|^2 = 0$, and $\{x_n\}$ converges strongly to $z$.
Setting $F_i \equiv F$ in Algorithm 1, we obtain the following corollary.
Corollary 1. [19] Let K be a nonempty closed convex subset of a real Hilbert space H. Let h:K→(−∞,+∞] be a proper, lower semi-continuous and convex function and F be an α-inverse strongly monotone operator of K into H. Let T:K→K be a b-contraction mapping and G be a maximal monotone operator on H such that Γ:=argminu∈Kh(u)∩(F+G)−1(0) is non-empty and the domain of G is included in K. Let {xn} be a sequence defined as follows:
$$\begin{cases} x_0 \in K,\\ u_n = \operatorname*{argmin}_{u\in K}\Big[h(u) + \dfrac{1}{2\lambda_n}\|u - x_n\|^2\Big],\\ x_{n+1} = \alpha_n T(x_n) + (1-\alpha_n)J^G_{\theta_n}(u_n - \theta_n F u_n), \end{cases} \tag{3.9}$$
where $\{\alpha_n\},\{\lambda_n\}$ and $\{\theta_n\}$ are sequences in $(0,1)$ with $\lambda_n \ge \lambda > 0$ for all $n\ge 1$ and some $\lambda$, satisfying the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$.

(ii) $\theta_n \in [a,b] \subset (0, \min\{1, 2\alpha\})$.
Then the sequence $\{x_n\}$ generated by (3.9) converges strongly to $p\in\Gamma$, which is the unique solution of the variational inequality problem

$$\langle p - T(p),\ p - q\rangle \le 0, \quad \forall q \in \Gamma. \tag{3.10}$$
Image restoration aims to repair images that have been degraded by noise or damage. There are numerous types of deterioration: images may be torn, blurred, noisy, out of focus, dirty, or scratched, whether through accidental damage such as spilled liquids or simply through the age of photographs we wish to preserve. By utilizing the actual blurring function, we can estimate the motion blur and remove it to recover a realistic approximation of the original image.
In this section, we are therefore interested in denoising and deblurring images. It is well known that the general image restoration problem can be described by inverting the following observation model:
$$w = Hx + b, \tag{4.1}$$
where $H\in\mathbb{R}^{m\times n}$ is the blurring operator, $b$ is additive noise, $w\in\mathbb{R}^m$ is the observed image, and $x\in\mathbb{R}^n$ is the original image. This problem relates to various available optimization formulations. The goal here is to recover an image that has been degraded by several different blurring operators. Thus, we focus on the following problem:
$$\min_{x\in\mathbb{R}^n}\frac{1}{2}\|w_i - H_i x\|^2 + \kappa\|x\|_1, \qquad i = 1, 2, \ldots, N, \tag{4.2}$$
where $\|x\|_1 = \sum_i|x_i|$, $\kappa$ is a regularization parameter related to the noise $b$, $w_i$ is the blurred image determined by the blurring matrix $H_i$ for every $i=1,2,\ldots,N$, and $x$ is the original image. Setting $f_i(x) = \frac{1}{2}\|w_i - H_i x\|^2$ and $g(x) = \kappa\|x\|_1$, the (Lipschitz continuous) gradient of $f_i$ is $\nabla f_i(x) = H_i^T(H_i x - w_i)$. For solving problem (4.2), we designed the flowchart in Figure 1, where $\tilde{X}$ denotes the deblurred image, i.e., the common solution of problem (4.2). We can apply the algorithm in Theorem 1 to solve problem (4.2) by setting $F_i = \nabla f_i$ and $G = \partial g$.
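The sketch below shows how these ingredients can be coded, under our own simplifying assumption that each blur $H_i$ is implemented by a symmetric filter, so that $H_i$ is (approximately) self-adjoint and $H_i^T$ can be applied by blurring again; `gaussian_filter` and `uniform_filter` are standard SciPy routines, and `kappa` follows the value used in the experiments below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

kappa = 0.01  # regularization weight, as in the experiments below

# Two of the blur types from the experiments (N = 2 here for brevity):
blurs = [lambda z: gaussian_filter(z, sigma=2),  # Gaussian blur, sigma = 2
         lambda z: uniform_filter(z, size=9)]    # 9x9 averaging blur

def make_grad_f(H, w):
    # grad f_i(x) = H_i^T (H_i x - w_i); symmetric filter => H^T = H here
    return lambda x: H(H(x) - w)

def resolvent_G(x, theta):
    # J^G_theta = prox of theta*kappa*||.||_1, i.e., soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - theta * kappa, 0.0)
```

Each `make_grad_f(blurs[i], w_i)` then supplies one $F_i$ for Algorithm 1, and `resolvent_G` supplies $J^G_{\theta_n}$.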
For this experiment, we apply the proposed algorithm to solve problem (4.2), that is, to recover an original image $x\in\mathbb{R}^n$. Image quality is measured by the signal-to-noise ratio (SNR) of the image,
$$\mathrm{SNR} = 20\log_{10}\frac{\|x\|_2}{\|x - x_{n+1}\|_2},$$
and we compare our proposed algorithm with the one developed by Khuangsatung and Kangtunyakarn [20], which also enjoys strong convergence. A higher SNR indicates higher recovery quality. Let $N=4$. Consider a simple linear image degradation model $H_i x = \rho_i * x$, where $\rho_1$ is a motion blur with a 21-pixel motion length (len = 21) and motion orientation $11^\circ$ ($\theta = 11$), $\rho_2$ is a Gaussian blur of filter size $9\times 9$ with standard deviation $\sigma = 2$, $\rho_3$ is a circular averaging filter with radius $r = 4$, and $\rho_4$ is an averaging blur of filter size $9\times 9$. The parameters are set as $\alpha_n = \frac{1}{2n+2}$, $\theta_n = 0.1$, $\lambda_n = 0.5$, $\kappa = 0.01$, $\delta_n = 0.25$, and $f(x) = \frac{x}{100}$. The numerical results of the experiment are as follows: Figure 2 depicts the original grayscale images; Figures 3 and 4 illustrate the grayscale images degraded by the blurs $\rho_1$ through $\rho_4$; Figures 5 and 6 show the grayscale images restored by the proposed algorithm and by Khuangsatung and Kangtunyakarn's method, respectively; and Figures 7 and 8 show that the SNR of the proposed algorithm is higher than that of Khuangsatung and Kangtunyakarn's method.
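As a small helper, the SNR defined above can be computed as follows (a sketch; `x` and `x_rec` stand for the original and restored images as NumPy arrays, with `np.linalg.norm` giving the Euclidean norm of the vectorized image):

```python
import numpy as np

def snr(x, x_rec):
    # SNR = 20 * log10( ||x||_2 / ||x - x_rec||_2 )
    return 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_rec))
```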
Remark 1. Experimentally, it was determined that problem (4.2) can be solved using our algorithm and that it is preferable to previously developed algorithms; the proposed algorithm appears to be more effective at solving these types of problems, as supported by the SNR values.
In this work, a common problem between convex constrained minimization and modified variational inclusion problems has been solved using a modified proximal point algorithm. Additionally, we have applied the proposed algorithm to image restoration problems involving several types of degradation.
The authors would like to thank King Mongkut's University of Technology North Bangkok (KMUTNB), Rajamangala University of Technology Thanyaburi (RMUTT) and Nakhon Sawan Rajabhat University. This research was funded by National Science, Research and Innovation Fund (NSRF), King Mongkut's University of Technology North Bangkok with Contract No. KMUTNB-FF-65-13.
The authors declare no conflict of interest.
[1] G. P. Crespi, A. Guerraggio, M. Rocca, Minty variational inequality and optimization: Scalar and vector case, In: Generalized convexity and monotonicity and applications, Boston: Springer, 2005. https://doi.org/10.1007/0-387-23639-2_12
[2] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Recherche Opérationnelle, 4 (1970), 154–158.
[3] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877–898. https://doi.org/10.1137/0314056
[4] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403–419. https://doi.org/10.1137/032902
[5] S. Kamimura, W. Takahashi, Strong convergence of a proximal-type algorithm in a Banach space, SIAM J. Optim., 13 (2003), 938–945. https://doi.org/10.1137/S105262340139611X
[6] N. Lehdili, A. Moudafi, Combining the proximal algorithm and Tikhonov regularization, Optimization, 37 (1996), 239–252. https://doi.org/10.1080/02331939608844217
[7] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl., 75 (1980), 287–292. https://doi.org/10.1016/0022-247X(80)90323-6
[8] M. V. Solodov, B. F. Svaiter, Forcing strong convergence of proximal point iterations in a Hilbert space, Math. Program., 87 (2000), 189–202. https://doi.org/10.1007/s101079900113
[9] H. K. Xu, A regularization method for the proximal point algorithm, J. Glob. Optim., 36 (2006), 115–125. https://doi.org/10.1007/s10898-006-9002-7
[10] J. M. Borwein, A. S. Lewis, Convex analysis and nonlinear optimization: Theory and examples, New York: Springer, 2005. https://doi.org/10.1007/978-0-387-31256-9
[11] T. Seangwattana, K. Sombut, A. Arunchai, K. Sitthithakerngkiet, A modified Tseng's method for solving the modified variational inclusion problems and its applications, Symmetry, 13 (2021), 2250. https://doi.org/10.3390/sym13122250
[12] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071
[13] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces, J. Math. Anal. Appl., 72 (1979), 383–390. https://doi.org/10.1016/0022-247X(79)90234-8
[14] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806
[15] G. H.-G. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421–444. https://doi.org/10.1137/S1052623495290179
[16] S. Kamimura, W. Takahashi, Approximating solutions of maximal monotone operators in Hilbert spaces, J. Approx. Theory, 106 (2000), 226–240. https://doi.org/10.1006/jath.2000.3493
[17] A. Adamu, D. Kitkuan, P. Kumam, A. Padcharoen, T. Seangwattana, Approximation method for monotone inclusion problems in real Banach spaces with applications, J. Inequal. Appl., 2022 (2022), 70. https://doi.org/10.1186/s13660-022-02805-0
[18] O. A. Boikanyo, The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators, Abstr. Appl. Anal., 2016 (2016), 2371857. https://doi.org/10.1155/2016/2371857
[19] T. M. M. Sow, A modified proximal point algorithm for solving variational inclusion problem in real Hilbert spaces, e-J. Anal. Appl. Math., 2020 (2020), 28–39. https://doi.org/10.2478/ejaam-2020-0003
[20] W. Khuangsatung, A. Kangtunyakarn, Algorithm of a new variational inclusion problems and strictly pseudononspreading mapping with application, Fixed Point Theory Appl., 2014 (2014), 209. https://doi.org/10.1186/1687-1812-2014-209
[21] W. Khuangsatung, A. Kangtunyakarn, A theorem of variational inclusion problems and various nonlinear mappings, Appl. Anal., 97 (2017), 1172–1186. https://doi.org/10.1080/00036811.2017.1307965
[22] T. M. M. Sow, An iterative algorithm for solving equilibrium problems, variational inequalities and fixed point problems of multivalued quasi-nonexpansive mappings, Appl. Set-Valued Anal. Optim., 1 (2019), 171–185. https://doi.org/10.23952/asvao.1.2019.2.06
[23] L. Ambrosio, N. Gigli, G. Savaré, Gradient flows: In metric spaces and in the space of probability measures, Basel: Birkhäuser, 2005. https://doi.org/10.1007/b137080
[24] J. Jost, Convex functionals and generalized harmonic maps into spaces of nonpositive curvature, Comment. Math. Helvetici, 70 (1995), 659–673. https://doi.org/10.1007/BF02566027
[25] I. Miyadera, Nonlinear semigroups, Translations of Mathematical Monographs, Vol. 109, Rhode Island: American Mathematical Society, 1992. https://doi.org/10.1090/mmono/109
[26] S. S. Chang, Y. K. Tang, L. Wang, Y. G. Xu, Y. H. Zhao, G. Wang, Convergence theorems for some multi-valued generalized nonexpansive mappings, Fixed Point Theory Appl., 2014 (2014), 33. https://doi.org/10.1186/1687-1812-2014-33
[27] W. Takahashi, Nonlinear functional analysis, Yokohama: Yokohama Publishers, 2000.
[28] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127–1138. https://doi.org/10.1016/0362-546X(91)90200-K