
Image restoration by using a modified proximal point algorithm

• In this paper, we establish a modified proximal point algorithm for solving the common problem between convex constrained minimization and modified variational inclusion problems. The proposed algorithm is based on the proximal point algorithm in [19] and the method of Khuangsatung and Kangtunyakarn in [21], under suitable conditions in Hilbert spaces. We not only present the proposed algorithm but also prove a strong convergence theorem for it. The proposed algorithm can be used to solve image restoration problems in which the images have suffered various blurring operations. Additionally, we compare the signal-to-noise ratio (SNR) of the proposed algorithm against that of Khuangsatung and Kangtunyakarn's method in [21] in order to compare image quality.

Citation: Areerat Arunchai, Thidaporn Seangwattana, Kanokwan Sitthithakerngkiet, Kamonrat Sombut. Image restoration by using a modified proximal point algorithm. AIMS Mathematics, 2023, 8(4): 9557-9575. doi: 10.3934/math.2023482




Convex optimization, a branch of mathematical optimization, examines the problem of minimizing convex functions over convex sets, which has applications in a variety of fields, including image processing, automatic control systems, data analysis, and finance. The convex minimization problem is to determine $x^*$ in a closed convex subset $K$ of a real Hilbert space $H$ that solves

$$\min_{x\in K} h(x), \tag{1.1}$$

where $h:K\to\mathbb{R}$ is a convex function; $\arg\min_{y\in K}h(y)$ denotes the set of minimizers of $h$ on $K$. Note that if $h$ is a differentiable function on $K$, then a solution $x^*$ of (1.1) is also a solution of the variational inequality problem (see [1]), that is, $x^*\in K$ satisfies $\langle\nabla h(x^*),\,x-x^*\rangle\ge 0$ for all $x\in K$. In 1970 and 1976, Martinet [2] and Rockafellar [3] presented a tool for finding the solution of (1.1), namely the proximal point algorithm (PPA), which is given by

$$\begin{cases} x_1\in H,\\ x_{n+1}=\arg\min_{y\in H}\Big[h(y)+\dfrac{1}{2\lambda_n}\|x_n-y\|^2\Big], \end{cases}$$

where $\lambda_n>0$ for every $n\ge 1$. They also demonstrated that this algorithm yields a sequence $\{x_n\}$ that converges weakly to a minimizer of $h$. Since then, a growing number of researchers have investigated solutions of the convex minimization problem (see [4,5,6,7,8,9]).
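To make the iteration concrete, the following is a minimal numerical sketch of the PPA in one dimension. The choice $h(y)=|y|$ is a hypothetical example, not one from the paper; its inner minimization has the closed-form soft-thresholding solution.

```python
import numpy as np

def prox_abs(x, lam):
    # argmin_y [ |y| + (1/(2*lam)) * (y - x)^2 ], i.e. soft-thresholding of x
    return np.sign(x) * max(abs(x) - lam, 0.0)

# PPA iteration: x_{n+1} = argmin_y [ h(y) + (1/(2*lam_n)) * |x_n - y|^2 ]
x = 5.0                   # starting point x_1
for n in range(1, 51):
    lam_n = 1.0           # any sequence lam_n > 0 works here
    x = prox_abs(x, lam_n)
print(x)                  # approaches 0, the unique minimizer of h(y) = |y|
```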

Another interesting problem is the variational inclusion problem (VIP). That is, find $x^*\in H$ such that

$$0\in Fx^*+Gx^*, \tag{1.2}$$

where $F$ is a single-valued operator on $H$ and $G:H\to 2^H$ is a multi-valued mapping. We denote by $(F+G)^{-1}(0)$ the set of solutions of (1.2). When $F\equiv 0$, (1.2) becomes the monotone inclusion problem (see [3]), which is a generalization of the variational inequality problem (for more information, see [10]). It is well known that problem (1.2) provides a general and convenient framework for the unified study of optimal solutions in many optimization-related areas such as mathematical programming, variational inequalities, optimal control and many others. There are many methods for solving the VIP (see [11,12,13,14]). The forward-backward splitting method is one of the most well known (see [12,13]); it is given by the following:

$$x_{n+1}=(I+\lambda_nG)^{-1}(I-\lambda_nF)x_n, \tag{1.3}$$

where $\lambda_n>0$, $D(G)\subset D(F)$, $F$ is inverse strongly monotone and $G$ is maximal monotone. However, method (1.3) generates a sequence $\{x_n\}$ that converges to a VIP solution only weakly. Subsequently, researchers focused on adapting this scheme to find zero points of sums of monotone operators (for more detail, see [4,6,12,15,16,17]). In 2016, Boikanyo [18] introduced the viscosity approximation forward-backward splitting technique, a development of the proximal point methodology for estimating a zero point of the sum of a coercive operator $F$ and a maximal monotone operator $G$.
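As an illustration of scheme (1.3), here is a hedged sketch for the special case $F=\nabla f$ with $f(x)=\frac{1}{2}\|Ax-b\|^2$ and $G=\partial(\kappa\|\cdot\|_1)$, where the resolvent $(I+\lambda G)^{-1}$ reduces to soft-thresholding. The matrix $A$, vector $b$ and parameter $\kappa$ are illustrative assumptions of ours, anticipating the image restoration setting of Section 4.

```python
import numpy as np

def forward_backward(A, b, kappa=0.01, iters=200):
    """x_{n+1} = (I + lam*G)^{-1} (I - lam*F) x_n with
    F = gradient of 0.5*||Ax - b||^2 and G = subdifferential of kappa*||x||_1."""
    # A fixed step 0 < lam <= 1/L with L = ||A^T A|| keeps the forward step stable
    lam = 1.0 / np.linalg.norm(A.T @ A, 2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x - lam * (A.T @ (A @ x - b))                          # forward (explicit) step
        x = np.sign(y) * np.maximum(np.abs(y) - lam * kappa, 0.0)  # backward (resolvent) step
    return x
```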

Recently, Sow [19] focused on establishing an algorithm, based on the methods of [3] and [18], for determining a point in the common solution set of a convex optimization problem and the VIP, that is, $\arg\min_{u\in K}h(u)\cap(F+G)^{-1}(0)$, where $F:K\to H$ is $\alpha$-inverse strongly monotone, $G$ is a maximal monotone operator on $H$, $h:K\to(-\infty,+\infty]$ is convex lower semi-continuous and $T:K\to K$ is a $b$-contraction mapping. The method is given by

$$\begin{cases} x_0\in K,\\ u_n=\arg\min_{u\in K}\Big[h(u)+\dfrac{1}{2\lambda_n}\|u-x_n\|^2\Big],\\ x_{n+1}=\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}(u_n-\theta_nFu_n), \end{cases}$$

where $J^G_{\theta_n}=(I+\theta_nG)^{-1}$. Additionally, the sequence $\{x_n\}$ generated by this method converges strongly to a common solution.

In 2014, the modified variational inclusion problem (MVIP) was first proposed by Khuangsatung and Kangtunyakarn [20]. The problem is to find $x^*\in H$ such that

$$0\in\sum_{i=1}^{N}a_iF_ix^*+Gx^*, \tag{1.4}$$

where, for $i=1,2,\dots,N$, $a_i\in(0,1)$ with the condition $\sum_{i=1}^{N}a_i=1$, $F_i:H\to H$ is $\alpha_i$-inverse strongly monotone and $G:H\to 2^H$ is a maximal monotone mapping. Problem (1.4) reduces to problem (1.2) if $F_i\equiv F$ for all $i=1,2,\dots,N$. Moreover, they proved a strong convergence theorem for finding a common element of the set of fixed points of a $\kappa$-strictly pseudononspreading mapping, the set of solutions of a finite family of variational inclusion problems and the set of solutions of a finite family of equilibrium problems in Hilbert space. Afterwards, Khuangsatung and Kangtunyakarn [21] introduced an iterative method for finding common fixed points of a finite family of nonexpansive mappings $T_i$ and solutions of a finite family of variational inclusion problems in Hilbert spaces under the condition $\sum_{i=1}^{N}a_i=\sum_{i=1}^{N}\delta_i=1$. Their method is given by the following:

$$\begin{cases} z_n^i=b_nx_n+(1-b_n)T_ix_n,\quad n\ge 1,\\ x_{n+1}=\alpha_nf(x_n)+\beta_nx_n+\gamma_nJ_{M,\lambda}\Big(I-\lambda\sum_{i=1}^{N}\delta_iF_i\Big)x_n+\gamma_n\sum_{i=1}^{N}a_iz_n^i. \end{cases}$$

Furthermore, they demonstrated that the generated sequence $\{x_n\}$ converges strongly to a common element of the solution sets of these problems.

Motivated by the ideas of [19,20,21], we establish a modified proximal point algorithm to solve a common problem between the convex constrained optimization problem and the modified variational inclusion problem by combining the algorithms of [19,21] and using the condition $\sum_{i=1}^{N}a_i=1$. Under appropriate conditions, a strong convergence theorem is presented in Hilbert spaces. Eventually, the proposed algorithm is applied to image restoration problems. Image restoration is an important problem in high-level image processing. During data collection, images usually suffer degradation: blurring, information loss due to sampling, quantization effects, and different noise sources can all be part of the degradation. The goal of image restoration is to estimate the original image from the degraded data. Thus, the proposed algorithm can be used to solve image restoration problems in which the images have suffered various blurring operations. We also compare image quality by using the signal-to-noise ratio (SNR). Numerical experiments show that the proposed algorithm outperforms Khuangsatung and Kangtunyakarn's method when applied to image restoration.

The work is organized as follows: basic lemmas and definitions are compiled in Section 2; our algorithm is presented in detail in Section 3; Section 4 discusses the numerical experiments; and the conclusion is given in the last section.

In this section we collect the definitions and lemmas that are used in the main results. Suppose that $F$ is a single-valued nonlinear mapping of $K$ into $H$. Then $F$ is $\alpha$-inverse strongly monotone if there exists $\alpha>0$ such that $\langle Fx-Fy,\,x-y\rangle\ge\alpha\|Fx-Fy\|^2$ for every $x,y\in K$. Clearly, every $\alpha$-inverse strongly monotone mapping is monotone and Lipschitz continuous. Throughout this paper, we assume that $G:H\to 2^H$, $h:K\to(-\infty,+\infty]$ and $T:K\to K$.

Lemma 1. [22] Assume that $F$ is an $\alpha$-inverse strongly monotone mapping on $H$. Then $I-\theta F$ is nonexpansive for every $\theta\in[0,2\alpha]$.

Lemma 2. [23] Let $T$ be a proper convex lower semicontinuous function. Then for all $x,y\in H$ and $\lambda>0$, the inequality

$$T(y)\ge\frac{1}{2\lambda}\|J^T_\lambda x-y\|^2-\frac{1}{2\lambda}\|x-y\|^2+\frac{1}{2\lambda}\|x-J^T_\lambda x\|^2+T(J^T_\lambda x) \tag{2.1}$$

holds.

For $\lambda>0$ and a maximal monotone mapping $G$, the resolvent operator $J^G_\lambda$ is defined by $J^G_\lambda x=(I+\lambda G)^{-1}(x)$ for every $x\in H$. The operator $J^G_\lambda$ is single-valued, nonexpansive and 1-inverse strongly monotone. Clearly, $x$ is a VIP solution if and only if $x$ is a fixed point of $J^G_\lambda(I-\lambda F)$, for every $\lambda>0$ (see [24]).

The Moreau-Yosida resolvent of $f$ is defined by

$$J^f_\lambda x=\arg\min_{u\in H}\Big[f(u)+\frac{1}{2\lambda}\|x-u\|^2\Big],$$

for every $x\in H$ and $\lambda>0$. The set of minimizers of $f$ coincides with the set of fixed points of its associated resolvent, as seen in [4]. Moreover, the resolvent $J^f_\lambda$ is nonexpansive.
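For a generic convex $f$, the resolvent can be evaluated numerically straight from its definition. The sketch below is an illustration under our own assumptions (the helper name and the quadratic test function are ours, not the paper's); it also exhibits the fixed point property just mentioned: the minimizer of $f$ is left unchanged, while other points are pulled toward it.

```python
import numpy as np
from scipy.optimize import minimize

def moreau_yosida_resolvent(f, x, lam):
    # J_lam^f(x) = argmin_u [ f(u) + ||x - u||^2 / (2*lam) ]
    obj = lambda u: f(u) + np.sum((x - u) ** 2) / (2.0 * lam)
    return minimize(obj, x0=np.asarray(x, dtype=float)).x

f = lambda u: np.sum((u - 3.0) ** 2)   # convex test function, minimizer u = 3
print(moreau_yosida_resolvent(f, np.array([3.0]), lam=1.0))   # stays at 3 (fixed point)
print(moreau_yosida_resolvent(f, np.array([10.0]), lam=1.0))  # moves from 10 toward 3
```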

Lemma 3. [25] Suppose that $T$ is a proper lower semicontinuous function. Then for every $\mu>0$ and $r>0$, the identity

$$J^T_rx=J^T_\mu\Big(\frac{\mu}{r}x+\Big(1-\frac{\mu}{r}\Big)J^T_rx\Big)$$

holds.

The demiclosedness of a nonlinear operator $\Gamma$ often plays a role in discussing the convergence of fixed point iterative algorithms.

Lemma 4. [26] Suppose that $\Gamma$ is nonexpansive with $\mathrm{Fix}(\Gamma)\neq\emptyset$. Then $I-\Gamma$ is demiclosed; that is, whenever $\{x_n\}$ converges weakly to $x$ and $\{(I-\Gamma)x_n\}$ converges strongly to $y$, it follows that $(I-\Gamma)x=y$.

Lemma 5. [11] Suppose that $F$ is a monotone Lipschitz continuous mapping and $G$ is a maximal monotone mapping on $H$. Then the mapping $G+F$ is maximal monotone.

Lemma 6. [27] The following statements hold:

(i) $\|u\|^2-\|v\|^2-2\langle u-v,\,v\rangle=\|u-v\|^2$ for all $u,v\in H$;

(ii) $\|u+v\|^2\le\|u\|^2+2\langle v,\,u+v\rangle$ for all $u,v\in H$;

(iii) for all $\alpha,\beta\in[0,1]$ with $\alpha+\beta=1$,

$$\|\alpha u+\beta v\|^2=\alpha\|u\|^2+\beta\|v\|^2-\alpha\beta\|u-v\|^2.$$
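Identity (iii) is used repeatedly in the convergence analysis below; as a quick sanity check (our own illustration, with random vectors), it can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)
a = 0.3
b = 1.0 - a                                  # alpha + beta = 1
lhs = np.linalg.norm(a * u + b * v) ** 2
rhs = (a * np.linalg.norm(u) ** 2 + b * np.linalg.norm(v) ** 2
       - a * b * np.linalg.norm(u - v) ** 2)
print(np.isclose(lhs, rhs))                  # True
```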

Lemma 7. [28] Suppose that $\{\beta_n\}$ is a sequence of nonnegative real numbers satisfying $\beta_{n+1}\le(1-\epsilon_n)\beta_n+\gamma_n$ for every $n\ge 0$, where $\{\gamma_n\}\subset(-\infty,\infty)$ and $\{\epsilon_n\}\subset(0,1)$ are sequences such that

(i) $\sum_{n=0}^{\infty}\epsilon_n=\infty$,

(ii) $\limsup_{n\to\infty}\gamma_n/\epsilon_n\le 0$ or $\sum_{n=0}^{\infty}|\gamma_n|<\infty$.

Then $\lim_{n\to\infty}\beta_n=0$.

In order to find a common element of the set of minimizers of a convex function and the set of solutions of the MVIP, we propose and analyze a proximal iterative technique employing inverse strongly monotone and maximal monotone mappings. Throughout this section, the following assumptions are made.

    Assumption

(A1) $T$ is a $b$-contraction mapping on $K$ and $h:K\to(-\infty,+\infty]$ is a proper convex lower semicontinuous function.

(A2) $F_i:K\to H$ is $\alpha_i$-inverse strongly monotone for every $i$ and $G:K\to 2^H$ is a maximal monotone operator.

(A3) $\Omega:=\arg\min_{u\in K}h(u)\cap\big(\sum_{i=1}^{N}\delta_iF_i+G\big)^{-1}(0)\neq\emptyset$.

Algorithm 1. Choose $x_0\in K$ and sequences $\{\alpha_n\},\{\lambda_n\},\{\theta_n\}\subset(0,1)$ with $\lambda_n\ge\lambda>0$.

Step 1. Put $u_n$ as

$$u_n=\arg\min_{u\in K}\Big[h(u)+\frac{1}{2\lambda_n}\|u-x_n\|^2\Big].$$

    Step 2. Compute

$$x_{n+1}=\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}\Big(u_n-\theta_n\sum_{i=1}^{N}\delta_iF_iu_n\Big). \tag{3.1}$$

    Set n = n + 1, and go back to Step 1.
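The structure of Algorithm 1 can be sketched in code as follows. This is a schematic rendering under our own interface assumptions: `prox_h` evaluates the Moreau-Yosida resolvent of $h$ (Step 1), `resolvent_G` evaluates $(I+\theta G)^{-1}$, and `grads[i]` plays the role of $F_i$; the parameter defaults mirror the choices used in Section 4.

```python
import numpy as np

def algorithm1(x0, prox_h, T, grads, deltas, resolvent_G,
               lam=0.5, theta=0.1, iters=100):
    # prox_h(x, lam)     : argmin_u [ h(u) + ||u - x||^2 / (2*lam) ]   (Step 1)
    # T(x)               : a b-contraction mapping
    # grads[i](x)        : the operator F_i
    # resolvent_G(y, th) : (I + th*G)^{-1} y
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        alpha_n = 1.0 / (2 * n + 2)        # alpha_n -> 0 and sum alpha_n = infinity
        u = prox_h(x, lam)                 # Step 1
        v = u - theta * sum(d * F(u) for d, F in zip(deltas, grads))
        x = alpha_n * T(x) + (1 - alpha_n) * resolvent_G(v, theta)   # Step 2, (3.1)
    return x
```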

    Lemma 8. Algorithm 1 generates a bounded sequence {xn}.

Proof. Since $(I-T)$ is strongly monotone and $\Omega$ is closed and convex, there exists exactly one solution $z\in\Omega$ of the variational inequality (3.3) below. Since $z\in\Omega$, we have $h(z)\le h(u)$ for every $u\in K$, so

$$h(z)+\frac{1}{2\lambda_n}\|z-z\|^2\le h(u)+\frac{1}{2\lambda_n}\|u-z\|^2.$$

From the definition of the Moreau-Yosida resolvent, we obtain $J^h_{\lambda_n}z=z$. This implies that

$$\|u_n-z\|=\|J^h_{\lambda_n}x_n-z\|=\|J^h_{\lambda_n}x_n-J^h_{\lambda_n}z\|\le\|x_n-z\|.$$

By (3.1), Lemma 1 and $z=J^G_{\theta_n}\big(I-\theta_n\sum_{i=1}^{N}\delta_iF_i\big)z$, we see that

$$\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^{N}\delta_iF_i\Big)u_n-z\Big\|\le\|u_n-z\|\le\|x_n-z\|, \tag{3.2}$$

for every $n\ge 0$. We can conclude that

$$\begin{aligned}
\|x_{n+1}-z\|&=\Big\|\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^N\delta_iF_i\Big)u_n-z\Big\|\\
&=\Big\|\alpha_n(T(x_n)-T(z))+(1-\alpha_n)\Big(J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^N\delta_iF_i\Big)u_n-z\Big)+\alpha_n(T(z)-z)\Big\|\\
&\le\alpha_n\|T(x_n)-T(z)\|+(1-\alpha_n)\Big\|J^G_{\theta_n}\Big(I-\theta_n\sum_{i=1}^N\delta_iF_i\Big)u_n-z\Big\|+\alpha_n\|T(z)-z\|\\
&\le\alpha_nb\|x_n-z\|+(1-\alpha_n)\|u_n-z\|+\alpha_n\|T(z)-z\|\\
&\le\alpha_nb\|x_n-z\|+(1-\alpha_n)\|x_n-z\|+\alpha_n\|T(z)-z\|\\
&=(1-\alpha_n(1-b))\|x_n-z\|+\alpha_n\|T(z)-z\|\\
&\le\max\Big\{\|x_n-z\|,\,\frac{\|T(z)-z\|}{1-b}\Big\}.
\end{aligned}$$

By induction on $n$, we deduce that

$$\|x_n-z\|\le\max\Big\{\|x_0-z\|,\,\frac{\|T(z)-z\|}{1-b}\Big\},\quad\forall n\ge 1.$$

Hence $\{x_n\}$ is bounded.

    Theorem 1. Assume that

(i) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=0}^{\infty}\alpha_n=\infty$,

(ii) $\theta_n\in[a,b]\subset(0,\min\{1,2\alpha\})$, and

(iii) $\sum_{i=1}^{N}\delta_i=1$.

Then Algorithm 1 generates a sequence $\{x_n\}$ that converges strongly to $z\in\Omega$, which is the unique solution of the variational inequality

$$\langle z-T(z),\,z-q\rangle\le 0,\quad\forall q\in\Omega. \tag{3.3}$$

Proof. We consider two cases. For brevity, throughout the proof we write $A:=\sum_{i=1}^{N}\delta_iF_i$.

Case 1. Assume that there exists $n_0\in\mathbb{N}$ such that $\{\|x_n-z\|\}$ is decreasing for every $n\ge n_0$. Since $\{\|x_n-z\|\}$ is then monotone and bounded, it is convergent. Thus

$$\lim_{n\to\infty}\big(\|x_n-z\|^2-\|x_{n+1}-z\|^2\big)=0. \tag{3.4}$$

By Lemma 2 and $h(z)\le h(u_n)$,

$$\|x_n-z\|^2\ge\|u_n-z\|^2+\|x_n-u_n\|^2. \tag{3.5}$$

By (3.1), (3.5) and the convexity of $\|\cdot\|^2$, we obtain

$$\begin{aligned}
\|x_{n+1}-z\|^2&=\|\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&\le(1-\alpha_n)\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2+\alpha_n\|T(x_n)-z\|^2\\
&\le(1-\alpha_n)\|u_n-z\|^2+\alpha_n\|T(x_n)-z\|^2\\
&\le(1-\alpha_n)\big(\|x_n-z\|^2-\|x_n-u_n\|^2\big)+\alpha_n\|T(x_n)-z\|^2.
\end{aligned}$$

That is,

$$\begin{aligned}
(1-\alpha_n)\|x_n-u_n\|^2&\le(1-\alpha_n)\|x_n-z\|^2-\|x_{n+1}-z\|^2+\alpha_n\|T(x_n)-z\|^2\\
&=\|x_n-z\|^2-\alpha_n\|x_n-z\|^2-\|x_{n+1}-z\|^2+\alpha_n\|T(x_n)-z\|^2.
\end{aligned}$$

Thus, $(1-\alpha_n)\|x_n-u_n\|^2\le\|x_n-z\|^2-\|x_{n+1}-z\|^2+\alpha_n\|T(x_n)-z\|^2$. From (3.4) and $\alpha_n\to 0$, it follows that $\lim_{n\to\infty}\|x_n-u_n\|^2=0$. Using (3.1) and Lemma 1,

$$\begin{aligned}
\|x_{n+1}-z\|^2&=\|\alpha_n(T(x_n)-z)+(1-\alpha_n)(J^G_{\theta_n}(I-\theta_nA)u_n-z)\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|J^G_{\theta_n}(I-\theta_nA)u_n-J^G_{\theta_n}(I-\theta_nA)z\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|(u_n-\theta_nAu_n)-(z-\theta_nAz)\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|u_n-z\|^2-(1-\alpha_n)\theta_n(2\alpha-\theta_n)\|Au_n-Az\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|u_n-z\|^2-(1-\alpha_n)a(2\alpha-b)\|Au_n-Az\|^2.
\end{aligned}$$

    Therefore,

$$\begin{aligned}
(1-\alpha_n)a(2\alpha-b)\|Au_n-Az\|^2&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|u_n-z\|^2-\|x_{n+1}-z\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|x_n-z\|^2-\|x_{n+1}-z\|^2\\
&\le\alpha_n\big(\|T(x_n)-z\|^2-\|x_n-z\|^2\big)+\|x_n-z\|^2-\|x_{n+1}-z\|^2.
\end{aligned}$$

Since $\alpha_n\to 0$ and $\{x_n\}$ is bounded, (3.4) yields

$$\lim_{n\to\infty}\|Au_n-Az\|^2=0. \tag{3.6}$$

Since $J^G_{\theta_n}$ is 1-inverse strongly monotone, from (3.1) we get

$$\begin{aligned}
\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2&=\|J^G_{\theta_n}(I-\theta_nA)u_n-J^G_{\theta_n}(I-\theta_nA)z\|^2\\
&\le\langle(I-\theta_nA)u_n-(I-\theta_nA)z,\,J^G_{\theta_n}(I-\theta_nA)u_n-z\rangle\\
&=\tfrac12\big[\|(I-\theta_nA)u_n-(I-\theta_nA)z\|^2+\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&\qquad-\|(I-\theta_nA)u_n-(I-\theta_nA)z-(J^G_{\theta_n}(I-\theta_nA)u_n-z)\|^2\big]\\
&\le\tfrac12\big[\|u_n-z\|^2+\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&\qquad-\|u_n-J^G_{\theta_n}(I-\theta_nA)u_n-\theta_n(Au_n-Az)\|^2\big]\\
&=\tfrac12\big[\|u_n-z\|^2+\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2-\|u_n-J^G_{\theta_n}(I-\theta_nA)u_n\|^2\\
&\qquad-\theta_n^2\|Au_n-Az\|^2+2\theta_n\langle u_n-J^G_{\theta_n}(I-\theta_nA)u_n,\,Au_n-Az\rangle\big]\\
&\le\tfrac12\big[\|u_n-z\|^2+\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2-\|u_n-J^G_{\theta_n}(I-\theta_nA)u_n\|^2\\
&\qquad-\theta_n^2\|Au_n-Az\|^2+2\theta_n\|u_n-J^G_{\theta_n}(I-\theta_nA)u_n\|\,\|Au_n-Az\|\big].
\end{aligned}$$

    It implies

$$\begin{aligned}
\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2-\tfrac12\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2&\le\tfrac12\big[\|u_n-z\|^2-\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad-\theta_n^2\|Au_n-Az\|^2\\
&\qquad+2\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|\big].
\end{aligned}$$

    Therefore

$$\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\le\|u_n-z\|^2-\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2+2\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|.$$

By the definition of $x_{n+1}$,

$$\begin{aligned}
\|x_{n+1}-z\|^2&=\|\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&=\|\alpha_n(T(x_n)-z)+(1-\alpha_n)(J^G_{\theta_n}(I-\theta_nA)u_n-z)\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\big[\|u_n-z\|^2-\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad+2\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|\big]\\
&=\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|u_n-z\|^2-(1-\alpha_n)\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad+2(1-\alpha_n)\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|.
\end{aligned}$$

    Consider

$$\begin{aligned}
\|x_{n+1}-z\|^2&\le\alpha_n\|T(x_n)-z\|^2+(1-\alpha_n)\|u_n-z\|^2-(1-\alpha_n)\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad+2(1-\alpha_n)\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|\\
&\le\alpha_n\|T(x_n)-z\|^2+\|x_n-z\|^2-\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2+\alpha_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad+2(1-\alpha_n)\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|.
\end{aligned}$$

    Hence

$$\begin{aligned}
\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2&\le\alpha_n\|T(x_n)-z\|^2+\|x_n-z\|^2-\|x_{n+1}-z\|^2+\alpha_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2\\
&\qquad+2(1-\alpha_n)\theta_n\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|\,\|Au_n-Az\|.
\end{aligned}$$

Since $\alpha_n\to 0$ as $n\to\infty$, using inequality (3.4) and (3.6), we have

$$\lim_{n\to\infty}\|J^G_{\theta_n}(I-\theta_nA)u_n-u_n\|^2=0. \tag{3.7}$$

Next, we demonstrate that $\limsup_{n\to+\infty}\langle z-T(z),z-x_n\rangle\le 0$. Because $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ that converges weakly to some $x^*\in K$ and satisfies $\limsup_{n\to+\infty}\langle z-T(z),z-x_n\rangle=\lim_{k\to+\infty}\langle z-T(z),z-x_{n_k}\rangle$. From Lemma 3 and Step 1 of Algorithm 1, we see that

$$\begin{aligned}
\|x_n-J^h_\lambda x_n\|&\le\|x_n-u_n\|+\|u_n-J^h_\lambda x_n\|\\
&=\|u_n-x_n\|+\|J^h_{\lambda_n}x_n-J^h_\lambda x_n\|\\
&=\|u_n-x_n\|+\Big\|J^h_\lambda\Big(\frac{\lambda}{\lambda_n}x_n+\Big(1-\frac{\lambda}{\lambda_n}\Big)J^h_{\lambda_n}x_n\Big)-J^h_\lambda x_n\Big\|\\
&\le\|u_n-x_n\|+\Big(1-\frac{\lambda}{\lambda_n}\Big)\|J^h_{\lambda_n}x_n-x_n\|\\
&=\|u_n-x_n\|+\Big(1-\frac{\lambda}{\lambda_n}\Big)\|u_n-x_n\|\\
&=\Big(2-\frac{\lambda}{\lambda_n}\Big)\|u_n-x_n\|.
\end{aligned}$$

    Hence,

$$\lim_{n\to\infty}\|x_n-J^h_\lambda x_n\|=0. \tag{3.8}$$

By (3.8), Lemma 4 and the nonexpansiveness of $J^h_\lambda$, we have $x^*\in\mathrm{Fix}(J^h_\lambda)=\arg\min_{u\in K}h(u)$. Now we show that $x^*\in(A+G)^{-1}(0)$. By Lemma 5, $G+A$ is maximal monotone. Let $(v,u)$ belong to the graph of $G+A$, that is, $u-Av\in G(v)$. Set $z_n:=J^G_{\theta_n}(u_n-\theta_nAu_n)$. Since $z_{n_k}=J^G_{\theta_{n_k}}(u_{n_k}-\theta_{n_k}Au_{n_k})$, we have $u_{n_k}-\theta_{n_k}Au_{n_k}\in(I+\theta_{n_k}G)z_{n_k}$, i.e., $\frac{1}{\theta_{n_k}}(u_{n_k}-z_{n_k}-\theta_{n_k}Au_{n_k})\in G(z_{n_k})$. By the monotonicity of $G$, we obtain

$$\Big\langle v-z_{n_k},\,u-Av-\frac{1}{\theta_{n_k}}\big(u_{n_k}-z_{n_k}-\theta_{n_k}Au_{n_k}\big)\Big\rangle\ge 0.$$

    Then

$$\langle v-z_{n_k},u\rangle\ge\Big\langle v-z_{n_k},\,Av+\frac{1}{\theta_{n_k}}\big(u_{n_k}-z_{n_k}-\theta_{n_k}Au_{n_k}\big)\Big\rangle.$$

    Hence

$$\begin{aligned}
\langle v-z_{n_k},u\rangle&\ge\Big\langle v-z_{n_k},\,Av+\frac{1}{\theta_{n_k}}(u_{n_k}-z_{n_k})-Au_{n_k}\Big\rangle\\
&=\langle v-z_{n_k},\,Av-Az_{n_k}\rangle+\langle v-z_{n_k},\,Az_{n_k}-Au_{n_k}\rangle+\Big\langle v-z_{n_k},\,\frac{1}{\theta_{n_k}}(u_{n_k}-z_{n_k})\Big\rangle\\
&\ge\langle v-z_{n_k},\,Az_{n_k}-Au_{n_k}\rangle+\Big\langle v-z_{n_k},\,\frac{1}{\theta_{n_k}}(u_{n_k}-z_{n_k})\Big\rangle,
\end{aligned}$$

where the last inequality follows from the monotonicity of $A$.

From (3.7) we have $z_n-u_n\to 0$, and hence $Az_n-Au_n\to 0$ by the Lipschitz continuity of $A$; moreover, $z_{n_k}\rightharpoonup x^*$. We therefore obtain

$$0\le\langle v-x^*,u\rangle=\lim_{k\to\infty}\langle v-z_{n_k},u\rangle.$$

Since $G+A$ is maximal monotone, this gives $0\in(G+A)(x^*)$, that is, $x^*\in(A+G)^{-1}(0)$. Therefore, $x^*\in(A+G)^{-1}(0)\cap\arg\min_{u\in K}h(u)=\Omega$. Since $z$ solves (3.3), we get

$$\limsup_{n\to+\infty}\langle z-T(z),z-x_n\rangle=\lim_{k\to+\infty}\langle z-T(z),z-x_{n_k}\rangle=\langle z-T(z),z-x^*\rangle\le 0.$$

Finally, we show that $x_n\to z$. By Lemma 6 and (3.1),

$$\begin{aligned}
\|x_{n+1}-z\|^2&=\|\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}(I-\theta_nA)u_n-z\|^2\\
&=\|\alpha_n(T(x_n)-T(z))+(1-\alpha_n)\big[J^G_{\theta_n}(I-\theta_nA)u_n-z\big]+\alpha_n(T(z)-z)\|^2\\
&\le\|\alpha_n(T(x_n)-T(z))+(1-\alpha_n)\big[J^G_{\theta_n}(I-\theta_nA)u_n-z\big]\|^2+2\alpha_n\langle T(z)-z,\,x_{n+1}-z\rangle.
\end{aligned}$$

    Consider

$$\begin{aligned}
\|x_{n+1}-z\|^2&\le\|\alpha_n(T(x_n)-T(z))+(1-\alpha_n)\big[J^G_{\theta_n}(I-\theta_nA)u_n-z\big]\|^2+2\alpha_n\langle z-T(z),z-x_{n+1}\rangle\\
&\le\alpha_nb\|x_n-z\|^2+(1-\alpha_n)\|u_n-z\|^2+2\alpha_n\langle z-T(z),z-x_{n+1}\rangle\\
&\le\alpha_nb\|x_n-z\|^2+(1-\alpha_n)\|x_n-z\|^2+2\alpha_n\langle z-T(z),z-x_{n+1}\rangle\\
&=(1-\alpha_n(1-b))\|x_n-z\|^2+2\alpha_n\langle z-T(z),z-x_{n+1}\rangle.
\end{aligned}$$

Therefore, by Lemma 7, $x_n\to z$.

Case 2. Suppose that the sequence $\{\|x_n-z\|\}$ is not eventually decreasing. Put

$$\Upsilon_n=\|x_n-z\|^2$$

and define a mapping $\zeta:\mathbb{N}\to\mathbb{N}$, for every $n\ge n_0$, by

$$\zeta(n)=\max\{k\in\mathbb{N}:k\le n,\ \Upsilon_k\le\Upsilon_{k+1}\}.$$

Then $\zeta(n)\to\infty$ and $\Upsilon_{\zeta(n)}\le\Upsilon_{\zeta(n)+1}$ for $n\ge n_0$. Arguing as in Case 1, we can show that

$$\limsup_{n\to+\infty}\langle z-T(z),z-x_{\zeta(n)}\rangle\le 0$$

and that $\{x_{\zeta(n)}\}_{n\ge1}$ is bounded. For every $n\ge n_0$,

$$\begin{aligned}
0&\le\|x_{\zeta(n)+1}-z\|^2-\|x_{\zeta(n)}-z\|^2\\
&\le\alpha_{\zeta(n)}\big[-(1-b)\|x_{\zeta(n)}-z\|^2+2\langle z-T(z),z-x_{\zeta(n)+1}\rangle\big]\\
&=-\alpha_{\zeta(n)}(1-b)\|x_{\zeta(n)}-z\|^2+2\alpha_{\zeta(n)}\langle z-T(z),z-x_{\zeta(n)+1}\rangle.
\end{aligned}$$

Thus $2\alpha_{\zeta(n)}\langle z-T(z),z-x_{\zeta(n)+1}\rangle\ge\alpha_{\zeta(n)}(1-b)\|x_{\zeta(n)}-z\|^2$. Therefore

$$\|x_{\zeta(n)}-z\|^2\le\frac{2}{1-b}\langle z-T(z),z-x_{\zeta(n)+1}\rangle.$$

It follows that $\lim_{n\to\infty}\|x_{\zeta(n)}-z\|^2=0$, and hence $\lim_{n\to\infty}\Upsilon_{\zeta(n)}=\lim_{n\to\infty}\Upsilon_{\zeta(n)+1}=0$. For $n\ge n_0$ we have $\Upsilon_{\zeta(n)}\le\Upsilon_{\zeta(n)+1}$, while $\Upsilon_j>\Upsilon_{j+1}$ for $\zeta(n)+1\le j\le n$, so that $n>\zeta(n)$. In particular, for every $n\ge n_0$, $\Upsilon_n\le\max\{\Upsilon_{\zeta(n)},\Upsilon_{\zeta(n)+1}\}=\Upsilon_{\zeta(n)+1}$. Thus $0\le\lim_{n\to\infty}\Upsilon_n\le\lim_{n\to\infty}\Upsilon_{\zeta(n)+1}=0$. Hence $\lim_{n\to\infty}\Upsilon_n=\lim_{n\to\infty}\|x_n-z\|^2=0$; that is, $\{x_n\}$ converges strongly to $z$.

Setting $F_i\equiv F$ for all $i$ in Algorithm 1, we obtain the following corollary.

Corollary 1. [19] Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $h:K\to(-\infty,+\infty]$ be a proper, lower semi-continuous and convex function and let $F$ be an $\alpha$-inverse strongly monotone operator of $K$ into $H$. Let $T:K\to K$ be a $b$-contraction mapping and let $G$ be a maximal monotone operator on $H$ such that $\Gamma:=\arg\min_{u\in K}h(u)\cap(F+G)^{-1}(0)$ is nonempty and the domain of $G$ is included in $K$. Let $\{x_n\}$ be a sequence defined as follows:

$$\begin{cases} x_0\in K,\\ u_n=\arg\min_{u\in K}\Big[h(u)+\dfrac{1}{2\lambda_n}\|u-x_n\|^2\Big],\\ x_{n+1}=\alpha_nT(x_n)+(1-\alpha_n)J^G_{\theta_n}(u_n-\theta_nFu_n), \end{cases} \tag{3.9}$$

where $\{\alpha_n\},\{\lambda_n\}$ and $\{\theta_n\}$ are sequences in $(0,1)$ with $\lambda_n\ge\lambda>0$ for all $n\ge 1$ and some $\lambda$, satisfying the following conditions:

(i) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=0}^{\infty}\alpha_n=\infty$.

(ii) $\theta_n\in[a,b]\subset(0,\min\{1,2\alpha\})$.

Then, the sequence $\{x_n\}$ generated by (3.9) converges strongly to $p\in\Gamma$, which is the unique solution of the variational inequality problem:

$$\langle p-T(p),\,p-q\rangle\le 0,\quad\forall q\in\Gamma. \tag{3.10}$$

Image restoration aims to repair damaged images and to eliminate the noise that degrades an image. There are numerous types of deterioration: images may be torn, blurred, noisy, out of focus, dirty, scratched, stained by spilled liquids such as water, or simply aged, as with old photographs we wish to preserve. By utilizing the actual blurring function, we can estimate the blur and remove it to recover a faithful approximation of the original image.

In this section, we are therefore interested in denoising and deblurring images. It is well known that the general image restoration problem can be described by inverting the following observation model:

$$w=Hx+b, \tag{4.1}$$

where $H\in\mathbb{R}^{m\times n}$ is the blurring operator, $b$ is additive noise, $w\in\mathbb{R}^m$ is the observed image and $x\in\mathbb{R}^n$ is the original image. This problem can be cast in various optimization formulations. Since the goal in image restoration is to deblur an image without knowing which blurring operator produced it, we focus on the following family of problems:

$$\min_{x\in\mathbb{R}^n}\frac12\|w_1-H_1x\|_2^2+\kappa\|x\|_1,\quad \min_{x\in\mathbb{R}^n}\frac12\|w_2-H_2x\|_2^2+\kappa\|x\|_1,\quad\dots,\quad \min_{x\in\mathbb{R}^n}\frac12\|w_N-H_Nx\|_2^2+\kappa\|x\|_1, \tag{4.2}$$

where $\|x\|_1=\sum_i|x_i|$, $\kappa$ is a regularization parameter related to the noise $b$, $w_i$ is the blurred image determined by the blurring matrix $H_i$ for every $i=1,2,\dots,N$, and $x$ is the original image. Let $f_i(x)=\frac12\|w_i-H_ix\|_2^2$ and $g(x)=\kappa\|x\|_1$; then the (Lipschitz continuous) gradient of $f_i$ is $\nabla f_i(x)=H_i^T(H_ix-w_i)$. For solving problem (4.2), we designed the flowchart shown in Figure 1, where $\tilde{X}$ is the deblurred image, that is, a common solution of the problems in (4.2). We can apply the algorithm of Theorem 1 to solve (4.2) by setting $F_i=\nabla f_i$ and $G=\partial g$, as sketched after Figure 1.

    Figure 1.  The image restoration process flowchart.
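A hedged sketch of this reduction is given below, reusing the `algorithm1` sketch given after Algorithm 1 and assuming explicit blur matrices $H_i$ and observations $w_i$ (in practice the $H_i$ act by convolution). Here `prox_l1` is the exact resolvent of $G=\partial(\kappa\|\cdot\|_1)$, and taking $h\equiv0$ in Step 1 (so that `prox_h` is the identity) is our reading, since the paper does not spell out $h$ for this experiment.

```python
import numpy as np

def make_grad(Hi, wi):
    # F_i = grad f_i with f_i(x) = 0.5 * ||w_i - H_i x||^2
    return lambda x: Hi.T @ (Hi @ x - wi)

def prox_l1(y, theta, kappa=0.01):
    # (I + theta*G)^{-1} for G = subdifferential of kappa*||.||_1
    return np.sign(y) * np.maximum(np.abs(y) - theta * kappa, 0.0)

# With blur matrices Hs = [H_1, ..., H_N] and blurred images ws = [w_1, ..., w_N]:
#   grads  = [make_grad(H, w) for H, w in zip(Hs, ws)]
#   deltas = [1.0 / len(Hs)] * len(Hs)       # delta_i = 0.25 when N = 4
#   x_hat  = algorithm1(np.zeros(n), prox_h=lambda x, lam: x,
#                       T=lambda x: x / 100, grads=grads,
#                       deltas=deltas, resolvent_G=prox_l1)
```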

For the purpose of this experiment, we apply our proposed algorithm to problem (4.2), which entails recovering an original image $x\in\mathbb{R}^n$. Image quality is measured by the signal-to-noise ratio (SNR) of the image,

$$\mathrm{SNR}=20\log_{10}\frac{\|x\|_2}{\|x-x_{n+1}\|_2},$$

and we compare our proposed algorithm with the method of Khuangsatung and Kangtunyakarn [21], which also enjoys strong convergence. A higher SNR indicates a higher recovery quality. Let $N=4$. Consider a simple linearized image recovery model $H_ix=\rho_ix$, where $\rho_1$ is a motion blur with a 21-pixel motion length (len = 21) and motion orientation $11^\circ$ ($\theta=11$), $\rho_2$ is a Gaussian blur of filter size $9\times9$ with standard deviation $\sigma=2$, $\rho_3$ is a circular averaging filter with radius $r=4$, and $\rho_4$ is an averaging blur of filter size $9\times9$. The parameters are set as follows: $\alpha_n=\frac{1}{2n+2}$, $\theta_n=0.1$, $\lambda_n=0.5$, $\kappa=0.01$, $\delta_i=0.25$, and $f(x)=\frac{x}{100}$ (playing the role of the contraction $T$). The numerical results of the experiment are shown in the following figures: Figure 2 depicts the original grayscale images. Figures 3 and 4 illustrate the grayscale images degraded by the blurs $\rho_1$ through $\rho_4$. Figures 5 and 6 show the grayscale images restored by Algorithm 1 and by the method of [21]. Figures 7 and 8 show that the SNR values of Algorithm 1 are higher than those of the method of [21].
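For reference, the SNR used for this comparison can be computed directly from its definition; this small helper (our own, with `x_true` and `x_restored` as flattened image arrays) is all that is needed:

```python
import numpy as np

def snr(x_true, x_restored):
    # SNR = 20 * log10( ||x||_2 / ||x - x_restored||_2 ); higher is better
    return 20.0 * np.log10(np.linalg.norm(x_true) /
                           np.linalg.norm(x_true - x_restored))
```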

    Figure 2.  (a) Kiel original image. (b) Lighthouse original image.
Figure 3.  Blurred Kiel images with filtering $H_1x, H_2x, H_3x$, and $H_4x$.
    Figure 4.  Blurred lighthouse images with filtering H1x,H2x,H3x, and H4x.
    Figure 5.  (a) Restored by Algorithm 1. (b) Restored by Theorem 3.1 in [21].
    Figure 6.  (a) Restored by Algorithm 1. (b) Restored by Theorem 3.1 in [21].
    Figure 7.  The SNR values of Figures 5 (a), (b).
    Figure 8.  The SNR values of Figures 6 (a), (b).

Remark 1. Experimentally, it was determined that problem (4.2) can be solved using our algorithm and that it is preferable to previously developed algorithms. Our algorithm appears to be more effective at solving these types of problems, as supported by the SNR values.

In this work, the common problem between convex constrained minimization and modified variational inclusion problems is solved using a modified proximal point algorithm. Additionally, we have applied the proposed algorithm to image restoration problems involving several types of degradation.

    The authors would like to thank King Mongkut's University of Technology North Bangkok (KMUTNB), Rajamangala University of Technology Thanyaburi (RMUTT) and Nakhon Sawan Rajabhat University. This research was funded by National Science, Research and Innovation Fund (NSRF), King Mongkut's University of Technology North Bangkok with Contract No. KMUTNB-FF-65-13.

    The authors declare no conflict of interest.



    [1] G. P. Crespi, A. Guerraggio, M. Rocca, Minty variational inequality and optimization: Scalar and vector case, In: Generalized convexity and monotonicity and applications, Boston: Springer, 2005. https://doi.org/10.1007/0-387-23639-2_12
    [2] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Recherche Opérationnelle, 4 (1970), 154–158.
    [3] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877–898. https://doi.org/10.1137/0314056 doi: 10.1137/0314056
    [4] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403–419. https://doi.org/10.1137/032902 doi: 10.1137/032902
    [5] S. Kamimura, W. Takahashi, Strong convergence of a proximal-type algorithm in a Banach space, SIAM J. Optim., 13 (2003), 938–945. https://doi.org/10.1137/S105262340139611X doi: 10.1137/S105262340139611X
    [6] N. Lehdili, A. Moudafi, Combining the proximal algorithm and Tikhonov regularization, Optimization, 37 (1996), 239–252. https://doi.org/10.1080/02331939608844217 doi: 10.1080/02331939608844217
    [7] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl., 75 (1980), 287–292. https://doi.org/10.1016/0022-247X(80)90323-6 doi: 10.1016/0022-247X(80)90323-6
    [8] M. V. Solodov, B. F. Svaiter, Forcing strong convergence of proximal point iterations in a Hilbert space, Math. Program., 87 (2000), 189–202. https://doi.org/10.1007/s101079900113 doi: 10.1007/s101079900113
    [9] H. K. Xu, A regularization method for the proximal point algorithm, J. Glob. Optim., 36 (2006), 115–125. https://doi.org/10.1007/s10898-006-9002-7 doi: 10.1007/s10898-006-9002-7
    [10] J. M. Borwein, A. S. Lewis, Convex analysis and nonlinear optimization: Theory and examples, New York: Springer, 2005. https://doi.org/10.1007/978-0-387-31256-9
    [11] T. Seangwattana, K. Sombut, A. Arunchai, K. Sitthithakerngkiet, A modified Tseng's method for solving the modified variational inclusion problems and its applications, Symmetry, 13 (2021), 2250. https://doi.org/10.3390/sym13122250 doi: 10.3390/sym13122250
    [12] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071 doi: 10.1137/0716071
    [13] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces, J. Math. Anal. Appl., 72 (1979), 383–390. https://doi.org/10.1016/0022-247X(79)90234-8 doi: 10.1016/0022-247X(79)90234-8
    [14] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (1998), 431–446. https://doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [15] G. H-G. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421–444. https://doi.org/10.1137/S1052623495290179 doi: 10.1137/S1052623495290179
    [16] S. Kamimura, W. Takahashi, Approximating solutions of maximal monotone operators in Hilbert spaces, J. Approx. Theory, 106 (2000), 226–240. https://doi.org/10.1006/jath.2000.3493 doi: 10.1006/jath.2000.3493
    [17] A. Adamu, D. Kitkuan, P. Kumam, A. Padcharoen, T. Seangwattana, Approximation method for monotone inclusion problems in real Banach spaces with applications, J. Inequal. Appl., 70 (2022), 70. https://doi.org/10.1186/s13660-022-02805-0 doi: 10.1186/s13660-022-02805-0
    [18] O. A. Boikanyo, The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators, Abstr. Appl. Anal., 2016 (2016), 2371857. https://doi.org/10.1155/2016/2371857 doi: 10.1155/2016/2371857
    [19] T. M. M. Sow, A modified proximal point algorithm for solving variational inclusion problem in real Hilbert spaces, e-J. Anal. Appl. Math., 2020 (2020), 28–39. https://doi.org/10.2478/ejaam-2020-0003 doi: 10.2478/ejaam-2020-0003
    [20] W. Khuangsatung, A. Kangtunyakarn, Algorithm of a new variational inclusion problems and strictly pseudononspreading mapping with application, Fixed Point Theory Appl., 2014 (2014), 209. https://doi.org/10.1186/1687-1812-2014-209 doi: 10.1186/1687-1812-2014-209
    [21] W. Khuangsatung, A. Kangtunyakarn, A theorem of variational inclusion problems and various nonlinear mappings, Appl. Anal., 97 (2017), 1172–1186. https://doi.org/10.1080/00036811.2017.1307965 doi: 10.1080/00036811.2017.1307965
    [22] T. M. M. Sow, An iterative algorithm for solving equilibrium problems, variational inequalities and fixed point problems of multivalued quasi-nonexpansive mappings, Appl. Set-Valued Anal. Optim., 1 (2019), 171–185. https://doi.org/10.23952/asvao.1.2019.2.06 doi: 10.23952/asvao.1.2019.2.06
    [23] L. Ambrosio, N. Gigli, G. Savaré, Gradient flows: In metric spaces and in the space of probability measures, Basel: Birkhäuser, 2005. https://doi.org/10.1007/b137080
    [24] J. Jost, Convex functionals and generalized harmonic maps into spaces of nonpositive curvature, Comment. Math. Helvetici, 70 (1995), 659–673. https://doi.org/10.1007/BF02566027 doi: 10.1007/BF02566027
    [25] I. Miyadera, Translations of mathematical monographs: Nonlinear semigroups, Rhode Island: American Mathematical Society, 1992. https://doi.org/10.1090/mmono/109
    [26] S. S. Chang, Y. K. Tang, L. Wang, Y. G. Xu, Y. H. Zhao, G. Wang, Convergence theorems for some multi-valued generalized nonexpansive mappings, Fixed Point Theory Appl., 2014 (2014), 33. https://doi.org/10.1186/1687-1812-2014-33 doi: 10.1186/1687-1812-2014-33
    [27] W. Takahashi, Nonlinear function analysis, Yokohama: Yokohama Publishers, 2000.
    [28] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal., 16 (1991), 1127–1138. https://doi.org/10.1016/0362-546X(91)90200-K doi: 10.1016/0362-546X(91)90200-K
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)