Research article

A generalized viscosity forward-backward splitting scheme with inertial terms for solving monotone inclusion problems and its applications

  • Received: 05 May 2024 Revised: 27 July 2024 Accepted: 01 August 2024 Published: 07 August 2024
  • MSC : 47H04, 47H10, 65K05, 90C25

  • Our aim was to establish a novel generalized viscosity forward-backward splitting scheme that incorporates inertial terms for addressing monotone inclusion problems within a Hilbert space. By incorporating appropriate control conditions, we achieved strong convergence. The significance of this theorem lies in its applicability to resolving convex minimization problems. To demonstrate the efficacy of our proposed algorithm, we conducted a comparative analysis of its convergence behavior against other algorithms. Finally, we showcased the performance of our proposed method through numerical experiments aimed at addressing image restoration problems.

    Citation: Kasamsuk Ungchittrakool, Natthaphon Artsawang. A generalized viscosity forward-backward splitting scheme with inertial terms for solving monotone inclusion problems and its applications[J]. AIMS Mathematics, 2024, 9(9): 23632-23650. doi: 10.3934/math.20241149




    Let $H$ represent a real Hilbert space with an inner product $\langle \cdot, \cdot \rangle$ and its corresponding norm $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$. Our aim is to address the monotone inclusion problem, which seeks to find $x \in H$ such that

    $0 \in Ax + Bx$, (1.1)

    where $A : H \to H$ is a single-valued mapping and $B : H \to 2^H$ denotes a multi-valued mapping. The set of all solutions to problem (1.1) is denoted by $(A+B)^{-1}(0)$. Many intriguing problems can be framed as instances of the monotone inclusion problem (1.1), such as convex minimization problems, variational inequalities, equilibrium problems, image processing problems, and more. One of the most renowned algorithms for approximating a solution of problem (1.1) is the forward-backward algorithm (FB), as highlighted in references such as [1,2,3]. This algorithm was initially introduced by Brezis and Lions [4] and generates a sequence $(x_k)_{k\ge1}$ according to the recurrence relation:

    $x_{k+1} = J^{B}_{\lambda}(x_k - \lambda A x_k)$, for all $k \ge 1$, (1.2)

    where $J^{B}_{\lambda} = (\mathrm{Id} + \lambda B)^{-1}$ represents the resolvent of the operator $B$, $\lambda > 0$, and $\mathrm{Id}$ denotes the identity mapping. This method is used for solving monotone inclusion problems and has been widely studied and applied in various fields such as optimization and signal processing. Later, Moudafi [5] introduced the viscosity approximation method to address strong convergence. This method combines the forward-backward splitting algorithm with contraction mappings; the viscosity approach ensures strong convergence of the iterative sequences, which is particularly useful in various applications like fixed-point problems.
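    To make the recurrence concrete, the following is a minimal Python sketch of iteration (1.2) for the model problem $\min_x \frac{1}{2}\|Kx-b\|_2^2 + \tau\|x\|_1$, a standard instance of (1.1) that is treated in Example 4.4 below; the random data and the step size $\lambda = 1/\|K\|^2$ are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent of t * (subdifferential of the l1 norm): soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(K, b, tau, lam, iters=500):
    # Iteration (1.2) with A = grad(0.5*||Kx - b||^2) and B = tau * d||.||_1:
    # an explicit forward (gradient) step followed by a backward (resolvent) step.
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ x - b)                       # A x_k
        x = soft_threshold(x - lam * grad, lam * tau)  # J_lam^B(x_k - lam * A x_k)
    return x

rng = np.random.default_rng(0)
K, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
lam = 1.0 / np.linalg.norm(K, 2) ** 2                  # illustrative admissible step size
x_star = forward_backward(K, b, tau=0.1, lam=lam)
```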

    The method involves both the proximal point algorithm and the gradient method, as evidenced in references such as [6,7,8,9,10,11,12].

    On the other hand, the concept of the heavy ball method (or inertial method), introduced by Polyak in 1964 [13], was an early example of incorporating inertia into optimization algorithms. The discretized form of this method has inspired various optimization algorithms that incorporate momentum to accelerate convergence.

    In 2001, Alvarez and Attouch [14] introduced a new algorithm based on the inertial method outlined in [13]. This method is expressed as follows:

      $\begin{cases} w_k = x_k + \theta_k(x_k - x_{k-1}), \\ x_{k+1} = (\mathrm{Id} + \lambda_k B)^{-1} w_k, \end{cases}$ for all $k \ge 1$. (1.3)

    They established that the sequence $(x_k)_{k\ge1}$ generated by algorithm (1.3) converges weakly to a zero point of the operator $B$ under the conditions $(\theta_k)_{k\ge1} \subset [0,1]$ and $(\lambda_k)_{k\ge1}$ being non-decreasing, with the constraint

    $\sum_{k=1}^{\infty} \theta_k \|x_k - x_{k-1}\|^2 < \infty$. (1.4)

    See [15,16] for other types of conditions on $\theta_k$, which no longer rely on the iterates.

    Moudafi and Oliny [17] proposed an iterative method that incorporates the concept of the inertial method to address problem (1.1). They also proved weak convergence of the iterates under the following conditions:

    (i) The condition (1.4) holds.

    (ii) $\lambda_k < 2/L$, with $L$ the Lipschitz constant of $A$.

    Their algorithm is defined by

      $\begin{cases} w_k = x_k + \theta_k(x_k - x_{k-1}), \\ x_{k+1} = (\mathrm{Id} + \lambda_k B)^{-1}(w_k - \lambda_k A w_k), \end{cases}$ for all $k \ge 1$, (1.5)

    where $A : H \to H$ and $B : H \to 2^H$.

    We explore several methods proposed in recent studies for tackling monotone inclusion problems, each with its own approach and contributions. Cholamjiak et al. [18] extended the inertial forward-backward splitting method to Banach spaces, focusing on applications in compressed sensing and showcasing the method's efficacy in real-world scenarios. Shehu et al. [19] then introduced inertial terms into iterative methods for nonexpansive mappings, highlighting the advantages of inertial techniques in improving convergence rates and handling compressed sensing problems. In 2020, Artsawang and Ungchittrakool [20] developed an inertial Mann-type algorithm for nonexpansive mappings aimed at solving monotone inclusion and image restoration problems, showcasing its applicability in diverse domains. The approach utilizes the Mann-type algorithm, as demonstrated in references such as [21,22,23,24]. Separately, image restoration problems typically require the recovery of a high-quality image from a degraded or noisy version; monotone inclusion methods provide powerful frameworks for addressing such problems, as can be seen in [25,26,27,28,29,30,31].

    Kitkuan et al. [32] recently introduced a viscosity approximation algorithm using the inertial forward-backward approach to solve problem (1.1). Their algorithm is formulated as follows:

      $\begin{cases} w_k = x_k + \theta_k(x_k - x_{k-1}), \\ x_{k+1} = \gamma_k f(x_k) + (1 - \gamma_k) J^{B}_{\lambda_k}(w_k - \lambda_k A w_k), \end{cases}$ for all $k \ge 1$, (1.6)

    where $A : H \to H$ represents a $\mu$-inverse strongly monotone operator with $\mu > 0$, $B : H \to 2^H$ is a maximal monotone operator, and $f : H \to H$ is a contraction with contraction constant $c \in (0,1)$. They also proved the strong convergence of their proposed method under certain appropriate conditions imposed on the parameters.

    In 2020, Kitkuan et al. [33] introduced a novel method that combines the Halpern-type method and the forward-backward splitting method to solve the monotone inclusion problem (1.1). Their method is described as:

      $\begin{cases} u, x_1 \in H, \\ z_k = \alpha_k x_k + (1 - \alpha_k) J^{B}_{\lambda_k}(x_k - \lambda_k A x_k), \\ y_k = \beta_k x_k + (1 - \beta_k) J^{B}_{\lambda_k}(z_k - \lambda_k A z_k), \\ x_{k+1} = \gamma_k u + (1 - \gamma_k) y_k, \end{cases}$ for all $k \ge 1$, (1.7)

    where $J^{B}_{\lambda_k} = (\mathrm{Id} + \lambda_k B)^{-1}$ denotes the resolvent of $B$, and $\alpha_k, \beta_k, \gamma_k \in (0,1)$. Strong convergence results are obtained under certain appropriate conditions.

    By drawing inspiration from the inertial viscosity forward-backward splitting algorithms pioneered by Kitkuan et al. [32,33], we propose the following algorithm:

    (Algorithm 1)  $\begin{cases} w_k = x_k + \theta_k(x_k - x_{k-1}), \\ z_k = \alpha_k w_k + (1 - \alpha_k) J^{B}_{\lambda_k}(w_k - \lambda_k A w_k), \\ y_k = \beta_k w_k + (1 - \beta_k) J^{B}_{\lambda_k}(z_k - \lambda_k A z_k), \\ x_{k+1} = \gamma_k f(x_k) + (1 - \gamma_k) y_k, \end{cases}$ for all $k \ge 1$, (1.8)

    where $(\theta_k)_{k\ge1} \subset [0,\theta]$ with $\theta \in [0,1)$, and $(\alpha_k)_{k\ge1}$, $(\beta_k)_{k\ge1}$, and $(\gamma_k)_{k\ge1}$ are sequences in $[0,1]$.
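    The following is a minimal Python sketch of Algorithm 1, with the operator $A$, the resolvent $J^{B}_{\lambda}$, and the contraction $f$ passed in as functions; the parameter sequences are supplied as callables and must be chosen to satisfy the conditions of Theorem 3.1 below.

```python
def algorithm1(A, resolvent, f, x0, x1, theta, alpha, beta, gamma, lam, iters):
    """Sketch of Algorithm 1 (1.8).

    A(x): forward operator; resolvent(v, l): the resolvent J_l^B(v);
    f: contraction; theta, alpha, beta, gamma, lam: callables k -> k-th parameter.
    """
    x_prev, x = x0, x1
    for k in range(1, iters + 1):
        w = x + theta(k) * (x - x_prev)                      # inertial extrapolation
        z = alpha(k) * w + (1 - alpha(k)) * resolvent(w - lam(k) * A(w), lam(k))
        y = beta(k) * w + (1 - beta(k)) * resolvent(z - lam(k) * A(z), lam(k))
        x_prev, x = x, gamma(k) * f(x) + (1 - gamma(k)) * y  # viscosity step
    return x
```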

    Remark 1.1. If $\alpha_k = 1$ and $\beta_k = 0$ in Algorithm 1, we recover the inertial viscosity forward-backward splitting algorithm (1.6).

    If $\theta_k = 0$ and $f(x_k) = u$ in Algorithm 1, we obtain the generalized Halpern-type forward-backward splitting method (1.7).

    The structure of this work is as follows. In Section 2, we revisit and compile essential definitions and properties crucial to this study. In Section 3, we prove convergence results for our proposed method addressing problem (1.1). After that, Section 4 assesses the proposed method's performance through numerical experiments. Finally, we conclude this work by offering some closing remarks in Section 5.

    An operator $T : H \to H$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. We denote the set of all fixed points of the operator $T$ by $\mathrm{Fix}(T) := \{x \in H : Tx = x\}$.

    Let $C$ be a nonempty closed convex subset of $H$. The metric projection of $H$ onto $C$, denoted by $\mathrm{proj}_C : H \to C$, is defined by $\mathrm{proj}_C(x) = \arg\min_{c \in C} \|x - c\|$ for all $x \in H$. It is known that

    $\langle x - \mathrm{proj}_C(x),\, y - \mathrm{proj}_C(x) \rangle \le 0$

    for all $x \in H$ and $y \in C$.

    A mapping $K : H \to H$ is called monotone if $\langle Kx - Ky,\, x - y \rangle \ge 0$ for all $x, y \in H$, and it is said to be $\beta$-inverse strongly monotone if there exists a constant $\beta > 0$ such that

    $\langle Kx - Ky,\, x - y \rangle \ge \beta \|Kx - Ky\|^2$

    for all $x, y \in H$.

    Let $L : H \to 2^H$ be a set-valued operator. We denote by $\mathrm{gra}(L) := \{(x,u) \in H \times H : u \in Lx\}$ the graph of $L$. The operator $L$ is called monotone if

    $\langle u - v,\, x - y \rangle \ge 0$

    for all $(x,u), (y,v) \in \mathrm{gra}(L)$. It is classified as maximal monotone if there exists no proper monotone extension of its graph.

    The resolvent of $L$ with parameter $\lambda > 0$ is the operator $J^{L}_{\lambda} : H \to 2^H$ defined by $J^{L}_{\lambda} := (\mathrm{Id} + \lambda L)^{-1}$, where $\mathrm{Id}$ is the identity operator on $H$. If $L$ is maximally monotone, then $J^{L}_{\lambda}$ is a single-valued operator.
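    As a simple one-dimensional illustration (a standard computation, not specific to this paper), take $L = \partial|\cdot|$ on $H = \mathbb{R}$; the resolvent is then the soft-thresholding operator:

    $J^{L}_{\lambda}(x) = (\mathrm{Id} + \lambda\,\partial|\cdot|)^{-1}(x) = \operatorname{sign}(x)\max\{|x| - \lambda,\, 0\}$,

    which is the coordinatewise form of the operator $J^{\partial g}_{\lambda_k}$ used in Example 4.4 below.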

    Next, we present several results in real Hilbert spaces that will prove to be valuable in our convergence analysis.

    Lemma 2.1. [34] Let $H$ be a real Hilbert space. Then, the following conditions are satisfied:

    (i) $\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2\langle x - y,\, y \rangle$ for all $x, y \in H$.

    (ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y,\, x + y \rangle$ for all $x, y \in H$.

    (iii) $\|\tau x + (1-\tau) y\|^2 = \tau \|x\|^2 + (1-\tau) \|y\|^2 - \tau(1-\tau) \|x - y\|^2$ for all $\tau \in [0,1]$ and $x, y \in H$.

    In order to show the convergence results, we also require the following tools.

    Lemma 2.2. [35, Lemma 2.5] Let $(S_k)_{k\ge1}$ be a sequence of nonnegative real numbers satisfying the following inequalities:

    $S_{k+1} \le (1 - \rho_k) S_k + \rho_k \sigma_k$ for all $k \ge 1$, and $S_{k+1} \le S_k - \eta_k + \pi_k$ for all $k \ge 1$,

    where $(\rho_k)_{k\ge1}$ forms a sequence within $(0,1)$, $(\eta_k)_{k\ge1}$ constitutes a sequence of nonnegative real numbers, and both $(\sigma_k)_{k\ge1}$ and $(\pi_k)_{k\ge1}$ are real sequences, satisfying the conditions:

    (i) $\sum_{k\ge1} \rho_k = \infty$.

    (ii) $\lim_{k\to\infty} \pi_k = 0$.

    (iii) $\lim_{i\to\infty} \eta_{k_i} = 0$ implies $\limsup_{i\to\infty} \sigma_{k_i} \le 0$ for any subsequence $(\eta_{k_i})_{i\ge1}$ of $(\eta_k)_{k\ge1}$.

    Then the sequence $(S_k)_{k\ge1}$ converges to $0$.

    For convenience, the following notation will be used

    $\Gamma^{\lambda}_{A,B} := (\mathrm{Id} + \lambda B)^{-1}(\mathrm{Id} - \lambda A), \quad \lambda > 0.$

    Lemma 2.3. [36] Let $A$ be a $\mu$-inverse strongly monotone operator from a real Hilbert space $H$ into itself and $B : H \to 2^H$ a maximal monotone operator. Then, the following inequality holds:

    $\|\Gamma^{\lambda}_{A,B} x - \Gamma^{\lambda}_{A,B} y\|^2 \le \|x - y\|^2 - \lambda(2\mu - \lambda)\|Ax - Ay\|^2 - \|(\mathrm{Id} - J^{B}_{\lambda})(\mathrm{Id} - \lambda A)x - (\mathrm{Id} - J^{B}_{\lambda})(\mathrm{Id} - \lambda A)y\|^2$ (2.1)

    for all $x, y \in B_{\lambda} := \{z \in H : \|z\| \le \lambda\}$.

    Lemma 2.4. [36] Let $A$ be a $\mu$-inverse strongly monotone operator from a real Hilbert space $H$ into itself and $B : H \to 2^H$ a maximal monotone operator. Then, the following conditions hold:

    (i) For $\lambda > 0$, $\mathrm{Fix}(\Gamma^{\lambda}_{A,B}) = (A+B)^{-1}(0)$.

    (ii) For $0 < \delta \le \lambda$ and $x \in H$, $\|x - \Gamma^{\delta}_{A,B} x\| \le 2\|x - \Gamma^{\lambda}_{A,B} x\|$.

    Theorem 2.5. [37] Let $H$ be a real Hilbert space with a nonempty closed convex subset $C$, and consider a nonexpansive mapping $T : C \to C$ with $\mathrm{Fix}(T) \ne \emptyset$. For every $u \in C$ and any $t \in (0,1)$, the contraction $x \mapsto tu + (1-t)Tx$ on $C$ has a unique fixed point $x_t$, and $x_t$ converges strongly to a fixed point of $T$ as $t$ tends to zero.

    In this section, we delve into the intricate details of the convergence analysis for our main results.

    Theorem 3.1. Let $A : H \to H$ be a $\mu$-inverse strongly monotone operator on a real Hilbert space $H$ with $\mu > 0$, and let $B : H \to 2^H$ be a maximal monotone operator such that $(A+B)^{-1}(0) \ne \emptyset$. Let $f : H \to H$ be a contraction mapping with constant $c \in (0,1)$. Let $(x_k)_{k\ge1}$ be generated by Algorithm 1. Assume that the following conditions hold:

    (i) $\lim_{k\to\infty} \gamma_k = 0$ and $\sum_{k\ge1} \gamma_k = +\infty$.

    (ii) $\lim_{k\to\infty} \frac{\theta_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$.

    (iii) $0 < \liminf_{k\to+\infty} \lambda_k \le \limsup_{k\to+\infty} \lambda_k < 2\mu$.

    (iv) $\liminf_{k\to+\infty} (1-\alpha_k)(1-\beta_k) > 0$.

    Then, the sequence $(x_k)_{k\ge1}$ converges strongly to $\bar{x} := \mathrm{proj}_{(A+B)^{-1}(0)}(f(\bar{x}))$.

    Proof. Let $\Gamma_k = J^{B}_{\lambda_k}(\mathrm{Id} - \lambda_k A)$. By Lemma 2.3, $\Gamma_k$ is a nonexpansive mapping for each $k \in \mathbb{N}$. By Lemma 2.4, we obtain that $(A+B)^{-1}(0) = \mathrm{Fix}(\Gamma_k)$.

    We first show that $(x_k)_{k\ge1}$ is bounded. Since $f$ is a contraction mapping and $\mathrm{proj}_{(A+B)^{-1}(0)}(\cdot)$ is nonexpansive, the composition $\mathrm{proj}_{(A+B)^{-1}(0)}(f(\cdot))$ is a contraction mapping. Then, there exists a unique fixed point $\bar{x} \in (A+B)^{-1}(0)$ such that $\bar{x} = \mathrm{proj}_{(A+B)^{-1}(0)}(f(\bar{x}))$. Thus $\bar{x} \in \mathrm{Fix}(\Gamma_k)$. It follows that

    $\|z_k - \bar{x}\| = \|\alpha_k w_k + (1-\alpha_k)\Gamma_k w_k - \bar{x}\| \le \alpha_k\|w_k - \bar{x}\| + (1-\alpha_k)\|\Gamma_k w_k - \bar{x}\| \le \|w_k - \bar{x}\|$, (3.1)

    and

    $\|y_k - \bar{x}\| = \|\beta_k w_k + (1-\beta_k)\Gamma_k z_k - \bar{x}\| \le \beta_k\|w_k - \bar{x}\| + (1-\beta_k)\|\Gamma_k z_k - \bar{x}\| \le \beta_k\|w_k - \bar{x}\| + (1-\beta_k)\|z_k - \bar{x}\|$. (3.2)

    On the other hand, we consider

    $\|w_k - \bar{x}\| = \|x_k + \theta_k(x_k - x_{k-1}) - \bar{x}\| \le \|x_k - \bar{x}\| + \theta_k\|x_k - x_{k-1}\|$. (3.3)

    Combining (3.1)–(3.3), we obtain that

    \begin{align*} \|x_{k+1} - \bar{x}\| &= \|\gamma_k f(x_k) + (1-\gamma_k) y_k - \bar{x}\| \\ &\le \gamma_k \|f(x_k) - f(\bar{x})\| + \gamma_k \|f(\bar{x}) - \bar{x}\| + (1-\gamma_k)\|w_k - \bar{x}\| \\ &\le \gamma_k c \|x_k - \bar{x}\| + \gamma_k \|f(\bar{x}) - \bar{x}\| + (1-\gamma_k)\|x_k - \bar{x}\| + (1-\gamma_k)\theta_k\|x_k - x_{k-1}\| \\ &\le (1 - \gamma_k(1-c))\|x_k - \bar{x}\| + \gamma_k \|f(\bar{x}) - \bar{x}\| + (1 - \gamma_k(1-c))\theta_k\|x_k - x_{k-1}\|. \qquad (3.4) \end{align*}

    Since $\lim_{k\to\infty} \frac{\theta_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$, there exists $M > 0$ such that

    $(1 - \gamma_k(1-c))\frac{\theta_k}{\gamma_k}\|x_k - x_{k-1}\| \le M$ for all $k \in \mathbb{N}$.

    From (3.4), we can obtain that

    $\|x_{k+1} - \bar{x}\| \le (1 - \gamma_k(1-c))\|x_k - \bar{x}\| + \gamma_k(1-c)\left(\dfrac{\|f(\bar{x}) - \bar{x}\| + M}{1-c}\right)$.

    It follows that

    $\|x_{k+1} - \bar{x}\| \le \max\left\{\|x_k - \bar{x}\|,\, \dfrac{\|f(\bar{x}) - \bar{x}\| + M}{1-c}\right\} \le \cdots \le \max\left\{\|x_1 - \bar{x}\|,\, \dfrac{\|f(\bar{x}) - \bar{x}\| + M}{1-c}\right\}$. (3.5)

    Therefore, $(x_k)_{k\ge1}$ is bounded, and hence $(w_k)_{k\ge1}$, $(z_k)_{k\ge1}$, and $(y_k)_{k\ge1}$ are also bounded. Using part (iii) of Lemma 2.1 and the definitions of $(z_k)_{k\ge1}$ and $(y_k)_{k\ge1}$, we get that

    $\|z_k - \bar{x}\|^2 = \|\alpha_k w_k + (1-\alpha_k)\Gamma_k w_k - \bar{x}\|^2 \le \alpha_k\|w_k - \bar{x}\|^2 + (1-\alpha_k)\|\Gamma_k w_k - \bar{x}\|^2$, (3.6)

    and

    $\|y_k - \bar{x}\|^2 = \|\beta_k w_k + (1-\beta_k)\Gamma_k z_k - \bar{x}\|^2 \le \beta_k\|w_k - \bar{x}\|^2 + (1-\beta_k)\|\Gamma_k z_k - \bar{x}\|^2$. (3.7)

    Now, considering the terms $\|\Gamma_k w_k - \bar{x}\|^2$ and $\|\Gamma_k z_k - \bar{x}\|^2$ and using Lemma 2.3, we have

    $\|\Gamma_k w_k - \bar{x}\|^2 = \|\Gamma_k w_k - \Gamma_k \bar{x}\|^2 \le \|w_k - \bar{x}\|^2 - \lambda_k(2\mu - \lambda_k)\|A w_k - A\bar{x}\|^2 - \|w_k - \lambda_k A w_k - \Gamma_k w_k + \lambda_k A\bar{x}\|^2$, (3.8)

    and

    $\|\Gamma_k z_k - \bar{x}\|^2 = \|\Gamma_k z_k - \Gamma_k \bar{x}\|^2 \le \|z_k - \bar{x}\|^2 - \lambda_k(2\mu - \lambda_k)\|A z_k - A\bar{x}\|^2 - \|z_k - \lambda_k A z_k - \Gamma_k z_k + \lambda_k A\bar{x}\|^2$. (3.9)

    Substituting (3.8) into (3.6), we have

    $\|z_k - \bar{x}\|^2 \le \|w_k - \bar{x}\|^2 - (1-\alpha_k)\lambda_k(2\mu-\lambda_k)\|Aw_k - A\bar{x}\|^2 - (1-\alpha_k)\|w_k - \lambda_k Aw_k - \Gamma_k w_k + \lambda_k A\bar{x}\|^2$. (3.10)

    Substituting (3.9) into (3.7), we have

    $\|y_k - \bar{x}\|^2 \le \beta_k\|w_k - \bar{x}\|^2 + (1-\beta_k)\|z_k - \bar{x}\|^2 - (1-\beta_k)\lambda_k(2\mu-\lambda_k)\|Az_k - A\bar{x}\|^2 - (1-\beta_k)\|z_k - \lambda_k Az_k - \Gamma_k z_k + \lambda_k A\bar{x}\|^2$. (3.11)

    Combining (3.10) and (3.11), we can imply that

    $\|y_k - \bar{x}\|^2 \le \|w_k - \bar{x}\|^2 - (1-\beta_k)(1-\alpha_k)\lambda_k(2\mu-\lambda_k)\|Aw_k - A\bar{x}\|^2 - (1-\beta_k)(1-\alpha_k)\|w_k - \lambda_k Aw_k - \Gamma_k w_k + \lambda_k A\bar{x}\|^2 - (1-\beta_k)\lambda_k(2\mu-\lambda_k)\|Az_k - A\bar{x}\|^2 - (1-\beta_k)\|z_k - \lambda_k Az_k - \Gamma_k z_k + \lambda_k A\bar{x}\|^2$. (3.12)

    From (3.12), we obtain

    \begin{align*} \|x_{k+1}-\bar{x}\|^2 &= \langle \gamma_k f(x_k)+(1-\gamma_k)y_k-\bar{x},\; x_{k+1}-\bar{x}\rangle\\ &= \gamma_k\langle f(x_k)-f(\bar{x}),\, x_{k+1}-\bar{x}\rangle+\gamma_k\langle f(\bar{x})-\bar{x},\, x_{k+1}-\bar{x}\rangle+(1-\gamma_k)\langle y_k-\bar{x},\, x_{k+1}-\bar{x}\rangle\\ &\le \frac{\gamma_k}{2}\big(\|f(x_k)-f(\bar{x})\|^2+\|x_{k+1}-\bar{x}\|^2\big)+\gamma_k\langle f(\bar{x})-\bar{x},\, x_{k+1}-\bar{x}\rangle+\frac{1-\gamma_k}{2}\big(\|y_k-\bar{x}\|^2+\|x_{k+1}-\bar{x}\|^2\big)\\ &\le \frac{\gamma_k c^2}{2}\|x_k-\bar{x}\|^2+\frac{1}{2}\|x_{k+1}-\bar{x}\|^2+\gamma_k\langle f(\bar{x})-\bar{x},\, x_{k+1}-\bar{x}\rangle+\frac{1-\gamma_k}{2}\|w_k-\bar{x}\|^2\\ &\quad-\frac{(1-\gamma_k)(1-\beta_k)(1-\alpha_k)}{2}\Big[\lambda_k(2\mu-\lambda_k)\|Aw_k-A\bar{x}\|^2+\|w_k-\lambda_k Aw_k-\Gamma_k w_k+\lambda_k A\bar{x}\|^2\Big]\\ &\quad-\frac{(1-\gamma_k)(1-\beta_k)}{2}\Big[\lambda_k(2\mu-\lambda_k)\|Az_k-A\bar{x}\|^2+\|z_k-\lambda_k Az_k-\Gamma_k z_k+\lambda_k A\bar{x}\|^2\Big]\\ &\le \frac{1-\gamma_k(1-c^2)}{2}\|x_k-\bar{x}\|^2+\frac{1}{2}\|x_{k+1}-\bar{x}\|^2+\gamma_k\langle f(\bar{x})-\bar{x},\, x_{k+1}-\bar{x}\rangle+(1-\gamma_k)\theta_k\langle x_k-x_{k-1},\, w_k-\bar{x}\rangle\\ &\quad-\frac{(1-\gamma_k)(1-\beta_k)(1-\alpha_k)}{2}\Big[\lambda_k(2\mu-\lambda_k)\|Aw_k-A\bar{x}\|^2+\|w_k-\lambda_k Aw_k-\Gamma_k w_k+\lambda_k A\bar{x}\|^2\Big]\\ &\quad-\frac{(1-\gamma_k)(1-\beta_k)}{2}\Big[\lambda_k(2\mu-\lambda_k)\|Az_k-A\bar{x}\|^2+\|z_k-\lambda_k Az_k-\Gamma_k z_k+\lambda_k A\bar{x}\|^2\Big], \qquad (3.13) \end{align*}

    where the second inequality uses the contraction property of $f$ together with (3.12), and the last one uses $\|w_k-\bar{x}\|^2 \le \|x_k-\bar{x}\|^2 + 2\theta_k\langle x_k-x_{k-1},\, w_k-\bar{x}\rangle$, which follows from Lemma 2.1(ii).

    Then (3.13) reduces to the following:

    $\|x_{k+1} - \bar{x}\|^2 \le (1 - \gamma_k(1-c^2))\|x_k - \bar{x}\|^2 + 2\gamma_k\langle f(\bar{x}) - \bar{x},\, x_{k+1} - \bar{x}\rangle + 2(1-\gamma_k)\theta_k\langle x_k - x_{k-1},\, w_k - \bar{x}\rangle - (1-\gamma_k)(1-\beta_k)(1-\alpha_k)\lambda_k(2\mu-\lambda_k)\|Aw_k - A\bar{x}\|^2 - (1-\gamma_k)(1-\beta_k)(1-\alpha_k)\|w_k - \lambda_k Aw_k - \Gamma_k w_k + \lambda_k A\bar{x}\|^2 - (1-\gamma_k)(1-\beta_k)\lambda_k(2\mu-\lambda_k)\|Az_k - A\bar{x}\|^2 - (1-\gamma_k)(1-\beta_k)\|z_k - \lambda_k Az_k - \Gamma_k z_k + \lambda_k A\bar{x}\|^2$. (3.14)

    For each $k \in \mathbb{N}$, we set

    $S_k = \|x_k - \bar{x}\|^2$,

    $\rho_k = \gamma_k(1-c^2)$, $\pi_k = \rho_k \sigma_k$,

    $\sigma_k = \dfrac{2}{1-c^2}\langle f(\bar{x}) - \bar{x},\, x_{k+1} - \bar{x}\rangle + \dfrac{2(1-\gamma_k)\theta_k}{\gamma_k(1-c^2)}\langle x_k - x_{k-1},\, w_k - \bar{x}\rangle$, and

    $\eta_k = (1-\gamma_k)(1-\beta_k)(1-\alpha_k)\lambda_k(2\mu-\lambda_k)\|Aw_k - A\bar{x}\|^2 + (1-\gamma_k)(1-\beta_k)(1-\alpha_k)\|w_k - \lambda_k Aw_k - \Gamma_k w_k + \lambda_k A\bar{x}\|^2 + (1-\gamma_k)(1-\beta_k)\lambda_k(2\mu-\lambda_k)\|Az_k - A\bar{x}\|^2 + (1-\gamma_k)(1-\beta_k)\|z_k - \lambda_k Az_k - \Gamma_k z_k + \lambda_k A\bar{x}\|^2$.

    As a result, inequality (3.14) reduces to the following:

    $S_{k+1} \le (1-\rho_k)S_k + \rho_k\sigma_k$ and $S_{k+1} \le S_k - \eta_k + \pi_k$.

    By condition (i), we get that $\sum_{k\ge1}\rho_k = \infty$ and $\lim_{k\to\infty}\pi_k = 0$. In order to complete the proof, by applying Lemma 2.2, it is sufficient to show that $\lim_{i\to\infty}\eta_{k_i} = 0$ implies $\limsup_{i\to\infty}\sigma_{k_i} \le 0$ for any subsequence $(\eta_{k_i})_{i\ge1}$ of $(\eta_k)_{k\ge1}$.

    Let $(\eta_{k_i})_{i\ge1}$ be a subsequence of $(\eta_k)_{k\ge1}$ such that $\lim_{i\to\infty}\eta_{k_i} = 0$. Then, by conditions (i), (iii), and (iv), we can conclude that

    $\lim_{i\to\infty}\|Aw_{k_i} - A\bar{x}\| = 0$; $\lim_{i\to\infty}\|Az_{k_i} - A\bar{x}\| = 0$; $\lim_{i\to\infty}\|w_{k_i} - \lambda_{k_i}Aw_{k_i} - \Gamma_{k_i}w_{k_i} + \lambda_{k_i}A\bar{x}\| = 0$; $\lim_{i\to\infty}\|z_{k_i} - \lambda_{k_i}Az_{k_i} - \Gamma_{k_i}z_{k_i} + \lambda_{k_i}A\bar{x}\| = 0$.

    This implies that

    $\lim_{i\to\infty}\|\Gamma_{k_i}w_{k_i} - w_{k_i}\| = 0$; (3.15)
    $\lim_{i\to\infty}\|\Gamma_{k_i}z_{k_i} - z_{k_i}\| = 0$. (3.16)

    From the definition of $w_k$ in Algorithm 1 and condition (ii), we have

    $\|w_{k_i} - x_{k_i}\| = \theta_{k_i}\|x_{k_i} - x_{k_i-1}\| \to 0 \quad (i \to \infty)$. (3.17)

    On the other hand, we get

    $\|\Gamma_{k_i}z_{k_i} - w_{k_i}\| \le \|\Gamma_{k_i}z_{k_i} - z_{k_i}\| + \|z_{k_i} - w_{k_i}\| = \|\Gamma_{k_i}z_{k_i} - z_{k_i}\| + (1-\alpha_{k_i})\|\Gamma_{k_i}w_{k_i} - w_{k_i}\|$. (3.18)

    From (3.15) and (3.16), we obtain that

    $\lim_{i\to\infty}\|\Gamma_{k_i}z_{k_i} - w_{k_i}\| = 0$. (3.19)

    Given that $\liminf_{k\to+\infty}\lambda_k > 0$, we can find a positive real number $\lambda > 0$ such that $\lambda_k \ge \lambda$ for all $k \in \mathbb{N}$. Specifically, this implies that $\lambda_{k_i} \ge \lambda$ for all $i \in \mathbb{N}$. By part (ii) of Lemma 2.4, one has

    $\|\Gamma^{\lambda}_{A,B}w_{k_i} - w_{k_i}\| \le 2\|\Gamma_{k_i}w_{k_i} - w_{k_i}\|$. (3.20)

    Combining (3.20) with (3.15), we infer that

    $\lim_{i\to\infty}\|\Gamma^{\lambda}_{A,B}w_{k_i} - w_{k_i}\| = 0$. (3.21)

    Let

    $z_t = t f(\bar{x}) + (1-t)\Gamma^{\lambda}_{A,B}z_t, \quad t \in (0,1)$. (3.22)

    By utilizing Theorem 2.5, $z_t$ converges strongly to the unique fixed point $\bar{x} = \mathrm{proj}_{(A+B)^{-1}(0)}(f(\bar{x}))$ as $t \to 0$. Consequently, we can conclude that

    \begin{align*} \|z_t - w_{k_i}\|^2 &= \|t(f(\bar{x}) - w_{k_i}) + (1-t)(\Gamma^{\lambda}_{A,B}z_t - w_{k_i})\|^2\\ &\le (1-t)^2\|\Gamma^{\lambda}_{A,B}z_t - w_{k_i}\|^2 + 2t\langle f(\bar{x}) - z_t,\, z_t - w_{k_i}\rangle + 2t\langle z_t - w_{k_i},\, z_t - w_{k_i}\rangle\\ &\le (1-t)^2\big(\|\Gamma^{\lambda}_{A,B}z_t - \Gamma^{\lambda}_{A,B}w_{k_i}\| + \|\Gamma^{\lambda}_{A,B}w_{k_i} - w_{k_i}\|\big)^2 + 2t\langle f(\bar{x}) - z_t,\, z_t - w_{k_i}\rangle + 2t\|z_t - w_{k_i}\|^2\\ &\le (1-t)^2\big(\|z_t - w_{k_i}\| + \|\Gamma^{\lambda}_{A,B}w_{k_i} - w_{k_i}\|\big)^2 + 2t\langle f(\bar{x}) - z_t,\, z_t - w_{k_i}\rangle + 2t\|z_t - w_{k_i}\|^2. \qquad (3.23) \end{align*}

    Inequality (3.23) reduces to the following:

    $\langle z_t - f(\bar{x}),\, z_t - w_{k_i}\rangle \le \dfrac{(1-t)^2}{2t}\big(\|z_t - w_{k_i}\| + \|\Gamma^{\lambda}_{A,B}w_{k_i} - w_{k_i}\|\big)^2 + \dfrac{2t-1}{2t}\|z_t - w_{k_i}\|^2$. (3.24)

    Combining (3.19) and (3.24), we get that

    $\limsup_{i\to+\infty}\langle z_t - f(\bar{x}),\, z_t - w_{k_i}\rangle \le \dfrac{1}{2t}\big[(1-t)^2 + (2t-1)\big]M_0^2 = \dfrac{t}{2}M_0^2$, (3.25)

    where $M_0 = \sup_{i\in\mathbb{N},\, t\in(0,1)}\|z_t - w_{k_i}\|$. Letting $t \to 0^+$ in (3.25) and using the strong convergence $z_t \to \bar{x}$, we obtain

    $\limsup_{i\to\infty}\langle \bar{x} - f(\bar{x}),\, \bar{x} - w_{k_i}\rangle \le 0$. (3.26)

    Let us consider,

    $\langle \bar{x} - f(\bar{x}),\, \bar{x} - x_{k_i}\rangle = \langle \bar{x} - f(\bar{x}),\, \bar{x} - w_{k_i}\rangle + \theta_{k_i}\langle \bar{x} - f(\bar{x}),\, x_{k_i} - x_{k_i-1}\rangle \le \langle \bar{x} - f(\bar{x}),\, \bar{x} - w_{k_i}\rangle + \theta_{k_i}\|\bar{x} - f(\bar{x})\|\|x_{k_i} - x_{k_i-1}\|$. (3.27)

    From (3.27), together with (3.26) and (3.17), one has

    $\limsup_{i\to\infty}\langle \bar{x} - f(\bar{x}),\, \bar{x} - x_{k_i}\rangle \le 0$. (3.28)

    Next, we claim that $\lim_{i\to+\infty}\|x_{k_i+1} - x_{k_i}\| = 0$. By Algorithm 1, we have the following estimates:

    $\|x_{k_i+1} - x_{k_i}\| \le \gamma_{k_i}\|f(x_{k_i}) - x_{k_i}\| + (1-\gamma_{k_i})\|y_{k_i} - x_{k_i}\| \le \gamma_{k_i}\|f(x_{k_i}) - x_{k_i}\| + (1-\gamma_{k_i})\big(\|y_{k_i} - w_{k_i}\| + \|w_{k_i} - x_{k_i}\|\big) \le \gamma_{k_i}\|f(x_{k_i}) - x_{k_i}\| + (1-\gamma_{k_i})\|w_{k_i} - x_{k_i}\| + (1-\gamma_{k_i})(1-\beta_{k_i})\|\Gamma_{k_i}z_{k_i} - w_{k_i}\|$. (3.29)

    From (3.29), using the boundedness of $(x_k)_{k\ge1}$, condition (i) of Theorem 3.1, and (3.17) and (3.19), we obtain that

    $\lim_{i\to+\infty}\|x_{k_i+1} - x_{k_i}\| = 0$. (3.30)

    Combining (3.30) and (3.28), we infer that

    $\limsup_{i\to\infty}\langle \bar{x} - f(\bar{x}),\, \bar{x} - x_{k_i+1}\rangle \le 0$.

    Hence, $\limsup_{i\to\infty}\sigma_{k_i} \le 0$. By Lemma 2.2, we conclude that $\lim_{k\to\infty}S_k = 0$, that is, $x_k \to \bar{x}$ as $k \to \infty$. We thus complete the proof.

    Remark 3.2. Condition (ii) in Theorem 3.1 is satisfied when we set $\theta_k$ such that $0 \le \theta_k \le \bar{\theta}_k$, where

    $\bar{\theta}_k = \begin{cases} \min\left\{\theta, \dfrac{\varepsilon_k}{\|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \ne x_{k-1}, \\ \theta, & \text{otherwise}, \end{cases}$

    and $(\varepsilon_k)_{k\ge1}$ is a positive sequence such that $\lim_{k\to\infty}\frac{\varepsilon_k}{\gamma_k} = 0$.
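    In code, this rule can be realized as follows (a sketch; the choice $\varepsilon_k = 1/(k+1)^2$ is one admissible option, and is the one used in (4.4) below):

```python
import numpy as np

def inertial_parameter(k, x, x_prev, theta_bar=0.5):
    # theta_k per Remark 3.2 with eps_k = 1/(k+1)^2; condition (ii) of
    # Theorem 3.1 then holds whenever eps_k / gamma_k -> 0.
    eps_k = 1.0 / (k + 1) ** 2
    diff = np.linalg.norm(x - x_prev)
    return min(theta_bar, eps_k / diff) if diff > 0 else theta_bar
```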

    In this section, we delve into the practical applications of our proposed method as outlined in this paper, focusing on its utility in convex minimization problems and image restoration problems.

    Consider a convex and differentiable function $h : H \to \mathbb{R}$ and a convex, lower-semicontinuous function $g : H \to \mathbb{R}$. We consider the following convex minimization problem: find $\bar{x} \in H$ such that

    $h(\bar{x}) + g(\bar{x}) = \min_{x\in H}\{h(x) + g(x)\}$. (4.1)

    By Fermat's rule, problem (4.1) can be written in the following form: find $\bar{x} \in H$ such that

    $0 \in \nabla h(\bar{x}) + \partial g(\bar{x})$,

    where $\nabla h$ is the gradient of $h$ and $\partial g$ is the subdifferential of $g$.
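    Equivalently, by Lemma 2.4(i) with $A = \nabla h$ and $B = \partial g$, $\bar{x}$ solves (4.1) if and only if it is a fixed point of the forward-backward operator; this is the form exploited by iteration (4.2) below:

    $\bar{x} = J^{\partial g}_{\lambda}\big(\bar{x} - \lambda \nabla h(\bar{x})\big)$ for any $\lambda > 0$.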

    Remark 4.1. [38] If $h$ is a convex differentiable function whose gradient $\nabla h : H \to H$ is $(1/L)$-Lipschitz continuous, then $\nabla h$ is $L$-inverse strongly monotone.

    Remark 4.2. [39] If a function $P : H \to \mathbb{R}$ is convex and lower-semicontinuous, then its subdifferential $\partial P$ is maximally monotone.

    By applying Theorem 3.1 with $A = \nabla h$ and $B = \partial g$, we obtain the following result.

    Theorem 4.3. Let $H$ be a real Hilbert space. Let $h : H \to \mathbb{R}$ be a convex differentiable function with a $(1/L)$-Lipschitz continuous gradient $\nabla h$, and let $g : H \to \mathbb{R}$ be a convex lower-semicontinuous function such that $(\nabla h + \partial g)^{-1}(0) \ne \emptyset$. Let $f : H \to H$ be a contraction mapping with constant $c \in (0,1)$. Let $(x_k)_{k\ge1}$ be generated by $x_0, x_1 \in H$ and

      $\begin{cases} w_k = x_k + \theta_k(x_k - x_{k-1}), \\ z_k = \alpha_k w_k + (1 - \alpha_k) J^{\partial g}_{\lambda_k}(w_k - \lambda_k \nabla h(w_k)), \\ y_k = \beta_k w_k + (1 - \beta_k) J^{\partial g}_{\lambda_k}(z_k - \lambda_k \nabla h(z_k)), \\ x_{k+1} = \gamma_k f(x_k) + (1 - \gamma_k) y_k, \end{cases}$ for all $k \ge 1$. (4.2)

    Assume that the following conditions hold:

    (i) $\lim_{k\to\infty} \gamma_k = 0$ and $\sum_{k\ge1} \gamma_k = +\infty$.

    (ii) $\lim_{k\to\infty} \frac{\theta_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$.

    (iii) $0 < \liminf_{k\to+\infty} \lambda_k \le \limsup_{k\to+\infty} \lambda_k < 2L$.

    (iv) $\liminf_{k\to+\infty} (1-\alpha_k)(1-\beta_k) > 0$.

    Then, the sequence $(x_k)_{k\ge1}$ converges strongly to $\bar{x} := \mathrm{proj}_{(\nabla h + \partial g)^{-1}(0)}(f(\bar{x}))$.

    Next, we present some comparisons among three algorithms: our proposed algorithm, Kitkuan et al.'s algorithm (2019) (1.6), as presented in [32], and Tan's algorithm (2024), as described in [31, Algorithm 1.3].

    Example 4.4. Let $K \in \mathbb{R}^{l\times s}$ and $b \in \mathbb{R}^l$ with $l > s$. Let $g : \mathbb{R}^s \to \mathbb{R}$ be defined by $g(x) = \|x\|_1$ for all $x \in \mathbb{R}^s$, and let $h : \mathbb{R}^s \to \mathbb{R}$ be defined by $h(x) = \frac{1}{2}\|Kx - b\|_2^2$ for all $x \in \mathbb{R}^s$. We seek the solution of the following minimization problem:

    minimize $\frac{1}{2}\|Kx - b\|_2^2 + \|x\|_1$, subject to $x \in \mathbb{R}^s$. (4.3)

    With this setting, we obtain that for each $x = (x_1, x_2, \ldots, x_s) \in \mathbb{R}^s$,

    $J^{\partial g}_{\lambda_k}(x) = \left(\max\left\{0,\, 1 - \frac{\lambda_k}{|x_1|}\right\}x_1,\, \max\left\{0,\, 1 - \frac{\lambda_k}{|x_2|}\right\}x_2,\, \ldots,\, \max\left\{0,\, 1 - \frac{\lambda_k}{|x_s|}\right\}x_s\right)$,

    $\nabla h(x) = K^T(Kx - b)$, and $\nabla h$ is $\|K\|^2$-Lipschitz continuous, where $K^T$ is the transpose of $K$.

    To begin, we randomly select vectors $x_0, x_1 \in \mathbb{R}^s$, along with $b \in \mathbb{R}^l$ and the matrix $K \in \mathbb{R}^{l\times s}$. Subsequently, we set $f(x) = \frac{x}{6}$ for all $x \in \mathbb{R}^s$ and choose the parameters in this example as follows: $\alpha_k = \frac{1}{100k+1}$, $\beta_k = \frac{1}{k+1}$, $\gamma_k = \frac{1}{100k+1}$, $\lambda_k = \frac{1}{\|K\|^2+1}$, and

    $\theta_k = \begin{cases} \min\left\{\dfrac{1}{2}, \dfrac{1}{(k+1)^2\|x_k - x_{k-1}\|}\right\}, & \text{if } x_k \ne x_{k-1}, \\ \dfrac{1}{2}, & \text{otherwise}. \end{cases}$ (4.4)
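    The following Python sketch assembles this experimental setup (the NumPy translation of the MATLAB experiments and the random data are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
s, l = 20, 500
K, b = rng.standard_normal((l, s)), rng.standard_normal(l)
x_prev, x = rng.standard_normal(s), rng.standard_normal(s)

L2 = np.linalg.norm(K, 2) ** 2                          # ||K||^2
f = lambda v: v / 6.0                                   # contraction, c = 1/6
grad_h = lambda v: K.T @ (K @ v - b)                    # grad h(x) = K^T(Kx - b)
prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # J_t^{dg}

for k in range(1, 10**6):
    diff = np.linalg.norm(x - x_prev)
    if k > 1 and diff <= 1e-6:                          # stopping criterion
        break
    alpha, beta, gamma = 1/(100*k + 1), 1/(k + 1), 1/(100*k + 1)
    lam = 1.0 / (L2 + 1.0)
    theta = min(0.5, 1.0/((k + 1)**2 * diff)) if diff > 0 else 0.5   # rule (4.4)
    w = x + theta * (x - x_prev)
    z = alpha * w + (1 - alpha) * prox(w - lam * grad_h(w), lam)
    y = beta * w + (1 - beta) * prox(z - lam * grad_h(z), lam)
    x_prev, x = x, gamma * f(x) + (1 - gamma) * y
```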

    We compare our proposed algorithm with Kitkuan et al.'s algorithm (2019) (1.6), as presented in [32], and Tan's algorithm (2024), as described in [31, Algorithm 1.3]. For Tan's algorithm (2024), we choose the following parameter values: $\zeta_k = \theta_k$, $\delta = 1.5$, $\varphi = \frac{1}{20}$, and $\chi_k = \lambda_k$. We run all three algorithms and record the number of iterations ($k$) and the CPU time (in seconds), using the stopping criterion $\|x_k - x_{k-1}\| \le 10^{-6}$.

    Table 1 shows the performance of the three algorithms in solving problem (4.3) for different sizes of the matrix $K$. Our algorithm consistently reaches the optimality tolerance in the shortest CPU time across all cases. Additionally, it is notable that our algorithm requires fewer iterations than Kitkuan et al.'s algorithm (2019) and Tan's algorithm (2024) for each matrix size.

    Table 1. The comparison of the three algorithms with different sizes of matrix $K$.

    | $(s, l)$ | Our: CPU time (s) | Our: iterations | Kitkuan et al. (2019): CPU time (s) | Kitkuan et al. (2019): iterations | Tan (2024): CPU time (s) | Tan (2024): iterations |
    |---|---|---|---|---|---|---|
    | (20, 500) | 0.1991 | 8113 | 0.3707 | 25476 | 0.7224 | 23631 |
    | (50, 500) | 0.5676 | 7095 | 0.8174 | 17998 | 1.1194 | 9539 |
    | (300, 500) | 0.6786 | 3757 | 1.2733 | 12185 | 8.9313 | 35344 |
    | (20, 1000) | 0.3004 | 8475 | 0.5241 | 22350 | 0.6597 | 12788 |
    | (50, 1000) | 0.5820 | 4968 | 0.7979 | 13085 | 2.6585 | 18461 |
    | (300, 1000) | 1.2100 | 4577 | 1.7316 | 11568 | 26.6996 | 72012 |
    | (500, 1000) | 1.7890 | 4705 | 2.5680 | 12714 | 58.4649 | 106069 |
    | (20, 2000) | 0.6905 | 5459 | 0.7423 | 10751 | 3.7730 | 23994 |
    | (50, 2000) | 1.0284 | 6016 | 1.0665 | 13636 | 8.7545 | 42958 |
    | (300, 2000) | 2.0282 | 4260 | 3.3280 | 7027 | 100.4799 | 129957 |
    | (500, 2000) | 3.6868 | 4829 | 3.7700 | 9385 | 201.6677 | 183047 |
    | (1000, 2000) | 5.2149 | 3979 | 6.1491 | 6603 | 793.1800 | 317432 |


    In this subsection, we showcase the efficacy of the proposed algorithm by employing it to tackle image restoration problems, specifically focusing on deblurring and denoising images. The image restoration problem can be defined as the inversion of the following degradation model:

    $y = Hx + w$, (4.5)

    where $y$, $H$, $x$, and $w$ denote the degraded image, the degradation (blurring) operator, the original image, and the additive noise, respectively.

    The reconstructed image is approximated by solving the regularized least-squares minimization problem:

    $\min_x \left\{\frac{1}{2}\|Hx - y\|_2^2 + \mu\phi(x)\right\}$, (4.6)

    where $\mu > 0$ is the regularization parameter and $\phi(\cdot)$ represents the regularizer. Norm-based regularizers are commonly utilized to eliminate noise in restoration problems, a classical instance being Tikhonov regularization [40]; here we employ the $\ell_1$ norm. Problem (4.6) can then be reformulated as follows:

    find $x \in \arg\min_{x\in\mathbb{R}^s}\left\{\frac{1}{2}\|Hx - y\|_2^2 + \mu\|x\|_1\right\}$, (4.7)

    where $y$ denotes the degraded image and $H$ represents a bounded linear operator. Problem (4.7) can be cast in the form of problem (1.1) by setting $B = \partial(\mu\|\cdot\|_1)$ with $\mu = 0.001$ and $A = \nabla\mathcal{L}$, where $\mathcal{L}(x) = \frac{1}{2}\|Hx - y\|_2^2$; hence $A(x) = \nabla\mathcal{L}(x) = H^T(Hx - y)$. First, we degrade each image by adding random noise and different types of blurring: Gaussian blur (size $20\times20$ with standard deviation 20), average blur (size $10\times10$), and motion blur (linear camera motion of 20 pixels at an angle of 40 degrees). Next, we solve problem (4.7) using our algorithm in Theorem 4.3 with $f(x) = \frac{x}{2}$ for all $x \in \mathbb{R}^s$, $\alpha_k = \frac{1}{k+1}$, $\beta_k = \frac{1}{k+1}$, $\gamma_k = \frac{1}{100k+1}$, $\lambda_k = 0.7$, and $\theta_k$ defined as in (4.4).
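    A hedged sketch of this setup in Python, using SciPy's Gaussian filter as a stand-in for one of the blurring operators $H$ (the kernel size and noise level are illustrative assumptions; a Gaussian blur is self-adjoint, so $H^T = H$):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, sigma=3.0, noise_level=0.01, seed=2):
    # Degradation model (4.5): y = Hx + w (Gaussian blur plus random noise).
    rng = np.random.default_rng(seed)
    return gaussian_filter(x, sigma) + noise_level * rng.standard_normal(x.shape)

def restore(y, sigma=3.0, mu=0.001, lam=0.7, iters=300):
    # Iteration (4.2) applied to problem (4.7); for a Gaussian kernel H is
    # self-adjoint, so A(x) = H^T(Hx - y) = H(Hx - y).
    H = lambda v: gaussian_filter(v, sigma)
    prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    f = lambda v: v / 2.0                                # contraction, c = 1/2
    x_prev = x = y.copy()
    for k in range(1, iters + 1):
        alpha = beta = 1.0 / (k + 1)
        gamma = 1.0 / (100 * k + 1)
        diff = np.linalg.norm(x - x_prev)
        theta = min(0.5, 1.0/((k + 1)**2 * diff)) if diff > 0 else 0.5
        w = x + theta * (x - x_prev)
        z = alpha * w + (1 - alpha) * prox(w - lam * H(H(w) - y), lam * mu)
        y_k = beta * w + (1 - beta) * prox(z - lam * H(H(z) - y), lam * mu)
        x_prev, x = x, gamma * f(x) + (1 - gamma) * y_k
    return x
```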

    We present comparisons of the performance of our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024). For Kitkuan et al.'s algorithm (2019) (1.6), presented in [32], we set $f(x) = \frac{x}{2}$ for all $x \in \mathbb{R}^s$, $\gamma_k = \frac{1}{100k+1}$, $\lambda_k = 0.7$, and $\theta_k$ defined as in (4.4). For Tan's algorithm (2024), presented in [31, Algorithm 1.3], we choose the following parameter values: $\zeta_k = \theta_k$, $\delta = 1.5$, $\varphi = \frac{1}{20}$, and $\chi_k = \lambda_k$. The quality of the reconstructed image is evaluated using the signal-to-noise ratio (SNR):

    $\mathrm{SNR}(k) = 20\log_{10}\dfrac{\|x\|_2^2}{\|x - x_k\|_2^2}$,

    where $x$ represents the original image, while $x_k$ stands for the image restored at iteration $k$.
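    A direct NumPy transcription of this formula reads (sketch):

```python
import numpy as np

def snr(x_true, x_k):
    # SNR(k) = 20 * log10( ||x||_2^2 / ||x - x_k||_2^2 ), as defined above.
    return 20.0 * np.log10(np.linalg.norm(x_true)**2 / np.linalg.norm(x_true - x_k)**2)
```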

    The effectiveness of image restoration using our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024) is depicted in Figures 1–3.

    Figure 1.  Figure (a) illustrates the original pirate image; Figure (b) depicts the images degraded by Gaussian blur and random noise; Figure (c) showcases the reconstructed images using Kitkuan et al.'s algorithm (2019); Figure (d) showcases the reconstructed images using Tan's algorithm (2024); and Figure (e) presents the reconstructed images using our algorithm as described in (4.2).
    Figure 2.  Figure (a) illustrates the original Lena image; Figure (b) depicts the images degraded by average blur and random noise; Figure (c) showcases the reconstructed images using Kitkuan et al.'s algorithm (2019); Figure (d) showcases the reconstructed images using Tan's algorithm (2024); and Figure (e) presents the reconstructed images using our algorithm as described in (4.2).
    Figure 3.  Figure (a) illustrates the original dog image; Figure (b) depicts the images degraded by motion blur and random noise; Figure (c) showcases the reconstructed images using Kitkuan et al.'s algorithm (2019); Figure (d) showcases the reconstructed images using Tan's algorithm (2024); and Figure (e) presents the reconstructed images using our algorithm as described in (4.2).

    The comparisons among our proposed algorithm, Kitkuan et al.'s algorithm (2019), and Tan's algorithm (2024) in image restoration problems are illustrated in Figure 4.

    Figure 4.  (a) The performance of SNR for the pirate image using three algorithms shown in Figure 1; (b) The performance of SNR for the Lena image using three algorithms displayed in Figure 2; and (c) The performance of SNR for the dog image using three algorithms shown in Figure 3.

    The experiments were carried out using MATLAB 9.13 (R2022b) on a MacBook Pro (14-inch, 2021) equipped with an Apple M1 Pro processor and 16 GB of memory.

    We introduce a novel generalized viscosity forward-backward splitting scheme that incorporates inertial terms aimed at addressing the monotone inclusion problem. We also include a proof of the strong convergence of this algorithm under certain specified conditions for the relevant parameters. Furthermore, we leverage these results to approximate solutions for convex minimization problems. Additionally, we present a numerical example to compare our proposed algorithm with others in convex minimization problems. Finally, we demonstrate the efficacy of our method in solving image restoration problems.

    Kasamsuk Ungchittrakool: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing, Visualization; Natthaphon Artsawang: Conceptualization, Methodology, Software, Validation, Convergence analysis, Investigation, Writing-original draft preparation, Writing-review and editing, Visualization, Project administration, Funding acquisition. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research received financial support from the Faculty of Science, Naresuan University, under grant number R2567E_SCI003.

    The authors declare no conflict of interest.



    [1] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. https://doi.org/10.1137/0716071 doi: 10.1137/0716071
    [2] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl., 72 (1979), 383–390. https://doi.org/10.1016/0022-247X(79)90234-8 doi: 10.1016/0022-247X(79)90234-8
    [3] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control. Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [4] H. Brezis, P. L. Lions, Produits infinis de résolvantes, Israel J. Math., 29 (1978), 329–345. https://doi.org/10.1007/BF02761171 doi: 10.1007/BF02761171
    [5] A. Moudafi, Viscosity approximation methods for fixed-point problems, J. Math. Anal. Appl., 241 (2000), 46–55.
    [6] H. H. Bauschke, E. Matoušková, S. Reich, Projection and proximal point methods: Convergence results and counterexamples, Nonlinear Anal., 56 (2004), 715–738. https://doi.org/10.1016/j.na.2003.10.010 doi: 10.1016/j.na.2003.10.010
    [7] R. E. Bruck, S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston J. Math., 3 (1977), 459–470.
    [8] G. H. G. Chen, R. T. Rockafellar, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), 421–444. https://doi.org/10.1137/S1052623495290179 doi: 10.1137/S1052623495290179
    [9] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim., 29 (1991), 403–419. https://doi.org/10.1137/0329022 doi: 10.1137/0329022
    [10] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877–898. https://doi.org/10.1137/0314056 doi: 10.1137/0314056
    [11] N. Artsawang, K. Ungchittrakool, A new forward-backward penalty scheme and its convergence for solving monotone inclusion problems, Carpath. J. Math., 35 (2019), 349–363.
    [12] N. Artsawang, K. Ungchittrakool, A new splitting forward-backward algorithm and convergence for solving constrained convex optimization problem in Hilbert spaces, J. Nonlinear Convex Anal., 22 (2021), 1003–1023.
    [13] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [14] F. Alvarez, H. Attouch, An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping, Set Valued Anal., 9 (2001), 3–11. https://doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [15] Y. Dong, Q. Luo, On an inertial Krasnoselskii-Mann iteration, Appl. Set-Valued Anal. Optim., 6 (2024), 103–112.
    [16] J. J. Maulen, I. Fierro, J. Peypouquet, Inertial Krasnoselskii-Mann iteration, Set-Valued Var. Anal., 32 (2024). https://doi.org/10.1007/s11228-024-00713-7
    [17] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [18] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math., 64 (2019), 409–435. https://doi.org/10.21136/AM.2019.0323-18
    [19] Y. Shehu, O. S. Iyiola, F. U. Ogbuisi, Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing, Numer. Algor., 83 (2020), 1321–1347. https://doi.org/10.1007/s11075-019-00727-5 doi: 10.1007/s11075-019-00727-5
    [20] N. Artsawang, K. Ungchittrakool, Inertial Mann-type algorithm for a nonexpansive mapping to solve monotone inclusion and image restoration problems, Symmetry, 12 (2020), 750. https://doi.org/10.3390/sym12050750 doi: 10.3390/sym12050750
    [21] B. Tan, S. X. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. https://doi.org/10.23952/jnva.4.2020.3.02
    [22] N. Artsawang, Accelerated preconditioning Krasnosel'skiĭ-Mann method for efficiently solving monotone inclusion problems, AIMS Math., 8 (2023), 28398–28412. https://doi.org/10.3934/math.20231453 doi: 10.3934/math.20231453
    [23] N. Nimana, N. Artsawang, A strongly convergent simultaneous cutter method for finding the minimal norm solution to common fixed point problem, Carpathian J. Math., 40 (2024), 155–171.
    [24] Y. X. Hao, J. Zhao, Two-step inertial Bregman projection iterative algorithm for solving the split feasibility problem, Appl. Nonlinear Anal., 1 (2024), 64–78. https://doi.org/10.69829/apna-024-0101-ta04 doi: 10.69829/apna-024-0101-ta04
    [25] K. Ungchittrakool, S. Plubtieng, N. Artsawang, P. Thammasiri, Modified Mann-type algorithm for two countable families of nonexpansive mappings and application to monotone inclusion and image restoration problems, Mathematics, 11 (2023), 2927. https://doi.org/10.3390/math11132927 doi: 10.3390/math11132927
    [26] N. Artsawang, S. Plubtieng, O. Bagdasar, K. Ungchittrakool, S. Baiya, P. Thammasiri, Inertial Krasnosel'skiĭ-Mann iterative algorithm with step-size parameters involving nonexpansive mappings with applications to solve image restoration problems, Carpathian J. Math., 40 (2024), 243–261.
    [27] L. O. Jolaoso, L. Olakunle, Y. Shehu, J. C. Yao, R. Q. Xu, Double inertial parameters forward-backward splitting method: Applications to compressed sensing, image processing, and SCAD penalty problems, J. Nonlinear Var. Anal., 7 (2023), 627–646.
    [28] Y. Pei, Y. Chen, S. Song, A novel accelerated algorithm for solving split variational inclusion problems and fixed point problems, J. Nonlinear Funct. Anal., 2023 (2023), 19.
    [29] O. T. Mewomo, C. C. Okeke, F. U. Ogbuisi, Iterative solutions of split fixed point and monotone inclusion problems in Hilbert spaces, J. Appl. Numer. Optim., 5 (2023), 271–285.
    [30] M. X. Zheng, Y. N. Guo, Scaled forward-backward algorithm and the modified superiorized version for solving the split monotone variational inclusion problem, Optim. Erudit., 1 (2024), 56–74. https://doi.org/10.69829/oper-024-0101-ta05 doi: 10.69829/oper-024-0101-ta05
    [31] B. Tan, X. L. Qin, On relaxed inertial projection and contraction algorithms for solving monotone inclusion problems, Adv. Comput. Math., 50 (2024), 59. https://doi.org/10.1007/s10444-024-10156-1 doi: 10.1007/s10444-024-10156-1
    [32] D. Kitkuan, P. Kumam, J. Martínez-Moreno, K. Sitthithakerngkiet, Inertial viscosity forward-backward splitting algorithm for monotone inclusions and its application to image restoration problems, Int. J. Comput. Math., 97 (2019), 482–497. https://doi.org/10.1080/00207160.2019.1649661 doi: 10.1080/00207160.2019.1649661
    [33] D. Kitkuan, P. Kumam, J. Martínez-Moreno, Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems, Optimization, 69 (2020), 1557–1581. https://doi.org/10.1080/02331934.2019.1646742 doi: 10.1080/02331934.2019.1646742
    [34] W. Takahashi, Nonlinear functional analysis, Japan: Yokohama Publishers, 2000.
    [35] H. H. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [36] G. López, V. Martín-Márquez, F. H. Wang, H. K. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal., 2012 (2012), 109236. https://doi.org/10.1155/2012/109236 doi: 10.1155/2012/109236
    [37] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl., 75 (1980), 287–292.
    [38] J. B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones, Israel J. Math., 26 (1977), 137–150. https://doi.org/10.1007/BF03007664
    [39] R. T. Rockafellar, On the maximality of subdifferential mappings, Pac. J. Math., 33 (1970), 209–216. https://doi.org/10.2140/pjm.1970.33.209 doi: 10.2140/pjm.1970.33.209
    [40] A. N. Tikhonov, V. Y. Arsenin, Solutions of Ill-Posed problems, SIAM Rev. 21 (1979), 266–267.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
