
Some variant of Tseng splitting method with accelerated Visco-Cesaro means for monotone inclusion problems

  • In this paper, we examine the convergence analysis of a variant of Tseng's splitting method for the monotone inclusion problem and the fixed point problem associated with an infinite family of η-demimetric mappings in Hilbert spaces. The qualitative results of the proposed variant show strong convergence characteristics under a suitable set of control conditions. We also provide a numerical example to demonstrate the applicability of the variant, together with some applications.

    Citation: Yasir Arfat, Supak Phiangsungnoen, Poom Kumam, Muhammad Aqeel Ahmad Khan, Jamshad Ahmad. Some variant of Tseng splitting method with accelerated Visco-Cesaro means for monotone inclusion problems[J]. AIMS Mathematics, 2023, 8(10): 24590-24608. doi: 10.3934/math.20231254




    Let the triplet $(H, \langle\cdot,\cdot\rangle, \|\cdot\|)$ denote a real Hilbert space with its inner product and induced norm. The classical monotone inclusion problem aims to find

    $$s \in H \quad \text{such that} \quad 0 \in As + Bs, \tag{1.1}$$

    where $A \subset H \times H$ is a multi-valued operator and $B$ is a single-valued operator on $H$. In the context of monotone operator theory, (1.1) has been widely studied in connection with various problems in signal processing, subgradient algorithms, image recovery, variational inequalities and evolution equations; see [17,19,37] and the references cited therein.

    In order to study problem (1.1), one can employ effective iterative algorithms. The elegant forward-backward (FB) iterative algorithm [34,35] is prominent among the various iterative algorithms for solving (1.1). However, the FB iterative algorithm exhibits only weak convergence and requires rather strong conditions on the operators $A$ and $B$ [44]. Recently, Gibali and Thong [23] considered a modified variant of Tseng's splitting method to obtain strong convergence results in Hilbert spaces.

    The fixed point problem (FPP) is another important framework for studying a variety of problems arising in various branches of the sciences [17,24,25]. In 2017, Takahashi [38] proposed and analyzed a new unifying class of nonlinear operators, namely the class of η-demimetric operators in Hilbert spaces, as follows:

    Let $C$ be a nonempty subset of a real Hilbert space $H$. An operator $W: C \to C$ is said to be $\eta$-demimetric [38], where $\eta \in (-\infty, 1)$, if $\mathrm{Fix}(W) \neq \emptyset$ and

    $$\langle s - t, (\mathrm{Id} - W)s \rangle \geq \frac{1}{2}(1 - \eta)\|(\mathrm{Id} - W)s\|^{2}, \quad \text{for all } s \in C \text{ and } t \in \mathrm{Fix}(W),$$

    where $\mathrm{Id}$ indicates the identity operator and $\mathrm{Fix}(W) = \{t \in C \,|\, t = Wt\}$ denotes the set of all fixed points of the operator $W$. Note that

    $$\|Ws - t\|^{2} \leq \|s - t\|^{2} + \eta\|s - Ws\|^{2}, \quad \text{for all } s \in C \text{ and } t \in \mathrm{Fix}(W),$$

    is an equivalent characterization of an η-demimetric operator. The class of η-demimetric operators has been studied extensively in various instances of the FPP in Hilbert spaces; see [39,40,42]. On the other hand, Baillon [13] established the following nonlinear ergodic theorem for nonexpansive operators:

    Theorem 1.1. [13] Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ and let $W: C \to C$ be a nonexpansive operator such that $\mathrm{Fix}(W) \neq \emptyset$. Then for all $s \in C$, the Cesàro means

    $$W_{n}s = \frac{1}{n+1}\sum_{i=0}^{n} W^{i}s, \quad n \in \{0, 1, 2, \dots\},$$

    converge weakly to a fixed point of $W$.
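To make the distinction concrete (an illustrative sketch of our own, not taken from the paper): for the nonexpansive rotation by 90° in $\mathbb{R}^{2}$, whose only fixed point is the origin, the raw iterates $W^{n}s$ never converge — they circle forever — while the Cesàro means do tend to the fixed point $0$:

```python
import math

# Nonexpansive map: rotation by 90 degrees in R^2; its only fixed point is (0, 0).
def W(p):
    x, y = p
    return (-y, x)

def cesaro_mean(p, n):
    """(1/(n+1)) * sum_{i=0}^{n} W^i p -- the Cesaro means of Theorem 1.1."""
    total = [0.0, 0.0]
    q = p
    for _ in range(n + 1):
        total[0] += q[0]
        total[1] += q[1]
        q = W(q)
    return (total[0] / (n + 1), total[1] / (n + 1))

p = (1.0, 0.0)
# The raw iterates W^n p keep norm 1 forever (they only rotate) ...
# ... but the Cesaro means tend to the fixed point (0, 0), at rate O(1/n).
m = cesaro_mean(p, 4000)
print(math.hypot(*m))  # ~2.5e-4
```

Here the consecutive rotations cancel over each full period, so only a single leftover term survives the averaging — exactly the weak-convergence mechanism the theorem describes.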

    Since then, the classical Cesàro means method has been considered for various classes of nonlinear operators; see [18,28,29] and the references cited therein. Note that the Cesàro means method fails to converge strongly for the class of nonexpansive operators [22]. In order to establish strong convergence results, one has to impose additional conditions on the algorithm. In 1967, Halpern [27] introduced and analyzed an iterative algorithm which converges strongly to the nearest fixed point of a nonexpansive operator. It is remarked that the Halpern iterative algorithm coincides with the Cesàro means method for linear operators. In 2000, Moudafi [33] proposed and analyzed the strongly convergent viscosity iterative algorithm by utilizing a strict contraction operator instead of the anchor point in the Halpern iterative algorithm. In order to generalize the classical Cesàro means method to an infinite family of η-demimetric operators, we first consider the following auxiliary operator:

    $$\begin{aligned}
    Q_{n,n+1} &= \mathrm{Id},\\
    Q_{n,n} &= \beta_{n}\mathcal{T}_{n}Q_{n,n+1} + (1-\beta_{n})\mathrm{Id},\\
    Q_{n,n-1} &= \beta_{n-1}\mathcal{T}_{n-1}Q_{n,n} + (1-\beta_{n-1})\mathrm{Id},\\
    &\;\;\vdots\\
    Q_{n,m} &= \beta_{m}\mathcal{T}_{m}Q_{n,m+1} + (1-\beta_{m})\mathrm{Id},\\
    &\;\;\vdots\\
    Q_{n,2} &= \beta_{2}\mathcal{T}_{2}Q_{n,3} + (1-\beta_{2})\mathrm{Id},\\
    W_{n} = Q_{n,1} &= \beta_{1}\mathcal{T}_{1}Q_{n,2} + (1-\beta_{1})\mathrm{Id},
    \end{aligned}$$

    where $0 \leq \beta_{m} \leq 1$ and $\mathcal{T}_{m}s = \gamma s + (1-\gamma)T_{m}s$ for all $s \in C$, with $T_{m}$ being an $\eta$-demimetric operator and $0 < \gamma < 1-\eta$. It is well known in the context of the operator $W_{n}$ that each $\mathcal{T}_{m}$ is nonexpansive and that the limit $\lim_{n\to\infty}Q_{n,m}$ exists. Moreover,

    $$Ws = \lim_{n\to\infty}W_{n}s = \lim_{n\to\infty}Q_{n,1}s, \quad \text{for all } s \in C.$$

    It follows from [41] that

    $$\mathrm{Fix}(W) = \bigcap_{n=1}^{\infty}\mathrm{Fix}(W_{n}). \tag{1.2}$$

    Moreover, to enhance the speed of convergence of the proposed iterative algorithm, we also utilize the inertial extrapolation technique essentially due to Polyak [36], see also [1,2,3,4,5,6,7,8,9,10,11,31].

    The rest of the paper is organized as follows: We present relevant preliminary concepts and results in Section 2. We show the convergence analysis of the proposed iterative algorithm in Section 3 and compute a numerical experiment for the viability of the algorithm in Section 4. Section 5 includes an experiment on image deblurring with applications.

    We start this section with the mathematical preliminaries required in the sequel. Throughout the paper, $(H, \langle\cdot,\cdot\rangle, \|\cdot\|)$ denotes a real Hilbert space with its inner product and induced norm. For a nonempty closed convex subset $C$ of the Hilbert space $H$, $P_{C}^{H}$ denotes the associated metric projection operator, which is firmly nonexpansive and satisfies $\langle s - P_{C}^{H}s, P_{C}^{H}s - t\rangle \geq 0$ for all $s \in H$ and $t \in C$. Recall that a set-valued operator $A: H \to 2^{H}$ is said to be monotone if, for all $s, t \in H$, $u \in As$ and $v \in At$, we have $\langle s - t, u - v\rangle \geq 0$. Moreover, $A$ is said to be maximal monotone if there is no proper monotone extension of $A$. For a monotone operator $A$, the associated resolvent operator $J_{m}^{A}$ of index $m > 0$ is defined as

    $$J_{m}^{A} = (\mathrm{Id} + mA)^{-1},$$

    where $(\cdot)^{-1}$ indicates the inverse operator. Note that the resolvent operator $J_{m}^{A}$ is well defined everywhere on the Hilbert space $H$. Further, $J_{m}^{A}$ is single-valued and firmly nonexpansive. Furthermore, $s \in A^{-1}(0)$ if and only if $s \in \mathrm{Fix}(J_{m}^{A})$.
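As a concrete illustration (our own sketch, not part of the paper): for $H = \mathbb{R}$ and $A = \partial|\cdot|$, the resolvent $J_{m}^{A}$ evaluates to the well-known soft-thresholding map, and the equivalence $s \in A^{-1}(0) \Leftrightarrow s \in \mathrm{Fix}(J_{m}^{A})$ can be checked directly:

```python
def resolvent_abs(s, m):
    """J_m^A = (Id + m*A)^{-1} for A = subdifferential of |.| on R,
    which evaluates to soft-thresholding with threshold m > 0."""
    if s > m:
        return s - m
    if s < -m:
        return s + m
    return 0.0

# A^{-1}(0) = {0}: the only fixed point of J_m^A is 0, for every index m > 0.
print(resolvent_abs(0.0, 0.5))   # 0.0 (fixed point)
print(resolvent_abs(2.0, 0.5))   # 1.5 (moved, so 2.0 is not a zero of A)
```

The formula follows by solving $s + m\,\partial|s| \ni x$ case by case; the threshold equals the resolvent index $m$.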

    The rest of this section is organized with the celebrated results required in the sequel. The following lemma is a special case of Lemma 2.4 in [30].

    Lemma 2.1. Let $\mu, \nu, \bar{\xi} \in H$ and let $\alpha, \beta, \gamma \in \mathbb{R}$ with $\alpha + \beta + \gamma = 1$. Then:

    (i) $\|\mu + \nu\|^{2} \leq \|\mu\|^{2} + 2\langle \nu, \mu + \nu\rangle$;

    (ii) $\|\alpha\mu + \beta\nu + \gamma\bar{\xi}\|^{2} = \alpha\|\mu\|^{2} + \beta\|\nu\|^{2} + \gamma\|\bar{\xi}\|^{2} - \alpha\beta\|\mu - \nu\|^{2} - \alpha\gamma\|\mu - \bar{\xi}\|^{2} - \beta\gamma\|\nu - \bar{\xi}\|^{2}$.

    Lemma 2.2. [12] Let $W: C \to C$ be an operator defined on a nonempty closed convex subset $C$ of a real Hilbert space $H$ and let $(p_{n})$ be a sequence in $C$. If $p_{n} \rightharpoonup p$ and $(\mathrm{Id} - W)p_{n} \to 0$, then $p \in \mathrm{Fix}(W)$.

    Lemma 2.3. [38] Let $C$ be a nonempty, closed and convex subset of a Hilbert space $H$ and let $W: C \to H$ be an $\eta$-demimetric operator with $\eta \in (-\infty, 1)$. Then $\mathrm{Fix}(W)$ is closed and convex.

    Lemma 2.4. [42] Let $C$ be a nonempty, closed and convex subset of a Hilbert space $H$ and let $W: C \to H$ be an $\eta$-demimetric operator with $\eta \in (-\infty, 1)$ and $\mathrm{Fix}(W) \neq \emptyset$. Let $\gamma$ be a real number with $0 < \gamma < 1 - \eta$ and set $T = (1-\gamma)\mathrm{Id} + \gamma W$. Then $T$ is a quasi-nonexpansive operator from $C$ into $H$.

    Lemma 2.5. [15] Let $C$ be a nonempty bounded closed convex subset of a uniformly convex Banach space and let $W: C \to C$ be a nonexpansive operator. For each $s \in C$ and the Cesàro means $W_{n}s = \frac{1}{n+1}\sum_{i=0}^{n}W^{i}s$, we have $\limsup_{n\to\infty}\|W_{n}s - W(W_{n}s)\| = 0$.

    Lemma 2.6. [14] Let $A \subset H \times H$ be a maximal monotone operator and let $B$ be a Lipschitz continuous and monotone operator on $H$. Then $A + B$ is a maximal monotone operator.

    Lemma 2.7. [23] Let $A \subset H \times H$ be a maximal monotone operator and let $B$ be an operator on $H$. Define $S_{\mu} := (\mathrm{Id} + \mu A)^{-1}(\mathrm{Id} - \mu B)$, $\mu > 0$. Then we have $\mathrm{Fix}(S_{\mu}) = (A + B)^{-1}(0)$ for all $\mu > 0$.

    Lemma 2.8. [46] Let $(b_{n})$ be a sequence of nonnegative real numbers such that there exists $n_{0} \in \mathbb{N}$ with

    $$b_{n+1} \leq (1 - \psi_{n})b_{n} + \psi_{n}c_{n} + d_{n}, \quad n \geq n_{0},$$

    where $(\psi_{n}) \subset (0, 1)$ and the sequences $(c_{n})$, $(d_{n})$ satisfy the following conditions:

    (I) $\sum_{n=1}^{\infty}\psi_{n} = \infty$;

    (II) $\limsup_{n\to\infty}c_{n} \leq 0$;

    (III) $\sum_{n=1}^{\infty}d_{n} < \infty$, with $d_{n} \geq 0$ for all $n \geq 0$.

    Then $\lim_{n\to\infty}b_{n} = 0$.

    Lemma 2.9. [32] Let $(q_{n})$ be a sequence of nonnegative real numbers. Suppose that there is a subsequence $(q_{n_{j}})$ of $(q_{n})$ such that $q_{n_{j}} < q_{n_{j}+1}$ for all $j \in \mathbb{N}$. Then there exists a nondecreasing sequence $(\varepsilon_{k}) \subset \mathbb{N}$ such that $\lim_{k\to\infty}\varepsilon_{k} = \infty$ and the following properties hold:

    $$q_{\varepsilon_{k}} \leq q_{\varepsilon_{k}+1} \quad \text{and} \quad q_{k} \leq q_{\varepsilon_{k}+1},$$

    for all sufficiently large $k \in \mathbb{N}$. In fact, $\varepsilon_{k}$ is the largest number $n$ in the set $\{1, 2, \dots, k\}$ such that $q_{n} < q_{n+1}$.

    In this section, we prove the following strong convergence result.

    Theorem 3.1. Let $A \subset H \times H$ be a maximal monotone operator and let $B$ be a monotone and $\rho$-Lipschitz operator for some $\rho > 0$ on a real Hilbert space $H$. Let $W_{n}$ be the $W$-operator and let $h$ be a $\lambda$-contraction on $H$ with $\lambda \in [0, 1)$. Assume that $\Gamma = (A + B)^{-1}(0) \cap \mathrm{Fix}(W) \neq \emptyset$, $\mu_{1} > 0$, $\sigma \in (0, 1)$, $\{\bar{\xi}_{n}\} \subset [0, 1)$ and $\{\alpha_{n}\}, \{\beta_{n}\}$ are sequences in $(0, 1)$. For given $p_{0}, p_{1} \in H$, let the iterative sequence $\{p_{n}\}$ be generated by

    $$\begin{cases}
    u_{n} = p_{n} + \bar{\xi}_{n}(p_{n} - p_{n-1});\\
    v_{n} = J_{\mu_{n}}^{A}(\mathrm{Id} - \mu_{n}B)u_{n};\\
    s_{n} = v_{n} - \mu_{n}(Bv_{n} - Bu_{n});\\
    p_{n+1} = \alpha_{n}h(p_{n}) + (1 - \alpha_{n} - \beta_{n})p_{n} + \beta_{n}\frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}.
    \end{cases} \tag{3.1}$$

    Assume that the following step size rule

    $$\mu_{n+1} = \begin{cases}
    \min\left\{\dfrac{\sigma\|u_{n} - v_{n}\|}{\|Bu_{n} - Bv_{n}\|}, \mu_{n}\right\}, & \text{if } Bu_{n} - Bv_{n} \neq 0;\\
    \mu_{n}, & \text{otherwise},
    \end{cases}$$

    and conditions:

    (C1) $\sum_{n=1}^{\infty}\bar{\xi}_{n}\|p_{n} - p_{n-1}\| < \infty$;

    (C2) $\lim_{n\to\infty}\alpha_{n} = 0$ and $\sum_{n=1}^{\infty}\alpha_{n} = \infty$, and for each $n \in \mathbb{N}$, $0 < a < \liminf_{n\to\infty}\beta_{n} \leq \limsup_{n\to\infty}\beta_{n} < b < 1 - \alpha_{n}$, where $a, b$ are positive real numbers,

    hold. Then the sequence {pn} generated by (3.1) converges strongly to an element in Γ.
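The scheme (3.1) can be sketched in code as follows (a minimal scalar sketch under our own illustrative choices: the operators $As = 4s$, $Bs = 3s$, $h(s) = s/8$ and a single nonexpansive map $W(s) = -s/2$ with $\mathrm{Fix}(W) = \{0\}$ stand in for the general data, and `J` plays the role of the resolvent $J_{\mu_{n}}^{A}$):

```python
def tseng_visco_cesaro(p0, p1, J, B, h, W, alpha, beta, xi,
                       mu1=1.0, sigma=0.5, iters=200):
    """Sketch of iteration (3.1) on H = R: inertial step, Tseng's
    forward-backward-forward step with the adaptive step size mu_n,
    then the viscosity / Cesaro-means combination step."""
    p_prev, p, mu = p0, p1, mu1
    for n in range(1, iters + 1):
        u = p + xi(n) * (p - p_prev)                 # u_n
        v = J(u - mu * B(u), mu)                     # v_n
        s = v - mu * (B(v) - B(u))                   # s_n
        total, q = 0.0, s                            # Cesaro mean of W^i s_n
        for _ in range(n):
            total += q
            q = W(q)
        cesaro = total / n
        p_next = alpha(n) * h(p) + (1 - alpha(n) - beta(n)) * p + beta(n) * cesaro
        if B(u) != B(v):                             # step size rule for mu_{n+1}
            mu = min(sigma * abs(u - v) / abs(B(u) - B(v)), mu)
        p_prev, p = p, p_next
    return p

# Illustrative data with Gamma = (A + B)^{-1}(0) ∩ Fix(W) = {0}:
p_star = tseng_visco_cesaro(
    p0=4.0, p1=4.5,
    J=lambda x, mu: x / (1 + 4 * mu),   # resolvent of As = 4s
    B=lambda s: 3 * s,
    h=lambda s: s / 8,
    W=lambda s: -s / 2,
    alpha=lambda n: 1 / (n + 1),
    beta=lambda n: n / (2 * (n + 1)),
    xi=lambda n: 0.0,
)
print(abs(p_star))  # close to 0, the unique point of Gamma
```

Note how the adaptive rule settles at $\mu_{n} \approx \sigma/3$ here, since $B$ is $3$-Lipschitz; no knowledge of the Lipschitz constant is needed in advance.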

    The following results from [23] are crucial for the analysis of our main result.

    Lemma 3.1. [23] The sequence $(\mu_{n})$ generated by (3.1) is a nonincreasing sequence with lower bound $\min\{\mu_{1}, \frac{\sigma}{\rho}\}$.

    Lemma 3.2. [23] Assume that Conditions (C1) and (C2) hold and let $(s_{n})$ be any sequence generated by (3.1). Then we have

    $$\|s_{n} - \bar{p}\|^{2} \leq \|p_{n} - \bar{p}\|^{2} - \left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right)\|p_{n} - v_{n}\|^{2} \tag{3.2}$$

    and

    $$\|s_{n} - v_{n}\| \leq \frac{\sigma\mu_{n}}{\mu_{n+1}}\|p_{n} - v_{n}\|. \tag{3.3}$$

    Lemma 3.3. Assume that Conditions (C1) and (C2) hold and suppose that

    $$\lim_{n\to\infty}\|p_{n} - u_{n}\| = \lim_{n\to\infty}\|p_{n} - v_{n}\| = \lim_{n\to\infty}\|p_{n} - s_{n}\| = \lim_{n\to\infty}\left\|s_{n} - \frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}\right\| = 0.$$

    Let $(p_{n})$ and $(u_{n})$ be two sequences generated by (3.1). If a subsequence $(p_{n_{t}})$ of $(p_{n})$ converges weakly to some $p \in H$, then $p \in \Gamma$.

    Proof. Let $p \in H$ be such that $p_{n_{t}} \rightharpoonup p$; then $p \in (A + B)^{-1}(0)$ follows from [23, Lemma 7]. Since $\lim_{n\to\infty}\|p_{n} - s_{n}\| = 0$ and $p_{n_{t}} \rightharpoonup p$, we have $s_{n_{t}} \rightharpoonup p$. Since

    $$\lim_{n\to\infty}\left\|s_{n} - \frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}\right\| = 0,$$

    utilizing Lemma 2.2, we get $p \in \mathrm{Fix}(W)$ and hence $p \in \Gamma$.

    Now we are able to prove the main result of this section.

    Proof of Theorem 3.1. For simplicity, the proof is divided into the following three steps:

    Step 1. Show that the sequence (pn) is bounded.

    Let $\bar{p} \in \Gamma$; then for each $n \in \mathbb{N}$ we have

    $$\|u_{n} - \bar{p}\|^{2} = \|p_{n} - \bar{p} + \bar{\xi}_{n}(p_{n} - p_{n-1})\|^{2} \leq \|p_{n} - \bar{p}\|^{2} + \bar{\xi}_{n}^{2}\|p_{n} - p_{n-1}\|^{2} + 2\bar{\xi}_{n}\langle p_{n} - \bar{p}, p_{n} - p_{n-1}\rangle. \tag{3.4}$$

    Set $W_{n} = \frac{1}{n}\sum_{i=0}^{n-1}W^{i}$; utilizing Lemma 2.4, we have

    $$\|W_{n}s - W_{n}t\| = \left\|\frac{1}{n}\sum_{i=0}^{n-1}W^{i}s - \frac{1}{n}\sum_{i=0}^{n-1}W^{i}t\right\| \leq \frac{1}{n}\sum_{i=0}^{n-1}\|W^{i}s - W^{i}t\| \leq \frac{1}{n}\sum_{i=0}^{n-1}\|s - t\| = \|s - t\|.$$

    It follows from the above estimate that $W_{n}$ is a nonexpansive operator. Moreover, for any $\bar{p} \in \Gamma$, we have $W_{n}\bar{p} = \frac{1}{n}\sum_{i=0}^{n-1}W^{i}\bar{p} = \bar{p}$. Since $\lim_{n\to\infty}\left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right) = 1 - \sigma^{2} > 0$, there exists $n_{0} \in \mathbb{N}$ such that for each $n \geq n_{0}$ we have

    $$1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}} > 0. \tag{3.5}$$

    From (3.2) and (3.5), we obtain

    $$\|s_{n} - \bar{p}\| \leq \|p_{n} - \bar{p}\|. \tag{3.6}$$

    Further, from (C2) and (3.6), we have

    $$\begin{aligned}
    \|p_{n+1} - \bar{p}\| &= \|\alpha_{n}(h(p_{n}) - \bar{p}) + (1 - \alpha_{n} - \beta_{n})(p_{n} - \bar{p}) + \beta_{n}(W_{n}s_{n} - \bar{p})\|\\
    &\leq \alpha_{n}\|h(p_{n}) - \bar{p}\| + (1 - \alpha_{n} - \beta_{n})\|p_{n} - \bar{p}\| + \beta_{n}\|W_{n}s_{n} - \bar{p}\|\\
    &\leq \alpha_{n}\lambda\|p_{n} - \bar{p}\| + \alpha_{n}\|h(\bar{p}) - \bar{p}\| + (1 - \alpha_{n})\|p_{n} - \bar{p}\|\\
    &= [1 - \alpha_{n}(1 - \lambda)]\|p_{n} - \bar{p}\| + \alpha_{n}(1 - \lambda)\frac{\|h(\bar{p}) - \bar{p}\|}{1 - \lambda}\\
    &\leq \max\left\{\|p_{n} - \bar{p}\|, \frac{\|h(\bar{p}) - \bar{p}\|}{1 - \lambda}\right\}.
    \end{aligned}$$

    Thus, for all $n \geq n_{0}$, $\|p_{n+1} - \bar{p}\| \leq \max\left\{\|p_{n_{0}} - \bar{p}\|, \frac{\|h(\bar{p}) - \bar{p}\|}{1 - \lambda}\right\}$. This implies that $(p_{n})$ is bounded.

    Step 2. Compute the following two estimates:

    $$\text{(i):}\quad \beta_{n}\left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right)\|p_{n} - v_{n}\|^{2} + \beta_{n}(1 - \alpha_{n} - \beta_{n})\|p_{n} - W_{n}s_{n}\|^{2} \leq \|p_{n} - \bar{p}\|^{2} - \|p_{n+1} - \bar{p}\|^{2} + \alpha_{n}\|h(p_{n}) - \bar{p}\|^{2}; \tag{3.7}$$
    $$\text{(ii):}\quad \|p_{n+1} - \bar{p}\|^{2} \leq [1 - \alpha_{n}(1 - \lambda)]\|p_{n} - \bar{p}\|^{2} + \alpha_{n}(1 - \lambda)\left[\frac{2}{1 - \lambda}\left(\beta_{n}\|p_{n} - W_{n}s_{n}\|\|p_{n+1} - \bar{p}\| + \langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle\right)\right]. \tag{3.8}$$

    Utilizing Lemma 2.1(ⅱ), we obtain

    $$\begin{aligned}
    \|p_{n+1} - \bar{p}\|^{2} &= \|\alpha_{n}(h(p_{n}) - \bar{p}) + (1 - \alpha_{n} - \beta_{n})(p_{n} - \bar{p}) + \beta_{n}(W_{n}s_{n} - \bar{p})\|^{2}\\
    &= \alpha_{n}\|h(p_{n}) - \bar{p}\|^{2} + (1 - \alpha_{n} - \beta_{n})\|p_{n} - \bar{p}\|^{2} + \beta_{n}\|W_{n}s_{n} - \bar{p}\|^{2}\\
    &\quad - \alpha_{n}(1 - \alpha_{n} - \beta_{n})\|h(p_{n}) - p_{n}\|^{2} - \beta_{n}(1 - \alpha_{n} - \beta_{n})\|p_{n} - W_{n}s_{n}\|^{2} - \alpha_{n}\beta_{n}\|h(p_{n}) - W_{n}s_{n}\|^{2}\\
    &\leq \alpha_{n}\|h(p_{n}) - \bar{p}\|^{2} + (1 - \alpha_{n} - \beta_{n})\|p_{n} - \bar{p}\|^{2} + \beta_{n}\|s_{n} - \bar{p}\|^{2} - \beta_{n}(1 - \alpha_{n} - \beta_{n})\|p_{n} - W_{n}s_{n}\|^{2}.
    \end{aligned}$$

    Now utilizing (3.2) in the above estimate, we get

    $$\begin{aligned}
    \|p_{n+1} - \bar{p}\|^{2} &\leq \alpha_{n}\|h(p_{n}) - \bar{p}\|^{2} + (1 - \alpha_{n})\|p_{n} - \bar{p}\|^{2} - \beta_{n}(1 - \alpha_{n} - \beta_{n})\|p_{n} - W_{n}s_{n}\|^{2} - \beta_{n}\left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right)\|p_{n} - v_{n}\|^{2}\\
    &\leq \alpha_{n}\|h(p_{n}) - \bar{p}\|^{2} + \|p_{n} - \bar{p}\|^{2} - \beta_{n}(1 - \alpha_{n} - \beta_{n})\|p_{n} - W_{n}s_{n}\|^{2} - \beta_{n}\left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right)\|p_{n} - v_{n}\|^{2}.
    \end{aligned}$$

    Simplifying the above estimate, we have the desired estimate (3.7).

    Next, by using (C2) and setting $j_{n} = (1 - \beta_{n})p_{n} + \beta_{n}W_{n}s_{n}$, we get

    $$\|j_{n} - \bar{p}\| \leq \|p_{n} - \bar{p}\| \tag{3.9}$$

    and

    $$\|p_{n} - j_{n}\| = \beta_{n}\|p_{n} - W_{n}s_{n}\|. \tag{3.10}$$

    Utilizing (3.9), (3.10), Lemma 2.1(ⅰ) and (ⅱ), the desired estimate (3.8) follows from the following calculation:

    $$\begin{aligned}
    \|p_{n+1} - \bar{p}\|^{2} &= \|(1 - \alpha_{n})(j_{n} - \bar{p}) + \alpha_{n}(h(p_{n}) - h(\bar{p})) - \alpha_{n}(p_{n} - j_{n}) - \alpha_{n}(\bar{p} - h(\bar{p}))\|^{2}\\
    &\leq \|(1 - \alpha_{n})(j_{n} - \bar{p}) + \alpha_{n}(h(p_{n}) - h(\bar{p}))\|^{2} - 2\alpha_{n}\langle p_{n} - j_{n} + \bar{p} - h(\bar{p}), p_{n+1} - \bar{p}\rangle\\
    &\leq (1 - \alpha_{n})\|j_{n} - \bar{p}\|^{2} + \alpha_{n}\|h(p_{n}) - h(\bar{p})\|^{2} - 2\alpha_{n}\langle p_{n} - j_{n} + \bar{p} - h(\bar{p}), p_{n+1} - \bar{p}\rangle\\
    &\leq (1 - \alpha_{n})\|p_{n} - \bar{p}\|^{2} + \alpha_{n}\lambda\|p_{n} - \bar{p}\|^{2} + 2\alpha_{n}\langle p_{n} - j_{n}, \bar{p} - p_{n+1}\rangle + 2\alpha_{n}\langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle\\
    &\leq [1 - \alpha_{n}(1 - \lambda)]\|p_{n} - \bar{p}\|^{2} + 2\alpha_{n}\|p_{n} - j_{n}\|\|p_{n+1} - \bar{p}\| + 2\alpha_{n}\langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle\\
    &= [1 - \alpha_{n}(1 - \lambda)]\|p_{n} - \bar{p}\|^{2} + 2\alpha_{n}\beta_{n}\|p_{n} - W_{n}s_{n}\|\|p_{n+1} - \bar{p}\| + 2\alpha_{n}\langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle\\
    &= [1 - \alpha_{n}(1 - \lambda)]\|p_{n} - \bar{p}\|^{2} + \alpha_{n}(1 - \lambda)\left[\frac{2}{1 - \lambda}\left(\beta_{n}\|p_{n} - W_{n}s_{n}\|\|p_{n+1} - \bar{p}\| + \langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle\right)\right].
    \end{aligned}$$

    Step 3. Show that $\lim_{n\to\infty}\|p_{n} - \bar{p}\| = 0$.

    We consider the two possible cases for the sequence $(\|p_{n} - \bar{p}\|)$.

    Case A. There exists $n_{0} \in \mathbb{N}$ such that $\|p_{n+1} - \bar{p}\|^{2} \leq \|p_{n} - \bar{p}\|^{2}$ for all $n \geq n_{0}$. This implies that $\lim_{n\to\infty}\|p_{n} - \bar{p}\|$ exists. Since $\lim_{n\to\infty}\left(1 - \sigma^{2}\frac{\mu_{n}^{2}}{\mu_{n+1}^{2}}\right) = 1 - \sigma^{2} > 0$, by using (C2) and (3.7) we have

    $$\lim_{n\to\infty}\|p_{n} - v_{n}\| = \lim_{n\to\infty}\|p_{n} - W_{n}s_{n}\| = 0. \tag{3.11}$$

    From (3.3), we get

    $$\lim_{n\to\infty}\|s_{n} - v_{n}\| = 0. \tag{3.12}$$

    By the definition of (un) and (C1), we have

    $$\lim_{n\to\infty}\|u_{n} - p_{n}\| = \lim_{n\to\infty}\bar{\xi}_{n}\|p_{n} - p_{n-1}\| = 0. \tag{3.13}$$

    By using the triangle inequality, we obtain the following estimates:

    $$\begin{aligned}
    \|u_{n} - v_{n}\| &\leq \|u_{n} - p_{n}\| + \|p_{n} - v_{n}\| \to 0, \quad \text{as } n \to \infty;\\
    \|u_{n} - s_{n}\| &\leq \|u_{n} - v_{n}\| + \|v_{n} - s_{n}\| \to 0, \quad \text{as } n \to \infty;\\
    \|p_{n} - s_{n}\| &\leq \|p_{n} - v_{n}\| + \|v_{n} - s_{n}\| \to 0, \quad \text{as } n \to \infty;\\
    \|s_{n} - W_{n}s_{n}\| &\leq \|p_{n} - s_{n}\| + \|p_{n} - W_{n}s_{n}\| \to 0, \quad \text{as } n \to \infty.
    \end{aligned}$$

    By using Lemma 2.5, we have

    $$\limsup_{n\to\infty}\|W_{n}s_{n} - W(W_{n}s_{n})\| = 0. \tag{3.14}$$

    Note that for all nN, we get

    $$\|p_{n+1} - p_{n}\| \leq \|p_{n+1} - W_{n}s_{n}\| + \|p_{n} - W_{n}s_{n}\| \leq \alpha_{n}\|h(p_{n}) - p_{n}\| + (2 - \beta_{n})\|p_{n} - W_{n}s_{n}\|. \tag{3.15}$$

    From (3.11) and (C2), the estimate (3.15) implies that

    $$\lim_{n\to\infty}\|p_{n+1} - p_{n}\| = 0. \tag{3.16}$$

    Similarly, from (3.13), (3.16) and the following triangle inequality, we have

    $$\|p_{n+1} - u_{n}\| \leq \|p_{n+1} - p_{n}\| + \|p_{n} - u_{n}\| \to 0, \quad \text{as } n \to \infty.$$

    Since $(p_{n})$ is bounded, there exists a subsequence $(p_{n_{t}})$ of $(p_{n})$ with $p_{n_{t}} \rightharpoonup p \in H$. Now, utilizing Lemma 3.3, we have $p \in \Gamma$.

    By making use of the estimate (3.16), we get

    $$\limsup_{n\to\infty}\langle h(\bar{p}) - \bar{p}, p_{n+1} - \bar{p}\rangle \leq \limsup_{n\to\infty}\langle h(\bar{p}) - \bar{p}, p_{n+1} - p_{n}\rangle + \limsup_{n\to\infty}\langle h(\bar{p}) - \bar{p}, p_{n} - \bar{p}\rangle \leq 0. \tag{3.17}$$

    From the estimate (3.17) and Lemma 2.8, we get $\lim_{n\to\infty}\|p_{n} - \bar{p}\| = 0$.

    Case B. There exists a subsequence $(\|p_{n_{k}} - \bar{p}\|^{2})$ of $(\|p_{n} - \bar{p}\|^{2})$ such that $\|p_{n_{k}} - \bar{p}\| < \|p_{n_{k}+1} - \bar{p}\|$ for all $k \in \mathbb{N}$.

    It follows from Lemma 2.9 that there exists a nondecreasing sequence $(b_{m}) \subset \mathbb{N}$ such that $\lim_{m\to\infty}b_{m} = \infty$ and, for all $m \in \mathbb{N}$, the inequality $\|p_{b_{m}} - \bar{p}\|^{2} \leq \|p_{b_{m}+1} - \bar{p}\|^{2}$ holds. In a similar fashion, from (3.7) we obtain

    $$\begin{aligned}
    \beta_{b_{m}}\left(1 - \sigma^{2}\frac{\mu_{b_{m}}^{2}}{\mu_{b_{m}+1}^{2}}\right)\|p_{b_{m}} - v_{b_{m}}\|^{2} + \beta_{b_{m}}(1 - \alpha_{b_{m}} - \beta_{b_{m}})\|p_{b_{m}} - W_{b_{m}}s_{b_{m}}\|^{2} &\leq \|p_{b_{m}} - \bar{p}\|^{2} - \|p_{b_{m}+1} - \bar{p}\|^{2} + \alpha_{b_{m}}\|h(p_{b_{m}}) - \bar{p}\|^{2}\\
    &\leq \alpha_{b_{m}}\|h(p_{b_{m}}) - \bar{p}\|^{2}.
    \end{aligned}$$

    Since $\lim_{n\to\infty}\alpha_{n} = 0$, we get

    $$\lim_{m\to\infty}\|p_{b_{m}} - v_{b_{m}}\| = \lim_{m\to\infty}\|p_{b_{m}} - W_{b_{m}}s_{b_{m}}\| = 0.$$

    Similarly from Case A, we have

    $$\limsup_{m\to\infty}\langle h(\bar{p}) - \bar{p}, p_{b_{m}+1} - \bar{p}\rangle \leq 0.$$

    Using (3.8), for all sufficiently large $m$ we have the following estimate:

    $$\begin{aligned}
    \|p_{b_{m}+1} - \bar{p}\|^{2} &\leq [1 - \alpha_{b_{m}}(1 - \lambda)]\|p_{b_{m}} - \bar{p}\|^{2} + \alpha_{b_{m}}(1 - \lambda)\left[\frac{2}{1 - \lambda}\left(\beta_{b_{m}}\|p_{b_{m}} - W_{b_{m}}s_{b_{m}}\|\|p_{b_{m}+1} - \bar{p}\| + \langle h(\bar{p}) - \bar{p}, p_{b_{m}+1} - \bar{p}\rangle\right)\right]\\
    &\leq [1 - \alpha_{b_{m}}(1 - \lambda)]\|p_{b_{m}+1} - \bar{p}\|^{2} + \alpha_{b_{m}}(1 - \lambda)\left[\frac{2}{1 - \lambda}\left(\beta_{b_{m}}\|p_{b_{m}} - W_{b_{m}}s_{b_{m}}\|\|p_{b_{m}+1} - \bar{p}\| + \langle h(\bar{p}) - \bar{p}, p_{b_{m}+1} - \bar{p}\rangle\right)\right].
    \end{aligned}$$

    The above estimate yields

    $$\|p_{b_{m}+1} - \bar{p}\|^{2} \leq \frac{2}{1 - \lambda}\left(\beta_{b_{m}}\|p_{b_{m}} - W_{b_{m}}s_{b_{m}}\|\|p_{b_{m}+1} - \bar{p}\| + \langle h(\bar{p}) - \bar{p}, p_{b_{m}+1} - \bar{p}\rangle\right). \tag{3.18}$$

    Therefore, $\limsup_{m\to\infty}\|p_{b_{m}} - \bar{p}\|^{2} \leq 0$. Hence $p_{n} \to \bar{p} \in \Gamma$, and this completes the proof.

    We now propose a variant of the iterative algorithm (3.1) embedded with the Halpern iterative algorithm [27].

    Theorem 3.2. Let $A \subset H \times H$ be a maximal monotone operator and let $B$ be a monotone and $\rho$-Lipschitz operator for some $\rho > 0$ on a real Hilbert space $H$. Let $W_{n}$ be the $W$-operator. Assume that $\Gamma = (A + B)^{-1}(0) \cap \mathrm{Fix}(W) \neq \emptyset$, $\mu_{1} > 0$, $\sigma \in (0, 1)$, $\{\bar{\xi}_{n}\} \subset [0, 1)$ and $\{\alpha_{n}\}, \{\beta_{n}\}$ are sequences in $(0, 1)$. For given $q, p_{0}, p_{1} \in H$, let the iterative sequences $\{u_{n}\}$, $\{v_{n}\}$, $\{s_{n}\}$ and $\{p_{n}\}$ be generated by

    $$\begin{cases}
    u_{n} = p_{n} + \bar{\xi}_{n}(p_{n} - p_{n-1});\\
    v_{n} = J_{\mu_{n}}^{A}(\mathrm{Id} - \mu_{n}B)u_{n};\\
    s_{n} = v_{n} - \mu_{n}(Bv_{n} - Bu_{n});\\
    p_{n+1} = \alpha_{n}q + (1 - \alpha_{n} - \beta_{n})p_{n} + \beta_{n}\frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}.
    \end{cases} \tag{3.19}$$

    Assume that the following step size rule

    $$\mu_{n+1} = \begin{cases}
    \min\left\{\dfrac{\sigma\|u_{n} - v_{n}\|}{\|Bu_{n} - Bv_{n}\|}, \mu_{n}\right\}, & \text{if } Bu_{n} - Bv_{n} \neq 0;\\
    \mu_{n}, & \text{otherwise},
    \end{cases}$$

    and conditions

    (C1) n=1ˉξnpnpn1<;

    (C2) $\lim_{n\to\infty}\frac{\alpha_{n}}{\beta_{n}} = 0$ and $\sum_{n=1}^{\infty}\alpha_{n}\beta_{n} = \infty$;

    (C3) for each $n \in \mathbb{N}$, $0 < a < \liminf_{n\to\infty}\beta_{n} \leq \limsup_{n\to\infty}\beta_{n} < b < 1 - \alpha_{n}$, where $a, b$ are positive real numbers,

    hold.

    Then the sequence {pn} generated by (3.19) converges strongly to a point in Γ.

    Remark 3.1. In order to obtain the desired result for the iteration (3.19), we have to assume a stopping criterion for (3.19), namely $n > n_{\max}$ for some sufficiently large number $n_{\max}$.

    Proof. Observe that for each n1, arguing similarly as in the proof of Theorem 3.1 (Steps 1–3), we deduce that Γ is well defined, closed and bounded. Furthermore, the sequence (pn) is bounded and

    $$\lim_{n\to\infty}\|p_{n+1} - p_{n}\| = 0. \tag{3.20}$$

    Let $p_{n+1} = \alpha_{n}q + (1 - \alpha_{n} - \beta_{n})p_{n} + \beta_{n}W_{n}s_{n}$. An easy calculation along with (3.20), (C2) and (C3) implies that

    $$\|W_{n}s_{n} - p_{n}\| \leq \frac{1}{\beta_{n}}\|p_{n+1} - p_{n}\| + \frac{\alpha_{n}}{\beta_{n}}\|q - p_{n}\|.$$

    The above estimate infers that

    $$\lim_{n\to\infty}\|W_{n}s_{n} - p_{n}\| = 0.$$

    The rest of the proof of Theorem 3.2 is similar to the proof of Theorem 3.1 and is therefore omitted.

    The following remark elaborates on how to realize condition (C1) in a computer-assisted iterative algorithm.

    Remark 3.2. We remark here that the condition (C1) can easily be realized in a computer-assisted iterative algorithm, since the value of $\|p_{n} - p_{n-1}\|$ is known before choosing $\bar{\xi}_{n}$ such that $0 \leq \bar{\xi}_{n} \leq \hat{\bar{\xi}}_{n}$ with

    $$\hat{\bar{\xi}}_{n} = \begin{cases}
    \min\left\{\dfrac{\Theta_{n}}{\|p_{n} - p_{n-1}\|}, \bar{\xi}\right\}, & \text{if } p_{n} \neq p_{n-1};\\
    \bar{\xi}, & \text{otherwise}.
    \end{cases}$$

    Here $\{\Theta_{n}\}$ denotes a sequence of positive numbers such that $\sum_{n=1}^{\infty}\Theta_{n} < \infty$ and $\bar{\xi} \in [0, 1)$.
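In code, the rule of Remark 3.2 is a one-line guard (our own sketch; the function name and the choice $\Theta_{n} = 1/n^{2}$ are illustrative):

```python
def choose_xi(p_n, p_prev, theta_n, xi_bar=0.9):
    """Upper bound hat{xi}_n of Remark 3.2: any 0 <= xi_n <= hat{xi}_n
    gives xi_n * |p_n - p_{n-1}| <= theta_n, so (C1) holds whenever
    the theta_n are summable."""
    gap = abs(p_n - p_prev)
    if gap == 0.0:
        return xi_bar                     # p_n == p_{n-1}: any xi in [0, xi_bar]
    return min(theta_n / gap, xi_bar)

n = 4
print(choose_xi(1.0, 0.5, 1 / n**2))      # min(0.0625 / 0.5, 0.9) = 0.125
```

The guard caps the inertial weight exactly when consecutive iterates are far apart, which is what makes the series in (C1) converge.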

    In this section, we present a numerical experiment to demonstrate the viability of the iterative algorithm (3.1).

    Example 4.1. Let $H = \mathbb{R}$ with the inner product $\langle s, t\rangle = st$ for all $s, t \in \mathbb{R}$ and induced norm $|s| = \sqrt{\langle s, s\rangle}$. Let the operators $h, A, B: \mathbb{R} \to \mathbb{R}$ be defined as $h(s) = \frac{s}{8}$, $As = 4s$ and $Bs = 3s$ for all $s \in \mathbb{R}$. Observe that $h$ is a contraction with constant $\lambda \in [0, 1)$, $B$ is a monotone and $\rho$-Lipschitz operator for some $\rho > 0$, and $A$ is a maximal monotone operator such that $(A + B)^{-1}(0) = \{0\}$. Let the sequence of operators $T_{i}: \mathbb{R} \to \mathbb{R}$ be defined by

    $$T_{i}(s) = \begin{cases}
    -\dfrac{3s}{i}, & s \in (-\infty, -1);\\[1mm]
    -s, & s \in [-1, \infty).
    \end{cases}$$

    Note that $(T_{i})$ is an infinite family of $\frac{3 - i^{2}}{(3 + i)^{2}}$-demimetric operators with $\bigcap_{i=1}^{\infty}\mathrm{Fix}(T_{i}) = \{0\} = \mathrm{Fix}(W)$. Hence $\Gamma = (A + B)^{-1}(0) \cap \mathrm{Fix}(W) = \{0\}$. In order to compute the numerical values of $(p_{n})$, we choose $\Theta_{n} = \frac{1}{n^{2}}$, $\bar{\xi} = 0.5$, $\alpha_{n} = \frac{1}{n+1}$, $\beta_{n} = \frac{n}{2(n+1)}$, $\mu_{1} = 7.45$ and $\sigma = 0.785$. In view of Remark 3.2, we set

    $$\hat{\bar{\xi}}_{n} = \begin{cases}
    \min\left\{\dfrac{1}{n^{2}\|p_{n} - p_{n-1}\|}, 0.5\right\}, & \text{if } p_{n} \neq p_{n-1};\\[1mm]
    0.5, & \text{otherwise}.
    \end{cases}$$

    We now provide a numerical test comparing the accelerated Tseng-type splitting method defined in Theorem 3.1 (i.e., Theorem 3.1 with $\bar{\xi}_{n} \neq 0$), the standard Tseng-type splitting method (i.e., Theorem 3.1 with $\bar{\xi}_{n} = 0$), Algorithm 1 [31] and Theorem 2 [23]. The stopping criterion is defined as $E_{n} = \|v_{n} - u_{n}\| < 10^{-5}$. Table 1 summarises the comparison of these algorithms with respect to the following choices of initial inputs:

    Table 1.  Numerical results for Example 4.1.

    | Method | Choice 1: Iter. | Choice 1: CPU(s) | Choice 2: Iter. | Choice 2: CPU(s) | Choice 3: Iter. | Choice 3: CPU(s) |
    |---|---|---|---|---|---|---|
    | (1) Theorem 3.1, $\bar{\xi}_{n} \neq 0$ | 11 | 0.053120 | 14 | 0.051362 | 10 | 0.048537 |
    | (2) Theorem 3.1, $\bar{\xi}_{n} = 0$ | 17 | 0.060018 | 19 | 0.058867 | 16 | 0.057642 |
    | (3) Algorithm 1 [31] | 27 | 0.068117 | 37 | 0.069215 | 33 | 0.065345 |
    | (4) Theorem 2 [23] | 36 | 0.074537 | 45 | 0.077642 | 38 | 0.068804 |


    Choice 1. p0=4, p1=4.5.

    Choice 2. p0=5, p1=3.

    Choice 3. p0=1.3, p1=4.7.

    The error plots of $E_{n}$ for $\bar{\xi}_{n} \neq 0$ and $\bar{\xi}_{n} = 0$ for each choice in Table 1 are shown in Figure 1.

    Figure 1.  Comparison of Theorem 3.1 for $\bar{\xi}_{n} \neq 0$ and $\bar{\xi}_{n} = 0$ with Theorem 2 [23].

    We can see from Table 1 and Figure 1 that Theorem 3.1 with $\bar{\xi}_{n} \neq 0$ performs better than Theorem 3.1 with $\bar{\xi}_{n} = 0$, Algorithm 1 [31] and Theorem 2 [23].

    In this section, we demonstrate some theoretical as well as applied instances of the main result in Section 3.

    The classical split feasibility problem (SFP), essentially due to Censor and Elfving [16], aims to find $\hat{s} \in \omega := C \cap h^{-1}(Q) = \{\bar{t} \in C : h\bar{t} \in Q\}$, where $C \subset H_{1}$ and $Q \subset H_{2}$ are nonempty, closed and convex subsets of $H_{1}$ and $H_{2}$, respectively. In order to derive the result for the SFP from Theorem 3.1, we recall the indicator operator of a nonempty, closed and convex subset $C$ of $H_{1}$:

    $$\Phi_{C}(s) := \begin{cases}
    0, & s \in C;\\
    \infty, & \text{otherwise}.
    \end{cases}$$

    It is well known that the subdifferential $\partial\Phi_{C}$ associated with $\Phi_{C}$ is a maximal monotone operator. Recall also that $\partial\Phi_{C}(\mu) = N(\mu, C)$, where $N(\mu, C)$ is the normal cone of $C$ at $\mu$. Utilizing this fact, we conclude that the resolvent operator of $\partial\Phi_{C}$ is the metric projection operator of $H_{1}$ onto $C$. Setting $B(\bar{x}) = (\mathrm{Id} - P_{Q})\bar{x}$, where $P_{Q}$ is the metric projection onto $Q$, and $A(\bar{x}) = \partial\Phi_{C}(\bar{x})$, the SFP has the inclusion structure defined in (1.1). Since $B$ is $\rho$-Lipschitz continuous with $\rho = 1$ (indeed, $\mathrm{Id} - P_{Q}$ is firmly nonexpansive) and $A$ is maximal monotone (see [12]), we arrive at the following variant of Theorem 3.1:

    Theorem 5.1. Assume that $\Gamma = \omega \cap \mathrm{Fix}(W) \neq \emptyset$. For given $p_{0}, p_{1} \in H_{1}$, let the iterative sequence $(p_{n})$ be generated by

    $$\begin{cases}
    u_{n} = p_{n} + \bar{\xi}_{n}(p_{n} - p_{n-1});\\
    v_{n} = P_{C}(\mathrm{Id} - \mu_{n}(\mathrm{Id} - P_{Q}))u_{n};\\
    s_{n} = v_{n} - \mu_{n}((\mathrm{Id} - P_{Q})v_{n} - (\mathrm{Id} - P_{Q})u_{n});\\
    p_{n+1} = \alpha_{n}h(p_{n}) + (1 - \alpha_{n} - \beta_{n})p_{n} + \beta_{n}\frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}.
    \end{cases} \tag{5.1}$$

    Assume that the following step size rule

    $$\mu_{n+1} = \begin{cases}
    \min\left\{\dfrac{\sigma\|u_{n} - v_{n}\|}{\|(\mathrm{Id} - P_{Q})u_{n} - (\mathrm{Id} - P_{Q})v_{n}\|}, \mu_{n}\right\}, & \text{if } (\mathrm{Id} - P_{Q})u_{n} - (\mathrm{Id} - P_{Q})v_{n} \neq 0;\\
    \mu_{n}, & \text{otherwise},
    \end{cases}$$

    and conditions (C1) and (C2) hold. Then the sequence (pn) generated by (5.1) converges strongly to an element in Γ.

    Let $f: H \to \mathbb{R} \cup \{+\infty\}$ and $g: H \to \mathbb{R} \cup \{+\infty\}$ be two convex, proper and lower semicontinuous functions such that $f$ is differentiable with $\rho$-Lipschitz continuous gradient and the proximal map of $g$ can be evaluated. We consider the following convex minimization problem of finding $\bar{x} \in H$ such that

    $$f(\bar{x}) + g(\bar{x}) = \min_{x \in H}\{f(x) + g(x)\}. \tag{5.2}$$

    In view of Fermat's rule, problem (5.2) is equivalent to the following problem of finding $\bar{x} \in H$ such that

    $$0 \in \nabla f(\bar{x}) + \partial g(\bar{x}), \tag{5.3}$$

    where the subdifferential $\partial g$ is a maximal monotone operator and the gradient $\nabla f$ is $\rho$-Lipschitz continuous [12,37]. Assume that $\omega$ is the set of solutions of problem (5.3) and $\omega \neq \emptyset$. In Theorem 3.1, set $B := \nabla f$ and $A := \partial g$. Then we obtain the following result.

    Theorem 5.2. Let $f: H \to \mathbb{R} \cup \{+\infty\}$ and $g: H \to \mathbb{R} \cup \{+\infty\}$ be two proper, convex and lower semicontinuous functions on a real Hilbert space $H$. Assume that $\Gamma = \omega \cap \bigcap_{i=1}^{\infty}\mathrm{Fix}(W_{i}) \neq \emptyset$ and $(\bar{\xi}_{n})$ is a bounded real sequence. For given $p_{0}, p_{1} \in H$, let the iterative sequence $(p_{n})$ be generated by

    $$\begin{cases}
    u_{n} = p_{n} + \bar{\xi}_{n}(p_{n} - p_{n-1});\\
    v_{n} = J_{\mu_{n}}^{\partial g}(\mathrm{Id} - \mu_{n}\nabla f)u_{n};\\
    s_{n} = v_{n} - \mu_{n}(\nabla f(v_{n}) - \nabla f(u_{n}));\\
    p_{n+1} = \alpha_{n}h(p_{n}) + (1 - \alpha_{n} - \beta_{n})p_{n} + \beta_{n}\frac{1}{n}\sum_{i=0}^{n-1}W^{i}s_{n}.
    \end{cases} \tag{5.4}$$

    Assume that the following step size rule

    $$\mu_{n+1} = \begin{cases}
    \min\left\{\dfrac{\sigma\|u_{n} - v_{n}\|}{\|\nabla f(u_{n}) - \nabla f(v_{n})\|}, \mu_{n}\right\}, & \text{if } \nabla f(u_{n}) - \nabla f(v_{n}) \neq 0;\\
    \mu_{n}, & \text{otherwise},
    \end{cases}$$

    and the conditions (C1) and (C2) hold. Then the sequence (pn) generated by (5.4) converges strongly to an element in Γ.
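One forward-backward-forward pass of (5.4) for the LASSO-type objective $f(x) = \frac{1}{2}\|hx - w\|_{2}^{2}$, $g = k\|\cdot\|_{1}$ can be sketched as follows (our own illustrative sketch: here the resolvent $J_{\mu}^{\partial g}$ is soft-thresholding, and the inertial and Cesàro parts are omitted, i.e., $W = \mathrm{Id}$):

```python
import numpy as np

def soft_threshold(x, t):
    """Resolvent J_t^{dg} for g = ||.||_1, i.e. the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tseng_step(u, h, w, k, mu):
    """One step of (5.4): forward step on f, backward step on g,
    then the correcting forward step of Tseng's method."""
    grad = lambda x: h.T @ (h @ x - w)          # nabla f
    v = soft_threshold(u - mu * grad(u), mu * k)
    return v - mu * (grad(v) - grad(u))         # s_n

# Tiny illustrative problem (k = 0 reduces to a consistent least-squares system).
h = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = h @ np.array([1.0, -2.0])
mu = 0.5 / np.linalg.svd(h, compute_uv=False).max() ** 2   # mu * rho < 1
x = np.zeros(2)
for _ in range(300):
    x = tseng_step(x, h, w, k=0.0, mu=mu)
print(x)  # approaches [1, -2]
```

The fixed step $\mu$ above is chosen from the Lipschitz constant $\rho = \|h\|^{2}$ of $\nabla f$; the adaptive rule in the theorem removes the need to know $\rho$ in advance.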

    Let $h \in \mathbb{R}^{m \times n}$ be a blurring operator, $z \in \mathbb{R}^{n}$ the original image and $w \in \mathbb{R}^{m}$ the blurred and noisy (observed) image, with $v \in \mathbb{R}^{m}$ the additive noise. The following structure is known as an image recovery problem:

    $$w = hz + v.$$

    For solving this problem, we make use of the model of Tibshirani [43], which is known as the LASSO problem:

    $$\min_{z \in \mathbb{R}^{n}}\left\{\frac{1}{2}\|hz - w\|_{2}^{2} + k\|z\|_{1}\right\}, \tag{5.5}$$

    where k>0 is a regularization parameter. Problem (5.5) cannot be used to solve the image de-blurring directly, as the image is sparse under some gradient transformation. In order to reconstruct the images from their noisy, blurry and/or incomplete measurements, Guo et al. [26] proposed a novel regularization model for reproducing high-quality images using fewer measurements than the state-of-the-art methods. We therefore use the following model:

    $$\min_{z \in \mathbb{R}^{n}}\left\{\frac{1}{2}\|hz - w\|_{2}^{2} + k\|\nabla z\|_{1}\right\}. \tag{5.6}$$

    The Richardson iteration, which is often called the Landweber method [20,21,45], is generally used as an iterative regularization method to solve (5.6). This method is defined as follows:

    $$z_{k+1} = z_{k} + \rho h^{T}(w - hz_{k}), \tag{5.7}$$

    where the step size $\rho$ is constant. To ensure convergence, the step size must satisfy $0 < \rho < \frac{2}{\epsilon_{\max}^{2}}$, where $\epsilon_{\max}$ is the largest singular value of $h$. We set $k = 0.7875$, $\mu = 0.001$, $\bar{\xi}_{n} = \frac{1}{(100n+1)^{2}}$, $\alpha_{n} = \frac{1}{2n}$ and $\beta_{n} = \frac{1}{88n+1}$. The quality of the restored images is analyzed on the following scale of signal-to-noise ratio (SNR), defined as $\mathrm{SNR} = 20\log_{10}\frac{\|z\|_{2}}{\|z - z_{n}\|_{2}}$, where $z$ and $z_{n}$ are the original and estimated images at iteration $n$, respectively. We compare the performance of the algorithms abbreviated as Theorem 5.1 with $\bar{\xi}_{n} \neq 0$, Theorem 5.1 with $\bar{\xi}_{n} = 0$, Algorithm 1 [31] and Theorem 2 of Gibali and Thong [23] on the test images (Mona Lisa and Cameraman) via the image restoration experiment for the motion blur operator.
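The Landweber update (5.7) and its step-size bound can be sketched on a toy deblurring problem (our own synthetic stand-in: $h$, $z$ and $w$ are random, not the paper's test images):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal((20, 10))            # toy "blurring" operator
z_true = rng.standard_normal(10)             # original signal
w = h @ z_true                               # noiseless observation

eps_max = np.linalg.svd(h, compute_uv=False).max()
rho = 1.0 / eps_max**2                       # satisfies 0 < rho < 2 / eps_max^2

z = np.zeros(10)
for _ in range(5000):
    z = z + rho * h.T @ (w - h @ z)          # z_{k+1} = z_k + rho * h^T (w - h z_k)

print(np.linalg.norm(z - z_true))            # small: the iterates recover z_true
```

With noise present, one would instead stop the iteration early (iterative regularization), which is precisely why (5.7) is used as a regularization method rather than run to convergence.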

    It can be observed from Figures 3 and 5 that larger SNR values indicate better restored images. We can see from Table 2, and the corresponding test images in Figures 2 and 4, that the inertial variant of the iterative algorithm in Theorem 5.1 (i.e., $\bar{\xi}_{n} \neq 0$) performs better than the non-inertial variant (i.e., $\bar{\xi}_{n} = 0$), Algorithm 1 [31] and Theorem 2 of Gibali and Thong [23].

    Figure 2.  (a) Original image (182 × 276) with a motion length 30 and an angle 45; (b) Observed image, degraded by motion; (c) Reconstructed image.
    Figure 3.  Comparison of (5.1), ˉξn0, (5.1), ˉξn=0 and Theorem 2 [23].
    Figure 4.  (a) Original image (256 × 256) with Gaussian blur of size 9 × 9 and standard deviation σ = 6; (b) Observed image, degraded by Gaussian blur; (c) Reconstructed image.
    Figure 5.  Comparison of (5.1), ˉξn0, (5.1), ˉξn=0 and Algorithm 2 [23].
    Table 2.  The SNR values in decibels (dB) and average per-iteration computation time of the compared algorithms.

    | Method | Mona Lisa: SNR(dB) | Mona Lisa: CPU(s) | Cameraman: SNR(dB) | Cameraman: CPU(s) |
    |---|---|---|---|---|
    | (1) Theorem 5.1, $\bar{\xi}_{n} \neq 0$ | 38.3032 | 30.1321 | 34.9918 | 20.1326 |
    | (2) Theorem 5.1, $\bar{\xi}_{n} = 0$ | 37.5156 | 26.3298 | 27.7731 | 17.0077 |
    | (3) Algorithm 1 [31] | 29.3231 | 25.4876 | 21.8794 | 15.6142 |
    | (4) Theorem 2 of Gibali and Thong [23] | 22.3231 | 23.1861 | 17.6226 | 13.0051 |


    In this paper, we have devised an accelerated Visco-Cesàro means Tseng-type splitting method for computing a common solution of a monotone inclusion problem and the FPP associated with an infinite family of η-demimetric operators in Hilbert spaces. We have incorporated an appropriate numerical example for the viability of the iterative algorithm. We have also included some theoretical as well as applied instances of the main result in Section 3 that can provide important future research directions in these theories.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Funding was received from the Thailand Science Research and Innovation (TSRI) and Fundamental Fund of Rajamangala University of Technology Rattanakosin with funding under contract No. FRB6620/2566. Moreover, this research has received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation [grant number B39G660025].

    The authors would like to thank the Associate Editor and the anonymous referees for their valuable comments and suggestions. The author Yasir Arfat was supported by the Petchra Pra Jom Klao Ph.D Research Scholarship from King Mongkut's University of Technology Thonburi, Thailand(Grant No.16/2562). The corresponding author Supak Phiangsungnoen acknowledge the financial support provided by Institute of Research and Development, Rajamangala University of Technology Rattanakosin (Fundamental Fund Project).

    The authors declare that they have no competing interests.



    [1] F. Alvarez, H. Attouch, An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Analysis, 9 (2001), 3–11. http://dx.doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [2] Y. Arfat, P. Kumam, M. Khan, P. Ngiamsunthorn, A. Kaewkhao, A parallel hybrid accelerated extragradient algorithm for pseudomonotone equilibrium, fixed point, and split null point problems, Adv. Differ. Equ., 2021 (2021), 364. http://dx.doi.org/10.1186/s13662-021-03518-2 doi: 10.1186/s13662-021-03518-2
    [3] Y. Arfat, P. Kumam, M. Khan, P. Ngiamsunthorn, Parallel shrinking inertial extragradient approximants for pseudomonotone equilibrium, fixed point and generalized split null point problem, Ricerche Mat., in press. http://dx.doi.org/10.1007/s11587-021-00647-4
    [4] Y. Arfat, P. Kumam, M. Khan, P. Ngiamsunthorn, Shrinking approximants for fixed point problem and generalized split null point problem in Hilbert spaces, Optim. Lett., 16 (2022), 1895–1913. http://dx.doi.org/10.1007/s11590-021-01810-4 doi: 10.1007/s11590-021-01810-4
    [5] Y. Arfat, P. Kumam, M. Khan, O. Iyiola, Multi-inertial parallel hybrid projection algorithm for generalized split null point problems, J. Appl. Math. Comput., 68 (2022), 3179–3198. http://dx.doi.org/10.1007/s12190-021-01660-4 doi: 10.1007/s12190-021-01660-4
    [6] Y. Arfat, P. Kumam, M. Khan, P. Ngiamsunthorn, An inertial extragradient algorithm for equilibrium and generalized split null point problems, Adv. Comput. Math., 48 (2022), 53. http://dx.doi.org/10.1007/s10444-021-09920-4 doi: 10.1007/s10444-021-09920-4
    [7] Y. Arfat, O. Iyiola, M. Khan, P. Kumam, W. Kumam, K. Sitthithakerngkiet, Convergence analysis of the shrinking approximants for fixed point problem and generalized split common null point problem, J. Inequal. Appl., 2022 (2022), 67. http://dx.doi.org/10.1186/s13660-022-02803-2 doi: 10.1186/s13660-022-02803-2
    [8] Y. Arfat, M. Khan, P. Kumam, W. Kumam, K. Sitthithakerngkiet, Iterative solutions via some variants of extragradient approximants in Hilbert spaces, AIMS Mathematics, 7 (2022), 13910–13926. http://dx.doi.org/10.3934/math.2022768 doi: 10.3934/math.2022768
    [9] Y. Arfat, P. Kumam, M. Khan, P. Ngiamsunthorn, An accelerated variant of the projection based parallel hybrid algorithm for split null point problems, Topol. Method. Nonl. Anal., 60 (2022), 457–474. http://dx.doi.org/10.12775/TMNA.2022.015 doi: 10.12775/TMNA.2022.015
    [10] Y. Arfat, P. Kumam, S. Phiangsungnoen, M. Khan, H. Fukhar-ud-din, An inertially constructed projection based hybrid algorithm for fixed point problem and split null point problems, AIMS Mathematics, 8 (2023), 6590–6608. http://dx.doi.org/10.3934/math.2023333 doi: 10.3934/math.2023333
    [11] Y. Arfat, P. Kumam, M. Khan, Y. Cho, A hybrid steepest-descent algorithm for convex minimization over the fixed point set of multivalued mappings, Carpathian J. Math., 39 (2023), 303–314. http://dx.doi.org/10.37193/CJM.2023.01.21 doi: 10.37193/CJM.2023.01.21
    [12] H. Bauschke, P. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, Cham: Springer, 2017. http://dx.doi.org/10.1007/978-3-319-48311-5
    [13] J. Baillon, Un théorème de type ergodique pour les contractions non linéaires dans un espace de Hilbert, C. R. Acad. Sci. Paris Ser. A-B, 280 (1975), 1511–1541.
    [14] H. Brézis, Opérateurs maximaux monotones, North-Holland Math. Stud., 5 (1973), 19–51.
    [15] R. Bruck, On the convex approximation property and the asymptotic behavior of nonlinear contractions in Banach spaces, Israel J. Math., 38 (1981), 304–314. http://dx.doi.org/10.1007/BF02762776 doi: 10.1007/BF02762776
    [16] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algor., 8 (1994), 221–239. http://dx.doi.org/10.1007/BF02142692 doi: 10.1007/BF02142692
    [17] P. Combettes, The convex feasibility problem in image recovery, Adv. Imag. Elect. Phys., 95 (1996), 155–270. http://dx.doi.org/10.1016/S1076-5670(08)70157-5 doi: 10.1016/S1076-5670(08)70157-5
    [18] J. Deepho, J. Martínez-Moreno, K. Sitthithakerngkiet, P. Kumam, Convergence analysis of hybrid projection with Cesàro mean method for the split equilibrium and general system of finite variational inequalities, J. Comput. Appl. Math., 318 (2017), 658–673. http://dx.doi.org/10.1016/j.cam.2015.10.006 doi: 10.1016/j.cam.2015.10.006
    [19] J. Douglas, H. Rachford, On the numerical solution of the heat conduction problem in two and three space variables, Trans. Amer. Math. Soc., 82 (1956), 421–439. http://dx.doi.org/10.2307/1993056 doi: 10.2307/1993056
    [20] J. Duchi, S. Shalev-Shwartz, Y. Singer, T. Chandra, Efficient projections onto the l1-ball for learning in high dimensions, Proceedings of the 25th International Conference on Machine Learning, 2008, 272–279. http://dx.doi.org/10.1145/1390156.1390191 doi: 10.1145/1390156.1390191
    [21] H. Engl, M. Hanke, A. Neubauer, Regularization of inverse problems, Dordrecht: Kluwer Academic Publishers, 2000.
    [22] A. Genel, J. Lindenstrauss, An example concerning fixed points, Israel J. Math., 22 (1975), 81–86. http://dx.doi.org/10.1007/BF02757276 doi: 10.1007/BF02757276
    [23] A. Gibali, D. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 49. http://dx.doi.org/10.1007/s10092-018-0292-1 doi: 10.1007/s10092-018-0292-1
    [24] A. Gibali, A new split inverse problem and an application to least intensity feasible solutions, Online Journal Pure and Applied Functional Analysis, 2 (2017), 243–258.
    [25] A. Gibali, S. Reich, R. Zalas, Outer approximation methods for solving variational inequalities in Hilbert space, Optimization, 66 (2017), 417–437. http://dx.doi.org/10.1080/02331934.2016.1271800 doi: 10.1080/02331934.2016.1271800
    [26] W. Guo, J. Qin, W. Yin, A new detail-preserving regularization scheme, SIAM J. Imaging Sci., 7 (2014), 1309–1334. http://dx.doi.org/10.1137/120904263 doi: 10.1137/120904263
    [27] B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc., 73 (1967), 957–961.
    [28] S. Harisa, M. Khan, F. Mumtaz, N. Farid, A. Morsy, K. Nisar, et al., Shrinking Cesàro means method for the split equilibrium and fixed point problems in Hilbert spaces, Adv. Differ. Equ., 2020 (2020), 345. http://dx.doi.org/10.1186/s13662-020-02800-z doi: 10.1186/s13662-020-02800-z
    [29] N. Hirano, W. Takahashi, Nonlinear ergodic theorems for nonexpansive mappings in Hilbert spaces, Kodai Math. J., 2 (1979), 11–25. http://dx.doi.org/10.2996/kmj/1138035962 doi: 10.2996/kmj/1138035962
    [30] O. Iyiola, Y. Shehu, Convergence results of two-step inertial proximal point algorithm, Appl. Numer. Math., 182 (2022), 57–75. http://dx.doi.org/10.1016/j.apnum.2022.07.013 doi: 10.1016/j.apnum.2022.07.013
    [31] N. Kaewyong, K. Sitthithakerngkiet, Modified Tseng's method with inertial viscosity type for solving inclusion problems and its application to image restoration problems, Mathematics, 9 (2021), 1104. http://dx.doi.org/10.3390/math9101104 doi: 10.3390/math9101104
    [32] P. Maingé, A hybrid extragradient-viscosity method for monotone operators and fixed point problems, SIAM J. Control Optim., 47 (2008), 1499–1515. http://dx.doi.org/10.1137/060675319 doi: 10.1137/060675319
    [33] A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl., 241 (2000), 46–55. http://dx.doi.org/10.1006/jmaa.1999.6615 doi: 10.1006/jmaa.1999.6615
    [34] P. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. http://dx.doi.org/10.1137/0716071 doi: 10.1137/0716071
    [35] G. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl., 72 (1979), 383–390. http://dx.doi.org/10.1016/0022-247X(79)90234-8 doi: 10.1016/0022-247X(79)90234-8
    [36] B. Polyak, Introduction to optimization, New York: Optimization Software, 1987.
    [37] R. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc., 149 (1970), 75–88. http://dx.doi.org/10.2307/1995660 doi: 10.2307/1995660
    [38] W. Takahashi, The split common fixed point problem and the shrinking projection method in Banach spaces, J. Convex Anal., 24 (2017), 1015–1028.
    [39] W. Takahashi, Strong convergence theorem for a finite family of demimetric mappings with variational inequality problems in a Hilbert space, Japan J. Indust. Appl. Math., 34 (2017), 41–57. http://dx.doi.org/10.1007/s13160-017-0237-0 doi: 10.1007/s13160-017-0237-0
    [40] W. Takahashi, Weak and strong convergence theorems for new demimetric mappings and the split common fixed point problem in Banach spaces, Numer. Func. Anal. Opt., 39 (2018), 1011–1033. http://dx.doi.org/10.1080/01630563.2018.1466803 doi: 10.1080/01630563.2018.1466803
    [41] W. Takahashi, K. Shimoji, Convergence theorems for nonexpansive mappings and feasibility problems, Math. Comput. Model., 32 (2000), 1463–1471. http://dx.doi.org/10.1016/S0895-7177(00)00218-1 doi: 10.1016/S0895-7177(00)00218-1
    [42] W. Takahashi, C. Wen, J. Yao, The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space, Fixed Point Theory, 19 (2018), 407–420. http://dx.doi.org/10.24193/fpt-ro.2018.1.32 doi: 10.24193/fpt-ro.2018.1.32
    [43] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. B, 58 (1996), 267–288. http://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x doi: 10.1111/j.2517-6161.1996.tb02080.x
    [44] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. http://dx.doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [45] C. Vogel, Computational methods for inverse problems, Philadelphia: Society for Industrial and Applied Mathematics, 2002.
    [46] H. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. http://dx.doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
  • This article has been cited by:

    1. Wachirapong Jirakipuwapat, Kamonrat Sombut, Petcharaporn Yodjai, Thidaporn Seangwattana, Enhancing image inpainting with deep learning segmentation and exemplar-based inpainting, Math. Method. Appl. Sci., 2025. http://dx.doi.org/10.1002/mma.10827
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)