Research article

An inertially constructed projection based hybrid algorithm for fixed point and split null point problems

  • In this paper, we posit a framework for the investigation of fixed point problems (FPP) involving an infinite family of k-demicontractive operators and split common null point problems (SCNPP) in Hilbert spaces. We employ an accelerated variant of the hybrid shrinking projection algorithm for the construction of a common solution of the FPP and the SCNPP. Theoretical results comprise strong convergence characteristics under suitable sets of constraints; numerical results are also established for the underlying algorithm. Applications to signal processing as well as various abstract problems are incorporated.

    Citation: Yasir Arfat, Poom Kumam, Supak Phiangsungnoen, Muhammad Aqeel Ahmad Khan, Hafiz Fukhar-ud-din. An inertially constructed projection based hybrid algorithm for fixed point and split null point problems[J]. AIMS Mathematics, 2023, 8(3): 6590-6608. doi: 10.3934/math.2023333




    The convex optimization problem (COP) is considered very important in the current literature, as it covers a diverse range of problems with possible applications in signal processing, image processing and machine learning. As a consequence, the tremendous progress in studying the COP has led to the emergence of a theory of convex optimization and a useful interface linking various branches of the sciences.

    Monotone operator theory is a prominent framework for various nonlinear problems and is closely related to the theory of convex optimization. One of the fundamental themes in monotone operator theory is to compute zeros of (maximal) monotone operators. The importance of this concept stems from the fact that the subdifferential operator associated with a proper, convex and lower semicontinuous (PCLS) function is not only a maximal monotone operator but its zeros also solve the corresponding convex minimization problem. It is remarked that many practical phenomena can be reformulated as zero point problems. This formalism includes variational inequalities, evolution equations, complementarity problems and inclusions [12].

    The class of split feasibility problems (SFP) has an extraordinary utility and broad applicability in medical image reconstruction, signal processing and computerized tomography [15,17,18,21]. Some interesting and crucial results regarding the SFP with areas of feasible applications are established in [16,19,20]. The first prototype strategies for computing the optimal solution of the split common null point problem (SCNPP) can be found in [16]. Since then, different variants of these strategies have been proposed and analyzed for SCNPP and other instances of SFP [19,20,29].

    Another useful formalism for modelling various nonlinear phenomena is the fixed point problem (FPP) of the operator under consideration. Most of the problems in diverse areas such as mathematical economics, variational inequality theory, control theory and game theory can be reformulated in terms of an FPP. It is remarked that various nonlinear fixed point operators play an equally important role in the COP. In 2015, Takahashi et al. [31] investigated a unified formalism of the null point problem and the FPP in Hilbert spaces. Since then, FPP associated with different nonlinear operators have been jointly investigated with (split common) null point problems in this domain. It is therefore natural to investigate the FPP associated with an infinite family of operators jointly with the SCNPP in Hilbert spaces.

    A variety of strategies combining iterative optimization algorithms and fixed point algorithms have been introduced and analyzed to construct an optimal solution of the SCNPP and FPP. Each strategy has certain shortcomings in terms of convergence characteristic and/or rate of convergence. The hybrid shrinking projection algorithm is a well-known strategy for the strong convergence characteristic whereas the inertial extrapolation technique, essentially due to [27] and see also [1,2,3,4,5,6,7,8,9,10,11,32], enhances the rate of convergence of the algorithm under consideration.

    Our main contributions in this ongoing fruitful research direction are as follows:

    (1) We posit a framework to jointly investigate SCNPP and FPP associated with an infinite family of operators in Hilbert spaces. For the case of an infinite family of fixed point operators, we exploit the construction of an auxiliary operator defined in [28,34];

    (2) We employ an algorithmic approach combining the hybrid shrinking projection algorithm with the inertial extrapolation technique to construct the common optimal solution of the problems as mentioned in item (1);

    (3) We establish the strong convergence analysis of the proposed algorithm by employing the suitable constraints in accordance with the standard techniques and tools in the current literature;

    (4) We posit different frameworks, as an application of the framework mentioned in item (1), for various instances of SFP in Hilbert spaces;

    (5) Last but not least, we incorporate an appropriate numerical example for the demonstration of the framework as well as the applicability of the proposed algorithm for signal recovery problem.

    Throughout the rest of the sections, the triplet $(\Xi_1, \langle\cdot,\cdot\rangle, \|\cdot\|)$ indicates a real Hilbert space with the conventional notations of the inner product and the norm, and $A_1 \subseteq \Xi_1 \times \Xi_1$ denotes a set-valued operator with the usual definitions of $\mathrm{dom}(A_1)$, $\mathrm{gra}(A_1)$ and $\mathrm{zer}(A_1)$. We use $\rightarrow$ (resp. $\rightharpoonup$) to represent the symbol for strong convergence (resp. weak convergence). The sets of real and natural numbers are symbolized as $\mathbb{R}$ and $\mathbb{N}$, respectively.

    Recall from the celebrated monograph [12] that the set-valued operator $A_1$ is called monotone if $\langle r-s, u-v\rangle \geq 0$ for all $(r,u),(s,v) \in \mathrm{gra}(A_1)$. In addition, $A_1$ is coined a maximal monotone operator provided that the graph of $A_1$ is not properly contained in the graph of any other monotone operator. Let $m>0$; then the resolvent operator of $A_1$ is defined as $J^{A_1}_{m} = (\mathrm{Id} + mA_1)^{-1}$, where $\mathrm{Id}$ denotes the identity operator. In this connection, $J^{A_1}_{m}$ is a well-defined, single-valued and firmly nonexpansive operator.
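As a concrete illustration (ours, not from the paper's text), the resolvent of the scalar maximal monotone operator $A(p) = 2p$ has the closed form $J_m(p) = p/(1+2m)$, and firm nonexpansivity can be checked numerically:

```python
# Illustrative sketch: resolvent of the scalar monotone operator A(p) = slope * p.
# Solving y + m * slope * y = x for y gives the closed form below.
def resolvent(slope, m, x):
    return x / (1.0 + m * slope)

# Firm nonexpansivity: |Ju - Jv|^2 <= <Ju - Jv, u - v> for any pair u, v.
m, u, v = 0.5, 3.0, -1.0
Ju, Jv = resolvent(2.0, m, u), resolvent(2.0, m, v)
assert (Ju - Jv) ** 2 <= (Ju - Jv) * (u - v)
```

This scalar operator reappears in the numerical example of Section 5.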

    Let $T: H \rightarrow H$ be an operator defined on a nonempty subset $H$ of $\Xi_1$. We use $\mathrm{Fix}(T) = \{p \in H \,|\, p = Tp\}$ to denote the set of fixed points of the operator $T$. The metric projection operator $\Pi_H : \Xi_1 \rightarrow H$ associated with a nonempty closed convex subset $H$ of $\Xi_1$ is defined as $\Pi_H(u) = \arg\min_{v \in H}\|u - v\|$. It is well known that the operator $\Pi_H$ is firmly nonexpansive and satisfies $\langle u - \Pi_H u, \Pi_H u - v\rangle \geq 0$ for all $u \in \Xi_1$, $v \in H$. Recall that the operator $T$ is known as $k$-demicontractive [24] provided that there exists $k \in [0,1)$ such that

    $\|Tq - p\|^2 \leq \|q - p\|^2 + k\|q - Tq\|^2, \quad \forall q \in H,\ p \in \mathrm{Fix}(T).$

    The class of k-demicontractive operators has been studied extensively in various instances of fixed point problems in Hilbert spaces. However, we are concerned with the fixed point problem of an infinite family of k-demicontractive operators in Hilbert spaces via the following construction of auxiliary operator Sk:

    $\begin{aligned}
    Q_{k,k+1} &= \mathrm{Id},\\
    Q_{k,k} &= \beta_k T'_k Q_{k,k+1} + (1-\beta_k)\mathrm{Id},\\
    Q_{k,k-1} &= \beta_{k-1} T'_{k-1} Q_{k,k} + (1-\beta_{k-1})\mathrm{Id},\\
    &\;\;\vdots\\
    Q_{k,m} &= \beta_m T'_m Q_{k,m+1} + (1-\beta_m)\mathrm{Id},\\
    &\;\;\vdots\\
    Q_{k,2} &= \beta_2 T'_2 Q_{k,3} + (1-\beta_2)\mathrm{Id},\\
    S_k = Q_{k,1} &= \beta_1 T'_1 Q_{k,2} + (1-\beta_1)\mathrm{Id},
    \end{aligned}$

    where $0 \leq \beta_m \leq 1$ and $T'_m x = \alpha x + (1-\alpha)T_m x$ for all $x \in H$, with $T_m$ being a $k$-demicontractive operator and $\alpha \in [k,1)$. It is well known in the context of the operator $S_k$ that each $T'_m$ is nonexpansive and the limit $\lim_{k\rightarrow\infty}Q_{k,m}$ exists. Moreover,

    $Sx = \lim_{k\rightarrow\infty}S_k x = \lim_{k\rightarrow\infty}Q_{k,1}x, \quad \forall x \in H.$

    This implies that $\mathrm{Fix}(S) = \bigcap_{k=1}^{\infty}\mathrm{Fix}(S_k)$ [28,34].
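For intuition, the nested construction of $S_k$ can be sketched numerically for a finite $k$; the averaged maps $T'_m$ used below (scalar maps sharing the fixed point $0$) are our own illustrative choice, not from the paper:

```python
# Sketch: build S_k = Q_{k,1} from maps T'_1, ..., T'_k and weights
# beta_1, ..., beta_k, using Q_{k,k+1} = Id and
# Q_{k,m} x = beta_m * T'_m(Q_{k,m+1} x) + (1 - beta_m) * x.
def make_S_k(Ts, betas):
    def S(x):
        q = x                                 # Q_{k,k+1} = Id
        for m in range(len(Ts) - 1, -1, -1):  # m = k, k-1, ..., 1
            q = betas[m] * Ts[m](q) + (1 - betas[m]) * x
        return q
    return S

# Three hypothetical nonexpansive maps with the common fixed point 0:
S3 = make_S_k([lambda x: x / 2.0] * 3, [0.5] * 3)
assert S3(0.0) == 0.0          # the common fixed point is preserved
assert abs(S3(2.0)) < 2.0      # other points are pulled toward it
```

The loop evaluates the recursion from $Q_{k,k+1}$ inward to $Q_{k,1}$, exactly as in the display above.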

    We now finally introduce the formalism of the proposed problem.

    Let $A_1 \subseteq \Xi_1\times\Xi_1$ and $A_2 \subseteq \Xi_2\times\Xi_2$ be maximal monotone operators such that the domain of $A_1$ is a subset of $H$, and let $J^{A_1}_m$ and $J^{A_2}_m$ be the resolvents of $A_1$ and $A_2$, respectively, for $m>0$. Let $h:\Xi_1\rightarrow\Xi_2$ be a bounded linear operator and let $h^\ast$ be the adjoint operator of $h$. Let $S_k$ be the $S$-operator such that $\Gamma := \Omega \cap \mathrm{Fix}(S)$, where $\Omega := \{\hat p \in A_1^{-1}(0) : h\hat p \in A_2^{-1}(0)\}$ indicates the SCNPP [16]. We aim to find

    $\hat p \in \Gamma. \quad (2.1)$

    The following crucial results are required in the sequel:

    Lemma 2.1. [14] Let $U: H\rightarrow H$ be an operator defined on a nonempty closed convex subset $H$ of a real Hilbert space $\Xi_1$ and let $(p_k)$ be a sequence in $H$. If $p_k \rightharpoonup p$ and if $(\mathrm{Id}-U)p_k \rightarrow 0$, then $p \in \mathrm{Fix}(U)$.

    Lemma 2.2. Let $\mu,\nu \in \Xi_1$ and $\theta \in \mathbb{R}$; then

    (i) $\|\mu+\nu\|^2 \leq \|\mu\|^2 + 2\langle\nu, \mu+\nu\rangle$;

    (ii) $\|\mu-\nu\|^2 = \|\mu\|^2 - \|\nu\|^2 - 2\langle\mu-\nu, \nu\rangle$;

    (iii) $\|\theta\mu + (1-\theta)\nu\|^2 = \theta\|\mu\|^2 + (1-\theta)\|\nu\|^2 - \theta(1-\theta)\|\mu-\nu\|^2$.

    Lemma 2.3. [34] Let $H$ be a nonempty closed and convex subset of a real Hilbert space $\Xi_1$ and let $T: H\rightarrow H$ be a $k$-demicontractive operator with $k \in [0,1)$. Let $\alpha \in [k,1)$ and set $T' = \alpha\,\mathrm{Id} + (1-\alpha)T$; then $T': H\rightarrow H$ is a nonexpansive operator such that $\mathrm{Fix}(T') = \mathrm{Fix}(T)$.

    Lemma 2.4. [26] Let $H$ be a nonempty closed convex subset of a real Hilbert space $\Xi_1$. For every $p,q,t \in \Xi_1$ and $\gamma \in \mathbb{R}$, the set

    $D = \{v \in H : \|q-v\|^2 \leq \|p-v\|^2 + \langle t, v\rangle + \gamma\}$

    is closed and convex.

    Lemma 2.5. [34] Let $H$ be a nonempty closed and convex subset of a real Hilbert space $\Xi_1$ and let $(T'_m)$ be a sequence of nonexpansive operators such that $\bigcap_{k=1}^{\infty}\mathrm{Fix}(T'_k) \neq \emptyset$ and $0 \leq \beta_m \leq b < 1$. Then for a bounded subset $K$ of $H$, we have

    $\lim_{k\rightarrow\infty}\sup_{x\in K}\|Sx - S_k x\| = 0.$

    We start with the architecture of the algorithm for the construction of an optimal solution of the problem (2.1).

    Algorithm 1 Inertially constructed hybrid algorithm (Algo.1)
    Initialization: Choose $p_0, p_1 \in H$ arbitrarily, set $k := 1$, and choose nonincreasing sequences $\alpha_k, \beta_k \in (0,1)$, $\theta_k \in [0,1)$, $m_k \in (0,\infty)$ and $\delta \in \big(0, \tfrac{2}{\|h\|^2}\big)$, where $\|h\|^2 = L$ is the spectral radius of $h^\ast h$. Choose the inertial parameter
    $\theta_k = \begin{cases}\min\big\{\frac{\nu_k}{\|p_k - p_{k-1}\|}, \theta\big\} & \text{if } p_k \neq p_{k-1};\\ \theta & \text{otherwise},\end{cases}$
    where $(\nu_k)$ is a positive sequence such that $\sum_{k=1}^{\infty}\nu_k < \infty$ and $\theta \in [0,1)$.
    Iterative Steps: Given $p_k \in \Xi_1$, calculate $e_k$, $\bar w_k$ and $x_k$ as follows:
    Step 1. Compute
    $\begin{cases} e_k = p_k + \theta_k(p_k - p_{k-1});\\ \bar w_k = (1-\alpha_k)e_k + \alpha_k S_k e_k;\\ x_k = (1-\beta_k)\bar w_k + \beta_k\big(J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k} - \mathrm{Id})h\bar w_k)\big). \end{cases}$
    The algorithm aborts if $x_k = \bar w_k = e_k = p_k$, and $p_k$ is the required approximation. Otherwise,
    Step 2. Compute
    $H_{k+1} = \{z \in H_k : \|x_k - z\|^2 \leq \|p_k - z\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - z, p_k - p_{k-1}\rangle\}, \quad p_{k+1} = \Pi_{H_{k+1}}p_1, \quad k \geq 1,$
    set $k := k+1$ and go back to Step 1.
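A minimal sketch of Step 1's forward evaluations (our own illustration: the operators are passed in as callables, and the concrete scalar choices in the usage lines are hypothetical stand-ins consistent with Example 5.1 below):

```python
# Sketch of Step 1 of Algorithm 1: inertial extrapolation, fixed-point step,
# and the resolvent-based splitting step, for scalar data.
def step1(p_prev, p_curr, theta_k, alpha_k, beta_k, S_k, J1, J2, h, h_adj, delta):
    e = p_curr + theta_k * (p_curr - p_prev)        # e_k
    w = (1 - alpha_k) * e + alpha_k * S_k(e)        # w_k
    hw = h(w)
    x = (1 - beta_k) * w + beta_k * J1(w + delta * h_adj(J2(hw) - hw))  # x_k
    return e, w, x

# Hypothetical scalar instance: h(p) = 3p, resolvents of A1 p = 2p and
# A2 p = 3p with m = 0.02, and delta = 1/8 < 2/||h||^2:
e, w, x = step1(2.0, 1.0, 0.5, 0.5, 0.5,
                lambda t: t / 2.0,        # S_k
                lambda t: t / 1.04,       # J^{A1}_m, m = 0.02
                lambda t: t / 1.06,       # J^{A2}_m, m = 0.02
                lambda t: 3.0 * t, lambda t: 3.0 * t, 0.125)
assert abs(x) < 1.0                       # the iterate moves toward 0
```

Step 2 then intersects the half-space $H_{k+1}$ with the previous one and projects $p_1$ onto the intersection.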

    Theorem 3.1. The sequence $(p_k)$ generated by Algorithm 1, under the following control conditions:

    (C1) $\sum_{k=1}^{\infty}\theta_k\|p_k - p_{k-1}\| < \infty$;

    (C2) $0 < a \leq \liminf_{k\rightarrow\infty}\alpha_k \leq \limsup_{k\rightarrow\infty}\alpha_k \leq a' < 1$;

    (C3) $\liminf_{k\rightarrow\infty}\beta_k > 0$;

    (C4) $\liminf_{k\rightarrow\infty}m_k > 0$;

    converges strongly to an element $p^\ast \in \Gamma$.

    Proof. Step 1. Algorithm 1 is well-defined.

    Recall that the set $\Gamma$ is closed and convex, whereas the closedness and convexity of $H_{k+1}$, for each $k \geq 1$, follow from Lemma 2.4. Let $p^\ast \in \Gamma$; then, recalling Algorithm 1, we have

    $\|e_k - p^\ast\|^2 = \|(p_k - p^\ast) + \theta_k(p_k - p_{k-1})\|^2 = \|p_k - p^\ast\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - p^\ast, p_k - p_{k-1}\rangle. \quad (3.1)$

    Also, from Algorithm 1 and Lemma 2.3, we have

    $\begin{aligned}\|\bar w_k - p^\ast\|^2 &= \|(1-\alpha_k)e_k + \alpha_k S_k e_k - p^\ast\|^2\\ &= (1-\alpha_k)\|e_k - p^\ast\|^2 + \alpha_k\|S_k e_k - p^\ast\|^2 - \alpha_k(1-\alpha_k)\|(\mathrm{Id}-S_k)e_k\|^2\\ &\leq (1-\alpha_k)\|e_k - p^\ast\|^2 + \alpha_k\|e_k - p^\ast\|^2 - \alpha_k(1-\alpha_k)\|(\mathrm{Id}-S_k)e_k\|^2\\ &= \|e_k - p^\ast\|^2 - \alpha_k(1-\alpha_k)\|(\mathrm{Id}-S_k)e_k\|^2\\ &\leq \|p_k - p^\ast\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - p^\ast, p_k - p_{k-1}\rangle.\end{aligned} \quad (3.2)$

    Further, we obtain

    $\begin{aligned}\|x_k - p^\ast\|^2 &= \|(1-\beta_k)(\bar w_k - p^\ast) + \beta_k(J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k) - p^\ast)\|^2\\ &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\|J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k) - p^\ast\|^2.\end{aligned} \quad (3.3)$

    Recalling the nonexpansivity of $J^{A_1}_{m_k}$, we obtain

    $\begin{aligned}\|J^{A_1}_{m_k}(\bar w_k &+ \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k) - J^{A_1}_{m_k}p^\ast\|^2 \leq \|\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k - p^\ast\|^2\\ &\leq \|\bar w_k - p^\ast\|^2 + \delta^2\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 + 2\delta\langle \bar w_k - p^\ast, h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\rangle\\ &\leq \|\bar w_k - p^\ast\|^2 + \delta^2\|h\|^2\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 + 2\delta\langle h\bar w_k - hp^\ast, (J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\rangle.\end{aligned} \quad (3.4)$

    Denote $\lambda_k = 2\delta\langle h\bar w_k - hp^\ast, (J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\rangle$; recalling the firm nonexpansivity of $J^{A_2}_{m_k}$, we get

    $\begin{aligned}\lambda_k &= 2\delta\langle h\bar w_k - hp^\ast + (J^{A_2}_{m_k}(h\bar w_k) - h\bar w_k) - (J^{A_2}_{m_k}(h\bar w_k) - h\bar w_k),\ J^{A_2}_{m_k}(h\bar w_k) - h\bar w_k\rangle\\ &= 2\delta\big(\langle J^{A_2}_{m_k}(h\bar w_k) - hp^\ast,\ J^{A_2}_{m_k}(h\bar w_k) - h\bar w_k\rangle - \|J^{A_2}_{m_k}(h\bar w_k) - h\bar w_k\|^2\big)\\ &\leq -2\delta\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2.\end{aligned} \quad (3.5)$

    The estimate (3.3), after recalling the estimates (3.4) and (3.5), implies

    $\begin{aligned}\|x_k - p^\ast\|^2 &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\big(\|\bar w_k - p^\ast\|^2 + \delta^2\|h\|^2\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 - 2\delta\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\big)\\ &= (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\big(\|\bar w_k - p^\ast\|^2 - \delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\big)\\ &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\|\bar w_k - p^\ast\|^2\\ &\leq \|p_k - p^\ast\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - p^\ast, p_k - p_{k-1}\rangle.\end{aligned} \quad (3.6)$

    The estimate (3.6) indicates the inclusion $\Gamma \subseteq H_{k+1}$. Summarizing the above facts, we infer that Algorithm 1 is well-defined.

    Step 2. The limit $\lim_{k\rightarrow\infty}\|p_k - p_1\|$ exists.

    Note that $\|p_{k+1} - p_1\| \leq \|p - p_1\|$ for all $p \in H_{k+1}$, by employing the fact that $p_{k+1} = \Pi_{H_{k+1}}p_1$. This infers that $\|p_{k+1} - p_1\| \leq \|p - p_1\|$ for all $p \in \Gamma \subseteq H_{k+1}$, and consequently establishes the boundedness of $(\|p_k - p_1\|)$. Also, from $p_k = \Pi_{H_k}p_1$ and $p_{k+1} \in H_{k+1} \subseteq H_k$, we have

    $\|p_k - p_1\| \leq \|p_{k+1} - p_1\|.$

    The above estimate infers that the sequence $(\|p_k - p_1\|)$ is nondecreasing; therefore,

    $\lim_{k\rightarrow\infty}\|p_k - p_1\| \ \text{exists}. \quad (3.7)$

    Step 3. Prove that $q \in \Gamma$.

    The following crucial estimates are required to prove the claim:

    $\begin{aligned}\|p_{k+1} - p_k\|^2 &= \|p_{k+1} - p_1 + p_1 - p_k\|^2\\ &= \|p_{k+1} - p_1\|^2 + \|p_k - p_1\|^2 - 2\langle p_k - p_1, p_{k+1} - p_1\rangle\\ &= \|p_{k+1} - p_1\|^2 + \|p_k - p_1\|^2 - 2\langle p_k - p_1, p_{k+1} - p_k + p_k - p_1\rangle\\ &= \|p_{k+1} - p_1\|^2 - \|p_k - p_1\|^2 - 2\langle p_k - p_1, p_{k+1} - p_k\rangle\\ &\leq \|p_{k+1} - p_1\|^2 - \|p_k - p_1\|^2.\end{aligned}$

    By employing the limit superior and recalling the estimate (3.7), the above estimate implies that $\limsup_{k\rightarrow\infty}\|p_{k+1} - p_k\|^2 = 0$. That is,

    $\lim_{k\rightarrow\infty}\|p_{k+1} - p_k\| = 0. \quad (3.8)$

    Moreover, the condition (C1) yields

    $\lim_{k\rightarrow\infty}\|e_k - p_k\| = \lim_{k\rightarrow\infty}\theta_k\|p_k - p_{k-1}\| = 0. \quad (3.9)$

    As an easy inference of the estimates (3.8) and (3.9), we get

    $\lim_{k\rightarrow\infty}\|e_k - p_{k+1}\| = 0. \quad (3.10)$

    Since $p_{k+1} \in H_{k+1}$, we have

    $\|x_k - p_{k+1}\|^2 \leq \|p_k - p_{k+1}\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - p_{k+1}, p_k - p_{k-1}\rangle.$

    Recalling the estimate (3.8) and the condition (C1), the above estimate implies that

    $\lim_{k\rightarrow\infty}\|x_k - p_{k+1}\| = 0. \quad (3.11)$

    Recalling the estimates (3.8), (3.11) and the following triangle inequality:

    $\|x_k - p_k\| \leq \|x_k - p_{k+1}\| + \|p_{k+1} - p_k\|,$

    we get

    $\lim_{k\rightarrow\infty}\|x_k - p_k\| = 0. \quad (3.12)$

    Consider the estimate (3.6) in the following variation:

    $a(1-a')\|(\mathrm{Id}-S_k)e_k\|^2 \leq (\|p_k - p^\ast\| + \|x_k - p^\ast\|)\|p_k - x_k\| + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\|p_k - p^\ast\|\|p_k - p_{k-1}\|.$

    Letting $k\rightarrow\infty$ and recalling the conditions (C1)–(C2) and the estimate (3.12), we have

    $\lim_{k\rightarrow\infty}\|(\mathrm{Id}-S_k)e_k\| = 0. \quad (3.13)$

    The estimate (3.13) also implies that

    $\lim_{k\rightarrow\infty}\|\bar w_k - e_k\| = \lim_{k\rightarrow\infty}\alpha_k\|(\mathrm{Id}-S_k)e_k\| = 0. \quad (3.14)$

    Recalling the estimates (3.8), (3.14) and the following triangle inequality:

    $\|\bar w_k - p_k\| \leq \|\bar w_k - p_{k+1}\| + \|p_{k+1} - p_k\|,$

    we get

    $\lim_{k\rightarrow\infty}\|\bar w_k - p_k\| = 0. \quad (3.15)$

    Now, recalling the estimates (3.4), (3.5) and Lemma 2.2, we have

    $\begin{aligned}\|x_k - p^\ast\|^2 &= \|(1-\beta_k)\bar w_k + \beta_k(J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k)) - p^\ast\|^2\\ &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\big(\|\bar w_k - p^\ast\|^2 - \delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\big)\\ &= \|\bar w_k - p^\ast\|^2 - \beta_k\delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\\ &\leq \|e_k - p^\ast\|^2 - \beta_k\delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\\ &\leq \|p_k - p^\ast\|^2 + 2\theta_k\langle p_k - p_{k-1}, e_k - p^\ast\rangle - \beta_k\delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2.\end{aligned} \quad (3.16)$

    Rearranging the estimate (3.16), we have

    $\begin{aligned}\beta_k\delta(2-\delta\|h\|^2)\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 &\leq \|p_k - p^\ast\|^2 - \|x_k - p^\ast\|^2 + 2\theta_k\langle p_k - p_{k-1}, e_k - p^\ast\rangle\\ &\leq (\|p_k - p^\ast\| + \|e_k - p^\ast\|)\|p_k - e_k\| + 2\theta_k\langle p_k - p_{k-1}, e_k - p^\ast\rangle.\end{aligned} \quad (3.17)$

    Now, recalling the conditions (C1), (C3), the estimate (3.9) and $\delta \in (0, \tfrac{2}{\|h\|^2})$, the estimate (3.17) implies that

    $\lim_{k\rightarrow\infty}\|(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\| = 0. \quad (3.18)$

    Recalling the estimates (3.4) and (3.5), we obtain

    $\|J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k) - J^{A_1}_{m_k}p^\ast\|^2 \leq \|\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k - p^\ast\|^2 \leq \|\bar w_k - p^\ast\|^2. \quad (3.19)$

    Denote $\xi_k = J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k)$; recalling the estimate (3.19) and the firm nonexpansivity of $J^{A_1}_{m_k}$, it follows that

    $\begin{aligned}\|\xi_k - p^\ast\|^2 &= \|J^{A_1}_{m_k}(\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k) - J^{A_1}_{m_k}p^\ast\|^2\\ &\leq \langle \xi_k - p^\ast,\ \bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k - p^\ast\rangle\\ &= \tfrac12\big(\|\xi_k - p^\ast\|^2 + \|\bar w_k + \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k - \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\big)\\ &\leq \tfrac12\big(\|\xi_k - p^\ast\|^2 + \|\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k - \delta h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2\big)\\ &= \tfrac12\big(\|\xi_k - p^\ast\|^2 + \|\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k\|^2 - \delta^2\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 + 2\delta\langle\xi_k - \bar w_k, h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\rangle\big)\\ &\leq \tfrac12\big(\|\xi_k - p^\ast\|^2 + \|\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k\|^2 - \delta^2\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|^2 + 2\delta\|\xi_k - \bar w_k\|\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|\big).\end{aligned} \quad (3.20)$

    That is,

    $\|\xi_k - p^\ast\|^2 \leq \|\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k\|^2 + 2\delta\|\xi_k - \bar w_k\|\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|. \quad (3.21)$

    As a consequence, we have

    $\begin{aligned}\|x_k - p^\ast\|^2 &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\|\xi_k - p^\ast\|^2\\ &\leq (1-\beta_k)\|\bar w_k - p^\ast\|^2 + \beta_k\big(\|\bar w_k - p^\ast\|^2 - \|\xi_k - \bar w_k\|^2 + 2\delta\|\xi_k - \bar w_k\|\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|\big).\end{aligned} \quad (3.22)$

    The estimate (3.22) gives

    $\beta_k\|\xi_k - \bar w_k\|^2 \leq \|\bar w_k - p^\ast\|^2 - \|x_k - p^\ast\|^2 + 2\beta_k\delta\|\xi_k - \bar w_k\|\|h^\ast(J^{A_2}_{m_k}-\mathrm{Id})h\bar w_k\|. \quad (3.23)$

    Recalling the estimate (3.18) and the condition (C3), we have

    $\lim_{k\rightarrow\infty}\|\xi_k - \bar w_k\| = 0. \quad (3.24)$

    Reasoning as above, by recalling the definition of $(e_k)$, the condition (C1) and the estimate (3.24), we get

    $\lim_{k\rightarrow\infty}\|\xi_k - p_k\| = 0. \quad (3.25)$

    Note that the existence of a weakly convergent subsequence $(p_{k_j})$ of $(p_k)$ such that $p_{k_j} \rightharpoonup q \in \Xi_1$ as $j\rightarrow\infty$ follows from the boundedness of $(p_k)$. This also infers that $\xi_{k_j} \rightharpoonup q$ and $\bar w_{k_j} \rightharpoonup q$ as $j\rightarrow\infty$. To establish the claim, we first prove that $q \in \Omega$.

    Let $(u,v) \in \mathrm{gra}(A_1)$. Since $\xi_{k_j} = J^{A_1}_{m_{k_j}}(\bar w_{k_j} + \delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j})$, we have

    $\bar w_{k_j} + \delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j} \in \xi_{k_j} + m_{k_j}A_1(\xi_{k_j}).$

    This implies that

    $\frac{1}{m_{k_j}}(\bar w_{k_j} - \xi_{k_j}) + \frac{1}{m_{k_j}}\delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j} \in A_1(\xi_{k_j}).$

    Recalling the monotonicity of $A_1$, we have

    $\Big\langle u - \xi_{k_j},\ v - \Big(\frac{1}{m_{k_j}}(\bar w_{k_j} - \xi_{k_j}) + \frac{1}{m_{k_j}}\delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j}\Big)\Big\rangle \geq 0.$

    The above estimate infers that

    $\begin{aligned}\langle u - \xi_{k_j}, v\rangle &\geq \Big\langle u - \xi_{k_j},\ \frac{1}{m_{k_j}}(\bar w_{k_j} - \xi_{k_j}) + \frac{1}{m_{k_j}}\delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j}\Big\rangle\\ &= \Big\langle u - \xi_{k_j},\ \frac{1}{m_{k_j}}(\bar w_{k_j} - \xi_{k_j})\Big\rangle + \Big\langle u - \xi_{k_j},\ \frac{1}{m_{k_j}}\delta h^\ast(J^{A_2}_{m_{k_j}}-\mathrm{Id})h\bar w_{k_j}\Big\rangle.\end{aligned} \quad (3.26)$

    Since $\xi_{k_j} \rightharpoonup q$, we obtain $\lim_{j\rightarrow\infty}\langle u - \xi_{k_j}, v\rangle = \langle u - q, v\rangle$. Now, utilizing the estimates (3.18), (3.24), (3.26) and the condition (C4), we have $\langle u - q, v\rangle \geq 0$. By the maximal monotonicity of $A_1$, this implies that $0 \in A_1(q)$.

    Recalling the facts that (i) $h$ is a bounded linear operator, so $h\bar w_{k_j} \rightharpoonup hq$ as $j\rightarrow\infty$, and (ii) $J^{A_2}_{m_k}$ is a nonexpansive operator with $\mathrm{Id} - J^{A_2}_{m_k}$ being demiclosed at the origin (Lemma 2.1), we also obtain $0 \in A_2(hq)$. Hence $q \in \Omega$.

    We now show that $q \in \mathrm{Fix}(S)$. Observe that

    $\|e_k - Se_k\| \leq \|e_k - S_k e_k\| + \|S_k e_k - Se_k\| \leq \|e_k - S_k e_k\| + \sup_{x\in K}\|S_k x - Sx\|.$

    Recalling the estimate (3.13) and Lemma 2.5, the above estimate implies that $\lim_{k\rightarrow\infty}\|e_k - Se_k\| = 0$. This, together with the fact that $e_{k_j} \rightharpoonup q$, implies with the help of Lemma 2.1 that $q \in \mathrm{Fix}(S) = \bigcap_{k=1}^{\infty}\mathrm{Fix}(S_k)$.

    Step 4. Prove that $p_k \rightarrow p^\ast = \Pi_\Gamma p_1$.

    Let $p^\ast = \Pi_\Gamma p_1$. As $p_{k+1} = \Pi_{H_{k+1}}p_1$ and $\Gamma \subseteq H_{k+1}$, we therefore have

    $\|p_{k+1} - p_1\| \leq \|p^\ast - p_1\|.$

    Also,

    $\|p_1 - p^\ast\| \leq \|p_1 - q\| \leq \liminf_{j\rightarrow\infty}\|p_1 - p_{k_j}\| \leq \limsup_{j\rightarrow\infty}\|p_1 - p_{k_j}\| \leq \|p_1 - p^\ast\|.$

    That is,

    $\lim_{j\rightarrow\infty}\|p_{k_j} - p_1\| = \|q - p_1\| = \|p^\ast - p_1\|.$

    This implies that $\lim_{k\rightarrow\infty}p_k = q = p^\ast = \Pi_\Gamma p_1$, and hence completes the proof.

    If we take $A_2 = 0$ and $\Xi_1 = \Xi_2$, then the problem (2.1) reduces to finding a point

    $\hat p \in \Gamma := A_1^{-1}(0) \cap \mathrm{Fix}(S).$

    Hence the following result is an easy consequence of Theorem 3.1:

    Corollary 3.1. Assume that $\Gamma \neq \emptyset$. Then the sequence $(p_k)$ generated by

    $\begin{cases} e_k = p_k + \theta_k(p_k - p_{k-1});\\ \bar w_k = (1-\alpha_k)e_k + \alpha_k S_k e_k;\\ x_k = (1-\beta_k)\bar w_k + \beta_k J^{A_1}_{m_k}(\bar w_k);\\ H_{k+1} = \{z \in H_k : \|x_k - z\|^2 \leq \|p_k - z\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - z, p_k - p_{k-1}\rangle\};\\ p_{k+1} = \Pi_{H_{k+1}}p_1, \quad k \geq 1; \end{cases} \quad (3.27)$

    under the control conditions (C1)–(C4), converges strongly to an element $p^\ast = \Pi_\Gamma p_1$.

    In this section, we posit different frameworks as applications of the framework established in Section 3.

    The classical SFP, essentially due to Censor and Elfving [18], aims to find $\hat p \in \omega := H \cap h^{-1}(G) = \{\bar q \in H : h\bar q \in G\}$, where $H \subseteq \Xi_1$ and $G \subseteq \Xi_2$ are nonempty, closed and convex subsets of $\Xi_1$ and $\Xi_2$, respectively. For the implementation of Theorem 3.1, we recall the indicator function of a nonempty, closed and convex subset $H$ of $\Xi_1$:

    $\Phi_H(p) := \begin{cases} 0, & p \in H;\\ \infty, & \text{otherwise}.\end{cases}$

    It has been established that the subdifferential $\partial\Phi_H$ is maximal monotone, since the operator $\Phi_H$ is proper, convex and lower semicontinuous. Also $\partial\Phi_H(\mu) = N(\mu, H)$, where $N(\mu, H)$ is the normal cone of $H$ at $\mu$. Utilizing this fact, we conclude that the resolvent operator of $\partial\Phi_H$ is the metric projection operator of $\Xi_1$ onto $H$. We therefore arrive at the following variant of the problem (2.1):
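As a small numerical illustration (ours, not from the paper): in the scalar case with $H = [-1,1]$, the resolvent of the normal cone operator is simply clamping to the interval, independent of the resolvent parameter:

```python
# Sketch: for H = [-1, 1] in R, the metric projection (= resolvent of the
# normal cone operator, for any parameter m > 0) is clamping to the interval.
def proj_interval(u, lo=-1.0, hi=1.0):
    return min(max(u, lo), hi)

u = 2.5
pu = proj_interval(u)
# Characterizing inequality of the metric projection: <u - Pu, Pu - v> >= 0.
for v in (-1.0, 0.0, 1.0):
    assert (u - pu) * (pu - v) >= 0.0
```

The same clamping behavior is what Corollary 4.1 below exploits through the operators $\Pi_H$ and $\Pi_G$.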

    Corollary 4.1. Assume that $\Gamma = \omega \cap \mathrm{Fix}(S) \neq \emptyset$. Then the sequence $(p_k)$ generated by

    $\begin{cases} e_k = p_k + \theta_k(p_k - p_{k-1});\\ \bar w_k = (1-\alpha_k)e_k + \alpha_k S_k e_k;\\ x_k = (1-\beta_k)\bar w_k + \beta_k\big(\Pi_H(\bar w_k + \delta h^\ast(\Pi_G - \mathrm{Id})h\bar w_k)\big);\\ H_{k+1} = \{z \in H_k : \|x_k - z\|^2 \leq \|p_k - z\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - z, p_k - p_{k-1}\rangle\};\\ p_{k+1} = \Pi_{H_{k+1}}p_1, \quad k \geq 1, \end{cases} \quad (4.1)$

    under the control conditions (C1)–(C4), converges strongly to an element $p^\ast = \Pi_\Gamma p_1$.

    The equilibrium problem from [13] aims to compute a point $p \in H$ such that

    $f(p, \bar y) \geq 0, \quad \text{for all } \bar y \in H, \quad (4.2)$

    where $f: H\times H\rightarrow\mathbb{R}$ is a bifunction satisfying:

    (A1) $f(p,p) = 0$ for all $p \in H$;
    (A2) $f$ is monotone, i.e., $f(p,q) + f(q,p) \leq 0$ for all $p,q \in H$;
    (A3) for each $p,q,t \in H$, $\limsup_{x\downarrow 0}f(xt + (1-x)p, q) \leq f(p,q)$;
    (A4) for each $p \in H$, $q \mapsto f(p,q)$ is convex and lower semicontinuous.

    The set EP(f) denotes the set of all solutions associated with the equilibrium problem (4.2). Recall the following auxiliary results for the equilibrium problem:

    Lemma 4.1. [25] Let $H$ be a nonempty closed convex subset of a real Hilbert space $\Xi_1$ and let $f: H\times H\rightarrow\mathbb{R}$ be a bifunction satisfying (A1)–(A4). For $s>0$ and $p \in \Xi_1$, there exists $t \in H$ such that

    $f(t,q) + \frac{1}{s}\langle q - t, t - p\rangle \geq 0, \quad \forall q \in H.$

    Moreover, define an operator $U^f_s: \Xi_1\rightarrow H$ by

    $U^f_s(p) = \Big\{t \in H : f(t,q) + \frac{1}{s}\langle q - t, t - p\rangle \geq 0,\ \forall q \in H\Big\}.$

    Lemma 4.2. [30] Let $H$ be a nonempty closed convex subset of $\Xi_1$ and let $f: H\times H\rightarrow\mathbb{R}$ be a bifunction satisfying (A1)–(A4). Let $A_f: \Xi_1\rightarrow 2^{\Xi_1}$ be a multivalued operator defined as

    $A_f(p) = \begin{cases} \{q \in \Xi_1 : f(p,t) \geq \langle t - p, q\rangle,\ \forall t \in H\}, & \text{if } p \in H,\\ \emptyset, & \text{if } p \notin H.\end{cases}$

    Recall that the operator $A_f$ is a maximal monotone operator with domain $D(A_f) \subseteq H$ and $EP(f) = A_f^{-1}(0)$. Moreover, $U^f_s = (\mathrm{Id} + sA_f)^{-1}$ for $s>0$, i.e., $U^f_s$ is the resolvent of $A_f$. We therefore arrive at the following variant of the problem (2.1):

    Corollary 4.2. Assume that $\Gamma = EP(f_1) \cap h^{-1}(EP(f_2)) \cap \mathrm{Fix}(S) \neq \emptyset$. Then the sequence $(p_k)$ generated by

    $\begin{cases} e_k = p_k + \theta_k(p_k - p_{k-1});\\ \bar w_k = (1-\alpha_k)e_k + \alpha_k S_k e_k;\\ x_k = (1-\beta_k)\bar w_k + \beta_k\big(U^{f_1}_s(\bar w_k + \delta h^\ast(U^{f_2}_s - \mathrm{Id})h\bar w_k)\big);\\ H_{k+1} = \{z \in H_k : \|x_k - z\|^2 \leq \|p_k - z\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - z, p_k - p_{k-1}\rangle\};\\ p_{k+1} = \Pi_{H_{k+1}}p_1, \quad k \geq 1, \end{cases} \quad (4.3)$

    under the control conditions (C1)–(C4), converges strongly to an element $p^\ast = \Pi_\Gamma p_1$.

    Let $g: \Xi_1\rightarrow(-\infty,\infty]$ be a PCLS function; then the set of minimizers associated with $g$ is defined as

    $\arg\min g := \{p \in \Xi_1 : g(p) \leq g(q),\ \text{for all } q \in \Xi_1\}.$

    Recall that the subdifferential $\partial g$ of a PCLS function $g$ is a maximal monotone operator, and the corresponding resolvent operator of $\partial g$ is called the proximity operator (see [22]). Hence $\arg\min g = (\partial g)^{-1}(0)$. We therefore arrive at the following variant of the problem (2.1).
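For instance (our own illustration), for $g(x) = |x|$ on the real line the proximity operator is the classical soft-thresholding map, and iterating it reaches the minimizer $\arg\min g = \{0\}$:

```python
# Sketch: proximity operator of g(x) = |x| (soft-thresholding), i.e. the
# resolvent of the maximal monotone subdifferential of g.
def prox_abs(x, m):
    return max(abs(x) - m, 0.0) * (1.0 if x >= 0 else -1.0)

x = 3.0
for _ in range(10):
    x = prox_abs(x, 0.5)   # proximal steps shrink toward argmin |.| = {0}
assert x == 0.0
```

The fixed points of the proximity operator are exactly the zeros of $\partial g$, which is what Corollary 4.3 below exploits.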

    Corollary 4.3. Assume that $\Gamma = \{x \in \arg\min g_1 : hx \in \arg\min g_2\} \cap \mathrm{Fix}(S) \neq \emptyset$. Then the sequence $(p_k)$ generated by

    $\begin{cases} e_k = p_k + \theta_k(p_k - p_{k-1});\\ \bar w_k = (1-\alpha_k)e_k + \alpha_k S_k e_k;\\ x_k = (1-\beta_k)\bar w_k + \beta_k\big(J^{\partial g_1}_{m_k}(\bar w_k + \delta h^\ast(J^{\partial g_2}_{m_k} - \mathrm{Id})h\bar w_k)\big);\\ H_{k+1} = \{z \in H_k : \|x_k - z\|^2 \leq \|p_k - z\|^2 + \theta_k^2\|p_k - p_{k-1}\|^2 + 2\theta_k\langle p_k - z, p_k - p_{k-1}\rangle\};\\ p_{k+1} = \Pi_{H_{k+1}}p_1, \quad k \geq 1, \end{cases} \quad (4.4)$

    under the control conditions (C1)–(C4), converges strongly to an element $p^\ast = \Pi_\Gamma p_1$.

    In this section, we present an example that characterizes the performance of our algorithm.

    Example 5.1. Let $\Xi_1 = \Xi_2 = (\mathbb{R}, \langle\cdot,\cdot\rangle, |\cdot|)$, where $\langle p,q\rangle = pq$. Consider the operators $h, A_1, A_2: \mathbb{R}\rightarrow\mathbb{R}$ defined as $h(p) = 3p$, $A_1p = 2p$ and $A_2p = 3p$, respectively. It is evident from the definitions that $A_1$, $A_2$ are maximal monotone operators such that $\Omega := \{p \in A_1^{-1}(0) : hp \in A_2^{-1}(0)\} = \{0\}$, and $h$ is a bounded linear operator on $\mathbb{R}$ with the adjoint operator $h^\ast$ such that $\|h\| = \|h^\ast\| = 3$. Let the sequence of operators $S_k: \mathbb{R}\rightarrow\mathbb{R}$ be defined by

    $S_k(p) = \begin{cases} \frac{p}{k}, & p \in [0,\infty);\\ -p, & p \in (-\infty,0).\end{cases}$

    Then $(S_k)$ is an infinite family of $\big(1 - \frac{k^2}{(1+k)^2}\big)$-demicontractive operators with $\bigcap_{k=1}^{\infty}\mathrm{Fix}(S_k) = \{0\}$. Hence $\Gamma = \Omega \cap \bigcap_{k=1}^{\infty}\mathrm{Fix}(S_k) = \{0\}$. We use the following initialization parameters for the computation of Algorithm 1: $\theta = 0.5$, $\alpha_k = \frac{1}{100k+1}$, $\beta_k = \frac{k}{100k+1}$, $\delta = \frac18$, $L = 3$ and $m = 0.02$. Also observe that

    $\theta_k = \begin{cases} \min\big\{\frac{1}{k^2\|p_k - p_{k-1}\|}, 0.5\big\}, & \text{if } p_k \neq p_{k-1};\\ 0.5, & \text{otherwise}.\end{cases}$

    Let $\mathrm{Error} = E_k = \|p_k - p_{k-1}\| < 10^{-5}$ denote the stopping criterion. The performance of Algorithm 1 (i.e., Algo.1, $\theta_k \neq 0$) is compared with the non-inertial variant of Algorithm 1 (i.e., Algo.1, $\theta_k = 0$) and Algo. 3.1 [16]. For different choices of the initial inputs $p_0$ and $p_1$, the values of Algo.1 are summarized in the following table:

    Choice A. Choose $p_0 = 5$, $p_1 = 2$;

    Choice B. Choose $p_0 = 4.2$, $p_1 = 1.5$;

    Choice C. Choose $p_0 = 7$, $p_1 = 4$.
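A runnable sketch of Example 5.1 (ours, not the authors' code): on the real line every set $H_{k+1}$ is an interval, so the shrinking projection can be tracked with two endpoints. The parameter values follow the example; the interval bookkeeping and stopping logic are our own specialization, so iteration counts need not match Table 1 exactly.

```python
import math

# Scalar Algorithm 1 for Example 5.1: h(p) = 3p, A1 p = 2p, A2 p = 3p,
# S_k(p) = p/k on [0, inf) and -p on (-inf, 0); Gamma = {0}.
def J(slope, m, x):                        # resolvent of A(p) = slope * p
    return x / (1.0 + m * slope)

def run(p0, p1, tol=1e-5, max_iter=20000):
    lo, hi = -math.inf, math.inf           # H_1 = R, tracked as an interval
    p_prev, p = p0, p1
    m, delta, theta_bar = 0.02, 1.0 / 8.0, 0.5
    for k in range(1, max_iter + 1):
        d = p - p_prev
        theta = theta_bar if d == 0 else min(1.0 / (k * k * abs(d)), theta_bar)
        alpha, beta = 1.0 / (100 * k + 1), k / (100.0 * k + 1)
        Sk = lambda t: t / k if t >= 0 else -t
        e = p + theta * d
        w = (1 - alpha) * e + alpha * Sk(e)
        hw = 3.0 * w
        x = (1 - beta) * w + beta * J(2.0, m, w + delta * 3.0 * (J(3.0, m, hw) - hw))
        # Membership in H_{k+1} reduces to the linear constraint a*z <= b in 1D:
        a = 2.0 * (e - x)
        b = p * p - x * x + theta * theta * d * d + 2.0 * theta * p * d
        if a > 0:
            hi = min(hi, b / a)
        elif a < 0:
            lo = max(lo, b / a)
        p_prev, p = p, min(max(p1, lo), hi)  # p_{k+1} = projection of p1
        if abs(p - p_prev) < tol:            # E_k stopping criterion
            break
    return p

assert abs(run(5.0, 2.0)) < 1.0   # with Choice A, the iterate approaches 0
```

The constraint coefficients come from expanding the defining inequality of $H_{k+1}$ for scalar arguments; the projection of $p_1$ onto the resulting interval is a clamp.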

    The error $E_k$ for Algorithm 1 and Algorithm 3.1 [16], for each choice in Table 1, is plotted in Figure 1.

    Table 1.  Computed data for Algorithm 1 and Algo. 3.1 (Byrne et al. [16]).

                          No. of Iterations               CPU Time (Sec)
                          Choice A  Choice B  Choice C    Choice A  Choice B  Choice C
    Algo.1, θk ≠ 0        1787      1401      1920        0.064561  0.049590  0.057585
    Algo.1, θk = 0        1819      1620      2008        0.071565  0.063519  0.076661
    Algo. 3.1 [16]        7620      6598      8958        0.284874  0.687913  0.912241

    Figure 1.  Comparison of Algorithm 1 (i.e., Algo.1, θk ≠ 0), Algorithm 1 (i.e., Algo.1, θk = 0) and Algorithm 3.1 [16].

    It is evident from Figure 1 that Algorithm 1 outperforms its noninertial variant and Algorithm 3.1 [16] with respect to the computation of error, CPU time and the number of iterations.

    The mathematical model of the signal recovery problem in an under-determined linear equation system is expressed as follows:

    ζ=hν+ρ, (5.1)

    where $\nu \in \mathbb{R}^D$ denotes the original unknown signal to be recovered, $\zeta \in \mathbb{R}^P$ denotes the observed signal distorted via the bounded linear matrix operator $h: \mathbb{R}^D\rightarrow\mathbb{R}^P$ $(P < D)$ and the noise $\rho$. One can define a natural convex constrained optimization-theoretic formulation of (5.1) via the following well-known LASSO problem [33]:

    $\min_{\nu\in\mathbb{R}^D}\Big\{\frac12\|h\nu - \zeta\|^2\Big\} \quad \text{subject to} \quad \|\nu\|_1 \leq c,\ c > 0. \quad (5.2)$

    The set of solutions of the $\ell_1$-minimization problem (5.1) is equivalent to the set of solutions of (5.2) under certain control conditions on the matrix $h$ [15]. The $\ell_1$-norm based regularization problems are widely applicable in signal and image processing.

    Set $\Gamma = H \cap h^{-1}(G)$ with $H = \{\nu \,|\, \|\nu\|_1 \leq c\}$ and $G = \{\zeta\}$. The experiment is conducted with the matrix $h \in \mathbb{R}^{P\times D}$ whose elements are generated from independently distributed normal distributions with mean $0$ and variance $1$. The sparse vector $\nu$, having $t = \mathrm{spikes}$ nonzero elements, is generated via the uniform distribution in $[-2,2]$. The following iterative regularization method, often known as the Richardson method (or the Landweber method) [23], is generally employed to solve the problem (5.2):

    $\nu_{k+1} = \nu_k + \eta h^{T}(\zeta - h\nu_k), \quad (5.3)$

    where $\eta$, the step size, is assumed to be constant. The algorithm (5.3) converges for $0 < \eta < \frac{2}{\epsilon_{\max}^2}$, where $\epsilon_{\max}$ is the maximum singular value of $h$. The initial points $\nu_0, \nu_1$ are chosen randomly. We use the mean squared error indicator $E_k = \frac{1}{N}\|\nu_k - \nu^\ast\| < 10^{-4}$, where $\nu^\ast$ is the approximation of the signal $\nu$, to examine the performance of the algorithm for signal restoration. The computation of the observed signal $\zeta$ is carried out by employing Gaussian noise with the signal-to-noise ratio SNR = 40. Also set $m_k = \frac{1.85}{\|h\|^2}$, $\alpha_k = \frac{k}{100k+1}$, $\beta_k = \frac{1}{15k+1}$, $\delta = 0.04$, $c = t - 0.002$, $\mu = 0$ and $\theta_k = 0.5$.
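A minimal sketch of the Landweber iteration (5.3) on a tiny consistent system (the matrix, signal and step size below are illustrative stand-ins, not the paper's experimental data):

```python
# Landweber/Richardson iteration nu_{k+1} = nu_k + eta * h^T (zeta - h nu_k)
# on a noiseless 3x2 system; sigma_max^2 is about 5.3 here, so the constant
# step eta = 0.2 satisfies 0 < eta < 2 / sigma_max^2.
h = [[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
nu_true = [1.0, -0.5]
zeta = [sum(h[i][j] * nu_true[j] for j in range(2)) for i in range(3)]

eta, nu = 0.2, [0.0, 0.0]
for _ in range(500):
    r = [zeta[i] - sum(h[i][j] * nu[j] for j in range(2)) for i in range(3)]
    g = [sum(h[i][j] * r[i] for i in range(3)) for j in range(2)]   # h^T r
    nu = [nu[j] + eta * g[j] for j in range(2)]

assert all(abs(nu[j] - nu_true[j]) < 1e-8 for j in range(2))
```

On a noiseless consistent system the iterates converge to the least-squares solution; with noise, early stopping or the $\ell_1$ constraint of (5.2) regularizes the recovery.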

    Performance Test 1: Fix D=512, P=256 and spikes=15.

    Performance Test 2: Fix D=1024, P=512 and spikes=35.

    It is clear from Figures 2 and 3 that Algorithm 1 outperforms its variants and Algorithm 3.1 [16] for the signal recovery problem, as well as exhibiting fast convergence characteristics with regard to the error and the number of iterations.

    Figure 2.  Comparison for the performance test 1.
    Figure 3.  Comparison for the performance test 2.

    In this paper, we have posited a framework for the investigation of the SCNPP and the FPP associated with an infinite family of k-demicontractive operators in Hilbert spaces. The common optimal solution of the problem is then constructed via an inertial hybrid projection algorithm under a suitable set of constraints. We have incorporated an appropriate numerical example for the demonstration of the framework as well as for the applicability of our algorithm. We found that our algorithm outperforms its variants and Algorithm 3.1 [16]. We have also discussed various instances of the proposed formalism, which can pave the way for important future research directions in the theories of SCNPP and FPP.

    The authors Y. Arfat and P. Kumam acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by the Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021, under project number 64A306000005. The author Y. Arfat was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut's University of Technology Thonburi, Thailand (Grant No. 16/2562). Finally, the corresponding author Supak Phiangsungnoen acknowledges the financial support provided by the Institute of Research and Development, Rajamangala University of Technology Rattanakosin (Fundamental Fund Grant No. FRB6620/25).

    Data sharing not applicable to this article as no data-sets were generated or analysed during the current study.

    The authors declare no conflict of interest.



    [1] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-valued Anal., 9 (2001), 3–11. http://doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)