Research article

Projection methods for quasi-nonexpansive multivalued mappings in Hilbert spaces

  • Received: 29 August 2022; Revised: 02 December 2022; Accepted: 28 December 2022; Published: 12 January 2023
  • MSC : 47H09, 47H10

  • This paper proposes a modified D-iteration to approximate the solutions of three quasi-nonexpansive multivalued mappings in a real Hilbert space. Due to the incorporation of an inertial step in the iteration, the sequence generated by the modified method converges faster to the common fixed point of the mappings. Furthermore, the generated sequence strongly converges to the required solution using a shrinking technique. Numerical results obtained indicate that the proposed iteration is computationally efficient and outperforms the standard forward-backward with inertial step.

    Citation: Anantachai Padcharoen, Kritsana Sokhuma, Jamilu Abubakar. Projection methods for quasi-nonexpansive multivalued mappings in Hilbert spaces[J]. AIMS Mathematics, 2023, 8(3): 7242-7257. doi: 10.3934/math.2023364




    Throughout the paper, unless otherwise stated, let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, and let $P_C$ and $P_Q$ be the orthogonal projections onto $C$ and $Q$, respectively. Let $B:H_1\to 2^{H_1}$ and $D:H_2\to 2^{H_2}$ be two maximal monotone mappings and $A:H_1\to H_2$ be a bounded linear operator with adjoint $A^*$. Find

    $x\in C$ such that $Ax\in Q$, (1.1)

    which is called the split feasibility problem (SFP). It was first introduced by Censor and Elfving [1] in finite-dimensional Hilbert spaces to model inverse problems arising in medical image reconstruction. Since then, the SFP has received much attention for its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics; for details, the reader may refer to [1,2,3,4,5] and the references therein. Early projection algorithms proposed for the SFP require, at each iteration, the computation of a matrix inverse or of the largest eigenvalue of a matrix, which is time-consuming and not easy to carry out in practice. To overcome this drawback, in 2002, Byrne [6] presented the following CQ algorithm:

    $x_{n+1}=P_C\big(x_n-\gamma A^T(I-P_Q)Ax_n\big),$

    where $A$ is a matrix operator, $A^T$ is the transpose of $A$, and $\gamma\in(0,2/L)$ with $L$ the largest eigenvalue of the matrix $A^TA$.
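    As an illustration, the CQ iteration can be sketched numerically; the sets, matrix, starting point and step size below are hypothetical choices for a small instance, not data from the paper.

    ```python
    import numpy as np

    # Hypothetical instance of the SFP: C = [-1,1]^2 in H1, Q = R_+^2 in H2.
    A = np.array([[1.0, 2.0], [3.0, 4.0]])

    def P_C(x):                       # projection onto the box C
        return np.clip(x, -1.0, 1.0)

    def P_Q(y):                       # projection onto the nonnegative cone Q
        return np.maximum(y, 0.0)

    L = np.linalg.eigvalsh(A.T @ A).max()   # largest eigenvalue of A^T A
    gamma = 1.0 / L                          # any gamma in (0, 2/L) works

    x = np.array([0.9, -0.8])
    for _ in range(5000):
        # CQ step: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)
        x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))

    residual = np.linalg.norm(A @ x - P_Q(A @ x))   # 0 iff Ax lies in Q
    ```

    After sufficiently many iterations the residual $\|(I-P_Q)Ax_n\|$ becomes small, i.e., the iterate stays in $C$ while its image under $A$ approaches $Q$.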

    In 2011, Moudafi [7] first introduced the following problem: Find $x\in H_1$ such that

    $0\in B(x)$ and $0\in D(Ax),$ (1.2)

    which is called the split variational inclusion problem (for short, denoted by SVIP). It is clear that the SVIP includes the SFP as a special case. We denote the solution set of the SVIP by $\mathrm{SVIP}(B,D):=\{x\in C\mid 0\in B(x),\ 0\in D(Ax)\}$. The SVIP is at the core of the modeling of many inverse problems arising from phase retrieval and other real-world problems, for instance in sensor networks in computerized tomography and data compression [8,9]. In recent years, there has been tremendous interest in solving the SVIP, and many researchers have constructed a large number of methods for this problem [10,11,12,13,14,15,16].

    In 2014, Yang and Zhao [17] defined the following problem: Find $x\in H_1$ such that

    $x\in\bigcap_{i=1}^{\infty}B_i^{-1}(0)$ and $Ax\in\bigcap_{i=1}^{\infty}D_i^{-1}(0),$ (1.3)

    which is called the generalized split variational inclusion problem (for short, denoted by GSVIP1), where for each $i\in\mathbb{N}$, $B_i:H_1\to 2^{H_1}$ and $D_i:H_2\to 2^{H_2}$ are two families of maximal monotone mappings. To solve the GSVIP1, the following algorithm was introduced:

    $x_{n+1}=a_nx_n+b_nf(x_n)+\sum_{i=1}^{\infty}c_{n,i}J^{B_i}_{\beta_{n,i}}\big(I-\gamma_{n,i}A^*(I-J^{D_i}_{\beta_{n,i}})A\big)x_n,\quad n\ge 0,$

    where for each $i\in\mathbb{N}$ the sequences $\{a_n\},\{b_n\},\{c_{n,i}\}\subset(0,1)$ satisfy $a_n+b_n+\sum_{i=1}^{\infty}c_{n,i}=1$, $\{\beta_{n,i}\}\subset(0,\infty)$, $\{\gamma_{n,i}\}\subset\big(0,\tfrac{2}{\|A\|^2+1}\big)$, and $f$ is a $k$-contraction mapping of $H_1$. The strong convergence of the above algorithm was proved under mild assumptions.

    Ogbuisi et al. [18] introduced a new inertial algorithm to solve the following problem: Find $x\in H_1$ such that

    $x\in\bigcap_{i=1}^{s}B_i^{-1}(0)$ and $Ax\in\bigcap_{j=1}^{t}D_j^{-1}(0),$ (1.4)

    which is also called the generalized split variational inclusion problem (for convenience, denoted by GSVIP2), where for $s,t\in\mathbb{N}$, $B_i:H_1\to 2^{H_1}$ $(i=1,\ldots,s)$ and $D_j:H_2\to 2^{H_2}$ $(j=1,\ldots,t)$ are two finite families of maximal monotone mappings. We denote the solution set of the GSVIP2 by $\mathrm{GSVIP2}(B_i,D_j):=\{x\in C\mid x\in\bigcap_{i=1}^{s}B_i^{-1}(0)\ \text{and}\ Ax\in\bigcap_{j=1}^{t}D_j^{-1}(0)\}$. The following algorithm was introduced to solve the GSVIP2: Choose any initial values $u_0,u_1\in H_1$ and $\lambda>0$. Assume $u_{n-1},u_n$ are known, and compute

    $\begin{cases} x_n=u_n+\theta_n(u_n-u_{n-1}),\\ z_n=J^{B_{i_n}}_{\lambda}x_n,\\ y_n=A^*(I-J^{D_{j_n}}_{\lambda})Ax_n,\quad n\ge 1, \end{cases}$

    where $i_n\in\arg\max_{1\le i\le s}\|x_n-J^{B_i}_{\lambda}x_n\|$ and $j_n\in\arg\max_{1\le j\le t}\|Ax_n-J^{D_j}_{\lambda}Ax_n\|$. If $x_n+y_n-z_n=0$, then stop ($x_n$ is the desired solution); otherwise, continue to compute

    $u_{n+1}=(1-\alpha_n)u_n+\alpha_n[x_n-\tau_n(x_n+y_n-z_n)],$

    where $\alpha_n\in(0,1)$, $\theta_n\in[0,1]$, $\tau_n=\gamma_n\dfrac{\|x_n-z_n\|^2+\|y_n\|^2}{2\|x_n+y_n-z_n\|^2}$, $\gamma_n>0$. They showed that the sequences generated by the above algorithm converge weakly to a solution of their problem.

    In addition, the equilibrium problem (for short, EP) was first proposed by Nikaido and Isoda [19] in 1955 and is described as follows: Find $u\in C$ such that

    $f(u,v)\ge 0,\quad \forall v\in C,$ (1.5)

    where $H$ is a real Hilbert space, $C$ is a nonempty closed convex subset of $H$, and $f:C\times C\to\mathbb{R}$ is a bifunction. We denote the solution set of the EP by $\mathrm{EP}(f):=\{u\in C\mid f(u,v)\ge 0,\ \forall v\in C\}$. After the publication of the paper by Blum and Oettli [20] in 1994, the EP attracted wide attention, and many scholars published a large number of articles on the problem. The EP includes some important problems, such as optimization problems, saddle point problems, variational inequalities, and Nash equilibria, as special cases.

    For solving the monotone EP, Korpelevich [21] first extended the extragradient (double projection) method for the saddle point problem to the monotone EP, and many algorithms [22,23,24,25,26] have since been developed for solving the EP. Santos and Scheimberg [27] proposed an inexact projection subgradient method for the EP involving paramonotone bifunctions in a finite-dimensional space. Notably, this algorithm needs only one projection per iteration, and its weak convergence was proved under mild assumptions.

    In 2016, Yen et al. [28] studied the SFP involving a paramonotone equilibrium problem and a convex optimization problem, formulated as follows: Find $x\in C$ such that

    $f(x,v)\ge 0,\ \forall v\in C$ and $g(Ax)\le g(y),\ \forall y\in H_2,$ (1.6)

    where $g$ is a proper lower semicontinuous convex function on $H_2$. They introduced the following algorithm:

    $\begin{cases} y_n=P_C(x_n-\alpha_n\eta_n),\\ z_n=P_C\big(y_n-\mu_nA^*(I-\mathrm{prox}_{\lambda g})Ay_n\big),\\ x_{n+1}=a_nx_n+(1-a_n)z_n, \end{cases}$

    for each $x_n\in C$, where $\eta_n\in\partial^2_{\varepsilon_n}f(x_n,x_n)$ and $\alpha_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\delta_n,\|\eta_n\|\}$, and

    $\mu_n=\begin{cases}0, & \text{if } h(y_n)=0,\\ \dfrac{\rho_nh(y_n)}{\|\nabla h(y_n)\|^2}, & \text{if } h(y_n)\neq 0,\end{cases}$

    and the selection of the sequences $\{\alpha_n\},\{\delta_n\},\{\beta_n\},\{\varepsilon_n\}$ and $\{\rho_n\}$ is described in Algorithm 3.1 of [28]. Moreover, they proved the strong convergence of the algorithm under mild assumptions.

    The problem of finding common solutions of the fixed point sets of nonlinear mappings and the solution sets of optimization problems and their related problems has been considered by several authors (see, for instance, [29,30,31,32,33] and the references therein). The motivation for studying such common solution problems lies in their potential applications to mathematical models whose constraints can be expressed as fixed point problems and optimization problems. Such models arise in practical problems such as signal processing, network resource allocation, and image recovery (see, for instance, [34,35] and the references therein).

    Tan, Qin and Yao [36] proposed four self-adaptive inertial algorithms with strong convergence to solve the split variational inclusion problem in real Hilbert spaces. Izuchukwu et al. [37] first proposed and studied several strongly convergent versions of the forward-reflected-backward splitting method of Malitsky and Tam for finding a zero of the sum of two monotone operators in a real Hilbert space; these require only one forward evaluation of the single-valued operator and one backward evaluation of the set-valued operator at each iteration. They also developed inertial versions of their methods with strong convergence when the set-valued operator is maximal monotone and the single-valued operator is Lipschitz continuous and monotone. Moreover, they discussed examples from image restoration and optimal control regarding the implementation of their methods in comparison with known related methods in the literature. Zhang and Wang [38] suggested a new inertial iterative algorithm for split null point and common fixed point problems. In [39], the authors focused on an inertial-viscosity approximation method for solving a split generalized equilibrium problem and a common fixed point problem in real Hilbert spaces; their algorithm was designed so that its strong convergence, under mild conditions, does not require the norm of the bounded linear operator underlying the split equilibrium problem. In [40], the authors studied split variational inclusion and fixed point problems using Bregman weak relatively nonexpansive mappings in $p$-uniformly convex smooth Banach spaces; they introduced an inertial shrinking projection self-adaptive iterative scheme for the problem and proved a strong convergence theorem.

    Su et al. [41] constructed a multi-step inertial asynchronous sequential algorithm for common fixed point problems. Zheng et al. [42] established the fixed-time stability of a neural network for solving split convex feasibility problems.

    Motivated and inspired by the above research work, we aim to consider the common element of the paramonotone equilibrium problem and the GSVIP2: Find $u^*\in C$ such that

    $f(u^*,u)\ge 0,\ \forall u\in C$ and $0\in\bigcap_{i=1}^{s}B_i(u^*),\ 0\in\bigcap_{j=1}^{t}D_j(Au^*),$ (1.7)

    where $s,t,B_i,D_j$ and $f:C\times C\to\mathbb{R}$ are as mentioned above. We denote the set of solutions of Problem (1.7) by

    $\Gamma:=\Big\{x\in C\ \Big|\ f(x,u)\ge 0\ \forall u\in C,\ 0\in\bigcap_{i=1}^{s}B_i(x),\ 0\in\bigcap_{j=1}^{t}D_j(Ax)\Big\}=\mathrm{GSVIP2}(B_i,D_j)\cap \mathrm{EP}(f).$

    It is easy to see that if $B_i=0$ and $D_j=0$, then Problem (1.7) reduces to the EP (1.5); if $f=0$, then Problem (1.7) reduces to the GSVIP2 (1.4); and if $s=t=1$, then Problem (1.7) becomes the following problem: Find $x\in C$ such that

    $f(x,y)\ge 0,\ \forall y\in C$ and $0\in B(x),\ 0\in D(Ax).$ (1.8)

    If, for $s,t\in\mathbb{N}$, $i\in\{1,2,\ldots,s\}$, $j\in\{1,2,\ldots,t\}$, we take $B_i=N_{C_i}$ and $D_j=N_{Q_j}$ in Problem (1.7), where $N_{C_i}$ and $N_{Q_j}$ are the normal cones of the nonempty closed convex subsets $C_i\subseteq H_1$ and $Q_j\subseteq H_2$, respectively, then we obtain the following multiple-sets split feasibility problem combined with the paramonotone equilibrium problem: Find $u^*\in C$ such that

    $f(u^*,u)\ge 0,\ \forall u\in C$ and $u^*\in\bigcap_{i=1}^{s}C_i,\ Au^*\in\bigcap_{j=1}^{t}Q_j.$ (1.9)

    Thus, Problem (1.7) considered in this paper is quite general and contains many known and new mathematical models of common element problems, such as Problems (1.1)–(1.6), (1.8) and (1.9), as special cases. We are committed to establishing the strong convergence of a self-adaptive viscosity-type inertial algorithm for the common solutions of Problem (1.7). The advantages of the suggested iterative algorithm are as follows: (1) the design of the algorithm is self-adaptive, and the inertial term can speed up its convergence; (2) the strong convergence analysis does not require a prior estimate of the norm of the bounded operator; (3) the strong convergence of the iterative algorithm is established under the weak assumption of paramonotonicity of the related mappings. Our results improve and generalize many known results in the literature [18,27].

    In this section, we give some basic concepts, properties and notations that will be used in the sequel. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ endowed with the inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$. For all $u,v\in H$ and $a\in\mathbb{R}$, we have the following facts:

    ⅰ) $\|u+v\|^2=\|u\|^2+2\langle u,v\rangle+\|v\|^2$;

    ⅱ) $\|u+v\|^2\le\|u\|^2+2\langle v,u+v\rangle$;

    ⅲ) $\|au+(1-a)v\|^2=a\|u\|^2+(1-a)\|v\|^2-a(1-a)\|u-v\|^2$.
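    These identities can be sanity-checked numerically in $\mathbb{R}^3$, viewed as a finite-dimensional Hilbert space; the vectors and the scalar $a$ below are arbitrary test values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    u, v, a = rng.standard_normal(3), rng.standard_normal(3), 0.3

    # i) expansion of the squared norm of a sum
    lhs1 = np.linalg.norm(u + v) ** 2
    rhs1 = np.linalg.norm(u) ** 2 + 2 * np.dot(u, v) + np.linalg.norm(v) ** 2

    # ii) the one-sided estimate; the gap equals ||v||^2 >= 0
    rhs2 = np.linalg.norm(u) ** 2 + 2 * np.dot(v, u + v)

    # iii) the convex-combination identity
    lhs3 = np.linalg.norm(a * u + (1 - a) * v) ** 2
    rhs3 = (a * np.linalg.norm(u) ** 2 + (1 - a) * np.linalg.norm(v) ** 2
            - a * (1 - a) * np.linalg.norm(u - v) ** 2)
    ```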

    Definition 2.1 [18] (1) $F:H\to H$ is nonexpansive if $\|Fu-Fv\|\le\|u-v\|$ for all $u,v\in H$.

    (2) $F:H\to H$ is firmly nonexpansive if $\langle Fu-Fv,u-v\rangle\ge\|Fu-Fv\|^2$ for all $u,v\in H$.

    Definition 2.2 [18] A mapping $F:C\to C$ is said to be demiclosed if, for any sequence $\{u_n\}\subset C$ that converges weakly to $u$ such that $\{Fu_n\}$ converges strongly to $v$, we have $F(u)=v$.

    Lemma 2.1 [43] Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and $F:C\to C$ be a nonexpansive mapping. Then $I-F$ is demiclosed at $0$.

    Lemma 2.2 [44] Let $B$ be a maximal monotone mapping on a Hilbert space $H$. For any $r>0$, define the resolvent $J^B_r=(I+rB)^{-1}$. Then the following hold:

    (1) $J^B_r$ is a single-valued and firmly nonexpansive mapping.

    (2) $D(J^B_r)=H$ and $\mathrm{Fix}(J^B_r)=B^{-1}(0)$, where $\mathrm{Fix}(J^B_r)$ stands for the fixed point set of $J^B_r$.

    Lemma 2.3 [18] Let $B:H\to 2^{H}$ be a maximal monotone mapping. Then the associated resolvent $J^B_r$ for $r>0$ has the following characterization:

    $\langle u-J^B_r(u),u-v\rangle\ge\|u-J^B_r(u)\|^2,\quad \forall u\in H,\ v\in \mathrm{Fix}(J^B_r).$
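    For a linear operator $B(u)=Mu$ with $M$ positive semidefinite (which is maximal monotone), the resolvent has the closed form $J^B_r=(I+rM)^{-1}$, and Lemmas 2.2 and 2.3 can be checked directly; the matrix $M$ below is an arbitrary illustrative choice.

    ```python
    import numpy as np

    M = np.array([[5.0, 0.0], [0.0, 2.0]])   # positive definite
    r = 0.5
    J = np.linalg.inv(np.eye(2) + r * M)     # resolvent J_r^B = (I + rM)^{-1}

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    Ju, Jv = J @ u, J @ v

    # Lemma 2.2(1): firm nonexpansiveness, <Ju - Jv, u - v> >= ||Ju - Jv||^2
    firm_gap = np.dot(Ju - Jv, u - v) - np.dot(Ju - Jv, Ju - Jv)

    # Lemma 2.2(2): Fix(J_r^B) = B^{-1}(0); here B^{-1}(0) = {0}, so J(0) = 0
    fix_zero = np.linalg.norm(J @ np.zeros(2))

    # Lemma 2.3 with v = 0 in Fix(J_r^B): <u - Ju, u - 0> >= ||u - Ju||^2
    char_gap = np.dot(u - Ju, u) - np.dot(u - Ju, u - Ju)
    ```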

    Lemma 2.4 [45] Let $\{\upsilon_n\}$ and $\{\delta_n\}$ be nonnegative sequences of real numbers satisfying $\upsilon_{n+1}\le\upsilon_n+\delta_n$ with $\sum_{n=1}^{\infty}\delta_n<+\infty$. Then the sequence $\{\upsilon_n\}$ is convergent.

    Lemma 2.5 [46] Let $H$ be a real Hilbert space and $\{a_n\}$ be a sequence of real numbers such that $0<a<a_n<b<1$ for all $n\ge 1$, and let $\{b_n\},\{d_n\}$ be sequences in $H$ such that $\limsup_{n\to\infty}\|b_n\|\le c$, $\limsup_{n\to\infty}\|d_n\|\le c$ and $\lim_{n\to\infty}\|a_nb_n+(1-a_n)d_n\|=c$ for some $c>0$. Then $\lim_{n\to\infty}\|b_n-d_n\|=0$.

    In this section, in order to prove the convergence of the algorithm, the following conditions are assumed:

    (A1) $H_1$ and $H_2$ are two real Hilbert spaces, $C$ is a nonempty closed convex subset of $H_1$, and $A:H_1\to H_2$ is a bounded linear operator.

    (A2) For $s,t\in\mathbb{N}$, $i\in\{1,2,\ldots,s\}$, $j\in\{1,2,\ldots,t\}$, $B_i:H_1\to 2^{H_1}$ and $D_j:H_2\to 2^{H_2}$ are two families of maximal monotone mappings.

    (A3) The bifunction $f:C\times C\to\mathbb{R}$ satisfies the following:

    (B1) For each $u\in C$, $f(u,u)=0$, $f(u,\cdot)$ is lower semicontinuous and convex on $C$, and $f(\cdot,u)$ is upper semicontinuous on $C$.

    (B2) $\partial^2_{\lambda}f(u,u)$ is nonempty for any $\lambda>0$ and $u\in C$, and it is bounded on any bounded subset of $C$, where $\partial^2_{\lambda}f(u,u)$ denotes the $\lambda$-subdifferential of the convex function $f(u,\cdot)$ at $u$, that is,

    $\partial^2_{\lambda}f(u,u):=\{\eta\in H:\ \langle\eta,v-u\rangle+f(u,u)\le f(u,v)+\lambda,\ \forall v\in C\}.$

    (B3) $f$ is pseudo-monotone on $C$ with respect to every solution of the EP, that is, $f(v,u^*)\le 0$ for all $v\in C$ and $u^*\in \mathrm{EP}(f)$. Moreover, $f$ satisfies the following condition, called the paramonotonicity property: $u^*\in \mathrm{EP}(f)$, $v\in C$, $f(u^*,v)=f(v,u^*)=0$ $\Longrightarrow$ $v\in \mathrm{EP}(f)$.

    (A4) $\Gamma\neq\emptyset$.

    Now, we introduce a self-adaptive viscosity-type inertial algorithm to solve Problem (1.7), described as follows.

    Algorithm 3.1. Initialization. Pick $u_0,u_1\in H_1$, $r>0$ and $\theta\in[0,1)$; for $n\in\mathbb{N}$, the sequences $\{\rho_n\},\{a_n\},\{\beta_n\},\{\lambda_n\},\{\delta_n\},\{\varepsilon_n\}\subset[0,\infty)$ satisfy the following conditions:

    $\rho_n>\rho>0,\quad 0<a<a_n<b<1,\quad \beta_n>0,\quad \lambda_n>0,$
    $\sum_{n=1}^{\infty}\varepsilon_n<+\infty,\quad \lim_{n\to\infty}a_n=\tfrac12,\quad \sum_{n=1}^{\infty}\tfrac{\beta_n}{\rho_n}=+\infty,\quad \sum_{n=1}^{\infty}\beta_n^2<+\infty,$
    $\sum_{n=1}^{\infty}\tfrac{\beta_n\lambda_n}{\rho_n}<+\infty,\quad 0<\liminf_{n\to\infty}\delta_n\le\limsup_{n\to\infty}\delta_n<4.$

    Step 1. Assume $u_{n-1},u_n$ are known. Choose $\alpha_n$ such that $0<\alpha_n\le\bar{\alpha}_n$, where

    $\bar{\alpha}_n=\begin{cases}\min\Big\{\theta,\dfrac{\varepsilon_n}{\|u_n-u_{n-1}\|}\Big\}, & \text{if } u_n\neq u_{n-1},\\ \theta, & \text{otherwise}.\end{cases}$

    Compute

    $x_n=u_n+\alpha_n(u_n-u_{n-1}).$ (3.1)

    Step 2. Compute

    $y_n=J^{B_{i_n}}_{r}x_n,$ (3.2)
    $z_n=A^*(I-J^{D_{j_n}}_{r})Ax_n,\quad n\ge 1,$ (3.3)

    where

    $i_n\in\arg\max_{1\le i\le s}\|x_n-J^{B_i}_{r}x_n\|,\quad j_n\in\arg\max_{1\le j\le t}\|Ax_n-J^{D_j}_{r}Ax_n\|.$ (3.4)

    If

    $x_n+z_n-y_n=0,$ (3.5)

    then stop; otherwise, continue and compute $v_n$ as follows:

    $v_n=P_C\big(x_n-\xi_n(x_n+z_n-y_n)\big),$ (3.6)

    where

    $\xi_n=\delta_n\dfrac{\|x_n-y_n\|^2+\|(I-J^{D_{j_n}}_{r})Ax_n\|^2}{2\|x_n-y_n+z_n\|^2},\quad \delta_n>0.$

    Step 3. Take $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ and define $\tau_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\rho_n,\|\eta_n\|\}$. Compute

    $w_n=P_C(v_n-\tau_n\eta_n).$ (3.7)

    Step 4. Compute

    $u_{n+1}=a_nu_n+(1-a_n)w_n.$ (3.8)
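    A one-iteration numerical sketch of Steps 1 and 2 (inertial extrapolation, resolvent evaluations and the self-adaptive step size $\xi_n$) may help fix ideas; it takes $s=t=1$ with the linear operators of Example 3.1 below, and the values of $r,\theta,\varepsilon_n,\delta_n$ and the starting points are illustrative assumptions, not prescribed by the paper.

    ```python
    import numpy as np

    # Linear test operators (as in Example 3.1): B = diag(5,2), D = diag(3,6)
    B = np.diag([5.0, 2.0]); D = np.diag([3.0, 6.0])
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    r, theta, eps_n, delta_n = 0.5, 0.5, 1e-2, 1.0

    JB = np.linalg.inv(np.eye(2) + r * B)    # resolvent of B
    JD = np.linalg.inv(np.eye(2) + r * D)    # resolvent of D

    def P_C(x):                               # C = [-10,10]^2 as in Example 3.1
        return np.clip(x, -10.0, 10.0)

    u_prev, u = np.array([1.0, 1.0]), np.array([0.8, 0.6])

    # Step 1: inertial extrapolation with the on-line bound alpha_n <= bar{alpha}_n
    diff = np.linalg.norm(u - u_prev)
    alpha = min(theta, eps_n / diff) if diff > 0 else theta
    x = u + alpha * (u - u_prev)

    # Step 2: resolvent evaluations and the self-adaptive step size xi_n
    y = JB @ x
    z = A.T @ (A @ x - JD @ (A @ x))
    res = x + z - y                           # nonzero here, so we do not stop
    S = np.linalg.norm(x - y) ** 2 + np.linalg.norm(A @ x - JD @ (A @ x)) ** 2
    xi = delta_n * S / (2 * np.linalg.norm(res) ** 2)
    v = P_C(x - xi * res)
    ```

    Consistent with the Fejér-type estimate (3.11) below, one observes $\|v_n-u^*\|\le\|x_n-u^*\|$ with respect to the zero $u^*=(0,0)$ of the inclusion part.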

    Several algorithms can be deduced from our Algorithm 3.1 for solving Problem (1.7), as follows. If $\alpha_n=0$ in Algorithm 3.1, we have the following self-adaptive viscosity-type method:

    Algorithm 3.2. Choose any initial value $u_1\in H_1$.

    Step 1. Compute

    $y_n=J^{B_{i_n}}_{r}u_n,$
    $z_n=A^*(I-J^{D_{j_n}}_{r})Au_n,\quad n\ge 1,$

    where

    $i_n\in\arg\max_{1\le i\le s}\|u_n-J^{B_i}_{r}u_n\|,\quad j_n\in\arg\max_{1\le j\le t}\|Au_n-J^{D_j}_{r}Au_n\|.$

    If $u_n+z_n-y_n=0$, then stop; otherwise, continue to compute $v_n$ as follows:

    $v_n=P_C\big(u_n-\xi_n(u_n+z_n-y_n)\big).$

    Step 2. Take $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ and define $\tau_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\rho_n,\|\eta_n\|\}$. Compute

    $u_{n+1}=P_C(v_n-\tau_n\eta_n),$

    where $\xi_n=\delta_n\dfrac{\|u_n-y_n\|^2+\|(I-J^{D_{j_n}}_{r})Au_n\|^2}{2\|u_n-y_n+z_n\|^2}$, $\delta_n>0$, and $r,\{\rho_n\},\{a_n\},\{\beta_n\},\{\lambda_n\},\{\varepsilon_n\}$ are chosen as in Algorithm 3.1.

    If $s=t=1$, then Problem (1.7) reduces to Problem (1.8), and we consider the following Algorithm 3.3, corresponding to Algorithm 3.1, for computing the solution of Problem (1.8):

    Algorithm 3.3. Choose any initial values $u_0,u_1\in H_1$.

    Step 1. Assume $u_{n-1},u_n$ are known. Compute

    $x_n=u_n+\alpha_n(u_n-u_{n-1}).$

    Step 2. Compute

    $y_n=J^{B}_{r}x_n,$
    $z_n=A^*(I-J^{D}_{r})Ax_n,\quad n\ge 1.$

    If $x_n+z_n-y_n=0$, then stop; otherwise, continue to compute $v_n$ as follows:

    $v_n=P_C\big(x_n-\xi_n(x_n+z_n-y_n)\big).$

    Step 3. Take $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ and define $\tau_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\rho_n,\|\eta_n\|\}$. Compute

    $w_n=P_C(v_n-\tau_n\eta_n).$

    Step 4. Compute

    $u_{n+1}=a_nu_n+(1-a_n)w_n,$

    where $\theta,r,\{\rho_n\},\{a_n\},\{\beta_n\},\{\lambda_n\},\{\varepsilon_n\},\{\alpha_n\},\{\xi_n\}$ are chosen as in Algorithm 3.1.

    If $B_i=N_{C_i}$ $(i=1,2,\ldots,s)$ and $D_j=N_{Q_j}$ $(j=1,2,\ldots,t)$, then Problem (1.7) reduces to Problem (1.9), and Algorithm 3.1 reduces to the following method:

    Algorithm 3.4. Choose any initial values $u_0,u_1\in H_1$.

    Step 1. Compute

    $x_n=u_n+\alpha_n(u_n-u_{n-1}).$

    Step 2. Compute

    $y_n=P_{C_{i_n}}x_n,$
    $z_n=A^*(I-P_{Q_{j_n}})Ax_n,$

    where

    $i_n\in\arg\max_{1\le i\le s}\|x_n-P_{C_i}x_n\|,\quad j_n\in\arg\max_{1\le j\le t}\|Ax_n-P_{Q_j}Ax_n\|.$

    If $x_n+z_n-y_n=0$, then stop; otherwise, continue to compute $v_n$ as follows:

    $v_n=P_C\big(x_n-\xi_n(x_n+z_n-y_n)\big).$

    Step 3. Take $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ and define $\tau_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\rho_n,\|\eta_n\|\}$. Compute

    $w_n=P_C(v_n-\tau_n\eta_n).$

    Step 4. Compute

    $u_{n+1}=a_nu_n+(1-a_n)w_n,$

    where $\xi_n=\delta_n\dfrac{\|x_n-y_n\|^2+\|(I-P_{Q_{j_n}})Ax_n\|^2}{2\|x_n-y_n+z_n\|^2}$, $\delta_n>0$, and $\theta,\{\rho_n\},\{a_n\},\{\beta_n\},\{\lambda_n\},\{\varepsilon_n\},\{\alpha_n\}$ are chosen as in Algorithm 3.1.

    If $f=0$, Problem (1.7) reduces to Problem (1.4), and we consider the following inertial Algorithm 3.5, corresponding to Algorithm 3.1, for computing the solution of the GSVIP2 (1.4):

    Algorithm 3.5. Choose any initial values $u_0,u_1\in H_1$.

    Step 1. Assume $u_{n-1},u_n$ are known. Compute

    $x_n=u_n+\alpha_n(u_n-u_{n-1}).$

    Step 2. Compute

    $y_n=J^{B_{i_n}}_{r}x_n,$
    $z_n=A^*(I-J^{D_{j_n}}_{r})Ax_n,\quad n\ge 1,$

    where

    $i_n\in\arg\max_{1\le i\le s}\|x_n-J^{B_i}_{r}x_n\|,\quad j_n\in\arg\max_{1\le j\le t}\|Ax_n-J^{D_j}_{r}Ax_n\|.$

    If $x_n+z_n-y_n=0$, then stop; otherwise, continue to compute $v_n$ as follows:

    $v_n=x_n-\xi_n(x_n+z_n-y_n).$

    Step 3. Compute

    $u_{n+1}=a_nu_n+(1-a_n)v_n,$

    where $\theta,r,\{a_n\},\{\alpha_n\},\{\xi_n\}$ are chosen as in Algorithm 3.1.

    If $B_i=0$ and $D_j=0$, Problem (1.7) reduces to Problem (1.5), and hence we consider the following inertial Algorithm 3.6, corresponding to Algorithm 3.1, for computing the solution of the EP (1.5):

    Algorithm 3.6. Initialization. Choose any initial values $u_0,u_1\in H_1$.

    Step 1. Compute

    $x_n=u_n+\alpha_n(u_n-u_{n-1})$ and $v_n=P_C(x_n).$

    Step 2. Take $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ and define $\tau_n=\beta_n/\gamma_n$ with $\gamma_n=\max\{\rho_n,\|\eta_n\|\}$. Compute

    $w_n=P_C(v_n-\tau_n\eta_n).$

    Step 3. Compute

    $u_{n+1}=a_nu_n+(1-a_n)w_n,$

    where $\{\alpha_n\},\{\rho_n\},\{a_n\},\{\beta_n\},\{\lambda_n\}$ are chosen as in Algorithm 3.1.

    In order to obtain our main results, we also need the following lemmas.

    Lemma 3.1. [27] For any $n\ge 1$, the following inequalities hold:

    (1) $\tau_n\|\eta_n\|\le\beta_n$;  (2) $\|w_n-v_n\|\le\beta_n$.
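    Lemma 3.1(1) is immediate from the definition $\tau_n=\beta_n/\max\{\rho_n,\|\eta_n\|\}$, since the denominator dominates $\|\eta_n\|$; a quick randomized check with arbitrary values:

    ```python
    import numpy as np

    # tau_n * ||eta_n|| = beta_n * ||eta_n|| / max(rho_n, ||eta_n||) <= beta_n
    rng = np.random.default_rng(7)
    ok = True
    for _ in range(1000):
        beta, rho = rng.uniform(0.01, 1.0, 2)
        eta = rng.standard_normal(3) * rng.uniform(0.0, 10.0)
        gamma = max(rho, np.linalg.norm(eta))
        tau = beta / gamma
        ok = ok and (tau * np.linalg.norm(eta) <= beta + 1e-12)
    ```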

    Lemma 3.2. [18] The equality (3.5) holds if and only if $x_n$ is a solution of $\mathrm{GSVIP2}(B_i,D_j)$.

    Theorem 3.1. Suppose Assumptions (A1)–(A4) hold. Then the sequence $\{u_n\}$ generated by Algorithm 3.1 strongly converges to a solution of Problem (1.7).

    Proof. We divide the proof into several steps.

    Step 1. The sequence $\{\|u_n-u\|^2\}$ is convergent for all $u\in\Gamma$; hence the sequence $\{u_n\}$ is bounded.

    Indeed, for $u\in\Gamma$, we have

    $\langle x_n+z_n-y_n,x_n-u\rangle\ge\|x_n-y_n\|^2+\|(I-J^{D_{j_n}}_{r})Ax_n\|^2.$ (3.9)

    From (3.9), writing $S_n:=\|x_n-y_n\|^2+\|(I-J^{D_{j_n}}_{r})Ax_n\|^2$, we get

    $\begin{aligned}\|v_n-u\|^2&\le\|x_n-\xi_n(x_n+z_n-y_n)-u\|^2\\&=\|x_n-u\|^2-2\xi_n\langle x_n-u,x_n+z_n-y_n\rangle+\xi_n^2\|x_n+z_n-y_n\|^2\\&\le\|x_n-u\|^2-2\xi_nS_n+\xi_n^2\|x_n+z_n-y_n\|^2\\&=\|x_n-u\|^2-\delta_n\frac{S_n^2}{\|x_n-y_n+z_n\|^2}+\frac{\delta_n^2}{4}\frac{S_n^2}{\|x_n-y_n+z_n\|^2}\\&\le\|x_n-u\|^2-\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}.\end{aligned}$ (3.10)

    From $0<\liminf_{n\to\infty}\delta_n\le\limsup_{n\to\infty}\delta_n<4$ we have $1-\frac{\delta_n}{4}>0$ and

    $\|v_n-u\|^2\le\|x_n-u\|^2.$ (3.11)

    From the definition of $x_n$,

    $\begin{aligned}\|x_n-u\|^2&=\|u_n+\alpha_n(u_n-u_{n-1})-u\|^2\\&=\|u_n-u\|^2+2\alpha_n\langle u_n-u,u_n-u_{n-1}\rangle+\alpha_n^2\|u_n-u_{n-1}\|^2\\&=\|u_n-u\|^2+\alpha_n\big(\|u_n-u\|^2+\|u_n-u_{n-1}\|^2-\|u_{n-1}-u\|^2\big)+\alpha_n^2\|u_n-u_{n-1}\|^2\\&=\|u_n-u\|^2+\alpha_n\big(\|u_n-u\|^2-\|u_{n-1}-u\|^2\big)+\alpha_n(1+\alpha_n)\|u_n-u_{n-1}\|^2\\&\le\|u_n-u\|^2+\alpha_n\big(\|u_n-u\|^2-\|u_{n-1}-u\|^2\big)+2\alpha_n\|u_n-u_{n-1}\|^2\\&\le\|u_n-u\|^2+\alpha_n\big(\|u_n-u\|+\|u_{n-1}-u\|\big)\|u_n-u_{n-1}\|+2\alpha_n\|u_n-u_{n-1}\|^2\\&=\|u_n-u\|^2+\alpha_n\big(\|u_n-u\|+\|u_{n-1}-u\|+2\|u_n-u_{n-1}\|\big)\|u_n-u_{n-1}\|\\&=\|u_n-u\|^2+\alpha_nc_1\|u_n-u_{n-1}\|,\end{aligned}$ (3.12)

    where $c_1=\|u_n-u\|+\|u_{n-1}-u\|+2\|u_n-u_{n-1}\|$. By (3.11) and (3.12), we have

    $\|v_n-u\|^2\le\|u_n-u\|^2+\alpha_nc_1\|u_n-u_{n-1}\|.$ (3.13)

    Noting that

    $\|w_n-u\|^2=\|w_n-v_n+v_n-u\|^2\le\|v_n-u\|^2+2\langle v_n-w_n,u-w_n\rangle.$ (3.14)

    By the definition of $w_n$ and the projection property, we have $\langle w_n-v_n+\tau_n\eta_n,u-w_n\rangle\ge 0$, so $\langle\tau_n\eta_n,u-w_n\rangle\ge\langle v_n-w_n,u-w_n\rangle$. From (3.14),

    $\|w_n-u\|^2\le\|v_n-u\|^2+2\langle\tau_n\eta_n,u-w_n\rangle=\|v_n-u\|^2+2\tau_n\langle\eta_n,u-v_n\rangle+2\tau_n\langle\eta_n,v_n-w_n\rangle.$ (3.15)

    It follows from $\eta_n\in\partial^2_{\lambda_n}f(v_n,v_n)$ that $f(v_n,u)-f(v_n,v_n)\ge\langle\eta_n,u-v_n\rangle-\lambda_n$. Then

    $\langle\eta_n,u-v_n\rangle\le f(v_n,u)+\lambda_n.$ (3.16)

    Moreover, by Lemma 3.1, we get

    $\tau_n\langle\eta_n,v_n-w_n\rangle\le\tau_n\|\eta_n\|\|v_n-w_n\|\le\beta_n^2.$ (3.17)

    By (3.15)–(3.17), we have

    $\|w_n-u\|^2\le\|v_n-u\|^2+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2.$ (3.18)

    Combining with (3.11), we get

    $\|w_n-u\|^2\le\|x_n-u\|^2+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2.$ (3.19)

    Since $u\in\Gamma$, we have $u\in \mathrm{EP}(f)$, and since $f$ is pseudomonotone on $C$ with respect to every solution of $\mathrm{EP}(f)$, we have $f(v_n,u)\le 0$. By the definition of $u_{n+1}$, we obtain

    $\|u_{n+1}-u\|^2=\|a_nu_n+(1-a_n)w_n-u\|^2\le a_n\|u_n-u\|^2+(1-a_n)\|w_n-u\|^2.$ (3.20)

    By the definition of $y_n$ and (3.12), we get

    $\|y_n-u\|^2=\|J^{B_{i_n}}_{r}x_n-u\|^2\le\|x_n-u\|^2\le\|u_n-u\|^2+\alpha_nc_1\|u_n-u_{n-1}\|.$

    From (3.12), (3.19) and (3.20),

    $\begin{aligned}\|u_{n+1}-u\|^2&\le a_n\|u_n-u\|^2+(1-a_n)\big(\|x_n-u\|^2+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2\big)\\&\le a_n\|u_n-u\|^2+(1-a_n)\big(\|u_n-u\|^2+\alpha_nc_1\|u_n-u_{n-1}\|+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2\big)\\&=\|u_n-u\|^2+(1-a_n)\big[\alpha_nc_1\|u_n-u_{n-1}\|+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2\big]\end{aligned}$ (3.21)
    $\le\|u_n-u\|^2+(1-a_n)\alpha_nc_1\|u_n-u_{n-1}\|+\Lambda_n,$ (3.22)

    where $\Lambda_n=2(1-a_n)(\tau_n\lambda_n+\beta_n^2)$, and in the last step we used $f(v_n,u)\le 0$. Since $\tau_n=\beta_n/\gamma_n$ and $\gamma_n=\max\{\rho_n,\|\eta_n\|\}\ge\rho_n$, we have

    $\sum_{n=1}^{\infty}\tau_n\lambda_n=\sum_{n=1}^{\infty}\frac{\beta_n}{\gamma_n}\lambda_n\le\sum_{n=1}^{\infty}\frac{\beta_n}{\rho_n}\lambda_n<+\infty.$

    Noting $\sum_{n=1}^{\infty}\beta_n^2<+\infty$ and $0<a<a_n<b<1$, we have $\sum_{n=1}^{\infty}\Lambda_n\le 2(1-a)\sum_{n=1}^{\infty}(\tau_n\lambda_n+\beta_n^2)<+\infty$. By (3.1), we have $\alpha_n\|u_n-u_{n-1}\|\le\bar{\alpha}_n\|u_n-u_{n-1}\|\le\varepsilon_n$, and noting $\sum_{n=1}^{\infty}\varepsilon_n<\infty$,

    $\sum_{n=1}^{\infty}\alpha_n\|u_n-u_{n-1}\|<+\infty.$

    From Lemma 2.4 and (3.22), we see that $\{\|u_n-u\|^2\}$ is convergent for all $u\in\Gamma$. Hence $\{u_n\}$ is bounded, and consequently so are the sequences $\{x_n\},\{y_n\},\{v_n\}$ and $\{w_n\}$.

    Step 2. For any $u\in\Gamma$, $\limsup_{n\to\infty}f(v_n,u)=0$. Indeed, from (3.21), we have

    $-2(1-a_n)\tau_nf(v_n,u)\le\|u_n-u\|^2-\|u_{n+1}-u\|^2+(1-a_n)\alpha_nc_1\|u_n-u_{n-1}\|+\Lambda_n.$ (3.23)

    Consequently, $\sum_{n=1}^{\infty}2(1-a_n)\tau_n[-f(v_n,u)]<+\infty$. It follows from Assumption (B2) and the boundedness of $\{v_n\}$ that $\{\eta_n\}$ is bounded. Thus there is a constant $L>\rho$ such that $\|\eta_n\|\le L$ for each $n\ge 1$; then $\frac{\gamma_n}{\rho_n}=\max\big\{1,\frac{\|\eta_n\|}{\rho_n}\big\}\le\frac{L}{\rho}$, so $\tau_n=\frac{\beta_n}{\gamma_n}\ge\frac{\rho}{L}\cdot\frac{\beta_n}{\rho_n}$. Since $u\in\Gamma$, the pseudomonotonicity of $f$ gives $f(v_n,u)\le 0$, and combining with $0<a<a_n<b<1$, we have $\sum_{n=1}^{\infty}(1-b)\frac{\rho}{L}\frac{\beta_n}{\rho_n}[-f(v_n,u)]<+\infty$. Since $\sum_{n=1}^{\infty}\frac{\beta_n}{\rho_n}=+\infty$, it follows that $\limsup_{n\to\infty}f(v_n,u)=0$.

    Step 3. For any $u\in\Gamma$, let $\{v_{n_k}\}$ be a subsequence of $\{v_n\}$ such that $\limsup_{n\to\infty}f(v_n,u)=\lim_{k\to\infty}f(v_{n_k},u)$, and let $v^*$ be a weak cluster point of $\{v_{n_k}\}$; then $v^*\in \mathrm{EP}(f)$. Indeed, assume $v_{n_k}\rightharpoonup v^*$ as $k\to\infty$. Since $f(\cdot,u)$ is upper semicontinuous, by Step 2 we have $f(v^*,u)\ge\limsup_{k\to\infty}f(v_{n_k},u)=0$. Since $u\in\Gamma$ and $f$ is pseudomonotone, we have $f(v^*,u)\le 0$, and so $f(v^*,u)=0$. Again, since $u\in \mathrm{EP}(f)$ we have $f(u,v^*)\ge 0$, while $f(v^*,u)=0\ge 0$ and the pseudomonotonicity of $f$ give $f(u,v^*)\le 0$; hence $f(u,v^*)=0$. Then $f(v^*,u)=f(u,v^*)=0$, and by the paramonotonicity property in (B3) we conclude $v^*\in \mathrm{EP}(f)$.

    Step 4. Every weak cluster point $\bar{u}$ of the sequence $\{u_n\}$ belongs to $\mathrm{GSVIP2}(B_i,D_j)$. Let $\{u_{n_k}\}$ be a subsequence of $\{u_n\}$ such that $u_{n_k}\rightharpoonup\bar{u}$. It is easy to see that $\sum_{n=1}^{\infty}\|x_n-u_n\|=\sum_{n=1}^{\infty}\alpha_n\|u_n-u_{n-1}\|<\infty$, implying that

    $\lim_{n\to\infty}\|x_n-u_n\|=0.$ (3.24)

    Therefore $x_{n_k}\rightharpoonup\bar{u}$, where $\{x_{n_k}\}$ is the corresponding subsequence of $\{x_n\}$. It follows from (3.10), (3.12), (3.18) and (3.20) that

    $\begin{aligned}\|u_{n+1}-u\|^2&\le a_n\|u_n-u\|^2+(1-a_n)\|w_n-u\|^2\\&\le a_n\|u_n-u\|^2+(1-a_n)\big(\|v_n-u\|^2+2\tau_nf(v_n,u)+2\tau_n\lambda_n+2\beta_n^2\big)\\&\le a_n\|u_n-u\|^2+(1-a_n)\|v_n-u\|^2+\Lambda_n\\&\le a_n\|u_n-u\|^2+(1-a_n)\Big(\|x_n-u\|^2-\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}\Big)+\Lambda_n\\&\le\|u_n-u\|^2+(1-a_n)\alpha_nc_1\|u_n-u_{n-1}\|-(1-a_n)\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}+\Lambda_n,\end{aligned}$ (3.25)

    where $S_n=\|x_n-y_n\|^2+\|(I-J^{D_{j_n}}_{r})Ax_n\|^2$ as in (3.10). This implies

    $(1-a_n)\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}\le\|u_n-u\|^2-\|u_{n+1}-u\|^2+(1-a_n)\alpha_nc_1\|u_n-u_{n-1}\|+\Lambda_n.$ (3.26)

    Observe that

    $(1-b)\sum_{n=1}^{\infty}\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}\le\|u_0-u\|^2+(1-a)c_1\sum_{n=1}^{\infty}\alpha_n\|u_n-u_{n-1}\|+\sum_{n=1}^{\infty}\Lambda_n<\infty.$

    Thus

    $\lim_{n\to\infty}\delta_n\Big(1-\frac{\delta_n}{4}\Big)\frac{S_n^2}{2\|x_n-y_n+z_n\|^2}=0.$

    Since $\{x_n+z_n-y_n\}$ is bounded, we obtain $\lim_{n\to\infty}\|x_n-y_n\|=0$ and $\lim_{n\to\infty}\|(I-J^{D_{j_n}}_{r})Ax_n\|=0$. By the choice of $i_n$ and $j_n$ in (3.4), this yields $\lim_{n\to\infty}\|(I-J^{B_i}_{r})x_n\|=0$ for every $i$ and $\lim_{n\to\infty}\|(I-J^{D_j}_{r})Ax_n\|=0$ for every $j$. Note that $J^{B_i}_{r}$ and $J^{D_j}_{r}$ are nonexpansive, so $I-J^{B_i}_{r}$ and $I-J^{D_j}_{r}$ are demiclosed at $0$. Thus, from $x_{n_k}\rightharpoonup\bar{u}$, and by the linearity and boundedness of $A$ also $Ax_{n_k}\rightharpoonup A\bar{u}$, we get $(I-J^{B_i}_{r})\bar{u}=0$ and $(I-J^{D_j}_{r})A\bar{u}=0$ for all $i,j$. That is, $\bar{u}\in\bigcap_{i=1}^{s}B_i^{-1}(0)$ and $A\bar{u}\in\bigcap_{j=1}^{t}D_j^{-1}(0)$, so $\bar{u}\in \mathrm{GSVIP2}(B_i,D_j)$. By Step 1, we may assume $\lim_{n\to\infty}\|u_n-\bar{u}\|=c<+\infty$. From (3.11) and Lemma 3.1(2), we have

    $\|w_n-\bar{u}\|\le\|w_n-v_n\|+\|v_n-\bar{u}\|\le\beta_n+\|x_n-\bar{u}\|=\|u_n+\alpha_n(u_n-u_{n-1})-\bar{u}\|+\beta_n\le\|u_n-\bar{u}\|+\alpha_n\|u_n-u_{n-1}\|+\beta_n.$

    This means that

    $\limsup_{n\to\infty}\|w_n-\bar{u}\|\le\limsup_{n\to\infty}\big(\|u_n-\bar{u}\|+\alpha_n\|u_n-u_{n-1}\|+\beta_n\big)=c.$

    Since $\lim_{n\to\infty}\|a_n(u_n-\bar{u})+(1-a_n)(w_n-\bar{u})\|=\lim_{n\to\infty}\|u_{n+1}-\bar{u}\|=c$, by Lemma 2.5 we have

    $\lim_{n\to\infty}\|w_n-u_n\|=0.$ (3.27)

    From Lemma 3.1(2) and $\sum_{n=1}^{\infty}\beta_n^2<+\infty$ we get $\lim_{n\to\infty}\|v_n-w_n\|=0$, and then $\lim_{n\to\infty}\|v_n-u_n\|=0$. Since $\bar{u}$ is a weak cluster point of the sequence $\{u_n\}$, it is also a weak cluster point of the sequence $\{v_n\}$; thus $\bar{u}\in \mathrm{EP}(f)$, and therefore $\bar{u}\in\Gamma$.

    Step 5. Finally, we show that the sequence $\{u_n\}$ converges strongly to $\bar{u}\in\Gamma$. Indeed, combining (3.27) with the fact that $\bar{u}$ is a weak cluster point of $\{u_n\}$, we see that $\bar{u}$ is also a weak cluster point of $\{w_n\}$. Suppose $w_{n_k}\rightharpoonup\bar{u}$. We get

    $\begin{aligned}\|u_{n_k+1}-P_{\Gamma}(u_{n_k+1})\|^2&\le\|u_{n_k+1}-P_{\Gamma}(u_{n_k})\|^2=\|a_{n_k}u_{n_k}+(1-a_{n_k})w_{n_k}-P_{\Gamma}(u_{n_k})\|^2\\&\le a_{n_k}\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2+(1-a_{n_k})\|w_{n_k}-P_{\Gamma}(u_{n_k})\|^2.\end{aligned}$ (3.28)

    Observe that

    $\begin{aligned}\|w_{n_k}-P_{\Gamma}(u_{n_k})\|^2&=\|w_{n_k}-u_{n_k}+u_{n_k}-P_{\Gamma}(u_{n_k})\|^2\\&=\|w_{n_k}-u_{n_k}\|^2-\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2-2\langle w_{n_k}-P_{\Gamma}(u_{n_k}),P_{\Gamma}(u_{n_k})-u_{n_k}\rangle.\end{aligned}$ (3.29)

    By (3.28) and (3.29), we have

    $\begin{aligned}\|u_{n_k+1}-P_{\Gamma}(u_{n_k+1})\|^2&\le a_{n_k}\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2+(1-a_{n_k})\big(\|w_{n_k}-u_{n_k}\|^2-\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2\\&\quad-2\langle w_{n_k}-P_{\Gamma}(u_{n_k}),P_{\Gamma}(u_{n_k})-u_{n_k}\rangle\big)\\&=(2a_{n_k}-1)\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2+(1-a_{n_k})\|w_{n_k}-u_{n_k}\|^2\\&\quad-2(1-a_{n_k})\langle w_{n_k}-\bar{u},P_{\Gamma}(u_{n_k})-u_{n_k}\rangle-2(1-a_{n_k})\langle\bar{u}-P_{\Gamma}(u_{n_k}),P_{\Gamma}(u_{n_k})-u_{n_k}\rangle.\end{aligned}$ (3.30)

    Since $\bar{u}\in\Gamma$, we have $\langle\bar{u}-P_{\Gamma}(u_{n_k}),P_{\Gamma}(u_{n_k})-u_{n_k}\rangle\ge 0$. Also, the sequence $\{u_{n_k}\}$ is bounded, and hence so is $\{u_{n_k}-P_{\Gamma}(u_{n_k})\}$. It follows from $\lim_{k\to\infty}\|w_{n_k}-u_{n_k}\|=0$, $\lim_{n\to\infty}a_n=\tfrac12$ and (3.30) that

    $\lim_{k\to\infty}\|u_{n_k+1}-P_{\Gamma}(u_{n_k+1})\|=0.$ (3.31)

    Next, we show that $\{P_{\Gamma}(u_{n_k})\}$ is a Cauchy sequence. Indeed, for any $m>k$, we obtain

    $\begin{aligned}\|P_{\Gamma}(u_{n_m})-P_{\Gamma}(u_{n_k})\|^2&=\|P_{\Gamma}(u_{n_m})-u_{n_m}+u_{n_m}-P_{\Gamma}(u_{n_k})\|^2\\&=4\Big\|\tfrac12\big(P_{\Gamma}(u_{n_m})-u_{n_m}\big)+\tfrac12\big(u_{n_m}-P_{\Gamma}(u_{n_k})\big)\Big\|^2\\&=2\|P_{\Gamma}(u_{n_m})-u_{n_m}\|^2+2\|u_{n_m}-P_{\Gamma}(u_{n_k})\|^2-4\Big\|u_{n_m}-\tfrac12\big(P_{\Gamma}(u_{n_m})+P_{\Gamma}(u_{n_k})\big)\Big\|^2\\&\le 2\|P_{\Gamma}(u_{n_m})-u_{n_m}\|^2+2\|u_{n_m}-P_{\Gamma}(u_{n_k})\|^2-4\|u_{n_m}-P_{\Gamma}(u_{n_m})\|^2\\&=2\|P_{\Gamma}(u_{n_k})-u_{n_m}\|^2-2\|u_{n_m}-P_{\Gamma}(u_{n_m})\|^2.\end{aligned}$ (3.32)

    Setting $u=P_{\Gamma}(u_{n_k})$ in (3.22), we have

    $\begin{aligned}\|u_{n_m}-P_{\Gamma}(u_{n_k})\|^2&\le\|u_{n_m-1}-P_{\Gamma}(u_{n_k})\|^2+(1-a_{n_m-1})\alpha_{n_m-1}c_1\|u_{n_m-1}-u_{n_m-2}\|+\Lambda_{n_m-1}\\&\le\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2+\sum_{i=n_k}^{n_m-1}(1-a_i)\alpha_ic_1\|u_i-u_{i-1}\|+\sum_{i=n_k}^{n_m-1}\Lambda_i.\end{aligned}$ (3.33)

    From (3.32) and (3.33),

    $\|P_{\Gamma}(u_{n_m})-P_{\Gamma}(u_{n_k})\|^2\le 2\|u_{n_k}-P_{\Gamma}(u_{n_k})\|^2+2\sum_{i=n_k}^{n_m-1}(1-a_i)\alpha_ic_1\|u_i-u_{i-1}\|+2\sum_{i=n_k}^{n_m-1}\Lambda_i-2\|u_{n_m}-P_{\Gamma}(u_{n_m})\|^2.$

    It follows from (3.31) and the facts that $\lim_{k\to\infty}\sum_{i=n_k}^{n_m-1}\Lambda_i=0$ and $\lim_{k\to\infty}\sum_{i=n_k}^{n_m-1}(1-a_i)\alpha_ic_1\|u_i-u_{i-1}\|=0$ that $\{P_{\Gamma}(u_{n_k})\}$ is a Cauchy sequence. Hence $\{P_{\Gamma}(u_{n_k})\}$ strongly converges to some $u^*\in\Gamma$. Noting $\lim_{k\to\infty}\|u_{n_k+1}-P_{\Gamma}(u_{n_k+1})\|=0$, we see that $\{u_{n_k}\}$ also strongly converges to $u^*=\bar{u}\in\Gamma$. Thus $\lim_{n\to\infty}u_n=\bar{u}$, which completes the proof.

    As consequences of Theorem 3.1, with suitable choices of $B_i,D_j$ $(i\in\{1,2,\ldots,s\},\ j\in\{1,2,\ldots,t\})$ and $f$, we derive several interesting corollaries as follows.

    Corollary 3.1. Suppose Assumptions (A1)–(A4) hold and let $s=t=1$ in (A2). Then the sequence $\{u_n\}$ generated by Algorithm 3.3 strongly converges to a solution of Problem (1.8).

    Corollary 3.2. Suppose Assumptions (A1)–(A4) hold. Then the sequence $\{u_n\}$ generated by Algorithm 3.4 strongly converges to a solution of Problem (1.9).

    Corollary 3.3. Suppose Assumptions (A1)–(A4) hold. Then the sequence $\{u_n\}$ generated by Algorithm 3.5 strongly converges to a solution of Problem (1.4).

    Corollary 3.4. Suppose Assumptions (A1)–(A4) hold. Then the sequence $\{u_n\}$ generated by Algorithm 3.6 strongly converges to a solution of Problem (1.5).

    Remark 3.1. (ⅰ) If $\alpha_n=0$ in Algorithm 3.1, then Algorithm 3.1 reduces to the self-adaptive viscosity-type Algorithm 3.2 for solving Problem (1.7).

    (ⅱ) If $s=t=1$, then Algorithm 3.1 reduces to Algorithm 3.3 for solving Problem (1.8).

    (ⅲ) If $B_i=N_{C_i}$ $(i=1,2,\ldots,s)$ and $D_j=N_{Q_j}$ $(j=1,2,\ldots,t)$ in Problem (1.7), then Algorithm 3.1 reduces to Algorithm 3.4 for solving Problem (1.9).

    (ⅳ) If $f=0$ in Problem (1.7), then Problem (1.7) reduces to Problem (1.4) studied by Ogbuisi et al. in [18], and Algorithm 3.1 reduces to Algorithm 3.5 for solving Problem (1.4). Hence our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [18].

    (ⅴ) If $B_i=0$ and $D_j=0$ in Problem (1.7), then Problem (1.7) reduces to Problem (1.5) studied by Santos and Scheimberg in [27], and Algorithm 3.1 reduces to Algorithm 3.6 for solving Problem (1.5). Hence our Algorithm 3.1 and Theorem 3.1 generalize the corresponding results in [27].

    Finally, we give two examples to illustrate the validity of the considered common solution Problem (1.7). In both examples, we take $s=t=1$ in Problem (1.7).

    Example 3.1. Let $H_1=H_2=\mathbb{R}^2$ and $C=\{u\in\mathbb{R}^2\mid -10e_1\le u\le 10e_1\}$, where $e_1=(1,1)$. We define the operators $B:H_1\to 2^{H_1}$, $D:H_2\to 2^{H_2}$ and $A:H_1\to H_2$ by

    $B\begin{bmatrix}u\\v\end{bmatrix}=\begin{bmatrix}5&0\\0&2\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix},\quad D\begin{bmatrix}u\\v\end{bmatrix}=\begin{bmatrix}3&0\\0&6\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix},\quad A\begin{bmatrix}u\\v\end{bmatrix}=\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix},$

    respectively. Define the bifunction $f(u,v)=u_1^5(v_1-u_1)+u_2^3(v_2-u_2)$ for $u,v\in C$. Let us observe that (A1)–(A4) hold and $A$ is a bounded linear mapping. In addition, $u^*=(0,0)$ is the unique solution of $\mathrm{SVIP}(B,D)$. Furthermore, $\mathrm{EP}(f)$ has the unique solution $u^*=(0,0)$: we have $f(v,u^*)=-v_1^6-v_2^4\le 0$ for all $v\in C$, and $f(u^*,\bar{u})=0=f(\bar{u},u^*)=-\bar{u}_1^6-\bar{u}_2^4$ implies $\bar{u}=(0,0)\in \mathrm{EP}(f)$. Hence $\Gamma=\mathrm{SVIP}(B,D)\cap \mathrm{EP}(f)=\{(0,0)\}$.
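    The claims of Example 3.1 can be verified numerically by a direct check, sampling $C$ at random points:

    ```python
    import numpy as np

    B = np.diag([5.0, 2.0]); D = np.diag([3.0, 6.0])
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    u_star = np.zeros(2)

    # 0 in B(u*) and 0 in D(Au*): u* = (0,0) solves the split inclusion
    inc1 = np.linalg.norm(B @ u_star)
    inc2 = np.linalg.norm(D @ (A @ u_star))

    def f(u, v):   # f(u,v) = u1^5 (v1 - u1) + u2^3 (v2 - u2)
        return u[0] ** 5 * (v[0] - u[0]) + u[1] ** 3 * (v[1] - u[1])

    # f(v, u*) = -v1^6 - v2^4 <= 0 on sampled points of C = [-10,10]^2
    rng = np.random.default_rng(1)
    vs = rng.uniform(-10.0, 10.0, size=(100, 2))
    ep_ok = all(f(v, u_star) <= 0.0 for v in vs)
    ```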

    Example 3.2. Let $H_1=\mathbb{R}^2$, $H_2=\mathbb{R}^3$ and $C=\{u\in\mathbb{R}^2_{+}\mid u_1+u_2=1\}\subset H_1$. We define $B_1:\mathbb{R}^2\to\mathbb{R}^2$ and $B_2:\mathbb{R}^3\to\mathbb{R}^3$ by

    $B_1\begin{bmatrix}u\\v\end{bmatrix}=\begin{bmatrix}2&-4\\-4&2\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix}+\begin{bmatrix}1\\1\end{bmatrix},\quad B_2\begin{bmatrix}u\\v\\w\end{bmatrix}=\begin{bmatrix}2&0&0\\0&2&0\\0&0&2\end{bmatrix}\begin{bmatrix}u\\v\\w\end{bmatrix}+\begin{bmatrix}2\\2\\2\end{bmatrix},$

    respectively. Let $A=\begin{bmatrix}2&-4\\-4&2\\2&-4\end{bmatrix}$. We consider the equilibrium problem with the bifunction $f(u,v)=2|v_1|-2|u_1|+2v_2^2-2u_2^2$ for $u,v\in C$. Assumptions (A1)–(A4) hold, the optimal point of $\mathrm{EP}(f)$ is $u^*=(\tfrac12,\tfrac12)$, and the partial subdifferential of $f$ is given by

    $\partial_2f(u,u)=\begin{cases}(2,4u_2), & \text{if } u_1>0,\\ ([-2,2],4u_2), & \text{if } u_1=0,\\ (-2,4u_2), & \text{if } u_1<0.\end{cases}$

    Furthermore, we aim to find $u=(u_1,u_2)\in\mathbb{R}^2$ such that $B_1(u)=(0,0)$ and $B_2(Au)=(0,0,0)$. One easily checks that $x^*=(\tfrac12,\tfrac12)\in \mathrm{SVIP}(B_1,B_2)$. Hence $\Gamma=\mathrm{SVIP}(B_1,B_2)\cap \mathrm{EP}(f)=\{(\tfrac12,\tfrac12)\}$.

    In this paper, a new inertial-type algorithm is introduced to approximate common solutions of the generalized split variational inclusion problem and the paramonotone equilibrium problem in real Hilbert spaces. The design of the algorithm is self-adaptive, the inertial term can speed up its convergence, and the strong convergence analysis does not require a prior estimate of the norm of the bounded operator. Under generalized monotonicity assumptions on the related mappings, we prove the strong convergence of our iterative algorithms. The results presented here improve and generalize many known results in [18,27].

    It should be noted that the way of choosing the inertial parameter $\alpha_k$ in our Algorithm 3.1 is known as the on-line rule. As future work, following the method in [47], we will consider strong convergence of the proposed algorithm under conditions on the iterative parameters that do not require the on-line rule assumption.
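    To make the on-line rule concrete, the following schematic (not the paper's Algorithm 3.1) shows a generic inertial fixed-point iteration in which $\alpha_k$ is computed from the current iterates, damped by a summable sequence. The map $T$, the bound $\bar\alpha=0.3$, and the choice $\varepsilon_k=1/k^2$ are illustrative assumptions:

    ```python
    import numpy as np

    def inertial_iteration(T, x0, alpha_bar=0.3, iters=200):
        """Generic inertial fixed-point iteration with an on-line inertial rule."""
        x_prev, x = x0.copy(), x0.copy()
        for k in range(1, iters + 1):
            diff = np.linalg.norm(x - x_prev)
            # On-line rule: alpha_k depends on the current iterates, capped by
            # alpha_bar and damped by the summable sequence eps_k = 1/k^2.
            alpha = min(alpha_bar, (1.0 / k**2) / diff) if diff > 0 else alpha_bar
            w = x + alpha * (x - x_prev)   # inertial extrapolation step
            x_prev, x = x, T(w)            # fixed-point step applied to w
        return x

    # Illustration: T is a contraction with unique fixed point (1, 1).
    T = lambda z: 0.5 * (z + np.ones(2))
    x = inertial_iteration(T, np.array([5.0, -3.0]))
    ```

    The extrapolated point $w_k = x_k + \alpha_k(x_k - x_{k-1})$ is what makes the method "inertial"; capping $\alpha_k$ by a summable quantity keeps the perturbation small enough that convergence is preserved.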

    Yali Zhao: Supervision, Conceptualization, Writing – review & editing; Qixin Dong: Writing – review & editing, Project administration; Xiaoqing Huang: Writing – original draft, Formal analysis. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the Liaoning Provincial Department of Education under project No. LJKMZ20221491.

    The authors would like to thank the reviewers for their valuable comments and suggestions, which have helped to improve the quality of this paper.

    The authors declare that they have no competing interests.



    [1] F. E. Browder, Nonexpansive nonlinear operators in a Banach space, Proc. Nat. Acad. Sci. USA, 54 (1965), 1041–1044. https://doi.org/10.1073/pnas.54.4.1041
    [2] D. Göhde, Zum Prinzip der kontraktiven Abbildung, Math. Nachr., 30 (1965), 251–258. https://doi.org/10.1002/mana.19650300312
    [3] S. Reich, Approximate selections, best approximations, fixed points, and invariant sets, J. Math. Anal. Appl., 62 (1978), 104–113. https://doi.org/10.1016/0022-247X(78)90222-6
    [4] K. Goebel, W. A. Kirk, Topics in metric fixed point theory, Cambridge Studies in Advanced Mathematics, 28, Cambridge: Cambridge University Press, 1990.
    [5] W. A. Kirk, A fixed point theorem for mappings which do not increase distances, Am. Math. Mon., 72 (1965), 1004–1006. https://doi.org/10.2307/2313345
    [6] W. R. Mann, Mean value methods in iteration, Proc. Am. Math. Soc., 4 (1953), 506–510. https://doi.org/10.2307/2032162
    [7] S. Ishikawa, Fixed points by a new iteration method, Proc. Am. Math. Soc., 44 (1974), 147–150. https://doi.org/10.2307/2039245
    [8] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. https://doi.org/10.1006/jmaa.2000.7042
    [9] I. Yildirim, M. Özdemir, A new iterative process for common fixed points of finite families of non-self-asymptotically non-expansive mappings, Nonlinear Anal.-Theor., 71 (2009), 991–999. https://doi.org/10.1016/j.na.2008.11.017
    [10] P. Sainuan, Rate of convergence of P-iteration and S-iteration for continuous functions on closed intervals, Thai J. Math., 13 (2015), 449–457.
    [11] J. Daengsaen, A. Khemphet, On the rate of convergence of P-iteration, SP-iteration, and D-iteration methods for continuous nondecreasing functions on closed intervals, Abstr. Appl. Anal., 2018 (2018), 7345401. https://doi.org/10.1155/2018/7345401
    [12] B. T. Polyak, Introduction to optimization, New York: Optimization Software, 1987.
    [13] B. T. Polyak, Some methods of speeding up the convergence of iterative methods, Zh. Vychisl. Mat. Mat. Fiz., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5
    [14] W. Chaolamjiak, D. Yambangwai, H. A. Hammad, Modified hybrid projection methods with SP iterations for quasi-nonexpansive multivalued mappings in Hilbert spaces, B. Iran. Math. Soc., 47 (2021), 1399–1422. https://doi.org/10.1007/s41980-020-00448-9
    [15] P. Cholamjiak, W. Cholamjiak, Fixed point theorems for hybrid multivalued mappings in Hilbert spaces, J. Fixed Point Theory Appl., 18 (2016), 673–688. https://doi.org/10.1007/s11784-016-0302-3
    [16] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 311 (2005), 506–517.
    [17] C. Martinez-Yanes, H. K. Xu, Strong convergence of the CQ method for fixed point iteration processes, Nonlinear Anal., 64 (2006), 2400–2411. https://doi.org/10.1016/j.na.2005.08.018
    [18] K. Nakajo, W. Takahashi, Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups, J. Math. Anal. Appl., 279 (2003), 372–379. https://doi.org/10.1016/S0022-247X(02)00458-4
    [19] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. https://doi.org/10.1023/A:1011253113155
    [20] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, Monographs and Textbooks in Pure and Applied Mathematics, 83, New York: Marcel Dekker Inc., 1984.
    [21] W. Takahashi, Fixed point theorems for new nonlinear mappings in a Hilbert space, J. Nonlinear Convex Anal., 11 (2010), 79–88.
    [22] M. Stošić, J. Xavier, M. Dodig, Projection on the intersection of convex sets, Linear Algebra Appl., 509 (2016), 191–205. https://doi.org/10.1016/j.laa.2016.07.023
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)