Research article

A general hybrid relaxed CQ algorithm for solving the fixed-point problem and split-feasibility problem

Abstract. In this paper, we introduce a new hybrid relaxed iterative algorithm with two half-spaces to solve the fixed-point problem and split-feasibility problem involving demicontractive mappings. The strong convergence of the iterative sequence produced by our algorithm is proved under certain weak conditions. We give several numerical experiments to demonstrate the efficiency of the proposed iterative method in comparison with previous algorithms.

    Citation: Yuanheng Wang, Bin Huang, Bingnan Jiang, Tiantian Xu, Ke Wang. A general hybrid relaxed CQ algorithm for solving the fixed-point problem and split-feasibility problem[J]. AIMS Mathematics, 2023, 8(10): 24310-24330. doi: 10.3934/math.20231239




The split feasibility problem (SFP) in Euclidean spaces was introduced by Censor and Elfving [1] in 1994. It is formulated as finding a point $x$ such that

$$x \in C, \quad Ax \in Q, \qquad (1.1)$$

where $C \subseteq H_1$ and $Q \subseteq H_2$ are nonempty closed convex sets, and $A$ is a bounded linear operator from a Hilbert space $H_1$ to a Hilbert space $H_2$ with $A \neq 0$. Denote the set of solutions of (1.1) by $\Delta$. The SFP plays an important role in the study of medical image reconstruction, signal processing, etc. Many scholars have taken the SFP and its generalizations, such as the multiple-set SFP and the split common fixed-point problem, as their research direction (see [2,3,4,5,6]).

It is known that Byrne [7,8] proposed the famous CQ algorithm (CQA) for solving problem (1.1). The proposed method is given as follows:

$$x_{n+1} = P_C\big(x_n - \tau_n A^*(I - P_Q)Ax_n\big), \quad n \ge 1, \qquad (1.2)$$

where $\tau_n \in \big(0, \tfrac{2}{\|A\|^2}\big)$ is the step size, $A^*$ is the adjoint operator of $A$, $P_C$ is the projection onto $C$ and $P_Q$ is the projection onto $Q$. Regrettably, there are two drawbacks to applying (1.2). On one hand, estimating the operator norm can often be challenging. On the other hand, computing the projections onto $C$ and $Q$ can be very difficult.
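For intuition, one CQ iteration in finite dimensions can be sketched as follows (a minimal illustration; `proj_C` and `proj_Q` are our names for whatever projection routines are available, not part of the original paper):

```python
import numpy as np

def cq_step(x, A, proj_C, proj_Q, tau):
    """One iteration of the CQ algorithm: x <- P_C(x - tau * A^T (I - P_Q) A x)."""
    Ax = A @ x
    grad = A.T @ (Ax - proj_Q(Ax))   # A^*(I - P_Q)Ax, the gradient of f(x) = 0.5*||(I - P_Q)Ax||^2
    return proj_C(x - tau * grad)
```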

To overcome the first drawback, López et al. [9] proposed a novel approach for selecting the step size $\tau_n$, which is defined as follows:

$$\tau_n = \frac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2},$$

where $f(x_n) := \frac{1}{2}\|(I - P_Q)Ax_n\|^2$, $\nabla f(x_n) = A^*(I - P_Q)Ax_n$, $\rho_n \in (0,4)$ and $\inf_{n\in\mathbb{N}}\rho_n(4-\rho_n) > 0$.
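A hedged sketch of this self-adaptive step size, reusing the names from the snippet above (the guard for a vanishing gradient is our addition; the convergence analysis treats that case separately):

```python
def adaptive_tau(x, A, proj_Q, rho):
    """Step size of Lopez et al.: tau = rho * f(x) / ||grad f(x)||^2."""
    Ax = A @ x
    r = Ax - proj_Q(Ax)              # (I - P_Q)Ax
    grad = A.T @ r                   # grad f(x)
    g2 = float(np.dot(grad, grad))
    return rho * 0.5 * float(np.dot(r, r)) / g2 if g2 > 0 else 0.0
```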

To overcome the second drawback, Fukushima [10] introduced the level sets $C$ and $Q$:

$$C = \{x \in H_1 : \phi(x) \le 0\}, \qquad Q = \{y \in H_2 : \varphi(y) \le 0\}, \qquad (1.3)$$

where $\phi: H_1 \to \mathbb{R}$ and $\varphi: H_2 \to \mathbb{R}$ are convex, subdifferentiable and weakly lower semi-continuous functions. Motivated by Fukushima's method, Yang [11] introduced a new relaxed CQA by replacing $P_C$ with $P_{C_n}$ and $P_Q$ with $P_{Q_n}$. The algorithm has the form

$$x_{n+1} = P_{C_n}\big(x_n - \alpha_n A^*(I - P_{Q_n})Ax_n\big), \quad n \ge 1,$$

where $\alpha_n \in \big(0, \tfrac{2}{\|A^*A\|}\big)$, $A^*$ is the adjoint operator of $A$, and $C_n$ and $Q_n$ are defined as follows:

$$C_n = \{x \in H_1 : \phi(x_n) \le \langle\vartheta_n, x_n - x\rangle\}, \quad \vartheta_n \in \partial\phi(x_n),$$

and

$$Q_n = \{y \in H_2 : \varphi(Ax_n) \le \langle\chi_n, Ax_n - y\rangle\}, \quad \chi_n \in \partial\varphi(Ax_n),$$

where $\partial\phi$ and $\partial\varphi$ are bounded operators.
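The practical gain is that $C_n$ and $Q_n$ are half-spaces, so their projections have a closed form: for $H = \{x : \langle a, x\rangle \le b\}$ one has $P_H(u) = u - \frac{\max\{0, \langle a, u\rangle - b\}}{\|a\|^2}\,a$ (and $P_H = I$ when $a = 0$). A small sketch (the helper name is ours):

```python
def proj_halfspace(u, a, b):
    """Project u onto the half-space {x : <a, x> <= b}; closed form, no inner solver needed."""
    a2 = float(np.dot(a, a))
    if a2 == 0.0:
        return u
    viol = float(np.dot(a, u)) - b
    return u if viol <= 0.0 else u - (viol / a2) * a
```

For $C_n$ above, this is used with $a = \vartheta_n$ and $b = \langle\vartheta_n, x_n\rangle - \phi(x_n)$, and analogously for $Q_n$.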

Qin and Wang [12] introduced (1.4) to find a common solution of the SFP (1.1) and a fixed-point problem in 2019. Their algorithm has the form

$$\begin{cases} y_n = P_C\big((1-\delta_n)\big(x_n - \tau_n A^*(I - P_Q)Ax_n\big) + \delta_n Sx_n\big),\\ x_{n+1} = \alpha_n g(x_n) + \beta_n x_n + \gamma_n y_n, \quad n \ge 1, \end{cases} \qquad (1.4)$$

where $g$ is a $k$-contraction from $C$ onto $C$, $S$ is nonexpansive from $C$ onto $C$ and $\mathrm{Fix}(S) := \{x \in H : Sx = x\}$. The sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\delta_n\}$ and $\{\tau_n\}$ are included in $(0,1)$ and satisfy the following:

(i) $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\lim_{n\to\infty}\alpha_n = 0$;

(ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(iii) $\alpha_n + \beta_n + \gamma_n = 1$;

(iv) $\lim_{n\to\infty}|\tau_n - \tau_{n+1}| = 0$, $0 < \liminf_{n\to\infty}\tau_n \le \limsup_{n\to\infty}\tau_n < \frac{2}{\|A\|^2}$;

(v) $\lim_{n\to\infty}|\delta_n - \delta_{n+1}| = 0$, $0 < \liminf_{n\to\infty}\delta_n \le \limsup_{n\to\infty}\delta_n < 1$.

Then, there is a point $q \in \Delta\cap\mathrm{Fix}(S)$ with $x_n \to q$. Furthermore, it satisfies (1.5):

$$\langle z - q, g(q) - q\rangle \le 0, \quad \forall z \in \mathrm{Fix}(S)\cap\Delta. \qquad (1.5)$$

Motivated by López et al. [9], Wang et al. [13] improved the selection of the step size in (1.4). In order to accelerate the convergence rate, they introduced (1.6) to solve the SFP (1.1) and the fixed-point problem. Given $x_0, x_1 \in H$, their algorithm has the form

$$\mu_n = \begin{cases}\min\Big\{\mu, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_{n-1} \neq x_n,\\ \mu, & \text{otherwise},\end{cases}$$

$$\begin{cases} w_n = x_n + \mu_n(x_n - x_{n-1}),\\ y_n = P_C\big((1-\delta_n)\big(w_n - \tau_n A^*(I - P_Q)Aw_n\big) + \delta_n Sw_n\big),\\ x_{n+1} = \alpha_n g(x_n) + \beta_n w_n + \gamma_n y_n, \quad n \ge 1, \end{cases} \qquad (1.6)$$

where $\mu \ge 0$, $\tau_n = \frac{\rho_n f(w_n)}{\|\nabla f(w_n)\|^2}$, $0 < \rho_n < 4$, $f(w_n) := \frac{1}{2}\|(I - P_Q)Aw_n\|^2$ and $\nabla f(w_n) = A^*(I - P_Q)Aw_n$. Assume that $S: H_1 \to H_1$ is quasi-nonexpansive, $I - S$ is demiclosed at zero and $g: H_1 \to H_1$ is a $k$-contraction. The sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ and $\{\delta_n\}$ are included in $[0,1]$ and satisfy the following conditions:

(i) $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\lim_{n\to\infty}\alpha_n = 0$;

(ii) $\limsup_{n\to\infty}\beta_n < 1$;

(iii) $\alpha_n + \beta_n + \gamma_n = 1$;

(iv) $0 < \liminf_{n\to\infty}\delta_n \le \limsup_{n\to\infty}\delta_n < 1$;

(v) $\varepsilon_n > 0$, $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = 0$.

Then, there is a point $q \in \mathrm{Fix}(S)\cap\Delta$ with $x_n \to q$. Furthermore, it satisfies (1.7):

$$\langle z - q, g(q) - q\rangle \le 0, \quad \forall z \in \mathrm{Fix}(S)\cap\Delta. \qquad (1.7)$$

In addition, Tian [14] introduced the iterative scheme

$$x_{n+1} = \alpha_n\lambda g(x_n) + (I - \iota\alpha_n B)Sx_n, \quad n \in \mathbb{N}.$$

Suppose that $\mathrm{Fix}(S) \neq \emptyset$, where $\mathrm{Fix}(S) := \{x \in H : Sx = x\}$. Then, there is a point $q \in \mathrm{Fix}(S)$ with $x_n \to q$. Furthermore, it satisfies the following:

$$\langle(\iota B - \lambda g)q, z - q\rangle \ge 0, \quad \forall z \in \mathrm{Fix}(S),$$

where $B$ is $\bar{\lambda}$-strongly monotone and $M$-Lipschitz continuous from $H$ onto $H$ with $\bar{\lambda}, M > 0$, $S$ is nonexpansive from $H$ onto $H$ and $g$ is a $k$-contraction from $H$ onto $H$ with $0 < k < 1$. Let $\iota, \lambda \in \mathbb{R}$ and $\{\alpha_n\} \subset (0,1)$ satisfy the following:

(i) $0 < \iota < \frac{2\bar{\lambda}}{M^2}$;

(ii) $0 < \lambda < \frac{\iota\big(\bar{\lambda} - \frac{M^2\iota}{2}\big)}{k}$;

(iii) $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$, $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\lim_{n\to\infty}\alpha_n = 0$.

Motivated by the algorithms developed by Qin and Wang [12], Wang et al. [13], Tian [14] and Kwari et al. [15], we propose a new half-space relaxation projection method for solving the SFP and the fixed-point problem of demicontractive mappings by expanding the scope of the operator $S$, modifying the iteration format and optimizing the selection of step sizes. Our result extends and improves many recent related results of other authors, in particular in the following ways: (1) We consider and study a new modified half-space hybrid method that solves the fixed-point problem and the SFP of nonlinear operators at the same time, so our method can be applied more widely. (2) We extend the nonexpansive mapping and the quasi-nonexpansive mapping to the demicontractive mapping, which broadens the scope of the study. (3) An inertial term is added to further accelerate the convergence rate. (4) The selection of the step size is a multistep, self-adaptive process, so it no longer depends on the operator norm. (5) The projection onto a half-space greatly facilitates the calculation, and, under some weaker assumptions, the iterative sequence generated by our new algorithm converges strongly to a common solution of the two problems. Finally, we present some numerical experiments to show the effectiveness and feasibility of our new iterative method.

In this paper, let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. We use the symbol $\rightharpoonup$ to indicate weak convergence and $\to$ to indicate strong convergence.

Definition 2.1. A self-mapping $P$ on $H$ is said to be

(i) nonexpansive if

$$\|Px - Py\| \le \|x - y\|, \quad \forall x, y \in H;$$

(ii) quasi-nonexpansive if $\mathrm{Fix}(P) \neq \emptyset$ and

$$\|Px - y\| \le \|x - y\|, \quad \forall x \in H,\ y \in \mathrm{Fix}(P);$$

(iii) $\alpha$-demicontractive ($0 \le \alpha < 1$) if $\mathrm{Fix}(P) \neq \emptyset$ and

$$\|Px - y\|^2 \le \alpha\|(I - P)x\|^2 + \|x - y\|^2, \quad \forall x \in H,\ y \in \mathrm{Fix}(P);$$

(iv) firmly nonexpansive if

$$\|Px - Py\|^2 \le \langle Px - Py, x - y\rangle, \quad \forall x, y \in H;$$

(v) $t$-contractive ($0 \le t < 1$) if

$$\|Px - Py\| \le t\|x - y\|, \quad \forall x, y \in H;$$

(vi) $M$-Lipschitz continuous ($M > 0$) if

$$\|Px - Py\| \le M\|x - y\|, \quad \forall x, y \in H;$$

(vii) $\beta$-strongly monotone ($\beta > 0$) if

$$\langle Px - Py, x - y\rangle \ge \beta\|x - y\|^2, \quad \forall x, y \in H.$$

When $\mathrm{Fix}(P) \neq \emptyset$, we obtain (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii).

Definition 2.2. Let the function $f: H \to \mathbb{R}$ be convex and subdifferentiable. An element $d \in H$ is called a subgradient of $f$ at $x_0$ if

$$f(y) \ge f(x_0) + \langle d, y - x_0\rangle, \quad \forall y \in H.$$

The set of all subgradients of $f$ at $x_0$ is denoted by $\partial f(x_0)$.
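For example, for the function $\varphi(y) = |y_a| + |y_b| + |y_c| - 3$ on $\mathbb{R}^3$ used in Example 4.2 below, any vector $d$ whose $i$-th component equals $\operatorname{sign}(y_i)$ when $y_i \neq 0$ and lies in $[-1,1]$ when $y_i = 0$ satisfies the above inequality, and hence is a subgradient of $\varphi$ at $y$.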

Definition 2.3. The function $f: H \to \mathbb{R}$ is called weakly lower semi-continuous at $x_0$ if $x_n \rightharpoonup x_0$ implies that

$$f(x_0) \le \liminf_{n\to\infty}f(x_n).$$

Lemma 2.1 ([16,17,18]). Let $D \subseteq H$ be a nonempty, convex and closed set. For any $x \in H$, we have

(i) $\|P_D x - y\|^2 \le \|x - y\|^2 - \|x - P_D x\|^2$, $\forall y \in D$;

(ii) $\langle x - P_D x, y - P_D x\rangle \le 0$, $\forall y \in D$;

(iii) $\|P_D x - P_D y\|^2 \le \langle P_D x - P_D y, x - y\rangle$, $\forall y \in H$;

(iv) $\|(I - P_D)x - (I - P_D)y\|^2 \le \langle(I - P_D)x - (I - P_D)y, x - y\rangle$, $\forall y \in H$.

Lemma 2.2 ([19,20]). For any $x, y \in H$, the following results hold:

(i) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$;

(ii) $\|(1-m)x + my\|^2 = (1-m)\|x\|^2 + m\|y\|^2 - m(1-m)\|x - y\|^2$, $\forall m \in \mathbb{R}$.

Lemma 2.3 ([14]). Assume that $B$ is an $M$-Lipschitz continuous and $\bar{\lambda}$-strongly monotone operator with $\bar{\lambda}, M > 0$. Let $\mu, \nu > 0$ satisfy the following conditions:

$$0 < \mu < \frac{2\bar{\lambda}}{M^2}, \qquad \nu = \bar{\lambda} - \frac{M^2\mu}{2}.$$

Let $\{\gamma_n\} \subset (0,1)$ and $\lim_{n\to\infty}\gamma_n = 0$. If $n$ is a sufficiently large number, we have

$$\|(I - \gamma_n B)x - (I - \gamma_n B)y\| \le (1 - \gamma_n\nu)\|x - y\|.$$

Proof.

$$\begin{aligned}
\|(I - \gamma_n B)x - (I - \gamma_n B)y\|^2 &= \|x - y - \gamma_n(Bx - By)\|^2\\
&= \|x - y\|^2 - 2\gamma_n\langle x - y, Bx - By\rangle + \gamma_n^2\|Bx - By\|^2\\
&\le \|x - y\|^2 - 2\gamma_n\bar{\lambda}\|x - y\|^2 + \gamma_n^2 M^2\|x - y\|^2\\
&= \big(1 - 2\gamma_n\bar{\lambda} + \gamma_n^2 M^2\big)\|x - y\|^2\\
&= \big(1 - 2\gamma_n\nu - \gamma_n M^2\mu + \gamma_n^2 M^2\big)\|x - y\|^2\\
&\le \big(1 - 2\gamma_n\nu - \gamma_n M^2(\mu - \gamma_n) + \gamma_n^2\nu^2\big)\|x - y\|^2.
\end{aligned}$$

Combining $\lim_{n\to\infty}\gamma_n = 0$ and $\mu - \gamma_n > 0$ ($n$ sufficiently large), we have

$$\|(I - \gamma_n B)x - (I - \gamma_n B)y\|^2 \le \big(1 - 2\gamma_n\nu + \gamma_n^2\nu^2\big)\|x - y\|^2 = (1 - \gamma_n\nu)^2\|x - y\|^2.$$

Since $1 - \gamma_n\nu > 0$ ($n$ sufficiently large), we have

$$\|(I - \gamma_n B)x - (I - \gamma_n B)y\| \le (1 - \gamma_n\nu)\|x - y\|.$$

Lemma 2.4 ([21]). Let the self-mapping $S$ be $\alpha$-demicontractive on $H$ with $\mathrm{Fix}(S) \neq \emptyset$. Define $S_\sigma = (1-\sigma)I + \sigma S$ with $\sigma \in (0, 1-\alpha)$; then, we have the following:

(i) $\mathrm{Fix}(S) = \mathrm{Fix}(S_\sigma) \subseteq H$ is convex and closed;

(ii) $\|S_\sigma x - q\|^2 \le \|x - q\|^2 - \frac{1}{\sigma}(1 - \alpha - \sigma)\|(I - S_\sigma)x\|^2$ for all $x \in H$, $q \in \mathrm{Fix}(S)$.

Lemma 2.5 ([8]). Let $f(z) = \frac{1}{2}\|(I - P_Q)Az\|^2$. Then $\nabla f$ is $M$-Lipschitz continuous with $M = \|A\|^2$.

Lemma 2.6 ([22]). Let $\{a_n\}$ be a sequence of nonnegative numbers such that

$$a_{n+1} \le (1 - \Theta_n)a_n + \Theta_n\Upsilon_n, \quad n \ge n_0, \qquad a_{n+1} \le a_n - \Pi_n + \Gamma_n, \quad n \ge n_0,$$

where $n_0$ is a sufficiently large integer, the sequence $\{\Theta_n\}$ is contained in $(0,1)$, the sequence $\{\Pi_n\}$ is nonnegative, and $\{\Upsilon_n\}$ and $\{\Gamma_n\}$ are two real sequences. If the following conditions hold:

(i) $\sum_{n=0}^{\infty}\Theta_n = \infty$; (ii) $\lim_{n\to\infty}\Gamma_n = 0$; (iii) $\lim_{i\to\infty}\Pi_{n_i} = 0$ implies $\limsup_{i\to\infty}\Upsilon_{n_i} \le 0$ for any subsequence $\{n_i\} \subseteq \{n\}$,

then $\lim_{n\to\infty}a_n = 0$.

In this section, $C$ and $Q$ are defined by (1.3). We give some conditions for the convergence analysis of our Algorithm 1:

(C1) $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\lim_{n\to\infty}\alpha_n = 0$, $0 < \alpha_n < 1$;

(C2) $\beta_n \in [0,1]$, $\limsup_{n\to\infty}\beta_n < 1$;

(C3) $\varepsilon_n > 0$, $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = 0$;

(C4) $0 < \liminf_{n\to\infty}\delta_n \le \limsup_{n\to\infty}\delta_n < 1$;

(C5) $\sigma_n \in (0, 1-\alpha]$, $\liminf_{n\to\infty}\sigma_n > 0$;

(C6) $\inf_{n\in\mathbb{N}}\rho_n(4-\rho_n) > 0$, $\{\rho_n\} \subset (0,4)$.

Algorithm 1
Initialization: Choose $x_0, x_1 \in H_1$, $\mu \ge 0$, $\tau > 0$, $0 < \iota < \frac{2\bar{\lambda}}{L^2}$, $\nu = \bar{\lambda} - \frac{L^2\iota}{2}$ and $\lambda > 0$ such that $0 \le k\lambda < \nu$.
Iterative step: Compute $x_{n+1}$ for $n \ge 1$:
Step 1. Compute
     $d_n = x_n + \mu_n(x_n - x_{n-1})$,
where
     $\mu_n = \begin{cases}\min\Big\{\mu, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_{n-1} \neq x_n,\\ \mu, & \text{otherwise}.\end{cases}$
Step 2. Compute
     $y_n = P_{C_n}\Big((1-\delta_n)\big(d_n - \tau_n A^*(I - P_{Q_n})Ad_n\big) + \delta_n\big((1-\sigma_n)d_n + \sigma_n Sd_n\big)\Big)$,
where
     $\tau_n = \begin{cases}\dfrac{\rho_n f_n(d_n)}{\|\nabla f_n(d_n)\|^2}, & \text{if } \nabla f_n(d_n) \neq 0,\\ \tau, & \text{if } \nabla f_n(d_n) = 0,\end{cases}$
     $f_n(d_n) = \frac{1}{2}\|(I - P_{Q_n})Ad_n\|^2$,
     $\nabla f_n(d_n) = A^*(I - P_{Q_n})Ad_n$,
and
     $C_n = \{x \in H_1 : \phi(x_n) \le \langle\vartheta_n, x_n - x\rangle\}$, where $\vartheta_n \in \partial\phi(x_n)$,
     $Q_n = \{y \in H_2 : \varphi(Ax_n) \le \langle\chi_n, Ax_n - y\rangle\}$, where $\chi_n \in \partial\varphi(Ax_n)$.
Step 3. Compute
     $x_{n+1} = \alpha_n\lambda g(x_n) + \beta_n d_n + \big((1-\beta_n)I - \alpha_n B\big)y_n$.
Set $n := n+1$ and go to Step 1.
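To make the steps concrete, the following is a minimal numerical sketch of Algorithm 1 in finite dimensions (our illustration, not the authors' Matlab code). It reuses the `proj_halfspace` helper sketched in the introduction; `phi`, `dphi`, `vphi`, `dvphi`, `S`, `g` and `B` are user-supplied callables for $\phi$, an element of $\partial\phi$, $\varphi$, an element of $\partial\varphi$, the mapping $S$, the contraction $g$ and the operator $B$, and the parameter sequences are passed as functions of $n$:

```python
import numpy as np

def algorithm1(x0, x1, A, S, g, B, phi, dphi, vphi, dvphi,
               alpha, beta, delta, sigma, rho, eps,
               mu=1.0, tau_default=1.0, lam=1.0, n_iter=200):
    """Sketch of Algorithm 1. alpha, beta, delta, sigma, rho, eps are callables n -> value."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        # Step 1: inertial extrapolation d_n = x_n + mu_n (x_n - x_{n-1})
        diff = np.linalg.norm(x - x_prev)
        mu_n = min(mu, eps(n) / diff) if diff > 0 else mu
        d = x + mu_n * (x - x_prev)

        # Half-spaces C_n and Q_n built from subgradients at x_n and Ax_n
        th, chi = dphi(x), dvphi(A @ x)
        b_C = float(np.dot(th, x)) - phi(x)
        b_Q = float(np.dot(chi, A @ x)) - vphi(A @ x)

        # Step 2: relaxed CQ step mixed with the averaged demicontractive mapping
        Ad = A @ d
        r = Ad - proj_halfspace(Ad, chi, b_Q)      # (I - P_{Q_n})Ad_n
        grad = A.T @ r                              # grad f_n(d_n)
        g2 = float(np.dot(grad, grad))
        tau_n = rho(n) * 0.5 * float(np.dot(r, r)) / g2 if g2 > 0 else tau_default
        S_sig = (1 - sigma(n)) * d + sigma(n) * S(d)
        y = proj_halfspace((1 - delta(n)) * (d - tau_n * grad) + delta(n) * S_sig, th, b_C)

        # Step 3: x_{n+1} = alpha_n*lam*g(x_n) + beta_n*d_n + ((1-beta_n)I - alpha_n*B)y_n
        x_prev, x = x, alpha(n) * lam * g(x) + beta(n) * d + (1 - beta(n)) * y - alpha(n) * B(y)
    return x
```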

Theorem 3.1. Let $S: H_1 \to H_1$ be $\alpha$-demicontractive, $I - S$ be demiclosed at zero and $g: H_1 \to H_1$ be $k$-contractive. Moreover, let the operator $B$ be $L$-Lipschitz continuous and $\bar{\lambda}$-strongly monotone. Assume that $\{\alpha_n\}$, $\{\beta_n\}$, $\{\delta_n\}$, $\{\sigma_n\}$, $\{\rho_n\}$ are sequences satisfying (C1)–(C6) and $\mathrm{Fix}(S)\cap\Delta \neq \emptyset$. Then, the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to $x^* \in \mathrm{Fix}(S)\cap\Delta$. Furthermore, $x^*$ is the unique solution of (3.1):

$$\langle(B - \lambda g)x^*, z - x^*\rangle \ge 0, \quad \forall z \in \mathrm{Fix}(S)\cap\Delta. \qquad (3.1)$$

Proof. Let $r \in \Delta\cap\mathrm{Fix}(S)$. Since $C \subseteq C_n$ and $Q \subseteq Q_n$, we have that $r \in C_n$, $Ar \in Q_n$ and $r \in \mathrm{Fix}(S)$. For brevity, set $S_{\sigma_n}d_n := (1-\sigma_n)d_n + \sigma_n Sd_n$ and

$$u_n := (1-\delta_n)\big(d_n - \tau_n A^*(I - P_{Q_n})Ad_n\big) + \delta_n S_{\sigma_n}d_n,$$

so that $y_n = P_{C_n}u_n$. By combining Lemma 2.1 with Lemma 2.2, we get

$$\begin{aligned}
\|y_n - r\|^2 &= \|P_{C_n}u_n - r\|^2 \le \|u_n - r\|^2 - \|(I - P_{C_n})u_n\|^2\\
&= \big\|\delta_n\big(S_{\sigma_n}d_n - r\big) + (1-\delta_n)\big(d_n - \tau_n A^*(I - P_{Q_n})Ad_n - r\big)\big\|^2 - \|(I - P_{C_n})u_n\|^2.
\end{aligned}$$

According to Lemma 2.2(ii) and Lemma 2.4(ii), we have

$$\begin{aligned}
\|y_n - r\|^2 &\le \delta_n\|S_{\sigma_n}d_n - r\|^2 + (1-\delta_n)\|d_n - \tau_n\nabla f_n(d_n) - r\|^2\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2\\
&\le \delta_n\|d_n - r\|^2 + (1-\delta_n)\|d_n - \tau_n\nabla f_n(d_n) - r\|^2\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2\\
&\le \delta_n\|d_n - r\|^2 + (1-\delta_n)\big(\|d_n - r\|^2 + \tau_n^2\|\nabla f_n(d_n)\|^2 - 2\tau_n\langle\nabla f_n(d_n), d_n - r\rangle\big)\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2. \qquad (3.2)
\end{aligned}$$

From the definition of $f_n(d_n)$ and Lemma 2.1(iv) (note that $(I - P_{Q_n})Ar = 0$), we have

$$\langle\nabla f_n(d_n), d_n - r\rangle = \langle A^*(I - P_{Q_n})Ad_n - A^*(I - P_{Q_n})Ar, d_n - r\rangle = \langle(I - P_{Q_n})Ad_n - (I - P_{Q_n})Ar, Ad_n - Ar\rangle \ge \|(I - P_{Q_n})Ad_n\|^2 = 2f_n(d_n), \qquad (3.3)$$

which also shows that $f_n(d_n) = 0$ whenever $\nabla f_n(d_n) = 0$. Substituting (3.3) into (3.2) and using $\tau_n\|\nabla f_n(d_n)\|^2 = \rho_n f_n(d_n)$, we have

$$\begin{aligned}
\|y_n - r\|^2 &\le \|d_n - r\|^2 + \tau_n^2(1-\delta_n)\|\nabla f_n(d_n)\|^2 - 4(1-\delta_n)\tau_n f_n(d_n)\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2\\
&= \|d_n - r\|^2 + (1-\delta_n)\tau_n\rho_n f_n(d_n) - 4(1-\delta_n)\tau_n f_n(d_n)\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2\\
&= \|d_n - r\|^2 - (4-\rho_n)(1-\delta_n)\tau_n f_n(d_n)\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2. \qquad (3.4)
\end{aligned}$$

Noting that $\{\delta_n\}$ is a sequence in $(0,1)$ and $\rho_n \in (0,4)$, we derive that

$$\|y_n - r\| \le \|d_n - r\|. \qquad (3.5)$$

From the definition of $d_n$, we have

$$\|d_n - r\| = \|x_n + \mu_n(x_n - x_{n-1}) - r\| \le \mu_n\|x_n - x_{n-1}\| + \|x_n - r\| \le \varepsilon_n + \|x_n - r\|. \qquad (3.6)$$

In the rest of the proof, $n_0$ is assumed to be a sufficiently large positive integer and $n \ge n_0$.

Now, let $\gamma_n := \frac{\alpha_n}{1-\beta_n}$. By Lemma 2.3, (3.5) and (3.6), we estimate $\|x_{n+1} - r\|$:

$$\begin{aligned}
\|x_{n+1} - r\| &= \big\|\alpha_n\lambda g(x_n) + \beta_n d_n + \big((1-\beta_n)I - \alpha_n B\big)y_n - r\big\|\\
&= \big\|\alpha_n\big(\lambda g(x_n) - Br\big) + \beta_n(d_n - r) + \big((1-\beta_n)I - \alpha_n B\big)y_n - \big((1-\beta_n)I - \alpha_n B\big)r\big\|\\
&\le \alpha_n\|\lambda g(x_n) - Br\| + \beta_n\|d_n - r\| + (1-\beta_n)\big\|(I - \gamma_n B)y_n - (I - \gamma_n B)r\big\|\\
&\le \alpha_n\|\lambda g(x_n) - \lambda g(r)\| + \alpha_n\|\lambda g(r) - Br\| + \beta_n\|d_n - r\| + (1-\beta_n)(1 - \gamma_n\nu)\|y_n - r\|\\
&\le \alpha_n\lambda k\|x_n - r\| + \alpha_n\|\lambda g(r) - Br\| + \beta_n\|d_n - r\| + (1 - \beta_n - \alpha_n\nu)\|y_n - r\|\\
&\le \alpha_n\lambda k\|x_n - r\| + \alpha_n\|\lambda g(r) - Br\| + (1 - \alpha_n\nu)\|d_n - r\|\\
&\le \big(1 - \alpha_n(\nu - \lambda k)\big)\|x_n - r\| + \alpha_n\|\lambda g(r) - Br\| + (1 - \alpha_n\nu)\varepsilon_n\\
&= \big(1 - \alpha_n(\nu - \lambda k)\big)\|x_n - r\| + \alpha_n(\nu - \lambda k)\left(\frac{\|\lambda g(r) - Br\|}{\nu - \lambda k} + \frac{(1 - \alpha_n\nu)\varepsilon_n}{\alpha_n(\nu - \lambda k)}\right).
\end{aligned}$$

We can assume that $M$ is a suitable positive number such that $\frac{\varepsilon_n}{\alpha_n} \le M$, since $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = 0$. Next, we have

$$\begin{aligned}
\|x_{n+1} - r\| &\le \big(1 - \alpha_n(\nu - \lambda k)\big)\|x_n - r\| + \alpha_n(\nu - \lambda k)\left(\frac{\|\lambda g(r) - Br\|}{\nu - \lambda k} + \frac{M(1 - \alpha_n\nu)}{\nu - \lambda k}\right)\\
&\le \big(1 - \alpha_n(\nu - \lambda k)\big)\|x_n - r\| + \alpha_n(\nu - \lambda k)\cdot\frac{\|\lambda g(r) - Br\| + M}{\nu - \lambda k}\\
&\le \max\left\{\|x_n - r\|,\ \frac{\|\lambda g(r) - Br\| + M}{\nu - \lambda k}\right\} \le \cdots \le \max\left\{\|x_{n_0} - r\|,\ \frac{\|\lambda g(r) - Br\| + M}{\nu - \lambda k}\right\}.
\end{aligned}$$

Therefore, the sequence $\{x_n\}$ is bounded.

From the definition of $x_{n+1}$,

$$x_{n+1} = \beta_n d_n + (1-\beta_n)\big(\gamma_n\lambda g(x_n) + (I - \gamma_n B)y_n\big);$$

set

$$z_n := \gamma_n\lambda g(x_n) + (I - \gamma_n B)y_n; \qquad (3.7)$$

then, we obtain

$$x_{n+1} = \beta_n d_n + (1-\beta_n)z_n. \qquad (3.8)$$

Since the estimates above hold for an arbitrary $r \in \Delta\cap\mathrm{Fix}(S)$, we may in particular take $r = x^*$, the solution of (3.1). By Lemma 2.2, Lemma 2.3, (3.5) and (3.7), we have

$$\begin{aligned}
\|z_n - x^*\|^2 &= \|\gamma_n\lambda g(x_n) + (I - \gamma_n B)y_n - x^*\|^2 = \big\|\gamma_n\big(\lambda g(x_n) - Bx^*\big) + \big((I - \gamma_n B)y_n - (I - \gamma_n B)x^*\big)\big\|^2\\
&= \gamma_n^2\|\lambda g(x_n) - Bx^*\|^2 + \|(I - \gamma_n B)y_n - (I - \gamma_n B)x^*\|^2 + 2\gamma_n\big\langle\lambda g(x_n) - Bx^*, (I - \gamma_n B)y_n - (I - \gamma_n B)x^*\big\rangle\\
&\le \gamma_n^2\|\lambda g(x_n) - Bx^*\|^2 + (1 - \gamma_n\nu)^2\|y_n - x^*\|^2 + 2\gamma_n\langle\lambda g(x_n) - Bx^*, y_n - x^*\rangle - 2\gamma_n^2\langle\lambda g(x_n) - Bx^*, By_n - Bx^*\rangle\\
&\le \gamma_n^2\|\lambda g(x_n) - Bx^*\|^2 + (1 - \gamma_n\nu)^2\|y_n - x^*\|^2 + 2\gamma_n\langle\lambda g(x_n) - \lambda g(x^*), y_n - x^*\rangle + 2\gamma_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\gamma_n^2\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\|\\
&\le \gamma_n^2\|\lambda g(x_n) - Bx^*\|^2 + (1 - \gamma_n\nu)^2\|d_n - x^*\|^2 + 2\gamma_n\lambda k\|x_n - x^*\|\|d_n - x^*\| + 2\gamma_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\gamma_n^2\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\|\\
&= (1 - 2\gamma_n\nu)\|d_n - x^*\|^2 + 2\gamma_n\lambda k\|x_n - x^*\|\|d_n - x^*\| + 2\gamma_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\gamma_n^2\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\| + \gamma_n^2\|\lambda g(x_n) - Bx^*\|^2 + \gamma_n^2\nu^2\|d_n - x^*\|^2. \qquad (3.9)
\end{aligned}$$

According to the definition of $\{d_n\}$ and Lemma 2.2, we have

$$\|d_n - x^*\|^2 = \|x_n + \mu_n(x_n - x_{n-1}) - x^*\|^2 \le \|x_n - x^*\|^2 + 2\mu_n\langle x_n - x_{n-1}, d_n - x^*\rangle \le \|x_n - x^*\|^2 + 2\mu_n\|x_n - x_{n-1}\|\|d_n - x^*\| \le \|x_n - x^*\|^2 + 2\varepsilon_n\|d_n - x^*\|. \qquad (3.10)$$

Next, we estimate $\|x_{n+1} - x^*\|^2$ in two ways. On the one hand, in view of (3.6), (3.8), (3.9), (3.10) and Lemma 2.2, we obtain

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|\beta_n d_n + (1-\beta_n)z_n - x^*\|^2 = \|(1-\beta_n)(z_n - x^*) + \beta_n(d_n - x^*)\|^2\\
&= (1-\beta_n)\|z_n - x^*\|^2 + \beta_n\|d_n - x^*\|^2 - \beta_n(1-\beta_n)\|d_n - z_n\|^2\\
&\le (1-\beta_n)\|z_n - x^*\|^2 + \beta_n\|d_n - x^*\|^2\\
&\le \beta_n\|d_n - x^*\|^2 + (1-\beta_n)(1 - 2\gamma_n\nu)\|d_n - x^*\|^2 + 2(1-\beta_n)\gamma_n\lambda k\|x_n - x^*\|\|d_n - x^*\|\\
&\quad + 2\gamma_n(1-\beta_n)\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle + 2\gamma_n^2(1-\beta_n)\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\|\\
&\quad + \gamma_n^2(1-\beta_n)\|\lambda g(x_n) - Bx^*\|^2 + \gamma_n^2(1-\beta_n)\nu^2\|d_n - x^*\|^2\\
&= (1 - 2\alpha_n\nu)\|d_n - x^*\|^2 + 2\alpha_n\lambda k\|x_n - x^*\|\|d_n - x^*\| + 2\alpha_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\| + \alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|^2 + \alpha_n\gamma_n\nu^2\|d_n - x^*\|^2\\
&\le (1 - 2\alpha_n\nu)\big(\|x_n - x^*\|^2 + 2\varepsilon_n\|d_n - x^*\|\big) + 2\alpha_n\lambda k\|x_n - x^*\|\big(\|x_n - x^*\| + \varepsilon_n\big) + 2\alpha_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\| + \alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|^2 + \alpha_n\gamma_n\nu^2\|d_n - x^*\|^2\\
&\le (1 - 2\alpha_n\nu)\|x_n - x^*\|^2 + 2\varepsilon_n\|d_n - x^*\| + 2\alpha_n\lambda k\|x_n - x^*\|^2 + 2\varepsilon_n\|x_n - x^*\| + 2\alpha_n\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle\\
&\quad + 2\alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\| + \alpha_n\gamma_n\|\lambda g(x_n) - Bx^*\|^2 + \alpha_n\gamma_n\nu^2\|d_n - x^*\|^2\\
&= \big(1 - 2\alpha_n(\nu - \lambda k)\big)\|x_n - x^*\|^2 + 2\alpha_n(\nu - \lambda k)\Bigg(\frac{\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle}{\nu - \lambda k} + \frac{\gamma_n\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\|}{\nu - \lambda k}\\
&\quad + \frac{\gamma_n\|\lambda g(x_n) - Bx^*\|^2}{2(\nu - \lambda k)} + \frac{\gamma_n\nu^2\|d_n - x^*\|^2}{2(\nu - \lambda k)} + \frac{\varepsilon_n\|d_n - x^*\|}{\alpha_n(\nu - \lambda k)} + \frac{\varepsilon_n\|x_n - x^*\|}{\alpha_n(\nu - \lambda k)}\Bigg). \qquad (3.11)
\end{aligned}$$

On the other hand, using the definition of $\{z_n\}$, Lemma 2.2, (3.4) and (3.10), we get

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le (1-\beta_n)\|z_n - x^*\|^2 + \beta_n\|d_n - x^*\|^2\\
&= (1-\beta_n)\|\gamma_n\lambda g(x_n) + (I - \gamma_n B)y_n - x^*\|^2 + \beta_n\|d_n - x^*\|^2\\
&= (1-\beta_n)\big\|(y_n - x^*) + \gamma_n\big(\lambda g(x_n) - By_n\big)\big\|^2 + \beta_n\|d_n - x^*\|^2\\
&\le (1-\beta_n)\big(\|y_n - x^*\|^2 + 2\gamma_n\langle\lambda g(x_n) - By_n, z_n - x^*\rangle\big) + \beta_n\|d_n - x^*\|^2\\
&\le \beta_n\|d_n - x^*\|^2 + (1-\beta_n)\Big(\|d_n - x^*\|^2 - (4-\rho_n)(1-\delta_n)\tau_n f_n(d_n)\\
&\quad - \delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - \|(I - P_{C_n})u_n\|^2\Big) + 2\alpha_n\langle\lambda g(x_n) - By_n, z_n - x^*\rangle\\
&\le \|x_n - x^*\|^2 + 2\varepsilon_n\|d_n - x^*\| - (1-\beta_n)(4-\rho_n)(1-\delta_n)\tau_n f_n(d_n)\\
&\quad - (1-\beta_n)\delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2 - (1-\beta_n)\|(I - P_{C_n})u_n\|^2\\
&\quad + 2\alpha_n\langle\lambda g(x_n) - By_n, z_n - x^*\rangle. \qquad (3.12)
\end{aligned}$$

Now set

$$\begin{aligned}
\Theta_n &= 2\alpha_n(\nu - \lambda k),\\
\Upsilon_n &= \frac{1}{\nu - \lambda k}\Big(\langle\lambda g(x^*) - Bx^*, y_n - x^*\rangle + \gamma_n\|\lambda g(x_n) - Bx^*\|\|By_n - Bx^*\| + \frac{\gamma_n\|\lambda g(x_n) - Bx^*\|^2}{2}\\
&\qquad\qquad + \frac{\gamma_n\nu^2\|d_n - x^*\|^2}{2} + \frac{\varepsilon_n\|d_n - x^*\|}{\alpha_n} + \frac{\varepsilon_n\|x_n - x^*\|}{\alpha_n}\Big),\\
\Pi_n &= (1-\beta_n)(4-\rho_n)(1-\delta_n)\tau_n f_n(d_n) + (1-\beta_n)\delta_n(1-\delta_n)\big\|\sigma_n(Sd_n - d_n) + \tau_n A^*(I - P_{Q_n})Ad_n\big\|^2\\
&\qquad + (1-\beta_n)\|(I - P_{C_n})u_n\|^2,\\
\Gamma_n &= 2\alpha_n\langle\lambda g(x_n) - By_n, z_n - x^*\rangle + 2\varepsilon_n\|d_n - x^*\|.
\end{aligned}$$

Thus, (3.11) and (3.12) can be rewritten as

$$\|x_{n+1} - x^*\|^2 \le (1 - \Theta_n)\|x_n - x^*\|^2 + \Theta_n\Upsilon_n, \qquad \|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 - \Pi_n + \Gamma_n.$$

Clearly, $\lim_{n\to\infty}\Theta_n = 0$, $\sum_{n=1}^{\infty}\Theta_n = \infty$ and $\lim_{n\to\infty}\Gamma_n = 0$. Hence, by Lemma 2.6, it suffices to show that $\limsup_{i\to\infty}\Upsilon_{n_i} \le 0$ whenever $\lim_{i\to\infty}\Pi_{n_i} = 0$ for a subsequence $\{n_i\} \subseteq \{n\}$; this will give $\lim_{n\to\infty}\|x_n - x^*\| = 0$.

Assume that

$$\lim_{i\to\infty}\Pi_{n_i} = 0.$$

Using the conditions on $\{\beta_n\}$, $\{\delta_n\}$ and $\{\sigma_n\}$, we have

$$\lim_{i\to\infty}(4-\rho_{n_i})\tau_{n_i}f_{n_i}(d_{n_i}) = 0, \qquad (3.13)$$
$$\lim_{i\to\infty}\big\|\sigma_{n_i}(Sd_{n_i} - d_{n_i}) + \tau_{n_i}A^*(I - P_{Q_{n_i}})Ad_{n_i}\big\|^2 = 0, \qquad (3.14)$$
$$\lim_{i\to\infty}\big\|(I - P_{C_{n_i}})u_{n_i}\big\|^2 = 0, \qquad (3.15)$$

where $u_{n_i} = (1-\delta_{n_i})\big(d_{n_i} - \tau_{n_i}\nabla f_{n_i}(d_{n_i})\big) + \delta_{n_i}S_{\sigma_{n_i}}d_{n_i}$. From (3.13) and the conditions on $\{\rho_n\}$, we deduce that

$$\tau_{n_i}f_{n_i}(d_{n_i}) \to 0. \qquad (3.16)$$

If $\nabla f_{n_i}(d_{n_i}) \neq 0$, we have $f_{n_i}(d_{n_i}) \neq 0$ and

$$\tau_{n_i} = \frac{\rho_{n_i}f_{n_i}(d_{n_i})}{\|\nabla f_{n_i}(d_{n_i})\|^2} = \frac{\rho_{n_i}\|(I - P_{Q_{n_i}})Ad_{n_i}\|^2}{2\|A^*(I - P_{Q_{n_i}})Ad_{n_i}\|^2} \ge \frac{\rho_{n_i}\|(I - P_{Q_{n_i}})Ad_{n_i}\|^2}{2\|A\|^2\|(I - P_{Q_{n_i}})Ad_{n_i}\|^2} = \frac{\rho_{n_i}}{2\|A\|^2},$$

which means that $\inf_{i\in\mathbb{N}}\tau_{n_i} > 0$. Hence, we obtain $f_{n_i}(d_{n_i}) \to 0$ as $i\to\infty$. This implies that

$$\lim_{i\to\infty}\|(I - P_{Q_{n_i}})Ad_{n_i}\| = 0.$$

Using (3.16) and the conditions on $\{\rho_n\}$, we have

$$\|\tau_{n_i}\nabla f_{n_i}(d_{n_i})\|^2 = \rho_{n_i}\tau_{n_i}f_{n_i}(d_{n_i}) \to 0. \qquad (3.17)$$

Moreover, from (3.14), (3.17) and the condition on $\{\sigma_n\}$, we get

$$\|Sd_{n_i} - d_{n_i}\| \to 0. \qquad (3.18)$$

Using (3.15) and the definition of $y_n$, we have $y_{n_i} = P_{C_{n_i}}u_{n_i}$, so

$$\big\|(1-\delta_{n_i})\big(d_{n_i} - \tau_{n_i}\nabla f_{n_i}(d_{n_i})\big) + \delta_{n_i}S_{\sigma_{n_i}}d_{n_i} - y_{n_i}\big\| \to 0,$$

i.e.,

$$\big\|(1-\delta_{n_i})d_{n_i} - (1-\delta_{n_i})\tau_{n_i}\nabla f_{n_i}(d_{n_i}) + \delta_{n_i}S_{\sigma_{n_i}}d_{n_i} - y_{n_i}\big\| \to 0. \qquad (3.19)$$

Due to (3.17) and (3.19), we get

$$\big\|(1-\delta_{n_i})d_{n_i} + \delta_{n_i}\big((1-\sigma_{n_i})d_{n_i} + \sigma_{n_i}Sd_{n_i}\big) - y_{n_i}\big\| \to 0,$$

i.e.,

$$\big\|d_{n_i} - y_{n_i} + \delta_{n_i}\sigma_{n_i}(Sd_{n_i} - d_{n_i})\big\| \to 0. \qquad (3.20)$$

Then, we have

$$\|d_{n_i} - y_{n_i}\| = \big\|d_{n_i} - y_{n_i} + \delta_{n_i}\sigma_{n_i}(Sd_{n_i} - d_{n_i}) - \delta_{n_i}\sigma_{n_i}(Sd_{n_i} - d_{n_i})\big\| \le \big\|d_{n_i} - y_{n_i} + \delta_{n_i}\sigma_{n_i}(Sd_{n_i} - d_{n_i})\big\| + \delta_{n_i}\sigma_{n_i}\|Sd_{n_i} - d_{n_i}\|;$$

combining (3.18) with (3.20), we obtain

$$\|d_{n_i} - y_{n_i}\| \to 0. \qquad (3.21)$$

Moreover, using the definition of $\{\mu_n\}$ and the boundedness of $\{x_n\}$, we have

$$\|x_{n_i} - d_{n_i}\| = \mu_{n_i}\|x_{n_i} - x_{n_i-1}\| \le \varepsilon_{n_i} \to 0. \qquad (3.22)$$

Next, combining (3.21) and (3.22), we have

$$\|x_{n_i} - y_{n_i}\| \le \|x_{n_i} - d_{n_i}\| + \|d_{n_i} - y_{n_i}\| \to 0. \qquad (3.23)$$

By (3.18) and the demiclosedness of $I - S$ at zero, every weak cluster point of $\{d_{n_i}\}$ lies in $\mathrm{Fix}(S)$, i.e., $\omega_w(d_{n_i}) \subseteq \mathrm{Fix}(S)$. Now, we choose a subsequence $\{d_{n_{i_j}}\}$ of $\{d_{n_i}\}$ such that

$$\limsup_{i\to\infty}\langle\lambda g(x^*) - Bx^*, d_{n_i} - x^*\rangle = \lim_{j\to\infty}\langle\lambda g(x^*) - Bx^*, d_{n_{i_j}} - x^*\rangle.$$

Without loss of generality, we suppose that $d_{n_{i_j}} \rightharpoonup z$. By (3.21) and (3.23), we also get $y_{n_{i_j}} \rightharpoonup z$ and $x_{n_{i_j}} \rightharpoonup z$. Now, we show that $z \in C$. Since $y_{n_{i_j}} \in C_{n_{i_j}}$ by the definition of $y_{n_{i_j}}$, this implies that

$$\phi(x_{n_{i_j}}) \le \langle\vartheta_{n_{i_j}}, x_{n_{i_j}} - y_{n_{i_j}}\rangle, \qquad (3.24)$$

where $\vartheta_{n_{i_j}} \in \partial\phi(x_{n_{i_j}})$. From (3.23) and the boundedness of $\partial\phi$, we have

$$\phi(x_{n_{i_j}}) \le \|\vartheta_{n_{i_j}}\|\|x_{n_{i_j}} - y_{n_{i_j}}\| \to 0 \qquad (3.25)$$

as $j\to\infty$. Because $\phi$ is weakly lower semi-continuous and $x_{n_{i_j}} \rightharpoonup z$, (3.25) yields

$$\phi(z) \le \liminf_{j\to\infty}\phi(x_{n_{i_j}}) \le 0.$$

Hence $z \in C$.

Next, we show that $Az \in Q$. Since $P_{Q_{n_{i_j}}}(Ax_{n_{i_j}}) \in Q_{n_{i_j}}$, we have

$$\varphi(Ax_{n_{i_j}}) \le \langle\chi_{n_{i_j}}, Ax_{n_{i_j}} - P_{Q_{n_{i_j}}}(Ax_{n_{i_j}})\rangle, \qquad (3.26)$$

where $\chi_{n_{i_j}} \in \partial\varphi(Ax_{n_{i_j}})$. Given that $\lim_{i\to\infty}\|(I - P_{Q_{n_i}})Ad_{n_i}\| = 0$ and (3.22), we get $\lim_{i\to\infty}\|(I - P_{Q_{n_i}})Ax_{n_i}\| = 0$. By (3.26) and the boundedness of $\partial\varphi$, we have

$$\varphi(Ax_{n_{i_j}}) \le \|\chi_{n_{i_j}}\|\|Ax_{n_{i_j}} - P_{Q_{n_{i_j}}}(Ax_{n_{i_j}})\| \to 0 \qquad (3.27)$$

as $j\to\infty$. In view of $x_{n_{i_j}} \rightharpoonup z$, $Ax_{n_{i_j}} \rightharpoonup Az$, the weak lower semi-continuity of $\varphi$ and (3.27), we can get

$$\varphi(Az) \le \liminf_{j\to\infty}\varphi(Ax_{n_{i_j}}) \le 0.$$

Hence $Az \in Q$.

From the above arguments, we derive that $z \in \mathrm{Fix}(S)\cap\Delta$. Now, using (3.21), (3.23) and the fact that $x^*$ solves (3.1), we get

$$\limsup_{i\to\infty}\langle\lambda g(x^*) - Bx^*, y_{n_i} - x^*\rangle = \limsup_{i\to\infty}\langle\lambda g(x^*) - Bx^*, d_{n_i} - x^*\rangle = \lim_{j\to\infty}\langle\lambda g(x^*) - Bx^*, d_{n_{i_j}} - x^*\rangle = \langle\lambda g(x^*) - Bx^*, z - x^*\rangle \le 0.$$

Since, in addition, $\gamma_{n_i} \to 0$, $\varepsilon_{n_i}/\alpha_{n_i} \to 0$ and the sequences appearing in $\Upsilon_{n_i}$ are bounded, the remaining terms of $\Upsilon_{n_i}$ tend to zero, so

$$\limsup_{i\to\infty}\Upsilon_{n_i} \le 0.$$

By Lemma 2.6, we conclude that $x_n \to x^*$. This completes the proof.

Now, we present a comparison of Algorithm 1 with other algorithms through two numerical experiments. All of the programs were written in Matlab 9.5 and run on a desktop PC with an AMD Ryzen 7 4800U with Radeon Graphics (1.80 GHz) and 16.0 GB of RAM.

Example 4.1. We assume that $H_1 = H_2 = C = \mathbb{R}^4$ and $Q = \{b\}$, where $b \in \mathbb{R}^4$; then, we have that $\Delta = \{x \in \mathbb{R}^4 : Ax = b\}$. Considering the system of linear equations $Ax = b$, let

$$A = \begin{pmatrix} 1 & 1 & 2 & 3\\ 2 & 3 & 5 & 2\\ -3 & 1 & 1 & 2\\ 3 & 5 & 2 & -2 \end{pmatrix}, \qquad b = \begin{pmatrix} 5\\ 1\\ 6\\ -15 \end{pmatrix}.$$

By calculation, we can find that $Ax = b$ has the unique solution $x^* = (-1, -2, 1, 2)^T$. Let

$$Sx = \begin{pmatrix} \frac14 & \frac14 & 0 & 0\\ 0 & \frac14 & \frac14 & 0\\ 0 & 0 & \frac14 & \frac14\\ 0 & 0 & 0 & \frac34 \end{pmatrix}x + \begin{pmatrix} -\frac14\\ -\frac74\\ \frac14\\ \frac12 \end{pmatrix}, \quad x \in \mathbb{R}^4.$$

By calculation, we can find that the mapping $S$ is nonexpansive and $x^* = (-1, -2, 1, 2)^T \in \mathrm{Fix}(S)$.

In Algorithm 1 and scheme (1.4), we let $\alpha_n = \frac{1}{10n}$, $\beta_n = 0.2$, $\delta_n = 0.5$, $x_1 = (1, 2, 3, 4)^T$ and $g = 0.2I$; in scheme (1.4), we let $\tau_n = \frac{7}{4\|A\|^2}$; in Algorithm 1, we let $\phi(x) = 0$ for $x \in \mathbb{R}^4$, $\varphi(y) = \|y - b\|$ for $y \in \mathbb{R}^4$, $B = I$, $\lambda = 1$, $\varepsilon_n = \frac{1}{n^2}$, $\mu = 1.5$, $\sigma_n = 1$, $\rho_n = 3.5$, $\tau = 1$ and $x_0 = (1, 2, 3, 4)^T$. The results of the numerical experiments are reported in Table 1 and Figure 1.
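Under these choices, the experiment can be set up with the `algorithm1` and `proj_halfspace` sketches above roughly as follows (an illustration under our reading of the problem data, not the authors' Matlab implementation):

```python
import numpy as np

A = np.array([[ 1.0, 1.0, 2.0,  3.0],
              [ 2.0, 3.0, 5.0,  2.0],
              [-3.0, 1.0, 1.0,  2.0],
              [ 3.0, 5.0, 2.0, -2.0]])
b = np.array([5.0, 1.0, 6.0, -15.0])
M = np.array([[0.25, 0.25, 0.0,  0.0 ],
              [0.0,  0.25, 0.25, 0.0 ],
              [0.0,  0.0,  0.25, 0.25],
              [0.0,  0.0,  0.0,  0.75]])
c = np.array([-0.25, -1.75, 0.25, 0.5])
S = lambda x: M @ x + c                                  # nonexpansive affine mapping
g = lambda x: 0.2 * x                                    # g = 0.2 I
B = lambda y: y                                          # B = I
phi  = lambda x: 0.0                                     # C = R^4
dphi = lambda x: np.zeros(4)
vphi  = lambda y: float(np.linalg.norm(y - b))           # Q = {b}
dvphi = lambda y: (y - b) / np.linalg.norm(y - b) if np.linalg.norm(y - b) > 0 else np.zeros(4)

x = algorithm1(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 2.0, 3.0, 4.0]),
               A, S, g, B, phi, dphi, vphi, dvphi,
               alpha=lambda n: 1.0 / (10.0 * n), beta=lambda n: 0.2,
               delta=lambda n: 0.5, sigma=lambda n: 1.0,
               rho=lambda n: 3.5, eps=lambda n: 1.0 / n**2,
               mu=1.5, tau_default=1.0, lam=1.0, n_iter=1000)
print(x)   # expected to approach x* = (-1, -2, 1, 2)
```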

    Table 1.  Numerical results of Algorithm 1 for Example 4.1.
    $n$        $x_n^{(1)}$   $x_n^{(2)}$   $x_n^{(3)}$   $x_n^{(4)}$   $E_n$
    0 1.0000 2.0000 3.0000 4.0000 5.2915E+00
    10 -0.9884 -1.9774 0.9764 1.9905 3.5918E-02
    50 -0.9987 -1.9959 0.9963 1.9969 6.4333E-03
    100 -0.9994 -1.9980 0.9982 1.9985 3.1627E-03
    500 -0.9999 -1.9996 0.9996 1.9997 6.3660E-04
    1000 -0.9999 -1.9998 0.9998 1.9998 3.1853E-04
    5000 -1.0000 -2.0000 1.0000 2.0000 6.3743E-05
    10000 -1.0000 -2.0000 1.0000 2.0000 3.1874E-05

    Figure 1.  Comparison of Algorithm 1 and scheme (1.4) for Example 4.1.

From Table 1, we can see that $x_n$ gets closer to the exact solution $x^*$ as the number of iterations increases, and that the errors $E_n$ approach 0. Therefore, it can be concluded that Algorithm 1 is feasible. From Figure 1, we can see that Algorithm 1 requires fewer iterations than scheme (1.4).

Example 4.2. Assume that $H_1 = \mathbb{R}$, $H_2 = \mathbb{R}^3$, $C = [-2, 6]$, $Q = \{(y_a, y_b, y_c)^T : |y_a| + |y_b| + |y_c| \le 3\}$,

$$A = \begin{pmatrix} 2\\ 3\\ 4 \end{pmatrix}, \qquad Sx = \frac{x}{2}\sin x, \quad x \in \mathbb{R}.$$

We can see that $S$ is not nonexpansive but is quasi-nonexpansive [23]. By calculation, we can find that $\mathrm{Fix}(S) = \{0\}$. It is easy to see that $0 \in \Delta$. We denote $x^* = 0$.

In Algorithm 1 and scheme (1.6), we let $x_0 = x_1 = 1$, $\varepsilon_n = \frac{1}{n^2}$, $\mu = 0.9$, $\beta_n = 0.5$, $\rho_n = 3.5$, $\delta_n = 0.4$, $g(x) = \frac{1}{3}\sin x$ and $\alpha_n = \frac{1}{8n^p}$, where $0 < p \le 1$. In scheme (1.6), we used the function quadprog from the Matlab 9.5 Optimization Toolbox to compute the projection onto $Q$; in Algorithm 1, we let $\phi(x) = (x-6)(x+2)$ for $x \in \mathbb{R}$, $\varphi(y) = |y_a| + |y_b| + |y_c| - 3$ for $y = (y_a, y_b, y_c)^T \in \mathbb{R}^3$, $B = I$, $\lambda = 1$, $\sigma_n = 1$ and $\tau = 1$. We used $\|x_n - x^*\| \le 10^{-8}$ as the stopping criterion. The results of the numerical experiments for different values of $p$ are shown in Table 2. The convergence behavior for $p = 0.5$ is shown in Figures 2 and 3.
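Similarly, a possible setup of this experiment with the `algorithm1` sketch is shown below (our illustration; the scalar variable is represented as a length-1 array, and the authors' Matlab code with quadprog is not reproduced here):

```python
import numpy as np

A = np.array([[2.0], [3.0], [4.0]])                      # A : R -> R^3
S = lambda x: 0.5 * x * np.sin(x)                        # quasi-nonexpansive, Fix(S) = {0}
g = lambda x: np.sin(x) / 3.0                            # a 1/3-contraction
B = lambda y: y                                          # B = I
phi  = lambda x: float((x[0] - 6.0) * (x[0] + 2.0))      # C = [-2, 6]
dphi = lambda x: np.array([2.0 * x[0] - 4.0])            # gradient of phi
vphi  = lambda y: float(np.sum(np.abs(y)) - 3.0)         # Q = {y : ||y||_1 <= 3}
dvphi = lambda y: np.sign(y)                             # a subgradient of vphi

p = 0.5
x = algorithm1(np.array([1.0]), np.array([1.0]), A, S, g, B,
               phi, dphi, vphi, dvphi,
               alpha=lambda n: 1.0 / (8.0 * n**p), beta=lambda n: 0.5,
               delta=lambda n: 0.4, sigma=lambda n: 1.0,
               rho=lambda n: 3.5, eps=lambda n: 1.0 / n**2,
               mu=0.9, tau_default=1.0, lam=1.0, n_iter=100)
print(x)   # expected to approach x* = 0
```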

    Table 2.  Numerical results of Algorithm 1 and scheme (1.6) for Example 4.2.
    p Algorithm 1 Scheme (1.6)
    Iter. Time [sec] Iter. Time [sec]
    0.5 86 0.0515 86 0.7586
    0.6 86 0.0534 86 0.7643
    0.7 73 0.0517 73 0.7125
    0.8 80 0.0522 80 0.7473
    0.9 80 0.0545 80 0.7242
    1 87 0.0522 87 0.8208

    Figure 2.  Comparison between the number of iterations of Algorithm 1 and scheme (1.6) for Example 4.2 with p=0.5.
    Figure 3.  Comparison between elapsed time of Algorithm 1 and scheme (1.6) for Example 4.2 with p=0.5.

In this work, we developed a method for solving the SFP and the fixed-point problem involving demicontractive mappings. In Algorithm 1, building on scheme (1.4), we expanded the nonexpansive mapping to the demicontractive mapping, selected a new step size and added an inertial term; Algorithm 1 exhibits better convergence behavior than scheme (1.4). Additionally, building on scheme (1.6), we extended the quasi-nonexpansive mapping to the demicontractive mapping, applied the projection onto a half-space and selected a new step size; Algorithm 1 also demonstrated superior convergence behavior compared with scheme (1.6). In summary, our algorithm exhibits a superior convergence rate and admits a wider class of operators, which makes it applicable in a broader range of scenarios. Furthermore, we presented two numerical experiments that demonstrate the feasibility and effectiveness of our method. In future work, we intend to extend the demicontractive mapping to the quasi-pseudocontractive mapping and extend the SFP to the multiple-set split feasibility problem.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (Grant no. 12171435).

    The authors declare that they have no competing interests.



    [1] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algor., 8 (1994), 221–239. https://doi.org/10.1007/BF02142692 doi: 10.1007/BF02142692
    [2] Y. Yao, M. Postolache, X. Qin, J. Yao, Iterative algorithms for the proximal split feasibility problem, U. P. B. Sci. Bull. A, 80 (2018), 37–44.
    [3] A. E. Al-Mazrooei, A. Latif, X. Qin, J. C. Yao, Fixed point algorithms for split feasibility problems, Fixed Point Theor., 20 (2019), 245–254. https://doi.org/10.24193/fpt-ro.2019.1.16 doi: 10.24193/fpt-ro.2019.1.16
    [4] Q. Dong, S. He, Y. Zhao, On global convergence rate of two acceleration projection algorithms for solving the multiple-sets split feasibility problem, Filomat, 30 (2016), 3243–3252. https://doi.org/10.2298/FIL1612243D doi: 10.2298/FIL1612243D
[5] J. Zhao, D. Hou, A self-adaptive iterative algorithm for the split common fixed problems, Numer. Algor., 82 (2019), 1047–1063. https://doi.org/10.1007/s11075-018-0640-x doi: 10.1007/s11075-018-0640-x
    [6] D. Hou, J. Zhao, X. Wang, Weak convergence of a primal-dual algorithm for split common fixed-point problems in Hilbert spaces, J. Appl. Numer. Optim., 2 (2020), 187–197.
    [7] C. Byrne, Iterative oblique projection onto convex set and the split feasibility problem, Inverse Probl., 18 (2002), 441–453. https://doi.org/10.1088/0266-5611/18/2/310 doi: 10.1088/0266-5611/18/2/310
    [8] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006 doi: 10.1088/0266-5611/20/1/006
    [9] G. López, V. Martín-Márquez, F. Wang, H. K. Xu, Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Probl., 28 (2012), 085004. https://doi.org/10.1088/0266-5611/28/8/085004 doi: 10.1088/0266-5611/28/8/085004
    [10] M. Fukushima, A relaxed projection method for variational inequalities, Math. Program., 35 (1986), 58–70. https://doi.org/10.1007/BF01589441 doi: 10.1007/BF01589441
    [11] Q. Yang, The relaxed CQ algorithm for solving the split feasibility problem, Inverse Probl., 20 (2004), 1261–1266. https://doi.org/10.1088/0266-5611/20/4/014 doi: 10.1088/0266-5611/20/4/014
    [12] X. Qin, L. Wang, A fixed point method for solving a split feasibility problem in Hilbert spaces, RACSAM, 13 (2019), 215–325. https://doi.org/10.1007/s13398-017-0476-6 doi: 10.1007/s13398-017-0476-6
    [13] Y. H. Wang, T. T. Xu, J. C. Yao, B. N. Jiang, Self-adaptive method and inertial modification for solving the split feasibility problem and fixed-point problem of quasi-nonexpansive mapping, Mathematics, 10 (2022), 1612. https://doi.org/10.3390/math10091612 doi: 10.3390/math10091612
    [14] M. Tian, A general iterative algorithm for nonexpansive mapping in Hilbert spaces, Nonlinear Anal., 73 (2010), 689–694. https://doi.org/10.1016/j.na.2010.03.058 doi: 10.1016/j.na.2010.03.058
[15] L. J. Kwari, J. Sunday, J. N. Ndam, A. Shokri, Y. Wang, On the simulations of second-order oscillatory problems with applications to physical systems, Axioms, 12 (2023), 282. https://doi.org/10.3390/axioms12030282 doi: 10.3390/axioms12030282
    [16] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert space, New York: Springer, 2011.
[17] S. Kesornprom, N. Pholasa, P. Cholamjiak, On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem, Numer. Algor., 84 (2020), 997–1017. https://doi.org/10.1007/s11075-019-00790-y doi: 10.1007/s11075-019-00790-y
    [18] S. Suantai, S. Kesornprom, N. Pholasa, Y. J. Cho, P. Cholamjiak, A relaxed projections method using a new linesearch for the split feasibility problem, AIMS Mathematics, 6 (2021), 2690–2703.
    [19] Y. H. Wang, M. Y. Yuan, B. N. Jiang, Multi-step inertial hybrid and shrinking Tseng's algorithm with Meir-Keeler contractions for variational inclusion problems, Mathematics, 9 (2021), 1548. https://doi.org/10.3390/math9131548 doi: 10.3390/math9131548
    [20] W. L. Sun, G. Lu, Y. F. Jin, C. Park, Self-adaptive algorithms for the split problem of the quasi-pseudocontractive operators in Hilbert spaces, AIMS Mathematics, 7 (2022), 8715–8732. https://doi.org/10.3934/math.2022487 doi: 10.3934/math.2022487
    [21] D. V. Thong, Viscosity approximation methods for solving fixed point problems and split common fixed point problems, J. Fixed Point Theory Appl., 19 (2017), 1481–1499. https://doi.org/10.1007/s11784-016-0323-y doi: 10.1007/s11784-016-0323-y
    [22] S. He, C. Yang, Solving the variational inequality problem defined on intersection of finite level sets, Abstr. Appl. Anal., 2013 (2013), 942315. https://doi.org/10.1155/2013/942315 doi: 10.1155/2013/942315
    [23] M. Tian, M. Tong, Self-adaptive subgradient extragradient method with inertial modification for solving monotone variational inequality problems and quasi-nonexpansive fixed point problems, J. Inequal. Appl., 2019 (2019), 7. https://doi.org/10.1186/s13660-019-1958-1 doi: 10.1186/s13660-019-1958-1
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)