Research article

A relaxed projection method using a new linesearch for the split feasibility problem

  • Received: 23 November 2020 Accepted: 28 December 2020 Published: 04 January 2021
  • MSC : 47H10, 54H25

  • In this work, we propose a new relaxed projection algorithm with a new linesearch for the split feasibility problem. The proposed method requires neither the computation of the matrix inverse nor of the largest eigenvalue of the matrix. We then prove some weak convergence theorems under suitable conditions in the framework of Hilbert spaces. Finally, we give some numerical examples in signal processing to validate the theoretical results. The obtained results improve the corresponding results in the literature.

    Citation: Suthep Suantai, Suparat Kesornprom, Nattawut Pholasa, Yeol Je Cho, Prasit Cholamjiak. A relaxed projection method using a new linesearch for the split feasibility problem[J]. AIMS Mathematics, 2021, 6(3): 2690-2703. doi: 10.3934/math.2021163




    In this work, we study the split feasibility problem (for short, (SFP)), which is formulated as follows:

    Find a point $x \in C$ such that $Ax \in Q$, (1.1)

    where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. This problem was first proposed by Censor and Elfving [7] in Euclidean spaces (for recent results on the problem (SFP), see [10,12,13,25]; for related splitting problems, see [14,17,22,24]). Many real-world applications, such as image reconstruction, signal recovery and intensity-modulated radiation therapy (for short, (IMRT)), can be cast as the problem (SFP).

    Byrne [3,4] proposed the following projection algorithm for solving the problem (SFP):

    $x_{k+1} = P_C(x_k - \alpha A^*(I - P_Q)Ax_k),$ (1.2)

    where $\alpha \in (0, 2/\|A^*A\|)$, $A^*$ is the adjoint operator of $A$, and $P_C$ and $P_Q$ are the projections onto $C$ and $Q$, respectively. This method is often called the CQ-algorithm. However, it is observed that, in general, the projections $P_C$ and $P_Q$ are not easy to compute.
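    As an illustration, the iteration (1.2) can be sketched in a few lines of NumPy for sets whose projections have closed forms; the choice of $C$ as a unit ball and $Q$ as a box below is ours, purely for demonstration:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, alpha, iters=500):
    """Byrne's CQ-algorithm (1.2): x_{k+1} = P_C(x_k - alpha * A^T(I - P_Q)A x_k)."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))      # A^*(I - P_Q)Ax_k
        x = proj_C(x - alpha * grad)
    return x

def proj_ball(x, r=1.0):                    # closed-form projection onto a ball
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_box(y, lo=0.0, hi=1.0):            # closed-form projection onto a box
    return np.clip(y, lo, hi)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # stepsize in (0, 2/||A||^2)
x0 = proj_ball(rng.standard_normal(5))      # start inside C
x = cq_algorithm(A, proj_ball, proj_box, x0, alpha)
```

    With this stepsize the iteration is a projected gradient descent on $\frac{1}{2}\|(I - P_Q)Ax\|^2$, so the residual is nonincreasing along the iterates.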

    To eliminate this difficulty, Fukushima [15] introduced the level sets $C$ and $Q$ defined by

    $C = \{x \in H_1 : c(x) \le 0\}, \qquad Q = \{y \in H_2 : q(y) \le 0\},$ (1.3)

    where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex and subdifferentiable functions whose subdifferentials $\partial c$ and $\partial q$ are bounded operators. Fukushima [15] also introduced a relaxed projection algorithm for solving the variational inequality problem and gave numerical experiments to support the main result.

    In 2004, Yang [27] presented a relaxed CQ-algorithm for solving the problem (SFP) by replacing $P_C$ and $P_Q$ with $P_{C_k}$ and $P_{Q_k}$, respectively. In this case, $C_k$ and $Q_k$ are defined by

    $C_k = \{x \in H_1 : c(x_k) \le \langle \xi_k, x_k - x \rangle\},$ (1.4)

    where $\xi_k \in \partial c(x_k)$, and

    $Q_k = \{y \in H_2 : q(Ax_k) \le \langle \zeta_k, Ax_k - y \rangle\},$ (1.5)

    where $\zeta_k \in \partial q(Ax_k)$.

    It is easily seen that $C \subseteq C_k$ and $Q \subseteq Q_k$ for all $k \ge 1$. We note that $P_{C_k}$ and $P_{Q_k}$ are easily computed since these sets are half-spaces.

    Precisely, Yang [27] proposed the following relaxed CQ-algorithm:

    $x_{k+1} = P_{C_k}(x_k - \alpha_k A^*(I - P_{Q_k})Ax_k),$ (1.6)

    where $\{\alpha_k\}$ is a sequence in $(0, 2/\|A^*A\|)$, $A^*$ is the adjoint operator of $A$, and $C_k$ and $Q_k$ are given by (1.4) and (1.5), respectively.
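    To make the half-space relaxation concrete, here is a hedged sketch of one iteration of (1.6) in NumPy; the level-set functions `c`, `q` and the subgradient oracles `dc`, `dq` are our own illustrative names, not from the paper:

```python
import numpy as np

def proj_halfspace(x, anchor, val, xi):
    """Project x onto {u : val + <xi, u - anchor> <= 0}, the half-space
    relaxation built at `anchor` (cf. (1.4)-(1.5))."""
    g = val + xi @ (x - anchor)
    return x if g <= 0 else x - (g / (xi @ xi)) * xi

def relaxed_cq_step(A, xk, c, dc, q, dq, alpha):
    """One iteration of the relaxed CQ-algorithm (1.6)."""
    Axk = A @ xk
    PQk_Axk = proj_halfspace(Axk, Axk, q(Axk), dq(Axk))  # P_{Q_k}(Ax_k)
    grad = A.T @ (Axk - PQk_Axk)                         # A^*(I - P_{Q_k})Ax_k
    return proj_halfspace(xk - alpha * grad, xk, c(xk), dc(xk))

# toy level sets: C = {c <= 0}, Q = {q <= 0} with smooth convex c, q
c = lambda v: v @ v - 1.0;  dc = lambda v: 2.0 * v
q = lambda v: v @ v - 1.0;  dq = lambda v: 2.0 * v
A = np.eye(2)
x_feasible = np.array([0.3, 0.4])
out = relaxed_cq_step(A, x_feasible, c, dc, q, dq, alpha=0.5)
```

    A point that is already feasible for both level sets is a fixed point of the step, since the gradient term vanishes and both half-space tests pass.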

    Recently, Gibali et al. [16] (see also Qu and Xiu [20]) proposed a modification of the relaxed CQ-algorithm using a linesearch technique in real Hilbert spaces. For each $k \ge 1$, define a function $F_k : H_1 \to \mathbb{R}$ by

    $F_k(x) = \frac{1}{2}\|Ax - P_{Q_k}Ax\|^2, \qquad \nabla F_k(x) = A^*(I - P_{Q_k})Ax.$ (1.7)

    They constructed the following algorithm:

    Algorithm 1.1. Given constants $\gamma > 0$, $\ell \in (0,1)$ and $\mu \in (0,1)$. Let $x_1$ be arbitrary in $H_1$. For each $k \ge 1$, calculate

    $y_k = P_{C_k}(x_k - \alpha_k \nabla F_k(x_k)),$ (1.8)

    where $\alpha_k = \gamma \ell^{m_k}$ and $m_k$ is the smallest nonnegative integer such that

    $\alpha_k \|\nabla F_k(x_k) - \nabla F_k(y_k)\| \le \mu \|x_k - y_k\|.$ (1.9)

    Construct the next iterate $x_{k+1}$ by

    $x_{k+1} = P_{C_k}(x_k - \alpha_k \nabla F_k(y_k)).$ (1.10)
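    A minimal NumPy sketch of one iteration of Algorithm 1.1, with the backtracking on $m_k$ written out explicitly (the helper names and the toy problem used to exercise it are ours):

```python
import numpy as np

def gibali_step(x, proj_Ck, grad_Fk, gamma=2.0, ell=0.5, mu=0.2):
    """One iteration of Algorithm 1.1: backtrack on m_k until (1.9) holds,
    then take the extragradient-type update (1.10)."""
    gx = grad_Fk(x)
    m = 0
    while True:
        alpha = gamma * ell ** m            # alpha_k = gamma * ell^{m_k}
        y = proj_Ck(x - alpha * gx)         # (1.8)
        gy = grad_Fk(y)
        if alpha * np.linalg.norm(gx - gy) <= mu * np.linalg.norm(x - y):  # (1.9)
            return proj_Ck(x - alpha * gy)  # (1.10)
        m += 1
```

    Note that each call may re-evaluate $\nabla F_k$ several times before (1.9) holds; this repeated backtracking is exactly the CPU-time cost discussed below.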

    There have been various methods invented for solving the problem (SFP) (see, for example, [5,6,7,8,9,10,11,12,14,15,16,17,18,19,21,26]). It is observed that, in each iteration, one has to find the integer $m_k$ such that (1.9) holds, which costs CPU time and iterations before convergence. It is our aim to design a new linesearch that takes less CPU time than Algorithm 1.1 and other methods.

    Recently, Tian and Zhang [23] proposed a regularized CQ-algorithm, without a priori knowledge of the operator norm, for solving the split feasibility problem and proved a strong convergence theorem in Hilbert spaces. Later, Dang et al. [11] introduced inertial relaxed CQ-algorithms for the split feasibility problem and proved their asymptotic convergence. In 2018, Dong et al. [13] introduced projection and contraction methods for solving the split feasibility problem, using the inverse strongly monotone property of the underlying operator of the (SFP).

    Motivated by these works, we modify the algorithm of Gibali et al. [16] to propose a relaxed CQ-algorithm with a new linesearch in real Hilbert spaces, and we prove some weak convergence theorems for the proposed method under mild assumptions. Finally, we give some computational results in comparison with the algorithms of Gibali et al. [16] and Yang [27]. Numerical experiments show that our algorithm performs better than the others.

    In this section, we give some definitions and lemmas which are used in the proof. Throughout this paper, we use the following notations:

    $\rightharpoonup$ denotes weak convergence;

    $\omega_w(x_k) = \{x : \exists\, (x_{k_n}) \subset (x_k) \text{ such that } x_{k_n} \rightharpoonup x\}$ denotes the weak $\omega$-limit set of $(x_k)$.

    Let H be a real Hilbert space. We recall the following definition.

    (1) A mapping $T : H \to H$ is said to be firmly nonexpansive if, for all $x, y \in H$,

    $\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2.$ (2.1)

    (2) A function $f : H \to \mathbb{R}$ is said to be convex if

    $f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda) f(y)$ (2.2)

    for all $\lambda \in (0,1)$ and all $x, y \in H$.

    A differentiable function $f$ is convex if and only if the following inequality holds:

    $f(z) \ge f(x) + \langle \nabla f(x), z - x \rangle$ (2.3)

    for all $x, z \in H$.

    (3) An element $g \in H$ is called a subgradient of $f : H \to \mathbb{R}$ at $x$ if

    $f(z) \ge f(x) + \langle g, z - x \rangle$ (2.4)

    for all $z \in H$; this is called the subdifferential inequality. The set of subgradients of $f$ at $x$ is denoted by $\partial f(x)$.

    (4) A function $f : H \to \mathbb{R}$ is said to be subdifferentiable at $x$ if it has at least one subgradient at $x$. A function $f$ is said to be subdifferentiable if it is subdifferentiable at every $x \in H$.

    (5) A function $f : H \to \mathbb{R}$ is said to be weakly lower semi-continuous (for short, w-lsc) at $x$ if $x_k \rightharpoonup x$ implies

    $f(x) \le \liminf_{k \to \infty} f(x_k).$ (2.5)

    (6) The projection of $H$ onto a nonempty closed convex subset $C \subseteq H$ is defined by

    $P_C x := \arg\min_{y \in C} \|x - y\|^2$ (2.6)

    for all $x \in H$.

    Next, we give some useful lemmas for our proof.

    Lemma 2.1. [2] Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Then, for any $x \in H$, the following assertions hold:

    (1) $\langle x - P_C x, z - P_C x \rangle \le 0$ for all $z \in C$;

    (2) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$ for all $x, y \in H$;

    (3) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|P_C x - x\|^2$ for all $z \in C$.

    From Lemma 2.1, we see that the operator $I - P_C$ is also firmly nonexpansive, where $I$ denotes the identity operator, i.e., for any $x, y \in H$,

    $\langle (I - P_C)x - (I - P_C)y, x - y \rangle \ge \|(I - P_C)x - (I - P_C)y\|^2.$ (2.7)
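    As a quick numerical sanity check of (2.7) (our own illustration, using the box $[0,1]^n$, whose projection is a componentwise clip):

```python
import numpy as np

def firmly_nonexpansive_holds(P, x, y):
    """Return True if inequality (2.7) holds for I - P at the pair (x, y)."""
    rx, ry = x - P(x), y - P(y)             # (I - P_C)x and (I - P_C)y
    return (rx - ry) @ (x - y) >= np.linalg.norm(rx - ry) ** 2 - 1e-12

P_box = lambda v: np.clip(v, 0.0, 1.0)      # projection onto the box [0,1]^n

rng = np.random.default_rng(1)
checks = [firmly_nonexpansive_holds(P_box, rng.standard_normal(4),
                                    rng.standard_normal(4))
          for _ in range(1000)]
```

    The small tolerance only guards against floating-point rounding; the inequality itself holds exactly for any metric projection onto a closed convex set.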

    Lemma 2.2. [1] Let $S$ be a nonempty closed convex subset of a real Hilbert space $H$ and $\{x_k\}$ be a sequence in $H$ satisfying the following properties:

    (a) $\lim_{k \to \infty} \|x_k - x\|$ exists for each $x \in S$;

    (b) $\omega_w(x_k) \subseteq S$.

    Then $\{x_k\}$ converges weakly to a point in $S$.

    Now, we give our main results in this paper.

    In this section, we introduce a new relaxed projection algorithm using a linesearch technique and prove its weak convergence. The mappings $F_k$ and $\nabla F_k$ are defined as in (1.7). Our algorithm is generated in the following manner:

    Algorithm 3.1. Given constants $\gamma > 0$, $\ell \in (0,1)$ and $\mu \in (0, \frac{1}{4})$. Let $x_1$ be arbitrary in $H_1$. For each $k \ge 1$, calculate

    $y_k = P_{C_k}(x_k - \alpha_k \nabla F_k(x_k)).$ (3.1)

    Construct the next iterate $x_{k+1}$ by

    $x_{k+1} = P_{C_k}(y_k - \alpha_k \nabla F_k(y_k)),$ (3.2)

    where $\alpha_k = \gamma \ell^{m_k}$ and $m_k$ is the smallest nonnegative integer such that

    $\alpha_k \max\{\|\nabla F_k(x_{k+1}) - \nabla F_k(y_k)\|, \|\nabla F_k(y_k) - \nabla F_k(x_k)\|\} \le \mu(\|x_{k+1} - y_k\| + \|y_k - x_k\|).$ (3.3)
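    For concreteness, one iteration of Algorithm 3.1 can be sketched in NumPy as follows (helper names are ours; `proj_Ck` and `grad_Fk` stand for $P_{C_k}$ and $\nabla F_k$):

```python
import numpy as np

def new_step(x, proj_Ck, grad_Fk, gamma=2.0, ell=0.5, mu=0.2):
    """One iteration of Algorithm 3.1: two projected steps (3.1)-(3.2),
    with alpha_k = gamma * ell^{m_k} chosen by the new linesearch (3.3)."""
    gx = grad_Fk(x)
    m = 0
    while True:
        alpha = gamma * ell ** m
        y = proj_Ck(x - alpha * gx)             # (3.1)
        gy = grad_Fk(y)
        x_next = proj_Ck(y - alpha * gy)        # (3.2)
        lhs = alpha * max(np.linalg.norm(grad_Fk(x_next) - gy),
                          np.linalg.norm(gy - gx))
        rhs = mu * (np.linalg.norm(x_next - y) + np.linalg.norm(y - x))
        if lhs <= rhs:                          # (3.3)
            return x_next
        m += 1
```

    The default $\mu = 0.2$ respects the requirement $\mu \in (0, \frac{1}{4})$; Lemma 3.1 below guarantees that the backtracking loop terminates.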

    It is remarked that Algorithm 3.1 is based on an extragradient-type method, which was first introduced by Noor [28,29].

    Following the lines of the proof in [20], we obtain the following lemma:

    Lemma 3.1. Let $\gamma > 0$, $\ell \in (0,1)$ and $\mu \in (0, \frac{1}{4})$. Then the linesearch (3.3) terminates after a finite number of steps. In addition, we have

    $\frac{\mu \ell}{L} < \alpha_k \le \gamma$ (3.4)

    for all $k \ge 1$, where $L = \|A\|^2$.

    Proof. From [20], we know that

    $\|\nabla F_k(x_{k+1}) - \nabla F_k(y_k)\| \le L \|x_{k+1} - y_k\|$ (3.5)

    and

    $\|\nabla F_k(y_k) - \nabla F_k(x_k)\| \le L \|y_k - x_k\|.$ (3.6)

    It follows that

    $\max\{\|\nabla F_k(x_{k+1}) - \nabla F_k(y_k)\|, \|\nabla F_k(y_k) - \nabla F_k(x_k)\|\} \le \|\nabla F_k(x_{k+1}) - \nabla F_k(y_k)\| + \|\nabla F_k(y_k) - \nabla F_k(x_k)\| \le L(\|x_{k+1} - y_k\| + \|y_k - x_k\|).$ (3.7)

    The second part follows from [20]. Hence the linesearch (3.3) is well defined. This completes the proof.

    In what follows, we denote by $S$ the solution set of the problem (SFP), and we assume that $S$ is nonempty.

    Theorem 3.2. The sequence $\{x_k\}$ generated by Algorithm 3.1 converges weakly to a point in $S$.

    Proof. Let $z \in S$. Since $C \subseteq C_k$ and $Q \subseteq Q_k$, it follows that $z = P_C(z) = P_{C_k}(z)$ and $Az = P_Q(Az) = P_{Q_k}(Az)$. Hence $\nabla F_k(z) = 0$. Using Lemma 2.1 (3), we get

    $\begin{aligned} \|x_{k+1} - z\|^2 &= \|P_{C_k}(y_k - \alpha_k \nabla F_k(y_k)) - z\|^2 \\ &\le \|y_k - \alpha_k \nabla F_k(y_k) - z\|^2 - \|x_{k+1} - y_k + \alpha_k \nabla F_k(y_k)\|^2 \\ &= \|y_k - z\|^2 + \alpha_k^2 \|\nabla F_k(y_k)\|^2 - 2\alpha_k \langle \nabla F_k(y_k), y_k - z \rangle - \|x_{k+1} - y_k\|^2 - \alpha_k^2 \|\nabla F_k(y_k)\|^2 - 2\alpha_k \langle \nabla F_k(y_k), x_{k+1} - y_k \rangle \\ &= \|y_k - z\|^2 - 2\alpha_k \langle \nabla F_k(y_k), y_k - z \rangle - \|x_{k+1} - y_k\|^2 - 2\alpha_k \langle \nabla F_k(y_k), x_{k+1} - y_k \rangle. \end{aligned}$ (3.8)

    Also, we obtain

    $\begin{aligned} \|y_k - z\|^2 &= \|P_{C_k}(x_k - \alpha_k \nabla F_k(x_k)) - z\|^2 \\ &\le \|x_k - \alpha_k \nabla F_k(x_k) - z\|^2 - \|y_k - x_k + \alpha_k \nabla F_k(x_k)\|^2 \\ &= \|x_k - z\|^2 + \alpha_k^2 \|\nabla F_k(x_k)\|^2 - 2\alpha_k \langle \nabla F_k(x_k), x_k - z \rangle - \|y_k - x_k\|^2 - \alpha_k^2 \|\nabla F_k(x_k)\|^2 - 2\alpha_k \langle \nabla F_k(x_k), y_k - x_k \rangle \\ &= \|x_k - z\|^2 - 2\alpha_k \langle \nabla F_k(x_k), x_k - z \rangle - \|y_k - x_k\|^2 - 2\alpha_k \langle \nabla F_k(x_k), y_k - x_k \rangle. \end{aligned}$ (3.9)

    Due to (2.7) and $\nabla F_k(z) = 0$, we have

    $\begin{aligned} -2\alpha_k \langle \nabla F_k(y_k), y_k - z \rangle &= -2\alpha_k \langle \nabla F_k(y_k) - \nabla F_k(z), y_k - z \rangle \\ &= -2\alpha_k \langle A^*(I - P_{Q_k})Ay_k - A^*(I - P_{Q_k})Az, y_k - z \rangle \\ &= -2\alpha_k \langle (I - P_{Q_k})Ay_k - (I - P_{Q_k})Az, Ay_k - Az \rangle \\ &\le -2\alpha_k \|(I - P_{Q_k})Ay_k\|^2. \end{aligned}$ (3.10)

    It also follows that

    $-2\alpha_k \langle \nabla F_k(x_k), x_k - z \rangle \le -2\alpha_k \|(I - P_{Q_k})Ax_k\|^2.$ (3.11)

    From (2.3) and the Cauchy–Schwarz inequality, we have

    $\begin{aligned} -2\alpha_k \langle \nabla F_k(y_k), x_{k+1} - y_k \rangle &= -2\alpha_k \langle \nabla F_k(y_k) - \nabla F_k(x_{k+1}), x_{k+1} - y_k \rangle - 2\alpha_k \langle \nabla F_k(x_{k+1}), x_{k+1} - y_k \rangle \\ &\le 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_{k+1})\| \|x_{k+1} - y_k\| + 2\alpha_k \Big( \tfrac{1}{2}\|(I - P_{Q_k})Ay_k\|^2 - \tfrac{1}{2}\|(I - P_{Q_k})Ax_{k+1}\|^2 \Big) \\ &= 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_{k+1})\| \|x_{k+1} - y_k\| + \alpha_k \|(I - P_{Q_k})Ay_k\|^2 - \alpha_k \|(I - P_{Q_k})Ax_{k+1}\|^2. \end{aligned}$ (3.12)

    Also, we have

    $-2\alpha_k \langle \nabla F_k(x_k), y_k - x_k \rangle \le 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_k)\| \|y_k - x_k\| + \alpha_k \|(I - P_{Q_k})Ax_k\|^2 - \alpha_k \|(I - P_{Q_k})Ay_k\|^2.$ (3.13)

    Combining (3.8)–(3.13), we obtain

    $\begin{aligned} \|x_{k+1} - z\|^2 &\le \|x_k - z\|^2 - 2\alpha_k \langle \nabla F_k(x_k), x_k - z \rangle - \|y_k - x_k\|^2 - 2\alpha_k \langle \nabla F_k(x_k), y_k - x_k \rangle \\ &\quad - 2\alpha_k \langle \nabla F_k(y_k), y_k - z \rangle - \|x_{k+1} - y_k\|^2 - 2\alpha_k \langle \nabla F_k(y_k), x_{k+1} - y_k \rangle \\ &\le \|x_k - z\|^2 - 2\alpha_k \|(I - P_{Q_k})Ax_k\|^2 - \|y_k - x_k\|^2 + 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_k)\| \|y_k - x_k\| + \alpha_k \|(I - P_{Q_k})Ax_k\|^2 - \alpha_k \|(I - P_{Q_k})Ay_k\|^2 \\ &\quad - 2\alpha_k \|(I - P_{Q_k})Ay_k\|^2 - \|x_{k+1} - y_k\|^2 + 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_{k+1})\| \|x_{k+1} - y_k\| + \alpha_k \|(I - P_{Q_k})Ay_k\|^2 - \alpha_k \|(I - P_{Q_k})Ax_{k+1}\|^2 \\ &\le \|x_k - z\|^2 - \alpha_k \|(I - P_{Q_k})Ax_k\|^2 - \|y_k - x_k\|^2 + 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_k)\| \|y_k - x_k\| \\ &\quad - 2\alpha_k \|(I - P_{Q_k})Ay_k\|^2 - \|x_{k+1} - y_k\|^2 + 2\alpha_k \|\nabla F_k(y_k) - \nabla F_k(x_{k+1})\| \|x_{k+1} - y_k\|. \end{aligned}$ (3.14)

    Using (3.3) and (3.4), we have

    $\begin{aligned} \|x_{k+1} - z\|^2 &\le \|x_k - z\|^2 - \frac{\mu\ell}{L} \|(I - P_{Q_k})Ax_k\|^2 - \|y_k - x_k\|^2 + 2\mu (\|x_{k+1} - y_k\| + \|y_k - x_k\|) \|y_k - x_k\| \\ &\quad - \frac{2\mu\ell}{L} \|(I - P_{Q_k})Ay_k\|^2 - \|x_{k+1} - y_k\|^2 + 2\mu (\|x_{k+1} - y_k\| + \|y_k - x_k\|) \|x_{k+1} - y_k\| \\ &= \|x_k - z\|^2 - \frac{\mu\ell}{L} \|(I - P_{Q_k})Ax_k\|^2 - \|y_k - x_k\|^2 + 2\mu \|x_{k+1} - y_k\| \|y_k - x_k\| + 2\mu \|y_k - x_k\|^2 \\ &\quad - \frac{2\mu\ell}{L} \|(I - P_{Q_k})Ay_k\|^2 - \|x_{k+1} - y_k\|^2 + 2\mu \|x_{k+1} - y_k\|^2 + 2\mu \|y_k - x_k\| \|x_{k+1} - y_k\| \\ &\le \|x_k - z\|^2 - \frac{\mu\ell}{L} \|(I - P_{Q_k})Ax_k\|^2 - \|y_k - x_k\|^2 + \mu \|x_{k+1} - y_k\|^2 + \mu \|y_k - x_k\|^2 + 2\mu \|y_k - x_k\|^2 \\ &\quad - \frac{2\mu\ell}{L} \|(I - P_{Q_k})Ay_k\|^2 - \|x_{k+1} - y_k\|^2 + 2\mu \|x_{k+1} - y_k\|^2 + \mu \|y_k - x_k\|^2 + \mu \|x_{k+1} - y_k\|^2 \\ &= \|x_k - z\|^2 - \frac{\mu\ell}{L} \|(I - P_{Q_k})Ax_k\|^2 - (1 - 4\mu) \|y_k - x_k\|^2 - (1 - 4\mu) \|x_{k+1} - y_k\|^2 - \frac{2\mu\ell}{L} \|(I - P_{Q_k})Ay_k\|^2. \end{aligned}$ (3.15)

    This implies that

    $\|x_{k+1} - z\| \le \|x_k - z\|$ (3.16)

    and hence $\lim_{k \to \infty} \|x_k - z\|$ exists. Moreover, $\{x_k\}$ is bounded. From (3.15), we also have

    $\lim_{k \to \infty} \|y_k - x_k\| = 0$ (3.17)

    and

    $\lim_{k \to \infty} \|x_{k+1} - y_k\| = 0.$ (3.18)

    Moreover, we have

    $\lim_{k \to \infty} \|(I - P_{Q_k})Ax_k\| = 0.$ (3.19)

    So, we obtain

    $\|x_{k+1} - x_k\| = \|x_{k+1} - y_k + y_k - x_k\| \le \|x_{k+1} - y_k\| + \|y_k - x_k\| \to 0$ (3.20)

    as $k \to \infty$. Since $\{x_k\}$ is bounded, $\omega_w(x_k)$ is nonempty. Let $x^* \in \omega_w(x_k)$. Then there exists a subsequence $\{x_{k_n}\}$ of $\{x_k\}$ such that $x_{k_n} \rightharpoonup x^*$.

    Now, we show that $x^* \in S$. Since $x_{k_n+1} \in C_{k_n}$, by the definition of $C_{k_n}$ we get

    $c(x_{k_n}) \le \langle \xi_{k_n}, x_{k_n} - x_{k_n+1} \rangle,$ (3.21)

    where $\xi_{k_n} \in \partial c(x_{k_n})$. By the boundedness of $\partial c$ and (3.20), we have

    $c(x_{k_n}) \le \|\xi_{k_n}\| \|x_{k_n} - x_{k_n+1}\| \to 0$ (3.22)

    as $n \to \infty$. By the w-lsc of $c$, $x_{k_n} \rightharpoonup x^*$ and (3.22), it follows that

    $c(x^*) \le \liminf_{n \to \infty} c(x_{k_n}) \le 0.$ (3.23)

    Hence $x^* \in C$.

    Next, we show that $Ax^* \in Q$. Since $P_{Q_{k_n}}(Ax_{k_n}) \in Q_{k_n}$, we have

    $q(Ax_{k_n}) \le \langle \eta_{k_n}, Ax_{k_n} - P_{Q_{k_n}}(Ax_{k_n}) \rangle,$ (3.24)

    where $\eta_{k_n} \in \partial q(Ax_{k_n})$. So, by the boundedness of $\partial q$ and (3.19), we have

    $q(Ax_{k_n}) \le \|\eta_{k_n}\| \|Ax_{k_n} - P_{Q_{k_n}}(Ax_{k_n})\| \to 0$ (3.25)

    as $n \to \infty$. Since $x_{k_n} \rightharpoonup x^*$, we have $Ax_{k_n} \rightharpoonup Ax^*$. The w-lsc of $q$ and (3.25) imply that

    $q(Ax^*) \le \liminf_{n \to \infty} q(Ax_{k_n}) \le 0.$ (3.26)

    Hence $Ax^* \in Q$ and thus $x^* \in S$. Using Lemma 2.2, we conclude that the sequence $\{x_k\}$ converges weakly to a point in $S$. This completes the proof.

    Remark 3.4. Algorithm 3.1 in Theorem 3.2 is quite different from that of Gibali et al. [16]. More precisely, the linesearch (3.3) is defined in a new way based on a two-step method.

    Remark 3.5. In practice, Theorem 3.2 is more convenient than the results of Yang [27]. In fact, we do not require any information about the operator norm, which is not easy to compute.

    In this section, we provide some numerical experiments in signal recovery to compare our algorithm with those of Yang [27] and Gibali et al. [16]. In signal processing, compressed sensing can be formulated as inverting the system

    $y = Ax + \varepsilon,$ (4.1)

    where $x \in \mathbb{R}^N$ is a vector with $m$ nonzero components to be recovered, $y \in \mathbb{R}^M$ is the observed or measured data with noise $\varepsilon$, and $A : \mathbb{R}^N \to \mathbb{R}^M$ $(M < N)$ is a bounded linear observation operator. In most inverse problems, $A$ is sparse and its range is not closed; thus $A$ is often ill-conditioned and the problem is ill-posed.

    When $x$ admits a sparse expansion, finding solutions of the problem (4.1) can be cast as the following LASSO problem:

    $\min_{x \in \mathbb{R}^N} \frac{1}{2}\|y - Ax\|^2 \quad \text{subject to} \quad \|x\|_1 \le t,$ (4.2)

    where $t > 0$ is a given constant. The sparse vector $x \in \mathbb{R}^N$ is generated from the uniform distribution on the interval $[-2, 2]$ with $m$ nonzero elements. The matrix $A \in \mathbb{R}^{M \times N}$ is generated from a normal distribution with mean zero and variance one. The observation $y$ is generated by adding white Gaussian noise with signal-to-noise ratio SNR = 40. The process is started with $t = m$ and the initial point $x_1$ is picked randomly.

    Let $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ and $Q = \{y\}$. Then the minimization problem (4.2) can be seen as the problem (SFP), and, since the projection onto the closed convex set $C$ does not have a closed-form solution, we make use of the subgradient projection. Define the convex function $c(x) = \|x\|_1 - t$ and the level set $C_k$ by

    $C_k = \{x : c(x_k) + \langle \xi_k, x - x_k \rangle \le 0\},$ (4.3)

    where $\xi_k \in \partial c(x_k)$. Then the orthogonal projection onto $C_k$ can be calculated as follows:

    $P_{C_k}(x) = \begin{cases} x, & \text{if } c(x_k) + \langle \xi_k, x - x_k \rangle \le 0, \\ x - \dfrac{c(x_k) + \langle \xi_k, x - x_k \rangle}{\|\xi_k\|^2}\, \xi_k, & \text{otherwise.} \end{cases}$ (4.4)

    The subdifferential of $c$ at $x_k$ is given componentwise by

    $\partial c(x_k)_i = \begin{cases} \{1\}, & (x_k)_i > 0, \\ [-1, 1], & (x_k)_i = 0, \\ \{-1\}, & (x_k)_i < 0. \end{cases}$ (4.5)
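    In code, (4.4) and (4.5) amount to a sign-vector subgradient and a single half-space projection; a sketch under our own variable names:

```python
import numpy as np

t = 10.0                                   # radius of the l1-ball constraint

def c(x):                                  # c(x) = ||x||_1 - t
    return np.abs(x).sum() - t

def subgrad_c(x):
    """A subgradient of c at x, per (4.5): np.sign picks 0 at zero
    components, which lies in the allowed interval [-1, 1]."""
    return np.sign(x)

def proj_Ck(x, xk):
    """Subgradient projection (4.4) onto C_k = {x : c(x_k) + <xi_k, x - x_k> <= 0}."""
    xi = subgrad_c(xk)
    g = c(xk) + xi @ (x - xk)
    if g <= 0:                             # x already lies in the half-space
        return x
    return x - (g / (xi @ xi)) * xi
```

    For instance, with $x_k = (3,3,3,3,3)$ (so $\|x_k\|_1 = 15 > t$), projecting $x_k$ itself onto $C_k$ shifts every component down to $2$. Note also that when $x_k = 0$ the guard `g <= 0` always fires (since $c(0) = -t < 0$), so the division by $\|\xi_k\|^2$ is never reached.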

    In our experiment, we consider two cases as follows:

    Case 1: $N = 512$, $M = 256$ and $m = 10$;

    Case 2: $N = 4096$, $M = 2048$ and $m = 100$.

    Next, we give some numerical results obtained by the relaxed CQ-algorithms of Yang [27] and Gibali et al. [16] and by our algorithm (Algorithm 3.1).

    The iteration is stopped when the following criterion is satisfied:

    $\mathrm{MSE} = \frac{1}{N}\|x^* - x\|^2 < 10^{-5},$ (4.6)

    where $x^*$ is the estimated signal of $x$.

    In what follows, we let $\alpha_k = \frac{1}{\|A\|^2}$ in the CQ-algorithm of Yang [27] and set $\gamma = 2$, $\ell = 0.5$ and $\mu = 0.2$ in the algorithm of Gibali et al. [16]. The numerical results are reported as follows:

    Remark 4.1. (1) From Figures 1 and 4, we see that, in both Case 1 and Case 2, Algorithm 3.1 recovers the signal in fewer iterations and less CPU time than the algorithms of Yang [27] and Gibali et al. [16]. In the algorithm of Yang [27], the stepsize $\alpha_k$ depends on the operator norm $\|A\|$; when the matrix has a large dimension, this norm may be very hard to compute, at a costly CPU time. It is also observed that our new linesearch takes less CPU time than that of Gibali et al. [16].

    Figure 1.  From top to bottom: original signal, observation data, recovered signal by Gibali et al., Yang and Algorithm 3.1 with N=512 and M=256, respectively.

    (2) In Figures 2 and 5, we plot the error values against the number of iterations. We see that the errors of Algorithm 3.1 decrease faster than those of the other algorithms. Also, in Figures 3 and 6, the objective function values obtained by Algorithm 3.1 show better convergence behaviour than the other algorithms.

    Figure 2.  MSE versus number of iterations in case N = 512, M = 256.
    Figure 3.  The objective function value versus number of iterations in case N = 512, M = 256.
    Figure 4.  From top to bottom: original signal, observation data, recovered signal by Gibali et al., Yang and Algorithm 3.1 with N=4096 and M=2048, respectively.
    Figure 5.  MSE versus number of iterations in case N = 4096, M = 2048.
    Figure 6.  The objective function value versus number of iterations in case N = 4096, M = 2048.

    In this paper, we proposed a relaxed CQ-algorithm with two new steps, using a new linesearch, in real Hilbert spaces. The computation of the matrix inverse and of operator norms is not required in our algorithm. We also showed, in a simple and novel way, that the sequence generated by the method converges weakly to a solution of the problem (SFP). All the results were compared with the relaxed CQ-algorithms of Yang [27] and Gibali et al. [16]. Finally, we found that the proposed algorithm is effective and outperforms other known methods in the literature.

    The first author thanks Chiang Mai University for financial support.

    The authors declare no conflict of interest.



    [1] H. H. Bauschke, P. L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Math. Oper. Res., 26 (2001), 248–264. doi: 10.1287/moor.26.2.248.10558
    [2] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, London, 2011.
    [3] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Probl., 18 (2002), 441–453. doi: 10.1088/0266-5611/18/2/310
    [4] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. doi: 10.1088/0266-5611/20/1/006
    [5] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. doi: 10.1088/0031-9155/51/10/001
    [6] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, The split feasibility model leading to a unified approach for inversion problems in intensity-modulated radiation therapy, Technical Report, Department of Mathematics, University of Haifa, Israel, 2005.
    [7] Y. Censor, T. Elfving, A multiprojection algorithms using Bregman projection in a product space, Numer. Algorithms, 8 (1994), 221–239. doi: 10.1007/BF02142692
    [8] Y. Censor, T. Elfving, N. Kopf, T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problem, Inverse Prob., 21 (2005), 2071–2084. doi: 10.1088/0266-5611/21/6/017
    [9] Y. Censor, M. Jiang, G. Wang, Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Medical Physics Publishing, Madison, Wisconsin, 2010.
    [10] P. Cholamjiak, S. Kesornprom, N. Pholasa, Weak and strong convergence theorems for the inclusion problem and the fixed-point problem of nonexpansive mappings, Mathematics, 7 (2019), 1–19.
    [11] Y. Z. Dang, J. Sun, H. L. Xu, Inertial accelerated algorithms for solving a split feasibility problem, J. Ind. Manage. Optim., 13 (2017), 1383–1394.
    [12] Q. L. Dong, X. H. Li, D. Kitkuan, Y. J. Cho, P. Kumam, Some algorithms for classes of split feasibility problems involving paramonotone equilibria and convex optimization, J. Inequal. Appl., 2019 (2019), 1–23. doi: 10.1186/s13660-019-1955-4
    [13] Q. L. Dong, Y. C. Tang, Y. J. Cho, Th. M. Rassias, "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem, J. Global Optim., 71 (2018), 341–360. doi: 10.1007/s10898-018-0628-z
    [14] M. Farid, The subgradient extragradient method for solving mixed equilibrium problems and fixed point problems in Hilbert spaces, J. Appl. Numer. Optim., 1 (2019), 335–345.
    [15] M. Fukushima, A relaxed projection method for variational inequalities, Math. Program., 35 (1986), 58–70. doi: 10.1007/BF01589441
    [16] A. Gibali, L. W. Liu, Y. C. Tang, Note on the modified relaxation CQ-algorithm for the split feasibility problem, Optim. Lett., 12 (2017), 1–14.
    [17] S. Husain, N. Singh, A hybrid iterative algorithm for a split mixed equilibrium problem and a hierarchical fixed point problem, Appl. Set-Valued Anal. Optim., 1 (2019), 149–169.
    [18] G. López, V. Martín-Márquez, F. H. Wang, H. K. Xu, Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Prob., 28 (2012), 085004. Available from: http://iopscience.iop.org/0266-5611/28/8/085004.
    [19] G. López, V. Martín-Márquez, H. K. Xu, Perturbation techniques for nonexpansive mappings with applications, Nonlinear Anal.: Real Word Appl., 10 (2009), 2369–2383. doi: 10.1016/j.nonrwa.2008.04.020
    [20] B. Qu, N. H. Xiu, A note on the CQ-algorithm for the split feasibility problem, Inverse Prob., 21 (2005), 1655–1665. doi: 10.1088/0266-5611/21/5/009
    [21] S. Suantai, S. Kesornprom, P. Cholamjiak, A new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces and its applications to compressed sensing, Mathematics, 7 (2019), 789. doi: 10.3390/math7090789
    [22] W. Takahahsi, J. C. Yao, The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces, J. Nonlinear Convex Anal., 20 (2019), 173–195.
    [23] M. Tian, H. F. Zhang, The regularized CQ algorithm without a priori knowledge of operator norm for solving the split feasibility problem, J. Inequalities Appl., 2017 (2017), 1–10. doi: 10.1186/s13660-016-1272-0
    [24] S. Wang, Y. Zhang, W. Wang, H. Guo, Extragradient algorithms for split pseudomonotone equilibrium problems and fixed point problems in Hilbert spaces, J. Nonlinear Funct. Anal., 2019 (2019), 1–19.
    [25] U. Witthayarat, Y. J. Cho, P. Cholamjiak, On solving proximal split feasibility problems and applications, Ann. Funct. Anal., 9 (2018), 111–122. doi: 10.1215/20088752-2017-0028
    [26] H. K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Prob., 26 (2010), 105018. doi: 10.1088/0266-5611/26/10/105018
    [27] Q. Z. Yang, The relaxed CQ-algorithm for solving the split feasibility problem, Inverse Prob., 20 (2004), 1261–1266. doi: 10.1088/0266-5611/20/4/014
    [28] M. A. Noor, Some developments in general variational inequalities, Appl. Math. Comput., 152 (2004), 199–277.
    [29] M. A. Noor, K. I. Noor, M. Th. Rassias, New trends in general variational inequalities, Acta Applicandae Math., 170 (2020), 981–1064.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
