Research article

The hybrid methods of projection-splitting for solving tensor split feasibility problem

  • Received: 15 April 2023 Revised: 23 May 2023 Accepted: 29 May 2023 Published: 26 June 2023
  • MSC : 60H35; 65F10

  • Projection-type methods have been widely and deeply studied for solving the multiple-sets split feasibility problem. In this paper, from the perspective of splitting iteration, efficient tensor projection-splitting iteration methods are investigated for the first time for solving the tensor split feasibility problem, which is transformed into a tensor equation by means of a projection operator. Then, the accelerated overrelaxation (AOR) method and the symmetric (alternating) accelerated overrelaxation (SAOR) method are generalized from general systems of linear equations to multi-linear systems to solve this problem. Theoretically, convergence is proven by analyzing the spectral radius of the splitting iteration tensor. Numerical experiments demonstrate the efficiency of the presented methods.

    Citation: Yajun Xie, Changfeng Ma. The hybrid methods of projection-splitting for solving tensor split feasibility problem[J]. AIMS Mathematics, 2023, 8(9): 20597-20611. doi: 10.3934/math.20231050




    The split feasibility problem (SFP) is formulated as finding a point $x$ with the following property:

    $$x \in C, \quad Ax \in Q, \tag{1.1}$$

    where $C$ and $Q$ are nonempty closed convex sets. In this paper, we focus on the following so-called tensor split feasibility problem (TSFP):

    $$x \in C, \quad \mathcal{A}x^{m-1} \in Q, \tag{1.2}$$

    where $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^n$ are nonempty closed convex sets, and $\mathcal{A} \in \mathbb{R}^{[m,n]}$ is a tensor of order $m$ and dimension $n$. When $m = 2$, i.e., $\mathcal{A}$ is a matrix, (1.2) reduces to the split feasibility problem (SFP) (1.1) (see [1,2]). Furthermore, (1.2) can be regarded as a general case of the multiple-sets split feasibility problem.

    The SFP has attracted considerable attention from researchers. For instance, Censor et al. [3] presented an interesting approach for solving the multiple-sets split feasibility problem. However, it depends heavily on the step size, a fixed Lipschitz constant of the gradient of the objective function, which is normally hard to estimate. By adopting variable step sizes based on information from the current iterate, Zhang, Han et al. [4] investigated a self-adaptive projection-type method for the nonlinear multiple-sets split feasibility problem, which has a variety of real-world applications, e.g., image reconstruction, medical care (such as the inverse problem of intensity-modulated radiation therapy (IMRT)) and signal processing [5,6]. To choose the best step size for the current direction, Zhao, Yang et al. [7] studied a simple projection method for solving the multiple-sets split feasibility problem, which is easy to implement. They also proposed several efficient acceleration schemes for solving the multiple-sets split feasibility problem [8].

    All of the literature mentioned above mainly focuses on solving an equivalent optimization problem. The issue can be developed from another point of view, namely the perspective of splitting iteration for solving a tensor equation. Certainly, first of all, the split feasibility problem (1.2) should be transformed into a constrained multi-linear system via the projection operator $P_Q(\cdot)$. The exact scheme is written as follows:

    $$\mathcal{A}x^{m-1} = P_Q(\mathcal{A}x^{m-1}), \quad \text{s.t.} \ x \in C. \tag{1.3}$$

    As is known, a critical issue in pure and applied mathematics is solving various classes of systems. Especially for the data-analysis needs arising from big data, rapid and efficient computing techniques for multi-linear systems have received serious attention in science and engineering. In fact, it is not easy to obtain a precise solution by direct methods. As a result, different kinds of iterative strategies have been established to solve this problem, concentrating mostly on fast solvers for multi-linear systems. For instance, Ding and Wei [9,10] extended the classical Jacobi, Gauss-Seidel, successive overrelaxation and Newton methods. In view of the costly computations of the Newton method, Han [11] developed a homotopy method based on an Euler-Newton prediction-correction strategy to solve multi-linear systems with nonsymmetric M-tensors. The tensor splitting method and its convergence results have been studied in detail by Liu, Li et al. [12]. Moreover, some valuable comparison results of splitting iterations for solving multi-linear systems were achieved in [13].

    Motivated by [13], we extend a tensor alternating splitting iteration scheme for solving the constrained multi-linear system (1.3). Further, the accelerated overrelaxation (AOR) method and the symmetric accelerated overrelaxation (SAOR) method are generalized for solving (1.3) within a practical alternating splitting iteration framework.

    The outline of this paper is as follows. Some basic and useful notation is briefly provided in Section 2. In Section 3, a tensor alternating splitting iteration scheme is proposed for solving constrained tensor systems. In Section 4, the classical AOR and SAOR approaches are further extended to solve the constrained multi-linear system; meanwhile, a few numerical experiments are carried out to demonstrate the efficiency of the presented iteration methods. Finally, concluding remarks are drawn in Section 5.

    In this section, some basic results and useful definitions are introduced which are beneficial for the discussions in the following parts.

    First of all, let $A \in \mathbb{R}^{[2,n]}$ (i.e., a matrix) and $\mathcal{B} \in \mathbb{R}^{[k,n]}$ ($k > 2$). The matrix-tensor product $\mathcal{C} = A\mathcal{B} \in \mathbb{R}^{[k,n]}$ is defined by

    $$c_{j i_2 \cdots i_k} = \sum_{j_2=1}^{n} a_{j j_2} b_{j_2 i_2 \cdots i_k}. \tag{2.1}$$

    Hence, the formula above is usually expressed as

    $$\mathcal{C}_{(1)} = (A\mathcal{B})_{(1)} = A\mathcal{B}_{(1)}, \tag{2.2}$$

    where $\mathcal{C}_{(1)}$ and $\mathcal{B}_{(1)}$ are the matrices generated from $\mathcal{C}$ and $\mathcal{B}$ by flattening along the first index. For more details, see [14,15].
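    As an illustration of (2.1) and (2.2), the product can be computed by unfolding along the first index. The following NumPy sketch (our own helper names; the paper's experiments use MATLAB) checks the unfolding identity against a direct evaluation of (2.1):

```python
import numpy as np

def mode1_unfold(t):
    """Flatten a tensor along its first index: row i collects t[i, ...]."""
    return t.reshape(t.shape[0], -1)

def matrix_tensor_product(a, b):
    """C = A B with c_{j i2...ik} = sum_{j2} a_{j j2} b_{j2 i2...ik},
    computed via the unfolding identity C_(1) = A B_(1) of (2.2)."""
    c_flat = a @ mode1_unfold(b)
    return c_flat.reshape((a.shape[0],) + b.shape[1:])

# Check against a direct einsum evaluation of (2.1) for an order-3 tensor.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4, 4))
C = matrix_tensor_product(A, B)
C_direct = np.einsum('ij,jkl->ikl', A, B)
```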

    Definition 2.1. [16] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be a tensor of order $m$ and dimension $n$. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is the $n \times n$ matrix with entries

    $$M(\mathcal{A})_{ij} = a_{ij\cdots j}, \quad i, j = 1, 2, \ldots, n. \tag{2.3}$$

    Definition 2.2. [17] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A} = M(\mathcal{A})\mathcal{I}_m$, then $M(\mathcal{A})^{-1}$ is called the order-2 left-inverse of tensor $\mathcal{A}$, and $\mathcal{A}$ is called left-nonsingular, where $\mathcal{I}_m$ is an identity tensor with all diagonal elements equal to 1.

    Definition 2.3. [12] Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. If $\mathcal{E}$ is left-nonsingular, then $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is called a splitting of tensor $\mathcal{A}$. If $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1} \geq 0$ and $\mathcal{F} \geq 0$ (here $\geq$ denotes the elementwise comparison), then the splitting of $\mathcal{A}$ is called a regular splitting. If $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\mathcal{F} \geq 0$, then the splitting of $\mathcal{A}$ is called a weak regular splitting. If the spectral radius of $M(\mathcal{E})^{-1}\mathcal{F}$ is less than 1, i.e., $\rho(M(\mathcal{E})^{-1}\mathcal{F}) < 1$, then the splitting is convergent.

    Definition 2.4. [18] Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. $\mathcal{A}$ is called a Z-tensor if its off-diagonal entries are non-positive. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta \geq \rho(\mathcal{B})$ such that

    $$\mathcal{A} = \eta\mathcal{I}_m - \mathcal{B}.$$

    Further, $\mathcal{A}$ is called a strong M-tensor if $\eta > \rho(\mathcal{B})$.

    Definition 2.5. [19] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an eigenvalue-eigenvector pair of tensor $\mathcal{A}$ if it satisfies the system

    $$\mathcal{A}x^{m-1} = \lambda x^{[m-1]}, \tag{2.4}$$

    where $x^{[m-1]} = (x_1^{m-1}, x_2^{m-1}, \ldots, x_n^{m-1})^T$. The pair $(\lambda, x)$ is called an H-eigenpair if both $\lambda$ and the vector $x$ are real.
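    The action $\mathcal{A}x^{m-1}$ and the Hadamard power $x^{[m-1]}$ in (2.4) can be sketched numerically. For the identity tensor $\mathcal{I}_m$ one has $\mathcal{I}_m x^{m-1} = x^{[m-1]}$, so every nonzero real vector is an H-eigenvector of $\mathcal{I}_m$ with $\lambda = 1$, which gives a simple sanity check (a NumPy sketch of our own, not the paper's code):

```python
import numpy as np

def tensor_apply(a, x):
    """Compute A x^{m-1}: contract an order-m tensor with x on its last m-1 indices."""
    v = a
    for _ in range(a.ndim - 1):
        v = v @ x            # each @ contracts the current last axis with x
    return v

def identity_tensor(m, n):
    """Identity tensor I_m: ones on the diagonal positions (i, i, ..., i)."""
    t = np.zeros((n,) * m)
    for i in range(n):
        t[(i,) * m] = 1.0
    return t

# Sanity check of (2.4): I_m x^{m-1} = x^{[m-1]}.
n, m = 3, 4
I = identity_tensor(m, n)
x = np.array([0.5, 1.0, 2.0])
lhs = tensor_apply(I, x)
rhs = x ** (m - 1)           # the Hadamard power x^{[m-1]}
```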

    Definition 2.6. Let $\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \in \sigma(\mathcal{A})\}$ be the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.

    Lemma 2.1. [13] If $\mathcal{A}$ is a Z-tensor, then the following conditions are equivalent:

    (a) A is a strong M-tensor.

    (b) There exist an inverse-positive Z-matrix $B$ and a semi-positive Z-tensor $\mathcal{C}$ with $\mathcal{A} = B\mathcal{C}$.

    (c) A has a convergent (weak) regular splitting.

    From the viewpoint of optimization, the TSFP (1.2) is equivalent to

    $$\min_{x \in C} f(x) := \frac{1}{2}\|F(x)\|^2, \tag{3.1}$$

    where

    $$F(x) := (I - P_Q)\mathcal{A}x^{m-1}. \tag{3.2}$$

    Set

    $$J(x) := (m-1)\mathcal{A}x^{m-2}, \qquad g(x) := \nabla f(x) = J(x)^T F(x). \tag{3.3}$$

    In detail, the Levenberg-Marquardt method for solving the TSFP can be described as follows (see Algorithm 3.1).

    Algorithm 3.1. Projection method for solving the tensor split feasibility problem

    Step 1. Given nonempty closed convex sets $C$ and $Q$, a semi-symmetric tensor $\mathcal{A}$ [19] and an initial vector $x_0$, choose $\beta \in (0,1)$, $\sigma \in (0,1)$ and precisions $\varepsilon_1 > 0$, $\varepsilon_2 > 0$. Set $k := 1$.

    Step 2. Compute $F_k = F(x_k)$, $J_k = J(x_k)$ and $g_k = \nabla f(x_k)$ by (3.2)-(3.3). If $\|g_k\|^2 < \varepsilon_1$, stop; otherwise, go to Step 3.

    Step 3. Let

    $$\mu_k = \|F(x_k)\|^2.$$

    Compute a direction $d_k$ by solving the equation

    $$(J_k^T J_k + \mu_k I)d = -J_k^T F_k.$$

    Step 4. Find the smallest nonnegative integer $m_k$ such that $\alpha_k = \beta^{m_k}$ satisfies

    $$f(x_k + \alpha_k d_k) \leq f(x_k) + \sigma\alpha_k g_k^T d_k.$$

    Step 5. Set

    $$x_{k+1} = P_C(x_k + \alpha_k d_k).$$

    If $\|x_{k+1} - x_k\| < \varepsilon_2$, stop; otherwise, set $k := k+1$ and return to Step 2.
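    A minimal sketch of Algorithm 3.1 for the case $m = 3$, assuming Euclidean balls for $C$ and $Q$ and a toy identity-tensor instance (the tensor, sets and tolerances below are illustrative, not the paper's test problems; the paper's experiments use MATLAB):

```python
import numpy as np

def proj_ball(y, r):
    """Euclidean projection onto the ball {z : ||z||_2 <= r}."""
    nrm = np.linalg.norm(y)
    return y if nrm <= r else (r / nrm) * y

def lm_projection(a, proj_c, proj_q, x0, beta=0.5, sigma=1e-4,
                  eps1=1e-14, eps2=1e-12, kmax=200):
    """Sketch of Algorithm 3.1 with m = 3: F(x) = (I - P_Q)(A x^2),
    J(x) = 2 A x, mu_k = ||F(x_k)||^2, Armijo backtracking, final P_C step."""
    x = x0.astype(float).copy()
    for _ in range(kmax):
        ax = np.einsum('ijk,j,k->i', a, x, x)
        F = ax - proj_q(ax)
        J = 2.0 * np.einsum('ijk,k->ij', a, x)   # (m-1) A x^{m-2} as a matrix
        g = J.T @ F                              # gradient of f = 0.5 ||F||^2
        if np.linalg.norm(g) ** 2 < eps1:        # Step 2
            break
        mu = F @ F                               # Step 3
        d = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ F)
        f0, alpha = 0.5 * (F @ F), 1.0
        while alpha > 1e-12:                     # Step 4: Armijo line search
            xt = x + alpha * d
            axt = np.einsum('ijk,j,k->i', a, xt, xt)
            Ft = axt - proj_q(axt)
            if 0.5 * (Ft @ Ft) <= f0 + sigma * alpha * (g @ d):
                break
            alpha *= beta
        x_new = proj_c(x + alpha * d)            # Step 5
        if np.linalg.norm(x_new - x) < eps2:
            x = x_new
            break
        x = x_new
    return x

# Toy TSFP: A the order-3 identity tensor, C = {||x|| <= 2}, Q = {||y|| <= 0.1}.
n = 3
A = np.zeros((n, n, n))
for i in range(n):
    A[i, i, i] = 1.0
x = lm_projection(A, lambda y: proj_ball(y, 2.0), lambda y: proj_ball(y, 0.1),
                  np.ones(n))
ax = np.einsum('ijk,j,k->i', A, x, x)
res = np.linalg.norm(ax - proj_ball(ax, 0.1))
```

    On this toy instance the iterates shrink until $\mathcal{A}x^2$ enters $Q$, so the residual drops to (numerically) zero.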

    However, in this section, we restate the TSFP (1.2) from the perspective of tensor splitting and expect to obtain better convergence results, which will be reported in the numerical test part.

    Now, we briefly describe an alternating direction iterative method for solving the unconstrained part of the multi-linear system (1.3), $\mathcal{A}x^{m-1} = P_Q(\mathcal{A}x^{m-1})$.

    By $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1$, the above multi-linear system can clearly be written as

    $$\mathcal{E}_1 x^{m-1} = \mathcal{F}_1 x^{m-1} + P_Q(\mathcal{A}x^{m-1}), \tag{3.4}$$

    i.e.,

    $$\mathcal{I}_m x^{m-1} = M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x^{m-1}). \tag{3.5}$$

    Here we make use of the order-2 left-nonsingularity of tensor $\mathcal{E}_1$; $\mathcal{I}_m$ is an identity tensor of appropriate order. The result is summarized in Algorithm 3.2.

    Algorithm 3.2. Projection-splitting tensor iterative method

    Step 1. Input a strong M-tensor $\mathcal{A}$ with a (weak) regular splitting $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1$. Given nonempty closed convex sets $C$ and $Q$, a precision $\varepsilon > 0$ and an initial vector $x_0$, set $k := 1$.

    Step 2. If $\|\mathcal{A}x_k^{m-1} - P_Q(\mathcal{A}x_k^{m-1})\|_2 < \varepsilon$, stop; otherwise, go to Step 3.

    Step 3. Compute

    $$x_{k+1} = \Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\Big)^{[\frac{1}{m-1}]}.$$

    Step 4. Set

    $$x_{k+1} = P_C(x_{k+1}).$$

    Step 5. Set $k := k+1$ and return to Step 2.
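    A sketch of Algorithm 3.2 for $m = 3$ with the Jacobi-type splitting $\mathcal{E}_1 = D\mathcal{I}_m$ ($D$ the superdiagonal of $\mathcal{A}$). To make the run verifiable, $Q$ is taken as a singleton $\{b\}$ (a closed convex set), so the scheme reduces to the classical tensor splitting iteration for $\mathcal{A}x^2 = b$; the tensor and sets are illustrative, not the paper's examples:

```python
import numpy as np

def tensor_apply2(a, x):
    """Compute A x^2 for an order-3 tensor A."""
    return np.einsum('ijk,j,k->i', a, x, x)

def proj_ball(y, r):
    """Euclidean projection onto the ball {z : ||z||_2 <= r}."""
    nrm = np.linalg.norm(y)
    return y if nrm <= r else (r / nrm) * y

def projection_splitting(a, proj_q, proj_c, x0, eps=1e-12, kmax=500):
    """Sketch of Algorithm 3.2 (m = 3) with E1 = D I_m, F1 = E1 - A,
    so that M(E1) = D = diag(a_iii)."""
    n = len(x0)
    d = np.array([a[i, i, i] for i in range(n)])   # M(E1) = D (diagonal)
    e1 = np.zeros_like(a)
    for i in range(n):
        e1[i, i, i] = d[i]
    f1 = e1 - a                       # here F1 >= 0: a (weak) regular splitting
    x = x0.astype(float).copy()
    for _ in range(kmax):
        ax = tensor_apply2(a, x)
        if np.linalg.norm(ax - proj_q(ax)) < eps:  # Step 2: residual test
            break
        y = (tensor_apply2(f1, x) + proj_q(ax)) / d
        x = proj_c(np.sqrt(y))        # (.)^{[1/(m-1)]} with m = 3, then P_C
    return x

# Verifiable instance: Q = {b} (a singleton closed convex set), C a large ball,
# so the scheme solves A x^2 = b for a strong M-tensor A.
n = 4
A = -0.01 * np.ones((n, n, n))
for i in range(n):
    A[i, i, i] += 1.0                 # A = I_m - 0.01*ones: strong M-tensor
b = np.ones(n)
x = projection_splitting(A, lambda y: b, lambda y: proj_ball(y, 10.0), np.ones(n))
res = np.linalg.norm(tensor_apply2(A, x) - b)
```

    Because $\mathcal{F}_1 \geq 0$ and $b > 0$, the iterates remain positive, so the elementwise root in Step 3 is well defined.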

    Now, consider the tensor splittings $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$. Based on these, a two-step tensor alternating splitting iteration method is introduced. It generates the iterative scheme

    $$\begin{cases} x_{k+\frac{1}{2}} = \Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\Big)^{[\frac{1}{m-1}]}, \\ x_{k+1} = \Big(M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x_{k+\frac{1}{2}}^{m-1} + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\Big)^{[\frac{1}{m-1}]}. \end{cases}$$

    Set $\mathcal{G} := M(\mathcal{E}_2)^{-1}\mathcal{F}_2$. According to $\mathcal{I}_m x^{m-1} = x^{[m-1]}$, where $\mathcal{I}_m$ is an identity tensor of appropriate order, we have

    $$\mathcal{G}x_{k+\frac{1}{2}}^{m-1} = M(\mathcal{G})\mathcal{I}_m x_{k+\frac{1}{2}}^{m-1} = M(\mathcal{G})x_{k+\frac{1}{2}}^{[m-1]} = M(\mathcal{G})\Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\Big) = M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{G})M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1}). \tag{3.6}$$

    Hence,

    $$x_{k+1} = \Big[M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{G})M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1}) + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\Big]^{[\frac{1}{m-1}]}. \tag{3.7}$$

    The above analysis is described concretely in Algorithm 3.3.

    Now, let

    $$T(\mathcal{E}_1, \mathcal{E}_2) := M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1. \tag{3.8}$$

    In the following, we show that the spectral radius of the iteration tensor satisfies $\rho(T(\mathcal{E}_1, \mathcal{E}_2)) < 1$, i.e., we prove the convergence of Algorithm 3.3.

    Algorithm 3.3. Projection-splitting tensor alternating iterative method

    Step 1. Input a strong M-tensor $\mathcal{A}$ with (weak) regular splittings $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$. Given nonempty closed convex sets $C$ and $Q$, a precision $\varepsilon > 0$ and an initial vector $x_0$, set $k := 1$.

    Step 2. If $\|\mathcal{A}x_k^{m-1} - P_Q(\mathcal{A}x_k^{m-1})\|_2 < \varepsilon$, stop; otherwise, go to Step 3.

    Step 3. Compute

    $$\begin{cases} x_{k+\frac{1}{2}} = \Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\Big)^{[\frac{1}{m-1}]}, \\ x_{k+1} = \Big(M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x_{k+\frac{1}{2}}^{m-1} + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\Big)^{[\frac{1}{m-1}]}. \end{cases}$$

    Step 4. Set

    $$x_{k+1} = P_C(x_{k+1}).$$

    Step 5. Set $k := k+1$ and return to Step 2.
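    Algorithm 3.3 can be sketched analogously with the Gauss-Seidel-like pair $M(\mathcal{E}_1) = D - L$, $M(\mathcal{E}_2) = D - U$ (our illustrative choice; the paper's P-STSAOR pair from Table 1 is the AOR variant). Again $Q = \{b\}$ for verifiability, and $P_C$ is taken as the identity:

```python
import numpy as np

def tensor_apply2(a, x):
    """Compute A x^2 for an order-3 tensor A."""
    return np.einsum('ijk,j,k->i', a, x, x)

def mat_to_tensor(e_mat):
    """Lift a matrix E to the tensor E I_m (order 3): entry E_{ij} sits at (i, j, j)."""
    n = e_mat.shape[0]
    t = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            t[i, j, j] = e_mat[i, j]
    return t

def alternating_splitting(a, proj_q, x0, eps=1e-12, kmax=500):
    """Sketch of Algorithm 3.3 (m = 3) with M(E1) = D - L (forward sweep)
    and M(E2) = D - U (backward sweep); P_C is the identity here."""
    n = len(x0)
    MA = np.array([[a[i, j, j] for j in range(n)] for i in range(n)])  # majorization matrix
    D = np.diag(np.diag(MA))
    L = -np.tril(MA, -1)
    U = -np.triu(MA, 1)                # so that M(A) = D - L - U
    f1 = mat_to_tensor(D - L) - a      # F1 = E1 - A (nonnegative here)
    f2 = mat_to_tensor(D - U) - a
    x = x0.astype(float).copy()
    for _ in range(kmax):
        ax = tensor_apply2(a, x)
        if np.linalg.norm(ax - proj_q(ax)) < eps:
            break
        xh = np.sqrt(np.linalg.solve(D - L, tensor_apply2(f1, x) + proj_q(ax)))
        axh = tensor_apply2(a, xh)
        x = np.sqrt(np.linalg.solve(D - U, tensor_apply2(f2, xh) + proj_q(axh)))
    return x

# Same verifiable instance as before: Q = {b}, so the two-step scheme solves A x^2 = b.
n = 4
A = -0.01 * np.ones((n, n, n))
for i in range(n):
    A[i, i, i] += 1.0                  # strong M-tensor
b = np.ones(n)
x = alternating_splitting(A, lambda y: b, np.ones(n))
res = np.linalg.norm(tensor_apply2(A, x) - b)
```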

    Lemma 3.1. [9] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, and let $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$ be a weak regular splitting and a regular splitting, respectively. If $\mathcal{F}_2 \leq \mathcal{F}_1$, $\mathcal{F}_2 \geq 0$, and $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, then there exists a positive Perron vector $x \in \mathbb{R}^n$ such that

    $$M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \leq \rho_k x^{[m-1]}, \tag{3.9}$$

    where $\rho_k = \rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S})$, $k$ is a positive integer and $\mathcal{S} \in \mathbb{R}^{[m,n]}$ is a positive tensor.

    Proof. Let $\mathcal{S} \in \mathbb{R}^{[m,n]}$ be the tensor whose entries are all equal to 1. Using the strong Perron-Frobenius theorem (see [20,21]), for any given $k > N$, $M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ such that

    $$\Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = \rho_k x^{[m-1]}, \tag{3.10}$$

    where $\rho_k = \rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S})$.

    Hence, this gives rise to

    $$M(\mathcal{E}_1)\Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = \mathcal{F}_1 x^{m-1}. \tag{3.11}$$

    By $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$, one gets $M(\mathcal{A}) = M(\mathcal{E}_1) - M(\mathcal{F}_1) = M(\mathcal{E}_2) - M(\mathcal{F}_2)$.

    So it generates

    $$M(\mathcal{A})\Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = \mathcal{F}_1 x^{m-1} - M(\mathcal{F}_1)\Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = (1-\rho_k)M(\mathcal{F}_1)\mathcal{I}_m x^{m-1} + \tfrac{1}{k}M(\mathcal{F}_1)\mathcal{S}x^{m-1} + \big(\mathcal{F}_1 - M(\mathcal{F}_1)\mathcal{I}_m\big)x^{m-1}. \tag{3.12}$$

    Further, it follows from (3.12) that

    $$\big(M(\mathcal{E}_2) - M(\mathcal{F}_2)\big)\Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} \geq (1-\rho_k)M(\mathcal{F}_2)\mathcal{I}_m x^{m-1} + \tfrac{1}{k}M(\mathcal{F}_2)\mathcal{S}x^{m-1} + \big(\mathcal{F}_2 - M(\mathcal{F}_2)\mathcal{I}_m\big)x^{m-1}; \tag{3.13}$$

    here, notice the condition $\mathcal{F}_1 \geq \mathcal{F}_2$, so $M(\mathcal{F}_1) \geq M(\mathcal{F}_2)$ and $\mathcal{F}_1 - M(\mathcal{F}_1)\mathcal{I}_m \geq \mathcal{F}_2 - M(\mathcal{F}_2)\mathcal{I}_m$.

    By some simple computations, we obtain

    $$M(\mathcal{E}_2)\Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} \geq \mathcal{F}_2 x^{m-1}. \tag{3.14}$$

    Note that $M(\mathcal{E}_2)^{-1} \geq 0$ and $\mathcal{F}_2 \geq 0$ due to the regular splitting $\mathcal{A} = \mathcal{E}_2 - \mathcal{F}_2$. According to (3.14), this yields

    $$M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \leq \Big(\rho_k\mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} \leq \rho_k x^{[m-1]}. \tag{3.15}$$

    Theorem 3.1. Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, and let $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$ be a weak regular splitting and a regular splitting, respectively. Suppose $\mathcal{F}_2 \leq \mathcal{F}_1$, $\mathcal{F}_2 \geq 0$ and $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, and that $\mathcal{G} := M(\mathcal{E}_2)^{-1}\mathcal{F}_2$ is order-2 left-nonsingular, i.e., $\mathcal{G} = M(\mathcal{G})\mathcal{I}_m$. Then $\rho(T(\mathcal{E}_1, \mathcal{E}_2)) < 1$, where $T(\mathcal{E}_1, \mathcal{E}_2)$ is defined by (3.8).

    Proof. First of all, similar to the previous discussion, by the strong Perron-Frobenius theorem, for any given $k > N$, $M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ such that

    $$\Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = \rho_k x^{[m-1]}, \tag{3.16}$$

    where $\rho_k = \rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S})$. This implies

    $$M(\mathcal{G})\Big(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\Big)x^{m-1} = \rho_k M(\mathcal{G})x^{[m-1]} = \rho_k M(\mathcal{G})\mathcal{I}_m x^{m-1} = \rho_k\mathcal{G}x^{m-1} = \rho_k M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \leq \rho_k \cdot \rho_k x^{[m-1]} = (\rho_k)^2 x^{[m-1]}, \tag{3.17}$$

    where the inequality comes from Lemma 3.1. Since $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, letting $k \to \infty$ gives $(\rho_k)^2 < 1$. This completes the proof.

    In this section, some numerical examples are discussed to validate the effectiveness of the proposed projection-splitting tensor AOR (denoted 'P-STAOR') and tensor symmetric (alternating) splitting AOR (denoted 'P-STSAOR') two-step splitting methods for solving the multi-linear system (see Algorithms 3.2 and 3.3). We compare the convergence of these methods with the projection tensor Levenberg-Marquardt method (denoted 'P-STLM', see Algorithm 3.1) in terms of the number of iterations (denoted 'IT'), elapsed CPU time in seconds (denoted 'CPU') and residual error (denoted 'RES'). The run is terminated when the current iterate satisfies $\mathrm{RES} = \|\mathcal{A}x^{m-1} - P_Q(\mathcal{A}x^{m-1})\|_2 < 10^{-10}$ or when the number of iterations exceeds the prescribed maximum $k_{\max} = 50$. All the numerical experiments were carried out in MATLAB R2011b (7.13) on a PC equipped with an Intel(R) Core(TM) i7-2670QM CPU running at 2.20 GHz and 8 GB of RAM under the Windows 10 operating system.

    Now, consider the tensor splitting of (1.2):

    $$\mathcal{A} = \mathcal{D} - \mathcal{L} - \mathcal{U}. \tag{4.1}$$

    The layout of the splittings is shown in Table 1, where $\mathcal{D} = D\mathcal{I}_m$, $\mathcal{L} = L\mathcal{I}_m$, $\mathcal{U} = U\mathcal{I}_m$, and $D$, $L$, $U$ are the diagonal part and the strictly lower and strictly upper triangular parts of the majorization matrix $M(\mathcal{A})$, respectively, so that $M(\mathcal{A}) = D - L - U$.
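    The splitting (4.1) can be made concrete as follows; this NumPy sketch extracts $D$, $L$, $U$ from the majorization matrix of an illustrative strong M-tensor and forms the majorization matrices of the splitting tensors from Table 1 (the tensor entries and parameter values are ours, not the paper's):

```python
import numpy as np

def majorization_matrix(a):
    """M(A)_{ij} = a_{ij...j} (Definition 2.1), here for an order-3 tensor."""
    n = a.shape[0]
    return np.array([[a[i, j, j] for j in range(n)] for i in range(n)])

# Illustrative strong M-tensor (entries are ours, not the paper's examples).
n, r, w = 4, 1.2, 1.1
A = -0.01 * np.ones((n, n, n))
for i in range(n):
    A[i, i, i] += 30.0

MA = majorization_matrix(A)
D = np.diag(np.diag(MA))
L = -np.tril(MA, -1)          # strictly lower part, signed so that M(A) = D - L - U
U = -np.triu(MA, 1)           # strictly upper part
E1 = (D - r * L) / w          # M(E1) for P-STAOR / P-STSAOR, as in Table 1
E2 = (D - r * U) / w          # second splitting, used by P-STSAOR
```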

    Table 1.  The corresponding splittings $\mathcal{E}_1$ and $\mathcal{E}_2$.

    splitting tensor | $\mathcal{E}_1$                              | $\mathcal{E}_2$
    P-STAOR          | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$ | --
    P-STSAOR         | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$ | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{U})$

    Example 4.1. First, consider the tensor split feasibility problem (TSFP) (1.2) with a strong M-tensor $\mathcal{A} = 30\mathcal{I}_m - 0.01\mathcal{B}$, where $\mathcal{B}(i,j,k) = 0.6i + 0.1(k+j)$ and $\mathcal{I}_m$ is an identity tensor of order $m$ and dimension $n$, and the nonempty closed convex sets

    $$C := \{x \in \mathbb{R}^5 \mid \|x - 200\| \leq 20\}, \quad Q := \{y \in \mathbb{R}^5 \mid \|y - 50\| \leq 5\}.$$

    Three examples are given for different tensors $\mathcal{A}$, $\mathcal{B}$ of various sizes. The parameters $r$ and $\omega$ for P-STAOR and P-STSAOR and $\beta$ for P-STLM are selected empirically for each example; see the tables in this section for details. In all examples, the parameter $\mu$ for P-STLM (in Algorithm 3.1) is fixed at 0.25.

    The numerical results are shown in Tables 2-6 and Figures 1-5. From them we can see that P-STAOR and P-STSAOR are efficient methods, and both are comparable with P-STLM in all respects. On the whole, the two approaches proposed in this paper, P-STAOR and P-STSAOR, have nearly the same merits. However, in some cases, e.g., in the number of iterations and elapsed CPU time, P-STSAOR seems more attractive than P-STAOR (see Table 4). The reasons are possibly the flexible selection of parameters and the superiority of the alternating direction iteration. Further, from the residual trend with the number of iterations in Figures 1-5, one can clearly observe the desired performance of the proposed methods.

    Table 2.  Numerical results of Example 4.1.

    Case                       |     | P-STLM     | P-STAOR    | P-STSAOR
    a. β=0.2, r=3.5, ω=1.2     | IT  | 12         | 1          | 2
                               | CPU | 0.1893     | 0.0223     | 0.0176
                               | RES | 3.5861e-12 | 5.1238e-14 | 4.2633e-14
    b. β=0.4, r=3.0, ω=1.1     | IT  | 12         | 1          | 2
                               | CPU | 0.2759     | 0.0495     | 0.0208
                               | RES | 2.5645e-12 | 6.1944e-14 | 1.4211e-14
    c. β=0.6, r=2.5, ω=1.4     | IT  | 12         | 1          | 2
                               | CPU | 0.2489     | 0.0384     | 0.0296
                               | RES | 2.4883e-11 | 2.3097e-14 | 2.2102e-14
    d. β=0.8, r=2.0, ω=1.5     | IT  | 12         | 1          | 2
                               | CPU | 0.1834     | 0.0227     | 0.0200
                               | RES | 2.2481e-11 | 1.7764e-15 | 8.8818e-16
    Table 3.  Numerical results of Example 4.2.

    Case                       |     | P-STLM     | P-STAOR    | P-STSAOR
    a. β=0.2, r=3.5, ω=1.6     | IT  | 50         | 6          | 8
                               | CPU | 2.1106     | 0.1181     | 0.0710
                               | RES | 9.0252e+00 | 3.3148e-14 | 2.5864e-14
    b. β=0.4, r=3.0, ω=1.7     | IT  | 50         | 7          | 8
                               | CPU | 1.9999     | 0.1299     | 0.0715
                               | RES | 9.1152e+00 | 2.1620e-14 | 3.8918e-14
    c. β=0.6, r=2.5, ω=1.8     | IT  | 50         | 10         | 9
                               | CPU | 2.1252     | 0.1819     | 0.0856
                               | RES | 9.1002e+00 | 4.5089e-14 | 1.1235e-14
    d. β=0.8, r=2.0, ω=1.9     | IT  | 50         | 16         | 11
                               | CPU | 2.0282     | 0.2030     | 0.0842
                               | RES | 9.2103e+00 | 4.5645e-14 | 3.1175e-14
    Table 4.  Numerical results of Example 4.3 with n=3, m=5.

    Case                       |     | P-STLM     | P-STAOR    | P-STSAOR
    a. β=0.2, r=3.5, ω=1.6     | IT  | 13         | 4          | 2
                               | CPU | 0.2244     | 0.0475     | 0.0251
                               | RES | 2.1230e-10 | 5.6853e-14 | 2.8432e-14
    b. β=0.4, r=4.5, ω=1.8     | IT  | 12         | 5          | 3
                               | CPU | 0.1970     | 0.0434     | 0.0240
                               | RES | 1.5531e-03 | 1.0000e-17 | 3.1786e-14
    c. β=0.6, r=5.5, ω=2.0     | IT  | 12         | 4          | 2
                               | CPU | 0.2253     | 0.0469     | 0.0364
                               | RES | 1.2561e-03 | 5.6853e-14 | 1.0000e-17
    d. β=0.8, r=6.5, ω=2.2     | IT  | 12         | 6          | 3
                               | CPU | 0.2165     | 0.0416     | 0.0377
                               | RES | 1.8021e-03 | 5.6853e-14 | 2.8432e-14
    Table 5.  Numerical results of Example 4.3 with n=4, m=5.

    Case                       |     | P-STLM     | P-STAOR    | P-STSAOR
    a. β=0.2, r=3.5, ω=1.6     | IT  | 13         | 4          | 2
                               | CPU | 0.3404     | 0.0539     | 0.0304
                               | RES | 0.1020e+01 | 6.5519e-14 | 1.0000e-17
    b. β=0.4, r=4.5, ω=1.8     | IT  | 14         | 4          | 2
                               | CPU | 0.3332     | 0.0534     | 0.0316
                               | RES | 5.7338e-11 | 5.6853e-14 | 6.7042e-14
    c. β=0.6, r=5.5, ω=2.0     | IT  | 13         | 5          | 3
                               | CPU | 0.3430     | 0.0512     | 0.0441
                               | RES | 0.2000e+01 | 8.1645e-14 | 7.6538e-14
    d. β=0.8, r=6.5, ω=2.2     | IT  | 13         | 4          | 3
                               | CPU | 0.3176     | 0.0505     | 0.0419
                               | RES | 0.0000e+01 | 6.0302e-14 | 7.1064e-14
    Table 6.  Numerical results of Example 4.3 with n=5, m=5.

    Case                       |     | P-STLM     | P-STAOR    | P-STSAOR
    a. β=0.2, r=3.5, ω=1.6     | IT  | 13         | 6          | 4
                               | CPU | 0.4062     | 0.0687     | 0.0530
                               | RES | 1.6000e-01 | 9.9739e-13 | 1.0050e-14
    b. β=0.4, r=4.5, ω=1.8     | IT  | 14         | 7          | 4
                               | CPU | 0.4510     | 0.0697     | 0.0587
                               | RES | 2.2000e-03 | 1.1763e-13 | 1.5953e-13
    c. β=0.6, r=5.5, ω=2.0     | IT  | 14         | 6          | 3
                               | CPU | 0.4383     | 0.0677     | 0.0543
                               | RES | 1.3700e-04 | 1.4973e-13 | 7.9133e-14
    d. β=0.8, r=6.5, ω=2.2     | IT  | 14         | 6          | 3
                               | CPU | 0.4234     | 0.0631     | 0.0547
                               | RES | 1.3700e-04 | 1.7593e-13 | 1.6282e-13
    Figure 1.  The residual of the P-STAOR, P-STSAOR and P-STLM methods in Example 4.1.
    Figure 2.  The residual of the P-STAOR, P-STSAOR and P-STLM methods in Example 4.2.
    Figure 3.  The residual of the P-STAOR, P-STSAOR and P-STLM methods in Example 4.3 with n=3, m=5.
    Figure 4.  The residual of the P-STAOR, P-STSAOR and P-STLM methods in Example 4.3 with n=4, m=5.
    Figure 5.  The residual of the P-STAOR, P-STSAOR and P-STLM methods in Example 4.3 with n=5, m=5.

    Across the three examples, whatever the parameters $r$ and $\omega$ in P-STAOR and P-STSAOR are, the convergence results are consistently satisfactory. However, P-STLM depends heavily on the parameter $\beta$. As Case b in Table 6 shows, P-STLM cannot guarantee global convergence any more. At the same time, we notice that the residual of P-STLM leaves the displayed range in Figure 5. Nevertheless, the residuals of P-STAOR and P-STSAOR still decline rapidly and converge well.

    Example 4.2. Consider the tensor split feasibility problem (TSFP) (1.2) with a strong M-tensor $\mathcal{A} = 50\mathcal{I}_m - 0.01\mathcal{B}$, where $\mathcal{B}(i,j,k,l,p) = 0.85i + 0.3(k+j+l+p)$ and $\mathcal{I}_m$ is an identity tensor of order $m$ and dimension $n$, and the nonempty closed convex sets

    $$C := \{x \in \mathbb{R}^5 \mid \|x - 30\| \leq 3\}, \quad Q := \{y \in \mathbb{R}^5 \mid \|y - 5\| \leq 40\}.$$

    Example 4.3. Finally, consider the tensor split feasibility problem (TSFP) (1.2) with a strong M-tensor $\mathcal{A} = 300\mathcal{I}_m - 0.5\mathcal{B}$, where $\mathcal{B} = \mathrm{rand}(n,n,n,n,n)$ is a randomly generated tensor of order $m$ and dimension $n$ and $\mathcal{I}_m$ is an identity tensor of order $m$ and dimension $n$, and the nonempty closed convex sets

    $$C := \{x \in \mathbb{R}^n \mid \|x - 80\| \leq 60\}, \quad Q := \{y \in \mathbb{R}^n \mid \|y - 20\| \leq 200\}.$$

    We test this example with different $n$.

    In this example, all the numerical results are depicted in Tables 4-6 and Figures 3-5. In terms of elapsed CPU time and number of iterations, P-STSAOR consistently performs better than the other approaches. Although the optimal parameters are perhaps hard to determine, we choose them tentatively in practical applications according to their experimental effect. We also tried to introduce a preconditioner for the constrained tensor system (1.3); however, it is difficult to choose a suitable preconditioner for this problem. Hence, the investigation of optimal parameters and befitting preconditioners for P-STSAOR and P-STAOR will be pursued in our future work. In a word, P-STSAOR and P-STAOR can be regarded as promising and efficient approaches for solving the multi-linear split feasibility problem.

    In this paper, by making use of the idea of tensor alternating splitting, two efficient methods are presented for solving the multi-linear split feasibility problem. Concretely, by extending accelerated overrelaxation and symmetric accelerated overrelaxation, tensor AOR and tensor SAOR splitting iteration strategies are introduced for solving the multi-linear split feasibility problem; these are distinctly different from techniques based on the optimization perspective. The convergence result is established in detail by analyzing the tensor spectral radius. The proposed P-STSAOR and P-STAOR have been verified to be quite effective, and the numerical test results illustrate that they meet expectations as efficient methods. Finally, some directions for future research are indicated.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are very much indebted to the anonymous referees for their constructive and valuable comments and suggestions which greatly improved the original manuscript of this paper. This work was supported by Fujian Natural Science Foundation (Grant No.2022J01378).

    The authors declare that they have no conflict of interest.



    [1] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Probl., 18 (2002), 411–453. https://doi.org/10.1088/0266-5611/18/2/310 doi: 10.1088/0266-5611/18/2/310
    [2] H. K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Probl., 26 (2010), 105018. https://doi.org/10.1088/0266-5611/26/10/105018 doi: 10.1088/0266-5611/26/10/105018
    [3] Y. Censor, T. Elfving, N. Kopf, T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Probl., 21 (2005), 2071–2084. https://doi.org/10.1088/0266-5611/21/6/017 doi: 10.1088/0266-5611/21/6/017
    [4] W. Zhang, D. Han, Z. Li, A self-adaptive projection method for solving the multiple-sets split feasibility problem, Inverse Probl., 25 (2009), 115001. https://doi.org/10.1088/0266-5611/25/11/115001 doi: 10.1088/0266-5611/25/11/115001
    [5] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. https://doi.org/10.1088/0031-9155/51/10/001 doi: 10.1088/0031-9155/51/10/001
    [6] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006 doi: 10.1088/0266-5611/20/1/006
    [7] J. Zhao, Q. Yang, A simple projection method for solving the multiple-sets split feasibility problem, Inverse Probl. Sci. Eng., 21 (2013), 537–546. https://doi.org/10.1080/17415977.2012.712521 doi: 10.1080/17415977.2012.712521
    [8] J. Zhao, Q. Yang, Several acceleration schemes for solving the multiple-sets split feasibility problem, Linear Algebra Appl., 437 (2012), 1648–1657. https://doi.org/10.1016/j.laa.2012.05.018 doi: 10.1016/j.laa.2012.05.018
    [9] W. Ding, Y. Wei, Solving multilinear systems with M-tensors, J. Sci. Comput., 68 (2016), 689–715. https://doi.org/10.1007/s10915-015-0156-7 doi: 10.1007/s10915-015-0156-7
    [10] W. Ding, Y. Wei, Generalized tensor eigenvalue problems, SIAM J. Matrix Anal. Appl., 36 (2015), 1073–1099. https://doi.org/10.1137/140975656 doi: 10.1137/140975656
    [11] L. Han, A homotopy method for solving multilinear systems with M-tensors, Appl. Math. Lett., 69 (2017), 49–54. https://doi.org/10.1016/j.aml.2017.01.019 doi: 10.1016/j.aml.2017.01.019
    [12] D. D. Liu, W. Li, S. W. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75–94. https://doi.org/10.1016/j.cam.2017.08.009 doi: 10.1016/j.cam.2017.08.009
    [13] W. Li, D. D. Liu, S. W. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105–121. https://doi.org/10.1016/j.apnum.2018.07.009 doi: 10.1016/j.apnum.2018.07.009
    [14] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), 455–500. https://doi.org/10.1137/07070111X doi: 10.1137/07070111X
    [15] A. Cichocki, R. Zdunek, A. H. Phan, S. I. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, Wiley, 2009.
    [16] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421–427.
    [17] W. Li, M. K. Ng, On the inverse of a tensor, Linear Algebra Appl., 495 (2016), 199–205. https://doi.org/10.1016/j.laa.2016.01.011 doi: 10.1016/j.laa.2016.01.011
    [18] L. Zhang, L. Qi, G. Zhou, M-tensor and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437–452. https://doi.org/10.1137/130915339 doi: 10.1137/130915339
    [19] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007 doi: 10.1016/j.jsc.2005.05.007
    [20] L. Qi, Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, Philadelphia: Society for Industrial and Applied Mathematics, 2017. https://doi.org/10.1137/1.9781611974751
    [21] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., 31 (2010), 2517–2530. https://doi.org/10.1137/090778766 doi: 10.1137/090778766
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
