Projection-type methods have been widely and deeply studied for solving the multiple-sets split feasibility problem. In this paper, efficient tensor projection-splitting iteration methods are investigated, from the perspective of splitting iteration, for solving the tensor split feasibility problem, which is first transformed into a tensor equation by means of a projection operator. The accelerated overrelaxation (AOR) method and the symmetric (alternating) accelerated overrelaxation (SAOR) method are then generalized from general systems of linear equations to multi-linear systems in order to solve this problem. Theoretically, convergence is proven by analyzing the spectral radius of the splitting iteration tensor. Numerical experiments demonstrate the efficiency of the presented methods.
Citation: Yajun Xie, Changfeng Ma. The hybrid methods of projection-splitting for solving tensor split feasibility problem [J]. AIMS Mathematics, 2023, 8(9): 20597-20611. doi: 10.3934/math.20231050
The split feasibility problem (SFP) is formulated as finding a point x with the following property
$x \in C, \qquad Ax \in Q,$  (1.1)
where $C$ and $Q$ are nonempty closed convex sets. In this paper, we focus on the following so-called tensor split feasibility problem (TSFP):
$x \in C, \qquad \mathcal{A}x^{m-1} \in Q,$  (1.2)
where $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^n$ are nonempty closed convex sets, and $\mathcal{A} \in \mathbb{R}^{[m,n]}$ is an $m$th-order, $n$-dimensional tensor. When $m=2$, i.e., $\mathcal{A}$ is a matrix, (1.2) reduces to the split feasibility problem (SFP) (1.1) (see [1,2]). Furthermore, (1.2) can be regarded as a general case of the multiple-sets split feasibility problem.
The SFP has long attracted the attention of researchers. For instance, Censor et al. [3] presented an interesting approach for solving the multiple-sets split feasibility problem. However, it depends heavily on the step size, a fixed Lipschitz constant of the gradient of the objective function, which is usually hard to estimate. By adopting variable step sizes based on information available at the current iterate, Zhang and Han et al. [4] investigated a self-adaptive projection-type method for the nonlinear multiple-sets split feasibility problem, which has a variety of applications in the real world, e.g., image reconstruction, medical care (such as the inverse problem of intensity-modulated radiation therapy (IMRT)) and signal processing [5,6]. To choose the best step size for the current direction, Zhao and Yang [7] studied a simple projection method for solving the multiple-sets split feasibility problem, which is easy to implement. They also proposed several efficient acceleration schemes for solving the multiple-sets split feasibility problem [8].
All of the literature mentioned above mainly focuses on solving an equivalent optimization problem. In this paper the issue is developed from another point of view, namely the perspective of splitting iteration for solving a tensor equation. First of all, the split feasibility problem (1.2) is transformed into a constrained multi-linear system by means of the projection operator $P_Q(\cdot)$. The exact scheme reads as follows:
$\mathcal{A}x^{m-1} = P_Q(\mathcal{A}x^{m-1}), \quad \text{s.t. } x \in C.$  (1.3)
It is well known that solving various classes of systems is a critical issue in pure and applied mathematics. Especially with the data-analysis needs arising from big data, rapid and efficient computing techniques for multi-linear systems have received serious attention in science and engineering. In fact, it is not easy to obtain the exact solution by direct methods. As a result, various iterative strategies have been established for this problem, concentrating mostly on fast solvers for multi-linear systems. For instance, Ding and Wei [9,10] extended the classical Jacobi, Gauss-Seidel, successive overrelaxation and Newton methods. In view of the costly computations of the Newton method, Han [11] developed a homotopy method based on an Euler-Newton prediction-correction strategy to solve multi-linear systems with nonsymmetric M-tensors. The tensor splitting method and its convergence have been studied in detail by Liu and Li et al. [12]. Moreover, some valuable comparison results on splitting iterations for multi-linear systems were obtained in [13].
Motivated by [13], we extend a tensor alternating splitting iteration scheme to the constrained multi-linear system (1.3). Furthermore, the accelerated overrelaxation (AOR) method and the symmetric accelerated overrelaxation (SAOR) method are generalized for solving (1.3) within this alternating splitting framework.
The outline of this paper is as follows. Some basic notation is briefly provided in Section 2. In Section 3, a tensor alternating splitting iteration scheme is proposed for solving the constrained tensor system. In Section 4, the classical AOR and SAOR approaches are further extended to the constrained multi-linear system, and a few numerical results are reported to demonstrate the efficiency of the presented iteration methods. Finally, concluding remarks are drawn in Section 5.
In this section, some basic results and useful definitions are introduced, which are helpful for the discussion in the following parts.
First of all, let $A \in \mathbb{R}^{[2,n]}$ (i.e., a matrix) and $\mathcal{B} \in \mathbb{R}^{[k,n]}$ ($k>2$). The matrix-tensor product $\mathcal{C} = A\mathcal{B} \in \mathbb{R}^{[k,n]}$ is defined by
$c_{j i_2 \cdots i_k} = \sum_{j_2=1}^{n} a_{j j_2}\, b_{j_2 i_2 \cdots i_k}.$  (2.1)
Hence, the formula above is usually expressed as
$\mathcal{C}_{(1)} = (A\mathcal{B})_{(1)} = A\,\mathcal{B}_{(1)},$  (2.2)
where $\mathcal{C}_{(1)}$ and $\mathcal{B}_{(1)}$ are the matrices obtained by flattening $\mathcal{C}$ and $\mathcal{B}$ along the first index. For more details, see [14,15].
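To make the matrix-tensor product concrete, the following minimal NumPy sketch implements (2.1) and (2.2) through the mode-1 unfolding. It is only an illustration: the helper names unfold1 and mat_tensor_prod are ours, and a row-major (C-order) flattening of the trailing indices is assumed.

```python
import numpy as np

def unfold1(T):
    """Mode-1 unfolding: reshape an order-m tensor of shape (n, ..., n)
    into the n x n^(m-1) matrix T_(1)."""
    return T.reshape(T.shape[0], -1)

def mat_tensor_prod(A, B):
    """Matrix-tensor product C = A B of (2.1)-(2.2): C_(1) = A @ B_(1),
    folded back to the shape of B."""
    return (A @ unfold1(B)).reshape(B.shape)

# tiny check of one entry of (2.1) with a random 3rd-order tensor
n = 4
A = np.random.rand(n, n)
B = np.random.rand(n, n, n)
C = mat_tensor_prod(A, B)
j, i2, i3 = 1, 2, 3
assert np.isclose(C[j, i2, i3], sum(A[j, j2] * B[j2, i2, i3] for j2 in range(n)))
```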
Definition 2.1. [16] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be an order-$m$, dimension-$n$ tensor. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is the $n \times n$ matrix with entries
$M(\mathcal{A})_{ij} = a_{i j \cdots j}, \quad i,j = 1,2,\cdots,n.$  (2.3)
Definition 2.2. [17] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A} = M(\mathcal{A})\mathcal{I}_m$, then $M(\mathcal{A})^{-1}$ is called the order-2 left-inverse of the tensor $\mathcal{A}$, and $\mathcal{A}$ is called left-nonsingular, where $\mathcal{I}_m$ is the identity tensor whose diagonal entries are all 1.
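As a small illustration of Definitions 2.1 and 2.2, the sketch below builds a left-nonsingular tensor of the form $\mathcal{A} = M\mathcal{I}_m$ and recovers $M$ as its majorization matrix; the helper names identity_tensor and majorization_matrix are ours.

```python
import numpy as np

def identity_tensor(m, n):
    """Order-m, dimension-n identity tensor: entries (i, i, ..., i) equal 1."""
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

def majorization_matrix(A):
    """Majorization matrix of (2.3): M(A)_{ij} = a_{i j j ... j}."""
    m, n = A.ndim, A.shape[0]
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = A[(i,) + (j,) * (m - 1)]
    return M

# a left-nonsingular example: A = M I_m for a nonsingular matrix M
m, n = 3, 3
M = np.eye(n) + 0.1 * np.random.rand(n, n)
Im = identity_tensor(m, n)
A = (M @ Im.reshape(n, -1)).reshape((n,) * m)    # matrix-tensor product M * I_m
assert np.allclose(majorization_matrix(A), M)    # M(A)^{-1} is the order-2 left-inverse
```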
Definition 2.3. [12] Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. If $\mathcal{E}$ is left-nonsingular, then $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is called a splitting of the tensor $\mathcal{A}$. If $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1} \ge 0$ and $\mathcal{F} \ge 0$ (here $\le$ or $\ge$ is understood elementwise), then the splitting is called a regular splitting. If $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\mathcal{F} \ge 0$, then the splitting is called a weak regular splitting. If the spectral radius of $M(\mathcal{E})^{-1}\mathcal{F}$ is less than 1, i.e., $\rho(M(\mathcal{E})^{-1}\mathcal{F}) < 1$, then the splitting is convergent.
Definition 2.4. [18] Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. $\mathcal{A}$ is called a Z-tensor if its off-diagonal entries are non-positive. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta \ge \rho(\mathcal{B})$ such that
$\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}.$
Further, $\mathcal{A}$ is called a strong M-tensor if $\eta > \rho(\mathcal{B})$.
Definition 2.5. [19] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an eigenvalue-eigenvector pair of the tensor $\mathcal{A}$ if it satisfies the system
$\mathcal{A}x^{m-1} = \lambda x^{[m-1]},$  (2.4)
where $x^{[m-1]} = (x_1^{m-1}, x_2^{m-1}, \cdots, x_n^{m-1})^T$. The pair $(\lambda, x)$ is called an H-eigenpair if both $\lambda$ and the vector $x$ are real.
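The quantities $\mathcal{A}x^{m-1}$ and $x^{[m-1]}$ appearing in (2.4) can be evaluated by repeated contraction, as in the following sketch (tensor_apply and power_vec are our helper names). The check uses the identity tensor, for which $\mathcal{I}_m x^{m-1} = x^{[m-1]}$ holds for every vector $x$.

```python
import numpy as np

def tensor_apply(A, x, k):
    """Contract the last k indices of the tensor A with the vector x:
    k = m-1 gives the vector A x^{m-1}, k = m-2 gives the matrix A x^{m-2}."""
    y = A
    for _ in range(k):
        y = y @ x
    return y

def power_vec(x, p):
    """Elementwise power x^{[p]} = (x_1^p, ..., x_n^p)^T."""
    return x ** p

# sanity check of (2.4) for the identity tensor: I_m x^{m-1} = x^{[m-1]}
m, n = 3, 4
Im = np.zeros((n,) * m)
for i in range(n):
    Im[(i,) * m] = 1.0
x = np.random.rand(n)
assert np.allclose(tensor_apply(Im, x, m - 1), power_vec(x, m - 1))
```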
Definition 2.6. Let $\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \in \sigma(\mathcal{A})\}$ denote the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.
Lemma 2.1. [13] If $\mathcal{A}$ is a Z-tensor, then the following conditions are equivalent:
(a) A is a strong M-tensor.
(b) There exist an inverse-positive Z-matrix $B$ and a semi-positive Z-tensor $\mathcal{C}$ such that $\mathcal{A} = B\mathcal{C}$.
(c) A has a convergent (weak) regular splitting.
From the viewpoint of optimization, the TSFP (1.2) is equivalent to
$\min_{x \in C} f(x) := \frac{1}{2}\|F(x)\|^2,$  (3.1)
where
$F(x) := (I - P_Q)\mathcal{A}x^{m-1}.$  (3.2)
Set
$J(x) := (m-1)\mathcal{A}x^{m-2}, \qquad g(x) := \nabla f(x) = J(x)^T F(x).$  (3.3)
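For later use, $F(x)$, $J(x)$ and $g(x)$ of (3.2) and (3.3) can be evaluated as in the sketch below, which reuses tensor_apply from the sketch after (2.4) and assumes $\mathcal{A}$ is semi-symmetric so that the chain rule gives $g(x) = J(x)^T F(x)$.

```python
import numpy as np

def residual_and_gradient(A, x, P_Q):
    """Evaluate F(x), J(x) and g(x) of (3.2)-(3.3); P_Q is the projection
    onto the closed convex set Q (a sketch for semi-symmetric A)."""
    m = A.ndim
    Ax = tensor_apply(A, x, m - 1)              # A x^{m-1}
    F = Ax - P_Q(Ax)                            # F(x) = (I - P_Q) A x^{m-1}
    J = (m - 1) * tensor_apply(A, x, m - 2)     # J(x) = (m-1) A x^{m-2}, an n x n matrix
    g = J.T @ F                                 # g(x) = gradient of 0.5 * ||F(x)||^2
    return F, J, g
```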
In detail, the Levenberg-Marquardt method for solving the TSFP can be described as follows (see Algorithm 3.1).
Algorithm 3.1. Projection method for solving the tensor split feasibility problem
Step 1. Given nonempty closed convex sets $C$ and $Q$, a semi-symmetric tensor $\mathcal{A}$ [19], an initial vector $x_0$, parameters $\beta \in (0,1)$, $\sigma \in (0,1)$ and precisions $\varepsilon_1 > 0$, $\varepsilon_2 > 0$. Set $k := 1$.
Step 2. Compute $F_k = F(x_k)$, $J_k = J(x_k)$ and $g_k = \nabla f(x_k)$ from (3.2)-(3.3). If $\|g_k\|_2 < \varepsilon_1$, stop; otherwise, go to Step 3.
Step 3. Let
$\mu_k = \|F(x_k)\|^2.$
Compute a direction $d_k$ by solving the equation
$(J_k^T J_k + \mu_k I)d = -J_k^T F_k.$
Step 4. Find the smallest nonnegative integer $m_k$ such that $\alpha_k = \beta^{m_k}$ satisfies
$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma \alpha_k g_k^T d_k.$
Step 5.
$x_{k+1} = P_C(x_k + \alpha_k d_k).$
If $\|x_{k+1} - x_k\| < \varepsilon_2$, stop; otherwise, set $k := k+1$ and return to Step 2.
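A runnable sketch of Algorithm 3.1 is given below; it reuses residual_and_gradient from the sketch after (3.3). The function ball_projection, the default parameter values and the Armijo safeguard are our illustrative choices (the sets $C$ and $Q$ in the examples of Section 4 are Euclidean balls), not the authors' implementation.

```python
import numpy as np

def ball_projection(center, radius):
    """Projection onto the Euclidean ball {z : ||z - center|| <= radius}."""
    def proj(z):
        d = z - center
        nd = np.linalg.norm(d)
        return z if nd <= radius else center + radius * d / nd
    return proj

def p_stlm(A, P_C, P_Q, x0, beta=0.5, sigma=1e-4, eps1=1e-10, eps2=1e-12, kmax=50):
    """Sketch of the projection Levenberg-Marquardt method (Algorithm 3.1)."""
    x = x0.copy()
    for _ in range(kmax):
        F, J, g = residual_and_gradient(A, x, P_Q)
        if np.linalg.norm(g) < eps1:                      # Step 2: gradient test
            break
        mu = np.dot(F, F)                                 # Step 3: mu_k = ||F(x_k)||^2
        d = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ F)
        f_old = 0.5 * np.dot(F, F)
        alpha = 1.0                                       # Step 4: backtracking, alpha = beta^{m_k}
        while alpha > 1e-14:
            F_t, _, _ = residual_and_gradient(A, x + alpha * d, P_Q)
            if 0.5 * np.dot(F_t, F_t) <= f_old + sigma * alpha * np.dot(g, d):
                break
            alpha *= beta
        x_new = P_C(x + alpha * d)                        # Step 5: project back onto C
        if np.linalg.norm(x_new - x) < eps2:
            return x_new
        x = x_new
    return x
```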
In this section, however, we restate the TSFP (1.2) from the perspective of tensor splitting, expecting better convergence behavior, which will be reported in the numerical experiments.
We now briefly describe the (alternating) splitting iterative method for solving the unconstrained part of the multi-linear system (1.3), namely $\mathcal{A}x^{m-1} = P_Q(\mathcal{A}x^{m-1})$.
With $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1$, the above multi-linear system can clearly be written as
$\mathcal{E}_1 x^{m-1} = \mathcal{F}_1 x^{m-1} + P_Q(\mathcal{A}x^{m-1}),$  (3.4)
i.e.,
$\mathcal{I}_m x^{m-1} = M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x^{m-1}).$  (3.5)
Here the order-2 left-nonsingularity of the tensor $\mathcal{E}_1$ is used, and $\mathcal{I}_m$ is an identity tensor of appropriate order. The resulting scheme is summarized in Algorithm 3.2.
Algorithm 3.2. Projection-splitting tensor iterative method
Step 1. Input a strong M-tensor $\mathcal{A}$ with a (weak) regular splitting $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1$, nonempty closed convex sets $C$ and $Q$, a precision $\varepsilon > 0$ and an initial vector $x_0$. Set $k := 1$.
Step 2. If $\|\mathcal{A}x_k^{m-1} - P_Q(\mathcal{A}x_k^{m-1})\|_2 < \varepsilon$, stop; otherwise, go to Step 3.
Step 3.
$x_{k+1} = \left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\right)^{[\frac{1}{m-1}]}.$
Step 4.
$x_{k+1} = P_C(x_{k+1}).$
Step 5. Set $k := k+1$ and return to Step 2.
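A minimal sketch of Algorithm 3.2 follows, reusing tensor_apply, majorization_matrix and ball_projection from the sketches above. It assumes the iterates keep nonnegative components, so that the entrywise $\frac{1}{m-1}$ root in Step 3 is well defined, as is the case for the strong M-tensor examples of Section 4.

```python
import numpy as np

def p_splitting(A, E1, F1, P_C, P_Q, x0, eps=1e-10, kmax=50):
    """Sketch of the projection-splitting tensor iteration (Algorithm 3.2)
    for a (weak) regular splitting A = E1 - F1 of a strong M-tensor."""
    m = A.ndim
    M1inv = np.linalg.inv(majorization_matrix(E1))             # M(E1)^{-1}
    x = x0.copy()
    for _ in range(kmax):
        Ax = tensor_apply(A, x, m - 1)
        if np.linalg.norm(Ax - P_Q(Ax)) < eps:                 # Step 2: residual test
            break
        rhs = M1inv @ (tensor_apply(F1, x, m - 1) + P_Q(Ax))   # Step 3
        x = P_C(rhs ** (1.0 / (m - 1)))                        # Steps 3-4: root, then project onto C
    return x
```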
Now, consider the tensor splittings $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$. Based on these, a two-step tensor alternating splitting iteration method is introduced, which generates the iterative scheme
$x_{k+\frac{1}{2}} = \left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\right)^{[\frac{1}{m-1}]},$
$x_{k+1} = \left(M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x_{k+\frac{1}{2}}^{m-1} + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\right)^{[\frac{1}{m-1}]}.$
Set $\mathcal{G} := M(\mathcal{E}_2)^{-1}\mathcal{F}_2$. Since $\mathcal{I}_m x^{m-1} = x^{[m-1]}$, where $\mathcal{I}_m$ is an identity tensor of appropriate order, we have
$\mathcal{G}x_{k+\frac{1}{2}}^{m-1} = M(\mathcal{G})\mathcal{I}_m x_{k+\frac{1}{2}}^{m-1} = M(\mathcal{G})x_{k+\frac{1}{2}}^{[m-1]} = M(\mathcal{G})\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\right) = M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{G})M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1}).$  (3.6)
Hence,
$x_{k+1} = \left[M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{G})M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1}) + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\right]^{[\frac{1}{m-1}]}.$  (3.7)
The above analysis can be described concretely in Algorithm 3.3.
Now, let
$\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2) := M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1.$  (3.8)
In the following, we show that the spectral radius of the iteration tensor satisfies $\rho(\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)) < 1$, i.e., we prove the convergence of Algorithm 3.3.
Algorithm 3.3. Projection-splitting tensor alternating iterative method
Step 1. Input a strong M-tensor $\mathcal{A}$ with (weak) regular splittings $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$, nonempty closed convex sets $C$ and $Q$, a precision $\varepsilon > 0$ and an initial vector $x_0$. Set $k := 1$.
Step 2. If $\|\mathcal{A}x_k^{m-1} - P_Q(\mathcal{A}x_k^{m-1})\|_2 < \varepsilon$, stop; otherwise, go to Step 3.
Step 3.
$x_{k+\frac{1}{2}} = \left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1} + M(\mathcal{E}_1)^{-1}P_Q(\mathcal{A}x_k^{m-1})\right)^{[\frac{1}{m-1}]},$
$x_{k+1} = \left(M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x_{k+\frac{1}{2}}^{m-1} + M(\mathcal{E}_2)^{-1}P_Q(\mathcal{A}x_{k+\frac{1}{2}}^{m-1})\right)^{[\frac{1}{m-1}]}.$
Step 4.
$x_{k+1} = P_C(x_{k+1}).$
Step 5. Set $k := k+1$ and return to Step 2.
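The corresponding sketch of Algorithm 3.3 is given below, again reusing the helpers above; the same nonnegativity assumption applies to both half-steps.

```python
import numpy as np

def p_splitting_alt(A, E1, F1, E2, F2, P_C, P_Q, x0, eps=1e-10, kmax=50):
    """Sketch of the projection-splitting tensor alternating iteration
    (Algorithm 3.3) for the splittings A = E1 - F1 = E2 - F2."""
    m = A.ndim
    M1inv = np.linalg.inv(majorization_matrix(E1))
    M2inv = np.linalg.inv(majorization_matrix(E2))
    x = x0.copy()
    for _ in range(kmax):
        Ax = tensor_apply(A, x, m - 1)
        if np.linalg.norm(Ax - P_Q(Ax)) < eps:
            break
        # first half-step with the splitting A = E1 - F1
        y = (M1inv @ (tensor_apply(F1, x, m - 1) + P_Q(Ax))) ** (1.0 / (m - 1))
        # second half-step with the splitting A = E2 - F2, then project onto C
        Ay = tensor_apply(A, y, m - 1)
        x = P_C((M2inv @ (tensor_apply(F2, y, m - 1) + P_Q(Ay))) ** (1.0 / (m - 1)))
    return x
```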
Lemma 3.1. [9] Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, and let $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$ be a weak regular splitting and a regular splitting, respectively. If $\mathcal{F}_2 \le \mathcal{F}_1$, $\mathcal{F}_2 \ne 0$ and $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, then there exists a positive Perron vector $x \in \mathbb{R}^n$ such that
$M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \le \rho_k x^{[m-1]},$  (3.9)
where $\rho_k = \rho\!\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}\right)$, $k$ is a positive integer and $\mathcal{S} \in \mathbb{R}^{[m,n]}$ is a positive tensor.
Proof. Let $\mathcal{S} \in \mathbb{R}^{[m,n]}$ be the tensor whose entries are all equal to 1. By the strong Perron-Frobenius theorem (see [20,21]), for any given $k > N$, the tensor $M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ such that
$\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = \rho_k x^{[m-1]},$  (3.10)
where $\rho_k = \rho\!\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}\right)$.
Hence, this gives rise to
$M(\mathcal{E}_1)\left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = \mathcal{F}_1 x^{m-1}.$  (3.11)
By $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$, one gets $M(\mathcal{A}) = M(\mathcal{E}_1) - M(\mathcal{F}_1) = M(\mathcal{E}_2) - M(\mathcal{F}_2)$.
So we obtain
$M(\mathcal{A})\left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = \mathcal{F}_1 x^{m-1} - M(\mathcal{F}_1)\left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = (1-\rho_k)M(\mathcal{F}_1)\mathcal{I}_m x^{m-1} + \tfrac{1}{k}M(\mathcal{F}_1)\mathcal{S}x^{m-1} + \left(\mathcal{F}_1 - M(\mathcal{F}_1)\mathcal{I}_m\right)x^{m-1}.$  (3.12)
Further, it follows from (3.12) that
$\left(M(\mathcal{E}_2) - M(\mathcal{F}_2)\right)\left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} \ge (1-\rho_k)M(\mathcal{F}_2)\mathcal{I}_m x^{m-1} + \tfrac{1}{k}M(\mathcal{F}_2)\mathcal{S}x^{m-1} + \left(\mathcal{F}_2 - M(\mathcal{F}_2)\mathcal{I}_m\right)x^{m-1},$  (3.13)
where we notice the condition $\mathcal{F}_1 \ge \mathcal{F}_2$, so that $M(\mathcal{F}_1) \ge M(\mathcal{F}_2)$ and $\mathcal{F}_1 - M(\mathcal{F}_1)\mathcal{I}_m \ge \mathcal{F}_2 - M(\mathcal{F}_2)\mathcal{I}_m$.
By some simple computations, we obtain
$M(\mathcal{E}_2)\left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} \ge \mathcal{F}_2 x^{m-1}.$  (3.14)
Note that $M(\mathcal{E}_2)^{-1} \ge 0$ and $\mathcal{F}_2 \ge 0$, since $\mathcal{A} = \mathcal{E}_2 - \mathcal{F}_2$ is a regular splitting. Then (3.14) yields
$M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \le \left(\rho_k \mathcal{I}_m - \tfrac{1}{k}\mathcal{S}\right)x^{m-1} \le \rho_k x^{[m-1]}.$  (3.15)
Theorem 3.1. Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, and let $\mathcal{A} = \mathcal{E}_1 - \mathcal{F}_1 = \mathcal{E}_2 - \mathcal{F}_2$ be a weak regular splitting and a regular splitting, respectively. Suppose that $\mathcal{F}_2 \le \mathcal{F}_1$, $\mathcal{F}_2 \ne 0$, $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, and that $\mathcal{G} := M(\mathcal{E}_2)^{-1}\mathcal{F}_2$ is order-2 left-nonsingular, i.e., $\mathcal{G} = M(\mathcal{G})\mathcal{I}_m$. Then $\rho(\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)) < 1$, where $\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)$ is defined by (3.8).
Proof. First, as in the previous discussion, by the strong Perron-Frobenius theorem, for any given $k > N$, the tensor $M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ such that
$\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = \rho_k x^{[m-1]},$  (3.16)
where $\rho_k = \rho\!\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \frac{1}{k}\mathcal{S}\right)$. This implies
$M(\mathcal{G})\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\right)x^{m-1} = \rho_k M(\mathcal{G})x^{[m-1]} = \rho_k M(\mathcal{G})\mathcal{I}_m x^{m-1} = \rho_k \mathcal{G}x^{m-1} = \rho_k M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x^{m-1} \le \rho_k\,\rho_k x^{[m-1]} = (\rho_k)^2 x^{[m-1]},$  (3.17)
where the inequality follows from Lemma 3.1. Since $\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)x^{m-1} = M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x^{m-1} \le M(\mathcal{G})\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 + \tfrac{1}{k}\mathcal{S}\right)x^{m-1} \le (\rho_k)^2 x^{[m-1]}$ with $x > 0$, it follows that $\rho(\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)) \le (\rho_k)^2$. Moreover, since $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$, we have $\rho_k \to \rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1) < 1$ as $k \to \infty$, so $(\rho_k)^2 < 1$ for sufficiently large $k$, and hence $\rho(\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)) < 1$. This completes the proof.
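The condition $\rho(\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2)) < 1$ can also be checked numerically for a concrete splitting: under the assumptions of Theorem 3.1 the iteration tensor is nonnegative, so its spectral radius can be estimated with a Ng-Qi-Zhou-type power iteration. The sketch below is ours and reuses tensor_apply; it assumes the nonnegative tensor is irreducible so that the ratio bounds converge.

```python
import numpy as np

def nqz_spectral_radius(T, iters=500, tol=1e-12):
    """Estimate rho(T) for a nonnegative tensor T by a Ng-Qi-Zhou-type
    power iteration (assumes irreducibility, so the iterates stay positive)."""
    m, n = T.ndim, T.shape[0]
    x = np.ones(n)
    lo, hi = 0.0, np.inf
    for _ in range(iters):
        y = tensor_apply(T, x, m - 1)
        ratios = y / x ** (m - 1)
        lo, hi = ratios.min(), ratios.max()     # lower/upper bounds on rho(T)
        if hi - lo < tol:
            break
        x = y ** (1.0 / (m - 1))
        x /= np.linalg.norm(x)
    return 0.5 * (lo + hi)
```

For the splittings of Section 4, the iteration tensor $\mathcal{T}(\mathcal{E}_1,\mathcal{E}_2) = M(\mathcal{G})M(\mathcal{E}_1)^{-1}\mathcal{F}_1$ can be assembled with mat_tensor_prod and passed to this routine.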
In this section, some numerical examples are discussed to validate the effectiveness of the proposed projection-splitting tensor AOR method (denoted 'P-STAOR') and tensor symmetric (alternating) splitting AOR method (denoted 'P-STSAOR') for solving the multi-linear system (see Algorithms 3.2 and 3.3). We compare these methods with the projection tensor Levenberg-Marquardt method (denoted 'P-STLM', see Algorithm 3.1) in terms of the number of iterations (denoted 'It'), the elapsed CPU time in seconds (denoted 'CPU') and the residual error (denoted 'RES'). The iteration is terminated when the current iterate satisfies RES $= \|\mathcal{A}x^{m-1} - P_Q(\mathcal{A}x^{m-1})\|_2 < 10^{-10}$ or when the number of iterations exceeds the prescribed maximum $k_{\max} = 50$. All the numerical experiments were carried out in MATLAB R2011b on a PC equipped with an Intel(R) Core(TM) i7-2670QM CPU running at 2.20 GHz with 8 GB of RAM under the Windows 10 operating system.
Now, consider the splitting of the coefficient tensor in (1.2),
$\mathcal{A} = \mathcal{D} - \mathcal{L} - \mathcal{U}.$  (4.1)
The splittings used are listed in Table 1, where $\mathcal{D} = D\mathcal{I}_m$, $\mathcal{L} = L\mathcal{I}_m$, $\mathcal{U} = U\mathcal{I}_m$, and $D$, $-L$, $-U$ are the diagonal part, the strictly lower triangular part and the strictly upper triangular part of the majorization matrix $M(\mathcal{A})$, respectively.
splitting tensor | $\mathcal{E}_1$ | $\mathcal{E}_2$
P-STAOR | $\frac{1}{\omega}(\mathcal{D} - r\mathcal{L})$ | $-$
P-STSAOR | $\frac{1}{\omega}(\mathcal{D} - r\mathcal{L})$ | $\frac{1}{\omega}(\mathcal{D} - r\mathcal{U})$
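The splitting tensors of Table 1 can be assembled from the majorization matrix of $\mathcal{A}$ as in the following sketch (the function name and the return convention are ours); P-STAOR uses only $(\mathcal{E}_1, \mathcal{F}_1)$ with Algorithm 3.2, while P-STSAOR passes both pairs to Algorithm 3.3.

```python
import numpy as np

def aor_splittings(A, r, w):
    """Build the P-STAOR/P-STSAOR splitting tensors of Table 1 from
    M(A) = D - L - U; reuses identity_tensor, majorization_matrix and
    mat_tensor_prod from the earlier sketches."""
    m, n = A.ndim, A.shape[0]
    MA = majorization_matrix(A)
    D = np.diag(np.diag(MA))
    L = -np.tril(MA, -1)                      # strictly lower part, so that M(A) = D - L - U
    U = -np.triu(MA, 1)                       # strictly upper part
    Im = identity_tensor(m, n)
    E1 = mat_tensor_prod((D - r * L) / w, Im)
    E2 = mat_tensor_prod((D - r * U) / w, Im)
    return E1, E1 - A, E2, E2 - A             # (E1, F1, E2, F2) with F_i = E_i - A
```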
Example 4.1. First consider the tensor split feasibility problem (TSFP) (1.2) with the strong M-tensor $\mathcal{A} = 30\mathcal{I}_m - 0.01\mathcal{B}$, where $\mathcal{B}(i,j,k) = 0.6i + 0.1(k+j)$ and $\mathcal{I}_m$ is the identity tensor of order $m$ and dimension $n$, and with the nonempty closed convex sets
$C := \{x \in \mathbb{R}^5 \mid \|x - 200\| \le 20\}, \qquad Q := \{y \in \mathbb{R}^5 \mid \|y - 50\| \le 5\}.$
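To show how the pieces fit together, the driver below uses the tensor of Example 4.1 but replaces the ball sets with a deliberately feasible pair of our own (and parameters $r \le \omega \le 1$, which keep both splittings regular), so that the residual can actually reach the tolerance; it is only an illustration and not a reproduction of the reported experiments.

```python
import numpy as np

# tensor of Example 4.1: A = 30*I_m - 0.01*B with B(i,j,k) = 0.6i + 0.1(k+j), n = 5, m = 3
n, m = 5, 3
i, j, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1),
                      np.arange(1, n + 1), indexing="ij")
B = 0.6 * i + 0.1 * (k + j)
A = 30.0 * identity_tensor(m, n) - 0.01 * B

# feasible ball sets of our own: x* = 0.2e lies in C and A(x*)^{m-1} is the centre of Q
x_star = 0.2 * np.ones(n)
P_C = ball_projection(0.3 * np.ones(n), 0.5)
P_Q = ball_projection(tensor_apply(A, x_star, m - 1), 0.05)

r, w = 0.9, 1.0                               # illustrative parameters with r <= w <= 1
E1, F1, E2, F2 = aor_splittings(A, r, w)
x0 = np.ones(n)

for name, x in [("P-STAOR ", p_splitting(A, E1, F1, P_C, P_Q, x0)),
                ("P-STSAOR", p_splitting_alt(A, E1, F1, E2, F2, P_C, P_Q, x0))]:
    Ax = tensor_apply(A, x, m - 1)
    print(name, "final residual:", np.linalg.norm(Ax - P_Q(Ax)))
```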
Three examples with different tensors $\mathcal{A}$, $\mathcal{B}$ of various sizes are considered. The parameters $r$ and $\omega$ for P-STAOR and P-STSAOR, and $\beta$ for P-STLM, are selected empirically for each particular example; see the tables in this section for details. In all examples, the parameter $\mu$ for P-STLM (in Algorithm 3.1) is fixed at 0.25.
The numerical results are shown in Tables 2–6 and Figures 1–5. From these results, we can see that P-STAOR and P-STSAOR are efficient methods, and both are comparable with P-STLM in all respects. Overall, the two approaches proposed in this paper, P-STAOR and P-STSAOR, have nearly the same merits. However, in some cases, e.g., in terms of the number of iterations and elapsed CPU time, P-STSAOR appears to be more attractive than P-STAOR (see Table 4), possibly owing to the flexible selection of parameters and the superiority of the alternating iteration. Moreover, the residual curves versus the number of iterations in Figures 1–5 clearly exhibit the desired performance of the proposed methods.
Cases a–d: It, CPU and RES of P-STLM, P-STAOR and P-STSAOR.
Cases a–d: It, CPU and RES of P-STLM, P-STAOR and P-STSAOR.
Case | | P-STLM | P-STAOR | P-STSAOR
a. β=0.2, r=3.5, ω=1.6 | It | 13 | 4 | 2
 | CPU | 0.2244 | 0.0475 | 0.0251
 | RES | 2.1230e−10 | 5.6853e−14 | 2.8432e−14
b. β=0.4, r=4.5, ω=1.8 | It | 12 | 5 | 3
 | CPU | 0.1970 | 0.0434 | 0.0240
 | RES | 1.5531e−03 | 1.0000e−17 | 3.1786e−14
c. β=0.6, r=5.5, ω=2.0 | It | 12 | 4 | 2
 | CPU | 0.2253 | 0.0469 | 0.0364
 | RES | 1.2561e−03 | 5.6853e−14 | 1.0000e−17
d. β=0.8, r=6.5, ω=2.2 | It | 12 | 6 | 3
 | CPU | 0.2165 | 0.0416 | 0.0377
 | RES | 1.8021e−03 | 5.6853e−14 | 2.8432e−14
Cases a–d: It, CPU and RES of P-STLM, P-STAOR and P-STSAOR.
Case | | P-STLM | P-STAOR | P-STSAOR
a. β=0.2, r=3.5, ω=1.6 | It | 13 | 6 | 4
 | CPU | 0.4062 | 0.0687 | 0.0530
 | RES | 1.6000e−01 | 9.9739e−13 | 1.0050e−14
b. β=0.4, r=4.5, ω=1.8 | It | 14 | 7 | 4
 | CPU | 0.4510 | 0.0697 | 0.0587
 | RES | 2.2000e−03 | 1.1763e−13 | 1.5953e−13
c. β=0.6, r=5.5, ω=2.0 | It | 14 | 6 | 3
 | CPU | 0.4383 | 0.0677 | 0.0543
 | RES | 1.3700e−04 | 1.4973e−13 | 7.9133e−14
d. β=0.8, r=6.5, ω=2.2 | It | 14 | 6 | 3
 | CPU | 0.4234 | 0.0631 | 0.0547
 | RES | 1.3700e−04 | 1.7593e−13 | 1.6282e−13
Across the three examples, whatever the parameters $r$ and $\omega$ of P-STAOR and P-STSAOR are, the convergence results are consistently satisfactory. However, P-STLM depends heavily on the parameter $\beta$: when $\beta = 0.2$ (Case a in Table 6), P-STLM no longer attains the prescribed accuracy. At the same time, we notice that the residual of P-STLM leaves the displayed range in Figure 5. Nevertheless, the residuals of P-STAOR and P-STSAOR still decline rapidly and converge well.
Example 4.2. Consider the TSFP (1.2) with the strong M-tensor $\mathcal{A} = 50\mathcal{I}_m - 0.01\mathcal{B}$, where $\mathcal{B}(i,j,k,l,p) = 0.85i + 0.3(k+j+l+p)$ and $\mathcal{I}_m$ is the identity tensor of order $m$ and dimension $n$, and with the nonempty closed convex sets
$C := \{x \in \mathbb{R}^5 \mid \|x - 30\| \le 3\}, \qquad Q := \{y \in \mathbb{R}^5 \mid \|y - 5\| \le 40\}.$
Example 4.3. Finally, consider the TSFP (1.2) with the strong M-tensor $\mathcal{A} = 300\mathcal{I}_m - 0.5\mathcal{B}$, where $\mathcal{B} = \mathrm{rand}(n,n,n,n,n)$ is a randomly generated tensor of order $m$ and dimension $n$, $\mathcal{I}_m$ is the identity tensor of order $m$ and dimension $n$, and with the nonempty closed convex sets
$C := \{x \in \mathbb{R}^n \mid \|x - 80\| \le 60\}, \qquad Q := \{y \in \mathbb{R}^n \mid \|y - 20\| \le 200\}.$
We test this example with different n.
For this example, the numerical results are depicted in Tables 4–6 and Figures 3–5. In terms of elapsed CPU time and number of iterations, P-STSAOR always performs better than the other approaches. Although the optimal parameters are perhaps hard to determine, in practical applications we simply choose them by trial according to the observed performance. We also tried to introduce a preconditioner for the constrained tensor system (1.3); however, it is difficult to choose a suitable one for this problem. Hence, the investigation of optimal parameters and appropriate preconditioners for P-STSAOR and P-STAOR will be pursued in future work. In a word, P-STSAOR and P-STAOR can be regarded as promising and efficient approaches for solving the multi-linear split feasibility problem.
In this paper, by making use of the idea of tensor alternating splitting, two efficient methods have been presented for solving the multi-linear split feasibility problem. Concretely, by extending accelerated overrelaxation and symmetric accelerated overrelaxation, tensor accelerated overrelaxation and tensor symmetric accelerated overrelaxation splitting iteration strategies are introduced for solving the multi-linear split feasibility problem, which are distinctly different from techniques based on the optimization viewpoint. The convergence result is presented in detail through the analysis of the tensor spectral radius. The proposed P-STSAOR and P-STAOR have been verified to be quite effective, and the numerical results illustrate that they meet expectations as efficient methods. Finally, some suggestions for future research have also been provided.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors are very much indebted to the anonymous referees for their constructive and valuable comments and suggestions which greatly improved the original manuscript of this paper. This work was supported by Fujian Natural Science Foundation (Grant No.2022J01378).
The authors declare that they have no conflict of interest.
[1] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Probl., 18 (2002), 411–453. https://doi.org/10.1088/0266-5611/18/2/310
[2] H. K. Xu, Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces, Inverse Probl., 26 (2010), 105018. https://doi.org/10.1088/0266-5611/26/10/105018
[3] Y. Censor, T. Elfving, N. Kopf, T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Probl., 21 (2005), 2071–2084. https://doi.org/10.1088/0266-5611/21/6/017
[4] W. Zhang, D. Han, Z. Li, A self-adaptive projection method for solving the multiple-sets split feasibility problem, Inverse Probl., 25 (2009), 115001. https://doi.org/10.1088/0266-5611/25/11/115001
[5] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), 2353–2365. https://doi.org/10.1088/0031-9155/51/10/001
[6] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006
[7] J. Zhao, Q. Yang, A simple projection method for solving the multiple-sets split feasibility problem, Inverse Probl. Sci. Eng., 21 (2013), 537–546. https://doi.org/10.1080/17415977.2012.712521
[8] J. Zhao, Q. Yang, Several acceleration schemes for solving the multiple-sets split feasibility problem, Linear Algebra Appl., 437 (2012), 1648–1657. https://doi.org/10.1016/j.laa.2012.05.018
[9] W. Ding, Y. Wei, Solving multilinear systems with M-tensors, J. Sci. Comput., 68 (2016), 689–715. https://doi.org/10.1007/s10915-015-0156-7
[10] W. Ding, Y. Wei, Generalized tensor eigenvalue problems, SIAM J. Matrix Anal. Appl., 36 (2015), 1073–1099. https://doi.org/10.1137/140975656
[11] L. Han, A homotopy method for solving multilinear systems with M-tensors, Appl. Math. Lett., 69 (2017), 49–54. https://doi.org/10.1016/j.aml.2017.01.019
[12] D. D. Liu, W. Li, S. W. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75–94. https://doi.org/10.1016/j.cam.2017.08.009
[13] W. Li, D. D. Liu, S. W. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105–121. https://doi.org/10.1016/j.apnum.2018.07.009
[14] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), 455–500. https://doi.org/10.1137/07070111X
[15] A. Cichocki, R. Zdunek, A. H. Phan, S. I. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, Wiley, 2009.
[16] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421–427.
[17] W. Li, M. K. Ng, On the inverse of a tensor, Linear Algebra Appl., 495 (2016), 199–205. https://doi.org/10.1016/j.laa.2016.01.011
[18] L. Zhang, L. Qi, G. Zhou, M-tensor and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437–452. https://doi.org/10.1137/130915339
[19] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007
[20] L. Qi, Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, Philadelphia: Society for Industrial and Applied Mathematics, 2017. https://doi.org/10.1137/1.9781611974751
[21] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., 31 (2010), 2517–2530. https://doi.org/10.1137/090778766