Citation: Yajun Xie, Minhua Yin, Changfeng Ma. Novel accelerated methods of tensor splitting iteration for solving multi-systems[J]. AIMS Mathematics, 2020, 5(3): 2801-2812. doi: 10.3934/math.2020180
In this paper, we discuss the following multi-linear system:
$$Ax^{m-1}=b, \tag{1.1}$$
where $A=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ is an order-$m$, dimension-$n$ tensor and $b\in\mathbb{R}^n$ is an $n$-dimensional vector.
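The basic operation throughout is the action of a tensor on a vector, $(Ax^{m-1})_i=\sum_{i_2,\cdots,i_m}a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}$. As a minimal illustration (the paper's experiments use MATLAB; this NumPy sketch and its names are ours), the order-3 case can be evaluated with `einsum`:

```python
import numpy as np

def tensor_apply(A, x):
    """Evaluate A x^{m-1} for an order-3, dimension-n tensor A:
    (A x^2)_i = sum_{j,k} a_{ijk} x_j x_k."""
    return np.einsum('ijk,j,k->i', A, x, x)

# sanity check: for the identity tensor I_3 (diagonal entries 1),
# I_3 x^2 reproduces the Hadamard power x^{[2]}
n = 4
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0
x = np.array([1.0, 2.0, 3.0, 4.0])
print(tensor_apply(I3, x))  # [ 1.  4.  9. 16.]
```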
An essential problem in pure and applied mathematics is solving various classes of equations. Fast and efficient methods for multi-linear systems [1,2,3,4,5] are becoming increasingly significant in science and engineering because of their wide applications, especially for the data-analysis needs of the big-data era (see [6,7,8,9,10]). Normally, it is hard to obtain the exact solution by direct methods even for small-scale general linear systems, which has greatly promoted the development of various iterative strategies. Many works in the literature have investigated fast solvers for the multi-linear system (1.1). Ding and Wei [11,12] proposed some classical iterations, such as the Jacobi, Gauss-Seidel, and Newton methods, by translating (1.1) into an optimization problem. In general, the computational cost of the Newton method is expensive. Han [13] then investigated a homotopy method based on the Euler-Newton prediction-correction technique to solve multi-linear systems with nonsymmetric M-tensors, which shows better convergence performance than the Newton method. The tensor splitting method and its convergence results have been studied by Liu, Li, et al. [14]. Further comparison results for splitting iterations for solving multi-linear systems were investigated in [15,16]; however, we find that accelerated versions can still be introduced and may well improve on this work. Motivated by [15], we propose a tensor alternating splitting iteration scheme for solving multi-linear systems. In practical applications, the accelerated overrelaxation (AOR) method, the symmetric accelerated overrelaxation (SAOR) method, and their preconditioned versions are generalized to solve the multi-linear system (1.1).
The remainder of this paper is organized as follows. In Section 2, some basic and useful notations are briefly reviewed. In Section 3, we propose a tensor alternating splitting iteration scheme for solving multi-linear systems; meanwhile, a preconditioner is introduced to accelerate the new method. In Section 4, the classical AOR and SAOR approaches are generalized to solve multi-linear systems, and some numerical tests are provided to illustrate the superiority of the presented iteration methods. Finally, a concluding remark is given in Section 5.
Let $A\in\mathbb{R}^{[2,n]}$ and $B\in\mathbb{R}^{[k,n]}$. The matrix-tensor product $C=AB\in\mathbb{R}^{[k,n]}$ is defined by
$$c_{ji_2\cdots i_k}=\sum_{j_2=1}^{n}a_{jj_2}\,b_{j_2i_2\cdots i_k}. \tag{2.1}$$
The above formula can equivalently be written as
$$C_{(1)}=(AB)_{(1)}=AB_{(1)}, \tag{2.2}$$
where $C_{(1)}$ and $B_{(1)}$ are the matrices obtained by flattening $C$ and $B$ along the first index. For more details, see [17,18].
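In code, (2.2) says the matrix acts on the mode-1 unfolding. A small NumPy sketch (our own illustration; `mode1_product` is a hypothetical helper name):

```python
import numpy as np

def mode1_product(A, B):
    """Matrix-tensor product C = A B of (2.1)/(2.2): C_(1) = A @ B_(1),
    where B_(1) flattens B along its first index."""
    n = B.shape[0]
    C1 = A @ B.reshape(n, -1)
    return C1.reshape(B.shape)

rng = np.random.default_rng(0)
A = rng.random((2, 2))       # A in R^{[2,n]}
B = rng.random((2, 2, 2))    # B in R^{[k,n]} with k = 3
C = mode1_product(A, B)
# one entry by the componentwise definition (2.1):
# c_{j i2 i3} = sum_{j2} a_{j j2} b_{j2 i2 i3}
c_011 = sum(A[0, j2] * B[j2, 1, 1] for j2 in range(2))
```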
Definition 2.1. ([19]). Let $A=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. The majorization matrix $M(A)$ of $A$ is the $n\times n$ matrix with entries
$$M(A)_{ij}=a_{ij\cdots j},\qquad i,j=1,2,\cdots,n. \tag{2.3}$$
Definition 2.2. ([15]). Let $A=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. If $M(A)$ is a nonsingular matrix and $A=M(A)I_m$, then $M(A)^{-1}$ is called the order-2 left-inverse of the tensor $A$, and $A$ is said to be left-nonsingular, where $I_m$ is the identity tensor, whose diagonal entries all equal 1.
Definition 2.3. ([15]). Let $A,E,F\in\mathbb{R}^{[m,n]}$. The decomposition $A=E-F$ is called a splitting of the tensor $A$ if $E$ is left-nonsingular; a regular splitting of $A$ if $E$ is left-nonsingular with $M(E)^{-1}\ge 0$ and $F\ge 0$ (here $\le$ and $\ge$ are understood elementwise); a weak regular splitting of $A$ if $E$ is left-nonsingular with $M(E)^{-1}F\ge 0$; and a convergent splitting if the spectral radius of $M(E)^{-1}F$ is less than 1, i.e., $\rho(M(E)^{-1}F)<1$.
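These definitions are easy to check numerically. The sketch below (our illustration for order $m=3$; the shift $s=10$ is an arbitrary constant that makes $A$ a Z-tensor with positive diagonal) forms $M(A)$, builds the Jacobi-type splitting $A=E-F$ with $E=DI_m$, and verifies the weak-regular-splitting condition $M(E)^{-1}F\ge 0$ elementwise:

```python
import numpy as np

def majorization(A):
    """M(A)_{ij} = a_{ij...j} (Definition 2.1), written for order m = 3."""
    n = A.shape[0]
    return np.array([[A[i, j, j] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(2)
n = 3
B = rng.random((n, n, n))    # nonnegative tensor
A = -B.copy()
for i in range(n):
    A[i, i, i] += 10.0       # A = 10*I_3 - B, a Z-tensor with positive diagonal

# Jacobi-type splitting: E = D I_3 with D = diag(M(A)), F = E - A
D = np.diag(np.diag(majorization(A)))
E = np.zeros_like(A)
for i in range(n):
    E[i, i, i] = D[i, i]
F = E - A
# weak regular splitting: M(E)^{-1} F >= 0 (M(E)^{-1} acts via the mode-1 product)
G = np.einsum('st,tjk->sjk', np.linalg.inv(D), F)
```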
Definition 2.4. ([20]). Let $A=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. A pair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$ is called an eigenvalue-eigenvector pair of the tensor $A$ if it satisfies
$$Ax^{m-1}=\lambda x^{[m-1]}, \tag{2.4}$$
where $x^{[m-1]}=(x_1^{m-1},x_2^{m-1},\cdots,x_n^{m-1})^T$. The pair $(\lambda,x)$ is called an H-eigenpair if both $\lambda$ and $x$ are real.
Definition 2.5. Let $\rho(A)=\max\{|\lambda| : \lambda\in\sigma(A)\}$ denote the spectral radius of $A$, where $\sigma(A)$ is the set of all eigenvalues of $A$.
Definition 2.6. ([21]). Let $A=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. $A$ is called a Z-tensor if its off-diagonal entries are non-positive. $A$ is called an M-tensor if there exist a nonnegative tensor $B$ and a positive real number $\eta\ge\rho(B)$ such that
$$A=\eta I_m-B.$$
If $\eta>\rho(B)$, then $A$ is called a strong M-tensor.
Consider two tensor splittings $A=E_1-F_1=E_2-F_2$. First, we briefly describe the alternating direction iterative method for solving the multi-linear system $Ax^{m-1}=b$.
By $A=E_1-F_1$, the system can clearly be written as
$$E_1x^{m-1}=F_1x^{m-1}+b, \tag{3.1}$$
i.e.,
$$I_mx^{m-1}=M(E_1)^{-1}F_1x^{m-1}+M(E_1)^{-1}b, \tag{3.2}$$
where we have used the order-2 left-nonsingularity of the tensor $E_1$, and $I_m$ is an identity tensor of appropriate order. This leads to Algorithm 3.1.
Algorithm 3.1. (Preconditioned tensor splitting iterative method (PTSI) [15])
Step 1. Input a vector $b$, a preconditioner $P$, and a strong M-tensor $A$ with a (weak) regular splitting $A_P:=PA=E_{P1}-F_{P1}$. Given a precision $\varepsilon>0$ and an initial vector $x_0$, set $k:=1$.
Step 2. If $\|A_Px_k^{m-1}-Pb\|_2<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Compute
$$x_{k+1}=\left(M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(E_{P1})^{-1}Pb\right)^{[\frac{1}{m-1}]}.$$
Step 4. Set $k:=k+1$ and return to Step 2.
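For concreteness, here is a NumPy sketch of Algorithm 3.1 with $P=I$ and the Jacobi splitting (so this is the unpreconditioned 'TJb' iteration; the function name and the $m=3$ specialization are ours), applied to Case 2 of Example 4.1 below:

```python
import numpy as np

def ptsi_jacobi(A, b, x0, eps=1e-11, kmax=500):
    """Algorithm 3.1 with P = I and the Jacobi splitting E = D I_m,
    D = diag(M(A)), for an order-3 strong M-tensor A."""
    n = A.shape[0]
    D = np.array([A[i, i, i] for i in range(n)])   # diagonal of M(A)
    x = x0.astype(float).copy()
    for _ in range(kmax):
        Ax = np.einsum('ijk,j,k->i', A, x, x)
        if np.linalg.norm(Ax - b) < eps:           # Step 2
            break
        Fx = D * x**2 - Ax                         # F x^{m-1} = E x^{m-1} - A x^{m-1}
        x = np.maximum((Fx + b) / D, 0.0) ** 0.5   # Step 3: (...)^{[1/(m-1)]}
    return x

# Case 2 of Example 4.1: A = n^2 I_3 - B with b_{ijk} = |sin(i+j+k)|, n = 3
n = 3
idx = np.arange(1, n + 1)
B = np.abs(np.sin(idx[:, None, None] + idx[None, :, None] + idx[None, None, :]))
A = -B
for i in range(n):
    A[i, i, i] += n**2
x = ptsi_jacobi(A, np.ones(n), np.ones(n))
```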
Based on this, we introduce the two-step tensor alternating splitting iteration method, which has the iterative scheme
$$\begin{cases} x_{k+\frac{1}{2}}=\left(M(E_1)^{-1}F_1x_k^{m-1}+M(E_1)^{-1}b\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(E_2)^{-1}F_2x_{k+\frac{1}{2}}^{m-1}+M(E_2)^{-1}b\right)^{[\frac{1}{m-1}]}. \end{cases} \tag{3.3}$$
We further consider the preconditioned multi-linear system $PAx^{m-1}=Pb$. It follows from the splitting $A_P:=PA=E_{P1}-F_{P1}=E_{P2}-F_{P2}$ that
$$\begin{cases} x_{k+\frac{1}{2}}=\left(M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(E_{P1})^{-1}Pb\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(E_{P2})^{-1}F_{P2}x_{k+\frac{1}{2}}^{m-1}+M(E_{P2})^{-1}Pb\right)^{[\frac{1}{m-1}]}. \end{cases} \tag{3.4}$$
Set $G:=M(E_{P2})^{-1}F_{P2}$. Since $I_mx^{m-1}=x^{[m-1]}$, where $I_m$ is an identity tensor of appropriate order, we have
$$\begin{aligned} Gx_{k+\frac{1}{2}}^{m-1}&=M(G)\cdot I_mx_{k+\frac{1}{2}}^{m-1}=M(G)x_{k+\frac{1}{2}}^{[m-1]}\\ &=M(G)\left(M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(E_{P1})^{-1}Pb\right)\\ &=M(G)M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(G)M(E_{P1})^{-1}Pb. \end{aligned} \tag{3.5}$$
Hence,
$$x_{k+1}=\left[M(G)M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(G)M(E_{P1})^{-1}Pb+M(E_{P2})^{-1}Pb\right]^{[\frac{1}{m-1}]}. \tag{3.6}$$
The above analysis is summarized concretely in Algorithm 3.2. Now, let
$$T(E_{P1},E_{P2}):=M(G)M(E_{P1})^{-1}F_{P1}. \tag{3.7}$$
Next, we show that the spectral radius of the preconditioned iteration tensor satisfies $\rho(T(E_{P1},E_{P2}))<1$, i.e., we prove the convergence of Algorithm 3.2.
Algorithm 3.2. (Preconditioned tensor alternating splitting iterative method (PTASI))
Step 1. Input a vector $b$, a preconditioner $P$, and a strong M-tensor $A$ with (weak) regular splittings $A_P:=PA=E_{P1}-F_{P1}=E_{P2}-F_{P2}$. Given a precision $\varepsilon>0$ and an initial vector $x_0$, set $k:=1$.
Step 2. If $\|A_Px_k^{m-1}-Pb\|_2<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Compute
$$\begin{cases} x_{k+\frac{1}{2}}=\left(M(E_{P1})^{-1}F_{P1}x_k^{m-1}+M(E_{P1})^{-1}Pb\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(E_{P2})^{-1}F_{P2}x_{k+\frac{1}{2}}^{m-1}+M(E_{P2})^{-1}Pb\right)^{[\frac{1}{m-1}]}. \end{cases}$$
Step 4. Set $k:=k+1$ and return to Step 2.
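A NumPy sketch of one sweep of the alternating scheme (again with $P=I$, $m=3$, and an illustrative Gauss-Seidel-type pair $M(E_1)=\mathrm{tril}(M(A))$, $M(E_2)=\mathrm{triu}(M(A))$; all names are ours), tested on the Case 2 tensor of Example 4.1:

```python
import numpy as np

def tensor_apply(A, x):
    return np.einsum('ijk,j,k->i', A, x, x)

def alternating_sweep(A, b, x, ME1, ME2):
    """One iteration of scheme (3.3) for m = 3: a half-step with M(E1)
    followed by a half-step with M(E2). F_i x^{m-1} is evaluated as
    M(E_i) x^{[m-1]} - A x^{m-1}, using E_i = M(E_i) I_m."""
    for ME in (ME1, ME2):
        rhs = ME @ x**2 - tensor_apply(A, x) + b   # F x^{m-1} + b
        x = np.maximum(np.linalg.solve(ME, rhs), 0.0) ** 0.5
    return x

# Case 2 tensor of Example 4.1: A = n^2 I_3 - B, b_{ijk} = |sin(i+j+k)|
n = 3
idx = np.arange(1, n + 1)
B = np.abs(np.sin(idx[:, None, None] + idx[None, :, None] + idx[None, None, :]))
A = -B
for i in range(n):
    A[i, i, i] += n**2
MA = np.array([[A[i, j, j] for j in range(n)] for i in range(n)])  # M(A)
x, b = np.ones(n), np.ones(n)
for _ in range(200):
    x = alternating_sweep(A, b, x, np.tril(MA), np.triu(MA))
```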
Lemma 3.1. ([11]). Let $A_P=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$, and let $A_P=E_{P1}-F_{P1}=E_{P2}-F_{P2}$ be a weak regular splitting and a regular splitting, respectively. If $F_{P2}<F_{P1}$, $F_{P2}\ne 0$, and $\rho(M(E_{P1})^{-1}F_{P1})<1$, then there exists a positive Perron vector $x\in\mathbb{R}^n$ such that
$$M(E_{P2})^{-1}F_{P2}x^{m-1}\le\rho_kx^{[m-1]}, \tag{3.8}$$
where $\rho_k=\rho(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S)$, $k$ is a positive integer, and $S\in\mathbb{R}^{[m,n]}$ is a positive tensor.
Proof. Let $S\in\mathbb{R}^{[m,n]}$ be the tensor whose entries all equal 1. There exists a positive integer $N$ such that $\rho(M(E_{P1})^{-1}F_{P1})\le\rho(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S)<1$ for $k>N$. It is easy to check that $M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S$ is positive and hence irreducible. By the strong Perron-Frobenius theorem (see [22,23]), $M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S$ has a positive Perron vector $x$ such that
$$\left(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S\right)x^{m-1}=\rho_kx^{[m-1]} \tag{3.9}$$
for $k>N$, where $\rho_k=\rho(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S)$.
Hence, this gives rise to
$$M(E_{P1})\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}=F_{P1}x^{m-1}. \tag{3.10}$$
By $A_P=E_{P1}-F_{P1}=E_{P2}-F_{P2}$, one gets $M(A_P)=M(E_{P1})-M(F_{P1})=M(E_{P2})-M(F_{P2})$. It follows that
$$\begin{aligned} M(A_P)\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}&=F_{P1}x^{m-1}-M(F_{P1})\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}\\ &=(1-\rho_k)M(F_{P1})I_mx^{m-1}+\frac{1}{k}M(F_{P1})Sx^{m-1}+\left(F_{P1}-M(F_{P1})I_m\right)x^{m-1}. \end{aligned} \tag{3.11}$$
Further, it follows from (3.11) that
$$\left(M(E_{P2})-M(F_{P2})\right)\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}\ge(1-\rho_k)M(F_{P2})I_mx^{m-1}+\frac{1}{k}M(F_{P2})Sx^{m-1}+\left(F_{P2}-M(F_{P2})I_m\right)x^{m-1}, \tag{3.12}$$
where one should note that the condition $F_{P1}\ge F_{P2}$, together with Definition 2.1, yields $M(F_{P1})\ge M(F_{P2})$, $F_{P1}-M(F_{P1})I_m\ge F_{P2}-M(F_{P2})I_m$, and $\rho_k<1$.
After some simple computations, we have
$$M(E_{P2})\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}\ge F_{P2}x^{m-1}. \tag{3.13}$$
Observe that $M(E_{P2})^{-1}\ge 0$ and $F_{P2}\ge 0$, owing to the regular splitting $A_P=E_{P2}-F_{P2}$. From (3.13), one gets
$$M(E_{P2})^{-1}F_{P2}x^{m-1}\le\left(\rho_kI_m-\frac{1}{k}S\right)x^{m-1}\le\rho_kx^{[m-1]}. \tag{3.14}$$
This completes the proof.
Theorem 3.1. Let $A_P=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$, and let $A_P=E_{P1}-F_{P1}=E_{P2}-F_{P2}$ be a weak regular splitting and a regular splitting, respectively. Suppose $F_{P2}<F_{P1}$, $F_{P2}\ne 0$, $\rho(M(E_{P1})^{-1}F_{P1})<1$, and $G:=M(E_{P2})^{-1}F_{P2}$ is order-2 left-nonsingular, i.e., $G=M(G)I_m$. Then $\rho(T(E_{P1},E_{P2}))<1$, where $T(E_{P1},E_{P2})$ is defined by (3.7).
Proof. First, as in the previous discussion, by the strong Perron-Frobenius theorem there exists $N>0$ such that $M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S$ has a positive Perron vector $x$ with
$$\left(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S\right)x^{m-1}=\rho_kx^{[m-1]} \tag{3.15}$$
for $k>N$, where $\rho_k=\rho(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S)$. That is to say,
$$\begin{aligned} M(G)\left(M(E_{P1})^{-1}F_{P1}+\frac{1}{k}S\right)x^{m-1}&=\rho_kM(G)x^{[m-1]}=\rho_kM(G)I_mx^{m-1}=\rho_kGx^{m-1}\\ &=\rho_kM(E_{P2})^{-1}F_{P2}x^{m-1}\le\rho_k\cdot\rho_kx^{[m-1]}=(\rho_k)^2x^{[m-1]}, \end{aligned} \tag{3.16}$$
where the inequality comes from Lemma 3.1. Letting $k\to\infty$, we have $\rho_k\to\rho(M(E_{P1})^{-1}F_{P1})<1$, which yields the result. This completes the proof.
In this section, some numerical examples are discussed to validate the effectiveness of the proposed preconditioned tensor AOR ('PTAOR') and preconditioned tensor alternating splitting AOR ('PTAAOR') two-step splitting methods for solving multi-linear systems (see Algorithms 3.1 and 3.2). We compare the convergence of the preconditioned tensor Jacobi method ('PTJb'), the preconditioned tensor Gauss-Seidel method ('PTGS'), the preconditioned tensor SOR method ('PTSOR'), and their unpreconditioned versions in terms of the iteration count ('IT'), the elapsed CPU time in seconds ('CPU'), and the residual error ('RES').
Now, consider the tensor preconditioned splitting of (1.1),
$$A_P:=PA=\mathcal{D}-\mathcal{L}-\mathcal{U}. \tag{4.1}$$
The splittings are listed in Table 1, where $\mathcal{D}=DI_m$, $\mathcal{L}=LI_m$, $\mathcal{U}=UI_m$, and $D$, $-L$, $-U$ are the diagonal part, the strictly lower triangular part, and the strictly upper triangular part of $M(PA)$, respectively. Clearly, the splittings described in Table 1 satisfy the condition of a weak regular splitting, i.e., $M(E)^{-1}F\ge 0$.
Table 1. Tensor splittings of the tested methods.
Splitting tensor | $E_{P1}$ | $E_{P2}$ |
PTJb | $\mathcal{D}$ | − |
PTGS | $\mathcal{D}-\mathcal{L}$ | − |
PTSOR | $\frac{1}{\omega}(\mathcal{D}-\omega\mathcal{L})$ | − |
PTAOR | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$ | − |
PTAAOR | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$ | $\frac{1}{\omega}(\mathcal{D}-r\mathcal{U})$ |
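The matrices $M(E_{P1})$ in Table 1 are all triangular and can be assembled directly from $M(PA)$. A small sketch (our illustration; the sample matrix $M(PA)$ below is made up) for the PTAOR splitting:

```python
import numpy as np

def aor_splitting_matrix(MPA, omega, r):
    """M(E_{P1}) = (1/omega)(D - r L) for the PTAOR row of Table 1, where
    D, -L, -U are the diagonal / strictly lower / strictly upper parts
    of M(PA)."""
    D = np.diag(np.diag(MPA))
    L = -np.tril(MPA, -1)    # the strictly lower part of M(PA) is -L
    return (D - r * L) / omega

MPA = np.array([[4.0, -1.0, 0.0],
                [-2.0, 5.0, -1.0],
                [0.0, -2.0, 6.0]])
ME = aor_splitting_matrix(MPA, omega=1.0, r=1.0)
```

With $\omega=r=1$, PTAOR reduces to PTGS, i.e. $M(E)=D-L=\mathrm{tril}(M(PA))$.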
A preconditioner of the type $P_\alpha=I+S_\alpha$ (a variant of the forms in [24,25]) is considered for solving $PAx^{m-1}=Pb$, where
$$S_\alpha=\begin{pmatrix} 0 & -\alpha_1a_{12\cdots 2} & 0 & \cdots & 0\\ -\alpha_1a_{21\cdots 1} & 0 & -\alpha_2a_{23\cdots 3} & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \ddots & \ddots & -\alpha_{n-1}a_{n-1,n\cdots n}\\ 0 & 0 & 0 & -\alpha_{n-1}a_{n,n-1\cdots n-1} & 0 \end{pmatrix},$$
and $\alpha_i=0.01$, $i=1,2,\cdots,n-1$.
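Assembling $P_\alpha$ from the entries of $M(A)$ is straightforward; a sketch (our illustration with a made-up tridiagonal $M(A)$; all $\alpha_i$ share the value 0.01 as in the experiments):

```python
import numpy as np

def precond_P(MA, alpha=0.01):
    """P_alpha = I + S_alpha, where S_alpha carries -alpha_i a_{i,i+1,...,i+1}
    and -alpha_i a_{i+1,i,...,i} (entries of M(A)) on its super- and
    sub-diagonals."""
    n = MA.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        S[i, i + 1] = -alpha * MA[i, i + 1]
        S[i + 1, i] = -alpha * MA[i + 1, i]
    return np.eye(n) + S

MA = np.array([[2.0, -1.0, 0.0],
               [-1.0, 2.0, -1.0],
               [0.0, -1.0, 2.0]])
P = precond_P(MA)
```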
All the numerical experiments were carried out in MATLAB R2011b on a PC equipped with an Intel(R) Core(TM) i7-2670QM CPU running at 2.20 GHz with 8 GB of RAM under the Windows 7 operating system.
Example 4.1. First, consider the multi-linear system (1.1) with a strong M-tensor $A$ in different cases [14,15].
Case 1. $A=864.4895\,I_m-B$, where $B\in\mathbb{R}^{[3,5]}$ is a nonnegative tensor with $b_{i_1i_2i_3}=|\tan(i_1+i_2+i_3)|$.
Case 2. $A=n^2I_3-B$, where $B\in\mathbb{R}^{[3,3]}$ is a nonnegative tensor with $b_{i_1i_2i_3}=|\sin(i_1+i_2+i_3)|$.
Case 3. $A=sI_3-B$, where $B$ is generated randomly by MATLAB, $s=(1+\delta)\max_{i=1,2,\cdots,n}(Be^2)_i$, and $e=(1,1,\cdots,1)^T$. Let $\delta=8$ and $b=x_0=e$.
We consider three cases with different tensors $A$, $B$ of various sizes. The parameters $r$ and $\omega$ are selected empirically for each particular example. In this example, the run is terminated when the current iterate satisfies $\mathrm{RES}=\|Ax^{m-1}-b\|_2<10^{-11}$ or when the number of iterations exceeds the prescribed maximum $k_{\max}=500$.
The numerical results are shown in Tables 2 and 3 and Figures 1 and 2. From these results, we can see that PTAAOR and PTAOR are efficient methods, and both of them outperform PTSOR, PTGS, and PTJb in all respects. PTAAOR seems to be an attractive approach; moreover, PTAOR is more efficient than the PTSOR, PTGS, and PTJb methods in every aspect, owing to the flexible selection of its parameters. It is clear that PTSOR and PTGS have nearly the same efficiency as $\omega$ approaches 1. Meanwhile, we find that PTGS is superior to PTSOR in terms of iteration count. This further bears out the conclusions in [15], one of which states that the spectral radius of PTGS is smaller than that of PTSOR (Corollary 5.6, [15]). From the residual curves against the iteration count in Figure 1, one can clearly observe the desired performance of the proposed methods.
Table 2. Numerical results of the preconditioned methods (PTAAOR, PTAOR, PTSOR, PTGS, PTJb) for the three cases of Example 4.1 (IT, CPU, RES). [Most entries of this table were lost during extraction; among the surviving values, 135 and 151 iterations are reported in Case 1.]
Table 3. Numerical results of the unpreconditioned methods for Example 4.1.
Case | | TAAOR | TAOR | TSOR | TGS | TJb |
1 | IT | 133 | 307 | 380 | 338 | 364 |
 | CPU | 0.7945 | 1.0568 | 1.5005 | 1.3059 | 1.4757 |
 | RES | 8.6739e−12 | 9.3398e−12 | 9.8938e−12 | 9.9219e−12 | 9.9961e−12 |
2 | IT | 23 | 39 | 51 | 50 | 56 |
 | CPU | 0.2982 | 0.2325 | 0.2984 | 0.3897 | 0.5306 |
 | RES | 3.5812e−12 | 9.1947e−12 | 9.3484e−12 | 9.4678e−12 | 9.7318e−12 |
3 | IT | 3 | 5 | 7 | 6 | 6 |
 | CPU | 0.0432 | 0.0490 | 0.0538 | 0.1521 | 0.1652 |
 | RES | 6.1840e−13 | 1.5752e−13 | 2.5318e−12 | 7.1052e−12 | 2.3677e−12 |
Example 4.2. Consider the following higher-order Markov chain model:
$$x=Bx^{m-1},\quad \|x\|_1=1, \tag{4.2}$$
where $B=(b_{i_1,i_2,\cdots,i_m})$ is an order-$m$ tensor representing an $(m-1)$th-order Markov chain, called an order-$m$, dimension-$n$ transition probability tensor, i.e., $b_{i_1,i_2,\cdots,i_m}\ge 0$ and $\sum_{i_1=1}^{n}b_{i_1,i_2,\cdots,i_m}=1$, and $x$ is a stochastic vector with $x_i\ge 0$ and $\sum_{i=1}^{n}x_i=1$ [16].
Observe that (4.2) can be transformed into the system
$$(I_m-\beta B)x^{m-1}=x^{[m-1]}-\beta x,\quad \text{s.t. } \|x\|_1=1. \tag{4.3}$$
Next, we set the tensor $A:=I_m-\beta B$ and $b:=x^{[m-1]}-\beta x$, and then use the splitting iteration approaches to solve the system (4.3), where
$$B(:,:,1)=\begin{pmatrix}0.5800&0.2432&0.1429\\0&0.4109&0.0701\\0.4190&0.3459&0.7870\end{pmatrix},\quad B(:,:,2)=\begin{pmatrix}0.4708&0.1330&0.0327\\0.1341&0.5450&0.2042\\0.3951&0.3220&0.7631\end{pmatrix},$$
$$B(:,:,3)=\begin{pmatrix}0.4381&0.1003&0\\0.0229&0.4338&0.0930\\0.5390&0.4659&0.9070\end{pmatrix},$$
and $I_m$ is an identity tensor of order $m$ and dimension $n$. In this example, the run is terminated when the current iterate satisfies $\mathrm{RES}=\|Ax^{m-1}-b\|_2<10^{-6}$ or when the number of iterations exceeds the prescribed maximum $k_{\max}=100$.
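The transformation from (4.2) to (4.3) only uses $I_mx^{m-1}=x^{[m-1]}$ and $x=Bx^{m-1}$: for any stochastic $x$, the residual of (4.3) equals $\beta(x-Bx^{m-1})$, so $x$ solves (4.3) exactly when it is a fixed point of (4.2). A NumPy check (our sketch; the random transition tensor and the value $\beta=0.9$ are made up, since the text does not fix $\beta$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 3, 0.9
# random order-3 transition probability tensor: nonnegative, sum over i1 is 1
B = rng.random((n, n, n))
B /= B.sum(axis=0, keepdims=True)

I3 = np.zeros((n, n, n))             # identity tensor I_m, m = 3
for i in range(n):
    I3[i, i, i] = 1.0

x = rng.random(n)
x /= x.sum()                         # an arbitrary stochastic vector
Ax = np.einsum('ijk,j,k->i', I3 - beta * B, x, x)
residual = Ax - (x**2 - beta * x)    # residual of (4.3) at x
Bx = np.einsum('ijk,j,k->i', B, x, x)
```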
In this example, all the numerical results are reported in Table 4 and Figure 3. Judging from the elapsed CPU time and the iteration counts, PTAOR consistently outperforms all the other approaches. Although the optimal parameters are hard to determine, in practical applications they can be chosen tentatively according to initial experimental results. The preconditioned scheme improves all of these methods to some extent, but the efficiency still needs further improvement in some practical applications, and it depends closely on the construction of the preconditioner. Hence, the study of optimal parameters and preconditioners for PTAOR will be pursued in future work. PTAAOR can be considered a novel and efficient approach; however, the selection of the splitting tensors $E_{P1}$ and $E_{P2}$ should be studied and discussed more fully in later work.
Table 4. Numerical results for Example 4.2 (Case 1: preconditioned methods; Case 2: unpreconditioned methods).
Case | | PTAOR | PTAAOR | PTSOR | PTGS | PTJb |
1 | IT | 17 | 38 | 37 | 37 | 39 |
 | CPU | 0.0743 | 0.2126 | 0.1484 | 0.2037 | 0.2665 |
 | RES | 7.1715e−06 | 7.6163e−06 | 9.7565e−06 | 9.4313e−06 | 7.7173e−06 |
Case | | TAOR | TAAOR | TSOR | TGS | TJb |
2 | IT | 17 | 45 | 37 | 38 | 39 |
 | CPU | 0.1016 | 0.2981 | 0.1845 | 0.2616 | 0.3213 |
 | RES | 7.1715e−06 | 7.6163e−06 | 9.7565e−06 | 9.4313e−06 | 7.7173e−06 |
In this paper, a tensor alternating splitting iteration scheme is proposed for solving multi-linear systems, and tensor accelerated overrelaxation and tensor symmetric accelerated overrelaxation splitting iteration strategies are introduced for this class of systems; these can be regarded as generalizations of AOR and SAOR for linear systems. Meanwhile, some efficient preconditioning techniques are provided to improve the efficiency of solving multi-linear systems. The proposed approaches have been shown to be superior to the classical SOR, GS, and Jacobi methods under normal conditions, as fully validated in the numerical experiments. Finally, we have pointed out directions for future research.
The project is supported by the Fujian Natural Science Foundation (Grant No. 2019J01879), the New Century Training Plan of Fujian Province University (Grant No. Minjiao 2017[54]), and JT180586.
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
[1] H. He, L. Chen, L. Qi, et al., A globally and quadratically convergent algorithm for solving multilinear systems with M-tensors, J. Sci. Comput., 76 (2018), 1718-1741. doi: 10.1007/s10915-018-0689-7
[2] C. Lv, C. Ma, A Levenberg-Marquardt method for solving semi-symmetric tensor equations, J. Comput. Appl. Math., 332 (2018), 13-25. doi: 10.1016/j.cam.2017.10.005
[3] X. Wang, M. Che, Y. Wei, Neural networks based approach solving multi-linear systems with M-tensors, Neurocomputing, 351 (2019), 33-42. doi: 10.1016/j.neucom.2019.03.025
[4] Z. Xie, X. Jin, Y. Wei, Tensor methods for solving symmetric M-tensor systems, J. Sci. Comput., 74 (2018), 412-425. doi: 10.1007/s10915-017-0444-5
[5] Z. Xie, X. Jin, Y. Wei, A fast algorithm for solving circulant tensor systems, Linear Multilinear Algebra, 65 (2017), 1894-1904.
[6] T. G. Kolda, Multilinear operators for higher-order decompositions, Technical Report SAND2006-2081, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, 2006.
[7] D. Liu, W. Li, S. W. Vong, Relaxation methods for solving the tensor equation arising from the higher-order Markov chains, Numer. Linear Algebra Appl., 26 (2019), e2260.
[8] Y. Song, L. Qi, Spectral properties of positively homogeneous operators induced by higher order tensors, SIAM J. Matrix Anal. Appl., 34 (2013), 1302-1324.
[9] Y. Song, L. Qi, Properties of some classes of structured tensors, J. Optim. Theory Appl., 165 (2015), 854-873.
[10] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437-452.
[11] W. Ding, Y. Wei, Solving multilinear systems with M-tensors, J. Sci. Comput., 68 (2016), 689-715. doi: 10.1007/s10915-015-0156-7
[12] W. Ding, Y. Wei, Generalized tensor eigenvalue problems, SIAM J. Matrix Anal. Appl., 36 (2015), 1073-1099. doi: 10.1137/140975656
[13] L. Han, A homotopy method for solving multilinear systems with M-tensors, Appl. Math. Lett., 69 (2017), 49-54. doi: 10.1016/j.aml.2017.01.019
[14] D. Liu, W. Li, S. W. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75-94. doi: 10.1016/j.cam.2017.08.009
[15] W. Li, D. Liu, S. W. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105-121. doi: 10.1016/j.apnum.2018.07.009
[16] W. Li, M. K. Ng, On the limiting probability distribution of a transition probability tensor, Linear Multilinear Algebra, 62 (2014), 362-385. doi: 10.1080/03081087.2013.777436
[17] A. Cichocki, R. Zdunek, A. H. Phan, et al., Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, John Wiley & Sons, 2009.
[18] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), 455-500. doi: 10.1137/07070111X
[19] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421-427.
[20] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302-1324.
[21] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., 439 (2013), 3264-3278.
[22] L. Qi, Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, SIAM, 2017.
[23] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., 31 (2010), 2517-2530.
[24] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.
[25] T. Kohno, H. Kotakemori, H. Niki, et al., Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl., 267 (1997), 113-123.