Research article

Novel accelerated methods of tensor splitting iteration for solving multi-systems

  • Received: 04 November 2019 Accepted: 04 March 2020 Published: 17 March 2020
  • MSC : 65H10, 65K05, 49M15

  • The tensor splitting iteration method is a popular technique for solving multi-linear systems. In this paper, we present an efficient alternating splitting iteration method and further generalize the accelerated overrelaxation (AOR) and symmetric accelerated overrelaxation (SAOR) methods from linear systems to multi-linear systems. A preconditioned (alternating) tensor splitting method is then also applied to multi-linear systems. Numerical experiments illustrate the efficiency of the proposed methods.

    Citation: Yajun Xie, Minhua Yin, Changfeng Ma. Novel accelerated methods of tensor splitting iteration for solving multi-systems[J]. AIMS Mathematics, 2020, 5(3): 2801-2812. doi: 10.3934/math.2020180



    In this paper, we will discuss the following multi-linear system

$\mathcal{A}x^{m-1}=b,$  (1.1)

where $\mathcal{A}=(a_{i_1\cdots i_m})\in\mathbb{R}^{[m,n]}$ is an order-$m$ dimension-$n$ tensor and $b\in\mathbb{R}^n$ is an $n$-dimensional vector.

Solving various classes of equations is an essential problem in pure and applied mathematics. Fast and efficient solution methods for multi-linear systems [1,2,3,4,5] are becoming increasingly important in science and engineering because of their wide applications, especially for the data analysis needs of the big-data era (see [6,7,8,9,10]). It is usually difficult to obtain the exact solution by direct methods even for small-scale general linear systems, which has greatly stimulated the development of various iterative strategies. Many works in the literature have investigated fast solvers for the multi-linear system (1.1). Ding and Wei [11,12] proposed some classical iterations, such as the Jacobi and Gauss-Seidel methods, as well as a Newton method obtained by translating (1.1) into an optimization problem. In general, the computational cost of the Newton method is expensive. Han [13] investigated a homotopy method based on the Euler-Newton prediction-correction technique for solving multi-linear systems with nonsymmetric M-tensors, which shows better convergence performance than the Newton method. The tensor splitting method and its convergence results were studied by Liu, Li et al. [14]. Further comparison results for splitting iterations for solving multi-linear systems were investigated in [15,16]; however, we find that accelerated versions can be introduced and may further improve that work. Motivated by [15], we propose a tensor alternating splitting iteration scheme for solving multi-linear systems. In addition, the accelerated overrelaxation (AOR) method, the symmetric accelerated overrelaxation (SAOR) method, and their preconditioned versions are generalized to solve the multi-linear system (1.1).

The remainder of this paper is organized as follows. In Section 2, some basic notation and useful definitions are briefly reviewed. In Section 3, we propose a tensor alternating splitting iteration scheme for solving multi-linear systems; meanwhile, a preconditioner is introduced to accelerate the new method. In Section 4, the classical AOR and SAOR approaches are generalized to multi-linear systems, and numerical tests are provided to illustrate the advantages of the presented iteration methods. Finally, concluding remarks are given in Section 5.

Let $A\in\mathbb{R}^{[2,n]}$ and $\mathcal{B}\in\mathbb{R}^{[k,n]}$. The matrix-tensor product $\mathcal{C}=A\mathcal{B}\in\mathbb{R}^{[k,n]}$ is defined by

$c_{j i_2\cdots i_k}=\sum_{j_2=1}^{n} a_{j j_2}\, b_{j_2 i_2\cdots i_k}.$  (2.1)

The above formula can be regarded as

$C_{(1)}=(A\mathcal{B})_{(1)}=A B_{(1)},$  (2.2)

where $C_{(1)}$ and $B_{(1)}$ are the matrices obtained by flattening $\mathcal{C}$ and $\mathcal{B}$ along the first index. For more details, see [17,18].
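As a quick illustration of (2.1) and (2.2), the following Python/NumPy sketch (ours, not from the paper) computes the matrix-tensor product of an $n\times n$ matrix with an order-3 tensor both directly and via the mode-1 unfolding; the function names are ours.

```python
import numpy as np

# Mode-1 matrix-tensor product for an order-3 tensor B in R^[3,n]:
# (A B)_{j i2 i3} = sum_{j2} a_{j j2} b_{j2 i2 i3}, as in (2.1).
def mat_tensor_product(A, B):
    return np.einsum('ij,jkl->ikl', A, B)

# Equivalent route via the mode-1 unfolding, as in (2.2): C_(1) = A @ B_(1).
def mat_tensor_product_unfold(A, B):
    n = B.shape[0]
    B1 = B.reshape(n, -1)                 # flatten B along the first index
    return (A @ B1).reshape(B.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((4, 4)); B = rng.random((4, 4, 4))
    assert np.allclose(mat_tensor_product(A, B), mat_tensor_product_unfold(A, B))
```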

Definition 2.1. ([19]). Let $\mathcal{A}=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is the $n\times n$ matrix with entries

$M(\mathcal{A})_{ij}=a_{ij\cdots j},\quad i,j=1,2,\ldots,n.$  (2.3)

Definition 2.2. ([15]). Let $\mathcal{A}=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A}=M(\mathcal{A})\mathcal{I}_m$, we call $M(\mathcal{A})^{-1}$ the order-2 left-inverse of the tensor $\mathcal{A}$, and $\mathcal{A}$ is said to be left-nonsingular, where $\mathcal{I}_m$ is the identity tensor whose diagonal entries are all 1.

Definition 2.3. ([15]). Let $\mathcal{A},\mathcal{E},\mathcal{F}\in\mathbb{R}^{[m,n]}$. The decomposition $\mathcal{A}=\mathcal{E}-\mathcal{F}$ is called a splitting of the tensor $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular; a regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\geq 0$ and $\mathcal{F}\geq 0$ (here $\geq$ is taken elementwise); a weak regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\mathcal{F}\geq 0$; and a convergent splitting if the spectral radius of $M(\mathcal{E})^{-1}\mathcal{F}$ is less than 1, i.e., $\rho(M(\mathcal{E})^{-1}\mathcal{F})<1$.

Definition 2.4. ([20]). Let $\mathcal{A}=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. A pair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$ is called an eigenvalue-eigenvector pair of the tensor $\mathcal{A}$ if it satisfies

$\mathcal{A}x^{m-1}=\lambda x^{[m-1]},$  (2.4)

where $x^{[m-1]}=(x_1^{m-1},x_2^{m-1},\ldots,x_n^{m-1})^T$. The pair $(\lambda,x)$ is called an H-eigenpair if both $\lambda$ and $x$ are real.

Definition 2.5. Let $\rho(\mathcal{A})=\max\{|\lambda| : \lambda\in\sigma(\mathcal{A})\}$ be the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.

Definition 2.6. ([21]). Let $\mathcal{A}=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$. $\mathcal{A}$ is called a Z-tensor if its off-diagonal entries are non-positive. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta\geq\rho(\mathcal{B})$ such that

$\mathcal{A}=\eta\,\mathcal{I}_m-\mathcal{B}.$

If $\eta>\rho(\mathcal{B})$, then $\mathcal{A}$ is called a strong M-tensor.
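To make Definition 2.6 concrete, the sketch below (ours, in Python/NumPy rather than the authors' MATLAB code) builds a strong M-tensor in the same way as Case 3 of Example 4.1, using the fact that for a nonnegative tensor $\rho(\mathcal{B})$ does not exceed its largest row sum $(\mathcal{B}e^{m-1})_i$; `apply_tensor` and `strong_M_tensor` are illustrative names.

```python
import numpy as np

def apply_tensor(A, x):
    """Compute A x^{m-1} for an order-3 tensor A: (A x^2)_i = sum_{j,k} a_{ijk} x_j x_k."""
    return np.einsum('ijk,j,k->i', A, x, x)

# Construct A = s*I_3 - B with nonnegative B; choosing
# s = (1+delta)*max_i (B e^2)_i with delta > 0 guarantees s > rho(B),
# mirroring Case 3 of Example 4.1.
def strong_M_tensor(n, delta=8.0, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.random((n, n, n))                       # nonnegative tensor
    e = np.ones(n)
    s = (1.0 + delta) * apply_tensor(B, e).max()    # s > rho(B)
    I = np.zeros((n, n, n))
    for i in range(n):
        I[i, i, i] = 1.0                            # identity tensor I_3
    return s * I - B, B, s
```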

Consider two tensor splittings $\mathcal{A}=\mathcal{E}_1-\mathcal{F}_1=\mathcal{E}_2-\mathcal{F}_2$. First of all, we briefly describe the alternating direction iterative method for solving the multi-linear system $\mathcal{A}x^{m-1}=b$.

By $\mathcal{A}=\mathcal{E}_1-\mathcal{F}_1$, the above multi-linear system can clearly be written as

$\mathcal{E}_1 x^{m-1}=\mathcal{F}_1 x^{m-1}+b,$  (3.1)

i.e.,

$\mathcal{I}_m x^{m-1}=M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x^{m-1}+M(\mathcal{E}_1)^{-1}b,$  (3.2)

where we have used the order-2 left-nonsingularity of the tensor $\mathcal{E}_1$, and $\mathcal{I}_m$ is an identity tensor of appropriate order. This leads to Algorithm 3.1.

    Algorithm 3.1. (Preconditioned tensor splitting iterative method (PTSI) [15])

Step 1. Input a vector $b$, a preconditioner $P$, and a strong M-tensor $\mathcal{A}$ with a (weak) regular splitting $\mathcal{A}_P:=P\mathcal{A}=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}$. Given a precision $\varepsilon>0$ and an initial vector $x_0$, set $k:=1$.

Step 2. If $\|\mathcal{A}_P x_k^{m-1}-Pb\|_2<\varepsilon$, stop; otherwise, go to Step 3.

Step 3. Compute

$x_{k+1}=\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}x_k^{m-1}+M(\mathcal{E}_{P_1})^{-1}Pb\right)^{[\frac{1}{m-1}]}.$

Step 4. Set $k:=k+1$ and return to Step 2.
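The following Python/NumPy sketch (ours) illustrates the iteration of Algorithm 3.1 for an order-3 tensor, taking $P=I$ for simplicity; `apply_tensor`, `tensor_splitting_iteration`, and the argument `ME` (the majorization matrix $M(\mathcal{E})$ of the chosen splitting) are illustrative names. The componentwise $(m-1)$-th root assumes the iterates stay nonnegative, as the theory for strong M-tensors with regular splittings suggests.

```python
import numpy as np

def apply_tensor(A, x):
    """A x^{m-1} for an order-3 tensor A."""
    return np.einsum('ijk,j,k->i', A, x, x)

# Tensor splitting iteration with E = M(E) I_m and F = E - A, so that
# F x^{m-1} = M(E) x^[m-1] - A x^{m-1}.
def tensor_splitting_iteration(A, b, ME, x0, tol=1e-11, kmax=500):
    x = x0.copy()
    p = A.ndim - 1                          # p = m - 1 (here 2)
    ME_inv = np.linalg.inv(ME)
    for k in range(kmax):
        if np.linalg.norm(apply_tensor(A, x) - b) < tol:
            break
        Fx = ME @ (x ** p) - apply_tensor(A, x)      # F x^{m-1}
        y = ME_inv @ Fx + ME_inv @ b
        x = y ** (1.0 / p)                  # componentwise (m-1)-th root
    return x, k

# For example, the Gauss-Seidel-type choice of Table 1 takes M(E) = D - L,
# with the paper's sign convention M(A_P) = D - L - U.
```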

Based on this, we introduce a two-step tensor alternating splitting iteration method with the iterative scheme

$$\begin{cases} x_{k+\frac12}=\left(M(\mathcal{E}_1)^{-1}\mathcal{F}_1 x_k^{m-1}+M(\mathcal{E}_1)^{-1}b\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(\mathcal{E}_2)^{-1}\mathcal{F}_2 x_{k+\frac12}^{m-1}+M(\mathcal{E}_2)^{-1}b\right)^{[\frac{1}{m-1}]}. \end{cases}$$  (3.3)

We further consider the preconditioned multi-linear system $P\mathcal{A}x^{m-1}=Pb$. It follows from the splittings $\mathcal{A}_P:=P\mathcal{A}=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$ that

$$\begin{cases} x_{k+\frac12}=\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1} x_k^{m-1}+M(\mathcal{E}_{P_1})^{-1}Pb\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2} x_{k+\frac12}^{m-1}+M(\mathcal{E}_{P_2})^{-1}Pb\right)^{[\frac{1}{m-1}]}. \end{cases}$$  (3.4)

Set $\mathcal{G}:=M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2}$. Since $\mathcal{I}_m x^{m-1}=x^{[m-1]}$, where $\mathcal{I}_m$ is an identity tensor of appropriate order, we have

$$\begin{aligned} \mathcal{G}x_{k+\frac12}^{m-1} &= M(\mathcal{G})\mathcal{I}_m x_{k+\frac12}^{m-1} = M(\mathcal{G})x_{k+\frac12}^{[m-1]}\\ &= M(\mathcal{G})\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}x_k^{m-1}+M(\mathcal{E}_{P_1})^{-1}Pb\right)\\ &= M(\mathcal{G})M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}x_k^{m-1}+M(\mathcal{G})M(\mathcal{E}_{P_1})^{-1}Pb. \end{aligned}$$  (3.5)

    Hence,

$x_{k+1}=\left[M(\mathcal{G})M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}x_k^{m-1}+M(\mathcal{G})M(\mathcal{E}_{P_1})^{-1}Pb+M(\mathcal{E}_{P_2})^{-1}Pb\right]^{[\frac{1}{m-1}]}.$  (3.6)

    The above analysis can be described concretely in Algorithm 3.2. Now, let

$\mathcal{T}(\mathcal{E}_{P_1},\mathcal{E}_{P_2}):=M(\mathcal{G})M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}.$  (3.7)

Next, we show that the spectral radius of the preconditioned iteration tensor satisfies $\rho(\mathcal{T}(\mathcal{E}_{P_1},\mathcal{E}_{P_2}))<1$, i.e., we prove the convergence of Algorithm 3.2.

    Algorithm 3.2. (Preconditioned tensor alternating splitting iterative method (PTASI))

Step 1. Input a vector $b$, a preconditioner $P$, and a strong M-tensor $\mathcal{A}$ with (weak) regular splittings $\mathcal{A}_P:=P\mathcal{A}=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$. Given a precision $\varepsilon>0$ and an initial vector $x_0$, set $k:=1$.

Step 2. If $\|\mathcal{A}_P x_k^{m-1}-Pb\|_2<\varepsilon$, stop; otherwise, go to Step 3.

Step 3. Compute

$$\begin{cases} x_{k+\frac12}=\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1} x_k^{m-1}+M(\mathcal{E}_{P_1})^{-1}Pb\right)^{[\frac{1}{m-1}]},\\[4pt] x_{k+1}=\left(M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2} x_{k+\frac12}^{m-1}+M(\mathcal{E}_{P_2})^{-1}Pb\right)^{[\frac{1}{m-1}]}. \end{cases}$$

Step 4. Set $k:=k+1$ and return to Step 2.
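A corresponding sketch (ours, again with $P=I$ and an order-3 tensor) of the two-step alternating scheme (3.3) is given below; `ME1` and `ME2` stand for the majorization matrices of the two splitting tensors, for instance the PTAAOR pair listed later in Table 1, and all names are illustrative.

```python
import numpy as np

def apply_tensor(A, x):
    return np.einsum('ijk,j,k->i', A, x, x)

# Two-step alternating splitting iteration, using
# F_i x^{m-1} = M(E_i) x^[m-1] - A x^{m-1} for splittings E_i = M(E_i) I_m.
def alternating_splitting_iteration(A, b, ME1, ME2, x0, tol=1e-11, kmax=500):
    x = x0.copy()
    p = A.ndim - 1                                   # p = m - 1
    inv1, inv2 = np.linalg.inv(ME1), np.linalg.inv(ME2)
    for k in range(kmax):
        if np.linalg.norm(apply_tensor(A, x) - b) < tol:
            break
        # half step with splitting E1 - F1
        y = inv1 @ (ME1 @ x**p - apply_tensor(A, x)) + inv1 @ b
        x_half = y ** (1.0 / p)
        # full step with splitting E2 - F2
        z = inv2 @ (ME2 @ x_half**p - apply_tensor(A, x_half)) + inv2 @ b
        x = z ** (1.0 / p)
    return x, k
```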

Lemma 3.1. ([11]). Let $\mathcal{A}_P=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$, and let $\mathcal{A}_P=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$ be a weak regular splitting and a regular splitting, respectively. If $\mathcal{F}_{P_2}\leq\mathcal{F}_{P_1}$, $\mathcal{F}_{P_2}\geq 0$, and $\rho(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1})<1$, then there exists a positive Perron vector $x\in\mathbb{R}^n$ such that

$M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2}x^{m-1}\leq\rho_k x^{[m-1]},$  (3.8)

where $\rho_k=\rho\!\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)$, $k$ is a positive integer, and $\mathcal{S}\in\mathbb{R}^{[m,n]}$ is a positive tensor.

Proof. Let $\mathcal{S}\in\mathbb{R}^{[m,n]}$ be the tensor whose entries are all equal to 1. There exists a positive integer $N$ such that $\rho(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1})\leq\rho\!\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)<1$ for $k>N$. Clearly, $M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}$ is positive and hence irreducible. By the strong Perron-Frobenius theorem (see [22,23]), $M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ such that

$\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)x^{m-1}=\rho_k x^{[m-1]}$  (3.9)

for $k>N$, where $\rho_k=\rho\!\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)$.

Hence, this gives rise to

$M(\mathcal{E}_{P_1})\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1}=\mathcal{F}_{P_1}x^{m-1}.$  (3.10)

By $\mathcal{A}_P=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$, one gets $M(\mathcal{A}_P)=M(\mathcal{E}_{P_1})-M(\mathcal{F}_{P_1})=M(\mathcal{E}_{P_2})-M(\mathcal{F}_{P_2})$.

So we obtain

$$\begin{aligned} M(\mathcal{A}_P)\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1} &= \mathcal{F}_{P_1}x^{m-1}-M(\mathcal{F}_{P_1})\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1}\\ &= (1-\rho_k)M(\mathcal{F}_{P_1})\mathcal{I}_m x^{m-1}+\frac{1}{k}M(\mathcal{F}_{P_1})\mathcal{S}x^{m-1}+\left(\mathcal{F}_{P_1}-M(\mathcal{F}_{P_1})\mathcal{I}_m\right)x^{m-1}. \end{aligned}$$  (3.11)

    Further, it follows from (3.11) that

$$\left(M(\mathcal{E}_{P_2})-M(\mathcal{F}_{P_2})\right)\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1}\geq(1-\rho_k)M(\mathcal{F}_{P_2})\mathcal{I}_m x^{m-1}+\frac{1}{k}M(\mathcal{F}_{P_2})\mathcal{S}x^{m-1}+\left(\mathcal{F}_{P_2}-M(\mathcal{F}_{P_2})\mathcal{I}_m\right)x^{m-1},$$  (3.12)

where one should note that, by the condition $\mathcal{F}_{P_1}\geq\mathcal{F}_{P_2}$ and Definition 2.1, we have $M(\mathcal{F}_{P_1})\geq M(\mathcal{F}_{P_2})$, $\mathcal{F}_{P_1}-M(\mathcal{F}_{P_1})\mathcal{I}_m\geq\mathcal{F}_{P_2}-M(\mathcal{F}_{P_2})\mathcal{I}_m$, and $\rho_k<1$.

    By some simple computations, we have

$M(\mathcal{E}_{P_2})\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1}\geq\mathcal{F}_{P_2}x^{m-1}.$  (3.13)

Observe that $M(\mathcal{E}_{P_2})^{-1}\geq 0$ and $\mathcal{F}_{P_2}\geq 0$ because $\mathcal{A}_P=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$ is a regular splitting. Multiplying (3.13) by $M(\mathcal{E}_{P_2})^{-1}$ gives

$M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2}x^{m-1}\leq\left(\rho_k\mathcal{I}_m-\frac{1}{k}\mathcal{S}\right)x^{m-1}\leq\rho_k x^{[m-1]},$  (3.14)

which is exactly (3.8). This completes the proof.

Theorem 3.1. Let $\mathcal{A}_P=(a_{i_1 i_2\cdots i_m})\in\mathbb{R}^{[m,n]}$, and let $\mathcal{A}_P=\mathcal{E}_{P_1}-\mathcal{F}_{P_1}=\mathcal{E}_{P_2}-\mathcal{F}_{P_2}$ be a weak regular splitting and a regular splitting, respectively. Assume that $\mathcal{F}_{P_2}\leq\mathcal{F}_{P_1}$, $\mathcal{F}_{P_2}\geq 0$, $\rho(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1})<1$, and that $\mathcal{G}:=M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2}$ is order-2 left-nonsingular, i.e., $\mathcal{G}=M(\mathcal{G})\mathcal{I}_m$. Then $\rho(\mathcal{T}(\mathcal{E}_{P_1},\mathcal{E}_{P_2}))<1$, where $\mathcal{T}(\mathcal{E}_{P_1},\mathcal{E}_{P_2})$ is defined by (3.7).

Proof. First, similarly to the previous discussion, by the strong Perron-Frobenius theorem there exists $N>0$ such that, for $k>N$, $M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}$ has a positive Perron vector $x$ with

$\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)x^{m-1}=\rho_k x^{[m-1]}$  (3.15)

for $k>N$, where $\rho_k=\rho\!\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)$. That is to say,

$$\begin{aligned} M(\mathcal{G})\left(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1}+\frac{1}{k}\mathcal{S}\right)x^{m-1} &= \rho_k M(\mathcal{G})x^{[m-1]} = \rho_k M(\mathcal{G})\mathcal{I}_m x^{m-1} = \rho_k\mathcal{G}x^{m-1}\\ &= \rho_k M(\mathcal{E}_{P_2})^{-1}\mathcal{F}_{P_2}x^{m-1} \leq \rho_k\cdot\rho_k x^{[m-1]} = (\rho_k)^2 x^{[m-1]}, \end{aligned}$$  (3.16)

where the inequality follows from Lemma 3.1. Letting $k\to\infty$, we have $\rho_k\to\rho(M(\mathcal{E}_{P_1})^{-1}\mathcal{F}_{P_1})<1$, which gives the result. This completes the proof.

In this section, several numerical examples are presented to validate the effectiveness of the proposed preconditioned tensor AOR ('PTAOR') and preconditioned tensor alternating splitting AOR ('PTAAOR') two-step splitting methods for solving multi-linear systems (see Algorithms 3.1 and 3.2). We compare them with the preconditioned tensor Jacobi method ('PTJb'), the preconditioned tensor Gauss-Seidel method ('PTGS'), the preconditioned tensor SOR method ('PTSOR'), and their unpreconditioned versions, in terms of the number of iteration steps ('IT'), the elapsed CPU time in seconds ('CPU'), and the residual error ('RES').

Now, consider the following preconditioned tensor splitting of (1.1):

$\mathcal{A}_P:=P\mathcal{A}=\mathcal{D}-\mathcal{L}-\mathcal{U}.$  (4.1)

The splittings used below are listed in Table 1, where $\mathcal{D}=D\mathcal{I}_m$, $\mathcal{L}=L\mathcal{I}_m$, $\mathcal{U}=U\mathcal{I}_m$, and $D$, $L$, $U$ are the diagonal part and the strictly lower and strictly upper triangular parts of $M(P\mathcal{A})$, respectively. Clearly, the splittings described in Table 1 satisfy the weak regular splitting condition, i.e., $M(\mathcal{E})^{-1}\mathcal{F}\geq 0$.

Table 1.  The corresponding splittings $\mathcal{E}_{P_1}$ and $\mathcal{E}_{P_2}$.

    Splitting    $\mathcal{E}_{P_1}$                                   $\mathcal{E}_{P_2}$
    PTJb         $\mathcal{D}$                                         -
    PTGS         $\mathcal{D}-\mathcal{L}$                             -
    PTSOR        $\frac{1}{\omega}(\mathcal{D}-\omega\mathcal{L})$     -
    PTAOR        $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$          -
    PTAAOR       $\frac{1}{\omega}(\mathcal{D}-r\mathcal{L})$          $\frac{1}{\omega}(\mathcal{D}-r\mathcal{U})$
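For concreteness, the majorization matrices $M(\mathcal{E}_{P_1})$ (and $M(\mathcal{E}_{P_2})$ for PTAAOR) corresponding to the rows of Table 1 could be assembled as in the following sketch; `splitting_matrix` is our illustrative helper, and the sign convention follows $M(P\mathcal{A})=D-L-U$.

```python
import numpy as np

def splitting_matrix(MA, method="PTAOR", w=0.99, r=2.3):
    """Majorization matrix M(E) of the splittings in Table 1,
    given MA = M(P A) = D - L - U."""
    D = np.diag(np.diag(MA))
    L = -np.tril(MA, k=-1)          # strictly lower part of -M(PA)
    U = -np.triu(MA, k=1)           # strictly upper part of -M(PA)
    if method == "PTJb":
        return D
    if method == "PTGS":
        return D - L
    if method == "PTSOR":
        return (D - w * L) / w
    if method == "PTAOR":
        return (D - r * L) / w
    if method == "PTAAOR":          # returns the pair (M(E_P1), M(E_P2))
        return (D - r * L) / w, (D - r * U) / w
    raise ValueError(method)
```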


A preconditioner of the form $P_\alpha=I+S_\alpha$ (a variant of those in [24,25]) is considered for solving $P\mathcal{A}x^{m-1}=Pb$, where

$$S_\alpha=\begin{pmatrix} 0 & -\alpha_1 a_{122} & 0 & \cdots & 0\\ -\alpha_1 a_{211} & 0 & -\alpha_2 a_{233} & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & 0 & -\alpha_{n-1}a_{n-1,nn}\\ 0 & \cdots & 0 & -\alpha_{n-1}a_{n,n-1,n-1} & 0 \end{pmatrix}$$

and $\alpha_i=0.01$, $i=1,2,\ldots,n-1$.
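A sketch (ours) of assembling the preconditioner $P_\alpha=I+S_\alpha$ for an order-3 tensor is given below; `build_preconditioner` is an illustrative name, and the minus signs follow the reconstruction of $S_\alpha$ above.

```python
import numpy as np

def build_preconditioner(A, alpha=0.01):
    """P_alpha = I + S_alpha for an order-3 tensor A (sign convention as above)."""
    n = A.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        S[i, i + 1] = -alpha * A[i, i + 1, i + 1]     # superdiagonal: -alpha_i * a_{i,i+1,i+1}
        S[i + 1, i] = -alpha * A[i + 1, i, i]         # subdiagonal:   -alpha_i * a_{i+1,i,i}
    return np.eye(n) + S

# The preconditioned tensor is then the matrix-tensor product of Section 2:
# A_P = np.einsum('ij,jkl->ikl', build_preconditioner(A), A)
```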

All the numerical experiments were carried out in MATLAB R2011b on a PC equipped with an Intel(R) Core(TM) i7-2670QM CPU running at 2.20 GHz and 8 GB of RAM under the Windows 7 operating system.

Example 4.1. First, consider the multi-linear system (1.1) with a strong M-tensor $\mathcal{A}$ in the following cases [14,15].

Case 1. $\mathcal{A}=864.4895\,\mathcal{I}_m-\mathcal{B}$, where $\mathcal{B}\in\mathbb{R}^{[3,5]}$ is a nonnegative tensor with $b_{i_1,i_2,i_3}=|\tan(i_1+i_2+i_3)|$.

Case 2. $\mathcal{A}=n^2\mathcal{I}_3-\mathcal{B}$, where $\mathcal{B}\in\mathbb{R}^{[3,3]}$ is a nonnegative tensor with $b_{i_1,i_2,i_3}=|\sin(i_1+i_2+i_3)|$.

Case 3. $\mathcal{A}=s\mathcal{I}_3-\mathcal{B}$, where $\mathcal{B}$ is generated randomly by MATLAB, and $s=(1+\delta)\max_{i=1,2,\ldots,n}(\mathcal{B}e^{2})_i$ with $e=(1,1,\ldots,1)^T$. We take $\delta=8$ and $b=x_0=e$.

We consider these three cases for different tensors $\mathcal{A}$, $\mathcal{B}$ of various sizes. The parameters $r$ and $\omega$ are selected empirically for each particular example. In this example, the iteration is terminated when the current iterate satisfies $\mathrm{RES}=\|\mathcal{A}x^{m-1}-b\|_2<10^{-11}$ or when the number of iterations exceeds the prescribed maximum $k_{\max}=500$.

The numerical results are shown in Tables 2 and 3 and Figures 1 and 2. From these results, we can see that PTAAOR and PTAOR are efficient methods, and both of them outperform PTSOR, PTGS, and PTJb in all respects. PTAAOR appears to be an attractive approach, while PTAOR is more efficient than the PTSOR, PTGS, and PTJb methods in every aspect owing to the flexible selection of its parameters. PTSOR and PTGS clearly show nearly the same efficiency as $\omega$ approaches 1. Meanwhile, we find that PTGS is superior to PTSOR in terms of the number of iterations, which further supports the conclusions in [15], one of which states that the spectral radius of PTGS is smaller than that of PTSOR (Corollary 5.6 of [15]). Furthermore, from the residual curves versus the iteration count in Figure 1, the desired performance of the proposed methods can be clearly observed.

Table 2.  Preconditioned numerical results for Example 4.1 with $r=2.3$, $\omega=0.99$.

    Case          PTAAOR      PTAOR       PTSOR       PTGS        PTJb
    1     IT      73          123         137         135         151
          CPU     0.5987      0.6179      0.6572      0.6761      0.7375
          RES     9.1232e-12  9.2032e-12  9.3271e-12  9.9916e-12  9.4513e-12
    2     IT      23          39          51          49          56
          CPU     0.2614      0.1993      0.2169      0.3837      0.4039
          RES     3.2527e-12  7.2353e-12  7.7315e-12  8.6080e-12  8.7315e-12
    3     IT      2           5           6           5           5
          CPU     0.0384      0.0390      0.0438      0.1281      0.1556
          RES     6.1840e-13  7.9154e-13  1.8964e-12  7.1052e-13  1.6462e-12
Table 3.  Unpreconditioned numerical results for Example 4.1 with $r=2.3$, $\omega=0.99$.

    Case          TAAOR       TAOR        TSOR        TGS         TJb
    1     IT      133         307         380         338         364
          CPU     0.7945      1.0568      1.5005      1.3059      1.4757
          RES     8.6739e-12  9.3398e-12  9.8938e-12  9.9219e-12  9.9961e-12
    2     IT      23          39          51          50          56
          CPU     0.2982      0.2325      0.2984      0.3897      0.5306
          RES     3.5812e-12  9.1947e-12  9.3484e-12  9.4678e-12  9.7318e-12
    3     IT      3           5           7           6           6
          CPU     0.0432      0.0490      0.0538      0.1521      0.1652
          RES     6.1840e-13  1.5752e-13  2.5318e-12  7.1052e-12  2.3677e-12

Figure 1.  The relative residual for the PTAAOR, PTAOR, PTSOR, PTGS, and PTJb methods in the different cases of Example 4.1.
Figure 2.  The relative residual for the TAAOR, TAOR, TSOR, TGS, and TJb methods in the different cases of Example 4.1.

    Example 4.2. Consider the following higher-order Markov chain model:

$x=\mathcal{B}x^{m-1},\quad \|x\|_1=1,$  (4.2)

where $\mathcal{B}=(b_{i_1,i_2,\ldots,i_m})$ is an order-$m$ tensor representing an $(m-1)$th-order Markov chain, called an order-$m$ dimension-$n$ transition probability tensor, i.e., $b_{i_1,i_2,\ldots,i_m}\geq 0$ and $\sum_{i_1=1}^{n}b_{i_1,i_2,\ldots,i_m}=1$, and $x$ is a stochastic vector with $x_i\geq 0$ and $\sum_{i=1}^{n}x_i=1$ [16].

Observe that (4.2) can be transformed into the system

$(\mathcal{I}_m-\beta\mathcal{B})x^{m-1}=x^{[m-1]}-\beta x,\quad\text{s.t.}\ \|x\|_1=1.$  (4.3)

Next, we set $\mathcal{A}:=\mathcal{I}_m-\beta\mathcal{B}$ and $b:=x^{[m-1]}-\beta x$, and then apply the splitting iteration approaches to solve the system (4.3), where

$$B(:,:,1)=\begin{pmatrix}0.580 & 0.2432 & 0.1429\\ 0 & 0.4109 & 0.0701\\ 0.4190 & 0.3459 & 0.7870\end{pmatrix},\quad B(:,:,2)=\begin{pmatrix}0.4708 & 0.1330 & 0.0327\\ 0.1341 & 0.5450 & 0.2042\\ 0.3951 & 0.3220 & 0.7631\end{pmatrix},$$
$$B(:,:,3)=\begin{pmatrix}0.4381 & 0.1003 & 0\\ 0.0229 & 0.4338 & 0.0930\\ 0.5390 & 0.4659 & 0.9070\end{pmatrix},$$

and $\mathcal{I}_m$ is an identity tensor of order $m$ and dimension $n$. In this example, the iteration is terminated when the current iterate satisfies $\mathrm{RES}=\|\mathcal{A}x^{m-1}-b\|_2<10^{-6}$ or when the number of iterations exceeds the prescribed maximum $k_{\max}=100$.
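For reference, the tensor $\mathcal{A}=\mathcal{I}_m-\beta\mathcal{B}$ of this example can be assembled as in the sketch below (ours); since the value of $\beta$ is not restated here, it is left as an argument, and `markov_tensor` is an illustrative name.

```python
import numpy as np

def markov_tensor(beta):
    """Build A = I_m - beta*B for the 3 x 3 x 3 transition probability tensor above."""
    B = np.zeros((3, 3, 3))
    B[:, :, 0] = [[0.580, 0.2432, 0.1429], [0.0, 0.4109, 0.0701], [0.4190, 0.3459, 0.7870]]
    B[:, :, 1] = [[0.4708, 0.1330, 0.0327], [0.1341, 0.5450, 0.2042], [0.3951, 0.3220, 0.7631]]
    B[:, :, 2] = [[0.4381, 0.1003, 0.0], [0.0229, 0.4338, 0.0930], [0.5390, 0.4659, 0.9070]]
    assert np.allclose(B.sum(axis=0), 1.0, atol=2e-3)   # columns sum to ~1 (transition tensor)
    I = np.zeros((3, 3, 3))
    for i in range(3):
        I[i, i, i] = 1.0
    return I - beta * B, B
```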

In this example, all the numerical results are reported in Table 4 and Figure 3. In terms of CPU time and number of iterations, PTAOR consistently performs better than all the other approaches. Although the optimal parameters are hard to determine, in practical applications they can be chosen tentatively according to preliminary experiments. The preconditioned scheme improves all of these methods to some extent, but its efficiency still needs further improvement in some practical applications, since it depends closely on the construction of the preconditioner. Hence, the study of optimal parameters and preconditioners for PTAOR will be pursued in future work. PTAAOR can be regarded as a novel and efficient approach; however, the selection of the splitting tensors $\mathcal{E}_{P_1}$ and $\mathcal{E}_{P_2}$ should be studied and discussed thoroughly in later work.

Table 4.  Numerical results for Example 4.2 with $r=2.3$, $\omega=0.99$.

    Case          PTAOR       PTAAOR      PTSOR       PTGS        PTJb
    1     IT      17          38          37          37          39
          CPU     0.0743      0.2126      0.1484      0.2037      0.2665
          RES     7.1715e-06  7.6163e-06  9.7565e-06  9.4313e-06  7.7173e-06
    Case          TAOR        TAAOR       TSOR        TGS         TJb
    2     IT      17          45          37          38          39
          CPU     0.1016      0.2981      0.1845      0.2616      0.3213
          RES     7.1715e-06  7.6163e-06  9.7565e-06  9.4313e-06  7.7173e-06

Figure 3.  The relative residual for the five methods in the two different cases of Example 4.2.

In this paper, a tensor alternating splitting iteration scheme is proposed for solving multi-linear systems, and tensor accelerated overrelaxation and tensor symmetric accelerated overrelaxation splitting iteration strategies are introduced for this class of systems, which can be regarded as generalizations of the AOR and SAOR methods for linear systems. Meanwhile, some efficient preconditioning techniques are provided to improve the efficiency of solving multi-linear systems. The proposed approaches are shown to be superior to the classical SOR, Gauss-Seidel, and Jacobi methods under normal conditions, as fully validated in the numerical experiments. Directions for future research have also been pointed out.

This project is supported by the Fujian Natural Science Foundation (Grant No. 2019J01879), the New Century Training Plan of Fujian Province University (Grant No. Minjiao 2017[54]), and JT180586.

All authors contributed equally and significantly to writing this article. All authors read and approved the final manuscript.



[1] H. He, L. Chen, L. Qi, et al., A globally and quadratically convergent algorithm for solving multilinear systems with M-tensors, J. Sci. Comput., 76 (2018), 1718-1741. doi: 10.1007/s10915-018-0689-7
    [2] C. Lv, C. Ma, A Levenberg-Marquardt method for solving semi-symmetric tensor equations, J. Comput. Appl. Math., 332 (2018), 13-25. doi: 10.1016/j.cam.2017.10.005
    [3] X. Wang, M. Che, Y. Wei, Neural networks based approach solving multi-linear systems with M-tensors, Neurocomputing, 351 (2019), 33-42. doi: 10.1016/j.neucom.2019.03.025
    [4] Z. Xie, X. Jin, Y. Wei, Tensor methods for solving symmetric M-tensor systems, J. Sci. Comput., 74 (2018), 412-425. doi: 10.1007/s10915-017-0444-5
    [5] Z. Xie, X. Jin, Y. Wei, A fast algorithm for solving circulant tensor systems, Linear Multilinear Algebra, 65 (2017), 1894-1904.
    [6] T. G. Kolda, Multilinear operators for higher-order decompositions, Technical report SAND2006-2081, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, 2006.
    [7] D. Liu, W. Li, S. W. Vong, Relaxation methods for solving the tensor equation arising from the higher-order Markov chains, Numer. Linear Algebra Appl., 26 (2019): e2260.
    [8] Y. Song, L. Qi, Spectral properties of positively homogeneous operators induced by higher order tensors, SIAM J. Matrix Anal. Appl., 34 (2013), 1302-1324.
    [9] Y. Song, L. Qi, Properties of some classes of structured tensors, J. Optim. Theory Appl., 165 (2015), 854-873.
    [10] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437-452.
    [11] W. Ding, Y. Wei, Solving multilinear systems with M-tensors, J. Sci. Comput., 68 (2016), 689-715. doi: 10.1007/s10915-015-0156-7
    [12] W. Ding, Y. Wei, Generalized tensor eigenvalue problems, SIAM J. Matrix Anal. Appl., 36 (2015), 1073-1099. doi: 10.1137/140975656
    [13] L. Han, A homotopy method for solving multilinear systems with M-tensors, Appl. Math. Lett., 69 (2017), 49-54. doi: 10.1016/j.aml.2017.01.019
    [14] D. Liu, W. Li, S. W. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75-94. doi: 10.1016/j.cam.2017.08.009
[15] W. Li, D. Liu, S. W. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105-121. doi: 10.1016/j.apnum.2018.07.009
    [16] W. Li, M. K. Ng, On the limiting probability distribution of a transition probability tensor, Linear Multilinear Algebra, 62 (2014), 362-385. doi: 10.1080/03081087.2013.777436
[17] A. Cichocki, R. Zdunek, A.-H. Phan, et al., Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, John Wiley & Sons, Ltd, 2009.
    [18] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), 455-500. doi: 10.1137/07070111X
    [19] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421-427.
    [20] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302-1324.
    [21] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., 439 (2013), 3264-3278.
    [22] L. Qi, Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, SIAM, 2017.
    [23] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., 31 (2010), 2517-2530.
[24] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.
[25] T. Kohno, H. Kotakemori, H. Niki, et al., Improving the modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl., 267 (1997), 113-123.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
