Research article

A note on the preconditioned tensor splitting iterative method for solving strong M-tensor systems

  • Received: 19 October 2021 Revised: 26 January 2022 Accepted: 06 February 2022 Published: 09 February 2022
  • MSC : 15A69, 65F10

  • In this note, we present a new preconditioner for solving multi-linear systems, which arise in many practical problems and differ from traditional linear systems. Based on an analysis of the spectral radius, we give new comparison results between several preconditioned tensor splitting iterative methods. Numerical examples are given to demonstrate the efficiency of the proposed preconditioned method.

    Citation: Qingbing Liu, Aimin Xu, Shuhua Yin, Zhe Tu. A note on the preconditioned tensor splitting iterative method for solving strong M-tensor systems[J]. AIMS Mathematics, 2022, 7(4): 7177-7186. doi: 10.3934/math.2022400




    1. Introduction

    In this note, we consider the following multi-linear system

    $\mathcal{A}x^{m-1} = b, \quad (1.1)$

    where $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$ is an order $m$ dimension $n$ tensor, $x$ and $b$ are $n$-dimensional vectors, and the tensor-vector product $\mathcal{A}x^{m-1}$ is defined as [1]

    $(\mathcal{A}x^{m-1})_i = \sum_{i_2,\ldots,i_m=1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}, \quad i = 1,2,\ldots,n, \quad (1.2)$

    where $x_i$ denotes the $i$-th component of $x$. The multi-linear system (1.1) arises in a number of scientific computing and engineering applications [1,2,6], such as data analysis [10], the sparsest solutions to tensor complementarity problems [11], and so on.

    One of the applications of the multi-linear system (1.1) is the numerical solution of the following partial differential equation with Dirichlet boundary conditions:

    $\begin{cases} u(x)^{m-2}\,\Delta u(x) = -f(x) & \text{in } \Omega, \\ u(x) = g(x) & \text{on } \partial\Omega, \end{cases} \qquad (m = 3,4,\ldots) \quad (1.3)$

    where $\Delta = \sum_{k=1}^{d} \frac{\partial^2}{\partial x_k^2}$ and $\Omega = [0,1]^d$. When $f(\cdot)$ is a constant function, this PDE is a nonlinear Klein-Gordon equation (see [9,13,14]). Just as the authors studied in [13,14], $u\Delta u - \theta u$ can also be discretized into an $m$th-order nonsingular M-tensor

    $\mathcal{L}_h^{(d)} = \sum_{k=0}^{d-1} \underbrace{I \otimes \cdots \otimes I}_{k} \otimes\, \mathcal{L}_h \otimes \underbrace{I \otimes \cdots \otimes I}_{d-k-1},$

    which satisfies

    $(\mathcal{L}_h u^{m-1})_i = u_i^{m-2}\,(L_h u)_i$

    for $i = 1,2,\ldots,n$, where $\mathcal{L}_h$ is an $m$th-order M-tensor with entries

    $\begin{cases} (\mathcal{L}_h)_{1,1,\ldots,1} = (\mathcal{L}_h)_{n,n,\ldots,n} = \frac{1}{h^2}, & \\ (\mathcal{L}_h)_{i,i,\ldots,i} = \frac{2}{h^2}, & i = 2,3,\ldots,n-1, \\ (\mathcal{L}_h)_{i,i-1,i,\ldots,i} = (\mathcal{L}_h)_{i,i,i-1,\ldots,i} = \cdots = (\mathcal{L}_h)_{i,i,i,\ldots,i-1} = -\frac{1}{h^2(m-1)}, & i = 2,3,\ldots,n-1, \\ (\mathcal{L}_h)_{i,i+1,i,\ldots,i} = (\mathcal{L}_h)_{i,i,i+1,\ldots,i} = \cdots = (\mathcal{L}_h)_{i,i,i,\ldots,i+1} = -\frac{1}{h^2(m-1)}, & i = 2,3,\ldots,n-1. \end{cases}$

    The PDE in (1.3) is discretized into the M-equation $\mathcal{L}_h^{(d)} u^{m-1} = f$. This class of multi-linear equations can be regarded as a higher-order generalization of the ones discussed in [9,15,16].

    To solve the multi-linear system (1.1), Ng, Qi and Zhou [12] proposed an algorithm for $b = 0$. When $\mathcal{A}$ is a strong M-tensor, Ding and Wei [9] generalized the Jacobi method, the Gauss-Seidel method and the Newton algorithm. Liu et al. [3] discussed the tensor splitting $\mathcal{A} = \mathcal{E} - \mathcal{F}$ and proposed a general tensor splitting iterative method for solving the multi-linear system (1.1) as follows:

    $x_k = \left[M(\mathcal{E})^{-1}\mathcal{F}x_{k-1}^{m-1} + M(\mathcal{E})^{-1}b\right]^{\left[\frac{1}{m-1}\right]}, \quad k = 1,2,\ldots, \quad (1.4)$

    where $x_0$ is a given initial vector and the tensor $\mathcal{T} = M(\mathcal{E})^{-1}\mathcal{F}$ is called the iteration tensor of the splitting method (see [3,4]). They discussed the convergence rate of the tensor splitting iterative method and showed that the spectral radius $\rho(M(\mathcal{E})^{-1}\mathcal{F})$ can be regarded as an approximate convergence rate of the iteration (1.4).

    For matrix splitting iterative methods, it is well known that preconditioning is very important: it can improve the convergence rate of an iterative method when a suitable preconditioner is chosen [4,5,7]. In [3], Liu et al. explored preconditioning techniques for tensor splitting methods, discussed preconditioned Gauss-Seidel type and SOR type iterative methods, and proved that the Gauss-Seidel type method converges faster than the Jacobi method; that is to say, the spectral radius of the iteration tensor of the Gauss-Seidel method is not larger than that of the Jacobi method. Recently, Cui et al. [5] proposed a new preconditioner for solving M-tensor systems and gave some comparison theorems for the preconditioned Gauss-Seidel type method. The preconditioned iterative method transforms the original system into the preconditioned form

    $P\mathcal{A}x^{m-1} = Pb, \quad (1.5)$

    where the matrix $P$ is a nonsingular preconditioner. Let $P\mathcal{A} = \mathcal{E}_P - \mathcal{F}_P$ be a splitting of $P\mathcal{A}$. Then the corresponding preconditioned tensor splitting iterative method is given as follows:

    xk=[M(EP)1FPxm1k1+M(EP)1Pb][1m1],k=1,2,. (1.6)

    The rest of this paper is organized as follows. In Section 2 we introduce some definitions and related lemmas which will be used in the sequel. In Section 3, we first present a counter-example to an existing result and then propose a new special preconditioner; meanwhile, we give comparison theorems for the preconditioned Gauss-Seidel type iterative methods. The final section contains the concluding remarks.

    2. Preliminaries

    Let $0$, $O$ and $\mathcal{O}$ denote a zero vector, a zero matrix and a zero tensor, respectively. Let $\mathcal{A}$ and $\mathcal{B}$ be two tensors of the same size. The order $\mathcal{A} \geq \mathcal{B}$ ($\mathcal{A} > \mathcal{B}$) means that each entry of $\mathcal{A}$ is no less than (larger than) the corresponding entry of $\mathcal{B}$.

    For a positive integer $n$, let $\langle n \rangle = \{1,2,\ldots,n\}$. A tensor $\mathcal{A}$ consists of $n_1 \times \cdots \times n_m$ entries in the real field $\mathbb{R}$:

    $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}), \quad a_{i_1 i_2 \cdots i_m} \in \mathbb{R}, \quad i_j \in \langle n_j \rangle,\ j = 1,\ldots,m.$

    If $n_1 = \cdots = n_m = n$, $\mathcal{A}$ is called an order $m$ dimension $n$ tensor. We denote the set of all order $m$ dimension $n$ tensors by $\mathbb{R}^{[m,n]}$. When $m = 1$, $\mathbb{R}^{[1,n]}$ is simplified to $\mathbb{R}^n$, the set of all $n$-dimensional real vectors. When $m = 2$, $\mathbb{R}^{[2,n]}$ denotes the set of all $n \times n$ real matrices. These notions can be generalized to the complex field $\mathbb{C}$ in the same way. Let $\mathbb{R}_+$ be the set of nonnegative real numbers. If each entry of $\mathcal{A}$ is nonnegative, we call $\mathcal{A}$ a nonnegative tensor, and the set of all order $m$ dimension $n$ nonnegative tensors is denoted by $\mathbb{R}_+^{[m,n]}$.

    Let $A \in \mathbb{R}^{[2,n]}$ and $\mathcal{B} \in \mathbb{R}^{[k,n]}$. The matrix-tensor product $\mathcal{C} = A\mathcal{B} \in \mathbb{R}^{[k,n]}$ is defined by

    $c_{j i_2 \cdots i_k} = \sum_{j_2=1}^{n} a_{j j_2} b_{j_2 i_2 \cdots i_k}. \quad (2.1)$

    The formula (2.1) can be written as follows (see [3]):

    $\mathcal{C}_{(1)} = (A\mathcal{B})_{(1)} = A\mathcal{B}_{(1)}, \quad (2.2)$

    where $\mathcal{C}_{(1)}$ and $\mathcal{B}_{(1)}$ are the matrices obtained from $\mathcal{C}$ and $\mathcal{B}$ flattened along the first index (see [3]). For example, if $\mathcal{B} = (b_{ijk}) \in \mathbb{C}^{[3,n]}$, then

    $\mathcal{B}_{(1)} = \begin{pmatrix} b_{111} & \cdots & b_{1n1} & b_{112} & \cdots & b_{1n2} & \cdots & b_{11n} & \cdots & b_{1nn} \\ b_{211} & \cdots & b_{2n1} & b_{212} & \cdots & b_{2n2} & \cdots & b_{21n} & \cdots & b_{2nn} \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ b_{n11} & \cdots & b_{nn1} & b_{n12} & \cdots & b_{nn2} & \cdots & b_{n1n} & \cdots & b_{nnn} \end{pmatrix}.$

    Next we recall some definitions and lemmas for the completeness of our presentation.

    Definition 2.1. ([1]) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an eigenvalue-eigenvector pair (or simply eigenpair) of $\mathcal{A}$ if it satisfies the equation

    $\mathcal{A}x^{m-1} = \lambda x^{[m-1]}, \quad (2.3)$

    where $x^{[m-1]} = (x_1^{m-1}, \ldots, x_n^{m-1})^T$. We call $(\lambda, x)$ an H-eigenpair if both $\lambda$ and $x$ are real.

    Let $\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \in \sigma(\mathcal{A})\}$ be the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$. We use $\mathcal{I}_k = (\delta_{i_1 \cdots i_k})$ to denote the unit tensor, whose entries are given by

    $\delta_{i_1 \cdots i_k} = \begin{cases} 1, & i_1 = \cdots = i_k, \\ 0, & \text{otherwise}. \end{cases}$

    Definition 2.2. ([6]) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. $\mathcal{A}$ is called a Z-tensor if its off-diagonal entries are non-positive. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta \geq \rho(\mathcal{B})$ such that

    $\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}.$

    If $\eta > \rho(\mathcal{B})$, then $\mathcal{A}$ is called a strong M-tensor.

    Definition 2.3. ([2]) Let $\mathcal{A} \in \mathbb{C}^{[m,n]}$. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is the $n \times n$ matrix with entries

    $M(\mathcal{A})_{ij} = a_{ij\cdots j}, \quad i,j = 1,\ldots,n.$

    Lemma 2.1. ([4,8]) For $\mathcal{A} \in \mathbb{R}_+^{[m,n]}$, the following implications hold:

    $\mu x^{[m-1]} \leq (<)\ \mathcal{A}x^{m-1}$, for some $x \geq 0$, $x \neq 0$, implies $\mu \leq (<)\ \rho(\mathcal{A})$,

    and

    $\mathcal{A}x^{m-1} \leq \nu x^{[m-1]}$, for some $x > 0$, implies $\rho(\mathcal{A}) \leq \nu$.

    Lemma 2.2. ([6]) Let $\mathcal{A}$ be a Z-tensor. Then $\mathcal{A}$ is a strong M-tensor if and only if $\mathcal{A}$ is semi-positive; that is, there exists $x > 0$ with $\mathcal{A}x^{m-1} > 0$.

    3. Main results

    The preconditioner $P_\alpha$ was introduced in [4] as follows:

    $P_\alpha = I + S_\alpha,$

    where

    $S_\alpha = \begin{pmatrix} 0 & -\alpha_1 a_{12\cdots 2} & 0 & \cdots & 0 \\ 0 & 0 & -\alpha_2 a_{23\cdots 3} & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & -\alpha_{n-1} a_{n-1,n,\ldots,n} \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \quad (3.1)$

    and $I$ is the identity matrix, where $\alpha = (\alpha_i)$ and each $\alpha_i$, $i = 1,\ldots,n-1$, is a parameter.

    In [5], Cui et al. considered the preconditioner $P_{\max} = I + S_{\max}$, where $S_{\max}$ is given by

    $S_{\max} = (s_{i,k_i}), \qquad s_{i,k_i} = \begin{cases} -a_{ik_i\cdots k_i}, & i = 1,\ldots,n-1,\ k_i > i, \\ 0, & \text{otherwise}, \end{cases} \quad (3.2)$

    where

    $k_i = \min\{\, j \mid |a_{ij\cdots j}| = \max_{j > i} |a_{ij\cdots j}|,\ i < n \,\},$

    that is, $k_i$ is the smallest column index $j > i$ at which $|a_{ij\cdots j}|$ attains its maximum.

    Some results from [4,5] are collected below.

    Lemma 3.1. ([4]) Let $\mathcal{A}$ be a strong M-tensor. If $\alpha = (\alpha_i)$ with $\alpha_i \in [0,1]$, $i = 1,2,\ldots,n-1$, $\mathcal{A}_\alpha = P_\alpha\mathcal{A} = \mathcal{E}_\alpha - \mathcal{F}_\alpha$, and $(\rho_\alpha, x_\alpha)$ is the Perron eigenpair of $\mathcal{T}_\alpha = M(\mathcal{E}_\alpha)^{-1}\mathcal{F}_\alpha$, then $\mathcal{A}x_\alpha^{m-1} \geq 0$.

    Lemma 3.2. ([4]) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ and $\mathcal{A} = \mathcal{I}_m - \mathcal{L} - \mathcal{F}$, $\mathcal{F} \geq \mathcal{O}$, where $\mathcal{L} = L\mathcal{I}_m$ and $L$ is the strictly lower triangular part of $M(\mathcal{A})$. If $\mathcal{A}$ is a strong M-tensor, then for all $\alpha_i \in [0,1]$, $i = 1,\ldots,n-1$, $(I + S_\alpha)\mathcal{A}$ is a strong M-tensor.

    Lemma 3.3. ([5]) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. If $\mathcal{A}$ is a strong M-tensor and $\mathcal{A} = \mathcal{I}_m - \mathcal{L} - \mathcal{F}$, where $\mathcal{L} = L\mathcal{I}_m$ and $L$ is the strictly lower triangular part of $M(\mathcal{A})$, then $\mathcal{A}_{\max} = (I + S_{\max})\mathcal{A}$ is a strong M-tensor.

    Lemma 3.4. ([5], Lemma 3) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ be a strong M-tensor. For $\mathcal{A}_{\max} = \mathcal{E}_{\max} - \mathcal{F}_{\max}$, the inequality

    $M(\mathcal{E}_{\max})^{-1} \geq M(\widehat{\mathcal{E}}_\alpha)^{-1} \quad (3.3)$

    holds if

    $0 < \dfrac{\alpha_i\, a_{i,i+1,\ldots,i+1}\, a_{i+1,j,\ldots,j}}{a_{i k_i \cdots k_i}\, a_{k_i j \cdots j}} < 1, \quad k_i > i,\ j \neq i.$

    Theorem 3.1. ([5], Theorem 4) Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ be a strong M-tensor. For

    $\widehat{\mathcal{A}}_\alpha = (I + S_\alpha)\mathcal{A} = \widehat{\mathcal{E}}_\alpha - \widehat{\mathcal{F}}_\alpha$

    and

    $\mathcal{A}_{\max} = (I + S_{\max})\mathcal{A} = \mathcal{E}_{\max} - \mathcal{F}_{\max},$

    under the conditions made in Lemma 3.4, if there exists a positive vector $x$ such that $0 \leq \widehat{\mathcal{A}}_\alpha x^{m-1} \leq \mathcal{A}_{\max}x^{m-1}$, then the following inequality holds:

    $\rho(\mathcal{T}_{\max}) \leq \rho(\widehat{\mathcal{T}}_\alpha), \quad (3.4)$

    where $\mathcal{T}_{\max} = M(\mathcal{E}_{\max})^{-1}\mathcal{F}_{\max}$ and $\widehat{\mathcal{T}}_\alpha = M(\widehat{\mathcal{E}}_\alpha)^{-1}\widehat{\mathcal{F}}_\alpha$.

    We first give a counter-example to Theorem 3.1.

    Example 3.1. We consider the tensor $\mathcal{A} \in \mathbb{R}^{[3,4]}$ given by its frontal slices

    $\mathcal{A}(:,:,1) = \begin{pmatrix} 1 & -0.04 & -0.04 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \end{pmatrix}, \quad \mathcal{A}(:,:,2) = \begin{pmatrix} -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & 1 & -0.04 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \end{pmatrix},$
    $\mathcal{A}(:,:,3) = \begin{pmatrix} -0.04 & -0.04 & -0.1 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & -0.04 & 1 & -0.04 \\ -0.04 & -0.04 & -0.04 & -0.04 \end{pmatrix}, \quad \mathcal{A}(:,:,4) = \begin{pmatrix} -0.04 & -0.04 & -0.04 & -0.04 \\ -0.04 & -1 & -0.04 & -0.1 \\ -0.04 & -0.04 & -0.04 & -0.1 \\ -0.04 & -0.04 & -0.04 & 1 \end{pmatrix}.$

    It is easy to show that $\mathcal{A}$ is a strong M-tensor, and

    $M(\widehat{\mathcal{E}}_1)^{-1} = \begin{pmatrix} 1.0016 & 0 & 0 & 0 \\ 0.0417 & 1.0016 & 0 & 0 \\ 0.0461 & 0.0442 & 1.0040 & 0 \\ 0.0436 & 0.0418 & 0.0402 & 1 \end{pmatrix}, \quad M(\mathcal{E}_{\max})^{-1} = \begin{pmatrix} 1.0040 & 0 & 0 & 0 \\ 0.0444 & 1.0040 & 0 & 0 \\ 0.0463 & 0.0444 & 1.0040 & 0 \\ 0.0438 & 0.0419 & 0.0402 & 1 \end{pmatrix}.$

    From the above computation we see that $M(\mathcal{E}_{\max})^{-1} - M(\widehat{\mathcal{E}}_1)^{-1} \geq 0$; that is, the tensor $\mathcal{A}$ satisfies the condition of Lemma 3.4. But by computation, we have $\rho(\mathcal{T}_{\max}) = 0.5896 > 0.5877 = \rho(\widehat{\mathcal{T}}_1)$. This contradicts the conclusion of Theorem 3.1 in [5].

    We next propose a new preconditioner $\widetilde{P} = I + \widetilde{S}$, where

    $\widetilde{S} = \begin{pmatrix} 0 & -a_{12\cdots 2} & 0 & \cdots & -a_{1k_1\cdots k_1} & \cdots & 0 \\ 0 & 0 & -a_{23\cdots 3} & \cdots & -a_{2k_2\cdots k_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & & & \vdots \\ 0 & 0 & \cdots & & 0 & & -a_{n-1,n,\ldots,n} \\ 0 & 0 & \cdots & & 0 & & 0 \end{pmatrix},$

    where

    $k_i = \min\{\, j \mid |a_{ij\cdots j}| = \max_{j \geq i+1} |a_{ij\cdots j}|,\ i < n \,\};$

    that is, row $i$ of $\widetilde{S}$ contains the entry $-a_{i,i+1,\ldots,i+1}$ in column $i+1$ together with the entry $-a_{ik_i\cdots k_i}$ in column $k_i$.

    Without loss of generality, we assume that each diagonal entry of the tensor $\mathcal{A}$ equals 1. Let

    $\widetilde{\mathcal{A}} = \widetilde{P}\mathcal{A} = \widetilde{\mathcal{E}} - \widetilde{\mathcal{F}} = \widetilde{\mathcal{D}} - \widetilde{\mathcal{L}} - \widetilde{\mathcal{U}},$

    where $\widetilde{\mathcal{D}} = \widetilde{D}\mathcal{I}_m$, $\widetilde{\mathcal{L}} = \widetilde{L}\mathcal{I}_m$, and $\widetilde{D}$, $\widetilde{L}$ are the diagonal part and the strictly lower triangular part of $M(\widetilde{\mathcal{A}})$, respectively. If $a_{i,i+1,\ldots,i+1}a_{i+1,i,\ldots,i} + a_{ik_i\cdots k_i}a_{k_i i\cdots i} \neq 1$, then $M(\widetilde{\mathcal{D}} - \widetilde{\mathcal{L}})^{-1}$ exists, and the Gauss-Seidel type iteration tensor can be defined as $\widetilde{\mathcal{T}} = M(\widetilde{\mathcal{D}} - \widetilde{\mathcal{L}})^{-1}\widetilde{\mathcal{U}}$.

    Lemma 3.5. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ be a strong M-tensor. Then $\widetilde{\mathcal{A}} = \widetilde{P}\mathcal{A} = (I + \widetilde{S})\mathcal{A}$ is a strong M-tensor.

    Proof. We first show that $\widetilde{\mathcal{A}}$ is a Z-tensor. Since

    $\tilde{a}_{ii_2\cdots i_m} = \begin{cases} a_{ii_2\cdots i_m} - a_{i,i+1,\ldots,i+1}a_{i+1,i_2,\ldots,i_m} - a_{ik_i\cdots k_i}a_{k_i i_2\cdots i_m}, & i = 1,\ldots,n-2, \\ a_{n-1,i_2,\ldots,i_m} - a_{n-1,n,\ldots,n}a_{n,i_2,\ldots,i_m}, & i = n-1, \\ a_{n i_2\cdots i_m}, & i = n, \end{cases} \quad (3.5)$

    for $(i,i_2,\ldots,i_m) \neq (i,i,\ldots,i)$ we have $\tilde{a}_{ii_2\cdots i_m} \leq 0$; that is, $\widetilde{\mathcal{A}}$ is a Z-tensor.

    Since $\mathcal{A}$ is a strong M-tensor, by Lemma 2.2 there exists a positive vector $x$ such that $\mathcal{A}x^{m-1} > 0$. Thus $\widetilde{\mathcal{A}}x^{m-1} = (I + \widetilde{S})\mathcal{A}x^{m-1} > 0$; that is, there exists a positive vector $x$ such that $\widetilde{\mathcal{A}}x^{m-1} > 0$. Then, by Lemma 2.2 again, $\widetilde{\mathcal{A}}$ is a strong M-tensor.

    Lemma 3.6. Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ be a strong M-tensor, and let

    $\widehat{\mathcal{A}}_1 = (I + S_1)\mathcal{A} = \widehat{\mathcal{E}}_1 - \widehat{\mathcal{F}}_1,$
    $\mathcal{A}_{\max} = (I + S_{\max})\mathcal{A} = \mathcal{E}_{\max} - \mathcal{F}_{\max},$

    and

    $\widetilde{\mathcal{A}} = (I + \widetilde{S})\mathcal{A} = \widetilde{\mathcal{E}} - \widetilde{\mathcal{F}},$

    then

    $M(\widetilde{\mathcal{E}})^{-1} \geq M(\mathcal{E}_{\max})^{-1} \geq M(\widehat{\mathcal{E}}_1)^{-1}.$

    Proof. Let $\mathcal{A}_{\max} = (a^{\max}_{ii_2\cdots i_m})$ and $\widetilde{\mathcal{A}} = (\tilde{a}_{ii_2\cdots i_m})$. We have

    $a^{\max}_{ii_2\cdots i_m} = \begin{cases} a_{ii_2\cdots i_m} - a_{ik_i\cdots k_i}a_{k_i i_2\cdots i_m}, & i = 1,\ldots,n-1, \\ a_{n i_2\cdots i_m}, & i = n, \end{cases}$

    and $\tilde{a}_{ii_2\cdots i_m}$ is defined by (3.5). Since $\mathcal{A}$ is a strong M-tensor, from Lemma 3.3 and Lemma 3.5 we know that $\mathcal{A}_{\max}$ and $\widetilde{\mathcal{A}}$ are strong M-tensors. Thus $M(\mathcal{E}_{\max})$ and $M(\widetilde{\mathcal{E}})$ are nonsingular M-matrices. Since

    $a_{ii_2\cdots i_m} - a_{i,i+1,\ldots,i+1}a_{i+1,i_2,\ldots,i_m} - a_{ik_i\cdots k_i}a_{k_i i_2\cdots i_m} \leq a_{ii_2\cdots i_m} - a_{ik_i\cdots k_i}a_{k_i i_2\cdots i_m}$

    for $i = 1,\ldots,n-2$, and $a^{\max}_{ii_2\cdots i_m} = \tilde{a}_{ii_2\cdots i_m}$ for $i = n-1, n$, we get $M(\widetilde{\mathcal{E}})^{-1} \geq M(\mathcal{E}_{\max})^{-1}$. The latter inequality follows from Lemma 3.4.

    Theorem 3.2. Let $\mathcal{A}$ be a strong M-tensor. For $\widehat{\mathcal{A}}_1 = \widehat{\mathcal{E}}_1 - \widehat{\mathcal{F}}_1$ and $\widetilde{\mathcal{A}} = \widetilde{\mathcal{E}} - \widetilde{\mathcal{F}}$, let $\widehat{\mathcal{T}}_1 = M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{F}}_1$ and $\widetilde{\mathcal{T}} = M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{F}}$. Then

    $\rho(\widetilde{\mathcal{T}}) \leq \rho(\widehat{\mathcal{T}}_1). \quad (3.6)$

    Proof. From Lemma 3.1, there exists a positive Perron vector $x$ of $\widehat{\mathcal{T}}_1$ such that $\mathcal{A}x^{m-1} \geq 0$. Thus $\widehat{\mathcal{A}}_1x^{m-1} = (I + S_1)\mathcal{A}x^{m-1} \geq 0$. Notice that

    $\widetilde{\mathcal{A}}x^{m-1} - \widehat{\mathcal{A}}_1x^{m-1} = (\widetilde{\mathcal{A}} - \widehat{\mathcal{A}}_1)x^{m-1} = (\widetilde{S} - S_1)\mathcal{A}x^{m-1} \geq 0.$

    This means that $\widetilde{\mathcal{A}}x^{m-1} \geq \widehat{\mathcal{A}}_1x^{m-1} \geq 0$. From Lemma 3.6, we have $M(\widetilde{\mathcal{E}})^{-1} \geq M(\widehat{\mathcal{E}}_1)^{-1} \geq O$. Hence, we have

    $M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{A}}x^{m-1} - M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{A}}_1x^{m-1} = M(\widetilde{\mathcal{E}})^{-1}(\widetilde{\mathcal{A}} - \widehat{\mathcal{A}}_1)x^{m-1} + \left(M(\widetilde{\mathcal{E}})^{-1} - M(\widehat{\mathcal{E}}_1)^{-1}\right)\widehat{\mathcal{A}}_1x^{m-1} \geq 0.$

    Since

    $M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{A}} = \mathcal{I}_m - M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{F}}, \qquad M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{A}}_1 = \mathcal{I}_m - M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{F}}_1,$

    we obtain

    $M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{F}}x^{m-1} \leq M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{F}}_1x^{m-1} = \widehat{\mathcal{T}}_1x^{m-1} = \rho(M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{F}}_1)\,x^{[m-1]}.$

    By Lemma 2.1, we have $\rho(M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{F}}) \leq \rho(M(\widehat{\mathcal{E}}_1)^{-1}\widehat{\mathcal{F}}_1)$, i.e., $\rho(\widetilde{\mathcal{T}}) \leq \rho(\widehat{\mathcal{T}}_1)$.

    Remark 3.1. We consider Example 3.1 again; it is easy to show that

    $M(\widetilde{\mathcal{E}})^{-1} = \begin{pmatrix} 1.0056 & 0 & 0 & 0 \\ 0.0461 & 1.0056 & 0 & 0 \\ 0.0465 & 0.0444 & 1.0040 & 0 \\ 0.0439 & 0.0420 & 0.0402 & 1 \end{pmatrix}.$

    That is, $M(\widetilde{\mathcal{E}})^{-1} \geq M(\widehat{\mathcal{E}}_1)^{-1}$. By computation, we have $\rho(\widehat{\mathcal{T}}_1) = 0.5877$, and its corresponding Perron vector is

    $x = (x_1, x_2, x_3, x_4)^T = (1, 0.9124, 0.9496, 0.9557)^T.$

    Hence, in accordance with Theorem 3.2, we have $\rho(\widetilde{\mathcal{T}}) = 0.5816 < 0.5877 = \rho(\widehat{\mathcal{T}}_1)$.

    Theorem 3.3. Let $\mathcal{A}$ be a strong M-tensor. For $\mathcal{A}_{\max} = \mathcal{E}_{\max} - \mathcal{F}_{\max}$ and $\widetilde{\mathcal{A}} = \widetilde{\mathcal{E}} - \widetilde{\mathcal{F}}$, let

    $\mathcal{T}_{\max} = M(\mathcal{E}_{\max})^{-1}\mathcal{F}_{\max}$

    and

    $\widetilde{\mathcal{T}} = M(\widetilde{\mathcal{E}})^{-1}\widetilde{\mathcal{F}}.$

    If there exists a positive Perron vector $x$ of $\mathcal{T}_{\max}$ such that $\mathcal{A}x^{m-1} \geq 0$, then we have

    $\rho(\widetilde{\mathcal{T}}) \leq \rho(\mathcal{T}_{\max}). \quad (3.7)$

    Proof. The proof is similar to that of Theorem 3.2 and is omitted here.

    Example 3.2. We consider the tensor $\mathcal{A} \in \mathbb{R}^{[3,3]}$ from [5], where

    $\mathcal{A}(:,:,1) = \begin{pmatrix} 1 & -0.12 & -0.13 \\ -0.12 & -0.03 & -0.06 \\ -0.13 & -0.02 & -0.1 \end{pmatrix}, \quad \mathcal{A}(:,:,2) = \begin{pmatrix} -0.04 & -0.02 & -0.03 \\ -0.01 & 1 & -0.02 \\ -0.03 & -0.04 & -0.02 \end{pmatrix}, \quad \mathcal{A}(:,:,3) = \begin{pmatrix} -0.03 & -0.02 & -0.04 \\ -0.02 & -0.06 & -0.03 \\ -0.02 & -0.11 & 1 \end{pmatrix}.$

    We can show that $\mathcal{A}$ is a strong M-tensor. Let $\alpha = (\alpha_1, \alpha_2) = (1,1)$; we get:

    $M(\widehat{\mathcal{E}}_1)^{-1} = \begin{pmatrix} 1.0024 & 0 & 0 \\ 0.0125 & 1.0012 & 0 \\ 0.0136 & 0.04 & 1 \end{pmatrix}, \quad M(\mathcal{E}_{\max})^{-1} = \begin{pmatrix} 1.0052 & 0 & 0 \\ 0.01247 & 1.0012 & 0 \\ 0.01357 & 0.04 & 1 \end{pmatrix}, \quad M(\widetilde{\mathcal{E}})^{-1} = \begin{pmatrix} 1.0077 & 0 & 0 \\ 0.0125 & 1.0012 & 0 \\ 0.0136 & 0.04 & 1 \end{pmatrix}.$

    From the above we can see that

    $M(\widetilde{\mathcal{E}})^{-1} - M(\mathcal{E}_{\max})^{-1} \geq O, \qquad M(\widetilde{\mathcal{E}})^{-1} - M(\widehat{\mathcal{E}}_1)^{-1} \geq O.$

    By computation, we have

    $\rho(\widetilde{\mathcal{T}}) = 0.3346 < 0.3386 = \rho(\mathcal{T}_{\max}) < 0.3451 = \rho(\widehat{\mathcal{T}}_1).$

    4. Conclusions

    In this paper, we present a new preconditioner $I + \widetilde{S}$ for solving multi-linear systems and give new comparison results between different preconditioned tensor splitting iterative methods. The comparison theorems show that the spectral radius of the Gauss-Seidel type iteration tensor induced by the proposed preconditioner is not larger than those induced by the preconditioners in [5]. Two numerical examples are presented to validate the effectiveness of the proposed preconditioner.

    Acknowledgments

    The authors gratefully thank the referee for the constructive comments and recommendations, which definitely helped to improve the readability and quality of the paper. This work was supported by the Exploration Project of Zhejiang Natural Science Foundation (No. LQ20G010004) and the Ningbo Natural Science Foundation (No. 2021J179).

    Conflict of interest

    The authors declare that there are no conflicts of interest regarding the publication of this paper.



    References

    [1] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007
    [2] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421–427.
    [3] D. Liu, W. Li, S. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75–94. https://doi.org/10.1016/j.cam.2017.08.009
    [4] W. Li, D. Liu, S. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105–121. https://doi.org/10.1016/j.apnum.2018.07.009
    [5] L. Cui, M. Li, Y. Song, Preconditioned tensor splitting iterations method for solving multi-linear systems, Appl. Math. Lett., 96 (2019), 89–94. https://doi.org/10.1016/j.aml.2019.04.019
    [6] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., 439 (2013), 3264–3278. https://doi.org/10.1016/j.laa.2013.08.038
    [7] Q. Liu, J. Huang, S. Zeng, Convergence analysis of the two preconditioned iterative methods for M-matrix linear systems, J. Comput. Appl. Math., 281 (2015), 49–57. https://doi.org/10.1016/j.cam.2014.11.034
    [8] K. C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci., 6 (2008), 507–520. https://dx.doi.org/10.4310/CMS.2008.v6.n2.a12
    [9] W. Ding, Y. Wei, Solving multi-linear systems with M-tensors, J. Sci. Comput., 68 (2016), 689–715. https://doi.org/10.1007/s10915-015-0156-7
    [10] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437–452. https://doi.org/10.1137/130915339
    [11] Z. Luo, L. Qi, N. Xiu, The sparsest solutions to Z-tensor complementarity problems, Optim. Lett., 11 (2017), 471–482. https://doi.org/10.1007/s11590-016-1013-9
    [12] M. Ng, L. Qi, G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. Appl., 31 (2009), 1090–1099. https://doi.org/10.1137/09074838X
    [13] Y. Matsuno, Exact solutions for the nonlinear Klein-Gordon and Liouville equations in four-dimensional Euclidean space, J. Math. Phys., 28 (1987), 2317–2322. https://doi.org/10.1063/1.527764
    [14] D. Zwillinger, Handbook of differential equations, 3rd ed., Boston: Academic Press, 1997.
    [15] D. Kressner, C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl., 31 (2010), 1688–1714. https://doi.org/10.1137/090756843
    [16] C. Tobler, Low-rank tensor methods for linear systems and eigenvalue problems, Ph.D. thesis, ETH Zürich, 2012.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
