In this note, we present a new preconditioner for solving multi-linear systems, which arise from many practical problems and differ from traditional linear systems. Based on an analysis of the spectral radius, we give new comparison results between some preconditioned tensor splitting iterative methods. Numerical examples are given to demonstrate the efficiency of the proposed preconditioned method.
Citation: Qingbing Liu, Aimin Xu, Shuhua Yin, Zhe Tu. A note on the preconditioned tensor splitting iterative method for solving strong M-tensor systems[J]. AIMS Mathematics, 2022, 7(4): 7177-7186. doi: 10.3934/math.2022400
In this note, we consider the following multi-linear system
$$Ax^{m-1}=b, \qquad (1.1)$$
where $A=(a_{i_1i_2\cdots i_m})$ is an order $m$ dimension $n$ tensor, $x$ and $b$ are $n$-dimensional vectors, and the tensor-vector product $Ax^{m-1}$ is defined as [1]
$$(Ax^{m-1})_i=\sum_{i_2,\cdots,i_m=1}^{n}a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m},\quad i=1,2,\cdots,n, \qquad (1.2)$$
where $x_i$ denotes the $i$-th component of $x$. The multi-linear system (1.1) arises in a number of scientific computing and engineering applications [1,2,6], such as data analysis [10], the sparsest solutions to tensor complementarity problems [11], and so on.
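For concreteness, the product (1.2) can be sketched in NumPy by contracting $x$ against every index of $A$ except the first (a minimal illustration; the function name is ours):

```python
import numpy as np

def tensor_vector_product(A, x):
    """Compute A x^{m-1} as in Eq. (1.2).

    A is an ndarray with m equal axes of length n; x is a length-n vector.
    Contracting x with the trailing axis (m-1) times leaves a length-n vector.
    """
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x  # matmul with a 1-D vector contracts the last axis
    return out
```

For a third-order diagonal tensor with unit diagonal, the product returns the componentwise square $x^{[2]}$, as (1.2) predicts.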
One of the applications of the multi-linear system (1.1) is the numerical solution of the partial differential equation with Dirichlet boundary condition
$$\begin{cases}u(x)^{m-2}\cdot\Delta u(x)=-f(x) & \text{in }\Omega,\\ u(x)=g(x) & \text{on }\partial\Omega,\end{cases}\quad (m=3,4,\cdots) \qquad (1.3)$$
where $\Delta=\sum_{k=1}^{d}\frac{\partial^{2}}{\partial x_{k}^{2}}$ and $\Omega=[0,1]^{d}$. When $f(\cdot)$ is a constant function, this PDE is a nonlinear Klein-Gordon equation (see [9,13,14]). As studied in [13,14], the operator $u\mapsto u^{m-2}\cdot\Delta u$ can be discretized into an $m$th-order nonsingular M-tensor
$$L_h^{(d)}=\sum_{k=0}^{d-1}\underbrace{I\otimes\cdots\otimes I}_{k}\otimes L_h\otimes\underbrace{I\otimes\cdots\otimes I}_{d-k-1},$$
which satisfies
$$(L_hu^{m-1})_i=u_i^{m-2}\cdot(L_hu)_i$$
for $i=1,2,\cdots,n$, where $L_h$ is an $m$th-order M-tensor with
$$\begin{cases}(L_h)_{1,1,\cdots,1}=(L_h)_{n,n,\cdots,n}=\dfrac{1}{h^{2}},\\[2pt](L_h)_{i,i,\cdots,i}=\dfrac{2}{h^{2}}, & i=2,3,\cdots,n-1,\\[2pt](L_h)_{i,i-1,i,\cdots,i}=(L_h)_{i,i,i-1,\cdots,i}=\cdots=(L_h)_{i,i,i,\cdots,i-1}=-\dfrac{1}{h^{2}(m-1)}, & i=2,3,\cdots,n-1,\\[2pt](L_h)_{i,i+1,i,\cdots,i}=(L_h)_{i,i,i+1,\cdots,i}=\cdots=(L_h)_{i,i,i,\cdots,i+1}=-\dfrac{1}{h^{2}(m-1)}, & i=2,3,\cdots,n-1.\end{cases}$$
The PDE in (1.3) is thus discretized into an M-equation $L_h^{(d)}u^{m-1}=f$. This class of multi-linear equations can be regarded as a higher-order generalization of the one discussed in [9,15,16].
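As a sketch, the entry table above can be assembled directly for the one-dimensional third-order case (the helper name and the mesh width $h=1/(n+1)$ are our assumptions):

```python
import numpy as np

def build_Lh(n, m=3):
    """Assemble the order-m tensor L_h from the entry table above (m = 3 case),
    with mesh width h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    L = np.zeros((n,) * m)
    L[(0,) * m] = L[(n - 1,) * m] = 1.0 / h**2
    c = -1.0 / (h**2 * (m - 1))
    for i in range(1, n - 1):
        L[(i,) * m] = 2.0 / h**2
        # the (m-1) symmetric placements of the index i-1 (resp. i+1)
        for pos in range(1, m):
            idx = [i] * m
            idx[pos] = i - 1
            L[tuple(idx)] = c
            idx[pos] = i + 1
            L[tuple(idx)] = c
    return L
```

Applying $L_h$ to the constant vector $u=\mathbf{1}$ gives zero at the interior nodes and $1/h^2$ at the two boundary rows, matching the entry table.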
To solve the multi-linear system (1.1), Ng, Qi and Zhou [12] proposed an algorithm for the case $b=0$. When $A$ is a strong M-tensor, Ding and Wei [9] generalized the Jacobi method, the Gauss-Seidel method and the Newton method. Liu et al. [3] discussed the tensor splitting $A=E-F$ and proposed a general tensor splitting iterative method for solving the multi-linear system (1.1) as follows:
$$x_k=\left[M(E)^{-1}Fx_{k-1}^{m-1}+M(E)^{-1}b\right]^{\left[\frac{1}{m-1}\right]},\quad k=1,2,\cdots, \qquad (1.4)$$
where $x_0$ is a given initial vector and the tensor $T=M(E)^{-1}F$ is called the iteration tensor of the splitting method (see [3,4]). They analyzed the convergence rate of the tensor splitting iterative method and showed that the spectral radius $\rho(M(E)^{-1}F)$ can be regarded as an approximate convergence rate of the iteration (1.4).
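A Jacobi-type instance of iteration (1.4), where $E$ is the diagonal part of $A$ so that $M(E)=\mathrm{diag}(a_{ii\cdots i})$, can be sketched as follows (function and variable names are ours, and no stopping test is included):

```python
import numpy as np

def jacobi_tensor_iteration(A, b, x0, iters=200):
    """Jacobi-type instance of iteration (1.4): E is the diagonal part of A,
    F = E - A, and M(E) is the diagonal matrix diag(a_{ii...i})."""
    n, m = A.shape[0], A.ndim
    d = np.array([A[(i,) * m] for i in range(n)])  # diagonal entries a_{ii...i}
    F = -A.copy()
    for i in range(n):
        F[(i,) * m] = 0.0  # F = E - A vanishes on the diagonal
    x = x0.copy()
    for _ in range(iters):
        Fx = F
        for _ in range(m - 1):
            Fx = Fx @ x  # contract trailing axes: F x^{m-1}
        x = ((Fx + b) / d) ** (1.0 / (m - 1))
    return x
```

For a strong M-tensor with positive $b$ and positive $x_0$, the iterates stay positive and approach the positive solution of (1.1).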
For matrix splitting iterative methods, it is well known that preconditioning is an important technique that can improve the convergence rate of an iterative method when a suitable preconditioner is chosen [4,5,7]. In [3], Liu et al. explored preconditioning techniques for tensor splitting methods, discussed the preconditioned Gauss-Seidel type and SOR type iterative methods, and proved that the Gauss-Seidel type method converges faster than the Jacobi method; that is, the spectral radius of the iteration tensor of the Gauss-Seidel method is not larger than that of the Jacobi method. Recently, Cui et al. [5] proposed a new preconditioner for solving M-tensor systems and gave some comparison theorems for the preconditioned Gauss-Seidel type method. The preconditioned iterative method transforms the original system into the preconditioned form
$$PAx^{m-1}=Pb, \qquad (1.5)$$
where the matrix $P$ is a nonsingular preconditioner. Let $PA=E_P-F_P$ be a splitting of $PA$. Then the corresponding preconditioned tensor splitting iterative method is given as follows:
$$x_k=\left[M(E_P)^{-1}F_Px_{k-1}^{m-1}+M(E_P)^{-1}Pb\right]^{\left[\frac{1}{m-1}\right]},\quad k=1,2,\cdots. \qquad (1.6)$$
The rest of this paper is organized as follows. In Section 2, we introduce some definitions and related lemmas which will be used in the sequel. In Section 3, we first present a counter-example to an existing result and then propose a new special preconditioner; we also give comparison theorems for the preconditioned Gauss-Seidel type iterative methods. The final section gives concluding remarks.
Let $0$, $O$ and $\mathcal{O}$ denote the zero vector, the zero matrix and the zero tensor, respectively. Let $A$ and $B$ be two tensors of the same size. The ordering $A\ge B$ ($A>B$) means that each entry of $A$ is no less than (greater than) the corresponding entry of $B$.
For a positive integer $n$, let $\langle n\rangle=\{1,2,\cdots,n\}$. A tensor $A$ consists of $n_1\times\cdots\times n_m$ entries in the real field $\mathbb{R}$:
$$A=(a_{i_1i_2\cdots i_m}),\quad a_{i_1i_2\cdots i_m}\in\mathbb{R},\ i_j\in\langle n_j\rangle,\ j=1,\cdots,m.$$
If $n_1=\cdots=n_m=n$, then $A$ is called an order $m$ dimension $n$ tensor. We denote the set of all order $m$ dimension $n$ tensors by $\mathbb{R}^{[m,n]}$. When $m=1$, $\mathbb{R}^{[1,n]}$ is simplified to $\mathbb{R}^n$, the set of all $n$-dimensional real vectors. When $m=2$, $\mathbb{R}^{[2,n]}$ denotes the set of all $n\times n$ real matrices. These notions generalize to the complex field $\mathbb{C}$ in the same way. Let $\mathbb{R}_+$ be the set of nonnegative real numbers. If each entry of $A$ is nonnegative, we call $A$ a nonnegative tensor, and the set of all order $m$ dimension $n$ nonnegative tensors is denoted by $\mathbb{R}_+^{[m,n]}$.
Let $A\in\mathbb{R}^{[2,n]}$ and $B\in\mathbb{R}^{[k,n]}$. The matrix-tensor product $C=AB\in\mathbb{R}^{[k,n]}$ is defined by
$$c_{ji_2\cdots i_k}=\sum_{j_2=1}^{n}a_{jj_2}b_{j_2i_2\cdots i_k}. \qquad (2.1)$$
The formula (2.1) can be written as follows (see [3]):
$$C_{(1)}=(AB)_{(1)}=AB_{(1)}, \qquad (2.2)$$
where $C_{(1)}$ and $B_{(1)}$ are the matrices obtained by flattening $C$ and $B$ along the first index (see [3]). For example, if $B=(b_{ijk})\in\mathbb{C}^{[3,n]}$, then
$$B_{(1)}=\begin{pmatrix}b_{111}&\cdots&b_{1n1}&b_{112}&\cdots&b_{1n2}&\cdots&b_{11n}&\cdots&b_{1nn}\\ b_{211}&\cdots&b_{2n1}&b_{212}&\cdots&b_{2n2}&\cdots&b_{21n}&\cdots&b_{2nn}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&&\vdots&\ddots&\vdots\\ b_{n11}&\cdots&b_{nn1}&b_{n12}&\cdots&b_{nn2}&\cdots&b_{n1n}&\cdots&b_{nnn}\end{pmatrix}.$$
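The identity (2.2) is easy to check numerically. NumPy's row-major `reshape` orders the columns of the unfolding differently from the display above, but $C_{(1)}=AB_{(1)}$ holds for any fixed ordering of the trailing indices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n, n))

# Matrix-tensor product C = AB via Eq. (2.1)
C = np.einsum('ij,jkl->ikl', A, B)

# Mode-1 unfolding: flatten all axes after the first
B1 = B.reshape(n, -1)
C1 = C.reshape(n, -1)
assert np.allclose(C1, A @ B1)  # Eq. (2.2)
```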
Next we recall some definitions and lemmas for the completeness of our presentation.
Definition 2.1. ([1]) Let $A\in\mathbb{R}^{[m,n]}$. A pair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$ is called an eigenvalue-eigenvector pair (or simply an eigenpair) of $A$ if it satisfies the equation
$$Ax^{m-1}=\lambda x^{[m-1]}, \qquad (2.3)$$
where $x^{[m-1]}=(x_1^{m-1},\cdots,x_n^{m-1})^T$. We call $(\lambda,x)$ an H-eigenpair if both $\lambda$ and $x$ are real.
Let $\rho(A)=\max\{|\lambda|:\lambda\in\sigma(A)\}$ be the spectral radius of $A$, where $\sigma(A)$ is the set of all eigenvalues of $A$. We use $I_k=(\delta_{i_1\cdots i_k})$ to denote the unit tensor, with entries
$$\delta_{i_1\cdots i_k}=\begin{cases}1,& i_1=\cdots=i_k,\\ 0,& \text{otherwise}.\end{cases}$$
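For a nonnegative tensor, the spectral radius can be approximated by a power-type iteration in the spirit of [12]. The sketch below (our own function name, no convergence safeguards, and assuming the iterates stay positive) repeats $x\leftarrow(Ax^{m-1})^{[1/(m-1)]}$ with normalization:

```python
import numpy as np

def nqz_spectral_radius(A, iters=500):
    """Power-type iteration for rho(A) of a nonnegative tensor A:
    repeat x <- (A x^{m-1})^{[1/(m-1)]}, normalized; the ratios
    (A x^{m-1})_i / x_i^{m-1} then approach rho(A)."""
    n, m = A.shape[0], A.ndim

    def Axm1(x):
        y = A
        for _ in range(m - 1):
            y = y @ x  # contract trailing axes
        return y

    x = np.ones(n)
    for _ in range(iters):
        y = Axm1(x) ** (1.0 / (m - 1))
        x = y / np.linalg.norm(y)
    ratios = Axm1(x) / x ** (m - 1)
    return float(ratios.max())
```

For the scaled unit tensor $2I_3$, for instance, every ratio equals 2, which is exactly $\rho(2I_3)$.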
Definition 2.2. ([6]) Let $A\in\mathbb{R}^{[m,n]}$. $A$ is called a Z-tensor if its off-diagonal entries are non-positive. $A$ is called an M-tensor if there exist a nonnegative tensor $B$ and a positive real number $\eta\ge\rho(B)$ such that
$$A=\eta I_m-B.$$
If $\eta>\rho(B)$, then $A$ is called a strong M-tensor.
Definition 2.3. ([2]) Let $A\in\mathbb{C}^{[m,n]}$. The majorization matrix $M(A)$ of $A$ is the $n\times n$ matrix with entries
$$M(A)_{ij}=a_{ij\cdots j},\quad i,j=1,\cdots,n.$$
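Definition 2.3 translates directly into code (a small helper of ours):

```python
import numpy as np

def majorization_matrix(A):
    """Return M(A) with M(A)_{ij} = a_{i j j ... j} (Definition 2.3)."""
    n, m = A.shape[0], A.ndim
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = A[(i,) + (j,) * (m - 1)]
    return M
```

For example, the majorization matrix of the third-order unit tensor is the identity matrix.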
Lemma 2.1. ([4,8]) For $A\in\mathbb{R}_+^{[m,n]}$, the following implications hold:
$$\mu x^{[m-1]}\le(<)\,Ax^{m-1},\ x\ge0,\ x\ne0,\ \text{implies}\ \mu\le(<)\,\rho(A),$$
and
$$Ax^{m-1}\le\nu x^{[m-1]},\ x>0,\ \text{implies}\ \rho(A)\le\nu.$$
Lemma 2.2. ([6]) Let $A$ be a Z-tensor. Then $A$ is a strong M-tensor if and only if $A$ is semi-positive; that is, there exists $x>0$ with $Ax^{m-1}>0$.
The preconditioner $P_\alpha$ was introduced in [4] as follows:
$$P_\alpha=I+S_\alpha,$$
where
$$S_\alpha=\begin{pmatrix}0&-\alpha_1a_{12\cdots2}&0&\cdots&0\\0&0&-\alpha_2a_{23\cdots3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&-\alpha_{n-1}a_{n-1,n,\cdots,n}\\0&0&0&\cdots&0\end{pmatrix} \qquad (3.1)$$
and $I$ is the identity matrix, $\alpha=(\alpha_i)$, where each $\alpha_i$, $i=1,\cdots,n-1$, is a parameter.
In [5], Cui et al. considered the preconditioner $P_{\max}=I+S_{\max}$, where $S_{\max}$ is given by
$$S_{\max}=(s^{m}_{i,k_i}),\quad s^{m}_{i,k_i}=\begin{cases}-a_{ik_i\cdots k_i},& i=1,\cdots,n-1,\ k_i>i,\\ 0,& \text{otherwise},\end{cases} \qquad (3.2)$$
where
$$k_i=\min\Big\{j\ \Big|\ |a_{ij\cdots j}|=\max_{j>i}|a_{ij\cdots j}|\Big\},\quad i<n,$$
i.e., $k_i$ is the smallest index $j>i$ at which $|a_{ij\cdots j}|$ attains its maximum.
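In code, $S_{\max}$ can be assembled from the entries $a_{ij\cdots j}$, i.e., from the rows of the majorization matrix (a sketch; the helper name is ours):

```python
import numpy as np

def build_S_max(A):
    """Assemble S_max of Eq. (3.2): row i < n gets -a_{i,k_i,...,k_i} in
    column k_i, the smallest index j > i maximizing |a_{ij...j}|."""
    n, m = A.shape[0], A.ndim
    S = np.zeros((n, n))
    for i in range(n - 1):
        vals = [abs(A[(i,) + (j,) * (m - 1)]) for j in range(i + 1, n)]
        ki = i + 1 + int(np.argmax(vals))  # argmax returns the first maximizer
        S[i, ki] = -A[(i,) + (ki,) * (m - 1)]
    return S
```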
Some results from [4,5] are collected below.
Lemma 3.1. ([4]) Let $A$ be a strong M-tensor. If $\alpha=(\alpha_i)$ with $\alpha_i\in[0,1]$, $i=1,2,\cdots,n-1$, $A_\alpha=P_\alpha A=E_\alpha-F_\alpha$, and $(\rho_\alpha,x_\alpha)$ is a Perron eigenpair of $T_\alpha=M(E_\alpha)^{-1}F_\alpha$, then $Ax_\alpha^{m-1}\ge0$.

Lemma 3.2. ([4]) Let $A\in\mathbb{R}^{[m,n]}$ with $A=I_m-L-F$, $F\ge\mathcal{O}$, where $L=LI_m$ and $-L$ is the strictly lower triangular part of $M(A)$. If $A$ is a strong M-tensor, then for all $\alpha_i\in[0,1]$, $i=1,\cdots,n-1$, $(I+S_\alpha)A$ is a strong M-tensor.

Lemma 3.3. ([5]) Let $A\in\mathbb{R}^{[m,n]}$. If $A$ is a strong M-tensor and $A=I_m-L-F$, where $L=LI_m$ and $-L$ is the strictly lower triangular part of $M(A)$, then $A_{\max}=(I+S_{\max})A$ is a strong M-tensor.
Lemma 3.4. ([5], Lemma 3) Let $A\in\mathbb{R}^{[m,n]}$ be a strong M-tensor. For $A_{\max}=E_{\max}-F_{\max}$, the following inequality holds if
$$0<\alpha_ia_{i,i+1,\cdots,i+1}a_{i+1,j,\cdots,j}\le a_{ik_i\cdots k_i}a_{k_ij\cdots j}<1,\quad k_i>i,\ j\le i:$$
$$M(E_{\max})^{-1}\ge M(\hat{E}_\alpha)^{-1}. \qquad (3.3)$$
Theorem 3.1. ([5], Theorem 4) Let $A\in\mathbb{R}^{[m,n]}$ be a strong M-tensor. For
$$\hat{A}_\alpha=(I+S_\alpha)A=\hat{E}_\alpha-\hat{F}_\alpha$$
and
$$A_{\max}=(I+S_{\max})A=E_{\max}-F_{\max},$$
under the conditions of Lemma 3.4, if there exists a positive vector $x$ such that $0\le\hat{A}_\alpha x^{m-1}\le A_{\max}x^{m-1}$, then
$$\rho(T_{\max})\le\rho(\hat{T}_\alpha), \qquad (3.4)$$
where $T_{\max}=M(E_{\max})^{-1}F_{\max}$ and $\hat{T}_\alpha=M(\hat{E}_\alpha)^{-1}\hat{F}_\alpha$.
We first give a counter-example to Theorem 3.1.
Example 3.1. Consider the tensor $A\in\mathbb{R}^{[3,4]}$ whose frontal slices are
$$A(:,:,1)=\begin{pmatrix}1&-0.04&-0.04&-0.04\\-0.04&-0.04&-0.04&-0.04\\-0.04&-0.04&-0.04&-0.04\\-0.04&-0.04&-0.04&-0.04\end{pmatrix},\quad A(:,:,2)=\begin{pmatrix}-0.04&-0.04&-0.04&-0.04\\-0.04&1&-0.04&-0.04\\-0.04&-0.04&-0.04&-0.04\\-0.04&-0.04&-0.04&-0.04\end{pmatrix},$$
$$A(:,:,3)=\begin{pmatrix}-0.04&-0.04&-0.1&-0.04\\-0.04&-0.04&-0.04&-0.04\\-0.04&-0.04&1&-0.04\\-0.04&-0.04&-0.04&-0.04\end{pmatrix},\quad A(:,:,4)=\begin{pmatrix}-0.04&-0.04&-0.04&-0.04\\-0.04&1&-0.04&-0.1\\-0.04&-0.04&-0.04&-0.1\\-0.04&-0.04&-0.04&1\end{pmatrix}.$$
It is easy to show that $A$ is a strong M-tensor, and
$$M(\hat{E}_1)^{-1}=\begin{pmatrix}1.0016&0&0&0\\0.0417&1.0016&0&0\\0.0461&0.0442&1.0040&0\\0.0436&0.0418&0.0402&1\end{pmatrix},\quad M(E_{\max})^{-1}=\begin{pmatrix}1.0040&0&0&0\\0.0444&1.0040&0&0\\0.0463&0.0444&1.0040&0\\0.0438&0.0419&0.0402&1\end{pmatrix}.$$
From the above computation we see that $M(E_{\max})^{-1}\ge M(\hat{E}_1)^{-1}\ge0$; that is, the tensor $A$ satisfies the condition of Lemma 3.4. But by computation, we have $\rho(T_{\max})=0.5896>0.5877=\rho(\hat{T}_1)$. This contradicts the conclusion of Theorem 3.1 in [5].
We next propose a new preconditioner $\tilde{P}=I+\tilde{S}$, where
$$\tilde{S}=\begin{pmatrix}0&-a_{12\cdots2}&0&\cdots&-a_{1k_1\cdots k_1}&\cdots&0\\0&0&-a_{23\cdots3}&\cdots&-a_{2k_2\cdots k_2}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&&&\vdots\\0&0&0&\cdots&\cdots&\cdots&-a_{n-1,n,\cdots,n}\\0&0&0&\cdots&\cdots&\cdots&0\end{pmatrix},$$
where
$$k_i=\min\Big\{j\ \Big|\ |a_{ij\cdots j}|=\max_{j\ge i+1}|a_{ij\cdots j}|\Big\},\quad i<n.$$
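The matrix $\tilde{S}$ combines the superdiagonal entry used by $S_1$ (i.e., $S_\alpha$ with all $\alpha_i=1$) with the largest-entry column used by $S_{\max}$. A sketch under our own reading of the display (helper name ours; when $k_i=i+1$ the two entries coincide and we keep a single one):

```python
import numpy as np

def build_S_tilde(A):
    """Assemble the proposed S~: row i < n-1 carries -a_{i,i+1,...,i+1} in
    column i+1 and -a_{i,k_i,...,k_i} in column k_i; row n-1 keeps only the
    superdiagonal entry."""
    n, m = A.shape[0], A.ndim
    S = np.zeros((n, n))
    for i in range(n - 1):
        S[i, i + 1] = -A[(i,) + (i + 1,) * (m - 1)]
        if i < n - 2:
            vals = [abs(A[(i,) + (j,) * (m - 1)]) for j in range(i + 1, n)]
            ki = i + 1 + int(np.argmax(vals))
            S[i, ki] = -A[(i,) + (ki,) * (m - 1)]
    return S
```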
Without loss of generality, we assume that each diagonal entry of the tensor $A$ is 1. Let
$$\tilde{A}=\tilde{P}A=\tilde{E}-\tilde{F}=\tilde{D}-\tilde{L}-\tilde{U},$$
where $\tilde{D}=\tilde{D}I_m$, $\tilde{L}=\tilde{L}I_m$, and $\tilde{D}$, $-\tilde{L}$ are the diagonal part and the strictly lower triangular part of $M(\tilde{A})$, respectively. If $a_{i,i+1,\cdots,i+1}a_{i+1,i,\cdots,i}+a_{ik_i\cdots k_i}a_{k_ii\cdots i}\ne1$, then $M(\tilde{D}-\tilde{L})^{-1}$ exists, and the Gauss-Seidel type iteration tensor is $\tilde{T}=M(\tilde{D}-\tilde{L})^{-1}\tilde{U}$.
Lemma 3.5. Let $A\in\mathbb{R}^{[m,n]}$ be a strong M-tensor. Then $\tilde{A}=\tilde{P}A=(I+\tilde{S})A$ is a strong M-tensor.

Proof. We first show that $\tilde{A}$ is a Z-tensor. Since
$$\tilde{a}_{ii_2\cdots i_m}=\begin{cases}a_{ii_2\cdots i_m}-a_{i,i+1,\cdots,i+1}a_{i+1,i_2,\cdots,i_m}-a_{ik_i\cdots k_i}a_{k_ii_2\cdots i_m},& i=1,\cdots,n-2,\\ a_{n-1,i_2,\cdots,i_m}-a_{n-1,n,\cdots,n}a_{n,i_2,\cdots,i_m},& i=n-1,\\ a_{ni_2\cdots i_m},& i=n,\end{cases} \qquad (3.5)$$
we have $\tilde{a}_{ii_2\cdots i_m}\le0$ for $(i_2,\cdots,i_m)\ne(i,\cdots,i)$; that is, $\tilde{A}$ is a Z-tensor.

As $A$ is a strong M-tensor, by Lemma 2.2 there exists a positive vector $x$ such that $Ax^{m-1}>0$. Thus $\tilde{A}x^{m-1}=(I+\tilde{S})Ax^{m-1}>0$; that is, there exists a positive vector $x$ such that $\tilde{A}x^{m-1}>0$. Then, by Lemma 2.2 again, $\tilde{A}$ is a strong M-tensor.
Lemma 3.6. Let $A\in\mathbb{R}^{[m,n]}$ be a strong M-tensor, and let
$$\hat{A}_1=(I+S_1)A=E_1-F_1,$$
$$A_{\max}=(I+S_{\max})A=E_{\max}-F_{\max},$$
and
$$\tilde{A}=(I+\tilde{S})A=\tilde{E}-\tilde{F}.$$
Then
$$M(\tilde{E})^{-1}\ge M(E_{\max})^{-1}\ge M(E_1)^{-1}.$$
Proof. Let $A_{\max}=(a^{\max}_{ii_2\cdots i_m})$ and $\tilde{A}=(\tilde{a}_{ii_2\cdots i_m})$. We have
$$a^{\max}_{ii_2\cdots i_m}=\begin{cases}a_{ii_2\cdots i_m}-a_{ik_i\cdots k_i}a_{k_ii_2\cdots i_m},& i=1,\cdots,n-1,\\ a_{ni_2\cdots i_m},& i=n,\end{cases}$$
and $\tilde{a}_{ii_2\cdots i_m}$ is defined by (3.5). Since $A$ is a strong M-tensor, from Lemma 3.3 and Lemma 3.5 we know that $A_{\max}$ and $\tilde{A}$ are strong M-tensors; thus $M(E_{\max})$ and $M(\tilde{E})$ are nonsingular M-matrices. Since
$$a_{ii_2\cdots i_m}-a_{i,i+1,\cdots,i+1}a_{i+1,i_2,\cdots,i_m}-a_{ik_i\cdots k_i}a_{k_ii_2\cdots i_m}\le a_{ii_2\cdots i_m}-a_{ik_i\cdots k_i}a_{k_ii_2\cdots i_m}$$
for $i=1,\cdots,n-2$, and $a^{\max}_{ii_2\cdots i_m}=\tilde{a}_{ii_2\cdots i_m}$ for $i=n-1,n$, we get $M(\tilde{E})^{-1}\ge M(E_{\max})^{-1}$. The latter inequality follows from Lemma 3.4.
Theorem 3.2. Let $A$ be a strong M-tensor. For $\hat{A}_1=E_1-F_1$ and $\tilde{A}=\tilde{E}-\tilde{F}$, let $\hat{T}_1=M(E_1)^{-1}F_1$ and $\tilde{T}=M(\tilde{E})^{-1}\tilde{F}$. Then we have
$$\rho(\tilde{T})\le\rho(\hat{T}_1). \qquad (3.6)$$

Proof. From Lemma 3.1, there exists a positive Perron vector $x$ of $\hat{T}_1$ such that $Ax^{m-1}\ge0$. Thus $\hat{A}_1x^{m-1}=(I+S_1)Ax^{m-1}\ge0$. Notice that
$$\tilde{A}x^{m-1}-\hat{A}_1x^{m-1}=(\tilde{A}-\hat{A}_1)x^{m-1}=(\tilde{S}-S_1)Ax^{m-1}\ge0.$$
This means that $\tilde{A}x^{m-1}\ge\hat{A}_1x^{m-1}\ge0$. From Lemma 3.6, we have $M(\tilde{E})^{-1}\ge M(E_1)^{-1}\ge O$. Hence,
$$M(\tilde{E})^{-1}\tilde{A}x^{m-1}-M(E_1)^{-1}\hat{A}_1x^{m-1}=M(\tilde{E})^{-1}(\tilde{A}-\hat{A}_1)x^{m-1}+\big(M(\tilde{E})^{-1}-M(E_1)^{-1}\big)\hat{A}_1x^{m-1}\ge0.$$
Since
$$M(\tilde{E})^{-1}\tilde{A}=I_m-M(\tilde{E})^{-1}\tilde{F},\qquad M(E_1)^{-1}\hat{A}_1=I_m-M(E_1)^{-1}F_1,$$
we obtain
$$M(\tilde{E})^{-1}\tilde{F}x^{m-1}\le M(E_1)^{-1}F_1x^{m-1}=\hat{T}_1x^{m-1}=\rho\big(M(E_1)^{-1}F_1\big)x^{[m-1]}.$$
By Lemma 2.1, we have $\rho(M(\tilde{E})^{-1}\tilde{F})\le\rho(M(E_1)^{-1}F_1)$, i.e., $\rho(\tilde{T})\le\rho(\hat{T}_1)$.
Remark 3.1. For Example 3.1, it is easy to show that
$$M(\tilde{E})^{-1}=\begin{pmatrix}1.0056&0&0&0\\0.0461&1.0056&0&0\\0.0465&0.0444&1.0040&0\\0.0439&0.0420&0.0402&1\end{pmatrix},$$
so that $M(\tilde{E})^{-1}\ge M(E_1)^{-1}$. By computation, $\rho(\hat{T}_1)=0.5877$, with corresponding Perron vector
$$x=(x_1,x_2,x_3,x_4)^T=(1,0.9124,0.9496,0.9557)^T.$$
Hence, in accordance with Theorem 3.2, we have $\rho(\tilde{T})=0.5816<0.5877=\rho(\hat{T}_1)$.
Theorem 3.3. Let $A$ be a strong M-tensor. For $A_{\max}=E_{\max}-F_{\max}$ and $\tilde{A}=\tilde{E}-\tilde{F}$, let
$$T_{\max}=M(E_{\max})^{-1}F_{\max}$$
and
$$\tilde{T}=M(\tilde{E})^{-1}\tilde{F}.$$
If there exists a positive Perron vector $x$ of $T_{\max}$ such that $Ax^{m-1}\ge0$, then we have
$$\rho(\tilde{T})\le\rho(T_{\max}). \qquad (3.7)$$

Proof. The proof is similar to that of Theorem 3.2 and is omitted.
Example 3.2. Consider the tensor $A\in\mathbb{R}^{[3,3]}$ of [5], where
$$A(:,:,1)=\begin{pmatrix}1&-0.12&-0.13\\-0.12&-0.03&-0.06\\-0.13&-0.02&-0.1\end{pmatrix},\quad A(:,:,2)=\begin{pmatrix}-0.04&-0.02&-0.03\\-0.01&1&-0.02\\-0.03&-0.04&-0.02\end{pmatrix},\quad A(:,:,3)=\begin{pmatrix}-0.03&-0.02&-0.04\\-0.02&-0.06&-0.03\\-0.02&-0.1&1\end{pmatrix}.$$
We can show that $A$ is a strong M-tensor. Let $\alpha=(\alpha_1,\alpha_2)=(1,1)$; we get
$$M(\hat{E}_1)^{-1}=\begin{pmatrix}1.0024&0&0\\0.0125&1.0012&0\\0.0136&0.04&1\end{pmatrix},\quad M(E_{\max})^{-1}=\begin{pmatrix}1.0052&0&0\\0.01247&1.0012&0\\0.01357&0.04&1\end{pmatrix},\quad M(\tilde{E})^{-1}=\begin{pmatrix}1.0077&0&0\\0.0125&1.0012&0\\0.0136&0.04&1\end{pmatrix}.$$
From the above we can see that
$$M(\tilde{E})^{-1}\ge M(E_{\max})^{-1}\ge O,\qquad M(\tilde{E})^{-1}\ge M(\hat{E}_1)^{-1}\ge O.$$
By computation, we have
$$\rho(\tilde{T})=0.3346<0.3386=\rho(T_{\max})<0.3451=\rho(\hat{T}_1).$$
In this paper, we present a new preconditioner $I+\tilde{S}$ for solving multi-linear systems and give new comparison results between two different preconditioned tensor splitting iterative methods. The comparison theorems show that the spectral radius of the iteration tensor induced by the proposed preconditioner is smaller than those induced by the preconditioners in [5]. We present two numerical examples to validate the effectiveness of the proposed preconditioner.
The authors gratefully thank the referee for the constructive comments and recommendations, which helped to improve the readability and quality of the paper. This work was supported by the Exploration Project of Zhejiang Natural Science Foundation (No. LQ20G010004) and Ningbo Natural Science Foundation (No. 2021J179).
The authors declare that there are no conflicts of interest regarding the publication of this paper.
[1] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007

[2] K. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421–427.

[3] D. Liu, W. Li, S. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75–94. https://doi.org/10.1016/j.cam.2017.08.009

[4] W. Li, D. Liu, S. Vong, Comparison results for splitting iterations for solving multi-linear system, Appl. Numer. Math., 134 (2018), 105–121. https://doi.org/10.1016/j.apnum.2018.07.009

[5] L. Cui, M. Li, Y. Song, Preconditioned tensor splitting iterations method for solving multi-linear systems, Appl. Math. Lett., 96 (2019), 89–94. https://doi.org/10.1016/j.aml.2019.04.019

[6] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., 439 (2013), 3264–3278. https://doi.org/10.1016/j.laa.2013.08.038

[7] Q. Liu, J. Huang, S. Zeng, Convergence analysis of the two preconditioned iterative methods for M-matrix linear systems, J. Comput. Appl. Math., 281 (2015), 49–57. https://doi.org/10.1016/j.cam.2014.11.034

[8] K. C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci., 6 (2008), 507–520. https://dx.doi.org/10.4310/CMS.2008.v6.n2.a12

[9] W. Ding, Y. Wei, Solving multi-linear system with M-tensors, J. Sci. Comput., 68 (2016), 689–715. https://doi.org/10.1007/s10915-015-0156-7

[10] L. Zhang, L. Qi, G. Zhou, M-tensors and some applications, SIAM J. Matrix Anal. Appl., 35 (2014), 437–452. https://doi.org/10.1137/130915339

[11] Z. Luo, L. Qi, N. Xiu, The sparsest solutions to Z-tensor complementarity problems, Optim. Lett., 11 (2017), 471–482. https://doi.org/10.1007/s11590-016-1013-9

[12] M. Ng, L. Qi, G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. Appl., 31 (2009), 1090–1099. https://doi.org/10.1137/09074838X

[13] Y. Matsuno, Exact solutions for the nonlinear Klein-Gordon and Liouville equations in four-dimensional Euclidean space, J. Math. Phys., 28 (1987), 2317–2322. https://doi.org/10.1063/1.527764

[14] D. Zwillinger, Handbook of differential equations, 3 Eds., Boston: Academic Press Inc., 1997.

[15] D. Kressner, C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl., 31 (2010), 1688–1714. https://doi.org/10.1137/090756843

[16] C. Tobler, Low-rank tensor methods for linear systems and eigenvalue problems, Ph.D. thesis, 2012.