Research article

An algorithm for calculating spectral radius of s-index weakly positive tensors

  • Received: 02 August 2023 Revised: 28 October 2023 Accepted: 13 November 2023 Published: 27 November 2023
  • MSC : 15A18, 15A69

• In this paper, we introduce s-index weakly positive tensors and discuss the calculation of the spectral radius of this class of nonnegative tensors. Using the diagonal similarity transformation of tensors and the Perron-Frobenius theory of nonnegative tensors, we give a method for computing the maximum H-eigenvalue of s-index weakly positive tensors. A variable parameter is introduced in each iteration of the algorithm, which is equivalent to a translation transformation of the tensor at each iteration and improves the computation speed. We also prove that the algorithm is linearly convergent for the calculation of the spectral radius of s-index weakly positive tensors. A final numerical example shows the effectiveness of the algorithm.

    Citation: Panpan Liu, Hongbin Lv. An algorithm for calculating spectral radius of s-index weakly positive tensors[J]. AIMS Mathematics, 2024, 9(1): 205-217. doi: 10.3934/math.2024012




Consider an $m$-order $n$-dimensional square tensor $\mathcal{A}$ consisting of $n^m$ entries in the real field $\mathbb{R}$:

$$\mathcal{A}=(a_{i_1i_2\cdots i_m}),\quad a_{i_1i_2\cdots i_m}\in\mathbb{R},\quad 1\le i_1,i_2,\dots,i_m\le n.$$

If $a_{i_1i_2\cdots i_m}\ge 0$ for all $1\le i_1,i_2,\dots,i_m\le n$, then $\mathcal{A}$ is called an $m$-order $n$-dimensional nonnegative tensor. Denote the set of all $m$-order $n$-dimensional nonnegative tensors by $\mathbb{R}^{[m,n]}_+$. $\mathbb{R}^n$, $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the set of all $n$-dimensional real vectors, the set of all $n$-dimensional nonnegative vectors and the set of all $n$-dimensional positive vectors, respectively. Tensors play an important role in physics, engineering and mathematics, with many application domains such as data analysis and mining, information science, image processing and computational biology [1,2,3,4].

    In 2005, Qi [2] introduced the notion of eigenvalues of higher-order tensors and studied the existence of both complex and real eigenvalues and eigenvectors. Independently, in the same year, Lim [3] also defined eigenvalues and eigenvectors but restricted them to be real.

Qi [2] proposed the definition of the H-eigenvalue. If a real number $\lambda$ and a nonzero real vector $x\in\mathbb{R}^n$ satisfy the following homogeneous polynomial equation:

$$\mathcal{A}x^{m-1}=\lambda x^{[m-1]},$$

where

$$\mathcal{A}x^{m-1}=\left(\sum_{i_2,\dots,i_m=1}^n a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}\right)_{1\le i\le n}$$

is an $n$-dimensional vector and

$$\left(x^{[m-1]}\right)_i=x_i^{m-1},$$

then $\lambda$ is called an H-eigenvalue of $\mathcal{A}$ and $x$ an H-eigenvector of $\mathcal{A}$ associated with $\lambda$. Define the set of H-eigenvalues of $\mathcal{A}\in\mathbb{R}^{[m,n]}_+$ as $\sigma_H(\mathcal{A})$ and $\rho(\mathcal{A})=\max_{\lambda\in\sigma_H(\mathcal{A})}|\lambda|$ as the H-spectral radius of the tensor $\mathcal{A}$. Let $\sigma(\mathcal{A})$ be the set of eigenvalues of a tensor $\mathcal{A}$.
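As an aside for readers who want to experiment numerically (this is our illustration, not part of the paper), the map $x\mapsto\mathcal{A}x^{m-1}$ can be evaluated for a densely stored tensor by repeatedly contracting the last index with $x$; the helper below is reused in the later sketches.

```python
# A minimal sketch (ours, not from the paper): evaluate A x^{m-1} for a dense tensor.
import numpy as np

def tensor_apply(A, x):
    """Return the vector A x^{m-1}: (A x^{m-1})_i = sum_{i2,...,im} a_{i i2...im} x_{i2} ... x_{im}."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x                      # contract the current last index with x
    return y

A = np.random.rand(4, 4, 4)             # a random 3-order, 4-dimensional nonnegative tensor (illustrative)
x = np.random.rand(4)
y = tensor_apply(A, x)                  # the vector A x^{m-1}
ratios = y / x ** (A.ndim - 1)          # if x were an H-eigenvector, every ratio would equal lambda
```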

Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$. Ng et al. [5] proposed the NQZ algorithm for computing the largest H-eigenvalue of a nonnegative irreducible tensor.

Algorithm 1 ([5]) NQZ algorithm
Step 0. Choose $x^{(0)}>0$, $x^{(0)}\in\mathbb{R}^n$. Let $y^{(0)}=\mathcal{A}(x^{(0)})^{m-1}$ and set $k:=0$.
Step 1. Compute
$$x^{(k+1)}=\frac{(y^{(k)})^{[\frac{1}{m-1}]}}{\left\|(y^{(k)})^{[\frac{1}{m-1}]}\right\|},\qquad y^{(k+1)}=\mathcal{A}(x^{(k+1)})^{m-1},$$
$$\underline{\lambda}_{k+1}=\min_{(x^{(k+1)})_i>0}\frac{(y^{(k+1)})_i}{(x^{(k+1)})_i^{m-1}},\qquad \overline{\lambda}_{k+1}=\max_{(x^{(k+1)})_i>0}\frac{(y^{(k+1)})_i}{(x^{(k+1)})_i^{m-1}}.$$
Step 2. If $\overline{\lambda}_{k+1}=\underline{\lambda}_{k+1}$, stop. Otherwise, replace $k$ by $k+1$ and go to Step 1.
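A minimal sketch of the NQZ iteration is given below (ours, not from [5]); the dense storage, the 2-norm normalization, the all-ones starting vector and the tolerance-based stopping test are assumptions of this illustration, and tensor_apply is the helper from the sketch above.

```python
# Sketch of the NQZ iteration for a dense nonnegative tensor A; tensor_apply is defined above.
import numpy as np

def nqz(A, tol=1e-10, max_iter=10_000):
    n, m = A.shape[0], A.ndim
    x = np.ones(n)                                   # x^(0) > 0
    y = tensor_apply(A, x)                           # y^(0) = A (x^(0))^{m-1}
    lam_lo, lam_hi = 0.0, np.inf
    for _ in range(max_iter):
        x = y ** (1.0 / (m - 1))                     # (y^(k))^{[1/(m-1)]}
        x = x / np.linalg.norm(x)                    # normalize to get x^(k+1)
        y = tensor_apply(A, x)                       # y^(k+1) = A (x^(k+1))^{m-1}
        mask = x > 0
        ratios = y[mask] / x[mask] ** (m - 1)
        lam_lo, lam_hi = ratios.min(), ratios.max()  # lambda_underline and lambda_bar
        if lam_hi - lam_lo < tol:                    # the exact algorithm stops when they coincide
            break
    return 0.5 * (lam_lo + lam_hi), x
```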


    Subsequently, the NQZ algorithm was proved to be convergent for primitive tensors in [6] and for weakly primitive tensors in [4], and the NQZ algorithm was shown to have an explicit linear convergence rate for essentially positive tensors in [7]. However, some examples [5] showed that it did not converge for some irreducible nonnegative tensors.

In 2010, Liu et al. [8] modified the NQZ algorithm and proposed the LZI algorithm. Let $\mathcal{E}=(\delta_{i_1i_2\cdots i_m})$ be the $m$-order $n$-dimensional unit tensor whose entries are

$$\delta_{i_1i_2\cdots i_m}=\begin{cases}1,&\text{if }i_1=i_2=\cdots=i_m,\\ 0,&\text{otherwise.}\end{cases}$$
Algorithm 2 ([8]) LZI algorithm
Step 0. Choose $x^{(0)}>0$, $x^{(0)}\in\mathbb{R}^n$. Let $\mathcal{B}=\mathcal{A}+\rho\mathcal{E}$, where $\rho>0$, and set $k:=0$.
Step 1. Compute
$$y^{(k)}=\mathcal{B}(x^{(k)})^{m-1},\qquad \underline{\lambda}_k=\min_{(x^{(k)})_i>0}\frac{(y^{(k)})_i}{(x^{(k)})_i^{m-1}},\qquad \overline{\lambda}_k=\max_{(x^{(k)})_i>0}\frac{(y^{(k)})_i}{(x^{(k)})_i^{m-1}}.$$
Step 2. If $\overline{\lambda}_k=\underline{\lambda}_k$, then let $\lambda=\overline{\lambda}_k$ and stop. Otherwise, compute
$$x^{(k+1)}=\frac{(y^{(k)})^{[\frac{1}{m-1}]}}{\left\|(y^{(k)})^{[\frac{1}{m-1}]}\right\|},$$
replace $k$ by $k+1$ and go to Step 1.
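Since adding $\rho\mathcal{E}$ shifts every H-eigenvalue of $\mathcal{A}$ by exactly $\rho$, the LZI algorithm can be viewed as the same power-type iteration applied to the shifted tensor $\mathcal{B}$. The sketch below (ours; it reuses the nqz sketch above rather than reproducing the exact update order of Algorithm 2) illustrates this reading.

```python
# Sketch of the LZI idea: shift the diagonal by rho, iterate, then shift the eigenvalue back.
import numpy as np

def lzi(A, rho=1.0, tol=1e-10, max_iter=10_000):
    n, m = A.shape[0], A.ndim
    B = A.astype(float).copy()
    diag = (np.arange(n),) * m
    B[diag] += rho                                   # B = A + rho * E, with E the unit tensor
    lam_B, x = nqz(B, tol=tol, max_iter=max_iter)    # reuse the NQZ sketch above
    return lam_B - rho, x                            # rho(A) = rho(B) - rho
```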


Liu et al. [8] proved that the LZI algorithm is convergent for irreducible nonnegative tensors. In 2012, Zhang et al. [9] proved the linear convergence of the LZI algorithm for weakly positive tensors. Since then, there have been many studies on the calculation of the maximum eigenvalue of nonnegative tensors. For example, Yang and Ni [10] gave a nonlinear algorithm for calculating the maximum eigenvalue of symmetric tensors, and Zhang and Bu [11] gave a diagonal similarity iterative algorithm for calculating the maximum H-eigenvalue of a class of generalized weakly positive tensors.

In this section, we mainly introduce some related concepts and important properties of tensors and matrices. For a positive integer $n$, let $\langle n\rangle=\{1,2,\dots,n\}$.

Definition 2.1. [12] An $m$-order $n$-dimensional tensor $\mathcal{A}$ is called reducible if there exists a nonempty proper index subset $I\subset\langle n\rangle$ such that

$$a_{i_1i_2\cdots i_m}=0,\quad\forall i_1\in I,\ \forall i_2,\dots,i_m\notin I.$$

If $\mathcal{A}$ is not reducible, then $\mathcal{A}$ is irreducible.

Definition 2.2. [13] A nonnegative matrix $\mathring{A}$ is called the majorization associated to the nonnegative tensor $\mathcal{A}$ if the $(i,j)$-th element of $\mathring{A}$ is defined to be $a_{ij\cdots j}$ for any $i,j=1,\dots,n$.

Definition 2.3. [13] A nonnegative $m$-order $n$-dimensional tensor $\mathcal{A}$ is essentially positive if $\mathcal{A}x^{m-1}\in\mathbb{R}^n_{++}$ for any nonzero $x\in\mathbb{R}^n_+$.

Definition 2.4. [9] Let $\mathcal{A}$ be a nonnegative tensor of order $m$ and dimension $n$. $\mathcal{A}$ is weakly positive if

$$a_{ij\cdots j}>0\quad\text{for }i\ne j,\ i,j\in\{1,2,\dots,n\}.$$

Definition 2.5. [11] Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$; then $\mathcal{A}$ is generalized weakly positive if there exists $i_0\in\langle n\rangle$ such that $a_{i_0j\cdots j}>0$ and $a_{ji_0\cdots i_0}>0$ for all $j\in\langle n\rangle\setminus\{i_0\}$.

Definition 2.6. [14] Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$.

(1) We call a nonnegative matrix $G(\mathcal{A})$ the representation associated to the nonnegative tensor $\mathcal{A}$ if the $(i,j)$-th element of $G(\mathcal{A})$ is defined to be the summation of $a_{ii_2\cdots i_m}$ over all index tuples with $j\in\{i_2,\dots,i_m\}$.

(2) We call $\mathcal{A}$ weakly reducible if its representation $G(\mathcal{A})$ is a reducible matrix, and weakly primitive if $G(\mathcal{A})$ is a primitive matrix. If $\mathcal{A}$ is not weakly reducible, then it is called weakly irreducible.

Definition 2.7. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ and let $\pi_{s-1}(i,j)$ denote an arrangement of $s-1$ letters $i$ and $m-s$ letters $j$. If there exist $s\le m-1$ and $i_0\in\langle n\rangle$ such that, for any $j\in\langle n\rangle$ with $j\ne i_0$, both $a_{i_0\pi_{s-1}(i_0,j)}\ne0$ and $a_{j\pi_{s-1}(j,i_0)}\ne0$ hold, then $\mathcal{A}$ is called an $s$-index weakly positive tensor.

For example, let $\mathcal{A}=(a_{ijk})\in\mathbb{R}^{[3,3]}_+$, where $a_{113}=1$, $a_{223}=4$, $a_{331}=2$, $a_{332}=5$ and $a_{i_1i_2i_3}=0$ ($1\le i_1,i_2,i_3\le3$) elsewhere; then $\mathcal{A}$ is a two-index weakly positive tensor.
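The condition of Definition 2.7 can be checked mechanically on this example; the sketch below (ours, using 0-based indices and reading "an arrangement" as "some arrangement" of the letters) enumerates the arrangements $\pi_{s-1}$ directly.

```python
# Sketch: check the s-index weak positivity condition for the 3x3x3 example with s = 2, i0 = 3.
import numpy as np
from itertools import permutations

A = np.zeros((3, 3, 3))
A[0, 0, 2] = 1.0   # a_113 (0-based indices)
A[1, 1, 2] = 4.0   # a_223
A[2, 2, 0] = 2.0   # a_331
A[2, 2, 1] = 5.0   # a_332

def is_s_index_weakly_positive(A, s, i0):
    n, m = A.shape[0], A.ndim
    for j in range(n):
        if j == i0:
            continue
        pis_i0 = set(permutations((i0,) * (s - 1) + (j,) * (m - s)))   # arrangements pi_{s-1}(i0, j)
        pis_j = set(permutations((j,) * (s - 1) + (i0,) * (m - s)))    # arrangements pi_{s-1}(j, i0)
        if not any(A[(i0,) + p] != 0 for p in pis_i0):                 # need a_{i0 pi_{s-1}(i0,j)} != 0
            return False
        if not any(A[(j,) + p] != 0 for p in pis_j):                   # need a_{j pi_{s-1}(j,i0)} != 0
            return False
    return True

print(is_s_index_weakly_positive(A, s=2, i0=2))   # True: the example is two-index weakly positive
```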

Remark 2.1. Essentially positive tensors, weakly positive tensors and generalized weakly positive tensors are all one-index weakly positive tensors, which are special tensor classes of $s$-index weakly positive tensors.

Theorem 2.1. [15] For any nonnegative tensor $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$, $\rho(\mathcal{A})$ is an eigenvalue with a corresponding nonnegative eigenvector $x\in\mathbb{R}^n_+$.

Theorem 2.2. [16] Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ and let $\rho(\mathcal{A})$ be the spectral radius of $\mathcal{A}$; then

$$\min_{i\in\langle n\rangle}\sum_{i_2,\dots,i_m=1}^n a_{ii_2\cdots i_m}\le\rho(\mathcal{A})\le\max_{i\in\langle n\rangle}\sum_{i_2,\dots,i_m=1}^n a_{ii_2\cdots i_m}.$$

Theorem 2.3. [12] If $\mathcal{A}$ is an irreducible nonnegative tensor of order $m$ and dimension $n$, then there exist $\lambda_0>0$ and $x_0>0$, $x_0\in\mathbb{R}^n$, such that $\mathcal{A}x_0^{m-1}=\lambda_0x_0^{[m-1]}$. Moreover, if $\lambda$ is an eigenvalue with a nonnegative eigenvector, then $\lambda=\lambda_0$. If $\lambda$ is an eigenvalue of $\mathcal{A}$, then $|\lambda|\le\lambda_0$.

Definition 2.8. [17] Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ and $\mathcal{B}=(b_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$. The tensors $\mathcal{A}$ and $\mathcal{B}$ are said to be diagonal similar if there exists some invertible diagonal matrix $D=\operatorname{diag}(d_1,d_2,\dots,d_n)$ of order $n$ such that $\mathcal{B}=\mathcal{A}\times_1D^{-(m-1)}\times_2D\times_3\cdots\times_mD$, where $b_{i_1i_2\cdots i_m}=a_{i_1i_2\cdots i_m}d_{i_1}^{-(m-1)}d_{i_2}\cdots d_{i_m}$.
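In dense storage, the diagonal similarity transformation of Definition 2.8 is an entrywise rescaling; the helper below (ours, reused in the Algorithm 3 sketch later) multiplies mode 1 by $d_{i_1}^{-(m-1)}$ and every other mode by $d$. By Theorem 2.4, this rescaling leaves the spectrum unchanged.

```python
# Sketch of Definition 2.8: b_{i1...im} = a_{i1...im} * d_{i1}^{-(m-1)} * d_{i2} * ... * d_{im}.
import numpy as np

def diag_similar(A, d):
    m = A.ndim
    shape = [1] * m
    shape[0] = -1
    B = A * (d ** (-(m - 1))).reshape(shape)   # mode 1: multiply by d_{i1}^{-(m-1)}
    for mode in range(1, m):
        shape = [1] * m
        shape[mode] = -1
        B = B * d.reshape(shape)               # modes 2,...,m: multiply by d
    return B
```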

Theorem 2.4. [17] If two $m$-order $n$-dimensional tensors $\mathcal{A}$ and $\mathcal{B}$ are diagonal similar, then $\sigma(\mathcal{A})=\sigma(\mathcal{B})$.

In 1981, Bunse [18] gave a diagonal similarity iterative algorithm for calculating the maximum eigenvalue of irreducible nonnegative matrices. In 2008, Lv [19] further studied the diagonal similarity iterative algorithm for calculating the maximum eigenvalue of irreducible nonnegative matrices. In 2021, Zhang and Bu [11] gave a diagonal similarity iterative algorithm for calculating the maximum H-eigenvalue of nonnegative tensors. In this paper, following the construction idea of the algorithm in [19], a numerical algorithm for calculating the maximum H-eigenvalue and a corresponding eigenvector of $s$-index weakly positive tensors is given.

Let $\mathcal{A}=\mathcal{A}^{(0)}=(a^{(0)}_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$, $r_i(\mathcal{A}^{(0)})=\sum_{i_2,\dots,i_m=1}^n a^{(0)}_{ii_2\cdots i_m}$ ($i\in\langle n\rangle$) and $\overline{r}(\mathcal{A}^{(0)})=\max_{i\in\langle n\rangle}r_i(\mathcal{A}^{(0)})$. Let $\varepsilon$ be a sufficiently small positive number. The parameters $\alpha_k\in\mathbb{R}$ ($k=0,1,2,\dots$) satisfy $\varepsilon-\min_{i\in\langle n\rangle}a_{ii\cdots i}<\alpha_k\le\overline{r}(\mathcal{A}^{(0)})$.

Algorithm 3
Step 0. Given $\mathcal{A}^{(0)}=\mathcal{A}=(a_{i_1i_2\cdots i_m})$, $\varepsilon>0$ and parameters $\alpha_k$ with $\varepsilon-\min_{i\in\langle n\rangle}a_{ii\cdots i}<\alpha_k\le\overline{r}(\mathcal{A}^{(0)})$. Set $k:=0$.
Step 1. Compute
$$r_i(\mathcal{A}^{(k)})=\sum_{i_2,\dots,i_m=1}^n a^{(k)}_{ii_2\cdots i_m},\quad i\in\langle n\rangle,$$
$$\overline{r}(\mathcal{A}^{(k)})=\max_{i\in\langle n\rangle}r_i(\mathcal{A}^{(k)}),\qquad \underline{r}(\mathcal{A}^{(k)})=\min_{i\in\langle n\rangle}r_i(\mathcal{A}^{(k)}).$$
Step 2. If $\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})<\varepsilon$, then $\rho(\mathcal{A})=\frac{1}{2}\left(\overline{r}(\mathcal{A}^{(k)})+\underline{r}(\mathcal{A}^{(k)})\right)$ and stop.
Step 3. Set
$$D_k=\left(\frac{\operatorname{diag}\left(r_1(\mathcal{A}^{(k)}),r_2(\mathcal{A}^{(k)}),\dots,r_n(\mathcal{A}^{(k)})\right)+\alpha_kI}{\overline{r}(\mathcal{A}^{(k)})+\alpha_k}\right)^{\frac{1}{m-1}},$$
$$\mathcal{A}^{(k+1)}=\mathcal{A}^{(k)}\times_1D_k^{-(m-1)}\times_2D_k\times_3\cdots\times_mD_k,$$
replace $k$ by $k+1$ and go to Step 1.
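A compact sketch of Algorithm 3 for densely stored tensors is given below (our reading of the steps above; the particular choice $\alpha_k=(\overline{r}-\underline{r})/(mn)$ is the one used later in Example 4.1, and diag_similar is the helper from the sketch after Definition 2.8). The running product of the diagonals of the $D_k$ is kept because Theorem 3.2 below uses it to recover an eigenvector.

```python
# Sketch of Algorithm 3: balance the slice sums r_i by diagonal similarity transformations.
import numpy as np

def algorithm3(A, eps=1e-8, max_iter=10_000):
    n, m = A.shape[0], A.ndim
    Ak = A.astype(float).copy()
    d_acc = np.ones(n)                          # running product of the diagonals of D_0, D_1, ...
    for _ in range(max_iter):
        r = Ak.reshape(n, -1).sum(axis=1)       # r_i(A^(k)): sum of slice i over the last m-1 indices
        r_hi, r_lo = r.max(), r.min()
        if r_hi - r_lo < eps:                   # Step 2: slice sums have (numerically) equalized
            break
        alpha = (r_hi - r_lo) / (m * n)         # one admissible alpha_k (the choice of Example 4.1)
        d = ((r + alpha) / (r_hi + alpha)) ** (1.0 / (m - 1))   # diagonal of D_k, Step 3
        Ak = diag_similar(Ak, d)                # A^(k+1) = A^(k) x_1 D_k^{-(m-1)} x_2 D_k ... x_m D_k
        d_acc = d_acc * d
    return 0.5 * (r_hi + r_lo), d_acc           # estimate of rho(A) and the accumulated diagonal
```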


    In the following, we will give the convergence condition of Algorithm 3.

Define $\mathcal{A}^{(k)}+\alpha_k\mathcal{E}=\mathcal{A}^{(k)}(\alpha_k)=:\left(a^{(k)}_{i_1i_2\cdots i_m}(\alpha_k)\right)_{i_1,i_2,\dots,i_m=1}^n$.

Lemma 3.1. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$. For the tensor sequence $\mathcal{A}^{(l)}$ ($l=0,1,2,\dots$), the sequence $\overline{r}(\mathcal{A}^{(l)})$ ($l=0,1,2,\dots$) is monotonically decreasing and bounded below, and the sequence $\underline{r}(\mathcal{A}^{(l)})$ ($l=0,1,2,\dots$) is monotonically increasing and bounded above.

    Proof. Notice that

$$\begin{aligned}
r_i(\mathcal{A}^{(l+1)})&=\sum_{i_2,\dots,i_m=1}^n a^{(l+1)}_{ii_2\cdots i_m}
=\sum_{i_2,\dots,i_m=1}^n a^{(l)}_{ii_2\cdots i_m}\,\frac{\prod_{j=2}^m\left(r_{i_j}(\mathcal{A}^{(l)})+\alpha_l\right)^{\frac{1}{m-1}}}{r_i(\mathcal{A}^{(l)})+\alpha_l}\\
&=\sum_{i_2,\dots,i_m=1}^n a^{(l)}_{ii_2\cdots i_m}(\alpha_l)\,\frac{\prod_{j=2}^m\left(r_{i_j}(\mathcal{A}^{(l)})+\alpha_l\right)^{\frac{1}{m-1}}}{r_i(\mathcal{A}^{(l)})+\alpha_l}-\alpha_l\\
&\le\sum_{i_2,\dots,i_m=1}^n a^{(l)}_{ii_2\cdots i_m}(\alpha_l)\,\frac{\overline{r}(\mathcal{A}^{(l)})+\alpha_l}{r_i(\mathcal{A}^{(l)})+\alpha_l}-\alpha_l=\overline{r}(\mathcal{A}^{(l)}),
\end{aligned}$$

then $\overline{r}(\mathcal{A}^{(l+1)})\le\overline{r}(\mathcal{A}^{(l)})$, $l=0,1,2,\dots$. Therefore, $\overline{r}(\mathcal{A}^{(l)})$ is monotonically decreasing. Similarly, it can be proved that $\underline{r}(\mathcal{A}^{(l+1)})\ge\underline{r}(\mathcal{A}^{(l)})$, $l=0,1,2,\dots$, so $\underline{r}(\mathcal{A}^{(l)})$ is monotonically increasing. Since $\underline{r}(\mathcal{A}^{(l)})\le\overline{r}(\mathcal{A}^{(l)})$, we obtain that $\overline{r}(\mathcal{A}^{(l)})$ ($l=0,1,2,\dots$) is monotonically decreasing with lower bound $\underline{r}(\mathcal{A}^{(0)})$, and $\underline{r}(\mathcal{A}^{(l)})$ ($l=0,1,2,\dots$) is monotonically increasing with upper bound $\overline{r}(\mathcal{A}^{(0)})$.

Lemma 3.2. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ be an $s$-index weakly positive tensor; then it is an irreducible tensor.

Proof. Because $\mathcal{A}$ is an $s$-index weakly positive tensor, by Definition 2.7 there are $s\le m-1$ and $i_0\in\langle n\rangle$ such that $a_{i_0\pi_{s-1}(i_0,j)}\ne0$ and $a_{j\pi_{s-1}(j,i_0)}\ne0$ for all $j\in\langle n\rangle$, $j\ne i_0$. That is, for any $j\in I\subset\langle n\rangle$ there is $i_0\in\langle n\rangle\setminus I$ such that $a_{j\pi_{s-1}(j,i_0)}\ne0$. By Definition 2.1, $\mathcal{A}$ is an irreducible tensor.

Lemma 3.3. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ be an $s$-index weakly positive tensor; that is, there are $s\le m-1$ and $i_0\in\langle n\rangle$ such that, for any $j\in\langle n\rangle$ with $j\ne i_0$, $a_{i_0\pi_{s-1}(i_0,j)}\ne0$ and $a_{j\pi_{s-1}(j,i_0)}\ne0$ hold. Then for any $k\in\mathbb{N}$,

$$a^{(k)}_{j\pi_{s-1}(j,i_0)}(\alpha_k)\ge\min\left\{\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})},\varepsilon\right\},\quad\text{where}\quad\tilde a=\min_{j\in\langle n\rangle\setminus\{i_0\}}\left\{a_{i_0\pi_{s-1}(i_0,j)},\,a_{j\pi_{s-1}(j,i_0)}\right\}.$$

Proof. Since $a_{j\pi_{s-1}(j,i_0)}\ne0$, we have $a^{(k)}_{j\pi_{s-1}(j,i_0)}\ne0$ and $a^{(k)}_{i_0i_0\cdots i_0}=a_{i_0i_0\cdots i_0}$ for $k=0,1,2,\dots$. In the case $j\ne i_0$, it can be obtained from Lemma 3.1 that

$$\overline{r}(\mathcal{A}^{(0)})\ge\overline{r}(\mathcal{A}^{(k)})\ge a^{(k)}_{j\pi_{s-1}(j,i_0)}
=a^{(k-1)}_{j\pi_{s-1}(j,i_0)}\,\frac{\left(r_{i_0}(\mathcal{A}^{(k-1)})+\alpha_{k-1}\right)^{\frac{m-s}{m-1}}}{\left(r_j(\mathcal{A}^{(k-1)})+\alpha_{k-1}\right)^{\frac{m-s}{m-1}}}
=\cdots
=a^{(0)}_{j\pi_{s-1}(j,i_0)}\,\frac{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}
\ge\tilde a\,\frac{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}},\tag{3.1}$$

where $\tilde a=\min_{j\in\langle n\rangle\setminus\{i_0\}}\left\{a_{i_0\pi_{s-1}(i_0,j)},\,a_{j\pi_{s-1}(j,i_0)}\right\}$; then

$$\frac{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}\le\frac{\overline{r}(\mathcal{A}^{(0)})}{\tilde a}.$$

Then, by

$$\frac{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}\cdot\frac{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}=1,$$

we have

$$\frac{\prod_{t=0}^{k-1}\left(r_j(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}{\prod_{t=0}^{k-1}\left(r_{i_0}(\mathcal{A}^{(t)})+\alpha_t\right)^{\frac{m-s}{m-1}}}\ge\frac{\tilde a}{\overline{r}(\mathcal{A}^{(0)})},$$

and, applying the same computation as in (3.1) to the entry $a^{(k)}_{i_0\pi_{s-1}(i_0,j)}$ and using the last inequality, we obtain $a^{(k)}_{i_0\pi_{s-1}(i_0,j)}\ge\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})}$ ($j\ne i_0$).

In the case $j=i_0$, it holds that $a^{(k)}_{i_0i_0\cdots i_0}(\alpha_k)=a^{(k)}_{i_0i_0\cdots i_0}+\alpha_k\ge\min_{i\in\langle n\rangle}a_{ii\cdots i}+\alpha_k\ge\min_{i\in\langle n\rangle}a_{ii\cdots i}+\varepsilon-\min_{i\in\langle n\rangle}a_{ii\cdots i}=\varepsilon$. Therefore, $a^{(k)}_{j\pi_{s-1}(j,i_0)}(\alpha_k)\ge\min\left\{\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})},\varepsilon\right\}$.

Theorem 3.1. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ be an $s$-index weakly positive tensor; that is, there are $s\le m-1$ and $i_0\in\langle n\rangle$ such that, for any $j\in\langle n\rangle$ with $j\ne i_0$, $a_{i_0\pi_{s-1}(i_0,j)}\ne0$ and $a_{j\pi_{s-1}(j,i_0)}\ne0$ hold. Then for Algorithm 3,

$$\overline{r}(\mathcal{A}^{(k+1)})-\underline{r}(\mathcal{A}^{(k+1)})\le\alpha\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right),$$

where $\alpha=1-\frac{\hat a}{2\overline{r}(\mathcal{A}^{(0)})}$, $\overline{r}(\mathcal{A}^{(0)})=\max_{i\in\langle n\rangle}r_i(\mathcal{A})$ and $\hat a=\min\left\{\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})},\varepsilon\right\}$ with $\tilde a$ as in Lemma 3.3; consequently,

$$\lim_{k\to\infty}\overline{r}(\mathcal{A}^{(k)})=\lim_{k\to\infty}\underline{r}(\mathcal{A}^{(k)})=\rho(\mathcal{A}).$$

Proof. Let $A^{[0]}=(a^{[0]}_{ij})_{n\times n}$ with $a^{[0]}_{i_0j}=a_{i_0\pi_{s-1}(i_0,j)}$, $a^{[0]}_{ji_0}=a_{j\pi_{s-1}(j,i_0)}$ for $j\in\langle n\rangle$ and zero elsewhere; then $A^{[0]}$ is an irreducible matrix. Let $\overline{r}(\mathcal{A}^{(k+1)})=r^{(k+1)}_p$ and $\underline{r}(\mathcal{A}^{(k+1)})=r^{(k+1)}_q$; then

$$\overline{r}(\mathcal{A}^{(k+1)})-\underline{r}(\mathcal{A}^{(k+1)})=r^{(k+1)}_p-r^{(k+1)}_q=\sum_{i_2,\dots,i_m=1}^n\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\prod_{j=2}^m\left(r_{i_j}(\mathcal{A}^{(k)})+\alpha_k\right)^{\frac{1}{m-1}}.$$

Denote $I=\{i_2\cdots i_m\mid i_2,\dots,i_m\in\langle n\rangle\}$ and

$$I^{(k)}=\left\{i_2\cdots i_m\ \middle|\ \frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}\ge\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right\},$$

    then

$$\begin{aligned}
\overline{r}(\mathcal{A}^{(k+1)})-\underline{r}(\mathcal{A}^{(k+1)})&=r^{(k+1)}_p-r^{(k+1)}_q\\
&=\sum_{i_2\cdots i_m\in I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\prod_{j=2}^m\left(r_{i_j}(\mathcal{A}^{(k)})+\alpha_k\right)^{\frac{1}{m-1}}
+\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\prod_{j=2}^m\left(r_{i_j}(\mathcal{A}^{(k)})+\alpha_k\right)^{\frac{1}{m-1}}\\
&\le\left(\overline{r}(\mathcal{A}^{(k)})+\alpha_k\right)\sum_{i_2\cdots i_m\in I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)
+\left(\underline{r}(\mathcal{A}^{(k)})+\alpha_k\right)\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\\
&=\left(\overline{r}(\mathcal{A}^{(k)})+\alpha_k\right)\sum_{i_2\cdots i_m\in I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)
-\left(\underline{r}(\mathcal{A}^{(k)})+\alpha_k\right)\sum_{i_2\cdots i_m\in I^{(k)}}\left(\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}-\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\\
&=\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right)\left(1-\left(\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}\frac{a^{(k)}_{pi_2\cdots i_m}}{r^{(k)}_p+\alpha_k}+\sum_{i_2\cdots i_m\in I^{(k)}}\frac{a^{(k)}_{qi_2\cdots i_m}}{r^{(k)}_q+\alpha_k}\right)\right)\\
&\le\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right)\left(1-\frac{\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}a^{(k)}_{pi_2\cdots i_m}+\sum_{i_2\cdots i_m\in I^{(k)}}a^{(k)}_{qi_2\cdots i_m}}{2\overline{r}(\mathcal{A}^{(0)})}\right).
\end{aligned}$$

Here the last inequality uses $r^{(k)}_p+\alpha_k\le2\overline{r}(\mathcal{A}^{(0)})$ and $r^{(k)}_q+\alpha_k\le2\overline{r}(\mathcal{A}^{(0)})$. Moreover, either $\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}a^{(k)}_{pi_2\cdots i_m}$ includes the entry $a^{(k)}_{p\pi_{s-1}(p,i_0)}$ or $\sum_{i_2\cdots i_m\in I^{(k)}}a^{(k)}_{qi_2\cdots i_m}$ includes the entry $a^{(k)}_{q\pi_{s-1}(q,i_0)}$; then

$$\sum_{i_2\cdots i_m\in I\setminus I^{(k)}}a^{(k)}_{pi_2\cdots i_m}+\sum_{i_2\cdots i_m\in I^{(k)}}a^{(k)}_{qi_2\cdots i_m}\ge\min\left\{a^{(k)}_{i_0i_0\cdots i_0}(\alpha_k),\,a^{(k)}_{p\pi_{s-1}(p,i_0)}(\alpha_k),\,a^{(k)}_{q\pi_{s-1}(q,i_0)}(\alpha_k)\right\}\ge\min\left\{\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})},\varepsilon\right\}=:\hat a.$$

Thus,

$$\overline{r}(\mathcal{A}^{(k+1)})-\underline{r}(\mathcal{A}^{(k+1)})\le\alpha\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right),$$

where $\alpha=1-\frac{\hat a}{2\overline{r}(\mathcal{A}^{(0)})}$. Therefore,

$$\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\le\alpha\left(\overline{r}(\mathcal{A}^{(k-1)})-\underline{r}(\mathcal{A}^{(k-1)})\right)\le\cdots\le\alpha^k\left(\overline{r}(\mathcal{A}^{(0)})-\underline{r}(\mathcal{A}^{(0)})\right).$$

Note that $0<\alpha=1-\frac{\hat a}{2\overline{r}(\mathcal{A}^{(0)})}<1$, so $\lim_{k\to\infty}\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right)=0$. From Lemma 3.1,

$$\lim_{k\to\infty}\overline{r}(\mathcal{A}^{(k)})=\lim_{k\to\infty}\underline{r}(\mathcal{A}^{(k)})=\rho(\mathcal{A}).$$

Corollary 3.1. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ and $\varepsilon>0$. If $\mathcal{A}$ is an $s$-index weakly positive tensor, then Algorithm 3 satisfies $\overline{r}(\mathcal{A}^{(K)})-\underline{r}(\mathcal{A}^{(K)})<\varepsilon$ after at most

$$K=\left[\frac{\log\left(\frac{\varepsilon}{\overline{r}(\mathcal{A}^{(0)})-\underline{r}(\mathcal{A}^{(0)})}\right)}{\log(\alpha)}\right]+1$$

iterations, where $\alpha$ is defined in Theorem 3.1.
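For concreteness, the bound of Corollary 3.1 is easy to evaluate; the sketch below (ours, reading $[\cdot]$ as the integer part and using made-up input values) computes $K$ from $\varepsilon$, $\alpha$ and the initial gap $\overline{r}(\mathcal{A}^{(0)})-\underline{r}(\mathcal{A}^{(0)})$.

```python
# Sketch: number of iterations guaranteed by Corollary 3.1.
import math

def iteration_bound(eps, alpha, gap0):
    """K = [ log(eps / gap0) / log(alpha) ] + 1, with [.] read as the integer part."""
    return math.floor(math.log(eps / gap0) / math.log(alpha)) + 1

print(iteration_bound(eps=1e-8, alpha=0.9, gap0=5.0))   # 191 iterations suffice for these (assumed) inputs
```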

    From Definitions 2.3–2.5, we can see that the essentially positive tensors, the weakly positive tensors and the generalized weakly positive tensors are all s-index weakly positive tensors, so we have the following corollary.

Corollary 3.2. If $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ is an essentially positive tensor or a weakly positive tensor, then

$$\overline{r}(\mathcal{A}^{(k+1)})-\underline{r}(\mathcal{A}^{(k+1)})\le\alpha\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right),$$

where $\alpha=1-\frac{\hat a}{2\overline{r}(\mathcal{A}^{(0)})}$, $\overline{r}(\mathcal{A}^{(0)})=\max_{i\in\langle n\rangle}r_i(\mathcal{A})$, $\hat a=\min\left\{\frac{\tilde a^2}{\overline{r}(\mathcal{A}^{(0)})},\varepsilon\right\}$ and $\tilde a=\min_{j\ne i}\{a_{ijj\cdots j}\}$; thus,

$$\lim_{k\to\infty}\overline{r}(\mathcal{A}^{(k)})=\lim_{k\to\infty}\underline{r}(\mathcal{A}^{(k)})=\rho(\mathcal{A}).$$

Corollary 3.2 further confirms the linear convergence of the LZI algorithm for weakly positive tensors established in Theorem 4.1 of [9].

The inclusion relationships among the sets of irreducible tensors, primitive tensors, $s$-index weakly positive tensors, generalized weakly positive tensors, weakly positive tensors and essentially positive tensors discussed above are shown in Figure 1.

    Figure 1.  Relations among six classes of nonnegative tensors.

Theorem 3.2. Let $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in\mathbb{R}^{[m,n]}_+$ be an $s$-index weakly positive tensor. For the positive diagonal matrices $D_i$ ($i=0,1,2,\dots$) constructed in Algorithm 3, define $D^{(t)}=\prod_{i=0}^tD_i$; then $\lim_{t\to\infty}D^{(t)}$ exists and is denoted by $\hat D$. Then $x=\hat De\in\mathbb{R}^n_{++}$ satisfies $\mathcal{A}x^{m-1}=\rho(\mathcal{A})x^{[m-1]}$, where $e=(1,1,\dots,1)^T\in\mathbb{R}^n_{++}$.

Proof. From the construction of Algorithm 3, the sequences of diagonal elements $(D^{(t)})_{jj}$ ($j=1,2,\dots,n$) of the positive diagonal matrices $D^{(t)}$ are monotonically decreasing and bounded below, so $\lim_{t\to\infty}D^{(t)}$ exists. According to Lemma 3.1, every entry of the tensor sequence $\{\mathcal{A}^{(k)}\}_{k=0}^{\infty}$ is nonnegative and bounded above, so there is a convergent subsequence of $\{\mathcal{A}^{(k)}\}_{k=0}^{\infty}$, denoted $\{\mathcal{A}^{(k_l)}\}_{l=0}^{\infty}$, with $\lim_{l\to\infty}\mathcal{A}^{(k_l)}=\hat{\mathcal{A}}$. Taking the limit on both sides of $\mathcal{A}^{(k_l+1)}=\mathcal{A}\times_1\left(D^{(k_l)}\right)^{-(m-1)}\times_2D^{(k_l)}\times_3\cdots\times_mD^{(k_l)}$ gives

$$\lim_{l\to\infty}\mathcal{A}^{(k_l+1)}=\lim_{l\to\infty}\left(\mathcal{A}\times_1\left(D^{(k_l)}\right)^{-(m-1)}\times_2D^{(k_l)}\times_3\cdots\times_mD^{(k_l)}\right)=\mathcal{A}\times_1\hat D^{-(m-1)}\times_2\hat D\times_3\cdots\times_m\hat D;$$

    that is,

$$\hat{\mathcal{A}}=\mathcal{A}\times_1\hat D^{-(m-1)}\times_2\hat D\times_3\cdots\times_m\hat D.$$

It can be seen from Theorem 2.4 and Theorem 3.1 that $r_i(\hat{\mathcal{A}})=\rho(\mathcal{A})$, $i\in\langle n\rangle$; therefore

$$\rho(\mathcal{A})\left(\hat De\right)^{[m-1]}=\mathcal{A}\left(\hat De\right)^{m-1},$$

where $e=(1,1,\dots,1)^T\in\mathbb{R}^n_{++}$. Denote $x=\hat De$; then $x=\hat De\in\mathbb{R}^n_{++}$ and $\mathcal{A}x^{m-1}=\rho(\mathcal{A})x^{[m-1]}$.
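Theorem 3.2 also suggests how to recover an H-eigenvector in practice: the accumulated diagonal returned by the algorithm3 sketch plays the role of $\hat De$. The continuation below (ours, reusing the two-index weakly positive example tensor and the helpers from the earlier sketches) checks the residual of the eigenvalue equation.

```python
# Sketch: recover x = \hat{D} e from the accumulated diagonal and check A x^{m-1} ≈ rho * x^{[m-1]}.
rho, d_acc = algorithm3(A, eps=1e-10)            # A is the two-index weakly positive example above
x = d_acc                                        # x = \hat{D} e, with e = (1, ..., 1)^T
residual = tensor_apply(A, x) - rho * x ** (A.ndim - 1)
print(rho)                                       # for this small example, close to sqrt(22) ≈ 4.6904
print(abs(residual).max())                       # close to zero once the iteration has converged
```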

    In this section, to show the effectiveness of Algorithm 3, we compare it with the LZI algorithm. For the parameter αk in the algorithm, we selected different values and compared the corresponding results.

Example 4.1. Let $\mathcal{A}=(a_{i_1i_2i_3i_4i_5})\in\mathbb{R}^{[m,n]}_+$ ($m=5$), where $a_{111jj}=1$ and $a_{jjj11}=1$ for all $j\in\langle n\rangle\setminus\{1\}$, and zero elsewhere.
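For reference, the test tensor of Example 4.1 can be assembled as a dense array as follows (a sketch of ours; with $n^5$ entries the dense storage grows quickly, which is consistent with the CPU times reported in Table 1).

```python
# Sketch: build the order-5 test tensor of Example 4.1 (0-based index 0 plays the role of index 1).
import numpy as np

def example_4_1(n):
    A = np.zeros((n,) * 5)
    for j in range(1, n):          # j runs over <n> \ {1}
        A[0, 0, 0, j, j] = 1.0     # a_{111jj} = 1
        A[j, j, j, 0, 0] = 1.0     # a_{jjj11} = 1
    return A
```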

In the experiment, we took $\alpha_k=\left(\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})\right)/(mn)$ in Algorithm 3, and $x^{(0)}=e=(1,1,\dots,1)^T\in\mathbb{R}^n_{++}$, $\rho=1$ in the LZI algorithm. We terminated the iteration when one of the conditions below was met:

(1) $\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})<10^{-8}$.

(2) The number of iterations exceeds $10^4$.

Some numerical results are given in Table 1, where $\rho(\mathcal{A})$ denotes the H-spectral radius of $\mathcal{A}$, Iter denotes the number of iterations of the algorithms and Time(s) denotes the CPU time (in seconds) used when condition (1) is met.

Table 1.  Comparison of Algorithm 3 and the LZI algorithm.
    n Algorithm Iter Time(s) ρ(A)
    5 Algorithm 3 3 0.0996 2
    5 LZI 18 0.2412 2
    10 Algorithm 3 3 2.3991 3
    10 LZI 15 5.6626 3
    15 Algorithm 3 3 17.5646 3.7414
    15 LZI 14 40.6961 3.7414
    20 Algorithm 3 3 73.7770 4.3589
    20 LZI 13 157.9810 4.3589
    25 Algorithm 3 3 224.9131 4.8990
    25 LZI 13 489.2777 4.8990
    30 Algorithm 3 3 598.0174 5.3852
    30 LZI 12 1148.8 5.3852


Table 1 compares Algorithm 3 with the LZI algorithm given in [8]: for the same error tolerance, the number of iterations and the computation time are significantly reduced, which further verifies that our proposed algorithm is more efficient.

Example 4.2. Consider a random tensor $\mathcal{A}\in\mathbb{R}^{[m,n]}_+$ ($m=3$), whose entries are random values drawn from the standard uniform distribution on $(0,1)$.

Obviously, this is an $s$-index weakly positive tensor ($s=1$ or $2$). Choose $\varepsilon=10^{-8}$; the termination conditions are the same as in Example 4.1. We take different values of $\alpha_k$ in Algorithm 3, and the corresponding results are shown in Table 2, where $\overline{r}-\underline{r}=\overline{r}(\mathcal{A}^{(k)})-\underline{r}(\mathcal{A}^{(k)})$.

Table 2.  Comparison of different values of $\alpha_k$ in Algorithm 3.
    $\alpha_k=(\overline{r}-\underline{r})/(mn)$   $\alpha_k=\overline{r}-\underline{r}$   $\alpha_k=n(\overline{r}-\underline{r})$   $\alpha_k=mn(\overline{r}-\underline{r})$
    n   Iter   Time(s)   Iter   Time(s)   Iter   Time(s)   Iter   Time(s)
    5 7 0.0142 7 0.0006 9 0.0255 15 0.0333
    10 6 0.0010 6 0.0009 10 0.0014 14 0.0056
    20 6 0.0044 6 0.0124 9 0.0091 13 0.0179
    40 5 0.0225 6 0.0516 10 0.0513 14 0.0725
    60 5 0.0670 5 0.0621 10 0.1305 15 0.1917
    80 5 0.1702 5 0.1622 10 0.3074 16 0.4595
    100 5 0.3940 5 0.3183 10 0.5547 15 0.8285


The data in Table 2 show that different values of $\alpha_k$ lead to different numbers of iteration steps and running times. While the running times are of the same order, the differences in the number of iteration steps are quite significant, so selecting an appropriate $\alpha_k$ improves the efficiency of the algorithm.

In this paper, a class of $s$-index weakly positive tensors was defined and a diagonal similarity iterative algorithm for the maximum H-eigenvalue of such tensors was given. In the algorithm, a variable parameter is introduced in each iteration, which is equivalent to a translation transformation of the tensor at each iteration. Compared with the LZI algorithm, the proposed algorithm has clear advantages in the number of iterations and the computation time. It was also proved that the algorithm is linearly convergent for $s$-index weakly positive tensors.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the Natural Sciences Program of Science and Technology of Jilin Province of China (20190201139JC).

The authors declare that there is no conflict of interest.



[1] C. Van Loan, Future directions in tensor-based computation and modeling, Workshop Report, Arlington, 2009.
    [2] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symb. Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007 doi: 10.1016/j.jsc.2005.05.007
[3] L. H. Lim, Singular values and eigenvalues of tensors: a variational approach, In: IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2005, 129–132. https://doi.org/10.1109/CAMAP.2005.1574201
    [4] S. Friedland, S. Gaubert, L. Han, Perron-Frobenius theorems for nonnegative multilinear forms and extensions, Linear Algebra Appl., 438 (2013), 738–749. https://doi.org/10.1016/j.laa.2011.02.042 doi: 10.1016/j.laa.2011.02.042
    [5] M. Ng, L. Qi, G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. A., 31 (2010), 1090–1099. https://doi.org/10.1137/09074838X doi: 10.1137/09074838X
    [6] K. C. Chang, K. J. Pearson, T. Zhang, Primitivity, the convergence of the NQZ method, and the largest eigenvalue for nonnegative tensors, SIAM J. Matrix Anal. A., 32 (2011), 806–819. https://doi.org/10.1137/100807120 doi: 10.1137/100807120
[7] L. Zhang, L. Qi, Linear convergence of an algorithm for computing the largest eigenvalue of a nonnegative tensor, Numer. Linear Algebra, 19 (2012), 830–841. https://doi.org/10.1002/nla.822 doi: 10.1002/nla.822
    [8] Y. Liu, G. Zhou, N. F. Ibrahim, An always convergent algorithm for the largest eigenvalue of an irreducible nonnegative tensor, J. Comput. Appl. Math., 235 (2010), 286–292. https://doi.org/10.1016/j.cam.2010.06.002 doi: 10.1016/j.cam.2010.06.002
    [9] L. Zhang, L. Qi, Y. Xu, Linear convergence of the LZI algorithm for weakly positive tensors, J. Comput. Math., 30 (2012), 24–33. https://doi.org/10.4208/jcm.1110-m11si09 doi: 10.4208/jcm.1110-m11si09
    [10] W. Yang, Q. Ni, A cubically convergent method for solving the largest eigenvalue of a nonnegative irreducible tensor, Numer. Algor., 77 (2018), 1183–1197. https://doi.org/10.1007/s11075-017-0358-1 doi: 10.1007/s11075-017-0358-1
    [11] J. Zhang, C. Bu, An iterative method for finding the spectral radius of an irreducible nonnegative tensor, Comput. Appl. Math., 40 (2021), 8. https://doi.org/10.1007/s40314-020-01375-5 doi: 10.1007/s40314-020-01375-5
    [12] K. C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci., 6 (2008), 507–520. https://doi.org/10.4310/CMS.2008.v6.n2.a12 doi: 10.4310/CMS.2008.v6.n2.a12
    [13] K. J. Pearson, Essentially positive tensors, Int. J. Algebra, 4 (2010), 421–427.
    [14] Y. Li, Q. Yang, X. He, A method with parameter for solving the spectral radius of nonnegative tensor, J. Oper. Res. Soc. China, 5 (2017), 3–25. https://doi.org/10.1007/s40305-016-0132-4 doi: 10.1007/s40305-016-0132-4
    [15] S. Hu, Z. Huang, L. Qi, Strictly nonnegative tensor and nonnegative tensor partition, Sci. China Math., 57 (2014), 181–195. https://doi.org/10.1007/s11425-013-4752-4 doi: 10.1007/s11425-013-4752-4
    [16] Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. A., 31 (2010), 2517–2530. https://doi.org/10.1137/090778766 doi: 10.1137/090778766
    [17] J. Shao, A general product of tensors with applications, Linear Algebra Appl., 439 (2013), 2350–2366. https://doi.org/10.1016/j.laa.2013.07.010 doi: 10.1016/j.laa.2013.07.010
    [18] W. Bunse, A class of diagonal transformation methods for the computation of the spectral radius of a nonnegative irreducible matrix, SIAM J. Numer. Anal., 18 (1981), 693–704. https://doi.org/10.1137/0718046 doi: 10.1137/0718046
    [19] H. Lv, Numerical algorithm for spectral radius of irreducibly nonnegative matrix, J. Jilin Univ. Sci. Edit., 46 (2008), 6–12. https://doi.org/10.3321/j.issn:1671-5489.2008.01.002 doi: 10.3321/j.issn:1671-5489.2008.01.002
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)