1. Introduction
Consider an m-order n-dimensional square tensor A consisting of n^m entries in the real field R:
$$\mathcal{A}=(a_{i_1 i_2\cdots i_m}),\qquad a_{i_1 i_2\cdots i_m}\in\mathbb{R},\quad 1\le i_1,i_2,\cdots,i_m\le n.$$
If ai1i2⋯im≥0 for all 1≤i1,i2,⋯,im≤n, then A is called an m-order n-dimensional nonnegative tensor. Denote the set of all m-order n-dimensional nonnegative tensors by R[m,n]+. Let Rn, Rn+ and Rn++ denote the set of all n-dimensional real vectors, the set of all n-dimensional nonnegative vectors and the set of all n-dimensional positive vectors, respectively. Tensors play an important role in physics, engineering and mathematics, and they have many application domains, such as data analysis and mining, information science, image processing and computational biology [1,2,3,4].
In 2005, Qi [2] introduced the notion of eigenvalues of higher-order tensors and studied the existence of both complex and real eigenvalues and eigenvectors. Independently, in the same year, Lim [3] also defined eigenvalues and eigenvectors but restricted them to be real.
Qi [2] proposed the definition of H-eigenvalue. If a real number λ and a nonzero real vector x∈Rn satisfy the following homogeneous polynomial equation:
$$\mathcal{A}x^{m-1}=\lambda x^{[m-1]},$$
where Axm−1 is an n-dimensional vector whose i-th component is
$$(\mathcal{A}x^{m-1})_i=\sum_{i_2,\cdots,i_m=1}^{n}a_{i i_2\cdots i_m}x_{i_2}\cdots x_{i_m},\qquad i=1,2,\cdots,n,$$
and
$$x^{[m-1]}=(x_1^{m-1},x_2^{m-1},\cdots,x_n^{m-1})^T,$$
then λ is an H-eigenvalue of A and x is an H-eigenvector of A associated with λ. Define the set of H-eigenvalues of A∈R[m,n]+ as σH(A) and ρ(A)=maxλ∈σH(A)|λ| as the H-spectral radius of tensor A. Let σ(A) be the set of eigenvalues of a tensor A.
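For readers who wish to experiment numerically with this definition, the following Python sketch evaluates Axm−1 and x[m−1] for a small tensor and measures how closely a candidate pair (λ, x) satisfies the H-eigenvalue equation. The tensor, vector and candidate value used here are illustrative choices only, not data from this paper.

```python
import numpy as np

def tensor_apply(A, x):
    """Compute the vector A x^{m-1} by contracting A with x along modes 2,...,m."""
    T = A
    for _ in range(A.ndim - 1):
        T = T @ x          # each matmul contracts the last mode with x
    return T               # shape (n,)

def h_eig_residual(A, lam, x):
    """Residual norm of the H-eigenvalue equation A x^{m-1} = lam * x^{[m-1]}."""
    m = A.ndim
    return np.linalg.norm(tensor_apply(A, x) - lam * x ** (m - 1))

# Illustrative 3rd-order, 2-dimensional tensor with all entries equal to 1.
# Then (A x^2)_i = (x_1 + x_2)^2, so x = (1,1)^T with lambda = 4 is an H-eigenpair.
n, m = 2, 3
A = np.ones((n,) * m)
x = np.ones(n)
print(h_eig_residual(A, 4.0, x))   # 0.0 up to rounding
```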
Let A=(ai1i2⋯im)∈R[m,n]+. Ng et al. [5] proposed the NQZ algorithm for computing the largest H-eigenvalue of a nonnegative irreducible tensor.
Subsequently, the NQZ algorithm was proved to be convergent for primitive tensors in [6] and for weakly primitive tensors in [4], and it was shown to have an explicit linear convergence rate for essentially positive tensors in [7]. However, examples in [5] showed that it may fail to converge for some irreducible nonnegative tensors.
In 2010, Liu et al. [8] modified the NQZ algorithm and proposed the LZI algorithm. Let E=(δi1i2⋯im) be the m-order n-dimensional unit tensor whose entries are
$$\delta_{i_1 i_2\cdots i_m}=\begin{cases}1, & i_1=i_2=\cdots=i_m,\\ 0, & \text{otherwise}.\end{cases}$$
Liu et al. [8] proved that the LZI algorithm is convergent for irreducible nonnegative tensors. In 2012, Zhang et al. [9] proved the linear convergence of the LZI algorithm for weakly positive tensors. Since then, there have been many studies on computing the maximum eigenvalue of nonnegative tensors. For example, Yang and Ni [10] gave a nonlinear algorithm for calculating the maximum eigenvalue of symmetric tensors, and Zhang and Bu [11] gave a diagonal similarity iterative algorithm for calculating the maximum H-eigenvalue of a class of generalized weakly positive tensors.
2. Preliminaries
In this section, we mainly introduce some related concepts and important properties of tensors and matrices. For a positive integer n, let ⟨n⟩={1,2,…,n}.
Definition 2.1. [12] An m-order n-dimensional tensor A is called reducible if there exists a nonempty proper index subset I⊂⟨n⟩ such that
$$a_{i_1 i_2\cdots i_m}=0,\qquad \forall\, i_1\in I,\ \forall\, i_2,\cdots,i_m\notin I.$$
If A is not reducible, then A is irreducible.
Definition 2.2. [13] A nonnegative matrix ˚A is called the majorization associated to nonnegative tensor A if the (i,j)-th element of ˚A is defined to be aij⋯j for any i,j=1,⋯,n.
Definition 2.3. [13] A nonnegative m-order n-dimensional tensor A is essentially positive if Axm−1∈Rn++ for any nonzero x∈Rn+.
Definition 2.4. [9] Let A be a nonnegative tensor of order m and dimension n. A is weakly positive if aij⋯j>0 for all i,j∈⟨n⟩ with i≠j.
Definition 2.5. [11] Let A=(ai1i2⋯im)∈R[m,n]+, then A is generalized weakly positive if there exists i0∈⟨n⟩, such that ai0j⋯j>0,aji0⋯i0>0 for all j∈⟨n⟩∖{i0}.
Definition 2.6. [14] Let A=(ai1i2⋯im)∈R[m,n]+.
(1) We call a nonnegative matrix G(A) the representation associated to the nonnegative tensor A if the (i,j)-th element of G(A) is defined to be the summation of aii2⋯im over all indices (i2,⋯,im) with j∈{i2,⋯,im}.
(2) We call A weakly reducible if its representation G(A) is a reducible matrix and weakly primitive if G(A) is a primitive matrix. If A is not weakly reducible, then it is called weakly irreducible.
Definition 2.7. Let A=(ai1i2⋯im)∈R[m,n]+ and let πs−1(i,j) denote an arrangement of s−1 letters i and m−s letters j. If there exist s∈⟨m−1⟩ and i0∈⟨n⟩ such that, for any j∈⟨n⟩ with j≠i0, ai0πs−1(i0,j)≠0 and ajπs−1(j,i0)≠0 hold, then A is called an s-index weakly positive tensor.
For example, let A=(aijk)∈R[3,3]+, where a113=1, a223=4, a331=2, a332=5 and the remaining entries satisfy ai1i2i3≥0 (1≤i1,i2,i3≤3); then A is a two-index weakly positive tensor (take s=2 and i0=3).
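As a quick sanity check of Definition 2.7, the following sketch enumerates the required index arrangements for the example above (with the unspecified entries set to zero, which is allowed since they only need to be nonnegative) and confirms that the condition holds for s=2 and i0=3. The helper function is illustrative and not part of the paper; indices are 0-based in the code.

```python
import numpy as np
from itertools import permutations

def is_s_index_weakly_positive(A, s, i0):
    """Check Definition 2.7 for a given s and i0 (0-based)."""
    m, n = A.ndim, A.shape[0]
    def has_nonzero(i, j):
        # Is a_{i pi_{s-1}(i,j)} nonzero for some arrangement of
        # (s-1) letters i and (m-s) letters j in modes 2,...,m?
        letters = (i,) * (s - 1) + (j,) * (m - s)
        return any(A[(i,) + p] != 0 for p in set(permutations(letters)))
    return all(has_nonzero(i0, j) and has_nonzero(j, i0)
               for j in range(n) if j != i0)

A = np.zeros((3, 3, 3))
A[0, 0, 2] = 1.0   # a_113
A[1, 1, 2] = 4.0   # a_223
A[2, 2, 0] = 2.0   # a_331
A[2, 2, 1] = 5.0   # a_332
print(is_s_index_weakly_positive(A, s=2, i0=2))   # True (i0 = 3 in 1-based indexing)
```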
Remark 2.1. Essentially positive tensors, weakly positive tensors and generalized weakly positive tensors are all one-index weakly positive tensors, and hence they are special subclasses of the s-index weakly positive tensors.
Theorem 2.1. [15] For any nonnegative tensor A=(ai1i2⋯im)∈R[m,n]+, ρ(A) is an eigenvalue with a nonnegative eigenvector x∈Rn+ corresponding to it.
Theorem 2.2. [16] Let A=(ai1i2⋯im)∈R[m,n]+ and let ρ(A) be the spectral radius of A. Then
$$\min_{i\in\langle n\rangle}\sum_{i_2,\cdots,i_m=1}^{n}a_{i i_2\cdots i_m}\le \rho(\mathcal{A})\le \max_{i\in\langle n\rangle}\sum_{i_2,\cdots,i_m=1}^{n}a_{i i_2\cdots i_m}.$$
Theorem 2.3. [12] If A is an irreducible nonnegative tensor of order m and dimension n, then there exist λ0>0 and x0∈Rn with x0>0 such that $\mathcal{A}x_0^{m-1}=\lambda_0 x_0^{[m-1]}$. Moreover, if λ is an eigenvalue with a nonnegative eigenvector, then λ=λ0. If λ is an eigenvalue of A, then |λ|≤λ0.
Definition 2.8. [17] Let A=(ai1i2⋯im)∈R[m,n]+, B=(bi1i2⋯im)∈R[m,n]+. The tensors A and B are said to be diagonal similar if there exists an invertible diagonal matrix D=diag(d1,d2,⋯,dn) of order n such that $\mathcal{B}=\mathcal{A}\times_1 D^{-(m-1)}\times_2 D\times_3\cdots\times_m D$, where $b_{i_1 i_2\cdots i_m}=a_{i_1 i_2\cdots i_m}\,d_{i_1}^{-(m-1)}d_{i_2}\cdots d_{i_m}$.
Theorem 2.4. [17] If the two m-order n-dimensional tensors A and B are diagonal similar, then σ(A)=σ(B).
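Definition 2.8 can be implemented entrywise, and Theorem 2.4 can be spot-checked numerically on a case where an H-eigenpair is known in closed form. The sketch below is illustrative: it uses the all-ones tensor, for which (λ, x) = (4, e) is an H-eigenpair, and verifies that (4, D−1e) is an H-eigenpair of the transformed tensor; the function names are ours.

```python
import numpy as np

def diag_similar(A, d):
    """B = A x_1 D^{-(m-1)} x_2 D ... x_m D, i.e.
    b_{i1...im} = a_{i1...im} d_{i1}^{-(m-1)} d_{i2}...d_{im}, with D = diag(d) > 0."""
    m, n = A.ndim, A.shape[0]
    B = A * (d ** (-(m - 1))).reshape((n,) + (1,) * (m - 1))  # mode-1 factor
    for k in range(1, m):                                     # modes 2,...,m
        shape = [1] * m
        shape[k] = n
        B = B * d.reshape(shape)
    return B

def tensor_apply(A, x):
    """Compute A x^{m-1}."""
    T = A
    for _ in range(A.ndim - 1):
        T = T @ x
    return T

n, m = 2, 3
A = np.ones((n,) * m)            # (4, e) is an H-eigenpair of this tensor
d = np.array([1.0, 3.0])
B = diag_similar(A, d)
y = 1.0 / d                      # candidate eigenvector D^{-1} e of B
print(np.allclose(tensor_apply(B, y), 4.0 * y ** (m - 1)))   # True
```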
3. An algorithm and its convergence analysis
In 1981, Bunse [18] gave a diagonal similarity iterative algorithm for calculating the maximum eigenvalue of irreducible nonnegative matrices. In 2008, Lv [19] further studied diagonal similarity iterative algorithms for calculating the maximum eigenvalue of irreducible nonnegative matrices. In 2021, Zhang and Bu [11] gave a diagonal similarity iterative algorithm for calculating the maximum H-eigenvalue of nonnegative tensors. In this paper, following the construction idea of the algorithm in [19], we give a numerical algorithm for calculating the maximum H-eigenvalue and a corresponding eigenvector of s-index weakly positive tensors.
Let $\mathcal{A}=\mathcal{A}^{(0)}=(a^{(0)}_{i_1 i_2\cdots i_m})\in R^{[m,n]}_+$, and for $i\in\langle n\rangle$ let $r_i(\mathcal{A}^{(0)})=\sum_{i_2,\cdots,i_m=1}^{n}a^{(0)}_{i i_2\cdots i_m}$, $\bar r(\mathcal{A}^{(0)})=\max_{i\in\langle n\rangle}r_i(\mathcal{A}^{(0)})$ and $\underline r(\mathcal{A}^{(0)})=\min_{i\in\langle n\rangle}r_i(\mathcal{A}^{(0)})$. Let ε be a sufficiently small positive number, and let $\alpha_k\in\mathbb{R}$ $(k=0,1,2,\cdots)$ satisfy $\varepsilon-\min_{i\in\langle n\rangle}a_{ii\cdots i}<\alpha_k\le\bar r(\mathcal{A}^{(0)})$.
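For orientation, the following is a minimal Python sketch of one possible implementation of the diagonal similarity iteration with shift described above. It is a sketch under explicit assumptions rather than a verbatim transcription of Algorithm 3: following the matrix algorithms in [18,19] and the tensor algorithm in [11], it assumes that Dk is the diagonal matrix with entries (ri(A(k)+αkE))^{1/(m−1)}, that the shift αkE is removed again after each diagonal similarity transformation (the unit tensor E is invariant under such transformations), and that the iteration stops once ¯r(A(k))−r_(A(k))<ε; the function names are illustrative.

```python
import numpy as np

def row_sums(A):
    """r_i(A) = sum of a_{i i2...im} over i2,...,im."""
    return A.reshape(A.shape[0], -1).sum(axis=1)

def diag_similar(A, d):
    """Entrywise b_{i1...im} = a_{i1...im} d_{i1}^{-(m-1)} d_{i2}...d_{im}."""
    m, n = A.ndim, A.shape[0]
    B = A * (d ** (-(m - 1))).reshape((n,) + (1,) * (m - 1))
    for k in range(1, m):
        shape = [1] * m
        shape[k] = n
        B = B * d.reshape(shape)
    return B

def diag_similarity_iteration(A, eps=1e-8, max_iter=10**4):
    """Hedged sketch of the shifted diagonal similarity iteration.
    Assumed choice: D_k = diag( r_i(A^(k) + alpha_k E)^(1/(m-1)) )."""
    m, n = A.ndim, A.shape[0]
    Ak = A.astype(float)
    x = np.ones(n)                                    # diagonal of prod_k D_k
    diag_idx = tuple(np.arange(n) for _ in range(m))  # positions of a_{ii...i}
    for _ in range(max_iter):
        r = row_sums(Ak)
        if r.max() - r.min() < eps:
            break
        # one admissible alpha_k; it must also satisfy the constraint
        # eps - min_i a_{ii...i} < alpha_k <= rbar(A^(0)) stated above
        alpha = (r.max() - r.min()) / (m * n)
        Ashift = Ak.copy()
        Ashift[diag_idx] += alpha                     # A^(k) + alpha_k E
        d = row_sums(Ashift) ** (1.0 / (m - 1))       # assumed D_k
        Ak = diag_similar(Ashift, d)
        Ak[diag_idx] -= alpha                         # remove the shift
        x = x * d                                     # accumulate the diagonal scaling
    r = row_sums(Ak)
    # by Theorems 2.2 and 2.4 the row-sum interval brackets rho(A); cf. Theorem 3.2 for x
    return 0.5 * (r.max() + r.min()), x
```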
In the following, we will give the convergence condition of Algorithm 3.
Define $\mathcal{A}^{(k)}+\alpha_k\mathcal{E}=\mathcal{A}^{(k)}(\alpha_k)=:(a^{(k)}_{i_1 i_2\cdots i_m}(\alpha_k))_{i_1,i_2,\cdots,i_m=1}^{n}$.
Lemma 3.1. Let A=(ai1i2⋯im)∈R[m,n]+. For the tensor sequence A(l) (l=0,1,2,⋯) generated by Algorithm 3, the sequence ¯r(A(l)) (l=0,1,2,⋯) is monotonically decreasing and bounded below, and the sequence r_(A(l)) (l=0,1,2,⋯) is monotonically increasing and bounded above.
Proof. Notice that
then ¯r(A(l+1))≤¯r(A(l)),l=0,1,2,⋯. Therefore, ¯r(A(l)) is monotonically decreasing. Similarly, it can be proved that r_(A(l+1))≥r_(A(l)),l=0,1,2,⋯, so r_(A(l)) is monotonically increasing. By r_(A(l))≤¯r(A(l)) we can obtain that ¯r(A(l))(l=0,1,2,⋯) is monotonically decreasing with lower bound r_(A(0)), and r_(A(l))(l=0,1,2,⋯) is monotonically increasing with upper bound ¯r(A(0)).
□
Lemma 3.2. Let A=(ai1i2⋯im)∈R[m,n]+ be an s-index weakly positive tensor. Then A is an irreducible tensor.
Proof. Because A is an s-index weakly positive tensor, by Definition 2.7, there are s∈⟨m−1⟩ and i0∈⟨n⟩ such that ai0πs−1(i0,j)≠0, ajπs−1(j,i0)≠0, j∈⟨n⟩,j≠i0. That is, for any j∈I⊂⟨n⟩, there is i0∈⟨n⟩∖I such that ajπs−1(j,i0)≠0. By Definition 2.1, A is an irreducible tensor. □
Lemma 3.3. Let A=(ai1i2⋯im)∈R[m,n]+ be an s-index weakly positive tensor; that is, there exist s∈⟨m−1⟩ and i0∈⟨n⟩ such that, for any j∈⟨n⟩ with j≠i0, ai0πs−1(i0,j)≠0 and ajπs−1(j,i0)≠0 hold. Then for any k∈N,
Proof. If ajπs−1(j,i0)≠0, then a(k)jπs−1(j,i0)≠0,a(k)i0i0⋯i0=ai0i0⋯i0,k=0,1,2,⋯. In the case of j≠i0, it can be obtained from Lemma 3.1 that
where ˜a=minj∈⟨n⟩∖{i0}{ai0πs−1(i0,j),ajπs−1(j,i0)}, then
Then, by
we have
then a(k)i0πs−1(i0,j)≥˜a2¯r(A(0))(j≠i0) can be obtained by (3.1).
In the case of j=i0, it holds that a(k)i0i0⋯i0(αk)=a(k)i0i0⋯i0+αk≥mini∈⟨n⟩aii⋯i−mini∈⟨n⟩aii⋯i+ε=ε, then a(k)jπs−1(j,i0)(αk)≥min{˜a2¯r(A(0)),ε}. □
Theorem 3.1. Let A=(ai1i2⋯im)∈R[m,n]+ be an s-index weakly positive tensor; that is, there exist s∈⟨m−1⟩ and i0∈⟨n⟩ such that, for any j∈⟨n⟩ with j≠i0, ai0πs−1(i0,j)≠0 and ajπs−1(j,i0)≠0 hold. Then for Algorithm 3,
where α=1−ˆa22¯r(A(0)),¯r(A(0))=maxi∈⟨n⟩ri(A),ˆa=minj∈⟨n⟩∖{i0}{˜a2¯r(A(0)),ε}, then we can get
Proof. Let A[0]=(a[0]ij)n×n with a[0]i0j=ai0πs−1(i0,j), a[0]ji0=ajπs−1(j,i0) for j∈⟨n⟩ and zero elsewhere; then A[0] is an irreducible matrix. Let ¯r(A(k+1))=r(k+1)p and r_(A(k+1))=r(k+1)q; then
Denote I={i2⋯im|i2,⋯,im∈⟨n⟩} and
then
Therefore, either ∑i2,⋯,im∈I∖I(k)a(k)pi2⋯im includes a(k)pπs−1(p,i0) or ∑i2,⋯,im∈I(k)a(k)qi2⋯im includes a(k)qπs−1(q,i0), then
Thus,
where α=1−ˆa2¯r(A(0)). Therefore,
Note that 0<α=1−ˆa2¯r(A(0))<1, and we can obtain limk→∞(¯r(A(k))−r_(A(k)))=0. From Lemma 3.1,
□
Corollary 3.1. Let A=(ai1i2⋯im)∈R[m,n]+,ε>0. If A is an s-index weakly positive tensor, then by Algorithm 3 there must be
that satisfies ¯r(A(K))−r_(A(K))<ε, where α is defined in Theorem 3.1.
From Definitions 2.3–2.5, we can see that the essentially positive tensors, the weakly positive tensors and the generalized weakly positive tensors are all s-index weakly positive tensors, so we have the following corollary.
Corollary 3.2. If A=(ai1i2⋯im)∈R[m,n]+ is an essentially positive tensor or a weakly positive tensor, then
where α=1−ˆa2¯r(A(0)),¯r(A(0))=maxi∈⟨n⟩ri(A),ˆa=min{˜a2¯r(A(0)),ε},˜a=minj≠i{aij⋯j} and, thus,
Corollary 3.2 further confirms the linear convergence of the LZI algorithm for weakly positive tensors established in Theorem 4.1 of [9].
Based on the above, Figure 1 shows the inclusion relationships among the sets of irreducible tensors, primitive tensors, s-index weakly positive tensors, generalized weakly positive tensors, weakly positive tensors and essentially positive tensors.
Theorem 3.2. Let A=(ai1i2⋯im)∈R[m,n]+ be an s-index weakly positive tensor. For the positive diagonal matrices Di (i=0,1,2,⋯) in the construction process of Algorithm 3, define $D^{(t)}=\prod_{i=0}^{t}D_i$. Then $\lim_{t\to\infty}D^{(t)}$ exists; denote it by ˆD. Then x=ˆDe∈Rn++ satisfies Axm−1=ρ(A)x[m−1], where e=(1,1,⋯,1)T∈Rn++.
Proof. From the construction of Algorithm 3, it is known that the sequences of diagonal elements (D(t))jj (j=1,2,⋯,n) of the positive diagonal matrices D(t) are monotonically decreasing and bounded below, so $\lim_{t\to\infty}D^{(t)}$ exists. According to Lemma 3.1, the sequence of each element of the tensor sequence {A(k)}∞k=0 is nonnegative and bounded above, so there is a convergent subsequence of {A(k)}∞k=0, denoted {A(kl)}∞l=0, and write $\lim_{l\to\infty}\mathcal{A}^{(k_l)}=\hat{\mathcal{A}}$. Taking the limit on both sides of $\mathcal{A}^{(k_l+1)}=\mathcal{A}\times_1 (D^{(k_l)})^{-(m-1)}\times_2 D^{(k_l)}\times_3\cdots\times_m D^{(k_l)}$, we get
that is,
It can be seen from Theorem 2.4 that ri(ˆA)=ρ(A),i∈⟨n⟩; therefore
where e=(1,1,⋯,1)T∈Rn++. Denote x=ˆDe, then x=ˆDe∈Rn++, and Axm−1=ρ(A)x[m−1]. □
4. Numerical examples
In this section, to show the effectiveness of Algorithm 3, we compare it with the LZI algorithm. For the parameter αk in the algorithm, we selected different values and compared the corresponding results.
Example 4.1. Let A=(ai1i2i3i4i5)∈R[m,n]+(m=5), where a111jj=1,ajjj11=1 for all j∈⟨n⟩∖{1} and zero elsewhere.
In the experiment, αk=(¯r(A(k))−r_(A(k)))/mn in Algorithm 3, and x(0)=e=(1,1,⋯,1)T ∈Rn++, ρ=1 in the LZI algorithm. We terminated the iteration when one of the conditions below was met:
(1) ¯r(A(k))−r_(A(k))<10^{−8}.
(2) The number of iterations exceeds 10^4.
Some numerical results are given in Table 1, where ρ(A) denotes the H-spectral radius of A, Iter denotes the number of iterations of the algorithms and Time(s) denotes the CPU time (in seconds) used when condition (1) is met.
Table 1 compares Algorithm 3 with the LZI algorithm given in [8]. For the same error tolerance, the number of iterations and the computation time of Algorithm 3 are significantly smaller, which further verifies that the proposed algorithm is more efficient.
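To make Example 4.1 easy to reproduce, the following sketch builds the tensor for a given n and confirms that it is a three-index weakly positive tensor with i0=1 (indices are 0-based in the code); it does not attempt to reproduce the timings in Table 1, and the helper is the same illustrative check used in Section 2.

```python
import numpy as np
from itertools import permutations

def example_tensor(n):
    """a_{111jj} = 1 and a_{jjj11} = 1 for j = 2,...,n; zero elsewhere (m = 5)."""
    A = np.zeros((n,) * 5)
    for j in range(1, n):          # 0-based j corresponds to j+1 in the text
        A[0, 0, 0, j, j] = 1.0
        A[j, j, j, 0, 0] = 1.0
    return A

def is_s_index_weakly_positive(A, s, i0):
    m, n = A.ndim, A.shape[0]
    def has_nonzero(i, j):
        letters = (i,) * (s - 1) + (j,) * (m - s)
        return any(A[(i,) + p] != 0 for p in set(permutations(letters)))
    return all(has_nonzero(i0, j) and has_nonzero(j, i0)
               for j in range(n) if j != i0)

A = example_tensor(4)
print(is_s_index_weakly_positive(A, s=3, i0=0))   # True (i0 = 1 in 1-based indexing)
```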
Example 4.2. Consider a random tensor A∈R[m,n]+ (m=3), whose entries are random values drawn from the standard uniform distribution on (0,1).
Obviously, this is an s-index weakly positive tensor (s=1 or 2). Choose ε=10^{−8}; the termination conditions are the same as in Example 4.1. Take different values of αk in Algorithm 3; the corresponding results are shown in Table 2, where ¯r−r_=¯r(A(k))−r_(A(k)).
From the data in Table 2, it can be seen that different choices of αk lead to different numbers of iteration steps and running times. The computation times are almost the same, but the differences in the numbers of iterations are significant, so selecting an appropriate αk can improve the efficiency of the algorithm.
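For Example 4.2, a tensor of this kind can be generated and sanity-checked against the row-sum bounds of Theorem 2.2, which bracket ρ(A) and therefore also the values produced by the algorithms; the dimension and the seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)              # arbitrary seed, for reproducibility
n, m = 5, 3
A = rng.uniform(0.0, 1.0, size=(n,) * m)    # entries drawn uniformly from (0, 1)

r = A.reshape(n, -1).sum(axis=1)            # row sums r_i(A)
print("Theorem 2.2 bounds:", r.min(), "<= rho(A) <=", r.max())
```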
5. Conclusions
In this paper, a class of s-index weakly positive tensors was defined and a diagonal similarity iterative algorithm for the maximum H-eigenvalue of such tensors was given. In the algorithm, a variable parameter is introduced in each iteration, which amounts to a translation (shift) transformation of the tensor at each iteration. Compared with the LZI algorithm, the proposed algorithm has clear advantages in both the number of iterations and the computation time. It was also proved that the algorithm converges linearly for s-index weakly positive tensors.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work is supported by the Natural Sciences Program of Science and Technology of Jilin Province of China (20190201139JC).
Conflict of interest
The authors declare that there is no conflict of interest.