In this paper, based on the theory of two classes of α-diagonally dominant matrices, the row index set of a matrix is partitioned appropriately and some positive diagonal matrices are constructed. On this basis, several new criteria for nonsingular H-matrices are obtained. Finally, numerical examples are given to illustrate the effectiveness of the proposed criteria.
Citation: Panpan Liu, Haifeng Sang, Min Li, Guorui Huang, He Niu. New criteria for nonsingular H-matrices[J]. AIMS Mathematics, 2023, 8(8): 17484-17502. doi: 10.3934/math.2023893
Let $C^{n\times n}$ be the set of $n\times n$ complex matrices and $A=(a_{ij})\in C^{n\times n}$. For any $i,j\in N=\{1,2,\cdots,n\}$, denote
R_i(A)=\sum_{j\in N,j\neq i}|a_{ij}|,\qquad C_i(A)=\sum_{j\in N,j\neq i}|a_{ji}|.
Let $A=(a_{ij})\in C^{n\times n}$. If $|a_{ii}|\geq R_i(A)\;(i\in N)$, then $A$ is called a diagonally dominant matrix, denoted by $A\in D_0$. If $|a_{ii}|>R_i(A)\;(i\in N)$, then $A$ is called a strictly diagonally dominant matrix, denoted by $A\in D$.
If there is a positive diagonal matrix $X$ such that $AX\in D$, then $A$ is called a generalized strictly diagonally dominant matrix, denoted by $A\in D^{*}$; such a matrix is also called a nonsingular $H$-matrix.
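These defining inequalities are easy to test numerically. A minimal sketch (Python with NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def is_strictly_diag_dominant(A):
    """Check |a_ii| > R_i(A) for every i, i.e., A in D."""
    A = np.asarray(A, dtype=float)
    R = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # deleted row sums R_i(A)
    return bool(np.all(np.abs(np.diag(A)) > R))

def is_generalized_sdd(A, x):
    """Check AX in D for X = diag(x) with x > 0, i.e., A in D*."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(x, dtype=float)
    return is_strictly_diag_dominant(A * x)  # AX multiplies column j by x_j
```

For instance, `[[1, 2], [0, 1]]` is not strictly diagonally dominant, but scaling its second column by `0.4` makes it so, exhibiting a nonsingular $H$-matrix via a positive diagonal $X$.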
A matrix $A$ is said to be an $H$-matrix if its comparison matrix is an $M$-matrix. Throughout this paper, we work with $H$-matrices whose comparison matrices are nonsingular; these matrices are called the invertible class of $H$-matrices in [].
Since a nonsingular $H$-matrix has nonzero diagonal entries, we always assume that $a_{ii}\neq 0\;(i\in N)$.
Nonsingular $H$-matrices form a special class of matrices that is widely used in matrix theory. Many practical problems reduce to solving one or several systems of linear algebraic equations with large sparse coefficient matrices, and in solving such systems it is often necessary to assume that the coefficient matrix is a nonsingular $H$-matrix. Nonsingular $H$-matrices also have important practical value in many fields, such as economic mathematics, electric system theory, control theory and computational mathematics [2,3]. However, it is usually difficult to determine in practice whether a given matrix is a nonsingular $H$-matrix, so criteria for nonsingular $H$-matrices are a meaningful topic in the study of matrix theory. Many scholars have studied sufficient conditions in depth and have given many simple and practical results [4,5,6,7,8,9,10,11,12,13,14,15,16].
In this paper, we use two different classes of α-diagonally dominant matrices defined in [6,7]. To avoid confusion, they are called $\alpha_1$-diagonally dominant matrices and $\alpha_2$-diagonally dominant matrices, respectively.
Definition 1. [6] Let $A=(a_{ij})\in C^{n\times n}$. If there exists $\alpha\in[0,1]$ such that
|a_{ii}|\geq\alpha R_i(A)+(1-\alpha)C_i(A),\quad i\in N,
then $A$ is called an $\alpha_1$-diagonally dominant matrix, denoted by $A\in D_0^{\alpha_1}$. If there exists $\alpha\in[0,1]$ such that
|a_{ii}|>\alpha R_i(A)+(1-\alpha)C_i(A),\quad i\in N, \qquad (1.1)
then $A$ is called a strictly $\alpha_1$-diagonally dominant matrix, denoted by $A\in D^{\alpha_1}$.
Definition 2. [7] Let $A=(a_{ij})\in C^{n\times n}$. If there exists $\alpha\in[0,1]$ such that
|a_{ii}|\geq(R_i(A))^{\alpha}(C_i(A))^{1-\alpha},\quad i\in N,
then $A$ is called an $\alpha_2$-diagonally dominant matrix, denoted by $A\in D_0^{\alpha_2}$. If there exists $\alpha\in[0,1]$ such that
|a_{ii}|>(R_i(A))^{\alpha}(C_i(A))^{1-\alpha},\quad i\in N, \qquad (1.2)
then $A$ is called a strictly $\alpha_2$-diagonally dominant matrix, denoted by $A\in D^{\alpha_2}$.
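Both definitions are straightforward to check for a given α. A small sketch (Python/NumPy; function names are ours):

```python
import numpy as np

def _row_col_sums(A):
    """Return |a_ii|, R_i(A) and C_i(A) for all i."""
    absA = np.abs(np.asarray(A, dtype=float))
    d = np.diag(absA)
    return d, absA.sum(axis=1) - d, absA.sum(axis=0) - d

def in_D_alpha1(A, alpha):
    """Strict alpha_1-dominance (1.1): |a_ii| > alpha*R_i + (1-alpha)*C_i for all i."""
    d, R, C = _row_col_sums(A)
    return bool(np.all(d > alpha * R + (1 - alpha) * C))

def in_D_alpha2(A, alpha):
    """Strict alpha_2-dominance (1.2): |a_ii| > R_i^alpha * C_i^(1-alpha) for all i."""
    d, R, C = _row_col_sums(A)
    return bool(np.all(d > R**alpha * C**(1 - alpha)))
```

Since the weighted arithmetic mean dominates the geometric mean, (1.1) implies (1.2); for example, at $\alpha=1/2$ the matrix `[[3, 2], [0.5, 1.2]]` is strictly $\alpha_2$- but not strictly $\alpha_1$-diagonally dominant.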
Many scholars have studied the properties and determination of $\alpha_1$- (and $\alpha_2$-) diagonally dominant matrices; see [5,6,7,8,9,10,11,17]. The $\alpha_2$-diagonally dominant matrix is called a geometrically α-diagonally dominant matrix in [8], an α-chain diagonally dominant matrix in [9], and a product α-diagonally dominant matrix in [17].
In Definitions 1 and 2, if $\alpha=1$, then (1.1) and (1.2) give $|a_{ii}|>R_i(A)$ for all $i\in N$, that is, $A\in D$. If $\alpha=0$, they give $|a_{ii}|>C_i(A)$ for all $i\in N$, that is, $A^{T}\in D$. Therefore, if $\alpha=0$ or $1$, $A$ is a nonsingular $H$-matrix, so only the case $\alpha\in(0,1)$ is considered in this paper.
If $A$ is an $\alpha_1$- (or $\alpha_2$-) diagonally dominant matrix, then $A\in D^{*}$ [6,7]. Hence $\alpha_1$- (and $\alpha_2$-) diagonally dominant matrices are also subclasses of nonsingular $H$-matrices, and they have equivalent theorems in the field of eigenvalue localization. It is easy to see that the class of $\alpha_1$-diagonally dominant matrices is contained in that of $\alpha_2$-diagonally dominant matrices [18].
In this paper, by using the properties of α1-(or α2-) diagonally dominant matrix, we give some criteria for determining nonsingular H-matrix. Finally, numerical examples are used to compare the criteria obtained in this paper with the existing results.
Some relevant concepts and important conclusions are given in this section.
Definition 3. [9] Let $A=(a_{ij})\in C^{n\times n}$. If there is a positive diagonal matrix $X$ such that $AX\in D^{\alpha_1}$, then $A$ is called a generalized $\alpha_1$-diagonally dominant matrix, denoted by $A\in D^{*}_{\alpha_1}$.
Definition 4. [7] Let $A=(a_{ij})\in C^{n\times n}$. If there is a positive diagonal matrix $X$ such that $AX\in D^{\alpha_2}$, then $A$ is called a generalized $\alpha_2$-diagonally dominant matrix, denoted by $A\in D^{*}_{\alpha_2}$.
Definition 5. [10] Let $A=(a_{ij})\in C^{n\times n}$ be an irreducible matrix. If there exists $\alpha\in[0,1]$ such that $|a_{ii}|\geq\alpha R_i(A)+(1-\alpha)C_i(A)$ for all $i\in N$, and at least one of these inequalities is strict, then $A$ is said to be an irreducible $\alpha_1$-diagonally dominant matrix.
Here, similar to irreducible α1-diagonally dominant matrix, we give the definition of irreducible α2-diagonally dominant matrix.
Definition 6. Let $A=(a_{ij})\in C^{n\times n}$ be an irreducible matrix. If there exists $\alpha\in[0,1]$ such that $|a_{ii}|\geq(R_i(A))^{\alpha}(C_i(A))^{1-\alpha}$ for all $i\in N$, and at least one of these inequalities is strict, then $A$ is said to be an irreducible $\alpha_2$-diagonally dominant matrix.
Lemma 1. [9] Let $A=(a_{ij})\in C^{n\times n}$. If $A$ is a generalized $\alpha_1$-diagonally dominant matrix, then $A$ is a nonsingular $H$-matrix.
Lemma 2. [7] Let $A=(a_{ij})\in C^{n\times n}$. Then $A$ is a generalized strictly diagonally dominant matrix if and only if $A$ is a generalized $\alpha_2$-diagonally dominant matrix.
Lemma 3. [10] Let $A\in D_0^{\alpha_1}$ be an irreducible matrix such that $|a_{ii}|>\alpha R_i(A)+(1-\alpha)C_i(A)$ holds for at least one $i\in N$. Then $A\in D^{*}$.
Lemma 4. [11] Let $A\in D_0^{\alpha_2}$ be an irreducible matrix such that $|a_{ii}|>(R_i(A))^{\alpha}(C_i(A))^{1-\alpha}$ holds for at least one $i\in N$. Then $A\in D^{*}$.
Lemma 5. [3] Let $A=(a_{ij})\in C^{n\times n}$ and let $X=\operatorname{diag}(x_1,x_2,\ldots,x_n)$ with $x_i>0\;(i=1,2,\ldots,n)$. If $AX$ is a nonsingular $H$-matrix, then $A$ is a nonsingular $H$-matrix.
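Lemmas 1–5 reduce everything to recognizing nonsingular $H$-matrices. For small dense matrices there is a standard spectral characterization (not from this paper): a matrix with nonzero diagonal is a nonsingular $H$-matrix iff the Jacobi iteration matrix of its comparison matrix has spectral radius less than 1. A sketch:

```python
import numpy as np

def is_nonsingular_H(A):
    """Spectral test: spectral radius of the matrix with entries
    |a_ij| / |a_ii| (i != j) and zero diagonal must be < 1."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    J = np.abs(A) / d[:, None]     # divide row i by |a_ii|
    np.fill_diagonal(J, 0.0)       # keep only the off-diagonal part
    return bool(np.max(np.abs(np.linalg.eigvals(J))) < 1)
```

This brute-force test is a useful oracle against which the cheaper criteria of this paper can be compared on small examples.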
Denote, for $\alpha\in(0,1)$,
\Lambda_i(A)=\alpha R_i(A)+(1-\alpha)C_i(A),
M_1(\alpha)=\{i\in N:\,|a_{ii}|=\Lambda_i(A)\},\quad M_2(\alpha)=\{i\in N:\,0<|a_{ii}|<\Lambda_i(A)\},\quad M_3(\alpha)=\{i\in N:\,|a_{ii}|>\Lambda_i(A)\}.
It is obvious that $M_i(\alpha)\cap M_j(\alpha)=\emptyset\;(i\neq j)$ and $M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha)=N$. We adopt the convention $\sum_{i\in\emptyset}\cdot=0$ and denote
r=\max_{i\in M_3(\alpha)}\frac{\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\Big)}{|a_{ii}|-\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|-(1-\alpha)C_i(A)},\qquad s=\max_{i\in M_2(\alpha)}\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)},\qquad \delta=\max\{r,s\},
T_{i,r}(A)=\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|+r\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\Big)+(1-\alpha)rC_i(A),\quad i\in M_3(\alpha),
h=\max_{i\in M_3(\alpha)}\frac{\delta\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\Big)}{T_{i,r}(A)-\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}}.
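These quantities can be computed mechanically. The sketch below (Python/NumPy; the exact equalities in the partition are replaced by a floating-point tolerance, an implementation choice of ours) reproduces the values worked out by hand in Example 1:

```python
import numpy as np

def theorem1_quantities(A, alpha):
    """Partition N into M1/M2/M3 and compute r, s, delta, T_{i,r}, h (Section 3)."""
    A = np.asarray(A, dtype=float); n = len(A)
    absA = np.abs(A); d = np.diag(absA)
    R = absA.sum(axis=1) - d
    C = absA.sum(axis=0) - d
    Lam = alpha * R + (1 - alpha) * C
    M1 = [i for i in range(n) if np.isclose(d[i], Lam[i])]
    M2 = [i for i in range(n) if d[i] < Lam[i] and not np.isclose(d[i], Lam[i])]
    M3 = [i for i in range(n) if d[i] > Lam[i] and not np.isclose(d[i], Lam[i])]
    S12 = lambda i: sum(absA[i, j] for j in M1 + M2)          # sums over M1 and M2
    S3 = lambda i: sum(absA[i, j] for j in M3 if j != i)      # sum over M3 \ {i}
    r = max(alpha * S12(i) / (d[i] - alpha * S3(i) - (1 - alpha) * C[i]) for i in M3)
    s = max((Lam[i] - d[i]) / Lam[i] for i in M2)
    delta = max(r, s)
    T = {i: alpha * (S12(i) + r * S3(i)) + (1 - alpha) * r * C[i] for i in M3}
    h = max(delta * alpha * S12(i)
            / (T[i] - alpha * sum(absA[i, j] * T[j] / d[j] for j in M3 if j != i)
               - (1 - alpha) * C[i] * T[i] / d[i])
            for i in M3)
    return M1, M2, M3, r, s, delta, T, h
```

Running this on the matrix of Example 1 with $\alpha=19/20$ returns the partition $\{1\},\{2,3\},\{4,5\}$ (0-based: `[0], [1, 2], [3, 4]`) and $r=1/10$, $\delta=h=17/21$.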
Theorem 1. Let $A=(a_{ij})\in C^{n\times n}$. If there is $\alpha\in(0,1)$ such that for any $i\in M_2(\alpha)$,
|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}>\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} \qquad (3.1)
holds, then $A$ is a nonsingular $H$-matrix.
Proof. We are going to prove the following inequality for all indices in each of the sets $M_1(\alpha)$, $M_2(\alpha)$ and $M_3(\alpha)$:
|b_{ii}|>\Lambda_i(B)=\alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha)=N.
It can be seen from the above notation that $0\leq r<1$ and $0<\delta<1$. From the definition of $r$, we can get that for any $i\in M_3(\alpha)$,
r|a_{ii}|\geq\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|+r\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\Big)+(1-\alpha)rC_i(A)
holds, that is, $T_{i,r}(A)\leq r|a_{ii}|$ for $i\in M_3(\alpha)$. Therefore
0\leq\frac{T_{i,r}(A)}{|a_{ii}|}\leq r\leq\delta<1,\quad i\in M_3(\alpha).
Furthermore, according to the definition of $T_{i,r}(A)$, for any $i\in M_3(\alpha)$,
\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\Big)=T_{i,r}(A)-r\Big(\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\Big).
So
\frac{\delta\alpha\Big(\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\Big)}{T_{i,r}(A)-\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}}<\frac{T_{i,r}(A)-r\Big(\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\Big)}{T_{i,r}(A)-\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}}\leq 1.
According to the definition of $h$, we can get $0\leq h<1$, and for all $i\in M_3(\alpha)$,
hT_{i,r}(A)\geq\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum_{j\in M_2(\alpha)}|a_{ij}|+h\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}. \qquad (3.2)
By (3.1), for all $i\in M_2(\alpha)$, we can get
|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}-\bigg(\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\bigg)>0.
Let
k_i=|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}-\bigg(\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\bigg)
and
w_i=\frac{k_i}{\alpha\sum_{j\in M_3(\alpha)}|a_{ij}|},\quad i\in M_2(\alpha). \qquad (3.3)
In particular, if $\sum_{j\in M_3(\alpha)}|a_{ij}|=0$, we set $w_i=+\infty$. By (3.3), $w_i>0$ for $i\in M_2(\alpha)$. Notice that
0\leq\frac{T_{i,r}(A)}{|a_{ii}|}h<\frac{T_{i,r}(A)}{|a_{ii}|}\leq\delta<1,\quad i\in M_3(\alpha).
Thus, take a sufficiently small positive number $\eta$ satisfying both
0<\eta<\min_{i\in M_2(\alpha)}\{w_i\}\leq+\infty
and
\max_{i\in M_3(\alpha)}\Big\{\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\Big\}<\delta<1.
Construct a positive diagonal matrix $X=\operatorname{diag}(x_1,x_2,\ldots,x_n)$, where
x_i=\left\{\begin{array}{ll}\delta,&i\in M_1(\alpha),\\ \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)},&i\in M_2(\alpha),\\ \frac{T_{i,r}(A)}{|a_{ii}|}h+\eta,&i\in M_3(\alpha),\end{array}\right.
and let $B=AX=(b_{ij})$.
For any $i\in M_1(\alpha)$, it can be obtained from $0<\delta<1$, $0<\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\leq\delta<1\;(i\in M_2(\alpha))$ and $0<\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta<\delta<1\;(i\in M_3(\alpha))$ that
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big(\delta\sum_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum_{j\in M_3(\alpha)}|a_{ij}|\Big(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\Big)\Big)+(1-\alpha)\delta C_i(A)\\&<\alpha\Big(\delta\sum_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\delta\sum_{j\in M_2(\alpha)}|a_{ij}|+\delta\sum_{j\in M_3(\alpha)}|a_{ij}|\Big)+(1-\alpha)\delta C_i(A)\\&=\delta\big(\alpha R_i(A)+(1-\alpha)C_i(A)\big)=\delta\Lambda_i(A)=\delta|a_{ii}|=|b_{ii}|.\end{aligned}\end{equation*}
For any $i\in M_2(\alpha)$, if $\sum_{j\in M_3(\alpha)}|a_{ij}|=0$, it can be deduced from (3.1) that
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum_{j\in M_3(\alpha)}|a_{ij}|\Big(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\Big)\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&=\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&<|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}=|b_{ii}|.\end{aligned}\end{equation*}
If $\sum_{j\in M_3(\alpha)}|a_{ij}|\neq 0$, it can be obtained from (3.3) that
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum_{j\in M_3(\alpha)}|a_{ij}|\Big(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\Big)\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&=\eta\alpha\sum_{j\in M_3(\alpha)}|a_{ij}|+\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&<w_i\alpha\sum_{j\in M_3(\alpha)}|a_{ij}|+\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&=|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}=|b_{ii}|.\end{aligned}\end{equation*}
For any $i\in M_3(\alpha)$, it can be deduced from $0<\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}\leq\delta<1\;(j\in M_2(\alpha))$ and (3.2) that
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big[\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\Big(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\Big)\Big]+(1-\alpha)C_i(A)\Big(\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\Big)\\&=\eta\Big[\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\Big]+\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}\\&\leq\eta\Big[\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\Big]+\alpha\Big(\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum_{j\in M_2(\alpha)}|a_{ij}|+h\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big)+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}\\&\leq\eta\Big[\alpha\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\Big]+hT_{i,r}(A)\\&\leq\eta\big[\alpha R_i(A)+(1-\alpha)C_i(A)\big]+hT_{i,r}(A)<\eta|a_{ii}|+hT_{i,r}(A)=|a_{ii}|\Big(\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\Big)=|b_{ii}|.\end{aligned}\end{equation*}
In conclusion, the following inequalities are always valid:
|b_{ii}|>\Lambda_i(B)=\alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha)=N.
By Definition 1, the matrix $B$ is a strictly $\alpha_1$-diagonally dominant matrix, so the matrix $A$ is a generalized $\alpha_1$-diagonally dominant matrix. According to Lemma 1, $A$ is a nonsingular $H$-matrix.
Remark 1. If $\alpha=1$, Theorem 1 reduces to Theorem 4 in [12]. At the same time, Theorem 1 improves the conditions of the theorems in [13,14,15]. So Theorem 1 is a further supplement to the determination methods for nonsingular $H$-matrices.
Theorem 2. Let $A=(a_{ij})\in C^{n\times n}$ be an irreducible matrix. If there is $\alpha\in(0,1)$ such that for any $i\in M_2(\alpha)$,
|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\geq\alpha\Big[\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big]+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}, \qquad (3.4)
and at least one strict inequality in (3.4) holds, then the matrix $A$ is a nonsingular $H$-matrix.
Proof. We are going to prove the following inequality for all indices in each of the sets $M_1(\alpha)$, $M_2(\alpha)$ and $M_3(\alpha)$:
|b_{ii}|\geq\Lambda_i(B)=\alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha)=N.
Construct a positive diagonal matrix $X=\operatorname{diag}(x_1,x_2,\ldots,x_n)$, where
x_i=\left\{\begin{array}{ll}\delta,&i\in M_1(\alpha),\\ \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)},&i\in M_2(\alpha),\\ \frac{T_{i,r}(A)}{|a_{ii}|}h,&i\in M_3(\alpha),\end{array}\right.
and denote $B=AX=(b_{ij})$. Similar to the proof of Theorem 1, for any $i\in M_1(\alpha)$,
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big[\delta\sum_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big]+(1-\alpha)\delta C_i(A)\\&\leq\delta\big[\alpha R_i(A)+(1-\alpha)C_i(A)\big]=\delta\Lambda_i(A)=\delta|a_{ii}|=|b_{ii}|.\end{aligned}\end{equation*}
For any $i\in M_2(\alpha)$, it can be obtained from (3.4) that
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big[\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big]+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\\&\leq|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}=|b_{ii}|.\end{aligned}\end{equation*}
For any $i\in M_3(\alpha)$, by (3.2) we can obtain
\begin{equation*}\begin{aligned}\Lambda_i(B)&=\alpha\Big[\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\sum_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}h\Big]+(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}h\\&\leq\alpha\Big[\delta\sum_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum_{j\in M_2(\alpha)}|a_{ij}|+h\sum_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\Big]+(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}h\\&\leq hT_{i,r}(A)=|a_{ii}|\frac{T_{i,r}(A)}{|a_{ii}|}h=|b_{ii}|.\end{aligned}\end{equation*}
To sum up, we can always get the following inequalities:
|b_{ii}|\geq\Lambda_i(B)=\alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha)=N.
Notice that there is at least one $i_0\in M_3(\alpha)$ such that $|b_{i_0i_0}|>\Lambda_{i_0}(B)$, so $B$ is an irreducible $\alpha_1$-diagonally dominant matrix. According to Lemma 3, $B$ is a nonsingular $H$-matrix. Therefore, $A$ is also a nonsingular $H$-matrix by Lemma 5.
Let
Q_i(A)=(R_i(A))^{\alpha}(C_i(A))^{1-\alpha},\quad\alpha\in(0,1),
N_1(\alpha)=\{i\in N:\,0<|a_{ii}|<Q_i(A)\},\quad N_2(\alpha)=\{i\in N:\,|a_{ii}|=Q_i(A)>0\},\quad N_3(\alpha)=\{i\in N:\,|a_{ii}|>Q_i(A)\}.
It is obvious that $N_i(\alpha)\cap N_j(\alpha)=\emptyset\;(i\neq j)$ and $N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha)=N$.
For any $i\in N_3(\alpha)$, denote
P_i(A)=\Big(\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\Big)(C_i(A))^{\frac{1-\alpha}{\alpha}}.
Obviously,
\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}=\Big(\frac{(P_i(A))^{\alpha}}{|a_{ii}|}\Big)^{\frac{1}{\alpha}}=\left(\frac{\Big(\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\Big)^{\alpha}(C_i(A))^{1-\alpha}}{|a_{ii}|}\right)^{\frac{1}{\alpha}}<\left(\frac{(R_i(A))^{\alpha}(C_i(A))^{1-\alpha}}{|a_{ii}|}\right)^{\frac{1}{\alpha}}<1.
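Like the quantities of Section 3, $Q_i$, the partition $N_1/N_2/N_3$ and $P_i$ can be computed mechanically. A sketch (Python/NumPy; exact equality is replaced by a floating-point tolerance, an implementation choice of ours), checked against Example 2:

```python
import numpy as np

def theorem3_quantities(A, alpha):
    """Compute Q_i, the partition N1/N2/N3 and P_i (i in N3) from Section 4."""
    A = np.asarray(A, dtype=float); n = len(A)
    absA = np.abs(A); d = np.diag(absA)
    R = absA.sum(axis=1) - d
    C = absA.sum(axis=0) - d
    Q = R**alpha * C**(1 - alpha)
    N1 = [i for i in range(n) if d[i] < Q[i] and not np.isclose(d[i], Q[i])]
    N2 = [i for i in range(n) if np.isclose(d[i], Q[i])]
    N3 = [i for i in range(n) if d[i] > Q[i] and not np.isclose(d[i], Q[i])]
    P = {}
    for i in N3:
        s = (sum(absA[i, j] * (Q[j] - d[j]) / Q[j] for j in N1)
             + sum(absA[i, j] for j in N2)
             + sum(absA[i, j] * R[j] * C[j]**((1 - alpha) / alpha) / d[j]**(1 / alpha)
                   for j in N3 if j != i))
        P[i] = s * C[i]**((1 - alpha) / alpha)
    return Q, N1, N2, N3, P
```

For the matrix of Example 2 with $\alpha=1/4$ this yields $N_1=\{3,4\}$, $N_2=\{2,5\}$, $N_3=\{1,6\}$ (0-based: `[2, 3]`, `[1, 4]`, `[0, 5]`) and $P_1\approx 0.0136$, $P_6\approx 0.4414$.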
Theorem 3. Let $A=(a_{ij})\in C^{n\times n}$. If there exists $\alpha\in(0,1)$ such that
|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}>\Big[\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big]^{\alpha}\cdot\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{1-\alpha} \qquad (4.1)
holds for any $i\in N_1(\alpha)$, then the matrix $A$ is a nonsingular $H$-matrix.
Proof. We are going to prove the following inequality for all indices in each of the sets $N_1(\alpha)$, $N_2(\alpha)$ and $N_3(\alpha)$:
|b_{ii}|>(R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\quad i\in N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha)=N.
For any $i\in N_1(\alpha)$, denote
g_i(A)=\Big(\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big)\Big(C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big)^{\frac{1-\alpha}{\alpha}},
G_i(A)=\frac{\Big(|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big)^{\frac{1}{\alpha}}-g_i(A)}{\Big(\sum_{j\in N_3(\alpha)}|a_{ij}|\Big)\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{\frac{1-\alpha}{\alpha}}}.
It is known from (4.1) that $G_i(A)>0$ for $i\in N_1(\alpha)$. In particular, if $\sum_{j\in N_3(\alpha)}|a_{ij}|=0\;(i\in N_1(\alpha))$, we set $G_i(A)=+\infty$. Take a sufficiently small positive number $\varepsilon$ satisfying
0<\varepsilon<\min\Big\{G_j(A)\;(j\in N_1(\alpha)),\;1-\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}\;(i\in N_3(\alpha))\Big\}. \qquad (4.2)
Construct a positive diagonal matrix $X=\operatorname{diag}(d_1,d_2,\ldots,d_n)$, where
d_i=\left\{\begin{array}{ll}\frac{Q_i(A)-|a_{ii}|}{Q_i(A)},&\forall i\in N_1(\alpha),\\ 1,&\forall i\in N_2(\alpha),\\ \frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon,&\forall i\in N_3(\alpha).\end{array}\right.
It is proved below that $B=AX=(b_{ij})\in D^{\alpha_2}$. For any $i\in N_1(\alpha)$, according to (4.1) and (4.2),
\begin{equation*}\begin{aligned}R_i(B)(C_i(B))^{\frac{1-\alpha}{\alpha}}&=\Big[\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\Big(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big]\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{\frac{1-\alpha}{\alpha}}\\&=g_i(A)+\varepsilon\Big(\sum_{j\in N_3(\alpha)}|a_{ij}|\Big)\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{\frac{1-\alpha}{\alpha}}\\&<g_i(A)+G_i(A)\Big(\sum_{j\in N_3(\alpha)}|a_{ij}|\Big)\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{\frac{1-\alpha}{\alpha}}\\&=\Big(|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big)^{\frac{1}{\alpha}}=|b_{ii}|^{\frac{1}{\alpha}},\end{aligned}\end{equation*}
that is, $|b_{ii}|>(R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\;i\in N_1(\alpha)$.
For any $i\in N_2(\alpha)$, since $\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}<1\;(i\in N_1(\alpha))$ and, by (4.2), $\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon<1\;(i\in N_3(\alpha))$, we have
\begin{equation*}\begin{aligned}(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}&=\Big[\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha),j\neq i}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\Big(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big]^{\alpha}\big[C_i(A)\big]^{1-\alpha}\\&<\Big(\sum_{j\in N_1(\alpha)}|a_{ij}|+\sum_{j\in N_2(\alpha),j\neq i}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\Big)^{\alpha}(C_i(A))^{1-\alpha}=(R_i(A))^{\alpha}(C_i(A))^{1-\alpha}=|a_{ii}|=|b_{ii}|.\end{aligned}\end{equation*}
For any $i\in N_3(\alpha)$, obviously
|a_{ii}|^{\frac{1}{\alpha}}>R_i(A)(C_i(A))^{\frac{1-\alpha}{\alpha}}=\Big(\sum_{j\in N_1(\alpha)}|a_{ij}|+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big)(C_i(A))^{\frac{1-\alpha}{\alpha}}\geq\Big(\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big)(C_i(A))^{\frac{1-\alpha}{\alpha}},
hence
\begin{equation*}\begin{aligned}|a_{ii}|^{\frac{1}{\alpha}}\Big(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\Big)&=P_i(A)+\varepsilon|a_{ii}|^{\frac{1}{\alpha}}\\&>\Big(\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\Big)(C_i(A))^{\frac{1-\alpha}{\alpha}}+\varepsilon\Big(\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big)(C_i(A))^{\frac{1-\alpha}{\alpha}}\\&=\Big[\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big(\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big](C_i(A))^{\frac{1-\alpha}{\alpha}}\\&\geq\Big[\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big](C_i(A))^{\frac{1-\alpha}{\alpha}}.\end{aligned}\end{equation*}
Raising both sides of this inequality to the power $\alpha$, we get
|a_{ii}|\Big(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\Big)^{\alpha}>\Big[\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big]^{\alpha}\big[C_i(A)\big]^{1-\alpha}.
Multiplying both sides by $\Big(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\Big)^{1-\alpha}$, we obtain
\begin{equation*}\begin{aligned}|b_{ii}|=|a_{ii}|\Big(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\Big)&>\Big[\sum_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha),j\neq i}|a_{ij}|\Big(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big]^{\alpha}\Big[C_i(A)\Big(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\Big)\Big]^{1-\alpha}\\&=(R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\end{aligned}\end{equation*}
that is, $|b_{ii}|>(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}$. To sum up, the following inequality is always true:
|b_{ii}|>(R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\quad i\in N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha)=N,
that is, $B\in D^{\alpha_2}$. Therefore, we know that $A\in D^{*}_{\alpha_2}$, and according to Lemma 2, $A$ is a nonsingular $H$-matrix.
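The proof of Theorem 3 is constructive, so it can be replayed as a verification procedure: build $X=\operatorname{diag}(d)$ as above and test $AX\in D^{\alpha_2}$ directly. A sketch (Python/NumPy; $\varepsilon$ is fixed to a small constant rather than computed from (4.2), an assumption that suits the matrix of Example 2):

```python
import numpy as np

def verify_theorem3_scaling(A, alpha, eps=1e-3):
    """Build X = diag(d) as in the proof of Theorem 3 and check AX in D_alpha2."""
    A = np.asarray(A, dtype=float); n = len(A)
    absA = np.abs(A); dd = np.diag(absA)
    R = absA.sum(axis=1) - dd
    C = absA.sum(axis=0) - dd
    Q = R**alpha * C**(1 - alpha)
    N1 = [i for i in range(n) if dd[i] < Q[i] and not np.isclose(dd[i], Q[i])]
    N2 = [i for i in range(n) if np.isclose(dd[i], Q[i])]
    N3 = [i for i in range(n) if dd[i] > Q[i] and not np.isclose(dd[i], Q[i])]
    P = {i: (sum(absA[i, j] * (Q[j] - dd[j]) / Q[j] for j in N1)
             + sum(absA[i, j] for j in N2)
             + sum(absA[i, j] * R[j] * C[j]**((1 - alpha) / alpha) / dd[j]**(1 / alpha)
                   for j in N3 if j != i)) * C[i]**((1 - alpha) / alpha)
         for i in N3}
    d = np.ones(n)                       # d_i = 1 on N2
    for i in N1:
        d[i] = (Q[i] - dd[i]) / Q[i]
    for i in N3:
        d[i] = P[i] / dd[i]**(1 / alpha) + eps
    B = A * d                            # B = AX scales column j by d_j
    absB = np.abs(B); dB = np.diag(absB)
    RB = absB.sum(axis=1) - dB
    CB = absB.sum(axis=0) - dB
    return bool(np.all(dB > RB**alpha * CB**(1 - alpha)))
```

For the matrix of Example 2 with $\alpha=1/4$ this check succeeds, confirming the conclusion of Theorem 3 numerically.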
Remark 2. By the weighted arithmetic–geometric mean inequality, for any $i\in N_1(\alpha)$ the right-hand side of (4.1) satisfies
\begin{equation*}\begin{aligned}&\frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\Big[\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big]^{\alpha}\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{1-\alpha}\\&\leq\frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\Big[\alpha\Big(\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big)+(1-\alpha)C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]\\&=\frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\alpha\Big[\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big]+(1-\alpha)C_i(A).\end{aligned}\end{equation*}
Therefore, Theorem 3 in this paper improves Theorem 1 in [10] and Theorem 1 in [16].
Theorem 4. Let $A=(a_{ij})\in C^{n\times n}$ be an irreducible matrix. If there exists $\alpha\in(0,1)$ such that
|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\geq\Big[\sum_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum_{j\in N_2(\alpha)}|a_{ij}|+\sum_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\Big]^{\alpha}\cdot\Big[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\Big]^{1-\alpha} \qquad (4.3)
is true for any i\in N_1{(\alpha)} , then the matrix A is a nonsingular H -matrix.
Proof. We are going to prove the following inequality for all indices in each of the sets N_1(\alpha), N_2(\alpha) and N_3(\alpha) .
|b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \; i\in N_1{(\alpha)}\cup N_2{(\alpha)}\cup N_3{(\alpha)} = N. |
Construct a positive diagonal matrix X = \text{diag} \; (d_1, d_2, \ldots, d_n) , where
d_{i} = \left\{\begin{array}{ccc} {\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}},&\; \forall i\in N_1{(\alpha)},\\ 1,&\; \forall i\in N_2{(\alpha)},\\ {\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}},&\; \forall i\in N_3{(\alpha)}. \end{array} \right. |
Let B = AX = (b_{ij}) . For any i\in N_1{(\alpha)} , it can be obtained from (4.3) that
\begin{eqnarray} \begin{array}{lll} (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}& = [\sum\limits_{j\in N_1{(\alpha)},j\neq i}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}]^{\alpha}\cdot[C_i(A){{\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}}}]^{1-\alpha}\\ &\le|a_{ii}|{\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}} = |b_{ii}|, \end{array} \end{eqnarray} |
that is, |b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_1{(\alpha)}.
For any i\in N_2{(\alpha)} , because {\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}} < 1, \; i\in N_1{(\alpha)} , and {\frac{P_i(A)}{{|a_{ii}|^{\frac{1}{\alpha}}}}} < 1, \; i\in N_3{(\alpha)} , we can obtain that
\begin{eqnarray} \begin{array}{lll} (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}& = [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij}|{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)},j\neq i}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}]^{\alpha}[C_i(A)]^{1-\alpha}\\ &\le[\sum\limits_{j\in N_1{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_2{(\alpha)},j\neq i}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |]^{\alpha}[C_i(A)]^{1-\alpha}\\ & = (R_i(A))^{\alpha}(C_i(A))^{1-\alpha} = |a_{ii}| = |b_{ii}|, \end{array} \end{eqnarray} |
that is, |b_{ii}| \geq (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_2{(\alpha)}.
For any i\in N_3{(\alpha)} ,
\begin{eqnarray} \begin{array}{lll} |a_{ii}|^{{\frac{1}{\alpha}}}({\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}})& = P_i(A)\\ & = [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|{{\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}}}](C_i(A))^{\frac{1-\alpha}{\alpha}}\\ & > [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}](C_i(A))^{\frac{1-\alpha}{\alpha}}. \end{array} \end{eqnarray} |
Raising both sides to the power \alpha and multiplying both sides by $({{\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}}})^{1-\alpha}$, we get
\begin{eqnarray} \begin{array}{lll} |b_{ii}|& = |a_{ii}|({\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}})\\ & > [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|({{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}})]^{\alpha}[(C_i(A)){\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}}]^{1-\alpha}\\ & = (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \end{array} \end{eqnarray}
that is, |b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_3{(\alpha)}.
In conclusion, the following inequalities are always valid.
|b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \; i\in N_1{(\alpha)}\cup N_2{(\alpha)}\cup N_3{(\alpha)} = N. |
Thus, B is an irreducible \alpha_2 -diagonally dominant matrix. According to Lemma 4, B is a nonsingular H -matrix. Therefore, A is also a nonsingular H -matrix by Lemma 5.
Example 1. Let
A = \left(\begin{matrix} 1 & \frac{18}{19} & 0 & \frac{1}{19} &0\\ \frac{412}{475} & 4 & \frac{58}{19} & 1 &17.08\\ \frac{13}{475} & \frac{20}{19} & 7.76 & 8 &0.92\\ \frac{1}{19} & 0 & \frac{18}{19} & 10 &0\\ \frac{1}{19} & 0 & 0 & \frac{18}{19} &\frac{23}{9}\\ \end{matrix}\right). |
Taking \alpha = \frac{19}{20} , we will show that
(1) The matrix A satisfies the conditions of Theorem 1 in this paper, so we can determine that A is a nonsingular H -matrix according to Theorem 1.
(2) A does not meet the criteria in [13,14,15], so it cannot be determined by applying the methods in these papers.
In fact, for (1), it can be obtained through calculation that
R_1(A) = C_1(A) = |a_{11}| = 1 = \alpha R_1(A)+(1-\alpha)C_1(A) = \Lambda_1(A), |
R_2(A) = 22, C_2(A) = 2, |
|a_{22}| = 4 < \alpha R_2(A)+(1-\alpha)C_2(A) = \Lambda_2(A) = \frac{19}{20}\times 22+\frac{1}{20}\times 2 = 21. |
R_3(A) = 10, C_3(A) = 4, |
|a_{33}| = 7.76 < \alpha R_3(A)+(1-\alpha)C_3(A) = \Lambda_3(A) = \frac{19}{20}\times 10+\frac{1}{20}\times 4 = 9.7. |
R_4(A) = 1, C_4(A) = 10, |
|a_{44}| = 10 > \alpha R_4(A)+(1-\alpha)C_4(A) = \Lambda_4(A) = \frac{19}{20}\times 1+\frac{1}{20}\times 10 = 1.45. |
R_5(A) = 1, C_5(A) = 18, |
|a_{55}| = \frac{23}{9} \approx 2.5556 > \alpha R_5(A)+(1-\alpha)C_5(A) = \Lambda_5(A) = \frac{19}{20}\times 1+\frac{1}{20}\times 18 = 1.85.
So, M_1{(\alpha)} = \{1\}, M_2{(\alpha)} = \{2, 3\}, M_3{(\alpha)} = \{4, 5\} . And then
\begin{equation*} \begin{aligned} r& = \max\{\frac{\frac{19}{20}(|a_{41}|+|a_{42}|+|a_{43}|)}{|a_{44}|-\frac{19}{20}|a_{45}|-\frac{1}{20}C_4(A)},\frac{\frac{19}{20}(|a_{51}|+|a_{52}|+|a_{53}|)}{|a_{55}|-\frac{19}{20}|a_{54}|-\frac{1}{20}C_5(A)}\}\\& = \max\{\frac{\frac{19}{20}(\frac{1}{19}+0+\frac{18}{19})}{10-\frac{19}{20}\times 0-\frac{1}{20}\times 10},\frac{\frac{19}{20}(\frac{1}{19}+0+0)}{\frac{23}{9}-\frac{19}{20}\times \frac{18}{19}-\frac{1}{20}\times 18}\} = \max\{\frac{1}{10},\frac{9}{136}\} = \frac{1}{10}, \end{aligned} \end{equation*}
s = \max\{\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)},\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}\} = \max\{\frac{21-4}{21},\frac{9.7-7.76}{9.7}\} = \frac{17}{21}, |
\delta = \max\{r,s\} = \max\{\frac{1}{10}{,} \frac{17}{21}\} = \frac{17}{21}. |
\begin{equation*} \begin{aligned} T_{4,r}(A)& = \alpha(|a_{41}|+|a_{42}|+|a_{43}|+r|a_{45}|)+(1-\alpha)rC_4(A)\\ & = \frac{19}{20}(\frac{1}{19}+0+\frac{18}{19}+\frac{1}{10}\times 0)+\frac{1}{20}\times\frac{1}{10}\times 10 = \frac{19}{20}+\frac{1}{20} = 1, \end{aligned} \end{equation*} |
\begin{equation*} \begin{aligned} T_{5,r}(A)& = \alpha(|a_{51}|+|a_{52}|+|a_{53}|+r|a_{54}|)+(1-\alpha)rC_5(A)\\ & = \frac{19}{20}(\frac{1}{19}+0+0+\frac{1}{10}\times \frac{18}{19})+\frac{1}{20}\times\frac{1}{10}\times 18 = \frac{23}{100} = 0.23. \end{aligned} \end{equation*} |
\frac{\delta\alpha(|a_{41}|+|a_{42}|+|a_{43}|)}{T_{4,r}(A)-\alpha|a_{45}|\frac{T_{5,r}(A)}{|a_{55}|}-(1-\alpha)C_4(A)\frac{T_{4,r}(A)}{|a_{44}|}} = \frac{\frac{17}{21}\times\frac{19}{20}(\frac{1}{19}+0+\frac{18}{19})}{1-\frac{19}{20}\times 0\times \frac{0.23}{23/9}-\frac{1}{20}\times 10\times\frac{1}{10}} = \frac{17}{21}, |
\begin{equation*} \begin{aligned} &\frac{\delta\alpha(|a_{51}|+|a_{52}|+|a_{53}|)}{T_{5,r}(A)-\alpha|a_{54}|\frac{T_{4,r}(A)}{|a_{44}|}-(1-\alpha)C_5(A)\frac{T_{5,r}(A)}{|a_{55}|}} = \frac{\frac{17}{21}\times\frac{19}{20}(\frac{1}{19}+0+0)}{0.23-\frac{19}{20}\times\frac{18}{19}\times\frac{1}{10}-\frac{1}{20}\times 18\times\frac{0.23}{23/9}} = \frac{850}{1239}. \end{aligned} \end{equation*} |
Therefore, h = \max\{\frac{17}{21}, \frac{850}{1239}\} = \frac{17}{21} . And notice that
|a_{22}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)} = 4\times\frac{21-4}{21} = \frac{68}{21} = 3.2381, |
\begin{equation*} \begin{aligned} &\alpha[\delta|a_{21}|+|a_{23}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}+h(|a_{24}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{25}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_2(A)\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}\\ & = \frac{19}{20}\times[\frac{17}{21}\times\frac{412}{475}+\frac{58}{19}\times\frac{1}{5}+\frac{17}{21}\times(1\times\frac{1}{10}+17.08\times\frac{0.23}{23/9})]+\frac{1}{20}\times 2\times\frac{17}{21} = 2.5871, \end{aligned} \end{equation*} |
\begin{equation*} \begin{aligned} &|a_{22}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)} > \alpha[\delta|a_{21}|+|a_{23}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}+h(|a_{24}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{25}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha)C_2(A)\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}. \end{aligned} \end{equation*} |
|a_{33}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)} = 7.76\times\frac{1}{5} = 1.5520, |
\begin{equation*} \begin{aligned} &\alpha[\delta|a_{31}|+|a_{32}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}+h(|a_{34}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{35}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_3(A)\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}\\ & = \frac{19}{20}\times[\frac{17}{21}\times\frac{13}{475}+\frac{20}{19}\times\frac{17}{21}+\frac{17}{21}\times(8\times\frac{1}{10}+0.92\times\frac{0.23}{23/9})]+\frac{1}{20}\times4\times\frac{1}{5} = 1.5495, \end{aligned} \end{equation*} |
\begin{equation*} \begin{aligned} &|a_{33}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)} > \alpha[\delta|a_{31}|+|a_{32}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}+h(|a_{34}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{35}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_3(A)\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}. \end{aligned} \end{equation*} |
To sum up, the conditions of Theorem 1 in this paper are satisfied. So we can determine that A is a nonsingular H -matrix.
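The constructive proof of Theorem 1 can also be replayed numerically for this example: build $X$ from $\delta$, $h$, $T_{i,r}$ and a small $\eta$, and check that $AX$ is strictly $\alpha_1$-diagonally dominant. A sketch (Python/NumPy; $\eta$ is fixed to $10^{-5}$ rather than derived from the $w_i$, an assumption that suits this matrix):

```python
import numpy as np

def verify_theorem1_scaling(A, alpha, eta=1e-5):
    """Build X = diag(x) as in the proof of Theorem 1 and check AX in D_alpha1."""
    A = np.asarray(A, dtype=float); n = len(A)
    absA = np.abs(A); d = np.diag(absA)
    R = absA.sum(axis=1) - d
    C = absA.sum(axis=0) - d
    Lam = alpha * R + (1 - alpha) * C
    M1 = [i for i in range(n) if np.isclose(d[i], Lam[i])]
    M2 = [i for i in range(n) if d[i] < Lam[i] and not np.isclose(d[i], Lam[i])]
    M3 = [i for i in range(n) if d[i] > Lam[i] and not np.isclose(d[i], Lam[i])]
    S12 = lambda i: sum(absA[i, j] for j in M1 + M2)
    S3 = lambda i: sum(absA[i, j] for j in M3 if j != i)
    r = max(alpha * S12(i) / (d[i] - alpha * S3(i) - (1 - alpha) * C[i]) for i in M3)
    delta = max(r, max((Lam[i] - d[i]) / Lam[i] for i in M2))
    T = {i: alpha * (S12(i) + r * S3(i)) + (1 - alpha) * r * C[i] for i in M3}
    h = max(delta * alpha * S12(i)
            / (T[i] - alpha * sum(absA[i, j] * T[j] / d[j] for j in M3 if j != i)
               - (1 - alpha) * C[i] * T[i] / d[i]) for i in M3)
    x = np.empty(n)
    for i in M1: x[i] = delta
    for i in M2: x[i] = (Lam[i] - d[i]) / Lam[i]
    for i in M3: x[i] = T[i] * h / d[i] + eta
    B = A * x                                   # B = AX scales column j by x_j
    absB = np.abs(B); dB = np.diag(absB)
    RB = absB.sum(axis=1) - dB
    CB = absB.sum(axis=0) - dB
    return bool(np.all(dB > alpha * RB + (1 - alpha) * CB))
```

For the matrix of this example with $\alpha=19/20$ the check succeeds, so the constructed $B=AX$ is indeed strictly $\alpha_1$-diagonally dominant.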
For (2), it is calculated that
|a_{22}| = 4, |
\begin{equation*} \begin{aligned} &\frac{R_2(A)}{|a_{22}|}(|a_{21}|\frac{a_{11}}{R_1(A)}+|a_{23}|\frac{a_{33}}{R_3(A)}+\frac{R_4(A)}{|a_{44}|}+\frac{R_5(A)}{|a_{55}|}) = \frac{22}{4}(\frac{412}{475}\times\frac{1}{1}+\frac{58}{19}\times\frac{7.76}{10}+\frac{1}{10}+\frac{1}{23/9}) = 20.2622, \end{aligned} \end{equation*} |
|a_{22}| < \frac{R_2(A)}{|a_{22}|}(|a_{21}|\frac{a_{11}}{R_1(A)}+|a_{23}|\frac{a_{33}}{R_3(A)}+\frac{R_4(A)}{|a_{44}|}+\frac{R_5(A)}{|a_{55}|}). |
Then the conditions of the decision theorem in [13] are not satisfied.
\begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{R_4(A)}{|a_{44}|}+|a_{25}|\frac{R_5(A)}{|a_{55}|})\\ & = \frac{22}{22-4}(\frac{412}{475}+\frac{58}{19}\times\frac{10-7.76}{10}+1\times\frac{1}{10} +17.08\times\frac{1}{23/9}) = 9.2791, \end{aligned} \end{equation*} |
|a_{22}| < \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{R_4(A)}{|a_{44}|}+|a_{25}|\frac{R_5(A)}{|a_{55}|}). |
So the conditions of the decision theorem in [14] are also not satisfied.
Further calculation shows that
r = \max\{\frac{|a_{41}|+|a_{42}|+|a_{43}|}{|a_{44}|-|a_{45}|},\frac{|a_{51}|+|a_{52}|+|a_{53}|}{|a_{55}|-|a_{54}|}\} = \max\{\frac{\frac{1}{19}+0+\frac{18}{19}}{10-0},\frac{\frac{1}{19}+0+0}{\frac{23}{9}-\frac{18}{19}}\} = \frac{1}{10},
P_4(A) = |a_{41}|+|a_{42}|+|a_{43}|+r\times|a_{45}| = \frac{1}{19}+0+\frac{18}{19}+\frac{1}{10}\times 0 = 1, |
P_5(A) = |a_{51}|+|a_{52}|+|a_{53}|+r\times|a_{54}| = \frac{1}{19}+0+0+\frac{1}{10}\times\frac{18}{19} = \frac{14}{95}. |
|a_{33}| = 7.76, |
\begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_3(A)}{R_3(A)-|a_{33}|}(|a_{31}|+|a_{32}|\frac{R_2(A)-|a_{22}|}{R_2(A)}+|a_{34}|\frac{P_4(A)}{|a_{44}|}+|a_{35}|\frac{P_5(A)}{|a_{55}|})\\ & = \frac{10}{10-7.76}\times(\frac{13}{475}+\frac{20}{19}\times\frac{22-4}{22}+8\times\frac{1}{10}+0.92\times\frac{14/95}{23/9}) = 7.7753, \end{aligned} \end{equation*} |
|a_{33}| < \frac{R_3(A)}{R_3(A)-|a_{33}|}(|a_{31}|+|a_{32}|\frac{R_2(A)-|a_{22}|}{R_2(A)}+|a_{34}|\frac{P_4(A)}{|a_{44}|}+|a_{35}|\frac{P_5(A)}{|a_{55}|}). |
|a_{22}| = 4, |
\begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{P_4(A)}{|a_{44}|}+|a_{25}|\frac{P_5(A)}{|a_{55}|})\\ & = \frac{22}{22-4}\times(\frac{412}{475}+\frac{58}{19}\times\frac{10-7.76}{10}+1\times\frac{1}{10}+17.08\times\frac{14/95}{23/9}) = 3.0881, \end{aligned} \end{equation*} |
|a_{22}| > \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{P_4(A)}{|a_{44}|}+|a_{25}|\frac{P_5(A)}{|a_{55}|}). |
The conditions of the decision theorem in [15] are not satisfied.
Therefore, we know that the matrix A does not meet the criteria in [13,14,15], so it cannot be determined by these existing methods.
Example 2. Let
A = \left(\begin{matrix} 1 & 0.1 & -0.1 & -0.1 & 0.1 & 0\\ 0.1 & 0.6 & 0 & 0 & -0.2 & 0.3\\ 0.1 & 0 & 0.4 & -0.1 & 0 & -0.3\\ -0.1 & 0 & -0.1 & 0.3 & 0 & 0.2\\ 0.1 & 0.1 & -0.1 & -0.1 & 0.5 &0.1\\ 0 & -0.4 & 0.1 & 0 & -0.2 & 2\\ \end{matrix}\right). |
Taking \alpha = {\frac{1}{4}} , we will show that
(1) The matrix A satisfies the conditions of Theorem 3 in this paper, so A is a nonsingular H -matrix.
(2) A does not meet the criteria in [10,16], so it cannot be identified by applying the methods in [10,16].
In fact, for (1), it is calculated that
R_1(A) = 0.4,C_1(A) = 0.4,|a_{11}| = 1 > Q_1(A) = 0.4^{{\frac{1}{4}}}\times0.4^{{\frac{3}{4}}} = 0.4,
R_2(A) = 0.6,C_2(A) = 0.6,|a_{22}| = 0.6 = Q_2(A) = 0.6^{{\frac{1}{4}}}\times0.6^{{\frac{3}{4}}} = 0.6,
R_3(A) = 0.5,C_3(A) = 0.4,|a_{33}| = 0.4 < Q_3(A) = 0.5^{{\frac{1}{4}}}\times0.4^{{\frac{3}{4}}} = 0.4229,
R_4(A) = 0.4,C_4(A) = 0.3,|a_{44}| = 0.3 < Q_4(A) = 0.4^{{\frac{1}{4}}}\times0.3^{{\frac{3}{4}}} = 0.3224,
R_5(A) = 0.5,C_5(A) = 0.5,|a_{55}| = 0.5 = Q_5(A) = 0.5^{{\frac{1}{4}}}\times0.5^{{\frac{3}{4}}} = 0.5,
R_6(A) = 0.7,C_6(A) = 0.9,|a_{66}| = 2 > Q_6(A) = 0.7^{{\frac{1}{4}}}\times0.9^{{\frac{3}{4}}} = 0.8452.
So N_1{(\alpha)} = \{3, 4\}, \; N_2{(\alpha)} = \{2, 5\}, \; N_3{(\alpha)} = \{1, 6\} , and then calculate
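This partition can be checked exactly: since all quantities are nonnegative and \alpha = \frac{1}{4} , comparing |a_{ii}| with Q_i(A) = R_i(A)^{1/4}C_i(A)^{3/4} is equivalent to comparing |a_{ii}|^4 with R_i(A)C_i(A)^3 , which keeps the equality cases (rows 2 and 5) exact. A Python sketch:

```python
from fractions import Fraction as F

# Example 2 matrix with entries as exact fractions (0.1 -> 1/10, etc.).
A = [[F(n, 10) for n in row] for row in [
    [10,  1, -1, -1,  1,  0],
    [ 1,  6,  0,  0, -2,  3],
    [ 1,  0,  4, -1,  0, -3],
    [-1,  0, -1,  3,  0,  2],
    [ 1,  1, -1, -1,  5,  1],
    [ 0, -4,  1,  0, -2, 20]]]

n = len(A)
R = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]  # deleted row sums
C = [sum(abs(A[j][i]) for j in range(n) if j != i) for i in range(n)]  # deleted column sums

# With alpha = 1/4: |a_ii| <,=,> Q_i(A)  <=>  |a_ii|^4 <,=,> R_i * C_i^3.
N1 = {i + 1 for i in range(n) if abs(A[i][i])**4 < R[i] * C[i]**3}
N2 = {i + 1 for i in range(n) if abs(A[i][i])**4 == R[i] * C[i]**3}
N3 = {i + 1 for i in range(n) if abs(A[i][i])**4 > R[i] * C[i]**3}
print(N1, N2, N3)  # {3, 4} {2, 5} {1, 6}
```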
\begin{eqnarray} \begin{array}{lll} P_1(A)& = [|a_{13}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{14}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{12}|+|a_{15}|+|a_{16}|{{\frac{R_6(A)(C_6(A))^3}{|a_{66}|^4}}}](C_1(A))^3\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0.1\times{{\frac{0.0224}{0.3224}}}+0.1+0.1+0\times{{\frac{0.7\times(0.9)^3}{2^4}}}]\times0.4^3 = 0.0136, \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} P_6(A)& = [|a_{63}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{64}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{62}|+|a_{65}|+|a_{61}|{{\frac{R_1(A)(C_1(A))^3}{|a_{11}|^4}}}](C_6(A))^3\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0\times{{\frac{0.0224}{0.3224}}}+0.4+0.2+0\times{{\frac{0.4\times(0.4)^3}{1^4}}}]\times0.9^3 = 0.4414. \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} &\; \; \; |a_{33}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}} = 0.4\times{{\frac{0.0229}{0.4229}}} = 0.0217\\ & > [|a_{34}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{32}|+|a_{35}|+|a_{31}|{{\frac{P_1(A)}{|a_{11}|^4}}}+|a_{36}|{{\frac{P_6(A)}{|a_{66}|^4}}}]^{{\frac{1}{4}}}[C_3(A){{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}]^{{\frac{3}{4}}}\\ & = [0.1\times{{\frac{0.0224}{0.3224}}}+0+0+0.1\times{{\frac{0.0136}{1^4}}}+0.3\times{{\frac{0.4414}{2^4}}}]^{{\frac{1}{4}}}\times[0.4\times{{\frac{0.0229}{0.4229}}}]^{{\frac{3}{4}}}\\ & = (0.0166)^{{\frac{1}{4}}}\times(0.0217)^{{\frac{3}{4}}} = 0.0203, \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} &\; \; \; |a_{44}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}} = 0.3\times{{\frac{0.0224}{0.3224}}} = 0.0208\\ & > [|a_{43}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{42}|+|a_{45}|+|a_{41}|{{\frac{P_1(A)}{|a_{11}|^4}}}+|a_{46}|{{\frac{P_6(A)}{|a_{66}|^4}}}]^{{\frac{1}{4}}}[C_4(A){{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}]^{{\frac{3}{4}}}\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0+0+0.1\times{{\frac{0.0136}{1^4}}}+0.2\times{{\frac{0.4414}{2^4}}}]^{{\frac{1}{4}}}\times[0.3\times{{\frac{0.0224}{0.3224}}}]^{{\frac{3}{4}}}\\ & = (0.0123)^{{\frac{1}{4}}}\times(0.0208)^{{\frac{3}{4}}} = 0.0183. \end{array} \end{eqnarray}
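Both inequalities of Theorem 3 can be reproduced numerically; the sketch below recomputes Q_3 , Q_4 , P_1(A) , P_6(A) and the two sides from the matrix entries above (floating point, so values match the rounded figures in the text only approximately):

```python
# Row/column data for Example 2 with alpha = 1/4, taken from the text above.
R3, C3, a33 = 0.5, 0.4, 0.4
R4, C4, a44 = 0.4, 0.3, 0.3

Q3 = R3**0.25 * C3**0.75   # ~0.4229
Q4 = R4**0.25 * C4**0.75   # ~0.3224

# P_1(A) and P_6(A) from rows 1 and 6 of the matrix (zero entries kept explicit).
P1 = (0.1*(Q3 - a33)/Q3 + 0.1*(Q4 - a44)/Q4 + 0.1 + 0.1 + 0.0) * 0.4**3
P6 = (0.1*(Q3 - a33)/Q3 + 0.0*(Q4 - a44)/Q4 + 0.4 + 0.2 + 0.0) * 0.9**3

# Row 3 test of Theorem 3 (entries a_32 = a_35 = 0 omitted from the sum).
lhs3 = a33 * (Q3 - a33) / Q3
rhs3 = ((0.1*(Q4 - a44)/Q4 + 0.1*P1/1**4 + 0.3*P6/2**4)**0.25
        * (C3 * (Q3 - a33)/Q3)**0.75)

# Row 4 test (entries a_42 = a_45 = 0 omitted from the sum).
lhs4 = a44 * (Q4 - a44) / Q4
rhs4 = ((0.1*(Q3 - a33)/Q3 + 0.1*P1/1**4 + 0.2*P6/2**4)**0.25
        * (C4 * (Q4 - a44)/Q4)**0.75)

assert lhs3 > rhs3 and lhs4 > rhs4  # both conditions of Theorem 3 hold
```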
So the conditions of Theorem 3 in this paper are satisfied, and we can determine that A is a nonsingular H -matrix.
For (2), using Theorem 3 in [16], we can get
E_1(A) = {\frac{1}{4}}R_1(A)+{\frac{3}{4}}C_1(A) = {\frac{1}{4}}\times0.4+{\frac{3}{4}}\times0.4 = 0.4 < |a_{11}| = 1,
E_2(A) = {\frac{1}{4}}R_2(A)+{\frac{3}{4}}C_2(A) = {\frac{1}{4}}\times0.6+{\frac{3}{4}}\times0.6 = 0.6 = |a_{22}|,
E_3(A) = {\frac{1}{4}}R_3(A)+{\frac{3}{4}}C_3(A) = {\frac{1}{4}}\times0.5+{\frac{3}{4}}\times0.4 = 0.425 > |a_{33}| = 0.4,
E_4(A) = {\frac{1}{4}}R_4(A)+{\frac{3}{4}}C_4(A) = {\frac{1}{4}}\times0.4+{\frac{3}{4}}\times0.3 = 0.325 > |a_{44}| = 0.3,
E_5(A) = {\frac{1}{4}}R_5(A)+{\frac{3}{4}}C_5(A) = {\frac{1}{4}}\times0.5+{\frac{3}{4}}\times0.5 = 0.5 = |a_{55}|,
E_6(A) = {\frac{1}{4}}R_6(A)+{\frac{3}{4}}C_6(A) = {\frac{1}{4}}\times0.7+{\frac{3}{4}}\times0.9 = 0.85 < |a_{66}| = 2.
It can be obtained through calculation that
\begin{eqnarray} \begin{array}{lll} P_1(A)& = {\frac{1}{4}}(|a_{13}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{14}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{12}|+|a_{15}|+|a_{16}|{{\frac{E_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_1(A){{\frac{E_1(A)}{|a_{11}|}}}\\ & = {\frac{1}{4}}\times(0.1\times{{\frac{0.425-0.4}{0.425}}}+0.1\times{{\frac{0.325-0.3}{0.325}}}+0.1+0.1+0\times{{\frac{0.85}{2}}})+{\frac{3}{4}}\times0.4\times{{\frac{0.4}{1}}}\\ & = 0.0534+0.12 = 0.1734, \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} P_6(A)& = {\frac{1}{4}}(|a_{63}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{64}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{62}|+|a_{65}|+|a_{61}|{{\frac{E_1(A)}{|a_{11}|}}})+{\frac{3}{4}}C_6(A){{\frac{E_6(A)}{|a_{66}|}}}\\ & = {\frac{1}{4}}\times(0.1\times{{\frac{0.425-0.4}{0.425}}}+0\times{{\frac{0.325-0.3}{0.325}}}+0.4+0.2+0\times{{\frac{0.4}{1}}})+{\frac{3}{4}}\times0.9\times{{\frac{0.85}{2}}}\\ & = 0.1515+0.2869 = 0.4383. \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} &\; \; \; \; |a_{33}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}} = 0.4\times{{\frac{0.425-0.4}{0.425}}} = 0.0235\\ & < {\frac{1}{4}}(|a_{34}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{32}|+|a_{35}|+|a_{31}|{{\frac{P_1(A)}{|a_{11}|}}}+|a_{36}|{{\frac{P_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_3(A){{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}\\ & = {\frac{1}{4}}(0.1\times{{\frac{0.325-0.3}{0.325}}}+0+0+0.1\times{{\frac{0.1734}{1}}}+0.3\times{{\frac{0.4383}{2}}})+{\frac{3}{4}}\times0.4\times{{\frac{0.425-0.4}{0.425}}}\\ & = 0.0227+0.0176 = 0.0403, \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} &\; \; \; \; |a_{44}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}} = 0.3\times{{\frac{0.325-0.3}{0.325}}} = 0.0231\\ & < {\frac{1}{4}}(|a_{43}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{42}|+|a_{45}|+|a_{41}|{{\frac{P_1(A)}{|a_{11}|}}}+|a_{46}|{{\frac{P_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_4(A){{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}\\ & = {\frac{1}{4}}(0.1\times{{\frac{0.425-0.4}{0.425}}}+0+0+0.1\times{{\frac{0.1734}{1}}}+0.2\times{{\frac{0.4383}{2}}})+{\frac{3}{4}}\times0.3\times{{\frac{0.325-0.3}{0.325}}}\\ & = 0.0168+0.0173 = 0.0341. \end{array} \end{eqnarray}
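The two failed tests can be reproduced numerically (a Python sketch using the values of E_i , P_1(A) and P_6(A) computed above; zero matrix entries are omitted from the sums):

```python
# E_i = (1/4) R_i + (3/4) C_i for Example 2, as computed above.
E3 = 0.25 * 0.5 + 0.75 * 0.4   # 0.425
E4 = 0.25 * 0.4 + 0.75 * 0.3   # 0.325
a33, a44, C3, C4 = 0.4, 0.3, 0.4, 0.3
P1, P6 = 0.1734, 0.4383        # rounded values from the text above

# Row 3 test of the criterion in [16].
lhs3 = a33 * (E3 - a33) / E3   # ~0.0235
rhs3 = 0.25*(0.1*(E4 - a44)/E4 + 0.1*P1/1 + 0.3*P6/2) + 0.75*C3*(E3 - a33)/E3

# Row 4 test.
lhs4 = a44 * (E4 - a44) / E4   # ~0.0231
rhs4 = 0.25*(0.1*(E3 - a33)/E3 + 0.1*P1/1 + 0.2*P6/2) + 0.75*C4*(E4 - a44)/E4

# Both strict inequalities go the wrong way, so the theorem in [16] does not apply.
assert lhs3 < rhs3 and lhs4 < rhs4
```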
So the matrix A does not satisfy the conditions of the theorem in [16], thus it cannot be judged using the method in [16].
Using Theorem 3 in [10], we can obtain
x_1 = {\frac{{\frac{1}{4}}R_1(A)+{\frac{3}{4}}C_1(A)}{|a_{11}|}} = {\frac{{\frac{1}{4}\times0.4}+{\frac{3}{4}}\times0.4}{1}} = 0.4,
x_2 = {\frac{{\frac{1}{4}}R_2(A)+{\frac{3}{4}}C_2(A)}{|a_{22}|}} = {\frac{{\frac{1}{4}\times0.6}+{\frac{3}{4}}\times0.6}{0.6}} = 1,
x_3 = {\frac{{\frac{1}{4}}R_3(A)+{\frac{3}{4}}C_3(A)}{|a_{33}|}} = {\frac{{\frac{1}{4}\times0.5}+{\frac{3}{4}}\times0.4}{0.4}} = {\frac{0.425}{0.4}} = 1.0625,
x_4 = {\frac{{\frac{1}{4}}R_4(A)+{\frac{3}{4}}C_4(A)}{|a_{44}|}} = {\frac{{\frac{1}{4}\times0.4}+{\frac{3}{4}}\times0.3}{0.3}} = 1.0833,
x_5 = {\frac{{\frac{1}{4}}R_5(A)+{\frac{3}{4}}C_5(A)}{|a_{55}|}} = {\frac{{\frac{1}{4}\times0.5}+{\frac{3}{4}}\times0.5}{0.5}} = 1,
x_6 = {\frac{{\frac{1}{4}}R_6(A)+{\frac{3}{4}}C_6(A)}{|a_{66}|}} = {\frac{{\frac{1}{4}\times0.7}+{\frac{3}{4}}\times0.9}{2}} = {\frac{0.85}{2}} = 0.425.
It is known by calculation that
\begin{eqnarray} \begin{array}{lll} |a_{33}|& = 0.4\\ & < {\frac{x_3}{x_3-1}}{\frac{1}{4}}[|a_{32}|+|a_{35}|+(1-{\frac{1}{x_4}})|a_{34}|+x_1|a_{31}|+x_6|a_{36}|]+{\frac{3}{4}}C_3(A)\\ & = {\frac{1.0625}{1.0625-1}}\times{\frac{1}{4}}\times[0+0+(1-{\frac{0.3}{0.325}})\times0.1+0.4\times0.1+0.425\times0.3]+{\frac{3}{4}}\times0.4\\ & = 0.7446+0.3 = 1.0446. \end{array} \end{eqnarray}
\begin{eqnarray} \begin{array}{lll} |a_{44}|& = 0.3\\ & < {\frac{x_4}{x_4-1}}{\frac{1}{4}}[|a_{42}|+|a_{45}|+(1-{\frac{1}{x_3}})|a_{43}|+x_1|a_{41}|+x_6|a_{46}|]+{\frac{3}{4}}C_4(A)\\ & = {\frac{{\frac{0.325}{0.3}}}{{\frac{0.325}{0.3}}-1}}\times{\frac{1}{4}}\times[0+0+(1-{\frac{0.4}{0.425}})\times0.1+0.4\times0.1+0.425\times0.2]+{\frac{3}{4}}\times0.3\\ & = 0.4254+0.225 = 0.6504. \end{array} \end{eqnarray}
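This failure can also be confirmed numerically (a Python sketch using the x_i values above; x_2 = x_5 = 1 do not enter the two tests):

```python
# x_i = ((1/4) R_i + (3/4) C_i) / |a_ii| for Example 2, as computed above.
x1 = 0.4
x3 = 0.425 / 0.4    # 1.0625
x4 = 0.325 / 0.3    # ~1.0833
x6 = 0.425

# Row 3 bound from Theorem 3 in [10] (zero entries a_32 = a_35 = 0 omitted).
bound3 = x3/(x3 - 1) * 0.25 * ((1 - 1/x4)*0.1 + x1*0.1 + x6*0.3) + 0.75*0.4

# Row 4 bound (zero entries a_42 = a_45 = 0 omitted).
bound4 = x4/(x4 - 1) * 0.25 * ((1 - 1/x3)*0.1 + x1*0.1 + x6*0.2) + 0.75*0.3

# |a_33| = 0.4 and |a_44| = 0.3 both fall below their bounds,
# so the criterion in [10] cannot identify A.
assert 0.4 < bound3 and 0.3 < bound4
```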
Through calculation, we know that the matrix A does not meet the criteria in [10] either, so it cannot be determined by applying the method in [10].
In this paper, based on the relevant properties of two classes of \alpha -diagonally dominant matrices, we obtain several sufficient conditions for determining nonsingular H -matrices, which improve the existing results and extend the determination theory of nonsingular H -matrices.
This work was supported by the Science and Technology Research Project of the Education Department of Jilin Province of China (JJKH20220041KJ), the Natural Sciences Program of Science and Technology of Jilin Province of China (20190201139JC) and the Graduate Innovation Project of Beihua University (2022003, 2021004).
The authors declare that there are no conflicts of interest.
[1] R. Bru, C. Corral, I. Giménez, J. Mas, Classes of general H-matrices, Linear Algebra Appl., 429 (2008), 2358–2366. https://doi.org/10.1016/j.laa.2007.10.030
[2] M. Alanelli, A. Hadjidimos, On iterative criteria for H- and non-H-matrices, Appl. Math. Comput., 188 (2007), 19–30. https://doi.org/10.1016/j.amc.2006.09.089
[3] A. Berman, R. Plemmons, Nonnegative matrices in the mathematical sciences, Philadelphia: SIAM Press, 1994. https://doi.org/10.1137/1.9781611971262
[4] J. Zhao, Q. Liu, C. Li, Y. Li, Dashnic-Zusmanovich type matrices: a new subclass of nonsingular H-matrices, Linear Algebra Appl., 552 (2018), 277–287. https://doi.org/10.1016/j.laa.2018.04.028
[5] M. Li, Y. Sun, Practical criteria for H-matrices, Appl. Math. Comput., 211 (2009), 427–433. https://doi.org/10.1016/j.amc.2009.01.083
[6] Y. Sun, Sufficient conditions for generalized diagonally dominant matrices (Chinese), Numerical Mathematics A Journal of Chinese Universities, 19 (1997), 216–223.
[7] Y. Sun, An improvement on a theorem by Ostrowski and its applications (Chinese), Northeastern Math. J., 7 (1991), 497–502.
[8] L. Wang, B. Xi, F. Qi, Necessary and sufficient conditions for identifying strictly geometrically \alpha-bidiagonally dominant matrices, U.P.B. Sci. Bull. Series A, 76 (2014), 57–66.
[9] J. Li, W. Zhang, Criteria for H-matrices (Chinese), Numerical Mathematics A Journal of Chinese Universities, 21 (1999), 264–268.
[10] R. Jiang, New criteria for nonsingular H-matrices (Chinese), Chinese Journal of Engineering Mathematics, 28 (2011), 393–400.
[11] G. Han, C. Zhang, H. Gao, Discussion for identifying H-matrices, J. Phys.: Conf. Ser., 1288 (2019), 012031. https://doi.org/10.1088/1742-6596/1288/1/012031
[12] X. Chen, Q. Tuo, A set of new criteria for nonsingular H-matrices (Chinese), Chinese Journal of Engineering Mathematics, 37 (2020), 325–334.
[13] T. Gan, T. Huang, Simple criteria for nonsingular H-matrices, Linear Algebra Appl., 374 (2003), 317–326. https://doi.org/10.1016/S0024-3795(03)00646-3
[14] T. Gan, T. Huang, Practical sufficient conditions for nonsingular H-matrices (Chinese), Mathematica Numerica Sinica, 26 (2004), 109–116.
[15] Q. Tuo, L. Zhu, J. Liu, One type of new criteria conditions for nonsingular H-matrices (Chinese), Mathematica Numerica Sinica, 30 (2008), 177–182.
[16] Y. Yang, M. Liang, A new type of determinations for nonsingular H-matrices (Chinese), Journal of Hexi University, 37 (2021), 20–25. https://doi.org/10.13874/j.cnki.62-1171/g4.2021.02.004
[17] J. Liu, J. Li, Z. Huang, X. Kong, Some properties of Schur complements and diagonal-Schur complements of diagonally dominant matrices, Linear Algebra Appl., 428 (2008), 1009–1030. https://doi.org/10.1016/j.laa.2007.09.008
[18] L. Cvetković, H-matrix theory vs. eigenvalue localization, Numer. Algor., 42 (2006), 229–245. https://doi.org/10.1007/s11075-006-9029-3