Research article

New criteria for nonsingular H-matrices

  • Received: 06 March 2023 Revised: 27 April 2023 Accepted: 07 May 2023 Published: 19 May 2023
  • MSC : 15A57

  • In this paper, according to the theory of two classes of α-diagonally dominant matrices, the row index set of the matrix is divided properly, and then some positive diagonal matrices are constructed. Furthermore, some new criteria for nonsingular H-matrix are obtained. Finally, numerical examples are given to illustrate the effectiveness of the proposed criteria.

    Citation: Panpan Liu, Haifeng Sang, Min Li, Guorui Huang, He Niu. New criteria for nonsingular H-matrices[J]. AIMS Mathematics, 2023, 8(8): 17484-17502. doi: 10.3934/math.2023893




Let \mathbb{C}^{n\times n} be the set of n\times n complex matrices and A = (a_{ij})\in \mathbb{C}^{n\times n} . For any i,j\in N = \{1,2,\ldots,n\} , denote

R_i(A) = \sum\limits_{j\in N,j\neq i}|a_{ij}|,\quad C_i(A) = \sum\limits_{j\in N,j\neq i}|a_{ji}|.

Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If |a_{ii}|\geq R_i(A)\;(i\in N) , then A is called a diagonally dominant matrix, denoted by A\in D_0 . If |a_{ii}| > R_i(A)\;(i\in N) , then A is called a strictly diagonally dominant matrix, denoted by A\in D .
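To make the definitions above concrete, strict diagonal dominance is easy to test numerically. The helper below is our own illustrative sketch (the function name and the use of NumPy are our choices, not the paper's):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > R_i(A) for every row i, i.e., A is in D."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    R = np.sum(np.abs(A), axis=1) - d  # off-diagonal row sums R_i(A)
    return bool(np.all(d > R))

print(is_strictly_diagonally_dominant([[3.0, 1.0], [1.0, 2.0]]))  # True
print(is_strictly_diagonally_dominant([[1.0, 2.0], [0.0, 1.0]]))  # False
```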

If there is a positive diagonal matrix X such that AX\in D , then A is called a generalized strictly diagonally dominant matrix, denoted by A\in D^{*} , and also called a nonsingular H-matrix.

A matrix A is said to be an H-matrix if its comparison matrix is an M-matrix. Throughout this paper, we work with H-matrices whose comparison matrices are nonsingular; these matrices are called the invertible class of H-matrices in [].

Since a nonsingular H-matrix has nonzero diagonal entries, we always assume that a_{ii}\neq 0\;(i\in N) .

Nonsingular H-matrices form a special class of matrices that is widely used in matrix theory. Many practical problems reduce to solving one or several systems of linear algebraic equations with large sparse coefficient matrices, and in the solution process it is often necessary to assume that the coefficient matrix is a nonsingular H-matrix. Nonsingular H-matrices also have important practical value in many fields, such as economic mathematics, electric system theory, control theory and computational mathematics [2,3]. However, it is difficult to verify directly whether a given matrix is a nonsingular H-matrix, so finding practical criteria is a meaningful topic in the study of matrix theory. Many scholars have conducted in-depth research on sufficient conditions and have given many simple and practical results [4,5,6,7,8,9,10,11,12,13,14,15,16].

In this paper, we consider two different classes of \alpha-diagonally dominant matrices defined in [6,7]. To avoid confusion, they are called \alpha_1-diagonally dominant matrices and \alpha_2-diagonally dominant matrices, respectively.

Definition 1. [6] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there exists \alpha\in[0,1] such that

|a_{ii}|\geq \alpha R_i(A)+(1-\alpha)C_i(A),\quad \forall i\in N,

then A is called an \alpha_1-diagonally dominant matrix, denoted by A\in D_{\alpha_1}^{0} . If there exists \alpha\in[0,1] such that

|a_{ii}| > \alpha R_i(A)+(1-\alpha)C_i(A),\quad \forall i\in N, \quad (1.1)

then A is called a strictly \alpha_1-diagonally dominant matrix, denoted by A\in D_{\alpha_1} .

Definition 2. [7] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there exists \alpha\in[0,1] such that

|a_{ii}|\geq [R_i(A)]^{\alpha}[C_i(A)]^{1-\alpha},\quad \forall i\in N,

then A is called an \alpha_2-diagonally dominant matrix, denoted by A\in D_{\alpha_2}^{0} . If there exists \alpha\in[0,1] such that

|a_{ii}| > [R_i(A)]^{\alpha}[C_i(A)]^{1-\alpha},\quad \forall i\in N, \quad (1.2)

then A is called a strictly \alpha_2-diagonally dominant matrix, denoted by A\in D_{\alpha_2} .
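Definitions 1 and 2 can be checked directly for a given \alpha . The sketch below is our own illustration (function names are assumptions, not the paper's notation); it also shows that, by the arithmetic-geometric mean inequality, condition (1.1) implies condition (1.2) but not conversely:

```python
import numpy as np

def _diag_and_sums(A):
    A = np.abs(np.asarray(A, dtype=float))
    d = np.diag(A)
    return d, A.sum(axis=1) - d, A.sum(axis=0) - d  # |a_ii|, R_i(A), C_i(A)

def is_strict_alpha1(A, alpha):
    """Strict alpha_1-dominance (1.1): |a_ii| > alpha*R_i + (1-alpha)*C_i."""
    d, R, C = _diag_and_sums(A)
    return bool(np.all(d > alpha * R + (1 - alpha) * C))

def is_strict_alpha2(A, alpha):
    """Strict alpha_2-dominance (1.2): |a_ii| > R_i^alpha * C_i^(1-alpha)."""
    d, R, C = _diag_and_sums(A)
    return bool(np.all(d > R**alpha * C**(1 - alpha)))

# By AM-GM, (1.1) implies (1.2); the converse can fail:
A = [[2.2, 4.0], [1.0, 2.2]]      # R = (4, 1), C = (1, 4)
print(is_strict_alpha1(A, 0.5))   # False: 0.5*4 + 0.5*1 = 2.5 > 2.2
print(is_strict_alpha2(A, 0.5))   # True:  sqrt(4*1)    = 2.0 < 2.2
```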

At present, many scholars have studied the properties and determination methods of \alpha_1- (and \alpha_2-) diagonally dominant matrices, see [5,6,7,8,9,10,11,17]. The \alpha_2-diagonally dominant matrix is called a geometrically \alpha-diagonally dominant matrix in [8], an \alpha-chain diagonally dominant matrix in [9], and a product \alpha-diagonally dominant matrix in [17].

In Definitions 1 and 2, if \alpha = 1 , then (1.1) and (1.2) give |a_{ii}| > R_i(A),\;i\in N , that is, A\in D . If \alpha = 0 , then (1.1) and (1.2) give |a_{ii}| > C_i(A),\;i\in N , that is, A^{T}\in D . Therefore, if \alpha = 0 or 1 , A is a nonsingular H-matrix, so only the case \alpha\in(0,1) is considered in this paper.

If A is a strictly \alpha_1- (or \alpha_2-) diagonally dominant matrix, then A\in D^{*} [6,7]. So strictly \alpha_1- (or \alpha_2-) diagonally dominant matrices also form a class of nonsingular H-matrices. These two classes are both subclasses of the nonsingular H-matrices, and they have equivalent theorems in the field of eigenvalue localization. It is easy to see that the class of \alpha_1-diagonally dominant matrices is contained in that of \alpha_2-diagonally dominant matrices [18].

    In this paper, by using the properties of α1-(or α2-) diagonally dominant matrix, we give some criteria for determining nonsingular H-matrix. Finally, numerical examples are used to compare the criteria obtained in this paper with the existing results.

    Some relevant concepts and important conclusions are given in this section.

Definition 3. [9] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there is a positive diagonal matrix X such that AX\in D_{\alpha_1} , then A is called a generalized \alpha_1-diagonally dominant matrix, denoted by A\in D_{\alpha_1}^{*} .

Definition 4. [7] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there is a positive diagonal matrix X such that AX\in D_{\alpha_2} , then A is called a generalized \alpha_2-diagonally dominant matrix, denoted by A\in D_{\alpha_2}^{*} .

Definition 5. [10] Let A = (a_{ij})\in \mathbb{C}^{n\times n} be an irreducible matrix. If there exists \alpha\in[0,1] such that |a_{ii}|\geq \alpha R_i(A)+(1-\alpha)C_i(A),\;i\in N , and at least one strict inequality holds, then A is said to be an irreducible \alpha_1-diagonally dominant matrix.

    Here, similar to irreducible α1-diagonally dominant matrix, we give the definition of irreducible α2-diagonally dominant matrix.

Definition 6. Let A = (a_{ij})\in \mathbb{C}^{n\times n} be an irreducible matrix. If there exists \alpha\in[0,1] such that |a_{ii}|\geq [R_i(A)]^{\alpha}[C_i(A)]^{1-\alpha},\;i\in N , and at least one strict inequality holds, then A is said to be an irreducible \alpha_2-diagonally dominant matrix.

Lemma 1. [9] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If A is a generalized \alpha_1-diagonally dominant matrix, then A is a nonsingular H-matrix.

Lemma 2. [7] Let A = (a_{ij})\in \mathbb{C}^{n\times n} . Then A is a generalized strictly diagonally dominant matrix if and only if A is a generalized \alpha_2-diagonally dominant matrix.

Lemma 3. [10] Let A\in D_{\alpha_1}^{0} be an irreducible matrix. If there is at least one i\in N such that |a_{ii}| > \alpha R_i(A)+(1-\alpha)C_i(A) , then A\in D^{*} .

Lemma 4. [11] Let A\in D_{\alpha_2}^{0} be an irreducible matrix. If there is at least one i\in N such that |a_{ii}| > [R_i(A)]^{\alpha}[C_i(A)]^{1-\alpha} , then A\in D^{*} .

Lemma 5. [3] Let A = (a_{ij})\in \mathbb{C}^{n\times n} and X = \text{diag}(x_1,x_2,\ldots,x_n)\;(x_i > 0,\;i = 1,2,\ldots,n) . If AX is a nonsingular H-matrix, then A is a nonsingular H-matrix.

Denote

\Lambda_i(A) = \alpha R_i(A)+(1-\alpha)C_i(A),\quad \alpha\in(0,1),

M_1(\alpha) = \{i\in N:|a_{ii}| = \Lambda_i(A)\},\quad M_2(\alpha) = \{i\in N:0 < |a_{ii}| < \Lambda_i(A)\},\quad M_3(\alpha) = \{i\in N:|a_{ii}| > \Lambda_i(A)\}.

It is obvious that M_i(\alpha)\cap M_j(\alpha) = \emptyset\;(i\neq j) and M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha) = N . With the convention that a sum over the empty index set equals 0 , denote

r = \max\limits_{i\in M_3(\alpha)}\left\{\frac{\alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\right)}{|a_{ii}|-\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|-(1-\alpha)C_i(A)}\right\},\quad s = \max\limits_{i\in M_2(\alpha)}\left\{\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\right\},\quad \delta = \max\{r,s\},

T_{i,r}(A) = \alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+r\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\right)+(1-\alpha)rC_i(A),\quad i\in M_3(\alpha),

h = \max\limits_{i\in M_3(\alpha)}\left\{\frac{\delta\alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\right)}{T_{i,r}(A)-\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}}\right\}.
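The partition M_1(\alpha), M_2(\alpha), M_3(\alpha) and the constants r, s, \delta can be computed mechanically from the formulas above. The following is an illustrative sketch of our own (names are not the paper's notation; it assumes M_2(\alpha) and M_3(\alpha) are nonempty, as required for s and r to be defined):

```python
import numpy as np

def partition_and_constants(A, alpha):
    """Partition indices by comparing |a_ii| with
    Lambda_i(A) = alpha*R_i(A) + (1-alpha)*C_i(A), then evaluate r, s, delta."""
    absA = np.abs(np.asarray(A, dtype=float))
    n = absA.shape[0]
    d = np.diag(absA)
    R = absA.sum(axis=1) - d
    C = absA.sum(axis=0) - d
    Lam = alpha * R + (1 - alpha) * C
    M1 = [i for i in range(n) if np.isclose(d[i], Lam[i])]
    M2 = [i for i in range(n) if d[i] < Lam[i] and i not in M1]
    M3 = [i for i in range(n) if d[i] > Lam[i] and i not in M1]
    r = max(
        alpha * sum(absA[i, j] for j in M1 + M2)
        / (d[i] - alpha * sum(absA[i, j] for j in M3 if j != i)
           - (1 - alpha) * C[i])
        for i in M3
    )
    s = max((Lam[i] - d[i]) / Lam[i] for i in M2)
    return M1, M2, M3, r, s, max(r, s)

# A toy 3x3 example with alpha = 0.5 (indices are 0-based here):
M1, M2, M3, r, s, delta = partition_and_constants(
    [[2.0, 1.0, 1.0], [1.0, 1.0, 1.0], [0.1, 0.1, 5.0]], 0.5)
print(M1, M2, M3)
```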

Theorem 1. Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there is \alpha\in(0,1) , such that for any i\in M_2(\alpha) ,

|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} > \alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} \quad (3.1)

holds, then A is a nonsingular H-matrix.

Proof. We are going to prove the following inequality for all indices in each set M_1(\alpha), M_2(\alpha) and M_3(\alpha) :

|b_{ii}| > \Lambda_i(B) = \alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha) = N.

It can be seen from the previous notations that 0\leq r < 1 and 0 < \delta < 1 . From the definition of T_{i,r}(A) , we can get that for any i\in M_3(\alpha) ,

r|a_{ii}|\geq \alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+r\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\right)+(1-\alpha)rC_i(A)

holds, that is, T_{i,r}(A)\leq r|a_{ii}|,\;i\in M_3(\alpha) . Therefore

0\leq \frac{T_{i,r}(A)}{|a_{ii}|}\leq r\leq \delta < 1,\quad i\in M_3(\alpha).

Furthermore, according to the definition of T_{i,r}(A) , for any i\in M_3(\alpha) ,

\alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\right) = T_{i,r}(A)-r\left(\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\right).

So

\frac{\delta\alpha\left(\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\right)}{T_{i,r}(A)-\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}} < \frac{T_{i,r}(A)-r\left(\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\right)}{T_{i,r}(A)-\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}-(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}}\leq 1.

According to the definition of h , we can get 0\leq h < 1 , and for all i\in M_3(\alpha) ,

hT_{i,r}(A)\geq \alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+h\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}. \quad (3.2)

By (3.1), for all i\in M_2(\alpha) , we can get

|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}-\left(\alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\right) > 0.

Let

k_i = |a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}-\left(\alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\right)

and

w_i = \frac{k_i}{\alpha\sum\limits_{j\in M_3(\alpha)}|a_{ij}|},\quad i\in M_2(\alpha). \quad (3.3)

In particular, if \sum_{j\in M_3(\alpha)}|a_{ij}| = 0 , denote w_i = +\infty . According to (3.3), w_i > 0,\;i\in M_2(\alpha) . Notice that

0\leq \frac{T_{i,r}(A)}{|a_{ii}|}h < \frac{T_{i,r}(A)}{|a_{ii}|}\leq \delta < 1,\quad i\in M_3(\alpha).

Thus, we can take a sufficiently small positive number \eta satisfying both

0 < \eta < \min\limits_{i\in M_2(\alpha)}\{w_i\}

and

\max\limits_{i\in M_3(\alpha)}\left\{\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\right\} < \delta < 1.

Construct a positive diagonal matrix X = \text{diag}(x_1,x_2,\ldots,x_n) , where

x_{i} = \left\{\begin{array}{ll} \delta, & i\in M_1(\alpha),\\ \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}, & i\in M_2(\alpha),\\ \frac{T_{i,r}(A)}{|a_{ii}|}h+\eta, & i\in M_3(\alpha), \end{array}\right.

and let B = AX = (b_{ij}) .

For any i\in M_1(\alpha) , it can be obtained from 0 < \delta < 1 , 0 < \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\leq \delta < 1\;(i\in M_2(\alpha)) and 0 < \frac{T_{i,r}(A)}{|a_{ii}|}h+\eta < \delta < 1\;(i\in M_3(\alpha)) that

\Lambda_i(B) = \alpha\left(\delta\sum\limits_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\left(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\right)\right)+(1-\alpha)\delta C_i(A)
< \alpha\left(\delta\sum\limits_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\delta\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+\delta\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\right)+(1-\alpha)\delta C_i(A)
= \delta(\alpha R_i(A)+(1-\alpha)C_i(A)) = \delta\Lambda_i(A) = \delta|a_{ii}| = |b_{ii}|.

For any i\in M_2(\alpha) , if \sum_{j\in M_3(\alpha)}|a_{ij}| = 0 , it can be deduced from (3.1) that

\Lambda_i(B) = \alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\left(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\right)\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}
= \alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}
< |a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} = |b_{ii}|.

If \sum_{j\in M_3(\alpha)}|a_{ij}|\neq 0 , it can be obtained from (3.3) that

\Lambda_i(B) = \alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\left(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\right)\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}
= \eta\alpha\sum\limits_{j\in M_3(\alpha)}|a_{ij}|+\alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}
< w_i\alpha\sum\limits_{j\in M_3(\alpha)}|a_{ij}|+\alpha\left(\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right)+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}
= |a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} = |b_{ii}|.

For any i\in M_3(\alpha) , it can be deduced from 0 < \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\leq \delta < 1\;(i\in M_2(\alpha)) and (3.2) that

\Lambda_i(B) = \alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\left(\frac{T_{j,r}(A)}{|a_{jj}|}h+\eta\right)\right]+(1-\alpha)C_i(A)\left(\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\right)
= \eta\left[\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\right]+\alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}
\leq \eta\left[\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\right]+\alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+h\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)hC_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}
\leq \eta\left[\alpha\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|+(1-\alpha)C_i(A)\right]+hT_{i,r}(A)
\leq \eta[\alpha R_i(A)+(1-\alpha)C_i(A)]+hT_{i,r}(A) < \eta|a_{ii}|+hT_{i,r}(A) = \eta|a_{ii}|+|a_{ii}|\frac{T_{i,r}(A)}{|a_{ii}|}h = |a_{ii}|\left(\frac{T_{i,r}(A)}{|a_{ii}|}h+\eta\right) = |b_{ii}|.

In conclusion, the following inequalities are always valid:

|b_{ii}| > \Lambda_i(B) = \alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha) = N.

    By Definition 1, matrix B is a strictly α1-diagonally dominant matrix, so matrix A is a generalized α1-diagonally dominant matrix. According to Lemma 1, A is a nonsingular H-matrix.

Remark 1. If \alpha = 1 , Theorem 1 is equivalent to Theorem 4 in [12]. At the same time, in Theorem 1 we improve the conditions of the theorems in [13,14,15]. So Theorem 1 in this paper is a further supplement to the determination methods of nonsingular H-matrices.

Theorem 2. Let A = (a_{ij})\in \mathbb{C}^{n\times n} be an irreducible matrix. If there is \alpha\in(0,1) , such that for any i\in M_2(\alpha) ,

|a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\geq \alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}, \quad (3.4)

and at least one strict inequality in (3.4) holds, then matrix A is a nonsingular H-matrix.

Proof. We are going to prove the following inequality for all indices in each set M_1(\alpha), M_2(\alpha) and M_3(\alpha) :

|b_{ii}|\geq \Lambda_i(B) = \alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha) = N.

Construct a positive diagonal matrix X = \text{diag}(x_1,x_2,\ldots,x_n) , where

x_{i} = \left\{\begin{array}{ll} \delta, & i\in M_1(\alpha),\\ \frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}, & i\in M_2(\alpha),\\ \frac{T_{i,r}(A)}{|a_{ii}|}h, & i\in M_3(\alpha). \end{array}\right.

And denote B = AX = (b_{ij}) . Similar to the proof of Theorem 1, for any i\in M_1(\alpha) ,

\Lambda_i(B) = \alpha\left[\delta\sum\limits_{j\in M_1(\alpha),j\neq i}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)\delta C_i(A)\leq \delta[\alpha R_i(A)+(1-\alpha)C_i(A)] = \delta\Lambda_i(A) = \delta|a_{ii}| = |b_{ii}|.

For any i\in M_2(\alpha) , it can be obtained from (3.4) that

\Lambda_i(B) = \alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha),j\neq i}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+h\sum\limits_{j\in M_3(\alpha)}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)C_i(A)\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)}\leq |a_{ii}|\frac{\Lambda_i(A)-|a_{ii}|}{\Lambda_i(A)} = |b_{ii}|.

For any i\in M_3(\alpha) , by (3.2) we can obtain

\Lambda_i(B) = \alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\sum\limits_{j\in M_2(\alpha)}|a_{ij}|\frac{\Lambda_j(A)-|a_{jj}|}{\Lambda_j(A)}+\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}h\right]+(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}h
\leq \alpha\left[\delta\sum\limits_{j\in M_1(\alpha)}|a_{ij}|+\delta\sum\limits_{j\in M_2(\alpha)}|a_{ij}|+h\sum\limits_{j\in M_3(\alpha),j\neq i}|a_{ij}|\frac{T_{j,r}(A)}{|a_{jj}|}\right]+(1-\alpha)C_i(A)\frac{T_{i,r}(A)}{|a_{ii}|}h
< hT_{i,r}(A) = |a_{ii}|\frac{T_{i,r}(A)}{|a_{ii}|}h = |b_{ii}|.

To sum up, we can always get the following inequalities:

|b_{ii}|\geq \Lambda_i(B) = \alpha R_i(B)+(1-\alpha)C_i(B),\quad i\in M_1(\alpha)\cup M_2(\alpha)\cup M_3(\alpha) = N.

Notice that there is at least one i_0\in M_3(\alpha) such that |b_{i_0i_0}| > \Lambda_{i_0}(B) , so B is an irreducible \alpha_1-diagonally dominant matrix. According to Lemma 3, B is a nonsingular H-matrix. Therefore, A is also a nonsingular H-matrix by Lemma 5.

Let

Q_i(A) = (R_i(A))^{\alpha}(C_i(A))^{1-\alpha},\quad \alpha\in(0,1),

N_1(\alpha) = \{i\in N:0 < |a_{ii}| < Q_i(A)\},\quad N_2(\alpha) = \{i\in N:|a_{ii}| = Q_i(A) > 0\},\quad N_3(\alpha) = \{i\in N:|a_{ii}| > Q_i(A)\}.

It is obvious that N_i(\alpha)\cap N_j(\alpha) = \emptyset\;(i\neq j) and N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha) = N .

For any i\in N_3(\alpha) , denote

P_i(A) = \left(\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\right)(C_i(A))^{\frac{1-\alpha}{\alpha}}.

Obviously,

\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}} = \left(\frac{(P_i(A))^{\alpha}}{|a_{ii}|}\right)^{\frac{1}{\alpha}} = \left(\frac{\left(\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\right)^{\alpha}(C_i(A))^{1-\alpha}}{|a_{ii}|}\right)^{\frac{1}{\alpha}} < \left(\frac{(R_i(A))^{\alpha}(C_i(A))^{1-\alpha}}{|a_{ii}|}\right)^{\frac{1}{\alpha}} < 1.

Theorem 3. Let A = (a_{ij})\in \mathbb{C}^{n\times n} . If there exists \alpha\in(0,1) , such that

|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)} > \left[\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right]^{\alpha}\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{1-\alpha} \quad (4.1)

holds for any i\in N_1(\alpha) , then the matrix A is a nonsingular H-matrix.

Proof. We are going to prove the following inequality for all indices in each set N_1(\alpha), N_2(\alpha) and N_3(\alpha) :

|b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\quad i\in N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha) = N.

For any i\in N_1(\alpha) , denote

g_i(A) = \left(\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right)\left(C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right)^{\frac{1-\alpha}{\alpha}},

G_i(A) = \frac{\left(|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right)^{\frac{1}{\alpha}}-g_i(A)}{\left(\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\right)\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{\frac{1-\alpha}{\alpha}}}.

It is known by (4.1) that G_i(A) > 0,\;i\in N_1(\alpha) . In particular, if \sum_{j\in N_3(\alpha)}|a_{ij}| = 0\;(i\in N_1(\alpha)) , we set G_i(A) = +\infty . Take a sufficiently small positive number \varepsilon satisfying

0 < \varepsilon < \min\left\{G_j(A)\;(j\in N_1(\alpha)),\;1-\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}\;(i\in N_3(\alpha))\right\}. \quad (4.2)

Construct a positive diagonal matrix X = \text{diag}(d_1,d_2,\ldots,d_n) , where

d_{i} = \left\{\begin{array}{ll} \frac{Q_i(A)-|a_{ii}|}{Q_i(A)}, & i\in N_1(\alpha),\\ 1, & i\in N_2(\alpha),\\ \frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon, & i\in N_3(\alpha). \end{array}\right.

It is proved below that B = AX = (b_{ij})\in D_{\alpha_2} . For any i\in N_1(\alpha) , according to (4.1) and (4.2),

R_i(B)(C_i(B))^{\frac{1-\alpha}{\alpha}} = \left[\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\left(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right]\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{\frac{1-\alpha}{\alpha}}
= g_i(A)+\varepsilon\left(\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\right)\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{\frac{1-\alpha}{\alpha}}
< g_i(A)+G_i(A)\left(\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\right)\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{\frac{1-\alpha}{\alpha}}
= \left(|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right)^{\frac{1}{\alpha}} = |b_{ii}|^{\frac{1}{\alpha}},

that is, |b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\;i\in N_1(\alpha) .

For any i\in N_2(\alpha) , because \frac{Q_i(A)-|a_{ii}|}{Q_i(A)} < 1\;(i\in N_1(\alpha)) and \frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon < 1\;(i\in N_3(\alpha)) by (4.2), we have

(R_i(B))^{\alpha}(C_i(B))^{1-\alpha} = \left[\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha),j\neq i}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\left(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right]^{\alpha}[C_i(A)]^{1-\alpha}
< \left(\sum\limits_{j\in N_1(\alpha)}|a_{ij}|+\sum\limits_{j\in N_2(\alpha),j\neq i}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\right)^{\alpha}(C_i(A))^{1-\alpha} = (R_i(A))^{\alpha}(C_i(A))^{1-\alpha} = |a_{ii}| = |b_{ii}|.

For any i\in N_3(\alpha) , obviously

|a_{ii}|^{\frac{1}{\alpha}} > R_i(A)(C_i(A))^{\frac{1-\alpha}{\alpha}} = \left(\sum\limits_{j\in N_1(\alpha)}|a_{ij}|+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\right)(C_i(A))^{\frac{1-\alpha}{\alpha}} > \left(\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\right)(C_i(A))^{\frac{1-\alpha}{\alpha}},

hence

|a_{ii}|^{\frac{1}{\alpha}}\left(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\right) = P_i(A)+\varepsilon|a_{ii}|^{\frac{1}{\alpha}}
> \left(\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}\right)(C_i(A))^{\frac{1-\alpha}{\alpha}}+\varepsilon\left(\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\right)(C_i(A))^{\frac{1-\alpha}{\alpha}}
= \left[\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\left(\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right](C_i(A))^{\frac{1-\alpha}{\alpha}}
\geq \left[\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\left(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right](C_i(A))^{\frac{1-\alpha}{\alpha}}.

Raising both sides of the inequality to the power \alpha , we can get

|a_{ii}|\left(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\right)^{\alpha} > \left[\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\left(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right]^{\alpha}[C_i(A)]^{1-\alpha}.

Further multiplying both sides of the inequality by \left(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\right)^{1-\alpha} , we obtain

|b_{ii}| = |a_{ii}|\left(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\right) > \left[\sum\limits_{j\in N_1(\alpha)}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha),j\neq i}|a_{ij}|\left(\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right]^{\alpha}\left[C_i(A)\left(\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}+\varepsilon\right)\right]^{1-\alpha} = (R_i(B))^{\alpha}(C_i(B))^{1-\alpha},

that is, |b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\;i\in N_3(\alpha) . To sum up, the following inequality is always true:

|b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha},\quad i\in N_1(\alpha)\cup N_2(\alpha)\cup N_3(\alpha) = N,

that is, B\in D_{\alpha_2} . Therefore, we know that A\in D_{\alpha_2}^{*} , and according to Lemma 2, A is a nonsingular H-matrix.

Remark 2. According to (4.1) in Theorem 3, for any i\in N_1(\alpha) , the following inequality is always true:

\frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\left[\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right]^{\alpha}\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{1-\alpha}
\leq \frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\left[\alpha\left(\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right)+(1-\alpha)C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]
= \alpha\frac{Q_i(A)}{Q_i(A)-|a_{ii}|}\left[\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right]+(1-\alpha)C_i(A),

where the first inequality uses the arithmetic-geometric mean inequality x^{\alpha}y^{1-\alpha}\leq \alpha x+(1-\alpha)y .

Therefore, Theorem 3 in this paper improves Theorem 1 in [10] and Theorem 1 in [16].

Theorem 4. Let A = (a_{ij})\in \mathbb{C}^{n\times n} be an irreducible matrix. If there exists \alpha\in(0,1) , such that

|a_{ii}|\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\geq \left[\sum\limits_{j\in N_1(\alpha),j\neq i}|a_{ij}|\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}+\sum\limits_{j\in N_2(\alpha)}|a_{ij}|+\sum\limits_{j\in N_3(\alpha)}|a_{ij}|\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}\right]^{\alpha}\left[C_i(A)\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}\right]^{1-\alpha} \quad (4.3)

    is true for any i\in N_1{(\alpha)} , then the matrix A is a nonsingular H -matrix.

Proof. We are going to prove the following inequality for all indices in each set N_1(\alpha), N_2(\alpha) and N_3(\alpha) .

    |b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \; i\in N_1{(\alpha)}\cup N_2{(\alpha)}\cup N_3{(\alpha)} = N.

    Construct a positive diagonal matrix X = \text{diag} \; (d_1, d_2, \ldots, d_n) , where

    d_{i} = \left\{\begin{array}{ccc} {\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}},&\; \forall i\in N_1{(\alpha)},\\ 1,&\; \forall i\in N_2{(\alpha)},\\ {\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}},&\; \forall i\in N_3{(\alpha)}. \end{array} \right.

    Let B = AX = (b_{ij}) . For any i\in N_1{(\alpha)} , it can be obtained from (4.3) that

    \begin{eqnarray} \begin{array}{lll} (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}& = [\sum\limits_{j\in N_1{(\alpha)},j\neq i}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}]^{\alpha}\cdot[C_i(A){{\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}}}]^{1-\alpha}\\ &\le|a_{ii}|{\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}} = |b_{ii}|, \end{array} \end{eqnarray}

    that is, |b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_1{(\alpha)}.

    For any i\in N_2{(\alpha)} , because {\frac{Q_i(A)-|a_{ii}|}{Q_i(A)}} < 1, \; i\in N_1{(\alpha)} , and {\frac{P_i(A)}{{|a_{ii}|^{\frac{1}{\alpha}}}}} < 1, \; i\in N_3{(\alpha)} , we can obtain that

    \begin{eqnarray} \begin{array}{lll} (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}& = [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij}|{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)},j\neq i}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}]^{\alpha}[C_i(A)]^{1-\alpha}\\ &\le[\sum\limits_{j\in N_1{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_2{(\alpha)},j\neq i}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)}}|a_{ij} |]^{\alpha}[C_i(A)]^{1-\alpha}\\ & = (R_i(A))^{\alpha}(C_i(A))^{1-\alpha} = |a_{ii}| = |b_{ii}|, \end{array} \end{eqnarray}

that is, |b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_2{(\alpha)}.

    For any i\in N_3{(\alpha)} ,

    \begin{eqnarray} \begin{array}{lll} |a_{ii}|^{{\frac{1}{\alpha}}}({\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}})& = P_i(A)\\ & = [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|{{\frac{R_j(A)(C_j(A))^{\frac{1-\alpha}{\alpha}}}{|a_{jj}|^{\frac{1}{\alpha}}}}}](C_i(A))^{\frac{1-\alpha}{\alpha}}\\ & > [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|{{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}}](C_i(A))^{\frac{1-\alpha}{\alpha}}. \end{array} \end{eqnarray}

Raising both sides to the power \alpha and multiplying both sides by ({{\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}}})^{1-\alpha} , we get

\begin{eqnarray} \begin{array}{lll} |b_{ii}|& = |a_{ii}|({\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}})\\ & > [\sum\limits_{j\in N_1{(\alpha)}}|a_{ij} |{{\frac{Q_j(A)-|a_{jj}|}{Q_j(A)}}}+\sum\limits_{j\in N_2{(\alpha)}}|a_{ij}|+\sum\limits_{j\in N_3{(\alpha)},j\neq i}|a_{ij}|({{\frac{P_j(A)}{|a_{jj}|^{\frac{1}{\alpha}}}}})]^{\alpha}[(C_i(A)){\frac{P_i(A)}{|a_{ii}|^{\frac{1}{\alpha}}}}]^{1-\alpha}\\ & = (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \end{array} \end{eqnarray}

    that is, |b_{ii}| > (R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, i\in N_3{(\alpha)}.

    In conclusion, the following inequalities are always valid.

    |b_{ii}|\geq(R_i(B))^{\alpha}(C_i(B))^{1-\alpha}, \; i\in N_1{(\alpha)}\cup N_2{(\alpha)}\cup N_3{(\alpha)} = N.

Noticing that the inequality is strict for i\in N_3{(\alpha)} , B is an irreducible \alpha_2 -diagonally dominant matrix. According to Lemma 4, B is a nonsingular H -matrix. Therefore, A is also a nonsingular H -matrix by Lemma 5.

    Example 1. Let

    A = \left(\begin{matrix} 1 & \frac{18}{19} & 0 & \frac{1}{19} &0\\ \frac{412}{475} & 4 & \frac{58}{19} & 1 &17.08\\ \frac{13}{475} & \frac{20}{19} & 7.76 & 8 &0.92\\ \frac{1}{19} & 0 & \frac{18}{19} & 10 &0\\ \frac{1}{19} & 0 & 0 & \frac{18}{19} &\frac{23}{9}\\ \end{matrix}\right).

    Taking \alpha = \frac{19}{20} , we will show that

    (1) The matrix A satisfies the conditions of Theorem 1 in this paper, so we can determine that A is a nonsingular H -matrix according to Theorem 1.

    (2) A does not meet the criteria in [13,14,15], so it cannot be determined by applying the methods in these papers.

    In fact, for (1), it can be obtained through calculation that

    R_1(A) = C_1(A) = |a_{11}| = 1 = \alpha R_1(A)+(1-\alpha)C_1(A) = \Lambda_1(A),
    R_2(A) = 22, C_2(A) = 2,
    |a_{22}| = 4 < \alpha R_2(A)+(1-\alpha)C_2(A) = \Lambda_2(A) = \frac{19}{20}\times 22+\frac{1}{20}\times 2 = 21.
    R_3(A) = 10, C_3(A) = 4,
    |a_{33}| = 7.76 < \alpha R_3(A)+(1-\alpha)C_3(A) = \Lambda_3(A) = \frac{19}{20}\times 10+\frac{1}{20}\times 4 = 9.7.
    R_4(A) = 1, C_4(A) = 10,
    |a_{44}| = 10 > \alpha R_4(A)+(1-\alpha)C_4(A) = \Lambda_4(A) = \frac{19}{20}\times 1+\frac{1}{20}\times 10 = 1.45.
    R_5(A) = 1, C_5(A) = 18,
|a_{55}| = \frac{23}{9}\approx 2.5556 > \alpha R_5(A)+(1-\alpha)C_5(A) = \Lambda_5(A) = \frac{19}{20}\times 1+\frac{1}{20}\times 18 = 1.85.

    So, M_1{(\alpha)} = \{1\}, M_2{(\alpha)} = \{2, 3\}, M_3{(\alpha)} = \{4, 5\} . And then

\begin{equation*} \begin{aligned} r& = \max\{\frac{\frac{19}{20}(|a_{41}|+|a_{42}|+|a_{43}|{)}}{|a_{44}|-\frac{19}{20}|a_{45}|-\frac{1}{20}C_4(A)},\frac{\frac{19}{20}(|a_{51}|+|a_{52}|+|a_{53}|{)}}{|a_{55}|-\frac{19}{20}|a_{54}|-\frac{1}{20}C_5(A)}\}\\& = \max\{\frac{\frac{19}{20}(\frac{1}{19}+0+\frac{18}{19})}{10-\frac{19}{20}\times 0-\frac{1}{20}\times 10},\frac{\frac{19}{20}(\frac{1}{19}+0+0)}{\frac{23}{9}-\frac{19}{20}\times \frac{18}{19}-\frac{1}{20}\times 18}\} = \max\{\frac{1}{10},\frac{9}{136}\} = \frac{1}{10}, \end{aligned} \end{equation*}
    s = \max\{\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)},\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}\} = \max\{\frac{21-4}{21},\frac{9.7-7.76}{9.7}\} = \frac{17}{21},
    \delta = \max\{r,s\} = \max\{\frac{1}{10}{,} \frac{17}{21}\} = \frac{17}{21}.
    \begin{equation*} \begin{aligned} T_{4,r}(A)& = \alpha(|a_{41}|+|a_{42}|+|a_{43}|+r|a_{45}|)+(1-\alpha)rC_4(A)\\ & = \frac{19}{20}(\frac{1}{19}+0+\frac{18}{19}+\frac{1}{10}\times 0)+\frac{1}{20}\times\frac{1}{10}\times 10 = \frac{19}{20}+\frac{1}{20} = 1, \end{aligned} \end{equation*}
    \begin{equation*} \begin{aligned} T_{5,r}(A)& = \alpha(|a_{51}|+|a_{52}|+|a_{53}|+r|a_{54}|)+(1-\alpha)rC_5(A)\\ & = \frac{19}{20}(\frac{1}{19}+0+0+\frac{1}{10}\times \frac{18}{19})+\frac{1}{20}\times\frac{1}{10}\times 18 = \frac{23}{100} = 0.23. \end{aligned} \end{equation*}
    \frac{\delta\alpha(|a_{41}|+|a_{42}|+|a_{43}|)}{T_{4,r}(A)-\alpha|a_{45}|\frac{T_{5,r}(A)}{|a_{55}|}-(1-\alpha)C_4(A)\frac{T_{4,r}(A)}{|a_{44}|}} = \frac{\frac{17}{21}\times\frac{19}{20}(\frac{1}{19}+0+\frac{18}{19})}{1-\frac{19}{20}\times 0\times \frac{0.23}{23/9}-\frac{1}{20}\times 10\times\frac{1}{10}} = \frac{17}{21},
    \begin{equation*} \begin{aligned} &\frac{\delta\alpha(|a_{51}|+|a_{52}|+|a_{53}|)}{T_{5,r}(A)-\alpha|a_{54}|\frac{T_{4,r}(A)}{|a_{44}|}-(1-\alpha)C_5(A)\frac{T_{5,r}(A)}{|a_{55}|}} = \frac{\frac{17}{21}\times\frac{19}{20}(\frac{1}{19}+0+0)}{0.23-\frac{19}{20}\times\frac{18}{19}\times\frac{1}{10}-\frac{1}{20}\times 18\times\frac{0.23}{23/9}} = \frac{850}{1239}. \end{aligned} \end{equation*}

    Therefore, h = \max\{\frac{17}{21}, \frac{850}{1239}\} = \frac{17}{21} . And notice that

    |a_{22}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)} = 4\times\frac{21-4}{21} = \frac{68}{21} = 3.2381,
    \begin{equation*} \begin{aligned} &\alpha[\delta|a_{21}|+|a_{23}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}+h(|a_{24}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{25}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_2(A)\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}\\ & = \frac{19}{20}\times[\frac{17}{21}\times\frac{412}{475}+\frac{58}{19}\times\frac{1}{5}+\frac{17}{21}\times(1\times\frac{1}{10}+17.08\times\frac{0.23}{23/9})]+\frac{1}{20}\times 2\times\frac{17}{21} = 2.5871, \end{aligned} \end{equation*}
    \begin{equation*} \begin{aligned} &|a_{22}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)} > \alpha[\delta|a_{21}|+|a_{23}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}+h(|a_{24}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{25}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha)C_2(A)\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}. \end{aligned} \end{equation*}
    |a_{33}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)} = 7.76\times\frac{1}{5} = 1.5520,
    \begin{equation*} \begin{aligned} &\alpha[\delta|a_{31}|+|a_{32}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}+h(|a_{34}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{35}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_3(A)\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}\\ & = \frac{19}{20}\times[\frac{17}{21}\times\frac{13}{475}+\frac{20}{19}\times\frac{17}{21}+\frac{17}{21}\times(8\times\frac{1}{10}+0.92\times\frac{0.23}{23/9})]+\frac{1}{20}\times4\times\frac{1}{5} = 1.5495, \end{aligned} \end{equation*}
    \begin{equation*} \begin{aligned} &|a_{33}|\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)} > \alpha[\delta|a_{31}|+|a_{32}|\frac{\Lambda_2(A)-|a_{22}|}{\Lambda_2(A)}+h(|a_{34}|\frac{T_{4,r}(A)}{|a_{44}|}+|a_{35}|\frac{T_{5,r}(A)}{|a_{55}|})]+(1-\alpha) C_3(A)\frac{\Lambda_3(A)-|a_{33}|}{\Lambda_3(A)}. \end{aligned} \end{equation*}

    To sum up, the conditions of Theorem 1 in this paper are satisfied. So we can determine that A is a nonsingular H -matrix.
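As a cross-check of the arithmetic in Example 1, the computations above can be replayed numerically. The script below is our own sketch (variable names are not the paper's notation; indices are 0-based):

```python
import numpy as np

# Data of Example 1, with alpha = 19/20.
alpha = 19 / 20
A = np.array([
    [1,       18/19, 0,     1/19,  0    ],
    [412/475, 4,     58/19, 1,     17.08],
    [13/475,  20/19, 7.76,  8,     0.92 ],
    [1/19,    0,     18/19, 10,    0    ],
    [1/19,    0,     0,     18/19, 23/9 ],
])
absA = np.abs(A)
d = np.diag(absA)
R = absA.sum(axis=1) - d
C = absA.sum(axis=0) - d
Lam = alpha * R + (1 - alpha) * C

M1, M2, M3 = [0], [1, 2], [3, 4]   # the partition found in the example
r, delta = 1 / 10, 17 / 21

def T(i):
    """T_{i,r}(A) for i in M3."""
    return (alpha * (sum(absA[i, j] for j in M1 + M2)
                     + r * sum(absA[i, j] for j in M3 if j != i))
            + (1 - alpha) * r * C[i])

h = max(
    delta * alpha * sum(absA[i, j] for j in M1 + M2)
    / (T(i) - alpha * sum(absA[i, j] * T(j) / d[j] for j in M3 if j != i)
       - (1 - alpha) * C[i] * T(i) / d[i])
    for i in M3
)

# Condition (3.1) for each i in M2:
checks = []
for i in M2:
    lhs = d[i] * (Lam[i] - d[i]) / Lam[i]
    rhs = (alpha * (delta * sum(absA[i, j] for j in M1)
                    + sum(absA[i, j] * (Lam[j] - d[j]) / Lam[j]
                          for j in M2 if j != i)
                    + h * sum(absA[i, j] * T(j) / d[j] for j in M3))
           + (1 - alpha) * C[i] * (Lam[i] - d[i]) / Lam[i])
    checks.append(bool(lhs > rhs))

print(round(T(3), 4), round(T(4), 4), round(h, 4))  # 1.0 0.23 0.8095
print(checks)                                        # [True, True]
```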

    For (2), it is calculated that

    |a_{22}| = 4,
    \begin{equation*} \begin{aligned} &\frac{R_2(A)}{|a_{22}|}(|a_{21}|\frac{a_{11}}{R_1(A)}+|a_{23}|\frac{a_{33}}{R_3(A)}+\frac{R_4(A)}{|a_{44}|}+\frac{R_5(A)}{|a_{55}|}) = \frac{22}{4}(\frac{412}{475}\times\frac{1}{1}+\frac{58}{19}\times\frac{7.76}{10}+\frac{1}{10}+\frac{1}{23/9}) = 20.2622, \end{aligned} \end{equation*}
    |a_{22}| < \frac{R_2(A)}{|a_{22}|}(|a_{21}|\frac{a_{11}}{R_1(A)}+|a_{23}|\frac{a_{33}}{R_3(A)}+\frac{R_4(A)}{|a_{44}|}+\frac{R_5(A)}{|a_{55}|}).

    Then the conditions of the decision theorem in [13] are not satisfied.

    \begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{R_4(A)}{|a_{44}|}+|a_{25}|\frac{R_5(A)}{|a_{55}|})\\ & = \frac{22}{22-4}(\frac{412}{475}+\frac{58}{19}\times\frac{10-7.76}{10}+1\times\frac{1}{10} +17.08\times\frac{1}{23/9}) = 9.2791, \end{aligned} \end{equation*}
    |a_{22}| < \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{R_4(A)}{|a_{44}|}+|a_{25}|\frac{R_5(A)}{|a_{55}|}).

    So the conditions of the decision theorem in [14] are also not satisfied.

    Further calculation shows that

r = \max\{\frac{|a_{41}|+|a_{42}|+|a_{43}|}{|a_{44}|-|a_{45}|},\frac{|a_{51}|+|a_{52}|+|a_{53}|}{|a_{55}|-|a_{54}|}\} = \max\{\frac{\frac{1}{19}+0+\frac{18}{19}}{10-0},\frac{\frac{1}{19}+0+0}{\frac{23}{9}-\frac{18}{19}}\} = \frac{1}{10},
    P_4(A) = |a_{41}|+|a_{42}|+|a_{43}|+r\times|a_{45}| = \frac{1}{19}+0+\frac{18}{19}+\frac{1}{10}\times 0 = 1,
    P_5(A) = |a_{51}|+|a_{52}|+|a_{53}|+r\times|a_{54}| = \frac{1}{19}+0+0+\frac{1}{10}\times\frac{18}{19} = \frac{14}{95}.
    |a_{33}| = 7.76,
    \begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_3(A)}{R_3(A)-|a_{33}|}(|a_{31}|+|a_{32}|\frac{R_2(A)-|a_{22}|}{R_2(A)}+|a_{34}|\frac{P_4(A)}{|a_{44}|}+|a_{35}|\frac{P_5(A)}{|a_{55}|})\\ & = \frac{10}{10-7.76}\times(\frac{13}{475}+\frac{20}{19}\times\frac{22-4}{22}+8\times\frac{1}{10}+0.92\times\frac{14/95}{23/9}) = 7.7753, \end{aligned} \end{equation*}
    |a_{33}| < \frac{R_3(A)}{R_3(A)-|a_{33}|}(|a_{31}|+|a_{32}|\frac{R_2(A)-|a_{22}|}{R_2(A)}+|a_{34}|\frac{P_4(A)}{|a_{44}|}+|a_{35}|\frac{P_5(A)}{|a_{55}|}).
    |a_{22}| = 4,
    \begin{equation*} \begin{aligned} &\; \; \; \; \; \; \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{P_4(A)}{|a_{44}|}+|a_{25}|\frac{P_5(A)}{|a_{55}|})\\ & = \frac{22}{22-4}\times(\frac{412}{475}+\frac{58}{19}\times\frac{10-7.76}{10}+1\times\frac{1}{10}+17.08\times\frac{14/95}{23/9}) = 3.0881, \end{aligned} \end{equation*}
    |a_{22}| > \frac{R_2(A)}{R_2(A)-|a_{22}|}(|a_{21}|+|a_{23}|\frac{R_3(A)-|a_{33}|}{R_3(A)}+|a_{24}|\frac{P_4(A)}{|a_{44}|}+|a_{25}|\frac{P_5(A)}{|a_{55}|}).

    The conditions of the decision theorem in [15] are not satisfied.

    Therefore, we know that the matrix A does not meet the criteria in [13,14,15], so it cannot be determined by these existing methods.

    Example 2. Let

    A = \left(\begin{matrix} 1 & 0.1 & -0.1 & -0.1 & 0.1 & 0\\ 0.1 & 0.6 & 0 & 0 & -0.2 & 0.3\\ 0.1 & 0 & 0.4 & -0.1 & 0 & -0.3\\ -0.1 & 0 & -0.1 & 0.3 & 0 & 0.2\\ 0.1 & 0.1 & -0.1 & -0.1 & 0.5 &0.1\\ 0 & -0.4 & 0.1 & 0 & -0.2 & 2\\ \end{matrix}\right).

    Taking \alpha = {\frac{1}{4}} , we will show that

    (1) The matrix A satisfies the conditions of Theorem 3 in this paper, so A is a nonsingular H -matrix.

    (2) A does not meet the criteria in [10,16], so it cannot be determined by applying the methods in [10,16].

    In fact, for (1), it is calculated that

    R_1(A) = 0.4,C_1(A) = 0.4,|a_{11}| = 1 > Q_1(A) = 0.4^{{\frac{1}{4}}}\times0.4^{{\frac{3}{4}}} = 0.4,
    R_2(A) = 0.6,C_2(A) = 0.6,|a_{22}| = 0.6 = Q_2(A) = 0.6^{{\frac{1}{4}}}\times0.6^{{\frac{3}{4}}} = 0.6,
    R_3(A) = 0.5,C_3(A) = 0.4,|a_{33}| = 0.4 < Q_3(A) = 0.5^{{\frac{1}{4}}}\times0.4^{{\frac{3}{4}}} = 0.4229,
    R_4(A) = 0.4,C_4(A) = 0.3,|a_{44}| = 0.3 < Q_4(A) = 0.4^{{\frac{1}{4}}}\times0.3^{{\frac{3}{4}}} = 0.3224,
    R_5(A) = 0.5,C_5(A) = 0.5,|a_{55}| = 0.5 = Q_5(A) = 0.5^{{\frac{1}{4}}}\times0.5^{{\frac{3}{4}}} = 0.5,
    R_6(A) = 0.7,C_6(A) = 0.9,|a_{66}| = 2 > Q_6(A) = 0.7^{{\frac{1}{4}}}\times0.9^{{\frac{3}{4}}} = 0.8452.

    So N_1{(\alpha)} = \{3, 4\}, \; N_2{(\alpha)} = \{2, 5\}, \; N_3{(\alpha)} = \{1, 6\} , and then calculate
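These quantities are easy to reproduce programmatically. The following sketch (our own helper code, not from the paper) computes R_i(A) , C_i(A) , Q_i(A) = R_i(A)^{\frac{1}{4}}C_i(A)^{\frac{3}{4}} and the resulting partition of the row index set:

```python
# Matrix A of Example 2.
A = [
    [ 1.0,  0.1, -0.1, -0.1,  0.1,  0.0],
    [ 0.1,  0.6,  0.0,  0.0, -0.2,  0.3],
    [ 0.1,  0.0,  0.4, -0.1,  0.0, -0.3],
    [-0.1,  0.0, -0.1,  0.3,  0.0,  0.2],
    [ 0.1,  0.1, -0.1, -0.1,  0.5,  0.1],
    [ 0.0, -0.4,  0.1,  0.0, -0.2,  2.0],
]
n, alpha = len(A), 0.25
d = [abs(A[i][i]) for i in range(n)]
R = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]  # deleted row sums
C = [sum(abs(A[i][j]) for i in range(n) if i != j) for j in range(n)]  # deleted column sums
Q = [R[i] ** alpha * C[i] ** (1 - alpha) for i in range(n)]

tol = 1e-12  # tolerance for the equality cases |a_ii| = Q_i(A)
N1 = {i + 1 for i in range(n) if d[i] < Q[i] - tol}
N2 = {i + 1 for i in range(n) if abs(d[i] - Q[i]) <= tol}
N3 = {i + 1 for i in range(n) if d[i] > Q[i] + tol}
print(N1, N2, N3)  # {3, 4} {2, 5} {1, 6}
```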

    \begin{eqnarray} \begin{array}{lll} P_1(A)& = [|a_{13}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{14}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{12}|+|a_{15}|+|a_{16}|{{\frac{R_6(A)(C_6(A))^3}{|a_{66}|^4}}}](C_1(A))^3\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0.1\times{{\frac{0.0224}{0.3224}}}+0.1+0.1+0\times{{\frac{0.7\times(0.9)^3}{2^4}}}]\times0.4^3 = 0.0136, \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} P_6(A)& = [|a_{63}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{64}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{62}|+|a_{65}|+|a_{61}|{{\frac{R_1(A)(C_1(A))^3}{|a_{11}|^4}}}](C_6(A))^3\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0.1\times{{\frac{0.0224}{0.3224}}}+0.4+0.2+0\times{{\frac{0.4\times(0.4)^3}{1^4}}}]\times0.9^3 = 0.4414. \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} &\; \; \; |a_{33}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}} = 0.4\times{{\frac{0.0229}{0.4229}}} = 0.0217\\ & > [|a_{34}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}+|a_{32}|+|a_{35}|+|a_{31}|{{\frac{P_1(A)}{|a_{11}|^4}}}+|a_{36}|{{\frac{P_6(A)}{|a_{66}|^4}}}]^{{\frac{1}{4}}}[C_3(A){{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}]^{{\frac{3}{4}}}\\ & = [0.1\times{{\frac{0.0224}{0.3224}}}+0+0+0.1\times{{\frac{0.0136}{1^4}}}+0.3\times{{\frac{0.4414}{2^4}}}]^{{\frac{1}{4}}}\times[0.4\times{{\frac{0.0229}{0.4229}}}]^{{\frac{3}{4}}}\\ & = (0.0166)^{{\frac{1}{4}}}\times(0.0217)^{{\frac{3}{4}}} = 0.0203, \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} &\; \; \; |a_{44}|{{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}} = 0.3\times{{\frac{0.0224}{0.3224}}} = 0.0208\\ & > [|a_{43}|{{\frac{Q_3(A)-|a_{33}|}{Q_3(A)}}}+|a_{42}|+|a_{45}|+|a_{41}|{{\frac{P_1(A)}{|a_{11}|^4}}}+|a_{46}|{{\frac{P_6(A)}{|a_{66}|^4}}}]^{{\frac{1}{4}}}[C_4(A){{\frac{Q_4(A)-|a_{44}|}{Q_4(A)}}}]^{{\frac{3}{4}}}\\ & = [0.1\times{{\frac{0.0229}{0.4229}}}+0+0+0.1\times{{\frac{0.0136}{1^4}}}+0.2\times{{\frac{0.4414}{2^4}}}]^{{\frac{1}{4}}}\times[0.3\times{{\frac{0.0224}{0.3224}}}]^{{\frac{3}{4}}}\\ & = (0.0123)^{{\frac{1}{4}}}\times(0.0208)^{{\frac{3}{4}}} = 0.0183. \end{array} \end{eqnarray}
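The two conditions can also be verified end to end. The sketch below instantiates P_1(A) , P_6(A) and both inequalities exactly as displayed above (variable names are ours; small discrepancies against the rounded intermediate decimals in the text are expected):

```python
# Matrix A of Example 2, alpha = 1/4.
A = [
    [ 1.0,  0.1, -0.1, -0.1,  0.1,  0.0],
    [ 0.1,  0.6,  0.0,  0.0, -0.2,  0.3],
    [ 0.1,  0.0,  0.4, -0.1,  0.0, -0.3],
    [-0.1,  0.0, -0.1,  0.3,  0.0,  0.2],
    [ 0.1,  0.1, -0.1, -0.1,  0.5,  0.1],
    [ 0.0, -0.4,  0.1,  0.0, -0.2,  2.0],
]
a = lambda i, j: abs(A[i - 1][j - 1])   # 1-based |a_ij|
n, al = len(A), 0.25
R = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
C = [sum(abs(A[i][j]) for i in range(n) if i != j) for j in range(n)]
Q = [R[i] ** al * C[i] ** (1 - al) for i in range(n)]
t3 = (Q[2] - a(3, 3)) / Q[2]            # (Q_3(A) - |a_33|)/Q_3(A)
t4 = (Q[3] - a(4, 4)) / Q[3]            # (Q_4(A) - |a_44|)/Q_4(A)

# P_1(A) and P_6(A) as instantiated in the text.
P1 = (a(1, 3) * t3 + a(1, 4) * t4 + a(1, 2) + a(1, 5)
      + a(1, 6) * R[5] * C[5] ** 3 / a(6, 6) ** 4) * C[0] ** 3
P6 = (a(6, 3) * t3 + a(6, 4) * t4 + a(6, 2) + a(6, 5)
      + a(6, 1) * R[0] * C[0] ** 3 / a(1, 1) ** 4) * C[5] ** 3

# The inequalities checked for i = 3 and i = 4.
lhs3 = a(3, 3) * t3
rhs3 = (a(3, 4) * t4 + a(3, 2) + a(3, 5) + a(3, 1) * P1 / a(1, 1) ** 4
        + a(3, 6) * P6 / a(6, 6) ** 4) ** al * (C[2] * t3) ** (1 - al)
lhs4 = a(4, 4) * t4
rhs4 = (a(4, 3) * t3 + a(4, 2) + a(4, 5) + a(4, 1) * P1 / a(1, 1) ** 4
        + a(4, 6) * P6 / a(6, 6) ** 4) ** al * (C[3] * t4) ** (1 - al)
assert lhs3 > rhs3 and lhs4 > rhs4      # both conditions of Theorem 3 hold
```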

    So the conditions of Theorem 3 in this paper are satisfied, thus we can determine that A is a nonsingular H -matrix.

    For (2), using Theorem 3 in [16], we can get

    E_1(A) = {\frac{1}{4}}R_1(A)+{\frac{3}{4}}C_1(A) = {\frac{1}{4}}\times0.4+{\frac{3}{4}}\times0.4 = 0.4 < |a_{11}| = 1,
    E_2(A) = {\frac{1}{4}}R_2(A)+{\frac{3}{4}}C_2(A) = {\frac{1}{4}}\times0.6+{\frac{3}{4}}\times0.6 = 0.6 = |a_{22}|,
    E_3(A) = {\frac{1}{4}}R_3(A)+{\frac{3}{4}}C_3(A) = {\frac{1}{4}}\times0.5+{\frac{3}{4}}\times0.4 = 0.425 > |a_{33}| = 0.4,
    E_4(A) = {\frac{1}{4}}R_4(A)+{\frac{3}{4}}C_4(A) = {\frac{1}{4}}\times0.4+{\frac{3}{4}}\times0.3 = 0.325 > |a_{44}| = 0.3,
    E_5(A) = {\frac{1}{4}}R_5(A)+{\frac{3}{4}}C_5(A) = {\frac{1}{4}}\times0.5+{\frac{3}{4}}\times0.5 = 0.5 = |a_{55}|,
    E_6(A) = {\frac{1}{4}}R_6(A)+{\frac{3}{4}}C_6(A) = {\frac{1}{4}}\times0.7+{\frac{3}{4}}\times0.9 = 0.85 < |a_{66}| = 2.

    By calculation, we obtain

    \begin{eqnarray} \begin{array}{lll} P_1(A)& = {\frac{1}{4}}(|a_{13}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{14}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{12}|+|a_{15}|+|a_{16}|{{\frac{E_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_1(A){{\frac{E_1(A)}{|a_{11}|}}}\\ & = {\frac{1}{4}}\times(0.1\times{{\frac{0.425-0.4}{0.425}}}+0.1\times{{\frac{0.325-0.3}{0.325}}}+0.1+0.1+0\times{{\frac{0.85}{2}}})+{\frac{3}{4}}\times0.4\times{{\frac{0.4}{1}}}\\ & = 0.0534+0.12 = 0.1734, \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} P_6(A)& = {\frac{1}{4}}(|a_{63}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{64}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{62}|+|a_{65}|+|a_{61}|{{\frac{E_1(A)}{|a_{11}|}}})+{\frac{3}{4}}C_6(A){{\frac{E_6(A)}{|a_{66}|}}}\\ & = {\frac{1}{4}}\times(0.1\times{{\frac{0.425-0.4}{0.425}}}+0.1\times{{\frac{0.325-0.3}{0.325}}}+0.4+0.2+0\times{{\frac{0.4}{1}}})+{\frac{3}{4}}\times0.9\times{{\frac{0.85}{2}}}\\ & = 0.1515+0.2869 = 0.4383. \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} &\; \; \; \; |a_{33}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}} = 0.4\times{{\frac{0.425-0.4}{0.425}}} = 0.0235\\ & < {\frac{1}{4}}(|a_{34}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}+|a_{32}|+|a_{35}|+|a_{31}|{{\frac{P_1(A)}{|a_{11}|}}}+|a_{36}|{{\frac{P_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_3(A){{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}\\ & = {\frac{1}{4}}(0.1\times{{\frac{0.325-0.3}{0.325}}}+0+0+0.1\times{{\frac{0.1734}{1}}}+0.3\times{{\frac{0.4383}{2}}})+{\frac{3}{4}}\times0.4\times{{\frac{0.425-0.4}{0.425}}}\\ & = 0.0227+0.0176 = 0.0403, \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} &\; \; \; \; |a_{44}|{{\frac{E_4(A)-|a_{44}|}{E_4(A)}}} = 0.3\times{{\frac{0.325-0.3}{0.325}}} = 0.0231\\ & < {\frac{1}{4}}(|a_{43}|{{\frac{E_3(A)-|a_{33}|}{E_3(A)}}}+|a_{42}|+|a_{45}|+|a_{41}|{{\frac{P_1(A)}{|a_{11}|}}}+|a_{46}|{{\frac{P_6(A)}{|a_{66}|}}})+{\frac{3}{4}}C_4(A){{\frac{E_4(A)-|a_{44}|}{E_4(A)}}}\\ & = {\frac{1}{4}}(0.1\times{{\frac{0.425-0.4}{0.425}}}+0+0+0.1\times{{\frac{0.1734}{1}}}+0.2\times{{\frac{0.4383}{2}}})+{\frac{3}{4}}\times0.3\times{{\frac{0.325-0.3}{0.325}}}\\ & = 0.0168+0.0173 = 0.0341. \end{array} \end{eqnarray}

    So the matrix A does not satisfy the conditions of the theorem in [16], and thus it cannot be determined by the method in [16].

    Using Theorem 3 in [10], we can obtain

    x_1 = {\frac{{\frac{1}{4}}R_1(A)+{\frac{3}{4}}C_1(A)}{|a_{11}|}} = {\frac{{\frac{1}{4}\times0.4}+{\frac{3}{4}}\times0.4}{1}} = 0.4,
    x_2 = {\frac{{\frac{1}{4}}R_2(A)+{\frac{3}{4}}C_2(A)}{|a_{22}|}} = {\frac{{\frac{1}{4}\times0.6}+{\frac{3}{4}}\times0.6}{0.6}} = 1,
    x_3 = {\frac{{\frac{1}{4}}R_3(A)+{\frac{3}{4}}C_3(A)}{|a_{33}|}} = {\frac{{\frac{1}{4}\times0.5}+{\frac{3}{4}}\times0.4}{0.4}} = {\frac{0.425}{0.4}} = 1.0625,
    x_4 = {\frac{{\frac{1}{4}}R_4(A)+{\frac{3}{4}}C_4(A)}{|a_{44}|}} = {\frac{{\frac{1}{4}\times0.4}+{\frac{3}{4}}\times0.3}{0.3}} = 1.0833,
    x_5 = {\frac{{\frac{1}{4}}R_5(A)+{\frac{3}{4}}C_5(A)}{|a_{55}|}} = {\frac{{\frac{1}{4}\times0.5}+{\frac{3}{4}}\times0.5}{0.5}} = 1,
    x_6 = {\frac{{\frac{1}{4}}R_6(A)+{\frac{3}{4}}C_6(A)}{|a_{66}|}} = {\frac{{\frac{1}{4}\times0.7}+{\frac{3}{4}}\times0.9}{2}} = {\frac{0.85}{2}} = 0.425.
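The quantities E_i(A) = {\frac{1}{4}}R_i(A)+{\frac{3}{4}}C_i(A) and the ratios x_i = E_i(A)/|a_{ii}| above can be tabulated in a few lines (again a sketch with our own variable names):

```python
# Matrix A of Example 2.
A = [
    [ 1.0,  0.1, -0.1, -0.1,  0.1,  0.0],
    [ 0.1,  0.6,  0.0,  0.0, -0.2,  0.3],
    [ 0.1,  0.0,  0.4, -0.1,  0.0, -0.3],
    [-0.1,  0.0, -0.1,  0.3,  0.0,  0.2],
    [ 0.1,  0.1, -0.1, -0.1,  0.5,  0.1],
    [ 0.0, -0.4,  0.1,  0.0, -0.2,  2.0],
]
n, alpha = len(A), 0.25
d = [abs(A[i][i]) for i in range(n)]
R = [sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n)]
C = [sum(abs(A[i][j]) for i in range(n) if i != j) for j in range(n)]
E = [alpha * R[i] + (1 - alpha) * C[i] for i in range(n)]  # E_i(A)
x = [E[i] / d[i] for i in range(n)]                        # x_i = E_i(A)/|a_ii|
print([round(v, 4) for v in x])  # [0.4, 1.0, 1.0625, 1.0833, 1.0, 0.425]
```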

    By calculation, we have

    \begin{eqnarray} \begin{array}{lll} |a_{33}|& = 0.4\\ & < {\frac{x_3}{x_3-1}}{\frac{1}{4}}[|a_{32}|+|a_{35}|+(1-{\frac{1}{x_4}})|a_{34}|+x_1|a_{31}|+x_6|a_{36}|]+{\frac{3}{4}}C_3(A)\\ & = {\frac{1.0625}{1.0625-1}}\times{\frac{1}{4}}\times[0+0+(1-{\frac{0.3}{0.325}})\times0.1+0.4\times0.1+0.425\times0.3]+{\frac{3}{4}}\times0.4\\ & = 0.7446+0.3 = 1.0446. \end{array} \end{eqnarray}
    \begin{eqnarray} \begin{array}{lll} |a_{44}|& = 0.3\\ & < {\frac{x_4}{x_4-1}}{\frac{1}{4}}[|a_{42}|+|a_{45}|+(1-{\frac{1}{x_3}})|a_{43}|+x_1|a_{41}|+x_6|a_{46}|]+{\frac{3}{4}}C_4(A)\\ & = {\frac{{\frac{0.325}{0.3}}}{{\frac{0.325}{0.3}}-1}}\times{\frac{1}{4}}\times[0+0+(1-{\frac{0.4}{0.425}})\times0.1+0.4\times0.1+0.425\times0.2]+{\frac{3}{4}}\times0.3\\ & = 0.4254+0.225 = 0.6504. \end{array} \end{eqnarray}

    Thus the matrix A does not meet the criteria in [10] either, so it cannot be determined by applying the method in [10].

    In this paper, based on the relevant properties of two classes of \alpha -diagonally dominant matrices, we obtain several sufficient conditions for determining nonsingular H -matrices, which improve the existing results and extend the determination theory of nonsingular H -matrices.

    This work was supported by the Science and Technology Research Project of the Education Department of Jilin Province of China (JJKH20220041KJ), the Natural Sciences Program of Science and Technology of Jilin Province of China (20190201139JC) and the Graduate Innovation Project of Beihua University (2022003, 2021004).

    The authors declare that there are no conflicts of interest.



    [1] R. Bru, C. Corral, I. Giménez, J. Mas, Classes of general H-matrices, Linear Algebra Appl., 429 (2008), 2358–2366. https://doi.org/10.1016/j.laa.2007.10.030
    [2] M. Alanelli, A. Hadjidimos, On iterative criteria for H- and non-H-matrices, Appl. Math. Comput., 188 (2007), 19–30. https://doi.org/10.1016/j.amc.2006.09.089
    [3] A. Berman, R. Plemmons, Nonnegative matrices in the mathematical sciences, Philadelphia: SIAM Press, 1994. https://doi.org/10.1137/1.9781611971262
    [4] J. Zhao, Q. Liu, C. Li, Y. Li, Dashnic-Zusmanovich type matrices: a new subclass of nonsingular H-matrices, Linear Algebra Appl., 552 (2018), 277–287. https://doi.org/10.1016/j.laa.2018.04.028
    [5] M. Li, Y. Sun, Practical criteria for H-matrices, Appl. Math. Comput., 211 (2009), 427–433. https://doi.org/10.1016/j.amc.2009.01.083
    [6] Y. Sun, Sufficient conditions for generalized diagonally dominant matrices (Chinese), Numerical Mathematics A Journal of Chinese Universities, 19 (1997), 216–223.
    [7] Y. Sun, An improvement on a theorem by Ostrowski and its applications (Chinese), Northeastern Math. J., 7 (1991), 497–502.
    [8] L. Wang, B. Xi, F. Qi, Necessary and sufficient conditions for identifying strictly geometrically \alpha-bidiagonally dominant matrices, U.P.B. Sci. Bull. Series A, 76 (2014), 57–66.
    [9] J. Li, W. Zhang, Criteria for H-matrices (Chinese), Numerical Mathematics A Journal of Chinese Universities, 21 (1999), 264–268.
    [10] R. Jiang, New criteria for nonsingular H-matrices (Chinese), Chinese Journal of Engineering Mathematics, 28 (2011), 393–400.
    [11] G. Han, C. Zhang, H. Gao, Discussion for identifying H-matrices, J. Phys.: Conf. Ser., 1288 (2019), 012031. https://doi.org/10.1088/1742-6596/1288/1/012031
    [12] X. Chen, Q. Tuo, A set of new criteria for nonsingular H-matrices (Chinese), Chinese Journal of Engineering Mathematics, 37 (2020), 325–334.
    [13] T. Gan, T. Huang, Simple criteria for nonsingular H-matrices, Linear Algebra Appl., 374 (2003), 317–326. https://doi.org/10.1016/S0024-3795(03)00646-3
    [14] T. Gan, T. Huang, Practical sufficient conditions for nonsingular H-matrices (Chinese), Mathematica Numerica Sinica, 26 (2004), 109–116.
    [15] Q. Tuo, L. Zhu, J. Liu, One type of new criteria conditions for nonsingular H-matrices (Chinese), Mathematica Numerica Sinica, 30 (2008), 177–182.
    [16] Y. Yang, M. Liang, A new type of determinations for nonsingular H-matrices (Chinese), Journal of Hexi University, 37 (2021), 20–25. https://doi.org/10.13874/j.cnki.62-1171/g4.2021.02.004
    [17] J. Liu, J. Li, Z. Huang, X. Kong, Some properties of Schur complements and diagonal-Schur complements of diagonally dominant matrices, Linear Algebra Appl., 428 (2008), 1009–1030. https://doi.org/10.1016/j.laa.2007.09.008
    [18] L. Cvetković, H-matrix theory vs. eigenvalue localization, Numer. Algor., 42 (2006), 229–245. https://doi.org/10.1007/s11075-006-9029-3
  • This article has been cited by:

    1. Yan Li, Xiaoyong Chen, Yaqiang Wang, Some new criteria for identifying H-matrices, 2024, 38, 0354-5180, 1375, 10.2298/FIL2404375L
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
