Research article

Infinity norm upper bounds for the inverse of SDD1 matrices

  • In this paper, a new proof that SDD1 matrices are a subclass of H-matrices is presented, and some properties of SDD1 matrices are obtained. Based on the new proof, some upper bounds for the infinity norm of the inverse of SDD1 matrices and SDD matrices are given. Moreover, we show that for SDD matrices these new bounds are better than the well-known Varah bound in some cases. In addition, some numerical examples are given to illustrate the corresponding results.

    Citation: Xiaoyong Chen, Yating Li, Liang Liu, Yaqiang Wang. Infinity norm upper bounds for the inverse of SDD1 matrices[J]. AIMS Mathematics, 2022, 7(5): 8847-8860. doi: 10.3934/math.2022493




    Let $n$ be a positive integer, $N=\{1,2,\dots,n\}$, and let $\mathbb{C}^{n\times n}$ be the set of all complex matrices of order $n$. A matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) is called a strictly diagonally dominant (SDD) matrix if

    $$|a_{ii}|>r_i(A),\quad \forall i\in N,$$

    where

    $$r_i(A)=\sum_{j=1,\,j\ne i}^{n}|a_{ij}|,\quad i\in N.$$

    It was shown that SDD matrices are a subclass of H-matrices [1], where a matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ is an H-matrix if and only if there exists a positive diagonal matrix $X$ such that $AX$ is an SDD matrix [1].

    In 2011, a new subclass of H-matrices, called SDD1 matrices, was proposed by J. M. Peña [2]; the definition of an SDD1 matrix is given as follows.

    Definition 1. [2] A matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) is called an SDD1 matrix if

    $$|a_{ii}|>p_i(A),\quad \forall i\in N_1(A),$$

    where

    $$p_i(A)=\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|,$$

    $$N_1(A)=\{i\mid |a_{ii}|\le r_i(A)\}\quad\text{and}\quad N_2(A)=\{i\mid |a_{ii}|>r_i(A)\}.$$
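    To make the definition concrete, the quantities above can be computed directly; the following is a minimal Python sketch (the helper names `row_sum`, `partition`, `p_value` and `is_sdd1` are ours, not from the paper), with a matrix stored as a list of rows.

```python
# Sketch of Definition 1; helper names are ours, not from the paper.

def row_sum(A, i):
    """r_i(A): sum of the off-diagonal absolute values in row i."""
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def partition(A):
    """N_1(A) = {i : |a_ii| <= r_i(A)},  N_2(A) = {i : |a_ii| > r_i(A)}."""
    n = len(A)
    N1 = [i for i in range(n) if abs(A[i][i]) <= row_sum(A, i)]
    N2 = [i for i in range(n) if abs(A[i][i]) > row_sum(A, i)]
    return N1, N2

def p_value(A, i):
    """p_i(A): N_1-neighbours count fully, N_2-neighbours are damped by r_j/|a_jj|."""
    N1, N2 = partition(A)
    return (sum(abs(A[i][j]) for j in N1 if j != i)
            + sum(row_sum(A, j) / abs(A[j][j]) * abs(A[i][j]) for j in N2 if j != i))

def is_sdd1(A):
    """Definition 1: |a_ii| > p_i(A) is only required on the rows of N_1(A)."""
    N1, _ = partition(A)
    return all(abs(A[i][i]) > p_value(A, i) for i in N1)
```

    Note that for an SDD matrix $N_1(A)$ is empty, so every SDD matrix is trivially SDD1.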

    In [2], J. M. Peña "proved" the following result:

    Theorem 1. ([2, Theorem 2.3]) If a matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ is an SDD1 matrix by rows, then it is an H-matrix.

    From the definition of H-matrices and Theorem 1, for a given SDD1 matrix $A$ there exists a corresponding positive diagonal matrix $D$ such that $AD$ is an SDD matrix. The construction of the positive diagonal matrix $D$, which is of great interest, was discussed in the introduction of [2] and split into two cases: the given SDD1 matrix has exactly one strictly diagonally dominant row $i$, or it has at least two strictly diagonally dominant rows $i$ and $j$. However, Dai [3] found that the proof of Theorem 1 is incorrect and gave the following counterexample.

    Example 1. [3] Consider the SDD1 matrix

    $$A=\begin{bmatrix} 4 & 1 & 1 & 1\\ 0 & 3 & 0 & 1\\ 0 & 0 & 4 & 1\\ 1 & 1 & 1 & 2 \end{bmatrix}.$$

    From the proof of Theorem 1 in [2], it is easy to obtain that $D=\mathrm{diag}\{\frac{3}{4},\frac{1}{3},\frac{1}{4},1\}$; however, $AD$ is not an SDD matrix by rows.
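    This failure can be checked directly; below is a small Python sketch (ours, not from [3]) that forms $AD$ and tests each row for strict dominance.

```python
# Verifying Example 1: scale the columns of A by D = diag{3/4, 1/3, 1/4, 1}
# (the scaling produced by the original proof in [2]) and test every row.
A = [[4, 1, 1, 1],
     [0, 3, 0, 1],
     [0, 0, 4, 1],
     [1, 1, 1, 2]]
d = [3/4, 1/3, 1/4, 1]
AD = [[A[i][j] * d[j] for j in range(4)] for i in range(4)]
strict = [abs(AD[i][i]) > sum(abs(AD[i][j]) for j in range(4) if j != i)
          for i in range(4)]
print(strict)  # [True, False, False, True]: rows 2 and 3 of AD are only weakly
               # dominant (diagonal equals off-diagonal sum), so AD is not SDD
```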

    Dai found that the proof is incorrect in the case where the given SDD1 matrix has at least two strictly diagonally dominant rows $i$ and $j$, and a correct proof of Theorem 1 was presented in [3]. The correct proof splits this case into the subcases $S=\emptyset$ and $S\ne\emptyset$, where $S$ is given as follows:

    $$S=\{i\in N_2(A)\mid a_{ij}=0\ \text{for all}\ j\in N_2(A)\setminus\{i\}\}.$$

    However, when the correct proof is used to derive an upper bound for the infinity norm of the inverse of SDD1 matrices, the bound has to be treated in separate cases. Therefore, in order to avoid this difficulty, we need to improve the proof of Theorem 1.

    In addition, it is well known that an upper bound for the infinity norm of the inverse of a given nonsingular matrix has many potential applications in computational mathematics, such as bounding the condition number and proving the convergence of iterative methods. Moreover, upper bounds of the infinity norm of the inverse for different classes of matrices have been widely studied, such as Nekrasov matrices [4,5,6], S-Nekrasov matrices [7,8], QN-Nekrasov matrices [8], $\{p_1,p_2\}$-Nekrasov matrices [9,10], DZT matrices [11,12], S-SDD matrices [13], S-SDDS matrices [14] and so on. However, upper bounds of the infinity norm of the inverse of SDD1 matrices have not yet been reported.

    In this paper, a new proof of Theorem 1 is given first. Second, some properties of SDD1 matrices are presented. Finally, based on the new proof, some upper bounds for the infinity norm of the inverse of SDD1 matrices and SDD matrices are obtained. Moreover, it is shown that these new bounds for SDD matrices work better than the well-known Varah bound in some cases, and numerical examples are given to illustrate the corresponding results.

    Firstly, some notations and a lemma are listed.

    $D=\mathrm{diag}\{d_1,d_2,\dots,d_n\}$ denotes a diagonal matrix.

    $(AD)_{ij}$ denotes the $(i,j)$ entry of the matrix $AD$, and $(AD)_{ii}$ denotes the diagonal element of the $i$th row of $AD$.

    Lemma 1. A matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) is an SDD1 matrix if and only if $|a_{ii}|>p_i(A)$ for all $i\in N$.

    Proof. From Definition 1, we get that $|a_{ii}|>p_i(A)$ for any $i\in N_1(A)$. For any $i\in N_2(A)$, since $r_j(A)<|a_{jj}|$ for every $j\in N_2(A)$,

    $$|a_{ii}|>r_i(A)\ge\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|=p_i(A).\tag{2.1}$$

    Thus, we obtain that a matrix $A$ is an SDD1 matrix if and only if $|a_{ii}|>p_i(A)$ for all $i\in N$.

    Next, a new proof of Theorem 1 is given as follows.

    Proof. It is sufficient to prove that each SDD1 matrix $A$ is an H-matrix. To do so, define the diagonal matrix $D=\mathrm{diag}\{d_1,d_2,\dots,d_n\}$, where

    $$d_j=\begin{cases}1, & j\in N_1(A),\\[1mm] \dfrac{p_j(A)}{|a_{jj}|}+\varepsilon, & j\in N_2(A),\end{cases}\tag{2.2}$$

    and

    $$0<\varepsilon<\min_{i\in N}\frac{|a_{ii}|-p_i(A)}{\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}|};\tag{2.3}$$

    if $\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}|=0$, the corresponding fraction is defined to be $+\infty$.

    Since the matrix $A$ is an SDD1 matrix, $D$ is a positive diagonal matrix.

    In the following, we prove that $AD$ is an SDD matrix, dividing the argument into two cases.

    Case 1: for any $i\in N_1(A)$, it is easy to obtain that $|(AD)_{ii}|=|a_{ii}|$, and

    $$\begin{aligned}
    r_i(AD)&=\sum_{j=1,\,j\ne i}^{n}|a_{ij}|d_j=\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\left(\frac{p_j(A)}{|a_{jj}|}+\varepsilon\right)|a_{ij}|\\
    &\le\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|+\varepsilon\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}| &&\text{(by inequality (2.1))}\\
    &=p_i(A)+\varepsilon\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}| &&\text{(by the expression of $p_i(A)$)}\\
    &<p_i(A)+|a_{ii}|-p_i(A) &&\text{(by inequality (2.3))}\\
    &=|a_{ii}|=|(AD)_{ii}|.
    \end{aligned}$$

    Case 2: for any $i\in N_2(A)$, we get that $|(AD)_{ii}|=|a_{ii}|\left(\dfrac{p_i(A)}{|a_{ii}|}+\varepsilon\right)=p_i(A)+\varepsilon|a_{ii}|$, and

    $$\begin{aligned}
    r_i(AD)&=\sum_{j=1,\,j\ne i}^{n}|a_{ij}|d_j=\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\left(\frac{p_j(A)}{|a_{jj}|}+\varepsilon\right)|a_{ij}|\\
    &\le\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|+\varepsilon\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}| &&\text{(by inequality (2.1))}\\
    &=p_i(A)+\varepsilon\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}| &&\text{(by the expression of $p_i(A)$)}\\
    &<p_i(A)+\varepsilon|a_{ii}|=|(AD)_{ii}| &&\text{(by $|a_{ii}|>r_i(A)$ for $i\in N_2(A)$)}.
    \end{aligned}$$

    From Cases 1 and 2, we obtain that $|(AD)_{ii}|>\sum_{j=1,\,j\ne i}^{n}|a_{ij}|d_j=r_i(AD)$ for any $i\in N$, that is, $AD$ is an SDD matrix; then, according to the definition of H-matrices, $A$ is an H-matrix.
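    Since the new proof is constructive, it can be verified numerically. The sketch below (helper names ours; $\varepsilon$ is taken as half of the bound (2.3)) builds $D$ from Eq (2.2) for the matrix of Example 1 and checks that $AD$ is now strictly diagonally dominant.

```python
# Sketch of the scaling matrix D from Eqs (2.2)-(2.3); helper names are ours.

def row_sum(A, i):
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def p_value(A, i):
    n = len(A)
    N2 = {j for j in range(n) if abs(A[j][j]) > row_sum(A, j)}
    return sum((row_sum(A, j) / abs(A[j][j]) if j in N2 else 1.0) * abs(A[i][j])
               for j in range(n) if j != i)

def scaling(A):
    """d_j = 1 on N_1(A) and p_j/|a_jj| + eps on N_2(A), eps as in (2.3)."""
    n = len(A)
    N2 = {j for j in range(n) if abs(A[j][j]) > row_sum(A, j)}
    caps = [(abs(A[i][i]) - p_value(A, i)) / s
            for i in range(n)
            if (s := sum(abs(A[i][j]) for j in N2 if j != i)) > 0]
    eps = 0.5 * min(caps) if caps else 1.0   # rows with empty sums impose no cap
    return [p_value(A, j) / abs(A[j][j]) + eps if j in N2 else 1.0
            for j in range(n)]

A = [[4, 1, 1, 1], [0, 3, 0, 1], [0, 0, 4, 1], [1, 1, 1, 2]]  # Example 1
d = scaling(A)
assert all(abs(A[i][i]) * d[i] > sum(abs(A[i][j]) * d[j] for j in range(4) if j != i)
           for i in range(4))  # AD is now strictly diagonally dominant
```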

    Since the definition of SDD1 matrices was proposed, some properties of SDD1 matrices have been obtained, such as Schur complements of SDD1 matrices [2] and subdirect sums of SDD1 matrices [15]. Next, some new properties of SDD1 matrices are listed as follows.

    Theorem 2. If a matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) is an SDD1 matrix by rows and $N_1(A)\ne\emptyset$, then for each $i\in N_1(A)$ there is at least one $a_{ij}\ne 0$, where $j\in N_2(A)$ and $j\ne i$.

    Proof. Suppose on the contrary that there exists $i\in N_1(A)$ such that $a_{ij}=0$ for all $j\in N_2(A)$ with $j\ne i$. Then it is easy to obtain from Definition 1 that $p_i(A)=r_i(A)$; thus $|a_{ii}|>p_i(A)=r_i(A)\ge|a_{ii}|$, which does not hold. Hence for each $i\in N_1(A)$ there is at least one $a_{ij}\ne 0$, where $j\in N_2(A)$ and $j\ne i$.

    Theorem 3. If a matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) is an SDD1 matrix by rows, and for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$, then $|a_{ii}|>p_i(A)>0$ for any $i\in N$ and $|a_{ii}|>r_i(A)>p_i(A)>0$ for any $i\in N_2(A)$.

    Proof. From Lemma 1, we get that $|a_{ii}|>p_i(A)$ for any $i\in N$ and $|a_{ii}|>r_i(A)\ge p_i(A)$ for all $i\in N_2(A)$.

    Since $A$ is an SDD1 matrix, and by the condition that for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$, it is easy to obtain that $|a_{ii}|>r_i(A)>p_i(A)>0$ for any $i\in N_2(A)$.

    We next prove that $|a_{ii}|>p_i(A)>0$ for any $i\in N$, and consider the following two cases separately.

    Case 1: if $N_1(A)=\emptyset$, then $A$ is an SDD matrix, and from the condition that for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$, it is easy to get $|a_{ii}|>p_i(A)>0$ for any $i\in N=N_2(A)$.

    Case 2: if $N_1(A)\ne\emptyset$, then from Theorem 2 and the same condition, we obtain that $|a_{ii}|>p_i(A)>0$ for all $i\in N$.

    From Cases 1 and 2, we obtain that $|a_{ii}|>p_i(A)>0$ for any $i\in N$.

    Theorem 4. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD1 matrix by rows such that for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$. Then there exists a diagonal matrix $D=\mathrm{diag}\{d_1,d_2,\dots,d_n\}$, where $d_i=\dfrac{p_i(A)}{|a_{ii}|}$, $i=1,2,\dots,n$, such that $AD$ is an SDD matrix.

    Proof. In order to prove that the matrix $AD$ is an SDD matrix, we need to show that

    $$|(AD)_{ii}|>r_i(AD)\quad\text{for any } i\in N.$$

    Since for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$, from Theorems 2 and 3 we obtain that $|a_{ii}|>p_i(A)>0$ for any $i\in N$ and $|a_{ii}|>r_i(A)>p_i(A)>0$ for all $i\in N_2(A)$.

    Therefore, for any $i\in N$, it is easy to get $|(AD)_{ii}|=p_i(A)$, and from $0<\dfrac{p_j(A)}{|a_{jj}|}<\dfrac{r_j(A)}{|a_{jj}|}<1$ for any $j\in N_2(A)$ together with Theorems 2 and 3, we get that

    $$r_i(AD)=\sum_{j=1,\,j\ne i}^{n}|a_{ij}|d_j=\sum_{j\in N_1(A)\setminus\{i\}}\frac{p_j(A)}{|a_{jj}|}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{p_j(A)}{|a_{jj}|}|a_{ij}|<\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|=p_i(A)=|(AD)_{ii}|.$$

    Obviously, for any $i\in N$, we get $|(AD)_{ii}|>r_i(AD)$, that is, $AD$ is an SDD matrix.
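    As a quick check of Theorem 4, the parameter-free scaling $d_i=p_i(A)/|a_{ii}|$ can be tested on a sample SDD matrix (the matrix that reappears as $A_1$ in Example 2 below); a Python sketch with our helper names:

```python
# Theorem 4 scaling: d_i = p_i(A)/|a_ii| (no epsilon). Helper names are ours.

def row_sum(A, i):
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def p_value(A, i):
    n = len(A)
    N2 = {j for j in range(n) if abs(A[j][j]) > row_sum(A, j)}
    return sum((row_sum(A, j) / abs(A[j][j]) if j in N2 else 1.0) * abs(A[i][j])
               for j in range(n) if j != i)

A = [[4, 1, 2, 0], [1, 4, 1, 0], [2, 0, 8, 0], [2, 0, 0, 8]]
d = [p_value(A, i) / abs(A[i][i]) for i in range(4)]
print(d)  # [0.25, 0.25, 0.1875, 0.1875]
assert all(abs(A[i][i]) * d[i] > sum(abs(A[i][j]) * d[j] for j in range(4) if j != i)
           for i in range(4))  # AD is SDD
```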

    Finally, some upper bounds for the infinity norm of the inverse of SDD1 matrices and SDD matrices are established. Before that, a theorem which will be used later is listed.

    Theorem 5. (Varah bound) [4] Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. Then

    $$\|A^{-1}\|_{\infty}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.\tag{2.4}$$

    Theorem 6. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD1 matrix. Then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\Big\{1,\ \max\limits_{i\in N_2(A)}\big(\frac{p_i(A)}{|a_{ii}|}+\varepsilon\big)\Big\}}{\min\Big\{\min\limits_{i\in N_1(A)}H_i,\ \min\limits_{i\in N_2(A)}Q_i\Big\}},\tag{2.5}$$

    where

    $$H_i=|a_{ii}|-\sum_{j\in N_1(A)\setminus\{i\}}|a_{ij}|-\sum_{j\in N_2(A)\setminus\{i\}}\left(\frac{p_j(A)}{|a_{jj}|}+\varepsilon\right)|a_{ij}|,\quad i\in N_1(A),$$
    $$Q_i=\varepsilon\Big(|a_{ii}|-\sum_{j\in N_2(A)\setminus\{i\}}|a_{ij}|\Big)+\sum_{j\in N_2(A)\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|,\quad i\in N_2(A),$$

    and $\varepsilon$ satisfies inequality (2.3).

    Proof. From the new proof of Theorem 1, there exists a positive diagonal matrix $D$ such that $AD$ is an SDD matrix, where $D$ is defined by Eq (2.2). Therefore, we have the following inequality:

    $$\|A^{-1}\|_{\infty}=\|D(D^{-1}A^{-1})\|_{\infty}=\|D(AD)^{-1}\|_{\infty}\le\|D\|_{\infty}\|(AD)^{-1}\|_{\infty}.$$

    Since the matrix $D$ is positive diagonal, it is easy to obtain that

    $$\|D\|_{\infty}=\max_{1\le i\le n}d_i=\max\Big\{1,\ \max_{i\in N_2(A)}\Big(\frac{p_i(A)}{|a_{ii}|}+\varepsilon\Big)\Big\},$$

    where $\varepsilon$ satisfies inequality (2.3).

    Since $AD$ is an SDD matrix, by Theorem 5 we obtain

    $$\|(AD)^{-1}\|_{\infty}\le\frac{1}{\min\limits_{1\le i\le n}(|(AD)_{ii}|-r_i(AD))}=\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|d_i-r_i(AD))}=\frac{1}{\min\Big\{\min\limits_{i\in N_1(A)}H_i,\ \min\limits_{i\in N_2(A)}Q_i\Big\}}.$$

    Thus, we get

    $$\|A^{-1}\|_{\infty}\le\frac{\max\Big\{1,\ \max\limits_{i\in N_2(A)}\big(\frac{p_i(A)}{|a_{ii}|}+\varepsilon\big)\Big\}}{\min\Big\{\min\limits_{i\in N_1(A)}H_i,\ \min\limits_{i\in N_2(A)}Q_i\Big\}}.$$
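    The bound (2.5) can be evaluated mechanically; the sketch below (helper names ours; numpy is assumed to be available only for the exact norm) computes $H_i$, $Q_i$ and the bound for the matrix of Example 1 with the sample value $\varepsilon=0.1$, which satisfies (2.3) for that matrix.

```python
# Sketch of the bound (2.5) of Theorem 6; helper names are ours and
# numpy is assumed available only to compare with the true norm.
import numpy as np

def row_sum(A, i):
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def theorem6_bound(A, eps):
    n = len(A)
    N2 = {i for i in range(n) if abs(A[i][i]) > row_sum(A, i)}
    N1 = set(range(n)) - N2
    p = [sum((row_sum(A, j) / abs(A[j][j]) if j in N2 else 1.0) * abs(A[i][j])
             for j in range(n) if j != i) for i in range(n)]
    H = [abs(A[i][i]) - sum(abs(A[i][j]) for j in N1 if j != i)
         - sum((p[j] / abs(A[j][j]) + eps) * abs(A[i][j]) for j in N2 if j != i)
         for i in N1]
    Q = [eps * (abs(A[i][i]) - sum(abs(A[i][j]) for j in N2 if j != i))
         + sum((row_sum(A, j) - p[j]) / abs(A[j][j]) * abs(A[i][j])
               for j in N2 if j != i)
         for i in N2]
    num = max([1.0] + [p[i] / abs(A[i][i]) + eps for i in N2])
    return num / min(H + Q)

A = [[4, 1, 1, 1], [0, 3, 0, 1], [0, 0, 4, 1], [1, 1, 1, 2]]  # Example 1
bound = theorem6_bound(A, eps=0.1)   # eps = 0.1 satisfies (2.3) for this A
true_norm = np.linalg.norm(np.linalg.inv(np.array(A, float)), np.inf)
assert true_norm <= bound
```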

    Based on the new proof, the upper bound for the infinity norm of the inverse of SDD1 matrices is presented above. Since SDD matrices are a subclass of SDD1 matrices, from Theorem 6 it is easy to obtain the following Corollary 1.

    Corollary 1. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. Then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}+\varepsilon}{\min\limits_{i\in N}M_i},\tag{2.6}$$

    where

    $$M_i=\varepsilon(|a_{ii}|-r_i(A))+\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|,\quad i\in N,\tag{2.7}$$

    and

    $$0<\varepsilon<\min_{i\in N}\frac{|a_{ii}|-p_i(A)}{r_i(A)}.\tag{2.8}$$

    Example 2. Consider the following SDD1 matrices

    $$A_1=\begin{bmatrix} 4 & 1 & 2 & 0\\ 1 & 4 & 1 & 0\\ 2 & 0 & 8 & 0\\ 2 & 0 & 0 & 8 \end{bmatrix}$$

    and

    $$A_2=\begin{bmatrix} 4 & 1 & 1 & 0\\ 1 & 4 & 1 & 0\\ 1 & 1 & 4 & 0\\ 0 & 0 & 0 & 0.2 \end{bmatrix}.$$

    Obviously, $A_1$ and $A_2$ are also SDD matrices. By calculation, we have

    $$p_1(A_1)=1,\quad p_2(A_1)=1,\quad p_3(A_1)=1.5,\quad p_4(A_1)=1.5\quad\text{and}\quad 0<\varepsilon_1<1,$$

    and

    $$p_1(A_2)=1,\quad p_2(A_2)=1,\quad p_3(A_2)=1,\quad p_4(A_2)=0\quad\text{and}\quad 0<\varepsilon_2<1.5.$$

    By the Varah bound (2.4) of Theorem 5, we obtain that $\|A_1^{-1}\|_{\infty}\le 1$ and $\|A_2^{-1}\|_{\infty}\le 5$. By the bound (2.6) of Corollary 1, we obtain that $\|A_1^{-1}\|_{\infty}\le\frac{0.25+\varepsilon_1}{0.375+\varepsilon_1}$ (where $0<\varepsilon_1<1$) and $\|A_2^{-1}\|_{\infty}\le 5+\frac{5}{4\varepsilon_2}$ (where $0<\varepsilon_2<1.5$). In fact, $\|A_1^{-1}\|_{\infty}\approx 0.4434$ and $\|A_2^{-1}\|_{\infty}=5$. Obviously, for the matrix $A_1$, it is easy to obtain that $\|A_1^{-1}\|_{\infty}\approx 0.4434<\frac{0.25+\varepsilon_1}{0.375+\varepsilon_1}<1$ for any $0<\varepsilon_1<1$. However, for the matrix $A_2$, we have $\|A_2^{-1}\|_{\infty}=5<5+\frac{5}{4\varepsilon_2}$ for any $0<\varepsilon_2<1.5$, which means that the bound in Corollary 1 is better than the Varah bound in Theorem 5 in some cases but not in others. This raises a natural question: under what conditions is the bound in Corollary 1 better than the Varah bound in Theorem 5?
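    These numbers are easy to reproduce; the following Python sketch (numpy assumed available) checks the exact norms against both bounds at the sample value $\varepsilon_1=0.5$.

```python
# Reproducing Example 2 numerically; numpy is assumed to be available.
import numpy as np

A1 = np.array([[4, 1, 2, 0], [1, 4, 1, 0], [2, 0, 8, 0], [2, 0, 0, 8]], float)
A2 = np.array([[4, 1, 1, 0], [1, 4, 1, 0], [1, 1, 4, 0], [0, 0, 0, 0.2]])

def inf_norm_inv(A):
    return np.linalg.norm(np.linalg.inv(A), np.inf)

eps1 = 0.5                                   # any value in (0, 1) works here
print(inf_norm_inv(A1))                      # about 0.4434
print((0.25 + eps1) / (0.375 + eps1))        # Corollary 1 bound: 6/7, below Varah's 1
print(inf_norm_inv(A2))                      # 5.0, already equal to the Varah bound
```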

    The following Theorem 7 shows that the bound in Corollary 1 is better than that in Theorem 5 under some conditions.

    Theorem 7. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. If

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|,$$

    then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}+\varepsilon}{\min\limits_{i\in N}M_i}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))},$$

    where $M_i$ is given as in Eq (2.7) and $\varepsilon$ satisfies inequality (2.8).

    Proof. From the condition

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|,$$

    it is easy to obtain that

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)+\varepsilon\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|+\varepsilon\min_{i\in N}\big(|a_{ii}|-r_i(A)\big);$$

    thus, combining like terms on the left-hand side of the above inequality, we obtain the following inequality

    $$\begin{aligned}
    \Big(\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}+\varepsilon\Big)\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)&\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|+\min_{i\in N}\big(\varepsilon(|a_{ii}|-r_i(A))\big)\\
    &\le\min_{i\in N}\Big(\varepsilon(|a_{ii}|-r_i(A))+\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|\Big)=\min_{i\in N}M_i.
    \end{aligned}\tag{2.9}$$

    Since $A$ is an SDD matrix, we have

    $$|a_{ii}|>r_i(A)\quad\text{and}\quad M_i>0\quad\text{for any } i\in N.$$

    Therefore, from inequality (2.9), it is easy to get

    $$\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}+\varepsilon}{\min\limits_{i\in N}M_i}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))},$$

    and thus from Corollary 1, we have

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}+\varepsilon}{\min\limits_{i\in N}M_i}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    The following Example 3 also illustrates Theorem 7.

    Example 3. Consider again the matrix $A_1$ of Example 2. By a simple calculation, we obtain

    $$\frac{p_1(A_1)}{|a_{11}|}=0.25,\quad\frac{p_2(A_1)}{|a_{22}|}=0.25,\quad\frac{p_3(A_1)}{|a_{33}|}=0.1875\quad\text{and}\quad\frac{p_4(A_1)}{|a_{44}|}=0.1875;$$

    thus,

    $$\sum_{j\in N\setminus\{1\}}\frac{r_j(A_1)-p_j(A_1)}{|a_{jj}|}|a_{1j}|=0.375,\quad\sum_{j\in N\setminus\{2\}}\frac{r_j(A_1)-p_j(A_1)}{|a_{jj}|}|a_{2j}|=0.5625,$$
    $$\sum_{j\in N\setminus\{3\}}\frac{r_j(A_1)-p_j(A_1)}{|a_{jj}|}|a_{3j}|=1\quad\text{and}\quad\sum_{j\in N\setminus\{4\}}\frac{r_j(A_1)-p_j(A_1)}{|a_{jj}|}|a_{4j}|=1.$$

    It is easy to verify that

    $$\max_{i\in N}\frac{p_i(A_1)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A_1)\big)=0.25<0.375=\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A_1)-p_j(A_1)}{|a_{jj}|}|a_{ij}|,$$

    that is, the matrix $A_1$ satisfies the conditions of Theorem 7. Therefore, from Theorem 7, we obtain that for any $0<\varepsilon_1<1$,

    $$\|A_1^{-1}\|_{\infty}\le\frac{0.25+\varepsilon_1}{0.375+\varepsilon_1}<1=\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A_1))}.$$

    However, the upper bound (2.5) contains the parameter $\varepsilon$. Next, based on Theorem 4, an upper bound for the infinity norm of the inverse of SDD1 matrices that depends only on the entries of the given matrix is presented as follows.

    Theorem 8. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD1 matrix such that for each $i\in N_2(A)$ there is at least one $a_{ij}\ne 0$ with $j\in N_2(A)$ and $j\ne i$. Then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\dfrac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\Big(p_i(A)-\sum\limits_{j\in N\setminus\{i\}}\dfrac{p_j(A)}{|a_{jj}|}|a_{ij}|\Big)}.$$

    Proof. By Theorem 4, there exists a positive diagonal matrix $D$ such that $AD$ is an SDD matrix, where $D$ is defined as in Theorem 4. Therefore, we get the following inequality:

    $$\|A^{-1}\|_{\infty}=\|D(D^{-1}A^{-1})\|_{\infty}=\|D(AD)^{-1}\|_{\infty}\le\|D\|_{\infty}\|(AD)^{-1}\|_{\infty}.$$

    Since the matrix $D$ is positive diagonal, it is easy to obtain that

    $$\|D\|_{\infty}=\max_{1\le i\le n}d_i=\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}.$$

    Since $AD$ is an SDD matrix, by Theorem 5 we obtain

    $$\|(AD)^{-1}\|_{\infty}\le\frac{1}{\min\limits_{1\le i\le n}(|(AD)_{ii}|-r_i(AD))}=\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|d_i-r_i(AD))}=\frac{1}{\min\limits_{i\in N}\Big(p_i(A)-\sum\limits_{j\in N\setminus\{i\}}\frac{p_j(A)}{|a_{jj}|}|a_{ij}|\Big)}.$$

    Thus, we get that

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\Big(p_i(A)-\sum\limits_{j\in N\setminus\{i\}}\frac{p_j(A)}{|a_{jj}|}|a_{ij}|\Big)}.$$
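    Since the bound of Theorem 8 is parameter-free, it is easy to compute; below is a Python sketch (helper names ours, numpy assumed only for the comparison) evaluated on the SDD1 (indeed SDD) matrix $A_1$ of Example 2.

```python
# Sketch of the parameter-free bound of Theorem 8; helper names are ours.
import numpy as np

def row_sum(A, i):
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def p_value(A, i):
    n = len(A)
    N2 = {j for j in range(n) if abs(A[j][j]) > row_sum(A, j)}
    return sum((row_sum(A, j) / abs(A[j][j]) if j in N2 else 1.0) * abs(A[i][j])
               for j in range(n) if j != i)

def theorem8_bound(A):
    n = len(A)
    p = [p_value(A, i) for i in range(n)]
    num = max(p[i] / abs(A[i][i]) for i in range(n))
    den = min(p[i] - sum(p[j] / abs(A[j][j]) * abs(A[i][j])
                         for j in range(n) if j != i)
              for i in range(n))
    return num / den

A1 = [[4, 1, 2, 0], [1, 4, 1, 0], [2, 0, 8, 0], [2, 0, 0, 8]]  # Example 2
bound = theorem8_bound(A1)
true_norm = np.linalg.norm(np.linalg.inv(np.array(A1, float)), np.inf)
assert true_norm <= bound
print(bound)  # 2/3, about 0.667
```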

    Since SDD matrices are a subclass of SDD1 matrices, from Theorem 8 it is easy to obtain the following corollary.

    Corollary 2. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. If $r_i(A)\ne 0$ for all $i\in N$, then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\dfrac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\dfrac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}.$$

    The following Theorems 9 and 10 show that the bound in Corollary 2 is better than that in Theorem 5 under some conditions.

    Theorem 9. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. If $r_i(A)\ne 0$ for all $i\in N$ and

    $$\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|\ge\min_{i\in N}\big(|a_{ii}|-r_i(A)\big),$$

    then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}<\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    Proof. Since $A$ is an SDD matrix, it is easy to get that

    $$|a_{ii}|>p_i(A)\quad\text{for any } i\in N;$$

    thus

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}<1,$$

    and from the condition

    $$\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|\ge\min_{i\in N}\big(|a_{ii}|-r_i(A)\big),$$

    we obtain

    $$\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}<\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    Therefore, from Corollary 2, we get

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}<\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    The following Example 4 also illustrates Theorem 9.

    Example 4. Consider the following SDD matrix

    $$A_3=\begin{bmatrix} 2.5 & 2 & 0.4 & 0\\ 2 & 5.5 & 3 & 0\\ 1 & 2 & 3.5 & 0\\ 1 & 2 & 0 & 3.5 \end{bmatrix}.$$

    By the Varah bound (2.4) of Theorem 5, we obtain that $\|A_3^{-1}\|_{\infty}\le 10$.

    By a simple calculation, we obtain

    $$p_1(A_3)\approx 2.1610,\quad p_2(A_3)\approx 4.4914,\quad p_3(A_3)\approx 2.7782\quad\text{and}\quad p_4(A_3)\approx 2.7782;$$

    then,

    $$\frac{p_1(A_3)}{|a_{11}|}\approx 0.8644,\quad\frac{p_2(A_3)}{|a_{22}|}\approx 0.8166,\quad\frac{p_3(A_3)}{|a_{33}|}\approx 0.7938\quad\text{and}\quad\frac{p_4(A_3)}{|a_{44}|}\approx 0.7938;$$

    thus,

    $$\sum_{j\in N\setminus\{1\}}\frac{r_j(A_3)-p_j(A_3)}{|a_{jj}|}|a_{1j}|\approx 0.2103,\quad\sum_{j\in N\setminus\{2\}}\frac{r_j(A_3)-p_j(A_3)}{|a_{jj}|}|a_{2j}|\approx 0.3813,$$
    $$\sum_{j\in N\setminus\{3\}}\frac{r_j(A_3)-p_j(A_3)}{|a_{jj}|}|a_{3j}|\approx 0.2806\quad\text{and}\quad\sum_{j\in N\setminus\{4\}}\frac{r_j(A_3)-p_j(A_3)}{|a_{jj}|}|a_{4j}|\approx 0.2806.$$

    It is easy to verify that

    $$\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A_3)-p_j(A_3)}{|a_{jj}|}|a_{ij}|\approx 0.2103>0.1=\min_{i\in N}\big(|a_{ii}|-r_i(A_3)\big).$$

    Therefore, the matrix $A_3$ satisfies the conditions of Theorem 9; thus, from the bound of Theorem 9, we obtain

    $$\|A_3^{-1}\|_{\infty}\le 4.1103.$$

    In fact, $\|A_3^{-1}\|_{\infty}\approx 0.9480$. Obviously, $\|A_3^{-1}\|_{\infty}\approx 0.9480<4.1103<10$, which means that the bound in Corollary 2 is better than the Varah bound of Theorem 5 under some conditions.
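    The computation in Example 4 can be replayed with a few lines of numpy (assumed available); `r`, `p`, `varah` and `cor2` are our names for $r_i(A_3)$, $p_i(A_3)$, the Varah bound (2.4), and the Corollary 2 bound.

```python
# Replaying Example 4 with numpy (assumed available); variable names are ours.
import numpy as np

A3 = np.array([[2.5, 2, 0.4, 0],
               [2, 5.5, 3, 0],
               [1, 2, 3.5, 0],
               [1, 2, 0, 3.5]])
diag = np.abs(np.diag(A3))
absA = np.abs(A3)
np.fill_diagonal(absA, 0)
r = absA.sum(axis=1)                      # r_i(A_3)
p = absA @ (r / diag)                     # p_i(A_3), valid since N_1(A_3) is empty
varah = 1 / np.min(diag - r)              # Varah bound (2.4): about 10
cor2 = np.max(p / diag) / np.min(absA @ ((r - p) / diag))  # about 4.11
true_norm = np.linalg.norm(np.linalg.inv(A3), np.inf)      # about 0.948
assert true_norm <= cor2 <= varah
```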

    Theorem 10. Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ ($n\ge 2$) be an SDD matrix. If $r_i(A)\ne 0$ for all $i\in N$ and

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|<\min_{i\in N}\big(|a_{ii}|-r_i(A)\big),$$

    then

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    Proof. Since $A$ is an SDD matrix, we have

    $$|a_{ii}|>r_i(A)\quad\text{for any } i\in N.$$

    From the condition $r_i(A)\ne 0$ for all $i\in N$ and Theorem 4, it is easy to obtain that

    $$\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|>0\quad\text{for any } i\in N.$$

    Therefore, from the condition

    $$\max_{i\in N}\frac{p_i(A)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A)\big)\le\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|,$$

    we obtain

    $$\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))},$$

    and thus from Corollary 2, we get

    $$\|A^{-1}\|_{\infty}\le\frac{\max\limits_{i\in N}\frac{p_i(A)}{|a_{ii}|}}{\min\limits_{i\in N}\sum\limits_{j\in N\setminus\{i\}}\frac{r_j(A)-p_j(A)}{|a_{jj}|}|a_{ij}|}\le\frac{1}{\min\limits_{1\le i\le n}(|a_{ii}|-r_i(A))}.$$

    The following Example 5 also illustrates Theorem 10.

    Example 5. Consider the following SDD matrix

    $$A_4=\begin{bmatrix} 4 & 1 & 1 & 0\\ 2 & 8 & 2 & 0\\ 1 & 2 & 4 & 0\\ 1 & 2 & 0 & 4 \end{bmatrix}.$$

    By the Varah bound (2.4) of Theorem 5, we obtain that $\|A_4^{-1}\|_{\infty}\le 1$.

    By calculation, we obtain that

    $$p_1(A_4)=1.25,\quad p_2(A_4)=2.5,\quad p_3(A_4)=1.5\quad\text{and}\quad p_4(A_4)=1.5;$$

    then,

    $$\frac{p_1(A_4)}{|a_{11}|}=0.3125,\quad\frac{p_2(A_4)}{|a_{22}|}=0.3125,\quad\frac{p_3(A_4)}{|a_{33}|}=0.375\quad\text{and}\quad\frac{p_4(A_4)}{|a_{44}|}=0.375;$$

    thus,

    $$\sum_{j\in N\setminus\{1\}}\frac{r_j(A_4)-p_j(A_4)}{|a_{jj}|}|a_{1j}|=0.5625,\quad\sum_{j\in N\setminus\{2\}}\frac{r_j(A_4)-p_j(A_4)}{|a_{jj}|}|a_{2j}|=1.125,$$
    $$\sum_{j\in N\setminus\{3\}}\frac{r_j(A_4)-p_j(A_4)}{|a_{jj}|}|a_{3j}|=0.5625\quad\text{and}\quad\sum_{j\in N\setminus\{4\}}\frac{r_j(A_4)-p_j(A_4)}{|a_{jj}|}|a_{4j}|=0.5625.$$

    It is easy to verify that

    $$\max_{i\in N}\frac{p_i(A_4)}{|a_{ii}|}\,\min_{i\in N}\big(|a_{ii}|-r_i(A_4)\big)=0.375<\min_{i\in N}\sum_{j\in N\setminus\{i\}}\frac{r_j(A_4)-p_j(A_4)}{|a_{jj}|}|a_{ij}|=0.5625<1=\min_{i\in N}\big(|a_{ii}|-r_i(A_4)\big),$$

    that is, the matrix $A_4$ satisfies the conditions of Theorem 10; thus, by the bound of Theorem 10, we obtain

    $$\|A_4^{-1}\|_{\infty}\le 0.6667.$$

    In fact, $\|A_4^{-1}\|_{\infty}\approx 0.4019$. Obviously, $\|A_4^{-1}\|_{\infty}\approx 0.4019<0.6667<1$, which means that the bound in Corollary 2 is better than the Varah bound of Theorem 5 under some conditions.

    In this paper, a new proof that SDD1 matrices are a subclass of H-matrices is given, and based on the new proof, some upper bounds for the infinity norm of the inverse of SDD1 matrices are established; some new upper bounds for the infinity norm of the inverse of SDD matrices are also obtained. Moreover, we show that these new upper bounds for SDD matrices are better than the well-known Varah bound in some cases. In addition, some numerical examples are given to illustrate the corresponding results.

    This work was partly supported by the National Natural Science Foundation of China (31600299), the Natural Science Basic Research Program of Shaanxi, China (2020JM-622), the Scientific Research Program Funded by Shaanxi Provincial Education Department (18JK0044), the Science and Technology Project of Baoji (2017JH2-24), the Key Project of Baoji University of Arts and Sciences (ZK16050) and the Postgraduate Innovative Research Project of Baoji University of Arts and Sciences (YJSCX20ZD05).

    The authors declare that they have no competing interests.



    [1] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, New York: Academic Press, 1979.
    [2] J. M. Peña, Diagonal dominance, Schur complements and some classes of H-matrices and P-matrices, Adv. Comput. Math., 35 (2011), 357–373. https://doi.org/10.1007/s10444-010-9160-5
    [3] P. F. Dai, A note on diagonal dominance, Schur complements and some classes of H-matrices and P-matrices, Adv. Comput. Math., 42 (2016), 1–4. https://doi.org/10.1007/s10444-014-9375-y
    [4] L. Y. Kolotilina, On bounding inverses to Nekrasov matrices in the infinity norm, J. Math. Sci., 199 (2014), 432–437. https://doi.org/10.1007/s10958-014-1870-7
    [5] L. Gao, Q. Liu, New upper bounds for the infinity norm of Nekrasov matrices, J. Math. Inequal., 14 (2020), 723–733. http://dx.doi.org/10.7153/jmi-2020-14-46
    [6] H. Orera, J. M. Peña, Infinity norm bounds for the inverse of Nekrasov matrices using scaling matrices, Appl. Math. Comput., 358 (2019), 119–127. https://doi.org/10.1016/j.amc.2019.04.027
    [7] L. Cvetković, V. Kostić, K. Doroslovački, Max-norm bounds for the inverse of S-Nekrasov matrices, Appl. Math. Comput., 218 (2012), 9498–9503. https://doi.org/10.1016/j.amc.2012.03.040
    [8] L. Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci., 207 (2015), 786–794. https://doi.org/10.1007/s10958-015-2401-x
    [9] L. Y. Kolotilina, Nekrasov type matrices and upper bounds for their inverses, J. Math. Sci., 249 (2020), 221–230. https://doi.org/10.1007/s10958-020-04936-5
    [10] Y. Q. Wang, L. Gao, An improvement of the infinity norm bound for the inverse of $\{p_1,p_2\}$-Nekrasov matrices, J. Inequal. Appl., 2019 (2019), 177. https://doi.org/10.1186/s13660-019-2134-3
    [11] C. L. Li, L. Cvetković, Y. M. Wei, J. X. Zhao, An infinity norm bound for the inverse of Dashnic-Zusmanovich type matrices with applications, Linear Algebra Appl., 565 (2019), 99–122. https://doi.org/10.1016/j.laa.2018.12.013
    [12] L. Y. Kolotilina, On Dashnic-Zusmanovich (DZ) and Dashnic-Zusmanovich type (DZT) matrices and their inverses, J. Math. Sci., 240 (2019), 799–812. https://doi.org/10.1007/s10958-019-04397-5
    [13] N. Morača, Upper bounds for the infinity norm of the inverse of SDD and S-SDD matrices, J. Comput. Appl. Math., 206 (2007), 666–678. https://doi.org/10.1016/j.cam.2006.08.013
    [14] L. Y. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255. https://doi.org/10.1007/s10958-020-04938-3
    [15] X. Y. Chen, Y. Q. Wang, Subdirect sums of SDD1 matrices, J. Math., 2020 (2020), 3810423. https://doi.org/10.1155/2020/3810423
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)