
Citation: Lanlan Liu, Yuxue Zhu, Feng Wang, Yuanjie Geng. Infinity norm bounds for the inverse of SDD+1 matrices with applications[J]. AIMS Mathematics, 2024, 9(8): 21294-21320. doi: 10.3934/math.20241034
Nonsingular H-matrices and their subclasses play an important role in many fields of science, such as computational mathematics, mathematical physics, and control theory; see [1,2,3,4]. Moreover, infinity norm bounds for the inverses of nonsingular H-matrices can be used in the convergence analysis of matrix splitting and matrix multi-splitting iterative methods for solving large sparse systems of linear equations [5], as well as in bounding the errors of linear complementarity problems [6,7]. In recent years, many scholars have studied infinity norm bounds for the inverses of special nonsingular H-matrices, such as GSDD_1 matrices [8], CKV-type matrices [9], S-SDDS matrices [10], S-Nekrasov matrices [11], and S-SDD matrices [12]; such bounds depend only on the entries of the matrix.
In this paper, we prove that an SDD_1^{+} matrix M is a nonsingular H-matrix by constructing a scaling matrix D such that MD is a strictly diagonally dominant matrix. Such a scaling matrix is important in several applications, for example, infinity norm bounds of the inverse [13], eigenvalue localization [3], and error bounds for linear complementarity problems [14]. We obtain an infinity norm bound for the inverse of an SDD_1^{+} matrix by means of the scaling matrix, and then use this result to discuss error bounds for the linear complementarity problem.
For a positive integer n\geq 2 , let N denote the set \{1, 2, \ldots, n\} , and let C^{n\times n} (R^{n\times n}) denote the set of all n\times n complex (real) matrices. Next, we review some special subclasses of nonsingular H-matrices and related lemmas.
Definition 1. [5] A matrix M = (m_{ij})\in C^{n\times n} is called a strictly diagonally dominant (SDD) matrix if
\begin{equation} |m_{ii}| > {r_i}\left( M \right), \quad \forall i \in N, \end{equation} | (1.1) |
where {r_i}\left( M \right) = \sum\limits_{j = 1, j \neq i}^{n}|m_{ij}| .
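These conditions are easy to test numerically. The following minimal Python sketch (assuming NumPy is available; the helper names r and is_sdd are ours, not from the paper) implements r_i(M) and the test (1.1):

```python
# Minimal sketch of r_i(M) and the SDD test (1.1); helper names are ours.
import numpy as np

def r(M):
    """r_i(M): sum of the off-diagonal absolute values in row i."""
    A = np.abs(np.asarray(M, dtype=float))
    return A.sum(axis=1) - np.diag(A)

def is_sdd(M):
    """True iff |m_ii| > r_i(M) for every i in N."""
    return bool(np.all(np.abs(np.diag(M)) > r(M)))
```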
Various generalizations of SDD matrices have been introduced and studied in the literature; see [7,15,16,17,18].
Definition 2. [8] A matrix M = (m_{ij})\in C^{n\times n} is called a generalized SDD_1 (GSDD_1) matrix if
\begin{equation} \left\{\begin{array}{ll} {r_i}\left( M \right)-{p_{i}^{N_2}}\left( M \right) > 0, & i\in N_2, \\ \left({r_i}\left( M \right)-{p_{i}^{N_2}}\left( M \right)\right)\left(|m_{jj}|-{p_{j}^{N_1}}\left( M \right)\right) > {p_{i}^{N_1}}\left( M \right){p_{j}^{N_2}}\left( M \right), & i\in N_2, \; j\in N_1, \end{array}\right. \end{equation} | (1.2) |
where N_1 = \left\{ i\in N|0 < |m_{ii}|\leq {r_i}\left( M \right)\right\} , \; N_2 = \left\{ i\in N||m_{ii}| > {r_i}\left( M \right)\right\} , \; {p_{i}^{N_2}}\left( M \right) = \sum\limits_{j \in N_2\backslash \left\{i\right\}}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|} , \; {p_{i}^{N_1}}\left( M \right) = \sum\limits_{j \in N_1\backslash \left\{i\right\}}|m_{ij}| , \; i\in N .
Definition 3. [9] A matrix M = (m_{ij})\in C^{n\times n} , with n\geq 2 , is called a CKV-type matrix if for all i\in N the set S_{i}^{\star}(M) is not empty, where
S_{i}^{\star}(M) = \left\{S\in \Sigma(i): |m_{ii}| > r_{i}^{S}(M), \; {\rm{and\; for\; all}}\; j\in \overline{S}, \; \left(|m_{ii}|-r_{i}^{S}(M)\right)\left(|m_{jj}|-r_{j}^{\overline{S}}(M)\right) > r_{i}^{\overline{S}}(M)\, r_{j}^{S}(M) \right\}, |
with \Sigma(i) = \left\{S\subsetneq N : i\in S\right\} , \; \overline{S} = N\backslash S , and r_{i}^{S}\left(M \right) : = \sum\limits_{j \in S\backslash \left\{ i \right\}} {\left| {{m_{ij}}} \right|}.
Lemma 1. [8] Let M = (m_{ij})\in C^{n\times n} be a GSDD_1 matrix. Then
\begin{equation} \|M^{-1}\|_{\infty} \leq \frac{\max \left\{\varepsilon, \max\limits_{i\in N_2}\frac{{r_i}\left( M \right)}{|m_{ii}|} \right\}}{\min\left\{\min\limits_{i\in N_2}\phi_i, \min\limits_{i\in N_1}\psi_i \right\} }, \end{equation} | (1.3) |
where
\begin{equation} \phi_i = {r_i}\left( M \right)-\sum\limits_{j \in N_2\backslash \left\{i\right\}}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}-\sum\limits_{j \in N_1}|m_{ij}|\varepsilon, \; \; \; i\in N_2, \end{equation} | (1.4) |
\begin{equation} \psi_i = |m_{ii}|\varepsilon-\sum\limits_{j \in N_1\backslash \left\{i\right\}}|m_{ij}|\varepsilon-\sum\limits_{j \in N_2}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}, \; \; \; i\in N_1, \end{equation} | (1.5) |
and
\begin{equation} \varepsilon\in \left( \max\limits_{i\in N_1}\frac{{p_{i}^{N_2}}\left( M \right)}{|m_{ii}|-{p_{i}^{N_1}}\left( M \right)}, \min\limits_{j\in N_2}\frac{{r_j}\left( M \right)- {p_{j}^{N_2}}\left( M \right)}{{p_{j}^{N_1}}\left( M \right)} \right). \end{equation} | (1.6) |
Lemma 2. [8] Suppose that M = (m_{ij})\in R^{n\times n} is a GSDD_1 matrix with positive diagonal entries, and D = diag(d_i) with d_i\in[0, 1] . Then
\begin{eqnarray} \begin{aligned} \max\limits_{d\in [0, 1]^{n}} \|(I-D+DM)^{-1}\|_{\infty} \leq \max\left\{ \frac{\max \left\{\varepsilon, \max\limits_{i\in N_2}\frac{{r_i}\left( M \right)}{|m_{ii}|} \right\}}{\min\left\{\min\limits_{i\in N_2}\phi_i, \min\limits_{i\in N_1}\psi_i \right\}}, \frac{\max \left\{\varepsilon, \max\limits_{i\in N_2}\frac{{r_i}\left( M \right)}{|m_{ii}|} \right\}}{\min \left\{\varepsilon, \min\limits_{i\in N_2}\frac{{r_i}\left( M \right)}{|m_{ii}|} \right\}} \right\}, \end{aligned} \end{eqnarray} | (1.7) |
where \phi_i, \; \psi_i, and \; \varepsilon are shown in (1.4)–(1.6), respectively.
Definition 4. [19] A matrix M = (m_{ij})\in C^{n\times n} is called an SDD_1 matrix if
\begin{equation} |m_{ii}| > {r_i}^{\rm{'}}\left( M \right), \quad for\; each\; i \in N_1, \end{equation} | (1.8) |
where
{r_i}^{\rm{'}}\left( M \right) = \sum\limits_{j \in N_1\backslash \left\{i\right\}}|m_{ij}|+\sum\limits_{j \in N_2\backslash \left\{i\right\}}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}, |
N_1 = \left\{ i\in N|0 < |m_{ii}|\leq {r_i}\left( M \right)\right\}, \; N_2 = \left\{ i\in N||m_{ii}| > {r_i}\left( M \right)\right\}. |
The rest of this paper is organized as follows: In Section 2, we propose a new subclass of nonsingular H -matrices, referred to as SDD_1^{+} matrices, discuss some of their properties, and use numerical examples to explore the relationships with other subclasses of nonsingular H -matrices, including SDD_1 matrices, GSDD_1 matrices, and CKV -type matrices. At the same time, a scaling matrix D is constructed to verify that such a matrix M is a nonsingular H -matrix. In Section 3, two methods are used to derive two different upper bounds on the infinity norm of the inverse (one with a parameter and one without), and numerical examples show the validity of the results. In Section 4, two error bounds for linear complementarity problems of SDD_1^{+} matrices are given by using the scaling matrix D , and numerical examples illustrate the effectiveness of the obtained results. Finally, a summary of the paper is given in Section 5.
For the sake of the following description, some symbols are first explained:
\begin{array}{c} N = N_1\cup N_2, \; N_1 = N_{1}^{(1)}\cup N_{2}^{(1)}\neq\emptyset , \; r_i(M)\neq 0, \\ N_1 = \left\{ i\in N|0 < |m_{ii}|\leq {r_i}\left( M \right)\right\}, \; N_2 = \left\{ i\in N||m_{ii}| > {r_i}\left( M \right)\right\}, \end{array} | (2.1) |
\begin{equation} N_{1}^{(1)} = \left\{ i\in N_1|0 < |m_{ii}|\leq {r_i}^{\rm{'}}\left( M \right)\right\}, \; N_{2}^{(1)} = \left\{ i\in N_1||m_{ii}| > {r_i}^{\rm{'}}\left( M \right)\right\}, \end{equation} | (2.2) |
\begin{equation} {r_i}^{\rm{'}}\left( M \right) = \sum\limits_{j \in N_1\backslash \left\{i\right\}}|m_{ij}|+\sum\limits_{j \in N_2}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}, \; \; \; i\in N_1. \end{equation} | (2.3) |
By the definitions of N_{1}^{(1)} and N_{2}^{(1)} , since N_1 = N_{1}^{(1)}\cup N_{2}^{(1)} , {r_i}^{\rm{'}}\left(M \right) in Definition 4 can be rewritten as
\begin{eqnarray} {r_i}^{\rm{'}}\left( M \right) & = &\sum\limits_{j \in N_1\backslash \left\{i\right\}}|m_{ij}|+\sum\limits_{j \in N_2\backslash \left\{i\right\}}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}\\ & = &\sum\limits_{j \in N_{1}^{(1)}\backslash \left\{i\right\}}|m_{ij}|+\sum\limits_{j \in N_{2}^{(1)}\backslash \left\{i\right\}}|m_{ij}|+\sum\limits_{j \in N_2\backslash \left\{i\right\}}|m_{ij}|\frac{{r_j}\left( M \right)}{|m_{jj}|}. \end{eqnarray} | (2.4) |
According to (2.4), it is easy to get the following equivalent form for SDD_1 matrices.
A matrix M is called an SDD_1 matrix if
\begin{equation} \left\{\begin{array}{ll} \left|m_{i i}\right| > \sum\limits_{j \in N_1^{(1)}\backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2} \frac{r_j(M)}{\left|m_{j j}\right|}\left|m_{i j}\right|, \quad \; i \in N_1^{(1)}, \\ \left|m_{i i}\right| > \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}\backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2} \frac{r_j(M)}{\left|m_{j j}\right|}\left|m_{i j}\right|, \quad i \in N_2^{(1)} . \end{array}\right. \end{equation} | (2.5) |
By scaling the conditions in (2.5), we introduce a new class of matrices. As we will see, these matrices form a new subclass of nonsingular H -matrices.
Definition 5. A matrix M = (m_{ij})\in C^{n\times n} is called an SDD_1^{+} matrix if
\begin{equation} \left\{\begin{array}{ll} \left|m_{i i}\right| > F_i(M) = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}, i \in N_1^{(1)}, \\ \left|m_{i i}\right| > F_i^{\prime}(M) = &\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|, \; \; i \in N_2^{(1)}, \end{array}\right. \end{equation} | (2.6) |
where N_1, N_2, \; N_1^{(1)}, \; N_2^{(1)}, and \; {r_i}^{\rm{'}}(M) are defined by (2.1)–(2.3), respectively.
Proposition 1. If M = (m_{ij})\in C^{n\times n} is an SDD_1^{+} matrix and N_{1}^{(1)}\neq \emptyset , then \sum\limits_{j \in N_2^{(1)} }\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\neq 0 for i\in N_{1}^{(1)} .
Proof. Assume that there exists i\in N_{1}^{(1)} such that \sum\limits_{j \in N_2^{(1)} }\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| = 0 . We find that
\begin{eqnarray*} \left|m_{i i}\right| > \sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|& = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}\backslash\{i\}}\left|m_{i j}\right| +\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}\\ & = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} = r_{i}^{\rm{'}}(M), \end{eqnarray*} |
which contradicts i\in N_{1}^{(1)} . The proof is completed.
Proposition 2. If M = (m_{ij})\in C^{n\times n} is an SDD_1^{+} matrix with N_2^{(1)} = \emptyset , then M is also an SDD_1 matrix.
Proof. From Definition 5, we have
\left|m_{i i}\right| > \sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}, \; \; \; \; \; \; \forall i\in N_1^{(1)}. |
Since N_2^{(1)} = \emptyset , we have N_1 = N_1^{(1)} , and it holds that
\left|m_{i i}\right| > \sum\limits_{j \in N_1 \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} = {r_i}^{\rm{'}}\left( M \right), \; \; \; \; \; \; \forall i\in N_1. |
The proof is completed.
Example 1. Consider the following matrix:
M_1 = \left( {\begin{array}{*{20}{c}} 8 &{-5}&{ 2}&3\\ {3}&9&{6}&2\\ { 4}&{-2}&9&0\\ -3&1&2&10 \end{array}} \right). |
In fact, N_1 = \left\{1, 2\right\} and N_2 = \left\{3, 4\right\} . Through calculations, we obtain that
r_{1}\left(M_1\right) = 10, \; \; r_{2}\left(M_1\right) = 11, \; \; r_{3}\left(M_1\right) = 6, \; \; r_{4}\left(M_1\right) = 6, |
{r_1}^{\rm{'}}\left(M_1\right) = |m_{12}|+|m_{13}|\frac{r_{3}\left(M_1\right)}{|m_{33}|} +|m_{14}|\frac{r_{4}\left(M_1\right)}{|m_{44}|}\approx 8.1333, |
{r_2}^{\rm{'}}\left(M_1\right) = |m_{21}|+|m_{23}|\frac{r_{3}\left(M_1\right)}{|m_{33}|} +|m_{24}|\frac{r_{4}\left(M_1\right)}{|m_{44}|} = 8.2000 . |
Since |m_{11}| = 8 < 8.1333 = {r_1}^{\rm{'}}\left(M_1\right) , M_1 is not an SDD_1 matrix.
Since
r_{1}^{\rm{'}}\left(M_1\right)\approx 8.1333 > |m_{11}| = 8, |
r_{2}^{\rm{'}}\left(M_1\right) = 8.2000 < |m_{22}| = 9, |
then N_1^{(1)} = \left\{1\right\} , N_2^{(1)} = \left\{2\right\} . As
|m_{11}| = 8 > |m_{12}|\frac{r_{2}^{\rm{'}}\left(M_1\right)}{|m_{22}|} +|m_{13}|\frac{r_{3}\left(M_1\right)}{|m_{33}|}+|m_{14}|\frac{r_{4}\left(M_1\right)}{|m_{44}|}\approx7.6889, |
|m_{22}| = 9 > |m_{23}|+|m_{24}| = 8, |
M_1 is an SDD_1^{+} matrix by Definition 5.
Example 2. Consider the following matrix:
M_2 = \left( {\begin{array}{*{20}{c}} 15 &{4}&8\\ {-7}&7&-5\\ 1&-2&16 \end{array}} \right). |
In fact, N_1 = \left\{2\right\} and N_2 = \left\{1, 3\right\} . By calculations, we get
r_{1}\left(M_2\right) = 12, \; \; r_{2}\left(M_2\right) = 12, \; \; r_{3}\left(M_2\right) = 3, |
{r_2}^{\rm{'}}\left(M_2\right) = 0+|m_{21}|\frac{r_{1}\left(M_2\right)}{|m_{11}|} +|m_{23}|\frac{r_{3}\left(M_2\right)}{|m_{33}|} = 6.5375 . |
Since |m_{22}| = 7 > 6.5375 = {r_2}^{\rm{'}}\left(M_2\right) , M_2 is an SDD_1 matrix.
According to
r_{2}^{\rm{'}}\left(M_2\right) = 6.5375 < |m_{22}| = 7, |
we know that N_2^{(1)} = \left\{2\right\} . In addition,
|m_{22}| = 7 < 0+|m_{21}|+|m_{23}| = 12, |
and M_2 is not an SDD_1^{+} matrix.
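The calculations in Examples 1 and 2 can be reproduced mechanically. The following sketch (again assuming NumPy; all helper names are ours, not from the paper) builds the index sets (2.1) and (2.2), the quantities r_i(M) and r_i^{\rm{'}}(M) , and tests the two conditions of Definition 5:

```python
# Sketch of an SDD_1^+ test following (2.1)-(2.3) and Definition 5.
import numpy as np

def sdd1_plus_sets(M):
    """Return r, r', N_1^(1), N_2^(1), N_2 for a square matrix M (0-based indices)."""
    A = np.abs(np.asarray(M, dtype=float))
    d = np.diag(A)
    n = len(A)
    rr = A.sum(axis=1) - d                              # r_i(M)
    N2 = [i for i in range(n) if d[i] > rr[i]]
    N1 = [i for i in range(n) if d[i] <= rr[i]]         # assumes d[i] > 0
    rp = np.zeros(n)                                    # r_i'(M) for i in N1, see (2.3)
    for i in N1:
        rp[i] = (sum(A[i, j] for j in N1 if j != i)
                 + sum(A[i, j] * rr[j] / d[j] for j in N2))
    N11 = [i for i in N1 if d[i] <= rp[i]]              # N_1^(1)
    N21 = [i for i in N1 if d[i] > rp[i]]               # N_2^(1)
    return rr, rp, N11, N21, N2

def is_sdd1_plus(M):
    """Test the two conditions of Definition 5."""
    A = np.abs(np.asarray(M, dtype=float))
    d = np.diag(A)
    rr, rp, N11, N21, N2 = sdd1_plus_sets(M)
    for i in N11:                                       # |m_ii| > F_i(M)
        F = (sum(A[i, j] for j in N11 if j != i)
             + sum(A[i, j] * rp[j] / d[j] for j in N21)
             + sum(A[i, j] * rr[j] / d[j] for j in N2))
        if d[i] <= F:
            return False
    for i in N21:                                       # |m_ii| > F_i'(M)
        if d[i] <= sum(A[i, j] for j in N21 if j != i) + sum(A[i, j] for j in N2):
            return False
    return True

M1 = np.array([[8, -5, 2, 3], [3, 9, 6, 2], [4, -2, 9, 0], [-3, 1, 2, 10]], dtype=float)
M2 = np.array([[15, 4, 8], [-7, 7, -5], [1, -2, 16]], dtype=float)
print(is_sdd1_plus(M1), is_sdd1_plus(M2))  # expected: True False
```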
As shown in Examples 1 and 2 and Proposition 2, SDD_1^{+} matrices and SDD_1 matrices have an intersecting relationship:
\{SDD_1\} \nsubseteq\left\{SDD_1^+\right\} \text { and } \left\{SDD_1^+\right\} \nsubseteq\{SDD_1 \}. |
The following examples will demonstrate the relationships between SDD_1^{+} matrices and other subclasses of nonsingular H -matrices.
Example 3. Consider the following matrix:
M_3 = \left( {\begin{array}{*{20}{c}} 40 &1&{-2}&1&2\\ {0}&10&4.1&4&6\\ 20&-2&33&4&8\\ 0&4&-6&20&2\\ 30&-4&2&0&40 \end{array}} \right). |
In fact, N_1 = \left\{2, 3\right\} and N_2 = \left\{1, 4, 5\right\} . Through calculations, we get that
r_{1}\left(M_3\right) = 6, \; \; r_{2}\left(M_3\right) = 14.1, \; \; r_{3}\left(M_3\right) = 34, \; \; r_{4}\left(M_3\right) = 12, \; \; r_{5}\left(M_3\right) = 36, |
r_{2}^{\rm{'}}\left(M_3\right) = |m_{23}|+|m_{21}|\frac{r_{1}\left(M_3\right)}{|m_{11}|} +|m_{24}|\frac{r_{4}\left(M_3\right)}{|m_{44}|}+|m_{25}|\frac{r_{5}\left(M_3\right)}{|m_{55}|} = 11.9 > |m_{22}|, |
r_{3}^{\rm{'}}\left(M_3\right) = |m_{32}|+|m_{31}|\frac{r_{1}\left(M_3\right)}{|m_{11}|} +|m_{34}|\frac{r_{4}\left(M_3\right)}{|m_{44}|}+|m_{35}|\frac{r_{5}\left(M_3\right)}{|m_{55}|} = 14.6 < |m_{33}|, |
and N_1^{(1)} = \left\{2\right\} , N_2^{(1)} = \left\{3\right\} . Because of
|m_{22}| > |m_{23}|\frac{r_{3}^{\rm{'}}\left(M_3\right)}{|m_{33}|} +|m_{21}|\frac{r_{1}\left(M_3\right)}{|m_{11}|}+|m_{24}|\frac{r_{4}\left(M_3\right)}{|m_{44}|}+ |m_{25}|\frac{r_{5}\left(M_3\right)}{|m_{55}|}\approx9.61, |
|m_{33}| = 33 > 0+|m_{31}|+|m_{34}|+|m_{35}| = 32. |
So, M_3 is an SDD_1^{+} matrix. However, since |m_{22}| = 10 < 11.9 = r_{2}^{\rm{'}}\left(M_3\right) , M_3 is not an SDD_1 matrix. Moreover, we have
p^{N_1}_{1}\left(M_3\right) = 3, \; p^{N_1}_{2}\left(M_3\right) = 4.1, \; p_3^{N_1}\left(M_3\right) = 2, \; p_4^{N_1}\left(M_3\right) = 10, \; p_5^{N_1}\left(M_3\right) = 6, |
p_1^{N_2}\left(M_3\right) = 2.4, \; p_2^{N_2}\left(M_3\right) = 7.8, \; p_3^{N_2}\left(M_3\right) = 12.6, \; p_4^{N_2}\left(M_3\right) = 1.8, \; p_5^{N_2}\left(M_3\right) = 4.5. |
Note that, taking i = 1, j = 2 , we have that
\left(r_1\left(M_3\right)-p_1^{N_2}\left(M_3\right)\right)\left(\left|m_{2 2}\right|-p_2^{N_1}\left(M_3\right)\right) = 21.24 < p_1^{N_1}\left(M_3\right) p_2^{N_2}\left(M_3\right) = 23.4. |
So, M_3 is not a GSDD_1 matrix. Moreover, it can be verified that M_3 is not a CKV -type matrix.
Example 4. Consider the following matrix:
M_4 = \left( {\begin{array}{*{20}{c}} -1 &0.4&{0}&0\\ {0.5}&1&0&0\\ -1&2.1&1&-0.5\\ 1&-2&0.31&1 \end{array}} \right). |
It is easy to check that M_4 is a CKV -type matrix and a GSDD_1 matrix, but not an SDD_1^{+} matrix.
Example 5. Consider the following matrix:
M_5 = \left( {\begin{array}{*{20}{c}} 30.78 & 6 & 6 & 6.75 & 5.25 & 6 & 3 & 6 \\ 2.25 & 33.78 & 3 & 6 & 1.5 & 4.5 & 3 & 2.25 \\ 6 & 6 & 33.03 & 6 & 6 & 6 & 5.25 & 3 \\ 0.75 & 2.25 & 6 & 28.53 & 1.5 & 2.25 & 4.5 & 4.5 \\ 0.75 & 5.25 & 1.5 & 0.75 & 29.28 & 6 & 6 & 2.25 \\ 5.25 & 4.5 & 1.5 & 0.75 & 5.25 & 35.28 & 3 & 5.25 \\ 5.25 & 3 & 4.5 & 6 & 3 & 6.75 & 30.03 & 3.75 \\ 4.5 & 0.75 & 3.75 & 7.5 & 5.25 & 0.75 & 0.75 & 29.28 \end{array}} \right). |
It can be verified that M_5 is not only an SDD_1^{+} matrix, but also a CKV -type matrix and a GSDD_1 matrix.
According to Examples 3–5, we find that SDD_1^{+} matrices have intersecting relationships with CKV -type matrices and GSDD_1 matrices, as shown in Figure 1 below. In this paper, we take N_1\neq\emptyset and r_i(M)\neq 0 , so we do not discuss the relationships among SDD matrices, SDD_1^{+} matrices, and GSDD_1 matrices.
Example 6. Consider the tridiagonal matrix M_6\in R^{n\times n} arising from the finite difference method for free boundary problems [8], where
M_6 = \left(\begin{array}{ccccc} b+\alpha \sin \left(\frac{1}{n}\right) &c &0 &\cdots &0 \\ a &b+\alpha \sin \left(\frac{2}{n}\right) &c &\cdots &0 \\ &\ddots &\ddots &\ddots &\\ 0 &\cdots &a &b+\alpha \sin \left(\frac{n-1}{n}\right) &c \\ 0 &\cdots &0 &a & b+\alpha \sin \left(1\right) \end{array} \right). |
Take n = 12000 , a = 5.5888 , b = 16.5150 , c = 10.9311 , and \alpha = 14.3417 . It is easy to verify that M_6 is an SDD_1^{+} matrix, but it is neither an SDD matrix, a GSDD_1 matrix, an SDD_1 matrix, nor a CKV -type matrix.
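The matrix M_6 can be generated and tested with the checker above (build_M6 is our own hypothetical helper; note that a dense 12000\times 12000 array occupies roughly 1.1 GB per copy, so this check needs several GB of memory):

```python
# Sketch of the tridiagonal matrix of Example 6; reuses is_sdd1_plus from above.
import numpy as np

def build_M6(n, a, b, c, alpha):
    M = np.zeros((n, n))
    for i in range(n):                                  # rows are 1-based in the text
        M[i, i] = b + alpha * np.sin((i + 1) / n)
        if i > 0:
            M[i, i - 1] = a
        if i < n - 1:
            M[i, i + 1] = c
    return M

M6 = build_M6(12000, 5.5888, 16.5150, 10.9311, 14.3417)
print(is_sdd1_plus(M6))                                 # expected: True
```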
As shown in [19] and [8], SDD_1 matrices and GSDD_1 matrices are both nonsingular H -matrices, and for each class there is an explicit construction of a diagonal matrix D , with all diagonal entries positive, such that MD is an SDD matrix. In the following, we construct a positive diagonal matrix D , involving a parameter, that scales an SDD_1^{+} matrix into an SDD matrix.
Theorem 1. Let M = (m_{ij})\in C^{n\times n} be an SDD_1^{+} matrix. Then, there exists a diagonal matrix D = diag(d_1, d_2, \cdots, d_n) with
\begin{equation} \begin{aligned} d_i = \left\{\begin{array}{ll}\; \; \; \; \; \; \; \; 1\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_1^{(1)} , & \\ \varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_2^{(1)} , \\ \varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_2 , \end{array}\right.\end{aligned} \end{equation} | (2.7) |
where
\begin{equation} 0 < \varepsilon < \min\limits_{i\in N_1^{(1)}}p_i, \end{equation} | (2.8) |
and for all i \in N_1^{(1)} , we have
\begin{equation} p_i = \frac{\left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}}{\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|}, \end{equation} | (2.9) |
such that MD is an SDD matrix.
Proof. By (2.6), for each i \in N_1^{(1)} , we have
\begin{equation} \left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} > 0. \end{equation} | (2.10) |
From Proposition 1, for all i \in N_1^{(1)} , it follows that
\begin{equation} p_i = \frac{\left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}}{\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|} > 0. \end{equation} | (2.11) |
Immediately, there exists a positive number \varepsilon such that
\begin{equation} 0 < \varepsilon < \min\limits_{i\in N_1^{(1)}}p_i. \end{equation} | (2.12) |
Now, we construct a diagonal matrix D = diag(d_1, d_2, \cdots, d_n) with
\begin{equation} d_i = \left\{\begin{array}{ll}\; \; \; \; \; \; \; \; 1\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_1^{(1)} , & \\ \varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_2^{(1)} , \\ \varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\; , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i \in N_2 , \end{array}\right. \end{equation} | (2.13) |
where \varepsilon is given by (2.12). It is easy to find that all the elements in the diagonal matrix D are positive. Next, we will prove that MD is strictly diagonally dominant.
Case 1. For each i\in N_1^{(1)} , it is not difficult to find that |\left(MD\right)_{ii}| = |m_{ii}| . By (2.11) and (2.13), we have
\begin{aligned} r_i(M D) = & \sum\limits_{j \in N_1^{(1)} \backslash\{i\}} d_j\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}} d_j\left|m_{i j}\right|+\sum\limits_{j \in N_2} d_j\left|m_{i j}\right| \\ = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{|m_{j j}|}\right)| m_{i j}|+\sum\limits_{j \in N_2}\left(\varepsilon+\frac{r_j(M)}{|m_{j j}|}\right)| m_{i j}| \\ = & \sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right) \\ < &\left|m_{i i}\right| = \left|(M D)_{i i}\right| . \end{aligned} |
Case 2. For each i\in N_2^{(1)} , we obtain
\begin{equation} \left|(M D)_{i i}\right| = |m_{i i}|\left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right) = \varepsilon| m_{i i}| +r_i^{\prime}(M). \end{equation} | (2.14) |
From (2.3), (2.13), and (2.14), we derive that
\begin{aligned} r_i(M D) & = \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right) \\ &\leq r_i^{\prime}(M)+\varepsilon\left(\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right) \\ & < r_i^{\prime}(M)+\varepsilon\left|m_{i i}\right| = \left|(M D)_{i i}\right|.\end{aligned} |
Here, the last (strict) inequality follows from (2.6), since \sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| = F_i^{\prime}(M) < \left|m_{i i}\right| , and the first inequality holds because \left|m_{j j}\right| > r_j^{\prime}(M) for any j\in N_2^{(1)} , so that
\begin{aligned} r_i^{\prime}(M) & = \sum\limits_{j \in N_1 \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} \\ & = \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} \\ & \geq \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}. \end{aligned} |
Case 3. For each i\in N_2 , we have
\begin{align} r_i(M) & = \sum\limits_{j \in N_1 \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \\ & = \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \\ & \geq \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}. \end{align} | (2.15) |
Meanwhile, for each i\in N_2 , it is easy to get
\begin{equation} \left|m_{i i}\right| > r_i(M) \geq \sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right|, \end{equation} | (2.16) |
and
\begin{equation} \left|(M D)_{i i}\right| = \left|m_{i i}\right|\left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right) = \varepsilon\left|m_{i i}\right|+r_i(M). \end{equation} | (2.17) |
From (2.13), (2.15), and (2.16), it can be deduced that
\begin{aligned} r_i(M D) = &\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right| \\ = & \sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right|\right)\\ < &r_i(M)+\varepsilon\left|m_{i i}\right| = \left|(M D)_{i i}\right|. \end{aligned} |
So, \left|(M D)_{i i}\right| > r_i(MD) for i\in N . Thus, MD is an SDD matrix. The proof is completed.
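Theorem 1 is constructive, so the scaling matrix D can be computed directly. A sketch follows (reusing the hypothetical helpers sdd1_plus_sets and is_sdd from the earlier sketches; the default choice \varepsilon = \frac{1}{2}\min_i p_i is just one admissible value in (2.8)):

```python
# Sketch of the scaling matrix D of Theorem 1; helper names are ours.
import numpy as np

def scaling_D(M, eps=None):
    """Diagonal entries d_i of D in (2.7); eps defaults to half of min p_i in (2.9)."""
    A = np.abs(np.asarray(M, dtype=float))
    dd = np.diag(A)
    rr, rp, N11, N21, N2 = sdd1_plus_sets(M)
    if eps is None:
        p = [(dd[i]
              - sum(A[i, j] for j in N11 if j != i)
              - sum(A[i, j] * rp[j] / dd[j] for j in N21)
              - sum(A[i, j] * rr[j] / dd[j] for j in N2))
             / (sum(A[i, j] for j in N21) + sum(A[i, j] for j in N2))
             for i in N11]                   # p_i > 0 by Proposition 1
        eps = 0.5 * min(p) if p else 0.5     # any eps > 0 works if N_1^(1) is empty
    d = np.ones(len(A))
    for i in N21:
        d[i] = eps + rp[i] / dd[i]
    for i in N2:
        d[i] = eps + rr[i] / dd[i]
    return d, eps

d, eps = scaling_D(M1)        # M1 from Example 1
print(is_sdd(M1 * d))         # M1 * d scales column j by d_j, i.e., forms MD; expected: True
```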
It is well known that a matrix M is a nonsingular H -matrix if there exists a positive diagonal matrix D such that MD is an SDD matrix (see [1,19]). Therefore, from Theorem 1, every SDD_1^{+} matrix is a nonsingular H -matrix.
Corollary 1. Let M = (m_{ij})\in C^{n\times n} be an SDD_1^{+} matrix. Then, M is also an H -matrix. If, in addition, M has positive diagonal entries, then det(M) > 0 .
Proof. We see from Theorem 1 that there is a positive diagonal matrix D such that MD is an SDD matrix (cf. (M35) of Theorem 2.3 of Chapter 6 of [1]). Thus, M is a nonsingular H -matrix. Since the diagonal entries of M and D are positive, MD has positive diagonal entries. From the fact that MD is an SDD matrix, it is well known that 0 < det(MD) = det(M)det(D) , which means det(M) > 0 .
In this section, we consider two infinity norm bounds for the inverse of SDD_1^{+} matrices. Before that, some notation is defined:
\begin{align} M_i = & |m_{i i}|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}| m_{i j}|-\sum\limits_{j \in N_2^{(1)}}| m_{i j}| \frac{r_j^{\prime}(M)}{|m_{j j}|} \\ & -\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}-\varepsilon\left(\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right), \quad i \in N_1^{(1)}, \end{align} | (3.1) |
\begin{align} N_i = & r_i^{\prime}(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}\\ &+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left|m_{i j}\right|\right), \quad i \in N_2^{(1)}, \end{align} | (3.2) |
\begin{align} Z_i = & r_i(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{ij}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}\\ &+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2\backslash\{i\}}\left|m_{i j}\right|\right), \quad i \in N_2. \end{align} | (3.3) |
Next, let us review an important result proposed by Varah (1975).
Theorem 2. [20] If M = (m_{ij})\in C^{n\times n} is an SDD matrix, then
\begin{equation} \left\|M^{-1}\right\|_{\infty} \leq \frac{1}{\min \limits_{i \in N}\left\{\left|m_{i i}\right|-r_i(M)\right\}}. \end{equation} | (3.4) |
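In code, Varah's bound is a one-liner; the sketch below reuses the hypothetical helper r from the first sketch.

```python
# Sketch of Varah's bound (3.4); valid when M is an SDD matrix.
import numpy as np

def varah_bound(M):
    """1 / min_i (|m_ii| - r_i(M))."""
    return 1.0 / np.min(np.abs(np.diag(M)) - r(M))
```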
Theorem 2 bounds the infinity norm of the inverse of an SDD matrix. Combining it with the scaling matrix D = diag(d_1, d_2, \cdots, d_n) of Theorem 1 yields the following Theorem 3.
Theorem 3. Let M = (m_{ij})\in C^{n\times n} be an SDD_1^{+} matrix. Then,
\begin{equation} \left\|M^{-1}\right\|_{\infty} \leq \frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max\limits _{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{\min\limits _{i \in N_1^{(1)}} M_i, \min\limits _{i \in N_2^{(1)}} N_i, \min\limits _{i \in N_2} Z_i\right\}} , \end{equation} | (3.5) |
where \varepsilon is given by (2.8) and (2.9), and M_i, \; N_i, \; and \; Z_i are defined in (3.1)–(3.3), respectively.
Proof. By Theorem 1, there exists a positive diagonal matrix D such that MD is an SDD matrix, where D is defined as (2.13). Hence, we have the following result:
\begin{align} \left\|M^{-1}\right\|_{\infty} = \left\|D\left(D^{-1} M^{-1}\right)\right\|_{\infty} & = \left\|D(M D)^{-1}\right\|_{\infty}\leq\|D\|_{\infty}\left\|(M D)^{-1}\right\|_{\infty} \end{align} | (3.6) |
and
\|D\|_{\infty} = \max\limits _{1 \leq i \leq n} d_i = \max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max\limits _{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}, |
where \varepsilon is given by (2.8). Since MD is an SDD matrix, by Theorem 2 we have
\left\|(MD)^{-1}\right\|_{\infty} \leq \frac{1}{\min \limits_{i \in N}\left\{\left|\left(MD\right)_{i i}\right|-{r_i}\left(MD\right)\right\}}. |
We now evaluate \left|\left(MD\right)_{i i}\right|-{r_i}\left(MD\right) in three cases. For i\in N_1^{(1)} , we get
\begin{equation*} \begin{array}{l}\begin{aligned} &\left|(M D)_{i i}\right|-r_i(M D) \\ = &\left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right| \\ = &\left|m_{i i}\right|-\sum\limits_{j \in N_1^{{(1)}} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}-\varepsilon\left(\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right)\\ = &M_i. \end{aligned} \end{array} \end{equation*} |
For i\in N_2^{(1)} , we have
\begin{array}{l}\begin{aligned} &\left|(M D)_{i i}\right|-r_i(M D)\\ = &\left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right)\left|m_{i i}\right| -\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right| \\ = &r_i^{\prime}(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left|m_{i j}\right|\right)\\ = &N_{i} . \end{aligned} \end{array} |
For i\in N_2 , we obtain
\begin{array}{l}\begin{aligned} &\left|(M D)_{i i}\right|-r_i(M D)\\ = &\left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|-\sum\limits_{j \in N_2 \backslash\{i\}}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right| \\ = &r_i(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right|\right) \\ = &Z_i. \end{aligned} \end{array} |
Hence, according to (3.6) we have
\begin{equation*} \left\|M^{-1}\right\|_{\infty} \leq \frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max\limits _{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{\min\limits _{i \in N_1^{(1)}} M_i, \min\limits _{i \in N_2^{(1)}} N_i, \min\limits _{i \in N_2} Z_i\right\}} . \end{equation*} |
The proof is completed.
It is noted that the upper bound in Theorem 3 depends on the choice of the parameter \varepsilon within its interval. Next, another upper bound for \left\|M^{-1}\right\|_{\infty} is given, which depends only on the elements of the matrix.
Theorem 4. Let M = (m_{ij})\in C^{n\times n} be an SDD_1^{+} matrix. Then,
\begin{align} \left\|M^{-1}\right\|_{\infty} \leq& \max \left\{ \frac{1}{\min\limits _{i \in N_1^{(1)}}\left\{\left|m_{i i}\right|-F_i(M)\right\}}, \right. \left.\frac{1}{\min \limits_{i \in N_2^{(1)}}\left\{\left|m_{i i}\right|-F_i^{\prime}(M)\right\}}, \frac{1}{\min\limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum\limits_{j \neq i}\left|m_{i j}\right|\right\} } \right\}, \end{align} | (3.7) |
where F_i(M) and F_i^{\prime}(M) are shown in (2.6).
Proof. By the well-known fact (see [21,22]) that
\begin{equation} \left\|M^{-1}\right\|_{\infty}^{-1} = \inf\limits_{x \neq 0} \frac{\|M x\|_{\infty}}{\|x\|_{\infty}} = \min\limits_{\|x\|_{\infty} = 1}\|M x\|_{\infty} = \|M x\|_{\infty} = \max\limits_{i \in N}\left|(M x)_i\right|, \end{equation} | (3.8) |
for some x = [x_1, x_2, \cdots, x_n]^T , we have
\begin{equation} \left\|M^{-1}\right\|_{\infty}^{-1} \geq\left|(M x)_i\right|. \end{equation} | (3.9) |
Assume that there is a unique k \in N such that \|x\|_{\infty} = 1 = |x_k| . Then
\begin{aligned} m_{k k} x_k = &(M x)_k-\sum\limits_{j \neq k} m_{k j} x_j\\ = &(M x)_k-\sum\limits_{j \in N_1^{(1)} \backslash\{k\}} m_{k j} x_j-\sum\limits_{j \in N_2^{(1)} \backslash\{k\}} m_{k j} x_j-\sum\limits_{j \in N_2 \backslash\{k\}} m_{k j} x_j.\end{aligned} |
When k\in N_1^{(1)} , let \left|x_j\right| = \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|} ( j\in N_2^{(1)} ), and \left|x_j\right| = \frac{r_j(M)}{\left|m_{j j}\right|} ( j \in N_2 ). Then we have
\begin{array}{l}\begin{aligned} \left|m_{k k}\right| = &\left|m_{k k} x_k\right| = \left|(M x)_k-\sum\limits_{j \in N_1^{(1)} \backslash\{k\}} m_{k j} x_j-\sum\limits_{j \in N_2^{(1)}} m_{k j} x_j-\sum\limits_{j \in N_2} m_{k j} x_j\right| \\ \leq&\left|(M x)_k\right|+\left|\sum\limits_{j \in N_1^{(1)} \backslash\{k\}} m_{k j} x_j\right|+\left|\sum\limits_{j \in N_2^{(1)}} m_{k j} x_j\right|+\left|\sum\limits_{j \in N_2} m_{k j} x_j\right| \\ \leq&\left|(M x)_k\right|+\sum\limits_{j \in N_1^{(1)} \backslash\{k\}}\left|m_{k j}\right|\left|x_j\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{k j}\right|\left|x_j\right|+\sum\limits_{j \in N_2}\left|m_{k j}\right|\left|x_j\right| \\ \leq&\left|(M x)_k\right|+\sum\limits_{j \in N_1^{(1)} \backslash\{k\}}\left|m_{k j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{k j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{k j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} \\ \leq&\left\|M^{-1}\right\|_{\infty}^{-1}+\sum\limits_{j \in N_1^{(1)} \backslash\{k\}}\left|m_{k j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{k j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{k j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} \\ = &\left\|M^{-1}\right\|_{\infty}^{-1}+F_k(M) .\end{aligned} \end{array} |
This implies that
\left\|M^{-1}\right\|_{\infty} \leq \frac{1}{\left|m_{k k}\right|-F_k(M)} \leq \frac{1}{\min\limits _{i \in N_1^{(1)}}\left\{\left|m_{i i}\right|-F_i(M)\right\}}. |
For k\in N_2^{(1)} , let \sum\limits_{j \in N_1^{(1)}}\left|m_{k j}\right| = 0 . It follows that
\begin{array}{l}\begin{aligned} \left|m_{k k}\right| \leq &\left|(M x)_k\right|+\sum\limits_{j \in N_1^{(1)}}\left|m_{k j}\right|\left|x_j\right|+\sum\limits_{j \in N_2^{(1)}\backslash\{k\}}\left|m_{k j}\right|\left|x_j\right|+\sum\limits_{j \in N_2}\left|m_{k j}\right|\left|x_j\right| \\ \leq&\left\|M^{-1}\right\|_{\infty}^{-1}+0+\sum\limits_{j \in N_2^{(1)}\backslash\{k\}}\left|m_{k j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{k j}\right| \frac{r_j(M)}{\left|m_{j j}\right|} \\ = &\left\|M^{-1}\right\|_{\infty}^{-1}+F_k^{\prime}(M) .\end{aligned} \end{array} |
Hence, we obtain that
\left\|M^{-1}\right\|_{\infty} \leq \frac{1}{\left|m_{k k}\right|-F_k^{\prime}(M)} \leq \frac{1}{\min \limits_{i \in N_2^{(1)}}\left\{\left|m_{i i}\right|-F_i^{\prime}(M)\right\}} . |
For k\in N_2 , we get
0 < {\min \limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum \limits_{j \neq i}\left|m_{i j}\right|\right\}}\leq\left|m_{kk}\right|-\sum \limits_{j \neq k}\left|m_{k j}\right|, |
and
\begin{aligned} 0 & < \min _{i \in N_2}\left(\left|m_{i i}\right|-\sum\limits_{j \neq i}\left|m_{i j}\right|\right)\left|x_k\right| \leq\left|m_{k k}\right|\left|x_k\right|-\sum\limits_{j \neq k}\left|m_{k j}\right|\left|x_j\right| \\ & \leq\left|m_{k k}\right|\left|x_k\right|-\left|\sum\limits_{j \neq k} m_{k j} x_j\right| \leq\left|\sum\limits_{k \in N_2, j \in N} m_{k j} x_j\right| \leq \max _{i \in N_2}\left|\sum\limits_{j \in N} m_{i j} x_j\right| \\ & \leq\left\|M^{-1}\right\|_{\infty}^{-1} , \end{aligned} |
which implies that
\left\|M^{-1}\right\|_{\infty} \leq \frac{1}{\left|m_{k k}\right|-\sum\limits_{j \neq k}\left|m_{k j}\right|} \leq \frac{1}{\min \limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum \limits_{j \neq i}\left|m_{i j}\right|\right\}}. |
To sum up, we obtain that
\begin{equation*} \begin{aligned} \left\|M^{-1}\right\|_{\infty} \leq& \max \left\{ \frac{1}{\min\limits _{i \in N_1^{(1)}}\left\{\left|m_{i i}\right|-F_i(M)\right\}}, \right.\left.\frac{1}{\min \limits_{i \in N_2^{(1)}}\left\{\left|m_{i i}\right|-F_i^{\prime}(M)\right\}}, \frac{1}{\min\limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum\limits_{j \neq i}\left|m_{i j}\right|\right\} } \right\}. \end{aligned} \end{equation*} |
The proof is completed.
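Both bounds are straightforward to evaluate numerically. The sketch below (helper names are ours; it reuses sdd1_plus_sets, scaling_D, and varah_bound from the earlier sketches) computes (3.5) as \|D\|_{\infty} times Varah's bound for MD , and (3.7) directly from (2.6); on the matrix M_7 of Example 7 below it reproduces the values 8.1633 (at \varepsilon = 0.1225 ) and 4.

```python
# Sketches of the bounds (3.5) and (3.7).
import numpy as np

def bound_thm3(M, eps):
    """(3.5): ||D||_inf * Varah's bound for MD, with D from (2.7)."""
    d, _ = scaling_D(M, eps)
    return np.max(d) * varah_bound(M * d)

def bound_thm4(M):
    """(3.7): the largest of the three reciprocals, with F_i and F_i' from (2.6)."""
    A = np.abs(np.asarray(M, dtype=float))
    dd = np.diag(A)
    rr, rp, N11, N21, N2 = sdd1_plus_sets(M)
    vals = []
    if N11:
        vals.append(1 / min(dd[i]
                            - sum(A[i, j] for j in N11 if j != i)
                            - sum(A[i, j] * rp[j] / dd[j] for j in N21)
                            - sum(A[i, j] * rr[j] / dd[j] for j in N2)
                            for i in N11))
    if N21:
        vals.append(1 / min(dd[i]
                            - sum(A[i, j] for j in N21 if j != i)
                            - sum(A[i, j] for j in N2)
                            for i in N21))
    if N2:
        vals.append(1 / min(dd[i] - rr[i] for i in N2))
    return max(vals)

M7 = np.array([[ 2, -1,  0,  0,  0,  0],
               [-1,  2, -1,  0,  0,  0],
               [ 0, -1,  2, -1,  0,  0],
               [ 0,  0, -1,  2, -1,  0],
               [ 0,  0,  0, -1,  2, -1],
               [ 0,  0,  0,  0, -1,  2]], dtype=float)
print(bound_thm3(M7, 0.1225), bound_thm4(M7))  # expected: 8.1633, 4.0
```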
Next, some numerical examples are given to illustrate the superiority of our results.
Example 7. Consider the following matrix:
M_7 = \left( {\begin{array}{*{20}{c}} 2 &-1&{0}&0&0&0\\ {-1}&2 &-1&{0}&0&0\\ 0&-1&2 &-1&{0}&0\\ 0&0&-1&2 &-1&0\\ 0&0&0&-1&2 &-1\\ 0&0&0&0&-1 &2 \end{array}} \right). |
It is easy to verify that M_7 is an SDD_1^{+} matrix. However, M_7 is neither an SDD matrix, a GSDD_1 matrix, an SDD_1 matrix, nor a CKV -type matrix. By the bound in Theorem 3, we have
\min p_i = 0.25, \; \varepsilon\in \left(0, 0.25\right). |
When \varepsilon = 0.1225 , we get
\left\|M_7^{-1}\right\|_{\infty} \leq 8.1633 . |
The interval of admissible parameter values in Theorem 3 is nonempty, and the effect of the choice of \varepsilon can be illustrated through examples. For Example 7, the behavior of the bound and its optimal value can be seen from Figure 2: over the plotted range of \varepsilon the bound lies in (8.1633, 100), and the optimal value for Example 7 is 8.1633.
However, according to Theorem 4, we obtain
\left\|M_7^{-1}\right\|_{\infty} \leq\max\left\{ 4, 1, 1 \right\} = 4. |
Through this example, it can be seen that the bound of Theorem 4 is sharper than that of Theorem 3 in some cases.
Example 8. Consider the following matrix:
M_8 = \left( {\begin{array}{*{20}{c}} b_1 & c & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a & b_2 & c & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & a & b_3 & c & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & b_4 & c & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & a & b_5 & c & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & a & b_6 & c & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & a & b_7 & c & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & a & b_8 & c & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & a & b_9 & c \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a & b_{10} \end{array}} \right). |
Here, a = -2 , c = 2.99 , b_1 = 6.3304 , b_2 = 6.0833 , b_3 = 5.8412 , b_4 = 5.6065 , b_5 = 5.3814 , b_6 = 5.1684 , b_7 = 4.9695 , b_8 = 4.7866 , b_9 = 4.6217 , and b_{10} = 4.4763 . It is easy to verify that M_8 is an SDD_1^{+} matrix, but it is neither an SDD matrix, a GSDD_1 matrix, an SDD_1 matrix, nor a CKV -type matrix. By the bound in Theorem 3, we have
\min p_i = 0.1298, \; \varepsilon\in \left(0, 0.1298\right). |
When \varepsilon = 0.01 , we have
\left\|M_8^{-1}\right\|_{\infty} \leq \frac{\max \{1, 1.0002, 0.9755\}}{\min \{0.5981, 0.0163, 0.1765\}} = 61.3620 . |
If \varepsilon = 0.1 , we have
\left\|M_8^{-1}\right\|_{\infty} \leq \frac{\max \{1, 1.0902, 1.0655\}}{\min \{0.1490, 0.1632, 0.1925\}} = 7.3168 . |
Taking \varepsilon = 0.11 , it is easy to calculate
\left\|M_8^{-1}\right\|_{\infty} \leq \frac{\max \{1, 1.1002, 1.0755\}}{\min \{0.0991, 0.1795, 0.1943\}} = 11.1019 . |
By the bound in Theorem 4, we have
\left\|M_8^{-1}\right\|_{\infty} \leq\max\left\{ 1.5433, 0.6129, 5.6054 \right\} = 5.6054. |
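The figures above can be reproduced with the earlier sketches (helper names are ours):

```python
# Reproducing Example 8; reuses bound_thm3 and bound_thm4 from the sketch above.
import numpy as np

a, c = -2.0, 2.99
b = [6.3304, 6.0833, 5.8412, 5.6065, 5.3814, 5.1684, 4.9695, 4.7866, 4.6217, 4.4763]
M8 = np.diag(b) + np.diag([a] * 9, -1) + np.diag([c] * 9, 1)
for eps in (0.01, 0.1, 0.11):
    print(bound_thm3(M8, eps))   # expected: 61.3620, 7.3168, 11.1019
print(bound_thm4(M8))            # expected: 5.6054
```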
Example 9. Consider the following matrix:
M_9 = \left( {\begin{array}{*{20}{c}} 11.3 & -1.2 & -1.1 & 4.7 \\ -1.2 & 14.2 & 9.1 & -4 \\ 3.2 & -1.1 & 14.3 & 0.3 \\ 4.6 & 7.6 & 3.2 & 11.3 \\ \end{array}} \right). |
It is easy to verify that the matrix M_9 is both a GSDD_1 matrix and an SDD_1^{+} matrix. Treating M_9 as a GSDD_1 matrix, it can be calculated according to Lemma 1 that
\varepsilon\in \left(1.0484, 1.1265\right). |
According to Figure 3, if we take \varepsilon = 1.0964 , we can obtain an optimal bound, namely
\left\|M_9^{-1}\right\|_{\infty} \leq 6.1806. |
Treating M_9 as an SDD_1^{+} matrix, it can be calculated according to Theorem 3 that
\varepsilon\in \left(0, 0.2153\right). |
According to Figure 3, if we take \varepsilon = 0.1707 , we can obtain an optimal bound, namely
\left\|M_9^{-1}\right\|_{\infty} \leq 1.5021. |
However, according to Theorem 4, we get
\left\|M_9^{-1}\right\|_{\infty} \leq\max\left\{ 0.3016, 0.2564, 0.2326 \right\} = 0.3016. |
Example 10. Consider the following matrix:
M_{10} = \left( {\begin{array}{*{20}{c}} 7 &3&{1}&1\\ {1}&7&3&4\\ 2&2&9&3\\ 3&1&3&7 \end{array}} \right). |
It is easy to verify that the matrix M_{10} is both a CKV -type matrix and an SDD_1^{+} matrix. Treating M_{10} as a CKV -type matrix, it can be calculated according to Theorem 21 in [9] that
\left\|M_{10}{ }^{-1}\right\|_{\infty} \leq 11 . |
Treating M_{10} as an SDD_1^{+} matrix, take \varepsilon = 0.0914 in Theorem 3. We obtain an optimal bound, namely
\left\|M_{10}{ }^{-1}\right\|_{\infty} \leq \frac{\max \{1, 0.8737, 0.8692\}}{\min \{0.0919, 0.0914, 3.5901\}} = 10.9409. |
Moreover, treating M_{10} as an SDD_1^{+} matrix, it can be calculated according to Theorem 4 that
\left\|M_{10}^{-1}\right\|_{\infty} \leq \max \{1.2149, 1, 1\} = 1.2149 . |
From Examples 9 and 10, it is easy to see that the bounds in Theorems 3 and 4 of this paper are better than the available results in some cases.
A P -matrix is a matrix whose principal minors are all positive [19]; such matrices are widely used in optimization problems in economics, engineering, and other fields. In fact, a linear complementarity problem has a unique solution for every right-hand side if and only if the associated matrix is a P -matrix, so P -matrices have attracted extensive attention, see [23,24,25]. As is well known, the linear complementarity problem for a matrix M , denoted by LCP(M, q) , is to find a vector x\in R^{n} such that
\begin{equation} Mx+q\geq 0, \ \ \ \ \ \ (Mx+q)^{T}x = 0, \ \ \ \ \ \ x\geq 0, \end{equation} | (4.1) |
or to prove that no such vector x exists, where M\in R^{n\times n} and q\in R^{n} . One of the essential problems in LCP(M, q) is to estimate
\max\limits_{d\in [0, 1]^{n}} \|(I-D+DM)^{-1}\|_{\infty}, |
where D = diag(d_{i}) , d = (d_{1}, d_{2}, \cdots, d_{n}) , 0\leq d_{i}\leq 1 , i = 1, 2, \cdots, n . It is well known that when M is a P -matrix, LCP(M, q) has a unique solution for every q\in R^{n} .
In [2], Chen et al. gave the following error bound for LCP(M, q) ,
\begin{equation} \|x-x^{*}\|_{\infty}\leq \max\limits_{d\in [0, 1]^{n}} \| (I-D+DM)^{-1}\|_{\infty} \| r(x)\|_{\infty}, \ \ \ \ \ \forall x\in R^n, \end{equation} | (4.2) |
where x^{*} is the solution of LCP(M, q) , r(x) = \min\{x, Mx+q\} , and the min operator denotes the componentwise minimum of two vectors. However, for P -matrices of large order without a specific structure, it is very difficult to calculate \max\limits_{d\in [0, 1]^{n}} \| (I-D+DM)^{-1}\|_{\infty} in this error bound. Nevertheless, the problem is greatly alleviated when the matrix in question has a specific structure [7,16,26,27,28].
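For concreteness, the natural residual r(x) in (4.2) is componentwise and cheap to evaluate; a minimal sketch (the function name is ours):

```python
# Sketch of the natural residual r(x) = min(x, Mx + q) appearing in (4.2).
import numpy as np

def natural_residual(M, q, x):
    """Componentwise minimum of x and Mx + q; it vanishes exactly at a solution of LCP(M, q)."""
    return np.minimum(x, M @ x + q)
```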
It is well known that a nonsingular H -matrix with positive diagonal entries is a P -matrix. In [29], for a nonsingular H -matrix M with positive diagonal entries and a diagonal matrix D such that MD is an SDD matrix, the authors proposed a method for bounding the error of the linear complementarity problem for M . We recall it below.
Theorem 5. [29] Assume that M = (m_{ij})\in R^{n\times n} is an H -matrix with positive diagonal entries. Let D = diag(d_i) , d_i > 0 , for all i \in N = \left\{1, \ldots, n\right\} , be a diagonal matrix such that MD is strictly diagonally dominant by rows. For any i \in N = \left\{1, \ldots, n\right\} , let \beta_i : = m_{ii}d_i-\sum\limits_{j\neq i}|m_{ij}|d_j . Then,
\begin{equation} \max\limits_{d \in[0, 1]^n}\left\|(I-D+D M)^{-1}\right\|_{\infty} \leq \max \left\{\frac{\max _i\left\{d_i\right\}}{\min _i\left\{\beta_i\right\}}, \frac{\max _i\left\{d_i\right\}}{\min _i\left\{d_i\right\}}\right\} . \end{equation} | (4.3) |
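Theorem 5 translates directly into code once a suitable D is available. A sketch (the function name is ours; it assumes m_{ii} > 0 and d_i > 0 ):

```python
# Sketch of the error bound (4.3) of Theorem 5.
import numpy as np

def lcp_bound_thm5(M, d):
    """max( max_i d_i / min_i beta_i, max_i d_i / min_i d_i ) with beta_i as in Theorem 5."""
    M = np.asarray(M, dtype=float)
    d = np.asarray(d, dtype=float)
    absM = np.abs(M)
    beta = np.diag(M) * d - ((absM * d).sum(axis=1) - np.diag(absM) * d)
    return max(d.max() / beta.min(), d.max() / d.min())
```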
Next, the error bound of the linear complementarity problem of SDD_1^{+} matrices is given by using the positive diagonal matrix D in Theorem 1.
Theorem 6. Suppose that M = (m_{ij})\in R^{n\times n}\; (n\geq 2) is an SDD_1^{+} matrix with positive diagonal entries, and for any i\in N_{1}^{(1)} , \sum\limits_{j \in N_2^{(1)} }\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\neq 0 . Then,
\begin{align} &\max _{d \in[0, 1]^n}\left\|(I-D+D M)^{-1}\right\|_{\infty} \\ \leq &\max \left\{\frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max\limits _{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{\min\limits _{i \in N_1^{(1)}} M_i, \min\limits _{i \in N_2^{(1)}} N_i, \min\limits _{i \in N_2} Z_i\right\}}, \right. \left.\frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max \limits_{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{1, \min \limits_{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \min \limits_{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}\right\} , \end{align} | (4.4) |
where \varepsilon is given by (2.8) and (2.9), and M_i, \; N_i, \; and \; Z_i are defined in (3.1)–(3.3), respectively.
Proof. Since M is an SDD_1^{+} matrix with positive diagonal entries, by Theorem 1 there exists a positive diagonal matrix D , given by (2.13), such that MD is strictly diagonally dominant. For i\in N , we can get
\begin{aligned} \beta_i = &\left|(M D)_{i i}\right|-\sum\limits_{j \in N \backslash\{i\}}\left|(M D)_{i j}\right| \\ = &\left|(M D)_{i i}\right|-\left(\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|(M D)_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|(M D)_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left|(M D)_{i j}\right|\right) \\ = &\left|(M D)_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|(M D)_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|(M D)_{i j}\right|-\sum\limits_{j \in N_2 \backslash\{i\}}\left|(M D)_{i j}\right|. \end{aligned} |
By Theorem 5, for i\in N_1^{(1)} , we get
\begin{equation*} \begin{array}{l}\begin{aligned} \beta_i = &m_{i i} d_i-\sum\limits_{j \neq i}\left|m_{i j}\right| d_j = \left|m_{i i}\right|-\left(\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|\right) \\ = &\left|m_{i i}\right|-\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}-\varepsilon\left(\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right). \end{aligned}\end{array} \end{equation*} |
For i\in N_2^{(1)} , we have
\begin{array}{l}\begin{aligned} \beta_i = &m_{i i} d_i-\sum\limits_{j \neq i}\left|m_{i j}\right| d_j \\ = &\left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right)\left|m_{i i}\right|-\left(\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|\right) \\ = &\varepsilon\left|m_{i i}\right|+r_i^{\prime}(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\varepsilon \sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}-\varepsilon \sum\limits_{j \in N_2}\left|m_{i j}\right| \\ = &r_i^{\prime}(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|-\sum\limits_{j \in N_2}\left|m_{i j}\right|\right). \end{aligned}\end{array} |
For i\in N_2 , we have
\begin{aligned} \beta_i = &m_{i i} d_i-\sum\limits_{j \neq i}\left|m_{i j}\right| d_j\\ = &\left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\left|m_{i i}\right|-\left(\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left(\varepsilon+\frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|+\sum\limits_{j \in N_2 \backslash\{i\}}\left(\varepsilon+\frac{r_j(M)}{\left|m_{j j}\right|}\right)\left|m_{i j}\right|\right) \\ = &\varepsilon\left|m_{i i}\right|+r_i(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\varepsilon \sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}-\varepsilon \sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \\ = &r_i(M)-\sum\limits_{j \in N_1^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}+\varepsilon\left(\left|m_{i i}\right|-\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right|-\sum\limits_{j \in N_2 \backslash\{i\}}\left|m_{i j}\right|\right). \end{aligned} |
To sum up, it can be seen that
\beta_i = \left\{\begin{array}{l} M_i, \; \; i \in N_1^{(1)}, \\ N_i, \; \; \; i \in N_2^{(1)}, \\ Z_i, \; \; \; \; i \in N_2 . \end{array}\right. |
According to Theorems 1 and 5, it can be obtained that
\begin{equation*} \begin{array}{l}\begin{aligned} &\max _{d \in[0, 1]^n}\left\|(I-D+D M)^{-1}\right\|_{\infty} \\ \leq& \max \left\{\frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max\limits _{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{\min\limits _{i \in N_1^{(1)}} M_i, \min\limits _{i \in N_2^{(1)}} N_i, \min\limits _{i \in N_2} Z_i\right\}}, \right. \left.\frac{\max \left\{1, \max\limits _{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \max \limits_{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}{\min \left\{1, \min \limits_{i \in N_2^{(1)}} \left(\varepsilon+\frac{r_i^{\prime}(M)}{\left|m_{i i}\right|}\right), \min \limits_{i \in N_2} \left(\varepsilon+\frac{r_i(M)}{\left|m_{i i}\right|}\right)\right\}}\right\}.\\ \end{aligned}\end{array} \end{equation*} |
The proof is completed.
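In other words, the bound (4.4) is exactly Theorem 5 evaluated at the scaling matrix D of Theorem 1; a sketch (reusing the hypothetical scaling_D and lcp_bound_thm5 from the earlier sketches):

```python
# Sketch of the bound (4.4): Theorem 5 with the D of Theorem 1.
def lcp_bound_thm6(M, eps=None):
    d, _ = scaling_D(M, eps)     # D from (2.7); eps must lie in (0, min p_i), see (2.8)
    return lcp_bound_thm5(M, d)
```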
It is noted that the error bound of Theorem 6 depends on the choice of the parameter \varepsilon within its interval.
Lemma 3. [16] Let \gamma > 0 and \eta > 0 . Then, for any x\in[0, 1] ,
\begin{equation} \frac{1}{1-x+x \gamma} \leq \frac{1}{\min \{\gamma, 1\}}, \frac{\eta x}{1-x+x \gamma} \leq \frac{\eta}{\gamma} . \end{equation} | (4.5) |
Theorem 7. Let M = ({{m_{ij}}}) \in {R^{n \times n}} be an SDD_{1}^+ matrix with positive diagonal entries. Then, \overline {M} = (\overline {m}_{ij}) = I - D + DM is also an SDD_{1}^+ matrix, where D = diag\left({{d_i}} \right) with 0 \le {d_i} \le 1 , \forall i\in N .
Proof. Since \overline {M} = I-D+DM = (\overline {m}_{ij}) , we have
\overline{m}_{ij} = \left\{\begin{array}{cc} 1-d_{i}+d_{i}m_{ii}, & i = j, \\ d_{i}m_{ij}, &i\neq j. \end{array} \right. |
By Lemma 3, for any i\in N_1^{(1)} , we have
\begin{equation*} \begin{aligned} F_i(\overline{M}) = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|d_i m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|d_i m_{i j}\right| \frac{d_j r_j^{\prime}(M)}{1-d_j+m_{j j} d_j}+\sum\limits_{j \in N_2}\left|d_i m_{i j}\right| \frac{d_j r_j(M)}{1-d_j+m_{j j} d_j} \\ = &d_i\left(\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{d_j r_j^{\prime}(M)}{1-d_j+m_{j j} d_j}\right. \left.+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{d_j r_j(M)}{1-d_j+m_{jj} d_j}\right) \\ \leq &d_i\left(\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{\left|m_{j j}\right|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{\left|m_{j j}\right|}\right) \\ = & d_i F_i(M). \end{aligned} \end{equation*} |
In addition, d_i F_i(M) < 1-d_i+d_i\left|m_{i i}\right| = \left|\overline{m}_{i i}\right| , that is, for each i \in N_1^{(1)}(\overline{M}) \subseteq N_1^{(1)}(M), \left|\overline{m}_{i i}\right| > F_i(\overline{M}) .
For any i \in N_2^{(1)} , we have
\begin{aligned} F_i(\overline{M})& = \sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|d_i m_{i j}\right|+\sum\limits_{j \in N_2}\left|d_i m_{i j}\right| = d_i\left(\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right) = d_i F_i^{\prime}(M).\end{aligned} |
Since F_i^{\prime}(M) < \left|m_{i i}\right| and 0 \le d_i \le 1 , it follows that d_i F_i^{\prime}(M) < 1-d_i+d_i\left|m_{i i}\right| = \left|\overline{m}_{i i}\right| ; that is, \left|\overline{m}_{i i}\right| > F_i^{\prime}(\overline{M}) for each i \in N_2^{(1)}(\overline{M}) \subseteq N_2^{(1)}(M) . Therefore, \overline{M} = \left(\overline{m}_{i j}\right) = I-D+D M is an SDD_{1}^+ matrix.
Next, another upper bound on \max\limits _{d \in[0, 1]^n}\left\|(I-D+D M)^{-1}\right\|_{\infty} is given, which relies on the result in Theorem 4.
Theorem 8. Assume that M = (m_{ij})\in R^{n\times n}\; (n\geq 2) is an SDD_1^{+} matrix with positive diagonal entries, and {\overline M} = I-D+DM , D = diag\left({{d_i}} \right) with 0 \le {d_i} \le 1 . Then,
\begin{eqnarray} \max _{d \in[0, 1]^n}\left\|\overline{M}^{-1}\right\|_{\infty} \leq \max\left\{\frac{1}{\min\limits _{i \in N_1^{(1)}}\left\{\left|m_{i i}\right|-F_i(M), 1\right\}}, \frac{1}{\min\limits _{i \in N_2^{(1)}}\left\{\left|m_{i i}\right|-F_i^{\prime}(M), 1\right\}}, \frac{1}{\min \limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum\limits _{j \neq i}\left|m_{i j}\right|, 1\right\}}\right\}, \end{eqnarray} | (4.6) |
where N_2, \; N_1^{(1)}, \; N_2^{(1)}, \; F_i({M}), \; and\; F_i^{\prime}(M) are defined in (2.1), (2.2), and (2.6).
Proof. Because M is an SDD_1^{+} matrix, according to Theorem 7, {\overline M} = I-D+DM is also an SDD_1^{+} matrix, where
{\overline M} = \left(\overline{m}_{ij}\right) = \left\{\begin{array}{cc} 1-d_{i}+d_{i}m_{ii}, & i = j, \\ d_{i}m_{ij}, &i\neq j. \end{array} \right. |
By (3.7), we can obtain that
\begin{array}{l}\begin{aligned} \left\|\overline{M}^{-1}\right\|_{\infty} \leq& \max \left\{\frac{1}{\min\limits _{i \in N_1^{(1)}(\overline{M})}\left\{\left|\overline{m}_{i i}\right|-F_i(\overline{M})\right\}}, \right. \left.\frac{1}{\min \limits _{i \in N_2^{(1)}(\overline{M})}\left\{\left|\overline{m}_{i i}\right|-F_i^{\prime}(\overline{M})\right\}}, \frac{1}{\min \limits _{i \in N_2(\overline{M})}\left\{\left|\overline{m}_{i i}\right|-\sum\limits _{j \neq i}\left|\overline{m}_{i j}\right|\right\}}\right\} . \\ \end{aligned}\end{array} |
According to Theorem 7, for i \in N_1^{(1)} , let
\begin{aligned} \nabla(\overline{M}) = &\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|d_i m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|d_i m_{i j}\right| \frac{d_j r_j^{\prime}(M)}{1-d_j+d_j m_{j j}}+\sum\limits_{j \in N_2}\left|d_i m_{i j}\right| \frac{d_j r_j(M)}{1-d_j+d_j m_{j j}}, \end{aligned} |
Then we have
\begin{aligned} \frac{1}{\left|\overline{m}_{i i}\right|-F_i(\overline{M})} = &\frac{1}{1-d_i+d_i m_{i i}-\nabla(\overline{M})} \\ \leq & \frac{1}{1-d_i+d_i m_{i i}-d_i\left(\sum\limits_{j \in N_1^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right| \frac{r_j^{\prime}(M)}{|m_{j j}|}+\sum\limits_{j \in N_2}\left|m_{i j}\right| \frac{r_j(M)}{|m_{j j}|}\right)} \\ = & \frac{1}{1-d_i+d_i\left(\left|m_{i i}\right|-F_i(M)\right)} \\ \leq & \frac{1}{\min \left\{\left|m_{i i}\right|-F_i(M), 1\right\}} . \end{aligned} |
For i \in N_2^{(1)} , we get
\begin{aligned} \frac{1}{\left|\overline{m}_{i i}\right|-F_i^{\prime}(\overline{M})} & = \frac{1}{1-d_i+d_i m_{i i}-\left(\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|d_i m_{i j}\right|+\sum\limits_{j \in N_2}\left|d_i m_{i j}\right|\right)} \\ & = \frac{1}{1-d_i+d_i m_{i i}-d_i\left(\sum\limits_{j \in N_2^{(1)} \backslash\{i\}}\left|m_{i j}\right|+\sum\limits_{j \in N_2}\left|m_{i j}\right|\right)} \\ & = \frac{1}{1-d_i+d_i\left(\left|m_{i i}\right|-F_i^{\prime}(M)\right)} \\ & \leq \frac{1}{\min \left\{\left|m_{i i}\right|-F_i^{\prime}(M), 1\right\}}. \end{aligned} |
For i \in N_2 , we can obtain that
\begin{aligned} \frac{1}{\left|\overline{m}_{i i}\right|-\sum\limits_{j \neq i}\left|\overline{m}_{i j}\right|} & = \frac{1}{1-d_i+d_i m_{i i}-\sum\limits_{j \neq i}\left|d_i m_{i j}\right|} \\ & = \frac{1}{1-d_i+d_i\left(\left|m_{i i}\right|-\sum\limits_{j \neq i}\left|m_{i j}\right|\right)} \\ & \leq \frac{1}{\min \left\{\left|m_{i i}\right|-\sum\limits_{j \neq i}\left|m_{i j}\right|, 1\right\}} . \end{aligned} |
To sum up, it holds that
\begin{equation*} \begin{aligned} \max _{d \in[0, 1]^n}&\left\|\overline{M}^{-1}\right\|_{\infty} \leq \max\left\{\frac{1}{\min\limits _{i \in N_1^{(1)}}\left\{\left|m_{i i}\right|-F_i(M), 1\right\}}, \right. \nonumber\left.\frac{1}{\min\limits _{i \in N_2^{(1)}}\left\{\left|m_{i i}\right|-F_i^{\prime}(M), 1\right\}}, \frac{1}{\min \limits _{i \in N_2}\left\{\left|m_{i i}\right|-\sum\limits _{j \neq i}\left|m_{i j}\right|, 1\right\}}\right\}. \end{aligned} \end{equation*} |
The proof is completed.
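For readers who wish to experiment with Theorem 8, the following Python sketch evaluates the right-hand side of (4.6) and probes the left-hand side by sampling d . It assumes the index sets N_1^{(1)}, \; N_2^{(1)}, \; N_2 and the quantities F_i(M), \; F_i^{\prime}(M) have already been computed from (2.1), (2.2), and (2.6); they are passed in as inputs rather than re-derived here, and the sample size is an arbitrary choice.

```python
# A minimal sketch (not from the paper) for experimenting with Theorem 8.
# The 0-based index sets N1_1, N2_1, N2 and the dictionaries F, Fp holding
# F_i(M) and F'_i(M) are assumed to be precomputed from (2.1), (2.2), (2.6).
import numpy as np

def bound_4_6(M, N1_1, N2_1, N2, F, Fp):
    """Right-hand side of the bound (4.6)."""
    absM = np.abs(M)
    diag = np.diag(absM)
    r = absM.sum(axis=1) - diag                 # r_i(M): deleted row sums
    candidates = []
    if N1_1:
        candidates.append(1.0 / min(min(diag[i] - F[i] for i in N1_1), 1.0))
    if N2_1:
        candidates.append(1.0 / min(min(diag[i] - Fp[i] for i in N2_1), 1.0))
    if N2:
        candidates.append(1.0 / min(min(diag[i] - r[i] for i in N2), 1.0))
    return max(candidates)

def sampled_max_inv_norm(M, samples=2000, seed=0):
    """Monte Carlo lower estimate of max_{d in [0,1]^n} ||(I-D+DM)^{-1}||_oo."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    I = np.eye(n)
    best = 0.0
    for _ in range(samples):
        D = np.diag(rng.uniform(0.0, 1.0, n))
        best = max(best, np.linalg.norm(np.linalg.inv(I - D + D @ M), np.inf))
    return best
```

Any value returned by sampled_max_inv_norm is a lower reference point for the quantity bounded in (4.6), so it can be used to gauge how tight the computed bound is on a given matrix.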
The following examples compare the bound (4.6) in Theorem 8 with the bound (4.4) in Theorem 6 and show that (4.6) is sharper in some cases.
Example 11. Let us consider the matrix in Example 6. According to Theorem 6, by calculation, we obtain
\varepsilon\in \left( 0, 0.0386 \right). |
Taking \varepsilon = 0.01 , we obtain
\max\limits _{d \in[0, 1]^{12000}}\left\|(I-D+D M_6)^{-1}\right\|_{\infty}\leq 589.4024. |
In addition, from Theorem 8, we get
\max\limits _{d \in[0, 1]^{12000}}\left\|(I-D+D M_6)^{-1}\right\|_{\infty}\leq929.6202. |
Example 12. Let us consider the matrix in Example 9. Since M_9 is a GSDD_1 matrix, by Lemma 2 we get
\varepsilon\in \left( 1.0484, 1.1265 \right). |
From Figure 4, when \varepsilon = 1.0964 , the optimal bound can be obtained as follows:
\max\limits _{d \in[0, 1]^4}\left\|(I-D+D M_9)^{-1}\right\|_{\infty}\leq 6.1806.
Moreover, M_9 is an SDD_1^{+} matrix, and by Theorem 6, we get
\varepsilon\in \left( 0, 0.2153 \right). |
From Figure 4, when \varepsilon = 0.1794 , the optimal bound can be obtained as follows:
\max\limits _{d \in[0, 1]^4}\left\|(I-D+D M_9)^{-1}\right\|_{\infty}\leq1.9957. |
However, according to Theorem 8, we obtain that
\max\limits _{d \in[0, 1]^4}\left\|(I-D+D M_9)^{-1}\right\|_{\infty}\leq1. |
Example 13. Consider the following matrix:
M_{11} = \left( {\begin{array}{*{20}{c}} 7 &-3&{-1}&-1\\ {-1}&7&-3&-4\\ -2&-2&9&-3\\ -3&-1&-3&7 \end{array}} \right). |
Obviously, {B}^{+} = {M}_{11} and {C} = 0 . By calculation, we know that the matrix {B}^{+} is a CKV -type matrix with positive diagonal entries, and thus {M}_{11} is a CKV -type B -matrix. It is easy to verify that the matrix {M}_{11} is an SDD_1^{+} matrix. By the bound (4.4) in Theorem 6, we get
\max\limits _{d \in[0, 1]^4}\left\|\left(I-D+D M_{11}\right)^{-1}\right\|_{\infty}\leq 10.9890 \; (\varepsilon = 0.091), \qquad \varepsilon \in(0, 0.1029).
By the bound (4.6) in Theorem 8, we get
\max\limits _{d \in[0, 1]^4}\left\|\left(I-D+D M_{11}\right)^{-1}\right\|_{\infty} \leq 1.2149, |
while by Theorem 3.1 in [18], it holds that
\max\limits_{d \in[0, 1]^4}\left\|\left(I-D+D M_{11}\right)^{-1}\right\|_{\infty} \leq 147. |
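As a small illustration of how such examples can be probed numerically, the sketch below builds M_{11} , reports which rows are strictly diagonally dominant (the starting point of the SDD_1^{+} analysis), and samples d to compare against the bounds reported above. It reuses the hypothetical sampled_max_inv_norm helper sketched after Theorem 8; the output values are computed, not asserted.

```python
# Numerical probe of Example 13 (illustrative; assumes numpy and the
# sampled_max_inv_norm helper sketched after Theorem 8 is in scope).
import numpy as np

M11 = np.array([[ 7.0, -3.0, -1.0, -1.0],
                [-1.0,  7.0, -3.0, -4.0],
                [-2.0, -2.0,  9.0, -3.0],
                [-3.0, -1.0, -3.0,  7.0]])

r = np.abs(M11).sum(axis=1) - np.diag(np.abs(M11))   # deleted row sums r_i
print("strictly diagonally dominant rows:",
      [i + 1 for i in range(4) if np.diag(M11)[i] > r[i]])

# Sampled lower reference for max_{d in [0,1]^4} ||(I-D+D M11)^{-1}||_inf,
# to be compared with the bounds reported in Example 13.
print("sampled max norm:", sampled_max_inv_norm(M11))
```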
From Examples 12 and 13, it is clear that the bounds in Theorems 6 and 8 are sharper than the available results in some cases.
In this paper, a new subclass of nonsingular H -matrices, SDD_1^{+} matrices, has been introduced. Some properties of SDD_1^{+} matrices are discussed, and the relationships among SDD_1^{+} matrices, SDD_1 matrices, GSDD_1 matrices, and CKV -type matrices are analyzed through numerical examples. A scaling matrix D is constructed to transform the matrix M into a strictly diagonally dominant matrix, which establishes the nonsingularity of SDD_1^{+} matrices. Two upper bounds for the infinity norm of the inverse are derived by two different methods. On this basis, two error bounds for the linear complementarity problem are given. Numerical examples illustrate the validity of the obtained results.
Lanlan Liu: Conceptualization, Methodology, Supervision, Validation, Writing-review; Yuxue Zhu: Investigation, Writing-original draft, Conceptualization, Writing-review and editing; Feng Wang: Formal analysis, Validation, Conceptualization, Writing-review; Yuanjie Geng: Conceptualization, Validation, Editing, Visualization. All authors have read and approved the final version of the manuscript for publication.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research is supported by Guizhou Minzu University Science and Technology Projects (GZMUZK[2023]YB10), the Natural Science Research Project of Department of Education of Guizhou Province (QJJ2022015), the Talent Growth Project of Education Department of Guizhou Province (2018143), and the Research Foundation of Guizhou Minzu University (2019YB08).
The authors declare no conflicts of interest.
[1] A. Berman, R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, New York: SIAM, 1994.
[2] C. Zhang, New Advances in Research on H-matrices, Beijing: Science Press, 2017.
[3] L. Cvetković, H-matrix theory vs. eigenvalue localization, Numer. Algor., 42 (2006), 229–245. http://doi.org/10.1007/s11075-006-9029-3
[4] X. Gu, S. Wu, A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel, J. Comput. Phys., 417 (2020), 109576. http://doi.org/10.1016/j.jcp.2020.109576
[5] A. Berman, R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, New York: Academic Press, 1979.
[6] R. Cottle, J. Pang, R. Stone, The Linear Complementarity Problem, New York: SIAM, 2009.
[7] X. Chen, S. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program., 106 (2006), 513–525. http://doi.org/10.1007/s10107-005-0645-9
[8] P. Dai, J. Li, S. Zhao, Infinity norm bounds for the inverse for GSDD_1 matrices using scaling matrices, Comput. Appl. Math., 42 (2023), 121–141. http://doi.org/10.1007/s40314-022-02165-x
[9] D. Cvetković, L. Cvetković, C. Li, CKV-type matrices with applications, Linear Algebra Appl., 608 (2020), 158–184. http://doi.org/10.1016/j.laa.2020.08.028
[10] L. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255. http://doi.org/10.1007/s10958-020-04938-3
[11] L. Cvetković, V. Kostić, S. Rauški, A new subclass of H-matrices, Appl. Math. Comput., 208 (2009), 206–210. http://doi.org/10.1016/j.amc.2008.11.037
[12] L. Cvetković, V. Kostić, R. Varga, A new Geršgorin-type eigenvalue inclusion set, Electron. Trans. Numer. Anal., 18 (2004), 73–80.
[13] H. Orera, J. Peña, Infinity norm bounds for the inverse of Nekrasov matrices using scaling matrices, Appl. Math. Comput., 358 (2019), 119–127. http://doi.org/10.1016/j.amc.2019.04.027
[14] P. Dai, C. Lu, Y. Li, New error bounds for the linear complementarity problem with an SB-matrix, Numer. Algor., 64 (2013), 741–757. http://doi.org/10.1007/s11075-012-9691-6
[15] P. Dai, Y. Li, C. Lu, Error bounds for linear complementarity problems for SB-matrices, Numer. Algor., 61 (2012), 121–139. http://doi.org/10.1007/s11075-012-9580-z
[16] C. Li, Y. Li, Note on error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 57 (2016), 108–113. http://doi.org/10.1016/j.aml.2016.01.013
[17] R. Cottle, J. Pang, R. E. Stone, The Linear Complementarity Problem, San Diego: Academic Press, 1992.
[18] X. Song, L. Gao, CKV-type B-matrices and error bounds for linear complementarity problems, AIMS Math., 6 (2021), 10846–10860. http://doi.org/10.3934/math.2021630
[19] J. Peña, Diagonal dominance, Schur complements and some classes of H-matrices and P-matrices, Adv. Comput. Math., 35 (2011), 357–373. http://doi.org/10.1007/s10444-010-9160-5
[20] J. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl., 11 (1975), 3–5. http://doi.org/10.1016/0024-3795(75)90112-3
[21] R. S. Varga, Geršgorin and His Circles, Berlin: Springer-Verlag, 2004.
[22] V. Kostić, L. Cvetković, D. Cvetković, Pseudospectra localizations and their applications, Numer. Linear Algebra Appl., 23 (2016), 356–372. http://doi.org/10.1002/nla.2028
[23] J. Peña, A class of P-matrices with applications to the localization of the eigenvalues of a real matrix, SIAM J. Matrix Anal. Appl., 22 (2001), 1027–1037. http://doi.org/10.1137/s0895479800370342
[24] M. Neumann, J. Peña, O. Pryporova, Some classes of nonsingular matrices and applications, Linear Algebra Appl., 438 (2013), 1936–1945. http://doi.org/10.1016/j.laa.2011.10.041
[25] J. Peña, On an alternative to Gerschgorin circles and ovals of Cassini, Numer. Math., 95 (2003), 337–345. http://doi.org/10.1007/s00211-002-0427-8
[26] P. Dai, J. Li, Y. Li, C. Zhang, Error bounds for linear complementarity problems of QN-matrices, Calcolo, 53 (2016), 647–657. http://doi.org/10.1007/s10092-015-0167-7
[27] J. Li, G. Li, Error bounds for linear complementarity problems of S-QN matrices, Numer. Algor., 83 (2020), 935–955. http://doi.org/10.1007/s11075-019-00710-0
[28] C. Li, P. Dai, Y. Li, New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices, Numer. Algor., 74 (2016), 997–1009. http://doi.org/10.1007/s11075-016-0181-0
[29] M. García-Esnaola, J. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl., 433 (2010), 956–964. http://doi.org/10.1016/j.laa.2010.04.024