
Schur complement-based infinity norm bounds for the inverse of S-Sparse Ostrowski Brauer matrices

  • In this paper, we study the Schur complement problem of S-SOB matrices and prove that the Schur complement of S-Sparse Ostrowski-Brauer (S-SOB) matrices remains in the same class under certain conditions. Based on the Schur complement of S-SOB matrices, upper bounds for the infinity norm of the inverse of S-SOB matrices are obtained. Numerical examples are given to certify the validity of the obtained results. By using the infinity norm bound, an error bound is given for the linear complementarity problems of S-SOB matrices.

    Citation: Dizhen Ao, Yan Liu, Feng Wang, Lanlan Liu. Schur complement-based infinity norm bounds for the inverse of S-Sparse Ostrowski Brauer matrices[J]. AIMS Mathematics, 2023, 8(11): 25815-25844. doi: 10.3934/math.20231317




    In this paper, we consider the linear complementarity problem, which is to find a vector $x\in\mathbb{R}^n$ such that

    $$x^T(Ax+q)=0,\quad x\ge 0\quad\text{and}\quad Ax+q\ge 0,\tag{1.1}$$

    where $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^n$ are given. For convenience, such a problem is usually abbreviated as LCP$(A,q)$. It is related to many practical problems, such as American option pricing problems, market equilibrium problems and free boundary problems for journal bearings; see [1,2,3,4,5] and the references therein.
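    The three conditions in (1.1) can be checked directly in MATLAB; the following minimal sketch is ours, with illustrative names z (a candidate solution) and tol (a tolerance):

        % Check whether a candidate z solves LCP(A,q) up to a tolerance tol:
        % z >= 0, w = A*z + q >= 0, and z'*w = 0 (complementarity).
        w = A*z + q;
        isSol = all(z >= -tol) && all(w >= -tol) && abs(z'*w) <= tol;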

    To obtain the numerical solution of the LCP$(A,q)$ with a large and sparse system matrix, many kinds of iteration methods have been proposed and analyzed in recent decades. The projected method is a well-known iteration method, which includes the smoothing projected gradient method [6], the projected gradient method [7], the partial projected Newton method [8], the improved projected successive over-relaxation (IPSOR) method [9], and so on. For more detailed material about projected-type iteration methods, we refer readers to [10,11,12,13] and the references therein. There are other efficient iteration methods for solving the LCP, e.g., the modulus-based matrix splitting iteration methods [2], the nonstationary extrapolated modulus algorithms [14], the two-sweep modulus-based matrix splitting iteration methods [15], the general modulus-based matrix splitting method [5], the two-step modulus-based matrix splitting iteration method [16], and the accelerated modulus-based matrix splitting iteration methods [17]. The main difference between the two families is that the projected-type iteration methods construct their iterative forms directly on an equivalent fixed-point equation based on projection approaches and matrix splittings, whereas the modulus-based matrix splitting iteration methods reformulate the LCP$(A,q)$ as an implicit fixed-point equation by introducing positive diagonal parameter matrices and then construct iterative forms based on various matrix splittings, thereby avoiding projections. The most prominent difference is that the former requires a projection and the latter does not. For other iteration methods for solving complementarity problems, we refer readers to [18,19,20,21,22,23,24] and the references therein.

    In [4], Shi, Yang, and Huang presented a fixed-point (FP) method for solving a concrete LCP$(A,q)$ arising in American option pricing problems. The FP method is based on a fixed-point equation and belongs to the projected-type iteration methods. However, the numerical examples in [4] show that the number of iteration steps is very large before a suitable approximate solution is obtained. In this paper, we further discuss the projected-type iteration methods and consider a general fixed-point equation by introducing a positive diagonal parameter matrix $\Omega$. We note that [4] considered the case $\Omega=\alpha I$, where $\alpha>0$ and $I$ is the identity matrix. We prove the equivalence between the general fixed-point equation and the linear complementarity problem. Based on the new fixed-point equation, we propose the general fixed-point (GFP) method with two iteration forms. We discuss the convergence conditions and provide concrete convergence domains for the proposed method. Moreover, we discuss the optimal parameter problem and obtain an optimal parameter value.

    The rest of this paper is organized as follows. In Section 2, we introduce some notations and concepts briefly, and then provide two lemmas which are required to derive the new fixed-point method. In Section 3, we propose the general fixed-point method with convergence analysis. In Section 4, we present numerical examples to illustrate the efficiency of the proposed method. Finally, we give the concluding remark in Section 5.

    In this section, we first briefly review some notations and concepts, then provide two lemmas that will be used in Section 3.

    A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is denoted by $A\ge 0$ (or $A>0$) if $a_{ij}\ge 0$ (or $a_{ij}>0$), and the absolute value matrix of $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is denoted by $|A|=(|a_{ij}|)$. The spectral radius of a square matrix $A$ is denoted by $\rho(A)$. A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called a Z-matrix if $a_{ij}\le 0$ for $i\ne j$ and $i,j=1,2,\dots,n$; an M-matrix if $A$ is a Z-matrix with $A^{-1}\ge 0$; and a P-matrix if all of its principal minors are positive ([25]). A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called an H-matrix if its comparison matrix $\langle A\rangle$ is an M-matrix, and an $H_+$-matrix if it is an H-matrix with all positive diagonal elements, where the comparison matrix $\langle A\rangle=(\tilde a_{ij})$ is defined by $\tilde a_{ii}=|a_{ii}|$ and $\tilde a_{ij}=-|a_{ij}|$ for $i\ne j$, $i,j=1,2,\dots,n$ ([2]). For a given matrix $A\in\mathbb{R}^{n\times n}$, the splitting $A=F-G$ is called an M-splitting if $F$ is an M-matrix and $G\ge 0$ ([26]). For a given vector $x\in\mathbb{R}^n$, the symbols $x_+$ and $x_-$ denote the vectors $x_+=\max\{0,x\}$ and $x_-=\max\{0,-x\}$, respectively. For $x$, $x_+$ and $x_-$, we have

    $$x_+\ge 0,\quad x_-\ge 0,\quad x=x_+-x_-,\quad x_+^Tx_-=0.$$
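    In MATLAB, these positive and negative parts are one-liners; a toy sketch:

        x = [3; -2; 0];
        x_plus  = max(0, x);    % x_+ = (3, 0, 0)^T
        x_minus = max(0, -x);   % x_- = (0, 2, 0)^T
        % x_plus - x_minus recovers x, and x_plus'*x_minus = 0
        % because the two vectors have disjoint supports.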

    Lemma 2.1. ([27,28]) Let $A\in\mathbb{R}^{n\times n}$ be an M-matrix and $A=F-G$ be an M-splitting. Then

    $$\rho(F^{-1}G)<1.$$

    For the equation

    $$x=x_+-\Omega(Ax_++q),\tag{2.1}$$

    where $\Omega$ is a given positive diagonal matrix, we have the following conclusion.

    Lemma 2.2. The solution of the linear complementarity problem (1.1) and the solution of Eq (2.1) have the following relation:

    (ⅰ) If $x^*$ is a solution of (1.1) and $\bar x=Ax^*+q$, then $x=x^*-\Omega\bar x$ is a solution of (2.1).

    (ⅱ) If $x$ is a solution of (2.1), then $x_+$ is a solution of (1.1).

    Proof. (ⅰ) Suppose $x^*$ is a solution of (1.1); then we have

    $$x^*\ge 0,\quad Ax^*+q\ge 0,\quad (x^*)^T(Ax^*+q)=0.$$

    Therefore, denoting $Ax^*+q$ by $\bar x$, for the positive diagonal matrix $\Omega$ we have

    $$x^*\ge 0,\quad \Omega\bar x\ge 0,\quad (x^*)^T(\Omega\bar x)=0.$$

    It follows that

    $$x_+=(x^*-\Omega\bar x)_+=x^*,\quad x_-=(x^*-\Omega\bar x)_-=\Omega\bar x.$$

    Thus

    $$x=x_+-x_-=x^*-\Omega\bar x=x_+-\Omega(Ax^*+q)=x_+-\Omega(Ax_++q).$$

    That is, $x$ is a solution of (2.1).

    (ⅱ) Suppose $x$ is a solution of (2.1), i.e.,

    $$x=x_+-\Omega(Ax_++q).$$

    Noticing that $x=x_+-x_-$, we have $x_-=\Omega(Ax_++q)$, and it follows that $Ax_++q=\Omega^{-1}x_-\ge 0$. Moreover, since $\Omega^{-1}$ is a positive diagonal matrix and $x_+^Tx_-=0$,

    $$(x_+)^T(Ax_++q)=(x_+)^T(\Omega^{-1}x_-)=0.$$

    Therefore, $x_+$ is a solution of (1.1).

    From Lemma 2.2 (ⅱ), we know that the solution of (1.1) can be obtained by solving Eq (2.1).

    In this section, we first prove that the solution of equation (2.1) is unique when the system matrix is a P-matrix, then propose the general fixed-point (GFP) method for solving (1.1) based on (2.1), and discuss the convergence conditions.

    Theorem 3.1. If $A$ is a P-matrix, then for any positive diagonal matrix $\Omega$, Eq (2.1) has a unique solution.

    Proof. Since $A$ is a P-matrix, the linear complementarity problem (1.1) has a unique solution for any $q\in\mathbb{R}^n$ ([29]). Therefore, from Lemma 2.2, we know that Eq (2.1) has a solution. Suppose $y,z$ are solutions of (2.1); then

    $$y=y_+-\Omega(Ay_++q),\quad z=z_+-\Omega(Az_++q).$$

    By Lemma 2.2, $y_+$ and $z_+$ are solutions of (1.1). Since (1.1) has a unique solution, $y_+=z_+$. It then follows from the two fixed-point relations above that $y=z$.

    Based on (2.1) and Lemma 2.2 (ⅱ), we get an iterative method for solving (1.1):

    $$x^{(k+1)}=x^{(k)}_+-\Omega(Ax^{(k)}_++q)=(I-\Omega A)x^{(k)}_+-\Omega q,\quad k=0,1,2,\dots.\tag{3.1}$$

    We state the algorithm for (3.1) as follows.

    Algorithm 1 Iterative method based on (3.1)
    1: Given $x^{(0)}\in\mathbb{R}^n$, $\varepsilon>0$.
    2: for $k=0,1,2,\dots$ do
    3:   $x^{(k)}_+=\max\{0,x^{(k)}\}$
    4:   compute $\mathrm{RES}=\|\min(x^{(k)}_+,Ax^{(k)}_++q)\|$
    5:   if $\mathrm{RES}<\varepsilon$ then
    6:     $x=x^{(k)}_+$
    7:     break
    8:   else
    9:     $x^{(k+1)}=(I-\Omega A)x^{(k)}_+-\Omega q$
    10:  end if
    11: end for
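    A compact MATLAB realization of Algorithm 1 might look as follows; this is a sketch, with the function name gfp_basic and the step cap kmax chosen here for illustration:

        function [x, k, res] = gfp_basic(A, q, Omega, x0, tol, kmax)
        % Basic GFP iteration (3.1): x^{(k+1)} = (I - Omega*A)*x_+^{(k)} - Omega*q.
        M = speye(size(A,1)) - Omega*A;     % iteration matrix I - Omega*A
        x = x0;
        for k = 0:kmax
            xp  = max(0, x);                % positive part x_+^{(k)}
            res = norm(min(xp, A*xp + q));  % residual used in the stopping test
            if res < tol
                x = xp;                     % x_+^{(k)} approximates the LCP solution
                return
            end
            x = M*xp - Omega*q;             % next iterate
        end
        end

    For instance, Omega = omega*spdiags(1./diag(A), 0, n, n) realizes the choice $\Omega=\omega D^{-1}$ used later in the experiments.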


    Theorem 3.2. Assume $A$ is a P-matrix. Let $\{x^{(k)}\}_{k=1}^{+\infty}$ be the sequence generated by (3.1) and $x^*$ be the solution of (2.1). If

    $$\rho(|I-\Omega A|)<1,\tag{3.2}$$

    then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$.

    Proof. Since $x^*$ is the unique solution of (2.1), we have

    $$x^*=(I-\Omega A)x^*_+-\Omega q.$$

    Combining with Eq (3.1), we get

    $$x^{(k+1)}-x^*=(I-\Omega A)(x^{(k)}_+-x^*_+).$$

    It follows, using $|x^{(k)}_+-x^*_+|\le|x^{(k)}-x^*|$, that

    $$|x^{(k+1)}-x^*|\le|I-\Omega A|\,|x^{(k)}_+-x^*_+|\le|I-\Omega A|\,|x^{(k)}-x^*|.$$

    Therefore, if $\rho(|I-\Omega A|)<1$, then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$.

    Since $\rho(A)\le\|A\|$ for any matrix norm $\|\cdot\|$, we can get the following corollary easily from Theorem 3.2.

    Corollary 3.1. Suppose $A$ is a P-matrix. Let $\{x^{(k)}\}_{k=1}^{+\infty}$ be the sequence generated by (3.1) and $x^*$ be the solution of (2.1). If

    $$\big\||I-\Omega A|\big\|<1,\tag{3.3}$$

    then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$, where $\|\cdot\|$ is any matrix norm.

    In the following, we consider two special cases: $A$ is an $H_+$-matrix with $\Omega=\omega D^{-1}$, where $D=\mathrm{diag}(A)$; and $A$ is a symmetric positive definite matrix with $\Omega=\omega I$, where $I$ is the identity matrix.

    Theorem 3.3. Suppose $A$ is an $H_+$-matrix, $D=\mathrm{diag}(A)$ and $B=D-A$. Let $\Omega=\omega D^{-1}$ with $\omega>0$, let $\{x^{(k)}\}_{k=1}^{+\infty}$ be the sequence generated by (3.1) and let $x^*$ be the solution of (2.1). If

    $$0<\omega<\frac{2}{1+\rho(D^{-1}|B|)},$$

    then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$. Moreover, $\omega=1$ is the optimal choice.

    Proof. Since $A$ is an $H_+$-matrix, $D=\mathrm{diag}(A)$ and $A=D-B$, we have $\rho(D^{-1}|B|)<1$; see [21]. For $\Omega=\omega D^{-1}$ with $\omega>0$, we have

    $$|I-\Omega A|=|I-\omega D^{-1}(D-B)|=|1-\omega|I+\omega D^{-1}|B|=\begin{cases}(1-\omega)I+\omega D^{-1}|B|, & 0<\omega\le 1,\\ (\omega-1)I+\omega D^{-1}|B|, & \omega>1.\end{cases}$$

    It follows that

    $$\rho(|I-\Omega A|)=\begin{cases}1-(1-\rho(D^{-1}|B|))\omega, & 0<\omega\le 1,\\ (1+\rho(D^{-1}|B|))\omega-1, & \omega>1.\end{cases}\tag{3.4}$$

    It can easily be seen from (3.4) that $\rho(|I-\Omega A|)<1$ for $\omega\in(0,1]$, and that for $\omega>1$, $\rho(|I-\Omega A|)<1$ if and only if $\omega<\frac{2}{1+\rho(D^{-1}|B|)}$. Therefore, if $0<\omega<\frac{2}{1+\rho(D^{-1}|B|)}$, then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$. Moreover, it can be seen from (3.4) that $\rho(|I-\Omega A|)=\rho(D^{-1}|B|)$ is minimal when $\omega=1$. That is, $\omega=1$ is the optimal choice.
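    The identity $\rho(|I-\Omega A|)=\rho(D^{-1}|B|)$ at $\omega=1$ is easy to confirm numerically; a small sketch with our variable names (dense eig for simplicity, so only for moderate $n$), assuming an $H_+$-matrix A is already in memory:

        D = diag(diag(A));  B = D - A;
        rhoB  = max(abs(eig(full(D\abs(B)))));   % rho(D^{-1}|B|)
        omega = 1;                               % optimal value per Theorem 3.3
        R     = abs(eye(size(A,1)) - omega*(D\A));
        rho1  = max(abs(eig(full(R))));          % equals rhoB up to rounding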

    Theorem 3.4. Suppose $A$ is a symmetric positive definite matrix. Set $\Omega=\omega I$ with $\omega>0$ and denote the smallest and largest eigenvalues of $A$ by $\lambda_{\min}$ and $\lambda_{\max}$, respectively. Let $\{x^{(k)}\}_{k=1}^{+\infty}$ be the sequence generated by (3.1) and $x^*$ be the solution of (2.1). If

    $$0<\omega<\frac{2}{\lambda_{\max}},$$

    then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$.

    Proof. As in the proof of Theorem 3.2, we have

    $$x^{(k+1)}-x^*=(I-\Omega A)(x^{(k)}_+-x^*_+).$$

    Then

    $$\|x^{(k+1)}-x^*\|_2\le\|I-\Omega A\|_2\|x^{(k)}_+-x^*_+\|_2\le\|I-\Omega A\|_2\|x^{(k)}-x^*\|_2,$$

    where $\|\cdot\|_2$ is the spectral norm of a matrix. So a convergence condition of (3.1) is

    $$\|I-\Omega A\|_2<1.$$

    Since

    $$\|I-\Omega A\|_2=\|I-\omega A\|_2=\max\{|1-\omega\lambda_{\min}|,|1-\omega\lambda_{\max}|\}=\begin{cases}|1-\omega\lambda_{\min}|, & |1-\omega\lambda_{\min}|\ge|1-\omega\lambda_{\max}|,\\ |1-\omega\lambda_{\max}|, & |1-\omega\lambda_{\max}|\ge|1-\omega\lambda_{\min}|,\end{cases}$$

    we can solve

    $$(\mathrm{I})\quad\begin{cases}|1-\omega\lambda_{\min}|<1,\\ |1-\omega\lambda_{\min}|\ge|1-\omega\lambda_{\max}|,\end{cases}$$

    and

    $$(\mathrm{II})\quad\begin{cases}|1-\omega\lambda_{\max}|<1,\\ |1-\omega\lambda_{\max}|\ge|1-\omega\lambda_{\min}|,\end{cases}$$

    to obtain the convergence conditions of (3.1), that is, $0<\omega\le\frac{2}{\lambda_{\min}+\lambda_{\max}}$ and $\frac{2}{\lambda_{\min}+\lambda_{\max}}\le\omega<\frac{2}{\lambda_{\max}}$, which can be combined into

    $$0<\omega<\frac{2}{\lambda_{\max}}.$$

    Thus, the conclusion is proved.
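    In practice, the admissible range of Theorem 3.4 only needs the largest eigenvalue; a two-line sketch assuming A is symmetric positive definite:

        lmax      = eigs(A, 1, 'lm');   % largest eigenvalue of the SPD matrix A
        omega_max = 2/lmax;             % Theorem 3.4: any 0 < omega < omega_max works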

    Let $A$ be split as $A=D-L-U$, where $D$ is the diagonal part of $A$, and $-L$ and $-U$ are its strictly lower and strictly upper triangular parts, respectively. We can derive another iterative method for solving (1.1) based on (2.1) by using the Gauss-Seidel idea:

    $$x^{(k+1)}=(I-\Omega(D-U))x^{(k)}_++\Omega Lx^{(k+1)}_+-\Omega q,\quad k=0,1,2,\dots.\tag{3.5}$$

    We state the algorithm for (3.5) as follows.

    Algorithm 2 Iterative method based on (3.5)
    1: Given $x^{(0)}\in\mathbb{R}^n$, $\varepsilon>0$.
    2: for $k=0,1,2,\dots$ do
    3:   $x^{(k)}_+=\max\{0,x^{(k)}\}$
    4:   compute $\mathrm{RES}=\|\min(x^{(k)}_+,Ax^{(k)}_++q)\|$
    5:   if $\mathrm{RES}<\varepsilon$ then
    6:     $x=x^{(k)}_+$
    7:     break
    8:   else
    9:     $x^{(k+1)}_1=\big((I-\Omega(D-U))x^{(k)}_+-\Omega q\big)_1$
    10:    for $i=2,3,\dots,n$
    11:      $x^{(k+1)}_i=\big((I-\Omega(D-U))x^{(k)}_+-\Omega q\big)_i+\big(\Omega Lx^{(k+1)}_+\big)_i$
    12:    end for
    13:  end if
    14: end for
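    A MATLAB sketch of Algorithm 2 could read as follows (the function name gfp_gs and the step cap kmax are ours); the inner loop is a forward substitution, valid because $L$ is strictly lower triangular:

        function [x, k, res] = gfp_gs(A, q, Omega, x0, tol, kmax)
        % Gauss-Seidel-type GFP iteration (3.5), computed entry by entry.
        n = size(A,1);
        D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);   % A = D - L - U
        T = speye(n) - Omega*(D - U);
        x = x0;
        for k = 0:kmax
            xp  = max(0, x);
            res = norm(min(xp, A*xp + q));
            if res < tol, x = xp; return; end
            y = T*xp - Omega*q;          % part depending on x_+^{(k)} only
            xn = zeros(n,1);
            xn(1) = y(1);
            for i = 2:n                  % (Omega*L*x_+^{(k+1)})_i uses entries j < i
                xn(i) = y(i) + Omega(i,i)*(L(i,1:i-1)*max(0, xn(1:i-1)));
            end
            x = xn;
        end
        end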

    We call iterative methods (3.1) and (3.5) the general fixed-point (GFP) method. We note that in [4], Shi, Yang, and Huang introduced a fixed-point method, which is the special case of (3.1) with $\Omega=\alpha I$ ($\alpha>0$). In the following, we analyze the convergence of iterative method (3.5). In particular, we consider the convergence domain of $\Omega$ when $A$ is an $H_+$-matrix.

    Theorem 3.5. Suppose $A$ is a P-matrix. Then the sequence $\{x^{(k)}\}_{k=1}^{+\infty}$ generated by (3.5) converges to the unique solution $x^*$ of (2.1) for any initial vector $x^{(0)}\in\mathbb{R}^n$ if

    $$\rho\big((I-|\Omega L|)^{-1}|I-\Omega(D-U)|\big)<1.\tag{3.6}$$

    Proof. Since $A$ is a P-matrix, Eq (2.1) has a unique solution for any positive diagonal matrix $\Omega$. Based on (2.1) and $A=D-L-U$, we get

    $$x^*=(I-\Omega(D-U))x^*_++\Omega Lx^*_+-\Omega q.$$

    From the above formula and Eq (3.5), we have

    $$x^{(k+1)}-x^*=(I-\Omega(D-U))(x^{(k)}_+-x^*_+)+\Omega L(x^{(k+1)}_+-x^*_+).\tag{3.7}$$

    Thus

    $$|x^{(k+1)}-x^*|\le|I-\Omega(D-U)|\,|x^{(k)}_+-x^*_+|+|\Omega L|\,|x^{(k+1)}_+-x^*_+|\le|I-\Omega(D-U)|\,|x^{(k)}-x^*|+|\Omega L|\,|x^{(k+1)}-x^*|.$$

    It follows that

    $$(I-|\Omega L|)\,|x^{(k+1)}-x^*|\le|I-\Omega(D-U)|\,|x^{(k)}-x^*|.$$

    Since $|\Omega L|$ is nonnegative and strictly lower triangular, $I-|\Omega L|$ is an M-matrix, i.e., $(I-|\Omega L|)^{-1}\ge 0$, so we have

    $$|x^{(k+1)}-x^*|\le(I-|\Omega L|)^{-1}|I-\Omega(D-U)|\,|x^{(k)}-x^*|.$$

    Therefore, if $\rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|)<1$, then $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$ for any initial vector $x^{(0)}\in\mathbb{R}^n$.

    We now consider the convergence domain of $\Omega$ for iterative method (3.5) when $A$ is an $H_+$-matrix.

    Theorem 3.6. Suppose $A$ is an $H_+$-matrix and either of the following conditions holds:

    (1) $0<\Omega\le D^{-1}$;

    (2) $\Omega>D^{-1}$ and $2\Omega^{-1}-D-|B|$ is an M-matrix, where $B=L+U$.

    Then the sequence $\{x^{(k)}\}_{k=1}^{+\infty}$ generated by (3.5) converges to the unique solution of (2.1) for any initial vector $x^{(0)}\in\mathbb{R}^n$.

    Proof. Since any $H_+$-matrix is a P-matrix ([2]), Eq (2.1) has a unique solution. Consider the splitting

    $$(I-|\Omega L|)-|I-\Omega(D-U)|=(I-|I-\Omega D|)-\Omega|B|=\begin{cases}\Omega(D-|B|), & 0<\Omega\le D^{-1},\\ 2I-\Omega D-\Omega|B|, & \Omega>D^{-1},\end{cases}$$

    that is,

    $$(I-|\Omega L|)-|I-\Omega(D-U)|=\begin{cases}\Omega\langle A\rangle, & 0<\Omega\le D^{-1},\\ \Omega(2\Omega^{-1}-D-|B|), & \Omega>D^{-1}.\end{cases}$$

    (1) When $0<\Omega\le D^{-1}$, $(I-|\Omega L|)-|I-\Omega(D-U)|$ is an M-splitting of the M-matrix $\Omega\langle A\rangle$; therefore, it follows from Lemma 2.1 that $\rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|)<1$.

    (2) When $\Omega>D^{-1}$, if $2\Omega^{-1}-D-(|L|+|U|)$ is an M-matrix, then $(I-|\Omega L|)-|I-\Omega(D-U)|$ is an M-splitting of the M-matrix $\Omega(2\Omega^{-1}-D-|B|)$; therefore, $\rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|)<1$.

    Collecting (1) and (2), this theorem is established by Theorem 3.5.

    Corollary 3.2. Suppose $A$ is an $H_+$-matrix and $\Omega=\omega D^{-1}$ with $\omega>0$. Then the sequence $\{x^{(k)}\}_{k=1}^{+\infty}$ generated by (3.5) converges to the unique solution $x^*$ of (2.1) for any initial vector $x^{(0)}\in\mathbb{R}^n$ if

    $$0<\omega<\frac{2}{1+\rho(D^{-1}|B|)}.\tag{3.8}$$

    Proof. For $0<\omega\le 1$, we have $\Omega\le D^{-1}$; that is, $\Omega$ satisfies the first condition of Theorem 3.6. When $\Omega=\omega D^{-1}$, the second condition of Theorem 3.6 can be represented as

    $$\omega>1\quad\text{and}\quad\Big(\frac{2}{\omega}-1\Big)D-|B|\ \text{is an M-matrix}.$$

    That is,

    $$\omega>1\quad\text{and}\quad\frac{2}{\omega}-1>\rho(D^{-1}|B|).$$

    Therefore, if $0<\omega<\frac{2}{1+\rho(D^{-1}|B|)}$ holds, $\{x^{(k)}\}_{k=1}^{+\infty}$ converges to $x^*$.

    To end this section, we consider the LCP$(A,q)$ arising in the American option pricing problem, where $A$ is a symmetric tridiagonal M-matrix:

    $$A=\begin{pmatrix}1+2\lambda\theta & -\lambda\theta & & \\ -\lambda\theta & 1+2\lambda\theta & -\lambda\theta & \\ & \ddots & \ddots & \ddots \\ & & -\lambda\theta & 1+2\lambda\theta\end{pmatrix}\in\mathbb{R}^{n\times n}\tag{3.9}$$

    with $\lambda,\theta>0$; see for instance [4]. In the following, we derive the optimal value of the parameter $\omega$ for the case $\Omega=\omega D^{-1}$ in the sense of the 1-norm.
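    For reference, the matrix (3.9) can be assembled in one spdiags call; a sketch for given lambda, theta and n:

        mu = lambda*theta;                 % the quantity mu = lambda*theta used below
        e  = ones(n,1);
        A  = spdiags([-mu*e, (1+2*mu)*e, -mu*e], -1:1, n, n);   % tridiagonal M-matrix (3.9)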

    For iterative method (3.1), based on (3.4), we consider

    $$f_1(\omega)=\|I-\Omega A\|_1.$$

    Let $\mu=\lambda\theta$. By some computation, we get

    $$f_1(\omega)=|1-\omega|+\frac{2\mu}{1+2\mu}\omega=\begin{cases}1-\dfrac{\omega}{1+2\mu}, & 0<\omega\le 1,\\[1mm] \dfrac{(1+4\mu)\omega}{1+2\mu}-1, & \omega>1.\end{cases}$$

    It can easily be seen that

    $$\min_{\omega>0}f_1(\omega)=f_1(1)=\frac{2\mu}{1+2\mu}=\frac{2\lambda\theta}{1+2\lambda\theta}<1.\tag{3.10}$$

    That is, $\omega=1$ is an optimal parameter.

    We now consider the value of $\omega$ for iterative method (3.5). Let $\alpha=\frac{\omega}{1+2\mu}$, i.e., $\omega=\alpha(1+2\mu)$; then

    $$(I-\omega D^{-1}|L|)^{-1}=\begin{pmatrix}1 & & & \\ -\alpha\mu & 1 & & \\ & \ddots & \ddots & \\ & & -\alpha\mu & 1\end{pmatrix}^{-1}=\begin{pmatrix}1 & & & \\ \alpha\mu & 1 & & \\ (\alpha\mu)^2 & \alpha\mu & 1 & \\ \vdots & \ddots & \ddots & \ddots \\ (\alpha\mu)^{n-1} & \cdots & (\alpha\mu)^2 & \alpha\mu & 1\end{pmatrix}.$$

    Let $\nu=|1-\omega|$ and $a_i=(\alpha\mu)^i$, $i=0,1,2,\dots,n$. Then $a_ia_j=a_{i+j}$, and it is easy to get

    $$(I-\omega D^{-1}|L|)^{-1}|I-\omega D^{-1}(D-U)|=\begin{pmatrix}1 & & & \\ a_1 & 1 & & \\ a_2 & a_1 & 1 & \\ \vdots & \ddots & \ddots & \ddots \\ a_{n-1} & \cdots & a_2 & a_1 & 1\end{pmatrix}\begin{pmatrix}\nu & a_1 & & & \\ & \nu & a_1 & & \\ & & \ddots & \ddots & \\ & & & \nu & a_1 \\ & & & & \nu\end{pmatrix}=\begin{pmatrix}\nu & a_1 & & & \\ \nu a_1 & a_2+\nu & a_1 & & \\ \vdots & \vdots & \ddots & \ddots & \\ \nu a_{n-2} & a_{n-1}+\nu a_{n-3} & \cdots & a_2+\nu & a_1 \\ \nu a_{n-1} & a_n+\nu a_{n-2} & \cdots & a_3+\nu a_1 & a_2+\nu\end{pmatrix}.$$

    Based on (3.6) and (3.8), we define

    $$f_2(\omega)=\big\|(I-\omega D^{-1}|L|)^{-1}|I-\omega D^{-1}(D-U)|\big\|_1.$$

    Since $a_i>0$ and $0<\nu<1$, it can be seen that

    $$f_2(\omega)=\sum_{i=1}^{n}(\alpha\mu)^i+|1-\omega|\sum_{i=0}^{n-2}(\alpha\mu)^i.$$

    We consider a particular $\omega$, that is, $\omega=1$; then

    $$f_2(1)=\frac{\mu}{1+\mu}\left[1-\Big(\frac{\mu}{1+2\mu}\Big)^n\right].$$

    Remark. Based on the above discussions and noticing that

    $$f_2(1)=\frac{\mu}{1+\mu}\left[1-\Big(\frac{\mu}{1+2\mu}\Big)^n\right]=\frac{\lambda\theta}{1+\lambda\theta}\left[1-\Big(\frac{\lambda\theta}{1+2\lambda\theta}\Big)^n\right]<\frac{\lambda\theta}{1+\lambda\theta}<\frac{2\lambda\theta}{1+2\lambda\theta}=f_1(1),$$

    we expect that iterative method (3.5) converges faster than iterative method (3.1) when $\omega=1$.
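    This comparison is easy to check numerically; a sketch with the illustrative values mu = 1 and n = 900:

        mu = 1; n = 900;
        f1 = 2*mu/(1+2*mu);                       % f1(1), see (3.10)
        f2 = mu/(1+mu)*(1 - (mu/(1+2*mu))^n);     % f2(1)
        % Here f2 = 0.5000 < f1 = 0.6667, so (3.5) should outpace (3.1) at omega = 1.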

    In this section, we illustrate some examples. Most of the projected methods involve parameters, and it is not easy to select a proper parameter in practice; moreover, the implicit equations related to the projected methods differ, so it is difficult to compare the GFP method with other projected methods fairly. Hence we do not compare the GFP method with other projected methods except for the FP method. The modulus-based approaches for solving the LCP$(A,q)$ include many cases; we select the modulus-based SOR (MSOR) iteration method in its best cases for comparison. Besides, we use the GFP method to solve the LCP$(A,q)$ arising in the American option pricing problem ([4]). The number of iteration steps, the elapsed time and the norm of the residual vector are denoted by IT, CPU and RES, respectively. RES is defined as

    $$\mathrm{RES}(x^{(k)}_+)=\|\min(x^{(k)}_+,Ax^{(k)}_++q)\|_2,$$

    where $x^{(k)}_+$ is the $k$th approximate solution of (1.1). The iteration process stops if $\mathrm{RES}(x^{(k)}_+)<10^{-5}$ or the number of iteration steps reaches 1000.

    The system matrix $A$ in the first two examples is generated by

    $$A(\mu,\eta,\zeta)=\hat A+\mu I+\eta B+\zeta C,$$

    where $\mu$, $\eta$ and $\zeta$ are given constants, $\hat A=\mathrm{Tridiag}(I,S,I)\in\mathbb{R}^{n\times n}$ is a block-tridiagonal matrix,

    $$B=\mathrm{Tridiag}(0,0,1)\in\mathbb{R}^{n\times n}\quad\text{and}\quad S=\mathrm{tridiag}(1,4,1)\in\mathbb{R}^{m\times m}$$

    are two tridiagonal matrices, and $C=\mathrm{diag}(1,2,1,2,\dots)$ is a diagonal matrix of order $n$; here $m$ and $n$ satisfy $n=m^2$. For convenience, we set

    $$q=(1,1,1,1,\dots,1,1)^T\in\mathbb{R}^n;$$

    then the LCP$(A(\mu,\eta,\zeta),q)$ has a unique solution when $A(\mu,\eta,\zeta)$ is a P-matrix. All computations are run using MATLAB 2016 on a Dell laptop (Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz, 4.00 GB RAM).
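    The following sketch assembles $A(\mu,\eta,\zeta)$ with Kronecker products. The extraction above may have dropped minus signs; the standard choice in the LCP test literature, which we assume here, is $S=\mathrm{tridiag}(-1,4,-1)$ and $\hat A=\mathrm{Tridiag}(-I,S,-I)$:

        m = 100;  n = m^2;
        e = ones(m,1);
        S    = spdiags([-e, 4*e, -e], -1:1, m, m);      % assumed tridiag(-1,4,-1)
        T    = spdiags([e, e], [-1 1], m, m);           % block pattern of Ahat
        Ahat = kron(speye(m), S) - kron(T, speye(m));   % assumed Tridiag(-I,S,-I)
        B    = spdiags(ones(n,1), 1, n, n);             % ones on the superdiagonal
        C    = spdiags(repmat([1;2], n/2, 1), 0, n, n); % C = diag(1,2,1,2,...)
        A    = Ahat + mu*speye(n) + eta*B + zeta*C;
        q    = ones(n,1);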

    Example 4.1. In this example, we compare the GFP method with the MSOR iteration method ([2]), which is a very effective method for solving the LCP$(A,q)$. Both the GFP method and the MSOR iteration method involve a parameter matrix $\Omega$, which admits many choices, so it is not appropriate to take the same $\Omega$ for both. We set $\Omega=\omega D^{-1}$ in the GFP method and $\Omega=\omega D$ in the MSOR iteration method, respectively. For fairness, we compare the two methods in their best cases, determined by performing many experiments. Set $x^{(0)}=\mathrm{zeros}(n,1)$ and $n=10000$; the results are reported in Table 1.

    Table 1.  The comparison between the GFP method and the MSOR iteration method.

    |     | A(1,1,0) |        |        | A(1,1,0) |        |        | A(1,1,1) |        |        |
    |-----|----------|--------|--------|----------|--------|--------|----------|--------|--------|
    |     | Alg 1    | Alg 2  | MSOR   | Alg 1    | Alg 2  | MSOR   | Alg 1    | Alg 2  | MSOR   |
    | ω   | 1.1      | 1.1    | 1      | 1.1      | 1.1    | 1.2    | 1        | 1.2    | 1      |
    | α   | -        | -      | 0.8    | -        | -      | 1.1    | -        | -      | 1.1    |
    | IT  | 17       | 10     | 20     | 17       | 10     | 11     | 39       | 16     | 24     |
    | CPU | 0.0056   | 7.8000 | 0.0129 | 0.0053   | 7.6147 | 0.0078 | 0.0092   | 11.788 | 0.0124 |
    | RES | 0.5e-5   | 0.3e-5 | 0.4e-5 | 0.5e-5   | 0.5e-5 | 0.7e-5 | 0.8e-5   | 0.4e-5 | 0.5e-5 |

    |     | A(1,0,1) |        |        | A(0,1,0) |        |        | A(1,1,1) |        |        |
    |-----|----------|--------|--------|----------|--------|--------|----------|--------|--------|
    |     | Alg 1    | Alg 2  | MSOR   | Alg 1    | Alg 2  | MSOR   | Alg 1    | Alg 2  | MSOR   |
    | ω   | 1.1      | 1      | 1      | 1        | 1.1    | 1.1    | 1        | 1.1    | 1.2    |
    | α   | -        | -      | 1      | -        | -      | 1.1    | -        | -      | 1.2    |
    | IT  | 12       | 9      | 9      | 23       | 12     | 15     | 12       | 9      | 9      |
    | CPU | 0.0026   | 7.7473 | 0.0045 | 0.0049   | 8.7403 | 0.0158 | 0.0025   | 7.3931 | 0.0075 |
    | RES | 0.6e-5   | 0.6e-5 | 0.8e-5 | 0.8e-5   | 0.3e-5 | 0.4e-5 | 0.6e-5   | 0.6e-5 | 0.7e-5 |

    In Table 1, Alg 1 and Alg 2 denote Algorithm 1 and Algorithm 2, respectively. From Table 1, we can find that iterative method (3.1) is better than the MSOR iteration method in running time, and iterative method (3.5) is better than the MSOR iteration method in iteration steps. Since Algorithm 2 calculates the numerical solution entry by entry, fast matrix-vector computation techniques cannot be used, which makes the method more time-consuming. For Algorithm 1, the single-step running time is short, but the number of iteration steps is relatively large. Besides, the best parameters for the MSOR iteration method are selected through many experiments, whereas the best parameter for iterative method (3.1) is near or equal to 1, in line with Theorem 3.3.

    Example 4.2. In this example, we illustrate the convergence domains of $\omega$ and the convergence rates of iterative methods (3.1) and (3.5). We consider $A(0,0,1)$ and $A(0,1,1)$. The former is a symmetric $H_+$-matrix and the latter is a nonsymmetric $H_+$-matrix; both are P-matrices. We set $\Omega=\omega D^{-1}=\omega\,\mathrm{diag}(A(\mu,\eta,\zeta))^{-1}$. From Theorem 3.3 and Corollary 3.2, we know that $\omega\in\big(0,\frac{2}{1+\rho(D^{-1}|B|)}\big)$ is a sufficient convergence domain for both iterative methods (3.1) and (3.5). Here, we consider a larger domain and set $\omega$ to be

    $$1-\mathrm{floor}\Big(\frac{1}{\delta}\Big)\delta\;:\;\delta\;:\;\frac{2}{1+\rho(D^{-1}|B|)}+\delta,$$

    where $\delta=\frac{1}{2}\big(\frac{2}{1+\rho(D^{-1}|B|)}-1\big)$. The symbols 'floor' and ':' are an integer function and a command in the MATLAB software, respectively. Then the right boundary of the interval $\big(0,\frac{2}{1+\rho(D^{-1}|B|)}\big)$ is the penultimate one of these points. We denote $\rho(|I-\Omega A|)$ in (3.2) and $\rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|)$ in (3.6) by $\rho_1$ and $\rho_2$, respectively. Set $n=900$ and $x^{(0)}=\mathrm{zeros}(n,1)$; then we obtain Table 2 and the corresponding Figure 1.
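    In MATLAB, this grid of $\omega$ values reads as follows (rhoB stands for $\rho(D^{-1}|B|)$, computed beforehand; the variable names are ours):

        ub     = 2/(1 + rhoB);                 % right end of the sufficient domain
        delta  = 0.5*(ub - 1);                 % step size from Example 4.2
        omegas = (1 - floor(1/delta)*delta) : delta : (ub + delta);
        % The value ub itself is the penultimate grid point.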

    Table 2.  $\rho$ and IT of Algorithms 1 and 2 with $\Omega=\omega D^{-1}$.

    A(0,0,1):

    | ω   | ω1     | ω2     | ω3     | ω4     | ω5     | ω6     | ω7     | ω8     |
    |-----|--------|--------|--------|--------|--------|--------|--------|--------|
    | ρ1  | 0.9833 | 0.9621 | 0.9410 | 0.9198 | 0.8987 | 0.8776 | 0.8564 | 0.8353 |
    | ρ2  | 0.9829 | 0.9601 | 0.9358 | 0.9099 | 0.8821 | 0.8522 | 0.8198 | 0.7845 |
    | IT1 | 344    | 148    | 93     | 66     | 51     | 41     | 34     | 28     |
    | IT2 | 341    | 145    | 89     | 63     | 47     | 37     | 30     | 25     |

    | ω   | ω9     | ω10    | ω11    | ω12    | ω13=1  | ω14    | ω15    | ω16    |
    |-----|--------|--------|--------|--------|--------|--------|--------|--------|
    | ρ1  | 0.8141 | 0.7930 | 0.7719 | 0.7507 | 0.7296 | 0.8648 | 1.0000 | 1.1352 |
    | ρ2  | 0.7457 | 0.7027 | 0.6542 | 0.5984 | 0.5323 | 0.7671 | 1.0000 | 1.2358 |
    | IT1 | 24     | 21     | 18     | 16     | 14     | 12     | 16     | 22     |
    | IT2 | 20     | 17     | 14     | 12     | 9      | 8      | 10     | 13     |

    A(0,1,1):

    | ω   | ω1     | ω2     | ω3     | ω4     | ω5=1   | ω6     | ω7     | ω8     |
    |-----|--------|--------|--------|--------|--------|--------|--------|--------|
    | ρ1  | 0.8897 | 0.7744 | 0.6580 | 0.5427 | 0.4255 | 0.7128 | 1.0029 | 1.2882 |
    | ρ2  | 0.8840 | 0.7488 | 0.5957 | 0.4131 | 0.1629 | 0.5813 | 1.0035 | 1.4651 |
    | IT1 | 105    | 48     | 29     | 20     | 14     | 19     | 62     | 1000   |
    | IT2 | 101    | 44     | 25     | 16     | 9      | 12     | 19     | 35     |

    Figure 1.  $\rho$ and IT of Algorithms 1 and 2 with $\Omega=\omega D^{-1}$.

    From Table 2 and Figure 1, we can find that Theorem 3.3 and Corollary 3.2 provide convergence domains of $\omega$ for iterative methods (3.1) and (3.5), respectively, and the initial iteration vector is arbitrary when $\omega$ falls in these domains. When $\omega$ exceeds the convergence domain, the two iterative methods still converge in some cases when $x^{(0)}=\mathrm{zeros}(n,1)$. Meanwhile, we can find that iterative method (3.5) is usually faster than iterative method (3.1) in terms of IT when $\omega$ takes the same value. In addition, this example illustrates the optimal parameter conclusion in Theorem 3.3, i.e., $\omega=1$ is a good parameter for iterative method (3.1).

    Example 4.3. In this example, we apply the GFP method to solve the LCP$(A,q)$ in [4], where the matrix $A$ satisfies (3.9). We set $\Omega=\omega D^{-1}$ and $\lambda\theta=0.5,1,1.5,2$, respectively. Just as in Example 4.2, we set $\omega=1-\mathrm{floor}(\frac{1}{\delta})\delta:\delta:\frac{2}{1+\rho(D^{-1}|B|)}+\delta$ with $\delta=\frac{1}{2}(\frac{2}{1+\rho(D^{-1}|B|)}-1)$, $\rho_1=\rho(|I-\Omega A|)$ in (3.2) and $\rho_2=\rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|)$ in (3.6). Then $\omega=1$ is the fourth from the end of these values of $\omega$. We set $n=900$ and $x^{(0)}=\mathrm{randn}(n,1)$ in our experiments; the results are shown in Figure 2.

    Figure 2.  The numerical results for the cases $\lambda\theta=0.5,1,1.5,2$ and $\Omega=\omega D^{-1}$.

    From Figure 2, we can find that both iterative methods (3.1) and (3.5) perform well when $\omega=1$, i.e., the two iterative methods solve the LCP$(A,q)$ in very few iteration steps. Meanwhile, this example also verifies that iterative method (3.5) is faster than iterative method (3.1) when $\omega=1$, which is the conclusion given at the end of Section 3.

    Example 4.4. In this example, we compare the GFP method with the FP method ([4]). The system matrix is generated by

    $$A(\eta)=\hat A-4I+\eta B+C,$$

    where $C=\mathrm{diag}(1,2,\dots,n)$ and $n=900$. The initial iteration vector is $x^{(0)}=\mathrm{zeros}(n,1)$. For the FP method, the parameter matrix is $\Omega=\omega I$, and for the GFP method, the parameter matrix is $\Omega=\omega D^{-1}=\omega(\mathrm{diag}(A))^{-1}$. We consider two cases in our experiments, namely $\eta=0$ and $\eta=1$; both matrices are $H_+$-matrices. Since the convergence domains of $\omega$ are different, we cannot use the same $\omega$ values for the two methods. For the GFP method, based on Theorem 3.3 and Corollary 3.2, we know that the convergence domain is $0<\omega<\frac{2}{1+\rho(D^{-1}|B|)}$, and we set $\omega$ to be $\frac{1}{3(1+\rho(D^{-1}|B|))}:\frac{1}{3(1+\rho(D^{-1}|B|))}:\frac{2}{1+\rho(D^{-1}|B|)}$. For the FP method, there are two different situations. For $\eta=0$, the system matrix is symmetric positive definite, so based on Theorem 3.4 the convergence domain is $0<\omega<\frac{2}{\lambda_{\max}}$, and we set $\omega$ to be

    $$\frac{1}{3\lambda_{\max}}\;:\;\frac{1}{3\lambda_{\max}}\;:\;\frac{2}{\lambda_{\max}}.$$

    For $\eta=1$, by performing many experiments, we set $\omega$ to be $0.0003:0.0002:0.0013$, which includes the convergent parameter values. The numerical results are shown in Tables 3 and 4, respectively.

    Table 3.  The comparison between the GFP method and the FP method for $\eta=0$.

    | GFP: ω | ρ1     | IT1 | ρ2     | IT2 | FP: ω  | ρ      | IT   |
    |--------|--------|-----|--------|-----|--------|--------|------|
    | 0.1795 | 0.9743 | 77  | 0.9721 | 76  | 0.0004 | 0.9999 | 1000 |
    | 0.3591 | 0.9486 | 35  | 0.9390 | 34  | 0.0007 | 0.9998 | 1000 |
    | 0.5386 | 0.9228 | 21  | 0.8989 | 20  | 0.0011 | 0.9998 | 1000 |
    | 0.7181 | 0.8971 | 13  | 0.8487 | 12  | 0.0015 | 0.9997 | 1000 |
    | 0.8976 | 0.8714 | 9   | 0.7828 | 7   | 0.0019 | 0.9996 | 1000 |
    | 1.0772 | 1.0000 | 8   | 1.0000 | 6   | 0.0022 | 1.0000 | 1000 |

    Table 4.  The comparison between the GFP method and the FP method for $\eta=1$.

    | GFP: ω | ρ1     | IT1 | ρ2     | IT2 | FP: ω  | ρ      | IT   |
    |--------|--------|-----|--------|-----|--------|--------|------|
    | 0.2822 | 0.7689 | 46  | 0.7624 | 46  | 0.0003 | 0.9997 | 1000 |
    | 0.5645 | 0.5378 | 19  | 0.5084 | 18  | 0.0005 | 0.9995 | 1000 |
    | 0.8467 | 0.3066 | 10  | 0.2262 | 8   | 0.0007 | 0.9993 | 1000 |
    | 1.1289 | 0.3333 | 10  | 0.2261 | 8   | 0.0009 | 0.9991 | 1000 |
    | 1.4111 | 0.6667 | 23  | 0.6109 | 17  | 0.0011 | 0.9989 | 1000 |
    | 1.6934 | 1.0000 | 125 | 1.0000 | 40  | 0.0013 | 0.9987 | 1000 |

    From Tables 3 and 4, we can find that, since $\rho$ is very close to 1, the FP method cannot obtain an approximate solution even when the number of iteration steps reaches 1000. In contrast, for the GFP method, both (3.1) and (3.5) obtain the approximate solution in few iteration steps. Therefore, the GFP method is clearly better than the FP method.

    In this paper, based on an equivalent fixed-point equation with a parameter matrix $\Omega$, we presented the general fixed-point (GFP) method for solving the LCP$(A,q)$, which is a generalization of the fixed-point (FP) method. For this method, we discussed two iterative forms: the basic form and a converted form associated with a matrix splitting. Both iterative forms preserve the sparse structure of $A$ in the iteration process, so the sparsity of $A$ can be exploited to improve the effectiveness of the method. For the GFP method, convergence conditions are proved, and some concrete convergence domains of $\Omega$ as well as the optimal cases are presented. The iteration form of the GFP method is simple, and the convergence rate is governed by the spectral radius of the iteration matrix. The numerical experiments show that the GFP method is an effective and competitive iterative method.

    The author thanks the anonymous referees for providing many useful comments and suggestions that made this paper more readable. This work was supported by Zhaoqing Education and Development Project (No. ZQJYY2020093), the Characteristic Innovation Project of Department of Education of Guangdong Province (No. 2020KTSCX159), the Innovative Research Team Project of Zhaoqing University, the Scientific Research Ability Enhancement Program for Excellent Young Teachers of Zhaoqing University and Zhaoqing University Research Project (No. 611-612279).

    The author declares that there is no conflict of interest.



    [1] D. Carlson, T. Markham, Schur complements of diagonally dominant matrices, Czech. Math. J., 29 (1979), 246–251.
    [2] L. Cvetković, V. Kostić, M. Kovačević, T. Szulc, Further results on H-matrices and their Schur complement, Appl. Math. Comput., 198 (2008), 506–510. https://doi.org/10.1016/j.amc.2007.09.001
    [3] L. Cvetković, M. Nedović, Special H-matrices and their Schur and diagonal-Schur complements, Appl. Math. Comput., 208 (2009), 225–230. https://doi.org/10.1016/j.amc.2008.11.040
    [4] K. D. Ikramov, Invariance of the Brauer diagonal dominance in Gaussian elimination, Moscow University Computational Mathematics and Cybernetics, 2 (1989), 91–94.
    [5] B. S. Li, M. J. Tsatsomeros, Doubly diagonally dominant matrices, Linear Algebra Appl., 261 (1997), 221–235. https://doi.org/10.1016/S0024-3795(96)00406-5
    [6] C. Q. Li, Z. Y. Huang, J. X. Zhao, On Schur complements of Dashnic-Zusmanovich type matrices, Linear Multilinear A., 70 (2020), 4071–4096. https://doi.org/10.1080/03081087.2020.1863317
    [7] X. N. Song, L. Gao, On Schur complements of Cvetković-Kostić-Varga type matrices, Bull. Malays. Math. Sci. Soc., 46 (2023), 49. https://doi.org/10.1007/s40840-022-01440-8
    [8] C. R. Johnson, Inverse M-matrices, Linear Algebra Appl., 47 (1982), 195–216. https://doi.org/10.1016/0024-3795(82)90238-5
    [9] J. Z. Liu, Y. Q. Huang, F. Z. Zhang, The Schur complements of generalized doubly diagonally dominant matrices, Linear Algebra Appl., 378 (2004), 231–244. https://doi.org/10.1016/j.laa.2003.09.012
    [10] R. L. Smith, Some interlacing properties of the Schur complement of a Hermitian matrix, Linear Algebra Appl., 177 (1992), 137–144. https://doi.org/10.1016/0024-3795(92)90321-Z
    [11] L. S. Dashnic, M. S. Zusmanovich, On some regularity criteria for matrices and localization of their spectra, Zh. Vychisl. Mat. Mat. Fiz., 10 (1970), 1092–1097.
    [12] R. A. Horn, C. R. Johnson, Topics in matrix analysis, Cambridge: Cambridge University Press, 1991.
    [13] J. Z. Liu, J. C. Li, Z. H. Huang, X. Kong, Some properties of Schur complements and diagonal-Schur complements of diagonally dominant matrices, Linear Algebra Appl., 428 (2008), 1009–1030. https://doi.org/10.1016/j.laa.2007.09.008
    [14] Y. T. Li, S. P. Ouyang, S. J. Cao, R. W. Wang, On diagonal-Schur complements of block diagonally dominant matrices, Appl. Math. Comput., 216 (2010), 1383–1392. https://doi.org/10.1016/j.amc.2010.02.038
    [15] M. Nedović, L. Cvetković, The Schur complement of PH-matrices, Appl. Math. Comput., 362 (2019), 124541. https://doi.org/10.1016/j.amc.2019.06.055
    [16] V. R. Kostić, L. Cvetković, D. L. Cvetković, Pseudospectra localizations and their applications, Numer. Linear Algebr., 23 (2016), 356–372. https://doi.org/10.1002/nla.2028
    [17] C. Q. Li, L. Cvetković, Y. M. Wei, J. X. Zhao, An infinity norm bound for the inverse of Dashnic-Zusmanovich type matrices with applications, Linear Algebra Appl., 565 (2019), 99–122. https://doi.org/10.1016/j.laa.2018.12.013
    [18] J. Z. Liu, J. Zhang, Y. Liu, The Schur complement of strictly doubly diagonally dominant matrices and its application, Linear Algebra Appl., 437 (2012), 168–183. https://doi.org/10.1016/j.laa.2012.02.001
    [19] J. M. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl., 11 (1975), 3–5. https://doi.org/10.1016/0024-3795(75)90112-3
    [20] C. Q. Li, Schur complement-based infinity norm bounds for the inverse of SDD matrices, Bull. Malays. Math. Sci. Soc., 43 (2020), 3829–3845. https://doi.org/10.1007/s40840-020-00895-x
    [21] C. L. Sang, Schur complement-based infinity norm bounds for the inverse of DSDD matrices, Bull. Iran. Math. Soc., 47 (2021), 1379–1398. https://doi.org/10.1007/s41980-020-00447-w
    [22] Y. Li, Y. Wang, Schur complement-based infinity norm bounds for the inverse of GDSDD matrices, Mathematics, 10 (2022), 186–214.
    [23] L. Y. Kolotilina, A new subclass of the class of nonsingular H-matrices and related inclusion sets for eigenvalues and singular values, J. Math. Sci., 240 (2019), 813–821. https://doi.org/10.1007/s10958-019-04398-4
    [24] Y. M. Gao, X. H. Wang, Criteria for generalized diagonally dominant matrices and M-matrices, Linear Algebra Appl., 169 (1992), 257–268. https://doi.org/10.1016/0024-3795(92)90182-A
    [25] R. A. Horn, C. R. Johnson, Matrix analysis, Cambridge: Cambridge University Press, 1985.
    [26] L. Cvetković, H-matrix theory vs. eigenvalue localization, Numer. Algorithms, 42 (2006), 229–245. https://doi.org/10.1007/s11075-006-9029-3
    [27] L. Y. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255.
    [28] N. Moraca, Upper bounds for the infinity norm of the inverse of SDD and S-SDD matrices, J. Comput. Appl. Math., 206 (2007), 667–678. https://doi.org/10.1016/j.cam.2006.08.013
    [29] C. Q. Li, Y. T. Li, Note on error bounds for linear complementarity problem for B-matrix, Appl. Math. Lett., 57 (2016), 108–113. https://doi.org/10.1016/j.aml.2016.01.013
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
