Research article

A preconditioned new modulus-based matrix splitting method for solving linear complementarity problem of H+-matrices

  • For solving the linear complementarity problem (LCP), we propose a preconditioned new modulus-based matrix splitting (PNMMS) iteration method by extending the state-of-the-art new modulus-based matrix splitting (NMMS) iteration method to a more general framework with a customized preconditioner. We devise a generalized preconditioner that is associated with both H+-matrix A and vector q of the LCP. The convergence analysis is conducted under some mild conditions. In particular, we provide a comparison theorem to theoretically show the PNMMS method accelerates the convergence rate. Numerical experiments further illustrate that the PNMMS method is efficient and has better performance for solving the large and sparse LCP.

    Citation: Dongmei Yu, Yifei Yuan, Yiming Zhang. A preconditioned new modulus-based matrix splitting method for solving linear complementarity problem of H+-matrices[J]. Electronic Research Archive, 2023, 31(1): 123-146. doi: 10.3934/era.2023007




    In this paper, we are concerned with the solution of the linear complementarity problem, abbreviated as LCP, which consists in finding a pair of real vectors $z\in\mathbb{R}^n$ and $v\in\mathbb{R}^n$ satisfying the conditions

    $$z\ge 0,\qquad v:=Az+q\ge 0,\qquad z^{T}v=0, \tag{1.1}$$

    where $A\in\mathbb{R}^{n\times n}$ is a given matrix and $q\in\mathbb{R}^n$ is a given vector.

    The LCP (1.1) not only provides a unified framework for linear programming, quadratic programming and bimatrix games, but can also be used to model many practically relevant situations such as the spatial price equilibrium problem, the obstacle and free boundary problem, the market equilibrium problem, the optimal control problem, the contact mechanics problem and so forth. To solve the LCP (1.1) numerically, various iteration methods have been presented and investigated, for example, the pivot algorithms [1,2], the projected iterative methods [3,4,5], the SOR-like iteration methods [6,7], the multisplitting methods [8,9,10,11,12,13], the modulus-based matrix splitting (MMS) methods [14,15], the Newton-type iteration methods [16,17], the multigrid methods [18] and others. Notably, among these methods, the MMS iteration method, first introduced in [14] and extended in many works [19,20,21,22,23], is particularly attractive owing to its ability to obtain numerical solutions quickly and accurately. Making use of $z=\frac{1}{\gamma}(|x|+x)$ and $v=\frac{\Omega}{\gamma}(|x|-x)$, Bai [14] transformed the LCP (1.1) into an equivalent system of fixed-point equations

    $$(\Omega+A)x=(\Omega-A)|x|-\gamma q, \tag{1.2}$$

    where γ is a positive constant and ΩRn×n is a positive diagonal matrix. Based on (1.2), the MMS method was presented as follows.

    Method 1.1 (MMS method). Let $A=M-N$ be a splitting of the given matrix $A\in\mathbb{R}^{n\times n}$, and let the matrix $M+\Omega$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $x^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $x^{k+1}\in\mathbb{R}^n$ by solving the linear system

    $$(M+\Omega)x^{k+1}=Nx^k+(\Omega-A)|x^k|-\gamma q,$$

    and set

    $$z^{k+1}=\frac{1}{\gamma}\big(|x^{k+1}|+x^{k+1}\big),$$

    where γ is a positive constant.
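As an illustration, Method 1.1 can be sketched in a few lines of NumPy. The trivial splitting $M=A$, $N=0$ and the choices $\Omega=I$, $\gamma=1$ below are our own illustrative assumptions, not the paper's recommended parameters.

```python
import numpy as np

def mms(A, q, gamma=1.0, tol=1e-12, max_it=500):
    """Sketch of the MMS method with the illustrative splitting M = A, N = 0
    and Omega = I: solve (M + Omega) x^{k+1} = (Omega - A)|x^k| - gamma*q."""
    n = A.shape[0]
    Omega = np.eye(n)
    x = np.zeros(n)
    for _ in range(max_it):
        x_new = np.linalg.solve(A + Omega, (Omega - A) @ np.abs(x) - gamma * q)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    # recover the LCP iterate z from the auxiliary variable x
    return (np.abs(x) + x) / gamma
```

On convergence, the pair $(z, v=Az+q)$ should satisfy the complementarity conditions (1.1).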

    Furthermore, motivated by this method for solving the LCP (1.1), many researchers extended the MMS iteration method together with its variants to solve the nonlinear complementarity problems [24,25,26], the horizontal linear complementarity problems [27], the implicit complementarity problems [28,29,30], the second-order cone linear complementarity problems [31], the circular cone nonlinear complementarity problems [32], the semidefinite linear complementarity problems [33] and so forth.

    Recently, from a different perspective, Wu and Li [34] designed a new equivalent form by directly exploiting the inequality system of the LCP (1.1) and reformulated the LCP (1.1) as a fixed-point equation without employing any variable transformation, which can be described as follows:

    $$(\Omega+A)z=|(\Omega-A)z-q|-q. \tag{1.3}$$

    Based on (1.3), a new modulus-based matrix splitting (NMMS) iteration method for solving the large and sparse LCP (1.1) was presented as Method 1.2.

    Method 1.2 (NMMS method). Let $A=M-N$ be a splitting of the given matrix $A\in\mathbb{R}^{n\times n}$, and let the matrix $\Omega+M$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $z^{k+1}\in\mathbb{R}^n$ by solving the linear system

    $$(\Omega+M)z^{k+1}=Nz^k+|(A-\Omega)z^k+q|-q.$$

    The NMMS iteration method has the merits of simple calculation and high computational efficiency, so it is feasible and practical in actual implementations. Comparing Method 1.1 and Method 1.2, the MMS and NMMS iteration methods have resemblances, but neither is a special case of the other. In contrast to the MMS method, the NMMS method does not need any variable transformation in the iteration process, which provides a new general framework for solving the LCP (1.1). It was shown in [34] that the NMMS iteration method is superior to the MMS iteration method for certain LCPs. Furthermore, by ingeniously utilizing the matrix splitting technique and properly choosing the involved relaxation parameters, the NMMS iteration method can induce some new modulus-based relaxation methods, such as the new modulus-based Jacobi (NMJ) method, the new modulus-based Gauss-Seidel (NMGS) method, the new modulus-based successive overrelaxation (NMSOR) method and the new modulus-based accelerated overrelaxation (NMAOR) method. In this paper, we mainly focus on the NMMS iteration method for solving the LCP (1.1).
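The NMMS iteration of Method 1.2 can likewise be sketched directly; here the iterate $z^k$ is itself the candidate solution, with no variable transformation. The splitting $M=A$, $N=0$ and $\Omega=\mathrm{diag}(A)$ are illustrative assumptions of this sketch.

```python
import numpy as np

def nmms(A, q, tol=1e-12, max_it=500):
    """Sketch of the NMMS method with the illustrative splitting M = A, N = 0
    and Omega = diag(A): solve (Omega + M) z^{k+1} = |(A - Omega) z^k + q| - q."""
    Omega = np.diag(np.diag(A))
    z = np.zeros(A.shape[0])
    for _ in range(max_it):
        z_new = np.linalg.solve(Omega + A, np.abs((A - Omega) @ z + q) - q)
        if np.linalg.norm(z_new - z) <= tol:
            return z_new
        z = z_new
    return z
```

Note that, unlike the MMS sketch, the returned vector needs no post-processing.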

    Preconditioning is a classical acceleration technique for fixed-point iterations and can substantially improve the convergence rate. In order to accelerate the convergence of the MMS iteration method for solving the LCP (1.1), some preconditioned solvers have been developed. For example, Li and Zheng [35] and Zheng and Luo [36] respectively developed a preconditioned MMS iteration method and a preconditioned two-step MMS iteration method by utilizing a variable transformation; the preconditioners were both chosen as

    $$\bar P=\begin{pmatrix}1&&&&\\ t&1&&&\\ &t&1&&\\ &&\ddots&\ddots&\\ &&&t&1\end{pmatrix}. \tag{1.4}$$

    Ren et al. [37] proposed the preconditioned general two-step MMS iteration method based on the two-step MMS iteration method [38] and gave its convergence analysis. Wu et al. [39] presented the preconditioned general MMS iteration method by applying a left multiplicative preconditioner to the implicit fixed-point equation, and four different preconditioners were chosen for comparison: $P_1=I+t_1C_1$ and $P_2=I+t_2(C_1+C_m)$, which are both lower-triangular matrices, $P_3=I+t_3C_1$, which is the preconditioner in (1.4), and $P_4=I+t_4(C_1+C_{-1})$, which is a Hessenberg matrix. The element $c_{kj}$ of $C_i$ is given as

    $$c_{kj}=\begin{cases}1,&\text{for }k=j+i,\\ 0,&\text{otherwise},\end{cases}$$

    and $t_s>0$, $s=1,2,3,4$. Experimental results show that the preconditioners $P_1$ and $P_2$ have better computational efficiency than $P_3$ and $P_4$ in most cases. Dai et al. [40] proposed a preconditioner $\tilde P$ defined as follows

    $$\tilde P=(\tilde p_{ij}),\qquad \tilde p_{ii}=1,\qquad \tilde p_{ik_m}=-\frac{a_{ik_m}}{a_{k_mk_m}}\ (i\ne k_m),\quad m\in\{1,2,\dots,r\}, \tag{1.5}$$

    with all other entries equal to zero, where the column indices $k_m$ satisfy $q_{k_m}<0$, and suggested a preconditioned two-step MMS iteration method for the LCP (1.1) associated with an M-matrix.

    In this paper, we draw inspiration from the work of [35] and extend Method 1.2 to a more general framework with a customized preconditioner. Different from the above mentioned preconditioners, we develop a generalized preconditioner associated with both the H+-matrix $A$ and the vector $q$ of the LCP (1.1), and devise a preconditioned new modulus-based matrix splitting (PNMMS) iteration method for solving the LCP (1.1) with an H+-matrix. Particularly, the PNMMS iteration method can yield a series of preconditioned relaxation modulus-based matrix splitting iteration methods by suitable choices of the matrix splitting. We give the convergence analysis of the PNMMS iteration method under some mild conditions and provide a comparison theorem to show theoretically that the PNMMS iteration method accelerates the convergence rate.

    The rest of this paper is arranged as follows. In section 2, we present some classical definitions and preliminary results relevant to our later developments. Section 3 establishes the PNMMS iteration method for solving the LCP (1.1) and its convergence properties are explored in detail in section 4. The comparison theorem between the NMMS and the PNMMS method is presented in section 5. Section 6 provides some numerical examples to illustrate the theoretical results. Finally, some concluding remarks are given in section 7.

    In this section, we present the notations and some auxiliary results that lay the foundation for our later claims.

    Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$. $\mathrm{tridiag}(a,b,c)$ represents a tridiagonal matrix with $a$, $b$ and $c$ as the subdiagonal, main diagonal and superdiagonal entries, respectively. We denote the spectral radius of the matrix $A$ by $\rho(A)$, and $I$ is the identity matrix of suitable dimension. $A$ is referred to as a Z-matrix if $a_{ij}\le 0$ for any $i\ne j$. If $A$ is a Z-matrix and satisfies $A^{-1}\ge 0$, then the matrix $A$ is called an M-matrix. The comparison matrix of $A$ is denoted by $\langle A\rangle=(\langle a\rangle_{ij})\in\mathbb{R}^{n\times n}$, where

    $$\langle a\rangle_{ij}=\begin{cases}|a_{ii}|,&\text{if }i=j,\\ -|a_{ij}|,&\text{if }i\ne j.\end{cases}$$

    Obviously, the comparison matrix is a Z-matrix. A is called an H-matrix if its comparison matrix is an M-matrix. If A is an H-matrix with positive diagonals, it is an H+-matrix, see [41]. An M-matrix is an H+-matrix, and an H+-matrix is an H-matrix.

    Let $A=D-L-U=D-B$, where $D$ is the diagonal part of $A$, $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, and $B=L+U$ collects the off-diagonal part. We say $A=M-N$ is a splitting if $M$ is nonsingular. The splitting $A=M-N$ is called a weak regular splitting of $A\in\mathbb{R}^{n\times n}$ if $M^{-1}\ge 0$ and $M^{-1}N\ge 0$, see [42]; an M-splitting if $M$ is an M-matrix and $N$ is nonnegative; an H-splitting if $\langle M\rangle-|N|$ is an M-matrix; and an H-compatible splitting if $\langle A\rangle=\langle M\rangle-|N|$, see [15]. Note that an M-splitting is an H-compatible splitting, and an H-compatible splitting is an H-splitting.
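The matrix classes above are easy to check numerically. The helper functions below are an illustrative sketch: the M-matrix test uses the inverse-nonnegativity characterization of a Z-matrix, which is practical only for small dense matrices.

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """Z-matrix whose inverse is entrywise nonnegative."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):          # not a Z-matrix
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:  # singular => not an M-matrix
        return False

def is_H_plus_matrix(A):
    """H-matrix (comparison matrix is an M-matrix) with positive diagonal."""
    return bool(np.all(np.diag(A) > 0)) and is_M_matrix(comparison_matrix(A))
```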

    Lemma 2.1 ([14]). Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix; then the LCP (1.1) has a unique solution for any $q\in\mathbb{R}^n$.

    Lemma 2.2 ([43]). Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix; then $|A^{-1}|\le\langle A\rangle^{-1}$.

    Lemma 2.3 ([44]). Let $A$ be a Z-matrix; then the following statements are equivalent:

    $A$ is an M-matrix;

    there exists a positive vector $x$ such that $Ax>0$;

    if $A=M-N$ is a splitting of $A$ with $M^{-1}\ge 0$ and $N\ge 0$, then $\rho(M^{-1}N)<1$.

    Lemma 2.4 ([44]). Let $A$, $B$ be two Z-matrices, $A$ be an M-matrix and $B\ge A$; then $B$ is an M-matrix.

    Lemma 2.5 ([45]). $A$ is monotone if and only if $A\in\mathbb{R}^{n\times n}$ is nonsingular with $A^{-1}\ge 0$.

    Lemma 2.6 ([46]). Suppose that $E=F-G$ and $\bar E=\bar F-\bar G$ are weak regular splittings of the monotone matrices $E$ and $\bar E$, respectively, such that $F^{-1}\le\bar F^{-1}$. If there exists a positive vector $x$ such that $0\le Ex\le\bar Ex$, then for the monotonic norm associated with $x$, $\|\bar F^{-1}\bar G\|_x\le\|F^{-1}G\|_x$. In particular, if $F^{-1}G$ has a positive Perron vector, then $\rho(\bar F^{-1}\bar G)\le\rho(F^{-1}G)$.

    Lemma 2.7 ([47]). Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an M-matrix; then there exists $\delta_0>0$ such that for any $0<\delta\le\delta_0$, $A(\delta)=(a_{ij}(\delta))\in\mathbb{R}^{n\times n}$ is a nonsingular M-matrix, where

    $$a_{ij}(\delta)=\begin{cases}a_{ij},&a_{ij}\ne 0,\\ -\delta,&a_{ij}=0.\end{cases}$$

    In this section, the PNMMS iteration method for solving the LCP (1.1) will be constructed and the new generalized preconditioner will be introduced.

    Enlightened by the idea of the preconditioner in [40], we propose a generalized preconditioner $P$ associated with both the H+-matrix $A$ and the vector $q$ of the LCP (1.1), of the following form

    $$P:=(p_{ij}),\qquad p_{ii}=1,\qquad p_{ik_m}=\frac{|a_{ik_m}|}{a_{k_mk_m}}\ge 0,\quad m\in\{1,2,\dots,r\}, \tag{3.1}$$

    where the column indices $k_m$ satisfy $q_{k_m}<0$, while all other entries of $P$ are zero.

    It is worth noting that the preconditioner (3.1) is defined under the premise that $A$ is an H+-matrix; it extends the preconditioner constructed for M-matrices in [40] and naturally includes (1.5).
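The construction (3.1) translates directly into code. The dense double loop below is an illustrative sketch (no sparsity handling); the column set is read off from the sign pattern of $q$ exactly as in (3.1).

```python
import numpy as np

def build_preconditioner(A, q):
    """Generalized preconditioner (3.1): unit diagonal, and for every index k
    with q_k < 0, column k receives the entries |a_ik| / a_kk (i != k)."""
    n = A.shape[0]
    P = np.eye(n)
    for k in np.where(q < 0)[0]:
        for i in range(n):
            if i != k:
                P[i, k] = np.abs(A[i, k]) / A[k, k]
    return P
```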

    Let $PA=\bar M-\bar N$ be a splitting of the matrix $PA$. We can rewrite the fixed-point system (1.3) as

    $$(P\Omega+\bar M)z=\bar Nz+|P(\Omega-A)z-Pq|-Pq,$$

    and then we construct the PNMMS iteration method below.

    Method 3.1 (PNMMS method). Let $P\in\mathbb{R}^{n\times n}$ be a given preconditioner, $PA=\bar M-\bar N$ be a splitting of the matrix $PA\in\mathbb{R}^{n\times n}$, and the matrix $P\Omega+\bar M$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $z^{k+1}\in\mathbb{R}^n$ by solving the linear system

    $$(P\Omega+\bar M)z^{k+1}=\bar Nz^k+|P(\Omega-A)z^k-Pq|-Pq. \tag{3.2}$$

    Method 3.1 provides a general framework of the PNMMS method for solving the LCP (1.1). Besides including the NMMS method [34] as a special case, it can generate a series of relaxation versions by suitable choices of the matrix splitting. The framework of the PNMMS method is summarized in Algorithm 1.

    Algorithm 1 The PNMMS method for the LCP (1.1).
    Step 0: Given $A$, $P$, $\Omega\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^n$, select an arbitrary initial vector $z^0\in\mathbb{R}^n$, give the tolerance $\varepsilon$ and set $k:=0$.
    Step 1: Compute $\bar A=PA$, $\bar q=Pq$, $\bar\Omega=P\Omega$.
    Step 2: Let $\bar A=\bar M-\bar N$ be a splitting of $\bar A$.
    Step 3: Solve for $z^{k+1}$:
        $$(\bar\Omega+\bar M)z^{k+1}=\bar Nz^k+|(\bar\Omega-\bar A)z^k-\bar q|-\bar q.$$
    Step 4: If $\mathrm{Res}=\|\min(Az^{k+1}+q,z^{k+1})\|_2\le\varepsilon$, stop; otherwise go to Step 5.
    Step 5: Set $k:=k+1$ and return to Step 3.
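The steps of Algorithm 1 can be sketched as follows. The Gauss-Seidel-type splitting $\bar M=\mathrm{tril}(\bar A)$ and the default choices $P=I$, $\Omega=\mathrm{diag}(A)$ are illustrative assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def pnmms(A, q, P=None, Omega=None, tol=1e-6, max_it=500):
    """Sketch of Algorithm 1 with an illustrative Gauss-Seidel-type splitting
    of PA: Mbar = lower triangle of PA, Nbar = Mbar - PA."""
    n = A.shape[0]
    P = np.eye(n) if P is None else P
    Omega = np.diag(np.diag(A)) if Omega is None else Omega
    Abar, qbar, Obar = P @ A, P @ q, P @ Omega        # Step 1
    Mbar = np.tril(Abar)                              # Step 2: Abar = Mbar - Nbar
    Nbar = Mbar - Abar
    z = np.zeros(n)
    for k in range(max_it):
        rhs = Nbar @ z + np.abs((Obar - Abar) @ z - qbar) - qbar
        z = np.linalg.solve(Obar + Mbar, rhs)         # Step 3
        res = np.linalg.norm(np.minimum(A @ z + q, z))  # Step 4
        if res <= tol:
            break
    return z, k + 1
```

With `P=None` the sketch reduces to a plain NMMS iteration.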


    Remark 3.1. Let $P\in\mathbb{R}^{n\times n}$ be a given preconditioner and $PA=\bar M-\bar N=\bar D-\bar L-\bar U$ be the splitting of the matrix $PA\in\mathbb{R}^{n\times n}$. We give the following remarks on Method 3.1.

    When $P=I$, Method 3.1 reduces to the NMMS method [34].

    When $\bar M=\bar D$, $\bar N=\bar L+\bar U$, Method 3.1 reduces to the preconditioned new modulus-based Jacobi (PNMJ) method:

    $$(P\Omega+\bar D)z^{k+1}=(\bar L+\bar U)z^k+|P(\Omega-A)z^k-Pq|-Pq.$$

    When $\bar M=\bar D-\bar L$, $\bar N=\bar U$, Method 3.1 becomes the preconditioned new modulus-based Gauss-Seidel (PNMGS) method:

    $$(P\Omega+\bar D-\bar L)z^{k+1}=\bar Uz^k+|P(\Omega-A)z^k-Pq|-Pq.$$

    When $\bar M=\frac{1}{\alpha}\bar D-\bar L$, $\bar N=\frac{1-\alpha}{\alpha}\bar D+\bar U$, Method 3.1 turns into the preconditioned new modulus-based successive overrelaxation (PNMSOR) method:

    $$(\alpha P\Omega+\bar D-\alpha\bar L)z^{k+1}=\big((1-\alpha)\bar D+\alpha\bar U\big)z^k+\alpha\big(|P(\Omega-A)z^k-Pq|-Pq\big).$$

    When $\bar M=\frac{1}{\alpha}\bar D-\frac{\beta}{\alpha}\bar L$, $\bar N=\frac{1-\alpha}{\alpha}\bar D+\frac{\alpha-\beta}{\alpha}\bar L+\bar U$, Method 3.1 reduces to the preconditioned new modulus-based accelerated overrelaxation (PNMAOR) method:

    $$(\alpha P\Omega+\bar D-\beta\bar L)z^{k+1}=\big((1-\alpha)\bar D+(\alpha-\beta)\bar L+\alpha\bar U\big)z^k+\alpha\big(|P(\Omega-A)z^k-Pq|-Pq\big).$$
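For concreteness, the PNMSOR variant above (the case $\beta=\alpha$) can be sketched as follows. The choice $\Omega=\frac{1}{\alpha}D$ mirrors the setting used later in the numerical section; at $\alpha=1$ this sketch coincides with the PNMGS method.

```python
import numpy as np

def pnmsor(A, q, P, alpha=1.0, tol=1e-6, max_it=500):
    """Sketch of the PNMSOR method: Mbar = (1/alpha)*Dbar - Lbar,
    Nbar = ((1-alpha)/alpha)*Dbar + Ubar, with Omega = (1/alpha)*diag(A)."""
    n = A.shape[0]
    Omega = np.diag(np.diag(A)) / alpha
    Abar = P @ A
    Dbar = np.diag(np.diag(Abar))
    Lbar = -np.tril(Abar, -1)      # PA = Dbar - Lbar - Ubar
    Ubar = -np.triu(Abar, 1)
    lhs = alpha * (P @ Omega) + Dbar - alpha * Lbar
    z = np.zeros(n)
    for k in range(max_it):
        rhs = ((1.0 - alpha) * Dbar + alpha * Ubar) @ z \
              + alpha * (np.abs(P @ ((Omega - A) @ z) - P @ q) - P @ q)
        z = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(np.minimum(A @ z + q, z)) <= tol:
            break
    return z, k + 1
```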

    In this section, we will analyze the convergence of Method 3.1 for the LCP (1.1) with an H+-matrix. Some mild convergence conditions are given to guarantee the convergence of Method 3.1.

    First of all, a general convergence condition for Method 3.1 is established in Theorem 4.1.

    Theorem 4.1. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an H+-matrix. Let $PA=\bar M-\bar N$ be a splitting of the matrix $PA$ and $P\Omega+\bar M$ be a nonsingular H+-matrix, where $\Omega$ is a positive diagonal matrix. Let

    $$\bar F:=\langle P\Omega+\bar M\rangle,\qquad \bar G:=|\bar N|+|P||\Omega-A|. \tag{4.1}$$

    If $\rho(\bar F^{-1}\bar G)<1$, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 3.1 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.

    Proof. Since $A$ is an H+-matrix, it follows from Lemma 2.1 that the LCP (1.1) has a unique solution $z^*$, which satisfies

    $$(P\Omega+\bar M)z^*=\bar Nz^*+|P(\Omega-A)z^*-Pq|-Pq. \tag{4.2}$$

    Subtracting (4.2) from (3.2), we have

    $$(P\Omega+\bar M)(z^{k+1}-z^*)=\bar N(z^k-z^*)+|P(\Omega-A)z^k-Pq|-|P(\Omega-A)z^*-Pq|.$$

    Since $P\Omega+\bar M$ is invertible, it holds that

    $$z^{k+1}-z^*=(P\Omega+\bar M)^{-1}\big(\bar N(z^k-z^*)+|P(\Omega-A)z^k-Pq|-|P(\Omega-A)z^*-Pq|\big). \tag{4.3}$$

    Taking absolute values on both sides of (4.3) and utilizing the triangle inequality, one obtains

    $$\begin{aligned}
    |z^{k+1}-z^*|&=\big|(P\Omega+\bar M)^{-1}\big(\bar N(z^k-z^*)+|P(\Omega-A)z^k-Pq|-|P(\Omega-A)z^*-Pq|\big)\big|\\
    &\le\big|(P\Omega+\bar M)^{-1}\big|\Big(|\bar N(z^k-z^*)|+\big||P(\Omega-A)z^k-Pq|-|P(\Omega-A)z^*-Pq|\big|\Big)\\
    &\le\big|(P\Omega+\bar M)^{-1}\big|\big(|\bar N||z^k-z^*|+|P||\Omega-A||z^k-z^*|\big)\\
    &=\big|(P\Omega+\bar M)^{-1}\big|\big(|\bar N|+|P||\Omega-A|\big)|z^k-z^*|\\
    &\le\langle P\Omega+\bar M\rangle^{-1}\big(|\bar N|+|P||\Omega-A|\big)|z^k-z^*|\\
    &=\bar F^{-1}\bar G\,|z^k-z^*|,
    \end{aligned}$$

    where the last inequality follows from Lemma 2.2. Evidently, if $\rho(\bar F^{-1}\bar G)<1$, then the sequence $\{z^k\}_{k=0}^{+\infty}$ converges to the unique solution $z^*$ of the LCP (1.1).

    In particular, if P=I, the following corollary can be obtained.

    Corollary 4.1. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, $A=M-N$ be a splitting of the matrix $A$, and the matrix $\Omega+M$ be a nonsingular H+-matrix, where $\Omega$ is a positive diagonal matrix. Let

    $$F:=\langle\Omega+M\rangle,\qquad G:=|N|+|\Omega-A|.$$

    If $\rho(F^{-1}G)<1$, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 1.2 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.

    Theorem 4.2. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an H+-matrix. Let $PA=\bar M-\bar N$ be an H-compatible splitting of the matrix $PA$ and $P\Omega+\bar M$ be a nonsingular H+-matrix. Suppose the positive diagonal matrix $\Omega$ satisfies

    $$Dx\le\Omega x\quad\text{and}\quad 2|B_P|\Omega x<\big(\langle PA\rangle+|P|\langle A\rangle\big)x, \tag{4.4}$$

    or

    $$\frac{1}{2}D_P^{-1}\big(|P||A|-\langle PA\rangle\big)x<\Omega x<Dx, \tag{4.5}$$

    where $A=D-B$, $P=D_P-B_P$ and $x$ is the column vector of all ones. Then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 3.1 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.

    Proof. Denote $\bar E:=\bar F-\bar G$, where $\bar F$ and $\bar G$ are given as in (4.1). Then it can be concluded that

    $$\begin{aligned}
    \bar E&=\langle P\Omega+\bar M\rangle-|\bar N|-|P||\Omega-A|\\
    &=(D_P\Omega+D_{\bar M})-|B_P\Omega+B_{\bar M}|-|\bar N|-|P||\Omega-A|\\
    &\ge D_P\Omega+D_{\bar M}-|B_P\Omega|-|B_{\bar M}|-|\bar N|-|P||\Omega-A|\\
    &=\langle P\rangle\Omega+\langle\bar M\rangle-|\bar N|-|P||\Omega-A|\\
    &=\langle PA\rangle+\langle P\rangle\Omega-|P||\Omega-A|\\
    &=\langle PA\rangle+\langle P\rangle\Omega-|P|\big(|\Omega-D|+|B|\big)\\
    &=\begin{cases}\langle PA\rangle+|P|\langle A\rangle-2|B_P|\Omega,&\Omega\ge D,\\ \langle PA\rangle-|P||A|+2D_P\Omega,&\Omega<D,\end{cases}
    \end{aligned}$$

    where $D_{\bar M}$ and $B_{\bar M}$ denote the diagonal and off-diagonal parts of $\bar M$, respectively.

    For the case $\Omega\ge D$, the parameter $\Omega$ satisfies (4.4), so we have

    $$\big(\langle PA\rangle+|P|\langle A\rangle-2|B_P|\Omega\big)x>0.$$

    It is obvious that $\langle PA\rangle+|P|\langle A\rangle-2|B_P|\Omega$ is a Z-matrix. According to Lemma 2.3, it is an M-matrix, and it follows from Lemma 2.4 that $\bar E$ is an M-matrix.

    For the case $\Omega<D$, the parameter $\Omega$ satisfies (4.5), so it holds that

    $$\big(\langle PA\rangle-|P||A|+2D_P\Omega\big)x>0.$$

    Analogously, $\langle PA\rangle-|P||A|+2D_P\Omega$ is an M-matrix. In light of Lemma 2.4, it follows that the Z-matrix $\bar E$ is a nonsingular M-matrix.

    As shown above, $\bar E=\bar F-\bar G$ is a splitting of the M-matrix $\bar E$. Since $P\Omega+\bar M$ is an H+-matrix, we have $\bar F^{-1}=\langle P\Omega+\bar M\rangle^{-1}\ge 0$, and $\bar F$ is an M-matrix. It is obvious that $\bar G=|\bar N|+|P||\Omega-A|\ge 0$. Then Lemma 2.3 leads to $\rho(\bar F^{-1}\bar G)<1$, and the assertion follows from Theorem 4.1. The proof is completed.

    In particular, if $P=I$, then $B_P=0$. Taking the right-hand side of (4.4) to be $+\infty$, the following corollary can be obtained. It is worth noting that Corollary 4.2 leads to a broader convergence region than Theorem 4.2 of [34].

    Corollary 4.2. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, $A=M-N$ be an H-compatible splitting of the matrix $A$, and the matrix $\Omega+M$ be a nonsingular H+-matrix. If the diagonal matrix $\Omega$ satisfies

    $$\Omega\ge D\qquad\text{or}\qquad \Omega x>|B|x,$$

    where $A=D-B$ and $x$ is the column vector of all ones, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 1.2 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.
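The sufficient condition $\rho(F^{-1}G)<1$ of Corollary 4.1 is straightforward to evaluate numerically. The short script below checks it for a small H+-matrix with a Gauss-Seidel-type splitting $M=\mathrm{tril}(A)$ and $\Omega=D$; both choices are illustrative.

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

# Quantities of Corollary 4.1 for an illustrative splitting M = tril(A), N = M - A
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
Omega = np.diag(np.diag(A))            # Omega = D, so Omega >= D holds
M = np.tril(A)
N = M - A
F = comparison_matrix(Omega + M)       # F = <Omega + M>
G = np.abs(N) + np.abs(Omega - A)      # G = |N| + |Omega - A|
rho = max(abs(np.linalg.eigvals(np.linalg.inv(F) @ G)))
```

Here `rho` is the bound on the convergence factor; a value below one certifies convergence of the NMMS iteration for this data.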

    Similarly, we establish the following convergence theorem on the PNMAOR method.

    Theorem 4.3. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an H+-matrix. Let $PA=\bar D-\bar L-\bar U=\bar D-\bar B$, $P=D_P-B_P$ and $\rho:=\rho\big(\bar D^{-1}(|\bar B|+|B_P\Omega|)\big)$. Suppose the positive diagonal matrix $\Omega$ satisfies $D_P\Omega\ge\frac{1}{2\alpha}\bar D$. Then for any initial vector, the PNMAOR iteration method is convergent if the following conditions are satisfied:

    (1) when $\frac{1}{2\alpha}\bar D\le D_P\Omega<\frac{1}{\alpha}\bar D$,

    $$0\le\beta\le\alpha,\qquad \frac{1}{2(1-\rho)}<\alpha<\frac{3}{2(1+\rho)},\qquad \rho<\frac{1}{2};$$
    $$\alpha<\beta<\frac{1}{4\rho},\qquad \frac{4\beta\rho+1}{2}<\alpha<\frac{3-4\beta\rho}{2},\qquad \rho<\frac{1}{4\beta}.$$

    (2) when $D_P\Omega\ge\frac{1}{\alpha}\bar D$,

    $$0\le\beta\le\alpha,\qquad 0<\alpha<\frac{2}{1+\rho},\qquad \rho<1;$$
    $$\alpha<\beta<\frac{1}{2\rho},\qquad 2\beta\rho<\alpha<2-2\beta\rho,\qquad \rho<\frac{1}{2\beta}.$$

    Proof. Since

    $$\bar M=\frac{1}{\alpha}\bar D-\frac{\beta}{\alpha}\bar L,\qquad \bar N=\frac{1-\alpha}{\alpha}\bar D+\frac{\alpha-\beta}{\alpha}\bar L+\bar U,$$

    where $PA=\bar M-\bar N$. By some calculations, it holds that

    $$\bar F=\langle P\Omega+\bar M\rangle=\Big\langle P\Omega+\frac{1}{\alpha}\bar D-\frac{\beta}{\alpha}\bar L\Big\rangle=D_P\Omega+\frac{1}{\alpha}\bar D-\Big|B_P\Omega-\frac{\beta}{\alpha}\bar L\Big|\ge D_P\Omega+\frac{1}{\alpha}\bar D-|B_P\Omega|-\frac{\beta}{\alpha}|\bar L|,$$

    and

    $$\begin{aligned}
    \bar G&=|\bar N|+|P||\Omega-A|\le 2|\bar N|+|P\Omega-\bar M|\\
    &=2\Big|\frac{1-\alpha}{\alpha}\bar D+\frac{\alpha-\beta}{\alpha}\bar L+\bar U\Big|+\Big|P\Omega-\frac{1}{\alpha}\bar D+\frac{\beta}{\alpha}\bar L\Big|\\
    &\le\frac{2|1-\alpha|}{\alpha}\bar D+\frac{2|\alpha-\beta|}{\alpha}|\bar L|+2|\bar U|+\Big|D_P\Omega-\frac{1}{\alpha}\bar D\Big|+\Big|B_P\Omega+\frac{\beta}{\alpha}\bar L\Big|\\
    &\le\frac{2|1-\alpha|}{\alpha}\bar D+\frac{2|\alpha-\beta|}{\alpha}|\bar L|+2|\bar U|+\Big|D_P\Omega-\frac{1}{\alpha}\bar D\Big|+|B_P\Omega|+\frac{\beta}{\alpha}|\bar L|,
    \end{aligned}$$

    where $\bar F$ and $\bar G$ are defined as in (4.1). It is obvious that $\bar F$ is an M-matrix and $\bar G\ge 0$. According to Lemma 2.3, we need to prove that $\bar E=\bar F-\bar G$ is an M-matrix, where

    $$\bar E\ge D_P\Omega-\Big|D_P\Omega-\frac{1}{\alpha}\bar D\Big|+\frac{1-2|1-\alpha|}{\alpha}\bar D-\frac{2(|\alpha-\beta|+\beta)}{\alpha}|\bar L|-2|\bar U|-2|B_P\Omega|. \tag{4.6}$$

    When $\frac{1}{2\alpha}\bar D\le D_P\Omega<\frac{1}{\alpha}\bar D$ and $0\le\beta\le\alpha$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar D\big|\ge 0$ and $|\alpha-\beta|+\beta=\alpha$. Based on (4.6), we can simplify it to

    $$\bar E\ge\frac{1-2|1-\alpha|}{\alpha}\bar D-2|\bar L|-2|\bar U|-2|B_P\Omega|=\frac{1-2|1-\alpha|}{\alpha}\bar D-2\big(|\bar B|+|B_P\Omega|\big):=\hat E.$$

    From Lemma 2.4, if $\hat E$ is an M-matrix, then $\bar E$ is an M-matrix, and this holds if and only if

    $$1-2|1-\alpha|>0\quad\text{and}\quad \rho\Big(\frac{2\alpha}{1-2|1-\alpha|}\,\bar D^{-1}\big(|\bar B|+|B_P\Omega|\big)\Big)<1,$$

    and then we can obtain

    $$\frac{1}{2(1-\rho)}<\alpha<\frac{3}{2(1+\rho)},\qquad \rho<\frac{1}{2}.$$

    When $\frac{1}{2\alpha}\bar D\le D_P\Omega<\frac{1}{\alpha}\bar D$ and $\alpha<\beta$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar D\big|\ge 0$ and $|\alpha-\beta|+\beta=2\beta-\alpha<2\beta$. Based on (4.6), we can simplify it to

    $$\bar E\ge\frac{1-2|1-\alpha|}{\alpha}\bar D-\frac{4\beta}{\alpha}|\bar L|-2|\bar U|-2|B_P\Omega|\ge\frac{1-2|1-\alpha|}{\alpha}\bar D-\frac{4\beta}{\alpha}\big(|\bar B|+|B_P\Omega|\big):=\hat E.$$

    From Lemma 2.4, if $\hat E$ is an M-matrix, then $\bar E$ is an M-matrix, and this holds if and only if

    $$1-2|1-\alpha|>0\quad\text{and}\quad \rho\Big(\frac{4\beta}{1-2|1-\alpha|}\,\bar D^{-1}\big(|\bar B|+|B_P\Omega|\big)\Big)<1,$$

    and then we get

    $$\frac{4\beta\rho+1}{2}<\alpha<\frac{3-4\beta\rho}{2},\qquad \rho<\frac{1}{4\beta}.$$

    When $D_P\Omega\ge\frac{1}{\alpha}\bar D$ and $0\le\beta\le\alpha$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar D\big|=\frac{1}{\alpha}\bar D>0$. Based on (4.6), we can simplify it to

    $$\bar E\ge\frac{1}{\alpha}\bar D+\frac{1-2|1-\alpha|}{\alpha}\bar D-2|\bar L|-2|\bar U|-2|B_P\Omega|=\frac{2(1-|1-\alpha|)}{\alpha}\bar D-2\big(|\bar B|+|B_P\Omega|\big):=\hat E.$$

    From Lemma 2.4, if $\hat E$ is an M-matrix, then $\bar E$ is an M-matrix, and this holds if and only if

    $$1-|1-\alpha|>0\quad\text{and}\quad \rho\Big(\frac{\alpha}{1-|1-\alpha|}\,\bar D^{-1}\big(|\bar B|+|B_P\Omega|\big)\Big)<1,$$

    and then we have

    $$0<\alpha<\frac{2}{1+\rho},\qquad \rho<1.$$

    When $D_P\Omega\ge\frac{1}{\alpha}\bar D$ and $\alpha<\beta$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar D\big|=\frac{1}{\alpha}\bar D>0$ and $|\alpha-\beta|+\beta=2\beta-\alpha<2\beta$. Based on (4.6), we can simplify it to

    $$\bar E\ge\frac{1}{\alpha}\bar D+\frac{1-2|1-\alpha|}{\alpha}\bar D-\frac{4\beta}{\alpha}|\bar L|-2|\bar U|-2|B_P\Omega|\ge\frac{2(1-|1-\alpha|)}{\alpha}\bar D-\frac{4\beta}{\alpha}\big(|\bar B|+|B_P\Omega|\big):=\hat E.$$

    From Lemma 2.4, if $\hat E$ is an M-matrix, then $\bar E$ is an M-matrix, and this holds if and only if

    $$1-|1-\alpha|>0\quad\text{and}\quad \rho\Big(\frac{2\beta}{1-|1-\alpha|}\,\bar D^{-1}\big(|\bar B|+|B_P\Omega|\big)\Big)<1,$$

    and then we obtain

    $$2\beta\rho<\alpha<2-2\beta\rho,\qquad \rho<\frac{1}{2\beta}.$$

    Therefore, the convergence of the PNMAOR method can be proved by Lemma 2.3, thus completing the proof.

    Under the conditions of Theorem 4.3, if we take the specific choices of α and β for the PNMAOR method, the following corollaries on the PNMJ, PNMGS and PNMSOR methods can be derived.

    Corollary 4.3. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an H+-matrix. Let $PA=\bar D-\bar L-\bar U=\bar D-\bar B$, $P=D_P-B_P$ and $\rho:=\rho\big(\bar D^{-1}(|\bar B|+|B_P\Omega|)\big)$. Suppose the positive diagonal matrix $\Omega$ satisfies $D_P\Omega\ge\frac{1}{2\alpha}\bar D$. Then for any initial vector,

    if $\alpha=1$ and $\beta=0$, the PNMJ iteration method is convergent;

    if $\alpha=\beta=1$, the PNMGS iteration method is convergent;

    if $\alpha=\beta$, the PNMSOR iteration method is convergent for

    $$\begin{cases}\dfrac{1}{2(1-\rho)}<\alpha<\dfrac{3}{2(1+\rho)},\ \rho<\dfrac{1}{2},&\text{if }\dfrac{1}{2\alpha}\bar D\le D_P\Omega<\dfrac{1}{\alpha}\bar D,\\[2mm] 0<\alpha<\dfrac{2}{1+\rho},\ \rho<1,&\text{if }D_P\Omega\ge\dfrac{1}{\alpha}\bar D.\end{cases}$$

    When $P=I$, the convergence theorem of the PNMAOR method reduces to that of the NMAOR method in [34]. In most cases, the range of the parameters of the AOR method is set as $0\le\beta\le\alpha$, and then the following result can be obtained.

    Corollary 4.4. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix. Let $A=D-L-U=D-B$ satisfy $\rho:=\rho(D^{-1}|B|)<\frac{1}{2}$ and $\Omega\ge\frac{1}{2\alpha}D$. Then for any initial vector, the NMAOR iteration method is convergent for

    $$0\le\beta\le\alpha\qquad\text{and}\qquad \frac{1}{2(1-\rho)}<\alpha<\frac{3}{2(1+\rho)}.$$

    In this section, we provide a comparison theorem between the PNMMS iteration method and the NMMS iteration method, which indicates that the PNMMS method for solving the LCP (1.1) can accelerate the convergence rate of the original NMMS method.

    Let

    $$\Omega=D,\qquad M=D-L,\quad N=U\qquad\text{and}\qquad \bar M=\bar D-\bar L,\quad \bar N=\bar U,$$

    where $A=D-L-U$ and $PA=\bar D-\bar L-\bar U$. It is easy to see that this is a special case of the NMMS method and the PNMMS method; other relaxation versions can be analyzed theoretically in a similar way.

    We get the following useful lemma on the premise of the structure-preserving preconditioner $P$ in (3.1). The proof of Lemma 5.1 is similar to that of Lemma 4.1 in [48], and thus we omit the details.

    Lemma 5.1 ([48]). Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be the preconditioner given by (3.1) such that $PA$ is an H+-matrix. Assume that $D$, $L$ and $\bar D$, $\bar L$ are given by $A=D-L-U$ and $PA=\bar D-\bar L-\bar U$, respectively. Then

    $$\bar D\le D,\qquad |L|\le|\bar L|.$$

    Theorem 5.1. Let $A\in\mathbb{R}^{n\times n}$ be an H+-matrix, and let $P$ be the preconditioner in (3.1) such that $PA$ is an H+-matrix. Let $A$ and $PA$ have the splittings $A=D-L-U$ and $PA=\bar D-\bar L-\bar U$, respectively. Then for the iteration matrices $F^{-1}G$ of the NMMS method and $\bar F^{-1}\bar G$ of the PNMMS method, it holds that

    $$\rho(\bar F^{-1}\bar G)\le\rho(F^{-1}G)<1,$$

    where $F$, $G$ are given as

    $$F=2D-|L|,\qquad G=|L|+2|U| \tag{5.1}$$

    and $\bar F$, $\bar G$ are given as

    $$\bar F=2\bar D-|\bar L|,\qquad \bar G=|\bar L|+2|\bar U|. \tag{5.2}$$

    Proof. Since

    $$|\Omega-A|=|\Omega-D+L+U|\le|\Omega-D|+|L|+|U|,$$

    we apply the convergence conditions of Corollary 4.1 and Theorem 4.1 and obtain $\rho(F^{-1}G)<1$ and $\rho(\bar F^{-1}\bar G)<1$, where $F$, $G$ and $\bar F$, $\bar G$ are given by (5.1) and (5.2), respectively. Now we only need to prove $\rho(\bar F^{-1}\bar G)\le\rho(F^{-1}G)$.

    Here, we denote

    $$E:=F-G=2D-2|L|-2|U|,\qquad \bar E:=\bar F-\bar G=2\bar D-2|\bar L|-2|\bar U|. \tag{5.3}$$

    It is easy to check that $E$ and $\bar E$ in (5.3) are both M-matrices, so $E^{-1}\ge 0$ and $\bar E^{-1}\ge 0$. In this way, Lemma 2.5 shows that $E$ and $\bar E$ are monotone matrices. Since $F=2D-|L|$ and $\bar F=2\bar D-|\bar L|$ are both M-matrices, we have $F^{-1}\ge 0$ and $\bar F^{-1}\ge 0$. Evidently, $G=|L|+2|U|\ge 0$ and $\bar G=|\bar L|+2|\bar U|\ge 0$ are nonnegative matrices. Hence $E=F-G$ and $\bar E=\bar F-\bar G$ are weak regular splittings. From Lemma 5.1, we deduce that $2\bar D-|\bar L|\le 2D-|L|$, hence $(2D-|L|)^{-1}\le(2\bar D-|\bar L|)^{-1}$, that is, $F^{-1}\le\bar F^{-1}$. Following Lemma 2.3, since $E$ is an M-matrix, there exists a vector $x>0$ such that $Ex>0$. For the preconditioner (3.1), we have $P\ge I$. Moreover, $\bar E=PE$, so we easily infer that $0\le Ex\le\bar Ex$.

    If the matrix $A$ is irreducible, then $F^{-1}G$ is a nonnegative irreducible matrix. In combination with Lemma 2.6 and the Perron-Frobenius theorem [45], $F^{-1}G$ has a positive Perron vector, so that $\rho(\bar F^{-1}\bar G)\le\rho(F^{-1}G)$. If the matrix $A$ is reducible, we can construct an irreducible matrix $A(\delta)$ by Lemma 2.7, which again leads to $\rho(\bar F^{-1}\bar G)\le\rho(F^{-1}G)$.

    In view of the above, the comparison theorem shows that the convergence rate of the PNMMS method is faster than the NMMS method whenever these methods are convergent.
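The comparison theorem can be probed numerically. The script below evaluates the spectral radii of the two iteration matrices from (5.1) and (5.2) for a small, strictly diagonally dominant H+-matrix and an illustrative vector $q$; the preconditioner is built exactly as in (3.1).

```python
import numpy as np

# Illustrative data: a strictly diagonally dominant H+-matrix and a vector q
A = np.array([[4., -1., -1.], [-1., 4., -1.], [-1., -1., 4.]])
q = np.array([-1., 2., -1.])

# Preconditioner (3.1): fill column k whenever q_k < 0
n = len(q)
P = np.eye(n)
for k in np.where(q < 0)[0]:
    for i in range(n):
        if i != k:
            P[i, k] = abs(A[i, k]) / A[k, k]

def radius(B):
    """Spectral radius of F^{-1} G built from B as in (5.1)/(5.2)."""
    D = np.diag(np.diag(B))
    L = -np.tril(B, -1)            # B = D - L - U
    U = -np.triu(B, 1)
    F = 2.0 * D - np.abs(L)
    G = np.abs(L) + 2.0 * np.abs(U)
    return max(abs(np.linalg.eigvals(np.linalg.inv(F) @ G)))

rho_nmms, rho_pnmms = radius(A), radius(P @ A)
```

For this data the preconditioned radius is visibly smaller, in line with Theorem 5.1.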

    In this section, four numerical examples are presented to illustrate the efficiency of the PNMMS iteration method for solving the LCP (1.1). All tests are conducted in MATLAB R2014b on a PC with the Windows 10 operating system, an Intel i5-10400F CPU and 8GB RAM. In the numerical results, we report the number of iteration steps (denoted by "Iter"), the elapsed CPU time in seconds (denoted by "Time"), the relative residual (denoted by "Res") and the spectral radius of the iteration matrix (denoted by "Rho"). The stopping criterion of the iteration is

    $$\mathrm{Res}=\|\min(Az^k+q,z^k)\|_2\le 10^{-6},$$

    or the prescribed maximal iteration number $k_{\max}=500$ is exceeded ("–" is used in the following tables to denote this circumstance). All tests are started from the initial vector $z^0=(1,0,1,0,\dots,1,0)^{T}\in\mathbb{R}^n$.

    With regard to the comparison of Method 3.1 and Method 1.2, we exhibit the performance of the PNMSOR and the NMSOR methods. In our implementation, the parameter $\Omega$ is chosen as $\Omega=\frac{1}{\alpha}D$, and the parameter $\alpha$ is obtained experimentally.
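The natural residual used in the stopping criterion is a one-liner; a minimal helper for completeness:

```python
import numpy as np

def residual(A, q, z):
    """Res(z) = || min(Az + q, z) ||_2, the natural residual of the LCP (1.1)."""
    return np.linalg.norm(np.minimum(A @ z + q, z))
```

The residual vanishes exactly at a solution of the LCP and is positive elsewhere.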

    Example 6.1 ([14]). Consider the LCP (1.1) with $A=\hat A+\mu I\in\mathbb{R}^{n\times n}$, in which

    $$\hat A=\mathrm{tridiag}(-I,S,-I)=\begin{pmatrix}S&-I&&&\\ -I&S&-I&&\\ &\ddots&\ddots&\ddots&\\ &&-I&S&-I\\ &&&-I&S\end{pmatrix}\in\mathbb{R}^{n\times n}$$

    with $S=\mathrm{tridiag}(-1,4,-1)\in\mathbb{R}^{m\times m}$ and $n=m^2$, and the vector

    $$q=(-1,1,-1,1,\dots,-1,1)^{T}\in\mathbb{R}^n.$$

    For this example, the system matrix $A$ is a strictly diagonally dominant symmetric positive definite H+-matrix when $\mu>0$, so it is known that the LCP (1.1) has a unique solution. The preconditioner is chosen as (3.1), where $k_m\in\{1,3,5,7,\dots\}$.
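The test data of Example 6.1 can be assembled compactly with Kronecker products. The sketch below assumes the alternating-sign vector $q=(-1,1,-1,1,\dots)^T$ as reconstructed above; dense matrices are used for simplicity, although the paper targets large sparse problems.

```python
import numpy as np

def example_matrix(m, mu=4.0):
    """A = Ahat + mu*I with Ahat = tridiag(-I, S, -I) and S = tridiag(-1, 4, -1)."""
    S = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    I = np.eye(m)
    Ahat = (np.kron(np.eye(m), S)
            - np.kron(np.eye(m, k=1), I)
            - np.kron(np.eye(m, k=-1), I))
    return Ahat + mu * np.eye(m * m)

def example_q(n):
    """Alternating vector q = (-1, 1, -1, 1, ...)^T."""
    q = np.ones(n)
    q[::2] = -1.0
    return q
```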

    The value of the optimal parameter $\alpha$ involved in the PNMSOR and NMSOR methods is obtained experimentally as the value leading to the fewest iteration steps. We present the iteration counts of the PNMSOR and NMSOR methods under different $\alpha$ for the test problem in Figure 1. As shown in Figure 1, the PNMSOR method performs better no matter how the parameter $\alpha$ is chosen. Any $\alpha\in(0.9,1.1)$ can be regarded as a good choice, and we therefore set $\alpha=1$. Numerical results for Example 6.1 with different problem sizes $n$ and $\mu=4$ are reported in Table 1.

    Figure 1.  Experimental optimal parameter for Example 6.1.
    Table 1.  Numerical results for Example 6.1.

    Method            n = 256          n = 1024         n = 4096         n = 16384
    NMSOR    Iter     10               11               11               11
             Time     0.0021           0.1123           2.0915           131.5611
             Res      6.0407×10^-7     2.0204×10^-7     3.9786×10^-7     7.7943×10^-7
             Rho      4.1313×10^-1     4.1930×10^-1     4.2096×10^-1     4.2139×10^-1
    PNMSOR   Iter     7                7                7                7
             Time     0.0014           0.0739           1.2387           75.1883
             Res      1.0844×10^-7     1.6364×10^-7     2.4015×10^-7     3.5057×10^-7
             Rho      2.7729×10^-1     2.8296×10^-1     2.8451×10^-1     2.8491×10^-1

    It follows from Table 1 that the PNMSOR method is superior to the NMSOR method with respect to iteration steps and elapsed CPU time. We also present the spectral radius of the iteration matrix, which further verifies the theoretical result of the comparison theorem. In addition, we provide a diagram of the relationship between the iteration steps and the relative residual of the two methods in Figure 2. It follows from Figure 2 that the relative residual of the PNMSOR method decreases faster than that of the NMSOR method in each step, and the final attainable accuracy is about $10^{-15}$. That is to say, even if we demand a more accurate solution, the PNMSOR method retains a great advantage and reaches it at a faster speed. As a result, the PNMSOR method has better computational efficiency in Example 6.1.

    Figure 2.  Residual comparison for Example 6.1.

    Example 6.2 ([14]). Consider the LCP (1.1) with $A=\hat A+\mu I\in\mathbb{R}^{n\times n}$, in which

    $$\hat A=\mathrm{tridiag}(-0.5I,S,-1.5I)=\begin{pmatrix}S&-1.5I&&&\\ -0.5I&S&-1.5I&&\\ &\ddots&\ddots&\ddots&\\ &&-0.5I&S&-1.5I\\ &&&-0.5I&S\end{pmatrix}\in\mathbb{R}^{n\times n}$$

    with $S=\mathrm{tridiag}(-0.5,4,-1.5)\in\mathbb{R}^{m\times m}$ and $n=m^2$, and the vector

    $$q=(-1,1,-1,1,\dots,-1,1)^{T}\in\mathbb{R}^n.$$

    For this example, the matrix $A$ is a nonsymmetric positive definite H+-matrix with strict diagonal dominance when $\mu>0$, so the LCP (1.1) has a unique solution. We present the iteration counts of the PNMSOR and NMSOR methods under different $\alpha$ and the residual comparison for the test problem in Figures 3 and 4, respectively. The experimentally optimal parameter $\alpha$ is located in $(0.85,1.2)$, and we choose $\alpha=1$. Numerical results for Example 6.2 with different problem sizes $n$ and $\mu=4$ are reported in Table 2.

    Figure 3.  Experimental optimal parameter for Example 6.2.
    Figure 4.  Residual comparison for Example 6.2.
    Table 2.  Numerical results for Example 6.2.
    Method n
    256 1024 4096 16384
    NMSOR Iter 12 12 13 13
    Time 0.0020 0.1249 2.4304 136.7768
    Res 2.6442×10⁻⁷ 7.7866×10⁻⁷ 3.4928×10⁻⁷ 7.3951×10⁻⁷
    Rho 3.4965×10⁻¹ 3.5477×10⁻¹ 3.5615×10⁻¹ 3.7080×10⁻¹
    PNMSOR Iter 6 6 7 7
    Time 0.0011 0.0693 1.2343 66.7233
    Res 5.7054×10⁻⁷ 8.6806×10⁻⁷ 8.3153×10⁻⁷ 1.2105×10⁻⁷
    Rho 2.2499×10⁻¹ 2.2935×10⁻¹ 2.3055×10⁻¹ 2.3512×10⁻¹


    It follows from Table 2 that the PNMSOR method is considerably more efficient than the NMSOR method, especially for large and sparse problems.
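    The Rho rows in the tables report the spectral radius of the iteration matrix. For any splitting A = M − N, the governing quantity is ρ(M⁻¹N): the smaller it is, the faster the linear iteration converges. The sketch below estimates it for a classical Jacobi splitting of a small test matrix; this illustrates the quantity being tabulated, not the paper's NMSOR/PNMSOR iteration matrix itself:

    ```python
    import numpy as np

    def spectral_radius(M, N):
        """rho(M^{-1} N) for the splitting A = M - N."""
        return float(np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M, N)))))

    # Jacobi splitting of the strictly diagonally dominant tridiag(-1, 4, -1).
    n = 50
    A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.diag(np.diag(A))
    rho = spectral_radius(M, M - A)
    print(rho < 1.0)  # convergent splitting
    ```
    
    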

    Example 6.3 (Black–Scholes American option pricing). Consider the American option pricing problem introduced in [50]. Let V(S,t) and G(S,t) denote the value of an American option and its given payoff function, respectively. By a standard no-arbitrage argument, V(S,t) must satisfy the following complementarity conditions

    (∂V/∂t + ½σ²S² ∂²V/∂S² + (r − δ)S ∂V/∂S − rV)(V(S,t) − G(S,t)) = 0,
    ∂V/∂t + ½σ²S² ∂²V/∂S² + (r − δ)S ∂V/∂S − rV ≤ 0,
    V(S,t) − G(S,t) ≥ 0,   S ≥ 0,   0 ≤ t ≤ T.

    This model can be further reformulated into the following inequality system

    ∂u/∂t − ∂²u/∂x² ≥ 0,   u − g ≥ 0   and   (∂u/∂t − ∂²u/∂x²)(u − g) = 0,   (6.1)

    which satisfies u(x,0) = g(x,0), 0 ≤ t ≤ ½σ²T and lim_{x→±∞} u(x,t) = lim_{x→±∞} g(x,t). Here, we restrict x ∈ [a, b] and choose the values of a and b as in [50]. Using the forward difference scheme for the time t and the implicit difference scheme for the price x to discretize (6.1), one obtains

    Au − b ≥ 0,   u − g ≥ 0   and   (Au − b)ᵀ(u − g) = 0.   (6.2)

    By the transformation z = u − g and q = Ag − b, (6.2) can be rewritten as the LCP (1.1) [50]; indeed, substituting u = z + g gives Au − b = Az + (Ag − b) = Az + q.

    Following the parameter setting in [50], we take

    A = tridiag(−λθ, 1 + 2λθ, −λθ) ∈ R^{(n−1)×(n−1)},

    where λ = dt/(dx)², dt = σ²T/(2m) denotes the time step and dx = (b − a)/m denotes the price step. Through different choices of the parameter θ, we obtain different schemes, i.e., θ = 0, 1/2, 1. Here, we choose θ = 1, which yields the backward difference scheme. Evidently, the matrix A is an H+-matrix as well. After that, let q = Ag − b, where g = 0.5z*, b = Az* − v, with the exact solution z* = (1, 0, 1, 0, …, 1, 0)ᵀ ∈ R^{n−1}, v = (0, 1, 0, 1, …, 0, 1)ᵀ ∈ R^{n−1}, and the initial value z0 = (1, 1, 1, 1, …, 1, 1)ᵀ ∈ R^{n−1} in this experiment.
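    Assuming NumPy/SciPy, the coefficient matrix of this example can be assembled directly. The left endpoint a = −0.5 and the tridiagonal form are assumptions read from the partially garbled description above; the check confirms the strict diagonal dominance that makes A an H+-matrix:

    ```python
    import numpy as np
    import scipy.sparse as sp

    def option_lcp_matrix(m, n, a, b_end, sigma, T, theta=1.0):
        """A = tridiag(-lam*theta, 1 + 2*lam*theta, -lam*theta) of size (n-1) x (n-1)."""
        dt = 0.5 * sigma**2 * T / m      # time step dt = sigma^2 T / (2m)
        dx = (b_end - a) / m             # price step dx = (b - a) / m
        lam = dt / dx**2
        k = n - 1
        A = sp.diags([-lam * theta, 1 + 2 * lam * theta, -lam * theta],
                     offsets=[-1, 0, 1], shape=(k, k), format='csr')
        return A, lam

    # theta = 1 (backward difference) and the parameters of Table 3; a = -0.5 is assumed.
    A, lam = option_lcp_matrix(m=400, n=800, a=-0.5, b_end=0.5, sigma=0.2, T=0.5)
    d = A.diagonal()
    off = np.asarray(abs(A).sum(axis=1)).ravel() - d
    print(bool(np.all(d > off)))  # 1 + 2*lam*theta > 2*lam*theta, so A is strictly dominant
    ```
    
    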

    From Tables 3–6, we compare the efficiency of the PNMSOR and NMSOR methods for solving the LCP (1.1) arising from American option pricing, and show how different volatilities (σ = 0.2, 0.6) and different maturities (T = 0.5, 5) affect the solving efficiency. The parameter choices follow [50], and the specific parameters are indicated in the tables below. By a simple calculation on the vector q, the preconditioner P is chosen as in Example 6.1, and α = 1.

    Table 3.  Numerical results for Example 6.3. (a=−0.5, b=0.5, σ=0.2, T=0.5).
    Method (m,n)
    (400,800) (800,1600) (1600,3200) (3200,6400)
    NMSOR Iter 41 63 107 195
    Time 0.2651 1.7211 12.1052 87.7615
    Res 8.4847×10⁻⁸ 2.2189×10⁻⁷ 3.1211×10⁻⁷ 1.7606×10⁻⁷
    Rho 9.5993×10⁻¹ 9.7957×10⁻¹ 9.8969×10⁻¹ 9.9482×10⁻¹
    PNMSOR Iter 20 29 43 69
    Time 0.1303 0.7637 4.6886 30.4895
    Res 5.5860×10⁻⁷ 8.4083×10⁻⁷ 9.4161×10⁻⁷ 8.7167×10⁻⁷
    Rho 8.5442×10⁻¹ 9.2228×10⁻¹ 9.5978×10⁻¹ 9.7953×10⁻¹

    Table 4.  Numerical results for Example 6.3. (a=−1, b=1, σ=0.6, T=0.5).
    Method (m,n)
    (400,800) (800,1600) (1600,3200) (3200,6400)
    NMSOR Iter 69 118 217 417
    Time 0.4147 3.3851 23.5351 182.5583
    Res 9.8260×10⁻⁷ 2.2605×10⁻⁷ 1.9309×10⁻⁷ 1.3172×10⁻⁷
    Rho 9.8179×10⁻¹ 9.9082×10⁻¹ 9.9539×10⁻¹ 9.9769×10⁻¹
    PNMSOR Iter 30 45 77 134
    Time 0.1693 1.2625 8.3526 60.4824
    Res 7.8560×10⁻⁷ 3.9124×10⁻⁷ 4.1874×10⁻⁷ 9.4258×10⁻⁷
    Rho 9.3036×10⁻¹ 9.6410×10⁻¹ 9.8177×10⁻¹ 9.9081×10⁻¹

    Table 5.  Numerical results for Example 6.3. (a=−1.5, b=1.5, σ=0.2, T=5).
    Method (m,n)
    (400,800) (800,1600) (1600,3200) (3200,6400)
    NMSOR Iter 44 68 119 217
    Time 0.2755 1.8640 12.9576 94.6231
    Res 7.6780×10⁻⁷ 4.2072×10⁻⁷ 1.2522×10⁻⁷ 4.0707×10⁻⁷
    Rho 9.6379×10⁻¹ 9.8158×10⁻¹ 9.9071×10⁻¹ 9.9533×10⁻¹
    PNMSOR Iter 23 29 46 74
    Time 0.1424 0.7828 5.0038 32.5015
    Res 2.8951×10⁻⁷ 4.4231×10⁻⁷ 9.0936×10⁻⁷ 6.5158×10⁻⁷
    Rho 8.6728×10⁻¹ 9.2957×10⁻¹ 9.6368×10⁻¹ 9.8155×10⁻¹

    Table 6.  Numerical results for Example 6.3. (a=−2, b=2, σ=0.6, T=5).
    Method (m,n)
    (400,800) (800,1600) (1600,3200) (3200,6400)
    NMSOR Iter 144 268 – –
    Time 0.8479 7.2326 – –
    Res 8.2116×10⁻⁷ 8.5279×10⁻⁷ – –
    Rho 9.9263×10⁻¹ 9.9631×10⁻¹ – –
    PNMSOR Iter 54 91 166 312
    Time 0.3191 2.5341 17.6367 133.2185
    Res 3.3286×10⁻⁷ 7.1404×10⁻⁷ 4.9194×10⁻⁷ 5.0296×10⁻⁷
    Rho 9.7107×10⁻¹ 9.8536×10⁻¹ 9.9264×10⁻¹ 9.9631×10⁻¹


    As shown in Table 3, compared with the NMSOR method, the PNMSOR method roughly halves the CPU time in the case of small volatility and short maturity, and its overall elapsed CPU time is the shortest of the four settings. For large volatility and short maturity (Table 4), the PNMSOR method is again much faster, although the overall elapsed CPU time is longer than in the small-volatility case. Table 5 shows that, although the elapsed CPU time increases, the maturity has less influence on the solving efficiency than the volatility, and the performance of our method remains excellent. From Table 6, the case of large volatility and long maturity consumes the longest solution time, but the PNMSOR method keeps its superiority. At the two larger sizes, the NMSOR method cannot reach sufficient accuracy within 500 steps, whereas the PNMSOR method still converges rapidly. In further experiments, for size (1600, 3200) the NMSOR method reaches a residual of 3.6375×10⁻⁸ at step 516 in 55.0822 seconds, and for size (3200, 6400) it reaches 3.9548×10⁻⁷ at step 1011 in 432.2135 seconds; that is, the NMSOR method is convergent, but it does not reach the required accuracy within the preset maximum number of iteration steps. This comparison shows the superiority of the PNMSOR method even more clearly.

    Example 6.4 (Continuous optimal control problem). Consider the quasi-variational inequality problem (QIP) arising from the continuous optimal control problem proposed in [49]: find z ∈ K(z) such that

    (v − z)ᵀ(Az + F(z)) ≥ 0,   ∀ v ∈ K(z),   (6.3)

    where K(z) = m(z) + K, K is a positive cone in R^n, and m(z) and F represent the implicit obstacle function and the mapping from R^n to itself, respectively.

    The QIP (6.3) can be simplified to the LCP (1.1) by taking K(z) to be the nonnegative cone R^n₊ and F(z) = q. In Example 6.4, the test matrix A is constructed as an H+-matrix, which may be more consistent with the actual condition in the application, and the resulting problem then has a unique solution. The vector q is the same as in Example 6.1, and α = 1. Numerical results for this example are reported in Table 7, from which we can find that the PNMSOR method is superior to the NMSOR method in terms of CPU time.

    Table 7.  Numerical results for Example 6.4.
    Method
    NMSOR Iter
    Time
    Res
    Rho
    PNMSOR Iter 6 7 7 8
    Time 0.0015 0.0886 1.3135 103.4370
    Res
    Rho


    On the whole, these numerical results show that the PNMMS iteration method for solving the LCP (1.1) is much more competitive than the NMMS iteration method.

    In this paper, a preconditioned new modulus-based matrix splitting (PNMMS) method with a generalized preconditioner is developed to solve the LCP (1.1). The convergence analysis of the PNMMS method is established when A is an H+-matrix. In particular, we provide a comparison theorem between the PNMMS and NMMS iteration methods, which shows that the PNMMS method improves the convergence rate of the original NMMS method for solving the LCP (1.1). Numerical experiments are reported to demonstrate the efficiency of the proposed method.

    The first author would like to thank Dr. Cairong Chen from Fujian Normal University for his helpful discussions and guidance. The authors are grateful to the reviewers and editor for their helpful comments and suggestions which have helped to improve the paper. This work is supported by the Social Science Foundation of Liaoning Province (L22BGL028).

    The authors declare there is no conflict of interest.



    [1] N. W. Kappel, L. T. Watson, Iterative algorithms for the linear complementarity problem, Int. J. Comput. Math., 19 (1986), 273–297. https://doi.org/10.1080/00207168608803522 doi: 10.1080/00207168608803522
    [2] K. G. Murty, F.-T. Yu, Linear Complementarity, Linear and Nonlinear Programming, Heldermann: Berlin, Germany, 1988.
    [3] Y. Li, P. Dai, Generalized AOR methods for linear complementarity problem, Appl. Math. Comput., 188 (2007), 7–18. https://doi.org/10.1016/j.amc.2006.09.067 doi: 10.1016/j.amc.2006.09.067
    [4] O. L. Mangasarian, Solution of symmetric linear complementarity problems by iterative methods, J. Optim. Theory Appl., 22 (1977), 465–485. https://doi.org/10.1007/bf01268170 doi: 10.1007/bf01268170
    [5] H. Saberi Najafi, S. A. Edalatpanah, On the convergence regions of generalized accelerated overrelaxation method for linear complementarity problems, J. Optim. Theory Appl., 156 (2013), 859–866. https://doi.org/10.1007/s10957-012-0135-1 doi: 10.1007/s10957-012-0135-1
    [6] Y.-F. Ke, C.-F. Ma, On SOR-like iteration methods for solving weakly nonlinear systems, Optim. Methods Softw., 37 (2020), 320–337. https://doi.org/10.1080/10556788.2020.1755861 doi: 10.1080/10556788.2020.1755861
    [7] Y.-F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020), 105990. https://doi.org/10.1016/j.aml.2019.07.021 doi: 10.1016/j.aml.2019.07.021
    [8] Z.-Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM J. Matrix Anal. Appl., 21 (1999), 67–78. https://doi.org/10.1137/s0895479897324032 doi: 10.1137/s0895479897324032
    [9] Z.-Z. Bai, Parallel chaotic multisplitting iterative methods for the large sparse linear complementarity problem, J. Comput. Math., 19 (2001), 281–292.
    [10] Z.-Z. Bai, L.-L. Zhang, Modulus-based synchronous multisplitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 20 (2013), 425–439. https://doi.org/10.1002/nla.1835 doi: 10.1002/nla.1835
    [11] Z.-Z. Bai, L.-L. Zhang, Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems, Numer. Algorithms, 62 (2013), 59–77. https://doi.org/10.1007/s11075-012-9566-x doi: 10.1007/s11075-012-9566-x
    [12] Z.-Z. Bai, D. J. Evans, Matrix multisplitting methods with applications to linear complementarity problems: Parallel synchronous and chaotic methods, Calculateurs Paralleles Reseaux et systemes repartis, 13 (2001), 125–141.
    [13] Z.-Z. Bai, D. J. Evans, Matrix multisplitting methods with applications to linear complementarity problems: parallel asynchronous methods, Int. J. Comput. Math., 79 (2002), 205–232. https://doi.org/10.1080/00207160211927 doi: 10.1080/00207160211927
    [14] Z.-Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. https://doi.org/10.1002/nla.680 doi: 10.1002/nla.680
    [15] L.-L. Zhang, Z.-R. Ren, Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems, Appl. Math. Lett., 26 (2013), 638–642. https://doi.org/10.1016/j.aml.2013.01.001 doi: 10.1016/j.aml.2013.01.001
    [16] C. Kanzow, Inexact semismooth Newton methods for large-scale complementarity problems, Optim. Methods Softw., 19 (2004), 309–325. https://doi.org/10.1080/10556780310001636369 doi: 10.1080/10556780310001636369
    [17] Z. Sun, J.-P. Zeng, A monotone semismooth Newton type method for a class of complementarity problems, J. Comput. Appl. Math., 235 (2011), 1261–1274. https://doi.org/10.1016/j.cam.2010.08.012 doi: 10.1016/j.cam.2010.08.012
    [18] Z.-Z. Bai, L.-L. Zhang, Modulus-based multigrid methods for linear complementarity problems, Numer. Linear Algebra Appl., 24 (2017), e2105. https://doi.org/10.1002/nla.2105 doi: 10.1002/nla.2105
    [19] Z.-G. Huang, J.-J. Cui, Accelerated relaxation modulus-based matrix splitting iteration method for linear complementarity problems, Bull. Malays. Math. Sci. Soc., 44 (2021), 2175–2213. https://doi.org/10.1007/s40840-020-01049-9 doi: 10.1007/s40840-020-01049-9
    [20] D.-K. Li, L. Wang, Y.-Y. Liu, A relaxation general two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems, J. Comput. Appl. Math., 409 (2022), 114140. https://doi.org/10.1016/j.cam.2022.114140 doi: 10.1016/j.cam.2022.114140
    [21] X.-F. Peng, M. Wang, W. Li, A relaxation two-sweep modulus-based matrix splitting iteration method for linear complementarity problems, East Asian J. Appl. Math., 9 (2019), 102–121. https://doi.org/10.4208/eajam.020318.220618 doi: 10.4208/eajam.020318.220618
    [22] H. Ren, X. Wang, X.-B. Tang, T. Wang, The general two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems, Comput. Math. Appl., 77 (2018), 1–11. https://doi.org/10.1016/j.camwa.2018.10.040 doi: 10.1016/j.camwa.2018.10.040
    [23] S.-L. Wu, C.-X. Li, P. Agarwal, Relaxed modulus-based matrix splitting methods for the linear complementarity problem, Symmetry, 13 (2021), 503. https://doi.org/10.3390/sym13030503 doi: 10.3390/sym13030503
    [24] B.-H. Huang, C.-F. Ma, Accelerated modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems, Comp. Appl. Math., 37 (2018), 3053–3076. https://doi.org/10.1007/s40314-017-0496-z doi: 10.1007/s40314-017-0496-z
    [25] S.-L. Xie, H.-R. Xu, J.-P. Zeng, Two-step modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems, Linear Algebra Appl., 494 (2016), 1–10. https://doi.org/10.1016/j.laa.2016.01.002 doi: 10.1016/j.laa.2016.01.002
    [26] H. Zheng, S. Vong, L. Liu, The relaxation modulus-based matrix splitting iteration method for solving a class of nonlinear complementarity problems, Int. J. Comput. Math., 96 (2019), 1648–1667. https://doi.org/10.1080/00207160.2018.1504928 doi: 10.1080/00207160.2018.1504928
    [27] H. Zheng, S. Vong, A two-step modulus-based matrix splitting iteration method for horizontal linear complementarity problems, Numer. Algorithms, 86 (2021), 1791–1810. https://doi.org/10.1007/s11075-020-00954-1 doi: 10.1007/s11075-020-00954-1
    [28] Y. Cao, A. Wang, Two-step modulus-based matrix splitting iteration methods for implicit complementarity problems, Numer. Algorithms, 82 (2019), 1377–1394. https://doi.org/10.1007/s11075-019-00660-7 doi: 10.1007/s11075-019-00660-7
    [29] J. Lu, W. Xiang, A generalized two-step modulus-based matrix splitting iteration method for implicit complementarity problems of H+-matrices, Filomat, 33 (2019), 4875–4888. https://doi.org/10.2298/fil1915875j doi: 10.2298/fil1915875j
    [30] Y. Wang, J.-F. Yin, Q.-Y. Dou, R. Li, Two-step modulus-based matrix splitting iteration methods for a class of implicit complementarity problems, Numer. Math. Theor. Meth. Appl., 12 (2019), 867–883. https://doi.org/10.4208/nmtma.oa-2018-0034 doi: 10.4208/nmtma.oa-2018-0034
    [31] Y.-F. Ke, C.-F. Ma, H. Zhang, The modulus-based matrix splitting iteration methods for second-order cone linear complementarity problems, Numer. Algorithms, 79 (2018), 1283–1303. https://doi.org/10.1007/s11075-018-0484-4 doi: 10.1007/s11075-018-0484-4
    [32] Y.-F. Ke, C.-F. Ma, H. Zhang, The relaxation modulus-based matrix splitting iteration methods for circular cone nonlinear complementarity problems, Comp. Appl. Math., 37 (2018), 6795–6820. https://doi.org/10.1007/s40314-018-0687-2 doi: 10.1007/s40314-018-0687-2
    [33] Y.-F. Ke, Convergence analysis on matrix splitting iteration algorithm for semidefinite linear complementarity problems, Numer. Algorithms, 86 (2021), 257–279. https://doi.org/10.1007/s11075-020-00888-8 doi: 10.1007/s11075-020-00888-8
    [34] S.-L. Wu, C.-X. Li, A class of new modulus-based matrix splitting methods for linear complementarity problem, Optim. Lett., 16 (2022), 1427–1443. https://doi.org/10.1007/s11590-021-01781-6 doi: 10.1007/s11590-021-01781-6
    [35] W. Li, H. Zheng, A preconditioned modulus-based iteration method for solving linear complementarity problems of H-matrices, Linear Multilinear Algebra, 64 (2016), 1390–1403. https://doi.org/10.1080/03081087.2015.1087457 doi: 10.1080/03081087.2015.1087457
    [36] H. Zheng, J. Luo, A preconditioned two-step modulus-based matrix splitting iteration method for linear complementarity problems of H+-matrices, Math. Numer. Sin., 40 (2018), 24–32. (in Chinese)
    [37] H. Ren, X. Wang, X.-B. Tang, T. Wang, A preconditioned general two-step modulus-based matrix splitting iteration method for linear complementarity problems of H+-matrices, Numer. Algorithms, 82 (2019), 969–986. https://doi.org/10.1007/s11075-018-0637-5 doi: 10.1007/s11075-018-0637-5
    [38] L.-L. Zhang, Two-step modulus-based matrix splitting iteration method for linear complementarity problems, Numer. Algorithms, 57 (2011), 83–99. https://doi.org/10.1007/s11075-010-9416-7 doi: 10.1007/s11075-010-9416-7
    [39] X.-P. Wu, X.-F. Peng, W. Li, A preconditioned general modulus-based matrix splitting iteration method for linear complementarity problems of H-matrices, Numer. Algorithms, 79 (2018), 1131–1146. https://doi.org/10.1007/s11075-018-0477-3 doi: 10.1007/s11075-018-0477-3
    [40] P.-F. Dai, J.-C. Li, J.-C. Bai, J.-M. Qiu, A preconditioned two-step modulus-based matrix splitting iteration method for linear complementarity problem, Appl. Math. Comput., 348 (2019), 542–551. https://doi.org/10.1016/j.amc.2018.12.012 doi: 10.1016/j.amc.2018.12.012
    [41] Z.-Z. Bai, J.-Y. Pan, Matrix Analysis and Computations, Philadelphia(PA): SIAM, 2021. https://doi.org/10.1137/1.9781611976632
    [42] R. S. Varga, Matrix Iterative Analysis, 2nd edition, Springer, Berlin, Heidelberg, 2000. https://doi.org/10.1007/978-3-642-05156-2
    [43] A. Frommer, G. Mayer, Convergence of relaxed parallel multisplitting methods, Linear Algebra Appl., 119 (1989), 141–152. https://doi.org/10.1016/0024-3795(89)90074-8 doi: 10.1016/0024-3795(89)90074-8
    [44] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Philadelphia(PA): SIAM, 1994. https://doi.org/10.1137/1.9781611971262
    [45] O. Axelsson, Iterative Solution Methods, Cambridge University Press, 1996.
    [46] M. Neumann, R. J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Linear Algebra Appl., 88–89 (1987), 559–573. https://doi.org/10.1016/0024-3795(87)90125-x doi: 10.1016/0024-3795(87)90125-x
    [47] L. Wang, Y.-Z. Song, Preconditioned AOR iterative methods for M-matrices, J. Comput. Appl. Math., 226 (2009), 114–124. https://doi.org/10.1016/j.cam.2008.05.022 doi: 10.1016/j.cam.2008.05.022
    [48] P.-F. Dai, J.-C. Li, J.-C. Bai, A preconditioned accelerated modulus-based Gauss-Seidel iterative method for linear complementarity problems, Math. Numer. Sin., 41 (2019), 308–319. (in Chinese)
    [49] G. Isac, Complementarity Problems and Variational Inequalities, Springer, Boston, MA, 2006. https://doi.org/10.1007/0-387-32900-5-2
    [50] X.-J. Shi, L. Yang, Z.-H. Huang, A fixed point method for the linear complementarity problem arising from american option pricing, Acta Math. Appl. Sin. Engl. Ser., 32 (2016), 921–932. https://doi.org/10.1007/s10255-016-0613-6 doi: 10.1007/s10255-016-0613-6
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
