Research article

A new relaxed acceleration two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems

  • Received: 26 January 2023 Revised: 13 March 2023 Accepted: 15 March 2023 Published: 06 April 2023
  • MSC : 65F10, 65H10, 90C30

  • A new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method is developed to solve linear complementarity problems. The convergence of the NRATMMS method is established with the system matrix A being an H+-matrix. Numerical experiments show that the proposed method is superior to some existing algorithms under appropriate conditions.

    Citation: Dongmei Yu, Yiming Zhang, Cairong Chen, Deren Han. A new relaxed acceleration two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems[J]. AIMS Mathematics, 2023, 8(6): 13368-13389. doi: 10.3934/math.2023677




Over the past decades, owing to a broad variety of applications in engineering, the sciences and economics, the linear complementarity problem (LCP) has been an active topic in the optimization community and has garnered considerable interest. The LCP is a powerful mathematical model that is intimately related to many significant scientific problems, such as primal-dual linear programming, the bimatrix game, convex quadratic programming and the American option pricing problem; see, e.g., [1,2,3] for more details. The LCP consists of determining a vector $z \in \mathbb{R}^{n}$ such that

$z \ge 0, \quad v = Az + q \ge 0 \quad \text{and} \quad z^{\top}v = 0, \qquad (1.1)$

where $A \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^{n}$ are given. We hereafter abbreviate problem (1.1) as LCP($A,q$).

The LCP($A,q$) of the form (1.1), together with its extensions, has been extensively investigated in recent years, and designing efficient numerical algorithms that obtain its solution fast and economically is of great significance. Many numerical iterative algorithms have been developed for solving the LCP($A,q$) (1.1) over the past decades, such as the pivot algorithms [1,2,4], the projected iterative methods [5,6,7,8], the multisplitting methods [9,10,11,12,13,14], the Newton-type iteration methods [15,16] and others; see, e.g., [17,18,19] and the references therein. The modulus-based matrix splitting (MMS) iteration method, first introduced in [20], is particularly attractive for solving the LCP($A,q$) (1.1). By the variable transformation $z = \frac{1}{\gamma}(|x| + x)$ and $v = \frac{1}{\gamma}\Omega(|x| - x)$, with $A = M - N$ a splitting of $A$, Bai reformulated the LCP($A,q$) (1.1) in the following equivalent form [20]:

$(\Omega + M)x = Nx + (\Omega - A)|x| - \gamma q,$

where $\gamma > 0$ and $\Omega \in \mathbb{R}^{n \times n}$ is a positive diagonal matrix. He then designed a general framework of the MMS iteration method for solving the large-scale sparse LCP($A,q$) (1.1), which takes the following form.

Algorithm 1.1. ([20]) (The MMS method) Let $A = M - N$ be a splitting of the matrix $A \in \mathbb{R}^{n \times n}$. Assume that $x^{0} \in \mathbb{R}^{n}$ is an arbitrary initial guess. For $k = 0, 1, 2, \ldots$, compute $x^{k+1}$ by solving the linear system

$(\Omega + M)x^{k+1} = Nx^{k} + (\Omega - A)|x^{k}| - \gamma q,$

    and then set

$z^{k+1} = \frac{1}{\gamma}\big(|x^{k+1}| + x^{k+1}\big)$

until the iteration sequence $\{z^{k}\}$ converges. Here, $\Omega \in \mathbb{R}^{n \times n}$ is a positive diagonal matrix and $\gamma$ is a positive constant.
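To make the framework concrete, the following Python sketch implements one possible instance of Algorithm 1.1; the Gauss-Seidel-type choice $M = D_A - L_A$, $\Omega = D_A$ and $\gamma = 1$ used here is an illustrative assumption, not prescribed by the algorithm:

```python
import numpy as np

def mms(A, q, gamma=1.0, tol=1e-6, max_iter=500):
    """Sketch of the MMS framework (Algorithm 1.1).

    Assumed (illustrative) choices: M = tril(A) (i.e., D_A - L_A),
    N = M - A and Omega = D_A; the framework itself allows any splitting
    A = M - N and any positive diagonal Omega.
    """
    Omega = np.diag(np.diag(A))
    M = np.tril(A)                   # D_A - L_A
    N = M - A                        # so that A = M - N
    x = np.zeros(A.shape[0])
    z = (np.abs(x) + x) / gamma
    for _ in range(max_iter):
        # (Omega + M) x^{k+1} = N x^k + (Omega - A)|x^k| - gamma * q
        x = np.linalg.solve(Omega + M,
                            N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        z = (np.abs(x) + x) / gamma  # recover the LCP iterate
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:
            break
    return z
```

For an H+-matrix such as tridiag(-1, 4, -1), the iteration converges to the unique solution whose existence is guaranteed by Lemma 1 below.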

The MMS iteration method not only covers some existing iteration methods, such as the nonstationary extrapolated modulus method [21] and the modified modulus method [22], as special cases, but also yields a series of modulus-based relaxation methods, such as the modulus-based Jacobi (MJ), the modulus-based Gauss-Seidel (MGS), the modulus-based successive overrelaxation (MSOR) and the modulus-based accelerated overrelaxation (MAOR) methods. Owing to its promising behavior and elegant mathematical properties, the MMS iterative scheme immediately received considerable attention, and diverse variants of the MMS method emerged. For instance, Zheng and Yin [23] established a new class of accelerated MMS (AMMS) iteration methods for solving the large-scale sparse LCP($A,q$) (1.1) and explored the convergence of the AMMS method with the system matrix $A$ being a positive definite matrix or an H+-matrix. In order to further accelerate the MMS method, Zheng et al. [24] combined the relaxation strategy with the matrix splitting technique in the modulus equation of [25] and presented a relaxation MMS (RMMS) iteration method for solving the LCP($A,q$) (1.1); the parameter selection strategies of the RMMS method were discussed in depth in [24]. In addition, the RMMS method covers the general MMS (GMMS) method [25] as a special case. In the sequel, by extending the two-sweep iteration methods [26,27], Wu and Li [28] developed a general framework of two-sweep MMS (TMMS) iteration methods to solve the LCP($A,q$) (1.1) and established the convergence of the TMMS method with the system matrix $A$ being either an H+-matrix or a positive-definite matrix. Ren et al. [29] proposed a class of general two-sweep MMS (GTMMS) iteration methods to solve the LCP($A,q$) (1.1), which encompasses the TMMS method by selecting appropriate parameter matrices. Peng et al. [30] presented a relaxation two-sweep MMS (RTMMS) iteration method for solving the LCP($A,q$) (1.1) and gave its convergence theory with the system matrix $A$ being an H+-matrix or a positive-definite matrix. Huang et al. [31] combined the parametric strategy, the relaxation technique and the acceleration technique to construct an accelerated relaxation MMS (ARMMS) iteration method for solving the LCP($A,q$) (1.1). The ARMMS method can be regarded as a generalization of some existing methods, such as the MMS [20], the GMMS [25] and the RMMS [24] methods. For more modulus-based matrix splitting iteration methods, see [32,33,34,35,36,37,38,39,40,41] and the references therein.

On the other hand, Bai and Tong [42] equivalently transformed the LCP($A,q$) (1.1) into a nonlinear equation without using a variable transformation and proposed an efficient iterative algorithm based on matrix splittings and extrapolation acceleration techniques. Some relaxed versions of the method proposed in [42] were then constructed by Bai and Huang [43], and their convergence theory was established under mild conditions. Recently, Wu and Li [44] recast the LCP($A,q$) (1.1) into the implicit fixed-point equation

    (Ω+M)z=Nz+|(AΩ)z+q|q, (1.2)

where $A = M - N$. In fact, if $M = A$ and $\Omega = I$, then (1.2) reduces to the fixed-point equation proposed in [42]. Based on (1.2), the new MMS (NMMS) method for solving the LCP($A,q$) (1.1) was constructed in [44].

Algorithm 1.2. ([44]) (The NMMS method) Let $A = M - N$ be a splitting of the matrix $A \in \mathbb{R}^{n \times n}$ and let the matrix $\Omega + M$ be nonsingular, where $\Omega \in \mathbb{R}^{n \times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^{0} \in \mathbb{R}^{n}$, for $k = 0, 1, 2, \ldots$ until the iteration sequence $\{z^{k}\}$ converges, compute $z^{k+1} \in \mathbb{R}^{n}$ by solving the linear system

$(\Omega + M)z^{k+1} = Nz^{k} + |(A - \Omega)z^{k} + q| - q.$

It is obvious that the NMMS method does not need any variable transformation, which distinguishes it from the MMS-type iteration methods mentioned above. Nevertheless, the NMMS method inherits the merits of the MMS-type iteration methods, and some relaxation versions of it have been studied.
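The absence of a variable transformation is visible in code: the iteration below acts on $z$ directly. This is a sketch of Algorithm 1.2 under the illustrative assumptions $M = A$, $N = 0$ and $\Omega = D_A$ (the algorithm itself allows any splitting with $\Omega + M$ nonsingular):

```python
import numpy as np

def nmms(A, q, tol=1e-6, max_iter=500):
    """Sketch of the NMMS method (Algorithm 1.2), iterating on z directly.

    Assumed (illustrative) choices: M = A, N = 0 and Omega = D_A.
    """
    Omega = np.diag(np.diag(A))
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        # (Omega + M) z^{k+1} = N z^k + |(A - Omega) z^k + q| - q
        z = np.linalg.solve(Omega + A, np.abs((A - Omega) @ z + q) - q)
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:
            break
    return z
```

At a fixed point of (1.2), the componentwise sign analysis of $|(A-\Omega)z+q|$ shows that $z$ solves the LCP, so the residual check above is a valid stopping rule.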

Remark 1.1. Let $A = D_A - L_A - U_A$, where $D_A$, $L_A$ and $U_A$ are the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of $A$, respectively. It has been mentioned in [44] that Algorithm 1.2 reduces to the following methods.

(i) If $M = A$, $\Omega = I$ and $N = 0$, then Algorithm 1.2 becomes the new modulus method:

$(I + A)z^{k+1} = |(A - I)z^{k} + q| - q.$

(ii) If $M = A$, $N = 0$ and $\Omega = \alpha I$, then Algorithm 1.2 turns into the new modified modulus iteration method:

$(\alpha I + A)z^{k+1} = |(A - \alpha I)z^{k} + q| - q.$

(iii) If $M = \frac{1}{\alpha}(D_A - \beta L_A)$ and $N = \frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big]$, then Algorithm 1.2 reduces to the new MAOR iteration method:

$(\alpha\Omega + D_A - \beta L_A)z^{k+1} = \big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big]z^{k} + \alpha\big(|(A - \Omega)z^{k} + q| - q\big). \qquad (1.3)$

Evidently, based on (1.3), when $(\alpha, \beta)$ equals $(\alpha, \alpha)$, $(1, 1)$ and $(1, 0)$, we obtain the new MSOR (NMSOR), the new MGS (NMGS) and the new MJ (NMJ) iteration methods, respectively.

The goal of this paper is to further improve the computational efficiency of Algorithm 1.2 for solving the LCP($A,q$) (1.1). To this end, we utilize the two-sweep matrix splitting iteration technique of [28,29] together with the relaxation technique, and construct a new relaxed acceleration two-sweep MMS (NRATMMS) iteration method for solving the LCP($A,q$) (1.1). The convergence of the NRATMMS iteration method is studied in detail. By choosing suitable parameter matrices, the NRATMMS iteration method generates several relaxation variants. Numerical results are reported to demonstrate the efficiency of the NRATMMS iteration method.

The remainder of this paper is organized as follows. In Section 2, we present some notations and definitions used hereinafter. Section 3 is devoted to establishing the NRATMMS iteration method for solving the LCP($A,q$) (1.1) and exploring the global linear convergence of the proposed method. Section 4 reports the numerical results. Finally, some concluding remarks are given in Section 5.

    In this section, we collect some notations, classical definitions and some auxiliary results which lay the foundation of our developments.

$\mathbb{R}^{n \times n}$ denotes the set of all $n \times n$ real matrices and $\mathbb{R}^{n} = \mathbb{R}^{n \times 1}$. $I$ is the identity matrix of suitable dimension. $|\cdot|$ denotes the absolute value of a real scalar or the modulus of a complex scalar. For $x \in \mathbb{R}^{n}$, $x_i$ refers to its $i$-th entry, and $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^{\top} \in \mathbb{R}^{n}$ represents the componentwise absolute value of $x$. $\mathrm{tridiag}(a, b, c)$ denotes a tridiagonal matrix with $a$, $b$ and $c$ as the subdiagonal, main diagonal and superdiagonal entries, respectively. $\mathrm{Tridiag}(A, B, C)$ denotes a block tridiagonal matrix with $A$, $B$ and $C$ as the subdiagonal, main diagonal and superdiagonal block entries, respectively.

Let $P = (p_{ij}) \in \mathbb{R}^{m \times n}$ and $Q = (q_{ij}) \in \mathbb{R}^{m \times n}$ be two matrices. We write $P \ge Q$ ($P > Q$) if $p_{ij} \ge q_{ij}$ ($p_{ij} > q_{ij}$) holds for all $i$ and $j$. For $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, $A^{\top}$ and $|A|$ represent the transpose and the absolute value of $A$ ($|A| = (|a_{ij}|) \in \mathbb{R}^{m \times n}$), respectively. For $A = (a_{ij}) \in \mathbb{R}^{n \times n}$, $\rho(A)$ represents its spectral radius. Moreover, the comparison matrix $\langle A \rangle = (\langle a \rangle_{ij})$ is defined by

$\langle a \rangle_{ij} = \begin{cases} |a_{ij}|, & \text{if } i = j, \\ -|a_{ij}|, & \text{if } i \ne j, \end{cases} \qquad i, j = 1, 2, \ldots, n.$

A matrix $A \in \mathbb{R}^{n \times n}$ is called a Z-matrix if all of its off-diagonal entries are nonpositive, and a P-matrix if all of its principal minors are positive. A real matrix $A$ is called an M-matrix if it is a Z-matrix with $A^{-1} \ge 0$, and an H-matrix if its comparison matrix $\langle A \rangle$ is an M-matrix. In particular, an H-matrix with positive diagonal entries is called an H+-matrix [9]. A sufficient condition for $A$ to be a P-matrix is that $A$ is an H+-matrix. $A \in \mathbb{R}^{n \times n}$ is called a strictly diagonally dominant matrix if $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$ for all $1 \le i \le n$.
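These definitions are straightforward to check numerically. The sketch below builds the comparison matrix $\langle A \rangle$ and tests the H+ property; the M-matrix test relies on the standard characterization of a Z-matrix $sI - B$ with $B \ge 0$ as a nonsingular M-matrix if and only if $\rho(B) < s$:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_plus(A):
    """A is an H+-matrix iff its diagonal is positive and <A> is an M-matrix."""
    if np.any(np.diag(A) <= 0):
        return False
    C = comparison_matrix(A)
    s = np.max(np.diag(C))
    B = s * np.eye(A.shape[0]) - C   # B >= 0 because <A> is a Z-matrix
    return np.max(np.abs(np.linalg.eigvals(B))) < s
```

For example, tridiag(-1, 4, -1) passes the test, while a matrix whose off-diagonal mass dominates its diagonal fails it.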

Let $M$ be nonsingular. Then $A = M - N$ is called an M-splitting if $M$ is an M-matrix and $N \ge 0$; an H-splitting if $\langle M \rangle - |N|$ is an M-matrix; and an H-compatible splitting if $\langle A \rangle = \langle M \rangle - |N|$ [45]. Finally, the following lemmas are needed in the convergence analysis of the proposed method.

Lemma 1. ([46]) Let $A \in \mathbb{R}^{n \times n}$ be an H+-matrix. Then the LCP($A,q$) (1.1) has a unique solution for any $q \in \mathbb{R}^{n}$.

Lemma 2. ([47]) Let $B \in \mathbb{R}^{n \times n}$ be a strictly diagonally dominant matrix. Then for all $C \in \mathbb{R}^{n \times n}$,

$\|B^{-1}C\|_{\infty} \le \max_{1 \le i \le n} \frac{(|C|e)_i}{(\langle B \rangle e)_i}$

holds, where $e = (1, 1, \ldots, 1)^{\top}$.

Lemma 3. ([48]) Let $A$ be an H-matrix. Then $|A^{-1}| \le \langle A \rangle^{-1}$.

Lemma 4. ([49]) If $A$ is an M-matrix, then there exists a positive diagonal matrix $V$ such that $AV$ is a strictly diagonally dominant matrix with positive diagonal entries.

Lemma 5. ([49]) Let $A$ and $B$ be two Z-matrices, $A$ be an M-matrix and $B \ge A$. Then $B$ is an M-matrix.

Lemma 6. ([26]) Let

$A = \begin{pmatrix} B & C \\ I & 0 \end{pmatrix} \ge 0 \quad \text{and} \quad \rho(B + C) < 1,$

then $\rho(A) < 1$.

Lemma 7. ([45]) If $A = M - N$ is an M-splitting of $A$, then $\rho(M^{-1}N) < 1$ if and only if $A$ is an M-matrix.

    In this section, the NRATMMS iteration method for solving the LCP(A,q) (1.1) is developed, and the general convergence analysis of the NRATMMS iteration method will be explored.

Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of $A$ and $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$ with $\Omega_i$ ($i = 1, 2, 3, 4$) being nonnegative diagonal matrices. Then (1.2) can be reformulated as the fixed-point equation

$(\Omega_1 + M_1)z = (N_1 + \Omega_2)\big[\theta z + (1 - \theta)z\big] + |(M_2 - \Omega_3)z + (\Omega_4 - N_2)z + q| - q, \qquad (3.1)$

where $\theta \ge 0$ is a relaxation parameter. Based on (3.1), the NRATMMS iteration method is established as the following Algorithm 3.1.

Algorithm 3.1. (The NRATMMS iteration method) Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of $A$ and $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$ with $\Omega_i$ ($i = 1, 2, 3, 4$) being nonnegative diagonal matrices such that $\Omega_1 + M_1$ is nonsingular. Given two initial guesses $z^{0}, z^{1} \in \mathbb{R}^{n}$ and a nonnegative relaxation parameter $\theta$, the iteration sequence $\{z^{k}\}$ is generated by

$(\Omega_1 + M_1)z^{k+1} = (N_1 + \Omega_2)\big[\theta z^{k} + (1 - \theta)z^{k-1}\big] + |(M_2 - \Omega_3)z^{k} + (\Omega_4 - N_2)z^{k-1} + q| - q \qquad (3.2)$

for $k = 1, 2, \ldots$ until convergence.

Algorithm 3.1 provides a general framework of NMMS iteration methods for solving the LCP($A,q$) (1.1), and it yields a series of NMMS-type iteration methods for suitable choices of the matrix splittings and the relaxation parameter. For instance, when $\theta = 1$ and $\Omega_i = 0$ ($i = 1, 2, 3, 4$), Algorithm 3.1 reduces to the new accelerated two-sweep MMS (NATMMS) iteration method

$M_1 z^{k+1} = N_1 z^{k} + |M_2 z^{k} - N_2 z^{k-1} + q| - q.$

When $\theta = 1$, $\Omega_1 = \Omega_3 = \Omega$, $\Omega_2 = \Omega_4 = 0$, $M_2 = A$ and $N_2 = 0$, Algorithm 3.1 reduces to Algorithm 1.2. When $M_1 = \frac{1}{\alpha}(D_A - \beta L_A)$, $N_1 = \frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big]$, $M_2 = D_A - U_A$ and $N_2 = L_A$ with $\alpha, \beta > 0$, Algorithm 3.1 gives the new relaxed acceleration two-sweep MAOR (NRATMAOR) iteration method. If $(\alpha, \beta)$ equals $(\alpha, \alpha)$, $(1, 1)$ and $(1, 0)$, the NRATMAOR iteration method reduces to the new relaxed acceleration two-sweep MSOR (NRATMSOR), the new relaxed acceleration two-sweep MGS (NRATMGS) and the new relaxed acceleration two-sweep MJ (NRATMJ) iteration methods, respectively.
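As an illustration, the following sketch implements Algorithm 3.1 with the NRATMGS choices that are also used in Section 4 ($\Omega_1 = \Omega_3 = D_A$, $\Omega_2 = \Omega_4 = 0$, $M_2 = A$, $N_2 = 0$, $\alpha = \beta = 1$, i.e., $M_1 = D_A - L_A$, $N_1 = U_A$); these parameter choices are assumptions taken from the experiments, and $\theta$ is the relaxation parameter:

```python
import numpy as np

def nratmgs(A, q, theta=1.0, tol=1e-6, max_iter=500):
    """Sketch of Algorithm 3.1 with the NRATMGS parameter choices.

    With M2 = A, Omega3 = Omega and Omega4 = N2 = 0, the modulus term
    is |(A - Omega) z^k + q|: the z^{k-1} contribution vanishes there.
    """
    n = A.shape[0]
    Omega = np.diag(np.diag(A))
    M1 = np.tril(A)              # D_A - L_A
    N1 = M1 - A                  # U_A, so that A = M1 - N1
    z_old = np.zeros(n)          # z^{k-1}
    z = np.zeros(n)              # z^k
    for _ in range(max_iter):
        rhs = N1 @ (theta * z + (1.0 - theta) * z_old) \
              + np.abs((A - Omega) @ z + q) - q
        z_old, z = z, np.linalg.solve(Omega + M1, rhs)
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:
            break
    return z
```

Setting `theta=1.0` recovers the NMMS/NMGS special case; moderately larger values of θ are what the experiments in Section 4 exploit.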

The convergence analysis for Algorithm 3.1 is carried out with the system matrix $A$ of the LCP($A,q$) (1.1) being an H+-matrix.

Lemma 8. Assume that $A \in \mathbb{R}^{n \times n}$ is an H+-matrix. Let $A = M_1 - N_1$ and $A = M_2 - N_2$ be an H-splitting and a general splitting of $A$, respectively, and let $\Omega_i$ ($i = 1, 2, 3, 4$) be four nonnegative diagonal matrices such that $\Omega_1 + M_1$ is nonsingular. Denote

$\widetilde{L} = \langle \Omega_1 + M_1 \rangle^{-1}\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big].$

Then the iteration sequence $\{z^{k}\}$ generated by Algorithm 3.1 converges to the unique solution $z^{*}$ for any two initial vectors if $\rho(\widetilde{L}) < 1$.

Proof. Let $z^{*}$ be the exact solution of the LCP($A,q$) (1.1); then it satisfies

$(\Omega_1 + M_1)z^{*} = (N_1 + \Omega_2)\big[\theta z^{*} + (1 - \theta)z^{*}\big] + |(M_2 - \Omega_3)z^{*} + (\Omega_4 - N_2)z^{*} + q| - q. \qquad (3.3)$

    Subtracting (3.3) from (3.2), we have

$\begin{aligned} |z^{k+1} - z^{*}| &= \big|(\Omega_1 + M_1)^{-1}(N_1 + \Omega_2)\big[\theta(z^{k} - z^{*}) + (1 - \theta)(z^{k-1} - z^{*})\big] \\ &\quad + (\Omega_1 + M_1)^{-1}|(M_2 - \Omega_3)z^{k} + (\Omega_4 - N_2)z^{k-1} + q| - (\Omega_1 + M_1)^{-1}|(M_2 - \Omega_3)z^{*} + (\Omega_4 - N_2)z^{*} + q|\big| \\ &\le |(\Omega_1 + M_1)^{-1}||N_1 + \Omega_2||\theta(z^{k} - z^{*}) + (1 - \theta)(z^{k-1} - z^{*})| \\ &\quad + |(\Omega_1 + M_1)^{-1}|\,\big||(M_2 - \Omega_3)z^{k} + (\Omega_4 - N_2)z^{k-1} + q| - |(M_2 - \Omega_3)z^{*} + (\Omega_4 - N_2)z^{*} + q|\big| \\ &\le |(\Omega_1 + M_1)^{-1}||N_1 + \Omega_2|\big[\theta|z^{k} - z^{*}| + |1 - \theta||z^{k-1} - z^{*}|\big] \\ &\quad + |(\Omega_1 + M_1)^{-1}||(M_2 - \Omega_3)(z^{k} - z^{*}) + (\Omega_4 - N_2)(z^{k-1} - z^{*})| \\ &\le |(\Omega_1 + M_1)^{-1}||N_1 + \Omega_2|\big[\theta|z^{k} - z^{*}| + |1 - \theta||z^{k-1} - z^{*}|\big] \\ &\quad + |(\Omega_1 + M_1)^{-1}|\big[|M_2 - \Omega_3||z^{k} - z^{*}| + |\Omega_4 - N_2||z^{k-1} - z^{*}|\big] \\ &= |(\Omega_1 + M_1)^{-1}|\big[\theta|N_1 + \Omega_2| + |M_2 - \Omega_3|\big]|z^{k} - z^{*}| + |(\Omega_1 + M_1)^{-1}|\big[|1 - \theta||N_1 + \Omega_2| + |\Omega_4 - N_2|\big]|z^{k-1} - z^{*}|. \end{aligned}$

For simplicity, let

$F = |(\Omega_1 + M_1)^{-1}|\big[\theta|N_1 + \Omega_2| + |M_2 - \Omega_3|\big] \qquad (3.4)$

and

$G = |(\Omega_1 + M_1)^{-1}|\big[|1 - \theta||N_1 + \Omega_2| + |\Omega_4 - N_2|\big]. \qquad (3.5)$

    Then we have

$\begin{pmatrix} |z^{k+1} - z^{*}| \\ |z^{k} - z^{*}| \end{pmatrix} \le \begin{pmatrix} F & G \\ I & 0 \end{pmatrix} \begin{pmatrix} |z^{k} - z^{*}| \\ |z^{k-1} - z^{*}| \end{pmatrix}.$

    Let

$L = \begin{pmatrix} F & G \\ I & 0 \end{pmatrix},$

then the iteration sequence $\{z^{k}\}$ converges to the unique solution $z^{*}$ if $\rho(L) < 1$. Since $L \ge 0$, according to Lemma 6, $\rho(F + G) < 1$ implies $\rho(L) < 1$. To prove the convergence of Algorithm 3.1, it therefore suffices to prove $\rho(F + G) < 1$.

Since $A$ is an H+-matrix and $A = M_1 - N_1$ is an H-splitting of $A$, i.e., $\langle M_1 \rangle - |N_1|$ is an M-matrix, Lemma 5 together with $\langle M_1 \rangle \ge \langle M_1 \rangle - |N_1|$ implies that $M_1$ is an H-matrix, and hence $\Omega_1 + M_1$ is also an H-matrix. In the light of Lemma 3, it follows that

$0 \le |(\Omega_1 + M_1)^{-1}| \le \langle \Omega_1 + M_1 \rangle^{-1}.$

Recalling (3.4) and (3.5), we obtain

$F + G = |(\Omega_1 + M_1)^{-1}|\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big],$

which yields

$0 \le F + G \le \langle \Omega_1 + M_1 \rangle^{-1}\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big] := \widetilde{L}.$

As a consequence, by the monotonicity of the spectral radius on nonnegative matrices, the iteration sequence $\{z^{k}\}$ generated by Algorithm 3.1 converges to the unique solution $z^{*}$ of the LCP($A,q$) (1.1) if $\rho(\widetilde{L}) < 1$. The proof is completed.
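The bound $\widetilde{L}$ of Lemma 8 is directly computable for a concrete splitting. The sketch below assembles $\widetilde{L}$ for the NRATMGS choices ($\Omega_1 = \Omega_3 = D_A$, $\Omega_2 = \Omega_4 = 0$, $M_1 = D_A - L_A$, $N_1 = U_A$, $M_2 = A$, $N_2 = 0$; illustrative assumptions) and evaluates $\rho(\widetilde{L})$; this is a numerical illustration, not part of the proof:

```python
import numpy as np

def lemma8_radius(A, theta=1.0):
    """rho(tilde L) of Lemma 8 for the NRATMGS splitting choices.

    With Omega2 = Omega4 = 0 and N2 = 0: |N1 + Omega2| = |N1|,
    |M2 - Omega3| = |A - D_A| and |Omega4 - N2| = 0.
    """
    D = np.diag(np.diag(A))
    M1 = np.tril(A)                       # D_A - L_A
    N1 = M1 - A                           # U_A
    W = D + M1                            # Omega1 + M1
    comp = -np.abs(W)                     # comparison matrix <Omega1 + M1>
    np.fill_diagonal(comp, np.abs(np.diag(W)))
    T = (theta + abs(1.0 - theta)) * np.abs(N1) + np.abs(A - D)
    L_tilde = np.linalg.solve(comp, T)
    return np.max(np.abs(np.linalg.eigvals(L_tilde)))
```

For a strictly diagonally dominant tridiagonal H+-matrix, the radius stays below one for θ near 1, consistent with the convergence observed in Section 4.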

Theorem 3.1. Assume that $A \in \mathbb{R}^{n \times n}$ is an H+-matrix. Let $A = M_1 - N_1$ be an H-compatible splitting and $A = M_2 - N_2$ be an M-splitting of $A$, and let $\Omega_i$ ($i = 1, 2, 3, 4$) be four nonnegative diagonal matrices such that $\Omega_1 + M_1$ is nonsingular. Denote

$\widetilde{L} = \langle \Omega_1 + M_1 \rangle^{-1}\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big].$

Then the iteration sequence $\{z^{k}\}$ generated by Algorithm 3.1 converges to the unique solution $z^{*}$ of the LCP($A,q$) (1.1) for any two initial vectors if one of the following two conditions holds.

(i) $0 < \theta \le 1$ and $\Omega_i$ ($i = 1, 2, 3, 4$) satisfy

$\begin{cases} \langle A \rangle Ve > \Omega_4 Ve, & \text{if } \Omega_3 \ge D_{M_2}, \\ (\langle A \rangle + \Omega)Ve > D_{M_2}Ve, & \text{if } \Omega_3 < D_{M_2}. \end{cases} \qquad (3.6)$

(ii) $\theta > 1$ and $\Omega_i$ ($i = 1, 2, 3, 4$) satisfy

$\begin{cases} \theta < 1 + \min\limits_{1 \le i \le n} \dfrac{[(\langle A \rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} \ \text{ and } \ \dfrac{[(\langle A \rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0, & \text{if } \Omega_3 \ge D_{M_2}, \\[2mm] \theta < 1 + \min\limits_{1 \le i \le n} \dfrac{[(\langle A \rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} \ \text{ and } \ \dfrac{[(\langle A \rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0, & \text{if } \Omega_3 < D_{M_2}. \end{cases} \qquad (3.7)$

Here, $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$ and $V$ is an arbitrary positive diagonal matrix such that $(\Omega_1 + \langle M_1 \rangle)V$ is strictly diagonally dominant.

Proof. According to Lemma 8, we only need to show $\rho(\widetilde{L}) < 1$. On the basis of Lemma 2 and Lemma 4, it follows that

$\rho(\widetilde{L}) = \rho(V^{-1}\widetilde{L}V) \le \|V^{-1}\widetilde{L}V\|_{\infty} = \Big\|\big[(\Omega_1 + \langle M_1 \rangle)V\big]^{-1}\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]V\Big\|_{\infty} \le \max_{1 \le i \le n} \frac{\big\{\big[(\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve\big\}_i}{[(\Omega_1 + \langle M_1 \rangle)Ve]_i}.$

When $0 < \theta \le 1$, it holds that

$\rho(\widetilde{L}) \le \max_{1 \le i \le n} \frac{\big\{\big[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve\big\}_i}{[(\Omega_1 + \langle M_1 \rangle)Ve]_i}. \qquad (3.8)$

Since $A = M_2 - N_2$ is an M-splitting of $A$, $M_2$ is an M-matrix. Let $M_2 = D_{M_2} - B_{M_2}$ be the splitting of $M_2$ into its (positive) diagonal part $D_{M_2}$ and its off-diagonal part $-B_{M_2}$.

If $\Omega_3 \ge D_{M_2}$, it can be concluded that

$\begin{aligned} &(\Omega_1 + \langle M_1 \rangle)Ve - \big[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1 + \Omega_2| - |\Omega_3 - D_{M_2} + B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - |N_1 + \Omega_2| - |\Omega_3 - D_{M_2}| - |B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - |N_1| - \Omega_2 - \Omega_3 + D_{M_2} - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1| - \Omega_2 - \Omega_3 + \langle M_2 \rangle - \Omega_4 - |N_2|\big)Ve \\ &= \big(2\langle A \rangle + \Omega_1 - \Omega_2 - \Omega_3 - \Omega_4\big)Ve = \big(2\langle A \rangle - 2\Omega_4\big)Ve. \qquad (3.9) \end{aligned}$

If $\Omega_3 < D_{M_2}$, we get

$\begin{aligned} &(\Omega_1 + \langle M_1 \rangle)Ve - \big[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1 + \Omega_2| - |D_{M_2} - \Omega_3 - B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - |N_1 + \Omega_2| - |D_{M_2} - \Omega_3| - |B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - |N_1| - \Omega_2 + \Omega_3 - 2D_{M_2} + D_{M_2} - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1| - \Omega_2 + \Omega_3 - 2D_{M_2} + \langle M_2 \rangle - \Omega_4 - |N_2|\big)Ve \\ &= \big(2\langle A \rangle - 2D_{M_2} + \Omega_1 - \Omega_2 + \Omega_3 - \Omega_4\big)Ve = \big(2\langle A \rangle - 2D_{M_2} + 2\Omega\big)Ve. \qquad (3.10) \end{aligned}$

According to (3.8), (3.9) and (3.10), we have $\rho(\widetilde{L}) < 1$ if (3.6) holds.

When $\theta > 1$, it follows that

$\rho(\widetilde{L}) \le \max_{1 \le i \le n} \frac{\big\{\big[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve\big\}_i}{[(\Omega_1 + \langle M_1 \rangle)Ve]_i}. \qquad (3.11)$

If $\Omega_3 \ge D_{M_2}$, it can be derived that

$\begin{aligned} &(\Omega_1 + \langle M_1 \rangle)Ve - \big[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1 + \Omega_2| - |\Omega_3 - D_{M_2} + B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - |\Omega_3 - D_{M_2}| - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - \Omega_3 + D_{M_2} - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1| - 2(\theta - 1)|N_1| - 2(\theta - 1)\Omega_2 - \Omega_2 - \Omega_3 + \langle M_2 \rangle - \Omega_4 - |N_2|\big)Ve \\ &= \big(2\langle A \rangle - 2\Omega_4 - 2(\theta - 1)(|N_1| + \Omega_2)\big)Ve, \end{aligned}$

from which we have

$\big[2\langle A \rangle - 2\Omega_4 - 2(\theta - 1)(|N_1| + \Omega_2)\big]Ve > 0 \qquad (3.12)$

provided that $1 < \theta < 1 + \min\limits_{1 \le i \le n} \dfrac{[(\langle A \rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}$ and $\dfrac{[(\langle A \rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0$ ($i = 1, 2, \ldots, n$).

If $\Omega_3 < D_{M_2}$, it is implied that

$\begin{aligned} &(\Omega_1 + \langle M_1 \rangle)Ve - \big[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\big]Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1 + \Omega_2| - |D_{M_2} - \Omega_3 - B_{M_2}| - |\Omega_4 - N_2|\big)Ve \\ &\ge \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - |D_{M_2} - \Omega_3| - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - D_{M_2} + \Omega_3 - |B_{M_2}| - \Omega_4 - |N_2|\big)Ve \\ &= \big(\Omega_1 + \langle M_1 \rangle - |N_1| - 2(\theta - 1)|N_1| - 2(\theta - 1)\Omega_2 - 2D_{M_2} - \Omega_2 + \Omega_3 + \langle M_2 \rangle - \Omega_4 - |N_2|\big)Ve \\ &= \big(2\langle A \rangle + 2\Omega - 2D_{M_2} - 2(\theta - 1)(|N_1| + \Omega_2)\big)Ve, \end{aligned}$

from which we have

$\big[2\langle A \rangle + 2\Omega - 2D_{M_2} - 2(\theta - 1)(|N_1| + \Omega_2)\big]Ve > 0 \qquad (3.13)$

provided that $1 < \theta < 1 + \min\limits_{1 \le i \le n} \dfrac{[(\langle A \rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}$ and $\dfrac{[(\langle A \rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0$ ($i = 1, 2, \ldots, n$).

According to (3.11), (3.12) and (3.13), we have $\rho(\widetilde{L}) < 1$ if (3.7) holds.

Theorem 3.2. Assume that $A \in \mathbb{R}^{n \times n}$ is an H+-matrix. Let $\varrho = \rho(D_A^{-1}|B_A|)$, where $B_A = L_A + U_A$. Assume that the four nonnegative diagonal matrices $\Omega_i$ ($i = 1, 2, 3, 4$) and the three positive parameters $\alpha, \beta, \theta$ are chosen such that $\Omega_1 + M_1$ is nonsingular and either $\Omega_3 \le D_A \le \min\{2\Omega,\ 2\Omega - 2(\theta - 1)\Omega_2\}$ or $\max\{2\Omega_4,\ 2\Omega_4 + 2(\theta - 1)\Omega_2\} \le D_A < \Omega_3$. Then the NRATMAOR iteration method converges for any two initial vectors if one of the following eight conditions holds:

(i) $0 < \theta \le 1$, $0 < \alpha \le 1$, $0 < \beta \le \alpha$, $\varrho < \frac{1}{2}$;

(ii) $0 < \theta \le 1$, $1 < \alpha < 2$, $0 < \beta \le \alpha$, $\varrho < \frac{2 - \alpha}{2\alpha}$;

(iii) $0 < \theta \le 1$, $0 < \alpha \le 1$, $0 < \alpha \le \beta$, $\varrho < \frac{\alpha}{2\beta}$;

(iv) $0 < \theta \le 1$, $1 < \alpha < 2$, $0 < \alpha \le \beta$, $\varrho < \frac{2 - \alpha}{2\beta}$;

(v) $1 < \theta < \frac{2 - \alpha}{2\alpha\varrho - 2\alpha + 2}$, $\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1$, $0 < \beta \le \alpha$, $\varrho < \frac{1}{2}$;

(vi) $1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}$, $1 < \alpha < \frac{2\theta}{2\theta - 1}$, $0 < \beta \le \alpha$, $\varrho < \frac{2 - \alpha}{2\alpha}$;

(vii) $1 < \theta < \frac{2 - \alpha}{2\beta\varrho - 2\alpha + 2}$, $\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1$, $0 < \alpha \le \beta$, $\varrho < \frac{\alpha}{2\beta}$;

(viii) $1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}$, $1 < \alpha < \frac{2\theta}{2\theta - 1}$, $0 < \alpha \le \beta$, $\varrho < \frac{2 - \alpha}{2\beta}$.

Proof. For the NRATMAOR iteration method, we have $A = M_1 - N_1 = M_2 - N_2$ with

$M_1 = \frac{1}{\alpha}(D_A - \beta L_A), \quad N_1 = \frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big] \qquad (3.14)$

and

$M_2 = D_A - U_A, \quad N_2 = L_A,$

where $\alpha, \beta > 0$ are parameters. In order to use the result of Lemma 8, we need $A = M_1 - N_1$ to be an H-splitting of $A$. Since $A$ is an H+-matrix, we have $D_A > 0$. It follows from (3.14) that

$\begin{aligned} \langle M_1 \rangle - |N_1| &= \Big\langle \frac{1}{\alpha}(D_A - \beta L_A) \Big\rangle - \Big|\frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big]\Big| \\ &= \frac{1}{\alpha}\big(D_A - \beta|L_A|\big) - \frac{1}{\alpha}\big[|1 - \alpha|D_A + |\alpha - \beta||L_A| + \alpha|U_A|\big] \\ &= \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{\beta + |\alpha - \beta|}{\alpha}|L_A| - |U_A| := S. \end{aligned}$

If $0 < \beta \le \alpha$, then

$S = \frac{1 - |1 - \alpha|}{\alpha}D_A - |L_A| - |U_A| = \frac{1 - |1 - \alpha|}{\alpha}D_A - |B_A|,$

and it follows from Lemma 7 that $S$ is an M-matrix if

$1 - |1 - \alpha| > 0 \quad \text{and} \quad \varrho < \frac{1 - |1 - \alpha|}{\alpha},$

which is satisfied if

$0 < \alpha \le 1 \quad \text{and} \quad \varrho < 1$

or

$1 < \alpha < 2 \quad \text{and} \quad \varrho < \frac{2 - \alpha}{\alpha}.$

If $0 < \alpha \le \beta$, then

$S \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|L_A| - |U_A| \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|B_A| := \bar{S}. \qquad (3.15)$

It follows from Lemma 7 that $\bar{S}$ is an M-matrix if

$1 - |1 - \alpha| > 0 \quad \text{and} \quad \varrho < \frac{1 - |1 - \alpha|}{2\beta},$

which is satisfied if

$0 < \alpha \le 1 \quad \text{and} \quad \varrho < \frac{\alpha}{2\beta} \qquad (3.16)$

or

$1 < \alpha < 2 \quad \text{and} \quad \varrho < \frac{2 - \alpha}{2\beta}. \qquad (3.17)$

    In this case, since S is a Z-matrix, it follows from Lemma 5 and (3.15) that S is an M-matrix if (3.16) or (3.17) holds.

In conclusion, $A = M_1 - N_1$ is an H-splitting of $A$ (or, equivalently, $S$ is an M-matrix) if one of the following four conditions holds:

$0 < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad \varrho < 1, \qquad (3.18)$

$1 < \alpha < 2, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{2 - \alpha}{\alpha}, \qquad (3.19)$

$0 < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{\alpha}{2\beta} \qquad (3.20)$

or

$1 < \alpha < 2, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{2 - \alpha}{2\beta}. \qquad (3.21)$

In the following, let $\hat{A} = \hat{M} - \hat{N}$ with $\hat{M} = \Omega_1 + \langle M_1 \rangle$ and $\hat{N} = (\theta + |1 - \theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|$, so that $\widetilde{L} = \hat{M}^{-1}\hat{N}$. In order to prove the convergence of the NRATMAOR iteration method, based on Lemma 8, it suffices to prove $\rho(\widetilde{L}) < 1$ provided that $A = M_1 - N_1$ is an H-splitting of $A$.

Since

$\hat{M} = \Omega_1 + \Big\langle \frac{1}{\alpha}(D_A - \beta L_A) \Big\rangle = \Omega_1 + \frac{1}{\alpha}\big(D_A - \beta|L_A|\big)$

is a lower-triangular matrix with positive diagonal entries and nonpositive off-diagonal entries, it is an M-matrix. In addition, $\hat{N} \ge 0$. According to Lemma 7, if $\hat{A}$ is an M-matrix, then $\rho(\widetilde{L}) < 1$. Thus, we prove in the following that the Z-matrix $\hat{A}$ is an M-matrix.

Case I: $0 < \theta \le 1$. In this case, we have

$\begin{aligned} \hat{N} &= |N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2| \\ &= \Big|\frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big] + \Omega_2\Big| + |D_A - \Omega_3 - U_A| + |\Omega_4 - L_A| \\ &\le \frac{|1 - \alpha|}{\alpha}D_A + \frac{\alpha + |\alpha - \beta|}{\alpha}|L_A| + 2|U_A| + \Omega_2 + \Omega_4 + |D_A - \Omega_3| := \widetilde{P}, \end{aligned}$

from which we have

$\begin{aligned} \hat{A} = \hat{M} - \hat{N} &\ge \hat{M} - \widetilde{P} = \Omega_1 + \frac{1}{\alpha}\big(D_A - \beta|L_A|\big) - \frac{|1 - \alpha|}{\alpha}D_A - \frac{\alpha + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A| - \Omega_2 - \Omega_4 - |D_A - \Omega_3| \\ &= \big(\Omega_3 - 2\Omega_4 - |D_A - \Omega_3|\big) + \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{\alpha + \beta + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A|. \qquad (3.22) \end{aligned}$

It is easy to verify that the first term of (3.22) is nonnegative if

$\Omega_3 \le D_A \le 2\Omega \qquad (3.23)$

or

$2\Omega_4 \le D_A < \Omega_3. \qquad (3.24)$

Then it follows from (3.22) that

$\hat{A} \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{\alpha + \beta + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A|. \qquad (3.25)$

(i) If $0 < \beta \le \alpha$, then it can be deduced from (3.25) that

$\hat{A} \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - 2|L_A| - 2|U_A| = \frac{1 - |1 - \alpha|}{\alpha}D_A - 2|B_A| := T,$

from which and Lemma 5 we obtain that $\hat{A}$ is an M-matrix whenever $T$ is. It follows from Lemma 7 that $T$ is an M-matrix if

$1 - |1 - \alpha| > 0 \quad \text{and} \quad \varrho < \frac{1 - |1 - \alpha|}{2\alpha},$

which is satisfied if

$0 < \alpha \le 1 \quad \text{and} \quad \varrho < \frac{1}{2}$

or

$1 < \alpha < 2 \quad \text{and} \quad \varrho < \frac{2 - \alpha}{2\alpha}.$

(ii) If $0 < \alpha \le \beta$, it can be deduced from (3.25) that

$\hat{A} \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|L_A| - 2|U_A| \ge \frac{1 - |1 - \alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|B_A| = \bar{S},$

which is an M-matrix if (3.16) or (3.17) holds.

In Case I, it can be concluded from (i) and (ii) that $\hat{A}$ is an M-matrix if one of the following four conditions holds:

$0 < \theta \le 1, \quad 0 < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{1}{2}, \qquad (3.26)$

$0 < \theta \le 1, \quad 1 < \alpha < 2, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{2 - \alpha}{2\alpha}, \qquad (3.27)$

$0 < \theta \le 1, \quad 0 < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{\alpha}{2\beta} \qquad (3.28)$

or

$0 < \theta \le 1, \quad 1 < \alpha < 2, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{2 - \alpha}{2\beta}. \qquad (3.29)$

Case II: $\theta > 1$. In this case, we have

$\begin{aligned} \hat{N} &= (2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2| \\ &= (2\theta - 1)\Big|\frac{1}{\alpha}\big[(1 - \alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\big] + \Omega_2\Big| + |D_A - \Omega_3 - U_A| + |\Omega_4 - L_A| \\ &\le \frac{(2\theta - 1)|1 - \alpha|}{\alpha}D_A + \frac{\alpha + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| + 2\theta|U_A| + (2\theta - 1)\Omega_2 + \Omega_4 + |D_A - \Omega_3| := \widetilde{N}, \end{aligned}$

from which we obtain

$\begin{aligned} \hat{A} = \hat{M} - \hat{N} &\ge \hat{M} - \widetilde{N} \\ &= \big(\Omega_3 - 2\Omega_4 - 2(\theta - 1)\Omega_2 - |D_A - \Omega_3|\big) + \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - \frac{\alpha + \beta + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| - 2\theta|U_A|. \qquad (3.30) \end{aligned}$

The first term of (3.30) is nonnegative if

$\Omega_3 \le D_A \le 2\Omega - 2(\theta - 1)\Omega_2 \ (\le 2\Omega) \qquad (3.31)$

or

$(2\Omega_4 \le)\ 2\Omega_4 + 2(\theta - 1)\Omega_2 \le D_A < \Omega_3. \qquad (3.32)$

Then it follows from (3.30) that

$\hat{A} \ge \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - \frac{\alpha + \beta + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| - 2\theta|U_A|. \qquad (3.33)$

(a) If $0 < \beta \le \alpha$, then it follows from (3.33) that

$\hat{A} \ge \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - 2\theta|B_A| := R,$

from which and Lemma 5 we find that $\hat{A}$ is an M-matrix whenever $R$ is. It follows from Lemma 7 that $R$ is an M-matrix if

$1 - (2\theta - 1)|1 - \alpha| > 0 \quad \text{and} \quad \varrho < \frac{1 - (2\theta - 1)|1 - \alpha|}{2\theta\alpha},$

which is satisfied if

$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 1 < \theta < \frac{2 - \alpha}{2\alpha\varrho - 2\alpha + 2}, \quad \varrho < \frac{1}{2}$

or

$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}, \quad \varrho < \frac{2 - \alpha}{2\alpha}.$

(b) If $0 < \alpha \le \beta$, then

$\begin{aligned} \hat{A} &\ge \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - \frac{2\theta\beta - 2\alpha(\theta - 1)}{\alpha}|L_A| - 2\theta|U_A| \\ &= \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - \Big(\frac{2\theta\beta}{\alpha}|L_A| + 2\theta|U_A|\Big) + 2(\theta - 1)|L_A| \\ &\ge \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - 2\theta\Big(\frac{\beta}{\alpha}|L_A| + |U_A|\Big) \ge \frac{1 - (2\theta - 1)|1 - \alpha|}{\alpha}D_A - \frac{2\theta\beta}{\alpha}|B_A| := \widetilde{R}, \end{aligned}$

from which and Lemma 5 we find that $\hat{A}$ is an M-matrix whenever $\widetilde{R}$ is. It follows from Lemma 7 that $\widetilde{R}$ is an M-matrix if

$1 - (2\theta - 1)|1 - \alpha| > 0 \quad \text{and} \quad \varrho < \frac{1 - (2\theta - 1)|1 - \alpha|}{2\theta\beta},$

which is satisfied if

$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 1 < \theta < \frac{2 - \alpha}{2\beta\varrho - 2\alpha + 2}, \quad \varrho < \frac{\alpha}{2\beta}$

or

$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}, \quad \varrho < \frac{2 - \alpha}{2\beta}.$

In Case II, it can be concluded from (a) and (b) that $\hat{A}$ is an M-matrix if one of the following four conditions holds:

$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad 1 < \theta < \frac{2 - \alpha}{2\alpha\varrho - 2\alpha + 2}, \quad \varrho < \frac{1}{2}, \qquad (3.34)$

$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 0 < \beta \le \alpha, \quad 1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}, \quad \varrho < \frac{2 - \alpha}{2\alpha}, \qquad (3.35)$

$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad 1 < \theta < \frac{2 - \alpha}{2\beta\varrho - 2\alpha + 2}, \quad \varrho < \frac{\alpha}{2\beta} \qquad (3.36)$

or

$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 0 < \alpha \le \beta, \quad 1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}, \quad \varrho < \frac{2 - \alpha}{2\beta}. \qquad (3.37)$

    The proof is completed by combining (3.18)–(3.21), (3.23), (3.24), (3.26)–(3.29), (3.31), (3.32) and (3.34)–(3.37).

In this section, three numerical examples are presented to validate the effectiveness of the NRATMMS iteration method.

All tests are conducted in MATLAB R2016a on a personal computer with a 1.19 GHz central processing unit (Intel(R) Core(TM) i5-1035U), 8.00 GB memory and the Windows 10 operating system. In the numerical results, we report the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted by "CPU") and the norm of the absolute residual vector (denoted by "RES"). Here, RES is defined by

$\mathrm{RES}(z^{k}) := \|\min\{Az^{k} + q,\ z^{k}\}\|_{2},$

where the minimum is taken componentwise.

As in [44], the following three examples are used.

Example 4.1. ([20]) Consider the LCP($A,q$), where $A = \hat{A} + \mu I_{m^2}$ ($\mu \ge 0$) with

$\hat{A} = \mathrm{Tridiag}(-I_m, S_m, -I_m) \in \mathbb{R}^{m^2 \times m^2}, \quad S_m = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m \times m},$

and $q = -Az^{*} \in \mathbb{R}^{m^2}$, where $z^{*} = (1, 2, 1, 2, \ldots, 1, 2, \ldots)^{\top}$ is the unique solution of the LCP($A,q$) (1.1).

Example 4.2. ([20]) Consider the LCP($A,q$), where $A = \hat{A} + \mu I_{m^2}$ ($\mu \ge 0$) with

$\hat{A} = \mathrm{Tridiag}(-1.5I_m, S_m, -0.5I_m) \in \mathbb{R}^{m^2 \times m^2}, \quad S_m = \mathrm{tridiag}(-1.5, 4, -0.5) \in \mathbb{R}^{m \times m},$

and $q = -Az^{*} \in \mathbb{R}^{m^2}$, where $z^{*} = (1, 2, 1, 2, \ldots, 1, 2, \ldots)^{\top}$ is the unique solution of the LCP($A,q$) (1.1).
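The test matrices of Examples 4.1 and 4.2 are conveniently assembled with Kronecker products. The sketch below follows the reconstructed definitions (the sign conventions are assumptions consistent with the block-tridiagonal structure above):

```python
import numpy as np

def example_data(m, mu=0.0, symmetric=True):
    """Build A = hat{A} + mu*I of Example 4.1 (symmetric=True) or
    Example 4.2 (symmetric=False), together with q = -A z* for
    z* = (1, 2, 1, 2, ...)^T."""
    if symmetric:                        # Example 4.1
        S = 4*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        T = -np.eye(m, k=1) - np.eye(m, k=-1)
    else:                                # Example 4.2
        S = 4*np.eye(m) - 0.5*np.eye(m, k=1) - 1.5*np.eye(m, k=-1)
        T = -0.5*np.eye(m, k=1) - 1.5*np.eye(m, k=-1)
    # Tridiag(*, S, *) = I (x) S + T (x) I, plus the shift mu*I
    A = np.kron(np.eye(m), S) + np.kron(T, np.eye(m)) + mu*np.eye(m*m)
    z_star = np.array([1.0 if i % 2 == 0 else 2.0 for i in range(m*m)])
    q = -A @ z_star
    return A, q, z_star
```

Since $Az^{*} + q = 0 \ge 0$ and $z^{*} > 0$, the vector $z^{*}$ indeed solves the LCP($A,q$).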

Example 4.3. (Black-Scholes American option pricing). In [50], the original Black-Scholes-Merton model is transformed into the partial differential complementarity system

$\Big(\frac{\partial y}{\partial \tau} - \frac{\partial^2 y}{\partial x^2}\Big)\big(y(x,\tau) - g(x,\tau)\big) = 0, \quad \frac{\partial y}{\partial \tau} - \frac{\partial^2 y}{\partial x^2} \ge 0, \quad y(x,\tau) - g(x,\tau) \ge 0. \qquad (4.1)$

The initial and boundary conditions of the American put option price $y(x,\tau)$ become $y(x,0) = g(x,0)$ and $\lim_{x \to \pm\infty} y(x,\tau) = \lim_{x \to \pm\infty} g(x,\tau)$, where $g(x,\tau)$ is the transformed payoff function. For the price $x \in [a,b]$, $(a,b)$ is taken as $(-1.5, 1.5)$, $(-2, 2)$ or $(-4, 4)$. Let $\vartheta$ and $\eta$ be the number of time steps and the number of $x$-nodes, and let $\sigma$ and $T$ be the fixed volatility and the expiry time. According to [50], discretizing (4.1) yields the LCP

$w := Au - d \ge 0, \quad u - g \ge 0 \quad \text{and} \quad w^{\top}(u - g) = 0, \qquad (4.2)$

with $A = \mathrm{tridiag}(-\tau, 1 + 2\tau, -\tau)$ and $\tau = \frac{\Delta t}{(\Delta x)^2}$, where $\Delta t = \frac{0.5\sigma^2 T}{\vartheta}$ and $\Delta x = \frac{b - a}{\eta}$ denote the time step and the price step, respectively. Employing the transformation $z = u - g$ and $q = Ag - d$ in (4.2), the American option pricing problem can be rewritten as the LCP (1.1). In our numerical computations, we take $g = 0.5z^{*}$ and $z^{*} = (1, 0, 1, 0, \ldots, 1, 0, \ldots)^{\top}$. The vector $d$ is adjusted such that $d = Az^{*} - w^{*}$, where $w^{*} = (0, 1, 0, 1, \ldots, 0, 1, \ldots)^{\top}$. The involved parameters are detailed in Table 3.
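Under this discretization, the LCP data can be sketched as follows (the scaling of Δt and the use of one unknown per x-node are assumptions consistent with the reconstructed description; e.g., for the grid (η, ϑ) = (4000, 2000) with σ = 0.2, T = 0.5 and (a, b) = (−1.5, 1.5) this reproduces τ ≈ 8.8889 as in Table 3):

```python
import numpy as np

def bs_lcp_data(eta, vartheta, sigma=0.2, T=0.5, a=-1.5, b=1.5):
    """Build A = tridiag(-tau, 1+2*tau, -tau) and the vectors of Example 4.3."""
    dt = 0.5 * sigma**2 * T / vartheta   # time step
    dx = (b - a) / eta                   # price step
    tau = dt / dx**2
    n = eta                              # one unknown per x-node (assumption)
    A = ((1 + 2*tau) * np.eye(n)
         - tau * np.eye(n, k=1) - tau * np.eye(n, k=-1))
    z_star = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(n)])
    w_star = 1.0 - z_star                # (0, 1, 0, 1, ...)^T
    d = A @ z_star - w_star              # adjusted so that w* = A z* - d
    return A, d, z_star, w_star, tau
```

Since $z^{*}$ and $w^{*}$ have complementary supports, $(z^{*}, w^{*})$ is a valid LCP solution pair for the adjusted data.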

As shown in [44], the NMGS method performs best among the six methods tested there when solving the LCP($A,q$) in the three examples. Therefore, in this paper we focus on comparing the NMGS method of [44] with the NRATMGS method. For the NMGS iteration method, $\Omega = D_A$ is used as in [44]. For the NRATMGS method, we take $\Omega_1 = \Omega_3 = D_A$, $\Omega_2 = \Omega_4 = 0$, $M_2 = A$, $N_2 = 0$ and $\alpha = \beta = 1$. In addition, the optimal parameter $\theta_{\mathrm{exp}}$ in the NRATMGS iteration method is determined experimentally (ranging from 0 to 2 with step size 0.1 for Examples 4.1 and 4.2, and with step size 0.01 for Example 4.3) by minimizing the iteration count. For the sake of fairness, each method is run 10 times and the average computing time is reported as the CPU time. Both methods are started from the initial vectors $z^{0} = z^{1} = (1, 0, 1, 0, \ldots, 1, 0, \ldots)^{\top}$ and stopped if $\mathrm{RES}(z^{k}) < 10^{-5}$ or the prescribed maximal iteration number $k_{\max} = 500$ is exceeded. The involved linear systems are solved by "". Numerical results for Examples 4.1-4.3 are reported in Tables 1-3. It follows from Tables 1-3 that the NRATMGS method outperforms the NMGS method (and the other methods tested in [44]) in terms of both iteration count and CPU time when the parameter $\theta_{\mathrm{exp}}$ is selected appropriately.

Table 1. Numerical results of Example 4.1.

          Method               m=16         m=32         m=64         m=128
  μ=2     NMGS      IT         28           31           32           34
                    CPU        0.0009       0.0018       0.0055       0.0364
                    RES        9.6887×10⁻⁶  6.1988×10⁻⁶  8.7866×10⁻⁶  6.8116×10⁻⁶
          NRATMGS   θexp       1.4          1.4          1.4          1.4
                    IT         24           25           26           28
                    CPU        0.0005       0.0012       0.0048       0.0318
                    RES        6.6171×10⁻⁶  8.5424×10⁻⁶  9.5986×10⁻⁶  5.5387×10⁻⁶
  μ=4     NMGS      IT         18           20           21           21
                    CPU        0.0003       0.0009       0.0036       0.0224
                    RES        9.3072×10⁻⁶  4.4327×10⁻⁶  4.2816×10⁻⁶  9.0760×10⁻⁶
          NRATMGS   θexp       1.2          1.2          1.3          1.3
                    IT         16           17           17           18
                    CPU        0.0002       0.0009       0.0032       0.0161
                    RES        9.3811×10⁻⁶  9.4009×10⁻⁶  8.9594×10⁻⁶  5.9780×10⁻⁶
  μ=6     NMGS      IT         15           16           16           17
                    CPU        0.0002       0.0011       0.0030       0.0194
                    RES        4.3687×10⁻⁶  3.6549×10⁻⁶  8.1157×10⁻⁶  5.6681×10⁻⁶
          NRATMGS   θexp       1.2          1.2          1.2          1.2
                    IT         13           14           14           15
                    CPU        0.0002       0.0009       0.0025       0.0138
                    RES        5.5869×10⁻⁶  3.4746×10⁻⁶  6.9767×10⁻⁶  3.9164×10⁻⁶
  μ=8     NMGS      IT         13           13           14           15
                    CPU        0.0003       0.0006       0.0027       0.0173
                    RES        4.0397×10⁻⁶  9.9549×10⁻⁶  5.9143×10⁻⁶  3.3650×10⁻⁶
          NRATMGS   θexp       1.2          1.2          1.2          1.2
                    IT         11           12           12           13
                    CPU        0.0003       0.0006       0.0022       0.0118
                    RES        8.9972×10⁻⁶  4.0405×10⁻⁶  6.4052×10⁻⁶  2.6779×10⁻⁶

Table 2. Numerical results of Example 4.2.

          Method               m=16         m=32         m=64         m=128
  μ=2     NMGS      IT         24           26           28           29
                    CPU        0.0007       0.0011       0.0051       0.0324
                    RES        8.9826×10⁻⁶  9.8366×10⁻⁶  7.5002×10⁻⁶  9.2033×10⁻⁶
          NRATMGS   θexp       1.8          1.9          1.9          1.9
                    IT         18           19           20           21
                    CPU        0.0003       0.0010       0.0037       0.0232
                    RES        8.1939×10⁻⁶  9.1146×10⁻⁶  7.2571×10⁻⁶  5.7367×10⁻⁶
  μ=4     NMGS      IT         16           17           18           19
                    CPU        0.0002       0.0007       0.0030       0.0238
                    RES        8.3608×10⁻⁶  8.5767×10⁻⁶  7.6698×10⁻⁶  6.2746×10⁻⁶
          NRATMGS   θexp       1.5          1.5          1.6          1.6
                    IT         13           14           17           14
                    CPU        0.0002       0.0007       0.0026       0.0145
                    RES        6.5082×10⁻⁶  4.9487×10⁻⁶  6.5055×10⁻⁶  9.6328×10⁻⁶
  μ=6     NMGS      IT         13           14           15           15
                    CPU        0.0003       0.0008       0.0028       0.0224
                    RES        7.5880×10⁻⁶  5.6490×10⁻⁶  3.6901×10⁻⁶  7.7841×10⁻⁶
          NRATMGS   θexp       1.4          1.4          1.4          1.4
                    IT         11           11           12           12
                    CPU        0.0002       0.0008       0.0022       0.0121
                    RES        4.7675×10⁻⁶  8.9849×10⁻⁶  3.8629×10⁻⁶  7.2580×10⁻⁶
  μ=8     NMGS      IT         12           12           13           13
                    CPU        0.0003       0.0005       0.0024       0.0163
                    RES        2.8368×10⁻⁶  7.0877×10⁻⁶  3.6923×10⁻⁶  7.7385×10⁻⁶
          NRATMGS   θexp       1.3          1.3          1.3          1.3
                    IT         10           10           11           11
                    CPU        0.0002       0.0005       0.0021       0.0112
                    RES        3.3256×10⁻⁶  7.2249×10⁻⁶  2.6219×10⁻⁶  5.2755×10⁻⁶

    Table 3.  Numerical results of Example 4.3.

    | Case | Grid(η, ϑ) | τ | NMGS IT | NMGS CPU | NMGS RES | θexp | NRATMGS IT | NRATMGS CPU | NRATMGS RES |
    |------|------------|---|---------|----------|----------|------|------------|-------------|-------------|
    | σ=0.2, T=0.5, a=b=1.5 | (4000,2000) | 8.8889 | 23 | 0.0034 | 3.4764×10⁻⁶ | 0.95 | 20 | 0.0028 | 3.2943×10⁻⁶ |
    | | (6000,3000) | 13.3333 | 26 | 0.0062 | 7.7964×10⁻⁶ | 0.93 | 22 | 0.0046 | 5.1432×10⁻⁶ |
    | | (8000,5000) | 14.2222 | 25 | 0.0075 | 1.5296×10⁻⁶ | 1.03 | 23 | 0.0062 | 7.9327×10⁻⁶ |
    | | (8000,8000) | 8.8889 | 23 | 0.0065 | 4.9204×10⁻⁶ | 0.95 | 20 | 0.0056 | 4.6315×10⁻⁶ |
    | | (10000,10000) | 11.1111 | 23 | 0.0077 | 9.1134×10⁻⁷ | 1.04 | 21 | 0.0073 | 5.2285×10⁻⁶ |
    | σ=0.2, T=0.25, a=b=1.5 | (6000,3000) | 6.6667 | 21 | 0.0043 | 6.0408×10⁻⁶ | 0.93 | 18 | 0.0038 | 7.5730×10⁻⁶ |
    | | (8000,4000) | 8.8889 | 23 | 0.0064 | 4.9204×10⁻⁶ | 0.95 | 20 | 0.0055 | 4.6315×10⁻⁶ |
    | | (10000,5000) | 11.1111 | 23 | 0.0078 | 9.1134×10⁻⁷ | 1.04 | 21 | 0.0073 | 5.2285×10⁻⁶ |
    | | (15000,15000) | 8.3333 | 22 | 0.0111 | 8.1100×10⁻⁶ | 0.82 | 19 | 0.0101 | 7.3832×10⁻⁶ |
    | | (20000,20000) | 11.1111 | 23 | 0.0177 | 1.2828×10⁻⁶ | 1.04 | 21 | 0.0152 | 7.2461×10⁻⁶ |
    | σ=0.3, T=0.5, a=b=2 | (4000,2500) | 9 | 23 | 0.0034 | 3.8926×10⁻⁶ | 0.95 | 20 | 0.0029 | 3.1298×10⁻⁶ |
    | | (6000,3000) | 16.875 | 27 | 0.0056 | 7.5378×10⁻⁶ | 1.1 | 25 | 0.0052 | 7.0849×10⁻⁶ |
    | | (8000,4000) | 22.5 | 32 | 0.0088 | 5.2859×10⁻⁶ | 0.99 | 28 | 0.0076 | 4.8398×10⁻⁶ |
    | | (8000,6000) | 15 | 26 | 0.0071 | 4.9459×10⁻⁶ | 0.86 | 23 | 0.0062 | 7.8720×10⁻⁶ |
    | | (10000,10000) | 14.0625 | 25 | 0.0087 | 2.5353×10⁻⁶ | 1.05 | 23 | 0.0081 | 6.5500×10⁻⁶ |
    | σ=0.3, T=0.25, a=b=4 | (8000,4000) | 2.8125 | 18 | 0.0050 | 6.2985×10⁻⁶ | 0.92 | 16 | 0.0045 | 8.3277×10⁻⁶ |
    | | (16000,8000) | 5.625 | 21 | 0.0112 | 5.5724×10⁻⁶ | 0.94 | 18 | 0.0101 | 7.3356×10⁻⁶ |
    | | (20000,10000) | 7.0313 | 23 | 0.0177 | 6.2544×10⁻⁷ | 0.91 | 20 | 0.0144 | 6.1941×10⁻⁶ |
    | | (24000,15000) | 6.75 | 23 | 0.0202 | 6.6919×10⁻⁷ | 0.91 | 20 | 0.0179 | 6.9170×10⁻⁶ |
    | | (30000,24000) | 6.5918 | 23 | 0.0275 | 7.3343×10⁻⁷ | 0.91 | 20 | 0.0243 | 7.7511×10⁻⁶ |


    In this paper, by combining matrix splitting, a relaxation technique and the two-sweep iteration form with the new modulus-based matrix splitting formula, we have proposed a new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method for solving the LCP(A,q) (1.1). We have established the convergence of the NRATMMS iteration method when the system matrix A of the LCP(A,q) (1.1) is an H+-matrix. Numerical experiments illustrate that the NRATMMS iteration method is effective and can be superior to some existing methods. The theoretical choice of the optimal parameters, however, requires further investigation.

    The authors are grateful to the five reviewers and the editor for their helpful comments and suggestions that have helped to improve the paper. This research is supported by the National Natural Science Foundation of China (12201275, 12131004), the Ministry of Education in China of Humanities and Social Science Project (21YJCZH204), the Project of Liaoning Provincial Federation Social Science Circles (2023lslqnkt-044, 2022lslwtkt-069), the Natural Science Foundation of Fujian Province (2021J01661) and the Ministry of Science and Technology of China (2021YFA1003600).

    The authors declare that there is no conflict of interest.



    [1] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, San Diego: Academic Press, 1992. http://doi.org/10.1137/1.9780898719000
    [2] K. G. Murty, Linear complementarity, linear and nonlinear programming, Berlin: Heldermann, 1988.
    [3] U. Schäfer, A linear complementarity problem with a P-matrix, SIAM Rev., 46 (2004), 189–201. http://dx.doi.org/10.1137/S0036144502420083 doi: 10.1137/S0036144502420083
    [4] N. W. Kappel, L. T. Watson, Iterative algorithms for the linear complementarity problem, Int. J. Comput. Math., 19 (1986), 273–297. http://dx.doi.org/10.1080/00207168608803522 doi: 10.1080/00207168608803522
    [5] Z. Z. Bai, On the monotone convergence of the projected iteration methods for linear complementarity problem, Numer. Math. J. Chinese Univ. (English Ser.), 5 (1996), 228–233.
    [6] Y. Li, P. Dai, Generalized AOR methods for linear complementarity problem, Appl. Math. Comput., 188 (2007), 7–18. http://dx.doi.org/10.1016/j.amc.2006.09.067 doi: 10.1016/j.amc.2006.09.067
    [7] O. L. Mangasarian, Solution of symmetric linear complementarity problems by iterative methods, J. Optim. Theory Appl., 22 (1977), 465–485. http://dx.doi.org/10.1007/BF01268170 doi: 10.1007/BF01268170
    [8] H. S. Najafi, S. A. Edalatpanah, On the convergence regions of generalized accelerated overrelaxation method for linear complementarity problems, J. Optim. Theory Appl., 156 (2013), 859–866. http://dx.doi.org/10.1007/s10957-012-0135-1 doi: 10.1007/s10957-012-0135-1
    [9] Z. Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM J. Matrix Anal. Appl., 21 (1999), 67–78. https://dx.doi.org/10.1137/S0895479897324032 doi: 10.1137/S0895479897324032
    [10] Z. Z. Bai, Parallel chaotic multisplitting iterative methods for the large sparse linear complementarity problem, J. Comput. Math., 19 (2001), 281–292.
    [11] Z. Z. Bai, L. L. Zhang, Modulus-based synchronous multisplitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 20 (2013), 425–439. https://dx.doi.org/10.1002/nla.1835 doi: 10.1002/nla.1835
    [12] Z. Z. Bai, D. J. Evans, Matrix multisplitting methods with applications to linear complementarity problems: parallel asynchronous methods, Int. J. Comput. Math., 79 (2002), 205–232. https://dx.doi.org/10.1080/00207160211927 doi: 10.1080/00207160211927
    [13] Z. Z. Bai, L. L. Zhang, Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems, Numer. Algorithms, 62 (2013), 59–77. https://dx.doi.org/10.1007/s11075-012-9566-x doi: 10.1007/s11075-012-9566-x
    [14] N. Machida, M. Fukushima, T. Ibaraki, A multisplitting method for symmetric linear complementarity problems, J. Comput. Appl. Math., 62 (1995), 217–227. http://dx.doi.org/10.1016/0377-0427(94)00103-2 doi: 10.1016/0377-0427(94)00103-2
    [15] C. Kanzow, Inexact semismooth Newton methods for large-scale complementarity problems, Optim. Methods Softw., 19 (2004), 309–325. http://dx.doi.org/10.1080/10556780310001636369 doi: 10.1080/10556780310001636369
    [16] Z. Sun, J. P. Zeng, A monotone semismooth Newton type method for a class of complementarity problems, J. Comput. Appl. Math., 235 (2011), 1261–1274. http://dx.doi.org/10.1016/j.cam.2010.08.012 doi: 10.1016/j.cam.2010.08.012
    [17] Z. Z. Bai, L. L. Zhang, Modulus-based multigrid methods for linear complementarity problems, Numer. Linear Algebra Appl., 24 (2017), e2105. https://dx.doi.org/10.1002/nla.2105 doi: 10.1002/nla.2105
    [18] L. Jia, X. Wang, X. Y. Xiao, The nonlinear lopsided HSS-like modulus-based matrix splitting iteration method for linear complementarity problems with positive-definite matrices, Commun. Appl. Math. Comput., 3 (2021), 109–122. http://dx.doi.org/10.1007/s42967-019-00038-5 doi: 10.1007/s42967-019-00038-5
    [19] L. L. Zhang, Two-stage multisplitting iteration methods using modulus-based matrix splitting as inner iteration for linear complementarity problems, J. Optim. Theory Appl., 160 (2014), 189–203. http://dx.doi.org/10.1007/s10957-013-0362-0 doi: 10.1007/s10957-013-0362-0
    [20] Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. https://dx.doi.org/10.1002/nla.680 doi: 10.1002/nla.680
    [21] A. Hadjidimos, M. Tzoumas, Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem, Linear Algebra Appl., 431 (2009), 197–210. http://dx.doi.org/10.1016/j.laa.2009.02.024 doi: 10.1016/j.laa.2009.02.024
    [22] J. L. Dong, M. Q. Jiang, A modified modulus method for symmetric positive-definite linear complementarity problems, Numer. Linear Algebra Appl., 16 (2009), 129–143. http://dx.doi.org/10.1002/nla.609 doi: 10.1002/nla.609
    [23] N. Zheng, J. F. Yin, Accelerated modulus-based matrix splitting iteration methods for linear complementarity problem, Numer. Algorithms, 64 (2013), 245–262. http://dx.doi.org/10.1007/s11075-012-9664-9 doi: 10.1007/s11075-012-9664-9
    [24] H. Zheng, W. Li, S. Vong, A relaxation modulus-based matrix splitting iteration method for solving linear complementarity problems, Numer. Algorithms, 74 (2016), 137–152. http://dx.doi.org/10.1007/s11075-016-0142-7 doi: 10.1007/s11075-016-0142-7
    [25] W. Li, A general modulus-based matrix splitting method for linear complementarity problems of H-matrices, Appl. Math. Lett., 26 (2013), 1159–1164. http://dx.doi.org/10.1016/j.aml.2013.06.015 doi: 10.1016/j.aml.2013.06.015
    [26] S. Q. Shen, T. Z. Huang, Convergence and comparison theorems for double splittings of matrices, Comput. Math. Appl., 51 (2006), 1751–1760. http://dx.doi.org/10.1016/j.camwa.2006.02.006 doi: 10.1016/j.camwa.2006.02.006
    [27] Z. I. Woźnicki, Estimation of the optimum relaxation factors in partial factorization iterative methods, SIAM J. Matrix Anal. Appl., 14 (1993), 59–73. http://dx.doi.org/10.1137/0614005 doi: 10.1137/0614005
    [28] S. L. Wu, C. X. Li, Two-sweep modulus-based matrix splitting iteration methods for linear complementarity problems, J. Comput. Appl. Math., 302 (2016), 327–339. http://dx.doi.org/10.1016/j.cam.2016.02.011 doi: 10.1016/j.cam.2016.02.011
    [29] H. Ren, X. Wang, X. B. Tang, T. Wang, The general two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems, Comput. Math. Appl., 77 (2019), 1071–1081. http://dx.doi.org/10.1016/j.camwa.2018.10.040 doi: 10.1016/j.camwa.2018.10.040
    [30] X. Peng, M. Wang, W. Li, A relaxation two-sweep modulus-based matrix splitting iteration method for linear complementarity problems, East Asian J. Appl. Math., 9 (2019), 102–121. http://dx.doi.org/10.4208/eajam.020318.220618 doi: 10.4208/eajam.020318.220618
    [31] Z. G. Huang, J. J. Cui, Accelerated relaxation modulus-based matrix splitting iteration method for linear complementarity problems, Bull. Malays. Math. Sci. Soc., 44 (2021), 2175–2213. http://dx.doi.org/10.1007/s40840-020-01049-9 doi: 10.1007/s40840-020-01049-9
    [32] P. F. Dai, J. Li, J. Bai, J. Qiu, A preconditioned two-step modulus-based matrix splitting iteration method for linear complementarity problem, Appl. Math. Comput., 348 (2019), 542–551. http://dx.doi.org/10.1016/j.amc.2018.12.012 doi: 10.1016/j.amc.2018.12.012
    [33] B. Huang, C. Ma, Accelerated modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems, Comput. Appl. Math., 37 (2018), 3053–3076. http://dx.doi.org/10.1007/s40314-017-0496-z doi: 10.1007/s40314-017-0496-z
    [34] N. Li, J. Ding, J. F. Yin, Modified relaxation two-sweep modulus-based matrix splitting iteration method for solving a class of implicit complementarity problems, J. Comput. Appl. Math., 413 (2022), 114370. http://dx.doi.org/10.1016/j.cam.2022.114370 doi: 10.1016/j.cam.2022.114370
    [35] S. Liu, H. Zheng, W. Li, A general accelerated modulus-based matrix splitting iteration method for solving linear complementarity problems, Calcolo, 53 (2015), 189–199. http://dx.doi.org/10.1007/s10092-015-0143-2 doi: 10.1007/s10092-015-0143-2
    [36] F. Mezzadri, On the equivalence between some projected and modulus-based splitting methods for linear complementarity problems, Calcolo, 56 (2019), 41. http://dx.doi.org/10.1007/s10092-019-0337-0 doi: 10.1007/s10092-019-0337-0
    [37] H. Ren, X. Wang, X. B. Tang, T. Wang, A preconditioned general two-step modulus-based matrix splitting iteration method for linear complementarity problems of H+-matrices, Numer. Algorithms, 82 (2019), 969–986. http://dx.doi.org/10.1007/s11075-018-0637-5 doi: 10.1007/s11075-018-0637-5
    [38] Y. J. Wu, W. H. Zhang, A. L. Yang, Modulus-based inexact non-alternating preconditioned splitting method for linear complementarity problems, Linear Multilinear Algebra, in press. http://dx.doi.org/10.1080/03081087.2021.1991874
    [39] L. L. Zhang, Two-step modulus-based matrix splitting iteration method for linear complementarity problems, Numer. Algorithms, 57 (2011), 83–99. http://dx.doi.org/10.1007/s11075-010-9416-7 doi: 10.1007/s11075-010-9416-7
    [40] L. L. Zhang, Z. R. Ren, Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems, Appl. Math. Lett., 26 (2013), 638–642. http://dx.doi.org/10.1016/j.aml.2013.01.001 doi: 10.1016/j.aml.2013.01.001
    [41] N. Zheng, J. F. Yin, Convergence of accelerated modulus-based matrix splitting iteration methods for linear complementarity problem with an H+-matrix, J. Comput. Appl. Math., 260 (2014), 281–293. http://dx.doi.org/10.1016/j.cam.2013.09.079 doi: 10.1016/j.cam.2013.09.079
    [42] Z. Z. Bai, P. L. Tong, Iterative methods for linear complementarity problem, J. UEST China, 22 (1993), 420–424.
    [43] Z. Z. Bai, T. Z. Huang, Accelerated overrelaxation methods for solving linear complementarity problem, J. UEST China, 23 (1994), 428–432.
    [44] S. L. Wu, C. X. Li, A class of new modulus-based matrix splitting methods for linear complementarity problem, Optim. Lett., 16 (2022), 1427–1443. http://dx.doi.org/10.1007/s11590-021-01781-6 doi: 10.1007/s11590-021-01781-6
    [45] A. Frommer, D. B. Szyld, H-splittings and two-stage iterative methods, Numer. Math., 63 (1992), 345–356. http://dx.doi.org/10.1007/BF01385865 doi: 10.1007/BF01385865
    [46] Z. Z. Bai, D. J. Evans, Matrix multisplitting relaxation methods for linear complementarity problems, Int. J. Comput. Math., 63 (1997), 309–326. https://dx.doi.org/10.1080/00207169708804569 doi: 10.1080/00207169708804569
    [47] J. G. Hu, Estimates of ‖B⁻¹A‖∞ and their applications, Math. Numer. Sin., 3 (1982), 272–282.
    [48] A. Frommer, G. Mayer, Convergence of relaxed parallel multisplitting methods, Linear Algebra Appl., 119 (1989), 141–152. http://dx.doi.org/10.1016/0024-3795(89)90074-8 doi: 10.1016/0024-3795(89)90074-8
    [49] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Philadelphia: SIAM, 1994. http://dx.doi.org/10.1137/1.9781611971262
    [50] X. J. Shi, Y. Lei, Z. H. Huang, A fixed point method for the linear complementarity problem arising from american option pricing, Acta Math. Appl. Sin. Engl. Ser., 32 (2016), 921–932. http://dx.doi.org/10.1007/s10255-016-0613-6 doi: 10.1007/s10255-016-0613-6