Research article

A general modulus-based matrix splitting method for quasi-complementarity problem

  • Received: 22 December 2021 Revised: 14 March 2022 Accepted: 28 March 2022 Published: 06 April 2022
  • MSC : 65F10

  • For large sparse quasi-complementarity problems (QCPs), Wu and Guo [35] recently studied a modulus-based matrix splitting (MMS) iteration method, which belongs to a class of inner-outer iteration methods. In order to improve the convergence rate of the inner iteration, and thereby obtain a fast convergence rate of the outer iteration, a general MMS (GMMS) iteration method is proposed in this paper. Convergence of the GMMS method is analyzed in detail when the system matrix is either an H+-matrix or a positive definite matrix. In the H+-matrix case, a weaker convergence condition for the GMMS iteration method is obtained. Finally, two numerical experiments are conducted, and the results indicate that the newly proposed GMMS method achieves better performance than the MMS iteration method.

    Citation: Chen-Can Zhou, Qin-Qin Shen, Geng-Chen Yang, Quan Shi. A general modulus-based matrix splitting method for quasi-complementarity problem[J]. AIMS Mathematics, 2022, 7(6): 10994-11014. doi: 10.3934/math.2022614




    Consider the quasi-complementarity problem (QCP) [28,35]: find a pair of vectors z, w ∈ R^n such that

    w = Az + q + Ψ(z) ≥ 0,  z − Φ(z) ≥ 0  and  w^T(z − Φ(z)) = 0, (1.1)

    where A = (aij) ∈ R^{n×n} and q ∈ R^n are given, Ψ(·) is a nonlinear transformation from R^n into itself and Φ(·) denotes a point-to-point mapping. Here and in the sequel, '≥' denotes the componentwise defined partial ordering between two vectors and (·)^T stands for the transpose of either a vector or a matrix. In fact, the QCP (1.1) can be regarded as a generalization of many well-known complementarity problems [21]. In greater detail, if Φ(·) is the zero mapping, then the QCP (1.1) reduces to the weakly or restricted nonlinear complementarity problem (WNCP) [25]. If Ψ(·) = 0, then the QCP (1.1) reduces to the implicit complementarity problem (ICP) [11,19]. If the conditions on Φ(·) and Ψ(·) mentioned above are met simultaneously, then the QCP (1.1) reduces to the well-known linear complementarity problem (LCP) [2,13,14].

    As is well known, many problems in engineering and economics applications take the form of the QCP and its special cases, including linear and quadratic programming [13], Nash equilibrium problems [12], traffic bottleneck model simulation [10,30], free boundary problems [37], nonnegatively constrained image restoration [15] and so on; see [13,16,21] and the references therein for more details. Due to the exponential computational complexity of direct complementarity pivot algorithms, iterative methods for solving the QCP (1.1) are preferred and widely studied, for instance, the inexact Newton methods [27,29], the projected methods [24], the matrix multisplitting iteration methods [1,4,5], the modulus-based matrix splitting (MMS) iteration methods [2,14,32,35] and so forth. Among these existing iterative methods, the MMS iteration method has attracted considerable attention due to its simple structure and high performance.

    The modulus method was first devised to solve finite-dimensional discrete linear complementarity problems [31]. The basic idea of the modulus method is to cast the LCP as an absolute value equation (AVE) [34] by introducing two nonnegative and orthogonal vectors, and then to construct efficient algorithms to solve the AVE. On one hand, many improvements have been made to accelerate the convergence rate and improve the computational efficiency of the modulus method, including the modified modulus-based iteration method [14], the modulus-based matrix splitting (MMS) iteration method [2], the general MMS iteration method [26], the two-step MMS iteration method [23,36], the modulus-based synchronous multisplitting iteration method [8] and so on. On the other hand, the classical modulus method and its improvements have been extended to the ICP [11,19,22], the WNCP [25,38] and the QCP [32,35]. Numerical results indicate that the modulus-based iteration methods perform much better than the Newton-based and projected iteration methods.

    Let g(z) = z − Φ(z) be invertible and A = M − N be a splitting of the matrix A ∈ R^{n×n}. By introducing a positive parameter γ > 0 and a positive diagonal matrix Ω, and letting

    g(z) = z − Φ(z) = (1/γ)(|x| + x),  w = (1/γ)Ω(|x| − x),

    we get z = g^{-1}((1/γ)(|x| + x)), and the QCP (1.1) can be equivalently expressed as the following implicit fixed-point equation:

    (Ω + M)x = Nx + (Ω − A)|x| − γAΦ(g^{-1}((1/γ)(|x| + x))) − γΨ(g^{-1}((1/γ)(|x| + x))) − γq. (1.2)

    Further, define the set

    Z = {z | z − Φ(z) ≥ 0, Az + q + Ψ(z) ≥ 0}; (1.3)

    then the MMS iteration method is briefly summarized as follows.

    Method 1.1. [35] (The MMS iteration method for QCP)

    Step 1. Given ε > 0 and z^(0) ∈ Z, set k = 0.

    Step 2. Find the solution z(k+1):

    (1) Calculate the initial vector

    x^(0,k) = (γ/2)(z^(k) − Ω^{-1}w^(k) − Φ(z^(k)))

    and set j = 0.

    (2) Compute x(j+1,k) by iteratively solving

    (Ω + M)x^(j+1,k) = Nx^(j,k) + (Ω − A)|x^(j,k)| − γAΦ(z^(k)) − γΨ(z^(k)) − γq.

    (3) Compute

    z^(k+1) = (1/γ)(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)).

    Step 3. If RES(z^(k+1)) = |(Az^(k+1) + q + Ψ(z^(k+1)))^T(z^(k+1) − Φ(z^(k+1)))| < ε, then stop; otherwise, set k = k + 1 and return to Step 2.

    It can be seen from Method 1.1 that the MMS iteration method for solving the QCP (1.1) belongs to a class of inner-outer iteration methods. Depending on whether the number of inner iterations j is fixed or varies with the outer iteration index k, the MMS iteration method can be classified into two categories, i.e., the stationary and nonstationary MMS methods. Generally speaking, the convergence rate of the inner solver has a great impact on the global convergence rate [11,36].

    To improve the convergence rate of the inner iteration of the MMS iteration method so as to obtain a fast global convergence rate, inspired and motivated by the general MMS iteration method studied in [26] for solving the LCP, we propose a general modulus-based matrix splitting (GMMS) iteration method for solving the QCP (1.1). In the GMMS iteration method, an additional diagonal matrix is introduced for g(z). Through the selection of appropriate diagonal matrices, the GMMS iteration method not only covers the existing MMS method, but also leads to a new series of modulus-based relaxation methods. Convergence conditions are analyzed in detail when the system matrix is either an H+-matrix or a positive definite matrix. Moreover, in the case of an H+-matrix, weaker convergence conditions than those given in [35, Theorem 3.3] can be obtained.

    The rest of this paper is organized as follows. In Section 2, we establish the GMMS iteration method for solving the QCP (1.1). Convergence conditions of the GMMS iteration method are proved in Section 3. In Section 4, two numerical examples are presented to demonstrate the effectiveness and advantages of the new proposed GMMS iteration method. Finally, we end this paper with a brief conclusion and outlook in Section 5.

    Let Ω1, Ω2 ∈ R^{n×n} be two positive diagonal matrices, and let

    g(z) = z − Φ(z) = Ω1(|x| + x),  w = Ω2(|x| − x);

    then we have

    z = g^{-1}(Ω1(|x| + x)),  x = (1/2)(Ω1^{-1}z − Ω1^{-1}Φ(z) − Ω2^{-1}w).

    Further, let AΩ1 = MΩ1 − NΩ1 be a splitting of the matrix AΩ1 ∈ R^{n×n}. Then, similar to (1.2), we can transform the original QCP (1.1) into the following implicit fixed-point equation with respect to x:

    (Ω2 + MΩ1)x = NΩ1x + (Ω2 − AΩ1)|x| − AΦ(g^{-1}(Ω1(|x| + x))) − Ψ(g^{-1}(Ω1(|x| + x))) − q. (2.1)

    It follows from [35, Theorem 3.1] that if x is a solution of (2.1), then (z, w) = (g^{-1}(Ω1(|x| + x)), Ω2(|x| − x)) is a solution pair of the QCP (1.1).

    Based on (2.1), the initial set (1.3) and the MMS iteration method (Method 1.1), a general modulus-based matrix splitting iteration method is proposed as follows.

    Method 2.1. (The GMMS iteration method for QCP)

    Step 1. Given ε > 0 and z^(0) ∈ Z, set k = 0.

    Step 2. Find the solution z(k+1):

    (1) Calculate the initial vector

    w^(k) = Az^(k) + q + Ψ(z^(k)),
    x^(0,k) = (1/2)(Ω1^{-1}z^(k) − Ω1^{-1}Φ(z^(k)) − Ω2^{-1}w^(k)), (2.2)

    and set j = 0.

    (2) Compute x(j+1,k) by iteratively solving

    (Ω2 + MΩ1)x^(j+1,k) = NΩ1x^(j,k) + (Ω2 − AΩ1)|x^(j,k)| − AΦ(z^(k)) − Ψ(z^(k)) − q. (2.3)

    (3) Compute

    z^(k+1) = Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)). (2.4)

    Step 3. If RES(z^(k+1)) = |(Az^(k+1) + q + Ψ(z^(k+1)))^T(z^(k+1) − Φ(z^(k+1)))| < ε, then stop; otherwise, set k = k + 1 and return to Step 2.

    Remark 2.1. In particular, if Ω1 = (1/γ)I, Ω2 = (1/γ)Ω, MΩ1 = (1/γ)M and NΩ1 = (1/γ)N, then (2.3) can be rewritten as

    (Ω + M)x^(j+1,k) = Nx^(j,k) + (Ω − A)|x^(j,k)| − γAΦ(z^(k)) − γΨ(z^(k)) − γq,

    which reduces to the MMS iteration scheme in [35].

    From the newly proposed GMMS iteration method (see Method 2.1) and the original MMS iteration method (see Method 1.1), we see that only the inner iteration (i.e., the second step) is different. As discussed in Section 1, the inner iteration is critical for the MMS iteration method. Here, we provide a general framework for the inner iteration: with suitable choices of the positive diagonal matrices Ω1 and Ω2, we can speed up the inner iteration so as to obtain a fast convergence rate of the outer iteration.
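To make the inner-outer structure concrete, the following is a minimal NumPy sketch of Method 2.1 with a Gauss-Seidel-type splitting of AΩ1. The function name gmms and the small test problem (Φ ≡ 0 and Ψ(z) = 0.01 sin z) are our own illustrative choices, not code from the paper; Ω1 and Ω2 are passed as the diagonals of the positive diagonal matrices.

```python
import numpy as np

def gmms(A, q, Phi, Psi, Omega1, Omega2, M, N, z0, inner=2, tol=1e-6, kmax=100):
    """Sketch of Method 2.1 (GMMS). Omega1, Omega2: diagonals (1-D arrays)
    of the positive diagonal matrices; M, N split A @ diag(Omega1) = M - N."""
    z = z0.copy()
    AO = A * Omega1                       # A @ diag(Omega1)
    Msys = np.diag(Omega2) + M            # Omega2 + M_{Omega1}
    for k in range(kmax):
        w = A @ z + q + Psi(z)                               # w^{(k)}
        x = 0.5 * ((z - Phi(z)) / Omega1 - w / Omega2)       # Eq. (2.2)
        c = -A @ Phi(z) - Psi(z) - q
        for _ in range(inner):                               # Eq. (2.3)
            x = np.linalg.solve(Msys, N @ x + (np.diag(Omega2) - AO) @ np.abs(x) + c)
        z = Omega1 * (np.abs(x) + x) + Phi(z)                # Eq. (2.4)
        res = abs((A @ z + q + Psi(z)) @ (z - Phi(z)))       # Step 3: RES(z^{(k+1)})
        if res < tol:
            break
    return z, k + 1, res

# Small illustrative H+-matrix test problem.
n = 8
A = 4*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal H+-matrix
q = np.array([(-1.0)**i for i in range(1, n + 1)])
Phi = lambda z: np.zeros_like(z)
Psi = lambda z: 0.01*np.sin(z)
Om1 = 0.06*np.diag(A)            # Omega1 = 0.06 D_A (a choice used in Section 4)
Om2 = np.ones(n)                 # Omega2 = I
AO = A * Om1
M = np.tril(AO)                  # Gauss-Seidel choice: M = D - L
N = M - AO                       # so that A @ diag(Om1) = M - N
z, k, res = gmms(A, q, Phi, Psi, Om1, Om2, M, N, np.zeros(n))
```

With Φ ≡ 0, the update (2.4) needs no inverse of g; in general g^{-1} would have to be applied, as in (2.1).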

    We emphasize that the implicit fixed-point equation (2.1) is a system of weakly nonlinear equations [3,7]. By selecting different parameter matrices, Method 2.1 can also yield a series of general modulus-based relaxation methods. More specifically, let AΩ1 = DAΩ1 − LAΩ1 − UAΩ1, with DAΩ1, LAΩ1 and UAΩ1 being the diagonal, strictly lower-triangular and strictly upper-triangular parts of AΩ1, respectively. Then the following four specific iteration schemes can be obtained:

    (a) When MΩ1 = DAΩ1 and NΩ1 = LAΩ1 + UAΩ1, Method 2.1 is known as the general modulus-based Jacobi (GMJ) iteration method:

    (Ω2 + DAΩ1)x^(j+1,k) = (LAΩ1 + UAΩ1)x^(j,k) + (Ω2 − AΩ1)|x^(j,k)| − AΦ(z^(k)) − Ψ(z^(k)) − q,

    with z^(k+1) = Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)).

    (b) When MΩ1 = DAΩ1 − LAΩ1 and NΩ1 = UAΩ1, Method 2.1 is named the general modulus-based Gauss-Seidel (GMGS) iteration method:

    (Ω2 + DAΩ1 − LAΩ1)x^(j+1,k) = UAΩ1x^(j,k) + (Ω2 − AΩ1)|x^(j,k)| − AΦ(z^(k)) − Ψ(z^(k)) − q,

    with z^(k+1) = Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)).

    (c) When MΩ1 = (1/α)DAΩ1 − LAΩ1 and NΩ1 = ((1/α) − 1)DAΩ1 + UAΩ1, Method 2.1 is referred to as the general modulus-based successive overrelaxation (GMSOR) iteration method:

    (αΩ2 + DAΩ1 − αLAΩ1)x^(j+1,k) = [(1 − α)DAΩ1 + αUAΩ1]x^(j,k) + α(Ω2 − AΩ1)|x^(j,k)| − αAΦ(z^(k)) − αΨ(z^(k)) − αq,

    with z^(k+1) = Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)).

    (d) When MΩ1 = (1/α)(DAΩ1 − βLAΩ1) and NΩ1 = (1/α)[(1 − α)DAΩ1 + (α − β)LAΩ1 + αUAΩ1], Method 2.1 reduces to the general modulus-based accelerated overrelaxation (GMAOR) iteration method:

    (αΩ2 + DAΩ1 − βLAΩ1)x^(j+1,k) = [(1 − α)DAΩ1 + (α − β)LAΩ1 + αUAΩ1]x^(j,k) + α(Ω2 − AΩ1)|x^(j,k)| − αAΦ(z^(k)) − αΨ(z^(k)) − αq,

    with z^(k+1) = Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)).

    The above four modulus-based splitting iteration methods are based on classical matrix splitting iteration methods for systems of linear equations [6]. Obviously, the computational workload of inverting the system matrix A can be reduced. In addition, the relaxation parameters α and β in the GMAOR method can be tuned to improve the convergence speed. In practice, however, different combinations of the parameters α and β have a great influence on the number of iteration steps. As a result, the GMGS iteration method appears more competitive than the GMSOR and GMAOR methods.
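The four choices of (MΩ1, NΩ1) above can be generated programmatically. The helper below is a sketch (function name ours; AO stands for AΩ1) that builds each pair from the decomposition AΩ1 = D − L − U and verifies that each choice is indeed a splitting:

```python
import numpy as np

def split(AO, scheme, alpha=1.0, beta=1.0):
    """Return (M, N) with AO = M - N for the four schemes of Section 2,
    where AO plays the role of A*Omega1 = D - L - U."""
    D = np.diag(np.diag(AO))
    L = -np.tril(AO, -1)               # strictly lower part, negated
    U = -np.triu(AO, 1)                # strictly upper part, negated
    if scheme == "GMJ":
        M = D
    elif scheme == "GMGS":
        M = D - L
    elif scheme == "GMSOR":
        M = D/alpha - L
    elif scheme == "GMAOR":
        M = (D - beta*L)/alpha
    else:
        raise ValueError(scheme)
    return M, M - AO                   # N = M - AO, so AO = M - N

AO = np.array([[ 4.0, -1.0,  0.0],
               [-1.0,  4.0, -1.0],
               [ 0.0, -1.0,  4.0]])
for s in ["GMJ", "GMGS", "GMSOR", "GMAOR"]:
    M, N = split(AO, s, alpha=1.2, beta=0.9)
    assert np.allclose(M - N, AO)      # each choice is a splitting of AO
```

Note that GMAOR reduces to GMSOR for β = α, to GMGS for α = β = 1, and to GMJ-type splittings for β = 0, α = 1.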

    Remark 2.2. In actual computations, the choices of the parameter matrices Ω1 and Ω2 in the GMMS iteration method are flexible. One can choose tI (t > 0), or sDA (s > 0) with DA being the diagonal part of A, or other positive diagonal matrices with unequal diagonal elements. By suitably choosing the parameter matrices, a faster convergence rate of the proposed GMMS iteration method can be obtained.

    In this section, we analyze the convergence of the GMMS iteration method when the system matrix A of the QCP (1.1) is either an H+-matrix or a positive definite matrix.

    The following notations, definitions and lemmas will be used in the subsequent sections. Let A = (aij), B = (bij) ∈ R^{n×n} be two square matrices. Define A ≥ B (A > B) if aij ≥ bij (aij > bij) for all 1 ≤ i ≤ n, 1 ≤ j ≤ n. We say A is a nonnegative (positive) matrix if aij ≥ 0 (aij > 0). A is called a Z-matrix if aij ≤ 0 for any i ≠ j. If A is a Z-matrix and A^{-1} ≥ 0, then A is an M-matrix. If the comparison matrix ⟨A⟩ = (⟨a⟩ij) ∈ R^{n×n}, where

    ⟨a⟩ij = |aij| for i = j  and  ⟨a⟩ij = −|aij| for i ≠ j,  i, j = 1, 2, …, n,

    is an M-matrix, then A is called an H-matrix. In particular, an H-matrix is called an H+-matrix when its diagonal entries are positive.
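These definitions are easy to check numerically. The sketch below (helper names ours) uses the nonnegativity of A^{-1} from the definition above as the numeric M-matrix test:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """Numeric check of the definition: Z-matrix with A^{-1} >= 0."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):                 # not a Z-matrix
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:         # singular: not an M-matrix
        return False

def is_Hplus_matrix(A):
    return bool(np.all(np.diag(A) > 0)) and is_M_matrix(comparison_matrix(A))

# [[8, -2], [-3, 4]] has an M-matrix comparison matrix, hence it is H+;
# [[1, 2], [2, 1]] does not (its comparison matrix has negative inverse entries).
```
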

    We use sp(A) and ρ(A) to represent the spectrum and the spectral radius of the matrix A, respectively. The splitting A = M − N is an M-splitting if M is a nonsingular M-matrix and N is nonnegative; it is an H-splitting if ⟨M⟩ − |N| is an M-matrix; further, if ⟨A⟩ = ⟨M⟩ − |N|, then A = M − N is called an H-compatible splitting [9]. I is the identity matrix of the appropriate size.

    Lemma 3.1. [18] Let A ∈ R^{n×n} be an M-matrix and B ∈ R^{n×n} be a Z-matrix. If A ≤ B, then B is an M-matrix.

    Lemma 3.2. [17] If A ∈ R^{n×n} is an H+-matrix, then |A^{-1}| ≤ ⟨A⟩^{-1}.

    Lemma 3.3. [33] Let A ∈ R^{n×n}. Then ρ(A) < 1 if and only if lim_{k→+∞} A^k = 0.

    Lemma 3.4. [6] ARn×n is a generalized strictly diagonally dominant matrix if and only if there exists a positive diagonal matrix D such that the matrix AD is strictly diagonally dominant.

    Lemma 3.5. [20] Let B ∈ R^{n×n} be a strictly diagonally dominant (s.d.d.) matrix. Then for any matrix C ∈ R^{n×n},

    ‖B^{-1}C‖_∞ ≤ max_{1≤i≤n} (|C|e)_i / (⟨B⟩e)_i

    holds, where e = (1, 1, …, 1)^T.
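A quick numerical sanity check of the bound in Lemma 3.5 (the matrices are arbitrary illustrative choices; B is taken s.d.d. with positive diagonal and nonpositive off-diagonal entries, so that ⟨B⟩ = B):

```python
import numpy as np

B = np.array([[ 5.0, -1.0, -2.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -2.0,  6.0]])    # strictly diagonally dominant
C = np.array([[ 1.0,  0.0, -3.0],
              [ 2.0, -1.0,  0.0],
              [ 0.0,  1.0,  1.0]])
e = np.ones(3)
compB = -np.abs(B)
np.fill_diagonal(compB, np.abs(np.diag(B)))           # comparison matrix <B>
lhs = np.linalg.norm(np.linalg.solve(B, C), np.inf)   # ||B^{-1} C||_inf
rhs = np.max((np.abs(C) @ e) / (compB @ e))           # max_i (|C|e)_i / (<B>e)_i
assert lhs <= rhs + 1e-12
```
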

    Assume that z^(∗) is the exact solution of the QCP (1.1) and x^(∗) is the exact solution of the implicit fixed-point equation (2.1). From (2.2)–(2.4), we have

    z^(∗) = Ω1(|x^(∗)| + x^(∗)) + Φ(z^(∗)), (3.1)
    x^(∗) = (1/2)(Ω1^{-1}z^(∗) − Ω1^{-1}Φ(z^(∗)) − Ω2^{-1}w^(∗)) (3.2)

    and

    (Ω2 + MΩ1)x^(∗) = NΩ1x^(∗) + (Ω2 − AΩ1)|x^(∗)| − AΦ(z^(∗)) − Ψ(z^(∗)) − q.

    In this subsection, we derive sufficient convergence conditions for the GMMS iteration method when the system matrix A is an H+-matrix. To this end, the following two notations are introduced:

    δ1 = (Ω2 + ⟨MΩ1⟩)^{-1}(|NΩ1| + |Ω2 − AΩ1|),
    δ2 = (Ω2 + ⟨MΩ1⟩)^{-1}(|A|l1 + l2I).

    Theorem 3.1. Let Ω1, Ω2 ∈ R^{n×n} be two positive diagonal matrices, A ∈ R^{n×n} be an H+-matrix and AΩ1 = MΩ1 − NΩ1 be an H-splitting of the matrix AΩ1. Let Φ(·) and Ψ(·) be two Lipschitz continuous functions satisfying

    |Φ(s) − Φ(t)| ≤ l1|s − t|  and  |Ψ(s) − Ψ(t)| ≤ l2|s − t|,

    where l1, l2 are the Lipschitz constants. Since ⟨MΩ1⟩ − |NΩ1| is a generalized strictly diagonally dominant matrix, let D be a positive diagonal matrix such that (⟨MΩ1⟩ − |NΩ1|)D is strictly diagonally dominant. If

    Ω2e > DAΩ1e − D^{-1}(⟨MΩ1⟩ − |NΩ1|)De  and  (2‖Ω1‖_2‖δ2‖_2 + l1)/(1 − ‖δ1‖_2) < 1,

    where ‖δ1‖_2 < 1, then the sequence {z^(k)}_{k=0}^{+∞} generated by the GMMS method converges to the unique solution z^(∗) of the QCP (1.1) for any initial vector z^(0) ∈ Z.

    Proof. Since AΩ1 = MΩ1 − NΩ1 is an H-splitting of the matrix AΩ1, Â_{Ω1} = ⟨MΩ1⟩ − |NΩ1| is an M-matrix. Evidently, ⟨MΩ1⟩ ≥ Â_{Ω1}, so by Lemma 3.1, ⟨MΩ1⟩ is an M-matrix; hence MΩ1 is an H+-matrix and Ω2 + MΩ1 is an H+-matrix. Again, by application of Lemma 3.2, we have

    |(Ω2 + MΩ1)^{-1}| ≤ ⟨Ω2 + MΩ1⟩^{-1} = (Ω2 + ⟨MΩ1⟩)^{-1}.

    Subtracting (3.1) from (2.4) and taking absolute values, we obtain

    |z^(k+1) − z^(∗)| = |Ω1(|x^(j+1,k)| + x^(j+1,k)) + Φ(z^(k)) − Ω1(|x^(∗)| + x^(∗)) − Φ(z^(∗))|
    ≤ |Φ(z^(k)) − Φ(z^(∗))| + Ω1||x^(j+1,k)| − |x^(∗)|| + Ω1|x^(j+1,k) − x^(∗)|
    ≤ |Φ(z^(k)) − Φ(z^(∗))| + 2Ω1|x^(j+1,k) − x^(∗)|.

    Substituting x^(∗) into (2.3) and then subtracting gives

    |x^(j+1,k) − x^(∗)| = |(Ω2 + MΩ1)^{-1}[NΩ1(x^(j,k) − x^(∗)) + (Ω2 − AΩ1)(|x^(j,k)| − |x^(∗)|) − A(Φ(z^(k)) − Φ(z^(∗))) − (Ψ(z^(k)) − Ψ(z^(∗)))]|
    ≤ |(Ω2 + MΩ1)^{-1}|(|NΩ1| + |Ω2 − AΩ1|)|x^(j,k) − x^(∗)| + |(Ω2 + MΩ1)^{-1}||A||Φ(z^(k)) − Φ(z^(∗))| + |(Ω2 + MΩ1)^{-1}||Ψ(z^(k)) − Ψ(z^(∗))|
    ≤ (Ω2 + ⟨MΩ1⟩)^{-1}(|NΩ1| + |Ω2 − AΩ1|)|x^(j,k) − x^(∗)| + (Ω2 + ⟨MΩ1⟩)^{-1}(|A|l1 + l2I)|z^(k) − z^(∗)|
    = δ1|x^(j,k) − x^(∗)| + δ2|z^(k) − z^(∗)|.

    Then

    |z^(k+1) − z^(∗)| ≤ l1|z^(k) − z^(∗)| + 2Ω1(δ1|x^(j,k) − x^(∗)| + δ2|z^(k) − z^(∗)|)
    = 2Ω1δ1|x^(j,k) − x^(∗)| + (l1I + 2Ω1δ2)|z^(k) − z^(∗)|
    ≤ 2Ω1δ1^{j+1}|x^(0,k) − x^(∗)| + (2Ω1(δ1^j + δ1^{j−1} + ⋯ + δ1^0)δ2 + l1I)|z^(k) − z^(∗)|
    = 2Ω1δ1^{j+1}|x^(0,k) − x^(∗)| + (2Ω1(Σ_{i=0}^{j} δ1^i)δ2 + l1I)|z^(k) − z^(∗)|.

    From (3.2) and (2.2),

    |x^(0,k) − x^(∗)| = (1/2)|Ω1^{-1}z^(k) − Ω1^{-1}Φ(z^(k)) − Ω2^{-1}w^(k) − Ω1^{-1}z^(∗) + Ω1^{-1}Φ(z^(∗)) + Ω2^{-1}w^(∗)|
    = (1/2)|Ω1^{-1}(z^(k) − z^(∗)) − Ω1^{-1}(Φ(z^(k)) − Φ(z^(∗))) − Ω2^{-1}(Az^(k) − Az^(∗) + Ψ(z^(k)) − Ψ(z^(∗)))|
    ≤ (1/2)|Ω1^{-1} − Ω2^{-1}A||z^(k) − z^(∗)| + (|Ω1^{-1}|/2)|Φ(z^(k)) − Φ(z^(∗))| + (|Ω2^{-1}|/2)|Ψ(z^(k)) − Ψ(z^(∗))|
    ≤ (1/2)(|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|)|z^(k) − z^(∗)|.

    Thus,

    |z^(k+1) − z^(∗)| ≤ [Ω1δ1^{j+1}(|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|) + 2Ω1(Σ_{i=0}^{j} δ1^i)δ2 + l1I]|z^(k) − z^(∗)|.

    Let Ȳ = Ω1δ1^{j+1}(|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|) + 2Ω1(Σ_{i=0}^{j} δ1^i)δ2 + l1I; it remains to prove that ρ(Ȳ) < 1. We have

    ρ(Ȳ) ≤ ‖Ȳ‖_2 ≤ ‖Ω1δ1^{j+1}‖_2 ‖|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|‖_2 + 2‖Ω1‖_2‖δ2‖_2 Σ_{i=0}^{j} ‖δ1‖_2^i + l1.

    From Lemma 3.4, since ⟨MΩ1⟩ − |NΩ1| is generalized strictly diagonally dominant, there exists a positive diagonal matrix D such that (⟨MΩ1⟩ − |NΩ1|)D is s.d.d. Then, by Lemma 3.5, ρ(δ1) < 1 (refer to [26] for a detailed proof), and by Lemma 3.3, lim_{j→+∞} δ1^{j+1} = 0 holds. Note that ‖|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|‖_2 is a constant. Hence, there is a positive integer J1 such that for all j ≥ J1,

    ‖Ω1δ1^{j+1}‖_2 ‖|Ω1^{-1} − Ω2^{-1}A| + l1|Ω1^{-1}| + l2|Ω2^{-1}|‖_2 ≤ ε1

    for any ε1 > 0 (ε1 ≪ 1). Finally, we acquire

    ρ(Ȳ) ≤ ε1 + 2‖Ω1‖_2‖δ2‖_2 Σ_{i=0}^{j} ‖δ1‖_2^i + l1 ≤ ε1 + 2‖Ω1‖_2‖δ2‖_2/(1 − ‖δ1‖_2) + l1 = ε1 + (2‖Ω1‖_2‖δ2‖_2 + l1 − l1‖δ1‖_2)/(1 − ‖δ1‖_2) < ε1 + (2‖Ω1‖_2‖δ2‖_2 + l1)/(1 − ‖δ1‖_2) < 1.

    This completes the proof.

    Remark 3.1. In Theorem 3.1, AΩ1 = MΩ1 − NΩ1 is assumed to be an H-splitting of the matrix AΩ1, whereas the condition in [35, Theorem 3.3] requires A = M − N to be an H-compatible splitting of the matrix A. We present a simple example to show that our convergence condition is weakened from H-compatible splitting to H-splitting. Suppose that Ω1 = I, and that the matrix A and the splitting matrices of AΩ1 are

    A = ( 8  2 ; 3  4 ),  MΩ1 = ( 8  4 ; 0  4 ),  NΩ1 = ( 0  2 ; −3  0 ),  ⟨MΩ1⟩ − |NΩ1| = ( 8  −6 ; −3  4 ) ≠ ⟨AΩ1⟩ = ⟨A⟩.

    One can see from this simple example that ⟨MΩ1⟩ − |NΩ1| is an M-matrix although it differs from ⟨AΩ1⟩; hence this splitting does not satisfy the assumption of [35, Theorem 3.3], but it is admissible in Theorem 3.1.
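The claim in this example can be verified numerically. Below is a small sketch (helper name ours), using nonnegativity of the inverse as the numeric M-matrix test:

```python
import numpy as np

def comp(A):
    """Comparison matrix <A>."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[8.0, 2.0], [3.0, 4.0]])
M = np.array([[8.0, 4.0], [0.0, 4.0]])
N = M - A                      # N_{Omega1} = [[0, 2], [-3, 0]]  (Omega1 = I)
H = comp(M) - np.abs(N)        # <M> - |N| = [[8, -6], [-3, 4]]
assert np.all(np.linalg.inv(H) >= 0)   # Z-matrix with nonnegative inverse:
                                       # an M-matrix, so this is an H-splitting
assert not np.allclose(comp(A), H)     # but <A> != <M> - |N|: not H-compatible
```
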

    In this subsection, the convergence of the GMMS iteration method is analyzed when the system matrix A is a positive definite matrix.

    Theorem 3.2. Let Ω1, Ω2 ∈ R^{n×n} be two positive diagonal matrices, A ∈ R^{n×n} be a positive definite matrix and AΩ1 = MΩ1 − NΩ1 be a splitting of the matrix AΩ1. Define

    η1 = ‖(Ω2 + MΩ1)^{-1}NΩ1‖_2 + ‖(Ω2 + MΩ1)^{-1}(Ω2 − AΩ1)‖_2,
    η2 = ‖(Ω2 + MΩ1)^{-1}A‖_2 l1 + ‖(Ω2 + MΩ1)^{-1}‖_2 l2.

    Suppose that Φ(·) and Ψ(·) are Lipschitz continuous functions, i.e., for any s, t ∈ R^n,

    ‖Φ(s) − Φ(t)‖_2 ≤ l1‖s − t‖_2  and  ‖Ψ(s) − Ψ(t)‖_2 ≤ l2‖s − t‖_2,

    where l1, l2 are the Lipschitz constants. If

    η1^{j+1}‖Ω1‖_2((1 + l1)‖Ω1^{-1}‖_2 + (‖A‖_2 + l2)‖Ω2^{-1}‖_2) + 2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1 < 1 (3.3)

    holds true, then for any initial vector z(0)Z, the sequence {z(k)}+k=0 generated by GMMS iteration method converges to the unique solution z() of the QCP (1.1).

    Proof. Similar to the analysis of Theorem 3.1, subtracting (3.1) from (2.4) and taking the 2-norm on both sides gives the error estimate

    ‖z^(k+1) − z^(∗)‖_2 ≤ ‖Φ(z^(k)) − Φ(z^(∗))‖_2 + 2‖Ω1‖_2‖x^(j+1,k) − x^(∗)‖_2 ≤ l1‖z^(k) − z^(∗)‖_2 + 2‖Ω1‖_2‖x^(j+1,k) − x^(∗)‖_2.

    Substituting x^(∗) into (2.3) and subtracting, we have

    ‖x^(j+1,k) − x^(∗)‖_2 = ‖(Ω2 + MΩ1)^{-1}[NΩ1(x^(j,k) − x^(∗)) + (Ω2 − AΩ1)(|x^(j,k)| − |x^(∗)|) − A(Φ(z^(k)) − Φ(z^(∗))) − (Ψ(z^(k)) − Ψ(z^(∗)))]‖_2
    ≤ ‖(Ω2 + MΩ1)^{-1}NΩ1‖_2‖x^(j,k) − x^(∗)‖_2 + ‖(Ω2 + MΩ1)^{-1}(Ω2 − AΩ1)‖_2‖x^(j,k) − x^(∗)‖_2 + ‖(Ω2 + MΩ1)^{-1}A‖_2‖Φ(z^(k)) − Φ(z^(∗))‖_2 + ‖(Ω2 + MΩ1)^{-1}‖_2‖Ψ(z^(k)) − Ψ(z^(∗))‖_2
    ≤ η1‖x^(j,k) − x^(∗)‖_2 + η2‖z^(k) − z^(∗)‖_2.

    Then

    ‖z^(k+1) − z^(∗)‖_2 ≤ l1‖z^(k) − z^(∗)‖_2 + 2‖Ω1‖_2(η1‖x^(j,k) − x^(∗)‖_2 + η2‖z^(k) − z^(∗)‖_2) ≤ 2η1^{j+1}‖Ω1‖_2‖x^(0,k) − x^(∗)‖_2 + (2η2‖Ω1‖_2 Σ_{i=0}^{j} η1^i + l1)‖z^(k) − z^(∗)‖_2.

    Subtracting (3.2) from (2.2) and taking the 2-norm, we have

    ‖x^(0,k) − x^(∗)‖_2 = (1/2)‖Ω1^{-1}z^(k) − Ω1^{-1}Φ(z^(k)) − Ω2^{-1}w^(k) − Ω1^{-1}z^(∗) + Ω1^{-1}Φ(z^(∗)) + Ω2^{-1}w^(∗)‖_2
    = (1/2)‖Ω1^{-1}(z^(k) − z^(∗)) − Ω1^{-1}(Φ(z^(k)) − Φ(z^(∗))) − Ω2^{-1}(Az^(k) − Az^(∗) + Ψ(z^(k)) − Ψ(z^(∗)))‖_2
    ≤ (1/2)‖Ω1^{-1} − Ω2^{-1}A‖_2‖z^(k) − z^(∗)‖_2 + (l1/2)‖Ω1^{-1}‖_2‖z^(k) − z^(∗)‖_2 + (l2/2)‖Ω2^{-1}‖_2‖z^(k) − z^(∗)‖_2
    = (1/2)(‖Ω1^{-1} − Ω2^{-1}A‖_2 + l1‖Ω1^{-1}‖_2 + l2‖Ω2^{-1}‖_2)‖z^(k) − z^(∗)‖_2.

    Thus, the following inequality holds:

    ‖z^(k+1) − z^(∗)‖_2 ≤ [η1^{j+1}‖Ω1‖_2(‖Ω1^{-1} − Ω2^{-1}A‖_2 + l1‖Ω1^{-1}‖_2 + l2‖Ω2^{-1}‖_2) + 2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1]‖z^(k) − z^(∗)‖_2
    ≤ [η1^{j+1}‖Ω1‖_2(‖Ω1^{-1}‖_2 + ‖Ω2^{-1}‖_2‖A‖_2 + l1‖Ω1^{-1}‖_2 + l2‖Ω2^{-1}‖_2) + 2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1]‖z^(k) − z^(∗)‖_2
    = [η1^{j+1}‖Ω1‖_2((1 + l1)‖Ω1^{-1}‖_2 + (‖A‖_2 + l2)‖Ω2^{-1}‖_2) + 2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1]‖z^(k) − z^(∗)‖_2.

    It can be seen from (3.3) that the iteration sequence of Method 2.1 converges to the unique solution of the QCP (1.1) for any initial vector z^(0) ∈ Z. This completes the proof.

    In particular, if Ω1 = ω1I and Ω2 = ω2I ∈ R^{n×n} are positive scalar matrices, define τ = ‖A‖_2; then

    ‖z^(k+1) − z^(∗)‖_2 ≤ [η1^{j+1}(1 + l1 + (ω1/ω2)(τ + l2)) + 2ω1 η2 Σ_{i=0}^{j} η1^i + l1]‖z^(k) − z^(∗)‖_2,

    and naturally we can draw the following corollary.

    Corollary 3.1. Let Ω1 = ω1I, Ω2 = ω2I ∈ R^{n×n} be two positive scalar matrices, AΩ1 = MΩ1 − NΩ1 be a splitting of the matrix AΩ1 and A ∈ R^{n×n} be a positive definite matrix. Define τ = ‖A‖_2, η1 = ‖(Ω2 + MΩ1)^{-1}NΩ1‖_2 + ‖(Ω2 + MΩ1)^{-1}(Ω2 − AΩ1)‖_2 and η2 = ‖(Ω2 + MΩ1)^{-1}A‖_2 l1 + ‖(Ω2 + MΩ1)^{-1}‖_2 l2. Suppose that Φ(·) and Ψ(·) are Lipschitz continuous functions, i.e., for any s, t ∈ R^n,

    ‖Φ(s) − Φ(t)‖_2 ≤ l1‖s − t‖_2  and  ‖Ψ(s) − Ψ(t)‖_2 ≤ l2‖s − t‖_2,

    where l1, l2 are the Lipschitz constants. If

    η1^{j+1}(1 + l1 + (ω1/ω2)(τ + l2)) + 2ω1 η2 Σ_{i=0}^{j} η1^i + l1 < 1

    holds, then for any initial vector z^(0) ∈ Z, the sequence {z^(k)}_{k=0}^{+∞} generated by the GMMS iteration method converges to the unique solution z^(∗) of the QCP (1.1).

    Suppose that MΩ1 ∈ R^{n×n} is a symmetric positive definite matrix and Ω2 = ωI ∈ R^{n×n} is a positive scalar matrix; a specific sufficient condition for convergence is discussed in the next theorem.

    Theorem 3.3. Let AΩ1 = MΩ1 − NΩ1 be a splitting of the matrix AΩ1 ∈ R^{n×n} with MΩ1 ∈ R^{n×n} symmetric positive definite and Ω2 = ωI ∈ R^{n×n} a positive scalar matrix. Let λmax and λmin denote the largest and smallest eigenvalues of the matrix MΩ1, and define τ1 = ‖MΩ1^{-1}NΩ1‖_2 and τ2 = ‖MΩ1^{-1}(Ω2 − AΩ1)‖_2. If the Lipschitz constant l1 and the iteration parameter ω satisfy

    (2‖Ω1‖_2 η2 + l1)/(1 − η1) < 1,  ω > λmax(τ1 + τ2 − 1)  and  η1 < 1, (3.4)

    then the iteration sequence {z^(k)}_{k=0}^{+∞} generated by Method 2.1 converges to the unique solution z^(∗) of the QCP (1.1) for any initial vector z^(0) ∈ Z.

    Proof. We only need to verify that condition (3.3) holds. From the properties of the matrices MΩ1 and Ω2, we have

    ‖(Ω2 + MΩ1)^{-1}NΩ1‖_2 ≤ ‖(Ω2 + MΩ1)^{-1}MΩ1‖_2‖MΩ1^{-1}NΩ1‖_2 = ‖(ωI + MΩ1)^{-1}MΩ1‖_2 τ1 = max_{λ∈sp(MΩ1)} λτ1/(ω + λ) = λmax τ1/(ω + λmax).

    Analogously, we have

    ‖(Ω2 + MΩ1)^{-1}(Ω2 − AΩ1)‖_2 ≤ ‖(Ω2 + MΩ1)^{-1}MΩ1‖_2‖MΩ1^{-1}(Ω2 − AΩ1)‖_2 = ‖(ωI + MΩ1)^{-1}MΩ1‖_2 τ2 = max_{λ∈sp(MΩ1)} λτ2/(ω + λ) = λmax τ2/(ω + λmax).

    Hence, when ω > λmax(τ1 + τ2 − 1), it holds that

    η1 ≤ λmax τ1/(ω + λmax) + λmax τ2/(ω + λmax) = λmax(τ1 + τ2)/(ω + λmax) < 1,

    and accordingly lim_{j→+∞} η1^{j+1} = 0. Note that ‖Ω1‖_2((1 + l1)‖Ω1^{-1}‖_2 + (‖A‖_2 + l2)‖Ω2^{-1}‖_2) is a constant. Therefore, for any ε2 > 0, there exists a positive integer J2 such that for all j ≥ J2, the following inequality holds:

    η1^{j+1}‖Ω1‖_2((1 + l1)‖Ω1^{-1}‖_2 + (1/ω)(‖A‖_2 + l2)) < ε2. (3.5)

    From η1 < 1, we can derive

    2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1 ≤ 2‖Ω1‖_2 η2/(1 − η1) + l1 < (2‖Ω1‖_2 η2 + l1)/(1 − η1). (3.6)

    Combining (3.3), (3.5) and (3.6), for a small enough ε2 > 0 (ε2 ≪ 1) and all j ≥ J2, it follows that

    η1^{j+1}‖Ω1‖_2((1 + l1)‖Ω1^{-1}‖_2 + (1/ω)(‖A‖_2 + l2)) + 2‖Ω1‖_2 η2 Σ_{i=0}^{j} η1^i + l1 < ε2 + (2‖Ω1‖_2 η2 + l1)/(1 − η1) < 1,

    provided that condition (3.4) holds; hence condition (3.3) is satisfied for j large enough. This completes the proof.

    In this section, two numerical experiments are performed to verify the effectiveness of Method 2.1 for solving the QCP (1.1). Both experiments investigate the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted by "CPU") and the residual error (denoted by "RES"). Experiment 1 studies a symmetric matrix, while Experiment 2 focuses on a nonsymmetric case.

    In our experiments, "RES" is defined as

    RES(z^(k)) = |(Az^(k) + q + Ψ(z^(k)))^T(z^(k) − Φ(z^(k)))|,

    and the initial vector is chosen as z^(0) = (0, 0, …, 0)^T ∈ R^n. For Method 1.1, we take Ω = 7I and γ = 1. For Method 2.1, we take Ω1 = 0.06DA with DA = diag(A), and Ω2 = I. The parameters α and β are the experimentally found optimal ones, i.e., those leading to the fewest iteration steps. The number of inner iterations is fixed at j = 2 in both Methods 1.1 and 2.1 to simplify the process. The termination criterion is RES(z^(k)) ≤ 10^{-6}, or k reaching the maximum number of iterations, e.g., 50. Both experiments are performed in MATLAB (R2018b), with all variables stored in double precision, on a machine with an Intel(R) Core(TM) i7-10710U CPU and 16 GB RAM under the Windows 10 operating system. Table 1 lists the abbreviations used in the following description of the experiments.
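For reference, the residual and stopping settings above can be written as the following sketch (function and variable names ours; the 2×2 data are arbitrary illustrative values). Note that, as in Step 3 of Methods 1.1 and 2.1, RES is evaluated at the updated iterate, since RES is identically zero at the zero start whenever Φ(0) = 0:

```python
import numpy as np

def RES(A, q, z, Phi, Psi):
    """RES(z) = |(A z + q + Psi(z))^T (z - Phi(z))|, the stopping residual."""
    return abs((A @ z + q + Psi(z)) @ (z - Phi(z)))

# Settings mirroring this section: zero start, tol = 1e-6, at most 50 outer steps.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 1.0])
Phi = lambda z: np.zeros_like(z)
Psi = lambda z: 0.01 * np.sin(z)
z0 = np.zeros(2)
tol, kmax = 1e-6, 50
r0 = RES(A, q, z0, Phi, Psi)   # = 0.0 at z0, although z0 need not solve the QCP
```
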

    Table 1.  Test methods.
    Method Description
    MM The modulus-based iteration method
    MGS The modulus-based Gauss-Seidel iteration method
    MSOR The modulus-based SOR iteration method
    MAOR The modulus-based AOR iteration method
    GMGS The general modulus-based Gauss-Seidel iteration method
    GMSOR The general modulus-based SOR iteration method
    GMAOR The general modulus-based AOR iteration method


    Let m be a positive integer and n = m². Consider the QCP (1.1), in which A = Â + μI ∈ R^{n×n} and q ∈ R^n are defined as follows:

    Â = Tridiag(−I, S, −I) ∈ R^{n×n}

    is a symmetric block tridiagonal matrix with S on the diagonal blocks and −I on the sub- and superdiagonal blocks,

    S = Tridiag(−1, 4, −1) ∈ R^{m×m}

    is a tridiagonal matrix, and

    q = (−1, 1, −1, 1, …, (−1)^{n−1}, (−1)^n)^T;

    the point-to-point mapping Φ(·) and the nonlinear transformation Ψ(·) are defined as

    Φ(z) = (arctan(z1), arctan(z2), …, arctan(zn))^T ∈ R^n
    and Ψ(z) = (σ sin(z1), σ sin(z2), …, σ sin(zn))^T ∈ R^n.

    In Experiment 1, we take μ = 0 and μ = 1, respectively. Five different sizes of the matrix A are analyzed, with n taking the values 900, 3600, 14400, 57600 and 230400. For each matrix size, the seven iteration methods are employed with σ = 0.01. The three performance indicators are given in Tables 2 and 3 for μ = 0 and μ = 1, respectively. In addition, the influence of σ is investigated by giving it three different values, 0.001, 0.01 and 0.1, for n = 14400 and μ = 1. The results are shown in Table 4.
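The Experiment 1 test problem can be assembled with Kronecker products; the following is a minimal NumPy construction (function name ours), assuming the componentwise arctan form of Φ and the alternating pattern q_i = (−1)^i given above:

```python
import numpy as np

def experiment1_problem(m, mu=0.0, sigma=0.01):
    """Assemble A = hatA + mu*I, hatA = Tridiag(-I, S, -I), S = Tridiag(-1, 4, -1),
    together with q, Phi, Psi of Experiment 1 (n = m^2)."""
    n = m * m
    S = 4*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    hatA = (np.kron(np.eye(m), S)
            - np.kron(np.eye(m, k=1), np.eye(m))
            - np.kron(np.eye(m, k=-1), np.eye(m)))
    A = hatA + mu*np.eye(n)
    q = np.array([(-1.0)**i for i in range(1, n + 1)])   # q_i = (-1)^i
    Phi = lambda z: np.arctan(z)          # componentwise arctan
    Psi = lambda z: sigma*np.sin(z)       # componentwise sigma*sin
    return A, q, Phi, Psi

A, q, Phi, Psi = experiment1_problem(4, mu=1.0)
```

For μ = 1 the shifted matrix is strictly diagonally dominant with positive diagonal, hence an H+-matrix; for large n a sparse format (e.g., compressed sparse row) would be used instead of dense arrays.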

    Table 2.  Numerical results for Experiment 1 with μ=0, σ=0.01.
    Methods n
    900 3600 14400 57600 230400
    MM IT 10 11 12 13 14
    CPU 0.0551 0.0309 0.1023 0.5239 2.9227
    RES 9.10E-7 7.56E-7 5.99E-7 4.66E-7 3.59E-7
    MGS IT 12 13 14 15 16
    CPU 0.0017 0.0062 0.0345 0.1518 0.8573
    RES 4.84E-7 5.19E-7 5.24E-7 5.15E-7 5.01E-7
    MSOR α 2.0 1.9 1.8 1.7 1.6
    IT 9 10 11 12 13
    CPU 0.0012 0.0050 0.0233 0.1347 0.6957
    RES 9.23E-7 8.74E-7 8.45E-7 8.68E-7 9.70E-7
    MAOR (α, β) (2.0, 2.0) (1.7, 2.0) (1.5, 2.0) (1.3, 2.0) (1.2, 2.0)
    IT 9 10 11 12 13
    CPU 0.0011 0.0050 0.0278 0.1339 0.6600
    RES 9.23E-7 9.45E-7 9.16E-7 9.98E-7 9.49E-7
    GMGS IT 7 7 8 8 9
    CPU 0.0097 0.0033 0.0165 0.0795 0.4487
    RES 8.53E-8 3.15E-7 7.99E-8 2.94E-7 7.36E-8
    GMSOR α 1.7 2.6 2 2.1 1.7
    IT 4 3 4 4 5
    CPU 0.0006 0.0015 0.0078 0.0409 0.2512
    RES 8.55E-7 4.43E-7 7.89E-7 6.94E-7 7.24E-7
    GMAOR (α, β) (1.5, 2.0) (1.8, 2.0) (2.0, 1.8) (1.3, 2.0) (1.5, 1.9)
    IT 4 4 4 5 5
    CPU 0.0006 0.0025 0.0081 0.0509 0.2451
    RES 9.64E-7 9.79E-7 9.53E-7 9.20E-7 9.11E-7

    Table 3.  Numerical results for Experiment 1 with μ=1, σ=0.01.
    Methods n
    900 3600 14400 57600 230400
    MM IT 6 7 7 8 8
    CPU 0.0034 0.0147 0.0731 0.3915 2.2515
    RES 4.93E-7 1.42E-7 5.78E-7 1.60E-7 6.39E-7
    MGS IT 7 8 9 9 10
    CPU 0.0044 0.0040 0.0147 0.0830 0.4336
    RES 5.46E-7 2.42E-7 1.03E-7 4.20E-7 1.74E-7
    MSOR α 1.8 1.5 1.8 2.0 1.7
    IT 5 6 6 6 7
    CPU 0.0006 0.0026 0.0094 0.0621 0.3162
    RES 7.62E-7 7.81E-7 6.04E-7 9.13E-7 8.13E-7
    MAOR (α, β) (1.6, 2.0) (1.2, 2.0) (1.6, 1.9) (2.0, 2.0) (1.5, 2.0)
    IT 5 6 6 6 7
    CPU 0.0006 0.0029 0.0109 0.0612 0.4298
    RES 8.97E-7 8.09E-7 9.01E-7 9.13E-7 7.97E-7
    GMGS IT 5 5 6 6 6
    CPU 0.0015 0.003 0.0127 0.0601 0.2854
    RES 3.85E-8 2.26E-7 2.30E-8 1.10E-7 4.71E-7
    GMSOR α 0.6 0.6 0.6 0.5 0.6
    IT 4 4 4 5 5
    CPU 0.0006 0.0018 0.0077 0.0530 0.2709
    RES 3.14E-7 3.89E-7 3.27E-7 5.02E-7 1.69E-7
    GMAOR (α, β) (0.6, 0.3) (0.6, 0.3) (0.8, 0.3) (0.9, 0.3) (0.7, 0.3)
    IT 3 3 3 3 4
    CPU 0.0004 0.0015 0.0072 0.0435 0.2783
    RES 5.74E-7 4.20E-8 2.39E-7 3.02E-7 2.89E-7

    Table 4.  Numerical results for Experiment 1 with μ=1, n=14400.
    Ψ(z) Methods IT CPU RES Methods IT CPU RES
    0.1sin(z) MGS 8 0.0176 9.39E-8 GMGS 7 0.022 7.18E-7
    MSOR 4 0.0072 8.08E-7 GMSOR 6 0.0111 9.91E-7
    MAOR 4 0.0069 8.08E-7 GMAOR 4 0.0080 2.94E-7
    0.01sin(z) MGS 9 0.0147 1.03E-7 GMGS 6 0.0127 2.30E-8
    MSOR 6 0.0094 6.04E-7 GMSOR 4 0.0077 3.27E-7
    MAOR 6 0.0109 9.01E-7 GMAOR 3 0.0072 2.39E-7
    0.001sin(z) MGS 9 0.0278 1.30E-7 GMGS 5 0.0148 5.46E-7
    MSOR 6 0.0115 8.70E-7 GMSOR 4 0.0093 4.32E-7
    MAOR 6 0.0136 9.37E-7 GMAOR 3 0.0063 1.31E-7


    The findings are as follows:

    (a) For most methods (except the GMSOR and GMAOR methods when μ = 0, σ = 0.01 and n = 3600), the number of iteration steps increases with the size of the matrix. However, all the methods converge rapidly regardless of the size n.

    (b) MGS requires the largest number of iteration steps, while the proposed GMAOR method needs the fewest. The proposed GMSOR method achieves a performance similar to that of GMAOR.

    (c) The three proposed methods, GMGS, GMSOR and GMAOR, demonstrate a clear improvement in terms of iteration steps and elapsed CPU time; the GMMS-type methods use almost half the iteration steps of the corresponding MMS-type methods, especially when μ = 0.

    (d) The three proposed methods show close performance on all three indicators; however, GMAOR and GMSOR need extra optimization of the parameters α and β to complete the calculation.

    Given that m to be a positive integer and n=m2 in the QCP (1.1), where A=ˆA+μIRn×n and qRn are defined as follows.

    ˆA=Tridiag(1.5I,S,0.5I)=[S0.5I0001.5IS0.5I0001.5IS00000S0.5I0001.5IS]Rn×n

    is a nonsymmetric block tridiagonal matrix,

\[
S=\operatorname{Tridiag}(-1.5,\,4,\,-0.5)=\begin{bmatrix} 4 & -0.5 & 0 & \cdots & 0 & 0\\ -1.5 & 4 & -0.5 & \cdots & 0 & 0\\ 0 & -1.5 & 4 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 4 & -0.5\\ 0 & 0 & 0 & \cdots & -1.5 & 4 \end{bmatrix}\in\mathbb{R}^{m\times m}
\]

is a tridiagonal matrix, and q is again

\[
q=(-1,\,1,\,-1,\,1,\,\ldots,\,(-1)^{n-1},\,(-1)^{n})^{T}\in\mathbb{R}^{n},
\]

Φ(z) is the same as in Experiment 1, and Ψ(z) is defined as

\[
\Psi(z)=(0.01\sin(z_1),\,0.01\sin(z_2),\,\ldots,\,0.01\sin(z_n))^{T}\in\mathbb{R}^{n}.
\]

In this experiment, we also take μ=0 and μ=1. Five matrix sizes are considered, i.e., n=900, 3600, 14400, 57600, 230400. The results are shown in Tables 5 and 6 for μ=0 and μ=1, respectively.
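The test problem of Experiment 2 can be assembled sparsely via Kronecker products. A minimal sketch, assuming scipy is available (`experiment2_data` is a hypothetical helper name; only the problem construction is shown, not the authors' solver):

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def experiment2_data(m, mu=0.0):
    """Assemble A = A_hat + mu*I, q, and Psi for Experiment 2, with n = m^2."""
    n = m * m
    S = diags([-1.5, 4.0, -0.5], [-1, 0, 1], shape=(m, m))   # tridiag(-1.5, 4, -0.5)
    T = diags([-1.5, -0.5], [-1, 1], shape=(m, m))           # block sub/superdiagonals
    A = kron(identity(m), S) + kron(T, identity(m)) + mu * identity(n)
    q = np.array([(-1.0) ** i for i in range(1, n + 1)])     # (-1, 1, -1, 1, ...)
    psi = lambda z: 0.01 * np.sin(z)                         # Psi(z) componentwise
    return A.tocsr(), q, psi

A, q, psi = experiment2_data(30, mu=1.0)   # n = 900, the smallest size tested
```

The CSR format keeps the memory cost O(n), which is what makes the largest size n=230400 tractable.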

    Table 5.  Numerical results for Experiment 2 with μ=0.
Methods          n=900      n=3600     n=14400    n=57600    n=230400
    MM IT 10 11 12 13 14
    CPU 0.0044 0.0187 0.0895 0.4898 2.8718
    RES 7.52E-7 6.88E-7 5.71E-7 4.54E-7 3.54E-7
    MGS IT 11 12 13 14 15
    CPU 0.0087 0.0059 0.0222 0.1382 0.7027
    RES 5.39E-7 5.50E-7 5.11E-7 4.57E-7 4.01E-7
    MSOR α 1.5 2.0 1.8 1.7 1.6
    IT 9 9 10 11 12
    CPU 0.0013 0.0040 0.0192 0.1106 0.5986
    RES 9.63E-7 9.07E-7 9.94E-7 8.82E-7 8.57E-7
    MAOR (α, β) (0.6, 1.9) (1.9, 2.0) (1.5, 2.0) (1.4, 1.9) (1.2, 1.9)
    IT 8 9 10 11 12
    CPU 0.0010 0.0046 0.0197 0.1183 0.6338
    RES 2.87E-7 9.83E-7 9.95E-7 8.71E-7 9.56E-7
    GMGS IT 5 6 6 7 7
    CPU 0.0012 0.0035 0.0134 0.0902 0.3350
    RES 8.64E-7 1.52E-7 3.94E-7 5.90E-8 1.49E-7
    GMSOR α 1.6 1.1 1.2 1.3 1.4
    IT 3 5 5 5 5
    CPU 0.0005 0.0028 0.0156 0.0502 0.2494
    RES 3.91E-7 7.28E-7 6.02E-7 5.04E-7 4.09E-7
    GMAOR (α, β) (1.6, 1.6) (1.5, 1.6) (2.0, 1.5) (0.9, 1.4) (1.3, 1.4)
    IT 3 3 4 5 5
    CPU 0.0004 0.0016 0.0094 0.0575 0.2558
    RES 3.91E-7 2.79E-7 3.54E-7 3.59E-7 7.11E-7

    Table 6.  Numerical results for Experiment 2 with μ=1.
Methods          n=900      n=3600     n=14400    n=57600    n=230400
    MM IT 6 7 7 8 8
    CPU 0.0037 0.0148 0.0708 0.3984 2.2429
    RES 4.38E-7 1.33E-7 5.59E-7 1.57E-7 6.33E-7
    MGS IT 7 7 8 8 9
    CPU 0.0019 0.0034 0.0146 0.0795 0.4032
    RES 1.31E-7 5.82E-7 2.05E-7 8.36E-7 2.85E-7
    MSOR α 1.5 1.8 1.5 1.7 1.4
    IT 5 5 6 6 7
    CPU 0.0007 0.0024 0.0132 0.0602 0.3247
    RES 7.39E-7 9.00E-7 6.03E-7 8.29E-7 9.41E-7
    MAOR (α, β) (1.3, 1.8) (1.8, 1.8) (1.3, 1.7) (1.7, 1.6) (1.3, 1.6)
    IT 5 5 6 6 7
    CPU 0.0008 0.0026 0.0126 0.0632 0.3491
    RES 9.55E-7 9.00E-7 8.74E-7 9.99E-7 8.05E-7
    GMGS IT 5 5 6 6 7
    CPU 0.0011 0.0028 0.0137 0.0631 0.347
    RES 1.30E-7 6.72E-7 1.03E-7 4.32E-7 6.02E-8
    GMSOR α 1.1 0.6 1.1 1.2 1.1
    IT 4 5 5 5 6
    CPU 0.0006 0.0028 0.0116 0.0523 0.3078
    RES 5.91E-7 3.37E-7 2.43E-7 5.59E-7 7.39E-8
    GMAOR (α, β) (0.7, 0.1) (0.7, 0.2) (0.8, 0.2) (0.7, 0.2) (0.7, 0.2)
    IT 3 3 3 4 4
    CPU 0.0005 0.0019 0.0067 0.0442 0.1899
    RES 6.57E-7 4.48E-7 7.09E-7 5.72E-9 1.30E-8


    From Tables 5 and 6, we find:

(a) The GMAOR method requires the fewest iterations among the three proposed methods; among the four conventional methods, MAOR requires the fewest.

(b) The GMAOR method performs better when μ=1 than when μ=0, since the system matrix A is strictly diagonally dominant when μ=1.

(c) The GMMS methods achieve similar performance regardless of the chosen value of n.
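The diagonal-dominance claim in point (b) can be checked directly: adding μI strengthens the diagonal of \hat{A}. A small dense sketch on a reduced size (helper names are illustrative):

```python
import numpy as np

def a_hat_dense(m):
    """Dense A_hat = Tridiag(-1.5*I, S, -0.5*I) with S = tridiag(-1.5, 4, -0.5)."""
    S = (np.diag(np.full(m, 4.0))
         + np.diag(np.full(m - 1, -1.5), -1)
         + np.diag(np.full(m - 1, -0.5), 1))
    T = np.diag(np.full(m - 1, -1.5), -1) + np.diag(np.full(m - 1, -0.5), 1)
    return np.kron(np.eye(m), S) + np.kron(T, np.eye(m))

def strictly_diag_dominant(A):
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d      # row sums of off-diagonal magnitudes
    return bool(np.all(d > off))

A_hat = a_hat_dense(10)                  # n = 100, small enough for a dense check
# interior rows give |4| = 1.5 + 0.5 + 1.5 + 0.5, so A_hat itself is not
# strictly dominant, while A_hat + I (the case mu = 1) is
```

This matches the observation that the μ=1 runs converge in fewer iterations than the μ=0 runs.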

The experiments verify that the proposed GMMS iteration methods, including GMGS, GMSOR and GMAOR, are considerably more efficient than the conventional MMS-based methods: both the running time and the number of iteration steps are reduced. The effectiveness of the proposed methods holds regardless of whether the system matrix is symmetric.

In particular, the GMGS method converges without the optimization of α and β required by GMSOR and GMAOR, at the cost of a slight increase in iteration steps, which makes it more useful in practice. Compared with the conventional methods, the three proposed methods involve two scalar matrices rather than one, which adds a choice to be made in the computation; however, the benefit of the proposed methods outweighs this cost.

For solving the QCP (1.1), general modulus-based matrix splitting iteration methods, including GMGS, GMSOR and GMAOR, are proposed. They are analogous to the MMS methods but achieve a better convergence rate. Two experiments were performed to verify the effectiveness of the proposed methods, taking the symmetry of the system matrix into account, and the results indicate that the new methods are more efficient on all three indicators. Moreover, the stringent H-compatible splitting condition required by the classic methods is relaxed to a simpler one, namely an H-splitting. The GMGS method converges without requiring optimization of the parameters α and β, which makes it more useful in practice.
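For readers unfamiliar with the modulus framework, the sketch below illustrates the classical modulus-based Gauss-Seidel iteration on the simplest special case, the plain LCP (i.e., Ψ ≡ 0, with Ω = I and splitting A = M − N, M = tril(A)). It is a generic illustration under those assumptions, not the authors' GMMS implementation:

```python
import numpy as np

def modulus_gs_lcp(A, q, tol=1e-10, max_it=500):
    """Modulus-based Gauss-Seidel iteration for the LCP:
    find z >= 0 with w = A z + q >= 0 and z^T w = 0.
    Iterates (M + Omega) x_{k+1} = N x_k + (Omega - A)|x_k| - q,
    then recovers z = |x| + x."""
    n = len(q)
    Omega = np.eye(n)
    M = np.tril(A)                       # lower triangle, including the diagonal
    N = M - A                            # so that A = M - N
    x = np.zeros(n)
    for _ in range(max_it):
        rhs = N @ x + (Omega - A) @ np.abs(x) - q
        x_new = np.linalg.solve(M + Omega, rhs)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    return np.abs(x) + x                 # z = |x| + x

# small strictly diagonally dominant H_+-matrix test case
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
q = np.array([-1.0, 1.0, -1.0])
z = modulus_gs_lcp(A, q)
w = A @ z + q                            # complementarity: z >= 0, w >= 0, z.w = 0
```

The QCP adds the nonlinear term Ψ(z) to the inner system, and the GMMS variants generalize the single matrix Ω to two scalar matrices, but the fixed-point structure is the same.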

    This work was supported by the QingLan Project of Jiangsu Province, the '226' Talent Scientific Research Project of Nantong City, and the Science and Technology Project of Nantong City (No. JC2021198).

The authors declare no conflicts of interest in publishing this article.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
