
A two-step iteration method for solving vertical nonlinear complementarity problems

    Abstract: In this paper, for vertical nonlinear complementarity problems, a two-step modulus-based matrix splitting iteration method is established by applying the two-step splitting technique to the modulus-based matrix splitting iteration method. Convergence theorems for the proposed method are given when the number of system matrices is greater than two. Numerical results show that the convergence of the proposed method is faster than that of the existing modulus-based matrix splitting iteration method.

    Citation: Wenxiu Guo, Xiaoping Lu, Hua Zheng. A two-step iteration method for solving vertical nonlinear complementarity problems[J]. AIMS Mathematics, 2024, 9(6): 14358-14375. doi: 10.3934/math.2024698




    Let $M_1,M_2,\ldots,M_s\in\mathbb{R}^{n\times n}$, and let $u_1(z),u_2(z),\ldots,u_s(z):\mathbb{R}^n\to\mathbb{R}^n$ be $s$ nonlinear mappings. The vertical nonlinear complementarity problem (abbreviated as VNCP) is defined as follows: find $z\in\mathbb{R}^n$ such that

    $$r_i=M_iz+u_i(z),\quad 1\le i\le s,\qquad\text{with}\quad\min\{z,r_1,\ldots,r_s\}=0.\tag{1.1}$$

    Here, the minimum operation is taken component-wise.
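As a quick illustration, the componentwise residual in (1.1) can be checked numerically. The following sketch (the helper name and data layout are illustrative, not from the paper) returns $\min\{z,r_1,\ldots,r_s\}$, which is the zero vector exactly at a solution:

```python
import numpy as np

def vncp_residual(z, Ms, us):
    """Componentwise residual min{z, r_1, ..., r_s} of the VNCP (1.1),
    with r_i = M_i z + u_i(z); z solves the VNCP iff the residual is 0.
    `Ms` is a list of s matrices, `us` a list of s callables."""
    rs = [M @ z + u(z) for M, u in zip(Ms, us)]
    return np.minimum.reduce([z] + rs)
```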

    The VNCP has many applications in generalized Leontief input-output models, control theory, nonlinear networks, contact problems in lubrication, and so on; e.g., see [11,18,34,35,39]. Taking $u_i(z)\equiv u_i\in\mathbb{R}^n$, $1\le i\le s$, the VNCP reduces to the vertical linear complementarity problem (abbreviated as VLCP) [9,20,40]. Further, taking $s=1$, the VLCP reduces to the linear complementarity problem (abbreviated as LCP) [10].

    In the earlier literature, there are several iterative methods for such problems. In [39], Mangasarian's general iterative algorithm was applied to solve the VLCP. Solving an approximate equation of the nonlinear complementarity problem by a continuation method can be extended to solve the VNCP; see [8]. Furthermore, in [37] the authors approximated the equivalent minimum equation of the VNCP by a sequence of aggregation functions and found the zero of the approximated problems. Moreover, some interior-point methods were applied to solve complementarity problems, such as the LCP [24,29], nonlinear complementarity problems (abbreviated as NCP) [36], and the horizontal LCP [46]. In recent years, modulus-based matrix splitting (abbreviated as MMS) iteration methods have gained popularity for the numerical solution of various complementarity problems. References [2,12,15,16] detail their application to the LCP, while [13,32,50] focus on the horizontal LCP. The second-order cone LCP is discussed in [27], implicit complementarity problems in [7,14,25], the quasi-complementarity problem in [38], the NCP in [41], and the circular cone NCP in [28]. Numerical examples have demonstrated that MMS iteration methods often outperform state-of-the-art smoothing Newton methods in practical applications. Specifically, for the VLCP, a special case of the VNCP, the MMS iteration method was introduced in [31]. Alternatively, an MMS iteration method based on an equivalent modulus equation without auxiliary variables was presented in [23] and shown to be more efficient than the method in [31]. Accelerated techniques and improved results for MMS iteration methods for the VLCP are further detailed in [21,48,49,51]. On the other hand, projected-type methods were also used to solve the VLCP; see [6,33]. For the VNCP, the only existing work on the MMS iteration method is [42].

    To improve the convergence rate of the MMS iteration method for solving the VNCP, in this work we aim to construct a two-step MMS iteration method. The two-step splitting technique has been successfully used for other complementarity problems, e.g., see [43,44,45,52]; its main idea is to replace the single iteration in the MMS method by two iterations based on two matrix splittings of the system matrices, which makes fuller use of the information of the matrices for acceleration.

    In the following, after presenting some required preliminaries, the new two-step MMS iteration method is established in Section 2. The convergence analysis of the proposed method is given in Section 3, which can generalize and improve the results in [42]. Some numerical examples are presented to illustrate the efficiency of the proposed method in Section 4, and concluding remarks of the whole work are presented in Section 5.

    First, some notations, definitions, and existing results needed in the following discussion are introduced.

    Let $M=(m_{ij})\in\mathbb{R}^{n\times n}$ and $M=D_M-B_M$, where $D_M$ is the diagonal part of $M$ and $-B_M$ is its nondiagonal part. For two matrices $M$ and $N$, the inequality $M\ge(>)\,N$ means $m_{ij}\ge(>)\,n_{ij}$ for every $i,j$. Let $|M|=(|m_{ij}|)$, and denote the comparison matrix of $M$ by $\langle M\rangle=(\langle m\rangle_{ij})$, where $\langle m\rangle_{ij}=|m_{ij}|$ if $i=j$ and $\langle m\rangle_{ij}=-|m_{ij}|$ if $i\ne j$. If $m_{ij}\le 0$ for any $i\ne j$, $M$ is called a Z-matrix. If $M$ is a nonsingular Z-matrix and $M^{-1}\ge 0$, it is called a nonsingular M-matrix. $M$ is called an H-matrix if $\langle M\rangle$ is a nonsingular M-matrix. If $|m_{ii}|>\sum_{j\ne i}|m_{ij}|$ for all $1\le i\le n$, $M$ is called a strictly diagonally dominant (abbreviated as s.d.d.) matrix (see [5]). If $M$ is an H-matrix with all its diagonal entries positive, it is called an $H_+$-matrix (e.g., see [1]). If $M$ is a nonsingular M-matrix, it is well known that there exists a positive diagonal matrix $\Lambda$ such that $M\Lambda$ is an s.d.d. matrix with all its diagonal entries positive [5]. $M=F-G$ is called an H-splitting if $\langle F\rangle-|G|$ is a nonsingular M-matrix [19].
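The matrix classes above are straightforward to test numerically. The following sketch is illustrative (the helper names are ours, and the nonsingular M-matrix test uses the eigenvalue characterization of Z-matrices rather than checking $M^{-1}\ge 0$ directly):

```python
import numpy as np

def comparison_matrix(M):
    """<M>: keep |m_ii| on the diagonal, negate |m_ij| off the diagonal."""
    C = -np.abs(M)
    np.fill_diagonal(C, np.abs(np.diag(M)))
    return C

def is_sdd(M):
    """Strict diagonal dominance: |m_ii| > sum_{j != i} |m_ij| for all i."""
    A = np.abs(M)
    return bool(np.all(2.0 * np.diag(A) > A.sum(axis=1)))

def is_nonsingular_M_matrix(M, tol=1e-12):
    """A Z-matrix is a nonsingular M-matrix iff all its eigenvalues
    have positive real part."""
    assert np.all(M - np.diag(np.diag(M)) <= tol), "not a Z-matrix"
    return bool(np.min(np.linalg.eigvals(M).real) > tol)

def is_H_matrix(M):
    """M is an H-matrix iff <M> is a nonsingular M-matrix."""
    return is_nonsingular_M_matrix(comparison_matrix(M))
```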

    Let $Y=\{Y_1,Y_2,\ldots,Y_s\}$ be a set of $n\times n$ real matrices (or $n\times 1$ real vectors). Define the mapping $\varphi$ as follows:

    $$\varphi(Y)=\sum_{i=1}^{s-1}2^{s-i-1}Y_i+Y_s.$$
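For instance, the mapping $\varphi$ can be evaluated as follows (a small sketch; the function name is illustrative):

```python
import numpy as np

def phi(Y):
    """phi(Y) = sum_{i=1}^{s-1} 2^{s-i-1} Y_i + Y_s, for a list Y of s
    matrices or vectors (weights 2^{s-2}, ..., 2, 1, then + Y_s)."""
    s = len(Y)
    out = Y[-1].astype(float).copy()
    for i, Yi in enumerate(Y[:-1], start=1):   # i = 1, ..., s-1
        out = out + 2.0 ** (s - i - 1) * Yi
    return out
```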

    Let $\Omega\in\mathbb{R}^{n\times n}$ be a positive diagonal matrix, let $\sigma>0$ be a real scalar, and let $M_i=F_i-G_i$ $(1\le i\le s)$ be $s$ splittings of $M_i$. Denote

    $$F=\{F_1,F_2,\ldots,F_s\},\quad G=\{G_1,G_2,\ldots,G_s\},\quad M=\{M_1,M_2,\ldots,M_s\},$$
    $$D_F=\{D_{F_1},D_{F_2},\ldots,D_{F_s}\},\quad B_F=\{B_{F_1},B_{F_2},\ldots,B_{F_s}\},$$

    and

    $$U(z)=\{u_1(z),u_2(z),\ldots,u_s(z)\}.$$

    By equivalently transforming the VNCP to a fixed-point equation, the MMS iteration method for the VNCP was given in [42].

    Method 2.1. [42] (MMS) Given an initial vector $x_1^{(0)}\in\mathbb{R}^n$, for $k=0,1,2,\ldots$, compute $x_1^{(k+1)}\in\mathbb{R}^n$ by

    $$\left(2^{s-1}\Omega+\varphi(F)\right)x_1^{(k+1)}=\varphi(G)x_1^{(k)}+\left(2^{s-1}\Omega-\varphi(M)\right)\left|x_1^{(k)}\right|+\Omega\sum_{i=2}^{s}2^{s-i+1}\left|x_i^{(k)}\right|-\sigma\varphi(U(z^{(k)})),$$

    where

    $$z^{(k)}=\frac{1}{\sigma}\left(x_1^{(k)}+\left|x_1^{(k)}\right|\right)$$

    and $x_2^{(k)},\ldots,x_s^{(k)}$ are computed by

    $$\begin{cases}x_s^{(k)}=\frac{1}{2}\Omega^{-1}\left[(M_{s-1}-M_s)\left(\left|x_1^{(k)}\right|+x_1^{(k)}\right)+\sigma u_{s-1}(z^{(k)})-\sigma u_s(z^{(k)})\right],\\[1mm]x_j^{(k)}=\frac{1}{2}\Omega^{-1}\left[(M_{j-1}-M_j)\left(\left|x_1^{(k)}\right|+x_1^{(k)}\right)+\Omega\left(\left|x_{j+1}^{(k)}\right|+x_{j+1}^{(k)}\right)+\sigma u_{j-1}(z^{(k)})-\sigma u_j(z^{(k)})\right],\quad j=s-1,s-2,\ldots,2,\end{cases}\tag{2.1}$$

    until the iteration is convergent.
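To make the structure of Method 2.1 concrete, here is a minimal sketch for the case $s=2$ with the trivial splittings $F_i=M_i$, $G_i=0$ (so the $\varphi(G)x_1^{(k)}$ term vanishes). The function name, parameter defaults, and stopping rule are illustrative, not the paper's implementation:

```python
import numpy as np

def mms_vncp_s2(M1, M2, u1, u2, Omega, sigma=1.0, tol=1e-12, max_it=5000):
    """Sketch of Method 2.1 (MMS) for s = 2 with F_i = M_i, G_i = 0.
    Returns z = (x_1 + |x_1|)/sigma at (approximate) convergence."""
    n = M1.shape[0]
    x1 = np.ones(n)
    om = np.diag(Omega)                      # Omega is positive diagonal
    A = 2.0 * Omega + M1 + M2                # 2^{s-1} Omega + phi(F), s = 2
    for _ in range(max_it):
        z = (x1 + np.abs(x1)) / sigma
        # auxiliary variable x_2 from (2.1)
        x2 = 0.5 / om * ((M1 - M2) @ (np.abs(x1) + x1)
                         + sigma * (u1(z) - u2(z)))
        rhs = ((2.0 * Omega - (M1 + M2)) @ np.abs(x1)
               + 2.0 * om * np.abs(x2)
               - sigma * (u1(z) + u2(z)))
        x1_new = np.linalg.solve(A, rhs)
        if np.linalg.norm(x1_new - x1, np.inf) < tol:
            x1 = x1_new
            break
        x1 = x1_new
    return (x1 + np.abs(x1)) / sigma
```

At a fixed point, $\min\{z,M_1z+u_1(z),M_2z+u_2(z)\}=0$ holds componentwise, so the VNCP residual can be used as an external check.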

    Based on Method 2.1, to fully utilize the information of the entries of the matrix set $M$, for $1\le i\le s$, consider two matrix splittings of $M_i$, i.e., $M_i=F_i^{(1)}-G_i^{(1)}=F_i^{(2)}-G_i^{(2)}$. Then, the two-step MMS (abbreviated as TMMS) iteration method can be established as follows:

    Method 2.2. (TMMS) Given an initial vector $x_1^{(0)}\in\mathbb{R}^n$, for $k=0,1,2,\ldots$, compute $x_1^{(k+1)}\in\mathbb{R}^n$ by

    $$\begin{cases}\left(2^{s-1}\Omega+\varphi(F^{(1)})\right)x_1^{(k+\frac12)}=\varphi(G^{(1)})x_1^{(k)}+\left(2^{s-1}\Omega-\varphi(M)\right)\left|x_1^{(k)}\right|+\Omega\sum\limits_{i=2}^{s}2^{s-i+1}\left|x_i^{(k)}\right|-\sigma\varphi(U(z^{(k)})),\\[2mm]\left(2^{s-1}\Omega+\varphi(F^{(2)})\right)x_1^{(k+1)}=\varphi(G^{(2)})x_1^{(k+\frac12)}+\left(2^{s-1}\Omega-\varphi(M)\right)\left|x_1^{(k+\frac12)}\right|+\Omega\sum\limits_{i=2}^{s}2^{s-i+1}\left|x_i^{(k+\frac12)}\right|-\sigma\varphi(U(z^{(k+\frac12)})),\end{cases}\tag{2.2}$$

    where

    $$z^{(k)}=\frac{1}{\sigma}\left(x_1^{(k)}+\left|x_1^{(k)}\right|\right),\qquad z^{(k+\frac12)}=\frac{1}{\sigma}\left(x_1^{(k+\frac12)}+\left|x_1^{(k+\frac12)}\right|\right),$$

    and $x_2^{(k)},\ldots,x_s^{(k)}$ are computed by (2.1) and

    $$\begin{cases}x_s^{(k+\frac12)}=\frac{1}{2}\Omega^{-1}\left[(M_{s-1}-M_s)\left(\left|x_1^{(k+\frac12)}\right|+x_1^{(k+\frac12)}\right)+\sigma u_{s-1}(z^{(k+\frac12)})-\sigma u_s(z^{(k+\frac12)})\right],\\[1mm]x_j^{(k+\frac12)}=\frac{1}{2}\Omega^{-1}\left[(M_{j-1}-M_j)\left(\left|x_1^{(k+\frac12)}\right|+x_1^{(k+\frac12)}\right)+\Omega\left(\left|x_{j+1}^{(k+\frac12)}\right|+x_{j+1}^{(k+\frac12)}\right)+\sigma u_{j-1}(z^{(k+\frac12)})-\sigma u_j(z^{(k+\frac12)})\right],\quad j=s-1,s-2,\ldots,2,\end{cases}\tag{2.3}$$

    until the iteration is convergent.
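A corresponding sketch of Method 2.2 for $s=2$ follows, using the two Gauss-Seidel-type splittings $F_i^{(1)}=\mathrm{tril}(M_i)$ and $F_i^{(2)}=\mathrm{triu}(M_i)$ (one admissible choice of two splittings, not the paper's code; names and defaults are illustrative):

```python
import numpy as np

def tmms_vncp_s2(M1, M2, u1, u2, Omega, sigma=1.0, tol=1e-12, max_it=5000):
    """Sketch of Method 2.2 (TMMS) for s = 2 with forward/backward
    Gauss-Seidel-type splittings F^(1) = tril(M), F^(2) = triu(M)."""
    n = M1.shape[0]
    om = np.diag(Omega)                          # Omega is positive diagonal

    def half_step(x1, F1, F2, G1, G2):
        z = (x1 + np.abs(x1)) / sigma
        x2 = 0.5 / om * ((M1 - M2) @ (np.abs(x1) + x1)
                         + sigma * (u1(z) - u2(z)))   # auxiliary variable (2.1)
        rhs = ((G1 + G2) @ x1
               + (2.0 * Omega - (M1 + M2)) @ np.abs(x1)
               + 2.0 * om * np.abs(x2)
               - sigma * (u1(z) + u2(z)))
        return np.linalg.solve(2.0 * Omega + F1 + F2, rhs)

    F1a, F2a = np.tril(M1), np.tril(M2)          # first splitting  F^(1)
    F1b, F2b = np.triu(M1), np.triu(M2)          # second splitting F^(2)
    x1 = np.ones(n)
    for _ in range(max_it):
        x_half = half_step(x1, F1a, F2a, F1a - M1, F2a - M2)
        x_new = half_step(x_half, F1b, F2b, F1b - M1, F2b - M2)
        if np.linalg.norm(x_new - x1, np.inf) < tol:
            x1 = x_new
            break
        x1 = x_new
    return (x1 + np.abs(x1)) / sigma             # z = (x_1 + |x_1|)/sigma
```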

    Clearly, if we take $F_i^{(1)}=F_i^{(2)}$, Method 2.2 reduces to Method 2.1 immediately. Furthermore, we can obtain a class of relaxation methods from Method 2.2 by special choices of the two matrix splittings, similar to those in [43,44,45,49,50,52]. For example, for $i=1,2,\ldots,s$, taking

    $$\begin{cases}F_i^{(1)}=\frac{1}{\alpha}\left(D_{M_i}-\beta L_{M_i}\right),&G_i^{(1)}=F_i^{(1)}-M_i,\\[1mm]F_i^{(2)}=\frac{1}{\alpha}\left(D_{M_i}-\beta U_{M_i}\right),&G_i^{(2)}=F_i^{(2)}-M_i,\end{cases}\tag{2.4}$$

    where $-L_{M_i}$ and $-U_{M_i}$ are the strictly lower and strictly upper triangular parts of $M_i$, respectively, one can get the two-step modulus-based accelerated overrelaxation (abbreviated as TMAOR) iteration method, which reduces to the two-step modulus-based successive overrelaxation (abbreviated as TMSOR) and two-step modulus-based Gauss-Seidel (abbreviated as TMGS) methods when $(\alpha,\beta)=(\alpha,\alpha)$ and $(\alpha,\beta)=(1,1)$, respectively.
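Under the reading $M_i=D_{M_i}-L_{M_i}-U_{M_i}$ used above, the pair of AOR splittings in (2.4) can be assembled as follows (illustrative sketch):

```python
import numpy as np

def aor_splitting_pair(M, alpha, beta):
    """The two AOR-type splittings of (2.4):
    F^(1) = (D - beta*L)/alpha, F^(2) = (D - beta*U)/alpha, G = F - M,
    with M = D - L - U (L, U strictly lower/upper, sign-flipped)."""
    D = np.diag(np.diag(M))
    L = -np.tril(M, -1)
    U = -np.triu(M, 1)
    F1 = (D - beta * L) / alpha
    F2 = (D - beta * U) / alpha
    return (F1, F1 - M), (F2, F2 - M)
```

With $\alpha=\beta=1$ this recovers the forward and backward Gauss-Seidel splittings $\mathrm{tril}(M)$ and $\mathrm{triu}(M)$.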

    In this section, the convergence conditions of Method 2.2 are given under the assumption that the VNCP has a unique solution $z^*$, the same as that in [42]. Furthermore, for $1\le i\le s$, $u_i(z)$ is assumed to satisfy the locally Lipschitz smoothness conditions: let

    $$u_i(z)=\left(u_i(z_1),u_i(z_2),\ldots,u_i(z_n)\right)^T$$

    be differentiable with

    $$0\le\frac{\partial u_i(z)}{\partial z}\le U_i,\quad i=1,2,\ldots,s.\tag{3.1}$$

    Then, by the Lagrange mean value theorem, there exists $\xi_j$ between $z_j^{(k)}$ and $z_j^*$ such that

    $$u_i(z^{(k)})-u_i(z^*)=U_i^{(k)}\left(z^{(k)}-z^*\right)=\frac{1}{\sigma}U_i^{(k)}\left[\left(x_1^{(k)}-x_1^*\right)+\left(\left|x_1^{(k)}\right|-\left|x_1^*\right|\right)\right],\tag{3.2}$$

    where $U_i^{(k)}$ is a nonnegative diagonal matrix with diagonal entries $\left.\frac{\partial u_i(z_j)}{\partial z_j}\right|_{z_j=\xi_j}$, $1\le j\le n$. Furthermore, by (3.1), we have

    $$0\le U_i^{(k)}\le U_i.\tag{3.3}$$

    Denote

    $$U^{(k)}=\left\{U_1^{(k)},U_2^{(k)},\ldots,U_s^{(k)}\right\},\qquad U=\{U_1,U_2,\ldots,U_s\}.$$

    Lemma 3.1. Let $M_i$, $1\le i\le s$, be $H_+$-matrices, let $\Omega\in\mathbb{R}^{n\times n}$ be a positive diagonal matrix, and let $\sigma>0$. For $t=1,2$, assume that:

    (I) $D_{F_i^{(t)}}>0$, $i=1,2,\ldots,s-1$, and $M_s=F_s^{(t)}-G_s^{(t)}$ is an H-splitting of $M_s$;

    (II)

    $$\begin{cases}\left\langle F_{s-1}^{(t)}\right\rangle\ge\left\langle F_s^{(t)}\right\rangle,\\[1mm]2^{s-j}\left\langle F_{j-1}^{(t)}\right\rangle\ge\left\langle\sum\limits_{i=j}^{s-1}2^{s-i-1}F_i^{(t)}+F_s^{(t)}\right\rangle,\quad 2\le j\le s-1;\end{cases}\tag{3.4}$$

    (III) there exists a positive diagonal matrix $\Lambda$ such that $\left(\left\langle F_s^{(t)}\right\rangle-\left|G_s^{(t)}\right|\right)\Lambda$ is an s.d.d. matrix.

    Then, $2^{s-1}\Omega+\varphi(F^{(t)})$ is an H-matrix.

    Proof. Let $e\in\mathbb{R}^n$ be the vector with all entries equal to 1. By reusing (3.4) $s-2$ times, we have

    $$\begin{aligned}\left\langle 2^{s-1}\Omega+\varphi(F^{(t)})\right\rangle\Lambda e&>\left\langle\varphi(F^{(t)})\right\rangle\Lambda e=\left\langle 2^{s-2}F_1^{(t)}+\sum_{i=2}^{s-1}2^{s-i-1}F_i^{(t)}+F_s^{(t)}\right\rangle\Lambda e\\&\ge 2\left\langle\sum_{i=2}^{s-1}2^{s-i-1}F_i^{(t)}+F_s^{(t)}\right\rangle\Lambda e=2\left\langle 2^{s-3}F_2^{(t)}+\sum_{i=3}^{s-1}2^{s-i-1}F_i^{(t)}+F_s^{(t)}\right\rangle\Lambda e\\&\ge\cdots\ge 2^{s-1}\left\langle F_s^{(t)}\right\rangle\Lambda e\ge 2^{s-1}\left(\left\langle F_s^{(t)}\right\rangle-\left|G_s^{(t)}\right|\right)\Lambda e>0.\end{aligned}$$

    Hence, $\left(2^{s-1}\Omega+\varphi(F^{(t)})\right)\Lambda$ is an s.d.d. matrix, which implies that $2^{s-1}\Omega+\varphi(F^{(t)})$ is an H-matrix.

    Lemma 3.2. With the same notations as those in Lemma 3.1, denote by $x_i^*$, $i=1,2,\ldots,s$, the solution of the VNCP, and let

    $$\delta_i^{(k)}=x_i^{(k)}-x_i^*,\qquad\bar\delta_i^{(k)}=\left|x_i^{(k)}\right|-\left|x_i^*\right|.$$

    Then, we have

    $$\sum_{i=2}^{s}2^{s-i+1}\Omega\left|\bar\delta_i^{(k)}\right|\le\left[2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j+\Gamma_{s-1}\right)+\Theta\right]\left|\delta_1^{(k)}\right|,\tag{3.5}$$

    where

    $$\Gamma_j=\begin{cases}\left|M_{s-1}-M_s\right|,&\text{if }j=1,\\[1mm]\left|2^{j-1}M_{s-j}-\sum\limits_{t=s-j+1}^{s-1}2^{s-t-1}M_t-M_s\right|,&\text{if }2\le j\le s-1,\end{cases}$$

    and

    $$\Theta=\sum_{j=1}^{s-1}\left(2^s-2^{s-j}\right)U_j+\left(2^s-2\right)U_s.$$

    Proof. By (2.1), we get

    $$\begin{cases}\delta_s^{(k)}=\frac{1}{2}\Omega^{-1}\left[(M_{s-1}-M_s)+\left(U_{s-1}^{(k)}-U_s^{(k)}\right)\right]\left(\bar\delta_1^{(k)}+\delta_1^{(k)}\right),\\[1mm]\delta_j^{(k)}=\frac{1}{2}\Omega^{-1}\left[\left[(M_{j-1}-M_j)+\left(U_{j-1}^{(k)}-U_j^{(k)}\right)\right]\left(\bar\delta_1^{(k)}+\delta_1^{(k)}\right)+\Omega\left(\bar\delta_{j+1}^{(k)}+\delta_{j+1}^{(k)}\right)\right],\quad j=s-1,s-2,\ldots,2.\end{cases}\tag{3.6}$$

    By the first equation of (3.6), we have

    $$2\Omega\left|\delta_s^{(k)}\right|=\left|\left[(M_{s-1}-M_s)+\left(U_{s-1}^{(k)}-U_s^{(k)}\right)\right]\left(\bar\delta_1^{(k)}+\delta_1^{(k)}\right)\right|\le 2\left(\left|M_{s-1}-M_s\right|+U_{s-1}+U_s\right)\left|\delta_1^{(k)}\right|=2\left(\Gamma_1+U_{s-1}+U_s\right)\left|\delta_1^{(k)}\right|.$$

    Then, for the subscript $s-1$, with the second equation of (3.6), we can get

    $$\begin{aligned}2^2\Omega\left|\delta_{s-1}^{(k)}\right|&=\left|\left[2(M_{s-2}-M_{s-1})+2\left(U_{s-2}^{(k)}-U_{s-1}^{(k)}\right)\right]\left(\bar\delta_1^{(k)}+\delta_1^{(k)}\right)+2\Omega\left(\bar\delta_s^{(k)}+\delta_s^{(k)}\right)\right|\\&=\left|\left[(2M_{s-2}-M_{s-1}-M_s)+\left(2U_{s-2}^{(k)}-U_{s-1}^{(k)}-U_s^{(k)}\right)\right]\left(\bar\delta_1^{(k)}+\delta_1^{(k)}\right)+2\Omega\bar\delta_s^{(k)}\right|\\&\le 2\left(\Gamma_2+2U_{s-2}+U_{s-1}+U_s\right)\left|\delta_1^{(k)}\right|+2\Omega\left|\delta_s^{(k)}\right|\\&\le 2\left[\Gamma_2+\Gamma_1+2\left(U_{s-2}+U_{s-1}+U_s\right)\right]\left|\delta_1^{(k)}\right|.\end{aligned}$$

    By induction, for 2is, we have

    $$2^{s-i+1}\Omega\left|\delta_i^{(k)}\right|\le\left(\sum_{j=1}^{s-i}2^{s-i-j+1}\Gamma_j+2\Gamma_{s-i+1}+2^{s-i+1}\sum_{j=i-1}^{s}U_j\right)\left|\delta_1^{(k)}\right|.\tag{3.7}$$

    Then, by (3.7), we get

    $$\begin{aligned}\sum_{i=2}^{s}2^{s-i+1}\Omega\left|\bar\delta_i^{(k)}\right|&\le\sum_{i=2}^{s}2^{s-i+1}\Omega\left|\delta_i^{(k)}\right|\le\sum_{i=2}^{s}\left(\sum_{j=1}^{s-i}2^{s-i-j+1}\Gamma_j+2\Gamma_{s-i+1}+2^{s-i+1}\sum_{j=i-1}^{s}U_j\right)\left|\delta_1^{(k)}\right|\\&=\left(\sum_{j=1}^{s-2}\sum_{i=2}^{s-j}2^{s-i-j+1}\Gamma_j+2\sum_{i=1}^{s-1}\Gamma_i+\sum_{i=2}^{s}2^{s-i+1}\sum_{j=i-1}^{s}U_j\right)\left|\delta_1^{(k)}\right|\\&=\left[\sum_{j=1}^{s-2}\left(2^{s-j}-2\right)\Gamma_j+2\sum_{j=1}^{s-1}\Gamma_j+\sum_{j=1}^{s-1}\left(2^s-2^{s-j}\right)U_j+\left(2^s-2\right)U_s\right]\left|\delta_1^{(k)}\right|\\&=\left[2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j+\Gamma_{s-1}\right)+\Theta\right]\left|\delta_1^{(k)}\right|.\end{aligned}$$

    Theorem 3.1. With the same notations and assumptions as those in Lemmas 3.1 and 3.2, for $t=1,2$, assume further that

    $$\begin{cases}\left|G_{s-1}^{(t)}\right|\le\left|G_s^{(t)}\right|,\\[1mm]2^{s-j}\left|G_{j-1}^{(t)}\right|\le\left|\sum\limits_{i=j}^{s-1}2^{s-i-1}G_i^{(t)}+G_s^{(t)}\right|,\quad 2\le j\le s-1.\end{cases}\tag{3.8}$$

    Then, Method 2.2 converges for any initial vector $x_1^{(0)}$, provided that

    $$\Omega\ge 2^{1-s}\left(\varphi\left(D_F^{(t)}\right)+\varphi(U)+\Theta\right)\tag{3.9}$$

    or

    $$\left[2^{1-s}\left(\varphi\left(D_F^{(t)}\right)+\varphi(U)\right)+2^{-s}\Theta-\left(\left\langle F_s^{(t)}\right\rangle-\left|G_s^{(t)}\right|\right)\right]\Lambda e<\Omega\Lambda e<2^{1-s}\left(\varphi\left(D_F^{(t)}\right)+\Theta\right)\Lambda e,\tag{3.10}$$

    where $D_F^{(t)}=\left\{D_{F_1^{(t)}},\ldots,D_{F_s^{(t)}}\right\}$ and $B_F^{(t)}=\left\{B_{F_1^{(t)}},\ldots,B_{F_s^{(t)}}\right\}$.

    Proof.

    By (3.2), we get

    $$u_i(z^{(k)})-u_i(z^*)=\frac{1}{\sigma}U_i^{(k)}\left(\delta_1^{(k)}+\bar\delta_1^{(k)}\right).$$

    By the definition of $\varphi$, we have

    $$\varphi(U(z^{(k)}))-\varphi(U(z^*))=\varphi\left(U(z^{(k)})-U(z^*)\right)=\frac{1}{\sigma}\varphi(U^{(k)})\left(\delta_1^{(k)}+\bar\delta_1^{(k)}\right).\tag{3.11}$$

    Combining the first equation of (2.2) and (3.11), we can get

    $$\begin{aligned}\left(2^{s-1}\Omega+\varphi(F^{(1)})\right)\delta_1^{(k+\frac12)}&=\varphi(G^{(1)})\delta_1^{(k)}+\left(2^{s-1}\Omega-\varphi(M)\right)\bar\delta_1^{(k)}+\Omega\sum_{i=2}^{s}2^{s-i+1}\bar\delta_i^{(k)}-\sigma\left[\varphi(U(z^{(k)}))-\varphi(U(z^*))\right]\\&=\varphi(G^{(1)})\delta_1^{(k)}+\left(2^{s-1}\Omega-\varphi(M)\right)\bar\delta_1^{(k)}+\Omega\sum_{i=2}^{s}2^{s-i+1}\bar\delta_i^{(k)}-\varphi(U^{(k)})\left(\delta_1^{(k)}+\bar\delta_1^{(k)}\right)\\&=\left[\varphi(G^{(1)})-\varphi(U^{(k)})\right]\delta_1^{(k)}+\left[2^{s-1}\Omega-\varphi(M)-\varphi(U^{(k)})\right]\bar\delta_1^{(k)}+\Omega\sum_{i=2}^{s}2^{s-i+1}\bar\delta_i^{(k)}.\end{aligned}$$

    Similarly, by the second equation of (2.2), we have

    $$\left(2^{s-1}\Omega+\varphi(F^{(2)})\right)\delta_1^{(k+1)}=\left[\varphi(G^{(2)})-\varphi\left(U^{(k+\frac12)}\right)\right]\delta_1^{(k+\frac12)}+\left[2^{s-1}\Omega-\varphi(M)-\varphi\left(U^{(k+\frac12)}\right)\right]\bar\delta_1^{(k+\frac12)}+\Omega\sum_{i=2}^{s}2^{s-i+1}\bar\delta_i^{(k+\frac12)}.$$

    Then we have

    $$\begin{cases}\left|\delta_1^{(k+\frac12)}\right|\le\left\langle 2^{s-1}\Omega+\varphi(F^{(1)})\right\rangle^{-1}\left(\left|\varphi(G^{(1)})-\varphi(U^{(k)})\right|\left|\delta_1^{(k)}\right|+\left|2^{s-1}\Omega-\varphi(M)-\varphi(U^{(k)})\right|\left|\bar\delta_1^{(k)}\right|+\sum\limits_{i=2}^{s}2^{s-i+1}\Omega\left|\bar\delta_i^{(k)}\right|\right),\\[2mm]\left|\delta_1^{(k+1)}\right|\le\left\langle 2^{s-1}\Omega+\varphi(F^{(2)})\right\rangle^{-1}\left(\left|\varphi(G^{(2)})-\varphi\left(U^{(k+\frac12)}\right)\right|\left|\delta_1^{(k+\frac12)}\right|+\left|2^{s-1}\Omega-\varphi(M)-\varphi\left(U^{(k+\frac12)}\right)\right|\left|\bar\delta_1^{(k+\frac12)}\right|+\sum\limits_{i=2}^{s}2^{s-i+1}\Omega\left|\bar\delta_i^{(k+\frac12)}\right|\right).\end{cases}\tag{3.12}$$

    By Lemma 3.1, we have that $2^{s-1}\Omega+\varphi(F^{(t)})$, $t=1,2$, are H-matrices. By [17], with (3.5) and (3.12), we get

    $$\left|\delta_1^{(k+1)}\right|\le P^{(2)-1}Q^{(2)}P^{(1)-1}Q^{(1)}\left|\delta_1^{(k)}\right|,\tag{3.13}$$

    where

    $$\begin{aligned}P^{(1)}&=\left\langle 2^{s-1}\Omega+\varphi(F^{(1)})\right\rangle,\\Q^{(1)}&=\left|\varphi(G^{(1)})-\varphi(U^{(k)})\right|+\left|2^{s-1}\Omega-\varphi(M)-\varphi(U^{(k)})\right|+2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j+\Gamma_{s-1}\right)+\Theta,\\P^{(2)}&=\left\langle 2^{s-1}\Omega+\varphi(F^{(2)})\right\rangle,\\Q^{(2)}&=\left|\varphi(G^{(2)})-\varphi\left(U^{(k+\frac12)}\right)\right|+\left|2^{s-1}\Omega-\varphi(M)-\varphi\left(U^{(k+\frac12)}\right)\right|+2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j+\Gamma_{s-1}\right)+\Theta.\end{aligned}$$

    First, we estimate $\left\|\Lambda^{-1}P^{(1)-1}Q^{(1)}\Lambda\right\|_\infty$. By Lemma 3.1 and [26], we have

    $$\left\|\Lambda^{-1}P^{(1)-1}Q^{(1)}\Lambda\right\|_\infty=\left\|\left(P^{(1)}\Lambda\right)^{-1}\left(Q^{(1)}\Lambda\right)\right\|_\infty\le\max_{1\le i\le n}\frac{\left(Q^{(1)}\Lambda e\right)_i}{\left(P^{(1)}\Lambda e\right)_i}.\tag{3.14}$$

    Let

    $$\Gamma_j^{(1)}=\begin{cases}\left|D_{F_{s-1}^{(1)}}-D_{F_s^{(1)}}\right|,&\text{if }j=1,\\[1mm]\left|2^{j-1}D_{F_{s-j}^{(1)}}-\sum\limits_{t=s-j+1}^{s-1}2^{s-t-1}D_{F_t^{(1)}}-D_{F_s^{(1)}}\right|,&\text{if }2\le j\le s-1,\end{cases}$$
    $$\Gamma_j^{(2)}=\begin{cases}\left|B_{F_{s-1}^{(1)}}-B_{F_s^{(1)}}\right|,&\text{if }j=1,\\[1mm]\left|2^{j-1}B_{F_{s-j}^{(1)}}-\sum\limits_{t=s-j+1}^{s-1}2^{s-t-1}B_{F_t^{(1)}}-B_{F_s^{(1)}}\right|,&\text{if }2\le j\le s-1,\end{cases}$$

    and

    $$\Gamma_j^{(3)}=\begin{cases}\left|G_{s-1}^{(1)}-G_s^{(1)}\right|,&\text{if }j=1,\\[1mm]\left|2^{j-1}G_{s-j}^{(1)}-\sum\limits_{t=s-j+1}^{s-1}2^{s-t-1}G_t^{(1)}-G_s^{(1)}\right|,&\text{if }2\le j\le s-1.\end{cases}$$

    Clearly, we have $\Gamma_j\le\Gamma_j^{(1)}+\Gamma_j^{(2)}+\Gamma_j^{(3)}$. By (3.4), we can get

    $$\Gamma_j^{(1)}=\begin{cases}D_{F_{s-1}^{(1)}}-D_{F_s^{(1)}},&\text{if }j=1,\\[1mm]2^{j-1}D_{F_{s-j}^{(1)}}-\sum\limits_{t=s-j+1}^{s-1}2^{s-t-1}D_{F_t^{(1)}}-D_{F_s^{(1)}},&\text{if }2\le j\le s-1.\end{cases}$$

    Then, by direct computation, we can obtain

    $$\varphi\left(D_F^{(1)}\right)-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(1)}+\Gamma_{s-1}^{(1)}\right)=\sum_{i=1}^{s-1}2^{s-i-1}D_{F_i^{(1)}}+D_{F_s^{(1)}}-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(1)}+\Gamma_{s-1}^{(1)}\right)=\left(2^s-1\right)D_{F_s^{(1)}}-\sum_{i=1}^{s-1}2^{s-i-1}D_{F_i^{(1)}}.\tag{3.15}$$

    Moreover, by reusing (3.4) $s-2$ times, we get

    $$\begin{aligned}&2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(2)}+\Gamma_{s-1}^{(2)}\right)+2\left|\sum_{j=1}^{s-1}2^{s-j-1}B_{F_j^{(1)}}+B_{F_s^{(1)}}\right|\\=&\,2\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(2)}+\left(2\left|2^{s-2}B_{F_1^{(1)}}-\sum_{j=2}^{s-1}2^{s-j-1}B_{F_j^{(1)}}-B_{F_s^{(1)}}\right|+2\left|2^{s-2}B_{F_1^{(1)}}+\sum_{j=2}^{s-1}2^{s-j-1}B_{F_j^{(1)}}+B_{F_s^{(1)}}\right|\right)\\=&\,2\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(2)}+2^2\left|\sum_{j=2}^{s-1}2^{s-j-1}B_{F_j^{(1)}}+B_{F_s^{(1)}}\right|\\=&\,2\sum_{j=1}^{s-3}2^{s-j-1}\Gamma_j^{(2)}+\left(2^2\left|2^{s-3}B_{F_2^{(1)}}-\sum_{j=3}^{s-1}2^{s-j-1}B_{F_j^{(1)}}-B_{F_s^{(1)}}\right|+2^2\left|2^{s-3}B_{F_2^{(1)}}+\sum_{j=3}^{s-1}2^{s-j-1}B_{F_j^{(1)}}+B_{F_s^{(1)}}\right|\right)\\=&\,2\sum_{j=1}^{s-3}2^{s-j-1}\Gamma_j^{(2)}+2^3\left|\sum_{j=3}^{s-1}2^{s-j-1}B_{F_j^{(1)}}+B_{F_s^{(1)}}\right|=\cdots=2^s\left|B_{F_s^{(1)}}\right|.\end{aligned}\tag{3.16}$$

    Analogously, by reusing (3.8) $s-2$ times, we get

    $$2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(3)}+\Gamma_{s-1}^{(3)}\right)+2\left|\varphi(G^{(1)})\right|=2^s\left|G_s^{(1)}\right|.\tag{3.17}$$

    By (3.15)–(3.17), we get

    $$\begin{aligned}P^{(1)}\Lambda e-Q^{(1)}\Lambda e=&\left[\left\langle 2^{s-1}\Omega+\varphi(F^{(1)})\right\rangle-\left|\varphi(G^{(1)})-\varphi(U^{(k)})\right|-\left|2^{s-1}\Omega-\varphi(M)-\varphi(U^{(k)})\right|-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j+\Gamma_{s-1}\right)-\Theta\right]\Lambda e\\\ge&\left[2^{s-1}\Omega+\varphi\left(D_F^{(1)}\right)-\left|\varphi\left(B_F^{(1)}\right)\right|-\left|\varphi(G^{(1)})\right|-\varphi(U^{(k)})-\left|2^{s-1}\Omega-\varphi\left(D_F^{(1)}\right)-\varphi(U^{(k)})-\Theta\right|-\left|\varphi\left(B_F^{(1)}\right)\right|-\left|\varphi(G^{(1)})\right|\right.\\&\left.\;-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(1)}+\Gamma_{s-1}^{(1)}\right)-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(2)}+\Gamma_{s-1}^{(2)}\right)-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(3)}+\Gamma_{s-1}^{(3)}\right)\right]\Lambda e\\=&\left\{2^{s-1}\Omega+\left[\varphi\left(D_F^{(1)}\right)-2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(1)}+\Gamma_{s-1}^{(1)}\right)\right]-\left[2\left|\varphi\left(B_F^{(1)}\right)\right|+2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(2)}+\Gamma_{s-1}^{(2)}\right)\right]\right.\\&\left.\;-\left[2\left|\varphi(G^{(1)})\right|+2\left(\sum_{j=1}^{s-2}2^{s-j-1}\Gamma_j^{(3)}+\Gamma_{s-1}^{(3)}\right)\right]-\varphi(U^{(k)})-\left|2^{s-1}\Omega-\varphi\left(D_F^{(1)}\right)-\varphi(U^{(k)})-\Theta\right|\right\}\Lambda e\\=&\left[2^{s-1}\Omega+\left(2^s-1\right)D_{F_s^{(1)}}-\sum_{i=1}^{s-1}2^{s-i-1}D_{F_i^{(1)}}-2^s\left|B_{F_s^{(1)}}\right|-2^s\left|G_s^{(1)}\right|-\varphi(U^{(k)})-\left|2^{s-1}\Omega-\varphi\left(D_F^{(1)}\right)-\varphi(U^{(k)})-\Theta\right|\right]\Lambda e.\end{aligned}\tag{3.18}$$

    Next, consider the following two cases.

    Case 1. When

    $$\Omega\ge 2^{1-s}\left(\varphi\left(D_F^{(1)}\right)+\varphi(U)+\Theta\right),$$

    by (3.18), we have

    $$\begin{aligned}P^{(1)}\Lambda e-Q^{(1)}\Lambda e&\ge\left[2^{s-1}\Omega+\left(2^s-1\right)D_{F_s^{(1)}}-\sum_{i=1}^{s-1}2^{s-i-1}D_{F_i^{(1)}}-2^s\left|B_{F_s^{(1)}}\right|-2^s\left|G_s^{(1)}\right|-\varphi(U^{(k)})-\left(2^{s-1}\Omega-\varphi\left(D_F^{(1)}\right)-\varphi(U^{(k)})-\Theta\right)\right]\Lambda e\\&=\left[2^s\left(D_{F_s^{(1)}}-\left|G_s^{(1)}\right|-\left|B_{F_s^{(1)}}\right|\right)+\Theta\right]\Lambda e=\left[2^s\left(\left\langle F_s^{(1)}\right\rangle-\left|G_s^{(1)}\right|\right)+\Theta\right]\Lambda e>0.\end{aligned}$$

    Case 2. When

    $$\left[2^{1-s}\left(\varphi\left(D_F^{(1)}\right)+\varphi(U)\right)+2^{-s}\Theta-\left(\left\langle F_s^{(1)}\right\rangle-\left|G_s^{(1)}\right|\right)\right]\Lambda e<\Omega\Lambda e<2^{1-s}\left(\varphi\left(D_F^{(1)}\right)+\Theta\right)\Lambda e,$$

    by (3.18), we have

    $$\begin{aligned}P^{(1)}\Lambda e-Q^{(1)}\Lambda e&\ge\left[2^{s-1}\Omega+\left(2^s-1\right)D_{F_s^{(1)}}-\sum_{i=1}^{s-1}2^{s-i-1}D_{F_i^{(1)}}-2^s\left|B_{F_s^{(1)}}\right|-2^s\left|G_s^{(1)}\right|-\varphi(U^{(k)})-\left(\varphi\left(D_F^{(1)}\right)+\varphi(U^{(k)})+\Theta-2^{s-1}\Omega\right)\right]\Lambda e\\&\ge\left[2^s\Omega-2\varphi\left(D_F^{(1)}\right)-2\varphi(U)-\Theta+2^s\left(D_{F_s^{(1)}}-\left|G_s^{(1)}\right|-\left|B_{F_s^{(1)}}\right|\right)\right]\Lambda e\\&=\left[2^s\Omega-2\varphi\left(D_F^{(1)}\right)-2\varphi(U)-\Theta+2^s\left(\left\langle F_s^{(1)}\right\rangle-\left|G_s^{(1)}\right|\right)\right]\Lambda e>0.\end{aligned}$$

    Summarizing Cases 1 and 2, we immediately get $P^{(1)}\Lambda e-Q^{(1)}\Lambda e>0$, provided that (3.9) or (3.10) holds, which implies that

    $$\left\|\Lambda^{-1}P^{(1)-1}Q^{(1)}\Lambda\right\|_\infty<1$$

    by (3.14).

    By similar deductions, we can also get

    $$\left\|\Lambda^{-1}P^{(2)-1}Q^{(2)}\Lambda\right\|_\infty<1$$

    when (3.9) or (3.10) is satisfied.

    In summary, the spectral radius of the iteration matrix in (3.13) can be estimated as follows:

    $$\rho\left(P^{(2)-1}Q^{(2)}P^{(1)-1}Q^{(1)}\right)=\rho\left(\Lambda^{-1}P^{(2)-1}Q^{(2)}\Lambda\cdot\Lambda^{-1}P^{(1)-1}Q^{(1)}\Lambda\right)\le\left\|\Lambda^{-1}P^{(2)-1}Q^{(2)}\Lambda\right\|_\infty\left\|\Lambda^{-1}P^{(1)-1}Q^{(1)}\Lambda\right\|_\infty<1.$$

    Hence, we can see that $\left\{x_1^{(k)}\right\}_{k=0}^{\infty}$ converges by (3.13).

    Based on Theorem 3.1, we have the next theorem for the TMAOR method.

    Theorem 3.2. With the same notations and assumptions as those in Theorem 3.1, for $t=1,2$, let $F_i^{(t)}$ and $G_i^{(t)}$ be given by (2.4), $1\le i\le s$. Assume that

    $$0<\beta\le\alpha<\frac{2}{1+\rho\left(D_{M_s}^{-1}\left|B_{M_s}\right|\right)}.\tag{3.19}$$

    Then, the TMAOR method converges to the solution of the VNCP.

    Proof. By (2.4), with direct computation, we can get

    $$\left\langle F_s^{(1)}\right\rangle-\left|G_s^{(1)}\right|=\left\langle F_s^{(2)}\right\rangle-\left|G_s^{(2)}\right|=\frac{1-|1-\alpha|}{\alpha}D_{M_s}-\left|B_{M_s}\right|,$$

    if $0<\beta\le\alpha$. Since $M_s$ is an $H_+$-matrix, by [5] we have $\rho\left(D_{M_s}^{-1}\left|B_{M_s}\right|\right)<1$. Since (3.19) holds, we can easily verify that $\frac{1-|1-\alpha|}{\alpha}D_{M_s}-\left|B_{M_s}\right|$ is a nonsingular M-matrix. Note that all the assumptions of Theorem 3.1 hold. Hence, the TMAOR method is convergent by Theorem 3.1.
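The admissible range of $\alpha$ in (3.19) is easy to compute for a given $M_s$. The following sketch (dense eigenvalues; the function name is illustrative) returns the upper bound $2/\left(1+\rho(D_{M_s}^{-1}|B_{M_s}|)\right)$:

```python
import numpy as np

def max_aor_alpha(Ms):
    """Upper bound 2 / (1 + rho(D^{-1}|B|)) on alpha in (3.19),
    where Ms = D - B with D its diagonal part."""
    D = np.diag(Ms).astype(float)
    B = np.diag(D) - Ms                       # M = D - B, B has zero diagonal
    rho = np.max(np.abs(np.linalg.eigvals(np.abs(B) / D[:, None])))
    return 2.0 / (1.0 + rho)
```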

    Next, some numerical examples are given to show the efficiency of Method 2.2 compared to Method 2.1.

    Example 4.1. [42] Consider a VNCP ($s=2$), where the two system matrices are given by

    $$M_1=\begin{pmatrix}R&-I_m&&\\-I_m&R&-I_m&\\&\ddots&\ddots&\ddots\\&&-I_m&R\end{pmatrix}\in\mathbb{R}^{n\times n}\quad\text{and}\quad M_2=\begin{pmatrix}R&&&\\&R&&\\&&\ddots&\\&&&R\end{pmatrix}\in\mathbb{R}^{n\times n},$$

    where

    $$R=\begin{pmatrix}4&-1&&\\-1&4&-1&\\&\ddots&\ddots&\ddots\\&&-1&4\end{pmatrix}\in\mathbb{R}^{m\times m}.$$
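Assuming $m$ diagonal blocks of size $m$ (so $n=m^2$; the block count and the $-I_m$ sign reading of the coupling blocks are this sketch's assumptions), the matrices of Example 4.1 can be assembled with Kronecker products:

```python
import numpy as np

def example41_matrices(m):
    """M1 = blocktridiag(-I_m, R, -I_m), M2 = blockdiag(R, ..., R),
    with R = tridiag(-1, 4, -1); n = m*m (an assumption of this sketch)."""
    R = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    coupling = np.eye(m, k=1) + np.eye(m, k=-1)   # block super/sub-diagonals
    M1 = np.kron(np.eye(m), R) - np.kron(coupling, np.eye(m))
    M2 = np.kron(np.eye(m), R)
    return M1, M2
```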

    Example 4.2. [42] Consider a VNCP ($s=2$), where the two system matrices are given by

    $$M_1=\begin{pmatrix}T&-0.5I_m&&\\-1.5I_m&T&-0.5I_m&\\&\ddots&\ddots&\ddots\\&&-1.5I_m&T\end{pmatrix}\in\mathbb{R}^{n\times n}\quad\text{and}\quad M_2=\begin{pmatrix}T&&&\\&T&&\\&&\ddots&\\&&&T\end{pmatrix}\in\mathbb{R}^{n\times n},$$

    where

    $$T=\begin{pmatrix}4&-0.5&&\\-1.5&4&-0.5&\\&\ddots&\ddots&\ddots\\&&-1.5&4\end{pmatrix}\in\mathbb{R}^{m\times m}.$$

    Example 4.3. Consider the VNCP ($s=2$) whose system matrices $M_1,M_2\in\mathbb{R}^{m^2\times m^2}$ come from the discretization of the Hamilton-Jacobi-Bellman (HJB) equation [4]

    $$\begin{cases}\max\limits_{i=1,2}\left\{L_iz+f_i\right\}=0,&\text{in }\Gamma,\\z=0,&\text{on }\partial\Gamma,\end{cases}$$

    with $\Gamma=\{(x,y)\mid 0<x<2,\ 0<y<1\}$ and

    $$\begin{cases}L_1z=0.002z_{xx}+0.001z_{yy}-20u_1(z),&f_1=1,\\L_2z=0.001z_{xx}+0.001z_{yy}-10u_2(z),&f_2=1.\end{cases}$$

    Same as those in [42], all the nonlinear functions in Examples 4.1–4.3 are set to $u_1(z)=\frac{z}{1+z}$ and $u_2(z)=\arctan(z)$.

    The programming language is MATLAB, and the codes are run on a PC with a 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz CPU and 32 GB memory. Consider the Gauss-Seidel (abbreviated as GS), SOR, and AOR splittings, where the SOR splitting is given by (2.4) with $\beta=\alpha$, i.e.,

    $$F_i^{(1)}=\frac{1}{\alpha}\left(D_{M_i}-\alpha L_{M_i}\right),\quad F_i^{(2)}=\frac{1}{\alpha}\left(D_{M_i}-\alpha U_{M_i}\right),\quad G_i^{(t)}=F_i^{(t)}-M_i,\quad t=1,2,$$

    with the relaxation parameter $\alpha$ ranging from 0.8 to 1.2 with a step size of 0.1, while the parameters of the AOR splitting given by (2.4) satisfy $\alpha=\beta+0.1$, with $\beta$ ranging from 0.8 to 1.2 with a step size of 0.1.

    All initial vectors are set to $x_1^{(0)}=e$. The stopping criterion in the iteration of both Methods 2.1 and 2.2 is taken as

    $$\left\|\min\left\{z^{(k)},\,M_1z^{(k)}+u_1(z^{(k)}),\,M_2z^{(k)}+u_2(z^{(k)})\right\}\right\|_2<10^{-10},$$

    and $\Omega$ is chosen as

    $$\Omega=\frac{1}{2\alpha}\left(D_{M_1}+D_{M_2}\right)+I.$$
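The choice of $\Omega$ and the stopping residual translate directly into code; a minimal sketch for $s=2$ (illustrative names):

```python
import numpy as np

def omega_and_residual(M1, M2, z, u1, u2, alpha=1.0):
    """Omega = (1/(2*alpha)) (D_M1 + D_M2) + I as in Section 4, and the
    stopping residual ||min{z, M1 z + u1(z), M2 z + u2(z)}||_2."""
    Omega = ((np.diag(np.diag(M1)) + np.diag(np.diag(M2))) / (2.0 * alpha)
             + np.eye(len(z)))
    res = np.linalg.norm(np.minimum.reduce(
        [z, M1 @ z + u1(z), M2 @ z + u2(z)]))
    return Omega, res
```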

    Let $m=256,512,1024$ for the three examples. For a fair comparison, the relaxation parameters are chosen as the experimentally optimal ones for each size of each example. The results of the MMS and TMMS methods are shown in Tables 1–3, where "IT" denotes the number of iteration steps, "CPU" the elapsed CPU time in seconds, and

    $$\mathrm{SAVE}=\frac{\mathrm{CPU}_{\mathrm{MMS}}-\mathrm{CPU}_{\mathrm{TMMS}}}{\mathrm{CPU}_{\mathrm{MMS}}}\times 100\%.$$
    Table 1.  Numerical results of Example 4.1.
    m   splitting   IT (2.1)   CPU (2.1)   IT (2.2)   CPU (2.2)   SAVE
    256 GS 98 0.6945 50 0.5976 14%
    SOR 94 0.6679 46 0.5652 15%
    AOR 97 0.7469 47 0.5902 21%
    512 GS 100 3.7686 51 3.1709 16%
    SOR 94 3.6039 46 3.0237 16%
    AOR 101 4.0266 47 3.1373 22%
    1024 GS 102 22.7605 52 18.9571 17%
    SOR 134 21.1283 47 17.7199 16%
    AOR 97 24.4213 47 18.8083 23%

    Table 2.  Numerical results of Example 4.2.
    m   splitting   IT (2.1)   CPU (2.1)   IT (2.2)   CPU (2.2)   SAVE
    256 GS 86 0.5939 34 0.3792 36%
    SOR 69 0.4937 31 0.3571 27%
    AOR 71 0.5252 32 0.4136 21%
    512 GS 89 3.4883 35 2.1793 38%
    SOR 70 2.6886 31 1.9244 28%
    AOR 71 4.5891 32 3.3323 27%
    1024 GS 91 19.1887 36 12.4029 35%
    SOR 70 15.3799 32 11.5026 25%
    AOR 72 17.7342 33 13.3755 24%

    Table 3.  Numerical results of Example 4.3.
    m   splitting   IT (2.1)   CPU (2.1)   IT (2.2)   CPU (2.2)   SAVE
    256 GS 76 0.6216 39 0.5478 12%
    SOR 59 0.4853 30 0.4394 9%
    AOR 54 0.4732 28 0.4103 13%
    512 GS 288 17.4591 145 14.2667 18%
    SOR 253 12.6084 101 11.6480 7%
    AOR 204 14.9972 103 12.6497 16%
    1024 GS 1137 172.0573 569 144.5840 16%
    SOR 885 137.8101 443 116.6376 15%
    AOR 805 130.3913 403 111.1060 15%


    It is found from the numerical results that Method 2.2 converges faster than Method 2.1 in all cases. Specifically, Method 2.1 requires almost twice as many iteration steps as Method 2.2, while the saving in CPU time is smaller, due to the fact that two linear systems need to be solved in each iteration of Method 2.2 versus only one in Method 2.1. Focusing on the CPU time, the cost of Method 2.2 is 14%–23% less than that of Method 2.1 for Example 4.1, while the percentages for Examples 4.2 and 4.3 are 21%–38% and 7%–18%, respectively. It is clear that the two-step splitting technique succeeds in accelerating the convergence of the MMS iteration method.

    By the two-step matrix splitting technique, we have constructed the TMMS iteration method for solving the VNCP. We have also presented convergence theorems of the TMMS iteration method for an arbitrary $s$, which extends the research scope of modulus-based methods for the VNCP.

    Note that, to show the effectiveness of the proposed two-step method, the choice of the two-step splittings in Section 4 is a common one in the existing literature. However, since the iteration matrix in (3.13) is essentially an extended bound containing absolute-value operations, it is very hard to minimize its spectral radius. How to choose an optimal two-step splitting, in general or based on some special structure, is still an open question. On the other hand, there are other acceleration techniques for the MMS iteration method, such as relaxation, preconditioning, two-sweep strategies, and so on. One can also combine such a technique with the two-step splitting for further acceleration; e.g., see [3,22,30,47]. More techniques to improve the convergence of the MMS iteration method are worth studying in the future.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by Basic and Applied Basic Research Foundation of Guangdong (No. 2024A1515011822), Scientific Computing Research Innovation Team of Guangdong Province (No. 2021KCXTD052), Science and Technology Development Fund, Macau SAR (No. 0096/2022/A), Guangdong Key Construction Discipline Research Capacity Enhancement Project (No. 2022ZDJS049) and Technology Planning Project of Shaoguan (No. 230330108034184).

    The authors declare that they have no conflict of interest.



    [1] Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM J. Matrix Anal. Appl., 21 (1999), 67–78. http://dx.doi.org/10.1137/S0895479897324032 doi: 10.1137/S0895479897324032
    [2] Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebr., 17 (2010), 917–933. http://dx.doi.org/10.1002/nla.680 doi: 10.1002/nla.680
    [3] M. Bashirizadeh, M. Hajarian, Two-step two-sweep modulus-based matrix splitting iteration method for linear complementarity problems, Numer. Math. Theor. Meth. Appl., 15 (2022), 592–619. http://dx.doi.org/10.4208/nmtma.OA-2021-0131 doi: 10.4208/nmtma.OA-2021-0131
    [4] A. Bensoussan, J. Lions, Applications of variational inequalities in stochastic control, Amsterdam: Elsevier, 1982.
    [5] A. Berman, R. Plemmons, Nonnegative matrix in the mathematical sciences, Philadelphia: SIAM Publisher, 1994. http://dx.doi.org/10.1137/1.9781611971262
    [6] Y. Cao, G. Yang, Q. Shen, Convergence analysis of projected SOR iteration method for a class of vertical linear complementarity problems, Comp. Appl. Math., 42 (2023), 191. http://dx.doi.org/10.1007/s40314-023-02334-6 doi: 10.1007/s40314-023-02334-6
    [7] Y. Cao, A. Wang, Two-step modulus-based matrix splitting iteration methods for implicit complementarity problems, Numer. Algor., 82 (2019), 1377–1394. http://dx.doi.org/10.1007/s11075-019-00660-7 doi: 10.1007/s11075-019-00660-7
    [8] B. Chen, P. Harker, Smooth approximations to nonlinear complementarity problems, SIAM J. Optim., 7 (1997), 403–420. http://dx.doi.org/10.1137/S1052623495280615 doi: 10.1137/S1052623495280615
    [9] R. Cottle, G. Dantzig, A generalization of the linear complementarity problem, J. Comb. Theory, 8 (1970), 79–90. http://dx.doi.org/10.1016/S0021-9800(70)80010-2 doi: 10.1016/S0021-9800(70)80010-2
    [10] R. Cottle, J. Pang, R. Stone, The linear complementarity problem, SanDiego: SIAM Publisher, 1992. http://dx.doi.org/10.1137/1.9780898719000
    [11] A. Ebiefung, M. Kostreva, The generalized Leontief input-output model and its application to the choice of the new technology, Ann. Oper. Res., 44 (1993), 161–172. http://dx.doi.org/10.1007/BF02061065 doi: 10.1007/BF02061065
    [12] X. Fang, General fixed-point method for solving the linear complementarity problem, AIMS Mathematics, 6 (2021), 11904–11920. http://dx.doi.org/10.3934/math.2021691 doi: 10.3934/math.2021691
    [13] X. Fang, The convergence of the modulus-based Jacobi (MJ) iteration method for solving horizontal linear complementarity problems, Comp. Appl. Math., 41 (2022), 134. http://dx.doi.org/10.1007/s40314-022-01842-1 doi: 10.1007/s40314-022-01842-1
    [14] X. Fang, The convergence of modulus-based matrix splitting iteration methods for implicit complementarity problems, J. Comput. Appl. Math., 411 (2022), 114241. http://dx.doi.org/10.1016/j.cam.2022.114241 doi: 10.1016/j.cam.2022.114241
    [15] X. Fang, Z. Gu, Z. Qiao, Convergence of the two-point modulus-based matrix splitting iteration method, J. Appl. Anal. Comput., 13 (2023), 2504–2521. http://dx.doi.org/10.11948/20220400 doi: 10.11948/20220400
    [16] X. Fang, Z. Zhu, The modulus-based matrix double splitting iteration method for linear complementarity problems, Comput. Math. Appl., 78 (2019), 3633–3643. http://dx.doi.org/10.1016/j.camwa.2019.06.012 doi: 10.1016/j.camwa.2019.06.012
    [17] A. Frommer, G. Mayer, Convergence of relaxed parallel multisplitting methods, Linear Algebra Appl., 119 (1989), 141–152. http://dx.doi.org/10.1016/0024-3795(89)90074-8 doi: 10.1016/0024-3795(89)90074-8
    [18] T. Fujisawa, E. Kuh, Piecewise-linear theory of nonlinear networks, SIAM J. Appl. Math., 22 (1972), 307–328. http://dx.doi.org/10.1137/0122030 doi: 10.1137/0122030
    [19] A. Frommer, D. Szyld, H-splittings and two-stage iterative methods, Numer. Math., 63 (1992), 345–356. http://dx.doi.org/10.1007/BF01385865 doi: 10.1007/BF01385865
    [20] M. Seetharama Gowda, R. Sznajder, The generalized order linear complementarity problem, SIAM J. Matrix Anal. Appl., 15 (1994), 779–795. http://dx.doi.org/10.1137/S0895479892237859 doi: 10.1137/S0895479892237859
    [21] W. Guo, H. Zheng, X. Peng, New convergence results of the modulus-based methods for vertical linear complementarity problems, Appl. Math. Lett., 135 (2023), 108444. http://dx.doi.org/10.1016/j.aml.2022.108444 doi: 10.1016/j.aml.2022.108444
    [22] W. Guo, H. Zheng, X. Lu, Y. Zhang, S. Vong, A relaxation two-step parallel modulus method without auxiliary variable for solving large sparse vertical linear complementarity problems, Numer. Algor., in press. http://dx.doi.org/10.1007/s11075-024-01800-4
    [23] J. He, S. Vong, A new kind of modulus-based matrix splitting methods for vertical linear complementarity problems, Appl. Math. Lett., 134 (2022), 108344. http://dx.doi.org/10.1016/j.aml.2022.108344 doi: 10.1016/j.aml.2022.108344
    [24] M. Haddou, T. Migot, J. Omer, A generalized direction in interior point method for monotone linear complementarity problems, Optim. Lett., 13 (2019), 35–53. http://dx.doi.org/10.1007/s11590-018-1241-2 doi: 10.1007/s11590-018-1241-2
    [25] J. Hong, C. Li, Modulus-based matrix splitting iteration methods for a class of implicit complementarity problems, Numer. Linear Algebr., 23 (2016), 629–641. http://dx.doi.org/10.1002/nla.2044 doi: 10.1002/nla.2044
    [26] J. Hu, Estimates of $\|B^{-1}C\|$ and their applications (Chinese), Mathematica Numerica Sinica, 3 (1982), 272–282.
    [27] Y. Ke, C. Ma, H. Zhang, The modulus-based matrix splitting iteration methods for second-order cone linear complementarity problems, Numer. Algor., 79 (2018), 1283–1303. http://dx.doi.org/10.1007/s11075-018-0484-4 doi: 10.1007/s11075-018-0484-4
    [28] Y. Ke, C. Ma, H. Zhang, The relaxation modulus-based matrix splitting iteration methods for circular cone nonlinear complementarity problems, Comp. Appl. Math., 37 (2018), 6795–6820. http://dx.doi.org/10.1007/s40314-018-0687-2 doi: 10.1007/s40314-018-0687-2
    [29] M. Kojima, S. Shindoh, S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim., 7 (1997), 86–125. http://dx.doi.org/10.1137/S1052623494269035 doi: 10.1137/S1052623494269035
    [30] D. Li, L. Wang, Y. Liu, A relaxation general two-sweep modulus-based matrix splitting iteration method for solving linear complementarity problems, J. Comput. Appl. Math., 409 (2022), 114140. http://dx.doi.org/10.1016/j.cam.2022.114140 doi: 10.1016/j.cam.2022.114140
    [31] F. Mezzadri, A modulus-based formulation for the vertical linear complementarity problems, Numer. Algor., 90 (2022), 1547–1568. http://dx.doi.org/10.1007/s11075-021-01240-4 doi: 10.1007/s11075-021-01240-4
    [32] F. Mezzadri, E. Galligani, Modulus-based matrix splitting methods for horizontal linear complementarity problems, Numer. Algor., 83 (2020), 201–219. http://dx.doi.org/10.1007/s11075-019-00677-y doi: 10.1007/s11075-019-00677-y
    [33] F. Mezzadri, E. Galligani, Projected splitting methods for vertical linear complementarity problems, J. Optim. Theory Appl., 193 (2022), 598–620. http://dx.doi.org/10.1007/s10957-021-01922-y doi: 10.1007/s10957-021-01922-y
    [34] T. Nagae, T. Akamatsu, A generalized complementarity approach to solving real option problems, J. Econ. Dyn. Control., 32 (2008), 1754–1779. http://dx.doi.org/10.1016/j.jedc.2007.04.010 doi: 10.1016/j.jedc.2007.04.010
    [35] K. Oh, The formulation of the mixed lubrication problem as a generalized nonlinear complementarity problem, J. Tribol., 108 (1986), 598–603. http://dx.doi.org/10.1115/1.3261274 doi: 10.1115/1.3261274
    [36] F. Potra, Y. Ye, Interior-point methods for nonlinear complementarity problems, J. Optim. Theory Appl., 88 (1996), 617–642. http://dx.doi.org/10.1007/BF02192201 doi: 10.1007/BF02192201
    [37] H. Qi, L. Liao, Z. Lin, Regularized smoothing approximations to vertical nonlinear complementarity problems, J. Math. Anal. Appl., 230 (1999), 261–276. http://dx.doi.org/10.1006/jmaa.1998.6205 doi: 10.1006/jmaa.1998.6205
    [38] Q. Shi, Q. Shen, T. Tang, A class of two-step modulus-based matrix splitting iteration methods for quasi-complementarity problems, Comp. Appl. Math., 39 (2020), 11. http://dx.doi.org/10.1007/s40314-019-0984-4 doi: 10.1007/s40314-019-0984-4
    [39] M. Sun, Monotonicity of Mangasarian's iterative algorithm for generalized linear complementarity problems, J. Math. Anal. Appl., 144 (1989), 474–485. http://dx.doi.org/10.1016/0022-247X(89)90347-8 doi: 10.1016/0022-247X(89)90347-8
    [40] R. Sznajder, M. Gowda, Generalizations of P0- and P-properties; extended vertical and horizontal linear complementarity problems, Linear Algebra Appl., 223-224 (1995), 695–715. http://dx.doi.org/10.1016/0024-3795(93)00184-2 doi: 10.1016/0024-3795(93)00184-2
    [41] Z. Xia, C. Li, Modulus-based matrix splitting iteration methods for a class of nonlinear complementarity problem, Appl. Math. Comput., 271 (2015), 34–42. http://dx.doi.org/10.1016/j.amc.2015.08.108 doi: 10.1016/j.amc.2015.08.108
    [42] S. Xie, Z. Yang, H. Xu, A modulus-based matrix splitting method for the vertical nonlinear complementarity problem, J. Appl. Math. Comput., 69 (2023), 2987–3003. http://dx.doi.org/10.1007/s12190-023-01866-8 doi: 10.1007/s12190-023-01866-8
    [43] S. Xie, H. Xu, J. Zeng, Two-step modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems, Linear Algebra Appl., 494 (2016), 1–10. http://dx.doi.org/10.1016/j.laa.2016.01.002 doi: 10.1016/j.laa.2016.01.002
    [44] L. Zhang, Two-step modulus based matrix splitting iteration for linear complementarity problems, Numer. Algor., 57 (2011), 83–99. http://dx.doi.org/10.1007/s11075-010-9416-7 doi: 10.1007/s11075-010-9416-7
    [45] L. Zhang, Two-step modulus-based synchronous multisplitting iteration methods for linear complementarity problems, J. Comput. Math., 33 (2015), 100–112. http://dx.doi.org/10.4208/jcm.1403-m4195 doi: 10.4208/jcm.1403-m4195
    [46] Y. Zhang, On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem, SIAM J. Optim., 4 (1994), 208–227. http://dx.doi.org/10.1137/0804012 doi: 10.1137/0804012
    [47] Y. Zhang, W. Guo, H. Zheng, S. Vong, A relaxed two-step modulus-based matrix synchronous multisplitting iteration method for linear complementarity problems, Comp. Appl. Math., 43 (2024), 33. http://dx.doi.org/10.1007/s40314-023-02563-9 doi: 10.1007/s40314-023-02563-9
    [48] Y. Zhang, H. Zheng, X. Lu, S. Vong, Modulus-based synchronous multisplitting iteration methods without auxiliary variable for solving vertical linear complementarity problems, Appl. Math. Comput., 458 (2023), 128248. http://dx.doi.org/10.1016/j.amc.2023.128248 doi: 10.1016/j.amc.2023.128248
    [49] Y. Zhang, H. Zheng, X. Lu, S. Vong, A two-step parallel iteration method for large sparse horizontal linear complementarity problems, Appl. Math. Comput., 438 (2023), 127609. http://dx.doi.org/10.1016/j.amc.2022.127609 doi: 10.1016/j.amc.2022.127609
    [50] H. Zheng, X. Lu, S. Vong, A two-step modulus-based matrix splitting iteration method without auxiliary variable for solving vertical linear complementarity problems, Commun. Appl. Math. Comput., in press. http://dx.doi.org/10.1007/s42967-023-00280-y
    [51] H. Zheng, Y. Zhang, X. Lu, S. Vong, Modulus-based synchronous multisplitting iteration methods for large sparse vertical linear complementarity problems, Numer. Algor., 93 (2023), 711–729. http://dx.doi.org/10.1007/s11075-022-01436-2 doi: 10.1007/s11075-022-01436-2
    [52] H. Zheng, S. Vong, A two-step modulus-based matrix splitting iteration method for horizontal linear complementarity problems, Numer. Algor., 86 (2021), 1791–1810. http://dx.doi.org/10.1007/s11075-020-00954-1 doi: 10.1007/s11075-020-00954-1
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
