1. Introduction
Let M1, M2, …, Ms ∈ R^{n×n}, and let u1(z), u2(z), …, us(z): R^n → R^n be s nonlinear mappings. The vertical nonlinear complementarity problem (abbreviated as VNCP) is defined as follows: find z ∈ R^n such that

min{z, M1z + u1(z), M2z + u2(z), …, Msz + us(z)} = 0,

where the minimum operation is taken component-wise.
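As a concrete illustration, the defining equation can be checked numerically at a candidate point. The following MATLAB sketch evaluates the component-wise minimum residual for a small VNCP with s = 2; the data M1, M2, u1, u2 below are illustrative placeholders, not taken from the examples of Section 4.

```matlab
% Minimal sketch: residual of the component-wise minimum equation of a
% VNCP with s = 2. All data here are illustrative, not from the paper.
n  = 4;
M1 = 4*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1); % tridiagonal
M2 = 2*eye(n) + diag(ones(n-1,1),1);                        % bidiagonal
u1 = @(z) z./(1 + z);  % component-wise nonlinear mappings (assumed forms)
u2 = @(z) atan(z);
z   = ones(n,1);       % candidate point
res = norm(min(z, min(M1*z + u1(z), M2*z + u2(z))), inf);
fprintf('VNCP residual at z: %.2e\n', res);
```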
The VNCP has many applications in generalized Leontief input-output models, control theory, nonlinear networks, contact problems in lubrication, and so on; e.g., see [11,18,34,35,39]. Taking ui(z)=ui∈Rn,1≤i≤s, the VNCP reduces to the vertical linear complementarity problem (abbreviated as VLCP) [9,20,40]. Further, taking s=1, the VLCP reduces to the linear complementarity problem (abbreviated as LCP) [10].
In the earlier literature, several iterative methods were proposed for such problems. In [39], Mangasarian's general iterative algorithm was constructed to solve the VLCP. The continuation method of [8], which solves an approximating equation of the nonlinear complementarity problem, can be extended to solve the VNCP. Furthermore, in [37] the authors approximated the equivalent minimum equation of the VNCP by a sequence of aggregation functions and computed the zeros of the approximating problems. Moreover, some interior-point methods were applied to solve complementarity problems, such as the LCP [24,29], nonlinear complementarity problems (abbreviated as NCP) [36] and the horizontal LCP [46]. In recent years, modulus-based matrix splitting (abbreviated as MMS) iteration methods have gained popularity for the numerical solution of various complementarity problems. References [2,12,15,16] detail their application to the LCP, while [13,32,50] focus on the horizontal LCP. The second-order cone LCP is discussed in [27], implicit complementarity problems in [7,14,25], the quasi-complementarity problem in [38], the NCP in [41], and the circular cone NCP in [28]. Numerical examples have demonstrated that MMS iteration methods often outperform state-of-the-art smooth Newton methods in practical applications. Specifically, for the VLCP, a special case of the VNCP, the MMS iteration method was introduced in [31]. Alternatively, an MMS iteration method based on an equivalent modulus equation without auxiliary variables was presented in [23] and shown to be more efficient than the method in [31]. Acceleration techniques and improved results for MMS iteration methods for the VLCP are further detailed in [21,48,49,51]. On the other hand, projected-type methods were also used to solve the VLCP; see [6,33]. For the VNCP, the only existing work on the MMS iteration method is [42].
To improve the convergence rate of the MMS iteration method for solving the VNCP, in this work we construct a two-step MMS iteration method. The two-step splitting technique has been successfully used for other complementarity problems, e.g., see [43,44,45,52]; the main idea is to replace the single sweep of the MMS iteration by two sweeps based on two matrix splittings of the system matrices, which makes fuller use of the information in the matrices and thereby accelerates convergence.
In the following, after presenting some required preliminaries, the new two-step MMS iteration method is established in Section 2. The convergence analysis of the proposed method is given in Section 3, which generalizes and improves the results in [42]. Some numerical examples are presented in Section 4 to illustrate the efficiency of the proposed method, and concluding remarks are given in Section 5.
2. New method
First, we introduce some notation, definitions, and existing results needed in the following discussion.
Let M = (mij) ∈ R^{n×n} and M = DM − BM, where DM and −BM are the diagonal and off-diagonal parts of M, respectively. For two matrices M = (mij) and N = (nij), the inequality M ≥ (>) N means mij ≥ (>) nij for all i, j. Let |M| = (|mij|), and denote the comparison matrix of M by ⟨M⟩ = (⟨mij⟩), where ⟨mij⟩ = |mij| if i = j and ⟨mij⟩ = −|mij| if i ≠ j. If mij ≤ 0 for all i ≠ j, M is called a Z-matrix. If M is a nonsingular Z-matrix and M^{−1} ≥ 0, it is called a nonsingular M-matrix. M is called an H-matrix if ⟨M⟩ is a nonsingular M-matrix. If |mii| > ∑j≠i |mij| for all 1 ≤ i ≤ n, M is called a strictly diagonally dominant (abbreviated as s.d.d.) matrix (see [5]). If M is an H-matrix with all its diagonal entries positive, it is called an H+-matrix (e.g., see [1]). If M is a nonsingular M-matrix, it is well known that there exists a positive diagonal matrix Λ such that MΛ is an s.d.d. matrix with all its diagonal entries positive [5]. The splitting M = F − G is called an H-splitting if ⟨F⟩ − |G| is a nonsingular M-matrix [19].
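The H-matrix property can be tested numerically through the comparison matrix, using the standard characterization that a Z-matrix A with positive diagonal entries is a nonsingular M-matrix if and only if there exists x > 0 with Ax > 0. A minimal MATLAB sketch, with an illustrative matrix M:

```matlab
% Build the comparison matrix <M> and test whether M is an H-matrix,
% i.e., whether <M> is a nonsingular M-matrix. Illustrative data only.
M = [ 4 -1  2;
      1  5 -1;
     -2  0  6 ];
n = size(M,1);
compM = -abs(M);                  % off-diagonal entries: -|mij|
compM(1:n+1:end) = abs(diag(M));  % diagonal entries: |mii|
x = compM \ ones(n,1);            % solve <M> x = e, with e > 0
isH = all(x > 0);                 % x > 0 certifies the M-matrix property
```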
Let Y = {Y1, Y2, …, Ys} be a set of n×n real matrices (or n×1 real vectors), and define a mapping φ as follows:
Let Ω ∈ R^{n×n} be a positive diagonal matrix, let σ ∈ R be a positive constant, and let Mi = Fi − Gi (1 ≤ i ≤ s) be s splittings of the matrices Mi. Denote
and
By equivalently transforming the VNCP into a fixed-point equation, the MMS iteration method for the VNCP was given in [42].
Method 2.1. [42] (MMS) Given x_1^{(0)} ∈ R^n, for k = 0, 1, 2, …, compute x_1^{(k+1)} ∈ R^n by
where
and x_2^{(k)}, …, x_s^{(k)} are computed by
until the iteration is convergent.
Based on Method 2.1, to fully utilize the information in the entries of the matrix set M, consider, for 1 ≤ i ≤ s, two matrix splittings of Mi, i.e., Mi = F_i^{(1)} − G_i^{(1)} = F_i^{(2)} − G_i^{(2)}. Then, the two-step MMS (abbreviated as TMMS) iteration method can be established as follows:
Method 2.2. (TMMS) Given x_1^{(0)} ∈ R^n, for k = 0, 1, 2, …, compute x_1^{(k+1)} ∈ R^n by
where
where x_2^{(k)}, …, x_s^{(k)} are computed by (2.1) and
until the iteration is convergent.
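In the simplest special case s = 1 with a constant mapping u1(z) = q, the VNCP reduces to the LCP, and the modulus equation takes the classical form (Ω + M)x = (Ω − M)|x| − σq with z = (|x| + x)/σ (cf. [2]). The following MATLAB sketch is a schematic of the two-step alternation in this special case only, under the stated assumptions; it is not the full iteration (2.2)–(2.3):

```matlab
% Schematic two-step MMS sketch for s = 1 (an LCP), alternating between
% two splittings M = F1 - G1 = F2 - G2. Assumed modulus form (cf. [2]):
% (Omega + F) x_new = G x + (Omega - M)|x| - sigma q, z = (|x| + x)/sigma.
n = 100; sigma = 1;
M = 4*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1);
q = -ones(n,1);
D = diag(diag(M)); L = -tril(M,-1); U = -triu(M,1);  % M = D - L - U
F1 = D - L; G1 = U;      % forward Gauss-Seidel splitting
F2 = D - U; G2 = L;      % backward Gauss-Seidel splitting
Omega = D;
x = ones(n,1);
for k = 1:500
    x = (Omega + F1) \ (G1*x + (Omega - M)*abs(x) - sigma*q); % first sweep
    x = (Omega + F2) \ (G2*x + (Omega - M)*abs(x) - sigma*q); % second sweep
    z = (abs(x) + x)/sigma;
    if norm(min(z, M*z + q), inf) < 1e-10, break; end
end
```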
Clearly, if we take F_i^{(1)} = F_i^{(2)}, Method 2.2 reduces to Method 2.1 immediately. Furthermore, we can obtain a class of relaxation methods from Method 2.2 by special choices of the two matrix splittings, similar to those in [43,44,45,49,50,52]. For example, for i = 1, 2, …, s, taking
one can get the two-step modulus-based accelerated overrelaxation (abbreviated as TMAOR) iteration method, which reduces to the two-step modulus-based successive overrelaxation (abbreviated as TMSOR) and two-step modulus-based Gauss-Seidel (abbreviated as TMGS) methods when (α, β) = (α, α) and (α, β) = (1, 1), respectively.
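A common concrete choice in the two-step literature [43,44,45,52] pairs a forward triangular splitting with a backward one, which matches the stated reductions to TMSOR and TMGS. The following MATLAB sketch constructs such an AOR splitting pair for a given matrix Mi; it is an assumed form for illustration, not necessarily identical to (2.4):

```matlab
% Assumed AOR splitting pair for Mi = D - L - U (D diagonal, L and U the
% strictly lower and upper parts): a forward and a backward splitting,
% reducing to SOR for beta = alpha and to Gauss-Seidel for alpha = beta = 1.
% Save as aor_pair.m.
function [F1, G1, F2, G2] = aor_pair(Mi, alpha, beta)
    D = diag(diag(Mi)); L = -tril(Mi,-1); U = -triu(Mi,1);
    F1 = (D - beta*L)/alpha;
    G1 = ((1-alpha)*D + (alpha-beta)*L + alpha*U)/alpha;  % Mi = F1 - G1
    F2 = (D - beta*U)/alpha;
    G2 = ((1-alpha)*D + (alpha-beta)*U + alpha*L)/alpha;  % Mi = F2 - G2
end
```

Here F1 sweeps through the unknowns in the natural order and F2 in the reverse order, so the two triangular solves in each TMMS iteration access the matrix entries in both directions.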
3. Convergence analysis
In this section, the convergence conditions of Method 2.2 are given under the assumption that the VNCP has a unique solution z∗, the same assumption as in [42]. Furthermore, for 1 ≤ i ≤ s, ui(z) is assumed to satisfy the following local Lipschitz smoothness conditions: let
be differentiable with
Then, by the Lagrange mean value theorem, there exists ξj between z_j^{(k)} and z_j^∗ such that
where U_i^{(k)} is a nonnegative diagonal matrix with diagonal entries (∂ui(zj)/∂zj)|_{zj=ξj}, 1 ≤ j ≤ n. Furthermore, by (3.1), we have
Denote
Lemma 3.1. Let Mi, 1 ≤ i ≤ s, be H+-matrices, let Ω ∈ R^{n×n} be a positive diagonal matrix, and let σ ∈ R be a positive constant. For t = 1, 2, assume that:
(I) D_{F_i^{(t)}} > 0 for i = 1, 2, …, s−1, and M_s = F_s^{(t)} − G_s^{(t)} is an H-splitting of M_s;
(II)
(III) There exists a positive diagonal matrix Λ such that (⟨F_s^{(t)}⟩ − |G_s^{(t)}|)Λ is an s.d.d. matrix.
Then, 2^{s−1}Ω + φ(F^{(t)}) is an H-matrix.
Proof. Let e = (1, 1, …, 1)^T ∈ R^n be the vector with all entries equal to 1. By reusing (3.4) s−2 times, we have
Hence, ⟨2^{s−1}Ω + φ(F^{(t)})⟩Λ is an s.d.d. matrix, which implies that 2^{s−1}Ω + φ(F^{(t)}) is an H-matrix. □
Lemma 3.2. With the same notations as in Lemma 3.1, let x_i^∗, i = 1, 2, …, s, denote the solution of the VNCP, and let
Then, we have
where
and
Proof. By (2.1), we get
By the first equation of (3.6), we have
Then, for the subscript s−1, using the second equation of (3.6), we get
By induction, for 2≤i≤s, we have
Then, by (3.7), we get
□
Theorem 3.1. With the same notations and assumptions as those in Lemmas 3.1 and 3.2, for t=1,2, assume that
Then, Method 2.2 converges for any initial vector x_1^{(0)} provided that
or
Proof.
By (3.2), we get
By the definition of φ, we have
Combining the first equation of (2.2) and (3.11), we can get
Similarly, by the second equation of (2.2), we have
Then we have
By Lemma 3.1, we have that 2^{s−1}Ω + φ(F^{(t)}), t = 1, 2, is an H-matrix. By [17], with (3.5) and (3.12), we get
where
First, we estimate (F^{(1)})^{−1}G^{(1)}. By Lemma 3.1 and [26], we have
Let
and
Clearly, we have Γj ≤ Γ_j^{(1)} + Γ_j^{(2)} + Γ_j^{(3)}. By (3.4), we can get
Then, by direct computation, we can obtain
Moreover, by reusing (3.4) s−2 times, we get
Analogously, by reusing (3.8) s−2 times, we get
By (3.15)–(3.17), we get
Next, consider the following two cases.
Case 1. When
by (3.18), we have
Case 2. When
by (3.18), we have
Summarizing Cases 1 and 2, we immediately get P^{(1)}Λe − Q^{(1)}Λe > 0, provided that (3.9) or (3.10) holds, which implies that
by (3.14).
By similar deductions, we can also get
when (3.9) or (3.10) is satisfied.
In summary, the spectral radius of the iteration matrix given by (3.13) can be estimated as follows:
Hence, we can see that {x_1^{(k)}}_{k=0}^{∞} converges by (3.13). □
Based on Theorem 3.1, we have the next theorem for the TMAOR method.
Theorem 3.2. With the same notations and assumptions as those in Theorem 3.1, for t = 1, 2, let F_i^{(t)} and G_i^{(t)} be given by (2.4), 1 ≤ i ≤ s. Assume that
Then, the TMAOR converges to the solution of the VNCP.
Proof. By (2.4), with direct computation, we can get
if 0 < β ≤ α. Since M_s is an H+-matrix, by [5] we have ρ(D_{M_s}^{−1}|B_{M_s}|) < 1. Since (3.19) holds, we easily obtain that ((1 − |1 − α|)/α)D_{M_s} − |B_{M_s}| is a nonsingular M-matrix. Note that all the assumptions of Theorem 3.1 hold. Hence, the TMAOR is convergent by Theorem 3.1. □
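Assuming (3.19) is the usual AOR parameter restriction 0 < β ≤ α together with |1 − α| + αρ(D_{M_s}^{−1}|B_{M_s}|) < 1, which is equivalent to 0 < α < 2/(1 + ρ), the admissible range of α can be computed directly, as in the following MATLAB sketch with illustrative data:

```matlab
% Sketch: admissible AOR relaxation parameters, assuming (3.19) is the
% usual restriction 0 < beta <= alpha < 2/(1 + rho) with
% rho = rho(D^{-1}|B|), where Ms = D - B is the splitting into diagonal
% and off-diagonal parts. Ms below is an illustrative H+-matrix.
Ms = 4*eye(5) - diag(ones(4,1),1) - diag(2*ones(4,1),-1);
D = diag(diag(Ms));
B = D - Ms;                        % off-diagonal part, Ms = D - B
rho = max(abs(eig(D \ abs(B))));   % < 1 since Ms is an H+-matrix
alpha_max = 2/(1 + rho);           % admissible alpha lies in (0, alpha_max)
fprintf('alpha must lie in (0, %.4f)\n', alpha_max);
```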
4. Numerical examples
Next, some numerical examples are given to show the efficiency of Method 2.2 compared with Method 2.1.
Example 4.1. [42] Consider a VNCP (s=2), where the two system matrices are given by
where
Example 4.2. [42] Consider a VNCP (s=2), where the two system matrices are given by
where
Example 4.3. Consider the VNCP (s = 2) whose system matrices M1, M2 ∈ R^{m²×m²} come from the discretization of the Hamilton-Jacobi-Bellman (HJB) equation [4]
with Γ={(x,y)∣0<x<2,0<y<1},
As in [42], the nonlinear functions in Examples 4.1–4.3 are all set to u1(z) = z/(1 + z) and u2(z) = arctan(z), where both mappings are applied component-wise.
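In MATLAB, these mappings can be encoded as component-wise function handles; note that for z ≥ 0 both derivatives, 1/(1 + z)² and 1/(1 + z²), lie in (0, 1], which is consistent with the smoothness assumptions of Section 3:

```matlab
% Nonlinear mappings used in Examples 4.1-4.3, applied component-wise.
u1 = @(z) z./(1 + z);   % derivative 1/(1+z)^2, in (0,1] for z >= 0
u2 = @(z) atan(z);      % derivative 1/(1+z^2), in (0,1]
```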
The programming language is MATLAB, and the codes are run on a PC with a 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz CPU and 32 GB memory. Consider the Gauss-Seidel (abbreviated as GS), SOR and AOR splittings, where the SOR splitting is

Mi = (1/α)(D_{Mi} − αL_{Mi}) − (1/α)((1 − α)D_{Mi} + αU_{Mi}),

with L_{Mi} and U_{Mi} being the strictly lower and strictly upper triangular parts of B_{Mi}, respectively, and α being the relaxation parameter, ranging from 0.8 to 1.2 with a step size of 0.1; the parameters of the AOR splitting given by (2.4) satisfy α = β + 0.1, with β ranging from 0.8 to 1.2 with a step size of 0.1.
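The parameter grids can be reproduced by a simple sweep. The sketch below only sets up the grids and the coupling α = β + 0.1, reusing the assumed splitting constructor aor_pair sketched in Section 2; the TMMS solver call itself is left as a comment since the iteration is given by (2.2) and (2.3):

```matlab
% Sketch of the experimental parameter grids: AOR with alpha = beta + 0.1
% and SOR with beta = alpha, both on the grid 0.8:0.1:1.2. The matrix Mi
% is illustrative; aor_pair is the assumed constructor from Section 2.
Mi = 4*eye(10) - diag(ones(9,1),1) - diag(ones(9,1),-1);
for beta = 0.8:0.1:1.2
    alpha = beta + 0.1;                           % AOR coupling of Section 4
    [F1, G1, F2, G2] = aor_pair(Mi, alpha, beta);
    % ... run the TMMS iteration with (F1,G1), (F2,G2); record IT and CPU
end
for alpha = 0.8:0.1:1.2
    [F1, G1, F2, G2] = aor_pair(Mi, alpha, alpha);  % SOR: beta = alpha
    % ... run the TMMS iteration; record IT and CPU
end
```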
All initial vectors are set to x_1^{(0)} = e. The stopping criterion in the iteration of both Methods 2.1 and 2.2 is taken as
and Ω is chosen as
Let m = 256, 512, 1024 for the three examples. For a fair comparison, the relaxation parameters are chosen as the experimentally optimal ones for each size of the three examples; the results of the MMS and TMMS methods are shown in Tables 1–3, where "IT" denotes the number of iteration steps, "CPU" denotes the elapsed CPU time, and
From the numerical results, Method 2.2 always converges faster than Method 2.1 in all cases. Specifically, Method 2.1 requires almost twice as many iteration steps as Method 2.2, which is consistent with the fact that each iteration of Method 2.2 solves two equations versus only one in Method 2.1. Focusing on the CPU time, the cost of Method 2.2 is 14%–23% less than that of Method 2.1 for Example 4.1, while the corresponding percentages for Examples 4.2 and 4.3 are 21%–38% and 7%–18%, respectively. It is clear that the two-step splitting technique succeeds in accelerating the convergence of the MMS iteration method.
5. Conclusions
By means of the two-step matrix splitting technique, we have constructed the TMMS iteration method for solving the VNCP. We have also presented convergence theorems for the TMMS iteration method for arbitrary s, which extends the research scope of modulus-based methods for the VNCP.
Note that, to show the effectiveness of the proposed two-step method, the choice of the two-step splittings in Section 4 is a common one in the existing literature. However, since the iteration matrix in (3.13) is essentially bounded through an extended estimate involving absolute value operations, it is very hard to minimize its spectral radius. How to choose an optimal two-step splitting, in general or based on some special structure, is still an open question. On the other hand, it is noted that there are some other acceleration techniques for the MMS iteration method, such as relaxation, preconditioning, two-sweep techniques, and so on. One can also combine such a technique with the two-step splittings for further acceleration, e.g., [3,22,30,47]. More techniques to improve the convergence of the MMS iteration method are worth studying in the future.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgement
This work was supported by Basic and Applied Basic Research Foundation of Guangdong (No. 2024A1515011822), Scientific Computing Research Innovation Team of Guangdong Province (No. 2021KCXTD052), Science and Technology Development Fund, Macau SAR (No. 0096/2022/A), Guangdong Key Construction Discipline Research Capacity Enhancement Project (No. 2022ZDJS049) and Technology Planning Project of Shaoguan (No. 230330108034184).
Conflict of interest
The authors declare that they have no conflict of interest.