1. Introduction
In this paper, we are concerned with the solution of the linear complementarity problem, abbreviated as LCP, which consists in finding a pair of real vectors $z\in\mathbb{R}^n$ and $v\in\mathbb{R}^n$ satisfying the conditions
$$z\geqslant 0,\qquad v=Az+q\geqslant 0,\qquad z^{\top}v=0,\tag{1.1}$$
where $A\in\mathbb{R}^{n\times n}$ is a given matrix and $q\in\mathbb{R}^n$ is a given vector.
The LCP (1.1) not only provides a unified framework for linear programming, quadratic programming and bimatrix games, but also models many practically relevant situations such as the spatial price equilibrium problem, the obstacle and free boundary problem, the market equilibrium problem, the optimal control problem, the contact mechanics problem and so forth. To solve the LCP (1.1) numerically, various iteration methods have been presented and investigated, for example, the pivot algorithms [1,2], the projected iterative methods [3,4,5], the SOR-like iteration methods [6,7], the multisplitting methods [8,9,10,11,12,13], the modulus-based matrix splitting (MMS) methods [14,15], the Newton-type iteration methods [16,17], the multigrid methods [18] and others. Notably, among these methods, the MMS iteration method, first introduced in [14] and extended in many works [19,20,21,22,23], is particularly attractive because of its ability to obtain numerical solutions quickly and accurately. Making use of the transformations $z=\frac{1}{\gamma}(|x|+x)$ and $v=\frac{1}{\gamma}\Omega(|x|-x)$, Bai [14] transformed the LCP (1.1) into the equivalent fixed-point equation
$$(\Omega+A)x=(\Omega-A)|x|-\gamma q,\tag{1.2}$$
where γ is a positive constant and Ω∈Rn×n is a positive diagonal matrix. Based on (1.2), the MMS method was presented as follows.
Method 1.1 (MMS method). Let $A=M-N$ be a splitting of the given matrix $A\in\mathbb{R}^{n\times n}$, and let the matrix $\Omega+M$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $x^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $x^{k+1}\in\mathbb{R}^n$ by solving the linear system
$$(\Omega+M)x^{k+1}=Nx^k+(\Omega-A)|x^k|-\gamma q,$$
and set
$$z^{k+1}=\frac{1}{\gamma}\big(|x^{k+1}|+x^{k+1}\big),$$
where $\gamma$ is a positive constant.
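To make the scheme above concrete, the following sketch implements the MMS iteration with a Gauss-Seidel-type splitting $M=D-L$, $N=U$ (the splitting, the choice $\Omega=D$ in the usage example, and all variable names are illustrative assumptions, not the paper's fixed choices):

```python
import numpy as np

def mms(A, q, Omega=None, gamma=1.0, tol=1e-10, max_iter=500):
    """Sketch of the MMS iteration (Method 1.1) for the LCP:
    z >= 0, v = A z + q >= 0, z^T v = 0.
    Uses the Gauss-Seidel splitting A = M - N with M = D - L
    (an illustrative choice; any splitting with Omega + M
    nonsingular fits the framework)."""
    n = A.shape[0]
    Omega = np.eye(n) if Omega is None else Omega
    M = np.tril(A)                 # M = D - L (lower triangle of A)
    N = M - A                      # so that A = M - N
    x = np.zeros(n)
    for _ in range(max_iter):
        rhs = N @ x + (Omega - A) @ np.abs(x) - gamma * q
        x_new = np.linalg.solve(Omega + M, rhs)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma
    v = Omega @ (np.abs(x) - x) / gamma
    return z, v
```

For the small LCP with $A=\mathrm{tridiag}(-1,4,-1)$ of order 2 and $q=(-1,1)^{\top}$, the iteration with $\Omega=D$ settles on $z^*=(0.25,0)^{\top}$, $v^*=(0,0.75)^{\top}$ within a few steps.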
Furthermore, motivated by this method for solving the LCP (1.1), many researchers extended the MMS iteration method together with its variants to solve the nonlinear complementarity problems [24,25,26], the horizontal linear complementarity problems [27], the implicit complementarity problems [28,29,30], the second-order cone linear complementarity problems [31], the circular cone nonlinear complementarity problems [32], the semidefinite linear complementarity problems [33] and so forth.
Recently, from a different perspective, Wu and Li [34] subtly designed a new equivalent form by directly exploiting the inequality system of the LCP (1.1) and reformulated the LCP (1.1) as a fixed-point equation without employing any variable transformation, which can be described as follows:
$$(\Omega+A)z=|(\Omega-A)z-q|-q.\tag{1.3}$$
Based on (1.3), a new modulus-based matrix splitting (NMMS) iteration method for solving the large and sparse LCP (1.1) was presented as Method 1.2.
Method 1.2 (NMMS method). Let $A=M-N$ be a splitting of the given matrix $A\in\mathbb{R}^{n\times n}$, and let the matrix $\Omega+M$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $z^{k+1}\in\mathbb{R}^n$ by solving the linear system
$$(\Omega+M)z^{k+1}=Nz^k+|(\Omega-A)z^k-q|-q.$$
The NMMS iteration method has the merits of simple calculation and high computational efficiency, so it is feasible and practical in actual implementations. Comparing Method 1.1 and Method 1.2, the MMS and NMMS iteration methods resemble each other, but neither is a special case of the other. Compared with the MMS method, the NMMS method does not require a variable transformation in the iteration process, which provides a new general framework for solving the LCP (1.1). It was shown in [34] that the NMMS iteration method is superior to the MMS iteration method for certain LCPs. Furthermore, by ingeniously utilizing the matrix splitting technique and properly choosing the involved relaxation parameters, the NMMS iteration method induces some new modulus-based relaxation methods, such as the new modulus-based Jacobi (NMJ) method, the new modulus-based Gauss-Seidel (NMGS) method, the new modulus-based successive overrelaxation (NMSOR) method and the new modulus-based accelerated overrelaxation (NMAOR) method. In this paper, we mainly focus on the NMMS iteration method for solving the LCP (1.1).
Preconditioned acceleration is a classical acceleration technique for fixed-point iterations and can substantially improve the convergence rate. In order to accelerate the convergence of the MMS iteration method for solving the LCP (1.1), some preconditioning solvers have been developed. For example, Li and Zheng [35] and Zheng and Luo [36] respectively developed a preconditioned MMS iteration method and a preconditioned two-step MMS iteration method by utilizing a variable transformation; in both cases the preconditioner was chosen as
Ren et al. [37] proposed the preconditioned general two-step MMS iteration method based on the two-step MMS iteration method [38] and gave its convergence analysis. Wu et al. [39] presented the preconditioned general MMS iteration method by applying a left multiplicative preconditioner to the implicit fixed-point equation, and four different preconditioners were chosen for comparison: $P_1=I+t_1C_1$ and $P_2=I+t_2(C_1+C_m)$, which are both lower-triangular matrices, $P_3=I+t_3C_1^{\top}$, which is the preconditioner in (1.4), and $P_4=I+t_4(C_1+C_1^{\top})$, which is a Hessenberg matrix. The element $c_{kj}$ of $C_i$ was given as
and $t_s>0$, $s=1,2,3,4$. Experimental results show that the preconditioners $P_1$ and $P_2$ have better computational efficiency than $P_3$ and $P_4$ in most cases. Dai et al. [40] proposed a preconditioner $\tilde{P}$ defined as follows
and suggested a preconditioned two-step MMS iteration method for the LCP (1.1) associated with an M-matrix.
In this paper, we draw inspiration from the work of [35] and extend Method 1.2 to a more general framework with a customized preconditioner. Different from the above-mentioned preconditioners, we develop a generalized preconditioner associated with both the $H_+$-matrix $A$ and the vector $q$ of the LCP (1.1), and devise a preconditioned new modulus-based matrix splitting (PNMMS) iteration method for solving the LCP (1.1) with an $H_+$-matrix. In particular, the PNMMS iteration method can yield a series of preconditioned relaxation modulus-based matrix splitting iteration methods through suitable choices of matrix splitting. We give the convergence analysis of the PNMMS iteration method under some mild conditions and provide a comparison theorem showing that the PNMMS iteration method accelerates the convergence rate theoretically.
The rest of this paper is arranged as follows. In section 2, we present some classical definitions and preliminary results relevant to our later developments. Section 3 establishes the PNMMS iteration method for solving the LCP (1.1) and its convergence properties are explored in detail in section 4. The comparison theorem between the NMMS and the PNMMS method is presented in section 5. Section 6 provides some numerical examples to illustrate the theoretical results. Finally, some concluding remarks are given in section 7.
2. Preliminaries
In this section, we present the notation and some auxiliary results that lay the foundation for our later claims.
Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$. $\mathrm{tridiag}(a,b,c)$ denotes a matrix with $a$, $b$, $c$ as the subdiagonal, main diagonal and superdiagonal entries, respectively. We denote the spectral radius of the matrix $A$ by $\rho(A)$, and $I$ is the identity matrix of suitable dimension. $A$ is called a $Z$-matrix if $a_{ij}\leqslant 0$ for any $i\neq j$. If $A$ is a $Z$-matrix and satisfies $A^{-1}\geqslant 0$, then the matrix $A$ is called an $M$-matrix. The comparison matrix of $A$ is denoted by $\langle A\rangle=(\langle a\rangle_{ij})\in\mathbb{R}^{n\times n}$, where
$$\langle a\rangle_{ij}=\begin{cases}|a_{ij}|, & i=j,\\ -|a_{ij}|, & i\neq j.\end{cases}$$
Obviously, the comparison matrix is a Z-matrix. A is called an H-matrix if its comparison matrix is an M-matrix. If A is an H-matrix with positive diagonals, it is an H+-matrix, see [41]. An M-matrix is an H+-matrix, and an H+-matrix is an H-matrix.
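These definitions translate directly into simple numerical checks. The following sketch builds the comparison matrix and tests the $M$-matrix and $H_+$-matrix properties via the characterization $A^{-1}\geqslant 0$ (the function names are our own; the nonsingularity check by explicit inversion is for illustration only):

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """A Z-matrix is a (nonsingular) M-matrix iff A^{-1} >= 0."""
    offdiag = A - np.diag(np.diag(A))
    if np.any(offdiag > tol):
        return False               # not a Z-matrix
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:
        return False               # singular, hence not an M-matrix

def is_H_plus_matrix(A):
    """H+-matrix: positive diagonal and <A> is an M-matrix."""
    return bool(np.all(np.diag(A) > 0)) and is_M_matrix(comparison_matrix(A))
```

For instance, $\mathrm{tridiag}(-1,4,-1)$ passes the $H_+$ test, while a matrix with large positive off-diagonal entries fails it.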
Let A=D−L−U=D−B, where D, −L, −U and −B represent the diagonal matrix, strictly lower triangular matrix, strictly upper triangular matrix and off-diagonal matrix of matrix A, respectively. We say A=M−N is a splitting if M is nonsingular. The splitting A=M−N is called a weak regular splitting of A∈Rn×n, if M−1⩾0,M−1N⩾0, see [42]; an M-splitting if M is an M-matrix and N is nonnegative; an H-splitting if ⟨M⟩−|N| is an M-matrix; an H-compatible splitting if ⟨A⟩=⟨M⟩−|N|, see [15]. Note that an M-splitting is an H-compatible splitting, and an H-compatible splitting is an H-splitting.
Lemma 2.1 ([14]). Let A∈Rn×n be an H+-matrix, then the LCP (1.1) has a unique solution for any q∈Rn.
Lemma 2.2 ([43]). Let A∈Rn×n be an H+-matrix, then |A−1|⩽⟨A⟩−1.
Lemma 2.3 ([44]). Let A be a Z-matrix, then the following statements are equivalent:
● A is an M-matrix;
● There exists a positive vector x, such that Ax>0;
● Let A=M−N be a splitting of A and M−1⩾0, N⩾0, then ρ(M−1N)<1.
Lemma 2.4 ([44]). Let A, B be two Z-matrices, A be an M-matrix and B⩾A, then B is an M-matrix.
Lemma 2.5 ([45]). A is monotone if and only if A∈Rn×n is nonsingular with A−1⩾0.
Lemma 2.6 ([46]). Suppose that $E=F-G$ and $\bar{E}=\bar{F}-\bar{G}$ are weak regular splittings of the monotone matrices $E$ and $\bar{E}$, respectively, such that $F^{-1}\leqslant\bar{F}^{-1}$. If there exists a positive vector $x$ such that $0\leqslant Ex\leqslant\bar{E}x$, then for the monotonic norm associated with $x$, $\|\bar{F}^{-1}\bar{G}\|_x\leqslant\|F^{-1}G\|_x$. In particular, if $F^{-1}G$ has a positive Perron vector, then $\rho(\bar{F}^{-1}\bar{G})\leqslant\rho(F^{-1}G)$.
Lemma 2.7 ([47]). Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an $M$-matrix. Then there exists $\delta_0>0$ such that for any $0<\delta\leqslant\delta_0$, $A(\delta)=(a_{ij}(\delta))\in\mathbb{R}^{n\times n}$ is an irreducible nonsingular $M$-matrix, where
$$a_{ij}(\delta)=\begin{cases}-\delta, & a_{ij}=0 \text{ and } i\neq j,\\ a_{ij}, & \text{otherwise}.\end{cases}$$
3. The PNMMS method
In this section, the PNMMS iteration method for solving the LCP (1.1) will be constructed and the new generalized preconditioner will be introduced.
Enlightened by the idea of the preconditioner in [40], we propose a generalized preconditioner $P$ associated with both the $H_+$-matrix $A$ and the vector $q$ of the LCP (1.1), of the following form
where $p_{ii}=1$, the elements $p_{ik_m}=\frac{|a_{ik_m}|}{a_{k_mk_m}}\geqslant 0$ for $m\in\{1,2,\dots,r\}$, where each index $k_m$ satisfies $q_{k_m}<0$, while all other entries are 0.
It is worth noting that the preconditioner (3.1) only requires $A$ to be an $H_+$-matrix; it extends the preconditioner of [40], which was established for $M$-matrices, and naturally includes (1.5).
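Under our reading of (3.1), the preconditioner can be assembled column by column: for every index $k$ with $q_k<0$, the off-diagonal entries of column $k$ are filled with $|a_{ik}|/a_{kk}$. The following sketch makes this concrete (the function name is ours; the loop form is for clarity, not efficiency):

```python
import numpy as np

def build_preconditioner(A, q):
    """Sketch of the generalized preconditioner (3.1): unit diagonal;
    for every column index k with q_k < 0, the off-diagonal entries
    of column k are |a_ik| / a_kk (nonnegative, since A is assumed
    to be an H+-matrix with positive diagonal). All other entries 0."""
    n = A.shape[0]
    P = np.eye(n)
    for k in np.flatnonzero(q < 0):
        for i in range(n):
            if i != k:
                P[i, k] = abs(A[i, k]) / A[k, k]
    return P
```

For $A=\mathrm{tridiag}(-1,4,-1)$ of order 2 and $q=(-1,1)^{\top}$, only $q_1<0$, so the construction gives $P=\begin{pmatrix}1&0\\0.25&1\end{pmatrix}$, and one checks that $P\geqslant I$ entrywise, as used in the comparison analysis of section 5.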
Letting $PA=\bar{M}-\bar{N}$ be a splitting of the matrix $PA$, we can rewrite the fixed-point system (1.3) as
$$(P\Omega+\bar{M})z=\bar{N}z+P\big(|(\Omega-A)z-q|-q\big),$$
and then we construct the PNMMS iteration method below.
Method 3.1 (PNMMS method). Let $P\in\mathbb{R}^{n\times n}$ be a given preconditioner, $PA=\bar{M}-\bar{N}$ be a splitting of the matrix $PA\in\mathbb{R}^{n\times n}$, and let the matrix $P\Omega+\bar{M}$ be nonsingular, where $\Omega\in\mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^0\in\mathbb{R}^n$, for $k=0,1,2,\dots$ until the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ converges, compute $z^{k+1}\in\mathbb{R}^n$ by solving the linear system
$$(P\Omega+\bar{M})z^{k+1}=\bar{N}z^k+P\big(|(\Omega-A)z^k-q|-q\big).\tag{3.2}$$
Method 3.1 provides a general framework of the PNMMS method for solving the LCP (1.1). Besides including the NMMS method [34] as a special case, it can generate a series of relaxation versions through suitable choices of matrix splitting. The framework of the PNMMS method is summarized as the following Algorithm 1.
Remark 3.1. Let $P\in\mathbb{R}^{n\times n}$ be a given preconditioner and $PA=\bar{M}-\bar{N}=\bar{D}-\bar{L}-\bar{U}$ be the splitting of the matrix $PA\in\mathbb{R}^{n\times n}$. Here, we give the following remarks on Method 3.1.
● When $P=I$, Method 3.1 reduces to the NMMS method [34].
● When $\bar{M}=\bar{D}$, $\bar{N}=\bar{L}+\bar{U}$, Method 3.1 reduces to the preconditioned new modulus-based Jacobi (PNMJ) method:
$$(P\Omega+\bar{D})z^{k+1}=(\bar{L}+\bar{U})z^k+P\big(|(\Omega-A)z^k-q|-q\big).$$
● When $\bar{M}=\bar{D}-\bar{L}$, $\bar{N}=\bar{U}$, Method 3.1 becomes the preconditioned new modulus-based Gauss-Seidel (PNMGS) method:
$$(P\Omega+\bar{D}-\bar{L})z^{k+1}=\bar{U}z^k+P\big(|(\Omega-A)z^k-q|-q\big).$$
● When $\bar{M}=\frac{1}{\alpha}\bar{D}-\bar{L}$, $\bar{N}=\frac{1-\alpha}{\alpha}\bar{D}+\bar{U}$, Method 3.1 turns into the preconditioned new modulus-based successive overrelaxation (PNMSOR) method:
$$(\alpha P\Omega+\bar{D}-\alpha\bar{L})z^{k+1}=\big((1-\alpha)\bar{D}+\alpha\bar{U}\big)z^k+\alpha P\big(|(\Omega-A)z^k-q|-q\big).$$
● When $\bar{M}=\frac{1}{\alpha}\bar{D}-\frac{\beta}{\alpha}\bar{L}$, $\bar{N}=\frac{1-\alpha}{\alpha}\bar{D}+\frac{\alpha-\beta}{\alpha}\bar{L}+\bar{U}$, Method 3.1 reduces to the preconditioned new modulus-based accelerated overrelaxation (PNMAOR) method:
$$(\alpha P\Omega+\bar{D}-\beta\bar{L})z^{k+1}=\big((1-\alpha)\bar{D}+(\alpha-\beta)\bar{L}+\alpha\bar{U}\big)z^k+\alpha P\big(|(\Omega-A)z^k-q|-q\big).$$
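The whole family can be driven by one routine that takes the splitting matrix $\bar M$ as input. The following sketch implements the PNMMS update in the form we read from the fixed-point system (1.3); the function and variable names are our own, and the Gauss-Seidel-type choice $\bar M=\bar D-\bar L$ in the usage example is illustrative:

```python
import numpy as np

def pnmms(A, q, P, Omega, M_bar, tol=1e-10, max_iter=500):
    """Sketch of the PNMMS iteration (Method 3.1), assuming the update
    (P Omega + M_bar) z^{k+1} = N_bar z^k + P(|(Omega - A) z^k - q| - q),
    our reconstruction from the fixed-point form (1.3)."""
    N_bar = M_bar - P @ A            # PA = M_bar - N_bar
    W = P @ Omega + M_bar            # coefficient matrix, assumed nonsingular
    z = np.zeros_like(q)
    for _ in range(max_iter):
        rhs = N_bar @ z + P @ (np.abs((Omega - A) @ z - q) - q)
        z_new = np.linalg.solve(W, rhs)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

With `P = np.eye(n)` the routine reduces to the NMMS method, and passing `M_bar = np.tril(P @ A)` gives a PNMGS-type variant; the relaxed PNMSOR/PNMAOR versions only change how `M_bar` is formed from $\bar D$, $\bar L$.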
4. Convergence analysis
In this section, we will analyze the convergence of Method 3.1 for the LCP (1.1) with an H+-matrix. Some mild convergence conditions are given to guarantee the convergence of Method 3.1.
First of all, a general convergence condition for Method 3.1 is established in Theorem 4.1.
Theorem 4.1. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an $H_+$-matrix. Let $PA=\bar{M}-\bar{N}$ be a splitting of the matrix $PA$ and let $P\Omega+\bar{M}$ be an invertible $H_+$-matrix, where $\Omega$ is a positive diagonal matrix. Let
$$\bar{F}=\langle P\Omega+\bar{M}\rangle,\qquad \bar{G}=|\bar{N}|+|P||\Omega-A|.\tag{4.1}$$
If $\rho(\bar{F}^{-1}\bar{G})<1$, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 3.1 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.
Proof. Since $A$ is an $H_+$-matrix, it follows from Lemma 2.1 that the LCP (1.1) has a unique solution $z^*$, which means
$$(P\Omega+\bar{M})z^*=\bar{N}z^*+P\big(|(\Omega-A)z^*-q|-q\big).\tag{4.2}$$
Subtracting (4.2) from (3.2), we have
$$(P\Omega+\bar{M})(z^{k+1}-z^*)=\bar{N}(z^k-z^*)+P\big(|(\Omega-A)z^k-q|-|(\Omega-A)z^*-q|\big).$$
Since $P\Omega+\bar{M}$ is invertible, it holds that
$$z^{k+1}-z^*=(P\Omega+\bar{M})^{-1}\Big[\bar{N}(z^k-z^*)+P\big(|(\Omega-A)z^k-q|-|(\Omega-A)z^*-q|\big)\Big].\tag{4.3}$$
Taking absolute values on both sides of (4.3) and utilizing the triangle inequality, one can obtain
$$|z^{k+1}-z^*|\leqslant|(P\Omega+\bar{M})^{-1}|\big(|\bar{N}|+|P||\Omega-A|\big)|z^k-z^*|\leqslant\langle P\Omega+\bar{M}\rangle^{-1}\big(|\bar{N}|+|P||\Omega-A|\big)|z^k-z^*|=\bar{F}^{-1}\bar{G}|z^k-z^*|,$$
where the last inequality follows from Lemma 2.2. Evidently, if $\rho(\bar{F}^{-1}\bar{G})<1$, then the sequence $\{z^k\}_{k=0}^{+\infty}$ converges to the unique solution $z^*$ of the LCP (1.1).
In particular, if P=I, the following corollary can be obtained.
Corollary 4.1. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix, $A=M-N$ be a splitting of the matrix $A$, and let the matrix $\Omega+M$ be a nonsingular $H_+$-matrix, where $\Omega$ is a positive diagonal matrix. Let
$$F=\langle\Omega+M\rangle,\qquad G=|N|+|\Omega-A|.$$
If $\rho(F^{-1}G)<1$, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 1.2 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.
Theorem 4.2. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an $H_+$-matrix. Let $PA=\bar{M}-\bar{N}$ be a splitting of the matrix $PA$ and let $P\Omega+\bar{M}$ be an invertible $H_+$-matrix. Suppose the positive diagonal matrix $\Omega$ satisfies
or
where $A=D-B$, $P=D_P-B_P$ and $x$ is the column vector of all ones. Then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 3.1 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.
Proof. Denote $\bar{E}:=\bar{F}-\bar{G}$, where $\bar{F}$ and $\bar{G}$ are given as in (4.1). Then it can be concluded that
For the case $\Omega\geqslant D$, the parameter $\Omega$ satisfies (4.4), and we have
It is obvious that $\langle PA\rangle+|P|\langle A\rangle-2|B_P|\Omega$ is a $Z$-matrix. According to Lemma 2.3, (4.4) implies that $\langle PA\rangle+|P|\langle A\rangle-2|B_P|\Omega$ is an $M$-matrix. It then follows from Lemma 2.4 that $\bar{E}$ is an $M$-matrix.
For the case $\Omega<D$, the parameter $\Omega$ satisfies (4.5), and it holds that
Analogously, $\langle PA\rangle-|P||A|+2D_P\Omega$ is an $M$-matrix. In light of Lemma 2.4, it follows that the $Z$-matrix $\bar{E}$ is a nonsingular $M$-matrix.
As we know, $\bar{E}=\bar{F}-\bar{G}$ is a splitting of the $M$-matrix $\bar{E}$. Since $P\Omega+\bar{M}$ is an $H_+$-matrix, we have $\bar{F}^{-1}=\langle P\Omega+\bar{M}\rangle^{-1}\geqslant 0$, so $\bar{F}$ is an $M$-matrix. It is obvious that $\bar{G}=|\bar{N}|+|P||\Omega-A|\geqslant 0$. Then Lemma 2.3 leads to $\rho(\bar{F}^{-1}\bar{G})<1$, and the assertion follows from Theorem 4.1. The proof is completed.
In particular, if $P=I$, then $\|B_P\|_\infty=0$. Taking the right-hand side of (4.4) to be $+\infty$, the following corollary can be obtained. It is worth noting that Corollary 4.2 gives a broader convergence region than Theorem 4.2 in [34].
Corollary 4.2. Let A∈Rn×n be an H+-matrix, A=M−N be a splitting of the matrix A, and the matrix Ω+M be nonsingular H+-matrix. If the diagonal matrix Ω satisfies
where $A=D-B$ and $x$ is the column vector of all ones, then the iteration sequence $\{z^k\}_{k=0}^{+\infty}\subset\mathbb{R}^n$ produced by Method 1.2 converges to the unique solution $z^*\in\mathbb{R}^n$ of the LCP (1.1) for any nonnegative initial vector.
Similarly, we establish the following convergence theorem on the PNMAOR method.
Theorem 4.3. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an $H_+$-matrix. Let $PA=\bar{D}-\bar{L}-\bar{U}=\bar{D}-\bar{B}$, $P=D_P-B_P$ and $\rho:=\rho\big(\bar{D}^{-1}(|\bar{B}|+|B_P\Omega|)\big)$. Suppose the positive diagonal matrix $\Omega$ satisfies $D_P\Omega\geqslant\frac{1}{2\alpha}\bar{D}$. Then for any initial vector, the PNMAOR iteration method is convergent if the following conditions are satisfied:
(1) when $\frac{1}{2\alpha}\bar{D}\leqslant D_P\Omega<\frac{1}{\alpha}\bar{D}$,
(2) when $D_P\Omega\geqslant\frac{1}{\alpha}\bar{D}$,
Proof. Since
where PA=ˉM−ˉN. By some calculations, it holds that
and
where $\bar{F}$ and $\bar{G}$ are defined as in (4.1). It is obvious that $\bar{F}$ is an $M$-matrix and $\bar{G}\geqslant 0$. According to Lemma 2.3, it suffices to prove that $\bar{E}=\bar{F}-\bar{G}$ is an $M$-matrix, where
When $\frac{1}{2\alpha}\bar{D}\leqslant D_P\Omega<\frac{1}{\alpha}\bar{D}$ and $0\leqslant\beta\leqslant\alpha$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar{D}\big|\geqslant 0$. Based on (4.6), we can simplify it to
From Lemma 2.4, if $\hat{E}$ is an $M$-matrix, then $\bar{E}$ is an $M$-matrix, and this holds if and only if
and then we can obtain
When $\frac{1}{2\alpha}\bar{D}\leqslant D_P\Omega<\frac{1}{\alpha}\bar{D}$ and $\alpha<\beta$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar{D}\big|\geqslant 0$ and $\alpha|\bar{L}|\geqslant 0$. Based on (4.6), we can simplify it to
From Lemma 2.4, if $\hat{E}$ is an $M$-matrix, then $\bar{E}$ is an $M$-matrix, and this holds if and only if
and then we get
When $D_P\Omega\geqslant\frac{1}{\alpha}\bar{D}$ and $0\leqslant\beta\leqslant\alpha$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar{D}\big|=\frac{1}{\alpha}\bar{D}>0$. Based on (4.6), we can simplify it to
From Lemma 2.4, if $\hat{E}$ is an $M$-matrix, then $\bar{E}$ is an $M$-matrix, and this holds if and only if
and then we have
When $D_P\Omega\geqslant\frac{1}{\alpha}\bar{D}$ and $\alpha<\beta$, we know $D_P\Omega-\big|D_P\Omega-\frac{1}{\alpha}\bar{D}\big|=\frac{1}{\alpha}\bar{D}>0$ and $\alpha|\bar{L}|\geqslant 0$. Based on (4.6), we can simplify it to
From Lemma 2.4, if $\hat{E}$ is an $M$-matrix, then $\bar{E}$ is an $M$-matrix, and this holds if and only if
and then we obtain
Therefore, the convergence of the PNMAOR method can be proved by Lemma 2.3, thus completing the proof.
Under the conditions of Theorem 4.3, if we take the specific choices of α and β for the PNMAOR method, the following corollaries on the PNMJ, PNMGS and PNMSOR methods can be derived.
Corollary 4.3. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be a given nonsingular matrix with positive diagonal entries such that $PA$ is an $H_+$-matrix. Let $PA=\bar{D}-\bar{L}-\bar{U}=\bar{D}-\bar{B}$, $P=D_P-B_P$ and $\rho:=\rho\big(\bar{D}^{-1}(|\bar{B}|+|B_P\Omega|)\big)$. Suppose the positive diagonal matrix $\Omega$ satisfies $D_P\Omega\geqslant\frac{1}{2\alpha}\bar{D}$. Then for any initial vector,
● if α=1 and β=0, the PNMJ iteration method is convergent;
● if α=β=1, the PNMGS iteration method is convergent;
● if α=β, the PNMSOR iteration method is convergent for
When $P=I$, the convergence theorem of the PNMAOR method carries over to the NMAOR method in [34]. In most cases, the range of the parameters of the AOR method is set as $0\leqslant\beta\leqslant\alpha$, and then the following result can be obtained.
Corollary 4.4. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix. Let $A=D-L-U=D-B$ satisfy $\rho:=\rho(D^{-1}|B|)<\frac{1}{2}$ and $\Omega\geqslant\frac{1}{2\alpha}D$. Then for any initial vector, the NMAOR iteration method is convergent for
5. A comparison theorem between PNMMS and NMMS
In this section, we provide a comparison theorem between the PNMMS iteration method and the NMMS iteration method, which indicates that the PNMMS method for solving the LCP (1.1) can accelerate the convergence rate of the original NMMS method.
Let
where $A=D-L-U$ and $PA=\bar{D}-\bar{L}-\bar{U}$. It is easy to see that this is a special case of the NMMS method and the PNMMS method; other relaxation versions can be analyzed theoretically in a similar way.
We get the following useful lemma on the premise of the structure-preserving preconditioner $P$ in (3.1). The proof of Lemma 5.1 is similar to that of Lemma 4.1 in [48] and thus we omit the details.
Lemma 5.1 ([48]). Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be the preconditioner from (3.1) such that $PA$ is an $H_+$-matrix. Assume that $D$, $L$ and $\bar{D}$, $\bar{L}$ are given by $A=D-L-U$ and $PA=\bar{D}-\bar{L}-\bar{U}$, respectively. Then
Theorem 5.1. Let $A\in\mathbb{R}^{n\times n}$ be an $H_+$-matrix and $P$ be the preconditioner in (3.1) such that $PA$ is an $H_+$-matrix. Let $A$ and $PA$ have the splittings $A=D-L-U$ and $PA=\bar{D}-\bar{L}-\bar{U}$, respectively. Then for the iteration matrices $F^{-1}G$ of the NMMS method and $\bar{F}^{-1}\bar{G}$ of the PNMMS method, it holds that
$$\rho(\bar{F}^{-1}\bar{G})\leqslant\rho(F^{-1}G)<1,$$
where $F$, $G$ are given as
$$F=2D-|L|,\qquad G=|L|+2|U|,\tag{5.1}$$
and $\bar{F}$, $\bar{G}$ are given as
$$\bar{F}=2\bar{D}-|\bar{L}|,\qquad \bar{G}=|\bar{L}|+2|\bar{U}|.\tag{5.2}$$
Proof. Since
we apply the convergence conditions of Corollary 4.1 and Theorem 4.1 and obtain $\rho(F^{-1}G)<1$ and $\rho(\bar{F}^{-1}\bar{G})<1$, where $F$, $G$ and $\bar{F}$, $\bar{G}$ are given as in (5.1) and (5.2), respectively. Now we only need to prove $\rho(\bar{F}^{-1}\bar{G})\leqslant\rho(F^{-1}G)$.
Here, we denote
$$E=F-G,\qquad \bar{E}=\bar{F}-\bar{G}.\tag{5.3}$$
It is easy to check that $E$ and $\bar{E}$ of (5.3) are both $M$-matrices, so $E^{-1}\geqslant 0$ and $\bar{E}^{-1}\geqslant 0$. In this way, Lemma 2.5 shows that $E$ and $\bar{E}$ are monotone matrices. Since $F=2D-|L|$ and $\bar{F}=2\bar{D}-|\bar{L}|$ are both $M$-matrices, we have $F^{-1}\geqslant 0$ and $\bar{F}^{-1}\geqslant 0$. Evidently, $G=|L|+2|U|\geqslant 0$ and $\bar{G}=|\bar{L}|+2|\bar{U}|\geqslant 0$ are nonnegative matrices, so $E=F-G$ and $\bar{E}=\bar{F}-\bar{G}$ are weak regular splittings. From Lemma 5.1, we find that $2\bar{D}-|\bar{L}|\leqslant 2D-|L|$, hence $(2D-|L|)^{-1}\leqslant(2\bar{D}-|\bar{L}|)^{-1}$, that is, $F^{-1}\leqslant\bar{F}^{-1}$. Following Lemma 2.3, since $E$ is an $M$-matrix, there exists a vector $x>0$ such that $Ex>0$. For the preconditioner (3.1), we have $P\geqslant I$. Moreover, since $\bar{E}=PE$, we easily infer that $0\leqslant Ex\leqslant\bar{E}x$.
If the matrix $A$ is irreducible, then $F^{-1}G$ is also a nonnegative irreducible matrix. In combination with Lemma 2.6 and the Perron-Frobenius theorem [45], $F^{-1}G$ has a positive Perron vector, so that $\rho(\bar{F}^{-1}\bar{G})\leqslant\rho(F^{-1}G)$. If, however, the matrix $A$ is reducible, then we can construct an irreducible matrix $A(\delta)$ by Lemma 2.7, which again leads to $\rho(\bar{F}^{-1}\bar{G})\leqslant\rho(F^{-1}G)$.
In view of the above, the comparison theorem shows that the PNMMS method converges faster than the NMMS method whenever both methods are convergent.
6. Numerical experiments
In this section, four numerical examples are presented to illustrate the efficiency of the PNMMS iteration method for solving the LCP (1.1). All tests are conducted in MATLAB R2014b on a PC with the Windows 10 operating system, an Intel i5-10400F CPU and 8GB RAM. In the numerical results, we report the number of iteration steps (denoted by "Iter"), the elapsed CPU time in seconds (denoted by "Time"), the relative residual (denoted by "Res") and the spectral radius of the iteration matrix (denoted by "Rho"). The stopping criterion of the iteration is defined as
or the prescribed maximal iteration number $k_{\max}=500$ is exceeded ("$-$" is used in the following tables to indicate this circumstance). All tests start from the initial vector $z^0=(1,0,1,0,\dots,1,0,\dots)^{\top}\in\mathbb{R}^n$.
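For reference, a residual that is commonly used as the stopping quantity for LCPs is $\mathrm{Res}(z^k)=\|\min(z^k,Az^k+q)\|_2$, which vanishes exactly at a solution; this particular form is our assumption, since the displayed criterion is not recoverable from the text:

```python
import numpy as np

def lcp_residual(A, q, z):
    """Common LCP residual: Res(z) = || min(z, Az + q) ||_2
    (componentwise minimum). Zero exactly at a solution of
    z >= 0, Az + q >= 0, z^T (Az + q) = 0. Assumed form, used
    here only for illustration."""
    return np.linalg.norm(np.minimum(z, A @ z + q))
```

A loop would then stop once `lcp_residual(A, q, z)` drops below the prescribed tolerance or `k` reaches `k_max = 500`.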
With regard to the comparison of Method 3.1 and Method 1.2, we exhibit the performance of the PNMSOR and the NMSOR methods. In our implementations, the parameter $\Omega$ is chosen as $\Omega=\frac{1}{\alpha}D$, and the parameter $\alpha$ is determined experimentally.
Example 6.1 ([14]). Consider the LCP (1.1) with $A=\hat{A}+\mu I\in\mathbb{R}^{n\times n}$, in which $\hat{A}=\mathrm{tridiag}(-I,S,-I)\in\mathbb{R}^{n\times n}$ is block tridiagonal
with $S=\mathrm{tridiag}(-1,4,-1)\in\mathbb{R}^{m\times m}$ and $n=m^2$, the vector
For this example, the system matrix $A$ is a strictly diagonally dominant symmetric positive definite $H_+$-matrix when $\mu>0$, so the LCP (1.1) has a unique solution. The preconditioner is chosen as (3.1), where $k_m\in\{1,3,5,7,\dots\}$.
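For reproducibility, the test matrix can be assembled as follows; the block-tridiagonal reading $\hat{A}=\mathrm{tridiag}(-I,S,-I)$ is the standard construction for this example from [14] and is an assumption on our part:

```python
import numpy as np

def example_matrix(m, mu=4.0):
    """Sketch of the Example 6.1 matrix: A = Ahat + mu*I with
    Ahat = tridiag(-I, S, -I) block tridiagonal,
    S = tridiag(-1, 4, -1) of order m, so n = m^2."""
    I = np.eye(m)
    S = 4.0 * I - np.eye(m, k=1) - np.eye(m, k=-1)
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = S
        if i + 1 < m:
            A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = -I   # superdiagonal block
            A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = -I   # subdiagonal block
    return A + mu * np.eye(n)
```

With $\mu=4$ every diagonal entry equals 8 while each absolute row sum of the off-diagonal part is at most 4, so the matrix is strictly diagonally dominant, symmetric and hence an $H_+$-matrix, as stated above.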
The optimal value of the parameter $\alpha$ involved in the PNMSOR and NMSOR methods is obtained experimentally as the value leading to the least number of iteration steps. We present the iteration steps of the PNMSOR and NMSOR methods under different $\alpha$ for the test problem in Figure 1. As shown in Figure 1, the PNMSOR method performs better no matter how the parameter $\alpha$ is chosen. Any $\alpha\in(0.9,1.1)$ can be regarded as a good choice, and we therefore set $\alpha=1$. Numerical results for Example 6.1 with different problem sizes $n$ and $\mu=4$ are reported in Table 1.
It follows from Table 1 that the PNMSOR method is superior to the NMSOR method with respect to both iteration steps and elapsed CPU time. We also present the spectral radius of the iteration matrix, which further verifies the theoretical result of the comparison theorem. In addition, Figure 2 plots the relative residual against the iteration steps for the two methods. It shows that the relative residual of the PNMSOR method decreases faster than that of the NMSOR method at each step, and that the final accuracy is about $10^{-15}$. That is to say, even if we demand a more accurate solution, the PNMSOR method retains its great advantage and converges at a faster speed. As a result, the PNMSOR method has better computational efficiency in Example 6.1.
Example 6.2 ([14]). Consider the LCP (1.1) with $A=\hat{A}+\mu I\in\mathbb{R}^{n\times n}$, in which
with $S=\mathrm{tridiag}(-0.5,4,-1.5)\in\mathbb{R}^{m\times m}$ and $n=m^2$, the vector
For this example, the matrix $A$ is a nonsymmetric positive definite $H_+$-matrix with strict diagonal dominance when $\mu>0$, so the LCP (1.1) has a unique solution. We present the iteration steps of the PNMSOR and NMSOR methods under different $\alpha$ and the residual comparison for the test problem in Figure 3 and Figure 4, respectively. The experimentally optimal parameter $\alpha$ lies in $(0.85,1.2)$, and we choose $\alpha=1$. Numerical results for Example 6.2 with different problem sizes $n$ and $\mu=4$ are reported in Table 2.
It follows from Table 2 that the performance of the PNMSOR method is much more competitive than the NMSOR method in terms of computing efficiency, especially for large and sparse problems.
Example 6.3 (Black-Scholes American option pricing). Consider the American option pricing problem introduced in [50]. Let $V(S,t)$ and $G(S,t)$ represent the value of an American option and the given payoff function of this option, respectively. By a standard no-arbitrage argument, $V(S,t)$ must satisfy the following complementarity conditions
This model can be further reformulated into the following inequality system
which satisfies $u(x,0)=g(x,0)$, $0\leqslant t\leqslant\frac{1}{2}\sigma^2T$ and $\lim_{x\to\pm\infty}u(x,t)=\lim_{x\to\pm\infty}g(x,t)$. Here, we limit $x\in[a,b]$ and choose the values of $a$ and $b$ as in [50]. Using the forward difference scheme for the time $t$ and the implicit difference scheme for the price $x$ to discretize (6.1), one can obtain
By utilizing the transformation $z=u-g$, $q=Ag-b$, (6.2) can be rewritten as the LCP (1.1) [50].
Following the parameter setting in [50], we take
where $\lambda=\frac{dt}{(dx)^2}$, $dt=\frac{\sigma^2T}{2m}$ denotes the time step and $dx=\frac{b-a}{m}$ denotes the price step. Through different choices of the parameter $\theta$, i.e., $\theta=0,\frac{1}{2},1$, we obtain different schemes. Here we choose $\theta=1$, which yields a backward difference scheme. Evidently, the matrix $A$ is an $H_+$-matrix as well. After that, let $q=Ag-b$, where $g=0.5z^*$, $b=Az^*-v^*$, with the exact solution $z^*=(1,0,1,0,\dots,1,0,\dots)^{\top}\in\mathbb{R}^{n-1}$, $v^*=(0,1,0,1,\dots,0,1,\dots)^{\top}\in\mathbb{R}^{n-1}$, and the initial vector $z^0=(1,1,1,1,\dots,1,1,\dots)^{\top}\in\mathbb{R}^{n-1}$ in this experiment.
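As an illustration of the discretization step, the following sketch builds a coefficient matrix for a $\theta$-scheme applied to the transformed (heat-equation) problem; the specific tridiagonal form $A=\mathrm{tridiag}(-\theta\lambda,\,1+2\theta\lambda,\,-\theta\lambda)$, like the parameter defaults, is our assumption, since the displayed matrix is not recoverable from the text:

```python
import numpy as np

def option_lcp_matrix(m, sigma=0.2, T=0.5, a=-2.0, b=2.0, theta=1.0):
    """Sketch of a theta-scheme matrix for the heat-equation form of the
    American option problem: with lam = dt/dx^2 we assume
    A = tridiag(-theta*lam, 1 + 2*theta*lam, -theta*lam) of order m-1.
    Illustrative reading only."""
    dt = 0.5 * sigma**2 * T / m      # time step dt = sigma^2 T / (2m)
    dx = (b - a) / m                 # space (log-price) step
    lam = dt / dx**2
    n = m - 1
    return ((1 + 2 * theta * lam) * np.eye(n)
            - theta * lam * np.eye(n, k=1)
            - theta * lam * np.eye(n, k=-1))
```

Any matrix of this form has positive diagonal, nonpositive off-diagonal entries and strict diagonal dominance, which is consistent with the $H_+$-matrix property asserted above.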
From Tables 3–6, we compare the efficiency of the PNMSOR and NMSOR methods for solving the LCP (1.1) generated from American option pricing, and show the influence on the solving efficiency of different volatilities ($\sigma=0.2,0.6$) and different maturities ($T=0.5,5$). The parameter selection follows [50], and the specific parameters are indicated in the tables below. By a simple calculation of the vector $q$, the preconditioner $P$ is chosen as in Example 6.1 and $\alpha=1$.
As shown in Table 3, compared with the NMSOR method, the PNMSOR method can halve the computing time in the case of small volatility and short maturity, and its overall elapsed CPU time is the shortest. For the case of large volatility and short maturity in Table 4, the PNMSOR method also solves remarkably quickly, although the overall elapsed CPU time is longer than with small volatility. It can be seen from Table 5 that, although the elapsed CPU time increases, the maturity has less influence on the solving efficiency than the volatility, and the performance of our method remains excellent. From Table 6, the case with large volatility and long maturity consumes the longest solution time, but the PNMSOR method still retains its superiority. At higher dimensions, the NMSOR method cannot reach sufficient accuracy within 500 steps, whereas the PNMSOR method converges rapidly. Experimentally, for size (1600, 3200) the NMSOR method achieves a precision of $3.6375\times10^{-8}$ at step 516 in 55.0822 seconds, and for size (3200, 6400) it reaches $3.9548\times10^{-7}$ at step 1011 in 432.2135 seconds; that is, the NMSOR method is convergent, but it does not reach sufficient accuracy within the prescribed maximum number of iteration steps. This comparison exhibits the superior performance of the PNMSOR method even more vividly.
Example 6.4 (Continuous optimal control problem). Consider the quasi-variational inequality problem (QIP) from the continuous optimal control problem, which was proposed in [49]: find z∈K(z) such that
where , is a positive cone in , and represent the implicit obstacle function and the mapping from to itself, respectively.
The QIP (6.3) can be simplified to the LCP (1.1) by setting and . In Example 6.4, let , where
with and , which may be more consistent with the actual condition in the application. Apparently, there is a unique solution to this problem for any . The vector is the same as in Example 6.1 and . Numerical results for this example are reported in Table 7, from which we can find that the PNMSOR method is superior to the NMSOR method in terms of CPU time.
On the whole, from these numerical results, we can see that the PNMMS iteration method for solving the LCP (1.1) is much more competitive than the NMMS iteration method.
7. Conclusions
In this paper, a class of preconditioned new modulus-based matrix splitting (PNMMS) methods with a generalized preconditioner is developed to solve the LCP (1.1). The convergence analysis of the PNMMS method is established when $A$ is an $H_+$-matrix. In particular, we provide a comparison theorem between the PNMMS iteration method and the NMMS iteration method, which shows that the PNMMS method improves the convergence rate of the original NMMS method for solving the LCP (1.1). Numerical experiments are reported to demonstrate the efficiency of the proposed method.
Acknowledgments
The first author would like to thank Dr. Cairong Chen from Fujian Normal University for his helpful discussions and guidance. The authors are grateful to the reviewers and editor for their helpful comments and suggestions which have helped to improve the paper. This work is supported by the Social Science Foundation of Liaoning Province (L22BGL028).
Conflict of interest
The authors declare there is no conflict of interest.