In this paper, we consider numerical methods for the linear complementarity problem (LCP). By introducing a positive diagonal parameter matrix, the LCP is transformed into an equivalent fixed-point equation and the equivalence is proved. Based on such equation, the general fixed-point (GFP) method with two cases are proposed and analyzed when the system matrix is a P-matrix. In addition, we provide several concrete sufficient conditions for the proposed method when the system matrix is a symmetric positive definite matrix or an H+-matrix. Meanwhile, we discuss the optimal case for the proposed method. The numerical experiments show that the GFP method is effective and practical.
Citation: Xi-Ming Fang. General fixed-point method for solving the linear complementarity problem[J]. AIMS Mathematics, 2021, 6(11): 11904-11920. doi: 10.3934/math.2021691
In this paper, we consider the linear complementarity problem, which is to find a vector x∈Rn such that
x^{\mathrm{T}}(Ax+q) = 0, \quad x\geq 0 \quad \mathrm{and} \quad Ax+q\geq 0, | (1.1) |
where A=(aij)∈Rn×n and q∈Rn are given. For convenience, such problem is usually abbreviated as LCP(A,q), which is related to many practical problems, such as American option pricing problems, the market equilibrium problems and the free boundary problems for journal bearings, see [1,2,3,4,5] and the references therein.
To obtain the numerical solution of the LCP(A,q) with a large and sparse system matrix, many kinds of iteration methods have been proposed and analyzed in recent decades. The projected method is a well-known class of iteration methods, which includes the smoothing projected gradient method [6], the projected gradient method [7], the partial projected Newton method [8], the improved projected successive over-relaxation (IPSOR) method [9], and so on. For more detailed material on projected-type iteration methods, we refer readers to [10,11,12,13] and the references therein. There are other efficient iteration methods for solving the LCP, e.g., the modulus-based matrix splitting iteration methods [2], the nonstationary extrapolated modulus algorithms [14], the two-sweep modulus-based matrix splitting iteration methods [15], the general modulus-based matrix splitting method [5], the two-step modulus-based matrix splitting iteration method [16], and the accelerated modulus-based matrix splitting iteration methods [17]. The main difference between the two classes is that the projected-type iteration methods construct their iterative forms directly on an equivalent fixed-point equation by means of projection approaches and matrix splittings, whereas the modulus-based matrix splitting iteration methods reformulate the LCP(A,q) as an implicit fixed-point equation by introducing positive diagonal parameter matrices and then construct the iterative forms from various matrix splittings, thereby avoiding projections. In short, one class requires the projection and the other does not. For other iteration methods for solving the complementarity problems, we refer readers to [18,19,20,21,22,23,24] and the references therein.
In [4], Shi, Yang, and Huang presented a fixed-point (FP) method for solving a concrete LCP(A,q) arising in American option pricing problems. The FP method is based on a fixed-point equation and belongs to the projected type iteration methods. However, the numerical examples in [4] show that the number of iteration steps is very large when a suitable approximate solution is obtained. In this paper, we further discuss the projected type iteration methods and consider a general fixed-point equation by introducing a positive diagonal parameter matrix Ω. We note that [4] considered the case where Ω=αI with α>0, I being the identity matrix. We prove the equivalence between the general fixed-point equation and the linear complementarity problem. Based on the new fixed-point equation, we propose the general fixed-point (GFP) method with two iteration forms. We discuss the convergence conditions and provide the concrete convergence domains for the proposed method. Moreover, we discuss the optimal parameter problem and obtain an optimal parameter value.
The rest of this paper is organized as follows. In Section 2, we introduce some notations and concepts briefly, and then provide two lemmas which are required to derive the new fixed-point method. In Section 3, we propose the general fixed-point method with convergence analysis. In Section 4, we present numerical examples to illustrate the efficiency of the proposed method. Finally, we give the concluding remark in Section 5.
In this section, we first briefly review some notations and concepts, then provide two lemmas that will be used in Section 3.
A matrix A=(aij)∈Rn×n is denoted by A≥0 (or A>0) if aij≥0 (or aij>0), and the absolute value matrix of A=(aij)∈Rn×n is denoted by |A|=(|aij|). The spectral radius of a square matrix A is denoted by ρ(A). A matrix A=(aij)∈Rn×n is called a Z-matrix if aij≤0 for i≠j and i,j=1,2,...,n, an M-matrix if A is a Z-matrix with A−1≥0, and a P-matrix if all of its principal minors are positive ([25]). A matrix A=(aij)∈Rn×n is called an H-matrix if its comparison matrix ⟨A⟩ is an M-matrix, and an H+-matrix if it is an H-matrix with all positive diagonal elements, where the comparison matrix ⟨A⟩=(⟨aij⟩) is defined by ⟨aii⟩=|aii| and ⟨aij⟩=−|aij| with i≠j for i,j=1,2,...,n ([2]). For a given matrix A∈Rn×n, the splitting A=F−G is called an M-splitting if F is an M-matrix and G≥0 ([26]). For a given vector x∈Rn, the symbols x+ and x− denote the vectors x+=max{0,x} and x−=max{0,−x}, respectively. For x, x+ and x−, we have
x_{+}\geq 0, \quad x_{-}\geq 0, \quad x = x_{+}-x_{-}, \quad x_{+}^{\mathrm{T}}x_{-} = 0.
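For example, if x = (2, -3, 0)^{\mathrm{T}} , then x_{+} = (2, 0, 0)^{\mathrm{T}} and x_{-} = (0, 3, 0)^{\mathrm{T}} , so that x = x_{+}-x_{-} and x_{+}^{\mathrm{T}}x_{-} = 0 .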
Lemma 2.1 [27,28] Let A∈Rn×n be an M-matrix and A=F−G be an M-splitting. Then
\rho(F^{-1}G) < 1.
For the equation
x = x_{+}-\Omega(Ax_{+}+q), | (2.1) |
where Ω is a given positive diagonal matrix, we have the following conclusion.
Lemma 2.2 The solution of the linear complementarity problem (1.1) and the solution of equation (2.1) have the following relation:
(ⅰ) If x∗ is a solution of (1.1) and x∗∗=Ax∗+q, then x∗∗∗=x∗−Ωx∗∗ is a solution of (2.1).
(ⅱ) If x∗ is a solution of (2.1), then x∗+ is a solution of (1.1).
Proof. (ⅰ) Suppose x∗ is a solution of (1.1), then we have
x^{*}\geq 0, \quad Ax^{*}+q\geq 0, \quad (x^{*})^{\mathrm{T}}(Ax^{*}+q) = 0.
Therefore, denoting Ax∗+q by x∗∗, for positive diagonal matrix Ω, we have
x^{*}\geq 0, \quad \Omega x^{**}\geq 0, \quad (x^{*})^{\mathrm{T}}(\Omega x^{**}) = 0.
It follows that
x^{***}_{+} = (x^{*}-\Omega x^{**})_{+} = x^{*}, \quad x^{***}_{-} = (x^{*}-\Omega x^{**})_{-} = \Omega x^{**}.
Thus
x^{***} = x^{***}_{+}-x^{***}_{-} = x^{***}_{+}-\Omega x^{**} = x^{***}_{+}-\Omega(Ax^{*}+q) = x^{***}_{+}-\Omega(Ax^{***}_{+}+q).
That is, x∗∗∗ is a solution of (2.1).
(ⅱ) Suppose x∗ is a solution of (2.1), i.e.,
x^{*} = x^{*}_{+}-\Omega(Ax^{*}_{+}+q).
Noticing that x^{*} = x^{*}_{+}-x^{*}_{-} , we have x^{*}_{-} = \Omega(Ax^{*}_{+}+q) , and it follows that Ax^{*}_{+}+q = \Omega^{-1}x^{*}_{-}\geq 0 . Moreover,
(x^{*}_{+})^{\mathrm{T}}(Ax^{*}_{+}+q) = (x^{*}_{+})^{\mathrm{T}}(\Omega^{-1}x^{*}_{-}) = (x^{*}_{+})^{\mathrm{T}}x^{*}_{-} = 0.
Therefore, x∗+ is a solution of (1.1).
From Lemma 2.2 (ⅱ), we know that the solution of (1.1) can be obtained by solving Eq (2.1).
In this section, we first prove that the solution of equation (2.1) is unique when the system matrix is a P-matrix, then propose the general fixed-point (GFP) method for solving (1.1) based on (2.1), and discuss the convergence conditions.
Theorem 3.1 If A is a P-matrix, then for any positive diagonal matrix Ω, (2.1) has a unique solution.
Proof. Since A is a P-matrix, the linear complementarity problem (1.1) has a unique solution for any q∈Rn ([29]). Therefore, from Lemma 2.2, we know that Eq (2.1) has solution(s). Suppose y∗,z∗ are solutions of (2.1), then
y^{*} = y^{*}_{+}-\Omega(Ay^{*}_{+}+q), \quad z^{*} = z^{*}_{+}-\Omega(Az^{*}_{+}+q).
By Lemma 2.2, y^{*}_{+} and z^{*}_{+} are solutions of (1.1). Since (1.1) has a unique solution, y^{*}_{+} = z^{*}_{+} . It follows that y^{*} = z^{*} .
Based on (2.1) and Lemma 2.2 (ⅱ), we get an iterative method for solving (1.1):
x^{(k+1)} = x^{(k)}_{+}-\Omega(Ax^{(k)}_{+}+q) = (I-\Omega A)x^{(k)}_{+}-\Omega q, \quad k = 0,1,2,\ldots. | (3.1) |
We state the algorithm for (3.1) as follows.
Algorithm 1 Iterative method based on (3.1)
1: Given x^{(0)} \in \mathbb{R}^n , \varepsilon > 0
2: for k=0, 1, 2, \ldots, do
3: x^{(k)}_{+}=\max\{0, x^{(k)}\}
4: compute RES=norm (\min(x^{(k)}_{+}, Ax^{(k)}_{+}+q))
5: if RES < \varepsilon then
6: x=x^{(k)}_{+}
7: break
8: else
9: x^{(k+1)}=(I-\Omega A)x_{+}^{(k)}-\Omega q
10: end if
11: end for
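For concreteness, a minimal Matlab sketch of Algorithm 1 is given below; the function name gfp_basic, its argument list, and the iteration cap kmax are illustrative choices and not prescribed by the algorithm itself.

```matlab
% Minimal sketch of Algorithm 1 (iteration (3.1)); gfp_basic, its interface
% and the iteration cap kmax are illustrative, not prescribed by the paper.
function [x, k, res] = gfp_basic(A, q, Omega, x0, tol, kmax)
    x = x0;
    for k = 0:kmax
        xp  = max(x, 0);                   % x^{(k)}_+ = max{0, x^{(k)}}
        res = norm(min(xp, A*xp + q));     % RES = ||min(x_+, A x_+ + q)||_2
        if res < tol
            x = xp;                        % accept x^{(k)}_+ as the approximate solution
            return;
        end
        x = xp - Omega*(A*xp + q);         % x^{(k+1)} = (I - Omega*A) x_+^{(k)} - Omega*q
    end
    x = max(x, 0);                         % fall back to the last projected iterate
end
```

With Omega = omega*diag(1./diag(A)) , for instance, this realizes the choice \Omega = \omega D^{-1} analyzed in Theorem 3.3 below.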
Theorem 3.2 Assume A is a P-matrix. Let \left\{x^{(k)}\right\}_{k = 1}^{+\infty} be the sequence generated by (3.1) and x^{*} be the solution of (2.1). If
\rho(|I-\Omega A|) < 1, | (3.2) |
then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n .
Proof. Since x∗ is the unique solution of (2.1), we have
x^{*} = (I-\Omega A)x^{*}_{+}-\Omega q.
Combining with Eq (3.1), we get
x^{(k+1)}-x^{*} = (I-\Omega A)(x^{(k)}_{+}-x^{*}_{+}).
It follows that
|x^{(k+1)}-x^{*}|\leq|I-\Omega A|\cdot|x^{(k)}_{+}-x^{*}_{+}|\leq|I-\Omega A|\cdot|x^{(k)}-x^{*}|.
Therefore, if \rho(|I-\Omega A|) < 1 , then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n .
Since \rho(A)\leq \|A\| for any matrix norm \|\cdot\| , we can get the following corollary easily from Theorem 3.2.
\bf{Corollary\; 3.1} Suppose A is a P -matrix. Let \left\{x^{(k)}\right\}_{k = 1}^{+\infty} be the sequence generated by (3.1) and x^{*} be the solution of (2.1). If
\begin{equation} \|\; |I-\Omega A|\; \| < 1, \end{equation} | (3.3) |
then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n , where \|\cdot\| is any matrix norm.
In the following, we consider two special cases: A is an H_{+} -matrix with \Omega = \omega D^{-1} , where D = \mathrm{diag}(A) , and A is a symmetric positive definite matrix with \Omega = \omega I , where I is the identity matrix.
\bf{Theorem\; 3.3} Suppose A is an H_{+} -matrix, D = \mathrm{diag}(A) and B = D-A . Let \Omega = \omega D^{-1} with \omega > 0 and \left\{x^{(k)}\right\}_{k = 1}^{+\infty} be the sequence generated by (3.1) and x^{*} be the solution of (2.1). If
0 < \omega < \frac{2}{1+\rho(D^{-1}|B|)}, |
then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n . Moreover, \omega = 1 is the optimal choice.
Proof. Since A is an H_{+} -matrix, D = \mathrm{diag}(A) , and A = D-B , we have that \rho(D^{-1}|B|) < 1 ; see [21]. For \Omega = \omega D^{-1} with \omega > 0 , we have
\begin{eqnarray*} |I-\Omega A|& = & |I-\omega D^{-1}(D-B)| \nonumber \\ & = & |1-\omega|I+\omega D^{-1}|B| \nonumber \\ & = & \begin{cases} (1-\omega)I+\omega D^{-1}|B|,\; \mathrm{if}\; 0 < \omega\leq 1,\\ (\omega-1)I+\omega D^{-1}|B|,\; \mathrm{if}\; \omega > 1.\\ \end{cases} \end{eqnarray*} |
It follows that
\begin{equation} \rho(|I-\Omega A|) = \begin{cases} 1-(1-\rho(D^{-1}|B|))\omega,\; \mathrm{if}\; 0 < \omega \leq 1, \\ (1+\rho(D^{-1}|B|))\omega-1,\; \mathrm{if}\; \omega > 1.\\ \end{cases} \end{equation} | (3.4) |
It can be easily seen from (3.4) that \rho(|I-\Omega A|) < 1 for \omega \in (0, 1] and for \omega > 1 , \rho(|I-\Omega A|) < 1 if and only if \omega < \frac{2}{1+\rho(D^{-1}|B|)} . Therefore, if 0 < \omega < \frac{2}{1+\rho(D^{-1}|B|)} , \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n . Moreover, it can be seen from (3.4) that when \omega = 1 , \rho(|I-\Omega A|) = \rho(D^{-1}|B|) is minimal. That is, \omega = 1 is the optimal choice.
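The convergence domain and the optimality of \omega = 1 can also be checked numerically; the following sketch (the small H_{+} -matrix is example data only, and the variable names are illustrative) evaluates \rho(|I-\omega D^{-1}A|) for a few values of \omega .

```matlab
% Numerical check of Theorem 3.3 (illustrative sketch, example data only):
% for an H+-matrix A, rho(|I - omega*D^{-1}*A|) < 1 whenever
% 0 < omega < 2/(1 + rho(D^{-1}|B|)), with the minimum attained at omega = 1.
A = [4 -1 0; -2 5 -1; 0 -1 3];                  % a small H+-matrix
D = diag(diag(A));  B = D - A;
wmax = 2/(1 + max(abs(eig(D\abs(B)))));         % right end of the convergence domain
for omega = [0.5, 0.8, 1.0, 1.2, 0.99*wmax]
    r = max(abs(eig(abs(eye(3) - omega*(D\A)))));
    fprintf('omega = %.3f,  rho(|I - Omega*A|) = %.4f\n', omega, r);
end
```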
\bf{Theorem\; 3.4} Suppose A is a symmetric positive definite matrix. Set \Omega = \omega I with \omega > 0 and denote the smallest and the largest eigenvalues of A by \lambda_{\min} and \lambda_{\max} , respectively. Let \left\{x^{(k)}\right\}_{k = 1}^{+\infty} be the sequence generated by (3.1) and x^{*} be the solution of (2.1). If
0 < \omega < \frac{2}{\lambda_{\max}}, |
then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n .
Proof. From the proof of Theorem 3.2, similarly, we can have
x^{(k+1)}-x^{*} = (I-\Omega A)(x_{+}^{(k)}-x_{+}^{*}). |
Then
\parallel x^{(k+1)}-x^{*}\parallel_2\leq\parallel I-\Omega A\parallel_2 \cdot \parallel x_{+}^{(k)}-x_{+}^{*}\parallel_2 \leq\parallel I-\Omega A\parallel_2 \cdot \parallel x^{(k)}-x^{*}\parallel_2, |
where \parallel\cdot\parallel_2 is the spectral norm of matrix. So, we have a convergence condition of (3.1), that is
\parallel I-\Omega A\parallel_2 < 1. |
Since
\begin{eqnarray*} \|I-\Omega A\|_2 = \|I-\omega I \cdot A\|_2 & = & \max\{|1-\omega \lambda_{\min}|,|1-\omega \lambda_{\max}|\} \nonumber \\ & = & \begin{cases} |1-\omega \lambda_{\min}|,\; \mathrm{if}\; |1-\omega \lambda_{\min}|\geq |1-\omega \lambda_{\max}|,\\ |1-\omega \lambda_{\max}|,\; \mathrm{if}\; |1-\omega \lambda_{\max}|\geq|1-\omega \lambda_{\min}|,\\ \end{cases} \end{eqnarray*} |
we can solve
\begin{eqnarray*} (\mathrm{I})\begin{cases} |1-\omega \lambda_{\min}| < 1,\\ |1-\omega \lambda_{\min}|\geq |1-\omega \lambda_{\max}|,\\ \end{cases} \end{eqnarray*} |
and
\begin{eqnarray*} (\mathrm{II}) \begin{cases} |1-\omega \lambda_{\max}| < 1,\\ |1-\omega \lambda_{\max}|\geq|1-\omega \lambda_{\min}|,\\ \end{cases} \end{eqnarray*} |
to obtain the convergence conditions of (3.1), that is 0 < \omega\leq\frac{2}{\lambda_{\min}+\lambda_{\max}} and \frac{2}{\lambda_{\min}+\lambda_{\max}}\leq\omega < \frac{2}{\lambda_{\max}} , which can be combined into
0 < \omega < \frac{2}{\lambda_{\max}}. |
Thus, the conclusion is proved.
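For a quick numerical illustration of this bound, the following sketch (the matrix is only example data) prints \|I-\omega A\|_2 for a symmetric positive definite A and several values of \omega on both sides of 2/\lambda_{\max} .

```matlab
% Illustrative check of Theorem 3.4: for symmetric positive definite A and
% Omega = omega*I, ||I - omega*A||_2 < 1 exactly when 0 < omega < 2/lambda_max.
n = 10;
A = 4*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1);   % SPD example: tridiag(-1,4,-1)
lmin = min(eig(A));  lmax = max(eig(A));
for omega = [0.5/lmax, 2/(lmin+lmax), 1.9/lmax, 2.1/lmax]      % last value lies outside the domain
    fprintf('omega = %.4f,  ||I - omega*A||_2 = %.4f\n', omega, norm(eye(n) - omega*A));
end
```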
Let A be split as A = D-L-U , where D , -L and -U are the diagonal, strictly lower and strictly upper triangular parts of A , respectively. We can derive another iterative method for solving (1.1) based on (2.1) by using the idea of Gauss-Seidel:
\begin{equation} x^{(k+1)} = (I-\Omega (D-U))x_{+}^{(k)}+\Omega Lx^{(k+1)}_{+}-\Omega q, \quad k = 0,1,2,\ldots. \end{equation} | (3.5) |
We state the algorithm for (3.5) as follows.
We call iterative methods (3.1) and (3.5) the general fixed-point method. We note that in [4], Shi, Yang, and Huang introduced a fixed-point method, which is a special case of (3.1) with \Omega = \alpha I ( \alpha > 0 ). In the following, we analyze the convergence of iterative method (3.5). In particular, we consider the convergence domain of \Omega when A is an H_{+} -matrix.
\bf{Theorem\; 3.5} Suppose A is a P -matrix, then the sequence \left\{x^{(k)}\right\}_{k = 1}^{+\infty} generated by (3.5) converges to the unique solution x^{*} of (2.1) for any initial vector x^{(0)}\in \mathbb{R}^n if
\begin{equation} \rho((I-|\Omega L|)^{-1} \cdot |I-\Omega(D-U)|) < 1. \end{equation} | (3.6) |
Algorithm 2 Iterative method based on (3.5)
1: Given x^{(0)} \in \mathbb{R}^n , \varepsilon > 0
2: for k=0, 1, 2, \ldots, do
3: x^{(k)}_{+}=\max\{0, x^{(k)}\}
4: compute RES=norm (\min(x^{(k)}_{+}, Ax^{(k)}_{+}+q))
5: if RES < \varepsilon then
6: x=x^{(k)}_{+}
7: break
8: else
9: x^{(k+1)}_{1}=((I-\Omega(D-U))x^{(k)}_{+}-\Omega q)_{1}
10: for i=2, 3, \ldots, n do
11: x^{(k+1)}_{i}=((I-\Omega(D-U))x^{(k)}_{+}-\Omega q)_{i}+(\Omega L x^{(k+1)}_{+})_{i}
12: end for
13: end if
14: end for
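A minimal Matlab sketch of Algorithm 2 is given below; as with Algorithm 1, the function name gfp_gs, its interface, and the iteration cap are illustrative choices.

```matlab
% Minimal sketch of Algorithm 2 (iteration (3.5)); gfp_gs and its interface are
% illustrative. A = D - L - U with D diagonal, L strictly lower, U strictly upper.
function [x, k, res] = gfp_gs(A, q, Omega, x0, tol, kmax)
    n = length(q);
    D = diag(diag(A));  L = -tril(A, -1);  U = -triu(A, 1);
    x = x0;
    for k = 0:kmax
        xp  = max(x, 0);
        res = norm(min(xp, A*xp + q));
        if res < tol
            x = xp;  return;
        end
        y = (eye(n) - Omega*(D - U))*xp - Omega*q;     % part depending on x^{(k)}_+ only
        xnew    = zeros(n, 1);
        xnew(1) = y(1);
        for i = 2:n
            % entries 1..i-1 of x^{(k+1)} are already known; use their positive parts
            xnew(i) = y(i) + Omega(i,i) * (L(i, 1:i-1) * max(xnew(1:i-1), 0));
        end
        x = xnew;
    end
    x = max(x, 0);                                     % fall back to the last projected iterate
end
```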
Proof. Since A is a P -matrix, equation (2.1) has a unique solution for any positive diagonal matrix \Omega . Based on (2.1) and A = D-L-U , we get
x^{*} = (I-\Omega(D-U))x_{+}^{*}+\Omega Lx_{+}^{*}-\Omega q. |
From the above formula and equation (3.5), we have
\begin{equation} x^{(k+1)}-x^{*} = (I-\Omega(D-U))(x_{+}^{(k)}-x_{+}^{*}) +\Omega L(x^{(k+1)}_{+}-x^{*}_{+}). \end{equation} | (3.7) |
Thus
\begin{equation*} \begin{split} |x^{(k+1)}-x^{*}|&\leq|I-\Omega(D-U)| \cdot |x_{+}^{(k)}-x_{+}^{*}|+|\Omega L| \cdot |x^{(k+1)}_{+}-x^{*}_{+}|\\ &\leq|I-\Omega(D-U)|\cdot |x^{(k)}-x^{*}|+|\Omega L|\cdot |x^{(k+1)}-x^{*}|. \end{split} \end{equation*} |
It follows that
\begin{equation*} \begin{split} (I-|\Omega L|)|x^{(k+1)}-x^{*}|&\leq|I-\Omega(D-U)|\cdot |x^{(k)}-x^{*}|. \end{split} \end{equation*} |
Since I-|\Omega L| is an M -matrix, i.e., (I-|\Omega L|)^{-1}\geq 0 , we have
\begin{equation*} \begin{split} |x^{(k+1)}-x^{*}|&\leq(I-|\Omega L|)^{-1} \cdot |I-\Omega(D-U)|\cdot |x^{(k)}-x^{*}|. \end{split} \end{equation*} |
Therefore, if \rho((I-|\Omega L|)^{-1}|I-\Omega(D-U)|) < 1 , then \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} for any initial vector x^{(0)}\in \mathbb{R}^n .
We now consider the convergence domain of \Omega for iterative method (3.5) when A is an H_{+} -matrix.
\bf{Theorem\; 3.6} Suppose A is an H_{+} -matrix and either of the following conditions holds:
(1) 0 < \Omega\leq D^{-1} ;
(2) \Omega > D^{-1} and 2\Omega^{-1}-D-|B| is an M -matrix, where B = L+U .
Then the sequence \left\{x^{(k)}\right\}_{k = 1}^{+\infty} generated by (3.5) converges to the unique solution of (2.1) for any initial vector x^{(0)}\in \mathbb{R}^n .
Proof. Since any H_{+} -matrix is a P -matrix ([2]), Eq (2.1) has a unique solution. Consider the splitting
\begin{equation*} \begin{split} (I-|\Omega L|)-|I-\Omega(D-U)|& = (I-|I-\Omega D|)-\Omega |B| \\ & = \begin{cases} \Omega(D-|B|), \; \mathrm{if}\; 0 < \Omega\leq D^{-1} \\ 2I-\Omega D-\Omega |B|, \; \mathrm{if}\; \Omega > D^{-1} \end{cases}\\ & = \begin{cases} \Omega \langle A\rangle, \; \mathrm{if}\; 0 < \Omega\leq D^{-1}, \\ \Omega(2\Omega^{-1}-D-|B|), \; \mathrm{if}\; \Omega > D^{-1}. \end{cases} \end{split} \end{equation*} |
(1) When 0 < \Omega\leq D^{-1} , (I-|\Omega L|)-|I-\Omega(D-U)| is an M -splitting of the M -matrix \Omega \langle A\rangle , therefore, it follows from Lemma 2.1 that \rho((I-|\Omega L|)^{-1}\cdot|I-\Omega(D-U)|) < 1 .
(2) When \Omega > D^{-1} , if 2\Omega^{-1}-D-(|L|+|U|) is an M -matrix, then the splitting (I-|\Omega L|)-|I-\Omega(D-U)| is an M -splitting of the M -matrix \Omega(2\Omega^{-1}-D-|B|) , therefore \rho((I-|\Omega L|)^{-1}\cdot|I-\Omega(D-U)|) < 1 .
Collecting (1) and (2), this theorem is established based on Theorem 3.5.
\bf{Corollary\; 3.2} Suppose A is an H_{+} -matrix and \Omega = \omega D^{-1} with \omega > 0 , then the sequence \left\{x^{(k)}\right\}_{k = 1}^{+\infty} generated by (3.5) converges to the unique solution x^{*} of (2.1) for any initial vector x^{(0)}\in \mathbb{R}^n if
\begin{equation} 0 < \omega < \frac{2}{1+\rho(D^{-1}|B|)}. \end{equation} | (3.8) |
Proof. For 0 < \omega \leq 1 , we have \Omega \leq D^{-1} , that is, \Omega satisfies the first condition of Theorem 3.6. When \Omega = \omega D^{-1} , the second condition of Theorem 3.6 can be represented as
\omega > 1\; {\rm{ and }}\; \left(\frac{2}{\omega}-1\right) D - |B| \;{\rm{ is\; an}} \; M -{\rm{matrix}}. |
That is,
\begin{equation*} \omega > 1 \;{\rm{ and }}\; \frac{2}{\omega}-1 > \rho(D^{-1}|B|). \end{equation*} |
Therefore, if 0 < \omega < \frac{2}{1+\rho(D^{-1}|B|)} holds, \left\{x^{(k)}\right\}_{k = 1}^{+\infty} converges to x^{*} .
To end this section, we consider the LCP (A, q) arising in the American option pricing problem, where A is a symmetric tridiagonal M -matrix:
\begin{equation} A = \left( \begin{array}{ccccc} 1+2\lambda\theta&-\lambda\theta& & \\ -\lambda\theta& 1+2\lambda\theta &-\lambda\theta & \\ &\ddots&\ddots&\ddots\\ &&-\lambda\theta& 1+2\lambda\theta & -\lambda\theta \\ &&&-\lambda\theta& 1+2\lambda\theta \end{array} \right)\in \mathbb{R}^{n\times n} \end{equation} | (3.9) |
with \lambda, \theta > 0 ; see for instance [4]. In the following, we derive the optimal value of the parameter \omega for the case \Omega = \omega D^{-1} in the sense of the 1 -norm.
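For reference, the matrix (3.9) can be generated as in the following sketch; the values of n and \mu = \lambda\theta are only examples.

```matlab
% Construction of the tridiagonal matrix (3.9); n and mu = lambda*theta are example values.
n  = 900;  mu = 0.5;
e  = ones(n, 1);
A  = spdiags([-mu*e, (1+2*mu)*e, -mu*e], -1:1, n, n);   % tridiag(-mu, 1+2*mu, -mu)
```

Since D = \mathrm{diag}(A) = (1+2\lambda\theta)I for this matrix, the choice \Omega = \omega D^{-1} reduces to the scalar form \Omega = \frac{\omega}{1+2\lambda\theta} I .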
For iterative method (3.1), based on (3.4), we consider
f_1(\omega) = \|I-\Omega A\|_1. |
Let \mu = \lambda \theta . By some computation, we get
f_1(\omega) = |1-\omega|+\frac{2\mu}{1+2\mu} \omega = \left\{\begin{array}{ll} 1-\frac{\omega}{1+2\mu}, \; \mathrm{if}\; 0 < \omega \leq 1, \\ \frac{(1+4\mu)\omega}{1+2\mu} -1, \; \mathrm{if}\; \omega > 1. \end{array}\right. |
It can be easily seen that
\begin{equation} \min\limits_{\omega > 0} f_1(\omega) = f_1(1) = \frac{2\mu}{1+2\mu} = \frac{2\lambda\theta}{1+2\lambda\theta} < 1. \end{equation} | (3.10) |
That is, \omega = 1 is an optimal parameter.
We now consider the value of \omega for iterative method (3.5). Let \alpha = \frac{\omega}{1+2 \mu} , i.e., \omega = \alpha (1+2\mu) , then
(I-\omega D^{-1} |L|)^{-1} = \begin{pmatrix} 1 & \\ -\alpha \mu & 1 \\ & \ddots & \ddots \\ & & -\alpha\mu & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & \\ \alpha \mu & 1 \\ (\alpha \mu)^2 & \alpha \mu & 1 \\ \vdots & \ddots & \ddots & \ddots \\ (\alpha \mu)^{n-1} & \cdots & (\alpha \mu)^2 & \alpha\mu & 1 \end{pmatrix}. |
Let \nu = |1-\omega| and a_i = (\alpha \mu)^{i} , i = 0, 1, 2, \ldots, n . Then a_{i} a_j = a_{i+j} and it is easy to get
\begin{eqnarray*} && (I- \omega D^{-1}|L|)^{-1} \cdot |I-\omega D^{-1} (D-U)| \\ & = & \begin{pmatrix} 1 & \\ a_1 & 1 \\ a_2 & a_1 & 1 \\ \vdots & \ddots & \ddots & \ddots \\ a_{n-1} & \cdots & a_2 & a_1 & 1 \end{pmatrix} \begin{pmatrix} \nu & a_1 \\ & \nu & a_1 \\ && \ddots & \ddots \\ &&& \nu & a_1 \\ &&&& \nu \end{pmatrix} \\ & = & \begin{pmatrix} \nu & a_1 \\ a_1 \nu & a_2 + \nu & a_1 \\ \vdots & \vdots & \ddots & \ddots \\ a_{n-2} \nu & a_{n-1} + a_{n-3} \nu & \cdots & a_2 + \nu & a_1 \\ a_{n-1} \nu & a_n + a_{n-2} \nu & \cdots & a_3 + a_1 \nu & a_2 + \nu \end{pmatrix}. \end{eqnarray*} |
Based on (3.6) and (3.8), we define
f_2(\omega) = \| (I- \omega D^{-1} |L|)^{-1} \cdot |I-\omega D^{-1} (D-U)| \|_{1}. |
Since a_i > 0 and 0 < \nu < 1 , it can be seen that
\begin{eqnarray*} f_2(\omega) & = & \sum\limits_{i = 1}^n (\alpha \mu)^i + |1-\omega| \sum\limits_{i = 0}^{n-2} (\alpha \mu)^i. \end{eqnarray*} |
We consider the particular value \omega = 1 ; then
f_2(1) = \frac{\mu}{1+\mu}\left[1-\left(\frac{\mu}{1+2\mu}\right)^n \right]. |
Remark. Based on the above discussions, and noticing that
\begin{eqnarray*} f_2(1) = \frac{\mu}{1+\mu}\left[1-\left(\frac{\mu}{1+2\mu}\right)^n \right] & = & \frac{\lambda\theta}{1+\lambda\theta} \left[1-\left(\frac{\lambda\theta}{1+2\lambda\theta}\right)^n \right]\\ & < & \frac{\lambda\theta}{1+\lambda\theta}\\& < & \frac{2\lambda\theta}{1+2\lambda\theta} = f_1(1), \end{eqnarray*} |
we expect that iterative method (3.5) converges faster than iterative method (3.1) when \omega = 1 .
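For instance, when \lambda\theta = 1 we have f_1(1) = \frac{2}{3} , while f_2(1) = \frac{1}{2}\left[1-\left(\frac{1}{3}\right)^{n}\right] < \frac{1}{2} , so the 1 -norm bound for (3.5) is noticeably smaller than that for (3.1).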
In this section, we illustrate the proposed method with some examples. Most projected methods involve parameters that are not easy to select properly in practice, and the implicit equations underlying the different projected methods differ, so it is difficult to compare the GFP method with other projected methods fairly; therefore, among the projected methods we compare only with the FP method. Among the many modulus-based approaches for solving the LCP (A, q), we select the modulus-based SOR (MSOR) iteration method with its best parameter choices for comparison. Besides, we use the GFP method to solve the LCP (A, q) arising in the American option pricing problem ([4]). The number of iteration steps, the elapsed time, and the norm of the residual vector are denoted by IT, CPU, and RES, respectively, where RES is defined as:
\mathrm{RES}(x^{(k)}_{+}) = ||\mathrm{min}(x^{(k)}_{+},Ax^{(k)}_{+}+q)||_2, |
where x^{(k)}_{+} is the k th approximate solution of (1.1). The iteration process stops if \mathrm{RES}(x^{(k)}_{+}) < 10^{-5} or the number of iteration steps reaches 1000.
The system matrix A in the first two examples is generated by
A(\mu,\eta,\zeta) = \widehat{A}+\mu I+\eta B+\zeta C, |
here, \mu , \eta and \zeta are given constants, \widehat{A} = \mathrm{Tridiag}(-I, S, -I) \in \mathbb{R}^{n\times n} is a block-tridiagonal matrix,
B = \mathrm{Tridiag}(0,0,1) \in \mathbb{R}^{n\times n}\; \; \mathrm{and}\; \; S = \mathrm{tridiag}(-1,4,-1) \in \mathbb{R}^{m\times m} |
are two tridiagonal matrices, and C = \mathrm{diag}([1\; 2\; 1\; 2\; \cdots]) is a diagonal matrix of order n , where m and n satisfy n = m^{2} . For convenience, we set
q = (1,-1,1,-1,\ldots,1,-1,\ldots)^{\mathrm{T}}\in \mathbb{R}^n, |
then the LCP( A(\mu, \eta, \zeta), q ) has a unique solution when A(\mu, \eta, \zeta) is a P -matrix. All computations are run using Matlab version 2016 on a Dell laptop (Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz, 4.00GB RAM).
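For reproducibility, a sketch of how these test matrices can be assembled in Matlab is given below; the sparse constructions and the chosen parameter values are illustrative only.

```matlab
% Illustrative construction of A(mu,eta,zeta) and q for Examples 4.1-4.2.
m = 100;  n = m^2;                                   % e.g. n = 10000
em = ones(m, 1);  en = ones(n, 1);
S    = spdiags([-em, 4*em, -em], -1:1, m, m);        % S = tridiag(-1, 4, -1)
T    = spdiags([-em, -em], [-1 1], m, m);            % pattern of the -I off-diagonal blocks
Ahat = kron(speye(m), S) + kron(T, speye(m));        % Ahat = Tridiag(-I, S, -I)
Bmat = spdiags(en, 1, n, n);                         % B = Tridiag(0, 0, 1)
c    = repmat([1; 2], ceil(n/2), 1);
Cmat = spdiags(c(1:n), 0, n, n);                     % C = diag([1 2 1 2 ...])
mu = 1;  eta = 1;  zeta = 0;                         % e.g. the matrix A(1, 1, 0)
A = Ahat + mu*speye(n) + eta*Bmat + zeta*Cmat;
q = repmat([1; -1], n/2, 1);                         % q = (1, -1, 1, -1, ...)^T
```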
\bf{Example\; 4.1} In this example, we compare the GFP method with the MSOR iteration method ([2]), which is a very effective method for solving the LCP( A, q ). Both the GFP method and the MSOR iteration method involve a parameter matrix \Omega ; since \Omega can be chosen in many ways and the two methods use it differently, taking the same \Omega for both would not be appropriate. We set \Omega = \omega D^{-1} in the GFP method and \Omega = \omega D in the MSOR iteration method, respectively. For fairness, we compare the two methods in their best cases, selected by performing many experiments. Setting x^{(0)} = \mathrm{zeros}(n, 1) and n = 10000 , we obtain the following Table 1.
A(1, -1, 0) | A(1, 1, 0) | A(1, 1, -1) | |||||||||
Alg 1 | Alg 2 | MSOR | Alg 1 | Alg 2 | MSOR | Alg 1 | Alg 2 | MSOR | |||
\omega | 1.1 | 1.1 | 1 | 1.1 | 1.1 | 1.2 | 1 | 1.2 | 1 | ||
\alpha | - | - | 0.8 | - | - | 1.1 | - | - | 1.1 | ||
IT | 17 | 10 | 20 | 17 | 10 | 11 | 39 | 16 | 24 | ||
CPU | 0.0056 | 7.8000 | 0.0129 | 0.0053 | 7.6147 | 0.0078 | 0.0092 | 11.788 | 0.0124 | ||
RES | 0.5e-5 | 0.3e-5 | 0.4e-5 | 0.5e-5 | 0.5e-5 | 0.7e-5 | 0.8e-5 | 0.4e-5 | 0.5e-5 | ||
A(1, 0, 1) | A(0, 1, 0) | A(1, 1, 1) | |||||||||
Alg 1 | Alg 2 | MSOR | Alg 1 | Alg 2 | MSOR | Alg 1 | Alg 2 | MSOR | |||
\omega | 1.1 | 1 | 1 | 1 | 1.1 | 1.1 | 1 | 1.1 | 1.2 | ||
\alpha | - | - | 1 | - | - | 1.1 | - | - | 1.2 | ||
IT | 12 | 9 | 9 | 23 | 12 | 15 | 12 | 9 | 9 | ||
CPU | 0.0026 | 7.7473 | 0.0045 | 0.0049 | 8.7403 | 0.0158 | 0.0025 | 7.3931 | 0.0075 | ||
RES | 0.6e-5 | 0.6e-5 | 0.8e-5 | 0.8e-5 | 0.3e-5 | 0.4e-5 | 0.6e-5 | 0.6e-5 | 0.7e-5 |
In Table 1, Alg 1 and Alg 2 denote Algorithm 1 and Algorithm 2, respectively. From Table 1, we can see that iterative method (3.1) outperforms the MSOR iteration method in running time, and iterative method (3.5) outperforms it in the number of iteration steps. Since Algorithm 2 computes the numerical solution entry by entry, fast matrix-vector calculation techniques cannot be used, which makes the method more time-consuming. For Algorithm 1, the running time per step is short, but the number of iteration steps is relatively large. Besides, the best parameters for the MSOR iteration method have to be selected by many experiments, whereas the best parameter for iterative method (3.1) is near or equal to 1, which agrees with the conclusion of Theorem 3.3.
\bf{Example\; 4.2} In this example, we illustrate the convergence domains of \omega and the convergence rates of iterative methods (3.1) and (3.5). We consider A(0, 0, 1) and A(0, 1, 1) . The former is a symmetric H_{+} -matrix and the latter is a nonsymmetric H_{+} -matrix, both of which are P -matrices. We set \Omega = \omega D^{-1} = \omega\mathrm{diag}(A(\mu, \eta, \zeta))^{-1} . From Theorem 3.3 and Corollary 3.2, we know that \omega\in\left(0, \frac{2}{1+\rho(D^{-1}|B|)}\right) is a sufficient convergence domain for both iterative methods (3.1) and (3.5). Here, we consider a larger domain and set \omega to be
1-\mathrm{floor}\left(\frac{1}{\delta}\right)*\delta:\delta:\frac{2}{1+\rho(D^{-1}|B|)}+\delta, |
where \delta = \frac{1}{2}\left(\frac{2}{1+\rho(D^{-1}|B|)}-1\right) . Here ' \mathrm{floor} ' denotes the Matlab floor function and ':' is the Matlab colon (range) operator, so the right endpoint of the interval \left(0, \frac{2}{1+\rho(D^{-1}|B|)}\right) is the penultimate point of this grid; a small sketch for generating these values is given below. We denote \rho(|I-\Omega A|) in (3.2) and \rho((I-\Omega|L|)^{-1}|I-\Omega(D-U)|) in (3.6) by \rho_1 and \rho_2 , respectively. Setting n = 900 and x^{(0)} = \mathrm{zeros}(n, 1) , we obtain Table 2 and the corresponding Figure 1.
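The following Matlab lines sketch how the grid of \omega values can be generated, assuming A is one of the test matrices constructed above; the variable names are illustrative.

```matlab
% Illustrative generation of the omega grid used in Example 4.2; A is assumed
% to be one of the test matrices above, with D = diag(A) and B = D - A (Theorem 3.3).
D      = diag(diag(A));
rhoDB  = max(abs(eig(full(D\abs(D - A)))));     % rho(D^{-1}|B|)
wright = 2/(1 + rhoDB);                         % right end of the convergence domain
delta  = (wright - 1)/2;
omega  = (1 - floor(1/delta)*delta) : delta : (wright + delta);
```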
A(0, 0, 1) | ||||||||
\omega | \omega_{1} | \omega_{2} | \omega_{3} | \omega_{4} | \omega_{5} | \omega_{6} | \omega_{7} | \omega_{8} |
\rho_1 | 0.9833 | 0.9621 | 0.9410 | 0.9198 | 0.8987 | 0.8776 | 0.8564 | 0.8353 |
\rho_2 | 0.9829 | 0.9601 | 0.9358 | 0.9099 | 0.8821 | 0.8522 | 0.8198 | 0.7845 |
IT _1 | 344 | 148 | 93 | 66 | 51 | 41 | 34 | 28 |
IT _2 | 341 | 145 | 89 | 63 | 47 | 37 | 30 | 25 |
\omega | \omega_{9} | \omega_{10} | \omega_{11} | \omega_{12} | \omega_{13}=1 | \omega_{14} | \omega_{15} | \omega_{16} |
\rho_1 | 0.8141 | 0.7930 | 0.7719 | 0.7507 | 0.7296 | 0.8648 | 1.0000 | 1.1352 |
\rho_2 | 0.7457 | 0.7027 | 0.6542 | 0.5984 | 0.5323 | 0.7671 | 1.0000 | 1.2358 |
IT _1 | 24 | 21 | 18 | 16 | 14 | 12 | 16 | 22 |
IT _2 | 20 | 17 | 14 | 12 | 9 | 8 | 10 | 13 |
A(0, 1, 1) | ||||||||
\omega | \omega_{1} | \omega_{2} | \omega_{3} | \omega_{4} | \omega_{5}=1 | \omega_{6} | \omega_{7} | \omega_{8} |
\rho_1 | 0.8897 | 0.7744 | 0.6580 | 0.5427 | 0.4255 | 0.7128 | 1.0029 | 1.2882 |
\rho_2 | 0.8840 | 0.7488 | 0.5957 | 0.4131 | 0.1629 | 0.5813 | 1.0035 | 1.4651 |
IT _1 | 105 | 48 | 29 | 20 | 14 | 19 | 62 | 1000 |
IT _2 | 101 | 44 | 25 | 16 | 9 | 12 | 19 | 35 |
From Table 2 and Figure 1, we can see that Theorem 3.3 and Corollary 3.2 provide convergence domains of \omega for iterative methods (3.1) and (3.5), respectively, and that the initial iteration vector is arbitrary when \omega falls in these domains. When \omega exceeds the convergence domain, the two iterative methods still converge in some cases when x^{(0)} = \mathrm{zeros}(n, 1) . Meanwhile, iterative method (3.5) is usually faster than iterative method (3.1) in terms of IT for the same value of \omega . In addition, this example illustrates the optimal parameter conclusion in Theorem 3.3, i.e., \omega = 1 is a good parameter for iterative method (3.1).
\bf{Example\; 4.3} In this example, we apply the GFP method to solve the LCP ( A, q ) in [4], where the matrix A satisfies (3.9). We set \Omega = \omega D^{-1} and \lambda\theta = 0.5, 1, 1.5, 2 , respectively. As in Example 4.2, we set \omega = 1-\mathrm{floor}\left(\frac{1}{\delta}\right)*\delta:\delta:\frac{2}{1+\rho(D^{-1}|B|)}+\delta with \delta = \frac{1}{2}\left(\frac{2}{1+\rho(D^{-1}|B|)}-1\right) , \rho_{1} = \rho(|I-\Omega A|) in (3.2) and \rho_{2} = \rho((I-\Omega|L|)^{-1}|I-\Omega(D-U)|) in (3.6). Then \omega = 1 is the fourth value from the end of this grid. We set n = 900 and x^{(0)} = \mathrm{randn}(n, 1) in our experiments, and obtain Figure 2.
From Figure 2, we can find that both iterative method (3.1) and iterative method (3.5) have good performance when \omega = 1 , i.e., the two iterative methods solve the LCP ( A, q ) in very few iteration steps. Meanwhile, this example also verifies that iterative method (3.5) is faster than iterative method (3.1) when \omega = 1 , which is a conclusion given at the end of Section 3.
\bf{Example\; 4.4} In this example, we compare the GFP method with the FP method ([4]). The system matrix is generated by
A(\eta) = \widehat{A}-4 I+\eta B+C, |
where C = \mathrm{diag}([1\; 2\; \cdots\; n]) and n = 900 . The initial iteration vector is x^{(0)} = \mathrm{zeros}(n, 1) . For the FP method, the parameter matrix is \Omega = \omega I , and for the GFP method, the parameter matrix is \Omega = \omega D^{-1} = \omega (\mathrm{diag}(A))^{-1} . We consider two cases in our experiments, namely \eta = 0 and \eta = 1 ; in both cases the resulting matrix is an H_{+} -matrix. Since the convergence domains of \omega are different, we cannot use the same \omega values for the two methods. For the GFP method, based on Theorem 3.3 and Corollary 3.2, we know that the convergence domain is 0 < \omega < \frac{2}{1+\rho(D^{-1}|B|)} , and we set \omega to be \frac{1}{3(1+\rho(D^{-1}|B|))}:\frac{1}{3(1+\rho(D^{-1}|B|))}:\frac{2}{1+\rho(D^{-1}|B|)}. For the FP method, there are two different situations. For \eta = 0 , since the system matrix is symmetric positive definite, we know from Theorem 3.4 that the convergence domain is 0 < \omega < \frac{2}{\lambda_{\max}} , and we set \omega to be
\frac{1}{3\lambda_{\max}}:\frac{1}{3\lambda_{\max}}:\frac{2}{\lambda_{\max}}. |
For \eta = 1 , by performing many experiments, we set \omega to be 0.0003:0.0002:0.0013 , which includes the convergence parameter values. The numerical results are shown in Tables 3 and 4, respectively.
GFP method | FP method | |||||||
\omega | \rho_{1} | IT _1 | \rho_{2} | IT _2 | \omega | \rho | IT | |
0.1795 | 0.9743 | 77 | 0.9721 | 76 | 0.0004 | 0.9999 | 1000 | |
0.3591 | 0.9486 | 35 | 0.9390 | 34 | 0.0007 | 0.9998 | 1000 | |
0.5386 | 0.9228 | 21 | 0.8989 | 20 | 0.0011 | 0.9998 | 1000 | |
0.7181 | 0.8971 | 13 | 0.8487 | 12 | 0.0015 | 0.9997 | 1000 | |
0.8976 | 0.8714 | 9 | 0.7828 | 7 | 0.0019 | 0.9996 | 1000 | |
1.0772 | 1.0000 | 8 | 1.0000 | 6 | 0.0022 | 1.0000 | 1000 |
GFP method | FP method | |||||||
\omega | \rho_{1} | IT _1 | \rho_{2} | IT _2 | \omega | \rho | IT | |
0.2822 | 0.7689 | 46 | 0.7624 | 46 | 0.0003 | 0.9997 | 1000 | |
0.5645 | 0.5378 | 19 | 0.5084 | 18 | 0.0005 | 0.9995 | 1000 | |
0.8467 | 0.3066 | 10 | 0.2262 | 8 | 0.0007 | 0.9993 | 1000 | |
1.1289 | 0.3333 | 10 | 0.2261 | 8 | 0.0009 | 0.9991 | 1000 | |
1.4111 | 0.6667 | 23 | 0.6109 | 17 | 0.0011 | 0.9989 | 1000 | |
1.6934 | 1.0000 | 125 | 1.0000 | 40 | 0.0013 | 0.9987 | 1000 |
From Tables 3 and 4, we can see that since \rho is very close to 1, the FP method cannot obtain an approximate solution even when the number of iteration steps reaches 1000. In contrast, for the GFP method, both (3.1) and (3.5) obtain the approximate solution in far fewer iteration steps. Therefore, the GFP method is clearly better than the FP method here.
In this paper, based on an equivalent fixed-point equation with a parameter matrix \Omega , we present the general fixed-point (GFP) method for solving the LCP( A, q ), which is a generalization of the fixed-point (FP) method. For this method, we discuss two iterative forms: the basic form and a converted form associated with a matrix splitting. Both iterative forms preserve the sparse structure of A in the iteration process, so the sparsity of A can be exploited to improve the effectiveness of the method. For the GFP method, the convergence conditions are proved, and some concrete convergence domains of \Omega as well as the optimal cases are presented. The iteration form of the GFP method is simple, and the convergence rate is governed by the spectral radius of the iteration matrix. The numerical experiments show that the GFP method is an effective and competitive iterative method.
The author thanks the anonymous referees for providing many useful comments and suggestions that made this paper more readable. This work was supported by Zhaoqing Education and Development Project (No. ZQJYY2020093), the Characteristic Innovation Project of Department of Education of Guangdong Province (No. 2020KTSCX159), the Innovative Research Team Project of Zhaoqing University, the Scientific Research Ability Enhancement Program for Excellent Young Teachers of Zhaoqing University and Zhaoqing University Research Project (No. 611-612279).
The author declares that there is no conflict of interest.
[1] S. R. Sacher, On the solution of large, structured linear complementarity problems: The block partitioned case, Appl. Math. Optim., 4 (1977), 347–363. doi: 10.1007/BF01442149
[2] Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. doi: 10.1002/nla.680
[3] C. W. Cryer, The solution of a quadratic programming problem using systematic over-relaxation, SIAM J. Control Optim., 9 (1971), 385–392. doi: 10.1137/0309028
[4] X. J. Shi, L. Yang, Z. H. Huang, A fixed-point method for the linear complementarity problem arising from American option pricing, Acta Math. Appl. Sinica (English Ser.), 32 (2016), 921–932. doi: 10.1007/s10255-016-0613-6
[5] W. Li, A general modulus-based matrix splitting method for linear complementarity problems of H-matrices, Appl. Math. Lett., 26 (2013), 1159–1164. doi: 10.1016/j.aml.2013.06.015
[6] C. Zhang, X. J. Chen, Smoothing projected gradient method and its application to stochastic linear complementarity problems, SIAM J. Optim., 20 (2009), 627–649. doi: 10.1137/070702187
[7] P. H. Calamai, J. J. Moré, Projected gradient methods for linearly constrained problems, Math. Program., 39 (1987), 93–116. doi: 10.1007/BF02592073
[8] H. Liu, Y. Huang, X. Li, Partial projected Newton method for a class of stochastic linear complementarity problems, Numer. Algorithms, 58 (2011), 593–618. doi: 10.1007/s11075-011-9472-7
[9] M. D. Koulisianis, T. S. Papatheodorou, Improving projected successive overrelaxation method for linear complementarity problems, Appl. Numer. Math., 45 (2003), 29–40. doi: 10.1016/S0168-9274(02)00233-7
[10] Y. C. Cheng, On the gradient-projection method for solving the nonsymmetric linear complementarity problem, J. Optimiz. Theory App., 43 (1984), 527–541. doi: 10.1007/BF00935004
[11] H. Liu, Y. Huang, X. Li, New reformulation and feasible semismooth Newton method for a class of stochastic linear complementarity problems, Appl. Math. Comput., 217 (2011), 9723–9740.
[12] B. S. He, A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming, Appl. Math. Optim., 25 (1992), 247–262. doi: 10.1007/BF01182323
[13] I. M. Sharaf, A projection algorithm for positive definite linear complementarity problems with applications, Int. J. Manag. Sci. Eng., 12 (2017), 141–147.
[14] A. Hadjidimos, M. Tzoumas, Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem, Linear Algebra Appl., 43 (2009), 197–210.
[15] S. L. Wu, C. X. Li, Two-sweep modulus-based matrix splitting iteration methods for linear complementarity problems, J. Comput. Appl. Math., 302 (2016), 327–339. doi: 10.1016/j.cam.2016.02.011
[16] L. L. Zhang, Two-step modulus-based matrix splitting iteration method for linear complementarity problems, Numer. Algorithms, 57 (2011), 83–99. doi: 10.1007/s11075-010-9416-7
[17] N. Zheng, J. F. Yin, Accelerated modulus-based matrix splitting iteration methods for linear complementarity problem, Numer. Algorithms, 64 (2013), 245–262. doi: 10.1007/s11075-012-9664-9
[18] S. M. Liu, H. Zheng, W. Li, A general accelerated modulus-based matrix splitting iteration method for solving linear complementarity problems, Calcolo, 53 (2016), 189–199. doi: 10.1007/s10092-015-0143-2
[19] B. L. Wen, H. Zheng, W. Li, X. F. Peng, The relaxation modulus-based matrix splitting iteration method for solving linear complementarity problems of positive definite matrices, Appl. Math. Comput., 321 (2018), 349–357.
[20] A. Hadjidimos, M. Lapidakis, M. Tzoumas, On iterative solution for linear complementarity problem with an H_+-matrix, SIAM J. Matrix Anal. Appl., 33 (2012), 97–110. doi: 10.1137/100811222
[21] H. Zheng, S. Vong, On convergence of the modulus-based matrix splitting iteration method for horizontal linear complementarity problems of H_+-matrices, Appl. Math. Comput., 369 (2020), 124–890.
[22] H. Zheng, W. Li, S. W. Vong, A relaxation modulus-based matrix splitting iteration method for solving linear complementarity problems, Numer. Algorithms, 74 (2017), 137–152. doi: 10.1007/s11075-016-0142-7
[23] H. Zheng, W. Li, W. Qu, A non-modulus linear method for solving the linear complementarity problem, Linear Algebra Appl., 495 (2016), 38–50. doi: 10.1016/j.laa.2016.01.032
[24] H. Zheng, S. Vong, A two-step modulus-based matrix splitting iteration method for horizontal linear complementarity problems, Numer. Algorithms, 86 (2021), 1791–1810. doi: 10.1007/s11075-020-00954-1
[25] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Philadelphia: SIAM Publisher, 1994.
[26] Z. Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM J. Matrix Anal. Appl., 21 (2006), 67–78.
[27] G. Csordas, R. S. Varga, Comparisons of regular splittings of matrices, Numer. Math., 44 (1984), 23–35. doi: 10.1007/BF01389752
[28] A. Frommer, D. B. Szyld, H-splittings and two-stage iterative methods, Numer. Math., 63 (1992), 345–356. doi: 10.1007/BF01385865
[29] K. G. Murty, On the number of solutions to the complementarity problem and spanning properties of complementary cones, Linear Algebra Appl., 5 (1972), 65–108. doi: 10.1016/0024-3795(72)90019-5