1. Introduction
In this paper, we are concerned with the solution of the linear complementarity problem, abbreviated as LCP, which consists of finding a pair of real vectors $ z \in \mathbb{R}^n $ and $ v \in \mathbb{R}^n $ satisfying the conditions

$ z \geqslant 0, \quad v = Az + q \geqslant 0, \quad z^{\top} v = 0, \quad\quad (1.1) $

where $ A \in \mathbb{R}^{n \times n} $ is a given matrix and $ q \in \mathbb{R}^n $ is a given vector.
The LCP (1.1) not only provides a unified framework for linear programming, quadratic programming and bi-matrix games, but also models many practically relevant situations such as spatial price equilibrium problems, obstacle and free boundary problems, market equilibrium problems, optimal control problems, contact mechanics problems and so forth. To solve the LCP (1.1) numerically, various iteration methods have been presented and investigated, for example, the pivot algorithms [1,2], the projected iterative methods [3,4,5], the SOR-like iteration methods [6,7], the multisplitting methods [8,9,10,11,12,13], the modulus-based matrix splitting (MMS) methods [14,15], the Newton-type iteration methods [16,17], the multigrid methods [18] and others. Notably, among these methods, the MMS iteration method, first introduced in [14] and extended in many works [19,20,21,22,23], is particularly attractive because of its ability to obtain numerical solutions quickly and accurately. Making use of $ z = \frac{|x|+x}{\gamma} $ and $ v = \frac{\Omega}{\gamma}(|x|-x) $, Bai [14] transformed the LCP (1.1) into an equivalent system of fixed-point equations
where $ \gamma $ is a positive constant and $ \Omega \in \mathbb{R}^{n \times n} $ is a positive diagonal matrix. Based on (1.2), the MMS method was presented as follows.
Method 1.1 (MMS method). Let $ A = M-N $ be a splitting of the given matrix $ A \in \mathbb{R}^{n \times n} $, and the matrix $ \Omega+M $ be nonsingular, where $ \Omega \in \mathbb{R}^{n \times n} $ is a positive diagonal matrix. Given a nonnegative initial vector $ x^0 \in \mathbb{R}^n $, for $ k = 0, 1, 2, \ldots $ until the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ converges, compute $ x^{k+1} \in \mathbb{R}^n $ by solving the linear system
and set
where $ \gamma $ is a positive constant.
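For illustration, the iteration of Method 1.1 can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the Gauss-Seidel splitting $ M = D-L $, $ N = U $, the choices of $ \Omega $ and $ \gamma $, and the convergence test are illustrative assumptions.

```python
import numpy as np

def mms(A, q, Omega, gamma=2.0, tol=1e-10, max_it=500):
    """Sketch of the MMS iteration: solve (Omega + M) x_{k+1} =
    N x_k + (Omega - A)|x_k| - gamma*q, then recover z and v.
    The Gauss-Seidel splitting M = D - L, N = U is an assumption."""
    M = np.tril(A)                       # D - L
    N = M - A                            # U
    x = np.zeros(A.shape[0])
    for _ in range(max_it):
        rhs = N @ x + (Omega - A) @ np.abs(x) - gamma * q
        x_new = np.linalg.solve(Omega + M, rhs)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma          # z = (|x| + x)/gamma
    v = (Omega / gamma) @ (np.abs(x) - x)  # v = (Omega/gamma)(|x| - x)
    return z, v
```

Note that $ z \geqslant 0 $, $ v \geqslant 0 $ and $ z^{\top}v = 0 $ hold by construction at every step; only the consistency $ v = Az+q $ is attained in the limit.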
Furthermore, motivated by this method for solving the LCP (1.1), many researchers extended the MMS iteration method together with its variants to solve the nonlinear complementarity problems [24,25,26], the horizontal linear complementarity problems [27], the implicit complementarity problems [28,29,30], the second-order cone linear complementarity problems [31], the circular cone nonlinear complementarity problems [32], the semidefinite linear complementarity problems [33] and so forth.
Recently, from a different perspective, Wu and Li [34] designed a new equivalent form by directly exploiting the inequality system of the LCP (1.1), reformulating the LCP (1.1) as a system of fixed-point equations without employing a variable transformation, which can be described as follows
Based on (1.3), a new modulus-based matrix splitting (NMMS) iteration method for solving the large and sparse LCP (1.1) was presented as Method 1.2.
Method 1.2 (NMMS method). Let $ A = M-N $ be a splitting of the given matrix $ A \in \mathbb{R}^{n \times n} $, and matrix $ \Omega+M $ be nonsingular, where $ \Omega \in \mathbb{R}^{n \times n} $ is a positive diagonal matrix. Given a nonnegative initial vector $ z^0 \in\mathbb{R}^n $, for $ k = 0, 1, 2, \ldots $ until the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ converges, compute $ z^{k+1} \in \mathbb{R}^n $ by solving the linear system
The NMMS iteration method has the merits of simple calculation and high computational efficiency, so it is feasible and practical in actual implementations. Comparing Method 1.1 with Method 1.2, the MMS and NMMS iteration methods resemble each other, but neither is a special case of the other. Compared with the MMS method, the NMMS method does not require a variable transformation during the iteration, which provides a new general framework for solving the LCP (1.1). It was shown in [34] that the NMMS iteration method is superior to the MMS iteration method for certain LCPs. Furthermore, by utilizing the matrix splitting technique and properly choosing the involved relaxation parameters, the NMMS iteration method induces some new modulus-based relaxation methods, such as the new modulus-based Jacobi (NMJ) method, the new modulus-based Gauss-Seidel (NMGS) method, the new modulus-based successive overrelaxation (NMSOR) method and the new modulus-based accelerated overrelaxation (NMAOR) method. In this paper, we mainly focus on the NMMS iteration method for solving the LCP (1.1).
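Method 1.2 admits an equally short NumPy sketch. It iterates on $ z $ directly, taking the fixed-point form (1.3) as $ (\Omega+A)z = |(\Omega-A)z - q| - q $ (the form derived in [34], assumed here); the Gauss-Seidel splitting and $ \Omega = D $ in the usage example are illustrative choices.

```python
import numpy as np

def nmms(A, q, Omega, tol=1e-10, max_it=500):
    """Sketch of the NMMS iteration of Method 1.2:
    (Omega + M) z_{k+1} = N z_k + |(Omega - A) z_k - q| - q.
    The Gauss-Seidel splitting M = D - L, N = U is an assumption."""
    M = np.tril(A)                       # D - L
    N = M - A                            # U
    z = np.zeros(A.shape[0])
    for _ in range(max_it):
        rhs = N @ z + np.abs((Omega - A) @ z - q) - q
        z_new = np.linalg.solve(Omega + M, rhs)
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return z
```

No variable transformation is needed: the iterate itself approximates the solution $ z^* $, and $ v = Az + q $ is recovered directly.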
Preconditioning is a classical acceleration technique for fixed-point iterations and can substantially improve the convergence rate. In order to accelerate the convergence of the MMS iteration method for solving the LCP (1.1), several preconditioned solvers have been developed. For example, Li and Zheng [35] and Zheng and Luo [36] respectively developed a preconditioned MMS iteration method and a preconditioned two-step MMS iteration method by utilizing a variable transformation, where the preconditioners were both chosen as
Ren et al. [37] proposed the preconditioned general two-step MMS iteration method based on the two-step MMS iteration method [38] and gave its convergence analysis. Wu et al. [39] presented the preconditioned general MMS iteration method by applying a left multiplicative preconditioner to the implicit fixed-point equation, and four different preconditioners were compared: $ P_1 = I + t_1 C_1 $ and $ P_2 = I + t_2 (C_1 + C_m) $, which are both lower-triangular matrices; $ P_3 = I + t_3 C_1^{\top} $, which is the preconditioner in (1.4); and $ P_4 = I + t_4 (C_1 + C_1^{\top}) $, which is a Hessenberg matrix. The element $ c_{kj} $ of $ C_i $ is given as
and $ t_s > 0 $, $ s = 1, 2, 3, 4 $. Experimental results show that the preconditioners $ P_1 $ and $ P_2 $ have better computational efficiency than $ P_3 $ and $ P_4 $ in most cases. Dai et al. [40] proposed a preconditioner $ \tilde{P} $ defined as follows
and suggested a preconditioned two-step MMS iteration method for the LCP (1.1) associated with an $ M $-matrix.
In this paper, we draw inspiration from the work of [35] and extend Method 1.2 to a more general framework with a customized preconditioner. Different from the above-mentioned preconditioners, we develop a generalized preconditioner associated with both the $ H_+ $-matrix $ A $ and the vector $ q $ of the LCP (1.1), and devise a preconditioned new modulus-based matrix splitting (PNMMS) iteration method for solving the LCP (1.1) with an $ H_+ $-matrix. In particular, the PNMMS iteration method can yield a series of preconditioned relaxation modulus-based matrix splitting iteration methods by suitable choices of the matrix splitting. We give the convergence analysis of the PNMMS iteration method under some mild conditions and provide a comparison theorem showing that the PNMMS iteration method accelerates the convergence rate theoretically.
The rest of this paper is arranged as follows. In section 2, we present some classical definitions and preliminary results relevant to our later developments. Section 3 establishes the PNMMS iteration method for solving the LCP (1.1) and its convergence properties are explored in detail in section 4. The comparison theorem between the NMMS and the PNMMS method is presented in section 5. Section 6 provides some numerical examples to illustrate the theoretical results. Finally, some concluding remarks are given in section 7.
2. Preliminaries
In this section, we present the notation and some auxiliary results that lay the foundation for our claims.
Let $ A = (a_{ij}) \in \mathbb{R}^{n \times n} $. $ \mathit{\boldsymbol{tridiag}}(a, b, c) $ represents a matrix with $ a, b, c $ as the subdiagonal, main diagonal and superdiagonal entries, respectively. We denote the spectral radius of the matrix $ A $ by $ \rho(A) $. $ I $ is the identity matrix of suitable dimension. $ A $ is referred to as a $ Z $-matrix if $ a_{ij} \leqslant 0 $ for any $ i \neq j $. If $ A $ is a $ Z $-matrix and satisfies $ A^{-1} \geqslant 0 $, then the matrix $ A $ is called an $ M $-matrix. The comparison matrix of $ A $ is denoted by $ \langle A \rangle = (\langle a \rangle_{ij}) \in \mathbb{R}^{n \times n} $, where
Obviously, the comparison matrix is a $ Z $-matrix. $ A $ is called an $ H $-matrix if its comparison matrix is an $ M $-matrix. If $ A $ is an $ H $-matrix with positive diagonals, it is an $ H_+ $-matrix, see [41]. An $ M $-matrix is an $ H_+ $-matrix, and an $ H_+ $-matrix is an $ H $-matrix.
Let $ A = D-L-U = D-B $, where $ D $, $ -L $, $ -U $ and $ -B $ represent the diagonal matrix, strictly lower triangular matrix, strictly upper triangular matrix and off-diagonal matrix of matrix $ A $, respectively. We say $ A = M-N $ is a splitting if $ M $ is nonsingular. The splitting $ A = M-N $ is called a weak regular splitting of $ A \in \mathbb{R}^{n \times n} $, if $ M^{-1} \geqslant 0, M^{-1} N \geqslant 0 $, see [42]; an $ M $-splitting if $ M $ is an $ M $-matrix and $ N $ is nonnegative; an $ H $-splitting if $ \langle M \rangle - |N| $ is an $ M $-matrix; an $ H $-compatible splitting if $ \langle A \rangle = \langle M \rangle - |N| $, see [15]. Note that an $ M $-splitting is an $ H $-compatible splitting, and an $ H $-compatible splitting is an $ H $-splitting.
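These matrix classes are easy to check numerically for small examples. The following sketch (illustrative only) builds the comparison matrix and tests the $ M $- and $ H $-matrix properties via the characterization of an $ M $-matrix as a nonsingular $ Z $-matrix with $ A^{-1} \geqslant 0 $.

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """A is an M-matrix iff it is a Z-matrix, nonsingular, and A^{-1} >= 0."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):                # off-diagonals must be <= 0
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:        # singular: not an M-matrix
        return False

def is_H_matrix(A):
    """A is an H-matrix iff its comparison matrix is an M-matrix."""
    return is_M_matrix(comparison_matrix(A))
```

For instance, a strictly diagonally dominant matrix with positive diagonal and some positive off-diagonal entries is an $ H_+ $-matrix but not a $ Z $-matrix, and the functions above distinguish the two cases.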
Lemma 2.1 ([14]). Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, then the LCP (1.1) has a unique solution for any $ q \in \mathbb{R}^n $.
Lemma 2.2 ([43]). Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, then $ |A^{-1}| \leqslant \langle A \rangle ^ {-1} $.
Lemma 2.3 ([44]). Let $ A $ be a $ Z $-matrix, then the following statements are equivalent:
● $ A $ is an $ M $-matrix;
● There exists a positive vector $ x $, such that $ Ax > 0 $;
● Let $ A = M-N $ be a splitting of $ A $ and $ M^{-1} \geqslant 0 $, $ N \geqslant 0 $, then $ \rho (M^{-1} N) < 1 $.
Lemma 2.4 ([44]). Let $ A $, $ B $ be two $ Z $-matrices, $ A $ be an $ M $-matrix and $ B \geqslant A $, then $ B $ is an $ M $-matrix.
Lemma 2.5 ([45]). $ A $ is monotone if and only if $ A \in \mathbb{R}^{n \times n} $ is nonsingular with $ A^{-1} \geqslant 0 $.
Lemma 2.6 ([46]). Suppose that $ E = F-G $ and $ \bar{E} = \bar{F} - \bar{G} $ are weak regular splittings of the monotone matrices $ E $ and $ \bar{E} $, respectively, such that $ F^{-1} \leqslant \bar{F}^{-1} $. If there exists a positive vector $ x $ such that $ 0 \leqslant Ex \leqslant \bar{E}x $, then for the monotonic norm associated with $ x $, $ \| \bar{F}^{-1} \bar{G} \|_x \leqslant \| F^{-1} G \|_x $. In particular, if $ F^{-1} G $ has a positive Perron vector, then $ \rho(\bar{F}^{-1} \bar{G}) \leqslant \rho(F^{-1} G) $.
Lemma 2.7 ([47]). Let $ A = (a_{ij}) \in \mathbb{R}^{n \times n} $ be an $ M $-matrix, then there exists $ \delta_0 > 0 $ such that for any $ 0 < \delta \leqslant \delta_0 $, $ A(\delta) = (\; a_{ij}(\delta)\; ) \in \mathbb{R}^{n \times n} $ is a nonsingular $ M $-matrix, where
3. The PNMMS method
In this section, the PNMMS iteration method for solving the LCP (1.1) will be constructed and the new generalized preconditioner will be introduced.
Inspired by the idea of the preconditioner in [40], we propose a generalized preconditioner $ P $ associated with both the $ H_+ $-matrix $ A $ and the vector $ q $ of the LCP (1.1), with the following form
where $ p_{ii} = 1 $, the elements $ p_{ik_m} = \frac{|a_{ik_m}|}{a_{k_mk_m}} \geqslant 0 $, $ m \in \{ 1, 2, \ldots, r \} $ and $ k_m $ satisfies $ q_{k_m} < 0 $, while other entries are all $ 0 $.
It is worth noting that the preconditioner (3.1) is established on the premise that $ A $ is an $ H_+ $-matrix; it extends the preconditioner constructed for $ M $-matrices in [40] and naturally includes (1.5).
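A sketch of how (3.1) can be assembled (an interpretation of the description above: for every index $ k_m $ with $ q_{k_m} < 0 $, column $ k_m $ receives the entries $ |a_{ik_m}|/a_{k_mk_m} $ for $ i \neq k_m $, while all other off-diagonal entries stay zero):

```python
import numpy as np

def build_preconditioner(A, q):
    """Generalized preconditioner P of (3.1) (interpretation sketch):
    p_ii = 1 and, for every index k with q_k < 0,
    p_ik = |a_ik| / a_kk for i != k; all other entries are zero."""
    n = A.shape[0]
    P = np.eye(n)
    for k in np.where(q < 0)[0]:
        for i in range(n):
            if i != k:
                P[i, k] = abs(A[i, k]) / A[k, k]
    return P
```

Since $ A $ has positive diagonal entries, every filled entry is nonnegative, so $ P \geqslant I $ elementwise.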
Letting $ PA = \bar{M} - \bar{N} $ be a splitting of the matrix $ PA $, we can rewrite the fixed-point system (1.3) as
and then construct the PNMMS iteration method as follows.
Method 3.1 (PNMMS method). Let $ P \in \mathbb{R}^{n \times n} $ be a given preconditioner, $ PA = \bar{M} - \bar{N} $ be a splitting of the matrix $ PA\in \mathbb{R}^{n \times n} $, and the matrix $ P\Omega + \bar{M} $ be nonsingular, where $ \Omega \in \mathbb{R}^{n \times n} $ is a positive diagonal matrix. Given a nonnegative initial vector $ z^0 \in\mathbb{R}^n $, for $ k = 0, 1, 2, \ldots $ until the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ converges, compute $ z^{k+1} \in \mathbb{R}^n $ by solving the linear system
Method 3.1 provides a general framework of the PNMMS method for solving the LCP (1.1). Besides including the NMMS method [34] as a special case, it can generate a series of relaxation versions by suitable choices of the matrix splitting. The framework of the PNMMS method is summarized in Algorithm 1.
Remark 3.1. Let $ P \in \mathbb{R}^{n \times n} $ be a given preconditioner and $ PA = \bar{M}-\bar{N} = \bar{D}-\bar{L}-\bar{U} $ be the splitting of the matrix $ PA \in \mathbb{R}^{n \times n} $. Here, we give the following remarks on Method 3.1.
● When $ P = I $, then Method 3.1 reduces to the NMMS method [34].
● When $ \bar{M} = \bar{D}, \; \bar{N} = \bar{L} + \bar{U} $, then Method 3.1 reduces to the preconditioned new modulus-based Jacobi (PNMJ) method:
● When $ \bar{M} = \bar{D} - \bar{L}, \; \bar{N} = \bar{U} $, then Method 3.1 becomes the preconditioned new modulus-based Gauss-Seidel (PNMGS) method:
● When $ \bar{M} = \frac{1}{\alpha} \bar{D} - \bar{L}, \; \bar{N} = \frac{1-\alpha}{\alpha} \bar{D} + \bar{U} $, then Method 3.1 turns into the preconditioned new modulus-based successive overrelaxation (PNMSOR) method:
● When $ \bar{M} = \frac{1}{\alpha} \bar{D} - \frac{\beta}{\alpha} \bar{L}, \; \bar{N} = \frac{1-\alpha}{\alpha} \bar{D} + \frac{\alpha-\beta}{\alpha} \bar{L} + \bar{U} $, then Method 3.1 reduces to the preconditioned new modulus-based accelerated overrelaxation (PNMAOR) method:
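The variants above differ only in the splitting of $ PA $. A minimal NumPy sketch of the PNMSOR choice follows; the iteration form $ (P\Omega+\bar{M})z^{k+1} = \bar{N}z^k + P(|(\Omega-A)z^k - q| - q) $ and the toy preconditioner in the test are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pnmms_sor(A, q, P, Omega, alpha=1.0, tol=1e-10, max_it=500):
    """PNMMS sketch with the PNMSOR splitting of PA:
    Mbar = (1/alpha) Dbar - Lbar, Nbar = ((1-alpha)/alpha) Dbar + Ubar,
    iterating (P Omega + Mbar) z_{k+1} = Nbar z_k + P(|(Omega-A) z_k - q| - q)."""
    PA = P @ A
    Dbar = np.diag(np.diag(PA))
    Lbar = -np.tril(PA, -1)              # PA = Dbar - Lbar - Ubar
    Ubar = -np.triu(PA, 1)
    Mbar = Dbar / alpha - Lbar
    Nbar = (1.0 - alpha) / alpha * Dbar + Ubar
    z = np.zeros(A.shape[0])
    for _ in range(max_it):
        rhs = Nbar @ z + P @ (np.abs((Omega - A) @ z - q) - q)
        z_new = np.linalg.solve(P @ Omega + Mbar, rhs)
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return z
```

With $ \alpha = 1 $ the splitting reduces to the PNMGS choice, and with $ P = I $ the sketch reduces to the corresponding NMMS relaxation method.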
4. Convergence analysis
In this section, we analyze the convergence of Method 3.1 for the LCP (1.1) with an $ H_+ $-matrix and give some mild conditions that guarantee its convergence.
First of all, a general convergence condition for Method 3.1 is established in Theorem 4.1.
Theorem 4.1. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix and $ P $ be a given nonsingular matrix with positive diagonals such that $ PA $ is an $ H_+ $-matrix. Let $ PA = \bar{M} - \bar{N} $ be a splitting of the matrix $ PA $ and let $ P\Omega + \bar{M} $ be an invertible $ H_+ $-matrix, where $ \Omega $ is a positive diagonal matrix. Let
If $ \rho(\bar{F}^{-1} \bar{G}) < 1 $, then the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ produced by Method 3.1 converges to the unique solution $ z^* \in \mathbb{R}^n $ of the LCP (1.1) for any nonnegative initial vector.
Proof. Since $ A $ is an $ H_+ $-matrix, it follows from Lemma 2.1 that the LCP (1.1) has a unique solution $ z^* $, which means
Subtracting (4.2) from (3.2), we have
Since $ P\Omega + \bar{M} $ is invertible, it holds that
Taking absolute value on both sides of (4.3) and utilizing the triangle inequality, one can obtain
where the last inequality follows from Lemma 2.2. Evidently, if $ \rho (\bar{F}^{-1} \bar{G}) < 1 $, then the sequence $ \{z^k\}^{+\infty}_{k = 0} $ converges to the unique solution $ z^* $ of the LCP (1.1).
In particular, if $ P = I $, the following corollary can be obtained.
Corollary 4.1. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, $ A = M-N $ be a splitting of the matrix $ A $, and the matrix $ \Omega+M $ be a nonsingular $ H_+ $-matrix, where $ \Omega $ is a positive diagonal matrix. Let
If $ \rho(F^{-1} G) < 1 $, then the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ produced by Method 1.2 converges to the unique solution $ z^* \in \mathbb{R}^n $ of the LCP (1.1) for any nonnegative initial vector.
Theorem 4.2. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix and $ P $ be a given nonsingular matrix with positive diagonals such that $ PA $ is an $ H_+ $-matrix. Let $ PA = \bar{M} - \bar{N} $ be a splitting of the matrix $ PA $ and let $ P\Omega + \bar{M} $ be an invertible $ H_+ $-matrix. Suppose the positive diagonal matrix $ \Omega $ satisfies
or
where $ A = D-B $, $ P = D_P - B_P $ and $ x $ is the all-ones column vector. Then the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ produced by Method 3.1 converges to the unique solution $ z^* \in \mathbb{R}^n $ of the LCP (1.1) for any nonnegative initial vector.
Proof. Denote $ \bar{E} : = \bar{F} - \bar{G} $, where $ \bar{F} $ and $ \bar{G} $ are given as in (4.1), then it can be concluded that
For the case $ \Omega \geqslant D $, where the parameter $ \Omega $ satisfies (4.4), we have
It is obvious that $ \langle PA \rangle + |P| \langle A \rangle - 2 |B_P| \Omega $ is a $ Z $-matrix. According to Lemma 2.3, $ \langle PA \rangle + |P| \langle A \rangle - 2 |B_P| \Omega $ is an $ M $-matrix. It then follows from Lemma 2.4 that $ \bar{E} $ is an $ M $-matrix.
For the case $ \Omega < D $, where the parameter $ \Omega $ satisfies (4.5), it holds that
Analogously, $ \langle PA \rangle - |P| |A| + 2 D_P \Omega $ is an $ M $-matrix. In light of Lemma 2.4, it follows that the $ Z $-matrix $ \bar{E} $ is a nonsingular $ M $-matrix.
Note that $ \bar{E} = \bar{F} - \bar{G} $ is a splitting of the $ M $-matrix $ \bar{E} $. Since $ P\Omega + \bar{M} $ is an $ H_+ $-matrix, we have $ \bar{F}^{-1} = \langle P\Omega + \bar{M} \rangle ^{-1} \geqslant 0 $, and $ \bar{F} $ is an $ M $-matrix. It is obvious that $ \bar{G} = |\bar{N}| + |P| |\Omega-A| \geqslant 0 $. Then Lemma 2.3 leads to $ \rho (\bar{F}^{-1} \bar{G}) < 1 $, and the assertion follows from Theorem 4.1. The proof is completed.
In particular, if $ P = I $, it implies that $ \| B_P \|_\infty = 0 $. Taking the right-hand side of (4.4) to be $ +\infty $, then the following corollary can be obtained. It is worth noting that Corollary 4.2 leads to a broader convergence region than Theorem 4.2 in [34].
Corollary 4.2. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, $ A = M-N $ be a splitting of the matrix $ A $, and the matrix $ \Omega+M $ be a nonsingular $ H_+ $-matrix. Suppose the diagonal matrix $ \Omega $ satisfies
where $ A = D-B $ and $ x $ is the all-ones column vector. Then the iteration sequence $ \{z^k\}^{+\infty}_{k = 0} \subset \mathbb{R}^n $ produced by Method 1.2 converges to the unique solution $ z^* \in \mathbb{R}^n $ of the LCP (1.1) for any nonnegative initial vector.
Similarly, we establish the following convergence theorem on the PNMAOR method.
Theorem 4.3. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix and $ P $ be a given nonsingular matrix with positive diagonals such that $ PA $ is an $ H_+ $-matrix. Let $ PA = \bar{D} - \bar{L} - \bar{U} = \bar{D} - \bar{B} $, $ P = D_P - B_P $ and $ \rho : = \rho(\bar{D}^{-1} (|\bar{B}| + | B_P \Omega |)) $. Suppose the positive diagonal matrix $ \Omega $ satisfies $ D_P \Omega \geqslant \frac{1}{2\alpha} \bar{D} $. Then, for any initial vector, the PNMAOR iteration method is convergent if the following conditions are satisfied:
(1) when $ \frac{1}{2\alpha} \bar{D} \leqslant D_P \Omega < \frac{1}{\alpha} \bar{D} $,
(2) when $ D_P \Omega \geqslant \frac{1}{\alpha} \bar{D} $,
Proof. Since
where $ PA = \bar{M} - \bar{N} $. By some calculations, it holds that
and
where $ \bar{F} $ and $ \bar{G} $ are defined as (4.1). It is obvious to see that $ \bar{F} $ is an $ M $-matrix and $ \bar{G} \geqslant 0 $. According to Lemma 2.3, we need to prove $ \bar{E} = \bar{F} -\bar{G} $ is an $ M $-matrix, where
When $ \frac{1}{2\alpha} \bar{D} \leqslant D_P \Omega < \frac{1}{\alpha} \bar{D} $ and $ 0 \leqslant \beta \leqslant \alpha $, we know $ D_P \Omega - |D_P \Omega - \frac{1}{\alpha} \bar{D}| \geqslant 0 $. Based on (4.6), we can simplify it to
From Lemma 2.4, if $ \hat{E} $ is an $ M $-matrix, then $ \bar{E} $ is an $ M $-matrix, and it holds if and only if
and then we can obtain
When $ \frac{1}{2\alpha} \bar{D} \leqslant D_P \Omega < \frac{1}{\alpha} \bar{D} $ and $ \alpha < \beta $, we know $ D_P \Omega - |D_P \Omega - \frac{1}{\alpha} \bar{D}| \geqslant 0 $ and $ \alpha |\bar{L}| \geqslant 0 $. Based on (4.6), we can simplify it to
From Lemma 2.4, if $ \hat{E} $ is an $ M $-matrix, then $ \bar{E} $ is an $ M $-matrix, and it holds if and only if
and then we get
When $ D_P \Omega \geqslant \frac{1}{\alpha} \bar{D} $ and $ 0 \leqslant \beta \leqslant \alpha $, we know $ D_P \Omega - |D_P \Omega - \frac{1}{\alpha} \bar{D}| = \frac{1}{\alpha} \bar{D} > 0 $. Based on (4.6), we can simplify it to
From Lemma 2.4, if $ \hat{E} $ is an $ M $-matrix, then $ \bar{E} $ is an $ M $-matrix, and it holds if and only if
and then we have
When $ D_P \Omega \geqslant \frac{1}{\alpha} \bar{D} $ and $ \alpha < \beta $, we know $ D_P \Omega - |D_P \Omega - \frac{1}{\alpha} \bar{D}| = \frac{1}{\alpha} \bar{D} > 0 $ and $ \alpha |\bar{L}| \geqslant 0 $. Based on (4.6), we can simplify it to
From Lemma 2.4, if $ \hat{E} $ is an $ M $-matrix, then $ \bar{E} $ is an $ M $-matrix, and it holds if and only if
and then we obtain
Therefore, the convergence of the PNMAOR method can be proved by Lemma 2.3, thus completing the proof.
Under the conditions of Theorem 4.3, if we take the specific choices of $ \alpha $ and $ \beta $ for the PNMAOR method, the following corollaries on the PNMJ, PNMGS and PNMSOR methods can be derived.
Corollary 4.3. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, and $ P $ be a given nonsingular matrix with positive diagonals such that $ PA $ is an $ H_+ $-matrix. Let $ PA = \bar{D} - \bar{L} - \bar{U} = \bar{D} - \bar{B} $, $ P = D_P - B_P $ and $ \rho : = \rho(\bar{D}^{-1} (|\bar{B}| + | B_P \Omega |)) $. Suppose the positive diagonal matrix $ \Omega $ satisfies $ D_P \Omega \geqslant \frac{1}{2\alpha} \bar{D} $. Then, for any initial vector:
● if $ \alpha = 1 $ and $ \beta = 0 $, the PNMJ iteration method is convergent;
● if $ \alpha = \beta = 1 $, the PNMGS iteration method is convergent;
● if $ \alpha = \beta $, the PNMSOR iteration method is convergent for
When $ P = I $, the convergence theorem of the PNMAOR method reduces to that of the NMAOR method in [34]. In most cases, the range of the parameters of the AOR method is set as $ 0 \leqslant \beta \leqslant \alpha $, and then the following result can be obtained.
Corollary 4.4. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix. Make $ A = D-L-U = D-B $ satisfy $ \rho : = \rho(D^{-1} |B|) < \frac{1}{2} $ and $ \Omega \geqslant \frac{1}{2\alpha} D $. Then for any initial vector, the NMAOR iteration method is convergent for
5. A comparison theorem between PNMMS and NMMS
In this section, we provide a comparison theorem between the PNMMS iteration method and the NMMS iteration method, which indicates that the PNMMS method for solving the LCP (1.1) can accelerate the convergence rate of the original NMMS method.
Let
where $ A = D-L-U $ and $ PA = \bar{D} - \bar{L} - \bar{U} $. This is a special case of the NMMS and PNMMS methods; other relaxation versions can be analyzed theoretically in a similar way.
We derive the following useful lemma on the premise of the structure-preserving preconditioner $ P $ in (3.1). The proof of Lemma 5.1 is similar to that of Lemma 4.1 in [48], so we omit the details.
Lemma 5.1 ([48]). Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, $ P $ be the preconditioner from (3.1) such that $ PA $ is an $ H_+ $-matrix. Assume that $ D $, $ L $ and $ \bar{D} $, $ \bar{L} $ are given by $ A = D-L-U $ and $ PA = \bar{D} - \bar{L} - \bar{U} $, respectively. Then
Theorem 5.1. Let $ A \in \mathbb{R}^{n \times n} $ be an $ H_+ $-matrix, $ P $ be the preconditioner in (3.1) such that $ PA $ is an $ H_+ $-matrix. Make $ A $ and $ PA $ have the splitting $ A = D-L-U $ and $ PA = \bar{D} - \bar{L} - \bar{U} $, respectively. Then for the iterative matrices $ F^{-1}G $ of the NMMS method and $ \bar{F}^{-1} \bar{G} $ of the PNMMS method, it holds that
where $ F $, $ G $ are given as
and $ \bar{F} $, $ \bar{G} $ are given as
Proof. Since
by the convergence conditions of Corollary 4.1 and Theorem 4.1, we obtain $ \rho(F^{-1} G) < 1 $ and $ \rho(\bar{F}^{-1} \bar{G}) < 1 $, where $ F $, $ G $ and $ \bar{F} $, $ \bar{G} $ are given in (5.1) and (5.2), respectively. Now we only need to prove $ \rho(\bar{F}^{-1} \bar{G}) \leqslant \rho(F^{-1}G) $.
Here, we denote
It is easy to check that $ E $ and $ \bar{E} $ in (5.3) are both $ M $-matrices, so $ E^{-1} \geqslant 0 $ and $ \bar{E}^{-1} \geqslant 0 $. Lemma 2.5 then shows that $ E $ and $ \bar{E} $ are monotone matrices. Since $ F = 2D-|L| $ and $ \bar{F} = 2\bar{D} - |\bar{L}| $ are both $ M $-matrices, we have $ F^{-1} \geqslant 0 $ and $ \bar{F}^{-1} \geqslant 0 $. Evidently, $ G = |L|+2|U| \geqslant 0 $ and $ \bar{G} = |\bar{L}| + 2|\bar{U}| \geqslant 0 $ are nonnegative matrices, so $ E = F-G $ and $ \bar{E} = \bar{F}-\bar{G} $ are weak regular splittings. From Lemma 5.1, we deduce that $ 2\bar{D}-|\bar{L}| \leqslant 2D-|L| $, hence $ \left(2D-|L| \right)^{-1} \leqslant \left(2\bar{D}-|\bar{L}| \right)^{-1} $, that is, $ F^{-1} \leqslant \bar{F}^{-1} $. By Lemma 2.3, since $ E $ is an $ M $-matrix, there exists a vector $ x > 0 $ such that $ Ex > 0 $. For the preconditioner (3.1), we have $ P \geqslant I $. Moreover, $ \bar{E} = PE $, so we easily infer that $ 0 \leqslant Ex \leqslant \bar{E}x $.
If the matrix $ A $ is irreducible, then $ F^{-1} G $ is also a nonnegative irreducible matrix. In combination with Lemma 2.6 and the Perron–Frobenius theorem [45], $ F^{-1}G $ has a positive Perron vector, so $ \rho(\bar{F}^{-1} \bar{G}) \leqslant \rho(F^{-1}G) $. If the matrix $ A $ is reducible, then we can construct an irreducible matrix $ A(\delta) $ by Lemma 2.7, which also leads to $ \rho(\bar{F}^{-1} \bar{G}) \leqslant \rho(F^{-1}G) $.
In view of the above, the comparison theorem shows that the PNMMS method converges faster than the NMMS method whenever both methods are convergent.
6. Numerical experiments
In this section, four numerical examples are presented to illustrate the efficiency of the PNMMS iteration method for solving the LCP (1.1). All tests are conducted in MATLAB R2014b on a PC with the Windows 10 operating system, an Intel i5-10400F CPU and 8 GB RAM. In the numerical results, we report the number of iteration steps (denoted by "Iter"), the elapsed CPU time in seconds (denoted by "Time"), the relative residual (denoted by "Res") and the spectral radius (denoted by "Rho"). The stopping criterion of the iteration is defined as
or the prescribed maximal iteration number $ k_{max} = 500 $ is exceeded ($ "-" $ is used in the following tables to demonstrate this circumstance). All tests are started from the initial vector $ z^0 = (1, 0, 1, 0, \cdots, 1, 0, \cdots)^{\top} \in \mathbb{R}^n $.
With regard to the comparison of Method 3.1 and Method 1.2, we exhibit the performance of the PNMSOR and NMSOR methods. In our implementations, the parameter $ \Omega $ is chosen as $ \Omega = \frac{1}{\alpha} D $, and the parameter $ \alpha $ is obtained experimentally.
Example 6.1 ([14]). Consider the LCP (1.1) with $ A = \hat{A} + \mu I \in \mathbb{R}^{n \times n} $, in which
with $ S = \mathit{\boldsymbol{tridiag}}(-1, 4, -1) \in \mathbb{R}^{m \times m} $ and $ n = m^2 $, the vector
For this example, the system matrix $ A $ is a strictly diagonally dominant symmetric positive definite $ H_+ $-matrix when $ \mu > 0 $, so the LCP (1.1) has a unique solution. The preconditioner is chosen as (3.1), where $ k_m \in \{ 1, 3, 5, 7, \cdots \} $.
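The test matrix can be assembled as follows; this sketch assumes the block structure $ \hat{A} = \mathit{Tridiag}(-I, S, -I) $ of the standard test problem from [14], which is consistent with $ S = \mathit{tridiag}(-1, 4, -1) \in \mathbb{R}^{m \times m} $ and $ n = m^2 $.

```python
import numpy as np

def example61_matrix(m, mu=4.0):
    """Assemble A = Ahat + mu*I with Ahat = Tridiag(-I, S, -I),
    S = tridiag(-1, 4, -1) in R^{m x m} and n = m^2 (the block form
    is assumed from the standard test problem of [14])."""
    I = np.eye(m)
    S = 4.0 * I - np.eye(m, k=1) - np.eye(m, k=-1)
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = S + mu * I   # diagonal block
        if i + 1 < m:
            A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = -I   # super-diagonal block
            A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = -I   # sub-diagonal block
    return A
```

For $ \mu = 4 $ the diagonal entries equal $ 8 $ and every row is strictly diagonally dominant, which confirms the $ H_+ $-matrix property claimed above.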
The optimal parameter $ \alpha $ involved in the PNMSOR and NMSOR methods is obtained experimentally as the value leading to the fewest iteration steps. We present the iteration steps of the PNMSOR and NMSOR methods under different $ \alpha $ for the test problem in Figure 1. As shown in Figure 1, the PNMSOR method performs better regardless of how the parameter $ \alpha $ is chosen. Any $ \alpha \in (0.9, 1.1) $ can be regarded as a good choice, and we therefore set $ \alpha = 1 $. Numerical results for Example 6.1 with different problem sizes $ n $ and $ \mu = 4 $ are reported in Table 1.
It follows from Table 1 that the PNMSOR method is superior to the NMSOR method with respect to iteration steps and elapsed CPU time. We also present the spectral radius of the iteration matrix, which further verifies the theoretical result of the comparison theorem. In addition, Figure 2 plots the relative residual against the iteration steps for the two methods. It shows that the relative residual of the PNMSOR method decreases faster than that of the NMSOR method at each step, and the final error accuracy is about $ 10^{-15} $. That is to say, even if we tighten the accuracy requirement on the solution, the PNMSOR method retains a clear advantage and converges faster. As a result, the PNMSOR method has better computational efficiency in Example 6.1.
Example 6.2 ([14]). Consider the LCP (1.1) with $ A = \hat{A} + \mu I \in \mathbb{R}^{n \times n} $, in which
with $ S = \mathit{\boldsymbol{tridiag}}(-0.5, 4, -1.5) \in \mathbb{R}^{m \times m} $ and $ n = m^2 $, the vector
For this example, the matrix $ A $ is a strictly diagonally dominant nonsymmetric positive definite $ H_+ $-matrix when $ \mu > 0 $, so the LCP (1.1) has a unique solution. We present the iteration steps of the PNMSOR and NMSOR methods under different $ \alpha $ and the residual comparison for the test problem in Figures 3 and 4, respectively. The experimentally optimal parameter $ \alpha $ lies in $ (0.85, 1.2) $, and we choose $ \alpha = 1 $. Numerical results for Example 6.2 with different problem sizes $ n $ and $ \mu = 4 $ are reported in Table 2.
It follows from Table 2 that the performance of the PNMSOR method is much more competitive than the NMSOR method in terms of computing efficiency, especially for large and sparse problems.
Example 6.3 (Black-Scholes American option pricing). Consider the American option pricing problem introduced in [50]. Let $ V(S, t) $ and $ G(S, t) $ represent the value of an American option and the given payoff function of this option, respectively. By a standard no-arbitrage argument, $ V(S, t) $ must satisfy the following complementarity conditions
This model can be further reformulated into the following inequality system
which satisfies $ u(x, 0) = g(x, 0) $, $ 0 \leqslant t \leqslant \frac{1}{2} \sigma^2 T $ and $ {\lim\limits_{x \to \pm \infty}} u(x, t) = {\lim\limits_{x \to \pm \infty}} g(x, t) $. Here, we restrict $ x \in [a, b] $ and choose the values of $ a $ and $ b $ following the approach in [50]. Discretizing (6.1) with a forward difference scheme in time $ t $ and an implicit difference scheme in the price variable $ x $, one obtains
Utilizing the transformation $ z = u-g, \; q = Ag-b $, (6.2) can be rewritten as the LCP (1.1) [50].
Following the parameter setting in [50], we take
where $ \lambda = \frac{dt}{(dx)^2} $, $ dt = \frac{ \frac{1}{2} \sigma^2 T }{m} $ denotes the time step and $ dx = \frac{b-a}{m} $ denotes the price step. Different choices of the parameter $ \theta $, i.e., $ \theta = 0, \frac{1}{2}, 1 $, yield different schemes. Here, we choose $ \theta = 1 $, which gives a backward difference scheme. Evidently, the matrix $ A $ is an $ H_+ $-matrix as well. After that, let $ q = Ag-b $, where $ g = 0.5 z^* $, $ b = Az^* - v^* $, the exact solution $ z^* = (1, 0, 1, 0, \cdots)^{\top} \in \mathbb{R}^{n-1} $, $ v^* = (0, 1, 0, 1, \cdots)^{\top} \in \mathbb{R}^{n-1} $, and the initial value $ z^0 = (1, 1, 1, 1, \cdots)^{\top} \in \mathbb{R}^{n-1} $ in this experiment.
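The test data can be assembled as in the following sketch. The construction of $ q $ from $ z^* $, $ v^* $, and $ g $ follows the text; the tridiagonal stencil $ \mathrm{tridiag}(-\lambda\theta,\, 1+2\lambda\theta,\, -\lambda\theta) $ is only our assumption for a $ \theta $-scheme applied to the transformed heat equation, and the exact matrix should be taken from [50]. The interval endpoints $ a $, $ b $ below are placeholders.

```python
import numpy as np
from scipy.sparse import diags

def build_option_lcp(n, sigma=0.2, T=0.5, a=-2.0, b=2.0, theta=1.0):
    """Assemble (A, q) for the American-option LCP test.
    Stencil tridiag(-lam*theta, 1+2*lam*theta, -lam*theta) is an
    assumed theta-scheme; see [50] for the exact discretization."""
    m = n
    dt = 0.5 * sigma**2 * T / m          # time step
    dx = (b - a) / m                     # price step
    lam = dt / dx**2
    e = np.ones(n - 2)
    A = diags([-lam * theta * e,
               (1 + 2 * lam * theta) * np.ones(n - 1),
               -lam * theta * e], [-1, 0, 1]).tocsr()
    z_star = np.tile([1.0, 0.0], n)[: n - 1]   # (1, 0, 1, 0, ...)
    v_star = 1.0 - z_star                      # (0, 1, 0, 1, ...)
    g = 0.5 * z_star
    b_vec = A @ z_star - v_star
    q = A @ g - b_vec                          # q = Ag - b, as in the text
    return A, q, z_star, v_star
```

Note that in the transformed variable the LCP (1.1) with this $ q $ is solved exactly by $ z = z^* - g = 0.5 z^* $ with $ v = v^* $, which provides a convenient correctness check for any solver.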
In Tables 3–6, we compare the efficiency of the PNMSOR and NMSOR methods for solving the LCP (1.1) arising from American option pricing, and show how different volatilities ($ \sigma = 0.2, 0.6 $) and different maturities ($ T = 0.5, 5 $) affect the solving efficiency. The parameters follow the reference [50] and are specified in the tables below. After a simple calculation of the vector $ q $, the preconditioner $ P $ is chosen as in Example 6.1 and $ \alpha = 1 $.
As shown in Table 3, for small volatility and short maturity the PNMSOR method roughly halves the CPU time required by the NMSOR method, and its overall elapsed CPU time is the shortest among the four settings. For large volatility and short maturity (Table 4), the PNMSOR method again solves the problem very quickly, although the overall elapsed CPU time is longer than in the small-volatility case. Table 5 shows that, while the elapsed CPU time increases, the maturity has a smaller influence on the solving efficiency than the volatility, and the performance of our method remains excellent. From Table 6, the case with large volatility and long maturity consumes the longest solution time, but the PNMSOR method retains its superiority. In higher dimensions the NMSOR method cannot reach sufficient accuracy within $ 500 $ steps, whereas the PNMSOR method still converges rapidly. Experimentally, for size ($ 1600 $, $ 3200 $) the NMSOR method attains a precision of $ 3.6375\times10^{-8} $ at step $ 516 $ in $ 55.0822 $ seconds, and for size ($ 3200 $, $ 6400 $) it reaches $ 3.9548\times10^{-7} $ at step $ 1011 $ in $ 432.2135 $ seconds; that is, the NMSOR method is convergent, but it fails to reach sufficient accuracy within the preset maximum number of iteration steps. This comparison illustrates the superior performance of the PNMSOR method even more vividly.
Example 6.4 (Continuous optimal control problem). Consider the quasi-variational inequality problem (QIP) from the continuous optimal control problem, which was proposed in [49]: find $ z \in K(z) $ such that
where $ K(z) = \phi(z) + K $, $ K $ is a positive cone in $ \mathbb{R}^n $, $ \phi(z) $ and $ F(z) $ represent the implicit obstacle function and the mapping from $ \mathbb{R}^n $ to itself, respectively.
The QIP (6.3) can be simplified to the LCP (1.1) by setting $ F(z) = q $ and $ v = 2z $. In Example 6.4, let $ A = \hat{A} + 4I \in \mathbb{R}^{n \times n} $, where
with $ S = \mathit{\boldsymbol{tridiag}}(-1, 4, -1) \in \mathbb{R}^{m \times m} $ and $ n = m^2 $, which may better reflect the conditions arising in applications. Clearly, this problem has a unique solution for any $ q \in \mathbb{R}^n $. The vector $ q $ is the same as in Example 6.1 and $ \alpha = 1 $. Numerical results for this example are reported in Table 7, from which we find that the PNMSOR method is superior to the NMSOR method in terms of CPU time.
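As a sanity check on the unique-solvability claim, the sketch below builds $ A = \hat{A} + 4I $ (again assuming the block structure $ \hat{A} = \mathrm{Tridiag}(-I, S, -I) $, as in Example 6.2; the true structure is given by the displayed equation) and verifies that $ A $ is an $ H_+ $-matrix. The test uses the standard characterization that a Z-matrix $ \langle A \rangle $ admitting $ x > 0 $ with $ \langle A \rangle x > 0 $ is a nonsingular M-matrix; both helper names are ours.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def build_example64_matrix(m):
    """A = A_hat + 4I with S = tridiag(-1, 4, -1); the block structure
    Tridiag(-I, S, -I) is assumed, matching Example 6.2."""
    e = np.ones(m - 1)
    S = diags([-e, 4.0 * np.ones(m), -e], [-1, 0, 1])
    T = diags([-e, np.zeros(m), -e], [-1, 0, 1])
    A_hat = kron(identity(m), S) + kron(T, identity(m))
    return (A_hat + 4.0 * identity(m * m)).tocsr()

def is_H_plus(A):
    """H_+ check: positive diagonal, and the comparison matrix <A>
    (|a_ii| on the diagonal, -|a_ij| off it) is a nonsingular M-matrix,
    verified by solving <A> x = e and checking x > 0."""
    d = A.diagonal()
    if (d <= 0).any():
        return False
    comp = -abs(A)          # off-diagonal entries become -|a_ij|
    comp.setdiag(d)         # restore the positive diagonal |a_ii|
    x = spsolve(comp.tocsc(), np.ones(A.shape[0]))
    return bool((x > 0).all())

print(is_H_plus(build_example64_matrix(4)))  # True
```

Here $ \langle A \rangle $ is strictly diagonally dominant with positive diagonal, so the solve returns a strictly positive vector and the check succeeds.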
On the whole, these numerical results show that the PNMMS iteration method for solving the LCP (1.1) is considerably more competitive than the NMMS iteration method.
7.
Conclusions
In this paper, a class of preconditioned new modulus-based matrix splitting (PNMMS) methods with a generalized preconditioner is developed to solve the LCP (1.1). The convergence analysis of the PNMMS method is established when $ A $ is an $ H_+ $-matrix. In particular, we provide a comparison theorem between the PNMMS and NMMS iteration methods, which shows that the PNMMS method improves the convergence rate of the original NMMS method for solving the LCP (1.1). Numerical experiments are reported to demonstrate the efficiency of the proposed method.
Acknowledgments
The first author would like to thank Dr. Cairong Chen from Fujian Normal University for his helpful discussions and guidance. The authors are grateful to the reviewers and editor for their helpful comments and suggestions which have helped to improve the paper. This work is supported by the Social Science Foundation of Liaoning Province (L22BGL028).
Conflict of interest
The authors declare there are no conflicts of interest.