1. Introduction
Consider the large and sparse saddle-point problem 𝒜x = b of the form
𝒜x ≡ [ A   B ; −B^T   0 ] [ u ; v ] = [ f ; −g ] ≡ b,   (1.1)
where A ∈ R^{n×n} is nonsymmetric positive definite, that is, its symmetric part H = (1/2)(A + A^T) is positive definite, B ∈ R^{n×m} (n ≫ m) is a full column rank matrix, f ∈ R^n and g ∈ R^m are given vectors, and B^T denotes the transpose of B. Under these assumptions, the nonsingularity of 𝒜 ensures the existence and uniqueness of the solution of the saddle point problem (1.1). When the matrices A and B are large and sparse, linear systems of the form (1.1) arise in many problems of scientific computing and engineering, including computational fluid dynamics, optimization, constrained least squares problems, generalized least squares problems, incompressible flow problems, mixed finite element approximations of elliptic PDEs, and so forth. See [1] and the references therein for more examples and additional information. We also refer to [2] for periodic Sylvester matrix equations and generalized coupled Sylvester matrix equations.
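To fix ideas, the following Python sketch assembles a small dense instance of (1.1). The matrices are random stand-ins chosen only for illustration (an assumption, not the discretizations used in Section 5); the later sketches in this paper reuse the names A, B, H, S, calA and b defined here.

```python
import numpy as np

# Toy instance of the saddle point system (1.1); random stand-ins,
# not the Oseen or convection-diffusion matrices of Section 5.
n, m = 8, 3
rng = np.random.default_rng(0)

A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsymmetric, positive definite symmetric part
H = 0.5 * (A + A.T)                               # symmetric part
S = 0.5 * (A - A.T)                               # skew-symmetric part
B = rng.standard_normal((n, m))                   # full column rank (almost surely)

# calA = [A, B; -B^T, 0],  b = [f; -g]
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])
f, g = rng.standard_normal(n), rng.standard_normal(m)
b = np.concatenate([f, -g])

assert np.all(np.linalg.eigvalsh(H) > 0)   # H is positive definite
assert np.linalg.matrix_rank(B) == m       # B has full column rank
```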
Since the coefficient matrix 𝒜 of (1.1) is usually large, sparse and extremely ill-conditioned, iteration methods are more attractive than direct methods in terms of storage requirements and computing time. As is well known, a clustered spectrum of the preconditioned matrix often results in rapid convergence of Krylov subspace methods. Therefore, in light of the special structure of the saddle point problem (1.1), and in order to achieve a rapid convergence rate and improve computational efficiency, many effective and practical preconditioning techniques have been investigated in the literature, such as preconditioned Krylov subspace methods [3], SOR-type methods [4,5], Uzawa-type methods [6,7], HSS-type methods and their accelerated variants [8,9,10], and shift-splitting (SS) and generalized shift-splitting (GSS) iteration methods [11,12,13].
In the past few years, many SS- and GSS-type iteration methods have been proposed that greatly improve the convergence rate through appropriate choices of the iteration parameters. Based on shifting and on clustering the spectrum of the coefficient matrix, Bai and Zhang [12] presented a regularized conjugate gradient method for symmetric positive definite systems. In [13], Bai et al. investigated a shift-splitting iteration method for solving large and sparse non-Hermitian positive definite systems. A modification of the SS preconditioner (i.e., the GSS preconditioner) was established in [14,15] for a symmetric positive definite (1,1)-block sub-matrix A and positive iteration parameters α and β. Here, the GSS iteration method is defined as
P_GSS x^{(k+1)} = (P_GSS − 𝒜) x^{(k)} + b,  k = 0, 1, 2, …,   (1.2)
where the related GSS preconditioner is given by
P_GSS = (1/2) [ αI_n + A   B ; −B^T   βI_m ],   (1.3)
with I_q denoting the identity matrix of size q. In particular, when α = β, the GSS iteration method (1.2) reduces to the SS iteration method [16], whose SS preconditioner is given by
P_SS = (1/2) [ αI_n + A   B ; −B^T   αI_m ].   (1.4)
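As a small illustration, the sketch below builds P_GSS, and hence P_SS as the special case α = β, for the dense toy problem above; it simply transcribes the block forms (1.3) and (1.4).

```python
import numpy as np

def gss_preconditioner(A, B, alpha, beta):
    """P_GSS = (1/2) [alpha*I_n + A, B; -B^T, beta*I_m] as in (1.3)."""
    n, m = B.shape
    return 0.5 * np.block([
        [alpha * np.eye(n) + A, B],
        [-B.T, beta * np.eye(m)],
    ])

P_GSS = gss_preconditioner(A, B, alpha=1.0, beta=0.5)
P_SS = gss_preconditioner(A, B, alpha=1.0, beta=1.0)   # SS: alpha == beta, as in (1.4)
```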
For solving the saddle point problem (1.1), Cao et al. [17] applied the GSS iteration method (1.2) to the saddle point system (1.1) and proposed a deteriorated shift-splitting (DSS) preconditioner by removing the shift parameter α from the (1,1)-block of the GSS preconditioner. Based on the symmetric and skew-symmetric splitting A = H + S, with S = (1/2)(A − A^T), of the (1,1)-block sub-matrix A, Zhou et al. [18] proposed a modified shift-splitting (MSS) preconditioner of the form
P_MSS = (1/2) [ αI_n + 2H   2B ; −2B^T   αI_m ],   (1.5)
and gave a sufficient condition on the iteration parameter α for the convergence of the MSS iteration method. In [19], Huang et al. proposed a generalized modified shift-splitting (GMSS) iteration method by introducing two iteration parameters α and β instead of the single parameter α, which further improves the convergence rate of the MSS iteration method; the corresponding GMSS preconditioner is given by
P_GMSS = (1/2) [ αI_n + 2H   2B ; −2B^T   βI_m ].   (1.6)
The aim of this paper is to propose an improvement of the SS- and GSS-type iteration methods and to investigate the SFHSS preconditioner for Krylov subspace methods, such as the GMRES iteration method [20], which significantly accelerates the convergence of the related iterative method. We prove that, when the iteration parameters α and β satisfy certain conditions, the SFHSS iteration method converges to the unique solution of the saddle point system (1.1). The rest of this paper is organized as follows. In Section 2, we present the SFHSS iteration method and its implementation. In Section 3, we analyze the convergence of the SFHSS iteration method for nonsingular saddle point problems with a nonsymmetric positive definite (1,1)-block. In Section 4, we discuss the algebraic properties of the resulting SFHSS-preconditioned matrix. Finally, in Section 5, numerical experiments arising from a finite element discretization of the Oseen problem are presented to confirm the theoretical analysis and demonstrate the feasibility of the SFHSS iteration method.
2. The SFHSS iteration method and its implementations
The main purpose of this section is to introduce the new GSS-type iteration method (i.e., the SFHSS iteration method) for the nonsingular saddle-point problem (1.1). Based on the local Hermitian and skew-Hermitian splitting of the non-Hermitian positive definite (1,1)-block sub-matrix, Yang and Wu [6] presented the Uzawa-HSS iteration method. Let A = H + S be the symmetric and skew-symmetric splitting of the (1,1)-block of the coefficient matrix 𝒜 defined in (1.1), with S = (1/2)(A − A^T). Following the idea of the Uzawa-HSS iteration method, we split the coefficient matrix 𝒜 of the linear system (1.1) as 𝒜 = P_SFHSS − N_SFHSS,
where the shift-HSS preconditioner P_SFHSS is given by
P_SFHSS = (1/4) [ (1/α)(αI_n + 2H)(αI_n + 2S)   2B ; −2B^T   βB^TB ],   (2.1)
and the matrix N_SFHSS is defined as
N_SFHSS = (1/4) [ (1/α)(αI_n − 2H)(αI_n − 2S)   −2B ; 2B^T   βB^TB ].
On the basis of the above splitting, we derive the following SFHSS iteration scheme:
P_SFHSS x^{(k+1)} = N_SFHSS x^{(k)} + b,  k = 0, 1, 2, …,   (2.2)
and the generalized shift-HSS iteration matrix τ(α,β) is given by
τ(α,β) = P_SFHSS^{-1} N_SFHSS.   (2.3)
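The following sketch forms P_SFHSS and N_SFHSS explicitly for the toy problem of Section 1 and checks that the splitting (2.1) indeed recovers 𝒜; a spectral radius ρ(τ(α,β)) < 1 signals convergence of (2.2).

```python
import numpy as np

def sfhss_splitting(A, B, alpha, beta):
    """P_SFHSS and N_SFHSS of the splitting calA = P - N, as in (2.1)."""
    n, m = B.shape
    H, S = 0.5 * (A + A.T), 0.5 * (A - A.T)
    I = np.eye(n)
    P = 0.25 * np.block([
        [(alpha * I + 2 * H) @ (alpha * I + 2 * S) / alpha, 2 * B],
        [-2 * B.T, beta * (B.T @ B)],
    ])
    N = 0.25 * np.block([
        [(alpha * I - 2 * H) @ (alpha * I - 2 * S) / alpha, -2 * B],
        [2 * B.T, beta * (B.T @ B)],
    ])
    return P, N

P, N = sfhss_splitting(A, B, alpha=1.0, beta=0.5)
assert np.allclose(P - N, calA)                 # the splitting recovers calA
tau = np.linalg.solve(P, N)                     # iteration matrix tau(alpha, beta)
print("rho(tau) =", max(abs(np.linalg.eigvals(tau))))
```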
More precisely, multiplying both sides of (2.2) from the left by appropriate block matrices, we can easily obtain explicit formulas for u^{(k+1)} and v^{(k+1)}. Therefore, the iteration scheme (2.2) naturally leads to the following algorithmic description of the SFHSS iteration method:
Method 2.1. (The SFHSS method) Given an initial guess x^{(0)} = [u^{(0)T}, v^{(0)T}]^T, for k = 0, 1, 2, …, until the iteration sequence {[u^{(k)T}, v^{(k)T}]^T} converges:
(i) Compute u^{(k+1)} from
(ii) Compute v^{(k+1)} from
Using the iteration scheme (2.2) again, the SFHSS iteration method can be written in the fixed-point form
x^{(k+1)} = τ(α,β) x^{(k)} + P_SFHSS^{-1} b,  k = 0, 1, 2, ….   (2.4)
It is easy to see that at each step of the SFHSS iteration method (2.4), or of a preconditioned Krylov subspace method such as GMRES, we need to solve a linear system P_SFHSS z = r, i.e.,
(1/4) [ (1/α)(αI_n + 2H)(αI_n + 2S)   2B ; −2B^T   βB^TB ] [ z_1 ; z_2 ] = [ r_1 ; r_2 ],   (2.5)
where z_1, r_1 ∈ R^n and z_2, r_2 ∈ R^m. A direct calculation confirms that
From the decomposition of P_SFHSS in (2.6), it is obvious that
Then we have the following algorithmic implementation of the SFHSS iteration method.
Algorithm 2.1. For a given vector r = (r_1^T, r_2^T)^T, the vector z = (z_1^T, z_2^T)^T can be computed from (2.7) by the following steps:
(1) solve B^TB t = (1/β) r_2;
(2) solve Φ_2 z_1 = 4(r_1 − 2Bt), where Φ_2 = (1/α)(αI_n + 2H)(αI_n + 2S) + 4B(βB^TB)^{-1}B^T;
(3) solve B^TB p = (2/β) B^T z_1;
(4) compute z_2 = p + 4t.
Since B^TB is symmetric positive definite and (αI_n + 2H)(αI_n + 2S) + 4B(βB^TB)^{-1}B^T is nonsymmetric positive definite, in practical implementations the sub-systems with coefficient matrix B^TB can be solved by the Cholesky factorization, while the system [(1/α)(αI_n + 2H)(αI_n + 2S) + 4B(βB^TB)^{-1}B^T] z = r arising in the SFHSS iteration method can be solved inexactly by a sparse ILU factorization or by the GMRES iteration method.
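A direct Python transcription of Algorithm 2.1 reads as follows; dense Cholesky and LU solves stand in for the Cholesky, ILU and GMRES solvers suggested above (a sketch, not an optimized implementation), and the final assertion checks the result against the explicit P_SFHSS from the previous sketch.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sfhss_apply_inverse(H, S, B, alpha, beta, r1, r2):
    """Solve P_SFHSS z = r by the four steps of Algorithm 2.1."""
    n, m = B.shape
    BtB = cho_factor(B.T @ B)                 # B^T B is SPD: factor once
    # (1) solve B^T B t = (1/beta) r2
    t = cho_solve(BtB, r2 / beta)
    # (2) solve Phi_2 z1 = 4 (r1 - 2 B t), with
    #     Phi_2 = (1/alpha)(alpha I + 2H)(alpha I + 2S) + 4 B (beta B^T B)^{-1} B^T
    I = np.eye(n)
    Phi2 = ((alpha * I + 2 * H) @ (alpha * I + 2 * S) / alpha
            + 4 * B @ cho_solve(BtB, B.T) / beta)
    z1 = np.linalg.solve(Phi2, 4 * (r1 - 2 * B @ t))
    # (3) solve B^T B p = (2/beta) B^T z1
    p = cho_solve(BtB, 2 * (B.T @ z1) / beta)
    # (4) z2 = p + 4 t
    return z1, p + 4 * t

# Consistency check against the explicit P_SFHSS built earlier:
r1, r2 = np.ones(B.shape[0]), np.ones(B.shape[1])
z1, z2 = sfhss_apply_inverse(H, S, B, 1.0, 0.5, r1, r2)
assert np.allclose(P @ np.concatenate([z1, z2]), np.concatenate([r1, r2]))
```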
3. Convergence analysis of the SFHSS method
Consider the solution of a linear system of the form Ax = b with A ∈ R^{(n+m)×(n+m)}. If M is nonsingular, then the representation A = M − N is called a splitting. Let T = M^{-1}N denote the iteration matrix and set c = M^{-1}b; then the stationary iteration scheme for Ax = b is defined as
x^{(k+1)} = T x^{(k)} + c,  k = 0, 1, 2, ….   (3.1)
As is well known, if the spectral radius ρ(T) of the iteration matrix T is less than one, then the stationary iteration scheme (3.1) converges to the unique solution of Ax = b for any choice of the initial vector x^{(0)}.
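For concreteness, a generic stationary iteration driver might look as follows (a minimal sketch; convergence is guaranteed exactly when ρ(M^{-1}N) < 1, as the Jacobi example with a strictly diagonally dominant matrix illustrates).

```python
import numpy as np

def stationary_solve(M, N, b, x0, tol=1e-10, maxit=1000):
    """Iterate x^{(k+1)} = M^{-1}(N x^{(k)} + b) for the splitting A = M - N."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, maxit + 1):
        x_new = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(x_new - x) <= tol * (1 + np.linalg.norm(x_new)):
            return x_new, k
        x = x_new
    return x, maxit

# Example: Jacobi splitting M = diag(A0), N = M - A0 for a strictly
# diagonally dominant A0, for which rho(M^{-1}N) < 1 is guaranteed.
A0 = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
MJ = np.diag(np.diag(A0))
x, its = stationary_solve(MJ, MJ - A0, np.ones(3), np.zeros(3))
print(its, np.linalg.norm(A0 @ x - np.ones(3)))
```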
Let λ be an eigenvalue of the iteration matrix τ(α,β) in (2.3) and x = (u^*, v^*)^* be the corresponding eigenvector. To study the convergence of the generalized shift-HSS iteration method (2.4), we consider the following generalized eigenvalue problem:
By straightforward computation, the generalized eigenvalue problem (3.2) is equivalent to the following form
For convenience, we denote
where a>0, e≥0,
In what follows, we present several lemmas needed to establish the convergence of the SFHSS iteration method (2.4).
Lemma 3.1. ([21]). If S is a skew-Hermitian matrix, then iS is a Hermitian matrix and u^*Su is a purely imaginary number or zero for all u ∈ C^n.
Lemma 3.2. ([22]). Both roots of the complex quadratic equation λ² − Φλ + Ψ = 0 have modulus less than one if and only if |Φ − Φ̄Ψ| + |Ψ|² < 1, where Φ̄ denotes the complex conjugate of Φ.
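Lemma 3.2 is easy to spot-check numerically; the sketch below compares the stated criterion against the root moduli of random complex quadratics (a randomized sanity check, not a proof).

```python
import numpy as np

# For lambda^2 - Phi*lambda + Psi = 0, both roots lie strictly inside the
# unit circle exactly when |Phi - conj(Phi)*Psi| + |Psi|^2 < 1 (Lemma 3.2).
rng = np.random.default_rng(1)
for _ in range(10_000):
    Phi = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    Psi = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    criterion = abs(Phi - Phi.conjugate() * Psi) + abs(Psi) ** 2 < 1
    inside = all(abs(z) < 1 for z in np.roots([1, -Phi, Psi]))
    assert criterion == inside
```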
Lemma 3.3. Let A be nonsymmetric positive definite and let B have full column rank. Assume that λ is an eigenvalue of the iteration matrix τ(α,β) defined in (2.3) with β > 0. If α > 0, then λ ≠ 1; if, in addition, α² > 4|c|_max, with |c|_max denoting the maximum of |c|, then λ ≠ −1.
Proof. We follow the spirit of the proof in [17]. If λ = 1, then it is straightforward to show that Eq (3.2) yields
Since A is nonsymmetric positive definite and B has full column rank, it follows that u = 0 and v = 0. This is a contradiction, as (u^*, v^*)^* is an eigenvector.
If λ = −1, then Eq (3.2) reduces to
If α² > 4|c|_max, then α²I_n + 4HS is nonsingular. Therefore, it follows that u = 0 and v = 0, which contradicts (u^*, v^*)^* being an eigenvector.
Thus, we complete the proof.
Lemma 3.4. Let the conditions of Lemma 3.3 be satisfied. Assume that λ is an eigenvalue of the iteration matrix τ(α,β) defined in (2.3) and that (u^*, v^*)^* is the corresponding eigenvector with u ∈ C^n and v ∈ C^m. If 0 ≠ u ∈ ℵ(B^T), then |λ| < 1.
Proof. We first verify that u ≠ 0. Indeed, if u = 0, then it follows from the second equation of (3.2) that α(λ−1)B^TBv = 0. According to Lemma 3.3, λ ≠ 1, so B^TBv = 0. As B has full column rank, B^TB is nonsingular, and therefore v = 0. This is a contradiction, since (u^*, v^*)^* is an eigenvector; hence u ≠ 0.
We now verify that |λ| < 1. Assume u ∈ ℵ(B^T) with ‖u‖_2 = 1; from the second equation of (3.2), we get v = 0. Following [8, Theorem 2.2] and multiplying the first equation of (3.2) on the left by u^*, we obtain
where λ_i(H) denotes the ith eigenvalue of the symmetric positive definite matrix H.
Therefore, the proof is completed.
Theorem 3.1. Let the conditions of Lemma 3.3 be satisfied, and assume that λ is an eigenvalue of the iteration matrix τ(α,β) defined in (2.3). Suppose the positive iteration parameters α and β satisfy the following conditions:
if ac + bd ≥ 0, then
and if ac + bd < 0, then
Then the iteration method (2.4) converges to the unique solution of the nonsymmetric saddle point problem (1.1), i.e., ρ(τ(α,β)) < 1.
Proof. Combining Lemmas 3.3 and 3.4, to complete the proof we only need to consider the case B^Tu ≠ 0. Suppose u ∉ ℵ(B^T); then the second equation of (3.2) yields
Substituting the relation (3.5) into the first equation of (3.2), we have
Multiplying Eq (3.6) on the left by u^*, after straightforward calculations we obtain
Following (3.4), a quadratic equation in λ is derived from Eq (3.7). After some algebra, it is straightforward to show that
If (α² + 2αa + 4c)β + 4αe + 2(αb + 2d)βi = 0, then (α² + 2αa + 4c)β + 4αe = 0 and αb + 2d = 0. Therefore, Eq (3.8) yields
By Lemma 3.3, λ ≠ ±1 since α² > 4|c|_max. Noting that a > 0 and e ≥ 0, we have
In what follows, we consider the case (α² + 2αa + 4c)β + 4αe + 2(αb + 2d)βi ≠ 0. From Lemma 3.2, we know that |λ| < 1 if and only if |Φ − Φ̄Ψ| + |Ψ|² < 1. For convenience, we denote Φ and Ψ by
and
After straightforward computation, we have
where Γ = (4αae − α²βa − 4βac − 4βbd)² and Υ = (α²β − 2αβa + 4βc + 4αe)².
The following inequality
holds true for this case
This implies
Following the inequalities (3.9) and (3.10), if ac+bd≥0, we have
and
If ac+bd<0, then we get
and
Making use of α² > 4|c|_max, we complete the proof.
For convenience, we denote the maxima of |bd|, |d| and e by |bd|_max, |d|_max and e_max, respectively, and the minima of a, b, c and d by a_min, b_min, c_min and d_min, respectively. According to Theorem 3.1, we have the following sufficient conditions for the convergence of the SFHSS iteration method (2.4).
Corollary 3.1. Let the conditions of Theorem 3.1 be satisfied, and let the positive iteration parameters α and β satisfy the following:
if ac + bd ≥ 0, then
and if ac + bd < 0, then
and
Then the generalized shift-HSS iteration method (2.4) converges to the unique solution of the nonsymmetric saddle point problem (1.1).
Proof. According to Theorem 3.1, if ac+bd≥0, then
and if ac+bd<0, then
and
Therefore, we complete the proof.
4. The spectral properties of the preconditioned matrix
The main objective of this section is to derive inclusion regions for the spectrum of P_SFHSS^{-1}𝒜 for the saddle point problem (1.1).
In the following, to derive bounds on the eigenvalues of the preconditioned saddle point matrix P_SFHSS^{-1}𝒜, we study the eigenvalue problem P_SFHSS^{-1}𝒜x = ηx, that is,
where η denotes an eigenvalue of the preconditioned matrix P_SFHSS^{-1}𝒜 with corresponding eigenvector x = (u^*, v^*)^*.
For simplicity, we denote v^*B^TBv by σ² and the null space of B^T by ℵ(B^T). Meanwhile, the matrix R_SFHSS is defined by
R_SFHSS = P_SFHSS + N_SFHSS = [ (α/2)I_n + (2/α)HS   0 ; 0   (β/2)B^TB ],
then it is easy to see that
𝒜 = 2P_SFHSS − R_SFHSS.   (4.2)
After some algebra, we can rewrite the generalized eigenvalue problem (4.1) as
By Lemma 3.3, the eigenvalue λ = 1 − η of τ(α,β) satisfies λ ≠ −1 when α² > 4|c|_max, so η = 1 − λ ≠ 2. Hence 1 − η/2 ≠ 0, and we set
For convenience, we use ℜ(θ) and ℑ(θ) to denote the real and imaginary parts of the eigenvalue θ, respectively.
We can explicitly write the equivalent eigenproblem 𝒜x = θR_SFHSSx as
Equivalently, Eq (4.4) can be written as
It is obvious that u ≠ 0; otherwise the second equation of (4.5) would imply θ = 0 or v = 0, and by Lemma 3.3 neither can hold. So u ≠ 0. If v = 0 and α² > 4|c|_max, then
Theorem 4.1. Let the conditions of Lemma 3.3 be satisfied. Assume that (η, x) is an eigenpair of (4.1) with x = (u^*, v^*)^* and ‖u‖_2 = 1. Then for all α, β > 0, the eigenvalue η can be written as η = 2θ/(θ + 2), where θ satisfies the following:
(i) If v = 0, then u ∈ ℵ(B^T) and
with
(ii) If v ≠ 0, then u ∉ ℵ(B^T) and
with
and
Proof. To obtain the inequalities (4.6) and (4.8), we consider two cases: (i) v = 0; (ii) v ≠ 0.
We first verify (i). If v = 0, then from (4.5) it is easy to see that
Multiplying both sides of (4.10) on the left by u^*, it follows from (3.4) that
Consequently, we obtain (4.7). After some algebra, it is straightforward to show that
By straightforward calculation, we can get the inequality (4.6).
We next verify (ii). If v ≠ 0, then multiplying on the left by u^*, the first equation of (4.5) yields
Multiplying the conjugate transpose of the second equation of (4.5) by v^*, we get
Substituting (4.14) into (4.13), we obtain
With the above notation, it follows that
It is obvious that
and
Through direct calculations, we get (4.9) and
where
and
Since α²a + 4ac + 4bd > 0, we have
Consider
or
By straightforward computation, we can find that
Additionally, if a ≥ b, then using α²a + 4ac + 4bd > 0 again, we have
If a < b, then we obtain
Hence, combining (4.20) and (4.21), we have
Hence, we complete the proof.
Remark 4.1. Following Theorem 4.1, the eigenvalue θ satisfies one of the two cases:
(i) If v = 0 and α²b + 4bc = 4ad, then u ∈ ℵ(B^T) and the real eigenvalue
is bounded by
(ii) If v ≠ 0 and α²b + 4bc + αβbσ² = 4ad, then u ∉ ℵ(B^T) and the real eigenvalue
is bounded as
Theorem 4.2. Let the conditions of Theorem 4.1 be satisfied. For any iteration parameters α, β > 0, the eigenvalue η of the SFHSS preconditioned matrix P_SFHSS^{-1}𝒜 satisfies
Proof. For any iteration parameters α, β > 0, since |θ| = √(ℜ²(θ) + ℑ²(θ)), we get 0 < ℜ(θ) ≤ |θ|. From η = 2θ/(θ + 2), it is easy to see that
Hence, we have
Together with the monotonicity of 2|θ|/(|θ| + 2) and 2|θ|/√(|θ|² + 4) with respect to |θ|, this completes the proof of Theorem 4.2.
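The monotonicity invoked in the last step is immediate to verify numerically, as the short sketch below illustrates.

```python
import numpy as np

# Both maps t -> 2t/(t+2) and t -> 2t/sqrt(t^2+4) are increasing in
# t = |theta|, which is the monotonicity used in the proof of Theorem 4.2.
t = np.linspace(1e-3, 100.0, 2000)
assert np.all(np.diff(2 * t / (t + 2)) > 0)
assert np.all(np.diff(2 * t / np.sqrt(t**2 + 4)) > 0)
```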
Combining Theorems 4.1 and 4.2, we obtain the following result.
Remark 4.2. Combining Theorems 4.1 and 4.2, for all α, β > 0, some refined bounds for the eigenvalue η of the SFHSS preconditioned matrix P_SFHSS^{-1}𝒜 are given as follows:
(i) If v = 0, then u ∈ ℵ(B^T) and
where
(ii) If v≠0, then
Remark 4.3. Since η = 2θ/(θ + 2), simple algebra shows that η is real if and only if ℑ(θ) = 0. Following Remark 4.1, the real eigenvalue η satisfies one of the two cases:
(i) If v = 0 and α²b + 4bc = 4ad, then u ∈ ℵ(B^T) and the real eigenvalue
meets the following inequality
(ii) If v ≠ 0 and α²b + 4bc + αβbσ² = 4ad, then u ∉ ℵ(B^T) and the real eigenvalue
is bounded by
In the following, based on the above discussion, we further discuss the algebraic properties of the preconditioned matrix P_{α,0}^{-1}𝒜. For simplicity, we denote by P_{α,0} and N_{α,0} the matrices P_SFHSS and N_SFHSS with α² > 4|c|_max and β = 0, so that P_{α,0} is the reduced form of (2.1) with β = 0. For more details on the algebraic properties of preconditioned matrices, we refer to [23,24,25].
Theorem 4.3. Let A be nonsymmetric positive definite and let B have full column rank. Then the preconditioned saddle point matrix P_{α,0}^{-1}𝒜 has an eigenvalue η = 2 with algebraic multiplicity at least m, and the remaining eigenvalues are
η_j = 4α[λ_j(H) + λ_j(S)] / ([α + 2λ_j(H)][α + 2λ_j(S)]),  j = 1, 2, …, n,
where λ_j(H) and λ_j(S) denote the jth eigenvalues of H and S, respectively.
Proof. If β = 0, then from the second equation of (3.2) we obtain
so either λ + 1 = 0 or B^Tu = 0. If λ = −1, i.e., η = 1 − λ = 2, then the first equation of (3.2) gives
Since α²I_n + 4HS is nonsingular when α² > 4|c|_max, the related eigenvectors have the form [0, v_l^T]^T (l = 1, 2, …, m). If B^Tu = 0, then using the first equation of (3.2) again, we obtain
and therefore
Hence, we complete the proof of Theorem 4.3.
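Numerically, the eigenvalue η = 2 with multiplicity at least m is easy to observe for the toy problem of Section 1; the check below uses the sketch's reconstruction of P_SFHSS with β = 0 and reuses A, B, H, S and calA from the earlier sketches.

```python
import numpy as np

# Check that eta = 2 is an eigenvalue of P_{alpha,0}^{-1} calA with
# algebraic multiplicity at least m.
alpha = 4.0                              # in practice choose alpha^2 > 4|c|_max
n, m = B.shape
I = np.eye(n)
P0 = 0.25 * np.block([
    [(alpha * I + 2 * H) @ (alpha * I + 2 * S) / alpha, 2 * B],
    [-2 * B.T, np.zeros((m, m))],
])
eta = np.linalg.eigvals(np.linalg.solve(P0, calA))
print(np.sum(np.isclose(eta, 2.0)))      # expected: at least m
```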
Remark 4.4. Following Theorem 4.3, the preconditioned matrix P_{α,0}^{-1}𝒜 has m + j (1 ≤ j ≤ n) linearly independent eigenvectors, where
(i) m linearly independent eigenvectors related to the eigenvalue 2 have the form [0, v_l^T]^T (l = 1, 2, …, m);
(ii) j (1 ≤ j ≤ n) linearly independent eigenvectors associated with eigenvalues unequal to 2 have the form [u_s^T, v_s^T]^T (s = 1, 2, …, j) with B^Tu_s = 0.
In what follows, we study the minimal polynomial of the preconditioned matrix P_{α,0}^{-1}𝒜, which is beneficial for Krylov subspace acceleration. To derive an expression for the characteristic polynomial of P_{α,0}^{-1}𝒜, we decompose the preconditioner P_{α,0} once again as
where
It is obvious that
A simple computation reveals that
where
and
Since B^Tu = 0, then for j = 1, 2, …, n we can get
Then the characteristic polynomial of P_{α,0}^{-1}𝒜 is described as follows:
Denote by
It is straightforward to show that Ψ(η) is a polynomial in η of degree n + 1. Then a simple computation reveals that
where Θ = I_n − F^{-1}G + F^{-1}BD.
Since η_j (j = 1, 2, …, n) are the eigenvalues of the matrix Θ, by the Cayley–Hamilton theorem we have ∏_{j=1}^{n}(Θ − η_jI_n) = 0, which leads to Ψ(P_{α,0}^{-1}𝒜) = 0.
The following conclusion is a direct consequence of the above statements, and therefore its proof is omitted.
Theorem 4.4. Under the assumptions of Theorem 4.3, if P_{α,0}^{-1}𝒜 has k (1 ≤ k ≤ n) distinct eigenvalues η_j (1 ≤ j ≤ k) with algebraic multiplicities γ_j satisfying ∑_{j=1}^{k} γ_j = n, then the degree of the minimal polynomial of the preconditioned matrix P_{α,0}^{-1}𝒜 is at most k + 1. Thus, the dimension of the Krylov subspace K(P_{α,0}^{-1}𝒜, b) is at most k + 1.
Next, we restrict our attention to the problem of determining the optimal parameters. The performance of the P_SFHSS preconditioner depends largely on the choices of the parameters α and β. However, selecting the optimal parameters is in our experience a difficult task, so in practical implementations we usually resort to estimation methods.
Taking steps similar to those in [26], we now consider the choice of the parameters α and β in the SFHSS iteration method. To obtain a fast convergence rate of the SFHSS iteration method (2.2) and a clustered eigenvalue distribution of the SFHSS-preconditioned matrix, it is important that the matrix 2P_SFHSS in (4.2) approximate 𝒜 as closely as possible; hence, we may expect R_SFHSS ≈ 0, with R_SFHSS defined as in (4.2), when computing the quasi-optimal iteration parameters α_exp and β_exp.
We begin our analysis by minimizing the following squared Frobenius norm of R_SFHSS:
Θ(α,β) = ‖R_SFHSS‖_F² = (n/4)α² − (4/α²)tr(HS²H) + (β²/4)tr((B^TB)²),
where tr(E) denotes the trace of the matrix E.
Taking partial derivatives of Θ(α,β), we obtain
It is obvious that Θ(α,β) has a minimum when
α_exp = (−16 tr(HS²H)/n)^{1/4}.
For the proof that tr(HS²H) < 0, see [26, Lemma 1]. In addition, in practical implementations, it is a good idea to try values of the iteration parameter β_exp as close to 0 as possible.
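Under the identity R_SFHSS = 2P_SFHSS − 𝒜 used above, the stationarity condition for α gives the closed form sketched below; note that tr(HS²H) = −‖SH‖_F², so the radicand is positive.

```python
import numpy as np

def alpha_quasi_optimal(H, S):
    """alpha_exp = (-16 tr(H S^2 H) / n)^(1/4), the minimizer of
    Theta(alpha, beta) over alpha for the block-diagonal R_SFHSS above."""
    n = H.shape[0]
    return (-16.0 * np.trace(H @ S @ S @ H) / n) ** 0.25

alpha_exp = alpha_quasi_optimal(H, S)    # beta_exp is taken close to 0
print(alpha_exp)
```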
5. Numerical examples
In this section, we present numerical experiments that test the feasibility and robustness of the generalized shift-HSS iteration method for solving the saddle point problem (1.1) arising from Oseen models of incompressible flow. To evaluate the performance of the proposed generalized shift-HSS preconditioner against existing matrix splitting preconditioners, we compare the numerical results of the generalized shift-HSS preconditioner P_SFHSS (2.1) with the GSS preconditioner P_GSS (1.3), the SS preconditioner P_SS (1.4), the MSS preconditioner P_MSS (1.5), the GMSS preconditioner P_GMSS (1.6), the DPSS preconditioner P_DPSS presented in [27,28], and the preconditioner P_IDPSS established in [29], where the preconditioner P_DPSS is defined by
Here, the optimal iteration parameter is given by α_exp = (‖A‖_F + 2‖B‖_F) / (2(n + m)) ([30]). In addition, the preconditioner P_IDPSS is given by
where the optimal iteration parameter is given by α_exp = (‖A‖_F + ‖B‖_F) / (2√n) ([24]).
We use the above preconditioners to accelerate the GMRES iteration method and compare the resulting preconditioned GMRES methods in terms of both the number of iteration steps (denoted by IT) and the elapsed CPU time in seconds (denoted by CPU). In our implementations, we choose the zero vector x^{(0)} = 0 as the initial guess, take the right-hand-side vector b such that the exact solutions u and v are vectors with all entries equal to one, and set the stopping criterion to be the relative residual norm
or the prescribed maximum iteration number k_max = n, where x^{(k)} is the solution at the kth iteration. In actual applications, the iteration parameters α and β for the preconditioner P_SFHSS are chosen as the experimentally found optimal values, which lead to the smallest number of iterations of the preconditioned GMRES method for each choice of the spatial mesh size [18].
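In SciPy terms, wiring the SFHSS preconditioner into restarted GMRES might look as follows. This is a sketch reusing the earlier routines; the 10⁻⁶ tolerance and restart value are assumptions rather than the paper's exact settings, and the rtol keyword requires SciPy ≥ 1.12 (older versions use tol).

```python
import numpy as np
import scipy.sparse.linalg as spla

# Apply P_SFHSS^{-1} through Algorithm 2.1 and pass it to GMRES as M.
n, m = B.shape
def apply_Pinv(r):
    z1, z2 = sfhss_apply_inverse(H, S, B, alpha_exp, 1e-5, r[:n], r[n:])
    return np.concatenate([z1, z2])

M_pre = spla.LinearOperator((n + m, n + m), matvec=apply_Pinv)
x, info = spla.gmres(calA, b, M=M_pre, rtol=1e-6, restart=30, maxiter=1000)
print(info, np.linalg.norm(b - calA @ x) / np.linalg.norm(b))
```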
Example 5.1. [31]. Consider the Oseen equation of the form
where Ω is a bounded domain with suitable boundary conditions, the parameter ν > 0 denotes the viscosity, u represents the velocity vector field, v is the approximation of u from the previous Picard iteration, and p denotes the pressure. The test problem is the classical two-dimensional leaky lid-driven cavity problem. We employ the IFISS software package [32] to discretize the Oseen problem (5.3) with Q2–Q1 mixed finite elements on uniform grids. The generated saddle point system of type (1.1) has a nonsymmetric positive definite (1,1)-block sub-matrix A, which corresponds to a discretization of the convection-diffusion operator L[u] := −ν∆u + (v·∇)u. In our implementation, four values of the viscosity parameter are used, namely ν = 1, 0.1, 0.01, 0.001, and four increasingly fine grids are selected, i.e., 16×16, 32×32, 64×64 and 128×128.
In Tables 1–7, we report IT and CPU for the GMRES iteration method in conjunction with the corresponding preconditioners, with respect to different sizes of the discretization grids and different values of α, β and ν. As these tables show, the generalized shift-HSS method behaves much better than the GMSS, MSS, GSS, SS, DPSS and IDPSS iteration methods; especially as the problem size increases, the convergence of GMRES with the generalized shift-HSS preconditioner is much faster than that of GMRES with the GMSS, MSS, GSS, SS, DPSS and IDPSS preconditioners. Therefore, the generalized shift-HSS preconditioner is more efficient and stable in accelerating the convergence of the GMRES iteration method.
In Figures 1–4, we give the convergence histories of the corresponding iteration methods to compare the effects of the corresponding preconditioners with respect to the iteration parameters α and β. It is easily seen that the generalized shift-HSS iteration method has smoother convergence curves than the GMSS, MSS, GSS and SS iteration methods.
Example 5.2. [33,34]. Consider the two-dimensional convection-diffusion equation
with Dirichlet boundary conditions and a constant coefficient q. Similar to the three-dimensional case proposed in [35], the five-point centered finite difference discretization is used for the above equation, and the nonsymmetric saddle point system (1.1) is readily obtained, where
and
Here, h = 1/(l + 1) represents the equidistant step size in each coordinate direction, ⊗ denotes the Kronecker product, and r = qh/2 indicates the mesh Reynolds number.
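Since the displays for A and B are not reproduced here, the sketch below uses one common Kronecker-product construction for this test problem from the shift-splitting literature; the particular scalings of T and F are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion_saddle(l, q):
    """Assumed construction: A = blkdiag(I (x) T + T (x) I, I (x) T + T (x) I),
    B = [I (x) F; F (x) I], with T the centered-difference convection-diffusion
    stencil tridiag(-1 - r, 2, -1 + r)/h^2 and F = tridiag(-1, 1, 0)/h."""
    h = 1.0 / (l + 1)                  # equidistant step size
    r = q * h / 2.0                    # mesh Reynolds number
    I = sp.identity(l, format="csr")
    T = sp.diags([(-1 - r) * np.ones(l - 1), 2.0 * np.ones(l),
                  (-1 + r) * np.ones(l - 1)], [-1, 0, 1], format="csr") / h**2
    F = sp.diags([-np.ones(l - 1), np.ones(l)], [-1, 0], format="csr") / h
    A = sp.block_diag([sp.kron(I, T) + sp.kron(T, I)] * 2, format="csr")
    Bmat = sp.vstack([sp.kron(I, F), sp.kron(F, I)], format="csr")
    return A, Bmat

A2, B2 = convection_diffusion_saddle(l=16, q=10.0)   # A2: 2l^2 x 2l^2, B2: 2l^2 x l^2
```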
In Tables 8–11, in terms of both IT and CPU, we use the SFHSS, DPSS and IDPSS preconditioners to accelerate the GMRES iteration method for different sizes of the discretization grids and different values of q, with α_exp and β = 0.00001. As these tables show, the SFHSS method outperforms the DPSS and IDPSS methods; especially as the problem size increases, the convergence of GMRES with the generalized shift-HSS preconditioner is much faster than that of GMRES with the DPSS and IDPSS preconditioners. Therefore, the generalized shift-HSS preconditioner is more efficient and stable.
6. Conclusions
The novelty of the present paper is the construction and analysis of the generalized shift-HSS iteration method for nonsingular saddle point systems with a nonsymmetric positive definite (1,1)-block. We investigate the convergence of the SFHSS iteration method and illustrate the robustness and efficiency of the generalized shift-HSS preconditioner through numerical examples. Future work should focus on developing modified forms of the GSS iteration method and a generalized shift GSOR-like method for complex symmetric linear systems, and on studying the effects of the iteration parameters on the eigenvalue clustering of the corresponding preconditioned matrices.
Acknowledgments
This research is supported by Scientific and Technological Research Program of Chongqing Research Program of Basic Research and Frontier Technology (cstc2018jcyjAX0685), Xiangsi Lake Young Scholars Innovation Team of Guangxi Minzu University (No.2021RSCXSHQN05) and Natural Science Foundation of Guangxi Minzu University (2021KJQD01).
Conflict of interest
The authors declare that they have no conflict of interest regarding this work.