Research article

An efficient relaxed shift-splitting preconditioner for a class of complex symmetric indefinite linear systems

  • Received: 26 May 2022 Revised: 08 July 2022 Accepted: 19 July 2022 Published: 21 July 2022
  • MSC : 65F10, 65F50, 65W05

  • In this work, by introducing a scalar matrix αI, we equivalently transform the complex symmetric indefinite linear system (W+iT)x=b into a block two-by-two complex system and propose an efficient relaxed shift-splitting (ERSS) preconditioner. Owing to the relaxation technique, the ERSS preconditioner not only retains a computational advantage but is also closer to the original block two-by-two complex coefficient matrix. The eigenvalue distribution of the preconditioned matrix is analysed. An efficient and practical formula for the parameter value α is also derived from the Frobenius norm of the symmetric indefinite matrix T. Numerical examples on a few model problems illustrate the performance of the ERSS preconditioner.

    Citation: Qian Li, Qianqian Yuan, Jianhua Chen. An efficient relaxed shift-splitting preconditioner for a class of complex symmetric indefinite linear systems[J]. AIMS Mathematics, 2022, 7(9): 17123-17132. doi: 10.3934/math.2022942




    Consider the iterative solution of the following complex linear equations of the form

    $\tilde{A}\tilde{x}=\tilde{b}, \quad \tilde{A}\in\mathbb{C}^{n\times n} \ \text{and} \ \tilde{x},\tilde{b}\in\mathbb{C}^{n},$  (1.1)

    where $\tilde{A}=W+iT\in\mathbb{C}^{n\times n}$ is a complex matrix, with $W,T\in\mathbb{R}^{n\times n}$ symmetric, $\tilde{x}=y+iz$, $\tilde{b}=f+ig$, and $i=\sqrt{-1}$ denotes the imaginary unit.

    Complex systems such as (1.1) are important and arise in various scientific computing and engineering applications, such as diffuse optical tomography, structural dynamics [7], optimal control problems for PDEs with various kinds of stationary or time-dependent state equations, e.g., Poisson, convection diffusion, Stokes [2], wave propagation and so on. More details on this class of problems are given in references [1,3,5,14,15].

    When the matrices W and T are symmetric positive semi-definite with at least one of them positive definite, Bai et al. [7,8] proposed the modified Hermitian and skew-Hermitian splitting (MHSS) iteration method and the preconditioned MHSS (PMHSS) iteration method to compute an approximate solution of the complex linear system (1.1); see also [6]. It is proved in [7,8] that the MHSS and PMHSS methods converge unconditionally to the unique solution of (1.1). Moreover, Bai et al. pointed out the h-independent behavior of the preconditioner associated with the PMHSS iteration. To solve the system (1.1) more efficiently, Zheng et al. [24] designed a double-step scale splitting (DSS) iteration method and analyzed its unconditional convergence; furthermore, two reciprocal optimal iteration parameters and the corresponding optimal convergence factor were determined simultaneously. Other effective iteration methods include the Euler preconditioned SHSS iteration method [17] and the double parameter splitting (DPS) iteration method [18].

    However, the matrix T of the complex symmetric linear systems arising in direct frequency domain analysis [10] and in time integration of parabolic partial differential equations [4] is usually symmetric indefinite, so the MHSS, PMHSS and DSS methods may fail to apply, because the coefficient matrices αI+T, αV+T and αW+T can be indefinite or singular. For such problems, multiplying the complex linear system on the left by T, Wu [19] developed the simplified Hermitian normal splitting (SHNS) iteration method. To accelerate the convergence of the SHNS method, Zhang et al. [21] established a preconditioned SHNS (PSHNS) iteration method and constructed a corresponding preconditioner. Although these two iteration methods are unconditionally convergent, they still involve complex arithmetic in each inner iteration, which can result in expensive computational costs. More importantly, computing the optimal parameters of either method is time-consuming, because it first requires the maximum and minimum eigenvalues of some dense matrices.

    In this paper, we focus on the case that W is symmetric positive definite and T is symmetric indefinite. In order to avoid complex arithmetic, the complex linear system (1.1) is often transformed into the following real block two-by-two system [3,25]:

    $\begin{bmatrix} W & -T \\ T & W \end{bmatrix}\begin{bmatrix} y \\ z \end{bmatrix}=\begin{bmatrix} f \\ g \end{bmatrix},$  (1.2)

    This real form can be regarded as a special class of generalized saddle point problems [11]. Based on the relaxed preconditioning technique [12] for generalized saddle point problems, Zhang et al. [22] proposed a block splitting (BS) preconditioner. To overcome the nonzero off-diagonal block becoming unbounded as the relaxation parameter approaches 0, Zhang et al. [23] proposed an improved block (IB) splitting preconditioner. All of these preconditioners are highly close to the coefficient matrix of the real linear system (1.2), but when they are used to accelerate Krylov subspace methods, a linear sub-system with coefficient matrix $\alpha W+T^2$ must be solved in each inner iteration. However, $\alpha W+T^2$ is a dense symmetric positive definite matrix; unlike sparse matrices, dense computations are more expensive, and for some high-dimensional problems the sub-system may be hard to solve.

    Fortunately, the construction of these preconditioners offers some inspiration, although the linear system (1.2) first needs to be modified. We introduce a scalar matrix αI to construct a new complex block two-by-two linear system, and then, based on the shift-splitting preconditioner [13] for saddle point problems, we propose an efficient relaxed shift-splitting (ERSS) preconditioner. This preconditioner not only avoids complex arithmetic but also preserves the sparsity of the matrices W and T. More importantly, the ERSS preconditioner is highly close to the original coefficient matrix of the new complex block two-by-two linear system, and its relaxation parameter is easy to compute.

    The remainder of this work is organized as follows. In Section 2.1, we propose an efficient relaxed shift-splitting (ERSS) preconditioner and the eigenvalue properties of preconditioned matrix are discussed. In Section 2.2, using the scaled norm minimization (SNM) method [20], we derive a practical formula for computing the parameter value α. In Section 3, numerical experiments are presented to show the effectiveness of the ERSS preconditioner. Finally, we end this paper with some conclusions in Section 4.

    In this section, we first build the ERSS preconditioner and derive some spectral properties of the corresponding preconditioned system. Then by using the scaled norm minimization (SNM) method, a practical estimation formula is given to compute the relaxed parameter.

    Firstly, we reconstruct the complex linear systems (1.1) into the following structure by introducing a scalar matrix αI:

    $Ax \equiv \begin{bmatrix} \alpha I & -\alpha I \\ W & iT \end{bmatrix}\begin{bmatrix} \tilde{x} \\ \tilde{x} \end{bmatrix}=\begin{bmatrix} 0 \\ \tilde{b} \end{bmatrix} \equiv b,$  (2.1)

    where α is a positive constant. We regard the system (2.1) as a "saddle point system". Based on the shift-splitting preconditioner [13] for saddle point problems, we propose the following relaxed shift-splitting preconditioner:

    $P_{ERSS}=\begin{bmatrix} I & -I \\ \frac{1}{\alpha}W & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & 0 \\ 0 & \frac{i}{\alpha}T \end{bmatrix}.$  (2.2)

    The difference between $P_{ERSS}$ and $A$ is

    $R_{ERSS}=P_{ERSS}-A=\begin{bmatrix} 0 & \alpha I-\frac{i}{\alpha}T \\ 0 & 0 \end{bmatrix}.$

    Since only the (1,2) block of $R_{ERSS}$ is nonzero, the preconditioner $P_{ERSS}$ is a good approximation to the coefficient matrix $A$, and it becomes easier to analyze the eigenvalue distribution of the preconditioned matrix $P_{ERSS}^{-1}A$.

    In actual implementations, the action of the preconditioned Krylov subspace methods with the preconditioner $P_{ERSS}$ is realized by solving a sequence of generalized residual equations of the form $P_{ERSS}z=r$, where $r=(r_1^T,r_2^T)^T\in\mathbb{C}^{2n}$, with $r_1,r_2\in\mathbb{C}^n$, denotes the current residual vector and $z=(z_1^T,z_2^T)^T\in\mathbb{C}^{2n}$, with $z_1,z_2\in\mathbb{C}^n$, denotes the generalized residual vector, i.e.,

    $\begin{bmatrix} I & 0 \\ \frac{1}{\alpha}W & I \end{bmatrix}\begin{bmatrix} I & -I \\ 0 & \alpha I+\frac{1}{\alpha}W \end{bmatrix}\begin{bmatrix} \alpha I & 0 \\ 0 & \frac{i}{\alpha}T \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix}=\begin{bmatrix} r_1 \\ r_2 \end{bmatrix}.$

    By using this matrix factorization of $P_{ERSS}$, we obtain the following procedure for computing the generalized residual vector $z=(z_1^T,z_2^T)^T$:

    Algorithm 1:

    (1) solve $(\alpha I+\frac{1}{\alpha}W)u_1=r_2-\frac{1}{\alpha}Wr_1$;

    (2) $z_1=\frac{1}{\alpha}(r_1+u_1)$;

    (3) solve $Tu_2=u_1$;

    (4) $z_2=-i\alpha u_2$.

    From Algorithm 1, we see that two linear subsystems with sparse real coefficient matrices $\alpha I+\frac{1}{\alpha}W$ and $T$ need to be solved at steps (1) and (3). Since $\alpha I+\frac{1}{\alpha}W$ is symmetric positive definite and $T$ is symmetric, and both are sparse, these subsystems can be solved effectively by the sparse Cholesky factorization and the LU factorization, respectively.
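As a concrete illustration, Algorithm 1 can be sketched in Python with SciPy's sparse direct solvers. This is a minimal sketch of ours, not code from the paper; the function name `erss_apply` and the choice of `spsolve` for both subsystems are our own illustrative assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def erss_apply(W, T, alpha, r1, r2):
    """Apply the ERSS preconditioner, i.e., solve P_ERSS z = r (Algorithm 1).

    W: sparse symmetric positive definite, T: sparse symmetric indefinite
    (nonsingular), alpha > 0, r1, r2: complex right-hand-side blocks.
    """
    n = W.shape[0]
    I = sparse.identity(n, format="csc")
    # (1) solve (alpha*I + W/alpha) u1 = r2 - (1/alpha) W r1  (SPD system)
    S = (alpha * I + W / alpha).tocsc().astype(np.complex128)
    u1 = spsolve(S, r2 - (W @ r1) / alpha)
    # (2) z1 = (1/alpha) (r1 + u1)
    z1 = (r1 + u1) / alpha
    # (3) solve T u2 = u1  (symmetric indefinite system)
    u2 = spsolve(sparse.csc_matrix(T, dtype=np.complex128), u1)
    # (4) z2 = -i * alpha * u2
    return z1, -1j * alpha * u2
```

Each application of the preconditioner thus costs one sparse SPD solve with $\alpha I+\frac{1}{\alpha}W$ and one sparse symmetric indefinite solve with $T$, in agreement with the operation count stated above.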

    The spectral distribution of the preconditioned matrix relates closely to the convergence rate of Krylov subspace methods [9]. The following result shows the eigenvalue distribution of the preconditioned matrix $P_{ERSS}^{-1}A$.

    Theorem 2.1. Let $W\in\mathbb{R}^{n\times n}$ be symmetric positive definite, $T\in\mathbb{R}^{n\times n}$ be symmetric indefinite and nonsingular, and let $\alpha$ be a positive constant. Then the preconditioned matrix $P_{ERSS}^{-1}A$ has the eigenvalue $1$ with multiplicity $n$, and the remaining $n$ eigenvalues are of the form $\frac{\alpha^2(\omega-i\tau)}{1+\alpha^2\omega}$, where

    $\omega=\frac{\mu^*W^{-1}\mu}{\mu^*\mu}>0, \quad \tau=\frac{\mu^*T^{-1}\mu}{\mu^*\mu}\in\mathbb{R}, \quad \text{for } \mu\in\mathbb{C}^n \ \text{and} \ \mu\neq 0.$

    Proof. The preconditioned matrix can be rewritten as

    $P_{ERSS}^{-1}A=P_{ERSS}^{-1}(P_{ERSS}-R_{ERSS})=I-P_{ERSS}^{-1}R_{ERSS}.$

    From (2.2), we can get

    $P_{ERSS}^{-1}R_{ERSS}=\begin{bmatrix} \frac{1}{\alpha}I & 0 \\ 0 & -i\alpha T^{-1} \end{bmatrix}\begin{bmatrix} I & (\alpha I+\frac{1}{\alpha}W)^{-1} \\ 0 & (\alpha I+\frac{1}{\alpha}W)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -\frac{1}{\alpha}W & I \end{bmatrix}\begin{bmatrix} 0 & \alpha I-\frac{i}{\alpha}T \\ 0 & 0 \end{bmatrix}=\begin{bmatrix} 0 & (\alpha I+\frac{1}{\alpha}W)^{-1}(\alpha I-\frac{i}{\alpha}T) \\ 0 & \Theta \end{bmatrix},$

    where $\Theta=iT^{-1}(\alpha I+\frac{1}{\alpha}W)^{-1}W(\alpha I-\frac{i}{\alpha}T)$. Define $\tilde{\Theta}=(\frac{1}{\alpha}I+\alpha W^{-1})^{-1}(\frac{1}{\alpha}I+i\alpha T^{-1})$; then $\Theta$ is similar to $\tilde{\Theta}$. Assume that $(\tilde{\lambda},\mu)$ is an eigenpair of $\tilde{\Theta}$, i.e., $\tilde{\Theta}\mu=\tilde{\lambda}\mu$, so that

    $\left(\frac{1}{\alpha}I+i\alpha T^{-1}\right)\mu=\tilde{\lambda}\left(\frac{1}{\alpha}I+\alpha W^{-1}\right)\mu.$  (2.3)

    Premultiplying both sides of Eq (2.3) by $\frac{\mu^*}{\mu^*\mu}$, by simple calculations we obtain

    $\tilde{\lambda}=\frac{1+i\alpha^2\tau}{1+\alpha^2\omega}.$

    Hence the eigenvalues of the preconditioned matrix $P_{ERSS}^{-1}A$ are $1$ with multiplicity $n$, and the remaining $n$ eigenvalues are $1-\tilde{\lambda}=\frac{\alpha^2(\omega-i\tau)}{1+\alpha^2\omega}$. $\square$
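The eigenvalue structure stated in Theorem 2.1 can be checked numerically on a small dense example. The matrices below are illustrative choices of ours (assuming NumPy), not test problems from the paper.

```python
import numpy as np

np.random.seed(1)
n = 5
B = np.random.rand(n, n)
W = B @ B.T + n * np.eye(n)                 # symmetric positive definite
T = np.diag([4.0, -3.0, 2.0, -1.0, 5.0])    # symmetric indefinite, nonsingular
alpha, I, Z = 1.2, np.eye(n), np.zeros((n, n))

# A from (2.1) and the factored P_ERSS from (2.2)
A = np.block([[alpha * I, -alpha * I], [W + 0j, 1j * T]])
P = np.block([[I, -I], [W / alpha, alpha * I]]) @ \
    np.block([[alpha * I + 0j, Z], [Z, (1j / alpha) * T]])

eigs = np.linalg.eigvals(np.linalg.solve(P, A))
unit = np.abs(eigs - 1.0) < 1e-6
print(unit.sum())             # n eigenvalues sit at 1
print((eigs.real > 0).all())  # the remaining ones have positive real part
```

The second check reflects the formula of Theorem 2.1: the non-unit eigenvalues have real part $\alpha^2\omega/(1+\alpha^2\omega)>0$.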

    Remark 2.1. Let $W\in\mathbb{R}^{n\times n}$ be symmetric positive definite, $T\in\mathbb{R}^{n\times n}$ be symmetric indefinite, and $\alpha$ be a positive constant. Then the non-unit eigenvalues of the preconditioned matrix $P_{ERSS}^{-1}A$ cluster at $0^+$ if $\alpha$ is close to $0$, while their real parts approach $1$ as $\alpha$ tends to $\infty$.

    Remark 2.2. If $|\tau|_{\max}\leq\omega_{\min}$, then for any $\alpha>0$ all eigenvalues $\lambda$ of $P_{ERSS}^{-1}A$ satisfy $|\lambda-1|<1$, where $|\tau|_{\max}$ is the maximum of the absolute values of the eigenvalues of $T^{-1}$ and $\omega_{\min}$ is the smallest eigenvalue of $W^{-1}$.

    When $P_{ERSS}$ is used as a preconditioner, we expect it to be as close as possible to the coefficient matrix $A$ of the complex linear system (2.1). So we try to derive a practical formula for the parameter $\alpha$ that makes $\|R_{ERSS}\|_F$ as small as possible. Recently, Yang [20] proposed an easily implemented scaled norm minimization (SNM) method, involving several traces of matrices, to compute the parameter values of the Hermitian and skew-Hermitian splitting (HSS) method [9,14,16]. Here, $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. Owing to $\|R_{ERSS}\|_F^2=\mathrm{tr}(R_{ERSS}^*R_{ERSS})$, we first compute

    $R_{ERSS}^*R_{ERSS}=\begin{bmatrix} 0 & 0 \\ 0 & \alpha^2 I+\frac{1}{\alpha^2}T^2 \end{bmatrix}.$

    Because $\mathrm{tr}(A+B)=\mathrm{tr}(A)+\mathrm{tr}(B)$ and $\mathrm{tr}(kA)=k\,\mathrm{tr}(A)$ for any $A,B\in\mathbb{R}^{n\times n}$ and $k\in\mathbb{R}$, it follows that

    $\|R_{ERSS}\|_F^2=\mathrm{tr}\left(\alpha^2 I+\frac{1}{\alpha^2}T^2\right)=\alpha^2 n+\frac{1}{\alpha^2}\|T\|_F^2.$

    It is clear that $\alpha=\sqrt{\|T\|_F}/\sqrt[4]{n}$ minimizes $\|R_{ERSS}\|_F^2$. Obviously, this relaxation parameter $\alpha$ is easy to compute.
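The minimization is elementary: differentiating $\alpha^2 n+\|T\|_F^2/\alpha^2$ with respect to $\alpha$ and setting the derivative to zero gives $\alpha^4=\|T\|_F^2/n$. A quick numerical sanity check, a sketch of ours assuming NumPy:

```python
import numpy as np

def erss_alpha(T):
    """SNM relaxation parameter: alpha = sqrt(||T||_F) / n^(1/4),
    the minimizer of f(alpha) = alpha^2 * n + ||T||_F^2 / alpha^2."""
    n = T.shape[0]
    return np.sqrt(np.linalg.norm(T, "fro")) / n ** 0.25

# compare against a brute-force grid search on a small indefinite T
T = np.diag([5.0, -4.0, 3.0, -2.0, 1.0])
f = lambda a: a**2 * T.shape[0] + np.linalg.norm(T, "fro") ** 2 / a**2
a_star = erss_alpha(T)
grid = np.linspace(0.1, 10.0, 2001)
assert abs(f(a_star) - min(f(a) for a in grid)) < 1e-3
```

In contrast to the eigenvalue-based optimal parameters of the PSHNS and IB preconditioners, this formula costs only one Frobenius norm of the sparse matrix $T$.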

    In this section, we employ two examples to test the performance of the ERSS preconditioner in terms of both iteration count (denoted IT) and computing time (in seconds, denoted CPU). To show the effectiveness of the ERSS preconditioner (2.2), we also test two other preconditioners: the PSHNS preconditioner $P_{PSHNS}$ [21] and the IB preconditioner $P_{IB}$ [23], given by

    $P_{PSHNS}=\frac{1}{2\alpha}(\alpha W+I)(\alpha T+iI), \qquad P_{IB}=\frac{1}{\alpha}\begin{bmatrix} W & -T \\ T & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & 0 \\ 0 & \beta I+W \end{bmatrix}.$

    $P_{PSHNS}$ is used to precondition the complex linear system (1.1), and $P_{IB}$ preconditions the real linear system (1.2).

    In implementations, we use these preconditioners to accelerate the convergence of the generalized minimum residual (GMRES) method. The initial guess $x^{(0)}$ for the preconditioned GMRES method is chosen to be the zero vector, and the iterations are terminated once the current iterate $x^{(k)}$ satisfies

    $\frac{\|b-Ax^{(k)}\|_2}{\|b-Ax^{(0)}\|_2}<10^{-6}.$

    The relaxation parameters used in the PSHNS and IB preconditioners are the experimentally found ones that minimize the number of iteration steps, while the relaxation parameter of the ERSS preconditioner is $\alpha=\sqrt{\|T\|_F}/\sqrt[4]{n}$. In addition, the systems of linear equations involved in the preconditioned GMRES method are solved by direct methods, that is, the Cholesky factorization in combination with the symmetric approximate minimum degree reordering and the LU factorization in combination with the column approximate minimum degree reordering, respectively.

    All experiments are performed in MATLAB (version R2009b) in double precision on a personal computer with a 3.60 GHz central processing unit (Intel(R) Core(TM) i7-4790 CPU), 8.00 GB memory and the Windows 7 operating system.

    Example 3.1. (See [8,23,24,25]) Consider the following complex symmetric linear system:

    $[(\omega C_V+C_H)+i(K-\omega^2 M)]x=b,$

    where M and K are the inertia and stiffness matrices, respectively; CV and CH are the viscous and hysteretic damping matrices, respectively; and ω is the driving circular frequency.

    In our numerical computations, we take $C_H=0.02K$, $\omega=2\pi$, $C_V=12M$, $M=kI$, and $K$ is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0,1]\times[0,1]$ with mesh size $h=\frac{1}{m+1}$. In this case, the matrix $K\in\mathbb{R}^{n\times n}$ possesses the tensor-product form $K=I\otimes V_m+V_m\otimes I$ with $V_m=h^{-2}\,\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}$. Hence, the total number of variables is $n=m^2$. In addition, the right-hand side vector is $\tilde{b}=(1+i)\tilde{A}\,\mathrm{ones}(n,1)$. Furthermore, we normalize the coefficient matrix and the right-hand side by multiplying both by $h^2$.
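For reference, the tensor-product matrix $K$ of this example can be assembled with Kronecker products. The sketch below is ours, assuming SciPy; the function name `laplacian_2d` is our own.

```python
import numpy as np
from scipy import sparse

def laplacian_2d(m):
    """K = I (x) Vm + Vm (x) I with Vm = h^{-2} tridiag(-1, 2, -1), h = 1/(m+1):
    the five-point difference matrix of the negative Laplacian on the unit
    square with homogeneous Dirichlet boundary conditions (n = m^2 unknowns)."""
    h = 1.0 / (m + 1)
    Vm = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    I = sparse.identity(m)
    return (sparse.kron(I, Vm) + sparse.kron(Vm, I)).tocsc()

K = laplacian_2d(8)
print(K.shape)  # (64, 64), i.e., n = m^2
```

The resulting $K$ is sparse and symmetric positive definite, which is what makes the sparse solves in Algorithm 1 cheap.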

    In Table 1, we report results for GMRES preconditioned with the ERSS, PSHNS and IB preconditioners for different mesh sizes $h$ and symmetric positive definite matrices $M$. From these results we observe that, with the theoretical optimal parameter $\alpha$, ERSS performs much better than PSHNS and IB in both iteration steps and CPU times, especially when the mesh size $h$ becomes small. While the iteration counts of the PSHNS and IB preconditioners increase with the problem size, those of the ERSS preconditioner are almost constant. In addition, searching for the optimal parameters of the PSHNS and IB preconditioners is quite time-consuming, especially for the latter, while the calculation of the ERSS parameter is effortless.

    Table 1.  IT and CPU for preconditioned GMRES for Example 3.1.

                       P_ERSS                     P_PSHNS                    P_IB
    m    k        5       10      20         5       10      20        5          10         20
    128  α        2.1135  2.1131  2.1123     90      90      90        (0.5,0.1)  (0.5,0.1)  (0.5,0.1)
         IT       5       5       6          24      23      23        17         17         17
         CPU      0.2978  0.2980  0.3272     0.5483  0.5215  0.5313    0.5156     0.5271     0.5219
    256  α        2.1142  2.1141  2.1139     200     200     230       (0.5,0.1)  (0.5,0.1)  (0.5,0.08)
         IT       5       5       6          34      33      33        23         22         23
         CPU      1.7823  1.7877  1.9229     5.1613  5.0060  5.0521    4.9696     4.7127     4.9145
    512  α        2.1145  2.1145  2.1144     550     550     700       (0.5,0.1)  (0.5,0.1)  (0.5,0.1)
         IT       5       5       6          47      47      47        34         34         37
         CPU      10.5151 10.4153 11.2287    42.4040 41.8526 42.7765   40.1000    40.3012    42.3688

    (For P_IB the parameter row reports the pair (α, β).)


    Example 3.2. (See [23]) Consider the linear system (1.1) of the following form:

    $[(K+(3+\sqrt{3})\tau I_{m^2})+i(K-(3-\sqrt{3})\omega I_{m^2})]\tilde{x}=\tilde{b},$

    where $K=I_m\otimes V_m+V_m\otimes I_m$, $\tau=2\pi^2$, $h=\frac{1}{m+1}$, $n=m^2$, and $V_m=h^{-2}\,\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}$ is a tridiagonal matrix; $\omega=k\pi^2$ is a variable.

    We choose the symmetric positive definite matrix $W=K+(3+\sqrt{3})\tau I_{m^2}$ and the symmetric indefinite matrix $T=K-(3-\sqrt{3})\omega I_{m^2}$; the right-hand side vector is $\tilde{b}=(1+i)\tilde{A}\,\mathrm{ones}(m^2,1)$. Furthermore, we normalize the coefficient matrix and the right-hand side by multiplying both by $h^2$.
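The matrices of this example can be formed in the same Kronecker-product fashion. The sketch below is ours (assuming SciPy), including the $h^2$ normalization, and verifies that $T$ is indeed indefinite; the function name `example_matrices` is our own.

```python
import numpy as np
from scipy import sparse

def example_matrices(m, k):
    """W = K + (3+sqrt(3))*tau*I and T = K - (3-sqrt(3))*omega*I with
    tau = 2*pi^2 and omega = k*pi^2, both normalized by h^2 (our sketch)."""
    h = 1.0 / (m + 1)
    Vm = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    Im = sparse.identity(m)
    K = sparse.kron(Im, Vm) + sparse.kron(Vm, Im)
    I = sparse.identity(m * m)
    tau, omega = 2 * np.pi**2, k * np.pi**2
    W = ((K + (3 + np.sqrt(3)) * tau * I) * h**2).tocsc()
    T = ((K - (3 - np.sqrt(3)) * omega * I) * h**2).tocsc()
    return W, T

W, T = example_matrices(32, 10)
ev = np.linalg.eigvalsh(T.toarray())
print(ev.min() < 0 < ev.max())  # True: T is symmetric indefinite
```

Since the smallest eigenvalue of $K$ is close to $2\pi^2$ while $(3-\sqrt{3})\omega=(3-\sqrt{3})k\pi^2$ exceeds it for the tested values of $k$, the shift makes $T$ indefinite, which is exactly the setting of this paper.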

    In Table 2, we list results for GMRES preconditioned with ERSS, PSHNS and IB for different mesh sizes $h$ and variable $k$. Note that the parameter values of the PSHNS and IB preconditioners are the experimentally found optimal ones, whose search is time-consuming. Although the iteration counts and CPU times of the ERSS preconditioner exceed those of IB as the mesh size $h$ decreases, the difference is acceptable. As a consequence, although the number of iteration steps of the ERSS method increases slightly under mesh refinement, it remains strongly competitive owing to its fast parameter calculation.

    Table 2.  IT and CPU for preconditioned GMRES for Example 3.2.

                       P_ERSS                     P_PSHNS                    P_IB
    m    k        5       10      20         5       10      20        5             10            20
    128  α        2.1136  2.1134  2.1132     9       9       9         (2.5,0.001)   (2.5,0.001)   (3,0.001)
         IT       11      13      13         37      37      37        9             9             8
         CPU      0.4396  0.4937  0.4913     0.7752  0.7757  0.7870    0.3763        0.3764        0.3714
    256  α        2.1142  2.1142  2.1142     25      25      25        (2,0.0004)    (2,0.0004)    (2.4,0.0004)
         IT       10      12      13         54      54      53        9             9             9
         CPU      2.4741  2.7851  2.9757     8.8896  9.0279  9.0138    2.3160        2.3157        2.3864
    512  α        2.1145  2.1145  2.1145     60      60      60        (2.5,0.0001)  (2.5,0.0001)  (2.5,0.0001)
         IT       10      12      13         79      79      78        9             9             9
         CPU      14.0046 15.8918 17.0173    89.3188 90.6522 89.2453   14.8628       14.3777       14.6812

    (For P_IB the parameter row reports the pair (α, β).)


    Finally, we present in Table 3 the experimental optimal results for the ERSS-preconditioned GMRES method, obtained by minimizing the number of iterations for the different test examples and variables. The experimental optimal parameters in Table 3 are broadly consistent with the theoretical optimal parameter $\alpha=\sqrt{\|T\|_F}/\sqrt[4]{n}$ used in Tables 1 and 2. Although the iteration counts with the theoretical parameter are slightly larger than the experimentally optimal ones in Table 2, this small difference is acceptable. Table 3 demonstrates that the ERSS-preconditioned GMRES method is efficient and stable when the relaxation parameter is chosen as the theoretical optimal $\alpha$.

    Table 3.  The experimental optimal results for the ERSS-preconditioned GMRES method, obtained by minimizing iteration steps.

                    Example 3.1                Example 3.2
    m    k        5        10       20       5        10       20
    128  α_exp    2        2        2        7.4      6.2      4
         IT       5        5        6        8        9        11
         CPU      0.2896   0.2878   0.3159   0.3744   0.4020   0.4564
    256  α_exp    2        2        2        7        8.6      4
         IT       5        5        6        8        9        11
         CPU      1.7275   1.7325   1.8809   2.1710   2.3576   2.6778
    512  α_exp    2        2        2        6.5      4.6      3.6
         IT       5        5        6        8        10       11
         CPU      10.1367  10.1427  10.9061  11.4431  12.3632  15.3991


    Eigenvalue distributions (on a 48×48 grid) of the three preconditioned matrices are plotted in Figures 1 and 2 for different values of $k$. It is evident that the ERSS-preconditioned matrix has a well-clustered spectrum around 1, bounded away from zero, especially in Example 3.1.

    Figure 1.  Eigenvalue distributions of three preconditioned matrices for Example 3.1 (m = 48, k = 5).
    Figure 2.  Eigenvalue distributions of three preconditioned matrices for Example 3.2 (m = 48, k = 10).

    To solve a class of complex linear systems (1.1), an efficient relaxed shift-splitting (ERSS) preconditioner is proposed in this paper by introducing a scalar matrix αI. The new preconditioner not only remains computationally cheap but is also closer to the original block two-by-two complex coefficient matrix (2.1). Theoretical analysis proves that the preconditioned matrix has a well-clustered eigenvalue distribution for a reasonable choice of the relaxation parameter. More importantly, an efficient and practical formula for the relaxation parameter $\alpha$ is derived from the dimension and the Frobenius norm of the matrix $T$. Numerical experiments illustrate that the presented preconditioner is feasible and effective compared with other existing block preconditioners.

    This work is supported by the Teacher Education Curriculum Reform research project of Henan Province (No. 2021JSJYYB124), the Xinyang City Federation of Social Science planning project (No. 2021JY063), and a Xinyang University scientific research project (No. 2022-XJLYB-002).

    The authors declare no conflict of interest.



    [1] O. Axelsson, Optimality properties of a square block matrix preconditioner with applications, Comput. Math. Appl., 80 (2020), 286–294. https://doi.org/10.1016/j.camwa.2019.09.024 doi: 10.1016/j.camwa.2019.09.024
    [2] O. Axelsson, S. Farouq, M. Neytcheva, Comparison of preconditioned Krylov subspace iteration methods for PDE-constrained optimization problems, Numer. Algorithms, 73 (2016), 631–663. https://doi.org/10.1007/s11075-016-0111-1 doi: 10.1007/s11075-016-0111-1
    [3] O. Axelsson, J. Karátson, Superior properties of the PRESB preconditioner for operators on two-by-two block form with square blocks, Numer. Math., 146 (2020), 335–368. https://doi.org/10.1007/s00211-020-01143-x doi: 10.1007/s00211-020-01143-x
    [4] O. Axelsson, A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebr. Appl., 7 (2000), 197–218. doi: 10.1002/1099-1506(200005)7:4<197::AID-NLA194>3.0.CO;2-S
    [5] O. Axelsson, M. Neytcheva, B. Ahmad, A comparison of iterative methods to solve complex valued linear algebraic systems, Numer. Algorithms, 66 (2014), 811–841. https://doi.org/10.1007/s11075-013-9764-1 doi: 10.1007/s11075-013-9764-1
    [6] Z. Z. Bai, On preconditioned iteration methods for complex linear systems, J. Eng. Math., 93 (2015), 41–60. https://doi.org/10.1007/s10665-013-9670-5 doi: 10.1007/s10665-013-9670-5
    [7] Z. Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 87 (2010), 93–111. https://doi.org/10.1007/s00607-010-0077-0 doi: 10.1007/s00607-010-0077-0
    [8] Z. Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 56 (2011), 297–317. https://doi.org/10.1007/s11075-010-9441-6 doi: 10.1007/s11075-010-9441-6
    [9] Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. A., 24 (2003), 603–626. https://doi.org/10.1137/S0895479801395458 doi: 10.1137/S0895479801395458
    [10] M. Benzi, D. Bertaccini, Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal., 28 (2008), 598–618. https://doi.org/10.1093/imanum/drm039 doi: 10.1093/imanum/drm039
    [11] M. Benzi, G. H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer., 14 (2005), 1–137. https://doi.org/10.1017/S0962492904000212 doi: 10.1017/S0962492904000212
    [12] M. Benzi, M. K. Ng, Q. Niu, Z. Wang, A relaxed dimensional factorization preconditioner for the incompressible Navier-Stokes equations, J. Comput. Phys., 230 (2011), 6185–6202. https://doi.org/10.1016/j.jcp.2011.04.001 doi: 10.1016/j.jcp.2011.04.001
    [13] Y. Cao, J. Du, Q. Niu, Shift-splitting preconditioners for saddle point problems, J. Comput. Appl. Math., 272 (2014), 239–250. https://doi.org/10.1016/j.cam.2014.05.017 doi: 10.1016/j.cam.2014.05.017
    [14] F. Chen, T. Y. Li, K. Y. Lu, G. V. Muratova, Modified QHSS iteration methods for a class of complex symmetric linear systems, Appl. Numer. Math., 164 (2021), 3–14. https://doi.org/10.1016/j.apnum.2020.01.018 doi: 10.1016/j.apnum.2020.01.018
    [15] V. E. Howle, S. A.Vavasis, An iterative method for solving complex-symmetric systems arising in electrical power modeling, SIAM J. Matrix Anal. A., 26 (2005), 1150–1178. https://doi.org/10.1137/S0895479800370871 doi: 10.1137/S0895479800370871
    [16] Y. M. Huang, A practical formula for computing optimal parameters in the HSS iteration methods, J. Comput. Appl. Math., 255 (2014), 142–149. https://doi.org/10.1016/j.cam.2013.01.023 doi: 10.1016/j.cam.2013.01.023
    [17] C. L. Li, C. F. Ma, On Euler preconditioned SHSS iterative method for a class of complex symmetric linear systems, ESAIM-Math. Model. Num., 53 (2019), 1607–1627. https://doi.org/10.1051/m2an/2019029 doi: 10.1051/m2an/2019029
    [18] A. Shirilord, M. Dehghan, Double parameter splitting (DPS) iteration method for solving complex symmetric linear systems, Appl. Numer. Math., 171 (2022), 176–192. https://doi.org/10.1016/j.apnum.2021.08.010 doi: 10.1016/j.apnum.2021.08.010
    [19] S. L. Wu, Several splittings of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems, Numer. Linear Algebr., 22 (2015), 338–356. https://doi.org/10.1002/nla.1952 doi: 10.1002/nla.1952
    [20] A. L. Yang, Scaled norm minimization method for computing the parameters of the HSS and the two-parameter HSS preconditioners, Numer. Linear Algebr., 25 (2018), e2169. https://doi.org/10.1002/nla.2169 doi: 10.1002/nla.2169
    [21] J. H. Zhang, H. Dai, A new splitting preconditioner for the iterative solution of complex symmetric indefinite linear systems, Appl. Math. Lett., 49 (2015), 100–106. https://doi.org/10.1016/j.aml.2015.05.006 doi: 10.1016/j.aml.2015.05.006
    [22] J. H. Zhang, H. Dai, A new block preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 74 (2017), 889–903. https://doi.org/10.1007/s11075-016-0175-y doi: 10.1007/s11075-016-0175-y
    [23] J. L. Zhang, H. T. Fan, C. Q. Gu, An improved block splitting preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 72 (2018), 451–478. https://doi.org/10.1007/s11075-017-0323-z doi: 10.1007/s11075-017-0323-z
    [24] Z. Zheng, F. L. Huang, Y. C. Peng, Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett., 73 (2017), 91–97. https://doi.org/10.1016/j.aml.2017.04.017 doi: 10.1016/j.aml.2017.04.017
    [25] Z. Zheng, M. L. Zeng, G. F. Zhang, A variant of PMHSS iteration method for a class of complex symmetric indefinite linear systems, Numer. Algorithms, 2022, 1–18. https://doi.org/10.1007/s11075-022-01262-6
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
